arXiv:2301.00092v1 [stat.ML] 31 Dec 2022

Inference on Time Series Nonparametric Conditional Moment Restrictions Using General Sieves

Xiaohong Chen∗, Yuan Liao†, Weichen Wang‡

First draft: September 2020; revised January 3, 2023

Abstract

General nonlinear sieve learning spaces are classes of nonlinear sieves that can approximate nonlinear functions of high dimensional variables much more flexibly than various linear sieves (or series). This paper considers general nonlinear sieve quasi-likelihood ratio (GN-QLR) based inference on expectation functionals of time series data, where the functionals of interest are based on some nonparametric function that satisfies conditional moment restrictions and is learned using multilayer neural networks. While the asymptotic normality of the estimated functionals depends on some unknown Riesz representer of the functional space, we show that the optimally weighted GN-QLR statistic is asymptotically chi-square distributed, regardless of whether the expectation functional is regular (root-n estimable) or not.
This holds when the data are weakly dependent and satisfy a beta-mixing condition. We apply our method to off-policy evaluation in reinforcement learning, by formulating the Bellman equation as a conditional moment restriction, so that we can make inference about the state-specific value functional using the proposed GN-QLR method with time series data. In addition, estimation of the averaged partial means and averaged partial derivatives of nonparametric instrumental variables (NPIV) and quantile IV models is also presented as a leading example. Finally, a Monte Carlo study shows the finite sample performance of the procedure.

∗Cowles Foundation for Research in Economics, Yale University, New Haven, CT 06520, USA. xiaohong.chen@yale.edu.
†Department of Economics, Rutgers University, New Brunswick, NJ 08901, USA.
yuan.liao@rutgers.edu.
‡Faculty of Business and Economics, The University of Hong Kong, Pokfulam Road, Hong Kong. weichenw@hku.hk.

1 Introduction

Consider a conditional moment restriction model

E[ρ(Yt+1, α0) | σt(X)] = 0,   (1.1)

where ρ is a scalar residual function; α0 = (θ0, h0) contains a finite dimensional parameter θ0 and an infinite dimensional parameter h0, which may depend on some endogenous variables Wt. The conditioning filtration σt(X) is the sigma-algebra generated by the variables {Xs : s ≤ t}, where Xs is a vector of multivariate (finite dimensional) exogenous variables, including all relevant lagged values of Yt and other instrumental variables. The model therefore allows for endogenous variables and weakly dependent data.
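To fix ideas, model (1.1) can be illustrated in a simulated NPIV-style special case with ρ(Yt+1, α) = Yt+1 − h(Wt). Everything below (the function h0, the data-generating process, the variable names) is an invented toy, not the paper's design; it only checks the implication that the residual evaluated at the truth is uncorrelated with any function of the instrument.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.normal(size=n)                     # exogenous instrument X_t
v = rng.normal(size=n)
w = 0.8 * x + 0.6 * v                      # endogenous regressor W_t
eps = 0.7 * v + 0.7 * rng.normal(size=n)   # correlated with W_t, mean zero given X_t
y = np.sin(w) + eps                        # structural equation, with h0(w) = sin(w)

rho = y - np.sin(w)                        # residual rho(Y_{t+1}, alpha0) at the truth
# E[rho | X_t] = 0 implies E[rho * g(X_t)] = 0 for any transform g of the instrument
print(round(float(np.mean(rho * x)), 3), round(float(np.mean(rho * x**2)), 3))
```

Replacing sin with an unknown h and minimizing a weighted quadratic form in such projected moments is what the estimation procedure of Section 2 formalizes.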
This paper considers optimal estimation and inference for linear functionals φ(α0) of the parameter α0, which contains the infinite dimensional component h0. The functional may be either known or unknown. When it is unknown, it is assumed to take the form

φ(α0) = E l(h0(Wt)),

where l is a known linear function and h0(Wt) is the nonparametric function of the endogenous variables. We use general nonlinear sieve learning spaces, whose complexity grows with the sample size, to estimate the infinite dimensional parameter; leading examples are multilayer neural networks and Gaussian radial bases. The motivation for using a general nonlinear sieve learning space, besides being adaptive to high dimensional covariates, is that it allows unbounded supports of the covariates. This is particularly desirable for models of dependent time series data, such as nonlinear autoregressive models. We formally establish inferential theories for these functionals learned over the general nonlinear sieve learning space, and conduct inference using quasi-likelihood ratio (QLR) statistics based on the optimally weighted minimum distance criterion.
Of particular interest is the estimation of an expectation functional, such as averaged partial means, weighted average derivatives, and averaged squared partial derivatives, of a nonparametric conditional moment restriction via general nonlinear sieves. An important insight from our main theory is that the asymptotic distribution does not depend on the actual choice of the learning space, but is determined only by the functional and the loss function. Therefore, estimators produced by deep neural networks, Gaussian radial bases, or other nonlinear sieve learning bases all have the same asymptotic distribution. In general, machine learning inference often relies on sample splitting and cross-fitting, which do not work well in the time series setting. We propose a new, efficient time series inference based on the optimal quasi-likelihood ratio test, without requiring cross-fitting.
It is shown that the optimally weighted QLR statistic, based on the general nonlinear sieve learning of h0(·), is asymptotically chi-square distributed regardless of whether the information bound for the expectation functional is singular or not, so confidence sets can be constructed without the need to compute standard errors. We present a Monte Carlo study to illustrate the finite sample performance of our inference procedure.

Depending on the specific application, our model may involve a Fredholm integral equation of either the first kind (NPIV and NPQIV) or the second kind (Bellman equations). In the former case, it is well known that estimating h0 is an ill-posed problem and the rate of convergence might be slow. In the latter case, the problem can be well-posed.
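Since the statistic is asymptotically chi-square, a confidence set is obtained by inverting the profiled QLR curve at a chi-square critical value. The sketch below uses a made-up quadratic QLR profile purely to show the inversion step; 3.841 is the 0.95 quantile of the chi-square distribution with one degree of freedom.

```python
import numpy as np

CHI2_1_CRIT_95 = 3.841  # 0.95 quantile of chi-square with 1 degree of freedom

def qlr_confidence_set(values, qlr):
    """Keep the candidate functional values whose profiled QLR statistic stays
    below the chi-square critical value -- no standard errors are computed."""
    return [v for v, q in zip(values, qlr) if q <= CHI2_1_CRIT_95]

# toy profiled QLR curve, quadratic in the candidate value and minimized at 1.0
grid = np.linspace(0.0, 2.0, 201)
qlr = 25.0 * (grid - 1.0) ** 2
cs = qlr_confidence_set(grid, qlr)
print(round(min(cs), 2), round(max(cs), 2))  # an interval around 1.0
```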
As one of the leading examples of a Fredholm integral equation of the second kind, we show that our framework implies a natural neural network-based inference in the context of reinforcement learning (RL), a popular learning device behind many successful applications of artificial intelligence such as AlphaGo, video games, robotics, and autonomous driving (Sutton and Barto, 2018; Silver et al., 2016; Vinyals et al., 2019; Shalev-Shwartz et al., 2016). Due to the dynamics of the RL model, theoretical analysis of reinforcement learning naturally requires explicitly allowing for time series dependence among the observed data. Earlier theoretical studies focused on settings where the value function is approximated by linear functions.
More recent developments on nonlinear learning spaces include Farahmand et al. (2016); Geist et al. (2019); Fan et al. (2020); Duan et al. (2021); Long et al. (2021); Chen and Qi (2022); Shi et al. (2020), among others.
Our innovation lies in making inference about functionals (such as the value functional for specific states) of the Q-function using general nonlinear sieve learning spaces. While reinforcement learning is based on the well known Bellman equation, that equation can be formulated as a conditional moment restriction model with time series data. Therefore, one can apply the GN-QLR inference to estimating the state-specific value function in the setting of off-policy evaluation. These applications are potentially useful for dynamic causal inference.

In the i.i.d. case, existing theoretical work on neural networks has focused on deriving approximation theories and optimal rates of convergence for estimation.
Theoretically, deep learning has been shown to be able to approximate a broad class of highly nonlinear functions; see, e.g., Mhaskar et al. (2016); Rolnick and Tegmark (2017); Lin et al. (2017); Shen et al. (2021); Hsu et al. (2021); Schmidt-Hieber (2020).
Yang and Barron (1999) obtained the minimax L2 rate of convergence for neural network models. Recently, Chen, Chen and Tamer (2021) considered NN efficient estimation of the (weighted) average derivatives in an NPIV model for i.i.d. data, and presented consistent variance estimation. In contrast, using a general theory of Riesz representations, we derive the asymptotic distribution of the finite dimensional parameter θ0 and of functionals of the infinite dimensional parameter h0 that is learned from the general learning space. The uncertainty of the general nonlinear sieve learning estimator plays an essential role in the asymptotic distributions. Chernozhukov et al. (2018a,b,c) proposed double machine learning and debiasing methods to achieve valid inference; Dikkala et al. (2020) studied a minimax criterion function for the unknown function approximated by neural networks in NPIV models. In addition, the Riesz representation plays a central role in our inferential theory. See Newey (1994); Shen (1997); Chen and Shen (1998); Chernozhukov et al. (2020) for related approaches.

In the time series setting, neural networks have been applied to economic demand estimation as in Chen and Ludvigson (2009), and are widely applicable in financial asset pricing, as in Guijarro-Ordonez et al. (2021); Gu et al. (2020); Bali et al. (2021). These papers approximate unknown functions by neural networks, but without rigorous theoretical justifications. All these models can be formulated as an inference problem for conditional moment restrictions.

The rest of the paper is organized as follows. Section 2 introduces the model, the NN sieve space, and the estimation and inference procedures. Section 3 establishes the convergence rate of the NN sieve estimator of the unknown function satisfying the conditional moment restrictions with weakly dependent data.
Section 4 provides the limiting distribution of the estimator of functionals that can be regular or irregular. Section 5 shows that the NN sieve QLR statistic is asymptotically chi-square distributed for both regular and irregular functionals with time series data. In Section 6 we apply our approach to the estimation of the value function in RL and the weighted average derivatives of NPIV and NPQIV as leading examples. Section 7 contains simulation studies and Section 8 briefly concludes.

2 The model

2.1 The general sieve learning space

This paper studies inference with a general nonlinear sieve learning space. The unknown function is estimated over a learning space, denoted by Hn, which is a general approximation space consisting of either linear or nonlinear sieves, provided that the function of interest can be approximated well by the learning space.
The popular feedforward neural network (NN) is one of the leading examples that fits into this context. Many theoretical studies have shown that NNs can approximate a broad class of functions well and achieve nice statistical properties. The multilayer feedforward NN composites functions of the form:

h(x) = θ_{J+1} h_J(x),   h_j(x) = σ(θ_j h_{j−1}(x)) for j = 1, …, J,   h_0(x) = x,

where the parameters θ = (θ_1, …, θ_{J+1}) with θ_j ∈ R^{d_j×d_{j−1}}, h_j(x) ∈ R^{d_j}, and σ is an elementwise nonlinear activation function, usually the same across components and layers. One of the most popular activation functions is the ReLU, defined as σ(x) = max(0, x). The number of neurons used in layer j, denoted by d_j, is called the width of that layer. We could also use other nonlinear approximation learning spaces, which use nonlinear combinations of inputs and neurons.
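The composition above is straightforward to write out. The sketch below mirrors the formulation in the display (linear indices θ_j h and no bias terms), with widths and weight values chosen arbitrarily for illustration; it is not the estimator used in the paper.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def feedforward(x, weights):
    """Evaluate h(x) = theta_{J+1} h_J(x) with h_j(x) = relu(theta_j h_{j-1}(x))
    and h_0(x) = x; `weights` is [theta_1, ..., theta_{J+1}], theta_j of shape
    (d_j, d_{j-1}) where d_j is the width of layer j."""
    h = x
    for theta in weights[:-1]:
        h = relu(theta @ h)       # hidden layer: linear index, then activation
    return weights[-1] @ h        # output layer is linear

rng = np.random.default_rng(0)
dims = [3, 16, 16, 1]             # input dim 3, two hidden layers of width 16, scalar output
weights = [rng.normal(scale=0.5, size=(dims[i + 1], dims[i]))
           for i in range(len(dims) - 1)]
out = feedforward(rng.normal(size=3), weights)
print(out.shape)  # (1,)
```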
One such example is the space spanned by Gaussian radial bases, built from multilayer compositions of functions of the form:

h(x) = α_0 + Σ_{j=1}^{J} α_j G(σ_j^{−1} ∥x − γ_j∥),   α_0, α_j ∈ R, σ_j > 0,

where γ_j lies in the same space as x and G is the standard normal density function. A key feature is that here the inputs and neurons (e.g., a vector x) are "nonlinearly combined" through ∥x − γ_j∥, whereas they are linearly combined through the indices θ_j x in ordinary neural networks. Additional examples of nonlinear sieves include spline and wavelet sieves. They are very flexible and enjoy better approximation properties than linear sieves.

One of the key motivations for using a general nonlinear sieve learning space, besides being adaptive to high dimensional covariates, is that it allows unbounded supports of the input covariates. This is particularly desirable for time series models with dependent data, such as nonlinear autoregressive models.
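A single layer of this construction can be sketched as follows; the parameter values below are arbitrary illustrations, and γ_j is represented as a vector matching the input so that ∥x − γ_j∥ is well defined.

```python
import numpy as np

def gaussian_rbf(x, alpha0, alphas, gammas, sigmas):
    """One layer of the Gaussian radial basis sieve:
    h(x) = alpha_0 + sum_j alpha_j * G(||x - gamma_j|| / sigma_j),
    with G the standard normal density; inputs enter nonlinearly via ||x - gamma_j||."""
    G = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    dists = np.linalg.norm(x[None, :] - gammas, axis=1)  # ||x - gamma_j||, one per neuron
    return alpha0 + np.sum(alphas * G(dists / sigmas))

rng = np.random.default_rng(0)
J, d = 8, 2                       # J neurons on a d-dimensional input
out = gaussian_rbf(rng.normal(size=d), 0.1, rng.normal(size=J),
                   rng.normal(size=(J, d)), np.ones(J))
print(float(out))
```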
2.2 Semiparametric learning

We shall assume a finite-order Markov property: for some known and fixed integer r ≥ 1, let Xt := (Xt, …, Xt−r) for all t = 1, …, n, and define

m(Xt, α) = E[ρ(Yt+1, α)|σt(X)],   Σ(Xt) = Var(ρ(Yt+1, α0)|σt(X)),

where we assume that E[ρ(Yt+1, α)|σt(X)] and Var(ρ(Yt+1, α0)|σt(X)) depend only on (Xt, …, Xt−r) for all α. The model is then equivalent to Q(α0) = 0, where

Q(α) = E[m(Xt, α)^2 Σ(Xt)^{−1}].

Here we use the optimal weighting function Σ(Xt)^{−1}.
Suppose there are nonparametric estimators m̂(x, α) and Σ̂(Xt) for m(Xt, α) and Σ(Xt); we then define the sample criterion function

Qn(α) = (1/n) Σ_{t=1}^n m̂(Xt, α)^2 Σ̂(Xt)^{−1}.

The estimated optimal weighting matrix is needed for the quasi-likelihood inference. In practice, one can start with the identity weighting function to obtain an initial estimator of α0, use it to estimate Σ(Xt), and then update the estimator using the estimated optimal weighting matrix.

We focus on the general nonlinear sieve learning approximation to the true nonparametric function, and restrict estimation to the space An := Θ × Hn. Here Θ is a compact set serving as the parameter space for θ0, while compactness is not required for Hn. In addition, let Pen(h) denote a functional penalty for the infinite dimensional parameter.
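The two-step weighting just described can be sketched in a toy scalar special case (residual ρ(Yt+1, a) = Yt − a·Xt, with a small power-series basis standing in for the projection sieve; the data-generating process, grid, and clipping floor are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
sigma2 = 1.0 + x**2                              # true conditional variance Sigma(X_t)
y = 2.0 * x + np.sqrt(sigma2) * rng.normal(size=n)

basis = np.column_stack([np.ones(n), x, x**2])   # small linear sieve for the projection

def Qn(a, w_inv):
    """Project rho(., a) = Y - a*X on the sieve to get m_hat, then return the
    weighted sample criterion (1/n) * sum_t m_hat(X_t, a)^2 * w_inv(X_t)."""
    rho = y - a * x
    coef, *_ = np.linalg.lstsq(basis, rho, rcond=None)
    m = basis @ coef
    return np.mean(m**2 * w_inv)

grid = np.linspace(1.5, 2.5, 1001)
# step 1: identity weighting gives an initial estimator of a
a1 = grid[np.argmin([Qn(a, 1.0) for a in grid])]
# step 2: estimate Sigma(X_t) from squared residuals, then re-minimize
vcoef, *_ = np.linalg.lstsq(basis, (y - a1 * x) ** 2, rcond=None)
sigma2_hat = np.clip(basis @ vcoef, 0.1, None)   # keep the estimated variance positive
a2 = grid[np.argmin([Qn(a, 1.0 / sigma2_hat) for a in grid])]
print(round(a1, 2), round(a2, 2))                # both near the true value 2.0
```

In the paper's setting a grid search over a parametric family is replaced by optimization over the nonlinear sieve space Hn; the weighting logic is unchanged.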
We then define the estimator α̂ = (θ̂, ĥ) ∈ An as an approximate minimizer of the penalized loss function restricted to the general nonlinear sieve learning space:

Qn(α̂) + λn Pen(ĥ) ≤ inf_{α∈An} [Qn(α) + λn Pen(h)] + oP(n^{-1}).

The tuning parameter λn is chosen to decay relatively fast, so that the penalization Pen(·) does not have a first-order impact on the asymptotic theory. Nevertheless, the functional penalization is imposed to overcome undesirable properties associated with estimates based on a large parameter space; essentially, it plays the role of forcing the optimization to be carried out within a weakly compact set (Shen, 1997).

The functions (x, α) ↦ m̂(x, α) and x ↦ Σ̂(x) are nonparametric estimators of (x, α) ↦ m(x, α) and x ↦ Σ(x) (a positive definite weighting matrix), respectively. The projection m(Xt, α) can also be estimated using linear sieves:

m̂(·, α) = arg min_{m∈Dn} ∑_{t=1}^n [ρ(Yt+1, α) − m(Xt)]^2,

where the linear sieve space is as follows: letting {Ψj : j = 1, ..., kn} denote a set of sieve bases,

Dn := { g(x) = ∑_{j=1}^{kn} πj Ψj(x) : ∥g∥∞,ω < ∞, πj ∈ R }.

So we use the general nonlinear sieve learning space Hn to approximate the function space for h0, and a linear sieve space Dn to approximate the instrumental space, which is computationally easier to implement than nonlinear sieve approximations to the instrumental space. A more important motivation for using a linear sieve space to estimate the conditional mean function E[ρ(Yt+1, α) | σt(X)] is that the sample loss function Qn(α) can then be shown to admit a local quadratic approximation (LQA): for some Bn = OP(1) and Zn →d N(0, 1),

Qn(α + x un) − Qn(α) = Bn x^2 + 2x [n^{-1/2} Zn + ⟨un, α − α0⟩] + oP(n^{-1})   (2.1)

uniformly for all α in a shrinking neighborhood of α0 and |x| ≤ C n^{-1/2}; here ⟨un, α − α0⟩ is an inner product between α − α0 and a function un, to be defined explicitly later. This LQA plays a fundamental role in the inferential theory of semiparametric inference using general nonlinear sieve learning methods.
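The quadratic behavior in (2.1) can be checked numerically in a simple design where ρ is linear in α, so that the sieve-projected criterion is exactly quadratic along α0 + x; the simulated data and basis below are illustrative assumptions, and in this linear case the LQA remainder is zero up to floating-point error.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
# Hypothetical design: alpha0 = 1.0, rho(Y, alpha) = Y - alpha*W, instrument X.
X = rng.normal(size=n)
W = X + 0.3 * rng.normal(size=n)
Y = 1.0 * W + rng.normal(size=n)
Psi = np.column_stack([np.ones(n), X])  # linear sieve basis Dn

def Qn(alpha):
    """Criterion with identity weighting: mean of the squared sieve projection."""
    rho = Y - alpha * W
    b, *_ = np.linalg.lstsq(Psi, rho, rcond=None)
    return np.mean((Psi @ b) ** 2)

# Evaluate Qn along alpha0 + x for |x| of order n^{-1/2} and fit a quadratic in x.
xs = np.linspace(-2, 2, 9) / np.sqrt(n)
qs = np.array([Qn(1.0 + x) for x in xs])
coef = np.polyfit(xs, qs, 2)            # leading coefficient plays the role of Bn
resid = qs - np.polyval(coef, xs)       # deviation from an exact quadratic
print(coef[0] > 0, np.max(np.abs(resid)))
```

With a nonlinear ρ the quadratic would hold only approximately in a shrinking neighborhood, which is exactly what the oP(n^{-1}) remainder in (2.1) expresses.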
2.3 Semiparametric efficient estimation

Let the parameter space of the true function be H0 and let A0 = Θ × H0. We are interested in inference on φ(α0), where φ : A0 → R can be a known functional of α0. We also study the inference problem for unknown functionals of the form

φ(α0) = E[l(h0(Wt))],

where l(·) is a known function. While the naive plug-in estimator (1/n) ∑_{t=1}^n l(ĥ(Wt)) is also asymptotically normal, it is not semiparametrically efficient when the model contains endogenous variables. An important example of φ(α0) is the weighted average derivative of the nonparametric instrumental variables (NPIV) regression, defined as

φ(α0) = E[Ω(Wt)′ ∇h0(Wt)],

where Ω(·) is a known positive weight function and ∇h0 denotes the gradient of the nonparametric regression function h0. As documented by Ai and Chen (2012), the simple plug-in estimator is not an efficient estimator.
To obtain a more efficient estimator, consider at the population level the conditional (given Xt) projection of l(h0(Wt)) onto ρ(Yt+1, α0); the functional of interest can then also be represented as φ(α0) with the functional

φ(α) = E[l(h(Wt)) − Γ0(Xt) ρ(Yt+1, α)],   (2.2)

where Γ0(Xt) = E[l(h0(Wt)) ρ(Yt+1, α0) | σt(X)] Σ(Xt)^{-1} is the projection coefficient. We shall obtain an efficient estimator of φ(α0) based on this expectation expression. It is worth noting that the added term Γ0(Xt)ρ(Yt+1, α0) matters only in the presence of endogenous regressors. In purely exogenous models where Wt = Xt, we have Γ0(Xt) = 0, and the moment condition (2.2) reduces to the original one, φ(α) = E[l(h(Wt))]. Let

φ̂(α) = (1/n) ∑_{t=1}^n [l(h(Wt)) − Γ̂t ρ(Yt+1, α)]   (2.3)

for some estimator Γ̂t to be defined later. Then we estimate the functional by φ̂(α̂). Asymptotically, we shall show that

φ̂(α̂) − φ(α0) = [φ(α̂) − φ(α0)] + (1/n) ∑_{t=1}^n [Wt − E Wt] + oP(σ n^{-1/2}),   (2.4)

where Wt = l(h0(Wt)) − Γ0(Xt)ρ(Yt+1, α0) and σ^2 is the asymptotic variance. It is clear that the asymptotic distribution arises from two sources of uncertainty and, importantly, that the nonparametric learning error φ(α̂) − φ(α0) plays a first-order role. We shall show that, in both the known and the unknown functional case, the estimator of φ(α0) is asymptotically normal. We then provide a quasi-likelihood inference procedure to construct confidence intervals for φ(α0).

3 Rates of Convergence for Semi-parametric Neural Networks
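A stylized sketch of the corrected estimator (2.3), with l the identity and h0 treated as known, so that only the mechanics of the correction term Γ̂t ρ are illustrated; the simulated endogenous design and the sieve projections used to build Γ̂ are assumptions of the example, not the paper's estimator of Γ0.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
X = rng.normal(size=n)
U = rng.normal(size=n)
W = X + U                                    # endogenous regressor
h0 = lambda w: np.sin(w)
Y = h0(W) + U + 0.5 * rng.normal(size=n)     # error correlated with W through U

rho = Y - h0(W)                              # rho(Y, alpha0), evaluated at the truth
ell = h0(W)                                  # l(h0(W)) with l the identity

# Estimate Gamma_0(X) = E[l(h0(W)) rho | X] Sigma(X)^{-1} by sieve projections on [1, X];
# the denominator is clipped away from zero for numerical stability (an assumption).
Psi = np.column_stack([np.ones(n), X])
proj = lambda v: Psi @ np.linalg.lstsq(Psi, v, rcond=None)[0]
Gamma = proj(ell * rho) / np.clip(proj(rho ** 2), 0.1, None)

phi_plugin = np.mean(ell)                    # naive plug-in estimator of E[l(h0(W))]
phi_eff = np.mean(ell - Gamma * rho)         # corrected estimator, as in (2.3)
print(round(phi_plugin, 3), round(phi_eff, 3))
```

Here the true value is E[sin(W)] = 0 by symmetry, so both estimators are close to zero; the correction term changes the influence function (and hence the asymptotic variance), not the target.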
3.1 Weighted function space and sieve learning space

Since the support of the endogenous variable Wt could be unbounded, we use a weighted sup-norm metric defined as

∥h∥∞,ω = sup_s |h(s)| (1 + |s|^2)^{-ω/2}, for some ω > 0.   (3.1)

This is known as an "admissible weight," often used for h0(Wt) when Wt has a fat-tailed distribution (Remark 2.6 of Haroske and Skrzypczak (2020)). Smooth functions with unbounded support may still be well approximated under the weighted sup-norm. The L2(W)-norm can be bounded by the weighted sup-norm: for any function h(w),

∥h∥^2_{L2(W)} = ∫ h(s)^2 fW(s) ds ≤ ∥h∥^2_{∞,ω} ∫ (1 + |s|^2)^ω fW(s) ds,

provided the distribution of the endogenous variable W has a density fW such that fW(s)(1 + |s|^2)^ω is integrable.

We do not consider the overparametrized regime, but instead impose restrictions on the complexity of the general nonlinear sieve learning space Hn, measured by the "number of parameters" of the space, denoted p(Hn).
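The weighted sup-norm (3.1) and the L2(W) bound above can be illustrated numerically; the choices h(s) = s^2, ω = 2, the grid, and W ~ N(0, 1) are illustrative assumptions of the example.

```python
import numpy as np

omega = 2.0
h = lambda s: s ** 2                       # unbounded function; finite weighted sup-norm

# Weighted sup-norm (3.1), approximated by a sup over a wide grid:
grid = np.linspace(-50, 50, 200001)
norm_inf_omega = np.max(np.abs(h(grid)) * (1 + grid ** 2) ** (-omega / 2))

# Check the L2(W) bound with W ~ N(0, 1) by Monte Carlo:
rng = np.random.default_rng(3)
W = rng.normal(size=200000)
l2_sq = np.mean(h(W) ** 2)                              # estimates ∫ h^2 f_W
bound = norm_inf_omega ** 2 * np.mean((1 + W ** 2) ** omega)
print(round(norm_inf_omega, 3), bool(l2_sq <= bound))
```

Note h(s) = s^2 has an infinite ordinary sup-norm, while its weighted sup-norm with ω = 2 is finite (at most 1), which is exactly why the weighted norm is used for functions with unbounded support.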
More specifically, we impose the following condition.

Assumption 3.1 (function and learning space). (i) The function space: the unknown function h0 ∈ H0, a weighted Hölder ball: for some γ > 0 and g ≥ 0,

H0 = {h : ∥h(·)(1 + |·|^2)^{-g/2}∥_{Λγ} ≤ c},  where  ∥f∥_{Λγ} = sup_w |f(w)| + max_{|a|=d} sup_{w1≠w2} |∇^a f(w1) − ∇^a f(w2)| / ∥w1 − w2∥^{γ−d}.

Also, we require g < ω for ω defined in (3.1).

(ii) Approximation rate under the ∥·∥∞,ω norm: inf_{h∈Hn} ∥h0 − h∥∞,ω ≤ c p(Hn)^{-m} for some m > 0 and some sequence p(Hn) → ∞ with p(Hn) log n = o(n).

(iii) Complexity: let N(δ, Hn, ∥·∥∞,ω) denote the minimal covering number, that is, the minimal number of closed balls of radius δ with respect to ∥·∥∞,ω needed to cover Hn. We assume there is a constant C > 0 so that, for any δ > 0,

N(δ, Hn, ∥·∥∞,ω) ≤ (Cn/δ)^{p(Hn)}.

In other words, we assume that h(w) is smooth in a weighted sense with respect to w. Condition (i) is a standard weighted smoothness condition for functions with unbounded support. Two weighted norms are defined here. The first is the weighted sup-norm ∥·∥∞,ω with weight parameter ω in (3.1); the weighted sup-norm, rather than the usual sup-norm, is considered, as discussed above, in order to allow the nonparametric function h(·) to have possibly unbounded support, which is the typical case for autoregressive models.
The other norm is ∥·∥_{Λγ} for the Hölder ball, with weight parameter g. Here we require g < ω so that the closure of the function space H0 with respect to the norm ∥·∥∞,ω is compact, following Gallant and Nychka (1987).

In Condition (ii), p(Hn) → ∞ measures the dimension of the learning space. For multilayer neural networks with ReLU activation functions, Anthony and Bartlett (2009) showed that the bound holds with p(Hn) being the pseudo-dimension of the space, which is bounded by CJ^2K^2 log(JK^2), where J and K respectively denote the width and depth of the network. For a finite-dimensional linear sieve, the inequality also holds with p(Hn) bounded by the number of sieve bases. When the function h has bounded support, Condition (ii) has been verified for numerous learning spaces.
For instance, for feedforward multilayer neural networks, Bauer and Kohler (2019) showed that the approximation rate is n^{-c} for c = p/(2p + d*) and p = a + γ, with properly chosen depth and width of the layers. Importantly, d* ≤ dim(Wt) is the "intrinsic dimension" of the true function: for instance, if h0 has a hierarchical interaction structure or a multi-index structure, d* is the number of indices. When the function h has unbounded support, it is known that for linear sieves such as B-splines and wavelets the approximation rate is p(Hn)^{-γ/dim(Wt)} (i.e., m = γ/dim(Wt)), where p(Hn) is the number of basis functions. The approximation rate in this case, however, remains an open question for feedforward neural networks.

3.2 Ill-posedness

In this section we present the rate of convergence. For simplicity, throughout the rest of the paper we focus on the case dim(ρ(Yt+1, α)) = 1.
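The two bounds just cited can be tabulated directly. In this sketch the constant C in the pseudo-dimension bound is unspecified in the text and set to 1, and all numeric inputs (width, depth, sample size, smoothness) are placeholders.

```python
import math

def pseudo_dim_bound(J, K, C=1.0):
    """Upper bound C * J^2 * K^2 * log(J K^2) on the pseudo-dimension of a
    ReLU network of width J and depth K (Anthony and Bartlett, 2009);
    C is an unspecified constant, set to 1 here for illustration."""
    return C * J ** 2 * K ** 2 * math.log(J * K ** 2)

def nn_approx_rate(n, gamma, a, d_star):
    """Bauer-Kohler (2019) approximation rate n^{-c}, c = p/(2p + d*), p = a + gamma."""
    p = a + gamma
    return n ** (-p / (2 * p + d_star))

print(round(pseudo_dim_bound(20, 4)))
# The rate improves (decays faster) as the intrinsic dimension d* shrinks:
print(nn_approx_rate(10_000, gamma=1.0, a=1.0, d_star=1)
      < nn_approx_rate(10_000, gamma=1.0, a=1.0, d_star=5))
```

This makes the role of d* concrete: the exponent p/(2p + d*) depends on the intrinsic dimension rather than the ambient dimension dim(Wt), which is the source of the dimension-reduction benefit of neural network sieves.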
By the identification condition, Q(α) = 0 if and only if α = α0, so the usual risk consistency refers to Q(α̂) = oP(1). In the presence of endogenous variables, however, risk consistency is not sufficient to guarantee estimation consistency. The latter is often defined under a strong norm:

∥α1 − α2∥∞,ω := ∥θ1 − θ2∥ + ∥h1 − h2∥∞,ω.

We first introduce a pseudometric on An that is weaker than ∥·∥∞,ω. To do so, recall the general Gateaux derivative. Given generic α = (θ, h) and v = (vθ, vh), let F(x, α) = F(x, θ, h) be a function that is assumed to be differentiable with respect to θ. Define

dF(x, α)/dα [v] = (∂F(x, α)/∂θ)′ vθ + dF(x, θ, h + τ vh)/dτ |_{τ=0},

where we implicitly assume that dF(x, θ, h + τ vh)/dτ exists at τ = 0.
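The Gateaux derivative defined above can be approximated by central differences, combining a gradient in θ with a directional derivative in the function h; the function F and the direction (vθ, vh) below are illustrative toy choices, not objects from the paper.

```python
import numpy as np

def gateaux(F, theta, h, v_theta, v_h, x, eps=1e-6):
    """Central-difference approximation of dF(x, alpha)/dalpha [v]:
    the theta-part plus the directional derivative along v_h."""
    grad_theta = (F(x, theta + eps * v_theta, h)
                  - F(x, theta - eps * v_theta, h)) / (2 * eps)
    dir_h = (F(x, theta, lambda w: h(w) + eps * v_h(w))
             - F(x, theta, lambda w: h(w) - eps * v_h(w))) / (2 * eps)
    return grad_theta + dir_h

# Toy example: F(x, theta, h) = theta*x + h(x)^2, whose derivative in the
# direction (v_theta, v_h) is v_theta*x + 2*h(x)*v_h(x).
F = lambda x, th, h: th * x + h(x) ** 2
x = 2.0
val = gateaux(F, theta=1.0, h=np.sin, v_theta=1.0, v_h=np.cos, x=x)
exact = 1.0 * x + 2 * np.sin(x) * np.cos(x)
print(abs(val - exact) < 1e-6)
```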
Then the weak norm is defined to be

∥v∥^2 := E[ (dm(Xt, α0)/dα [v])^2 Σ(Xt)^{-1} ].

Define πnα0 ∈ An to be such that

∥πnα0 − α0∥∞,ω = min_{α∈An} ∥α − α0∥∞,ω.

The following assumption imposes conditions on the local curvature of the criterion function.

Assumption 3.2 (criterion function). There are c1, c2 > 0 so that (i) ∥α − α0∥^2 ≤ c1 E[m(Xt, α)^2 Σ(Xt)^{-1}] for all α ∈ An; (ii) E[m(Xt, πnα0)^2 Σ(Xt)^{-1}] ≤ c2 ∥α0 − πnα0∥^2.

We now discuss the ill-posedness, which reflects the relation between risk consistency and estimation consistency. Let the sieve modulus of continuity be

ωn(δ) := sup_{α∈An: ∥α−πnα0∥≤δ} ∥α − πnα0∥∞,ω.
We say that the problem is ill-posed if δ = o(ωn(δ)) as δ → 0. The growth of ωn(δ)δ^{-1} reflects the difficulty of recovering α0 by minimizing the criterion function.

3.3 Rates of convergence

Below we present regularity conditions needed to achieve the rates of convergence. We allow weakly dependent time series data satisfying β-mixing conditions. Define the mixing coefficient

β(j) := sup_t E sup{ |P(B | F^t_{−∞}) − P(B)| : B ∈ F^∞_{t+j} },

where F^t_s denotes the σ-field generated by (Ys+1, Xs), ..., (Yt+1, Xt).

Assumption 3.3 (Dependence). (i) {(Yt+1, Xt)}_{t=1}^n is a strictly stationary β-mixing sequence with β(j) ≤ β0 exp(−cj) for some β0, c > 0. (ii) There is a known and finite integer r ≥ 1 so that for each α ∈ An and t = 1, ..., n, the conditional expectation E[f(St, α) | σt(X)] depends on σt(X) only through Xt := (Xt, ..., Xt−r), for St = (Yt+1, Wt) and f(St, α) ∈ {ρ(Yt+1, α), ρ(Yt+1, α0)^2, l(h0(Wt))ρ(Yt+1, α0)}.

Assumption 3.4. Q(α) = 0 if and only if α = α0. In addition, Q(α) is lower semicontinuous.
The lower semicontinuity of the criterion function is satisfied by the risk functions of many interesting models. This condition ensures that Q has a minimum on any compact set.

Assumption 3.5 (Penalty). (i) There is M0 > 0 such that Pen(h) ≤ M0 for all h ∈ Hn ∪ {h0}. (ii) Pen is lower semicompact on (An, ∥·∥∞,ω), i.e., {h : Pen(h) ≤ M} is compact for any M > 0. (iii) kn/n + Q(πnα0) = O(λn), where recall that kn is the number of linear sieve bases in Dn.

Define ǫ(St, α) := ρ(Yt+1, α) − m(Xt, α).
One of the major technical steps is to establish stochastic equicontinuity for the function class Ψj(Xt)ǫ(St, α) with β-mixing observations, where α belongs to the class of deep neural networks. More specifically, with Ψ(Xt) := (Ψj(Xt) : j ≤ kn), we shall derive a bound for

sup_{α∈An: E[m(Xt,α)^2] ≤ r_n^2} | n^{-1/2} ∑_{t=1}^n Ψ(Xt)[ǫ(St, α) − ǫ(St, α0)] |

for a given convergent sequence rn → 0. This is achieved under the following assumption.

Assumption 3.6. (i) There are κ > 0 and C > 0 so that for all δ > 0 and all α1, α2 ∈ An,

max_{j≤kn} E [Ψj(Xt)^2 + 1] sup_{∥α1−α2∥∞,ω<δ} |ǫ(St, α1) − ǫ(St, α2)|^2 ≤ C δ^{2κ}.

(ii) E max_{j≤kn} Ψj(Xt)^2 sup_{α∈An} ρ(Yt+1, α)^2 ≤ C. (iii) There is a ∥·∥∞,ω-neighborhood of α0 on which m(·, α) is continuously pathwise differentiable with respect to α, and there is a constant C > 0 such that ∥α − α0∥ ≤ C∥α − α0∥∞,ω.

Next we present regularity conditions on the linear sieve space Dn used to approximate the conditional mean function m(X, α).

Assumption 3.7 (Linear sieve space). (i) There is ϕn → 0 so that, uniformly for α ∈ An, there is a kn × 1 vector bα with

E[g(Xt, α) − Ψ(Xt)′bα]^2 = O(ϕ_n^2),

for all g(Xt, α) ∈ {m(Xt, α), E[l(h0(Wt))ρ(Yt+1, α0) | Xt], dm(Xt, α)/dα [un], dm(Xt, α)/dα [un] Σ(Xt)^{-1}}. (ii) Let Ψn be the n × kn matrix of the linear sieve bases, Ψn = (Ψ(Xt) : t = 1, ..., n), and let A := (1/n) E[Ψn′Ψn]. The linear sieve satisfies λmin(A) > c and ∥(1/n)Ψn′Ψn − A∥ = oP(1).
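Assumption 3.7(ii) can be checked on a given design by forming the sample Gram matrix of the sieve basis and inspecting its smallest eigenvalue; the polynomial basis and uniform design below are illustrative assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(4)
n, kn = 5000, 6
X = rng.uniform(-1, 1, size=n)

# Hypothetical polynomial sieve basis Psi(X) = (1, X, ..., X^{kn-1}).
Psi_n = np.column_stack([X ** j for j in range(kn)])

# Sample analogue of A := (1/n) E[Psi_n' Psi_n]:
A_hat = Psi_n.T @ Psi_n / n
lam_min = np.linalg.eigvalsh(A_hat).min()
print(lam_min > 0)   # Assumption 3.7(ii) requires lambda_min(A) bounded away from 0
```

In practice, raw powers become badly conditioned as kn grows (the smallest eigenvalue shrinks quickly), which is one reason orthogonalized bases such as B-splines or Legendre polynomials are preferred for Dn.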
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Finally, we apply the pseudo dimension to quantify the complexity of the neural network class.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Assumption 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (i) supx[Σ(x)−1 + Σ(x)] < C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Also, supx |�Σ(x) − Σ(x)| = oP(1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (ii) The distribution of the endogenous variable Wt has a density function fW, which satisfies � w(x)−2fW(x)dx < ∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Recall that kn denotes the number of sieve bases being used to estimate the expectation function m(X, α);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' ϕn is the approximation rate in Assumption 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Let dn := � p(Hn) log2 n n , ¯δn := ∥πnα0 − α0∥ + � λn + � kndn + ϕn, δn := ∥πnα0 − α0∥∞,ω + ωn(¯δn).' 
(3.2)

Theorem 3.1 (Rate of convergence). Under Assumptions 3.2-3.8, for any ǫ > 0, ∥α̂ − α0∥∞,ω = OP(δn) and Q(α̂) = OP(δ̄n²).

The derived rate of convergence is comparable with that of Chen and Pouzo (2012). In δ̄n, the term ∥πnα0 − α0∥ is the approximation error on the general nonlinear sieve learning space; √λn is the effect of penalization. In addition, ϕn and √kn dn arise respectively from the bias and variance of estimating m(X, α).
In particular, the variance term √kn dn depends on the complexity of the general nonlinear sieve learning space, and arises from stochastic equicontinuity. In addition, ωn(δ̄n) connects the convergence under the weak norm, OP(δ̄n), to the convergence under the strong norm via the sieve modulus of continuity. When there is no endogeneity, δ̄n and ωn(δ̄n) are of the same order. General nonlinear sieve spaces with more complicated structures (with larger "dimension" p(Hn)) have larger covering numbers on the learning space, and thus lead to slower decay of these two terms.

4 Asymptotic Distributions for NN Functionals

We now study estimation of linear functionals of α0. We establish the asymptotic normality of the estimated functionals formed by plugging in the general learning estimators.
4.1 Riesz representation

A key ingredient of our analysis, as in Chen and Pouzo (2015), is representing the estimation error φ(α̂) − φ(α0) through a linear inner product induced by the loss function via the Riesz representation theorem. We define an inner product space as follows. For any space H, let span{H} denote the closed linear span of H. For any v1, v2 in span(An ∪ {α0}), the linear span of An ∪ {α0}, define the inner product

⟨v1, v2⟩ = E[Σ(Xt)⁻¹ (dm(Xt, α0)/dα[v1]) (dm(Xt, α0)/dα[v2])].

Let α0,n ∈ span(An) be such that ∥α0,n − α0∥ = min_{α∈span(An)} ∥α − α0∥. We note that possibly α0,n ≠ πnα0, because πnα0 ∈ An, which is not the same as span(An) when An is a nonlinear NN space. Given Theorem 3.1, we can focus on the shrinking neighborhoods

Aosn := {α ∈ An : ∥α − α0∥∞,ω ≤ Cδn, Q(α) ≤ Cδ̄n²},
Cn := {α + xun : α ∈ Aosn, |x| ≤ Cn^{−1/2}}, un := v*n/∥v*n∥,
V̄n := span(Aosn − {α0,n}) ⊂ span(An),
(4.1)

for a generic constant C > 0, where v*n is the Riesz representer to be defined below. Because Aosn and α0,n both lie in the general nonlinear sieve learning space, (V̄n, ⟨·, ·⟩) is a finite dimensional Hilbert space under the weak norm ∥v∥ = √⟨v, v⟩. Suppose dφ(α0)/dα[v] is a linear functional. As any linear functional on a finite dimensional Hilbert space is bounded, by the Riesz representation theorem there is v*n ∈ V̄n so that

dφ(α0)/dα[v] = ⟨v*n, v⟩, ∀v ∈ V̄n.

To appreciate the role of the Riesz representation in the semiparametric inference, note that α̂ − α0,n ∈ V̄n, and we have

φ(α̂) − φ(α0) = dφ(α0)/dα[α̂ − α0]
= dφ(α0)/dα[α̂ − α0,n] + dφ(α0)/dα[α0,n − α0]
= ⟨v*n, α̂ − α0,n⟩ + dφ(α0)/dα[α0,n − α0]  (negligible),

where the first equality follows from the smoothness condition (Assumption 4.
1 below) of the functional, and the second equality is due to the linearity of the functional pathwise derivative. Provided dφ(α0)/dα[α0,n − α0] is negligible, a claim we discuss in Remark 4.1 below, we can then apply the Riesz representation theorem to reach the last line of the expansion. In addition, one of the key technical steps in the proof, obtained by locally expanding the risk function, is to show that

√n⟨v*n, α̂ − α0,n⟩ = √n⟨v*n, α̂ − α0⟩ = −(1/√n) Σt Zt + oP(∥v*n∥),

where Zt = ρ(Yt+1, α0)Σ(Xt)⁻¹ dm(Xt, α0)/dα[v*n] and ∥v*n∥² = Var((1/√n) Σt Zt). Together these yield

√n (φ(α̂) − φ(α0)) / ∥v*n∥ →d N(0, 1).

Importantly, our inference procedure does not require estimating the Riesz representer v*n or ∥v*n∥. Instead, we propose a quasi-likelihood ratio (QLR) inference.
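Although the GN-QLR procedure avoids estimating v*n, it is instructive to see what the representer is when the sieve space is finite dimensional: the representation dφ(α0)/dα[v] = ⟨v*n, v⟩ reduces to a single linear solve against the Gram matrix of the weak-norm inner product. The numpy sketch below is a toy illustration only; the derivative values D, the weights sigma_inv, and the functional gradient f are simulated stand-ins, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 500, 4

# Toy stand-ins: column j of D holds dm(X_t, a0)/da[v_j] at the n sample
# points, for K sieve directions v_1, ..., v_K; sigma_inv plays Sigma(X_t)^{-1}.
D = rng.normal(size=(n, K))
sigma_inv = 1.0 / (1.0 + rng.uniform(size=n))

# Empirical Gram matrix of the weak-norm inner product <v_i, v_j>.
G = (D * sigma_inv[:, None]).T @ D / n

# Gradient of the functional in coordinates: f_j = d phi(a0)/da [v_j].
f = np.array([1.0, -0.5, 2.0, 0.0])

# Riesz representer coordinates solve G b = f, so <v*_n, v> = b'Gc = f'c.
b = np.linalg.solve(G, f)

c = rng.normal(size=K)     # coordinates of an arbitrary v in the span
lhs = b @ G @ c            # <v*_n, v>
rhs = f @ c                # d phi(a0)/da [v]

v_norm_sq = b @ G @ b      # ||v*_n||^2, the weak-norm variance term
```

In coordinates the defining property ⟨v*n, v⟩ = dφ(α0)/dα[v] holds for every v in the span, and b′Gb recovers the squared norm that plays the role of the asymptotic variance.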
We shall provide regularity conditions in the next section to formalize the above derivations, and subsequently address estimating the known and unknown functionals.

4.2 Asymptotic distributions for known functionals

We have the following assumptions.

Assumption 4.1 (smoothness). (i) The functional φ is linear in the sense that φ(α) − φ(α0) = dφ(α0)/dα[α − α0]. (ii) √n dφ(α0)/dα[α0,n − α0] = oP(∥v*n∥).

Remark 4.1. Assumption 4.
1(ii) requires that the neural network bias term dφ(α0)/dα[α0,n − α0] be negligible. Here we present a sufficient condition, following the discussion of Chen and Pouzo (2015). First, since α0,n is the projection of α0 onto span(An) and v*n ∈ V̄n ⊂ span(An), we have ⟨v*n, α0,n − α0⟩ = 0. In addition, define an infinite dimensional Hilbert space V̄ as the closure of the linear span of A − {α0}. Suppose dφ(α0)/dα[·] is bounded; then there is a unique Riesz representer v* ∈ V̄ so that dφ(α0)/dα[v] = ⟨v*, v⟩ for all v ∈ V̄. As α0,n − α0 ∈ V̄, we have

|√n dφ(α0)/dα[α0,n − α0]| = |√n ⟨v* − v*n, α0,n − α0⟩| ≤ √n ∥v* − v*n∥ ∥α0,n − α0∥.

So condition (ii) holds as long as √n ∥v* − v*n∥ ∥α0,n − α0∥ = oP(∥v*n∥).
To allow quantile applications that involve nonsmooth loss functions, we need to show that the sample criterion function Qn(α) can be replaced with a smoothed criterion

Q̃n(α) := (1/n) Σt ℓ(Xt, α)² Σ̂(Xt)⁻¹,

where ℓ(x, α) := m̂(x, α) + m̂(x, α0), m̂(x, α) := Ψ(x)′(Ψn′Ψn)⁻¹Ψn′ mn(α) with Ψn = (Ψ(X1), ..., Ψ(Xn))′ the n × kn matrix of sieve bases, and mn(α) denotes the n × 1 vector of m(Xt, α). The replacement error is negligible:

sup_{α∈Aosn} sup_{|x|≤Cn^{−1/2}} |Qn(α + xun) − Q̃n(α + xun)| = oP(n⁻¹).

Therefore the theoretical analysis of Qn(α) is asymptotically equivalent to that of Q̃n(α), while the latter is second-order pathwise differentiable and admits a local quadratic approximation. Formalizing this argument requires the following conditions.

Assumption 4.2.
m(x, t) is twice differentiable with respect to t, and there is C > 0 so that, recalling that un = v*n/∥v*n∥ is the "normalized Riesz representer":

(i) E[|ρ(Yt+1, α0)|^{2+ζ} |dm(Xt, α0)/dα[un]|^{2+ζ}] + E|ρ(Yt+1, α0)|^{2+ζ} < C for some ζ > 0;

(ii) E sup_{α∈Cn} sup_{|τ|≤Cn^{−1/2}} (1/n) Σt (d²m(Xt, α + τun)/dτ²)² < C;

(iii) sup_{τ∈(0,1)} sup_{α∈Cn} E(d²m(Xt, α0 + τ(α − α0))/dτ²)² = o(n⁻¹);

(iv) kn sup_{α∈Cn} (1/n) Σt [dm(Xt, α)/dα[un] − dm(Xt, α0)/dα[un]]² = oP(1);

(v) E[(max_{j≤kn} Ψj(Xt)^{2+κ} + 1) sup_{α∈Cn} (ρ(Yt+1, α) − ρ(Yt+1, α0))²] < Cδn^{2η} for some κ, η > 0.

Finally, we need to strengthen the conditions on the penalty and some rates of convergence as follows.

Assumption 4.3. (i) Let Ch := {h : (θ, h) ∈ Cn for some θ ∈ Θ}, the local neighborhood of the estimated h(·).
We assume

λn sup_{h∈Ch} |Pen(h) − Pen(h0)| + λn sup_{h∈Ch} |Pen(πnh) − Pen(h0)| = o(n⁻¹).

(ii) √n δ̄n ∥Σ̂n − Σn∥ = o(1), where Σ̂n and Σn are the diagonal matrices of Σ̂(Xt) and Σ(Xt) for all t; furthermore, ϕn² δ̄n² + kn dn² δn^{2η} + √kn dn δn^η δ̄n = o(n⁻¹).

The following condition is similar to Condition C in Shen (1997), and is used to control the approximation error of the NN space for locally perturbed elements.

Assumption 4.4. There is µn → 0 with µn δ̄n = o(n⁻¹) such that

sup_{α∈Cn} (1/n) Σ_{t=1}^n [m(Xt, πnα) − m(Xt, α)]² = OP(µn²).

Theorem 4.1 (Limiting distribution). Under Assumptions 3.3-4.
4,

√n (φ(α̂) − φ(α0)) / ∥v*n∥ →d N(0, 1).

An important insight from this theorem is that the asymptotic distribution does not depend on the actual choice of the learning space. The asymptotic variance

∥v*n∥² = E[Σ(Xt)⁻¹ (dm(Xt, α0)/dα[v*n])²]

is determined only by the functional forms of φ and m(X, α), and more generally by the loss function. So whether multilayer neural networks, B-splines, Gaussian radial bases, etc., are used to estimate α0, the asymptotic distribution is the same. What really matters is the loss function.

4.3 Estimation for unknown functionals

We now consider estimating unknown (possibly not √n-estimable) functionals of the form γ0 := E[l(h0(Wt))], where l(·) is a known function. Ai and Chen (2012) used the following moment condition (4.
2) to construct the optimal criterion function:

γ0 = E[𝒲t], 𝒲t = l(h0(Wt)) − Γ(Xt)ρ(Yt+1, α0),   (4.2)

where Γ(Xt) = E[l(h0(Wt))ρ(Yt+1, α0)|σt(X)]Σ(Xt)⁻¹. They showed that estimating γ0 based on this moment condition leads to a more efficient estimator than the naive plug-in method (1/n) Σt l(ĥ(Wt)) whenever Wt is endogenous, because the naive plug-in estimator does not take into account the potential correlations between the moment functions m(Xt, α) and l(h(Wt)). Using the more efficient moment condition for γ0, and letting φ(α) := E[l(h(Wt))] − E[Γ(Xt)ρ(Yt+1, α)], we note that φ(α0) = γ0. If the functional φ(·) were known and Assumption 4.1 continued to hold for φ(α), then we could show

√n (φ(α̂) − φ(α0)) ≈ −(1/√n) Σ_{t=1}^n Zt, Zt := ρ(Yt+1, α0)Σ(Xt)⁻¹ dm(Xt, α0)/dα[v*n],

where v*n is the Riesz representer. But we are in fact facing the problem of estimating an unknown functional φ(·).
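The efficiency gain from (4.2) is the control-variate mechanism: Γ(Xt)ρ(Yt+1, α0) has mean zero but is correlated with l(h0(Wt)), so subtracting it reduces the variance of the sample mean. A scalar toy sketch (the variables rho, lh and the least-squares weight gamma_w are hypothetical stand-ins for ρ(Yt+1, α0), l(h0(Wt)) and Γ, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Toy stand-ins: rho is a mean-zero residual correlated with lh = l(h0(W_t)).
rho = rng.normal(size=n)
lh = 1.0 + 0.8 * rho + 0.6 * rng.normal(size=n)   # E[l(h0(W_t))] = 1

# Least-squares weight, mimicking the role of Gamma(X_t).
gamma_w = np.cov(lh, rho)[0, 1] / np.var(rho)

naive = lh                       # plug-in moment
efficient = lh - gamma_w * rho   # analogue of the moment in (4.2)

# Both sample means estimate gamma_0 = 1, but the second has smaller variance.
```

In this toy design the corrected moment strips out the component of lh explained by the residual, so its sample mean is noticeably less variable than the plug-in one.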
To do so, we first estimate Γ(Xt) by

Γ̂t := Σ_{s=1}^n l(ĥ(Ws))ρ(Ys+1, α̂)Ψ(Xs)′(Ψn′Ψn)⁻¹Ψ(Xt)Σ̂(Xt)⁻¹.

Then define the final estimator γ̂ := φ̂(α̂), where

φ̂(α) = (1/n) Σ_{t=1}^n [l(h(Wt)) − Γ̂t ρ(Yt+1, α)].   (4.3)

The following asymptotic expansion holds for the estimated functional:

γ̂ − γ0 = [φ(α̂) − φ(α0)] + (1/n) Σ_{t=1}^n [𝒲t − E𝒲t] + oP(σ n^{−1/2})
= (1/n) Σ_{t=1}^n [−Zt + 𝒲t − E𝒲t] + oP(σ n^{−1/2}),

where Zt = ρ(Yt+1, α0)Σ(Xt)⁻¹ dm(Xt, α0)/dα[v*n]. This explicitly presents the two leading sources of the asymptotic distribution, whose asymptotic variance is given by

σ² := (1/n) Var(Σ_{t=1}^n (𝒲t − Zt)) = (1/n) Var(Σ_{t=1}^n 𝒲t) + ∥v*n∥²,   (4.4)

where 𝒲t and Zt are uncorrelated. We impose the following conditions.

Assumption 4.5.
(i) supx |Γ(x)|² + supw sup_{h∈Hn} l(h(w))² < C. (ii) l(h) is linear in h. (iii) E sup_{α∈Cn} |l(h(Wt)) − l(h0(Wt))|² ≤ Cδn^{2η}, where for simplicity we assume the same η as in Assumption 4.2(v).

Assumption 4.5 regulates the approximation quality of the instrumental space using linear sieves, which is not stringent since E[l(h(Wt))ρ(Yt+1, α)|σt(X)] is a function of the instrumental variable. The next assumption imposes a condition on the accuracy of estimating the optimal weighting function Σ(Xt). For the NPQIV model this assumption is trivially satisfied, since Σ̂(Xt) = Σ(Xt) = ϖ(1 − ϖ) is known (see Section 6.3 for the definition of ϖ).
We shall verify it for the NPIV model in Section 6.2.

Assumption 4.6. There is a sequence pn with pn δ̄n σ = o(n⁻¹) such that

(1/n) Σt Γ(Xt)Σ(Xt)(Σ̂(Xt)⁻¹ − Σ(Xt)⁻¹)ρ(Yt+1, α0) = OP(pn).

The asymptotic normality requires some rate restrictions, which we impose below.

Assumption 4.7. (i) There is c0 > 0 so that σ² > c0. (ii) Let νn := δn^η supx |Σ̂(x) − Σ(x)| + √kn dn δn^η + ϕn². Then νn δ̄n σ = o(n⁻¹).
Theorem 4.2. Suppose Assumptions 3.3-4.4 hold for φ(α) = E[l(h(Wt))] − E[Γ(Xt)ρ(Yt+1, α)]. In addition, suppose Assumptions 4.5-4.7 hold. Then

√n σ⁻¹ (γ̂ − γ0) →d N(0, 1).

5 Quasi-Likelihood Ratio Inference for Functionals

As shown by Theorems 4.1 and 4.2, computing the asymptotic variance requires estimating the Riesz representer.
While Chen and Pouzo (2015) and Chernozhukov et al. (2018c) proposed frameworks for estimating the Riesz representer, the task is in general quite challenging when it does not have closed-form approximations. In this section we propose to conduct inference directly using the optimally weighted quasi-likelihood ratio (QLR) statistic.

5.1 QLR Inference for known functionals

Consider testing H0 : φ(α0) = φ0 for some known φ0 ∈ R. Consider the restricted null space ARn := {α ∈ An : φ(α) = φ0}. The GN-QLR statistic is defined as

Sn(φ0) = n (Qn(α̂R) − Qn(α̂)),

where α̂R ∈ ARn approximately minimizes the penalized loss function over the general nonlinear sieve learning space restricted to the null space:

Qn(α̂R) + λn Pen(ĥR) ≤ inf_{α∈ARn} Qn(α) + λn Pen(α) + oP(n⁻¹).

Define πRn α = argmin_{b∈An, φ(b)=φ0} ∥b − α∥∞,ω.
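The χ²₁ limit of an optimally weighted QLR statistic can be previewed in the simplest possible case, a Gaussian location model with known variance, where the restricted-minus-unrestricted criterion difference reduces to the squared t-statistic. A Monte Carlo sketch (a toy model, not the GN-QLR estimator itself):

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_sims, sigma2, theta0 = 200, 5000, 2.0, 0.0
crit = 1.959963985 ** 2   # chi-square(1) 95% critical value = z_{0.975}^2 ~ 3.84

rejections = 0
for _ in range(n_sims):
    y = theta0 + np.sqrt(sigma2) * rng.normal(size=n)
    # Optimally weighted criterion Q_n(theta) = (1/n) sum_t (y_t - theta)^2 / sigma2,
    # at the restricted value theta0 and at the unrestricted minimizer ybar.
    Q_restricted = np.mean((y - theta0) ** 2) / sigma2
    Q_unrestricted = np.mean((y - y.mean()) ** 2) / sigma2
    S = n * (Q_restricted - Q_unrestricted)   # = n (ybar - theta0)^2 / sigma2
    rejections += S > crit

reject_rate = rejections / n_sims   # should be near the nominal 5% level
```

Under the null the statistic S is exactly χ²₁ here, so comparing it with the 95% critical value rejects in about 5% of the simulations; no standard error or representer needs to be estimated, which is the appeal of the QLR form.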
Assumption 5.1. (i) Recall µn as defined in Assumption 4.4. It also satisfies

sup_{α∈Aosn, φ(α)=φ0} (1/n) Σ_{t=1}^n [m(Xt, πRn(α + xun)) − m(Xt, α + xun)]² = OP(µn²).

(ii) (1 + ∥v*n∥) sup_{α∈Cn} |φ(πnα) − φ(α)| = o(n^{−1/2}).

The following theorem gives the asymptotic null distribution of Sn(φ0).

Theorem 5.1. Suppose the conditions of Theorem 4.1 and Assumption 5.1 hold.
Then under $H_0: \phi(\alpha_0) = \phi_0$, $S_n(\phi_0) \to_d \chi^2_1$.

5.2 QLR inference for unknown functionals

We now move on to inference for the unknown functional $\gamma_0 := E l(h_0(W_t))$, which is estimated by $\widehat\gamma$ as defined in (4.3). Consider testing
$$H_0: E l(h_0(W_t)) = \phi_0$$
for some known $\phi_0$. Define
$$L_n(\alpha, \gamma) := Q_n(\alpha) + (\widehat\phi(\alpha) - \gamma)^2 \widehat\Sigma_2^{-1},$$
where $\widehat\Sigma_2$ consistently estimates the long-run variance (e.g. Newey and West (1987)):
$$\Sigma_2 := \mathrm{Var}\left(\frac{1}{\sqrt n}\sum_{t=1}^n \mathbb{W}_t\right) = \frac{1}{n}\sum_{t=1}^n \mathrm{Var}(\mathbb{W}_t) + \frac{1}{n}\sum_{t \ne s} \mathrm{cov}(\mathbb{W}_t, \mathbb{W}_s).$$
We recall that $\mathbb{W}_t = l(h_0(W_t)) - \Gamma(X_t)\rho(Y_{t+1}, \alpha_0)$.
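A standard way to obtain $\widehat\Sigma_2$ is a Newey–West (Bartlett-kernel) estimator applied to the plug-in estimates of the summands. A minimal sketch, with a common rule-of-thumb bandwidth (the bandwidth choice is ours, not from the text):

```python
import numpy as np

def newey_west_lrv(w, bandwidth=None):
    """Newey-West (Bartlett-kernel) estimate of the long-run variance of the
    scalar series w_t, i.e. of Var(n^{-1/2} * sum_t w_t)."""
    w = np.asarray(w, dtype=float)
    n = w.size
    if bandwidth is None:
        # A common rule-of-thumb bandwidth; any slowly growing choice works.
        bandwidth = int(4 * (n / 100.0) ** (2.0 / 9.0))
    wc = w - w.mean()
    lrv = np.mean(wc ** 2)                          # lag-0 autocovariance
    for lag in range(1, bandwidth + 1):
        weight = 1.0 - lag / (bandwidth + 1.0)      # Bartlett weight
        gamma = np.mean(wc[lag:] * wc[:-lag])       # lag-l autocovariance
        lrv += 2.0 * weight * gamma
    return lrv

# For i.i.d. standard normal data the long-run variance is 1.
rng = np.random.default_rng(0)
est = newey_west_lrv(rng.standard_normal(50_000))
```

The Bartlett weights guarantee a nonnegative estimate, which matters here since $\widehat\Sigma_2^{-1}$ enters the criterion as a weight.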
Note that $(\widehat\alpha, \widehat\gamma)$ is numerically equivalent to the solution of the following problem:
$$L_n(\widehat\alpha, \widehat\gamma) + \lambda_n \mathrm{Pen}(\widehat h) \le \inf_{\alpha\in\mathcal{A}_n}\min_\gamma L_n(\alpha, \gamma) + \lambda_n \mathrm{Pen}(h) + o_P(n^{-1}).$$
We define the GN-QLR statistic as
$$\widehat S_n(\phi_0) = n\left(L_n(\widehat\alpha^R, \phi_0) - L_n(\widehat\alpha, \widehat\gamma)\right),$$
where $\widehat\alpha^R \in \mathcal{A}_n^R$ approximately minimizes the penalized loss function on the learning space $\mathcal{H}_n$, with $\gamma$ fixed at $\phi_0$:
$$L_n(\widehat\alpha^R, \phi_0) + \lambda_n \mathrm{Pen}(\widehat h^R) \le \inf_{\alpha\in\mathcal{A}_n} L_n(\alpha, \phi_0) + \lambda_n \mathrm{Pen}(\alpha) + o_P(n^{-1}).$$
The asymptotic analysis of $\widehat S_n(\phi_0)$ is rather involved, and requires additional rate constraints, stated as follows.

Theorem 5.2. Suppose $\widehat\Sigma_2 - \Sigma_2 = o_P(1)\Sigma_2$ and the conditions of Theorem 4.2 hold. Then under $H_0: \gamma_0 = \phi_0$,
$$\widehat S_n(\phi_0) \to_d \chi^2_1.$$

6 Examples

In this section, we illustrate our main results using three important models: reinforcement learning, NPIV and NPQIV.
We impose primitive conditions to verify the high-level Assumptions 3.2, 3.6 and 4.2 in these models.

6.1 Reinforcement learning

Reinforcement learning (RL) has been an important learning device behind many successes in applications of artificial intelligence. Theories of RL have been developed in the statistical learning and computer science literatures. Most of the existing theoretical works formulate the problem as a least-squares regression and approximate the value function by a linear function, such as Bradtke and Barto (1996), etc. Nonlinear approximations using kernel methods or deep learning appeared in the more recent literature, for example Farahmand et al. (2016); Geist et al. (2019); Fan et al. (2020); Duan et al. (2021); Long et al. (2021); Chen and Qi (2022). Shi et al. (2020) also conducted inference for the optimal policy using linear sieve representations.
We proceed with learning via neural networks, and study inference for a given policy. We follow the recent literature on the off-policy evaluation problem, and formulate the reinforcement learning problem as a conditional moment restriction model. Assume the observed data trajectory $\{(S_t, A_t, R_t)\}_{t\ge 0}$ is obtained from an unknown behavior policy $\pi_b(a|s)$, where $(S_t, A_t, R_t)$ denote the state, action and observed reward at time $t$ respectively, and $\pi_b(a|s)$ is the probability of taking action $a$ at state $s$. We denote the spaces of states and actions by $\mathcal{S}$ and $\mathcal{A}$. It is assumed that the reward $R_t$ is jointly determined by $(S_t, A_t, S_{t+1})$. Standing at state $S_t$ in period $t$, one takes action $A_t$ and receives reward $R_t$. The state then transits to $S_{t+1}$ in the next period. The value of a given policy $\pi$ is measured by the so-called Q-function.
Specifically, for any given $\pi$ and any state-action pair $(s, a)$, the Q-function is defined as the expected discounted reward:
$$Q^\pi(s, a) = \sum_{t=0}^\infty \gamma^t E^\pi(R_t \mid S_0 = s, A_0 = a),$$
where $E^\pi$, or in short $E$, is the expectation when actions are taken according to $\pi$, $0 \le \gamma < 1$ is the discount factor, and we consider the discounted infinite-horizon sum of expected rewards. To estimate $Q^\pi$, a classical approach is to solve the Bellman equation below:
$$Q^\pi(s, a) = E\left[R_t + \gamma \int_{x\in\mathcal{A}} \pi(x|S_{t+1})\, Q^\pi(S_{t+1}, x)\,dx \,\Big|\, S_t = s, A_t = a\right].$$
The goal is to recover $Q^\pi$ for a given target policy $\pi$. In practice, multiple trajectories $\{(S_{i,t}, A_{i,t}, R_{i,t}, S_{i,t+1})\}_{0\le t\le T,\, 1\le i\le N}$ may be observed to help estimate the Q-function, but for simplicity we assume $N = 1$ and $T = n$. The more general case can be handled by merging the $N$ time series into a single series of size $n = TN$.
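The conditional-expectation form of the Bellman equation suggests a sample residual: given a transition $(s, a, r, s')$, the inner integral over actions can be approximated by Monte Carlo draws from $\pi(\cdot|s')$. A minimal sketch (function names are hypothetical):

```python
import numpy as np

def bellman_residual(q, transition, sample_policy, gamma=0.9, n_draws=100, seed=0):
    """rho(Y_{t+1}, q) = R_t - q(S_t, A_t) + gamma * E_{x ~ pi(.|S_{t+1})} q(S_{t+1}, x),
    with the integral over actions replaced by a Monte Carlo average."""
    s, a, r, s_next = transition
    rng = np.random.default_rng(seed)
    draws = sample_policy(s_next, n_draws, rng)          # x ~ pi(.|s_next)
    continuation = np.mean([q(s_next, x) for x in draws])
    return r - q(s, a) + gamma * continuation

# Toy check: with constant reward r = 1, the constant function
# q(s, a) = 1 / (1 - gamma) solves the Bellman equation exactly, so the
# residual is (numerically) zero for any transition and any policy.
gamma = 0.9
q_const = lambda s, a: 1.0 / (1.0 - gamma)
pi = lambda s, n, rng: rng.uniform(-1.0, 1.0, size=n)
res = bellman_residual(q_const, (0.2, 0.1, 1.0, -0.3), pi, gamma=gamma)
```

Averaging such residuals against instrument functions of $(S_t, A_t)$ yields the sample moments used by the minimum distance criterion.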
The Bellman equation can be formulated as a conditional moment restriction with respect to $Q^\pi$ for weakly dependent time series:
$$E[\rho(Y_{t+1}, Q^\pi)|S_t, A_t] = 0, \quad Y_{t+1} = (R_t, S_t, A_t, S_{t+1}), \quad X_t = (S_t, A_t),$$
where
$$\rho(Y_{t+1}, h) = R_t - h(S_t, A_t) + \gamma\int_{x\in\mathcal{A}} \pi(x|S_{t+1})\, h(S_{t+1}, x)\,dx.$$
In this framework, the estimation of the function $Q^\pi(s, a)$ can be conducted on the neural network space, and we assume that computationally the integration in the $\rho$-function can be well approximated by the Monte Carlo method. For off-policy evaluation, the following value functional is of major interest in this section: given state $s \in \mathcal{S}$,
$$\phi_s(Q^\pi) = \int_{a\in\mathcal{A}} \pi(a|s)\, Q^\pi(s, a)\,da, \qquad (6.1)$$
which is a known functional $\phi_s(\cdot)$ for a single state $s$. The Bellman equation also admits a Fredholm integral equation of the second kind (Kress, 1989), which is a well-posed problem. Therefore, estimating the Q-function may achieve a fast rate of convergence; that is, the sieve modulus of continuity satisfies
$$\omega_n(\delta) := \sup_{\alpha\in\mathcal{A}_n:\, \|\alpha - \pi_n\alpha_0\|\le\delta} \|\alpha - \pi_n\alpha_0\|_s \asymp \delta.$$
Recently Chen and Qi (2022) showed this result for $\|.\|_s$ being either the sup-norm or the $\ell_2$-norm. The inner product is defined, in this case, as
$$\langle v_1, v_2\rangle = E\,\Sigma(X_t)^{-1}\left(\frac{dm}{dh}[v_1]\right)\left(\frac{dm}{dh}[v_2]\right),$$
where
$$\frac{dm}{dh}[v] = \gamma\int_{x\in\mathcal{A}} E\left[\pi(x|S_{t+1})\, v(S_{t+1}, x)\,|\,S_t, A_t\right]dx - v(S_t, A_t), \qquad (6.2)$$
and it induces a Riesz representer $v^*$ whose closed form is unavailable. Meanwhile, it follows from the Bellman equation that $m(X_t, h) = \frac{dm}{dh}[h - Q^\pi]$ for all $h\in\mathcal{H}_n$. Therefore, the weak norm $\|.\|$ can be expressed as
$$\|h - Q^\pi\|^2 = E\,m(X_t, h)^2\Sigma(X_t)^{-1},$$
which shows that the employed minimum distance criterion function directly estimates the squared weak norm. Let $\widehat Q^\pi$ be the estimate of $Q^\pi$ over the general nonlinear learning space; the functional is then naturally estimated by
$$\phi_s(\widehat Q^\pi) = \int_{a\in\mathcal{A}} \pi(a|s)\,\widehat Q^\pi(s, a)\,da.$$
As the moment restriction function $E[\rho(Y_{t+1}, h)|S_t, A_t]$ is linear in $h$ in this case, it is straightforward to verify the high-level conditions, as follows.

Assumption 6.1. (i) For some $\zeta > 4$, the Riesz representer satisfies
$$E\int \pi(x|S_{t+1})|v_n^*(S_{t+1}, x)|^\zeta dx + E|v_n^*(S_t, A_t)|^\zeta \le \|v_n^*\|^\zeta.$$
(ii) $ER_t^4 < \infty$, $E\max_{j\le k_n}\Psi_j(X_t)^4 < \infty$, $E(1 + |S_t|^2 + |A_t|^2)^{2\omega} < \infty$ and $EM(S_{t+1})^4 < \infty$, where $M(S_{t+1}) := \int \pi(x|S_{t+1})(1 + x^2 + S_{t+1}^2)^{\omega/2}dx$, and $\omega$ is the degree of the weighted-sup metric $\|.\|_{\infty,\omega}$.

Proposition 6.1. For the reinforcement learning model considered here, Assumption 6.1 implies Assumptions 3.2, 3.6 and 4.2.
It then follows from Theorem 4.1 that
$$\|v_n^*\|^{-1}\sqrt n\left(\phi_s(\widehat Q^\pi) - \phi_s(Q^\pi)\right) \to_d \mathcal{N}(0, 1).$$
Inference about $\phi_s(Q^\pi)$ based on pivotal statistics can be conducted using the GN-QLR test.

6.2 The NPIV model

In the nonparametric instrumental variable (NPIV) model, consider
$$y_{t+1} = h_0(W_t) + U_{t+1}, \quad E(U_{t+1}|\sigma_t(\mathcal{X})) = 0,$$
where $\sigma_t(\mathcal{X})$ is the filtration generated by the instrumental variables $X_t$. Then $m(X_t, \alpha) = E[(y_{t+1} - h(W_t))|\sigma_t(\mathcal{X})]$ and the Gateaux derivative is defined as $\frac{dm(X_t,\alpha)}{dh}[v] = E(v(W_t)|\sigma_t(\mathcal{X}))$, implying
$$\langle u_n, h - h_0\rangle = E\left[E(u_n(W_t)|\sigma_t(\mathcal{X}))\,E(h - h_0|\sigma_t(\mathcal{X}))\,\Sigma(X_t)^{-1}\right].$$
We estimate the conditional variance $\Sigma(X_t)$ by $\widehat\Sigma_t = \widehat A_n'\Psi_n(\Psi_n'\Psi_n)^{-1}\Psi(X_t)$, where $\widehat A_n$ is an $n\times 1$ vector of $\rho(Y_{t+1}, \widehat\alpha)^2$. Recall that for $\delta_n$ and $\bar\delta_n$ defined in (3.2), $\|\widehat h - h\|_{\infty,\omega} = O_P(\delta_n)$ and $\|\widehat h - h\| = O_P(\bar\delta_n)$.
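The estimator $\widehat\Sigma_t$ is simply the fitted value of a least-squares regression of the squared residuals on the sieve basis $\Psi$, evaluated at $X_t$. A minimal sketch:

```python
import numpy as np

def sieve_conditional_variance(resid_sq, Psi):
    """Fitted values of the series regression of squared residuals A_n
    on the basis matrix Psi (n x k): Psi (Psi'Psi)^{-1} Psi' A_n."""
    coef, *_ = np.linalg.lstsq(Psi, resid_sq, rcond=None)
    return Psi @ coef

# Toy check: with an intercept-only basis the fitted values equal the
# sample mean of the squared residuals.
resid_sq = np.arange(1.0, 11.0)      # mean 5.5
Psi = np.ones((10, 1))
fitted = sieve_conditional_variance(resid_sq, Psi)
```

In an application, each row of `Psi` would hold the basis functions $\Psi_1(X_t), \ldots, \Psi_{k_n}(X_t)$, and `resid_sq` the squared residuals $\rho(Y_{t+1}, \widehat\alpha)^2$; truncating the fitted values away from zero is a sensible safeguard before inverting them.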
We impose the following low-level conditions to verify Assumptions 3.6 and 4.2.

Assumption 6.2. (i) $\delta_n^2\bar\delta_n\sigma = o(n^{-1})$, $E\max_{j\le k_n}|\Psi_j(X_t)|^2(U_t^2 + 1) < C$, and $E(U_t^2|\sigma_t(\mathcal{X})) < C$ almost surely. Also, $E(1 + |W_t|^2)^\omega < C$ and $E\max_{j\le k_n}\Psi_j(X_t)^2(1 + |W_t|^2)^\omega < C$.
(ii) The Riesz representer $v_n^*$ satisfies: there are $C, \zeta > 0$ such that $E(\max_{j\le k_n}\Psi_j(X_t)^2 + 1)v_n^*(W_t)^2 < CEK_t^2$ and $E|U_t|^{2+\zeta}|K_t|^{2+\zeta} \le C(EK_t^2)^{1+\zeta/2}$, where $K_t := E(v_n^*(W_t)|\sigma_t(\mathcal{X}))$.

Proposition 6.2. For the NPIV model, (i) Assumption 6.2 implies Assumptions 3.2, 3.6, 4.2 and 4.6. (ii) For the known functional $\phi(\cdot)$, suppose in addition Assumptions 3.3, 3.4, 3.5, 3.7, 3.8, 4.1, 4.3 and 4.4 hold. Then
$$\frac{\sqrt n(\phi(\widehat\alpha) - \phi(\alpha_0))}{\sigma_n} \to_d \mathcal{N}(0, 1), \quad \text{where } \sigma_n^2 := \mathrm{Var}\left(E(v_n^*(W_t)|\sigma_t(\mathcal{X}))\,\Sigma(X_t)^{-1}U_t\right).$$
(iii) For the unknown functional $\gamma_0 = El(h_0(W_t))$, if additionally Assumptions 4.5 and 4.7 hold, then $\sqrt n v^{-1}(\widehat\gamma - \gamma_0) \to_d \mathcal{N}(0, 1)$, where $v^2 := \frac{1}{n}\mathrm{Var}(\sum_t \mathbb{W}_t - Z_t)$ with $\mathbb{W}_t = l(h_0(W_t)) - \Gamma_0(X_t)U_{t+1}$ and $Z_t = U_{t+1}\Sigma(X_t)^{-1}E[v_n^*(W_t)|\sigma_t(\mathcal{X})]$.

6.3 The NPQIV model

Consider the nonparametric quantile instrumental variable (NPQIV) model
$$E[1\{y_{t+1} \le h_0(W_t)\}|\sigma_t(\mathcal{X})] = \varpi \in (0, 1).$$
Then $m(X_t, \alpha) = P(U_{t+1} < h - h_0|\sigma_t(\mathcal{X})) - \varpi$, where $U_{t+1} = y_{t+1} - h_0(W_t)$ and $\alpha = h$. Within this framework, we now verify the high-level assumptions presented in the previous sections.
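The NPQIV residual underlying this moment restriction is the recentered indicator $1\{y_{t+1} \le h(W_t)\} - \varpi$. A minimal sketch (names hypothetical) checking that it averages to roughly zero at the true $h_0$ in a toy median-regression design:

```python
import numpy as np

def npqiv_residual(y_next, w, h, varpi=0.5):
    """rho(Y_{t+1}, h) = 1{y_{t+1} <= h(W_t)} - varpi."""
    return (y_next <= h(w)).astype(float) - varpi

# Toy check: y_{t+1} = h0(W_t) + U_{t+1} with median-zero errors, so the
# residual at h = h0 and varpi = 0.5 has mean close to zero.
rng = np.random.default_rng(0)
w = rng.standard_normal(100_000)
h0 = lambda x: np.sin(x)
y = h0(w) + rng.standard_normal(100_000)
m = npqiv_residual(y, w, h0).mean()
```

Unlike the NPIV residual, this one is bounded but nonsmooth in $h$, which is why the verification below requires density conditions on $U_t$.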
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Suppose the conditional distribution of Ut given (Xt, Wt) is absolutely continuous with density function fUt|σt(X),Wt(u).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' In this context, Σ(Xt) is known, given by Σ(Xt) = Var(1{yt+1 ≤ h0(Wt)}|σt(X )) = ̟ − ̟2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Then the Gateaux derivative is defined as dm(Xt, α) dh [v] = E(fUt|σt(X),Wt(h(Wt) − h0(Wt))v(Wt)|σt(X )), 25 implying, for g1 = fUt|σt(X),Wt(0)un(Wt) and g2 = fUt|σt(X),Wt(0)(h(Wt) − h0(Wt)), ⟨un, h − h0⟩ = E [E(g1|σt(X ))E(g2|σt(X ))] (̟ − ̟2)−1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Also, ∥v∗ n∥2 = (̟ − ̟2)−1Eg(Xt)2 where g(Xt) = E[fUt|σt(X),Wt(0)v∗ n(Wt)|σt(X )].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' We impose the following low-level conditions to verify Assumptions 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='6 and 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Let At(v) := � 1 0 fUt|σt(X),Wt (x(v(Wt) − h0(Wt))) dx Bt(v, h) := E {At(v)[h(Wt) − h0(Wt)]|Xt} .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Assumption 6.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (i) There are c1, c2, ǫ0 > 0 so that for all ∥h − h0∥∞,ω < ǫ0, c2EBt(h, h)2Σ(Xt)−1 ≤ EBt(h0, h)2Σ(Xt)−1 ≤ c1EBt(h, h)2Σ(Xt)−1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (ii) Almost surely, supu f ′ Ut|σt(X),Wt(u) < C and supu,x,w fUt|σt(X),Wt(u) < C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Also and there is L > 0, for all u, almost surely, supx,w |fUt|σt(X),Wt(u) − fUt|σt(X),Wt(0)| ≤ L|u|.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (iii) E[maxj≤kn Ψj(Xt)2 + At(h0)2](1 + |Wt|2)ω < C and E[un(Wt)4|σt(X )] < C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (iv) δ2 nkn = o(1) and δ4 n = o(n−1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' The following proposition, proved in the appendix, is the main result in this subsection, which verifies the high-level conditions in the NPQIV context.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Proposition 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' For the NPQIV model, (i) Assumption 6.' 
3 implies Assumptions 3.2, 3.6, 4.2 and 4.6.

(ii) For the known functional φ(·), suppose in addition that Assumptions 3.3, 3.4, 3.5, 3.7, 3.8, 4.1, 4.3 and 4.4 hold. Then

√n(φ(α̂) − φ(α_0))/σ_n →_d N(0, 1), where σ_n² := (̟ − ̟²)⁻¹ E{ [E( f_{U_t|σ_t(X),W_t}(0) v_n*(W_t) | σ_t(X) )]² }.

(iii) For the unknown functional γ_0 = E l(h_0(W_t)), if additionally Assumptions 4.5 and 4.7 hold, then √n v⁻¹(γ̂ − γ_0) →_d N(0, 1), where v² := (1/n) Var(Σ_t (W_t − Z_t)) with W_t = l(h_0(W_t)) − Γ_0(X_t)U_{t+1} and Z_t = (̟ − ̟²)⁻¹ 1{U_{t+1} ≤ 0} E( f_{U_t|σ_t(X),W_t}(0) v_n*(W_t) | σ_t(X) ).

7 Simulation Studies

In this section, we set up nonparametric endogenous models to illustrate the performance of our proposed estimators and test statistics on synthetic data. Consider the following data generating process:

Y_t = h(Z_t, Y_{t−1}, …, Y_{t−L}) + e_t, where h(Z_t, Y_{t−1}, …, Y_{t−L}) = Z_t ϑ_0 + f(Σ_{l=1}^L b_l Y_{t−l}),

and φ(α) = E[∂h/∂Z_t] = ϑ_0 = 1 is the quantity to be estimated. We choose L = 3, b_l = 0.4^l and the nonlinear mapping f(x) = (1 − exp(−x))/(1 + exp(−x)). The endogenous Z_t is generated by the autoregressive model

Z_t = 0.3 Z_{t−1} + u_t, (u_t, ε_t) ∼ iid N(0, Σ), Σ = (1, ρ; ρ, 1),

and e_t is generated by the following ARCH model with ε_t as the innovation:

e_t = σ_t ε_t, σ_t² = 0.5 + 0.5(1 − 0.3²) Z_{t−1}².

We set ρ = 0.5 to make Z_t endogenous; the ARCH specification also makes e_t conditionally heteroskedastic. Note that E[e_t²] = E[σ_t²] = 1. The endogenous variable is W_t = Z_t. The instruments are X_t = (Z_{t−1}, Y_{t−1}, …, Y_{t−L}). We generate n = 5000 samples (a burn-in period is discarded to ensure the data are stationary). Note that the model can be used for both NPIV and NPQIV, with ̟ = 0.5.
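The data generating process above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function name `simulate_dgp`, the burn-in length, and the reading of the flattened lag weights as b_l = 0.4^l are our assumptions.

```python
import numpy as np

def simulate_dgp(n=5000, burn=500, rho=0.5, L=3, seed=0):
    """Sketch of the simulation DGP: AR(1) endogenous Z_t, ARCH errors e_t."""
    rng = np.random.default_rng(seed)
    T = n + burn
    # (u_t, eps_t) jointly normal with correlation rho, so Z_t is endogenous
    cov = np.array([[1.0, rho], [rho, 1.0]])
    u, eps = rng.multivariate_normal([0.0, 0.0], cov, size=T).T
    b = 0.4 ** np.arange(1, L + 1)            # lag weights b_l = 0.4^l (assumption)
    f = lambda x: (1 - np.exp(-x)) / (1 + np.exp(-x))
    Z = np.zeros(T)
    Y = np.zeros(T)
    for t in range(1, T):
        Z[t] = 0.3 * Z[t - 1] + u[t]                           # AR(1) regressor
        sigma2 = 0.5 + 0.5 * (1 - 0.3 ** 2) * Z[t - 1] ** 2    # ARCH: E[sigma2] = 1
        e = np.sqrt(sigma2) * eps[t]
        lags = Y[max(t - L, 0):t][::-1]                        # Y_{t-1}, ..., Y_{t-L}
        Y[t] = Z[t] * 1.0 + f(np.dot(b[:len(lags)], lags)) + e  # theta_0 = 1
    return Z[burn:], Y[burn:]
```

Since Var(Z_t) = 1/(1 − 0.3²), one can verify numerically that E[σ_t²] = 0.5 + 0.5(1 − 0.3²)·Var(Z_t) = 1, matching the claim in the text.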
We applied a fully-connected J-layer ReLU-activated NN with hidden layers of width K. The unconstrained NPIV or NPQIV objective was optimized by vanilla gradient descent. We did not use mini-batches in training, as mini-batches may hurt performance due to insufficient smoothing. The number of training epochs was as large as 10000, with learning rate 0.01 for NPIV and 0.1 for NPQIV. Furthermore, we did not apply any penalty term in this example, since the problem is relatively easy and the NN under consideration is of small scale. The linear sieve bases (Ψ_1, …, Ψ_{k_n}) for the instrumental variable space were k̃_n cubic B-splines for X and for each of the three Y lags, concatenated together. For simplicity, no interaction terms between X and the Y lags were included. Thus in total we have k_n = 4k̃_n − 3 bases (since the B-spline bases in each dimension sum to 1, we remove the last basis for each dimension and finally add an intercept term as another basis). In our simulations, we find that NPQIV requires a larger number of sieve bases k_n for estimating the instrumental space.

Table 1: Estimation and hypothesis testing under NPIV and NPQIV with synthetic data. Here (J, K, k_n) respectively denote the number of layers, the width of the neural nets, and the number of sieve bases for estimating the instrumental space. The true value is ϑ_0 = 1. "95% qtl" refers to the empirical 95% quantile; the theoretical 95% quantile of the chi-square distribution is 3.84.

Problem | Layer J | Width K | Basis k_n | Estimator of ϑ_0: mean | std | Testing statistic for ϑ_0: mean | std | 95% qtl | size
NPIV  | 3 | 10 | 17 | 0.968 | 0.116 | 0.999 | 1.432 | 3.814 | 5.0%
NPIV  | 3 | 10 | 13 | 0.957 | 0.115 | 0.874 | 1.236 | 3.727 | 4.8%
NPIV  | 1 | 40 | 13 | 0.984 | 0.108 | 1.032 | 1.418 | 4.215 | 6.0%
NPQIV | 3 | 10 | 49 | 0.997 | 0.129 | 1.086 | 1.565 | 4.280 | 6.4%
NPQIV | 3 | 10 | 45 | 0.994 | 0.130 | 1.002 | 1.409 | 3.955 | 5.6%
NPQIV | 1 | 40 | 29 | 0.977 | 0.126 | 1.050 | 1.421 | 3.678 | 4.9%

For the NPIV problem, we first optimize the equally weighted quadratic loss to obtain ĥ, which is used to estimate Σ(X_t) and Γ(X_t) consistently.
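A sieve basis of this kind can be assembled from univariate cubic B-splines via the Cox-de Boor recursion. The sketch below is generic, not the paper's code: equally spaced clamped knots are an assumption, and `bspline_design`/`instrument_sieve` are our names. It reproduces the count k_n = 4k̃_n − 3 by dropping the last basis in each of the four dimensions and adding an intercept.

```python
import numpy as np

def bspline_design(x, num_basis, degree=3):
    """Design matrix of num_basis clamped B-splines (Cox-de Boor recursion)."""
    x = np.asarray(x, float)
    lo, hi = x.min(), x.max()
    n_int = num_basis - degree - 1                  # number of interior knots
    interior = np.linspace(lo, hi, n_int + 2)[1:-1]
    t = np.concatenate([np.repeat(lo, degree + 1), interior, np.repeat(hi, degree + 1)])
    def B(i, k):
        if k == 0:
            if t[i] < t[i + 1] and t[i + 1] == hi:  # close the last span at x = hi
                return ((t[i] <= x) & (x <= t[i + 1])).astype(float)
            return ((t[i] <= x) & (x < t[i + 1])).astype(float)
        out = np.zeros_like(x)
        if t[i + k] > t[i]:
            out = out + (x - t[i]) / (t[i + k] - t[i]) * B(i, k - 1)
        if t[i + k + 1] > t[i + 1]:
            out = out + (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * B(i + 1, k - 1)
        return out
    return np.column_stack([B(i, degree) for i in range(num_basis)])

def instrument_sieve(cols, ktilde):
    """Concatenate per-dimension bases, dropping the last basis of each
    (they sum to one) and adding an intercept: k_n = 4*ktilde - 3 for 4 columns."""
    mats = [bspline_design(c, ktilde)[:, :-1] for c in cols]
    return np.column_stack([np.ones(len(cols[0]))] + mats)
```

The partition-of-unity property (each row of `bspline_design` sums to one) is what motivates dropping one basis per dimension before adding the intercept.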
In the second step, we optimize the optimally weighted quadratic loss with weighting matrix Σ̂(X_t)⁻¹ and apply the forward filter to estimate our expectation functional, which in this example is the constant ϑ_0 = 1. Finally, we carry out the hypothesis test of H_0 : φ(h) = E[∂h/∂Z_t] = φ_0 = 1 to check the size of the test statistic. Specifically, we estimated the forward-filtered residuals as Ŵ_t = ∂ĥ(W_t)/∂W_t − Γ̂_t(Y_t − ĥ(W_t)), estimated Σ² = Var(W_t) by the Newey-West estimator given Ŵ_t, then solved the constrained optimization of L_n(h, φ_0), and finally constructed the test statistic. For the NPQIV problem, since the optimal weighting is proportional to equal weighting, we do not need the initial step of estimating Σ(X_t). So we directly optimized the optimally weighted quadratic loss, estimated Γ(X_t) from the results, and then used the forward filter to correct the estimate of the average partial derivative. Finally, similar to NPIV, we conduct the hypothesis test of H_0 : φ(α) = 1 under NPQIV.
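The long-run variance step can be illustrated with a standard Bartlett-kernel (Newey-West) estimator applied to the filtered residuals. This is a generic sketch rather than the paper's implementation; the lag-truncation rule below is a common default we assume, not one stated in the text.

```python
import numpy as np

def newey_west_variance(w, n_lags=None):
    """Bartlett-kernel (Newey-West) long-run variance of a scalar series w."""
    w = np.asarray(w, float) - np.mean(w)
    n = len(w)
    if n_lags is None:
        # a common default truncation rule (assumption, not from the paper)
        n_lags = int(np.floor(4 * (n / 100.0) ** (2.0 / 9.0)))
    v = np.dot(w, w) / n                            # gamma_0
    for j in range(1, n_lags + 1):
        gamma_j = np.dot(w[j:], w[:-j]) / n         # lag-j autocovariance
        v += 2.0 * (1.0 - j / (n_lags + 1.0)) * gamma_j   # Bartlett weight
    return v
```

For serially uncorrelated residuals the estimator is close to the ordinary sample variance; the Bartlett weights only matter when the filtered residuals retain autocorrelation.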
As for computational practice, we find that for NPQIV models it is helpful to truncate the learned gradients in each step of training the network. Specifically, we smooth the loss function of the NPQIV model and truncate the gradient update:

θ_{k+1} = θ_k − lr · min{|∇L_{n,k}|, 0.001} · sgn(∇L_{n,k}),

where lr is the learning rate, fixed at 0.1 for NPQIV; ∇L_{n,k} is the gradient of the NN objective at the current step; and θ_{k+1} is the updated vector of neural network coefficients. The truncation prevents the network from having very large gradients during the iterations, which empirically helps stabilize the training process. We repeat each setting 1000 times.
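The truncated update rule displayed above is elementwise and easy to state in code; `truncated_gd_step` is our name for this single step, not a function from the paper.

```python
import numpy as np

def truncated_gd_step(theta, grad, lr=0.1, clip=0.001):
    """One update theta_{k+1} = theta_k - lr * min{|grad|, clip} * sgn(grad),
    applied elementwise, mirroring the truncation rule in the text."""
    return theta - lr * np.minimum(np.abs(grad), clip) * np.sign(grad)
```

A gradient component of magnitude 10 moves the parameter by only lr·clip = 1e-4, while a component already below the threshold is left as an ordinary gradient step, which is the stabilizing effect described above.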
For the efficient estimation, we report in Table 1 the mean and standard deviation of the forward-filtered average gradient from the optimally weighted optimization. For hypothesis testing, we also report in Table 1 the mean, standard deviation and 95% quantile of the empirical test statistic. In addition, using the theoretical critical value at the 5% significance level, which is 3.84 for χ²_1, the empirical size is also reported. As we can see from Table 1, for NPIV, optimal weighting estimates φ(h) accurately, in the sense that the mean does not differ significantly from the true value ϑ_0 = 1. NPQIV is less efficient, with a larger standard deviation, and thus requires more samples to reach the same accuracy. Note that the instrumental space, which involves a step function, can be harder to approximate with cubic B-spline linear sieve bases.
In terms of the performance of the QLR test statistic, the empirical sizes are all close to the nominal 5% level for both the NPIV and NPQIV models. Admittedly, in our experiments the results can be sensitive to some tuning parameters, which is typically the case when applying deep learning for statistical inference: at the moment we still rely heavily on ad hoc tuning in many problems. In comparison, the estimation of φ(h) is more stable with respect to different J and K values. Here we only mean to present some results without heavy tuning of the parameters. Methods using NNs in real applications require more extensive tuning in practice, and a rough sense of the model complexity would be useful for balancing the dimensions of the NN sieve and the linear IV sieve.

8 Conclusion

In this paper we establish neural network estimation and inference on functionals of an unknown function satisfying general time series conditional moment restrictions containing endogenous variables.
We consider general nonlinear sieve quasi-likelihood ratio (GN-QLR) based inference, where the nonparametric functions are learned using multilayer neural networks. While the asymptotic normality of the estimated functionals depends on an unknown Riesz representer of the functional space, we show that the GN-QLR statistic is asymptotically chi-square distributed, regardless of whether the expectation functional is regular (root-n estimable) or not. This holds when the data are weakly dependent and satisfy a beta-mixing condition. In addition to estimating partial derivatives in nonparametric endogenous problems as leading examples, our study is well motivated by the setting of reinforcement learning, where the data are time series in nature. We apply our method to off-policy evaluation by formulating the Bellman equation in the conditional moment restriction framework, so that we can make inference about the state-specific value functional using the proposed GN-QLR method with time series data.
A Stochastic equicontinuity on the NN space for β-mixing observations

A key technical result is the stochastic equicontinuity of the residual function on the general nonlinear sieve learning space, established in the following proposition. Let S_t = (Y_{t+1}, X_t) and ǫ_t(α) ≡ ǫ(S_t, α) := ρ(Y_{t+1}, α) − m(X_t, α). We derive bounds that involve the pseudo-dimension of the deep neural network class. Recall

δ_n := ∥π_n α_0 − α_0∥_{∞,ω} + ω_n(δ̄_n), δ̄_n² := ∥π_n α_0 − α_0∥² + λ_n + k_n d_n² + ϕ_n², where d_n := √(p(H_n) log² n / n).

Proposition A.1. Let C_n = {α + x u_n : α ∈ A_n, ∥α − α_0∥_{∞,ω} ≤ Cδ_n, Q(α) ≤ Cδ̄_n², |x| ≤ Cn^{−1/2}}. Suppose:

(a) E max_{j≤k_n} Ψ_j(X_t)² sup_{α∈C_n} (ρ(Y_{t+1}, α) − ρ(Y_{t+1}, α_0))² = Cδ_n^{2η} for some η, C > 0.
(b) For some κ, C > 0, E Ψ_j(X_t)² sup_{∥α_1−α∥_{∞,ω}<δ} |ǫ_t(α_1) − ǫ_t(α)|² ≤ Cδ^{2κ} for all δ > 0 and α, α_1 ∈ cl{a + xb : a, b ∈ A_n, x ∈ ℝ}.

Then

max_{j≤k_n} sup_{|x|≤Cn^{−1/2}} sup_{α∈C_n} |(1/n) Σ_t Ψ_j(X_t)(ǫ(S_t, α + x u_n) − ǫ(S_t, α_0))| = O_P(d_n δ_n^η).

Proof. Let E := {(ǫ(·, α + x u_n) − ǫ(·, α_0))Ψ_j : α ∈ C_n, j ≤ k_n, |x| ≤ Cn^{−1/2}} and let S_t = (Y_{t+1}, X_t). We divide the proof into several steps.

Step 1: construct blocks. Consider the following independent blocks: for any integer pair (a_n, b_n) with b_n = [n/(2a_n)], divide {S_t : t ≤ n} into 2b_n blocks of length a_n and a remaining block of length n − 2a_n b_n:

H_{1,l} = {i : 2(l−1)a_n + 1 ≤ i ≤ (2l−1)a_n}, H_{2,l} = {i : (2l−1)a_n + 1 ≤ i ≤ 2l a_n}, where l = 1, …, b_n.
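The odd/even blocking scheme of Step 1 can be written out directly (1-based indices, as in the text; `make_blocks` is our name for this illustrative helper):

```python
def make_blocks(n, a_n):
    """Construct the odd blocks H_{1,l}, even blocks H_{2,l} and the remainder
    set Upsilon = {2*a_n*b_n + 1, ..., n}, with b_n = floor(n / (2*a_n))."""
    b_n = n // (2 * a_n)
    H1 = [list(range(2 * (l - 1) * a_n + 1, (2 * l - 1) * a_n + 1)) for l in range(1, b_n + 1)]
    H2 = [list(range((2 * l - 1) * a_n + 1, 2 * l * a_n + 1)) for l in range(1, b_n + 1)]
    remainder = list(range(2 * a_n * b_n + 1, n + 1))
    return H1, H2, remainder
```

The 2b_n blocks of length a_n together with the remainder partition {1, …, n}, which is what the decomposition in the next step exploits.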
Let Υ = {i : 2a_n b_n + 1 ≤ i ≤ n}. Now let {S̃_1, …, S̃_n} be a random sequence that is independent of {S_1, …, S_n} and has independent blocks, each block having the same joint distribution as the corresponding block of the S_t-sequence. Because the S_t-sequence is β-mixing, by Lemma 2 of Eberlein (1984), for any measurable set A, with mixing coefficient β(·),

|P({S̃_t : t ∈ H_{1,l}, l = 1, …, b_n} ∈ A) − P({S_t : t ∈ H_{1,l}, l = 1, …, b_n} ∈ A)| ≤ (b_n − 1)β(a_n).   (A.1)

The same inequality holds when H_{1,l} is replaced with H_{2,l}. In addition, for any function f, define

U_{1,f}(S̃_l) = (1/a_n) Σ_{t∈H_{1,l}} f(S̃_t), U_{2,f}(S̃_l) = (1/a_n) Σ_{t∈H_{2,l}} f(S̃_t),

where S̃_l = {S̃_t : t ∈ H_{1,l}}. By construction, U_{1,f}(S̃_l) and U_{2,f}(S̃_l) are independent across l. Similarly, let S_l = {S_t : t ∈ H_{1,l}}. Then

(1/n) Σ_t [f(S_t) − Ef(S_t)] = (1/n) Σ_{t∈Υ} [f(S_t) − Ef(S_t)] + (1/b_n) Σ_{l≤b_n} a_n b_n n^{−1} [U_{1,f}(S_l) − EU_{1,f}(S_l)] + (1/b_n) Σ_{l≤b_n} a_n b_n n^{−1} [U_{2,f}(S_l) − EU_{2,f}(S_l)].   (A.2)

Next, we shall bound each term on the right-hand side uniformly over f ∈ E. We replace U_{1,f}(S_l) with U_{1,f}(S̃_l); the latter is easier to bound because the blocks S̃_l are independent.
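Decomposition (A.2) is an exact algebraic identity, which a small numeric check makes concrete: with the expectations subtracted off, the three pieces sum back to the centered sample mean. `decompose_mean` is our illustrative name; it works on raw values, since the identity holds term by term before centering as well.

```python
import numpy as np

def decompose_mean(fvals, a_n):
    """Split (1/n) sum_t f(S_t) into the remainder-block average, the scaled
    mean of odd-block averages U_{1,f}, and of even-block averages U_{2,f},
    as in decomposition (A.2)."""
    n = len(fvals)
    b_n = n // (2 * a_n)
    odd = [fvals[2 * l * a_n:(2 * l + 1) * a_n].mean() for l in range(b_n)]        # U_{1,f}
    even = [fvals[(2 * l + 1) * a_n:(2 * l + 2) * a_n].mean() for l in range(b_n)]  # U_{2,f}
    rem = fvals[2 * a_n * b_n:]
    scale = a_n * b_n / n        # the factor a_n * b_n * n^{-1} from (A.2)
    return rem.sum() / n, scale * np.mean(odd), scale * np.mean(even)
```

Because the odd blocks, even blocks, and remainder partition the sample, the three returned terms add up exactly to the overall sample mean.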
We then show that the effect of this replacement is negligible, due to (A.1), for properly chosen (a_n, b_n).

Step 2: the envelope function for U_{1,f}. Note that Ef = 0 for f ∈ E and that S̃_t and S_t are identically distributed within each block H_{1,l}. By the Cauchy-Schwarz inequality,

E sup_{f∈E} U_{1,f}(S̃_l)² ≤ E sup_{f∈E} ( (1/a_n) Σ_{t∈H_{1,l}} f(S̃_t) )² ≤ (1/a_n) Σ_{t∈H_{1,l}} E sup_{f∈E} f(S_t)² ≤ 2 E max_{j≤k_n} Ψ_j(X_t)² sup_{|x|≤Cn^{−1/2}} sup_{α∈C_n} (ρ(Y_{t+1}, α + x u_n) − ρ(Y_{t+1}, α_0))² ≤ Cδ_n^{2η}.

Now take some p > η. Let F = {U_{1,f} : f ∈ E} and let F := max{n^{−p}, sup_{f∈E} |U_{1,f}|}. Then both sup_{f∈E} |U_{1,f}| and F are envelope functions of F, and n^{−p} ≤ G := ∥F∥_{L2(S̃_t)} ≤ Cn^{−p} + Cδ_n^η ≤ 2Cδ_n^η.

Step 3: the bracketing number.
We aim to apply Theorem 2.14.2 of van der Vaart and Wellner (1996) to bound (1/b_n) Σ_{l≤b_n} a_n b_n n^{−1} U_{1,f}(S̃_l), which requires bounding the bracketing number of F. To do so, suppose h_1, …, h_N is a δ-cover of H_n under the norm ∥h∥_{∞,ω}, with N := N(δ, H_n, ∥·∥_{∞,ω}); and θ_1, …, θ_R is a δ-cover of Θ, with R := N(δ, Θ, ∥·∥) (the Euclidean norm on Θ). Here N(δ, A, ∥·∥) denotes the covering number of the space A. Also let x_1, …, x_{M_n} be a δ-cover of [−Cn^{−1/2}, Cn^{−1/2}], with M_n ≤ 4Cn^{−1/2}/δ. Then for any f = (ǫ(·, α + x u_n) − ǫ(·, α_0))Ψ_j ∈ E, there are Ψ_j, x_q and α_{ik} = (θ_k, h_i) so that ∥α − α_{ik}∥_{∞,ω} ≤ ∥h − h_i∥_{∞,ω} + ∥θ − θ_k∥ ≤ 2δ and |x − x_q| < δ. Let f_{ijkq} = (ǫ(·, α_{ik} + x_q u_n) − ǫ(·, α_0))Ψ_j. Then

sup_{f=(ǫ(·,α+xu_n)−ǫ(·,α_0))Ψ_j : ∥α−α_{ik}∥_{∞,ω}<2δ, |x−x_q|<δ} |U_{1,f}(S̃_l) − U_{1,f_{ijkq}}(S̃_l)| ≤ (1/a_n) Σ_{t∈H_{1,l}} |Ψ_j(X̃_t)| sup_{∥α−α_{ik}∥_{∞,ω}<2δ} sup_{|x−x_q|<δ} |ǫ(S̃_t, α + x u_n) − ǫ(S̃_t, α_{ik} + x_q u_n)| := b_{ijkq}(S̃_l, δ).

Hence U_{1,f} ∈ [l_{ijkq}, u_{ijkq}], where l_{ijkq} := U_{1,f_{ijkq}} − b_{ijkq}(·, δ) and u_{ijkq} := U_{1,f_{ijkq}} + b_{ijkq}(·, δ).
In addition,
\[
\mathbb{E}[u_{ijkq}-l_{ijkq}]^2 \le 4\,\mathbb{E}b_{ijkq}(\tilde S_l,\delta)^2 \le C\,\mathbb{E}\Big(\frac{1}{a_n}\sum_{t\in H_{1,l}} |\Psi_j(\tilde X_t)|\sup_{|x-x_q|<\delta}\ \sup_{\|\alpha-\alpha_{ik}\|_{\infty,\omega}<2\delta} |\epsilon(\tilde S_t,\alpha+xu_n)-\epsilon(\tilde S_t,\alpha_{ik}+x_qu_n)|\Big)^2
\le C\,\mathbb{E}\Psi_j(\tilde X_t)^2 \sup_{\|\alpha-\alpha_{ik}\|_{\infty,\omega}<2\delta}\ \sup_{|x-x_q|<\delta}|\epsilon(\tilde S_t,\alpha+xu_n)-\epsilon(\tilde S_t,\alpha_{ik}+x_qu_n)|^2 \le C\delta^{2\kappa}.
\]
Hence $\{[l_{ijkq}, u_{ijkq}] : i\le N, j\le k_n, k\le R, q\le M_n\}$ is a $C\delta^\kappa$ bracket of $\mathcal{F}$, whose bracketing number satisfies
\[
N_{[\,]}(C\delta^\kappa, \mathcal{F}, \|.\|_{L_2(\tilde S_t)}) \le \underbrace{N(\delta,\mathcal{H}_n,\|.\|_{\infty,\omega})}_{N}\ \underbrace{(C/\delta)^d}_{R}\ \underbrace{(n^{-1/2}/\delta)}_{M_n}\ k_n,
\]
where we used $R \le (C/\delta)^d$ for $d = \dim(\theta_0)$ since $\theta_0\in\Theta$ is compact. Then for a generic constant $C > 0$,
\[
N_{[\,]}(Gx, \mathcal{F}, \|.\|_{L_2(\tilde S_t)}) \le C\,N(x^{1/\kappa}(G/C)^{1/\kappa}, \mathcal{H}_n, \|.\|_{\infty,\omega})\, G^{-(d+1)/\kappa} x^{-(d+1)/\kappa} k_n, \quad \forall x > 0.
\]

Step 4: bound independent blocks. Note that $U_{1,f}(\tilde S_l)$ are independent across $l$ and mean-zero.
For the envelope $G$ defined in Step 2 and some constant $\bar M > 0$,
\begin{align*}
\mathbb{E}\sup_{f\in\mathcal{E}}\Big|\frac{1}{b_n}\sum_{l\le b_n} a_nb_nn^{-1}U_{1,f}(\tilde S_l)\Big| &\le \frac{1}{2}\,\mathbb{E}\sup_{g\in\mathcal{F}}\Big|\frac{1}{b_n}\sum_{l\le b_n} g(\tilde S_l)\Big| \\
&\le_{(i)} b_n^{-1/2} G \int_0^1 \sqrt{1+\log N_{[\,]}(Gx,\mathcal{F},\|.\|_{L_2(\tilde S_t)})}\,dx \\
&\le \frac{C}{\sqrt{b_n}}\,\delta_n^\eta \int_0^1 \sqrt{1+\log N(x^{1/\kappa}(G/C)^{1/\kappa},\mathcal{H}_n,\|.\|_\infty) + \log\frac{C}{n^{1/2}G^{(d+1)/\kappa}x^{(d+1)/\kappa}} + \log k_n}\,dx \\
&\le_{(ii)} \frac{C\delta_n^\eta}{\sqrt{b_n}} \int_0^1 \sqrt{1+p(\mathcal{H}_n)\log\frac{Cn}{x^{1/\kappa}G^{1/\kappa}} + (d+1)\log\frac{C}{G^{1/\kappa}x^{1/\kappa}} + \log k_n}\,dx \\
&\le_{(iii)} \frac{C\delta_n^\eta}{\sqrt{b_n}} \int_0^1 \sqrt{2\log k_n + 2p(\mathcal{H}_n)\log\frac{Cn}{x^{1/\kappa}G^{1/\kappa}}}\,dx \\
&\le_{(iv)} \delta_n^\eta \sqrt{\frac{Cp(\mathcal{H}_n)\log n}{b_n}},
\end{align*}
where (i) follows from Theorem 2.14.2 of van der Vaart and Wellner (1996); (ii) follows from Assumption 3.1; (iii) is due to $p(\mathcal{H}_n)\to\infty$. We now prove the inequality (iv), which is to show
\[
\int_0^1 \sqrt{2\log k_n + g(x)}\,dx \le C\sqrt{p(\mathcal{H}_n)\log n}, \quad \text{where } g(x) = 2p(\mathcal{H}_n)\log\frac{Cn}{x^{1/\kappa}G^{1/\kappa}}.
\]
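Before proving (iv), a quick numerical sanity check of the claimed bound may be helpful. All values below ($k_n$, $p(\mathcal{H}_n)$, $\kappa$, $n$, $C$, and the envelope norm $G$) are hypothetical choices for illustration only; the script compares the entropy integral against $2\sqrt{2A}$, the bound the proof derives by dominating the integrand with $\sqrt{2A/x}$.

```python
import numpy as np

# Hypothetical values (not from the paper), chosen only to illustrate scales;
# G obeys n^{-10} <= G as in the proof.
kn, p, kappa, n, C = 50, 100.0, 1.0, 1000, 1.0
G = float(n) ** (-10.0)

def integrand(x):
    # sqrt(2 log k_n + g(x)), with g(x) = 2 p log( C n / (x^{1/kappa} G^{1/kappa}) )
    g = 2.0 * p * np.log(C * n / (x ** (1.0 / kappa) * G ** (1.0 / kappa)))
    return np.sqrt(2.0 * np.log(kn) + g)

# A = 2 log k_n + 2 p log(Cn / G^{1/kappa}) - 2 p / kappa, as defined in the proof;
# the argument bounds the integrand pointwise by sqrt(2A/x), whose integral over
# (0, 1] equals 2 sqrt(2A).
A = 2.0 * np.log(kn) + 2.0 * p * np.log(C * n / G ** (1.0 / kappa)) - 2.0 * p / kappa

x = np.linspace(1e-8, 1.0, 200001)
y = integrand(x)
integral = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))  # trapezoid rule
assert integral <= 2.0 * np.sqrt(2.0 * A)
```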
Let $A := 2\log k_n + 2p(\mathcal{H}_n)\log\frac{Cn}{G^{1/\kappa}} - 2\kappa^{-1}p(\mathcal{H}_n)$. We have $\log\frac{Cn}{G} \to \infty$, hence $2\kappa^{-1}p(\mathcal{H}_n) \le A$. Note that $\log(y) \le y - 1$ for all $y > 0$. Hence
\[
2\log k_n + g(x) = 2\log k_n + 2p(\mathcal{H}_n)\log\frac{Cn}{G^{1/\kappa}} + 2\kappa^{-1}p(\mathcal{H}_n)\log\frac{1}{x} \le 2\log k_n + 2p(\mathcal{H}_n)\log\frac{Cn}{G^{1/\kappa}} + 2\kappa^{-1}p(\mathcal{H}_n)\Big(\frac{1}{x}-1\Big) = A + 2\kappa^{-1}p(\mathcal{H}_n)x^{-1} \le A + Ax^{-1} \le 2Ax^{-1}.
\]
The last inequality holds for $x < 1$. Thus with $n^{-10} \le G$ and $k_n = O(b_n)$,
\[
\int_0^1\sqrt{2\log k_n + g(x)}\,dx \le \sqrt{2A}\int_0^1 x^{-1/2}\,dx \le 4\sqrt{2\log k_n + 2p(\mathcal{H}_n)[\log(Cn)+\log G^{-1/\kappa}]} \le 4\sqrt{2\log k_n + 2p(\mathcal{H}_n)[\log(Cn)+\log n^{10/\kappa}]} \le C\sqrt{p(\mathcal{H}_n)\log n}.
\]
Therefore by the Markov inequality, for any $\varepsilon > 0$, with probability at least $1-\varepsilon/4$,
\[
\sup_{f\in\mathcal{E}}\Big|\frac{1}{b_n}\sum_{l\le b_n} a_nb_nn^{-1}U_{1,f}(\tilde S_l)\Big| \le \frac{c_n}{\varepsilon}, \quad c_n = \delta_n^\eta\sqrt{\frac{Cp(\mathcal{H}_n)\log n}{b_n}}.
\]

Step 5: completion. By (A.1) and Step 4,
\[
P\Big(\sup_{f\in\mathcal{E}}\Big|\frac{1}{b_n}\sum_{l\le b_n} a_nb_nn^{-1}U_{1,f}(S_l)\Big| > \frac{c_n}{\varepsilon}\Big) \le P\Big(\sup_{f\in\mathcal{E}}\Big|\frac{1}{b_n}\sum_{l\le b_n} a_nb_nn^{-1}U_{1,f}(\tilde S_l)\Big| > \frac{c_n}{\varepsilon}\Big) + (b_n-1)\beta(a_n).
\]
We now take $a_n = M\log n/2$ with $M > 0$ and $b_n = [n/(M\log n)]$. Then $(b_n-1)\beta(a_n)\to 0$ for sufficiently large $M$. Also, the requirement in Step 4 that $p(\mathcal{H}_n) = o(b_n)$ holds as long as $p(\mathcal{H}_n)\log n = o(n)$. Hence with this choice of $b_n$,
\[
\sup_{f\in\mathcal{E}}\Big|\frac{1}{b_n}\sum_{l\le b_n} a_nb_nn^{-1}U_{1,f}(S_l)\Big| = O_P\Big(\delta_n^\eta\sqrt{\frac{p(\mathcal{H}_n)\log^2 n}{n}}\Big).
\]
The same rate applies when $U_{1,f}$ is replaced with $U_{2,f}$, following the same proof as Steps 2-4. In addition, $|\Upsilon|_0 \le 2a_n$. Hence
\[
\mathbb{E}\sup_{f\in\mathcal{E}}\Big|\frac{1}{n}\sum_{t\in\Upsilon} f(S_t) - \mathbb{E}f(S_t)\Big| \le \mathbb{E}\frac{1}{n}\sum_{t\in\Upsilon}\sup_{f\in\mathcal{E}}|f(S_t)| \le \frac{Ca_n}{n}\,\mathbb{E}\max_{j\le k_n}|\Psi_j(X_t)|\sup_{\alpha\in\mathcal{C}_n}|\epsilon_t(\alpha)-\epsilon_t(\alpha_0)| \le \frac{C\delta_n^\eta\log n}{n}.
\]
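The odd/even blocking used throughout Steps 1-5 can be made concrete with a short sketch. The function below is purely illustrative (the sample size and the constant $M$ are hypothetical): it builds $b_n$ odd blocks $H_{1,l}$ and $b_n$ even blocks $H_{2,l}$ of length $a_n$ each, plus a remainder set $\Upsilon$ with $|\Upsilon| \le 2a_n$. So that the blocks tile $\{1,\dots,n\}$ exactly, we take $b_n = \lfloor n/(2a_n)\rfloor$, which matches $b_n = [n/(M\log n)]$ up to rounding.

```python
import math

def make_blocks(n, M=2.0):
    """Bernstein-type blocking for a beta-mixing series of length n:
    a_n = ceil(M log n / 2), b_n = floor(n / (2 a_n)) ~ n / (M log n)."""
    an = max(1, math.ceil(M * math.log(n) / 2.0))
    bn = max(1, n // (2 * an))
    H1 = [list(range(2 * l * an, 2 * l * an + an)) for l in range(bn)]        # odd blocks
    H2 = [list(range(2 * l * an + an, 2 * (l + 1) * an)) for l in range(bn)]  # even blocks
    Upsilon = list(range(2 * an * bn, n))  # remainder, |Upsilon| <= 2 a_n
    return an, bn, H1, H2, Upsilon

an, bn, H1, H2, Ups = make_blocks(1000)
assert all(len(b) == an for b in H1 + H2)
assert len(Ups) <= 2 * an
# every index 0..n-1 appears exactly once across the blocks and the remainder
assert sorted(i for b in H1 + H2 for i in b) + Ups == list(range(1000))
```

Within each odd (resp. even) collection, the blocks are separated by $a_n$ observations, which is what lets the coupling step replace them by independent copies at a cost of $(b_n-1)\beta(a_n)$.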
Together, by (A.2),
\[
\max_{j\le k_n}\sup_{\alpha\in\mathcal{C}_n}\Big|\frac{1}{n}\sum_t \Psi_j(X_t)(\epsilon(S_t,\alpha)-\epsilon(S_t,\alpha_0))\Big| = O_P(\delta_n^\eta d_n).
\]

B Proof of Theorem 3.1

B.1 Consistency

Lemma B.1 (Consistency). Suppose $\frac{k_n}{n} + Q(\pi_n\alpha_0) = O(\lambda)$. Also suppose $\mathrm{Pen}(h)$ is lower semicompact on $(\mathcal{H}_n, \|.\|_{\infty,\omega})$ and $Q(\alpha)$ is lower semicontinuous. Then $\|\hat\alpha - \alpha_0\|_{\infty,\omega} = o_P(1)$.

Proof. The proof of this lemma does not depend on Assumption 3.2. First we show $\mathrm{Pen}(\hat h) = O_P(1)$. Let $\rho_n(\alpha)$, $m_n(\alpha)$ be the $n\times 1$ vectors of $\rho(Y_{t+1},\alpha)$ and $m(X_t,\alpha)$. Let $\hat\Sigma_n^{-1}$ be the diagonal matrix of $\hat\Sigma(X_t)^{-1}$ for all $t$. By Steps 1 and 3 of the proof of Theorem 3.1 below,
\begin{align*}
\lambda\,\mathrm{Pen}(\hat h) &\le Q_n(\pi_n\alpha_0) + \lambda\,\mathrm{Pen}(\pi_nh_0) + o_P(n^{-1}) \\
&\le \frac{2}{n}\sum_t [\hat m(X_t,\pi_n\alpha_0) - \bar m(X_t,\pi_n\alpha_0)]^2\hat\Sigma(X_t)^{-1} + C\,\mathbb{E}\bar m(X_t,\pi_n\alpha_0)^2 + \lambda\,\mathrm{Pen}(\pi_nh_0) + o_P(n^{-1}) \\
&\le \frac{2}{n}[\rho_n(\pi_n\alpha_0)-m_n(\pi_n\alpha_0)]'P_n\hat\Sigma_n^{-1}P_n[\rho_n(\pi_n\alpha_0)-m_n(\pi_n\alpha_0)] + C\,\mathbb{E}m(X_t,\pi_n\alpha_0)^2 + \lambda\,\mathrm{Pen}(\pi_nh_0) + o_P(n^{-1}) \\
&\le O_P\Big(\frac{k_n}{n} + Q(\pi_n\alpha_0) + \lambda\Big) = O_P(\lambda),
\end{align*}
with the condition that $\frac{k_n}{n} + Q(\pi_n\alpha_0) = O(\lambda)$. So let $M_0 > 0$ be a large constant so that $\mathrm{Pen}(\hat h) \le M_0$ with probability arbitrarily close to one. Now take an arbitrary $\epsilon > 0$ and let $B_\epsilon = \{\alpha = (\theta,h)\in\mathcal{A}_n : \|\alpha-\alpha_0\|_{\infty,\omega}\ge\epsilon,\ \mathrm{Pen}(h)\le M_0\}$. Because $\mathrm{Pen}(h)$ is lower semicompact on $(\mathcal{H}_0, \|.\|_{\infty,\omega})$ and $Q(\alpha)$ is lower semicontinuous, $\min_{\alpha\in B_\epsilon} Q(\alpha)$ exists; that is, there is $\alpha^*\in B_\epsilon$ so that $\inf_{\alpha\in B_\epsilon} Q(\alpha) = Q(\alpha^*) > c_0$. If $\|\hat\alpha-\alpha_0\|_{\infty,\omega} > \epsilon$, then $Q(\hat\alpha) \ge \inf_{\alpha\in B_\epsilon} Q(\alpha) > c_0$. Meanwhile, by (B.1) (to be proved below),
\[
c_0 \le Q(\hat\alpha) \le Q(\pi_n\alpha_0) + \lambda_n|\mathrm{Pen}(\pi_nh_0) - \mathrm{Pen}(\hat h)| + O_P(k_nd_n^2 + \phi_n^2).
\]
But the right hand side is $o_P(1)$. Hence we must have $\|\hat\alpha-\alpha_0\|_{\infty,\omega} = o_P(1)$.

B.2 Proof of Theorem 3.1

The proof depends on some important technical lemmas, one of which is the stochastic equicontinuity of $\epsilon(S_t,\alpha) = \rho(Y_{t+1},\alpha) - m(X_t,\alpha)$, given by Proposition A.1.
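Before the formal proof, the two-step construction being analyzed — a series least-squares estimate of $m(X,\alpha) = \mathbb{E}[\rho(Y_{t+1},\alpha)\mid X_t]$ plugged into the criterion $Q_n(\alpha)$ — can be sketched numerically. Everything below (the data-generating design, the polynomial basis, and $\hat\Sigma\equiv 1$) is a hypothetical illustration, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n, kn = 500, 6

# Hypothetical design: scalar X_t, and a residual rho(Y_{t+1}, alpha) whose
# conditional mean given X_t is sin(pi X_t) - alpha (purely illustrative)
X = rng.uniform(-1.0, 1.0, n)
eps = rng.normal(0.0, 0.1, n)

def rho(alpha):
    return np.sin(np.pi * X) - alpha + eps

# Linear sieve: polynomial bases Psi_1, ..., Psi_{k_n}; P_n projects onto them
Psi = np.vander(X, kn, increasing=True)            # n x k_n matrix Psi_n
Pn = Psi @ np.linalg.solve(Psi.T @ Psi, Psi.T)     # P_n = Psi (Psi'Psi)^{-1} Psi'

def Qn(alpha):
    # hat m(X_t, alpha): least-squares fit of rho onto the sieve; with
    # Sigma_hat(X_t) = 1, Q_n(alpha) = (1/n) sum_t hat m(X_t, alpha)^2
    m_hat = Pn @ rho(alpha)
    return float(np.mean(m_hat ** 2))
```

Minimizing $Q_n$ over the sieve class of $\alpha$ (here a scalar shift, for simplicity) gives the estimator whose consistency and rate the theorem establishes: $Q_n$ is smaller near the value solving the conditional moment restriction.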
Proof. We divide the proof into the following steps. Let $\mathcal{D}_n$ be the sieve space used to estimate $m(X,\alpha)$, and
\[
\bar m(\cdot,\alpha) = \arg\min_{g\in\mathcal{D}_n}\sum_{t=1}^n \big(m(X_t,\alpha) - g(X_t)\big)^2.
\]
We show the following steps.

Step 1. Show that for $c, C > 0$, uniformly in $\alpha\in\mathcal{A}_n$,
\[
c\,\mathbb{E}\bar m(X_t,\alpha)^2 \le \frac{1}{n}\sum_{t=1}^n \bar m(X_t,\alpha)^2 \le C\,\mathbb{E}\bar m(X_t,\alpha)^2.
\]
To prove it, we shall apply an empirical identifiability result, which was first proved by Huang (1998) for the i.i.d. case and then extended by Chen and Christensen (2015) to a more general setting with a much simpler proof. We note that $\bar m(\cdot,\alpha)\in\mathcal{D}_n := \{g(x) = \sum_{j=1}^{k_n}\pi_j\Psi_j(x) : \|g\|_{\infty,\omega} < \infty\}$.
Let $\Psi_n$ be the $n\times k_n$ matrix of the linear sieve bases, and let $A := \frac{1}{n}\mathbb{E}\Psi_n'\Psi_n$. Suppose the linear sieve satisfies $\lambda_{\min}(A) > c$ and $\|\frac{1}{n}\Psi_n'\Psi_n - A\| = o_P(1)$. Then $\|A^{-1/2}\frac{1}{n}\Psi_n'\Psi_nA^{-1/2} - I\| = o_P(1)$, so the conditions of Lemma 4.1 of Chen and Christensen (2015) are satisfied. We then apply this lemma to reach
\[
\sup_{\alpha\in\mathcal{A}_n}\frac{|\frac{1}{n}\sum_t \bar m(X_t,\alpha)^2 - \mathbb{E}\bar m(X_t,\alpha)^2|}{\mathbb{E}\bar m(X_t,\alpha)^2} \le \sup_{g\in\mathcal{D}_n}\frac{|\frac{1}{n}\sum_t g(X_t)^2 - \mathbb{E}g(X_t)^2|}{\mathbb{E}g(X_t)^2} = o_P(1).
\]
This then leads to the desired result.

Step 2. Show that
\[
\sup_{\alpha\in\mathcal{A}_n}\frac{1}{n}\sum_{t=1}^n [\hat m(X_t,\alpha) - \bar m(X_t,\alpha)]^2 = O_P(k_nd_n^2), \quad d_n^2 := \frac{p(\mathcal{H}_n)\log^2 n}{n}.
\]
Let $\epsilon(S_t,\alpha) = \rho(Y_{t+1},\alpha) - m(X_t,\alpha)$.
Also let $P_n = \Psi_n(\Psi_n'\Psi_n)^{-1}\Psi_n'$ and let $\bar\epsilon_n(\alpha)$ be the $n\times 1$ vector of $\epsilon(S_t,\alpha)$. We then have
\[
\sup_{\alpha\in\mathcal{A}_n}\frac{1}{n}\sum_{t=1}^n [\hat m(X_t,\alpha)-\bar m(X_t,\alpha)]^2 = \sup_{\alpha\in\mathcal{A}_n}\frac{1}{n}\bar\epsilon_n(\alpha)'P_n\bar\epsilon_n(\alpha) = O_P(1)\sup_\alpha \Big\|\frac{1}{n}\Psi_n'\bar\epsilon_n(\alpha)\Big\|^2 \le O_P(k_n)\sup_\alpha\max_{j\le k_n}\Big|\frac{1}{n}\sum_{t=1}^n\Psi_j(X_t)\epsilon(S_t,\alpha)\Big|^2 = O_P(k_nd_n^2).
\]
The last bound is given by Lemma B.2.

Step 3. Show that $\sup_{\alpha\in\mathcal{A}_n}\mathbb{E}[\bar m(X_t,\alpha) - m(X_t,\alpha)]^2 = O(\phi_n^2)$. Let $\bar m_n(\alpha)$ and $m_n(\alpha)$ respectively be the $n\times 1$ vectors of $\bar m(X_t,\alpha)$ and $m(X_t,\alpha)$. Also write $m_n(\alpha) = \Psi_nb_\alpha + r_\alpha$, where $r_\alpha$ is the sieve approximation error and $b_\alpha$ is the sieve coefficient approximating $m(X,\alpha)$. Then $\bar m_n(\alpha) = P_nm_n(\alpha)$ and
\[
\sup_{\alpha\in\mathcal{A}_n}\mathbb{E}[\bar m(X_t,\alpha)-m(X_t,\alpha)]^2 = \frac{1}{n}\sup_{\alpha\in\mathcal{A}_n}\mathbb{E}m_n(\alpha)'(I-P_n)m_n(\alpha) = \frac{1}{n}\sup_{\alpha\in\mathcal{A}_n}\mathbb{E}r_\alpha'(I-P_n)r_\alpha \le \frac{1}{n}\sup_\alpha\mathbb{E}\|r_\alpha\|^2 = O(\phi_n^2).
\]
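Step 3's key inequality — that $r_\alpha'(I-P_n)r_\alpha \le \|r_\alpha\|^2$, because $I-P_n$ is itself an orthogonal projection with eigenvalues in $\{0,1\}$ — can be sanity-checked on a small hypothetical design (a generic full-rank basis matrix and an arbitrary error vector, neither taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, kn = 200, 5
Psi = rng.normal(size=(n, kn))                  # generic full-rank basis matrix Psi_n
Pn = Psi @ np.linalg.solve(Psi.T @ Psi, Psi.T)  # projection onto the column space
r = rng.normal(size=n)                          # stand-in for the sieve error r_alpha

quad = float(r @ (np.eye(n) - Pn) @ r)          # r'(I - P_n) r
assert -1e-8 <= quad <= float(r @ r) + 1e-8     # 0 <= r'(I - P_n) r <= ||r||^2
```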
After achieving the above three steps, we have (since $\hat\Sigma(X_t)^{-1}$ and $\Sigma(X_t)^{-1}$ are bounded away from zero)
\begin{align*}
Q_n(\hat\alpha) &\ge \frac{c}{n}\sum_t \hat m(X_t,\hat\alpha)^2 \ge \frac{0.5c}{n}\sum_t \bar m(X_t,\hat\alpha)^2 - \frac{c}{n}\sum_t [\hat m(X_t,\hat\alpha)-\bar m(X_t,\hat\alpha)]^2 \\
&\ge_{(i)} c\,\mathbb{E}\bar m(X_t,\hat\alpha)^2 - O_P(k_nd_n^2) \ge_{(ii)} c\,\mathbb{E}m(X_t,\hat\alpha)^2 - O_P(k_nd_n^2+\phi_n^2) \ge Q(\hat\alpha) - O_P(k_nd_n^2 + \phi_n^2), \\
Q_n(\pi_n\alpha_0) &\le \frac{C}{n}\sum_t \hat m(X_t,\pi_n\alpha_0)^2 \le \frac{2C}{n}\sum_t \bar m(X_t,\pi_n\alpha_0)^2 + \frac{2C}{n}\sum_t[\hat m(X_t,\pi_n\alpha_0)-\bar m(X_t,\pi_n\alpha_0)]^2 \\
&\le_{(iii)} C\,\mathbb{E}\bar m(X_t,\pi_n\alpha_0)^2 + O_P(k_nd_n^2) \le_{(iv)} C\,\mathbb{E}m(X_t,\pi_n\alpha_0)^2 + O_P(k_nd_n^2+\phi_n^2) \le Q(\pi_n\alpha_0) + O_P(k_nd_n^2+\phi_n^2),
\end{align*}
where (i) and (iii) follow from Steps 1-2, and (ii) and (iv) follow from Step 3. Hence $Q_n(\hat\alpha) + \lambda_n\mathrm{Pen}(\hat h) \le Q_n(\pi_n\alpha_0) + \lambda_n\mathrm{Pen}(\pi_nh_0) + o_P(n^{-1})$ implies
\[
Q(\hat\alpha) \le Q(\pi_n\alpha_0) + \lambda_n|\mathrm{Pen}(\pi_nh_0)-\mathrm{Pen}(\hat h)| + O_P(k_nd_n^2+\phi_n^2). \tag{B.1}
\]
Now by Assumption 3.2, $\|\hat\alpha-\alpha_0\|^2 \le C\|\pi_n\alpha_0-\alpha_0\|^2 + O_P(\lambda_n + k_nd_n^2 + \phi_n^2)$. Hence
\[
\|\hat\alpha-\pi_n\alpha_0\| \le \|\hat\alpha-\alpha_0\| + \|\pi_n\alpha_0-\alpha_0\| \le C\|\pi_n\alpha_0-\alpha_0\| + O_P(\sqrt{\lambda_n} + \sqrt{k_n}d_n + \phi_n).
\]
Thus
\[
\|\hat\alpha-\alpha_0\|_{\infty,\omega} \le \|\hat\alpha-\pi_n\alpha_0\|_{\infty,\omega} + \|\pi_n\alpha_0-\alpha_0\|_{\infty,\omega} \le O_P\big(\|\pi_n\alpha_0-\alpha_0\|_{\infty,\omega} + \omega_n(\|\pi_n\alpha_0-\alpha_0\| + \sqrt{\lambda_n} + \sqrt{k_n}d_n + \phi_n)\big).
\]

Lemma B.2. Suppose:
(a) $\mathbb{E}\max_{j\le k_n}\Psi_j(X_t)^2\sup_{\alpha\in\mathcal{A}_n}\rho(Y_{t+1},\alpha)^2 \le C^2$;
(b) there are $\kappa > 0$ and $C > 0$ so that $\mathbb{E}\Psi_j(X_t)^2\sup_{\|\alpha_1-\alpha_2\|_{\infty,\omega}<\delta}|\epsilon(S_t,\alpha_1)-\epsilon(S_t,\alpha_2)|^2 \le C\delta^{2\kappa}$ holds for any $\delta > 0$;
(c) $p(\mathcal{H}_n)\to\infty$ and $p(\mathcal{H}_n)\log n = o(n)$.
Then $\sup_\alpha\max_{j\le k_n}|\frac{1}{n}\sum_{t=1}^n\Psi_j(X_t)\epsilon(S_t,\alpha)| = O_P\Big(\sqrt{\frac{p(\mathcal{H}_n)\log^2 n}{n}}\Big)$.

Proof. Let $\mathcal{E} := \{\epsilon(\cdot,\alpha)\Psi_j : \alpha\in\mathcal{A}_n,\ j\le k_n\}$. We divide the proof into several steps.

Step 1: construct blocks.
This step is the same as that of the proof of Proposition A.1.

Step 2: the envelope function for $U_{1,f}$. Note that $\mathbb{E}f = 0$ for $f\in\mathcal{E}$ and that $\tilde S_t$ and $S_t$ are identically distributed within each block $H_{1,l}$. By Cauchy-Schwarz,
\[
\mathbb{E}\sup_{f\in\mathcal{E}} U_{1,f}(\tilde S_l)^2 \le \mathbb{E}\sup_{f\in\mathcal{E}}\Big(\frac{1}{a_n}\sum_{t\in H_{1,l}} f(\tilde S_t)\Big)^2 \le \frac{1}{a_n}\sum_{t\in H_{1,l}}\mathbb{E}\sup_{f\in\mathcal{E}}f(S_t)^2 \le 2\,\mathbb{E}\max_{j\le k_n}\Psi_j(X_t)^2\sup_{\alpha\in\mathcal{A}_n}\rho(Y_{t+1},\alpha)^2 \le C^2.
\]
Let $\mathcal{F} = \{U_{1,f} : f\in\mathcal{E}\}$ and let $F := \max\{n^{-10}, \sup_{f\in\mathcal{E}}|U_{1,f}|\}$. Then both $\sup_{f\in\mathcal{E}}|U_{1,f}|$ and $F$ are envelope functions of $\mathcal{F}$, and $n^{-10} \le G := \|F\|_{L_2(S_t)} \le C$.

Step 3: the bracketing number. We aim to apply Theorem 2.14.2 of van der Vaart and Wellner (1996) to bound $\frac{1}{b_n}\sum_{l\le b_n}a_nb_nn^{-1}U_{1,f}(\tilde S_l)$, which requires bounding the bracketing number of $\mathcal{F}$. To do so, suppose $h_1,\dots,h_N$ is a $\delta$-cover of $\mathcal{H}_n$ under the norm $\|h\|_{\infty,\omega}$ and $N := N(\delta,\mathcal{H}_n,\|.\|_{\infty,\omega})$; $\theta_1,\dots,\theta_R$ is a $\delta$-cover of $\Theta$ and $R := N(\delta,\Theta,\|.\|)$ (the Euclidean norm in $\Theta$). Here $N(\delta,A,\|.\|)$ denotes the covering number for space $A$.
Then for any $f = \epsilon(\cdot,\alpha)\Psi_j\in\mathcal{E}$, there are $\Psi_j$ and $\alpha_{ik} = (\theta_k,h_i)$ so that $\|\alpha-\alpha_{ik}\|_{\infty,\omega}\le\|h-h_i\|_{\infty,\omega}+\|\theta-\theta_k\|\le 2\delta$. Let $f_{ijk} = \epsilon(\cdot,\alpha_{ik})\Psi_j$. We have
\[
\sup_{f=\epsilon(\cdot,\alpha)\Psi_j:\,\|\alpha-\alpha_{ik}\|_{\infty,\omega}<2\delta}|U_{1,f}(\tilde S_l)-U_{1,f_{ijk}}(\tilde S_l)| \le \sup_{f=\epsilon(\cdot,\alpha)\Psi_j:\,\|\alpha-\alpha_{ik}\|_{\infty,\omega}<2\delta}\Big|\frac{1}{a_n}\sum_{t\in H_{1,l}}f(\tilde S_t)-f_{ijk}(\tilde S_t)\Big| \le \frac{1}{a_n}\sum_{t\in H_{1,l}}|\Psi_j(\tilde X_t)|\sup_{\|\alpha-\alpha_{ik}\|_{\infty,\omega}<2\delta}|\epsilon(\tilde S_t,\alpha)-\epsilon(\tilde S_t,\alpha_{ik})| := b_{ijk}(\tilde S_l,\delta).
\]
Then $U_{1,f}\in[l_{ijk},u_{ijk}]$, where $l_{ijk} := U_{1,f_{ijk}} - b_{ijk}(\cdot,\delta)$ and $u_{ijk} = U_{1,f_{ijk}} + b_{ijk}(\cdot,\delta)$. In addition,
\[
\mathbb{E}[u_{ijk}-l_{ijk}]^2 \le 4\,\mathbb{E}b_{ijk}(\tilde S_l,\delta)^2 \le C\,\mathbb{E}\Big(\frac{1}{a_n}\sum_{t\in H_{1,l}}|\Psi_j(\tilde X_t)|\sup_{\|\alpha-\alpha_{ik}\|_{\infty,\omega}<2\delta}|\epsilon(\tilde S_t,\alpha)-\epsilon(\tilde S_t,\alpha_{ik})|\Big)^2 \le C\,\mathbb{E}\Psi_j(\tilde X_t)^2\sup_{\|\alpha-\alpha_{ik}\|_{\infty,\omega}<2\delta}|\epsilon(\tilde S_t,\alpha)-\epsilon(\tilde S_t,\alpha_{ik})|^2 \le C\delta^{2\kappa}.
\]
Hence $\{[l_{ijk},u_{ijk}] : i\le N,\ j\le k_n,\ k\le R\}$ is a $C\delta^\kappa$ bracket of $\mathcal{F}$, whose bracketing number satisfies
\[
N_{[\,]}(C\delta^\kappa,\mathcal{F},\|.\|_{L_2(\tilde S_t)}) \le \underbrace{N(\delta,\mathcal{H}_n,\|.\|_{\infty,\omega})}_{N}\ \underbrace{(C/\delta)^d}_{R}\ k_n,
\]
where we used $R\le(C/\delta)^d$ for $d=\dim(\theta_0)$ since $\theta_0\in\Theta$ is compact.
Then for a generic constant $C>0$,
\[
N_{[\,]}(Gx, \mathcal{F}, \|\cdot\|_{L_2(\tilde S_t)}) \le C\,N(x^{1/\kappa}(G/C)^{1/\kappa}, \mathcal{H}_n, \|\cdot\|_{\infty,\omega})\,G^{-d/\kappa}x^{-d/\kappa}k_n, \qquad \forall x>0.
\]

Step 4: bound independent blocks. Note that the $U_{1,f}(\tilde S_l)$ are independent across $l$ and mean-zero. For the envelope $G$ defined in step 2 and some constant $\bar M > 0$,
\begin{align*}
E\sup_{f\in\mathcal{E}}\Big|\frac{1}{b_n}\sum_{l\le b_n} a_n b_n n^{-1} U_{1,f}(\tilde S_l)\Big|
&\le \frac{1}{2}E\sup_{g\in\mathcal{F}}\Big|\frac{1}{b_n}\sum_{l\le b_n} g(\tilde S_l)\Big| \\
&\le_{(i)} b_n^{-1/2} G \int_0^1 \sqrt{1 + \log N_{[\,]}(Gx, \mathcal{F}, \|\cdot\|_{L_2(\tilde S_t)})}\,dx \\
&\le \frac{C}{\sqrt{b_n}}\int_0^1 \sqrt{1 + \log N(x^{1/\kappa}(G/C)^{1/\kappa}, \mathcal{H}_n, \|\cdot\|_\infty) + \log\frac{C}{G^{d/\kappa}x^{d/\kappa}} + \log k_n}\,dx \\
&\le_{(ii)} \frac{C}{\sqrt{b_n}}\int_0^1 \sqrt{1 + p(\mathcal{H}_n)\log\frac{Cn}{x^{1/\kappa}G^{1/\kappa}} + d\log\frac{C}{G^{1/\kappa}x^{1/\kappa}} + \log k_n}\,dx \\
&\le_{(iii)} \frac{C}{\sqrt{b_n}}\int_0^1 \sqrt{2\log k_n + 2p(\mathcal{H}_n)\log\frac{Cn}{x^{1/\kappa}G^{1/\kappa}}}\,dx
\ \le_{(iv)}\ \sqrt{\frac{Cp(\mathcal{H}_n)\log n}{b_n}},
\end{align*}
where (i) follows from Theorem 2.14.2 of van der Vaart and Wellner (1996); (ii) follows from Assumption 3.1; (iii) is due to $p(\mathcal{H}_n)\to\infty$; and (iv) follows from the same proof as that of Proposition A.1.

Step 5: completion. By an inequality similar to (A.1) and step 4,
\[
P\Big(\sup_{f\in\mathcal{E}}\Big|\frac{1}{b_n}\sum_{l\le b_n} a_n b_n n^{-1} U_{1,f}(S_l)\Big| > c_n\varepsilon\Big)
\le P\Big(\sup_{f\in\mathcal{E}}\Big|\frac{1}{b_n}\sum_{l\le b_n} a_n b_n n^{-1} U_{1,f}(\tilde S_l)\Big| > c_n\varepsilon\Big) + (b_n-1)\beta(a_n).
\]
We now take $a_n = M\log n/2$ with $M > 0$ and $b_n = [n/(M\log n)]$.
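The blocking scheme behind steps 4 and 5 can be made concrete with a short sketch (an illustration under our own conventions, not the paper's code): partition $\{1,\dots,n\}$ into $b_n$ pairs of consecutive blocks $(H_{1,l}, H_{2,l})$ of length $a_n \approx M\log n/2$ each, plus a remainder set $\Upsilon$; the constant $M$ below is an arbitrary illustrative choice.

```python
import math

def make_blocks(n, M=4.0):
    """Split indices 0..n-1 into b_n pairs of consecutive blocks (H1_l, H2_l)
    of length a_n each, plus a remainder set Upsilon of leftover indices."""
    a_n = max(1, int(M * math.log(n) / 2))
    b_n = n // (2 * a_n)                 # number of (odd, even) block pairs
    H1, H2 = [], []
    for l in range(b_n):
        start = 2 * l * a_n
        H1.append(list(range(start, start + a_n)))            # odd blocks
        H2.append(list(range(start + a_n, start + 2 * a_n)))  # even blocks
    upsilon = list(range(2 * b_n * a_n, n))                   # remainder
    return a_n, H1, H2, upsilon

a_n, H1, H2, upsilon = make_blocks(1000)
# blocks plus remainder partition the index set, and |Upsilon| <= 2 * a_n
used = sorted(i for blk in H1 + H2 for i in blk) + upsilon
print(a_n, len(H1), len(upsilon), used == list(range(1000)))
```

The beta-mixing cost of replacing the dependent odd blocks by independent copies $\tilde S_l$ is the $(b_n-1)\beta(a_n)$ term in the display above, which vanishes for exponentially decaying $\beta(\cdot)$ once $a_n$ grows like $\log n$.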
Then $(b_n - 1)\beta(a_n) \to 0$ for sufficiently large $M$. Also, the requirement in step 4 that $p(\mathcal{H}_n) = o(b_n)$ holds as long as $p(\mathcal{H}_n)\log n = o(n)$. Hence with this choice of $b_n$,
\[
\sup_{f\in\mathcal{E}}\Big|\frac{1}{b_n}\sum_{l\le b_n} a_n b_n n^{-1} U_{1,f}(S_l)\Big| = O_P\Big(\sqrt{\frac{p(\mathcal{H}_n)\log^2 n}{n}}\Big).
\]
The same rate applies when $U_{1,f}$ is replaced with $U_{2,f}$, following the same proof as steps 2, 3 and 4. In addition, $|\Upsilon|_0 \le 2a_n$. Hence
\[
E\sup_{f\in\mathcal{E}}\Big|\frac{1}{n}\sum_{t\in\Upsilon} f(S_t) - Ef(S_t)\Big|
\le 2E\,\frac{1}{n}\sum_{t\in\Upsilon}\sup_{f\in\mathcal{E}}|f(S_t)|
\le \frac{Ca_n}{n} E\max_{j\le k_n}|\Psi_j(X_t)|\sup_\alpha|\epsilon(S_t,\alpha)|
\le \frac{C\log n}{n}.
\]
Together, $\max_{j\le k_n}\sup_{\alpha\in\mathcal{A}_n} |\frac{1}{n}\sum_t \Psi_j(X_t)\epsilon(S_t,\alpha)| = O_P\big(\sqrt{p(\mathcal{H}_n)\log^2 n/n}\big)$. $\square$

C Proofs for Section 4

C.1 Local quadratic approximation

Proposition C.1 (LQA). Let $\mathcal{C}_n = \{\alpha + xu_n : |x| < Cn^{-1/2},\ \alpha\in\mathcal{A}_n,\ \|\alpha-\alpha_0\|_{\infty,\omega} \le C\delta_n,\ Q(\alpha) \le C\bar\delta_n^2\}$. Suppose for $u_n = v_n^*/\|v_n^*\|$, there are $C > 0$, so that

(a) $\sqrt{n}\,\bar\delta_n\|\hat\Sigma_n - \Sigma_n\| = o(1)$ and $\phi_n^2\bar\delta_n^2 + k_n d_n^2\delta_n^{2\eta} + \sqrt{k_n}d_n\delta_n^\eta\bar\delta_n = o(n^{-1})$;

(b) $\frac{1}{\sqrt n}\|(I - P_n)\Sigma_n^{-1}\frac{dm_n(\alpha)}{d\alpha}[u_n]\| + \frac{1}{\sqrt n}\|(I - P_n)\frac{dm_n(\alpha)}{d\alpha}[u_n]\| = O_P(\phi_n)$;

(c) $k_n\sup_{\alpha\in\mathcal{C}_n}\frac{1}{n}\sum_t\big[\frac{dm(X_t,\alpha)}{d\alpha}[u_n] - \frac{dm(X_t,\alpha_0)}{d\alpha}[u_n]\big]^2 = o_P(1)$;

(d) the conditions of Proposition A.1 hold;

(e) $\sup_{\tau\in(0,1)}\sup_{\alpha\in\mathcal{C}_n} E\big[\frac{d^2m(X_t,\alpha_0+\tau(\alpha-\alpha_0))}{d\tau^2}\big]^2 = o(n^{-1})$; and

(f) $E\sup_{\alpha\in\mathcal{C}_n}\sup_{|\tau|\le Cn^{-1/2}}\frac{1}{n}\sum_t\big[\frac{d^2}{d\tau^2}m(X_t,\alpha+\tau u_n)\big]^2 = O(1)$.

Then
\[
\sup_{\alpha\in\mathcal{A}_{osn}}\sup_{|x|\le Cn^{-1/2}} |Q_n(\alpha + xu_n) - Q_n(\alpha) - A_n(\alpha(x))| = o_P(n^{-1}),
\]
where

(a1) $A_n(\alpha(x)) := 2x[n^{-1/2}Z_n + \langle u_n, \alpha - \alpha_0\rangle] + B_n x^2$;

(a2) $B_n = \frac{1}{n}\frac{dm_n(\alpha_0)}{d\alpha}[u_n]'\Sigma_n^{-1}\frac{dm_n(\alpha_0)}{d\alpha}[u_n] \to_P 1$; and

(a3) $Z_n \to_d N(0,1)$.
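The content of the LQA can be sanity-checked numerically on a toy smooth criterion (our own illustrative example with a scalar parameter, not the paper's $Q_n$): for small $x$, the increment $Q(\alpha_0 + x) - Q(\alpha_0)$ should agree with its local quadratic approximation $2xG + (B + D)x^2$ up to an $O(x^3)$ remainder, where $B$ is the squared-gradient curvature kept in $B_x$ and $D$ the cross term absorbed into $D_x$.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, size=500)

# toy residual m(x_t, a) and criterion Q(a) = (1/n) sum_t m(x_t, a)^2
def m(a):   return np.exp(a * xs) - 1.0
def dm(a):  return xs * np.exp(a * xs)
def d2m(a): return xs**2 * np.exp(a * xs)

def Q(a):   return np.mean(m(a) ** 2)

a0 = 0.3
G = np.mean(m(a0) * dm(a0))      # gradient term: Q'(a0) = 2 * G
B = np.mean(dm(a0) ** 2)         # curvature term kept in B_x
D = np.mean(m(a0) * d2m(a0))     # second-derivative cross term (the D_x part)

errs = []
for x in (1e-1, 1e-2, 1e-3):
    approx = 2 * x * G + (B + D) * x ** 2   # local quadratic approximation
    errs.append(abs(Q(a0 + x) - Q(a0) - approx))
# the approximation error shrinks like x^3, i.e. faster than the x^2 terms kept
print(errs)
```

In the proposition, $x$ is confined to an $n^{-1/2}$-neighborhood, so an $O(x^3)$ remainder is $o_P(n^{-1})$, which is exactly the error budget of the statement.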
Proof. Let $\widetilde Q_n(\alpha) = \frac{1}{n}\sum_t \ell(X_t,\alpha)^2\hat\Sigma(X_t)^{-1}$, with $\ell(x,\alpha) := \hat m(x,\alpha) + \hat m(x,\alpha_0)$ and $\hat m(x,\alpha) := \Psi(x)'(\Psi_n'\Psi_n)^{-1}\Psi_n' m_n(\alpha)$.

Step 1: expansions. By assumption, $\widetilde Q_n(\alpha)$ is differentiable. So we shall prove the LQA for $\widetilde Q_n(\alpha)$ via the mean value theorem, and show that $\widetilde Q_n(\alpha) - Q_n(\alpha)$ is "small" locally. Indeed, Lemma C.1 shows that $\sup_{\alpha\in\mathcal{C}_n}|Q_n(\alpha) - \widetilde Q_n(\alpha)| = o_P(n^{-1})$. We write $f(s) := \widetilde Q_n(\alpha + sxu_n)$; by the second-order mean value theorem, for some $s\in(0,1)$,
\[
\widetilde Q_n(\alpha + xu_n) - \widetilde Q_n(\alpha) = f'(0) + \frac{1}{2}f''(s) = 2xG(\alpha) + x^2 B_x + x^2 D_x,
\]
where
\begin{align*}
G(\alpha) &:= \frac{1}{n}\sum_t \ell(X_t,\alpha)\hat\Sigma(X_t)^{-1}\frac{d\hat m(X_t,\alpha)}{d\alpha}[u_n], \\
B_x &:= \frac{1}{n}\sum_t\Big(\frac{d\hat m(\alpha + sxu_n)}{d\alpha}[u_n]\Big)^2\hat\Sigma(X_t)^{-1}, \\
D_x &:= \frac{1}{n}\sum_t \ell(\alpha + sxu_n)\hat\Sigma(X_t)^{-1}\frac{d^2}{d\tau^2}\hat m(\alpha + \tau xu_n)\Big|_{\tau=s}.
\end{align*}
Lemma C.2 shows that uniformly $D_x = o_P(1)$. Hence $\sup_{|x|\le Cn^{-1/2}} x^2|D_x| = o_P(n^{-1})$.

Step 2: convergence of $B_x$. Let $\frac{dm_n(\alpha)}{d\alpha}[u_n]$ and $\rho_n$ be the $n\times 1$ vectors of $\frac{dm(X_t,\alpha)}{d\alpha}[u_n]$ and $\rho(Y_{t+1},\alpha_0)$. Also let $\|v\|_\Sigma^2 := v'\Sigma_n^{-1}v$. Write $B_n := \frac{1}{n}\|\frac{dm_n(\alpha_0)}{d\alpha}[u_n]\|_\Sigma^2 = O_P(1)$. Uniformly for $\alpha(x)$ and $s$,
\begin{align*}
B_x - B_n &\le \frac{1}{n}\Big\|\frac{d\hat m_n(\alpha + sxu_n)}{d\alpha}[u_n]\Big\|_{\hat\Sigma_n}^2 - \frac{1}{n}\Big\|\frac{d\hat m_n(\alpha_0)}{d\alpha}[u_n]\Big\|_{\hat\Sigma_n}^2 \\
&\quad + \frac{1}{n}\Big\|\frac{d\hat m_n(\alpha_0)}{d\alpha}[u_n]\Big\|_{\hat\Sigma_n}^2 - \frac{1}{n}\Big\|\frac{dm_n(\alpha_0)}{d\alpha}[u_n]\Big\|_{\hat\Sigma_n}^2 \\
&\quad + \frac{1}{n}\Big\|\frac{dm_n(\alpha_0)}{d\alpha}[u_n]\Big\|_{\hat\Sigma_n}^2 - \frac{1}{n}\Big\|\frac{dm_n(\alpha_0)}{d\alpha}[u_n]\Big\|_{\Sigma_n}^2 = o_P(1).
\end{align*}
Hence $\sup_{|x|\le Cn^{-1/2}}|B_x - B_n|x^2 = o_P(n^{-1})$. To show that $B_n \to_P 1$, we have
\[
B_n = \langle u_n, u_n\rangle + \Big(\frac{1}{n}\frac{dm_n(\alpha_0)}{d\alpha}[u_n]'\Sigma_n^{-1}\frac{dm_n(\alpha_0)}{d\alpha}[u_n] - \langle u_n, u_n\rangle\Big) = \langle u_n, u_n\rangle + o_P(1).
\]
Let $Z_t = \rho(Y_{t+1},\alpha_0)\Sigma(X_t)^{-1}\frac{dm(X_t,\alpha_0)}{d\alpha}[v]$. Then for each $v$,
\[
\|v\|^2 = \mathrm{Var}\Big(\frac{1}{\sqrt n}\sum_t Z_t\Big) = \mathrm{Var}(Z_t) + \frac{2}{n}\sum_{s>t} E Z_t E(Z_s|\sigma_s(X)) = \mathrm{Var}(Z_t) = \langle v, v\rangle.
\]
Hence $\langle u_n, u_n\rangle = \langle v_n^*, v_n^*\rangle\|v_n^*\|^{-2} = 1$, and thus $B_n = 1 + o_P(1)$.

Step 3: expansion of $G(\alpha)$. We have $\sup_{\alpha\in\mathcal{C}_n}\frac{1}{\sqrt n}\|m_n(\alpha)'P_n\| + \sup_{\alpha\in\mathcal{C}_n}\frac{1}{\sqrt n}\|m_n(\alpha)\| = O_P(\bar\delta_n)$ and $(\sqrt n\,\bar\delta_n + \sqrt{k_n})\|\hat\Sigma_n - \Sigma_n\| = o(1)$.
Then
\begin{align*}
G(\alpha) &= \frac{1}{n}m_n(\alpha)'P_n\hat\Sigma_n^{-1}\frac{d\hat m_n(\alpha)}{d\alpha}[u_n] + \frac{1}{n}\rho_n'P_n\hat\Sigma_n^{-1}\frac{d\hat m_n(\alpha)}{d\alpha}[u_n] \\
&= \frac{1}{n}m_n(\alpha)'P_n\Sigma_n^{-1}\frac{d\hat m_n(\alpha)}{d\alpha}[u_n] + \frac{1}{n}\rho_n'P_n\Sigma_n^{-1}\frac{d\hat m_n(\alpha)}{d\alpha}[u_n] + o_P(n^{-1/2}) \\
&= \frac{1}{n}m_n(\alpha)'P_n\Sigma_n^{-1}\frac{dm_n(\alpha)}{d\alpha}[u_n] + \frac{1}{n}\rho_n'P_n\Sigma_n^{-1}\frac{dm_n(\alpha)}{d\alpha}[u_n] \\
&\quad + \frac{1}{n}m_n(\alpha)'P_n\Sigma_n^{-1}(P_n - I)\frac{dm_n(\alpha)}{d\alpha}[u_n] + \frac{1}{n}\rho_n'P_n\Sigma_n^{-1}(P_n - I)\frac{dm_n(\alpha)}{d\alpha}[u_n] + o_P(n^{-1/2}) \\
&= \frac{1}{n}m_n(\alpha)'P_n\Sigma_n^{-1}\frac{dm_n(\alpha)}{d\alpha}[u_n] + \frac{1}{n}\rho_n'P_n\Sigma_n^{-1}\frac{dm_n(\alpha)}{d\alpha}[u_n] + O_P(\phi_n\bar\delta_n) + o_P(n^{-1/2}) \\
&= \frac{1}{n}m_n(\alpha)'\Sigma_n^{-1}\frac{dm_n(\alpha)}{d\alpha}[u_n] + \frac{1}{n}\rho_n'P_n\Sigma_n^{-1}\frac{dm_n(\alpha)}{d\alpha}[u_n] + o_P(n^{-1/2}) \\
&=_{(a)} \langle u_n, \alpha - \alpha_0\rangle + \frac{1}{n}\rho_n'P_n\Sigma_n^{-1}\frac{dm_n(\alpha)}{d\alpha}[u_n] + o_P(n^{-1/2}) \\
&=_{(b)} \langle u_n, \alpha - \alpha_0\rangle + \underbrace{\frac{1}{n}\rho_n(\alpha_0)'\Sigma_n^{-1}\frac{dm_n(\alpha_0)}{d\alpha}[u_n]}_{\frac{1}{\sqrt n}Z_n} + o_P(n^{-1/2}),
\end{align*}
where (a) follows from Lemma C.2; (b) is due to
\begin{align*}
&\sqrt{E\rho_n(\alpha_0)'P_n\rho_n(\alpha_0)}\ \sqrt{\sup_{\alpha\in\mathcal{C}_n}\frac{1}{n}\sum_t\Big[\frac{dm(X_t,\alpha)}{d\alpha}[u_n] - \frac{dm(X_t,\alpha_0)}{d\alpha}[u_n]\Big]^2} \\
&\le \sqrt{E\,\mathrm{tr}\,P_n\Sigma(X_t)^{-1}}\ \sqrt{\sup_{\alpha\in\mathcal{C}_n}\frac{1}{n}\sum_t\Big[\frac{dm(X_t,\alpha)}{d\alpha}[u_n] - \frac{dm(X_t,\alpha_0)}{d\alpha}[u_n]\Big]^2} = o_P(1).
\end{align*}

Step 4: weak convergence of $Z_n$. It then remains to show $Z_n \to_d N(0,1)$. Note that
\[
Z_n = \frac{1}{\sqrt n}\sum_t Z_t\|v_n^*\|^{-1}, \qquad Z_t = \rho(Y_{t+1},\alpha_0)\Sigma(X_t)^{-1}\frac{dm(X_t,\alpha_0)}{d\alpha}[v_n^*],
\]
where $u_n = v_n^*/\|v_n^*\|$. When $s > t$, we have $Z_t \in \sigma_s(X)$. Hence $E(Z_tZ_s|\sigma_s(X)) = Z_t E(Z_s|\sigma_s(X)) = 0$. Thus
\begin{align*}
\mathrm{Var}\Big(\frac{1}{\sqrt n}\sum_t Z_t\Big) &= \mathrm{Var}(Z_t) + \frac{2}{n}\sum_{s>t} E\,E(Z_tZ_s|\sigma_s(X)) = \mathrm{Var}(Z_t) \\
&= E\,\mathrm{Var}\Big(\rho(Y_{t+1},\alpha_0)\Sigma(X_t)^{-1}\frac{dm(X_t,\alpha_0)}{d\alpha}[v_n^*]\ \Big|\ \sigma_t(X)\Big) \\
&= E\,\Sigma(X_t)^{-2}\Big(\frac{dm(X_t,\alpha_0)}{d\alpha}[v_n^*]\Big)^2\mathrm{Var}(\rho(Y_{t+1},\alpha_0)|\sigma_s(X)) = \langle v_n^*, v_n^*\rangle = \|v_n^*\|^2,
\end{align*}
where we used $\mathrm{Var}(\rho(Y_{t+1},\alpha_0)|\sigma_s(X)) = \Sigma(X_t)$.
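The variance identity above holds because $Z_t$ is a martingale difference with respect to the instrument filtration, so all cross terms vanish even though the state is serially dependent. A small simulation illustrates this (the AR(1) state, the choice $Z_t = \rho_t X_t$ with $\rho_t$ conditionally mean-zero, and all tuning numbers are our own illustrative assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)
reps, n, phi = 4000, 200, 0.5

# serially dependent state X_t: stationary AR(1) with unit innovation variance
x = np.empty((reps, n))
x[:, 0] = rng.normal(scale=np.sqrt(1.0 / (1.0 - phi**2)), size=reps)
for t in range(1, n):
    x[:, t] = phi * x[:, t - 1] + rng.normal(size=reps)

rho = rng.normal(size=(reps, n))   # E(rho_t | sigma_t(X)) = 0
z = rho * x                        # Z_t = rho_t * X_t, a martingale difference
sums = z.sum(axis=1) / np.sqrt(n)

# Var(n^{-1/2} sum_t Z_t) = Var(Z_t) = E X_t^2 = 1/(1 - phi^2):
# the cross terms vanish despite the serial dependence in X_t
print(sums.var(), 1.0 / (1.0 - phi**2))
```

This is the same mechanism that makes the long-run variance of $Z_n$ collapse to $\mathrm{Var}(Z_t)$ in the display above, with no HAC-type correction needed.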
Next, it is assumed that for some $\zeta > 0$,
\[
E\big|Z_t\|v_n^*\|^{-1}\big|^{2+\zeta} \le C E|\rho(Y_{t+1},\alpha_0)|^{2+\zeta}\Big|\frac{dm(X_t,\alpha_0)}{d\alpha}[u_n]\Big|^{2+\zeta} < \infty.
\]
In addition, $Z_t$ is strictly stationary, satisfying the $\beta$-mixing condition (Assumption 3.3). Let $\alpha(n)$ denote the $\alpha$-mixing coefficient (the strong mixing coefficient). We have that, by Assumption 3.3, $\alpha(n) \le \frac{1}{2}\beta(n) \le C\exp(-cn)$ for some $c, C > 0$. Hence
\[
\sum_{n=1}^\infty \alpha(n)^{\zeta/(2+\zeta)} \le C\sum_{n=1}^\infty \exp(-c\zeta n/(2+\zeta)) < \infty.
\]
Then by Theorem 1.7 of Ibragimov (1962), $Z_n \to_d N(0,1)$. $\square$

Lemma C.1. Let $\widetilde Q_n(\alpha) = \frac{1}{n}\sum_t \ell(X_t,\alpha)^2\hat\Sigma(X_t)^{-1}$, with $\ell(x,\alpha) := \hat m(x,\alpha) + \hat m(x,\alpha_0)$ and $\hat m(x,\alpha) := \Psi(x)'(\Psi_n'\Psi_n)^{-1}\Psi_n' m_n(\alpha)$. Suppose $k_n d_n^2\delta_n^{2\eta} + \sqrt{k_n}d_n\delta_n^\eta\bar\delta_n = o(n^{-1})$ and $\frac{1}{\sqrt n}\|m_n(\pi_n\alpha) - m_n(\alpha)\|\bar\delta_n = o(n^{-1})$. Then for $\mathcal{C}_n = \{\alpha + xu_n : \alpha\in\mathcal{A}_n,\ \|\alpha-\alpha_0\|_{\infty,\omega} \le C\delta_n,\ Q(\alpha) \le C\bar\delta_n^2,\ |x| \le Cn^{-1/2}\}$,

(i) $\sup_{\alpha\in\mathcal{C}_n} |Q_n(\alpha) - \widetilde Q_n(\alpha)| = o_P(n^{-1})$;

(ii) $\sup_{\alpha\in\mathcal{C}_n} |Q_n(\alpha) - Q_n(\pi_n\alpha)| = o_P(n^{-1})$.

Proof. (i) Recall that $\epsilon_t(\alpha) = \rho(Y_{t+1},\alpha) - m(X_t,\alpha)$, and that $m_n(\alpha)$, $\bar\epsilon_n(\alpha)$ and $\rho_n(\alpha)$ are the $n\times 1$ vectors of $m(X_t,\alpha)$, $\epsilon_t(\alpha)$ and $\rho(Y_{t+1},\alpha)$. Also write $\alpha(x) := \alpha + xu_n$.
Then
\begin{align*}
Q_n(\alpha + xu_n) - \widetilde Q_n(\alpha + xu_n) &= \frac{1}{n}\sum_t\big[\hat m(X_t,\alpha(x))^2 - \ell(X_t,\alpha(x))^2\big]\hat\Sigma(X_t)^{-1} \\
&= \frac{1}{n}[\bar\epsilon_n(\alpha + xu_n) - \bar\epsilon_n(\alpha_0)]'P_n\hat\Sigma_n^{-1}P_n[\bar\epsilon_n(\alpha + xu_n) - \bar\epsilon_n(\alpha_0) + 2m_n(\alpha + xu_n) + 2\rho_n(\alpha_0)] \\
&\le O_P(1)\frac{1}{n}\|P_n[\bar\epsilon_n(\alpha + xu_n) - \bar\epsilon_n(\alpha_0)]\|^2
 + O_P(1)\frac{1}{n}\|P_n[\bar\epsilon_n(\alpha + xu_n) - \bar\epsilon_n(\alpha_0)]\|\,\|P_n m_n(\alpha + xu_n)\| \\
&\quad + O_P(1)\frac{1}{n}\|P_n[\bar\epsilon_n(\alpha + xu_n) - \bar\epsilon_n(\alpha_0)]\|\,\|P_n\rho_n(\alpha_0)\|
\ \le\ O_P(d_1^2 + d_1 d_2 + d_1 d_3),
\end{align*}
where
\[
d_1 := \frac{1}{\sqrt n}\|P_n[\bar\epsilon_n(\alpha + xu_n) - \bar\epsilon_n(\alpha_0)]\|, \quad
d_2 := \frac{1}{\sqrt n}\|P_n m_n(\alpha + xu_n)\|, \quad
d_3 := \frac{1}{\sqrt n}\|P_n\rho_n(\alpha_0)\|.
\]
We shall bound $d_1$ through $d_3$ respectively. By Proposition A.1, $d_1 = O_P(\sqrt{k_n}d_n\delta_n^\eta)$ uniformly in $\alpha(x)$. As for $d_2$, by steps 1 and 3 in the proof of Theorem 3.1, uniformly in $\alpha(x)$,
\[
d_2^2 \le \frac{1}{n}\sum_t \hat m(X_t,\alpha(x))^2 \le C E\hat m(X_t,\alpha(x))^2 \le C(\phi_n^2 + Em(X_t,\alpha(x))^2) \le C\bar\delta_n^2.
\]
Finally, $d_3^2 = O_P(\frac{k_n}{n})$. Together, $Q_n(\alpha + xu_n) - \widetilde Q_n(\alpha + xu_n) \le O_P(k_n d_n^2\delta_n^{2\eta} + \sqrt{k_n}d_n\delta_n^\eta\bar\delta_n) = o_P(n^{-1})$.
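The bound $d_3^2 = O_P(k_n/n)$ rests on the trace identity $E\|P_n\rho_n\|^2 = \sigma^2\,\mathrm{tr}(P_n) = \sigma^2 k_n$ when $P_n$ projects onto a $k_n$-dimensional sieve space and the errors are homoskedastic. A quick simulation illustrates it (with an arbitrary Gaussian design and $\sigma = 1$, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k_n, reps = 200, 10, 500

vals = np.empty(reps)
for r in range(reps):
    Psi = rng.normal(size=(n, k_n))   # sieve basis matrix, n x k_n
    rho = rng.normal(size=n)          # homoskedastic errors, sigma = 1
    # ||P_n rho||^2 = rho' Psi (Psi'Psi)^{-1} Psi' rho, via least squares
    coef, *_ = np.linalg.lstsq(Psi, rho, rcond=None)
    vals[r] = np.sum((Psi @ coef) ** 2)

# E ||P_n rho||^2 = tr(P_n) = k_n, so d_3^2 = ||P_n rho||^2 / n = O_P(k_n / n)
print(vals.mean(), k_n)
```

Dividing by $n$ gives exactly the $k_n/n$ rate used for $d_3^2$ above.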
(ii) Let $m_n(\alpha)$ and $\hat m_n(\alpha)$ respectively be the $n\times 1$ vectors of $m(X_t,\alpha)$ and $\hat m(X_t,\alpha)$. First, $\frac{1}{\sqrt n}\|\hat m_n(\pi_n\alpha) - \hat m_n(\alpha)\| \le \frac{1}{\sqrt n}\|m_n(\pi_n\alpha) - m_n(\alpha)\| \le O_P(\mu_n)$. Second, $\frac{1}{n}\|\hat m_n(\alpha)\|^2 \le O_P(\bar\delta_n^2)$. Third,
\[
\frac{1}{\sqrt n}\|\hat m_n(\alpha_0)\| = O_P(1)\sqrt{\frac{1}{n}\rho_n(\alpha_0)'P_n\rho_n(\alpha_0)} = O_P\Big(\sqrt{\frac{k_n}{n}}\Big).
\]
Hence for $\widetilde Q_n(\alpha) = \frac{1}{n}[\hat m_n(\alpha) + \hat m_n(\alpha_0)]'\hat\Sigma_n^{-1}[\hat m_n(\alpha) + \hat m_n(\alpha_0)]$, we have
\begin{align*}
\widetilde Q_n(\alpha) - \widetilde Q_n(\pi_n\alpha) &\le O_P(1)\frac{1}{\sqrt n}\|\hat m_n(\pi_n\alpha) - \hat m_n(\alpha)\|\Big(\frac{1}{\sqrt n}\|\hat m_n(\pi_n\alpha) - \hat m_n(\alpha)\| + \frac{1}{\sqrt n}\|\hat m_n(\alpha)\| + \frac{1}{\sqrt n}\|\hat m_n(\alpha_0)\|\Big) \\
&\le O_P(\mu_n\bar\delta_n) = o(n^{-1}).
\end{align*}
Finally, by part (i), $|\widetilde Q_n(\alpha) - Q_n(\alpha)| = o_P(n^{-1})$ uniformly in $\alpha$. $\square$

Lemma C.2.
Suppose $\sup_{\tau\in(0,1)}\sup_{\alpha\in\mathcal{C}_n} E\big[\frac{d^2m(X_t,\alpha_0+\tau(\alpha-\alpha_0))}{d\tau^2}\big]^2 = o(n^{-1})$ and $E\sup_{\alpha\in\mathcal{C}_n}\sup_{|\tau|\le Cn^{-1/2}}\frac{1}{n}\sum_t\big[\frac{d^2}{d\tau^2}m(X_t,\alpha+\tau u_n)\big]^2 = O(1)$. Then uniformly for $\alpha\in\mathcal{C}_n$,

(i) $\sup_{\alpha\in\mathcal{C}_n}\sup_{|s|\le 1, |x|\le Cn^{-1/2}}\big|\frac{1}{n}\sum_t \ell(X_t,\alpha+sxu_n)\hat\Sigma(X_t)^{-1}\frac{d^2}{d\tau^2}\hat m(X_t,\alpha+\tau xu_n)\big|_{\tau=s}\big| = o_P(1)$;

(ii) $\sup_{\alpha\in\mathcal{C}_n}\sqrt n\,\big|\frac{1}{n}m_n(\alpha)'\Sigma_n^{-1}\frac{dm_n(\alpha_0)}{d\alpha}[u_n] - \langle u_n, \alpha-\alpha_0\rangle\big| = o_P(1)$.

Proof. (i) We have that
\[
\Big|\frac{1}{n}\sum_t \ell(X_t,\alpha+sxu_n)\hat\Sigma(X_t)^{-1}\frac{d^2}{d\tau^2}\hat m(X_t,\alpha+\tau xu_n)\Big|_{\tau=s}\Big|^2 \le O_P(1)\,AB,
\]
where $A := \frac{1}{n}\sum_t \ell(X_t,\alpha+sxu_n)^2$ and $B := \frac{1}{n}\sum_t \frac{d^2}{d\tau^2}\hat m(X_t,\alpha+\tau xu_n)\big|_{\tau=s}^2$. Let $m_n$ and $\rho_n$ denote the $n\times 1$ vectors of $m(X_t,\cdot)$ and $\rho(Y_{t+1},\alpha_0)$. Uniformly for $\alpha\in\mathcal{C}_n$, $A \le \frac{2}{n}\|P_n m_n(\alpha+sxu_n)\|^2 + \frac{2}{n}\|P_n\rho_n\|^2 = o_P(1)$, and $B \le O_P(1)E\sup_{\alpha\in\mathcal{C}_n}\sup_{|\tau|\le Cn^{-1/2}}|\frac{d^2}{d\tau^2}m(X_t,\alpha+\tau u_n)|^2 = O_P(1)$.
(ii) By the second order mean value theorem, for some $\xi\in(0,1)$,
\begin{align*}
\frac{1}{n} m_n(\alpha)'\Sigma_n^{-1}\frac{dm_n(\alpha_0)}{d\alpha}[u_n] &= \frac{1}{n}\sum_t [m(X_t,\alpha)-m(X_t,\alpha_0)]\Sigma(X_t)^{-1}\frac{dm(X_t,\alpha_0)}{d\alpha}[u_n] \\
&= \frac{1}{n}\sum_t f(X_t)-Ef(X_t) + E[m(X_t,\alpha)-m(X_t,\alpha_0)]\Sigma(X_t)^{-1}\frac{dm(X_t,\alpha_0)}{d\alpha}[u_n] \\
&= \frac{1}{n}\sum_t f(X_t)-Ef(X_t) + E\frac{dm(X_t,\alpha_0)}{d\alpha}[\alpha-\alpha_0]\Sigma(X_t)^{-1}\frac{dm(X_t,\alpha_0)}{d\alpha}[u_n] \\
&\quad + \frac{1}{2} E\frac{d^2 m(X_t,\alpha_0+\tau(\alpha-\alpha_0))}{d\tau^2}\Big|_{\tau=\xi}\Sigma(X_t)^{-1}\frac{dm(X_t,\alpha_0)}{d\alpha}[u_n] \\
&= \frac{1}{n}\sum_t f(X_t)-Ef(X_t) + \langle u_n,\alpha-\alpha_0\rangle + o(n^{-1/2}) \\
&= \langle u_n,\alpha-\alpha_0\rangle + o_P(n^{-1/2}),
\end{align*}
where $f(X_t) = [m(X_t,\alpha)-m(X_t,\alpha_0)]\Sigma(X_t)^{-1}\frac{dm(X_t,\alpha_0)}{d\alpha}[u_n]$ and the last equality follows from
\[
\sup_{f\in\mathcal{E}_n}\Big|\frac{1}{\sqrt n}\sum_t (f(X_t)-Ef(X_t))\Big| = o_P(1) \tag{C.1}
\]
with $\mathcal{E}_n := \{m(X_t,\alpha)\Sigma(X_t)^{-1}\frac{dm(X_t,\alpha_0)}{d\alpha}[u_n] : \alpha\in\mathcal{C}_n\}$ and the fact that $m(X_t,\alpha_0)=0$.

C.2 Proof of Theorem 4.1

Proof.
By the Riesz representation theorem, there is $v_n^*\in \mathrm{cl}\{\mathcal{A}_n-\alpha_0\}$ such that
\[
\frac{d\varphi(\alpha_0)}{d\alpha}[\widehat\alpha-\alpha_0] = \langle v_n^*, \widehat\alpha-\alpha_0\rangle.
\]
Next, we show $\sqrt n\langle u_n,\widehat\alpha-\alpha_0\rangle \to_d N(0,1)$, or more precisely, for $Z_n\to_d N(0,1)$,
\[
Z_n + \sqrt n\langle u_n,\widehat\alpha-\alpha_0\rangle = o_P(1). \tag{C.2}
\]
The proof of Theorem 3.1 implies that for any $\epsilon>0$ there is $C>0$ so that with probability at least $1-\epsilon$,
\[
\widehat\alpha \in \mathcal{A}_{osn} := \{\alpha\in\mathcal{A}_n : Q(\alpha)\le C\bar\delta_n^2,\ \|\alpha-\alpha_0\|_{\infty,\omega}\le C\delta_n\}.
\]
We now condition on this event. By Proposition C.1,
\[
\sup_{\alpha\in\mathcal{A}_{osn}}\ \sup_{|x|\le Cn^{-1/2}} |Q_n(\alpha+xu_n) - Q_n(\alpha) - A_n(\alpha(x))| = o_P(n^{-1}), \tag{C.3}
\]
where $A_n(\alpha(x)) := 2x[n^{-1/2}Z_n + \langle u_n,\alpha-\alpha_0\rangle] + B_n x^2$ with $B_n = O_P(1)$ and $Z_n\to_d N(0,1)$. Write $u_n = (u_\gamma, u_h)$. Now let $\Delta_n$ be such that $\sup_{|x|\le Cn^{-1/2}} |\mathrm{Pen}(\pi_n(\widehat h + x u_h)) - \mathrm{Pen}(\widehat h)| = O_P(\Delta_n)$. Then
\[
E_n := \lambda_n \mathrm{Pen}(\pi_n(\widehat h + x u_h)) - \lambda_n \mathrm{Pen}(\widehat h) = O_P(\lambda_n\Delta_n).
\]
Now by definition $\pi_n(\widehat\alpha + x u_n)\in\mathcal{A}_n$, hence
\begin{align*}
0 &\le Q_n(\pi_n(\widehat\alpha+xu_n)) - Q_n(\widehat\alpha) + E_n \\
&\le Q_n(\widehat\alpha+xu_n) - Q_n(\widehat\alpha) + E_n + |Q_n(\widehat\alpha+xu_n) - Q_n(\pi_n(\widehat\alpha+xu_n))| \\
&\le Q_n(\widehat\alpha+xu_n) - Q_n(\widehat\alpha) + E_n + o_P(n^{-1}) \\
&\le 2x[n^{-1/2}Z_n + \langle u_n,\widehat\alpha-\alpha_0\rangle] + B_n x^2 + E_n + o_P(n^{-1}),
\end{align*}
where the third inequality follows from Lemma C.1 and the last inequality follows from (C.3). By assumption, $\lambda_n\Delta_n = o_P(n^{-1})$. Hence there is $\eta_n = o(n^{-1})$ so that
\[
0 \le x[n^{-1/2}Z_n + \langle u_n,\widehat\alpha-\alpha_0\rangle] + B_n x^2 + O_P(\eta_n).
\]
Since $n^{1/2}\eta_n = o(n^{-1/2})$, we can find $\epsilon_n\to 0^+$ so that $n^{1/2}\eta_n \ll \epsilon_n \ll n^{-1/2}$. Set $x\in\{\epsilon_n,-\epsilon_n\}$ and multiply both sides by $(2\epsilon_n)^{-1}n^{1/2}$:
\[
-\frac{1}{2}\sqrt n B_n\epsilon_n \le Z_n + \sqrt n\langle u_n,\widehat\alpha-\alpha_0\rangle + O_P(\eta_n\epsilon_n^{-1} n^{1/2}) \le \frac{1}{2}\sqrt n B_n\epsilon_n.
\]
We have $\eta_n\epsilon_n^{-1}n^{1/2} + \sqrt n B_n\epsilon_n = o_P(1)$. Therefore we reach $Z_n + \sqrt n\langle u_n,\widehat\alpha-\alpha_0\rangle = o_P(1)$, which implies $\sqrt n\langle u_n,\widehat\alpha-\alpha_0\rangle = -Z_n + o_P(1) \to_d N(0,1)$.

Finally, let $\zeta_n = \|v_n^*\| n^{-1/2}$. Applying Assumption 4.1 with $\alpha = \widehat\alpha$ and $u_n = v_n^*/\|v_n^*\|$,
\begin{align*}
\zeta_n^{-1}(\varphi(\widehat\alpha)-\varphi(\alpha_0)) &= \zeta_n^{-1}\frac{d\varphi(\alpha_0)}{d\alpha}[\widehat\alpha-\alpha_0] + o_P(1) \\
&= \zeta_n^{-1}\frac{d\varphi(\alpha_0)}{d\alpha}[\widehat\alpha-\alpha_{0,n}] + \zeta_n^{-1}\frac{d\varphi(\alpha_0)}{d\alpha}[\alpha_{0,n}-\alpha_0] + o_P(1) \\
&= \sqrt n\langle u_n,\widehat\alpha-\alpha_{0,n}\rangle + o_P(1) = \sqrt n\langle u_n,\widehat\alpha-\alpha_0\rangle + o_P(1) \to_d N(0,1),
\end{align*}
where in the last equality we used $\sqrt n\langle u_n,\alpha_{0,n}-\alpha_0\rangle = 0$, because $\alpha_{0,n}$ is the projection (under $\|\cdot\|$) of $\alpha_0$ onto $\mathrm{span}\{\mathcal{A}_n\}$ and $u_n\in\mathrm{span}\{\mathcal{A}_n\}$.
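The sandwich step above admits a simple numerical illustration: if $0 \le 2xS + B x^2 + \eta$ holds for both $x=\epsilon$ and $x=-\epsilon$, then $|S| \le (B\epsilon + \eta/\epsilon)/2$, which vanishes after the $\sqrt n$ scaling whenever $\eta_n \ll \epsilon_n \ll n^{-1/2}$. A minimal sketch with hypothetical constants $B$ and $\eta$ standing in for $B_n$ and $\eta_n$:

```python
def sandwich_bound(B, eta, eps):
    # If 0 <= 2*x*S + B*x**2 + eta for x = +eps and x = -eps, then
    # the x = -eps case rearranges to S <= (B*eps + eta/eps)/2 and
    # the x = +eps case to S >= -(B*eps + eta/eps)/2.
    return 0.5 * (B * eps + eta / eps)

B, eta = 1.0, 1e-6  # hypothetical values
for eps in [1e-1, 1e-2, 1e-3]:
    S = sandwich_bound(B, eta, eps)  # extreme S still compatible with both inequalities
    # boundary check: the x = -eps inequality holds with (near-)equality at this S
    assert 2 * (-eps) * S + B * eps**2 + eta >= -1e-12
    print(eps, sandwich_bound(B, eta, eps))
```

The bound is smallest for $\epsilon$ between the two scales ($\epsilon = \sqrt{\eta}$ here), mirroring the choice $n^{1/2}\eta_n \ll \epsilon_n \ll n^{-1/2}$ in the proof.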
C.3 Proof of Theorem 4.2

Proof. We divide the proof into the following steps.

Step 1: decompose $\widehat\gamma$. Write $\sigma^2 := \mathrm{Var}\big(\frac{1}{\sqrt n}\sum_t W_t\big) + \|v_n^*\|^2$, which will be shown to be the asymptotic variance. Also write $b_n(\alpha)$ and $\bar b_n(\alpha)$ respectively as the $n\times 1$ vectors of $b(S_t,\alpha) := l(h(W_t))\rho(Y_{t+1},\alpha)$ and $\bar b(X_t,\alpha) := E(l(h(W_t))\rho(Y_{t+1},\alpha)\,|\,\sigma_t(X))$. Then
\begin{align*}
\widehat\gamma - \gamma &= [\varphi_n(\alpha_0)-\varphi(\alpha_0)] + [\varphi(\widehat\alpha)-\varphi(\alpha_0)] + \frac{1}{n}\sum_{t=1}^n (\Gamma(X_t)-\widehat\Gamma_t)\rho(Y_{t+1},\widehat\alpha) + a_1, \\
a_1 &:= \varphi_n(\widehat\alpha)-\varphi(\widehat\alpha) - [\varphi_n(\alpha_0)-\varphi(\alpha_0)], \\
\varphi_n(\alpha) &= \frac{1}{n}\sum_t l(h(W_t)) - \Gamma(X_t)\rho(Y_{t+1},\alpha), \qquad \varphi(\alpha) = E\varphi_n(\alpha).
\end{align*}
Bounding $a_1$ is based on the stochastic equicontinuity of $\varphi_n-\varphi$, established in Lemma C.3, which yields $a_1 = O_P(d_n\delta_n^\eta) = o_P(\sigma n^{-1/2})$ by the assumption that $d_n\delta_n^\eta = o(\sigma n^{-1/2})$.

Step 2: decompose $\widehat\Gamma(X_t)$. We have $\Gamma(X_t) = \bar b(X_t,\alpha_0)\Sigma(X_t)^{-1}$. Let $\widehat\alpha\in\mathcal{C}_n$ denote the estimator of $\alpha_0$ used in defining $\widehat\Gamma_t$. Then $\widehat\Gamma_t = \Psi(X_t)'(\Psi_n'\Psi_n)^{-1}\Psi_n' b_n(\widehat\alpha)\widehat\Sigma(X_t)^{-1}$. We then obtain the decomposition
\[
\frac{1}{n}\sum_{t=1}^n (\widehat\Gamma_t-\Gamma(X_t))\rho(Y_{t+1},\widehat\alpha) = \frac{1}{n} b_n(\widehat\alpha)'P_n\widehat\Sigma_n^{-1}\rho_n(\widehat\alpha) - \frac{1}{n}\bar b_n(\alpha_0)'\Sigma_n^{-1}\rho_n(\widehat\alpha) = a_2 + \cdots + a_8,
\]
where
\begin{align*}
a_2 &:= \tfrac{1}{n}[b_n(\widehat\alpha)-\bar b_n(\widehat\alpha)]'P_n\widehat\Sigma_n^{-1}\rho_n(\widehat\alpha), & a_3 &:= \tfrac{1}{n}[\bar b_n(\widehat\alpha)-\bar b_n(\alpha_0)]'P_n\widehat\Sigma_n^{-1}\rho_n(\widehat\alpha), \\
a_4 &:= \tfrac{1}{n}\bar b_n(\alpha_0)'(P_n-I)(\widehat\Sigma_n^{-1}-\Sigma_n^{-1})\rho_n(\widehat\alpha), & a_5 &:= \tfrac{1}{n}\bar b_n(\alpha_0)'(P_n-I)\Sigma_n^{-1}(\rho_n(\widehat\alpha)-m_n(\widehat\alpha)), \\
a_6 &:= \tfrac{1}{n}\bar b_n(\alpha_0)'(P_n-I)\Sigma_n^{-1} m_n(\widehat\alpha), & a_7 &:= \tfrac{1}{n}\bar b_n(\alpha_0)'(\widehat\Sigma_n^{-1}-\Sigma_n^{-1})(\rho_n(\widehat\alpha)-\rho_n(\alpha_0)), \\
a_8 &:= \tfrac{1}{n}\bar b_n(\alpha_0)'(\widehat\Sigma_n^{-1}-\Sigma_n^{-1})\rho_n(\alpha_0). & & \tag{C.4}
\end{align*}
Lemma C.3 shows $a_2 + \cdots + a_7 = O_P(\delta_n^\eta \sup_x|\widehat\Sigma(x)-\Sigma(x)| + \sqrt{k_n}\, d_n\delta_n^\eta + \phi_n^2)$, which is $o_P(\sigma n^{-1/2})$. The bound $a_8 = o_P(\sigma n^{-1/2})$ follows from Assumption 4.6. Hence
\[
\frac{1}{n}\sum_{t=1}^n (\widehat\Gamma_t-\Gamma(X_t))\rho(Y_{t+1},\widehat\alpha) = o_P(\sigma n^{-1/2}).
\]

Step 3: complete the proof. By the same argument as in the proof of Theorem 4.1,
\begin{align*}
\varphi(\widehat\alpha)-\varphi(\alpha_0) &= \|v_n^*\|\langle u_n,\widehat\alpha-\alpha_0\rangle + o_P(\|v_n^*\|n^{-1/2}) = -\|v_n^*\|n^{-1/2}Z_n + o_P(\|v_n^*\|n^{-1/2}) \\
&= -\frac{1}{n}\sum_t Z_t + o_P(\|v_n^*\|n^{-1/2}), \qquad Z_t := \rho(Y_{t+1},\alpha_0)\Sigma(X_t)^{-1}\frac{dm(X_t,\alpha_0)}{d\alpha}[v_n^*], \\
\varphi_n(\alpha_0)-\varphi(\alpha_0) &= \frac{1}{n}\sum_{t=1}^n W_t - EW_t, \qquad W_t := l(h_0(W_t)) - \Gamma(X_t)\rho(Y_{t+1},\alpha_0).
\end{align*}
Putting these together, $\widehat\gamma-\gamma = [\varphi_n(\alpha_0)-\varphi(\alpha_0)] + [\varphi(\widehat\alpha)-\varphi(\alpha_0)] + o_P(\sigma n^{-1/2})$, and
\[
[\varphi_n(\alpha_0)-\varphi(\alpha_0)] + [\varphi(\widehat\alpha)-\varphi(\alpha_0)] = \frac{1}{n}\sum_{t=1}^n (W_t - EW_t - Z_t) + o_P(\sigma n^{-1/2}).
\]
Next, $W_t - EW_t - Z_t$ is strictly stationary and satisfies the strong mixing condition (Assumption 3.3) with $\sum_{n=1}^\infty \alpha(n)^{\zeta/(2+\zeta)} \le C\sum_{n=1}^\infty \exp(-c\zeta n/(2+\zeta)) < \infty$ for any constant $\zeta>0$. In addition,
\begin{align*}
E\big|(W_t-EW_t-Z_t)\sigma^{-1}\big|^{2+\zeta} &\le CE\big|W_t\sigma^{-1}\big|^{2+\zeta} + CE\big|Z_t\|v_n^*\|^{-1}\big|^{2+\zeta} \\
&\le CE\Big|\rho(Y_{t+1},\alpha_0)\frac{dm(X_t,\alpha_0)}{d\alpha}[u_n]\Big|^{2+\zeta} + CE|\rho(Y_{t+1},\alpha_0)|^{2+\zeta} < C.
\end{align*}
Then by Theorem 1.7 of Ibragimov (1962),
\[
\sqrt n\,\sigma^{-1}\big[\varphi_n(\alpha_0)-\varphi(\alpha_0)+\varphi(\widehat\alpha)-\varphi(\alpha_0)\big] \to N(0,1). \tag{C.5}
\]
This implies the asymptotic normality of $\widehat\gamma-\gamma$.

Lemma C.3 (for Theorems 4.2, 5.2).
Recall that $b_n(\alpha)$ and $\bar b_n(\alpha)$ are the $n\times 1$ vectors of $l(h(W_t))\rho(Y_{t+1},\alpha)$ and $E(l(h(W_t))\rho(Y_{t+1},\alpha)\,|\,\sigma_t(X))$. Suppose:

(a) $\sup_x |\Gamma(x)|^2 + \sup_w\sup_{\mathcal{H}_n} l(h(w))^2 < C$.

(b) $|l(h_1(w))-l(h_2(w))| \le C|h_1(w)-h_2(w)|$ uniformly for all $h_1,h_2\in\mathcal{H}_n$ and $w$.

(c) $E\sup_{\alpha\in\mathcal{C}_n} |l(h(W_t))-l(h_0(W_t))|^2 \le C\delta_n^{2\eta}$.

(d) $E\sup_{\alpha\in\mathcal{C}_n}(\rho(Y_{t+1},\alpha)-\rho(Y_{t+1},\alpha_0))^2 = C\delta_n^{2\eta}$ for some $\eta, C>0$.

(e) For some $\kappa, C>0$, $E\sup_{\|\alpha_1-\alpha\|_{\infty,\omega}<\delta} |\epsilon_t(\alpha_1)-\epsilon_t(\alpha)|^2 \le C\delta^{2\kappa}$ for all $\delta>0$ and $\alpha,\alpha_1 \in \mathrm{cl}\{a+xb : a,b\in\mathcal{A}_n,\ x\in\mathbb{R}\}$.

(f) $\frac{1}{\sqrt n}\|\bar b_n(\alpha_0)'(P_n-I)\| = O_P(\phi_n)$.

Then for $\bar\epsilon_n(\alpha)$ the $n\times 1$ vector of $\rho(Y_{t+1},\alpha)-m(X_t,\alpha)$:

(i) $\sup_{\alpha_1,\alpha_2\in\mathcal{C}_n\cup\{\alpha_0\}} |\varphi_n(\alpha_1)-\varphi(\alpha_1) - [\varphi_n(\alpha_2)-\varphi(\alpha_2)]| = O_P(d_n\delta_n^\eta)$.

(ii) $\sup_{\mathcal{C}_n} \big|\frac{1}{n}\bar b_n(\alpha_0)'(\widehat\Sigma_n^{-1}-\Sigma_n^{-1})[\rho_n(\alpha)-\rho_n(\alpha_0)]\big| = O_P(\delta_n^\eta)\sup_x|\widehat\Sigma(x)-\Sigma(x)|$.

(iii) $\sup_{\mathcal{C}_n} \frac{1}{\sqrt n}\|P_n\widehat\Sigma_n^{-1}\rho_n(\alpha)\| = O_P(\sup_x\|\widehat\Sigma(x)-\Sigma(x)\| + d_n\delta_n^\eta + \bar\delta_n)$.

(iv) $\sup_{\mathcal{C}_n} \frac{1}{n}\bar b_n(\alpha_0)'(P_n-I)\Sigma_n^{-1}\bar\epsilon_n(\alpha) = O_P(\sqrt{k_n}\, d_n\delta_n^\eta)$.

(v) $\sup_{\mathcal{C}_n} \frac{1}{n}[b_n(\alpha)-\bar b_n(\alpha)]'P_n\widehat\Sigma_n^{-1}\rho_n(\alpha) + \sup_{\mathcal{C}_n} \frac{1}{n}[\bar b_n(\alpha)-\bar b_n(\alpha_0)]'P_n\widehat\Sigma_n^{-1}\rho_n(\alpha) = O_P(\delta_n^\eta\sup_x\|\widehat\Sigma(x)-\Sigma(x)\| + d_n\delta_n^{2\eta} + \sqrt{k_n}\, d_n\delta_n^\eta\bar\delta_n + \sqrt{k_n/n}\,\bar\delta_n)$.

(vi) $\sup_{\mathcal{C}_n} \frac{1}{n}\bar b_n(\alpha_0)'(P_n-I)(\widehat\Sigma_n^{-1}-\Sigma_n^{-1})\rho_n(\alpha) = O_P(\phi_n\sup_x\|\widehat\Sigma(x)-\Sigma(x)\|)$.

(vii) $\sup_{\mathcal{C}_n} \frac{1}{n}\bar b_n(\alpha_0)'(P_n-I)\Sigma_n^{-1} m_n(\alpha) = O_P(\phi_n^2)$.

Proof. (i) First recall $\epsilon(S_t,\alpha) = \rho(Y_{t+1},\alpha)-m(X_t,\alpha)$. Define
\begin{align*}
a &:= \sup_{\alpha_1,\alpha_2\in\mathcal{C}_n\cup\{\alpha_0\}} \Big|\frac{1}{n}\sum_{t=1}^n \Gamma(X_t)[\rho(Y_{t+1},\alpha_1)-\rho(Y_{t+1},\alpha_2)] - E\Gamma(X_t)[\rho(Y_{t+1},\alpha_1)-\rho(Y_{t+1},\alpha_2)]\Big| \\
&= \sup_{\alpha_1,\alpha_2\in\mathcal{C}_n\cup\{\alpha_0\}} \Big|\frac{1}{n}\sum_{t=1}^n \Gamma(X_t)[\epsilon(S_t,\alpha_1)-\epsilon(S_t,\alpha_2)]\Big| \le 2\sup_{\alpha\in\mathcal{C}_n\cup\{\alpha_0\}} \Big|\frac{1}{n}\sum_{t=1}^n \Gamma(X_t)[\epsilon(S_t,\alpha)-\epsilon(S_t,\alpha_0)]\Big|, \\
b &:= \sup_{\alpha_1,\alpha_2\in\mathcal{C}_n\cup\{\alpha_0\}} \Big|\frac{1}{n}\sum_{t=1}^n l(h_1(W_t))-l(h_2(W_t)) - E[l(h_1(W_t))-l(h_2(W_t))]\Big| \\
&\le 2\sup_{\alpha\in\mathcal{C}_n\cup\{\alpha_0\}} \Big|\frac{1}{n}\sum_{t=1}^n l(h(W_t))-l(h_0(W_t)) - E[l(h(W_t))-l(h_0(W_t))]\Big|.
\end{align*}
Note $E\sup_{\alpha\in\mathcal{C}_n}\Gamma(X_t)^2[\epsilon(S_t,\alpha)-\epsilon(S_t,\alpha_0)]^2 \le CE\sup_{\alpha\in\mathcal{C}_n}[\epsilon(S_t,\alpha)-\epsilon(S_t,\alpha_0)]^2 \le C\delta_n^{2\eta}$, $\eta\le 1$. Then the convergence of $a$ and $b$ follows from the same argument as that of Proposition A.1, with $\Psi_j(X_t)$ replaced by $\Gamma(X_t)$; term $b$ follows from the same proof of that proposition. We reach $a+b = O_P(d_n\delta_n^\eta)$. Therefore $\sup_{\alpha_1,\alpha_2\in\mathcal{C}_n\cup\{\alpha_0\}} |\varphi_n(\alpha_1)-\varphi(\alpha_1)-[\varphi_n(\alpha_2)-\varphi(\alpha_2)]| \le a+b = O_P(d_n\delta_n^\eta)$.

(ii) First, $E\sup_{\alpha\in\mathcal{C}_n}[\rho(Y_{t+1},\alpha)-\rho(Y_{t+1},\alpha_0)]^2 \le O(\delta_n^{2\eta})$. This implies $\frac{1}{\sqrt n}\|\rho_n(\alpha)-\rho_n(\alpha_0)\| = O_P(\delta_n^\eta)$. The target of interest is then bounded by $\|\widehat\Sigma_n-\Sigma_n\|\frac{1}{\sqrt n}\|\rho_n(\alpha)-\rho_n(\alpha_0)\| = O_P(\delta_n^\eta)\sup_x\|\widehat\Sigma(x)-\Sigma(x)\|$.
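Condition (b) is a Lipschitz requirement on the functional $l$; combined with the $L_2$ rate of $h-h_0$ over $\mathcal{C}_n$ it delivers condition (c). As a purely illustrative, hypothetical choice (not one used in the paper), $l=\tanh$ is 1-Lipschitz, so (b) holds with $C=1$:

```python
import math, random

# Hypothetical example: l = tanh has derivative 1 - tanh(x)^2 in (0, 1],
# hence |l(h1) - l(h2)| <= |h1 - h2|, i.e. condition (b) with C = 1.
random.seed(1)
for _ in range(1000):
    h1, h2 = random.uniform(-5, 5), random.uniform(-5, 5)
    assert abs(math.tanh(h1) - math.tanh(h2)) <= abs(h1 - h2) + 1e-12
```

For such an $l$, condition (c) follows directly: $E\sup_{\alpha\in\mathcal{C}_n}|l(h(W_t))-l(h_0(W_t))|^2 \le C^2 E\sup_{\alpha\in\mathcal{C}_n}|h(W_t)-h_0(W_t)|^2$.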
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (iii) First, write �mΣ(Xt, α) := Ψ(Xt)′(Ψ′ nΨn)−1Ψ′ nΣ−1 n mn(α).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Then step 1 of the proof of 49 Theorem 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='1 carries over, leading to sup Cn 1 n∥PnΣ−1 n mn(α)∥2 = sup Cn 1 n � t �mΣ(Xt, α)2 ≤ C sup Cn E �mΣ(Xt, α)2 ≤ C sup Cn E[ �mΣ(Xt, α) − m(Xt, α)Σ(Xt)−1]2 + C sup Cn Em(Xt, α)2 = OP (¯δ2 n).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Also, for ¯ǫn(α) := ρn(α)−mn(α), the first inequality below follows from the same proof of Proposition A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='sup ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='Cn ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='√n∥PnΣ−1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='n (¯ǫn(α) − ¯ǫn(α0))∥ ≤ OP (dnδη ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='n) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='sup ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='Cn ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='√n∥Pn(�Σ−1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='n − Σ−1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='n )(ρn(α) − ρn(α0))∥ ≤ OP (δη ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='n) sup ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='x ∥�Σ(x) − Σ(x)∥ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='sup ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='Cn ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='√n∥PnΣ−1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='n (ρn(α) − ρn(α0))∥ ≤ sup ' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='Cn ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='√n∥PnΣ−1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='n (¯��n(α) − ¯ǫn(α0))∥ + sup ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='Cn ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='√n∥PnΣ−1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='n mn(α)∥ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='≤ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='OP (dnδη ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='n + ¯δn) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='sup ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='Cn ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='1 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='√n∥Pn�Σ−1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='n (ρn(α) − ρn(α0))∥ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='≤ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='sup ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='Cn ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='√n∥Pn(�Σ−1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='n − Σ−1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='n )(ρn(α) − ρn(α0))∥ + sup ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='Cn ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='√n∥PnΣ−1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='n (ρn(α) − ρn(α0))∥ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='≤ ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='OP (δη ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='n) sup ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='x ∥�Σ(x) − Σ(x)∥ + OP (dnδη ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='n + ¯δn) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='√n∥Pn�Σ−1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='n ρn(α0)∥ = OP (1) sup ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='x ∥�Σ(x) − Σ(x)∥ + OP ( ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='kn/n).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Together, supCn 1 √n∥Pn�Σ−1 n ρn(α)∥ ≤ supCn 1 √n∥Pn�Σ−1 n (ρn(α) − ρn(α0))∥ + 1 √n∥Pn�Σ−1 n ρn(α0)∥ whose final rate is OP (supx ∥�Σ(x) − Σ(x)∥ + dnδη n + ¯δn).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (iv) First, 1 n¯bn(α0)′(Pn −I)Σ−1 n ¯ǫn(α0) = OP (ϕnn−1/2).' 
Next, from the proof of Proposition A.1,
$$\sup_{\mathcal{C}_n}\frac{1}{n}\bar{b}_n(\alpha_0)'(P_n-I)\Sigma_n^{-1}[\bar\epsilon_n(\alpha)-\bar\epsilon_n(\alpha_0)] \le \sup_{\mathcal{C}_n}\frac{1}{n}\big\|\Psi_n'\Sigma_n^{-1}[\bar\epsilon_n(\alpha)-\bar\epsilon_n(\alpha_0)]\big\| + \sup_{\mathcal{C}_n}\frac{1}{n}\bar{b}_n(\alpha_0)'\Sigma_n^{-1}[\bar\epsilon_n(\alpha)-\bar\epsilon_n(\alpha_0)] = O_P\big(\sqrt{k_n}\,d_n\delta_n^{\eta}\big).$$
So $\sup_{\mathcal{C}_n}\frac{1}{n}\bar{b}_n(\alpha_0)'(P_n-I)\Sigma_n^{-1}\bar\epsilon_n(\alpha) = O_P(\sqrt{k_n}\,d_n\delta_n^{\eta}+\varphi_n n^{-1/2}) = O_P(\sqrt{k_n}\,d_n\delta_n^{\eta})$.

(v) The same proof as that of Proposition A.1 carries over. So
$$\sup_{\alpha\in\mathcal{C}_n}\frac{1}{n}\big\|\Psi_n'(\bar{b}_n(\alpha)-b_n(\alpha)) - \Psi_n'(\bar{b}_n(\alpha_0)-b_n(\alpha_0))\big\| = O_P\big(\sqrt{k_n}\,d_n\delta_n^{\eta}\big).$$
In addition, $\frac{1}{n}\|\Psi_n'(\bar{b}_n(\alpha_0)-b_n(\alpha_0))\| = O_P(\sqrt{k_n}\,n^{-1/2})$. This implies
$$\sup_{\mathcal{C}_n}\frac{1}{\sqrt{n}}[b_n(\alpha)-\bar{b}_n(\alpha)]'P_n \le O_P(1)\sup_{\alpha\in\mathcal{C}_n}\frac{1}{n}\big\|\Psi_n'(\bar{b}_n(\alpha)-b_n(\alpha))\big\| = O_P(d_n\delta_n^{\eta}+n^{-1/2})\sqrt{k_n}.$$
Also,
$$\frac{1}{n}\|\bar{b}_n(\alpha)-\bar{b}_n(\alpha_0)\|^2 \le O_P(1)\,E\sup_{\mathcal{C}_n}\big[E\big(|\rho(Y_{t+1},\alpha)-\rho(Y_{t+1},\alpha_0)|\,\big|\,\sigma_t(X)\big)\big]^2 + O_P(1)\sup_{\mathcal{C}_n}E|l(h)-l(h_0)|^2 \le O_P(\delta_n^{2\eta}).$$
Hence $\sup_{\mathcal{C}_n}\frac{1}{n}[b_n(\alpha)-\bar{b}_n(\alpha)]'P_n\widehat\Sigma_n^{-1}\rho_n(\alpha) = O_P\big(\sqrt{k_n}\,d_n\delta_n^{\eta}+\sqrt{k_n/n}\big)\big(\sup_x\|\widehat\Sigma(x)-\Sigma(x)\|+d_n\delta_n^{\eta}+\bar\delta_n\big)$ and $\sup_{\mathcal{C}_n}\frac{1}{n}[\bar{b}_n(\alpha)-\bar{b}_n(\alpha_0)]'P_n\widehat\Sigma_n^{-1}\rho_n(\alpha) = O_P\big(\sup_x\|\widehat\Sigma(x)-\Sigma(x)\|\,\delta_n^{\eta}+d_n\delta_n^{2\eta}+\bar\delta_n\delta_n^{\eta}\big)$. So the final rate of the sum of the two is $O_P\big(\delta_n^{\eta}\sup_x\|\widehat\Sigma(x)-\Sigma(x)\| + d_n\delta_n^{2\eta} + \sqrt{k_n}\,d_n\delta_n^{\eta}\bar\delta_n + \sqrt{k_n/n}\,\bar\delta_n\big)$.

(vi), (vii) The proofs are straightforward.

D Proofs for Section 5

D.1 Proof of Theorem 5.1

Proof. Proposition C.1 shows the following local quadratic approximation (LQA):
$$\sup_{\alpha\in\mathcal{C}_n}\ \sup_{|x|\le Cn^{-1/2}}\big|Q_n(\alpha+xu_n)-Q_n(\alpha)-A_n(\alpha(x))\big| = o_P(n^{-1}) \tag{D.1}$$
where $A_n(\alpha(x)) := 2x[n^{-1/2}Z_n + \langle u_n,\alpha-\alpha_0\rangle] + B_n x^2$ with $B_n = 1+o_P(1)$ and $Z_n \to_d N(0,1)$.
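For intuition on the form of the LQA (D.1), a toy finite-dimensional criterion reproduces $A_n$ exactly. This is a purely illustrative sketch, not part of the argument; the quantities are scalar stand-ins and the $Z_n$ term is set to zero.

```python
import numpy as np

# Toy criterion Q(a) = (a - a0)^2 with u_n = 1: the exact expansion
# Q(a + x) - Q(a) = 2*x*(a - a0) + x**2 has the A_n form, here with
# the Z_n term equal to zero and B_n = 1.
a0 = 0.5
Q = lambda a: (a - a0) ** 2

rng = np.random.default_rng(3)
for _ in range(50):
    a, x = rng.uniform(-1.0, 1.0, size=2)
    A_n = 2 * x * (a - a0) + x**2
    assert np.isclose(Q(a + x) - Q(a), A_n)
```

In the paper's infinite-dimensional setting, (D.1) asserts that this quadratic behavior holds only up to a uniform $o_P(n^{-1})$ error over the sieve neighborhood $\mathcal{C}_n$.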
We respectively provide lower and upper bounds for $Q_n(\widehat\alpha^R)-Q_n(\widehat\alpha)$.

Step 1: lower bound. To apply the LQA, we first need to show that $\widehat\alpha^R \in \mathcal{C}_n$ with high probability. In fact, there is $\pi_n^R\alpha_0 \in \mathcal{A}_n^R$ so that
$$Q_n(\widehat\alpha^R) + \lambda_n\mathrm{Pen}(\widehat h^R) \le Q_n(\pi_n^R\alpha_0) + \lambda_n\mathrm{Pen}(\pi_n^R h_0).$$
Given this inequality, the proof of Theorem 3.1 carries over, establishing that $\widehat\alpha^R \in \mathcal{C}_n$ with high probability. We now condition on this event. Hence by (D.1), uniformly for all $|x|\le Cn^{-1/2}$,
$$Q_n(\widehat\alpha^R+xu_n)-Q_n(\widehat\alpha^R) = 2x[n^{-1/2}Z_n+\langle u_n,\widehat\alpha^R-\alpha_0\rangle]+B_nx^2+o_P(n^{-1}) = 2xn^{-1/2}Z_n+B_nx^2+o_P(n^{-1}), \tag{D.2}$$
where the second equality follows from Lemma D.1.

Next, we note a technical difficulty: the inequality $Q_n(\widehat\alpha)+\lambda_n\mathrm{Pen}(\widehat\alpha) \le Q_n(\alpha)+\lambda_n\mathrm{Pen}(\alpha)$ may not hold for $\alpha = \widehat\alpha^R+xu_n$, because $\mathcal{A}_n$ is a nonlinear space, so $\widehat\alpha^R+xu_n$ is not necessarily in $\mathcal{A}_n$. Nevertheless, we can apply this inequality with $\alpha = \pi_n(\widehat\alpha^R+xu_n)$ and show that $|Q_n(\pi_n(\widehat\alpha^R+xu_n))-Q_n(\widehat\alpha^R+xu_n)|$ is negligible. Specifically, by Lemma D.1 and Assumption 4.3,
$$Q_n(\widehat\alpha)-Q_n(\widehat\alpha^R+xu_n) \le \lambda_n\mathrm{Pen}(\pi_n(\widehat\alpha^R+xu_n))-\lambda_n\mathrm{Pen}(\widehat\alpha) + Q_n(\pi_n(\widehat\alpha^R+xu_n))-Q_n(\widehat\alpha^R+xu_n) \le o_P(n^{-1}). \tag{D.3}$$
Thus (D.2) and (D.3) imply $Q_n(\widehat\alpha^R)-Q_n(\widehat\alpha) \ge -2xn^{-1/2}Z_n-B_nx^2-o_P(n^{-1})$.
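The right-hand side of this last inequality is a concave quadratic in $x$, and the next step maximizes it. A quick numerical confirmation of the maximizer and maximized value (a sanity check only; $Z_n$, $B_n$, and $n$ take arbitrary illustrative values):

```python
import numpy as np

# Maximize q(x) = -2*x*Zn/sqrt(n) - Bn*x^2, the quadratic on the
# right-hand side of the lower bound (illustrative scalar values).
def neg_quad(x, Zn, Bn, n):
    return -2.0 * x * Zn / np.sqrt(n) - Bn * x**2

Zn, Bn, n = 0.8, 1.05, 10_000            # arbitrary illustrative values
x_star = -Zn / (Bn * np.sqrt(n))          # closed-form maximizer x* = -Zn * Bn^{-1} * n^{-1/2}
xs = np.linspace(-0.1, 0.1, 200_001)
x_grid = xs[np.argmax(neg_quad(xs, Zn, Bn, n))]
assert abs(x_grid - x_star) < 1e-5
# The maximized value is Zn^2 * Bn^{-1} * n^{-1}, the rate in Step 1.
assert np.isclose(neg_quad(x_star, Zn, Bn, n), Zn**2 / (Bn * n))
```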
Take $x = -Z_nB_n^{-1}n^{-1/2}$, which maximizes $-2xn^{-1/2}Z_n-B_nx^2$; then $Q_n(\widehat\alpha^R)-Q_n(\widehat\alpha) \ge Z_n^2B_n^{-1}n^{-1}-o_P(n^{-1})$.

Step 2: upper bound. Fix $x^*$ determined as in Lemma D.1; this lemma shows that $x^* = n^{-1/2}Z_nB_n^{-1}+o_P(n^{-1/2})$ and that $|Q_n(\pi_n^R(\widehat\alpha+x^*u_n))-Q_n(\widehat\alpha+x^*u_n)| = o_P(n^{-1})$. Hence by (D.1) again,
$$\begin{aligned}
Q_n(\widehat\alpha^R)-Q_n(\widehat\alpha) &\le Q_n(\pi_n^R(\widehat\alpha+x^*u_n))-Q_n(\widehat\alpha) + \lambda_n\big(\mathrm{Pen}(\pi_n^R(\widehat\alpha+x^*u_n))-\mathrm{Pen}(\widehat\alpha^R)\big)\\
&= Q_n(\widehat\alpha+x^*u_n)-Q_n(\widehat\alpha)+o_P(n^{-1})\\
&= 2x^*n^{-1/2}[Z_n+n^{1/2}\langle u_n,\widehat\alpha-\alpha_0\rangle]+B_nx^{*2}+o_P(n^{-1})\\
&= B_nx^{*2}+o_P(n^{-1}) = Z_n^2B_n^{-1}n^{-1}+o_P(n^{-1}),
\end{aligned}$$
where the third equality is due to the proof of Theorem 4.1, which gives $Z_n+\sqrt{n}\langle u_n,\widehat\alpha-\alpha_0\rangle = o_P(1)$.

Step 3: matching bounds. Together, we have
$$S_n(\phi_0) = n\big(Q_n(\widehat\alpha^R)-Q_n(\widehat\alpha)\big) = B_n^{-1}Z_n^2+o_P(1) \to_d \chi^2_1,$$
given that $B_n \to_P 1$, as proved in Proposition C.1.

Lemma D.1 (for Theorem 5.1). Suppose
(a) $\sup_{\alpha\in\mathcal{C}_n}\frac{1}{n}\sum_{t=1}^n[m(X_t,\pi_n\alpha)-m(X_t,\alpha)]^2 = O_P(\mu_n^2)$ and $\sup_{\alpha\in\mathcal{C}_n,\,\phi(\alpha)=\phi_0}\frac{1}{n}\sum_{t=1}^n[m(X_t,\pi_n^R(\alpha+xu_n))-m(X_t,\alpha+xu_n)]^2 = O_P(\mu_n^2)$;
(b) $\mu_n\bar\delta_n = o(n^{-1})$;
(c) $t \mapsto \phi(\alpha+tu_n)$ is continuous.
Then
(i) $\langle u_n,\widehat\alpha^R-\alpha_0\rangle = o_P(n^{-1/2})$;
(ii) $\sup_{|x|\le Cn^{-1/2}}|Q_n(\pi_n(\widehat\alpha^R+xu_n))-Q_n(\widehat\alpha^R+xu_n)| = o_P(n^{-1})$;
(iii) there is $x^*$ so that $\phi(\widehat\alpha+x^*u_n) = \phi_0$, $|Q_n(\pi_n^R(\widehat\alpha+x^*u_n))-Q_n(\widehat\alpha+x^*u_n)| = o_P(n^{-1})$, and $x^* = n^{-1/2}Z_nB_n^{-1}+o_P(n^{-1/2})$.

Proof.
(i) Note that $\phi(\widehat\alpha^R)-\phi(\alpha_0) = 0$. By Assumption 4.1,
$$\Big|\frac{d\phi(\alpha_0)}{d\alpha}[\widehat\alpha^R-\alpha_0]\Big| = o\big(\|v_n^*\|n^{-1/2}\big).$$
By the Riesz representation theorem,
$$\frac{d\phi(\alpha_0)}{d\alpha}[\widehat\alpha^R-\alpha_0] = \frac{d\phi(\alpha_0)}{d\alpha}[\widehat\alpha^R-\alpha_{0,n}] + \frac{d\phi(\alpha_0)}{d\alpha}[\alpha_{0,n}-\alpha_0] = \|v_n^*\|\langle u_n,\widehat\alpha^R-\alpha_0\rangle + o\big(\|v_n^*\|n^{-1/2}\big),$$
with the definition $u_n = v_n^*/\|v_n^*\|$. This finishes the proof.

(ii) The proof is the same as that of Lemma C.1.

(iii) First, we prove that there is $x^*$ so that $\phi(\widehat\alpha+x^*u_n) = \phi_0$. Define $F(x) := \langle v_n^*,\alpha-\alpha_0\rangle + x\|v_n^*\|$ and $R(x) := \phi(\alpha+xu_n)-\phi(\alpha_0)$. By Assumption 4.1, there is a positive sequence $b_n = o(\|v_n^*\|n^{-1/2})$ such that, uniformly for all $\alpha\in\mathcal{C}_n$ and all $x\le Cn^{-1/2}$,
$$|R(x)-F(x)| \le b_n.$$
Now fix some $r$ such that $|r-\langle v_n^*,\alpha-\alpha_0\rangle| \le C\|v_n^*\|n^{-1/2}$, and define $x_1 = (r-\langle v_n^*,\alpha-\alpha_0\rangle-2b_n)\|v_n^*\|^{-1}$ and $x_2 = (r-\langle v_n^*,\alpha-\alpha_0\rangle+2b_n)\|v_n^*\|^{-1}$. This ensures that $F(x_1)+2b_n = F(x_2)-2b_n = r$ and $|x_1|+|x_2| \le Cn^{-1/2}$. Therefore,
$$R(x_1) \le F(x_1)+b_n < r, \qquad R(x_2) \ge F(x_2)-b_n > r.$$
Hence there is $x^*$ between $x_1$ and $x_2$ so that $R(x^*) = r$. In the above proof, suppose $r = 0$ and $\alpha = \widehat\alpha$ are admitted; then $\phi(\widehat\alpha+x^*u_n) = \phi(\alpha_0)$. To show the admissibility, we note from (C.2) that $n^{-1/2}Z_n + \|v_n^*\|^{-1}\langle v_n^*,\widehat\alpha-\alpha_0\rangle = o_P(n^{-1/2})$. Hence indeed, for any $\epsilon>0$ there is $C>0$ such that
$$|\langle v_n^*,\alpha-\alpha_0\rangle| = \|v_n^*\|n^{-1/2}|Z_n| + o_P\big(\|v_n^*\|n^{-1/2}\big) \le C\|v_n^*\|n^{-1/2}$$
with probability at least $1-\epsilon$.
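The intermediate-value argument above can be illustrated numerically. This is a toy sketch only: the perturbation $R$ and all constants are illustrative choices satisfying $|R-F|\le b_n$, not objects from the paper.

```python
import numpy as np

# F(x) = <v*, a - a0> + x*||v*||; R is any continuous function with |R - F| <= b_n.
v_norm, b_n, inner = 2.0, 0.01, 0.3          # illustrative ||v_n^*||, b_n, <v_n^*, a - a0>
F = lambda x: inner + x * v_norm
R = lambda x: F(x) + b_n * np.sin(50 * x)     # toy perturbation with |R - F| <= b_n

r = 0.0
x1 = (r - inner - 2 * b_n) / v_norm
x2 = (r - inner + 2 * b_n) / v_norm
# R(x1) <= F(x1) + b_n = r - b_n < r and R(x2) >= F(x2) - b_n = r + b_n > r,
# so by the intermediate value theorem R crosses r between x1 and x2.
assert R(x1) < r < R(x2)
xs = np.linspace(x1, x2, 100_001)
x_root = xs[np.argmin(np.abs(R(xs) - r))]     # grid approximation to x*
assert abs(R(x_root) - r) < 1e-3
```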
Now
$$|x^* - n^{-1/2}Z_n| \le |x_1 - n^{-1/2}Z_n| + |x_2 - n^{-1/2}Z_n| \le \frac{2|b_n|}{\|v_n^*\|} + o_P(n^{-1/2}) = o_P(n^{-1/2}).$$
Finally, the proof of $|Q_n(\pi_n^R(\widehat\alpha+x^*u_n))-Q_n(\widehat\alpha+x^*u_n)| = o_P(n^{-1})$ is the same as part (ii).

D.2 Proof of Theorem 5.2

Proof. As in the proof of Theorem 5.1, we respectively provide lower and upper bounds for $\frac{1}{n}\widehat S_n(\phi_0) = L_n(\widehat\alpha^R,\phi_0) - L_n(\widehat\alpha,\widehat\gamma)$. Note that $L_n(\widehat\alpha,\widehat\gamma) = Q_n(\widehat\alpha)$. Let
$$\begin{aligned}
g_1 &= (\widehat\phi(\widehat\alpha^R)-\gamma_0)^2\widehat\Sigma_2^{-1},\\
g_2 &= \phi(\widehat\alpha^R)-\phi(\alpha_0),\\
g_3 &= [\phi_n(\alpha_0)-\phi(\alpha_0)] + [\phi(\widehat\alpha)-\gamma_0],\\
g_4 &= [\phi_n(\alpha_0)-\gamma_0+g_2]^2\widehat\Sigma_2^{-1},\\
g_5 &= \phi_n(\alpha_0)-\gamma_0-\|v_n^*\|n^{-1/2}Z_n,\\
g_6 &= n^{-1/2}Z_n + \|v_n^*\|^{-1}g_2.
\end{aligned}$$
Also note that $\widehat\alpha^R \in \mathcal{C}_n$ with high probability, by Lemma D.2.
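Among these quantities, the bound-matching step later uses the identity $g_3-g_5 = \phi(\widehat\alpha)-\phi(\alpha_0)+\|v_n^*\|n^{-1/2}Z_n$, which follows directly from the definitions. A quick numerical check (all values are arbitrary illustrative scalars):

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative scalar stand-ins for phi_n(a0), phi(a0), phi(a_hat), gamma0, ||v*||, Zn.
phi_n_a0, phi_a0, phi_hat, gamma0, v_norm, Zn = rng.uniform(0.1, 2.0, size=6)
n = 400                                   # illustrative sample size

g3 = (phi_n_a0 - phi_a0) + (phi_hat - gamma0)
g5 = phi_n_a0 - gamma0 - v_norm * Zn / np.sqrt(n)
# The identity used when matching the bounds in Step 3:
assert np.isclose(g3 - g5, phi_hat - phi_a0 + v_norm * Zn / np.sqrt(n))
```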
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' We now condition on this event.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Step 1: lower bound.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Due to λnPen(�αR + xun) − λnPen(�α) = oP (n−1) and �αR ∈ An,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' so uniformly for all |x| ≤ Cn−1/2,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Ln(�α,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' �γ) − Ln(�αR,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' φ0) = Qn(�α) − Qn(�αR) − g1 + oP (n−1) ≤(a) Qn(πn(�αR + xun)) − Qn(�αR + xun) + Qn(�αR + xun) − Qn(�αR) − g1 + oP (n−1) =(b) Qn(�αR + xun) − Qn(�αR) − g1 + oP (n−1) =(c) 2x[n−1/2Zn + ⟨un,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' �αR − α0⟩] + x2 − g1 + oP (n−1),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' =(d) 2x[n−1/2Zn + ∥v∗ n∥−1 dφ(α0) dα [�αR − α0]] + x2 − g1 + oP (n−1),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' =(e) 2x[n−1/2Zn + ∥v∗ n∥−1g2] + x2 − g1 + oP (n−1) =(f) 2x[n−1/2Zn + ∥v∗ n∥−1g2] + x2 − g4 � �� � F (x) +oP(n−1),' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' where in (a) we used Qn(�α) ≤ Qn(πn(�αR + xun));' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (b) follows from |Qn(πn(�αR + xun)) − Qn(�αR + xun)| ≤ oP (n−1) following the same proof of that of Lemma D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='1(ii);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (c) is from (D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='1);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (d) is from the Riesz representation: (⟨v∗ n, α0 − α0,n⟩ = 0) dφ(α0) dα [�αR − α0] = dφ(α0) dα [�αR − α0,n] + dφ(α0) dα [α0,n − α0] = ⟨v∗ n, �αR − α0,n⟩ + oP (n−1/2∥v∗ n∥);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (e) is from Assumption 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='1;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (f) is from Lemma D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' We choose x = x∗ to minimize F(x) on the right hand side, leading to the choice x∗ = −[n−1/2Zn + ∥v∗ n∥−1g2] = −g6.' 
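The minimization of $F$ just performed is elementary; a numerical confirmation that $x^* = -g_6$ minimizes $F(x) = 2xg_6 + x^2 - g_4$ and that $-F(x^*) = g_6^2 + g_4$ (illustrative values for $g_6$, $g_4$):

```python
import numpy as np

g6, g4 = 0.013, 0.002                     # arbitrary illustrative values
F = lambda x: 2.0 * x * g6 + x**2 - g4    # Step 1 quadratic, g6 = n^{-1/2}Zn + ||v*||^{-1} g2

xs = np.linspace(-0.1, 0.1, 400_001)
x_grid = xs[np.argmin(F(xs))]
assert abs(x_grid - (-g6)) < 1e-5         # minimizer is x* = -g6
assert np.isclose(-F(-g6), g6**2 + g4)    # hence -F(x*) = g6^2 + g4
```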
We shall verify that $|x^*| = O_P(n^{-1/2})$ in Step 3 below. Supposing for now that this is true, we have obtained the lower bound:
$$\frac{1}{n}\widehat S_n(\phi_0) \ge -F(x^*)-o_P(n^{-1}), \qquad \text{where } -F(x^*) = [n^{-1/2}Z_n+\|v_n^*\|^{-1}g_2]^2+g_4 = g_6^2+g_4.$$

Step 2: upper bound. Uniformly for all $|x|\le Cn^{-1/2}$,
$$\begin{aligned}
L_n(\widehat\alpha^R,\phi_0)-L_n(\widehat\alpha,\widehat\gamma) &\le L_n(\pi_n(\widehat\alpha+xu_n),\phi_0)-L_n(\widehat\alpha,\widehat\gamma)+\lambda_n\mathrm{Pen}(\pi_n(\widehat\alpha+xu_n))-\lambda_n\mathrm{Pen}(\widehat\alpha^R)+o_P(n^{-1})\\
&\stackrel{(g)}{\le} L_n(\widehat\alpha+xu_n,\phi_0)-L_n(\widehat\alpha,\widehat\gamma)+o_P(n^{-1})\\
&= Q_n(\widehat\alpha+xu_n)-Q_n(\widehat\alpha)+(\widehat\phi(\widehat\alpha+xu_n)-\phi_0)^2\widehat\Sigma_2^{-1}+o_P(n^{-1})\\
&\stackrel{(h)}{=} x^2+2x[n^{-1/2}Z_n+\langle\widehat\alpha-\alpha_0,u_n\rangle]+(\widehat\phi(\widehat\alpha+xu_n)-\phi_0)^2\widehat\Sigma_2^{-1}+o_P(n^{-1})\\
&\stackrel{(i)}{=} x^2+(\widehat\phi(\widehat\alpha+xu_n)-\gamma_0)^2\widehat\Sigma_2^{-1}+o_P(n^{-1})\\
&\stackrel{(j)}{=} \underbrace{x^2+[x\|v_n^*\|+g_3]^2\widehat\Sigma_2^{-1}}_{G(x)}+o_P(n^{-1}),
\end{aligned}$$
where (g) follows from Lemma D.2 and $\lambda_n\mathrm{Pen}(\pi_n(\widehat\alpha+xu_n))-\lambda_n\mathrm{Pen}(\widehat\alpha^R) = o_P(n^{-1})$; (h) is from (D.1); (i) is from (C.2); (j) is from Lemma D.3. We choose $x = \tau^*$ to minimize $G(x)$, leading to the choice $\tau^* = -g_3\|v_n^*\|(\|v_n^*\|^2+\widehat\Sigma_2)^{-1}$. It is easy to see that $|\tau^*| = O_P(n^{-1/2})$, by the following argument: from the proof of Theorem 4.2, $g_3 = O_P(\sigma n^{-1/2})$, and $\sigma^2 = \Sigma_2+\|v_n^*\|^2$. So provided that $\widehat\Sigma_2-\Sigma_2 = o_P(1)\Sigma_2$,
$$|\tau^*| = O_P(n^{-1/2})\frac{\sqrt{\Sigma_2+\|v_n^*\|^2}\,\|v_n^*\|}{\|v_n^*\|^2+\widehat\Sigma_2} = O_P(n^{-1/2}).$$
Thus $\tau^*$ is admitted. Then we have obtained the upper bound:
$$\frac{1}{n}\widehat S_n(\phi_0) \le G(\tau^*)+o_P(n^{-1}), \qquad \text{where } G(\tau^*) = \frac{g_3^2}{\|v_n^*\|^2+\widehat\Sigma_2}.$$

Step 3: matching bounds.
We now show that the lower and upper bounds match, that is, $-F(x^*) = G(\tau^*)+o_P(n^{-1})$, which requires analyzing $g_2 = \phi(\widehat\alpha^R)-\phi(\alpha_0)$ and $g_6$. First, Lemma D.3 yields, uniformly in $|x|\le Cn^{-1/2}$,
$$(\widehat\phi(\widehat\alpha^R+xu_n)-\gamma_0)^2\widehat\Sigma_2^{-1} - (\widehat\phi(\widehat\alpha^R)-\gamma_0)^2\widehat\Sigma_2^{-1} = H(x)+o_P(n^{-1}) \tag{D.4}$$
where $H(x) = \widehat\Sigma_2^{-1}x^2\|v_n^*\|^2 + 2x\widehat\Sigma_2^{-1}\|v_n^*\|[\widehat\phi(\alpha_0)-\gamma_0+g_2]$. Next, the basic inequality yields
$$L_n(\widehat\alpha^R,\phi_0) \le L_n(\pi_n(\widehat\alpha^R+xu_n),\gamma_0)+o_P(n^{-1}) \le L_n(\widehat\alpha^R+xu_n,\gamma_0)+o_P(n^{-1}) \tag{D.5}$$
where the first inequality follows from the assumption that $\lambda_n\mathrm{Pen}(\widehat\alpha^R+xu_n)-\lambda_n\mathrm{Pen}(\widehat\alpha^R) = o_P(n^{-1})$; the second inequality follows from Lemma D.2.

Uniformly for $|x|\le Cn^{-1/2}$,
$$Q_n(\widehat\alpha^R+xu_n)-Q_n(\widehat\alpha^R) = o_P(n^{-1})+x^2+2x[n^{-1/2}Z_n+\langle u_n,\widehat\alpha^R-\alpha_0\rangle] = o_P(n^{-1})+x^2+2x[n^{-1/2}Z_n+g_2\|v_n^*\|^{-1}],$$
where $\langle u_n,\widehat\alpha^R-\alpha_0\rangle = \|v_n^*\|^{-1}\frac{d\phi(\alpha_0)}{d\alpha}[\widehat\alpha^R-\alpha_0]+o_P(n^{-1/2}) = \|v_n^*\|^{-1}g_2+o_P(n^{-1/2})$. This, along with (D.4) and (D.5), gives rise to
$$\begin{aligned}
0 &\le x^2+2x[n^{-1/2}Z_n+g_2\|v_n^*\|^{-1}]+H(x)+o_P(n^{-1})\\
&= x^2(1+\widehat\Sigma_2^{-1}\|v_n^*\|^2)+2x[n^{-1/2}Z_n+g_2\|v_n^*\|^{-1}+\widehat\Sigma_2^{-1}\|v_n^*\|(\widehat\phi(\alpha_0)-\gamma_0+g_2)]+o_P(n^{-1})\\
&= x^2(1+\widehat\Sigma_2^{-1}\|v_n^*\|^2)+2x[g_6+\widehat\Sigma_2^{-1}\|v_n^*\|(\phi_n(\alpha_0)-\gamma_0+g_2)]+o_P(n^{-1}),
\end{aligned} \tag{D.6}$$
where in the last equality $\widehat\Sigma_2^{-1}\|v_n^*\|(\widehat\phi(\alpha_0)-\phi_n(\alpha_0)) = o_P(n^{-1/2})$, from Lemma D.2:
$$\big|\widehat\Sigma_2^{-1}\|v_n^*\|(\widehat\phi(\alpha_0)-\phi_n(\alpha_0))\big| \le \Big|\widehat\Sigma_2^{-1}\|v_n^*\|\frac{1}{n}\sum_{t=1}^n(\Gamma(X_t)-\widehat\Gamma_t)\rho(Y_{t+1},\alpha_0)\Big| = o_P(n^{-1/2}).$$
Hence (D.6) implies there is some $\bar\eta_n = o_P(n^{-1})$ so that
$$x^2(1+\widehat\Sigma_2^{-1}\|v_n^*\|^2)+2x[g_6+\widehat\Sigma_2^{-1}\|v_n^*\|(\phi_n(\alpha_0)-\gamma_0+g_2)]+\bar\eta_n \ge 0. \tag{D.7}$$

We now derive some important intermediate results from (D.7). First, let $C_n := \min\{\|v_n^*\|\Sigma_2^{-1/2},\ \|v_n^*\|^2\Sigma_2^{-1},\ \|v_n^*\|\sigma\Sigma_2^{-1}\}$. It is known that $(C_n+C_n^2)/(1+\widehat\Sigma_2^{-1}\|v_n^*\|^2) \le 2$ because $\|v_n^*\| \le \sigma$. So $\bar\eta_n = o_P(n^{-1})C_n^2/(1+\widehat\Sigma_2^{-1}\|v_n^*\|^2)$, implying $\bar\eta_n n^{1/2}C_n^{-1} \ll n^{-1/2}C_n/(1+\widehat\Sigma_2^{-1}\|v_n^*\|^2)$. Hence there is a positive sequence $\epsilon_n = O_P(n^{-1/2})$ so that $\bar\eta_n n^{1/2}C_n^{-1} \ll \epsilon_n \ll n^{-1/2}C_n/(1+\widehat\Sigma_2^{-1}\|v_n^*\|^2)$, and therefore $\bar\eta_n/\epsilon_n+\epsilon_n(1+\widehat\Sigma_2^{-1}\|v_n^*\|^2) = o_P(n^{-1/2}C_n)$. Take $x = \pm\epsilon_n$ and divide by $2\epsilon_n$ in (D.7).
We reach four intermediate results:
\[
g_6 + \widehat\Sigma_2^{-1}\|v_n^*\|(\phi_n(\alpha_0) - \gamma_0 + g_2) = O_P\big(\bar\eta_n/\epsilon_n + \epsilon_n(1 + \widehat\Sigma_2^{-1}\|v_n^*\|^2)\big) = o_P(n^{-1/2} C_n) \quad \text{(D.8)}
\]
\[
(1 + \widehat\Sigma_2^{-1}\|v_n^*\|^2) g_6 + \widehat\Sigma_2^{-1}\|v_n^*\| g_5 = o_P(n^{-1/2} C_n) \quad \text{(D.9)}
\]
\[
g_4 = (o_P(n^{-1/2} C_n) - g_6)^2 \widehat\Sigma_2 \|v_n^*\|^{-2} \quad \text{(D.10)}
\]
\[
g_6 = O_P(n^{-1/2}) \quad \text{(D.11)}
\]
where (D.8) follows from (D.7) with $x = \pm\epsilon_n$; the left hand sides of (D.8) and (D.9) are equal; (D.10) is from (D.8) and the definition of $g_4$; (D.11) is from (D.9), $g_5 = O_P(\sigma n^{-1/2})$ and the fact that $o_P(n^{-1/2} C_n) = \sigma^2 \Sigma_2^{-1} O_P(n^{-1/2})$. Also, the proof of (D.11) does not rely on the conclusion of Step 1, so it verifies that $|x^*| = O_P(n^{-1/2})$, a claim used in Step 1.

We are now ready to match the bounds. From (D.10) and (D.11),
\[
-F(x^*) = g_6^2 + g_4 = g_6^2 + (o_P(n^{-1/2} C_n) - g_6)^2 \widehat\Sigma_2 \|v_n^*\|^{-2} = g_6^2 (1 + \widehat\Sigma_2 \|v_n^*\|^{-2}) + o_P(n^{-1})
\]
\[
\stackrel{(k)}{=} \frac{[o_P(n^{-1/2} C_n) - \widehat\Sigma_2^{-1}\|v_n^*\| g_5]^2}{(1 + \widehat\Sigma_2^{-1}\|v_n^*\|^2)^2}\,(1 + \widehat\Sigma_2 \|v_n^*\|^{-2}) + o_P(n^{-1})
= \frac{g_5^2}{\widehat\Sigma_2 + \|v_n^*\|^2} + o_P(n^{-1}) \stackrel{(l)}{=} \frac{g_3^2}{\widehat\Sigma_2 + \|v_n^*\|^2} + o_P(n^{-1}) = G(\tau^*) + o_P(n^{-1}),
\]
where (k) is from (D.9); (l) is from the fact that (due to (C.2))
\[
|g_3^2 - g_5^2| \le \big|\phi(\widehat\alpha) - \phi(\alpha_0) + \|v_n^*\| n^{-1/2} Z_n\big|\, O_P(n^{-1/2}\sigma)
\le \left|\frac{d\phi(\alpha_0)}{d\alpha}[\widehat\alpha - \alpha_0] + \|v_n^*\| n^{-1/2} Z_n\right| O_P(n^{-1/2}\sigma) + o_P(\sigma^2 n^{-1})
\]
\[
= \left|\langle \widehat\alpha - \alpha_0, u_n \rangle + n^{-1/2} Z_n + o_P(n^{-1/2})\right| O_P(\|v_n^*\| n^{-1/2}\sigma) + o_P(\sigma^2 n^{-1}) = o_P(\sigma^2 n^{-1}).
\]
Thus we have proved that the upper and lower bounds match up to $o_P(n^{-1})$, implying
\[
\widehat S_n(\phi_0) = n G(\tau^*) + o_P(1) = \frac{n g_3^2}{\widehat\sigma^2} + o_P(1) \to_d \chi^2_1,
\]
where the convergence in distribution follows from (C.5).

Lemma D.2 (for Theorem 5.2). Suppose $\big(\delta_n^\eta \sup_x |\widehat\Sigma(x) - \Sigma(x)| + \sqrt{k_n} d_n \delta_n^\eta + \phi_n^2\big) = o_P(n^{-1/2} \min\{1, \sigma^{-1}\})$. In addition, suppose $(1 + \|v_n^*\|) \sup_{\alpha \in \mathcal{C}_n} |\phi(\pi_n\alpha) - \phi(\alpha)| = o(n^{-1/2})$.
Write $\widehat\phi(\alpha) := \frac{1}{n}\sum_{t=1}^n [l(h(W_t)) - \widehat\Gamma_t \rho(Y_{t+1}, \alpha)]$. Then

(i) $\|\widehat\alpha^R - \alpha_0\|_{\infty,\omega} = O_P(\delta_n)$ and $Q(\widehat\alpha^R) \le O_P(\bar\delta_n^2)$.

(ii) $\sup_{\alpha_1,\alpha_2 \in \mathcal{C}_n} [\widehat\phi(\alpha_1) - \widehat\phi(\alpha_2)] - [\phi(\alpha_1) - \phi(\alpha_2)] = O_P\big(\delta_n^\eta \sup_x |\widehat\Sigma(x) - \Sigma(x)| + \sqrt{k_n} d_n \delta_n^\eta + \phi_n^2\big)$.

(iii) $|L_n(\pi_n(\widehat\alpha + x u_n), \gamma_0) - L_n(\widehat\alpha + x u_n, \gamma_0)| = o_P(n^{-1})$.

(iv) $|L_n(\pi_n(\widehat\alpha^R + x u_n), \gamma_0) - L_n(\widehat\alpha^R + x u_n, \gamma_0)| = o_P(n^{-1})$.

Proof. (i) The inequality $L_n(\widehat\alpha^R, \gamma_0) + \lambda_n \mathrm{Pen}(\widehat h^R) \le L_n(\pi_n\alpha_0, \gamma_0) + \lambda_n \mathrm{Pen}(\pi_n h_0)$ implies
\[
Q_n(\widehat\alpha^R) \le Q_n(\pi_n\alpha_0) + F_n(\pi_n\alpha_0) + O_P(\lambda) = O(\bar\delta_n^2) + F_n(\pi_n\alpha_0),
\]
where $F_n(\alpha) := (\widehat\phi(\alpha) - \gamma_0)'\widehat\Sigma_2^{-1}(\widehat\phi(\alpha) - \gamma_0)$. We now bound $F_n(\pi_n\alpha_0)$. Note that
\[
\widehat\phi(\pi_n\alpha_0) - \gamma_0 = \frac{1}{n}\sum_{t=1}^n l(\pi_n h_0(W_t)) - E l(h_0(W_t)) - \frac{1}{n}\sum_{t=1}^n (\widehat\Gamma_t - \Gamma(X_t))\rho(Y_{t+1}, \pi_n\alpha_0)
- \frac{1}{n}\sum_{t=1}^n [\Gamma(X_t)\rho(Y_{t+1}, \pi_n\alpha_0) - E\Gamma_t\rho(Y_{t+1}, \alpha_0)] - E\Gamma_t[m(X_t, \alpha_0) - m(X_t, \pi_n\alpha_0)].
\]
The first term is bounded by $O_P(n^{-1/2}) + E[l(\pi_n h_0(W_t)) - l(h_0(W_t))]$; the second term is bounded by $O_P(\bar\delta_n)$, following from the same argument as that for (C.4); the third and fourth terms are bounded by $O_P(n^{-1/2}) + E\Gamma(X_t)[m(X_t, \pi_n\alpha_0) - m(X_t, \alpha_0)] \le O_P(\sqrt{Q(\pi_n\alpha_0)})$. Hence $\widehat\phi(\pi_n\alpha_0) - \gamma_0 = O_P(\bar\delta_n)$. This implies $F_n(\pi_n\alpha_0) = O_P(\bar\delta_n^2)$, which yields $Q_n(\widehat\alpha^R) = O_P(\bar\delta_n^2)$. Then from the proof of Theorem 3.1,
\[
Q(\widehat\alpha^R) \le C E \widehat m(X_t, \widehat\alpha^R)^2 + O_P(\bar\delta_n^2) \le C \frac{1}{n}\sum_t \widehat m(X_t, \widehat\alpha^R)^2 + O_P(\bar\delta_n^2) \le Q_n(\widehat\alpha^R) + O_P(\bar\delta_n^2) = O_P(\bar\delta_n^2).
\]
It also implies $\|\widehat\alpha^R - \pi_n\alpha_0\| \le \|\widehat\alpha^R - \alpha_0\| + \|\pi_n\alpha_0 - \alpha_0\| = O_P(\bar\delta_n)$, and hence
\[
\|\widehat\alpha^R - \alpha_0\|_{\infty,\omega} \le \|\pi_n\alpha_0 - \alpha_0\|_{\infty,\omega} + \|\widehat\alpha^R - \pi_n\alpha_0\|_{\infty,\omega} = \|\pi_n\alpha_0 - \alpha_0\|_{\infty,\omega} + O_P(\omega_n(\bar\delta_n)) = O_P(\delta_n).
\]
(ii) Let $a_1 = [(\phi_n(\alpha_1) - \phi(\alpha_1)) - (\phi_n(\alpha_2) - \phi(\alpha_2))]$. We have
\[
[\widehat\phi(\alpha_1) - \widehat\phi(\alpha_2)] - [\phi(\alpha_1) - \phi(\alpha_2)] = a_1 + \frac{1}{n}\sum_{t=1}^n (\Gamma(X_t) - \widehat\Gamma_t)\rho(Y_{t+1}, \alpha_1) + \frac{1}{n}\sum_{t=1}^n (\widehat\Gamma_t - \Gamma(X_t))\rho(Y_{t+1}, \alpha_2)
\le O_P\Big(\delta_n^\eta \sup_x |\widehat\Sigma(x) - \Sigma(x)| + \sqrt{k_n} d_n \delta_n^\eta + \phi_n^2\Big),
\]
where the inequality follows from the bounds for (C.4) and Lemma C.3.

(iii) Let $\alpha = \widehat\alpha + x u_n$. By the same proof as Lemma D.1(ii), $Q_n(\pi_n\alpha) - Q_n(\alpha) = o_P(n^{-1})$. Next,
\[
\widehat\phi(\alpha) - \gamma_0 = a_1(\alpha) + a_3 + a_4(\alpha) - \frac{1}{n}\sum_t (\widehat\Gamma_t - \Gamma(X_t))\rho(Y_{t+1}, \alpha),
\]
where $a_1(\alpha) := \phi_n(\alpha) - \phi(\alpha) - [\phi_n(\alpha_0) - \phi(\alpha_0)]$, $a_3 := \phi_n(\alpha_0) - \phi(\alpha_0)$, and $a_4(\alpha) := \phi(\alpha) - \phi(\alpha_0)$.
By Lemma C.3 and the same proof as for bounding (C.4),
\[
a_1 - \frac{1}{n}\sum_t (\widehat\Gamma_t - \Gamma(X_t))\rho(Y_{t+1}, \alpha) = O_P\Big(\delta_n^\eta \sup_x |\widehat\Sigma(x) - \Sigma(x)| + \sqrt{k_n} d_n \delta_n^\eta + \phi_n^2\Big) = o_P(n^{-1/2}\min\{1, \sigma^{-1}\}),
\]
where the last equality follows from the assumption $\big(\delta_n^\eta \sup_x |\widehat\Sigma(x) - \Sigma(x)| + \sqrt{k_n} d_n \delta_n^\eta + \phi_n^2\big) = o_P(n^{-1/2}\min\{1, \sigma^{-1}\})$. The same bound holds when $\alpha$ is replaced with $\pi_n\alpha$. Meanwhile, by the proof of Theorem 4.2, $a_3 = O_P(\sigma n^{-1/2})$. To bound $a_4(\alpha)$, first note that
\[
\|\pi_n(\widehat\alpha + x u_n) - \alpha_0\|^2 \le C E m(X_t, \pi_n(\widehat\alpha + x u_n))^2 \le C E[m(X_t, \pi_n(\widehat\alpha + x u_n)) - m(X_t, \widehat\alpha + x u_n)]^2
\]
\[
+\, C E[m(X_t, \widehat\alpha + x u_n) - m(X_t, \widehat\alpha)]^2 + C Q(\widehat\alpha) \le O_P(\mu_n^2 + \bar\delta_n^2), \qquad \|(\widehat\alpha + x u_n) - \alpha_0\|^2 \le O_P(\bar\delta_n^2).
\]
So by Assumption 4.1, and the fact that $\langle v_n^*, \alpha_0 - \alpha_{0,n} \rangle = 0$,
\[
a_4(\widehat\alpha + x u_n) \le |\phi(\widehat\alpha + x u_n) - \phi(\alpha_0)| \le \left|\frac{d\phi(\alpha_0)}{d\alpha}[\widehat\alpha + x u_n - \alpha_0]\right| \le \left|\frac{d\phi(\alpha_0)}{d\alpha}[\widehat\alpha + x u_n - \alpha_{0,n}]\right| + \left|\frac{d\phi(\alpha_0)}{d\alpha}[\alpha_{0,n} - \alpha_0]\right|
\]
\[
= o_P(\|v_n^*\|) n^{-1/2} + |\langle \widehat\alpha - \alpha_0, v_n^* \rangle + x\langle u_n, v_n^* \rangle| = O_P(\|v_n^*\| n^{-1/2}),
\]
\[
a_4(\pi_n(\widehat\alpha + x u_n)) \le |\phi(\pi_n(\widehat\alpha + x u_n)) - \phi(\alpha_0)| \le \left|\frac{d\phi(\alpha_0)}{d\alpha}[\pi_n(\widehat\alpha + x u_n) - \alpha_0]\right| \le \left|\frac{d\phi(\alpha_0)}{d\alpha}[\pi_n(\widehat\alpha + x u_n) - \alpha_{0,n}]\right| + \left|\frac{d\phi(\alpha_0)}{d\alpha}[\alpha_{0,n} - \alpha_0]\right|
\]
\[
= o_P(\|v_n^*\|) n^{-1/2} + |\langle \pi_n(\widehat\alpha + x u_n) - \alpha_0, v_n^* \rangle| \le o_P(\|v_n^*\|) n^{-1/2} + \|\pi_n(\widehat\alpha + x u_n) - \alpha_0\|\,\|v_n^*\| \le O_P(\bar\delta_n)\|v_n^*\|.
\]
Also, by the proof of Lemma D.1(ii), $Q_n(\pi_n(\widehat\alpha + x u_n)) - Q_n(\widehat\alpha + x u_n) = o_P(n^{-1})$.
Together, with $\alpha = \widehat\alpha + x u_n$, $|\widehat\phi(\alpha) - \gamma_0| \le o_P(n^{-1/2}\min\{1, \sigma^{-1}\}) + O_P(\|v_n^*\| n^{-1/2})$, and writing $c_n := |\phi(\pi_n\alpha) - \phi(\alpha)|$,
\[
L_n(\pi_n(\widehat\alpha + x u_n), \gamma_0) - L_n(\widehat\alpha + x u_n, \gamma_0) = Q_n(\pi_n\alpha) - Q_n(\alpha) + (\widehat\phi(\pi_n\alpha) - \gamma_0)'\widehat\Sigma_2^{-1}(\widehat\phi(\pi_n\alpha) - \gamma_0) - (\widehat\phi(\alpha) - \gamma_0)'\widehat\Sigma_2^{-1}(\widehat\phi(\alpha) - \gamma_0)
\]
\[
= o_P(n^{-1}) + (\widehat\phi(\pi_n\alpha) - \widehat\phi(\alpha))'\widehat\Sigma_2^{-1}(\widehat\phi(\pi_n\alpha) - \widehat\phi(\alpha)) + 2(\widehat\phi(\pi_n\alpha) - \widehat\phi(\alpha))'\widehat\Sigma_2^{-1}(\widehat\phi(\alpha) - \gamma_0)
\]
\[
= o_P(n^{-1}) + O_P(1)|\phi(\pi_n\alpha) - \phi(\alpha)|^2 + O_P(1)|\phi(\pi_n\alpha) - \phi(\alpha)|\,|\widehat\phi(\alpha) - \gamma_0|
\le o_P(n^{-1}) + O_P(c_n^2) + o_P(n^{-1/2}\min\{1, \sigma^{-1}\}) c_n + O_P(\|v_n^*\| n^{-1/2} c_n) = o_P(n^{-1}).
\]
(iv) The proof is the same as that of part (iii).
Lemma D.3. Write $\nu_n := \delta_n^\eta \sup_x |\widehat\Sigma(x) - \Sigma(x)| + \sqrt{k_n} d_n \delta_n^\eta + \phi_n^2$. Suppose $\nu_n = o_P(n^{-1/2}\sigma^{-1})$ and $(p_n + \nu_n)\bar\delta_n \|v_n^*\| = o(n^{-1})$. Then, uniformly for $|x| \le C n^{-1/2}$, with $b(x) := \widehat\phi(\alpha_0) - \gamma_0 + \phi(\widehat\alpha^R) - \phi(\alpha_0)$:

(i) $(\widehat\phi(\widehat\alpha + x u_n) - \gamma_0)^2 \widehat\Sigma_2^{-1} = [x\|v_n^*\| + g_3]^2 \widehat\Sigma_2^{-1} + o_P(n^{-1})$;

(ii) $(\widehat\phi(\widehat\alpha^R + x u_n) - \gamma_0)^2 \widehat\Sigma_2^{-1} - (\widehat\phi(\widehat\alpha^R) - \gamma_0)^2 \widehat\Sigma_2^{-1} = x^2\|v_n^*\|^2 \widehat\Sigma_2^{-1} + 2x\|v_n^*\| b(x) \widehat\Sigma_2^{-1} + o_P(n^{-1})$;

(iii) $(\widehat\phi(\widehat\alpha^R) - \gamma_0)^2 \widehat\Sigma_2^{-1} = [\phi_n(\alpha_0) - \gamma_0 + \phi(\widehat\alpha^R) - \phi(\alpha_0)]^2 \widehat\Sigma_2^{-1} + o_P(n^{-1})$.

Proof. (i) Let $g_3 = [\phi_n(\alpha_0) - \phi(\alpha_0)] + [\phi(\widehat\alpha) - \phi(\alpha_0)]$. Then
\[
\widehat\phi(\widehat\alpha + x u_n) - \gamma_0 = \phi(\widehat\alpha + x u_n) - \phi(\widehat\alpha) + g_3 + b_1 + b_2,
\]
where $b_1 = [\widehat\phi(\widehat\alpha + x u_n) - \widehat\phi(\widehat\alpha)] - [\phi(\widehat\alpha + x u_n) - \phi(\widehat\alpha)]$ and $b_2 = \widehat\gamma - \gamma_0 - g_3$.
We now work with $\phi(\widehat\alpha + x u_n) - \phi(\widehat\alpha)$. By Assumption 4.1 and the Riesz representation,
\[
\phi(\widehat\alpha + x u_n) - \phi(\widehat\alpha) = \phi(\widehat\alpha + x u_n) - \phi(\alpha_0) - [\phi(\widehat\alpha) - \phi(\alpha_0)] = \frac{d\phi(\alpha_0)}{d\alpha}[\widehat\alpha - \alpha_0 + x u_n] - \frac{d\phi(\alpha_0)}{d\alpha}[\widehat\alpha - \alpha_0]
\]
\[
= \langle v_n^*, \widehat\alpha - \alpha_0 + x u_n \rangle - \langle v_n^*, \widehat\alpha - \alpha_0 \rangle + o_P(\|v_n^*\|) n^{-1/2} = \langle v_n^*, u_n \rangle x = x\|v_n^*\|. \quad \text{(D.12)}
\]
Hence $\widehat\phi(\widehat\alpha + x u_n) - \gamma_0 = x\|v_n^*\| + g_3 + b_1 + b_2$. Also note that $g_3 = O_P(\sigma n^{-1/2})$. Together, $(\widehat\phi(\widehat\alpha + x u_n) - \gamma_0)^2 \widehat\Sigma_2^{-1}$ is bounded by
\[
[x\|v_n^*\| + g_3]^2 \widehat\Sigma_2^{-1} + o_P(n^{-1}) + O_P(b_1^2 + b_2^2) + O_P(b_1 + b_2)(\|v_n^*\| + \sigma) n^{-1/2}.
\]
By Lemma D.2, $b_1 = O_P(\nu_n)$. By the proof of Theorem 4.2, $b_2 = O_P(\nu_n) = o_P(n^{-1/2}\sigma^{-1})$. So the above is bounded by (using $\sigma \ge \|v_n^*\|$)
\[
[x\|v_n^*\| + g_3]^2 \widehat\Sigma_2^{-1} + o_P(n^{-1}) + O_P(\nu_n n^{-1/2})(\|v_n^*\| + \sigma) = [x\|v_n^*\| + g_3]^2 \widehat\Sigma_2^{-1} + o_P(n^{-1}).
\]
(ii) $(\widehat\phi(\widehat\alpha^R + x u_n) - \gamma_0)^2 \widehat\Sigma_2^{-1} - (\widehat\phi(\widehat\alpha^R) - \gamma_0)^2 \widehat\Sigma_2^{-1}$ equals
\[
\Delta_1 := (\widehat\phi(\widehat\alpha^R + x u_n) - \widehat\phi(\widehat\alpha^R))^2 \widehat\Sigma_2^{-1} + 2(\widehat\phi(\widehat\alpha^R + x u_n) - \widehat\phi(\widehat\alpha^R))\widehat\Sigma_2^{-1}(\widehat\phi(\widehat\alpha^R) - \gamma_0).
\]
The same argument as in (D.12) yields $\widehat\phi(\widehat\alpha^R + x u_n) - \widehat\phi(\widehat\alpha^R) = x\|v_n^*\|$. Meanwhile,
\[
\widehat\phi(\widehat\alpha^R) - \gamma_0 = \widehat\phi(\alpha_0) - \gamma_0 + \phi(\widehat\alpha^R) - \phi(\alpha_0) + \underbrace{[\widehat\phi(\widehat\alpha^R) - \widehat\phi(\alpha_0)] - [\phi(\widehat\alpha^R) - \phi(\alpha_0)]}_{=\,O_P(\nu_n)\ \text{by Lemma D.2}}, \quad \text{(D.13)}
\]
\[
\widehat\phi(\widehat\alpha^R + x u_n) - \widehat\phi(\widehat\alpha^R) = O_P(\mu_n) + \phi(\widehat\alpha^R + x u_n) - \phi(\widehat\alpha^R) = \|v_n^*\| x + O_P(\nu_n),
\]
\[
\phi(\widehat\alpha^R) - \phi(\alpha_0) = \frac{d\phi(\alpha_0)}{d\alpha}[\widehat\alpha^R - \alpha_0] = \langle \widehat\alpha^R - \alpha_0, v_n^* \rangle \le \|\widehat\alpha^R - \alpha_0\|\,\|v_n^*\| \le C\sqrt{Q(\widehat\alpha^R)}\,\|v_n^*\| \le O_P(\bar\delta_n \|v_n^*\|). \quad \text{(D.14)}
\]
Hence, with $\widehat\phi(\alpha_0) - \gamma_0 = O_P(n^{-1/2}\sigma)$ and $\varpi_n \sigma = O(1)$,
\[
\Delta_1 = x^2\|v_n^*\|^2 \widehat\Sigma_2^{-1} + o_P(n^{-1}) + 2[x\|v_n^*\|][\widehat\phi(\alpha_0) - \gamma_0 + \phi(\widehat\alpha^R) - \phi(\alpha_0) + O_P(\mu_n)]\widehat\Sigma_2^{-1}
\]
\[
= x^2\|v_n^*\|^2 \widehat\Sigma_2^{-1} + 2x\|v_n^*\|[\widehat\phi(\alpha_0) - \gamma_0 + \phi(\widehat\alpha^R) - \phi(\alpha_0)]\widehat\Sigma_2^{-1} + o_P(n^{-1}) + O_P(\mu_n) n^{-1/2}\|v_n^*\|
\]
\[
= x^2\|v_n^*\|^2 \widehat\Sigma_2^{-1} + 2x\|v_n^*\|[\widehat\phi(\alpha_0) - \gamma_0 + \phi(\widehat\alpha^R) - \phi(\alpha_0)]\widehat\Sigma_2^{-1} + o_P(n^{-1}).
\]
(iii) Define $z_1 := (\widehat\phi(\widehat\alpha^R) - \gamma_0)^2 \widehat\Sigma_2^{-1}$, $z_2 := [\widehat\phi(\alpha_0) - \gamma_0 + \phi(\widehat\alpha^R) - \phi(\alpha_0)]^2 \widehat\Sigma_2^{-1}$, and $z_3 := [\phi_n(\alpha_0) - \gamma_0 + \phi(\widehat\alpha^R) - \phi(\alpha_0)]^2 \widehat\Sigma_2^{-1}$. First, the proof for bounding (C.4) can be simplified to yield
\[
|\widehat\phi(\alpha_0) - \phi_n(\alpha_0)| \le \frac{1}{n}\sum_{t=1}^n (\widehat\Gamma_t - \Gamma(X_t))\rho(Y_{t+1}, \alpha_0) = \frac{1}{n}\bar b_n(\alpha_0)'(\widehat\Sigma_n^{-1} - \Sigma_n^{-1})\rho_n(\alpha_0) + O_P(\delta_n^\eta)\Big[\sup_x \|\widehat\Sigma(x) - \Sigma(x)\| + \sqrt{\tfrac{k_n}{n}}\Big],
\]
and $\sqrt{z_2} + \sqrt{z_3} = O_P(n^{-1/2}\sigma + \bar\delta_n \|v_n^*\|)$. By (D.13), and with the assumption $(\nu_n + p_n)\bar\delta_n \|v_n^*\| = o_P(n^{-1})$,
\[
|z_1 - z_2| \le O_P(\nu_n^2) + O_P(\mu_n)|\widehat\phi(\alpha_0) - \gamma_0 + \phi(\widehat\alpha^R) - \phi(\alpha_0)| = o_P(n^{-1}) + O_P(\nu_n)(n^{-1/2}\sigma + \bar\delta_n \|v_n^*\|) = o_P(n^{-1}).
\]
\[
|z_2 - z_3| \le O_P(1)|\widehat\phi(\alpha_0) - \phi_n(\alpha_0)|(\sqrt{z_2} + \sqrt{z_3}) = o_P(n^{-1}) + O_P(p_n \bar\delta_n \|v_n^*\|) = o_P(n^{-1}).
\]
Hence $z_1 - z_3 = o_P(n^{-1})$.

E Verifying conditions for RL, NPIV and NPQIV in Section 6

E.1 Reinforcement learning model: proof of Proposition 6.1

Proof. Recall that $Q^\pi$ denotes the true Q-function. Let $X_t = (S_t, A_t)$. We have
\[
m(X_t, h) = E(R_t|X_t) - h(S_t, A_t) + \gamma E\Big[\int_{x \in \mathcal{A}} \pi(x|S_{t+1}) h(S_{t+1}, x)\, dx \,\Big|\, S_t, A_t\Big].
\]
In addition, for $\frac{dm}{dh}[v]$ defined in (6.2), $\|v\|^2 := E\big(\frac{dm}{dh}[v]\big)^2 \Sigma(X_t)^{-1}$.

Verifying Assumption 3.2.
The Bellman equation implies $m(X_t, Q^\pi) = 0$, so for all $h \in \mathcal{H}_n$, $m(X_t, h) = m(X_t, h) - m(X_t, Q^\pi)$. Hence $m(X_t, h) = \frac{dm}{dh}[h - Q^\pi]$ and
\[
\|h - Q^\pi\|^2 = E\Big(\frac{dm}{dh}[h - Q^\pi]\Big)^2 \Sigma(X_t)^{-1} = E\, m(X_t, h)^2 \Sigma(X_t)^{-1}.
\]
This shows condition (i). Condition (ii) is also easy to see:
\[
E\, m(X_t, \pi_n\alpha_0)^2 \Sigma(X_t)^{-1} = E\Big(\frac{dm}{dh}[\pi_n Q^\pi - Q^\pi]\Big)^2 \Sigma(X_t)^{-1} = \|\pi_n Q^\pi - Q^\pi\|^2.
\]
Verifying Assumption 3.6. For condition (i), let $T_t = \Psi_j(X_t)^2 + 1$. Also,
\[
\rho(Y_{t+1}, h) = R_t - h(S_t, A_t) + \gamma K(h), \qquad K(h) = \int_{x \in \mathcal{A}} \pi(x|S_{t+1}) h(S_{t+1}, x)\, dx,
\]
and $\epsilon(S_t, h_1) - \epsilon(S_t, h_2) = \gamma K(h_1) - \gamma K(h_2) - \gamma E[K(h_1) - K(h_2)|S_t, A_t]$.
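The objects $\rho(Y_{t+1}, h)$ and $K(h)$ just defined are easy to simulate. The following is a minimal numerical sanity check on a toy finite MDP (all specifics — transition matrix, rewards, target policy — are invented for illustration; with finitely many actions the integral over $\mathcal{A}$ becomes a sum), confirming that at $h = Q^\pi$ the Bellman residual has conditional mean zero, which is exactly the restriction $m(X_t, Q^\pi) = 0$ used above:

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 2, 2, 0.9

# Hypothetical toy MDP: P[s, a, s'] transition probabilities, r[s, a] mean rewards,
# and a target policy pi(a'|s').
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.9, 0.1]]])
r = np.array([[1.0, 0.0], [0.5, 2.0]])
pi = np.array([[0.6, 0.4], [0.2, 0.8]])

# Solve the Bellman equation Q(s,a) = r(s,a) + gamma * sum_{s'} P * sum_{a'} pi * Q(s',a')
# exactly as a linear system in the vectorized Q.
idx = lambda s, a: s * nA + a
A_mat, b = np.eye(nS * nA), r.flatten()
for s in range(nS):
    for a in range(nA):
        for s2 in range(nS):
            for a2 in range(nA):
                A_mat[idx(s, a), idx(s2, a2)] -= gamma * P[s, a, s2] * pi[s2, a2]
Q = np.linalg.solve(A_mat, b).reshape(nS, nA)

def residual_mean(s, a, n=200_000):
    """Monte Carlo mean of rho = R - Q(s,a) + gamma * E_pi Q(S', .) given (s, a)."""
    s2 = rng.choice(nS, size=n, p=P[s, a])
    R = r[s, a] + rng.normal(0.0, 0.1, size=n)          # noisy reward draws
    rho = R - Q[s, a] + gamma * (pi[s2] * Q[s2]).sum(axis=1)
    return rho.mean()

print(max(abs(residual_mean(s, a)) for s in range(nS) for a in range(nA)))
```

The printed quantity is the largest (over state–action pairs) empirical conditional mean of the residual, which shrinks toward zero at the Monte Carlo rate.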
Now $|K(h_1) - K(h_2)| \le \|h_1 - h_2\|_{\infty,\omega} M(S_{t+1})$, where $M(S_{t+1}) := \int \pi(x|S_{t+1})(1 + x^2 + S_{t+1}^2)^{\omega/2}\, dx$. Uniformly in $j$, with $E\max_{j \le k_n} \Psi_j(X_t)^4 < \infty$ and $E M(S_{t+1})^4 < \infty$,
\[
E T_t \sup_{\|h_1 - h\|_{\infty,\omega} < \delta} |\epsilon(S_t, h_1) - \epsilon(S_t, h)|^2 \le 4\gamma^2 \sup_{\|h_1 - h\|_{\infty,\omega} < \delta} \|h_1 - h\|_{\infty,\omega}^2\, C \le C\delta^2.
\]
For (ii), let $T_1 := \max_{j \le k_n} \Psi_j(X_t)^2$. Also $E R_t^4 < C$ and $E T_1 R_t^2 < C$. Since $\sup_{h \in \mathcal{H}_n} \|h\|_{\infty,\omega}^2 < C$, we have $E T_1 \sup_{h \in \mathcal{H}_n} h(S_t, A_t)^2 \le E T_1 (1 + |S_t|^2 + |A_t|^2)^\omega \|h\|_{\infty,\omega}^2 < C$. Also $E T_1 \sup_h K(h)^2 \le E M(S_{t+1})^2 \sup_h \|h\|_{\infty,\omega}^2$. Hence $E T_1 \sup_{h \in \mathcal{H}_n} \rho(Y_{t+1}, h)^2 \le C$.

For (iii), the pathwise derivative of $m$ is given by (6.2); for $C := E(1 + |S_t|^2 + |A_t|^2)^\omega + E M(S_{t+1})^2$,
\[
\|h - Q^\pi\|^2 \le C E[h - Q^\pi]^2 + C E\big|E(K(h) - K(Q^\pi)|S_t, A_t)\big|^2 \le C \|h - Q^\pi\|_{\infty,\omega}^2.
\]
Verifying Assumption 4.2. We note that for any $h \in \mathcal{H}_n \cup \{Q^\pi\}$,
\[
\frac{dm(X_t, h)}{dh}[u_n] = \gamma \int_{x \in \mathcal{A}} E[\pi(x|S_{t+1}) u_n(S_{t+1}, x)|S_t, A_t]\, dx - u_n(S_t, A_t),
\]
which does not depend on $h$. Also, for any $h, \tau, v$, because of the linearity, $\frac{d^2}{d\tau^2} m(X_t, h + \tau v) = 0$. For condition (i), let $r := 2 + \zeta$, $a := |\rho(Y_{t+1}, Q^\pi)|^{2+\zeta}$, and $b := \big|\frac{dm(X_t, Q^\pi)}{dh}[u_n]\big|$; then $b \le |\gamma E_t E_\pi u_n(S_{t+1}, A)| + |u_n(S_t, A_t)|$, where $E_t = E(\cdot|S_t, A_t)$ and $E_\pi$ is with respect to the distribution $\pi(\cdot|S_{t+1})$ for $A$. Let $d_\pi := E E_\pi |u_n(S_{t+1}, A)|^{2r}$ and $d := E|u_n(S_t, A_t)|^{2r}$. Then $E a^2 \le C + E|R_t|^{4+2\zeta} < C$. Also, $d + d_\pi \le C$ because $E E_\pi |u_n(S_{t+1}, A)|^{2r} + E|v_n^*(S_t, A_t)|^{2r} \le \|v_n^*\|^{2r}$.
Hence
\[
E|\rho(Y_{t+1}, Q^\pi)|^{2+\zeta} \left|\frac{dm(X_t, Q^\pi)}{dh}[u_n]\right|^{2+\zeta} + E|\rho(Y_{t+1}, Q^\pi)|^{2+\zeta} \le C(E a^2)^{1/2}\big(d^{1/2} + d_\pi^{1/2} + 1\big) < C.
\]
Conditions (ii), (iii) and (iv) are trivially satisfied because of the linearity. For condition (v), let $T_2 = \max_{j \le k_n} \Psi_j(X_t)^2 + 1$. Recall that for $h \in \mathcal{C}_n$, $h = h_1 + x u_n$ where $\|h_1 - Q^\pi\|_{\infty,\omega} < C\delta_n$ and $|x| \le C n^{-1/2}$. Hence
\[
E T_2 \sup_{h \in \mathcal{C}_n} (\rho(Y_{t+1}, h) - \rho(Y_{t+1}, Q^\pi))^2 \le C E T_2 \sup_{h \in \mathcal{C}_n} |h(X_t) - Q^\pi(X_t)|^2 + E T_2 \sup_{h \in \mathcal{C}_n} \gamma^2 [K(h) - K(Q^\pi)]^2
\]
\[
\le \big(C E T_2 (1 + \|X_t\|^2)^\omega + \gamma^2 E T_2 M(S_{t+1})^2\big) \sup_{h \in \mathcal{C}_n} \|h - Q^\pi\|_{\infty,\omega}^2 \le O(\delta_n^2 + n^{-1}) \le C\delta_n^2.
\]

E.2 NPIV model: proof of Proposition 6.2

In this case $m(X_t, h) = E(h_0(W_t) - h(W_t)|\sigma_t(X))$ and $\epsilon(S_t, \alpha) = U_t + h_0(W_t) - h(W_t) - E(h_0(W_t) - h(W_t)|\sigma_t(X))$, with $\frac{dm(X_t, \alpha)}{dh}[v] = E(v(W_t)|\sigma_t(X))$ and $\frac{d^2}{d\tau^2} m(X_t, h + \tau v) = 0$ because of the linearity.

Proof.
We sequentially verify the conditions in Assumptions 3.2, 3.6, 4.2 and 4.6.

Verifying Assumption 3.2. This assumption follows immediately from
\[
\|\alpha_1 - \alpha_2\|^2 = E\big(E(h_1(W_t) - h_2(W_t)|\sigma_t(X))\big)^2 \Sigma(X_t)^{-1} = E[m(X_t, h_1) - m(X_t, h_2)]^2 \Sigma(X_t)^{-1}.
\]
Verifying Assumption 3.6. (i) Uniformly in $j \le k_n$, for $M_t := (1 + |W_t|^2)^\omega$,
\[
E[\Psi_j(X_t)^2 + 1] \sup_{\|\alpha - \alpha_1\|_{\infty,\omega} < \delta} |\epsilon(S_t, \alpha_1) - \epsilon(S_t, \alpha)|^2 \le E[\Psi_j(X_t)^2 + 1] \sup_{\|h - h_1\|_{\infty,\omega} < \delta} \big[h_1(W_t) - h(W_t) - E(h_1(W_t) - h(W_t)|\sigma_t(X))\big]^2
\]
\[
\le 4 E[\Psi_j(X_t)^2 + 1]\big[M_t + E(M_t|\sigma_t(X))\big] \sup_{\|h - h_1\|_{\infty,\omega} < \delta} \|h_1 - h\|_{\infty,\omega}^2 \le C\delta^2,
\]
given that $E[\Psi_j(X_t)^2 + 1] M_t < \infty$.
(ii) Suppose $E\max_{j\le k_n}\Psi_j(X_t)^2[(1 + |W_t|^2)^\omega + U_t^2] < \infty$. Then
\[
E\max_{j\le k_n}\Psi_j(X_t)^2\sup_{\alpha\in\mathcal{A}_n}\rho(Y_{t+1}, \alpha)^2 \le 2E\max_{j\le k_n}\Psi_j(X_t)^2\sup_{\alpha\in\mathcal{A}_n}[h_0(W_t) - h(W_t)]^2 + 2E\max_{j\le k_n}\Psi_j(X_t)^2 U_t^2 \le C.
\]

(iii) We have $\|\alpha_1 - \alpha_2\|^2 \le C\|\alpha_1 - \alpha_2\|^2_{\infty,\omega}E(1 + |W_t|^2)^\omega$.

Verifying Assumption 4.2.

For (i) we have
\[
E|\rho(Y_{t+1}, \alpha_0)|^{2+\zeta}\left|\frac{dm(X_t, \alpha_0)}{d\alpha}[u_n]\right|^{2+\zeta} = E|U_t|^{2+\zeta}|E(v_n^*(W_t)\mid X_t)|^{2+\zeta}\|v_n^*\|^{-(2+\zeta)} < C.
\]
For (ii)(iii), we have $\frac{d^2}{d\tau^2}m(X_t, h + \tau v) = 0$ for any $h$ and $v$ inside $\mathcal{H}_0 \cup \mathcal{H}_n$ because of the linearity. For (iv), we have $\sup_{\alpha\in\mathcal{C}_n}\frac{1}{n}\sum_t[\frac{dm(X_t,\alpha)}{d\alpha}[u_n] - \frac{dm(X_t,\alpha_0)}{d\alpha}[u_n]]^2 = 0$. For (v), let $A := \max_{j\le k_n}\Psi_j(X_t)^2 + 1$. For $h \in \mathcal{C}_n$, we know there is $h_n \in \mathcal{H}_n$ and $|x| \le Cn^{-1/2}$ so that $h = h_n + xu_n$, $\|h_n - h_0\|_{\infty,\omega} \le C\delta_n$.
Because $EAu_n(W)^2 < C$,
\begin{align*}
EA\sup_{\alpha\in\mathcal{C}_n}(\rho(Y_{t+1}, h) - \rho(Y_{t+1}, \alpha_0))^2 = EA\sup_{\alpha\in\mathcal{C}_n}(h(W_t) - h_0(W_t))^2
&\le 2EA\sup_{\mathcal{C}_n}|h_n(W_t) - h_0(W_t)|^2 + Cn^{-1}EAu_n(W)^2 \\
&\le C\delta_n^2 + Cn^{-1} \le C\delta_n^2.
\end{align*}

Verifying Assumption 4.6. For notational simplicity, write $\Gamma_t = \Gamma(X_t)$, $\Sigma_t = \Sigma(X_t)$, $\widehat\Sigma_t = \widehat\Sigma(X_t)$ and $\rho_t = \rho(Y_{t+1}, \alpha_0)$. Using $\widehat\Sigma_t^{-1} - \Sigma_t^{-1} = \widehat\Sigma_t^{-1}(\Sigma_t - \widehat\Sigma_t)\Sigma_t^{-1}$, the triangle inequality yields
\begin{align*}
\frac{1}{n}\sum_t\Gamma_t\Sigma_t(\widehat\Sigma_t^{-1} - \Sigma_t^{-1})\rho_t
&\le \Big|\frac{1}{n}\sum_t\Gamma_t\Sigma_t(\widehat\Sigma_t^{-1} - \Sigma_t^{-1})(\widehat\Sigma_t - \Sigma_t)\Sigma_t^{-1}\rho_t\Big| + \Big|\frac{1}{n}\sum_t\Gamma_t(\widehat\Sigma_t - \Sigma_t)\Sigma_t^{-1}\rho_t\Big| \\
&\le \Big|\frac{1}{n}\sum_t\Gamma_t(\widehat\Sigma_t - \Sigma_t)\Sigma_t^{-1}\rho_t\Big| + O_P(1)\frac{1}{n}\sum_t|\widehat\Sigma_t - \Sigma_t|^2.
\end{align*}
Note $\widehat\Sigma_t = \widehat A_n'\Psi_n(\Psi_n'\Psi_n)^{-1}\Psi(X_t)$, where $\widehat A_n$ is an $n \times 1$ vector of $\widehat\rho_t^2$. Also let $(A_n, E(A_n|X), G_n, U_n)$ respectively be $n \times 1$ vectors of $(\rho_t^2, \Sigma_t, g_t, u_t)$, where $g_t = \Gamma_t\Sigma_t^{-1}\rho_t$ and $u_t = \rho_t^2 - E(\rho_t^2\mid\sigma_t(X))$. Let $J_t$ be the $t$-th element of $(I - P_n)E(A_n|X)$.
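The elementary identity $\widehat\Sigma_t^{-1} - \Sigma_t^{-1} = \widehat\Sigma_t^{-1}(\Sigma_t - \widehat\Sigma_t)\Sigma_t^{-1}$ used here to split the weighting error can be checked numerically in the scalar case (toy values of ours, purely illustrative):

```python
# Scalar instance of Sigma_hat^{-1} - Sigma^{-1} = Sigma_hat^{-1} (Sigma - Sigma_hat) Sigma^{-1},
# which turns the inverse-weighting error into a product of estimation errors.
s_hat, s = 1.7, 2.3
lhs = 1.0 / s_hat - 1.0 / s
rhs = (1.0 / s_hat) * (s - s_hat) * (1.0 / s)
```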
We have $(\frac{1}{\sqrt n}\|\widehat A_n - A_n\|)^2 \le C\frac{1}{n}\sum_t(\widehat\rho_t - \rho_t)^2\rho_t^2 + C\frac{1}{n}\sum_t(\widehat\rho_t - \rho_t)^4 = O_P(\delta_n^2)$. In addition, let $D$ be the diagonal matrix of $\Gamma_t\Sigma_t^{-1}$. Then $\frac{1}{\sqrt n}\|P_nG_n\| = O_P(\frac{1}{\sqrt n}\sqrt{\rho_n' D P_n D \rho_n}) = O_P(\sqrt{k_n/n})$. So we have the following decomposition:
\begin{align*}
\frac{1}{n}\sum_t\Gamma_t(\widehat\Sigma_t - \Sigma_t)\Sigma_t^{-1}\rho_t &= \frac{1}{n}[\widehat A_n' P_n - E(A_n|X)']G_n = a_1 + a_2 + a_3, \\
\frac{1}{n}\sum_t|\widehat\Sigma_t - \Sigma_t|^2 &= \frac{1}{n}\|P_n\widehat A_n - E(A_n|X)\|^2 \le C(a_4 + a_5 + a_6),
\end{align*}
where
\begin{align*}
a_1 &= \frac{1}{n}E(A_n|X)'(P_n - I)G_n = \frac{1}{n}\sum_t J_t\Gamma_t\Sigma_t^{-1}\rho_t = O_P\Big(\frac{1}{\sqrt n}\Big)\sqrt{EJ_t^2\Gamma_t^2\Sigma_t^{-1}} = O_P(n^{-1/2}\sqrt{EJ_t^2}) = O_P\big(\sqrt{\phi_n^2/n}\big), \\
a_2 &= \frac{1}{n}U_n' P_n G_n \le O_P(1)\Big\|\frac{1}{n}U_n'\Psi_n\Big\|\Big\|\frac{1}{n}\sum_t\Psi(X_t)\Gamma_t\Sigma_t^{-1}\rho_t\Big\| = O_P\Big(\frac{k_n}{n}\Big), \\
a_3 &= \frac{1}{n}[\widehat A_n - A_n]'P_nG_n \le \frac{1}{\sqrt n}\|\widehat A_n - A_n\|\frac{1}{\sqrt n}\|P_nG_n\| \le O_P\Big(\delta_n^2 + \frac{k_n}{n}\Big) = O_P(\delta_n^2), \\
a_4 &= \frac{1}{n}\|(I - P_n)E(A_n|X)\|^2 = O_P(\phi_n^2), \\
a_5 &= \frac{1}{n}\|P_nU_n\|^2 \le O_P(1)\Big\|\frac{1}{n}U_n'\Psi_n\Big\|^2 = O_P\Big(\frac{k_n}{n}\Big), \\
a_6 &= \frac{1}{n}\|P_n(\widehat A_n - A_n)\|^2 = O_P(\delta_n^2).
\end{align*}
Putting these together, $\frac{1}{n}\sum_t\Gamma_t\Sigma_t(\widehat\Sigma_t^{-1} - \Sigma_t^{-1})\rho_t = O_P(p_n)$, where $p_n = \phi_n^2 + \frac{k_n}{n} + \delta_n^2 \le C\delta_n^2$.
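The estimator $\widehat\Sigma_t = \widehat A_n'\Psi_n(\Psi_n'\Psi_n)^{-1}\Psi(X_t)$ analyzed above is a linear-sieve least-squares regression of the squared residuals on the basis functions. A minimal numerical sketch (the toy basis and variance function are our own illustration, not from the paper):

```python
import numpy as np

def sieve_variance_estimator(Psi, rho_sq):
    """Fitted values of the squared residuals regressed on the sieve basis:
    Sigma_hat(X_t) = Psi(X_t)' (Psi'Psi)^{-1} Psi' rho_sq, i.e. the projection
    matrix P_n applied to the vector of squared residuals."""
    coef, *_ = np.linalg.lstsq(Psi, rho_sq, rcond=None)
    return Psi @ coef

# toy check: if rho_t^2 = 1 + x_t^2 exactly and the basis spans {1, x, x^2},
# the projection recovers Sigma(x_t) = 1 + x_t^2 up to floating-point error
x = np.linspace(-1.0, 1.0, 50)
Psi = np.column_stack([np.ones_like(x), x, x**2])
rho_sq = 1.0 + x**2
Sigma_hat = sieve_variance_estimator(Psi, rho_sq)
```

With noisy residuals the fitted values would instead approximate the conditional variance at the sieve rate, which is exactly what the bound on $\frac{1}{n}\sum_t|\widehat\Sigma_t - \Sigma_t|^2$ controls.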
E.3 NPQIV model: proof of Proposition 6.3

In this model $m(X_t, \alpha) = P(U_t < h - h_0\mid\sigma_t(X)) - \varpi$, where $U_t = Y_t - h_0(W_t)$. Suppose the conditional distribution of $U_t$ given $(X_t, W_t)$ is absolutely continuous with density function $f_{U_t|\sigma_t(X),W_t}(u)$. Then the derivative is defined as
\[
\frac{dm(X_t, \alpha)}{dh}[v] = E\big(f_{U_t|\sigma_t(X),W_t}(h(W_t) - h_0(W_t))v(W_t)\mid\sigma_t(X)\big).
\]

Proof. Verifying Assumption 3.2. Let
\begin{align*}
A_t(h) &:= \int_0^1 f_{U_t|\sigma_t(X),W_t}\big(x(h(W_t) - h_0(W_t))\big)\,dx, \\
B_t(v, h) &:= E\left\{A_t(v)[h(W_t) - h_0(W_t)]\mid\sigma_t(X)\right\}.
\end{align*}
Then $m(X_t, h) = B_t(h, h)$, $Em(X_t, h)^2\Sigma(X_t)^{-1} = EB_t(h, h)^2\Sigma(X_t)^{-1}$ and $\|\alpha - \alpha_0\|^2 = EB_t(h_0, h)^2\Sigma(X_t)^{-1}$.
This assumption then follows from the condition that $c_2 EB_t(h, h)^2\Sigma(X_t)^{-1} \le EB_t(h_0, h)^2\Sigma(X_t)^{-1} \le c_1 EB_t(h, h)^2\Sigma(X_t)^{-1}$ for all $\|h - h_0\| < \epsilon_0$.

Verifying Assumption 3.6. (i) Let $A_j := [\Psi_j(X_t)^2 + 1]$. Fix any $\alpha = h \in \mathcal{A}_n$. Then
\[
EA_j\sup_{\|\alpha-\alpha_1\|_{\infty,\omega}<\delta}|\epsilon(S_t, \alpha_1) - \epsilon(S_t, \alpha)|^2 \le 2EA_j\sup_{\|\alpha-\alpha_1\|_{\infty,\omega}<\delta}|\rho(Y_{t+1}, \alpha_1) - \rho(Y_{t+1}, \alpha)|^2 + 2EA_j\sup_{\|\alpha-\alpha_1\|_{\infty,\omega}<\delta}|m(X_t, \alpha_1) - m(X_t, \alpha)|^2.
\]
On one hand,
\begin{align*}
EA_j\sup_{\|\alpha-\alpha_1\|_{\infty,\omega}<\delta}|m(X_t, \alpha_1) - m(X_t, \alpha)|^2
&\le 2EA_j\sup_{\|h-h_1\|_{\infty,\omega}<\delta}P(h(W_t) - h_0(W_t) \le U_t \le h_1(W_t) - h_0(W_t)\mid X)^2 1\{h_1(W_t) > h(W_t)\} \\
&\quad + 2EA_j\sup_{\|h-h_1\|_{\infty,\omega}<\delta}P(h_1(W_t) - h_0(W_t) \le U_t \le h(W_t) - h_0(W_t)\mid X)^2 1\{h(W_t) > h_1(W_t)\} \\
&\le 2EA_j\sup_u f_{U_t|\sigma_t(X),W_t}(u)^2(1 + |W_t|^2)^\omega\sup_{\|h-h_1\|_{\infty,\omega}<\delta}\|h_1 - h\|^2_{\infty,\omega} \\
&\le 2EA_j\sup_u f_{U_t|\sigma_t(X),W_t}(u)^2(1 + |W_t|^2)^\omega\delta^2 \le C\delta^2.
\end{align*}
On the other hand, for notational simplicity, write $a = h(W_t) - h_0(W_t)$ and $a_1 = h_1(W_t) - h_0(W_t)$. Then $\|h - h_1\|_{\infty,\omega} < \delta$ implies $|a - a_1| \le \delta(1 + |W_t|^2)^{\omega/2} := g_t(\delta)$.
So
\begin{align*}
EA_j\sup_{\|\alpha-\alpha_1\|_{\infty,\omega}<\delta}|\rho(Y_{t+1}, \alpha_1) - \rho(Y_{t+1}, \alpha)|^2
&\le EA_j\sup_{\|h-h_1\|_{\infty,\omega}<\delta}1\{a \le U_t \le a_1\}1\{a_1 > a\} + EA_j\sup_{\|h-h_1\|_{\infty,\omega}<\delta}1\{a_1 \le U_t \le a\}1\{a > a_1\} \\
&\le EA_j\int\sup_{h_1:\|h-h_1\|_{\infty,\omega}<\delta}1\{a \le u \le a_1\}f_{U_t|\sigma_t(X),W_t}(u)\,du\,1\{a_1 > a\} \\
&\quad + EA_j\int\sup_{h_1:\|h-h_1\|_{\infty,\omega}<\delta}1\{a_1 \le u \le a\}f_{U_t|\sigma_t(X),W_t}(u)\,du\,1\{a > a_1\} \\
&\le EA_j\int_a^{a+g_t(\delta)}f_{U_t|\sigma_t(X),W_t}(u)\,du\,1\{a_1 > a\} + EA_j\int_{a-g_t(\delta)}^{a}f_{U_t|\sigma_t(X),W_t}(u)\,du\,1\{a > a_1\} \\
&\le 2\sup_u f_{U_t|\sigma_t(X),W_t}(u)\,\delta\,EA_j(1 + |W_t|^2)^{\omega/2} \le C\delta.
\end{align*}

(ii) We have $E\max_{j\le k_n}\Psi_j(X_t)^2\sup_{\alpha\in\mathcal{A}_n}\rho(Y_{t+1}, \alpha)^2 \le CE\max_{j\le k_n}\Psi_j(X_t)^2 < C$.

(iii) Because $B_t(h_0, h)^2 \le E\left[A_t(h_0)^2(1 + W_t^2)^\omega\mid\sigma_t(X)\right]\|h - h_0\|^2_{\infty,\omega}$, we have $\|h - h_0\|^2 \le EB_t(h_0, h)^2\Sigma(X_t)^{-1} \le \|h - h_0\|^2_{\infty,\omega}EA_t(h_0)^2(1 + W_t^2)^\omega\Sigma(X_t)^{-1}$.

Verifying Assumption 4.2 (i). Trivially $|\rho(y, h)| + |m(x, h)| \le 4$. Also $\frac{dm(X_t, \alpha)}{dh}[u_n] = E(f_{U_t|\sigma_t(X),W_t}(0)u_n(W_t)\mid\sigma_t(X)) < C$. So $E|\rho(Y_{t+1}, \alpha_0)|^{2+\zeta}\left|\frac{dm(X_t, \alpha_0)}{d\alpha}[u_n]\right|^{2+\zeta} + E|\rho(Y_{t+1}, \alpha_0)|^{2+\zeta} < C$.
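The derivative formula $\frac{dm(X_t,\alpha)}{dh}[v] = E(f_{U_t|\sigma_t(X),W_t}(h - h_0)v\mid\sigma_t(X))$ underlying these verifications can be sanity-checked in a scalar toy model with $U_t \sim N(0, 1)$ and a constant direction $v$, where $m(\tau) = \Phi(\delta + \tau v) - \varpi$ (our own hypothetical check, not part of the proof):

```python
import math

def Phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def phi(z):
    # standard normal density
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

# toy model: m(tau) = P(U < delta + tau*v) - varpi with U ~ N(0,1), constant v
delta, v, varpi = 0.3, 0.7, 0.5
def m(tau):
    return Phi(delta + tau * v) - varpi

# central finite difference vs. the closed-form derivative f_U(delta) * v
eps = 1e-6
fd = (m(eps) - m(-eps)) / (2 * eps)
exact = phi(delta) * v
```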
Verifying Assumption 4.2 (ii). Let $f'_{U_t|\sigma_t(X),W_t}$ denote the first derivative of $f_{U_t|\sigma_t(X),W_t}$. We have
\[
\frac{d^2}{d\tau^2}m(X_t, h + \tau v) = E[f'_{U_t|\sigma_t(X),W_t}(h(W_t) - h_0(W_t) + \tau v(W_t))v(W_t)^2\mid\sigma_t(X)].
\]
Hence
\begin{align*}
E\sup_{\alpha\in\mathcal{C}_n}\sup_{|\tau|\le Cn^{-1/2}}\frac{1}{n}\sum_t\Big(\frac{d^2}{d\tau^2}m(X_t, \alpha + \tau u_n)\Big)^2
&\le E\sup_{\alpha\in\mathcal{C}_n}\sup_x\sup_{|\tau|\le Cn^{-1/2}}E\big[f'^2_{U_t|\sigma_t(X),W_t}(h(W_t) - h_0(W_t) + \tau u_n(W_t))u_n(W_t)^4\mid\sigma_t(X) = x\big] \\
&\le \sup_{u,x,w}f'^2_{U_t,x,w}(u)E[u_n(W_t)^4\mid\sigma_t(X)] < C.
\end{align*}

Verifying Assumption 4.2 (iii).
\begin{align*}
\sup_{\tau\in(0,1)}\sup_{\alpha\in\mathcal{C}_n}E\Big(\frac{d^2}{d\tau^2}m(X_t, \alpha_0 + \tau(\alpha - \alpha_0))\Big)^2
&\le \sup_{\tau\in(0,1)}\sup_{\alpha\in\mathcal{C}_n}E\big(E[f'_{U_t|\sigma_t(X),W_t}(\tau(h - h_0))(h - h_0)^2\mid\sigma_t(X)]\big)^2 \\
&\le \sup_{\alpha\in\mathcal{C}_n}E\big(E[(h - h_0)^2\mid\sigma_t(X)]\big)^2 \le \sup_{h\in\mathcal{C}_n}\sup_w|h(w) - h_0(w)|^4 \le O(\delta_n^4) = o(n^{-1}).
\end{align*}

Verifying Assumption 4.2 (iv).
Let $g_1 := h(W_t) - h_0(W_t)$. Then
\begin{align*}
k_n\sup_{\alpha\in\mathcal{C}_n}\frac{1}{n}\sum_t\Big[\frac{dm(X_t, \alpha)}{d\alpha}[u_n] - \frac{dm(X_t, \alpha_0)}{d\alpha}[u_n]\Big]^2
&\le k_n\sup_{\alpha\in\mathcal{C}_n}\frac{1}{n}\sum_t\big[E\big((f_{U_t|\sigma_t(X),W_t}(g_1) - f_{U_t|\sigma_t(X),W_t}(0))u_n(W_t)\mid\sigma_t(X)\big)\big]^2 \\
&\le k_nL\sup_{\alpha\in\mathcal{C}_n}\frac{1}{n}\sum_t E(g_1^2\mid\sigma_t(X))E(u_n(W_t)^2\mid\sigma_t(X)) \\
&\le Ck_n\sup_{\alpha\in\mathcal{C}_n}\frac{1}{n}\sum_t E\big((h(W_t) - h_0(W_t))^2\mid\sigma_t(X)\big) = O(k_n\delta_n^2) = o_P(1).
\end{align*}

Verifying Assumption 4.2 (v). Let $A = \max_{j\le k_n}\Psi_j(X_t)^2 + 1$. Then
\begin{align*}
EA\sup_{h\in\mathcal{C}_n}(\rho(Y_{t+1}, h) - \rho(Y_{t+1}, h_0))^2
&\le EA\sup_{h\in\mathcal{C}_n}1\{-|h - h_0| < U_t < |h - h_0|\} \\
&\le EA\,1\Big\{-\sup_{h\in\mathcal{C}_n}|h - h_0| < U_t < \sup_{h\in\mathcal{C}_n}|h - h_0|\Big\} \\
&= EA\int_{-\sup_{h\in\mathcal{C}_n}|h-h_0|}^{\sup_{h\in\mathcal{C}_n}|h-h_0|}f_{U_t|\sigma_t(X),W_t}(u)\,du \\
&\le 2EA\sup_u f_{U_t|\sigma_t(X),W_t}(u)\sup_{\mathcal{C}_n}|h(W_t) - h_0(W_t)| \le O(\delta_n)EA(1 + W_t)^{\omega/2}.
\end{align*}

Finally, Assumption 4.6 is naturally satisfied in the NPQIV model, where $\widehat\Sigma(X_t) = \Sigma(X_t) = \varpi(1 - \varpi)$.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Gallant, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' and Nychka, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (1987).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Semi-nonparametric maximum likelihood estimation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Econometrica: Journal of the econometric society 363–390.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Geist, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Scherrer, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' and Pietquin, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' A theory of regularized markov decision processes.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' In International Conference on Machine Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Gu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Kelly, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' and Xiu, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Empirical asset pricing via machine learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' The Review of Financial Studies 33 2223–2273.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Guijarro-Ordonez, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Pelger, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' and Zanotti, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (2021).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Deep learning statistical arbitrage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Available at SSRN 3862004 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Haroske, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' and Skrzypczak, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Nuclear embeddings in weighted function spaces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Integral Equations and Operator Theory 92 1–37.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' 69 Hsu, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Sanford, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Servedio, R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' and Vlatakis-Gkaragkounis, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' On the approximation power of two-layer networks of random relus.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' In Conference on Learning Theory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Huang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (1998).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Projection estimation in multiple regression with application to func- tional anova models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' The Annals of Statistics 26 242–272.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Ibragimov, I.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (1962).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Some limit theorems for stationary processes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Theory of Probability & Its Applications 7 349–382.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Kress, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (1989).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Linear integral equations, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Springer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Lin, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Tegmark, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' and Rolnick, D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Why does deep and cheap learning work so well?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Journal of Statistical Physics 168 1223–1247.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Long, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Han, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' and E, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' An l2 analysis of reinforcement learning in high dimensions with kernel and neural network approximation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' arXiv preprint arXiv:2104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='07794 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Mhaskar, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Liao, Q.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' and Poggio, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Learning functions: when is deep better than shallow.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' arXiv preprint arXiv:1603.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='00988 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Newey, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' and West, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (1987).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Econometrica 55 703–708.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Newey, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' K.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (1994).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' The asymptotic variance of semiparametric estimators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Econometrica 1349–1382.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Rolnick, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' and Tegmark, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' The power of deeper networks for expressing natural functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' arXiv preprint arXiv:1705.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='05502 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Schmidt-Hieber, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Nonparametric regression using deep neural networks with relu activation function.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' The Annals of Statistics 48 1875–1897.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Shalev-Shwartz, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Shammah, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' and Shashua, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Safe, multi-agent, reinforce- ment learning for autonomous driving.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' arXiv preprint arXiv:1610.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='03295 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Shen, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (1997).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' On methods of sieves and penalization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' The Annals of Statistics 2555–2591.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Shen, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Yang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' and Zhang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Neural network approximation: Three hidden layers are enough.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Neural Networks 141 160–173.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' 70 Shi, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Zhang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Lu, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' and Song, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Statistical inference of the value function for reinforcement learning in infinite horizon settings.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' arXiv preprint arXiv:2001.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content='04515 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Silver, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Huang, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Maddison, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Guez, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Sifre, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Van Den Driessche, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Schrittwieser, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Antonoglou, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Panneershelvam, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Lanctot, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Mastering the game of go with deep neural networks and tree search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' nature 529 484–489.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Sutton, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' and Barto, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Reinforcement learning: An introduction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' MIT press.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' van der Vaart, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' and Wellner, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (1996).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Weak convergence and empirical processes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' The first edition ed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Springer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Vinyals, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Babuschkin, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Chung, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Mathieu, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Jaderberg, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Czarnecki, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Dudzik, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Huang, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Georgiev, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=', Powell, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Alphastar: Mastering the real-time strategy game starcraft ii.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' DeepMind blog 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Yang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' and Barron, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' (1999).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Information-theoretic determination of minimax rates of convergence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' Annals of Statistics 1564–1599.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'} +page_content=' 71' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/F9AyT4oBgHgl3EQfSveK/content/2301.00092v1.pdf'}