Optimal Scaling Results for a Wide Class of Proximal MALA Algorithms

Francesca R. Crucinio∗1, Alain Durmus2, Pablo Jiménez3, and Gareth O. Roberts4

1 CREST, ENSAE Paris
2 Centre de Mathématiques Appliquées, Ecole Polytechnique, France, Institut Polytechnique de Paris
3 Sorbonne Université and Université Paris Cité, CNRS, Laboratoire de Probabilités, Statistique et Modélisation, F-75005 Paris, France
4 Department of Statistics, University of Warwick

∗Corresponding author: francesca.crucinio@gmail.com

arXiv:2301.02446v1 [stat.CO] 6 Jan 2023

Abstract

We consider a recently proposed class of MCMC methods which uses proximity maps instead of gradients to build proposal mechanisms which can be employed for both differentiable and non-differentiable targets. These methods have been shown to be stable for a wide class of targets, making them a valuable alternative to Metropolis-adjusted Langevin algorithms (MALA), and have found wide application in imaging contexts. The wider stability properties are obtained by building the Moreau-Yoshida envelope for the target of interest, which depends on a parameter λ. In this work, we investigate the optimal scaling problem for this class of algorithms, which encompasses MALA, and provide practical guidelines for the implementation of these methods.

Contents

1 Introduction 2
2 Proximal MALA Algorithms 5
3 Optimal scaling of Proximal MALA 7
  3.1 Differentiable targets 8
  3.2 Laplace target 10
4 Practical Implications and Numerical Simulations 13
  4.1 Numerical Experiments 14
5 Discussion 15
6 Proof of the Result for the Laplace distribution 16
  6.1 Proof of Theorem 2 16
  6.2 Proof of Proposition 1 18
  6.3 Proof of Proposition 2 25
  6.4 Proof of Theorem 3 27
A Proof of Theorem 1 38
  A.1 Auxiliary Results for the Proof of Case (a) 39
  A.2 Auxiliary Results for the Proof of Case (b) 47
  A.3 Auxiliary Results for the Proof of Case (c) 51
  A.4 Proof of Theorem 1 54
B Numerical Experiments 54
  B.1 Differentiable Targets 54
  B.2 Laplace Target 55
C Taylor Expansions for the Results on Differentiable Targets 63
  C.1 Coefficients of the Taylor Expansion 63
    C.1.1 Case (a) 63
    C.1.2 Case (b) 63
    C.1.3 Case (c) 64
  C.2 Taylor Expansions of the Log-acceptance Ratio 66
    C.2.1 R1 66
    C.2.2 R2 67
  C.3 Derivatives of the Proximity Map for Differentiable Targets 71
D Moments and Integrals for the Laplace Distribution 73
  D.1 Moments of Acceptance Ratio for the Laplace Distribution 73
  D.2 Bound on Second Moment of Acceptance Ratio for the Laplace Distribution 79
  D.3 Additional Integrals for the Laplace Distribution 83
  D.4 Integrals for Moment Computations 84
    D.4.1 First Moment 84
    D.4.2 Second Moment 87
    D.4.3 Third Moment 88

1 Introduction

Gradient-based Markov chain Monte Carlo methods have proved to be very successful at sampling from high-dimensional target distributions [9]. The key to their success is that in many cases their mixing time appears to be better than that of competing algorithms which do not use gradient information (see for example [34]), while their implementation has similar computational cost. Indeed, gradients of target densities can often be computed with computational complexity (in dimension d) which scales no worse than evaluation of the target density itself.

Gradient-based MCMC methods are mainly motivated by stochastic processes constructed to have the target density as limiting distribution [25, 8, 6, 44]. Our analysis will concentrate on the Metropolis Adjusted Langevin Algorithm (MALA) and its proximal variants, which are based on the Langevin diffusion

dLt = dBt + (1/2)∇ log π(Lt) dt ,  (1)

where π denotes the target density with respect to the Lebesgue measure.
It is well-known that, under appropriate conditions, (1) defines a continuous-time Markov process associated with a Markov semigroup which is reversible with respect to π. From this observation, it has been suggested to use an Euler-Maruyama (EM) approximation of (1). This scheme was popularized in statistics by [20] and referred to as the Unadjusted Langevin Algorithm (ULA) in [36]. Due to the time-discretization, ULA does not have π as its stationary distribution. To address this problem, [39], and independently Besag in his contribution to [20], proposed to add a Metropolis acceptance step at each iteration of the EM scheme, leading to the Metropolis Adjusted Langevin Algorithm (MALA) following [36], who also derive a basic stability analysis. The accept/reject step in this algorithm confers two significant advantages: it ensures that the resulting algorithm has exactly the correct invariant distribution, while step sizes can be chosen larger than in the unadjusted case, as there is no need to make the step size small to reduce discretization error. On the other hand, MALA algorithms are typically hard to analyze theoretically (see e.g. [7, 13, 16]). However, [34] (see also [5, 32]) have established that MALA has better convergence properties than the Random Walk Metropolis (RWM) algorithm with respect to the dimension d from an optimal scaling perspective (see also [33]).

Whereas gradient-based methods have been successfully applied and offer interesting features, they are typically less robust than their vanilla alternatives (see for example [36]), while intuition suggests, and existing underpinning theory requires, that target densities need to be sufficiently smooth for the gradients to aid Markov chain convergence. Moreover, while gradient-based MCMC methods have been successful for smooth densities, there is no reason to believe that they should be effective for densities which are not differentiable at a subset D ⊆ Rd.
For non-smooth densities, [30] proposes modified gradient-based algorithms. Their P-MALA algorithm is inspired by the proximal algorithms popular in the optimization literature (e.g. [29]). The main idea is to approximate the (possibly non-differentiable but) log-concave target density π ∝ exp(−G) by substituting the potential G with its Moreau-Yoshida envelope Gλ (see (3) below for its definition), to obtain a distribution πλ whose level of smoothness is controlled by the proximal parameter λ > 0. Given this smooth approximation to π, one can then build proposals based on time discretizations of the Langevin diffusion targeting πλ [30, 14]:

ξk+1 = ξk − (σ2/2)∇Gλ(ξk) + σZk+1 ,  (2)

where σ2 > 0 is a fixed stepsize and (Zk)k∈N∗ is a sequence of i.i.d. zero-mean Gaussian random variables with identity covariance matrix. Our aims in this paper are broadly to provide theoretical underpinning for a slightly larger family of proximal MALA algorithms, to analyze how these methods scale with dimension, and to give insights and practical guidance into how they should be implemented, supported by the theory we establish.

Proximal optimization and MCMC methods have proved to be particularly well-suited for image estimation, where penalties involving sparsity-inducing norms are common [30, 14, 43]. Similar targets are also common in sparse regression contexts [2, 19, 46]. In these situations, the set of non-differentiability points for the target density π is a null set for the Lebesgue measure, and, following [12], we shall focus on this case. However, in contrast to the conclusions of [12] for RWM, we shall demonstrate that the optimal scaling of proximal MALA may be affected by non-smoothness.

More precisely, in this work, we first extend the results of [31] and consider a wider range of proximal MALA algorithms, as well as a wider class of finite dimensional target distributions.
We let both the step size σ2 and the regularization parameter λ depend on the dimension d of the target, and find that the scaling properties of proximal MALA depend on the relative speed at which λ and σ converge to 0 as d → ∞. We start by considering a class of sufficiently differentiable target distributions π to which MALA can also be applied, to allow direct comparison between MALA and proximal MALA, and thus between a gradient-based method and one which approximates the gradient through proximal operators. When λ goes to 0 at least as fast as σ2, we find that the scaling properties of proximal MALA are equivalent to those of MALA (i.e. σ2 should decay as d−1/3; see Theorem 1–(b), Theorem 1–(c) and Theorem 2); when λ converges to 0 more slowly than σ2, proximal MALA is less efficient than MALA, with σ2 decaying as d−1/2 (Theorem 1–(a)). As in some cases the proximal operator for a given distribution π is cheaper to compute than ∇ log π [29, 11, 30], we anticipate that proximal MALA with an appropriately tuned λ might provide a cheaper alternative to MALA while retaining similar scaling properties.

We then turn to the optimal scaling of proximal MALA applied to the Laplace distribution π(x) ∝ e−|x|. We focus on this particular non-smooth target since it is the most widely used in applications of proximal MALA, including image deconvolution [30, 14, 43], LASSO, and sparse regression [2, 19, 46]. We establish that non-differentiability of the target, even at a single point, leads to a different optimal scaling than for MALA. In particular, the step size has to scale as d−2/3 and not as d−1/3.

This appears to be a new optimal scaling for Metropolis MCMC algorithms, lying between those of RWM and MALA. In light of the conclusions for smooth target distributions, we restrict our study to the choice of λ going to 0 at least as fast as σ2.
The proof of the result for the differentiable case extends that of [34] for MALA, while the structure of the proof for the Laplace target is similar to that of [12] and constitutes the main element of novelty in this paper. As a special case of the result for the Laplace distribution, we also obtain the optimal scaling of MALA on Laplace targets. We point out that the strategy adopted in the proof of this result is not unique to the Laplace distribution, and could be applied to other distributions provided that the required integrals can be computed.

To sum up, our main contributions are:

1) We extend the result of [31] beyond the Gaussian case, covering all finite dimensional (sufficiently) differentiable targets, and show that, in some cases, proximal MALA affords the same scaling properties as MALA if the proximal parameter λ is chosen appropriately.

2) Motivated by applications in imaging and sparse regression, we study the scaling of proximal MALA methods for the Laplace target, and show that for values of λ decaying sufficiently fast, the optimal scaling of proximal MALA, i.e. the choice of σ2, is different from the one for MALA on differentiable targets and is of order d−2/3.

3) We use the insights obtained from the aforementioned results to provide practical guidelines for the selection of the proximal parameter λ.

Organization of the paper  The paper is structured as follows. In Section 2, we rigorously introduce the class of proximal MALA algorithms that are studied and discuss related works on optimal scaling for MCMC algorithms. In Section 3.1 we state the main result for differentiable targets, showing that the scaling properties of proximal MALA depend on the relative speed at which λ goes to 0 with respect to σ.
In Section 3.2 we obtain a scaling limit for proximal MALA when π is a Laplace distribution; as a special case of our result we also obtain the scaling properties of a sub-gradient version of MALA for this target. We collect in Section 4 the main practical takeaways from these results and discuss possible extensions in Section 5. Finally, in Section 6 we prove the result for the Laplace distribution. The proof of the result for differentiable targets is postponed to Appendix A.

2 Proximal MALA Algorithms

We now introduce the general class of proximal MALA algorithms, first studied in [30]. This class of algorithms aims at sampling from a density with respect to the Lebesgue measure on Rd of the form π(x) = exp(−G(x))/∫Rd exp(−G(x̃))dx̃, with G satisfying the following assumption.

A0. The function G : Rd → R is convex, proper and lower semi-continuous.

The main idea behind proximal MALA is to approximate the (possibly non-differentiable) target density π by approximating the potential G with its Moreau-Yoshida envelope Gλ : Rd → R, defined for λ > 0 by

Gλ(x) = min u∈Rd [G(u) + ∥u − x∥2/(2λ)] .  (3)

Since G is assumed to be convex, by [38, Theorem 2.26], the Moreau-Yoshida envelope is well-defined, convex and continuously differentiable with

∇Gλ(x) = λ−1(x − proxλG(x)) ,  proxλG(x) = arg min u∈Rd [G(u) + ∥u − x∥2/(2λ)] .  (4)

The proximity operator x ↦ proxλG(x) behaves similarly to a gradient mapping and moves points in the direction of the minimizers of G. In the limit λ → 0 the quadratic penalty dominates (4) and the proximity operator coincides with the identity operator, i.e. proxλG(x) = x; in the limit λ → ∞, the quadratic penalty term vanishes and (4) maps all points to the set of minimizers of G.

It was shown in [14, Proposition 1] that, under A0, ∫Rd exp(−Gλ(x))dx < ∞, and therefore the probability density πλ ∝ exp(−Gλ) is well-defined. In addition, it has been shown that ∥π − πλ∥TV → 0 as λ → 0.
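To make (3)–(4) concrete, the following is a small numerical sketch (ours, not from the paper; the bisection solver and the example potential are illustrative choices) that computes proxλG for a scalar convex potential from the optimality condition G′(u) + (u − x)/λ = 0, and checks the resulting identities proxλG(x) = x − λG′(proxλG(x)) and proxλG(x) → x as λ → 0:

```python
def prox_bisect(g_prime, x, lam, lo=-100.0, hi=100.0, iters=200):
    """Scalar proximity map of G: root of g'(u) + (u - x)/lam = 0.

    For convex G the left-hand side is increasing in u, so bisection applies.
    """
    f = lambda u: g_prime(u) + (u - x) / lam
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative smooth convex potential G(u) = u^4.
g_prime = lambda u: 4.0 * u ** 3
x, lam = 1.0, 0.01
p = prox_bisect(g_prime, x, lam)
# Fixed-point identity: prox = x - lam * G'(prox), i.e. grad G^lam(x) = (x - prox)/lam.
assert abs(p - (x - lam * g_prime(p))) < 1e-8
# As lam -> 0 the proximity map approaches the identity.
assert abs(prox_bisect(g_prime, x, 1e-8) - x) < 1e-4
```

The same one-dimensional solver applies component-wise to product-form potentials such as those considered below.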
Based on this observation, and since, as we have emphasized, πλ is now continuously differentiable, it has been suggested in [30, 14] to use the discretization of the Langevin diffusion associated with πλ given by (2), which can be rewritten using (4) as

ξk+1 = (1 − σ2/(2λ)) ξk + (σ2/(2λ)) proxλG(ξk) + σZk+1 .  (5)

Similarly to other MCMC methods based on discretizations of the Langevin diffusion (e.g. [36]), one can either build unadjusted schemes which target πλ, expecting draws from these schemes to be close to draws from π for small enough λ, or add a Metropolis-Hastings step to ensure that the resulting algorithm targets π. Unadjusted proximal MCMC methods have been analyzed in [14]; in this paper we focus on Metropolis adjusted proximal MCMC methods and study their scaling properties. More precisely, at each step k and given the current state of the Markov chain Xk, a candidate Yk+1 is generated from the transition density associated with (5), (x, y) ↦ q(x, y) = ϕ(y ; [1 − σ2/(2λ)]x + σ2 proxλG(x)/(2λ), σ2 Id), where ϕ(· ; u, Σ) stands for the d-dimensional Gaussian density with mean u and covariance matrix Σ. Given Xk and Yk+1, the next state is then set as:

Xk+1 = Yk+1 bk+1 + Xk(1 − bk+1) ,  bk+1 = 1R+( π(Yk+1)q(Yk+1, Xk) / [π(Xk)q(Xk, Yk+1)] ∧ 1 − Uk+1 ) ,  (6)

where (Ui)i∈N∗ is a sequence of i.i.d. uniform random variables on [0, 1].

The value of λ characterizes how close the distribution πλ is to the original target π and therefore how good the proposal is. Small values of λ provide better approximations to π and therefore better proposals (see [14, Proposition 1]), while larger values of λ provide higher levels of smoothing for non-differentiable distributions (see [30, Figure 1]). In the case λ = σ2/2 we obtain the special case of proximal MALA referred to as P-MALA in [30].

The main contribution of this paper is to analyze the optimal scaling of proximal MALA as defined by (6).
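One iteration of the scheme (5)–(6) can be sketched as follows (a minimal Python illustration of ours, not the authors' code; the Gaussian example in the usage note is an assumption for demonstration):

```python
import numpy as np

def proximal_mala_step(x, G, prox_G, sigma2, lam, rng):
    """One proximal MALA iteration: propose via (5), accept/reject via (6).

    G      : potential, G(x) = -log pi(x) up to an additive constant
    prox_G : proximity map of G with parameter lam (applied component-wise)
    """
    # Proposal mean (1 - sigma2/(2 lam)) x + sigma2/(2 lam) prox_G(x).
    mean = lambda z: (1.0 - sigma2 / (2.0 * lam)) * z \
        + sigma2 / (2.0 * lam) * prox_G(z)
    y = mean(x) + np.sqrt(sigma2) * rng.standard_normal(x.shape)

    # log of the Gaussian transition density q(a, b), up to an additive constant.
    log_q = lambda a, b: -np.sum((b - mean(a)) ** 2) / (2.0 * sigma2)
    log_ratio = G(x) - G(y) + log_q(y, x) - log_q(x, y)

    if np.log(rng.uniform()) < log_ratio:
        return y, True   # move accepted
    return x, False      # move rejected
```

For instance, for the standard Gaussian potential G(x) = ∥x∥2/2 one may take prox_G(x) = x/(1 + λ); choosing λ = σ2/2 recovers P-MALA.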
Optimal scaling and related works  We briefly summarize here some examples of MCMC algorithms and their optimal scaling results; a full review is beyond the scope of this paper and we only mention algorithms to which we will compare proximal MALA in the development of this work.

Popular examples of Metropolis MCMC algorithms are RWM and MALA. RWM uses as a proposal the transition density (x, y) ↦ ϕ(y ; x, σ2 Id), where σ2 > 0. The MALA scheme uses as proposal (x, y) ↦ ϕ(y ; x + (σ2/2)∇ log π(x), σ2 Id). As we will show in Section 3.1, proximal MALA can be considered as an extension of MALA.

A natural question to address when implementing Metropolis adjusted algorithms is how to set the parameter σ2 (variance parameter for RWM, step size parameter for MALA) to maximize the efficiency of the algorithm. Small values of σ2 result in a higher acceptance probability but cause sticky behaviour, while large values of σ2 result in a high number of rejections, with the chain (Xk)k≥0 moving slowly [35]. Optimal scaling studies aim to address this question by investigating how σ2 should behave with respect to the dimension d of the support of π in the high dimensional setting d → ∞, to obtain the best compromise.

The standard optimal scaling set-up considers the case of d-dimensional targets πd which are product form, i.e.

πd(xd) = Π i=1,...,d π(xdi) ,  (7)

where xdi stands for the i-th component of xd and π is a one-dimensional probability density with respect to the Lebesgue measure. Under appropriate assumptions on the regularity of π, and assuming that the MCMC algorithm is initialized at stationarity, the optimal value of σ2 scales as ℓ/dα with ℓ > 0, α = 1 for RWM [33] and α = 1/3 for MALA [34].
By setting α to these values, it is then possible to show that, as d → ∞, each one-dimensional component of the Markov chain defined by RWM and MALA, appropriately rescaled in time, converges to the Langevin diffusion

dLt = h(ℓ)1/2 dBt − (h(ℓ)/2) [log π]′(Lt) dt ,

where (Bt)t≥0 is a standard Brownian motion and h(ℓ), referred to as the speed function of the diffusion, is a function of the parameter ℓ > 0 that we may tune. Indeed, it is well-known that (Lh(ℓ)t)t≥0 is a solution of the Langevin diffusion (1). As a result, we may identify the values of ℓ maximizing h(ℓ) for the algorithms at hand to approximate the fastest version of the Langevin diffusion. The optimal value of ℓ results in an optimal average acceptance probability of 0.234 for RWM and 0.574 for MALA.

The scaling properties allow one to get an intuition of the efficiency of the corresponding algorithms: RWM requires O(d) steps to achieve convergence on a d-dimensional target, i.e. its efficiency is O(d−1), while MALA has efficiency O(d−1/3). While these results are asymptotic in d, the insights obtained by considering the limit case d → ∞ prove to be useful in practice [35].

In the context of non-smooth and even discontinuous target distributions, studying the simpler RWM algorithm applied to a class of distributions on compact intervals, [27, 28] show that the lack of smoothness affects the optimal scaling of RWM with respect to the dimension d. More precisely, they show that for a class of discontinuous densities which includes the uniform distribution on [0, 1], the optimal scaling of RWM is of order O(d−2). On the other hand, in the case where the set of non-differentiability points D of π is a null set with respect to the Lebesgue measure, [12] shows that, under appropriate conditions, including Lp differentiability, the optimal scaling of RWM is still of order O(d−1).
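In practice, the limiting acceptance rates quoted above (0.234 for RWM, 0.574 for MALA) are commonly used as tuning targets for σ2. A minimal Robbins–Monro-style adaptation (our own sketch of this common recipe, not an algorithm from the paper) is:

```python
import numpy as np

def tune_step_size(mcmc_step, x0, target_accept=0.574, n_adapt=20000, rng=None):
    """Adapt log(sigma^2) toward a target acceptance rate (Robbins-Monro).

    mcmc_step(x, sigma2, rng) -> (new_x, accepted) can be any MH kernel.
    """
    rng = rng or np.random.default_rng()
    x, log_s2 = x0, 0.0
    for k in range(1, n_adapt + 1):
        x, accepted = mcmc_step(x, np.exp(log_s2), rng)
        # Increase the step size when accepting too often, decrease otherwise.
        log_s2 += (float(accepted) - target_accept) / k ** 0.6
    return np.exp(log_s2), x
```

The decaying gain 1/k^0.6 is one standard choice ensuring that the adaptation vanishes, so the realized acceptance rate settles near the chosen target.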
The scaling properties of proximal MALA have been partially investigated in [31], which shows that P-MALA, obtained when λ = σ2/2, has the same scaling properties as MALA for the finite dimensional Gaussian density and for a class of infinite dimensional target measures (Theorem 2.1 and Theorem 5.1 therein, respectively).

3 Optimal scaling of Proximal MALA

We consider the same set-up as [34], briefly recalled above. Given a real-valued function g : R → R satisfying A0, we consider the i.i.d. d-dimensional target specified by (7) with

π(x) ∝ exp(−g(x)) .  (8)

Since for any xd, G(xd) = Σ i=1,...,d g(xdi), we have by [29, Section 2.1]

proxλG(xd) = (proxλg(xd1), . . . , proxλg(xdd))⊤ .

It follows that the distribution of the proposal (10) with target πd in (7)-(8) is also product form

qd(xd, yd) = Π i=1,...,d q(xdi, ydi) ,  q(xdi, ydi) = (2πσ2)−1/2 exp( −(ydi − xdi + σ2 g′[proxλg(xdi)]/2)2 / (2σ2) ) ,

with λ > 0. For any dimension d ∈ N∗, we denote by (Xdk)k∈N the Markov chain defined by the Metropolis recursion (6) with target distribution πd and proposal density qd, associated with the sequence of candidate moves

Y dk+1 = (1 − σ2/(2λ)) Xdk + (σ2/(2λ)) proxλG(Xdk) + σZdk+1 .  (9)

As mentioned in the introduction, the focus of this work is on investigating the optimal dependence of the proposal variance σ2 on the dimension d of the target π. In this section, we make the dependence of the proposal variance on the dimension explicit and let σ2d = ℓ2/d2α and λd = c2/2d2β for some α, β > 0 and some constants c, ℓ independent of d. Thus, we can write λd as a function of σd, λd = σ2m_d r/2, where we define r = c2/ℓ2m > 0 and m = β/α. By writing λd as a function of σd we can decouple the effect of the constants c, ℓ from that of the dependence on d (i.e. α, β).
The value of m controls the relative speed at which σd and λd converge to 0 as d → ∞: when m = 1, σd and λd decay to 0 at the same rate; for m > 1 the decay of λd is faster than that of σd; and for m < 1 the decay of λd is slower than that of σd. The parameter r allows one to refine the comparison between σd and λd when β = α. In the limit r → 0 (i.e. λd/σ2d → 0 if α = β), the proposal (10) coincides with that of MALA; in the case m = 1, r = 1 we get the P-MALA algorithm studied in [30, 31]. Note that for all other values of r, m we have a family of proposals whose behaviour depends on r and m.

3.1 Differentiable targets

We start with the case where π is continuously differentiable. Since MALA can be applied to this class of targets, the results obtained in this section allow direct comparison of proximal MALA algorithms with MALA, and thus between gradient-based algorithms (MALA) and algorithms that use proximal operator-based approximations of the gradient (proximal MALA). If G = − log π is continuously differentiable, using [3, Corollary 17.6], proxλG(x) = −λ∇G(proxλG(x)) + x, and (5) reduces to

ξk+1 = ξk − (σ2/2)∇G(proxλG(ξk)) + σZk+1 .  (10)

Hence, the value of λ controls how close the point at which the gradient is evaluated is to ξk. For λ → 0, the proximal MALA proposal becomes arbitrarily close to that of MALA, while, as λ increases, (10) moves away from MALA.

Our main result, Theorem 1 below, shows that the relative speed of decay (i.e. m) influences the optimal scaling of the resulting proximal MALA algorithm, while the constant r influences the speed function of the limiting diffusion.

We make the following assumptions on the regularity of g.

A1. g is a C8-function whose derivatives are bounded by some polynomial: there exists k0 ∈ N such that

sup_{x∈R} max_{i∈{0,...,8}} [g(i)(x)/(1 + |x|k0)] < ∞ .

Note that under A0 and A1, [14, Lemma A.1] implies that ∫R xk exp(−g(x))dx < ∞ for any k ∈ N.
We also assume that the sequence of proximal MALA algorithms is initialized at stationarity.

A2. For any d ∈ N∗, Xd0 has distribution πd.

The assumptions above closely resemble those used in [34] to obtain the optimal scaling results for MALA. In particular, A1 ensures that we can approximate the log-acceptance ratio in (6) with a Taylor expansion, while A2 avoids technical complications due to the transient phase of the algorithm. We discuss how the latter assumption could be relaxed in Section 5.

For technical reasons, and to allow direct comparisons with the results established in [34] for MALA, we will also consider the following regularity assumption.

A3. The function g′ is Lipschitz continuous.

We denote by Ldt the linear interpolation of the first component of the discrete time Markov chain (Xdk)k≥0 obtained with the generic proximal MALA algorithm described above,

Ldt = (⌈d2αt⌉ − d2αt) Xd⌊d2αt⌋,1 + (d2αt − ⌊d2αt⌋) Xd⌈d2αt⌉,1 ,  (11)

where ⌊·⌋ and ⌈·⌉ denote the lower and upper integer part functions, respectively, and Xdk,1 denotes the first component of Xdk. The following result shows that in the limit d → ∞ the properties of proximal MALA depend on the relative speed at which σ2d = ℓ2/d2α and λd = c2/2d2β converge to 0. Recall that we set r = c2/ℓ2m > 0 and, under A2, consider for any d ∈ N∗,

ad(ℓ, r) = E[ πd(Y d1)qd(Y d1, Xd0) / (πd(Xd0)qd(Xd0, Y d1)) ∧ 1 ] .  (12)

Theorem 1. Assume A0, A1 and A2. For any d ∈ N∗, let σ2d = ℓ2/d2α and λd = c2/2d2β with α, β > 0. Then, the following statements hold.

(a) If α = 1/4, β = 1/8 and r > 0, we have limd→+∞ ad(ℓ, r) = 2Φ(−ℓ2K1(r)/2), where Φ is the distribution function of a standard normal and

K12(r) = (r2/4) E[ (g′′(Xd0,1)g′(Xd0,1))2 ] .

If, in addition, A3 holds:
(b) If α = 1/6, β = 1/6 and r > 0, we have limd→+∞ ad(ℓ, r) = 2Φ(−ℓ3K2(r)/2), where Φ is the distribution function of a standard normal and

K22(r) = (r/8 + r2/4) E[ g′′(Xd0,1)2 g′(Xd0,1)2 ] + (1/16 + r/8) E[ g′′(Xd0,1)3 ] + (5/48) E[ g′′′(Xd0,1)2 ] .

(c) If α = 1/6, β > 1/6 and r > 0, we have limd→+∞ ad(ℓ, r) = 2Φ(−ℓ3K2(0)/2), where Φ is the distribution function of a standard normal.

In addition, in all these cases, as d → ∞ the process (Ldt)t≥0 converges weakly to the Langevin diffusion

dLt = h(ℓ, r)1/2 dBt − (h(ℓ, r)/2) g′(Lt) dt ,  (13)

where (Bt)t≥0 denotes standard Brownian motion and h(ℓ, r) = ℓ2a(ℓ, r) is the speed of the diffusion, setting a(ℓ, r) = limd→∞ ad(ℓ, r). If α = 1/4, β = 1/8, for any r > 0, ℓ ↦ h(ℓ, r) is maximized at the unique value of ℓ such that a(ℓ, r) = 0.452; while if α = 1/6, β = m/6 with m ≥ 1 and r > 0, ℓ ↦ h(ℓ, r) is maximized at the unique value of ℓ such that a(ℓ, r) = 0.574.

Proof. The proof follows that of [34, Theorem 1, Theorem 2] and is postponed to Appendix A.

The theorem above shows that the relative speed at which λd converges to 0 influences the scaling of the resulting proximal algorithm. In case (c), m > 1 and λd decays with d at a faster rate than σ2d. This causes the proximity map (4) to collapse onto the identity, and therefore the proposal (10) is arbitrarily close to that of MALA. The resulting scaling limit also coincides with that of MALA established in [34, Theorem 1, Theorem 2].

If λd and σ2d decay at the same rate (case (b)), the amount of gradient information provided by the proximity map is controlled by r.
Comparing our result for case (b) with [34, Theorem 1], we find that

K22(0) = (1/16) E[ g′′(Xd0,1)3 ] + (5/48) E[ g′′′(Xd0,1)2 ] = K2MALA ;

thus, we have

K22(r) = K22(0) + (r/8 + r2/4) E[ g′′(Xd0,1)2 g′(Xd0,1)2 ] + (r/8) E[ g′′(Xd0,1)3 ] ≥ K2MALA ,

since the convexity of g implies that g′′ ≥ 0. In particular, K22(r) is an increasing function of r, achieving its minimum as r → 0 (i.e. MALA); see Figure 1(a).

In case (a), m = 1/2 and λd decays more slowly than σ2d. As a consequence, the gradient information provided by the proximity map is smaller than in cases (b)–(c), and the resulting scaling differs from that of MALA. The value of K12(r) is increasing in r, and the speed of the corresponding diffusion also depends on r (see Figure 1(a), gray lines, and Figure 1(b)).

Example 1 (Gaussian target). Take g(x) = x2/2, so that proxλg(x) = x/(1 + λ). In this case, g′ is Lipschitz continuous and we have K12(r) = r2/4, K22(r) = (1 + 4r + 4r2)/16 and K22(0) = K2MALA = 1/16. The corresponding speeds are given in Figure 1(a). Optimizing for m = 1, r = 0 (MALA) and m = 1, r = 1 (P-MALA) we obtain

hMALA(ℓ, r) = 1.5639 ,  hP-MALA(ℓ, r) = 0.7519 ,

achieved with ℓMALA = 1.6503 and ℓP-MALA = 1.1443, respectively. For Gaussian targets, MALA is geometrically ergodic [13], and therefore the optimal choice in terms of speed of convergence is MALA, which is obtained for r = 0. The result for r = 1 and m = 1 is also given in [31, Theorem 2.1].

Example 2 (Target with light tails). Take g(x) = x4, which gives a normalized distribution with normalizing constant 2Γ(5/4). The proximity map is

proxλg(x) = (1/2) [ (9λ2x + (81λ4x2 + 3λ3)1/2)1/3 / (3^{2/3}λ) − 1 / (27λ2x + 3(81λ4x2 + 3λ3)1/2)1/3 ] .

In this case g′ is not Lipschitz continuous and therefore we only consider case (a), for which we have K12(r) = 144r2Γ(11/4)/Γ(5/4).
The corresponding speed is given in Figure 1(b).

Figure 1: Value of K for i = 1, 2 and speed of the corresponding Langevin diffusion as a function of r for a Gaussian target and a light tail target ((a) Gaussian target; (b) light tail target). We denote by h1 the speed obtained in case (a), by h2 that obtained in case (b). In case (c) both K3 and the speed h3 are constant w.r.t. r and coincide with that of MALA. For the Gaussian target we report the results for cases (a)–(c) while for the light tail target we only report case (a).

3.2 Laplace target

As discussed in the introduction, proximal MALA has been widely used to quantify uncertainty in imaging applications, in which target distributions involving the ℓ1 norm are particularly common [30, 14, 1, 46]. Here, we consider πLd to be the product of d i.i.d. Laplace distributions as in (7),

πLd(xd) = Π i=1,...,d πL(xdi) , for xd ∈ Rd, where πL(x) = 2−1 exp(−|x|) .  (14)

For this particular choice of one-dimensional target distribution, the corresponding potential G is x ↦ |x| and satisfies A0. Then, the proximity map is given by the soft thresholding operator [29, Section 6.1.3]

proxλG(x) = (x − sgn(x)λ)1{|x| > λ} ,  (15)

where sgn : R → {−1, 1} is the sign function, given by sgn(x) = −1 if x ≤ 0, and sgn(x) = 1 otherwise. This operator is a continuous but not continuously differentiable map whose non-differentiability points are the extrema of the interval [−λ, λ] and are controlled by the value of the proximity parameter λ.

Plugging (15) into (9), the proximal MALA algorithm applied to πLd proposes component-wise, for i = 1, . . . , d,

Y dk+1,i = Xdk,i − (σ2d/2) sgn(Xdk,i)1{|Xdk,i| > λd} − (σ2d/(2λd)) Xdk,i 1{|Xdk,i| ≤ λd} + σdZdk+1,i .  (16)

For Xdk,i close to 0 (i.e. the point of non-differentiability) the proximal MALA proposal is a biased random walk around Xdk,i, while outside the region [−λd, λd] the proposal coincides with that of MALA.
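A direct transcription of (15)–(16) reads as follows (our own Python sketch; note that np.sign(0) = 0 whereas the paper sets sgn(0) = −1, a difference on a Lebesgue-null set only):

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximity map (15) of G(x) = |x|: shrinkage toward 0, exactly 0 on [-lam, lam]."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def laplace_proposal(x, sigma2, lam, rng):
    """Component-wise proposal (16) for the i.i.d. Laplace target."""
    drift = np.where(np.abs(x) > lam,
                     -0.5 * sigma2 * np.sign(x),   # MALA-like move outside [-lam, lam]
                     -0.5 * sigma2 * x / lam)      # biased random walk inside
    return x + drift + np.sqrt(sigma2) * rng.standard_normal(x.shape)
```

Equivalently, the drift in both branches is −(σ2/(2λ))(x − proxλG(x)), matching the generic proposal (5).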
As $\lambda_d \to 0$ the region in which the proximal MALA proposal coincides with that of MALA increases, and when $\lambda_d \approx 0$ the region $[-\lambda_d, \lambda_d]$ in which the proposal corresponds to a biased random walk is negligible, as confirmed by the asymptotic acceptance rate in Theorem 2.

We also consider the case $\lambda_d = 0$ for any $d$. Then, the proposal (16) becomes the proposal for the subgradient version of MALA, $Y_{k+1,i}^d = X_{k,i}^d - (\sigma_d^2/2)\operatorname{sgn}(X_{k,i}^d) + \sigma_d Z_{k+1,i}^d$, referred to as sG-MALA.

The proof of the optimal scaling for the Laplace distribution follows the structure of that of [12] for $L^p$-mean differentiable distributions. We start by characterizing the asymptotic acceptance ratio of a generic proximal MALA algorithm; contrary to Theorem 1 for differentiable targets, in the limit $d \to \infty$ the properties of proximal MALA do not depend on the relative speed at which $\sigma_d^2 = \ell^2/d^{2\alpha}$ and $\lambda_d = c^2/(2d^{2\beta})$ converge to 0, as long as $\lambda_d$ decays at least at the same rate as $\sigma_d^2$. In this regime, the region in which the proposal (16) corresponds to a biased random walk proposal is negligible, and we therefore obtain the same scaling as with $\lambda_d = 0$, corresponding to sG-MALA.

Theorem 2. Assume A2 and consider the sequence of target distributions $\{\pi_d^L\}_{d\in\mathbb{N}^*}$ given in (14). For any $d \in \mathbb{N}^*$, let $\sigma_d^2 = \ell^2/d^{2\alpha}$ and $\lambda_d = c^2/(2d^{2\beta})$ with $\alpha = 1/3$ and $\beta = m/3$ for $m \ge 1$. Then, we have $\lim_{d\to\infty} a_d(\ell, r) = a^L(\ell) = 2\Phi(-\ell^{3/2}/(72\pi)^{1/4})$, where $(a_d(\ell, r))_{d\in\mathbb{N}^*}$ is defined in (12), with $r = c^2/\ell^{2m}$, and $\Phi$ is the distribution function of a standard normal.

Proof. The proof is postponed to Section 6.1.

Note that the asymptotic average acceptance rate $a^L(\ell)$ does not depend on $r$ and, as a result, on $c$.

Having identified the possible scaling for proximal MALA with Laplace target, we are now ready to show weak convergence to the appropriate Langevin diffusion. To this end, we adapt the proof strategy followed in [22] and [12].
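Theorem 2 makes tuning concrete: the limiting acceptance rate $a^L(\ell) = 2\Phi(-\ell^{3/2}/(72\pi)^{1/4})$ can be evaluated directly, and the speed $h^L(\ell) = \ell^2 a^L(\ell)$ of the limiting diffusion (see Theorem 3 below) can be maximized numerically. The short Python sketch below (variable names are ours) recovers the optimal acceptance rate $0.360$ quoted in Theorem 3, attained around $\ell \approx 2.3$.

```python
import numpy as np
from math import erf

def Phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / np.sqrt(2.0)))

def a_L(ell):
    # limiting acceptance rate of Theorem 2 (independent of r and c)
    return 2.0 * Phi(-ell ** 1.5 / (72.0 * np.pi) ** 0.25)

def speed(ell):
    # h_L(ell) = ell^2 * a_L(ell), the speed of the limiting diffusion
    return ell ** 2 * a_L(ell)

# grid search for the maximizer of h_L
ells = np.linspace(0.01, 6.0, 100_000)
hs = np.array([speed(l) for l in ells])
ell_star = ells[np.argmax(hs)]
acc_star = a_L(ell_star)
```

In practice one would tune $\sigma_d = \ell d^{-1/3}$ so that the observed average acceptance rate is close to `acc_star`.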
As for the differentiable case, consider the linear interpolation $(L_t^d)_{t\ge 0}$ of the first component of the Markov chain $(X_k^d)_{k\ge 0}$ given in (11). For any $d \in \mathbb{N}^*$, denote by $\nu_d$ the law of the process $(L_t^d)_{t\ge 0}$ on the space of continuous functions from $\mathbb{R}_+$ to $\mathbb{R}$, $C(\mathbb{R}_+, \mathbb{R})$, endowed with the topology of uniform convergence over compact sets and its corresponding $\sigma$-field. We first show that the sequence $(\nu_d)_{d\in\mathbb{N}^*}$ admits a weak limit point as $d \to \infty$.

Proposition 1. Assume A2 and consider the sequence of target distributions $\{\pi_d^L\}_{d\in\mathbb{N}^*}$ given in (14). For any $d \in \mathbb{N}^*$, let $\sigma_d^2 = \ell^2/d^{2\alpha}$ and $\lambda_d = c^2/(2d^{2\beta})$ with $\alpha = 1/3$ and $\beta = m/3$. The sequence $(\nu_d)_{d\in\mathbb{N}^*}$ is tight in $\mathcal{M}_1(C(\mathbb{R}_+, \mathbb{R}))$, the set of probability measures acting on $C(\mathbb{R}_+, \mathbb{R})$.

Proof. See Section 6.2.

By Prokhorov's theorem, the tightness of $(\nu_d)_{d\in\mathbb{N}^*}$ implies the existence of a weak limit point $\nu$. In our next result, we give a sufficient condition to show that any limit point of $(\nu_d)_{d\in\mathbb{N}^*}$ coincides with the law of a solution of
\[
\mathrm{d}L_t = [h^L(\ell)]^{1/2}\,\mathrm{d}B_t - \frac{h^L(\ell)}{2}\operatorname{sgn}(L_t)\,\mathrm{d}t . \tag{17}
\]
To this end, we consider the martingale problem (see [42]) associated with (17), which we now present. Let us denote by $C_c^\infty(\mathbb{R}, \mathbb{R})$ the subset of functions of $C(\mathbb{R}, \mathbb{R})$ which are infinitely many times differentiable and with compact support, and define the generator of (17), for $V \in C_c^\infty(\mathbb{R}, \mathbb{R})$, by
\[
\mathcal{L}V(x) = \frac{h^L(\ell)}{2}\big[V''(x) - \operatorname{sgn}(x)V'(x)\big] . \tag{18}
\]
Denote by $(W_t)_{t\ge 0}$ the canonical process on $C(\mathbb{R}_+, \mathbb{R})$, $W_t : \{w_s\}_{s\ge 0} \mapsto w_t$, and the corresponding filtration by $(\mathcal{F}_t)_{t\ge 0}$. A probability measure $\nu$ is said to solve the martingale problem associated with (17) with initial distribution $\pi^L$ if the pushforward of $\nu$ by $W_0$ is $\pi^L$ and if, for all $V \in C_c^\infty(\mathbb{R}, \mathbb{R})$, the process
\[
\Big(V(W_t) - V(W_0) - \int_0^t \mathcal{L}V(W_u)\,\mathrm{d}u\Big)_{t\ge 0}
\]
is a martingale with respect to $\nu$ and the filtration $(\mathcal{F}_t)_{t\ge 0}$. The following proposition gives a sufficient condition to prove that $\nu$ is a solution of the martingale problem:

Proposition 2.
Suppose that for any $V \in C_c^\infty(\mathbb{R}, \mathbb{R})$, $m \in \mathbb{N}$, $\rho : \mathbb{R}^m \to \mathbb{R}$ bounded and continuous, and for any $0 \le t_1 \le \dots \le t_m \le s \le t$:
\[
\lim_{d\to+\infty} \mathbb{E}_{\nu_d}\Big[\Big(V(W_t) - V(W_s) - \int_s^t \mathcal{L}V(W_u)\,\mathrm{d}u\Big)\rho(W_{t_1}, \dots, W_{t_m})\Big] = 0 .
\]
Then any limit point of $(\nu_d)_{d\in\mathbb{N}^*}$ on $\mathcal{M}_1(C(\mathbb{R}_+, \mathbb{R}))$ is a solution to the martingale problem associated with (17).

Proof. See Section 6.3.

Finally, we use this sufficient condition to establish that any limit point of $(\nu_d)_{d\in\mathbb{N}^*}$ is a solution of the martingale problem for (17). Uniqueness in law of solutions of (17) allows us to conclude that $(L_t^d)_{t\ge 0}$ converges weakly to the Langevin diffusion (17), which establishes our main result.

Theorem 3. The sequence of processes $\{(L_t^d)_{t\ge 0} : d \in \mathbb{N}^*\}$ converges in distribution towards $(L_t)_{t\ge 0}$, solution of (17), as $d \to \infty$, with $h^L(\ell) = \ell^2 a^L(\ell)$ and $a^L$ defined in Theorem 2. In addition, $h^L$ is maximized at the unique value of $\ell$ such that $a^L(\ell) = 0.360$.

Proof. See Section 6.4.

4 Practical Implications and Numerical Simulations

The optimal scaling results in Sections 3.1 and 3.2 provide some guidance on the choice of the parameters $\sigma$ and $\lambda$ of proximal MALA algorithms, suggesting that smaller values of $\lambda$ provide better efficiency in terms of the number of steps necessary for convergence (Theorem 1).

However, a number of other factors must be taken into account. First, as shown in [26, 37, 36, 21], the convergence properties of Metropolis-adjusted algorithms are influenced by the shape of the target distribution and, in particular, by its tail behavior. Secondly, when comparing proximal MALA algorithms with gradient-based methods (e.g. MALA) one must take into account the cost of obtaining the gradients, whether this comes from automatic differentiation algorithms or from evaluating a potentially complicated gradient function. On the other hand, proximity mappings can be quickly found or approximated by solving convex optimization problems which have been widely studied in the convex optimization literature (e.g.
[29, Chapter 6], [11] and [30, Section 3.2.3]).

In terms of convergence properties, we are usually interested in the family of distributions for which the discrete-time Markov chain produced by our algorithm is geometrically ergodic, together with the optimal scaling results briefly recalled in Section 2. Normally, the ergodicity results are given by considering the one-dimensional class of distributions $\mathcal{E}(\beta, \gamma)$ introduced in [36] and defined for $\gamma > 0$ and $0 < \beta < \infty$ by
\[
\mathcal{E}(\beta, \gamma) = \big\{\pi : \mathbb{R} \to [0, +\infty) : \pi(x) \propto \exp(-\gamma|x|^\beta), \ |x| > x_0 \text{ for some } x_0 > 0\big\} .
\]
As observed by [24], there is usually a trade-off between ergodicity and optimal scaling results: algorithms providing better optimal scaling results tend to be geometrically ergodic for a smaller set of targets (e.g. MALA w.r.t. RWM).

As suggested by Theorem 1, the scaling properties of proximal MALA on differentiable targets are close to those of MALA. This leads to a natural comparison between the two algorithms. First, we observe that A0 rules out targets for which $G$ is not convex and therefore restricts the families $\mathcal{E}(\beta, \gamma)$ to $\beta \ge 1$. To compare MALA with proximal MALA we therefore focus on distributions with $\beta \ge 1$.

It is shown in [36] that MALA is geometrically ergodic for targets in $\mathcal{E}(\beta, \gamma)$ with $1 \le \beta \le 2$ (with some caveat for $\beta = 2$). Theorem 1–(b) and (c) show that in this case proximal MALA has the same scaling properties as MALA, but in case (b) the asymptotic speed of convergence decays as the constant $r$ increases (Figure 1(a)), with the maximum achieved for $r \to 0$, for which proximal MALA collapses onto MALA. Since MALA is geometrically ergodic, and achieves better (or equivalent) scaling properties than proximal MALA, it would be natural to prefer MALA to proximal MALA for this set of targets.
However, if the gradient is costly to obtain, one might instead consider using proximal MALA with a small $\lambda$, to retain scaling properties as close as possible to those of MALA while reducing the computational cost of evaluating the gradient.

In the case of differentiable targets with light tails (i.e. $\beta > 2$), MALA is known not to be geometrically ergodic [36, Section 4.2], while the ergodicity properties of proximal MALA have only been partially studied in [30, Section 3.2.2] for the case $\lambda = \sigma^2/2$ (P-MALA). As shown in [30, Section 2.1], given a distribution $\pi \in \mathcal{E}(\beta, \gamma)$ with $\beta \ge 1$, the distribution $\pi^\lambda$ obtained using the potential (3) belongs to $\mathcal{E}(\beta', \gamma')$, where $\beta' = \min(\beta, 2)$ and $\gamma'$ depends on $\lambda$. This suggests that proximal MALA is likely to be ergodic for appropriate choices of $\lambda$; a first result in this direction is given in [30, Corollary 3.2] for the P-MALA case $\lambda = \sigma^2/2$. Theorem 1–(c) restricts the set of available $\lambda$s, showing that for light-tail distributions (for which A3 does not hold) $\lambda$ should decay at half the speed of $\sigma^2$. Studying the ergodicity properties of proximal MALA as a function of the parameter $\lambda$ is, of course, an interesting problem that we leave for future work.

For the Laplace distribution, Theorem 2 shows that the value of $\lambda$ does not influence the asymptotic acceptance ratio of proximal MALA, as long as $\lambda$ decays with $d$ at least as fast as $\sigma^2$. The scaling properties and the asymptotic speed $h(\ell)$ in Theorem 3 do not depend on $\lambda$ and coincide with those of sG-MALA (obtained for $\lambda = 0$). Hence, in terms of optimal scaling, there does not seem to be a difference between proximal MALA and sG-MALA for the Laplace distribution.

4.1 Numerical Experiments

To illustrate the results established in Sections 3.1 and 3.2 we consider here a small collection of simulation studies.
The aim of these studies is to empirically confirm the optimal scalings identified in Theorems 1 and 2, to investigate the dimension $d$ at which the asymptotic acceptance ratio $\lim_{d\to\infty} a_d(\ell, r)$ well approximates the empirical average acceptance ratio and, consequently, for which dimensions $d$ we can expect the optimal asymptotic acceptances in Theorems 1 and 2 to guarantee maximal speed $h(\ell, r)$ (approximated by the expected squared jumping distance, see, e.g., [18]) for the corresponding diffusion. We summarize our findings here; a more detailed discussion can be found in Appendix B.

For the differentiable case, we consider the Gaussian distribution in Example 1 and four algorithmic settings which correspond to the three cases identified in Theorem 1 and to MALA. The different values of $r$ and $m$ influence the dimension required to observe convergence to the theoretical limit in Theorem 1: for $r \to 0$ and $m = 1$ (MALA) and for $m = 1/2$, $r = 1$ (corresponding to Theorem 1–(a)) the theoretical limit is already achieved for $d$ of order $10^2$, while in the cases $m = 3$, $r = 2$ and $m = r = 1$ (corresponding to Theorem 1–(c) and (b), respectively) our simulation results match the theoretical limit only for $d$ of order $10^5$ or higher.

The results for the Laplace case are similar, with the case $m > 1$ requiring a higher $d$ to observe convergence to the theoretical limit.

In general, we find that the optimal average acceptance ratios in Theorem 1 guarantee maximal speed $h(\ell, r)$ for $d$ sufficiently large (for small $d$ the optimal acceptance ratio often differs from the optimal asymptotic one, see, e.g., [40, Section 2.1]).

5 Discussion

In this work we analyze the scaling properties of a wide class of proximal MALA algorithms introduced in [30, 14] for smooth targets and for the Laplace distribution.
We show that the scaling properties of proximal MALA are influenced by the relative speed at which the proximal parameter $\lambda_d$ and the proposal variance $\sigma_d$ decay to 0 as $d \to \infty$, and we suggest practical ways to choose $\lambda_d$ as a function of $\sigma_d$ to guarantee good results.

In the case of smooth targets, we provide a detailed comparison between proximal MALA and MALA, showing that proximal MALA scales no better than MALA (Theorem 1). In particular, Theorem 1–(a) shows that if $\lambda_d$ is too large w.r.t. $\sigma_d$ then the efficiency of proximal MALA is of order $O(d^{-1/2})$ and therefore worse than the $O(d^{-1/3})$ of MALA, suggesting that $\lambda_d$ should be chosen to decay approximately as $\sigma_d$, if possible. If $\lambda_d$ decays sufficiently fast, then MALA and proximal MALA have similar scaling properties and, in the case in which the proximity map is cheaper to compute than the gradient, one can build proximal MALA algorithms which are as efficient as MALA in terms of scaling but computationally cheaper.

In the case of the Laplace distribution, we show that the scaling of proximal MALA is $O(d^{-2/3})$ for any $\lambda_d$ decaying sufficiently fast w.r.t. $\sigma_d$ and, in the limit $\lambda_d \approx 0$, we obtain a novel optimal scaling result for MALA on Laplace targets.

As discussed in Section 4, our analysis provides some guidance on the choice of the parameters that need to be specified to implement proximal MALA, but this analysis should be complemented by an exploration of the ergodicity properties of proximal MALA to obtain a comprehensive description of the algorithms. We conjecture that for sufficiently large values of $\lambda$, proximal MALA applied to light-tail distributions will be exponentially ergodic; establishing exactly how large $\lambda$ should be to guarantee fast convergence is an interesting question that we leave for future work.
Obtaining these results would open the door to adaptive tuning strategies for proximal MALA, which are likely to produce better results than the strategies currently used.

The setup under which we carried out our analysis closely resembles that of [34]; we anticipate that A2 could be relaxed following ideas similar to those in [10, 22], and that our analysis could be extended to $d$-dimensional targets $\pi_d$ possessing some dependence structure following the approach of [40, 4, 45]. Finally, the analysis carried out for the Laplace distribution could be extended to other piecewise smooth distributions, provided that the moments necessary for the proof in Section 6 can be computed.

6 Proof of the Result for the Laplace distribution

In this section we prove the results in Section 3.2, which give the scaling properties of proximal MALA (and sG-MALA) for the Laplace distribution. We collect technical results (e.g. moment computations, bounds, etc.) in Appendix D.

We recall that $\sigma_d^2 = \ell^2/d^{2\alpha}$ and $\lambda_d = c^2/(2d^{2\beta})$ for some $\alpha, \beta > 0$ and constants $c, \ell$ independent of $d$. Thus, we can write $\lambda_d$ as a function of $\sigma_d$: $\lambda_d = \sigma_d^{2m} r/2$, where we define $r = c^2/\ell^{2m} > 0$ and $m = \beta/\alpha$.

In order to study the scaling limit of proximal MALA with Laplace target, consider the mapping $b_d : \mathbb{R}^2 \to \mathbb{R}$ given by
\[
b_d : (x, z) \mapsto z - \frac{\sigma_d}{2}\operatorname{sgn}(x)\,\mathbb{1}\{|x| > \sigma_d^{2m} r/2\} - \frac{1}{\sigma_d^{2m-1} r}\, x\, \mathbb{1}\{|x| \le \sigma_d^{2m} r/2\} , \tag{19}
\]
which allows us to write the proposal as $Y_{1,i}^d = X_{0,i}^d + \sigma_d b_d(X_{0,i}^d, Z_{1,i}^d)$, for any $i \in \{1, \dots, d\}$. We also consider the function $\phi_d : \mathbb{R}^2 \to \mathbb{R}$, given by
\[
\phi_d : (x, z) \mapsto \log \frac{\pi(x + \sigma_d b_d(x,z))\, q(x + \sigma_d b_d(x,z), x)}{\pi(x)\, q(x, x + \sigma_d b_d(x,z))} \tag{20}
\]
\[
= |x| - |x + \sigma_d b_d(x,z)| + \frac{z^2}{2} - \frac{1}{2\sigma_d^2}\Big[\frac{\sigma_d^2}{2}\operatorname{sgn}[x + \sigma_d b_d(x,z)]\,\mathbb{1}\Big\{|x + \sigma_d b_d(x,z)| > \frac{\sigma_d^{2m} r}{2}\Big\} - \sigma_d b_d(x,z) + \frac{1}{\sigma_d^{2(m-1)} r}[x + \sigma_d b_d(x,z)]\,\mathbb{1}\Big\{|x + \sigma_d b_d(x,z)| \le \frac{\sigma_d^{2m} r}{2}\Big\}\Big]^2 .
\]
We introduce, for $i \in \{1, \dots
, d\}$, $\phi_{d,i} = \phi_d(X_{0,i}^d, Z_{1,i}^d)$ for the sake of conciseness. This allows us to rewrite $a_d(\ell, r)$, defined in (12), in the following way:
\[
a_d(\ell, r) = \mathbb{E}\Big[\exp\Big(\sum_{i=1}^d \phi_{d,i}\Big) \wedge 1\Big] . \tag{21}
\]
Remark 1. Under A2, the families of random variables $(b_d(X_{0,i}^d, Z_{1,i}^d))_{i\in\{1,\dots,d\}}$ and $(\phi_{d,i})_{i\in\{1,\dots,d\}}$ are i.i.d.

6.1 Proof of Theorem 2

The proof of Theorem 2 uses the first three moments of $\phi_{d,1}$, whose computation is postponed to Appendix D.1, and is an application of Lindeberg's central limit theorem. To identify the optimal scaling for the Laplace distribution, we look for those values of $\alpha$ such that $\sum_{i=1}^d \mathbb{E}[\phi_{d,i}]$ and $\operatorname{Var}(\sum_{i=1}^d \phi_{d,i})$ converge to a finite value. Using Remark 1, we have that
\[
\sum_{i=1}^d \mathbb{E}[\phi_{d,i}] = d\,\mathbb{E}[\phi_{d,1}] \qquad\text{and}\qquad \operatorname{Var}\Big(\sum_{i=1}^d \phi_{d,i}\Big) = d\operatorname{Var}(\phi_{d,1}) . \tag{22}
\]
Then, using the integrals in Appendix D.1, we find that the only value of $\alpha$ for which (22) converges to a finite value with strictly positive variance is $\alpha = 1/3$, as confirmed empirically in Appendix B.2. Having identified $\alpha = 1/3$, we can proceed by applying Lindeberg's CLT.

Proof of Theorem 2. We start by showing that the acceptance ratio converges to a Gaussian distribution. Define $\mu_d = \mathbb{E}[\phi_{d,1}]$ and $\mathcal{F}_{d,i} = \sigma((X_{0,j}^d, Z_{1,j}^d), 1 \le j \le i)$, the natural filtration for $(X_{0,i}^d, Z_{1,i}^d)_{d\in\mathbb{N}, 1\le i\le d}$. The square-integrable martingale sequence
\[
\Big(S_{d,i} = \sum_{j=1}^i W_{d,j}, \ \mathcal{F}_{d,i}\Big)_{d\in\mathbb{N}^*, 1\le i\le d} ,
\]
where $W_{d,i} = \phi_{d,i} - \mu_d$, forms a triangular array, to which we can apply the corresponding CLT (e.g. [41, Theorem 4, page 543]). In particular, we have that
\[
\lim_{d\to\infty} \sum_{i=1}^d \mathbb{E}\big[W_{d,i}^2 \mid \mathcal{F}_{d,i-1}\big] = \lim_{d\to\infty} d\operatorname{Var}(\phi_{d,1}) = \frac{2\ell^3}{3\sqrt{2\pi}} ,
\]
as shown in Proposition 17 in Appendix D.1. It remains to verify Lindeberg's condition: for $\varepsilon > 0$,
\[
\lim_{d\to\infty} d\,\mathbb{E}\big[W_{d,1}^2\,\mathbb{1}\{|W_{d,1}| > \varepsilon\}\big] = 0 .
\]
In order to verify Lindeberg's condition we verify the stronger Lyapunov condition: there exists $\epsilon > 0$ such that
\[
\lim_{d\to\infty} d\,\mathbb{E}\big[W_{d,1}^{2+\epsilon}\big] = 0 .
\]
Pick $\epsilon = 1$ and expand the cube using $\mu_d = \mathbb{E}[\phi_{d,i}]$:
\[
\mathbb{E}\big[W_{d,1}^3\big] = \mathbb{E}\big[\phi_{d,i}^3\big] - 3\mu_d\,\mathbb{E}\big[\phi_{d,i}^2\big] + 2\mu_d^3 . \tag{23}
\]
By Proposition 16 in Appendix D.1, we have $\lim_{d\to\infty} d\mu_d^3 = 0$ and $\lim_{d\to\infty} \mu_d = 0$, and, by Proposition 17 in Appendix D.1,
\[
\lim_{d\to\infty} d\,\mathbb{E}\big[\phi_{d,i}^2\big] = \frac{2\ell^3}{3\sqrt{2\pi}} .
\]
Finally, for the remaining term in (23) we use Proposition 18 in Appendix D.1 to show that $\lim_{d\to\infty} d\,\mathbb{E}[\phi_{d,i}^3] = 0$. The above and the fact that, by Proposition 16 in Appendix D.1,
\[
\lim_{d\to\infty} d\mu_d = -\frac{\ell^3}{3\sqrt{2\pi}} ,
\]
show, by Lindeberg's CLT, that the acceptance ratio converges in law to a normal random variable $\hat{Z}$ with mean $-\ell^3/(3\sqrt{2\pi})$ and variance $2\ell^3/(3\sqrt{2\pi})$.

To conclude the proof, we plug this convergence into (21); since $x \mapsto e^x \wedge 1$ is a continuous and bounded mapping, we have
\[
\lim_{d\to\infty} \exp\Big(\sum_{i=1}^d \phi_{d,i}\Big) \wedge 1 \overset{d}{=} e^{\hat{Z}} \wedge 1 \qquad\text{and}\qquad \lim_{d\to\infty} a_d(\ell, r) = \mathbb{E}\big[e^{\hat{Z}} \wedge 1\big] ,
\]
where the limit does not depend on $r$. Defining $a^L(\ell) = \lim_{d\to\infty} a_d(\ell, r)$ and using [33, Proposition 2.4], we have the result.

6.2 Proof of Proposition 1

We are interested in the law $\nu_d$ of the linear interpolant $(L_t^d)_{t\ge 0}$, defined in (11), of the first component of the chain $(X_k^d)_{k\in\mathbb{N}}$. Let us recall the definition of the chain: assumption A2 gives the initial distribution $\pi_d$; then, for any $k \in \mathbb{N}$, the proposal $Y_{k+1}^d = (Y_{k+1,i}^d)_{1\le i\le d}$ is defined in (16) with $\sigma_d^2 = \ell^2/d^{2\alpha}$ and $\lambda_d = \sigma_d^{2m} r/2$, with $\alpha = 1/3$ and $m \ge 1$. With the notation introduced in Section 6, we can rewrite (16), for any $i \in \{1, \dots, d\}$, as
\[
Y_{k+1,i}^d = X_{k,i}^d + \sigma_d b_d(X_{k,i}^d, Z_{k+1,i}^d) , \tag{24}
\]
where $b_d$ is defined in (19) with $r = c^2/\ell^{2m}$. From there, we apply the acceptance–rejection step described in (6), and we additionally define the acceptance event $A_{k+1}^d = \{b_{k+1}^d = 1\}$.
We can now expand the expression of the linear interpolant $L_t^d$, for $t \ge 0$, using (6), (11) and the definition of $A_{k+1}^d$:
\[
L_t^d = \begin{cases} X^d_{\lfloor d^{2\alpha}t\rfloor,1} + (d^{2\alpha}t - \lfloor d^{2\alpha}t\rfloor)\Big(\sigma_d Z^d_{\lceil d^{2\alpha}t\rceil,1} - \dfrac{\sigma_d^2}{2}\operatorname{sgn}(X^d_{\lfloor d^{2\alpha}t\rfloor,1})\Big)\mathbb{1}_{A^d_{\lceil d^{2\alpha}t\rceil}} & \text{if } |X^d_{\lfloor d^{2\alpha}t\rfloor,1}| > \sigma_d^{2m}\frac{r}{2} , \\[2mm] X^d_{\lfloor d^{2\alpha}t\rfloor,1} + (d^{2\alpha}t - \lfloor d^{2\alpha}t\rfloor)\Big(\sigma_d Z^d_{\lceil d^{2\alpha}t\rceil,1} - \dfrac{1}{\sigma_d^{2(m-1)}r} X^d_{\lfloor d^{2\alpha}t\rfloor,1}\Big)\mathbb{1}_{A^d_{\lceil d^{2\alpha}t\rceil}} & \text{otherwise} , \end{cases} \tag{25}
\]
or, equivalently,
\[
L_t^d = \begin{cases} X^d_{\lceil d^{2\alpha}t\rceil,1} - (\lceil d^{2\alpha}t\rceil - d^{2\alpha}t)\Big(\sigma_d Z^d_{\lceil d^{2\alpha}t\rceil,1} - \dfrac{\sigma_d^2}{2}\operatorname{sgn}(X^d_{\lfloor d^{2\alpha}t\rfloor,1})\Big)\mathbb{1}_{A^d_{\lceil d^{2\alpha}t\rceil}} & \text{if } |X^d_{\lfloor d^{2\alpha}t\rfloor,1}| > \sigma_d^{2m}\frac{r}{2} , \\[2mm] X^d_{\lceil d^{2\alpha}t\rceil,1} - (\lceil d^{2\alpha}t\rceil - d^{2\alpha}t)\Big(\sigma_d Z^d_{\lceil d^{2\alpha}t\rceil,1} - \dfrac{1}{\sigma_d^{2(m-1)}r} X^d_{\lfloor d^{2\alpha}t\rfloor,1}\Big)\mathbb{1}_{A^d_{\lceil d^{2\alpha}t\rceil}} & \text{otherwise.} \end{cases}
\]

In order to prove Proposition 1, we consider Kolmogorov's criterion for tightness (see [23, Theorem 23.7]): the sequence $(\nu_d)_{d\ge 1}$ is tight if
\[
\mathbb{E}\big[(L_t^d - L_s^d)^4\big] \le \gamma(t)(t - s)^2
\]
for some non-decreasing positive function $\gamma$, all $0 \le s \le t$ and all $d \in \mathbb{N}^*$, and if the sequence $(L_0^d)_{d\in\mathbb{N}^*}$ is tight. The latter condition is straightforward to check, since by A2 the distribution of $L_0^d = X_{0,1}^d$ is $\pi^L$ for all $d \in \mathbb{N}^*$.

Proof of Proposition 1. Consider $\mathbb{E}[(L_t^d - L_s^d)^4]$. If $\lfloor d^{2\alpha}s\rfloor = \lfloor d^{2\alpha}t\rfloor$, the inequality follows straightforwardly, recalling that the moments of normal distributions are bounded: in the case $|X^d_{\lfloor d^{2\alpha}t\rfloor,1}| = |X^d_{\lfloor d^{2\alpha}s\rfloor,1}| > \sigma_d^{2m}r/2$ it follows directly from the boundedness of the sgn function, while in the case $|X^d_{\lfloor d^{2\alpha}t\rfloor,1}| = |X^d_{\lfloor d^{2\alpha}s\rfloor,1}| \le \sigma_d^{2m}r/2$ we exploit the boundedness of $X^d_{\lfloor d^{2\alpha}t\rfloor,1}$ itself. For all $0 \le s \le t$ such that $\lceil d^{2\alpha}s\rceil \le \lfloor d^{2\alpha}t\rfloor$, we distinguish three cases.

Case 1. If $|X^d_{\lfloor d^{2\alpha}t\rfloor,1}| > \sigma_d^{2m}r/2$ and $|X^d_{\lfloor d^{2\alpha}s\rfloor,1}| > \sigma_d^{2m}r/2$, then
\[
L_t^d - L_s^d = X^d_{\lfloor d^{2\alpha}t\rfloor,1} - X^d_{\lceil d^{2\alpha}s\rceil,1} + (d^{2\alpha}t - \lfloor d^{2\alpha}t\rfloor)\Big(\sigma_d Z^d_{\lceil d^{2\alpha}t\rceil,1} - \frac{\sigma_d^2}{2}\operatorname{sgn}(X^d_{\lfloor d^{2\alpha}t\rfloor,1})\Big)\mathbb{1}_{A^d_{\lceil d^{2\alpha}t\rceil}} + (\lceil d^{2\alpha}s\rceil - d^{2\alpha}s)\Big(\sigma_d Z^d_{\lceil d^{2\alpha}s\rceil,1} - \frac{\sigma_d^2}{2}\operatorname{sgn}(X^d_{\lfloor d^{2\alpha}s\rfloor,1})\Big)\mathbb{1}_{A^d_{\lceil d^{2\alpha}s\rceil}} .
\]
Using Hölder's inequality and the fact that $0 \le d^{2\alpha}t - \lfloor d^{2\alpha}t\rfloor \le 1$ (and similarly for $s$), we have
\[
\mathbb{E}\big[(L_t^d - L_s^d)^4\big] \le C\,\mathbb{E}\big[(X^d_{\lfloor d^{2\alpha}t\rfloor,1} - X^d_{\lceil d^{2\alpha}s\rceil,1})^4\big] + C\frac{(d^{2\alpha}t - \lfloor d^{2\alpha}t\rfloor)^2}{d^{4\alpha}}\mathbb{E}\Big[\big(\ell Z^d_{\lceil d^{2\alpha}t\rceil,1}\big)^4 + \frac{\ell^4}{2^4 d^{4\alpha}}\Big] + C\frac{(\lceil d^{2\alpha}s\rceil - d^{2\alpha}s)^2}{d^{4\alpha}}\mathbb{E}\Big[\big(\ell Z^d_{\lceil d^{2\alpha}s\rceil,1}\big)^4 + \frac{\ell^4}{2^4 d^{4\alpha}}\Big] .
\]
Recalling that the moments of $Z^d$ are bounded and that $d^{2\alpha}s \le \lceil d^{2\alpha}s\rceil \le \lfloor d^{2\alpha}t\rfloor \le d^{2\alpha}t$, it follows that
\[
\mathbb{E}\big[(L_t^d - L_s^d)^4\big] \le C\Big((t-s)^2 + \mathbb{E}\big[(X^d_{\lfloor d^{2\alpha}t\rfloor,1} - X^d_{\lceil d^{2\alpha}s\rceil,1})^4\big]\Big) . \tag{26}
\]

Case 2. If $|X^d_{\lfloor d^{2\alpha}t\rfloor,1}| > \sigma_d^{2m}r/2$ and $|X^d_{\lfloor d^{2\alpha}s\rfloor,1}| \le \sigma_d^{2m}r/2$, or $|X^d_{\lfloor d^{2\alpha}t\rfloor,1}| \le \sigma_d^{2m}r/2$ and $|X^d_{\lfloor d^{2\alpha}s\rfloor,1}| > \sigma_d^{2m}r/2$. We only describe the argument for the first case; the second follows from analogous steps. Take
\[
L_t^d - L_s^d = X^d_{\lfloor d^{2\alpha}t\rfloor,1} + (d^{2\alpha}t - \lfloor d^{2\alpha}t\rfloor)\Big(\sigma_d Z^d_{\lceil d^{2\alpha}t\rceil,1} - \frac{\sigma_d^2}{2}\operatorname{sgn}(X^d_{\lfloor d^{2\alpha}t\rfloor,1})\Big)\mathbb{1}_{A^d_{\lceil d^{2\alpha}t\rceil}} - X^d_{\lceil d^{2\alpha}s\rceil,1} - (\lceil d^{2\alpha}s\rceil - d^{2\alpha}s)\Big(\sigma_d Z^d_{\lceil d^{2\alpha}s\rceil,1} - \frac{1}{\sigma_d^{2(m-1)}r} X^d_{\lfloor d^{2\alpha}s\rfloor,1}\Big)\mathbb{1}_{A^d_{\lceil d^{2\alpha}s\rceil}} .
\]
Proceeding as above, we find that
\[
\mathbb{E}\big[(L_t^d - L_s^d)^4\big] \le C\Big((t-s)^2 + \mathbb{E}\big[(X^d_{\lfloor d^{2\alpha}t\rfloor,1} - X^d_{\lceil d^{2\alpha}s\rceil,1})^4\big] + (\lceil d^{2\alpha}s\rceil - d^{2\alpha}s)^4\,\mathbb{E}\Big[\Big(\frac{1}{\sigma_d^{2(m-1)}r} X^d_{\lfloor d^{2\alpha}s\rfloor}\Big)^4\Big]\Big) ,
\]
and, recalling that $|X^d_{\lfloor d^{2\alpha}s\rfloor,1}| \le \sigma_d^{2m}r/2$, we have $|X^d_{\lfloor d^{2\alpha}s\rfloor,1}|/(r\sigma_d^{2(m-1)}) \le \sigma_d^2/2$. Using this and the same arguments as above, we have
\[
\mathbb{E}\big[(L_t^d - L_s^d)^4\big] \le C\Big((t-s)^2 + \mathbb{E}\big[(X^d_{\lfloor d^{2\alpha}t\rfloor,1} - X^d_{\lceil d^{2\alpha}s\rceil,1})^4\big]\Big) . \tag{27}
\]

Case 3. If $|X^d_{\lfloor d^{2\alpha}t\rfloor,1}| \le \sigma_d^{2m}r/2$ and $|X^d_{\lfloor d^{2\alpha}s\rfloor,1}| \le \sigma_d^{2m}r/2$, then
\[
L_t^d - L_s^d = X^d_{\lfloor d^{2\alpha}t\rfloor,1} + (d^{2\alpha}t - \lfloor d^{2\alpha}t\rfloor)\Big(\sigma_d Z^d_{\lceil d^{2\alpha}t\rceil,1} - \frac{1}{\sigma_d^{2(m-1)}r} X^d_{\lfloor d^{2\alpha}t\rfloor,1}\Big)\mathbb{1}_{A^d_{\lceil d^{2\alpha}t\rceil}} - X^d_{\lceil d^{2\alpha}s\rceil,1} + (\lceil d^{2\alpha}s\rceil - d^{2\alpha}s)\Big(\sigma_d Z^d_{\lceil d^{2\alpha}s\rceil,1} - \frac{1}{\sigma_d^{2(m-1)}r} X^d_{\lfloor d^{2\alpha}s\rfloor,1}\Big)\mathbb{1}_{A^d_{\lceil d^{2\alpha}s\rceil}} .
\]
Using the boundedness of the moments of Gaussian distributions and of $X^d_{\lfloor d^{2\alpha}t\rfloor,1}$, $X^d_{\lfloor d^{2\alpha}s\rfloor,1}$, we have
\[
\mathbb{E}\big[(L_t^d - L_s^d)^4\big] \le C\Big((t-s)^2 + \mathbb{E}\big[(X^d_{\lfloor d^{2\alpha}t\rfloor,1} - X^d_{\lceil d^{2\alpha}s\rceil,1})^4\big]\Big)
\]
(28)

Putting (26), (27) and (28) together and using Lemma 1 below, we obtain
\[
\mathbb{E}\big[(L_t^d - L_s^d)^4\big] \le C\Big((t-s)^2 + \sum_{p=2}^4 \frac{(\lfloor d^{2\alpha}t\rfloor - \lceil d^{2\alpha}s\rceil)^p}{d^{2\alpha p}}\Big) \le C(t-s)^2 + C\sum_{p=2}^4 \frac{d^{2\alpha p}(t-s)^p}{d^{2\alpha p}} \le C\big(2 + t + t^2\big)(t-s)^2 ,
\]
which concludes the proof.

We are now ready to state and prove Lemma 1.

Lemma 1. There exists $C > 0$ such that for any $k_1, k_2 \in \mathbb{N}$ with $0 \le k_1 < k_2$,
\[
\mathbb{E}\big[(X^d_{k_2,1} - X^d_{k_1,1})^4\big] \le C\sum_{p=2}^4 \frac{(k_2 - k_1)^p}{d^{2\alpha p}} .
\]

Proof. Recalling the definition of the proposal in (24) and the notation of (19), we can write
\[
\mathbb{E}\big[(X^d_{k_2,1} - X^d_{k_1,1})^4\big] = \mathbb{E}\Big[\Big(\sum_{k=k_1+1}^{k_2} \sigma_d b_d(X^d_{k-1,1}, Z^d_{k,1})\mathbb{1}_{A_k^d}\Big)^4\Big] .
\]
Then, we expand all acceptance or rejection terms between $k_1$ and $k_2$ and use Hölder's inequality to obtain
\[
\mathbb{E}\big[(X^d_{k_2,1} - X^d_{k_1,1})^4\big] \le \sigma_d^4\,\mathbb{E}\Big[\Big(\sum_{k=k_1+1}^{k_2} b_d(X^d_{k-1,1}, Z^d_{k,1})\Big)^4\Big] + \sigma_d^4\,\mathbb{E}\Big[\Big(\sum_{k=k_1+1}^{k_2} b_d(X^d_{k-1,1}, Z^d_{k,1})\mathbb{1}_{(A_k^d)^c}\Big)^4\Big] ,
\]
where $b_d$ is defined in (19). Using again Hölder's inequality, for the first term we have
\[
\mathbb{E}\Big[\Big(\sum_{k=k_1+1}^{k_2} b_d(X^d_{k-1,1}, Z^d_{k,1})\Big)^4\Big] \le C\Big(\mathbb{E}\Big[\Big(\sum_{k=k_1+1}^{k_2} Z^d_{k,1}\Big)^4\Big] + \frac{\sigma_d^4}{2^4}\mathbb{E}\Big[\Big(\sum_{k=k_1+1}^{k_2} \operatorname{sgn}(X^d_{k-1,1})\mathbb{1}\{|X^d_{k-1,1}| > \sigma_d^{2m}r/2\}\Big)^4\Big] + \frac{\sigma_d^4}{2^4}\mathbb{E}\Big[\Big(\sum_{k=k_1+1}^{k_2} \frac{1}{\sigma_d^{2m-1}r} X^d_{k-1,1}\mathbb{1}\{|X^d_{k-1,1}| \le \sigma_d^{2m}r/2\}\Big)^4\Big]\Big) \le C\Big(3(k_2 - k_1)^2 + \frac{2\sigma_d^4}{2^4}(k_2 - k_1)^4\Big) , \tag{29}
\]
where the last line follows using the moments of $Z^d_{k,1}$ and the boundedness of $X^d_{k-1,1}$ on the set $\{|X^d_{k-1,1}| \le \sigma_d^{2m}r/2\}$. Using a binomial expansion of the rejection term, we obtain
\[
\mathbb{E}\Big[\Big(\sum_{k=k_1+1}^{k_2} b_d(X^d_{k-1,1}, Z^d_{k,1})\mathbb{1}_{(A_k^d)^c}\Big)^4\Big] = \sum \mathbb{E}\Big[\prod_{i=1}^4 b_d(X^d_{m_i-1,1}, Z^d_{m_i,1})\mathbb{1}_{(A^d_{m_i})^c}\Big] , \tag{30}
\]
where the sum is over the quadruplets $(m_i)_{1\le i\le 4}$ with $m_i \in \{k_1+1, \dots, k_2\}$. We separate the terms in the sum according to their cardinality; let us denote, for $j \in \{1, \dots
, 4\}$,
\[
I_j = \big\{(m_1, \dots, m_4) \in \{k_1+1, \dots, k_2\}^4 : \#\{m_1, \dots, m_4\} = j\big\} ;
\]
and define, for any $(m_1, \dots, m_4) \in \{k_1+1, \dots, k_2\}^4$, $\widetilde{X}_0^d = X_0^d$ and, for any $i \in \{1, \dots, d\}$,
\[
\widetilde{X}^d_{k+1,i} = \widetilde{X}^d_{k,i} + \mathbb{1}_{\{m_1-1,\dots,m_4-1\}^c}(k)\,\mathbb{1}_{\widetilde{A}^d_{k+1}}\,\sigma_d b_d\big(\widetilde{X}^d_{k,i}, Z^d_{k+1,i}\big) ,
\]
where
\[
\widetilde{A}^d_{k+1} = \Big\{U_{k+1} \le \exp\Big(\sum_{i=1}^d \phi_d\big(\widetilde{X}^d_{k,i}, Z^d_{k+1,i}\big)\Big)\Big\} , \tag{31}
\]
and $\phi_d$ is defined in (20). Denote by $\mathcal{F}$ the $\sigma$-algebra generated by the process $\widetilde{X}^d$ and observe that on the event $\bigcap_{j=1}^4 (A^d_{m_j})^c$, $X^d$ is equal to $\widetilde{X}^d$. We now consider the terms in the sum (30).

(i) If $(m_1, \dots, m_4) \in I_4$, then the $m_i$ are all distinct and
\[
\mathbb{E}\Big[\prod_{j=1}^4 b_d(X^d_{m_j-1,1}, Z^d_{m_j,1})\mathbb{1}_{(A^d_{m_j})^c}\,\Big|\,\mathcal{F}\Big] = \mathbb{E}\Big[\prod_{j=1}^4 b_d(\widetilde{X}^d_{m_j-1,1}, Z^d_{m_j,1})\mathbb{1}_{(\widetilde{A}^d_{m_j})^c}\,\Big|\,\mathcal{F}\Big] .
\]
However, $\{b_d(\widetilde{X}^d_{m_j-1,1}, Z^d_{m_j,1})\mathbb{1}_{(\widetilde{A}^d_{m_j})^c}\}_{j=1,\dots,4}$ are independent conditionally on $\mathcal{F}$. Thus,
\[
\mathbb{E}\Big[\prod_{j=1}^4 b_d(\widetilde{X}^d_{m_j-1,1}, Z^d_{m_j,1})\mathbb{1}_{(\widetilde{A}^d_{m_j})^c}\,\Big|\,\mathcal{F}\Big] = \prod_{j=1}^4 \mathbb{E}\big[b_d(\widetilde{X}^d_{m_j-1,1}, Z^d_{m_j,1})\mathbb{1}_{(\widetilde{A}^d_{m_j})^c}\,\big|\,\mathcal{F}\big] = \prod_{j=1}^4 \mathbb{E}\Big[b_d(\widetilde{X}^d_{m_j-1,1}, Z^d_{m_j,1})\Big(1 - \exp\Big(\sum_{i=1}^d \phi_d(\widetilde{X}^d_{m_j-1,i}, Z^d_{m_j,i})\Big)\Big)_+\Big|\mathcal{F}\Big] ,
\]
by integrating the uniform variables $U_{m_j}$ in (31). Recalling the definition of $b_d$ in (19), we can bound the expectation above:
\[
\Big|\mathbb{E}\Big[b_d(\widetilde{X}^d_{m_j-1,1}, Z^d_{m_j,1})\Big(1 - \exp\Big(\sum_{i=1}^d \phi_d(\widetilde{X}^d_{m_j-1,i}, Z^d_{m_j,i})\Big)\Big)_+\Big|\mathcal{F}\Big]\Big| \tag{32}
\]
\[
\le \Big|\mathbb{E}\Big[\Big(\frac{\sigma_d}{2}\operatorname{sgn}(\widetilde{X}^d_{m_j-1,1})\mathbb{1}\{|\widetilde{X}^d_{m_j-1,1}| > \sigma_d^{2m}r/2\} - \frac{1}{\sigma_d^{2m-1}r}\widetilde{X}^d_{m_j-1,1}\mathbb{1}\{|\widetilde{X}^d_{m_j-1,1}| \le \sigma_d^{2m}r/2\}\Big)\Big(1 - \exp\Big(\sum_{i=1}^d \phi_d(\widetilde{X}^d_{m_j-1,i}, Z^d_{m_j,i})\Big)\Big)_+\Big|\mathcal{F}\Big]\Big| + \Big|\mathbb{E}\Big[Z^d_{m_j,1}\Big(1 - \exp\Big(\sum_{i=1}^d \phi_d(\widetilde{X}^d_{m_j-1,i}, Z^d_{m_j,i})\Big)\Big)_+\Big|\mathcal{F}\Big]\Big| .
\]
For the first term, we use the boundedness of the sgn function and of $\widetilde{X}^d_{m_j-1,1}$ on the set $\{|\widetilde{X}^d_{m_j-1,1}| \le \sigma_d^{2m}r/2\}$ to obtain
\[
\Big|\mathbb{E}\Big[\Big(\frac{\sigma_d}{2}\operatorname{sgn}(\widetilde{X}^d_{m_j-1,1})\mathbb{1}\{|\widetilde{X}^d_{m_j-1,1}| > \sigma_d^{2m}r/2\} - \frac{1}{\sigma_d^{2m-1}r}\widetilde{X}^d_{m_j-1,1}\mathbb{1}\{|\widetilde{X}^d_{m_j-1,1}| \le \sigma_d^{2m}r/2\}\Big)\Big(1 - \exp\Big(\sum_{i=1}^d \phi_d\Big)\Big)_+\Big|\mathcal{F}\Big]\Big| \le \frac{\sigma_d}{2}\,\mathbb{E}\Big[\Big|\Big(1 - \exp\Big(\sum_{i=1}^d \phi_d(\widetilde{X}^d_{m_j-1,i}, Z^d_{m_j,i})\Big)\Big)_+\Big|\,\Big|\,\mathcal{F}\Big] \le \frac{\sigma_d}{2} . \tag{33}
\]
We can write the second term as
\[
\mathbb{E}\Big[Z^d_{m_j,1}\Big(1 - \exp\Big(\sum_{i=1}^d \phi_d(\widetilde{X}^d_{m_j-1,i}, Z^d_{m_j,i})\Big)\Big)_+\Big|\mathcal{F}\Big] = \mathbb{E}\Big[G\Big(\widetilde{X}^d_{m_j-1,1}, \sum_{i=2}^d \phi_d(\widetilde{X}^d_{m_j-1,i}, Z^d_{m_j,i})\Big)\Big|\mathcal{F}\Big] ,
\]
where we define $G(a, b) = \mathbb{E}[Z(1 - \exp(\phi_d(a, Z) + b))_+]$ with $Z$ a standard Gaussian. Because the function $x \mapsto (1 - \exp(x))_+$ is 1-Lipschitz, we have, using Cauchy–Schwarz and Lemma 3 in Appendix D.2,
\[
\big|\mathbb{E}[Z(1 - \exp(\phi_d(a, Z) + b))_+] - \mathbb{E}[Z(1 - \exp(b))_+]\big| \le \mathbb{E}[|Z||\phi_d(a, Z)|] \le \mathbb{E}[Z^2]^{1/2}\,\mathbb{E}[\phi_d(a, Z)^2]^{1/2} \le \mathbb{E}[\phi_d(a, Z)^2]^{1/2} \le C d^{-\alpha} .
\]
However, $\mathbb{E}[Z(1 - \exp(b))_+] = \mathbb{E}[Z](1 - \exp(b))_+ = 0$, and therefore
\[
\Big|\mathbb{E}\Big[G\Big(\widetilde{X}^d_{m_j-1,1}, \sum_{i=2}^d \phi_d(\widetilde{X}^d_{m_j-1,i}, Z^d_{m_j,i})\Big)\Big|\mathcal{F}\Big]\Big| \le C d^{-\alpha} . \tag{34}
\]
Combining (32), (33) and (34) and recalling that $\sigma_d = \ell d^{-\alpha}$, we have
\[
\Big|\mathbb{E}\Big[b_d(\widetilde{X}^d_{m_j-1,1}, Z^d_{m_j,1})\Big(1 - \exp\Big(\sum_{i=1}^d \phi_d(\widetilde{X}^d_{m_j-1,i}, Z^d_{m_j,i})\Big)\Big)_+\Big|\mathcal{F}\Big]\Big| \le C d^{-\alpha} , \tag{35}
\]
from which it follows that
\[
\sum_{(m_1,\dots,m_4)\in I_4}\Big|\mathbb{E}\Big[\prod_{i=1}^4 b_d(X^d_{m_i-1,1}, Z^d_{m_i,1})\mathbb{1}_{(A^d_{m_i})^c}\Big]\Big| \le \sum_{(m_1,\dots,m_4)\in I_4}\mathbb{E}\Big[\prod_{j=1}^4 \frac{C}{d^\alpha}\Big] \le \binom{k_2-k_1}{4}\frac{C}{d^{4\alpha}} \le C\frac{(k_2-k_1)^4}{d^{4\alpha}} , \tag{36}
\]
using that $|I_4| = \binom{k_2-k_1}{4}$.
(ii) If $(m_1, \dots, m_4) \in I_3$, only three of the $m_i$ take distinct values; proceeding as in case (i), we have
\[
\Big|\mathbb{E}\Big[\prod_{j=1}^3 b_d(X^d_{m_j-1,1}, Z^d_{m_j,1})^{1+\delta_{1,j}}\mathbb{1}_{(A^d_{m_j})^c}\,\Big|\,\mathcal{F}\Big]\Big| = \prod_{j=1}^3 \Big|\mathbb{E}\Big[b_d(\widetilde{X}^d_{m_j-1,1}, Z^d_{m_j,1})^{1+\delta_{1,j}}\Big(1 - \exp\Big(\sum_{i=1}^d \phi_d(\widetilde{X}^d_{m_j-1,i}, Z^d_{m_j,i})\Big)\Big)_+\Big|\mathcal{F}\Big]\Big| ,
\]
where $\delta_{1,j}$ denotes a Dirac delta. For the terms $j \ne 1$, we use (35), while for the term $j = 1$ we bound the indicator function by 1 to obtain
\[
\Big|\mathbb{E}\Big[\prod_{j=1}^3 b_d(X^d_{m_j-1,1}, Z^d_{m_j,1})^{1+\delta_{1,j}}\mathbb{1}_{(A^d_{m_j})^c}\,\Big|\,\mathcal{F}\Big]\Big| \le \Big|\mathbb{E}\big[b_d(\widetilde{X}^d_{m_1-1,1}, Z^d_{m_1,1})^2\,\big|\,\mathcal{F}\big]\Big| \prod_{j=2}^3 \frac{C}{d^\alpha} \le \Big(3 + \frac{2\sigma_d^2}{2^2 d^{2\alpha}}\Big)\frac{C^2}{d^{2\alpha}} \le C\frac{1}{d^{2\alpha}} ,
\]
where the second-to-last inequality follows using the same approach taken for (29) and recalling that $\sigma_d = \ell d^{-\alpha}$. Hence,
\[
\sum_{(m_1,\dots,m_4)\in I_3}\Big|\mathbb{E}\Big[\prod_{i=1}^4 b_d(X^d_{m_i-1,1}, Z^d_{m_i,1})\mathbb{1}_{(A^d_{m_i})^c}\Big]\Big| \le C\binom{k_2-k_1}{3}\frac{1}{d^{2\alpha}} \le C\frac{(k_2-k_1)^3}{d^{2\alpha}} . \tag{37}
\]

(iii) If $(m_1, \dots, m_4) \in I_2$, we have two different cases: the $m_i$ take two values twice each, or three of the $m_i$ share the same value. For the first case, bounding the indicator function by 1, we have
\[
\mathbb{E}\Big[\mathbb{E}\Big[\prod_{j=1}^2 b_d(X^d_{m_j-1,1}, Z^d_{m_j,1})^2\mathbb{1}_{(A^d_{m_j})^c}\,\Big|\,\mathcal{F}\Big]\Big] \le \mathbb{E}\Big[\prod_{j=1}^2 \mathbb{E}\big[b_d(\widetilde{X}^d_{m_j-1,1}, Z^d_{m_j,1})^2\,\big|\,\mathcal{F}\big]\Big] .
\]
Since, conditionally on $\mathcal{F}$, the random variables inside the expectation are Gaussians with bounded mean and variance 1, we have, using the same approach taken for (29),
\[
\mathbb{E}\Big[\prod_{j=1}^2 \mathbb{E}\big[b_d(\widetilde{X}^d_{m_j-1,1}, Z^d_{m_j,1})^2\,\big|\,\mathcal{F}\big]\Big] \le \Big(1 + \frac{2\sigma_d^2}{2^2}\Big)^2 \le C .
\]
The second case follows similarly:
\[
\Big|\mathbb{E}\Big[\mathbb{E}\Big[\prod_{j=1}^2 b_d(X^d_{m_j-1,1}, Z^d_{m_j,1})^{1+2\delta_{1,j}}\mathbb{1}_{(A^d_{m_j})^c}\,\Big|\,\mathcal{F}\Big]\Big]\Big| \le \mathbb{E}\Big[\mathbb{E}\Big[\prod_{j=1}^2 \big|b_d(\widetilde{X}^d_{m_j-1,1}, Z^d_{m_j,1})\big|^{1+2\delta_{1,j}}\,\Big|\,\mathcal{F}\Big]\Big] \le C ,
\]
where $\delta_{1,j}$ denotes a Dirac delta.
Therefore,
\[
\sum_{(m_1,\dots,m_4)\in I_2}\Big|\mathbb{E}\Big[\prod_{i=1}^4 b_d(X^d_{m_i-1,1}, Z^d_{m_i,1})\mathbb{1}_{(A^d_{m_i})^c}\Big]\Big| \le C\Big(\binom{4}{2} + \binom{4}{3}\Big)\binom{k_2-k_1}{2} \le C(k_2-k_1)^2 . \tag{38}
\]

(iv) If $(m_1, \dots, m_4) \in I_1$ (i.e. all the $m_i$ take the same value), we bound the indicator function by 1 and, using the same approach taken for (29), we find
\[
\mathbb{E}\big[b_d(X^d_{m_1-1,1}, Z^d_{m_1,1})^4\,\mathbb{1}_{(A^d_{m_1})^c}\big] \le C\Big(3 + \frac{2\sigma_d^4}{2^4}\Big) \le C ,
\]
since $\sigma_d = \ell d^{-\alpha}$ and $d \in \mathbb{N}$. Hence,
\[
\sum_{(m_1,\dots,m_4)\in I_1}\Big|\mathbb{E}\Big[\prod_{i=1}^4 b_d(X^d_{m_1-1,1}, Z^d_{m_1,1})\mathbb{1}_{(A^d_{m_i})^c}\Big]\Big| \le C\binom{k_2-k_1}{1} = C(k_2-k_1) . \tag{39}
\]
The result follows combining (36), (37), (38) and (39) in (30).

6.3 Proof of Proposition 2

We start by proving the following lemma.

Lemma 2. Let $\nu$ be a limit point of the sequence of laws $(\nu_d)_{d\ge 1}$ of $\{(L_t^d)_{t\ge 0} : d \in \mathbb{N}\}$. Then for any $t \ge 0$, the pushforward measure of $\nu$ by $W_t$ is $\pi^L(\mathrm{d}x) = \exp(-|x|)\,\mathrm{d}x/2$.

Proof. Using (25), we have
\[
\mathbb{E}\big[|L_t^d - X^d_{\lfloor d^{2\alpha}t\rfloor,1}|\big] \le \mathbb{E}\Big[\Big|(d^{2\alpha}t - \lfloor d^{2\alpha}t\rfloor)\Big(\sigma_d Z^d_{\lceil d^{2\alpha}t\rceil,1} - \frac{\sigma_d^2}{2}\operatorname{sgn}(X^d_{\lfloor d^{2\alpha}t\rfloor,1})\Big)\mathbb{1}_{\{|X^d_{\lfloor d^{2\alpha}t\rfloor,1}| > \sigma_d^{2m}r/2\}}\mathbb{1}_{A^d_{\lceil d^{2\alpha}t\rceil}}\Big|\Big] + \mathbb{E}\Big[\Big|(d^{2\alpha}t - \lfloor d^{2\alpha}t\rfloor)\Big(\sigma_d Z^d_{\lceil d^{2\alpha}t\rceil,1} - \frac{1}{\sigma_d^{2(m-1)}r} X^d_{\lfloor d^{2\alpha}t\rfloor,1}\Big)\mathbb{1}_{\{|X^d_{\lfloor d^{2\alpha}t\rfloor,1}| \le \sigma_d^{2m}r/2\}}\mathbb{1}_{A^d_{\lceil d^{2\alpha}t\rceil}}\Big|\Big]
\]
\[
\le (d^{2\alpha}t - \lfloor d^{2\alpha}t\rfloor)\Big(\sigma_d\,\mathbb{E}\big[|Z^d_{\lceil d^{2\alpha}t\rceil,1}|\big] + \frac{\sigma_d^2}{2}\mathbb{E}\big[|\operatorname{sgn}(X^d_{\lfloor d^{2\alpha}t\rfloor,1})|\big] + \frac{1}{\sigma_d^{2(m-1)}r}\mathbb{E}\big[|X^d_{\lfloor d^{2\alpha}t\rfloor,1}|\,\mathbb{1}_{\{|X^d_{\lfloor d^{2\alpha}t\rfloor,1}| \le \sigma_d^{2m}r/2\}}\big]\Big) \le (d^{2\alpha}t - \lfloor d^{2\alpha}t\rfloor)\Big(\frac{\ell}{d^\alpha}\mathbb{E}\big[(Z^d_{\lceil d^{2\alpha}t\rceil,1})^2\big]^{1/2} + \frac{\ell^2}{2d^{2\alpha}} + \frac{1}{\sigma_d^{2(m-1)}r}\cdot\frac{\sigma_d^{2m}r}{2}\Big) \le (d^{2\alpha}t - \lfloor d^{2\alpha}t\rfloor)\Big(\frac{\ell}{d^\alpha} + \frac{\ell^2}{2d^{2\alpha}} + \frac{\ell^2}{2d^{2\alpha}}\Big) \le \frac{C}{d^\alpha} ,
\]
where we used the Cauchy–Schwarz inequality and the fact that the moments of $Z^d_{\lceil d^{2\alpha}t\rceil,1}$ are bounded. The above guarantees that
\[
\lim_{d\to\infty}\mathbb{E}\big[|L_t^d - X^d_{\lfloor d^{2\alpha}t\rfloor,1}|\big] = 0 .
\]
As $(\nu_d)_{d\ge 1}$ converges weakly towards $\nu$, for any Lipschitz bounded function $\psi : \mathbb{R} \to \mathbb{R}$,
\[
\lim_{d\to\infty}\mathbb{E}\big[\psi(X^d_{\lfloor d^{2\alpha}t\rfloor,1})\big] = \lim_{d\to\infty}\mathbb{E}\big[\psi(L_t^d)\big] = \mathbb{E}^\nu[\psi(W_t)] .
\]
+The result follows since Xd +⌊d2αt⌋,1 is distributed according to πL(dx) = exp(−|x|)dx/2 for any t ≥ 0 +and d ∈ N. +We are now ready to prove Proposition 2: +Proof of Proposition 2. Let ν be a limit point of (νd)d≥1. +We start by showing that if for any +V ∈ C∞ +c (R, R), m ∈ N, any bounded and continuous mapping ρ : Rm → R and any 0 ≤ t1 ≤ · · · ≤ +tm ≤ s ≤ t, ν satisfies +Eν +�� +V (Wt) − V (Ws) − +� t +s +LV (Wu)du +� +ρ(Wt1, . . . , Wtm) +� += 0 , +(40) +then ν is a solution to the martingale problem associated with L. +Let Fs denote the σ-algebra generated by +{ρ(Wt1, . . . , Wtm) : m ∈ N, ρ : Rm → R bounded and continuous, and 0 ≤ t1 ≤ · · · ≤ tm ≤ s} . +Then, +Eν +� +V (Wt) − V (Ws) − +� t +s +LV (Wu)du +����Fs +� += 0 , +showing that the process +� +V (Wt) − V (W0) − +� t +0 +LV (Wu)du +� +t≥0 +is a martingale w.r.t. ν and the filtration (Ft)t≥0. +To prove (40), it is enough to show that for any V ∈ C∞ +c (R, R), m ∈ N and any bounded and +continuous mapping ρ : Rm → R and any 0 ≤ t1 ≤ · · · ≤ tm ≤ s ≤ t, the mapping +Ψs,t : w �−→ +� +V (wt) − V (ws) − +� t +s +LV (wu)du +� +ρ (wt1, . . . , wtm) , +is continuous on a ν-almost sure subset of C(R+, R). Let +W = {w ∈ C(R+, R) : wu ̸= 0 for almost any u ∈ [s, t]} . +Since w ∈ Wc if and only if +� t +s 1{0}(wu)du > 0, using Lemma 2 and the Fubini–Tonelli’s theorem, +Eν +�� t +s +1{0}(Wu)du +� += +� t +s +Eν � +1{0}(Wu) +� +du = +� t +s +πL({0})du = 0 , +26 + +and we have that ν(Wc) = 0. +Since w �→ wu is continuous for any u ≥ 0, so are w �→ V (wu) and w �→ ρ(wt1, . . . , wtm). +Thus, it is enough to prove that the mapping w �→ +� t +s LV (wu)du is continuous. +Let (wn)n≥0 +be a sequence in C(R+, R) that converges to w ∈ W in the uniform topology on compact sets. +Let u be such that wu ̸= 0, therefore, since the sgn function is continuous in a neighbourhood +of wu, limn→∞ LV (wn +u) = LV (wu), thus limn→∞ LV (wn +u) = LV (wu) for almost any u ∈ [s, t]. 
+Finally, using the boundedness of the sequence (LV (wn +u))n≥0 and Lebesgue’s dominated convergence +theorem, +lim +n→∞ +� t +s +LV (wn +u)du = +� t +s +LV (wu)du , +which proves that the mappings Ψs,t are continuous on W. +6.4 +Proof of Theorem 3 +Let us introduce, for any n ∈ N, Fd +n,1 = σ({Xd +k,1, 0 ≤ k ≤ n}), the σ-algebra generated by the first +components of {Xd +k | 0 ≤ k ≤ n}. We also introduce for any V ∈ C∞ +c (R, R) +M d +n(V ) = ℓ +dα +n−1 +� +k=0 +V ′(Xd +k,1) +× +� +bd +� +Xd +k,1, Zd +k+1,1 +� +1Ad +k+1 − E +� +bd +� +Xd +k,1, Zd +k+1,1 +� +1Ad +k+1 +���Fd +k,1 +�� +(41) ++ +ℓ2 +2d2α +n−1 +� +k=0 +V ′′(Xd +k,1) +× +� +bd +� +Xd +k,1, Zd +k+1,1 +�2 1Ad +k+1 − E +� +bd +� +Xd +k,1, Zd +k+1,1 +�2 1Ad +k+1 +���Fd +k,1 +�� +. +where bd is defined in (19). +The proof of Theorem 3 follows using the sufficient condition in Proposition 2, the tightness of +the sequence (νd)d≥1 established in Proposition 1 and Proposition 3 below. +Proof. Using Proposition 1, Proposition 2 and Proposition 3 below, it is enough to show that for +any V ∈ C∞ +c (R, R), m ≥ 1, any 0 ≤ t1 ≤ · · · ≤ tm ≤ s ≤ t and any bounded and continuous +mapping ρ : Rm → R, +lim +d→∞ E +�� +M d +⌈d2αt⌉(V ) − M d +⌈d2αs⌉(V ) +� +ρ(Ld +t1, ..., Ld +tm) +� += 0 , +where, for any n ≥ 1, M d +n(V ) is given by (41). However, this is straightforwardly obtained by taking +successively the conditional expectations with respect to Fd +k,1 for k = ⌈d2αt⌉, . . . , ⌈d2αs⌉. +Proposition 3. For any 0 ≤ s ≤ t, V ∈ Cc(R, R) we have +lim +d→∞ E +�����V +� +Ld +t +� +− V +� +Ld +s +� +− +� t +s +LV +� +Ld +u +� +du − +� +M d +⌈d2αt⌉ (V ) − M d +⌈d2αs⌉ (V ) +����� +� += 0 , +(42) +where (Ld +t )t≥0 is defined in (11). +27 + +Proof. The process (Ld +t )t≥0 is piecewise linear, thus it has finite variation. For any τ ≥ 0, we define +dLd +τ = d2ασdbd +� +Xd +⌊d2ατ⌋,1, Zd +⌈d2ατ⌉,1) +� +1Ad +⌈d2ατ⌉dτ . 
+Thus, recalling that σd = ℓd−α and using the fundamental theorem of integral calculus for piecewise +C1 maps +V +� +Ld +t +� +− V +� +Ld +s +� += ℓdα +� t +s +V ′ � +Ld +τ +� +bd +� +Xd +⌊d2ατ⌋,1, Zd +⌈d2ατ⌉,1 +� +1Ad +⌈d2ατ⌉dτ , +(43) +where bd is defined in (19). A Taylor expansion of V ′ with Lagrange remainder about Xd +⌊d2ατ⌋,1 +gives +V ′ � +Ld +τ +� += V ′ � +Xd +⌊d2ατ⌋,1 +� ++ ℓ +dα +� +d2ατ − ⌊d2ατ⌋ +� +V ′′ � +Xd +⌊d2ατ⌋,1 +� +bd +� +Xd +⌊d2ατ⌋,1, Zd +⌈d2ατ⌉,1 +� +1Ad +⌈d2ατ⌉ ++ +ℓ2 +2d2α +� +d2ατ − ⌊d2ατ⌋ +�2 V (3) (χτ) bd +� +Xd +⌊d2ατ⌋,1, Zd +⌈d2ατ⌉,1 +� +1Ad +⌈d2ατ⌉ , +where for any point τ ∈ [s, t], there exists χτ ∈ [Xd +⌊d2ατ⌋,1, Y d +τ,1]. Substituting the above into (43) +we obtain +V +� +Ld +t +� +− V +� +Ld +s +� += ℓdα +� t +s +V ′ � +Xd +⌊d2ατ⌋,1 +� +bd +� +Xd +⌊d2ατ⌋,1, Zd +⌈d2ατ⌉,1 +� +1Ad +⌈d2ατ⌉dτ ++ ℓ2 +� t +s +� +d2ατ − ⌊d2ατ⌋ +� +V ′′ � +Xd +⌊d2ατ⌋,1 +� +bd +� +Xd +⌊d2ατ⌋,1, Zd +⌈d2ατ⌉,1 +�2 +1Ad +⌈d2ατ⌉dτ +(44) ++ ℓ3 +2dα +� t +s +� +d2ατ − ⌊d2ατ⌋ +�2 V (3) (χτ) bd +� +Xd +⌊d2ατ⌋,1, Zd +⌈d2ατ⌉,1 +�3 +1Ad +⌈d2ατ⌉dτ . +Since V (3) is bounded, using Fubini-Tonelli’s theorem and recalling the definition of bd in (19), we +have that +ℓ3 +2dα E +����� +� t +s +� +d2ατ − ⌊d2ατ⌋ +�2 V (3) (χτ) bd +� +Xd +⌊d2ατ⌋,1, Zd +⌈d2ατ⌉,1 +�3 +1Ad +⌈d2ατ⌉dτ +���� +� +≤ C ℓ3 +2dα +� t +s +E +�����Zd +⌈d2ατ⌉,1 +��� + +ℓ +2dα +�3� +dτ −→ +d→∞ 0 , +since the moments of Zd +⌈d2ατ⌉,1 are bounded. +For the second term in (44), we observe that most of the integrand is piecewise constant since +the process Xd +⌊d2ατ⌋,1 evolves in discrete time. Then, for any integer d2αs ≤ k ≤ d2αt − 1, +� (k+1)/d2α +k/d2α +� +d2ατ − ⌊d2ατ⌋ +� +V ′′ � +Xd +⌊d2ατ⌋,1 +� +bd +� +Xd +⌊d2ατ⌋,1, Zd +⌈d2ατ⌉,1 +�2 +1Ad +⌈d2ατ⌉dτ += +1 +2d2α V ′′ � +Xd +k,1 +� +bd +� +Xd +k,1, Zd +k+1,1 +�2 1Ad +k+1 += 1 +2 +� (k+1)/d2α +k/d2α +V ′′ � +Xd +⌊d2ατ⌋,1 +� +bd +� +Xd +⌊d2ατ⌋,1, Zd +⌈d2ατ⌉,1 +�2 +1Ad +⌈d2ατ⌉dτ . 
Thus, we can write
I = ∫_s^t (d2ατ − ⌊d2ατ⌋) V′′(Xd⌊d2ατ⌋,1) bd(Xd⌊d2ατ⌋,1, Zd⌈d2ατ⌉,1)² 1Ad⌈d2ατ⌉ dτ = I1 + I2 ,
where we define
I2 = (1/2) ∫_s^t V′′(Xd⌊d2ατ⌋,1) bd(Xd⌊d2ατ⌋,1, Zd⌈d2ατ⌉,1)² 1Ad⌈d2ατ⌉ dτ ,
and
I1 = ( ∫_s^{⌈d2αs⌉/d2α} + ∫_{⌊d2αt⌋/d2α}^t ) (d2ατ − ⌊d2ατ⌋ − 1/2) V′′(Xd⌊d2ατ⌋,1) bd(Xd⌊d2ατ⌋,1, Zd⌈d2ατ⌉,1)² 1Ad⌈d2ατ⌉ dτ .
In addition, we have
I1 = (1/2d2α) (d2αs − ⌊d2αs⌋)(⌈d2αs⌉ − d2αs) V′′(Xd⌊d2αs⌋,1) bd(Xd⌊d2αs⌋,1, Zd⌈d2αs⌉,1)² 1Ad⌈d2αs⌉
+ (1/2d2α) (d2αt − ⌊d2αt⌋)(⌈d2αt⌉ − d2αt) V′′(Xd⌊d2αt⌋,1) bd(Xd⌊d2αt⌋,1, Zd⌈d2αt⌉,1)² 1Ad⌈d2αt⌉ ,
and, since V′′ and the moments of Zd⌈d2αt⌉,1 are bounded, limd→∞ E[|I1|] = 0. Thus,
limd→∞ E[|V(Ldt) − V(Lds) − Is,t|] = 0 ,
where
Is,t = ∫_s^t [ ℓdα V′(Xd⌊d2ατ⌋,1) bd(Xd⌊d2ατ⌋,1, Zd⌈d2ατ⌉,1) + (ℓ²/2) V′′(Xd⌊d2ατ⌋,1) bd(Xd⌊d2ατ⌋,1, Zd⌈d2ατ⌉,1)² ] 1Ad⌈d2ατ⌉ dτ . (45)
Next, we use (18) and write
∫_s^t LV(Ldτ) dτ = ∫_s^t (hL(ℓ)/2) [ V′′(Xd⌊d2ατ⌋,1) − sgn(Xd⌊d2ατ⌋,1) V′(Xd⌊d2ατ⌋,1) ] dτ − T d3 , (46)
where we define
T d3 = ∫_s^t [ LV(Xd⌊d2ατ⌋,1) − LV(Ldτ) ] dτ .
+Finally, we write the difference M d +⌈d2αt⌉(V ) − M d +⌈d2αs⌉(V ) as the integral of a piecewise constant +29 + +function +M d +⌈d2αt⌉(V ) − M d +⌈d2αs⌉(V ) = Is,t +(47) +− +� t +s +� +ℓdαV ′ � +Xd +⌊d2ατ⌋,1 +� +E +� +bd +� +Xd +⌊d2ατ⌋,1, Zd +⌈d2ατ⌉,1 +� +1Ad +⌈d2ατ⌉ +���Fd +⌊d2ατ⌋,1 +� ++ℓ2 +2 V ′′ � +Xd +⌊d2ατ⌋,1 +� +E +� +bd +� +Xd +⌊d2ατ⌋,1, Zd +⌈d2ατ⌉,1 +�2 +1Ad +⌈d2ατ⌉ +����Fd +⌊d2ατ⌋,1 +�� +dτ +− T d +4 − T d +5 , +where T d +4 and T d +5 account for the difference between the sum in (41) and the integral, and are +defined as +T d +4 = − ℓ +dα +� +⌈d2αt⌉ − d2αt +� +V ′ � +Xd +⌊d2αt⌋,1 +� � +bd +� +Xd +⌊d2αt⌋,1, Zd +⌈d2αt⌉,1 +� +1Ad +⌈d2αt⌉ +−E +� +bd +� +Xd +⌊d2αt⌋,1, Zd +⌈d2αt⌉,1 +� +1Ad +⌈d2αt⌉ +���Fd +⌊d2αt⌋,1 +�� +− +ℓ2 +2d2α +� +⌈d2αt⌉ − d2αt +� +V ′′ � +Xd +⌊d2αt⌋,1 +� � +bd +� +Xd +⌊d2αt⌋,1, Zd +⌈d2αt⌉,1 +�2 +1Ad +⌈d2αt⌉ +−E +� +bd +� +Xd +⌊d2αt⌋,1, Zd +⌈d2αt⌉,1 +�2 +1Ad +⌈d2αt⌉ +����Fd +⌊d2αt⌋,1 +�� +, +and T d +5 = −T d +4 with t substituted by s. +Putting (45), (46) and (47) together we obtain +Is,t − +� t +s +LV +� +Ld +τ +� +dτ − +� +M d +⌈d2αt⌉(V ) − M d +⌈d2αs⌉(V ) +� += T d +1 + T d +2 + T d +3 + T d +4 + T d +5 , +where T d +1 takes into account all the terms involving V ′(Xd +⌊d2ατ⌋,1), and T d +2 the terms involving +V ′′(Xd +⌊d2ατ⌋,1): +T d +1 = +� t +s +V ′ � +Xd +⌊d2ατ⌋,1 +� +× +� +ℓdαE +� +bd +� +Xd +⌊d2ατ⌋,1, Zd +⌈d2ατ⌉,1 +� +1Ad +⌈d2ατ⌉ +���Fd +⌊d2ατ⌋,1 +� ++ hL(ℓ) +2 +sgn +� +Xd +⌊d2ατ⌋,1 +�� +dτ , +T d +2 = +� t +s +V ′′ � +Xd +⌊d2ατ⌋,1 +� +× +�ℓ2 +2 E +� +bd +� +Xd +⌊d2ατ⌋,1, Zd +⌈d2ατ⌉,1 +�2 +1Ad +⌈d2ατ⌉ +����Fd +⌊d2ατ⌋,1 +� +− hL(ℓ) +2 +� +dτ . +To obtain (42) it is then sufficient to prove that for any 1 ≤ i ≤ 5, limd→∞ E +���T d +i +��� += 0. +Since V ′, V ′′ are bounded and bd is bounded in expectation because the moments of Zd +⌈d2ατ⌉,1 +are bounded, it is easy to show that limd→∞ E +���T d +i +��� += 0 for i = 4, 5. 
For T d3, we write T d3 = hL(ℓ)(T d3,1 − T d3,2)/2, where
T d3,1 = ∫_s^t [ V′′(Xd⌊d2ατ⌋,1) − V′′(Ldτ) ] dτ ,
T d3,2 = ∫_s^t [ sgn(Xd⌊d2ατ⌋,1) V′(Xd⌊d2ατ⌋,1) − sgn(Ldτ) V′(Ldτ) ] dτ .
Using Fubini-Tonelli's theorem, the convergence of Xd⌊d2ατ⌋,1 to Ldτ in Lemma 2 and Lebesgue's dominated convergence theorem, we obtain
E[|T d3,1|] ≤ ∫_s^t E[|V′′(Xd⌊d2ατ⌋,1) − V′′(Ldτ)|] dτ −→d→∞ 0 .
We can further decompose T d3,2 as
T d3,2 = ∫_s^t [ sgn(Xd⌊d2ατ⌋,1) − sgn(Ldτ) ] V′(Xd⌊d2ατ⌋,1) dτ + ∫_s^t sgn(Ldτ) [ V′(Xd⌊d2ατ⌋,1) − V′(Ldτ) ] dτ .
Proceeding as for T d3,1, it is easy to show that the second integral converges to 0 as d → ∞. We then bound the first integral by
E[| ∫_s^t [ sgn(Xd⌊d2ατ⌋,1) − sgn(Ldτ) ] V′(Xd⌊d2ατ⌋,1) dτ |] ≤ C ∫_s^t E[|sgn(Xd⌊d2ατ⌋,1) − sgn(Ldτ)|] dτ .
However, since {sgn(Xd⌊d2ατ⌋,1) ≠ sgn(Ldτ)} ⊂ {sgn(Xd⌊d2ατ⌋,1) ≠ sgn(Xd⌈d2ατ⌉,1)}, using Lemma 4 in Appendix D.3 we have that
E[|sgn(Xd⌊d2ατ⌋,1) − sgn(Ldτ)|] = 2E[1{sgn(Xd⌊d2ατ⌋,1) ≠ sgn(Ldτ)}] ≤ 2E[1{sgn(Xd⌊d2ατ⌋,1) ≠ sgn(Xd⌈d2ατ⌉,1)}] −→d→∞ 0 .
The above and the dominated convergence theorem show that
E[| ∫_s^t [ sgn(Xd⌊d2ατ⌋,1) − sgn(Ldτ) ] V′(Xd⌊d2ατ⌋,1) dτ |] −→d→∞ 0 .
+Consider then T d +1 , recalling that the derivatives of V are bounded, we have +E +���T d +1 +��� +≤ +� t +s +CE +����ℓdαE +� +bd +� +Xd +⌊d2ατ⌋,1, Zd +⌈d2ατ⌉,1 +� +1Ad +⌈d2ατ⌉ +���Fd +⌊d2ατ⌋,1 +� ++ hL(ℓ) +2 +sgn +� +Xd +⌊d2ατ⌋,1 +����� +� +dτ +≤ +� t +s +C +� +E +����D(1) +1,τ +��� +� ++ E +����D(1) +2,τ +��� +�� +dτ , +31 + +where we define +D(1) +1,τ = ℓdαE +� +Zd +⌈d2ατ⌉,11Ad +⌈d2ατ⌉ +���Fd +⌊d2ατ⌋,1 +� +, +D(1) +2,τ = hL(ℓ) +2 +sgn +� +Xd +⌊d2ατ⌋,1 +� +− ℓdα +�σd +2 sgn(Xd +⌊d2ατ⌋,1)1|Xd +⌊d2ατ⌋,1|>σ2m +d +r/2 + +1 +σ2m−1 +d +rXd +⌊d2ατ⌋,11|Xd +⌊d2ατ⌋,1|≤σ2m +d +r/2 +� +× E +� +1Ad +⌈d2ατ⌉ +���Fd +⌊d2ατ⌋,1 +� +. +Let us start with D(1) +1,τ: +D(1) +1,τ = ℓdαE +� +Zd +⌈d2ατ⌉,1 +� +1 ∧ exp +� d +� +i=1 +φd +� +Xd +⌊d2ατ⌋,i, Zd +⌈d2ατ⌉,i +��������Fd +⌊d2ατ⌋,1 +� +, +where φd is given in (20). Then, by independence of the components of Zd +⌈d2ατ⌉, we have +E +� +Zd +⌈d2ατ⌉,1 +� +1 ∧ exp +� d +� +i=2 +φd +� +Xd +⌊d2ατ⌋,i, Zd +⌈d2ατ⌉,i +��������Fd +⌊d2ατ⌋,1 +� += E +� +Zd +⌈d2ατ⌉,1 +� +E +� +1 ∧ exp +� d +� +i=2 +φd +� +Xd +⌊d2ατ⌋,i, Zd +⌈d2ατ⌉,i +�������Fd +⌊d2ατ⌋,1 +� += 0 . +This allows us to write +E +� +|D(1) +1,τ| +� +≤ ℓdαE +� +|Zd +⌈d2ατ⌉,1| +�����1 ∧ exp +� d +� +i=1 +φd +� +Xd +⌊d2ατ⌋,i, Zd +⌈d2ατ⌉,i +�� +− 1 ∧ exp +� d +� +i=2 +φd +� +Xd +⌊d2ατ⌋,i, Zd +⌈d2ατ⌉,i +������� +� +. +However, x �→ 1 ∧ exp(x) is a 1-Lipschitz function, thus +E +� +|D(1) +1,τ| +� +≤ ℓdαE +� +|Zd +⌈d2ατ⌉,1| +���φd +� +Xd +⌊d2ατ⌋,1, Zd +⌈d2ατ⌉,1 +���� +� +, +and D(1) +1,τ → 0 as d → ∞ by Lemma 5 in Appendix D.3. +For D(1) +2,τ, we observe that +− σd +2 1|Xd +⌊d2ατ⌋,1|≤σ2m +d +r/2 ≤ +1 +σ2m−1rXd +⌊d2ατ⌋,11|Xd +⌊d2ατ⌋,1|≤σ2m +d +r/2 ≤ σd +2 1|Xd +⌊d2ατ⌋,1|≤σ2m +d +r/2 . 
+(48) +Distinguishing between Xd +⌊d2ατ⌋,1 < 0 and Xd +⌊d2ατ⌋,1 ≥ 0, it follows that +|D(1) +2,τ| ≤ +���sgn +� +Xd +⌊d2ατ⌋,1 +���� +× +���� +hL(ℓ) +2 +− ℓdα �σd +2 1|Xd +⌊d2ατ⌋,1|>σ2m +d +r/2 + σd +2 1|Xd +⌊d2ατ⌋,1|≤σ2m +d +r/2 +� +E +� +1Ad +⌈d2ατ⌉ +���Fd +⌊d2ατ⌋,1 +����� +≤ 1 +2 +���hL(ℓ) − ℓ2E +� +1Ad +⌈d2ατ⌉ +���Fd +⌊d2αr⌋,1⌋ +���� , +32 + +where we recall that σd = ℓd−α with α = 1/3. Using the triangle inequality we obtain +2E +� +|D(1) +2,τ| +� +≤ E +������hL(ℓ) − ℓ2E +� +1 ∧ exp +� d +� +i=1 +φd +� +Xd +⌊d2ατ⌋,i, Zd +⌈d2ατ⌉,i +�������Fd +⌊d2ατ⌋,1 +������ +� +≤ E +������hL(ℓ) − ℓ2E +� +1 ∧ exp +� d +� +i=2 +φd +� +Xd +⌊d2ατ⌋,i, Zd +⌈d2ατ⌉,i +�������Fd +⌊d2ατ⌋,1 +������ +� ++ ℓ2E +������1 ∧ exp +� d +� +i=2 +φd +� +Xd +⌊d2ατ⌋,i, Zd +⌈d2ατ⌉,i +�� +− 1 ∧ exp +� d +� +i=1 +φd +� +Xd +⌊d2ατ⌋,i, Zd +⌈d2ατ⌉,i +������� +� +, +where we used Jensen’s inequality to remove the conditional expectation in the last term. Recalling +that x �→ 1 ∧ exp(x) is 1-Lipschitz, we can then bound the second term +ℓ2E +������1 ∧ exp +� d +� +i=2 +φd +� +Xd +⌊d2ατ⌋,i, Zd +⌈d2ατ⌉,i +�� +− 1 ∧ exp +� d +� +i=1 +φd +� +Xd +⌊d2ατ⌋,i, Zd +⌈d2ατ⌉,i +������� +� +≤ ℓ2E +����φd +� +Xd +⌊d2ατ⌋,1, Zd +⌈d2ατ⌉,1 +���� +� +, +(49) +≤ ℓ2E +� +φd +� +Xd +⌊d2ατ⌋,1, Zd +⌈d2ατ⌉,1 +�2�1/2 +, +where the final expectation converges to zero as d → ∞ by Proposition 17. For the remaining term +in D(1) +2,τ, since (Xd +⌊d2ατ⌋,i, Zd +⌊d2ατ⌋,i)2≤i≤n is independent of Fd +⌊d2ατ⌋,1, we have +ℓ2E +� +1 ∧ exp +� d +� +i=2 +φd +� +Xd +⌊d2ατ⌋,i, Zd +⌈d2ατ⌉,i +�������Fd +⌊d2ατ⌋,1 +� += ℓ2E +� +1 ∧ exp +� d +� +i=2 +φd +� +Xd +⌊d2ατ⌋,i, Zd +⌈d2ατ⌉,i +��� +, +and, using again the fact that x �→ 1 ∧ exp(x) is 1-Lipschitz, we have +�����hL(ℓ) − ℓ2E +� +1 ∧ exp +� d +� +i=2 +φd +� +Xd +⌊d2ατ⌋,i, Zd +⌈d2ατ⌉,i +�������� +≤ +�����hL(ℓ) − ℓ2E +� +1 ∧ exp +� d +� +i=1 +φd +� +Xd +⌊d2ατ⌋,i, Zd +⌈d2ατ⌉,i +�������� ++ ℓ2E +����φd +� +Xd +⌊d2ατ⌋,1, Zd +⌈d2ατ⌉,1 +���� +� +. 
+The last term goes to 0 as shown in (49), and, as hL(ℓ) = ℓ2aL(ℓ), with +aL(ℓ) = lim +d→∞ E +� +1 ∧ exp +� d +� +i=1 +φd,i +�� +, +33 + +by Theorem 2, we obtain +lim +d→∞ +�����hL(ℓ) − ℓ2E +� +1 ∧ exp +� d +� +i=2 +φd +� +Xd +⌊d2ατ⌋,i, Zd +⌈d2ατ⌉,i +������� +� += 0 , +showing that D(1) +2,τ → 0 as d → ∞. To obtain convergence of T d +1 , we observe that for any τ ∈ +[s, t], D(1) +1,τ and D(1) +2,τ follow the same distributions as D(1) +1,s and D(1) +2,s, since for any k ∈ N, Xd +k has +distribution πL +d . Therefore, the convergence towards zero of E[|D(1) +1,τ|] and E[|D(1) +2,τ|] is uniform for +τ ∈ [s, t], which gives us T d +1 → 0 as d → ∞. +Finally, consider T d +2 . Using analogous arguments to those used for T d +1 , we obtain +E +� +|T d +2 | +� +≤ C +� t +s +ℓ2 +2 E +�����E +� +bd +� +Xd +⌊d2ατ⌋,1, Zd +⌈d2ατ⌉,1 +�2 +1Ad +⌈d2ατ⌉ +����Fd +⌊d2ατ⌋,1 +� +− aL(ℓ, r) +���� +� +dτ +≤ C +� t +s +ℓ2 +2 +� +E +� +|D(2) +1,τ| +� ++ E +� +|D(2) +2,τ| +� +E +� +|D(2) +3,τ| +�� +dτ , +where we define +D(2) +1,τ = E +�� +Zd +⌈d2ατ⌉,1 +�2 +1Ad +⌈d2ατ⌉ +����Fd +⌊d2ατ⌋,1 +� +− aL(ℓ, r) , +D(2) +2,τ = +�σd +2 sgn(Xd +⌊d2ατ⌋,1)1|Xd +⌊d2ατ⌋,1|>σ2m +d +r/2 + +1 +σ2m−1 +d +rXd +⌊d2ατ⌋,11|Xd +⌊d2ατ⌋,1|≤σ2m +d +r/2 +�2 +× E +� +1Ad +⌈d2ατ⌉ +���Fd +⌊d2ατ⌋,1 +� +, +D(2) +3,τ = 2 +�σd +2 sgn(Xd +⌊d2ατ⌋,1)1|Xd +⌊d2ατ⌋,1|>σ2m +d +r/2 + +1 +σ2m−1 +d +rXd +⌊d2ατ⌋,11|Xd +⌊d2ατ⌋,1|≤σ2m +d +r/2 +� +× E +� +Zd +⌈d2ατ⌉,11Ad +⌈d2ατ⌉ +���Fd +⌊d2ατ⌋,1 +� +. +Using (48), Cauchy-Schwarz’s inequality and the fact that the moments of Zd +⌈d2ατ⌉,1 are bounded +we have +E +� +|D(2) +2,τ| +� +≤ σ2 +d +4 +−→ +d→∞ 0 , +E +� +|D(2) +3,τ| +� +≤ Cσd −→ +d→∞ 0 , +since σd = ℓd−α with α = 1/3. 
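The bounds on D(2)2,τ and D(2)3,τ above rest on inequality (48): the Moreau-Yosida drift of the Laplace potential, divided by σd, is uniformly bounded by σd/2. This is immediate from soft-thresholding: for g(u) = |u| the proximity map is prox_λ(x) = sgn(x) max(|x| − λ, 0), so (g^λ)′(x) = (x − prox_λ(x))/λ equals sgn(x) when |x| > λ and x/λ when |x| ≤ λ, and is therefore bounded by 1 in absolute value. A minimal numerical sketch (the values of σ, m and r below are illustrative assumptions, not choices made in the paper):

```python
import numpy as np

def prox_abs(x, lam):
    # Proximity map of g(u) = |u| at parameter lam: soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def my_grad_abs(x, lam):
    # Gradient of the Moreau-Yosida envelope of |.|:
    # (g^lam)'(x) = (x - prox_lam(x)) / lam,
    # i.e. sgn(x) when |x| > lam and x / lam when |x| <= lam.
    return (x - prox_abs(x, lam)) / lam

sigma, m, r = 0.2, 0.5, 1.0              # illustrative values only
lam = sigma ** (2 * m) * r / 2.0         # lambda_d = sigma_d^{2m} r / 2
x = np.linspace(-1.0, 1.0, 2001)
# Drift of the proposal divided by sigma, as it appears in D^{(1)}_{2,tau};
# inequality (48) says this never exceeds sigma / 2 in absolute value.
scaled_drift = (sigma / 2.0) * my_grad_abs(x, lam)
```

Since |(g^λ)′| ≤ 1 everywhere, multiplying by σd/2 gives exactly the sandwich bound (48).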
The remaining term, D(2)1,τ, is bounded similarly to D(1)2,τ: using the fact that x ↦ 1 ∧ exp(x) is 1-Lipschitz, we have
E[|D(2)1,τ|] ≤ E[| E[ (Zd⌈d2ατ⌉,1)² (1 ∧ exp(∑di=2 φd(Xd⌊d2ατ⌋,i, Zd⌈d2ατ⌉,i))) | Fd⌊d2ατ⌋,1 ] − aL(ℓ, r) |]
+ E[ (Zd⌈d2ατ⌉,1)² |φd(Xd⌊d2ατ⌋,1, Zd⌈d2ατ⌉,1)| ] .
The second expectation is bounded as (49) using Cauchy-Schwarz's inequality and Proposition 17. For the first expectation, we use the conditional independence of the components of Zd⌈d2ατ⌉ and write
E[ (Zd⌈d2ατ⌉,1)² (1 ∧ exp(∑di=2 φd(Xd⌊d2ατ⌋,i, Zd⌈d2ατ⌉,i))) | Fd⌊d2ατ⌋,1 ]
= E[ (Zd⌈d2ατ⌉,1)² ] E[ 1 ∧ exp(∑di=2 φd(Xd⌊d2ατ⌋,i, Zd⌈d2ατ⌉,i)) ]
= E[ 1 ∧ exp(∑di=2 φd(Xd⌊d2ατ⌋,i, Zd⌈d2ατ⌉,i)) ] .
It follows that E[|D(2)1,τ|] → 0 as d → ∞ since, by Theorem 2,
| E[ 1 ∧ exp(∑di=2 φd(Xd⌊d2ατ⌋,i, Zd⌈d2ατ⌉,i)) ] − aL(ℓ, r) | → 0 .
Combining the results for T di, i = 1, . . . , 5 we obtain the result.
Acknowledgments
F.R.C. and G.O.R. acknowledge support from the EPSRC (grant # EP/R034710/1). G.O.R. acknowledges further support from the EPSRC (grant # EP/R018561/1) and the Alan Turing Institute. A.D. acknowledges support from the Lagrange Mathematics and Computing Research Center. The authors would like to thank Éric Moulines for helpful discussions.
For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
References
[1] S. Agrawal, D. Vats, K. Łatuszyński, and G. O. Roberts. Optimal scaling of MCMC beyond Metropolis. arXiv preprint arXiv:2104.02020, 2021.
[2] Y. F. Atchadé. A Moreau-Yosida approximation scheme for a class of high-dimensional posterior distributions. arXiv preprint arXiv:1505.07072, 2015.
[3] H. H. Bauschke and P. L. Combettes.
Convex analysis and monotone operator theory in Hilbert spaces, volume 408. Springer, 2011.
[4] M. Bédard and J. S. Rosenthal. Optimal scaling of Metropolis algorithms: Heading toward general target distributions. Canadian Journal of Statistics, 36(4):483–503, 2008.
[5] A. Beskos, G. Roberts, and A. Stuart. Optimal scalings for local Metropolis–Hastings chains on nonproduct targets in high dimensions. The Annals of Applied Probability, 19(3):863–898, 2009.
[6] J. Bierkens, P. Fearnhead, and G. Roberts. The zig-zag process and super-efficient sampling for Bayesian analysis of big data. The Annals of Statistics, 47(3):1288–1320, 2019.
[7] N. Bou-Rabee and M. Hairer. Nonasymptotic mixing of the MALA algorithm. IMA Journal of Numerical Analysis, 33(1):80–110, 2013.
[8] A. Bouchard-Côté, S. J. Vollmer, and A. Doucet. The bouncy particle sampler: A nonreversible rejection-free Markov chain Monte Carlo method. Journal of the American Statistical Association, 113(522):855–867, 2018.
[9] S. Brooks, A. Gelman, G. Jones, and X.-L. Meng. Handbook of Markov chain Monte Carlo. CRC Press, 2011.
[10] O. F. Christensen, G. O. Roberts, and J. S. Rosenthal. Scaling limits for the transient phase of local Metropolis–Hastings algorithms. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):253–268, 2005.
[11] P. L. Combettes and J.-C. Pesquet. Proximal splitting methods in signal processing. In H. H. Bauschke, R. S. Burachik, P. L. Combettes, V. Elser, D. R. Luke, and H. Wolkowicz, editors, Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pages 185–212. Springer, 2011.
[12] A. Durmus, S. Le Corff, E. Moulines, and G. O. Roberts. Optimal scaling of the random walk Metropolis algorithm under Lp mean differentiability. Journal of Applied Probability, 54(4):1233–1260, 2017.
[13] A. Durmus and E. Moulines. On the geometric convergence for MALA under verifiable conditions.
arXiv preprint arXiv:2201.01951, 2022.
[14] A. Durmus, E. Moulines, and M. Pereyra. Efficient Bayesian computation by proximal Markov chain Monte Carlo: when Langevin meets Moreau. SIAM Journal on Imaging Sciences, 11(1):473–506, 2018.
[15] R. Durrett. Probability: Theory and Examples, volume 49. Cambridge University Press, 2019.
[16] R. Dwivedi, Y. Chen, M. J. Wainwright, and B. Yu. Log-concave sampling: Metropolis-Hastings algorithms are fast! In S. Bubeck, V. Perchet, and P. Rigollet, editors, Proceedings of the 31st Conference On Learning Theory, volume 75 of Proceedings of Machine Learning Research, pages 793–797. PMLR, 06–09 Jul 2018.
[17] S. N. Ethier and T. G. Kurtz. Markov processes: characterization and convergence, volume 282. John Wiley & Sons, 2009.
[18] A. Gelman, G. O. Roberts, and W. Gilks. Efficient Metropolis jumping rules. Bayesian Statistics, 1996.
[19] J. V. Goldman, T. Sell, and S. S. Singh. Gradient-based Markov chain Monte Carlo for Bayesian inference with non-differentiable priors. Journal of the American Statistical Association, pages 1–12, 2021.
[20] U. Grenander and M. I. Miller. Representations of knowledge in complex systems. Journal of the Royal Statistical Society: Series B (Methodological), 56(4):549–581, 1994.
[21] S. F. Jarner and G. O. Roberts. Convergence of heavy-tailed Monte Carlo Markov chain algorithms. Scandinavian Journal of Statistics, 34(4):781–815, 2007.
[22] B. Jourdain, T. Lelièvre, and B. Miasojedow. Optimal scaling for the transient phase of Metropolis Hastings algorithms: the longtime behavior. Bernoulli, 20(4):1930–1978, 2014.
[23] O. Kallenberg. Foundations of Modern Probability. Springer, 2021.
[24] S. Livingstone and G. Zanella. The Barker proposal: Combining robustness and efficiency in gradient-based MCMC. Journal of the Royal Statistical Society Series B, 84(2):496–523, 2022.
[25] Y.-A. Ma, T. Chen, and E. Fox. A complete recipe for stochastic gradient MCMC.
Advances in Neural Information Processing Systems, 28, 2015.
[26] K. L. Mengersen and R. L. Tweedie. Rates of convergence of the Hastings and Metropolis algorithms. Annals of Statistics, 24(1):101–121, 1996.
[27] P. Neal and G. Roberts. Optimal scaling for random walk Metropolis on spherically constrained target densities. Methodology and Computing in Applied Probability, 10(2):277–297, 2008.
[28] P. Neal, G. Roberts, W. K. Yuen, et al. Optimal scaling of random walk Metropolis algorithms with discontinuous target densities. Annals of Applied Probability, 22(5):1880–1927, 2012.
[29] N. Parikh and S. Boyd. Proximal algorithms. Foundations and Trends in Optimization, 1(3):127–239, 2014.
[30] M. Pereyra. Proximal Markov chain Monte Carlo algorithms. Statistics and Computing, 26(4):745–760, 2016.
[31] N. S. Pillai. Optimal scaling for the proximal Langevin algorithm in high dimensions. arXiv preprint arXiv:2204.10793, 2022.
[32] N. S. Pillai, A. M. Stuart, and A. H. Thiéry. Optimal scaling and diffusion limits for the Langevin algorithm in high dimensions. The Annals of Applied Probability, 22(6):2320–2356, 2012.
[33] G. O. Roberts, A. Gelman, and W. R. Gilks. Weak convergence and optimal scaling of random walk Metropolis algorithms. Annals of Applied Probability, 7(1):110–120, 1997.
[34] G. O. Roberts and J. S. Rosenthal. Optimal scaling of discrete approximations to Langevin diffusions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 60(1):255–268, 1998.
[35] G. O. Roberts, J. S. Rosenthal, et al. Optimal scaling for various Metropolis-Hastings algorithms. Statistical Science, 16(4):351–367, 2001.
[36] G. O. Roberts and R. L. Tweedie. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, 2(4):341–363, 1996.
[37] G. O. Roberts and R. L. Tweedie. Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms.
Biometrika, 83(1):95–110, 1996.
[38] R. T. Rockafellar and R. J.-B. Wets. Variational Analysis, volume 317 of Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer, Berlin, Heidelberg, 1998.
[39] P. J. Rossky, J. D. Doll, and H. L. Friedman. Brownian dynamics as smart Monte Carlo simulation. The Journal of Chemical Physics, 69(10):4628–4633, 1978.
[40] C. Sherlock, G. Roberts, et al. Optimal scaling of the random walk Metropolis on elliptically symmetric unimodal targets. Bernoulli, 15(3):774–798, 2009.
[41] A. N. Shiryaev. Probability, volume 25. Springer, 1996.
[42] D. W. Stroock and S. Varadhan. Multidimensional Diffusion Processes. Springer, 1979.
[43] M. Vono, N. Dobigeon, and P. Chainais. Bayesian image restoration under Poisson noise and log-concave prior. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1712–1716. IEEE, 2019.
[44] J. Vorstrup Goldman, T. Sell, and S. S. Singh. Gradient-based Markov chain Monte Carlo for Bayesian inference with non-differentiable priors. Journal of the American Statistical Association, 0(0):1–12, 2021.
[45] J. Yang, G. O. Roberts, and J. S. Rosenthal. Optimal scaling of random-walk Metropolis algorithms on general target distributions. Stochastic Processes and their Applications, 130(10):6094–6132, 2020.
[46] X. Zhou, E. C. Chi, and H. Zhou. Proximal MCMC for Bayesian inference of constrained and regularized estimation. arXiv preprint arXiv:2205.07378, 2022.
A Proof of Theorem 1
The proof of Theorem 1 follows that of [34, Theorem 1, Theorem 2] and consists of four propositions showing convergence of the log-acceptance probability to a normal random variable and (weak) convergence of the process (11) to a Langevin diffusion.
We start by recalling and defining a number of quantities that we will use in the following proofs.
+Recall that σd = ℓ/dα, that λd = σ2m +d r/2 where m ≥ 1/2 and r > 0 are to be chosen according to +the different cases in Theorem 1. Recalling the expression of the proposal given in (9) and using +the simplification given in (10), we define the proposal with starting point xd ∈ Rd, +yd(xd, zd) = xd − σ2 +d +2 ∇G +� +proxσ2m +d +r/2 +G +(xd) +� ++ σdzd , +where zd ∈ Rd. Since G is the d-times tensor product of g, the i-th component of the proposal only +depends on the i-th components of xd and zd. Thus, we introduce the notation for any x, z ∈ R, +yd(x, z) = x − σ2 +d +2 g′ � +proxσ2m +d +r/2 +g +(x) +� ++ σdz . +With these notations, we obtain the proposal for the chain (Xd +k)k≥0 using Y d +k = yd(Xd +k, Zd +k+1) = +(yd(Xd +k,i, Zd +k+1,i))i∈{1,...,d}. +Let us define the generator of the discrete process (Xd +k)k≥0 for all +38 + +V ∈ C∞ +c (Rd, R), i.e. infinitely differentiable R-valued multivariate functions with compact support, +and any xd ∈ Rd, +LdV (xd) = d2αE +�� +V (yd(xd, Zd +1)) − V (xd) +� πd(yd(xd, Zd +1))qd(yd(xd, Zd +1), xd) +πd(xd)qd(xd, yd(xd, Zd +1)) +∧ 1 +� += d2αE +� +� +V (yd(xd, Zd +1)) − V (xd) +� +d +� +i=1 +exp +� +φd(xd +i , Zd +1,i) +� +∧ 1 +� +, +where the expectation is w.r.t. Zd +1 = (Zd +1,i)i∈{1,...,d}, a d-dimensional standard normal random +variable, and where we defined +φd(x, z) = log π(yd(x, z))q(yd(x, z), x) +π(x)q(x, yd(x, z)) +(50) += g(yd(x, z)) − g(x) + log q(yd(x, z), x) − log q(x, yd(x, z)) . +In the remainder we will work with one-dimensional functions V ∈ C∞ +c (R, R) applied to the first +component of xd so that +LdV (xd) = d2αE +�� +V (yd(xd +1, Zd +1,1)) − V (xd +1) +� πd(yd(xd, Zd +1))qd(yd(xd, Zd +1), xd) +πd(xd)qd(xd, yd(xd, Zd +1)) +∧ 1 +� += d2αE +� +� +V (yd(xd +1, Zd +1,1)) − V (xd +1) +� +d +� +i=1 +exp +� +φd +� +xd +i , Zd +1,i +�� +∧ 1 +� +. 
+(51) +We also define �Ld to be a variant of Ld in which the first component of the acceptance ratio is +omitted: +�LdV (xd) = d2αE +� +� +V (yd(xd +1, Zd +1,1)) − V (xd +1) +� +d +� +i=2 +exp +� +φd +� +xd +i , Zd +1,i +�� +∧ 1 +� +. +(52) +We further define the generator of the Langevin diffusion (13) +LV (x) = h(ℓ, r) +2 +[V ′′(x) − g′(x)V ′(x)] , +(53) +where h(ℓ, r) = ℓ2a(ℓ, r) is the speed of the diffusion and a(ℓ, r) = limd→∞ ad(ℓ, r) is given in +Theorem 1. +We will make use of the derivatives of g in (8) up to order 8, which we denote by g′, g′′, g′′′ and +g(k) for all k = 4, . . . , 8. +We recall that (gλ)′ is Lipschitz continuous with Lipschitz constant λ−1 [38, Proposition 12.19] +and that (gλ)′(x) = λ−1(proxλ +g(x)−x), hence proxλ +g is Lipschitz continuous with Lipschitz constant +1. +A.1 +Auxiliary Results for the Proof of Case (a) +First, we characterize the limit behaviour of the acceptance ratio (12). +Proposition 4. Under A0, A1 and A2, if α = 1/4, β = 1/8 and r > 0, then +39 + +(i) the log-acceptance ratio (50) satisfies +φd(x, z) = d−1/2C2(x, z) + d−3/4C3(x, z) + d−1C4(x, z) + C5(x, z, σd) , +where C2(x, z) is given in (57), C3 and C4 are polynomials in z and the derivatives of g, such +that E[C3(Xd +0,1, Zd +1,1)] = 0 and E[C2(Xd +0,1, Zd +1,1)2] = −2E[C4(Xd +0,1, Zd +1,1)]; +(ii) there exists sets Fd ⊆ Rd with d2απd(F c +d) → 0 such that +lim +d→∞ sup +xd∈Fd +E +������ +d +� +i=2 +φd(xd +i , Zd +1,i) − d−1/2 +d +� +i=2 +C2(xd +i , Zd +1,i) + ℓ4K1(r)2 +8 +����� +� += 0 , +(54) +where K1(r) is given in Theorem 1–(a). +Proof. Take one component of the log-acceptance ratio +φd(x, z) = g(yd(x, z)) − g(x) + log q(yd(x, z), x) − log q(x, yd(x, z)) , +with yd(x, z) = x−σ2 +dg′(proxσ2m +d +r/2 +g +(x))/2+σdz. 
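Before expanding φd, it may help to see these objects concretely. The sketch below is an illustration only: it assumes the hypothetical smooth potential g(u) = u²/2 (a stand-in for the paper's general g, with π ∝ exp(−g)), for which the proximity map has the closed form prox_{λg}(x) = x/(1 + λ); the parameter values are likewise illustrative. It evaluates the proposal map yd and the resulting componentwise Metropolis-Hastings log ratio:

```python
import numpy as np

# Hypothetical smooth potential (illustration, not the paper's general g):
g = lambda u: 0.5 * u ** 2               # pi(u) proportional to exp(-g(u))
g_prime = lambda u: u
prox = lambda u, lam: u / (1.0 + lam)    # closed-form prox of g(u) = u^2/2

def y_d(x, z, sigma, lam):
    # Proposal map: y_d(x, z) = x - (sigma^2/2) g'(prox_{lam g}(x)) + sigma z.
    return x - 0.5 * sigma ** 2 * g_prime(prox(x, lam)) + sigma * z

def phi_d(x, z, sigma, lam):
    # Componentwise log-acceptance ratio: log pi(y) q(y, x) - log pi(x) q(x, y),
    # where q(a, .) is Gaussian with mean a - (sigma^2/2) g'(prox(a)), variance sigma^2.
    y = y_d(x, z, sigma, lam)
    log_pi = lambda u: -g(u)
    mean = lambda a: a - 0.5 * sigma ** 2 * g_prime(prox(a, lam))
    log_q = lambda a, b: -((b - mean(a)) ** 2) / (2.0 * sigma ** 2)
    return log_pi(y) - log_pi(x) + log_q(y, x) - log_q(x, y)

rng = np.random.default_rng(0)
d, ell, alpha, m, r = 100, 1.0, 0.25, 0.5, 1.0   # case (a): alpha = 1/4
sigma = ell * d ** (-alpha)
lam = sigma ** (2 * m) * r / 2.0                 # lambda_d = sigma_d^{2m} r / 2
x = rng.standard_normal(d)
z = rng.standard_normal(d)
log_ratio = sum(phi_d(xi, zi, sigma, lam) for xi, zi in zip(x, z))
accepted = np.log(rng.uniform()) < log_ratio     # accept-reject step
```

Each φd term vanishes as σd → 0; the Taylor expansion carried out next quantifies the rate of this decay and the behaviour of the sum over the d components.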
We have that φd(x, z) = R1(x, z, σd)+R2(x, z, σd), +where +R1(x, z, σ) = −g +� +x − σ2 +2 g′ � +proxσ2mr/2 +g +(x) +� ++ σz +� ++ g(x) , +R2(x, z, σ) = 1 +2z2 − 1 +2 +� +z − σ +2 g′ +� +proxσ2mr/2 +g +� +x + σz − σ2 +2 g′ � +proxσ2mr/2 +g +(x) +��� +−σ +2 g′ � +proxσ2mr/2 +g +(x) +��2 +. +(55) +Following the approach of [34] we approximate φd(x, z) with a Taylor expansion about σd → 0. +(i) Using a Taylor expansion of order 5, we obtain +φd(x, z) = d−1/2C2(x, z) + d−3/4C3(x, z) + d−1C4(x, z) + C5(x, z, σd) , +(56) +where +C2(x, z) = ℓ2 +2 (−rzg′′(x)g′(x)) , +(57) +C3(x, z) and C4(x, z) are given in Appendix C.1.1 and we use the integral form for the re- +mainder +C5(x, z, σd) = +� σd +0 +∂5 +∂σ5 R(x, z, σ) +���� +σ=u +(σd − u)4 +4! +du , +with u between 0 and σd and the derivatives of R1 and R2 given in Appendix C.2. +In +addition, integrating by parts and using the moments of Zd +1,1 we find that E[C2(Xd +0,1, Zd +1,1)] = +E[C3(Xd +0,1, Zd +1,1)] = 0 and +2E +� +C4(Xd +0,1, Zd +1,1) +� ++ E +� +C2(Xd +0,1, Zd +1,1)2� += 0 . +40 + +(ii) To construct the sets Fd, consider, for j = 3, 4, Fd,j = F 1 +d,j ∩ F 2 +d,j where we define +F 1 +d,j = +� +xd ∈ Rd : +����� +d +� +i=2 +E +� +Cj(xd +i , Zd +1,i) − Cj(Xd +0,i, Zd +1,i) +� +����� ≤ d5/8 +� +, +and +F 2 +d,j = +� +xd ∈ Rd : +����� +d +� +i=2 +Vj(xd +i ) − E +� +Vj(Xd +0,i) +� +����� ≤ d6/5 +� +, +where Vj(x) := Var(Cj(x, Zd +1,1)). +Using Markov’s inequality and the fact that Cj, Vj are +bounded by polynomials since g and its derivatives are bounded by polynomials, it is easy to +show that d1/2πd((F 1 +d,j)c) → 0 and d1/2πd((F 2 +d,j)c) → 0, from which follows d1/2πd(F c +d,j) → 0 +as d → ∞. 
To prove L1 convergence of Cj for j = 3, 4, observe that +E +� +� +� d +� +i=2 +Cj(xd +i , Zd +1,i) − E +� +Cj(Xd +0,1, Zd +1,1) +� +�2� +� += +d +� +i=2 +Vj(xd +i ) + +� d +� +i=2 +E +� +Cj(xd +i , Zd +1,i) − Cj(Xd +0,1, Zd +1,1) +� +�2 +, +and that, for xd ∈ Fd,j, we have +E +� +� +� d +� +i=2 +Cj(xd +i , Zd +1,i) − E +� +Cj(Xd +0,1, Zd +1,1) +� +�2� +� ≤ E +� +Vj(xd +1) +� +(d − 1) + d6/5 + d5/4 . +Thus, the third and fourth term in the Taylor expansion (56) converge in L1 to 0 and +−ℓ4K2 +1(r)/8 respectively. Now, consider C5(xd +i , Zd +1,i, σd). We can bound +∂5R +∂σ5 (x, z, σ) with +the derivatives of g evaluated at +x + σ2 +2 proxσ2mr/2 +g +(x) + σz +and +proxσ2mr/2 +g +(x) . +Under our assumptions, the derivatives of g are bounded by polynomials M0, it follows that +there exist polynomials p of the form +A +� +1 + +� +proxσ2mr/2 +g +(x) +�N� � +1 + zN� � +1 + xN� � +1 + σN� +, +for sufficiently large A and sufficiently large even integer N, such that +����g(k) +� +x + σ2 +2 proxσ2mr/2 +g +(x) + σdz +����� ∨ +���g(k) � +proxσ2mr/2 +g +(x) +���� +≤ p(proxσ2mr/2 +g +(x), x, z, σd) . +41 + +In addition, | proxσ2mr/2 +g +(x)| ≤ C(1 + |x|) for some C ≥ 1, and we can bound +p(proxσ2mr/2 +g +(x), x, z, σ) ≤ A +� +1 + zN� � +1 + x2N� � +1 + σN� +. +Therefore, we have +E +���C5(xd +i , Zd +1,i, σd) +��� +≤ AE +� +1 + (Zd +1,i)N� � +1 + (xd +i )2N� � σd +0 +(1 + uN)(σd − u)4 +4! +du +≤ AE +� +1 + (Zd +1,i)N� � +1 + (xd +i )2N� +d−5/2 +≤ A +� +1 + (xd +i )2N� +d−5/2 , +where the last inequality follows since all the moments of Zd +1 are bounded. Let us denote +p(x) = A +� +1 + x2N� +and +Fd,5 = +� +xd ∈ Rd : +�����d−1 +d +� +i=1 +p(xd +i ) − E +� +p(Xd +0,i) +� +����� < 1 +� +. +By Chebychev’s inequality we have πd(F c +d,5) ≤ Var(p(Xd +0,1))d−1. Additionally, for all xd ∈ +Fd,5, +d +� +i=2 +E +���C5(xd +i , Zd +1,i, σd) +��� +≤ +d +� +i=2 +d−5/2 � +E +� +p(Xd +0,1) +� ++ d−1� +≤ d−3/2 � +E +� +p(Xd +0,1) +� ++ 1 +� +. +Finally, set Fd = ∩5 +j=3Fd,j. 
On $F_d$ the last three terms of (56) converge uniformly in $L^1$, and (54) follows using the triangle inequality.

Next, we compare the generators $L_d$ and $\widetilde{L}_d$ in (51) and (52), respectively.

Proposition 5. Under A0, A1 and A2, if $\alpha = 1/4$, $\beta = 1/8$ and $r > 0$, there exist sets $S_d \subseteq \mathbb{R}^d$ with $d^{2\alpha}\pi_d(S_d^c) \to 0$ such that, for any $V \in C_c^\infty(\mathbb{R},\mathbb{R})$,
\[
\lim_{d\to\infty} \sup_{x^d \in S_d} \big| L_d V(x^d) - \widetilde{L}_d V(x^d) \big| = 0 \,,
\]
and
\[
\lim_{d\to\infty} \sup_{x^d \in S_d} E\Big[ \Big| \Big( \exp\Big( \sum_{i=1}^d \phi_d(x^d_i, Z^d_{1,i}) \Big) \wedge 1 \Big) - \Big( \exp\Big( \sum_{i=2}^d \phi_d(x^d_i, Z^d_{1,i}) \Big) \wedge 1 \Big) \Big| \Big] = 0 \,.
\]

Proof. The function $x \mapsto \exp(x) \wedge 1$ is Lipschitz continuous with Lipschitz constant 1, hence
\[
\big| L_d V(x^d) - \widetilde{L}_d V(x^d) \big| \le d^{2\alpha} E\big[ \big| V\big( y_d(x^d_1, Z^d_{1,1}) \big) - V(x^d_1) \big| \, \big| R(x^d_1, Z^d_{1,1}, \sigma_d) \big| \big] \,,
\]
where $R(x,z,\sigma) = R_1(x,z,\sigma) + R_2(x,z,\sigma)$ as in (55). Using a Taylor expansion of order 1 about $\sigma = 0$ with integral remainder,
\[
R(x,z,\sigma) = R(x,z,0) + \frac{\partial R}{\partial \sigma}(x,z,\sigma)\Big|_{\sigma=0} \sigma + \int_0^\sigma \frac{\partial^2 R}{\partial \sigma^2}(x,z,\sigma)\Big|_{\sigma=u} (\sigma - u)\, \mathrm{d}u \,,
\]
and noting that $R(x,z,0) = 0$ and $\frac{\partial R}{\partial \sigma}(x,z,\sigma)|_{\sigma=0} = 0$ (see Appendix C.2), we obtain
\[
R(x,z,\sigma) = \int_0^\sigma \frac{\partial^2 R}{\partial \sigma^2}(x,z,\sigma)\Big|_{\sigma=u} (\sigma - u)\, \mathrm{d}u \,,
\]
where $\frac{\partial^2 R}{\partial \sigma^2}(x,z,\sigma)$ is bounded by the derivatives of $g$ evaluated at
\[
x - \frac{\sigma^2}{2} g'\big( \operatorname{prox}_g^{\sigma^{2m}r/2}(x) \big) + \sigma z \qquad \text{and} \qquad \operatorname{prox}_g^{\sigma^{2m}r/2}(x) \,.
\]
Since, under our assumptions, the derivatives of $g$ are bounded by polynomials, there exist polynomials $p$ of the form $A(1 + w^N)(1 + z^N)(1 + x^N)(1 + \sigma^N)$, for sufficiently large $A$ and a sufficiently large even integer $N$, such that
\[
\Big| g^{(k)}\Big( x - \frac{\sigma^2}{2} g'\big( \operatorname{prox}_g^{\sigma^{2m}r/2}(x) \big) + \sigma z \Big) \Big| \vee \Big| g^{(k)}\big( \operatorname{prox}_g^{\sigma^{2m}r/2}(x) \big) \Big| \le p\big( \operatorname{prox}_g^{\sigma^{2m}r/2}(x), x, z, \sigma \big) \,.
\]
Proceeding as in Proposition 4, we can bound $p( \operatorname{prox}_g^{\sigma^{2m}r/2}(x), x, z, \sigma ) \le A(1 + z^N)(1 + x^{2N})(1 + \sigma^N)$. Therefore, we have
\[
\big| R(x^d_1, Z^d_{1,1}, \sigma_d) \big| \le A\big( 1 + (Z^d_{1,1})^N \big)\big( 1 + (x^d_1)^{2N} \big) \int_0^{\sigma_d} (1 + u^N)(\sigma_d - u)\, \mathrm{d}u
\le A\big( 1 + (Z^d_{1,1})^N \big)\big( 1 + (x^d_1)^{2N} \big) \frac{\sigma_d^2}{2} \,. \tag{58}
\]
Since $V \in C_c^\infty(\mathbb{R},\mathbb{R})$, there exists a constant $C$ such that
\[
\big| V\big( y_d(x^d_1, Z^d_{1,1}) \big) - V(x^d_1) \big| \le C \big| y_d(x^d_1, Z^d_{1,1}) - x^d_1 \big|
\le C \Big( \sigma_d |Z^d_{1,1}| + \frac{\sigma_d^2}{2} \Big| g'\big( \operatorname{prox}_g^{\sigma_d^{2m}r/2}(x^d_1) \big) \Big| \Big) \,.
\]
Recalling that $g'(\operatorname{prox}_g^{\lambda}(x)) = (g^{\lambda})'(x)$, with $(g^{\lambda})'$ Lipschitz continuous with Lipschitz constant $\lambda^{-1}$, we have
\[
\Big| g'\big( \operatorname{prox}_g^{\sigma_d^{2m}r/2}(x^d_1) \big) \Big| \le \frac{2}{\sigma_d^{2m} r}\big( 1 + |x^d_1| \big) \,,
\]
and
\[
\big| V\big( y_d(x^d_1, Z^d_{1,1}) \big) - V(x^d_1) \big| \le C\Big( \sigma_d |Z^d_{1,1}| + \frac{\sigma_d^{2-2m}}{r}\big( 1 + |x^d_1| \big) \Big) \tag{59}
\le C \sigma_d \Big( |Z^d_{1,1}| + \frac{1}{r}\big( 1 + |x^d_1| \big) \Big) \,,
\]
since $m = 1/2$. Combining (58) and (59) we obtain
\[
d^{2\alpha} \big| V\big( y_d(x^d_1, Z^d_{1,1}) \big) - V(x^d_1) \big| \, \big| R(x^d_1, Z^d_{1,1}, \sigma_d) \big|
\le C \sigma_d \big( 1 + (Z^d_{1,1})^N \big)\big( 1 + (x^d_1)^{2N} \big) \Big( |Z^d_{1,1}| + \frac{1}{r}\big( 1 + |x^d_1| \big) \Big) \,,
\]
for some $C > 0$. Set $S_d$ to be the set on which $1 + |x^d_1|^{2N+1} \le d^{\alpha/2}$; applying Markov's inequality we obtain
\[
d^{2\alpha} \pi_d(S_d^c) = d^{2\alpha} \pi_d\Big( \big( 1 + |x^d_1|^{2N+1} \big)^5 \ge d^{5\alpha/2} \Big) \le d^{-\alpha/2} E\Big[ \big( 1 + |X^d_{0,1}|^{2N+1} \big)^5 \Big] \xrightarrow[d\to\infty]{} 0 \,.
\]
Recalling that all the moments of $Z^d_{1,1}$ are bounded, we have that
\[
\sup_{x^d \in S_d} \big| L_d V(x^d) - \widetilde{L}_d V(x^d) \big| \le C d^{\alpha/2} \frac{\ell}{d^{\alpha}} \xrightarrow[d\to\infty]{} 0 \,.
\]
The second result follows from (58) using the same argument.

The following result considers the convergence to the generator of the Langevin diffusion (53).

Proposition 6. Under A0, A1 and A2, if $\alpha = 1/4$, $\beta = 1/8$ and $r > 0$, there exist sets $T_d \subseteq \mathbb{R}^d$ with $d^{2\alpha}\pi_d(T_d^c) \to 0$ as $d \to \infty$, such that, for any $V \in C_c^\infty(\mathbb{R},\mathbb{R})$,
\[
\lim_{d\to\infty} \sup_{x^d \in T_d} \Big| d^{2\alpha} E\big[ V\big( y_d(x^d_1, Z^d_{1,1}) \big) - V(x^d_1) \big] - \frac{\ell^2}{2}\big( V''(x^d_1) + g'(x^d_1) V'(x^d_1) \big) \Big| = 0 \,.
\]
Proof.
Take
\[
y_d(x^d_1, Z^d_{1,1}) = x^d_1 + \frac{\sigma_d^2}{2} g'\big( \operatorname{prox}_g^{\sigma_d^{2m}r/2}(x^d_1) \big) + \sigma_d Z^d_{1,1} \,,
\]
and use a Taylor expansion of order 2 of
\[
W(x,z,\sigma) = V\Big( x + \frac{\sigma^2}{2} g'\big( \operatorname{prox}_g^{\sigma^{2m}r/2}(x) \big) + \sigma z \Big)
\]
about $\sigma = 0$ with integral remainder:
\[
W(x,z,\sigma) = W(x,z,0) + \frac{\partial W}{\partial \sigma}(x,z,\sigma)\Big|_{\sigma=0} \sigma + \frac{1}{2} \frac{\partial^2 W}{\partial \sigma^2}(x,z,\sigma)\Big|_{\sigma=0} \sigma^2 + \int_0^\sigma \frac{\partial^3 W}{\partial \sigma^3}(x,z,\sigma)\Big|_{\sigma=u} \frac{(\sigma - u)^2}{2}\, \mathrm{d}u \,.
\]
Using the derivatives
\[
W(x,z,0) = V(x) \,, \qquad \frac{\partial W}{\partial \sigma}(x,z,\sigma)\Big|_{\sigma=0} = V'(x) z \,, \qquad \frac{\partial^2 W}{\partial \sigma^2}(x,z,\sigma)\Big|_{\sigma=0} = V''(x) z^2 + V'(x) g'(x) \,,
\]
and recalling that $E[Z^d_{1,1}] = 0$ and $E[(Z^d_{1,1})^2] = 1$, we have
\[
E\big[ V\big( y_d(x^d_1, Z^d_{1,1}) \big) - V(x^d_1) \big] = \frac{\sigma_d^2}{2}\big( V''(x^d_1) + V'(x^d_1) g'(x^d_1) \big) + E\Big[ \int_0^{\sigma_d} \frac{\partial^3 W}{\partial \sigma^3}(x^d_1, Z^d_{1,1}, \sigma)\Big|_{\sigma=u} \frac{(\sigma_d - u)^2}{2}\, \mathrm{d}u \Big] \,.
\]
Proceeding as in the previous proposition, we can bound
\[
\Big| \int_0^{\sigma_d} \frac{\partial^3 W}{\partial \sigma^3}(x^d_1, Z^d_{1,1}, \sigma)\Big|_{\sigma=u} \frac{(\sigma_d - u)^2}{2}\, \mathrm{d}u \Big| \le A\big( 1 + (Z^d_{1,1})^N \big)\big( 1 + (x^d_1)^{2N} \big) d^{-3\alpha} \,.
\]
Setting $T_d$ to be the set on which $1 + (x^d_1)^{2N} \le d^{\alpha/2}$, the result follows by applying Markov's inequality as in Proposition 5.

Before proceeding to stating and proving the last auxiliary result, let us denote by $\psi_1 : \mathbb{R} \to [0, +\infty)$ the characteristic function of the distribution $\mathcal{N}(0, K_1^2(r))$, where $K_1^2(r)$ is given in Theorem 1–(a),
\[
\psi_1(t) = \exp\big( -t^2 \ell^2 K_1(r)^2 / 2 \big) \,,
\]
and by $\psi_1^d(x^d; t) = \int \exp(itw)\, Q_1^d(x^d; \mathrm{d}w)$ the characteristic function associated with the law
\[
Q_1^d(x^d; \cdot) = \mathcal{L}\Big( d^{-1/2} \sum_{i=2}^d C_2(x^d_i, Z^d_{1,i}) \Big) \,.
\]

Proposition 7.
Under A0, A1 and A2, if $\alpha = 1/4$, $\beta = 1/8$ and $r > 0$, there exists a sequence of sets $H_d \subseteq \mathbb{R}^d$ such that

(i) $\lim_{d\to\infty} d^{2\alpha}\pi_d(H_d^c) = 0$;

(ii) for all $t \in \mathbb{R}$,
\[
\lim_{d\to\infty} \sup_{x^d \in H_d} \big| \psi_1^d(x^d; t) - \psi_1(t) \big| = 0 \,;
\]

(iii)
\[
d^{-1/2} \sum_{i=2}^d C_2(x^d_i, Z^d_{1,i}) \xrightarrow{\mathcal{L}} \mathcal{N}\Big( 0, \frac{\ell^4 K_1^2(r)}{2} \Big) \,,
\]
where $\xrightarrow{\mathcal{L}}$ denotes convergence in law;

(iv)
\[
\lim_{d\to\infty} \sup_{x^d \in H_d} \Big| E\Big[ 1 \wedge \exp\Big( d^{-1/2} \sum_{i=2}^d C_2(x^d_i, Z^d_{1,i}) - \frac{\ell^4 K_1^2(r)}{2} \Big) \Big] - 2\Phi\Big( -\frac{\ell^2 K_1(r)}{2} \Big) \Big| = 0 \,,
\]
where $\Phi$ is the distribution function of the standard normal random variable.

Proof.
(i) Define the functions $h_j(x) = [-g''(x) g'(x)]^j$ for $j = 1, \dots, 4$ and let $H_d = H_{d,1} \cap H_{d,2}$, where
\[
H_{d,1} = \Big\{ x^d \in \mathbb{R}^d : \Big| \frac{1}{d} \sum_{i=2}^d h_j(x^d_i) - \int_{\mathbb{R}} h_j(u) \pi(u)\, \mathrm{d}u \Big| \le d^{-1/3} \text{ for } j = 1, \dots, 4 \Big\} \,,
\]
\[
H_{d,2} = \Big\{ x^d \in \mathbb{R}^d : |h_j(x^d_i)| \le d^{2/3} \text{ for } i = 1, \dots, d \text{ and } j = 1, \dots, 4 \Big\} \,.
\]
Using Chebychev's inequality, the fact that the derivatives of $g$ are bounded by polynomials and that $\pi$ has finite moments, we have $d^{1/2}\pi_d((H_{d,1})^c) \to 0$ as $d \to \infty$. Similarly, by Markov's inequality we have $d^{1/2}\pi_d((H_{d,2})^c) \to 0$ as $d \to \infty$.

(ii) We follow [34, Lemma 3(b)] and decompose
\[
\big| \psi_1^d(x^d; t) - \psi_1(t) \big| \le \Big| \psi_1^d(x^d; t) - \prod_{i=2}^d \Big( 1 - \frac{t^2}{2d} v(x^d_i) \Big) \Big|
+ \Big| \prod_{i=2}^d \Big( 1 - \frac{t^2}{2d} v(x^d_i) \Big) - \prod_{i=2}^d \exp\Big( -t^2 \frac{v(x^d_i)}{2d} \Big) \Big|
+ \Big| \prod_{i=2}^d \exp\Big( -t^2 \frac{v(x^d_i)}{2d} \Big) - \exp\Big( -t^2 \frac{\ell^2 K_1(r)^2}{2} \Big) \Big| \,, \tag{60}
\]
where $v(x^d_i) = \operatorname{Var}(C_2(x^d_i, Z^d_{1,i})) = \ell^4 E[C_2(x^d_i, Z^d_{1,i})^2]$. For the first term, decompose the characteristic function $\psi_1^d(x^d; t) = \prod_{i=2}^d \theta_i^d(x^d_i; t)$ as the product of the characteristic functions of $d^{-1/2} W_i$, where we define $W_i = C_2(x^d_i, Z^d_{1,i})$. Using [15, equation (3.3.3)] as in the proof of [15, Theorem 3.4.10], we obtain, for any $\varepsilon > 0$,
\[
\Big| \theta_i^d(x^d_i; t) - \Big( 1 - \frac{t^2}{2d} v(x^d_i) \Big) \Big| \le E_Z\Big[ \frac{|t|^3}{d^{3/2}} \frac{|W_i|^3}{3!} \wedge \frac{2|t|^2}{d} \frac{|W_i|^2}{2!} \Big]
\le E_Z\Big[ \frac{|t|^3}{d^{3/2}} \frac{|W_i|^3}{3!} ;\, |W_i| \le d^{1/2}\varepsilon \Big] + \frac{t^2}{d} E_Z\big[ |W_i|^2 ;\, |W_i| > d^{1/2}\varepsilon \big]
\le \frac{\varepsilon |t|^3}{6d} E_Z\big[ |W_i|^2 \big] + \frac{t^2}{\varepsilon^2 d^2} E_Z\big[ |W_i|^4 \big] \,.
\]
For sufficiently large $d$, we have that $t^2 v(x^d_i)/(2d) \le 1$ for $x^d \in H_{d,2}$, and we can use [15, Lemma 3.4.3]:
\[
\Big| \psi_1^d(x^d; t) - \prod_{i=2}^d \Big( 1 - \frac{t^2}{2d} v(x^d_i) \Big) \Big| \le \sum_{i=2}^d \Big( \frac{\varepsilon |t|^3}{6d} E_Z\big[ |W_i|^2 \big] + \frac{t^2}{\varepsilon^2 d^2} E_Z\big[ |W_i|^4 \big] \Big)
\le \frac{\varepsilon \ell^4 |t|^3}{6} \big( K_1^2(r) + D_1 d^{-1/3} \big) + \frac{t^2}{\varepsilon^2 d} \big( E\big[ |W_i|^4 \big] + D_2 \ell^8 d^{-1/4} \big) \,,
\]
where the last inequality follows from the fact that $x^d \in H_d$ and $D_1$, $D_2$ are positive constants. For any $\delta > 0$ we can choose $\varepsilon$ small enough so that the first term above is less than $\delta/2$, and then choose $d$ sufficiently large to make the second term less than $\delta/2$. Thus, for any $\delta > 0$ we can find $\varepsilon > 0$ and $d \in \mathbb{N}$ such that
\[
\Big| \psi_1^d(x^d; t) - \prod_{i=2}^d \Big( 1 - \frac{t^2}{2d} v(x^d_i) \Big) \Big| < \delta \,;
\]
the uniform convergence then follows. The second term in (60) converges to 0 uniformly for all $x^d \in H_{d,1}$, while for the third term in (60) we use again [15, Lemma 3.4.3]:
\[
\Big| \prod_{i=2}^d \exp\Big( -t^2 \frac{v(x^d_i)}{2d} \Big) - \exp\Big( -t^2 \frac{\ell^2 K_1(r)^2}{2} \Big) \Big| \le \sum_{i=2}^d \frac{t^4 v(x^d_i)^2}{4 d^2} \to 0
\]
for all $x^d \in H_{d,2}$. The result then follows.

(iii) This is a straightforward consequence of (ii) and Lévy's continuity theorem (e.g. [41, Theorem 1, page 322]).

(iv) This statement follows directly from (iii) and [33, Proposition 2.4].

A.2 Auxiliary Results for the Proof of Case (b)

First, we characterize the limit behaviour of the acceptance ratio (12). The following result is an extension of [34, Lemma 1].

Proposition 8.
Under A0, A1, A2 and A3, if $\alpha = 1/6$, $\beta = 1/6$ and $r > 0$, then

(i) the log-acceptance ratio (50) satisfies
\[
\phi_d(x^d_i, Y_i) = d^{-1/2} C_3(x_i, Z_i) + d^{-2/3} C_4(x_i, Z_i) + d^{-5/6} C_5(x_i, Z_i) + d^{-1} C_6(x_i, Z_i) + C_7(x_i, Z_i, \sigma_d) \,,
\]
where $C_3(x_i, Z_i)$ is given in (61), $C_4$, $C_5$, $C_6$ are polynomials in $Z_i$ and the derivatives of $g$, such that $E_X[E_Z[C_j(X,Z)]] = 0$ for $j = 3, 4, 5$ and
\[
E_X\big[ E_Z\big[ C_3(X,Z)^2 \big] \big] = -2\, E_X\big[ E_Z\big[ C_6(X,Z) \big] \big] \,;
\]

(ii) there exist sets $F_d \subseteq \mathbb{R}^d$ with $d^{2\alpha}\pi_d(F_d^c) \to 0$ such that
\[
\lim_{d\to\infty} \sup_{x^d \in F_d} E\Big[ \Big| \sum_{i=2}^d \phi_d(x^d_i, Y_i) - d^{-1/2} \sum_{i=2}^d C_3(x^d_i, Z_i) + \frac{\ell^6 K_2(r)^2}{2} \Big| \Big] = 0 \,,
\]
where $K_2(r)$ is given in Theorem 1–(b).

Proof. Take one component of the log-acceptance ratio,
\[
\phi_d(x^d_i, Y_i) = g(Y_i) - g(x^d_i) + \log q(Y_i, x^d_i) - \log q(x^d_i, Y_i) \,,
\]
with $Y_i = x^d_i - \frac{\sigma_d^2}{2} g'\big( \operatorname{prox}_g^{\sigma_d^{2m}r/2}(x^d_i) \big) + \sigma_d Z_i$ and $Z_i \sim \mathcal{N}(0,1)$. Proceeding as in the proof for case (a), we have that $\phi_d(x^d_i, Y_i) = R_1(x^d_i, Z_i, \sigma_d) + R_2(x^d_i, Z_i, \sigma_d)$, where $R_1$, $R_2$ are given in (55). Following the approach of [34], we approximate $\phi_d(x^d_i, Y_i)$ with a Taylor expansion around $\sigma = 0$.

(i) Using a Taylor expansion of order 7, we find that
\[
\phi_d(x^d_i, Y_i) = d^{-1/2} C_3(x^d_i, Z_i) + d^{-2/3} C_4(x^d_i, Z_i) + d^{-5/6} C_5(x^d_i, Z_i) + d^{-1} C_6(x^d_i, Z_i) + C_7(x^d_i, Z_i, \sigma_d) \,,
\]
where
\[
C_3(x,z) = \frac{\ell^3}{6}\Big( \frac{1}{2} g'''(x) z^3 - \frac{3}{2} g''(x) g'(x) z (1 + 2r) \Big) \,, \tag{61}
\]
$C_4(x,z)$, $C_5(x,z)$ and $C_6(x,z)$ are given in Appendix C.1.2, and the remainder is taken in integral form,
\[
C_7(x^d_i, Z_i, \sigma_d) = \int_0^{\sigma_d} \frac{\partial^7}{\partial \sigma^7} R(x^d_i, Z_i, \sigma)\Big|_{\sigma = \epsilon} \frac{(\sigma_d - \epsilon)^6}{6!}\, \mathrm{d}\epsilon \,,
\]
with the derivatives of $R_1$ and $R_2$ given in Appendix C.2. In addition, integrating by parts and using the moments of $Z$, we find that $E_X[E_Z[C_3(X,Z)]] = E_X[E_Z[C_4(X,Z)]] = E_X[E_Z[C_5(X,Z)]] = 0$ and
\[
E_X\big[ E_Z\big[ C_6(X,Z) \big] \big] = \ell^6\, E_X\Big[ -\frac{1}{16}\big( r + 2r^2 \big) \big[ g''(X) g'(X) \big]^2 - \frac{1}{16}\Big( \frac{1}{2} + r \Big) g''(X)^3 - \frac{5}{96} g'''(X)^2 \Big] \,,
\]
which shows that $E_X[ E_Z[ C_3(X,Z)^2 + 2 C_6(X,Z) ] ] = 0$.
(ii) The proof of this result follows the same steps as case (a) and is analogous to that of [34, Lemma 1].

Next, we compare the generators $L_d$ and $\widetilde{L}_d$ in (51) and (52) respectively, extending [34, Theorem 3].

Proposition 9. Under A0, A1, A2 and A3, if $\alpha = 1/6$, $\beta = 1/6$ and $r > 0$, there exist sets $S_d \subseteq \mathbb{R}^d$ with $d^{2\alpha}\pi_d(S_d^c) \to 0$ such that, for any $V \in C_c^\infty(\mathbb{R},\mathbb{R})$,
\[
\lim_{d\to\infty} \sup_{x^d \in S_d} \big| L_d V(x^d) - \widetilde{L}_d V(x^d) \big| = 0
\]
and
\[
\lim_{d\to\infty} \sup_{x^d \in S_d} E\Big[ \Big| \frac{\pi_d(Y) q_d(Y, x^d)}{\pi_d(x^d) q_d(x^d, Y)} \wedge 1 - \exp\Big( \sum_{i=2}^d \phi_d(x^d_i, Y_i) \Big) \wedge 1 \Big| \Big] = 0 \,.
\]

Proof. The function $x \mapsto \exp(x) \wedge 1$ is Lipschitz continuous with Lipschitz constant 1, hence
\[
\big| L_d V(x^d) - \widetilde{L}_d V(x^d) \big| \le d^{2\alpha} E\big[ |V(Y_1) - V(x^d_1)| \, |R(x^d_1, Z_1, \sigma_d)| \big] \,,
\]
where $R(x,z,\sigma) = R_1(x,z,\sigma) + R_2(x,z,\sigma)$ as in (55). Using a Taylor expansion of order 1 about $\sigma = 0$ with integral remainder,
\[
R(x,z,\sigma) = R(x,z,0) + \frac{\partial R}{\partial \sigma}(x,z,\sigma)\Big|_{\sigma=0} \sigma + \int_0^\sigma \frac{\partial^2 R}{\partial \sigma^2}(x,z,\sigma)\Big|_{\sigma=\epsilon} (\sigma - \epsilon)\, \mathrm{d}\epsilon \,,
\]
and since $R(x,z,0) = 0$ and $\frac{\partial R}{\partial \sigma}(x,z,\sigma)|_{\sigma=0} = 0$, we obtain
\[
R(x^d_1, Z_1, \sigma_d) = \int_0^{\sigma_d} \frac{\partial^2 R}{\partial \sigma^2}(x^d_1, Z_1, \sigma)\Big|_{\sigma=\epsilon} (\sigma_d - \epsilon)\, \mathrm{d}\epsilon \,,
\]
where $\frac{\partial^2 R}{\partial \sigma^2}(x,z,\sigma)$ is bounded by the derivatives of $g$ evaluated at
\[
x - \frac{\sigma^2}{2} g'\big( \operatorname{prox}_g^{\sigma^{2m}r/2}(x) \big) + \sigma z \qquad \text{and} \qquad \operatorname{prox}_g^{\sigma^{2m}r/2}(x) \,.
\]
Since, under our assumptions, the derivatives of $g$ are bounded by polynomials, there exist polynomials $p$ of the form $A( 1 + w^N )( 1 + z^N )( 1 + x^N )( 1 + \sigma^N )$, for sufficiently large $A$ and a sufficiently large even integer $N$, such that
\[
\Big| g^{(k)}\Big( x - \frac{\sigma^2}{2} g'\big( \operatorname{prox}_g^{\sigma^{2m}r/2}(x) \big) + \sigma z \Big) \Big| \vee \Big| g^{(k)}\big( \operatorname{prox}_g^{\sigma^{2m}r/2}(x) \big) \Big| \le p\big( \operatorname{prox}_g^{\sigma^{2m}r/2}(x), x, z, \sigma \big) \,.
\]
Proceeding as in Proposition 8, we can bound $p( \operatorname{prox}_g^{\sigma^{2m}r/2}(x), x, z, \sigma ) \le A( 1 + z^N )( 1 + x^{2N} )( 1 + \sigma^N )$. Therefore, we have
\[
\big| R(x^d_1, Z_1, \sigma_d) \big| \le A\big( 1 + Z_1^N \big)\big( 1 + (x^d_1)^{2N} \big) \int_0^{\sigma_d} (1 + \epsilon^N)(\sigma_d - \epsilon)\, \mathrm{d}\epsilon \le A\big( 1 + Z_1^N \big)\big( 1 + (x^d_1)^{2N} \big) \frac{\sigma_d^2}{2} \,.
\]
Since $V \in C_c^\infty(\mathbb{R},\mathbb{R})$, there exists a constant $C$ such that
\[
|V(Y_1) - V(x^d_1)| \le C |Y_1 - x^d_1| \le C\Big( \sigma_d |Z_1| + \frac{\sigma_d^2}{2} \Big| g'\big( \operatorname{prox}_g^{\sigma_d^{2m}r/2}(x^d_1) \big) \Big| \Big) \,.
\]
Under A3, $g'$ is Lipschitz continuous and we have, for some $C \ge 1$,
\[
\Big| g'\big( \operatorname{prox}_g^{\sigma_d^{2m}r/2}(x^d_1) \big) \Big| \le C\big( 1 + \big| \operatorname{prox}_g^{\sigma_d^{2m}r/2}(x^d_1) \big| \big) \le C\big( 1 + |x^d_1| \big) \,,
\]
where we used the fact that $\operatorname{prox}_g^{\lambda}$ is 1-Lipschitz continuous for all $\lambda > 0$. The result then follows exactly as in [34, Theorem 3].

The following result considers the convergence to the generator of the Langevin diffusion (53) and is a generalization of [34, Lemma 2].

Proposition 10. Under A0, A1, A2 and A3, if $\alpha = 1/6$, $\beta = 1/6$ and $r > 0$, there exist sets $T_d \subseteq \mathbb{R}^d$ with $d^{2\alpha}\pi_d(T_d^c) \to 0$ such that, for any $V \in C_c^\infty(\mathbb{R},\mathbb{R})$,
\[
\lim_{d\to\infty} \sup_{x^d \in T_d} \Big| d^{2\alpha} E\big[ V(Y_1) - V(x^d_1) \big] - \frac{\ell^2}{2}\big( V''(x^d_1) + g'(x^d_1) V'(x^d_1) \big) \Big| = 0 \,.
\]

Proof. The proof is identical to that of Proposition 6.

Before proceeding to stating and proving the last auxiliary result, let us denote by $\psi_2 : \mathbb{R} \to [0, +\infty)$ the characteristic function of the distribution $\mathcal{N}(0, K_2^2(r))$, where $K_2^2(r)$ is given in Theorem 1,
\[
\psi_2(t) = \exp\big( -t^2 \ell^2 K_2(r)^2 / 2 \big) \,,
\]
and by $\psi_2^d(x^d; t) = \int \exp(itw)\, Q_2^d(x^d; \mathrm{d}w)$ the characteristic function associated with the law
\[
Q_2^d(x^d; \cdot) = \mathcal{L}\Big( d^{-1/2} \sum_{i=2}^d C_3(x^d_i, Z_i) \Big) \,.
\]
The following result extends [34, Lemma 3].

Proposition 11. Under A0, A1, A2 and A3, if $\alpha = 1/6$, $\beta = 1/6$ and $r > 0$, there exists a sequence of sets $H_d \subseteq \mathbb{R}^d$ such that

(i) $\lim_{d\to\infty} d^{2\alpha}\pi_d(H_d^c) = 0$;

(ii) for all $t \in \mathbb{R}$,
\[
\lim_{d\to\infty} \sup_{x^d \in H_d} \big| \psi_2^d(x^d; t) - \psi_2(t) \big| = 0 \,;
\]

(iii)
\[
d^{-1/2} \sum_{i=2}^d C_3(x^d_i, Z_i) \xrightarrow{\mathcal{L}} \mathcal{N}\Big( 0, \frac{\ell^6 K_2^2(r)}{2} \Big) \,,
\]
where $\xrightarrow{\mathcal{L}}$ denotes convergence in law;

(iv)
\[
\lim_{d\to\infty} \sup_{x^d \in H_d} \Big| E_Z\Big[ 1 \wedge \exp\Big( d^{-1/2} \sum_{i=2}^d C_3(x^d_i, Z_i) - \frac{\ell^6 K_2^2(r)}{2} \Big) \Big] - 2\Phi\Big( -\frac{\ell^3 K_2(r)}{2} \Big) \Big| = 0 \,.
\]

Proof.
(i) The proof is analogous to that of Proposition 7 and follows the same steps as that of [34, Lemma 3(a)].
(ii) The proof is analogous to that of Proposition 7 and follows the same steps as that of [34, Lemma 3(b)].

(iii) This is a straightforward consequence of (ii) and Lévy's continuity theorem (e.g. [41, Theorem 1, page 322]).

(iv) This statement follows directly from (iii) and [33, Proposition 2.4].

A.3 Auxiliary Results for the Proof of Case (c)

First, we characterize the limit behaviour of the acceptance ratio (12).

Proposition 12. Under A0, A1, A2 and A3, if $\alpha = 1/6$, $\beta = m/6$ with $m > 1$ and $r > 0$, then

(i) the log-acceptance ratio (50) satisfies
\[
\phi_d(x^d_i, Y_i) = d^{-1/2} C_3(x_i, Z_i) + d^{-2/3} C_4(x_i, Z_i) + d^{-5/6} C_5(x_i, Z_i) + d^{-1} C_6(x_i, Z_i) + C_7(x_i, Z_i, \sigma_d) \,,
\]
where $C_3(x_i, Z_i)$ is given in (62), $C_4$, $C_5$, $C_6$ are polynomials in $Z_i$ and the derivatives of $g$, such that $E_X[E_Z[C_j(X,Z)]] = 0$ for $j = 3, 4, 5$ and
\[
E_X\big[ E_Z\big[ C_3(X,Z)^2 \big] \big] = -2\, E_X\big[ E_Z\big[ C_6(X,Z) \big] \big] \,;
\]

(ii) there exist sets $F_d \subseteq \mathbb{R}^d$ with $d^{2\alpha}\pi_d(F_d^c) \to 0$ such that
\[
\lim_{d\to\infty} \sup_{x^d \in F_d} E\Big[ \Big| \sum_{i=2}^d \phi_d(x^d_i, Y_i) - d^{-1/2} \sum_{i=2}^d C_3(x^d_i, Z_i) + \frac{\ell^6 K_3^2}{2} \Big| \Big] = 0 \,,
\]
where $K_3 = K_2(0)$.

Proof. Take one component of the log-acceptance ratio,
\[
\phi_d(x^d_i, Y_i) = g(Y_i) - g(x^d_i) + \log q(Y_i, x^d_i) - \log q(x^d_i, Y_i) \,,
\]
with $Y_i = x^d_i - \frac{\sigma_d^2}{2} g'\big( \operatorname{prox}_g^{\sigma_d^{2m}r/2}(x^d_i) \big) + \sigma_d Z_i$ and $Z_i \sim \mathcal{N}(0,1)$. Proceeding as in the proof for case (a), we have that $\phi_d(x^d_i, Y_i) = R_1(x^d_i, Z_i, \sigma_d) + R_2(x^d_i, Z_i, \sigma_d)$, where $R_1$, $R_2$ are given in (55). Following the approach of [34], we approximate $\phi_d(x^d_i, Y_i)$ with a Taylor expansion around $\sigma = 0$.

(i) Using a Taylor expansion of order 7, we find that
\[
\phi_d(x^d_i, Y_i) = d^{-1/2} C_3(x^d_i, Z_i) + d^{-2/3} C_4(x^d_i, Z_i) + d^{-5/6} C_5(x^d_i, Z_i) + d^{-1} C_6(x^d_i, Z_i) + C_7(x^d_i, Z_i, \sigma_d) \,,
\]
where
\[
C_3(x,z) = \frac{\ell^3}{6}\Big( \frac{1}{2} g'''(x) z^3 - \frac{3}{2} g''(x) g'(x) z \Big) \,, \tag{62}
\]
$C_4(x,z)$, $C_5(x,z)$ and $C_6(x,z)$ are given in Appendix C.1.3, and the remainder is taken in integral form,
\[
C_7(x^d_i, Z_i, \sigma_d) = \int_0^{\sigma_d} \frac{\partial^7}{\partial \sigma^7} R(x^d_i, Z_i, \sigma)\Big|_{\sigma=\epsilon} \frac{(\sigma_d - \epsilon)^6}{6!}\, \mathrm{d}\epsilon \,,
\]
with the derivatives of $R_1$ and $R_2$ given in Appendix C.2. In addition, integrating by parts and using the moments of $Z$, we find that $E_X[E_Z[C_3(X,Z)]] = E_X[E_Z[C_4(X,Z)]] = E_X[E_Z[C_5(X,Z)]] = 0$ and
\[
E_X\big[ E_Z\big[ C_6(X,Z) \big] \big] = \ell^6\, E_X\Big[ -\frac{1}{32} g''(X)^3 - \frac{5}{96} g'''(X)^2 \Big] \,,
\]
which shows that $E_X[ E_Z[ C_3(X,Z)^2 + 2 C_6(X,Z) ] ] = 0$.

(ii) The proof of this result follows the same steps as case (a) and is analogous to that of [34, Lemma 1].

Next, we compare the generators $L_d$ and $\widetilde{L}_d$ in (51) and (52) respectively.

Proposition 13. Under A0, A1, A2 and A3, if $\alpha = 1/6$, $\beta = m/6$ with $m > 1$ and $r > 0$, there exist sets $S_d \subseteq \mathbb{R}^d$ with $d^{2\alpha}\pi_d(S_d^c) \to 0$ such that, for any $V \in C_c^\infty(\mathbb{R},\mathbb{R})$,
\[
\lim_{d\to\infty} \sup_{x^d \in S_d} \big| L_d V(x^d) - \widetilde{L}_d V(x^d) \big| = 0
\]
and
\[
\lim_{d\to\infty} \sup_{x^d \in S_d} E\Big[ \Big| \frac{\pi_d(Y) q_d(Y, x^d)}{\pi_d(x^d) q_d(x^d, Y)} \wedge 1 - \exp\Big( \sum_{i=2}^d \phi_d(x^d_i, Y_i) \Big) \wedge 1 \Big| \Big] = 0 \,.
\]

Proof. The proof is identical to that of Proposition 9.

The following result considers the convergence to the generator of the Langevin diffusion (53).

Proposition 14. Under A0, A1, A2 and A3, if $\alpha = 1/6$, $\beta = m/6$ with $m > 1$ and $r > 0$, there exist sets $T_d \subseteq \mathbb{R}^d$ with $d^{2\alpha}\pi_d(T_d^c) \to 0$ such that, for any $V \in C_c^\infty(\mathbb{R},\mathbb{R})$,
\[
\lim_{d\to\infty} \sup_{x^d \in T_d} \Big| d^{2\alpha} E\big[ V(Y_1) - V(x^d_1) \big] - \frac{\ell^2}{2}\big( V''(x^d_1) + g'(x^d_1) V'(x^d_1) \big) \Big| = 0 \,.
\]

Proof. The proof is identical to that of Proposition 6.

Before proceeding to stating and proving the last auxiliary result, let us denote by $\psi_3 : \mathbb{R} \to [0, +\infty)$ the characteristic function of the distribution $\mathcal{N}(0, K_3^2)$, where $K_3^2 = K_2^2(0)$,
\[
\psi_3(t) = \exp\big( -t^2 \ell^3 K_3^2 / 2 \big) \,,
\]
and by $\psi_3^d(x^d; t) = \int \exp(itw)\, Q_3^d(x^d; \mathrm{d}w)$ the characteristic function associated with the law
\[
Q_3^d(x^d; \cdot) = \mathcal{L}\Big( d^{-1/2} \sum_{i=2}^d C_3(x^d_i, Z_i) \Big) \,.
\]

Proposition 15.
Under A0, A1, A2 and A3, if $\alpha = 1/6$, $\beta = m/6$ with $m > 1$ and $r > 0$, there exists a sequence of sets $H_d \subseteq \mathbb{R}^d$ such that

(i) $\lim_{d\to\infty} d^{2\alpha}\pi_d(H_d^c) = 0$;

(ii) for all $t \in \mathbb{R}$,
\[
\lim_{d\to\infty} \sup_{x^d \in H_d} \big| \psi_3^d(x^d; t) - \psi_3(t) \big| = 0 \,;
\]

(iii)
\[
d^{-1/2} \sum_{i=2}^d C_3(x^d_i, Z_i) \xrightarrow{\mathcal{L}} \mathcal{N}\Big( 0, \frac{\ell^6 K_3^2}{2} \Big) \,,
\]
where $\xrightarrow{\mathcal{L}}$ denotes convergence in law;

(iv)
\[
\lim_{d\to\infty} \sup_{x^d \in H_d} \Big| E_Z\Big[ 1 \wedge \exp\Big( d^{-1/2} \sum_{i=2}^d C_3(x^d_i, Z_i) - \frac{\ell^6 K_3^2}{2} \Big) \Big] - 2\Phi\Big( -\frac{\ell^3 K_3}{2} \Big) \Big| = 0 \,.
\]

Proof.
(i) The proof is analogous to that of Proposition 7 and follows the same steps as that of [34, Lemma 3(a)].

(ii) The proof is analogous to that of Proposition 7 and follows the same steps as that of [34, Lemma 3(b)].

(iii) This is a straightforward consequence of (ii) and Lévy's continuity theorem (e.g. [41, Theorem 1, page 322]).

(iv) This statement follows directly from (iii) and [33, Proposition 2.4].

A.4 Proof of Theorem 1

Proof of Theorem 1.
(a) The asymptotic acceptance rate follows by combining Propositions 4–6 with part (iv) of Proposition 7, as in the proof of [34, Theorem 1]. To prove the weak convergence of the process, it suffices to show that there exist events $F^\star_d \subseteq \mathbb{R}^d$ such that, for all $t > 0$,
\[
P\big( L^d_s \in F^\star_d \text{ for all } 0 \le s \le t \big) \to 1
\]
and
\[
\lim_{d\to\infty} \sup_{x^d \in F^\star_d} \big| L_d V(x^d) - L V(x^d) \big| = 0
\]
for all $V \in C_c^\infty(\mathbb{R},\mathbb{R})$ [17, Chapter 4, Corollary 8.7]. We take $F^\star_d = F_d \cap S_d \cap T_d \cap H_d$. Then $d^{2\alpha}\pi_d((F^\star_d)^c) \to 0$ and
\[
P\big( L^d_s \in F^\star_d \text{ for all } 0 \le s \le t \big) \to 1
\]
for all fixed $t$. Combining Propositions 4–7 we obtain convergence of the generators.

To obtain the value of $a(\ell, r)$ maximizing the speed, we observe that $K_1^2(r)$ is a function of the ratio $r = c^2/\ell^{2m} = c^2/\ell$ only, so we can take $c \propto \ell^{1/2}$ to keep $r$, and hence $K_1^2(r)$, constant as $\ell$ varies. Using the same substitution as in [34, Theorem 2], we find that $h(\ell, r)$ is maximized at the unique value of $\ell$ such that $a(\ell, r) = 0.452$.
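The limiting acceptance rates in all three cases rest on the classical fact, invoked via [33, Proposition 2.4], that if the log-acceptance ratio is Gaussian with the "diffusion-limit" mean–variance pairing, $B \sim \mathcal{N}(-s^2/2, s^2)$, then $E[1 \wedge e^B] = 2\Phi(-s/2)$. A quick Monte Carlo check of this identity (our own sketch, independent of the paper's code):

```python
import numpy as np
from math import erf, sqrt

Phi = lambda u: 0.5 * (1.0 + erf(u / sqrt(2.0)))   # standard normal cdf

rng = np.random.default_rng(1)
for s in (0.5, 1.0, 2.0):
    B = rng.normal(-s * s / 2, s, size=1_000_000)  # B ~ N(-s^2/2, s^2)
    mc = np.minimum(1.0, np.exp(B)).mean()          # Monte Carlo E[1 ∧ e^B]
    assert abs(mc - 2 * Phi(-s / 2)) < 5e-3         # matches 2*Phi(-s/2)
```

Substituting $s = \ell^2 K_1(r)$ (case (a)) or $s = \ell^3 K_2(r)$, $\ell^3 K_3$ (cases (b) and (c)) recovers the expressions appearing in parts (iv) of Propositions 7, 11 and 15.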
(b) The proof is analogous to that of case (a), replacing Propositions 4, 5, 6 and 7 with Propositions 8, 9, 10 and 11. To obtain the value of $a(\ell, r)$ maximizing the speed, we observe that $K_2^2(r)$ is a function of the ratio $r = c^2/\ell^{2m} = c^2/\ell^2$ only, so we can take $c \propto \ell$ to keep $K_2^2(r)$ constant as $\ell$ varies. Using the same substitution as in [34, Theorem 2], we find that $h(\ell, r)$ is maximized at the unique value of $\ell$ such that $a(\ell, r) = 0.574$.

(c) The proof is analogous to that of case (a), replacing Propositions 4, 5, 6 and 7 with Propositions 12, 13, 14 and 15. To obtain the value of $a(\ell, r)$ maximizing the speed, we observe that $K_3^2$ is constant with respect to $r$; using the same substitution as in [34, Theorem 2], we find that $h(\ell, r)$ is maximized at the unique value of $\ell$ such that $a(\ell, r) = 0.574$.

B Numerical Experiments

B.1 Differentiable Targets

We collect here a number of numerical experiments confirming the results in Section 3.1. To do so, we consider the Gaussian distribution in Example 1 and the four algorithmic settings summarized in Table 1, which correspond to the three cases identified in Theorem 1 and to MALA.

The first plot in Figures 2–5 shows that, for values of α different from those identified in Theorem 1, the acceptance ratio a_d(ℓ, r) becomes degenerate as d increases. For the values of α identified in Theorem 1 we analyze the influence of ℓ on the acceptance a_d(ℓ, r) (second plot), obtaining, as d → +∞, the expression given in Theorem 1–(a) for Figure 2, that in Theorem 1–(b) for Figures 3 and 5, and that in Theorem 1–(c) for Figure 4.

Case   Figure   Algorithm       α     β     m     r
(a)    2        Proximal MALA   1/4   1/8   1/2   1
(b)    3        P-MALA          1/6   1/6   1     1
(c)    4        Proximal MALA   1/6   1/2   3     2
—      5        MALA            1/6   1/6   1     ≈ 0

Table 1: Algorithm settings for the simulation study on the Gaussian target.
Finally, we consider the relationship between the acceptance ratio a_d(ℓ, r) and the speed of the diffusion h(ℓ, r), approximated by the expected squared jumping distance (see, e.g., [18])
\[
\mathrm{ESJD}_d := d^{2\alpha}\, E\big[ (X^d_0 - X^d_1)^2 \big] \,.
\]
Looking at the last plot in Figures 2–5, we find that, even for relatively small values of d, the shape of the plot of ESJD_d as a function of the acceptance a_d(ℓ, r) is similar to that of the theoretical limit. This suggests that tuning the acceptance ratio to be approximately 0.452 when α = 1/4, β = 1/8, and approximately 0.574 when α = 1/6, β = m/6 with m ≥ 1, should generally guarantee high efficiency.

B.2 Laplace Target

We collect here a number of numerical experiments confirming the results for the Laplace distribution in Section 3.2. Similarly to Appendix B.1, we consider three algorithmic settings, summarized in Table 2.

Figure   Algorithm       α     β     m     r
6        MALA            1/3   1/3   1     ≈ 0
7        P-MALA          1/3   1/3   1     1
8        Proximal MALA   1/3   1     3     2

Table 2: Algorithm settings for the simulation study on the Laplace target.

The first plot in Figures 6–8 shows that for values of α ≠ 1/3 the acceptance ratio a_d(ℓ, r) becomes degenerate as d increases, while the second plot shows that, for sufficiently large d, the average acceptance ratio and the ESJD_d converge to a_L(ℓ) and h_L(ℓ) given in Theorems 2 and 3, respectively. In the case m = 3, r = 2 in Figure 8, we find that the behaviour for low values of d differs significantly from the limiting one: for values of d lower than 130 the ESJD_d achieves its maximum at a value of the acceptance a_d(ℓ, r) different from that predicted by Theorem 3. In practice, this might mean that for low-dimensional settings the recommended choice of a(ℓ, r) = 0.360 is far from optimal. Similar behaviours for small d have also been observed in the case of RWM and MALA (e.g., [40, Section 2.1]).
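Experiments of this kind can be reproduced along the following lines. The sketch below is our own (function and parameter names are ours, not the authors' code): it runs Proximal MALA on an i.i.d. product target $\pi \propto \exp(-\sum_i g(x_i))$ using the closed-form Moreau–Yosida gradient $g'(\operatorname{prox}_g^{\lambda}(x)) = (x - \operatorname{prox}_g^{\lambda}(x))/\lambda$, which equals $x/(1+\lambda)$ for the Gaussian $g(x) = x^2/2$ and $\operatorname{clip}(x/\lambda, -1, 1)$ for the Laplace $g(x) = |x|$ (soft-thresholding), and records the average acceptance rate and ESJD_d.

```python
import numpy as np

def run(g, my_grad, d, ell, alpha, m, r, n_iter=4000, seed=0):
    """Proximal MALA with sigma_d = ell*d^-alpha and lambda = r*sigma_d^(2m)/2."""
    rng = np.random.default_rng(seed)
    sig = ell * d ** (-alpha)
    lam = r * sig ** (2 * m) / 2
    drift = lambda u: -(sig ** 2 / 2) * my_grad(u, lam)  # -(sig^2/2) g'(prox(u))
    x = rng.standard_normal(d)        # rough start; adjust to the target at hand
    n_acc, jump = 0, 0.0
    for _ in range(n_iter):
        z = rng.standard_normal(d)
        y = x + drift(x) + sig * z
        # log pi(y)/pi(x) + log q(y,x)/q(x,y) for Gaussian proposal q
        log_ratio = (np.sum(g(x) - g(y))
                     - np.sum((x - y - drift(y)) ** 2) / (2 * sig ** 2)
                     + np.sum((y - x - drift(x)) ** 2) / (2 * sig ** 2))
        if np.log(rng.uniform()) < log_ratio:
            jump += np.mean((y - x) ** 2)   # per-coordinate squared jump
            x, n_acc = y, n_acc + 1
    return n_acc / n_iter, d ** (2 * alpha) * jump / n_iter  # (acc., ESJD_d)

# Case (a) on the Gaussian target (first row of Table 1):
acc, esjd = run(lambda u: u ** 2 / 2, lambda u, lam: u / (1 + lam),
                d=50, ell=1.0, alpha=0.25, m=0.5, r=1.0)
```

For the Laplace rows of Table 2 one would instead pass `g=np.abs`, `my_grad=lambda u, lam: np.clip(u / lam, -1.0, 1.0)`, `alpha=1/3` and a Laplace-distributed starting point (e.g. `rng.laplace(size=d)`). Averaging the squared jump over coordinates is an exchangeability-based variance reduction for $E[(X^d_{0,1} - X^d_{1,1})^2]$.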
Figure 2: Case (a): Proximal MALA with Gaussian target and m = 1/2, r = 1. Average acceptance rate for different choices of α (first); acceptance rate as a function of ℓ for increasing dimension d (second); ESJD_d as a function of the acceptance rate a_d(ℓ, r) (third).

Figure 3: Case (b): Proximal MALA with Gaussian target and m = 1, r = 1 (P-MALA). Average acceptance rate for different choices of α (first); acceptance rate as a function of ℓ for increasing dimension d (second); ESJD_d as a function of the acceptance rate a_d(ℓ, r) (third).

Figure 4: Case (c): Proximal MALA with Gaussian target and m = 3, r = 2. Average acceptance rate for different choices of α (first); acceptance rate as a function of ℓ for increasing dimension d (second); ESJD_d as a function of the acceptance rate a_d(ℓ, r) (third).
Figure 5: Proximal MALA with Gaussian target and m = 1, r → 0 (MALA). Average acceptance rate for different choices of α (first); acceptance rate as a function of ℓ for increasing dimension d (second); ESJD_d as a function of the acceptance rate a_d(ℓ, r) (third).

Figure 6: Proximal MALA with Laplace target and m = 1, r = 0 (sG-MALA). Average acceptance rate for different choices of α (first); acceptance rate as a function of ℓ for increasing dimension d (second); ESJD_d as a function of the acceptance rate a_d(ℓ, r) (third).

Figure 7: Proximal MALA with Laplace target and m = 1, r = 1 (P-MALA). Average acceptance rate for different choices of α (first); acceptance rate as a function of ℓ for increasing dimension d (second); ESJD_d as a function of the acceptance rate a_d(ℓ, r) (third).
Figure 8: Proximal MALA with Laplace target and m = 3, r = 2. Average acceptance rate for different choices of α (first); acceptance rate as a function of ℓ for increasing dimension d (second); ESJD_d as a function of the acceptance rate a_d(ℓ, r) (third).

C Taylor Expansions for the Results on Differentiable Targets

C.1 Coefficients of the Taylor Expansion

We collect here the coefficients of the Taylor expansions in Propositions 4, 8 and 12.

C.1.1 Case (a)

If $\alpha = 1/4$, $\beta = 1/8$ and $r > 0$, then the log-acceptance ratio (50) satisfies
\[
\phi_d(x, z) = d^{-1/2} C_2(x,z) + d^{-3/4} C_3(x,z) + d^{-1} C_4(x,z) + C_5(x,z,\sigma_d) \,,
\]
where
\[
C_2(x,z) = \frac{\ell^2}{2}\big( -r z\, g''(x) g'(x) \big) \,,
\]
and
\[
C_3(x,z) = \frac{\ell^3}{6}\Big[ \frac{z^3}{2} g'''(x) - \frac{3}{2} z^2 r\, g'(x) g'''(x) - \frac{3}{2} r z^2 [g''(x)]^2 + \frac{3}{4} z r^2 g'''(x) [g'(x)]^2 - \frac{3}{2} z\, g'(x) g''(x) + \frac{3}{2} r [g'(x)]^2 g''(x) \Big] \,,
\]
\[
C_4(x,z) = \frac{\ell^4}{24}\Big[ g^{(4)}(x)\Big( z^4 - \frac{z r^3}{2} [g'(x)]^3 - 3 z^3 r\, g'(x) + \frac{3}{2} z^2 r^2 [g'(x)]^2 \Big)
+ g'''(x)\Big( -6 z^2 g'(x) - 9 r z^3 g''(x) + 9 z^2 r^2 g'(x) g''(x) + 9 r z [g'(x)]^2 - \frac{9}{2} z r^3 [g'(x)]^2 g''(x) - \frac{3}{2} r^2 [g'(x)]^3 \Big)
+ 12 r z\, g'(x) [g''(x)]^2 + 3 g''(x) [g'(x)]^2 - 3 z^2 [g''(x)]^2 + 3 z^2 r^2 [g''(x)]^3 - 6 r^2 [g'(x) g''(x)]^2 - 3 z r^3 g'(x) [g''(x)]^3 \Big] \,,
\]
and the remainder is taken in integral form,
\[
C_5(x,z,\sigma_d) = \int_0^{\sigma_d} \frac{\partial^5}{\partial \sigma^5} R(x,z,\sigma)\Big|_{\sigma=u} \frac{(\sigma_d - u)^4}{4!}\, \mathrm{d}u \,,
\]
with the derivatives of $R_1$ and $R_2$ given in Appendix C.2.
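The moment identity $2E[C_4] + E[C_2^2] = 0$ used in Proposition 4 can be verified in closed form for the Gaussian potential $g(x) = x^2/2$ (our own check, not part of the paper): all terms involving $g'''$ and $g^{(4)}$ vanish, only $g'(x) = x$ and $g''(x) = 1$ survive, and the expectations reduce to Gaussian moments of independent $X, Z \sim \mathcal{N}(0,1)$.

```python
# Closed-form check of 2 E[C4] + E[C2^2] = 0 for g(x) = x^2/2:
# E[X^2] = E[Z^2] = 1, E[XZ] = 0, E[X^2 Z^2] = 1.
ell, r = 1.1, 0.8
# C2 = (ell^2/2)(-r Z X)  ->  E[C2^2] = (ell^4/4) r^2
EC2sq = (ell ** 2 / 2) ** 2 * r ** 2
# surviving C4 terms: 12 r Z X + 3 X^2 - 3 Z^2 + 3 r^2 Z^2 - 6 r^2 X^2 - 3 r^3 Z X
EC4 = ell ** 4 / 24 * (0 + 3 - 3 + 3 * r ** 2 - 6 * r ** 2 - 0)   # = -ell^4 r^2 / 8
assert abs(2 * EC4 + EC2sq) < 1e-12
```

This also matches the $L^1$ limit $-\ell^4 K_1^2(r)/8$ of the $d^{-1}$ term quoted in the proof of Proposition 4, with $K_1^2(r) \propto r^2$ for this target.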
C.1.2 Case (b)

If $\alpha = 1/6$, $\beta = 1/6$ and $r > 0$, then the log-acceptance ratio (50) satisfies
\[
\phi_d(x^d_i, Y_i) = d^{-1/2} C_3(x_i, Z_i) + d^{-2/3} C_4(x_i, Z_i) + d^{-5/6} C_5(x_i, Z_i) + d^{-1} C_6(x_i, Z_i) + C_7(x_i, Z_i, \sigma_d) \,,
\]
where
\[
C_3(x,z) = \frac{\ell^3}{6}\Big( \frac{1}{2} g'''(x) z^3 - \frac{3}{2} g''(x) g'(x) z (1 + 2r) \Big) \,,
\]
and
\[
C_4(x,z) = \frac{\ell^4}{24}\Big[ z^4 g^{(4)}(x) - 6 z^2 g'''(x) g'(x) (1 + r) - 3 z^2 [g''(x)]^2 (1 + 2r) + 3 g''(x) [g'(x)]^2 (1 + 2r) \Big] \,,
\]
\[
C_5(x,z) = \frac{\ell^5}{120}\Big[ \frac{3}{2} z^5 g^{(5)}(x) - 15 z^3 g^{(4)}(x) g'(x) (1 + r) + 15 z [g'(x)]^2 g'''(x) \Big( \frac{3}{2} + 3r + r^2 \Big)
+ 15 z (1 + 4r + 2r^2) g'(x) [g''(x)]^2 - 15 z^3 g''(x) g'''(x) (1 + 3r) \Big] \,,
\]
\[
C_6(x,z) = \frac{\ell^6}{720}\Big[ 2 z^6 g^{(6)}(x) - 30 (1 + r) z^4 g'(x) g^{(5)}(x) + 45 \big( 2 + 4r + r^2 \big) z^2 [g'(x)]^2 g^{(4)}(x)
+ 90 \big( r + r^2 \big) z^2 [g''(x)]^3 - 15 \big( 2 + 6r + 3r^2 \big) g'''(x) [g'(x)]^3
- 30 (1 + 4r) z^4 g''(x) g^{(4)}(x) + 45 \big( 3 + 16r + 6r^2 \big) z^2 g'(x) g''(x) g'''(x)
- \frac{45}{2} (1 + 4r) z^4 [g'''(x)]^2 - \frac{45}{2} \big( 1 + 8r + 8r^2 \big) [g'(x) g''(x)]^2 \Big] \,,
\]
and the remainder is taken in integral form,
\[
C_7(x^d_i, Z_i, \sigma_d) = \int_0^{\sigma_d} \frac{\partial^7}{\partial \sigma^7} R(x^d_i, Z_i, \sigma)\Big|_{\sigma=\epsilon} \frac{(\sigma_d - \epsilon)^6}{6!}\, \mathrm{d}\epsilon \,,
\]
with the derivatives of $R_1$ and $R_2$ given in Appendix C.2.
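As a consistency check of these coefficients (our own, for the Gaussian potential $g(x) = x^2/2$, where all derivatives of order three and higher vanish), the identity $E[C_3^2] = -2E[C_6]$ of Proposition 8 can be verified exactly: only the $[g''(x)]^3 = 1$ and $[g'(x) g''(x)]^2 = X^2$ terms of $C_6$ survive, and $C_3 = -(\ell^3/4)(1+2r)XZ$.

```python
# Check E[C3^2] = -2 E[C6] for g(x) = x^2/2 with X, Z ~ N(0,1) independent:
ell, r = 1.3, 0.7
# C3 = -(ell^3/4)(1+2r) X Z  ->  E[C3^2] = (ell^6/16)(1+2r)^2
EC3sq = ell ** 6 / 16 * (1 + 2 * r) ** 2
# surviving C6 terms: 90(r+r^2) Z^2 - (45/2)(1+8r+8r^2) X^2, with E[Z^2]=E[X^2]=1
EC6 = ell ** 6 / 720 * (90 * (r + r ** 2) - 45 / 2 * (1 + 8 * r + 8 * r ** 2))
assert abs(EC3sq + 2 * EC6) < 1e-12
```

Algebraically, $(45/2)(1 + 8r + 8r^2) - 90(r + r^2) = (45/2)(1 + 2r)^2$, so $-2E[C_6] = \ell^6 (1+2r)^2/16 = E[C_3^2]$ exactly.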
+C.1.3 Case (c)
+If α = 1/6, β = m/6 for m > 1 and r > 0, then the log-acceptance ratio (50) satisfies
+φd(x^d_i, Y_i) = d^{−1/2} C3(x_i, Z_i) + d^{−2/3} C4(x_i, Z_i) + d^{−5/6} C5(x_i, Z_i) + d^{−1} C6(x_i, Z_i) + C7(x_i, Z_i, σd),
+where
+C3(x, z) = (ℓ^3/6) [ (1/2) g′′′(x) z^3 − (3/2) g′′(x) g′(x) z ]
+and
+C4(x, z) = (ℓ^4/24) × { z^4 g^(4)(x) − 6 z^2 g′′′(x) g′(x) − 3 z^2 [g′′(x)]^2 + 3 g′′(x) [g′(x)]^2 − 12 z r g′(x) g′′(x)  if m = 3/2;
+                        z^4 g^(4)(x) − 6 z^2 g′′′(x) g′(x) − 3 z^2 [g′′(x)]^2 + 3 g′′(x) [g′(x)]^2  otherwise },
+C5(x, z) = (ℓ^5/120) × { (3/2) z^5 g^(5)(x) − 15 z^3 g^(4)(x) g′(x) + (45/2) z [g′(x)]^2 g′′′(x) + 15 z g′(x) [g′′(x)]^2 − 15 z^3 g′′(x) g′′′(x) − 30 z^2 r [g′′(x)]^2 + 30 r [g′(x)]^2 g′′(x) − 30 r z^2 g′(x) g′′′(x)  if m = 3/2;
+                         (3/2) z^5 g^(5)(x) − 15 z^3 g^(4)(x) g′(x) + (45/2) z [g′(x)]^2 g′′′(x) + 15 z g′(x) [g′′(x)]^2 − 15 z^3 g′′(x) g′′′(x) − 60 z r g′(x) g′′(x)  if m = 2;
+                         (3/2) z^5 g^(5)(x) − 15 z^3 g^(4)(x) g′(x) + (45/2) z [g′(x)]^2 g′′′(x) + 15 z g′(x) [g′′(x)]^2 − 15 z^3 g′′(x) g′′′(x)  otherwise },
+and, writing D(x, z) for the part common to all values of m,
+D(x, z) = 2 z^6 g^(6)(x) − 30 z^4 g′(x) g^(5)(x) + 90 z^2 [g′(x)]^2 g^(4)(x) − 30 g′′′(x) [g′(x)]^3 − (45/2) [g′(x) g′′(x)]^2 − 30 z^4 g′′(x) g^(4)(x) + 135 z^2 g′(x) g′′(x) g′′′(x) − (45/2) z^4 [g′′′(x)]^2,
+C6(x, z) = (ℓ^6/720) [ D(x, z) + E_m(x, z) ], with
+E_m(x, z) = −90 z^3 r g′(x) g^(4)(x) + 540 r z g′(x) [g′′(x)]^2 + 180 r z g′(x) g′′(x) + 270 r z g′′′(x) [g′(x)]^2 − 45 z r g′′(x) g′′′(x) + 90 r z^3 g′′′(x)  if m = 3/2,
+E_m(x, z) = −180 z^2 r g′(x) g′′′(x) − 180 z^2 r [g′′(x)]^2 + 180 r g′′(x) [g′(x)]^2  if m = 2,
+E_m(x, z) = −360 z r g′(x) g′′(x)  if m = 5/2,
+E_m(x, z) = 0  otherwise,
+and the integral form of the remainder is
+C7(x^d_i, Z_i, σd) = ∫_0^{σd} (∂^7/∂σ^7) R(x^d_i, Z_i, σ)|_{σ=ϵ} ((σd − ϵ)^6/6!) dϵ,
+with ϵ between 0 and σd; the derivatives of R1 and R2 are given in Appendix C.2.
+C.2 Taylor Expansions of the Log-acceptance Ratio
+C.2.1 R1
+Recall that R1(x, z, σ) = −g( x − (σ^2/2) g′(prox_g^{σ^{2m}r/2}(x)) + σz ) + g(x). We compute the derivatives of R1 w.r.t. σ evaluated at 0, abbreviating prox(x) := prox_g^{σ^{2m}r/2}(x):
+R1(x, z, 0) = 0,
+(∂R1/∂σ)(x, z, σ)|σ=0 = −g′(x) z,
+(∂^2R1/∂σ^2)(x, z, σ)|σ=0 = −z^2 g′′(x) + [g′(x)]^2,
+(∂^3R1/∂σ^3)(x, z, σ)|σ=0 = −z^3 g′′′(x) + 3 g′(x) g′′(x) ( z + (∂/∂σ) prox(x)|σ=0 ),
+(∂^4R1/∂σ^4)(x, z, σ)|σ=0 = −z^4 g^(4)(x) + 6 z^2 g′′′(x) g′(x) − 3 g′′(x) [g′(x)]^2 + 12 z [g′′(x)]^2 (∂/∂σ) prox(x)|σ=0 + 6 g′(x) [ g′′(x) (∂^2/∂σ^2) prox(x)|σ=0 + ((∂/∂σ) prox(x)|σ=0)^2 g′′′(x) ].
+In addition, for m > 1/2 we will also use
+(∂^5R1/∂σ^5)(x, z, σ)|σ=0 = −z^5 g^(5)(x) + 10 z^3 g^(4)(x) g′(x) − 15 z g′′′(x) [g′(x)]^2 + 30 z [g′′(x)]^2 (∂^2/∂σ^2) prox(x)|σ=0 + 10 g′(x) g′′(x) (∂^3/∂σ^3) prox(x)|σ=0,
+(∂^6R1/∂σ^6)(x, z, σ)|σ=0 = −z^6 g^(6)(x) + 15 z^4 g^(5)(x) g′(x) − 45 z^2 g^(4)(x) [g′(x)]^2 + 15 g′′′(x) [g′(x)]^3 − 90 g′(x) [g′′(x)]^2 (∂^2/∂σ^2) prox(x)|σ=0 − 60 z g′′(x) (∂^3/∂σ^3) prox(x)|σ=0 + 90 g′′(x) g′′′(x) (∂^2/∂σ^2) prox(x)|σ=0 + 15 g′(x) [ g′′(x) (∂^4/∂σ^4) prox(x)|σ=0 + 3 ((∂^2/∂σ^2) prox(x)|σ=0)^2 g′′′(x) ].
+C.2.2 R2
+Recall that
+R2(x, z, σ) = (1/2) z^2 − (1/2) [ z − (σ/2) g′( prox_g^{σ^{2m}r/2}( x + σz − (σ^2/2) g′(prox_g^{σ^{2m}r/2}(x)) ) ) − (σ/2) g′(prox_g^{σ^{2m}r/2}(x)) ]^2.
+We compute the derivatives of R2 w.r.t.
σ evaluated at 0: +R2(x, z, 0) = 0, +∂R2 +∂σ (x, z, σ)|σ=0 = zg′(x), +∂2R2 +∂σ2 (x, z, σ)|σ=0 = − [g′(x)]2 + zg′′(x) +� +z + 2 ∂ +∂σ proxσ2mr/2 +g +(x)|σ=0 +� +, +∂3R2 +∂σ3 (x, z, σ)|σ=0 = −3g′(x)g′′(x) +� +z + 2 ∂ +∂σ proxσ2mr/2 +g +(x)|σ=0 +� ++ 3 +2z +�� +z + ∂ +∂σ proxσ2mr/2 +g +(x)|σ=0 +�2 +g′′′(x) ++ +� +−g′(x) + 2z +∂2 +∂σ∂x proxσ2mr/2 +g +(x)|σ=0 + ∂2 +∂σ2 proxσ2mr/2 +g +(x)|σ=0 +� +g′′(x) ++ +� ∂ +∂σ proxσ2mr/2 +g +(x)|σ=0 +�2 +g′′′(x) + ∂2 +∂σ2 proxσ2mr/2 +g +(x)|σ=0g′′(x) +� +, +∂4R2 +∂σ4 (x, z, σ)|σ=0 = −3 [g′′(x)]2 +� +z + 2 ∂ +∂σ proxσ2mr/2 +g +(x)|σ=0 +�2 +− 6g′(x) +�� +z + ∂ +∂σ proxσ2mr/2 +g +(x)|σ=0 +�2 +g′′′(x) ++ +� +−g′(x) + 2z +∂2 +∂σ∂x proxσ2mr/2 +g +(x)|σ=0 + ∂2 +∂σ2 proxσ2mr/2 +g +(x)|σ=0 +� +g′′(x) ++ +� ∂ +∂σ proxσ2mr/2 +g +(x)|σ=0 +�2 +g′′′(x) + ∂2 +∂σ2 proxσ2mr/2 +g +(x)|σ=0g′′(x) +� ++ 2zg(4)(x) +� +z + ∂ +∂σ proxσ2mr/2 +g +(x)|σ=0 +�3 ++ 6zg′′′(x) +� +z + ∂ +∂σ proxσ2mr/2 +g +(x)|σ=0 +� +× +� +−g′(x) + 2z +∂2 +∂σ∂x proxσ2mr/2 +g +(x)|σ=0 + ∂2 +∂σ2 proxσ2mr/2 +g +(x)|σ=0 +� ++ 2zg′′(x) +� +−3g′′(x) ∂ +∂σ proxσ2mr/2 +g +(x)|σ=0 − 3g′(x) ∂2 +∂σ∂x proxσ2mr/2 +g +(x)|σ=0 ++3z2 +∂3 +∂σ∂x2 proxσ2mr/2 +g +(x)|σ=0 + 3z +∂3 +∂σ2∂x proxσ2mr/2 +g +(x)|σ=0 ++ ∂3 +∂σ3 proxσ2mr/2 +g +(x)|σ=0 +� ++ 2zg(4)(x) +� ∂ +∂σ proxσ2mr/2 +g +(x)|σ=0 +�3 ++ 2zg′′(x) ∂3 +∂σ3 proxσ2mr/2 +g +(x)|σ=0 ++ 6z ∂2 +∂σ2 proxσ2mr/2 +g +(x)|σ=0 +∂ +∂σ proxσ2mr/2 +g +(x)|σ=0g′′′(x). 
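The low-order derivatives of R1 and R2 listed above can be cross-checked by finite differences. The sketch below verifies ∂^2R1/∂σ^2|σ=0 = −z^2 g′′(x) + [g′(x)]^2 and ∂R2/∂σ|σ=0 = z g′(x) for m = 1 (so m > 1/2); the smooth potential g(x) = log cosh x and the parameter values are assumptions made for this check only.

```python
import numpy as np

# Illustrative smooth potential g(x) = log cosh x (an assumption for this check)
g, gp = (lambda x: np.log(np.cosh(x))), np.tanh
gpp = lambda x: 1 - np.tanh(x) ** 2

r, m = 1.0, 1.0                        # m = 1 > 1/2

def prox(u, s):
    lam = (s * s) ** m * r / 2         # lambda = sigma^{2m} r / 2
    v = u
    for _ in range(100):               # fixed-point solve of v = u - lam g'(v)
        v = u - lam * gp(v)
    return v

def R1(x, z, s):
    return g(x) - g(x - 0.5 * s ** 2 * gp(prox(x, s)) + s * z)

def R2(x, z, s):
    y = x + s * z - 0.5 * s ** 2 * gp(prox(x, s))
    return 0.5 * z ** 2 - 0.5 * (z - 0.5 * s * gp(prox(y, s))
                                 - 0.5 * s * gp(prox(x, s))) ** 2

x, z, h = 0.8, 1.3, 1e-3
d2R1 = (R1(x, z, h) - 2 * R1(x, z, 0.0) + R1(x, z, -h)) / h ** 2
dR2 = (R2(x, z, h) - R2(x, z, -h)) / (2 * h)
print(d2R1, -z ** 2 * gpp(x) + gp(x) ** 2)   # second derivative of R1 at 0
print(dR2, z * gp(x))                         # first derivative of R2 at 0
```

Central differences are accurate to O(h^2) here, so the printed pairs should match to roughly four decimal places.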
+68 + +We then proceed to get the derivatives needed for m > 1/2: +∂5R2 +∂σ5 (x, z, σ)|σ=0 = −15zg′′(x) +� +z2g′′′(x) + +� +−g′(x) + 2 ∂2 +∂σ2 proxσ2mr/2 +g +(x)|σ=0 +� +g′′(x) +� +− 5g′(x) +� +2z3g(4)(x) + 6zg′′′(x) +� +−g′(x) + ∂2 +∂σ2 proxσ2mr/2 +g +(x)|σ=0 +� ++2g′′(x) +� +3z +∂3 +∂σ2∂x proxσ2mr/2 +g +(x)|σ=0 + ∂3 +∂σ3 proxσ2mr/2 +g +(x)|σ=0 +� ++2g′′(x) ∂3 +∂σ3 proxσ2mr/2 +g +(x)|σ=0 +� ++ 5 +2g(5)(x)z5 + 5 +2zg′′(x) ∂4 +∂σ4 proxσ2mr/2 +g +(x)|σ=0 ++ 15g(4)(x)z3 +� +−g′(x) + ∂2 +∂σ2 proxσ2mr/2 +g +(x)|σ=0 +� ++ 15 +2 zg′′′(x) +� +−g′(x) + ∂2 +∂σ2 proxσ2mr/2 +g +(x)|σ=0 +�2 ++ 15 +2 zg′′′(x) +� ∂2 +∂σ2 proxσ2mr/2 +g +(x)|σ=0 +�2 ++ 10z2g′′′(x) +� +3z +∂3 +∂σ2∂x proxσ2mr/2 +g +(x)|σ=0 + ∂3 +∂σ3 proxσ2mr/2 +g +(x)|σ=0 +� ++ 5 +2zg′′(x) +� ∂4 +∂σ4 proxσ2mr/2 +g +(x)|σ=0 − 6g′′(x) ∂2 +∂σ2 proxσ2mr/2 +g +(x)|σ=0 +−6g′(x) +∂3 +∂σ2∂x proxσ2mr/2 +g +(x)|σ=0 + 6z2 +∂4 +∂σ2∂x2 proxσ2mr/2 +g +(x)|σ=0 ++4z +∂4 +∂σ3∂x proxσ2mr/2 +g +(x)|σ=0 +� +and +∂6R2 +∂σ6 (x, z, σ)|σ=0 = −45 +2 +� +z2g′′′(x) + +� +−g′(x) + 2 ∂2 +∂σ2 proxσ2mr/2 +g +(x)|σ=0 +� +g′′(x) +�2 +− 15zg′′(x) +� +2z3g(4)(x) + 6zg′′′(x) +� +−g′(x) + ∂2 +∂σ2 proxσ2mr/2 +g +(x)|σ=0 +� ++ 2g′′(x) +� +3z +∂3 +∂σ2∂x proxσ2mr/2 +g +(x)|σ=0 + ∂3 +∂σ3 proxσ2mr/2 +g +(x)|σ=0 +� ++2g′′(x) ∂3 +∂σ3 proxσ2mr/2 +g +(x)|σ=0 +� ++ 6g′(x)A(5) − zA(6), +69 + +with +A(5) = −5 +2g(5)(x)z4 − 5 +2g′′(x) ∂4 +∂σ4 proxσ2mr/2 +g +(x)|σ=0 +− 15g(4)(x)z2 +� +−g′(x) + ∂2 +∂σ2 proxσ2mr/2 +g +(x)|σ=0 +� +− 15 +2 g′′′(x) +� +−g′(x) + ∂2 +∂σ2 proxσ2mr/2 +g +(x)|σ=0 +�2 +− 15 +2 g′′′(x) +� ∂2 +∂σ2 proxσ2mr/2 +g +(x)|σ=0 +�2 +− 10zg′′′(x) +� +3z +∂3 +∂σ2∂x proxσ2mr/2 +g +(x)|σ=0 + ∂3 +∂σ3 proxσ2mr/2 +g +(x)|σ=0 +� +− 5 +2g′′(x) +� ∂4 +∂σ4 proxσ2mr/2 +g +(x)|σ=0 − 6g′′(x) ∂2 +∂σ2 proxσ2mr/2 +g +(x)|σ=0 +−6g′(x) +∂3 +∂σ2∂x proxσ2mr/2 +g +(x)|σ=0 ++6z2 +∂4 +∂σ2∂x2 proxσ2mr/2 +g +(x)|σ=0 + 4z +∂4 +∂σ3∂x proxσ2mr/2 +g +(x)|σ=0 +� +, +A(6) = −3 +� +10g′′′(x) ∂2 +∂σ2 proxσ2mr/2 +g +(x)|σ=0 +∂3 +∂σ3 proxσ2mr/2 +g +(x)|σ=0 ++g′′(x) ∂5 +∂σ5 proxσ2mr/2 +g 
+(x)|σ=0 + g^(6)(x) z^5
+ + 10 g^(5)(x) z^3 ( −g′(x) + (∂^2/∂σ^2) prox(x)|σ=0 )
+ + 15 z g^(4)(x) ( −g′(x) + (∂^2/∂σ^2) prox(x)|σ=0 )^2
+ + 10 g^(4)(x) z^2 ( (∂^3/∂σ^3) prox(x)|σ=0 + 3z (∂^3/∂σ^2∂x) prox(x)|σ=0 )
+ + 10 g′′′(x) ( −g′(x) + (∂^2/∂σ^2) prox(x)|σ=0 ) × ( (∂^3/∂σ^3) prox(x)|σ=0 + 3z (∂^3/∂σ^2∂x) prox(x)|σ=0 )
+ + 5 g′′′(x) z ( −6 g′′(x) (∂^2/∂σ^2) prox(x)|σ=0 − 6 g′(x) (∂^3/∂σ^2∂x) prox(x)|σ=0 + 6 z^2 (∂^4/∂σ^2∂x^2) prox(x)|σ=0 + 3z (∂^4/∂σ^3∂x) prox(x)|σ=0 + (∂^4/∂σ^4) prox(x)|σ=0 )
+ + g′′(x) ( −10 g′′(x) (∂^3/∂σ^3) prox(x)|σ=0 − 10 g′(x) (∂^4/∂σ^3∂x) prox(x)|σ=0 + 5z (∂^5/∂σ^4∂x) prox(x)|σ=0 + 10 z^2 (∂^5/∂σ^3∂x^2) prox(x)|σ=0 + 10 z^3 (∂^5/∂σ^2∂x^3) prox(x)|σ=0 − 30 g′(x) z (∂^4/∂σ^2∂x^2) prox(x)|σ=0 + (∂^5/∂σ^5) prox(x)|σ=0 ) ].
+C.3 Derivatives of the Proximity Map for Differentiable Targets
+Recall that, in the differentiable case, prox_g^{σ^{2m}r/2}(x) = −(σ^{2m}r/2) g′(prox_g^{σ^{2m}r/2}(x)) + x; then, writing prox(x) for prox_g^{σ^{2m}r/2}(x),
+(∂/∂σ) prox(x)|σ=0 = { −(r/2) g′(x) if m = 1/2; 0 if m > 1/2; ∞ otherwise },
+(∂^2/∂σ^2) prox(x)|σ=0 = { (r^2/2) g′(x) g′′(x) if m = 1/2; −r g′(x) if m = 1; 0 if m > 1; ∞ otherwise },
+(∂^3/∂σ^3) prox(x)|σ=0 = { −(3r^3/8) g′′′(x) [g′(x)]^2 − (3r^3/4) g′(x) [g′′(x)]^2 if m = 1/2; −3r g′(x) if m = 3/2; 0 if m = 1 or m > 3/2; ∞ otherwise },
+(∂^4/∂σ^4) prox(x)|σ=0 = { finite if m = 1/2; 6r^2 g′(x) g′′(x) if m = 1; −12r g′(x) if m = 2; 0 if m = 3/2 or m > 2; ∞ otherwise },
+(∂^5/∂σ^5) prox(x)|σ=0 = { finite if m = 1/2; −60r g′(x) if m = 5/2; 0 if m = 1, 3/2, 2 or m > 5/2; ∞ otherwise },
+and
+(∂/∂x) prox(x)|σ=0 = 1,  (∂^k/∂x^k) prox(x)|σ=0 = 0
+for all integers k > 1.
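The first entry of the table above can be checked by a one-sided finite difference in σ at 0: for m = 1/2 the derivative of the proximity map is −(r/2) g′(x), while for m > 1/2 it vanishes. The sketch below uses the illustrative smooth potential g(x) = log cosh x (an assumption; any smooth g works).

```python
import numpy as np

gp = np.tanh                 # g'(x) for the illustrative potential g = log cosh
x, r, h = 0.7, 2.0, 1e-6

def prox(u, lam):
    v = u
    for _ in range(100):     # fixed-point solve of v = u - lam g'(v)
        v = u - lam * gp(v)
    return v

# m = 1/2: lambda = sigma r / 2, so the derivative at sigma = 0
# should be -(r/2) g'(x)
fd_half = (prox(x, h * r / 2) - prox(x, 0.0)) / h
# m = 1: lambda = sigma^2 r / 2, so the derivative at sigma = 0 should vanish
fd_one = (prox(x, h ** 2 * r / 2) - prox(x, 0.0)) / h
print(fd_half, -(r / 2) * gp(x))
print(fd_one)
```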
For the mixed derivatives we have +∂2 +∂σ∂x proxσ2mr/2 +g +(x)|σ=0 = +� +� +� +� +� +− r +2g′′(x) +if m = 1/2 +0 +if m > 1/2 +∞ +otherwise +∂3 +∂σ2∂x proxσ2mr/2 +g +(x)|σ=0 = +� +� +� +� +� +� +� +� +� +r2 +2 [g′′(x)]2 + r2 +2 g′(x)g′′′(x) +if m = 1/2 +−rg′′(x) +if m = 1 +0 +if m > 1/2 +∞ +otherwise +∂4 +∂σ3∂x proxσ2mr/2 +g +(x)|σ=0 = +� +� +� +� +� +� +� +� +� +< ∞ +if m = 1/2 +−3rg′′(x) +if m = 3/2 +0 +if m = 1, m > 3/2 +∞ +otherwise +∂5 +∂σ4∂x proxσ2mr/2 +g +(x)|σ=0 = +� +� +� +� +� +� +� +� +� +� +� +� +� +� +� +< ∞ +if m = 1/2 +6r2 � +g′(x)g′′′(x) + [g′′(x)]2� +if m = 1 +−12rg′′(x) +if m = 2 +0 +if m = 3/2, m > 2 +∞ +otherwise +72 + +and +∂3 +∂σ∂x2 proxσ2mr/2 +g +(x)|σ=0 = +� +� +� +� +� +− r +2g′′′(x) +if m = 1/2 +0 +if m > 1/2 +∞ +otherwise +∂4 +∂σ∂x3 proxσ2mr/2 +g +(x)|σ=0 = +� +� +� +� +� +< ∞ +if m = 1/2 +0 +if m > 1/2 +∞ +otherwise +∂4 +∂σ2∂x2 proxσ2mr/2 +g +(x)|σ=0 = +� +� +� +� +� +� +� +� +� +< ∞ +if m = 1/2 +−rg′′′(x) +if m = 1 +0 +if m > 1/2 +∞ +otherwise +∂5 +∂σ3∂x2 proxσ2mr/2 +g +(x)|σ=0 = +� +� +� +� +� +� +� +� +� +< ∞ +if m = 1/2 +−3rg′′′(x) +if m = 3/2 +0 +if m = 1, m > 3/2 +∞ +otherwise +∂5 +∂σ2∂x3 proxσ2mr/2 +g +(x)|σ=0 = +� +� +� +� +� +� +� +� +� +< ∞ +if m = 1/2 +−rg(4)(x) +if m = 1 +0 +if m > 1 +∞ +otherwise +D +Moments and Integrals for the Laplace Distribution +D.1 +Moments of Acceptance Ratio for the Laplace Distribution +The indicator functions in the definition of φd identify four different regions: +R1 := +� +(x, z) : |x| ≤ σ2mr/2 ∧ +���� +� +1 − +1 +σ2(m−1)r +� +x + σz +���� ≤ σ2mr/2 +� +, +R2 := +� +(x, z) : |x| > σ2mr/2 ∧ +����x − σ2 +2 sgn(x) + σz +���� ≤ σ2mr/2 +� +, +R3 := +� +(x, z) : |x| ≤ σ2mr/2 ∧ +���� +� +1 − +1 +σ2(m−1)r +� +x + σz +���� > σ2mr/2 +� +, +R4 := +� +(x, z) : |x| > σ2mr/2 ∧ +����x − σ2 +2 sgn(x) + σz +���� > σ2mr/2 +� +, +73 + +with corresponding acceptance ratios +φ1 +d(x, z) = |x| − +���� +� +1 − +1 +σ2(m−1)r +� +x + σz +���� + z2 +2 +− +1 +2σ2 +�� +2 +σ2(m−1)r − +1 +σ4(m−1)r2 +� +x − +� +1 − +1 +σ2(m−1)r 
+� +σz +�2 +φ2 +d(x, z) = |x| − +����x − σ2 +2 sgn(x) + σz +���� + z2 +2 +− +1 +2σ2 +� +1 +σ2(m−1)rx + +� +1 − +1 +σ2(m−1)r +� �σ2 +2 sgn(x) − σz +��2 +φ3 +d(x, z) = |x| − +���� +� +1 − +1 +σ2(m−1)r +� +x + σz +���� + z2 +2 +− +1 +2σ2 +� +1 +σ2(m−1)rx − σz + σ2 +2 sgn +�� +1 − +1 +σ2(m−1)r +� +x + σz +��2 +φ4 +d(x, z) = |x| − +����x − σ2 +2 sgn(x) + σz +���� + z2 +2 +− +1 +2σ2 +�σ2 +2 sgn(x) − σz + σ2 +2 sgn +� +x − σ2 +2 sgn(x) + σz +��2 +. +Let us denote +A1 := +� +x : 0 ≤ x ≤ σ2mr +2 +� +, +A2 := +� +x : −σ2mr +2 +≤ x ≤ 0 +� +, +A3 := +� +x : x > σ2mr +2 +� +, +A4 := +� +x : x < −σ2mr +2 +� +, +and +B1 := +� +z : 0 ≤ +� +1 − +1 +σ2(m−1)r +� +x + σz ≤ σ2mr +2 +� +, +B2 := +� +z : −σ2mr +2 +≤ +� +1 − +1 +σ2(m−1)r +� +x + σz ≤ 0 +� +, +B3 := +� +z : +� +1 − +1 +σ2(m−1)r +� +x + σz > σ2mr +2 +� +, +B4 := +� +z : +� +1 − +1 +σ2(m−1)r +� +x + σz < −σ2mr +2 +� +, +74 + +and +C1 := +� +(x, z) : 0 ≤ x − σ2 +2 sgn(x) + σz ≤ σ2mr +2 +� +, +C2 := +� +(x, z) : −σ2mr +2 +≤ x − σ2 +2 sgn(x) + σz ≤ 0 +� +C3 := +� +(x, z) : x − σ2 +2 sgn(x) + σz > σ2mr +2 +� +, +C4 := +� +(x, z) : x − σ2 +2 sgn(x) + σz < −σ2mr +2 +� +, +so that +R1 = (A1 ∪ A2) ∩ (B1 ∪ B2), +R2 = (A3 ∪ A4) ∩ (C1 ∪ C2), +R3 = (A1 ∪ A2) ∩ (B3 ∪ B4), +R4 = (A3 ∪ A4) ∩ (C3 ∪ C4). +Proposition 16. Take X a Laplace random variable and Z a standard normal random variable +independent of X, then if σ2 = ℓ2d−2/3, we have +lim +d→+∞ dE [φd(X, Z)] = − +ℓ3 +3 +√ +2π. +Proof. Taking expectations of φi +d1Ri for i = 1, . . . 
, 4 and exploiting the symmetry of the laws of X +75 + +and Z, we can write +E +� +φ1 +d(X, Z)1R1(X, Z) +� += 2E +�� +1 +σ2(m−1)rX − σZ +� +1A1(X)1B1(X, Z) +� ++ 2E +�� +2X − +1 +σ2(m−1)rX + σZ +� +1A1(X)1B2(X, Z) +� ++ 2E +�� +Z2 +2 − +1 +2σ2 +�� +2 +σ2(m−1)r − +1 +σ4(m−1)r2 +� +X − +� +1 − +1 +σ2(m−1)r +� +σZ +�2� +×1A1(X)1B1∪B2(X, Z)] , +E +� +φ2 +d(X, Z)1R2(X, Z) +� += 2E +��σ2 +2 − σZ +� +1A3(X)1C1(X, Z) +� ++ 2E +�� +2X − σ2 +2 + σZ +� +1A3(X)1C2(X, Z) +� ++ 2E +�� +Z2 +2 − +1 +2σ2 +� +1 +σ2(m−1)rX + +� +1 − +1 +σ2(m−1)r +� �σ2 +2 − σZ +��2� +×1A3(X)1C1∪C2(X, Z)] , +E +� +φ3 +d(X, Z)1R3(X, Z) +� += 2E +�� +1 +σ2(m−1)rX − σZ +� +1A1(X)1B3(X, Z) +� ++ 2E +�� +2X − +1 +σ2(m−1)rX + σZ +� +1A1(X)1B4(X, Z) +� ++ 2E +�� +Z2 +2 − +1 +2σ2 +� +1 +σ2(m−1)rX − σZ + σ2 +2 +�2� +1A1(X)1B3(X, Z) +� ++ 2E +�� +Z2 +2 − +1 +2σ2 +� +1 +σ2(m−1)rX − σZ − σ2 +2 +�2� +1A1(X)1B4(X, Z) +� +, +E +� +φ4 +d(X, Z)1R4(X, Z) +� += 2E +�� +2X − σ2 +2 + σZ +� +1A3(X)1C4(X, Z) +� +. +Using the integrals in Appendix D.4 and Lebesgue’s dominated convergence theorem, we find that +76 + +for α = β = 1/3 and r ≥ 0 +lim +d→+∞ dE +� +φ1 +d(X, Z) +� += 0 +lim +d→+∞ dE +� +φ2 +d(X, Z) +� += −2 ℓ3r +4 +√ +2π +� 0 +−∞ +e−z2/2zdz = +ℓ3r +2 +√ +2π +lim +d→+∞ dE +� +φ3 +d(X, Z) +� += 3ℓ3r +8 +√ +2π +� 0 +−∞ +e−z2/2zdz − +ℓ3r +8 +√ +2π +� +∞ +0 +e−z2/2zdz = − ℓ3r +2 +√ +2π +lim +d→+∞ dE +� +φ4 +d(X, Z) +� += +ℓ3 +6 +√ +2π +� 0 +−∞ +e−z2/2z3dz = − +ℓ3 +3 +√ +2π, +which gives +lim +d→+∞ dE [φd(X, Z)] = +lim +d→+∞ d +� +E +� +φ1 +d(X, Z) +� ++ E +� +φ2 +d(X, Z) +� ++ E +� +φ3 +d(X, Z) +� ++ E +� +φ4 +d(X, Z) +�� += − +ℓ3 +3 +√ +2π. 
+For α = 1/3, β = m/3 for m > 1 and r ≥ 0 we have +lim +d→+∞ dE +� +φ1 +d(X, Z) +� += 0 +lim +d→+∞ dE +� +φ2 +d(X, Z) +� += 0 +lim +d→+∞ dE +� +φ3 +d(X, Z) +� += 0 +lim +d→+∞ dE +� +φ4 +d(X, Z) +� += +ℓ3 +6 +√ +2π +� 0 +−∞ +e−z2/2z3dz = − +ℓ3 +3 +√ +2π, +which gives +lim +d→+∞ dE [φd(X, Z)] = +lim +d→+∞ d +� +E +� +φ1 +d(X, Z) +� ++ E +� +φ2 +d(X, Z) +� ++ E +� +φ3 +d(X, Z) +� ++ E +� +φ4 +d(X, Z) +�� += − +ℓ3 +3 +√ +2π. +Proposition 17. Take X a Laplace random variable and Z a standard normal random variable +independent of X, then if σ2 = ℓ2d−2/3 +lim +d→+∞ d Var (φd(X, Z)) = +2ℓ3 +3 +√ +2π. +Proof. As a consequence of the previous Proposition we have +lim +d→+∞ dE [φd(X, Z)]2 = 0. +77 + +Then, because Rj ∩ Ri = ∅ for all j ̸= i, we have that +E +� +φ(X, Z)2� += E +� +φ1 +d(X, Z)2R1(X, Z) +� ++ E +� +φ2 +d(X, Z)2R2(X, Z) +� ++ E +� +φ3 +d(X, Z)2R3(X, Z) +� ++ E +� +φ4 +d(X, Z)2R4(X, Z) +� +, +and, exploiting again the symmetry of the laws of X and Z, we have +E +� +φ1 +d(X, Z)2R1(X, Z) +� += 2E +� +� +� +1 +σ2(m−1)rX − σZ + Z2 +2 − +1 +2σ2 +�� +2 +σ2(m−1)r − +1 +σ4(m−1)r2 +� +X − +� +1 − +1 +σ2(m−1)r +� +σZ +�2�2 +×1A1(X)1B1(X, Z)] , ++ 2E +� +� +� +2X − +1 +σ2(m−1)rX − σZ + Z2 +2 − +1 +2σ2 +�� +2 +σ2(m−1)r − +1 +σ4(m−1)r2 +� +X − +� +1 − +1 +σ2(m−1)r +� +σZ +�2�2 +×1A1(X)1B2(X, Z)] , +E +� +φ2 +d(X, Z)21R2(X, Z) +� += 2E +� +� +� +σ2 +2 − σZ + Z2 +2 − +1 +2σ2 +� +1 +σ2(m−1)rX + +� +1 − +1 +σ2(m−1)r +� �σ2 +2 − σZ +��2�2 +1A3(X)1C1(X, Z) +� +� ++ 2E +� +� +� +2X − σ2 +2 + σZ + Z2 +2 − +1 +2σ2 +� +1 +σ2(m−1)rX + +� +1 − +1 +σ2(m−1)r +� �σ2 +2 − σZ +��2�2 +×1A1(X)1C2(X, Z)] , +E +� +φ3 +d(X, Z)21R3(X, Z) +� += 2E +� +� +� +1 +σ2(m−1)rX − σZ + Z2 +2 − +1 +2σ2 +� +1 +σ2(m−1)rX − σZ + σ2 +2 +�2�2 +1A1(X)1B3(X, Z) +� +� ++ 2E +� +� +� +2X − +1 +σ2(m−1)rX + σZ + Z2 +2 − +1 +2σ2 +� +1 +σ2(m−1)rX − σZ − σ2 +2 +�2�2 +1A1(X)1B4(X, Z) +� +� , +E +� +φ4 +d(X, Z)21R4(X, Z) +� += 2E +�� +2X − σ2 +2 + σZ +�2 +1A3(X)1C4(X, Z) +� +. 
+Proceeding as for Proposition 16, using the integrals in Appendix D.4 and Lebesgue's dominated
+convergence theorem, we can then show that for α = 1/3, β = m/3 with m ≥ 1 and r ≥ 0,
+lim_{d→+∞} d Var (φd(X, Z)) = 2ℓ^3/(3√(2π)).
+Proposition 18. Take X a Laplace random variable and Z a standard normal random variable
+independent of X; then, if σ^2 = ℓ^2 d^{−2/3}, we have
+lim_{d→+∞} d E[φd(X, Z)^3] = 0.
+Proof. Following the same structure as the previous propositions, we have that
+E[φd(X, Z)^3] = E[φ1_d(X, Z)^3 1_{R1}(X, Z)] + E[φ2_d(X, Z)^3 1_{R2}(X, Z)]
+ + E[φ3_d(X, Z)^3 1_{R3}(X, Z)] + E[φ4_d(X, Z)^3 1_{R4}(X, Z)];
+exploiting again the symmetry of the laws of X and Z, the integrals in Appendix D.4, and the
+dominated convergence theorem, we can then show that
+lim_{d→+∞} d E[φd(X, Z)^3] = 0.
+D.2 Bound on Second Moment of Acceptance Ratio for the Laplace Distribution
+Lemma 3. Let Z be a standard normal random variable and σ = ℓ/d^α for α = 1/3. Then, there
+exists a constant C > 0 such that for all a ∈ R and d ∈ N:
+E[φd(a, Z)^2] ≤ C/d^{2α}.
+Proof. We consider the case a ≥ 0 and r ≥ σ^{−2(m−1)} only; all the other cases follow from identical
+arguments. As in the derivation of the moments of φd in Appendix D.1, we distinguish four regions.
+We recall that σ = ℓ/d^α for α = 1/3, so that, for d large enough, σ^{p+1} ≤ σ^p for all p ∈ N.
Take r ≥ σ−2(m−1), for R1, +79 + +we have, using H¨older’s inequality multiple times, +E +� +φ1 +d(a, Z)21R1(a, Z) +� += E +� +φ1 +d(a, Z)21B1∪B2(a, Z) +� +≤ Cσ2 +� σ2m−1r/2−(1−1/σ2(m−1)r)a +−(1−1/σ2(m−1)r)a/σ +e−z2/2 +√ +2π +�� +1 +σ2m−1ra − z +�2 ++ +� +z2 +2σ − +1 +2σ3 +�� +2 +σ2(m−1)r − +1 +σ4(m−1)r2 +� +a − +� +1 − +1 +σ2(m−1)r +� +σz +�2�2� +� dz ++ Cσ2 +� −(1−1/σ2(m−1)r)a/σ +−σ2m−1r/2−(1−1/σ2(m−1)r)a +e−z2/2 +√ +2π +��2a +σ − +1 +σ2m−1ra + z +�2 ++ +� +z2 +2σ − +1 +2σ3 +�� +2 +σ2(m−1)r − +1 +σ4(m−1)r2 +� +a − +� +1 − +1 +σ2(m−1)r +� +σz +�2�2� +� dz +≤ Cσ2 +� +∞ +−∞ +e−z2/2 +√ +2π +� +4 +� a +σ +�2 ++ 2 +� +1 +σ2m−1ra +�2 ++ 2z2 +� +dz ++ Cσ2 +� σ2m−1r/2−(1−1/σ2(m−1)r)a +−σ2m−1r/2−(1−1/σ2(m−1)r)a +e−z2/2 +√ +2π +× +� +z2 +2σ − +1 +2σ3 +�� +2 +σ2(m−1)r − +1 +σ4(m−1)r2 +� +a − +� +1 − +1 +σ2(m−1)r +� +σz +�2�2 +≤ Cσ2 + Cσ2 +� σ2m−1r/2−(1−1/σ2(m−1)r)a +−σ2m−1r/2−(1−1/σ2(m−1)r)a +e−z2/2 +√ +2π +× +� +z4 +4σ2 + +1 +4σ6 +�� +2 +σ2(m−1)r − +1 +σ4(m−1)r2 +�4 +a4 + +� +1 − +1 +σ2(m−1)r +�4 +σ4z4 +�� +dz +≤ Cσ2, +where we used the fact that the moments of Z are bounded and a ≤ σ2mr/2 for the first term, +and the fact that z ≤ σ2m−1r/2 for the second one. +Proceeding as above, for R3, a > 0 and +80 + +r ≥ σ−2(m−1), we have +E +� +φ3 +d(a, Z)21R3(a, Z) +� += E +� +φ3 +d(a, Z)21B3∪B4(a, Z) +� +≤ Cσ2 +� +∞ +σ2m−1r/2−(1−1/σ2(m−1)r)a/σ +e−z2/2 +√ +2π +�� +1 +σ2m−1ra − z +�2 ++ +� +z2 +2σ − +1 +2σ3 +� +1 +σ2(m−1)ra − σz + σ2 +2 +�2�2� +� dz ++ Cσ2 +� −σ2m−1r/2−(1−1/σ2(m−1)r)a/σ +−∞ +e−z2/2 +√ +2π +��2a +σ − +1 +σ2m−1ra + z +�2 ++ +� +z2 +2σ − +1 +2σ3 +� +1 +σ2(m−1)ra − σz + σ2 +2 +�2�2� +� dz +≤ Cσ2 + Cσ2 +� +∞ +−∞ +e−z2/2 +√ +2π +� 1 +2σ3 +� +a2 +σ4(m−1)r2 + σ4 +4 − σ3z + +a +σ2m−4r − +2az +σ2m−3r +��2 +dz +≤ Cσ2, +where we used again the boundedness of the moments of Z, the fact that a ≤ σ2mr/2 and that +σp+1 ≤ σp. 
For R2 and a > 0, we have +E +� +φ2 +d(a, Z)21R2(a, Z) +� += E +� +φ2 +d(a, Z)21C1∪C2(a, Z) +� +≤ Cσ2 +� σ/2+σ2m−1r/2−a/σ +σ/2−a/σ +e−z2/2 +√ +2π +��σ2 +2 − σz +�2 ++ +� +z2 +2 − +1 +2σ2 +� +1 +σ2(m−1)ra + +� +1 − +1 +σ2(m−1)r +� �σ2 +2 − σz +��2�2� +� dz ++ Cσ2 +� σ/2−a/σ +σ/2−σ2m−1r/2−a/σ +e−z2/2 +√ +2π +�� +2a − σ2 +2 + σz +�2 ++ +� +z2 +2 − +1 +2σ2 +� +1 +σ2(m−1)ra + +� +1 − +1 +σ2(m−1)r +� �σ2 +2 − σz +��2�2� +� dz. +The first integral is bounded using the moments of Z, while for the third one let us denote +81 + +χ(a, σ, z) := a − σ2/2 + σz, then +� σ/2+σ2m−1r/2−a/σ +σ/2−σ2m−1r/2−a/σ +e−z2/2 +√ +2π +� +z2 +2σ − +1 +2σ3 +� +1 +σ2(m−1)ra + +� +1 − +1 +σ2(m−1)r +� �σ2 +2 − σz +��2�2 +dz += +� σ/2+σ2m−1r/2−a/σ +σ/2−σ2m−1r/2−a/σ +e−z2/2 +√ +2π +� +z2 +2σ − +1 +2σ3 +�χ(a, σ, z) +σ2(m−1)r + σ2 +2 − σz +�2�2 +dz +≤ C +� σ/2+σ2m−1r/2−a/σ +σ/2−σ2m−1r/2−a/σ +e−z2/2 +√ +2π +� +z2 +2σ − +1 +2σ3 +�σ2 +2 − σz +�2�2 +dz ++ C +� σ/2+σ2m−1r/2−a/σ +σ/2−σ2m−1r/2−a/σ +e−z2/2 +√ +2π +�χ(a, σ, z)2 +2σ4m−1r2 +�2 ++ +�χ(a, σ, z) +rσ2m−1 +�σ2 +2 − σz +��2 +dz; +recalling that in R2 we have |χ(a, σ, z)| ≤ σ2mr/2, we obtain that this term is also bounded by +Cσ2. For R4 and a > 0, we have +E +� +φ4 +d(a, Z)21R4(a, Z) +� += E +� +φ4 +d(a, Z)21C4(a, Z) +� += +� σ/2−σ2m−1r/2−a/σ +−∞ +e−z2/2 +√ +2π +� +2a − σ2 +2 + σz +�2 +dz += σ2 +� σ/2−σ2m−1r/2−a/σ +−∞ +e−z2/2 +√ +2π +�2a +σ − σ +2 + z +�2 +dz +Collecting all the terms together, we obtain +E +� +φd(a, Z)2� += +4 +� +i=1 +E +� +φi +d(a, Z)21Ti(a, Z) +� +≤ Cσ2 + Cσ2 +� σ/2−a/σ +−∞ +e−z2/2 +√ +2π +�2a +σ − σ +2 + z +�2 +dz. +Recall that σ = ℓd−1/3. To bound the last integral we use H¨older’s inequality +� σ/2−a/σ +−∞ +e−z2/2 +√ +2π +�2a +σ − σ +2 + z +�2 +dz ≤ C +� σ/2−a/σ +−∞ +e−z2/2 +√ +2π +��σ +2 + z +�2 ++ 4 +�σ +2 − a +σ +�2� +dz +≤ C +� σ/2−a/σ +−∞ +e−z2/2 +√ +2π +��ℓ2 +4 + z2 +� ++ 4 +�σ +2 − a +σ +�2� +dz. +The first term is bounded since the moments of Z are bounded. 
For the second term we use an estimate of the Gaussian cumulative distribution function. Let
+κ(ℓ, d, a) := ℓd^{−1/3}/2 − ad^{1/3}/ℓ. When z < κ(ℓ, d, a) < 0, we have 1 < z/κ(ℓ, d, a) and therefore
+(2π)^{−1/2} κ(ℓ, d, a)^2 ∫_{−∞}^{κ(ℓ,d,a)} e^{−z^2/2} dz ≤ (2π)^{−1/2} κ(ℓ, d, a) ∫_{−∞}^{κ(ℓ,d,a)} z e^{−z^2/2} dz
+= −(2π)^{−1/2} κ(ℓ, d, a) exp(−κ(ℓ, d, a)^2/2).
+However, y ↦ y e^{−y^2/2} is bounded over R; therefore (a, d) ↦ −(2π)^{−1/2} κ(ℓ, d, a) exp(−κ(ℓ, d, a)^2/2)
+is bounded over R*_+ × N. If κ(ℓ, d, a) ≥ 0, then we still have κ(ℓ, d, a) < ℓ and thus have the
+inequality
+(2π)^{−1/2} κ(ℓ, d, a)^2 ∫_{−∞}^{κ(ℓ,d,a)} e^{−z^2/2} dz ≤ (2π)^{−1/2} ℓ^2 ∫_{−∞}^{+∞} e^{−z^2/2} dz = ℓ^2.
+The result then follows since σ = ℓd^{−1/3}.
+D.3 Additional Integrals for the Laplace Distribution
+We collect here two auxiliary Lemmata which are used in the proof of Proposition 3.
+Lemma 4. Take X a Laplace random variable and Z a standard normal random variable independent
+of X. Let X̃ := X − (1/(σ^{2(m−1)}r)) X 1_{|X|≤σ^{2m}r/2} − (σ^2/2) sgn(X) 1_{|X|>σ^{2m}r/2} + σZ; then, for σ = ℓd^{−α}
+with α = 1/3,
+E[1_{sgn(X)≠sgn(X̃)}] → 0
+as d → ∞.
+Proof. Using the same strategy as in Appendix D.1 and the symmetry of the laws of X and Z, we find
+that
+E[1_{sgn(X)≠sgn(X̃)}] = 2E[1_{A1}(X) 1_{B2}(X, Z)] + 2E[1_{A3}(X) 1_{C2}(X, Z)] + 2E[1_{A1}(X) 1_{B4}(X, Z)] + 2E[1_{A3}(X) 1_{C4}(X, Z)].
+Using the same strategy used to obtain the moments of φd in Appendix D.1, we find that
+E[1_{A1}(X) 1_{B2}(X, Z)] = o(1);
+in addition,
+E[1_{A3}(X) 1_{C2}(X, Z)] = (1/(2√(2π))) ∫_{−∞}^{σ/2−σ^{2m−1}r/2} e^{−z^2/2} ∫_{σ^2/2−σz}^{σ^2/2−σz+σ^{2m}r/2} e^{−x} dx dz + o(1)
+= (1/(2√(2π))) ∫_{−∞}^{σ/2−σ^{2m−1}r/2} e^{−z^2/2} ( (σ^2 r/2) δ_{m1} + ... ) dz + o(1),
+where δ_{m1} is the Kronecker delta (equal to 1 if m = 1 and to 0 otherwise), and
+E[1_{A1}(X) 1_{B4}(X, Z)] + E[1_{A3}(X) 1_{C4}(X, Z)]
+= (1/(2√(2π))) ∫_{−∞}^{σ/2−σ^{2m−1}r} e^{−z^2/2} ∫_{0}^{σ^2/2−σz−σ^{2m}r/2} e^{−x} dx dz + o(1)
+= (1/(2√(2π))) ∫_{−∞}^{σ/2−σ^{2m−1}r} e^{−z^2/2} [−σz + ...] dz + o(1).
+Since σ = ℓd^{−1/3} and the remainder terms of the Taylor expansions are bounded, Lebesgue's
+dominated convergence theorem gives
+E[1_{A3}(X) 1_{C2}(X, Z)] → 0,  E[1_{A1}(X) 1_{B4}(X, Z)] + E[1_{A3}(X) 1_{C4}(X, Z)] → 0
+as d → ∞.
+Lemma 5. Take X a Laplace random variable and Z a standard normal random variable independent
+of X. Then,
+d^α E[|Z| |φd(X, Z)|] → 0
+for α = 1/3.
+Proof. Using the Cauchy–Schwarz inequality we have that
+E[|Z| |φd(X, Z)|] ≤ E[Z^2]^{1/2} E[φd(X, Z)^2]^{1/2};
+the first expectation is equal to one, and the second one converges to zero at rate d^{−1/2} by Proposition 17. The result follows straightforwardly.
+D.4 Integrals for Moment Computations
+We distinguish the cases m = 1/2 and m ≥ 1 since the integration bounds differ significantly in
+these two cases. For values of m between 1/2 and 1 the integrals are not finite. The expectations below
+are obtained by integrating w.r.t. x and using a Taylor expansion about σ = 0 to obtain the leading
+order terms. Using the Lagrange form of the remainder for the Taylor expansions, we find that the
+remainder terms are all of the form σ^{1/α+1} f(γ(σ, z))/(1/α + 1)!, where γ(σ, z) is a point between
+the limits of integration w.r.t. x and f : x ↦ p(x)e^{−x}, where p is a polynomial. Therefore, using
+the boundedness of the remainder and Lebesgue's dominated convergence theorem, the integrals
+w.r.t. z of the remainder terms all converge to 0.
+D.4.1 First Moment
+For simplicity, we only consider the case r ≥ σ^{−2(m−1)}; the other case follows analogously.
+Region R1
+Let us consider φ1_d first.
We have +A1 ∩ B1 = +� +� +� +� +� +� +� +0 ≤ x ≤ σ2mr +2 +if 0 ≤ z ≤ σ +2 +0 ≤ x ≤ +� +σ2mr +2 +− σz +� � +1 − +1 +σ2(m−1)r +�−1 +if σ +2 ≤ z ≤ σ2m−1r +2 +−σz +� +1 − +1 +σ2(m−1)r +�−1 ≤ x ≤ σ2mr +2 +if σ +2 − σ2m−1r +2 +≤ z ≤ 0 +, +A1 ∩ B2 = +� +� +� +� +� +� +� +0 ≤ x ≤ σ2mr +2 +if − σ2m−1r +2 +≤ z ≤ σ +2 − σ2m−1r +2 +0 ≤ x ≤ −σz +� +1 − +1 +σ2(m−1)r +�−1 +if σ +2 − σ2m−1r +2 +≤ z ≤ 0 +− +� +σ2mr +2 ++ σz +� � +1 − +1 +σ2(m−1)r +�−1 ≤ x ≤ σ2mr +2 +if σ +2 − σ2m−1r ≤ z ≤ − σ2m−1r +2 +, +84 + +and +A1 ∩ (B1 ∪ B2) = +� +� +� +� +� +� +� +� +� +� +� +� +� +0 ≤ x ≤ σ2mr +2 +if − σ2m−1r +2 +≤ z ≤ σ +2 +0 ≤ x ≤ +� +σ2mr +2 +− σz +� � +1 − +1 +σ2(m−1)r +�−1 +if σ +2 ≤ z ≤ σ2m−1r +2 +− +� +σ2mr +2 ++ σz +� � +1 − +1 +σ2(m−1)r +�−1 ≤ x ≤ σ2mr +2 +if σ +2 − σ2m−1r ≤ z ≤ − σ2m−1r +2 +. +The corresponding expectations are +E +�� +X +σ2(m−1)r − σZ +� +1A1(X)1B1(X, Z) +� += +1 +2 +√ +2π +� σ2m−1r/2 +σ/2 +e−z2/2 � +z4−2mσ4−2mξ(r) + . . . +� +dz ++ +1 +2 +√ +2π +� 0 +σ/2−rσ2m−1/2 +e−z2/2 � +z4−2mσ4−2mξ(r) + . . . +� +dz ++ o(1), +E +�� +2X − +X +σ2(m−1)r + σZ +� +1A1(X)1B2(X, Z) +� += +1 +2 +√ +2π +� 0 +σ/2−σ2m−1r/2 +e−z2/2 � +z4−2mσ4−2mξ(r) + . . . +� +dz ++ +1 +2 +√ +2π +� −σ2m−1r/2 +σ/2−σ2m−1r +e−z2/2 � +z4−2mσ4−2mξ(r) + . . . +� +dz ++ o(1), +where ξ : [0, +∞) → R is a function of r only which might change from one line to the other, and +E +�� +Z2 +2 − +1 +2σ2 +�� +2 +σ2(m−1)r − +1 +σ4(m���1)r2 +� +X − +� +1 − +1 +σ2(m−1)r +� +σZ +�2� +1A1(X)1B1∪B2(X, Z) +� += +1 +2 +√ +2π +� σ/2 +−σ2m−1r/2 +e−z2/2 [+ . . . ] dz ++ +1 +2 +√ +2π +� σ2m−1r/2 +σ/2 +e−z2/2 [+ . . . ] dz ++ +1 +2 +√ +2π +� −σ2m−1r/2 +σ/2−σ2m−1r +e−z2/2 � +z2σ2ξ(r) + . . . +� +dz, +where ξ : [0, +∞) → R is a function of r only which might change from one line to the other. 
+85 + +Region R2 +For φ2 +d, we have +A3 ∩ C1 = +� +σ2/2 − σz ≤ x ≤ −σz + σ2/2 + σ2mr/2 +if z < σ/2 − σ2m−1r/2 +σ2mr/2 < x ≤ −σz + σ2/2 + σ2mr/2 +if σ/2 − σ2m−1r/2 ≤ z ≤ σ/2 +, +A3 ∩ C2 = +� +σ2/2 − σz − σ2mr/2 ≤ x ≤ σ2/2 − σz +if z < σ/2 − σ2m−1r +σ2mr/2 < x ≤ σ2/2 − σz +if σ/2 − σ2m−1r < z < σ/2 − σ2m−1r/2 +and +A3 ∩ (C1 ∪ C2) = +� +� +� +� +� +σ2mr/2 ≤ x ≤ σ2mr/2 + σ2/2 − σz +if σ/2 − σ2m−1r ≤ z ≤ σ/2 +−σ2mr/2 + σ2/2 − σz < x ≤ σ2mr/2 + σ2/2 − σz +if z < σ/2 − σ2m−1r +. +The corresponding expectations are +E +��σ2 +2 − σZ +� +1A3(X)1C1(X, Z) +� += +1 +2 +√ +2π +� σ/2−σ2m−1r/2 +−∞ +e−z2/2 � +−rz +2 σ2m+1 + . . . +� +dz ++ +1 +2 +√ +2π +� σ/2 +σ/2−σ2m−1r/2 +e−z2/2 � +z2σ2 + . . . +� +dz, +E +�� +2X − σ2 +2 + σZ +� +1A3(X)1C2(X, Z) +� += +1 +2 +√ +2π +� σ/2−σ2m−1r +−∞ +e−z2/2 � +−rz +2 σ2m+1 + . . . +� +dz ++ +1 +2 +√ +2π +� σ/2−σ2m−1r/2 +σ/2−σ2m−1r +e−z2/2 � +−rz +2 σ2m+1 + . . . +� +dz, +and +E +�� +Z2 +2 − +1 +2σ2 +� +1 +σ2(m−1)rX + +� +1 − +1 +σ2(m−1)r +� �σ2 +2 − σZ +��2� +1A3(X)1C1∪C2(X, Z) +� += +1 +2 +√ +2π +� σ/2−σ2m−1r +−∞ +e−z2/2 �rz +2 σ2m+1 + . . . +� +dz ++ +1 +2 +√ +2π +� σ/2 +σ/2−σ2m−1r +e−z2/2 +� +−z3 +2rσ3−2m + . . . +� +dz. +Region R3 +For φ3 +d, we have, in the case r ≥ σ−2(m−1), +A1 ∩ B3 = +� +0 ≤ x ≤ σ2mr +2 +if z > σ2m−1r +2 +� +σ2mr +2 +− σz +� � +1 − +1 +σ2(m−1)r +�−1 < x ≤ σ2mr +2 +if σ +2 ≤ z ≤ σ2m−1r +2 +, +A1 ∩ B4 = +� +0 ≤ x ≤ σ2mr +2 +if z < σ +2 − σ2m−1r +0 ≤ x < +� +σ2mr +2 +− σz +� � +1 − +1 +σ2(m−1)r +�−1 +if σ +2 − σ2m−1r ≤ z ≤ − σ2m−1r +2 +. +86 + +The corresponding expectations are +E +�� +1 +σ2(m−1)rX − σZ + Z2 +2 − +1 +2σ2 +� +1 +σ2(m−1)rX − σZ + σ2 +2 +�2� +1A1(X)1B3(X, Z) +� += +1 +2 +√ +2π +� +∞ +σ2m−1r/2 +e−z2/2 � +−rz +8 σ2m+1 + . . . +� +dz ++ +1 +2 +√ +2π +� σ2m−1r/2 +σ/2 +e−z2/2 � +z3−2mσ3−2mξ(r) + . . . +� +dz, +E +�� +2X − +1 +σ2(m−1)rX + σZ + Z2 +2 − +1 +2σ2 +� +1 +σ2(m−1)rX − σZ − σ2 +2 +�2� +1A1(X)1B4(X, Z) +� += +1 +2 +√ +2π +� σ/2−σ2m−1r/2 +−∞ +e−z2/2 +�3 +8σ2m+1 + . . . 
+� +dz ++ +1 +2 +√ +2π +� −σ2m−1r/2 +σ/2−σ2m−1r/2 +e−z2/2 � +z3σ3−2mξ(r) + . . . +� +dz, +where ξ : [0, +∞) → R is a function of r only which might change from one line to the other. +Region R4 +Finally, for φ4 +d we have +A3 ∩ C4 = +� +z < σ +2 − σ2m−1r, σ2mr +2 +< x ≤ σ2 +2 − σz − σ2mr +2 +� +, +and +E +�� +2X − σ2 +2 + σZ +� +1A3(X)1C4(X, Z) +� += +1 +2 +√ +2π +� σ/2−σ2m−1r +−∞ +e−z2/2 +�z3 +6 σ3 + . . . +� +dz. +D.4.2 +Second Moment +For simplicity, we only consider the case for r ≥ σ−2(m−1), the other case follows analogously. +Region R1 +For φ1 +d we have +E +� +φ1 +d(X, Z)21A1(X)1B1(X, Z) +� += +1 +2 +√ +2π +� σ2m−1r/2 +σ/2 +e−z2/2 � +z3σ3ξ(r) + . . . +� +dz ++ +1 +2 +√ +2π +� 0 +σ/2−rσ2m−1/2 +e−z2/2 � +z3σ3ξ(r) + . . . +� +dz ++ o(1), +E +� +φ1 +d(X, Z)21A1(X)1B2(X, Z) +� += +1 +2 +√ +2π +� 0 +σ/2−σ2m−1r/2 +e−z2/2 � +z3σ3ξ(r) + . . . +� +dz ++ +1 +2 +√ +2π +� −σ2m−1r/2 +σ/2−σ2m−1r +e−z2/2 � +z3σ3ξ(r) + . . . +� +dz ++ o(1), +87 + +where ξ : [0, +∞) → R is a function of r only which might change from one line to the other. +Region R2 +For φ2 +d we have +E +� +φ2 +d(X, Z)21A3(X)1C1(X, Z) +� += +1 +2 +√ +2π +� σ/2−σ2m−1r/2 +−∞ +e−z2/2 +�rz2 +2 σ2m+2 + . . . +� +dz ++ +1 +2 +√ +2π +� σ/2 +σ/2−σ2m−1r/2 +e−z2/2 � +−z3σ3 + . . . +� +dz, +E +� +φ2 +d(X, Z)21A3(X)1C2(X, Z) +� += +1 +2 +√ +2π +� σ/2−σ2m−1r +−∞ +e−z2/2 � +z2σ2m+2 + . . . +� +dz ++ +1 +2 +√ +2π +� σ/2−σ2m−1r/2 +σ/2−σ2m−1r +e−z2/2 +� +−rz2 +2 σ2m+2 + . . . +� +dz. +Region R3 +For φ3 +d we have +E +� +� +� +1 +σ2(m−1)rX − σZ + Z2 +2 − +1 +2σ2 +� +1 +σ2(m−1)rX − σZ + σ2 +2 +�2�2 +1A1(X)1B3(X, Z) +� +� += +1 +2 +√ +2π +� +∞ +σ2m−1r/2 +e−z2/2 +�rz2 +24 σ2m+2 + . . . +� +dz ++ +1 +2 +√ +2π +� σ2m−1r/2 +σ/2 +e−z2/2 +�rz2 +24 σ2m+2 + . . . +� +dz, +E +� +� +� +2X − +1 +σ2(m−1)rX + σZ + Z2 +2 − +1 +2σ2 +� +1 +σ2(m−1)rX − σZ − σ2 +2 +�2�2 +1A1(X)1B4(X, Z) +� +� += +1 +2 +√ +2π +� σ/2−σ2m−1r/2 +−∞ +e−z2/2 +�7rz2 +24 σ2m+2 + . . . +� +dz ++ +1 +2 +√ +2π +� −σ2m−1r/2 +σ/2−σ2m−1r/2 +e−z2/2 +�7rz2 +24 σ2m+2 + . . . 
+� +dz, +Region R4 +For φ4 +d we have +E +�� +2X − σ2 +2 + σZ +�2 +1A3(X)1C4(X, Z) +� += +1 +2 +√ +2π +� σ/2−σ2m−1r +−∞ +e−z2/2 +� +−z3 +3 σ3 + . . . +� +dz. +D.4.3 +Third Moment +Having established that the only possible scaling is given by α = 1/3, β = m/3 with m ≥ 1, we +now proceed to bound the third moment of φd in this case. For simplicity, we only consider the +case for r ≥ 1, the other case follows analogously. +88 + +Since m ≥ 1, we find that E +� +φ1 +d(X, Z)31R1(X, Z) +� += o(1) as d → ∞ since the limits of integra- +tion all converge to 0. Then, using H¨older’s inequality for φ2 +d, we have +E +� +φ2 +d(X, Z)3� +≤ CE +��σ2 +2 − σZ +�3 +1A3(X)1C1(X, Z) +� ++ CE +�� +2X − σ2 +2 + σZ +�3 +1A3(X)1C2(X, Z) +� ++ CE +� +� +� +Z2 +2 − +1 +2σ2 +� +1 +σ2(m−1)rX + +� +1 − +1 +σ2(m−1)r +� �σ2 +2 − σZ +��2�3 +×1A3(X)1C1∪C2(X, Z)] += +C +2 +√ +2π +� σ/2−σ2m−1r/2 +−∞ +e−z2/2 +� +−rz3 +2 σ5 + ... +� +dz ++ +C +2 +√ +2π +� σ/2−σ2m−1r +−∞ +e−z2/2 +� +−rz3 +2 σ5 + ... +� +dz ++ +C +2 +√ +2π +� σ/2−σ2m−1r +−∞ +e−z2/2 � +z3σ5ξ(r) + ... +� +dz + o(1), +where ξ : [0, +∞) → R is a function of r only which might change from one line to the other. For +89 + +φ3 +d, we have, using again H¨older’s inequality, +E +� +� +� +1 +σ2(m−1)rX − σZ + Z2 +2 − +1 +2σ2 +� +1 +σ2(m−1)rX − σZ + σ2 +2 +�2�3 +1A1(X)1B3(X, Z) +� +� +≤ CE +�� +1 +σ2(m−1)rX − σZ +�3 +1A1(X)1B3(X, Z) +� ++ CE +� +� +� +Z2 +2 − +1 +2σ2 +� +1 +σ2(m−1)rX − σZ + σ2 +2 +�2�3 +1A1(X)1B3(X, Z) +� +� += +1 +2 +√ +2π +� +∞ +σ2m−1r/2 +e−z2/2 +� +−z3r +2 σ5 + ... +� +dz ++ +1 +2 +√ +2π +� +∞ +σ2m−1r/2 +e−z2/2 � +z3σ5ξ(r) + ... +� +dz + o(1) +E +� +� +� +2X − +1 +σ2(m−1)rX + σZ + Z2 +2 − +1 +2σ2 +� +1 +σ2(m−1)rX − σZ − σ2 +2 +�2�3 +1A1(X)1B4(X, Z) +� +� +≤ CE +�� +2X − +1 +σ2(m−1)rX + σZ +�3 +1A1(X)1B4(X, Z) +� ++ CE +� +� +� +Z2 +2 − +1 +2σ2 +� +1 +σ2(m−1)rX − σZ − σ2 +2 +�2�3 +1A1(X)1B4(X, Z) +� +� += +1 +2 +√ +2π +� +∞ +σ2m−1r/2 +e−z2/2 +�z3r +2 σ5 + ... +� +dz ++ +1 +2 +√ +2π +� +∞ +σ2m−1r/2 +e−z2/2 � +z3σ5ξ(r) + ... 
+� +dz + o(1), +where ξ : [0, +∞) → R is a function of r only which might change from one line to the other. +Finally, for φ4 +d we have +E +�� +2X − σ2 +2 + σZ +�3 +1A3(X)1C4(X, Z) +� += +1 +2 +√ +2π +� σ/2−σ2m−1r +−∞ +e−z2/2 +�z3 +10σ5 + ... +� +dz. +90 +
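The limits in Propositions 16 and 17 can be checked by direct Monte Carlo simulation. The sketch below implements the per-coordinate proximal MALA log-acceptance ratio for the Laplace target through the gradient of the Moreau–Yosida envelope of g(x) = |x| (which reproduces the four regions R1–R4 above), and compares d E[φd] and d Var(φd) with their limits −ℓ^3/(3√(2π)) and 2ℓ^3/(3√(2π)); the parameter values and sample size are illustrative choices, and at finite d the empirical values match the limits only approximately.

```python
import numpy as np

rng = np.random.default_rng(0)
ell, r, m, d = 1.0, 1.0, 1, 1_000
sigma = ell * d ** (-1 / 3)            # sigma_d = l d^{-1/3}, i.e. alpha = 1/3
lam = sigma ** (2 * m) * r / 2         # Moreau-Yosida parameter sigma^{2m} r / 2

def grad_env(x):
    # Gradient of the Moreau-Yosida envelope of g(x) = |x|:
    # (x - prox(x))/lambda = x/lambda on [-lambda, lambda], sign(x) outside
    return np.where(np.abs(x) <= lam, x / lam, np.sign(x))

def drift(x):
    # Proposal mean b(x) = x - (sigma^2/2) * gradient of the envelope
    return x - 0.5 * sigma ** 2 * grad_env(x)

n = 2_000_000
x = rng.laplace(size=n)                # one coordinate of the Laplace target
z = rng.standard_normal(n)
y = drift(x) + sigma * z               # proximal MALA proposal
# per-coordinate log-acceptance ratio phi_d(x, z)
phi = np.abs(x) - np.abs(y) + 0.5 * z ** 2 - 0.5 * ((x - drift(y)) / sigma) ** 2
target = ell ** 3 / (3 * np.sqrt(2 * np.pi))
print(d * phi.mean(), -target)         # empirical vs limiting first moment
print(d * phi.var(), 2 * target)       # empirical vs limiting variance
```

For |x| ≤ λ the drift reduces to (1 − 1/(σ^{2(m−1)}r)) x and for |x| > λ to x − (σ^2/2) sgn(x), matching the proposal means used in the region decomposition above.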