diff --git "a/19AyT4oBgHgl3EQf1fl-/content/tmp_files/2301.00736v1.pdf.txt" "b/19AyT4oBgHgl3EQf1fl-/content/tmp_files/2301.00736v1.pdf.txt" new file mode 100644--- /dev/null +++ "b/19AyT4oBgHgl3EQf1fl-/content/tmp_files/2301.00736v1.pdf.txt" @@ -0,0 +1,5336 @@ +Mixed moving average field guided learning for spatio-temporal +data +Imma Valentina Curato∗ , Orkun Furat† and Bennet Str¨oh ‡ +January 3, 2023 +Abstract +Influenced mixed moving average fields are a versatile modeling class for spatio-temporal data. +However, their predictive distribution is not generally accessible. Under this modeling assumption, +we define a novel theory-guided machine learning approach that employs a generalized Bayesian +algorithm to make predictions. We employ a Lipschitz predictor, for example, a linear model or +a feed-forward neural network, and determine a randomized estimator by minimizing a novel PAC +Bayesian bound for data serially correlated along a spatial and temporal dimension. Performing causal +future predictions is a highlight of our methodology as its potential application to data with short +and long-range dependence. We conclude by showing the performance of the learning methodology +in an example with linear predictors and simulated spatio-temporal data from an STOU process. +MSC 2020: primary 60E07, 60E15, 60G25, 60G60; secondary 62C10. +Keywords: stationary models, weak dependence, oracle inequalities, randomized estimators, causal predic- +tions. +1 +Introduction +Modeling spatio-temporal data representing measurements from a continuous physical system introduces +various methodological challenges. These include finding models that can account for the serial correlation +typically observed along their spatial and temporal dimensions and simultaneously have good prediction +performance. Statistical models used nowadays to analyze spatio-temporal data are Gaussian processes +[5], [23], [53], and [63]; spatio-temporal kriging [19], and [46]; space-time autoregressive moving average +models [30]; point processes [31], and hierarchical models [19]. An important common denominator of +statistical modeling is that they enable predictions once the variogram (covariance) structure or the data +distribution (up to a set of parameters) is carefully chosen in relation to the studied phenomenon and +practitioners’ experience. +This paper aims to define a novel theory-guided (or physics-informed) machine learning methodology +for spatio-temporal data. By this name, go all hybrid procedures that use a stochastic (or deterministic) +∗Ulm +University, +Institute +of +Mathematical +Finance, +Helmholtzstrae +18, +89069 +Ulm, +Germany. +E-mail: +imma.curato@uni-ulm.de. +†Ulm University, Institute of Stochastics, Helmholtzstrae 18, 89069 Ulm, Germany. E-mail: orkun.furat@uni-ulm.de. +‡Imperial College, Department of Mathematics, South Kensington Campus, SW7 2AZ London, United Kingdom. E- +mail: b.stroh@imperial.ac.uk. +1 +arXiv:2301.00736v1 [stat.ML] 2 Jan 2023 + +model in synergy with a specific data science one. Such methodologies have started to gain prominence +in several scientific disciplines such as earth science, quantum chemistry, bio-medical science, climate +science, and hydrology modeling as, for example, described in [41], [51], [50], and [54]. As in the statistical +models cited above, we model the spatial-temporal covariance structure of the observed data. However, +we perform predictions using a generalized Bayesian algorithm. 
Let us start by introducing the stochastic +model involved in our methodology. +We assume throughout to observe data ( ˜Zt(x))(t,x)∈T×L on a regular lattice L ⊂ Rd for d ≥ 1 across +times T = {1, . . . , N} such that the decomposition +˜Zt(x) = µt(x) + Zt(x) +(1) +holds and no measurement errors are present in the observations. Here, µt(x) is a deterministic function, +and Zt(x) are considered realizations from a zero mean stationary (influenced) mixed moving average field +(in brief, MMAF). When the spatial dimension d = 2, a category of data that falls in our assumptions is +the one of frame images through time (also known as video data or multidimensional raster data). Early +applications of MMAFs in image modeling can be found in [39]. +An MMAF is defined as +Zt(x) = +� +H +� +At(x) +f(A, x − ξ, t − s) Λ(dA, dξ, ds), (t, x) ∈ R × Rd, +(2) +where f is a deterministic function called kernel, H is denoting a non-empty topological space, Λ a L´evy +basis and At(x) is a so-called ambit set [7], we refer the reader to Section 2 for more details on the +definition (2). Examples of such models can be found in [7], [11] [47], and [48]. +A significant feature of MMAFs is that they provide a direct way to specify a model for an observed +physical phenomenon based on a probabilistic understanding of the latter as exemplified by choice of +the L´evy basis and the kernel function appearing in (2). Choosing an opportune distribution Λ allows +us to work with Gaussian and non-Gaussian distributed models. Moreover, the random parameter A in +the kernel function allows us to model short and long-range temporal and spatial dependence. A further +highlight of these models is that their autocovariance functions can be exponential or power decaying by +choosing an exponential kernel function f, an opportune distribution of the random parameter A, and +assuming that the L´evy basis Λ has finite second-order moments. Therefore, such types of autocovariance +functions can be obtained without the need for further assumptions on the distribution of the random +field. MMAFs with such properties are, for example, the spatio-temporal Ornstein-Uhlenbeck process (in +brief, STOU) [47] and its mixed version called the MSTOU process [48]. We also know that the entire +class of MMAFs is θ-lex weakly dependent. This notion of dependence has first been introduced in [22], +and it is more general than α∞,v-mixing for random fields as defined in [24] for v ∈ N∪{∞} and α-mixing +[14] in the particular case of stochastic processes. +Although the MMAFs are versatile models, only few results in the literature are to be found con- +cerning their predictive distribution. To our knowledge, the only explicit result concerns Gaussian STOU +processes defined on cone-shaped ambit sets, see [47, Theorem 13]. We then learn a predictor h ∈ H, for +H the class of the Lipschitz functions, by determining a randomized estimator ˆρ (i.e., a regular condi- +tional probability measure) on H using generalized Bayesian learning, see [33] for a review. We call our +methodology mixed moving average field guided learning. This procedure is applicable, for example, when +2 + +using linear models and feed-forward neural networks. The learning task on which we focus is making a +one-step-ahead prediction of the field Z in a given spatial position x∗. +The methodology starts by computing the θ-lex coefficients of the underlying MMAF. 
If the analyzed +field has finite second-order moments, then such coefficients can be obtained following the calculations in +Section 2.5. We use this information to select a set of input features from (Zt(x))(t,x)∈T×L and determine +a training set S. We then prove a PAC Bayesian bound for the sampled data S. We ultimately deter- +mine a randomized estimator by minimization of the PAC Bayesian bound. The acronym PAC stands +for Probably Approximately Correct and may be traced back to [66]. A PAC inequality states that with +an arbitrarily high probability (hence ”probably”), the performance (as provided by a loss function) of +a learning algorithm is upper-bounded by a term decaying to an optimal value as more data is collected +(hence ”approximately correct”). PAC-Bayesian bounds have proven over the past two decades to suc- +cessfully address various learning problems such as classification, sequential or batch learning, and deep +learning [28]. Indeed, they are a powerful probabilistic tool to derive theoretical generalization guarantees. +To the best of our knowledge, the PAC Bayesian bounds determined in Section 3.2 are the first results +in the literature obtained for data serially correlated along a spatial and temporal dimension. +It is important to emphasize that using a randomized estimator over a classical supervised learning +methodology has the same advantages as a Bayesian approach, i.e., it allows a deeper understanding of +the uncertainty of each possible h ∈ H. Moreover, we can enable the analysis of aggregate or ensemble +predictors ˆh = ˆρ[h]. Despite these similarities, generalized Bayesian learning substantially differs from +the classical Bayesian learning approach. In the latter, we specify a prior distribution, a statistical model +connecting output-input pairs called likelihood function, and determine the unique posterior distribution +by Bayes’ theorem. When using generalized Bayes to determine a randomized estimator, no assumptions +on the likelihood function are required but just a loss function and a so-called reference distribution. These +ingredients, together with a PAC Bayesian bound, are employed to determine a randomized predictor, +which is unique just under a specific set of assumptions, see Section 3.2. +1.1 +Outline and Contributions +Between the data-science models used to tackle predictions for spatio-temporal data, we find deep learn- +ing, see [4], [54], [61] and [62] for a review, and video frame prediction algorithms as in [45] and [68]. Deep +learning techniques are increasingly popular because they successfully extract spatio-temporal features +and learn the inner law of an observed spatio-temporal system. However, these models lack interpretabil- +ity, i.e., it is not possible to disentangle the causal relationship between variables in different spatial-time +points, and typically no proofs of their generalization (predictive) abilities are available. On the other +hand, [45] and [68] are methodologies retaining a causal interpretation, see discussion below, but do not +have proven generalization (predictive) performance. +Given a model class H, mixed moving average field guided learning selects a predictor h ∈ H that +has the best generalization performance for the analyzed prediction task. Moreover, the MMAF modeling +framework has a causal interpretation when using cone-shaped ambit sets. 
To explain this point, we +borrow the concept of lightcone from special relativity that describes the possible paths that the light +can make in space-time leading to a point (t, x) and the ones that lie in its future. In the context of our +paper, we use their geometry to identify the points in space-time having causal relationships. Let c > 0, +for a point (t, x), and by using the Euclidean norm to assess the distance between different space-time +3 + +points, we define a lightcone as the set +Alight +t +(x) = +� +(s, ξ) ∈ R × Rd : ∥x − ξ∥ ≤ c|t − s| +� +. +(3) +The set Alight with respect to the point (t, x) can be split into two disjoint sets, namely, At(x) and +At(x)+. The set At(x) is called past lightcone, and its definition corresponds to the one of a cone-shaped +ambit set +At(x) := +� +(s, ξ) ∈ R × Rd : s ≤ t and ∥x − ξ∥ ≤ c|t − s| +� +. +(4) +The set +At(x)+ = {(s, ξ) ∈ R × Rd : s > t and ∥x − ξ∥ ≤ c|t − s|}, +(5) +is called instead the future lightcone. By using an influenced MMAF on a cone-shaped ambit set as the +underlying model, we implicitly assume that the following sets +l−(t, x) = {Zs(ξ) : (s, ξ) ∈ At(x) \ (t, x)} and l+(t, x) = {Zs(ξ) : (s, ξ) ∈ At(x)+} +(6) +are respectively describing the values of the field that have a direct influence on Zt(x) and the future field +values influenced by Zt(x). We can then uncover the causal relationships described above by estimating +the constant c from observed data, called throughout the speed of information propagation in the physical +system under analysis. A similar approach to the modeling of causal relationships can be found in [45], +[58], and [68]. In [45], and [58], the sets (6) are considered and employed to discover coherent structures, see +[37] for a formal definition, in spatio-temporal physical systems and to perform video frame prediction, +respectively. Also, in [68], predictions are performed by embedding spatio-temporal information on a +Minkowski space-time. Hence, the concept of lightcones enters into play in the definition of their algorithm. +In machine learning, we typically have two equivalent approaches towards causality: structural causal +models, which rely on the use of directed acyclical graphs (DAG) [49], and Rubin causal models, which +rely upon the potential outcomes framework [57]. The concept of causality employed in this paper can be +inscribed into the latter. In fact, by using MMAFs on cone-shaped ambit sets, the set l+(t, x) describes +the possible future outcomes that can be observed starting from the spatial position (t, x). +The paper is structured as follows. In Section 2, we introduce the MMAF framework and define +STOU and MSTOU processes. In Section 3, we introduce the notations that allow us to bridge the +MMAF framework (that by definition is continuous in time and space) with a data science one (that +by definition is discrete in time and space). Important theoretical preliminaries and the input-features +extraction method can be found in Section 3.1. We then prove PAC Bayesian bounds (also of oracle type) +in Section 3.2 for Lipschitz predictors, among which we discuss the shape of the bound for linear models +and feed-forward neural networks. We then focus on linear predictors and show in Section 3.3 how to +select the best one to be used in a given prediction task. We give in Section 4 an explicit procedure to +perform one-step ahead casual future predictions in such a framework. 
In conclusion, we apply our theory- +guided machine learning methodology to simulated data from an STOU process driven by a Gaussian +and a NIG-distributed L´evy basis. Appendix A contains further details on the weak dependence measures +employed in the paper, and a review of the estimation methodologies for STOU and MSTOU processes. +Appendix B contains detailed proofs of the results presented in the paper. +4 + +2 +Mixed moving average fields +2.1 +Notations +Throughout the paper, we indicate with N the set of positive integers and R+ the set of non-negative +real numbers. As usual, we write Lp(Ω) for the space of (equivalence classes of) measurable functions +f : Ω → R with finite Lp-norm ∥f∥p. When Ω = Rn, ∥x∥1 and ∥x∥ denotes the L1-norm and the Euclidean +norm, respectively, and we define ∥x∥∞ = maxj=1,...,n |x(j)|, where x(j) represents the component j of +the vector x. +To ease the notations in the following sections, unless it is important to keep track of both time and +space components separately, we often indicate the index set R × Rd with R1+d. A ⊂ B denotes a not +necessarily proper subset A of a set B, |B| denotes the cardinality of B and dist(A, B) = infi∈A,j∈B∥i − +j∥∞ indicates the distance of two sets A, B ⊂ R1+d. Let n, k ≥ 1, and F : Rn → Rk, we define ∥F∥∞ = +supt∈Rd∥F(t)∥. Let Γ = {i1, . . . , iu} ⊂ R1+d for u ∈ N, we define the random vector ZΓ = (Zi1, . . . , Ziu). +In general, we use bold notations when referring to random elements. +In the following Lipschitz continuous is understood to mean globally Lipschitz. For u, n ∈ N, G∗ +u +is the class of bounded functions from Ru to R and Gu is the class of bounded, Lipschitz continuous +functions from Ru to R with respect to the distance ∥ · ∥1. Moreover, we call L(Ω) the set of all Lipschitz +functions h on Ω with respect to the distance ∥ · ∥1 and define the Lipschitz constant as +Lip(h) = sup +x̸=y +|h(x) − h(y)| +∥x − y∥1 +. +(7) +Hereafter, we often use the lexicographic order on R1+d. Let the pedex t and s be indicating +a temporal and spatial coordinate. For distinct elements y = (y1,t, y1,s . . . , yd,s) ∈ R1+d and z = +(z1,t, z1,s . . . , zd,s) ∈ R1+d we say y 0. The definition of the set +V r +t is also used when referring to the lexicographic order on Z1+d. +2.2 +Definition and properties +Let S = H × R × Rd, where H ⊂ Rq for q ≥ 1, and the Borel σ-algebra of S be denoted by B(S) and let +Bb(S) contain all its Lebesgue bounded sets. +Definition 2.1. A family of R-valued random variables Λ = {Λ(B) : B ∈ Bb(S)} is called a L´evy basis +on (S, Bb(S)) if it is an independently scattered and infinitely divisible random measure. This means that: +(i) For a sequence of pairwise disjoint elements of Bb(S), say {Bi, i ∈ N}: +– Λ(� +n∈N Bn) = � +n∈N Λ(Bn) almost surely when � +n∈N Bn ∈ Bb(S) +– and Λ(Bi) and Λ(Bn) are independent for i ̸= j. +(ii) Let B ∈ Bb(S). Then, the random variable Λ(B) is infinitely divisible, i.e. for any n ∈ N, there +exists a law µn such that the law µΛ(B) can be expressed as µΛ(B) = µ∗n +n , the n-fold convolution of +µn with itself. +5 + +For more details on infinitely divisible distributions, we refer the reader to [59]. In the following, we will +restrict ourselves to L´evy bases which are homogeneous in space and time and factorizable, i.e., L´evy +bases with characteristic function +E +� +eiuΛ(B)� += eΦ(u)Π(B) +(8) +for all u ∈ R and B ∈ Bb(S), where Π = π ×λ1+d is the product measure of the probability measure π on +H and the Lebesgue measure λ1+d on R×Rd. 
Note that when using a L´evy basis defined on S = R×Rd, +Π = λ1+d. Furthermore, +Φ(u) = iγ u − 1 +2σ2u2 + +� +R +� +eiux − 1 − iux1[0,1](|x|) +� +ν(dx) +(9) +is the cumulant transform of an ID distribution with characteristic triplet (γ, σ2, ν), where γ ∈ R, σ2 ≥ 0 +and ν is a L´evy-measure on R, i.e. +ν({0}) = 0 +and +� +R +� +1 ∧ x2� +ν(dx) < ∞. +The quadruplet (γ, σ2, ν, π) determines the distribution of the L´evy basis entirely, and therefore it +is called its characteristic quadruplet. An important random variable associated with the L´evy basis, is +the so-called L´evy seed, which we define as the random variable Λ′ having as cumulant transform (9), +that is +E +� +eiuΛ′� += eΦ(u). +(10) +By selecting different L´evy seeds, it is easy to compute the distribution of Λ(B) for B ∈ Bb(S) when +S = R × Rd. In the following two examples, we compute the L´evy bases used in generating the data sets +in Section 4.1. +Example 2.2 (Gaussian L´evy basis). Let Λ′ ∼ N(γ, σ2), then its characteristic function is equal to +exp(iγu − 1 +2σ2u2). Because of (8), we have, in turn, that the characteristic function of Λ(B) is equal to +exp(iγuλ1+d(B) − 1 +2σ2λ1+d(B)u2). In conclusion, Λ(B) ∼ N(γλ1+d(B), σ2λ1+d(B)) for any B ∈ Bb(S). +Example 2.3 (Normal Inverse Gaussian L´evy basis). Let K1 denote the modified Bessel function of the +third order and index 1. Then, for x ∈ R, the NIG distribution is defined as +f(x : α, β, µ, δ) = αδ(π2(δ2 + (x − µ)2))− 1 +2 exp(δ +� +α2 − β2 + β(x − µ))K1(α +� +δ2 + (x − µ)2), +where α, β, µ and δ are parameters such that µ ∈ R, δ > 0 and 0 ≤ |β| < α. Let Λ′ ∼ NIG(α, β, µ, δ), +then by (8) we have that Λ(B) ∼ NIG(α, β, µλ1+d(B), δλ1+d(B)) for all B ∈ Bb(S). +We now follow [7], and [10] to formally define ambit sets. +Definition 2.4. A family of ambit sets (At(x))(t,x)∈R×Rd ⊂ R × Rd satisfies the following properties: +� +� +� +� +� +� +� +� +� +At(x) = A0(0) + (t, x), (Translation invariant) +As(x) ⊂ At(x), for s < t +At(x) ∩ (t, ∞) × Rd = ∅. (Non-anticipative). +(11) +6 + +We consider throughout At(x) to be defined as in (4). We further assume that the random fields +in the paper are defined on a given complete probability space (Ω, F, P), equipped with the filtration of +influence (in the sense of Definition 3.8 in [22]) F = (F(t,x))(t,x)∈R×Rd generated by Λ and the family of +ambit sets (At(x))(t,x)∈R×Rd ⊂ R × Rd, i.e., each F(t,x) is the σ-algebra generated by the set of random +variables {Λ(B) : B ∈ Bb(H × At(x))}. We call our fields adapted to the filtration of influence F if +Zt(x) is measurable with respect to the σ-algebra F for each (t, x) ∈ R × Rd. Moreover, we work with +stationary random fields. Moreover, in the following, we use the term stationary instead of spatio-temporal +stationary. +Definition 2.5 (Spatio-temporal stationarity). We say that Zt(x) is spatio-temporal stationary if for ev- +ery n ∈ N, τ ∈ R, u ∈ Rd, t1, . . . , tn ∈ R and x1, . . . , xn ∈ Rd, the joint distribution of (Zt1(x1), . . . , Ztn(xn)) +is the same as that of (Zt1+τ(x1 + u), . . . , Ztn+τ(xn + u)). +Definition 2.6 (MMAF). Let Λ = {Λ(B), B ∈ Bb(S)} a L´evy basis, f : H × R × Rd → R a B(S)- +measurable function and At(x) an ambit set. Then, the stochastic integral (2) is adapted to the filtration F, +stationary, and its distribution is infinitely divisible. We call the R-valued random field Z an (influenced) +mixed moving average field and f its kernel function. +Remark 2.7. 
On a technical level, we assume all stochastic integrals in this paper to be well defined in +the sense of Rajput and Rosinski [52]. For more details, including sufficient conditions on the existence +of the integral as well as the explicit representation of the characteristic triplet of the MMAF’s infinitely +divisible distribution (which can be directly determined from the characteristic quadruplet of Λ), we refer +to [22, Section 3.1]. In the latter, there can also be found a multivariate definition of a L´evy basis and +an MMAF. +2.3 +Autocovariance Structure +Moment conditions for MMAFs are typically expressed in function of the characteristic quadruplet of its +driving L´evy basis and the kernel function f. +Proposition 2.8. Let Zt(x) be an R-valued MMAF driven by a L´evy basis with characteristic quadruplet +(γ, σ2, ν, π) with kernel function f : H × R × Rd → R and defined on an ambit set At(x) ⊂ R × Rd. +(i) If +� +|x|>1 |x|ν(dx) < ∞ and f ∈ L1(H × R × Rd, π × λ1+d) ∩ L2(H × R × Rd, π × λ1+d) the first +moment of Zt(x) is given by +E[Zt] = E(Λ′) +� +H +� +At(x) +f(A, −s, −ξ)ds dξ π(dA), +where E(Λ′) = γ + +� +|x|≥1 x ν(dx). +7 + +(ii) If +� +R x2 ν(dx) < ∞ and f ∈ L2(H × R × Rd, π × λ1+d), then Zt(x) ∈ L2(Ω) and +V ar(Zt(x)) = V ar(Λ′) +� +H +� +R×Rd f(A, −s, −ξ)2ds dξ π(dA), +Cov(Z0(0), Zt(x)) = V ar(Λ′) +� +H +� +A0(0)∩At(x) +f(A, −s, −ξ)f(A, t − s, x − ξ) ds dξ π(dA) +and +Corr(Z0(0), Zt(x)) = +� +H +� +A0(0)∩At(x) f(A, −s, −ξ)f(A, t − s, x − ξ) ds dξ π(dA) +� +H +� +R×Rd f(A, −s, −ξ)2ds dξ π(dA) +, +(12) +where V ar(Λ′) = σ2 + +� +Rd xx′ν(dx). +(iii) Consider σ2 = 0 and +� +|x|≤1 |x| ν(dx) < ∞. If +� +|x|>1 |x|ν(dx) < ∞ and f ∈ L1(H ×R×Rd, π×λ1+d), +the first moment of Zt(x) is given by +E[Zt(x)] = +� +H +� +At(x) +f(A, −s, −ξ) +� +γ0 + +� +R +xν(dx) +� +ds dξ π(dA), +where +γ0 := γ − +� +|x|≤1 +x ν(dx). +(13) +Proof. Immediate from [59, Section 25] and [22, Theorem 3.3]. +From Proposition 2.8, we can evince that the autocovariance function of an MMAF depends on the +variance of the L´evy seed Λ′, the kernel function f and the distribution π of the random parameter A. +Important examples of MMAFs are the spatio-temporal Ornstein-Uhlenbeck field (STOU) and the +mixed spatio-temporal Ornstein-Uhlenbeck field (MSTOU), whose properties have been thoroughly ana- +lyzed in [47] and [48], respectively. In Definition 2.9 and 2.11, we give the formal definitions of such fields +and the explicit expression of their autocovariance functions. +Definition 2.9. Let Λ = {Λ(B), B ∈ Bb(S)} be a L´evy basis, f : R×Rd → R a B(S)-measurable function +defined as f(s, ξ) = exp(−As) for A > 0, and At(x) be defined as in (4). Then, the STOU defined as +Zt(x) := +� +At(x) +exp(−A(t − s))Λ(ds, dξ) +(14) +is adapted to the filtration F, stationary, Markovian and its distribution is ID. Moreover, let u ∈ Rd, +τ ∈ R, and E[Zt(x)2] ≤ ∞. Then, +Cov(Zt(x), Zt+τ(x + u)) = V ar(Λ′) exp(−Aτ) +� +At(x)∩At+τ (x+u) +exp(−2A(t − s))ds dξ, and +(15) +Corr(Zt(x), Zt+τ(x + u)) = +exp(−Aτ) +� +At(x)∩At+τ (x+u) exp(−2A(t − s)) ds dξ +� +At(x) exp(−2A(t − s)) ds dξ +. +(16) +8 + +Example 2.10. Let d = 1 +ρT (τ) := Corr(Zt(x), Zt+τ(x)) = exp(−A|τ|), +(17) +ρS(u) := Corr(Zt(x), Zt(x + u)) = exp +� +− A|u| +c +� +, +(18) +ρST (τ, u) := Corr(Zt(x), Zt+τ(x + u)) = min +� +exp(−A|τ|), exp +� +− A|u| +c +�� +. +(19) +An STOU exhibits exponential temporal autocorrelation (just like the temporal Ornstein-Uhlenbeck +process) and has a spatial autocorrelation structure determined by the shape of the ambit set. 
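As a purely illustrative numerical sketch of Example 2.10 (not part of the formal development), the following Python snippet evaluates the correlation functions (17)-(19); the parameter values A and c below are assumptions chosen only for illustration, not estimates from data.

import numpy as np

# illustrative (assumed) STOU parameters for d = 1
A = 0.5   # decay parameter of the exponential kernel
c = 2.0   # speed of information propagation, i.e., slope of the ambit cone

def rho_T(tau):
    # temporal autocorrelation (17)
    return np.exp(-A * np.abs(tau))

def rho_S(u):
    # spatial autocorrelation (18)
    return np.exp(-A * np.abs(u) / c)

def rho_ST(tau, u):
    # spatio-temporal autocorrelation (19): the minimum of the two marginal
    # correlations, which does not factorize into rho_T(tau) * rho_S(u)
    return np.minimum(rho_T(tau), rho_S(u))

tau, u = 1.0, 1.0
print(rho_T(tau), rho_S(u), rho_ST(tau, u))
print(np.isclose(rho_ST(tau, u), rho_T(tau) * rho_S(u)))  # False: non-separable

The last line anticipates the non-separability discussed next.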
In addition, +this class of fields admits non-separable autocovariances, which are desirable in practice. +An MSTOU process is defined by mixing the parameter A in the definition of an STOU process; +that is, we assume that A is a random variable with support in H = (0, ∞). This modification allows the +determination of random fields with power-decaying autocovariance functions. +Definition 2.11. Let Λ = {Λ(B), B ∈ Bb(S)} be a L´evy basis with characteristic quadruplet (γ, σ2, ν, π), +f : (0, ∞) × R × Rd → R a B(S)-measurable function defined as f(A, s, ξ) = exp(−As), and At(x) be +defined in (4). Moreover, let l(A) be the density of π with respect to the Lebesgue measure such that +� ∞ +0 +1 +Ad+1 l(A)dA ≤ ∞. +Then, the MSTOU defined as +Zt(x) := +� ∞ +0 +� +At(x) +exp(−A(t − s))Λ(dA, ds, dξ) +(20) +is adapted to the filtration F, stationary and its distribution is ID. Moreover, let u ∈ Rd and τ ∈ R, and +E[Zt(x)2] ≤ ∞ +Cov(Zt(x), Zt+τ(x + u)) = V ar(Λ′) exp(−Aτ) +� ∞ +0 +� +At(x)∩At+τ (x+u) +exp(−2A(t − s))ds dξ l(A)dA, +(21) +Corr(Zt(x), Zt+τ(x + u)) = +exp(−Aτ) +� ∞ +0 +� +At(x)∩At+τ (x+u) exp(−2A(t − s)) ds dξ f(A)dA +� ∞ +0 +� +At(x) exp(−2A(t − s)) ds dξ l(A)dA +. +(22) +Example 2.12. Let l(A) = +βα +Γ(α)Aα−1 exp(−βA), the Gamma density with shape and rate parameters, +α > d + 1 and β > 0. For d = 1, u ∈ R and τ ∈ R +V ar(Zt(x)) = +V ar(Λ′)cβ2 +2(α − 2)(α − 1) +(23) +Cov(Zt(x), Zt+τ(x + u)) = +V ar(Λ′)cβα +2(β + max{|τ|, |u|/c})α−2(α − 2)(α − 1), +(24) +ρST (τ, u) := Corr(Zt(x), Zt+τ(x + u)) = +� +β +β + max{|τ|, |u|/c} +�α−2 +. +(25) +9 + +2.4 +Isotropy, short and long range dependence +Definition 2.13 (Isotropy). Let t ∈ R and x ∈ Rd. A spatio-temporal random field (Zt(x))(t,x)∈R×Rd is +called isotropic if its spatial covariance: +Cov(Zt(x), Zt(x + u)) = C(|u|), +for some positive definite function C. +STOU and MSTOU processes defined on cone-shaped ambit sets are isotropic random fields. +Moreover, we consider the following definitions of temporal and spatial short and long-range depen- +dence in the paper, as given in [48]. +Definition 2.14 (Short and long range dependence). A spatio-temporal random field (Zt(x))(t,x)∈R×Rd +is said to have temporal short-range dependence if +� ∞ +0 +Cov(Zt(x), Zt+τ(x)) dτ < ∞, +and long-range temporal dependence if the integral above is infinite. Similarly, an isotropic random field +has short-range spatial dependence if +� ∞ +0 +C(r) dr < ∞, +where Cov(Zt(x), Zt(x + u)) = C(|u|) and r = |u|. It is said to have long-range spatial dependence if +the integral is infinite. +If (Zt(x))(t,x)∈R×Rd is, for example, an STOU process then Z admits temporal and spatial short- +range dependence. When Z is an MSTOU process, short and long-range dependence models can be +obtained by carefully modeling the distribution of the random parameter A. +Example 2.15. Let us consider the model discussed in Example 2.12. Set u = 0, then Z has temporal +short-range dependence for α > 3, because +� ∞ +0 +Cov(Zt(x), Zt+τ(x)) dτ = +cβαV ar(Λ′) +2(α − 2)(α − 1) +� ∞ +0 +(β + τ)−(α−2) dτ += +cβ3V ar(Λ′) +2(α − 1)(α − 2)(α − 3). +This integral is infinite for 2 < α ≤ 3, and the process has long-range temporal dependence. We obtain +spatial short- or long-range dependence for the same choice of parameters. In fact, for r = |u| and τ = 0, +and α > 3 +� ∞ +0 +C(r) dr = +cβαV ar(Λ′) +2(α − 2)(α − 1) +� ∞ +0 +(β + r/c)−(α−2) dτ += +cβ3V ar(Λ′) +2(α − 1)(α − 2)(α − 3), +converges, whereas the integral above diverges for 2 < α ≤ 3. 
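As a numerical complement to Examples 2.12 and 2.15, the following Python sketch evaluates the temporal covariance (24) for a Gamma(α, β) mixing density and checks the short-range dependence integral against its closed form; the parameter values and the use of scipy.integrate.quad are assumptions made only for this illustration.

import numpy as np
from scipy.integrate import quad

# illustrative (assumed) parameters: d = 1, Gamma(alpha, beta) mixing density,
# unit variance of the Levy seed; alpha > 3 gives short-range dependence
alpha, beta, c, var_seed = 4.0, 1.0, 1.0, 1.0

def cov_temporal(tau):
    # temporal covariance (24) with u = 0
    return var_seed * c * beta**alpha / (
        2.0 * (beta + abs(tau))**(alpha - 2) * (alpha - 2) * (alpha - 1))

def corr(tau, u):
    # spatio-temporal correlation (25): power decay in max(|tau|, |u|/c)
    return (beta / (beta + max(abs(tau), abs(u) / c)))**(alpha - 2)

integral, _ = quad(cov_temporal, 0.0, np.inf)   # numerical dependence integral of Example 2.15
closed_form = var_seed * c * beta**3 / (2.0 * (alpha - 1) * (alpha - 2) * (alpha - 3))
print(integral, closed_form)   # both approximately 0.0833 for these parameter values
print(corr(1.0, 0.5))
# For 2 < alpha <= 3 the integral diverges: temporal (and spatial) long-range dependence.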
+10 + +2.5 +Weak dependence coefficients +MMAFs are θ-lex weakly dependent random fields. For v ∈ N ∪ {∞}}, the latter is a dependence notion +more general than α∞,v-mixing, see Lemma A.5. +Definition 2.16. Let Z = (Zt)t∈R1+d be an Rn-valued random field. Then, Z is called θ-lex-weakly +dependent if +θlex(r) = sup +u,v∈N +θu,v(r) −→ +r→∞ 0, +where +θu,v(r) = sup +�|Cov(F(ZΓ), G(ZΓ′))| +∥F∥∞vLip(G) +� +and F ∈ G∗ +u, G ∈ Gv; Γ = {ti1, . . . , tiu} ⊂ R1+d, Γ′ = {tj1, . . . , tjv} ⊂ R1+d such that |Γ| = u, |Γ′| = v +and Γ ⊂ V r +Γ′ = �v +l=1 V r +tjl for tjl ∈ Γ′. We call (θlex(r))r∈R+ the θ-lex-coefficients. +In the MMAF modeling framework, we can show general formulas for the computation of upper +bounds of the θ-lex coefficients. The latter are given as a function of the characteristic quadruplet of the +driving L´evy basis Λ and the kernel function f in (2). +Proposition 2.17. Let Λ be an R-valued L´evy basis with characteristic quadruplet (γ, σ2, ν, π), f : +H × R1+d → R a B(H × R1+d)-measurable function and Zt(x) be defined as in (2). +(i) If +� +|x|>1 x2ν(dx) < ∞, γ+ +� +|x|>1 xν(dx) = 0 and f ∈ L2(H ×R1+d, π×λ1+d), then Z is θ-lex-weakly +dependent and +θlex(r) ≤ 2 +� � +H +� ρ(r) +−∞ +V ar(Λ′) +� +∥ξ∥≤cs +f(A, −s, −ξ)2dsdξπ(dA) +� 1 +2 . +(ii) If +� +|x|>1∥x∥2ν(dx) < ∞ and f ∈ L2(H × R1+d, π × λ1+d) ∩ L1(H × R1+d, π × λ1+d), then Z is +θ-lex-weakly dependent and +θlex(r) ≤ 2 +� � +H +� ρ(r) +−∞ +V ar(Λ′) +� +∥ξ∥≤cs +f(A, −s)2 dsdξπ(dA) ++ +���� +� +S +� ρ(r) +−∞ +E(Λ′) +� +∥ξ∥≤cs +f(A, −s) dsdξπ(dA) +���� +2� 1 +2 +. +(iii) If +� +R |x| ν(dx) < ∞, σ2 = 0 and f ∈ L1(H × R1+d, π × λ1+d) with γ0 defined in (13), then Z is +θ-lex-weakly dependent and +θlex(r) ≤ 2 +� � +H +� ρ(r) +−∞ +� +∥ξ∥≤cs +|f(A, −s)γ0| ds dξπ(dA) ++ +� +H +� ρ(r) +−∞ +� +∥ξ∥≤cs +� +R +|f(A, −s)x| ν(dx) dsπ(dA) +� +. +11 + +The results above hold for all r > 0 with +ρ(r) = +−r min(1/c, 1) +� +(d + 1)(c2 + 1) +, +(26) +V ar(Λ′) = σ2 + +� +R x2 ν(dx) and E(Λ′) = γ + +� +|x|≥1 xν(dx). +Given the results of Proposition 2.17, it is then possible to compute upper bounds for the θ-lex- +coefficients of an MSTOU process. +Corollary 2.18. Let Z be an MSTOU process as in Definition 2.11 and (γ, σ2, ν, π) be the characteristic +quadruplet of its driving L´evy basis. Moreover, let the mean reversion parameter A be Gamma(α, β) +distributed with density l(A) = +βα +Γ(α)Aα−1 exp(−βA) where α > d + 1 and β > 0. +(i) If +� +|x|>1 x2 ν(dx) < ∞ and γ + +� +|x|>1 xν(dx) = 0, then Z is θ-lex-weakly dependent. Let c ∈ [0, 1], +then for +d = 1, +θlex(h) ≤ 2 +�cV ar(Λ′)βα +2Γ(α) +� +Γ(α − 2) +(2ψ(h) + β)α−2 + 2ψ(h)Γ(α − 1) +(2ψ(h) + β)α−1 +�� 1 +2 +, +and for +d ≥ 2, +θlex(h) ≤ 2 +� +Vd(c)d!V ar(Λ′)βα +2d+1 +d +� +k=0 +(2ψ(h))k +k!(2ψ(h) + β)α−d−1+k +Γ(α − d − 1 + k) +Γ(α) +� 1 +2 +. +Let c > 1, then for +d ∈ N, θlex(h) ≤ 2 +� +Vd(c)d!V ar(Λ′)βα +2d+1 +d +� +k=0 +� +2ψ(h) +c +�k +k! +� +2ψ(h) +c ++ β +�α−d−1+k +Γ(α − d − 1 + k) +Γ(α) +� 1 +2 +. +The above implies that, in general, θlex(h) = O(h +(d+1)−α +2 +). +(ii) If +� +R |x| ν(dx) < ∞, Σ = 0 and γ0 as defined in (13), then Zt(x) is θ-lex-weakly dependent. Let +c ∈ (0, 1], then for +d ∈ N, +θlex(h) ≤ 2Vd(c)d!βαγabs +d +� +k=0 +ψ(h)k +k!(ψ(h) + β)α−d−1+k +Γ(α − d − 1 + k) +Γ(α) +, +whereas for c > 1 and +d ∈ N, +θlex(h) ≤ 2Vd(c)d!βαγabs +d +� +k=0 +� +ψ(h) +c +�k +k! +� +ψ(h) +c ++ β +�α−d−1+k +Γ(α − d − 1 + k) +Γ(α) +, +where γabs = |γ0| + +� +R |x|ν(dx), and Vd(c) denotes the volume of the d-dimensional ball with radius +c. the above implies that, in general, θlex(h) = O(h(d+1)−α). 
+Proof. Proof of this corollary can be obtained by modifying the proof of [22, Section 3.7] in line with the +calculations performed in Proposition 2.17. +12 + +For d = 1, 2, when the MMAF has a kernel function with no spatial component, we can compute a +more explicit bound for the θ-lex coefficients. +Proposition 2.19. Let Λ be an R-valued L´evy basis with characteristic triplet (γ, σ2, ν, π) and f : H × +R → R a B(H × R)-measurable function not depending on the spatial dimension, i.e. +Zt(x) = +� +H +� +At(x) +f(A, t − s)Λ(dA, ds, dξ), +(t, x) ∈ R1+d. +(27) +(i) For d = 1, if that +� +|x|>1 x2ν(dx) < ∞ and γ + +� +|x|>1 xν(dx) = 0, then Z is θ-lex weakly dependent +and +θlex(r)≤2 +� +2V ar(Λ′) +� ∞ +0 +� +A0(0)∩A0(r min(2,c)) +f(A, −s)2dsdξπ(dA) +�1/2 +=2 +� +2Cov(Z0(0), Z0(r min(2, c))), +where V ar(Λ′) = σ2 + +� +R x2 ν(dx). +(ii) For d = 2, if +� +|x|>1 x2ν(dx) < ∞ and γ + +� +|x|>1 xν(dx) = 0, then then Z is θ-lex weakly dependent +and +θlex(r) ≤ 2 +� +2Cov +� +Z0(0, 0), Z0 +� +r min +� +1, c +√ +2 +� +, r min +� +1, c +√ +2 +��� ++ 2Cov +� +Z0(0, 0), Z0 +� +r min +� +1, c +√ +2 +� +, −r min +� +1, c +√ +2 +��� �1/2 +. +Assumption 2.20. In general, we indicate the bounds of the θ-lex coefficients determined in Proposition +2.17, Corollary 2.18 or Proposition 2.19 using the sequence (˜θlex(r)))r∈R+ where +θlex(r) ≤ 2˜θlex(r). +Example 2.21. Let d = 1 and Z be an STOU as in Definition 2.9. If +� +|x|>1 x2 ν(dx) ≤ ∞, γ + +� +|x|>1 x ν(dx) = 0, then Z is θ-lex weakly dependent with +˜θlex(r) = +� � +A0(0)∩(A0(ψ)∪A0(−ψ)) +exp(2As)ds dξ +� 1 +2 += +� +V ar(Λ′) +� +A0(0)∩(A0(ψ)∪A0(−ψ)) +exp(2As) ds dξ +� 1 +2 +≤ +� +2V ar(Λ′) +� +A0(0)∩A0(ψ) +exp(2As) ds dξ +� 1 +2 += +� +2V ar(Λ′) +� − ψ +2c +−∞ +� −cs +ψ+cs +exp(2As) ds dξ +� 1 +2 = +� c +A2 V ar(Λ′) exp +�−Aψ +c +�� 1 +2 +13 + += +� c +A2 V ar(Λ′) exp +� +− A min(2, c) +c +� +�� +� +2λ +r +�� 1 +2 += +� +2Cov(Z0(0), Z0(r min(2, c))) := ¯α exp(−λr), +where λ > 0 and ¯α > 0. Because the temporal and spatial autocovariance functions of an STOU are +exponential, see (15), the model admits spatial and temporal short-range dependence. +Example 2.22. Let d = 1 and Z be an MSTOU as in Definition 2.11. Moreover, let us define the density +l(A) as in Example 2.12. If +� +|x|>1 x2 ν(dx) ≤ ∞, γ + +� +|x|>1 x ν(dx) = 0, then Z is θ-lex weakly dependent +with +˜θlex(r) ≤ +� c +A2 V ar(Λ��) +� ∞ +0 +exp +�−Aψ +c +� +π(dA) +� 1 +2 += +� +V ar(Λ′)cβα +(β + ψ/c)α−2(α − 2)(α − 1) +� 1 +2 += +� V ar(Λ′)cβα +(α − 2)(α − 1) +� +β + r min(2, c) +c +�−(α−2)� 1 +2 += +� +2Cov(Z0(0), Z0(r min(2, c))) := ¯αr−λ, +where λ = α−2 +2 +and ¯α > 0. As already addressed in Example 2.15, for 2 < α ≤ 3, that is 0 < λ ≤ 1 +2, +the model admits temporal and spatial long-range dependence whereas for α > 3, that is λ > 1 +2 the model +admits temporal and spatial short range dependence. +Statistical inference for STOU and MSTOU processes is reviewed in Appendix A. Such method- +ologies can be applied to the entire class of influenced mixed moving average fields (as long as certain +moment conditions are fulfilled). We remind the reader that the parameter c is involved in the definition of +the ambit set At(x) and therefore of the lightcone (3) which we assume modeling the causal relationships +between spatial-time points. Other parameters of interest appear in the kernel function or represent the +variance of the L´evy seed Λ′. 
We can then determine an estimate of the decay rate of the θ-lex coefficients, +which is given, for example, by the parameter λ in Examples 2.21 and 2.22. +In the MMAF framework, we can also find time series models. The latter are θ-weakly dependent, +a notion of dependence satisfied by causal stochastic processes and defined as follows. +Definition 2.23. Let Z = (Zt)t∈R be an Rn-valued stochastic process. Then, Z is called θ-weakly +dependent if +θ(k) = sup +u∈N +θu(k) −→ +k→∞ 0, +where +θu(k) = sup +�|Cov(F(Zi1, . . . , Zi1), G(Zj1))| +∥F∥∞Lip(G) +� +. +and F ∈ Gu, G ∈ G1; i1 ≤ i2 ≤ . . . ≤ iu ≤ iu + k ≤ j1. We call (θ(K))k∈R+ the θ-coefficients. +Example 2.24 (Time series case). The supOU process studied in [6] and [11] is an example of a causal +mixed moving average process. Let the kernel function f(A, s) = e−As1[0,∞)(s), A ∈ R+, s ∈ R and Λ a +14 + +L´evy basis on R+ × R with generating quadruple (γ, σ2, ν, π) such that +� +|x|>1 +log(|x|) ν(dx) < ∞, and +� +R+ +1 +Aπ(dA) < ∞, +(28) +then the process +Zt = +� +R+ +� t +−∞ +e−A(t−s) Λ(dA, ds) +(29) +is well defined for each t ∈ R and strictly stationary and called a supOU process where A represents a +random mean reversion parameter. +If E(Λ′) = 0 and +� +|x|>1 |x|2ν(dx) < ∞, the supOU process is θ-weakly dependent with coefficients +θZ(r) ≤ +� � +R+ +� r +−∞ +e−2Asσ2 ds π(dA) +� 1 +2 = +� +V ar(Λ′) +� +R+ +e−2Ar +2A +π(dA) +� 1 +2 +(30) += Cov(Z0, Z2r) +1 +2 , +by using Theorem 3.11 [11] and where V ar(Λ′) = σ2 + +� +R x2ν(dx). +If E(Λ′) = µ and +� +|x|>1 |x|2ν(dx) < ∞, the supOU process is θ-weakly dependent with coefficients +θZ(r) ≤ +� +Cov(Z0, Z2r) + +4µ2 +V ar(Λ′)2 Cov(Z0, Zr)2� 1 +2 . +(31) +If +� +R |x|ν(dx) < ∞, σ2 = 0, γ0 = γ − +� +|x|≤1 x ν(dx) > 0 and ν(R−) = 0, then the supOU process admits +θ-coefficients +θZ(r) ≤ µ +� +R+ +e−Ar +A +π(dA), +(32) +and when in addition +� +|x|>1 |x|2ν(dx) < ∞ +θZ(r) ≤ +2µ +V ar(Λ′)Cov(Z0, Zr). +(33) +Note that the necessary and sufficient condition +� +R+ 1 +A π(dA) for the supOU process to exist is +satisfied by many continuous and discrete distributions π, see [64, Section 2.4] for more details. For +example, a probability measure π being absolutely continuous with density π′ = xhl(x) and regularly +varying at zero from the right with h > 0, i.e., l is slowly varying at zero, satisfies the above condition. If +moreover, l(x) is continuous in (0. + ∞) and limx→0+ l(x) > 0 exists, it holds that +Cov(Z0, Zr) ∼ C +rh , with a constant C > 0 and r ∈ R+ +where for h ∈ (0, 1) the supOU process exhibits long memory and for h > 1 short memory. In this set-up, +concrete examples where the covariances are calculated explicitly can be found in [8] and [20]. +Another interesting example of mixed moving average processes is given by the class of trawl pro- +cesses. A distinctive feature of the class of trawl processes is that it allows one to model its correlation +structure independently from its marginal distribution, see [9] for further details on their definition. In the +case of trawl processes, we also have available in the literature likelihood-based methods for estimating +15 + +their parameters; see [13] for further details. In general, the generalized method of moments is employed +to estimate the parameters of an MMA process, see [20]. +3 +Mixed moving average field guided learning +3.1 +Pre-processing N frames: input-features extraction method +Our methodology is designed to work for spatio-temporal data when the spatial dimension d ≥ 1. 
Without loss of generality, we describe the procedure for d = 2, i.e., for frame image data through time ( ˜Zt(x))(t,x)∈T×L following the decomposition (1). We represent the regular spatial lattice L as a frame made of a finite number of pixels, i.e., squared cells, each of them representing a unique spatial position x ∈ R2, see Figure 1.
Figure 1: Space-time grid with origin in (t0, x0).
In several applications, such as satellite imagery, a pixel refers to a spatial cell of several square meters. In this paper, a pixel refers to the spatial point x ∈ R2 corresponding to the center of the squared cell, and we use the terms pixel and spatial position interchangeably throughout. Moreover, we call (t0, x0) the origin of the space-time grid, see Figure 1, and ht and hs the time and space discretization steps of the observed data set. Mixed moving average field guided learning aims to determine a one-step-ahead prediction of the field Z at a given pixel position x∗ not belonging to the frame boundary.
Definition 3.1. Let (X, Y )⊤ be an input-output vector, H a model class, and h ∈ H a predictor function. We define the loss function L as
L(h(X), Y ) = |Y − h(X)|.    (34)
For an accuracy level ϵ > 0 (specified before learning), we define the truncated absolute loss as
Lϵ(h(X), Y ) = L(h(X), Y ) ∧ ϵ.    (35)
+We now prove three preliminary results needed to define the input-features extraction method at +the end of this section. In conclusion, we use these results to define a training data set, which can preserve +the dependence structure of the data and reduce as much as possible the number of past and neighboring +space-time points used in learning. +Preliminary 1: Let us consider a stationary random field (Zt(x))(t,x)∈Z×L, and select a pixel x∗ +in L not belonging to the boundary of the frame. We define +Xi = L− +p (t0 + ia, x∗), +and +Y i = Zt0+ia(x∗), for i ∈ Z, +where +L− +p (t, x) = (Zi1(ξ1), . . . , Zia(p,c)(ξa(p,c)))⊤, for +{(is, ξs) : ∥x − ξs∥ ≤ c (t − is) and 0 < t − s ≤ p} := I(t, x), c > 0 and a(p, c) := |I(t, x)|. The +parameters a > 0 and p > 0 are multiple of ht such that a = atht, p = ptht, at, pt ∈ N and +at ≥ pt + 1. Moreover, we assume throughout to store in the vectors Xi values of the fields Z in +17 + +Figure 2: Time and spatial indices in the definition of the random vector (Xi, Y i)⊤ for c = 1, pt = 2, at = +3, ht = hs = 1. In particular, the green and red pixels represent the time-spatial indices identifying the +elements of Z belonging to Xi and Y i, respectively. This sampling scheme cannot be performed starting +by a pixel x∗ at the boundary of the frame. +lexicographic order. Then, (Xi, Y i)⊤ are identically distributed for all i ∈ Z. We call ((Xi, Y i)⊤)i∈Z +a cone-shaped sampling process. An example of such a sampling scheme can be found in Figure 2. +Preliminary 2: Let us analyze the dependence structure of the stochastic processes (L(h(Xi), Y i))i∈Z +and (Lϵ(h(Xi), Y i))i∈Z for h a Lipschitz function belonging to the model class H := {h ∈ L(Ra(p,c))}. +Proposition 3.4. Let ((Xi, Y i)⊤)i∈Z be a cone-shaped sampling process from a stationary and +θ-lex weakly dependent random field (Zt(x))(t,x)∈R×Rd. Then for all h ∈ H, (L(h(Xi), Yi))i∈Z is a +θ-weakly dependent process. Moreover, for a, p > 0, k ∈ N, and r = ka − p > 0, it has coefficients +θ(k) ≤ ˜d(Lip(h)a(p, c) + 1) +�2 +˜d +E[|Zt(x) − Z(r) +t (x)|] + θlex(r) +� +, +(38) +where ˜d > 0 is a constant independent of r, and Z(r) +t (x) := Zt(x) ∧ r. +Remark 3.5 (Locally Lipschitz predictor). Let the predictor h be a locally Lipschitz function such +that h(0) = 0 and +|h(x) − h(y)| ≤ ˜c ∥x − y∥1(1 + ∥x∥1 + ∥y∥1) for x, y ∈ Ra(p,c), +for ˜c > 0. Moreover, let Z be a stationary and θ-weakly dependent random field such that |Z| ≤ C +a.s. An easy generalization of Proposition 3.4, leads to show that (L(h(Xi), Yi))i∈Z is θ-weakly +dependent with coefficients +θ(k) ≤ ˜d(˜c(1 + 2C)a(p, c) + 1) +�2 +˜d +E[|Zt(x) − Z(r) +t +(x)|] + θlex(r) +� +, +(39) +where r = ka − p > 0 for a, p > 0 and k ∈ N, ˜d > 0 is a constant independent of r, and +Z(r) +t +(x) := Zt(x) ∧ r. +MMAFs have a particular definition that allows us to obtain an explicit bound for the coefficients +18 + +Timeθlex(r), as proven in Propositions 2.17 and 2.19, and a more refined bound than (38) for the θ- +coefficients of the cone-shaped sampling process. We consider the following assumptions. +Assumption 3.6. Let T = {t0, t1, . . . , tN, N ∈ N} and L ⊂ R2, then the data set (Zt(x))(t,x)∈T×L +is drawn from (Zt(x))(t,x)∈Z×L. Moreover, the latter can be derived from a regular sampling of the +MMAF Z = (Zt(x))(t,x)∈R×R2, where Z ∈ L2(Ω). +Proposition 3.7. Let Assumption 2.20 and 3.6 hold. Then (L(h(Xi), Yi))i∈Z is θ-weakly dependent +for all h ∈ H with coefficients +θ(k) ≤ 2(Lip(h)a(p, c) + 1)�θlex(r), +(40) +where r = ka − p > 0 for a, p > 0 and k ∈ N. 
Moreover, for linear predictors, i.e., hβ(X) = +β0 + βT +1 X, for β := (β0, β1)⊤ ∈ B and B = Ra(p,c)+1, we have that (L(hβ(Xi), Yi))i∈Z is θ-weakly +dependent for all β ∈ B with coefficients +θ(k) ≤ 2(∥β1∥1 + 1)�θlex(r). +(41) +Lemma A.3 straightforwardly imply that the process Lϵ := (Lϵ(h(X)i, Y i)))i∈Z is θ-weakly depen- +dent with known θ-coefficients under the assumptions of Proposition 3.4, Remark 3.5 and Proposi- +tion 3.7. +Preliminary 3: θ-weakly dependent processes have an essential role in the paper because of the +property of the process Lϵ. We analyze now an exponential bound for such processes. +Proposition 3.8. Let (Xi)i∈Z be a Rn valued stationary θ-weakly dependent process, and f : Rn → +[a, b], for a, b ∈ R such that (f(Xi))i∈Z is itself θ-weakly dependent. Let l = ⌊ m +k ⌋, for m, k ∈ N, +such that l ≥ 2 and 0 < s < +3l +|b−a|, then +E +� +exp +� s +m +m +� +i=1 +(f(Xi) − E[f(Xi)]) +�� +≤ exp +� +s2V ar(f(X)) +2l +� +1 − s|b−a| +3l +� +� ++ s +l exp(l − 1 + s|b − a|)θ(k), (42) +E +� +exp +� s +m +m +� +i=1 +(E[f(Xi)] − f(Xi)) +�� +≤ exp +� +s2V ar(f(X)) +2l +� +1 − s|b−a| +3l +� +� ++ s +l exp(l − 1 + s|b − a|)θ(k). (43) +Remark 3.9. The assumptions of Proposition 3.8 hold, for example, if f is a Lipschitz function +or satisfies the assumptions of [21, Section 7]. Moreover, the assumption of Proposition 3.8 hold +for the function Lϵ(h(·), ·) applied to the process ((Xi, Y i)⊤)i∈Z under the assumptions analyzed in +the Preliminary 2. +We can now define a training data set S = {(X1, Y1)⊤, (X2, Y2)⊤, . . . , (Xm, Ym)⊤} as a draw from +the cone-shaped sampling process defined in the Preliminary 1, namely, +Xi = L− +p (t0 + ia, x∗), +and +Yi = Zt0+ia(x∗) for i = 1, . . . , m := +�N +at +� +, +(44) +where +19 + +L− +p (t, x) = (Zi1(ξ1), . . . , Zia(p,c)(ξa(p,c)))⊤, where (is, ξs) ∈ I(t, x) for s = 1, . . . , a(p, c). +(45) +We assume that the parameters a, p, and k follow the constraints in Table 1. Moreover, each parameter +has a precise interpretation, also summarized here. We note that each element of L− +p (t, x) belongs to +l−(t, x), where l−(t, x) is defined in (6) and identifies the set of all points in R × Rd that could possibly +influence the realization Zt(x). +Parameters +Constraints +Interpretation +a := atht +pt + 1 ≤ at ≤ +� +N +2 +� +translation vector +p := ptht +1 ≤ pt < +� +N +2 +� +− 1 +past time horizon +k +k ∈ +� +1, . . . , +� +N +at +�� +index of the θ-coefficient in Prop. 3.8. +m := N +at +2 < m < N +number of examples in S +c +c > 0 +speed of information propagation +λ +λ > 0 +decay rate of the θ-lex coefficients of Z +Table 1: Parameters involved in pre-processing N observed frames. +For each observed data set, we know the value of the constants N, ht, and hs. The remaining +parameters appearing in Table 1 have to be opportunely chosen. We explain in this section how to select +the parameter at and k by assuming the parameters pt, c, and λ are known. The selection of the parameter +pt is detailed in Section 3.3 for linear predictors, and c and λ are typically estimated from (Zt(x))(t,x)∈T×L, +see exemplary estimation methods in Sections A.2 and A.3. +The parameter at and k are selected to offset the lack of independence of the process Lϵ which is +θ-weakly dependent, as proven in the Preliminary 2. In Proposition 3.10, and Corollary 3.11 and 3.13, +we show an important building block in the proof of the PAC Bayesian bounds obtained in the next +section, which make use of an opportune selection of the parameters at and k. 
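To make the cone-shaped sampling scheme (44)-(45) concrete, the following Python sketch extracts training examples (Xi, Yi) from an array of N frames for a pixel x∗ away from the frame boundary; the array layout (time, row, column), the helper names, and the default step sizes are assumptions of this illustration, and the parameters c, pt, at must respect the constraints in Table 1.

import numpy as np

def past_cone_offsets(c, p_t, h_t=1.0, h_s=1.0):
    # index offsets (dt, dx1, dx2) of the truncated past lightcone I(t, x):
    # 0 < dt <= p_t time steps into the past and ||dx|| <= c * dt * h_t,
    # listed in lexicographic order of (time, x1, x2); their number plays
    # the role of a(p, c) under this discretization
    offsets = []
    for dt in range(p_t, 0, -1):
        radius = c * dt * h_t
        r_cells = int(np.floor(radius / h_s))
        for dx1 in range(-r_cells, r_cells + 1):
            for dx2 in range(-r_cells, r_cells + 1):
                if np.hypot(dx1 * h_s, dx2 * h_s) <= radius:
                    offsets.append((dt, dx1, dx2))
    return offsets

def build_training_set(Z, x_star, c, p_t, a_t, h_t=1.0, h_s=1.0):
    # Z: array of shape (N, n1, n2) with Z[t, i, j] the field value at time t and pixel (i, j);
    # x_star = (i, j) must be far enough from the boundary for all offsets to fit
    offsets = past_cone_offsets(c, p_t, h_t, h_s)
    X, Y = [], []
    for t in range(a_t, Z.shape[0], a_t):      # a_t >= p_t + 1 as required by Table 1
        X.append([Z[t - dt, x_star[0] + dx1, x_star[1] + dx2] for dt, dx1, dx2 in offsets])
        Y.append(Z[t, x_star[0], x_star[1]])
    return np.asarray(X), np.asarray(Y)        # roughly floor(N / a_t) examples, cf. (44)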
Such a result undergoes +minor changes when working in the class of linear predictors, feed-forward neural networks, or Lipschitz +functions. As exemplary, we show this result for linear predictors and discuss how to modify the proof +for more general model classes. We use the notation ∆(h) = Rϵ(h) − rϵ(h) and ∆′(h) = rϵ(h) − Rϵ(h) +when referring to a Lipschitz predictor; ∆(β) = Rϵ(β) − rϵ(β) and ∆′(β) = rϵ(β) − Rϵ(β), where we +call Rϵ(β) and rϵ(β) the generalization and empirical error, in the linear framework; and ∆(hnet,w) = +Rϵ(hnet,w)−rϵ(hnet,w) and ∆′(hnet,w) = rϵ(hnet,w)−Rϵ(hnet,w), where we call Rϵ(hnet,w) and rϵ(hnet,w) +the generalization and empirical error, in the case we use a feed-forward neural network. +Proposition 3.10. Let Assumption 2.20 and 3.6 hold, the parameter pt be known and r = ht(kat − pt). +Moreover, let us assume that H = {hβ(X) = β0 + βT +1 X, for β := (β0, β1)⊤ ∈ B} where B ∈ R(a(p,c)+1). +(i) Let 0 < ϵ < 3, l ≥ 9, β ∈ B and Z be a field with exponential decaying θ-lex coefficients, i.e., +�θlex(r) = ¯α exp(−λr) +for ¯α > 0, λ > 0. If +atk ≥ (λp − 1) + +� +(λp − 1)2 + 8λhtN +2λht +, +(46) +20 + +then +E +� +exp +�√ +l ∆(β) +�� +≤ exp +� +3ϵ2 +2(3 − ϵ) +� ++ 2(∥β1∥1 + 1)¯α, +(47) +E +� +exp +�√ +l ∆′(β) +�� +≤ exp +� +3ϵ2 +2(3 − ϵ) +� ++ 2(∥β1∥1 + 1)¯α. +(48) +(ii) Let 0 < ϵ < 3, l ≥ 9, β ∈ B and Z be a field with power decaying θ-lex coefficients +�θlex(r) = ¯αr−λ +for ¯α > 0. If +2N − atk +atk log(ak − p) ≤ λ, +(49) +then the inequalities (47) and (48) hold. +By substituting the bound (41) with (40) in the proof of Proposition 3.10 we obtain the following +result. +Corollary 3.11. Let Assumption 2.20 and 3.6 hold, the parameter pt be known and r = ht(kat − pt). +Moreover, let us assume that H = {h(X) ∈ L(Ra(p,c))}. Let 0 < ϵ < 3, l ≥ 9, h ∈ H and Z be a field +with exponential decaying θ-lex coefficients, i.e., +�θlex(r) = ¯α exp(−λr) +for ¯α > 0, λ > 0, or with power decaying θ-lex coefficients +�θlex(r) = ¯αr−λ +for ¯α > 0. If condition (46) or (49) hold, then for h ∈ H +E +� +exp +�√ +l ∆(h) +�� +≤ exp +� +3ϵ2 +2(3 − ϵ) +� ++ 2(Lip(h)a(p, c) + 1)¯α, +(50) +E +� +exp +�√ +l ∆′(h) +�� +≤ exp +� +3ϵ2 +2(3 − ϵ) +� ++ 2(Lip(h)a(p, c) + 1)¯α. +(51) +Corollary 3.11 applies to feed-forward neural network predictors as long as we can compute the Lip- +schitz function of the neural network. There exist numerical methods to compute the Lipschitz constants +of (deep) neural networks, see [29] and [40], which enable the computation of the inequalities (50) and +(51) for a given network. We give now an explicit version of the inequality (50) and (51) in the case a +1-layer neural network, which we define below. +Definition 3.12. Let σ : R → R an activation function, we define a 1-layer neural network as +hnet,w(X) = α0 + +K +� +l=1 +αlσ(βT +l X + γl), +where w = (α0, α1, . . . , αK, β1, . . . , βK, γ1, . . . , γK) ∈ R2K+1+Ka(p,c) := B′ for K ≥ 1. +21 + +Many frequently used activation functions are 1-Lipschitz continuous functions, meaning that their +Lipschitz constant equals one. Such property is, for example, satisfied by the activation functions ReLU, +Leaky ReLU, SoftPlus, Tanh, Sigmoid, ArcTan, or Softsign. We can then prove the following result. +Corollary 3.13. Let Assumption 2.20 and 3.6 hold, the parameter pt be known and r = ht(kat − pt). +Moreover, let us assume that H = {hnet,w(X) ∈ B′}, where the networks are defined following Definition +3.12 with a 1-Lipschitz activation function σ. 
Let 0 < ϵ < 3, l ≥ 9, w ∈ B′ and Z be a field with +exponential decaying θ-lex coefficients, i.e., +�θlex(r) = ¯α exp(−λr) +for ¯α > 0, λ > 0, or with power decaying θ-lex coefficients +�θlex(r) = ¯αr−λ +for ¯α > 0. If condition (46) or (49) hold, then +E +� +exp +�√ +l ∆(hnet,w) +�� +≤ exp +� +3ϵ2 +2(3 − ϵ) +� ++ 2( +� +l +|αl|∥βl∥1 + 1)¯α, +(52) +E +� +exp +�√ +l ∆′(hnet,w) +�� +≤ exp +� +3ϵ2 +2(3 − ϵ) +� ++ 2( +� +l +|αl|∥βl∥1 + 1)¯α. +(53) +Remark 3.14. Similarly, as in Corollary 3.13, using the bounds (38) or (39), Proposition 3.10 can be +generalized for Lipschitz and locally Lipschitz predictors, respectively, under the assumption that the data +are generated by a stationary θ-lex weakly dependent random field. +We then pre-process the data to obtain the largest possible training data set S starting by N +observed frames. +Assumption 3.15. We select k∗ being the minimum positive constants that satisfy (46) or (49) and a∗ +t +as the minimum constant satisfying (46) or (49) such that a∗ +t ≥ pt + 1. Therefore, +m = +� N +a∗ +t +� +and l = +� m +k∗ +� +. +Remark 3.16. Let us assume to work with linear predictors, for C > 1, if we choose +atk ≥ (λp − 1 − log(1/C)) + +� +(λp − 1 − log(1/C))2 + 8λhtN +2λht +, +(54) +or +2N − atk(1 + log(1/C)) +atk log(ak − p) +≤ λ, +(55) +then +E +� +exp +�√ +l ∆(β) +�� +≤ exp +� +3ϵ2 +2(3 − ϵ) +� ++ 2 +C (∥β1∥1 + 1)¯α, +(56) +E +� +exp +�√ +l ∆′(β) +�� +≤ exp +� +3ϵ2 +2(3 − ϵ) +� ++ 2 +C (∥β1∥1 + 1)¯α. +(57) +22 + +For a fixed N, this means that we can obtain tighter bounds than in (47) and (48). However, using such +results instead of Proposition 3.10 must be evaluated on a case-by-case basis. Typically, we obtain smaller +training data sets if we have to satisfy the inequalities (54) and (55). Without loss of generality, therefore, +we consider throughout just the type of bounds analyzed in Proposition 3.10, which results in Assumption +3.15. +Remark 3.17. The input-features extraction method discussed in this section, Proposition 3.10, and +Corollary 3.11 and 3.13 straightforwardly apply to a θ-weakly dependent time series models Z. In such +case, the parameter c = 0, (Xi, Y i)⊤ +i∈Z is a flat cone-shaped sampling scheme, and straightforwardly it +is a θ-weakly dependent process with coefficients satisfying the bound (38). +3.2 +PAC Bayesian bounds for MMAF generated data +We start with a brief introduction to generalized Bayesian learning. +Firstly, let π be a reference distribution on the space (H, T ), where T indicates a σ-algebra on the +space H. The reference distribution gives a structure on the space H, which we can interpret as our belief +that certain models will perform better than others. The choice of π, therefore, is an indirect way to make +the size of H come into play; see [16, Section 3] for a detailed discussion on the latter point. Therefore, +π belongs to M1 ++(H), which denotes the set of all probability measures on the measurable set (H, T ). +Secondly, let S be a training data set with m examples and values in X × Y. Moreover, let us call P +the distribution of the random vector S. We then aim to determine a posterior distribution, also called a +randomized estimator, which is the regular conditional probability measure +ˆρ : (X × Y)m × T → [0, 1], +satisfying the following properties: +• for any A ∈ T , the map S → ˆρ(S, A) : (X × Y)m × T → [0, 1] is measurable; +• for any S ∈ (X × Y)m, the map A → ˆρ(S, A) : T → [0, 1] is a probability measure in M1 ++(B). +From now on, we indicate with π[·], ˆρ[·] the expectations w.r.t. 
the reference and posterior distribu- +tions (where the latter is a conditional expectation w.r.t. S), and simply with E[·] the expectation w.r.t. +the probability distribution P. Moreover, we call ˆρ[Rϵ(h)] and ˆρ[rϵ(h)] the average generalization error +and the average empirical error, respectively. +To evaluate the generalization performance of a randomized predictor ˆρ and then have a criterion +to select such distribution, we determine a PAC bound. The latter is called in this framework a PAC +Bayesian bound and is used to give with high probability a bound on the (average) generalization gap +defined as ˆρ[Rϵ(h)] − ˆρ[rϵ(h)]. We then choose a predictor having the best generalization performance in +the model class H, see also discussion in Remark 3.3, by minimizing the PAC Bayesian bound. +As exemplary, we first show a PAC Bayesian bound for linear predictors and then discuss how to +modify the bounds to obtain probabilistic inequalities valid for more general model classes. In general, +for a given measurable space (H, T ) and for any (ρ, π) ∈ M1 ++(H)2, we indicate with +KL(ρ||π) = +� +ρ +� +log dρ +dπ +� +if ρ << π ++∞ +otherwise +23 + +the Kullback-Leibler divergence. When the model class is given in dependence of a set of parameters, as +in the case of linear predictors, T indicates the σ-algebra on the parameter space B. We then obtain the +following result. +Proposition 3.18 (PAC Bayesian bound). Let 0 < ϵ < 3, Assumption 3.6 hold and let the underlying +MMAF Z be a random field with exponential or power decaying θ-lex coefficients. Moreover, let l be +selected following Assumption 3.15 and π be a distribution on B such that π[∥β1∥1] ≤ ∞, then for any ˆρ +such that ˆρ << π, and δ ∈ (0, 1) +P +� +ˆρ[Rϵ(β)] ≤ ˆρ[rϵ(β)] + +� +KL(ˆρ||π) + log +�1 +δ +� ++ +3ϵ2 +2(3 − ϵ) +� 1 +√ +l ++ 1 +√ +l +log +� +π +� +1 + 2(∥β1∥1 + 1)¯α +��� +≥ 1 − δ +(58) +and +P +� +ˆρ[rϵ(β)] ≤ ˆρ[Rϵ(β)] + +� +KL(ˆρ||π) + log +�1 +δ +� ++ +3ϵ2 +2(3 − ϵ) +� 1 +√ +l ++ 1 +√ +l +log +� +π +��� +1 + 2(∥β1∥1 + 1)¯α +��� +≥ 1 − δ. +(59) +Example 3.19. The bound in (58) holds for all randomized estimators ˆρ absolutely continuous w.r.t. a +reference distribution π given a training data set S. Let us assume that the card(B) = M for M ∈ N. We +choose throughout as reference distribution the uniform distribution π on B and the class of randomized +predictors ˆρ = δ ˆ +β, the Dirac mass concentrated on the empirical risk minimizer, i.e., ˆβ := arg infβ rϵ(β). +For a given S, we have that +KL(δβ||π) = +� +β∈B +log +�δ ˆβ{β} +π{β} +� +δ ˆβ{β} = log +1 +π{ˆβ} += log(M). +(60) +It is crucial to notice that the bigger the cardinality of the space B is, the more the term log(M) and the +bound increase. +Therefore, for a given δ ∈ (0, 1) +P +� +Rϵ(ˆβ) ≤ rϵ(ˆβ) + +� +log +�M +δ +� ++ +3ϵ2 +2(3 − ϵ) +� 1 +√ +l ++ 1 +√ +l +log +� +π[1 + 2(∥β1∥1 + 1)¯α] +�� +≥ 1 − δ. +Let us assume to work with an accuracy level ϵ = 1, l = 10000, M = 100, ¯α = 4, δ = 0.05, and +B := {β : ∥β∥1 ≤ 1}. Then, the generalization gap, as defined in Remark 3.3, is less than 0.12 with +probability P of at least 95%. +Remark 3.20. Differently from the i.i.d. setting, the parameter l is not tuned in the bounds obtained in +Proposition 3.18; see [17] and [65] for a discussion on this topic. The choice of the parameter l in the +bounds (58) and (59) is a consequence of Assumption 3.15. The value of l is chosen to offset the lack of +independence in the observed N frames. 
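The arithmetic in Example 3.19 can be verified with a few lines of Python; the computation below simply evaluates the right-hand side of the bound (58) for the Dirac mass at the empirical risk minimizer, upper-bounding the reference expectation by the worst case ∥β1∥1 ≤ 1 on B.

import numpy as np

eps, l, M, alpha_bar, delta = 1.0, 10000, 100, 4.0, 0.05

kl = np.log(M)   # KL(Dirac at beta_hat || uniform on M points), cf. (60)
term1 = (kl + np.log(1.0 / delta) + 3.0 * eps**2 / (2.0 * (3.0 - eps))) / np.sqrt(l)
term2 = np.log(1.0 + 2.0 * (1.0 + 1.0) * alpha_bar) / np.sqrt(l)   # pi[...] is at most 17 on B
print(term1 + term2)   # approximately 0.112 < 0.12, with probability at least 1 - delta = 95%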
+We now determine an oracle inequality for the randomized estimator minimizing the average em- +pirical error, also known as Gibbs posterior distribution or Gibbs estimator, first in the case of a linear +predictor and then in the general case of a Lipschitz one. +24 + +Proposition 3.21 (Oracle type PAC Bayesian bound). Let 0 < ϵ < 3, Assumption 3.6 hold and let +the underlying MMAF Z be a random field with exponential or power decaying θ-lex coefficients. Let l +be selected following Assumption 3.15, π be a distribution on B such that π[∥β1∥1] ≤ ∞, and ¯ρ be a +regular conditional distribution that is absolutely continuous w.r.t. π and has Radon-Nikodym derivative +d¯ρ +dπ = +exp(− +√ +lrϵ(β)) +π[exp(− +√ +lrϵ(β))]. Then, for all δ ∈ (0, 1) +P +� +¯ρ[Rϵ(β)] ≤ inf +ˆρ +� +ˆρ[Rϵ(β)]+ +� +KL(ˆρ||π)+log +�1 +δ +� ++ +3ϵ2 +2(3 − ϵ) +� 2 +√ +l ++ 2 +√ +l +log +� +π[1+2(∥β1∥1+1)¯α] +��� +≥ 1−δ. (61) +By employing Corollary 3.11 instead of Proposition 3.10 in the proof of the oracle inequality, we +obtain the general result below. +Corollary 3.22 (Oracle type PAC Bayesian bound for a general Lipschitz predictor). Let 0 < ϵ < 3, +Assumption 3.6 hold and let the underlying MMAF Z be a random field with exponential or power +decaying θ-lex coefficients. Let l be selected following Assumption 3.15, π be a distribution on H such that +π[Lip(h)] ≤ ∞, and ¯ρ be a regular conditional distribution that is absolutely continuous w.r.t. π and has +Radon-Nikodym derivative d¯ρ +dπ = +exp(− +√ +lrϵ(h)) +π[exp(− +√ +lrϵ(h))]. Then, for all δ ∈ (0, 1) +P +� +¯ρ[Rϵ(h)] ≤ inf +ˆρ +� +ˆρ[Rϵ(h)]+ +� +KL(ˆρ||π)+log +�1 +δ +� ++ +3ϵ2 +2(3 − ϵ) +� 2 +√ +l ++ 2 +√ +l +log +� +π[1+2(Lip(h)a(p, c)+1)¯α] +��� +≥ 1−δ. +(62) +Remark 3.23. The PAC Bayesian bound (62) also applies to Lipschitz neural network predictors. +Better generalization performances are observed, i.e., tighter bounds for the generalization gap, when +Lip(hnet,w) ≤ 1. In the case of a 1-layer neural network, for which we have an explicit bound of the Lip- +schitz constant, we can then obtain an oracle inequality by employing Corollary 3.13 instead of Corollary +3.11 in the proof of the inequality (62). Finally, using the same argument as in Remark 3.14, we can +obtain an oracle inequality also in the case of locally Lipschitz predictors. +The rate at which the bounds (61) and (62) converge to zero as m → ∞ is called rate of convergence. +It allows us to give a measure on how fast the average generalization error of ¯ρ converges to the average +best theoretical risk inf ˆρ ˆρ[Rϵ(h)], which is obtained by the so-called oracle estimator. The rate depends +on the choice made for l = +� +m +k +� +in the pre-processing step. For k = 1, we obtain the fastest possible rate +of O(m− 1 +2 ). +Example 3.24 (Rate of convergence for models with spatial and temporal long range dependence). Let +us consider a data set with N = 10000 frames drawn from a regular sampling of an MSTOU as defined +in Example 2.22 with discretization steps ht = hs = 1. Further, let the parameters at = 1000, k = 1, and +pt = 10 in the pre-processing step of our methodology. From the calculations in Example 2.15, we have +that the underlying model admits temporal and spatial long-range dependence when its θ-lex coefficients +have power decay rate 0 < λ ≤ 1 +2. Because of the relationship (49, a Gibbs estimator applies as long as the +MSTOU admits θ-lex coefficients with λ ≥ 2, 76. 
Pre-processing the data to include a larger number of observations in an example (X, Y ), i.e., letting pt be a larger parameter, changes the range of applicability of the estimator only minimally. If, for example, we choose pt = 999 (the largest possible value given at = 1000), we obtain that λ ≥ 2.09.

To apply the Gibbs estimator to MSTOU processes with long-range dependence, we can choose a k > 1 in the pre-processing step. Note that with this choice, however, the rate of convergence of the PAC Bayesian bounds in (61) and (62) becomes slower than O(m^(-1/2)). If we can also decide how to sample our data along the temporal dimension, we can opt to work with a data set where the temporal discretization step ht > 1. In doing so, the rate of convergence of the Gibbs estimators can be O(m^(-1/2)) also in the long-range dependence framework. Note that log(ht) appears in (49). Therefore, a careful selection of the parameters at, k, and pt has to be made when 0 < λ ≤ 1, such that the sampling frequency is not too low and the rate of convergence of the PAC Bayesian bound remains the desired one.

In the literature, results on the rate of convergence of PAC Bayesian bounds are available only for time series models. We review such results in the remark below to highlight the novelty of the results obtained in this section.

Remark 3.25 (Rate of convergence in PAC Bayesian bounds for heavy-tailed data). In [1], the authors determine an oracle inequality for time series with a rate of O(m^(-1/2)) under the assumption that ((Xi, Y i)⊤)i∈Z is a stationary and α-mixing process [14] with coefficients (αj)j∈Z such that Σ_{j∈Z} αj < ∞. Such a bound employs the chi-squared divergence and holds for unbounded losses; it is important to highlight that the randomized estimator obtained by minimization of the PAC Bayesian bound is not a Gibbs estimator in this framework. An explicit bound for linear predictors can be found in [1, Corollary 2]. This result holds under the assumption that π[∥β∥6] < ∞.

In [3], the authors prove oracle inequalities for a Gibbs estimator for data generated by a stationary and bounded θ∞-weakly dependent process (an extension of the concept of φ-mixing discussed in [55]) or a causal Bernoulli shift process, and a Lipschitz predictor. Models that satisfy the dependence notions just cited are causal Bernoulli shifts with bounded innovations, uniform φ-mixing sequences, and dynamical systems; see [3] for more examples. The oracle inequality is here obtained for an absolute loss function and has a rate of O(m^(-1/2)). An extension of this work to Lipschitz loss functions under φ-mixing [38] can be found in [2]. Here, the authors show that an oracle inequality for a Gibbs estimator with the optimal rate O(m^(-1)) can be achieved. Note that this rate is considered optimal in the literature when using, for example, a squared loss function and independent and identically distributed data [15].

Interesting results in the literature on PAC Bayesian bounds for heavy-tailed data can also be found in [32] and [36]. However, in these works the authors assume that the underlying observations are independent.

We conclude by giving an oracle inequality for the aggregated Gibbs estimator in the general case of a Lipschitz predictor. This result is obtained by using the convexity of the absolute loss function.

Corollary 3.26 (PAC Bound for the average Gibbs predictor).
Let 0 < ϵ < 3, Assumption 3.6 hold and +let Z be a random field with exponential or power decaying θ-lex coefficients such that |Y − h(X)| < ϵ +a.s. for each h ∈ L(Ra(p,c)). Let l be selected following Assumption 3.15. Moreover, let π be a distribution +on H such that π[Lip(h)] ≤ ∞, and ¯ρ a Gibbs posterior distribution. Then, let ¯h = ¯ρ[h], for all δ ∈ (0, 1) +P +� +Rϵ(¯h) ≤ inf +ˆρ +� +ˆρ[Rϵ(h)]+ +� +KL(ˆρ||π)+log +�1 +δ +� ++ +3ϵ2 +2(3 − ϵ) +� 2 +√ +l ++ 2 +√ +l +log +� +π[1+2(Lip(h)a(p, c)+1)¯α] +��� +≥ 1−δ. +3.3 +Modeling selection for the best randomized Gibbs estimator for linear +predictors +We discuss in this section the selection of the parameter pt for H = {hβ(X) : β ∈ R(a(p,c)+1)}. For different +pt, the predictor, we aim to define changes because the cardinality of the input-features vectors Xi changes. +26 + +Therefore, choosing this parameter means performing modeling selection. To ease the notations, we will +refer to pt simply using the symbol p in the section and its related proofs. +Let the parameter space +B = +⌊ N +2 ⌋−1 +� +p=1 +Bp +where the Bp are assumed to be disjoint sets, such that for any β ∈ B, there is only one p such that +β ∈ Bp. Our modeling selection procedure is designed to select the best predictor in the class below. +Definition 3.27. For all p = 1, . . . , +� +N +2 +� +− 1, and reference distributions πp on M1 ++(Bp) such that +πp[∥β1���1] ≤ ∞, we define the class of Gibbs estimators as the regular conditional probability measures +which are absolutely continuous w.r.t. πp and have Radon Nikodym derivative +d¯ρp +dπp += +exp(− +√ +lrϵ(β)) +πp[exp(− +√ +lrϵ(β)] +. +(63) +The proposition below uses Lemma B.6 and B.7 that can be found in Appendix B. +Proposition 3.28 (Model selection). Let 0 < ϵ < 3, and the assumptions of Proposition 3.10 and 3.18 +hold. Moreover, let +p∗ = arg inf +p +� +¯ρp +� +rϵ(β) + log(2∥β1∥1 + 3) +√ +l +� ++ 1 +√ +l +log +��N +2 +��� +. +(64) +Then for all δ ∈ (0, 1), +P +� +¯ρp∗[Rϵ(β)] ≤ inf +p +� +¯ρp +� +rϵ(β) + log(2∥β1∥1 + 3) +√ +l +� ++ 1 +√ +l +log +��N +2 +��� ++ +3ϵ2 +2(3 − ϵ) +√ +l∗ + +1 +√ +l∗ log ¯α +δ +� +≥ 1 − δ, +(65) +where l∗ = +� +⌊ N +a∗ ⌋ +k∗ +� +, and a∗, k∗ are constants depending on p∗. +Lastly, for a bounded parameter space B, we can obtain the following oracle inequalities. +Proposition 3.29 (Oracle type PAC Bayesian bound for the best Gibbs estimator). Let the assumptions +of Proposition 3.21 hold, and p∗ be defined as in (64). Moreover, let supB ∥β∥ = +1 +C for C > 0. For all +δ ∈ (0, 1) +P +� +¯ρp∗[Rϵ(β)] ≤ inf +p +� +inf +ˆρ∈M1 ++(Bp) ˆρ +� +Rϵ(β) +� ++ 2 +√ +l +2 + 3C +C ++ 2 +√ +l +KL(ˆρ||πp) + 1 +√ +l +log +��N +2 +��� ++ +3ϵ2 +(3 − ϵ) +√ +l∗ + +2 +√ +l∗ log ¯α +δ +� +≥ 1 − δ +(66) +Corollary 3.30 (PAC bound for the average best Gibbs estimator). Let the assumptions of Corollary +3.26 hold, and p∗ be defined as in Proposition 3.28. Moreover, let supB ∥β∥ = 1 +C for C > 0 and ¯β = ¯ρ[hβ]. +27 + +For all δ ∈ (0, 1), +P +� +Rϵ(¯β) ≤ inf +p +� +inf +ˆρ∈M1 ++(Bp) ˆρ +� +Rϵ(β) +� ++ 2 +√ +l +2 + 3C +C ++ 2 +√ +l +KL(ˆρ||πp) + 1 +√ +l +log +��N +2 +��� ++ +3ϵ2 +(3 − ϵ) +√ +l∗ + +2 +√ +l∗ log ¯α +δ +� +≥ 1 − δ +(67) +Corollary 3.31 (Oracle inequality for a single draw out of the best Gibbs estimator). Let the assumptions +of Proposition 3.21 hold and p∗ be defined as in (64). Moreover, let supB ∥β∥ = +1 +C for C > 0 and +β∗ = arg infB R(β). For all δ ∈ (0, 1) and ˆβ ∼ ¯ρp∗, +P¯ρp∗ +� +Rϵ[ˆβ] ≤ Rϵ(β∗) + 1 +C E[|Z|] + +3ϵ2 +(3 − ϵ) +√ +l ++ 2 +√ +l +log +��N +2 +�� +1 + 2C + 1 +C +� ¯α +δ +�� +≥ 1 − δ. 
(68)

Remark 3.32. The model selection procedure discussed here selects the best randomized estimator within the family of Gibbs posterior distributions indexed by the parameter p. In the case of non-linear models, such as the 1-layer neural network in Definition 3.12, a model selection procedure also has to select the layer's dimension, i.e., the parameter K. Moreover, for an L-layer neural network, the parameter L enters the model selection procedure. We aim in future research to analyze such a selection strategy and determine a feasible procedure for applying the generalized Bayesian algorithms defined in the paper to a feed-forward neural network.

4 Predicting spatio-temporal data with MMAF guided learning and linear predictors

Let us start by giving in Table 2 an overview of all the parameters appearing in the learning methodology.

Parameter | Type | Interpretation
ϵ | Hyperparameter | Accuracy level in the definition of the loss functions Rϵ(β) and rϵ(β)
N | Observed parameter | Number of image frames through time composing our data set
ht | Observed parameter | Discretization step along the temporal dimension
hs | Observed parameter | Discretization step along the spatial dimension
λ | Unknown parameter | Decay rate of the θ-lex coefficients of the underlying model
c | Unknown parameter | Speed of information propagation
¯α | Unknown parameter | Determines how tight ˜θlex(r) is as a bound of θlex(r)
pt | Unknown parameter | Length of the past information we include in each Xi
k and at | Derived parameters | Chosen according to Assumption 3.15

Table 2: Overview of the parameters appearing in MMAF guided learning.

If, for example, the underlying MMAF is an STOU or an MSTOU process, i.e., we assume that the data admit an exponential or power-decaying autocorrelation function, estimation methodologies for their parameters can be found in [47] and [48]; see the review in Sections A.2 and A.3. We highlight that such a modeling set-up applies to data with short and possibly long-range spatial and temporal dependence. In general, we follow the steps below to make one-step-ahead predictions; a schematic implementation of these steps is sketched below.

(i) Having observed N frames, we first use the entire data set to estimate the parameters λ and c.

(ii) We fix a pixel position x∗ and we select pt (and, as a consequence, a training data set S following Assumption 3.15) using the rule (64).

(iii) We then draw β from the Gibbs estimator determined for the pixel x∗ and the training data set S defined in (ii) to obtain a prediction at time t = N + 1: we can use a single draw or average a set of different draws to this aim. The methodology employs novel input features given by L−p(N + 1, x∗). Therefore, we can make predictions at a future time point t = N + 1 as long as the set I(N + 1, x∗) has cardinality a(p, c).

Figure 3: Data set with spatial dimension d = 1. The last two frames are indicated with the blue stars, whereas the violet circles represent the points in space and time where it is possible to provide predictions with MMAF guided learning for pt = c = 1. The prediction in the space-time position (5, 3) lies in the intersection (between red lines) of the future light cones of the points (6, 2), (5, 2) and (4, 2) (represented with green lines), which belong to L−p(5, 3).

The procedure above can be implemented for any pixel where it is possible to determine a cone-shaped training data set S, i.e., for the pixels that do not belong to the frame boundary.
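For illustration purposes only, the sketch below summarizes steps (i)-(iii) for a linear predictor; all helper functions passed as arguments are hypothetical placeholders for the estimation, model selection, pre-processing and sampling routines described above, and the presence of an intercept in the linear predictor is an assumption made for concreteness.

```python
import numpy as np

def mmaf_guided_one_step_forecast(frames, x_star, estimate_lambda_c, select_pt,
                                  build_training_set, extract_new_input,
                                  draw_from_gibbs, n_draws=1000):
    """Schematic version of steps (i)-(iii) for a linear predictor.

    frames: array of shape (N, ...) with the observed field Z on the regular lattice.
    The callables passed as arguments are placeholders for the estimation,
    model-selection, pre-processing and sampling routines described in the text."""
    # (i) estimate lambda and c on the entire data set (e.g. as in Section A.2)
    lam, c = estimate_lambda_c(frames)

    # (ii) select p_t via the rule (64); this also fixes the training set S of Assumption 3.15
    p_t = select_pt(frames, x_star, lam, c)
    S = build_training_set(frames, x_star, p_t, lam, c)

    # (iii) draw beta from the Gibbs estimator fitted on S and average the predictions
    #       computed from the cone-shaped input L^-_p(N + 1, x_star)
    x_new = extract_new_input(frames, x_star, p_t, c)        # vector of length a(p, c)
    betas = [np.asarray(draw_from_gibbs(S)) for _ in range(n_draws)]
    preds = [beta[0] + x_new @ beta[1:] for beta in betas]   # intercept assumed for concreteness
    return float(np.mean(preds))
```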
Moreover, the prediction we obtain in (N + 1, x∗) lies in the intersection of the future light cones of each input contained in L−p(N + 1, x∗), see Figure 3. This means that MMAF guided learning enables us to make predictions at space-time points that are plausible given the set of inputs we observe.

As in the kriging literature, we need an inference step before we can deliver spatio-temporal predictions. In that literature, it is often assumed that the estimated parameters used in the calculation of kriging weights and kriging variances are the true ones; see [18, Chapter 3] and [19, Chapter 6] for a discussion on the range of applicability of such estimates. We implicitly make the same assumption when substituting the values of the parameters λ and c in the pre-processing step. It remains, however, an interesting open problem to understand the effect of the estimates' bias, including that of the parameter ¯α, on the validity of Proposition 3.10 and the PAC Bayesian bound (58). The main difficulty of this analysis lies in disentangling the effect of the bias of the constant c introduced in the pre-processing step, which changes the length of the input-features vector X. An optimal selection strategy for the hyperparameter ϵ based on the bound (58) is connected to the latter issue.

4.1 An example with simulated data

We simulate four data sets from a zero-mean STOU process by employing the diamond grid algorithm introduced in [47] for a spatial dimension d = 1. Each data set (Zt(x))(t,x)∈T×L is drawn from the temporal-spatial domain [0, 1000] × [0, 10], with T = {0, . . . , 20000} and L = {0, . . . , 200}, corresponding to temporal and spatial discretization steps ht = hs = 0.05. We choose as distribution for the L´evy seed Λ′ a normal distribution with mean µ = 0 and standard deviation σ = 0.5, and a NIG(α, β, µ, δ) distribution with α = 5, β = 0, δ = 0.2 and µ = 0. We use the latter distribution to test the behavior of MMAF guided learning on heavy-tailed data. We also generate data with different mean-reverting parameters A and use different seeds in generating the L´evy basis distribution; see the summary of the data sets' characteristics in Table 3. The constant c = 1 across all generated data sets.

Name | Mean-reverting parameter | L´evy seed | Random generator seed
GAU1 | A = 1 | Gaussian | 1
GAU10 | A = 4 | Gaussian | 10
NIG1 | A = 1 | NIG | 1
NIG10 | A = 4 | NIG | 10

Table 3: Overview of the simulated data sets with c = 1 and spatial dimension d = 1.

We start our procedure with step (i), i.e., estimating the parameters λ and c for each data set. We use the estimators (77) and (78) presented in Section A.2. The true parameter λ equals 1/2 or 2, depending on the chosen mean-reverting parameter A. The obtained results for each data set are given in Table 4.

Name | A∗ | c∗ | λ∗
GAU1 | 1.014 | 1.0003 | 0.5068
GAU10 | 4.0145 | 1.0011 | 2.005
NIG1 | 1.0024 | 0.9992 | 0.5016
NIG10 | 3.9682 | 1.0000 | 1.9841

Table 4: Estimates of the parameters A, c and λ.

We then move to step (ii): for each pixel position x∗ not belonging to the boundary of the frame (a total of 199 pixels), we select the parameter pt using the criterion (64) and an accuracy level ϵ = 2.99. For each pixel, we use a multivariate standard normal distribution as reference distribution and use importance sampling (with a Gaussian proposal) to estimate the integral in (64).
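For illustration purposes only, the sketch below shows one way to approximate the Gibbs-posterior expectations entering the criterion (64) by self-normalized importance sampling; using the multivariate standard normal reference distribution itself as the Gaussian proposal is a simplifying assumption made here, under which the importance weights reduce to the Gibbs weights exp(−√l rϵ(β)), and the function names are illustrative.

```python
import numpy as np

def gibbs_expectation_is(r_emp, g, dim, l, n_samples=100_000, seed=None):
    """Self-normalized importance sampling estimate of rho_bar[g(beta)], where rho_bar
    is the Gibbs posterior with density proportional to exp(-sqrt(l) * r_eps(beta))
    w.r.t. a multivariate standard normal reference pi.  Taking pi itself as the
    Gaussian proposal, the importance weights reduce to the Gibbs weights."""
    rng = np.random.default_rng(seed)
    betas = rng.standard_normal((n_samples, dim))            # draws from the proposal pi
    log_w = -np.sqrt(l) * np.array([r_emp(b) for b in betas])
    w = np.exp(log_w - log_w.max())                          # stabilized Gibbs weights
    vals = np.array([g(b) for b in betas])
    return float(np.sum(w * vals) / np.sum(w))

# Model selection as in (64): for each candidate p_t, estimate
#   rho_bar_p[ r_eps(beta) + log(2 * ||beta_1||_1 + 3) / sqrt(l) ] + log(floor(N / 2)) / sqrt(l)
# with the routine above and retain the p_t attaining the smallest value.
```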
The model selection criterion (64) outputs pt = 1 for each pixel. We then extract from the frames a training data set S following Assumption 3.15, which we use in step (iii) of our forecasting procedure. An acceptance-rejection algorithm with a Gaussian proposal is used to draw β from the best randomized Gibbs estimator. We report in Figure 4 the results obtained for each data set and an analysis of the average MSE across pixels in Table 5. MMAF guided learning has a low MSE in most of the pixels analyzed.

Name | MSE
GAU1 | 0.07042
GAU10 | 0.37028
NIG1 | 0.12365
NIG10 | 0.07102

Table 5: Average MSE across pixels.

Figure 4: One-step-ahead average prediction over 1,000,000 draws for each pixel position in the data sets: (a) GAU1, (b) GAU10, (c) NIG1, (d) NIG10.

5 Conclusions

We define a novel theory-guided machine learning methodology for spatio-temporal data. The methodology starts by modeling the data using an influenced mixed moving average field. Such a step requires the specification of the kernel function, including the presence of random parameters, and that the L´evy seed underlying the definition of the model has finite second-order moments. To enable one-step-ahead predictions, we use the underlying model to extract a training data set from the observed data, which preserves the dependence structure of the latter and reduces as much as possible the number of past and neighboring space-time points used in learning. For the model class of Lipschitz functions, which includes linear functions as well as neural networks, we show novel PAC Bayesian bounds for MMAF generated data. Their minimization leads to the determination of a randomized predictor for our prediction task. Finally, in the case of linear models, we test the performance of our learning methodology on four data sets simulated from an STOU process and obtain a favorable average performance in all analyzed scenarios.

Funding

The first author was supported by the research grant GZ:CU 512/1-1 of the German Research Foundation (DFG).

References

[1] P. Alquier and B. Guedj. Simpler PAC-Bayesian bounds for hostile data. Mach. Learn., 107 (5):887–902, 2018.

[2] P. Alquier, X. Li, and O. Wintenberger. Prediction of time series by statistical learning: general losses and fast rates. Depend. Model., 1:65–93, 2013.

[3] P. Alquier and O. Wintenberger. Model selection for weakly dependent time series forecasting. Bernoulli, 18 (3):883–913, 2012.

[4] F. Amato, F. Guignard, S. Robert, and M. Kanevski. A novel framework for spatio-temporal prediction of environmental data using deep learning. Sci. Rep., 10:22243, 2020.

[5] Z. Bai, P. X.-K. Song, and T. E. Raghunathan. Joint composite estimating functions in spatiotemporal models. J. R. Stat. Soc. Ser. B. Stat. Methodol., 74:799–824, 2012.

[6] O. E. Barndorff-Nielsen. Superposition of Ornstein-Uhlenbeck type processes. Theory Probab. Appl., 45:175–194, 2021.

[7] O. E. Barndorff-Nielsen, F. E. Benth, and A. E. D. Veraart. Ambit Stochastics. Springer, Cham, 2018.

[8] O. E. Barndorff-Nielsen and N. N. Leonenko.
Spectral properties of superpositions of 6Ornstein- +Uhlenbeck type processes, methodology and computing. Methodol. Comput. Appl. Probab., 7:335– +352, 2005. +32 + +[9] O. E. Barndorff-Nielsen, A. Lunde, N. Shepard, and A. E. D. Veraart. Integer-valued trawl processes: +a class of stationary infinitely divisible processes. Scand. J. Stat., 3:693–724, 2014. +[10] O. E. Barndorff-Nielsen and J. Schmiegel. L´evy-based spatial-temporal modelling, with applications +to turbulence. Russian Math. Surveys, 59:65–90, 2004. +[11] O. E. Barndorff-Nielsen and R. Stelzer. Multivariate supOU processes. Ann. Appl. Probab., 21:140– +182, 2011. +[12] L. B´egin, P. Germain, F. Laviolette, and J.-F. Roy. +PAC-Bayesian bounds based on the r´enyi +divergence. In Proceedings of the 19th annual International Conference on Artificial Intelligence and +Statistics, pages 435–444, 2016. +[13] M. Bennedsen, A. Lunde, N. Shephard, and A. E. D. Veraart. +Inference and forecasting for +continuous-time integer-valued trawl processes. arXiv:2107.03674v2, 2021. +[14] R. C. Bradley. Introduction to strong mixing conditions, Volume 1. Kendrick Press, Utah, 2007. +[15] F. Bunea, A. B. Tsybakov, and M. H. Wegkamp. Aggregation for Gaussian regression. Ann. Statist., +35:1674–1697, 2007. +[16] O. Catoni. Statistical learning theory and stochastic optimization. Lecture notes in Mathematics, +Springer, Berlin, 2004. +[17] O. Catoni. PAC-Bayesian supervised classification: the thermodynamics of statistical learning. In- +stitute of Mathematical Statistics Lecture Notes – Monograph Series, 56. Institute of Mathematical +Statistics, Beachwood, OH, 2007. +[18] N. Cressie. Statistics for Spatial Data. Wiley, New York, 1993. +[19] N. Cressie and C. K. Wikle. Statistics for spatio-temporal data. John Wiley & Sons, Inc., Hoboken, +New Jersey, 2011. +[20] I. V. Curato and R. Stelzer. Weak dependence and GMM estimation for supOU and mixed moving +average processes. Electron. J. Stat., 13 (1):310–360, 2019. +[21] I. V. Curato and R. Stelzer. Erratum: Weak dependence and GMM estimation for supOU and mixed +moving average processes. arXiv:1807.05801, 2022. +[22] I. V. Curato, R. Stelzer, and B. Str¨oh. Central limit theorems for stationary random fields under +weak dependence with application to ambit and mixed moving average fields. Ann. Appl. Probab., +32:1814–1861, 2022. +[23] A. D. Davis, C. Kl¨uppelberg, and C. Steinkohl. Statistical inference for max-stable processes in +space and time. J. R. Stat. Soc. Ser. B. Stat. Methodol., 75:791–819, 2013. +[24] J. Dedecker. A central limit theorem for stationary random fields. Probab. Theory Related Fields, +110:397–426, 1998. +[25] J. Dedecker and P. Doukhan. A new covariance inequality and applications. Stochastic Process. +Appl., 106, 2003. +33 + +[26] J. Dedecker, P. Doukhan, G. Lang, L. R. J Rafael, and S. Louhichi. Weak dependence: with examples +and applications. Springer Berlin, 2007. +[27] M. D. Donsker and S. S. Varadhan. Asymptotic evaluation of certain markov process expectations +for large time. Commun. Pure Appl. Math., 28:389–461, 1976. +[28] G. K. Dziugaite and M. D. Roy. Computing nonvacuous generalization bounds for deep (stochastic) +neural networks with many more parameters than training data. In Proceedings of the Conference +on Uncertainty in Artificial Intelligence, 2017. +[29] M. Fazlyab, A. Robey, H. Hassani, M. Morari, and G. J. Pappas. Efficient and accurate estimation +of lipschitz constants for deep neural networks. 
In Proceedings of the 33rd Conference on Neural +Information Processing Systems, page 11427–11438, 2019. +[30] C. A. Glasbey and D. J. Allcroft. A spatiotemporal auto-regressive moving average model for solar +radiation. Journal of the Royal Statistical Society. Series C, 57:343–355, 2008. +[31] J. A. Gonz`alez, F. J. Rodr`ıguez-Cort`es, O. Cronie, and J. Mateu. Spatio-temporal point process +statistics: a review. Spatial Statistics, 18:505–544, 2016. +[32] P. D. Gr¨unwald and N. A. Mehta. Fast rates for general unbounded loss functions:from erm to +generalized bayes. J. Mach. Learn. Res., 21:1–80, 2020. +[33] B. Guedj. A primer on PAC-Bayesian learning. arXiv:1901.05353v3, 2019. +[34] H. Hang and I. Steinwart. A Bernstein-type inquality for some mixing processes and dynamical +systems with an application to learning. Ann. Stat., 45 (2):708–743, 2017. +[35] W. Hoeffding. Probability inequalities for sums of bounded random variables. J. Americ. Statist. +Assoc., 58:13–30, 1963. +[36] M. Holland. PAC-Bayes under potentially heavy tails. In Advances in Neural Information Processing +Systems, volume 32, pages 2715–2124, 2019. +[37] P. Holmes, J. L. Lumley, G. Berkooz, and C. W. Rowley. Turbulence, Coherent Structures, Dynamical +Systems and Symmetry. Cambridge University Press, Cambridge, 2012. +[38] I. A. Ibragimov. Some limit theorems for stationary processes. Theory Probab. Appl., 7:349–382, +1962. +[39] K. Y. J`onsd`ottir, A. Rønn-Nielsen, K. Mouridsen, and E. B. V. Jensen. L´evy based modelling in +brain imaging. Scandinavian Journal of Statistics, 40:511–529, 2013. +[40] Scaman. K. and V. Virmaux. Lipschitz regularity of deep neural networks: analysis and efficient esti- +mation. In Proceedings of the Conference on Neural Information Processing System, page 3839–3848, +2018. +[41] A. Karpatne, G. Atluri, J. H. Faghmous, M. Steinbach, A. Banerjee, A. Ganguly, S. Shekhar, N. Sam- +atova, and V. Kumar. Theory-guided data science: A new paradigm for scientific discovery from data. +IEEE Trans. Knowl. Data Eng., 29:2318–2331, 2017. +34 + +[42] S. Kullback. Information Theory and Statistics. John Wiley & Sons, 1959. +[43] S. Lahiri, Y. Lee, and N. Cressie. On asymptotic distribution and asymptotic efficiency of least +squares estimators of spatial variogram parameters. J. Statist. Plann. Inference, 103:65–85, 2002. +[44] D. S. Modha and E. Masry. +Minimum complexity regression estimation with weakly dependent +observations. IEEE T. Inform. Theory, 42:2133–2145, 1996. +[45] G. D. Montanez and C. S. Shalizi. The LICORS cabinet: nonparametric light cone methods for +spatio-temporal modeling. arXiv:1506.02686v2, 2020. +[46] J.-M. Montero, G. Fern`andez-Avil`es, and J. Mateu. Spatial and Spatio-Temporal Geostatistical Mod- +eling and Kriging. Wiley, 2015. +[47] M. Nguyen and A. E. D. Veraart. Spatio-temporal Ornstein–Uhlenbeck processes: theory, simulation +and statistical inference. Scand. J. Stat., 44:46–80, 2017. +[48] M. Nguyen and A. E. D. Veraart. Bridging between short-range and long-range dependence with +mixed spatio-temporal Ornstein-Uhlenbeck processes. Stochastics, 90:1023–1052, 2018. +[49] J. Pearl, M. Glymour, and N. P. Jewell. Causal inference in statistics a primer. Wiley, 2016. +[50] M. Raissi, P. Perdikaris, and G. E. Karniadakis. Physics informed deep learning (part I): Data-driven +solutions of nonlinear partial differential equations. arXiv: 1711.10561, 2017. +[51] M. Raissi, P. Perdikaris, and G. E. Karniadakis. 
Physics informed deep learning (part II): Data- +driven discovery of nonlinear partial differential equations. arXiv: 1711.10566, 2017. +[52] B. S. Rajput and J. Rosi´nski. +Spectral representations of infinitely divisible processes. +Probab. +Theory Rel., 82:451–487, 1989. +[53] C. E. Rasmussen and C. K. I. Williams. Gaussian processes for Machine Learning. The MIT Press +Cambridge, 2006. +[54] M. Reichstein, G. Camps-Valls, B. Stevens, M. Jung, J. Denzler, N. Carvalhais, and Prabhat. Deep +learning and process understanding for data-driven earth system science. Nature, 556:195–204, 2019. +[55] E. Rio. Sur le th´eor´eme de Berry–Esseen pour les suites faiblement d´ependantes. Theory Probab. +Appl., 104:255–282, 1996. +[56] M. Rosenblatt. A central limit theorem and a strong mixing condition. Proc. Natl. Acad. Sci. USA, +42:43–47, 1956. +[57] D. B. Rubin. Causal inference using potential outcomes: design, modeling, decisions. J. Am. Stat. +Assoc., 100:322–331, 2005. +[58] A. Rupe, N. Kumar, V. Epifanov, K. Kashinath, O. Pavlzk, F. Schlimbach, M. Patwary, S. Maidanov, +V. Lee, Prabhat, and J.P. Crutchfield. +DisCo: physics-based unsupervised discovery of coherent +structures in spatiotemporal systems. In 2019 IEEE/ACM Workshop on Machine Learning in High +Performance Computing Environments, pages 75–87, 2019. +35 + +[59] K. Sato. +L´evy Processes and Infinitely Divisible Distributions. +Cambridge Studies in Advanced +Mathematics 68. Cambridge Univ. Press, Cambridge, 2013. +[60] J. Shawe-Taylor, P. L. Bartlett, C. Williamson, R, and M. Anthony. Structural risk minimization +over data-dependent hierarchies. IEEE T. Inform. Theory, 44 (5):1926–1940, 1998. +[61] X. Shi, Z. Gao, L. Lausen, H. Wang, and D.-Y. Yeung. Deep learning for precipitation nowcasting: +A benchmark and a new model. +In Advances in Neural Information Processing Systems, page +5617–5627, 2017. +[62] X. Shi and D. Y. Yeung. +Machine learning for spatiotemporal sequence forecasting: A survey. +arXiv:1808.06865, 2018. +[63] M. L. Stein. Space-time covariance functions. J. Am. Stat. Assoc., 100 (469):310–322, 2005. +[64] R. Stelzer, T. Tossdorf, and M. Wittilinger. Moment based estimation of supOU processes and a +related stochastic volatility model. Stat. Risk Model, 32:1–24, 2015. +[65] N. Thiemann, C. Igel, O. Wintenberger, and Y. Seldin. A strongly quasiconvex PAC-Bayesian bound. +J. Mach. Learn. Res., pages 1–26, 2017. +[66] L. G. Valiant. A theory of the learnable. Commun. of the ACM, 27 (11):1134–1142, 1984. +[67] V. N. Vapnik. The nature of Statistical Learning Theory. Springer, Berlin, 2000. +[68] A. Vlontyos, H.B. Rocha, and D. Rueckert. Causal future prediction in a Minkowski space-time. In +International Conference on Learning Representations, 2021. +A +Appendix +A.1 +Weak dependence notions for casual processes and (influenced) MMAF +In this section, we discuss more in details the asymptotic dependence notions called θ-weak dependence +and θ-lex weak dependence. The latter notion has been introduced in [22, Definition 2.1] as an extension to +the random field case of the notion of θ-weak dependence satisfied by causal stochastic processes [25]. This +notion of dependence is presented in Definition 2.23. However, the notion of θ-lex weak dependence given +in Definition 2.16, slightly differs from the one given in [22, Definition 2.1] and represents an extension +to the random field case of the θ-weak dependence notion defined in [26, Remark 2.1]. 
Note that the +definition of θ-weak dependence in [25] and [26, Remark 2.1] differ for the cardinality of the marginal +distributions on which the function G is computed, namely, G ∈ G1 in the former and G ∈ Gν for ν ∈ N +in the latter. +Remark A.1 (Mixingale-type representation of θ-weak dependence). Let L1 = {g : R → R, bounded and +Lipschitz continuous with Lip(g) ≤ 1}, a stochastic process (Xt)t∈Z, and M = σ{Xt : t < j1, t ∈ Z}, +then it is showed in [26, Proposition 2.3] that +θ(r) = sup +g∈L1 +∥E[g(Xj1)|M] − E[g(Xj1)]∥1. +(69) +An alternative proof of this result can be also found in [25, Lemma 1]. +36 + +Let us now analyze the relationship between θ-weak dependence, α-mixing, and φ-mixing. Most +of the PAC Bayesian literature for stationary and heavy tailed data employs the following two mixing +conditions, see Remark 3.25, namely α-mixing and φ-mixing. The results in the Lemma below give us a +proof that the θ-weak dependence is more general than α-mixing and φ-mixing and therefore describes +the dependence structure of a bigger class of models. +Let M and V be two sub-sigma algebras of F. First of all, the strong mixing coefficient [56] is +defined as +α(M, V) = sup{|P(M)P(V ) − P(M ∩ V )|, M ∈ M, V ∈ V}. +A stochastic process X is said to be α-mixing if +α(r) = α(σ{Xs, s ≤ 0}, σ{Xs, s ≥ r}) +converges to zero as r → ∞. The φ-mixing coefficient has been introduced in [38] and defined as +φ(M, V) = sup{|P(V |M) − P(V )|, M ∈ M, V ∈ V, P(M) > 0}. +A stochastic process X is said to be φ-mixing if +φ(r) = φ(σ{Xs, s ≤ 0}, σ{Xs, s ≥ r}) +converges to zero as r → ∞. +Lemma A.2. Let (Xt)t∈Z be a stationary real-valued stochastic process such that E[|X0|q] ≤ ∞ for +some q > 1. Then, +(a) θ(r) ≤ 2 +2q−1 +q α(r) +q−1 +q ∥X0∥q ≤ 2 +q−1 +q φ(r) +q−1 +q ∥X0∥q, and +(b) θ-weak dependence is a more general dependence notion than α-mixing and φ-mixing. +Proof. The proof of the first inequality at point (a) is proven in [22, Proposition 2.5] using the represen- +tation of the θ-coefficients (69). The proof of the second inequality follows from a classical result in [14, +Proposition 3.11]. In [22, Proposition 2.7], it is defined a stochastic process which is θ-weak dependent +but neither α-mixing or φ-mixing. +As seen in Definition 2.16 by using the lexicographic order in R1+d, an opportune extension of +θ-weak dependence valid for random fields can be defined. +The definition of θ-lex coefficients for G ∈ G1 is given in [22, Definition 2.1]. The latter can be +represented as θv +lex(r) := supu∈N{θu,v(r)} for v = 1. Therefore, an alternative way to define the θ-lex +coefficients in Definition 2.16 is obviously +θlex(r) = sup +v∈N +θv +lex(r), v ∈ N for all r ∈ R+. +(70) +The following Lemma has important applications in the following sections. +Lemma A.3. Let Z be a θ-lex weakly dependent random field, then ZM +i += Zi ∨ (−M) ∧ M is θ-lex +weakly dependent. +37 + +Proof. Let u, v ∈ N, M > 0, F ∈ G∗ +u, G ∈ Gv, and i1, i2, . . . , iu ∈ V r +Γ′ where Γ′ = {j1, . . . , jv}. Let +F M(Zi1, . . . , Ziu) = F(ZM +i1 , . . . , ZM +iu ), and GM(Zj1, . . . , Zjv) = G(ZM +j1 , . . . , ZM +jv ). We have that F M is +a bounded function on (Rn)u and GM is a bounded and Lipschitz function on (Rn)v (with the same +Lipschitz coefficients as the function G): let (Z1, . . . , Zv) and ( ˜Z1, . . . , ˜Zv), +|GM(Z1, . . . , Zv) − GM( ˜Z1, . . . , ˜Zv)| ≤ Lip(G) +v +� +i=1 +|ZM +i +− ˜Z +M +i | +≤ Lip(G) +v +� +i=1 +|Zi − ˜Zi|. +Then, it holds that +|Cov(F(ZM +i1 , . . . , ZM +iu ), G(ZM +j1 , . . . 
, ZM +jv ))| ≤ ∥F∥∞vLip(G)θlex(r), +where θlex(r) are the θ-coefficients of the field Z, and so the field ZM +t (x) is θ-lex weakly dependent. +Note that the above result also holds for X a θ-weakly dependent process. Therefore, the truncated +XM +t += Xt ∨ (−M) ∧ M is a θ-weakly dependent process. +The notion of θ-lex weak dependence also admits a mixingale-type representation. +Remark A.4. Let L1 = {g : R → R, bounded and Lipschitz continuous with Lip(g) ≤ 1} and Γ′ = +{j1, . . . , jv} for {j1, . . . jv} ∈ Z1+d. For a random field (Zt)t∈Z1+d, by readily applying [22, Lemma 5.1] +on the σ-algebra M = σ{Zt : t ∈ V r +Γ′ ⊂ Z1+d}, the following result can be easily proved: +θlex(r) = sup +g∈L1 +sup +v∈N +∥E[g(Zj1, . . . , Zjv)|M] − E[g(Zj1, . . . , Zjv)]∥1. +(71) +We now use the representation of the θ-lex coefficients (71) to understand its relationships to α∞,v- +mixing and φ∞,v-mixing for v ∈ N∪{∞}. These notions are defined in [24] and are strong mixing notions +used in the study of stationary random fields. +In general, for u, v ∈ N ∪ {∞}, given coefficients +αu,v(r) = sup{α(σ(ZΓ), σ(ZΓ′)), Γ, Γ′ ∈ R1+d, |Γ| ≤ u, |Γ′| ≤ v, dist(Γ, Γ′) ≥ r}, +and +φu,v(r) = sup{φ(σ(ZΓ), σ(ZΓ′)), Γ, Γ′ ∈ R1+d, |Γ| ≤ u, |Γ′| ≤ v, dist(Γ, Γ′) ≥ r}. +a random field Z is said to be αu,v-mixing or φu,v-mixing if the coefficients (αu,v(r))r∈R+ or (φu,v(r))r∈R+ +converge to zero as r → ∞. We then have the following result. +Lemma A.5. Let (Zt)t∈Z1+d be a stationary real-valued random field such that E[|Z0|q] ≤ ∞ for some +q > 1. Then, for v ∈ N ∪ {∞}, +(a) θlex(r) ≤ 2 +2q−1 +q α∞,v(r) +q−1 +q ∥Z0∥q ≤ 2 +q−1 +q φ∞,v(r) +q−1 +q ∥Z0∥q, and +(b) it holds that θ-lex weak dependence is more general than α∞,v-mixing and α-mixing in the special +case of stochastic processes. Moreover, θ-lex weak dependence is more general than φ∞,v-mixing. +38 + +Proof. From the proof of [22, Proposition 2.5], we have that +θ1 +lex(r) ≤ 2 +2q−1 +q α∞,1(r) +q−1 +q ∥Z0∥q. +Because of (70) and [14, Proposition 3.11], we have that +θlex(r) ≤ 2 +2q−1 +q α∞,1(r) +q−1 +q ∥Z0∥q ≤ 2 +q−1 +q φ∞,1(r) +q−1 +q ∥Z0∥q. +Equally, +θlex(r) ≤ 2 +2q−1 +q α∞,v(r) +q−1 +q ∥Z0∥q ≤ 2 +q−1 +q φ∞,v(r) +q−1 +q ∥Z0∥q. +The proof of the point (b) follows directly by [22, Proposition 2.7]. In fact θlex(r) = θ1,∞(r) following +the notations of [26, Definition 2.3] and the process used in the proof of the Proposition is θ-lex weakly +dependent but neither α∞,v, α or φ∞,v-mixing. +A.2 +Inference on STOU processes +Let us start by explaining the available estimation methodologies for the parameter vector θ0 = {A, c, V ar(L′)} +under the STOU modeling assumption when the spatial dimension d = 1. Throughout, we refer to the +notations and calculations in Example 2.21. +We have two ways of estimating the parameter vector θ0 in such a scenario. The first one is presented +in [47]. Here, the parameters A and c are first estimated using normalized spatial and temporal variograms +defined as +γS(u) := E((Zt(x) − Zt(x − u))2) +V ar(Zt(x)) += 2(1 − ρS(u)) = 2 +� +1 − exp +� +− Au +c +�� +, +(72) +and +γT (τ) := E((Zt(x) − Zt−τ(x))2) +V ar(Zt(x)) += 2(1 − ρT (τ)) = 2(1 − exp(−Aτ)), +(73) +where ρS and ρT are defined in Example 2.10. Note that normalized variograms are used to separate the +estimation of the parameters A and c from the parameter V ar(Λ′). Let N(u) be the set containing all the +pairs of indices at mutual spatial distance u for u > 0 and the same observation time. 
Let N(τ) be the +set containing all the pairs of indices where the observation times are at a distance τ > 0 and have the +same spatial position. |N(u)| and |N(τ)| give the number of the obtained pairs, respectively. Moreover, +let ˆk2 be the empirical variance which is defined as +ˆk2 = +1 +(D − 1) +D +� +i=1 +Z2 +ti(xi), +(74) +where D denotes the sample size. The empirical normalized spatial and temporal variograms are then +39 + +defined as follows: +ˆγS(u) = +1 +|N(u)| +� +i,j∈N(u) +(Zti(xi) − Ztj(xj))2 +ˆk2 +(75) +ˆγT (τ) = +1 +|N(τ)| +� +i,j∈N(τ) +(Zti(xi) − Ztj(xj))2 +ˆk2 +. +(76) +By matching the empirical and the theoretical forms of the normalized variograms, we can estimate A +and c by employing the estimators +A∗ = −τ −1 log +� +1 − ˆγT (τ) +2 +� +, and c∗ = − +A∗u +log +� +1 − ˆγS(u) +2 +�. +(77) +Alternatively, we can use a least square methodology to estimate the parameters A and c, i.e. (75) and +(76) are computed at several lags, and a least-squares estimation is used to fit the computed values to the +theoretical curves. The authors in [47] use the methodology discussed in [43] to achieve the last target. We +refer the reader also to [18, Chapter 2] for further discussions and examples of possible variogram model +fitting. The parameter V ar(Λ′) can be estimated by matching the second-order cumulant of the STOU +with its empirical counterpart. The consistency of this estimation procedure is proven in [47, Theorem +12]. +A second possible methodology for estimating the vector θ0 employs a generalized method of moment +estimator (GMM), as in [48]. It is essential to notice that by using such an estimator, we cannot separate +the parameter V ar(Λ′) from the estimation of the parameters A and c. Instead, all moment conditions +must be combined into one optimization criterion, and all the estimations must be found simultaneously. +Consistency and asymptotic normality of the GMM estimator are discussed in [48] and [22], respectively. +It is important to notice that the parameter λ can be inferred by knowing the parameter A and c +alone (this also holds for d ≥ 2). We can then plug in the estimations (77) (or the ones obtained by least +squares or GMM) and obtain the estimator +λ∗ = A∗ min(2, c∗) +2c∗ +, +(78) +of the decay rate of the θ-lex coefficients of an STOU with spatial dimension d = 1. This estimator is +consistent because of [47, Theorem 12] and the continuous mapping theorem. Furthermore, by using the +estimation of the parameter V ar(Λ′), we can also obtain a consistent estimator for ¯α. +For d ≥ 2, a least square methodology is still applicable for estimating the variogram’s parameters. +The estimator used in [47] is a normalized version of the least-square estimator for spatial variogram’s +parameters discussed in [43], which also applies for d > 1. This method, paired with a method of moments +(matching the second order cumulant of the field Z with its empirical counterparts), allows estimating +the parameter V ar(Λ′). The GMM methodology discussed in [48] also continues to apply for d ≥ 2. +However, when the spatial dimension is increasing, the shape of the normalized variograms and the field’s +moments become more complex, and higher computational effort is required to navigate through the high +dimensional surface of the optimization criterion behinds least-squares or GMM estimators. 
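For illustration purposes only, the sketch below implements the single-lag moment matching behind the estimators (77) and (78) for d = 1, starting from the empirical normalized variograms (75) and (76) computed on a regularly gridded, zero-mean data set; the choice of a single pair of lags (τ, u), expressed in grid units and converted through the discretization steps ht and hs, is an assumption made here, and in practice a least-squares fit over several lags as in [43] is typically preferable.

```python
import numpy as np

def empirical_normalized_variograms(Z, tau, u):
    """Empirical normalized variograms (75)-(76) for a zero-mean field observed on a
    regular grid, Z[t, x]; tau and u are temporal and spatial lags in grid units."""
    k2_hat = np.sum(Z**2) / (Z.size - 1)                       # empirical variance (74)
    gamma_T = np.mean((Z[:-tau, :] - Z[tau:, :])**2) / k2_hat  # same location, time lag tau
    gamma_S = np.mean((Z[:, :-u] - Z[:, u:])**2) / k2_hat      # same time, spatial lag u
    return gamma_T, gamma_S

def estimate_stou_parameters(Z, tau, u, h_t=1.0, h_s=1.0):
    """Single-lag matching estimators (77)-(78) for an STOU process with d = 1; the
    lags are converted to physical units through the discretization steps h_t, h_s."""
    gamma_T, gamma_S = empirical_normalized_variograms(Z, tau, u)
    A_star = -np.log(1.0 - gamma_T / 2.0) / (tau * h_t)            # matches (73)
    c_star = -A_star * (u * h_s) / np.log(1.0 - gamma_S / 2.0)     # matches (72)
    lambda_star = A_star * min(2.0, c_star) / (2.0 * c_star)       # decay rate (78)
    return A_star, c_star, lambda_star
```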
+40 + +A.3 +Inference for MSTOU processes +When estimating the parameter vector θ1 = {α, β, c, V ar(L′)} under an MSTOU modeling assumption– +see, for example, the shape of the coefficients in Example 2.22– it is evident that the shape of the +autocorrelation function, and therefore of the normalized temporal and spatial variograms, become more +complex for increasing d. As already addressed in the previous sections, when estimating the parameters +(α, β, c) alone, we can use the least-squares type estimator discussed in [43]. Moreover, by pairing the +latter with a method of moments or using a GMM estimator, we can estimate the vector θ1. +B +Appendix +B.1 +Proofs of Section 2.5 +In [22, Proposition 3.11], it is given a general methodology to show that an MMAF Z is θ-lex weakly +dependent. Given that the definition of θ-lex-weakly dependence used in the paper slightly differs from +the one given in [22], the proof of Proposition 2.17 and 2.19 differ from the one of [22, Proposition 3.11]. +Before giving a detailed account of these proofs, let us state some notations used in both. Let r > 0, +{(tj1, xj1), . . . (tjv, xjv)} = Γ′ ⊂ R1+d and {(ti1, xi1), . . . , (tiu, xiu)} = Γ ⊂ V r +Γ′ such that |Γ| = u and +|Γ′| = v for (u, v) ∈ N × N. We call the truncated (influenced) MMAF the vector +Z(ψ) +Γ′ = +� +Z(ψ) +tj1 (xj1), . . . , Z(ψ) +tjv (xjv) +�⊤ +, +where ψ := ψ(r) > 0. In particular, for all a ∈ {1, . . . , u} and a b ∈ {1, . . . , v} , ψ has to be chosen such +that it exists a set Bψ +tjb (xjb) with the following properties. +• |Bψ +tjb (xjb)| → ∞ as r → ∞ for all b, and +• Iia = H × Atia (xia) and Ijb = H × Bψ +tjb (xjb) are disjoint sets or intersect on a set H × O, where +O ∈ R1+d and dim(O) < d + 1, for all a and b +Let us now assume that it is possible to construct the sets Bψ +tjb (xjb). Then, since π×λ1+d(H×O) = 0 +and by the definition of a L´evy basis, it follows that +Ztia (xia) = +� +H +� +Ati(xi) +f(A, ti − s, xi − ξ)Λ(dA, ds, dξ) and +Z(ψ) +tjb (xjb) = +� +H +� +Bψ +tjb (xjb) +f(A, tj − s, xj − ξ)Λ(dA, ds, dξ), +and ZΓ and Z(ψ) +Γ′ +are independent. Hence, for F ∈ G∗ +u and G ∈ Gv, F(ZΓ) and G(Z(ψ) +Γ′ ) are also +41 + +independent. Now +|Cov(F(ZΓ), G(ZΓ′))| +≤ |Cov(F(ZΓ), G(Z(ψ) +Γ′ ))| + |Cov(F(ZΓ), G(ZΓ′) − G(Z(ψ) +Γ′ ))| += |E[(G(ZΓ′) − G(Z(ψ) +Γ′ ))F(ZΓ)] − E[G(ZΓ′) − G(Z(ψ) +Γ′ )]E[F(ZΓ)]| +≤ 2∥F∥∞E[|G(ZΓ′) − G(Z(ψ) +Γ′ )|] ≤ 2Lip(G)∥F∥∞ +v +� +l=1 +E[|Ztjl (xjl) − Z(ψ) +tjl (xjl)|] = += 2Lip(G)∥F∥∞vE[|Ztj1 (xj1) − Z(ψ) +tj1 (xj1)|], +(79) +because an (influenced) MMAF is a stationary random field. To show that a field satisfy Definition 2.16, is +then enough to prove that E[|Ztj1 (xj1)−Z(ψ) +tj1 (xj1)|] in the above inequality converges to zero as r → ∞. +The proofs of Proposition 2.17 and 2.19 differ in the definition of the sequence ψ and the sets Bψ +tjb (xjb). +Proof of Proposition 2.17. In this proof, we assume that Bψ +tjb (xjb) = At(x)\V ψ +(t,x), where ψ = ψ(r) := +r +√ +(d+1)(c2+1). +(i) Using the translation invariance of At(x) and V (ψ) +(t,x) we obtain +E[|Ztj1 (xj1) − Z(ψ) +tj1 (xj1)|] ≤ +�� +H +� +A0(0)∩V ψ +0 +V ar(Λ′)f(A, −s, −ξ)2dξdsπ(dA) +� 1 +2 += +�� +H +� +−r min(1/c,1) +√ +(d+1)(c2+1) +−∞ +V ar(Λ′) +� +∥ξ∥≤cs +f(A, −s, −ξ)2 dξdsπ(dA) +� 1 +2 +, +where we have used Proposition 2.8-(ii) to bound the L1-distance from above. Overall, we obtain +θlex(r) ≤ 2 +�� +H +� +−r min(1/c,1) +√ +(d+1)(c2+1) +−∞ +V ar(Λ′) +� +∥ξ∥≤cs +f(A, −s, −ξ)2dξdsπ(dA) +� 1 +2 +, +which converges to zero as r tends to infinity by applying the dominated convergence theorem. 
+(ii) By applying Proposition 2.8-(i) and (ii), we obtain +E[|Ztj1 (xj1) − Z(ψ) +tj1 (xj1)|] +≤ +� � +H +� +−r min(1/c,1) +√ +(d+1)(c2+1) +−∞ +V ar(Λ′) +� +∥ξ∥≤cs +f(A, −s, −ξ)2dξdsπ(dA) ++ +�� +H +� +−r min(1/c,1) +√ +(d+1)(c2+1) +−∞ +E(Λ′) +� +∥ξ∥≤cs +f(A, −s, −ξ)dξdsπ(dA) +�2 � 1 +2 +. +Finally, we proceed similarly to proof (i) and obtain the desired bound. +42 + +(iii) We apply now Proposition 2.8-(iii). Then, +E[|Ztj1 (xj1) − Z(ψ) +tj1 (xj1)|] +≤ +� � +S +� +−r min(1/c,1) +√ +(d+1)(c2+1) +−∞ +� +∥ξ∥≤cs +|f(A, −s, −ξ)γ0|dξdsπ(dA) ++ +� +H +� +−r min(1/c,1) +√ +(d+1)(c2+1) +−∞ +� +∥ξ∥≤cs +� +R +|f(A, −s, −ξ)y|ν(dy)dξdsπ(dA) +� +. +The bound for the θ-lex-coefficients is obtained following the proof line in (i). +Proof of Proposition 2.19. +(i) W.l.o.g., let us determine the truncated set when (tjb, xjb) = (0, 0). We +use, to this end, two auxiliary ambit sets translated by a value ψ > 0 along the spatial axis, namely, +the cones A0(ψ) and A0(−ψ), as illustrated in Figure 5-(a),(c) for c ≤ 1 and in Figure 5-(b),(d) +for c > 1. Then, we set the truncated integration set to Bψ +0 (0) = A0(0)\(A0(ψ) ∪ A0(−ψ)). Since +(tia, xia) ∈ V r +(0,0), it is sufficient to choose ψ such that the integration set of Z(ψ) +0 +(0) is a subset of +(V r +(0,0))c. To this end, the three intersecting points ( −ψ +2c , −ψ +2 ), ( −ψ +c , 0) and ( −ψ +2c , ψ +2 ) have to be inside +the set (V r +(0,0))c, as illustrated in Figure 5-(e) for c ≤ 1 and in Figure 5-(f) for c > 1. Clearly, this +leads to the conditions ψ ≤ rc, ψ ≤ 2r and ψ ≤ 2rc, which are satisfied for ψ = r min(2, c). Hence, +by using Proposition 2.8-(ii), we have that +θlex(r) ≤ 2V ar(Λ′)1/2 +�� ∞ +0 +� +A0(0)∩(A0(ψ)∪A0(−ψ)) +f(A, −s)2dsdξπ(dλ) +�1/2 += 2V ar(Λ′)1/2 +� � ∞ +0 +� +A0(0)∩A0(ψ) +f(A, −s)2dsdξπ(dλ) + +� ∞ +0 +� +A0(0)∩A0(−ψ) +f(A, −s)2dsdξπ(dλ) +− +� ∞ +0 +� +A0(0)∩A0(ψ)∩A0(−ψ) +f(A, −s)2dsdξπ(dλ) +�1/2 +≤ 2 +� +2Cov(Z0(0), Z0(r min(2, c))). +which converges to zero as r → ∞ for the dominated convergence theorem. +(ii) In this proof, we indicate the spatial components and write xia = (yia, zia) ∈ R × R and xjb = +(yjb, zjb) ∈ R×R for a �� {1, . . . , u} and b ∈ {1, . . . , v}. W.l.o.g., let us then determine the truncated +set when (tjb, yjb, zjb) = (0, 0, 0). To this end, we use four additional ambit sets that are translated +by a value ψ > 0 along both spatial axis, namely, the cones A0(ψ, ψ), A0(ψ, −ψ), A0(−ψ, ψ) and +A0(−ψ, −ψ), as illustrated in Figure 7-(c) for c ≤ 1 and in Figure 7-(d) for c > 1). Then, we set the +truncated integration set to Bψ +0 (0) = A0(0, 0)\(A0(ψ, ψ) ∪ A0(ψ, −ψ) ∪ A0(−ψ, ψ) ∪ A0(−ψ, −ψ)). +Since (tia, yia, zia) ∈ V r +(0,0,0), it is sufficient to choose ψ such that the integration set of Z(ψ) +0 +(0, 0) +is a subset of (V r +(0,0,0))c, i.e. +sup +b∈Bψ +0 (0) +∥b∥∞ ≤ r. +(80) +43 + +(a) Integration set A0(0) for c = 1/ +√ +2 together with the +complement of V r +(0,0) for r = 3. +(b) Integration set A0(0) for c = 2 +√ +2 together with the +complement of V r +(0,0) for r = 3. +(c) Integration set A0(0) together with A0(ψ) and A0(−ψ) +for ψ = r min(2, c). +(d) Integration set A0(0) together with A0(ψ) and A0(−ψ) +for ψ = r min(2, c). +(e) Integration set of Z(ψ) +0 +(0) together with the complement +of V r +(0,0). +(f) Integration set of Z(ψ) +0 +(0) together with the complement +of V r +(0,0). +Figure 5: Integration set and truncated integration set of an MMAF Zt(x), exemplary for (t, x) = (0, 0). 
+44 + +6 + = rmin(2,c) = 3/V2 +Ao(2) +4 +(0, 2b) +2 +0 +-2 +(0,一) +-4 +Ao(2b) +-6 +1 +-6 +-5 +-4 +-3 +-2 +-1 +0 +1(0)K +Ao(cb) +6 +4 +2 +ob = rmin(2, c) +=6 +-2 +-4 +Ao(-) +(0, -山 +-6 +-6 +-5 +-4 +-3 +-2 +-1 +0Ao(O)(Ao() U Ao() +6 +2c +2 +0 +-2 +-4 +2c +2 +-6 +-6 +-5 +-4 +-3 +-2 +-1 +0 +16 +D +2c +2 +4 +2 +Ao(O)(Ao() +UAo(-)) +-2 +-4 +C +2c +-6 +2 +-6 +-5 +-4 +-3 +-2 +-1 +06 +c= 1/V2 +4 +r=3 +Ao(0) +2 +(0,0) +0 +((00)A) +-2 +-4 +-6 +-6 +-5 +-4 +-3 +-2 +-1 +0 +16 +Ao(0) +4 +3 +2 +(0,0) +0 +-2 +-4 +(V(0,0))c +-6 +-6 +-5 +-4 +-3 +-2 +0In the following we prove that the choice ψ = r min(1, c/ +√ +2) is sufficient for (80) to hold. We +investigate cross sections of the truncated integration set Bψ +0 (0) along the time axis. For a fixed +time point t, we call this cross-section Bt and, similarly, we denote the cross-section of an ambit +set by At. Note that the cross sections of our ambit sets along the time axis are circles with radius +|ct| (see also Figure 6). +t ∈ +� +−ψ +√ +2c, 0 +� +: As the distance between the center of the circle At +0(0, 0) and the centers of the circles At +0(ψ, ψ), +At +0(−ψ, ψ), At +0(ψ, −ψ), At +0(−ψ, −ψ) is +√ +2ψ, respectively, the set At +0(0, 0) (which is a circle with +radius |ct|) is disjoint from every of the additional ambit sets’ cross-sections at t and hence +Bt = At +0(0, 0) (see Figure 6-(a)). Clearly, we obtain +sup +t∈ +� +−ψ/( +√ +2c),0 +� sup +b∈Bt∥b∥ = +sup +t∈ +� +−ψ/( +√ +2c),0 +� max(c|t|, |t|) = max +� ψ +√ +2, +ψ +√ +2c +� +. +(81) +t ∈ +� +−ψ +c , −ψ +√ +2c +� +: For such t the set At +0(0, 0) intersects with every additional ambit sets’ cross-section (see Figure +6-(b)). However, as the additional ambit sets’ cross-sections do not intersect with each other, +the point p1(t) = (t, c|t|, 0) ∈ Bt on the boundary of At +0(0, 0) (see the red point in Figure 6-(b)) +is not excluded from Bt by any additional ambit set. Note that symmetry makes it sufficient +to look at p1(t). Hence, we obtain +sup +t∈ +� +−ψ/c,−ψ/( +√ +2c) +� sup +b∈Bt∥b∥ = +sup +t∈ +� +−ψ/c,−ψ/( +√ +2c) +�∥p1(t)∥ = max +� +ψ, ψ +c +� +. +(82) +t ∈ +� +− +√ +2ψ +c +, −ψ +c +� +: For such t the set At +0(0, 0) intersects with every additional ambit sets’ cross-section. Such +intersection additionally restrict At +0(0, 0) (see Figure 6-(c)). Straightforward calculations show +that the point where the boundaries of At +0(−ψ, ψ) and At +0(−ψ, ψ) as well as the set At +0(0, 0) +intersect, say p2(t), is given by (t, 0, ψ − +� +(ct)2 − ψ2) (see red point in Figure 6-(c)). Note +that symmetry makes it sufficient to look at p2(t). We obtain +sup +t∈ +� +− +√ +2ψ/c,−ψ/c +� sup +b∈Bt∥b∥ = +sup +t∈ +� +− +√ +2ψ/c,−ψ/c +�∥p2(t)∥ = max +� +ψ, +√ +2ψ +c +� +. +(83) +t ≤ − +√ +2ψ +c +: In the following, we show that for such t, the set At(0, 0) is entirely included in the union of the +additional ambit sets’ cross sections. Clearly, this is true if the upper point where the bound- +aries of At +0(−ψ, ψ) and At +0(−ψ, ψ) intersect, say p3(t), is outside of At +0(0, 0) (see the red point +in Figure 6-(d)). Note that symmetry makes it sufficient to look at p3(t). As straighforward +calculations show that p3(t) = (t, 0, ψ + +� +(ct)2 − ψ2), this is true if ψ + +� +(ct)2 − ψ2 ≥ c|t|, +or equivalently (ψ + +� +(ct)2 − ψ2)2 ≥ (ct)2. Moreover, we have +ψ2 + 2ψ +� +(ct)2 − ψ2 + (ct)2 − ψ2 ≥ (ct)2 ⇐⇒ ψ ≥ 0. +(84) +In view of condition (80) we combine (81), (82) and (83) and set ψ = r min(1, c/ +√ +2), which also +satisfies (84). 
+45 + +In addition to the cross sectional views from Figure 6, we give a full three-dimensional view of the +set Bψ +0 (0) for c ≤ 1 in Figure 7-(e) and for c > 1 in Figure 7-(f) that highlight the points on the +boundary of Bψ +0 (0) with maximal ∞-norm for ψ = r min(1, c/ +√ +2). +Therefore, because of Proposition 2.8-(ii), we can conclude that +θlex(r) ≤ 2V ar(Λ′)1/2 +�� ∞ +0 +� +A0(0,0)∩(A0(ψ,ψ)∪A0(−ψ,ψ)∪A0(ψ,−ψ)∪A0(−ψ,−ψ)) +f(A, −s)2dsdξπ(dλ) +�1/2 += 2V ar(Λ′)1/2 +� � ∞ +0 +� +A0(0,0)∩A0(ψ,ψ) +f(A, −s)2dsdξπ(dλ) ++ +� ∞ +0 +� +A0(0,0)∩A0(−ψ,ψ) +f(A, −s)2dsdξπ(dλ) ++ +� ∞ +0 +� +A0(0,0)∩A0(ψ,−ψ) +f(A, −s)2dsdξπ(dλ) ++ +� ∞ +0 +� +A0(0,0)∩A0(−ψ,−ψ) +f(A, −s)2dsdξπ(dλ) +− +� ∞ +0 +� +A0(0)∩A0(ψ,−ψ)∩A0(−ψ,−ψ) +f(A, −s)2dsdξπ(dλ) +− +� ∞ +0 +� +A0(0)∩A0(−ψ,ψ)∩(A0(ψ,−ψ)∪A0(−ψ,−ψ)) +f(A, −s)2dsdξπ(dλ) +− +� ∞ +0 +� +A0(0)∩A0(ψ,ψ)∩(A0(−ψ,ψ)∪A0(ψ,−ψ)∪A0(−ψ,−ψ)) +f(A, −s)2dsdξπ(dλ) +�1/2 +≤ 2 +� +2Cov(Z0(0, 0), Z0(ψ, ψ)) + 2Cov(Z0(0, 0), Z0(ψ, −ψ)). +which converges to zero as r → ∞ for the dominated convergence theorem. +B.2 +Proofs of Section 3 +Proof of Proposition 3.4. We drop the bold notations indicating random fields and stochastic processes +in the following. Let h ∈ H, we call Li = L(h(Xi), Yi) for i ∈ Z, Z(M) +t +(x) := Zt(x) ∧ M for M > 1, and +L(M) = (L(h(X(M) +i +), Y (M) +i +))i∈Z where +X(M) +i += L−(M) +p +(t0 + ia, x∗), +and +Y (M) +i += Z(M) +t0+ia(x∗), for i ∈ Z, +where +L−(M) +p +(t, x) = {Z(M) +s +(ξ) : (s, ξ) ∈ Z × L, ∥x − ξ∥ ≤ c (t − s) and t − s ≤ p}. +For u ∈ N, i1 ≤ i2 ≤ . . . ≤ iu ≤ iu + k = j with k ∈ N, let us consider the marginal of the field +� +(Xi1, Yi1), . . . , (Xiu, Yiu), (Xj, Yj) +� +, +(85) +46 + +(a) Cross sections of the auxiliary +ambit sets and At +0(0, 0) for t = −1 +and c = 2. +(b) Cross sections of the auxiliary +ambit sets and At +0(0, 0) for t = −5/4 +and c = 2. +(c) Cross sections of the auxiliary +ambit sets and At +0(0, 0) for t = −7/4 +and c = 2. +(d) Cross sections of the auxiliary +ambit sets and At +0(0, 0) for t = −2.2 +and c = 2. +Figure 6: Cross sections of the auxiliary ambit sets and At +0(0, 0) at different time points for ψ = 3. +and let us define +Γ = {(ti, xi) ∈ Z1+d: Zti(xi) ∈ L− +p (t0 + isa, x∗) or (ti, xi) = (t0 + isa, x∗) for s = 1, . . . , u}, +and +Γ′ = {(ti, xi) ∈ Z1+d: Zti(xi) ∈ L− +p (t0 + ja, x∗) or (ti, xi) = (t0 + isa, x∗)}. +Then r = dist(Γ, Γ′). In particular Γ ⊂ V r +Γ′, and r = (j − iu)a − p. For F ∈ G∗ +u and G ∈ G1, then +|Cov(F(Li1, . . . , Liu), G(Lj))| +(86) +≤|Cov(F(Li1, . . . , Liu), G(Lj) − G(L(M) +j +))| +(87) ++ |Cov(F(Li1, . . . , Liu), G(L(M) +j +))|. +(88) +The summand (87) is less than or equal to +2∥F∥∞Lip(G)E[|Lj − L(M) +j +|] ≤ 2∥F∥∞Lip(G)(E[|Yj − Y (M) +j +|] + E[|h(Xj) − h(X(M) +j +)|]) +≤ 2∥F∥∞Lip(G)(Lip(h)a(p, c) + 1)E[|Zt1(x1) − Z(M) +t1 +(x1)|] +by stationarity of the field Z, and because L and h are Lipschitz functions. Moreover, the function G(L(M) +j +) +47 + +A(, ) +At(, b) +6 +4 +2 +A(0, 0) +2 +0 +-2 +-4 +t=-1 +-6 +A(,) +A(,-) +-8 +-8 +-6 +-4 +-2 +0 +2 +4 +6 +8 +yA(, ) +At(,) +6 +4 +2 +Pi(t) +—A(O, 0) +-2 +-4 +t = -5/4 +-6 +A(,) +A(,) +-8 +-8 +-6 +-4 +-2 +0 +2 +4 +6 +8 +yA(, ) +At(, b) +p2(t) +6 +4 +2 +A(0, 0) +2 +0 +-2 +-4 +t = -7/4 +9- +A(,) +A(,一) +-8 +-8 +-6 +-4 +-2 +0 +2 +4 +6 +8 +y8 +P3(t) +6 +A(,) +A(,) +4 +2 +At (O, 0 +0 +-2 +-4 +A(,) +A(,) +-6 +t = -2.2 +-8 +-8 +-6 +-4 +-2 +0 +2 +4 +6 +8 +y(a) Integration set A0(0, 0) for c = 1/ +√ +2 together with the +complement of V r +(0,0,0) for r = 3. 
+(b) Integration set A0(0, 0) for c = 2 together with the +complement of V r +(0,0,0) for r = 3. +(c) Integration +set +A0(0, 0) +together +with +A0(ψ, ψ), +A0(−ψ, ψ), +A0(ψ, −ψ) +and +A0(−ψ, −ψ) +for +ψ += +r min(1, c/ +√ +2). +(d) Integration +set +A0(0, 0) +together +with +A0(ψ, ψ), +A0(−ψ, ψ), +A0(ψ, −ψ) +and +A0(−ψ, −ψ) +for +ψ += +r min(1, c/ +√ +2). +(e) B is the integration set of Z(ψ) +0 +(0, 0). In addition, we +illustrate A0(−ψ, −ψ) for c = 1/ +√ +2 together with the com- +plement of V r +(0,0,0) for r = 3. +(f) B is the integration set of Z(ψ) +0 +(0, 0). In addition, we +illustrate A0(−ψ, −ψ) for c = 2 together with the comple- +ment of V r +(0,0,0) for r = 3. +Figure 7: Integration set and truncated integration set of an MMAF Zt(y, z) for d = 2. +48 + +8- +9 +(0, 0,0) +4 +r=3 +2 +At(0, 0) +2 +0 +-2 +-4 +(000A) +9- +-8 +c = 1/V2 +5 +0 +0 +-1 +-5 +-2 +-3 +y +-4 +t8 +Ao(0, 0) +6 +(0, 0,0) +4 +r=3 +0 +2 +-2 +-4 +(V(o,0,0) +9- +-8 +5 +C +0 +0 +-1 +-5 +-2 +-3 +y +-4 +t8 +(0,-,) +9 +(0, b, ) +=rmin +4 +3/2 +2 +0 +-2 +-4 +9- +-8 +(0, 4,-) +5 +0,-山二山) +0 +0 +-1 +-5 +-2 +-3 +y +-4 +t(0,一, ) +(0,, +8 +6 +b += rmin +4 +3 +0 +2 +-2 +-4 +9- +-8 +0一一山 +5 +(0,,-山) +0 +0 +-1 +-5 +-2 +-3 +y +-4 +t,0,4 +0,0 +4 +3 +B +1 +2 +.0 +-1 +-2 +-3 +Ao(一,一山 +4 +-4 +(V0,0,0))℃ +2 +0 +-3 +-2 +-2.5 +-2 +-1.5 +-1 +0.5 +-4 +0 +0.5 +y +t于,0,4 +b. +0 +(V(0.0,0)° +4 +0 +3 +B +1 +2 +.0 +-1 +-2 +-3 +Ao(-,一山) +-4 +2 +0 +-3 +-2.5 +-2 +-2 +-1.5 +-1 +-0.5 +-4 +0 +0.5 +y +tbelongs to Ga(p,c)+1. Let (X, Y ), (X′, Y ′) ∈ Ra(p,c)+1, then +|G(L(h(X(M)), Y (M))) − G(L(h(X′(M)), Y ′(M)))| ≤ Lip(G)|L(h(X(M)), Y (M)) − L(h(X′(M)), Y ′(M))| +≤ Lip(G)(|h(X(M))) − h(X′(M))| + |Y (M) − Y ′(M)| +≤ Lip(G)(Lip(h) + 1)(∥X − X′∥1 + |Y − Y ′|), +and Lip(G(L(M) +j +)) ≤ Lip(G)(Lip(h) + 1). +Because Z is a θ-lex weakly dependent random field, (88) is less than or equal to +˜d∥F∥∞Lip(G)(Lip(h) + 1)θlex(r). +We choose now M = r and obtain that (86) is less than or equal than +∥F∥∞Lip(G) ˜d(Lip(h)a(p, c) + 1) +�2 +˜d +E[|Zt1(x1) − Z(r) +t1 (x1)|] + θlex(r) +� +. +The quantity above converges to zero as r → ∞. Therefore, (Li)i∈Z is a θ-weakly dependent process. +Proof of Proposition 3.7. We drop the bold notations indicating random fields and stochastic processes in +the following. We call Li = L(hβ(Xi), Yi) for i ∈ Z, as defined in Proposition 2.17, and β ∈ B. Moreover +we will use the notation Z(ψ) +t +(x) to indicate a truncated of the field Z and L(ψ) = (L(hβ(X(ψ) +i +), Y (ψ) +i +))i∈Z +where +X(ψ) +i += L−(ψ) +p +(t0 + ia, x∗), +and +Y (ψ) +i += Z(ψ) +t0+ia(x∗), for i ∈ Z, +where +L−(ψ) +p +(t, x) = {Z(ψ) +s +(ξ) : (s, ξ) ∈ Z × L, ∥x − ξ∥ ≤ c (t − s) and t − s ≤ p}. +For u ∈ N, i1 ≤ i2 ≤ . . . ≤ iu ≤ iu + k = j with k ∈ N, let us consider the marginal of the field +� +(Xi1, Yi1), . . . , (Xiu, Yiu), (Xj, Yj) +� +, +(89) +and let us define +Γ = {(ti, xi) ∈ Z1+d: Zti(xi) ∈ L− +p (t0 + isa, x∗) or (ti, xi) = (t0 + isa, x∗) for s = 1, . . . , u}, +and +Γ′ = {(ti, xi) ∈ Z1+d: Zti(xi) ∈ L− +p (t0 + ja, x∗) or (ti, xi) = (t0 + isa, x∗)}. +Then r = dist(Γ, Γ′). In particular Γ ⊂ V r +Γ′, and r = (j − iu)a − p. For F ∈ G∗ +u and G ∈ G1, then +|Cov(F(Li1, . . . , Liu), G(Lj))| +≤|Cov(F(Li1, . . . , Liu), G(Lj) − G(L(ψ) +j +))| +(90) ++ |Cov(F(Li1, . . . , Liu), G(L(ψ) +j +))|. +(91) +The summand (91) is equal to zero because Γ ⊂ V r +Γ′, see proof of Proposition 2.19 for more details about +49 + +this part of the proof. 
Proof of Proposition 3.7. We drop the bold notation indicating random fields and stochastic processes in the following. We set $L_i = L(h_\beta(X_i), Y_i)$ for $i \in \mathbb{Z}$, as defined in Proposition 2.17, and $\beta \in B$. Moreover, we use the notation $Z_t^{(\psi)}(x)$ to indicate a truncation of the field $Z$ and $L^{(\psi)} = (L(h_\beta(X_i^{(\psi)}), Y_i^{(\psi)}))_{i \in \mathbb{Z}}$, where
\[
X_i^{(\psi)} = L_p^{-(\psi)}(t_0 + ia, x^*), \quad \text{and} \quad Y_i^{(\psi)} = Z_{t_0 + ia}^{(\psi)}(x^*), \quad \text{for } i \in \mathbb{Z},
\]
and
\[
L_p^{-(\psi)}(t, x) = \{ Z_s^{(\psi)}(\xi) : (s, \xi) \in \mathbb{Z} \times L, \ \|x - \xi\| \leq c\,(t - s) \text{ and } t - s \leq p \}.
\]
For $u \in \mathbb{N}$, $i_1 \leq i_2 \leq \ldots \leq i_u \leq i_u + k = j$ with $k \in \mathbb{N}$, let us consider the marginal of the field
\[
\big( (X_{i_1}, Y_{i_1}), \ldots, (X_{i_u}, Y_{i_u}), (X_j, Y_j) \big), \tag{89}
\]
and let us define
\[
\Gamma = \{ (t_i, x_i) \in \mathbb{Z}^{1+d} : Z_{t_i}(x_i) \in L_p^-(t_0 + i_s a, x^*) \text{ or } (t_i, x_i) = (t_0 + i_s a, x^*) \text{ for } s = 1, \ldots, u \},
\]
and
\[
\Gamma' = \{ (t_i, x_i) \in \mathbb{Z}^{1+d} : Z_{t_i}(x_i) \in L_p^-(t_0 + ja, x^*) \text{ or } (t_i, x_i) = (t_0 + ja, x^*) \}.
\]
Then $r = \mathrm{dist}(\Gamma, \Gamma')$. In particular, $\Gamma \subset V_{\Gamma'}^r$, and $r = (j - i_u)a - p$. For $F \in \mathcal{G}_u^*$ and $G \in \mathcal{G}_1$, we then have
\begin{align}
|\mathrm{Cov}(F(L_{i_1}, \ldots, L_{i_u}), G(L_j))| &\leq |\mathrm{Cov}(F(L_{i_1}, \ldots, L_{i_u}), G(L_j) - G(L_j^{(\psi)}))| \tag{90} \\
&\quad + |\mathrm{Cov}(F(L_{i_1}, \ldots, L_{i_u}), G(L_j^{(\psi)}))|. \tag{91}
\end{align}
The summand (91) is equal to zero because $\Gamma \subset V_{\Gamma'}^r$; see the proof of Proposition 2.19 for more details on this part of the proof. We can then bound (90) from above by
\begin{align}
2\|F\|_\infty \mathrm{Lip}(G)\, E[|L_j - L_j^{(\psi)}|] &\leq 2\|F\|_\infty \mathrm{Lip}(G)\big( E[|Y_j - Y_j^{(\psi)}|] + E[|h(X_j) - h(X_j^{(\psi)})|] \big) \tag{92} \\
&\leq 2\|F\|_\infty \mathrm{Lip}(G)\big( \mathrm{Lip}(h)\, a(p,c) + 1 \big)\, E[|Z_{t_1}(x_1) - Z_{t_1}^{(\psi)}(x_1)|], \tag{93}
\end{align}
where (92) holds because $L$ is a function with Lipschitz constant equal to one, and (93) holds given that $h \in \mathcal{L}(\mathbb{R}^{a(p,c)})$. In our case, we work with linear functions, for which
\[
E\big[ |h_\beta(X_j) - h_\beta(X_j^{(\psi)})| \big] = E\Big[ \Big| \sum_{l=1}^{a(p,c)} \beta_{1,l} \big( Z_{t_l}(x_l) - Z_{t_l}^{(\psi)}(x_l) \big) \Big| \Big]. \tag{94}
\]
By stationarity of the field $Z$, we have that (94) is less than or equal to $\|\beta_1\|_1 E[|Z_{t_1}(x_1) - Z_{t_1}^{(\psi)}(x_1)|]$. Overall, we have that (90) is less than or equal to
\[
2\|F\|_\infty \mathrm{Lip}(G)\big( \mathrm{Lip}(h)\, a(p,c) + 1 \big)\, E[|Z_{t_1}(x_1) - Z_{t_1}^{(\psi)}(x_1)|] \quad \text{for } h \text{ a Lipschitz function},
\]
or it is less than or equal to
\[
2\|F\|_\infty \mathrm{Lip}(G)\big( \|\beta_1\|_1 + 1 \big)\, E[|Z_{t_1}(x_1) - Z_{t_1}^{(\psi)}(x_1)|] \quad \text{for } h_\beta \text{ a linear function}.
\]
Because of the properties of the truncated field $Z_{t_1}^{(\psi)}(x_1)$, the above bounds converge to zero as $r \to \infty$. Therefore, $(L_i)_{i \in \mathbb{Z}}$ is a $\theta$-weakly dependent process.

The proof of Proposition 3.8 is based on a blocking technique used in [34] and [44], which relies on several lemmas. To ease the understanding of the proof of Proposition 3.8, we prove these lemmas below, given that they undergo several modifications in our framework. Let us start by partitioning the set $\{1, 2, \ldots, m\}$ into $k$ blocks. Each block contains $l = \lfloor \frac{m}{k} \rfloor$ terms. Let $h = m - kl < k$ denote the remainder when we divide $m$ by $k$. We construct $k$ blocks such that the number of elements in the $j$-th block is
\[
\bar l_j = \begin{cases} l + 1 & \text{if } j = 1, 2, \ldots, h, \\ l & \text{if } j = h + 1, \ldots, k. \end{cases}
\]
Let $(U_i)_{i \in \mathbb{Z}}$ be a stationary process and $V_m = \sum_{i=1}^m U_i$; for $j = 1, \ldots, k$ we define the $j$-th block as
\[
V_{j,m} = U_j + U_{j+k} + \ldots + U_{j + (\bar l_j - 1)k} = \sum_{i=1}^{\bar l_j} U_{j + (i-1)k},
\]
so that
\[
V_m = \sum_{j=1}^k V_{j,m} = \sum_{j=1}^k \sum_{i=1}^{\bar l_j} U_{j + (i-1)k}.
\]
For $j = 1, 2, \ldots, k$, let us define $p_j = \frac{\bar l_j}{m}$. It follows that $\sum_{j=1}^k p_j = \frac{1}{m} \sum_{j=1}^k \bar l_j = 1$.
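To make the interleaved indexing concrete, the following minimal Python sketch reproduces the bookkeeping above (the block sizes $\bar l_j$, the indices $j + (i-1)k$, and the weights $p_j$); it is only an illustration and plays no role in the proofs. The function name `interleaved_blocks` and the example values of $m$ and $k$ are my own choices.

```python
def interleaved_blocks(m, k):
    """Partition {1, ..., m} into k interleaved blocks: block j collects the
    indices j, j + k, j + 2k, ...; the first h = m - k*l blocks have l + 1
    elements and the remaining ones have l = m // k elements."""
    l = m // k
    h = m - k * l
    blocks = []
    for j in range(1, k + 1):
        size = l + 1 if j <= h else l                  # this is \bar{l}_j
        blocks.append([j + (i - 1) * k for i in range(1, size + 1)])
    return blocks

m, k = 23, 5
blocks = interleaved_blocks(m, k)
sizes = [len(b) for b in blocks]                       # the \bar{l}_j's
p = [s / m for s in sizes]                             # the weights p_j, summing to 1
assert sorted(i for b in blocks for i in b) == list(range(1, m + 1))
assert abs(sum(p) - 1.0) < 1e-12
print(sizes, p)
```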
Lemma B.1. For all $s \in \mathbb{R}$,
\[
E\Big[ \exp\Big( \frac{s V_m}{m} \Big) \Big] \leq \sum_{j=1}^k p_j\, E\Big[ \exp\Big( \frac{s V_{j,m}}{\bar l_j} \Big) \Big].
\]
The proof of the result above is due to Hoeffding [35].

Lemma B.2. Let the assumptions of Proposition 3.8 hold and define the process $(U_i)_{i \in \mathbb{Z}}$ by $U_i := f(X_i) - E[f(X_i)]$. For all $j = 1, 2, \ldots, k$, $l \geq 2$ and $0 < s < \frac{3l}{|b-a|}$,
\[
M_{\bar l_j}(s) = \bigg| E\Big[ \prod_{i=1}^{\bar l_j} \exp\Big( \frac{s U_{j+(i-1)k}}{\bar l_j} \Big) \Big] - \prod_{i=1}^{\bar l_j} E\Big[ \exp\Big( \frac{s U_{j+(i-1)k}}{\bar l_j} \Big) \Big] \bigg| \leq \exp\big( l - 1 + s|b-a| \big)\, \theta(k)\, \frac{s}{l}. \tag{95}
\]
The same result holds when defining the process $(U_i)_{i \in \mathbb{Z}}$ by $U_i = E[f(X_i)] - f(X_i)$.

Proof. Let us first discuss the case $U_i = f(X_i) - E[f(X_i)]$; the process $U$ then has mean zero and $|U_i| \leq |b-a|$. Let us define $\mathcal{F}_j = \sigma(U_i, i \leq j)$. We have
\begin{align*}
M_{\bar l_j} &:= \bigg| E\Big[ \prod_{i=1}^{\bar l_j} \exp\Big( \frac{s U_{j+(i-1)k}}{\bar l_j} \Big) \Big] - \prod_{i=1}^{\bar l_j} E\Big[ \exp\Big( \frac{s U_{j+(i-1)k}}{\bar l_j} \Big) \Big] \bigg| \\
&= \bigg| E\Big[ \prod_{i=1}^{\bar l_j - 1} \exp\Big( \frac{s U_{j+(i-1)k}}{\bar l_j} \Big)\, E\Big[ \exp\Big( \frac{s U_{j+(\bar l_j - 1)k}}{\bar l_j} \Big) \Big|\, \mathcal{F}_{j+(\bar l_j - 2)k} \Big] \Big] - \prod_{i=1}^{\bar l_j} E\Big[ \exp\Big( \frac{s U_{j+(i-1)k}}{\bar l_j} \Big) \Big] \bigg| \\
&\leq \bigg| E\Big[ \prod_{i=1}^{\bar l_j - 1} \exp\Big( \frac{s U_{j+(i-1)k}}{\bar l_j} \Big) \Big( E\Big[ \exp\Big( \frac{s U_{j+(\bar l_j - 1)k}}{\bar l_j} \Big) \Big|\, \mathcal{F}_{j+(\bar l_j - 2)k} \Big] - E\Big[ \exp\Big( \frac{s U_{j+(\bar l_j - 1)k}}{\bar l_j} \Big) \Big] \Big) \Big] \bigg| \\
&\quad + \bigg| E\Big[ \exp\Big( \frac{s U_{j+(\bar l_j - 1)k}}{\bar l_j} \Big) \Big] \bigg|\, \bigg| E\Big[ \prod_{i=1}^{\bar l_j - 1} \exp\Big( \frac{s U_{j+(i-1)k}}{\bar l_j} \Big) \Big] - \prod_{i=1}^{\bar l_j - 1} E\Big[ \exp\Big( \frac{s U_{j+(i-1)k}}{\bar l_j} \Big) \Big] \bigg| \\
&\leq \bigg\| \prod_{i=1}^{\bar l_j - 1} \exp\Big( \frac{s U_{j+(i-1)k}}{\bar l_j} \Big) \bigg\|_\infty E\Big[ \Big| E\Big[ \exp\Big( \frac{s U_{j+(\bar l_j - 1)k}}{\bar l_j} \Big) \Big|\, \mathcal{F}_{j+(\bar l_j - 2)k} \Big] - E\Big[ \exp\Big( \frac{s U_{j+(\bar l_j - 1)k}}{\bar l_j} \Big) \Big] \Big| \Big] \\
&\quad + \bigg\| \exp\Big( \frac{s U_{j+(\bar l_j - 1)k}}{\bar l_j} \Big) \bigg\|_\infty M_{\bar l_j - 1}.
\end{align*}
The above is then less than or equal to
\begin{align}
&\exp\Big( \frac{s(\bar l_j - 1)|b-a|}{\bar l_j} \Big) \exp\Big( \frac{-s|a|}{\bar l_j} \Big)\, E\Big[ \Big| E\Big[ \exp\Big( \frac{s f(X_{j+(\bar l_j - 1)k})}{\bar l_j} \Big) \Big|\, \mathcal{F}_{j+(\bar l_j - 2)k} \Big] - E\Big[ \exp\Big( \frac{s f(X_{j+(\bar l_j - 1)k})}{\bar l_j} \Big) \Big] \Big| \Big] \tag{96} \\
&\quad + \exp\Big( \frac{s|b-a|}{\bar l_j} \Big) M_{\bar l_j - 1}. \notag
\end{align}
Note that the function $g(x) = \exp\big( \frac{sx}{\bar l_j} \big) \exp\big( \frac{s|b|}{\bar l_j} \big) \frac{s}{\bar l_j} \mathbf{1}_A(x)$ is in $\mathbb{L}^1$ for each $s$, where $A = \{x : |x| \leq |b-a|\}$. We then use the mixingale-type representation of the $\theta$-coefficients of $f(X)$ and obtain that
\[
M_{\bar l_j} \leq \exp\Big( \frac{s \bar l_j |b-a|}{\bar l_j} \Big)\, \theta(k)\, \frac{s}{\bar l_j} + \exp\Big( \frac{s|b-a|}{\bar l_j} \Big) M_{\bar l_j - 1}.
\]
Let now $u = \exp\big( \frac{s|b-a|}{\bar l_j} \big)$; we have
\begin{align*}
M_{\bar l_j} &\leq \theta(k)\, u^{\bar l_j}\, \frac{s}{\bar l_j} + u\, M_{\bar l_j - 1} \\
&\leq (\bar l_j - 2)\, \theta(k)\, u^{\bar l_j}\, \frac{s}{\bar l_j} + u^{\bar l_j - 2} \Big| E\Big[ \exp\Big( \frac{s U_j}{\bar l_j} \Big) \exp\Big( \frac{s U_{j+k}}{\bar l_j} \Big) \Big] - E\Big[ \exp\Big( \frac{s U_j}{\bar l_j} \Big) \Big] E\Big[ \exp\Big( \frac{s U_{j+k}}{\bar l_j} \Big) \Big] \Big| \\
&\leq (\bar l_j - 2)\, \theta(k)\, u^{\bar l_j}\, \frac{s}{\bar l_j} + u^{\bar l_j - 1}\, E\Big[ \Big| E\Big[ \exp\Big( \frac{s U_{j+k}}{\bar l_j} \Big) \Big|\, \mathcal{F}_j \Big] - E\Big[ \exp\Big( \frac{s U_{j+k}}{\bar l_j} \Big) \Big] \Big| \Big] \\
&\leq (\bar l_j - 1)\, \theta(k)\, u^{\bar l_j}\, \frac{s}{\bar l_j} \leq \exp\big( \bar l_j - 2 + s|b-a| \big)\, \theta(k)\, \frac{s}{\bar l_j}.
\end{align*}
By assumption $\bar l_j \geq 2$. Moreover, in the last inequality we applied the estimate $\log(x) \leq x - 1$, valid for all $x > 0$. In conclusion, for all $j = 1, \ldots, k$ (recalling that $\bar l_j = l$ or $\bar l_j = l + 1$),
\[
M_{\bar l_j} \leq \exp( l - 1 + s|b-a| )\, \theta(k)\, \frac{s}{l}.
\]
Similar calculations apply when $U_i = E[f(X_i)] - f(X_i)$.

Remark B.3. Note that when showing Lemma B.2 for $U_i = E[f(X_i)] - f(X_i)$ there is a slight change in the proof at (96). However, the result (95) still holds.

Lemma B.4. Let the assumptions of Proposition 3.8 hold and define the process $(U_i)_{i \in \mathbb{Z}}$ by $U_i = f(X_i) - E[f(X_i)]$. For all $j = 1, 2, \ldots, k$, $l \geq 2$ and $0 < s < \frac{3l}{|b-a|}$,
\[
E\Big[ \exp\Big( \frac{s V_{j,m}}{\bar l_j} \Big) \Big] \leq \exp\Bigg( \frac{s^2 E[U_1^2]}{2l\big( 1 - \frac{s|b-a|}{3l} \big)} \Bigg) + \frac{s}{l} \exp( l - 1 + s|b-a| )\, \theta(k).
\]
The same result holds when defining the process $(U_i)_{i \in \mathbb{Z}}$ by $U_i = E[f(X_i)] - f(X_i)$.

Proof. We have
\begin{align}
E\Big[ \exp\Big( \frac{s V_{j,m}}{\bar l_j} \Big) \Big] &= E\Big[ \exp\Big( \sum_{i=1}^{\bar l_j} \frac{s U_{j+(i-1)k}}{\bar l_j} \Big) \Big] \notag \\
&\leq \prod_{i=1}^{\bar l_j} E\Big[ \exp\Big( \frac{s U_{j+(i-1)k}}{\bar l_j} \Big) \Big] + \bigg| E\Big[ \prod_{i=1}^{\bar l_j} \exp\Big( \frac{s U_{j+(i-1)k}}{\bar l_j} \Big) \Big] - \prod_{i=1}^{\bar l_j} E\Big[ \exp\Big( \frac{s U_{j+(i-1)k}}{\bar l_j} \Big) \Big] \bigg| \notag \\
&= E\Big[ \exp\Big( \frac{s U_{j+(i-1)k}}{\bar l_j} \Big) \Big]^{\bar l_j} + M_{\bar l_j}. \tag{97}
\end{align}
We have $E[U_{j+(i-1)k}] = 0$ by definition of the process $U$, and $\frac{U_{j+(i-1)k}}{\bar l_j}$ satisfies the Bernstein moment condition (Remark A1 in [44]) with $K_1 = \frac{|b-a|}{3 \bar l_j}$. Hence, for $\bar l_j \geq 2$ and $0 < s < \frac{3 \bar l_j}{|b-a|}$,
\[
E\Big[ \exp\Big( \frac{s U_{j+(i-1)k}}{\bar l_j} \Big) \Big] \leq \exp\Bigg( \frac{s^2 E[(U_{j+(i-1)k}/\bar l_j)^2]}{2\big( 1 - \frac{s|b-a|}{3 \bar l_j} \big)} \Bigg). \tag{98}
\]
Because $\bar l_j \geq l$, we can conclude that the inequality above holds for $l \geq 2$.
Moreover, by stationarity of the process $U$, we can bound (97) uniformly with respect to the index $j$ by using Lemma B.2 and by noticing that for all $j = 1, 2, \ldots, k$,
\[
0 < s < \frac{3l}{|b-a|} \leq \frac{3 \bar l_j}{|b-a|}, \quad \text{and hence} \quad \Big( 1 - \frac{s|b-a|}{3 \bar l_j} \Big) \geq \Big( 1 - \frac{s|b-a|}{3l} \Big).
\]
The same proof applies when defining $U_i = E[f(X_i)] - f(X_i)$.

Proof of Proposition 3.8. By combining Lemmas B.1, B.2 and B.4, we can show that the bound on the Laplace transform of the process $U := f(X) - E[f(X)]$ for $0 < s < \frac{3l}{|b-a|}$ is given by
\begin{align}
E\Big[ \exp\Big( s\, \frac{1}{m} \sum_{i=1}^m f(X_i) - E[f(X_i)] \Big) \Big] &= E\Big[ \exp\Big( s\, \frac{1}{m} \sum_{i=1}^m U_i \Big) \Big] = E\Big[ \exp\Big( \frac{s}{m} \sum_{j=1}^k V_{j,m} \Big) \Big] = E\Big[ \exp\Big( \frac{s}{m} \sum_{j=1}^k \sum_{i=1}^{\bar l_j} U_{j+(i-1)k} \Big) \Big] \notag \\
&\leq \sum_{j=1}^k p_j\, E\Big[ \exp\Big( \frac{s V_{j,m}}{\bar l_j} \Big) \Big] \tag{99} \\
&\leq \exp\Bigg( \frac{s^2 \mathrm{Var}(f(X_1))}{2l\big( 1 - \frac{s|b-a|}{3l} \big)} \Bigg) + \frac{s}{l} \exp( l - 1 + s|b-a| )\, \theta(k), \tag{100}
\end{align}
where $V_{j,m} = \sum_{i=1}^{\bar l_j} U_{j+(i-1)k}$. The inequalities (99) and (100) hold because of Lemma B.1 and Lemma B.4, respectively. We have then proved the inequality (42). The same proof applies for showing the bound (43) by defining $U_i = E[f(X_i)] - f(X_i)$.

Proof of Proposition 3.10. Let us choose $f(s) = \frac{s}{\epsilon}$ for $0 < s < \epsilon$, which satisfies the assumptions of Proposition 3.8 and has support in $[0, 1]$. Note that $\Delta(\beta) = \frac{\epsilon}{m} \big( \sum_{i=1}^m E[f(L_i^\epsilon)] - f(L_i^\epsilon) \big)$. We have in this case that $U_i = E[f(L_i^\epsilon)] - f(L_i^\epsilon)$, such that $E[U_i] = 0$ and $|U_i| \leq 1$. Equally, $\Delta'(\beta) = r_\epsilon(\beta) - R_\epsilon(\beta) = \frac{\epsilon}{m} \big( \sum_{i=1}^m f(L_i^\epsilon) - E[f(L_i^\epsilon)] \big)$, and again, defining $U_i' = f(L_i^\epsilon) - E[f(L_i^\epsilon)]$, we have $E[U_i'] = 0$ and $|U_i'| \leq 1$. Note that the process $(f(L_i^\epsilon))_{i \in \mathbb{Z}}$ has the same $\theta$-weak coefficients as the process $(L_i^\epsilon)_{i \in \mathbb{Z}}$ because $f$ is a Lipschitz function. We follow below the notations of Table 1.

(i) By Proposition 3.8 applied with $0 < \epsilon \sqrt{l} < 3\sqrt{l}$, and the bound (41),
\begin{align*}
E\big[ \exp\big( \sqrt{l}\, \Delta(\beta) \big) \big] &= E\Big[ \exp\Big( \epsilon \sqrt{l}\, \frac{1}{m} \sum_{i=1}^m U_i \Big) \Big] \\
&\leq \exp\Big( \frac{3\epsilon^2}{2(3-\epsilon)} \Big) + 2\bar\alpha\, (\|\beta_1\|_1 + 1) \exp\big( l - 1 + 3\sqrt{l} - \lambda r \big),
\end{align*}
where the last inequality holds because of the particular shape of the chosen function $f$. Let us determine the conditions on the parameters $a_t$ and $k$ such that $\exp(l - 1 + 3\sqrt{l} - \lambda r) = \exp(l - 1 + 3\sqrt{l} - \lambda(a_t k - p)) \leq 1$. Given that $l \geq 9$, the inequality above holds if
\[
\frac{2N}{a_t k} - 1 - \lambda a_t h_t k + \lambda p \leq 0.
\]
The above is equivalent to
\[
\lambda h_t a_t^2 k^2 - (\lambda p - 1) a_t k - 2N \geq 0,
\]
which holds if
\[
a_t k \geq \frac{(\lambda p - 1) + \sqrt{(\lambda p - 1)^2 + 8\lambda h_t N}}{2\lambda h_t}.
\]

(ii) As in (i), we have that
\begin{align*}
E\big[ \exp\big( \sqrt{l}\, \Delta(\beta) \big) \big] &= E\Big[ \exp\Big( \epsilon \sqrt{l}\, \frac{1}{m} \sum_{i=1}^m U_i \Big) \Big] \\
&\leq \exp\Big( \frac{3\epsilon^2}{2(3-\epsilon)} \Big) + 2\bar\alpha\, (\|\beta_1\|_1 + 1) \exp\big( l - 1 + 3\sqrt{l} \big)\, r^{-\lambda} \\
&\leq \exp\Big( \frac{3\epsilon^2}{2(3-\epsilon)} \Big) + 2\bar\alpha\, (\|\beta_1\|_1 + 1) \exp\big( l - 1 + 3\sqrt{l} - \lambda \log(a_t k - p) \big).
\end{align*}
We have that $\exp( l - 1 + 3\sqrt{l} - \lambda \log(a_t k - p) ) \leq 1$ holds if
\[
\frac{2N}{a_t k} - 1 - \lambda \log(a_t k - p) \leq 0,
\]
given that $l \geq 9$. The latter inequality holds if and only if the parameters $a_t$ and $k$ are chosen such that
\[
\frac{2N - a_t k}{a_t k \log(a_t k - p)} \leq \lambda.
\]

Proof of Corollary 3.13. We drop the bold notation indicating random fields and stochastic processes in this proof. When working with the family of predictors $h_{net,w}$ for $w \in B'$, the proof of Proposition 3.7 changes in the estimation of the bound (92). In particular, for a given $w \in B'$ we have that
\[
E[|h(X_j) - h(X_j^{(\psi)})|] = E\Big[ \Big| \sum_{l=1}^K \alpha_l \sigma(\beta_l^\top X_j + \gamma_l) - \sum_{l=1}^K \alpha_l \sigma(\beta_l^\top X_j^{(\psi)} + \gamma_l) \Big| \Big] \leq \sum_l |\alpha_l| \|\beta_l\|_1\, E[|Z_{t_1}(x_1) - Z_{t_1}^{(\psi)}(x_1)|]
\]
by the stationarity of the field $Z$. The proof of the Corollary then proceeds identically to the proof of Proposition 3.10 by modifying the bound (40) with the above calculations.
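The admissibility condition on $a_t k$ derived in the proof of Proposition 3.10-(i) can be checked numerically. The Python sketch below computes the smallest integer $k$ with $a_t k \geq \big( (\lambda p - 1) + \sqrt{(\lambda p - 1)^2 + 8\lambda h_t N} \big) / (2\lambda h_t)$; it only transcribes that inequality. The function name, the way the parameters (including $h_t$, whose definition comes from Table 1 of the paper and is not reproduced here) are passed, and the example values are assumptions of this sketch.

```python
import math

def minimal_block_number(N, a_t, h_t, lam, p):
    """Smallest integer k such that
    a_t * k >= ((lam*p - 1) + sqrt((lam*p - 1)**2 + 8*lam*h_t*N)) / (2*lam*h_t),
    i.e., the sufficient condition from the proof of Proposition 3.10-(i)
    under which exp(l - 1 + 3*sqrt(l) - lam*r) <= 1 (assuming l >= 9)."""
    threshold = ((lam * p - 1) + math.sqrt((lam * p - 1) ** 2 + 8 * lam * h_t * N)) / (2 * lam * h_t)
    return math.ceil(threshold / a_t)

# illustrative values: N observed times, temporal spacing a_t, decay rate lam, lag depth p
print(minimal_block_number(N=500, a_t=1.0, h_t=1.0, lam=0.5, p=3))
```

One would still need to verify that the implied block length $l = \lfloor m/k \rfloor$ satisfies $l \geq 9$, as assumed in the proof.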
We remind the reader that the proofs of Propositions 3.18 and 3.21 make use of the following lemma, which we recall for completeness.

Lemma B.5 (Legendre transform of the Kullback-Leibler divergence). For any $\pi \in \mathcal{M}_+^1(B)$ and any measurable function $h : B \to \mathbb{R}$ such that $\pi[\exp(h)] < \infty$, we have that
\[
\pi[\exp(h)] = \exp\Big( \sup_{\rho \in \mathcal{M}_+^1(B)} \big( \rho[h] - KL(\rho\|\pi) \big) \Big),
\]
with the convention $\infty - \infty = -\infty$. Moreover, as soon as $h$ is upper bounded on the support of $\pi$, the supremum with respect to $\rho$ on the right-hand side is reached by the Gibbs distribution with Radon-Nikodym derivative with respect to $\pi$ equal to $\frac{\exp(h)}{\pi[\exp(h)]}$.

The proof of Lemma B.5 has been known since the work of Kullback [42] in the case of a finite space $B$, whereas the general case has been proved by Donsker and Varadhan [27]. Given this result, we are now ready to prove our PAC Bayesian bounds.

Proof of Proposition 3.18. We follow a standard proof scheme developed in [12]. We have
\begin{align*}
\sqrt{l}\, \big( \hat\rho[R_\epsilon(\beta)] - \hat\rho[r_\epsilon(\beta)] \big) &= \hat\rho\big[ \sqrt{l}\, (R_\epsilon(\beta) - r_\epsilon(\beta)) \big] \\
&\leq KL(\hat\rho\|\pi) + \log\big( \pi\big[ \exp\big( \sqrt{l}\, (R_\epsilon(\beta) - r_\epsilon(\beta)) \big) \big] \big) \quad (\mathbb{P}\text{-a.s. by Lemma B.5}).
\end{align*}
We have that $\pi[\exp(\sqrt{l}\, (R_\epsilon(\beta) - r_\epsilon(\beta)))] := A_S$ is a random variable on $S$. By Markov's inequality, for $\delta \in (0,1)$,
\[
\mathbb{P}\Big( A_S \leq \frac{E[A_S]}{\delta} \Big) \geq 1 - \delta.
\]
This in turn implies that, with probability at least $1 - \delta$ over $S$,
\begin{align}
\hat\rho[R_\epsilon(\beta)] - \hat\rho[r_\epsilon(\beta)] &\leq \frac{KL(\hat\rho\|\pi) + \log\frac{1}{\delta}}{\sqrt{l}} + \frac{1}{\sqrt{l}} \log\Big( \pi\Big[ E\big[ \exp\big( \sqrt{l}\, \Delta(\beta) \big) \big] \Big] \Big) \tag{101} \\
&\leq \frac{KL(\hat\rho\|\pi) + \log\frac{1}{\delta}}{\sqrt{l}} + \frac{1}{\sqrt{l}} \log\Big( \pi\Big[ \exp\Big( \frac{3\epsilon^2}{2(3-\epsilon)} \Big) + 2(\|\beta_1\|_1 + 1)\bar\alpha \Big] \Big), \tag{102}
\end{align}
where (101) holds by swapping the expectation over $S$ and over $\pi$ using Fubini's theorem, and (102) is obtained by using Proposition 3.10. The second inequality can be obtained similarly.

Proof of Proposition 3.21. Let $h = -\sqrt{l}\, r_\epsilon(\beta)$ and $\frac{d\bar\rho}{d\pi} = \frac{\exp(h)}{\pi[\exp(h)]}$. By Lemma B.5, we have that
\[
\bar\rho = \arg\inf_{\hat\rho} \big\{ KL(\hat\rho\|\pi) - \hat\rho[h] \big\} = \arg\inf_{\hat\rho} \Big\{ \frac{KL(\hat\rho\|\pi)}{\sqrt{l}} + \hat\rho[r_\epsilon(\beta)] \Big\},
\]
and by using (58), for all $\delta \in (0,1)$,
\[
\mathbb{P}\bigg( \bar\rho[R_\epsilon(\beta)] \leq \inf_{\hat\rho} \Big\{ \hat\rho[r_\epsilon(\beta)] + \Big( KL(\hat\rho\|\pi) + \log\Big(\frac{1}{\delta}\Big) + \frac{3\epsilon^2}{2(3-\epsilon)} \Big) \frac{1}{\sqrt{l}} + \frac{1}{\sqrt{l}} \log\big( \pi[1 + 2(\|\beta_1\|_1 + 1)\bar\alpha] \big) \Big\} \bigg) \geq 1 - \delta. \tag{103}
\]
A union bound gives that the inequality (103) and the one in (59) hold simultaneously with probability at least $1 - 2\delta$. By now substituting for $\hat\rho[r_\epsilon(\beta)]$ in (103) its bound obtained in inequality (59), we obtain that
\[
\mathbb{P}\bigg( \bar\rho[R_\epsilon(\beta)] \leq \inf_{\hat\rho} \Big\{ \hat\rho[R_\epsilon(\beta)] + \Big( KL(\hat\rho\|\pi) + \log\Big(\frac{1}{\delta}\Big) + \frac{3\epsilon^2}{2(3-\epsilon)} \Big) \frac{1}{\sqrt{l}} + \frac{1}{\sqrt{l}} \log\big( \pi[1 + 2(\|\beta_1\|_1 + 1)\bar\alpha] \big) \Big\} \bigg) \geq 1 - 2\delta.
\]
We conclude by replacing $\delta$ with $\frac{\delta}{2}$.
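On a finite parameter grid, the Gibbs distribution of Lemma B.5 (used with $h = -\sqrt{l}\, r_\epsilon(\beta)$ in the proof of Proposition 3.21) can be computed in closed form. The Python sketch below does this with a log-sum-exp step for numerical stability; the uniform prior, the grid of one hundred candidates, and the synthetic empirical risks are illustrative stand-ins, not the paper's actual choices.

```python
import numpy as np

def gibbs_weights(log_prior, emp_risk, sqrt_l):
    """Weights of the Gibbs distribution d(rho)/d(pi) = exp(h) / pi[exp(h)]
    with h = -sqrt(l) * r_eps(beta), evaluated on a finite grid of candidates."""
    log_w = log_prior - sqrt_l * emp_risk      # log of the unnormalized Gibbs density
    log_w -= log_w.max()                       # shift for numerical stability
    w = np.exp(log_w)
    return w / w.sum()

# illustrative usage: 100 candidate predictors beta, uniform prior pi on the grid
rng = np.random.default_rng(2)
emp_risk = rng.uniform(0.5, 1.5, size=100)     # stand-in for r_eps(beta) on each candidate
log_prior = np.full(100, -np.log(100))
rho_bar = gibbs_weights(log_prior, emp_risk, sqrt_l=np.sqrt(50))
beta_hat_index = rng.choice(100, p=rho_bar)    # one draw from the randomized estimator
print(rho_bar.sum(), beta_hat_index)
```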
The model selection procedure of the paper discussed in Section 3.3 can be proven by using the following two lemmas.

Lemma B.6. Let $0 < \epsilon < 3$ and let the assumptions of Proposition 3.10 hold. Moreover, let $f_p$ be a regular conditional probability measure such that $f_p \ll \bar\rho_p$ and $\bar\rho_p \ll f_p$ for each possible training data set $S$. Then,
\begin{align}
&E\Big[ \sup_{f_p} \exp\Big( f_p\Big[ \sqrt{l}\, \Delta(\beta) - \log(1 + 2(\|\beta_1\|_1 + 1)\bar\alpha) - \frac{3\epsilon^2}{2(3-\epsilon)} - \log\frac{df_p}{d\bar\rho_p} \Big] \Big) \Big] \tag{104} \\
&\quad \leq E\Big[ \sup_{f_p} f_p\Big[ \exp\Big( \sqrt{l}\, \Delta(\beta) - \log(1 + 2(\|\beta_1\|_1 + 1)\bar\alpha) - \frac{3\epsilon^2}{2(3-\epsilon)} - \log\frac{df_p}{d\bar\rho_p} \Big) \Big] \Big] \leq 1. \notag
\end{align}
Proof. By Proposition 3.10, it holds that
\[
E[\exp(\sqrt{l}\, \Delta(\beta))] \leq \exp\Big( \frac{3\epsilon^2}{2(3-\epsilon)} + \log(1 + 2(\|\beta_1\|_1 + 1)\bar\alpha) \Big),
\]
which is equivalent to
\[
E\Big[ \exp\Big( \sqrt{l}\, \Delta(\beta) - \log(1 + 2(\|\beta_1\|_1 + 1)\bar\alpha) - \frac{3\epsilon^2}{2(3-\epsilon)} \Big) \Big] \leq 1. \tag{105}
\]
By using the tower property, for all $p = 1, \ldots, \lfloor \frac{N}{2} \rfloor$,
\[
E\Big[ \exp\Big( \sqrt{l}\, \Delta(\beta) - \log(1 + 2(\|\beta_1\|_1 + 1)\bar\alpha) - \frac{3\epsilon^2}{2(3-\epsilon)} \Big) \Big] = E\Big[ \bar\rho_p\Big[ \exp\Big( \sqrt{l}\, \Delta(\beta) - \log(1 + 2(\|\beta_1\|_1 + 1)\bar\alpha) - \frac{3\epsilon^2}{2(3-\epsilon)} \Big) \Big] \Big]. \tag{106}
\]
Moreover, for any non-negative and measurable function $b$ on $B_p$, it holds that
\[
\bar\rho_p[b(\beta)] = \int b(\beta)\, \bar\rho_p(d\beta) \geq \int_{\frac{df_p}{d\bar\rho_p}(\beta) > 0} b(\beta)\, \bar\rho_p(d\beta) = \int_{\frac{df_p}{d\bar\rho_p}(\beta) > 0} b(\beta)\, \frac{d\bar\rho_p}{df_p}\, f_p(d\beta) = f_p\Big[ b(\beta) \exp\Big( -\log\frac{df_p}{d\bar\rho_p} \Big) \Big]. \tag{107}
\]
From (105), (106) and (107) we obtain that
\[
E\Big[ f_p\Big[ \exp\Big( \sqrt{l}\, \Delta(\beta) - \log(1 + 2(\|\beta_1\|_1 + 1)\bar\alpha) - \frac{3\epsilon^2}{2(3-\epsilon)} - \log\frac{df_p}{d\bar\rho_p} \Big) \Big] \Big] \leq 1.
\]
By now using Jensen's inequality, we have that
\begin{align}
&E\Big[ \exp\Big( f_p\Big[ \sqrt{l}\, \Delta(\beta) - \log(1 + 2(\|\beta_1\|_1 + 1)\bar\alpha) - \frac{3\epsilon^2}{2(3-\epsilon)} - \log\frac{df_p}{d\bar\rho_p} \Big] \Big) \Big] \notag \\
&\quad \leq E\Big[ f_p\Big[ \exp\Big( \sqrt{l}\, \Delta(\beta) - \log(1 + 2(\|\beta_1\|_1 + 1)\bar\alpha) - \frac{3\epsilon^2}{2(3-\epsilon)} - \log\frac{df_p}{d\bar\rho_p} \Big) \Big] \Big] \leq 1. \tag{108}
\end{align}
For a given $p$, taking the supremum over every possible distribution $f_p$ concludes the proof.

Lemma B.7. Let the assumptions of Lemma B.6 hold. Moreover, let $g$ be a probability distribution on the grid $1, \ldots, \lfloor \frac{N}{2} \rfloor$ such that $g = \sum_p w_p \delta_p$, where $w_p = \frac{1}{\lfloor N/2 \rfloor}$. Then, we have that
\[
E\Big[ \sum_p w_p \sup_{f_p} \exp\Big( f_p\Big[ \sqrt{l}\, \Delta(\beta) - \log(1 + 2(\|\beta_1\|_1 + 1)\bar\alpha) - \frac{3\epsilon^2}{2(3-\epsilon)} - \log\frac{df_p}{d\bar\rho_p} \Big] \Big) \Big] \leq 1, \tag{109}
\]
and
\[
E\Big[ \sum_p w_p \sup_{f_p} f_p\Big[ \exp\Big( \sqrt{l}\, \Delta(\beta) - \log(1 + 2(\|\beta_1\|_1 + 1)\bar\alpha) - \frac{3\epsilon^2}{2(3-\epsilon)} - \log\frac{df_p}{d\bar\rho_p} \Big) \Big] \Big] \leq 1. \tag{110}
\]
Proof. Let $h_p = \sup_{f_p} f_p\big[ \exp\big( \sqrt{l}\, \Delta(\beta) - \log(1 + 2(\|\beta_1\|_1 + 1)\bar\alpha) - \frac{3\epsilon^2}{2(3-\epsilon)} - \log\frac{df_p}{d\bar\rho_p} \big) \big]$ and $h = \sum_p h_p \mathbf{1}_{B_p}$. By the definition of the probability measure $g$, and applying Lemma B.6,
\[
E[g[h]] = E\Big[ \sum_p w_p\, h_p \Big] \leq 1.
\]
By substituting the explicit expression of $h$, we obtain that the inequality (110) holds, and consequently also (109), again by Lemma B.6.

Proof of Proposition 3.28. By inequality (109) and the Chernoff bound, we obtain, for all $p = 1, \ldots, \lfloor \frac{N}{2} \rfloor$, all $f_p$, and $\delta \in (0,1)$, with probability greater than $1 - \delta$, that
\[
f_p[R_\epsilon(\beta)] \leq f_p\Big[ r_\epsilon(\beta) + \frac{\log(2\|\beta_1\|_1 + 3)}{\sqrt{l}} \Big] + \frac{1}{\sqrt{l}} KL(f_p\|\bar\rho_p) + \frac{1}{\sqrt{l}} \log\Big( \frac{1}{w_p} \Big) + \frac{1}{\sqrt{l}} \Big( \log\bar\alpha + \frac{3\epsilon^2}{2(3-\epsilon)} + \log\frac{1}{\delta} \Big).
\]
For every Gibbs randomized estimator $\bar\rho_p$, with probability greater than $1 - \delta$, it holds that
\[
\bar\rho_p[R_\epsilon(\beta)] \leq \bar\rho_p\Big[ r_\epsilon(\beta) + \frac{\log(2\|\beta_1\|_1 + 3)}{\sqrt{l}} \Big] + \frac{1}{\sqrt{l}} \log\frac{1}{w_p} + \frac{1}{\sqrt{l}} \Big( \log\bar\alpha + \frac{3\epsilon^2}{2(3-\epsilon)} + \log\frac{1}{\delta} \Big), \tag{111}
\]
the KL-divergence being equal to zero in this case. By the definition of the parameter $p^*$,
\[
\inf_p \Big\{ \bar\rho_p\Big[ r_\epsilon(\beta) + \frac{\log(2\|\beta_1\|_1 + 3)}{\sqrt{l}} \Big] + \frac{1}{\sqrt{l}} \log\Big( \Big\lfloor \frac{N}{2} \Big\rfloor \Big) \Big\} = \bar\rho_{p^*}\Big[ r_\epsilon(\beta) + \frac{\log(2\|\beta_1\|_1 + 3)}{\sqrt{l^*}} \Big] + \frac{1}{\sqrt{l^*}} \log\Big( \Big\lfloor \frac{N}{2} \Big\rfloor \Big).
\]
Then, by using the above equality in (111) computed for $\bar\rho_{p^*}$, we conclude.
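The selection rule behind Proposition 3.28 picks, among the Gibbs estimators $\bar\rho_p$ on the grid $p = 1, \ldots, \lfloor N/2 \rfloor$, the one minimizing the penalized empirical objective appearing in the displayed infimum. The Python sketch below is only a template of this selection step: the callables `gibbs_expected_risk(p)` and `sqrt_l_of(p)`, standing for $\bar\rho_p\big[r_\epsilon(\beta) + \log(2\|\beta_1\|_1+3)/\sqrt{l}\,\big]$ and $\sqrt{l}$ for a given $p$, are placeholders for whatever implementation one uses, as are the toy functions in the usage example.

```python
import math

def select_p_star(N, gibbs_expected_risk, sqrt_l_of):
    """Choose p* on the grid {1, ..., floor(N/2)} by minimizing
    gibbs_expected_risk(p) + log(floor(N/2)) / sqrt_l_of(p),
    mirroring the penalized criterion defining p*."""
    grid = range(1, N // 2 + 1)
    penalty = math.log(N // 2)
    scores = {p: gibbs_expected_risk(p) + penalty / sqrt_l_of(p) for p in grid}
    return min(scores, key=scores.get)

# illustrative usage with toy callables (larger p: richer cone but fewer blocks)
p_star = select_p_star(
    N=100,
    gibbs_expected_risk=lambda p: 1.0 / p + 0.02 * p,  # stand-in for the Gibbs-averaged objective
    sqrt_l_of=lambda p: math.sqrt(max(100 // (2 * p), 1)),
)
print(p_star)
```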
Proof of Proposition 3.29. If the inequality (65) holds, then, because $KL(\bar\rho_p\|\pi_p) \geq 0$ for all possible observed training sets,
\[
\mathbb{P}\bigg( \bar\rho_{p^*}[R_\epsilon(\beta)] \leq \inf_p \Big\{ \bar\rho_p[r_\epsilon(\beta)] + \frac{1}{\sqrt{l}}\, \frac{2 + 3C}{C} + \frac{1}{\sqrt{l}} KL(\bar\rho_p\|\pi_p) + \frac{1}{\sqrt{l}} \log\Big( \Big\lfloor \frac{N}{2} \Big\rfloor \Big) \Big\} + \frac{3\epsilon^2}{2(3-\epsilon)\sqrt{l^*}} + \frac{1}{\sqrt{l^*}} \log\frac{\bar\alpha}{\delta} \bigg) \geq 1 - 2\delta, \tag{112}
\]
by choosing a $\delta \in (0,1)$ and using a union bound. Because of Lemma B.5, the above is equivalent to
\[
\mathbb{P}\bigg( \bar\rho_{p^*}[R_\epsilon(\beta)] \leq \inf_p \Big\{ \inf_{\hat\rho \in \mathcal{M}_+^1(B_p)} \hat\rho[r_\epsilon(\beta)] + \frac{1}{\sqrt{l}}\, \frac{2 + 3C}{C} + \frac{1}{\sqrt{l}} KL(\hat\rho\|\pi_p) + \frac{1}{\sqrt{l}} \log\Big( \Big\lfloor \frac{N}{2} \Big\rfloor \Big) \Big\} + \frac{3\epsilon^2}{2(3-\epsilon)\sqrt{l^*}} + \frac{1}{\sqrt{l^*}} \log\frac{\bar\alpha}{\delta} \bigg) \geq 1 - 2\delta. \tag{113}
\]
Following the line of proof of Proposition 3.18 applied to a reference distribution $\pi_p$ defined on $\mathcal{M}_+^1(B_p)$ and such that $\pi_p[\|\beta_1\|_1] \leq 1$, it is straightforward to prove that, for all $\hat\rho \in \mathcal{M}_+^1(B_p)$ and $\delta \in (0,1)$,
\[
\mathbb{P}\bigg( \hat\rho[r_\epsilon(\beta)] \leq \hat\rho[R_\epsilon(\beta)] + \Big( KL(\hat\rho\|\pi_p) + \log\Big(\frac{1}{\delta}\Big) + \frac{3\epsilon^2}{2(3-\epsilon)}\, l^{1-2\alpha} \Big) \frac{1}{\sqrt{l}} + \frac{1}{\sqrt{l}} \log\big( \pi_p\big[ 1 + 2(\|\beta_1\|_1 + 1)\bar\alpha \big] \big) \bigg) \geq 1 - \delta. \tag{114}
\]
By using a union bound and replacing $\hat\rho[r_\epsilon(\beta)]$ in (113) with the inequality appearing in (114),
\[
\mathbb{P}\bigg( \bar\rho_{p^*}[R_\epsilon(\beta)] \leq \inf_p \Big\{ \inf_{\hat\rho \in \mathcal{M}_+^1(B_p)} \hat\rho[R_\epsilon(\beta)] + \frac{2}{\sqrt{l}}\, \frac{2 + 3C}{C} + \frac{2}{\sqrt{l}} KL(\hat\rho\|\pi_p) + \frac{1}{\sqrt{l}} \log\Big( \Big\lfloor \frac{N}{2} \Big\rfloor \Big) \Big\} + \frac{3\epsilon^2}{(3-\epsilon)\sqrt{l^*}} + \frac{2}{\sqrt{l^*}} \log\frac{\bar\alpha}{\delta} \bigg) \geq 1 - 3\delta. \tag{115}
\]
By choosing $\delta$ equal to $\frac{\delta}{3}$ we conclude.

Proof of Corollary 3.31. First of all, let us remark that $\mathbb{P}_{\bar\rho_{p^*}}$ is a well-defined probability measure, as the Gibbs estimator is defined conditionally on the observations in the training set $S$. By using inequality (110) and the Chernoff bound, we can show that
\[
\mathbb{P}_{\bar\rho_{p^*}}\bigg( R_\epsilon(\hat\beta) \leq r_\epsilon(\beta) + \frac{1}{\sqrt{l}} \Big( \frac{3\epsilon^2}{2(3-\epsilon)} + \log(1 + 2(\|\beta_1\|_1 + 1)\bar\alpha) + \log\frac{1}{w_p} + \log\frac{1}{\delta} \Big) \bigg) \geq 1 - \delta. \tag{116}
\]
It is straightforward to see that Lemmas B.6 and B.7 also hold if we substitute $\Delta'(\beta)$ for $\Delta(\beta)$. Therefore, with probability $1 - \delta$ it holds that
\[
\mathbb{P}_{\bar\rho_{p^*}}\bigg( r_\epsilon(\hat\beta) \leq R_\epsilon(\beta) + \frac{1}{\sqrt{l}} \Big( \frac{3\epsilon^2}{2(3-\epsilon)} + \log(1 + 2(\|\beta_1\|_1 + 1)\bar\alpha) + \log\frac{1}{w_p} + \log\frac{1}{\delta} \Big) \bigg) \geq 1 - \delta. \tag{117}
\]
Moreover, for a $\beta \in B$,
\begin{align}
R_\epsilon(\beta) - R_\epsilon(\bar\beta) &\leq E\Big[ \Big| Y - \beta_0 - \sum_{i=1}^{a(c^*, p^*)} \beta_{1,i} X_i \Big| - \Big| Y - \bar\beta_0 - \sum_{i=1}^{a(c^*, p^*)} \bar\beta_{1,i} X_i \Big| \Big] \notag \\
&\leq E\Big[ \Big| (\bar\beta_0 - \beta_0) - \sum_{i=1}^{a(c^*, p^*)} (\bar\beta_{1,i} - \beta_{1,i}) X_i \Big| \Big] \notag \\
&\leq \|\bar\beta - \beta\|\, E[|Z|] \tag{118} \\
&\leq \frac{1}{C}\, E[|Z|]. \tag{119}
\end{align}
Note that inequality (118) holds because the model underlying the data is stationary, whereas (119) holds because of the assumptions made on $B$.

Let us now plug the estimates (117) and (118) into (116); by using a union bound we obtain that
\[
\mathbb{P}_{\bar\rho_{p^*}}\bigg( R_\epsilon(\hat\beta) \leq \frac{1}{C}\, E[|Z|] + R_\epsilon(\bar\beta) + \frac{2}{\sqrt{l}} \Big( \frac{3\epsilon^2}{2(3-\epsilon)} + \log(1 + 2(\|\beta_1\|_1 + 1)\bar\alpha) + \log\frac{1}{w_p} + \log\frac{1}{\delta} \Big) \bigg) \geq 1 - 3\delta.
\]
By choosing $\delta$ equal to $\frac{\delta}{3}$ we obtain the thesis.