[ [ "Assessing theoretical uncertainties for cosmological constraints from\n weak lensing surveys" ], [ "Abstract $ $Weak gravitational lensing is a powerful probe which is used to constrain the standard cosmological model and its extensions.", "With the enhanced statistical precision of current and upcoming surveys, high accuracy predictions for weak lensing statistics are needed to limit the impact of theoretical uncertainties on cosmological parameter constraints.", "For this purpose, we present a comparison of the theoretical predictions for the nonlinear matter and weak lensing power spectra, based on the widely used fitting functions ($\\texttt{mead}$ and $\\texttt{rev-halofit}$), emulators ($\\texttt{EuclidEmulator}$, $\\texttt{EuclidEmulator2}$, $\\texttt{BaccoEmulator}$ and $\\texttt{CosmicEmulator}$) and N-body simulations ($\\texttt{Pkdgrav3}$).", "We consider the forecasted constraints on the $\\Lambda \\texttt{CDM}$ and $\\texttt{wCDM}$ models from weak lensing for stage III and stage IV surveys.", "We study the relative bias on the constraints and their dependence on the assumed prescriptions.", "Assuming a $\\Lambda \\texttt{CDM}$ cosmology, we find that the relative agreement on the $S_8$ parameter is between $0.2-0.3\\sigma$ for a stage III-like survey between the above predictors.", "For a stage IV-like survey the agreement becomes $1.4-3.0\\sigma$.", "In the $\\texttt{wCDM}$ scenario, we find broader $S_8$ constraints, and agreements of $0.18-0.26\\sigma$ and $0.7-1.7\\sigma$ for stage III and stage IV surveys, respectively.", "The accuracies of the above predictors therefore appear adequate for stage III surveys, while the fitting functions would need improvements for future stage IV weak lensing surveys.", "Furthermore, we find that, of the fitting functions, $\\texttt{mead}$ provides the best agreement with the emulators.", "We discuss the implication of these findings for the preparation of the future weak lensing surveys." 
], [ "Introduction", "The next generation of wide field cosmological surveys, such as LSST https://www.lsst.org, Euclid https://www.cosmos.esa.int/web/euclid/home, and NGRST https://roman.gsfc.nasa.gov/ will map the matter distribution of the local Universe with an unprecedented accuracy.", "These high precision measurements present a challenge for the theoretical modeling of cosmological observables.", "Cosmic shear is a cosmological observable that relies on the distortions of galaxy shapes caused by weak gravitational lensing [6].", "This effect is due to the gravitational deflection of photons by the matter density field along the line of sight.", "Cosmic shear measures the inhomogeneities in the cosmic density field with high precision and can be used as an unbiased tracer of the matter distribution.", "It is sensitive to both, the matter distribution of the Universe and the growth of cosmic structure, which is important for the understanding of the expansion history of the Universe.", "A commonly used cosmic shear summary statistic is the cosmic shear angular power spectrum, which can be predicted from the matter power spectrum.", "The modeling of the matter power spectrum on large scales can be derived using perturbation theory [13], [8], [21], [22], [14], [70], [7], [16], [26], [11], [101], where the structure formation of the Universe is linear.", "However, at non-linear, small scales with $k\\gtrsim 1h\\text{Mpc}^{-1}$ , nonlinear processes have a strong impact on the matter power spectrum, and perturbation theory is no longer valid.", "In this work, we compare the theoretical predictions of the nonlinear matter power spectrum, and the associated theoretical uncertainties on cosmological parameters from measurements of the cosmic shear angular power spectrum.", "The comparison includes some widely used models fitted from N-body simulations using analytical halo models: $\\texttt {halofit}$ [87] is fitted to low resolution, gravity-only N-body simulations, which is known to exhibit a non-negligible mismatch with current state-of-the-art hydrodynamic N-body simulations; $\\texttt {rev-halofit}$ [92], developed as the revisited version of $\\texttt {halofit}$ is used in the analysis of the Dark Energy Survey (DES) [95]; and $\\texttt {mead}$ [65], which is used in the analysis of the Kilo-Degree Survey (KiDS) combined with the VISTA Kilo-Degree Infrared Galaxy Survey (VIKING) [40].", "Apart from the halo model fitting method, emulators are generated from the interpolation of a suite of N-body simulations, e.g.", "$\\texttt {CosmicEmulator}$ [36], [56], [38], $\\texttt {BaccoEmulator}$ [3], [4], $\\texttt {EuclidEmulator}$ [53] and its updated version $\\texttt {EuclidEmulator2}$ [19], $\\texttt {COSMOPOWER}$ [63] and $\\texttt {GP emulator}$ [29].", "In this study, $\\texttt {CosmicEmulator}$ , $\\texttt {BaccoEmulator}$ , $\\texttt {EuclidEmulator}$ and $\\texttt {EuclidEmulator2}$ are representatively selected in the comparison at the level of the matter power spetrum, and a comparison between $\\texttt {rev-halofit}$ , $\\texttt {mead}$ and $\\texttt {EuclidEmulator}$ is also shown in [54].", "In order to estimate the theoretical uncertainties, we look at the weak lensing cosmological parameter constraints, by generating a forecast for a stage III, DES-like survey and a stage IV, Euclid-like survey.", "We take into account the parameters described by the standard $\\Lambda \\text{CDM}$ cosmological model and the extended $w\\text{CDM}$ model.", "This paper is organised as 
follows.", "In Section we describe the theoretical framework, including three halo-model based fitting functions, $\\texttt {mead}$ , $\\texttt {halofit}$ and $\\texttt {rev-halofit}$ ; four power spectrum emulators extracted from N-body simulations: $\\texttt {CosmicEmulator}$ , $\\texttt {BaccoEmulator}$ , $\\texttt {EuclidEmulator}$ and $\\texttt {EuclidEmulator2}$ , and one N-body simulation code $\\texttt {Pkdgrav3}$ [75].", "In Section we present the method and the relevant codes used in this study.", "We summarize our results in Section and our conclusions in Section ." ], [ "Theory", "In this section, we describe the theoretical background of the matter power spectrum, weak lensing and its angular power spectrum, as well as the different predictors of the matter power spectrum that we include in the comparisons." ], [ "Weak Lensing", "Considering the cosmic density field $\\rho (\\vec{r})$ at the position $\\vec{r}$ , the density contrast $\\delta (\\vec{r})$ is defined as the relative difference of $\\rho (\\vec{r})$ to the average density $\\bar{\\rho }$ $\\delta (\\vec{r})=\\frac{\\rho (\\vec{r})-\\bar{\\rho }}{\\bar{\\rho }}.$ In Fourier space, the density contrast takes the following form $\\delta (\\vec{k})=\\int \\delta (\\vec{r})\\exp {(i\\vec{k}\\cdot \\vec{r})}\\rm {d}^3r.$ Furthermore, the matter power spectrum $P(\\vec{k})$ is defined as the correlation of the density contrast in Fourier space [72]: $\\langle \\delta (\\vec{k})\\delta (\\vec{k}^{\\prime })\\rangle =(2\\pi )^3\\delta _{\\rm {D}}^{(3)}(\\vec{k}+\\vec{k}^{\\prime })P(\\vec{k}),$ where $\\delta _{\\rm {D}}$ is the three dimensional Dirac delta function.", "For full-sky surveys, the cosmic shear angular power spectrum is approximately identical to the convergence power spectrum [5], which can be defined as a weighted integration along the line-of-sight over the matter power spectrum [6], and simplified using the Kaiser-Limber approximation [61], [59], [48], [49].", "We follow the formalism of [61], [52], [51], [93], [28] to compute the cross-correlated shear power spectrum with tomographic redshift bins $i$ and $j$ : $C^{ij}_\\gamma (\\ell )=\\frac{9}{16}\\left(\\frac{H_0}{c}\\right)^4 \\Omega ^2_{\\rm {m}} \\int _0^{\\chi _{\\rm {h}}} \\rm {d}\\chi P_{\\text{NL}}\\left(\\frac{\\ell }{r},\\chi \\right)\\frac{g_i(\\chi )g_j(\\chi )}{(ar(\\chi ))^2}$ Here $P_{\\text{NL}}$ is the non linear matter power spectrum, $\\chi $ is the comoving distance, $\\chi _{\\rm {h}}$ is the comoving horizon distance, $\\Omega _{\\rm {m}}$ is the total matter density, $a=(1+z)^{(-1)}$ is the scale factor and $g(\\chi )$ is the lensing efficiency function defined as: $g_i(\\chi ) = 2\\int ^{\\chi _{\\rm {h}}}_\\chi \\rm {d}\\chi ^{\\prime } n_i(\\chi )\\frac{r(\\chi )r(\\chi ^{\\prime }-\\chi )}{r(\\chi ^{\\prime })},$ with $n_i(\\chi )$ being the normalized number density of the observed galaxies at a comoving distance $\\chi $ ." 
], [ "Matter Power Spectrum", "The matter power spectrum is a fundamental statistics to study the large scale structure of the Universe.", "As seen above, it is, in particular, useful to predict the cosmic shear angular power spectrum.", "Therefore, it is necessary to have an accurate theoretical model for the matter power spectrum on all scales.", "On large scales and mildly non-linear scales, the matter power spectrum can be modeled using perturbation theory and some extended theories [64].", "On small scales, which are in the non-linear regime, these approaches are not suited to predict the power spectrum with the necessary precision, while other methods are developed with the use of halo model or simulations." ], [ "Analytical Predictions", "A common way to model the matter power spectrum on these small scales is to empirical fit physically motivated formulas to measurements from N-body simulations, e.g.", "as done in [32].", "Furthermore, modelling the density field as a collection of virialized halos, the matter power spectrum can be approximated analytically using the statistics of halos, and fitted to simulations or emulators [62], [82], [20].", "In this study, we compare 3 halo-model based fitting functions: $\\texttt {mead}$ , $\\texttt {halofit}$ , and $\\texttt {rev-halofit}$ .", "$\\texttt {halofit}$ was built using a series of N-body simulations with a total of $N=256^3$ particles and the box size from $84\\text{Mpc}/h$ to $240\\text{Mpc}/h$ .", "Using the halo model, the matter power spectrum is constructed with two terms, the one-halo term proposed by [71], [62], [82], [81] and a two-halo term [62], [82], [81] to describe the exclusion effects between dark matter halos.", "The one-halo term indicates the correlation of the matter field of one single halo, which dominates on small scales, while the two-halo term describes the cross-correlation between different halos, that has a strong impact on larger scales.", "Assuming that the halos are distributed according to the halo mass function [76], [85], the matter power spectrum modelled with this approach can achieve a high precision on large scales.", "However, due to the lack of baryons and the relatively low resolution of the N-body simulations used in their study, $\\texttt {halofit}$ does not match high resolution N-body simulations, giving an accuracy at the $5\\%$ level at $k=1h\\text{Mpc}^{-1}$ [37], and larger differences for $k>1h\\text{Mpc}^{-1}$ , which is insufficient for the non-linear regime.", "$\\texttt {rev-halofit}$ is a revised prescription of $\\texttt {halofit}$ , which provides a more accurate prediction of the matter power spectrum for $k<30h\\text{Mpc}^{-1}$ and $z<10$ , with a $5\\%$ level accuracy at $k=1h\\text{Mpc}^{-1}$ and $10\\%$ level accuracy at $k=10h\\text{Mpc}^{-1}$ .", "$\\texttt {rev-halofit}$ uses high resolution N-body simulations for 16 cosmological models around the Wilkinson Microwave Anisotropy Probe (WMAP) best-fit cosmological parameters.", "The N-body simulations were run with the $\\texttt {Gadget-2}$ N-body code [90], [88], $1024^3$ particles in total, and the box size from $320\\text{Mpc}/h$ to $2000\\text{Mpc}/h$ .", "The power spectrum is fitted using an improved fitting formula with 5 more model parameters as compared to $\\texttt {halofit}$ .", "Several extended methods have been proposed to improve the halo model [12], [68], [83].", "Here we only consider $\\texttt {mead}$ [65], which reaches an accuracy at the 5 percent level for $k=10h\\text{Mpc}^{-1}$ and $z<2$ .", 
"$\\texttt {mead}$ introduces more physical parameters in addition to the halo model, and is fitted to the “Coyote Universe”[38] suite of high resolution simulations, the same simulations used for the generation of $\\texttt {CosmicEmulator}$ .", "It also includes massive neutrinos [66] and baryonic effects e.g.", "active galactic nuclei (AGN) feedback, supernovae explosions, and gas cooling.", "However, we only consider the dark-matter-only case in this study." ], [ "Emulators", "The fitting functions based on halo models described in Section REF can provide accurate non-linear power spectrum predictions for large $k$ -modes and a wide redshift range, which can be used to predict cosmological observables.", "However, they also have limitations as the precision is not uniform for different cosmological parameters, and it is difficult for fitting functions to give a high precision below the $1\\%$ level compared to high resolution simulations.", "Power spectrum emulators are constructed following a different approach in which one interpolates the power spectrum from a set of N-body simulations within a certain range of relevant parameters, using interpolation methods, e.g.", "Gaussian Processes Regression [37], [38], [3] or polynomial chaos expansion [53], [19].", "Compared to fitting functions, emulators usually provide consistent precision of the predictions for different $k$ -modes.", "However, emulators also have limitations: Firstly, the covered parameter space is limited, thus making it difficult to perform a likelihood analysis, for which one needs to explore a wide range of parameter values.", "Secondly, the ranges of $k$ and redshift are also limited, making it difficult to compute the weak lensing cosmic shear observables for high $\\ell $ s, which requires an integration over a large $k$ range.", "In this study, we compare 4 emulators: $\\texttt {CosmicEmulator}$ [39], $\\texttt {BaccoEmulator}$ [3], $\\texttt {EuclidEmulator}$ [53], and $\\texttt {EuclidEmulator2}$ [19].", "$\\texttt {CosmicEmulator}$ is fitted using a set of the “Coyote Universe\" simulations and the \"Mira-Titan Universe\"[56] simulations.", "We use the latest version of the emulator [39], for which the “Mira-Titan Universe\" simulations were run with $3200^3$ particles and a simulation volume of $(2100 h^{-1}\\text{Mpc})^3$ .", "The $\\texttt {CosmicEmulator}$ successfully achieves high precision predictions of the power spectrum within the $4\\%$ level for $ k_{max}=5h\\text{Mpc}^{-1}$ and $z<2$ .", "It allows for the variation of various parameters, including the matter density $\\Omega _{\\rm {m}}$ , the amplitude of density fluctuations $\\sigma _8$ , the baryon density $\\Omega _{\\rm {b}}$ , the scalar spectral index $n_{\\rm {s}}$ , the dark energy equation of state parameters $w_0$ and $w_{\\rm {a}}$ , the dimensionless Hubble parameter $h$ , the neutrino density $\\Omega _\\nu $ , and the redshift $z$ .", "$\\texttt {EuclidEmulator}$ uses a different emulation method using N-body simulations generated with the $\\textsc {PkdGrav3}$ code [75].", "It uses 100 simulations with $2048^3$ particles in a $(1250h^{-1}\\text{Mpc})^3$ simulation volume.", "The non-linear correction is encoded as a boost factor adding up to the input linear power spectrum, achieving a precision at the $1\\%$ level for predictions within the ranges $k<1hMpc^{-1}$ and $z<1$ .", "[53] demonstrated that $\\texttt {EuclidEmulator}$ agrees with $\\texttt {rev-halofit}$ at the $8\\%$ level.", "As an updated version of $\\texttt 
{EuclidEmulator}$ , $\\texttt {EuclidEmulator2}$ is extended with dynamical dark energy and massive neutrinos, created with a larger parameter space and a modified version of the $\\texttt {Pkdgrav3}$ N-body code.", "$\\texttt {EuclidEmulator2}$ provides a consistent accuracy with simulations at the $2\\%$ level up to $k_{max}=10h\\text{Mpc}^{-1}$ for $z<2$ , and slightly lower accuracy for higher redshift $z\\sim 3$ .", "However, as $\\texttt {EuclidEmulator2}$ uses the amplitude of the primordial power spectrum $A_{\\rm {s}}$ instead of $\\sigma _8$ as input parameter, we use the following formula to transfer $\\sigma _8$ into $A_{\\rm {s}}$[33]: $A_{\\rm {s}}=\\left(\\frac{\\sigma _8}{\\sigma _{8,0}}\\right)^2\\times A_{\\rm {s},0}$ in our comparison, where $\\sigma _{8,0}=0.826$ and $A_{\\rm {s},0}=2.184\\times 10^{-9}$ .", "$\\texttt {BaccoEmulator}$ is another state-of-the-art emulator using an updated version of the $\\texttt {L-Gadget3}$ code [89], [2] with $4320^3$ particles in a $(1440h^{-1}\\text{Mpc})^3$ simulation volume.", "It has a $2\\%$ level accuracy over the redshift range $0<z<1.5$ and $k<5h\\text{Mpc}^{-1}$ ." ], [ "N Body Simulations", "We also include in this study a comparison with a dark-matter-only N-body simulation run with $\\textsc {PkdGrav3}$ , which is based on a binary tree algorithm.", "This code uses 5th order multipole expansions of the gravitational potential between particles and can achieve fast computational speeds with hardware acceleration.", "A comparison between PkdGrav3 and the N-body codes, $\\texttt {Gadget-3}$ , $\\texttt {Gadget-4}$ and $\\texttt {Ramses}$ is presented in [80] and [91].", "The $\\textsc {PkdGrav3}$ simulations are the same as the ones used for $\\texttt {EuclidEmulator}$ , with $2048^3$ particles in total and the box size of $L=1250h^{-1}\\text{Mpc}$ .", "The details are presented in [53].", "In this work, we perform a comparison of predictors of the nonlinear matter power spectrum, i.e.", "halo-model based fitting functions and emulators.", "We estimate the theoretical uncertainties of these predictors on the parameter constraint level by looking at the weak lensing cosmological parameter constraints from a stage III survey and a stage IV survey.", "For each survey, we perform a comparison using the standard $\\Lambda \\text{CDM}$ cosmological model and the extended $w\\text{CDM}$ model." 
], [ "Survey parameters", "The estimate of the theoretical uncertainties for cosmological parameters is realised by forecasting the constraints for a stage III survey and a stage IV survey.", "The covariance matrix is estimated from simulations, using the NGSF code described in [100] and [23].", "We refer the reader to [100] for a detailed description of the method.", "Table REF shows the parameter settings used for the generation of the mock galaxy surveys.", "[64] suggests using $\\ell _{\\rm {max}}=5000$ for stage IV-like surveys to probe deep into non-linear regime.", "However, in this study we use a more conservative limit of $\\ell _{\\rm {max}}=1000$ , and do not take into account baryonic effects.", "We use [86] distributions to model the global redshift distribution of the source galaxies for both the stage III survey and the stage IV survey.", "The corresponding formulas and parameter settings for these two distributions are as follows $n(z)_{\\text{stageIII} }=z^\\alpha \\exp {[-(\\frac{z}{z_0})^\\beta ]},$ with $\\alpha =1.5$ , $\\beta =1.1$ and $z_0=0.31$ and $n(z)_{\\text{stageIV}}=(\\frac{z}{z_0})^\\alpha \\exp {[-(\\frac{z}{z_0})^\\beta ]},$ with $\\alpha =2.0$ , $\\beta =1.5$ and $z_0=0.64$ [64].", "In both cases the source galaxies are divided into four tomographic bins with equal number of galaxies in each bin.", "As a result of the auto- and cross-combinations of these four redshift bins, we have 10 combinations of auto- and cross-correlations for the cosmic shear measurements (4 auto-correlations and 6 cross-correlations).", "Figure REF shows the global and tomographic redshift distributions used in this study.", "Figure: The redshift distributions of the source galaxies.", "One can see the four tomographic distributions for the stage III survey and the stage IV survey.", "The global distributions, that follows the model, is shown by the dashed lines." 
], [ "Covariance Matrix", "An accurate estimate of the survey covariance matrix is crucial for the correct calculation of the likelihood function.", "We estimate the covariance matrices for the stage III and stage IV survey setups described in Table REF from numerical simulations.", "We generate a large number ($N = 2000$ ) of realisations of the angular power spectra for each survey setup following the methodology outlined in [100].", "In the following, we introduce the used N-Body simulations, briefly summarize the forward modelling procedure used to generate the angular power spectra and describe the estimation of the covariance matrix.", "We refer the reader to [100] for a more detailed description of the methodology.", "We utilise the 50 independent PkdGrav3 [75] N-Body simulations at the fiducial cosmology that were previously used in [100], [23] and generated using the state-of-the-art dark-matter-only N-body code PkdGrav3.", "The cosmological parameters in the used simulations are fixed to the ($\\Lambda $ CDM,TT,TE,EE+lowE+lensing) results of Planck 2018 [1], except for $\\Omega _{\\mathrm {m}}$ and $\\sigma _8$ which are set to the values found in [96].", "This setup results in $\\Omega _{\\mathrm {m}}=0.26$ , $\\sigma _8=0.84$ , $\\Omega _{\\mathrm {b}}=0.0493$ , $n_{\\mathrm {s}}=0.9649$ , $w=-1$ and $h=0.6736$ .", "We include three massive neutrino species in all simulations.", "The neutrinos are modelled as a relativistic fluid [94] and a degenerate mass hierarchy with a minimal neutrino mass of $m_{\\nu }=0.02$ eV per species was chosen.", "The dark energy density $\\Omega _{\\Lambda }$ is adapted for each cosmology to achieve a flat geometry.", "Each simulation was run using a unit box with a side-length of 900 Mpc/$h$ and $768^3$ simulated particles.", "In order to achieve a simulation volume large enough to cover the redshift range up to $z=3.0$ the unit box was replicated up to 14 times per dimension depending on the cosmology.", "While such a replication scheme is known to underpredict the variance of very large, super-box modes [25], it has been demonstrated by [23] that the simulations accurately recover the angular power spectra predicted by the theory code CLASS [57] for $\\ell \\in [30, 2048]$ .", "The particle shells from each PKDGRAV3 simulation are combined into tomographic full-sky mass maps using the UFalcon software [84].", "The particle shells are weighted according to the tomographic redshift distributions shown in Figure REF .", "The UFalcon software uses the HEALPIX [31] pixelization scheme to pixelize the sphere.", "A resolution of $\\texttt {NSIDE} = 1024$ was chosen.", "UFalcon also makes use of the Born approximation, which is known to deteriorate the accuracy of the produced mass maps.", "However, [74] have demonstrated that the introduced bias is negligible for stage III-like and stage IV-like surveys.", "The spherical Kaiser-Squires mass mapping technique [50], [98] is used to obtain the cosmic shear signal from the simulated mass maps.", "To forward-model a realistic weak lensing survey a shape noise signal must then be added to the cosmic shear signal and an appropriate survey mask must be applied.", "The survey masks are chosen such that we obtain eight stage III surveys and two stage IV surveys from each full-sky map.", "The shape noise signal is obtained in the same way as described in [100].", "We randomly sample galaxy positions within the survey region until the target source density is reached.", "The intrinsic ellipicities of the galaxies 
are then drawn from a probability distribution that was fit to the observed galaxy ellipticities in [96] (see [100]).", "The ellipticity of each individual galaxy is rotated by a random phase.", "Using five and twenty shape noise realisations per survey patch, we achieve the desired number of $N=2000$ survey realisations for the stage III and stage IV survey setup, respectively.", "The tomographic angular power spectra realisations $C_{\ell , \mathrm {i}}$ are then measured from the forward-modelled surveys using the anafast routine of the healpy software [99], using 20 bins from $\ell _{\rm {min}}=100$ to $\ell _{\rm {max}}=1000$ as in [84]; the index $i$ runs over the number of survey realisations $N$ .", "The covariance matrix $\Sigma $ is estimated according to $\hat{\Sigma } = \frac{1}{N - 1} \sum _{\mathrm {i}=1}^{N} (C_{\ell , \mathrm {i}} - \bar{C_{\ell }}) (C_{\ell , \mathrm {i}} - \bar{C_{\ell }})^T,$ where $\bar{C_{\ell }}$ indicates the mean of the angular power spectra realisations $C_{\ell , \mathrm {i}}$ .", "The estimated correlation matrices $C_{\mathrm {n,m}} \equiv \Sigma _{\mathrm {n,m}}/\sqrt{\Sigma _{\mathrm {n,n}} \Sigma _{\mathrm {m,m}}}$ are presented in Figure REF .", "Figure: Correlation matrices for the stage III survey (left panel) and the stage IV survey (right panel).", "The ordering of the redshift tomographic bin combinations for the angular power spectra is $1\times 1$ , $1\times 2$ , $1\times 3$ , $1\times 4$ , $2\times 2$ , $2\times 3$ , $2\times 4$ , $3\times 3$ , $3\times 4$ and $4\times 4$ , from left to right.", "For each angular power spectrum, all 20 bins ranging from $\ell = 100$ to $\ell = 1000$ are shown." ], [ "Likelihood Analysis", "We use a Bayesian likelihood approach to evaluate the cosmological parameter constraints of different predictors.", "We assume a Gaussian error model and the log-likelihood is given by: $\begin{aligned}&\log \mathcal {L} \\&= -\frac{1}{2}\sum _{ij}(C_{\ell ,\rm {truth}}^{i}-C_{\ell ,\rm {compare}}^{i})^T\hat{\Sigma }^{-1}(C_{\ell ,\rm {truth}}^{j}-C_{\ell ,\rm {compare}}^{j})\end{aligned}$ Here $C_{\ell ,\rm {truth}}$ stands for the value of the observable, computed using PyCosmo [78], [93], [69] with a chosen predictor and the fiducial cosmological parameters measured by the Wilkinson Microwave Anisotropy Probe (WMAP) 9 [41], presented in Table REF .", "$C_{\ell ,\rm {compare}}$ is predicted using another predictor for comparison.", "The cosmology for the observable is different from what is used for the covariance matrix.", "However, this effect is neglected, assuming the covariance matrix to be parameter independent [55].", "The inverse covariance matrix is corrected to give an unbiased estimate [34], [73]: $\hat{\Sigma }^{-1}_{\rm unbiased} = \frac{N-N^{\prime }-2}{N-1}\hat{\Sigma }^{-1},$ where $N$ is the number of realisations generated from the simulations and $N^{\prime }$ is the total number of data bins, which is given by $N^{\prime } = N_{\rm {redshift}}\times N_{\ell }.$ Here we have $N=2000$ , $N_{\ell }=20$ and $N_{\rm {redshift}}=10$ .", "Table: The fiducial values for the cosmological parameters and the flat priors for the cosmological parameters that are varied in the analysis."
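The covariance estimate, the correction for the finite number of realisations, and the Gaussian log-likelihood above can be summarised in a few lines. The sketch below assumes the $N$ realisations are stacked into an array of shape $(N, N^{\prime })$ with the $C_\ell $ s flattened over redshift-bin pairs and $\ell $ bins; it is illustrative and not the pipeline used in the paper.

```python
import numpy as np

def estimate_covariance(cl_realisations):
    """Sample covariance of the flattened angular power spectra, input shape (N, N')."""
    return np.cov(cl_realisations, rowvar=False, ddof=1)

def unbiased_inverse(cov, n_realisations):
    """Unbiased (corrected) estimate of the inverse covariance matrix."""
    n_bins = cov.shape[0]
    factor = (n_realisations - n_bins - 2) / (n_realisations - 1)
    return factor * np.linalg.inv(cov)

def gaussian_log_likelihood(cl_truth, cl_compare, inv_cov):
    """log L = -1/2 (C_truth - C_compare)^T Sigma^{-1} (C_truth - C_compare)."""
    diff = cl_truth - cl_compare
    return -0.5 * diff @ inv_cov @ diff
```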
], [ "Parameter Inference", "The posterior is sampled efficiently using the Markov Chain Monte Carlo (MCMC) ensemble sampler, $\\texttt {emcee}$ [27].", "We vary 4 cosmological parameters $\\lbrace \\Omega _{\\rm {m}},\\sigma _8,n_{\\rm {s}},h\\rbrace $ for the $\\Lambda \\texttt {CDM}$ cosmological model and an additional parameter $w_0$ for the extended $\\texttt {wCDM}$ model, where we fix $w_{\\rm {a}}\\equiv 0$ .", "Table REF shows the priors used for these parameters.", "We run the MCMC chains with 100 walkers per parameter and cut the burn in phase for each run as one third of the chain length.", "Each individual chain has more than 100,000 samples.", "For the visualisation of the marginalised posteriors, we use the public $\\texttt {Getdist}$ [58].", "We present the results of our comparison of different predictors in this section, including the analysis of the matter power spectrum, the weak lensing power spectrum, and the cosmological parameter constraints based on the stage III and stage IV weak lensing surveys." ], [ "Power Spectrum", "We use the linear power spectrum predicted by $\\texttt {PyCosmo}$ and generated following [24] as the input for all predictors.", "Figure REF shows the comparison of dark-matter-only non-linear $P(k)$ predictions from different predictors at redshift $z=0$ , and the comparison for different redshifts ranging from $z=0$ to $z=5$ in Appendix .", "The results are shown for $k$ ranging from $k=0.01h\\text{Mpc}^{-1}$ to $9h\\text{Mpc}^{-1}$ using 10000 bins.", "$\\texttt {BaccoEmulator}$ and $\\texttt {CosmicEmulator}$ are not valid for $z>3$ , so we do not present their comparison for the higher redshift at $z=5$ .", "Figure REF and Figure REF indicate that: All the predictors except for halofit are within the $5\\%$ level of accuracy compared to $\\texttt {rev-halofit}$ for $z<2$ and $k<7h\\text{Mpc}^{-1}$ (BaccoEmulator is valid for $z<1.5$ and $k<5h\\text{Mpc}^{-1}$ , see the details in Figure REF ).", "Note that this is consistent with the comparison of $\\texttt {mead}$ , $\\texttt {rev-halofit}$ and $\\texttt {halofit}$ in [65].", "$\\texttt {halofit}$ shows stronger discrepancies compared with the other predictors at small scales for $k>0.1h\\text{Mpc}^{-1}$ and this discrepancy can reach $20\\%$ for $k\\sim 10h\\text{Mpc}^{-1}$ .", "$\\texttt {mead}$ and $\\texttt {rev-halofit}$ show close agreement with the emulators at the $5\\%$ level for $k<9h\\text{Mpc}^{-1}$ and $z<0.5$ .", "However, at higher redshifts $1<z<5$ , the discrepancies between $\\texttt {mead}$ and the emulators can reach $10\\%$ for $k>3h\\text{Mpc}^{-1}$ , while $\\texttt {rev-halofit}$ provides a more consistent precision within $5\\%$ .", "All the emulators yield an agreement within the $2-3\\%$ level compared with the $\\texttt {Pkdgrav3}$ simulation for $k<9h\\text{Mpc}^{-1}$ and $z<1.5$ .", "However, this is not valid at higher redshifts.", "For large scales with $k<0.5h\\text{Mpc}^{-1}$ , the different predictors show a better agreement at higher redshifts." 
], [ "Weak Lensing Power Spectrum", "We compute the weak lensing shear power spectrum $C_\\ell $ for the Stage III survey and the Stage IV survey with different predictors.", "Limited by the range of $k_{\\rm {max}}$ of the emulators, the $C_\\ell $ s are computed using 20 $\\ell $ -bins spaced linearly between $\\ell _{\\rm {min}}=100$ and $\\ell _{\\rm {max}}=1000$ .", "The integrated redshift range is $[0.08,2.0]$ for the stage III survey and $[0.08,3.0]$ for the stage IV survey.", "This setting was chosen in order to avoid the instability of emulators for low redshifts, where we found that $\\texttt {EuclidEmulator}$ and $\\texttt {EuclidEmulator2}$ predict the $C_\\ell $ s with a discrepency larger than $10\\%$ at $z<0.08$ .", "This choice differs from the setting used for the generation of the covariance matrix.", "However, we find that this only changes the discrepencies between different predictors for $C_\\ell $ s by $0.1\\%$ , since only $1\\%$ of the low-redshift galaxies are missed for the stage III survey and $0.1\\%$ of the galaxies for the stage IV survey.", "Using this redshift range, we have to exclude $\\texttt {CosmicEmulator}$ from the comparison for the stage IV survey as it allows only up to $z=2.0$ .", "The comparison is shown in Figure REF , with the left-hand panels showing the results for the Stage III survey and the right-hand side showing the Stage IV survey results.", "In the individual panels, we present $C_{\\ell }\\, \\ell (\\ell +1)/2\\pi $ for each predictor and illustrate the comparison by subtracting and dividing $\\texttt {rev-halofit}$ as the reference.", "In Figure REF the first row shows the comparison of the auto-correlated $C_\\ell $ s for the redshift bins $1\\times 1$ , the second row for $4\\times 4$ , and the bottom row shows the cross correlated $C_\\ell $ s for $1\\times 4$ .", "From Figure REF , one can infer that: All the predictors, except for $\\texttt {halofit}$ , yield an agreement at the $5\\%$ level, both for the auto and cross $C_\\ell $ .", "This is consistent with our results for $P(k)$ .", "$\\texttt {mead}$ shows a good agreement with $\\texttt {CosmicEmulator}$ , $\\texttt {EuclidEmulator2}$ and $\\texttt {EuclidEmulator}$ , while $\\texttt {rev-halofit}$ exhibits a larger discrepancy.", "The comparison of $C_\\ell $ for different predictors does not show a significant difference between the stage III survey and the stage IV survey." ], [ "Cosmological Parameters Constraints", "The comparison of the weak lensing cosmological parameter constraints for different predictors is present in this section.", "As indicated in Section , we consider a stage III survey and a stage IV survey.", "For each survey, we perform a comparison using the standard $\\Lambda \\text{CDM}$ cosmological model and the extended $w\\text{CDM}$ model.", "A summary of the constraints on $\\lbrace S_8,\\Omega _{\\rm {m}},w_0\\rbrace $ is presented in Table REF , and the constraints on $\\lbrace S_8,\\Omega _{\\rm {m}},n_{\\rm {s}},h,w_0\\rbrace $ in Table REF ." 
], [ "$\\Lambda \\text{CDM}$ cosmology constraints", "We present the two-dimensional 68% and 95% confidence level contours of the posterior distributions for the $\\Lambda \\text{CDM}$ model in Figure REF and Figure REF for the Stage 3 and Stage 4 survey setup, respectively.", "The parameters $\\lbrace \\Omega _{\\rm {m}},\\sigma _8,n_{\\rm {s}},h\\rbrace $ are varied in the MCMC analysis.", "We additionally compute the constraints on $S_8$ , and summarise the shifts in $S_8$ in Figure REF , presenting the median values of the posteriors and the error bars indicating the 68% confidence limits of the constraints.", "One can infer from the posterior distributions in Figure REF and Table REF that the agreement on $S_8$ between different predictors is less than $0.6\\sigma $ for the stage III survey ($0.2-0.3\\sigma $ if $\\texttt {halofit}$ excluded), while being much larger for the stage IV survey.", "This is caused by the higher constraining power of the IV survey.", "More specifically, the agreements are generally on the $1.4-6.1\\sigma $ level ($1.4-3.0\\sigma $ if $\\texttt {halofit}$ excluded).", "$\\texttt {mead}$ shows good agreement with $\\texttt {CosmicEmulator}$ , $\\texttt {EuclidEmulator}$ and $\\texttt {EuclidEmulator2}$ for the stage III survey while it only agrees well with $\\texttt {EuclidEmulator2}$ for the stage IV survey.", "The constraints on $h$ do not show significant discrepencies for both surveys, while $n_{\\rm {s}}$ reveals discrepencies of several $\\sigma $ s for different predictors for the stage IV survey.", "Figure: Cosmological parameter constraints for the stage III survey in the Λ𝙲𝙳𝙼\\Lambda \\texttt {CDM} model.", "For each constraint, C ℓ, truth C_{\\ell ,\\rm {truth}} is predicted using the first predictor shown in the legend, and C ℓ, compare C_{\\ell ,\\rm {compare}} computed using the second predictor, as indicated in Section .", "For the stage III survey, we set C ℓ, truth C_{\\ell ,\\rm {truth}} with the halo-model based fitting functions (𝚛𝚎𝚟-𝚑𝚊𝚕𝚘𝚏𝚒𝚝\\texttt {rev-halofit}, 𝚖𝚎𝚊𝚍\\texttt {mead} and 𝚑𝚊𝚕𝚘𝚏𝚒𝚝\\texttt {halofit}) and 3 emulators (𝙴𝚞𝚌𝚕𝚒𝚍𝙴𝚖𝚞𝚕𝚊𝚝𝚘𝚛\\texttt {EuclidEmulator}, 𝙴𝚞𝚌𝚕𝚒𝚍𝙴𝚖𝚞𝚕𝚊𝚝𝚘𝚛2\\texttt {EuclidEmulator2} and 𝙲𝚘𝚜𝚖𝚒𝚌𝙴𝚖𝚞𝚕𝚊𝚝𝚘𝚛\\texttt {CosmicEmulator}), and compare with predictions from only the fitting functions (in this figure only 𝚛𝚎𝚟-𝚑𝚊𝚕𝚘𝚏𝚒𝚝\\texttt {rev-halofit}).Figure: Cosmological parameter constraints of the stage IV survey in the Λ𝙲𝙳𝙼\\Lambda \\texttt {CDM} model.", "Only 2 emulators, i.e.", "𝙴𝚞𝚌𝚕𝚒𝚍𝙴𝚖𝚞𝚕𝚊𝚝𝚘𝚛\\texttt {EuclidEmulator} and 𝙴𝚞𝚌𝚕𝚒𝚍𝙴𝚖𝚞𝚕𝚊𝚝𝚘𝚛2\\texttt {EuclidEmulator2}, are chosen for C ℓ, truth C_{\\ell ,\\rm {truth}}, as 𝙲𝚘𝚜𝚖𝚒𝚌𝙴𝚖𝚞𝚕𝚊𝚝𝚘𝚛\\texttt {CosmicEmulator} does not provide a sufficient redshift range for the stage IV survey." 
], [ "$\\text{wCDM}$ cosmology constraints", "We consider the constraining power of weak lensing surveys on dark energy parameters by adopting a time dependent dynamical dark energy equation of state, the CPT-parameterisation [17], [60], as an extension to the $\\Lambda \\text{CDM}$ model.", "The equation of state parameter is given by $w(a) = w_0 + w_a(1 - a).$ where we use a fixed $w_{\\rm {a}}=0$ and a free $w_0$ .", "We present the two-dimensional marginal posterior distributions for the $w\\texttt {CDM}$ cosmology parameters in Figure REF and Figure REF , for the stage III survey and the stage IV survey, respectively.", "Taking into account the dark energy model changes the shape and the contour size of the posterior distributions, decreasing the constraining power on the cosmological parameters.", "The discrepancies in $S_8$ between predictors are generally smaller compared with the $\\Lambda \\texttt {CDM}$ model due to the decrease in constraining power: $0.18-0.34\\sigma $ for the stage III survey and $0.7-2.4\\sigma $ for the stage III survey ($0.18-0.26\\sigma $ and $0.7-1.7\\sigma $ if $\\texttt {halofit}$ is excluded, respectively).", "$\\texttt {mead}$ still shows good agreement with $\\texttt {EuclidEmulator}$ and $\\texttt {EuclidEmulator2}$ for both the stage III survey and the stage IV survey.", "$\\texttt {rev-halofit}$ agrees with all the predictors within $0.3\\sigma $ for the stage III survey, and shows discrepancies at the $0.7-2.4\\sigma $ level for the stage IV survey.", "Figure: Cosmological parameter constraints of the stage III survey in the 𝚠𝙲𝙳𝙼\\texttt {wCDM} cosmological model.", "Including w 0 w_0 reduces significantly the constraining power, yielding much broader contours than the Λ𝙲𝙳𝙼\\Lambda \\texttt {CDM} model.Figure: Cosmological parameter constraints of the stage IV survey in the w𝙲𝙳𝙼w\\texttt {CDM} cosmological model.", "The discrepancies between the predictors are alleviated, taking into account a simple w𝙲𝙳𝙼w\\texttt {CDM} cosmological model with a varying w 0 w_0.Figure: Deviations of the parameter constraints on S 8 S_8.", "The upper plot shows the result for the stage III survey, for the Λ𝙲𝙳𝙼\\Lambda \\texttt {CDM} model (black) and the w𝙲𝙳𝙼w \\texttt {CDM} model (green), respectively.", "The lower plot shows the stage IV survey, for the Λ𝙲𝙳𝙼\\Lambda \\texttt {CDM} (red) and w𝙲𝙳𝙼w \\texttt {CDM} (blue), respectively.Table: Numerical constraints on the cosmological parameters corresponding to the contours in Figure , , , and .", "For each predictor, the σ\\sigma s show the theoretical discrepancies for each parameter, compared to the reference one." 
], [ "Conclusions", "The different halo-model based fitting functions and emulators have been widely used for the prediction of non-linear power spectrum to study the large scale structure of the Universe.", "It is essential to understand their advantages, limitations, and theoretical uncertainties for different surveys and cosmologies.", "From our results, we conclude that: Compared with Pkdgrav3 simulations, the halo-model based fitting functions, except $\\texttt {halofit}$ , yield a $5-10\\%$ level accuracy for the matter power spectrum $P(k)$ for $k<9h\\text{Mpc}^{-1}$ and $z<2$ , while emulators show better precision at the $2\\%$ level.", "For the weak lensing shear power spectrum $C_\\ell $ , all the predictors, except for $\\texttt {halofit}$ , show a $5\\%$ level mutual agreement.", "For the stage III survey with a $\\Lambda \\texttt {CDM}$ cosmology, the agreement on $S_8$ between different predictors are within $0.6\\sigma $ , and within $0.2\\sigma $ for other cosmological parameters ($0.3\\sigma $ and $0.2\\sigma $ if we exclude $\\texttt {halofit}$ , respectively).", "This indicates the applicability of the studied predictors for the stage III surveys.", "For the stage IV survey using a $\\Lambda \\texttt {CDM}$ cosmology, the disagreements on $S_8$ are increased to several $\\sigma $ s, with the largest discrepancy of $6.1\\sigma $ between rev-halofit and $\\texttt {halofit}$ , and the best agreement between mead and EuclidEmulator2.", "If $w_0$ is taken into account for the $w\\texttt {CDM}$ cosmology, we get weaker constraints on $S_8$ , and the discrepancies between different predictors are reduced to $0.2-0.3\\sigma $ and $0.7-2.4\\sigma $ for the stage III survey and the stage IV survey respectively ($0.18-0.26\\sigma $ and $0.7-1.7\\sigma $ if we exclude $\\texttt {halofit}$ , respectively).", "The accuracy of the current fitting function models and emulators therefore appear sufficient for stage III surveys.", "However, for the future IV surveys, our results suggest that the fitting function models are currently not sufficiently accurate, and would need further improvements in the future.", "For emulators, it is required to explore wider ranges of cosmological parameters, $k$ -modes, and redshifts, while pursuing consistent precision with reliable hydrodynamic N-body simulations.", "Note that, in this study, we include dark-matter-only predictions, without any consideration of baryonic effects, which can have a strong impact on small scales [46], [79].", "Current studies of halo-model based fitting functions already include other systematics, i.e.", "massive neutrino and baryonic effects like AGN feedback and gas cooling.", "The inclusion of these systematics will significantly reduce the constraining power, and might alleviate the discrepancies between the predictors.", "There are also other sources of uncertainties in weak lensing experiments that we did not include in this work and that could affect the our results, e.g.", "photometric redshift uncertainty [40], [45], [18], shear bias [10], [43], [9], [77], [67] and galaxy intrinsic alignment [35], [25], [42], [15], [47]." 
], [ "Acknowledgments", "This work was supported in part by grant 200021_192243 from the Swiss National Science Foundation.", "We thank Mischa Knabenhans from University of Zürich for the distribution of $\\texttt {Pkdgrav-3}$ .", "We further thank Aurel Schneider from the University of Zürich for the useful discussions regarding this project and the covariance matrix for a stage IV survey.", "We would also like to thank Uwe Schmitt from ETH Zürich for his support with the GitLab server and development of $\\texttt {PyCosmo}$ .", "The Collaborating Institutions are the Eidgenössische Technische Hochschule (ETH) Zürich, Ecole Polytechnique, the Laboratoire de Physique Nucléaire et des Hautes Energies of Sorbonne University.", "Most of the analysis in this work is down on the Euler clusterhttps://scicomp.ethz.ch/wiki/Euler operated by ETH Zurich.", "Here follows the computational codes used in this study: $\\texttt {PyCosmo}$ [78], [93], [69] is used as the main tool where all the non linear codes are implemented for the computation of auto (cross) power spectra, galaxy reshift distribution counts, and observable of cosmic shear.", "It is also extended to include interfaces with the emulators.", "$\\texttt {Anafast}$ is used for computation of power spectra from simulations, and all the the maps (masks, weight, shear, mass) in pipeline are in $\\texttt {HealPix}$ format.", "We use $\\texttt {Emcee-3.0.2}$ [27] for the sampling of parameter space and $\\texttt {Getdist}$ [58] for the plotting of likelihood contours and $\\texttt {Uhammer}$ for the simplification of $\\texttt {Emcee}$ running.", "Some of the results in this paper have been derived using the $\\texttt {healpy}$ and $\\texttt {HEALPix packages}$ [30].", "In this study, we made use of the functionalities provided by $\\texttt {numpy}$ [102], $\\texttt {scipy}$ [97] and $\\texttt {matplotlib}$ [44]." ], [ "Power spectrum comparison", "In this section we present the comparison of the non-linear power spectrum for all redshifts, as shown in Figure REF .", "Figure: The comparison of the dark-matter-only non linear P(k)P(k) of different predictors at different redshifts (z=0,0.5,1,1.5,2z=0,0.5,1,1.5,2 and 5), subtracted and divided by 𝚛𝚎𝚟-𝚑𝚊𝚕𝚘𝚏𝚒𝚝\\texttt {rev-halofit} as reference.", "𝙱𝚊𝚌𝚌𝚘𝙴𝚖𝚞𝚕𝚊𝚝𝚘𝚛\\texttt {BaccoEmulator} and 𝙲𝚘𝚜𝚖𝚒𝚌𝙴𝚖𝚞𝚕𝚊𝚝𝚘𝚛\\texttt {CosmicEmulator} are not valid for z>3z>3, so we do not take them into comparison for z=5z=5." ], [ "Cosmological Parameter Constraints", "The summary of constraints on $\\lbrace S_8,\\Omega _{\\rm {m}},n_{\\rm {s}},h,w_0\\rbrace $ is concluded in this section, shown in Table REF .", "Table: Complet numerical constraints on the cosmological parameters corresponding to the contours in Figure , , , and .", "For each predictor, the σ\\sigma s show the theoretical discrepancies for each parameter, compared to the reference one." ] ]
2207.03598
[ [ "Removing degeneracy and multimodality in gravitational wave source\n parameters" ], [ "Abstract Quasicircular binary black hole mergers are described by 15 parameters, of which gravitational wave observations can typically constrain only $\\sim 10$ independent combinations to varying degree.", "In this work, we devise coordinates that remove correlations, and disentangle well- and poorly-measured quantities.", "Additionally, we identify approximate discrete symmetries in the posterior as the primary cause of multimodality, and design a method to tackle this type of multimodality.", "The resulting posteriors have little structure and can be sampled efficiently and robustly.", "We provide a Python package for parameter estimation, cogwheel, that implements these methods together with other algorithms for accelerating the inference process.", "One of the coordinates we introduce is a spin azimuth that is measured remarkably well in several events.", "We suggest this might be a sensitive indicator of orbital precession, and we anticipate that it will shed light on the occurrence of spin-orbit misalignment in nature." ], [ "Introduction", "Gravitational wave astronomy is advancing at a rapid pace, with over 100 signals from compact binary mergers detected since the advanced LIGO [1] and advanced Virgo [2] detectors became operational [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13].", "The rate of detections will keep accelerating as planned and ongoing hardware upgrades take place.", "The astrophysical interpretation of these detections requires measuring their source parameters, such as masses, spins, position and orientation.", "At the same time, following recent developments in waveform modeling, state-of-the-art models of compact binary coalescences now incorporate higher-order harmonics and generically oriented spins [14], [15], [16], [17].", "These improvements have increased the diversity of waveform morphologies and the parameter space dimensionality, rendering computational cost a major hurdle for parameter estimation studies.", "In this work we develop a new parametrization of the source properties, tailor-made for compact binary mergers, that removes most correlations and multimodalities encountered in their posterior distributions.", "This coordinate system emphasizes quantities that are best constrained by the data, as opposed to the physically motivated parameters used by external libraries to model waveforms.", "We derive the set of analytic, invertible, nonlinear transformations between the standard parameter space and this new coordinate system.", "Expressed in these coordinates, the posterior distributions exhibit much less structure, which makes it easier to sample them efficiently and robustly.", "This is demonstrated in Fig.", "REF for the extrinsic parameters of a binary merger: although the two panels describe the same probability distribution, the coordinate system used on the right (described later in the paper) expresses the parameter constraints more naturally.", "Figure: Marginalized posterior for extrinsic parameters in terms of standard coordinates (left) or the folded coordinates advocated in this work (right), see Sections  and .The complex structures on the left are largely absent on the right, making the latter system better suited for sampling.Crimson bars show the standard deviation expected given the signal-to-noise ratio, discrepancies with the realized uncertainties are due to correlations with the marginalized parameters.In general, under a 
coordinate change $x \\rightarrow y$ the prior and posterior probability densities transform as $P(y) = P(x){J},$ where ${J} = {\\partial x / \\partial y}$ is the the absolute value of the Jacobian determinant.", "This can make coordinate changes cumbersome if their Jacobian determinant is not easily computable.", "We define our coordinate transformation as a series of simple transformations, affecting few variables at a time, that have tractable Jacobian determinants.", "We will use the following properties of the Jacobian.", "First, they multiply under composition of transformations $x \\rightarrow y \\rightarrow z$ : ${\\frac{\\partial x}{\\partial z}} = {\\frac{\\partial x}{\\partial y}} \\cdot {\\frac{\\partial y}{\\partial z}}.$ Second, transformations of the form $\\begin{pmatrix}x_1 \\\\x_2\\end{pmatrix}\\rightarrow \\begin{pmatrix}a(x_2) x_1 + b(x_2) \\\\x_2\\end{pmatrix}$ (where $x_1, x_2$ may be multidimensional) have ${J} = {a(x_2)},$ which frees us to use arbitrarily sophisticated $b(x_2)$ functions.", "In particular, if $a(x_2) = \\pm 1$ the probability density is unchanged by the coordinate transformation in Eq.", "(REF ).", "The remainder of the article is structured as follows.", "In Sec.", "we review the likelihood function of gravitational wave source parameters.", "In Sec.", "we introduce coordinates that remove common correlations.", "In Sec.", "we identify approximate symmetries responsible for discrete degeneracies, and present a novel method to remove this type of multimodality.", "In Sec.", "we present our main results: Sec.", "REF assesses the performance improvement brought by our methods in terms of the accuracy of the recovered posterior and the computational cost, and Sec.", "REF shows that a particular spin azimuth can be measured surprisingly well.", "We conclude in Sec.", "with a summary and outlook.", "Appendix  discusses in further detail two of the approximate symmetries identified, and appendix  acts as a reference sheet in which we summarize the coordinate system proposed." 
], [ "Likelihood", "In this section we briefly review the likelihood function $P(d \\mid \\theta )$ , through which the data $d$ constrain source parameters $\\theta $ .", "We cast its approximate dependence on extrinsic parameters in terms of the signal's amplitude, phase and time of arrival at each detector, and we provide analytical expressions for these.", "Under the approximation that the noise is stationary and Gaussian, $\\log P(d \\mid \\theta )= - \\frac{1}{2} \\sum _{k \\in {\\rm det}} 4\\int _0^\\infty {\\rm d}f \\frac{{\\tilde{d}_k(f) - \\tilde{h}_k(f; \\theta )}^2}{S_k(f)},$ where $\\tilde{d}_k$ , $\\tilde{h}_k$ and $S_k$ are the frequency-domain data, strain model, and one-sided noise power spectrum in the $k$ th detector, respectively [18].", "For quasicircular binaries, the detector strains $\\tilde{h}_k$ depend on 15 parameters: eight intrinsic (two masses $m_1, m_2$ and six components of the spin vectors $\\mathbf {\\chi }_1, \\mathbf {\\chi }_2$ ) and seven extrinsic (luminosity distance $d_L$ , inclination $\\iota $ , right ascension $\\alpha $ , declination $\\delta $ , polarization $\\psi $ , orbital phase $\\phi _{\\rm ref}$ and coalescence time $t_c$ ).", "We can understand the approximate dependence on these parameters by considering the waveform under the quadrupole, stationary-phase and post-Newtonian approximations: $\\tilde{h}_k(f; \\theta ) \\\\\\approx A(f)\\frac{{\\mathcal {M}}^{5/6}}{d_L}R_k(\\iota , \\mathbf {\\hat{n}}, \\psi )e^{i [2 \\phi _{\\rm ref}- 2\\pi f t_k(\\mathbf {\\hat{n}})+ \\Psi (f; \\theta _{\\rm int})]}, $ where the chirp mass ${\\mathcal {M}}= \\frac{(m_1 m_2)^{3/5}}{(m_1 + m_2)^{1/5}}$ controls the waveform amplitude and phase to leading post-Newtonian order.", "The inclination, sky location and polarization enter through the detector response [19] $\\begin{split}R_k &= \\frac{1 + \\cos ^2\\iota }{2} F^+_k (\\mathbf {\\hat{n}}, \\psi )-i \\cos \\iota \\, F^\\times _k (\\mathbf {\\hat{n}}, \\psi );\\\\\\end{split}$ where $\\mathbf {\\hat{n}}$ is the line of sight and $F^+_k, F^\\times _k$ are the antenna factors of the $k$ th detector [20].", "The sky location additionally introduces an arrival time delay in each detector due to the finite speed $c$ of gravitational waves, so there is a separate arrival time $t_k$ at each detector $k$ .", "Altogether, to this level of approximation the extrinsic parameters only affect the amplitude, phase and time in each detector: $\\tilde{h}_k(f; \\theta )\\approx a_ke^{i \\varphi _k}e^{-i2\\pi (f-\\overline{f}_k) t_k}A(f) e^{i\\Psi (f; \\theta _{\\text{int}})},$ with $a_k &= \\frac{\\mathcal {M}^{5/6}}{d_L} {R_k(\\iota , \\mathbf {\\hat{n}}, \\psi )}\\\\\\varphi _k &= \\arg {R_k(\\iota , \\mathbf {\\hat{n}}, \\psi )} + 2 \\phi _{\\rm ref}- 2\\pi \\overline{f}_k t_k \\\\t_k &= t_c - \\frac{\\mathbf {\\hat{n}}\\cdot \\mathbf {r}_k}{c}, $ where $\\mathbf {r}_k$ is the detector location relative to the point of reference time, e.g.", "the center of Earth, and $\\overline{f}_k$ is a frequency scale defined below.", "In Eq.", "() we have included a term dependent on $t_k$ to make $\\varphi _k$ orthogonal to $t_k$ : to linear order, it is possible to absorb a small shift in the time of arrival $t_k$ (in the sense of leaving the observed waveform almost unchanged) by simultaneously changing the phase (via $\\phi _{\\rm ref}$ , say) by $2 \\pi \\overline{f}_k t_k$ [21], [22], where $\\overline{f}_k$ is the first frequency moment, i.e., $\\overline{f^n}_k \\frac{\\int _0^\\infty {\\rm d}f A^2(f) f^n / 
S_k(f)}{\\int _0^\\infty {\\rm d}f A^2(f) / S_k(f)} $ for $n=1$ .", "This way, we expect the measurements of $a_k, \\varphi _k, t_k$ in a detector to be uncorrelated.", "We can also estimate the uncertainties in each of these parameters, which scale inversely with the signal-to-noise ratio $\\rho _k$ : $\\Delta a_k &\\sim \\frac{a_k}{\\rho _k} \\\\\\Delta \\varphi _k &\\sim \\frac{1}{\\rho _k} \\\\\\Delta t_k &\\sim \\frac{1}{2 \\pi \\sigma _f \\rho _k} ,$ with $\\sigma _f^2 = \\overline{f^2} - \\overline{f}^2$ [19], [21].", "By convention, for each event we will sort the detectors by their signal-to-noise ratio ($\\rho _{k_0} > \\rho _{k_1} > \\ldots $ ), so the best measured quantities correspond to the “reference detector\" $k_0$ ." ], [ "Mitigating degeneracy", "Oriented by the previous discussion, in this section we introduce a system of coordinates in which some variables separately control the observables $a_{k_0}, \\varphi _{k_0}, t_{k_0}, t_{k_1}-t_{k_0}$ , which will typically be well measured, and others explore degeneracies and will be poorly constrained.", "Some of these coordinates are already known in the literature but we still review them here for completeness.", "As we will see, the posterior distribution expressed in these coordinates is largely uncorrelated." ], [ "Amplitude at reference detector", "We replace the luminosity distance by the so-called chirp distance [23], which controls the amplitude at the reference detector Eq.", "(REF ): $\\begin{split}\\hat{d}(d_L, {\\mathcal {M}}, \\iota , \\alpha , \\delta , \\psi )&\\frac{1}{a_{k_0}} \\\\&= \\frac{d_L}{{\\mathcal {M}}^{5/6}{R_{k_0}}}.\\end{split}$ This is proportional to the luminosity distance, with a scale factor that depends on the response of the detector to the given $\\iota , \\alpha , \\delta , \\psi $ , and the intrinsic amplitude of the source $\\propto {\\mathcal {M}}^{5/6}$ .", "Thus, $\\hat{d}$ avoids the correlations with these variables that the luminosity distance suffers, as shown in Fig.", "REF .", "A collateral benefit is that the observable values of $\\hat{d}$ are similar for all mass ranges (in contrast, heavier events can be observed at larger luminosity distances), so the distance range to explore does not need to be tuned for each event.", "From Eqs.", "(REF ) and (REF ) we expect that $\\hat{d}$ will typically be measured to a precision $\\Delta \\hat{d}\\sim \\hat{d}/ \\rho _{k_0}$ .", "Figure: The luminosity distance d L d_L is typically correlated with the inclination and sky location (top row), while d ^\\hat{d} (Eq.", "()) is less correlated and better measured (bottom row).Examples in this section correspond to GW151226 , but the highlighted qualitative features are generic.In the notation of Eq.", "(REF ), the transformation $\\hat{d}\\rightarrow d_L$ has $x_1 = \\hat{d}$ , $x_2=({\\mathcal {M}}, \\iota , \\alpha , \\delta , \\psi )$ , $a(x_2) = 1/{{\\mathcal {M}}^{5/6}{R_{k_0}}}$ and $b = 0$ .", "The Jacobian determinant is ${J} = {\\frac{\\partial \\hat{d}}{\\partial d_L}}= \\frac{1}{{\\mathcal {M}}^{5/6}{R_{k_0}}}= \\frac{\\hat{d}}{d_L}.$ A more elaborate solution is to marginalize the posterior semianalytically over distance, altogether removing the necessity to sample from it [25], [26].", "We include this functionality as well in the software package we are releasing along with this paper.", "In practice, this makes the sampling process more robust against a particular failure mode, in which the sampler occasionally explores the distant universe (favored by the prior) and 
misses a nearby solution with high likelihood." ], [ "Time of arrival at reference detector", "We use the time of arrival at the reference detector $t_{k_0}$ as our arrival time parameter [26], as opposed to, for example, the geocentric time of arrival.", "From Eq.", "(), the uncertainty in arrival time at the detector is typically $\lesssim 1$ ms [21].", "On the other hand, unless the sky location is particularly well constrained, the uncertainty in time of arrival at geocenter is on the order of the gravitational wave Earth-crossing time (roughly 40 ms) due to the tight nonlinear correlation with sky location shown in Fig.", "REF .", "From Eqs.", "(REF ), (REF ) and (), the transformation $t_{k_0}\rightarrow t_c$ has unit Jacobian determinant.", "Figure: The arrival time at geocenter (top row) is worse measured than at the reference detector (bottom row), because the time delay introduces a large correlation with sky location." ], [ "Time-of-arrival difference", "When a signal is observed in multiple detectors, the arrival time differences provide the dominant constraint on its sky location.", "For each pair of detectors, the arrival time difference determines the angle $\theta _{kk^{\prime }}$ between the source location and an axis through both detectors according to (see Eq.", "()) $\begin{split}t_k - t_{k^{\prime }}&= \mathbf {\hat{n}}\cdot \frac{\mathbf {r}_{k^{\prime }} - \mathbf {r}_k}{c} \\&= \tau _{k k^{\prime }} \cos \theta _{k k^{\prime }},\end{split}$ where $\tau _{kk^{\prime }}$ is the gravitational wave travel time between detectors $k$ and $k^{\prime }$ .", "Thus, a natural way of parametrizing the sky location measurement is with a polar coordinate system $({\theta _{\rm net}}, {\phi _{\rm net}})$ whose $z$ -axis contains the two detectors with the largest signal-to-noise ratios in the network [27], [26], shown in Fig.", "REF .", "This coordinate system rotates with Earth and is related to the (fixed) $\alpha , \delta $ by a 3D rotation that depends on the pair of detectors and the sidereal time at which the signal arrives.", "The isotropic prior is uniform in $\cos {\theta _{\rm net}}, {\phi _{\rm net}}$ .", "Figure: Coordinate system based on the dominant pair of detectors in the network, in this example Hanford–Livingston. The $z$ -axis contains the two detectors and the $y$ -axis points perpendicularly upwards. The zenithal angle ${\theta _{\rm net}}$ of the line of sight $\mathbf {\hat{n}}$ determines the time-of-arrival difference.", "(Detector sizes are exaggerated but otherwise the figure is to scale.)", "Figure REF shows that, unlike $\alpha $ and $\delta $ , ${\theta _{\rm net}}$ is typically well measured and largely uncorrelated with ${\phi _{\rm net}}$ .", "In fact, from Eqs.", "() and (REF ) its expected uncertainty is $\begin{split}\Delta \cos {\theta _{\rm net}}&\sim \frac{1}{2 \pi \sigma _f \tau _{k_0 k_1}}\sqrt{\rho _{k_0}^{-2} + \rho _{k_1}^{-2}} \\&= 0.016 \cdot \frac{100\,{\rm Hz}}{\sigma _f}\cdot \frac{\tau _{\rm HL}}{\tau _{k_0 k_1}}\cdot \frac{10}{\rho _{k_1}}\sqrt{1+\frac{\rho _{k_1}^2}{\rho _{k_0}^2}}\end{split}$ for events observed in two detectors.", "Each pair of detectors constrains the source location to a narrow ring in the celestial sphere.", "This degeneracy is broken when the signal is prominent in three or more detectors.", "The distribution of ${\phi _{\rm net}}$ is often multimodal, and for this reason we will ultimately use a modified azimuthal coordinate 
${\\hat{\\phi }_{\\rm net}}$ instead (see Sec.", "REF ).", "Figure: Sky location of events in the first two observing runs, expressed in terms of right ascension and declination (top) or network-based angles θ net ,φ net {\\theta _{\\rm net}}, {\\phi _{\\rm net}} defined in Fig.", "(bottom).θ net {\\theta _{\\rm net}} is typically well constrained and largely uncorrelated with φ net {\\phi _{\\rm net}}.The signals are clearly clustered near 𝐧 ^=±𝐲 ^ net \\mathbf {\\hat{n}}= \\pm \\mathbf {\\hat{y}}_{\\rm net}, at the two high-sensitivity lobes of the network antenna pattern.Accordingly, when two or less detectors dominate the information content, the azimuthal angle φ net {\\phi _{\\rm net}} can be significantly informed by the antenna-pattern dependent prior.For single-detector events, a coordinate system aligned with the arms of the detector is more convenient." ], [ "Arrival phase", "We now seek a parametrization of the form in Eq.", "(REF ) that expresses the well-measured arrival phase $\\varphi _{k_0}$ in terms of our new coordinates.", "As is clear from Eq.", "(), such a reparametrization must involve $\\iota $ , $\\mathbf {\\hat{n}}$ , $\\psi $ , $\\phi _{\\rm ref}$ and $t_{k_0}$ .", "To achieve this, we replace the coalescence phase $\\phi _{\\rm ref}$ by another $2\\pi $ -periodic coordinate $&\\hat{\\phi }_{\\rm ref}(\\phi _{\\rm ref}, \\iota , \\mathbf {\\hat{n}}, \\psi , t_{k_0}) \\nonumber \\\\&\\qquad \\phi _{\\rm ref}+ \\frac{\\arg {R_{k_0}(\\iota , \\mathbf {\\hat{n}}, \\psi )}- 2\\pi \\overline{f}^{\\rm ML}_{k_0} t_{k_0} - \\varphi _{k_0}^{\\rm ML}}{2} \\\\&\\qquad \\equiv \\frac{\\varphi _{k_0} - \\varphi _{k_0}^{\\rm ML}}{2}, $ where we choose the constants $\\overline{f}^{\\rm ML}_{k_0}$ and $\\varphi _{k_0}^{\\rm ML}$ as the first frequency moment and arrival phase at the dominant detector for the (approximate) maximum likelihood source configuration.", "$\\hat{\\phi }_{\\rm ref}$ can be interpreted as the deviation of the coalescence phase $\\phi _{\\rm ref}$ from the value that would make the arrival phase at the reference detector equal to $\\varphi _{k_0}^{\\rm ML}$ .", "Therefore, its posterior distribution should have a peak near 0.", "The factor of $1/2$ in Eq.", "() reflects the fact that the phase of the dominant quadrupolar radiation advances by twice the angle under an azimuthal rotation of the source.", "We thus expect the $\\hat{\\phi }_{\\rm ref}$ posterior to have a second mode near $\\pi $ .", "By virtue of Eqs.", "() and (), these modes are largely uncorrelated with all other parameters and have widths $\\sim 1/ (2 \\rho _{k_0})$ .", "Radiation harmonics with odd values of $m$ break the symmetry between these two modes.", "Figure REF shows that $\\hat{\\phi }_{\\rm ref}$ is indeed much better measured and less correlated than $\\phi _{\\rm ref}$ .", "The transformation $\\phi _{\\rm ref}\\rightarrow \\hat{\\phi }_{\\rm ref}$ has unit Jacobian.", "Figure: Unlike the reference orbital phase φ ref \\phi _{\\rm ref} (top), the coordinate φ ^ ref \\hat{\\phi }_{\\rm ref} is well measured (bottom), since it uniquely determines the observable phase of arrival at the reference detector ϕ k 0 \\varphi _{k_0}.There remains, however, a discrete degeneracy between solutions separated by π\\pi that we treat in Sec.", ".Despite removing correlations, the definition of $\\hat{\\phi }_{\\rm ref}$ in Eq.", "(REF ) has two undesirable properties for sampling: its posterior is bimodal and, for waveforms with higher modes, discontinuous at the branch cut of $\\arg 
R_{k_0}$ .", "We will solve both problems in Sec.", "REF ." ], [ "Reference frequency", "Source parameters that are not conserved throughout the inspiral and merger need to be specified at a reference point in time, typically in terms of the instantaneous frequency ${f_{\rm ref}}$ of the quadrupole radiation.", "Such parameters include the orbital phase and, for precessing binaries, the spin components, orbital inclination and polarization.", "These are best constrained near frequencies where the signal is prominent; specifying them at a far-removed ${f_{\rm ref}}$ may introduce large correlations with other source parameters [28], [29].", "This effect is most important for the orbital phase, which in turn couples to the in-plane spin azimuths if they are defined relative to the orbital separation vector (we revisit this in Sections REF and REF ).", "We will choose the reference frequency to be ${f_{\rm ref}}= \overline{f}_{k_0}^{\rm ML}$ ; from Eq.", "(REF ), this is the frequency weighted by the squared signal-to-noise ratio, and hence it will naturally be within the detector band." ], [ "Aligned spin components", "The effect of spins on the phase evolution of the inspiraling binary is largely dominated by the effective spin parameter ${\chi _{\rm eff}}= \frac{\chi _{1z}+ q \, \chi _{2z}}{1 + q},$ where $\chi _{1z}, \chi _{2z}$ are the components of the constituent (dimensionless) spins parallel to the orbital angular momentum, and $q = m_2 / m_1 \le 1$ is the mass ratio.", "It is therefore convenient to use ${\chi _{\rm eff}}$ directly as a spin coordinate.", "The Jacobian of the transformation to Cartesian spin components is nontrivial [30]; note, however, that the prior on spins is much less certain than that on extrinsic parameters.", "Our approach is to specify the prior directly in terms of ${\chi _{\rm eff}}$ , which means we do not need to compute this Jacobian.", "We choose a uniform prior between $\pm 1$ , which has the advantage that it does not vanish for any value of the observable ${\chi _{\rm eff}}$ .", "In order to specify the two orbit-aligned spin components, we also sample over their (typically) poorly measured difference, whose conditional prior we choose to be uniform within the range allowed by the cosmic censorship conjecture, $|\chi _{1, 2}| < 1$ (see appendix ).", "For systems with well measured ${\chi _{\rm eff}}$ , a common alternative description based on the spin magnitudes and tilts can suffer from correlations between these four variables.", "[31] proposed a different parametrization which also removes the correlation of the aligned spins with the masses; we intend to implement this in the future."
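As a concrete illustration of this choice of spin coordinates, the following minimal Python sketch (our own function and variable names, not the cogwheel implementation) maps between the physical aligned spins $(\chi _{1z}, \chi _{2z})$ and the sampled pair $({\chi _{\rm eff}}, {C_{\rm diff}})$, using the conditional bounds on $\chi _{1z}$ spelled out in the appendix.

```python
import numpy as np

def chi_eff(chi1z, chi2z, q):
    """Effective spin; q = m2/m1 <= 1."""
    return (chi1z + q * chi2z) / (1 + q)

def chi1z_bounds(chieff, q):
    """Range of chi1z allowed by |chi_1z|, |chi_2z| < 1 at fixed (chi_eff, q)."""
    lo = max((1 + q) * chieff - q, -1.0)
    hi = min((1 + q) * chieff + q, 1.0)
    return lo, hi

def cumulative_diff(chi1z, chieff, q):
    """Cumulative of the uniform conditional prior on chi1z given (chi_eff, q)."""
    lo, hi = chi1z_bounds(chieff, q)
    return (chi1z - lo) / (hi - lo)

def aligned_spins_from_cumulative(c_diff, chieff, q):
    """Inverse map used when sampling in (chi_eff, C_diff)."""
    lo, hi = chi1z_bounds(chieff, q)
    chi1z = lo + c_diff * (hi - lo)
    chi2z = ((1 + q) * chieff - chi1z) / q
    return chi1z, chi2z
```

Drawing ${\chi _{\rm eff}}$ uniformly in $(-1, 1)$ and ${C_{\rm diff}}$ uniformly in $(0, 1)$ then realizes the conditional prior described above without any Jacobian bookkeeping.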
], [ "In-plane spin components", "In Sec.", "REF we have argued that $\\mathbf {L}$ defines one preferred axis to parametrize the spins, as their aligned components affect the evolution of the orbital phase.", "Describing the spin azimuths requires us to define a second axis, for which two choices are common in the literature: the orbital separation vector at ${f_{\\rm ref}}$ , or the direction of propagation $\\mathbf {\\hat{N}}\\equiv - \\mathbf {\\hat{n}}$ .", "In line with [28], here we will advocate the latter choice.", "Misaligned spins cause the orbital angular momentum $\\mathbf {L}$ , as well as the spins $\\mathbf {S}_1, \\mathbf {S}_2$ , to precess about the total angular momentum $\\mathbf {J}$ , whose direction is stable [32], [33].", "This causes the inclination of the orbit seen from Earth, determined by $\\mathbf {\\hat{L}} \\cdot \\mathbf {\\hat{N}}$ , to continuously change.", "As a consequence, in addition to the usual cycles at twice the orbital frequency, misaligned-spin waveforms exhibit amplitude modulations at the slower precession rate.", "This separation of timescales means that the precession dynamics can be described using orbit-averaged equations, thereby decoupling it from the orbital phase.", "Thus, the size and peak frequencies of the amplitude modulations are governed by the orbital and spin angular momenta at ${f_{\\rm ref}}$ relative to $\\mathbf {\\hat{N}}$ , as these determine the evolution of the inclination.", "On the other hand, per Eq.", "() the phase $\\varphi _k$ of the “carrier” wave is controlled by an approximately degenerate combination of $\\phi _{\\rm ref}, \\psi , \\iota , \\mathbf {\\hat{n}}, t_c$ .", "The practical consequence of this degeneracy is that $\\phi _{\\rm ref}$ , which defines the azimuth about $\\mathbf {L}$ between the orbital separation and the direction of propagation, is poorly measured.", "In other words, a change in $\\phi _{\\rm ref}$ can be compensated by adjusting e.g.", "$\\psi , d_L$ to keep $\\hat{\\phi }_{\\rm ref}, \\hat{d}$ fixed, but only if the spins are held constant relative to $\\mathbf {\\hat{N}}$ .", "If, instead, the spins are rigidly rotated with the binary, the peak frequencies of the observed amplitude modulations would shift in a way that cannot be compensated by adjusting other parameters.", "Thus, using the orbital separation to define the origin of spin azimuths would introduce in the parametrization a spurious coupling between the observable precession and orbital cycles.", "Following [28], we will use the angles $\\theta _{JN}, \\phi _{JL}, \\phi _{12}$ to describe the inclination of the orbit and the spin azimuths.", "$\\theta _{JN}$ and $\\phi _{JL}$ are illustrated in Fig.", "REF : they define the direction of wave propagation $\\mathbf {\\hat{N}}$ given the total and orbital angular momenta $\\mathbf {J}, \\mathbf {L}$ and irrespective of the orbital separation vector.", "$\\phi _{12}$ is defined as the difference between the primary and secondary spin azimuths about $\\mathbf {L}$ .", "$\\phi _{JL}$ posteriors are often bimodal, and for this reason we will eventually replace $\\phi _{JL}$ by a modified spin azimuth $\\hat{\\phi }_{JL}$ in Sec.", "REF .", "Figure: Angles describing relative orientation between angular momenta and the direction of wave propagation .In terms of the observable precession cycles, changing 𝐍 ^↦-𝐍 ^\\mathbf {\\hat{N}} \\mapsto -\\mathbf {\\hat{N}} (i.e.", "inverting cosθ JN \\cos \\theta _{JN} and adding π\\pi to φ JL \\phi _{JL}) is a symmetry that can cause a 
discrete degeneracy.We return to this point in Sec.", ".Our parametrization of the spins is more akin to cylindrical than spherical coordinates, in the sense that we favor the orbit-aligned spin components over the zenithal angles.", "The description of spins is completed by specifying the in-plane spin magnitudes $\\chi _1^\\perp , \\chi _2^\\perp $ ." ], [ "Mitigating multimodality", "Oftentimes, the posterior distribution of gravitational wave source parameters is multimodal.", "Such distributions are more challenging for stochastic samplers, which might occasionally misestimate the relative weights of the modes or miss some of them altogether.", "In Sec.", "REF we identify four approximate discrete symmetries that are responsible for frequent multimodality in the orbital phase, polarization, inclination and sky location parameters.", "In Sec.", "REF we introduce “folding”: a simple algorithm to exploit the knowledge of the underlying approximate symmetries in order to robustly sample this type of multimodal probability distribution.", "Depending on the source properties, the detector network and the signal-to-noise ratio, this procedure can generically reduce the number of disjoint modes by a factor of up to 8." ], [ "Approximate discrete symmetries", "In Sec.", "REF we already encountered an approximate discrete symmetry $\\hat{\\phi }_{\\rm ref}\\mapsto \\hat{\\phi }_{\\rm ref}+ \\pi , $ (Fig.", "REF ) which is exact for waveforms with only even values of $m$ , in particular the dominant mode $(\\ell , {m}) = (2, 2)$ .", "A similar symmetry exists for the polarization $\\psi \\mapsto \\psi + \\frac{\\pi }{2} $ at constant $\\hat{\\phi }_{\\rm ref}$ , which (per Eq.", "(REF )) entails a simultaneous change by $\\pi /2$ in $\\phi _{\\rm ref}$ .", "This symmetry arises because, under the transformation (REF ), the antenna coefficients $F^+_k, F^\\times _k$ change sign [20]; meanwhile, the $\\pi /2$ rotation of $\\phi _{\\rm ref}$ inverts the sign of the waveforms $h^+_{\\ell m}, h^\\times _{\\ell m}$ for modes with ${m}=2$ .", "These two sign flips cancel, leaving the measurable responses $h_k = F^+_k h^+ + F_k^\\times h^\\times $ invariant.", "The symmetry becomes exact for waveforms with ${m}=2$ .", "Usually, this discrete degeneracy does not produce disjoint modes in the posterior, because the uncertainty in $\\psi $ is large enough that the two solutions remain connected (as an extreme example, $\\psi $ is a dummy parameter for face-on, aligned-spin waveforms).", "Still, $\\psi $ posteriors generally do exhibit this symmetry.", "We identify two other approximate discrete symmetries.", "In Sec.", "REF –REF we derived coordinates $\\hat{d},\\hat{\\phi }_{\\rm ref},t_{k_0},{\\theta _{\\rm net}}$ , which determine the amplitude, phase and time of arrival at the leading detector, and arrival time at the second detector.", "These variables typically capture most of the information on extrinsic parameters from the likelihood, and consequently tend to be tightly constrained.", "Conversely, the remaining extrinsic parameters $\\theta _{JN}, {\\phi _{\\rm net}}, \\psi $ have a smaller effect on the likelihood.", "As a result, they can exhibit large degeneracies—some of them discrete, leading to multiple modes.", "Indeed, due to the geometry of the Hanford–Livingston network, for these detectors both the prior and likelihood are approximately symmetric under either of the following discrete transformations (at fixed $\\hat{d}, \\hat{\\phi }_{\\rm ref}, t_{k_0}, {\\theta _{\\rm net}}$ ): 
${\\phi _{\\rm net}}&\\mapsto -{\\phi _{\\rm net}}, \\\\({\\phi _{\\rm net}}, \\cos \\theta _{JN}, \\phi _{JL}) &\\mapsto ({\\phi _{\\rm net}}+\\pi , -\\cos \\theta _{JN}, \\phi _{JL}+\\pi ).$ The manifestation of these symmetries in the posterior is shown in Fig.", "REF .", "We can understand them intuitively as follows.", "By design, the Hanford and Livingston detectors are nearly coaligned (plus a $\\pi /2$ rotation in the horizontal plane, which simply adds a phase difference of $\\pi $ ; see Fig.", "REF ).", "For perfectly coaligned detectors, the amplitude and phase at the second detector are determined by those at the first; in this sense their measurement does not provide new constraints, and the likelihood is largely independent of ${\\phi _{\\rm net}}, \\iota , \\psi $ .", "These parameters are therefore significantly informed by the prior.", "The prior has nontrivial structure in these variables because the distance, and thereby the observable volume, depends on $\\iota , {\\phi _{\\rm net}}, \\psi $ at constant $\\hat{d}, {\\theta _{\\rm net}}$ (which are fixed by the likelihood).", "For example, the uniform prior in luminosity volume $\\pi (d_L)\\propto d_L^2$ is (by Eqs.", "(REF ) and (REF )) $\\pi (\\hat{d}\\mid {\\mathcal {M}}, \\iota , {\\theta _{\\rm net}}, {\\phi _{\\rm net}}, \\psi )\\propto \\hat{d}^2 {\\mathcal {M}}^{5/2}{R_{k_0}(\\iota , {\\theta _{\\rm net}}, {\\phi _{\\rm net}}, \\psi )}^3,$ proportional to the cube of the absolute value of the reference detector response.", "This term has four peaks as a function of $\\iota , {\\phi _{\\rm net}}$ : near $\\cos \\iota = \\pm 1$ (Eq.", "(REF )) and ${\\phi _{\\rm net}}= \\pm \\pi /2$ (Fig.", "REF ); see also Fig.", "REF .", "In other words, corresponding to a source face-on or face-off, and above or beneath the detector (constrained to the time-delay ring).", "As a result, Hanford–Livingston posteriors typically exhibit four modes at these configurations.", "We can see that both transformations (REF ) map an overhead source to one underfoot, i.e., they send $\\mathbf {\\hat{n}}\\cdot \\mathbf {\\hat{y}}_{\\rm net} \\equiv n_y \\mapsto -n_y$ .", "The second transformation, Eq.", "(), also flips the inclination.", "All four configurations can be reached by applying combinations of these two transformations.", "Figure: Approximate discrete symmetries are frequently responsible for multiple modes in the distribution, especially for the Hanford–Livingston network due to its peculiar geometric configuration.The plot shows examples of posteriors for the cosine of the inclination and a coordinate φ ^ net {\\hat{\\phi }_{\\rm net}} (Eq.", "()) that describes the line-of-sight azimuth along a ring of constant time delay.Additional detectors and higher modes can break these symmetries, as for GW190412 .The transformation $\\cos \\theta _{JN}, \\phi _{JL}\\mapsto -\\cos \\theta _{JN}, \\phi _{JL}+\\pi $ in Eq.", "() corresponds to inverting the direction of propagation $\\mathbf {\\hat{N}}\\mapsto -\\mathbf {\\hat{N}}$ in the frame of the binary.", "In other words, it can be interpreted as observing the system from the antipodal direction.", "To the extent that the source can be modeled as a precessing quadrupole, this simply reverses the handedness of the gravitational wave and cannot be distinguished with coaligned detectors.", "In reality, due to the curvature of Earth the detectors are not perfectly coaligned.", "As a result, the phase of arrival at the second detector is not completely determined by that at the first 
detector and its measurement has some constraining power.", "Whether it is advanced or retarded depends on the handedness of the elliptically polarized wave and the relative orientation of the two detectors as viewed from the source (the “projected detector tensors” [35]).", "For example, with the system defined in Fig.", "REF , the projected Hanford detector appears rotated counter-clockwise relative to Livingston when seen from the $x_{\\rm net} > 0$ hemisphere, but clockwise from $x_{\\rm net} < 0$ .", "Thus, the phase at Hanford would be advanced relative to Livingston for a right-polarized wave from $n_x < 0$ , or left-polarized from $n_x > 0$ (and vice versa), making these solutions degenerate.", "Thus, $n_x$ has to be inverted simultaneously for the transformation $\\mathbf {\\hat{N}}\\mapsto - \\mathbf {\\hat{N}}$ to be a good symmetry of the likelihood.", "This is implemented by sending ${\\phi _{\\rm net}}\\mapsto {\\phi _{\\rm net}}+ \\pi $ in Eq. ().", "Incidentally, since the arrival phase difference also involves the time delay, this effect induces an observable correlation between ${\\theta _{\\rm net}}$ and ${\\phi _{\\rm net}}$ .", "We provide a more quantitative treatment of all this in Appendix .", "In order to simplify the transformation (), we will define two $2 \\pi $ -periodic coordinates ${\\hat{\\phi }_{\\rm net}}&{\\left\\lbrace \\begin{array}{ll}{\\phi _{\\rm net}}& \\text{if $\\cos \\theta _{JN}< 0$} \\\\{\\phi _{\\rm net}}+ \\pi & \\text{else}\\end{array}\\right.}", "\\\\\\hat{\\phi }_{JL}&{\\left\\lbrace \\begin{array}{ll}\\phi _{JL} & \\text{if $\\cos \\theta _{JN}< 0$} \\\\\\phi _{JL} + \\pi & \\text{else}\\end{array}\\right.", "}$ to replace ${\\phi _{\\rm net}}$ and $\\phi _{JL}$ in the characterization of the sky location and spin azimuths, respectively.", "Both transformations have unit Jacobian.", "With these coordinates, the approximate symmetries in Eq.", "(REF ) become one-parameter reflections: ${\\hat{\\phi }_{\\rm net}}&\\mapsto -{\\hat{\\phi }_{\\rm net}}, \\\\\\cos \\theta _{JN}&\\mapsto -\\cos \\theta _{JN}.$ $\\hat{\\phi }_{JL}$ has a simple interpretation as the azimuth about $\\mathbf {J}$ between $\\mathbf {L}$ and the unsigned direction of propagation, so that $\\mathbf {\\hat{N}}\\mapsto -\\mathbf {\\hat{N}}$ leaves $\\hat{\\phi }_{JL}$ invariant.", "In Sec.", "REF we will show that this azimuth can be remarkably well measured.", "The Virgo detector does not have such special alignment.", "If a signal is loud in Virgo and at least one LIGO detector, these symmetries are typically broken and the posterior distribution is unimodal." 
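To make these maps explicit, the schematic Python snippet below (parameter names are ours, not the released code) writes the hatted azimuths and the four approximate symmetries as one-parameter operations on a dictionary of sampled parameters; the polarization is assumed to be defined modulo $\pi $.

```python
import numpy as np

def hatted_azimuths(phi_net, phi_jl, cos_theta_jn):
    """Shift both azimuths by pi when cos(theta_JN) >= 0, per the piecewise
    definitions of phi_net_hat and phi_jl_hat (unit-Jacobian maps)."""
    shift = np.where(cos_theta_jn < 0, 0.0, np.pi)
    return (phi_net + shift) % (2 * np.pi), (phi_jl + shift) % (2 * np.pi)

# The four approximate symmetries, acting on the hatted coordinates.
SYMMETRY_MAPS = [
    lambda p: {**p, 'phi_ref_hat': (p['phi_ref_hat'] + np.pi) % (2 * np.pi)},
    lambda p: {**p, 'psi': (p['psi'] + np.pi / 2) % np.pi},
    lambda p: {**p, 'phi_net_hat': (-p['phi_net_hat']) % (2 * np.pi)},
    lambda p: {**p, 'cos_theta_jn': -p['cos_theta_jn']},
]
```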
], [ "Folding algorithm", "Having identified the approximate discrete symmetries responsible for multimodality, we now introduce “folding”, an algorithm to exploit this structure when sampling.", "The basic idea is illustrated in Fig.", "REF , beginning with sampling from the probability distribution marginalized over the discrete approximate-symmetry transformations, and then sampling over the set of transformations in postprocessing to undo the marginalization.", "Figure: Folding algorithm.", "Left: the posterior has multiple modes (shown in different colors) due to approximate symmetries that are known in advance.", "Right: we can “fold” the distribution (sum its appropriately transformed modes) to make it unimodal.We sample the folded distribution and reconstruct the original in postprocessing.The four approximate symmetry transformations in Eqs.", "(REF ), (REF ), (REF ), () allow us to divide the phase space into $2^4=16$ sectors in which the posterior has similar behavior.", "This is illustrated in the left panel of Fig.", "REF for the four quadrants of $\\cos \\theta _{JN}, {\\hat{\\phi }_{\\rm net}}$ space, which have similar solutions by virtue of the approximate symmetries in Eq.", "(REF ) (the other two folded dimensions $\\hat{\\phi }_{\\rm ref}, \\psi $ are not shown).", "We pick one of these sectors, as highlighted in the right panel, which we call the “folded space”.", "The full space can be recovered from the folded space via $2^4$ discrete mappings $\\lbrace \\sigma _1, \\ldots , \\sigma _{16}\\rbrace $ that either do or do not apply each of the transformations (REF ), (REF ), (REF ), (), respectively.", "If the symmetries were perfect, it would suffice to draw samples on the folded space, then distribute them evenly onto all sectors using these mappings.", "The symmetries in Section REF are only approximate, since the detectors are not perfectly aligned, and general waveform models have additional effects like orbital precession and higher harmonics that are not modeled by Eq.", "(REF ).", "Hence we generalize this idea to relax the requirement of perfect symmetry, as follows: Define the folded distribution $\\tilde{P}(\\tilde{\\theta }) =\\sum _{i=1}^{2^N} P(\\sigma _i(\\tilde{\\theta })),$ where $\\tilde{\\theta }$ is a set of parameters in the folded space, $N=4$ is the number of folded parameters, $\\lbrace \\sigma _i\\rbrace $ are the $2^N$ discrete mappings and $P$ is the posterior distribution in the full space.", "The number of modes of $\\tilde{P}$ can be smaller than that of $P$ , as shown in the right panel of Fig.", "REF , by a factor of up to $2^N$ .", "Draw “folded” samples from this simpler distribution, $\\lbrace \\tilde{\\theta }^j\\rbrace \\sim \\tilde{P}$ , using a traditional sampler.", "Assign each folded sample to a sector: for each $\\tilde{\\theta }^j$ , draw $\\theta ^j$ from $\\lbrace \\sigma _1(\\tilde{\\theta }^j), \\sigma _2(\\tilde{\\theta }^j), \\ldots \\rbrace $ with relative probabilities $\\lbrace P(\\sigma _1(\\tilde{\\theta }^j)), P(\\sigma _{2}(\\tilde{\\theta }^j)), \\ldots \\rbrace $ .", "The set $\\lbrace \\theta ^j\\rbrace $ is distributed according to $P(\\theta )$ .", "At first glance, Eq.", "(REF ) suggests that each evaluation of $\\tilde{P}$ requires $2^N$ evaluations of $P$ and would therefore increase the computational cost by that factor.", "However, since the folded parameters are extrinsic, the expensive computation of the waveform can be reused.", "In the case at hand, changing the polarization and sky location requires 
recomputing antenna factors $F_+, F_\\times $ and time delays, and changing the phase requires recomputing spherical harmonic phases $e^{i m \\phi _{\\rm ref}}$ .", "Even in a general case, all $2^N$ evaluation points are known simultaneously, facilitating vectorization and parallelization.", "These considerations make the cost of computing $\\tilde{P}$ similar to that of $P$ .", "Likewise, generating $\\theta ^j$ from $\\tilde{\\theta }^j$ in step REF above does not require additional computations of $P$ , because these values can be stored along with $\\tilde{\\theta }^j$ during the sampling process.", "In practice, the number of evaluations required for convergence is much smaller for distributions with less modes, making this method advantageous for both robustness and efficiency.", "We emphasize that the folding procedure is not an approximation: it still gives the correct answer if the distribution $P$ is not symmetric at all (in that case, it just does not provide any advantage).", "In Sec.", "REF we had mentioned that the $\\hat{\\phi }_{\\rm ref}$ posterior is bimodal and discontinuous.", "In contrast, per Eqs.", "(REF ), (REF ) and (REF ) the folded posterior is unimodal and continuous, because both sides of the branch cut are summed together.", "In a similar way, the discontinuities introduced in Eqs.", "(REF ) and () are absent in the folded posterior since they happen at the fold $\\cos \\theta _{JN}= 0$ .", "Finally, one might worry that Fig.", "REF merely shows that the marginalized 2-dimensional posterior is approximately symmetric, while the folding algorithm is efficient only if the full 15-dimensional posterior is symmetric.", "It could be possible that, although this projection looks symmetric, the mappings we proposed were incorrect descriptions of the symmetry in the full space.", "In Fig.", "REF we show histograms of the unfolding probabilities $p_i \\propto P(\\sigma _i(\\tilde{\\theta }))$ used in step REF , which have unit sum by construction.", "In the limit that the transformations $\\sigma _i$ were perfect symmetries of $P$ , these probabilities would be 1/16.", "Conversely, if they were not good symmetries, these probabilities would be very nearly 0 or 1.", "We find that they are near $1/16$ , confirming that the transformations we identified are indeed approximate symmetries of the full space.", "Whether all or some of the modes are present for any particular event depends on the source parameters, the network configuration and the signal-to-noise ratio.", "Figure: Distribution of probabilities p i p_i with which folded samples are assigned to the 16 sectors during step , for GW151012.All peak near 1/16, demonstrating that the transformations (), (), (), () are approximate symmetries of the full 15-dimensional space.An alternative to the folding algorithm, which is similar in spirit, is to specify custom jump proposals that follow the approximate symmetry transformations and effectively “connect” the modes in the sampling process.", "This approach has been used to mitigate both degeneracy and multimodality [36], [37], albeit for a different set of symmetries (that includes the transformation in Eq.", "(REF ))." 
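The three steps above fit in a few lines of Python. The sketch below is deliberately unoptimized: it re-evaluates the posterior for each image instead of reusing waveform evaluations or caching the unfolding probabilities during sampling, as discussed above, and `log_post` and `symmetry_maps` are placeholders for a user-supplied log posterior and the list of transformation functions (for instance, the four maps of the previous section).

```python
import itertools
import numpy as np

def make_folded_logpost(log_post, symmetry_maps):
    """Folded posterior: sum of the posterior over all 2^N compositions
    of the approximate-symmetry maps."""
    combos = [c for r in range(len(symmetry_maps) + 1)
              for c in itertools.combinations(symmetry_maps, r)]

    def images(params):
        out = []
        for combo in combos:
            p = dict(params)
            for sigma in combo:
                p = sigma(p)
            out.append(p)
        return out

    def folded_logpost(params):
        return np.logaddexp.reduce([log_post(p) for p in images(params)])

    return folded_logpost, images

def unfold_samples(folded_samples, log_post, images, rng=None):
    """Assign each folded sample to one of the 2^N sectors with probability
    proportional to the posterior there (step 3 of the algorithm)."""
    rng = rng if rng is not None else np.random.default_rng()
    out = []
    for params in folded_samples:
        imgs = images(params)
        logs = np.array([log_post(p) for p in imgs])
        probs = np.exp(logs - np.logaddexp.reduce(logs))
        out.append(imgs[rng.choice(len(imgs), p=probs)])
    return out
```

The function returned by `make_folded_logpost` can be handed to any off-the-shelf sampler, and `unfold_samples` then recovers samples distributed according to the original posterior.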
], [ "Results", "We implement the coordinate transformations described in Sec.", "and the folding algorithm introduced in Sec.", "in a Python package, which we make publicly available at https://github.com/jroulet/cogwheel.", "Apart from those improvements, the cogwheel code utilizes the relative-binning algorithm [38], [39], [40] with higher-order modes [41] to accelerate likelihood evaluations.", "It interfaces with third-party routines for downloading public data (GWOSC [42], GWpy [43]), generating waveforms (lalsuite [44]) and sampling distributions (PyMultiNest [45], [46], dynesty [47]).", "In this section, we summarize the improvements brought by the coordinate transformations and folding technique in terms of efficiency and robustness of the inference.", "Throughout, we model waveforms using IMRPhenomXPHM with next-to-next-to-leading-order (NNLO) post-Newtonian precession [17], and we sample distributions with PyMultiNest [45], [46].", "Figure REF summarizes how our choice of coordinates, combined with the folding method, naturally describes the distribution of extrinsic parameters.", "The uncertainties achieved in the one-dimensional marginal posteriors for the parameters controlling distance, arrival phase and arrival time are in much closer agreement with the expectations from Eqs.", "(REF )–().", "That being said, the uncertainty of the arrival time $t_{k_0}$ is significantly underestimated.", "This can be traced back to correlations with intrinsic parameters, most significantly chirp mass and in-plane spin magnitude (not shown).", "The definition of arrival time for waveforms with different intrinsic parameters is somewhat arbitrary.", "The convention adopted by the LIGO Algorithm Library is to define the reference time when the amplitude of the strain is maximal [48].", "In the IMRPhenomX family of waveform models this is implemented through parametric fits [49] that are accurate to $\\sim {1}{}$ [50], which is consistent with the precision of the arrival time measurement achieved in Fig.", "REF ." 
], [ "Performance of stochastic sampling", "We compare the performance of parameter estimation runs using the coordinate system presented in Sec.", "(coupled to the folding algorithm of Sec.", "REF ) against an “unoptimized” system that is roughly representative of the state of the art, using $(\\cos {\\theta _{\\rm net}},{\\phi _{\\rm net}},t_{k_0},\\cos \\theta _{JN},\\phi _{JL},\\phi _{12},\\cos \\theta _1,\\cos \\theta _2,\\chi _1,\\chi _2,d_L,\\phi _{\\rm ref},\\psi ,{\\mathcal {M}},\\ln q)$ as sampling coordinates and no folding.", "We adopt an isotropic, uniform-in-spin-magnitudes prior in both cases, and so for ease of implementation in the optimized case we use the cumulative of the prior on aligned spin components as sampling coordinate instead of the effective spin (cf.", "Sec.", "REF ).", "In both examples we use PyMultiNest and vary the number of live points in factors of two between 512 and 16384 (this is an internal feature of nested samplers; a higher number of live points typically achieves a better coverage of parameter space and produces more independent samples at a proportionally higher computational cost).", "We use GW151226 as a test case.", "We find that, despite nested sampling being generally well suited for multimodal problems, and despite using numerous live points, the sampler fails to find some modes of the posterior when folding is not used.", "This is illustrated in the top panel of Fig.", "REF : of the unoptimized runs, only the one with 16384 live points found all modes, and even in this case one of the modes was undersampled.", "Conversely, all runs that used our coordinate system and folding successfully found all the modes.", "To quantify the error this induces, we compare each posterior distribution $P$ to a reference answer $P_{\\rm ref}$ , chosen as that with the largest number of live points and using our coordinate system and folding algorithm.", "Our measure of the error is the Jensen–Shannon divergence between each marginal $\\theta _{JN}, \\phi _{\\rm net}$ posterior and the reference distribution: ${\\rm JSD}_{\\theta _{JN}, {\\phi _{\\rm net}}}(P \\parallel P_{\\rm ref}) \\\\\\frac{1}{2} \\iint {\\rm d}\\theta _{JN}\\, {\\rm d}{\\phi _{\\rm net}}\\left(P \\log _2\\frac{P}{M} + P_{\\rm ref}\\log _2 \\frac{P_{\\rm ref}}{M}\\right)$ with $M = (P + P_{\\rm ref}) / 2$ .", "We singled out $\\theta _{JN}, {\\phi _{\\rm net}}$ because this projection of the unoptimized posteriors is most clearly missing modes.", "To evaluate Eq.", "(REF ), we obtain $P(\\theta _{JN}, {\\phi _{\\rm net}}), P_{\\rm ref}(\\theta _{JN}, {\\phi _{\\rm net}})$ from the posterior samples with a kernel density estimation and perform a double quadrature, using scipy implementations [51].", "We show this in the bottom panel of Fig.", "REF : we see more than an order of magnitude improvement in terms of this error measure when we use our coordinate system and folding.", "Figure: Improvement brought by our methods in terms of increased robustness to multimodality.We compare runs that do not use the methods new to this work (“Unoptimized”) to runs that do (“Folding & coordinates”).Within each setting, runs differ in number of live points.Top: Without folding, the sampler fails to find all modes of the posterior even when using a large number of live points.", "Bottom: Error achieved in the posterior distribution versus runtime.We use the Jensen–Shannon divergence with respect to a high-resolution run as our error measure (see Eq.", "()).The unoptimized runs have a higher error due 
to missing or undersampled modes." ], [ "Measurability of spin azimuth", "From the astrophysics standpoint, the most interesting result of this work is that the modified spin azimuth $\\hat{\\phi }_{JL}$ can be measured remarkably well.", "Figure REF shows its posterior distribution for the 15 events in the GWTC-3 and IAS catalogs with tightest bounds.", "We see that it is not uncommon for this parameter to be significantly constrained.", "Recall that $\\hat{\\phi }_{JL}$ describes the azimuth about $\\mathbf {J}$ between the orbital angular momentum $\\mathbf {L}$ and the location of the detector (Eq.", "(), Fig.", "REF ).", "Due to isotropy, all possible directions $\\mathbf {\\hat{N}}$ towards the detector are equivalent a priori, and thus the prior on $\\hat{\\phi }_{JL}$ is uniform.", "Moreover, for aligned-spin configurations we have $\\mathbf {L} \\parallel \\mathbf {J}$ , so $\\hat{\\phi }_{JL}$ has no effect on the waveform and therefore the likelihood is independent of $\\hat{\\phi }_{JL}$ .", "It follows that any departure from a flat posterior originates from the presence of misaligned spins in the waveform model.", "Furthermore, if any particular value of $\\hat{\\phi }_{JL}$ can be confidently ruled out for an event, then the source is inconsistent with having aligned spins.", "This has far-reaching consequences for the astrophysical interpretation of binary mergers.", "The degree of spin–orbit alignment constrains the formation history of the system—in particular, whether it likely formed in isolation or in a dense environment [52]—and can inform astrophysical processes such as tidal interactions and supernova kicks [53].", "For this reason, characterizing the imprints of precession is a topic of intensive research.", "Indeed, beyond the parametrization in terms of $\\theta _{JN},\\phi _{JL},\\phi _{12}$ on which ours is built [28], numerous other descriptions have been studied, both physically and observationally motivated: in terms of an effective precession spin $\\chi _{\\rm p}$  [54] or modifications thereof [55], [56]; a precessing signal-to-noise ratio $\\rho _{\\rm p}$  [57]; a taxonomy of phenomenological parameters [58]; or the spin azimuths [29].", "The practical difference between $\\phi _{JL}$ and $\\hat{\\phi }_{JL}$ is that $\\phi _{JL}$ posteriors typically have two modes separated by $\\pi $ , with opposite inclinations.", "The reason for this bimodality is an approximate discrete symmetry corresponding to observing the same source from the antipodal direction, as discussed in Sec.", "REF .", "As a result, the marginal posterior on $\\phi _{JL}$ is not particularly well constrained.", "We solve this by applying a shift of $\\pi $ depending on the sign of $\\cos \\theta _{JN}$ per Eq. 
().", "Figure: Posterior distributions of modified spin azimuth φ ^ JL \\hat{\\phi }_{JL} (Eq.", "()) for the 15 events with best constraints, under a prior isotropic in spins.For aligned-spin configurations, φ ^ JL \\hat{\\phi }_{JL} becomes a dummy parameter, and therefore events with well-measured φ ^ JL \\hat{\\phi }_{JL} have spins misaligned with the orbit.Posteriors have been artificially centered by subtracting their circular means, which have no astrophysical importance.Recently, [29] found hints of precession in GWTC-2 by studying the azimuths $\\phi _1, \\phi _2$ about $\\mathbf {L}$ between each individual spin and the orbital separation vector, at a reference time $t_{\\rm ref}=-100\\, GM/c^3$ before merger.", "The results in Fig.", "REF are similar in spirit because, following an argument analogous to the one opening this section, a tight measurement of $\\phi _1$ or $\\phi _2$ would also rule out aligned spins.", "In fact, we find that significantly tighter constraints can be placed on $\\hat{\\phi }_{JL}$ than on $\\phi _1$ or $\\phi _2$ .", "This is for two reasons: first, the aforementioned symmetry leads to a similar bimodality correlated with the source inclination; second, $\\phi _1$ and $\\phi _2$ define the spin azimuths relative to the poorly measured orbital phase $\\phi _{\\rm ref}$ .", "Indeed, [29] report that varying the reference time (or equivalently, frequency) at which the source configuration is specified sensitively affects the constraints on $\\phi _1, \\phi _2$ .", "Interestingly, we find that the width of the $\\phi _{\\rm ref}$ posterior depends on ${f_{\\rm ref}}$ , but that of $\\hat{\\phi }_{JL}$ does not, at least in the range ${50}{} < {f_{\\rm ref}}< {150}{}$ .", "This indicates that, at least in some cases, the uncertainty in $\\phi _{\\rm ref}$ is the limiting factor in the measurement precision for the angle between spin and orbital separation.", "We interpret this observation as follows.", "First, while $\\phi _1, \\phi _2$ evolve at the orbital timescale, the angle $\\hat{\\phi }_{JL}$ evolves at the slower precession timescale.", "Moreover, the precession frequency is well constrained independently of directly observable precession effects, because intrinsic parameters are measured from the evolution of the orbital phase (note that to lowest post-Newtonian order, the precession frequency just depends on the masses and the orbital frequency, not the spins [28]).", "Since there are few precession cycles and their frequency is determined, their phase can be specified through $\\hat{\\phi }_{JL}$ similarly well over a broad range of reference frequencies.", "This said, we observe a degradation in its measurement if we adopt a reference frequency of 20.", "The azimuth difference $\\phi _{12}$ also evolves on the precession timescale, however it has a subtler effect on the waveform and, in agreement with previous work [59], [29], we find it is not as well-measured.", "Reassuringly, the set of events with best measured $\\hat{\\phi }_{JL}$ contains those for which precession signatures have been reported before, namely GW200129_065458 [11], [60], GW190412 [34], [61], [62], GW151226 [63], GW170818 [29] and GW190521 [64], [65], as shown in Fig.", "REF ." 
], [ "Conclusions", "We have introduced a coordinate system optimized for characterization of compact binary mergers observed through gravitational waves.", "It removes commonly encountered degeneracies and multimodality, and the transformation to standard coordinates has a simple Jacobian determinant and an explicit inverse.", "These coordinates improve the robustness and efficiency of parameter estimation algorithms, and build intuition about gravitational wave measurements.", "In order to remove degeneracy, the coordinates are designed to separately control the main observable features of the signal.", "For the extrinsic parameters, these are the amplitude, phase and time of arrival at the reference (loudest) detector, and the time delay to the second-loudest detector.", "For the spins, we single out the effective spin parameter and the total spin azimuth, which affect the orbital evolution and the peak frequencies of the amplitude modulations induced by precession, respectively.", "We reemphasize previous realizations that the reference frequency at which the configuration of the binary is specified should be inside the sensitive band of the detectors, and the spin orientations should be defined relative to the direction of wave propagation rather than the orbital separation.", "Strikingly, the azimuth $\\hat{\\phi }_{JL}$ between the total spin and the unsigned direction of wave propagation is well measured in several examples (Fig.", "REF ), establishing a novel observable signature of precession.", "We anticipate that this parametrization will improve our understanding of spin–orbit misalignment in nature.", "In future work we will define a statistic based on this parameter to quantify the significance of spin misalignment.", "We identified four approximate symmetries as the leading cause of multimodality in posterior distributions, involving the orbital phase, polarization, sky location and inclination.", "Depending on the signal-to-noise ratio and the network configuration, these can lead to up to $2^4$ modes in the posterior.", "The last two of these symmetries are particularly good for the Hanford–Livingston network due to its peculiar geometric configuration, and can lead to four degenerate solutions roughly corresponding to a source overhead or underfoot, and face-on or face-off.", "We devised “folding”, an algorithm that turns a distribution with this type of multimodality into an equivalent unimodal problem by marginalizing over the discrete degeneracies.", "To facilitate its application, we adapted our coordinates so that each of the four independent approximate-symmetry transformations is achieved by a one-parameter shift or reflection.", "Using these algorithms, we were able to achieve robust parameter inference while keeping the number of live points, and thereby the computational cost, low (Fig.", "REF ).", "We make publicly available a parameter estimation code that implements these features at https://github.com/jroulet/cogwheel.", "These methods greatly simplify the distribution, to the point where a parametric approximation to the full distribution can be made based on the signal-to-noise ratio and a few inputs specifying the location of the peak.", "Beyond the applications shown here, we expect that this will have other uses in gravitational wave data analysis.", "For example, evidence integrals have application in search [66], [13] and parameter estimation [67], and knowledge of the distribution can be used to design efficient integration schemes, e.g.", "through 
variance reduction methods.", "As another example, a machine-learning approach based on normalizing flows was recently demonstrated to be accurate and fast at estimating gravitational wave source parameters [68].", "In that approach, the posterior distribution is described in terms of a system of coordinates in which it has a standard form (e.g.", "Gaussian).", "This change of coordinates is found automatically and contains the complexity of the problem.", "The techniques introduced in this paper can be regarded as an analytical approximation to a normalizing flow.", "This suggests that applying a normalizing flow on the space of coordinates we developed here might require a simpler neural network architecture and reduce the training cost." ], [ "Acknowledgements", "We thank Geraint Pratten for help with waveform models and Will Farr for comments on the manuscript.", "This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA.", "LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector.", "Additional support for Advanced LIGO was provided by the Australian Research Council.", "Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain.", "The construction and operation of KAGRA are funded by Ministry of Education, Culture, Sports, Science and Technology (MEXT), and Japan Society for the Promotion of Science (JSPS), National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea, Academia Sinica (AS) and the Ministry of Science and Technology (MoST) in Taiwan.", "JR is supported by grant No.", "216179 to the KITP from the Simons Foundation.", "SO acknowledges support as an NSF Graduate Research Fellow under Grant No.", "DGE-2039656.", "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.", "TI is supported in part by the Heising-Simons Foundation, the Simons Foundation, and National Science Foundation Grant No.", "NSF PHY-1748958 and Grant No.", "PHY-2110496.", "TV acknowledges support by the National Science Foundation under Grant No.", "2012086.", "BZ is supported by a research grant from the Center for New Scientists at the Weizmann Institute of Science and a research grant from the Ruth and Herman Albert Scholarship Program for New Scientists.", "MZ is supported by the Canadian Institute for Advanced Research (CIFAR) program on Gravity and the Extreme Universe and the Simons Foundation Modern Inflationary Cosmology initiative." 
], [ "Approximate symmetries, continued", "In this appendix we provide a more quantitative discussion of the argument presented in Sec.", "REF , regarding how the measurement of the Hanford–Livingston phase difference constrains the source azimuth along the time-delay ring and the discrete degeneracies associated.", "From Eqs.", "() and (REF ), the observable phase difference between detectors $k_0$ and $k_1$ is $\\begin{split}\\varphi _{k_1} - \\varphi _{k_0}&= \\arg R_{k_1} - \\arg R_{k_0}- 2 \\pi \\overline{f}_{k_1} \\tau _{k_0 k_1} \\cos {\\theta _{\\rm net}}\\\\&\\quad - 2 \\pi (\\overline{f}_{k_0} - \\overline{f}_{k_1}) t_{k_0},\\end{split}$ where the last term can be neglected if the detectors have similar noise power spectrum shapes and thus $\\overline{f}_k$ .", "The difference $\\arg R_{k_1} - \\arg R_{k_0}$ does depend on the sky location and inclination if the detectors are not coaligned, which provides a joint constraint on ${\\theta _{\\rm net}}, {\\phi _{\\rm net}}, \\iota $ .", "Likewise, the relative amplitude at the detectors $\\frac{a_{k_1}}{a_{k_0}} = {\\frac{R_{k_1}}{R_{k_0}}}$ is also measurable.", "Figure REF shows how these terms depend on ${\\hat{\\phi }_{\\rm net}}$ for various inclinations (holding fixed the values of our other coordinates, and for an aligned-spin configuration so $\\iota = \\theta _{JN}$ ).", "Inverting $\\cos \\iota $ at fixed ${\\phi _{\\rm net}}$ inverts the sign of $\\arg R_k$ and adds $\\pi $ to ${\\hat{\\phi }_{\\rm net}}$ , by Eqs.", "(REF ) and (REF ).", "Therefore, in the second panel blue and red curves are related by a vertical reflection plus a horizontal shift.", "As a result of the geometry of the Hanford–Livingston network, the relative phase and amplitude of the detector responses are symmetric under the transformations (REF ) to a good approximation, especially near the configurations favored by the prior ${\\hat{\\phi }_{\\rm net}}\\approx \\pm \\pi /2,\\cos \\iota \\approx \\pm 1$ .", "Figure: Response of the Hanford–Livingston network along a ring of constant time-delay, as a function of modified azimuth φ ^ net {\\hat{\\phi }_{\\rm net}} (Eq.", "()) for various inclinations.", "Top: prior probability, see Eq. ().", "Center: phase difference introduced by the relative orientations of the detectors.", "Bottom: relative amplitude response.In these plots, the transformation () is a horizontal reflection and () inverts the color scale.Both are approximate symmetries of all three quantities.The dependence of $\\arg R_{\\rm L} - \\arg R_{\\rm H}$ on ${\\hat{\\phi }_{\\rm net}}$ seen in Fig.", "REF is responsible for the correlation between $\\cos {\\theta _{\\rm net}}$ and ${\\hat{\\phi }_{\\rm net}}$ noticeable in Fig.", "REF : per Eq.", "(REF ), in order to match the observed $\\varphi _{k_1} - \\varphi _{k_0}$ , $\\cos {\\theta _{\\rm net}}$ needs to change with ${\\hat{\\phi }_{\\rm net}}$ to compensate the variation of $\\arg R_{\\rm L} - \\arg R_{\\rm H}$ ." 
], [ "Reference sheet", "In this appendix we provide a compact list of the coordinates defined throughout the main text, as implemented in the cogwheel package.", "The sequence of transformations to a standard system is given in Table REF , together with their Jacobian determinants.", "Here, GMST refers to the Greenwich Mean Sidereal Time of the event, which determines the orientation of Earth, and $k_0, k_1$ index the two detectors with highest signal-to-noise ratio.", "Table: Modular sequence of transformations from the sampling coordinates we propose to a standard system.Each transformation involves few variables, is only conditioned on variables computed by the previous ones, and has a simple Jacobian determinant.We use $\\chi _{1x}^{N}, \\chi _{1y}^{N}, \\chi _{2x}^{N}, \\chi _{2y}^{N}$ as “standard” parameters to describe the in-plane spins; we define these as the Cartesian spins in a frame where $\\mathbf {\\hat{z}} \\parallel \\mathbf {L}$ and $\\mathbf {\\mathbf {\\hat{N}}}$ is in the $yz$ -plane.", "This is related to the “radiation frame” used by the LIGO Algorithm Library [44] (where $\\mathbf {\\hat{z}} \\parallel \\mathbf {L}$ and $\\mathbf {\\hat{x}}$ is parallel to the orbital separation vector) with a rotation by $\\phi _{\\rm ref}$ around $\\mathbf {\\hat{z}}$ : $\\begin{pmatrix}\\chi _{1x}& \\chi _{2x}\\\\\\chi _{1y}& \\chi _{2y}\\end{pmatrix} = \\begin{pmatrix}\\cos \\phi _{\\rm ref}& \\sin \\phi _{\\rm ref}\\\\-\\sin \\phi _{\\rm ref}& \\cos \\phi _{\\rm ref}\\end{pmatrix} \\begin{pmatrix}\\chi _{1x}^{N}& \\chi _{2x}^{N}\\\\\\chi _{1y}^{N}& \\chi _{2y}^{N}\\end{pmatrix}.$ Using this system has two advantageous properties: the transformations in Table REF get more decoupled since these spins are independent of $\\phi _{\\rm ref}$ , and the coprecessing-frame harmonic modes of a waveform transform under a change by $\\phi _{\\rm ref}$ as $h_{\\ell m}(f) \\rightarrow h_{\\ell m}e^{i m \\phi _{\\rm ref}}$ if these spin components are held constant [69], [17] which is useful for reusing waveform computations, e.g.", "in folding.", "We also introduced spin coordinates ${C_{\\rm diff}}, C_1^\\perp , C_2^\\perp $ which are the cumulatives of the prior on the aligned spin difference and the in-plane spin magnitudes, respectively.", "These have a uniform prior on $(0, 1)$ by definition, and their relation to the physical spins depends on the choice of prior.", "Our default choice is uniform in the aligned spin difference conditional on ${\\chi _{\\rm eff}}, q$ , in which case $\\begin{split}{C_{\\rm diff}}(\\chi _{1z}, \\chi _{2z}, q)&\\int _{\\chi _{1z}^{\\rm min}}^{\\chi _{1z}} {\\rm d}\\chi _{1z}^{\\prime } \\pi (\\chi _{1z}^{\\prime } \\mid {\\chi _{\\rm eff}}, q) \\\\&= \\frac{\\chi _{1z} - \\chi _{1z}^{\\rm min}}{\\chi _{1z}^{\\rm max} - \\chi _{1z}^{\\rm min}},\\end{split}$ where $\\chi _{1z}^{\\rm min}(\\chi _{1z}, \\chi _{2z}, q)$ is the minimum possible $\\chi _{1z}$ consistent with the value of effective spin ${\\chi _{\\rm eff}}(\\chi _{1z}, \\chi _{2z}, q)$ (given by Eq.", "(REF )) subject to the Kerr bound ${\\chi _{1z}} < 1$ , and similarly $\\chi _{1z}^{\\rm max}(\\chi _{1z}, \\chi _{2z}, q)$ is the maximum possible value: $\\begin{split}\\chi _{1z}^{\\rm min} &= \\max (\\chi _{1z}+ q \\, \\chi _{2z}- q, -1) \\\\\\chi _{1z}^{\\rm max} &= \\min (\\chi _{1z}+ q \\, \\chi _{2z}+ q, 1).\\end{split}$ For the in-plane spins, our prior choice is uniform in the disk given the aligned spin value, which yields $\\begin{split}C_1^\\perp (\\chi _1^\\perp , \\chi _{1z}) 
&\\int _0^{\\chi _1^\\perp } {\\rm d}{\\chi _1^\\perp }^{\\prime }\\pi ({\\chi _1^\\perp }^{\\prime } \\mid \\chi _{1z}) \\\\&= \\frac{(\\chi _1^\\perp )^2}{1 - \\chi _{1z}^2},\\end{split}$ and similarly for the secondary in-plane spin." ] ]
2207.03508
[ [ "Shift Symmetries for p-Forms and Mixed Symmetry Fields on (A)dS" ], [ "Abstract Massive fields on (anti) de Sitter space realize extended shift symmetries at particular values of their masses.", "We find these symmetries for all bosonic p-forms and mixed symmetry fields, in arbitrary spacetime dimension.", "These shift symmetric fields correspond to the missing longitudinal modes of mixed symmetry partially massless fields where the top row of the Young tableau is activated." ], [ "Introduction", "In [1], shift symmetries acting on massive symmetric tensor fields on (anti)-de Sitter space ((A)dS) were found, generalizing the extended shift symmetries of the galileon [2], [3] and special galileon [4], [5], [6], [7] on flat space.", "We refer to the introduction of [1] for more extensive background and motivation.", "A summary of the results of [1] is as follows.", "A massive spin $s$ field on AdS$_D$ of radius $L$ , carried by a symmetric tensor field $\\phi _{\\mu _1\\cdots \\mu _s}$ , satisfies on shell the Klein-Gordon equation, ${\\left\\lbrace \\begin{array}{ll} \\left(\\nabla ^2 -m^2\\right)\\phi =0\\,\\, ,\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ s=0\\,, \\\\\\left(\\nabla ^2+{1\\over L^2}\\left[s+D-2-(s-1)(s+D-4)\\right]-m^2\\right)\\phi _{\\mu _1\\cdots \\mu _s} = 0\\ ,\\ \\ s\\ge 1\\,,\\end{array}\\right.}", "$ along with transversality in all indices and full tracelessness.", "(Here the mass squared $m^2$ for $s\\ge 1$ is chosen so that $m^2=0$ corresponds to the massless point, i.e.", "the point with the largest gauge symmetry.)", "The shift symmetries of interest occur at the following special mass values, ${\\left\\lbrace \\begin{array}{ll} m_{[0],k}^2L^2=k(k+D-1),\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ s=0\\, , \\\\m^2_{[s],k}L^2= (k+2) (k+D-3+2 s) ,\\ \\ s\\ge 1\\, ,\\end{array}\\right.}", "\\ \\ \\ \\ \\ \\ \\ k=0,1,2,\\ldots \\,.", "$ The shift symmetry reads $\\delta \\phi _{\\mu _1\\cdots \\mu _s}= S_{A_1\\cdots A_{s+k},B_1\\cdots B_s}X^{A_1}\\cdots X^{A_{s+k}} {\\partial X^{B_1}\\over \\partial x^{\\mu _1}}\\cdots {\\partial X^{B_s}\\over \\partial x^{\\mu _s}} \\, ,$ where the $X^A$ are embedding coordinates of AdS$_D$ into a flat spacetime of dimension $D+1$ , and the tensor $S_{A_1\\cdots A_{s+k},B_1\\cdots B_s}$ is a fully traceless, constant embedding space tensor with the index symmetries of the following Young tableau, $S_{A_1\\cdots A_{s+k},B_1\\cdots B_s}\\in ~\\raisebox {1.15ex}{(_5{s+k},_3s)} \\, .$ As explained in [1], the $k$ -th shift symmetric spin $s$ field can be thought of as the missing longitudinal mode of a partially massless (PM) spin $s+k+1$ field of depth $s$ (i.e.", "it has a rank $s$ gauge parameter).", "There is a symmetric traceless `field strength' with the same indices as this partially massless field, made from $k+1$ derivatives of the shift symmetric field in a way that mirrors the PM gauge transformation, $F_{\\mu _1\\ldots \\mu _{s+k+1}}= \\nabla _{(\\mu _{s+1}} \\cdots \\nabla _{\\mu _{s+k+1}}\\phi _{\\mu _1\\ldots \\mu _s)_T}\\, .", "$ This field strength is invariant under the shift symmetries (REF ) and gives the basic on-shell non-trivial shift-invariant operator in the theory.", "Here, we will extend the results of [1] by finding the analogs of the above shift symmetries for all the remaining bosonic fields, the anti-symmetric $p$ -form fields and the mixed symmetry fields.", "In the 
process, we will also see that there are no further shift symmetries of this type beyond those we find, and none further for the symmetric tensors beyond those in (REF ).", "We will find these shift invariant fields as longitudinal modes of various mixed symmetry fields as they approach PM points.", "In what follows, we will therefore make frequent use of the classification of PM points for general mixed symmetry fields [8], [9], [10], [11], [12], [13], [14], [15].", "A short summary is as follows.", "Consider a general massive field with the symmetries of a $p$ row Young tableau with row lengths $[s_1,s_2,\\ldots ,s_p]$ , which on-shell is symmetric and divergenceless in all indices and is annihilated by the Klein-Gordon operator $\\nabla ^2-\\tilde{m}^2$ .", "This field has dual conformal field theory (CFT) dimensions $\\Delta _\\pm $ found from the mass $\\tilde{m}^2$ by finding the greater and lesser roots of $\\tilde{m}^2L^2=\\Delta (\\Delta -d ) -\\sum _{i=1}^p s_i .", "$ The partially massless points occur when squares in the tableau from a row which is longer than the row below it are `activated'.", "If it is the $q$ -th row that is being activated, then the number of squares that can be activated ranges from $1,2,\\ldots , s_q-s_{q+1}$ .", "We assign a depth $t=0,1,\\cdots , s_q-s_{q+1}-1$ , which indicates that $s_q-s_{q+1}-t$ of the blocks in the $q$ -th row are activated.", "By removing the activated squares, we get the Young tableau of the gauge parameter, and each activated square becomes a derivative in the gauge transformation law.", "These PM points occur at integer values of the dual CFT dimension given by $\\Delta _+= d - q + s_{q+1} + t\\,.$ The gauge transformation is generally reducible, with $q-1$ levels of gauge-for-gauge parameters.", "It will be important that the only irreducible gauge symmetries are for $q=1$ , i.e.", "when the first row is activated; it is these whose longitudinal modes will give rise to the shift symmetric fields.", "Conventions: The spacetime dimension is $D$ , with indices $\\mu ,\\nu ,\\ldots $ .", "The dual CFT dimension is $d \\equiv D-1$ , with indices $i,j,\\ldots $ .", "We use the mostly plus metric signature.", "We denote the AdS$_D$ radius by $L$ , so that the Ricci scalar is $R=-{D(D-1)/ L^2}<0$ .", "Though we write everything in terms of AdS, our results also apply to dS with the replacement $L^2\\rightarrow -1/H^2$ with $H$ the dS Hubble scale.", "$X^A$ denotes the embedding of AdS$_D$ into flat spacetime of dimension $D+1$ , with indices $A,B,\\ldots $ (see appendix A of [1] for details and conventions of the embedding formalism).", "Tensors are symmetrized and antisymmetrized with unit weight, e.g.", "$t_{[\\mu \\nu ]}=\\frac{1}{2}\\left(t_{\\mu \\nu }-t_{\\nu \\mu }\\right)$ , and $(\\cdots )_T$ means the symmetric fully traceless part of the enclosed indices.", "Young tableaux are denoted $[s_1,s_2,\\ldots ,s_p]$ where $s_i$ is the number of boxes in the $i$ -th row, and are deployed in the manifestly anti-symmetric convention.", "We sometimes use the shorthand of using an exponent to denote multiple rows of the same length, e.g.", "$[4,2^3,1]\\equiv [4,2,2,2,1]$ .", "We denote the corresponding Young projectors by ${\\cal Y}_{[s_1,s_2,\\ldots ,s_p]}$ .", "The notation $T$ on a tableau or projector indicates that it is fully traceless.", "Masses for mixed symmetry fields are denoted by $\\tilde{m}^2$ , which are the “bare masses” that appear in the Klein-Gordon equation satisfied by the transverse and traceless field: $\\nabla 
^2-\\tilde{m}^2=0$ .", "For symmetric tensors and $p$ -forms, there is a traditional notion of “massless,” which is the partially massless point with the largest depth (the only partially massless point in the $p$ -form case), so in these cases we use this more traditional $m^2$ which is shifted relative to $\\tilde{m}^2$ so that $m^2=0$ corresponds to this massless point." ], [ "Shift symmetries for 2-forms", "We start by illustrating the general pattern with the simplest case not covered by [1], the massive 2-form field.", "We will do the simplest instance of this case fully off-shell at the Lagrangian level, then proceed to more on-shell methods as we go on to more general cases.", "Consider the Lagrangian for a 2-form field $B_{\\mu \\nu }$ of mass $m$ on AdS$_D$ , ${1\\over \\sqrt{-g}}{\\cal L}_{[1,1],m^2}(B)= -{1\\over 12} (dB)_{\\mu \\nu \\rho }^2-{1\\over 4}m^2 B_{\\mu \\nu }^2\\,,\\ \\ \\ B_{\\mu \\nu }\\in ~\\raisebox {1.15ex}{(1,1)} \\, ,$ where $(dB)_{\\mu \\nu \\rho }=3\\nabla _{[\\mu } B_{\\nu \\rho ]}$ is the field strength.", "The equations of motion can be cast in the form $\\left(\\nabla ^2 +{2(D-2) \\over L^2}-m^2\\right)B_{\\mu \\nu }=0,\\ \\ \\ \\nabla ^{\\nu } B_{\\nu \\mu }=0.$ The mass is defined such that when $m^2=0$ we get the usual massless 2-form gauge symmetry.", "There are no other points of enhanced gauge symmetry besides this.", "As we will see, the massive 2-form theory (REF ) gets an enhanced shift symmetry when $m^2$ is set to the following values, $m_{[1,1],k}^2L^2= \\left(k+3 \\right) \\left(k+D-2\\right) ,\\ \\ \\ k=0,1,2,\\ldots \\,.$ The form of the $k$ -th shift symmetry is $\\delta B_{\\mu \\nu }= S_{B_1\\ldots B_{k+1},A_1,A_2 }X^{B_1}\\cdots X^{B_{k+1}} {\\partial X^{A_1}\\over \\partial x^{\\mu } } {\\partial X^{A_2}\\over \\partial x^{\\nu } } \\,, $ where $S_{B_1\\ldots B_{k+1},A_1,A_2 }$ is a constant, fully traceless mixed symmetry embedding space tensor of type $S_{B_1\\ldots B_{k+1},A_1,A_2 }\\in ~\\raisebox {1.15ex}{(_5{k+1},\\ ,\\ )} \\, ,$ and there is a $k+1$ derivative shift invariant field strength which is a traceless $[k+2,1]$ tensor $F_{\\mu \\nu \\rho _1\\ldots \\rho _{k+1}}={\\cal Y}_{[k+2,1]}^T \\left[\\nabla _{\\rho _1}\\cdots \\nabla _{\\rho _{k+1}} B_{\\mu \\nu }\\right] \\in ~\\raisebox {1.15ex}{(\\mu _5{k+1},\\nu )}\\,.", "$ We will find these shift symmetries by considering partially massless limits of appropriate mixed symmetry fields.", "For $k=0$ the appropriate field is a massive $[2,1]$ hook field (also known as a Curtright field [16], [17]).", "The Lagrangian for a massive $[2,1]$ hook field on AdS$_D$ is [18], [19], ${1\\over \\sqrt{-g}}{\\cal L}_{[2,1],\\tilde{m}^2}(f)&=& -{3\\over 4} \\left( \\nabla _{[\\mu } f_{\\nu \\rho ]\\sigma }\\nabla ^{[\\mu } f^{\\nu \\rho ]\\sigma }-3 \\nabla _{[\\mu } f_{\\nu \\rho ]}^{\\ \\ \\ \\rho } \\nabla ^{[\\mu } f^{\\nu \\sigma ]}_{\\ \\ \\ \\ \\sigma }\\right) \\nonumber \\\\&& -\\frac{1}{4} \\left({ 2 D-3 \\over L^2} +\\tilde{m}^2\\right)\\left(f_{\\mu \\nu \\rho }f^{\\mu \\nu \\rho }-2 f_{\\mu \\nu }^{\\ \\ \\ \\nu } f^{\\mu \\rho }_{\\ \\ \\ \\rho }\\right)\\,,\\ \\ \\ f_{\\mu \\nu \\rho }\\in ~\\raisebox {1.15ex}{(\\mu \\rho ,\\nu )}\\,.", "\\nonumber \\\\ $ The equations of motion can be cast in the form $\\left(\\nabla ^2 -\\tilde{m}^2\\right)f_{\\mu \\nu \\rho }=0\\, ,\\ \\ \\ f_{\\mu \\nu }^{\\ \\ \\nu }=0\\, ,\\ \\ \\ \\nabla ^\\mu f_{\\mu \\nu \\rho }=\\nabla ^\\rho f_{\\mu \\nu \\rho }=0\\, ,$ so the field on shell is fully traceless, fully divergenceless and 
satisfies a Klein Gordon equation with the bare mass $\\tilde{m}^2$ .", "The theory (REF ) has two mass values at which partially massless gauge symmetries arise [20]: The first is where the top block is activated, giving an antisymmetric tensor gauge symmetry, $~\\raisebox {1.15ex}{(\\ \\nabla ,\\ )}\\ \\ \\ \\tilde{m}^2L^2=-3\\ :\\ \\ \\ \\delta f_{\\mu \\nu \\rho }=\\nabla _{[\\mu }\\Lambda _{\\nu ]\\rho }-\\nabla _\\rho \\Lambda _{\\mu \\nu },\\ \\ \\ \\Lambda _{\\mu \\nu }\\in ~\\raisebox {1.15ex}{(1,1)} \\, .$ This gauge symmetry has no gauge-for-gauge reducibilities.", "This mass value is unitary on AdS and non-unitary on dS.", "The second is where the bottom block is activated, giving a symmetric tensor gauge symmetry, $~\\raisebox {1.15ex}{(\\ \\ ,\\nabla )}\\ \\ \\ \\tilde{m}^2L^2=-(2D-3)\\ :\\ \\ \\ \\delta f_{\\mu \\nu \\rho }=\\nabla _{[\\mu }\\xi _{\\nu ]\\rho },\\ \\ \\ \\xi _{\\mu \\nu }\\in ~\\raisebox {0.0ex}{(2)} \\, .", "$ This gauge symmetry has a gauge-for-gauge reducibility where $\\delta \\xi _{\\mu \\nu }=\\nabla _\\mu \\nabla _\\nu \\chi -{1\\over L^2}g_{\\mu \\nu }\\chi \\, ,$ with scalar gauge-for-gauge parameter $\\chi $ .", "This mass value is unitary on dS and non-unitary on AdS.", "As we approach these partially massless points with the curvature scale $L$ held fixed, the longitudinal modes that are removed by the gauge symmetries must decouple [21].", "Consider first the decoupling limit where we approach the first partially massless value (REF ) from above, $\\tilde{m}^2=-{3\\over L^2}+\\epsilon ^2,\\ \\ \\ \\ \\epsilon \\rightarrow 0.$ To preserve the degrees of freedom in this limit we introduce a 2-form Stückelberg field $B_{\\mu \\nu }$ and make the Stückelberg replacement $f_{\\mu \\nu \\rho }\\rightarrow f_{\\mu \\nu \\rho }+{1\\over \\epsilon }\\left( \\nabla _{[\\mu }B_{\\nu ]\\rho }-\\nabla _\\rho B_{\\mu \\nu }\\right),\\ \\ \\ B_{\\mu \\nu }\\in ~\\raisebox {1.15ex}{(1,1)} \\, ,$ where we have inserted the factor of $1/\\epsilon $ so that $B_{\\mu \\nu }$ will come out canonically normalized up to numerical factors.", "This replacement introduces a Stückelberg gauge symmetry under which the Stückelberg field shifts, $\\delta f_{\\mu \\nu \\rho }=\\left( \\nabla _{[\\mu }\\Lambda _{\\nu ]\\rho }-\\nabla _\\rho \\Lambda _{\\mu \\nu }\\right),\\ \\ \\ \\delta B_{\\mu \\nu }=-\\epsilon \\,\\Lambda _{\\mu \\nu }.$ In the limit (REF ), the Lagrangian (REF ) splits up into a partially massless hook (REF ) and a correct-sign massive 2-form with mass $m^2L^2= 3\\left(D-2\\right) $ , ${ \\cal L}_{[2,1],\\tilde{m}^2}(f) \\underset{\\tilde{m}^2\\rightarrow -{3\\over L^2}}{\\rightarrow } {\\cal L}_{[2,1], -{3\\over L^2}}(f) +{3\\over 2}\\,{\\cal L}_{[1,1], {3\\left(D-2\\right)\\over L^2} }(B).$ This mass is precisely the $k=0$ value of (REF ).", "As discussed in [1], the shift symmetry arises from reducibility parameters of the Stückelberg gauge symmetry, i.e.", "values of $\\Lambda _{\\mu \\nu }$ for which the gauge transformation of the hook field vanishes, $\\nabla _{[\\mu }\\Lambda _{\\nu ]\\rho }-\\nabla _\\rho \\Lambda _{\\mu \\nu }=0.$ In terms of the embedding space field $\\Lambda _{AB}$ corresponding to the gauge parameter $\\Lambda _{\\mu \\nu }$ , this equation says that the mixed symmetry part of $\\partial _A \\Lambda _{BC}$ should vanish.", "In addition, $\\Lambda _{AB}$ should be transverse to $X^A$ , should satisfy the higher dimensional Klein-Gordon equation $\\square _{(D+1)}=0$ , and should be of weight 1 in $X^A$ (so that the 
Klein-Gordon equation pulls back to the correct mass).", "The solution for all this is $\\Lambda _{A_1A_2}=S_{B_1A_1A_2 }X^{B_1}$ with totally antisymmetric $S_{B_1A_1A_2 }$ , which when pulled back to AdS gives the $k=0$ case of (REF )This is also known as a rank 2 Killing-Yano tensor and appears in generalized symmetries for linearized gravity [22], [23], [24], [25], [26], [27], [28], [29].", "$\\Lambda _{\\mu \\nu }=S_{B_1A_1A_2}X^{B_1}{\\partial X^{A_1}\\over \\partial x^{\\mu } } {\\partial X^{A_2}\\over \\partial x^{\\nu } } .$ This implies that there is a one-derivative `field strength' with the same symmetries as the parent PM field $f_{\\mu \\nu \\rho }$ , $F_{\\mu \\nu \\rho }\\equiv \\nabla _{[\\mu }B_{\\nu ]\\rho }-\\nabla _\\rho B_{\\mu \\nu }\\,,$ which is invariant under the $k=0$ shift symmetry (REF ).", "The trace of this field strength is proportional to $\\nabla ^\\nu B_{\\mu \\nu }$ and so vanishes on-shell.", "The traceless part is the basic on-shell non-trivial shift invariant operator in the theory, and is the $k=0$ case of (REF ).", "We can also consider a decoupling limit where we approach the second partially massless point (REF ), $\\tilde{m}^2=-{2D-3\\over L^2}+\\epsilon ^2,\\ \\ \\ \\ \\epsilon \\rightarrow 0.$ In this case we introduce a symmetric Stückelberg field $f_{\\mu \\nu \\rho }\\rightarrow f_{\\mu \\nu \\rho }+{1\\over \\epsilon } \\nabla _{[\\mu }H_{\\nu ]\\rho },\\ \\ \\ H_{\\mu \\nu }\\in ~\\raisebox {0.0ex}{(2)} \\, .$ In the limit (REF ), the Lagrangian (REF ) splits up into a partially massless hook and a massive spin-2, ${ \\cal L}_{[2,1],\\tilde{m}^2}(f) \\underset{\\tilde{m}^2\\rightarrow -{2D-3\\over L^2}}{\\rightarrow } {\\cal L}_{[2,1], -{2D-3\\over L^2}}(f) +{1\\over 4}\\,{\\cal L}_{[2], -{D-2\\over L^2}}(H)\\,,$ where ${\\cal L}_{[2], m^2}$ is the Fierz-Pauli action for a massive spin-2 field on AdS$_D$ (as written in e.g.", "(5.2) of [30]).", "This massive spin-2 that we get is not a new shift-symmetric field, rather its mass $m^2L^2=-(D-2)$ is that of the partially massless graviton [31], [32].", "This is because of the gauge-for-gauge reducibility (REF ), which shows up in the decoupling limit as a partially massless gauge symmetry for the longitudinal field $H_{\\mu \\nu }$ .", "This illustrates a general point: the shift symmetric fields can come only from the missing longitudinal modes of PM fields with irreducible gauge symmetries, i.e.", "those in which the first row is activated.", "PM field with gauge-for-gauge reducibilities, i.e.", "those where a row below the first is activated, instead spin off longitudinal modes which are themselves PM fields, and thus do not give new shift fields.", "The gauge-for-gauge parameter $\\chi $ in (REF ) represents the longitudinal mode of the PM spin-2 field, so this is the $k=1$ shift symmetric scalar.", "This illustrates another general point: the endpoint of gauge-for-gauge reducibility chains of partially massless fields is always a shift symmetric field.", "But these do not give new shift symmetric fields because they are already accounted for by the longitudinal modes of irreducible PM fields.", "We can see all of the above from the dual CFT perspective [33], [34].", "The massive $[2,1]$ field has a dual $[2,1]$ traceless primary state $|f_{ijk}\\rangle _\\Delta \\in ~\\raisebox {1.15ex}{(ik,j)} \\, , $ where the mass and conformal scaling dimension are related by $\\tilde{m}^2L^2=\\Delta (\\Delta -d ) -3\\, .", "$ Denoting the larger and smaller roots of this as $\\Delta _\\pm $ , the 
PM point of interest (REF ) gives $\\Delta _+=d\\, .$ At this value, a $[2,1]$ state of type (REF ) saturates its unitary bound and develops null states in its Verma module, leading to a shortening condition.", "In general, the level at which this shortening occurs is equal to the number of derivative in the PM gauge transformation law.", "Since the gauge symmetry (REF ) has one derivative, the CFT state gets a conservation-type shortening condition at level one in the Verma module: $P^k|f_{ijk}\\rangle _d=0$ .", "This is a null state of spin $[1,1]$ and dimension $d+1$ which spans its own sub-module.", "As the PM value is approached, the AdS$_D$ representation $(\\Delta ,[2,1])$ spanned by the primary $|f_{ijk}\\rangle _\\Delta $ and its descendants splits according to the branching rule $(\\Delta ,[2,1])\\underset{\\Delta \\rightarrow d}{\\rightarrow } (d,[2,1]) \\oplus (d+1,[1,1])\\,.", "$ The submodule spanned by the null states is $(d+1,[1,1])$ .", "The mass of a 2-form is related to its conformal dimension by $m^2L^2=(\\Delta -2)(\\Delta -d+2)\\, ,$ and using this we see that the representation $(d+1,[1,1])$ is precisely the $\\Delta _+$ value of a $k=0$ shift symmetric 2-form with mass as written in (REF ).", "The expression (REF ) is the group theoretical version of the Lagrangian expression (REF ).", "If we consider the lesser root $\\Delta _-=-1$ for the shift field, we get the non-unitary representation $(-1,[1,1])$ , spanned by the primary $|b_{ij}\\rangle _{-1}$ .", "This representation is finite dimensional once the null states are factored out; the only non-null states are $|b_{ij}\\rangle _{-1} \\in ~\\raisebox {1.15ex}{(1,1)}\\, ,\\ \\ P^j|b_{ij}\\rangle _{-1} \\in ~\\raisebox {0.0ex}{(1)}\\, ,\\ \\ P_{[k}|b_{ij]}\\rangle _{-1} \\in ~\\raisebox {2.2ex}{(1,1,1)}\\, ,\\ \\ P_{[i}P^k|b_{j]k}\\rangle _{-1} +{d-4\\over d-1} P^2|b_{ij}\\rangle _{-1} \\in ~\\raisebox {1.15ex}{(1,1)}\\, .$ These states join together into a $[1,1,1]$ in $d+2$ dimensions, so this is the finite dimensional $[1,1,1]$ representation of the AdS$_D$ isometry algebra $so(2,D-1)$ , precisely the anti-symmetric tensor in (REF ) which parametrizes the shift symmetries.", "To get the higher values of $k$ for the shift-symmetric 2 form, we start with a massive $[k+2,1]$ tableau field, $f_{\\mu \\nu \\rho _1\\ldots \\rho _{k+1}}\\in ~\\raisebox {1.15ex}{(\\mu _5{k+1},\\nu )},\\ \\ \\ \\left(\\nabla ^2-\\tilde{m}^2\\right) f_{\\mu \\nu \\rho _1\\ldots \\rho _{k+1}}=0\\, ,$ which is also fully traceless and fully divergenceless on shell.", "We consider the partially massless point where all the possible top blocks are activated, so that the gauge parameter is a 2-form, $\\overset{\\ \\ \\ \\ \\overbrace{\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ }^{k+1} }{~\\raisebox {1.15ex}{(\\ \\nabla \\nabla _2\\cdots \\nabla ,\\ )}} \\ \\ \\tilde{m}^2=-{k+3\\over L^2} \\,:\\ \\ \\ \\delta f_{\\mu \\nu \\rho _1\\ldots \\rho _k}={\\cal Y}_{[k+2,1]}^T \\left[\\nabla _{\\rho _1}\\cdots \\nabla _{\\rho _{k+1}} \\Lambda _{\\mu \\nu }\\right],\\ \\ \\ \\Lambda _{\\mu \\nu }\\in ~\\raisebox {1.15ex}{(1,1)} \\,.", "\\ $ This gauge symmetry is irreducible, so the resulting longitudinal mode in the partially massless limit will be a shift field rather than a gauge field.", "Using reasoning similar to the $k=0$ case we can find the reducibility parameters for the gauge symmetries (REF ) and they are the shifts (REF ).", "There is a $k+1$ derivative field strength with the same symmetries as the PM field $f_{\\mu \\nu \\rho 
_1\\ldots \\rho _{k+1}}$ , made out of $k+1$ derivatives of the shift field patterned after the gauge transformation (REF ), which is invariant under these shifts and gives (REF ).", "A massive $[k+2,1]$ field has a dual $[k+2,1]$ primary state $|f_{ij\\, l_1\\ldots l_{k+1} }\\rangle _\\Delta \\, \\in ~\\raisebox {1.15ex}{(i_5{k+1},j)} ,$ where the mass and conformal scaling dimension are related by $\\tilde{m}^2L^2=\\Delta (\\Delta -d ) -k-3.", "$ At the PM point of interest (REF ), the conformal dimension is $\\Delta _+=d\\, .$ Since the gauge transformation (REF ) has $k+1$ derivatives, the dual state at the value (REF ) is a kind of multiply-conserved current [35] which gets a shortening condition at level $k+1$ in the Verma module, $P^{l_1}\\cdots P^{l_{k+1}}|f_{ij\\, l_1\\ldots l_{k+1} }\\rangle _\\Delta =0$ .", "This is a null state of spin $[1,1]$ and dimension $d+k+1$ which spans its own sub-module.", "As the PM value is approached, the AdS$_D$ representation $(\\Delta ,[k+2,1])$ spanned by the primary $|f_{i_1i_2j_1\\ldots j_{k+1} }\\rangle _\\Delta $ and its descendants splits according to the branching rule $(\\Delta ,[k+2,1])\\underset{\\Delta \\rightarrow d}{\\rightarrow } (d,[k+2,1]) \\oplus (d+k+1,[1,1])\\,.", "$ Using (REF ), we see that the representation $(d+k+1,[1,1])$ is precisely the $\\Delta _+$ value of a level $k$ shift symmetric 2-form as written in (REF ).", "The negative root $\\Delta _-$ gives the representation $(-k-1,[1,1])$ whose non-null states span the finite dimensional, non-unitary representation $[k+1,1,1]$ of the AdS$_D$ isometry group $so(2,D-1)$ , and these are precisely the shift symmetry parameters (REF ).", "The PM point where the bottom block is activated, $~\\raisebox {1.15ex}{(\\ _5{k+1},\\Delta )}$ has a gauge-for-gauge reducibility, and the longitudinal mode will be a depth $t=0$ partially massless spin $k+2$ field, so this gives no new shift symmetric fields.", "The PM points where fewer than the maximal number of top blocks are activated will have gauge parameters which are again mixed symmetry tensors, and since these gauge transformations are irreducible, these will give rise to shift-symmetric points for these mixed symmetry tensors.", "We will return to this more general case in section , after discussing the higher $p$ -forms in the next section.", "We can ask if there are other possible shift-symmetric mass values for the 2-form besides those in (REF ), perhaps coming from PM limits of more complicated mixed symmetry tensors.", "The answer is no for the following reason.", "As we have seen, to get a shift-symmetric field for the longitudinal mode the PM gauge symmetry must be irreducible, otherwise the gauge-for-gauge reducibility parameters will become gauge symmetries of the longitudinal mode and we will get other PM fields rather than shift fields.", "The depth of gauge-for-gauge redundancy in a PM field is equal to the row number which is activated [13].", "The PM symmetry must therefore come from a PM tableau where only the top row is activated, so that the gauge symmetry is irreducible.", "To get a shift-symmetric 2-form, we need a PM tableau whose first row is activated and whose gauge parameter is a 2-form.", "The only such tableaux are those in (REF ).", "The same reasoning shows that the shift symmetric points (REF ) for symmetric tensor fields, and the shift symmetric points for more general fields that we find below, are the only ones." 
], [ "Shift symmetries for $p$ -forms", "We now move on to a massive $p$ -form field $B_{\\mu _1\\ldots \\mu _p}$ on AdS$_D$ , $p\\ge 1$ .", "The equations of motion can be cast in the form $\\left(\\nabla ^2 +{1\\over L^2}p(D-p)-m^2\\right)B_{\\mu _1\\ldots \\mu _p}=0,\\ \\ \\ \\nabla ^{\\mu _1} B_{\\mu _1\\ldots \\mu _p}=0,\\ \\ \\ B_{\\mu _1\\ldots \\mu _p} \\in ~\\raisebox {2.15ex}{(|3p)}\\ \\ .$ The mass is defined such that at $m^2=0$ we get the usual massless $p$ -form gauge symmetry.", "There are no other points of enhanced gauge symmetry besides this.", "The massive $p$ -forms (REF ) get an enhanced shift symmetry at the following mass values $m_{[1^p],k}^2L^2= \\left( k + p+1\\right) \\left( k - p+D\\right) ,\\ \\ \\ k=0,1,2,\\ldots \\, ,$ and the form of the shift symmetry is $\\delta B_{\\mu _1\\ldots \\mu _p}= S_{B_1\\ldots B_{k+1},A_1,\\ldots ,A_p } X^{B_1}\\cdots X^{B_{k+1}} {\\partial X^{A_1}\\over \\partial x^{\\mu _1} } \\cdots {\\partial X^{A_p}\\over \\partial x^{\\mu _p} }\\,, $ where $S_{B_1\\ldots B_{k+1},A_1,\\ldots ,A_p }$ is a constant fully traceless mixed symmetry embedding space tensor of type $[k+1,1^p]$ , $S_{ B_1\\ldots B_{k+1}, A_1,\\ldots ,A_p}\\in ~\\raisebox {2.15ex}{(_5{k+1},|3p)} \\, .$ There is an invariant field strength of type $[k+2,1^{p-1}]$ , $F_{\\nu _1\\ldots \\nu _{k+1}\\mu _1, \\ldots ,\\mu _p }={\\cal Y}_{[k+2,1^{p-1}]}^T\\left[ \\nabla _{\\nu _1}\\cdots \\nabla _{\\nu _{k+1}} B_{\\mu _1\\ldots \\mu _p }\\right] \\in ~\\raisebox {2.15ex}{(|4p_4{k+1}) } \\,.", "$ These shift fields come from the longitudinal modes of mixed symmetry PM fields of the form $[k+2,1^{p-1}]$ , $f_{ \\nu _1\\ldots \\nu _{k+1}\\mu _1,\\ldots ,\\mu _p}\\in ~\\raisebox {2.15ex}{(|4p_4{k+1}) } \\, ,\\ \\ \\left(\\nabla ^2-\\tilde{m}^2\\right) f_{ \\nu _1\\ldots \\nu _{k+1}\\mu _1,\\ldots , \\mu _p}=0\\,,$ in which all the possible blocks in the upper row are activated, $&&\\overset{\\ \\ \\ \\overbrace{\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ }^{k+1} }{~\\raisebox {2.15ex}{(|4p\\nabla \\nabla _2\\cdots \\nabla ) }}\\,\\ \\ \\tilde{m}^2_{\\rm PM}=-{p+k+1\\over L^2}\\,: \\ \\ \\ \\nonumber \\\\&&\\delta f_{ \\nu _1\\ldots \\nu _{k+1}\\mu _1,\\ldots , \\mu _p}={\\cal Y}_{[k+2,1^{p-1}]}^T\\left[ \\nabla _{\\nu _1}\\cdots \\nabla _{\\nu _{k+1}} \\Lambda _{\\mu _1\\ldots \\mu _p }\\right] ,\\ \\ \\ \\Lambda _{\\mu _1\\ldots \\mu _p} \\in ~\\raisebox {2.15ex}{(|3p)}\\, .", "$ The shift symmetries (REF ) are the reducibility parameters of this PM transformation.", "A massive $[k+2,1^{p-1}]$ field has a dual primary operator with conformal scaling dimension related to the mass by $\\tilde{m}^2L^2=\\Delta (\\Delta -d ) -(k+p+1) \\, .", "$ At the PM point of interest (REF ), the conformal dimension is $\\Delta _+=d\\, .$ At this value, since the gauge symmetry has $k+1$ derivatives the dual operator gets a conservation-type shortening condition at level $k+1$ , giving a null state of spin $[1^p]$ and dimension $\\Delta _+=d+k+1.$ As the PM value (REF ) is approached, the AdS$_D$ representation $(\\Delta ,[k+2,1^{p-1}])$ splits according to the branching rule $(\\Delta ,[k+2,1^{p-1}])\\underset{\\Delta \\rightarrow d}{\\rightarrow } (d,[k+2,1^{p-1}]) \\oplus (d+k+1,[1^p]])\\,.", "$ The relation between the mass and dual CFT scaling dimension of a $p$ -form field is given by $m^2L^2=(\\Delta -p)(\\Delta -d+p) .$ Using this, we see that the representation $(d+k+1,[1^p])$ is precisely the $\\Delta _+$ value of a level $k$ shift symmetric $p$ -form with mass value as written in 
(REF ).", "These values are all above the unitarity bound $\\Delta \\ge d-p$ for a $p$ -form, indicating that the shift fields are unitary on AdS, and irreducible with no further null states.", "The negative root $\\Delta _-$ gives the representation $(-k-1,[1^p])$ whose non-null states span the finite dimensional, non-unitary representation $[k+1,1^p]$ of the AdS$_D$ isometry group $so(2,D-1)$ , and these are precisely the shift symmetry parameters (REF ).", "The shift symmetric $p$ -form values are summarized in figure REF .", "Figure: pp form fields in the conformal dimension Δ\\Delta vs. pp plane, for the values Δ + \\Delta _+.", "The masses are given by m 2 L 2 =(Δ-p)(Δ-d+p)m^2L^2=(\\Delta -p)(\\Delta -d+p).", "Blue dots are the massless points, black dots are shift symmetric points.", "For each pp the unitarity bound coincides with the massless point." ], [ "Shift symmetries for general mixed symmetry fields", "We now turn to the general mixed symmetry case.", "Consider a general massive field in a tableau $[s_1,s_2,\\ldots ,s_p]$ which satisfies on-shell the Klein-Gordon equation, $\\phi _{\\mu _1\\ldots \\mu _{s_1},\\ldots }\\in \\raisebox {-40pt}{\\psfig {file=stableaux1.png,height=1.2in,width=1.2in}} \\, ,\\ \\ \\ \\ \\left(\\nabla ^2-\\tilde{m}^2\\right) \\phi _{\\mu _1\\ldots \\mu _{s_1},\\ldots }=0\\, ,$ and in addition is also fully traceless and fully divergenceless.", "This field has a dual conformal dimension related to the mass $\\tilde{m}^2$ by $\\tilde{m}^2L^2=\\Delta (\\Delta -d ) -\\sum _{i=1}^p s_i .", "$ We find the shift symmetry values by considering the PM fields of type $[s_1+k+1,s_2,\\ldots ,s_p]$ where $k+1$ boxes in the top row are activated (depth $t=s_1-s_2$ ), $\\raisebox {-40pt}{\\psfig {file=stableaux2.png,height=1.4in,width=2.3in}} ,\\ \\ k=0,1,2,\\ldots \\, , $ since these are the only PM fields with irreducible gauge symmetries whose gauge parameter is of type $[s_1,s_2,\\ldots ,s_p]$ .", "These partially massless points occur at the mass values $\\tilde{m}^2_{\\rm PM}L^2= (s_1+D-2) (s_1-1) - k-1 - \\sum _{i=1}^ps_i\\, .$ At these values, the dual CFT conformal dimension is $\\Delta _+=d - 1 + s_1\\,.$ Since the gauge symmetry has $k+1$ derivatives the dual operator gets a conservation-type shortening condition at level $k+1$ , giving a null state of spin $[s_1,s_2,\\ldots , s_p]$ and dimension $\\Delta _+=d +k + s_1.$ As the PM value is approached, the AdS$_D$ representation $(\\Delta ,[s_1+k+1,s_2,\\ldots , s_p])$ splits according to the branching rule $(\\Delta ,[s_1+k+1,s_2,\\ldots , s_p])\\underset{\\Delta \\rightarrow d - 1 + s_1 }{\\rightarrow } ( d - 1 + s_1 ,[s_1+k+1,s_2,\\ldots , s_p]) \\oplus (d +k + s_1,[s_1,s_2,\\ldots , s_p])\\,.", "$ The representation $(d +k + s_1,[s_1,s_2,\\ldots , s_p])$ is the $\\Delta _+$ value of a level $k$ shift symmetric field corresponding to the longitudinal mode.", "Using (REF ) we can translate this into the masses for the shift symmetric fields of type $[s_1,\\ldots , s_p]$ , $\\tilde{m}^2_{[s_1,\\ldots , s_p],k}L^2=(s_1+k+D-1)(s_1+k)- \\sum _{i=1}^ps_i \\,.", "\\ $ The form of the shift symmetry is given by a constant fully traceless embedding space tensor of type $[s_1+k,s_1,s_2,\\ldots ,s_p]$ , where the indices of the top row are contracted with $X^A$ and the rest are projected down, $\\delta \\phi _{\\mu _1\\ldots \\mu _{s_1},\\ldots }=S_{A_{1}\\ldots A_{{s_1+k}}, B_{1}\\ldots B_{{s_1}},\\ldots } X^{A_{1}}\\cdots X^{A_{s_1+k}}{\\partial X^{B_{1}}\\over \\partial x^{\\mu _1}}\\cdots {\\partial 
X^{B_{s_1}}\\over \\partial x^{\\mu _{s_1}}}\\cdots \\, \\ ,$ $S_{A_{1}\\ldots A_{\\mu _{s_1+k}}, B_{\\mu _1}\\ldots B_{\\mu _{s_1}},\\ldots } \\in \\raisebox {-50pt}{\\psfig {file=stableaux3.png,height=1.4in,width=1.6in}} \\, \\ .$ The embedding space tensor $S_{A_{1}\\ldots A_{{s_1+k}}, B_{1}\\ldots B_{{s_1}},\\ldots } X^{A_{1}}\\ldots X^{A_{s_1+k}}$ that projects to (REF ) is transverse to $X^A$ due to the mixed symmetry of the coefficients (REF ), satisfies the ambient massless Klein-Gordon equation $\\square _{(D+1)}=0$ due to the tracelessness of the coefficients (REF ), and has homogeneity degree $w=s_1+k$ in the $X^A$ .", "The ambient Klein-Gordon operator acting on a degree $w$ tensor of type $[s_1,\\ldots ,s_p]$ reduces to the AdS$_D$ surface as $\\square _{(D+1)} \\rightarrow \\nabla ^2-{1\\over { L}^2}\\left(w(D+w-1)-\\sum _{i=1}^p s_i \\right)$ .", "Using $w=s_1+k$ this reproduces the masses (REF ).", "We can think of the right hand side of (REF ) as the most general kind of Killing-Yano-like object, or spherical harmonic, on AdS$_D$ .", "The negative root $\\Delta _-=-k - s_1$ of the shift field gives the representation $(-k - s_1,[s_1,s_2,\\ldots , s_p])$ whose non-null states span the finite dimensional, non-unitary representation $[s_1+k,s_1,s_2,\\ldots , s_p]$ of the AdS$_D$ isometry group $so(2,D-1)$ , and these are precisely the shift symmetry parameters (REF ).", "There is a $k+1$ derivative `field strength' with the symmetries $[s_1+k+1,s_2,\\cdots ,s_p]$ of the parent PM field which is invariant under the shifts (REF ), $F_{\\mu _1\\cdots \\mu _{s_1+k+1},\\cdots } = {\\cal Y}_{[s_1+k+1,s_2,\\cdots ,s_p]}^T \\left[ \\nabla _{\\mu _{s_1+1}}\\cdots \\nabla _{\\mu _{s_1+k+1}} \\phi _{\\mu _1\\cdots \\mu _{s_1},\\cdots } \\right] \\,.$ This is the basic local operator which captures the on-shell non-trivial shift invariant information in the theory.", "As an illustrative example, let us return to the PM $[s_1,1]$ fields studied in Section , and consider the PM points not used there, the ones where not all of the top blocks are activated.", "The $t=s-1$ first row partially massless point of a $[s+k+1,1]$ field, $\\overset{\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\overbrace{\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ }^{k+1} }{~\\raisebox {1.15ex}{(_4{s}\\nabla \\nabla _3\\cdots \\nabla ,\\ )}} \\, ,\\ \\ \\ k=0,1,2,\\ldots \\,,$ will give the shift symmetric $[s,1]$ fields.", "We summarize these partially massless and shift symmetric values in figure REF .", "As a final illustrative example, the fields of type $[s,3,2]$ are shown in figure REF .", "Figure: Fields of symmetry type [s,1][s,1] in the conformal dimension Δ\\Delta vs. ss plane, for the values of the positive root Δ + \\Delta _+.", "The masses are given by m ˜ 2 L 2 =Δ(Δ-d)-s-1\\tilde{m}^2L^2=\\Delta (\\Delta -d ) -s-1.Blue dots are PM points, black dots are shift symmetric points.", "The shift symmetric points are the longitudinal modes of the PM points where the upper row is activated.", "The shift point field corresponding to a given such PM point is found by reflecting about the line Δ=s+d-1\\Delta =s+d-1, as illustrated by the curved arrow.", "The AdS unitarity bound for each ss coincides with the uppermost PM point.Figure: Fields of symmetry type [s,3,2][s,3,2] in the conformal dimension Δ\\Delta vs. 
ss plane, for the values Δ + \\Delta _+.", "The masses are given by m ˜ 2 L 2 =Δ(Δ-d)-s-5\\tilde{m}^2L^2=\\Delta (\\Delta -d ) -s-5.", "Blue dots are PM points, black dots are shift symmetric points.", "The shift symmetric points are the longitudinal modes of the PM points where the upper row is activated.", "The shift point corresponding to a given such PM point is found by reflecting about the line Δ=s+d-1\\Delta =s+d-1, as illustrated by the curved arrow.", "The AdS unitarity bound for each ss coincides with the uppermost PM point." ], [ "Conclusions and Discussion", "We have found the points of enhanced shift symmetry for arbitrary bosonic $p$ -forms and mixed symmetry tensors on (A)dS$_D$ space.", "This extends [1], which covered only the symmetric tensors.", "The results for the masses are summarized in (REF ), and the form of the shift symmetries is (REF ).", "These fields capture the longitudinal mode that decouples when a massive field approaches a PM point where the top row is activated, so that the gauge symmetry is irreducible.", "They also appear, though not as dynamical longitudinal modes, as the endpoints of gauge-for-gauge reducibility chains of reducible PM points.", "The shift symmetric fields occur at certain integer values of the dual conformal dimension given by (REF ).", "These values are all above the unitarity bound $\\Delta \\ge d+s_1-h_1-1$ for a mixed symmetry state [9], [20], [36], where $h_1$ is the height of the top “block\" of the tableau, i.e.", "the number of rows of length $s_1$ .", "This indicates that the shift fields, in the ordinary quantization, are all unitary on AdS and irreducible with no further null states.", "On dS, the shift symmetric fields correspond to shortened irreducible representations of the de Sitter algebra (of type ${\\cal V}$ , in the notation of [37]) but they generally lie beyond the complementary series and do not correspond to any discrete series points [38], [15], so they are non-unitary.", "A notable exception is the shift symmetric scalars [39], [40], [41], [42], [43], which are unitary in dS$_D$ and correspond to scalar discrete series representations [44], [45], [46], [37].", "There are some other lower dimensional exceptions as well, related to the scalars by duality.", "For example in $D=3$ the level $k$ shift symmetric 2 form is dual to the level $k+1$ shift symmetric scalar (as can be seen from the fact that their masses are equal) so the 2-form shift fields are unitary in dS$_3$ .", "In fact, all the discussions in this paper must be understood modulo these massive dualities.", "For example, the construction in Section of the shift symmetric 2-forms as longitudinal modes of hook field fails in $D=3$ since the hook fields are non-dynamical for $D<4$ .", "But nothing is missed because the shift symmetric 2-form fields are dual to shift symmetric scalars, and these can be constructed as longitudinal modes of massive symmetric tensor fields which are dynamical in $D=3$ .", "More generally, for low enough dimension where the parent PM field is non-dynamical but the shift symmetric field is, the construction of the shift field as a longitudinal mode fails and the shift field will be equivalent by duality to a different shift field whose parent PM field is dynamical.", "A natural question is whether non-trivial shift symmetric interactions can be found for the more general representations studied here.", "Interactions can always be written using powers of the shift invariant field strength (REF ), so here `non-trivial' means 
that the interactions are not simply powers of the field strength.", "The $k=1,2$ scalars can be given non-trivial self-interactions [47], [48], [1], [49], as can the $k=0$ vector [21], [50], but no other examples are currently known.", "Whether interactions are non-trivial in the sense mentioned above is also tied to whether there are non-trivial algebras that could underlie the symmetries [51], [52].", "A trivial interaction will not deform the abelian algebra of shift symmetries present in the linear theory, whereas a non-trivial interaction should deform the algebra into a non-abelian algebra.", "It would be interesting to know if there are finite algebras of the type studied in [53] (which are finite subalgebras of higher spin algebras underlying PM Vasiliev theories [54], [55], [56], [53], [57]), that could be candidates to underlie non-trivially interacting shift symmetric $p$ -form or mixed symmetry theories.", "Non-trivial theories would also presumably have a flat space limit which gives interacting $p$ -form or mixed symmetry theories in flat space.", "From the point of view of the $S$ -matrix, the non-trivial effective field theories on flat space should be theories with enhanced soft limits that allow for a recursive reconstruction of the amplitudes [4], [58], [6], [59], so it would be interesting to study if there are such possibilities for fields in these other representations.", "In flat space, interactions for $p$ -form galileons are known [60], [61], [62], though these are presumably Wess-Zumino-like [63] interactions that do not deform the basic underlying shift symmetries.", "It would be interesting to see whether these interactions could be extended to (A)dS and/or deformed into non-trivial interactions, or whether they can harbor hidden special galileon-like non-trivial enhancements." ], [ "Acknowledgments:", "The author would like to thank James Bonifacio and Austin Joyce for comments on the draft, and acknowledges support from DOE grant DE-SC0009946 and Simons Foundation Award Number 658908." ] ]
2207.03494
[ [ "Condensation for Mouse Pairs" ], [ "Abstract In this paper, we prove a fine condensation theorem.", "This is quite similar to condensation theorems for pure extender mice in the literature, except that condensation for iteration strategies has been added to the mix." ], [ "1.1em" ], [ "1.1.1em" ], [ "1.1.1.1em all Key words: Least-branch hod mice, square, HOD, large cardinals, determinacy 2010 MSC: 03E15, 03E45, 03E60 This is the first of two papers on the fine structure of HOD in models of the Axiom of Determinacy ($\\sf {AD}$ ).", "Let $M\\vDash \\sf {AD}^+ + V=L({\\wp }(\\mathbb {R}))$ .", "[11] shows that under a natural hypothesis on the existence of iteration strategies, the basic fine structure theory for pure extender models goes over to $\\mbox{HOD}^M$ .", "In this paper, we prove a fine condensation theorem, quite similar to Theorem 9.3.2 of Zeman's book [14], except that condensation for iteration strategies has been added to the mix.", "In the second paper, we shall use this theorem to show that in $\\mbox{HOD}^M$ , $\\square _\\kappa $ holds iff $\\kappa $ is not subcompact." ], [ "INTRODUCTION", "One goal of descriptive inner model theory is to elucidate the structure of HOD (the universe of hereditarily ordinal definable sets) in models $M$ of the Axiom of Determinacy.", "$\\mbox{HOD}^M$ is close to $M$ in various ways; for example, if $M\\vDash \\sf {AD}^+ + V=L({\\wp }(\\mathbb {R}))$$\\sf {AD}^+$ is a technical strengthening of $\\sf {AD}$ .", "It is not known whether $\\sf {AD} \\Rightarrow \\sf {AD}^+$ , though in every model of $\\sf {AD}$ constructed so far, $\\sf {AD}^+$ also holds.", "The models of $\\sf {AD}$ that we deal with in this paper satisfy $\\sf {AD}^+$ ., then $M$ can be realized as a symmetric forcing extension of $\\mbox{HOD}^M$ , so that the first order theory of $M$ is part of the first order theory of its HOD.", "This is a theorem of Woodin from the early 1980s.", "Cf.", "[13].", "For this and many other reasons, the study of HOD in models of AD has a long history.", "We refer the reader to [10] for a survey of this history.", "The study of HOD involves ideas from descriptive set theory (for example, games and definable scales) and ideas from inner model theory (mice, comparison, fine structure).", "One early result showing that inner model theory is relevant is due to the first author, who showed in 1994 ([9]) that if there are $\\omega $ Woodin cardinals with a measurable above them all, then in $L(\\mathbb {R})$ , HOD up to $\\theta $ is a pure extender mouse.", "Shortly afterward, this result was improved by Hugh Woodin, who reduced its hypothesis to ${\\sf {AD}}^{L(\\mathbb {R})}$ , and identified the full $\\mbox{HOD}^{L(\\mathbb {R})}$ as a model of the form $L[M,\\Sigma ]$ , where $M$ is a pure extender premouse, and $\\Sigma $ is a partial iteration strategy for $M$ .", "$\\mbox{HOD}^{L(\\mathbb {R})}$ is thus a new type of mouse, sometimes called a strategy mouse, sometimes called a hod mouse.", "See [12] for an account of this work.", "Since the mid-1990s, there has been a great deal of work devoted to extending these results to models of determinacy beyond $L(\\mathbb {R})$ .", "Woodin analyzed HOD in models of $\\sf {AD}^+$ below the minimal model of $\\sf {AD}_{\\mathbb {R}}$ fine structurally, and Sargsyan pushed the analysis further, first to determinacy models below $\\sf {AD}_{\\mathbb {R}} + ``\\theta $ is regular\" (see [2]), and more recently, to determinacy models below the minimal model of the theory “$\\sf {AD}^+ + 
\\Theta = \\theta _{\\alpha +1} + \\theta _\\alpha $ is the largest Suslin cardinal\" (commonly known as $\\sf {LSA}$ ).", "(See [3].)", "The hod mice used in this work have the form $M=L[\\vec{E},\\Sigma ]$ , where $\\vec{E}$ is a coherent sequence of extenders, and $\\Sigma $ is an iteration strategy for $M$ .", "The strategy information is fed into the model $M$ slowly, in a way that is dictated in part by the determinacy model whose HOD is being analyzed.", "One says that the hierarchy of $M$ is rigidly layered, or extender biased.", "The object $(M,\\Sigma )$ is called a rigidly layered (extender biased) hod pair.", "Putting the strategy information in this way makes comparison easier, but it has serious costs.", "The definition of “premouse\" becomes very complicated, and indeed it is not clear how to extend the definition of rigidly layered hod pairs much past that given in [3].", "The definition of “extender biased hod premouse\" is not uniform, in that the extent of extender bias depends on the determinacy model whose HOD is being analyzed.", "Fine structure, and in particular condensation, become more awkward.", "For example, it is not true in general that the pointwise definable hull of a level of $M$ is a level of $M$ .", "(The problem is that the hull will not generally be sufficiently extender biased.)", "Because of this, it is open whether the hod mice of [3] satisfy $\\forall \\kappa \\square _\\kappa $ .", "(The second author did show that $\\forall \\kappa \\square _{\\kappa ,2}$ holds in these hod mice; cf.", "[3].)", "The more naive notion of hod premouse would abandon extender bias, and simply add the least missing piece of strategy information at essentially every stage.", "This was originally suggested by Woodin.", "The first author has recently proved a general comparison theorem that makes it possible to use this approach, at least in the realm of short extenders.", "The resulting premice are called least branch premice (lpm's), and the pairs $(M,\\Sigma )$ are called least branch hod pairs (lbr hod pairs).The (pure extender or least branch hod) premice in the paper are called pfs (projectum-free space) premice in [11].", "We will occasionally omit the “pfs\" for brevity.", "All premice used in this paper are pfs premice (and their strong cores), see Section for more discussions.", "Combining results of [11] and [8], one has [[11],[8]] Assume $\\sf {AD}^+ + $ “there is an $(\\omega _1,\\omega _1)$ iteration strategy for a pure extender premouse with a long extender on its sequence\".", "Let $\\Gamma \\subseteq P(\\mathbb {R})$ be such that $L(\\Gamma ,\\mathbb {R}) \\vDash \\sf {AD}_{\\mathbb {R}} +$ “there is no $(\\omega _1,\\omega _1)$ iteration strategy for a pure extender premouse with a long extender on its sequence\"; then $\\mbox{HOD}^{L(\\Gamma ,\\mathbb {R})}$ is a least branch premouse.", "Of course, one would like to remove the iterability hypothesis of , and prove its conclusion under $\\sf {AD}^+$ alone.", "Finding a way to do this is one manifestation of the long standing iterability problem of inner model theory.", "Although we do not yet know how to do this, the theorem does make it highly likely that in models of $\\sf {AD}_{\\mathbb {R}}$ that have not reached an iteration strategy for a pure extender premouse with a long extender, HOD is an lpm.", "Least branch premice have a fine structure much closer to that of pure extender models than that of rigidly layered hod premice.", "The paper [11] develops the basics, the solidity and universality of 
standard parameters, and a coarse form of condensation.", "The main theorem of this paper, Theorem , is a stronger condensation theorem.", "The statement of is parallel to that of Theorem 9.3.2 of [14], but it has a strategy-condensation feature that is new even in the pure extender model context.", "The proof of follows the same outline as the proofs of solidity, universality, and condensation given in [11], but there are a number of additional difficulties to be overcome.", "These stem from the restricted elementarity we have for the ultrapowers of phalanxes that are taken in the course of the proof.", "Theorem is one of the main ingredients in the proof of the main theorem of our second paper.", "We say that $(M,\\Sigma )$ is a mouse pair iff $M$ is either a pure extender pfs premouse or a least branch pfs premouse, and $\\Sigma $ is an iteration strategy for $M$ that has strong hull condensation and normalizes well.", "See [11] and section below for a full definition.", "[$\\sf {AD}^+$ ] Let $(M,\\Sigma )$ be a mouse pair.", "Let $\\kappa $ be a cardinal of $M$ such that $M \\vDash ``\\kappa ^+$ exists\"; then in $M$ , the following are equivalent.", "$\\square _\\kappa $ .", "$\\square _{\\kappa ,<\\kappa }$ .", "$\\kappa $ is not subcompact.", "The set of $\\nu <\\kappa ^+$ such that $M|\\nu $ is extender-active is non-stationary in $\\kappa ^+$ .", "The special case of this theorem in which $M$ is a pure extender model is a landmark result of Schimmerling and Zeman.", "(See [4].)", "Our proof follows the Schimmerling-Zeman proof quite closely.", "Theorem has applications to consistency strength lower bound questions that we discuss in the second paper.", "But our work was also motivated by the desire to put the fine structure theory of [11] to the test, so to speak.", "Determining the pattern of $\\square $ is a good way to go one level deeper into the world of projecta, standard parameters, restricted elementarity, and condensation theorems.", "We found when we did so that the definition of hod premouse given in [11] was wrong, in that strategy information was being added in a way that would not in general be preserved by $\\Sigma _1$ hulls.", "The correct method for strategy insertion comes from [7], and we describe it further below.", "[11] has been revised so that it now uses this method.", "Acknowledgements.", "The work reported here began when the second author visited the first author in March and June of 2016 at UC Berkeley.", "The second author thanks the NSF for its generous support through grant No DMS-1565808." 
], [ "LEAST-BRANCH HOD PREMICE", "Again, we mention that all premice used in this paper are in the pfs hierarchy, defined in [11].", "We adopt for the most part the fine structure and notation from [11] concerning least-branch hod premice (lpm's) and lbr hod pairs.", "A similar, albeit simpler, fine structure for pure extender premice is discussed in [11].", "We summarize some main points below.", "The reader can see [11] for more details.", "Least branch premice (lpm).", "The language for lpm's is $\\mathcal {L}_1$ with symbols $\\in , \\dot{E},\\dot{F},\\dot{\\Sigma },\\dot{B}, \\dot{\\gamma }$ .", "$\\mathcal {L}_0 = \\mathcal {L}_1 - \\lbrace \\dot{B},\\dot{\\Sigma }\\rbrace $ is the language of pure extender mice.", "An lpm $M$ is of the form $(N,k)$ where $N$ is an $\\mathcal {L}_1$ amenable structure that is $k$ -sound.", "We write $k = k(M)$ .", "We often identify $M$ with $N$ and suppress $k$ .", "$o(M)$ denotes the ordinal height of $M$ , and $\\hat{o}(M)$ denotes the $\\alpha $ such that $o(M)=\\omega \\alpha $ .", "$l(M)=(\\hat{o}(M),k(M))$ is the index of $M$ .", "For $(\\nu ,l)\\le _{\\rm {lex}}$$l(M)$ , $M|(\\nu ,l)$ is the initial segment of $M$ with index $(\\nu ,l)$ .", "We write $N \\unlhd M$ iff $N = M|(\\nu ,l)$ for some $(\\nu ,l)\\le _{\\rm {lex}}$$l(M)$ .", "If $\\nu \\le \\hat{o}(M)$ , write $M|\\nu $ for $M|(\\nu ,0)$ .", "We adopt the projectum-free space (pfs) fine structure in [11].", "We write $\\rho _n(M)$ for the $n$ -th projectum of $M$ and $p_n(M)$ for the $n$ -th standard parameter of $M$ .", "We set $\\rho (M) = \\rho _{k(M)+1}(M)$ and $p(M) = p_{k(M)+1}(M)$ , and call them the projectum and parameter of $M$ .", "We say $M$ is sound iff it is $k(M)+1$ -sound.", "An lpm $M$ must be $k(M)$ -sound, but it need not be $k(M)+1$ -sound.", "All proper initial segments of an lpm must be sound lpms.", "If $M$ is $k$ -sound for $k \\ge 1$ , then it is coded by its reduct $M^k$ , where $M^k = (M||\\rho _k(M), A^k_M)$ , where $A^k_m = \\lbrace \\langle \\varphi , b\\rangle \\ | \\ \\varphi \\textrm { is } \\Sigma _1 \\wedge b \\in M||\\rho _k \\wedge M^{k-1} \\vDash \\varphi [b, w_k] \\rbrace $ , where $\\rho _k = \\rho _k(M)$ , $w_k = w_k(M) = \\langle \\rho _k(M), \\eta _k(M), p_k(M) \\rangle $ and $\\eta _k(M)$ is the $\\Sigma _1$ -cofinality of $\\rho _k(M)$ over $M^{k-1}$ .", "We also have the decoding function $d^k: M^k \\rightarrow M$ and canonical $\\Sigma _1$ -Skolem function $h^1_{M^k}$ over $M^k$ defined as in [11].", "We have the $k+1$ -st projectum, parameter, strong core $\\bar{\\mathfrak {C}}_{k+1}$ , and core $\\mathfrak {C}_{k+1}$ defined by (we will omit the $M$ from the notation) $\\rho _{k+1} &= \\rho _1(M^k),\\\\p_{k+1} &= p_1(M^k),\\\\\\bar{\\mathfrak {C}}_{k+1} &= \\text{transitive collapse of $d^k\\circ h^1_{M^k}[(\\rho _{k+1}\\cup \\lbrace p_{k+1}, w_k\\rbrace )]$,}\\\\\\bar{p}_{k+1} &= \\sigma ^{-1}(p_{k+1}),\\\\\\mathfrak {C}_{k+1} &= \\text{transitive collapse of $d^k\\circ h^1_{M^{k}}[(\\rho _{k+1}\\cup \\lbrace p_{k+1}, \\rho _{k+1},w_k\\rbrace )].$} $ Here, we let $\\sigma : \\bar{\\mathfrak {C}}_{k+1} \\rightarrow M$ and $\\pi : \\mathfrak {C}_{k+1}\\rightarrow M$ be the uncollapse maps.", "We define “$M$ is $k+1$ -solid\" just as in [11], namely when $M^k$ is parameter solid, projectum solid, stable, and $M$ is weakly ms-solid.", "$M$ is $k+1$ -sound iff $M$ is $k+1$ -solid and $M = \\mathfrak {C}_{k+1}(M)$ .", "$M$ is $k+1$ -strongly sound if $M$ is $k+1$ -sound and If $M = \\bar{\\mathfrak {C}}_{k+1}(M)$ .", "$M$ is $k+1$ -solid, 
then $\\rho _{k+1}$ is not measurable by the $M$ -sequence and either $\\mathfrak {C}_{k+1} = \\bar{\\mathfrak {C}}_{k+1}$ or $\\mathfrak {C}_{k+1} = \\textrm {Ult}_k(\\bar{\\mathfrak {C}}_{k+1},D)$ where $D$ is the order zero measure of $\\bar{\\mathfrak {C}}_{k+1}$ on $\\rho _{k+1}$ and $\\sigma = \\pi \\circ i_D$ .", "We recall that for $k = k(M)$ , $M^k$ is stable if either $\\eta _k(M) < \\rho _{k+1}$ or $\\eta _k(M)$ is not measurable by the $M$ -sequence.", "We say that $M$ is strongly stable if $\\eta _k(M)$ is not measurable by the $M$ -sequence.", "$\\dot{E}^M$ codes the sequence of extenders that go into constructing $M$ .", "$\\dot{F}^M$ if non-empty is the amenable code for a new extender being added; in this case, we say that $M$ is extender-active (or just $E$ -active).", "If $\\dot{F}^M =F$ is nonempty, then $M\\vDash \\rm {crt}$$(F)^+$ exists and $o(M)=i^M_F(\\mu )$ , where $\\mu =\\rm {crt}$$(F)^+$ .", "Also $F$ must satisfy the Jensen initial segment condition (ISC), that is, whole initial segments of $F$ must be in $\\dot{E}^M$ (see [14] for a detailed discussion of ISC).", "$\\dot{\\gamma }$ is the index of the largest whole initial segment of $F$ if exists; otherwise, $\\dot{\\gamma }=0$ .", "We also demand $M$ is coherent, that is $i^M_F(\\dot{E}^M)\\upharpoonright o(M)+1 =(\\dot{E}^{M})^\\langle \\emptyset \\rangle $ .", "Furthermore, weak ms-solidity means that every extender $F\\in \\dot{E}^M$ satisfies the weak $\\textsf {ms}$ -$\\textsf {ISC}$ , namely letting $\\kappa = \\textrm {crt}(F)$ , then the Jensen completion of $F_{\\lbrace \\kappa \\rbrace }$ is on the sequence of $M|\\textrm {lh}(E)$ .", "$\\dot{\\Sigma }^M$ and $\\dot{B}^M$ are used to record information about an iteration strategy $\\Omega $ of $M$ .", "$\\dot{\\Sigma }^M$ codes the strategy information added at earlier stages; $\\dot{\\Sigma }^M$ acts on $\\lambda $ -separated trees.See [11] for detailed discussions on $\\lambda $ -separated trees.", "$\\dot{\\Sigma }^M(s,b)$ implies that $s=\\langle \\nu ,k,\\mathcal {T} \\rangle $ , where $(\\nu ,k)\\le l(M)$ and $\\mathcal {T}$ is a $\\lambda $ -separated tree on $M|(\\nu ,k)$ in $M$ of limit length and $\\mathcal {T}^b$ is according to the strategy.", "We say that $s$ is an $M$ -tree, and write $s = \\langle \\nu (s),k(s),\\mathcal {T}(s)\\rangle $ .", "We write $\\dot{\\Sigma }^M_{\\nu ,k}$ for the partial iteration strategy for $M|(\\nu ,k)$ determined by $\\dot{\\Sigma }$ .", "We write $\\Sigma ^M(s)=b$ when $\\dot{\\Sigma }^M(s,b)$ , and we say that $s$ is according to $\\Sigma ^M$ if ${\\mathcal {T}}(s)$ is according to $\\dot{\\Sigma }^M_{\\nu (s),k(s)}$ .", "Now we discuss how to code branch information for a tree ${\\mathcal {T}}(s)$ such that $\\Sigma ^M(s)$ has not yet been defined into the $\\dot{B}^M$ predicate.", "Here we use the $\\mathfrak {B}$ -operator in [7].", "We are correcting some errors in the original version of [11].", "These corrections have been incorporated in its latest version.", "$M$ is branch-active (or just $B$ -active) iff (a) there is a largest $\\eta < o(M) $ such that $M|\\eta \\vDash {\\sf KP}$ , and letting $N=M|\\eta $ , (b) there is a $<_N$ -least $N$ -tree $s$ such that $s$ is by $\\Sigma ^N$ , ${\\mathcal {T}}(s)$ has limit length, and $\\Sigma ^N(s)$ is undefined.", "(c) for $N$ and $s$ as above, $o(M) \\le o(N) + lh({\\mathcal {T}}(s))$ .", "Note that being branch-active can be expressed by a $\\Sigma _2$ sentence in $\\mathcal {L}_1 -\\lbrace \\dot{B} \\rbrace $ .", "This contrasts with being 
extender-active, which is not a property of the premouse with its top extender removed.", "In contrast with extenders, we know when branches must be added before we do so.", "Suppose that $M$ is branch-active.", "We set $\\eta ^M &= \\text{the largest $\\eta $ such that $M|\\eta \\vDash {\\sf KP}$,}\\\\\\nu ^M &= \\text{unique $\\nu $ such that $\\eta ^M + \\nu = o(M)$},\\\\s^M &= \\text{least $M|\\eta ^M$-tree such that $\\dot{\\Sigma }^{M|\\eta ^M}$ is undefined, and}\\\\b^M &= \\text{$\\lbrace \\alpha \\mid \\eta + \\alpha \\in \\dot{B}^M \\rbrace .$} $ Moreover, (1) $M$ is a potential lpm iff $b^M$ is a cofinal branch of ${\\mathcal {T}}(s) {\\upharpoonright }\\nu ^M$ .", "(2) $M$ is honest iff $\\nu ^M = {\\rm lh}({\\mathcal {T}}(s))$ , or $\\nu ^M < lh({\\mathcal {T}}(s))$ and $b^M = [0,\\nu ^M)_{T(s)}$ .", "(3) $M$ is an lpm iff $M$ is an honest potential lpm.", "(4) $M$ is strategy active iff $\\nu ^M = \\textrm {lh}({\\mathcal {T}}(s))$ .", "Note that $\\eta ^M$ is a $\\Sigma _0^M$ singleton, because it is the least ordinal in $\\dot{B}^M$ (because 0 is in every branch of every iteration tree), and thus $s^M$ is also a $\\Sigma _0^M$ singleton.", "We have separated honesty from the other conditions because it is not expressible by a $Q$ -sentence, whereas the rest is.", "Honesty is expressible by a Boolean combination of $\\Sigma _2$ sentences.", "See below.", "The original version of [11] required that when $o(M) < \\eta ^M + lh({\\mathcal {T}}(s))$ , $\\dot{B}^M$ is empty, whereas here we require that it code $[0,o(M))_{T(s)}$ , in the same way that $\\dot{B}^M$ will have to code a new branch when $o(M) = \\eta ^M + lh({\\mathcal {T}}(s))$ .", "Of course, $[0,\\nu ^M)_{T(s)} \\in M$ when $o(M) < \\eta ^M + lh({\\mathcal {T}}(s))$ and $M$ is honest, so the current $\\dot{B}^M$ seems equivalent to the original $\\dot{B}^M = \\emptyset $ .", "However, $\\dot{B}^M=\\emptyset $ leads to $\\Sigma _1^M$ being too weak, with the consequence that a $\\Sigma _1$ hull of $M$ might collapse to something that is not an lpm.The hull could satisfy $o(H) = \\eta ^H + lh({\\mathcal {T}}(s^H))$ , even though $o(M) < \\eta ^M + lh({\\mathcal {T}}(s^M))$ .", "But then being an lpm requires $\\dot{B}^H \\ne \\emptyset $ .", "Our current choice for $\\dot{B}^M$ solves that problem.", "Suppose $N$ is an lpm, and $N \\vDash {\\sf KP}$ .", "It is very easy to see that $\\dot{\\Sigma }^N$ is defined on all $N$ -trees $s$ that are by $\\dot{\\Sigma }^N$ iff there are arbitrarily large $\\xi < o(N)$ such that $N|\\xi \\vDash {\\sf KP}$ .", "Thus if $M$ is branch-active, then $\\eta ^M$ is a successor admissible; moreover, we do add branch information, related to exactly one tree, at each successor admissible.", "Waiting until the next admissible to add branch information is just a convenient way to make sure we are done coding in the branch information for a given tree before we move on to the next one.", "One could go faster.", "We say that an lpm $M$ is (fully) passive if $\\dot{F}^M=\\emptyset $ and $\\dot{B}^M=\\emptyset $ .", "It cannot be the case that $M$ is both $E$ -active and $B$ -active.", "In the case that $M$ is $E$ -active, using the terminology of [4], the extender $\\dot{F}^M$ can be of type $A$ , $B$ , or $C$ .", "Suppose $M$ is an acceptable $J$ structure having the properties defined above.", "Suppose $k = k(M)$ .", "We say that $M$ is an lpm of type $1A$ iff $\\rho _k(M) = \\rho _{k-1}(M)$ or $\\rho _k(M)\\in Hull_1^{N^{k-1}}(\\rho _k(M)\\cup p_k(M))$ , equivalently iff $k(M) = 0$ or 
$k(M)>0$ and $M^-$ is strongly sound. $M$ has type $1B$ iff $k(M)> 0$ and $M^-$ is sound, but not strongly sound. $M$ has type 2 iff $M^-$ is not sound. In this paper, just like in [11], we will mostly deal with type 1 premice. In iteration trees involving type 1 premice, type 2 premice may show up, but we will replace them by type 1 equivalent premice. The precise way to do this is spelled out in [11]; see also the proof of Theorem . Suppose that $M$ is an lpm, and $\pi \colon H \rightarrow M$. What sort of elementarity for $\pi $ do we need to conclude that $H$ is an lpm? In the proof of square for ordinary mice, we have to deal with embeddings that are only weakly elementary. See section 1.4 of [11] for a discussion of the degrees of elementarity. If $k(H)=k(M)=0$, then $\pi $ is weakly elementary iff it is $\Sigma _0$ elementary and cardinal-preserving. In the context of proving square in pfs premice, we need a slight strengthening of weak elementarity, called near elementarity, defined in [11]. For pfs premice, nearly elementary maps are the appropriate maps used in copying constructions, and they will be the maps used in the square constructions. We define nearly elementary maps now. Roughly, a map $\pi : H\rightarrow M$ with $k = k(M) = k(H)$ is nearly elementary if it is weakly elementary and maps $\eta _k(H)$ to $\eta _k(M)$. More precisely, we first define the appropriate coding structures that work for both type 1 and type 2 premice. Let $M$ be a pfs premouse with $k = k(M) > 0$. Let $\hat{w}_k(M) = \langle \hat{\eta }_k(M), \hat{\rho }_k(M), p_k(M)\rangle $, where $\hat{\rho }_k(M)$ is the least $\rho $ that is not in $\mathrm{Hull}_1^{M^{k-1}}(\rho _k(M)\cup p_k(M))$ and $\hat{\eta }_k(M) = \textrm {cof}_1^{M^{k-1}}(\hat{\rho }_k(M))$. When $M$ is clear from the context, we write $\hat{w}_k$ for $\hat{w}_k(M)$, etc. Let $\hat{A}^k_M = \lbrace \langle \varphi , b\rangle \mid \varphi \textrm { is } \Sigma _1 \wedge b \in M||\rho _k \wedge M^{k-1} \vDash \varphi [b, \hat{w}_k] \rbrace $, and $\hat{M}^k = (M||\rho _k, \hat{A}^k_M)$. Finally, we say that $\pi $ is nearly elementary iff $\pi $ is the completion of $\pi \upharpoonright \hat{M}^k$ and $\pi \upharpoonright \hat{M}^k$ is a $\Sigma _0$-preserving and cardinal-preserving map from $\hat{M}^k$ to $\hat{N}^k$. $\pi $ is elementary iff it is nearly elementary and $\pi \upharpoonright \hat{M}^k$ is $\Sigma _1$-elementary. The possible problem comes when $k(H)=k(M)=0$. If $M$ is a passive lpm, then so is $H$, and there is no problem. If $M$ is extender-active, then it could be that $H$ is only a protomouse, in that its last extender predicate is not total. The problem here is solved by the parts of the Schimmerling-Zeman proof related to protomice, which work in our context. Finally, we must consider the case that $M$ is branch-active. An $rQ$-formula of $\mathcal {L}_1$ is a conjunction of formulae of the form (a) $\forall u \exists v ( u \subseteq v \wedge \varphi )$, where $\varphi $ is a $\Sigma _1$ formula of $\mathcal {L}_1$ such that $u$ does not occur free in $\varphi $, or of the form (b) ``$\dot{F}\ne \emptyset $, and for $\mu = \mathrm {crt}(\dot{F})^+$, there are cofinally many $\xi < \mu $ such that $\psi $'', where $\psi $ is $\Sigma _1$. Formulae of type (a) are usually called $Q$-formulae. Being a passive lpm can be expressed by a $Q$-sentence, but
in order to express being an extender-active lpm, we need type (b) clauses, in order to say that the last extender is total.", "$rQ$ formulae are $\\pi _2$ , and hence preserved downward under $\\Sigma _1$ -elementary maps.", "They are preserved upward under $\\Sigma _0$ maps that are strongly cofinal.", "Let $M$ and $N$ be $\\mathcal {L}_0$ -structures and $\\pi \\colon M \\rightarrow N$ be $\\Sigma _0$ and cofinal.", "We say that $\\pi $ is strongly cofinal iff $M$ and $N$ are not extender active, or $M$ and $N$ are extender active, and letting $\\pi `` (\\mathrm {crt}(\\dot{F})^+)^M$ is cofinal in $(\\mathrm {crt}(\\dot{F})^+)^N$ .", "It is easy to see that $rQ$ formulae are preserved downward under $\\Sigma _1$ -elementary maps, and upward under strongly cofinal $\\Sigma _0$ -elementary maps.", "(a) There is a $Q$ -sentence $\\varphi $ of $\\mathcal {L}_1$ such that for all transitive $\\mathcal {L}_0$ structures $M$ , $M \\vDash \\varphi $ iff $M$ is a passive lpm.", "(b) There is a $rQ$ -sentence $\\varphi $ of $\\mathcal {L}_1$ such that for all transitive $\\mathcal {L}_0$ structures $M$ , $M \\vDash \\varphi $ iff $M$ is an extender-active lpm.", "(c) There is a $Q$ -sentence $\\varphi $ of $\\mathcal {L}_1$ such that for all transitive $\\mathcal {L}_0$ structures $M$ , $M \\vDash \\varphi $ iff $M$ is a potential branch-active lpm.", "(Sketch.)", "We omit the proofs of (a) and (b).", "For (c), note that “$\\dot{B} \\ne \\emptyset $ \" is $\\Sigma _1$ .", "One can go on then to say with a $\\Sigma _1$ sentence that if $\\eta $ is least in $\\dot{B}$ , then $M|\\eta $ is admissible, and $s^M$ exists.", "One can say with a $\\Pi _1$ sentence that $\\lbrace \\alpha \\mid \\dot{B}(\\eta + \\alpha ) \\rbrace $ is a branch of ${\\mathcal {T}}(s)$ , perhaps of successor order type.", "One can say that $\\dot{B}$ is cofinal in the ordinals with a $Q$ -sentence.", "Collectively, these sentences express the conditions on potential lpm-hood related to $\\dot{B}$ .", "That the rest of $M$ constitutes an extender-passive lpm can be expressed by a $\\Pi _1$ sentence.", "(a) If $M$ is a passive ( resp.", "extender-active, potential branch-active ) lpm, and ${\\rm Ult}_0(M,E)$ is wellfounded, then ${\\rm Ult}_0(M,E)$ is a passive (resp.extender-active, potential branch-active ) lpm.", "(b) Suppose that $M$ is a passive (resp.", "extender-active, potential branch-active) lpm, and $\\pi \\colon H \\rightarrow M$ is $\\Sigma _1$ -elementary; then $H$ is a passive (resp.", "potential branch-active) lpm.", "(c) Let $k(M)=k(H) = 0$ , and $\\pi \\colon H \\rightarrow M$ be $\\Sigma _2$ elementary; then $H$ is a branch-active lpm iff $M$ is a branch-active lpm.", "$rQ$ -sentences are preserved upward by strongly cofinal $\\Sigma _0$ embeddings, so we have (a).", "They are $\\Pi _2$ , hence preserved downward by $\\Sigma _1$ - elementary embeddings, so we have (b).", "It is easy to see that honesty is expressible by a Boolean combination of $\\Sigma _2$ sentences, so we get (c).", "It could happen that $M$ is a branch-active lpm, $\\pi \\colon H \\rightarrow M$ is cofinal and elementary (with $k(M)=k(H)=0$ ), and $b^M$ is not cofinal in ${\\mathcal {T}}(s^M)$ , but $b^H$ is cofinal in ${\\mathcal {T}}(s^H)$ .", "If we were using the branch coding in the original version of [11], then $\\dot{B}^M = \\emptyset $ , so $\\dot{B}^H =\\emptyset $ , so $H$ is not an lpm.", "Part (c) of Lemma is not particularly useful.", "In general, our embeddings will preserve honesty of a potential branch active lpm $M$ 
because $\\dot{\\Sigma }^M$ and $\\dot{B}^M$ are determined by a complete iteration strategy for $M$ that has strong hull condensation.", "So the more useful preservation theorem in the branch-active case applies to hod pairs, rather than to hod premice.", "Least branch hod pairs (lbr).", "We say that $(M,\\Omega )$ is a least branch hod pair (lbr hod pair) with scope $H_\\delta $ iff $M$ is an lpm.", "$\\Omega $ is a complete iteration strategy for $M$ with scope $H_\\delta $ (see [11]).", "$\\Omega $ is internally lift-consistent, quasi-normalizes well, and has strong hull condensation (again, see [11]), and If $s$ is by $\\Omega $ with last model $N$ , then $\\dot{\\Sigma }^N\\subseteq \\Omega _s$ , where $\\Omega _s(t) = \\Omega (s^t)$ .", "Included in clause (2) is the requirement that all $\\Omega $ -iterates of $M$ be least branch premice.", "Because of our honesty requirement in the branch-active case, this no longer follows automatically from the elementarity of the iteration maps.Honesty for “branch-anomalous\" $M$ does not seem to pass to ${\\rm Ult}_0(M,E)$ for first-order reasons.", "That the iterates of $M$ are honest comes out of the construction of $\\Omega $ , as a consequence of self-awareness.", "If $(M,\\Omega )$ is an lbr hod pair and $\\pi \\colon H \\rightarrow M$ is weakly elementary, then $\\Omega ^\\pi $ is the pullback strategy, given by $\\Omega ^\\pi (s) = \\Omega (\\pi s).$ We show now that, except in the protomouse case, $(H,\\Omega ^\\pi )$ is an lbr hod pair.", "Let $(M,\\Omega )$ be an lbr hod pair with scope $H_\\delta $ , and let $\\pi \\colon H \\rightarrow M$ be nearly elementary.", "Suppose that one of the following holds: (a) $M$ is passive or branch-active, or (b) $H$ is an lpm.", "Then $(H,\\Omega ^\\pi )$ is an lbr hod pair with scope $H_\\delta $ .", "We show first that $H$ is an lpm.", "If (b) holds, this is rather easy.", "If $M$ is passive, we can apply (a) of , noting that $Q$ sentences go down under weakly elementary embeddings.", "So let us assume that $M$ is branch-active.", "By (b) of , $H$ is a potential branch active lpm.", "So we just need to see that $H$ is honest.", "Let $\\nu = \\nu ^H$ , $b = b^H$ , and ${\\mathcal {T}}= {\\mathcal {T}}(s^H)$ .", "If $\\nu = lh({\\mathcal {T}})$ , there is nothing to show, so assume $\\nu < lh({\\mathcal {T}})$ .", "We must show that $b = [0,\\nu )_{T}$ .", "We have by induction that for $N=H|\\eta ^H$ , $(N,\\Omega ^\\pi _N )$ is an lbr hod pair.", "Thus ${\\mathcal {T}}$ is by $\\Omega ^\\pi $ , and so we just need to see that for ${\\mathcal {U}}= {\\mathcal {T}}{\\upharpoonright }\\nu $ and $\\mathcal {W} ={\\mathcal {U}}^\\frown b$ , $\\mathcal {W}$ is by $\\Omega ^\\pi $ , or equivalently, that $\\pi \\mathcal {W}$ is by $\\Omega $ .", "But it is easy to see that $\\pi \\mathcal {W}$ is a psuedo-hull of $\\pi (U)^\\frown b^M$ , and $\\Omega $ has strong hull condensation, so we are done.", "For the proof that $(H,\\Omega ^\\pi )$ is internally lift-consistent, normalizes well, and has strong hull condensation, the reader should see [11].", "We give here the proof that $(H,\\Omega ^\\pi )$ is self-aware, because it extends the honesty proof given above.", "Let $P$ be an $\\Omega ^\\pi $ iterate of $H$ via the stack of trees $s$ .", "Let $Q$ be the corresponding $\\Omega $ iterate of $M$ via $\\pi s$ , and let $\\tau \\colon P \\rightarrow Q$ be the nearly elementary copy map.", "Then for ${\\mathcal {U}}\\in P$ , ${\\mathcal {U}}\\text{ is by } \\dot{\\Sigma }^P & \\Rightarrow \\tau 
({\\mathcal {U}}) \\text{ is by } \\dot{\\Sigma }^Q\\\\& \\Rightarrow \\tau {\\mathcal {U}}\\text{ is by } \\Omega _{\\pi s,Q}\\\\& \\Rightarrow {\\mathcal {U}}\\text{ is by } (\\Omega ^\\pi )_{s,P},\\\\$ as desired." ], [ "CONDENSATION LEMMA", "The main theorem of this section is Theorem .", "This theorem will be used in the $\\square $ -construction, but it is more general than is necessary for those applications.", "Our theorem extends Theorem 9.3.2 of [14], which deals with condensation under $\\pi \\colon H \\rightarrow M$ for pure extender mice $H$ and $M$ .", "That theorem breaks naturally into two cases: either (1) $H \\notin M$ , in which case $H$ is the $\\mathrm {crt}(\\pi )$ -core of $M$ , or (2) $H \\in M$ , in which case $H$ is a proper initial segment of either $M$ or an ultrapower of $M$ .", "The proof in case (1) works for least branch hod mice without much change, so we begin with that case.", "Let $M$ be an lpm or a pure extender premouse, and $n \\le k(M)$ ; then (a) $\\tilde{h}^{n+1}_M$ is the $\\Sigma _1$ -Skolem function of $\\hat{M}^n$ .", "We write $\\tilde{h}_M$ for $\\tilde{h}^{k(M)+1}_M$ .", "(b) Let $\\rho (M) \\le \\alpha $ and $r = p(M)-\\alpha $ , and suppose that $r$ is solid.", "Let $\\pi \\colon H \\rightarrow M$ with $H$ transitive be such that ${\\rm ran}(\\pi ) = \\tilde{h}_M `` (\\alpha \\cup r)$ , and suppose that $\\pi ^{-1}(r)$ is solid over $H$ .", "Then we call $H$ the $\\alpha $ -core of $M$ , and write $H = \\mbox{core}_{\\alpha }(M)$ .", "In addition, if $(M,\\Sigma )$ is a mouse pair, then the $\\alpha $ -core of $(M,\\Sigma )$ is $(H,\\Lambda )$ , where $H =$ core$_\\alpha (M)$ and $\\Lambda = \\Sigma ^\\pi $ , where $\\pi $ is the corresponding core map.", "(c) $M$ is $\\alpha $ -sound iff $M = \\mbox{core}_{\\alpha }(M)$ .", "We note that core$(M) = \\mathfrak {C}_{k(M)+1}(M)$ is the $\\rho (M)+1$ -core of $M$ .", "According to this definition, if $M$ is $\\alpha $ -sound, then $\\rho (M)+1 \\le \\alpha $ .", "So $M$ could be sound, but not $\\alpha $ -sound, which might be confusing at first.", "Let $H$ be the $\\alpha $ -core of $M$ , as witnessed by $\\pi $ .", "We have $p(M) \\subseteq {\\rm ran}(\\pi )$ , so the new $\\Sigma _{k(M)+1}$ subset of $\\rho (M)$ is $\\Sigma _{k(M)+1}$ over $H$ .", "Thus $\\rho (H) = \\rho (M)$ and $\\pi (p(H)) = p(M)$ , and $H \\notin M$ .", "One also gets that if both $H, M$ are of type 1, then $H$ is of type $1A$ iff $M$ is of type $1A$ .", "One might guess that $P(\\alpha )^M \\subseteq H$ , but this need not be the case, as the following example shows.", "Let $N$ be sound, and let $M = {\\rm Ult}(N,E)$ , where $\\rho (N) \\le \\kappa = {\\rm crt}(E)$ , and $E$ has one additional generator $\\alpha $ .", "Let $H = {\\rm Ult}(N,E {\\upharpoonright }\\alpha )$ , and let $\\pi \\colon H \\rightarrow M$ be the factor map.", "Clearly, $\\pi $ witnesses that $H$ is the $\\alpha $ -core of $M$ .", "But $\\alpha = (\\kappa ^{++})^H < (\\kappa ^{++})^M$ , so $H$ doesn't even have all the bounded subsets of $\\alpha $ that are in $M$ .", "[$\\sf {AD}^+$ ] Suppose $(M,\\Sigma )$ is a lbr hod pair with scope HC.", "Suppose $H$ and $M$ are both type 1 premice, $\\pi :H \\rightarrow M$ is nontrivial$\\pi $ is trivial iff $H=M$ and $\\pi $ is the identity., and letting $n=k(M) = k(H)$ and $\\alpha = crt(\\pi )$ ,Here we allow $\\alpha $ to be $o(H)$ and $\\pi $ to be the identity.", "$\\alpha < \\rho _n(M)$ .", "Suppose also (1) $H$ is $\\alpha $ -sound, (2) $\\pi $ is a cardinal-preserving and $\\pi \\upharpoonright 
H^n$ is $\\Sigma _0$ -elementary, and (3) $H$ is an lpm of the same type as $M$This means: $H$ is passive if and only if $M$ is passive; $H$ is $B$ -active if and only if $M$ is $B$ -active; and $H$ is $E$ -active if and only if $M$ is $E$ -active; in the third case, $\\dot{F}^{H}$ is of type A (B, C) if and only if $\\dot{F}^M$ is of type A (B, C respectively).", "All but the last clause are implicit in (2)., and (4) $H \\notin M$ .", "Then $H$ is the $\\alpha $ -core of $M$ .", "Let $r = p(H) - \\alpha $ .", "$T = \\mbox{Th}^{H^n}_{1}(\\alpha \\cup r),$ so that $T$ codes $H$ .", "$T $ is sometimes denoted by Th$^M_{n+1}(\\alpha \\cup r)$ .", "Suppose first that $\\pi \\upharpoonright H^n$ is not cofinal.", "We have that $T$ is $\\Sigma _1$ over $H^n$ , and hence $T$ is $\\Sigma _1$ over some proper initial segment of $M^n$ , so that $T \\in M^n$ .", "If $n>0$ , then $M|\\rho _n(M) \\vDash {\\sf KP}$ and $T \\in M|\\rho _n(M)$ , so $H \\in M$ .", "If $n =0$ and $H$ is fully passive, then we have $\\pi \\colon H \\rightarrow M|\\eta $ for some $\\eta < o(M)$ , and ${\\rm ran}(\\pi )$ in $M$ .", "Any premouse is closed under transitive collapse, so we again get $H \\in M$ .", "If $n=0$ and $H$ is extender-active, then letting $H^- = H||o(H)$ , we get $H^- \\in M$ by the argument just given.", "However, $\\dot{F}^H$ can be computed from the fragment $\\dot{F}^M \\upharpoonright \\sup \\pi ``o(H)$ and $\\pi $ inside $M$ , so $H \\in M$ .", "The case that $n=0$ and $H$ is branch-active can be handled similarly, noting that the proper initial segments of $b^M$ are in $M$ .", "So we may assume $\\pi \\upharpoonright H^n$ is cofinal $\\Sigma _0$ , and hence $\\Sigma _1$ .", "We then get that $\\pi (\\eta ^H_n) = \\eta ^M_n$ .", "We claim that $\\rho (M) \\le \\alpha $ .", "For if not, $T$ is a bounded subset of $\\rho (M)$ that is $\\Sigma _1$ over $M^n$ .", "Thus $T \\in M|\\rho (M)$ , so $H \\in M$ .", "Suppose $r = \\emptyset $ .", "If $\\gamma \\in (p(M) - \\alpha )$ , then $T$ can be computed easily from the solidity witness $W_\\gamma ^M$ , so $T$ in $M$ , and with a bit more work, $H \\in M$ .", "So we have $p(M)-\\alpha = \\emptyset $ , which implies that $H$ is the $\\alpha $ -core of $M$ , as witnessed by $\\pi $ .", "Suppose next that $r = \\langle \\beta _0,...,\\beta _l \\rangle $ , and $p(M) - \\alpha = \\langle \\gamma _0,...,\\gamma _m\\rangle $ , where $\\beta _i > \\beta _{i+1}$ and $\\gamma _i > \\gamma _{i+1}$ for all $i$ .", "We show by induction on $i \\le l$ that $i \\le m$ and $\\pi (\\beta _i) =\\gamma _i$ .", "Suppose we know it for $i \\le k <l$ .", "Let $W=W_{r,\\beta _{k+1}^H}$ be the solidity witness for $\\beta _{k+1}$ in $H$ .", "Since $\\pi \\upharpoonright H^n$ is $\\Sigma _1$ elementary, $\\pi (W)$ can be used to compute $\\mbox{Th}^M_{n+1}(\\pi (\\beta _{k+1}) \\cup \\lbrace \\gamma _0,...,\\gamma _k \\rbrace )$ inside $M$ .", "But $\\rho (M) < \\pi (\\beta _{k+1})$ , so we must have $k < m$ .", "Similarly, $\\gamma _{k+1} < \\pi (\\beta _{k+1})$ is impossible, as otherwise $\\pi (W)$ could be used in $M$ to compute the $\\Sigma _{n+1}$ theory of $p(M) \\cup \\rho (M)$ .", "On the other hand, if $\\pi (\\beta _{k+1}) < \\gamma _{k+1}$ , then using the solidity witness $W_{p(M),\\gamma _{k+1}}^M$ for $\\gamma _{k+1}$ in $M$ , we get $H \\in M$ .", "It follows that $\\pi (r) = p(M)-\\alpha $ , and thus $H$ is the $\\alpha _0$ -core of $M$ .", "In the case $H$ is the core of $M$ , we can also get agreement of $\\Sigma $ and $\\Sigma ^\\pi $ up to $(\\rho 
^+)^H=(\\rho ^+)^M$ .", "See Corollary .", "It may be possible to prove strategy condensation in the other cases, but we have not tried to do that.", "An analog of the above theorem can be stated and proved for type 2 premice.", "Next we deal with condensation under $\\pi \\colon H \\rightarrow M$ in the case $H \\in M$ .If $\\pi \\colon H \\rightarrow M$ is elementary, $\\alpha =\\mathrm {crt}(\\pi )$ , $H$ is $\\alpha $ -sound, and $\\alpha < \\rho (M)$ , then $H \\in M$ .", "This is the case with the coarser condensation results of [11] and [1], where $\\alpha = \\rho (H)$ and $\\pi (\\alpha )=\\rho (M)$ .", "We shall actually prove a stronger result, one that includes condensation for iteration strategies as well as condensation for the mice themselves.", "It is convenient here to use the terminology associated to mouse pairs.", "Recall from [11] that $(M,\\Sigma )$ is a mouse pair iff either $(M,\\Sigma )$ is a least branch hod pair, or $(M,\\Sigma )$ is a pure extender pair.That is, $M$ is a pure extender premouse, and $\\Sigma $ is a self-consistent complete iteration strategy for $M$ that quasi-normalizes well and has strong hull condensation.", "We say that $(H,\\Psi ) \\unlhd (M,\\Sigma )$ iff for some $\\langle \\nu ,l\\rangle \\le l(M)$ , $H = M|\\langle \\nu ,l \\rangle $ and $\\Psi = \\Sigma _{\\langle \\nu ,l\\rangle }$ .", "If $(H,\\Psi )$ and $(M,\\Sigma )$ are mouse pairs of the same type, then $\\pi \\colon (H,\\Psi ) \\rightarrow (M,\\Sigma )$ is elementary (resp.", "nearly elementary) iff $\\pi $ is elementary (nearly elementary) as a map from $H \\rightarrow M$ , and $\\Psi = \\Sigma ^\\pi $ .", "We say that $(M,\\Sigma )$ is an iterate of $(H,\\Psi )$ iff there is a stack $s$ on $H$ such that $s$ is by $\\Psi $ , and $\\Sigma = \\Psi _s$ .", "It is a non-dropping iterate iff the branch $H$ -to-$M$ does not drop.", "Assuming $\\mbox{{\\sf AD}}^+$ and that our pairs have scope ${\\mathrm {H}C}$ , [11] proves the following: (1) If $(M,\\Sigma )$ is a mouse pair, and $\\pi \\colon H \\rightarrow M$ is nearly elementary, then $(H,\\Sigma ^\\pi )$ is a mouse pair.", "(2) If $(H,\\Psi )$ is a mouse pair, and $(M,\\Sigma )$ is a non-dropping iterate of $(H,\\Psi )$ , then the iteration map $i_s \\colon (H,\\Psi )\\rightarrow (M,\\Sigma )$ is elementary in the category of pairs.", "(3) (Dodd-Jensen) If $(H,\\Psi )$ is a mouse pair, $(M,\\Sigma )$ is an iterate of $(H,\\Psi )$ via the stack $s$ , and $\\pi \\colon (H,\\Psi ) \\rightarrow (M,\\Sigma )$ is nearly elementary, then (i) the branch $H$ -to-$M$ of $s$ does not drop, and (ii) for all $\\eta < o(H)$ , $i_s(\\eta ) \\le \\pi (\\eta )$ , where $i_s$ is the iteration map.", "(4) (Mouse order) Let $(H,\\Psi ) \\le ^* (M,\\Sigma )$ iff there is a nearly elementary embedding of $(H,\\Psi )$ into some iterate of $(M,\\Sigma )$ ; then $\\le ^*$ is a prewellorder of the mouse pairs with scope ${\\mathrm {H}C}$ in each of the two types.", "The following is an easy case of condensation for pairs.", "[$\\mbox{{\\sf AD}}^+$ ] Let $(M,\\Sigma )$ be a mouse pair with scope ${\\mathrm {H}C}$ , and let $\\pi \\colon (H,\\Psi )\\rightarrow (M,\\Sigma )$ be elementary, with $\\pi =$ identity; then either $(H,\\Psi ) \\unlhd (M,\\Sigma )$ , or $(H,\\Psi ) \\lhd {\\rm Ult}((M,\\Sigma ),E_\\alpha ^M)$ , where $\\alpha = o(H)$ .", "Suppose first $H$ is extender-active.", "Let $F = \\dot{F}^H$ and $G = \\dot{F}^M$ , and let $\\kappa = \\mathrm {crt}(F)$ .", "So $\\kappa ^{+,H} = \\kappa ^{+,M} < o(H)$ , and $i_F^H `` \\kappa ^{+,H} = i_G^M `` 
\kappa ^{+,M}$. Thus ${\rm ran}(\pi )$ is cofinal in $o(M)$, which implies $(H,\Psi ) = (M,\Sigma )$. Next, suppose that $H$ is branch-active. (Of course, this only applies when $M$ is an lpm. In general, our proofs for pure extender pairs are special cases of the proofs for lbr hod pairs, so it doesn't hurt to assume our mouse pair is an lbr hod pair.) Since $\pi $ is the identity, $\eta = \eta ^H = \eta ^M$ and $s = s^H = s^M$. Let ${\mathcal {T}}= {\mathcal {T}}(s)$, and let $\nu = \nu ^H$, so that $o(H) = \eta + \nu $. Because $\pi $ preserves $\dot{B}^H$, $b^H = b^M \cap \nu $. But $b^M \cap \nu = b^{M|o(H)}$ because $M$ is an lpm, so $H = M$. We get $\Sigma ^\pi = \Sigma _{l(H)}$ from the self-consistency of $(M,\Sigma )$, so $(H,\Psi ) \unlhd (M,\Sigma )$. Finally, suppose that $H$ is fully passive. Clearly, $M||\hat{o}(H)$ is branch-passive, and thus $M||\hat{o}(H) = H$. Using self-consistency for $(M,\Sigma )$, we get $(H,\Psi ) \unlhd (M,\Sigma )$, unless $M|\hat{o}(H)$ is extender-active. In that case we get $(H,\Psi ) \lhd Ult(M,E_\alpha ^M)$, where $\alpha = \hat{o}(H)$, using self-consistency and strategy coherence. Our main condensation theorem for mouse pairs is: [$\sf {AD}^+$] Suppose $(M,\Sigma )$ is a pfs mouse pair with scope HC. Suppose $\pi :(H,\Psi ) \rightarrow (M,\Sigma )$ is nearly elementary, and not the identity. Let $\alpha = \mathrm {crt}(\pi )$, and suppose (1) $H$ is a premouse of the same type as $M$, $k(H)=k(M)$, and both $H$ and $M$ are stable, type 1 pfs premice, (2) $\rho (H) \le \alpha < \rho _{k(H)}(H)$, and $H$ is $\alpha $-sound, and (3) $H$ is not the $\alpha $-core of $M$. Then exactly one of the following holds. (a) $(H,\Psi ) \lhd (M,\Sigma )$. (b) $(H,\Psi ) \lhd Ult_0((M,\Sigma ), \dot{E}^M_\alpha )$. When one applies Theorem  in the proof of $\square _\kappa $ in pfs mice, it is easy to see that $H \in M$, and to rule out conclusion (b). In that proof, $\rho (H) = \rho (M) = \kappa $, and $\alpha =(\kappa ^+)^H$, and both $H$ and $M$ are sound. Moreover, because $\kappa $ is not subcompact, one can arrange that $\dot{E}^M_\alpha = \emptyset $, so (b) does not hold. So one gets conclusion (a), that $(H,\Psi ) \lhd (M,\Sigma )$. In the square proof, what matters then is just that $H \unlhd M$; the full external strategy agreement given by $\Sigma ^\pi = \Sigma _{l(H)}$ is not used. It follows from the theorem that the hypothesis $\alpha < \rho _{k(H)}(H)$ can be dropped, if one omits condensation of the external strategy from its conclusion. See below. One may notice that one of the conclusions of [14] does not occur here. The conclusion that $(H,\Psi ) \lhd Ult_0((M|\xi ,\Sigma ), E)$, where $\rho (M|\xi )=\mu $ is the predecessor of $\alpha $ in $M$ and $E$ is an extender on the sequence of $M$ with critical point $\mu $, cannot occur here because $\mu $ cannot be measurable in $M|\xi $, a solid pfs premouse. A relatively coarse special case of Theorem  is sketched in [11]. In that case, $\pi $ is assumed to be fully elementary and $\mathrm{crt}(\pi ) = \rho (H)$. [Proof of Theorem ] Let $\pi \colon (H,\Psi ) \rightarrow (M,\Sigma )$ be nearly elementary, and let $\alpha _0 = \alpha = \mathrm {crt}(\pi )$ and $k = k(H) = k(M)$. Suppose $H$ is $\alpha _0$-sound, and not the $\alpha _0$-core of $M$, so that by , $H \in M$.
For definiteness, let us assume that $H$ and $M$ are lbr pfs premice. The proof in the case that they are pure extender mice is similar. (Even in the pure extender case, one cannot simply quote 9.3.2 of [14], because we are demanding strategy condensation. Under $\mbox{{\sf AD}}^+$, every countable $\omega _1$-iterable pure extender mouse $M$ has a complete iteration strategy $\Sigma $ such that $(M,\Sigma )$ is a pure extender pair. Thus our theorems , , and  together imply 9.3.2 of [14], modulo some details about where the strategies live, and how elementary $\pi $ is.) A tuple $\langle (N,\Phi ), (G,\Lambda ), \sigma , \nu \rangle $ is problematic iff (1) $(N,\Phi )$ and $(G,\Lambda )$ are of the same type, with scope ${\mathrm{HC}}$, and $G \in N$, (2) $\sigma \colon (G,\Lambda ) \rightarrow (N,\Phi )$ is nearly elementary, with $\mathrm {crt}(\sigma ) = \nu $, (3) $\nu < \rho _{k(G)}(G)$ and $G$ is $\nu $-sound, $k(N)=k(G) =k$, and $G, N$ are either both pfs premice or both strong $k$-cores of pfs premice, and (4) conclusions (a) and (b) of  all fail. More precisely: $\lnot ((G,\Lambda )\lhd (N,\Phi ))$, and if $\rho _k(G)$ is measurable, then $\lnot ((Ult_k(G,D), \Lambda ^{\prime }) \lhd (N,\Phi ))$; and $\lnot ((G,\Lambda ) \lhd Ult_k((N,\Phi ),E^N_\nu ))$, and if $\rho _k(G)$ is measurable, then $\lnot ((Ult_k(G,D), \Lambda ^{\prime }) \lhd Ult((N,\Phi ),E^N_\nu ))$. Here $D$ is the order zero measure on $\rho _k(G)$ and $\Lambda ^{\prime }$ is the tail of $\Lambda $. We will use the ``prime'' notation as above for similar objects defined later. Claim 1. Let $\langle (N,\Phi ), (G,\Lambda ), \sigma , \nu \rangle $ be a problematic tuple, and $k = k(G)$; then $\rho (G) \le \nu < \rho _k(G) \le \rho _{k}(N)$. Proof. $\rho (G) \le \nu $ because $G$ is $\nu $-sound. $\rho _k(G) \le \rho _k(N)$ because $\sigma $ is nearly elementary. $\hfill \square $ Let $\bar{M}= \bar{\mathfrak {C}}_k(M)$ and $\bar{\Sigma }$ be the pullback of $\Sigma $ under the uncollapse map. Let $\bar{H},\bar{\Psi }$ be defined similarly; in fact, we will use the ``bar'' notation for similar objects defined later. $(\bar{M},\bar{\Sigma })=(M,\Sigma )$ if and only if $M$ has type $1A$; a similar statement holds for $H,\bar{H}$. We let $\bar{\pi }: \bar{H}\rightarrow \bar{M}$ be the natural map. Note that $\bar{\pi }$ is the completion of $\pi \upharpoonright H^k_0$ in the category of premice like $\bar{H},\bar{M}$. Here $H^k_0$ ($M^k_0$) is the reduct of $\bar{H}$ ($\bar{M}$) and is defined by $H^k_0 = (H||\rho _k(H), B^k(H))$, where $B^k(H) = \lbrace \langle \varphi , b \rangle \mid \varphi \textrm { is } \Sigma _1 \wedge b\in H||\rho _k(H) \wedge H^{k-1} \vDash \varphi [b, p_k(H)]\rbrace $. We must show that $\langle (\bar{M},\bar{\Sigma }),(\bar{H},\bar{\Psi }),\bar{\pi },\alpha _0\rangle $ is not problematic. Assume toward contradiction that it is, and that $(\bar{M},\bar{\Sigma })$ is minimal in the mouse order such that $(\bar{M},\bar{\Sigma })$ is the first term in some problematic tuple. We obtain a contradiction by comparing the phalanx $(\bar{M}, \bar{H}, \alpha _0)$ with $\bar{M}$, as usual. However, since we are comparing strategies, this must be done indirectly, by iterating both into some sufficiently strong background construction $\mathbb {C}$. It can happen that at some point,
the two sides agree with each other (but not with $\\mathbb {C}$ ).", "This leads to a problem in the argument that the end model on the phalanx side can't be above $\\bar{M}$ .", "The solution, employed in [11]), is to modify how the phalanx is iterated, moving the whole phalanx (including its exchange ordinal) up at certain stages.", "Our main new problem here is that because of the restricted elementarity of our maps, if we move up naively, the new phalanx and associated embedding may not be problematic.", "This forces us to drop to a new problematic phalanx on occasion.", "Claim 2.", "Let $\\langle (N,\\Phi ), (G,\\Lambda ),\\sigma ,\\nu \\rangle $ be a problematic tuple, and $k = k(G)$ ; then we can decompose $\\sigma \\upharpoonright G^k$ as $\\sigma \\upharpoonright G^k = \\bigcup _{\\eta < \\rho _k(G)} \\sigma ^\\eta ,$ where each $\\sigma ^\\eta $ belongs to $N^k$ .", "Proof.", "Assume first $k=0$ , and that $\\hat{o}{(G)}$ is a limit ordinal.", "For $\\eta < \\hat{o}(G)$ , let $G^\\eta $ be $G||\\eta $ , expanded by $I^\\eta $ , where $I^\\eta $ is the appropriate fragment of $\\dot{F}^G$ if $G$ is extender active, and the appropriate initial segment of $\\dot{B}^G$ if $G$ is branch active.", "Let $N^\\eta $ be $N||\\sigma (\\eta )$ , expanded by $\\sigma (I^\\eta )$ .", "Let $s = p_1(G)\\backslash \\nu $ and $\\sigma ^\\eta $ be the fragment of $\\sigma $ given by ${\\rm dom}(\\sigma ^\\eta ) = h^1_{G^\\eta }``(\\nu \\cup s),$ and $\\sigma ^\\eta (h^1_{G^\\eta }(\\delta , s))= h^1_{N^{\\sigma (\\eta )}}(\\delta ,\\sigma (s)),$ for $\\delta < \\nu $ .", "We have that $\\sigma ^\\eta \\in N$ , and $\\sigma = \\bigcup _{\\eta < \\hat{o}(G)} \\sigma ^\\eta $ .", "If $\\hat{o}{(G)}$ is a successor ordinal, we can ramify using the $S$ -hierarchy.", "The case $k>0$ is similar.", "We have $G^k = (G||\\rho _k(G),A)$ where $A= \\mbox{Th}_1^{G^{k-1}}(\\rho _k(G) \\cup \\lbrace \\rho _k(G),\\eta _{k}(G),p_{k}(G)\\rbrace ).$ For $\\eta \\le \\rho _k(G)$ , let $G^{\\eta } = (G||\\eta , A \\cap G||\\eta ).$ Let $s = p_1({G^k}) \\setminus \\nu $ , and let $h^1_{G^k}$ be the $\\Sigma _1$ Skolem function, so that $G^k = h^1_{G^k}``(\\nu \\cup s)$ .", "For $\\eta <\\rho _k(G)$ , ${\\rm dom}(\\sigma ^\\eta ) = h^1_{G^{\\eta }} `` (\\nu \\cup s),$ and for $\\gamma < \\nu $ in ${\\rm dom}(\\sigma ^\\eta )$ , $\\sigma ^\\eta (h^1_{G^{\\eta }}(\\gamma , s))= h^1_{N^k||\\sigma (\\eta )}(\\gamma ,\\sigma (s)).$ It is easy to see that this works.", "$\\hfill \\square $ We call $\\langle (\\sigma ^\\eta ,G^\\eta ) \\mid \\nu \\le \\eta < \\rho _k(G) \\rangle $ as above the natural decomposition of $\\sigma \\upharpoonright G^k$ .", "Using claim 2, we can move a problematic tuple $\\langle (N,\\Phi ), (G,\\Lambda ), \\sigma , \\nu \\rangle $ of up via an iteration map that is continuous at $\\rho _k(G)$ .", "When the iteration map is discontinuous at $\\rho _k(G)$ , we may have to drop.", "Let $\\Phi = \\langle (N,\\Phi ), (G,\\Lambda ), \\sigma , \\nu \\rangle $ be a problematic tuple; then $\\Phi $ is extender-active iff $E_\\nu ^N \\ne \\emptyset $ .", "When we move up extender-active tuples, the new exchange ordinal is always the image of the old one, so the new tuple is still extender-active.", "Claim 3.", "Let $\\langle (N,\\Phi ),(G,\\Lambda ),\\sigma ,\\nu \\rangle $ be problematic, and suppose that $(N,\\Phi ) \\le ^* (M,\\Sigma )$ ; then there is no proper initial segment $(Q,\\Omega )$ of $(G,\\Lambda )$ such that $\\nu = \\rho (Q)$ and either (i) $E_\\nu ^N = \\emptyset $ , and $(Q,\\Omega )$ is 
not an initial segment of $(N,\Phi )$, and if $\rho _k(Q)$ is measurable, then $\lnot ((Ult_k(Q,D), \Omega ^{\prime }) \lhd (N,\Phi ))$, or (ii) $E_\nu ^N \ne \emptyset $, and $(Q,\Omega )$ is not a proper initial segment of ${\rm Ult}((N,\Phi ),E_\nu ^N)$, and if $\rho _k(G)$ is measurable, then $\lnot ((Ult_k(Q,D), \Omega ^{\prime }) \lhd (N,\Phi ))$. Proof. This follows from the minimality of $(M,\Sigma )$ in the mouse order. For if $(Q,\Omega )$ is a counterexample, then letting $(R,\Gamma ) \lhd (N,\Phi )$ be such that $R = \sigma (Q)$, we have that $(R,\Gamma ) <^* (M,\Sigma )$, and $\langle (R,\Gamma ), (Q,\Lambda _Q), \sigma \upharpoonright Q, \nu \rangle $ is problematic. $\hfill \square $ So under the hypotheses of Claim 3, $(N,\Phi )$ agrees with $(G,\Lambda )$ strictly below $\nu ^{+,G}$. We are ready now to enter the phalanx comparison argument of [11]. Let $N^*$ be a coarse $\Gamma $-Woodin mouse, where $\Sigma \in {\Delta _\Gamma }$ and $\Gamma $ is an inductive-like, scaled pointclass contained strictly in the Suslin co-Suslin sets. Let $(\vec{F}, \Sigma ^*, \delta ^*)$ satisfy the following. $N^*\vDash$ ``$\vec{F}$ is a coarsely coherent extender sequence''. $\vec{F}$ witnesses that $\delta ^*$ is Woodin in $N^*$. $\Sigma ^*$ is an $(\omega _1,\omega _1)$-$\vec{F}$-strategy for $N^*$ (so ultrapowers used in trees according to $\Sigma ^*$ are by extenders in $\vec{F}$ and its images) such that the restriction of $\Sigma ^*$ to stacks in $V_{\delta ^*}^{N^*}$ is in $N^*$. In fact, $N^*\vDash ``$I am uniquely $\vec{F}$-iterable for stacks in $V_{\delta ^*}$''. There is a Coll$(\omega ,\delta ^*)$-term $\tau $ and a universal $\Gamma $-set $U$ such that if $i:N^*\rightarrow N$ is via $\Sigma ^*$ and $g\subseteq $ Coll$(\omega ,i(\delta ^*))$ is $N$-generic, then $i(\tau )_g = U\cap N[g]$. Let $\mathbb {C}$ be the maximal pfs $\vec{F}$-construction of $N^*$, with associated models $M_{\nu ,l}=M^{\mathbb {C}}_{\nu ,l}$ and induced strategies $\Omega _{\nu ,l}=\Omega ^\mathbb {C}_{\nu ,l}$ for $(\nu ,l) < (\delta ^*,0)$. By [11], there is a unique pair $(\eta _0,k_0)$ such that $(\bar{M},\bar{\Sigma })$ iterates to $(M_{\eta _0,k_0},\Omega _{\eta _0,k_0})$, and for all $(\nu ,l) <_{\mathrm {lex}}(\eta _0,k_0)$, $(\bar{M},\bar{\Sigma })$ iterates strictly past $(M_{\nu ,l},\Omega _{\nu ,l})$. Let $\mathcal {U}_{\nu ,l}$ be the unique $\lambda $-separated tree on $\bar{M}$ witnessing that $(\bar{M},\bar{\Sigma })$ iterates past $(M_{\nu ,l},\Omega _{\nu ,l})$. We can work in $N^*$ from now on, and interpret these statements there. But in fact, the strategies $\Omega _{\nu ,l}$ are induced by $\Sigma ^*$ in a way that guarantees they extend to $\Sigma ^*$-induced strategies $\Omega _{\nu ,l}^+$ defined on all of HC. $\mathcal {U}_{\nu ,l}$ iterates $(M,\Sigma )$ past $(M_{\nu ,l},\Omega _{\nu ,l}^+)$ in $V$. We define trees ${\mathcal {S}}_{\nu ,l}$ on $(\bar{M},\bar{H},\alpha _0)$ for certain $(\nu ,l)\le (\eta _0,k_0)$. Fix $(\nu ,l)\le (\eta _0,k_0)$ for now, and assume ${\mathcal {S}}_{\nu ^{\prime },l^{\prime }}$ is defined whenever $(\nu ^{\prime },l^{\prime })<(\nu ,l)$. Let ${\mathcal {U}}= {\mathcal {U}}_{\nu ,l}$, and for $\tau < {\rm lh}({\mathcal {U}})$, let $\Sigma _\tau ^{\mathcal {U}}= \bar{\Sigma }_{{\mathcal {U}}\upharpoonright 
(\tau +1)}$ be the tail strategy for ${\mathcal {M}}_\tau ^{\mathcal {U}}$ induced by $\Sigma $. We proceed to define $\mathcal {S} = {\mathcal {S}}_{\nu ,l}$, by comparing the phalanx $(\bar{M}, \bar{H},\alpha _0)$ with $M_{\nu ,l}$. As we define $\mathcal {S}$, we lift $\mathcal {S}$ to a padded tree $\mathcal {T}$ on $\bar{M}$, by copying. Let us write $\Sigma _\theta ^{\mathcal {T}}=\bar{\Sigma }_{{\mathcal {T}}\upharpoonright (\theta +1)}$ for the tail strategy for ${\mathcal {M}}_\theta ^{\mathcal {T}}$ induced by $\Sigma $. For $\theta < {\rm lh}(\mathcal {S})$, we will have a copy map $\pi _\theta : {\mathcal {M}}^{\mathcal {S}}_\theta \rightarrow {\mathcal {M}}^{\mathcal {T}}_\theta $. The map $\pi _\theta $ is nearly elementary. We attach the complete strategy $\Lambda _\theta = (\Sigma _\theta ^{\mathcal {T}})^{\pi _\theta }$ to ${\mathcal {M}}^{\mathcal {S}}_\theta $. We also define a non-decreasing sequence of ordinals $\lambda _\theta = \lambda ^{\mathcal {S}}_\theta $ that measure agreement between models of ${\mathcal {S}}$, and tell us which model we should apply the next extender to. The following claim will be useful in pushing up problematic tuples. Claim 4. Suppose $\xi <_S \theta $ and $(\xi ,\theta ]_S$ does not drop; then $\Lambda _\xi = \Lambda _\theta ^{i_{\xi ,\theta }^{\mathcal {S}}}$. Proof. Because $\Sigma $ is pullback consistent, we have $\Sigma _\xi ^{\mathcal {T}}= (\Sigma _\theta ^{\mathcal {T}})^{i_{\xi ,\theta }^{\mathcal {T}}}$. But then
\begin{align*}
\Lambda _\xi & = (\Sigma _\xi ^{\mathcal {T}})^{\pi _\xi } \\
& = (\Sigma _\theta ^{\mathcal {T}})^{i_{\xi ,\theta }^{\mathcal {T}}\circ \pi _\xi }\\
& = (\Sigma _\theta ^{\mathcal {T}})^{\pi _\theta \circ i_{\xi ,\theta }^{\mathcal {S}}}\\
& = \Lambda _\theta ^{i_{\xi ,\theta }^{\mathcal {S}}},
\end{align*}
as desired. $\hfill \square $ We start with ${\mathcal {M}}^{\mathcal {S}}_0 = \bar{M}$, ${\mathcal {M}}_1^{\mathcal {S}}= \bar{H}$, $\lambda _0 = \alpha _0$, and ${\mathcal {M}}^{\mathcal {T}}_0 = {\mathcal {M}}^{\mathcal {T}}_1 = \bar{M}$, $\pi _0 = id$, $\pi _1 = \bar{\pi }$, and $\Lambda _0 = \Sigma $, $\Lambda _1 = \Sigma ^{\pi _1}$. We say that $0, 1$ are distinct roots of ${\mathcal {S}}$. We say that 0 is unstable, and 1 is stable. (This is different from the notion of ``stable pfs premice''; we will let context dictate the meaning of the term.)
As we proceed, we shall declare additional nodes $\\theta $ of ${\\mathcal {S}}$ to be unstable.", "We do so because $({\\mathcal {M}}_\\theta ^{\\mathcal {S}},\\Lambda _\\theta ) =({\\mathcal {M}}_\\gamma ^{\\mathcal {U}}, \\Sigma _\\gamma ^{\\mathcal {U}})$ for some $\\gamma $In the first version of [11] the external strategy agreement was not required for $\\theta $ to be declared unstable, but it is important to do so here., and when we do so, we shall immediately define ${\\mathcal {M}}_{\\theta +1}^{\\mathcal {S}}$ , as well as $\\sigma _\\theta $ and $\\alpha _\\theta $ such that $ \\Phi _\\theta =_{\\mbox{df}} \\langle ({\\mathcal {M}}_\\theta ^{\\mathcal {S}}, \\Lambda _\\theta ),({\\mathcal {M}}_{\\theta +1}^{\\mathcal {S}},\\Lambda _{\\theta +1}),\\sigma _\\theta ,\\alpha _\\theta \\rangle $ is a problematic tuple.", "Here $\\Lambda _{\\theta +1} =\\Lambda _\\theta ^{\\sigma _\\theta }$ .", "In this case, $[0,\\theta ]_S$ does not drop, and all $\\xi \\le _S \\theta $ are also unstable.", "We regard $\\theta +1$ as a new root of ${\\mathcal {S}}$ .", "This is the only way new roots are constructed.", "Let us also write $ \\Phi _\\theta ^- =_{\\mbox{df}} \\langle {\\mathcal {M}}_\\theta ^{\\mathcal {S}},{\\mathcal {M}}_{\\theta +1}^{\\mathcal {S}},\\sigma _\\theta ,\\alpha _\\theta \\rangle $ for the part of $\\Phi _\\theta $ that is definable over ${\\mathcal {M}}_\\theta ^{\\mathcal {S}}$ .", "We say $\\Phi _\\theta ^-$ is problematic iff (4) holds for ${\\mathcal {M}}_{\\theta +1}^{\\mathcal {S}}, {\\mathcal {M}}_\\theta ^{\\mathcal {S}}$ .", "If $\\theta $ is unstable, then we define $\\beta _\\theta = (\\alpha _\\theta ^+)^{{\\mathcal {M}}_{\\theta +1}^{{\\mathcal {S}}}}.$ If $\\xi <_S, \\theta $ , then we shall have $ \\beta _\\theta \\le i_{\\xi ,\\theta }^{{\\mathcal {S}}}(\\beta _\\xi )$ , and $\\beta _\\theta = i_{\\xi ,\\theta }^{{\\mathcal {S}}}(\\beta _\\xi ) \\Rightarrow \\Phi _\\theta = i_{\\xi ,\\theta }^{{\\mathcal {S}}}(\\Phi _\\xi ),$ in the appropriate sense.", "In this connection: it will turn out that $i_{\\xi ,\\theta }(\\beta _\\xi ) = \\beta _\\theta $ implies $i_{\\xi ,\\theta }^{\\mathcal {S}}$ is continuous at $\\rho _k({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}})$ , where $k = k({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}})$ .", "So we can set $i_{\\xi ,\\theta }^{\\mathcal {S}}(\\sigma _\\xi ) = \\text{ upward extension of }\\bigcup _{\\eta < \\rho _k({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}})} i_{\\xi ,\\theta }^{\\mathcal {S}}(\\sigma _\\xi ^\\eta ),$ where $\\langle \\sigma _\\xi ^\\eta \\mid \\eta < \\rho _k({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}})\\rangle $ is the natural decomposition of $\\sigma _\\xi $ .", "This enables us to make sense of $i_{\\xi ,\\theta }^{\\mathcal {S}}(\\Phi _\\xi ^-)$ .", "The construction of ${\\mathcal {S}}$ takes place in rounds in which we either add one stable $\\theta $ , or one unstable $\\theta $ and its stable successor $\\theta +1$ .", "Thus the current last model is always stable, and all extenders used in ${\\mathcal {S}}$ are plus extenders taken from stable models.", "If $\\gamma $ is stable, then $\\lambda _\\gamma = \\lambda (E_\\gamma ^{{\\mathcal {S}}})$ .", "In sum, we are maintaining by induction that the last node $\\gamma $ of our current ${\\mathcal {S}}$ is stable, and Induction hypotheses $(\\dagger )_\\gamma $ .", "If $\\theta < \\gamma $ and $\\theta $ is unstable, then (1) $0\\le _{\\mathcal {S}}\\theta $ and $[0,\\theta ]_{\\mathcal {S}}$ does not drop (in model or degree), and every $\\xi 
\\le _S \\theta $ is unstable, (2) there is a $\\gamma $ such that $({\\mathcal {M}}_\\theta ^{\\mathcal {S}}, \\Lambda _\\theta ) =({\\mathcal {M}}_\\gamma ^{\\mathcal {U}}, \\Sigma _\\gamma ^{\\mathcal {U}})$ , (3) $\\Phi _\\theta = \\langle ({\\mathcal {M}}_\\theta ^{\\mathcal {S}},\\Lambda _\\theta ), (M_{\\theta +1}^{\\mathcal {S}}, \\Lambda _{\\theta +1}),\\sigma _\\theta , \\alpha _\\theta \\rangle $ is a problematic tuple, (4) $\\Phi _\\theta $ is extender-active iff $\\Phi _0$ is extender-active, and if $\\Phi _\\theta $ is extender-active, then $i_{0,\\theta }^{\\mathcal {S}}(\\alpha _0)= \\alpha _\\theta $ , (5) if $\\xi <_S \\theta $ , then $\\alpha _\\theta \\le i^{\\mathcal {S}}_{\\xi ,\\theta }(\\alpha _\\xi )$ and $\\beta _\\theta \\le i^{{\\mathcal {S}}}_{\\xi ,\\theta }(\\beta _\\xi )$ , 6) if $\\langle \\alpha _\\theta , \\beta _\\theta \\rangle = i_{\\xi ,\\theta }^{{\\mathcal {S}}}(\\langle \\alpha _\\xi ,\\beta _\\xi \\rangle )$ , then (a) $\\Phi _\\theta ^- = i^{{\\mathcal {S}}}_{\\xi ,\\theta }(\\Phi _\\xi ^-)$ , and (b) $i_{\\xi ,\\theta }^{\\mathcal {S}}$ is continuous at $\\rho _k({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}})$ , for $k = k({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}})$ , (7) ${\\mathcal {M}}_{\\theta +1}^{\\mathcal {T}}= {\\mathcal {M}}_\\theta ^{\\mathcal {T}}$ , and $\\pi _{\\theta +1} = \\pi _\\theta \\circ \\sigma _\\theta $ .", "Setting $\\sigma _0 = \\bar{\\pi }$ , we have $(\\dagger )_1$ .", "For a node $\\gamma $ of ${\\mathcal {S}}$ , we write ${\\mathcal {S}}$ -pred$(\\gamma )$ for the immediate $\\le _{\\mathcal {S}}$ -predecessor of ${\\mathcal {S}}$ .", "For $\\gamma $ a node in ${\\mathcal {S}}$ , we set st$(\\gamma ) = $ the least stable $\\theta $ such that $\\theta \\le _{\\mathcal {S}}\\gamma $ , and rt$(\\gamma )={\\left\\lbrace \\begin{array}{ll}\\text{${\\mathcal {S}}$-pred(st$(\\gamma )$)} & : \\ \\text{if ${\\mathcal {S}}$-pred(st$(\\gamma )$) exists} \\\\\\text{st$(\\gamma )$} & : \\ \\text{otherwise}.\\end{array}\\right.", "}$ The construction of ${\\mathcal {S}}$ ends when we reach a stable $\\theta $ such that $(M_{\\nu ,l},\\Omega _{\\nu ,l}) \\lhd ({\\mathcal {M}}^{\\mathcal {S}}_\\theta , \\Lambda _\\theta )$ , or $({\\mathcal {M}}^{\\mathcal {S}}_\\theta , \\Lambda _\\theta ) \\unlhd (M_{\\nu ,l}, \\Omega _{\\nu ,l})$ , and $[$ rt$(\\theta ),\\theta ]_{\\mathcal {S}}$ does not drop in model or degree.In [11], there is another way the comparison can end: we reach a stable $\\theta $ such that for some $\\xi $ , ${\\mathcal {M}}^{\\mathcal {S}}_\\theta ={\\mathcal {M}}^{\\mathcal {U}}_\\xi $ , and neither $[$ rt$(\\theta ),\\theta ]_{\\mathcal {S}}$ nor $[0,\\xi ]_{\\mathcal {U}}$ drops in model or degree.", "Moreover, letting ${\\mathcal { Q}}$ be the result of removing the last extender predicate of ${\\mathcal {M}}^{\\mathcal {S}}_\\theta $ , then ${\\mathcal { Q}}\\unlhd M_{\\nu ,l}$ .", "This is not necessary in our situation, and would cause problems for the strategy-condensation part of our proof.", "If case (I) occurs, then we go on to define ${\\mathcal {S}}_{\\nu ,l+1}$ .", "If case (II) occurs, we stop the construction.", "We now describe how to extend ${\\mathcal {S}}$ one more step.", "First we assume ${\\mathcal {S}}$ has successor length $\\gamma +1$ and let ${\\mathcal {M}}^{\\mathcal {S}}_\\gamma $ be the current last model, so that $\\gamma $ is stable.", "Suppose$(\\dagger )_\\gamma $ holds.", "Suppose (I) and (II) above do not hold for $\\gamma $ , so that we have a least disagreement between ${\\mathcal 
{M}}^{\\mathcal {S}}_\\gamma $ and $M_{\\nu ,l}$ .", "By the proof of [11], the least disagreement involves only an extender $E$ on the sequence of ${\\mathcal {M}}^{\\mathcal {S}}_\\gamma $ .", "Letting $\\tau = {\\rm lh}(E)$ , we have $M_{\\nu ,l}|(\\tau ,0) = {\\mathcal {M}}^{\\mathcal {S}}_\\gamma |(\\tau ,-1)$ ,Recall ${\\mathcal {M}}^{\\mathcal {S}}_\\gamma |(\\tau ,-1)$ is the structure obtained from ${\\mathcal {M}}^{\\mathcal {S}}_\\gamma |\\tau $ by removing $E$ .", "and ${(\\Omega _{\\nu ,l})}_{(\\tau ,0)}=(\\Lambda _\\gamma )_{(\\tau ,-1)}$ .", "Set $\\lambda ^{\\mathcal {S}}_\\gamma = \\lambda _E$ and $\\xi $ be least such that crt$(E)<\\lambda ^{\\mathcal {S}}_\\xi $ .", "We let ${\\mathcal {S}}$ -pred$(\\gamma +1)=\\xi $ .", "Let $(\\beta ,k)$ be lex least such that either $\\rho ({\\mathcal {M}}^{\\mathcal {S}}_\\xi |(\\beta ,k))\\le $ crt$(E)$ or $(\\beta ,k)=(\\hat{o}({\\mathcal {M}}^{\\mathcal {S}}_\\xi ),k({\\mathcal {M}}^{\\mathcal {S}}_\\xi ))$ .", "Set $R = {\\mathcal {M}}^{\\mathcal {S}}_\\xi |(\\beta ,k)$ , $\\bar{R} = \\bar{\\mathfrak {C}}_k(R)$ and set $M^{\\mathcal {S}}_{\\gamma +1}= \\begin{cases*}\\text{Ult}(\\bar{R},E) & \\text{ if } \\text{ $R$ is of type $1B$ and crt$(E) = \\eta _k^R$} \\\\\\text{Ult}(R,E) & \\text{ otherwise.", "}\\end{cases*}$ and let $\\hat{i}^{\\mathcal {S}}_{\\xi ,\\gamma +1}$ be the canonical embedding.", "We note that in the first case, $M^{\\mathcal {S}}_{\\gamma +1}$ is strongly stable and is of type $1A$ ; in the second case, $M^{\\mathcal {S}}_{\\gamma +1}$ has type 1, is stable but may not be strongly stable.", "Furthermore, in second case, the ultrapower map is continuous at $\\rho _k(R)=\\rho _k(\\bar{R})$ .", "Let ${\\mathcal {M}}^{\\mathcal {T}}_{\\gamma +1}=$ Ult$({\\mathcal {M}}^{\\mathcal {T}}_\\xi |(\\pi _\\xi (\\beta ),k),\\pi _\\gamma (E))$ , if ${\\mathcal {M}}^{\\mathcal {S}}_{\\gamma +1}$ is defined as in the second case or ${\\mathcal {M}}^{\\mathcal {T}}_{\\gamma +1}=$ Ult$(\\bar{\\mathfrak {C}}_k({\\mathcal {M}}^{\\mathcal {T}}_\\xi |(\\pi _\\xi (\\beta ),k)),\\pi _\\gamma (E))$ , if ${\\mathcal {M}}^{\\mathcal {S}}_{\\gamma +1}$ is defined as in the first case.", "Let $\\pi _{\\gamma +1}$ be given by the Shift Lemma .", "This determines $\\Lambda _{\\gamma +1}$ .", "If $\\xi $ is stable or $(\\beta ,k)< (\\hat{o}({\\mathcal {M}}^{\\mathcal {S}}_\\xi ),k({\\mathcal {M}}^{\\mathcal {S}}_\\xi ))$ , then we declare $\\gamma +1$ to be stable.", "$(\\dagger )_{\\gamma +1}$ follows vacuously from $(\\dagger )_\\gamma $ .It is possible that $\\xi $ is unstable, $\\lambda _\\xi = \\alpha _\\xi $ , ${\\mathcal {S}}$ -pred$(\\gamma +1)=\\xi $ , and crt$(E^{\\mathcal {S}}_\\gamma )=\\lambda _F$ where $F$ is the last extender of ${\\mathcal {M}}^{\\mathcal {S}}_\\xi |\\alpha _\\xi $ .", "In this case, $(\\beta ,k) = ({\\rm lh}(F),0)$ .", "The problem then is that ${\\mathcal {M}}^{\\mathcal {S}}_{\\gamma +1}$ is not an lpm, because its last extender $i_{\\xi ,\\gamma +1}(F)$ has a missing whole initial segment, namely $F$ .", "Schindler and Zeman found a way to deal with this anomaly in [6].", "Their method works in the hod mouse context as well.", "Here we shall not go into the details of this case.", "The anomaly cannot occur when $\\xi $ is stable, because then $\\lambda _\\xi = \\lambda (E_\\xi ^{{\\mathcal {S}}})$ is inaccessible in ${\\mathcal {M}}_\\gamma ^{{\\mathcal {S}}}$ .", "If $\\xi $ is unstable, $(\\beta ,k)= (\\hat{o}({\\mathcal {M}}^{\\mathcal {S}}_\\xi ),k({\\mathcal {M}}^{\\mathcal {S}}_\\xi ))$ , and 
$({\\mathcal {M}}^{\\mathcal {S}}_{\\gamma +1},\\Lambda _\\gamma +1)$ is not a model of ${\\mathcal {U}}$ , then again we declare $\\gamma +1$ stable.", "Again, $(\\dagger )_{\\gamma +1}$ follows vacuously from $(\\dagger )_\\gamma $ .", "Finally, suppose $\\xi $ is unstable, $(\\beta ,k)= (\\hat{o}({\\mathcal {M}}^{\\mathcal {S}}_\\xi ),k({\\mathcal {M}}^{\\mathcal {S}}_\\xi ))$ , and for some $\\tau $ , $({\\mathcal {M}}^{\\mathcal {S}}_{\\gamma +1},\\Lambda _{\\gamma +1}) = ({\\mathcal {M}}_\\tau ^{\\mathcal {U}}, \\Sigma _\\tau ^{\\mathcal {U}}).$ We then we declare $\\gamma +1$ to be unstable and $\\gamma +2$ stable.", "We must define the problematic tuple needed for $(\\dagger )_{\\gamma +2}$ .", "Let $i = i_{\\xi ,\\gamma +1}^{\\mathcal {S}}$ , and $\\langle (N,\\Psi ),(G,\\Phi ),\\sigma ,\\alpha \\rangle = \\langle ({\\mathcal {M}}_\\xi ^{\\mathcal {S}},\\Lambda _\\xi ),({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}},\\Lambda _{\\xi +1}), \\sigma _\\xi ,\\alpha _\\xi \\rangle .$ We have that $\\langle (N,\\Psi ),(G,\\Phi ),\\sigma ,\\alpha \\rangle $ is problematic.", "Let $k =k(G)$ .", "(So $k = k(N) = k(M)$ .)", "Case 1.", "$i$ is continuous at $\\rho _k(G)$ .", "In this case, we simply let $\\langle {\\mathcal {M}}_{\\gamma +2}^{\\mathcal {S}}, \\alpha _{\\gamma +1} \\rangle =\\langle i(G), i(\\alpha ) \\rangle .$ We must define $\\sigma _{\\gamma +1}$ .", "Note that by our case hypothesis, $i(G^k) = i(G)^k.$ Let $\\langle \\sigma ^\\eta \\mid \\eta < \\rho _k(G) \\rangle $ be the natural decomposition of $\\sigma \\upharpoonright G^k$ , and set $i(\\sigma \\upharpoonright G^k) = \\bigcup _{\\eta < \\rho _k(G)}i(\\sigma ^\\eta ).$ Using the continuity of $i$ at $\\rho _k(G)$ , it is easy to see that $i(\\sigma \\upharpoonright G^k)$ is $\\Sigma _0$ -elementary and cardinal preserving from $i(G)^k$ to $i(N)^k$ .", "We set $\\sigma _{\\gamma +1} = \\text{ completionof $i(\\sigma \\upharpoonright G^k)$ via upward extension of embeddings,}$ and $\\Lambda _{\\gamma +2} = \\Lambda _{\\gamma +1}^{\\sigma _{\\gamma +1}}.$ This defines $\\Phi _{\\gamma +1}$ .", "Abusing notation a bit, let us write $\\Phi _{\\gamma +1}^- = \\langle i(N),i(G),i(\\sigma ),i(\\alpha ) \\rangle .$ We must see that $\\Phi _{\\gamma +1}$ is problematic.", "First, it satisfies the hypotheses of the condensation theorem .", "For $G$ is $\\alpha $ -sound, so $i(G)$ is $i(\\alpha )$ -sound.", "By downward extension of embeddings (cf.", "[5]), $i(\\sigma \\upharpoonright G^k)$ extends to a unique embedding from some $K$ into $i(N)$ , and it is easy to see that $K = i(G)$ , because $i(G)$ is $k$ -sound, and that the embedding in question is what we have called $i(\\sigma )$ .", "$i(\\sigma )$ is nearly elementary: it maps parameters correctly, $i(\\sigma )\\upharpoonright i(G)^k$ is $\\Sigma _0$ -elementary and cardinal preserving by construction.", "Finally, $\\mathrm {crt}(i(\\sigma )) = i(\\alpha )$ , because for all sufficiently large $\\eta < \\rho _k(G)$ , $\\alpha +1 \\subseteq {\\rm dom}(\\sigma ^\\eta )$ and $\\mathrm {crt}(\\sigma ^\\eta )=\\alpha $ , so $\\mathrm {crt}(i(\\sigma ^\\eta ))=i(\\alpha )$ .", "So we must see that one of the conclusions of fails.", "We show that the conclusion that failed for $\\Phi _\\xi $ fails for $\\Phi _{\\gamma +1}$ .", "Suppose first that $\\Phi _\\xi ^-$ is problematic.", "We break into cases.", "If $G$ is strongly sound and $E_\\alpha ^N = \\emptyset $ , then $\\lnot G \\lhd N$ .", "But then $i(G)$ is strongly sound, $E_{i(\\alpha )}^{i(N)} = \\emptyset $ , and $\\lnot 
i(G) \\lhd i(N)$ , so $\\Phi _{\\gamma +1}^-$ is problematic.", "If $G$ is strongly sound and $E_\\alpha ^N \\ne \\emptyset $ , then $\\lnot G \\lhd {\\rm Ult}(N,E_\\alpha ^N)$ .", "But then $i(G)$ is strongly sound, $E_{i(\\alpha )}^{i(N)} \\ne \\emptyset $ , and $\\lnot i(G) \\lhd {\\rm Ult}(i(N),E_{i(\\alpha )}^{i(N)})$ , so $\\Phi _{\\gamma +1}^-$ is problematic.", "The case where $\\rho _k(G)$ is measurable in $G$ , i.e.", "$\\mathfrak {C}_k(G) \\ne \\bar{\\mathfrak {C}}_k(G)$ or the case that $G$ is unsound are handled similarly.", "The $\\Pi _1$ fact that $G$ is not an initial segment of an appropriate ultrapower of $N$ is preserved by $i$ , so we are done.", "So we may assume $\\Phi _\\xi ^-$ is not problematic, and hence also $\\Phi _{\\gamma +1}^-$ is not problematic.", "Suppose first $G \\lhd N$ , so $i(G) \\lhd i(N)$ .", "We must show $\\Lambda _{\\gamma +1}^{i(\\sigma )} \\ne (\\Lambda _{\\gamma +1})_{i(G)}$ , so suppose otherwise.", "Using claim 4 we get $\\Lambda _\\xi = \\Lambda _{\\gamma +1}^i$ , so $\\Lambda _\\xi ^\\sigma & = (\\Lambda _{\\gamma +1})^{i \\circ \\sigma }\\\\& = (\\Lambda _{\\gamma +1})^{i(\\sigma ) \\circ i}\\\\& =(\\Lambda _{\\gamma +1}^{i(\\sigma )})^i \\\\& = ((\\Lambda _{\\gamma +1})_{i(G)})^i\\\\& = (\\Lambda _\\xi )_G,\\\\$ a contradiction.", "Equation 2 holds because $i \\circ \\sigma =i(\\sigma ) \\circ i$ , and equation 5 comes from equation 4 using claim 4 again.", "Thus $\\Phi _\\xi $ is not problematic, contradiction.", "The case $\\rho _k(G)$ is measurable in $G$ is proved similarly.", "Let $G^* = \\textrm {Ult}(G, D)$ where $D$ is the order zero measure on $\\rho _k(G)$ , $i_D: G\\rightarrow G^*$ be the ultrapower map, and $\\Lambda ^*$ the tail strategy of $G^*$ induced by $i_D$ .", "Suppose $G^*\\lhd N$ and so $i(G^*)\\lhd i(N)$ .", "Let $\\Psi ^*$ be the tail strategy of $i(G^*)$ induced by the ultrapower map $i_{i(D)}: i(G) \\rightarrow i(G^*)$ .", "Suppose $\\Psi ^* = (\\Lambda _{\\gamma +1})_{i(G^*)} \\lhd \\Lambda _{\\gamma +1}$ .", "We have $\\Lambda ^* & = (\\Psi ^*)^i\\\\& \\lhd \\Lambda _{\\gamma +1}^i\\\\& =\\Lambda _\\xi .", "\\\\$ The first equality above follows from the fact that $i\\circ i_D = i\\circ i_{i(D)}$ and $\\Lambda ^i_{i(G)} = \\Lambda _{G}$ ; here $\\Lambda ^i_{i(G)}=(\\Lambda ^{i(\\sigma )}_{\\gamma +1})^i$ and $\\Lambda _G = \\Lambda _\\xi ^\\sigma $ .", "This is a contradiction.", "A similar argument shows that $\\Phi _{\\gamma +1}$ is “strategy problematic” in the other case, when $G$ is an ultrapower away from $N$ .", "Thus $\\langle {\\mathcal {M}}_{\\gamma +1}^{\\mathcal {S}}, {\\mathcal {M}}_{\\gamma +2}^{\\mathcal {S}},\\sigma _{\\gamma +1},\\alpha _{\\gamma +1} \\rangle $ is problematic.", "Setting $ {\\mathcal {M}}_{\\gamma +2}^{\\mathcal {T}}= {\\mathcal {M}}_{\\gamma +1}^{\\mathcal {T}}\\text{ and } \\pi _{\\gamma +2} =\\pi _{\\gamma +1} \\circ \\sigma _{\\gamma +1},$ the rest of $(\\dagger )_{\\gamma +2}$ is clear.", "Case 2.", "$i$ is discontinuous at $\\rho _k(G)$ .", "Set $\\kappa = {\\rm crt}(E_\\gamma ^{\\mathcal {S}})$ .", "In case 2, $\\rho _k(G)$ has cofinality $\\kappa $ in $N$ .", "Since $\\rho (G) \\le \\alpha $ and $G$ is $\\alpha $ -sound, we have a $\\Sigma _1$ over $G^k$ map of $\\alpha $ onto $(\\alpha ^+)^G$ .", "Ramifying this map, we see that $(\\alpha ^+)^G$ also has cofinality $\\kappa $ in $N$ .", "Let $\\langle (\\sigma ^\\eta ,G^\\eta ) \\mid \\alpha \\le \\eta < \\rho _k(G) \\rangle $ be the natural decomposition of $\\sigma \\upharpoonright G^k$ .We encourage the reader to 
focus on the case $k=0$, which has the main ideas. Let $s = p(G) -\alpha $, so that ${\rm dom}(\sigma ^\eta ) = h^1_{G^\eta }``(\alpha \cup s)$. Let
\[
\bar{\tau } = \bigcup _{\eta < \rho _k(G)} i(\sigma ^\eta ).
\]
The domain of $\bar{\tau }$ is no longer all of $i(G)^k$; instead,
\[
{\rm dom}(\bar{\tau }) = \bigcup _{\eta < \rho _k(G)} h^1_{i(G^\eta )}``(i(\alpha ) \cup i(s)).
\]
Now, in the case $i(N)$ is of type 1, set
\[
J = {\rm Ult}(G,E_i \upharpoonright \sup i``\alpha ),
\]
and let $t \colon G \rightarrow J$ be the canonical embedding, and $v \colon J \rightarrow i(G)$ be the factor map. This is a $\Sigma _k$ ultrapower, by what may be a long extender. That is, $J$ is the decoding of $J^k = {\rm Ult}_0(G^k,E_i \upharpoonright \sup i``\alpha )$. Note that $t$ is continuous at $\alpha $, because $\alpha $ is regular in $G$ (because $\alpha = \mathrm {crt}(\sigma )$), and $\alpha < \rho _k(G)$. We claim that ${\rm ran}(v \upharpoonright J^k) \subseteq {\rm dom}(\bar{\tau })$. For let $f \in G^k$ and $b \subseteq \sup i``\alpha $ be finite, so that $t(f)(b)$ is a typical element of $J^k$, and $v(t(f)(b)) =i(f)(b)$. We can find $\eta < \rho _k(G)$ such that $f \in {\rm dom}(\sigma ^\eta )$ and $\eta > \alpha $, so that $i(f) \in {\rm dom}(i(\sigma ^\eta ))$ and $b \subseteq i(\eta )$. Since $f ``(\alpha )\subseteq {\rm dom}(\sigma ^\eta )$, $i(f)``i(\alpha ) \subseteq {\rm dom}(i(\sigma ^\eta ))$, so $i(f)(b)\in {\rm dom}(\bar{\tau })$, as desired. Let $\tau $ be the extension of $\bar{\tau }$ given by: for $a \subseteq \sup i``\rho _k(G)$ finite,
\[
\tau (h^{k+1}_{i(G)}(a,p(i(G)))) = h^{k+1}_{i(N)}(\bar{\tau }(a), p(i(N))).
\]
It is easy to check that ${\rm ran}(v) \subseteq {\rm dom}(\tau )$. This gives us the diagram in Figure REF. Figure: Lift up of $(N,G,\sigma ,\alpha )$ in the case $i$ is discontinuous at $\rho _k(G)$. The map $\tau $ here is only partial on $i(G)$, but $\tau \circ v \colon J \rightarrow i(N)$ is total. Also, $i``G \subseteq {\rm dom}(\tau )$, so $\tau \circ i$ is total on $G$. For each $\eta < \rho _k(G)$ and $x \in {\rm dom}(\sigma ^\eta )$, $i \circ \sigma ^\eta (x) = i(\sigma ^\eta )(i(x))$, so $\tau \circ i$ agrees on $G^k$ with $i \circ \sigma $. Since both map $p_k(G)$ to $p_k(i(N))$, $\tau \circ i = i \circ \sigma $. Clearly $i \upharpoonright G = v \circ t$, so the diagram commutes. Since $E_i \upharpoonright \sup i``\alpha $ is a long extender, we need some care to show that $J$ is a premouse. The worry is that it could be a protomouse, in the case that $G$ is extender active and $k=0$. So suppose $k=0$ and $\mu = {\rm crt}(\dot{F}^G)$; it is enough to see that $t$ is continuous at $\mu ^{+,G}$. If not, we have $f \in G$ and $b \subseteq \sup i``\alpha $ finite such that $\sup i`` \mu ^{+,G} < t(f)(b) < i(\mu ^{+,G})$. We may assume ${\rm dom}(f) = \gamma ^{|b|}$, where $\gamma < \alpha $, and by Łoś, ${\rm ran}(f)$ is unbounded in $\mu ^{+,G}$. It follows that $\mu ^{+,G} < \alpha $. But $\mathrm {cof}(\mu ^{+,G}) = \mathrm {cof}(\hat{o}(G)) =\kappa $ in $N$, so $\mu ^{+,G}$ is not a cardinal in $N$, so $\mu ^{+,G} < \alpha $ is ruled out by $\sigma \upharpoonright \alpha $ being the identity. Thus $J$ is a premouse. We claim that $\tau \circ v$ is nearly elementary. First, $\tau $ is a partial $\Sigma _0$ map from $i(G)^k$ to
$i(N)^k$ , so $\tau \circ v \upharpoonright J^k$ is $\Sigma _0$ from $J^k$ to $i(N)^k$ .", "The diagram shows that $\tau \circ v(w_k(J)) = \tau \circ v \circ t (w_k(G)) =i \circ \sigma (w_k(G)) = w_k(i(N))$ .", "For $\eta < \rho _k(G)$ , we have that $\sigma ^\eta $ is the identity on $\alpha \cap \eta $ , so $\sup i``\alpha = \sup t``\alpha \le \mathrm {crt}(\tau \circ v) .$ But $\alpha < \sigma (\alpha )$ , so $i(\alpha )< i \circ \sigma (\alpha )= \tau \circ v \circ t (\alpha )$ .", "Also, $t(\alpha ) \le i(\alpha )$ , so $t(\alpha )< \tau \circ v(t(\alpha ))$ .", "Thus $\mathrm {crt}(\tau \circ v) \le t(\alpha )$ , and since $t(\alpha ) = \sup t``\alpha $ , we get $\mathrm {crt}(\tau \circ v) = \sup i``\alpha = t(\alpha ).$ We set ${\mathcal {M}}_{\gamma +1}^{\mathcal {S}}&= i(N),\\{\mathcal {M}}_{\gamma +2}^{\mathcal {S}}&= J,\\\sigma _{\gamma +1} &= \tau \circ v, \text{ and}\\\alpha _{\gamma +1} &= {\rm crt}(\tau \circ v).$ In the case $i(N)$ is of type 2 (this means $N$ is of type $1B$ , $G$ is also of type $1B$ , and crt$(E) = \eta ^N_k = \eta ^G_k$ ), we set $\bar{G} =\bar{\mathfrak {C}}_k(G),\bar{N} =\bar{\mathfrak {C}}_k(N) $ and define $\bar{J}$ from $\bar{G}$ the same way $G$ defines $J$ .", "We also define the associated maps $\bar{t}: \bar{G} \rightarrow \bar{J}$ and $\bar{v}: \bar{J}\rightarrow i(\bar{G})$ in the obvious way.", "We then let ${\mathcal {M}}_{\gamma +1}^{\mathcal {S}}&= i(\bar{N}),\\{\mathcal {M}}_{\gamma +2}^{\mathcal {S}}&= \bar{J},\\\sigma _{\gamma +1} &= \bar{\tau } \circ \bar{v}, \text{ and}\\\alpha _{\gamma +1} &= {\rm crt}(\bar{\tau } \circ \bar{v}).$ It is easy to see that $\bar{J}, i(\bar{N})$ are both strongly stable, type 1 pfs premice.", "We need to see that $\Phi _{\gamma +1}$ is problematic.", "Claim 5.", "(a) If $\Phi _\xi ^-$ is problematic, then $\Phi _{\gamma +1}^-$ is problematic.", "(b) $\Phi _\xi $ is extender-active iff $\Phi _{\gamma +1}$ is extender-active; moreover, if $\Phi _\xi $ is extender-active, then $i_{\xi ,\gamma +1}^{\mathcal {S}}(\alpha _\xi )=\alpha _{\gamma +1}$ .", "Proof.", "We write $\Phi _{\gamma +1}^- = \langle i(N),J,\tau \circ v, {\rm crt}(\tau \circ v) \rangle ,$ in the case $i(N)$ has type 1.", "The reader can easily check that the tuple obeys the hypotheses of .", "Clearly $J$ is $\Sigma _{k+1}$ generated by $\sup i``\alpha \cup t(s)$ , and ${\rm crt}(\tau \circ v) \ge \sup i``\alpha $ .", "$t$ preserves the solidity of $s$ , so $t(s) = p(J) - \sup i``\alpha $ .", "We have shown that $\tau \circ v$ is nearly elementary.", "Since $i(G) \in i(N)$ , we have $J \in i(N)$ , so $J$ is not the $\mathrm {crt}(\tau \circ v)$ -core of $i(N)$ .", "So what we need to see is that none of the conclusions (a)-(b) of hold for $\langle i(N),J,\tau \circ v, {\rm crt}(\tau \circ v) \rangle $ .", "We break into two cases.", "Case A.", "$\Phi _\xi $ is not extender-active.", "We have $E_\alpha ^N = \emptyset .$ Since $(N,\Psi ) \le ^*({\mathcal {M}}_\xi ^{\mathcal {T}},\Sigma _\xi ^{\mathcal {T}}) \le ^* (M,\Sigma )$ , claim 3 gives $G|(\alpha ^+)^G = N||(\alpha ^+)^G.$ Since $G \in N$ , there is a first level $P$ of $N$ such that $P||(\alpha ^+)^G =G|(\alpha ^+)^G$ and $\rho (P) \le \alpha $ .", "Because our tuple is problematic, $P \ne G$ .", "Letting $n = k(P)$ , we get by the argument above that in $N$ , $\rho _n(P)$ has 
the same cofinality as $(\alpha ^+)^P = (\alpha ^+)^G$ , namely $\kappa $ .", "We set $Q = {\rm Ult}(P,E_i \upharpoonright \sup i ``\alpha ),$ and let $t_0 \colon P \rightarrow Q$ be the canonical embedding, and $v_0 \colon Q \rightarrow i(P)$ be the factor map.", "Again, we must see that $Q$ is not a protomouse, in the case $P$ is extender active with ${\rm crt}(\dot{F}^P) = \mu $ , and $n=0$ .", "If $\alpha \le \mu ^{+,P}$ , this follows as above.", "If $\mu ^{+,P} < \alpha $ , then because $P|\alpha ^{+,P} =G|\alpha ^{+,G}$ , $\mu ^{+,P}$ is a cardinal of $G$ , and hence of $N$ , so $i$ is continuous at $\mu ^{+,P}$ , as desired.", "It is easy to check that the hypotheses of hold for $\langle (i(P),\Omega ),(Q, \Omega ^{v_0}),v_0,{\rm crt}(v_0)\rangle $ , where $\Omega = (\Lambda _{\gamma +1})_{i(P)}$ .", "Note here that $\sup i``\alpha = {\rm crt}(v_0) =t_0(\alpha ),$ because $t_0$ is continuous at $\alpha $ and $i$ is not.", "Let $s_0 = p(P) \setminus \alpha $ ; then $P = h^{n+1}_P ``(\alpha \cup s_0)$ because $P$ is sound and $\rho (P) \le \alpha $ .", "Thus $Q = h^{n+1}_Q ``(\sup i``\alpha \cup t_0(s_0)).$ Moreover, $t_0$ maps the solidity witnesses for $s_0$ to solidity witnesses for $t_0(s_0)$ , so $Q$ is $\mathrm {crt}(v_0)$ -sound, with parameter $t_0(s_0) \setminus \mathrm {crt}(v_0)$ .", "($Q$ may not be sound; this happens if $\rho (P) \le \kappa $ .)", "Also, $\rho (Q) \le \sup i``\alpha = \mathrm {crt}(v_0) < t_0(\alpha ^{+,P}) \le \rho _n(Q),$ where the last inequality comes from $\rho _n(Q) = \sup t_0 `` \rho _n(P)$ and the fact that $t_0$ is continuous at $\alpha ^{+,P}$ .", "It is easy to verify that $v_0$ is nearly elementary.", "Finally, $i(P)$ is sound, so $Q$ cannot be its ${\rm crt}(v_0)$ -core.", "Thus the hypotheses of hold for $\langle (i(P),\Omega ),(Q, \Omega ^{v_0}),v_0,{\rm crt}(v_0)\rangle $ .", "But note $(i(P),\Omega ) \lhd (i(N),\Lambda _{\gamma +1}) \le ^* (\mathcal {M}_{\gamma +1}^{\mathcal {T}},\Sigma _{\gamma +1}^{\mathcal {T}}) \le ^* (M,\Sigma ).$ So because $(M,\Sigma )$ is minimal, one of the conclusions of holds, and $Q$ is an initial segment of $i(P)$ , or an ultrapower away.", "(One can argue that $Q \lhd i(P)$ , but we don't need this.)", "However, $J$ and $Q$ agree to $t(\alpha ^{+,G}) = t_0(\alpha ^{+,P})$ , and both project to $\sup i``\alpha $ or below.", "So if $(i(N),J,\tau \circ v, {\rm crt}(\tau \circ v))$ is not problematic, then $J=Q.$ This implies that $k(J)=k(Q)$ , and $t(s) = t_0(s_0)$ .", "But $t \upharpoonright \alpha = i \upharpoonright \alpha =t_0 \upharpoonright \alpha $ .", "It follows at once that $G = P$ or $\textrm {Ult}(G,D) = P$ , where $D$ is a measure on $\rho _k(G)$ of order 0.", "So $G \lhd N$ or $\textrm {Ult}(G,D) \lhd N$ .", "In either case, $\Phi _\xi ^-$ is not problematic, contradiction.", "This gives (a) of the claim.", "For (b), we must see that $\mathrm {crt}(\tau \circ v)$ is not an index in $i(N)$ .", "There are two cases.", "If $i$ is continuous at $\alpha $ , then $\mathrm {crt}(v) > i(\alpha )$ and $\mathrm {crt}(\tau ) = i(\alpha )$ , so $\mathrm {crt}(\tau \circ v) =i(\alpha )$ , which is not an index in $i(N)$ .", "Otherwise, $\mathrm {crt}(\tau \circ v) = \mathrm {crt}(v) = \sup i``\alpha $ .", "But then $\sup i``\alpha $ has cofinality $\kappa $ in $i(N)$ , and since $\kappa $ is a limit cardinal in $i(N)$ , it is not the cofinality of the index of a total extender in $i(N)$ , 
"The case ($ \\Phi _{\\gamma +1}^- = \\langle i(\\bar{N}),\\bar{J},\\bar{\\tau } \\circ \\bar{v}, {\\rm crt}(\\bar{\\tau } \\circ \\bar{v}) \\rangle $ ) is handled similarly.", "We have $P\\lhd \\bar{N}$ by the agreement between $N,\\bar{N}$ .", "As mentioned before, $\\bar{J}, i(\\bar{N})$ are strongly stable and of type 1.", "We show as before $\\bar{J} = Q$ and so $\\textrm {Ult}(\\bar{G}, D) = P\\lhd N$ , where $D$ is the measure of Mitchell order 0 on $\\rho _k(G)$ .", "This again shows $\\Phi ^-_\\xi $ is not problematic.", "Case B.", "$\\Phi _\\xi $ is extender-active.", "In this case $\\sup i``\\alpha = i(\\alpha )$ , because $\\alpha $ has cofinality ${\\rm crt}(E_\\alpha ^N)^{+,N}$ in $N$ .", "So $i(\\alpha )$ is an index in $i(N)$ , say of $F$ .", "Moreover, $i(\\alpha ) =\\mathrm {crt}(\\tau \\circ v)$ , so we have (b) of the claim.", "Let $R = {\\rm Ult}(N,E_\\alpha ^N)$ .", "$G|\\alpha ^{+,G}$ is an initial segment of $R$ by $(*)(N)$ .", "If $\\alpha ^{+,R}= \\alpha ^{+,G}$ , then $i(\\alpha )^{+,J} =i(\\alpha )^{+,{\\rm Ult}(i(N),F)})$ , so $\\langle i(N),J,\\tau \\circ v, i(\\alpha )\\rangle $ is problematic.", "(Since ${\\rm crt}(v) > i(\\alpha )$ , $i(\\alpha ) = {\\rm crt}(\\tau \\circ v))$ .)", "Otherwise, we have a first initial segment $P$ of $R$ past $\\alpha ^{+,G}$ that projects to or below $\\alpha $ .", "We can now use $P$ just as we did in Case A, thereby proving (a) of the claim.", "The case ($ \\Phi _{\\gamma +1}^- = \\langle i(\\bar{N}),\\bar{J},\\bar{\\tau } \\circ \\bar{v}, {\\rm crt}(\\bar{\\tau } \\circ \\bar{v}) \\rangle $ ) is handled similarly: we show $\\textrm {Ult}(\\bar{G}, D) = P\\lhd R$ , where $D$ is the measure of Mitchell order 0 on $\\rho _k(G)$ .", "$\\hfill \\square $ By the last claim, we may assume that $\\Phi _\\xi ^-$ and $\\Phi _{\\gamma +1}^-$ are not problematic.", "Suppose for example that $G \\lhd N$ , so that $J \\lhd i(N)$ .", "Since $\\Phi _\\xi $ is problematic, $\\Lambda _\\xi ^\\sigma \\ne (\\Lambda _\\xi )_G$ .", "If $\\Phi _{\\gamma +1}$ is not problematic, then $(\\Lambda _{\\gamma +1})_J =\\Lambda _{\\gamma +1}^{\\tau \\circ v}.$ Because $(i(G), (\\Lambda _{\\gamma +1})_{i(G)}) <^* (M,\\Sigma )$ , we also have $(\\Lambda _{\\gamma +1})_{i(G)}^v = (\\Lambda _{\\gamma +1})_J.$ By claim 4, $\\Lambda _\\xi ^\\sigma =\\Lambda _{\\gamma +1}^{i \\circ \\sigma }$ .", "So we can calculate as above $\\Lambda _\\xi ^\\sigma & = (\\Lambda _{\\gamma +1})^{i \\circ \\sigma }\\\\& = (\\Lambda _{\\gamma +1})^{\\tau \\circ v \\circ t}\\\\& =(\\Lambda _{\\gamma +1}^{\\tau \\circ v})^t \\\\& = (\\Lambda _{\\gamma +1})_{J}^t\\\\& = ((\\Lambda _{\\gamma +1})_{i(G)}^v)^t\\\\& = (\\Lambda _{\\gamma +1})_{i(G)}^i\\\\& = (\\Lambda _\\xi )_G.\\\\$ This is a contradiction.", "The cases in which $G$ or $\\bar{G}$ is an ultrapower away from $N$ are similar.", "We give some detail here in the case $ \\Phi _{\\gamma +1}^- = \\langle i(\\bar{N}),\\bar{J},\\bar{\\tau } \\circ \\bar{v}, {\\rm crt}(\\bar{\\tau } \\circ \\bar{v}) \\rangle $ .", "Note that by the proof above, $J = \\bar{J} = Q$ and since $(i(\\bar{G}), (\\Lambda _{\\gamma +1})_{i(\\bar{G})}) <^* (M,\\Sigma )$ , $(\\Lambda _{\\gamma +1})_{i(\\bar{G})}^v = (\\Lambda _{\\gamma +1})_J.$ We assume again that $\\Lambda _\\xi ^\\sigma \\ne (\\Lambda _\\xi )_G$ and $(\\Lambda _{\\gamma +1})_J =\\Lambda _{\\gamma +1}^{\\bar{\\tau } \\circ \\bar{v}}.$ We have $\\Lambda _\\xi ^\\sigma & = \\text{Ult}(\\Lambda _{\\bar{G}}, i_D)\\\\& = \\text{Ult}(\\Lambda ^{\\bar{t}},i_D)\\\\& = \\text{Ult}((\\Lambda 
_{\\gamma +1}^{\\bar{\\tau }\\circ \\bar{v}})^{\\bar{t}},i_D) \\\\& = \\text{Ult}((\\Lambda _{\\gamma +1})_J^{\\bar{t}},i_D) \\\\& = \\text{Ult}((\\Lambda _{\\gamma +1})_J^{\\bar{t}},i_D) \\\\& = \\text{Ult}(((\\Lambda _{\\gamma +1})_{i(\\bar{G})}^{\\bar{v}})^{\\bar{t}},i_D) \\\\& = \\text{Ult}((\\Lambda _{\\gamma +1})_{\\bar{G}}^i,i_D) \\\\& = (\\Lambda _\\xi )_G.\\\\$ We abuse notation in the above calculations.", "For example, by $\\text{Ult}(\\Lambda _{\\bar{G}}, i_D)$ , we mean the tail strategy of $\\Lambda _{\\bar{G}}$ by the iteration tree $\\langle (\\bar{G},D)\\rangle $ .", "This is a contradiction.", "So we conclude that $\\Phi _{\\gamma +1}$ is problematic in all cases.", "This finishes the definition of $\\Phi _{\\gamma +1}$ , and the proof that it is a problematic tuple.", "We have also proved (4) of $(\\dagger )_{\\gamma +2}$ .", "We now verify the rest of $(\\dagger )_{\\gamma +2}$ .", "For the first part of (5), note that if $i$ is discontinuous at $\\alpha $ , then $\\sup i``\\alpha ={\\rm crt}(v) = {\\rm crt}(\\tau \\circ v) < i(\\alpha )$ , and if $i$ is continuous at $\\alpha $ , then ${\\rm crt}(\\tau ) = i(\\alpha ) ={\\rm crt}(\\tau \\circ v)$ .", "Thus in either case, $\\alpha _{\\gamma +1} \\le i_{\\xi ,\\gamma +1}^{{\\mathcal {S}}}(\\alpha _\\xi )$ .", "For the rest of (5) and (6), it is enough to see that $\\beta _{\\gamma +1} < i(\\beta )$ , where $ \\beta = \\beta _\\xi = (\\alpha ^+)^G$ .", "If $i$ is discontinuous at $\\alpha $ , then $\\alpha $ is a limit cardinal of $G$ , and $\\beta _{\\gamma +1} =(\\sup i``\\alpha )^{+,J} < i(\\alpha ) < i(\\beta )$ , as desired.", "If $i$ is continuous at $\\alpha $ , then since and $(\\alpha ^+)^G$ has cofinality $\\kappa $ in $N$ , we get $(\\alpha _{\\gamma +1})^{+,J} \\le i(\\alpha )^{+,J} = \\sup i``\\beta < i(\\beta ),$ as desired.", "(7) of $(\\dagger )_{\\gamma +2}$ is obvious from our definitions.", "If Case 2 occurs in the passage from $\\Phi ^-_\\xi =\\langle N,G,\\sigma ,\\alpha \\rangle $ to to $\\Phi ^-_{\\gamma +1} = \\langle i(N),J, \\tau \\circ v, \\mathrm {crt}(\\tau \\circ v) \\rangle $ , then $\\rho _k(J) = \\sup t`` \\rho _k(G)$ has cofinality $\\kappa $ in $i(N)$ , where $\\kappa = \\mathrm {crt}(E_\\gamma ^{\\mathcal {S}})$ .", "Along branches of ${\\mathcal {S}}$ containing $\\gamma +1$ , $\\kappa $ can no longer be a critical point.", "It follows that along any given branch, Case 2 can occur at most once.", "A similar remark applies to the case $\\Phi ^-_{\\gamma +1} = \\langle i(\\bar{N}),\\bar{J}, \\bar{\\tau } \\circ \\bar{v}, \\mathrm {crt}(\\bar{\\tau } \\circ \\bar{v}) \\rangle $ .", "If (I) or (II) hold at $\\gamma +2$ , then the construction of ${\\mathcal {S}}$ is over.", "Otherwise, we let $E_{\\gamma +2}^{{\\mathcal {S}}}$ be the least disagreement between ${\\mathcal {M}}_{\\gamma +2}^{{\\mathcal {S}}}$ and $M_{\\nu ,l}$ , and we set $\\lambda _{\\gamma +1} = \\inf (\\alpha _{\\gamma +1},\\lambda (E_{\\gamma +2}^{{\\mathcal {S}}})).$ This completes the successor step in the construction of ${\\mathcal {S}}$ .", "Now suppose we are given ${\\mathcal {S}}\\upharpoonright \\theta $ , where $\\theta $ is a limit ordinal.", "Let $b=\\Sigma ({\\mathcal {T}}\\upharpoonright \\theta )$ .", "Case 1.", "There is a largest $\\eta \\in b$ such that $\\eta $ is unstable.", "Fix $\\eta $ .", "There are two subcases.", "for all $\\gamma \\in b-(\\eta +1)$ , rt$(\\gamma )=\\eta +1$ .", "In this case, $b-(\\eta +1)$ is a branch of ${\\mathcal {S}}$ .", "Let ${\\mathcal {S}}$ choose this branch, 
$[\\eta +1,\\theta )_{\\mathcal {S}}=b-(\\eta +1)$ , and let ${\\mathcal {M}}^{\\mathcal {S}}_\\theta $ be the direct limit of the ${\\mathcal {M}}^{\\mathcal {S}}_\\gamma $ for sufficiently large $\\gamma \\in b-(\\eta +1)$ .", "We define the branch embedding $i^{\\mathcal {S}}_{\\gamma ,\\theta }$ a usual and $\\pi _\\theta :{\\mathcal {M}}^{\\mathcal {S}}_\\theta \\rightarrow {\\mathcal {M}}^{\\mathcal {T}}_\\theta $ is given by the fact that the copy maps commute with the branch embeddings.", "We declare $\\theta $ to be stable.", "for all $\\gamma \\in b-(\\eta +1)$ , rt$(\\gamma )=\\eta $ .", "Let ${\\mathcal {S}}$ choose $[0,\\theta )_{\\mathcal {S}}= (b-\\eta )\\cup [0,\\eta ]_{\\mathcal {S}}$ , and let ${\\mathcal {M}}^{\\mathcal {S}}_\\theta $ be the direct limit of the ${\\mathcal {M}}^{\\mathcal {S}}_\\gamma $ for sufficiently large $\\gamma \\in b$ .", "Branch embeddings $i^{\\mathcal {S}}_{\\gamma ,\\theta }$ for $\\gamma \\ge \\eta $ are defined as usual.", "$\\pi _\\theta :{\\mathcal {M}}^{\\mathcal {S}}_\\theta \\rightarrow {\\mathcal {M}}^{\\mathcal {T}}_\\theta $ is given by the fact that copy maps commute with branch embeddings.", "We declare $\\theta $ to be stable.", "Since $\\theta $ is stable, $(\\dagger )_\\theta $ follows at once from $\\forall \\gamma < \\theta (\\dagger )_\\gamma $ .", "Case 2.", "There are boundedly many unstable ordinals in $b$ but no largest one.", "We let $\\eta $ be the sup of the unstable ordinals in $b$ .", "Let ${\\mathcal {S}}$ choose $[0,\\theta )_{\\mathcal {S}}= (b-\\eta )\\cup [0,\\eta ]_{\\mathcal {S}}$ , and define the corresponding objects as in case 1(B).", "We declare $\\theta $ stable, and again $(\\dagger )_\\theta $ is immediate.", "Case 3.", "There are arbitrarily large unstable ordinals in $b$ .", "In this case, $b$ is a disjoint union of pairs $\\lbrace \\gamma ,\\gamma +1\\rbrace $ such that $\\gamma $ is unstable and $\\gamma +1$ is stable.", "We set $[0,\\theta )_{\\mathcal {S}}=\\lbrace \\xi \\in b \\ | \\ \\xi \\textrm { is unstable}\\rbrace ,$ and let ${\\mathcal {M}}^{\\mathcal {S}}_\\theta $ be the direct limit of the ${\\mathcal {M}}^{\\mathcal {S}}_\\xi $ 's for $\\xi \\in b$ unstable.", "There is no dropping in model or degree along $[0,\\theta )_{\\mathcal {S}}$ .", "We define maps $i^{\\mathcal {S}}_{\\xi ,\\theta }, \\pi _\\theta $ as usual.", "If $({\\mathcal {M}}^{\\mathcal {S}}_\\theta , \\Lambda _\\theta )$ is not a pair of the form $({\\mathcal {M}}_\\tau ^{\\mathcal {U}}, \\Sigma _\\tau ^{\\mathcal {U}})$ , then we declare $\\theta $ stable and $(\\dagger )_\\theta $ is immediate.", "Suppose that $({\\mathcal {M}}_\\theta ^{\\mathcal {S}}, \\Lambda _\\theta )$ is a pair of ${\\mathcal {U}}$ .", "We declare $\\theta $ unstable.", "Note that by clauses (4) and (5) of $(\\dagger )$ , there is a $\\xi <_S \\theta $ such that for all $\\gamma $ with $\\xi <_S \\gamma <_S \\theta $ , $i^{\\mathcal {S}}_{\\xi ,\\gamma }(\\langle \\alpha _\\xi ,\\beta _\\xi \\rangle ) = \\langle \\alpha _\\gamma , \\beta _\\gamma \\rangle $ .", "So we can set $\\alpha _\\theta = \\text{ common value of $i_{\\gamma ,\\theta }^{\\mathcal {S}}(\\alpha _\\gamma )$, for $\\gamma <_S\\theta $ sufficiently large.", "}$ By clause (5), we can set ${\\mathcal {M}}_{\\theta +1}^{\\mathcal {S}}= \\text{ common value of $i_{\\gamma ,\\theta }^{\\mathcal {S}}({\\mathcal {M}}_{\\gamma +1}^{\\mathcal {S}})$, for $\\gamma <_S\\theta $ sufficiently large.", "}$ We also let $\\sigma _\\theta = \\text{ common value of $i_{\\gamma ,\\theta 
}^{\\mathcal {S}}(\\sigma _\\gamma )$, for $\\gamma <_S\\theta $ sufficiently large.", "}$ Here $i_{\\gamma ,\\theta }^{\\mathcal {S}}(\\sigma _\\gamma ) = \\text{ upward extension of }\\bigcup _{\\eta < \\rho } i_{\\gamma ,\\theta }^{\\mathcal {S}}(\\sigma _\\gamma ^\\eta ),$ where $\\rho = \\rho _k({\\mathcal {M}}_{\\gamma +1}^{\\mathcal {S}})$ for $k = k({\\mathcal {M}}_{\\gamma +1}^{\\mathcal {S}})$ , and the $\\sigma _\\gamma ^\\eta $ are the terms in the natural decomposition of $\\sigma _\\gamma $ .", "By (5) of $(\\dagger )$ , $i_{\\gamma ,\\theta }^{\\mathcal {S}}$ is continuous at $\\rho _k({\\mathcal {M}}_{\\gamma +1})$ for $\\gamma <_S \\theta $ sufficiently large, so $\\sigma _\\theta $ is defined on all of ${\\mathcal {M}}_{\\theta +1}^{\\mathcal {S}}$ .", "It is easy then to see that $\\Phi _\\theta = \\langle ({\\mathcal {M}}_\\theta ^{\\mathcal {S}}, \\Lambda _\\theta ), ({\\mathcal {M}}_{\\theta +1}^{\\mathcal {S}}, \\Lambda _{\\theta +1}),\\sigma _\\theta , \\alpha _\\theta \\rangle $ is a problematic tuple.", "If (I) holds, then we stop the construction of ${\\mathcal {S}}= {\\mathcal {S}}_{\\nu ,l}$ and move on to ${\\mathcal {S}}_{\\nu ,l+1}$ .", "If (II) holds, we stop the construction of ${\\mathcal {S}}$ and do not move on.", "If neither holds, we let $E_{\\theta +1}^{\\mathcal {S}}$ be the extender on the ${\\mathcal {M}}_{\\theta +1}^{\\mathcal {S}}$ sequence that represents its first disagreement with $M_{\\nu ,l}$ , and set $\\lambda _{\\theta +1} = \\lambda (E_{\\theta +1}^{\\mathcal {S}}),$ and $\\lambda _\\theta = \\inf (\\lambda _{\\theta +1}, \\alpha _\\theta ).$ It then is routine to verify $(\\dagger )_{\\theta +1}$ .", "This finishes our construction of ${\\mathcal {S}}={\\mathcal {S}}_{\\nu ,l}$ and ${\\mathcal {T}}$ .", "Note that every extender used in ${\\mathcal {S}}$ is taken from a stable node and every stable node, except the last model of ${\\mathcal {S}}$ contributes exactly one extender to ${\\mathcal {S}}$ .", "The last model of ${\\mathcal {S}}$ is stable.", "Claim 6.", "The construction of ${\\mathcal {S}}_{\\nu ,l}$ stops for one of the reasons (I) and (II).", "Proof sketch.", "This follows at once from the fact that in the comparison above, no strategy disagreements show up, and $M_{\\nu ,l}$ never moves.", "That in turn can be shown by the same method by which the other results on comparing phalanxes with background constructions are proved in [11].", "See [11].", "$\\hfill \\square $ Claim 7.", "For some $(\\nu ,l)\\le (\\eta _0,k_0)$ , the construction of ${\\mathcal {S}}_{\\nu ,l}$ stops for reason (II).", "Proof.", "If not, then the construction of ${\\mathcal {S}}={\\mathcal {S}}_{\\eta _0,k_0}$ must reach some ${\\mathcal {M}}^{\\mathcal {S}}_\\theta $ such that $(M_{\\eta _0,k_0},\\Omega _{\\eta _0,k_0})$ is a proper initial segment of $({\\mathcal {M}}^{\\mathcal {S}}_\\theta ,\\Lambda _\\theta )$ .", "Let $j:M \\rightarrow M_{\\eta _0,k_0}$ be the iteration map given by ${\\mathcal {U}}_{\\eta _0,k_0}$ .", "(Note that by the definition of $(\\eta _0,k_0)$ , $M_{\\eta _0,k_0}$ is a nondropping iterate of $M$ .)", "Letting ${\\mathcal {T}}={\\mathcal {T}}_{\\eta _0,k_0}$ , we have $\\pi _\\theta :{\\mathcal {M}}^{\\mathcal {S}}_\\theta \\rightarrow {\\mathcal {M}}^{\\mathcal {T}}_{\\theta }$ from the copying construction.", "Let $Q = \\pi _\\theta (M_{\\eta _0,k_0});$ then $\\pi _\\theta \\circ j \\colon (M, \\Sigma ) \\rightarrow (Q,(\\Sigma _\\theta ^{\\mathcal {T}})_Q)$ is an elementary map, because $\\Sigma = \\Omega _{\\eta 
_0,k_0}^j = ((\\Lambda _\\theta )_{M_{\\eta _0,k_0}})^j =((\\Sigma _\\theta ^{\\mathcal {T}})_Q)^{\\pi _\\theta \\circ j}.$ Thus $\\pi _\\theta \\circ j$ maps $(M,\\Sigma )$ into a proper initial segment of $({\\mathcal {M}}_\\theta ^{\\mathcal {T}},\\Sigma _\\theta ^{\\mathcal {T}})$ , contrary to the Dodd-Jensen Theorem.", "$\\hfill \\square $ By Claim 7, we may fix $(\\nu ,l)\\le (\\eta _0,k_0)$ such that the construction of ${\\mathcal {S}}={\\mathcal {S}}_{\\nu ,l}$ terminates at a stable $\\theta $ such that for some $\\gamma $ , ${\\mathcal {M}}^{\\mathcal {S}}_\\theta \\unlhd {\\mathcal {M}}^{{\\mathcal {U}}_{\\nu ,l}}_\\gamma $ .", "Let ${\\mathcal {S}}={\\mathcal {S}}_{\\nu ,l}$ , ${\\mathcal {U}}= {\\mathcal {U}}_{\\nu ,l}$ , and let $\\gamma $ be the least such that ${\\mathcal {M}}^{\\mathcal {S}}_\\theta \\unlhd {\\mathcal {M}}^{\\mathcal {U}}_\\gamma $ .", "We have lh$({\\mathcal {S}})=\\theta +1$ , and $[\\rm {rt}$$(\\theta ),\\theta ]_{\\mathcal {S}}$ does not drop in model or degree.", "If $0\\le _{\\mathcal {S}}\\theta $ , then $[0,\\theta ]_{\\mathcal {S}}$ does not drop in model or degree.", "Claim 8.", "For some unstable $\\xi $ , rt$(\\theta ) = \\xi +1$ .", "Proof.", "If not, then $0\\le _{\\mathcal {S}}\\theta $ and the branch $[0,\\theta ]_{\\mathcal {S}}$ does not drop.", "We claim that $({\\mathcal {M}}_\\theta ^{\\mathcal {S}},\\Lambda _\\theta ) = ({\\mathcal {M}}_\\gamma ^{\\mathcal {U}}, \\Sigma _\\gamma ^{\\mathcal {U}})$ .", "For otherwise $i_{0,\\theta }^{\\mathcal {S}}$ maps $(M,\\Sigma )$ to a proper initial segment of a $\\Sigma $ -iterate of $(M,\\Sigma )$ , contrary to the Dodd-Jensen Theorem.", "(Note here that $\\Sigma $ is the pullback of $\\Lambda _\\theta $ under $i_{0,\\theta }^{\\mathcal {S}}$ , by Claim 4.)", "For the same reason, $[0,\\gamma ]_U$ does not drop.", "We also get by a standard Dodd-Jensen argument that $i^{\\mathcal {S}}_{0,\\theta }=i^{\\mathcal {U}}_{0,\\gamma }$ .", "To see this, note that both are elementary maps from $(M,\\Sigma )$ to $({\\mathcal {M}}_\\theta ^{\\mathcal {S}},\\Lambda _\\theta ) = ({\\mathcal {M}}_\\gamma ^{\\mathcal {U}}, \\Sigma _\\gamma ^{\\mathcal {U}})$ .", "Since $i^{\\mathcal {U}}_{0,\\gamma }$ is an iteration map, for all $\\xi $ $i_{0,\\gamma }^{\\mathcal {U}}(\\xi ) \\le i^{\\mathcal {S}}_{0,\\eta }(\\xi ).$ Since $i^{\\mathcal {T}}_{0,\\theta }$ is also an iteration map, for all $\\xi $ $i^{\\mathcal {T}}_{0,\\eta }(\\xi ) =\\pi _{\\eta }\\circ i^{\\mathcal {S}}_{0,\\eta }(\\xi ) \\le \\pi _\\eta \\circ i^{\\mathcal {U}}_{0,\\tau }(\\xi ).$ Multiplying by $\\pi _\\eta ^{-1}$ , we get that $i^{\\mathcal {S}}_{0,\\eta }(\\xi ) \\le i^{\\mathcal {U}}_{0,\\gamma }(\\xi )$ for all $\\xi $ .", "So $i^{\\mathcal {S}}_{0,\\eta }=i^{\\mathcal {U}}_{0,\\tau }$ .", "Since we can recover branch extenders from branch embeddings, we get then that $s^{\\mathcal {S}}_\\theta = s^{\\mathcal {U}}_\\gamma $ .$s^{\\mathcal {S}}_\\theta $ is the sequence of extenders used along the branch $[0,\\theta ]_{\\mathcal {S}}$ and similarly for $s^{\\mathcal {U}}_\\gamma $ .", "Let $\\eta \\le _{\\mathcal {S}}\\theta $ be least such that $\\eta $ is stable.", "Then $s^{\\mathcal {S}}_\\eta = s^{\\mathcal {S}}_\\theta \\upharpoonright \\delta =s^{\\mathcal {U}}_\\gamma \\upharpoonright \\delta $ for some $\\delta $ .", "But there is $\\tau $ such that $s^{\\mathcal {U}}_\\tau = s^{\\mathcal {U}}_\\delta \\upharpoonright \\delta $ .", "Thus ${\\mathcal {M}}^{\\mathcal {S}}_\\eta ={\\mathcal {M}}^{\\mathcal {U}}_\\tau $ .", 
"We have also $\\Lambda _\\eta = \\Lambda _\\theta ^{i_{\\eta ,\\theta }^{\\mathcal {S}}} = (\\Sigma _\\gamma ^{\\mathcal {U}})^{i_{\\tau ,\\gamma }^{\\mathcal {U}}} = \\Sigma _\\tau ^{\\mathcal {U}},$ by pullback consistency, since $i_{\\eta ,\\theta }^{\\mathcal {S}}= i_{\\tau ,\\gamma }^{\\mathcal {U}}$ .", "If $\\eta $ is a limit ordinal, then by the rules at limit stages of ${\\mathcal {S}}$ above, we declare $\\eta $ unstable.", "This contradicts our assumption.", "If ${\\mathcal {S}}$ -pred$(\\eta )=\\mu $ , then $\\mu $ is unstable by our minimality assumption on $\\eta $ ; but then we declare $\\eta $ unstable by our rules at successor stages.", "Again, we reach a contradiction.", "$ \\hfill \\square $ Let $\\xi $ be as in Claim 6, and let $\\tau $ be such that $({\\mathcal {M}}^{\\mathcal {S}}_\\xi ,\\Lambda _\\xi )=({\\mathcal {M}}^{\\mathcal {U}}_\\tau ,\\Sigma _\\tau ^{\\mathcal {U}})$ .", "We have ${\\mathcal {M}}_\\xi ^{\\mathcal {S}}= {\\mathcal {M}}_\\tau ^{\\mathcal {U}}$ , so $s^{\\mathcal {S}}_\\xi = s^{\\mathcal {U}}_\\tau $ by the proof in claim 6.", "Claim 9.", "$\\tau < \\gamma $ .", "Proof.", "We show first $\\tau \\le \\gamma $ .", "Let $\\lambda = \\sup _i({\\rm lh}(s_\\xi ^{\\mathcal {S}}(i))) = \\sup _i({\\rm lh}(s^{\\mathcal {U}}_\\tau (i)).$ We have that $(M_\\xi ^{\\mathcal {S}},\\Lambda _\\xi )$ , $({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}},\\Lambda _{\\xi +1})$ , and $({\\mathcal {M}}_\\theta ^{\\mathcal {S}},\\Lambda _\\theta )$ all agree below $\\lambda $ .", "(Note that $\\lambda \\le \\alpha _\\xi $ .)", "However, if $\\beta < \\tau $ , then ${\\mathcal {M}}_\\beta ^{\\mathcal {U}}$ disagrees with ${\\mathcal {M}}_\\tau ^{\\mathcal {U}}$ below $\\lambda $ .", "Thus $\\tau \\le \\gamma $ .", "Now suppose $\\gamma = \\tau $ .", "If $\\theta = \\xi +1$ , then $({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}},\\Lambda _{\\xi +1}) \\unlhd ({\\mathcal {M}}_\\tau ^{\\mathcal {U}},\\Sigma _\\tau ^{\\mathcal {U}})= ({\\mathcal {M}}_\\xi ^{\\mathcal {S}},\\Lambda _\\xi )$ , so $\\Phi _\\xi $ is not problematic, contradiction.", "If $\\theta > \\xi +1$ , then ${\\mathcal {M}}_\\theta ^{\\mathcal {S}}$ is not $\\alpha _\\xi $ -sound.", "Since ${\\mathcal {M}}_\\theta ^{\\mathcal {S}}\\unlhd {\\mathcal {M}}_\\gamma ^{\\mathcal {U}}$ , we must have ${\\mathcal {M}}_\\theta ^{\\mathcal {S}}= {\\mathcal {M}}_\\tau ^{\\mathcal {U}}$ .", "However, ${\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}$ belongs to ${\\mathcal {M}}_\\xi ^{\\mathcal {S}}= {\\mathcal {M}}_\\tau ^{\\mathcal {U}}$ because $\\Phi _\\xi $ is problematic, and clearly ${\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}\\notin {\\mathcal {M}}_\\theta ^{\\mathcal {S}}$ , again a contradiction.", "Thus $\\gamma \\ne \\tau $ .", "$ \\hfill \\square $ Note that in fact $({\\mathcal {M}}_\\xi ^S,\\Lambda _\\xi )$ , $({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}},\\Lambda _{\\xi +1})$ , and $({\\mathcal {M}}_\\tau ^{\\mathcal {U}},\\Sigma _\\tau ^{\\mathcal {U}})$ all agree with $M_{\\nu ,l}$ below $\\alpha _\\xi $ .", "(Possibly not at $\\alpha _\\xi $ .)", "This is because otherwise $\\lambda _\\xi < \\alpha _\\xi $ , and $\\xi +1$ is a dead node in ${\\mathcal {S}}$ .", "Claim 10.", "$({\\mathcal {M}}_\\theta ^{\\mathcal {S}},\\Lambda _\\theta ) = ({\\mathcal {M}}_\\gamma ^{\\mathcal {U}},\\Sigma _\\gamma ^{\\mathcal {U}})$ .", "Proof.", "Otherwise $({\\mathcal {M}}_\\theta ^{\\mathcal {S}},\\Lambda _\\theta ) \\lhd ({\\mathcal {M}}_\\gamma ^{\\mathcal {U}},\\Sigma _\\gamma ^{\\mathcal {U}})$ , so ${\\mathcal {M}}_\\theta 
^{\\mathcal {S}}$ is sound, and thus $\\theta = \\xi +1$ .", "$\\rho ({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}) \\le \\alpha _\\xi $ , so $o({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}})< (\\alpha ^+_\\xi )^{{\\mathcal {M}}_\\gamma ^{\\mathcal {U}}}$ .", "Suppose first $\\beta = {\\rm lh}(E_\\tau ^{\\mathcal {U}}) > \\alpha _\\xi $ .", "Then $({\\mathcal {M}}_\\tau ^{\\mathcal {U}},\\Sigma _\\tau ^{\\mathcal {U}})$ agrees with $({\\mathcal {M}}_\\gamma ^{\\mathcal {U}},\\Sigma _\\gamma ^{\\mathcal {U}})$ below $\\beta $ , and $\\beta \\ge (\\alpha _\\xi ^+)^{{\\mathcal {M}}_\\gamma ^{\\mathcal {U}}}$ , so $({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}},\\Lambda _{\\xi +1}) \\lhd ({\\mathcal {M}}_\\tau ^{\\mathcal {U}}, \\Sigma _\\tau ^{\\mathcal {U}})= ({\\mathcal {M}}_\\xi ^{\\mathcal {S}},\\Lambda _\\xi )$ .", "It follows that $\\Phi _\\xi $ is not problematic.", "Suppose ${\\rm lh}(E_\\tau ^{\\mathcal {U}}) = \\alpha _\\xi $ .", "Let us write $F = E_\\tau ^{\\mathcal {U}}= E_{\\alpha _\\xi }^{{\\mathcal {M}}_\\tau ^{\\mathcal {U}}} = E_{\\alpha _\\xi }^{{\\mathcal {M}}_\\xi ^{\\mathcal {S}}}.$ We have $(M_{\\xi +1}^{\\mathcal {S}},\\Lambda _{\\xi +1}) \\lhd ({\\mathcal {M}}_{\\tau +1}^{\\mathcal {U}},\\Sigma _{\\tau +1}^{\\mathcal {U}})$ , because $({\\mathcal {M}}_{\\tau +1}^{\\mathcal {U}},\\Sigma _{\\tau +1}^{\\mathcal {U}})$ agrees sufficiently with $({\\mathcal {M}}_\\gamma ^{\\mathcal {U}},\\Sigma _\\gamma ^{\\mathcal {U}})$ .", "Thus $\\gamma = \\tau +1$ and $\\theta = \\xi +1$ .", "Let $\\kappa = \\mathrm {crt}(F)$ and $\\mu = \\lambda (F)$ .", "Since $\\sigma _\\xi (\\mu ) = \\mu $ , $\\mu $ is a cardinal of ${\\mathcal {M}}_\\xi ^{\\mathcal {S}}$ , and $F$ is total on ${\\mathcal {M}}_\\xi ^{\\mathcal {S}}$ .", "We shall show that $({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}},\\Lambda _{\\xi +1}) \\lhd {\\rm Ult}_0(({\\mathcal {M}}_{\\xi }^{\\mathcal {S}},\\Lambda _\\xi ), F)$ , so that $\\Phi _\\xi $ is not problematic.", "Note that $ {\\rm Ult}_0(({\\mathcal {M}}_{\\xi }^{\\mathcal {S}},\\Lambda _\\xi ), F) ={\\rm Ult}_0(({\\mathcal {M}}_{\\tau }^{\\mathcal {U}},\\Sigma _\\tau ^{\\mathcal {U}}), F)$ .", "Let $\\eta = {\\mathcal {U}}$ -pred$(\\tau +1)$ .", "By (4) of $(\\dagger )_\\xi $ , $\\alpha _\\xi = i_{0,\\xi }^{\\mathcal {S}}(\\alpha _0) = i_{0,\\tau }^{\\mathcal {U}}(\\alpha _0)$ .", "Thus $\\kappa \\in {\\rm ran}(i_{0,\\tau }^{\\mathcal {U}})$ .", "From this we get that $\\eta \\le _U \\tau $ .", "Let $n=k(M) = k(M_\\tau ^{\\mathcal {U}}) = k(M_\\eta ^{\\mathcal {U}})$ .", "We have $\\alpha _0 < \\rho _n(M)$ by hypothesis, so $\\alpha _\\xi < \\rho _n(M_\\tau ^{\\mathcal {U}})$ .", "If $\\eta = \\tau $ , then ${\\mathcal {M}}_{\\tau +1}^{\\mathcal {U}}= {\\rm Ult}_n({\\mathcal {M}}_\\tau ^{\\mathcal {U}},F)$ agrees with ${\\rm Ult}_0({\\mathcal {M}}_\\tau ^{\\mathcal {U}},F)$ to $\\sup i``\\rho _n({\\mathcal {M}}_\\tau ^{\\mathcal {U}})$ , which is well past ${\\rm lh}(F)^+$ as computed in the ultrapower, so we are done.", "So assume $\\eta <_U \\tau $ , and let $G$ be the extender applied to ${\\mathcal {M}}_\\eta ^{\\mathcal {U}}$ in $(\\eta ,\\tau ]_U$ .", "We must have $\\mathrm {crt}(G) < \\rho _n({\\mathcal {M}}_\\eta ^{\\mathcal {U}})$ , as otherwise $[0,\\tau ]_U$ drops.", "But also $\\kappa < \\mathrm {crt}(G)$ , because $\\kappa < \\lambda (G)$ by the definition of $\\eta $ , and $\\kappa \\in {\\rm ran}(i_{0,\\tau }^{\\mathcal {U}})$ .", "Thus $(\\kappa )^{+++,{{\\mathcal {M}}_\\eta ^{\\mathcal {U}}}} < \\mathrm {crt}(G) < \\rho _n({\\mathcal {M}}_\\eta 
^{\\mathcal {U}})$ .", "It follows that ${\\rm Ult}_n({\\mathcal {M}}_\\eta ^{\\mathcal {U}},F)$ , ${\\rm Ult}_0({\\mathcal {M}}_\\eta ^{\\mathcal {U}},F)$ , and ${\\rm Ult}_0({\\mathcal {M}}_\\tau ^{\\mathcal {U}},F)$ all agree to their common value for ${\\rm lh}(F)^+$ .", "This is what we need.", "$\\hfill \\square $ Figure: Phalanx ComparisonClaim 11.", "$\\alpha _\\xi $ is a successor cardinal of ${\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}$ .", "Proof.", "Suppose not.", "It follows that $\\alpha _\\xi $ is a limit cardinal of ${\\mathcal {M}}_\\xi ^{\\mathcal {S}}$ , and that $\\rho ({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}) = \\alpha _\\xi $ .", "Thus ${\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}$ is sound, and it is the core of ${\\mathcal {M}}_\\theta ^{\\mathcal {S}}$ .", "Moreover, $i_{\\xi +1,\\theta }^{\\mathcal {S}}$ is the uncoring embedding, and $\\Lambda _{\\xi +1} = \\Lambda _\\theta ^{i_{\\xi +1,\\theta }^{\\mathcal {S}}}$ by Claim 4.", "So $({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}},\\Lambda _{\\xi +1})$ is the core of $({\\mathcal {M}}_\\gamma ^{\\mathcal {U}},\\Sigma _\\gamma ^{\\mathcal {U}})$ .", "It follows that there is a $\\beta \\in [0,\\gamma ]_U$ such that either ${\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}= {\\mathcal {M}}_\\beta ^{\\mathcal {U}}$ or ${\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}\\lhd {\\mathcal {M}}_\\beta ^{\\mathcal {U}}$ .", "In either case, $\\hat{i}_{\\beta ,\\gamma }^{\\mathcal {U}}= i_{\\xi +1,\\theta }^{\\mathcal {S}}$ is again the uncoring map.", "By pullback consistency in ${\\mathcal {U}}$ , setting $Q = {\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}$ , $(\\Sigma _\\beta ^{\\mathcal {U}})_Q & = (\\Sigma _\\gamma ^{\\mathcal {U}})^{\\hat{i}_{\\beta ,\\gamma }^{\\mathcal {U}}}\\\\& = \\Lambda _\\theta ^{i_{\\xi +1,\\theta }^{\\mathcal {S}}}\\\\& = \\Lambda _{\\xi +1}.$ Thus $({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}, \\Lambda _{\\xi +1}) \\unlhd ({\\mathcal {M}}_\\beta ^{\\mathcal {U}},\\Sigma _\\beta ^{\\mathcal {U}})$ .", "Clearly $\\beta \\ge \\tau $ .", "$\\beta = \\tau $ is impossible because $\\Phi _\\xi $ is problematic, and $({\\mathcal {M}}_\\tau ^{\\mathcal {U}},\\Sigma _\\tau ^{\\mathcal {U}}) =({\\mathcal {M}}_\\xi ^{\\mathcal {S}}, \\Lambda _\\xi )$ .", "So suppose $\\beta > \\tau $ .", "Since $\\alpha _\\xi $ is a limit cardinal of ${\\mathcal {M}}_\\xi ^{\\mathcal {S}}$ , ${\\rm lh}(E_\\tau ^{\\mathcal {U}}) > \\alpha _\\xi $ .", "${\\rm lh}(E_\\tau ^{\\mathcal {U}})$ is a cardinal of ${\\mathcal {M}}_\\beta ^{\\mathcal {U}}$ , so if $({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}},\\Lambda _{\\xi +1}) \\lhd ({\\mathcal {M}}_\\beta ^{\\mathcal {U}},\\Sigma _\\beta ^{\\mathcal {U}})$ , then $({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}},\\Lambda _{\\xi +1}) \\lhd ({\\mathcal {M}}_\\tau ^{\\mathcal {U}},\\Sigma _\\tau ^{\\mathcal {U}})$ , contrary to $\\Phi _\\xi $ being problematic.", "So ${\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}= {\\mathcal {M}}_\\beta ^{\\mathcal {U}}$ .", "Now let $F$ be the first extender used on the branch $[0,\\beta ]_U$ such that ${\\rm lh}(F) > \\alpha _\\xi $ .", "Since $\\rho ({\\mathcal {M}}_\\beta ^{\\mathcal {U}}) =\\alpha _\\xi $ , $\\mathrm {crt}(F) \\ge \\alpha _\\xi $ .", "But then ${\\mathcal {M}}_\\beta ^{\\mathcal {U}}= {\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}$ is not $\\alpha _\\xi $ -sound, contradiction.", "$ \\hfill \\square $ Let $\\mu $ be the cardinal predecessor of $\\alpha _\\xi $ in ${\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}$ , or equivalently, in $M_\\xi ^{\\mathcal {S}}$ .", "Let also $ 
\\rho = \\rho ({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}})$ .", "We have $\\rho \\in \\lbrace \\mu ,\\alpha _\\xi \\rbrace $ , and $\\rho & = \\rho ({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}) \\\\& = \\rho ({\\mathcal {M}}_\\theta ^{\\mathcal {S}}) \\\\& = \\rho ({\\mathcal {M}}_\\gamma ^{\\mathcal {U}}).$ Claim 12.", "$E_{\\alpha _\\xi }^{{\\mathcal {M}}_\\xi ^{\\mathcal {S}}} = \\emptyset .$ Proof.", "Otherwise $E_{\\alpha _\\xi }^{{\\mathcal {M}}_\\xi ^{\\mathcal {S}}} = E_\\tau ^{\\mathcal {U}}$ , ${\\rm lh}(E_\\tau ^{\\mathcal {U}}) = \\alpha _\\xi $ , and $\\mu = \\lambda (E_\\tau ^{\\mathcal {U}})$ .", "Let $F$ be the first extender used in $[0,\\gamma ]_U$ such that ${\\rm lh}(F) \\ge \\alpha _\\xi $ .", "We claim that $F=E_\\tau ^{\\mathcal {U}}$ .", "For if not, then by the rules of $\\lambda $ -separated trees, $\\mathrm {crt}(F) < \\lambda (E_\\tau ^{\\mathcal {U}}) < {\\rm lh}(E^{\\mathcal {U}}_\\tau ) < \\lambda (F)$ .", "Since $\\rho ({\\mathcal {M}}_{\\gamma }^{\\mathcal {U}}) \\le \\alpha _\\xi < \\lambda (F)$ , we must have $\\rho ({\\mathcal {M}}_\\gamma ^{\\mathcal {U}}) \\le \\mathrm {crt}(F) < \\mu $ .", "However, $\\rho = \\rho ({\\mathcal {M}}_{\\gamma }^{\\mathcal {U}}) = \\rho ({\\mathcal {M}}_\\theta ^{\\mathcal {S}}) = \\rho ({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}) \\ge \\mu ,$ because $\\mu $ is a cardinal of ${\\mathcal {M}}_\\xi ^{\\mathcal {S}}$ and ${\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}\\in {\\mathcal {M}}_\\xi ^{\\mathcal {S}}$ .", "This is a contradiction, so we have $F=E_\\tau ^{\\mathcal {U}}$ , and $\\tau +1 \\le _U \\gamma $ .", "We claim that $\\rho = \\rho ({\\mathcal {M}}_{\\tau +1}^{\\mathcal {U}}).$ We remarked above that this holds if $\\tau +1 = \\gamma $ , so suppose $\\tau +1 <_U \\gamma $ .", "Let $\\delta = \\mathrm {crt}(\\hat{i}_{\\tau +1,\\gamma }^{\\mathcal {U}})$ , so that $\\mu \\le \\delta $ .", "Let $Q \\unlhd {\\mathcal {M}}_{\\tau +1}^{\\mathcal {U}}$ be such that $Q = {\\rm dom}(\\hat{i}_{\\tau +1,\\gamma }^{\\mathcal {U}})$ .", "If $\\rho ({\\mathcal {M}}_\\gamma ^{\\mathcal {U}})>\\delta $ , then $\\rho ({\\mathcal {M}}_\\gamma ^{\\mathcal {U}}) > (\\delta ^+)^{{\\mathcal {M}}_\\gamma ^{\\mathcal {U}}} \\ge \\alpha _\\xi $ ; thus $\\rho \\le \\delta $ .", "It follows that $\\rho = \\rho (Q)$ .", "If $Q \\lhd {\\mathcal {M}}_{\\tau +1}^{\\mathcal {U}}$ , then $\\rho (Q) \\ge \\alpha _\\xi $ since $\\alpha _\\xi = {\\rm lh}(F)$ is a cardinal of ${\\mathcal {M}}_{\\tau +1}^{\\mathcal {U}}$ , so $\\rho (Q) = \\rho = \\alpha _\\xi $ .", "It follows that $(Q,(\\Sigma _{\\tau +1}^{\\mathcal {U}})_Q) = \\alpha _\\xi \\text{-core of } {\\mathcal {M}}_\\gamma ^{\\mathcal {U}}= ({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}},\\Lambda _{\\xi +1}),$ so $({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}, \\Lambda _{\\xi +1}) \\lhd ({\\mathcal {M}}_{\\tau +1}^{\\mathcal {U}},\\Sigma _{\\tau +1}^{\\mathcal {U}})$ , contrary to $\\Phi _\\xi $ being problematic.", "(As we showed above, ${\\rm Ult}_0(({\\mathcal {M}}_\\tau ^{\\mathcal {U}},\\Sigma _\\tau ^{\\mathcal {U}}),F)$ is in sufficient agreement with $({\\mathcal {M}}_{\\tau +1}^{\\mathcal {U}},\\Sigma _{\\tau +1}^{\\mathcal {U}})$ that we can conclude this.)", "Thus $Q = {\\mathcal {M}}_{\\tau +1}^{\\mathcal {U}}$ , and $\\rho = \\rho (Q)$ .", "We cannot have $\\rho = \\mu $ because $\\lambda (E_\\tau ^{\\mathcal {U}})$ is not a possible value of $\\rho ({\\mathcal {M}}_{\\tau +1}^{\\mathcal {U}})$ , and thus $\\rho = \\alpha _\\xi $ .", "Let $\\eta $ be the $U$ -predecessor of $\\tau +1$ and 
$\\kappa = \\mathrm {crt}(F)$ .", "If $[0,\\tau +1]_U$ drops, then $\\rho \\le \\kappa $ , so $[0,\\tau +1]_U$ does not drop.", "Since $\\rho ({\\mathcal {M}}_{\\tau +1}^{\\mathcal {U}}) = {\\rm lh}(F)$ , $\\rho ({\\mathcal {M}}_\\eta ^{\\mathcal {U}}) = (\\kappa ^+)^{{\\mathcal {M}}_\\eta ^{\\mathcal {U}}}$ .", "Let $Z = \\mbox{Th}_n^{{\\mathcal {M}}_\\eta ^{\\mathcal {U}}}((\\kappa ^+)^{{\\mathcal {M}}_\\eta ^{\\mathcal {U}}} \\cup r),\\footnote {Equivalently, Z = \\mbox{Th}_1^{({\\mathcal {M}}_\\eta ^{\\mathcal {U}})^{n-1}}((\\kappa ^+)^{{\\mathcal {M}}_\\eta ^{\\mathcal {U}}} \\cup r).", "}$ where $n =k({\\mathcal {M}}_\\eta ^{\\mathcal {U}}) +1 = k(M)+1$ , and $r = p_n({\\mathcal {M}}_\\eta ^{\\mathcal {U}})$ .", "$Z$ is not in ${\\mathcal {M}}_\\eta ^{\\mathcal {U}}$ , and hence $Z$ is not in ${\\mathcal {M}}_\\tau ^{\\mathcal {U}}$ .", "But $Z$ can be computed inside ${\\mathcal {M}}_\\tau ^{\\mathcal {U}}$ from $F$ and ${\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}$ , both of which belong to ${\\mathcal {M}}_\\tau ^{\\mathcal {U}}$ .", "This is because $i_{\\eta ,\\gamma }^{\\mathcal {U}}(r) = p({\\mathcal {M}}_\\gamma ^U) = i_{\\xi +1,\\theta }^S(t),$ where $t = p({\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}})$${\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}$ is sound, because $\\rho = \\alpha _\\xi $ ., and $\\mathrm {crt}(i_{\\tau +1,\\gamma })> \\alpha _\\xi ,$ because otherwise $\\rho ({\\mathcal {M}}_\\gamma ^{\\mathcal {U}}) > \\alpha _\\xi $ , so for $\\nu < (\\kappa ^+)^{{\\mathcal {M}}_\\eta ^{\\mathcal {U}}}$ , $\\langle \\varphi ,\\nu ,r \\rangle \\in Z & \\Leftrightarrow {\\mathcal {M}}_\\gamma ^{\\mathcal {U}}\\vDash \\varphi [i_{\\eta ,\\tau +1}^{\\mathcal {U}}(\\nu ),p({\\mathcal {M}}_\\gamma ^{\\mathcal {U}})]\\\\& \\Leftrightarrow {\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}\\vDash \\varphi [i_{\\eta ,\\tau +1}^{\\mathcal {U}}(\\nu ),t].$ Since $i_{\\eta ,\\tau +1}^{\\mathcal {U}}\\upharpoonright \\kappa ^+$ can be computed from $F$ , we get $Z \\in {\\mathcal {M}}_\\tau ^{\\mathcal {U}}$ , a contradiction.", "This completes the proof of claim 10.", "$\\hfill \\square $ Claim 13.", "Let $F = E_\\nu ^{\\mathcal {U}}$ be the first extender used in $[0,\\gamma ]_U$ such that ${\\rm lh}(F) \\ge \\alpha _\\xi $ ; then $\\mathrm {crt}(F) = \\mu $ .", "Proof.", "${\\rm lh}(F)> \\alpha _\\xi $ by claim 10, so $\\lambda (F) >\\alpha _\\xi $ .", "$\\rho $ is not in the interval $(\\mathrm {crt}(F),\\lambda (F)]$ , so $\\mathrm {crt}(F) \\ge \\mu $ .", "Let $\\eta = \\mbox{pred}_U(\\nu +1)$ , and let $Q \\unlhd {\\mathcal {M}}_\\eta ^{\\mathcal {U}}$ be such that ${\\mathcal {M}}_{\\nu +1}^{\\mathcal {U}}= {\\rm Ult}(Q,F).$ Note that $\\eta \\le \\tau $ , as otherwise some extender $G$ such that ${\\rm lh}(G) \\ge {\\rm lh}(E_\\tau ^{\\mathcal {U}}) > \\alpha _\\xi $ is used on $[0,\\eta )_U$ .", "If $\\eta < \\tau $ , then $\\mathrm {crt}(F) < \\lambda (E_\\eta ^{\\mathcal {U}}) < \\mu $ , contradiction.", "Thus $\\eta = \\tau $ .", "Note $Q$ , ${\\mathcal {M}}_\\gamma ^{\\mathcal {U}}$ , and ${\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}$ have the same subsets of $\\alpha _\\xi $ .", "Since ${\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}\\in {\\mathcal {M}}_\\tau ^{\\mathcal {U}}$ , this implies that $Q$ is a proper initial segment of ${\\mathcal {M}}_\\tau ^{\\mathcal {U}}$ .", "The branch $(\\nu +1,\\gamma ]_U$ can have no drops, since otherwise $\\rho ({\\mathcal {M}}_\\gamma ^{\\mathcal {U}}) \\ge \\lambda (F) > \\alpha _\\xi $ , whereas $\\rho ({\\mathcal {M}}_\\gamma ^{\\mathcal {U}}) \\in 
\\lbrace \\mu , \\alpha _\\xi \\rbrace $ .", "It follows that $Q$ is the core of ${\\mathcal {M}}_\\gamma ^{\\mathcal {U}}$ .", "(The full core, not the $\\alpha _\\xi $ -core.)", "We claim $\\mathrm {crt}(F) = \\mu $ .", "For otherwise, $\\mathrm {crt}(F) > \\alpha _\\xi $ , which implies that $Q$ is the $\\alpha _\\xi $ -core of ${\\mathcal {M}}_\\gamma ^{\\mathcal {U}}$ , so that $Q = {\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}$ .", "One also has that $\\hat{i}_{\\tau ,\\gamma }^{\\mathcal {U}}= i_{\\xi +1,\\theta }^{\\mathcal {S}}$ is the uncoring map, so ${(\\Sigma _\\tau ^{\\mathcal {U}})}_Q & = (\\Sigma _\\gamma ^{\\mathcal {U}})^{\\hat{i}_{\\tau ,\\gamma }^{\\mathcal {U}}}\\\\& = (\\Sigma _\\theta ^{\\mathcal {S}})^{i_{\\xi +1,\\theta }^{\\mathcal {S}}} \\\\& = \\Lambda _{\\xi +1}.$ The last equation holds because $\\Lambda _{\\xi +1} =(\\Sigma _\\xi ^{\\mathcal {T}})^{\\pi _{\\xi +1}} = (\\Sigma _\\theta ^{\\mathcal {T}})^{i_{\\xi ,\\theta }^{\\mathcal {T}}\\circ \\pi _{\\xi +1}} = (\\Sigma _\\theta ^{\\mathcal {T}})^{ \\pi _\\theta \\circ i_{\\xi ,\\theta }^{\\mathcal {S}}}= (\\Sigma _\\theta ^{\\mathcal {S}})^{i_{\\xi +1,\\theta }^{\\mathcal {S}}}$ .", "So if $\\mathrm {crt}(F)> \\alpha _\\xi $ , then $\\Phi _\\xi $ is not problematic, contradiction.", "Thus $\\mathrm {crt}(F) = \\mu $ .", "$\\hfill \\square $ Claims 12 and 13 give that $\\lambda (E^{\\mathcal {U}}_\\nu )>\\alpha _\\xi $ .", "Claim 11 implies also that $\\rho ({\\mathcal {M}}_\\gamma ^{\\mathcal {U}}) = \\mu $ , as otherwise $\\rho ({\\mathcal {M}}_\\gamma ^{\\mathcal {U}}) = \\alpha _\\xi $ is in the forbidden interval $(\\mathrm {crt}(E^{\\mathcal {U}}_\\nu ),\\lambda (E^{\\mathcal {U}}_\\nu ))$ .", "From this we get that $\\rho (Q) = \\mu $ as well.", "So $Q\\lhd {\\mathcal {M}}^{\\mathcal {U}}_\\tau $ and $\\tau \\le _U \\nu +1$ .", "This implies that $Q$ is the core of ${\\mathcal {M}}_{\\xi +1}^{\\mathcal {S}}$ , and in fact leting $D$ be the normal measure defined by $F$ , we have the diagram in Figure REF , where $i_D:Q\\rightarrow Ult(Q,D)$ is the ultrapower map and $k: Ult(Q,D) \\rightarrow {\\mathcal {M}}^{\\mathcal {U}}_{\\nu +1}$ is the factor map.", "We have that $k\\circ i_D = i^{\\mathcal {U}}_{\\tau ,\\nu +1}$ and $\\mathrm {crt}(k)>\\alpha _\\xi $ .", "We have then that $Ult(Q,D) & = Core_{\\alpha _\\xi }({\\mathcal {M}}^{\\mathcal {U}}_{\\nu +1}) \\\\& = Core_{\\alpha _\\xi }({\\mathcal {M}}^{\\mathcal {U}}_\\gamma ) \\\\& = {\\mathcal {M}}^{\\mathcal {S}}_{\\xi +1}$ Figure: ℳ ξ+1 𝒮 = Ult (Q,D){\\mathcal {M}}^{\\mathcal {S}}_{\\xi +1} = {\\rm Ult}(Q,D), where the trivial completion of DD is on the QQ-sequenceClaim 14.", "$\\tau \\in [0,\\nu ]_U$ .", "We assume $\\nu > \\tau $ ; otherwise, there is nothing to prove.", "Let $N = {\\mathcal {M}}^{\\mathcal {U}}_\\tau |\\alpha _\\xi $ .", "Suppose that $D\\in Ult(N,F)$ .", "Then $D$ witnesses that the first generator above $\\mu $ of the extender given by the branch embedding $\\hat{i}^{\\mathcal {U}}_{\\tau ,\\gamma }$ is $\\beta =_{def}$ crt$(k)$ and $\\beta < (\\mu ^{++})^{{\\mathcal {M}}^{\\mathcal {U}}_\\gamma }$ .", "On the other hand, the first generator $> \\mu $ of the extender given by the branch embedding $\\hat{i}^{\\mathcal {S}}_{\\xi +1,\\theta }$ is an inaccessible in ${\\mathcal {M}}^{\\mathcal {U}}_\\gamma $ .", "This is a contradiction.", "The above paragraph implies that $\\rho ({\\mathcal {M}}^{\\mathcal {U}}_\\nu )\\le \\alpha _\\xi $ since $D$ codes a subset of $\\alpha _\\xi $ missing from ${\\mathcal {M}}^{\\mathcal {U}}_\\nu 
$ .", "Now we proceed to prove $\\tau \\in [0,\\nu ]_U$ .", "First we claim that $F$ is the top extender of ${\\mathcal {M}}^{\\mathcal {U}}_\\nu $ .", "Otherwise, $({\\mathcal {M}}^{\\mathcal {U}}_\\nu ||lh(F),F)\\lhd {\\mathcal {M}}^{\\mathcal {U}}_\\nu $ is sound.", "By the above paragraph, $D\\notin Ult(N,F)$ .", "This implies that $F$ is a type A extender, by the initial segment condition.", "Note also that $Ult(N,F) = {\\mathcal {M}}^{\\mathcal {U}}_\\nu || lh(F)$ .", "Now consider the factor map, where $D^*$ is the Jensen completion of $D$ $\\tilde{k}: W =_{def} (Ult(N,D^*),D^*) \\rightarrow Y =_{def} (Ult(N,F),F)$ .", "We note that crt$(\\tilde{k}) = \\beta \\ge \\alpha _\\xi $ , $W, Y$ are premice of the same type, and $\\tilde{k}$ is nearly elementary.", "Since $D \\notin Ult(N,F)$ , $W \\notin Y$ and $\\rho (Y)\\le \\alpha _\\xi $ .", "Also, $W, Y$ are $\\alpha _\\xi $ -sound, and $\\rho _1(W)=\\alpha _\\xi \\le \\beta $ .", "We can apply Theorem and conclude that $W$ is the $\\alpha _\\xi $ -core of $Y$ .", "Since $Y = {\\mathcal {M}}^{\\mathcal {U}}_\\nu |lh(F) \\lhd {\\mathcal {M}}^{\\mathcal {U}}_\\nu $ , $Y$ is sound.", "We then conclude that $W=Y$ and $D^*=F$ .A another argument is to use the ms-$\\textsf {ISC}$ of $M^{\\mathcal {U}}_\\nu $ to get $D^*$ is on the sequence of $Q$ and this is a contradiction to the fact $Q$ is a pfs premouse and $\\rho (Q) = \\mu = \\textrm {crt}(D^*)$ .", "This means $F$ is on the extender sequence of ${\\mathcal {M}}^{\\mathcal {U}}_\\tau $ (by the agreement of ${\\mathcal {M}}^{\\mathcal {U}}_\\tau $ and ${\\mathcal {M}}^{\\mathcal {U}}_\\nu $ ).", "So $\\tau =\\nu $ , which contradicts our assumption $\\nu >\\tau $ .", "Let $G$ be the first extender used on the branch $[0,\\nu ]_U$ that has length $> \\alpha _\\xi $ .", "Then crt$(G)\\ge \\alpha _\\xi $ .", "Otherwise, $\\mu \\notin $ rng$(\\hat{i}^{\\mathcal {U}}_{0,\\nu })$ , but we know $\\mu \\in $ rng$(\\hat{i}^{\\mathcal {U}}_{0,\\nu })$ as $\\mu $ is the critical point of the top extender of ${\\mathcal {M}}^{\\mathcal {U}}_\\nu $ .", "Then $G$ has to be applied to (an initial segment of) ${\\mathcal {M}}^{\\mathcal {U}}_\\tau $ since $\\tau $ is the least $\\tau ^{\\prime }$ such that crt$(G)<\\lambda (E^{\\mathcal {U}}_{\\tau ^{\\prime }})$ .Recall we already know that for any $\\tau ^{\\prime }<\\tau $ , $lh(E^{\\mathcal {U}}_{\\tau ^{\\prime }})\\le \\alpha _\\xi $ .", "We have shown $\\tau \\in [0,\\nu ]_U$ .", "Claim 15.", "The Jensen completion $D^*$ of $D$ is on the sequence of $Q$ .", "The proof of Claim 14 implies that if $\\nu > \\tau $ , then crt$(\\hat{i}^{\\mathcal {U}}_{\\tau ,\\nu }) > \\alpha _\\xi $ and if $D^*=F$ then $F$ is on the extender sequence of ${\\mathcal {M}}^{\\mathcal {U}}_\\tau $ .", "In this case, using the fact that $\\rho (Q)=\\mu $ and $\\alpha _\\xi = (\\mu ^+)^Q$ , we get $F$ must be on the $Q$ -sequence.", "We assume $D^*\\ne F$ .", "In this case $F$ is the top extender of ${\\mathcal {M}}^{\\mathcal {U}}_\\nu $ .", "The proof of Claim 12 gives that if ${\\mathcal {M}}^{\\mathcal {U}}_\\nu $ is $\\alpha _\\xi $ -sound, then $D^* = F$ .", "So we may assume that ${\\mathcal {M}}^{\\mathcal {U}}_\\nu $ is not $\\alpha _\\xi $ -sound.", "So the branch $[0,\\nu ]_U$ must have truncation points.", "We let $\\epsilon \\in [0,\\nu ]_U$ be last truncation point of $[0,\\nu ]_U$ .", "So there is $Y\\lhd {\\mathcal {M}}^{\\mathcal {U}}_\\epsilon $ such that $\\hat{i}^{\\mathcal {U}}_{\\epsilon ,\\nu }: Y \\rightarrow {\\mathcal {M}}^{\\mathcal {U}}_\\nu 
$ has critical point $> \alpha _\xi $ .", "$Y$ is sound with top extender $E$ such that $\hat{i}^{\mathcal {U}}_{\epsilon ,\nu }(E) = F$ .", "Using the same argument as in Claim 14, we get that $D^* = E$ is on the $Q$ -sequence.", "The last claim immediately gives us a contradiction.", "Note that $\rho (Q) = \mu $ and by the claim, there is an extender on the sequence of $Q$ with critical point $\mu $ .", "This contradicts the fact that $Q$ is a solid pfs premouse.", "This contradiction completes the proof of Theorem .", "We can drop the hypothesis that $\mathrm {crt}(\pi ) < \rho _{k(H)}(H)$ from Theorem , at the cost of omitting its conclusions concerning condensation of the external strategies.", "This will be useful in the proof of square.", "[$\sf {AD}^+$ ] Suppose $(M,\Sigma )$ is a pfs mouse pair with scope HC and $M$ is stable, and of type 1.", "Suppose $\pi :H \rightarrow M$ is nearly elementary, and not the identity.", "Let $\alpha = \mathrm {crt}(\pi )$ , and suppose (1) $H$ is a pfs premouse of the same type as $M$ , i.e. $H$ is stable and is of type 1 and $H$ is branch (extender) active iff $M$ is branch (extender) active, (2) $H$ is $\alpha $ -sound, and (3) $H$ is not the $\alpha $ -core of $M$ .", "Then exactly one of the following holds.", "(a) $H \lhd M$ .", "(b) $H \lhd Ult_0(M, \dot{E}^M_\alpha )$ .", "Let $n$ be largest such that $\alpha < \rho _n(H)$ and $n \le k(H)$ .", "Let $G$ and $N$ be the same as $H$ and $M$ , except that $k(G) = n = k(N)$ .", "Let $\Psi = \Sigma _N^\pi $ .", "The hypotheses of hold of $(G,\Psi )$ , $(N,\Sigma _N)$ , and $\pi $ .", "(We have $H \in M$ by , hence $G \in N$ , hence $G$ is not the $\alpha $ -core of $N$ .)", "Hence one of the conclusions of holds of them.", "If it is conclusion (a), then $G \lhd N$ , which easily implies $H \lhd M$ .", "If it is (b), then $G \lhd Ult_0(N, \dot{E}^M_\alpha )$ yields $H \lhd Ult_0(M, \dot{E}^M_\alpha )$ .", "[$\sf {AD}^+$ ] Let $(M,\Sigma ), \alpha , \pi $ etc.", "be as in the hypothesis of Theorem .", "Assume additionally that $H$ is sound, $p(M)\in rng(\pi )$ and $\alpha > \rho (H)=\rho (M)$ .", "Then $\pi $ is the core map.", "Furthermore, letting $\rho = \rho (H)$ , then $H|(\rho ^+)^H = M|(\rho ^+)^M$ .", "Letting $\Psi = \Sigma ^\pi $ and $K = H|(\rho ^+)^H$ , then $\Sigma _K = \Psi _K$ .", "[Proof sketch.]", "It is clear that $\pi $ is the core map.", "The first conclusion follows from [11].", "For the second conclusion, we apply Theorem to the map $\pi \upharpoonright N: N \rightarrow \pi (N)$ for each $N\lhd K$ such that $\rho (N) = \rho $ , noting that $N\in \pi (N)$ .", "We get that $\Sigma _N = \Psi _N$ .", "Therefore, $\Sigma _K = \oplus _{N\lhd K, \rho (N)=\rho } \Sigma _N = \oplus _{N\lhd K, \rho (N)=\rho } \Psi _N = \Psi _K$ ." ] ]
[ [ "Red PANDA: Disambiguating Anomaly Detection by Removing Nuisance Factors" ], [ "Abstract Anomaly detection methods strive to discover patterns that differ from the norm in a semantic way.", "This goal is ambiguous as a data point differing from the norm by an attribute e.g., age, race or gender, may be considered anomalous by some operators while others may consider this attribute irrelevant.", "Breaking from previous research, we present a new anomaly detection method that allows operators to exclude an attribute from being considered as relevant for anomaly detection.", "Our approach then learns representations which do not contain information over the nuisance attributes.", "Anomaly scoring is performed using a density-based approach.", "Importantly, our approach does not require specifying the attributes that are relevant for detecting anomalies, which is typically impossible in anomaly detection, but only attributes to ignore.", "An empirical investigation is presented verifying the effectiveness of our approach." ], [ "Introduction", "Anomaly detection, discovering unusual patterns in data, is a key capability for many machine learning and computer vision applications.", "In the typical setting, the learner is provided with training data consisting only of normal samples, and is then tasked with classifying new samples as normal or anomalous.", "It has emerged that the representations used to describe data are key for anomaly detection in images and videos [1].", "Advances in deep representation learning [2] have been used to significantly boost anomaly detection performance on standard benchmarks.", "However, these methods have not specifically addressed biases in data.", "Anomaly detection methods which suffer from the existence of such biases may produce more overall errors, and incorrectly classify as anomalies some types of samples more than others.", "A major source for such biases is the presence of additional, nuisance factors.", "One of the most important and unsolved challenges of anomaly detection is resolving the ambiguity between relevant and nuisance attributes.", "As a motivating example let us consider the application of detecting traffic violations in video.", "Normal samples consist of videos of usual traffic.", "When aiming to detect traffic violations, we may encounter two kinds of difficulties: (i) The distribution of anomalous samples is not known at training time e.g.", "bad driving may come in many forms: speeding, failure to yield, parking in a fire lane, etc.", "This is the standard problem addressed by most anomaly detection methods [3], [1], [4].", "(ii) There may be biases in the normal data.", "For example, assume that all the taxi drivers in the normal training dataset were females, while all the bus drivers were males.", "A female driving a bus lawfully, is likely to be considered an anomaly by current methods.", "Unlike previous works, we aim to disambiguate between true anomalies (e.g.", "traffic violations) and unusual variation of nuisance attributes in normal data (e.g.", "female bus drivers acting lawfully).", "Detecting normal but unusual variations according to nuisance attributes as anomalies may be a source of false positive alarms.", "It may also introduce an undesirable imbalance in the detected anomalies, or even falsely discriminating against certain social groups.", "The are many settings where some attribute combinations are missing from the training dataset but are considered normal: assembly line training images may be biased in 
terms of lighting conditions or camera angles - while these may be irrelevant to their anomaly score; photos of people may be biased in terms of ethnicity, for example when collected in specific geographical areas.", "Moreover, in some cases, normal attribute combinations may be absent just due to the rarity of some attributes (e.g.", "rare car colors may appear only with specific car models).", "Our technical approach proposes to ignore nuisance attributes by learning representations that are independent from them.", "Our approach takes as input a training set of normal samples, each labeled with the value of the nuisance attribute that we wish to ignore.", "Our approach utilizes a domain-supervised disentanglement approach [5] to remove the dependency on the provided nuisance attribute, while preserving as much information about the image (uncorrelated to that attribute) as possible.", "Specifically, we train an encoder with an additional per-domain contrastive loss term to learn a representation which is independent of the labeled nuisance attribute.", "For example, an encoder guided to be invariant to gender would be trained to contrast images of females against other images of females, but not against images of males (and vice versa).", "Additionally, a conditional generator is trained over the representations with a reconstruction term, to ensure the representations are informative.", "The combination of the two loss terms yields informative representations which are agnostic to the nuisance attributes.", "The representations are then combined with standard density estimation methods ($k$ nearest neighbors) for anomaly scoring.", "Our approach differs from previous approaches that propose to use some level of supervision for anomaly detection, such as out-of-distribution detection methods or weakly-supervised anomaly detection.", "Those approaches require knowledge of the true attribute relevant for discriminating anomalies - which is often impossible as anomalies are unexpected.", "In contrast, we require only knowledge of a subset of the factors that are not indicative for detecting the anomalies we wish to find - a far more realistic scenario.", "In fact, these additional labels are often provided by the datasets, such as self-identified age, gender, or race of employees or customers.", "In other cases, such labels are easily predicted using pretrained classifiers such as CLIP [6].", "As this task is novel, we present new benchmarks and new metrics for evaluation.", "Our benchmarks typically incorporate normal examples which experience unusual variation in a nuisance attribute.", "Our evaluation metrics measure both overall anomaly detection accuracy and the false alarm rate due to mistaking normal samples with nuisance variation as anomalies.", "Contributions: (i) Introducing the novel setting of Negative Attribute Guided Anomaly Detection (NAGAD).", "(ii) Presenting new evaluation benchmarks and metrics for the NAGAD setting. (iii) Proposing a new approach, REpresentation Disentanglement for Pretrained Anomaly Detection Adaptation (Red PANDA), using domain-supervised disentanglement to address this setting.", "(iv) Demonstrating the success of our approach through empirical evaluation."
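To make the two ingredients just described more concrete, the following is a minimal illustrative sketch of a per-domain contrastive term and a $k$NN-based anomaly score. It is our own pseudocode rather than the authors' released implementation: the use of PyTorch, the function names, the temperature value, the choice of cosine similarity, and $k=2$ are assumptions on our part, and the conditional generator / reconstruction branch is omitted.

```python
import torch
import torch.nn.functional as F

def per_domain_contrastive_loss(z, z_aug, nuisance_labels, temperature=0.25):
    # z, z_aug: (N, d) embeddings of two augmented views of the same images.
    # nuisance_labels: (N,) integer codes of the attribute to be ignored.
    z = F.normalize(z, dim=1)
    z_aug = F.normalize(z_aug, dim=1)
    sim = z @ z_aug.t() / temperature                        # (N, N) similarities
    same_domain = nuisance_labels.unsqueeze(0) == nuisance_labels.unsqueeze(1)
    # Only samples that share the nuisance label act as negatives, so the encoder
    # is never rewarded for separating the domains (e.g. female vs. male images).
    sim = sim.masked_fill(~same_domain, float("-inf"))
    targets = torch.arange(z.size(0), device=z.device)       # positive = own other view
    return F.cross_entropy(sim, targets)

def knn_anomaly_score(train_feats, test_feats, k=2):
    # Density-based scoring: mean distance to the k nearest normal embeddings.
    d = torch.cdist(test_feats, train_feats)                  # (n_test, n_train)
    return d.topk(k, dim=1, largest=False).values.mean(dim=1)
```

In this sketch, a larger score means the test embedding lies in a low-density region of the normal training embeddings and is therefore treated as more anomalous; since the embeddings are trained to carry no information about the nuisance attribute, unusual values of that attribute alone should not inflate the score.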
], [ "Related Works", "Classical anomaly detection methods.", "These may be grouped into three themes: (i) Density-estimation based methods.", "Estimation of the density of the normal data can be non-parametric methods, such as $k$ NN or kernel density estimation.", "Parametric methods, such as Gaussian Mixture Models (GMM) [7] learn a parametric representation of the data to estimate the probability density of the test samples.", "(ii) Reconstruction based methods - methods such as PCA learn to reconstruct well normal training samples.", "Anomalies coming from a different distribution might not reconstruct as well.", "(iii) One class classification methods - Learning a classifier to separate between the train normal samples and the rest of feature space (e.g.", "SVDD [8]).", "Deep anomaly detection methods.", "As only normal samples are available during training, we cannot learn features with standard supervision.", "Therefore, deep anomaly detection methods either use self-supervision learning to score the anomalies [9], or adapt a pretrained representation [9], [1], [10], [3], [11] to describe the normal training data.", "(i) Self-supervised methods - these methods learn to solve an auxiliary task on the normal samples, test the performance on new images, and score anomalies accordingly: the network is expected to perform better on the normal samples that come from a similar distribution [9].", "More recent works such as CSI [4] or DROC [12] use contrastive learning to learn a representation of the normal data.", "(ii) Adaptation of Pretrained Feature - Transfer learning of pretrained features was shown to give strong results for out of distribution detection by [9].", "Adaptation of pretrained features for anomaly detection was attempted by Deep-SVDD [3], which adapted features learnt by an auto-encoder using compactness loss.", "Perera & Patel suggested to training the compactness loss jointly with ImageNet classification [11].", "By incorporating early stopping and EWC regularization [13], PANDA [1] allowed feature adaptation without with mitigated catastrophic forgetting, resulting in better performance.", "Further improvement in pretrained feature adaptation was later suggested by MeanShifted [10], using contrastive learning to adapt the pretrained features to the normal training set.", "Domain-supervised disentanglement.", "Disentanglement is the process of recovering the latent factors that are responsible for the variation between samples in a given dataset.", "For example, from images of human faces we may recover the age of each person, his hair color, eye color, etc.", "In domain-supervised disentanglement, one assumes that a single such factor is labelled and aims to learn a representation of the other attributes independent of the labelled factor.", "This task was approached with variational auto-encoders [14], [15], and latent optimization [16], [17].", "Contrastive methods have also shown great promise with general disentanglement [18].", "This was followed by Kahana & Hoshen in domain-supervised disentanglement [5] who employed a contrastive loss for each set of similarly-labelled samples individually, learning a code which ideally describes only (and all) attributes which are uncorrelated to the labelled attributes.", "Domain-supervised disentanglement has been used for a variety of applications.", "Most notably, for generative models [19][16].", "Self-supervised models have also been discussed in the context of interpretability [20], abstract reasoning [21], domain 
adaptation [22], and fairness [23].", "Some previous works have considered using domain supervision for increasing fairness in anomaly detection [24], [25], [26].", "These methods aim at obtaining equal anomaly detection performance across the protected attributes.", "On the other hand, our objective is to ignore the nuisance attributes in order to improve the overall performance of the anomaly detection method." ], [ "Nuisance Attributes Mislead Anomaly Detectors", "Anomaly detection methods aim to detect samples deviating from the norm.", "However, operators of anomaly detection methods expect the deviation to be semantically relevant.", "As the anomaly detection setting is typically unsupervised, algorithms are not given guidance as to which mode of deviation is relevant and which is simply nuisance.", "Detecting anomalies via nuisance attributes is highly undesirable.", "For example, assume that both pose and car type are the data generating attributes - but pose is nuisance and car type is relevant.", "Images that differ from the normal data in the pose attribute but not the car model are likely to result in false positive detections.", "Current algorithms rely on different inductive biases to select the relevant attributes and remove the nuisance ones.", "The most common choice is manual feature selection, where the operator specifies particular features that would be the most relevant.", "Automated methods for learning features perform a similar function.", "Contrastive learning methods specify augmentations which remove specific attributes (minor color and location information) from the representation which are considered nuisance.", "This helps to select attributes more relevant to object-centric tasks.", "Similarly, representations pretrained on supervised object classification (e.g.", "ImageNet [27]), which have recently demonstrated very strong results for image anomaly detection, select object-centric attributes at the expense of others.", "The most extreme level of supervision is the out-of-distribution detection setting where the relevant attribute is labeled for all normal training data.", "However, this guidance is not available in the typical anomaly detection setting as anomalies are unexpected.", "Our novel setting, Negative Attribute Guided Anomaly Detection (NAGAD), allows specification of nuisance attributes which should be ignored by the anomaly detector.", "Differently from specifying the relevant attributes, which is not possible in anomaly detection, specifying nuisance attributes is often possible.", "Users may know in advance about the attributes they wish to exclude for anomaly detection; either due to legal and moral reasons, or due to prior domain knowledge.", "A natural way for specifying nuisance attributes is to provide labels for their different classes.", "For example, wishing to detect anomalies according to their type of shoe but not according to the image type, we may provide for each image a label for the image type (such as sketches vs.
photos labels, see Fig.REF ).", "Currently available anomaly detection approaches cannot directly benefit from such information and thus mitigate nuisance attributes only implicitly (using the mechanisms explained above).", "In Sec.", "we describe a specific technical approach for using the guidance for anomaly detection.", "However, we stress that our main contribution is this anomaly detection setting which we expect to significantly reduce false alarms in many cases.", "Figure: Samples from the Edges2Shoes datasets.", "Pseudo-anomalies are marked in green while true anomalies are marked in red.", "Both pseudo-anomalies and true anomalies appear only in the test set." ], [ "Obtaining Labels for the Nuisance Attribute", "Our approach, REpresentation Disentanglement for Pretrained Anomaly Detection Adaptation (Red PANDA), aims to achieve a representation invariant to a nuisance attribute of our dataset, leading to better detection of anomalies expressed in relevant attributes.", "To do so, we provide labels for the nuisance attribute.", "For example, when we wish to detect anomalies in drivers behaviour, we may consider the gender of the driver as a nuisance attribute.", "Therefore, we wish not to consider the gender attribute in our algorithm during anomaly detection and provide labels for it.", "We have a few options to achieve these labels.", "In some cases they may already exist in the dataset.", "A very natural such case is when we have data from a few static cameras, and wish to ignore the camera identity.", "In many other cases, a pretrained classifier, already trained for these specific attributes may provide such labels.", "Recently, pretrained models for text-based zero-shot classification such as CLIP [6] have shown promising results.", "They allow to supply of-the-shelf automatic labels for a very large set of attributes.", "We conducted a small experiment over the Edges2Shoes [28] dataset, automatically labelling it with CLIP, and achieved $99.97\\%$ accuracy in labelling whether an image is a photo or a sketch.", "Taken together, although in some cases collecting labels for nuisance attributes may be laborious, in many cases they can be achieved at virtually no cost." ], [ "Preliminaries", "In our setting, the training set consists of normal samples only denoted as ${\\cal {D}}_{train}$ .", "For each normal image $x_i \\in {\\cal {D}}_{train}$ we are also provided with its label $n_i$ describing the nuisance attribute we wish to ignore.", "Our evaluation set ${\\cal {D}}_{test}$ consists of both normal and anomalous samples.", "We denote the normal/anomaly label for a test image $x_i$ as $y_i$ .", "For each such dataset, each sample is described by multiple attribute labels $(n_i, a_i, b_i, c_i, ...) 
\\in N \\times A \\times B \\times C \\times ...$ , where $N$ describe our nuisance attribute, and $A, B, C, ...$ describe different relevant attributes (consider for example the identity of the object, the lightning condition, and camera angles as different attributes).", "We assume that the anomaly label is always a function of (potentially) all the relevant attributes $y_i = f_a(a_i, b_i, c_i, ...)$ .", "Namely, we assume the nuisance attribute $n_i$ never affects the anomaly label $y_i$ .", "We emphasize that in our described setting, none of the relevant attribute labels nor the anomaly labels are given during training.", "We aim to learn an encoder function $f$ mapping samples $x_i$ to a code describing their relevant attributes $f(x_i) \\in R^{d}$ .", "We also wish our codes to be aligned.", "This is, we wish our encoder to represent the relevant attributes in a way which is not affected by the nuisance attributes: $(a_i, b_i, c_i, ...) = (a_j, b_j, c_j, ...) \\leftrightarrow f(x_i) \\approx f(x_j)$ We also need our code to be informative - to represent sufficient information regarding our relevant attributes: $I\\big ((a_i, b_i, c_i, ...); x_i\\big ) \\approx I\\big ((a_i, b_i, c_i, ...); f(x_i)\\big )$ Given such a representation we may later score anomalies independently from any biases caused by the nuisance attribute we wish to ignore." ], [ "Contrastive Disentanglement", "In this section, we describe the technical approach we employ for ensuring that $f$ does not contain information on the nuisance attribute, while retaining as much information about the relevant attributes [5].", "Pretrained encoder.", "We initialize the encoder function $f$ with an ImageNet pretrained network.", "ImageNet-pretrained representations were previously shown to be very effective for image anomaly detection[1].", "Off-the-self pretrained representation, however, also encodes much information on the nuisance attributes.", "Therefore by themselves they do not satisfy our disentanglement objective.", "Contrastive loss.", "Our objective is that images that have similar relevant attributes but different nuisance attributes would have similar representations.", "Although we are not provided with supervised matching pairs, we use the proxy objective requiring the distribution of representations of images having different nuisance attributes to be the same [5].", "To match the distributions we first split our training data ${\\cal {D}}_{train}$ to disjoint subsets $S_{n_i}$ according to the nuisance attribute values: ${\\cal {D}}_{train} = \\mathop {\\vphantom{\\bigcup }\\mathchoice{\\leavevmode \\vtop {{\\hfil \\m@th \\displaystyle #\\hfil \\cr \\bigcup \\cr \\cdot \\crcr }}}{\\leavevmode \\vtop {{\\hfil \\m@th \\textstyle #\\hfil \\cr \\bigcup \\cr \\cdot \\crcr }}}{\\leavevmode \\vtop {{\\hfil \\m@th \\scriptstyle #\\hfil \\cr \\bigcup \\cr \\cdot \\crcr }}}{\\leavevmode \\vtop {{\\hfil \\m@th \\scriptscriptstyle #\\hfil \\cr \\bigcup \\cr \\cdot \\crcr }}}}\\displaylimits _{n_i\\in N} S_{n_i}$ We then employ a contrastive loss, on each of the sets $S_{n_i}$ independently: $\\mathcal {L}_{con} = \\log \\sum _{{\\cal {D}}_{train}, N }^{} \\mathbb {1}_{(n_i = n_j)}e^{sim\\big ((f(x_i), f(x_j)\\big )}$ This objective encourages the encoder to map the image distribution uniformly to the unit sphere (see Wang and Isola [29]), and therefore is likely to match the marginal distribution $F$ of latent codes across the nuisance attribute.", "Specifically, we would like the distribution of encoded features $F$ to 
be independent of the nuisance attribute $n_i$ : $p(F|n_i) = p(F|n_j)$ .", "We note that matching of marginal distributions is necessary, but not a sufficient condition for alignment (Eq.", "REF ).", "Yet, this often appears to happen in practice.", "Another problem that may arise is insufficient informativeness: the contrastive objective does not prevent ignoring some of the relevant attributes [30].", "To support informativeness, we add an augmentation loss, encouraging different augmentations of the same image to be mapped to similar codes: $\mathcal {L}_{aug} = -sim\Big (f\big (A_1(x_i)\big ), f\big (A_2(x_i)\big )\Big )$ .", "To further encourage informativeness, we also employ a reconstruction loss.", "Reconstruction loss.", "To require the representation to contain as much information about the relevant attributes as possible, we use a reconstruction constraint.", "Specifically, we require that given the combination of the representation $f_i$ (which ideally ignores the nuisance attribute) and the value of the nuisance attribute $n_i$ , it should be possible to perfectly recover the sample $x_i$ .", "This is enforced using a generator function $G$ which performs this as a regression task.", "The generator is trained end-to-end together with the encoder.", "The reconstruction is measured using a perceptual loss.", "$\mathcal {L}_{rec} = \sum _{{\cal {D}}_{train}, N }^{}\ell _{perceptual}\Big ( x_i, G \big (f(x_i), n_i\big )\Big )$" ], [ "Anomaly Scoring", "In the previous steps we learned an encoder $f$ that maps each image into a compact representation of its relevant attributes.", "In this section, we estimate the probability distribution of the normal data in the representation space for anomaly detection.", "Similarly to other anomaly detection methods, we hypothesize that anomalous samples will be mapped to low-density regions, while normal data will be mapped to high-density regions.", "This assumption is violated in the case where the representation contains both relevant and nuisance attributes, as unusual combinations of relevant and nuisance attributes will be rare and therefore classified as anomalous.", "However, if the representation contains only relevant attributes, low-density regions would indeed correspond to samples with rare relevant attributes - which are indeed likely to be anomalous.", "To numerically estimate the density of the normal data around each test sample, we use the $k$ nearest neighbours algorithm ($k$ NN).", "We begin with extracting the representation for each normal sample: $ f_i = f(x_i), \quad \forall x_i \in {\cal {D}}_{train}$ .", "Next, for each test sample we infer its latent code $f_t = f(x_t)$ .", "Finally, we score it by the $k$ NN distance to the normal data: $S(x_t) = \sum _{f_i \in N_k(f_t)}^{} sim(f_i, f_t)$ where $N_k(f_t)$ denotes the $k$ most similar relevant attribute feature vectors in the normal data.", "We note that although we trained our encoder $f$ with a contrastive loss, encouraging uniform distribution on the sphere, the high dimension of the latent space allows us to distinguish between high and low density areas of the distribution of normal data.", "Runtime.", "Although $k$ NN has runtime complexity linear in the number of training data, it can be sped up using $k$ -means or core-set techniques (as done in SPADE [31] or PatchCore [32]).", "In practice, the wall-clock runtime of the retrieval stage of our approach is minimal, even without such speedups (>3500 images per second for the SmallNorb dataset).", "Figure:
Samples from the Cars3D dataset.", "Pseudo-anomalies are marked in green while true anomalies are marked in red.", "Both pseudo-anomalies and true anomalies appear only in the test set." ], [ "Setting", "Benchmark construction.", "As our anomaly detection setting is novel, new benchmarks need to be designed for its evaluation.", "The following protocol is proposed for creating the benchmarks.", "First, we select an existing dataset containing multiple labelled attributes.", "We designate one of its attributes as nuisance, e.g.", "the object pose, and other attributes as relevant, e.g.", "the identity of the object.", "Only the relevant attributes are used to designate an object as anomalous, whereas the nuisance attribute is not.", "We then remove some combinations of nuisance and relevant attributes from the training set, creating bias in the data.", "For example, we may remove all left-facing cars for one car model, and right-facing cars for another car model.", "As these combinations of attributes are not present in the normal train set, we refer to them as pseudo-anomalies.", "We refer to any sample that shares all the attributes (including nuisance attributes) with a normal training sample as a familiar sample.", "In this setting, we aim both to detect true anomalies (anomalies according to the relevant attributes), and to treat pseudo-anomalies as normal, like the familiar samples, since they differ from the normal data only in nuisance attributes.", "Metrics.", "In our setting, we wish not only to measure our overall anomaly detection performance, but also to evaluate the false alarm rate due to pseudo-anomalies.", "We therefore report our results in terms of three different scores.", "Each such score uses two subsets of the test set, and measures by ROC-AUC how well our anomaly detection score distinguishes between them: (i) Standard anomaly detection (AD)-Score, which measures how accurately anomalies are detected with respect to the normal test data (both seen combinations and pseudo anomalies).", "(ii) Pseudo anomalies (PA)-Score: measures how much pseudo-anomalies are scored as more anomalous than familiar-samples.", "(iii) Relative abnormality (RA)-score: measures how accurately true anomalies are detected compared to pseudo-anomalies.", "Taken together, these metrics measure the accuracy of an anomaly detector while not being biased by unseen combinations of nuisance attributes."
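The three scores above can be computed directly from per-sample anomaly scores. The following sketch, which assumes scikit-learn and a group label per test sample in {"familiar", "pseudo", "true"}, is illustrative only and is not the authors' evaluation code.

```python
# Illustrative computation of the AD-, PA- and RA-scores (ROC-AUC between test subsets).
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_between(scores, groups, positive, negative):
    """ROC-AUC of `scores` for separating the `positive` groups from the `negative` ones."""
    mask = np.isin(groups, positive + negative)
    labels = np.isin(groups[mask], positive).astype(int)
    return roc_auc_score(labels, scores[mask])

def nagad_metrics(scores, groups):
    scores, groups = np.asarray(scores), np.asarray(groups)
    return {
        # (i) true anomalies vs. all normal test data (familiar samples + pseudo-anomalies)
        "AD": auc_between(scores, groups, ["true"], ["familiar", "pseudo"]),
        # (ii) pseudo-anomalies vs. familiar samples (lower means fewer nuisance false alarms)
        "PA": auc_between(scores, groups, ["pseudo"], ["familiar"]),
        # (iii) true anomalies vs. pseudo-anomalies
        "RA": auc_between(scores, groups, ["true"], ["pseudo"]),
    }
```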
], [ "Results", "We report the results on three multi-attribute datasets based on Cars3D, SmallNORB and Edges2Shoes.", "We chose these specific datasets as they are the common datasets in the field of domain-supervised disentanglement [16], [5].", "We find these datasets to be also non-trivial for state-of-the-art anomaly detection algorithms.", "Compared Methods.", "DN2 [1].", "A simple but effective approach fully reliant on pertaining.", "It uses an ImageNet-pretrained network to extract representations for each image.", "Each test image is scored using $k$ NN density estimation similarly to our approach.", "MeanShifted [10].", "A recent method that achieves state-of-the-art performance on standard anomaly benchmarks.", "It uses a modified contrastive learning loss to adapt its feature to the normal train set.", "This method uses the same pretrained network as our method to initialize the features.", "It then uses a $k$ NN for anomaly scoring.", "CSI [4].", "A strong self-supervised anomaly detection method that does not rely on pretraining.", "It uses two types of augmentations, fine changes simulating positive contrastive loss samples, and domain shifts as negative samples.", "Anomaly scoring is performed using an ensemble of similarity scores based on the learnt features.", "SimCLR [33].", "An ablation of our approach that trains a single contrastive loss rather than a different contrastive loss for each domain.", "We score the anomalies similarly to our approach.", "Evaluation.", "For each dataset, we label each sample as either normal, true anomaly, or pseudo-anomaly as detailed below.", "We include true anomalies and pseudo-anomalies only in the test set, and split the normal samples between the training set and the test set ($85\\% / 15\\%$ train/test split).", "Datasets.", "Cars3D [34].", "A synthetic image dataset, with each image formed from two attributes: car model and pose.", "Car models are varied across different colors, shapes and functionalities.", "Each car model is observed from multiple camera angles (pose).", "We define true anomalies as 5 (randomly selected) car models.", "To simulate pseudo anomalies, we randomized for each camera angle another single car model and labeled it as pseudo-anomaly.", "An illustration of the dataset can be seen in Fig.", "REF .", "We can see in Tab.REF that the disentanglement approach significantly outperforms methods that do not use any guidance to remove the nuisance attribute.", "The method detects true anomalies, without assigning high anomaly scores to the pseudo-anomalies, significantly better than all other methods compared.", "The RA-Score shows that our detector scores true anomalies significantly higher than pseudo-anomalies.", "Table: Empirical Evaluation on the Cars3D Dataset (ROC-AUC)SmallNorb [35].", "Each image is synthetically constructed from several attributes: object type, camera azimuth, camera elevation and lighting.", "The object types come from different categories such as animals, people, planes, trucks and cars.", "To simulate our anomalies we randomized a single object class (e.g.", "deer) from each category type.", "We define the camera azimuth angles as our nuisance attribute.", "For each azimuth angle value we randomize a single object class, and assign samples of that type and camera angle as pseudo-anomalies.", "We can see in Tab.REF that our approach outperforms the baselines on all metrics.", "All the methods utilizing pretrained features detect true anomalies fairly well.", "This could be expected, as 
during pretraining the network learns a good representation of objects.", "Our disentanglement approach significantly reduces the tendency to score pseudo-anomalies as anomalies.", "CSI treats pseudo-anomalies similarly to normal samples, but this is most likely because its representation for this dataset is not informative, and does not distinguish well between unseen data (be it true anomalies or pseudo-anomalies) and the rest of the test data.", "Table: Empirical Evaluation on the SmallNorb Dataset (ROC-AUC)", "Edges2Shoes [28].", "This dataset contains photos of shoes and edge-map images of the same photos.", "They are labelled in terms of image type (sketch vs. photo), shoe type, and other attributes (the labels come from the original UT-Zappos50K dataset [36]).", "We assign all images with shoe type \"slippers\" as a true anomaly.", "We assign all photos of type \"sandal\", and all sketches of type \"boot\" as pseudo-anomalies.", "An illustration of the dataset can be seen in Fig.", "REF .", "This dataset is challenging as the photo and sketch domains are quite far apart, making the nuisance attribute dominant.", "E.g., by observing only sketches of boots, real photos of boots could be easily considered as anomalies without further guidance.", "Our approach outperforms methods that do not remove nuisance attributes from the representation.", "We observe (by the PA-score) that although the pseudo-anomalies are indeed scored higher than normal images by our approach, their scores are still lower than those of the true anomalies (as demonstrated by the RA-score).", "Our approach significantly outperforms the baselines, showcasing the importance of specifying and removing nuisance attributes.", "Table: Empirical Evaluation on the Edges2Shoes Dataset (ROC-AUC)" ], [ "Discussion", "Multi-attribute datasets.", "Many datasets (e.g.", "SmallNorb) have more than two attributes.", "In some cases, we may wish to remove multiple nuisance attributes.", "Methods such as [16] very naturally extend to the case of disentangling many factors of the same dataset.", "We believe that our approach can also be extended to this setting.", "Supervised vs.
self-supervised pretraining.", "Many top performing approaches (including ours) rely on externally-pretrained weights for initializing their neural networks.", "Pretrained weights implicitly provide useful guidance regarding the relevant attributes we should focus on, and the ones we may wish to ignore (e.g.", "low-level image information).", "Different pretrained networks provide different relevant/nuisance attribute splits.", "We found that pretrained weights obtained from supervised classification on external datasets such as ImageNet tend to emphasize the main object featured in the center of the image, and are more invariant to other attributes.", "Representations learned by self-supervised pretraining on external datasets are affected both by the external dataset and by the augmentations used for its contrastive learning.", "Therefore they have different inductive biases.", "Augmentations.", "Different methods may require augmented images to be similar or dissimilar to the original image [33], [4].", "This choice tends to have a strong effect on the results.", "E.g., a network trained to be rotation invariant may fail when the relevant attribute is image orientation angle.", "Our approach only uses simple augmentations such as Gaussian blurring, saturation and crops (an illustrative sketch of such a pipeline is given after the conclusion).", "We expect these augmentations not to restrict the anomalies detectable in the vast majority of cases.", "In general, augmentations should be carefully inspected when deploying anomaly detection methods in practice.", "Removing nuisance attributes with generative models.", "Recently, generative models, e.g.", "StyleGAN [37], have been able to learn very powerful representations for several data types, particularly images of faces.", "Their representations exhibit a certain level of disentanglement [38].", "When available, such models can be utilized for removing nuisance attributes in a similar approach to ours." ], [ "Limitations", "Domain supervised disentanglement in the wild.", "Currently, state-of-the-art domain-supervised disentanglement methods achieve impressive results on synthetic or curated datasets.", "Such methods do not perform as well for in-the-wild datasets.", "As our approach heavily relies on disentanglement, it is prone to similar limitations.", "As the field of disentanglement advances, the advancements can be directly translated to improved anomaly detection in our approach.", "Highly biased datasets.", "Similarly to other disentanglement approaches, we require the distributions of relevant attributes across nuisance domains to be somewhat similar.", "We have shown that our method can work when the supports across domains are not fully overlapping.", "Still, we expect that when the support is highly non-overlapping the results will significantly deteriorate.", "Developing methods able to disentangle domains with highly non-overlapping support is an exciting future direction." ], [ "Conclusion", "We proposed a new anomaly detection setting where information is provided on a set of attributes that are known to be irrelevant for distinguishing normal from anomalous data.", "Using a disentanglement-based approach, we showed how this additional supervision can be leveraged for better anomaly detection in biased datasets.", "As knowing a subset of the attributes that are irrelevant is much easier than knowing in advance the entire list of relevant attributes, we expect our new setting to be useful in practical applications."
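As referenced in the discussion of augmentations above, here is a possible torchvision rendition of the simple pipeline (Gaussian blurring, contrast/saturation changes and crops), using the parameter values reported in the implementation details further below; the crop transform, its size and the transform order are assumptions for illustration only, not the authors' code.

```python
# Illustrative augmentation pipeline (assumed ordering and crop settings).
import torchvision.transforms as T

augment = T.Compose([
    T.RandomResizedCrop(64),                                     # images are used at 64x64 resolution
    T.ColorJitter(contrast=(1.8, 3.0), saturation=(1.8, 3.0)),  # high contrast and saturation
    T.GaussianBlur(kernel_size=5, sigma=1.0),                    # Gaussian blurring
    T.ToTensor(),
])
```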
], [ "Acknowledgements", "This work was partly supported by the Malvina and Solomon Pollack scholarship and an Israeli Council for Higher Education grant in the data sciences." ], [ "Disentanglement module", "We use most of the parameters as in the DCoDR paper[5] for our disentanglement module.", "All images were used in a $64\\times \\ 64$ resolution.", "For the contrastive temperature, we use $\\tau = 0.1$ for all the datasets.", "We scale down the loss $\\mathcal {L}_{rec}$ by a factor of $0.3$ .", "Architecture.", "A ResNet50 encoder pretrained on image classifications.", "In accordance with previous works, we add 3 fully-connected layer to the encoder for the SmallNorb dataset [16], [5].", "For the perceptual loss of the generator we used a VGG network pretrained on ImageNet.", "Optimization.", "We use 200 training epochs.", "In each batch we used 32 images from 4 different nuisance classes (a batch size of 128, in total).", "We used a learning rates of $1\\cdot 10^{-4}$ and $3\\cdot 10^{-4}$ for the encoder and generator (respectively).", "Augmentation.", "We used Gaussian blurring (kernel_size $=5$ , $\\sigma =1$ ), high contrast (contrast $=(1.8,3.0)$ ), and high saturation (saturation $=(1.8,3.0)$ ) for our augmentation.", "For Edges2Shoes we used only Gaussian Blurring.", "For the SimCLR [33] contrastive learning (both in our approach and the baseline), we follow DCoDR by only augmenting the original image once, and comparing the augmented and the original views encodings.", "This in contrast to SimCLR which compares two augmented views instead." ], [ "Scoring module", "We use faiss[39] $k$ NN implementation, using $k = 1$ .", "As our similarity measure we use Cosine distance, similarly to the distance used during our contrastive training." ], [ "Datasets", "To simulate anomalies in the dataset, we first designate true anomalies as described in Sec.. We then chose combination of normal classes and the nuisance attribute to designate pseudo anomalies.", "We used the following random combinations for pseudo anomalies: Cars3D: Table: Cars3D Psuedo Anomalies SelectionSmallNorb: Table: Smallnorb Psuedo Anomalies SelectionEdges2Shoes: Table: Edges2Shoes Psuedo Anomalies SelectionFinally we take all of the psuedo anomalies to the test set." ], [ "Compute Resources", "The entire project used in total 3000 hours of NVIDIA RTX A5000 GPU (including development, testing and comparisons).", "All resources were supplied by a local internal cluster." ], [ "Typical Statistical Error in Experimental Results", "As our experiments are relatively long and results are fairly consistent among different runs we do not provide an error bar for each single run.", "As a typical case, we ran 3 repetitions of our approach for the SmallNorb experiments.", "The consistency of the results is presented in Tab.REF .", "Table: Consistency of results among repetitions for the SmallNorb dataset (ROC-AUC)" ], [ "License", "Our technical approach is based on the DCoDR paper[5] with SOFTWARE RESEARCH LICENSE detailed herehttps://github.com/jonkahana/DCoDR/blob/main/LICENSE.", "The implementation uses the PyTorch and faiss [39] packages.", "PyTorch Uses a BSD-style license, as detailed in their license filehttps://github.com/pytorch/pytorch/blob/master/LICENSE.", "faiss uses MIT License.", "The CLIP[6] network we used for automatic labelling uses MIT License.", "SimCLR [33] used by DcoDR and as a baseline uses Apache License." ] ]
2207.03478
[ [ "Harmonic calibration of quadrature phase interferometry" ], [ "Abstract The two output signals of quadrature phase interferometers allow to benefit both from the high sensitivity of interferometry (working inside a fringe) and from an extended input range (counting fringes).", "Their calibration to reach a linear output is traditionally performed using Heydemann's correction, which involves fitting one output versus the other by an ellipse.", "Here we present two alternative methods based on the linear response of the measurement to a sinusoidal input in time, which enables a direct calibration with an excellent linearity.", "A ten fold improvement with respect to the usual technique is demonstrated on an optical interferometer measuring the deflection of scanning force microscopy cantilevers." ], [ "Harmonic calibration of quadrature phase interferometry Baptiste Ferrero Univ Lyon, ENS de Lyon, CNRS, Laboratoire de Physique, F-69342 Lyon, France Ludovic Bellon ludovic.bellon@ens-lyon.fr Univ Lyon, ENS de Lyon, CNRS, Laboratoire de Physique, F-69342 Lyon, France The two output signals of quadrature phase interferometers allow to benefit both from the high sensitivity of interferometry (working inside a fringe) and from an extended input range (counting fringes).", "Their calibration to reach a linear output is traditionally performed using Heydemann's correction, which involves fitting one output versus the other by an ellipse.", "Here we present two alternative methods based on the linear response of the measurement to a sinusoidal input in time, which enables a direct calibration with an excellent linearity.", "A ten fold improvement with respect to the usual technique is demonstrated on an optical interferometer measuring the deflection of scanning force microscopy cantilevers.", "Interferometers are todays gold standard when it comes to measuring displacements or deformations with a high precision[1].", "Gravitational wave interferometers[2], as glaring examples, reach an impressive resolution down to ${e-20}{m/\\sqrt{Hz}}$ .", "Such an achievement is obtained by maintaining the working point around the maximum sensitivity of the detector, thus forbidding simple measurements of large deformations $d$ .", "Indeed, the output of interferometers is always non-linear with their input $d$ : the optical power $I$ at their output is a periodic function of an optical phase $\\varphi \\propto d/\\lambda $ , where $\\lambda $ is the wavelength of the source.", "The linearization of the output is then only possible for a fraction of the wavelength: $d\\ll \\lambda $ .", "In the simplest case (single reflection of the probe beam on the moving part), this periodic function is sinusoidal: $ I=I_0+I_1\\cos (\\varphi ),$ with $\\varphi = 4\\pi d/\\lambda $ , and $I_0$ and $I_1$ two parameters that can be calibrated by exploring a range in $d$ larger than $\\lambda /2$ , thus a range in $\\varphi $ larger than $2\\pi $ .", "An adequate workaround to circumvent the non-linear output is then to create within the interferometer a second optical signal in quadrature with the first one[1], [3]: $ Q=Q_0+Q_1\\sin (\\varphi + \\psi ),$ with $Q_0$ , $Q_1$ , and $\\psi $ (respectively offset, amplitude and deviation to perfect quadrature) three more parameters that can be calibrated with a large excursion in $\\varphi $ .", "Figure: We use a quadrature phase differential interferometer (QPDI) to measure the deflection of an atomic force microscopy (AFM) cantilever.", "The optical phase ϕ=4πd/λ\\varphi = 4 
\\pi d/\\lambda between the two laser beams after reflection on the support and free end of the cantilever encodes the deflection dd of the latter, with an absolute calibration with respect to the wavelength λ=633nm\\lambda ={633}{nm} of the laser.", "The QPDI produces two outputs in quadrature, II and QQ.", "Imposing an harmonic oscillation of dd at angular frequency ω 0 \\omega _0 through a piezoceramic, we explore the full [0,2π][0,2\\pi ] range of phases.", "Plotting QQ versus II gives the calibration curve, where lies all measurements, and that can be parametrised by the polar angle θ\\theta .", "Calibrating the interferometer means finding the correspondance between θ\\theta and ϕ\\varphi .", "In Heydemann's correction scheme, this is realised by fitting the calibration curve by an ellipse.In practice in a $Q$ versus $I$ plot, we expect data to lay on an ellipse from eqs.", "(REF ) and (REF ).", "The ideal case, $I_1=Q_1$ and $\\psi =0$ , leads to a circle: the polar angle $\\theta $ of the measurement point on this circle is then a direct measurement of the optical phase $\\varphi $ .", "To account for imperfections of the actual instrument, it is customary to use Heydemann's correction[3]: the closed curve in the $I,Q$ plane corresponding to a large enough excursion in $\\varphi $ (the calibration curve) is fitted by an ellipse to extract the five parameters $I_0$ , $I_1$ , $Q_0$ , $Q_1$ and $\\psi $ , as illustrated in Fig.", "REF .", "Eqs.", "(REF ) and (REF ) are then reversed to compute the optical phase: $\\cos \\varphi &= \\frac{I-I_0}{I_1} \\\\\\sin \\varphi &= \\frac{Q-Q_0}{Q_1\\cos \\psi } + \\frac{I-I_0}{I_1} \\tan \\psi .$ Through the measurements of $I$ and $Q$ and the knowledge of the five calibration parameters, the simultaneous knowledge of $\\cos \\varphi $ and $\\sin \\varphi $ allows to extract of $\\varphi $ in the full $[0, 2\\pi ]$ interval from their signs and the arctan function applied to their ratio.", "Unwrapping $\\varphi $ as time runs can then be used to reach a virtually infinite input range, while maintaining the full sensitivity of the interferometric measurement for sub-fringe variations.", "This approach, first introduced by Heydemann[3], has been used in many devices and experiments, to linearise the dual output of quadrature phase interferometers with a very good precision[5], [6], [7], [8], [9], [4], [10].", "Though this approach can handle many imperfections of the interferometer, such as offsets, $I$ and $Q$ imbalance, imperfect quadrature, it still rely on the hypothesis that the periodic outputs are simple sinusoidal functions of the optical phase $\\varphi $ .", "Other imperfections, such has beam clipping, can actually create more complex periodic functions, than manifest as a calibration curve in the $I,Q$ plane that deviates from an ellipse, and higher order terms in the Fourier expansion of $I(\\varphi )$ and $Q(\\varphi )$ .", "One could add such terms a priori, and fit the $I,Q$ curve with a more general parametric curve, to extract more calibration parameters.", "The number of additional fitting terms to include would however need to be guessed, and the inversion problem might be difficult to perform.", "In this Letter, we use an alternative approach to calibration, which is in some sense more conventional.", "Let us first describe the $I,Q$ plane by its complex number representation $z=I+iQ-z_0$ , with $i=\\sqrt{-1}$ the imaginary number and $z_0$ the origin.", "We place $z_0$ somewhere inside the calibration curve (for exemple at 
midway between the minimum and maximum of $I$ and $Q$ ).", "Any measurement point can now be written as $z=|z|e^{i\\theta }$ , with $\\theta $ the argument of $z$ (or the polar angle) in this representation.", "For simple calibration curves (close to a circle or an ellipse for exemple), the relation between $\\theta $ and $\\varphi $ is bijective, and its knowledge is a calibration: once this bijection $\\theta =\\Theta (\\varphi )$ is known, any measurement point $z$ directly leads to the optical phase: $\\varphi =\\Theta ^{-1}(\\arg z)$ .", "It would be convenient to be able to apply a known and calibrated deformation $d$ , thus optical phase $\\varphi $ , spanning the full $[0, 2\\pi ]$ interval, and directly plot the measured $\\theta $ versus the applied $\\varphi $ to access the calibration function $\\Theta $ .", "However, such an approach supposes that we already have a calibrated instrument as precise as the interferometer we want to calibrate...", "The trick we implement in our approach is to use a pure harmonic calibration signal, obtained with a driving at a single frequency of a resonant system.", "Our setup, described in Refs.", "Bellon-2002, Paolino-2013 and Fig.", "REF , uses a quadrature phase interferometer to measure the deflection of a micro-cantilever used in an atomic force microscope (AFM).", "Using a waveform generator with a very low distortion, we drive the cantilever at its resonance frequency through a piezo-ceramic.", "The amplitude of the driving is small enough to have a linear response of the electro-mechanical components, all the more as the resonant behavior of the cantilever with a high quality factor efficiently filters out any signal at frequency higher than the resonance.", "The deflection $d$ of the cantilever can thus be considered purely harmonic at angular frequency $\\omega _0$ , so that the optical phase writes: $\\varphi (t)=\\varphi _0+\\varphi _1 \\cos [\\omega _0 (t-t_0)]$ .", "$\\omega _0$ is set by the operator, so if we manage to identify the three parameters $\\varphi _0$ , $\\varphi _1$ and $t_0$ , we will be able to plot $\\varphi (t)$ (imposed) versus $\\theta (t)$ (measured), and have a direct access of the calibration function $\\Theta ^{-1}$ .", "Figure: (a) Time trace θ(t)\\theta (t) superposed with 2π+θ(t+1 2T 0 )2\\pi +\\theta (t+\\frac{1}{2}T_0) during calibration, with T 0 T_0 the driving period.", "For periodicity reasons, the intersection of those curves define the time t π t_\\pi where ϕ(±t π )=π\\varphi (\\pm t_\\pi )=\\pi , entirely defining the optical phase ϕ(t)\\varphi (t).", "(b) Since ϕ(t)\\varphi (t) versus θ(t)\\theta (t) is very close to the identity (inset), we plot the correction ϕ-θ\\varphi -\\theta to add to the polar angle θ\\theta to compute the optical phase ϕ\\varphi .", "Heydemann's correction (red line), interpreting the data from the ellipse fitted on the calibration curve, and the harmonic calibration (blue line), proposed in this Letter, yield similar results, with a deviation up to 0.3rad{0.3}{rad} from the identity.", "The harmonic calibration function is computed with a low pass filtering of the experimental data (gray dots).$\\varphi _0$ is usually arbitrary in interferometry, and correspond to the origin of phase (or deflection in our case), we thus impose $\\varphi _0=0$ without loss of generality.", "Let us continue with $t_0$ : since $\\varphi $ is maximum at $t=t_0$ (and at every period $T_0=2\\pi /\\omega _0$ ), so is $\\theta $ : $\\dot{\\theta }(t_0)=\\Theta ^{\\prime }[\\varphi 
(t_0)]\\dot{\\varphi }(t_0)=0$ .", "We can therefore define the origin of time at a maximum of the measured $\\theta $ to set $t_0=0$ .", "We are therefore only left with $ \\varphi (t)=\\varphi _1 \\cos (\\omega _0 t),$ and we just need to extract the value of $\\varphi _1$ from the measurement.", "Let us first note that since we explore with the closed calibration curve the full range $[0, 2\\pi ]$ , we necessarily have $\\varphi _1>\\pi $ .", "Let us then define the smallest positive time $t_\\pi $ when $\\varphi (t_\\pi )=\\pi $ .", "From eq.", "(REF ), we know that half a period later, we will have $\\varphi (t_\\pi +\\frac{1}{2}T_0)=-\\pi =\\varphi (t_\\pi )-2\\pi $ .", "Since both phases are equal modulo $2\\pi $ , the measurement point is the same on the calibration curve.", "Unwrapping $\\theta $ as well, we should have $\\theta (t_\\pi +\\frac{1}{2}T_0)=\\theta (t_\\pi )-2\\pi $ .", "When plotting $\\theta (t)$ and $2\\pi +\\theta (t+\\frac{1}{2}T_0)$ , we thus see the two curves intersecting in $t=t_\\pi $ , as illustrated in Fig.", "REF (a).", "By symmetry, they should also intersect in $t=-t_\\pi $ .", "Note that the time origin can also be defined to meet this symmetry, instead of looking for a maximum of $\\theta $ .", "Once $t_\\pi $ is graphically determined, we directly have the value of $\\varphi _1$ , thus the full knowledge of $\\varphi (t)$ with eq.", "(REF ): since $\\varphi (t_\\pi )=\\pi $ , then $\\varphi _1=\\pi /\\cos (\\omega _0 t_\\pi )$ , and $\\varphi (t)=\\pi \\frac{\\cos (\\omega _0 t)}{\\cos (\\omega _0 t_\\pi )}.$ Finally, plotting $\\varphi (t)$ versus the measured $\\theta (t)$ directly leads to the calibration function $\\Theta ^{-1}$ , as illustrated in Fig.", "REF (b).", "To smooth the experimental raw data, we take advantage of the $2\\pi $ periodicity in $\\theta $ of $\\varphi -\\theta $ : first, we average the experimental data on 512 evenly distributed points in the $[-\\pi ,\\pi ]$ interval, then we remove high frequency noise by low pass filtering of this curve (Fast Fourier Transform (FFT) of the curve, remove high frequency and low amplitude components, inverse FFT)[13].", "The resulting curve can be interpolated to any input data $\\theta $ to compute the optical phase.", "Figure: (a) We apply steady driving to the cantilever at 50kHz{50}{kHz} while slowly changing the mean working point with an external optical phase ϕ ext \\varphi _\\mathrm {ext}.", "The oscillation amplitude ϕ 1 \\varphi _1 at 50kHz{50}{kHz} extracted from Heydemann's correction (×\\times ) shows a ±3%{\\pm 3}{\\%} rms variation around the mean value, whereas it is almost constant when using the harmonic calibration (++).", "We use the normalised standard deviation std (ϕ 1 )/〈ϕ 1 〉\\mathrm {std}(\\varphi _1)/\\langle \\varphi _1\\rangle as a figure of merit for the linearity of the measurement, and report it as a function the oscillation frequency (b, for an amplitude around 0.2rad{0.2}{rad}) or amplitude (c, amplitudes above 0.3rad{0.3}{rad} only for the first and second resonant modes of the cantilever, at f 0 =13.35kHzf_0={13.35}{kHz} and f 1 =84.64kHzf_1={84.64}{kHz}).To illustrate the gain in linearity from this calibration process with respect to the Heydemann correction, we perform the following experiment: after performing the calibration step, we apply to the cantilever a steady harmonic driving of low amplitude (deflection ${10}{nm}$ to ${60}{nm}$ , corresponding to an optical phase amplitude $\\varphi _1={0.2}{rad}$ to ${1.2}{rad}$ ) and high frequency (from 
${1}{kHz}$ to ${150}{kHz}$ ), and at the same time we artificially induce a low frequency drift (around ${1}{Hz}$ ) of the working point, to explore the full calibration curve.", "In our setup, this drift is controlled optically by adding an external phase $\\varphi _\\mathrm {ext}$ to the optical one $\\varphi $ , it could also be done by adding a low frequency force of large amplitude applied to the cantilever tip.", "The high frequency deflection amplitude, measured with a digital lock-in data processing, should remain constant, regardless of the working point, and thus probes the linearity of the output in the full input range.", "We report an example of this procedure in Fig.", "REF (a), for a $\\sim {0.2}{rad}$ oscillation at ${50}{kHz}$ .", "The amplitude extracted from Heydemann's correction shows a ${\\pm 3}{\\%}$ rms variation around the mean value while varying $\\varphi _\\mathrm {ext}$ , whereas the harmonic calibration yields a amplitude constant within ${\\pm 0.3}{\\%}$ .", "This one order of magnitude gain in linearity is true at any probed frequency, as show in Fig.", "REF (b).", "When probing larger amplitude, the non-linearity is averaged out in Heydemann's approach as the oscillation averages the sensitivity on a larger range, as shown by Fig.", "REF (c).", "In the meantime, it stays equally good using the harmonic calibration.", "Another way to probe the non-linearity of the output is to drive the cantilever at a given frequency and look for the generation of harmonics by the measurement process.", "We plot in Fig.", "REF (a) the Power Spectrum Density (PSD) of the deflection during the calibration run: the cantilever is driven at its first resonance $f_0={13.35}{kHz}$ with an amplitude $\\varphi _1={3.815}{rad}$ [data corresponding to Fig.", "REF (a)].", "We can observe how the peaks corresponding to the harmonics at integer multiples of $f_0$ decreases when choosing the harmonic calibration over Heydemann's correction.", "To be quantitative, we can compute the Total Harmonic Distortion (THD), defined as the integral of the PSD around all harmonics of the excitation frequency, normalised by the one around $f_0$ .", "For this specific amplitude and frequency, we measure a THD of ${2e-7}$ for the harmonic calibration, down from ${e-5}$ for the classic approach.", "This two orders of magnitude gain in THD is consistent on every oscillation amplitude explored, as shown in Fig.", "REF (b).", "Figure: (a) Power Spectrum Density (PSD) S ϕ S_\\varphi of the optical phase ϕ\\varphi while driving to the cantilever at its first resonance frequency f 0 =13.35kHzf_0={13.35}{kHz} with an amplitude ϕ 1 =3.8rad\\varphi _1={3.8}{rad}.", "All harmonics of this fundamental frequency (markers) are created by the residual non-linearities of the detection scheme.", "Heydemann's correction produces higher distortion than the harmonic calibration for all harmonics.", "(b) Total Harmonic Distortion (THD, ratio of all harmonics power over the fundamental) versus oscillation amplitude with a driving at f 0 f_0: the harmonic calibration is consistently better than the classic approach.", "(c) THD versus ϕ 1 guess \\varphi _1^\\mathrm {guess} for the calibration dataset: if we let the amplitude ϕ 1 guess \\varphi _1^\\mathrm {guess} as a adjustable parameter and compute for each guess the calibration function Θ -1 \\Theta ^{-1}, then the THD of Θ -1 θ ( t )\\Theta ^{-1}\\big (\\theta (t)\\big ), the latter presents a sharp minimum for the actual value of ϕ 1 \\varphi _1, here 
$3.815\,\mathrm {rad}$ .", "The THD can be used as an indicator of the performance of the calibration, but it can also be used to perform the calibration itself.", "Indeed, as mentioned earlier, the important parameter to extract from the calibration dataset is the imposed oscillation amplitude $\varphi _1$ , which can be done using the periodicity trick of Fig.", "REF (a).", "However, one can directly make a guess $\varphi _1^\mathrm {guess}$ for $\varphi _1$ , compute a calibration function $\Theta ^{-1}$ , and then compute the THD of the reconstructed phase $\Theta ^{-1}\big (\theta (t)\big )$ from the same calibration dataset.", "We then expect non-linearities to be present in the signal if the guess value $\varphi _1^\mathrm {guess}$ differs from the actual value $\varphi _1$ , thus degrading the THD.", "As shown in Fig.", "REF (c) by plotting the THD versus $\varphi _1^\mathrm {guess}$ , the curve presents a sharp minimum at $\varphi _1$ .", "This minimisation procedure is another way to perform the harmonic calibration, and the relative difference in $\varphi _1$ computed from both methods (periodicity trick or THD minimum) is only $10^{-4}$ with the dataset of Fig.", "REF (a): both calibrations lead to indistinguishable results.", "The benefit of the THD minimisation approach is that it is very general: it doesn't require the periodicity of the output, nor the minimum amplitude $\varphi _1>\pi $ .", "As a matter of fact, it could be applied to any measurement device having a non-linear output, for which applying a pure harmonic input is possible.", "To summarize, in this Letter we present a generic method to calibrate the dual output of quadrature phase interferometers, based on applying a pure harmonic input signal.", "Using a large enough oscillation (interferometer phase spanning more than $2\pi $ ), and the periodicity properties of the input and outputs, we can infer the amplitude of the input with no need for any other information.", "Plotting the inferred input versus the measured output yields the calibration function, which can subsequently be used for any input signal.", "We demonstrate on an experiment using a differential interferometer measuring the deflection of an AFM cantilever that a significant gain in linearity can be achieved with respect to the common Heydemann's correction approach.", "The THD can be used as a figure of merit for the calibration process, and can actually take part in the calibration procedure itself: its minimisation from guess values of the input amplitude bypasses the periodicity trick.", "Undoubtedly, the gain in performance is setup (and user) dependent: a perfectly tuned interferometer, with negligible imperfections, wouldn't benefit much from such a correction (just as Heydemann's wouldn't be necessary in such a case).", "However, in real life use, the approach is light to implement and could benefit many existing interferometric devices.", "If signal post-processing is sufficient, it can be performed with simple data analysis software.", "Once the calibration curve is extracted, a real-time implementation could also be performed using FPGA based devices.", "To conclude this Letter, let us mention two interesting perspectives for this work.", "First, multi-frequency AFM [11]: in this approach, one uses the non-linear characteristic of the tip-sample interaction to create from a single (at most a few) frequency driving of the cantilever a comb of response frequencies.", "The amplitude of those harmonics can then be used for imaging, or
even to reconstruct the non-linear interaction potential [12].", "The linearity of the detection is crucial in this technique, as one would like harmonics to come from the physics of the interaction rather than from the sensor.", "Moreover, the signal to noise ratio is important to get as many harmonics as possible: the more the merrier when it comes to the inversion problem.", "On those two criteria, an interferometric readout of the cantilever deflection (such as the one described in this Letter) with high sensitivity and linearity would be beneficial.", "A second interesting perspective is the very general calibration procedure offered by the THD minimisation approach.", "Indeed, it could be applied to any measurement device having a non-linear output for which applying a pure harmonic input is possible.", "In such a case, one can use guess amplitudes for the input, create a calibration curve, infer the THD, and then minimize the latter with respect to the guess value.", "As long as the instrument bandwidth is large enough to include a few harmonics of the forcing, this procedure can be applied and yields a direct linearization of the device output.", "Interestingly, the non-linear character of the output is instrumental for this linearization approach to work!", "Acknowledgments This work has been financially supported by the French région Auvergne Rhône Alpes through project Dylofipo and the Optolyse platform (CPER2016).", "We thank S. Ciliberto, C. Crauste, R. Pedurand, A. Labuda for enlightening scientific and technical discussions.", "Data availability The datasets and data processing tools to compute calibration functions that support the findings of this study are openly available in Zenodo at https://doi.org/10.5281/zenodo.6801118 [13]" ] ]
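As an addendum illustrating the processing chain of this Letter, here is a minimal NumPy sketch of the harmonic calibration: centring the calibration curve, setting the time origin at a maximum of $\theta $ , locating $t_\pi $ with the half-period intersection, and forming the $\theta \mapsto \varphi $ samples. The function name, the simple crossing detection and the data-layout assumptions are illustrative only; the authors' actual analysis tools are available in the Zenodo record cited above.

```python
# Illustrative sketch of the harmonic calibration of a quadrature phase interferometer.
import numpy as np

def harmonic_calibration(I, Q, t, w0):
    """Build samples of the theta -> phi calibration map from a sinusoidal drive.

    I, Q : quadrature outputs sampled at regularly spaced times t (the record should
           span at least ~1.5 drive periods); w0 : drive angular frequency.
    """
    T0 = 2.0 * np.pi / w0
    dt = t[1] - t[0]
    # 1) center the calibration curve and compute the unwrapped polar angle theta(t)
    z = (I - 0.5 * (I.max() + I.min())) + 1j * (Q - 0.5 * (Q.max() + Q.min()))
    theta = np.unwrap(np.angle(z))
    # 2) set the time origin at a maximum of theta within the first period (there dphi/dt = 0)
    i0 = int(np.argmax(theta[: int(T0 / dt)]))
    t = t - t[i0]
    # 3) periodicity trick: theta(t) and 2*pi + theta(t + T0/2) intersect at t = t_pi
    half = int(round(0.5 * T0 / dt))
    gap = theta[:-half] - (2.0 * np.pi + theta[half:])
    crossings = np.where(np.diff(np.sign(gap)) != 0)[0]
    t_pi = t[crossings[t[crossings] > 0][0]]      # smallest positive crossing time
    # 4) amplitude of the imposed phase, then phi(t) = phi1 * cos(w0 t)
    phi1 = np.pi / np.cos(w0 * t_pi)
    phi = phi1 * np.cos(w0 * t)
    # 5) the (theta, phi) pairs sample the calibration function Theta^{-1};
    #    in practice, average on a regular theta grid and low-pass filter before interpolating
    return theta, phi
```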
2207.03488
[ [ "The effect of smooth parametrizations on nonconvex optimization\n landscapes" ], [ "Abstract We develop new tools to study landscapes in nonconvex optimization.", "Given one optimization problem, we pair it with another by smoothly parametrizing the domain.", "This is either for practical purposes (e.g., to use smooth optimization algorithms with good guarantees) or for theoretical purposes (e.g., to reveal that the landscape satisfies a strict saddle property).", "In both cases, the central question is: how do the landscapes of the two problems relate?", "More precisely: how do desirable points such as local minima and critical points in one problem relate to those in the other problem?", "A key finding in this paper is that these relations are often determined by the parametrization itself, and are almost entirely independent of the cost function.", "Accordingly, we introduce a general framework to study parametrizations by their effect on landscapes.", "The framework enables us to obtain new guarantees for an array of problems, some of which were previously treated on a case-by-case basis in the literature.", "Applications include: optimization over low-rank matrices and tensors by optimizing over a factorization; the Burer--Monteiro approach to semidefinite programs; training neural networks by optimizing over their weights and biases; and quotienting out symmetries." ], [ "Introduction", "We consider pairs of optimization problems (REF ) and () as defined below, where $\\mathcal {E}$ is a linear space, $\\mathcal {M}$ is a smooth manifold, and $\\varphi \\colon \\mathcal {M}\\rightarrow \\mathcal {E}$ is a smooth (over)parametrization of the search space $\\mathcal {X}= \\varphi (\\mathcal {M})$ of (REF ):The two-headed arrow $\\twoheadrightarrow $ denotes a surjection.", "$\\begin{tikzcd}{\\mathcal {M}} \\\\{\\qquad \\mathcal {X}\\subseteq \\mathcal {E}} & \\mathbb {R}[\"\\varphi \"^{\\prime }, two heads, from=1-1, to=2-1][\"f\"^{\\prime }, from=2-1, to=2-2][\"{g=f\\circ \\varphi }\", from=1-1, to=2-2]\\end{tikzcd}$ $\\min _{x \\in \\mathcal {X}} f(x) \\\\\\min _{y \\in \\mathcal {M}} g(y) $ We usually assume that $f \\colon \\mathcal {E}\\rightarrow \\mathbb {R}$ is smooth, hence so is $g = f \\circ \\varphi \\colon \\mathcal {M}\\rightarrow \\mathbb {R}$ by composition.", "Such pairs of problems () and (REF ) arise in two scenarios (concrete examples follow): Our task is to minimize $f$ on $\\mathcal {X}$ as in (REF ), but we lack good algorithms to do so, e.g., because $\\mathcal {X}$ lacks regularity.", "In this case, we choose a smooth parametrization $\\varphi $ of $\\mathcal {X}$ , and use it to optimize over the smooth problem () instead.", "Our task is to minimize $g$ on $\\mathcal {M}$ as in (), but its landscape is complex (e.g., due to symmetries).", "In this case, we factor $g$ through a smooth map $\\varphi $ in the hope of revealing a problem (REF ) whose landscape is simpler and can be leveraged to analyze that of ().", "In both cases, we run an optimization algorithm on the smooth problem ().", "This algorithm may be able to find desirable points $y$ on $\\mathcal {M}$ for () (global or local minima, stationary points).", "However, in general such points need not map to desirable points $\\varphi (y)$ on $\\mathcal {X}$ for (REF ).", "Indeed, nonlinear parameterizations may severely distort landscapes.", "The presence of such spurious points is significant even if the algorithm does not converge to them, as practical algorithms are only able to find 
approximately stationary points in finitely-many iterations.", "Therefore, there typically is a region around spurious points in which algorithms terminate and return a point on $\\mathcal {X}$ that is not close to a desirable point.", "In this paper, we characterize the properties that the parametrization $\\varphi $ needs to satisfy for desirable points of () to map to desirable points of (REF ), that is, we develop a general framework to relate the landscapes of pairs of problems of the above form.", "Our framework enables us to unify and strengthen the analysis of a wealth of parametrizations arising in applications, hitherto studied case-by-case.", "For an example of scenario (a), consider minimizing a cost $f$ over the set $\\mathcal {X}=\\mathbb {R}^{m \\times n}_{\\le r}$ of all $m\\times n$ matrices of rank at most $r$ .", "Unfortunately, standard algorithms running on (REF ) may converge to a non-stationary point because of the nonsmooth geometry of $\\mathcal {X}$  [33], [37].", "Instead of trying to solve (REF ) directly, it is common to parametrize $\\mathcal {X}$ by the linear space $\\mathcal {M}=\\mathbb {R}^{m\\times r}\\times \\mathbb {R}^{n\\times r}$ using the rank factorization $\\varphi (L,R)=LR^\\top $ , and to solve () instead.", "The resulting problem () requires minimizing a smooth cost function over a linear space; there are several algorithms that converge to a second-order stationary point for such problems.", "Furthermore, any second-order stationary point for () maps under $\\varphi $ to a stationary point for (REF ) by [22].", "Thus, parametrization of $\\mathcal {X}$ by $\\varphi $ gives us an algorithm converging to a stationary point for (REF ) by running standard algorithms on (), even though similarly reasonable algorithms may fail to produce a stationary point when applied directly to (REF ).", "For an illustration of scenario (b), consider finding the smallest eigenvalue of a $d\\times d$ symmetric matrix $A$ , which is of the form () with $\\mathcal {M}$ the unit sphere in $\\mathbb {R}^d$ and $g(y)=y^\\top Ay$ .", "This problem is not convex, hence it could have bad local minima.", "Here is one way to reason that it does not (as is well known).", "If $\\lambda \\in \\mathbb {R}^d$ denotes the vector of eigenvalues of $A$ and $U\\in \\mathrm {O}(d)$ is an orthogonal matrix of eigenvectors satisfying $A=U\\mathrm {diag}(\\lambda )U^\\top $ (both of which are unknown), define $\\varphi (y)=\\mathrm {diag}(U^\\top yy^\\top U)\\in \\mathbb {R}^d$ and $f(x)=\\lambda ^\\top x$ .", "It is easy to check that $g = f\\circ \\varphi $ and that $\\mathcal {X}=\\varphi (\\mathcal {M})$ is the standard simplex in $\\mathbb {R}^d$ .", "The resulting problem (REF ) is convex in this case, hence each of its stationary points is a global minimum.", "A corollary of the theory we develop in this paper is that any second-order stationary point for (), for any cost function $f$ , maps to a stationary point for (REF )—see Example REF .", "Thus, we recover the well-known fact that any second-order stationary point for the eigenvalue problem () is globally optimal.", "Even though the problem (REF ) cannot be solved directly in this case because $f$ and $\\varphi $ are unknown, their mere existence can be used to show that the nonconvexity of () is “benign”.", "From this perspective, problem (REF ) reveals hidden convexity in problem ()." 
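As a quick numerical illustration of scenario (b), the following NumPy sketch (with an arbitrary symmetric matrix and random test points; the code is illustrative only) checks that $g=f\\circ \\varphi $ holds for $\\varphi (y)=\\mathrm {diag}(U^\\top yy^\\top U)$ and $f(x)=\\lambda ^\\top x$ , and that $\\varphi $ maps the unit sphere into the standard simplex:

import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d)); A = (A + A.T) / 2       # arbitrary symmetric matrix
lam, U = np.linalg.eigh(A)                                # A = U diag(lam) U^T

phi = lambda y: np.diag(U.T @ np.outer(y, y) @ U)         # lift: unit sphere -> simplex
f = lambda x: lam @ x                                     # linear cost on the simplex
g = lambda y: y @ A @ y                                   # the original cost on the sphere

for _ in range(3):
    y = rng.standard_normal(d); y /= np.linalg.norm(y)    # random point on the sphere
    x = phi(y)
    assert np.isclose(g(y), f(x))                         # g = f o phi
    assert np.all(x >= -1e-12) and np.isclose(x.sum(), 1.0)  # phi(y) lies on the simplex
print("g = f o phi verified; phi maps the sphere into the standard simplex")

Of course, this factorization is only a device for the analysis: $U$ and $\\lambda $ are unknown in practice, so one optimizes $g$ directly.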
], [ "Contributions", "We call the parametrization $\\varphi \\colon \\mathcal {M}\\rightarrow \\mathcal {X}$ a (smooth) lift of $\\mathcal {X}$ .", "As the above examples illustrate, understanding when lifts map certain desirable points for () to desirable points for (REF ) yields guarantees for algorithms running on ().", "The relation between these two sets of desirable points has been studied for a variety of specific lifts—see “Prior Work” below.", "In this paper, we study this relation in general.", "Specifically, we ask and answer the following questions: Do global minima of () map under $\\varphi $ to global minima of (REF ) for all cost functions $f$ ?", "The answer is yes because $\\varphi (\\mathcal {M}) = \\mathcal {X}$ , see Theorem REF .", "Do local minima of () map to local minima of (REF ) for all continuous $f$ ?", "If so, we say $\\varphi $ satisfies “local $\\Rightarrow \\!$ local”.", "The answer is yes if and only if $\\varphi $ is open, see Theorem REF .", "Openness is clearly sufficient, but proving its necessity requires substantial work.", "Some lifts of interest are not open, see Table REF .", "Do stationary points of () map to stationary points of (REF ) for all differentiable $f$ ?", "If so, we say $\\varphi $ satisfies “1 $\\Rightarrow \\!$ 1”, where “1” stands for “first-order”.", "This property is rarely satisfied: unless all tangent cones of $\\mathcal {X}$ are linear subspaces, for every smooth lift $\\varphi $ , there exists a (linear) $f$ such that some stationary point for () does not map to a stationary point for (REF ).", "See Theorem REF and Corollary REF .", "Thus, for all but the simplest sets, all lifts are liable to introduce spurious stationary points.", "Do second-order stationary points of () map to stationary points of (REF ) for all twice-differentiable $f$ ?", "If so, we say $\\varphi $ satisfies “2 $\\Rightarrow \\!$ 1”.", "This property holds under some conditions on $\\varphi $ which we characterize completely, see Theorem REF .", "These conditions hold for many interesting examples, see Table REF .", "The “2 $\\Rightarrow \\!$ 1” property can enable crisp conclusions as illustrated by the two example scenarios above.", "Understanding stationarity on $\\mathcal {X}$ requires knowledge of its tangent cones.", "These can be hard to characterize.", "We show that in some cases it is possible to obtain an explicit expression for the tangent cones simultaneously with proving “1 $\\Rightarrow \\!$ 1” and “2 $\\Rightarrow \\!$ 1” for some lift of $\\mathcal {X}$ , see Remark REF .", "Given a set $\\mathcal {X}$ , how can we construct a smooth lift $\\varphi \\colon \\mathcal {M}\\rightarrow \\mathcal {X}$ satisfying our desirable properties?", "When $\\mathcal {X}$ is given as a preimage under a smooth map, as is the case in typical constrained optimization problems, we give a systematic construction of a lift $\\varphi \\colon \\mathcal {M}\\rightarrow \\mathcal {X}$ .", "When the set $\\mathcal {M}$ constructed in this way is a smooth manifold, we give conditions under which $\\varphi $ satisfies each of the above properties.", "Our main definitions and results are given in Section REF .", "We use our framework to study several examples, including various lifts of low-rank matrix and tensor varieties and the Burer–Monteiro lift for smooth semidefinite programs (SDPs).", "See Table REF for a summary of some of our results.", "In particular, our theory enables us to do the following.", "We prove in Corollary REF that the Burer–Monteiro lift [10], 
[11], [9], [27] for smooth low-rank SDPs satisfies “2 $\\Rightarrow \\!$ 1”, that is, it maps second-order stationary points for () to stationary points for (REF ), for any twice-differentiable cost function.", "Previously, this was only shown for rank-deficient second-order stationary points [27].", "We also derive an explicit expression for the tangent cones to smooth low-rank SDP domains (REF ).", "In the robotics and computer vision literature, the authors of [17] compose the Burer–Monteiro lift with a submersion to enable the use of robotics software.", "We study compositions of lifts in general in Section REF , and use this general theory to strengthen the guarantees of [17] in Example REF .", "In [22] and [32], the authors study several lifts for optimization over low-rank matrices, including the rank factorization lift mentioned above.", "For all these lifts, they show that first-order stationary points for () may not map to stationary points for (REF ) but second-order stationary points for () do.", "It is also shown in [32] that, for all of these lifts, local minima for () may not map to local minima for (REF ).", "We extend the results of [22], [32] using our unified framework, by characterizing precisely where each of our three desirable properties hold.", "In [36], the authors numerically observe that optimization over the factors of the singular value decomposition (SVD) performs poorly, but can be improved by allowing the middle factor to be non-diagonal.", "These observations may be explained by our framework.", "We characterize in Proposition REF where each of our desirable properties holds for the SVD lift and its modification from [36].", "In particular, we show that the SVD lift may introduce spurious local minima in the setting of [36], while its modification does not.", "A neural network architecture encodes a lift from a parameter space to a function space.", "The authors of [38] show that this lift may fail to be open, and that such failure may cause optimization algorithms to stagnate.", "Our theory gives another implication: since the lift is not open, it introduces spurious local minima for some cost functions.", "We show in Section REF that none of the popular tensor factorization lifts satisfy “2 $\\Rightarrow \\!$ 1”, so second-order stationary points of () need not map to stationary points of (REF ).", "This suggests that to find a stationary point over the set of bounded-rank tensors (for many notions of such rank), it is not enough to find a second-order stationary point over the factors in a rank-revealing factorization.", "The eigenvalue example above (scenario (b)) essentially lifts the simplex to the sphere by entrywise squaring.", "This lift is the topic of [35].", "We extend their results beyond the case of convex $f$ , as a particular case of our general construction of lifts in Section , see Corollary REF .", "Table: Examples of lifts and their properties.Lifts are ubiquitous.", "We already mentioned above the Burer–Monteiro approach to semidefinite programming [9], [14], low-rank optimization [22], [32], [36], computer vision [17], and neural networks [38].", "In [47], [29], the authors compare the stationary points for () to those of (REF ) for the case of linear neural networks and prove a special case of our Theorem REF characterizing “1 $\\Rightarrow \\!$ 1”.", "The training of general neural networks, and risk minimization more generally, is naturally given in the form (), see [47] for example.", "This perspective on risk minimization has been 
exploited in works such as [5], [6].", "Other instances of lifts occur in robotics, specifically in inverse kinematics and trajectory planning [45].", "There, variables corresponding to the joints of the robot parametrize the configurations that the robot can assume.", "Yet another class of lifts comes from Kostant's convexity theorem, extending the hidden convexity in the eigenvalue example from the beginning of this section to optimization of certain linear functions over certain Lie group orbits, see [30].", "When $\\mathcal {X}$ is a non-smooth algebraic variety (i.e., the zero set of polynomials), a smooth variety $\\mathcal {M}$ and a polynomial map $\\varphi \\colon \\mathcal {M}\\rightarrow \\mathcal {X}$ satisfying certain properties (specifically, properness and birationality) are called a resolution of singularities for $\\mathcal {X}$  [23], which always exists in our setting [23].", "However, this resolution of singularities may not satisfy any of the properties we consider, as the example of the nodal cubic shows (Example REF ).", "Finally, the composite optimization literature [12], [34], [19], [7], [49], [4] studies algorithms for problems of the form () where $f$ is convex (and not necessarily differentiable) and $\\varphi $ is smooth.", "Our setting and aims are distinct however.", "We focus on algorithm-independent questions pertaining to the relation between desirable points for () and (REF ).", "Here we collect notation and standard definitions.", "Table REF collects the notations and definitions for several sets used throughout the paper.", "Table: Commonly-used sets $\\mathcal {E}$ is a linear space endowed with an inner product $\\langle \\cdot ,\\cdot \\rangle $ and induced norm $\\Vert \\cdot \\Vert $ .", "$\\mathcal {M}$ is a smooth Riemannian manifold endowed with a Riemannian metric, also denoted $\\langle \\cdot ,\\cdot \\rangle $ , with its induced norm $\\Vert \\cdot \\Vert $ .", "The tangent space to $\\mathcal {M}$ at $y\\in \\mathcal {M}$ is denoted $y\\mathcal {M}$ .", "If $S\\subseteq \\mathcal {E}$ is a subspace of an inner product space $\\mathcal {E}$ , we denote by $\\mathrm {Proj}_S(x)$ the orthogonal projection of $x\\in \\mathcal {E}$ onto $S$ .", "$\\mathcal {X}$ is endowed with its subspace topology from $\\mathcal {E}$ , and $\\mathcal {M}$ is endowed with its manifold topology.", "A neighborhood of a point $x$ is a set that contains $x$ in its interior.", "(We do not require neighborhoods to be open.)", "A map $\\varphi $ between two topological spaces is open at $y$ if it maps neighborhoods of $y$ in $\\mathcal {M}$ to neighborhoods of $\\varphi (y)$ in $\\mathcal {X}$ .", "Globally, a map is open if it is open at all points, or equivalently if it maps open subsets of $\\mathcal {M}$ to open subsets of $\\mathcal {X}$ .", "A smooth curve on $\\mathcal {M}$ passing through $y\\in \\mathcal {M}$ with velocity $v\\in y\\mathcal {M}$ is a smooth map $c\\colon I\\rightarrow \\mathcal {M}$ satisfying $c(0)=y$ and $c^{\\prime }(0)=v$ , where $I$ denotes an arbitrary open interval in $\\mathbb {R}$ containing 0 (not always the same one).", "If $c\\colon I\\rightarrow \\mathcal {M}$ is a smooth curve on a Riemannian manifold $\\mathcal {M}$ such that $c(0)=y$ , then $c^{\\prime \\prime }(0)\\in y\\mathcal {M}$ denotes its intrinsic (Riemannian) acceleration.", "Accordingly, if $\\gamma =\\varphi \\circ c$ then $\\gamma ^{\\prime \\prime }(0)$ denotes its (standard) acceleration in the Euclidean space $\\mathcal {E}$ .", "A cone is a set $K\\subseteq 
\\mathcal {E}$ such that $v\\in K$ implies $\\alpha v\\in K$ for all $\\alpha >0$ .", "The dual cone $K^*$ of a cone $K$ is $K^* = \\lbrace u\\in \\mathcal {E}: \\left\\langle {u},{v}\\right\\rangle \\ge 0 \\textrm { for all } v\\in K\\rbrace $ .", "We use the following properties throughout (see [18] for proofs): The dual cone is always a closed convex cone.", "If $K_1\\subseteq K_2$ , then $K_2^*\\subseteq K_1^*$ .", "The bidual cone $K^{**}$ of $K$ is equal to the closure of its convex hull $K^{**}=\\overline{\\mathrm {conv}}(K)$ .", "In particular, $K^{**}\\supseteq K$ .", "If $K$ is a linear space, then its dual $K^*$ is equal to its orthogonal complement $K^{\\perp }$ .", "If $\\psi \\colon \\mathcal {N}\\rightarrow \\mathcal {M}$ is a smooth map between smooth manifolds, its differential at $z\\in \\mathcal {N}$ is denoted $(z)\\colon z\\mathcal {N}\\rightarrow {\\psi (z)}\\mathcal {M}$ .", "If $g\\colon \\mathcal {M}\\rightarrow \\mathbb {R}$ is a smooth function, its Riemannian gradient at $y\\in \\mathcal {M}$ is denoted $\\nabla g(y)\\in y\\mathcal {M}$ .", "It is the unique element of $y\\mathcal {M}$ satisfying $\\langle \\nabla g(y),v\\rangle = g̥(y)[v]$ for all $v\\in y\\mathcal {M}$ .", "The Riemannian Hessian of $g$ at $y$ is a self-adjoint linear map denoted $\\nabla ^2g(y)\\colon y\\mathcal {M}\\rightarrow y\\mathcal {M}$ , see [8] for the definition.", "If $f\\colon \\mathcal {E}\\rightarrow \\mathbb {R}$ is a smooth function on a linear space, the usual definitions of the directional derivative $f̥(x)[v]$ , the gradient $\\nabla f(x)$ , and the Hessian $\\nabla ^2f(x)$ coincide with the above definitions specialized to $\\mathcal {M}=\\mathcal {E}$ .", "We write $S\\succeq 0$ to denote a positive-semidefinite (PSD) matrix or a PSD self-adjoint linear map $S$ , and we write $S\\succ 0$ to indicate $S$ is positive-definite." 
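As a small illustration of the dual cone calculus above, the sketch below checks membership in $K^*$ when $K=\\mathrm {cone}\\lbrace v_1,\\ldots ,v_m\\rbrace $ is finitely generated (a simplifying assumption made only for this snippet; the cones appearing later need not be of this form), using the fact that it suffices to test the defining inequalities against the generators:

import numpy as np

def in_dual_cone(u, generators, tol=1e-12):
    # For K = cone{v_1, ..., v_m}, u lies in K* iff <u, v_i> >= 0 for every generator v_i.
    return all(np.dot(u, v) >= -tol for v in generators)

# Example: K = nonnegative orthant in R^3, generated by the standard basis vectors.
gens = list(np.eye(3))
print(in_dual_cone(np.array([1.0, 2.0, 0.0]), gens))      # True: the orthant is self-dual
print(in_dual_cone(np.array([1.0, -1.0, 0.0]), gens))     # False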
], [ "Smooth lifts and their properties", "We aim to relate the landscapes of (REF ) and ().", "To this end, we define several types of points of interest.", "We begin with a preliminary definition.", "Definition 2.1 The tangent cone to $\\mathcal {X}$ at $x \\in \\mathcal {X}$ is the set $x\\mathcal {X}& = \\left\\lbrace v = \\lim _{i \\rightarrow \\infty } \\frac{x_i - x}{\\tau _i} : x_i \\in \\mathcal {X}, \\tau _i > 0 \\textrm { for all } i, \\tau _i \\rightarrow 0 \\right\\rbrace \\subseteq \\mathcal {E}.$ This is a closed (not necessarily convex) cone [42].", "In particular, if $\\gamma $ is a differentiable curve in $\\mathcal {X}$ with $\\gamma (0) = x$ , then $\\gamma ^{\\prime }(0)\\in x\\mathcal {X}$ .", "The cone $x\\mathcal {X}$ is also called the contingent or Bouligand tangent cone [15].", "If $\\mathcal {X}$ is a smooth embedded submanifold of $\\mathcal {E}$ near $x$ , then $x\\mathcal {X}$ coincides with the usual tangent space to $\\mathcal {X}$ at $x$ , see [41].", "Definition 2.2 (Desirable points for (REF )) A point $x\\in \\mathcal {X}$ is a global minimum for (REF ) if $f(x)=\\min _{x^{\\prime }\\in \\mathcal {X}}f(x^{\\prime })$ .", "local minimum for (REF ) if there is a neighborhood $U\\subseteq \\mathcal {X}$ of $x$ such that $f(x)=\\min _{x^{\\prime }\\in U}f(x^{\\prime })$ .", "(first-order) stationary point for (REF ) if $f̥(x)[v]\\ge 0$ for all $v\\in x\\mathcal {X}$ , or equivalently, if $\\nabla f(x)$ is in the dual $(x\\mathcal {X})^*$ of the tangent cone.", "In words, $x$ is stationary if the cost function is non-decreasing to first order along all tangent directions at $x$ .", "Local minima of (REF ) are stationary [42].", "Next, we define a smooth lift and consider desirable points for ().", "Definition 2.3 A smooth lift of $\\mathcal {X}\\subseteq \\mathcal {E}$ is a smooth manifold $\\mathcal {M}$ together with a smooth map $\\varphi \\colon \\mathcal {M}\\rightarrow \\mathcal {E}$ such that $\\varphi (\\mathcal {M})=\\mathcal {X}$ .", "Definition 2.4 (Desirable points for ()) Global and local minima for () are defined exactly as for (REF ).", "A point $y\\in \\mathcal {M}$ is first-order stationary (or “1-critical”) for () if for each smooth curve $c\\colon I\\rightarrow \\mathcal {M}$ satisfying $c(0)=y$ , we have $(g\\circ c)^{\\prime }(0)\\ge 0$ , or equivalently,If $(g\\circ c)^{\\prime }(0)>0$ , let $\\widetilde{c}(t)=c(-t)$ and note that $(g\\circ \\widetilde{c})^{\\prime }(0)<0$ .", "$(g\\circ c)^{\\prime }(0)=0$ .", "A point $y\\in \\mathcal {M}$ is second-order stationary (or “2-critical”) for () if it is 1-critical and $(g\\circ c)^{\\prime \\prime }(0)\\ge 0$ for all smooth curves $c\\colon I\\rightarrow \\mathcal {M}$ satisfying $c(0)=y$ .", "If $\\mathcal {M}$ is embedded in a linear space, first-order stationarity in Definition REF (c) coincides with Definition REF (c) by [41].", "Definition REF can be rephrased in terms of the Riemannian gradient and Hessian of $g$ .", "Proposition 2.5 ([8]) A point $y\\in \\mathcal {M}$ is 1-critical for () if and only if $\\nabla g(y)=0$ .", "It is 2-critical if and only if $\\nabla g(y)=0$ and $\\nabla ^2g(y)\\succeq 0$ .", "We proceed to answer the questions raised in Section ." 
], [ "Main results", "The connection between global minima of () and (REF ) is straightforward.", "Theorem 2.6 A point $y \\in \\mathcal {M}$ is a global minimum of () if and only if $x = \\varphi (y)$ is a global minimum of (REF ).", "Because $\\varphi (\\mathcal {M})=\\mathcal {X}$ , we have $\\inf _{y\\in \\mathcal {M}}g(y)=\\inf _{y\\in \\mathcal {M}}f(\\varphi (y))=\\inf _{x\\in \\mathcal {X}}f(x)=:p^*$ .", "Therefore, $y$ is a global minimum for () iff $g(y)=f(x)=p^*$ which happens iff $x$ is a global minimum for (REF ).", "Computing global minima is often hard, so in practice we can only hope to find local minima, or even just stationary points.", "If $x=\\varphi (y)$ is a local minimum or stationary point for (REF ), then it is easy to show that $y$ is, respectively, a local minimum or a stationary point for (): see Propositions REF and REF .", "However, the converse is false in general.", "As the example of the nodal cubic (Example REF below) shows, $x$ need not be stationary even if $y$ is a local minimum.", "Therefore, the following two properties are not automatically satisfied.", "Definition 2.7 (Desirable properties of lifts) Suppose $\\varphi \\colon \\mathcal {M}\\rightarrow \\mathcal {X}$ is a smooth lift.", "The lift $\\varphi $ satisfies the “local $\\Rightarrow \\!$ local” property at $y \\in \\mathcal {M}$ if, for all continuous $f \\colon \\mathcal {X}\\rightarrow \\mathbb {R}$ , if $y$ is a local minimum for () then $x = \\varphi (y)$ is a local minimum for (REF ).", "We say $\\varphi $ satisfies the “local $\\Rightarrow \\!$ local” property if it does so at all $y \\in \\mathcal {M}$ .", "The lift $\\varphi $ satisfies the “$k\\!", "\\Rightarrow \\!$ 1” property at $y$ for $k=1,2$ if for all $k$ times differentiable $f \\colon \\mathcal {X}\\rightarrow \\mathbb {R}$ , if $y$ is $k$ -critical for () then $x = \\varphi (y)$ is stationary for (REF ).", "We say $\\varphi $ satisfies the “$k\\!", "\\Rightarrow \\!$ 1” property if it does so at all $y \\in \\mathcal {M}$ .", "We now characterize the above properties.", "The characterization of “local $\\Rightarrow \\!$ local” below is easy to state.", "Theorem 2.8 Let $x = \\varphi (y)$ .", "The lift $\\varphi \\colon \\mathcal {M}\\rightarrow \\mathcal {X}$ satisfies the “local $\\Rightarrow \\!$ local” property at $y$ if and only if it is open at $y$ .", "Globally, the lift $\\varphi $ satisfies the “local $\\Rightarrow \\!$ local” property if and only if it is open.", "Proving that openness is sufficient for “local $\\Rightarrow \\!$ local” is easy.", "Proof of its necessity requires substantial work, deferred to Appendix .", "Our proof in the appendix provides the result in a more general, topological setting without using smoothness.", "Moreover, it provides (possibly new) conditions which are equivalent to openness and may be easier to check in some situations.", "To characterize the “1 $\\Rightarrow \\!$ 1” and “2 $\\Rightarrow \\!$ 1” properties, we need to consider velocities and accelerations of curves on $\\mathcal {X}$ obtained by pushing curves on $\\mathcal {M}$ through the lift $\\varphi $ .", "To that end, we introduce the following definitions.", "Definition 2.9 Let $\\varphi \\colon \\mathcal {M}\\rightarrow \\mathcal {X}$ be a smooth lift and fix $y\\in \\mathcal {M}$ .", "For each $v\\in y\\mathcal {M}$ , choose a curve $c_v$ on $\\mathcal {M}$ satisfying $c(0)=y$ and $c^{\\prime }(0)=v$ .", "Define maps $\\mathbf {L}_y, \\mathbf {Q}_y\\colon y\\mathcal {M}\\rightarrow \\mathcal {E}$ by $\\mathbf 
{L}_y(v) = (\\varphi \\circ c_v)^{\\prime }(0),\\qquad \\mathbf {Q}_y(v) = (\\varphi \\circ c_v)^{\\prime \\prime }(0).$ We write $\\mathbf {L}_y^{\\varphi }$ and $\\mathbf {Q}_y^{\\varphi }$ when we wish to emphasize the lift.", "Of course, $\\mathbf {L}_y$ is simply the differential $(y)$ , and is therefore linear and independent of the choice of curves $c_v$ .", "The value of $\\mathbf {Q}_y(v)$ does depend on the choice of $c_v$ but, importantly, this is inconsequential for our purposes: see Remark REF .", "We explain how to compute $\\mathbf {L}_y$ and $\\mathbf {Q}_y$ without explicitly choosing curves $c_v$ in Section REF .", "Our characterization of “1 $\\Rightarrow \\!$ 1” can now be concisely stated.", "Theorem 2.10 $\\varphi $ satisfies “1 $\\Rightarrow \\!$ 1” at $y\\in \\mathcal {M}$ if and only if $\\operatorname{im}\\mathbf {L}_y = x\\mathcal {X}$ , where $x=\\varphi (y)$ .", "We further show that if $\\varphi $ satisfies “1 $\\Rightarrow \\!$ 1” at $y$ and $x=\\varphi (y)$ is a smooth point of $\\mathcal {X}$ , that is, $\\mathcal {X}$ is a smooth embedded submanifold of $\\mathcal {E}$ near $x$ , then $\\varphi $ satisfies “local $\\Rightarrow \\!$ local” at $y$ as well: see Proposition REF below.", "Theorem REF yields the following necessary, and quite restrictive, condition for “1 $\\Rightarrow \\!$ 1”.", "Corollary 2.11 Unless all tangent cones of $\\mathcal {X}$ are linear spaces, no smooth lift for $\\mathcal {X}$ satisfies “1 $\\Rightarrow \\!$ 1”.", "Corollary REF shows that “1 $\\Rightarrow \\!$ 1” usually does not hold at preimages of nonsmooth points of $\\mathcal {X}$ (see Definition REF ), whose tangent cones are rarely linear, so all lifts may introduce spurious critical points on $\\mathcal {M}$ .", "Table REF gives several examples of popular lifts of nonsmooth sets violating “1 $\\Rightarrow \\!$ 1”.", "Corollary REF reveals that no smooth lift of these sets can satisfy “1 $\\Rightarrow \\!$ 1”.", "This is our main motivation for studying “2 $\\Rightarrow \\!$ 1”.", "Our characterization of “2 $\\Rightarrow \\!$ 1” is more involved.", "We therefore also give sufficient conditions that are easier to check in many examples, and a necessary condition that is easier to refute in others.", "Theorem 2.12 Let $\\varphi \\colon \\mathcal {M}\\rightarrow \\mathcal {X}$ be a smooth lift and fix $y\\in \\mathcal {M}$ .", "We have the following chain of implications for “2 $\\Rightarrow \\!$ 1”, where the cones $W_y$ and $B_y$ in $\\mathcal {E}$ are defined in Definitions REF and REF , respectively, and are expressed in terms of $\\mathbf {L}_y$ and $\\mathbf {Q}_y$ in Proposition REF .", "$&x \\mathcal {X}\\subseteq \\operatorname{im}\\mathbf {L}_y + \\mathbf {Q}_y(\\ker \\mathbf {L}_y)\\\\\\iff & x \\mathcal {X}= \\operatorname{im}\\mathbf {L}_y + \\mathbf {Q}_y(\\ker \\mathbf {L}_y)\\\\\\Rightarrow & B_y^* \\subseteq (x \\mathcal {X})^* \\\\\\Rightarrow & W_y \\subseteq (x \\mathcal {X})^* \\\\\\iff & \\varphi \\textrm { satisfies {``2 \\Rightarrow \\!", "1^{\\prime \\prime }} at } y \\\\\\Rightarrow & (\\operatorname{im}\\mathbf {L}_y)^\\perp \\cap (\\mathbf {Q}_y(y\\mathcal {M}))^* \\subseteq (x \\mathcal {X})^*.$ We show that the first sufficient condition above is not necessary in Example REF , and that the necessary condition is not sufficient in Example REF .", "We conjecture that the second sufficient condition is not necessary either, see Conjecture REF below.", "In the remainder of this section, we prove the above theorems and explain how to compute the 
quantities appearing in them.", "We then apply them to a variety of examples in Section ." ], [ "Local minima", "In this section, we investigate the relationship between the local minima of (REF ) and those of ().", "Preimages of local minima on $\\mathcal {X}$ are always local minima on $\\mathcal {M}$ merely because $\\varphi $ is continuous.", "Proposition 2.13 Let $x$ be a local minimum for (REF ).", "Any $y \\in \\varphi ^{-1}(x)$ is a local minimum for ().", "There exists a neighborhood $U$ of $x$ in $\\mathcal {X}$ such that $f(x) \\le f(x^{\\prime })$ for all $x^{\\prime } \\in U$ .", "Since $\\varphi \\colon \\mathcal {M}\\rightarrow \\mathcal {X}$ is continuous, the set $\\mathcal {U}= \\varphi ^{-1}(U)$ is a neighborhood of $y$ in $\\mathcal {M}$ .", "Pick an arbitrary $y^{\\prime } \\in \\mathcal {U}$ : it satisfies $\\varphi (y^{\\prime }) = x^{\\prime }$ for some $x^{\\prime } \\in U$ .", "Hence, $g(y) = f(x) \\le f(x^{\\prime }) = g(y^{\\prime })$ , i.e., $y$ is a local minimum of ().", "Unfortunately, lifting can introduce spurious local minima, that is, local minima for () that exist only because of the lift and not because they were present in (REF ) to begin with.", "Example 2.14 (Nodal cubic) Consider the nodal cubic $\\mathcal {X}= \\lbrace x\\in \\mathbb {R}^2:x_2^2=x_1^2(x_1+1)\\rbrace ,$ and its liftThe smooth curve $\\mathcal {M}$ is obtained by blowing up $\\mathcal {X}$ at the origin in the sense of algebraic geometry [23].", "$\\mathcal {M}= \\lbrace y\\in \\mathbb {R}^3:y_1=y_3^2-1,\\ y_2=y_1y_3\\rbrace ,\\qquad \\varphi (y)=(y_1,y_2).$ Let $f(x)=x_1-x_2$ .", "Then the point $y=(0,0,1)$ is a local minimum for $g=f\\circ \\varphi $ but $\\varphi (y)=(0,0)$ is not even stationary for $f$ .", "Indeed, we have $(-1,1)^\\top \\in {(0,0)}\\mathcal {X}$ and $f̥(0,0)[(-1,1)]=-2<0$ .", "The curves $\\mathcal {X}$ and $\\mathcal {M}$ together with the points $x,y$ are illustrated in Figure REF .", "Figure: Nodal cubic in ℝ 2 \\mathbb {R}^2 as the shadow of its lift in ℝ 3 \\mathbb {R}^3, colored by the value of the function f(x)=x 1 -x 2 f(x)=x_1-x_2.", "The highlighted points are x=(0,0)x=(0,0) (not stationary for ()), and its two preimages on the lift, including y=(0,0,1)y=(0,0,1) (a spurious local minimum for ()).To ensure that a lift does not introduce spurious local minima, we need to verify that it satisfies the “local $\\Rightarrow \\!$ local” property (Definition REF (a)).", "We proceed to prove the easy direction of Theorem REF stating that openness implies “local $\\Rightarrow \\!$ local”.", "The converse is more involved and is deferred to Appendix .", "The proof of Theorem REF shows that $\\varphi $ maps local minima of () to local minima of (REF ) for all smooth $f$ if and only if it does so for all $f$ .", "[Proof of Theorem REF ] Assume $\\varphi $ is open at $y$ , and that $y$ is a local minimum for ().", "Then, there exists a neighborhood $\\mathcal {U}$ of $y$ on $\\mathcal {M}$ such that $g(y) \\le g(y^{\\prime })$ for all $y^{\\prime } \\in \\mathcal {U}$ .", "The set $U = \\varphi (\\mathcal {U})$ is a neighborhood of $x = \\varphi (y)$ in $\\mathcal {X}$ by openness of $\\varphi $ at $y$ .", "Moreover, each $x^{\\prime } \\in U$ is of the form $x^{\\prime } = \\varphi (y^{\\prime })$ for some $y^{\\prime } \\in \\mathcal {U}$ .", "Therefore, $f(x) = g(y) \\le g(y^{\\prime }) = f(x^{\\prime })$ for all $x^{\\prime } \\in U$ , that is, $x$ is a local minimum of (REF ).", "For the converse, see Theorem REF .", "We note in passing that all continuous, 
surjective, open maps are quotient maps, hence if $\\varphi $ is a smooth lift of $\\mathcal {X}$ satisfying “local $\\Rightarrow \\!$ local” then it is a quotient map from $\\mathcal {M}$ to $\\mathcal {X}$ .", "Remark 2.15 We introduce an equivalent condition for openness of $\\varphi $ at $y$ in Appendix  that we call the Subsequence Lifting Property (SLP), and which is sometimes easier to check.", "Specifically, SLP is satisfied at $y$ if for any sequence $(x_i)_{i\\in \\mathbb {N}}\\subseteq \\mathcal {X}$ converging to $x=\\varphi (y)$ , there is a subsequence indexed by $(i_j)_{j\\in \\mathbb {N}}\\subseteq \\mathbb {N}$ and $y_{i_j}\\in \\mathcal {M}$ converging to $y$ such that $\\varphi (y_{i_j})=x_{i_j}$ for all $j\\in \\mathbb {N}$ .", "Here is an example illustrating the usefulness of SLP.", "We give more examples in Section .", "Example 2.16 (Burer–Monteiro lift) The lift $\\varphi (R)=RR^\\top $ defined on $\\mathcal {M}=\\mathbb {R}^{n\\times r}$ of low-rank PSD matrices $\\mathcal {X}=\\mathbb {R}^{n\\times n}_{\\le r}\\cap \\mathbb {S}_{\\succeq 0}^n$ is open everywhere on $\\mathcal {M}$ .", "This was shown by Burer and Monteiro in [11] by (in our terminology) proving SLP holds.", "Proving SLP often requires a characterization of the fibers of $\\varphi $ .", "For example, the main lemma [11] used in their proof states that $\\varphi ^{-1}(RR^\\top ) = \\lbrace RQ:Q\\in O(r)\\rbrace $ .", "Not all lifts of interest are open, as Table REF states.", "For example, the rank factorization lift $\\varphi (L, R) = LR^\\top $ from $\\mathcal {M}=\\mathbb {R}^{m\\times r}\\times \\mathbb {R}^{n\\times r}$ to $\\mathcal {X}=\\mathbb {R}^{m \\times n}_{\\le r}$ is not open at all points—see Proposition REF (this issue can be mitigated through balancing [32]).", "In fact, all lifts we consider in Section  for the bounded-rank matrix variety fail to be open.", "Example 2.17 (Neural networks) Another interesting lift that fails to be open comes from neural networks.", "Specifically, let $\\mathcal {M}$ be the linear space of weights and biases of a fixed neural network architecture.", "Let $\\varphi $ map a choice of weights and biases to the function given by the corresponding neural network with some fixed, Lipschitz continuous but nonconstant, activation function.", "Then $\\varphi $ is not open by [38].", "Our Theorem REF implies that training such a neural network by running an optimization algorithm on its weights and biases might get stuck in a local minimum that does not parametrize a local minimum in function space.", "In that case, a different parametrization of the same function might not be a local minimum for ()." 
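The nodal cubic of Example 2.14 can also be checked numerically. The sketch below is an illustration only (the parametrization of $\\mathcal {M}$ by its third coordinate and the sampling grids are choices made here for convenience): it evaluates $g$ along $\\mathcal {M}$ near $y=(0,0,1)$ and evaluates $f$ along the branch of $\\mathcal {X}$ with tangent direction $(1,-1)$ , confirming the spurious local minimum and the non-stationarity of $x=(0,0)$ :

import numpy as np

f = lambda x1, x2: x2 - x1                      # linear cost from Example 2.14

# M = {y in R^3 : y1 = y3^2 - 1, y2 = y1*y3}; parametrize it by s = y3.
def g_on_M(s):
    y1 = s**2 - 1
    y2 = y1 * s
    return f(y1, y2)                            # g = f o phi with phi(y) = (y1, y2)

s = 1 + np.linspace(-0.1, 0.1, 201)             # neighborhood of y = (0, 0, 1), i.e. s = 1
print("g >= g(1) near s = 1:", np.all(g_on_M(s) >= g_on_M(1.0)))              # local minimum of g

# Along the branch of X with tangent direction (1, -1): x(t) = (t, -t*sqrt(t+1)), t >= 0.
t = np.linspace(0.0, 0.1, 201)
vals = f(t, -t * np.sqrt(t + 1))
print("f strictly decreases along this branch:", np.all(np.diff(vals) < 0))   # so (0,0) is not stationary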
], [ "Stationary points", "In this section, we investigate the relationship between the first- and second-order stationary points for () and (first-order) stationary points for (REF ).", "To that end, we begin by relating the (Riemannian) gradient and Hessian of $g=f\\circ \\varphi $ to the (Euclidean) counterparts of $f$ .", "Definition 2.18 For any $w\\in \\mathcal {E}$ , define $\\varphi _w\\colon \\mathcal {M}\\rightarrow \\mathbb {R}$ by $\\varphi _w(y)=\\langle w,\\varphi (y)\\rangle $ .", "Lemma 2.19 For any twice differentiable cost $f\\colon \\mathcal {E}\\rightarrow \\mathbb {R}$ , any $y\\in \\mathcal {M}$ , and $x=\\varphi (y)$ , we have $ \\nabla g(y) = \\mathbf {L}_y^*(\\nabla f(x)),\\qquad \\nabla ^2g(y) = \\mathbf {L}_y^*\\circ \\nabla ^2f(x)\\circ \\mathbf {L}_y + \\nabla ^2\\varphi _{\\nabla f(x)}(y),$ where $\\mathbf {L}_y^*\\colon \\mathcal {E}\\rightarrow y\\mathcal {M}$ is the adjoint of $\\mathbf {L}_y$ .", "For any $v\\in y\\mathcal {M}$ , let $c_v\\colon I\\rightarrow \\mathcal {M}$ be a smooth curve satisfying $c_v(0)=y$ , $c_v^{\\prime }(0)=v$ , and $c_v^{\\prime \\prime }(0)=0$ (e.g., let $c_v$ be a geodesic), and let $\\gamma _v=\\varphi \\circ c_v$ which satisfies $\\gamma _v(0)=x$ , $\\gamma _v^{\\prime }(0)=\\mathbf {L}_y(v)$ .", "Then $\\langle \\nabla g(y),v\\rangle = (g\\circ c_v)^{\\prime }(0) = (f\\circ \\gamma _v)^{\\prime }(0) = \\langle \\nabla f(x),\\mathbf {L}_y(v)\\rangle = \\langle \\mathbf {L}_y^*(\\nabla f(x)),v\\rangle .$ Since this holds for all $v\\in y\\mathcal {M}$ , we conclude that $\\nabla g(y)=\\mathbf {L}_y^*(\\nabla f(x))$ .", "Next, $\\langle \\nabla ^2g(y)[v],v\\rangle = (g\\circ c_v)^{\\prime \\prime }(0) = (f\\circ \\gamma _v)^{\\prime \\prime }(0) = \\langle \\nabla ^2f(x)[\\mathbf {L}_y(v)],\\mathbf {L}_y(v)\\rangle + \\langle \\nabla f(x),\\gamma _v^{\\prime \\prime }(0)\\rangle ,$ where the first equality uses $c_v^{\\prime \\prime }(0)=0$ , see [8].", "On the other hand, $\\langle \\nabla ^2\\varphi _{\\nabla f(x)}(y)[v],v\\rangle = (\\varphi _{\\nabla f(x)}\\circ c_v)^{\\prime \\prime }(0) = \\left.\\frac{\\mathrm {d}^2}{\\mathrm {d}t^2}\\right|_{t=0}\\langle \\nabla f(x),\\gamma _v(t)\\rangle = \\langle \\nabla f(x),\\gamma _v^{\\prime \\prime }(0)\\rangle ,$ hence $\\langle \\nabla ^2g(y)[v],v\\rangle &= \\langle \\nabla ^2f(x)[\\mathbf {L}_y(v)],\\mathbf {L}_y(v)\\rangle + \\langle \\nabla ^2\\varphi _{\\nabla f(x)}(y)[v],v\\rangle \\\\ &= \\left\\langle {\\big (\\mathbf {L}_y^*\\circ \\nabla ^2f(x)\\circ \\mathbf {L}_y+\\nabla ^2\\varphi _{\\nabla f(x)}(y)\\big )[v]},{v}\\right\\rangle .$ Since this holds for all $v\\in y\\mathcal {M}$ and both $\\nabla ^2g(y)$ and $\\mathbf {L}_y^*\\circ \\nabla ^2f(x)\\circ \\mathbf {L}_y+\\nabla ^2\\varphi _{\\nabla f(x)}(y)$ are self-adjoint linear maps on $y\\mathcal {M}$ , we conclude that they are equal.", "We turn to proving our characterizations of “$k\\!", "\\Rightarrow \\!$ 1” for $k=1,2$ stated in Section REF ." 
], [ "“1 $\\Rightarrow \\!$ 1”: Lifts preserving 1-critical points", "In this section, we prove Theorem REF characterizing the “1 $\\Rightarrow \\!$ 1” property.", "Lemma 2.20 Fix $y\\in \\mathcal {M}$ and let $x=\\varphi (y)$ .", "Then $\\operatorname{im}\\mathbf {L}_y\\subseteq x\\mathcal {X}$ .", "Moreover, $y$ is 1-critical for () if and only if $\\nabla f(x)\\in (\\operatorname{im}\\mathbf {L}_y)^{\\perp } = (\\operatorname{im}\\mathbf {L}_y)^*$ .", "The first claim follows from the fact that $\\mathbf {L}_y(v) = (\\varphi \\circ c_v)^{\\prime }(0)$ for a curve $c_v$ as in Definition REF , and Definition REF of the tangent cone $x\\mathcal {X}$ .", "For the second claim, $y$ is 1-critical for () iff $\\nabla g(y)=\\mathbf {L}_y^*(\\nabla f(x))=0$ , or equivalently, $\\nabla f(x)\\in \\ker (\\mathbf {L}_y^*)=(\\operatorname{im}\\mathbf {L}_y)^{\\perp }$ .", "Preimages of stationary points on $\\mathcal {X}$ are always 1-critical on $\\mathcal {M}$ .", "Proposition 2.21 If $x\\in \\mathcal {X}$ is stationary for (REF ), then any $y\\in \\varphi ^{-1}(x)$ is 1-critical for ().", "If $x\\in \\mathcal {X}$ is stationary for (REF ), then $\\nabla f(x)\\in (x\\mathcal {X})^*$ .", "Since $x\\mathcal {X}\\supseteq \\operatorname{im}\\mathbf {L}_y$ , taking duals on both sides we get that $\\nabla f(x)\\in (x\\mathcal {X})^*\\subseteq (\\operatorname{im}\\mathbf {L}_y)^{\\perp }$ , hence $y$ is 1-critical for () by Lemma REF .", "The converse to Proposition REF is false in general.", "In fact, Example REF shows that a lift need not even map local minima to stationary points on $\\mathcal {X}$ .", "We can now prove Theorem REF characterizing “1 $\\Rightarrow \\!$ 1”.", "[Proof of Theorem REF ] Suppose $\\operatorname{im}\\mathbf {L}_y=x\\mathcal {X}$ , so $(\\operatorname{im}\\mathbf {L}_y)^{\\perp }=(\\operatorname{im}\\mathbf {L}_y)^*=(x\\mathcal {X})^*$ .", "If $y$ is 1-critical for (), then $\\nabla f(x)\\in (\\operatorname{im}\\mathbf {L}_y)^{\\perp }$ by Lemma REF .", "Therefore, $\\nabla f(x)\\in (x\\mathcal {X})^*$ , which is the definition of $x$ being stationary for (REF ).", "Thus, “1 $\\Rightarrow \\!$ 1” holds.", "Observe that if $(x\\mathcal {X})^*=(\\operatorname{im}\\mathbf {L}_y)^{\\perp }$ , then $\\operatorname{im}\\mathbf {L}_y = (\\operatorname{im}\\mathbf {L}_y)^{\\perp \\perp }=\\left((\\operatorname{im}\\mathbf {L}_y)^{\\perp }\\right)^*=(x\\mathcal {X})^{**} \\supseteq x\\mathcal {X}\\supseteq \\operatorname{im}\\mathbf {L}_y,$ so in fact, $\\operatorname{im}\\mathbf {L}_y=x\\mathcal {X}$ .", "Suppose $\\operatorname{im}\\mathbf {L}_y\\ne x\\mathcal {X}$ .", "By Lemma REF , we always have $\\operatorname{im}\\mathbf {L}_y\\subseteq x\\mathcal {X}$ , hence $(x\\mathcal {X})^*\\subseteq (\\operatorname{im}\\mathbf {L}_y)^{\\perp }$ .", "Therefore, the above observation implies that $(x\\mathcal {X})^*\\subsetneq (\\operatorname{im}\\mathbf {L}_y)^{\\perp }$ .", "Pick $w\\in (\\operatorname{im}\\mathbf {L}_y)^{\\perp }\\setminus (x\\mathcal {X})^*$ and define $f(x^{\\prime })=\\langle w,x^{\\prime }\\rangle $ for $x^{\\prime }\\in \\mathcal {E}$ .", "Then $\\nabla f(x)=w\\in (\\operatorname{im}\\mathbf {L}_y)^{\\perp }$ so $y$ is 1-critical for () by Lemma REF , but $\\nabla f(x)\\notin (x\\mathcal {X})^*$ , so $x$ is not stationary for (REF ).", "Hence “1 $\\Rightarrow \\!$ 1” is not satisfied at $y$ .", "In fact, the proof of Theorem REF also characterizes the cost functions $f$ for which $y$ is 1-critical for () but $x$ is not stationary for (REF ).", "Corollary 2.22 
Suppose $\\varphi $ does not satisfy “1 $\\Rightarrow \\!$ 1” at $y$ , and denote $x=\\varphi (y)$ .", "Then $y$ is 1-critical for () but $x$ is not stationary for (REF ) if and only if $\\nabla f(x)\\in (\\operatorname{im}\\mathbf {L}_y)^{\\perp }\\setminus (x\\mathcal {X})^*$ .", "In particular, if “1 $\\Rightarrow \\!$ 1” is not satisfied at $y$ , there is a linear cost function $f$ for which $y$ is 1-critical for () but $x$ is not stationary for (REF ).", "As discussed in Section REF , Corollary REF implies that “1 $\\Rightarrow \\!$ 1” rarely holds on all of $\\mathcal {M}$ .", "Nevertheless, “1 $\\Rightarrow \\!$ 1” does usually hold at preimages of smooth points, that is, points near which $\\mathcal {X}$ is a smooth embedded submanifold of $\\mathcal {E}$ .", "Moreover, if “1 $\\Rightarrow \\!$ 1” holds at such points then “local $\\Rightarrow \\!$ local” holds there as well, which we proceed to show.", "Definition 2.23 A point $x\\in \\mathcal {X}$ is smooth if there is an open neighborhood $U\\subseteq \\mathcal {E}$ containing $x$ such that $U\\cap \\mathcal {X}$ is a smooth embedded submanifold of $\\mathcal {E}$ .", "It is called nonsmooth or singular otherwise.", "The smooth locus of $\\mathcal {X}$ , denoted $\\mathcal {X}^{\\mathrm {smth}}$ , is the set of smooth points of $\\mathcal {X}$ .", "Note that $\\mathcal {X}^{\\mathrm {smth}}$ is open in $\\mathcal {X}$ in the subspace topology.", "For all equidimensional algebraic varieties, and all constraint sets in practical optimization problems we are aware of, $\\mathcal {X}^{\\mathrm {smth}}$ is itself a smooth embedded submanifold of $\\mathcal {E}$ , and it is dense in $\\mathcal {X}$ .", "For example, if $\\mathcal {X}$ is the nodal cubic (REF ) shown in Figure REF , then $\\mathcal {X}^{\\mathrm {smth}}=\\mathcal {X}\\setminus \\lbrace (0,0)\\rbrace $ .", "If $\\mathcal {X}=\\mathbb {R}^{m \\times n}_{\\le r}$ , then $\\mathcal {X}^{\\mathrm {smth}} = \\mathbb {R}^{m\\times n}_{=r}$ .", "Proposition 2.24 Let $y\\in \\varphi ^{-1}(\\mathcal {X}^{\\mathrm {smth}})\\subseteq \\mathcal {M}$ .", "If $\\varphi $ satisfies “1 $\\Rightarrow \\!$ 1” at $y$ , then it also satisfies “local $\\Rightarrow \\!$ local” at $y$ .", "Let $U\\subseteq \\mathcal {E}$ be an open neighborhood of $\\varphi (y)$ in $\\mathcal {E}$ such that $U\\cap \\mathcal {X}$ is a smooth embedded submanifold of $\\mathcal {E}$ .", "Since $\\varphi (\\mathcal {M})=\\mathcal {X}$ , we have $\\varphi ^{-1}(U\\cap \\mathcal {X})=\\varphi ^{-1}(U)=:V$ , which is open in $\\mathcal {M}$ by continuity of $\\varphi $ .", "Therefore, $V$ is also a smooth manifold, since it is an open subset of $\\mathcal {M}$ , and $\\varphi |_V\\colon V\\rightarrow U\\cap \\mathcal {X}$ is a smooth map between smooth manifolds.", "Since $U\\cap \\mathcal {X}$ is an embedded submanifold of $\\mathcal {E}$ , the differential of $\\varphi |_V$ viewed as a map $V\\rightarrow \\mathcal {E}$ coincides with its differential viewed as a map $V\\rightarrow U\\cap \\mathcal {X}$ .", "By Theorem REF , $\\varphi $ satisfies “1 $\\Rightarrow \\!$ 1” at $y$ iff $\\mathbf {L}_y=(y) = \\varphi |_V)(y)$ is surjective, meaning $\\varphi |_V$ is a submersion near $y$  [31].", "By [31], this implies $\\varphi $ is open at $y$ , hence it satisfies “local $\\Rightarrow \\!$ local” at $y$ by Theorem REF .", "The converse of Proposition REF fails.", "For example, $\\varphi (y)=y^3$ viewed as a map $\\mathbb {R}\\rightarrow \\mathbb {R}$ satisfies “local $\\Rightarrow \\!$ local” at $y=0$ but not “1 
$\\Rightarrow \\!$ 1”.", "Remark 2.25 In the setting of Proposition REF , if $\\varphi $ satisfies “1 $\\Rightarrow \\!$ 1” at $y$ then it also satisfies “$k\\!", "\\Rightarrow \\!", "k$ ” for all $k\\ge 1$ .", "Here $k$ -criticality for $\\varphi (y)$ on $\\mathcal {X}^{\\mathrm {smth}}$ (or equivalently, on $\\mathcal {X}$ ) is defined in terms of curves as in Definition REF .", "This follows because $\\varphi $ is a smooth submersion between smooth manifolds near $y$ , hence any curve passing through $\\varphi (y)$ is the image under $\\varphi $ of a curve passing through $y$ , see [31].", "The only examples of lifts we know of that satisfy “1 $\\Rightarrow \\!$ 1” everywhere are smooth maps between smooth manifolds that are submersions, which then also satisfy “local $\\Rightarrow \\!$ local”.", "Example 2.26 (Submersions) Examples of submersions in optimization, that is, lifts of the form $\\varphi \\colon \\mathcal {M}\\rightarrow \\mathcal {X}$ where $\\mathcal {X}$ is an embedded submanifold of $\\mathcal {E}$ and $\\operatorname{im}(y)={\\varphi (y)}\\mathcal {X}$ for all $y\\in \\mathcal {M}$ , include: Quotient maps by smooth, free, and proper Lie group actions [8], [2], used in particular to optimize over Grassmannians by lifting to Stiefel manifolds [20].", "The map $\\mathrm {SO}(p)\\rightarrow \\mathrm {St}(p,d)$ taking the first $d$ columns of a matrix, which is used in the rotation averaging algorithm of the robotics paper [17], see Example REF below.", "Orthogonal Deep Neural Networks (orthDNNs), mapping $\\mathrm {O}(p)^n\\rightarrow \\mathrm {O}(p)$ by $\\varphi (Q_1,\\ldots ,Q_n)=Q_1\\cdots Q_n$ , whose properties are studied in [1].", "Theorem REF and Proposition REF show that these lifts satisfy “1 $\\Rightarrow \\!$ 1” and “local $\\Rightarrow \\!$ local”.", "Failure of a lift to satisfy “1 $\\Rightarrow \\!$ 1” means that it may introduce spurious critical points.", "In the next section, we characterize the “2 $\\Rightarrow \\!$ 1” property, which enables the use of second-order information to avoid these spurious points." 
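To make the failure of “1 $\\Rightarrow \\!$ 1” concrete, the sketch below (an illustration only; the dimensions and the particular linear cost are arbitrary) exhibits a spurious 1-critical point of the rank factorization lift $\\varphi (L,R)=LR^\\top $ , the setting revisited in Example 2.36 below: at $y=(L,0)$ with $\\mathrm {rank}(L)=r$ , a linear cost whose gradient is orthogonal to $\\operatorname{im}\\mathbf {L}_y$ makes $y$ 1-critical, yet $x=\\varphi (y)=0$ is not stationary because $f$ decreases along a feasible rank-one direction:

import numpy as np

m, n, r = 4, 3, 1

# Spurious 1-critical point y = (L, 0) for the lift phi(L, R) = L R^T, with x = phi(y) = 0.
L = np.zeros((m, r)); L[0, 0] = 1.0             # rank(L) = r
W = np.zeros((m, n)); W[1, 0] = 1.0             # gradient of f, chosen so that L^T W = 0

f = lambda X: np.sum(W * X)                     # linear cost f(X) = <W, X>
g = lambda L_, R_: f(L_ @ R_.T)

# grad g(L, R) = (W R, W^T L); at (L, 0) both blocks vanish, so y is 1-critical for g.
R0 = np.zeros((n, r))
print("grad_L g =", (W @ R0).ravel(), "  grad_R g =", (W.T @ L).ravel())

# Yet x = 0 is not stationary for f on the rank-<= r variety:
# the rank-one direction -W is feasible and f strictly decreases along it.
for t in [1e-2, 1e-1, 1e0]:
    print(f"f(-t*W) = {f(-t * W):+.3f} < 0 = f(0)   for t = {t}")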
], [ "“2 $\\Rightarrow \\!$ 1”: Lifts mapping 2-critical points to 1-critical points", "Since “1 $\\Rightarrow \\!$ 1” usually fails, we proceed to study “2 $\\Rightarrow \\!$ 1” and prove Theorem REF .", "As Table REF and Section  show, this property is satisfied for many interesting lifts.", "We begin by introducing the $W_y$ set appearing in Theorem REF to characterize “2 $\\Rightarrow \\!$ 1”.", "Definition 2.27 For $y\\in \\mathcal {M}$ and $x=\\varphi (y)\\in \\mathcal {X}$ , define $W_y = \\Big \\lbrace w \\in \\mathcal {E}: \\textrm {there exists a twice differentiable function } f \\colon \\mathcal {E}\\rightarrow \\mathbb {R}\\textrm { such that \\nabla f(x) = w}\\\\ \\textrm {and y is 2-critical for~(\\ref {eq:Q})} \\Big \\rbrace .$ We write $W_y^{\\varphi }$ when we wish to emphasize the lift.", "Theorem 2.28 The lift $\\varphi \\colon \\mathcal {M}\\rightarrow \\mathcal {X}$ satisfies “2 $\\Rightarrow \\!$ 1” at $y$ if and only if $W_y\\subseteq (x\\mathcal {X})^*$ where $x=\\varphi (y)$ .", "If $\\varphi $ satisfies “2 $\\Rightarrow \\!$ 1” and $w\\in W_y$ , then since $y$ is 2-critical for () for some $f$ satisfying $\\nabla f(x)=w$ , we conclude that $x$ is stationary for $f$ , hence $\\nabla f(x) = w \\in (x\\mathcal {X})^*$ .", "Conversely, if $W_y\\subseteq (x\\mathcal {X})^*$ and $y$ is 2-critical for (), then $\\nabla f(x)\\in W_y\\subseteq (x\\mathcal {X})^*$ , hence $x$ is stationary for $f$ .", "This shows “2 $\\Rightarrow \\!$ 1”.", "Since the 2-criticality of $y$ for () only depends on the first two derivatives of $f$ , we can restrict the functions $f$ in Definition REF to be of class $\\mathcal {C}^{\\infty }$ or even quadratic polynomials whose Hessians are a multiple of the identity, as the following proposition shows.", "Proposition 2.29 For $y\\in \\mathcal {M}$ and $x=\\varphi (y)$ , we have $W_y = \\left\\lbrace w\\in \\mathcal {E}: \\exists \\alpha >0 \\textrm { s.t.", "\\right.y is 2-critical for~(\\ref {eq:Q}) with } f(x^{\\prime }) = \\langle x^{\\prime },w\\rangle + \\frac{\\alpha }{2}\\Vert x^{\\prime }-x\\Vert ^2.$ In particular, $W_y$ is a convex cone.", "The inclusion $\\supseteq $ is clear from the definition of $W_y$ .", "Conversely, if $w\\in W_y$ then $y$ is 2-critical for () for some $f$ satisfying $\\nabla f(x)=w$ .", "Let $g=f\\circ \\varphi $ and $\\alpha =\\lambda _{\\max }(\\nabla ^2f(x))$ , and define $f_{\\alpha }(x^{\\prime })=\\langle w,x^{\\prime }\\rangle + \\frac{\\alpha }{2}\\Vert x-x^{\\prime }\\Vert ^2,\\quad g_{\\alpha } = f_{\\alpha }\\circ \\varphi .$ Note that $\\nabla f_{\\alpha }(x)=w$ , and by Lemma REF , we have $\\nabla g_{\\alpha }(y)=\\mathbf {L}_y^*(w)=\\nabla g(y)=0$ and $\\nabla ^2 g_{\\alpha }(y) &= \\mathbf {L}_y^*\\circ \\nabla ^2 f_{\\alpha }(x)\\circ \\mathbf {L}_y + \\nabla ^2\\varphi _{\\nabla f_{\\alpha }(x)}(y) = \\mathbf {L}_y^*\\circ (\\alpha I)\\circ \\mathbf {L}_y + \\nabla ^2\\varphi _w(y)\\\\ &\\succeq \\mathbf {L}_y^*\\circ \\nabla ^2f(x)\\circ \\mathbf {L}_y + \\nabla ^2\\varphi _w(y) = \\nabla ^2 g(y) \\succeq 0.$ Thus, $y$ is 2-critical for $g_{\\alpha }$ .", "This shows the reverse inclusion.", "$W_y$ is a convex cone since if $w_1, w_2$ are in $W_y$ as witnessed by functions $f_1, f_2$ , then any $w = \\lambda _1 w_1 + \\lambda _2 w_2$ with $\\lambda _1, \\lambda _2 \\ge 0$ is in $W_y$ as witnessed by $f = \\lambda _1 f_1 + \\lambda _2 f_2$ .", "Proposition REF shows that if “2 $\\Rightarrow \\!$ 1” is not satisfied at $y$ , then there exists a simple convex quadratic cost $f$ for which $y$ is 
2-critical for () but $x=\\varphi (y)$ is not stationary for (REF ).", "Corollary 2.30 Suppose $\\varphi $ does not satisfy “2 $\\Rightarrow \\!$ 1” at $y\\in \\mathcal {M}$ and denote $x=\\varphi (y)$ .", "Then $W_y\\setminus (x\\mathcal {X})^*\\ne \\emptyset $ and for any $w$ in that set there exists $\\alpha >0$ such that if $f(x^{\\prime })=\\langle w,x^{\\prime }\\rangle + \\frac{\\alpha }{2}\\Vert x^{\\prime }-x\\Vert ^2$ , then $y$ is 2-critical for () but $x$ is not stationary for (REF ).", "We conjecture that the reverse inclusion in Theorem REF always holds (it does for all the lifts in Table REF ).", "If this is indeed true, then $\\varphi $ satisfies the “2 $\\Rightarrow \\!$ 1” property at $y$ if and only if $(x\\mathcal {X})^* = W_y$ , neatly echoing the condition for “1 $\\Rightarrow \\!$ 1”, namely, $(x\\mathcal {X})^* = (\\operatorname{im}\\mathbf {L}_y)^\\perp $ .", "Conjecture 2.31 It always holds that $(x \\mathcal {X})^* \\subseteq W_y$ .", "The description of $W_y$ can be complicated.", "It is therefore worthwhile to derive sufficient conditions for “2 $\\Rightarrow \\!$ 1” that are easier to check.", "A point $x\\in \\mathcal {X}$ is stationary for (REF ) if $\\nabla f(x)\\in (x\\mathcal {X})^*$ .", "To derive sufficient conditions for “2 $\\Rightarrow \\!$ 1”, we identify two sets whose duals contain $\\nabla f(x)$ if $x=\\varphi (y)$ and $y$ is 2-critical for ().", "The sufficient conditions then require the duals of these two subsets to be contained in $(x\\mathcal {X})^*$ .", "We will see in Section  that these conditions are indeed often convenient to verify.", "Definition 2.32 For $y\\in \\mathcal {M}$ , define $&A_y = \\lbrace w\\in \\mathcal {E}:\\exists c\\colon I\\rightarrow \\mathcal {M}\\textrm { smooth s.t. }", "c(0)=y,\\ (\\varphi \\circ c)^{\\prime }(0) = 0,\\ (\\varphi \\circ c)^{\\prime \\prime }(0)=w\\rbrace ,\\\\&B_y = \\lbrace w\\in \\mathcal {E}:\\exists c_i\\colon I_i\\rightarrow \\mathcal {M}\\textrm { smooth s.t. 
}", "c_i(0)=y,\\ \\lim _{i\\rightarrow \\infty }(\\varphi \\circ c_i)^{\\prime }(0) = 0,\\ \\lim _{i\\rightarrow \\infty }(\\varphi \\circ c_i)^{\\prime \\prime }(0) = w\\rbrace .$ We write $A_y^{\\varphi }, B_y^{\\varphi }$ when we wish to emphasize the lift.", "We show in Proposition REF (a) below that $A_y=\\mathbf {Q}_y(\\ker \\mathbf {L}_y)+\\operatorname{im}\\mathbf {L}_y$ is precisely the set appearing in the first sufficient condition in Theorem REF .", "The following are the basic properties these two sets satisfy.", "Proposition 2.33 Fix $y\\in \\mathcal {M}$ and denote $x=\\varphi (y)$ .", "$A_y$ and $B_y$ are cones, and $B_y$ is closed.", "$A_y\\subseteq x\\mathcal {X}$ and $\\operatorname{im}\\mathbf {L}_y\\subseteq A_y\\subseteq B_y$ .", "Moreover, $\\operatorname{im}\\mathbf {L}_y + A_y = A_y$ and $\\operatorname{im}\\mathbf {L}_y + B_y = B_y$ .", "If $y$ is 2-critical for $g=f\\circ \\varphi $ , then $\\nabla f(x)\\in B_y^*$ .", "$W_y\\subseteq B_y^*\\subseteq A_y^*\\subseteq (\\operatorname{im}\\mathbf {L}_y)^{\\perp }$ .", "The proofs of part (a) and the second half of part (b) are given in Appendix .", "If $c\\colon I\\rightarrow \\mathcal {M}$ satisfies $c(0)=y$ and $(\\varphi \\circ c)^{\\prime }(0)=0$ , then $(\\varphi \\circ c)(t)\\in \\mathcal {X}$ for all $t$ and $(\\varphi \\circ c)(t)= x + (t^2/2)(\\varphi \\circ c)^{\\prime \\prime }(0) + \\mathcal {O}(t^3)$ , hence $(\\varphi \\circ c)^{\\prime \\prime }(0) = \\lim _{t\\rightarrow 0}\\frac{(\\varphi \\circ c)(t) - x}{t^2/2} \\in x\\mathcal {X},$ by Definition REF .", "This shows $A_y\\subseteq x\\mathcal {X}$ .", "If $w\\in \\operatorname{im}\\mathbf {L}_y$ so $w=\\mathbf {L}_y(v)$ for $v\\in y\\mathcal {M}$ , let $c\\colon I\\rightarrow \\mathcal {M}$ be a curve satisfying $c(0)=y$ and $c^{\\prime }(0)=v$ .", "Define $\\widetilde{c}\\colon I\\rightarrow \\mathcal {M}$ by $\\widetilde{c}(t)=c(t^2/2)$ , and note that $\\widetilde{c}(0)=y$ , $(\\varphi \\circ \\widetilde{c})^{\\prime }(0)=0$ , and $(\\varphi \\circ \\widetilde{c})^{\\prime \\prime }(0) = (\\varphi \\circ c)^{\\prime }(0) = w$ .", "Hence $w\\in A_y$ .", "This shows $\\operatorname{im}\\mathbf {L}_y\\subseteq A_y$ .", "It is clear that $A_y\\subseteq B_y$ from Definition REF .", "Suppose $y$ is 2-critical for $g=f\\circ \\varphi $ and $w\\in B_y$ .", "Let $c_i\\colon I_i\\rightarrow \\mathcal {M}$ witness $w\\in B_y$ .", "Because $y$ is 1-critical, $(g\\circ c_i)^{\\prime }(0)=0$ for all $i$ .", "Because $y$ is 2-critical, for all $i$ we have $(g\\circ c_i)^{\\prime \\prime }(0) = \\langle \\nabla f(x),(\\varphi \\circ c_i)^{\\prime \\prime }(0)\\rangle + \\langle \\nabla ^2f(x)[(\\varphi \\circ c_i)^{\\prime }(0)],(\\varphi \\circ c_i)^{\\prime }(0)\\rangle \\ge 0.$ Taking $i\\rightarrow \\infty $ , we conclude that $\\langle \\nabla f(x),w\\rangle \\ge 0$ and hence $\\nabla f(x)\\in B_y^*$ as claimed.", "If $w\\in W_y$ then there exists $f$ such that $\\nabla f(x)=w$ and $y$ is 2-critical for (), hence $w\\in B_y^*$ by part (c).", "The other inclusions follow by taking duals in part (b).", "We remark that neither the inclusion $B_y\\subseteq x\\mathcal {X}$ nor $x\\mathcal {X}\\subseteq B_y$ hold in general, see the end of Appendix .", "By combining part (d) above with Theorem REF , we get the following sufficient conditions for “2 $\\Rightarrow \\!$ 1”.", "Corollary 2.34 Fix $y\\in \\mathcal {M}$ and denote $x=\\varphi (y)$ .", "If $A_y^*\\subseteq (x\\mathcal {X})^*$ , and in particular if $A_y=x\\mathcal {X}$ , then “2 $\\Rightarrow \\!$ 1” 
holds at $y$ .", "If $B_y^*\\subseteq (x\\mathcal {X})^*$ , then “2 $\\Rightarrow \\!$ 1” holds at $y$ .", "The next two examples illustrate the utility and the limitations of the above sufficient conditions.", "Example 2.35 (Cuspidal cubic) Consider the cuspidal cubic (see Figure REF in Section REF ) $\\mathcal {X}=\\lbrace x\\in \\mathbb {R}^2:x_2^2=x_1^3\\rbrace ,$ and its liftThe curve $\\mathcal {M}$ is called the “twisted cubic” [23].", "$\\mathcal {M}= \\lbrace y\\in \\mathbb {R}^3:y_1=y_3^2,\\ y_2=y_3^3\\rbrace ,\\qquad \\varphi (y)=(y_1,y_2).$ The origin $x=(0,0)$ is a singular point for $\\mathcal {X}$ at which $x\\mathcal {X}= \\lbrace (u_1,0):u_1\\ge 0\\rbrace $ .", "Since this is not a linear space, there is no lift of $\\mathcal {X}$ satisfying “1 $\\Rightarrow \\!$ 1” by Corollary REF .", "For our particular lift, $\\varphi ^{-1}(x)=(0,0,0)=:y$ we have $\\mathbf {L}_y=0$ so $\\operatorname{im}\\mathbf {L}_y=\\lbrace 0\\rbrace $ .", "On the other hand, the curve $\\mathcal {M}$ is parameterized by $c(t) = (t^2,t^3,t)$ , which satisfies $(\\varphi \\circ c)^{\\prime }(0)=0$ and $(\\varphi \\circ c)^{\\prime \\prime }(0)=(2,0)\\in A_y$ .", "Since $A_y$ is a cone containing 0, we conclude that $x\\mathcal {X}\\subseteq A_y$ , and hence the lift satisfies “2 $\\Rightarrow \\!$ 1” at $y$ by Corollary REF .", "Example 2.36 (The rank factorization lift) Consider the lift $\\varphi (L,R) = LR^\\top $ of the bounded-rank matrix variety $\\mathcal {X}=\\mathbb {R}^{m \\times n}_{\\le r}$ , where $\\mathcal {M}= \\mathbb {R}^{m\\times r}\\times \\mathbb {R}^{n\\times r}$ is a Euclidean space.", "Let $y=(L,0)$ for some $L$ with $\\mathrm {rank}(L)=r$ and let $x=\\varphi (y)=0$ .", "Because $\\mathbb {R}^{m \\times n}_{\\le r}$ is a closed cone whose convex hull is all of $\\mathbb {R}^{m\\times n}$ , we have $x\\mathcal {X}=\\mathbb {R}^{m \\times n}_{\\le r}\\ \\textrm { and }\\ (x\\mathcal {X})^*=\\lbrace 0\\rbrace .$ Since $x\\mathcal {X}$ is not a linear space, there is no smooth lift of $\\mathbb {R}^{m \\times n}_{\\le r}$ satisfying “1 $\\Rightarrow \\!$ 1” at $y$ .", "If $c(t)=(L(t),R(t))$ is a smooth curve on $\\mathcal {M}$ such that $c(0)=y$ and $\\varphi \\circ c = L(t)R(t)^\\top $ , then $(\\varphi \\circ c)^{\\prime }(0)=LR^{\\prime }(0)^\\top $ .", "Since $R^{\\prime }(0)\\in \\mathbb {R}^{n\\times r}$ is arbitrary, we conclude that $\\operatorname{im}\\mathbf {L}_y = \\lbrace L Z^\\top : Z\\in \\mathbb {R}^{n\\times r}\\rbrace \\subsetneq x\\mathcal {X}.$ Moreover, if $LR^{\\prime }(0)^\\top =0$ then $R^{\\prime }(0)=0$ , hence $(\\varphi \\circ c)^{\\prime \\prime }(0)=LR^{\\prime \\prime }(0)^\\top $ .", "Since $R^{\\prime \\prime }(0)\\in \\mathbb {R}^{n\\times r}$ is again arbitrary, we conclude that $A_y=\\operatorname{im}\\mathbf {L}_y\\subsetneq x\\mathcal {X}$ , hence $A_y^*=(\\operatorname{im}\\mathbf {L}_y)^{\\perp }\\lnot \\subseteq \\lbrace 0\\rbrace =(x\\mathcal {X})^*$ .", "Thus, the sufficient conditions in Corollary REF (a) do not hold at $y$ .", "Nevertheless, it was shown in [22] that “2 $\\Rightarrow \\!$ 1” holds everywhere on $\\mathcal {M}$ .", "In Proposition REF below, we use a similar argument to show $B_y$ contains all rank-1 $m\\times n$ matrices, hence $B_y^* = \\lbrace 0\\rbrace = (x\\mathcal {X})^*$ and the sufficient condition Corollary REF (b) holds in this case.", "This example also shows that $B_y$ may be strictly larger than the closure of $A_y$ .", "The sufficient condition in Corollary REF (b) holds in all the examples satisfying “2 
$\\Rightarrow \\!$ 1” that we consider, and is convenient to check for the various lifts of low-rank matrices we study in Section REF .", "Still, we conjecture the following.", "Conjecture 2.37 The sufficient condition in Corollary REF (b) is not necessary in general.", "We can now prove Theorem REF , stating the chain of implications we find the most useful for verifying or refuting “2 $\\Rightarrow \\!$ 1” in the examples of Section .", "[Proof of Theorem REF ] As we show in Proposition REF (a) below, the first condition is just $x\\mathcal {X}\\subseteq A_y$ .", "The second condition is the equivalence of this inclusion to $x\\mathcal {X}=A_y$ which follows by Proposition REF (b).", "The second condition implies the third by Proposition REF (b).", "The third condition implies the fourth by Proposition REF (d), and it is equivalent to “2 $\\Rightarrow \\!$ 1” at $y$ by Theorem REF .", "The last implication gives a necessary condition for “2 $\\Rightarrow \\!$ 1” to hold.", "Suppose there exists $w\\in (\\operatorname{im}\\mathbf {L}_y)^\\perp \\cap (\\mathbf {Q}_y(y\\mathcal {M}))^* \\setminus (x \\mathcal {X})^*$ .", "Define $f(x^{\\prime }) = \\left\\langle {w},{x^{\\prime }}\\right\\rangle $ , whose gradient and Hessian at $x$ are $\\nabla f(x) = w$ and $\\nabla ^2 f(x) = 0$ .", "For any curve $c\\colon I\\rightarrow \\mathcal {M}$ satisfying $c(0)=y$ , denote $v=c^{\\prime }(0)\\in y\\mathcal {M}$ .", "Note that $(g\\circ c)^{\\prime }(0)=\\left\\langle {w},{\\mathbf {L}_y(v)}\\right\\rangle =0$ since $w\\in (\\operatorname{im}\\mathbf {L}_y)^{\\perp }$ and $(g\\circ c)^{\\prime \\prime }(0) = (f\\circ \\varphi \\circ c)^{\\prime \\prime }(0) = \\langle w,(\\varphi \\circ c)^{\\prime \\prime }(0)\\rangle = \\langle w,\\mathbf {Q}_y(v)\\rangle \\ge 0,$ where the second equality follows from the chain rule, the third equality follows from Lemma REF (a) below, and the inequality follows from $w\\in (\\mathbf {Q}(y\\mathcal {M}))^*$ .", "Thus, $y$ is 2-critical for ().", "However, $\\nabla f(x)=w \\notin (x \\mathcal {X})^*$ so $x$ is not stationary for (REF ), hence “2 $\\Rightarrow \\!$ 1” does not hold at $y$ .", "Our goal now is to derive more explicit expressions for the sets $A_y,B_y,W_y$ that appear in Theorem REF in terms of the maps $\\mathbf {L}_y$ and $\\mathbf {Q}_y$ from Definition REF .", "We first characterize the ambiguity in $\\mathbf {Q}_y(v)$ arising from different choices of curves $c_v$ in that definition.", "Lemma 2.38 For each $y\\in \\mathcal {M}$ and $v\\in y\\mathcal {M}$ , let $c_v\\colon I\\rightarrow \\mathcal {M}$ be a curve satisfying $c_v(0)=y$ and $c_v^{\\prime }(0)=v$ , so we can set $\\mathbf {Q}_y(v)=(\\varphi \\circ c_v)^{\\prime \\prime }(0)$ according to Definition REF .", "For any other curve $c$ satisfying $c(0)=y$ and $c^{\\prime }(0)=v$ , we have $(\\varphi \\circ c)^{\\prime \\prime }(0)-(\\varphi \\circ c_v)^{\\prime \\prime }(0)=\\mathbf {L}_y(c^{\\prime \\prime }(0)-c_v^{\\prime \\prime }(0))\\in \\operatorname{im}\\mathbf {L}_y$ .", "For any $u\\in y\\mathcal {M}$ , there exists a curve $c$ as in part (a) satisfying $c^{\\prime \\prime }(0)-c_v^{\\prime \\prime }(0)=u$ , hence $(\\varphi \\circ c)^{\\prime \\prime }(0)-(\\varphi \\circ c_v)^{\\prime \\prime }(0)=\\mathbf {L}_y(u)$ .", "In particular, $\\lbrace (\\varphi \\circ c)^{\\prime \\prime }(0): c(0)=y \\textrm { and } c^{\\prime }(0)=v\\rbrace = \\mathbf {Q}_y(v) + \\operatorname{im}\\mathbf {L}_y.$ For any $w\\in \\mathcal {E}$ , recall the function $\\varphi _w(y)=\\langle 
w,\\varphi (y)\\rangle $ from Definition REF .", "Let $c\\colon I\\rightarrow \\mathcal {M}$ be a curve satisfying $c(0)=y$ and $c^{\\prime }(0)=v$ .", "Then, on the one hand, $\\left.\\frac{\\mathrm {d}^2}{\\mathrm {d}t^2}\\right|_{t=0}\\varphi _w(c(t)) = \\left.\\frac{\\mathrm {d}^2}{\\mathrm {d}t^2}\\right|_{t=0}\\left\\langle {w},{(\\varphi \\circ c)(t)}\\right\\rangle = \\langle w,(\\varphi \\circ c)^{\\prime \\prime }(0)\\rangle .$ On the other hand, using the Riemannian structure on $\\mathcal {M}$ , $\\left.\\frac{\\mathrm {d}^2}{\\mathrm {d}t^2}\\right|_{t=0}\\varphi _w(c(t)) = \\langle \\nabla ^2\\varphi _w(y)[c^{\\prime }(0)],c^{\\prime }(0)\\rangle + \\langle \\nabla \\varphi _w(y),c^{\\prime \\prime }(0)\\rangle .$ By Lemma REF , we have $\\nabla \\varphi _w(y)=\\mathbf {L}_y^*(w)$ , so $\\langle \\nabla \\varphi _w(y),c^{\\prime \\prime }(0)\\rangle = \\langle w,\\mathbf {L}_y(c^{\\prime \\prime }(0))\\rangle $ .", "We conclude that $\\langle w,(\\varphi \\circ c)^{\\prime \\prime }(0)\\rangle = \\langle \\nabla ^2\\varphi _w(y)[v],v\\rangle + \\langle w,\\mathbf {L}_y(c^{\\prime \\prime }(0))\\rangle .$ Therefore, for any $w\\in \\mathcal {E}$ we have $\\langle w,(\\varphi \\circ c)^{\\prime \\prime }(0) - (\\varphi \\circ c_v)^{\\prime \\prime }(0)\\rangle = \\langle w,\\mathbf {L}_y(c^{\\prime \\prime }(0)-c_v^{\\prime \\prime }(0))\\rangle ,$ which proves the claim.", "For the first claim, set $c(t)=\\exp _y(tv + t^2(c_v^{\\prime \\prime }(0)-u)/2)$ where $\\exp $ is the exponential map of $\\mathcal {M}$  [8].", "The second claim follows from part (a).", "Lemma REF shows that the possible values of $\\mathbf {Q}_y(v)$ (depending on the choice of curve $c_v$ in Definition REF ) differ by an element of $\\operatorname{im}\\mathbf {L}_y$ , and conversely, every element of $\\mathbf {Q}_y(v)+\\operatorname{im}\\mathbf {L}_y$ can be obtained by an appropriate choice of $c_v$ .", "Consequently, if $w\\in (\\operatorname{im}\\mathbf {L}_y)^{\\perp }$ , then the inner product $\\langle w,\\mathbf {Q}_y(v)\\rangle $ is independent of the choice of $c_v$ in Definition REF .", "In fact, (REF ) shows that it is a quadratic form in $v\\in y\\mathcal {M}$ given by: $\\langle w,\\mathbf {Q}_y(v)\\rangle = \\langle \\nabla ^2\\varphi _w(y)[v],v\\rangle \\quad \\forall v\\in y\\mathcal {M}.$ We stress that this identity requires $w \\in (\\operatorname{im}\\mathbf {L}_y)^\\perp $ in general.", "It allows us to view $\\langle w,\\mathbf {Q}_y(v)\\rangle $ interchangeably as either a quadratic form in $v$ on $y\\mathcal {M}$ or a linear form in $w$ on $(\\operatorname{im}\\mathbf {L}_y)^{\\perp }$ .", "Remark 2.39 (Disambiguation of $\\mathbf {Q}_y$ ) Given the above ambiguity in $\\mathbf {Q}_y$ , it would be natural to define the codomain of $\\mathbf {Q}_y$ to be the quotient vector space $\\mathcal {E}/\\operatorname{im}\\mathbf {L}_y$ .", "This would make it independent of the choice of $c_v$ .", "Subsets of the quotient $\\mathcal {E}/\\operatorname{im}\\mathbf {L}_y$ are in bijection with subsets of $\\mathcal {E}$ that are closed under addition with $\\operatorname{im}\\mathbf {L}_y$ (i.e.", "subsets $S\\subseteq \\mathcal {E}$ such that $S+\\operatorname{im}\\mathbf {L}_y = S$ ), which includes $A_y$ and $B_y$ .", "Subsets of the dual vector space to $\\mathcal {E}/\\operatorname{im}\\mathbf {L}_y$ are in bijection with subsets of $(\\operatorname{im}\\mathbf {L}_y)^{\\perp }$ , which includes $W_y$ .", "Hence we could equivalently phrase our conditions for “2 $\\Rightarrow \\!$ 
1” in terms of subsets of $\\mathcal {E}/\\operatorname{im}\\mathbf {L}_y$ and its dual.", "However, we have several techniques to obtain expressions for $\\mathbf {Q}_y$ without explicitly choosing curves, see Section REF below.", "Thus, Definition REF mirrors the computations we do more closely.", "In practice, we view $\\mathbf {Q}_y$ as taking values in $\\mathcal {E}$ , and consider two maps differing by elements of $\\operatorname{im}\\mathbf {L}_y$ as equivalent for the purpose of verifying “2 $\\Rightarrow \\!$ 1” since they yield the same sets $A_y,B_y,W_y$ .", "We now express the sets $A_y,B_y,W_y$ appearing in our conditions for “2 $\\Rightarrow \\!$ 1” in terms of the maps $\\mathbf {L}_y$ and $\\mathbf {Q}_y$ .", "We explain how to compute $\\mathbf {L}_y$ and $\\mathbf {Q}_y$ in various settings in Section REF below.", "Proposition 2.40 For any $y\\in \\mathcal {M}$ , $A_y = \\mathbf {Q}_y(\\ker \\mathbf {L}_y) + \\operatorname{im}\\mathbf {L}_y$ .", "$B_y = \\underset{\\begin{array}{c}(v_i)_{i\\ge 1}:\\mathbf {L}_y(v_i)\\rightarrow 0\\end{array}}{\\bigcup }\\lim _i(\\mathbf {Q}_y(v_i)+ \\operatorname{im}\\mathbf {L}_y).\\footnote {A sequence (v_i+W)_{i\\ge 1} of translates of a subspace W of a (topological) vector space V converges (necessarily to another translate of W) iff there exist w_i\\in W such that (v_i+w_i)_{i\\ge 1}\\subseteq V converges in V.}$ $W_y = \\left\\lbrace w\\in A_y^*: \\forall v\\in \\ker \\mathbf {L}_y, \\langle \\nabla ^2\\varphi _w(y)[v],v\\rangle =0 \\Rightarrow \\nabla ^2\\varphi _w(y)[v] = 0\\right\\rbrace $ .", "Remark 2.41 We always have $B_y \\supseteq \\left\\lbrace \\lim _i\\mathbf {Q}_y(v_i):\\mathbf {L}_y(v_i)\\rightarrow 0\\right\\rbrace + \\operatorname{im}\\mathbf {L}_y,$ but inclusion may be strict depending on the curves $c_v$ in Definition REF .", "In practice, the expressions for $\\mathbf {Q}_y$ we obtain using our techniques from Section REF below are smooth, and the subset of $B_y$ in (REF ) is large enough to prove “2 $\\Rightarrow \\!$ 1” in every example where it holds.", "If $w\\in A_y$ , then $w=(\\varphi \\circ c)^{\\prime \\prime }(0)$ for some smooth curve $c$ on $\\mathcal {M}$ such that $c(0)=y$ and $0=(\\varphi \\circ c)^{\\prime }(0) = \\mathbf {L}_y(c^{\\prime }(0))$ , so $c^{\\prime }(0)\\in \\ker \\mathbf {L}_y$ .", "By Lemma REF (a), we have $w \\in \\mathbf {Q}(c^{\\prime }(0)) + \\operatorname{im}\\mathbf {L}_y$ , showing $A_y\\subseteq \\mathbf {Q}_y(\\ker \\mathbf {L}_y)+\\operatorname{im}\\mathbf {L}_y$ .", "Conversely, suppose $w = \\mathbf {Q}_y(v) + \\mathbf {L}_y(u)$ for some $v\\in \\ker \\mathbf {L}_y$ and $u\\in y\\mathcal {M}$ .", "By Lemma REF (b), there is a smooth curve $c\\colon I\\rightarrow \\mathcal {M}$ satisfying $c(0)=y$ , $c^{\\prime }(0)=v$ , and $(\\varphi \\circ c)^{\\prime \\prime }(0) = w$ .", "Since $(\\varphi \\circ c)^{\\prime }(0)=\\mathbf {L}_y(v)=0$ , this shows $\\mathbf {Q}_y(\\ker \\mathbf {L}_y)+\\operatorname{im}\\mathbf {L}_y\\subseteq A_y$ .", "If $w\\in B_y$ , then there are smooth curves $c_i$ such that $c_i(0)=y$ , $\\mathbf {L}_y(c_i^{\\prime }(0))\\rightarrow 0$ , and $(\\varphi \\circ c_i)^{\\prime \\prime }(0)\\rightarrow w$ .", "By Lemma REF (a), we have $(\\varphi \\circ c_i)^{\\prime \\prime }(0)\\in \\mathbf {Q}_y(c_i^{\\prime }(0)) + \\operatorname{im}\\mathbf {L}_y$ .", "Because $\\lim _i(\\varphi \\circ c_i)^{\\prime \\prime }(0) = w$ exists, we conclude that $\\lim _i(\\mathbf {Q}_y(c_i^{\\prime }(0))+\\operatorname{im}\\mathbf {L}_y) = w + 
\\operatorname{im}\\mathbf {L}_y$ exists as well, and $w$ is contained in this limit.", "This shows the inclusion $\\subseteq $ in the claim.", "Conversely, suppose $w\\in \\lim _i(\\mathbf {Q}_y(v_i)+\\operatorname{im}\\mathbf {L}_y)$ for some sequence $(v_i)_{i\\ge 1}\\subseteq y\\mathcal {M}$ such that $\\mathbf {L}_y(v_i)\\rightarrow 0$ .", "Then there exist $u_i\\in y\\mathcal {M}$ such that $w = \\lim _i(\\mathbf {Q}_y(v_i) + \\mathbf {L}_y(u_i))$ .", "By Lemma REF (b), there exist curves $c_i$ satisfying $c_i(0)=y$ , $c_i^{\\prime }(0)=v_i$ , and $(\\varphi \\circ c_i)^{\\prime \\prime }(0) = \\mathbf {Q}_y(v_i) + \\mathbf {L}_y(u_i)$ .", "Then $(\\varphi \\circ c_i)^{\\prime }(0)=\\mathbf {L}_y(v_i)\\rightarrow 0$ and $(\\varphi \\circ c_i)^{\\prime \\prime }(0)\\rightarrow w$ , so $w\\in B_y$ and hence the reverse inclusion in the claim also holds.", "Let $x=\\varphi (y)$ .", "By Proposition REF , a vector $w\\in \\mathcal {E}$ is contained in $W_y$ iff there exists $\\alpha >0$ such that $y$ is 2-critical for () with cost $g_{\\alpha }=f_{\\alpha }\\circ \\varphi $ where $f_{\\alpha }(x^{\\prime }) = \\langle w,x^{\\prime }\\rangle + \\tfrac{\\alpha }{2}\\Vert x^{\\prime }-x\\Vert ^2$ .", "By Lemma REF , this is equivalent to $\\nabla g_{\\alpha }(y) = \\mathbf {L}_y^*(w) = 0,\\qquad \\nabla ^2g_{\\alpha }(y) = \\alpha \\mathbf {L}_y^*\\circ \\mathbf {L}_y + \\nabla ^2\\varphi _w(y)\\succeq 0.$ In other words, $w\\in W_y$ iff $w\\in (\\operatorname{im}\\mathbf {L}_y)^{\\perp }$ and $\\nabla ^2\\varphi _w(y) + \\alpha \\mathbf {L}_y^*\\circ \\mathbf {L}_y\\succeq 0$ for some $\\alpha >0$ .", "To understand when the second condition holds, we decompose $y\\mathcal {M}=\\ker \\mathbf {L}_y\\oplus (\\ker \\mathbf {L}_y)^{\\perp }$ and express the relevant self-adjoint operators on $y\\mathcal {M}$ in block matrix form with respect to a basis compatible with this decomposition.", "More explicitly, choose a basis as described above and denote $n=\\dim \\ker \\mathbf {L}_y$ and $m=\\dim (\\ker \\mathbf {L}_y)^{\\perp }$ .", "Assume first that $m>0$ .", "We can represent $\\nabla ^2\\varphi _w(y)$ and $\\alpha \\mathbf {L}_y^*\\circ \\mathbf {L}_y$ as $\\nabla ^2\\varphi _w(y) &= \\begin{bmatrix} \\Phi _1 & \\Phi _2\\\\ \\Phi _2^\\top & \\Phi _3\\end{bmatrix},\\quad &&\\textrm {with }\\ \\Phi _1\\in \\mathbb {S}^n, \\Phi _3\\in \\mathbb {S}^m.\\\\\\alpha \\mathbf {L}_y^*\\circ \\mathbf {L}_y &= \\begin{bmatrix} 0 & 0\\\\ 0 & \\alpha \\Psi \\end{bmatrix},\\quad &&\\textrm {with }\\ \\Psi \\in \\mathbb {S}^m_{\\succ 0}.$ Thus, $w & \\in W_y & \\iff && w \\in (\\operatorname{im}\\mathbf {L}_y)^\\perp \\textrm { and } \\exists \\alpha >0 \\textrm { such that } \\begin{bmatrix}\\Phi _1 & \\Phi _2 \\\\ \\Phi _2^\\top \\!", "& \\Phi _3 + \\alpha \\Psi \\end{bmatrix} & \\succeq 0.$ By the generalized Schur complement theorem [50], the positivity of the above block matrix is equivalent to $\\Phi _1 \\succeq 0, \\operatorname{im}\\Phi _2 \\subseteq \\operatorname{im}\\Phi _1 \\textrm { and } \\exists \\alpha >0 \\textrm { such that } \\Phi _3 + \\alpha \\Psi \\succeq \\Phi _2^\\top \\!", "\\Phi _1^\\dagger \\Phi _2,$ where $\\Phi _1^\\dagger $ is the Moore–Penrose pseudo-inverse of $\\Phi _1$ .", "The last condition above always holds, since we can choose $\\alpha \\ge \\lambda _{\\max }(\\Phi _2^\\top \\Phi _1^{\\dagger }\\Phi _2 - \\Phi _3) / \\lambda _{\\min }(\\Psi )$ .", "We deduce the following expression for $W_y$ : $W_y = \\lbrace w \\in (\\operatorname{im}\\mathbf {L}_y)^\\perp : \\Phi _1 
\\succeq 0 \\textrm { and } \\operatorname{im}\\Phi _2 \\subseteq \\operatorname{im}\\Phi _1 \\rbrace \\\\ $ with $\\Phi _1$ and $\\Phi _2$ as defined above.", "We now work out basis-free characterizations of the properties $\\Phi _1 \\succeq 0$ and $\\operatorname{im}\\Phi _2 \\subseteq \\operatorname{im}\\Phi _1$ .", "First, notice that $\\Phi _1 \\succeq 0$ iff $\\begin{bmatrix} v_1 \\\\ \\mathbf {0}_{m}\\end{bmatrix}^\\top \\!", "\\begin{bmatrix} \\Phi _1 & \\Phi _2\\\\ \\Phi _2^\\top & \\Phi _3\\end{bmatrix}\\begin{bmatrix} v_1 \\\\ \\mathbf {0}_{m}\\end{bmatrix}\\ge 0,\\quad \\textrm {for all } v_1\\in \\mathbb {R}^n,$ or in basis-free terms, $\\left\\langle {\\nabla ^2\\varphi _w(y)[v]},{v}\\right\\rangle \\ge 0$ for all $v\\in \\ker \\mathbf {L}_y$ .", "If $w\\in (\\operatorname{im}\\mathbf {L}_y)^{\\perp }$ then (REF ) shows that this is also equivalent to $\\left\\langle {w},{\\mathbf {Q}_y(v)}\\right\\rangle \\ge 0$ for all $v\\in \\ker \\mathbf {L}_y$ , which is in turn equivalent to $\\left\\langle {w},{\\mathbf {Q}_y(v) + \\mathbf {L}_y(u)}\\right\\rangle \\ge 0$ for all $v\\in \\ker \\mathbf {L}_y$ and $u\\in y\\mathcal {M}$ .", "This last condition is just $w\\in A_y^*$ by part (a).", "Second, we must understand for which vectors $w$ it holds that $\\operatorname{im}\\Phi _2 \\subseteq \\operatorname{im}\\Phi _1$ , or equivalently, $\\ker \\Phi _1 \\subseteq \\ker \\Phi _2^\\top \\!", "$ (recall that $\\Phi _1^\\top =\\Phi _1$ ).", "If $\\Phi _1\\succeq 0$ , then $v_1\\in \\ker \\Phi _1$ iff $v_1^\\top \\!", "\\Phi _1 v_1=0$ .", "Moreover, if $v_1\\in \\ker \\Phi _1$ then $\\begin{bmatrix} \\Phi _1 & \\Phi _2\\\\ \\Phi _2^\\top & \\Phi _3\\end{bmatrix}\\begin{bmatrix} v_1 \\\\ \\mathbf {0}_{m}\\end{bmatrix} = \\begin{bmatrix} \\mathbf {0}_n \\\\ \\Phi _2^\\top \\!", "v_1\\end{bmatrix},$ which vanishes iff $v_1\\in \\ker \\Phi _2^\\top \\!", "$ .", "Thus, the inclusion $\\operatorname{im}\\Phi _2\\subseteq \\operatorname{im}\\Phi _1$ is equivalent to the implication $\\begin{bmatrix} v_1 \\\\ \\mathbf {0}_{m}\\end{bmatrix}^\\top \\!", "\\begin{bmatrix} \\Phi _1 & \\Phi _2\\\\ \\Phi _2^\\top & \\Phi _3\\end{bmatrix}\\begin{bmatrix} v_1 \\\\ \\mathbf {0}_{m}\\end{bmatrix} = 0 \\Rightarrow \\begin{bmatrix} \\Phi _1 & \\Phi _2\\\\ \\Phi _2^\\top & \\Phi _3\\end{bmatrix}\\begin{bmatrix} v_1 \\\\ \\mathbf {0}_{m}\\end{bmatrix} = 0,\\qquad \\textrm { for all } v_1\\in \\mathbb {R}^n.$ In basis-free terms, we have shown that if $\\Phi _1\\succeq 0$ , then $\\operatorname{im}\\Phi _2\\subseteq \\operatorname{im}\\Phi _1$ is equivalent to the implication $\\left\\langle {\\nabla ^2\\varphi _w(y)[v]},{v}\\right\\rangle =0\\Rightarrow \\nabla ^2\\varphi _w(y)[v]=0$ holding for all $v\\in \\ker \\mathbf {L}_y$ .", "Putting everything together, $W_y & = \\lbrace w \\in (\\operatorname{im}\\mathbf {L}_y)^\\perp : \\Phi _1 \\succeq 0 \\textrm { and } \\operatorname{im}\\Phi _2 \\subseteq \\operatorname{im}\\Phi _1 \\rbrace ,\\\\ & = \\lbrace w\\in (\\operatorname{im}\\mathbf {L}_y)^{\\perp }: w\\in A_y^* \\textrm { and } \\forall v\\in \\ker \\mathbf {L}_y, \\left\\langle {\\nabla ^2\\varphi _w(y)[v]},{v}\\right\\rangle =0\\Rightarrow \\nabla ^2\\varphi _w(y)[v]=0\\rbrace ,\\\\& = \\lbrace w\\in A_y^*: \\forall v\\in \\ker \\mathbf {L}_y, \\left\\langle {\\nabla ^2\\varphi _w(y)[v]},{v}\\right\\rangle =0\\Rightarrow \\nabla ^2\\varphi _w(y)[v]=0\\rbrace ,$ where the last equality holds because $A_y^*\\subseteq (\\operatorname{im}\\mathbf {L}_y)^{\\perp }$ by Proposition REF (b).", "This is 
the claimed expression for $W_y$ .", "If $m=0$ , or equivalently, if $\mathbf {L}_y=0$ , then $w\in W_y$ iff $w\in (\operatorname{im}\mathbf {L}_y)^{\perp }=\mathcal {E}$ and $\nabla ^2\varphi _w(y)\succeq 0$ .", "This in turn is equivalent to $w\in A_y^*=(\mathbf {Q}_y(\ker \mathbf {L}_y))^*\cap (\operatorname{im}\mathbf {L}_y)^{\perp }$ by (REF ), so $W_y=A_y^*$ in this case.", "Conversely, if $w\in A_y^*$ and $\mathbf {L}_y=0$ then $\nabla ^2\varphi _w(y)\succeq 0$ so the condition in the claimed expression for $W_y$ is satisfied automatically.", "This gives the claimed expression for $W_y$ for $m=0$ as well." ], [ "Composition of lifts", "In this section, we ask when the lift properties we have been studying are preserved under composition.", "We use the following proposition both to compute $\mathbf {L}_y$ and $\mathbf {Q}_y$ in various settings, and to study some of the lifts appearing in the literature in Sections -.", "Proposition 2.42 Let $\varphi \colon \mathcal {M}\rightarrow \mathcal {X}$ be a smooth lift, and let $\psi \colon \mathcal {N}\rightarrow \mathcal {M}$ be a smooth map between smooth manifolds such that $\varphi \circ \psi \colon \mathcal {N}\rightarrow \mathcal {X}$ is surjective.", "Both $\varphi $ and $\varphi \circ \psi $ are smooth lifts for $\mathcal {X}$ .", "For $z\in \mathcal {N}$ and $y=\psi (z)\in \mathcal {M}$ , the following hold.", "If $\varphi \circ \psi $ satisfies “local $\Rightarrow \!$ local” at $z$ , then $\varphi $ satisfies “local $\Rightarrow \!$ local” at $y$ .", "If $\psi $ is open (in particular, if $\psi $ is a submersion) at $z$ , and if $\varphi $ satisfies “local $\Rightarrow \!$ local” at $y$ , then $\varphi \circ \psi $ satisfies “local $\Rightarrow \!$ local” at $z$ .", "If $\varphi \circ \psi $ satisfies “1 $\Rightarrow \!$ 1” or “2 $\Rightarrow \!$ 1” at $z$ , then $\varphi $ satisfies the corresponding property at $y$ .", "If $\psi $ is a submersion at $z$ and $\varphi $ satisfies “1 $\Rightarrow \!$ 1” or “2 $\Rightarrow \!$ 1” at $y$ , then $\varphi \circ \psi $ satisfies the corresponding property at $z$ .", "If $\psi $ is a submersion at $z$ , then $\mathbf {L}_z^{\varphi \circ \psi } = \mathbf {L}_y^{\varphi } \circ \mathbf {L}_z^{\psi },\quad \mathbf {Q}_z^{\varphi \circ \psi } \equiv \mathbf {Q}_y^{\varphi } \circ \mathbf {L}_z^{\psi } \pmod {\operatorname{im}\mathbf {L}_z^{\varphi \circ \psi }}.$ Moreover, $\operatorname{im}\mathbf {L}_z^{\varphi \circ \psi } = \operatorname{im}\mathbf {L}_y^{\varphi }$ , $A_z^{\varphi \circ \psi } = A_y^{\varphi }$ , $B_z^{\varphi \circ \psi } = B_y^{\varphi }$ , and $W_z^{\varphi \circ \psi } = W_y^{\varphi }$ .", "The proof is straightforward but long, and is deferred to Appendix REF .", "Here we denote $v\equiv w\pmod {\operatorname{im}\mathbf {L}_y}$ to mean $v-w\in \operatorname{im}\mathbf {L}_y$ .", "By Remark REF , equality of $\mathbf {Q}_z^{\varphi \circ \psi }$ and $\mathbf {Q}_y^{\varphi }$ modulo $\operatorname{im}\mathbf {L}_y^{\varphi }=\operatorname{im}\mathbf {L}_z^{\varphi \circ \psi }$ means that either one can be used to verify “2 $\Rightarrow \!$ 1”.", "Proposition REF shows that, given a smooth lift $\varphi \colon \mathcal {M}\rightarrow \mathcal {X}$ , there is no benefit to further lifting $\mathcal {M}$ to another smooth manifold through $\psi \colon \mathcal {N}\rightarrow 
\mathcal {M}$ in terms of our properties.", "Indeed, if $\varphi $ does not satisfy one of our properties, then neither does $\varphi \circ \psi $ for any smooth $\psi $ (we cannot `fix' a bad lift by lifting it further).", "On the other hand, this proposition also tells us that our properties, as well as the sets involved in their characterization, are preserved under submersions.", "This allows us to check these properties on a chart for $\mathcal {M}$ .", "For lifts to a manifold $\mathcal {M}$ which is a quotient of another manifold $\overline{\mathcal {M}}$ (these arise naturally when quotienting by group actions, see [8]), the proposition allows us to verify our properties on $\overline{\mathcal {M}}$ , which is often easier.", "Remark 2.43 If $\psi \colon \mathcal {N}\rightarrow \mathcal {M}$ is a submersion, for each $z\in \mathcal {N}$ let $V_z=\ker \mathrm {D}\psi (z)$ and $H_z=(\ker \mathrm {D}\psi (z))^{\perp }$ be the so-called vertical and horizontal spaces at $z$ , which satisfy $z\mathcal {N}=V_z\oplus H_z$ and $H_z\cong {\psi (z)}\mathcal {M}$ .", "Proposition REF implies that $\mathbf {L}_z^{\varphi \circ \psi } = \mathbf {L}_z^{\varphi \circ \psi }\circ \mathrm {Proj}_{H_z}$ and $\mathbf {Q}_z^{\varphi \circ \psi } \equiv \mathbf {Q}_z^{\varphi \circ \psi }\circ \mathrm {Proj}_{H_z}$ where $\mathrm {Proj}_{H_z}$ denotes orthogonal projection onto $H_z$ , so it suffices to consider the restrictions of $\mathbf {L}_z^{\varphi \circ \psi }$ and $\mathbf {Q}_z^{\varphi \circ \psi }$ to the horizontal space at $z$ .", "The latter is often simpler than $z\mathcal {N}$ , see [8].", "We end this section with an application of Proposition REF to the robotics and computer vision literature.", "Example 2.44 (Shonan rotation averaging) In [17], the authors consider maximum likelihood estimation of a set of $n$ rotations on $\mathbb {R}^d$ from noisy measurements of pairs of relative rotations.", "This problem can be formulated as a low-rank SDP of the form (REF ) with $f$ linear, which can be lifted using the Burer–Monteiro approach to a problem of the form () with $\mathcal {M}=\mathrm {St}(p,d)^n$ for appropriate $p\ge d$ .", "Using a general result about the Burer–Monteiro lift in [9], they show in [17] that any rank-deficient 2-critical point for the lifted problem is globally optimal, and therefore maps to a globally optimal point for their original problem.", "Due to the availability of high-performance solvers for optimization over rotations, the authors of [17] further lift to optimization over $\mathcal {N}=\mathrm {SO}(p)^n$ , with the second lift map $\psi \colon \mathrm {SO}(p)^n\rightarrow \mathrm {St}(p,d)^n$ extracting the first $d$ columns of each matrix.", "The authors then prove that $\psi $ preserves global minima and 1-critical points.", "These are special cases of our Theorems REF and REF .", "Using our framework, we can improve these guarantees.", "First, as we show in Corollary REF below, the first lift used in [17] satisfies “2 $\Rightarrow \!$ 1” and “local $\Rightarrow \!$ local”.", "In particular, any 2-critical point for their first lifted problem maps to a stationary point even if it is full-rank.", "Second, since $\psi $ is a submersion, Proposition REF shows that the composed lift satisfies “local $\Rightarrow \!$ local” and “2 $\Rightarrow \!$ 1” as well.", "Third, all of the above conclusions continue to hold for any smooth cost function $f$ .", "Finally, if every stationary point for the original low-rank 
SDP of [17] is globally optimal (e.g., if the conditions of [9] hold), then any 2-critical point for their lifted problem is globally optimal.", "Hence the nonconvexity in the lifted problem is benign." ], [ "Computing $\mathbf {L}_y$ and $\mathbf {Q}_y$", "Theorem REF gives several conditions for “2 $\Rightarrow \!$ 1” that can be checked using $\mathbf {L}_y$ and $\mathbf {Q}_y$ from Definition REF .", "We therefore consider various strategies for computing $\mathbf {L}_y$ and $\mathbf {Q}_y$ given various presentations of $\mathcal {M}$ .", "Since $\mathbf {Q}_y$ is only defined modulo $\operatorname{im}\mathbf {L}_y$ , different methods may yield different expressions, any of which can be used to verify “2 $\Rightarrow \!$ 1”." ], [ "$\mathcal {M}$ through charts:", "Suppose we are given a chart $\psi \colon U\rightarrow \mathcal {M}$ on $\mathcal {M}$ , which is a diffeomorphism from an open subset $U\subseteq \mathcal {E}^{\prime }$ of some linear space $\mathcal {E}^{\prime }$ onto its image, and let $y\in \operatorname{im}\psi $ .", "Then we can compose $\varphi $ with $\psi $ to obtain a lift $\widetilde{\varphi }=\varphi \circ \psi $ whose domain is an open subset of a linear space.", "By Proposition REF , the lift $\varphi $ satisfies “1 $\Rightarrow \!$ 1” or “2 $\Rightarrow \!$ 1” at $y\in \mathcal {M}$ if and only if $\widetilde{\varphi }$ satisfies the corresponding property at $z=\psi ^{-1}(y)$ .", "Thus, it suffices to compute $\mathbf {L}_z^{\widetilde{\varphi }}$ and $\mathbf {Q}_z^{\widetilde{\varphi }}$ and use them to check “2 $\Rightarrow \!$ 1” at $z$ .", "Since $U$ is an open subset of a linear space $\mathcal {E}^{\prime }$ , it is natural to compute $\mathbf {L}_z^{\widetilde{\varphi }}$ and $\mathbf {Q}_z^{\widetilde{\varphi }}$ directly from Definition REF using curves $\widetilde{c}_{\widetilde{v}}(t) = z + t\widetilde{v}$ which are straight lines through $z$ in direction $\widetilde{v}\in \mathcal {E}^{\prime }=zU$ .", "This choice yields the expressions $\mathbf {L}_z^{\widetilde{\varphi }}(\widetilde{v}) = (\widetilde{\varphi }\circ \widetilde{c}_{\widetilde{v}})^{\prime }(0)=\mathrm {D}\widetilde{\varphi }(z)[\widetilde{v}],\quad \textrm {and}\quad \mathbf {Q}_z^{\widetilde{\varphi }}(\widetilde{v}) = (\widetilde{\varphi }\circ \widetilde{c}_{\widetilde{v}})^{\prime \prime }(0)=\mathrm {D}^2\widetilde{\varphi }(z)[\widetilde{v},\widetilde{v}],$ where $\mathrm {D}\widetilde{\varphi }(z)$ and $\mathrm {D}^2\widetilde{\varphi }(z)$ are the ordinary first- and second-order derivative maps of $\widetilde{\varphi }$ viewed as a map between linear spaces $\mathcal {E}^{\prime }\rightarrow \mathcal {E}$ .", "In particular, if $\mathcal {M}$ is itself a linear space, we may take $U=\mathcal {E}^{\prime }=\mathcal {M}$ and $\psi =\mathrm {id}$ and use (REF ) with $\widetilde{\varphi }=\varphi $ ."
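To make this recipe concrete, here is a minimal numerical sketch (ours, not part of the original development): it estimates $\mathbf {L}_z^{\widetilde{\varphi }}$ and $\mathbf {Q}_z^{\widetilde{\varphi }}$ by finite differences along straight lines in a chart, using the cuspidal cubic of Example REF with the global chart $\psi (t)=(t^2,t^3,t)$ of the twisted cubic. The output reproduces $\operatorname{im}\mathbf {L}_0=\lbrace 0\rbrace $ and the direction $(2,0)$ found in that example.

```python
import numpy as np

# Minimal numerical sketch (ours) of the chart-based recipe: L_z and Q_z are the
# first and second derivatives of phi_tilde = phi o psi along straight lines.
# Illustration on the cuspidal cubic X = {x : x_2^2 = x_1^3} lifted through the
# twisted cubic, with the global chart psi(t) = (t^2, t^3, t).

def phi(y):           # lift map M -> X: project onto the first two coordinates
    return np.array([y[0], y[1]])

def psi(t):           # global chart R -> M (the twisted cubic)
    return np.array([t**2, t**3, t])

def phi_tilde(t):     # composed lift on the chart
    return phi(psi(t))

def L_and_Q(z, v, h=1e-5):
    """Finite-difference estimates of L_z(v) and Q_z(v) along the line z + t*v."""
    f = lambda t: phi_tilde(z + t * v)
    L = (f(h) - f(-h)) / (2.0 * h)            # first derivative at t = 0
    Q = (f(h) - 2.0 * f(0.0) + f(-h)) / h**2  # second derivative at t = 0
    return L, Q

L, Q = L_and_Q(z=0.0, v=1.0)
print(L)   # ~ (0, 0): im L_0 = {0}, so "1 => 1" fails at the origin
print(Q)   # ~ (2, 0): together with im L_0 this generates the tangent cone ray, as in the example
```

The same finite-difference template applies verbatim to any chart, which makes it a convenient sanity check for hand computations of (REF ).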
], [ "$\\mathcal {M}$ embedded in a linear space:", "Suppose now that $\\mathcal {M}$ is an embedded Riemannian submanifold of another linear space $\\mathcal {E}^{\\prime }$ .", "By [8], the lift $\\varphi $ can be extended to a smooth map on a neighborhood $V$ of $\\mathcal {M}$ in $\\mathcal {E}^{\\prime }$ , denoted by $\\overline{\\varphi }\\colon V\\rightarrow \\mathcal {E}$ .", "This means $\\overline{\\varphi }$ is a smooth map defined on an open subset $V\\subseteq \\mathcal {E}^{\\prime }$ containing $\\mathcal {M}$ and satisfies $\\overline{\\varphi }|_{\\mathcal {M}}=\\varphi $ .", "If $c_v\\colon I\\rightarrow \\mathcal {M}\\subseteq V$ is a curve passing through $y\\in \\mathcal {M}$ with velocity $v\\in y\\mathcal {M}\\subseteq yV=\\mathcal {E}^{\\prime }$ , then $\\varphi \\circ c_v = \\overline{\\varphi }\\circ c_v$ because the curve is contained in $\\mathcal {M}$ where $\\overline{\\varphi }$ agrees with $\\varphi $ .", "Denote by $u_v = \\ddot{c}_v(0)$ the ordinary (extrinsic) acceleration of $c_v$ viewed as a curve in $\\mathcal {E}^{\\prime }$ .", "Then Definition REF together with the chain rule give $\\mathbf {L}_y(v) = (\\overline{\\varphi }\\circ c_v)^{\\prime }(0) = {\\varphi }(y)[v],\\quad \\mathbf {Q}_y(v) = (\\overline{\\varphi }\\circ c_v)^{\\prime \\prime }(0) = 2\\overline{\\varphi }(y)[v,v] + {\\varphi }[u_v].$ Suppose further that $\\mathcal {M}$ is given locally near $y$ as the zero set of a smooth map $h\\colon \\mathcal {E}\\rightarrow \\mathbb {R}^k$ such that $\\mathrm {rank}\\, h̥(y^{\\prime })=k$ for all $y^{\\prime }$ near $y$ .", "For a curve $c_v$ as above, we have $h(c_v(t))\\equiv 0$ , so in particular $(h\\circ c_v)^{\\prime }(0) = (h\\circ c_v)^{\\prime \\prime }(0)=0$ .", "By the chain rule, the latter two equations can be written as $h̥(y)[v] = 2h(y)[v,v] + h̥(y)[u_v] = 0.$ Conversely, for any $v,u_v\\in \\mathcal {E}^{\\prime }$ satisfying (REF ), there exists a curve on $\\mathcal {M}$ passing through $y$ with velocity $v$ and (extrinsic) acceleration $u_v$ by [41].", "The set of all possible extrinsic accelerations of curves on $\\mathcal {M}$ passing through $y\\in \\mathcal {M}$ with velocity $v\\in y\\mathcal {M}$ is the second-order tangent set to $\\mathcal {M}$ at $y$ for $v$ , and is denoted by $2_{y,v}\\mathcal {M}$  [41].", "The above discussion shows that $y\\mathcal {M}= \\ker h̥(y),\\qquad 2_{y,v}\\mathcal {M}= \\left\\lbrace u\\in \\mathcal {E}^{\\prime }:h̥(y)[u] = -2h(y)[v,v]\\right\\rbrace ,$ and $\\mathbf {L}_y(v) = {\\varphi }(y)[v],\\qquad \\mathbf {Q}_y(v) = 2\\overline{\\varphi }(y)[v,v] + {\\varphi }(y)[u_v],\\ \\textrm { for some } u_v\\in 2_{y,v}\\mathcal {M},$ for any extension $\\overline{\\varphi }$ of $\\varphi $ .", "Note that $2_{y,v}\\mathcal {M}$ is an affine subspace of $\\mathcal {E}^{\\prime }$ which is a translate of the subspace $y\\mathcal {M}$ , as can be seen from (REF ).", "Therefore, different choices of $u_v$ lead to different expressions for $\\mathbf {Q}_y$ which are equal modulo $\\operatorname{im}\\mathbf {L}_y$ ." 
], [ "$\\mathcal {M}$ as a quotient manifold:", "Suppose next that $\\mathcal {M}$ is a quotient manifold of $\\overline{\\mathcal {M}}$ with quotient map $\\pi \\colon \\overline{\\mathcal {M}}\\rightarrow \\mathcal {M}$ .", "Then $\\overline{\\varphi }=\\varphi \\circ \\pi $ gives a smooth lift of $\\mathcal {X}$ to $\\overline{\\mathcal {M}}$ .", "Since $\\pi $ is a submersion, Proposition REF and Remark REF imply that to check “2 $\\Rightarrow \\!$ 1”, we need only compute $\\mathbf {L}_z$ and $\\mathbf {Q}_z$ for $\\overline{\\varphi }$ restricted to the horizontal spaces $(\\ker (z))^{\\perp }$ using the preceding two methods." ], [ "Computing $\\nabla ^2\\varphi _w$ :", "To check the equivalent condition in Theorem REF for any presentation of $\\mathcal {M}$ , we need to compute $W_y$ .", "If we use Proposition REF to do so, we need an expression for the Riemannian Hessian $\\nabla ^2\\varphi _w(y)$ where $\\varphi _w(y)=\\langle w, \\varphi (y)\\rangle $ for $w\\in (\\operatorname{im}\\mathbf {L}_y)^{\\perp }$ .", "Given $\\mathbf {Q}_y$ , we can obtain $\\nabla ^2\\varphi _w(y)$ as the unique self-adjoint operator on $y\\mathcal {M}$ that defines the quadratic form (REF ).", "Conversely, if we compute $\\nabla ^2\\varphi _w(y)$ for all $w\\in (\\operatorname{im}\\mathbf {L}_y)^{\\perp }$ , e.g., using the techniques from [8], we can set $\\mathbf {Q}_y(v)$ to be the unique element of $(\\operatorname{im}\\mathbf {L}_y)^{\\perp }$ satisfying (REF ), providing another way to compute $\\mathbf {Q}_y$ .", "Remark 2.45 A natural choice of curve $c_v$ in Definition REF would be one that has zero initial acceleration, i.e.", "$c_v^{\\prime \\prime }(0)=0$ .", "This choice gives $\\mathbf {Q}_y(v)\\in (\\operatorname{im}\\mathbf {L}_y)^{\\perp }$ , and can also be obtained by choosing $u_v$ in (REF ) to be the minimum norm solution to the system of equations defining $2_{y,v}\\mathcal {M}$ in (REF ), or by deriving $\\mathbf {Q}_y$ from $\\nabla ^2\\varphi _w(y)$ using (REF ).", "Incidentally, this minimum norm solution is called the second fundamental form of $\\mathcal {M}$ in $\\mathcal {E}^{\\prime }$ .", "Even though this choice of curve $c_v$ is natural theoretically, the resulting expressions for $\\mathbf {Q}_y$ may be unnecessarily complicated, as Example REF below illustrates.", "We now illustrate the above techniques for computing $\\mathbf {L}_y$ and $\\mathbf {Q}_y$ on two examples.", "We first show an example using charts.", "Example 2.46 (Desingularization of $\\mathbb {R}^{m \\times n}_{\\le r}$ ) Consider the lift, proposed in [28], of $\\mathcal {X}=\\mathbb {R}^{m \\times n}_{\\le r}$ to the so-called desingularization of $\\mathbb {R}^{m \\times n}_{\\le r}$ : $\\mathcal {M}= \\lbrace (X,\\mathcal {S})\\in \\mathbb {R}^{m\\times n}\\times \\mathrm {Gr}(n,n-r): \\mathcal {S}\\subseteq \\ker X\\rbrace ,\\qquad \\varphi (X,\\mathcal {S}) = X.$ Here $\\mathrm {Gr}(n,n-r)$ is the Grassmannian of $(n-r)$ -dimensional subspaces of $\\mathbb {R}^n$  [8].", "We compute $\\mathbf {L}$ and $\\mathbf {Q}$ using charts.", "For $(X_0,\\mathcal {S}_0)\\in \\mathcal {M}$ , let $Y_0\\in \\mathbb {R}^{n\\times (n-r)}$ be a matrix satisfying $\\mathrm {col}(Y_0)=\\mathcal {S}_0$ , so $X_0Y_0=0$ .", "Since $\\mathrm {rank}(Y_0)=n-r$ , we can find $n-r$ linearly independent rows in $Y_0$ .", "Let $J\\in \\mathbb {R}^{(n-r)\\times (n-r)}$ be the invertible submatrix of $Y_0$ obtained by extracting these rows, and let $\\Pi \\in \\mathbb {R}^{n\\times n}$ be a permutation matrix sending 
those $n-r$ rows to the first rows.", "Then $\Pi Y_0 J^{-1} = \begin{bmatrix} I_{n-r}\\ W_0\end{bmatrix} & \textrm { and } X_0 = \begin{bmatrix}-Z_0W_0, & Z_0\end{bmatrix}\Pi & \textrm {for some } Z_0\in \mathbb {R}^{m\times r} & \textrm { and } W_0\in \mathbb {R}^{r\times (n-r)},$ where the second identity is implied by $0 = X_0Y_0J^{-1}=X_0\Pi ^\top (\Pi Y_0 J^{-1})$ .", "Accordingly, a chart $\psi \colon \mathbb {R}^{m\times r}\times \mathbb {R}^{r\times (n-r)}\rightarrow \mathcal {M}$ containing $(X_0,\mathcal {S}_0)$ is given by $\psi (Z, W) = \left(\begin{bmatrix} -ZW, & Z\end{bmatrix}\Pi ,\ \mathrm {col}\!\left(\Pi ^\top \begin{bmatrix} I_{n-r}\\ W\end{bmatrix}\right)\right).$ Composing with $\varphi $ , we obtain the lift $\widetilde{\varphi }(Z,W) = \varphi (\psi (Z,W))=\begin{bmatrix} -ZW, & Z\end{bmatrix}\Pi $ , and by (REF ), $\begin{split}&\mathbf {L}_{(Z,W)}^{\widetilde{\varphi }}(\dot{Z},\dot{W}) = \mathrm {D}\widetilde{\varphi }(Z,W)[\dot{Z},\dot{W}] = \begin{bmatrix} -\dot{Z}W - Z\dot{W}, & \dot{Z}\end{bmatrix}\Pi ,\\&\mathbf {Q}_{(Z,W)}^{\widetilde{\varphi }}(\dot{Z},\dot{W}) = \mathrm {D}^2\widetilde{\varphi }(Z,W)[(\dot{Z},\dot{W}),(\dot{Z},\dot{W})] = \begin{bmatrix} -2\dot{Z}\dot{W}, & 0\end{bmatrix}\Pi .\end{split}$ For $V=\begin{bmatrix}V_1, & V_2\end{bmatrix}\Pi \in \mathbb {R}^{m\times n}$ where $V_2$ is $m\times r$ , the Hessian $\nabla ^2\widetilde{\varphi }_V(Z,W)$ is the ordinary Euclidean Hessian of $\widetilde{\varphi }_V(Z,W)=\langle V,\widetilde{\varphi }(Z,W)\rangle $ , given by $\nabla ^2\widetilde{\varphi }_V(Z,W)[\dot{Z},\dot{W}] = \begin{bmatrix} -V_1\dot{W}^\top , & -\dot{Z}^\top V_1\end{bmatrix}.$ We use the above expressions to show that this lift satisfies “2 $\Rightarrow \!$ 1” everywhere on $\mathcal {M}$ in Proposition REF below.", "Alternatively, we can compute $\mathbf {L}$ and $\mathbf {Q}$ by viewing $\mathcal {M}$ as a quotient manifold of an embedded submanifold, and compute $\nabla ^2\varphi _V$ using (REF ).", "We carry out this alternative approach and compare it to the charts-based one above in Appendix REF .", "It is much harder to use the resulting expressions to check the equivalent condition for “2 $\Rightarrow \!$ 1” in Theorem REF .", "Next, we illustrate the embedded submanifold case on the following low-dimensional example, which shows that the necessary condition for “2 $\Rightarrow \!$ 1” in the last implication of Theorem REF is not sufficient.", "Example 2.47 Consider the lift of the unit disk $\mathcal {X}$ in $\mathbb {R}^2$ to $\mathcal {M}= \lbrace y\in \mathbb {R}^3:y_1^2+y_2^2+y_3^4=1\rbrace ,\qquad \varphi (y)=(y_1,y_2).$ Note that $\mathcal {M}$ is an embedded submanifold of $\mathbb {R}^3$ defined by $h(y)=y_1^2+y_2^2+y_3^4-1$ , as $\nabla h(y)\ne 0$ for all $y\in \mathcal {M}$ .", "The first two derivatives of $h$ are $\mathrm {D}h(y)[\dot{y}] = 2y_1\dot{y}_1 + 2y_2\dot{y}_2 + 4y_3^3\dot{y}_3,\qquad \mathrm {D}^2h(y)[\dot{y},\dot{y}] = 2\dot{y}_1^2 + 2\dot{y}_2^2 + 12y_3^2\dot{y}_3^2.$ Let $y=(1,0,0)$ and $x=\varphi (y)=(1,0)$ .", "We get from (REF ) that $y\mathcal {M}= \lbrace \dot{y}\in \mathbb {R}^3:\dot{y}_1 = 0\rbrace ,\qquad 2_{y,\dot{y}}\mathcal {M}= (-\dot{y}_2^2,0,0)+y\mathcal {M}.$ Because $\varphi $ extends to a linear map $\overline{\varphi }(y)=(y_1,y_2)$ on all of $\mathbb {R}^3$ , whose first two derivatives are $\mathrm {D}\overline{\varphi }(y)[\dot{y}]=(\dot{y}_1,\dot{y}_2)$ and 
$2\\overline{\\varphi }(y)[\\dot{y},\\dot{y}]=0$ , we have from (REF ) that $\\mathbf {L}_y(\\dot{y})=(0,\\dot{y}_2),\\quad \\mathbf {Q}_y(\\dot{y}) = (-\\dot{y}_2^2,0).$ On the other hand, $x\\mathcal {X} = \\lbrace \\dot{x}\\in \\mathbb {R}^2:\\dot{x}_1\\le 0\\rbrace \\supsetneq \\operatorname{im}\\mathbf {L}_y = \\lbrace \\dot{x}\\in \\mathbb {R}^2:\\dot{x}_1 = 0\\rbrace .$ Since $\\operatorname{im}\\mathbf {L}_y\\ne x\\mathcal {X}$ , “1 $\\Rightarrow \\!$ 1” does not hold at $y$ .", "Checking our sufficient conditions, Proposition REF gives $A_y = \\mathbf {Q}_y(\\mathrm {ker}(\\mathbf {L}_y)) + \\operatorname{im}\\mathbf {L}_y = \\operatorname{im}\\mathbf {L}_y,\\qquad B_y = \\bigcup _{\\begin{array}{c}\\mathbf {L}_y(\\dot{y}_i)\\rightarrow 0\\\\ (\\dot{y}_i)_1=0\\end{array}}\\lim _i(\\mathbf {Q}_y(\\dot{y}_i) + \\operatorname{im}\\mathbf {L}_y) = \\operatorname{im}\\mathbf {L}_y,$ where the last equality follows since $\\mathbf {L}_y(\\dot{y}_i) = (0,(\\dot{y}_i)_2)\\rightarrow 0$ implies $\\mathbf {Q}_y(\\dot{y}_i) = (-(\\dot{y}_i)_2^2,0)\\rightarrow 0$ .", "Thus, all sufficient conditions in Theorem REF fail.", "For the equivalent condition in Theorem REF , if $w\\in A_y^*$ , or equivalently, $w=(w_1,0)$ , then $\\nabla ^2\\varphi _w(y)$ is the unique self-adjoint operator on $y\\mathcal {M}$ satisfying $\\langle \\nabla ^2\\varphi _w(y)[\\dot{y}],\\dot{y}\\rangle = \\langle w,\\mathbf {Q}_y(\\dot{y})\\rangle = -w_1\\dot{y}_2^2,\\quad \\textrm {hence } \\nabla ^2\\varphi _w(y)[\\dot{y}] = (0, -w_1\\dot{y}_2,0).$ For $u\\in \\mathrm {ker}(\\mathbf {L}_y)$ , or equivalently, $u = (0,0,u_3)$ , we get $\\langle \\nabla ^2\\varphi _w(y)[u],u\\rangle = 0$ and $\\nabla ^2\\varphi _w(y)[u] = 0$ .", "Proposition REF shows that $W_y = A_y^*\\lnot \\subseteq (T_x\\mathcal {X})^*$ , hence “2 $\\Rightarrow \\!$ 1” does not hold.", "Nevertheless, the necessary condition in Theorem REF does hold.", "Indeed, $\\mathbf {Q}_y(y\\mathcal {M}) + \\operatorname{im}\\mathbf {L}_y = \\lbrace (-\\dot{y}_2^2,0):\\dot{y}_2\\in \\mathbb {R}\\rbrace + \\lbrace (0,\\dot{y}_2):\\dot{y}_2\\in \\mathbb {R}\\rbrace = x\\mathcal {X},$ so taking duals on both sides yields the desired condition." ], [ "Examples", "In this section, we illustrate how our theory can be used to verify properties of various concrete lifts.", "For each of the lifts below, we ask whether “local $\\Rightarrow \\!$ local”, “1 $\\Rightarrow \\!$ 1”, and “2 $\\Rightarrow \\!$ 1” hold." 
], [ "Curves", "We begin with two classic examples of nonsmooth plane curves: the cuspidal and nodal cubics.", "Both have singularities at the origin, though the two singularities have different types, called a cusp and node respectively [23].", "They are shown in Figure REF , together with their tangent cone at the origin.", "Figure: Cuspidal and nodal cubics (solid blue), along with their tangent cones at the origin (dashed red).We have already encountered them in Examples REF and REF , where we proved some of their properties using ad hoc arguments.", "They can be easily treated systematically using our framework, computing $\\mathbf {L}$ and $\\mathbf {Q}$ using the expressions (REF ) for embedded submanifolds.", "Specifically, the following two propositions follow from short computations.", "Proposition 3.1 (Cuspidal cubic) The lift (REF ) of the cuspidal cubic (REF ) satisfies “local $\\Rightarrow \\!$ local” and “2 $\\Rightarrow \\!$ 1” everywhere, and “1 $\\Rightarrow \\!$ 1” precisely on $\\mathcal {M}\\setminus \\lbrace 0\\rbrace $ .", "Proposition 3.2 (Nodal cubic) The lift (REF ) of the nodal cubic (REF ) satisfies “local $\\Rightarrow \\!$ local” precisely on $\\mathcal {M}\\setminus \\lbrace (0,0,\\pm 1)\\rbrace $ ; the same holds for “1 $\\Rightarrow \\!$ 1” and “2 $\\Rightarrow \\!$ 1”.", "Remark 3.3 In general, every algebraic curve has a so-called normalization, which is a smooth algebraic curve together with a surjective polynomial map to the original curve [44].", "Any other smooth algebraic lift of the curve (i.e.", "a lift to a smooth algebraic variety such that the lift map is polynomial) must factor through the normalization via a polynomial map [44].", "By Proposition REF , an algebraic curve has a smooth algebraic lift satisfying “local $\\Rightarrow \\!$ local”, “1 $\\Rightarrow \\!$ 1”, or “2 $\\Rightarrow \\!$ 1”, if and only if the normalization satisfies the corresponding property.", "For the cuspidal and nodal cubics, the above lifts are their normalizations.", "Unfortunately, higher-dimensional varieties do not have such a distinguished lift, and non-polynomial lifts need not factor through the normalization.", "Next, we consider lifts of matrices and tensors of bounded rank." 
], [ "PSD matrices", "We consider the set of bounded-rank PSD matrices $\\mathcal {X}= \\lbrace X\\in \\mathbb {R}^{n\\times n}: X^\\top = X,\\ X\\succeq 0,\\ \\mathrm {rank}(X)\\le r\\rbrace = \\mathbb {S}_{\\succeq 0}^n\\cap \\mathbb {R}^{n\\times n}_{\\le r},$ with $r<n$ together with its lift to $\\mathcal {M}=\\mathbb {R}^{n\\times r}$ via the factorization map $\\varphi (R)=RR^\\top $ .", "This is the parametrization of $\\mathcal {X}$ (possibly intersected with an affine subspace) proposed by Burer and Monteiro to solve certain SDPs [10].", "Viewing this parametrization as a smooth lift, we ask whether it satisfies our desirable properties.", "Proposition 3.4 The lift $\\varphi (R)=RR^\\top $ from $\\mathcal {M}=\\mathbb {R}^{n\\times r}$ to $\\mathcal {X}= \\mathbb {R}^{n\\times n}_{\\le r}\\cap \\mathbb {S}_{\\succeq 0}^n$ satisfies “local $\\Rightarrow \\!$ local” and “2 $\\Rightarrow \\!$ 1” everywhere, and “1 $\\Rightarrow \\!$ 1” at $R\\in \\mathcal {M}$ if and only if $\\mathrm {rank}(R)=r$ .", "Moreover, the sufficient condition $A_R = {RR^\\top }\\mathcal {X}$ for “2 $\\Rightarrow \\!$ 1” holds everywhere, and $X\\mathcal {X}&= X\\mathbb {R}^{n\\times n}_{\\le r}\\cap X\\mathbb {S}_{\\succeq 0}^n\\\\ &= \\lbrace V\\in \\mathbb {S}^n: V_{\\perp }\\succeq 0,\\ \\mathrm {rank}(V_{\\perp })\\le r-\\mathrm {rank}(X) \\textrm { where } V_{\\perp } = \\mathrm {Proj}_{\\mathrm {col}(X)^{\\perp }}V\\mathrm {Proj}_{\\mathrm {col}(X)^{\\perp }}\\rbrace \\\\&= \\left\\lbrace U\\begin{pmatrix} V_1 & V_2\\\\ V_2^\\top & V_3\\end{pmatrix}U^\\top : V_1\\in \\mathbb {R}^{\\mathrm {rank}(X)\\times \\mathrm {rank}(X)},\\ V_3\\succeq 0,\\ \\mathrm {rank}(V_3)\\le r-\\mathrm {rank}(X)\\right\\rbrace ,$ where in the last line $U\\in O(n)$ is an eigenmatrix for $X$ satisfying $X=U\\Sigma U^\\top $ with $\\Sigma \\in \\mathbb {R}^{n\\times n}$ diagonal with the eigenvalues of $X$ on the diagonal in descending order.", "For “local $\\Rightarrow \\!$ local”, we sketch the proof from [11] of (in our terminology) SLP, the condition from Remark REF that is equivalent to “local $\\Rightarrow \\!$ local” by Theorem REF .", "Fix $R\\in \\mathcal {M}$ and $X=RR^\\top $ .", "For any sequence $(X_i)_{i\\ge 1}\\subseteq \\mathcal {X}$ converging to $X$ , let $X_i=U_i\\Sigma _i U_i^\\top $ be a size-$r$ eigendecomposition for each $X_i$ (so $\\Sigma _i\\in \\mathbb {R}^{r\\times r}$ is a diagonal matrix of eigenvalues of $X_i$ , possibly including zeros).", "Let $R_i = U_i\\Sigma _i^{1/2}$ and note that $\\Vert R_i\\Vert =\\Vert X_i\\Vert ^{1/2}$ which is bounded, hence after passing to a subsequence we may assume that $\\lim _iR_i=\\widetilde{R}$ exists.", "By continuity of $\\varphi $ , we have $X=\\lim _iX_i=\\lim _i\\varphi (R_i) = \\varphi (\\lim _iR_i)=\\widetilde{R}\\widetilde{R}^\\top $ , hence $\\widetilde{R}\\widetilde{R}^\\top = RR^\\top $ .", "By [11], there is an orthogonal $Q\\in O(r)$ satisfying $R=\\widetilde{R}Q$ , so $(R_iQ)_{i\\ge 1}\\subseteq \\mathcal {M}$ is a sequence converging to $R$ satisfying $\\varphi (R_iQ)=X_i$ .", "This proves SLP holds.", "For $V\\in \\mathbb {R}^{n\\times n}$ and $X=RR^\\top \\in \\mathcal {X}$ , define $V_{\\perp } \\mathrm {Proj}_{\\ker (X)}V\\mathrm {Proj}_{\\ker (X)} = \\mathrm {Proj}_{\\mathrm {col}(R)^{\\perp }}V\\mathrm {Proj}_{\\mathrm {col}(R)^{\\perp }},$ where we used the fact that $\\mathrm {col}(R)=\\mathrm {col}(X) = \\mathrm {ker}(X)^{\\perp }$ .", "The tangent cone at $X$ to $\\mathbb {S}_{\\succeq 0}^n$ is given by [24] 
$\\begin{aligned}X\\mathbb {S}_{\\succeq 0}^n &= \\lbrace V\\in \\mathbb {S}^n: \\langle Vu,u\\rangle \\ge 0,\\ \\textrm {for all } u\\in \\mathrm {ker}(X)\\rbrace = \\lbrace V\\in \\mathbb {S}^n:V_{\\perp }\\succeq 0\\rbrace \\\\&= \\left\\lbrace U\\begin{pmatrix} V_1 & V_2\\\\ V_2^\\top & V_3\\end{pmatrix}U^\\top :V_1\\in \\mathbb {R}^{\\mathrm {rank}(X)\\times \\mathrm {rank}(X)},\\ V_3\\succeq 0\\right\\rbrace ,\\end{aligned}$ and the tangent cone at $X$ to $\\mathbb {R}^{n\\times n}_{\\le r}$ is given by [43] $X\\mathbb {R}^{n\\times n}_{\\le r} &= \\lbrace V\\in \\mathbb {R}^{n\\times n}: \\mathrm {rank}(V_{\\perp })\\le r-\\mathrm {rank}(X)\\rbrace \\\\&= \\left\\lbrace U\\begin{pmatrix} V_1 & V_2\\\\ \\widetilde{V}_2 & V_3\\end{pmatrix} U^\\top : V_1\\in \\mathbb {R}^{\\mathrm {rank}(X)\\times \\mathrm {rank}(X)},\\ \\mathrm {rank}(V_3)\\le r-\\mathrm {rank}(X)\\right\\rbrace .$ Hence the intersection $X\\mathbb {R}^{n\\times n}_{\\le r}\\cap X\\mathbb {S}_{\\succeq 0}^n$ is given by the claimed expression.", "Furthermore, the tangent cone to an intersection is always included in the intersection of the tangent cones, which follows easily from Definition REF .", "Hence $X\\mathcal {X}\\subseteq X\\mathbb {R}^{n\\times n}_{\\le r}\\cap X\\mathbb {S}_{\\succeq 0}^n$ and it suffices to show the reverse inclusion.", "We do so simultaneously with proving “2 $\\Rightarrow \\!$ 1”.", "Since $\\mathcal {M}$ is a linear space, the expressions (REF ) with the identity chart give $&\\mathbf {L}_R(\\dot{R}) = (R)[\\dot{R}] = R\\dot{R}^\\top + \\dot{R} R^\\top ,\\\\&\\mathbf {Q}_R(\\dot{R}) = 2\\varphi (R)[\\dot{R},\\dot{R}] = 2\\dot{R}\\dot{R}^\\top .$ Therefore, $\\operatorname{im}\\mathbf {L}_R = \\lbrace V\\in \\mathbb {S}^n: V_{\\perp } = 0\\rbrace .$ Indeed, if $V\\in \\operatorname{im}\\mathbf {L}_R$ then $V=R\\dot{R}^\\top + \\dot{R} R^\\top $ for some $\\dot{R}\\in \\mathbb {R}^{n\\times r}$ and hence $V_{\\perp }=0$ since $\\mathrm {Proj}_{\\mathrm {col}(R)^{\\perp }}R=0$ , while if $V\\in \\mathbb {S}^n$ satisfies $V_{\\perp } = 0$ then $V = \\mathrm {Proj}_{\\mathrm {col}(R)}V + V\\mathrm {Proj}_{\\mathrm {col}(R)} - \\mathrm {Proj}_{\\mathrm {col}(R)}V\\mathrm {Proj}_{\\mathrm {col}(R)} = \\mathbf {L}_R\\left(VR^{\\dagger \\top } - \\frac{1}{2}RR^{\\dagger }VR^{\\dagger \\top }\\right)\\in \\operatorname{im}\\mathbf {L}_R,$ where $RR^{\\dagger } = \\mathrm {Proj}_{\\mathrm {col}(R)}$ .", "Furthermore, we have $\\mathbf {Q}_R(\\ker \\mathbf {L}_R) \\supseteq \\lbrace V\\in \\mathbb {S}^n_{\\succeq 0}: \\mathrm {rank}(V)\\le r-\\mathrm {rank}(X)\\rbrace .$ Indeed, if $V\\in \\mathbb {S}^n_{\\succeq 0}$ with $\\mathrm {rank}(V)\\le r-\\mathrm {rank}(X)$ , let $V=2\\dot{R}_0\\dot{R}_0^\\top $ be a (rescaled) Cholesky decomposition with $\\dot{R}_0\\in \\mathbb {R}^{n\\times r}$ , so $\\mathrm {rank}(R_0)=\\mathrm {rank}(V)$ .", "Since $\\dim \\mathrm {col}(R^\\top )=\\mathrm {rank}(R^\\top )=\\mathrm {rank}(R)=\\mathrm {rank}(X)\\le r-\\mathrm {rank}(R_0)=\\dim \\mathrm {ker}(R_0),$ there is an orthogonal matrix $Q\\in O(r)$ satisfying $Q\\mathrm {col}(R^\\top )\\subseteq \\mathrm {ker}(\\dot{R}_0)$ .", "Let $\\dot{R}=\\dot{R}_0Q$ and note that $V=2\\dot{R}\\dot{R}^\\top $ so $\\mathbf {Q}_R(\\dot{R})=V$ , and $\\dot{R}R^\\top = 0$ so $\\dot{R}\\in \\ker \\mathbf {L}_R$ .", "Thus, $A_R = \\mathbf {Q}_R(\\ker \\mathbf {L}_R) + \\operatorname{im}\\mathbf {L}_R \\supseteq X\\mathbb {R}^{n\\times n}_{\\le r}\\cap X\\mathbb {S}_{\\succeq 0}^n.$ On the other hand, by Proposition REF (b), we 
have $A_R\\subseteq X\\mathcal {X}$ .", "Thus, we have the chain of inclusions $X\\mathbb {R}^{n\\times n}_{\\le r}\\cap X\\mathbb {S}_{\\succeq 0}^n\\subseteq \\mathbf {Q}_R(\\ker \\mathbf {L}_R) + \\operatorname{im}\\mathbf {L}_R \\subseteq X\\mathcal {X}\\subseteq X\\mathbb {R}^{n\\times n}_{\\le r}\\cap X\\mathbb {S}_{\\succeq 0}^n,$ so all the above inclusions are equalities.", "In particular, we obtain the claimed expression for $X\\mathcal {X}$ and “2 $\\Rightarrow \\!$ 1” everywhere on $\\mathcal {M}$ by Theorem REF .", "Our claims about “1 $\\Rightarrow \\!$ 1” follow from (REF ) and Theorem REF .", "Remark 3.5 Finding an explicit expression for tangent cones can be difficult in general.", "In Proposition REF , the set $\\mathcal {X}$ was an intersection of two sets whose tangent cones are known, namely $\\mathbb {R}^{n\\times n}_{\\le r}$ and $\\mathbb {S}_{\\succeq 0}^n$ , which gave us an inclusion $X\\mathcal {X}\\subseteq X\\mathbb {R}^{n\\times n}_{\\le r}\\cap X\\mathbb {S}_{\\succeq 0}^n$ .", "However, the tangent cone to an intersection can be strictly contained in the intersection of the tangent cones.For example, consider intersecting the circle in the plane with one of its tangent lines.", "The proof of “2 $\\Rightarrow \\!$ 1” in Proposition REF proceeds by showing $A_R = X\\mathbb {R}^{n\\times n}_{\\le r}\\cap X\\mathbb {S}_{\\succeq 0}^n$ , which gives $X\\mathcal {X}= X\\mathbb {R}^{n\\times n}_{\\le r}\\cap X\\mathbb {S}_{\\succeq 0}^n = A_R$ because $A_R\\subseteq X\\mathcal {X}$ by Proposition REF (b).", "This simultaneously gives us “2 $\\Rightarrow \\!$ 1” and an expression for the tangent cone.", "The proof of Proposition REF illustrates a more general, and as far as we know novel, technique of getting expressions for the tangent cones using lifts.", "Generalizing the above discussion, if we have an inclusion $x\\mathcal {X}\\subseteq S$ for some set $S$ and we are able to prove either $A_y\\supseteq S$ for some $y\\in \\varphi ^{-1}(x)$ , then we must have $x\\mathcal {X}= S$ by Proposition REF (b).", "In this case, we also conclude that “2 $\\Rightarrow \\!$ 1” holds at $y$ by Theorem REF .", "In Section , we shall see another setting in which we naturally have a superset for $x\\mathcal {X}$ (see Lemma REF ), and which allows us to derive expressions for $x\\mathcal {X}$ from lifts satisfying “1 $\\Rightarrow \\!$ 1” and “2 $\\Rightarrow \\!$ 1”.", "If $\\mathcal {X}$ is defined by polynomial equalities and inequalities, a particular superset of the tangent cone, called the algebraic tangent cone, can be computed using Gröbner bases [16].", "Incidentally, a general condition implying that the tangent cone to an intersection is the intersection of the tangent cones is given in [41].", "That condition does not apply to $\\mathcal {X}=\\mathbb {R}^{n\\times n}_{\\le r}\\cap \\mathbb {S}_{\\succeq 0}^n$ because $\\mathbb {R}^{n\\times n}_{\\le r}$ is not Clarke-regular in the sense of [41].", "Our approach circumvents Clarke regularity, exploiting the existence of an appropriate lift instead." 
], [ "General matrices", "Next, we consider the analogous lift for $\\mathbb {R}^{m \\times n}_{\\le r}$ given by $\\varphi (L,R)=LR^\\top $ , defined on the linear space $\\mathcal {M}=\\mathbb {R}^{m\\times r}\\times \\mathbb {R}^{n\\times r}$ .", "Throughout this section, we assume $r<\\min \\lbrace m,n\\rbrace $ .", "We have already encountered this lift, which we call the rank factorization lift, in Example REF .", "The proof of the following proposition is given in Appendix REF .", "Proposition 3.6 The lift $\\varphi (L,R) = LR^\\top $ from $\\mathcal {M}=\\mathbb {R}^{m\\times r}\\times \\mathbb {R}^{n\\times r}$ to $\\mathcal {X}=\\mathbb {R}^{m \\times n}_{\\le r}$ satisfies: “local $\\Rightarrow \\!$ local” at $(L,R)$ if and only if $\\mathrm {rank}(L)=\\mathrm {rank}(R)=\\mathrm {rank}(LR^\\top )$ , “1 $\\Rightarrow \\!$ 1” at $(L,R)$ if and only if $\\mathrm {rank}(L)=\\mathrm {rank}(R)=r$ , and “2 $\\Rightarrow \\!$ 1” everywhere on $\\mathcal {M}$ .", "The manifolds $\\mathcal {M}$ in the preceding two examples were linear spaces.", "For an example of a nonlinear manifold, we consider the desingularization lift of $\\mathcal {X}=\\mathbb {R}^{m \\times n}_{\\le r}$ that we have already encountered in Example REF .", "The proof of the following proposition is given in Appendix REF .", "Proposition 3.7 The lift of $\\mathcal {X}=\\mathbb {R}^{m \\times n}_{\\le r}$ given by (REF ) satisfies: “local $\\Rightarrow \\!$ local” at $(X,\\mathcal {S})$ if and only if $\\mathrm {rank}(X)=r$ ; the same is true for “1 $\\Rightarrow \\!$ 1”.", "“2 $\\Rightarrow \\!$ 1” everywhere on $\\mathcal {M}$ .", "Yet another natural lift of $\\mathbb {R}^{m \\times n}_{\\le r}$ is given by the SVD: $\\mathcal {M}= \\mathrm {St}(m,r)\\times \\mathbb {R}^r\\times \\mathrm {St}(n,r),\\qquad \\varphi (U,\\sigma ,V) = U\\mathrm {diag}(\\sigma ) V^\\top .$ To make $\\mathcal {M}$ a smooth manifold, we do not restrict $\\sigma $ to be nonnegative.", "In [36], the authors observed that Riemannian gradient descent running on the SVD lift (REF ) gets stuck in a suboptimal point for a certain matrix completion problem.", "In contrast, they observed that if they allow the middle factor in the SVD lift to be symmetric but possibly non-diagonal, the same algorithm converges to the global optimum from the same initialization.", "To help elucidate this empirical behavior, we also study the corresponding modified lift $\\varphi \\colon \\mathrm {St}(m,r)\\times \\mathbb {S}^r\\times \\mathrm {St}(n,r)\\rightarrow \\mathbb {R}^{m \\times n}_{\\le r}$ defined by $\\varphi (U,M,V)=UMV^\\top $ .", "The proof of the following proposition is given in Appendix REF .", "Proposition 3.8 The SVD lift and its modification satisfy the following.", "The SVD lift of $\\mathbb {R}^{m \\times n}_{\\le r}$ given in (REF ) satisfies “local $\\Rightarrow \\!$ local” at $(U,\\sigma ,V)$ if and only if all entries of the vector of absolute values $|\\sigma |$ are nonzero and distinct; the same holds for “1 $\\Rightarrow \\!$ 1”.", "The modified SVD lift $\\varphi \\colon \\mathrm {St}(m,r)\\times \\mathbb {S}^r\\times \\mathrm {St}(n,r)\\rightarrow \\mathbb {R}^{m \\times n}_{\\le r}$ defined by $\\varphi (U,M,V)=UMV^\\top $ satisfies “local $\\Rightarrow \\!$ local” at $(U,M,V)$ if and only if the eigenvalues of $M$ satisfy $\\lambda _i(M)+\\lambda _j(M)\\ne 0$ for all $i,j$ (in particular, $\\lambda _i(M)\\ne 0$ for all $i$ so $\\mathrm {rank}(M)=r$ ); the same is true for “1 $\\Rightarrow \\!$ 1”.", "In particular, Proposition REF 
shows that the SVD lift of rank exactly $r$ matrices $\\mathrm {St}(m,r)\\times \\mathbb {R}^r_{>0}\\times \\mathrm {St}(n,r)\\rightarrow \\mathbb {R}^{m\\times n}_{=r}$ does not satisfy “local $\\Rightarrow \\!$ local” or “1 $\\Rightarrow \\!$ 1”, whereas the modified lift $\\mathrm {St}(m,r)\\times \\mathbb {S}_{\\succ 0}^r\\times \\mathrm {St}(n,r)\\rightarrow \\mathbb {R}^{m\\times n}_{=r}$ is a submersion, hence satisfies both.", "This helps explain the numerical results in [36], since restricting the middle factor to be diagonal can introduce spurious local minima for () where gradient descent may get stuck.", "If we further allow the middle factor to be non-symmetric, then “local $\\Rightarrow \\!$ local” and “1 $\\Rightarrow \\!$ 1” hold everywhere on the preimage of $\\mathbb {R}^{m\\times n}_{=r}$ ." ], [ "Tensors", "We are not aware of a lift of $\\mathbb {R}^{m \\times n}_{\\le r}$ induced by matrix factorizations with three or more factors which satisfies “2 $\\Rightarrow \\!$ 1”.", "In fact, we can prove some negative results for lifts that are multilinear in more than two factors.", "Such lifts also arise naturally when parametrizing various sets of tensors and linear neural networks.", "The proof of the following proposition is given in Appendix REF .", "Proposition 3.9 Suppose $\\varphi \\colon \\mathcal {M}\\rightarrow \\mathcal {X}\\subseteq \\mathcal {E}$ is a smooth lift where $\\mathcal {M}\\subseteq \\mathcal {E}^{\\prime } = \\mathcal {E}_1\\times \\cdots \\times \\mathcal {E}_d$ is a smooth embedded submanifold of a product of Euclidean spaces $\\mathcal {E}_i$ , and $\\varphi $ is defined on all of $\\mathcal {E}^{\\prime }$ and is multilinear in its $d$ arguments.", "If $\\mathcal {M}$ contains a point $(y_1,\\ldots ,y_d)$ such that $y_i=0$ for three indices $i$ , and $0=\\varphi (y_1,\\ldots ,y_d)$ is not an isolated point of $\\mathcal {X}$ , then $\\varphi $ does not satisfy “2 $\\Rightarrow \\!$ 1” at $(y_1,\\ldots ,y_d)$ .", "Corollary 3.10 Consider the following lifts: Linear neural networks: $&\\mathcal {X}= \\mathbb {R}^{n_1\\times n_{d+1}}_{\\le \\min (n_1,\\ldots ,n_{d+1})},\\\\&\\mathcal {M}=\\mathbb {R}^{n_1\\times n_2}\\times \\mathbb {R}^{n_2\\times n_3}\\times \\cdots \\times \\mathbb {R}^{n_d\\times n_{d+1}},\\qquad \\varphi (W_1,\\ldots ,W_d)=W_1\\cdots W_d.$ CP-tensors: $&\\mathcal {X}= \\lbrace X\\in \\mathbb {R}^{n_1\\times \\cdots \\times n_d}: \\mathrm {CP\\text{-}rank}(X)\\le r\\rbrace ,\\\\&\\mathcal {M}=\\mathbb {R}^{n_1\\times r}\\times \\cdots \\times \\mathbb {R}^{n_d\\times r},\\qquad \\varphi (X_1,\\ldots ,X_d)=\\sum _{i=1}^r(X_1)_{:,i}\\otimes \\cdots \\otimes (X_d)_{:,i},$ where $(X_i)_{:,j}$ denotes the $j$ th column of $X_i$ .", "Tensor train: $&\\mathcal {X}= \\lbrace X\\in \\mathbb {R}^{n_1\\times \\cdots \\times n_d}: \\mathrm {TT\\text{-}rank}(X)\\le (r_2,\\ldots ,r_d)\\rbrace ,\\\\&\\mathcal {M}=\\mathbb {R}^{r_1\\times n_1\\times r_2}\\times \\cdots \\times \\mathbb {R}^{r_d\\times n_d\\times r_{d+1}},\\qquad \\varphi (G_1,\\ldots ,G_d)_{i_1,\\ldots ,i_d}=G_1^{(i_1)}\\cdots G_d^{(i_d)},$ where $r_1=r_{d+1}=1$ , and $G_i^{(j)}$ is the $r_i\\times r_{i+1}$ matrix defined by $(G_i^{(j)})_{k,\\ell }=(G_i)_{k,j,\\ell }$ .", "Tucker tensors: $&\\mathcal {X}= \\lbrace X\\in \\mathbb {R}^{n_1\\times \\cdots \\times n_d}: \\mathrm {Tucker\\text{-}rank}(X)\\le (r_1,\\ldots ,r_d)\\rbrace ,\\\\&\\mathcal {M}=\\mathbb {R}^{r_1\\times \\cdots \\times r_d}\\times \\mathbb {R}^{n_1\\times r_1}\\times \\cdots \\times \\mathbb 
{R}^{n_d\\times r_d},\\qquad \\varphi (C,A_1,\\ldots ,A_d) = C\\times _1 A_1\\times _2\\cdots \\times _d A_d,$ where $(C\\times _jA_j)_{i_1,\\ldots ,i_d}=\\sum _{k=1}^{r_j}C_{i_1,\\ldots ,i_{j-1},k,i_{j+1},\\ldots ,i_d}A_{i_j,k}$ .", "Generalizing the above examples, tensor networks are multilinear maps $\\varphi \\colon \\mathcal {M}=\\mathcal {E}_1\\times \\cdots \\times \\mathcal {E}_d\\rightarrow \\mathcal {E}$ specified by a graph with $d$ vertices [13].", "If $d\\ge 3$ then none of the above lifts satisfy “2 $\\Rightarrow \\!$ 1” at points in $\\mathcal {M}$ with at least three zero factors.", "Proposition REF might suggest that failure of “2 $\\Rightarrow \\!$ 1” can be avoided by normalizing the arguments of the lift to have unit norm.", "Specifically, by multilinearity of $\\varphi $ we have $\\varphi (y_1,\\ldots ,y_d) = \\left(\\prod _{i=1}^d\\Vert y_i\\Vert \\right)\\varphi \\!\\left(\\frac{y_1}{\\Vert y_1\\Vert },\\ldots ,\\frac{y_d}{\\Vert y_d\\Vert }\\right),\\quad \\textrm {whenever } y_i\\ne 0 \\textrm { for all } i.$ Using this observation, one could replace a lift $\\varphi \\colon \\mathbb {R}^{n_1}\\times \\cdots \\times \\mathbb {R}^{n_d}\\rightarrow \\mathcal {X}$ to a product of Euclidean spaces by a lift $\\psi \\colon \\mathbb {R}\\times \\mathrm {S}^{n_1-1}\\times \\cdots \\times \\mathrm {S}^{n_d-1}$ to a product of $\\mathbb {R}$ and several spheres, satisfying $\\psi (\\lambda ,x_1,\\ldots ,x_d)=\\lambda \\varphi (x_1,\\ldots ,x_d)$ .", "Only one factor can be zero in this new lift, so Proposition REF does not apply and we might hope that “2 $\\Rightarrow \\!$ 1” holds.", "Unfortunately, this may not resolve the problem as there is another obstruction to “2 $\\Rightarrow \\!$ 1” that does not rely on several zero factors, at least for the following specific form of a lift.", "The proofs of the next proposition and its corollary are also given in Appendix REF .", "Proposition 3.11 Suppose $\\varphi \\colon \\mathcal {M}\\rightarrow \\mathcal {X}$ is a smooth lift of the form $\\varphi (\\lambda ,Y_1,\\ldots ,Y_d) = \\sum _{i=1}^r\\lambda _i\\cdot (Y_1)_{:,i}\\otimes \\cdots \\otimes (Y_d)_{:,i},$ where $\\mathcal {M}\\subseteq \\mathbb {R}^r\\times \\mathbb {R}^{n_1\\times r}\\times \\cdots \\times \\mathbb {R}^{n_d\\times r}$ .", "Denote $X=\\varphi (\\lambda ,Y_1,\\ldots ,Y_d)$ .", "If $d\\ge 3$ and $\\mathrm {col}(Y_1)^{\\perp }\\otimes \\cdots \\otimes \\mathrm {col}(Y_d)^{\\perp }\\lnot \\subseteq (X\\mathcal {X})^*,$ then $\\varphi $ does not satisfy “2 $\\Rightarrow \\!$ 1” at $(\\lambda ,Y_1,\\ldots ,Y_d)$ for any $\\lambda \\in \\mathbb {R}^r$ .", "If $d=2$ and (REF ) holds, then $\\varphi $ does not satisfy “2 $\\Rightarrow \\!$ 1” at $(0,Y_1,Y_2)$ .", "Corollary 3.12 Proposition REF implies the following.", "Consider the symmetric tensor lift: $&\\mathcal {X}= \\left\\lbrace \\textrm {symmetric tensors of symmetric rank } \\le r \\textrm { in } \\mathbb {R}^{n^d}\\right\\rbrace \\\\&\\mathcal {M}= \\mathbb {R}^r\\times (\\mathbb {R}^n)^r,\\qquad \\varphi (\\lambda ,y_1,\\ldots ,y_r) = \\sum _{i=1}^r\\lambda _iy_i^{\\otimes d}.$ If $d\\ge 3$ , then “2 $\\Rightarrow \\!$ 1” is not satisfied whenever $\\mathrm {span}\\lbrace y_1,\\ldots ,y_r\\rbrace \\ne \\mathbb {R}^n$ , and $\\mathrm {sym{\\text{-}}rk}(\\varphi (\\lambda ,y_1,\\ldots ,y_r))<r$ .The symmetric rank of a symmetric tensor $X$ , denoted $\\mathrm {sym{\\text{-}}rk}(X)$ , is the smallest $r\\in \\mathbb {N}$ such that there exists a decomposition of the form $X=\\sum _{i=1}^r\\lambda _i 
v_i^{\\otimes r}$ for some $\\lambda _i\\in \\mathbb {R}$ and $v_i\\in \\mathbb {R}^n$ .", "If $d=2$ , then “2 $\\Rightarrow \\!$ 1” is not satisfied if, in addition, $\\lambda =0$ .", "Consider the symmetric orthogonally decomposable (ODECO) tensor lift [40] : $&\\mathcal {X}= \\left\\lbrace \\textrm {symmetric ODECO tensors in } \\mathbb {R}^{n^d} \\textrm { that have } \\le r \\textrm { orthogonal components}\\right\\rbrace ,\\\\&\\mathcal {M}= \\left\\lbrace (\\lambda ,y_1,\\ldots ,y_r)\\in \\mathbb {R}^r\\times (\\mathbb {R}^n)^r:\\langle y_i,y_j\\rangle =\\delta _{i,j}\\right\\rbrace ,\\qquad \\varphi (\\lambda ,y_1,\\ldots ,y_r) = \\sum _{i=1}^r\\lambda _iy_i^{\\otimes d}.$ If $d\\ge 3$ , then “2 $\\Rightarrow \\!$ 1” is not satisfied whenever $\\lambda _i=0$ for some $i$ and $r<n$ .", "If $d=2$ , then “2 $\\Rightarrow \\!$ 1” is not satisfied whenever $\\lambda =0$ .", "Consider the normalized CP decomposition lift: $&\\mathcal {X}=\\lbrace X\\in \\mathbb {R}^{n_1\\times \\cdots \\times n_d}:\\mathrm {CP\\text{-}rank}\\le r \\textrm { tensors}\\rbrace \\\\&\\mathcal {M}= \\lbrace (\\lambda ,Y_1,\\ldots ,Y_r)\\in \\mathbb {R}^d\\times \\mathbb {R}^{n_1\\times r}\\times \\cdots \\times \\mathbb {R}^{n_d\\times r}:\\Vert (Y_j)_{:,i}\\Vert _2=1 \\textrm { for all } i,j\\rbrace ,\\\\ &\\varphi (\\lambda ,Y_1,\\ldots ,Y_d) = \\sum _{i=1}^r\\lambda _i\\cdot (Y_1)_{:,i}\\otimes \\cdots \\otimes (Y_d)_{:,i}.$ The case $r=1$ with $\\lambda >0$ has been studied in [46].", "In the general case, this lift does not satisfy “2 $\\Rightarrow \\!$ 1” whenever $\\mathrm {CP\\text{-}rank}(\\varphi (\\lambda _1,Y_1,\\ldots ,Y_d))<r$ , $\\mathrm {col}(Y_j)^{\\perp }\\ne \\lbrace 0\\rbrace $ for all $j$ , and either $d\\ge 3$ or $d=2$ and $\\lambda =0$ ." ], [ "Lift construction via fiber products, more examples", "In this section, we give a systematic construction of lifts for a large class of sets $\\mathcal {X}$ .", "If the resulting lifted space is a smooth manifold, we also give conditions under which the lift satisfies our desirable properties.", "Moreover, under these conditions we can obtain expressions for the tangent cones to $\\mathcal {X}$ .", "We shall see that several natural lifts are special cases of this construction.", "Suppose the set $\\mathcal {X}$ is presented in the form $\\mathcal {X}= \\lbrace x\\in \\mathcal {E}: F(x)\\in \\mathcal {Z}\\rbrace = F^{-1}(\\mathcal {Z}),$ where $\\mathcal {Z}\\subseteq \\mathcal {E}^{\\prime }$ is some subset of a linear space and $F\\colon \\mathcal {E}\\rightarrow \\mathcal {E}^{\\prime }$ is smooth.", "This form is general—any set $\\mathcal {X}$ can be written in this form by letting $F$ be the identity and $\\mathcal {Z}=\\mathcal {X}$ .", "However, we shall see that our framework is most useful when $\\mathcal {Z}$ is a product of simple sets for which we have smooth lifts satisfying our desirable properties.", "For example, any set defined by $k$ smooth equalities $g_i(x)=0$ and $\\ell $ smooth inequalities $h_j(x)\\ge 0$ can be written in this form by letting $F(x)=(g_1(x),\\ldots ,g_k(x),h_1(x),\\ldots ,h_{\\ell }(x))$ and $\\mathcal {Z}=\\lbrace 0\\rbrace ^k\\times \\mathbb {R}_{\\ge 0}^{\\ell }$ .", "We can also incorporate semidefiniteness and rank constraints of smooth functions of $x$ by taking Cartesian products of $\\mathcal {Z}$ with $\\mathbb {R}^{m \\times n}_{\\le r}$ or $\\mathbb {S}^n_{\\succeq 0}$ .", "Suppose now that we have a smooth lift $\\psi \\colon \\mathcal {N}\\rightarrow \\mathcal {Z}$ .", "We can use this lift of 
$\mathcal {Z}$ to construct a lift of $\mathcal {X}$ by taking the fiber product of $F$ and $\psi $ .", "Definition 4.1 Let $\mathcal {X}$ be a subset of $\mathcal {E}$ defined by a smooth map $F \colon \mathcal {E}\rightarrow \mathcal {E}^{\prime }$ and a set $\mathcal {Z} \subseteq \mathcal {E}^{\prime }$ as $\mathcal {X}= F^{-1}(\mathcal {Z})$ .", "Suppose $\psi \colon \mathcal {N}\rightarrow \mathcal {Z}$ is a smooth lift of $\mathcal {Z}$ to the smooth manifold $\mathcal {N}$ .", "Then the fiber product lift of $\mathcal {X}$ with respect to $F$ and $\psi $ is $\varphi \colon \mathcal {M}_{F,\psi }\rightarrow \mathcal {X}$ where $\mathcal {M}_{F,\psi } = \lbrace (x,y)\in \mathcal {E}\times \mathcal {N}:F(x) = \psi (y)\rbrace ,\qquad \varphi (x,y)=x.$ Here $\mathcal {M}_{F,\psi }$ is the (set-theoretic) fiber product of the maps $F\colon \mathcal {E}\rightarrow \mathcal {E}^{\prime }$ and $\psi \colon \mathcal {N}\rightarrow \mathcal {E}^{\prime }$ .", "The following commutative diagram illustrates Definition REF .", "Its top horizontal arrow is the coordinate projection $(x,y)\mapsto y$ .", "[Commutative square: the projection $\mathcal {M}_{F,\psi }\rightarrow \mathcal {N}$ on top, $F\colon \mathcal {X}\rightarrow \mathcal {Z}$ on the bottom, and the surjective lifts $\varphi $ and $\psi $ as the vertical arrows, with $\mathcal {X}\subseteq \mathcal {E}$ and $\mathcal {Z}\subseteq \mathcal {E}^{\prime }$ .]", "The fiber product $\mathcal {M}_{F,\psi }$ need not be a smooth manifold even when both $F$ and $\psi $ are smooth maps between smooth manifolds.", "For our purposes, we shall make a more restrictive assumption: Assumption 4.2 The map $(x,y)\mapsto \mathrm {rank}\!\begin{bmatrix}\mathrm {D}F(x) & \mathrm {D}\psi (y)\end{bmatrix}$ is constant in a neighborhood of $\mathcal {M}_{F,\psi }$ in $\mathcal {E}\times \mathcal {N}$ .", "Assumption REF not only implies $\mathcal {M}_{F,\psi }$ is a smooth embedded submanifold of $\mathcal {E}\times \mathcal {N}$ , but also that $F(x) - \psi (y) = 0$ is a defining equation for it (in the sense of [8]).", "Under this assumption, the tangent space to $\mathcal {M}_{F,\psi }$ is given by $\mathrm {T}_{(x,y)}\mathcal {M}_{F,\psi } = \lbrace (\dot{x},\dot{y})\in \mathcal {E}\times \mathrm {T}_y\mathcal {N}:\mathrm {D}F(x)[\dot{x}] = \mathrm {D}\psi (y)[\dot{y}]\rbrace .$ We proceed to give some examples of the above construction.", "We then study fiber product lifts in general and instantiate our results on these examples.", "Example 4.3 (Sphere to ball) Let $\mathcal {X}= \lbrace x\in \mathbb {R}^n:\Vert x\Vert _2\le 1\rbrace $ be the unit Euclidean ball.", "Let $\mathcal {Z} = \mathbb {R}_{\ge 0}$ and $F(x)=1-x^\top x$ so $\mathcal {X}=F^{-1}(\mathcal {Z})$ .", "Let $\psi \colon \mathbb {R}\rightarrow \mathbb {R}_{\ge 0}$ be the smooth lift $\psi (y)=y^2$ .", "Then $\mathcal {M}_{F,\psi }=\lbrace (x,y)\in \mathbb {R}^n\times \mathbb {R}:1-x^\top x = y^2\rbrace $ , which is just the unit sphere in $\mathbb {R}^{n+1}$ , and $\varphi (x,y)=x$ is projection onto the first $n$ coordinates.", "This lift is used in [39] to apply a solver for quadratic programming over the sphere () to quadratic programs over the ball (REF ).", "Example 4.4 (Sphere to simplex) Let $\mathcal {X}= \Delta ^{n-1}=\lbrace x\in \mathbb {R}^n:x\ge 0,\ \sum _{i=1}^nx_i=1\rbrace $ be the standard simplex.", "Let $\mathcal {Z}=\mathbb {R}_{\ge 0}^n\times \lbrace 0\rbrace $ and $F(x)=(x,\sum _{i=1}^nx_i-1)$ so $\mathcal {X}=F^{-1}(\mathcal {Z})$ .", "Let $\psi \colon \mathbb {R}^n\rightarrow \mathcal {Z}$ be $\psi (y)=(y^{\odot 2},0)$ where superscript $\odot 2$ denotes entrywise squaring.", "Then
$\\mathcal {M}_{F,\\psi } = \\left\\lbrace (x,y)\\in \\mathbb {R}^n\\times \\mathbb {R}^n:x=y^{\\odot 2},\\ \\sum _{i=1}^nx_i=1\\right\\rbrace = \\left\\lbrace (x,y)\\in \\mathbb {R}^n\\times \\mathbb {R}^n:x=y^{\\odot 2},\\ \\Vert y\\Vert _2=1\\right\\rbrace ,$ and $\\varphi (x,y)=x$ .", "It is easy to check that $\\mathcal {M}_{F,\\psi }$ is a smooth manifold, and that $\\overline{\\psi }\\colon \\mathrm {S}^{n-1}\\rightarrow \\mathcal {M}_{F,\\psi }$ given by $\\overline{\\psi }(y)=(y^{\\odot 2},y)$ is a diffeomorphism from the unit sphere $\\mathrm {S}^{n-1}$ to $\\mathcal {M}_{F,\\psi }$ .", "By Proposition REF , the fiber product lift of the simplex is equivalent (for the purposes of checking our desirable properties) to the lift $\\overline{\\varphi }=\\varphi \\circ \\overline{\\psi }$ sending a unit vector $y\\in \\mathrm {S}^{n-1}$ to $y^{\\odot 2}$ .", "Li et al.", "[35] study this particular lift for optimization of (REF ) through () for convex $f$ .", "Example 4.5 (Torus to annulus) Let $\\mathcal {X}= \\lbrace x\\in \\mathbb {R}^n:r_1 \\le \\Vert x\\Vert _2\\le r_2\\rbrace $ , where we assume $0<r_1<r_2$ .", "Let $\\mathcal {Z}=\\mathbb {R}_{\\ge 0}^2$ and $F(x)=(x^\\top x-r_1^2,r_2^2-x^\\top x)$ .", "Let $\\psi \\colon \\mathbb {R}^2\\rightarrow \\mathcal {Z}$ be $\\psi (y)=y^{\\odot 2}$ so $\\mathcal {M}_{F,\\psi } &= \\lbrace (x,y)\\in \\mathbb {R}^n\\times \\mathbb {R}^2:x^\\top x-r_1^2=y_1^2,\\ r_2^2-x^\\top x=y_2^2\\rbrace ,\\\\&= \\left\\lbrace (x,y)\\in \\mathbb {R}^n\\times \\mathbb {R}^2: \\Vert x\\Vert _2 = \\sqrt{r_1^2+y_1^2},\\ \\Vert y\\Vert _2 = \\sqrt{r_2^2-r_1^2}\\right\\rbrace .$ This is an $n$ -dimensional manifold diffeomorphic to $\\mathrm {S}^{n-1}\\times \\mathrm {S}^1$ , with diffeomorphism $\\Phi (x,y) = \\begin{bmatrix} (r_1^2+y_1^2)^{-1/2}x, & (r_2^2-r_1^2)^{-1/2}y\\end{bmatrix}.$ Viewed differently, the equivalent (by Proposition REF ) lift $\\varphi \\circ \\Phi ^{-1}$ is the composition $\\mathrm {S}^{n-1}\\times \\mathrm {S}^1\\rightarrow \\mathrm {S}^{n-1}\\times \\Delta ^1\\rightarrow \\mathcal {X},$ where the first map is the entrywise squaring lift from the sphere to the simplex from the preceding example, and the second map sends $(y,\\theta )\\mapsto \\sqrt{\\theta _1 r_1^2 + \\theta _2r_2^2}y$ .", "If $n=2$ , then $\\mathcal {X}$ is an annulus and $\\mathcal {M}$ is a torus.", "Example 4.6 (Smooth SDPs) Consider the domain of a rank-constrained SDP $\\mathcal {X}= \\left\\lbrace X\\in \\mathbb {S}_{\\succeq 0}^n: \\mathrm {rank}(X)\\le r,\\ \\langle A_i,X\\rangle =b_i \\textrm { for } i=1,\\ldots ,m\\right\\rbrace .$ Let $\\mathcal {Z} = (\\mathbb {S}_{\\succeq 0}^n\\cap \\mathbb {R}^{n\\times n}_{\\le r})\\times \\lbrace 0\\rbrace ^m,\\quad F(X)=(X,\\langle A_1,X\\rangle -b_1,\\ldots ,\\langle A_m,X\\rangle -b_m),$ and $\\psi (R) = (RR^\\top ,0,\\ldots ,0)$ defined on $\\mathcal {N} = \\mathbb {R}^{n\\times r}$ .", "Then $\\mathcal {M}_{F,\\psi } = \\lbrace (X,R)\\in \\mathbb {S}^n\\times \\mathbb {R}^{n\\times r}:X=RR^\\top ,\\ \\langle A_iR,R\\rangle = b_i \\textrm { for } i=1,\\ldots ,m\\rbrace .$ Assumption REF guarantees that this is a smooth manifold.", "This assumption holds for generic $A_i,b_i$  [14] and also for a number of applications of interest [9].", "It is used in [9] to prove that the nonconvexity in the Burer–Monteiro method is benign by (in our terminology) proving “2 $\\Rightarrow \\!$ 1” for this lift and linear costs $f$ .", "Now that we have seen several examples of fiber product lifts, we ask when do desirable 
properties of the lift $\\psi \\colon \\mathcal {N}\\rightarrow \\mathcal {Z}$ imply the corresponding properties for the fiber product lift $\\varphi \\colon \\mathcal {M}_{F,\\psi }\\rightarrow \\mathcal {X}$ .", "This is answered by the next few propositions.", "Proposition 4.7 Under Assumption REF , if $\\psi \\colon \\mathcal {N}\\rightarrow \\mathcal {Z}$ satisfies “local $\\Rightarrow \\!$ local”, then so does $\\varphi \\colon \\mathcal {M}_{F,\\psi }\\rightarrow \\mathcal {X}$ .", "By Theorem REF , it is equivalent to show that openness of $\\psi $ implies openness of $\\varphi $ .", "Assumption REF implies that $\\mathcal {M}_{F,\\psi }$ is an embedded submanifold of $\\mathcal {E}\\times \\mathcal {N}$ , hence its manifold topology coincides with the subspace topology induced from $\\mathcal {E}\\times \\mathcal {N}$ .", "Thus, to show $\\varphi $ is open, it suffices to show that $\\varphi ((U\\times V)\\cap \\mathcal {M}_{F,\\psi })$ is open for any open $U\\subseteq \\mathcal {E}$ and $V\\subseteq \\mathcal {N}$ , since such sets form a basis for the subspace topology on $\\mathcal {M}_{F,\\psi }$ .", "Since $\\psi $ is open, $\\psi (V)\\subseteq \\mathcal {Z}$ is open.", "Since $F$ is continuous, $F^{-1}(\\psi (V))\\subseteq \\mathcal {X}$ is open.", "Since $\\varphi (x,y)=x$ , we have $\\varphi ((U\\times V)\\cap \\mathcal {M}_{F,\\psi }) &= \\lbrace x\\in U\\cap \\mathcal {X}: \\exists \\ y\\in V \\textrm { s.t. }", "\\psi (y)=F(x)\\rbrace \\\\ &= \\lbrace x\\in U\\cap \\mathcal {X}: F(x)\\in \\psi (V)\\rbrace = (U\\cap \\mathcal {X})\\cap F^{-1}(\\psi (V)),$ which is open in $\\mathcal {X}$ as the intersection of two open sets.", "Thus, $\\varphi $ is open.", "Note that the above proof, and hence the conclusion of Proposition REF , apply more generally whenever $\\mathcal {M}_{F,\\psi }$ is endowed with the subspace topology induced from $\\mathcal {E}\\times \\mathcal {N}$ (but is not necessarily a smooth manifold) and when all maps involved are continuous (but not necessarily smooth).", "We now turn to studying “1 $\\Rightarrow \\!$ 1” and “2 $\\Rightarrow \\!$ 1”.", "Along the way, we give another instance of the technique for finding tangent cones via lifts outlined in Remark REF .", "To do so, we begin by giving a superset of the tangent cone that is obtained from the fact that $\\mathcal {X}$ is given by an inverse image [41].", "Lemma 4.8 The following inclusion always holds: $x\\mathcal {X}\\subseteq \\lbrace \\dot{x}\\in \\mathcal {E}:F̥(x)[\\dot{x}]\\in {F(x)}\\mathcal {Z}\\rbrace = F̥(x)^{-1}({F(x)}\\mathcal {Z}).$ If $v\\in x\\mathcal {X}$ then by Definition REF there exist a sequence $(x_i)_{i\\ge 1}\\subseteq \\mathcal {X}$ converging to $x$ and $(\\tau _i)_{i\\ge 1}\\subseteq \\mathbb {R}_{>0}$ converging to zero satisfying $v = \\lim _{i\\rightarrow \\infty }\\frac{x_i-x}{\\tau _i}$ .", "Because $F$ is differentiable at $x$ , we have $F(x_i) = F(x) + F̥(x)[x_i-x] + o(\\Vert x_i-x\\Vert ),$ so $F̥(x)[v] = \\lim _{i\\rightarrow \\infty }\\frac{F(x_i)-F(x)}{\\tau _i}.$ Since $F(x_i)\\in \\mathcal {Z}$ for all $i$ , we conclude that $F̥(x)[v]\\in {F(x)}\\mathcal {Z}$ by Definition REF .", "Equality in Lemma REF does not always hold.", "For example, let $\\mathcal {X}$ be the cuspidal cubic (REF ) defined by $F(x)\\in \\mathcal {Z}$ where $F(x)=x_2^2-x_1^3$ and $\\mathcal {Z}=\\lbrace 0\\rbrace $ .", "Then $\\lbrace \\dot{x}\\in \\mathbb {R}^2:F̥(x)[\\dot{x}]\\in {F(x)}\\mathcal {Z}\\rbrace = \\lbrace \\dot{x}\\in \\mathbb 
{R}^2:2x_2\\dot{x}_2=3x_1^2\\dot{x}_1\\rbrace ,\\\\$ which is equal to ${x}\\mathcal {X}$ iff $x\\ne (0,0)$ .", "If $x=(0,0)$ then the above set is all of $\\mathbb {R}^2$ while ${x}\\mathcal {X}$ is the nonnegative half of the $x_1$ -axis (see Figure REF ).", "Proposition 4.9 Under Assumption REF , if $\\psi $ satisfies “1 $\\Rightarrow \\!$ 1” at $y\\in \\mathcal {N}$ , then $\\varphi $ satisfies “1 $\\Rightarrow \\!$ 1” at $(x,y)\\in \\mathcal {M}_{F,\\psi }$ , and equality holds in Lemma REF .", "If $\\psi $ satisfies “1 $\\Rightarrow \\!$ 1” at $y$ , then $\\operatorname{im}\\mathbf {L}_y^{\\psi } = {\\psi (y)}\\mathcal {Z} = {F(x)}\\mathcal {Z}$ .", "Assumption REF implies that $\\mathcal {M}_{F,\\psi }$ is an embedded submanifold of $\\mathcal {E}\\times \\mathcal {N}$ .", "Since $\\varphi $ extends to $\\overline{\\varphi }(x,y)=x$ defined on all of $\\mathcal {E}\\times \\mathcal {N}$ , we get from (REF ) that $\\mathbf {L}_{(x,y)}^{\\varphi }(\\dot{x},\\dot{y})=\\dot{x}$ for all $(\\dot{x},\\dot{y})\\in {(x,y)}\\mathcal {M}_{F,\\psi }$ .", "From (REF ), we obtain $\\operatorname{im}\\mathbf {L}_{(x,y)}^{\\varphi } = F̥(x)^{-1}(\\operatorname{im}\\mathbf {L}_y^{\\psi }) = F̥(x)^{-1}({F(x)}\\mathcal {Z}).$ Using Lemma REF and Proposition REF (b), we get the chain of inclusions $x\\mathcal {X}\\subseteq F̥(x)^{-1}({F(x)}\\mathcal {Z}) = \\operatorname{im}\\mathbf {L}_{(x,y)}^{\\varphi } \\subseteq x\\mathcal {X}.$ We conclude that all these sets are equal and that “1 $\\Rightarrow \\!$ 1” holds for $\\varphi $ at $(x,y)$ .", "Proposition 4.10 Under Assumption REF , if $\\psi $ satisfies the sufficient condition $A_y^{\\psi } = {\\psi (y)}\\mathcal {Z}$ for “2 $\\Rightarrow \\!$ 1” at $y\\in \\mathcal {N}$ , then $\\varphi $ satisfies the sufficient condition $A_{(x,y)}^{\\varphi }=x\\mathcal {X}$ for “2 $\\Rightarrow \\!$ 1” at $(x,y)\\in \\mathcal {M}_{F,\\psi }$ , and equality holds in Lemma REF .", "By Lemma REF , we always have $x\\mathcal {X}\\subseteq F̥(x)^{-1}({F(x)}\\mathcal {Z})$ .", "For the reverse inclusion and the desired sufficient condition for “2 $\\Rightarrow \\!$ 1”, it suffices to prove that $F̥(x)^{-1}({F(x)}\\mathcal {Z})\\subseteq A_{(x,y)}^{\\varphi }$ since $A_{(x,y)}^{\\varphi }\\subseteq x\\mathcal {X}$ by Proposition REF (b).", "Suppose $F̥(x)[\\dot{x}]\\in {F(x)}\\mathcal {Z}$ .", "By hypothesis, ${F(x)}\\mathcal {Z} = {\\psi (y)}\\mathcal {Z} = A_y^{\\psi }$ .", "Therefore, $F̥(x)[\\dot{x}] = \\mathbf {Q}_y^{\\psi }(v) + \\mathbf {L}_y^{\\psi }(u),\\quad \\textrm {for some } v\\in \\ker \\mathbf {L}_y^{\\psi } \\textrm { and } u\\in y\\mathcal {N}.$ Because $v\\in \\ker \\mathbf {L}_y^{\\psi }$ , we have $(0,v)\\in {(x,y)}\\mathcal {M}_{F,\\psi }$ by (REF ).", "Let $c(t)=(c_x(t),c_y(t))\\colon I\\rightarrow \\mathcal {M}_{F,\\psi }$ be a curve passing through $(x,y)$ with velocity $(0,v)$ .", "Because $F(c_x(t))=\\psi (c_y(t))$ for all $t$ near 0, differentiating this expression twice we get $2 F(x)[\\underbrace{c_x^{\\prime }(0)}_{=0}] + F̥(x)[c_x^{\\prime \\prime }(0)] = (\\psi \\circ c_y)^{\\prime \\prime }(0).$ Using Definition REF together with Lemma REF , we further obtain $F̥(x)[c_x^{\\prime \\prime }(0)]=\\mathbf {Q}_y^{\\psi }(v) + \\mathbf {L}_y^{\\psi }(u^{\\prime }),\\quad \\textrm {for some } u^{\\prime }\\in y\\mathcal {N}.$ Subtracting (REF ) from (REF ) yields $F̥(x)[\\dot{x}-c_x^{\\prime \\prime }(0)] = \\mathbf {L}_y^{\\psi }(u-u^{\\prime }) = (y)[u-u^{\\prime }],\\quad \\textrm { hence } (\\dot{x}-c_x^{\\prime \\prime 
}(0),u-u^{\\prime })\\in {(x,y)}\\mathcal {M}_{F,\\psi },$ where the second equality is Definition REF .", "Therefore, $\\dot{x}-c_x^{\\prime \\prime }(0)\\in \\operatorname{im}\\mathbf {L}_{(x,y)}^{\\varphi }$ .", "Finally, by Definition REF and Lemma REF , there exists $w\\in {(x,y)}\\mathcal {M}_{F,\\psi }$ satisfying $c_x^{\\prime \\prime }(0) + \\mathbf {L}_{(x,y)}^{\\varphi }(w) = \\mathbf {Q}_{(x,y)}^{\\varphi }(0,v) \\in \\mathbf {Q}_{(x,y)}^{\\varphi }(\\ker \\mathbf {L}_{(x,y)}^{\\varphi }),$ from which we conclude that $\\dot{x}\\in \\mathbf {Q}_{(x,y)}^{\\varphi }(\\ker \\mathbf {L}_{(x,y)}^{\\varphi })+\\operatorname{im}\\mathbf {L}_{(x,y)}^{\\varphi } = A_{(x,y)}^{\\varphi }$ .", "Remark 4.11 We do not know whether $\\varphi $ satisfies “2 $\\Rightarrow \\!$ 1” under the assumption that $\\psi $ satisfies one of the weaker sufficient conditions for “2 $\\Rightarrow \\!$ 1” in Theorem REF .", "We remark that other sufficient conditions for equality in Lemma REF to be achieved are given in [41].", "However, they do not apply to Example REF ($\\mathcal {Z}$ is not Clarke-regular and $F̥(X)$ may not be surjective).", "In contrast, our approach via lifts does apply to this example, and gives “2 $\\Rightarrow \\!$ 1” and an expression for the tangent cones simultaneously, see Corollary REF below.", "As the examples in the beginning of this section illustrate, $\\mathcal {Z}$ is often given by a product.", "It is therefore useful to note that a product of lifts satisfying our desirable properties also satisfies the same properties: Proposition 4.12 Suppose $\\mathcal {Z}_i\\subseteq \\mathcal {E}_i$ for $i=1,\\ldots ,k$ are subsets admitting smooth lifts $\\psi _i\\colon \\mathcal {N}_i\\rightarrow \\mathcal {Z}_i$ .", "Let $\\mathcal {Z}=\\mathcal {Z}_1\\times \\cdots \\times \\mathcal {Z}_k$ and $\\psi =\\psi _1\\times \\cdots \\times \\psi _k\\colon \\mathcal {N}_1\\times \\cdots \\times \\mathcal {N}_k\\rightarrow \\mathcal {Z}$ , which is a smooth lift of $\\mathcal {Z}$ .", "Then the following hold.", "${(z_1,\\ldots ,z_k)}\\mathcal {Z} \\subseteq {z_1}\\mathcal {Z}_1\\times \\cdots \\times {z_k}\\mathcal {Z}_k$ but the inclusion may be strict.", "$\\psi $ satisfies “local $\\Rightarrow \\!$ local” at $(y_1,\\ldots ,y_k)$ if and only if $\\psi _i$ satisfies “local $\\Rightarrow \\!$ local” at $y_i$ for all $i$ .", "We have $\\operatorname{im}\\mathbf {L}_{(y_1,\\ldots ,y_k)}^{\\psi }=\\operatorname{im}\\mathbf {L}_{y_1}^{\\psi _1}\\times \\cdots \\times \\operatorname{im}\\mathbf {L}_{y_k}^{\\psi _k}$ .", "In particular, $\\psi $ satisfies “1 $\\Rightarrow \\!$ 1” at $(y_1,\\ldots ,y_k)$ if $\\psi _i$ satisfies “1 $\\Rightarrow \\!$ 1” at $y_i$ for all $i$ , in which case equality in (a) holds.", "We have $\\mathbf {Q}_{(y_1,\\ldots ,y_k)}^{\\psi }\\equiv \\mathbf {Q}_{y_1}^{\\psi _1}\\times \\cdots \\times \\mathbf {Q}_{y_k}^{\\psi _k}\\mod {\\operatorname{im}}\\unknown.", "\\mathbf {L}_{(y_1,\\ldots ,y_k)}^{\\psi }$ .", "Moreover, $A_{(y_1,\\ldots ,y_k)}^{\\psi }=A_{y_1}^{\\psi _1}\\times \\cdots \\times A_{y_k}^{\\psi _k}$ and similarly for $B_{(y_1,\\ldots ,y_k)}^{\\psi }$ and $W_{(y_1,\\ldots ,y_k)}^{\\psi }$ .", "In particular, $\\psi $ satisfies “2 $\\Rightarrow \\!$ 1” at $(y_1,\\ldots ,y_k)$ if $\\psi _i$ satisfies “2 $\\Rightarrow \\!$ 1” at $y_i$ for all $i$ .", "$\\psi $ satisfies the sufficient condition $A_{(y_1,\\ldots ,y_k)}^{\\psi }={(y_1,\\ldots ,y_k)}\\mathcal {Z}$ for “2 $\\Rightarrow \\!$ 1” if $\\psi _i$ satisfies the corresponding conditions 
$A_{y_i}^{\\psi _i}={y_i}\\mathcal {Z}_i$ for all $i$ .", "The proof is given in Appendix REF .", "Equality in Proposition REF (a) is achieved when each $\\mathcal {Z}_i$ is Clarke-regular at $z_i$  [41].", "By Remark REF , equality of $\\mathbf {Q}_{(y_1,\\ldots ,y_k)}^{\\psi }$ and $\\mathbf {Q}_{y_1}^{\\psi _1}\\times \\cdots \\times \\mathbf {Q}_{y_k}^{\\psi _k}$ modulo $\\operatorname{im}\\mathbf {L}_{(y_1,\\ldots ,y_k)}^{\\psi }$ means that either one can be used to verify “2 $\\Rightarrow \\!$ 1”.", "We can now revisit the examples from the beginning of this section.", "Corollary 4.13 The preceding lifts satisfy the following.", "The sphere to ball lift in Example REF satisfies “local $\\Rightarrow \\!$ local” everywhere, “1 $\\Rightarrow \\!$ 1” at $y$ if and only if $y\\ne 0$ (i.e., at preimages of the interior of the ball), and “2 $\\Rightarrow \\!$ 1” everywhere.", "The sphere to simplex lift in Example REF satisfies “local $\\Rightarrow \\!$ local” everywhere, “1 $\\Rightarrow \\!$ 1” at $y$ if and only if $y_i\\ne 0$ for all $i$ (i.e., at preimages of points in the relative interior of the simplex), and “2 $\\Rightarrow \\!$ 1” everywhere.", "The lift of the annulus in Example REF satisfies “local $\\Rightarrow \\!$ local” everywhere, “1 $\\Rightarrow \\!$ 1” at $y$ if and only if $y_1,y_2\\ne 0$ (i.e., at preimages of points in the interior of the annulus), and “2 $\\Rightarrow \\!$ 1” everywhere.", "The Burer–Monteiro lift of Example REF under the smoothness assumption satisfies “local $\\Rightarrow \\!$ local” everywhere, “1 $\\Rightarrow \\!$ 1” at $Y$ if and only if $\\mathrm {rank}(Y)=r$ (i.e., at preimages of points of rank $r$ ), and “2 $\\Rightarrow \\!$ 1” everywhere.", "Moreover, we get the following expression for the tangent cones to $\\mathcal {X}$ : $X\\mathcal {X}= \\lbrace V\\in \\mathbb {S}^n: V\\in X\\mathbb {S}_{\\succeq 0}^n\\cap X\\mathbb {R}_{\\le r}^{n\\times n},\\ \\langle A_i,V\\rangle = 0\\rbrace .$ Note that an expression for $X\\mathbb {S}_{\\succeq 0}^n\\cap X\\mathbb {R}_{\\le r}^{n\\times n}$ is derived in Proposition REF (incidentally, also as a consequence of the sufficient condition for “2 $\\Rightarrow \\!$ 1” used in Proposition REF ).", "The expression (REF ) for the tangent cone to a smooth low-rank SDP domain appears to be new.", "Previously, it was only shown that a rank-deficient 2-critical point for the Burer–Monteiro lifted problem () maps to a stationary point for the low-rank SDP (REF ) [27].", "The result of Corollary REF shows that this is true for any 2-critical point.", "For the first three bullet points, consider the lift $\\psi (y)=y^2$ from $\\mathcal {N}=\\mathbb {R}$ to $\\mathcal {Z}=\\mathbb {R}_{\\ge 0}$ .", "Observe that it satisfies “local $\\Rightarrow \\!$ local” everywhere, “1 $\\Rightarrow \\!$ 1” at $y\\ne 0$ and satisfies the sufficient condition $A_y={\\psi (y)}\\mathcal {Z}$ for “2 $\\Rightarrow \\!$ 1” at $y=0$ .", "Indeed, at $y\\ne 0$ we have $\\mathbf {L}_y(\\dot{y})=2y\\dot{y}$ which is an isomorphism of $y\\mathcal {N}=\\mathbb {R}$ and ${y^2}\\mathcal {Z}=\\mathbb {R}$ , and at $y=0$ we have $\\mathbf {L}_y=0$ and $\\mathbf {Q}_y(\\dot{y})=2\\dot{y}^2$ by (REF ) so $A_y=\\mathbf {Q}_y(\\ker \\mathbf {L}_y)+\\operatorname{im}\\mathbf {L}_y=\\mathbb {R}_{\\ge 0} = 0\\mathcal {Z}$ .", "Propositions REF , REF , and REF imply that the first three lifts satisfy “local $\\Rightarrow \\!$ local” and “2 $\\Rightarrow \\!$ 1” everywhere and give the claimed “if” directions for “1 $\\Rightarrow \\!$ 1”.", "The 
“only if” directions follow from Corollary REF .", "For the Burer–Monteiro lift, consider the lift $\\psi (R)=RR^\\top $ from $\\mathcal {N}=\\mathbb {R}^{n\\times r}$ to $\\mathcal {Z}=\\mathbb {S}_{\\succeq 0}^n\\cap \\mathbb {R}^{n\\times n}_{\\le r}$ .", "Proposition REF shows that $\\psi $ satisfies “local $\\Rightarrow \\!$ local” and the sufficient condition $A_R = {RR^\\top }\\mathcal {Z}$ for “2 $\\Rightarrow \\!$ 1” everywhere and “1 $\\Rightarrow \\!$ 1” at points $R$ of rank $r$ .", "Therefore, Propositions REF , REF , and REF imply that the Burer–Monteiro lift satisfies “local $\\Rightarrow \\!$ local” and “2 $\\Rightarrow \\!$ 1” everywhere and “1 $\\Rightarrow \\!$ 1” at $R$ if $\\mathrm {rank}(R)=r$ .", "The “1 $\\Rightarrow \\!$ 1” property does not hold at other points by Corollary REF .", "Proposition REF gives the claimed expression for the tangent cones to $\\mathcal {X}$ .", "Example 4.14 We can now revisit the example of computing the smallest eigenvalue of a symmetric matrix mentioned in Section .", "There, $&\\mathcal {X}= \\Delta ^{d-1},\\qquad \\mathcal {M}=\\mathrm {S}^{d-1},\\qquad \\varphi (y)=\\mathrm {diag}(U^\\top yy^\\top U),$ where $U$ is the matrix of eigenvectors of a given symmetric matrix.", "Observe that $\\varphi (y)=(U^\\top y)^{\\odot 2}$ , which is the composition of the linear diffeomorphism $y\\mapsto U^\\top y$ and the sphere to simplex lift from Example REF .", "We conclude that this lift satisfies “2 $\\Rightarrow \\!$ 1” everywhere on $\\mathcal {M}$ by Proposition REF and Corollary REF .", "Therefore, any 2-critical point for () maps to a stationary point for (REF ), for any cost $f$ .", "If $f$ is convex, then since $\\mathcal {X}$ is also convex any stationary point for (REF ) is globally optimal.", "Thus, in this case any 2-critical point for () is globally optimal and its nonconvexity is benign.", "This is well-known for the eigenvalue problem, which corresponds to the case of linear $f$ ." 
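As a numerical companion to this eigenvalue example (ours, not part of the original text), the following sketch assumes only numpy, with arbitrary illustrative choices of matrix size, step size and iteration count. It runs Riemannian gradient descent for the lifted cost $g(y) = y^\top A y$ on the sphere and maps the result through $\varphi (y) = (U^\top y)^{\odot 2}$; the computed value matches the smallest eigenvalue and the image lands on the corresponding vertex of the simplex, as the benign nonconvexity discussed above predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random symmetric matrix and its eigendecomposition A = U diag(lam) U^T.
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2
lam, U = np.linalg.eigh(A)

def phi(y):
    """Lift from the sphere S^{n-1} to the simplex: phi(y) = diag(U^T y y^T U) = (U^T y)^{.2}."""
    return (U.T @ y) ** 2

def g(y):
    """Lifted cost g = f o phi with f(x) = <lam, x>, i.e. the Rayleigh quotient y^T A y."""
    return y @ A @ y

# Riemannian gradient descent on the unit sphere (project the gradient, retract by normalizing).
y = rng.standard_normal(n)
y /= np.linalg.norm(y)
step = 0.1
for _ in range(2000):
    egrad = 2 * A @ y                      # Euclidean gradient of g
    rgrad = egrad - (y @ egrad) * y        # projection onto the tangent space of the sphere at y
    y = y - step * rgrad
    y /= np.linalg.norm(y)                 # metric-projection retraction

x = phi(y)                                 # point on the simplex
print("g(y)       =", g(y))
print("lambda_min =", lam.min())
print("phi(y)     =", np.round(x, 4))      # ~ vertex selecting the smallest eigenvalue
assert abs(g(y) - lam.min()) < 1e-6
```

The retraction used here is metric projection onto the sphere; any retraction would serve equally well for this illustration.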
], [ "Conclusions and future work", "For the pair of problems () and (REF ), we characterized the properties the lift $\\varphi \\colon \\mathcal {M}\\rightarrow \\mathcal {X}$ needs to satisfy in order to map desirable points of () to desirable points of (REF ).", "We showed that global minima for () always map to global minima for (REF ) (Theorem REF ), and that local minima for () map to local minima for (REF ) if and only if $\\varphi $ is open (Theorem REF ).", "We showed that 1-critical points for () map to stationary points for (REF ) if and only if the differential of $\\varphi $ , viewed as a map from tangent spaces to $\\mathcal {M}$ to tangent cones to $\\mathcal {X}$ , is surjective (Theorem REF ).", "This requires the tangent cones to $\\mathcal {X}$ to be linear spaces.", "We then characterized when 2-critical points for () map to stationary points for (REF ), and gave two sufficient conditions and a necessary condition that may be easier to check for some examples (Theorem REF ).", "We explained several techniques to compute all quantities involved in these conditions in Section REF .", "Using our theory, we studied the above properties for a variety of lifts, including several lifts of low-rank matrices and tensors (Section ) and the Burer–Monteiro lift for smooth SDPs (Corollary REF ).", "We also proposed a systematic construction of lifts using fiber products that applies when $\\mathcal {X}$ is given as a preimage under a smooth function (Section ).", "We gave conditions under which it satisfies our desirable properties, assuming this construction yields a smooth manifold.", "In some cases, we can also obtain an expression for the tangent cone simultaneously with “2 $\\Rightarrow \\!$ 1”, as explained in Remark REF .", "We end by listing several future directions suggested by this work.", "“$k\\!", "\\Rightarrow \\!$ 1” for general $k$ : Several lifts of interest, notably tensor factorizations with more than two factors, do not satisfy “2 $\\Rightarrow \\!$ 1”.", "It would therefore be interesting to characterize “$k\\!", "\\Rightarrow \\!$ 1” for general $k$ , i.e., when do $k$ -critical points for () map to stationary points for (REF ) for any $k$ times differentiable cost $f$ ?", "Do lifts that are multilinear in $k$ arguments, such as order-$k$ tensor lifts, satisfy “$k\\!", "\\Rightarrow \\!$ 1”?", "What can be said about “$k\\!", "\\Rightarrow \\!", "\\ell $ ” for $\\ell >1$ ?", "Already for $\\ell =2$ , the second-order optimality conditions on $\\mathcal {X}$ can be involved [42].", "On the positive side, if “1 $\\Rightarrow \\!$ 1” holds at a preimage of a smooth point, then “$k\\!", "\\Rightarrow \\!", "k$ ” holds there for all $k\\ge 1$ , see Remark REF .", "Robust “$k\\!", "\\Rightarrow \\!$ 1”: Algorithms run for finitely many iterations in practice, hence can only find approximate $k$ -critical points for ().", "It is therefore important to characterize “robust” versions of “$k\\!", "\\Rightarrow \\!$ 1”, guaranteeing that approximate $k$ -critical points for () map to approximate stationary points for (REF ).", "Note that if $\\mathcal {X}$ lacks regularity, care is needed when defining approximate stationarity for (REF ), see [33].", "Obstructions to “local $\\Rightarrow \\!$ local” and “$k\\!", "\\Rightarrow \\!$ 1”: Can we find general obstructions to “local $\\Rightarrow \\!$ local” and “$k\\!", "\\Rightarrow \\!$ 1”?", "More concretely, is there a lift for low-rank tensors satisfying “2 $\\Rightarrow \\!$ 1”?", "Is there a lift for $\\mathbb {R}^{m 
\\times n}_{\\le r}$ satisfying “local $\\Rightarrow \\!$ local”?", "Are there topological obstructions to existence of lifts satisfying “local $\\Rightarrow \\!$ local”?", "What properties of the singularities of $\\mathcal {X}$ play a role?", "Conjectures: Are Conjectures REF and REF true?", "Sets $\\mathcal {X}$ defined via lift: To verify “1 $\\Rightarrow \\!$ 1” and “2 $\\Rightarrow \\!$ 1” on concrete examples of $\\mathcal {X}$ using the theory in this paper, we need to understand the tangent cones to $\\mathcal {X}$ , which is often challenging.", "Many sets $\\mathcal {X}$ encountered in applications are only defined implicitly via a lift $\\varphi \\colon \\mathcal {M}\\rightarrow \\mathcal {E}$ .", "Examples of such sets $\\mathcal {X}$ include the set of tensors admitting a certain factorization, the set of functions parameterized by a given neural network architecture, and the set of positions and orientations attainable by a robotic arm with a given joint configuration.", "Are there conditions for “$k\\!", "\\Rightarrow \\!$ 1” that can be checked using $\\varphi $ and $\\mathcal {M}$ alone, without an explicit expression for the tangent cones to $\\mathcal {X}$ ?", "Dynamical systems on $\\mathcal {M}$ and their image on $\\mathcal {X}$ : This paper is focused on comparing properties of points on $\\mathcal {M}$ and their images on $\\mathcal {X}$ .", "In contrast, several applications are concerned with properties of entire trajectories of dynamical systems on $\\mathcal {M}$ , and it may be interesting to compare these properties with their counterparts for the images of the trajectories on $\\mathcal {X}$ .", "Examples of such comparisons include relating gradient flow on the weights of a neural network to gradient flow in function or measure spaces [6], [5], [26], and the “algorithmic equivalence” technique used in [3], [21] to study mirror descent by showing that its continuous-time analogue is equivalent to gradient flow on a reparametrized problem." ], [ "Acknowledgements", "We thank Christopher Criscitiello and Quentin Rebjock for helpful conversations and comments on drafts of this paper." 
], [ "Lifts preserving local minima", "We characterize the lifts that map local minima of () to local minima of (REF ).", "To this end, we introduce a number of properties related to preservation of local minima and then prove that they are all equivalent.", "Recall that $\\overline{S}$ is our notation for the closure of a set $S$ .", "Definition A.1 Let $\\varphi \\colon \\mathcal {M}\\rightarrow \\mathcal {X}$ be a continuous, surjective map from a topological space $\\mathcal {M}$ to a metric space $\\mathcal {X}$ with distance $\\mathrm {dist}$ , and let $x = \\varphi (y)$ .", "$\\varphi $ is open at $y$ if $\\varphi (U)$ is a neighborhood of $x$ in $\\mathcal {X}$ for all neighborhoods $U$ of $y$ in $\\mathcal {M}$ .", "$\\varphi $ is approximately open at $y$ if $\\overline{\\varphi (U)}$ is a neighborhood of $x$ in $\\mathcal {X}$ for all neighborhoods $U$ of $y$ in $\\mathcal {M}$ .", "$\\varphi $ satisfies the Subsequence Lifting Property (SLP) at $y$ if for every sequence $(x_i)_{i \\ge 1} \\subseteq \\mathcal {X}$ converging to $x$ there exists a subsequence indexed by $(i_j)_{j\\ge 1}$ and a sequence $(y_{i_j})_{j \\ge 1} \\subseteq \\mathcal {M}$ converging to $y$ such that $\\varphi (y_{i_j}) = x_{i_j}$ for all $j \\ge 1$ .", "$\\varphi $ satisfies the Approximate Subsequence Lifting Property (ASLP) at $y$ if for every sequence $(x_i)_{i > 1} \\subseteq \\mathcal {X}$ converging to $x$ and every sequence $(\\epsilon _i)_{i \\ge 1} \\subseteq \\mathbb {R}_{> 0}$ converging to 0 there exists a subsequence indexed by $(i_j)_{j\\ge 1}$ and a sequence $(y_{i_j})_{j \\ge 1} \\subseteq \\mathcal {M}$ converging to $y$ such that $\\mathrm {dist}(\\varphi (y_{i_j}), x_{i_j}) \\le \\epsilon _{i_j}$ for all $j \\ge 1$ .", "Theorem A.2 If $\\mathcal {M}$ is Hausdorff, second-countable and locally compact (all of which hold if $\\mathcal {M}$ is a topological manifold), then the four properties of $\\varphi $ at $y \\in \\mathcal {M}$ in Definition REF are equivalent to each other and to the “local $\\Rightarrow \\!$ local” property at $y$ (Definition REF (a)).", "We show that ASLP $\\Rightarrow $ approximate openness $\\Rightarrow $ openness $\\Rightarrow $ SLP $\\Rightarrow $ “local $\\Rightarrow \\!$ local”$\\Rightarrow $ ASLP.", "Suppose $\\varphi $ satisfies ASLP at $y$ .", "Suppose there exists a neighborhood $U$ of $y$ such that $\\overline{\\varphi (U)}$ is not a neighborhood of $x=\\varphi (y)$ .", "Then we can find a sequence $(x_i)_{i\\ge 1}\\subseteq \\mathcal {X}$ such that $x_i\\rightarrow x$ but $x_i \\notin \\overline{\\varphi (U)}$ for all $i$ .", "Set $\\epsilon _i = \\frac{1}{2}\\mathrm {dist}(x_i, \\overline{\\varphi (U)}) > 0$ and apply ASLP to find a sequence $(y_i)_{i\\ge 1} \\subseteq \\mathcal {M}$ such that $y_i\\rightarrow y$ and $\\mathrm {dist}(\\varphi (y_i), x_i) \\le \\epsilon _i$ .", "Because $\\mathrm {dist}(\\varphi (y_i), x_i) < \\mathrm {dist}(x_i, \\overline{\\varphi (U)})$ , we have $\\varphi (y_i) \\notin \\overline{\\varphi (U)}$ for all $i$ .", "However, because $U$ is a neighborhood of $y$ and $y_i\\rightarrow y$ , we must have $y_i\\in U$ for all large $i$ , a contradiction.", "Thus, $\\overline{\\varphi (U)}$ is a neighborhood of $x$ , so $\\varphi $ is approximately open at $y$ .", "Suppose $\\varphi $ is approximately open at $y$ , and let $U$ be a neighborhood of $y$ in $\\mathcal {M}$ .", "Because $\\mathcal {M}$ is locally compact, we can find a compact neighborhood $V \\subseteq U$ of $x$ .", "Since $\\varphi $ is continuous and $V$ is 
compact, we have that $\\varphi (V)$ is compact; since $\\mathcal {X}$ is Hausdorff (it is a metric space), it follows that $\\varphi (V)$ is closed.", "Combining with the fact that $\\varphi $ is approximately open at $y$ , we deduce that $\\varphi (V)$ is a neighborhood of $x$ .", "Since $\\varphi (U) \\supseteq \\varphi (V)$ , we conclude that $\\varphi (U)$ is a neighborhood of $x$ as well.", "Thus, $\\varphi $ is open at $y$ .", "Suppose $\\varphi $ is open at $y$ , and $(x_j)_{j\\ge 1}\\subseteq \\mathcal {X}$ converges to $x = \\varphi (y)$ .", "Owing to the topological properties of $\\mathcal {M}$ , there is a sequence of open neighborhoods $U_i$ of $y$ with compact closures such that $U_i \\supseteq \\overline{U_{i+1}}$ and $\\bigcap _{i=1}^{\\infty } U_i = \\lbrace y\\rbrace $ , see Lemma REF following this proof.", "Because $\\varphi $ is open, each $\\varphi (U_i)$ is an open neighborhood of $x$ such that $\\varphi (U_i)\\supseteq \\varphi (U_{i+1})$ and $x\\in \\bigcap _{i=1}^{\\infty }\\varphi (U_i)$ .", "Moreover, because $\\varphi (U_i)$ is a neighborhood of $x$ and $x_j\\rightarrow x$ , there exists index $J(i)$ such that $x_j\\in \\varphi (U_i)$ for all $j\\ge J(i)$ .", "After passing to a subsequence of $(x_j)$ , we may assume $x_j\\in \\varphi (U_j)$ and pick $y_j\\in U_j$ satisfying $x_j=\\varphi (y_j)$ .", "Because $(y_j)$ is an infinite sequence contained in the compact set $\\overline{U_1}$ , after passing to a subsequence again we may assume that $\\lim _jy_j$ exists.", "With $i$ arbitrary, we have for all $j > i$ that $y_j\\in U_j\\subseteq U_{i+1}$ , hence that $\\lim _j y_j \\in \\overline{U_{i+1}} \\subseteq U_i$ .", "This holds for all $i$ , hence $\\lim _jy_j\\in \\bigcap _iU_i=\\lbrace y\\rbrace $ .", "Thus, $y=\\lim _iy_i$ and $\\varphi (y_i)=x_i$ , so $\\varphi $ satisfies SLP.", "Suppose $\\varphi $ satisfies SLP at $y$ .", "Let $f \\colon \\mathcal {X}\\rightarrow \\mathbb {R}$ be a cost function on $\\mathcal {X}$ and $g = f \\circ \\varphi $ .", "Suppose $x = \\varphi (y)$ is not a local minimum for $f$ on $\\mathcal {X}$ , that is, there exists a sequence $(x_i)_{i\\ge 1} \\subseteq \\mathcal {X}$ converging to $x$ such that $f(x_i) < f(x)$ for all $i$ .", "Applying SLP, after passing to a subsequence we can find a sequence $(y_i)_{i\\ge 1} \\subseteq \\mathcal {M}$ converging to $y$ such that $\\varphi (y_i) = x_i$ .", "Since $g(y_i) = f(x_i) < f(x) = g(y)$ and $y_i \\rightarrow y$ , we conclude that $y$ is not a local minimum for $g$ .", "By contrapositive, this shows that $\\varphi $ satisfies the “local $\\Rightarrow \\!$ local” property at $y$ .", "For the last implication, we proceed by contrapositive once again.", "Suppose $\\varphi $ does not satisfy ASLP at $y$ .", "Then, we can find sequences $(x_i)_{i\\ge 1}\\subseteq \\mathcal {X}$ converging to $x$ and $(\\epsilon _i)_{i \\ge 1}\\subseteq \\mathbb {R}_{> 0}$ converging to 0 such that no subsequence of $(x_i)$ can be approximately lifted to $\\mathcal {M}$ in the sense of ASLP.", "Let $\\bar{B}(x, \\epsilon ) = \\lbrace x^{\\prime }\\in \\mathcal {X}:\\mathrm {dist}(x, x^{\\prime }) \\le \\epsilon \\rbrace $ .", "Notice that $x=\\varphi (y) \\notin \\bar{B}(x_i,\\epsilon _i)$ for all but finitely many indices $i$ , as otherwise the constant sequence $y_i \\equiv y$ would give an approximate lift of a subsequence.", "Since $x_i \\rightarrow x$ and $\\epsilon _i \\rightarrow 0$ , after passing to a subsequence we may assume that the closed balls $\\bar{B}(x_i, \\epsilon _i)$ are pairwise 
disjoint and none contain $x$ .", "Define the following sum of smooth bump functions centered at the $x_i$ $f(x^{\\prime }) & = {\\left\\lbrace \\begin{array}{ll} -\\exp \\left(1-\\frac{1}{1-(\\mathrm {dist}(x_i, x^{\\prime })/\\epsilon _i)^2}\\right) & \\textrm { if } x^{\\prime }\\in \\bar{B}(x_i,\\epsilon _i) \\text{ for some $i$,} \\\\ 0 & \\textrm { otherwise.}", "\\end{array}\\right.", "}$ This is well defined because the balls $\\bar{B}(x_i,\\epsilon _i)$ are disjoint.", "(As a side note, we remark that if $\\mathcal {X}$ is a metric subspace of a Euclidean space $\\mathcal {E}$ as in our general treatment, then $f$ extends to a smooth function on $\\mathcal {E}$ .)", "Note that $x$ is not a local minimum for $f$ since $x_i\\rightarrow x$ and $f(x_i)=-1<0=f(x)$ .", "However, $y$ is a local minimum for $g = f\\circ \\varphi $ .", "Indeed, if there was a sequence $(y_i)$ converging to $y$ such that $g(y_i) < g(y) = 0$ , then we would have $\\varphi (y_i) \\in \\bar{B}(x_{n_i}, \\epsilon _{n_i})$ for an infinite subsequence $(n_i)$ with $n_i \\rightarrow \\infty $ (since we must have $\\varphi (y_i)\\rightarrow x$ by continuity of $\\varphi $ ), showing that $(y_i)$ is an approximate lift of the subsequence $(x_{n_i})$ : a contradiction to our assumptions about $(x_i), (\\epsilon _i)$ .", "Thus, $\\varphi $ does not satisfy the “local $\\Rightarrow \\!$ local” property at $y$ .", "Lemma A.3 Suppose $\\mathcal {M}$ is Hausdorff, second-countable, and locally compact.", "Then for any $y\\in \\mathcal {M}$ there is a sequence of open neighborhoods $U_i$ of $y$ with compact closures such that $U_i \\supseteq \\overline{U_{i+1}}$ and $\\bigcap _{i=1}^{\\infty } U_i = \\lbrace y\\rbrace $ .", "Because $\\mathcal {M}$ is second-countable and locally compact, we can find a countable basis of open neighborhoods with compact closures $\\lbrace V_j\\rbrace _{j\\ge 1}$ for $y$ .", "Since $\\mathcal {M}$ is Hausdorff and $\\lbrace V_j\\rbrace $ is a basis for $y$ , we have $\\bigcap _{j=1}^{\\infty }V_j=\\lbrace y\\rbrace $ .", "Indeed, if $y^{\\prime }\\ne y$ , then there exists a neighborhood of $y$ not containing $y^{\\prime }$ , and this neighborhood contains $V_i$ for some $i$ by definition of a local basis.", "By replacing $V_i$ by $\\bigcap _{j=1}^iV_j$ (which preserves their intersection), we may assume $V_i\\supseteq V_{i+1}$ .", "We construct $\\lbrace U_i\\rbrace _{i\\ge 1}$ inductively.", "Set $U_1=V_1$ , which is an open neighborhood of $y$ with compact closure by assumption.", "Having constructed $U_1,\\ldots ,U_i$ , use local compactness to find a compact neighborhood $K_{i+1}\\subseteq V_{i+1}\\cap U_i$ of $y$ and let $U_{i+1}$ be the interior of $K_{i+1}$ .", "Then $U_{i+1}$ is an open neighborhood of $y$ by construction, and $\\overline{U_{i+1}}\\subseteq K_{i+1}\\subseteq U_i$ which also shows $\\overline{U_{i+1}}$ is compact as a closed subset of the compact set $K_{i+1}$ .", "Finally, we have $\\lbrace y\\rbrace \\subseteq \\bigcap _{i=1}^{\\infty }U_i\\subseteq \\bigcap _{i=1}^{\\infty }V_i=\\lbrace y\\rbrace $ hence $\\bigcap _{i=1}^{\\infty }U_i=\\lbrace y\\rbrace $ ." 
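To see what failure of openness costs in practice, consider a toy example of our own (it does not appear in the paper): $\varphi (y) = y^3 - y$ is a smooth surjective lift of $\mathcal {X}= \mathbb {R}$ which is not open at $y = \pm 1/\sqrt{3}$ , and for the cost $f(x) = -x$ the point $y_0 = -1/\sqrt{3}$ is a local minimum of $f\circ \varphi $ even though $\varphi (y_0)$ is not a local minimum of $f$ . The short numpy check below verifies both claims on a fine grid; this is precisely the failure mode that the equivalences above rule out for open lifts.

```python
import numpy as np

phi = lambda y: y**3 - y          # smooth, surjective lift of X = R; not open at y = +-1/sqrt(3)
f   = lambda x: -x                # cost on X
g   = lambda y: f(phi(y))         # lifted cost on M = R

y0 = -1.0 / np.sqrt(3.0)          # critical point of phi (a local max of phi)
x0 = phi(y0)

# g has a local minimum at y0 ...
ys = y0 + np.linspace(-1e-2, 1e-2, 2001)
print("min of g near y0 attained at y0:", np.isclose(ys[np.argmin(g(ys))], y0, atol=1e-5))

# ... but x0 is not a local minimum of f on X = R: f keeps decreasing for x > x0.
xs = x0 + np.linspace(0.0, 1e-2, 1001)
print("f strictly below f(x0) nearby  :", np.any(f(xs) < f(x0)))
```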
], [ "Basic properties of $A_y$ and {{formula:5366be6d-fd07-46c4-b8ec-0168610a11b7}}", "[Proof of Proposition REF ] For $w\\in A_y$ , let $c\\colon I\\rightarrow \\mathcal {M}$ satisfy $c(0)=y$ , $(\\varphi \\circ c)^{\\prime }(0)=0$ and $(\\varphi \\circ c)^{\\prime \\prime }(0)=w$ .", "For any $\\alpha \\ge 0$ define $\\widetilde{c}(t)=c(\\sqrt{\\alpha } t)$ .", "Note that $\\widetilde{c}(0)=y$ , $(\\varphi \\circ \\widetilde{c})^{\\prime }(0)=\\sqrt{\\alpha }(\\varphi \\circ c)^{\\prime }(0)=0$ and $(\\varphi \\circ \\widetilde{c})^{\\prime \\prime }(0)=\\alpha (\\varphi \\circ c)^{\\prime \\prime }(0)\\in A_y$ .", "Thus, $\\alpha w\\in A_y$ so $A_y$ is a cone.", "The proof that $B_y$ is a cone is analogous.", "Suppose $w_j\\in B_y$ and $w_j\\rightarrow w$ .", "Let $c_{j,i}\\colon I_{j,i}\\rightarrow \\mathcal {M}$ satisfy $(\\varphi \\circ c_{j,i})^{\\prime }(0)\\xrightarrow{}0$ and $(\\varphi \\circ c_{j,i})^{\\prime \\prime }(0)\\xrightarrow{} w_j$ .", "For each $j$ , pick $i_j$ large enough so that $\\Vert (\\varphi \\circ c_{j,i_j})^{\\prime }(0)\\Vert \\le 1/j$ and $\\Vert (\\varphi \\circ c_{j,i_j})^{\\prime \\prime }(0)- w_j\\Vert \\le 1/j$ .", "Set $\\tilde{c}_j = c_{j,i_j}$ and observe that $(\\varphi \\circ \\tilde{c}_j)^{\\prime }(0)\\rightarrow 0$ and $(\\varphi \\circ \\tilde{c}_j)^{\\prime \\prime }(0)\\rightarrow w$ , showing $w\\in B_y$ so $B_y$ is closed.", "We prove the last claim in part (b).", "Since $0\\in \\operatorname{im}\\mathbf {L}_y$ , we trivially have $A_y + \\operatorname{im}\\mathbf {L}_y\\supseteq A_y$ and similarly for $B_y$ .", "Conversely, if $w\\in A_y$ , let $c\\colon I\\rightarrow \\mathcal {M}$ satisfy $c(0)=y$ , $(\\varphi \\circ c)^{\\prime }(0)=0$ and $(\\varphi \\circ c)^{\\prime \\prime }(0)=w$ .", "By Lemma REF (b), for any $v\\in y\\mathcal {M}$ there exists a curve $\\widetilde{c}\\colon I\\rightarrow \\mathcal {M}$ satisfying $\\widetilde{c}(0)=y$ , $\\widetilde{c}^{\\prime }(0)=c^{\\prime }(0)$ , and $\\widetilde{c}^{\\prime \\prime }(0) = c^{\\prime \\prime }(0) + v$ , the sum being taken inside $y\\mathcal {M}$ .", "Then $(\\varphi \\circ \\widetilde{c})^{\\prime }(0)=(y)[\\widetilde{c}^{\\prime }(0)] = (y)[c^{\\prime }(0)] = (\\varphi \\circ c)^{\\prime }(0) = 0,$ and by Lemma REF (a), $(\\varphi \\circ \\widetilde{c})^{\\prime \\prime }(0) = (\\varphi \\circ c)^{\\prime \\prime }(0) + \\mathbf {L}_y(v) = w + \\mathbf {L}_y(v)$ .", "This shows $A_y+\\operatorname{im}\\mathbf {L}_y\\subseteq A_y$ .", "The proof that $B_y + \\operatorname{im}\\mathbf {L}_y\\subseteq B_y$ is analogous.", "The proofs of the first half of part (b) and parts (c)-(d) are given in the main body.", "To see that $B_y\\lnot \\subseteq x\\mathcal {X}$ in general, consider $B_y$ for the lift $\\varphi \\colon \\mathcal {M}=\\mathbb {R}\\times (\\mathbb {R}^2)^2\\rightarrow \\mathcal {X}=\\mathbb {R}^{2\\times 2}_{\\le 1}$ given by $\\varphi (\\lambda ,u,v) = \\lambda uv^\\top $ at $y=(0,e_1,e_1)$ where $e_1=\\begin{bmatrix}1, & 0\\end{bmatrix}^\\top $ .", "We remark that “2 $\\Rightarrow \\!$ 1” does not hold for this lift by Proposition REF .", "To see that $x\\mathcal {X}\\lnot \\subseteq B_y$ in general, note that if $x\\mathcal {X}\\subseteq B_y$ then $B_y^*\\subseteq (x\\mathcal {X})^*$ and hence “2 $\\Rightarrow \\!$ 1” holds at $y$ by Corollary REF (b).", "In particular, we have $x\\mathcal {X}\\lnot \\subseteq B_y$ for the above example as well." 
], [ "Compositions", "[Proof of Proposition REF ] Recall that “local $\\Rightarrow \\!$ local” is equivalent to openness by Theorem REF .", "For the first statement, suppose $\\varphi \\circ \\psi $ is open at $z$ and let $V\\subseteq \\mathcal {M}$ be a neighborhood of $y$ .", "Because $\\psi $ is continuous, $\\psi ^{-1}(V)$ is a neighborhood of $z$ , so $(\\varphi \\circ \\psi )(\\psi ^{-1}(V))$ is a neighborhood of $\\varphi (y)$ , hence $\\varphi (V)\\supseteq (\\varphi \\circ \\psi )(\\psi ^{-1}(V))$ is a neighborhood of $\\varphi (y)$ as well.", "This shows $\\varphi $ is open at $y$ , and hence satisfies “local $\\Rightarrow \\!$ local” there by Theorem REF .", "For the second statement, if $U\\subseteq \\mathcal {N}$ is a neighborhood of $z$ , then $\\psi (U)\\subseteq \\mathcal {M}$ is a neighborhood of $y$ since $\\psi $ is open at $z$ .", "If $\\varphi $ satisfies “local $\\Rightarrow \\!$ local”, or equivalently, is open at $y$ , then $\\varphi (\\psi (U))\\subseteq \\mathcal {X}$ is a neighborhood of $\\varphi (y)$ , hence $\\varphi \\circ \\psi $ is open at $z$ and satisfies “local $\\Rightarrow \\!$ local” there by Theorem REF .", "For “1 $\\Rightarrow \\!$ 1”, the chain rule gives $\\varphi \\circ \\psi )(z) = (y)\\circ (z)$ for any $z\\in \\mathcal {N}$ .", "If $\\varphi \\circ \\psi $ satisfies “1 $\\Rightarrow \\!$ 1”, then Theorem REF shows that $\\operatorname{im}\\varphi \\circ \\psi )(z) = x\\mathcal {X}$ , hence $\\operatorname{im}(y)\\supseteq x\\mathcal {X}$ .", "Since $\\operatorname{im}(y)\\subseteq x\\mathcal {X}$ by Lemma REF , we conclude that $\\operatorname{im}(y)=x\\mathcal {X}$ and hence $\\varphi $ satisfies “1 $\\Rightarrow \\!$ 1” at $y$ by Theorem REF .", "Conversely, suppose $\\psi $ is a submersion at $z$ , so $(z)$ is surjective.", "Then $\\operatorname{im}(y) = \\operatorname{im}\\varphi \\circ \\psi )(z)$ .", "By Theorem REF , we conclude that $\\varphi $ satisfies “1 $\\Rightarrow \\!$ 1” at $y$ if and only if $\\varphi \\circ \\psi $ does so at $z$ .", "For “2 $\\Rightarrow \\!$ 1”, suppose first that $\\varphi \\circ \\psi $ satisfies “2 $\\Rightarrow \\!$ 1” at $z$ and $y$ is 2-critical for $f\\circ \\varphi $ where $f$ is some cost function on $\\mathcal {X}$ .", "For any curve $\\bar{c}\\colon I\\rightarrow \\mathcal {N}$ such that $\\bar{c}(0)=z$ , the curve $c = \\psi \\circ \\bar{c}\\colon I\\rightarrow \\mathcal {M}$ satisfies $c(0)=y$ .", "Because $y$ is 2-critical, we have $(f\\circ \\varphi \\circ c)^{\\prime }(0) = (f\\circ \\varphi \\circ \\psi \\circ \\bar{c})^{\\prime }(0)=0,\\quad \\textrm {and}\\quad (f\\circ \\varphi \\circ c)^{\\prime \\prime }(0) = (f\\circ \\varphi \\circ \\psi \\circ \\bar{c})^{\\prime \\prime }(0)\\ge 0.$ This shows $z$ is 2-critical for $f\\circ \\varphi \\circ \\psi $ on $\\mathcal {N}$ .", "Because $\\varphi \\circ \\psi $ satisfies “2 $\\Rightarrow \\!$ 1”, we conclude that $\\varphi (\\psi (z))=\\varphi (y)$ is stationary for $f$ on $\\mathcal {X}$ , hence $\\varphi $ satisfies “2 $\\Rightarrow \\!$ 1” at $y$ .", "Conversely, suppose $\\psi $ is a submersion at $z$ and $\\varphi $ satisfies “2 $\\Rightarrow \\!$ 1” at $y$ .", "Suppose $z$ is 2-critical for $f\\circ \\varphi \\circ \\psi $ .", "Because $\\psi $ is a submersion at $z$ , for any curve $c\\colon I\\rightarrow \\mathcal {M}$ satisfying $c(0)=y$ , there exists a potentially smaller interval $I^{\\prime }\\subseteq I$ containing $t=0$ in its interior and a curve $\\bar{c}\\colon I^{\\prime }\\rightarrow \\mathcal {N}$ such that $\\psi 
\\circ \\bar{c} = c$ .", "For example, by [31] there exists a smooth local section $\\sigma $ of $\\psi $ defined near $y$ satisfying $\\sigma (y)=z$ , in which case we can set $\\bar{c} = \\sigma \\circ c$ .", "Because $z$ is 2-critical, it follows that $y$ is 2-critical for $f\\circ \\varphi $ by (REF ).", "Since $\\varphi $ satisfies “2 $\\Rightarrow \\!$ 1” at $y$ , we conclude that $\\varphi (y)=\\varphi (\\psi (z))$ is stationary for $f$ , which shows $\\varphi \\circ \\psi $ satisfies “2 $\\Rightarrow \\!$ 1” at $z$ .", "The first claimed equality is just the chain rule $\\varphi \\circ \\psi )(z) = (y)\\circ (z)$ .", "For the second claimed equality, let $v = \\mathbf {L}_z^{\\psi }(u)\\in y\\mathcal {M}$ .", "Recall that $\\mathbf {Q}_z^{\\varphi \\circ \\psi }(u)=(\\varphi \\circ \\psi \\circ \\bar{c}_u)^{\\prime \\prime }(0)$ where $\\bar{c}_u\\colon I\\rightarrow \\mathcal {N}$ is some curve satisfying $\\bar{c}_u(0)=z$ and $\\bar{c}_u^{\\prime }(0)=u$ , and $\\mathbf {Q}_y^{\\varphi }(v)=(\\varphi \\circ c_{v})^{\\prime \\prime }(0)$ for some curve $c_{v}$ on $\\mathcal {M}$ satisfying $c_{v}(0)=y$ and $c_{v}^{\\prime }(0)=v$ .", "Since $\\psi \\circ \\bar{c}_u$ is another curve on $\\mathcal {M}$ satisfying $(\\psi \\circ \\bar{c}_u)(0)=y$ and $(\\psi \\circ \\bar{c}_u)^{\\prime }(0)=\\mathbf {L}_z^{\\psi }(u)=v$ , Lemma REF shows that $\\mathbf {Q}_y^{\\varphi }(\\mathbf {L}_z^{\\psi }(u))-\\mathbf {Q}_z^{\\varphi \\circ \\psi }(u) \\in \\operatorname{im}\\mathbf {L}_y^{\\varphi }=\\operatorname{im}\\mathbf {L}_z^{\\varphi \\circ \\psi }$ for all $u\\in z\\mathcal {N}$ as claimed, where we used the fact that $\\mathbf {L}_z^{\\varphi \\circ \\psi }=\\mathbf {L}_y^{\\varphi }\\circ \\mathbf {L}_z^{\\psi }$ and the surjectivity of $\\mathbf {L}_z^{\\psi }$ .", "Since $\\mathbf {L}_z^{\\varphi \\circ \\psi } = \\mathbf {L}_y^{\\varphi }\\circ \\mathbf {L}_z^{\\psi }$ , we have $\\ker \\mathbf {L}_z^{\\varphi \\circ \\psi }=(\\mathbf {L}_z^{\\psi })^{-1}(\\ker \\mathbf {L}_y^{\\varphi })$ .", "Because $\\mathbf {L}_z^{\\psi }$ is surjective, we have $\\mathbf {L}_z^{\\psi }\\Big ((\\mathbf {L}_z^{\\psi })^{-1}(\\ker \\mathbf {L}_y^{\\varphi })\\Big )=\\ker \\mathbf {L}_y^{\\varphi }$ .", "Using the results of the preceding paragraph, $A_z^{\\varphi \\circ \\psi } &= \\mathbf {Q}_z^{\\varphi \\circ \\psi }(\\ker \\mathbf {L}_z^{\\varphi \\circ \\psi }) + \\operatorname{im}\\mathbf {L}_z^{\\varphi \\circ \\psi } = \\mathbf {Q}_y^{\\varphi }\\circ \\mathbf {L}_z^{\\psi }\\Big ((\\mathbf {L}_z^{\\psi })^{-1}(\\ker \\mathbf {L}_y^{\\varphi })\\Big ) + \\operatorname{im}\\mathbf {L}_y^{\\varphi }\\\\ &= \\mathbf {Q}_y^{\\varphi }(\\ker \\mathbf {L}_y^{\\varphi }) + \\operatorname{im}\\mathbf {L}_y^{\\varphi } = A_y^{\\varphi }.$ If $w\\in B_y^{\\varphi }$ then there exist $v_i\\in y\\mathcal {M}$ such that $\\mathbf {L}_y^{\\varphi }(v_i)\\rightarrow 0$ and $w\\in \\lim _i(\\mathbf {Q}_y^{\\varphi }(v_i)+\\operatorname{im}\\mathbf {L}^{\\varphi }_y)$ .", "Since $\\mathbf {L}_z^{\\psi }$ is surjective, there exist $u_i\\in z\\mathcal {N}$ satisfying $\\mathbf {L}_z^{\\psi }(u_i)=v_i$ .", "We then have $\\mathbf {L}_z^{\\varphi \\circ \\psi }(u_i) = \\mathbf {L}_y^{\\varphi }(v_i)\\rightarrow 0$ , and $\\mathbf {Q}_z^{\\varphi \\circ \\psi }(u_i)+\\operatorname{im}\\mathbf {L}_z^{\\varphi \\circ \\psi }=\\mathbf {Q}_y^{\\varphi }(v_i)+\\operatorname{im}\\mathbf {L}_y^{\\varphi }$ .", "Taking $i\\rightarrow \\infty $ , we conclude that $w\\in B_z^{\\varphi \\circ \\psi }$ and 
$B_y^{\\varphi }\\subseteq B_z^{\\varphi \\circ \\psi }$ .", "Conversely, if $w\\in B_z^{\\varphi \\circ \\psi }$ then there exist $u_i\\in z\\mathcal {N}$ such that $\\mathbf {L}_z^{\\varphi \\circ \\psi }(u_i)\\rightarrow 0$ and $w\\in \\lim _i(\\mathbf {Q}_z^{\\varphi \\circ \\psi }(u_i)+\\operatorname{im}\\mathbf {L}_z^{\\varphi \\circ \\psi })$ .", "Let $v_i=\\mathbf {L}_z^{\\psi }(u_i)$ to conclude that $w\\in B_y^{\\varphi }$ , and hence $B_y^{\\varphi } = B_z^{\\varphi \\circ \\psi }$ .", "The argument for part (b) shows that $y$ is 2-critical for $f\\circ \\varphi $ if and only if $z$ is 2-critical for $f\\circ \\varphi \\circ \\psi $ , which implies $W_y^{\\varphi } = W_z^{\\varphi \\circ \\psi }$ by Definition REF ." ], [ "Products", "[Proof of Proposition REF ] If sequences $((y_1^{(j)},\\ldots ,y_k^{(j)}))_{j\\ge 1}\\subseteq \\mathcal {Z}$ and $\\tau _j\\rightarrow 0$ such that $\\tau _j>0$ satisfy $(v_1,\\ldots ,v_k) = \\lim _{j\\rightarrow \\infty }\\frac{(y_1^{(j)},\\ldots ,y_k^{(j)})-(y_1,\\ldots ,y_k)}{\\tau _j},$ then $v_i = \\lim _j\\frac{y_i^{(j)}-y_i}{\\tau _j}$ for all $i=1,\\ldots ,k$ , which shows that if $(v_1,\\ldots ,v_k)\\in {(z_1,\\ldots ,z_k)}\\mathcal {Z}$ then $v_i\\in {z_i}\\mathcal {Z}_i$ for all $i$ .", "For an example where the inclusion is strict, see [41].", "Since products of open sets form a basis for the (product) topology on $\\mathcal {Z}$ , we conclude that $\\psi $ is open at $y=(y_1,\\ldots ,y_k)$ iff $\\psi (U_1\\times \\cdots \\times U_k)=\\psi _1(U_1)\\times \\cdots \\times \\psi _k(U_k)$ is a neighborhood of $y$ whenever $U_i\\subseteq \\mathcal {M}_i$ is a neighborhood of $y_i$ for all $i$ .", "Because the interior of a product of sets in the product topology is the product of their interiors, we conclude that $\\psi _1(U_1)\\times \\cdots \\times \\psi _k(U_k)$ is a neighborhood of $y$ if and only if $\\psi _i(U_i)$ is a neighborhood of $y_i$ for all $i$ .", "This proves part (b) by Theorem REF .", "We have $\\mathbf {L}_{(y_1,\\ldots ,y_k)}^{\\psi } = \\mathbf {L}_{y_1}^{\\psi _1}\\times \\cdots \\times \\mathbf {L}_{y_k}^{\\psi _k}$ , which is defined on ${(y_1,\\ldots ,y_k)}\\mathcal {N} = {y_1}\\mathcal {N}_1\\times \\cdots \\times {y_k}\\mathcal {N}_k$ .", "Therefore, $\\operatorname{im}\\mathbf {L}_{(y_1,\\ldots ,y_k)}^{\\psi }=\\operatorname{im}\\mathbf {L}_{y_1}^{\\psi _1}\\times \\cdots \\times \\operatorname{im}\\mathbf {L}_{y_k}^{\\psi _k}$ .", "If $\\psi _i$ satisfies “1 $\\Rightarrow \\!$ 1” at $y_i$ for all $i$ , then $\\operatorname{im}\\mathbf {L}_{y_i}^{\\psi _i}={z_i}\\mathcal {Z}_i$ for all $i$ by Theorem REF , so $\\operatorname{im}\\mathbf {L}_{(y_1,\\ldots ,y_k)}^{\\psi }={z_1}\\mathcal {Z}_1\\times \\cdots \\times {z_k}\\mathcal {Z}_k$ where $z_i=\\psi _i(y_i)$ .", "Since $\\operatorname{im}\\mathbf {L}_{(y_1,\\ldots ,y_k)}^{\\psi }$ is always contained in ${(z_1,\\ldots ,z_k)}\\mathcal {Z}$ by Proposition REF (b), we get the chain of inclusions ${z_1}\\mathcal {Z}_1\\times \\ldots \\times {z_k}\\mathcal {Z}_k = \\operatorname{im}\\mathbf {L}_{(y_1,\\ldots ,y_k)}^{\\psi } \\subseteq {(z_1,\\ldots ,z_k)}\\mathcal {Z}\\subseteq {z_1}\\mathcal {Z}_1\\times \\ldots \\times {z_k}\\mathcal {Z}_k,$ where the last inclusion is part (a).", "We conclude that equality in part (a) holds and $\\operatorname{im}\\mathbf {L}_{(y_1,\\ldots ,y_k)}^{\\psi }={(z_1,\\ldots ,z_k)}\\mathcal {Z}$ so “1 $\\Rightarrow \\!$ 1” holds.", "For any $v=(v_1,\\ldots ,v_k)\\in {(y_1,\\ldots ,y_k)}\\mathcal {M}$ , let $c_{v_i}$ be a smooth curve 
on $\\mathcal {M}_i$ satisfying $\\mathbf {Q}_{y_i}^{\\psi _i}(v_i)=(\\psi _i\\circ c_{v_i})^{\\prime \\prime }(0)$ as in Definition REF .", "Then $c(t)=(c_{v_1}(t),\\ldots ,c_{v_k}(t))$ is a smooth curve on $\\mathcal {M}$ passing through $(y_1,\\ldots ,y_k)$ with velocity $v$ and $(\\psi \\circ c)^{\\prime \\prime }(0) = ((\\psi _1\\circ c_{v_1})^{\\prime \\prime }(0),\\ldots ,(\\psi _k\\circ c_{v_k})^{\\prime \\prime }(0)) = (\\mathbf {Q}_{y_1}^{\\psi _1}\\times \\cdots \\times \\mathbf {Q}_{y_k}^{\\psi _k})(v),$ showing that $\\mathbf {Q}_{(y_1,\\ldots ,y_k)}^{\\psi }\\equiv \\mathbf {Q}_{y_1}^{\\psi _1}\\times \\cdots \\times \\mathbf {Q}_{y_k}^{\\psi _k}\\mod {\\operatorname{im}\\mathbf {L}_{(y_1,\\ldots ,y_k)}^{\\psi }}$ by Lemma REF .", "By part (c) above, we also have $\\ker \\mathbf {L}_{(y_1,\\ldots ,y_k)}^{\\psi } = \\ker (\\mathbf {L}_{y_1}^{\\psi _1}\\times \\cdots \\times \\mathbf {L}_{y_k}^{\\psi _k}) = \\ker \\mathbf {L}_{y_1}^{\\psi _1}\\times \\cdots \\times \\ker \\mathbf {L}_{y_k}^{\\psi _k},$ so Proposition REF (a) gives $A_{(y_1,\\ldots ,y_k)}^{\\psi } &= \\mathbf {Q}_{(y_1,\\ldots ,y_k)}^{\\psi }(\\ker \\mathbf {L}_{(y_1,\\ldots ,y_k)}^{\\psi }) + \\operatorname{im}\\mathbf {L}_{(y_1,\\ldots ,y_k)}^{\\psi }\\\\ &= \\mathbf {Q}_{y_1}^{\\psi _1}(\\ker \\mathbf {L}_{y_1}^{\\psi _1})\\times \\cdots \\times \\mathbf {Q}_{y_k}^{\\psi _k}(\\ker \\mathbf {L}_{y_k}^{\\psi _k}) + \\operatorname{im}\\mathbf {L}_{y_1}^{\\psi _1}\\times \\cdots \\times \\operatorname{im}\\mathbf {L}_{y_k}^{\\psi _k}\\\\&= \\Big (\\mathbf {Q}_{y_1}^{\\psi _1}(\\ker \\mathbf {L}_{y_1}^{\\psi _1}) + \\operatorname{im}\\mathbf {L}_{y_1}^{\\psi _1}\\Big )\\times \\cdots \\times \\Big (\\mathbf {Q}_{y_k}^{\\psi _k}(\\ker \\mathbf {L}_{y_k}^{\\psi _k}) + \\operatorname{im}\\mathbf {L}_{y_k}^{\\psi _k}\\Big )\\\\&= A_{y_1}^{\\psi _1}\\times \\cdots \\times A_{y_k}^{\\psi _k}.$ The proof for $B_{(y_1,\\ldots ,y_k)}$ similarly follows from Proposition REF (b).", "Now recall Definition REF .", "For any $(w_1,\\ldots ,w_k)\\in \\mathcal {E}_1\\times \\cdots \\times \\mathcal {E}_k$ we have $\\psi _{(w_1,\\ldots ,w_k)}(y_1,\\ldots ,y_k)=\\langle (w_1,\\ldots ,w_k),\\psi (y_1,\\ldots ,y_k)\\rangle = \\sum _{i=1}^k\\langle w_i,\\psi _i(y_i)\\rangle = \\sum _{i=1}^k(\\psi _i)_{w_i}(y_i),$ where the first and last equalities are the definitions of $\\psi _{(w_1,\\ldots ,w_k)}(y_1,\\ldots ,y_k),(\\psi _i)_{w_i}(y_i)$ from Definition REF .", "Therefore, $\\nabla ^2\\psi _{(w_1,\\ldots ,w_k)}(y_1,\\ldots ,y_k)[\\dot{y}_1,\\ldots ,\\dot{y}_k]=\\Big (\\nabla ^2(\\psi _1)_{w_1}(y_1)[\\dot{y}_1],\\ldots ,\\nabla ^2(\\psi _k)_{w_k}(y_k)[\\dot{y}_k]\\Big ).$ The claimed expression for $W_{(y_1,\\ldots ,y_k)}^{\\psi }$ follows from Proposition REF (c) together with (REF ).", "Taking duals in part (a), we get $({(z_1,\\ldots ,z_k)}\\mathcal {Z})^*\\supseteq ({z_1}\\mathcal {Z}_1)^*\\times \\cdots \\times ({z_k}\\mathcal {Z}_k)^*$ where we used the fact that the dual of a product of cones is the product of their duals.", "If $\\psi _i$ satisfies “2 $\\Rightarrow \\!$ 1” for all $i$ , then $W_{y_i}^{\\psi _i}\\subseteq ({z_i}\\mathcal {Z}_i)^*$ by Theorem REF , in which case $W_{(y_1,\\ldots ,y_k)}^{\\psi }=W_{y_1}^{\\psi _1}\\times \\cdots \\times W_{y_k}^{\\psi _k}\\subseteq ({z_1}\\mathcal {Z}_1)^*\\times \\cdots \\times ({z_k}\\mathcal {Z}_k)^*\\subseteq ({(z_1,\\ldots ,z_k)}\\mathcal {Z})^*.$ Thus, $\\psi $ satisfies “2 $\\Rightarrow \\!$ 1” at $(y_1,\\ldots ,y_k)\\in \\mathcal {N}$ .", "This follows from (a) and
(d)." ], [ "Alternative computation of $\\mathbf {L}$ and $\\mathbf {Q}$ for desingularization lift", "In Example REF , we computed $\\mathbf {L}$ and $\\mathbf {Q}$ for the desingularization lift of $\\mathbb {R}^{m \\times n}_{\\le r}$ using charts.", "Alternatively, we can view $\\mathcal {M}$ as a quotient manifold of the embedded submanifold $\\overline{\\mathcal {M}} = \\lbrace (X,Y)\\in \\mathbb {R}^{m\\times n}\\times \\mathbb {R}^{n\\times (n-r)}:XY=0,\\ Y^\\top Y = I_{n-r}\\rbrace ,$ with the quotient map $\\pi \\colon \\overline{\\mathcal {M}}\\rightarrow \\mathcal {M}$ given by $\\pi (X,Y)=(X,\\mathrm {col}(Y))$ , see [28].", "Since quotient maps are submersions, Proposition REF shows that to check our desirable properties for $\\varphi $ at $(X,\\mathcal {S})$ , it suffices to check them for $\\overline{\\varphi }=\\varphi \\circ \\pi $ at any $(X,Y)$ such that $\\pi (X,Y)=(X,\\mathcal {S})$ .", "Since $\\overline{\\mathcal {M}}$ is an embedded submanifold of a linear space $\\mathcal {E}=\\mathbb {R}^{m\\times n}\\times \\mathbb {R}^{n\\times (n-r)}$ , we can use the corresponding techniques from Section REF to compute $\\mathbf {L}$ and $\\mathbf {Q}$ and check our properties.", "We now carry this out to illustrate the difference between the expressions obtained using different techniques.", "If $h(X,Y)=(XY,Y^\\top Y-I_{n-r})$ , then one can check that $\\ker \\mathrm {D}h(X,Y)$ has constant dimension $(m+n-r)r$ in a neighborhood of $\\overline{\\mathcal {M}}$ , hence $\\overline{\\mathcal {M}}$ is indeed an embedded submanifold of $\\mathcal {E}$ with that dimension.", "The first two derivatives of $h$ are $&\\mathrm {D}h(X,Y)[\\dot{X},\\dot{Y}] = (\\dot{X}Y + X\\dot{Y},\\dot{Y}^\\top Y + Y^\\top \\dot{Y}),\\\\&\\mathrm {D}^2h(X,Y)[(\\dot{X},\\dot{Y}),(\\dot{X},\\dot{Y})] = 2(\\dot{X}\\dot{Y},\\dot{Y}^\\top \\dot{Y}).$ The tangent spaces and second-order tangent sets to $\\overline{\\mathcal {M}}$ are then given by $&{(X,Y)}\\overline{\\mathcal {M}} = \\ker \\mathrm {D}h(X,Y) = \\lbrace (\\dot{X},\\dot{Y})\\in \\mathbb {R}^{m\\times n}\\times \\mathbb {R}^{n\\times (n-r)}: \\dot{X}Y + X\\dot{Y} = 0,\\ \\dot{Y}^\\top Y + Y^\\top \\dot{Y} = 0\\rbrace ,\\\\&2_{(X,Y),(\\dot{X},\\dot{Y})}\\overline{\\mathcal {M}} = \\lbrace (\\ddot{X},\\ddot{Y})\\in \\mathbb {R}^{m\\times n}\\times \\mathbb {R}^{n\\times (n-r)}:\\ddot{X}Y + X\\ddot{Y} = -2\\dot{X}\\dot{Y},\\ \\ddot{Y}^\\top Y + Y^\\top \\ddot{Y} = -2\\dot{Y}^\\top \\dot{Y}\\rbrace .$ A particular point in $2_{(X,Y),(\\dot{X},\\dot{Y})}\\overline{\\mathcal {M}}$ is $(\\ddot{X},\\ddot{Y}) = -(2\\dot{X}\\dot{Y} Y^\\top ,Y\\dot{Y}^\\top \\dot{Y})$ .", "The lift map extends by the same expression $\\overline{\\varphi }(X,Y)=X$ to a linear map on all of $\\mathcal {E}$ , whose first two derivatives are $\\mathrm {D}\\overline{\\varphi }(X,Y)[\\dot{X},\\dot{Y}] = \\dot{X},\\quad \\mathrm {D}^2\\overline{\\varphi }(X,Y)[(\\dot{X},\\dot{Y}),(\\dot{X},\\dot{Y})] = 0.$ Therefore, we get from (REF ) that $&\\mathbf {L}_{(X,Y)}^{\\overline{\\varphi }}(\\dot{X},\\dot{Y}) = \\mathrm {D}\\overline{\\varphi }(X,Y)[\\dot{X},\\dot{Y}] = \\dot{X},\\\\&\\mathbf {Q}_{(X,Y)}^{\\overline{\\varphi }}(\\dot{X},\\dot{Y}) = \\mathrm {D}^2\\overline{\\varphi }(X,Y)[(\\dot{X},\\dot{Y}),(\\dot{X},\\dot{Y})] + \\mathrm {D}\\overline{\\varphi }(X,Y)[(\\ddot{X},\\ddot{Y})] = -2\\dot{X}\\dot{Y} Y^\\top .$ Both maps can be restricted to the horizontal space of $\\pi $ by Remark REF , which is shown in [28] to be $H_{(X,Y)} = \\lbrace (\\dot{X},\\dot{Y}):\\dot{X}Y + X\\dot{Y} = 0,\\ Y^\\top \\dot{Y}=0\\rbrace .$ If $V\\in \\mathbb {R}^{m\\times n}$
satisfies $V\\in (\\operatorname{im}\\mathbf {L}_{(X,Y)}^{\\overline{\\varphi }})^{\\perp }$ , then the Riemannian Hessian $\\nabla ^2\\overline{\\varphi }_V(X,Y)$ of $\\overline{\\varphi }_V(X,Y)=\\langle V,\\overline{\\varphi }(X,Y)\\rangle =\\langle V,X\\rangle $ is the unique self-adjoint operator on $H_{(X,Y)}$ satisfying $\\langle \\nabla ^2\\overline{\\varphi }_V(X,Y)[\\dot{X},\\dot{Y}],(\\dot{X},\\dot{Y})\\rangle = \\langle V,\\mathbf {Q}_{(X,Y)}^{\\overline{\\varphi }}(\\dot{X},\\dot{Y})\\rangle = -2\\langle VY,\\dot{X}\\dot{Y}\\rangle ,$ given by $\\nabla ^2\\overline{\\varphi }_V(X,Y) = \\mathrm {Proj}_{H_{(X,Y)}}\\Big (-VY\\dot{Y}^\\top , -\\dot{X}^\\top VY\\Big ).$ Indeed, the above map is clearly linear, maps $H_{(X,Y)}$ to itself by definition, and can be easily verified to be self-adjoint.", "Unfortunately, orthogonal projection onto $H_{(X,Y)}$ has a complicated expression [28].", "In contrast, we get a simple explicit formula using the chart-based formalism in (REF ).", "Incidentally, using the above expression for the Riemannian Hessian we can obtain a different expression for $\\mathbf {Q}_{(X,Y)}^{\\overline{\\varphi }}$ , which is again somewhat more complicated than (REF ) obtained using charts: $\\mathbf {Q}_{(X,Y)}^{\\overline{\\varphi }}(\\dot{X},\\dot{Y}) = -2(I+XX^\\top )^{-1}\\dot{X}\\dot{Y}Y^\\top .$ By Remark REF , the expression for $\\mathbf {Q}_{(X,Y)}^{\\overline{\\varphi }}$ is only defined up to $\\operatorname{im}\\mathbf {L}_{(X,Y)}^{\\overline{\\varphi }}$ .", "The difference between the two expressions lies in $\\operatorname{im}\\mathbf {L}_{(X,Y)}^{\\overline{\\varphi }}$ by the Woodbury matrix identity.", "This example shows that verification of “2 $\\Rightarrow \\!$ 1” is simplified by using the right approach." 
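As a quick numerical sanity check of the computation above (an illustrative sketch added here, not part of the original argument), the following NumPy snippet builds a random point of $\\overline{\\mathcal {M}}$ and a horizontal tangent vector, verifies the displayed tangency and second-order tangency identities, and evaluates the stated formulas for $\\mathbf {L}_{(X,Y)}^{\\overline{\\varphi }}$ and $\\mathbf {Q}_{(X,Y)}^{\\overline{\\varphi }}$ ; the dimensions are arbitrary small values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 5, 4, 2

# A point (X, Y) on the embedded submanifold: X Y = 0 and Y^T Y = I_{n-r}.
Y, _ = np.linalg.qr(rng.standard_normal((n, n - r)))
X = rng.standard_normal((m, n)) @ (np.eye(n) - Y @ Y.T)

# A horizontal tangent vector (Xdot, Ydot): Xdot Y + X Ydot = 0 and Y^T Ydot = 0.
Ydot = (np.eye(n) - Y @ Y.T) @ rng.standard_normal((n, n - r))
Xdot = -X @ Ydot @ Y.T + rng.standard_normal((m, n)) @ (np.eye(n) - Y @ Y.T)
assert np.allclose(Xdot @ Y + X @ Ydot, 0) and np.allclose(Y.T @ Ydot, 0)

# The particular second-order point quoted above: (Xddot, Yddot) = -(2 Xdot Ydot Y^T, Y Ydot^T Ydot).
Xddot, Yddot = -2 * Xdot @ Ydot @ Y.T, -Y @ Ydot.T @ Ydot
assert np.allclose(Xddot @ Y + X @ Yddot, -2 * Xdot @ Ydot)
assert np.allclose(Yddot.T @ Y + Y.T @ Yddot, -2 * Ydot.T @ Ydot)

# L and Q for the lift phi_bar(X, Y) = X, as derived above.
L_val = Xdot                      # L(Xdot, Ydot) = D(phi_bar)[Xdot, Ydot] = Xdot
Q_val = -2 * Xdot @ Ydot @ Y.T    # Q(Xdot, Ydot) = D^2(phi_bar)[...] + D(phi_bar)[(Xddot, Yddot)] = Xddot
print("identities verified; ||L|| =", np.linalg.norm(L_val), "||Q|| =", np.linalg.norm(Q_val))
```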
], [ "Rank factorization lift", "[Proof of Proposition REF ] For “local $\\Rightarrow \\!$ local”, we follow [32] and show SLP holds at every “balanced” $(L,R)$ factorizations, i.e., one satisfying $\\mathrm {rank}(L)=\\mathrm {rank}(R)=\\mathrm {rank}(LR^\\top )$ .Compare this argument to the proof of Proposition REF .", "In both cases, the proof relies on a characterization of the fibers of $\\varphi $ or a subset of them.", "Let $X=LR^\\top $ and suppose $(X_i)_{i\\in \\mathbb {N}}\\subseteq \\mathcal {X}$ converges to $X$ .", "Let $X_i=U_i\\Sigma _iV_i^\\top $ be a size-$r$ SVD of $X_i$ where $\\Sigma _i\\in \\mathbb {R}^{r\\times r}$ is diagonal with the first $r$ singular values of $X_i$ (possibly including zeros) on the diagonal.", "Let $L_i=U_i\\Sigma _i^{1/2}$ and $R_i=V_i\\Sigma _i^{1/2}$ .", "These satisfy $\\varphi (L_i,R_i)=L_iR_i^\\top =X_i$ and $\\Vert L_i\\Vert =\\Vert R_i\\Vert =\\Vert X_i\\Vert ^{1/2}$ .", "Since $\\Vert X_i\\Vert $ are bounded, after passing to a subsequence we may assume that the limit $(L_{\\infty },R_{\\infty })=\\lim _i(L_i,R_i)$ exists.", "By continuity of $\\varphi $ , we must have $L_{\\infty }R_{\\infty }^\\top = X$ .", "Since $L_i^\\top L_i=R_i^\\top R_i=\\Sigma _i$ for all $i$ , we also have $L_{\\infty }^\\top L_{\\infty } = R_{\\infty }^\\top R_{\\infty }$ .", "This implies $\\mathrm {rank}(L_{\\infty }) = \\mathrm {rank}(R_{\\infty })=\\mathrm {rank}(X)$ by considering the polar decompositions of $L_{\\infty },R_{\\infty }$ , see [32].", "By [32], there exists $J\\in \\mathrm {GL}(r)$ satisfying $L = L_{\\infty }J$ and $R = R_{\\infty }J^{-\\top }$ .", "Therefore, $(L_iJ,R_iJ^{-\\top })$ converges to $(L,R)$ and is a lift of $X_i$ , showing that SLP holds.", "We conclude that $\\varphi $ satisfies “local $\\Rightarrow \\!$ local” at $(L,R)$ such that $\\mathrm {rank}(L)=\\mathrm {rank}(R)=\\mathrm {rank}(LR^\\top )$ by Theorem REF .", "Conversely, we show that if $\\mathrm {rank}(L), \\mathrm {rank}(R)$ and $\\mathrm {rank}(LR^\\top )$ are not all equal then “local $\\Rightarrow \\!$ local” does not hold at $(L,R)$ by constructing a sequence $(X_i)$ converging to $X=LR^\\top $ no subsequence of which can be lifted to a sequence converging to $(L,R)$ .", "It always holds that $\\mathrm {rank}(LR^\\top )\\le \\min \\lbrace \\mathrm {rank}(L),\\mathrm {rank}(R)\\rbrace $ .", "Assume $\\mathrm {rank}(X)<\\mathrm {rank}(L)$ (in particular, $\\mathrm {rank}(X)<r$ ).", "The case $\\mathrm {rank}(X)<\\mathrm {rank}(R)$ is similar.", "Define $L_i=\\mathrm {Proj}_{\\mathrm {col}(X)}L+i^{-1}L_{\\perp },\\quad \\textrm {and}\\quad R_i=\\mathrm {Proj}_{\\mathrm {row}(X)}R+i^{-1}R_{\\perp },$ for $L_{\\perp }\\in \\mathbb {R}^{m\\times r}$ and $R_{\\perp }\\in \\mathbb {R}^{n\\times r}$ satisfying $L_{\\perp }^\\top L=R_{\\perp }^\\top R=0$ and $\\mathrm {rank}(L_{\\perp })=\\mathrm {rank}(R_{\\perp })=r-\\mathrm {rank}(X)$ .", "Note that $\\mathrm {rank}(L_i)=\\mathrm {rank}(R_i)=r$ for all $i$ .", "We construct $L_{\\perp },R_{\\perp }$ as follows.", "Let $\\mathrm {Proj}_{\\mathrm {col}(X)}L=U_L\\Sigma _LV_L^\\top $ be a thin SVD for $\\mathrm {Proj}_{\\mathrm {col}(X)}L$ , whose rank is $\\mathrm {rank}(X)$ (because $X=LR^\\top $ implies $\\mathrm {col}(X)\\subseteq \\mathrm {col}(L)$ ).", "Since $\\mathrm {rank}(X)<r$ , we can find $W_L\\in \\mathrm {St}(m,r-\\mathrm {rank}(X))$ and $Q_L\\in \\mathrm {St}(r,r-\\mathrm {rank}(X))$ such that $W_L^\\top U_L=Q_L^\\top V_L=0$ , and note that $L_{\\perp }=W_LQ_L^\\top $ satisfies the desired properties.", 
"The construction of $R_{\\perp }$ is analogous.", "Define $X_i=L_iR_i^\\top $ which converges to $X$ as $i\\rightarrow \\infty $ , and suppose $(\\tilde{L}_{i_j},\\tilde{R}_{i_j})$ is a lift of a subsequence of $(X_i)$ converging to $(L,R)$ .", "Because $\\mathrm {rank}(L_i)=\\mathrm {rank}(R_i)=r$ , we also have $\\mathrm {rank}(X_i)=r$ , and there exist $J_{i_j}\\in \\mathrm {GL}(r)$ such that $(\\tilde{L}_{i_j},\\tilde{R}_{i_j})=(L_{i_j}J_{i_j},R_{i_j}J_{i_j}^{-\\top })$ , see [32].", "Then $L^\\top L_{i_j}J_{i_j}=L^\\top \\tilde{L}_{i_j}\\xrightarrow{}L^\\top L,\\quad \\textrm {whose rank is } \\mathrm {rank}(L) > \\mathrm {rank}(X).$ But we also have $L^\\top L_{i_j}J_{i_j} = L^\\top \\mathrm {Proj}_{\\mathrm {col}(X)}LJ_{i_j},\\quad \\textrm {whose rank is } \\mathrm {rank}(X),$ so $\\mathrm {rank}(\\lim _jL^\\top L_{i_j}J_{i_j})\\le \\mathrm {rank}(X)$ since bounded-rank matrices form a closed set.", "This is a contradiction.", "Thus, no subsequence of $(X_i)$ can be lifted, showing that $\\varphi $ does not satisfy “local $\\Rightarrow \\!$ local” at $(L,R)$ by Theorem REF .", "For an explicit example of a cost $f$ and point $(L,R)$ which is a local minimum for () but such that $LR^\\top $ is not a local minimum for (REF ), see [32].", "For “1 $\\Rightarrow \\!$ 1” and “2 $\\Rightarrow \\!$ 1”, note that $\\mathcal {M}$ is a linear space, hence (REF ) gives $&\\mathbf {L}_{(L,R)}(\\dot{L},\\dot{R}) = \\dot{L}R^\\top + L\\dot{R}^\\top ,\\\\&\\mathbf {Q}_{(L,R)}(\\dot{L},\\dot{R}) = 2\\dot{L}\\dot{R}^\\top .$ If $\\mathrm {rank}(LR^\\top )=r$ , then [32] (which is a slight generalization of the proof of “1 $\\Rightarrow \\!$ 1” in Proposition REF ) shows $\\operatorname{im}\\mathbf {L}_{(L,R)}={LR^\\top }\\mathcal {X}$ hence “1 $\\Rightarrow \\!$ 1” holds.", "If $\\mathrm {rank}(LR^\\top )<r$ then ${LR^\\top }\\mathcal {X}$ is not a linear space [25], hence “1 $\\Rightarrow \\!$ 1” does not hold at $(L,R)$ by Corollary REF .", "To show “2 $\\Rightarrow \\!$ 1” holds everywhere on $\\mathcal {M}$ , it suffices to show $B_{(L,R)}^*\\subseteq (X\\mathcal {X})^*$ whenever $\\mathrm {rank}(LR^\\top )<r$ by Theorem REF .", "Since $\\mathrm {rank}(LR^\\top )<r$ we must have either $\\mathrm {rank}(L)<r$ or $\\mathrm {rank}(R)<r$ , assume the former (the case $\\mathrm {rank}(R)<r$ is similar).", "Then there exists $w\\in \\mathbb {R}^r$ such that $Lw=0$ and $\\Vert w\\Vert ^2=1$ .", "For any $u\\in \\mathbb {R}^m$ , $v\\in \\mathbb {R}^n$ , and $i\\in \\mathbb {N}$ , let $\\dot{L}_i = i^{-1} uw^\\top $ and $\\dot{R}_i = (i/2) vw^\\top $ .", "Then $&\\mathbf {L}_{(L,R)}(\\dot{L}_i,\\dot{R}_i) = i^{-1}uw^\\top R^\\top \\xrightarrow{}0,\\\\&\\mathbf {Q}_{(L,R)}(\\dot{L}_i,\\dot{R}_i) = uv^\\top ,$ showing that $uv^\\top \\in B_{(L,R)}$ .", "Thus, $B_{(L,R)}$ contains all rank-1 matrices, showing that $B_y^*=\\lbrace 0\\rbrace = (X\\mathcal {X})^*$ ." 
], [ "Desingularization lift", "[Proof of Proposition REF ] We begin by showing “local $\\Rightarrow \\!$ local” does not hold at $(X,\\mathcal {S})\\in \\mathcal {M}$ if $\\mathrm {rank}(X)<r$ .", "Since $(X,\\mathcal {S})\\in \\mathcal {M}$ , the subspace $\\mathcal {S}\\subseteq \\mathbb {R}^n$ has dimension $n-r$ and $\\mathcal {S}\\subseteq \\ker (X)$ .", "We construct a sequence converging to $X$ such that no subsequence of it can be lifted to a sequence converging to $(X,\\mathcal {S})$ , demonstrating that SLP does not hold at $(X,\\mathcal {S})$ .", "Let $X=U\\Sigma V^\\top $ be a thin SVD for $X$ where $U\\in \\mathrm {St}(m,\\mathrm {rank}(X)),\\ V\\in \\mathrm {St}(n,\\mathrm {rank}(X))$ .", "Since $\\mathcal {S}\\subseteq \\ker (X)$ , we have $\\mathrm {col}(V)\\subseteq \\mathcal {S}^{\\perp }$ .", "Let $U_{\\perp }\\in \\mathrm {St}(m,r-\\mathrm {rank}(X))$ have columns orthogonal to those of $U$ , and $V_{\\perp }\\in \\mathrm {St}(n,r-\\mathrm {rank}(X))$ have columns orthogonal to those of $V$ , such that some column of $V_{\\perp }$ is contained in $\\mathcal {S}$ .", "Define $X_i = X + i^{-1}U_{\\perp }V_{\\perp }^\\top \\in \\mathbb {R}^{m\\times n}.$ Note that $X_i\\rightarrow X$ , that $\\mathrm {rank}(X_i)=r$ for all $i$ , and that $\\ker (X_i) = \\ker (X)\\cap \\mathrm {col}(V_{\\perp })^{\\perp } =:\\mathcal {S}^{\\prime }\\ne \\mathcal {S}$ for all $i$ .", "Suppose $(X_{i_j},\\mathcal {S}_{i_j})\\in \\mathcal {M}$ is a lift of a subsequence converging to $(X,\\mathcal {S})$ .", "Since $\\mathrm {rank}(X_i)=r$ , we must have $\\mathcal {S}_{i_j}=\\ker (X_{i_j})=\\mathcal {S}^{\\prime }$ for all $j$ .", "Thus, $\\lim _j\\mathcal {S}_{i_j}=\\mathcal {S}^{\\prime }\\ne \\mathcal {S}$ , a contradiction.", "Therefore, no subsequence of $(X_i)$ can be lifted, so we conclude that $\\varphi $ does not satisfy “local $\\Rightarrow \\!$ local” at $(X,\\mathcal {S})$ by Theorem REF .", "For an explicit example of a cost $f$ and point $(X,\\mathcal {S})\\in \\mathcal {M}$ that is a local minimum for (), but such that $X$ is not a local minimum for (REF ), see [32].", "We shall show “local $\\Rightarrow \\!$ local” holds at $(X,\\mathcal {S})\\in \\mathcal {M}$ if $\\mathrm {rank}(X)=r$ by showing that “1 $\\Rightarrow \\!$ 1” holds there.", "This gives “local $\\Rightarrow \\!$ local” as well by Proposition REF .", "Alternatively, one can prove “local $\\Rightarrow \\!$ local” at such points by noting that $\\varphi $ is a diffeomorphism between $\\mathbb {R}^{m\\times n}_{=r}$ and $\\lbrace (X,\\ker (X))\\in \\mathcal {M}:\\mathrm {rank}(X)=r\\rbrace $ , hence $\\varphi $ is in particular open at points $(X,\\mathcal {S})$ with $\\mathrm {rank}(X)=r$ .", "For “1 $\\Rightarrow \\!$ 1” and “2 $\\Rightarrow \\!$ 1”, we use the results of Example REF .", "Recall from that example that every $(X,\\mathcal {S})\\in \\mathcal {M}$ is in the image of a chart of the form $\\psi (Z,W) = \\left(\\begin{bmatrix} -ZW, & Z\\end{bmatrix}\\Pi , \\mathrm {col}\\!\\left(\\Pi ^\\top \\begin{bmatrix} I_{n-r}\\\\ W\\end{bmatrix}\\right)\\right),\\qquad (Z,W)\\in \\mathbb {R}^{m\\times r}\\times \\mathbb {R}^{r\\times (n-r)}$ for some permutation matrix $\\Pi \\in \\mathbb {R}^{n\\times n}$ .", "Proposition REF implies that our desirable properties hold at $(Z,W)$ iff they hold at $(X,\\mathcal {S})$ , so it suffices to consider the composed lift $\\widetilde{\\varphi }(Z,W)=\\begin{bmatrix} -ZW, & Z\\end{bmatrix}\\Pi =: X,$ defined on the linear space $\\mathbb {R}^{m\\times r}\\times \\mathbb 
{R}^{r\\times (n-r)}$ .", "For this lift, we computed $\\mathbf {L}_{(Z,W)}$ and $\\mathbf {Q}_{(Z,W)}$ in (REF ): $\\mathbf {L}_{(Z,W)}(\\dot{Z},\\dot{W}) = \\begin{bmatrix} -\\dot{Z}W - Z\\dot{W}, & \\dot{Z}\\end{bmatrix}\\Pi ,\\qquad \\mathbf {Q}_{(Z,W)}(\\dot{Z},\\dot{W}) = \\begin{bmatrix} -2\\dot{Z}\\dot{W}, & 0\\end{bmatrix}\\Pi .$ Suppose $\\mathrm {rank}(X)=r$ .", "Note that $\\mathrm {col}(X)=\\mathrm {col}(Z)$ , so $\\mathrm {rank}(Z)=r$ .", "If $\\mathbf {L}_{(Z,W)}(\\dot{Z},\\dot{W}) = 0$ , then $\\dot{Z}=0$ and $Z\\dot{W}=0$ .", "This implies $\\dot{W}=0$ since $Z$ has full column rank.", "Thus, $\\mathbf {L}_{(Z,W)}$ is injective, but since its domain has dimension $(m+n-r)r=\\dim \\mathbb {R}^{m\\times n}_{=r}$ , we conclude that it is an isomorphism.", "Thus, “1 $\\Rightarrow \\!$ 1” holds at $(Z,W)$ by Theorem REF .", "If $\\mathrm {rank}(X)<r$ then $X\\mathcal {X}$ is not a linear space [25], hence “1 $\\Rightarrow \\!$ 1” cannot hold for any lift by Corollary REF .", "Suppose $\\mathrm {rank}(X)<r$ .", "We show “2 $\\Rightarrow \\!$ 1” holds at $(Z,W)$ by showing that $B_{(Z,W)}^*\\subseteq (X\\mathcal {X})^*$ .", "To that end, note that $\\mathrm {rank}(Z)=\\mathrm {rank}(X)<r$ , so there is a unit vector $w\\in \\mathbb {R}^r$ satisfying $Zw=0$ .", "Let $u\\in \\mathbb {R}^m$ and $v\\in \\mathbb {R}^{n-r}$ be arbitrary.", "For any $i\\in \\mathbb {N}$ , let $\\dot{Z}_i=i^{-1} uw^\\top $ and $\\dot{W}_i=i wv^\\top $ .", "Then $&\\mathbf {L}_{(Z,W)}(\\dot{Z}_i,\\dot{W}_i) = \\begin{bmatrix} -i^{-1}uw^\\top W - i(Zw)v^\\top , & i^{-1} uw^\\top \\end{bmatrix}\\Pi \\xrightarrow{} 0,\\\\&\\mathbf {Q}_{(Z,W)}(\\dot{Z}_i,\\dot{W}_i) \\equiv \\begin{bmatrix} -2uv^\\top , & 0\\end{bmatrix}\\Pi .$ We conclude that $B_{(Z,W)}\\supseteq (\\mathbb {R}^{m\\times (n-r)}_{\\le 1}\\times \\lbrace 0\\rbrace )\\Pi + \\operatorname{im}\\mathbf {L}_{(Z,W)} \\Rightarrow B_{(Z,W)}^*\\subseteq (\\lbrace 0\\rbrace \\times \\mathbb {R}^{m\\times r})\\Pi \\cap (\\operatorname{im}\\mathbf {L}_{(Z,W)})^{\\perp }.$ To characterize $(\\operatorname{im}\\mathbf {L}_{(Z,W)})^{\\perp }$ , observe that $V=\\begin{bmatrix}V_1, & V_2\\end{bmatrix}\\Pi \\in \\mathbb {R}^{m\\times n}$ with $V_1\\in \\mathbb {R}^{m\\times (n-r)}$ satisfies $V\\in (\\operatorname{im}\\mathbf {L}_{(Z,W)})^{\\perp }$ iff the following holds for all $(\\dot{Z},\\dot{W})\\in \\mathbb {R}^{m\\times r}\\times \\mathbb {R}^{r\\times (n-r)}$ : $\\langle V,\\mathbf {L}_{(Z,W)}(\\dot{Z},\\dot{W})\\rangle = \\langle V_1, -\\dot{Z}W - Z\\dot{W}\\rangle + \\langle V_2,\\dot{Z}\\rangle = \\langle \\dot{Z}, V_2-V_1W^\\top \\rangle - \\langle \\dot{W},Z^\\top V_1\\rangle = 0.$ This is equivalent to $V_2=V_1W^\\top $ and $Z^\\top V_1=0$ .", "Thus, if $V_1=0$ then $V=0$ , hence $B_{(Z,W)}^*=\\lbrace 0\\rbrace =(X\\mathcal {X})^*$ .", "This shows “2 $\\Rightarrow \\!$ 1” holds at $(Z,W)$ ." ], [ "SVD lift and its modification", "[Proof of Proposition REF ] We first analyze the SVD lift, and then its modification." 
], [ "SVD lift:", "Considering the lift (REF ).", "Fix $(U,\\sigma ,V)\\in \\mathcal {M}$ such that the entries of $|\\sigma |$ are all nonzero but not distinct.", "Choose $k<\\ell $ such that $|\\sigma _k|=|\\sigma _{\\ell }|$ and let $X = \\varphi (U,\\sigma ,V)=U\\mathrm {diag}(\\sigma )V^\\top $ .", "We show “local $\\Rightarrow \\!$ local” does not hold at $(U,\\sigma ,V)$ by constructing a sequence converging to $X$ such that no subsequence of it can be lifted to a sequence converging to $(U,\\sigma ,V)$ .", "Choose $\\alpha _i\\in \\mathbb {R}^r$ converging to zero such that $|\\sigma +\\alpha _i|$ has distinct entries which are all nonzero.", "Let $Q^{(k,\\ell )}\\in O(r)$ be the Givens rotation matrix rotating by $\\pi /4$ in $(k,\\ell )$ plane, given explicitly by $Q^{(k,\\ell )}_{r,s} = {\\left\\lbrace \\begin{array}{ll} 1/\\sqrt{2} & \\textrm {if } r=s=k \\textrm { or } r=s=\\ell ,\\\\ 1/\\sqrt{2} & \\textrm {if } r=\\ell ,s=k,\\\\ -1/\\sqrt{2} & \\textrm {if } r=k,s=\\ell ,\\\\ 1 & \\textrm {if } r=s\\notin \\lbrace k,\\ell \\rbrace ,\\\\ 0 & \\textrm {otherwise},\\end{array}\\right.", "}$ and let $U^{(k,\\ell )} = U\\mathrm {sign}(\\sigma )Q^{(k,\\ell )}\\mathrm {sign}(\\sigma )\\in \\mathrm {St}(m,r)$ and $V^{(k,\\ell )}=VQ^{(k,\\ell )}\\in \\mathrm {St}(n,r)$ .", "Define $X_i= \\varphi (U^{(k,\\ell )},\\sigma + \\alpha _i, V^{(k,\\ell )})\\in \\mathcal {M},$ whose limit is $\\lim _{i\\rightarrow \\infty }X_i = U^{(k,\\ell )}\\mathrm {diag}(\\sigma )(V^{(k,\\ell )})^\\top = U\\mathrm {sign}(\\sigma )\\Big [Q^{(k,\\ell )}\\mathrm {diag}(|\\sigma |)(Q^{(k,\\ell )})^\\top \\Big ] V^\\top = U\\mathrm {diag}(\\sigma )V^\\top = X.$ The third equality above follows since $|\\sigma _k|=|\\sigma _{\\ell }|$ so the corresponding submatrix of $\\mathrm {diag}(\\sigma )$ is a multiple of the identity, and $Q^{(k,\\ell )}$ is orthogonal and acts by the identity outside of that submatrix.", "Suppose $(U_{i_j},\\sigma _{i_j},V_{i_j})$ is a lift of some subsequence of $(X_i)$ converging to $(U,\\sigma ,V)$ .", "The singular values of $X_{i_j}$ are the entries of $|\\sigma _{i_j}|$ , which are also the entries of $|\\sigma + \\alpha _{i_j}|$ .", "Since all these singular values are distinct, $U_{i_j}$ and $V_{i_j}$ must contain the $k$ th and $\\ell $ th columns of $U^{(k,\\ell )}$ and $V^{(k,\\ell )}$ up to sign, since the singular vectors are unique up to sign [48].", "But then it cannot happen that $(U_{i_j},V_{i_j})\\rightarrow (U,V)$ by construction of $Q^{(k,\\ell )}$ , a contradiction.", "Thus, no subsequence of $(X_i)$ can be lifted to a sequence on $\\mathcal {M}$ converging to $(U,\\sigma ,V)$ , showing that “local $\\Rightarrow \\!$ local” does not hold there.", "Proposition REF together with the fact that $\\mathcal {X}^{\\mathrm {smth}}=\\mathbb {R}^{m\\times n}_{=r}$ implies that “1 $\\Rightarrow \\!$ 1” does not hold at such $(U,\\sigma ,V)$ either.", "Next, fix $(U,\\sigma ,V)\\in \\mathcal {M}$ such that $\\sigma _k=0$ for some $k$ , and let $X=\\varphi (U,\\sigma ,V)$ .", "Since $\\mathrm {rank}(X)<r<\\min (m,n)$ , there exist unit vectors $u_k^{\\prime }\\in \\mathbb {R}^m$ and $v_k^{\\prime }\\in \\mathbb {R}^n$ such that $U^\\top u_k^{\\prime }=0$ and $V^\\top v_k^{\\prime }=0$ .", "Let $U^{(k)},V^{(k)}$ be obtained from $U,V$ by replacing their $k$ th columns with $u_k^{\\prime },v_k^{\\prime }$ , respectively, and let $\\alpha _i\\in \\mathbb {R}^r$ be a sequence converging to zero such that $|\\sigma +\\alpha _i|$ has distinct entries that are all nonzero.", "Define 
$X_i = \\varphi (U^{(k)},\\sigma +\\alpha _i,V^{(k)})$ which converge to $X$ as $i\\rightarrow \\infty $ .", "If $(U_{i_j},\\sigma _{i_j},V_{i_j})$ is a lift of a subsequence of $(X_i)$ converging to $(U,\\sigma ,V)$ , then one of the columns of $U_{i_j},V_{i_j}$ must be $u_k^{\\prime },v_k^{\\prime }$ up to sign because $|\\sigma _{i_j}|$ has distinct entries which are the singular values of $X_{i_j}$ .", "This contradicts $U_{i_j}\\rightarrow U$ and $V_{i_j}\\rightarrow V$ .", "Thus, “local $\\Rightarrow \\!$ local” does not hold at such $(U,\\sigma ,V)$ .", "Corollary REF shows that “1 $\\Rightarrow \\!$ 1” does not hold there either since $X\\mathcal {X}$ is not a linear space.", "Finally, fix $(U,\\sigma ,V)\\in \\mathcal {M}$ such that all the entries of $|\\sigma |$ are nonzero and distinct.", "We verify that “1 $\\Rightarrow \\!$ 1” holds there using Theorem REF by showing $\\operatorname{im}\\mathbf {L}_{(U,\\sigma ,V)}=X\\mathcal {X}$ .", "Since $\\mathcal {M}$ is a product of embedded submanifolds of linear spaces, we have from (REF ) that ${(U,\\sigma ,V)}\\mathcal {M}= U\\mathrm {St}(m,r)\\times {\\sigma }\\mathbb {R}^r\\times V\\mathrm {St}(n,r) = \\lbrace (\\dot{U},\\dot{\\sigma },\\dot{V}):U^\\top \\dot{U}+\\dot{U}^\\top U = V^\\top \\dot{V} + \\dot{V}^\\top V = 0\\rbrace ,$ and $\\mathbf {L}_{(U,\\sigma ,V)}(\\dot{U},\\dot{\\sigma },\\dot{V}) = (U,\\sigma ,V)[\\dot{U},\\dot{\\sigma }, \\dot{V}] = \\dot{U}\\mathrm {diag}(\\sigma ) V^\\top + U\\mathrm {diag}(\\dot{\\sigma })V^\\top + U\\mathrm {diag}(\\sigma )\\dot{V}^\\top ,$ where $(\\dot{U},\\dot{\\sigma },\\dot{V})\\in {(U,\\sigma ,V)}\\mathcal {M}$ .", "Let $U_{\\perp }\\in \\mathrm {St}(m,m-r)$ and $V_{\\perp }\\in \\mathrm {St}(n,n-r)$ satisfy $U^\\top U_{\\perp }=0$ and $V^\\top V_{\\perp }=0$ .", "By [8], we have $\\dot{U}\\in U\\mathrm {St}(m,r)$ and $\\dot{V}\\in V\\mathrm {St}(n,r)$ if and only if $\\dot{U} = U\\Omega _u + U_{\\perp }B_u,\\quad \\dot{V} = V\\Omega _v + V_{\\perp }B_v,$ where $\\Omega _u,\\Omega _v\\in \\mathrm {Skew}(r)\\lbrace \\Omega \\in \\mathbb {R}^{r\\times r}:\\Omega ^\\top +\\Omega =0\\rbrace $ and $B_u\\in \\mathbb {R}^{(m-r)\\times r}$ , $B_v\\in \\mathbb {R}^{(n-r)\\times r}$ are arbitrary.", "Using this parametrization, $\\mathbf {L}_{(U,\\sigma ,V)}(\\dot{U},\\dot{\\sigma },\\dot{V}) = \\begin{bmatrix} U & U_{\\perp }\\end{bmatrix}\\begin{bmatrix} \\Omega _u\\mathrm {diag}(\\sigma ) - \\mathrm {diag}(\\sigma )\\Omega _v + \\mathrm {diag}(\\dot{\\sigma }) & \\mathrm {diag}(\\sigma )B_v^\\top \\\\ B_u\\mathrm {diag}(\\sigma ) & 0\\end{bmatrix}\\begin{bmatrix} V & V_{\\perp }\\end{bmatrix}^\\top .$ Since $X\\mathcal {X}$ is a linear space and $\\dim {(U,\\sigma ,V)}\\mathcal {M}=\\dim X\\mathcal {X}$ , we have $\\operatorname{im}\\mathbf {L}_{(U,\\sigma ,V)}=X\\mathcal {X}$ if and only if $\\mathbf {L}_{(U,\\sigma ,V)}$ is injective.", "Suppose therefore that $\\mathbf {L}_{(U,\\sigma ,V)}(\\dot{U},\\dot{\\sigma },\\dot{V})=0$ .", "By (REF ) this is equivalent to $B_u = 0,\\quad B_v = 0,\\quad \\dot{\\sigma }= 0,\\quad \\Omega _u\\mathrm {diag}(\\sigma )-\\mathrm {diag}(\\sigma )\\Omega _v = 0,$ where $\\dot{\\sigma }=0$ follows by considering the diagonal of the top left block in (REF ).", "The fourth equality in (REF ) together with the skew-symmetry of $\\Omega _u,\\Omega _v$ gives for all $i,j$ that $(\\Omega _u)_{i,j}=\\frac{\\sigma _i}{\\sigma _j}(\\Omega _v)_{i,j} \\Rightarrow -\\frac{\\sigma _i}{\\sigma _j}(\\Omega _v)_{i,j} = (\\Omega _u)_{j,i}=-\\frac{\\sigma _j}{\\sigma 
_i}(\\Omega _v)_{i,j}\\Rightarrow \\frac{\\sigma _i^2-\\sigma _j^2}{\\sigma _i\\sigma _j}(\\Omega _v)_{i,j}=0.$ Since $|\\sigma _i|\\ne |\\sigma _j|$ whenever $i\\ne j$ , we get $(\\Omega _v)_{i,j}=0$ and $(\\Omega _u)_{i,j}=0$ for all $i\\ne j$ , hence $\\Omega _u=\\Omega _v=0$ .", "We conclude that $(\\dot{U},\\dot{\\sigma },\\dot{V})=0$ so $\\mathbf {L}_{(U,\\sigma ,V)}$ is injective and “1 $\\Rightarrow \\!$ 1” holds.", "By Proposition REF , “local $\\Rightarrow \\!$ local” holds there as well.", "All cases have been checked, so the first bullet in Proposition REF is proved." ], [ "Modified SVD lift:", "We now consider the modified SVD lift $\\mathrm {St}(m,r)\\times \\mathbb {S}^r\\times \\mathrm {St}(n,r)\\rightarrow \\mathbb {R}^{m \\times n}_{\\le r}$ defined by $\\varphi (U,M,V)=UMV^\\top $ .", "Fix $(U,M,V)\\in \\mathcal {M}$ such that $\\mathrm {rank}(M)=r$ and $\\lambda _k(M)+\\lambda _{\\ell }(M)=0$ for some $k<\\ell $ .", "We show “local $\\Rightarrow \\!$ local” does not hold at $(U,M,V)$ by constructing a sequence converging to $X=\\varphi (U,M,V)$ such that no subsequence of it can be lifted to a sequence converging to $(U,M,V)$ .", "Let $\\alpha _i\\in \\mathbb {R}^r$ be a sequence converging to zero such that $|\\lambda (M)+\\alpha _i|$ has distinct entries that are all nonzero.", "Let $M = W\\mathrm {diag}(\\lambda (M))W^\\top $ be an eigendecomposition of $M$ , where $W\\in \\mathrm {O}(r)$ .", "Define $U^{(k,\\ell )}=UWT^{(k,\\ell )}S^{(k,\\ell )}W^\\top \\in \\mathrm {St}(m,r),\\quad V^{(k,\\ell )}=VWT^{(k,\\ell )}W^\\top \\in \\mathrm {St}(n,r),$ where $T^{(k,\\ell )}\\in \\mathrm {O}(r)$ is the permutation interchanging the $k$ th and $\\ell $ th entries of a vector while fixing all the others, and $S^{(k,\\ell )}\\in \\mathrm {O}(r)$ flips the signs of these entries (so it's a diagonal matrix with all 1's on the diagonal except for the $k$ th and $\\ell $ th entries which are $-1$ ).", "Note that $T^{(k,\\ell )}$ and $S^{(k,\\ell )}$ are symmetric and commute.", "Let $X_i = \\varphi (U^{(k,\\ell )}, M + W\\mathrm {diag}(\\alpha _i)W^\\top , V^{(k,\\ell )}),$ whose singular values are $|\\lambda (M)+\\alpha _i|$ and are all distinct.", "Note that $X_i\\rightarrow X$ , because $\\lim _{i\\rightarrow \\infty } X_i &= U^{(k,\\ell )}M(V^{(k,\\ell )})^\\top = (UW)T^{(k,\\ell )}(S^{(k,\\ell )}\\mathrm {diag}(\\lambda (M)))(T^{(k,\\ell )})^\\top (VW)^\\top \\\\ &= (UW)\\mathrm {diag}(\\lambda (M))(VW)^\\top = UMV^\\top = X,$ where the first equality on the second line follows because $S^{(k,\\ell )}$ flips the signs of $\\lambda _k(M)$ and of $\\lambda _{\\ell }(M)=-\\lambda _k(M)$ , and conjugation by $T^{(k,\\ell )}$ interchanges them again.", "Suppose $(U_{i_j},M_{i_j},V_{i_j})$ is a lift of a subsequence of $(X_i)$ converging to $(U,M,V)$ .", "The singular values of $X_{i_j}$ are the entries $|\\lambda (M_{i_j})|$ , which are also the entries of $|\\lambda (M) + \\alpha _{i_j}|$ (possibly permuted).", "Combining this with $\\lambda (M_{i_j})\\rightarrow \\lambda (M)$ (since $M_{i_j}\\rightarrow M$ ), $\\alpha _{i_j}\\rightarrow 0$ , and the fact that $\\lambda (M)$ has no zero entries, we further get that the entries of $\\lambda (M_{i_j})$ are equal to the entries of $\\lambda (M)+\\alpha _{i_j}$ for all large $j$ .", "After passing to a subsequence, we may assume that this equality holds for all $j$ .", "Let $M_{i_j}=W_{i_j}\\mathrm {diag}(\\lambda (M)+\\alpha _{i_j})W_{i_j}^\\top $ be an eigendecomposition of $M_{i_j}$ .", "Since $\\mathrm {O}(r)$ is compact, we 
may also assume after further passing to a subsequence that $\\lim _{j\\rightarrow \\infty }W_{i_j} = \\widetilde{W}\\in \\mathrm {O}(r),$ exists.", "At this point, we have two SVDs of $X_{i_j}$ , namely $X_{i_j} &= [U^{(k,\\ell )}W\\mathrm {sign}(\\lambda (M) + \\alpha _{i_j})]\\mathrm {diag}(|\\lambda (M)+\\alpha _{i_j}|)[V^{(k,\\ell )}W]^\\top \\\\&= [U_{i_j}W_{i_j}\\mathrm {sign}(\\lambda (M) + \\alpha _{i_j})]\\mathrm {diag}(|\\lambda (M)+\\alpha _{i_j}|)[V_{i_j}W_{i_j}]^\\top .$ Because the singular values $|\\lambda (M)+\\alpha _{i_j}|$ of $X_{i_j}$ are distinct, its singular vectors are unique up to sign [48].", "Specifically, there exists $S_{i_j}\\in \\mathrm {diag}(\\lbrace \\pm 1\\rbrace ^r)$ satisfying $&U^{(k,\\ell )}W\\mathrm {sign}(\\lambda (M) + \\alpha _{i_j}) = U_{i_j}W_{i_j}\\mathrm {sign}(\\lambda (M) + \\alpha _{i_j})S_{i_j}\\quad \\textrm {implying}\\quad U^{(k,\\ell )}W = U_{i_j}W_{i_j}S_{i_j},\\\\&V^{(k,\\ell )}W=V_{i_j}W_{i_j}S_{i_j},$ where in the first line we used the fact that diagonal matrices commute.", "Because $S_{i_j}$ takes values in a finite set, after passing to a subsequence again we may assume $S_{i_j}=S$ is fixed.", "Then $U^{(k,\\ell )} = U_{i_j}W_{i_j}SW^\\top $ and $V^{(k,\\ell )} = V_{i_j}W_{i_j}SW^\\top $ for all $j$ , and taking $j\\rightarrow \\infty $ we conclude that $U^{(k,\\ell )} = U\\widetilde{W} SW^\\top $ and $V^{(k,\\ell )} = V\\widetilde{W}SW^\\top $ .", "Equating this to (REF ), we obtain $U\\widetilde{W}SW^\\top = UWT^{(k,\\ell )}S^{(k,\\ell )}W^\\top \\quad \\textrm {and}\\quad V\\widetilde{W}SW^\\top = VWT^{(k,\\ell )}W^\\top .$ Rearranging gives $W^\\top \\widetilde{W}S = T^{(k,\\ell )}S^{(k,\\ell )}$ and $W^\\top \\widetilde{W}S = T^{(k,\\ell )}$ so in fact, $T^{(k,\\ell )}S^{(k,\\ell )}=T^{(k,\\ell )}$ , a contradiction.", "Thus, no subsequence of $(X_i)$ can be lifted, so the lift does not satisfy “local $\\Rightarrow \\!$ local” at $(U,M,V)$ .", "By Proposition REF and the fact that $X\\in \\mathbb {R}^{m\\times n}_{=r}=\\mathcal {X}^{\\mathrm {smth}}$ , we conclude that “1 $\\Rightarrow \\!$ 1” is not satisfied at $(U,M,V)$ either.", "Now fix $(U,M,V)$ such that $\\lambda _k(M)=0$ for some $k$ , and let $X,W$ be as above.", "Since $r<\\min \\lbrace m,n\\rbrace $ , there exist unit vectors $u_k^{\\prime }\\in \\mathbb {R}^m$ and $v_k^{\\prime }\\in \\mathbb {R}^n$ such that $(UW)^\\top u_k^{\\prime }=0$ and $(VW)^\\top v_k^{\\prime }=0$ .", "Let $Y\\in \\mathrm {O}(m)$ and $Z\\in \\mathrm {O}(n)$ send the $k$ th columns of $UW$ and $VW$ to $u_k^{\\prime }$ and $v_k^{\\prime }$ , respectively, and act by the identity on their orthogonal complements.", "Let $\\alpha _i\\in \\mathbb {R}^r$ converge to zero such that $|\\lambda (M)+\\alpha _i|$ are distinct and nonzero.", "Define $X_i = \\varphi (YU,M + \\alpha _i,ZV)$ , which converge to $YUM(ZV)^\\top = X$ .", "Suppose $(U_{i_j},M_{i_j},V_{i_j})$ is a lift of a subsequence of $(X_i)$ converging to $(U,M,V)$ .", "Let $M_{i_j}=W_{i_j}\\mathrm {diag}(\\lambda (M) + \\alpha _i)W_{i_j}^\\top $ be an eigendecomposition.", "After passing to a subsequence, we may assume $W_{i_j}\\rightarrow \\widetilde{W}$ in $\\mathrm {O}(r)$ .", "Because the singular vectors of $X_{i_j}$ are unique up to sign, there exists $S_{i_j}\\in \\mathrm {diag}(\\lbrace \\pm 1\\rbrace ^r)$ satisfying $YU = U_{i_j}W_{i_j}S_{i_j}W^\\top ,$ and similarly for $ZV$ .", "Because $S_{i_j}$ takes values in a finite set, after passing to a subsequence again we may assume $S_{i_j} = S$ is fixed for all $j$ , in which 
case we get $YU = U_{i_j}W_{i_j}SW^\\top \\rightarrow U\\widetilde{W}SW^\\top .$ This is a contradiction since $\\mathrm {col}(YU)\\ne \\mathrm {col}(U) = \\mathrm {col}(U\\widetilde{W}SW^\\top )$ by construction of $Y$ .", "Thus, no subsequence of $(X_i)$ can be lifted so “local $\\Rightarrow \\!$ local” does not hold at such $(U,M,V)$ .", "Since $X\\mathcal {X}$ is not a linear space, Corollary REF shows that “1 $\\Rightarrow \\!$ 1” does not hold there either.", "Finally, fix $(U,M,V)\\in \\mathcal {M}$ such that $\\lambda _i(M)+\\lambda _j(M)\\ne 0$ for all $i,j$ and let $X=\\varphi (U,M,V)$ .", "We show that “1 $\\Rightarrow \\!$ 1” holds at $(U,M,V)$ by showing $\\operatorname{im}\\mathbf {L}_{(U,M,V)}=X\\mathcal {X}$ and appealing to Theorem REF .", "Note that ${(U,M,V)}\\mathcal {M}= U\\mathrm {St}(m,r)\\times \\mathbb {S}^r\\times V\\mathrm {St}(n,r)$ .", "Then (REF ) gives $\\mathbf {L}_{(U,M,V)}(\\dot{U},\\dot{M},\\dot{V}) = \\dot{U}MV^\\top + U\\dot{M}V^\\top + UM\\dot{V}^\\top .$ Writing $\\dot{U},\\dot{V}$ as in (REF ), we get similarly to (REF ) that $\\mathbf {L}_{(U,M,V)}(\\dot{U},\\dot{M},\\dot{V}) = \\begin{bmatrix} U & U_{\\perp }\\end{bmatrix}\\begin{bmatrix} \\Omega _uM - M\\Omega _v + \\dot{M} & MB_v^\\top \\\\ B_uM & 0 \\end{bmatrix}\\begin{bmatrix} V & V_{\\perp }\\end{bmatrix}^\\top .$ Since $\\mathrm {rank}(M)=r$ , we have $\\mathrm {col}(U)=\\mathrm {col}(X)$ and $\\mathrm {col}(V)=\\mathrm {col}(X^\\top )$ .", "By [43], we have $X\\mathbb {R}^{m \\times n}_{\\le r}= \\left\\lbrace \\dot{X}=\\begin{bmatrix} U & U_{\\perp }\\end{bmatrix}\\begin{bmatrix} \\dot{X}_1 & \\dot{X}_2 \\\\ \\dot{X}_3 & 0 \\end{bmatrix}\\begin{bmatrix} V & V_{\\perp }\\end{bmatrix}^\\top : \\dot{X}_1\\in \\mathbb {R}^{r\\times r}, \\dot{X}_2\\in \\mathbb {R}^{r\\times (n-r)}, \\dot{X}_3\\in \\mathbb {R}^{(m-r)\\times r}\\right\\rbrace .$ Since $\\lambda _i(M)+\\lambda _j(M)\\ne 0$ for all $i,j$ , for any $\\dot{X}_1\\in \\mathbb {R}^{r\\times r}$ we can pick $\\Omega \\in \\mathrm {Skew}(r)$ such that $\\Omega M + M\\Omega = \\mathrm {skew}(\\dot{X}_1)=\\frac{\\dot{X}_1-\\dot{X}_1^\\top }{2}$ .", "Indeed, if $M=W\\Lambda W^\\top $ is an eigendecomposition of $M$ , define $\\Omega $ by setting $(W^\\top \\Omega W)_{i,j} = \\frac{(W^\\top \\mathrm {skew}(\\dot{X}_1) W)_{i,j}}{\\lambda _i(M)+\\lambda _j(M)},$ which is clearly skew-symmetric and solves the above equation.", "We set $\\Omega _u=-\\Omega _v=\\Omega $ and $\\dot{M} = \\mathrm {sym}(\\dot{X}_1)=\\frac{\\dot{X}_1+\\dot{X}_1^\\top }{2}\\in \\mathbb {S}^r$ .", "Finally, we set $B_v = \\dot{X}_2^\\top M^{-\\top },\\quad B_u = \\dot{X}_3 M^{-1}.$ With these choices, we get $\\mathbf {L}_{(U,M,V)}(\\dot{U},\\dot{M},\\dot{V}) = \\dot{X}$ , showing that $\\mathbf {L}_{(U,M,V)}$ is surjective and “1 $\\Rightarrow \\!$ 1” holds.", "By Proposition REF , the lift satisfies “local $\\Rightarrow \\!$ local” at $(U,M,V)$ as well." 
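The key step in the surjectivity argument is the construction of a skew-symmetric $\\Omega $ solving $\\Omega M + M\\Omega = \\mathrm {skew}(\\dot{X}_1)$ when $\\lambda _i(M)+\\lambda _j(M)\\ne 0$ for all $i,j$ . The following short numerical sketch of that construction is added for illustration only; it assumes NumPy and uses an arbitrary small $r$ .

```python
import numpy as np

rng = np.random.default_rng(2)
r = 4

# A symmetric M with prescribed positive eigenvalues, so lambda_i + lambda_j != 0 for all i, j.
W0, _ = np.linalg.qr(rng.standard_normal((r, r)))
M = W0 @ np.diag([1.0, 2.0, 3.5, 5.0]) @ W0.T
lam, W = np.linalg.eigh(M)           # eigendecomposition M = W diag(lam) W^T

X1dot = rng.standard_normal((r, r))
S = 0.5 * (X1dot - X1dot.T)          # skew(X1dot)

# (W^T Omega W)_{ij} = (W^T S W)_{ij} / (lambda_i + lambda_j), then rotate back.
Omega = W @ ((W.T @ S @ W) / (lam[:, None] + lam[None, :])) @ W.T

print(np.allclose(Omega, -Omega.T))            # Omega is skew-symmetric
print(np.allclose(Omega @ M + M @ Omega, S))   # and solves Omega M + M Omega = skew(X1dot)
```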
], [ "Tensor lifts", "[Proof of Proposition REF ] Let $(y_1,\\ldots ,y_d)\\in \\mathcal {E}^{\\prime }$ .", "Note that if $y_i=0$ for some $i$ , then $\\varphi (y_1,\\ldots ,y_d)=0$ by the multilinearity of $\\varphi $ .", "Also, since $\\varphi $ is multilinear and defined on all of $\\mathcal {E}^{\\prime }$ , its differential in the ambient Euclidean space is $(y_1,\\ldots ,y_d)[\\dot{y}_1,\\ldots ,\\dot{y}_d] &= \\left.\\frac{d}{dt}\\right|_{t=0}\\varphi (y_1+t\\dot{y}_1,\\ldots , y_d+t\\dot{y}_d)\\\\ &= \\sum _{i=1}^d\\varphi (y_1,\\ldots ,y_{i-1},\\dot{y}_i,y_{i+1},\\ldots ,y_d).$ Similarly, $&2\\varphi (y_1,\\ldots ,y_d)[(\\dot{y}_1,\\ldots ,\\dot{y}_d),(\\dot{y}_1,\\ldots ,\\dot{y}_d)] = \\left.\\frac{d}{dt}\\right|_{t=0}(y_1+t\\dot{y}_1,\\ldots ,y_d+t\\dot{y}_d)[\\dot{y}_1,\\ldots ,\\dot{y}_d]\\\\ &= 2\\sum _{1\\le i<j\\le d}\\varphi (y_1,\\ldots ,y_{i-1},\\dot{y}_i,y_{i+1},\\ldots ,y_{j-1},\\dot{y}_j,y_{j+1},\\ldots ,y_d).$ In particular, if $y_i=0$ for at least three indices $i$ , then $(y_1,\\ldots ,y_d) = 2\\varphi (y_1,\\ldots ,y_d) = 0$ .", "Hence (REF ) gives $&\\mathbf {L}_{(y_1,\\ldots ,y_d)}=(y_1,\\ldots ,y_d)|_{{(y_1,\\ldots ,y_d)}\\mathcal {M}} = 0,\\\\&\\mathbf {Q}_{(y_1,\\ldots ,y_d)} = 2\\varphi (y_1,\\ldots ,y_d)|_{{(y_1,\\ldots ,y_d)}\\mathcal {M}} + (y_1,\\ldots ,y_d)[u] = 0,$ where $u\\in 2_{(y_1,\\ldots ,y_d),(\\dot{y}_1,\\ldots ,\\dot{y}_d)}\\mathcal {M}$ is arbitrary.", "This implies $(\\operatorname{im}\\mathbf {L}_{(y_1,\\ldots ,y_d)})^{\\perp }\\cap (\\mathbf {Q}_{(y_1,\\ldots ,y_d)}({(y_1,\\ldots ,y_d)}\\mathcal {M}))^* = \\mathcal {E}.$ The necessary condition for “2 $\\Rightarrow \\!$ 1” given by the last implication in Theorem REF is satisfied iff $(0\\mathcal {X})^*=\\mathcal {E}$ , or equivalently $0\\mathcal {X}=\\lbrace 0\\rbrace $ .", "This holds if and only if 0 is an isolated point of $\\mathcal {X}$ , since if $(x_i)\\subseteq \\mathcal {X}\\setminus \\lbrace 0\\rbrace $ is a sequence converging to 0, then after passing to a subsequence $(x_i/\\Vert x_i\\Vert )$ converges and gives a nonzero element of $0\\mathcal {X}$ .", "[Proof of Proposition REF ] As argued for Proposition REF , the multilinearity of $\\varphi $ implies that $&(\\lambda ,Y_1,\\ldots ,Y_d)[\\dot{\\lambda },\\dot{Y}_1,\\ldots ,\\dot{Y}_d] = \\sum _{i=1}^r\\dot{\\lambda }_i(Y_1)_{:,i}\\otimes \\ldots \\otimes (Y_d)_{:,i}\\\\ &\\quad + \\sum _{i=1}^r\\lambda _i\\sum _{j=1}^r(Y_1)_{:,i}\\otimes \\ldots \\otimes (Y_{j-1})_{:,i}\\otimes (\\dot{Y}_j)_{:,i}\\otimes (Y_{j+1})_{:,i}\\otimes \\ldots \\otimes (Y_d)_{:,i},$ and $&2\\varphi (\\lambda ,Y_1,\\ldots ,Y_d)[(\\dot{\\lambda },\\dot{Y}_1,\\ldots ,\\dot{Y}_d),(\\dot{\\lambda },\\dot{Y}_1,\\ldots ,\\dot{Y}_d)]= \\\\ &\\quad 2\\sum _{i=1}^r\\dot{\\lambda }_i\\sum _{j=1}^r(Y_1)_{:,i}\\otimes \\ldots \\otimes (Y_{j-1})_{:,i}\\otimes (\\dot{Y}_j)_{:,i}\\otimes (Y_{j+1})_{:,i}\\otimes \\ldots \\otimes (Y_d)_{:,i}\\\\ &\\qquad + 2\\sum _{i=1}^r\\lambda _i\\sum _{j<k}(Y_1)_{:,i}\\otimes \\ldots \\otimes (\\dot{Y}_j)_{:,i}\\otimes \\ldots \\otimes (\\dot{Y}_k)_{:,i}\\otimes \\ldots \\otimes (Y_d)_{:,i}.$ Pick $w_i\\in \\mathrm {col}(Y_i)^{\\perp }$ and let $W=w_1\\otimes \\cdots \\otimes w_d$ .", "Then $\\langle W,(\\lambda ,Y_1,\\ldots ,Y_d)[\\dot{\\lambda },\\dot{Y}_1,\\ldots ,\\dot{Y}_d]\\rangle &= \\sum _{i=1}^r\\dot{\\lambda }_i\\prod _{j=1}^d\\langle (Y_j)_{:,i},w_j\\rangle \\\\ &\\ + \\sum _{i=1}^r\\lambda _i\\sum _{j=1}^r\\langle (\\dot{Y}_j)_{:,i},w_j\\rangle \\prod _{k\\ne j}\\langle (Y_k)_{:,i},w_k\\rangle = 0,$ for all 
$(\\dot{Y}_1,\\ldots ,\\dot{Y}_d)$ as long as $d\\ge 2$ , and similarly $&\\langle W,2\\varphi (\\lambda ,Y_1,\\ldots ,Y_d)[(\\dot{\\lambda },\\dot{Y}_1,\\ldots ,\\dot{Y}_d),(\\dot{\\lambda },\\dot{Y}_1,\\ldots ,\\dot{Y}_d)]\\rangle = 2\\sum _{i=1}^r\\dot{\\lambda }_i\\sum _{j=1}^r\\langle (\\dot{Y}_j)_{:,i},w_j\\rangle \\prod _{k\\ne j}\\langle (Y_k)_{:,i},w_k\\rangle \\\\ &\\quad + 2\\sum _{i=1}^r\\lambda _i\\sum _{j<k}\\langle (\\dot{Y}_j)_{:,i},w_j\\rangle \\langle (\\dot{Y}_k)_{:,i},w_k\\rangle \\prod _{\\ell \\ne j,k}\\langle (Y_{\\ell })_{:,i},w_{\\ell }\\rangle = 0,$ for all $(\\dot{Y}_1,\\ldots ,\\dot{Y}_d)$ if $d\\ge 3$ or $d=2$ and $\\lambda =0$ .", "More generally, if $W\\in \\mathrm {col}(Y_1)^{\\perp }\\otimes \\ldots \\otimes \\mathrm {col}(Y_d)^{\\perp }$ then $W=\\sum _{\\ell =1}^Lw_{1,\\ell }\\otimes \\ldots \\otimes w_{d,\\ell }$ for $w_{i,\\ell }\\in \\mathrm {col}(Y_i)^{\\perp }$ , hence the above two equalities apply to such $W$ as well.", "Since $\\mathbf {L}_{(\\lambda ,Y_1,\\ldots , Y_d)}$ is the restriction of $(\\lambda ,Y_1,\\ldots ,Y_d)$ to ${(\\lambda ,Y_1,\\ldots ,Y_d)}\\mathcal {M}$ and $\\mathbf {Q}_{(\\lambda ,Y_1,\\ldots ,Y_d)}$ is given by (REF ), we conclude that $\\mathrm {col}(Y_1)^{\\perp }\\otimes \\ldots \\otimes \\mathrm {col}(Y_d)^{\\perp }\\subseteq (\\operatorname{im}\\mathbf {L}_{(\\lambda ,Y_1,\\ldots ,Y_d)})^{\\perp }\\cap (\\mathbf {Q}_{(\\lambda ,Y_1,\\ldots ,Y_d)}({(\\lambda ,Y_1,\\ldots ,Y_d)}\\mathcal {M}))^*,$ if either $d\\ge 3$ or $d=2$ and $\\lambda =0$ .", "Thus, if $\\mathrm {col}(Y_1)^{\\perp }\\otimes \\ldots \\otimes \\mathrm {col}(Y_d)^{\\perp }\\lnot \\subseteq (X\\mathcal {X})^*$ then the necessary condition for “2 $\\Rightarrow \\!$ 1” from Theorem REF does not hold.", "[Proof of Corollary REF ] Let $X = \\varphi (\\lambda ,Y_1,\\ldots ,Y_d)$ .", "View $\\mathcal {X}$ as a subset of the linear space of symmetric tensors.", "Note that $\\mathcal {M}$ is diffeomorphic to $\\mathcal {N} = \\left\\lbrace (\\lambda ,Y_1,\\ldots ,Y_d)\\in \\mathbb {R}^r\\times (\\mathbb {R}^{n\\times r})^d:Y_1=Y_2=\\ldots =Y_d\\right\\rbrace ,$ via $\\psi (\\lambda ,y_1,\\ldots ,y_r)=(\\lambda ,Y,\\ldots ,Y)$ where $Y=\\begin{bmatrix} y_1, & \\ldots , & y_r\\end{bmatrix}$ .", "By Proposition REF , our lift satisfies “2 $\\Rightarrow \\!$ 1” at $(\\lambda ,y_1,\\ldots ,y_r)$ iff the composed lift $\\varphi \\circ \\psi ^{-1}$ satisfies “2 $\\Rightarrow \\!$ 1” at $\\psi (\\lambda ,y_1,\\ldots ,y_r)$ .", "If $\\textrm {sym-rank}(X)<r$ , we claim that $(X\\mathcal {X})^*=\\lbrace 0\\rbrace $ .", "Indeed, for any rank-1 symmetric tensor $v^{\\otimes d}$ , we have $X+tv^{\\otimes d}\\in \\mathcal {X}$ for all $t\\in \\mathbb {R}$ , so $\\pm v^{\\otimes d}\\in X\\mathcal {X}$ .", "Since rank-1 symmetric tensors span the space of all symmetric tensors, the claim follows.", "If $\\mathrm {span}\\lbrace y_1,\\ldots ,y_r\\rbrace \\ne \\mathbb {R}^n$ , then $\\mathrm {col}(Y_1)^{\\perp }\\otimes \\ldots \\otimes \\mathrm {col}(Y_d)^{\\perp } = (\\mathrm {span}\\lbrace y_1,\\ldots ,y_r\\rbrace ^{\\perp })^{\\otimes d}\\ne 0$ .", "The result follows from Proposition REF .", "Note that $\\mathcal {M}$ is diffeomorphic to $\\mathcal {N} = \\left\\lbrace (\\lambda ,Y_1,\\ldots ,Y_d)\\in \\mathbb {R}^r\\times (\\mathbb {R}^{n\\times r})^d:Y_1=Y_2=\\ldots =Y_d,\\ Y_1^\\top Y_1=I_r\\right\\rbrace ,$ with the same diffeomorphism as in part (a).", "As in (a), it suffices to consider the composed lift mapping $\\mathcal {N}\\rightarrow \\mathcal {X}$ .", "If $r<n$ , 
let $u\\in \\mathrm {span}\\lbrace y_1,\\ldots ,y_r\\rbrace ^{\\perp }$ have unit norm.", "Then $u^{\\otimes d}\\in (\\mathrm {span}\\lbrace y_1,\\ldots ,y_r\\rbrace ^{\\perp })^{\\otimes d} = \\mathrm {col}(Y_1)^{\\perp }\\otimes \\ldots \\otimes \\mathrm {col}(Y_d)^{\\perp }$ .", "However, $u^{\\otimes d}\\notin (X\\mathcal {X})^*$ if $\\lambda _i=0$ for some $i$ .", "Indeed, in that case we have $X-tu^{\\otimes d}\\in \\mathcal {X}$ for all $t\\ge 0$ , so $-u^{\\otimes d}\\in X\\mathcal {X}$ and $\\langle u^{\\otimes d},-u^{\\otimes d}\\rangle =-1$ .", "Thus, the result follows from Proposition REF .", "If $\\textrm {CP-rank}(X)<r$ then $(X\\mathcal {X})^*=\\lbrace 0\\rbrace $ since $X\\mathcal {X}$ includes all rank-1 tensors, by the same argument as in (a).", "Also, $\\mathrm {col}(Y_j)^{\\perp }\\ne \\lbrace 0\\rbrace $ for all $j$ iff $\\mathrm {col}(Y_1)^{\\perp }\\otimes \\ldots \\otimes \\mathrm {col}(Y_d)^{\\perp }\\ne \\lbrace 0\\rbrace $ .", "Thus, under the stated hypotheses we have $\\mathrm {col}(Y_1)^{\\perp }\\otimes \\ldots \\otimes \\mathrm {col}(Y_d)^{\\perp }\\lnot \\subseteq (X\\mathcal {X})^*$ , so the result follows from Proposition REF ." ] ]
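The proofs above repeatedly use the fact that $W=w_1\\otimes \\cdots \\otimes w_d$ with $w_j\\perp \\mathrm {col}(Y_j)$ annihilates the image of the first derivative of the lift. The snippet below is an illustrative numerical check of this orthogonality for the CP lift with $d=3$ (our addition, assuming NumPy; the dimensions are arbitrary small choices).

```python
import numpy as np

rng = np.random.default_rng(3)
n, r, d = 4, 2, 3

lam = rng.standard_normal(r)
Ys = [rng.standard_normal((n, r)) for _ in range(d)]
dlam = rng.standard_normal(r)
dYs = [rng.standard_normal((n, r)) for _ in range(d)]

def cp_sum(coeffs, cols):
    # sum_i coeffs_i * cols[0][:, i] (x) cols[1][:, i] (x) cols[2][:, i]
    return np.einsum('i,ai,bi,ci->abc', coeffs, *cols)

# First derivative of the lift along (dlam, dY1, dY2, dY3), term by term as in the displayed formula.
dT = cp_sum(dlam, Ys)
for j in range(d):
    dT += cp_sum(lam, [dYs[k] if k == j else Ys[k] for k in range(d)])

# w_j orthogonal to col(Y_j), and W = w_1 (x) w_2 (x) w_3.
ws = []
for Y in Ys:
    Q, _ = np.linalg.qr(Y)
    w = rng.standard_normal(n)
    w -= Q @ (Q.T @ w)
    ws.append(w / np.linalg.norm(w))
W = np.einsum('a,b,c->abc', *ws)

print(abs(np.sum(W * dT)))   # ~ 0 (machine precision): W annihilates the first derivative of the lift
```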
2207.03512
[ [ "CausalAgents: A Robustness Benchmark for Motion Forecasting using Causal\n Relationships" ], [ "Abstract As machine learning models become increasingly prevalent in motion forecasting for autonomous vehicles (AVs), it is critical to ensure that model predictions are safe and reliable.", "However, exhaustively collecting and labeling the data necessary to fully test the long tail of rare and challenging scenarios is difficult and expensive.", "In this work, we construct a new benchmark for evaluating and improving model robustness by applying perturbations to existing data.", "Specifically, we conduct an extensive labeling effort to identify causal agents, or agents whose presence influences human drivers' behavior in any format, in the Waymo Open Motion Dataset (WOMD), and we use these labels to perturb the data by deleting non-causal agents from the scene.", "We evaluate a diverse set of state-of-the-art deep-learning model architectures on our proposed benchmark and find that all models exhibit large shifts under even non-causal perturbation: we observe a 25-38% relative change in minADE as compared to the original.", "We also investigate techniques to improve model robustness, including increasing the training dataset size and using targeted data augmentations that randomly drop non-causal agents throughout training.", "Finally, we release the causal agent labels (at https://github.com/google-research/causal-agents) as an additional attribute to WOMD and the robustness benchmarks to aid the community in building more reliable and safe deep-learning models for motion forecasting." ], [ "Introduction", "Machine learning models have been increasingly adopted in trajectory prediction and motion planning tasks for autonomous vehicles (AVs) [5][6][7][10][37][29][20], [16][30][38][23][18], [21].", "To safely deploy such models, they must have reliable, robust predictions across a diverse range of scenarios and environments.", "In particular, models must not learn or memorize patterns in the data that do not generalize to different environments.", "For example, parked cars separated by a barrier from the roadway should not affect a model's predictions for the AV.", "Figure: Trajectory prediction is sensitive to removing non-causal agents.", "We show a top-down visualization of a scene from the WOMD (left) and a perturbed version of the scene where we delete all non-causal agents (right).", "The AV and its predicted trajectories via the Scene Transformer model are shown in blue, the ground truth trajectory of the AV is grey, and the ground truth of other agents is green.The perturbation causes a large shift in minADE because the model fails to predict the ground truth mode (a right turn), which indicates the brittleness of the model to such perturbation, i.e., the model needs learn better about the causal relationships among agents.Ideally, if we could train the model on the full set of situations it needed to generalize to, it would be possible for the model to learn from the data that certain features are not reliable.", "However, this is not feasible in practice since collecting and labeling the required data is expensive, and there is a long tail of rare and difficult scenarios [22].", "In this work, we propose perturbing existing data via agent deletions to evaluate and improve model robustness to spurious features.", "In order to be useful for evaluating the robustness of models in our setting, the perturbations must preserve the correct labels and not change the ground truth 
trajectory of the predicted agent (i.e., the AV).", "Since generating such perturbations requires causal reasoning, we propose using human labelers to identify which agents in the scene are non-causal.", "Specifically, we define a non-causal agent as an agent whose deletion does not cause the ground truth trajectory of a given target agent to change.", "We then construct a robustness evaluation dataset that consists of perturbed examples where we remove all non-causal agents from each scene, reusing the ground truth trajectory associated with the original examples.", "In order to more broadly understand model behavior under different perturbations, we also consider alternative ways to perturb the data, such as removing causal agents, removing a subset of non-causal agents, or removing static (i.e. stationary) agents.", "Using our perturbed datasets, we then conduct an extensive experimental study exploring how factors such as model architecture, dataset size, and data augmentation affect model sensitivity to these perturbations.", "We also propose two robustness metrics to quantify the model sensitivity, with one capturing the per-example absolute minADE changes, and another directly reflecting how the model outputs change under perturbation via IoU (intersection-over-union) without referring to the ground truth trajectory.", "The second metric helps to address the issue that the ground truth trajectory is just a sample from a distribution of many possible correct trajectories.", "We also visualize scenes with large minADE changes to understand how the model performance degrades under perturbations.", "Our results show that existing motion forecasting models are sensitive to deleting non-causal agents and can have pathological behavior dependencies on faraway or distant agents.", "For example, Figure REF illustrates an original (left) and a perturbed (right) scene with non-causal agents removed.", "It shows that the model's prediction misses the right-turn mode (which is exactly the ground-truth trajectory) under the perturbation, which indicates the brittleness of the model to such perturbations, i.e., the model needs to learn better about the causal relationships among agents.", "Such brittleness could lead to serious consequences in autonomous driving systems if we rely on deep-learning models without further safety assurance from other techniques such as optimization and robotics algorithms.", "The main contributions of our work are as follows: We contribute a new robustness benchmark for the WOMD for evaluating trajectory prediction models' sensitivity to spurious correlations.", "We will release the causal agent labels from human labelers as additional attributes to WOMD so that researchers can utilize the causal relationships between the agents for other works such as agent relevance or ranking [28], [36].", "We introduce several metrics to quantify the robustness of motion forecasting models to perturbations, including $\\text{Abs}(\\Delta _{\\text{minADE}})$ and two trajectory set metrics that measure differences between the original and perturbed trajectories without using the ground truth as a reference.", "We evaluate the robustness of several state-of-the-art motion forecasting models, including the Multipath++ [37], Pathformer (a transformer-based architecture that is concurrent work and performs competitively on various benchmarks; we describe more details in Appendix ), and SceneTransformer [24] models, using these metrics.", "We show that the absolute average change in minADE can range from 0.07-0.23 m (a significant $25-38\\%$ change relative to the original minADE).", "We find that all models are sensitive
to deleting non-causal agents, and the model with the best overall performance (in terms of regular metrics used to quantify the trajectory prediction performance such as minADE) is not necessarily the most robust.", "We show that increasing training dataset size and targeted data augmentations that remove non-causal agents can help improve model robustness.", "Overall, this is the first work focusing on the robustness of trajectory prediction models to non-causal factors based on human labels.", "Such robustness is critical for models deployed in a self-driving car where the reliability and safety requirements are of utmost importance.", "Ultimately, our goal is to provide a robustness benchmark which can aid the community to better evaluate model reliability, detect possible spurious correlations in deep-learning-based trajectory prediction models, and facilitate the development of more robust models or other mitigation techniques such as optimization and traditional robotic algorithms as complementary solutions to minimize safety risks." ], [ "Related work", "Robustness evaluation on perturbations.", "Machine learning models are well known to have brittle predictions under distribution shift, and across multiple domains, researchers have proposed robustness evaluation protocols that move beyond a fixed test set [27], [4], [34], [15], [13], [32], [35].", "Such efforts can be broadly categorized into three types of evaluation: (i) slicing, i.e.", "existing test data is sliced over multiple dimensions, (ii) perturbations, i.e.", "existing test data is modified via transformations, or (iii) dataset shift, i.e.", "new test data is drawn from a different distribution, .", "Our work focuses on perturbations, which are common in both computer vision and NLP.", "In computer vision, researchers perturb images via pixel level noise corruptions [12], [15] , spatial transformations [9], [11] , and adversarial modifications [3], [34].", "Such synthetic shifts are easy to apply to arbitrary images, but limited in that they do not test model invariance to more complex modifications such as deleting or modifying irrelevant parts of the image.", "In trajectory prediction, perturbations are potentially more valuable, since the models train on discrete inputs, namely, the agents and the roadgraph; because of the structure of the problem, it is easier to reliably construct perturbations that do not modify the ground truth labels.", "This situation mirrors that of NLP, where sentences composed of discrete words can be modified in ways that do not change the prediction task, and indeed, such transformations have proven valuable for testing the robustness of models and identifying possible biases [8].", "Robustness evaluation for trajectory prediction.", "The three types of robustness evaluation (slicing, perturbations, and dataset shift) described above also characterize the trajectory prediction literature.", "Slicing.", "The most common approach in the literature is to slice model performance along different hyperparameters and buckets, such as duration of the historical trajectories [25], size of the training data [17], [24], sampling frequency [2], number of agents in the scene [29], [24], criticality / interactivity of the scenarios [19], [10], and speed of the AV [24].", "Perturbations.", "Another thread of related work focuses on the robustness of the algorithms to perturbations in both training and test data.", "For example, [2] introduced synthetic sensor noises into both the training and test 
process to evaluate the model's accuracy against sensor noise.", "[14] introduced 30% anomalies into the training data (with extra labels), and evaluated the robustness of the algorithm to anomalies in the training process.", "Dataset shift.", "Less work has focused on dataset shift due to the difficulty of collecting, annotating, and releasing entirely new data.", "Examples include training and testing in different locations or routes [31], [33], weather, time of day, and sensor noise [33].", "Unlike prior work, we evaluate trajectory prediction models trained on the original dataset on “non-causal” domain shifts instead of hard domain shifts (such as weather or locations), since we manually transform our test set by leveraging the causal relationships among agents in the scene.", "Because these non-causal perturbations are closer to the original validation dataset than the hard domain shifts, the discrepancies we observe are in some ways a more immediate priority for improving model robustness.", "We focus on evaluating models' sensitivity to agent interactions because that is a complicated component of trajectory prediction and is crucially important for safety.", "Agent relevance for autonomous driving.", "Since trajectory prediction in autonomous driving systems must reason about other agents in the scene, researchers have attempted to efficiently rank agents according to their impact on the AV.", "The main motivation of this line of work is to determine which agents to allocate computational resources to for processing in real time.", "In particular, [1] proposed a driver's saliency prediction model which incorporates an attention mechanism to understand salient features for driving context.", "[28] approximated an agent's influence by looking at the difference between two plans (computed by a planner running in simulation) when a given agent is accounted for versus not.", "However, removing one agent at a time does not account for certain situations where multiple agents may be influencing the car in the same way; e.g., if two pedestrians are blocking the path of the car, removing one of the pedestrians has no influence on the car.", "In similar work, [36] quantifies interactivity using a deep learning model, which can suffer from the same robustness issues.", "More generally, algorithmically defined importance/relevance or interaction scores can be unreliable, especially in scenes with complex interactions between the AV and surrounding agents.", "In this work, we use human labeling to decide which agents are important, and our motivation is to use these labels to test model robustness.", "In the future, our causal agent labels can be used to verify algorithmic definitions of agent importance or relevance.", "Causal reasoning in autonomous driving.", "In a similar line of work to ours, [26] collect causal annotations using human labelers to help explain the reason behind human driver behavior for the Honda Research Institute Driving Dataset.", "Example causal annotations include stopped cars, congestion, traffic signals, or pedestrians.", "They then slice the performance of an object detector over scenes with different causal attributes.", "Our work also explores causal relationships in the AV setting, but we provide per-agent level causality and we further use the causal labels to evaluate model robustness for trajectory prediction."
], [ "Labeling causal agents in WOMD", "The objective of the labeling task is to identify all agents — cars, cyclists, or pedestrians — that are causal to the AV at any time during a driving segment.", "Although we are more interested in removing non-causal agents from each scene, we ask labelers to identify causal agents since there are typically fewer of them and they tend to be closer to the AV, making them easier for labelers to identify.", "Data.", "We focus on labeling the WOMD validation data because our primary goal is to evaluate the robustness of models trained on the original dataset.", "Each example is 9.1 seconds in length (91 steps at 10Hz) and is generated in overlapping windows from a 20-second segment of data.", "We label the 20-second segments of data to give labelers access to a longer time horizon and to not waste resources on labeling overlapping scenes.", "Moreover, both the regular and interactive WOMD validation sets are generated from the same 20-second segments of data, hence, our causal labels can be used for both datasets.", "Figure: Camera images from a randomly chosen scene in the labeling UI.", "The causal agents identified by the human labelers are circled in red.Labeling policy.", "Causality is an inherently subjective label since human drivers may vary in their feeling of which agents in the scene affect their decisions.", "Therefore, we want to be overly conservative and identify as many causal agents as possible to maximize the likelihood that removed agents are actually non-causal.", "If human labelers are unsure if an agent is causal or not, we instruct them to please err on the side of including it.", "We emphasize that false positives (identifying an agent as causal when it is truly non-causal) are acceptable to a certain extent, but we should avoid false negatives (failing to identify a truly causal agent).", "(Appendix includes the exact instructions given to labelers.)", "That said, in ambiguous situations, we did not expect labelers to reason about chained causality relationships.", "For example, if the AV is driving behind a series of 5 cars and the lead car were to brake, it could eventually cause the car in front of the AV to brake, but in this situation we would only expect the labeler to identify the car directly in front as causal.", "Labeling UI.", "The labeling UI is a web-based 3D view of the AV and its surroundings in the 20-second segmented videos.", "Labelers can scroll back and forth in the video to decide whether an agent is causal.", "The UI provides either a road view, which shows both static and dynamic map features as well as the 3D bounding boxes of all agents in the scene, or a camera view, which has RGB images from each camera on the car.", "Figure REF shows the camera images from a randomly chosen scene overlaid with the causal annotations provided by the human labelers.", "Labelers are asked to identify all agents that are causal to the AV at any point during the video length and record the IDs of the causal agents.", "Human annotations.", "To maximize coverage and avoid false negatives, each scene is annotated by 5 human labelers and we designate causal agents as all agents that any labeler identified as causal.", "Appendix shows the distribution over causal agents for the number of human labelers who selected the agent as causal.", "The majority of causal agents are selected by all 5 labelers, but a significant portion (24%) are selected by only 1 labeler." 
], [ "Causal agent statistics", "To understand the properties of causal agents, we compute several statistics of causal agents in the WOMD validation dataset, including the percentage of causal agents (subfig:causalagentfrequency), the distribution of the relative distance between the AV and the causal agents versus all surrounding agents (subfig:distancefromsdc), and the breakdown of causal versus all surrounding agents by agent type (vehicle, pedestrian, or cyclist) (subfig:agentbreakdown).", "Figure: Causal agent statistics.", "Causal agents are less frequent than non-causal agents (on average 13% of agents are causal), and, compared to typical agents, they tend to be closer to the AV.", "Cyclists are relatively more likely to be causal agents than pedestrians or vehicles.subfig:causalagentfrequency shows that the majority of agents are actually non-causal: on average, only 13% of the total agents in the scene are labeled as causal, and 93% of scenes have less than 30% of the agents labeled as causal.", "Figure REF shows that causal agents are typically closer to the AV than non-causal agents; causal agents are an average distance of 28.4m from the AV, compared to an average of 49.4m over all agents.", "subfig:agentbreakdown shows the likelihood that an agent of a given type (Vehicle, Ped, or Cyclist) is causal.", "Surprisingly, cyclists are more likely to be causal agents than any other agents, and vehicles are more likely to be causal than pedestrians.", "We hypothesize that this is because cyclists usually share the road with the AV and have a strong prior of not respecting road boundaries like a car, whereas there are many parked cars that are not necessarily interacting with the AV and similarly pedestrians can be off the road on sidewalks." ], [ "Perturbed datasets", "In this work, we consider perturbations that modify the scene by deleting agents.", "While it is possible to create more complex perturbations, such as adding noise to the xyz position of the agents, we start with deletion since it directly reflects the models' robustness regarding the causal relationships of agents in the scene.", "Object track states in the WOMD consist of the object's states (e.g., 3D center point, velocity vector, heading), as well as a valid flag to indicate which time steps have valid measurements.", "To delete an agent from the scene, we set its valid mask to false throughout all time steps (and we double check for each model implementation that all agent state is ignored if the valid bit is false).", "We consider four different perturbations: RemoveNoncausal: Removes all non-causal agents in the dataset.", "RemoveNoncausalEqual: Removes an equal number of randomly selected non-causal agents as there are causal agents in the scene For example, if a scene has 5 causal agents, we randomly remove 5 non-causal agents.. 
RemoveNoncausalEqual is meant to be a less aggressive form of RemoveNoncausal since it deletes fewer agents, and it allows us to compare to RemoveCausal when controlling for the number of agents deleted.", "RemoveStatic: Removes agents whose xyz positions do not change above a certain threshold (e.g., parked cars); we use a threshold of 0.1 m on the L2 distance of the agent's xyz state through the 9.1 s (the duration of the example) to account for noise in the sensors.", "Not all static agents are non-causal.", "RemoveCausal: Removes all causal agents in the dataset, i.e., the complement of RemoveNoncausal.", "Among them, we categorize both RemoveNoncausal and RemoveNoncausalEqual as “non-causal” perturbations.", "Specifically, to define non-causal perturbations, let us assume $X$ is a scenario representation, $Y$ is the ground truth trajectory of the AV, and $f$ is the ground-truth model that gives the relationship between $X$ and $Y$ .", "If a perturbation $\Delta X$ satisfies $f(X+\Delta X) = f(X) = Y$ , we define it as a non-causal perturbation since it does not impact the relationship between $X$ and $Y$ .", "We define a deep learning model $\hat{f}$ to be robust to non-causal perturbations if $\hat{f}(X+\Delta X) = \hat{f}(X) = \hat{Y}$ for all non-causal $\Delta X$ , where $\hat{Y}$ is the predicted trajectory from the model.", "We consider RemoveStatic as an important baseline that does not require human labels.", "We can thus apply it to the training dataset, which we explore in subsec:heuristicaug.", "Finally, we include the RemoveCausal perturbation as a sanity check to ensure models are sensitive to deleting causal agents." ], [ "Evaluation", "Since we only have camera and LiDAR data from the perspective of the AV, we only collect causal labels with respect to it.", "As a result, we can only evaluate model predictions for the AV trajectory.", "We report the averaged minADE (following the definition from [10]) over 3, 5, and 8 seconds on both the original and perturbed datasets.", "In all instances, we use the top 6 trajectories for each model (K=6).", "Finally, we only evaluate on the 38k examples that were annotated by human labelers.", "Robustness Metrics.", "Since we found in our results that the perturbed minADE often improves for a large fraction of the examples, averaging over examples cancels out some of the effects we would like to measure.", "Thus, we introduce a robustness metric to measure the absolute change in minADE on a per-example level, which we define as $\text{Abs}(\Delta ) = \frac{1}{n}\sum _{i=1}^n |\text{perturbed\_minADE}(i) - \text{original\_minADE}(i) |$ We report Abs($\Delta $ ), the standard deviation of Abs($\Delta $ ), and the relative percentage change in Abs($\Delta $ ) with respect to the original minADE.", "Finally, since the ground truth may represent only one of several correct ways to drive, in sec:trajectorysetmetrics we also consider pairwise differences between the original and perturbed predictions to measure model sensitivity.", "Table: We evaluate on a diverse set of models.", "Models.", "We select three representative deep learning models for evaluation: MultiPath++ [37], Scene Transformer [24], and the Pathformer model.", "Importantly, we only consider non-ensembled models (MultiPath++ reports ensemble results in their paper and on the WOMD leaderboard).", "tab:models reviews the architectural differences and parameter counts of the models.", "Since we only evaluate on the AV, we
typically only train the models to predict the AV, but for MultiPath++ and SceneTransformer we also train models on all agents (which we indicate by appending –All to the model name).", "Additionally, for the SceneTransformer–All model, we include both the marginal and joint models (these models are the same when training on only the AV)." ], [ "Model sensitivity to non-causal perturbations", "In order to understand model sensitivity on a per-example level, fig:perexamplescatterplots plots the perturbed minADE versus the original minADE for each of the perturbations for the MultiPath++ model (Appendix shows the same plot for other models).", "For each perturbation type, we observe that the majority of examples show minimal change (i.e. are clustered around the $y{=}x$ axis), but there is a long tail of outlier examples that experience a large change (${>}1$ m).", "Among perturbation types, the model is most sensitive to RemoveCausal, which is expected, since removing causal agents may change the correct ground-truth trajectory.", "However, the model is not sufficiently robust to RemoveNoncausal.", "We also see that the model is significantly more robust to RemoveNoncausalEqual, which removes fewer agents, implying that removing more agents increases model sensitivity.", "Meanwhile, when comparing RemoveCausal and RemoveNoncausalEqual, which controls for the number of agents removed, we see that the model is significantly more sensitive to removing causal agents than removing non-causal agents.", "Surprisingly, across all perturbation types, including RemoveCausal, the model sees a large portion of examples where minADE actually improves: 42.7% of examples show an improvement in minADE under the RemoveCausal perturbation, 43.0% for RemoveNoncausal, 49.6% for RemoveNoncausalEqual, and 51.0% for RemoveStatic.", "This finding is counter-intuitive and motivates us to measure model sensitivity in terms of Abs($\Delta $ ), as defined in eq:robustnessmetricdef.", "Across all models, the average Abs($\Delta $ ) is 0.145 for RemoveCausal, 0.131 for RemoveNoncausal, 0.051 for RemoveNoncausalEqual, and 0.089 for RemoveStatic.", "Appendix includes the Abs($\Delta $ ) for each individual model and perturbation type.", "Comparing models.", "Focusing on the RemoveNoncausal perturbation, in tab:noncausal, we evaluate each model architecture and report the original minADE, perturbed minADE, Abs($\Delta $ ), the standard deviation of Abs($\Delta $ ), and $\frac{\text{Abs}(\Delta )}{\text{minADE}_{\text{Ori}}}$ (see Appendix for other perturbations).", "The SceneTransformer Marginal model shows the lowest average absolute sensitivity to the perturbation, while the MultiPath++–All model shows the lowest sensitivity relative to original minADE.", "In general, Abs($\Delta $ ) decreases with the original minADE, but there is no clear relationship between relative Abs($\Delta $ ) and minADE.", "Unexpectedly, the marginal SceneTransformer is more robust than the joint (we hypothesize that jointly modeling agents in the scene causes the model to pay more attention to non-causal agents and thus leads to worse robustness).", "In Appendix , we report aggregate results for minFDE, overlap rate, miss rate, and mAP.", "Table: Model sensitivity to the RemoveNoncausal perturbation.", "Original and Perturbed are the average minADE across the whole dataset.", "Abs($\Delta $ ) is the average absolute difference between perturbed and original minADE computed per-example.", "The SceneTransformer Marginal model shows the
lowest average absolute sensitivity to the perturbation, while the MultiPath++–All model shows the lowest sensitivity relative to original minADE.", "Slicing the robustness metric.", "We further slice the robustness of the models (Abs($\Delta $ )) along several dimensions: the AV's current speed, the percentage of removed non-causal agents (the number of removed non-causal agents divided by the number of all context agents), and the minimum distance from the AV to removed non-causal agents.", "The full results are given in fig:slicingresults in apx:examplevisualization.", "We see that, across all models, model sensitivity increases when we drop a larger fraction of non-causal agents and when the speed of the AV is greater.", "We also see that model sensitivity typically decreases when we drop agents that are farther away from the AV, though the SceneTransformer models have much noisier robustness measurements when dropping faraway agents.", "Visualizing examples.", "We also visualize some examples with the largest output changes under the RemoveNoncausal perturbation in apx:examplevisualization." ], [ "Sensitivity via an IoU-based trajectory set metric", " To directly measure the magnitude of model output changes with and without perturbation, in this section we introduce a simple IoU (intersection-over-union) based metric to compare the sensitivity across models to different perturbations.", "The IoU-based metric.", "The IoU-based trajectory metric is computed as follows: given two predicted trajectory sets (with and without perturbation), we first upsample all predicted trajectories (6 of them in each set) to 100Hz, and then voxelize them into a 2D top-down grid with a resolution of 0.5 meters.", "We then count the number of voxels both sets occupy, divided by the total number of voxels either output set occupies.", "To simplify computation, we explicitly ignore the probabilities and speeds of trajectories.", "This measure quantifies "how geometrically different the trajectories look".", "An IoU of 1 means the trajectories did not meaningfully change, and an IoU of 0 means the trajectories do not overlap at all.", "While more complicated versions of this metric could be computed (e.g. earth mover's distance), we found this metric intuitive and useful for finding interesting shifts due to perturbation.", "Figure: Density distribution of the per-scene trajectory set IoU values for AV-only models under perturbations (RemoveCausal, RemoveNoncausal, and RemoveNoncausalEqual): models are least sensitive to RemoveNoncausalEqual, and more sensitive to RemoveCausal and RemoveNoncausal.", "Results.", "The results of three AV-only models under the RemoveCausal, RemoveNoncausal, and RemoveNoncausalEqual perturbations are shown in fig:iouresults.", "We find that models are least sensitive to RemoveNoncausalEqual, and much more sensitive to RemoveCausal and RemoveNoncausal.", "This is consistent with our finding in sec:mainresults, indicating the model is more sensitive to large perturbations since there are more non-causal agents than causal ones in most examples."
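For reference, the two robustness measures used above can be summarized in a short numpy sketch: the per-example Abs($\Delta$) of minADE from the Evaluation section, and the voxelized trajectory-set IoU. The interpolation to 100Hz and the 0.5 m grid follow the description in the text, while the array shapes and helper names are our own simplifications rather than the exact evaluation code.

```python
import numpy as np

def abs_delta(original_minade: np.ndarray, perturbed_minade: np.ndarray) -> float:
    """Average per-example |perturbed - original| minADE, i.e. Abs(Delta)."""
    return float(np.mean(np.abs(perturbed_minade - original_minade)))

def _voxelize(trajs: np.ndarray, resolution: float = 0.5,
              hz_in: int = 10, hz_out: int = 100) -> set:
    """Upsample each (T, 2) xy trajectory to hz_out via linear interpolation,
    then return the set of occupied 2D voxel indices at the given resolution."""
    voxels = set()
    for traj in trajs:                       # trajs: (K, T, 2) predicted xy points
        t_in = np.arange(traj.shape[0]) / hz_in
        t_out = np.arange(0.0, t_in[-1], 1.0 / hz_out)
        x = np.interp(t_out, t_in, traj[:, 0])
        y = np.interp(t_out, t_in, traj[:, 1])
        for xi, yi in zip(np.floor(x / resolution), np.floor(y / resolution)):
            voxels.add((int(xi), int(yi)))
    return voxels

def trajectory_set_iou(trajs_original: np.ndarray, trajs_perturbed: np.ndarray) -> float:
    """IoU of the voxels occupied by the original vs. perturbed trajectory sets.
    Probabilities and speed profiles are ignored, as described in the text."""
    a, b = _voxelize(trajs_original), _voxelize(trajs_perturbed)
    return len(a & b) / max(len(a | b), 1)
```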
], [ "Training with data augmentations improves model robustness", "We experiment with two types of data augmentation: 1) data augmentations that use a heuristic definition of non-causal agents, such as randomly dropping any static context agentWe follow the definition of context agents in WOMD, i.e, the agents that no prediction is required for in the leaderboard., and 2) robustness-targeted data augmentations that directly drop only non-causal agents using a labeled portion of the val set.", "Heuristic data augmentation.", "The benefit of using a heuristic definition of non-causal agents for data augmentations is that it can be applied without collecting causal labels.", "We implement 2 types of heuristic-based data augmentation in the training set of WOMD: Drop Context (randomly dropping context agents) as a baseline, and Drop Static Context (randomly dropping static context agents).", "We use the MultiPath++–All model and we set the probability of dropping an agent to $0.1$ (the best one among $0.1$ , $0.5$ , and $0.8$ ).", "Table REF summarizes the results for the RemoveNoncausal perturbation (for the per-scene distribution of model sensitivity, see Appendix ).", "Models with data augmentation show less sensitivity to the perturbations, and, in particular, Drop Static Context shows a significant improvement in minADE and Abs($\\Delta $ ) over Drop Context.", "We hypothesize that Drop Static Context does better because the static context agents are less likely to be causal.", "Overall, the results for Drop Static Context imply that dropping non-causal agents via data augmentation in training can improve model robustness to such perturbations at test time.", "Table: Heuristic data augmentations.", "We compare the MP++-All baseline model to the same model trained with either dropping context agents or dropping static context agents, finding that data augmentations that drop agents that are more likely to be non-causal can improve robustness.Non-causal data augmentations.", "Motivated our experiments showing that dropping static context agents improves model robustness, we further explore using non-causal perturbations as a data augmentation strategy during training.", "We randomly sample approximately 70% of the original validation dataset (i.e.", "30k scenes), perturb multiple copies of them via the causal labels, and add the perturbed versions into the training dataset.", "We leave the remaining 30% of the validation set as a holdout for evaluation.", "We then train a baseline model on the new training dataset as well as a model that randomly drops non-causal agents (when possible) with probability 0.1.", "In tab:removenoncausalaugmentation, we similarly see that dropping non-causal agents helps improve minADE as well as model robustness.", "Table: Noncausal data augmentation.", "We fold a portion of the WOMD validation dataset into the original training dataset and apply data augmentations that drop non-causal agents.", "On held-out validation data, we find significant improvements in model robustness across all three Abs(Δ\\Delta ) metrics." 
], [ "Larger dataset size improves model robustness", " We also evaluate model robustness with increasing training data size.", "We randomly select 10%, 20%, 50%, 80% of the training dataset and train separate models on each of the splitsTo mitigate bias, for each split we sample three sets of data and average the performance and robustness of three models independently trained on each set..", "Appendix summarizes the results.", "As we increase the training data size, the model performance improves (minADE decreases) and both the absolute and relative robustness improves.", "Interestingly, previously when varying model architectures, we found that the model with the lowest minADE did not always have the best relative robustness.", "Here, we see a strong trend: for a fixed model architecture, lowering the minADE by increasing the training data results in lower relative sensitivity." ], [ "Discussion", "We now discuss a few hypotheses and initial supporting evidence for why models are not robust to the non-causal perturbations.", "There may be other reasons we have yet to find strong evidence for, but we hope future work can utilize the labels to explore these ideas further.", "Overfitting.", "One reason models may fail to generalize to the non-causal perturbations is that they overfit to spurious correlations in the training data (i.e.", "features that correlate with certain ground truth trajectories but fail to generalize).", "In our experiments, we observe that models that overfit on the original training dataset (as measured by increasing minADE on the original validation dataset) are more sensitive to the non-causal perturbations.", "Thus, the more the model overfits to spurious features like the number of parked cars in a faraway parking lot, the less well it generalizes to examples where these features are absent.", "Data augmentation and increasing dataset size may improve robustness by protecting against overfitting.", "Distribution shift.", "Models may fail to generalize to perturbations if the perturbed data is significantly different from any data seen during training.", "In our results, we observe that the more non-causal agents we remove, the less robust models are.", "Perhaps certain types of scenes with few agents are relatively rare in the training dataset and the model does not generalize well to the distribution shift.", "By evaluating on the perturbations, we essentially expose the model to rare scenarios not seen in training.", "One reason that training with data augmentations via dropping (static) context agents or non-causal agents improves robustness could be that it exposes the model to similar scenes during training.", "Over-reliance on agents instead of roadmap.", "A third possible reason that models fail to generalize is that they utilize the non-causal agents to infer the drivable areas instead of using the mapping information in the input (we serve high-definition maps and traffic control signals as input features for all models).", "Our evidence comes from visualizing examples where dropping non-causal agents creates predictions that disobey the roadgraph rules (Appendix ).", "Mode missing with data-dependent modes.", "Finally, many of the state of the art models (e.g.", "[24], [37]) utilize modes (e.g.", "straight, left, u-turn, etc) learned from the data distribution, where the input data influences how the model will utilize its K predictions to minimize its loss function.", "While effective at minimizing minADE-like metrics, these methods provide no 
coverage guarantees, and can encourage the model to predict multiple speed profiles for the same mode instead of diverse modes.", "When we triage examples (see Appendix ), we find some of the largest failures come from spurious correlations with non-causal agents changing which modes get predicted by the model, demonstrating a weakness of this approach and of the metrics these methods perform well on." ], [ "Conclusions", "We establish a benchmark and metrics for evaluating the robustness of several state-of-the-art models for trajectory prediction for autonomous driving.", "We find that most state-of-the-art models (with different model architectures and coordinate systems) show significant levels of sensitivity to perturbations that remove non-causal agents, with higher sensitivity when removing a greater number of them.", "While most examples show minimal change in minADE ($\le 0.1$ m), there is a long tail of examples that can have large changes ($\ge 1$ m and sometimes up to 8 m).", "Surprisingly, removing either causal or non-causal agents can cause a significant fraction of examples to improve their minADE.", "We also find that increasing dataset size and data augmentation can help improve model robustness.", "Overall, our results indicate that current machine learning models for trajectory prediction may not be reliable enough on their own, and careful thought needs to be given to how to integrate such models with non-learning components to make a safe system.", "Finally, we will publish the causal agent labels as complementary attributes to the WOMD to aid future researchers in building more robust models." ], [ "Acknowledgement", "We thank Rami Al-Rfou, Zhifeng Chen, Anca Dragan, Carlton Downey, Aleksandra Faust, Nigamaa Nayakanti, Jiquan Ngiam, Jon Shlens, Mukund Sundararajan, and Vijay Vasudevan for the valuable discussions and inputs during the course of this research."
], [ "Pathformer", "Pathformer is a transformer based architecture for motion forecasting, that is simple, homogeneous and modality agnostic.", "The input to the model is multi-modal, containing information about agent's history, traffic light information, roadgraph information, and history of other agents in the scene.", "The model effectively learns an understanding of the scene through attention based mechanism to produce robust, realistic and diverse N future trajectories of an agent.", "In this study, we set N=6.", "Figure: The X-Ray visualization of the Pathformer model: the multi-modal inputs include agents' history features, trafflic light state features, AV centric traffic light features, AV centric interaction features, interaction features, and polyline features from the road maps.", "The inputs go through a transformer-based encoder-decoder architecture to capture the attention among the inputs.", "Finally, a classification head and a regression head contribute to the total training loss similar to .More specifically, the architecture of the Pathformer model, as shown in fig:pathformerdiagram, consists of two main sub-networks: an encoder and a decoder.", "The encoder is mainly composed of one or more self-attention networks that learns a representation of the driving scene that facilitates motion forecasting.", "The decoder is composed of one or more standard transformer decoder blocks.", "We use learned embeddings as the initial queries to the decoder, which are then cross-attended with the encoder outputs to produce the proposed trajectories.", "In this encoder, we simply combine all the modalities / inputs into a single long sequence to be fed into the transformer network.", "This scene encoding serves as the context for the decoder to generate N possible trajectories covering the multimodality of the output space.", "The learned embeddings are refined iteratively through the several layers of cross-attention to produce an encoding of a full trajectory.", "The pathformer model has achieved rank 2 on the leader-board of the WOMD, and the scores are shown in fig:pathformerrank.", "Figure: The scores of the Pathformer model on the WOMD." ], [ "Labeling Policy", "Below is the exact text given to labelers to define causal agents: The objective is to identify all agents - cars, cyclists, or pedestrians - that are causal to the AV at any time.", "A causal agent is one whose presence would modify or influence human driver behavior in any way.", "Causality is an inherently subjective label.", "If you are unsure if an agent is causal or not, please err on the side of including it.", "In other words, false positives (identifying an agent as causal when it is truly non-causal) are okay, but we should avoid false negatives (failing to identify a truly causal agent).", "If the behavior of a human driver would be modified because of a potential action that an agent is likely to take, then that agent should be causal.", "On the other hand, if the human driver would drive the same regardless of whether the agent is there or not, the agent is non-causal.", "The labeling policy also included several examples scenarios with causal agents identified such as Figure REF .", "Figure: Example from the labeling policy.", "Causal agents are circled in green and a subset of non-causal agents are circled in red." 
], [ "Model sensitivity to various perturbation types", "In this section, we summarize the robustness metrics across different model architectures for each of the perturbation types (RemoveCausal, RemoveNoncausal, RemoveNoncausalEqual, RemoveStatic).", "We report the model's original minADE, perturbed minADE, average absolute difference between perturbed and original minADE computed per-example (Abs($\\Delta $ )), standard deviation of Abs($\\Delta $ ), and the relative % change (Abs($\\Delta $ ) divided by the original minADE).", "Each table below shows the results for a different perturbation dataset.", "Table: Model sensitivity for RemoveCausal, minADE.Table: Model sensitivity for RemoveNoncausal, minADE.Table: Model sensitivity for RemoveNoncausalEqual, minADE.Table: Model sensitivity for RemoveStatic, minADE.To make it easier to compare across perturbation types, we also report the average Abs(Perturbed - Original) minADE for each model and perturbation type in Table REF .", "Table: Abs(Perturbed-Original) across different perturbation types and models.", "We report the average absolute difference between the per-example perturbed and original minADE for each model and perturbation type.", "The model sensitivity for RemoveCausal and RemoveNoncausal is similar, with RemoveCausal resulting in a slightly larger average absolute change.", "However, when we control for the number of agents and compare RemoveCausal to RemoveNoncausalEqual, we see that the model is significantly more sensitive to removing causal agents than non-causal agents." ], [ "Failure (non-robust) cases under non-causal perturbation", "We have triaged several top sensitive examples under the RemoveNoncausal perturbation.", "Among these examples, we have found three failure patterns: 1) predictions under the perturbation violate traffic rules, as shown in fig:mp++manualtriage0; 2) predictions under the perturbation missed to capture the ground-truth mode, as shown in fig:stmanualtriage0; and 3) predictions under the perturbation violates the causality, for instance, unnecessary slows down when the road becomes more empty due to the removal of non-causal agents, such as the bottom example in fig:mp++overfittriage.", "Meanwhile, we also have identified examples where the predictions under the non-causal perturbation becomes better, as shown in fig:stmanualtriage1.", "Figure: An sensitive example from MP++ under non-causal perturbation: Left side is inference on the original validation data, and right side is inference on the RemoveNoncausal data, where all non-causal agents are removed from the scene, but some of predicted outputs weirdly turn right in the middle of a straight road.Figure: An example from marginal Scene Transformer under non-causal perturbation: Left side is inference on the original validation data, and right side is inference on the RemoveNoncausal data, where all non-causal agents are removed from the scene.", "It shows a scene where the model performed worse under perturbation, entirely missing the correct mode.Figure: An sensitive example from marginal Scene Transformer under non-causal perturbation: Left side is inference on the original validation data, and right side is inference on the RemoveNoncausal data, where all non-causal agents are removed from the scene.", "The model outputs under perturbation actually improved by capturing the ground-truth mode.", "The original model missed a mode where it should have driven forward, potentially because of a spurious correlation with the non-causal agent 
in front of it.", "After removing that (and other agents), it correctly predicted that mode.", "However, there is one mode showing a too wide right turn under the perturbation, which is highly unlikely in human driving.", "This might due to the removal of the static agents from the cross traffic confuses the model about the drivable area." ], [ "An evidence example for non-robustness due to overfitting", "In this section, we show an example scenario that showcases overfitting is one potential reason for poor robustness.", "We have trained the MP++ with 1M iterations, which overfitted at 210k iterations.", "We then visualize the predictions of a same example with two different checkpoints, one at 210k iteration and another at 1M.", "The results are shown in fig:mp++overfittriage.", "We can see that the robustness of the model under non-causal perturbation becomes bad when the model over-fits.", "Figure: An example from MP++ indicating over-fitting is one possible reason for poor robustness: Left side is inference on the original validation data, and right side is inference on the RemoveNoncausal data, where all non-causal agents are removed from the scene.", "The top row shows the performance of the model at 210k iteration, while the bottom row is for that of 1M where we observe over-fitting based on minADE on the validation set.", "We can see that at 1M iteration, the top-1 prediction under the non-causal perturbation unnecessarily slows down.", "Note that in this plot, we only visualize the top-1 predictions for better visualization.", "We also only visualize the predictions of the AV in the bottom row." ], [ "Slicing results", "We further slice the robustness of the models along several dimensions, including the AV's current speed, the percentage of removed non-causal agents (the number of removed non-noncausal agents divided by the number of all context agents) in the scenarios, and the minimum distance from the AV to removed non-causal agents.", "Results are given in fig:slicingresults.", "We found that: Along the percentage of removed non-causal agents, we found all models are more sensitive if a larger fraction of agents are removed.", "Across them, ST Marginal and Pathformer are the most robust models.", "Compared to Pathformer, ST marginal is less sensitive when more than 40% of the context agents are non-causal and removed from the data (fig:slicingresults left).", "Along the AV's speed, ST Marginal is more robust when the AV's speed is slower than 45mph (fig:slicingresults top-left).", "Note that the high fluctuations at high speed ($>45$ mph) is because we have fewer examples there (the count of examples in each bin is provided in Appendix).", "Along the minimum distance between the removed non-causal agents to the AV, Pathformer is the most robust one, particularly when the minimum distance is larger (i.e., all removed non-causal agent are relatively far away from the AV, fig:slicingresults right).", "Such results indicate that Pathformer learns to not pay too much attention to far-away non-causal agents.", "On the contrary, we noticed that ST models tend to be more sensitive to far-away non-causal agents.", "We hypothesize that this might be because the global coordination that ST models are used makes it more sensitive to large coordinate values.", "Figure: We slice the average Abs(Perturbed - Original) minADE along i) ratio of removed non-causal agents to context agents (left), ii) AV speed (mph) (middle), and iii) minimum distance (m) between the removed non-causal agents 
and the AV (right)." ], [ "Aggregate results for alternate metrics", "In this section, we report aggregate results for minFDE, overlap rate, miss rate, and mAP on the RemoveNoncausal perturbation.", "Table: minFDE RemoveNoncausal results.Table: Overlap Rate RemoveNoncausal results.Table: Miss Rate RemoveNoncausal results.Table: mAP RemoveNoncausal results." ], [ "Comparison across models", "The results for comparison across models are shown in fig:modelcomparison.", "We found that all models are sensitive to the non-causal perturbations.", "They are also more sensitive to RemoveNoncausalAgents than RemoveNoncausalAgentsEqual, implying that removing more non-causal agents increases model sensitivity.", "Among the models, Pathformer and Scene Transformer Marginal show the least sensitivity to the perturbation.", "Figure: Models are sensitive to non-causal perturbations.We plot the distribution of the per-scene difference between perturbed and original minADE for various models and perturbation types.", "Models that are less sensitive to the perturbation have a higher example density at 0 difference.All models show sensitivity to the non-causal perturbations, which can either increase or decrease the perturbed minADE relative to the original minADE.The models are more sensitive to RemoveNoncausalAgents than RemoveNoncausalAgentsEqual, implying that removing more non-causal agents increases model sensitivity.", "Among the models, Pathformer and Scene Transformer Marginal show the least sensitivity to the perturbation." ], [ "Comparison across perturbation types", "Figure REF shows the sensitivity of the Pathformer AV Only, ST Marginal AV Only, and MultiPath++ All Agents models to each of the perturbation types.", "The perturbation can either increase or decrease the minADE.", "On average, it increase the minADE but this depends on the pertubation type (RemoveCausalAgents causes the strongest increase; see Appendix ).", "The models are most sensitive to both the RemoveCausalAgents and RemoveNoncausalAgents perturbations.", "The RemoveCausalAgents has the largest effect on the model, producing the most outliers that increase the difference between the perturbed and original minADE, followed closely by RemoveNoncausalAgents, then RemoveStaticAgents, and then RemoveNoncausalAgentsEqual.", "Surprisingly, the sensitivity of RemoveNoncausalAgents is close to that of RemoveCausalAgents (exact numbers TODO).", "However, when we change the number of non-causal agents removed (in RemoveNoncausalAgentsEqual) to be the same as the number of causal agents removed (in RemoveCausalAgents), the sensitivity is much less.", "Figure: Models are sensitive to non-causal perturbations.", "For three models, we plot the distribution of the per-scene difference between perturbed and original minADE for various perturbation types.", "The models are least sensitive the RemoveNoncausalAgentsEqual perturbation, and most sensitive (almost equally so) to RemoveCausalAgents and RemoveNoncausalAgents." 
], [ "Trajectory Set Metrics (continued)", "Since $\\Delta $ (Ptb-Ori) on minADE only quantifies how the perturbations impact the models' robustness in terms of the distance between ground-truth and the closest predicted trajectories, it does not directly reflect the difference between the two predicted trajectory sets (w/ and w/o perturbations).", "We thus introduce two trajectory set metrics to capture such difference: an IoU based metric as given in sec:evaluation in the main context and a trajectory set minADE defined below.", "Ideally, a model's predicted trajectory sets would not be sensitive to dropping non-causal agents, meaning we expect a low difference on the trajectory set metrics.", "minADE between trajectory sets (TS_minADE).", "Let $\\hat{p}^i_{pert, orig}$ represent the $i$ -th predicted trajectory in the predicted trajectory sets w/ and w/o perturbation, respectively.", "We define TS_minADE = $\\min L_2(\\hat{p}^i_{orig}, \\hat{p}^j_{pert})$ , $i,j=1,2,\\cdots , N$ where $N$ is the number of the predicted trajectories of the model.", "Hence, a smaller TS_minADE means that two predicted trajectory sets are more similar.", "The results for all the models are given in tab:pairwisemetric.", "We can see that most of the models are sensitive to the RemoveNoncausal perturbation.", "The Pathformer is least sensitive, which is good.", "However, it is also least sensitive to RemoveCausal, which indicate that the model is less sensitive to agent removal in general.", "Table: The trajectory set minADE for models evaluated on the perturbations of RemoveNoncausal and RemoveCausal" ] ]
2207.03586
[ [ "Classical solutions of the Boltzmann equation with irregular initial\n data" ], [ "Abstract This article considers the spatially inhomogeneous, non-cutoff Boltzmann equation.", "We construct a large-data classical solution given bounded, measurable initial data with uniform polynomial decay of mild order in the velocity variable.", "Our result requires no assumption of strict positivity for the initial data, except locally in some small ball in phase space.", "We also obtain existence results for weak solutions when our decay and positivity assumptions for the initial data are relaxed.", "Because the regularity of our solutions may degenerate as $t \\rightarrow 0$, uniqueness is a challenging issue.", "We establish weak-strong uniqueness under the additional assumption that the initial data possesses no vacuum regions and is H\\\"older continuous.", "As an application of our short-time existence theorem, we prove global existence near equilibrium for bounded, measurable initial data that decays at a finite polynomial rate in velocity." ], [ "Introduction", "We consider the Boltzmann equation, a fundamental kinetic integro-differential equation from statistical physics [18], [66], [19], [21], [70].", "The unknown function $f(t,x,v)\\ge 0$ models the particle density of a diffuse gas in phase space at time $t \\ge 0$ , location $x \\in \\mathbb {R}^3$ , and velocity $v\\in \\mathbb {R}^3$ .", "The equation reads $(\\partial _t + v\\cdot \\nabla _x) f = Q(f,f),$ where the left-hand side is a transport term, and $Q(f,g)$ is the Boltzmann collision operator with non-cutoff collision kernel, which we describe in detail below.", "The purpose of this article is to develop a well-posedness theory for (REF ) on a time interval $[0,T]$ , making minimal assumptions on the initial data $f_{\\rm in}(x,v)\\ge 0$ .", "In particular, we would like our local existence theory to properly encapsulate the regularizing effect of the non-cutoff Boltzmann equation.", "This effect comes from the nonlocal diffusion produced by $Q(f,f)$ in the velocity variable, and has been studied extensively, as we survey below in Section REF .", "In light of this regularizing effect, it is natural and desirable to construct a solution $f$ with initial data in a low-regularity (ideally zeroth-order) space, such that $f$ has at least enough regularity for positive times to evaluate the equation in a pointwise sense.", "However, so far this has only been achieved in the close-to-equilibrium [13], [65] and space homogeneous (i.e $x$ -independent) [32] regimes.", "For the general case, essentially all of the local existence results for classical solutions in the literature [5], [6], [10], [61], [40], [42] require $f_{\\rm in}$ to lie in a weighted Sobolev space of order at least 4.", "The current article fills this gap by constructing a solution with initial data in a weighted $L^\\infty $ -space.", "Another goal of our analysis is to optimize the requirement on the decay of $f_{\\rm in}$ for large velocities.", "Because of the nonlocality of $Q$ , decay of solutions is intimately tied to regularity, and since we work in the physical regime $\\gamma \\le 0$ (see (REF )), the decay of $f$ for positive times is limited by the decay of $f_{\\rm in}$ .", "In our main existence result, we require $f_{\\rm in}$ to have pointwise polynomial decay of order $2s+3$ , where $2s\\in (0,2)$ is the order of the diffusion (see (REF )).", "In particular, the energy density $\\int _{\\mathbb {R}^3} |v|^2 f(t,x,v) \\, \\mathrm {d}v$ of our 
solutions may be infinite, which places them outside the regime where the conditional regularity estimates of Imbert-Silvestre [49] may be applied out of the box.", "The possible presence of vacuum regions in the initial data is a key source of difficulty.", "The regularization coming from $Q(f,f)$ relies on positivity properties of $f$ , in a complex way that reflects the nonlocality of $Q$ .", "In the space homogeneous setting, conservation of mass provides sufficient positivity of $f$ for free.", "The close-to-equilibrium assumption would also ensure that $f$ has regions of strict positivity at all times.", "By contrast, in the case of general initial data, any lower bounds for $f$ must degenerate at a severe rate as $t\\searrow 0$ , which impacts the regularity of the solution for small times and causes complications for the well-posedness theory.", "Our main existence theorem requires a weak positivity assumption, namely that $f_{\\rm in}$ is uniformly positive in some small ball in phase space.", "We consider solutions posed on the spatial domain $\\mathbb {R}^3$ , with no assumption that the solution or the initial data decay for large $|x|$ .", "This regime includes the physically important example of a localized disturbance away from a Maxwellian equilibrium $M(v) = c_1 e^{-c_2|v|^2}$ ; that is $f = M(v) + g(t,x,v)$ , where $g(t,x,v) \\rightarrow 0$ as $|x|\\rightarrow \\infty $ but $g$ is not necessarily small.", "Our regime also includes spatially periodic solutions as a special case.", "The lack of integrability in $x$ is a nontrivial source of difficulty and in particular makes energy methods much less convenient.", "Also, the total mass, energy, and entropy of the solution could be infinite, so we do not have access to the usual bounds coming from conservation of mass and energy and monotonicity of entropy.", "In Section REF , we give a more complete bibliography of well-posedness results for (REF )." 
], [ "The collision operator", "Boltzmann's collision operator is a bilinear integro-differential operator defined for functions $f,g:\\mathbb {R}^3\\rightarrow \\mathbb {R}$ by $Q(f,g) := \\int _{\\mathbb {R}^3} \\int _{\\mathbb {S}^2} B(v-v_*,\\sigma ) [f(v_*^{\\prime })g(v^{\\prime }) - f(v_*) g(v)] \\, \\mathrm {d}\\sigma \\, \\mathrm {d}v_*.$ Because collisions are assumed to be elastic, the pre- and post-collisional velocities all lie on a sphere of diameter $|v-v_*|$ parameterized by $\\sigma \\in \\mathbb {S}^2$ , and are related by the formulas $v^{\\prime } = \\frac{v+v_*}{2} + \\frac{|v-v_*|}{2} \\sigma , \\quad v_*^{\\prime } = \\frac{v+v_*}{2} - \\frac{|v-v_*|}{2} \\sigma .$ We take the standard non-cutoff collision kernel $B$ defined by $B(v-v_*,\\sigma ) = b(\\cos \\theta ) |v-v_*|^\\gamma , \\quad \\cos \\theta = \\sigma \\cdot \\frac{v-v_*}{|v-v_*|},$ for some $\\gamma > -3$ .", "The angular cross-section $b$ is singular as $\\theta $ (the angle between pre- and post-collisional velocities) approaches 0 and satisfies the bounds $c_b \\theta ^{-1-2s} \\le b(\\cos \\theta ) \\sin \\theta \\le \\frac{1}{c_b} \\theta ^{-1-2s},$ for some $c_b>0$ and $s\\in (0,1)$ .", "This implies $b$ has the asymptotics $b(\\cos \\theta ) \\approx \\theta ^{-2-2s}$ as $\\theta \\rightarrow 0$ .", "The parameters $\\gamma $ and $s$ reflect the modeling choices made in defining $Q(f,g)$ .", "When electrostatic interactions between particles are governed by an inverse-square-law potential of the form $\\phi (x) = c|x|^{1-p}$ for some $p>2$ , then one has $\\gamma = (p-5)/(p-1)$ and $s = 1/(p-1)$ .", "As is common in the literature, we consider arbitrary pairs $(\\gamma ,s)$ and disregard the parameter $p$ .", "For our main results, we assume $\\gamma < 0,$ but otherwise, we do not place any restriction on $\\gamma $ and $s$ .", "The integral in (REF ) has two singularities: as $\\theta \\rightarrow 0$ , and as $v_*\\rightarrow v$ .", "The non-integrable singularity at $\\theta \\approx 0$ (grazing collisions), which is related to the long-range interactions taken into account by the physical model, is the source of the regularizing properties of the operator $Q$ ." 
], [ "Main results", "For $1\\le p\\le \\infty $ , define the velocity-weighted $L^p$ norms $\\Vert f\\Vert _{L^p_q(\\mathbb {R}^3)} = \\Vert \\langle v\\rangle ^q f\\Vert _{L^p(\\mathbb {R}^3)},\\quad \\Vert f\\Vert _{L^p_q(\\Omega \\times \\mathbb {R}^3)} = \\Vert \\langle v\\rangle ^q f\\Vert _{L^p(\\Omega \\times \\mathbb {R}^3_v)}$ where $\\Omega $ is any subset of $\\mathbb {R}^3_x$ or $[0,\\infty )\\times \\mathbb {R}^3_x$ .", "Our results involve kinetic Hölder spaces $C^{\\beta }_{\\rm \\ell }$ that are defined precisely in Section REF below.", "These spaces are based on a distance $d_\\ell $ that is adapted to the scaling and translation symmetries of the Boltzmann equation.", "Roughly speaking, a function in $C^\\beta _\\ell $ for some $\\beta >0$ is $C^\\beta $ in $v$ , $C^{\\beta /(1+2s)}$ in $x$ , and $C^{\\beta /(2s)}$ in $t$ .", "Note that the subscript $q$ in $L^p_q$ refers to a decay exponent, while the subscript $\\ell $ in $C^\\beta _\\ell $ refers to the distance $d_\\ell $ .", "In the statement of our main results, for brevity's sake, we make the convention that all constants may depend on the parameters $\\gamma $ , $s$ , and $c_b$ from the collision kernel, even if they are not specifically mentioned.", "Our first main result is about the existence of classical solutions: Theorem 1.1 Let $\\gamma \\in (-3,0)$ and $s\\in (0,1)$ .", "Assume that $f_{\\rm in} \\ge 0$ lies in $L^\\infty _q(\\mathbb {R}^6)$ with $q > 2s+3$ , and that for some $x_m,v_m \\in \\mathbb {R}^3$ and $\\delta , r>0$ , $f_{\\rm in}(x,v) \\ge \\delta , \\quad \\text{ for } (x,v) \\in B_r(x_m,v_m).$ Then there exists $T>0$ depending on $q$ and $\\Vert f_{\\rm in}\\Vert _{L^\\infty _q}$ and a solution $f(t,x,v) \\ge 0$ to the Boltzmann equation (REF ) in $L^\\infty _q([0,T]\\times \\mathbb {R}^6)$ .", "This solution is locally of class $C^{2s}_\\ell $ .", "More precisely, for each compact $\\Omega \\subset (0,T]\\times \\mathbb {R}^6$ , there exist $C, \\alpha >0$ depending on $q$ , $\\Omega $ , $x_m$ , $v_m$ , $r$ , $\\delta $ , and $\\Vert f_{\\rm in}\\Vert _{L^\\infty _q(\\mathbb {R}^6)}$ , such that $ \\Vert f\\Vert _{C^{2s+\\alpha }_\\ell (\\Omega )} \\le C.$ Furthermore, for any $m\\ge 0$ and partial derivative $D^k f$ , where $k$ is a multi-index in $(t,x,v)$ variables, there exists $q(k,m)>0$ such that for any compact $\\Omega \\subset (0,T]\\times \\mathbb {R}^3$ , $ f_{\\rm in} \\in L^\\infty _{q(k,m)}(\\mathbb {R}^6) \\quad \\Rightarrow \\quad \\Vert D^k f\\Vert _{L^\\infty _{m}(\\Omega \\times \\mathbb {R}^3)} \\le C,$ with $C$ depending on $k$ , $m$ , $\\Omega $ , and the initial data.", "If $f_{\\rm in}$ decays faster than any polynomial, i.e.", "$f_{\\rm in} \\in L^\\infty _q(\\mathbb {R}^6)$ for all $q>0$ , then the solution $f$ is $C^\\infty $ in all three variables for positive times, and $D^k f\\in L^\\infty _{m}(\\Omega \\times \\mathbb {R}^3)$ for all $m\\ge 0$ , multi-indices $k$ , and compact sets $\\Omega $ .", "At $t=0$ , the solution agrees with $f_{\\rm in}$ in the following weak sense: for all $\\varphi \\in C^1_{t,x} C^2_v$ with compact support in $[0,T)\\times \\mathbb {R}^6$ , $\\int _{\\mathbb {R}^6} f_{\\rm in}(x,v) \\varphi (0,x,v) \\, \\mathrm {d}v \\, \\mathrm {d}x = \\int _0^T \\int _{\\mathbb {R}^6} [f(\\partial _t + v\\cdot \\nabla _x)\\varphi + Q(f,f) \\varphi ] \\, \\mathrm {d}v \\, \\mathrm {d}x \\, \\mathrm {d}t.$ Some comments on the theorem statement are in order: The local regularity of $f$ of order $2s+\\alpha $ , where $\\alpha >0$ is uniform on 
compact sets, is enough to make pointwise sense of $Q(f,f)$ , as we prove in Lemma REF .", "The norm $C^{2s+\alpha }_\ell $ also controls the material derivative $(\partial _t + v\cdot \nabla _x)f$ (see [51]).", "Therefore, although $\partial _t f$ and $\nabla _x f$ do not necessarily exist classically, the two sides of equation (REF ) have pointwise values and are equal at every $(t,x,v)\in (0,T]\times \mathbb {R}^6$ .", "In general, our solutions may have a discontinuity at $t=0$ .", "If we make the additional assumption that $f_{\rm in}$ is continuous, then $f$ is continuous as $t\rightarrow 0$ and agrees with $f_{\rm in}$ pointwise.", "This is proven in Proposition REF .", "It is not a priori obvious that the time integral on the right in (REF ) converges, since the regularity required to make pointwise sense of $Q(f,f)$ degenerates as $t\rightarrow 0$ .", "Using the weak formulation of the collision operator (see (REF ) below) one can bound $\int _{\mathbb {R}^3} Q(f,f) \varphi \, \mathrm {d}v$ from above using only bounds for $\varphi $ in $W_v^{2,\infty }$ and $f$ in $L^\infty \cap L^1_{\gamma +2s}$ .", "This implies that the formula (REF ) is well-defined.", "It seems that one can deduce existence of a classical solution in the $\gamma =0$ case fairly easily via Theorem REF .", "Indeed, the $C^{2s+\alpha }_\ell $ estimates are uniform as $\gamma \nearrow 0$ , so one can use local convergence of solutions $f^\gamma $ to obtain $f^0$ that solves (REF ), after performing a suitable convergence analysis for $Q(f,f)$ as $\gamma \nearrow 0$ .", "In the interest of brevity, we do not analyze this case in detail.", "Although our main interest is in constructing classical solutions, our approach is robust enough to prove the existence of weak solutions when the decay and positivity conditions on $f_{\rm in}$ are relaxed.", "In particular, we obtain a well-defined notion of weak solution for any $f_{\rm in}\in L^\infty _q(\mathbb {R}^6)$ , $q>\gamma +2s+3$ , without any quantitative lower bound assumptions on $f_{\rm in}$ .", "These weak solutions do not have enough regularity to evaluate $Q(f,f)$ pointwise, so we define the weak formulation of the collision operator (this weak form of $Q(f,f)$ is very classical and goes back to James Clerk Maxwell's 1867 work on the theory of gases [57]) as follows: $W(g,h,\varphi ) := \frac{1}{2} \int _{\mathbb {R}^3} \int _{\mathbb {S}^2} B(v-v_*,\sigma ) g(v) h(v_*) [\varphi (v_*^{\prime }) + \varphi (v^{\prime }) - \varphi (v_*) - \varphi (v)] \, \mathrm {d}\sigma \, \mathrm {d}v_* .$ When $f$ is sufficiently smooth and rapidly decaying, the identity $\int _{\mathbb {R}^3} \varphi Q(f,f) \, \mathrm {d}v = \int _{\mathbb {R}^3} W(f,f,\varphi ) \, \mathrm {d}v$ follows from the pre-post-collisional change of variables and symmetrization, see, e.g., [70].", "Our result on weak solutions is as follows: Theorem 1.2 Let $\gamma $ and $s$ be as in Theorem REF .", "Assume that $f_{\rm in}\ge 0$ lies in $L^\infty _{q}(\mathbb {R}^6)$ for some $q> \gamma +2s+3$ .", "Then there exists $T>0$ depending on $q$ and $\Vert f_{\rm in}\Vert _{L^\infty _{q}(\mathbb {R}^6)}$ and $f: [0,T]\times \mathbb {R}^6\rightarrow [0,\infty )$ such that, for any $\varphi \in C^1_{t,x} C^2_v$ with compact support in $[0,T)\times \mathbb {R}^6$ , there holds $\int _{\mathbb {R}^6} f_{\rm in}(x,v) \varphi (0,x,v) \, \mathrm {d}v \, \mathrm {d}x = \int _0^T \int _{\mathbb {R}^6}[ f
(\\partial _t + v\\cdot \\nabla _x)\\varphi +W(f,f,\\varphi )] \\, \\mathrm {d}v \\, \\mathrm {d}x \\, \\mathrm {d}t.$ If, in addition, there exist $\\delta , r>0$ and $x_m,v_m\\in \\mathbb {R}^3$ with $ f_{\\rm in}(x,v) \\ge \\delta , \\quad \\text{ for } (x,v) \\in B_r(x_m)\\times B_r(v_m),$ then $f$ is locally Hölder continuous: for any compact $\\Omega \\subset (0,T]\\times \\mathbb {R}^6$ , there exist $C,\\alpha >0$ depending on $\\Omega $ , $x_m$ , $v_m$ , $r$ , $\\delta $ , and $\\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^6)}$ , with $ \\Vert f\\Vert _{C^\\alpha _\\ell (\\Omega )}\\le C.$ This definition of weak solution is similar to one used by Alexandre [3] who worked under stricter hypotheses on the initial data.", "Next, we present our main result on uniqueness.", "This is a challenging issue because of the generality of our existence theorem.", "We discuss some of the specific difficulties in Section REF below.", "Figure: A cartoon depicting two f in f_{\\rm in} satisfying the condition ().", "The set where f in ≥δf_{\\rm in}\\ge \\delta is depicted by the gray shading.", "Notice that, for every xx, there is a v x v_x between the dashed red lines such that a translation of the red ball (B r B_r) centered at (x,v x )(x,v_x) is within the shaded region.", "This would not be the case if, e.g., f in ≡0f_{\\rm in} \\equiv 0 for all x∈B 1 (0)x \\in B_1(0).For this result, we make additional assumptions on $f_{\\rm in}$ : there are $\\delta $ , $r$ , $R>0$ so that $\\text{for each } x\\in \\mathbb {R}^3,\\ \\text{ there is } \\ v_x\\in B_R(0)\\ \\text{ such that }\\ f_{\\rm in} \\ge \\delta 1_{B_r(x,v_x)}.$ (See Figure REF .)", "This condition is stronger than (REF ) and rules out vacuum regions in the spatial domain at $t=0$ .", "We also need to assume that $f_{\\rm in}$ is Hölder continuous.", "Our uniqueness theorem is as follows: Theorem 1.3 Let $\\gamma \\in (-3,0)$ and $s\\in (0,1)$ .", "For any $\\alpha >0$ and for $q>0$ sufficiently large, depending only on $\\alpha $ , $\\gamma $ , $s$ , and $c_b$ , assume that $f_{\\rm in}\\in L^\\infty _q(\\mathbb {R}^6)\\cap C^\\alpha _\\ell (\\mathbb {R}^6)$ , and that $f_{\\rm in}$ satisfies the lower bound assumption (REF ).", "Let $f$ be the classical solution on $[0,T]\\times \\mathbb {R}^6$ constructed in Theorem REF with initial data $f_{\\rm in}$ .", "Then there exists $T_U>0$ depending on $\\delta $ , $r$ , $R$ , $\\alpha $ , $\\Vert f_{\\rm in}\\Vert _{L^\\infty _q(\\mathbb {R}^6)}$ , and $\\Vert f_{\\rm in}\\Vert _{C^\\alpha _\\ell (\\mathbb {R}^6)}$ , such that for any weak solution $g$ in the sense of Theorem REF with initial data $f_{\\rm in}$ , and such that $g\\in L^1([0,T_U],L^\\infty _q(\\mathbb {R}^6)),$ the equality $f(t,x,v)=g(t,x,v)$ holds everywhere in $[0,\\min (T,T_U)]\\times \\mathbb {R}^6$ .", "Let us make the following comments on the statement of this theorem: This uniqueness result holds up to a time $T_U$ that depends on the $C^\\alpha $ bound for $f_{\\rm in}$ , and may be smaller than $T$ , the time of existence granted by Theorem REF .", "This makes sense because our proof of uniqueness breaks down as $\\alpha $ is sent to 0.", "The admissible values of the parameter $q$ in Theorem REF are explicitly computable from our proof.", "The uniqueness or non-uniqueness of the solutions constructed in Theorem REF , without any regularity assumption for $f_{\\rm in}$ , remains an interesting open question.", "For other examples of nonlinear evolution equations where uniqueness is not understood in the same 
generality as existence, even though the system regularizes instantaneously, we refer to [52], [39], [15]." ], [ "Application: global existence near equilibrium", "In our main results, we do not assume that our initial data is close to equilibrium.", "In the case that $f_{\rm in}$ is sufficiently close to a Maxwellian equilibrium state, solutions are known to exist globally in time and converge to equilibrium as $t\rightarrow \infty $ , and there is a large literature about this regime (see Section REF below).", "Although in general the near-equilibrium and far-from-equilibrium regimes seem very different mathematically, we are nevertheless able to prove a new result in the close-to-equilibrium regime as an application of our Theorem REF : Corollary 1.4 Let $\gamma $ and $s$ be as in Theorem REF , let $M(x,v) = (2\pi )^{-3/2} e^{-|v|^2/2}$ , and let $q_0>5$ be fixed.", "There exists $q_1> q_0$ depending on $q_0$ , $\gamma $ , $s$ , and $c_b$ , such that for any $f_{\rm in}:\mathbb {T}^3\times \mathbb {R}^3\rightarrow [0,\infty )$ with $f_{\rm in} \in L^\infty _{q_1}(\mathbb {T}^3\times \mathbb {R}^3), $ and for any $\varepsilon \in (0,\frac{1}{2})$ , there exists a $\delta >0$ , depending on $\varepsilon $ and $\Vert f_{\rm in}\Vert _{L^\infty _{q_1}(\mathbb {T}^3\times \mathbb {R}^3)}$ , such that if $\Vert f_{\rm in} - M\Vert _{L^\infty _{q_0}(\mathbb {T}^3\times \mathbb {R}^3)} < \delta ,$ then there exists a global classical solution $f:[0,\infty ) \times \mathbb {T}^3\times \mathbb {R}^3 \rightarrow [0,\infty )$ to the Boltzmann equation (REF ), with $\Vert f(t) - M\Vert _{L^\infty _{q_0}(\mathbb {T}^3\times \mathbb {R}^3)} < \varepsilon ,$ for all $t\ge 0$ .", "By the results of [31], this solution converges to $M$ as $t\rightarrow \infty $ faster than any polynomial rate.", "The proof of Corollary REF is based on a strategy developed by the second named author, jointly with Silvestre [65].", "The idea is to combine a short-time existence theorem, the conditional regularity estimates of Imbert-Silvestre [50], and the global trend-to-equilibrium result of Desvillettes-Villani [31].", "The result in [65] worked under the assumption $\gamma +2s\ge 0$ .", "Taking advantage of the generality of the short-time existence theorem in the current paper, Corollary REF improves on [65] in two ways: by including the case $\gamma +2s<0$ , and by working with initial data that decays at a finite polynomial rate, rather than at a rate faster than any polynomial.", "For the regime $\gamma +2s<0$ , this seems to be the first result proving global existence near equilibrium for initial data in a zeroth-order space."
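, "To illustrate the role of the threshold $q_0>5$ (an informal remark, not part of the proof of Corollary REF ), note that closeness to $M$ in $L^\infty _{q_0}$ controls the hydrodynamic quantities on which the conditional regularity estimates depend: if $\Vert f(t) - M\Vert _{L^\infty _{q_0}(\mathbb {T}^3\times \mathbb {R}^3)} < \varepsilon $ , then $\int _{\mathbb {R}^3} |v|^2\, |f(t,x,v) - M(x,v)| \, \mathrm {d}v \le \varepsilon \int _{\mathbb {R}^3} \langle v\rangle ^{2-q_0} \, \mathrm {d}v < \infty ,$ and the last integral converges precisely because $q_0 - 2 > 3$ .", "The same computation without the weight $|v|^2$ shows that the mass density of $f$ stays close to that of $M$ , so for $\varepsilon $ small the mass and energy densities of $f$ are uniformly bounded, the mass density is bounded below, and (with a short additional argument) the entropy density is bounded as well."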
], [ "Prior well-posedness results and comparison", "Existence results for the non-cutoff Boltzmann equation fall into the following categories: Spatially homogeneous solutions.", "Local well-posedness and smoothing are well understood for the space homogeneous equation, and classical solutions are known to exist globally when $\\gamma +2s\\ge 0$ .", "We refer to [67], [32], [34], [30], [59], [23], [37], [35] and the references therein, as well as [68] for so-called $H$ -solutions and [56], [60] for measure-valued solutions.", "Close-to-equilibrium solutions.", "For close-to-equilibrium solutions, we refer to [36], [7], [9], [14], [13], [44], [71], [33], [65], [17] and the references therein.", "The first result for near-equilibrium initial data in a weighted $L^\\infty _{x,v}$ space was [13], which applied to $\\gamma >0$ .", "This was extended to the case $\\gamma +2s\\in [0,2]$ in [65], and we extend it to $\\gamma +2s<0$ in our Corollary REF .", "We also refer to [33], [58], [71] for other results in low-regularity spaces.", "Results that work with polynomially decaying initial data include [44], [14], [13], [17].", "Close-to-vacuum solutions.", "Recently, global solutions that are close to the vacuum state $f\\equiv 0$ have been constructed by Chaturvedi [22], with initial data in a tenth-order Sobolev space with Gaussian weight.", "Weak solutions.", "A generalized notion of solution for the non-cutoff Boltzmann equation called renormalized solution with defect measure was constructed by Alexandre-Villani [11].", "The uniqueness and regularity of these solutions are not understood, but they exist globally in time, for any initial data $f_{\\rm in}$ such that $\\int _{\\mathbb {R}^6} f_{\\rm in}(x,v)(1+|v|^2 + |x|^2 + \\log f_{\\rm in}(x,v)) \\, \\mathrm {d}x \\, \\mathrm {d}v < \\infty .$ This assumption is weaker than ours in terms of local integrability and $v$ -decay, but stronger in terms of $x$ -decay.", "If $f_{\\rm in}$ satisfies the assumptions of both [11] and our Theorem REF , then our weak solutions are, in particular, renormalized solutions with defect measure, as can be seen from the stability theorem [11] and the fact that our weak solutions are obtained as a limit of classical solutions of (REF ).", "Short-time solutions.", "Early results on local existence in the non-cutoff case were due to the AMUXY group (Alexandre, Morimoto, Ukai, Xu, and Yang) [5], [6] and required initial data to lie in Sobolev spaces of order 4 with Gaussian velocity weights.", "Later results relaxed the decay assumption by treating initial data with finite polynomial decay, at the cost of increasing the regularity requirement on $f_{\\rm in}$ .", "The first result in this direction was from Morimoto-Yang [61], who worked with $s\\in (0,\\frac{1}{2})$ and $\\gamma \\in (-\\frac{3}{2}, 0]$ and took $\\langle v\\rangle ^q f_{\\rm in}\\in H^6(3\\times \\mathbb {R}^3)$ with $q> 13$ .", "Next, the work [40] by the current authors assumed $\\max \\lbrace -3,-\\frac{3}{2} - 2s\\rbrace < \\gamma < 0$ and required $\\langle v\\rangle ^q f_{\\rm in} \\in H^5(3\\times \\mathbb {R}^3)$ for some non-explicit $q$ .", "See also [8] for an earlier uniqueness result in a similar regime as [40].", "Most recently, Henderson-Wang [42] extended the result of [40] to the case $\\gamma +2s<-\\frac{3}{2}$ .", "The only prior results that require fewer than 4 derivatives for $f_{\\rm in}$ are restricted to the case $s\\in (0,\\frac{1}{2})$ : see [10], which requires at least two Sobolev derivatives as well as spatial 
localization, and [42], which requires only $\langle v\rangle ^q f_{\rm in}\in C^1(\mathbb {T}^3\times \mathbb {R}^3)$ and $q$ sufficiently large, but uses a specific argument that cannot be generalized to $s>\frac{1}{2}$ .", "Our Theorem REF represents a significant improvement, in terms of the decay and regularity assumptions on $f_{\rm in}$ , and applies for any $\gamma < 0$ and $s\in (0,1)$ ." ], [ "Regularizing effect", "The regularizing effect of the non-cutoff Boltzmann equation is a major theme and motivation of this work.", "The first rigorous understanding of this effect came in the 1990s with Desvillettes' work on the two-dimensional homogeneous setting [27], [28], [29], as well as functional estimates for $Q(f,g)$ in Sobolev spaces by various authors [54], [1], [69], culminating in the sharp entropy dissipation estimate of Alexandre-Desvillettes-Villani-Wennberg [4].", "(Much earlier, the idea that $Q(f,g)$ behaves like a fractional differentiation operator in $v$ was understood on a heuristic level by Cercignani [18].)", "The key property for many of these estimates is the following functional identity for the collision operator: $\int _{\mathbb {R}^3} Q(f,f) \log f \, \mathrm {d}v = -\frac{1}{4}\int _{\mathbb {R}^6}\int _{\mathbb {S}^2} B(v-v_*,\sigma )\left( f^{\prime } f_*^{\prime } - f f_*\right) \log \frac{f^{\prime } f_*^{\prime }}{ff_*} \, \mathrm {d}\sigma \, \mathrm {d}v_*\, \mathrm {d}v \le 0.$ This identity implies that the entropy $\int _{\mathbb {R}^6} f\log f \, \mathrm {d}v \, \mathrm {d}x$ is nonincreasing for solutions of (REF ), but even more, it implies a smoothing effect in the $v$ variable, because the quantity on the right (called the entropy dissipation) turns out to control $\Vert \sqrt{f}\Vert _{H^s_v}$ , up to a lower-order correction term.", "In the context of the homogeneous Boltzmann equation, this fractional smoothing effect can be iterated to show solutions are $C^\infty $ .", "For the full inhomogeneous equation, the matter is more difficult because the diffusion acts only in velocity, and the smoothing effect of (REF ) is therefore hypoelliptic rather than parabolic.", "Results such as [5], [24] proved that any solution that lies in $H^5_{x,v}(\mathbb {R}^6)$ uniformly in time, decays faster than any polynomial, and satisfies a lower bound on the mass density, is in fact $C^\infty $ .", "More recently, the breakthrough result of Imbert-Silvestre [50], which completed a long program of the two authors and Mouhot [63], [48], [51], [46], [47] (see also the survey articles [62], [49], [64]), established $C^\infty $ estimates for solutions of (REF ) that depend only on bounds for the mass, energy, and entropy densities of the solution, as well as (when $\gamma <0$ ) the polynomial decay rates of the initial data.", "See also [55] for a quantitative version of the Hölder estimate of [48].", "These results do not use entropy dissipation estimates, relying instead on understanding the ellipticity of $Q(f,g)$ as an integro-differential operator.", "The current article adapts some techniques from the Imbert-Silvestre program."
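, "As a simple model for this hypoelliptic smoothing (recorded here only as an illustration, and not used in our arguments), one may keep in mind the fractional Kolmogorov equation $\partial _t f + v\cdot \nabla _x f = -(-\Delta _v)^s f,$ which is invariant under the scaling $f_r(t,x,v) := f(r^{2s}t, r^{1+2s}x, rv)$ : if $f$ is a solution, then so is $f_r$ for every $r>0$ .", "This is exactly the dilation $\delta _r$ that underlies the kinetic Hölder spaces $C^\beta _\ell $ recalled in Section REF , and it explains the heuristic that one order of regularity in $v$ corresponds to $1/(1+2s)$ of an order in $x$ and $1/(2s)$ of an order in $t$ ."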
], [ "Comparison with Landau equation", "The Landau equation is a kinetic model that can be derived from the Boltzmann equation (REF ) in the limit as grazing collisions (collisions with $\\theta \\approx 0$ in (REF )) predominate (see e.g.", "[26], [12]).", "This equation reads $\\partial _t f + v\\cdot \\nabla _x f = Q_L(f,f),$ where the Landau collision operator takes the form $Q_L(f,g) = \\nabla _v\\cdot (\\bar{a}^f \\nabla _v g) + \\bar{b}^f \\cdot \\nabla _v g + \\bar{c}^f g,$ and $\\bar{a}^f$ , $\\bar{b}^f$ , and $\\bar{c}^f$ are defined in terms of velocity integrals of $f$ .", "This is an important model in plasma physics and has also attracted a great deal of interest as an equation with similar mathematical properties to the Boltzmann equation.", "In [39], we proved existence and uniqueness results for the Landau equation that are in a similar spirit to Theorem REF and Theorem REF above.", "While [39] provides a helpful outline for the current study, the Boltzmann case turns out to be much more challenging.", "This is partly because $Q_L(f,g)$ defined in (REF ) is a second-order differential operator which is local in $g$ , unlike $Q(f,g)$ , which is nonlocal in both $f$ and $g$ .", "The local structure of the Landau equation is more amenable to barrier arguments because, letting $f$ be a solution and $g$ be an upper (resp.", "lower) barrier, one can derive good upper (resp.", "lower) bounds for $Q_L(f,g)$ at a crossing point between $f$ and $g$ , using information about $g$ only at the crossing point.", "In the Boltzmann case, bounding $Q(f,g)$ is more subtle because one has to take into account the values of both $f$ and $g$ in the entire velocity domain $\\mathbb {R}^3$ .", "Since we use barrier arguments extensively in this work, the “double nonlocality” of $Q$ is a significant source of difficulty.", "Compared to [39], the current study makes a much less stringent positivity assumption on the initial data: the main result of [39] required that no $x$ location in $\\mathbb {R}^3$ be too far from a region in which $f_{\\rm in}$ is uniformly positive, whereas our Theorem REF only requires that $f_{\\rm in}$ is uniformly positive in one single region.", "This is due to improvements in our method, rather than differences between the two equations.", "We also refer to the well-posedness theorem of [15] for a nonlinear Fokker-Planck equation (studied earlier in [45], [53]) that shares some properties with the Boltzmann and Landau equations.", "With a similar approach as [39], the authors construct a solution with initial data in $L^\\infty _{x,v}(\\mathbb {R}^6)$ .", "As in [39] and the current article, an extra assumption of Hölder continuity is needed to prove uniqueness.", "We note that the authors also prove an interesting estimate on the diffusion asymptotics of the solution." 
], [ "Existence", "The prior large-data existence results for (REF ) cited above are based on the energy method.", "To demonstrate some disadvantages of this method, let us integrate (REF ) against $\\varphi (x) f$ for a compactly supported cutoff $\\varphi $ , which is needed because $f$ and its derivatives may not decay as $|x|\\rightarrow \\infty $ .", "This gives $\\frac{1}{2} \\frac{d}{dt} \\int _{\\mathbb {R}^6} \\varphi f^2 \\, \\mathrm {d}x \\, \\mathrm {d}v\\le \\int _{\\mathbb {R}^6}\\left[ \\varphi f Q(f,f) -\\frac{1}{2} f^2 v\\cdot \\nabla _x \\varphi \\right] \\, \\mathrm {d}v \\, \\mathrm {d}x.$ One would like to bound the right-hand side in terms of $\\int \\varphi f^2 \\, \\mathrm {d}v \\, \\mathrm {d}x$ , but this is not possible with either term.", "First, the collision operator $Q(f,f)$ cannot be controlled using only $L^2_v$ -based norms of $f$ due to the kinetic factor $|v-v_*|^\\gamma $ (recall $\\gamma \\in (-3,0)$ ).", "Instead, an $L^p_v$ bound is needed, with $p>2$ depending on $\\gamma $ and $s$ .", "Therefore, to continue with $L^2$ -based energy estimates, one must seek bounds on higher derivatives of $f$ in order to use an embedding theorem.", "Second, the $Q(f,f)$ integral involves three $f$ terms, so an $L^2$ -estimate will not closeOne might hope to use the fact that, in some sense, $Q(f,\\cdot )$ involves an average over $v$ in order to close the estimate.", "Unfortunately, $Q$ has no such average in $x$ , meaning $L^\\infty _x$ -regularity of $f$ is required.. One cannot sidestep this by working in an $L^p$ space with $p \\in (2,\\infty )$ : the analogous integral will involve $p+1$ copies of $f$ and, hence, will not close.", "For these reasons, the energy method seems incompatible with working in a zeroth-order space.", "Similarly, the growth of $v\\cdot \\nabla _x \\varphi $ for large $v$ means the second term on the right cannot be controlled by $\\int \\varphi f^2 \\, \\mathrm {d}v \\, \\mathrm {d}x$ .", "(When $\\gamma +2s>0$ , there are also terms coming from $Q(f,f)$ that grow as $|v|\\rightarrow \\infty $ , leading to a similar issue, even in the spatially periodic case where $\\varphi $ is not needed.)", "One standard way to overcome this issue [5], [6], [10] is to divide $f$ by a time-dependent Gaussian $e^{(\\rho -\\kappa t) \\langle v\\rangle ^2}$ , which adds a term proportional to $\\langle v\\rangle ^2 f$ to the equation, with the correct sign to absorb the terms with growing velocity dependence.", "However, this requires $f_{\\rm in}$ to have velocity decay proportional to a Gaussian.", "More intricate methods, based on the coercivity properties of $Q(f,f)$ , have been found to deal with this velocity growth [61], [40], [42], but these also require working with polynomial decay of relatively high degree.", "Instead of the energy method, we use a barrier argument to propagate decay estimates in $L^\\infty _q$ from $t=0$ forward in time, using barriers of the form $g = N e^{\\beta t} \\langle v\\rangle ^{-q}$ with $N, \\beta , q>0$ .", "The function $g$ is a valid barrier if $q>\\gamma +2s+3$ and if $f$ also decays at a rate proportional to $\\langle v\\rangle ^{-q}$ , which we show via a detailed analysis of $Q(f, g)$ in Lemma REF .", "This argument gives a closed estimate in the space $L^\\infty _q([0,T_q]\\times \\mathbb {R}^6)$ for some $T_q>0$ depending on $\\Vert f_{\\rm in}\\Vert _{L^\\infty _q(\\mathbb {R}^6)}$ .", "To understand the regularity of our solutions for positive times, we need to propagate higher decay estimates for $f$ , 
because each step of the regularity bootstrap uses up a certain number of velocity moments.", "This brings up a subtle difficulty: since our time of existence should depend only on some fixed $L^\\infty _{q_0}$ norm of $f_{\\rm in}$ with $q_0$ small, we need to propagate higher $L^\\infty _q$ norms to a common time interval $[0,T]$ depending only on the norm of $f_{\\rm in}$ in $L^\\infty _{q_0}$ .", "We note that this is one place where the double nonlocality (see Section REF ) causes issues.", "To overcome this, we return to our barrier argument, proceeding more carefully in order to extract a small gain in the exponent $q$ , which can then be iterated to bound any $L^\\infty _q$ norm on $[0,T]$ , provided $\\Vert f_{\\rm in}\\Vert _{L^\\infty _q}$ is finite (see Lemma REF ).", "Once we have propagated sufficient decay forward in time, we would like to apply the global regularity estimates of [50].", "These estimates are an important tool in our study, but applying them to the problem we consider is not straightforward for several reasons.", "First, the authors of [50] work under the assumption $\\gamma +2s\\in [0,2]$ , while we treat any $\\gamma \\in (-3,0)$ and $s\\in (0,1)$ .", "Therefore, we need to extend the analysis of [50] to the case $\\gamma +2s< 0$ , with suitably modified hypotheses (see Section ).", "The change of variables developed in [50] to pass from local to global regularity estimates is defined in a way that does not generalize well to the case $\\gamma +2s< 0$ , and the main novelty of our work in Section is defining a suitable change of variables for this case.", "Second, the estimates of [50] require a uniform-in-$x$ positive lower bound on the mass density $\\int _{\\mathbb {R}^3} f(t,x,v)\\, \\mathrm {d}v$ , but it does not seem possible to propagate such a bound forward from time zero with current techniques (unlike in the space homogeneous case).", "Instead, we work with initial data that is pointwise positive in a small ball, and spread this positivity to the whole domain via our result in [41].", "This means we need to re-work the regularity estimates of [50] to depend quantitatively on pointwise lower bounds for $f$ rather than a lower mass density bound.", "Finally, we need to understand how the regularity of $f$ degenerates as $t\\searrow 0$ in the case of irregular initial data, which requires us to revisit some of the arguments in [50] to track the dependence on $t$ .", "Our extension of the global regularity estimates and change of variables of [50] to the case $\\gamma +2s< 0$ may be of independent interest." 
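, "To illustrate the comparison step in the barrier argument described earlier in this subsection (what follows is only a sketch; the actual proof must handle approximation and continuity issues that we suppress here), suppose that $f\le g:= Ne^{\beta t}\langle v\rangle ^{-q}$ on $[0,t_0)\times \mathbb {R}^6$ , with equality at a first crossing point $z_0=(t_0,x_0,v_0)$ .", "At $z_0$ one has $(\partial _t + v\cdot \nabla _x)(f-g)\ge 0$ , and, because $f\le g$ at time $t_0$ with equality at the velocity $v_0$ , the kernel form of the collision operator gives $Q(f,f)(z_0) \le Q(f,g)(z_0) \lesssim \Vert f(t_0,x_0,\cdot )\Vert _{L^\infty _q(\mathbb {R}^3)}\, g(z_0),$ where the last inequality is the decay estimate for $Q(f,\langle \cdot \rangle ^{-q})$ mentioned above.", "Since $(\partial _t + v\cdot \nabla _x) g = \beta g$ , taking $\beta $ larger than the implied constant times $\sup _{[0,t_0]}\Vert f(t)\Vert _{L^\infty _q}$ produces a contradiction, and this is how the bound $\Vert f(t)\Vert _{L^\infty _q(\mathbb {R}^6)}\le Ne^{\beta t}$ is propagated forward in time."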
], [ "Uniqueness", "In this section, we discuss some of the difficulties in proving uniqueness.", "Given two solutions $f$ , $g$ , the bilinearity of $Q$ implies, with $h=f-g$ , $\\partial _t h + v\\cdot \\nabla _x h = Q(h,f) + Q(g,h).$ Any standard strategy for proving uniqueness would involve bounding $h$ in some norm, using this equation or its equivalent.", "For the sake of discussion, we set aside the (nontrivial) difficulties related to velocity growth on the right-hand side of (REF ), as well as the potential lack of decay for large $x$ , to focus on a more serious difficulty: that some regularity of $f$ in the $v$ variable is needed to control the term $Q(h,f)$ , either of order $2s$ for a pointwise bound or order $s$ for integrals like $\\int g Q(h,f) \\, \\mathrm {d}v$ .", "In the context of irregular initial data, regularity estimates for $f$ must degenerate as $t\\searrow 0$ , but one may still get a good bound for $h$ if this degeneration is slow enough for a Grönwall-style argument.", "Let us distinguish between two very different regimes: If vacuum regions are present in the initial data, then the known lower bounds for $f$ , which are expected to be sharp, degenerate very quickly in such regions, at a rate like $e^{-c/t}$ (see [41]).", "The available regularization mechanisms, such as entropy dissipation or linear De Giorgi estimates, rely on lower bounds for the mass density of $f$ , and are therefore useless as $t\\searrow 0$ .", "For this reason, uniqueness of solutions in this regime is expected to be very difficult and require completely new ideas, if it even holds.", "If there are no vacuum regions in the initial data, the situation appears more hopeful, because we can use our earlier result [41] to obtain positive lower bounds for $f$ and $g$ that are uniform for small times.", "Because $f$ and $g$ satisfy good lower and upper bounds on some time interval, they enjoy the regularity provided by entropy dissipation on that interval, and one might try to exploit this regularity to prove uniqueness.", "Let us make a brief digression to explain why this approach does not work: As described in Section REF , one has an a priori bound on $\\sqrt{f}$ in $L^2_{t,x}H^s_v([0,T]\\times \\mathbb {R}^6)$ via the formal identity (REF ), which can be improved to a bound for $f$ itself in the same space, using our $L^\\infty _q$ estimates for $f$ .", "The same bounds apply to $g$ and (by the triangle inequality) $h$ .", "Integrating (REF ) against $h$ and using coercivity and trilinear estimates for $Q$ that are standard in the literature, one would obtain an estimate of the following form (recall that we are ignoring velocity weights and the possible lack of decay for large $x$ ): $\\frac{d}{dt} \\Vert h(t)\\Vert _{L^2_{x,v}}^2\\lesssim \\int \\Vert h(t,x)\\Vert _{L^2_v} \\Vert f(t,x)\\Vert _{H^s_v} \\Vert h(t,x)\\Vert _{H^s_v} \\, \\mathrm {d}x.$ To close this estimate, one would need to bound the right-hand side by a constant times $\\Vert h(t)\\Vert _{L^2_{x,v}}^2$ .", "An $L^\\infty _x$ -bound on $\\Vert f(t,x)\\Vert _{H^s_v}$ and a bound like $\\Vert h(t,x)\\Vert _{H^s_v} \\lesssim \\Vert h(t,x)\\Vert _{L^2_v}$ would be sufficient to do this, but unfortunately, we only have bounds for $f$ and $h$ in $L^2_x H^s_v$ , so it is not at all clear how to close the above argument.", "This gap between an $L^2_x$ estimate arising from the formal structure of the equation, and a desired estimate in $L^\\infty _x$ , is reminiscent of the current state of the global well-posedness problem 
for (REF ): $L^\\infty _x$ bounds for the mass, energy, and entropy densities would be sufficient to extend large-data solutions globally in time [50], but the natural conservation laws of the equation only provide bounds in $L^1_x$ for these densities.", "Bridging this gap is widely considered to be out of reach with current techniques.", "Based on this apparent similarity, we believe that our assumption of Hölder continuity for $f_{\\rm in}$ in Theorem REF is more than a technicality, and that removing it may be a difficult problem.", "Instead of entropy dissipation, one may try to apply the global Hölder estimates of De Giorgi and Schauder type from [50] to obtain enough regularity to bound $Q(h,f)$ pointwise.", "Although these estimates on $[\\tau ,T]\\times \\mathbb {R}^6$ are uniform in $x$ , they also must degenerate as $\\tau \\rightarrow 0$ since they include the case of irregular initial data.", "In Proposition REF , we determine the explicit dependence on $\\tau $ when Schauder estimates are applied to $f$ : ignoring velocity weights, one has $\\Vert f\\Vert _{C^{2s+\\alpha ^{\\prime }}_\\ell ([\\tau ,T]\\times \\mathbb {R}^6)} \\le C \\tau ^{-1 + \\frac{\\alpha -\\alpha ^{\\prime }}{2s}} \\Vert f\\Vert _{C^\\alpha _\\ell ([\\tau /2,T]\\times \\mathbb {R}^6)}^{1+(\\alpha +2s)/\\alpha ^{\\prime }},$ with $\\alpha ^{\\prime } = \\alpha \\frac{2s}{1+2s}$ .", "This exponent of $\\tau $ is consistent with a gain of regularity of order $2s+\\alpha ^{\\prime } - \\alpha $ on a kinetic cylinder of width $\\sim \\tau ^{\\frac{1}{2s}}$ in the time variable (see (REF )).", "By a similar heuristic, the global $C^\\alpha _\\ell $ estimate (Theorem REF ) on $[\\tau /2,T]\\times \\mathbb {R}^6$ should have a constant proportional to $\\tau ^{-\\frac{\\alpha }{2s}}$ .", "Combining this with (REF ), a bound for the $C^{2s+\\alpha ^{\\prime }}_\\ell $ norm in terms of $\\Vert f\\Vert _{L^\\infty }$ would give an overall $\\tau $ dependence of $\\tau ^{-1 + \\frac{\\alpha - \\alpha ^{\\prime }}{2s} - \\frac{\\alpha }{2s}(1+\\frac{\\alpha +2s}{\\alpha ^{\\prime }})},$ which is not integrable as $\\tau \\rightarrow 0$ .", "Therefore, this line of argument does not seem feasible without any additional regularity assumptions for $f_{\\rm in}$ .", "On the other hand, if $f$ were bounded in $C^\\alpha _\\ell $ uniformly for small times, an estimate of the form (REF ) would be sufficient to derive a time-integrable bound on $Q(h,f)$ in (REF ).", "This motivates our extra assumption that $f_{\\rm in}$ is Hölder continuous, and the following step-by-step strategy for proving uniqueness: Prove that the Hölder modulus of $f_{\\rm in}$ in $(x,v)$ variables is propagated forward to positive times.", "To do this, we study the function $g$ defined for $(t,x,v,\\chi ,\\nu ) \\in \\mathbb {R}^{13}$ and $m>0$ by $g(t,x,v,\\chi ,\\nu ) = \\frac{f(t,x+\\chi ,v+\\nu ) - f(t,x,v)}{(|\\chi |^2+|\\nu |^2)^{\\alpha /2}} \\langle v\\rangle ^m.$ Bounding $g$ in $L^\\infty $ on a short time interval is equivalent to controlling the weighted $\\alpha $ -Hölder seminorm of $f$ .", "Note that this is the Hölder seminorm with respect to the Euclidean scaling on $\\mathbb {R}^6$ , not the kinetic scaling that one might expect.", "This choice is imposed on us by the proof.", "Using (REF ), we derive an equation satisfied by $g$ and use Grönwall's inequality to bound $g$ on a short time interval.", "This step requires a detailed analysis of the quantity $Q(f,f)(t,x+\\chi ,v+\\nu ) - Q(f,f)(t,x,v)$ , the repeated use of annular 
decompositions of the velocity integrals defining $Q$ , and an estimate of the form (REF ) coming from a carefully scaled version of the Schauder estimates.", "This approach to propagating Hölder continuity is inspired by [25].", "Show that the Hölder regularity for $f$ in $(x,v)$ from the previous step implies Hölder regularity in $t$ as well.", "This property is clearly false for general functions on $\\mathbb {R}^7$ , so we must exploit the equation (REF ).", "The proof is surprisingly intricate and is based on controlling a finite difference in $t$ of $f$ via well-chosen barriers.", "Using the regularity from the prior two steps, apply Schauder estimates to conclude $C^{2s+\\alpha ^{\\prime }}_\\ell $ regularity for $f$ , for some $\\alpha ^{\\prime }>0$ .", "Armed with this regularity for $f$ , return to (REF ) and use (for the only time in this paper) the energy method to bound $h= f-g$ in a weighted $L^2$ -norm and establish weak-strong uniqueness.", "The energy method is chosen because of its compatibility with our notion of weak solution, but one must contend with the lack of decay for large $x$ .", "This step of the proof combines the strategy for $L^2$ -estimates developed in [40], [42] with a spatial localization method that is compatible with the transport term.", "The particular form of our localizing cutoff function (which depends on both $x$ and $v$ ) leads to extra difficulties because we cannot deal with the $x$ and $v$ integrations separately.", "We should note that this strategy requires working with regularity in all three variables because of the application of Schauder estimates, even though the important ingredient for proving uniqueness is the regularity in $v$ ." ], [ "Relaxing the Hölder continuity assumption in Theorem ", "As discussed in Section REF , proving uniqueness without any regularity assumptions on $f_{\\rm in}$ may be a difficult problem.", "In the Landau case, a recent result [43] by the first named author, jointly with W. Wang, derived a uniqueness theorem that requires $f_{\\rm in}$ to be Hölder continuous in $x$ but only logarithmically Hölder in $v$ , via Schauder estimates with time-irregular coefficients.", "See [16] for a similar Schauder estimate.", "It is likely that an analogous improvement is available for the Boltzmann equation via a refinement of the Schauder estimates in [51], though this would be nontrivial to prove.", "Even with such an improvement, Hölder regularity in $x$ would be needed for the initial data." ], [ "The case $\\gamma > 0$", "In the case $\\gamma > 0$ , the analysis of the Boltzmann equation is somewhat different because the kinetic term $|v-v_*|^\\gamma $ in $Q(f,g)$ becomes a growing weight instead of a singularity.", "Our argument in this paper for local existence uses the assumption $\\gamma \\le 0$ crucially, and a different argument would be required for $\\gamma > 0$ .", "We have proven some of our intermediate results in this paper without the restriction $\\gamma \\le 0$ , with a mind to eventually filling this gap." 
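, "One informal way to see where the sign of $\gamma $ enters (a heuristic remark only) is through the lower-order term $g(v)\int _{\mathbb {R}^3} f(v+w)|w|^{\gamma }\, \mathrm {d}w$ in the decomposition of the collision operator recalled in Section REF below: when $\gamma >0$ and $f\ge \delta 1_{B_r(v_m)}$ , this term evaluated at $g = \langle \cdot \rangle ^{-q}$ is bounded below by a constant times $\delta r^3\langle v\rangle ^{\gamma -q}$ for $|v|$ large, so the favorable bound $Q(f,\langle \cdot \rangle ^{-q})\lesssim \langle v\rangle ^{-q}$ that drives our barrier argument fails, and the barrier would have to absorb a coefficient growing like $\langle v\rangle ^{\gamma }$ ."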
], [ "Classical solutions without a locally uniform lower bound", "Our construction of classical solutions requires a locally uniform positive lower bound at time zero (condition (REF ) in Theorem REF ).", "This is automatically true if the initial data is continuous and not identically zero, but our initial data may be discontinuous, so (REF ) is an extra assumption we have to make.", "In either limits $\\delta \\searrow 0$ or $r\\searrow 0$ , we lose all quantitative control on the pointwise regularity of our solutions, and we can only recover a weak solution in the sense of Theorem REF .", "On the other hand, if $f_{\\rm in}$ is identically 0, then the solution is also identically zero for positive times, and is therefore perfectly smooth.", "This leaves open the question of regularity for solutions with initial data $f_{\\rm in}\\in L^\\infty _q(\\mathbb {R}^6)$ that is not identically zero but nowhere uniformly positive." ], [ "Decay estimates and continuation", "Continuation criteria for (REF ) are highly relevant because they represent partial progress toward the outstanding open problem of global existence of non-perturbative solutions.", "As with any short-time existence result, our Theorem REF implies a continuation criterion: solutions can be extended past any time $T$ such that $\\Vert f(t)\\Vert _{L^\\infty _q(\\mathbb {R}^6)}$ remains finite for $t\\in [0,T]$ , for some $q>2s+3$ .By [41], the lower bound condition (REF ) is automatically satisfied for any positive time $T$ , with constants depending on $T$ , as long as it holds at time zero.", "Note that the time of existence granted by Theorem REF does not depend quantitatively on $\\delta $ , $r$ , or $v_m$ .", "On the other hand, the continuation criterion of [41] (which combined the lower bounds of [41] with the continuation criterion of [50]) states that solutions can be continued as long as $\\Vert f(t)\\Vert _{L^\\infty _x (L^1_2)_v(\\mathbb {R}^6)}$ remains finite.", "The continuation criterion of [41] only applies to solutions that are smooth, rapidly decaying, and spatially periodic, and applies only when $\\gamma +2s\\in [0,2]$ .", "Ideas related to the decay analysis in the current paper could likely strengthen [41] by enlarging the class of solutions and ranges of $(\\gamma ,s)$ that can be handled, and possibly by replacing $L^\\infty _x (L^1_2)_v$ with a weaker $L^\\infty _x (L^1_q)_v$ norm.", "We plan to explore this question in a future article." ], [ "Notation", "For any $\\lambda \\in \\mathbb {R}$ , we write $\\lambda _+ = \\max \\lbrace \\lambda ,0\\rbrace $ and $\\lambda _- = \\max \\lbrace -\\lambda ,0\\rbrace $ .", "We call a constant universal if it depends only on $\\gamma $ , $s$ , and the constant $c_b$ in (REF ).", "Inside of proofs, to keep the notation clean, we often write $A \\lesssim B$ to mean $A\\le CB$ for a constant $C>0$ depending on $\\gamma $ , $s$ , $c_b$ , and the quantities in the statement of the lemma or theorem being proven.", "We also write $A\\approx B$ when $A\\lesssim B$ and $B\\lesssim A$ .", "Throughout the manuscript, it is always assumed that $\\gamma < 0$ unless otherwise indicated (some results apply to the case $\\gamma \\in [0,1]$ as well).", "We say that a solution to (REF ) is classical if $(\\partial _t + v\\cdot \\nabla _x) f$ and $Q(f,f)$ are continuous and (REF ) holds pointwise." 
], [ "Outline of the paper", "In Section , we recall and slightly extend some results from the literature that are needed for our study.", "Section extends the change of variables and global regularity estimates of [50] to the case $\\gamma +2s<0$ .", "Section is devoted to the proof of existence.", "Section addresses the extension of Hölder regularity from $(x,v)$ variables to the $t$ variable.", "Section propagates a Hölder modulus from $t=0$ to positive times, and Section finishes the proof of uniqueness.", "Section proves existence of global solutions near equilibrium.", "Appendix proves the key properties of the change of variables defined in Section , and Appendix contains some technical lemmas." ], [ "Kinetic Hölder spaces", "To study the regularity properties of the Boltzmann equation, we use the kinetic Hölder spaces from [50], [51], which we briefly recall now.", "First, let us recall two transformations that are well-adapted to the symmetries of linear kinetic equations with velocity diffusion of order $2s$ .", "For $z_1 = (t_1,x_1,v_1)$ and $z = (t,x,v)$ points of $\\mathbb {R}^7$ , define the Lie product $ z_1 \\circ z = ( t_1 + t, x_1 + x + tv_1, v_1 + v).$ and the dilation $\\delta _r(z) = (r^{2s}t, r^{1+2s}x, rv), \\quad r>0.$ Next, define the distanceIn fact, $d_\\ell $ does not satisfy the triangle inequality if $s< 1/2$ .", "(See [51].)", "This fact causes no issues in our analysis, and we refer to $d_\\ell $ as a distance regardless.", "$d_\\ell (z_1,z_2) := \\min _{w\\in \\mathbb {R}^3} \\max \\lbrace |t_1-t_2|^{\\frac{1}{2s}}, |x_1 - x_2 - (t_1-t_2) w|^{\\frac{1}{1+2s}}, |v_1 - w|, |v_2 - w|\\rbrace .$ This distance is invariant under left translations (hence the $\\ell $ subscript, which stands for left-invariant) and dilations: for any $z_1, z_2, \\xi \\in \\mathbb {R}^7$ and $r>0$ , $d_\\ell (\\xi \\circ z_1, \\xi \\circ z_2) &= d_\\ell (z_1,z_2).\\\\d_\\ell (\\delta _r(z_1), \\delta _r(z_2)) &= rd_\\ell (z_1,z_2),$ The distance $d_\\ell $ is not invariant under right translations.", "However, for right translations in the velocity variable, one has the useful property $\\begin{split}d_\\ell (z_1\\circ (0,0,w), z_2\\circ (0,0,w)) &\\le d_\\ell (z_1,z_2) + |t_1-t_2|^{1/(1+2s)} |w|^{1/(1+2s)}\\\\&\\le d_\\ell (z_1,z_2) + d_\\ell (z_1,z_2)^{2s/(1+2s)} |w|^{1/(1+2s)}\\end{split}$ We define the kinetic cylinders in a way that respects the transformations (REF ), (): $Q_r(z_0)= \\lbrace z=(t,x,v) \\in \\mathbb {R}^7 :t_0-r^{2s}< t\\le t_0, |x-x_0-(t-t_0)v_0|<r^{1+2s}, |v-v_0| < r\\rbrace .$ We often write $Q_r = Q_r(0)$ .", "Note that $Q_r = \\delta _r(Q_1)$ , and $Q_r(z_0) = z_0 \\circ Q_r$ .", "The kinetic Hölder spaces are defined in terms of approximation by polynomials.", "For any monomial $m$ in the variables $t,x,v$ of the form $ m(t,x,v) = c t^{\\alpha _0} x_1^{\\alpha _1} x_2^{\\alpha _2} x_3^{\\alpha _3} v_1^{\\alpha _4} v_2^{\\alpha _5} v_3^{\\alpha _6},$ with $c\\ne 0$ , we define the kinetic degree as $\\mbox{deg}_k m = 2s\\alpha _0 + (1+2s)(\\alpha _1 + \\alpha _2 + \\alpha _3) + \\alpha _4 + \\alpha _5 + \\alpha _6.$ This definition is compatible with the scaling $(t,x,v) \\mapsto \\delta _r(t,x,v)$ .", "For any nonzero polynomial $p(t,x,v)$ , we define its kinetic degree as the maximum of $\\mbox{deg}_k$ over all monomial terms in $p$ .", "Now we are ready to define the kinetic Hölder spaces: Definition 2.1 Given any $\\alpha >0$ and any open set $D\\subset \\mathbb {R}^7$ , a continuous function $f:D\\rightarrow \\mathbb {R}$ is $\\alpha $ -Hölder 
continuous at $z_0\\in D$ if there exists a polynomial $p(t,x,v)$ with $\\mbox{deg}_k(p)< \\alpha $ , and $|f(z) - p(z)| \\le C d_\\ell (z,z_0)^\\alpha , \\quad z\\in D.$ We say $f\\in C^\\alpha _\\ell (D)$ if the inequality (REF ) holds at all points of $D$ .", "The semi-norm $[f]_{C^\\alpha _\\ell (D)}$ is the smallest value of the constant $C$ such that (REF ) holds for all $z,z_0\\in D$ (with the polynomial $p$ depending on $z_0$ ).", "The norm $\\Vert f\\Vert _{C^\\alpha _\\ell (D)}$ is defined as $\\Vert f\\Vert _{L^\\infty (D)} + [f]_{C^\\alpha _\\ell (D)}$ .", "For functions defined on open subsets $D\\subset \\mathbb {R}^6$ , the seminorm $[f]_{C^\\alpha _{\\ell ,x,v}(D)}$ can be defined similarly as the smallest constant $C>0$ such that for every $(x_0,v_0)\\in D$ , there is a polynomial $p(x,v)$ with $\\mbox{deg}_k(p)< \\alpha $ , such that $|f(x,v) - p(x,v)|\\le Cd_\\ell ((0,x_0,v_0), (0,x,v))^\\alpha .$ We also define the global kinetic Hölder spaces with polynomial weights: Definition 2.2 Given $\\alpha , q>0$ and $0<\\tau <T$ , we define the weighted semi-norm $ [f]_{C^\\alpha _{\\ell ,q}([\\tau ,T]\\times 6)} := \\sup \\left\\lbrace (1+|v|)^q [f]_{C^\\alpha _\\ell (Q_r(z))} : r\\in (0,1] \\text{ and } Q_r(z) \\subset [\\tau ,T]\\times \\mathbb {R}^6\\right\\rbrace .$ We say $f\\in C^\\alpha _{\\ell ,q}([\\tau ,T]\\times \\mathbb {R}^6)$ if the norm $ \\Vert f\\Vert _{C^\\alpha _{\\ell ,q}([\\tau ,T]\\times \\mathbb {R}^6)} = \\Vert f\\Vert _{L^\\infty _q([\\tau ,T]\\times \\mathbb {R}^6)} + [f]_{C^\\alpha _{\\ell ,q}([\\tau ,T]\\times \\mathbb {R}^6)}$ is finite." ], [ "Well-posedness for regular initial data", "As part of our existence proof, we need to construct solutions corresponding to smooth, rapidly decaying approximations of our initial data.", "For this, we use the following proposition, which combines two short-time existence results from the literature.", "We state here a non-sharp result with assumptions that are uniform in $\\gamma $ and $s$ for the sake of brevity (and because we do not need the sharp version).", "Proposition 2.3 Let $\\gamma \\in (-3,0)$ and $s\\in (0,1)$ .", "Let $M^3$ be the 3-dimensional torus of side length $M>0$ .", "For any $k\\ge 6$ , there exists $n_0,p_0>0$ depending on universal constants and $k$ , such that for any initial data $f_{\\rm in}\\ge 0$ defined for $(x,v)\\in M^3\\times \\mathbb {R}^3$ with $f_{\\rm in} \\in H^k_{n}\\cap L^\\infty _{p}(3_M\\times \\mathbb {R}^3)$ with $n\\ge n_0$ and $p\\ge p_0$ , there exists a unique solution $f\\ge 0$ to (REF ) in $C^0([0,T], H^k_{n}\\cap L^\\infty _{p}( M^3\\times \\mathbb {R}^3))$ for some $T>0$ depending on $\\Vert f_{\\rm in} \\Vert _{H^k_{n}} + \\Vert f_{\\rm in}\\Vert _{L^\\infty _{p}}$ , with $f(0,x,v) = f_{\\rm in}(x,v)$ .", "The proofs for the case $M=1$ can found in the following works: for any $s\\in (0,1)$ and $\\max \\lbrace -3,-3/2-2s\\rbrace < \\gamma < 0$ , see [40].", "For $s\\in (0,1)$ and $\\gamma \\in (-3, -2s)$ , see [42].", "To extend this result to the case of general $M>0$ , we rescale to the torus 3 of side length 1 by defining $\\tilde{f}_{\\rm in} := M^{\\gamma +3} f_{\\rm in}^\\varepsilon (M x, M v), \\quad x\\in \\mathbb {T}^3, v\\in \\mathbb {R}^3.$ The result for the $M=1$ case gives us a solution $\\tilde{f}$ on $[0,T]\\times 3\\times \\mathbb {R}^3$ , and to scale back to the torus of size $M$ , we define $f^\\varepsilon (t,x,v) := \\frac{1}{M^{\\gamma +3}}\\tilde{f}^\\varepsilon \\left( t, \\frac{x}{M}, \\frac{v}{M}\\right), \\quad t\\in 
[0,T_\\varepsilon ], x\\in {3_M}, v\\in \\mathbb {R}^3.$ By a direct calculation, $f$ solves the Boltzmann equation (REF ), with initial data $f_{\\rm in}$ .", "The function $f$ lies in the same regularity spaces as $\\tilde{f}$ ." ], [ "Carleman representation", "The collision operator $Q(f,g)$ defined in (REF ) can be written as a sum of two terms $Q = Q_{\\rm s} + Q_{\\rm ns}$ , where the first (“singular”) term $Q_{\\rm s}$ acts as a nonlocal diffusion operator of order $2s$ .", "The second (“nonsingular”) term $Q_{\\rm ns}$ is a lower-order convolution term.", "By adding and subtracting $f(v_*^{\\prime })g(v)$ inside the integral in (REF ), one has $\\begin{split}&Q_{\\rm s}(f,g) = \\int _{\\mathbb {R}^3} \\int _{{\\mathbb {S}}^2} (g(v^{\\prime })-g(v)) f(v_*^{\\prime }) B(|v-v_*|,\\sigma )\\, \\mathrm {d}\\sigma \\, \\mathrm {d}v_*\\qquad \\text{and}\\\\&Q_{\\rm ns}(f,g) = g(v) \\int _{\\mathbb {R}^3} \\int _{{\\mathbb {S}}^2} (f(v_*^{\\prime })-f(v_*)) B(|v-v_*|,\\sigma ) \\, \\mathrm {d}\\sigma \\, \\mathrm {d}v_*.\\end{split}$ It can be shown [2], [63] that $Q_{\\rm s}(f,\\cdot )$ is equal to an integro-differential operator with kernel depending on $f$ : Lemma 2.4 [63] The term $Q_{\\rm s}(f,g)$ can be written $Q_{\\rm s}(f,g) = \\int _{\\mathbb {R}^3} (g(v^{\\prime })-g(v)) K_f (v, v^{\\prime }) \\, \\mathrm {d}v^{\\prime },$ with kernel $K_f(v,v^{\\prime }) = \\frac{1}{|v^{\\prime }-v|^{3+2s} }\\int _{(v^{\\prime }-v)^\\perp } f(v+w) |w|^{\\gamma +2s+1} \\tilde{b}(\\cos \\theta ) \\, \\mathrm {d}w,$ where $\\tilde{b}$ is uniformly positive and bounded.", "Above, we have used the shorthand $(v-v^{\\prime })^\\perp $ to mean $\\lbrace w: w\\cdot (v-v^{\\prime }) = 0 \\rbrace $ .", "For the term $Q_{\\rm ns}$ , we have the following formula, which is related to the Cancellation Lemma of [4]: Lemma 2.5 The term $Q_{\\rm ns}(f,g)$ can be written $Q_{\\rm ns}(f,g) = Cg(v)\\int _{\\mathbb {R}^3} f(v+w) |w|^\\gamma \\, \\mathrm {d}w,$ for a constant $C>0$ depending only on the bounds (REF ) for the collision cross-section $b$ ." 
], [ "Self-generating lower bounds", "The main result of [41] states that if $f_{\\rm in}$ is uniformly positive in some ball in $(x,v)$ space, this positivity is spread instantly to the entire domain: Theorem 2.6 [41] Let $\\gamma \\in (-3,1)$ and $s\\in (0,1)$ .", "Suppose that $f$ is a classical solution ($C^1$ in $t,x$ and $C^2$ in $v$ ) of (REF ) on $[0,T]\\times \\mathbb {R}^6$ , with initial data $f(0,x,v)\\ge 0$ satisfying the lower bound (REF ), i.e.", "$f(0,x,v) \\ge \\delta , \\quad (x,v) \\in B_r(x_m,v_m),$ for some $x_m,v_m\\in \\mathbb {R}^3$ and $\\delta , r>0$ .", "Assume that $f$ satisfies $\\begin{split}&\\sup _{t\\in [0,T],x\\in \\mathbb {R}^3} \\int _{\\mathbb {R}^3} \\langle v\\rangle ^{(\\gamma +2s)_+} f(t,x,v) \\, \\mathrm {d}v \\le K_0, \\qquad \\text{ and}\\\\&\\sup _{t\\in [0,T],x\\in \\mathbb {R}^3} \\Vert f(t,x,\\cdot )\\Vert _{L^p(\\mathbb {R}^3)} \\le P_0\\quad \\text{ for some } p>\\frac{3}{3+\\gamma +2s} \\quad (\\mbox{only if } \\gamma + 2s < 0).\\end{split}$ Then $ f(t,x,v) \\ge \\mu (t,x) e^{-\\eta (t,x)|v|^2}, \\quad (t,x,v) \\in (0,T]\\times \\mathbb {R}^6,$ where $\\mu (t,x)$ and $\\eta (t,x)$ are uniformly positive and bounded on any compact subset of $(0,T]\\times \\mathbb {R}^3$ , and depend only on $t$ , $T$ , $|x-x_m|$ , $\\delta $ , $r$ , $v_m$ , $K_0$ , and $P_0$ .", "Furthermore, near the point $x_m$ , the lower bounds are uniform up to time zero: $f(t,x,v) \\ge \\mu e^{-\\eta |v|^2}, \\quad (t,x,v) \\in (0,T]\\times B_{r/2}(x_m)\\times \\mathbb {R}^3,$ for constants $\\mu , \\eta >0$ depending on $\\delta $ , $r$ , $v_m$ , $K_0$ , and $P_0$ .", "As stated in [41], this theorem requires an upper bound on the energy density $\\int _{\\mathbb {R}^3} |v|^2 f(t,x,v) \\, \\mathrm {d}v$ .", "However, it is clear from the proof that a bound on the $\\gamma +2s$ moment is sufficient.", "More specifically, the only purpose of the energy density bound is to estimate $Q_{\\rm s}$ from above via Lemma REF below, and a bound for $\\int _{\\mathbb {R}^3} \\langle v\\rangle ^{\\gamma +2s} f\\, \\mathrm {d}v$ suffices to estimate the convolution $f\\ast |v|^{\\gamma +2s}$ in Lemma REF .", "We should also note that (REF ) is not stated as part of the main result of [41], but follows immediately from [41].", "The following lemma gives a cone of nondegeneracy for the collision kernel $K_f$ .", "When combined with the previous theorem, it provides coercivity estimates for $Q_s(f,\\cdot )$ that depend only on the initial data and the quantities in (REF ).", "Lemma 2.7 [41] Let $f:\\mathbb {R}^3\\rightarrow \\mathbb {R}$ be a nonnegative function with $f(v) \\ge \\delta 1_{B_r(v_m)}$ for some $\\delta , r > 0$ and $v_m\\in \\mathbb {R}^3$ .", "There exist constants $\\lambda , \\mu , C > 0$ (depending on $\\delta $ , $r$ , and $|v_m|$ ) such that for each $v\\in \\mathbb {R}^3$ , there is a symmetric subset of the unit sphere $A(v)\\subset \\mathbb {S}^2$ such that: $|A(v)|_{\\mathcal {H}^2}\\ge \\mu (1+|v|)^{-1}$ .", "where $|\\cdot |_{\\mathcal {H}^2}$ is the 2-dimensional Hausdorff measure.", "For all $\\sigma \\in A(v)$ , $|\\sigma \\cdot v|\\le C$ .", "Whenever $(v-v^{\\prime })/|v-v^{\\prime }| \\in A(v)$ , $K_f(v,v^{\\prime }) \\ge \\lambda (1+|v|)^{1+\\gamma +2s} |v^{\\prime }-v|^{-3-2s}.$ These results also give a pointwise upper bound that we will need in our analysis.", "The following proposition combines the $L^\\infty $ bounds of [63] with the cone of nondegeneracy of Lemma REF : Proposition 2.8 [41] Let $f$ be a solution of the Boltzmann equation (REF 
) on $[0,T]\\times 3\\times \\mathbb {R}^3$ that satisfies (REF ).", "Assume that the initial data satisfies (REF ).", "Then $f$ satisfies an $L^\\infty $ bound that is uniform away from $t=0$ , i.e.", "$ \\Vert f(t,\\cdot ,\\cdot )\\Vert _{L^\\infty (3\\times \\mathbb {R}^3)} \\le C(1+t^{-3/(2s)}),$ for a constant depending on $T$ , $\\delta $ , $r$ , $v_0$ , $K_0$ , and (if $\\gamma +2s<0$ ) $P_0$ ." ], [ "Local regularity estimates", "We recall the local regularity estimates of [48] and [51] for linear kinetic equations of the following type: $\\partial _t f + v\\cdot \\nabla _x f = \\int _{\\mathbb {R}^3} (f(t,x,v^{\\prime }) - f(t,x,v))K(t,x,v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } + h,$ where the kernel $K$ satisfies suitable ellipticity assumptions.", "First, we have a De Giorgi-type estimate that gives Hölder continuity of solutions: Theorem 2.9 ([48]) Let $K: (-1,0]\\times B_1 \\times B_2 \\times \\mathbb {R}^3 \\rightarrow \\mathbb {R}_+$ be a kernel satisfying the following ellipticity conditions, uniformly in $t$ and $x$ , for some $\\lambda , \\Lambda > 0$ : $&\\text{For all } v\\in B_2, r>0,\\quad \\inf _{|e|=1}\\int _{B_r(v)} ((v^{\\prime }-v)\\cdot e)^2_+ K(t,x,v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } \\ge \\lambda r^{2-2s},\\quad \\text{(if $s< 1/2$)},$ $&\\text{For any } f(v) \\text{ supported in } B_2, &\\\\&\\phantom{ \\text{For any } f(v) } \\iint _{B_2\\times \\mathbb {R}^3} f(v) (f(v) - f(v^{\\prime })) K(t,x,v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } \\, \\mathrm {d}v \\ge \\lambda \\Vert f\\Vert _{\\dot{H}^s(\\mathbb {R}^3)}^2 - \\Lambda \\Vert f\\Vert _{L^2(\\mathbb {R}^3)}^2,\\nonumber $ $&\\text{For all } v\\in B_2 , r>0, \\quad \\int _{\\mathbb {R}^3\\setminus B_r(v)} K(t,x,v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } \\le \\Lambda r^{-2s},&$ $&\\text{For all } v^{\\prime }\\in B_2 , r>0, \\quad \\int _{\\mathbb {R}^3\\setminus B_r(v^{\\prime })} K(t,x,v,v^{\\prime }) \\, \\mathrm {d}v \\le \\Lambda r^{-2s},&$ $&\\text{For all } v \\in B_{7/4}, \\quad \\left| \\mbox{p.v.}", "\\int _{B_{1/4}(v)}(K(t,x,v,v^{\\prime }) - K(t,x,v^{\\prime },v)) \\, \\mathrm {d}v^{\\prime }\\right| \\le \\Lambda ,&$ $&\\text{For all } r\\in [0,1/4] \\text{ and } v\\in B_{7/4}, &\\\\&\\phantom{\\text{For }}\\left| \\mbox{p.v.}", "\\int _{B_r(v)} (K(t,x,v,v^{\\prime }) - K(t,x,v^{\\prime },v)) (v^{\\prime }-v) \\, \\mathrm {d}v^{\\prime }\\right| \\le \\Lambda (1+r^{1-2s}), \\quad \\text{(if $s\\ge 1/2$)}.\\nonumber $ Let $f:(-1,0]\\times B_1\\times \\mathbb {R}^3\\rightarrow \\mathbb {R}$ be a bounded function that is a solution of (REF ) in $Q_1$ , for some bounded function $h$ .", "Then $f$ is Hölder continuous in $Q_{1/2}$ , and $ \\Vert f\\Vert _{C^\\alpha _\\ell (Q_{1/2})} \\le C\\left( \\Vert f\\Vert _{L^\\infty ((-1,0]\\times B_1\\times \\mathbb {R}^3)} + \\Vert h\\Vert _{L^\\infty (Q_1)}\\right).$ The constants $C>0$ and $\\alpha \\in (0,1)$ depend only on $\\lambda $ and $\\Lambda $ .", "Next, we recall Schauder-type estimates for linear kinetic integro-differential equations of the form (REF ).", "As in [51], the kernel is assumed to be elliptic in the sense of the following definition: Definition 2.10 (Ellipticity class) Given $s\\in (0,1)$ and $0< \\lambda < \\Lambda $ , a kernel $K:\\mathbb {R}^3\\setminus \\lbrace 0\\rbrace \\rightarrow \\mathbb {R}_+$ lies in the ellipticity class of order $2s$ if $K(w) = K(-w)$ .", "For all $r>0$ , $\\int _{B_r} |w|^2 K(w) \\, \\mathrm {d}w \\le \\Lambda r^{2-2s}.$ For any $R>0$ and $\\varphi \\in C^2(B_R)$ , $\\iint 
_{B_R\\times B_R} |\\varphi (v) - \\varphi (v^{\\prime })|^2 K(v^{\\prime }-v) \\, \\mathrm {d}v^{\\prime } \\, \\mathrm {d}v \\ge \\lambda \\iint _{B_{R/2}\\times B_{R/2}} |\\varphi (v) - \\varphi (v^{\\prime })|^2 |v^{\\prime }- v|^{-3-2s} \\, \\mathrm {d}v^{\\prime } \\, \\mathrm {d}v.$ If $s<1/2$ , assume in addition that for each $r>0$ , $\\inf _{|e|=1} \\int _{B_r} (w\\cdot e)^2_+ K(w) \\, \\mathrm {d}w \\ge \\lambda r^{2-2s}.$ For technical convenience, we quote the scaled form of the Schauder estimate on cylinders $Q_{2r}$ with $r>0$ , as in [50]: Theorem 2.11 ([51], [50]) Let $0 < \\alpha < \\min (1,2s)$ , and let $\\alpha ^{\\prime } = \\frac{2s}{1+2s}\\alpha $ .", "Let $f:(-(2r)^{2s},0]\\times B_{(2r)^{1+2s}} \\times \\mathbb {R}^3\\rightarrow \\mathbb {R}$ be a solution of the linear equation (REF ) in $Q_{2r}$ for some bounded function $h$ and some integral kernel $K_z(w) = K(t,x,v,v+w): (-(2r)^{2s}, 0]\\times B_{(2r)^{1+2s}}\\times \\mathbb {R}^3\\times \\mathbb {R}^3\\rightarrow [0,\\infty )$ satisfying, for each $t$ , $x$ , and $v$ , the ellipticity assumptions of Definition REF for uniform constants $0< \\lambda < \\Lambda $ , as well as the Hölder continuity assumption $\\int _{B_\\rho } |K_{z_1}(w) - K_{z_2}(w)| |w|^2 \\, \\mathrm {d}w \\le A_0 \\rho ^{2-2s} d_\\ell (z_1,z_2)^{\\alpha ^{\\prime }}, \\quad z_1,z_2\\in Q_{2r}, \\rho >0,$ for some $\\alpha >0$ .", "If $f\\in C^\\alpha _\\ell ((-(2r)^{2s},0]\\times B_{(2r)^{1+2s}} \\times \\mathbb {R}^3)$ and $h\\in C^\\alpha _\\ell (Q_{2r})$ , then $\\begin{split}\\Vert f\\Vert _{C^{2s+\\alpha ^{\\prime }}_\\ell (Q_{r})} &\\le C\\left(\\max \\left(r^{-2s-\\alpha ^{\\prime }+\\alpha }, A_0^{(2s+\\alpha ^{\\prime }-\\alpha )/\\alpha ^{\\prime }} \\right)[f]_{C^\\alpha _\\ell ([-(2r)^{2s},0]\\times B_{(2r)^{1+2s}}\\times \\mathbb {R}^3)}\\right.\\\\&\\qquad \\left.", "+ [h]_{C^{\\alpha ^{\\prime }}_\\ell (Q_{2r})} + \\max (r^{-\\alpha ^{\\prime }}, A_0)\\Vert h\\Vert _{L^\\infty (Q_{2r})}\\right).\\end{split}$ The constant $C$ depends on $s$ , $\\lambda $ , and $\\Lambda $ .", "Remark 2.12 The local estimates of Theorems REF and REF impose a number of conditions on the integral kernel $K$ .", "When the kernel is defined in terms of a function $f$ according to the formula for $K_f$ from (REF ), one must place appropriate conditions on $f$ so that the kernel $K_f$ satisfies all the hypotheses of these two theorems.", "Regarding the coercivity conditions (REF ), (REF ), (REF ), and (REF ), it is understood in the literature (see [48], [20], [50]) that all of these conditions follow from the existence of a cone of nondegeneracy as in Lemma REF .", "The upper bound conditions (REF ) and (REF ) from Theorem REF hold for $K_f$ (locally in $v$ ) whenever the convolution $f\\ast |\\cdot |^{\\gamma +2s} = \\int _{\\mathbb {R}^3} f(v+w)|w|^{\\gamma +2s} \\, \\mathrm {d}w$ is bounded.", "This is shown in [48].", "The cancellation conditions (REF ) and (REF ) hold whenever the convolutions $f\\ast |\\cdot |^\\gamma $ and $f\\ast |\\cdot |^{\\gamma +1}$ are bounded, from [48].", "In particular, these conditions all hold whenever $f\\in L^\\infty _q(\\mathbb {R}^3)$ for $q>\\gamma +2s+3$ .", "We emphasize that these four lemmas from [48] are proven for any $\\gamma $ and $s$ such that $\\gamma +2s\\le 2$ , including in the case $\\gamma +2s<0$ .", "From Lemma REF , we see that the upper bound (REF ) is also satisfied whenever the convolution $f\\ast |\\cdot |^{\\gamma +2s+3}$ is bounded.", "To sum up, this discussion shows that the kernel 
$K_f$ defined by (REF ) satisfies the hypotheses of Theorems REF and REF whenever $f\\in L^\\infty _q(\\mathbb {R}^3)$ with $q>\\gamma +2s+3$ and $f$ satisfies a pointwise lower bound condition as in Lemma REF , with constants depending on $|v|$ , $\\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)}$ , $q$ , $\\delta $ , $r$ , and $|v_m|$ .", "In general, these estimates for $K_f$ degenerate as $|v|\\rightarrow \\infty $ , which means $K_f$ is uniformly elliptic on any fixed bounded domain in velocity space, but not uniformly elliptic globally.", "The change of variables $\\mathcal {T}_0$ , described in [50] and Section below, addresses this difficulty." ], [ "Estimates for the collision operator", "First, we have an integral estimate on annuli for the kernel $K_f$ defined in (REF ): Lemma 2.13 [63] For any $r>0$ , $\\int _{B_{2r}\\setminus B_r} K_f(v,v+w) \\, \\mathrm {d}w \\le C \\left( \\int _{\\mathbb {R}^3} f(v+w)|w|^{\\gamma +2s} \\, \\mathrm {d}w\\right) r^{-2s}.$ The following two closely related estimates can be proven by writing the integral over $B_r$ (respectively, $B_r^c$ ) as a sum of integrals over $B_{r2^{-n}}\\setminus B_{r2^{-n-1}}$ for $n=0,1,2,\\ldots $ (respectively $n=-1,-2,\\ldots $ ) and applying Lemma REF for each $n$ : Lemma 2.14 For any $r>0$ , $\\int _{B_{r}} K_f(v,v+w) |w|^2 \\, \\mathrm {d}w\\le C \\left( \\int _{\\mathbb {R}^3} f(v+w)|w|^{\\gamma +2s} \\, \\mathrm {d}w\\right) r^{2-2s}.$ $\\int _{B_r^c} K_f(v,v+w) \\, \\mathrm {d}w\\le C \\left( \\int _{\\mathbb {R}^3} f(v+w)|w|^{\\gamma +2s} \\, \\mathrm {d}w\\right) r^{-2s}.$ Lemma 2.15 [47] For any bounded, $C^2$ function $\\varphi $ on $\\mathbb {R}^3$ , the following inequality holds: $|Q_s(f, \\varphi )|\\le C\\left(\\int _{\\mathbb {R}^3} f(v+w) |w|^{\\gamma +2s} \\, \\mathrm {d}w\\right)\\Vert \\varphi \\Vert _{L^\\infty (\\mathbb {R}^3)}^{1-s}\\Vert D^2 \\varphi \\Vert _{L^\\infty (\\mathbb {R}^3)}^s.$ The next lemma, which appears to be new, is related to [40], but the statement here is sharper in terms of the decay exponent.", "The small gain in the exponent provided by this lemma will be crucial in propagating higher polynomial decay estimates forward in time.", "Lemma 2.16 For any $q_0> \\gamma + 2s + 3$ let $f\\in L^\\infty _{q_0}(\\mathbb {R}^3)$ be a nonnegative function, and choose $q\\in [q_0, q_0-\\gamma ]$ .", "(Recall that $\\gamma < 0$ .)", "Then there holds $ Q(f,\\langle \\cdot \\rangle ^{-q})(v) \\le C\\Vert f\\Vert _{L^\\infty _{q_0}(\\mathbb {R}^3)}\\langle v\\rangle ^{-q}.$ The constant $C$ depends on universal constants and $q$ .", "Writing $Q = Q_{\\rm s} + Q_{\\rm ns}$ , from Lemma REF we have $ Q_{\\rm ns} (f, \\langle \\cdot \\rangle ^{-q})(v) \\approx \\langle v\\rangle ^{-q} \\int _{\\mathbb {R}^3} f(v-w) |w|^{\\gamma } \\, \\mathrm {d}w \\lesssim \\langle v\\rangle ^{-q} \\Vert f\\Vert _{L^\\infty _{q_0}(\\mathbb {R}^3)},$ by the convolution estimate Lemma REF , since $q_0> \\gamma +2s+3 > \\gamma + 3$ .", "For the singular term, Lemma REF gives $ Q_{\\rm s} (f,\\langle \\cdot \\rangle ^{-q})(v)= \\int _{\\mathbb {R}^3} K_f(v,v^{\\prime }) [\\langle v^{\\prime }\\rangle ^{-q} - \\langle v\\rangle ^{-q}] \\, \\mathrm {d}v^{\\prime }.$ If $|v|\\le 2$ , then Lemma REF implies $\\begin{split}Q_{\\rm s}(f,\\langle \\cdot \\rangle ^{-q})(v) &\\lesssim \\left(\\int _{\\mathbb {R}^3} f(v+w) |w|^{\\gamma +2s} \\, \\mathrm {d}w \\right) \\Vert \\langle v\\rangle ^{-q} \\Vert _{L^\\infty (\\mathbb {R}^3)}^{1-s} \\Vert D^2\\langle v\\rangle ^{-q}\\Vert _{L^\\infty (\\mathbb 
{R}^3)}^s\\\\&\\lesssim \\Vert f\\Vert _{L^\\infty _{q_0}(\\mathbb {R}^3)}\\lesssim \\Vert f\\Vert _{L^\\infty _{q_0}(\\mathbb {R}^3)} \\langle v\\rangle ^{-q},\\end{split} $ since $v$ lives on a bounded domain and $q_0 > \\gamma + 2s + 3$ .", "When $|v|\\ge 2$ , we write the integral over $\\mathbb {R}^3$ as an infinite sum by defining, for each integer $k$ , the annulus $A_k(v) = B_{2^k|v|}(v) \\setminus B_{2^{k-1}|v|}(v)$ .", "For the terms with $k\\le -1$ , for $v^{\\prime } \\in A_k(v) \\subset B_{|v|/2}(v)$ we Taylor expand $g(v) := \\langle v\\rangle ^{-q}$ to obtain $ g(v^{\\prime }) - g(v) = (v^{\\prime }-v) \\cdot \\nabla g(v) + \\frac{1}{2} (v^{\\prime }-v) \\cdot (D^2g (z) (v^{\\prime }-v)), \\quad \\text{for some } z\\in B_{|v|/2}(v).$ The symmetry of the kernel $K_f$ implies $\\int _{A_k(v)} (v^{\\prime }-v) \\cdot \\nabla g(v) K_f(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } = 0$ .", "We then have, using Lemma REF and that $q_0 > 3 + \\gamma + 2s$ , $\\begin{split}\\Big |\\sum _{k\\le -1} \\int _{A_k(v)} &K_f(v,v^{\\prime }) [\\langle v^{\\prime }\\rangle ^{-q} - \\langle v\\rangle ^{-q}] \\, \\mathrm {d}v^{\\prime }\\Big |\\lesssim \\Vert D^2 g\\Vert _{L^\\infty (B_{|v|/2}(v))} \\sum _{k\\le -1} \\int _{A_k(v)}|v^{\\prime }-v|^2 K_f(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime }\\\\&\\lesssim \\langle v\\rangle ^{-q-2}\\left(\\int _{\\mathbb {R}^3} f(v+w) |w|^{\\gamma +2s}\\, \\mathrm {d}w\\right) \\sum _{k\\le -1} (2^{k-1} |v|)^{2-2s}\\\\&\\lesssim \\langle v\\rangle ^{-q-2} \\Vert f\\Vert _{L^\\infty _{q_0}(\\mathbb {R}^3)} \\langle v\\rangle ^{(\\gamma +2s)_+} |v|^{2-2s}\\lesssim \\langle v\\rangle ^{-q}\\Vert f\\Vert _{L^\\infty _{q_0}(\\mathbb {R}^3)}.\\end{split}$ For the terms with $k\\ge 0$ , we further divide $A_k(v)$ into $A_k(v) \\cap B_{|v|/2}(0)$ and $A_k(v) \\setminus B_{|v|/2}(0)$ .", "(Note that $A_k(v) \\cap B_{|v|/2}(0)$ is empty unless $k = 0$ or $k=1$ .)", "In $A_k(v) \\setminus B_{|v|/2}(0)$ , we use $\\langle v^{\\prime }\\rangle ^{-q} \\lesssim \\langle v\\rangle ^{-q}$ and Lemma REF to write $\\begin{split}\\sum _{k\\ge 2}\\int _{A_k(v)\\setminus B_{|v|/2}} K_f(v,v^{\\prime })[\\langle v^{\\prime }\\rangle ^{-q} - \\langle v\\rangle ^{-q}] \\, \\mathrm {d}v^{\\prime }&\\lesssim \\langle v\\rangle ^{-q} \\left(\\int _{\\mathbb {R}^3} f(v+w) |w|^{\\gamma +2s} \\, \\mathrm {d}w\\right)\\sum _{k\\ge 2} (2^{k-1}|v|)^{-2s}\\\\&\\lesssim \\Vert f\\Vert _{L^\\infty _{q_0}(\\mathbb {R}^3)} \\langle v\\rangle ^{-q} ,\\end{split}$ where we again used that $q_0> \\gamma +2s+3$ .", "It only remains to bound the integral over $v^{\\prime } \\in B_{|v|/2}(0)$ .", "From [42], we have $|K_f(v,v^{\\prime })| \\lesssim \\frac{1}{|v-v^{\\prime }|^{3+2s}} \\Vert f\\Vert _{L^\\infty _{q_0}} \\langle v\\rangle ^{\\gamma +2s+3-q_0}\\qquad \\text{if } |v^{\\prime }|\\le |v|/2.$ This implies $\\begin{split}\\int _{B_{|v|/2}} K_f(v,v^{\\prime }) &[\\langle v^{\\prime }\\rangle ^{-q} - \\langle v\\rangle ^{-q}] \\, \\mathrm {d}v^{\\prime }\\lesssim \\Vert f\\Vert _{L^\\infty _{q_0}} \\int _{B_{|v|/2}}\\frac{ \\langle v\\rangle ^{\\gamma +2s+3-q_0}}{|v-v^{\\prime }|^{3+2s}} \\langle v^{\\prime }\\rangle ^{-q} \\, \\mathrm {d}v^{\\prime }\\\\&\\approx \\Vert f\\Vert _{L^\\infty _{q_0}} \\langle v\\rangle ^{-q_0+\\gamma } \\int _{B_{|v|/2}} \\langle v^{\\prime }\\rangle ^{-q}\\, \\mathrm {d}v^{\\prime },\\end{split}$ where we used the nonnegativity of $K_f$ to discard the term $-\\langle v\\rangle ^{-q}$ inside the integral and we also used that $|v^{\\prime }-v|\\ge |v|/2$ .", 
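"(Here the final integral can be bounded by an elementary computation in polar coordinates: since $|v|\ge 2$ , we have $|v|/2 \approx \langle v\rangle $ , and therefore $\int _{B_{|v|/2}} \langle v^{\prime }\rangle ^{-q}\, \mathrm {d}v^{\prime } \lesssim {\left\lbrace \begin{array}{ll} 1, &\quad q>3,\\ \log \langle v\rangle , &\quad q=3,\\ \langle v\rangle ^{3-q}, &\quad q<3,\end{array}\right.} $ which is the source of the three cases considered below.)", 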
"When $q >3$ , the final integral is finite and we clearly find $\\int _{B_{|v|/2}} K_f(v,v^{\\prime }) [\\langle v^{\\prime }\\rangle ^{-q} - \\langle v\\rangle ^{-q}] \\, \\mathrm {d}v^{\\prime }\\lesssim \\Vert f\\Vert _{L^\\infty _{q_0}} \\langle v\\rangle ^{-q_0 + \\gamma }\\le \\Vert f\\Vert _{L^\\infty _{q_0}}\\langle v\\rangle ^{- q},$ due to the condition $q \\le q_0 - \\gamma $ .", "When $q < 3$ , the above becomes $\\begin{split}\\int _{B_{|v|/2}} K_f(v,v^{\\prime }) [\\langle v^{\\prime }\\rangle ^{-q} - \\langle v\\rangle ^{-q}] \\, \\mathrm {d}v^{\\prime }&\\lesssim \\Vert f\\Vert _{L^\\infty _{q_0}} \\langle v\\rangle ^{-q_0 + \\gamma + 3 - q}\\\\&\\le \\Vert f\\Vert _{L^\\infty _{q_0}} \\langle v\\rangle ^{-(3+\\gamma + 2s) + \\gamma + 3 - q}= \\Vert f\\Vert _{L^\\infty _{q_0}} \\langle v\\rangle ^{- 2s - q}\\end{split}$ since $q_0 > 3 + \\gamma + 2s$ .", "The case $q=3$ is the same up to an additional $\\log \\langle v\\rangle $ factor, which is controlled by $\\langle v\\rangle ^{-2s}$ .", "Hence, in all cases, we find $\\int _{B_{|v|/2}} K_f(v,v^{\\prime }) [\\langle v^{\\prime }\\rangle ^{-q} - \\langle v\\rangle ^{-q}] \\, \\mathrm {d}v^{\\prime }\\lesssim \\Vert f\\Vert _{L^\\infty _{q_0}} \\langle v\\rangle ^{- q}.$ This completes the proof." ], [ "Change of variables and global regularity estimates", "The regularity estimates and continuation criterion of Imbert-Silvestre [50] apply to the case $\\gamma + 2s\\in [0,2]$ .", "In this section, we discuss the extension of these results to $\\gamma + 2s < 0$ , with suitably modified hypotheses.", "As mentioned above, our current study requires these regularity estimates to establish the smoothness of our solutions for positive times.", "More generally, the global estimates and the change of variables used to prove them are important tools in the study of the non-cutoff Boltzmann equation, so extending these tools to the case $\\gamma +2s<0$ may be of independent interest.", "The key obstacle in passing from local estimates (Theorems REF and REF ) to global estimates on $[0,T]\\times \\mathbb {R}^3\\times \\mathbb {R}^3$ is the degeneration of the upper and lower ellipticity bounds for the collision kernel $K_f(t,x,v,v^{\\prime })$ as $|v|\\rightarrow \\infty $ .", "To overcome this, the authors of [50] developed a change of variables that “straightens out” the anisotropic ellipticity of the kernel and allows to precisely track the behavior of the estimates for large $|v|$ .", "Their change of variables is defined as follows: for a fixed reference point $z_0 = (t_0,x_0,v_0) \\in [0,\\infty )\\times \\mathbb {R}^6$ with $|v_0|>2$ and with $\\gamma + 2s\\ge 0$ , define the linear transformation $T_0:\\mathbb {R}^3\\rightarrow \\mathbb {R}^3$ by $T_0 (av_0 + w):= \\frac{a}{|v_0|}v_0 + w\\quad \\text{ where } a\\in \\mathbb {R}, \\, w\\cdot v_0 = 0.$ Next, for $(t,x,v) \\in Q_1(z_0)$ and when $\\gamma + 2s \\ge 0$ , define $\\begin{split}(\\bar{t}, \\bar{x}, \\bar{v})= \\mathcal {T}_0(t,x,v)&:= \\left(t_0 + \\frac{t}{|v_0|^{\\gamma +2s}},x_0 + \\frac{T_0x + tv_0}{|v_0|^{\\gamma +2s}},v_0 + T_0v\\right) \\\\&= z_0 \\circ \\left(\\frac{t}{|v_0|^{\\gamma +2s}},\\frac{T_0 x}{|v_0|^{\\gamma +2s}},T_0v\\right).\\end{split}$ When $|z_0| \\le 2$ , let $\\mathcal {T}_0(t,x,v) = z_0 \\circ z$ .", "The definition (REF ) applies when $\\gamma + 2s \\in [0,2]$ , and does not generalize well to the case $\\gamma + 2s< 0$ .", "Indeed, the solution $f$ of (REF ) is not defined at the point $(\\bar{t}, \\bar{x}, \\bar{v})$ if $\\bar{t} < 0$ , 
which occurs if, e.g., $\\gamma + 2s < 0$ and $v_0$ is sufficiently large.", "Thus, for the case $\\gamma +2s < 0$ , we introduce a new definition: first, extend the definition of $T_0$ as follows: $T_0 (av_0 + w):= \\frac{1}{|v_0|^\\frac{(\\gamma + 2s)_-}{2s}} \\Big (\\frac{a}{|v_0|}v_0 + w\\Big )\\quad \\text{ where } a\\in \\mathbb {R}, \\, w\\cdot v_0 = 0.$ Then, let $\\begin{split}(\\bar{t}, \\bar{x}, \\bar{v})= \\mathcal {T}_0(t,x,v)&:= \\left( t_0 + t, x_0 + T_0x + tv_0, v_0 + T_0v\\right) \\\\&= z_0 \\circ \\left(t , T_0 x, T_0v\\right).\\end{split}$ As above, when $|v_0|<2$ , we take $\\mathcal {T}_0z = z_0\\circ z$ .", "It is interesting to note that sending $s\\rightarrow 1$ in (REF ) recovers the change of variables that was applied to the Landau equation in [38] in the very soft potentials regime.", "For $r>0$ and $z_0 = (t_0,x_0,v_0)\\in \\mathbb {R}^7$ , define $\\mathcal {E}_r(z_0):= \\mathcal {T}_0(Q_r), \\quad E_r(v_0):=v_0 + T_0(B_r),$ and $\\mathcal {E}_r^{t,x}(z_0):= {\\left\\lbrace \\begin{array}{ll}\\Big \\lbrace \\Big (t_0 +\\frac{t}{|v_0|^{\\gamma +2s}}, x_0 + \\frac{1}{|v_0|^{\\gamma +2s}}(T_0 x + tv_0)\\Big ) : t\\in [-r^{2s},0], x\\in B_{r^{1+2s}}\\Big \\rbrace , &\\gamma +2s\\ge 0,\\smallskip \\\\\\big \\lbrace \\big (t_0 +t, x_0 + T_0x + tv_0\\big ) :t\\in [-r^{2s},0], x\\in B_{r^{1+2s}}\\big \\rbrace , &\\gamma +2s<0,\\end{array}\\right.}", "$ so that $\\mathcal {E}_r(z_0)= \\mathcal {E}_r^{t,x}(z_0)\\times E_r(v_0).$ Now, for a solution $f$ of the Boltzmann equation (REF ) and $z_0$ such that $t_0\\ge 1$ , let us define $ \\bar{f}(t,x,v) = f(\\bar{t}, \\bar{x}, \\bar{v}).$ By direct computation, $\\bar{f}$ satisfies $ \\partial _t \\bar{f} + v\\cdot \\nabla _x \\bar{f} = \\mathcal {L}_{\\bar{K}_f} \\bar{f} + \\bar{h}, \\quad \\text{ in } Q_1,$ where $\\bar{K}_f(t,x,v,v^{\\prime }):= {\\left\\lbrace \\begin{array}{ll}\\frac{1}{|v_0|^{1+\\gamma +2s}} \\, K_f(\\bar{t}, \\bar{x}, \\bar{v}, v_0 + T_0v^{\\prime }), &\\qquad \\gamma +2s\\ge 0,\\smallskip \\\\|v_0|^{2+\\frac{3\\gamma }{2s}} \\, K_f(\\bar{t}, \\bar{x}, \\bar{v}, v_0 +T_0 v^{\\prime }), &\\qquad \\gamma +2s<0, \\end{array}\\right.", "}$ and $ \\bar{h}(t,x,v) = c_b |v_0|^{-(\\gamma +2s)_+} f(\\bar{t}, \\bar{x}, \\bar{v}) [f\\ast |\\cdot |^\\gamma ](\\bar{t}, \\bar{x}, \\bar{v}).$ For use in the convolution defining $K_f$ , we also define $\\bar{v}^{\\prime } := v_0 + T_0 v^{\\prime }.$ In order to apply the local regularity estimates (Theorems REF and REF above) to $\\bar{f}$ , one obviously needs to check that the kernel $\\bar{K}_f$ satisfies the hypotheses of these two theorems.", "In the case $\\gamma +2s\\in [0,2]$ , this was done in [50].", "As stated, the results in [50] require a uniform upper bound on the energy density $\\int _{\\mathbb {R}^3} |v|^2 f(t,x,v) \\, \\mathrm {d}v$ , but it is clear from the proof that this can be replaced by a bound on the $2s$ moment $\\int _{\\mathbb {R}^3} |v|^{2s}f(t,x,v) \\, \\mathrm {d}v$ .", "Proposition 3.1 [50] Assume $\\gamma + 2s \\in [0,2]$ .", "Let $z_0 = (t_0,x_0,v_0)$ be arbitrary, and let $\\mathcal {E}_1(z_0)$ be defined as above.", "If $f$ satisfies $\\begin{aligned}&0<m_0\\le \\int _{\\mathbb {R}^3}f(t,x,v) \\le M_0,&\\qquad &\\qquad &&\\int _{\\mathbb {R}^3}|v|^{2s}f(t,x,v) \\, \\mathrm {d}v \\le \\tilde{E}_0, \\\\& \\int _{\\mathbb {R}^3} f(t,x,v)\\log f(t,x,v) \\, \\mathrm {d}v \\le H_0,&\\quad &\\text{and }\\quad &&\\sup _{v\\in \\mathbb {R}^3}\\int _{\\mathbb {R}^3}f(t,x,v+u)|u|^\\gamma \\, \\mathrm {d}u \\le C_\\gamma ,\\end{aligned}$ for 
all $(t,x) \\in \\mathcal {E}_1^{t,x}(z_0)$ , then the kernel $\\bar{K}_f(t,x,v,v^{\\prime })$ defined in (REF ) ($\\gamma +2s\\ge 0$ case) satisfies the conditions (REF )—(REF ) of Theorem REF , with constants depending only on $\\gamma $ , $s$ , $m_0$ , $M_0$ , $\\tilde{E}_0$ , $H_0$ , and $C_\\gamma $ .", "In particular, all constants are independent of $v_0$ .", "Furthermore, for each $z=(t,x,v) \\in \\mathcal {E}_1$ , the kernel $\\bar{K}_z(w) = \\bar{K}_f(t,x,v,v+w)$ lies in the ellipticity class defined in Definition REF , with constants as in the previous paragraph.", "For the case $\\gamma +2s<0$ , the result corresponding to Proposition REF is contained in the following proposition, which we prove in Appendix : Proposition 3.2 Asume $\\gamma +2s <0$ .", "Let $z_0$ and $\\mathcal {E}_1(z_0)$ be as in Proposition REF , and assume that $ \\Vert f(t,x,\\cdot )\\Vert _{L^\\infty _q(\\mathbb {R}^3)} \\le L_0, \\qquad \\text{for all } (t,x) \\in \\mathcal {E}_1^{t,x}(z_0),$ for some $q>2s+3$ , and $ f(t,x,v) \\ge \\delta , \\quad \\text{for all } (t,x,v) \\in \\mathcal {E}_1^{t,x}(z_0)\\times B_r(v_m),$ for some $v_m\\in \\mathbb {R}^3$ , and $\\delta , r>0$ .", "Then the kernel $\\bar{K}_f(t,x,v,v^{\\prime })$ defined in (REF ) ($\\gamma +2s<0$ case) satisfies the conditions (REF )—(REF ) of Theorem REF , with constants depending only on $L_0$ , $\\delta $ , $r$ , and $v_m$ .", "Furthermore, for each $z\\in \\mathcal {E}_1$ , the kernel $\\bar{K}_z(w) = \\bar{K}_f(t,x,v,v+w)$ lies in the ellipticity class defined in Definition REF , with constants as in the previous paragraph.", "We note that one should be able to obtain a more optimized version of p:very-soft-kernel by working in a weighted $L^p_v$ -based space with $p < \\infty $ ; however, this is not necessary for our work here, so we state and prove the simpler version above.", "It is also interesting to note that our definition (REF ) would not work well in the case $\\gamma +2s\\ge 0$ , so the separate definitions seem to be unavoidable.", "Let us give a more concise, and less sharp, restatement of the previous two propositions, that is sufficient for our purposes.", "When $q>2s+3$ , the norm $\\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)}$ controls the mass and entropy densities, as well as the $2s$ -moment and the constant $C_\\gamma $ , so we have the following: Proposition 3.3 Let $z_0$ and $\\mathcal {E}_1(z_0)$ be as in Proposition REF , and assume that $ \\Vert f(t,x,\\cdot )\\Vert _{L^\\infty _q(\\mathbb {R}^3)} \\le L_0, \\qquad \\text{for all } (t,x) \\in \\mathcal {E}_1^{t,x}(z_0),$ for some $q>2s+3$ .", "Assume further that $f(t,x,v) \\ge \\delta ,\\qquad \\text{for all } (t,x,v) \\in \\mathcal {E}_1^{t,x}(z_0)\\times B_r(v_m),$ for some $v_m\\in \\mathbb {R}^3$ , and $\\delta , r>0$ .", "Then the kernel $\\bar{K}_f(t,x,v,v^{\\prime })$ defined in (REF ) satisfies all the conditions of Theorem REF and belongs to the ellipticity class defined in Definition REF , with constants depending only on $L_0$ , $\\delta $ , $r$ , and $v_m$ .", "The Hölder regularity of the kernel is also required for applying the Schauder estimate of Theorem REF .", "In the $\\gamma +2s\\ge 0$ case, this regularity is given by [50].", "For $\\gamma +2s<0$ , we prove it in Lemma REF below.", "We summarize these two results here: Lemma 3.4 For any $f:[0,T]\\times \\mathbb {R}^6 \\rightarrow [0,\\infty )$ such that $f\\in C^\\alpha _{\\ell ,q_1}([0,T]\\times \\mathbb {R}^6)$ with $q_1>{\\left\\lbrace \\begin{array}{ll}5+\\frac{\\alpha 
}{1+2s}&\\quad \\text{ if }\\gamma +2s\\ge 0,\\\\3 + \\frac{\\alpha }{1+2s}&\\quad \\text{ if }\\gamma +2s< 0,\\end{array}\\right.", "}$ and for any $|v_0|>2$ and $r\\in (0,1]$ , let $\\bar{K}_{f,z}(w):= \\bar{K}_f(t,x,v,v+w)\\qquad \\text{ for } z = (t,x,v) \\in Q_{2r}.$ Then we have $\\int _{B_\\rho } |\\bar{K}_{f,z_1}(w) - \\bar{K}_{f,z_2}(w)| |w|^2 \\, \\mathrm {d}w\\le \\bar{A}_0 \\rho ^{2-2s} d_\\ell (z_1,z_2)^{\\alpha ^{\\prime }}, \\quad \\rho > 0, z_1,z_2\\in Q_{2r},$ with $\\alpha ^{\\prime } = \\alpha \\frac{2s}{1+2s}$ and $\\bar{A}_0 \\le C |v_0|^{P(\\alpha ,\\gamma ,s)} \\Vert f\\Vert _{C^\\alpha _{\\ell , q_1}([0,T]\\times \\mathbb {R}^6)},$ where $P(\\gamma ,s,\\alpha ) = {\\left\\lbrace \\begin{array}{ll} \\frac{\\alpha }{1+2s}(1-2s-\\gamma )_+, &\\gamma +2s\\ge 0,\\\\2+\\frac{\\alpha }{1+2s}, & \\gamma +2s<0,\\end{array}\\right.", "}$ and the constant $C$ depends on universal quantities, $\\alpha $ , and $q_1$ , but is independent of $|v_0|$ .", "Next, we discuss global regularity estimates for solutions of the Boltzmann equation.", "The following is a global (in $v$ ) Hölder estimate that combines the local De Giorgi/Nash/Moser-type estimate of Theorem REF with the change of variables $\\mathcal {T}_0$ .", "It extends [50] to the case where $\\gamma +2s$ may be negative.", "Theorem 3.5 Let $f$ be a solution of the Boltzmann equation (REF ) in $[0,T]\\times \\mathbb {R}^6$ .", "For some domain $\\Omega \\subseteq \\mathbb {R}^3$ and $\\tau \\in (0,T)$ , assume there are $\\delta , r , R>0$ such that for each $x\\in \\Omega $ , there exists $v_x\\in B_R$ with $f(t,x,v) \\ge \\delta , \\quad \\text{in } [\\tau /2,T]\\times (B_r(x,v_x) \\cap \\Omega \\times \\mathbb {R}^3),$ and assume that $\\Vert f\\Vert _{L^\\infty _{q_0}([0,T]\\times \\mathbb {R}^6)} \\le L_0$ for some $q_0> 2s+3$ .", "Define $\\bar{c}= {\\left\\lbrace \\begin{array}{ll} 1&\\qquad \\text{ if }\\gamma +2s\\ge 0,\\\\- \\frac{\\gamma }{2s}&\\qquad \\text{ if }\\gamma +2s<0.\\end{array}\\right.", "}$ Then, for any $q>3$ such that $f\\in L^\\infty _{q}([0,T]\\times \\mathbb {R}^6)$ , and any $\\Omega ^{\\prime }$ compactly contained in $\\Omega $ , there exists $\\alpha _0>0$ such that for any $\\alpha \\in (0,\\alpha _0]$ , one has $f\\in C^\\alpha _{\\ell ,q-\\bar{c}\\alpha }$ , with $\\Vert f\\Vert _{C^\\alpha _{\\ell ,q-\\bar{c}\\alpha }([\\tau ,T]\\times \\Omega ^{\\prime }\\times \\mathbb {R}^3)} \\le C\\Vert f\\Vert _{L^\\infty _{q}([0,T]\\times \\mathbb {R}^6)}.$ The constants $C$ and $\\alpha _0$ depend on $L_0$ , $q$ , $\\gamma $ , $s$ , $\\delta $ , $r$ , $v_m$ , $\\tau $ , $\\Omega $ , and $\\Omega ^{\\prime }$ .", "If $\\Omega = \\mathbb {R}^3$ , then we can replace $\\Omega ^{\\prime }$ with $\\mathbb {R}^3$ in (REF ).", "Choose a point $z_0= (t_0,x_0,v_0)$ and $r\\in (0,1)$ .", "We prove an interior estimate in the cylinder $Q_r(z_0)$ , which implies the statement of the Theorem via a standard covering argument.", "We claim that for any $\\bar{z}_1, \\bar{z}_2 \\in \\mathcal {E}_{r/2}(z_0)$ , $|f(\\bar{z}_1) - f(\\bar{z}_2)| \\le C (1+|v_0|)^{-q} d_\\ell (z_1,z_2)^\\alpha ,$ with $C, \\alpha >0$ as in the statement of the theorem.", "If $|v_0|\\le 2$ , then (REF ) follows from Theorem REF .", "The dependence of the constants $C$ and $\\alpha $ in this case is made clear from the discussion in Remark REF .", "Note that $2s+3\\ge \\gamma +2s+3$ .", "If $|v_0|>2$ , the proof of (REF ) proceeds exactly as in [50]The proof of Proposition 7.1 in [50] makes use of Lemma REF below, as well as [50].", 
"Although [50] works under the global assumption $\\gamma +2s\\in [0,2]$ , it is clear from the proof that [50] applies to both cases $\\gamma +2s\\ge 0$ and $\\gamma +2s<0$ , with the same statement.", "and uses the change of variables (REF ) or (REF ).", "The key point is that we can apply Theorem REF with the constants $\\lambda $ and $\\Lambda $ depending on $L_0$ , $\\delta $ , $r$ , and $v_m$ , and independent of $v_0$ , by Proposition REF .", "We omit the details of the proof of (REF ) since they are the same as in [50].", "Estimate (REF ) is equivalent to $[\\bar{f}]_{C^\\alpha _\\ell (Q_{r/2}(z_0))}\\lesssim (1+|v_0|)^{-q}.$ Using Lemma REF or [50] to translate from $\\bar{f}$ to $f$ , we obtain $\\Vert f\\Vert _{C^\\alpha _\\ell (\\mathcal {E}_{r/2}(z_0))}\\lesssim (1+|v_0|)^{\\bar{c}\\alpha -q} + \\Vert f\\Vert _{L^\\infty (\\mathcal {E}_{r/2}(z_0))},$ with $\\bar{c}$ as in the statement of the theorem.", "In order to estimate the $C^{\\alpha }_{\\ell ,q}$ seminorm of $f$ (See Definition REF ), we need to work with local seminorms of $f$ on cylinders $Q_{r/2}(z_0)$ rather than on the twisted cylinders $\\mathcal {E}_{r/2}(z_0)$ .", "From the definitions (REF ) and (REF ) of the $\\mathcal {T}_0$ change of variables, we see that $\\mathcal {E}_{r/2}(z_0)\\supset Q_{(r/2)|v_0|^{-\\bar{c}}}(z_0)$ .", "Using [50], we extend the upper bound to the larger set $Q_{r/2}(z_0)$ : $\\begin{split}[f]_{C^\\alpha _\\ell (Q_{r/2}(z_0))}&\\lesssim \\left((1+|v_0|)^{\\bar{c}\\alpha - q} + (|v_0|^{-\\bar{c}})^{-\\alpha }\\Vert f\\Vert _{L^\\infty (Q_{r/2}(z_0))}\\right)\\\\& \\lesssim (1+|v_0|)^{\\bar{c}\\alpha - q} \\Vert f\\Vert _{L^\\infty _q([0,T]\\times \\mathbb {R}^6)}.\\end{split}$ This estimate implies the desired upper bound on $[f]_{C^{\\alpha }_{\\ell ,q-\\bar{c} \\alpha }([0,T]\\times \\mathbb {R}^6)}$ , which concludes the proof.", "Next, we have a global Schauder estimate that improves $C^\\alpha _\\ell $ -regularity as in Theorem REF to $C^{2s+\\alpha }_\\ell $ -regularity.", "In order to apply the estimate to derivatives of $f$ , this estimate is stated for the linear Boltzmann equation $\\partial _t g + v\\cdot \\nabla _x g = Q_{\\rm s}(f,g) + h.$ Choosing $g=f$ and $h = Q_{\\rm ns}(f,f)$ would recover the original Boltzmann equation (REF ).", "Unlike in the previous theorem, we work out the explicit dependence on $\\tau $ of the estimate in Theorem REF .", "The dependence on $\\tau $ is needed in Proposition REF which is one ingredient of our proof of uniqueness.", "Theorem 3.6 Let $f:[0,T]\\times \\mathbb {R}^6\\rightarrow [0,\\infty )$ , and assume that for some domain $\\Omega \\subseteq \\mathbb {R}^3$ , there exist $\\delta , r, R>0$ such that, if $x \\in \\Omega $ , there is $v_x\\in \\mathbb {R}^3$ with $f(t,x,v) \\ge \\delta , \\quad \\text{ in } [0,T]\\times \\left(B_r(x,v_x) \\cap (\\Omega \\times \\mathbb {R}^3)\\right).$ Assume that $\\Vert f\\Vert _{L^\\infty _{q_0}([0,T]\\times \\mathbb {R}^6)} \\le L_0$ for some $q_0>2s+3$ .", "Furthermore, assume $f\\in C^\\alpha _{\\ell ,q_1}([0,T]\\times \\mathbb {R}^6)$ for some $\\alpha \\in (0,\\min (1,2s))$ and $q_1$ as in Lemma REF .", "Let $g$ be a solution of (REF ) with $h\\in C^{\\alpha ^{\\prime }}_{\\ell ,q_1}([0,T]\\times \\mathbb {R}^6)$ , where $\\alpha ^{\\prime } = \\frac{2s}{1+2s}\\alpha $ and $2s+\\alpha ^{\\prime }\\notin \\lbrace 1,2\\rbrace $ .", "Then, for any $\\tau \\in (0,T)$ and $\\Omega ^{\\prime }$ compactly contained in $\\Omega $ , one has the estimate $\\begin{split}&\\Vert g\\Vert _{C^{2s+\\alpha 
^{\\prime }}_{\\ell ,q_1-\\kappa }([\\tau ,T]\\times \\Omega ^{\\prime }\\times \\mathbb {R}^3)} \\\\&\\quad \\le C\\left(1 + \\tau ^{-1 + \\frac{\\alpha -\\alpha ^{\\prime }}{2s}}\\right)\\Vert f\\Vert _{C^\\alpha _{\\ell , q_1}([0,T]\\times \\mathbb {R}^6)}^\\frac{\\alpha -\\alpha ^{\\prime }+ 2s}{\\alpha ^{\\prime }}\\left( \\Vert g\\Vert _{C^\\alpha _{\\ell ,q_1}([0,T]\\times \\mathbb {R}^6)} + \\Vert h\\Vert _{C^{\\alpha ^{\\prime }}_{\\ell , q_1}([0,T]\\times \\mathbb {R}^6)}\\right),\\end{split}$ where the moment loss for higher order regularity of $g$ is $\\begin{split}\\kappa & := {\\left\\lbrace \\begin{array}{ll} (1-2s-\\gamma )_+ \\left(1 + \\frac{\\alpha }{1+2s} - \\frac{\\alpha }{2s}\\right) + 2s+\\alpha ^{\\prime }, &\\gamma +2s\\ge 0\\\\(2 + \\frac{\\alpha }{1+2s})(\\frac{1+2s}{\\alpha }- \\frac{1}{2s}) - \\gamma \\left(1+\\frac{\\alpha }{1+2s}\\right), &\\gamma +2s < 0,\\end{array}\\right.", "}\\end{split}$ and the constant $C$ depends on $\\gamma $ , $s$ , $\\alpha $ , $q_0$ , $q_1$ , $L_0$ , $\\delta $ , $r$ , $R$ , $\\Omega $ , and $\\Omega ^{\\prime }$ .", "If $\\Omega = \\mathbb {R}^3$ , then we can replace $\\Omega ^{\\prime }$ with $\\mathbb {R}^3$ in (REF ).", "We follow the proof of [50] suitably modifying the argument in order to allow the case $\\gamma + 2s < 0$ and to determine the explicit dependence on $\\tau $ .", "In this proof, to keep the notation clean, any norm or seminorm given without a domain, such as $\\Vert f\\Vert _{C^\\alpha _{\\ell ,q}}$ , is understood to be over $[0,T]\\times \\mathbb {R}^6$ .", "Let $z_0 = (t_0,x_0,v_0)\\in (\\tau ,T]\\times \\Omega ^{\\prime }\\times \\mathbb {R}^3$ be fixed, and define $\\rho = \\min \\big (1, (t_0/2)^\\frac{1}{2s}\\big ).$ Since we do not track the explicit dependence of this estimate on $\\Omega ^{\\prime }$ and $\\Omega $ , we assume without loss of generality that $\\mathcal {E}_\\rho (z_0)\\subset [0,T]\\times \\Omega \\times \\mathbb {R}^3$ .", "If $|v_0|>2$ , then let $\\varphi \\in C_c^\\infty (\\mathbb {R}^3)$ be a cutoff function supported in $B_{|v_0|/8}$ and identically 1 in $B_{|v_0|/9}$ .", "Define $\\bar{g} := [(1-\\varphi )g]\\circ \\mathcal {T}_0$ .", "The function $\\bar{g}$ is defined on $Q_\\rho $ and satisfies $ \\partial _t \\bar{g} + v\\cdot \\nabla _x \\bar{g} = \\int _{\\mathbb {R}^3} (\\bar{g}^{\\prime } - \\bar{g}) \\bar{K}_f(t,x,v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } + \\bar{h} + \\bar{h}_2,$ where $\\bar{g}^{\\prime } = \\bar{g}(t,x,v^{\\prime })$ , and $\\bar{K}_f$ is defined in (REF ).", "The source terms are defined by $ \\begin{split}\\bar{h} &= |v_0|^{-(\\gamma +2s)_+} h \\circ \\mathcal {T}_0,\\\\\\bar{h}_2 &= |v_0|^{-(\\gamma +2s)_+} \\int _{\\mathbb {R}^3} \\varphi (v^{\\prime }) g(\\bar{t}, \\bar{x}, v^{\\prime }) K_f(\\bar{t}, \\bar{x}, \\bar{v}, v^{\\prime }) \\, \\mathrm {d}v^{\\prime }.\\end{split} $ By Proposition REF , the kernel $\\bar{K}_f$ lies in the ellipticity class of Definition REF , with constants depending on $L_0$ , $\\delta $ , $r$ , and $R$ .", "Applying the local Schauder estimate Theorem REF to $\\bar{g}$ , we obtain $\\begin{split}[\\bar{g}]_{C^{2s+\\alpha ^{\\prime }}_\\ell (Q_{\\rho /2})}&\\lesssim \\max \\left( \\rho ^{-2s-\\alpha ^{\\prime } + \\alpha }, \\bar{A}_0^{(2s+\\alpha ^{\\prime } - \\alpha )/\\alpha ^{\\prime }}\\right)[\\bar{g}]_{C^\\alpha _{\\ell }((-\\rho ^{2s},0]\\times B_{\\rho ^{1+2s}} \\times \\mathbb {R}^3)}\\\\&\\quad + [\\bar{h} + \\bar{h}_2]_{C_\\ell ^{\\alpha ^{\\prime }}(Q_\\rho )}+ \\max (\\rho ^{-\\alpha ^{\\prime 
}},\\bar{A}_0)\\Vert \\bar{h} + \\bar{h}_2\\Vert _{L^\\infty (Q_\\rho )},\\end{split}$ where we recall the definition of $\\bar{A}_0$ from (REF ).", "Let us estimate the terms in this right-hand side one by one.", "To estimate $\\bar{A}_0$ , we use Lemma REF : $ \\begin{split}\\bar{A}_0 &\\lesssim |v_0|^{P(\\gamma ,s,\\alpha )} \\Vert f\\Vert _{C^\\alpha _{\\ell ,q_1}},\\end{split} $ with $q_1$ and $P(\\gamma , s, \\alpha )$ as in Lemma REF .", "Next, Lemma REF below (if $\\gamma +2s<0$ ) or [50] (if $\\gamma +2s\\ge 0$ ) imply $[\\bar{g}]_{C^\\alpha _\\ell ((-\\rho ^{2s},0]\\times B_{\\rho ^{1+2s}}\\times \\mathbb {R}^3)}\\le \\Vert (1-\\varphi ) g\\Vert _{C^\\alpha _\\ell }\\lesssim |v_0|^{-q_1}\\Vert g\\Vert _{C^\\alpha _{\\ell ,q_1}},$ with $q_1$ as above.", "The last inequality follows because $1-\\varphi $ is supported for $|v|\\ge |v_0|/9$ .", "For the terms involving $\\bar{h}$ and $\\bar{h}_2$ , note that $\\begin{split}\\Vert \\bar{h}\\Vert _{L^\\infty (Q_\\rho )} &= |v_0|^{-(\\gamma +2s)_+} \\Vert h\\Vert _{L^\\infty (\\mathcal {E}_\\rho (z_0))} \\le |v_0|^{-(\\gamma +2s)_+ - q_1}\\Vert h\\Vert _{L^\\infty _{q_1}},\\\\[\\bar{h}]_{C^{\\alpha ^{\\prime }}_\\ell (Q_\\rho )} &\\le |v_0|^{-(\\gamma +2s)_+} [h]_{C^{\\alpha ^{\\prime }}_\\ell (\\mathcal {E}_\\rho (z_0))} \\le |v_0|^{-(\\gamma +2s)_+-q_1} [h]_{C^{\\alpha ^{\\prime }}_{\\ell ,q_1}},\\end{split}$ where the second line used Lemma REF or [50].", "For $\\bar{h}_2$ , we use [50]to write $ \\begin{split}\\Vert \\bar{h}_2\\Vert _{L^\\infty (Q_\\rho )} &\\lesssim |v_0|^{-q_1 +\\gamma - (\\gamma +2s)_+}\\Vert f\\Vert _{L^\\infty _{q_1}}\\Vert g\\Vert _{L^\\infty _{q_1}} ,\\\\[\\bar{h}_2]_{C^{\\alpha ^{\\prime }}_\\ell (Q_\\rho )} &\\lesssim |v_0|^{-q_1 + \\gamma -(\\gamma +2s)_+ + 2\\alpha /(1+2s)} \\Vert f\\Vert _{C^\\alpha _{\\ell ,q_1}} \\Vert g\\Vert _{C^\\alpha _{\\ell ,q_1}}.\\end{split} $ Combining all of these inequalities with (REF ), we have $ \\begin{split}& [\\bar{g}]_{C^{2s+\\alpha ^{\\prime }}_\\ell (Q_{\\rho /2})}\\\\&\\lesssim \\max \\Big ( t_0^\\frac{-2s+ (\\alpha -\\alpha ^{\\prime })}{2s}, |v_0|^{P(\\gamma ,s,\\alpha )\\frac{2s- \\alpha + \\alpha ^{\\prime }}{\\alpha ^{\\prime }}}\\Vert f\\Vert _{C^\\alpha _{\\ell ,q_1}}^\\frac{2s-\\alpha +\\alpha ^{\\prime }}{\\alpha ^{\\prime }}\\Big )|v_0|^{-q_1} \\Vert g\\Vert _{C^\\alpha _{\\ell ,q_1}}\\\\&\\quad +|v_0|^{-(\\gamma +2s)_+ - q_1}[h]_{C^{\\alpha ^{\\prime }}_{\\ell ,q_1}}+ |v_0|^{-q_1+\\gamma -(\\gamma +2s)_+ + \\frac{2\\alpha }{1+2s}}\\Vert f\\Vert _{C^\\alpha _{\\ell ,q_1}} \\Vert g\\Vert _{C^\\alpha _{\\ell ,q_1}}\\\\&\\quad + \\max \\Big (t_0^{-\\frac{\\alpha ^{\\prime }}{2s}},|v_0|^{P(\\gamma ,s,\\alpha )}\\Vert f\\Vert _{C^\\alpha _{\\ell ,q_1}}\\Big )\\Big (|v_0|^{-(\\gamma +2s)_+-q_1}\\Vert h\\Vert _{L^\\infty } + |v_0|^{-q_1+\\gamma -(\\gamma +2s)_+}\\Vert g\\Vert _{L^\\infty _{q_1}} \\Vert f\\Vert _{L^\\infty _{q_1}}\\Big ).\\end{split} $ Keeping only the largest powers of $t_0^{-1}$ , $|v_0|$ , and $\\Vert f\\Vert _{C^\\alpha _{\\ell ,q_1}}$ , we have (recall that $\\alpha <2s$ ) $[\\bar{g} ]_{C^{2s+\\alpha ^{\\prime }}_\\ell (Q_{\\rho /2})}\\lesssim \\mathcal {A},$ where we have introduced the shorthand $\\mathcal {A} := \\Big (1 + t_0^{-1+\\frac{\\alpha -\\alpha ^{\\prime }}{2s}}\\Big )|v_0|^{-q_1 + P(\\gamma ,s,\\alpha )\\big (1+\\frac{2s-\\alpha }{\\alpha ^{\\prime }}\\big )}\\Vert f\\Vert _{C^\\alpha _{\\ell ,q_1}}^{1+\\frac{2s-\\alpha }{\\alpha ^{\\prime }}}\\left(\\Vert g\\Vert _{C^\\alpha _{\\ell ,q_1}}+ \\Vert h\\Vert _{C^{\\alpha 
^{\\prime }}_{\\ell ,q_1}}\\right).$ Now, we apply Lemma REF or [50] to translate from $\\bar{g}$ back to $g$ : $[g]_{C^{2s+\\alpha ^{\\prime }}_\\ell (\\mathcal {E}_{\\rho /2}(z_0))} \\lesssim |v_0|^{\\bar{c} (2s+\\alpha ^{\\prime })} \\Vert \\bar{g}\\Vert _{C^{2s+\\alpha ^{\\prime }}_\\ell (Q_{\\rho /2}(z_0))} \\lesssim |v_0|^{\\bar{c}(2s+\\alpha ^{\\prime })}\\left( \\mathcal {A} + \\Vert g\\Vert _{L^\\infty (Q_{\\rho /2}(z_0))}\\right),$ with $\\bar{c}$ as in (REF ).", "As in the proof of Theorem REF , we need to pass from twisted cylinders $\\mathcal {E}_\\rho (z_0)$ to kinetic cylinders $Q_\\rho (z_0)$ .", "Since $\\mathcal {E}_\\rho (z_0)\\supset Q_{\\rho |v_0|^{-\\bar{c}}}(z_0)$ , we use [50] to obtain $[g]_{C^{2s+\\alpha ^{\\prime }}_\\ell (Q_\\rho (z_0))}\\lesssim |v_0|^{\\bar{c}(2s+\\alpha ^{\\prime })} \\mathcal {A}+ |v_0|^{\\bar{c}(2s+\\alpha ^{\\prime })} \\Vert g\\Vert _{L^\\infty (Q_\\rho (z_0))}.$ We finally have $[g]_{C^{2s+\\alpha ^{\\prime }}_\\ell (Q_\\rho (z_0))}\\lesssim \\Big (1 + t_0^{-1+\\frac{\\alpha -\\alpha ^{\\prime }}{2s}}\\Big ) (1+|v_0|)^{-q_1 + \\kappa }\\Vert f\\Vert _{C^\\alpha _{\\ell ,q_1}}^{1+\\frac{2s-\\alpha }{\\alpha ^{\\prime }}}\\left(\\Vert g\\Vert _{C^\\alpha _{\\ell ,q_1}}+ \\Vert h\\Vert _{C^{\\alpha ^{\\prime }}_{\\ell ,q_1}}\\right),$ with $ \\kappa := P(\\gamma ,s,\\alpha )\\Big (1+\\frac{2s-\\alpha }{\\alpha ^{\\prime }}\\Big )+\\bar{c}(2s+\\alpha ^{\\prime }),$ as in the statement of the theorem.", "We have derived (REF ) under the assumption $|v_0|>2$ .", "If $|v_0|\\le 2$ , then we may apply Theorem REF directly to $g$ , without using the change of variables.", "Proceeding as above, we obtain (REF ) in this case as well, using $1\\lesssim ( 1+ |v_0|)^{-q_1+\\kappa }$ .", "This completes the proof, since $t_0\\gtrsim \\tau $ .", "When we apply the linear estimate of Theorem REF to solutions of the Boltzmann equation, we obtain the following time-weighted Schauder estimate: Proposition 3.7 Let $f:[0,T]\\times \\mathbb {R}^6\\rightarrow [0,\\infty )$ be a classical solution to (REF ) satisfying the assumptions for $f$ in t:global-schauder.", "For any $t_0\\in (0,T)$ , $\\alpha >0$ , and $q_1$ as in Lemma REF , the estimate $\\Vert f\\Vert _{C^{2s+\\alpha ^{\\prime }}_{\\ell ,q_1}([t_0/2,t_0]\\times \\mathbb {R}^6)} \\le C t_0^{-1+(\\alpha -\\alpha ^{\\prime })/(2s)} \\Vert f\\Vert _{C^\\alpha _{\\ell ,q_1+\\kappa +\\alpha /(1+2s)+\\gamma }([0,t_0]\\times \\mathbb {R}^6)}^{1+(\\alpha +2s)/\\alpha ^{\\prime }},$ holds whenever the right-hand side is finite, where $\\alpha ^{\\prime } = \\alpha \\frac{2s}{1+2s}$ , $C>0$ is a constant depending on $\\gamma $ , $s$ , $\\alpha $ , $q_0$ , $q_1$ , $L_0$ , $\\delta $ , $r$ , $R$ , $\\Omega $ , and $\\Omega ^{\\prime }$ , and $\\kappa $ is the constant from Theorem REF .", "Theorem REF with with $g=f$ and $h=Q_{\\rm ns}(f,f)$ implies $\\begin{split}&\\Vert f\\Vert _{C^{2s+\\alpha ^{\\prime }}_{\\ell ,q_1}([t_0/2,t_0]\\times \\mathbb {R}^6)}\\\\& \\qquad \\le C t_0^{-1+(\\alpha -\\alpha ^{\\prime })/(2s)} \\Vert f\\Vert _{C^\\alpha _{\\ell ,q_1+\\kappa }([0,t_0]\\times \\mathbb {R}^6)}^{(\\alpha -\\alpha ^{\\prime }+2s)/\\alpha ^{\\prime } }\\left(\\Vert f\\Vert _{C^\\alpha _{\\ell ,q_1+\\kappa }([0,t_0]\\times \\mathbb {R}^6)} + \\Vert Q_{\\rm ns}(f,f)\\Vert _{C^{\\alpha ^{\\prime }}_{\\ell ,q_1+\\kappa }([0,t_0]\\times \\mathbb {R}^6)}\\right)\\\\&\\qquad \\le C t_0^{-1+(\\alpha -\\alpha ^{\\prime })/(2s)} \\Vert f\\Vert _{C^\\alpha _{\\ell ,q_1+\\kappa +\\alpha /(1+2s)+\\gamma }([0,t_0]\\times 
\\mathbb {R}^6)}^{1+(\\alpha +2s)/\\alpha ^{\\prime }} ,\\end{split}$ using Lemma REF to bound $Q_{\\rm ns}(f,f)$ .", "Next, we discuss higher regularity estimates for the solution $f$ .", "The following proposition is in some sense a restatement of the main theorem of [50], using hypotheses that are convenient for our purposes ($L^\\infty _q$ bounds and pointwise lower bounds for $f$ , rather than the mass, energy, and entropy density bounds used in [50]).", "This result extends the higher regularity estimates to the case $\\gamma +2s<0$ , although we should point out that the hypotheses here are stronger than in [50] and it is not currently known how to prove global regularity estimates depending only on mass, energy, and entropy bounds in the case $\\gamma +2s<0$ .", "Proposition 3.8 (Higher regularity) Let $f$ be a classical solution to (REF ) on $[0,T]\\times \\mathbb {R}^6$ .", "Assume that for some $\\Omega \\subseteq \\mathbb {R}^3$ , there exist $\\delta , r, R>0$ such that for any $x\\in \\Omega $ , there is a $v_x\\in B_R$ such that $f(t,x,v) \\ge \\delta $ whenever $(x,v)\\in B_r(x,v_x)\\cap (\\Omega \\times \\mathbb {R}^3)$ .", "Fix any $m \\ge 0$ and multi-index $k$ in the $(t,x,v)$ variables.", "Then there exists $q(k,m)$ such that, if $f \\in L^\\infty _{q(k,m)}([0,T]\\times \\mathbb {R}^6)$ , then $\\Vert D^k f\\Vert _{L^\\infty _m([\\tau ,T]\\times \\Omega ^{\\prime }\\times \\mathbb {R}^3)} \\le C.$ The constant $q(k,m)$ depends on $k$ , $m$ , $\\delta $ , $r$ , $R$ , $\\tau $ , $\\Omega ^{\\prime }$ , and $\\Omega $ .", "The constant $C$ depends on the same quantities as well as $\\Vert f\\Vert _{L^\\infty _{q(k,m)}}$ .", "Furthermore, $q(k,m)$ and $C$ are nonincreasing functions of $\\tau $ .", "If $\\Omega = \\mathbb {R}^3$ , then we can replace $\\Omega ^{\\prime }$ with $\\mathbb {R}^3$ in (REF ).", "If we were working on a periodic spatial domain and only considering the case $\\gamma +2s\\ge 0$ , we could remove the dependence on higher $L^\\infty _q$ -norms of $f$ , by using decay estimates as in [46].", "However, we need to apply these estimates on the whole space $\\mathbb {R}^3_x$ and also $\\gamma +2s<0$ , so we state it in the form above.", "We intend to apply Proposition REF in situations where higher $L^\\infty _q$ -norms of $f$ are bounded in terms of the initial data and weaker norms of $f$ .", "The proof of Proposition REF is the same as the proof of [50] and consists of the following ingredients: The change of variables and global estimates described in this section.", "The bootstrapping procedure explained in Section 9 of [50], which consists of applying Theorems REF and REF to partial derivatives and increments of $f$ .", "This makes use of certain facts about increments that are proven in Section 8 of [50].", "The analysis in Sections 8 and 9 of [50] does not use the sign of $\\gamma +2s$ in any way.", "At each step of the bootstrapping, a certain (non-explicit) number of velocity moments are used up.", "The result of [50] proves estimates for all partial derivatives of $f$ , but one can obviously stop the process after a finite number of iterations, which gives rise to the condition $f \\in L^\\infty _{q(k,m)}([0,T]\\times \\mathbb {R}^6)$ in our statement.", "Bilinear estimates for the operators $Q_{\\rm s}$ and $Q_{\\rm ns}$ in Hölder spaces, which are also needed in the course of bootstrapping.", "These lemmas are also essentially independent of the sign of $\\gamma +2s$ , but we record them in Appendix for the sake of completeness: see Lemmas 
REF and REF .", "Finally, we have a continuation criterion for smooth solutions, which will be used in our proof of Theorem REF .", "It is intended mainly as an internal result and works with solutions defined on a torus of general side length.", "It is certainly not sharp.", "Proposition 3.9 (Continuation criterion) Let $M^3$ be the periodic torus of side length $M>0$ .", "Let $f$ be a classical solution to (REF ) in $[0,T)\\times M^3\\times \\mathbb {R}^3$ with $T>0$ .", "Suppose that initial data $f_{\\rm in}$ is smooth, is rapidly decaying in $v$ , and that there is $\\delta , r>0$ $x_m \\in M^3$ , and $v_m\\in \\mathbb {R}^3$ such that $f_{\\rm in}(x,v)\\ge \\delta , \\quad (x,v)\\in B_r(x_m, v_m).$ Then there exists $q_{\\rm cont}$ such that, if $\\Vert f\\Vert _{L^\\infty _{q_{\\rm cont}}([0,T)\\times 3_M\\times \\mathbb {R}^3)} < \\infty ,$ then $f$ can be extended to a classical solution $[0,T+\\varepsilon ]\\times M^3\\times \\mathbb {R}^3$ for some $\\varepsilon >0$ .", "The decay rate $q_{\\rm cont}$ depends on $\\gamma $ , $s$ , $T$ , $\\delta $ , $r$ , and $|v_m|$ .", "The constant $\\varepsilon $ depends on the same quantities as well as $\\Vert f(t)\\Vert _{L^\\infty _{q_{\\rm cont}}([0,T)\\times 3_M\\times \\mathbb {R}^3)}$ .", "Furthermore, $q_{\\rm cont}$ is a nonincreasing function of $T$ , and $\\varepsilon $ is a nondecreasing function of $T$ .", "First, by scaling, we may consider the case $M=1$ .", "Indeed, defining $f^M(t,x,v) := M^{\\gamma +3}f(t,Mx,Mv)$ , it is clear that: (i) $f^M$ solves (REF ) on $[0,T)\\times 3\\times \\mathbb {R}^3$ , (ii) $f^M$ exists on $[0,T+\\varepsilon ]\\times 3\\times \\mathbb {R}^3$ if and only if $f$ exists on $[0,T+\\varepsilon ]\\times M^3\\times \\mathbb {R}^3$ , and (iii) inequality (REF ) holds if and only if $\\Vert f^M\\Vert _{L^\\infty _{q_{\\rm cont}}([0,T)\\times 3\\times \\mathbb {R}^3)} < \\infty $ .", "Fix $k= 6$ , and let $n$ and $p$ be the corresponding constants from p:prior-existence.", "We can apply p:higher-reg for all multi-indices $j$ of order at most 10 with the choice $m =n$ to find $q(10,n)$ such that $\\sup _{|j| \\le 10} \\Vert D^j f\\Vert _{L^\\infty _{n}([T/2,T]\\times 3\\times \\mathbb {R}^3)}\\le C_0.$ Note that the lower bound condition required to apply p:higher-reg follows from t:lower-bounds and the compactness of 3.", "Define $q_{\\rm cont} = \\max \\lbrace p, q(10,n)\\rbrace $ , and note that $q_{\\rm cont}$ depends on the quantities claimed in the statement, by Proposition REF .", "The constant $C_0$ in e.c62901 depends on $T$ , $n$ , $p$ , $\\delta $ , $r$ , $|v_m|$ , $\\Vert f(t)\\Vert _{L^\\infty _{q_{\\rm cont}}([0,T)\\times 3\\times \\mathbb {R}^3)}$ , and $\\Vert f\\Vert _{C^\\alpha ([0,T]\\times 3\\times \\mathbb {R}^3)}$ , again as a result of p:higher-reg.", "We claim that if $\\Vert f\\Vert _{L^\\infty _{q_{\\rm cont}}}([0,T)\\times 3\\times \\mathbb {R}^3)< \\infty $ , then $f$ can be extended past time $T$ .", "Indeed, for any $t \\in [T/2,T)$ , the estimate (REF ) plus Sobolev embedding in $3\\times \\mathbb {R}^3$ provides a uniform bound for $f(t) \\in H^6_{n} \\cap L^\\infty _{q_{\\rm cont}} (3 \\times \\mathbb {R}^3)$ depending only on the constants above.", "Since $q_{\\rm cont}\\ge p$ , and by our choice of $k$ , $n$ , and $p$ , we may apply p:prior-existence at any $t\\in [T/2,T)$ to obtain a solution $\\tilde{f}$ in $C^0([t, t+\\varepsilon ^{\\prime }), H^{k}_{n} \\cap L^\\infty _{q_{\\rm cont}}(3\\times \\mathbb {R}^3))$ with $\\varepsilon ^{\\prime }$ depending on the 
constant $C_0$ in (REF ).", "From Proposition REF , $C_0$ is a nonincreasing function of $\\tau = T/2$ , which implies $\\varepsilon ^{\\prime }$ is nondecreasing in $T$ .", "Since $k= 6$ , Sobolev embedding implies $\\tilde{f}$ is a classical solution (in particular, it is twice differentiable in $x$ and $v$ , and via the equation (REF ), once differentiable in $t$ ).", "The proof is then finished by choosing $\\varepsilon = \\varepsilon ^{\\prime }/2$ and $t \\in (T-\\varepsilon ^{\\prime }/2,T)$ and concatenating $f$ and $\\tilde{f}$ ." ], [ "Existence of solutions", "This section is devoted to the proof of Theorems REF and REF ." ], [ "Decay estimates", "We begin with novel decay estimates that are needed for our construction.", "These estimates are stated for any suitable solution $f$ of the Boltzmann equation (REF ) on a periodic spatial domain $M^3$ , i.e.", "the torus of side length $M>0$ .", "In Section REF below, we apply these estimates to our approximating sequence.", "Throughout this subsection, we assume the initial data corresponding to $f$ satisfies $f_{\\rm in} \\in C^\\infty (3_M\\times \\mathbb {R}^3) \\cap L^\\infty _{q^{\\prime }}(M^3\\times \\mathbb {R}^3) \\quad \\text{for all } q^{\\prime }\\ge 0.$ However, we do not assume a priori that $f$ satisfies polynomial decay of all orders for positive times.", "First, using a barrier argument, we show that polynomial upper bounds of order larger than $\\gamma +2s+3$ are propagated forward in time: Lemma 4.1 Let $q_0 > \\gamma +2s+3$ and $q\\in [q_0,q_0-\\gamma ]$ be fixed.", "Let $f$ be a solution of (REF ) on $[0,T]\\times M^3\\times \\mathbb {R}^3$ for some $M>0$ , and assume $f\\in L^\\infty _{q^{\\prime }}([0,T]\\times M^3\\times \\mathbb {R}^3)$ for some $q^{\\prime }> q$ and that $f_{\\rm in}$ satisfies (REF ).", "Then $\\Vert f(t)\\Vert _{L^\\infty _{q}(3_{M}\\times \\mathbb {R}^3)}\\le \\Vert f_{\\rm in}\\Vert _{L^\\infty _{q}(M^3\\times \\mathbb {R}^3)} \\exp (C_0 \\Vert f\\Vert _{L^\\infty _{q_0}([0,T]\\times M^3\\times \\mathbb {R}^3)} t), \\quad 0\\le t\\le T,$ for a constant $C_0>0$ depending only on universal quantities and $q$ and $q_0$ .", "In particular, $C_0$ is independent of $f_{\\rm in}$ and the norm of $f$ in $L^\\infty _{q^{\\prime }}$ .", "Define the barrier function $g(t,x,v) =N e^{\\beta t} \\langle v\\rangle ^{-q}$ , with $N,\\beta >0$ to be chosen later.", "By taking $N> \\Vert f_{\\rm in}\\Vert _{L^\\infty _{q}(3_{M_\\varepsilon }\\times \\mathbb {R}^3)}$ , we ensure $f(0,x,v) < g(0,x,v)$ for all $x$ and $v$ .", "We would like to show $f(t,x,v) < g(t,x,v), \\quad (t,x,v) \\in [0,T]\\times M^3\\times \\mathbb {R}^3.$ If this bound fails, then because $f$ decays at a rate strictly faster than $\\langle v\\rangle ^{-q}$ , and $f$ is periodic in the $x$ variable, there must be a first time $t_{\\rm cr}\\in (0,T]$ and location $(x_{\\rm cr},v_{\\rm cr})$ at which $f$ and $g$ cross.", "At the first crossing time, we have the following equalities and inequalities: $\\begin{split}\\partial _t f(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr}) &\\ge \\partial _t g(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr}),\\\\\\nabla _x f(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr}) &= 0,\\\\f(t_{\\rm cr},x,v) &\\le g(t_{\\rm cr},x,v), \\quad x\\in {3_M}, v\\in \\mathbb {R}^3.\\end{split}$ Combining this with the Boltzmann equation (REF ), we have $\\partial _t g(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr}) \\le Q(f,f) (t_{\\rm cr},x_{\\rm cr},v_{\\rm cr}) \\le Q(f,g)(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr}).$ To justify the last inequality, write $Q(f,f) 
= Q_{\\rm s}(f,f) + Q_{\\rm ns}(f,f)$ and use the nonnegativity of the kernel $K_{f}$ to obtain $ \\begin{split}Q_{\\rm s} (f, f)(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr})&= \\int _{\\mathbb {R}^3} K_{f}(v,v^{\\prime }) [f(t_{\\rm cr},x_{\\rm cr},v^{\\prime }) - f(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr})] \\, \\mathrm {d}v^{\\prime }\\\\&\\le \\int _{\\mathbb {R}^3} K_{f}(v,v^{\\prime }) [g(t_{\\rm cr},x_{\\rm cr},v^{\\prime }) - g(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr})] \\, \\mathrm {d}v^{\\prime },\\end{split}$ since $f(t_{\\rm cr},x_{\\rm cr},v) \\le g(t_{\\rm cr},x_{\\rm cr},v)$ and $f(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr}) = g(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr})$ .", "Next, realize that $Q_{\\rm ns}(f,f)(v) = f(t_{\\rm cr},x_{\\rm cr},v) [f\\ast |\\cdot |^\\gamma ](t_{\\rm cr},x_{\\rm cr},v) \\le g(t_{\\rm cr},x_{\\rm cr},v)[f\\ast |\\cdot |^\\gamma ](t_{\\rm cr},x_{\\rm cr},v)$ .", "We have established the last inequality in (REF ).", "From Lemma REF we have, for some $C_0>0$ as in the statement of the lemma, $Q(f,g)(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr}) = N e^{\\beta t_{\\rm cr}} Q(f,\\langle \\cdot \\rangle ^{-q})(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr}) \\le C_0 N e^{\\beta t_{\\rm cr}} \\Vert f(t_{\\rm cr})\\Vert _{L^\\infty _{q_0}(M^3 \\times \\mathbb {R}^3)} \\langle v_{\\rm cr}\\rangle ^{-q}.$ Together with (REF ) and $\\partial _t g = N\\beta e^{\\beta t} \\langle v\\rangle ^{-q}$ , this implies $ N \\beta e^{\\beta t_{\\rm cr}} \\langle v_{\\rm cr}\\rangle ^{-q} \\le C_0 N e^{\\beta t_{\\rm cr}} \\Vert f\\Vert _{L^\\infty _{q_0}([0,T]\\times {3_M}\\times \\mathbb {R}^3)} \\langle v_{\\rm cr}\\rangle ^{-q},$ which is a contradiction if we choose $\\beta = 2C_0\\Vert f\\Vert _{L^\\infty _{q_0}([0,T]\\times {3_M}\\times \\mathbb {R}^3)}$ .", "Hence, (REF ) must hold.", "After choosing $N = \\Vert f_{\\rm in}\\Vert _{L^\\infty _{q}([0,T]\\times {3_M}\\times \\mathbb {R}^3)} + \\nu $ in (REF ) and sending $\\nu \\rightarrow 0$ , the proof is complete.", "Consider the $q= q_0$ case of Lemma REF .", "Intuitively, we would like to iterate this estimate to obtain a uniform a priori bound on $\\Vert f(t)\\Vert _{L^\\infty _{q_0}}$ up to some positive time.", "To do this precisely, we use the following technical lemma, which encodes the result of such an iteration.", "Lemma 4.2 [39] If $H:[0,T]\\rightarrow \\mathbb {R}_+$ is a continuous increasing function, and $H(t) \\le A e^{Bt H(t)}$ for all $t\\in [0,T]$ and some constants $A, B > 0$ , then $ H(t) \\le e A \\quad \\text{ for } \\quad 0\\le t\\le \\min \\left( T, \\frac{1}{eAB}\\right).$ Next, we prove the key result of this subsection, which allows us to bound higher $L^\\infty _q$ norms of $f(t)$ in terms of lower decay norms and the initial data: Lemma 4.3 Let $q_{\\rm base}>\\gamma +2s+3$ be fixed, and let $f\\in L^\\infty _{q_{\\rm base}}([0,T]\\times {3_M}\\times \\mathbb {R}^3)$ be a solution to (REF ), with $f_{\\rm in}$ satisfying (REF ).", "There exists $C>0$ depending on universal quantities and $q_{\\rm base}$ , such that for $T_f$ given by $T_f = \\frac{C}{\\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}}({3_M}\\times R^3)}},$ the following hold: The solutions $f$ satisfy $\\Vert f(t)\\Vert _{L^\\infty _{q_{\\rm base}}({3_M}\\times \\mathbb {R}^3)} \\le C\\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}}({3_M}\\times \\mathbb {R}^3)}, \\quad 0\\le t\\le \\min (T_f,T).$ If $q> q_{\\rm base}$ and $f\\in L^\\infty _{q}([0,T]\\times {3_M}\\times \\mathbb {R}^3)$ , there holds $\\Vert f(t)\\Vert _{L^\\infty _q({3_M}\\times \\mathbb 
{R}^3)}\\le \\Vert f_{\\rm in}\\Vert _{L^\\infty _q({3_M}\\times \\mathbb {R}^3)} \\exp \\left[M_q\\left(t,\\Vert f_{\\rm in}\\Vert _{L^\\infty _q({3_M}\\times \\mathbb {R}^3)}\\right) \\right], \\quad 0\\le t\\le \\min (T_f,T),$ for some increasing function $M_q:\\mathbb {R}_+\\times \\mathbb {R}_+\\rightarrow \\mathbb {R}_+$ depending on universal constants, $q$ , $q_{\\rm base}$ , and $\\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}}(M^3\\times \\mathbb {R}^3)}$ .", "To prove (a), for any $t\\in (0,\\min (T_f,T)]$ , we define $H(t):= \\Vert f\\Vert _{L^\\infty _{q_{\\rm base}}([0,t]\\times {3_M}\\times \\mathbb {R}^3)}.$ Applying Lemma REF with $T = t$ , we obtain $H(t) \\le \\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}}({3_M}\\times \\mathbb {R}^3)} \\exp ( C_0 H(t) t)$ .", "Lemma REF with $A = \\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}}({3_M}\\times \\mathbb {R}^3)}$ and $B = C_0$ implies $\\Vert f(t)\\Vert _{L^\\infty _{q_{\\rm base}}({3_M}\\times \\mathbb {R}^3)}\\le C \\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}}({3_M}\\times \\mathbb {R}^3)},\\quad 0\\le t\\le \\min (T_f,T),$ with $T_f$ as in the statement of the proposition.", "This establishes (a).", "For (b), given $q>q_{\\rm base}$ , let $N\\in \\mathbb {Z}_{\\ge 0}$ and $\\theta \\in [0,1)$ be such that $q = q_{\\rm base} + (N+\\theta )|\\gamma |$ .", "Applying Lemma REF , followed by (REF ), we have $\\begin{split}\\Vert f(t)\\Vert _{L^\\infty _{q_{\\rm base}+|\\gamma |}({3_M}\\times \\mathbb {R}^3)}&\\le \\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}+|\\gamma |}({3_M}\\times \\mathbb {R}^3)}\\exp (C \\Vert f\\Vert _{L^\\infty _{q_{\\rm base}}([0,T]\\times {3_M}\\times \\mathbb {R}^3)} t),\\end{split}$ for $0\\le t\\le \\min (T_f,T)$ , where $C$ is the constant from (REF ).", "We have shown that, up to time $\\min (T_f,T)$ , the bound (REF ) holds in the case $q= q_{\\rm base} + |\\gamma |$ , with $M_{q_{\\rm base}+|\\gamma |}(t,z) = C t \\Vert f\\Vert _{L^\\infty _{q_{\\rm base}}}$ .", "We now iterate this argument $N$ times to obtain $\\begin{split}&\\Vert f(t)\\Vert _{L^\\infty _{q_{\\rm base}+N|\\gamma |}({3_M}\\times \\mathbb {R}^3)}\\\\&~~\\le \\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}+N|\\gamma |}({3_M}\\times \\mathbb {R}^3)}\\exp (C \\Vert f\\Vert _{L^\\infty _{q_{\\rm base}+(N-1)|\\gamma |}([0,T]\\times {3_M}\\times \\mathbb {R}^3)} t)\\\\&~~\\le \\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}+N|\\gamma |}({3_M}\\times \\mathbb {R}^3)}\\exp \\left(C \\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}+(N-1)|\\gamma |}} \\exp \\left[ M_{q_0+(N-1)|\\gamma |}\\left(t,\\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}+(N-1)|\\gamma |}}\\right)\\right] t\\right)\\\\&~~\\le \\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}+N|\\gamma |}({3_M}\\times \\mathbb {R}^3)}\\exp \\left(C \\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}+N|\\gamma |}} \\exp \\left[ t M_{q_{\\rm base}+(N-1)|\\gamma |}\\left(\\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}+N|\\gamma |}}\\right)\\right] t\\right)\\\\&~~= \\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}+N|\\gamma |}({3_M}\\times \\mathbb {R}^3)}\\exp \\left(M_{q_{\\rm base} + N |\\gamma |}(t, \\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}+N|\\gamma |}})\\right),\\end{split}$ where we use the recursive definition $M_{q_{\\rm base}+N|\\gamma |}(t,z) = C t z \\exp (M_{q_{\\rm base}+(N-1)|\\gamma |}(t,z))$ .", "Finally, for any small $\\nu >0$ , we apply Lemma REF again, with exponents $q-\\nu =q_{\\rm base} + 
(N+\\theta )|\\gamma | - \\nu $ and $q_{\\rm base}+N|\\gamma |$ , and argue similarly to obtain $\\Vert f(t)\\Vert _{L^\\infty _{q-\\nu }({3_M}\\times \\mathbb {R}^3)} \\le \\Vert f_{\\rm in}\\Vert _{L^\\infty _q({3_M}\\times \\mathbb {R}^3)} \\exp \\left[ M_q\\left(t,\\Vert f_{\\rm in}\\Vert _{L^\\infty _q({3_M}\\times \\mathbb {R}^3)}\\right)\\right], \\quad 0\\le t\\le \\min (T_f,T),$ with $M_q(t,z) = C_0 z t \\exp (M_{q_{\\rm base}+N|\\gamma |}(t,z))$ .", "The functions $M_q(t,z)$ depend on $\\Vert f\\Vert _{L^\\infty _{q_{\\rm base}}([0,t]\\times {M}^3\\times \\mathbb {R}^3)}$ , but this quantity is bounded in terms of $\\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}(M^3\\times \\mathbb {R}^3)}}$ , by (a).", "Since $q-\\nu < q$ , our applications of Lemma REF are justified.", "The right-hand side is independent of $\\nu $ , so we can send $\\nu \\rightarrow 0$ and conclude (b)." ], [ "Construction of approximate solutions", "Consider initial data $f_{\\rm in}\\in L^\\infty _{q_{\\rm base}}(M^3\\times \\mathbb {R}^3)$ with $q_{\\rm base}>\\gamma +2s+3$ .", "Our first step is to approximate $f_{\\rm in}$ by smoothing, cutting off large values of $x$ and $v$ , extending by $x$ -periodicity, and adding a region of uniform positivity.", "This will give rise to a sequence of solutions $f^\\varepsilon $ that solve (REF ) with initial data $f_{\\rm in}$ in the limit as $\\varepsilon \\rightarrow 0$ .", "The same construction of $f^\\varepsilon $ will be used to build classical solutions and weak solutions.", "In more detail, for any $r\\in (0,1)$ , define the following functions.", "First, let $\\psi :\\mathbb {R}^6\\rightarrow \\mathbb {R}_+$ be a standard smooth mollifier supported in $B_1(0)$ with $\\int _{B_1(0)} \\psi \\, \\mathrm {d}x = 1$ , and then denote $\\psi _\\varepsilon (x,v) = \\varepsilon ^{-6}\\psi (x/\\varepsilon ,v/\\varepsilon ).$ Next, for any $r>0$ , let $\\zeta _r:\\mathbb {R}^3\\rightarrow \\mathbb {R}_+$ be a smooth cutoff such that $\\sup _{\\mathbb {R}^3} |\\nabla \\zeta _r|\\lesssim 1$ and $\\zeta _r(\\xi ) = {\\left\\lbrace \\begin{array}{ll}1 \\quad \\text{ for } \\xi \\in B_{1/r}\\\\0 \\quad \\text{ for } \\xi \\in B_{1/r+1}^c,\\end{array}\\right.", "}$ Then, for any $\\varepsilon >0$ , define $f_{\\rm in}^\\varepsilon (x,v) := \\zeta _\\varepsilon (x)\\zeta _\\varepsilon (v) [f_{\\rm in}\\ast \\psi _\\varepsilon ](x,v)+ \\varepsilon \\psi (x,v).$ Letting ${3_{M_\\varepsilon }}$ be the three-dimensional torus of side length $M_\\varepsilon := 2(1/\\varepsilon +2)$ centered at $(0,0,0)$ , we extend $f_{\\rm in}^\\varepsilon $ by $x$ -periodicity to obtain a smooth function on ${3_{M_\\varepsilon }}\\times \\mathbb {R}^3$ , or equivalently, a smooth function on $\\mathbb {R}^6$ that is $M_\\varepsilon $ -periodic in the $x$ variable.", "The following construction of an approximate solution $f^\\varepsilon $ is more intricate than one might expect.", "First, the time of existence provided to us by Proposition REF depends on the $H^k_n \\cap L^\\infty _p$ space one chooses, so we need an extra argument to obtain a smooth, rapidly decaying solution on a uniform time interval, even though $f_{\\rm in}^\\varepsilon $ is smooth and rapidly decaying.", "Second, the exponent $q_{\\rm cont}$ in the continuation criterion of Proposition REF may degenerate to $+\\infty $ as $T\\searrow 0$ , so for small times we must perform the continuation “by hand” using Proposition REF and our decay estimates above.", "For each $\\varepsilon >0$ , by Proposition REF , there is a 
time $T_\\varepsilon >0$ and a solution $f^\\varepsilon (t,x,v) \\ge 0$ defined on $[0,T_\\varepsilon ]\\times {M_\\varepsilon }^3\\times \\mathbb {R}^3$ , continuous up to $t=0$ , with $f^\\varepsilon (0,x,v) =f_{\\rm in}^\\varepsilon (x,v)$ .", "We assume $T_\\varepsilon \\le T_f$ from Lemma REF , since our goal is to show $f^\\varepsilon $ exists up to time $T_f$ .", "Noting that $f_{\\rm in}^\\varepsilon $ is smooth in $(x,v)$ and rapidly decaying in $v$ , we choose some large (fixed) values of $k$ , $n$ , and $p$ when we apply Proposition REF .", "We then have $f^\\varepsilon (t) \\in H^k_{n}\\cap L^\\infty _{p}({M_\\varepsilon }^3\\times \\mathbb {R}^3) \\quad \\text{for } t\\in [0,T_\\varepsilon ].$ Choosing $T_\\varepsilon $ smaller if necessary, we also have $\\Vert f^\\varepsilon (t)\\Vert _{H^k_n({M_\\varepsilon }^3\\times \\mathbb {R}^3)} \\le 2\\left(\\Vert f_{\\rm in}^\\varepsilon \\Vert _{H^k_n({M_\\varepsilon }^3\\times \\mathbb {R}^3)} + \\Vert f_{\\rm in}^\\varepsilon \\Vert _{L^\\infty _p({M_\\varepsilon }^3\\times \\mathbb {R}^3)}\\right), \\quad t\\in [0,T_\\varepsilon ].$ Now, let $q>p$ be arbitrary.", "Applying Proposition REF a second time in the space $H^k_n \\cap L^\\infty _{q}$ , we see there is some $T_{\\varepsilon ,q}\\in (0,T_\\varepsilon ]$ such that $\\Vert f^\\varepsilon (t)\\Vert _{L^\\infty _{q}({M_\\varepsilon }^3\\times \\mathbb {R}^3)} < \\infty $ when $t\\in [0,T_{\\varepsilon ,q}]$ .", "Lemma REF (b) then implies estimate (REF ) holds up to time $T_{\\varepsilon ,q}$ .", "Since the right-hand side of (REF ) is bounded uniformly in $t\\in [0,T_{\\varepsilon }]$ , we can combine this with (REF ) to bound the norm of $f^\\varepsilon (t)$ in the space $H^k_n \\cap L^\\infty _q$ by a constant depending only on $T_\\varepsilon $ and the initial data $f_{\\rm in}^\\varepsilon $ .", "We apply Proposition REF again to conclude $f^\\varepsilon $ lies in $L^\\infty _q([0,T_{\\varepsilon ,q} + T_{\\varepsilon ,q}^{\\prime }]\\times {M_\\varepsilon }^3\\times \\mathbb {R}^3)$ for some $T_{\\varepsilon ,q}^{\\prime }$ depending only on the upper bound in (REF ).", "Lemma REF then implies the estimate (REF ) can be extended to the time interval $[0,T_{\\varepsilon ,q} + T_{\\varepsilon ,q}^{\\prime }]$ .", "Combining this with (REF ), the process can be iterated finitely many times until $f\\in L^\\infty _q([0,T_\\varepsilon ]\\times {M_\\varepsilon }^3\\times \\mathbb {R}^3)$ , with estimate (REF ) valid up to time $T_\\varepsilon $ .", "Since $q>p$ was arbitrary, we conclude that all $L^\\infty _q({M_\\varepsilon }^3\\times \\mathbb {R}^3)$ norms of $f^\\varepsilon (t)$ are finite, with the estimate (REF ) valid, up to time $T_\\varepsilon $ .", "Note that $f_{\\rm in}^\\varepsilon $ satisfies the lower bound condition (REF ) with $\\delta =\\varepsilon $ , $r =1$ , and $(x_m, v_m) = (0,0)$ , so that we can apply the continuation criterion of Proposition REF to extend the solution $f^\\varepsilon $ to a time interval $T_\\varepsilon + \\eta $ , with $\\eta $ depending on $T_\\varepsilon $ , the initial data, and the $L^\\infty _{q_{\\rm cont}}$ -norm of $f^\\varepsilon $ on $[0,T_\\varepsilon ]\\times {M_\\varepsilon }^3\\times \\mathbb {R}^3$ , which is controlled by (REF ).", "Since Proposition REF implies $f^\\varepsilon (t) \\in L^\\infty _{q_{\\rm cont}}$ for $t< T_\\varepsilon + \\eta $ , Lemma REF tells us that the bound (REF ) holds up to time $T_\\varepsilon +\\eta $ , and we can repeat this argument finitely many times until $f^\\varepsilon $ exists up 
to time $T_f$ and lies in $L^\\infty _{q_{\\rm cont}}([0,T_f]\\times {M_\\varepsilon }^3\\times \\mathbb {R}^3)$ .", "To extend higher $L^\\infty _q$ estimates up to time $T_f$ , we combine Proposition REF and Lemma REF in the same way as above.", "We omit the details of this step.", "We now have a solution $\\tilde{f}^\\varepsilon \\in L^\\infty _q([0,T_f]\\times {M_\\varepsilon }^3\\times \\mathbb {R}^3)$ for every $q>0$ .", "By Proposition REF , $f^\\varepsilon $ is also smooth, with regularity estimates depending on $\\varepsilon $ .", "Let us summarize the results of the last two subsections: Proposition 4.4 Let $q_{\\rm base}> \\gamma +2s+3$ , and let $T_f = C \\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}}}^{-1}$ as in Lemma REF .", "For any $\\varepsilon >0$ , with $f_{\\rm in}^\\varepsilon $ defined as in (REF ), there exist smooth, rapidly decaying solutions $f^\\varepsilon \\ge 0$ to (REF ) on $[0,T_f]\\times {M_\\varepsilon }^3\\times \\mathbb {R}^3$ with initial data $f_{\\rm in}^\\varepsilon $ .", "These $f^\\varepsilon $ satisfy the estimates $\\Vert f^\\varepsilon (t)\\Vert _{L^\\infty _{q_{\\rm base}}({3_{M_\\varepsilon }}\\times \\mathbb {R}^3)} \\le C\\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}}({3_{M_\\varepsilon }}\\times \\mathbb {R}^3)}, \\quad 0\\le t\\le T_f.$ and for all $q>0$ , $\\Vert f^\\varepsilon (t)\\Vert _{L^\\infty _q({3_{M_\\varepsilon }}\\times \\mathbb {R}^3)}\\le \\Vert f_{\\rm in}^\\varepsilon \\Vert _{L^\\infty _q({3_{M_\\varepsilon }}\\times \\mathbb {R}^3)} \\exp \\left[M_q\\left(t,\\Vert f_{\\rm in}^\\varepsilon \\Vert _{L^\\infty _q({3_{M_\\varepsilon }}\\times \\mathbb {R}^3)}\\right) \\right], \\quad 0\\le t\\le T_f,$ with $M_q$ as in Lemma REF .", "Recall that $\\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}}(\\mathbb {R}^6)}< \\infty $ by assumption.", "For $\\varepsilon >0$ sufficiently small, we clearly have $\\Vert f_{\\rm in}^\\varepsilon \\Vert _{L^\\infty _{q_{\\rm base}}({3_{M_\\varepsilon }}\\times \\mathbb {R}^3)} \\le 2 \\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_{\\rm base}}(\\mathbb {R}^6)}$ .", "Therefore, the right-hand side of (REF ) is independent of $\\varepsilon $ .", "On the other hand, the right-hand side of (REF ) is independent of $\\varepsilon $ if and only if $f_{\\rm in} \\in L^\\infty _q(\\mathbb {R}^6)$ .", "Even if this upper bound is not uniform in $\\varepsilon $ , the quantities are still finite up to time $T_f$ (which is independent of $\\varepsilon $ ).", "In the next subsection, we derive regularity estimates that are uniform in $\\varepsilon $ ." 
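, "To verify the bound $\Vert f_{\rm in}^\varepsilon \Vert _{L^\infty _q} \le 2\Vert f_{\rm in}\Vert _{L^\infty _q(\mathbb {R}^6)}$ claimed above for small $\varepsilon $ (for any fixed $q\ge 0$ with $f_{\rm in}\in L^\infty _q(\mathbb {R}^6)$ and $f_{\rm in}\not \equiv 0$ ), one can argue directly from the definition (REF ): since $\psi _\varepsilon $ is supported in $B_\varepsilon (0)\subset \mathbb {R}^6$ and has unit mass, $\langle v\rangle ^{q} [f_{\rm in}\ast \psi _\varepsilon ](x,v) \le \Big (\sup _{|w|\le \varepsilon } \frac{\langle v\rangle ^{q}}{\langle v-w\rangle ^{q}}\Big ) \Vert f_{\rm in}\Vert _{L^\infty _q(\mathbb {R}^6)} \le (1+\varepsilon )^{q} \Vert f_{\rm in}\Vert _{L^\infty _q(\mathbb {R}^6)},$ while $\varepsilon \langle v\rangle ^q \psi (x,v) \le C_{q,\psi }\, \varepsilon $ because $\psi $ is supported in $B_1(0)$ .", "Since the cutoffs $\zeta _\varepsilon $ take values in $[0,1]$ and the periodic extension does not increase the sup norm (the support in $x$ has diameter at most $2(1/\varepsilon +1)< M_\varepsilon $ ), it follows that $\Vert f_{\rm in}^\varepsilon \Vert _{L^\infty _q} \le (1+\varepsilon )^q \Vert f_{\rm in}\Vert _{L^\infty _q(\mathbb {R}^6)} + C_{q,\psi }\, \varepsilon \le 2 \Vert f_{\rm in}\Vert _{L^\infty _q(\mathbb {R}^6)}$ once $\varepsilon $ is small enough, depending only on $q$ , $\psi $ , and $\Vert f_{\rm in}\Vert _{L^\infty _q(\mathbb {R}^6)}$ ."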
], [ "Regularity of $f^\\varepsilon $ for positive times", "In the next two subsections, we prove the existence of classical solutions (Theorem REF ).", "Therefore, we work under the assumption that $f_{\\rm in}$ satisfies the quantitative lower bound (REF ), and that $f_{\\rm in}\\in L^\\infty _{q_0}(\\mathbb {R}^6)$ with $q_0>2s+3$ .", "We no longer need to use the compactness of our spatial domain.", "From now on, we consider $f^\\varepsilon $ to be defined on $[0,T_f]\\times \\mathbb {R}^6$ , periodic in $x$ with period $M_\\varepsilon $ .", "Recall that $M_\\varepsilon \\rightarrow \\infty $ as $\\varepsilon \\rightarrow 0$ .", "The estimates in this subsection apply on domains that are bounded in the $x$ variable, so for any fixed such domain, the $x$ -periodicity is irrelevant for $\\varepsilon $ small enough.", "For brevity, we implicitly assume throughout this subsection that $\\varepsilon $ is small enough for any statement we make about a bounded $x$ domain.", "In order to apply regularity estimates in an $\\varepsilon $ -independent way, we first need suitable lower bounds for the solutions $f^\\varepsilon $ for positive times.", "For $\\varepsilon $ sufficiently small depending on $\\delta $ , $|x_m|$ , and $|v_m|$ , the hypothesis (REF ) implies $f_{\\rm in}^\\varepsilon (x,v) \\ge \\frac{\\delta }{2}, \\quad (x,v) \\in B_r(x_m,v_m).$ Applying Theorem REF to the smooth solutions $f^\\varepsilon $ , we have $f^\\varepsilon (t,x,v) \\ge \\mu (t,x) e^{-\\eta (t,x) |v|^2},$ with $\\mu $ and $\\eta $ uniformly positive and bounded on any compact subset of $(0,T_f]\\times \\mathbb {R}^3$ , and depending only on $\\delta $ , $r$ , $t$ , $|x-x_m|$ , $v_m$ , and $\\Vert f^\\varepsilon \\Vert _{L^\\infty _{q_0}([0,T_f]\\times \\mathbb {R}^6)}$ .", "Because of Proposition REF , the norm $\\Vert f^\\varepsilon \\Vert _{L^\\infty _{q_0}([0,T_f]\\times \\mathbb {R}^6)}$ is bounded above by a constant times $\\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_0}(\\mathbb {R}^6)}$ , and therefore $\\mu $ and $\\eta $ can be chosen independently of $\\varepsilon $ .", "Now we apply regularity estimates.", "Let $z_0 = (t_0,x_0,v_0)\\in (0,T_f]\\times \\mathbb {R}^6$ , and let $r_0 \\le \\min (1,(t_0/2)^{1/(2s)})$ .", "With $\\bar{c}$ as in Theorem REF , choose $\\alpha >0$ small enough that $q_1 := q_0 - \\bar{c} \\alpha > 2s+3$ .", "Since $q_0>2s+3>3$ and (REF ) holds, Theorem REF gives $\\Vert f^\\varepsilon \\Vert _{C^\\alpha _{\\ell , q_1}(Q_{r_0}^{t,x}(z_0)\\times \\mathbb {R}^3)} \\le C_{t_0} \\Vert f^\\varepsilon \\Vert _{L^\\infty _{q_0}([0,T]\\times \\mathbb {R}^6)},$ where $Q_{r_0}^{t,x}(z_0) := (t_0-r_0^{2s},t_0]\\times \\lbrace x: |x-x_0 - (t-t_0)v_0| < r_0^{1+2s}\\rbrace $ .", "The next step is to apply Schauder.", "Since we only assume decay of order $q_0>2s+3$ for $f_{\\rm in}$ , we cannot afford to use the global (in $v$ ) Schauder estimate of Theorem REF , so we proceed with the local Schauder estimate of Theorem REF instead.", "For any $z=(t,x,v)\\in Q_{r_0}(z_0)$ , we define $K_{f^\\varepsilon ,z}(w) = K_{f^\\varepsilon }(t,x,v,v+w)$ .", "We need to check that the kernel satisfies the Hölder hypothesis in Theorem REF : Lemma 4.5 With $q_1$ , $\\alpha $ , $z_0$ , $r_0$ , and $f^\\varepsilon $ as above but under the additional condition that $(q_1 - 3 - \\gamma -2s)(1+2s) > \\alpha $ , the kernel $K_{f^\\varepsilon ,z}(w)$ satisfies the Hölder continuity condition (REF ), with constant $A_0$ depending on universal constants, $q_1$ , $\\alpha $ , $t_0$ , $x_0$ , $r_0$ , $\\delta $ , 
$|v_m|$ , $|x_0-x_m|$ , and $\\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_0}}$ .", "It has no dependence on $\\varepsilon $ as long as $\\varepsilon $ is small enough that (REF ) holds.", "For $z_1,z_2 \\in Q_{r_0}(z_0)$ , we have $\\begin{split}K_{f^\\varepsilon ,z_1}(w) - K_{f^\\varepsilon ,z_2}(w) &= |w|^{-3-2s} \\int _{\\lbrace h\\cdot w = 0\\rbrace } |h|^{\\gamma +2s+1} [f^\\varepsilon (z_1\\circ (0,0,h)) - f^\\varepsilon (z_2\\circ (0,0,h))] \\tilde{b}(w,h) \\, \\mathrm {d}h\\\\& = K_{g,z_1}(w),\\end{split}$ where $g(z) = f^\\varepsilon (z) - f^\\varepsilon ((z_2\\circ z_1^{-1}) \\circ z)$ .", "With Lemma REF , this implies, for $\\rho >0$ , $\\begin{split}\\int _{B_\\rho }&|K_{f^\\varepsilon ,z_1}(w) - K_{f^\\varepsilon ,z_2}(w)||w|^2 \\, \\mathrm {d}w = \\int _{B_\\rho } |K_{g,z_1}(w)| |w|^2\\, \\mathrm {d}w\\\\&\\le \\left(\\int _{\\mathbb {R}^3}|w|^{\\gamma +2s} |g(t_1,x_1,v_1+w) |\\, \\mathrm {d}w \\right) \\rho ^{2-2s}\\\\&= \\left(\\int _{\\mathbb {R}^3}|w|^{\\gamma +2s} |f^\\varepsilon (z_1\\circ (0,0,w)) - f^\\varepsilon (z_2\\circ (0,0,w))| \\, \\mathrm {d}w \\right) \\rho ^{2-2s}.\\end{split}$ Next, we estimate $|f^\\varepsilon (z_1\\circ (0,0,w)) - f^\\varepsilon (z_2\\circ (0,0,w))|$ .", "Note that for $w\\in \\mathbb {R}^3$ , one has $z_i\\circ (0,0,w) \\in [t_0/4, T]\\times \\mathbb {R}^6$ for $i=1,2$ .", "We claim that, for any $w\\in \\mathbb {R}^3$ , $\\begin{split}&|f^\\varepsilon (z_1\\circ (0,0,w)) - f^\\varepsilon (z_2\\circ (0,0,w))|\\\\&\\qquad \\lesssim \\Vert f^\\varepsilon \\Vert _{C^\\alpha _{\\ell ,q_1}(Q_{r_0}^{t,x}(z_0)\\times \\mathbb {R}^3)} \\langle v_1 + w\\rangle ^{-q_1} d_\\ell (z_1\\circ (0,0,w),z_2\\circ (0,0,w))^\\alpha ,\\end{split}$ where $q_1 = q_0 -\\bar{c}\\alpha $ as above.", "Indeed, this formula follows by using the seminorm $[f^\\varepsilon ]_{C^\\alpha _{\\ell ,q_1}(Q_{r_0}^{t,x}(z_0)\\times \\mathbb {R}^3)}$ when $d_\\ell (z_1\\circ (0,0,w),z_2\\circ (0,0,w))< 1$ and the norm $\\Vert f^\\varepsilon \\Vert _{L^\\infty _{q_1}(Q_{r_0}^{t,x}(z_0)\\times \\mathbb {R}^3)}$ when $d_\\ell (z_1\\circ (0,0,w),z_2\\circ (0,0,w))\\ge 1$ .", "We have also used $\\langle v_1 + w\\rangle \\approx \\langle v_2 + w\\rangle $ , since $v_1, v_2 \\in B_{r_0}(v_0)$ .", "Now, using (REF ), we have $\\begin{split}|f^\\varepsilon (z_1\\circ (0,0,w)) &- f^\\varepsilon (z_2\\circ (0,0,w))| \\\\& \\lesssim \\Vert f^\\varepsilon \\Vert _{C^\\alpha _{\\ell ,q_1}(Q_{r_0}^{t,x}(z_0)\\times \\mathbb {R}^3} \\langle v_1 + w\\rangle ^{-q_1}\\Big (d_\\ell (z_1,z_2) + d_\\ell (z_1,z_2)^\\frac{2s}{1+2s} |w|^\\frac{1}{1+2s}\\Big )^{\\alpha },\\end{split}$ Returning to (REF ), we have $\\begin{split}&\\int _{B_\\rho }|K_{f^\\varepsilon ,z_1}(w) - K_{f^\\varepsilon ,z_2}(w)||w|^2 \\, \\mathrm {d}w\\\\&\\le \\rho ^{2-2s}\\Vert f^\\varepsilon \\Vert _{C^\\alpha _{\\ell ,q_1}(Q_{r_0}^{t,x}(z_0)\\times \\mathbb {R}^3)}\\int _{\\mathbb {R}^3} |w|^{\\gamma +2s} \\langle v_1 + w\\rangle ^{-q_1}\\Big (d_\\ell (z_1,z_2) + d_\\ell (z_1,z_2)^\\frac{2s}{1+2s} |w|^\\frac{1}{1+2s}\\Big )^{\\alpha } \\, \\mathrm {d}w\\\\&\\le \\rho ^{2-2s}\\Vert f^\\varepsilon \\Vert _{C^\\alpha _{\\ell ,q_1}(Q_{r_0}^{t,x}(z_0)\\times \\mathbb {R}^3)} \\langle v_0\\rangle ^{\\gamma +2s+\\frac{\\alpha }{1+2s}} d_\\ell (z_1,z_2)^{\\alpha ^{\\prime }},\\end{split}$ with $\\alpha ^{\\prime } = \\alpha \\frac{2s}{1+2s}$ .", "We have used $q_1>2s+3 > \\gamma +2s +\\alpha /(1+2s)+3$ .", "Applying (REF ), we see that $\\Vert f^\\varepsilon \\Vert _{C^\\alpha _{\\ell ,q_1}(Q_{r_0}^{t,x}(z_0)\\times \\mathbb {R}^3)}$ 
is bounded independently of $\varepsilon $ .", "This implies (REF ) holds, as in the statement of the lemma.", "Because of Lemma REF , we may apply Theorem REF to $f^\varepsilon $ and obtain $\Vert f^\varepsilon \Vert _{C^{2s+\alpha ^{\prime }}_\ell (Q_{r_0/2}(z_0))} \le C_0,$ with $C_0$ depending on $t_0$ , $|v_0|$ , and the initial data $f_{\rm in}$ , but independent of $\varepsilon $ ." ], [ "Convergence as $\varepsilon \rightarrow 0$ and the conclusion of Theorem REF", "For each compact subset $\Omega \subset (0,T_f]\times \mathbb {R}^6$ , our work above implies that $f^\varepsilon $ is bounded in $C^{2s+\alpha ^{\prime }}_\ell (\Omega )$ for some $\alpha ^{\prime }$ depending on $\Omega $ .", "(Note that the dependence of $\alpha ^{\prime }$ on $\Omega $ follows from the dependencies of $\alpha _0$ in Theorem REF ).", "This implies the sequence $f^\varepsilon $ is precompact in $C^{2s+\alpha ^{\prime \prime }}_\ell (\Omega )$ for any $\alpha ^{\prime \prime }\in (0,\alpha ^{\prime })$ , and some subsequence of $f^\varepsilon $ converges in $C^{2s+\alpha ^{\prime \prime }}_\ell (\Omega )$ to a function $f$ .", "Since $\Omega $ was arbitrary, $f$ can be defined as an element of $C^{2s}_{\ell , \rm loc}([0,T]\times \mathbb {R}^6)$ , and for any compact $\Omega $ , there is an $\alpha ^{\prime \prime }$ with $f\in C^{2s+\alpha ^{\prime \prime }}_\ell (\Omega )$ .", "Since $f^\varepsilon \rightarrow f$ pointwise, $f$ also lies in $L^\infty _{q_0}([0,T_f]\times \mathbb {R}^6)$ , by Proposition REF .", "From [51], the norm $C^{2s+\alpha ^{\prime \prime }}_\ell $ controls the material derivative $(\partial _t + v\cdot \nabla _x)f^\varepsilon $ (but not the separate terms $\partial _t f^\varepsilon $ and $\nabla _x f^\varepsilon $ ).", "In particular, for each compact $\Omega $ , $ \Vert (\partial _t + v\cdot \nabla _x) f^\varepsilon \Vert _{C^{\alpha ^{\prime \prime }}_\ell (\Omega )} \le C\Vert f^\varepsilon \Vert _{C^{2s+\alpha ^{\prime \prime }}_\ell (\Omega )},$ and the convergence of $f^\varepsilon $ in $C^{2s+\alpha ^{\prime \prime }}_\ell $ implies $(\partial _t+v\cdot \nabla _x)f$ is a locally Hölder continuous function.", "To analyze the convergence of $Q(f^\varepsilon ,f^\varepsilon )$ as $\varepsilon \rightarrow 0$ , we use the following lemma: Lemma 4.6 Let $g, h\in L^\infty _{q}(\mathbb {R}^3)$ with $q> \gamma +2s +3$ .", "For some $v_0\in \mathbb {R}^3$ and $\alpha >0$ , assume $h\in C^{2s+\alpha }(B_1(v_0))$ .", "Then $|Q(g,h)(v)| \le C\Vert g\Vert _{L^\infty _{q}(\mathbb {R}^3)}( \Vert h\Vert _{C^{2s+\alpha }(B_1(v_0))}+\Vert h\Vert _{L^\infty (\mathbb {R}^3)}) \langle v_0\rangle ^{(\gamma +2s)_+}, \quad v\in B_1(v_0).$ Writing $Q = Q_{\rm s} + Q_{\rm ns}$ as usual, the singular term is handled by [50], which implies $ |Q_{\rm s}(g,h)(v)| \le C\left(\int _{\mathbb {R}^3} g(w)|v+w|^{\gamma +2s} \, \mathrm {d}w\right) \Vert h\Vert _{L^\infty (\mathbb {R}^3)}^{\frac{\alpha }{2s+\alpha }} [h]_{C^{2s+\alpha }(v)}^{\frac{2s}{2s+\alpha }},$ where $[h]_{C^{2s+\alpha }(v)}$ denotes the smallest constant $N>0$ such that there exists a polynomial $p$ of degree less than $2s+\alpha $ with $|h(v+w) - p(w)|\le N |w|^{2s+\alpha }$ for all $w\in \mathbb {R}^3$ .", "Using $a^{\frac{\alpha }{2s+\alpha }}b^{\frac{2s}{2s+\alpha }}\lesssim a + b$ , and noting that $ [h]_{C^{2s+\alpha }(v)} \le [h]_{C^{2s+\alpha }(B_1(v))}
+ \\Vert h\\Vert _{L^\\infty (B_1(v)^c)},$ we see that $Q_{\\rm s}(g,h)(v)$ is bounded by the right-hand side of (REF ), using the convolution estimate of Lemma REF and $\\langle v\\rangle \\approx \\langle v_0\\rangle $ .", "For $Q_{\\rm ns}(g,h)(v) = c_b h(v) [g\\ast |\\cdot |^\\gamma ](v)$ , another application of Lemma REF implies $|Q_{\\rm ns}(g,h)(v)| \\le C \\Vert h\\Vert _{L^\\infty (\\mathbb {R}^3)} \\Vert g\\Vert _{L^\\infty _q(\\mathbb {R}^3)}\\langle v\\rangle ^\\gamma ,$ since $q>3$ .", "The conclusion of the lemma follows.", "Using bilinearity, we write $Q(f^\\varepsilon , f^\\varepsilon ) - Q(f,f) = Q(f^\\varepsilon , f^\\varepsilon -f) + Q(f^\\varepsilon -f, f)$ .", "Let $q$ equal the average of $q_0$ and $\\gamma +2s+3$ .", "Then, since $f^\\varepsilon $ and $f$ share a common uniform bound in $L^\\infty _{q_0}([0,T_f]\\times \\mathbb {R}^6)$ with $q< q_0$ , and $f^\\varepsilon \\rightarrow f$ uniformly on compact sets, we in fact have $f^\\varepsilon \\rightarrow f$ strongly in $L^\\infty _{q}([\\tau ,T_f]\\times \\Omega _x\\times \\mathbb {R}^3)$ for any $\\tau \\in (0,T_f)$ and $\\Omega _x\\subset \\mathbb {R}^3$ .", "Together with the convergence in $C^{2s+\\alpha ^{\\prime }}_\\ell (\\Omega )$ for compact $\\Omega $ , this is enough to apply Lemma REF and conclude $Q(f^\\varepsilon ,f^\\varepsilon ) \\rightarrow Q(f,f)$ locally uniformly.", "In particular, $Q(f,f)$ is well-defined.", "We have shown that $f$ satisfies the Boltzmann equation (REF ) in the pointwise sense.", "To address the initial data, we multiply the equation (REF ) satisfied by $f^\\varepsilon $ by some $\\varphi \\in C^1_{t,x} C^2_v$ with compact support in $[0,T_f) \\times \\mathbb {R}^6$ , and integrate by parts: $\\begin{split}\\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\varphi (0,x,v) f_{\\rm in}^\\varepsilon (x,v) \\, \\mathrm {d}v \\, \\mathrm {d}x &= \\int _0^{T_f} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} f^\\varepsilon (\\partial _t \\varphi + v\\cdot \\nabla _x \\varphi )\\, \\mathrm {d}v \\, \\mathrm {d}x \\, \\mathrm {d}t \\\\&\\quad +\\int _0^{T_f} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\varphi Q(f^\\varepsilon , f^\\varepsilon ) \\, \\mathrm {d}x \\, \\mathrm {d}v \\, \\mathrm {d}t.\\end{split}$ The left-hand side converges to $\\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\varphi (0,x,v) f_{\\rm in}(x,v) \\, \\mathrm {d}x \\, \\mathrm {d}v$ by the convergence of $f_{\\rm in}^\\varepsilon $ to $f_{\\rm in}$ in $L^1(\\operatorname{supp}(\\varphi (0,\\cdot ,\\cdot ))$ (recall the definition of $f_{\\rm in}^\\varepsilon $  (REF )).", "The convergence of the first integral on the right in (REF ) is also straightforward, by the uniform upper bounds for $f^\\varepsilon $ in $L^\\infty _{q_0}$ and the pointwise convergence of $f^\\varepsilon $ to $f$ .", "For the second integral on the right, we need to proceed more carefully.", "The continuity properties needed to apply Lemma REF and control $Q(f^\\varepsilon , f^\\varepsilon )$ pointwise may degenerate as $t\\rightarrow 0$ at a potentially severe rate.", "Therefore, we use the weak formulation of the collision operator to bound this integral.", "This is made precise in the following lemma: Lemma 4.7 For any $\\varphi \\in C^2(\\mathbb {R}^3)$ , and $v, v_*\\in \\mathbb {R}^3$ , there holds $\\left|\\int _{\\mathbb {S}^2} B(v-v_*,\\sigma ) [\\varphi (v_*^{\\prime }) + \\varphi (v^{\\prime }) - \\varphi (v_*) - \\varphi (v)] \\, \\mathrm {d}\\sigma \\right| \\le C \\Vert \\varphi \\Vert _{C^2(\\mathbb 
{R}^3)}|v-v_*|^\\gamma (1+|v-v_*|^{2s}),$ for a universal constant $C$ .", "In particular, for any functions $g, h$ on $\\mathbb {R}^3$ such that the right-hand side is finite, one has $\\left|\\int _{\\mathbb {R}^3}W(g,h,\\varphi ) \\, \\mathrm {d}v\\right| \\le C \\Vert \\varphi \\Vert _{C^{2}(\\mathbb {R}^3)} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} g(v_*) h(v) |v-v_*|^\\gamma (1+|v-v_*|^{2s}) \\, \\mathrm {d}v_* \\, \\mathrm {d}v,$ where $W(g,h,\\varphi )$ is defined as in (REF ).", "Estimates of this general type are common in the Boltzmann literature, see e.g.", "[70].", "However, we could not find a reference with the (apparently sharp) asymptotics $|v-v_*|^{\\gamma +2s}$ inside the integral.", "The sharp asymptotics will be important in the proof of Theorem REF below, where we only assume enough velocity decay to control the $L^1$ moment of order $\\gamma +2s$ of our weak solutions.", "Let us recall some facts about the geometry of elastic collisions: $|v_*^{\\prime } - v_*| &= |v^{\\prime } - v|\\\\v_*^{\\prime } + v^{\\prime } &= v_*+v\\\\|v^{\\prime } - v| &\\approx \\theta |v-v_*|,$ where we recall $\\theta $ from (REF ).", "The second fact () corresponds to conservation of momentum.", "Let us also introduce the standard abbreviations $F^{\\prime } = F(v^{\\prime })$ , $F_*^{\\prime } = F(v_*^{\\prime })$ , $F_* = F(v_*)$ , and $F = F(v)$ for any function $F$ .", "Recalling that $B(v-v_*,\\sigma ) = |v-v_*|^\\gamma \\theta ^{-2-2s} \\tilde{b}(\\theta )$ , with $\\tilde{b} (\\theta ) \\approx 1$ , we divide the integral over $\\mathbb {S}^2$ in (REF ) into two domains: $ D_1 := \\lbrace \\sigma : |\\theta | \\le |v-v_*|^{-1}\\rbrace , \\quad D_2 := \\mathbb {S}^2 \\setminus D_1,$ where $D_2$ is empty if $|v-v_*|\\le 1/\\pi $ .", "In $D_1$ , following a common method for controlling the angular singularity, we Taylor expand $\\varphi $ and use the identities () and (REF ).", "We obtain $\\begin{split}\\varphi ^{\\prime } + \\varphi _*^{\\prime } - \\varphi _* - \\varphi &= \\nabla \\varphi _*\\cdot (v_*^{\\prime } - v_*) + \\nabla \\varphi \\cdot (v^{\\prime } - v) + O(\\Vert D^2\\varphi \\Vert _{L^\\infty } |v^{\\prime }-v|^2)\\\\&= (\\nabla \\varphi - \\nabla \\varphi _*)\\cdot (v^{\\prime } - v) + O(\\Vert D^2\\varphi \\Vert _{L^\\infty }|v^{\\prime } - v|^2).\\end{split}$ From (), the second term on the right in the last expression is proportional to $\\Vert D^2\\varphi \\Vert _{L^\\infty } \\theta ^2|v-v_*|^2$ .", "To handle the first term on the right, we parameterize $\\mathbb {S}^2$ with spherical coordinates $\\sigma = (\\theta , \\eta ) \\in [0,\\pi ]\\times [0,2\\pi ]$ , where $\\theta =0$ corresponds to $v = v^{\\prime }$ .", "A simple geometric argument shows $\\left| \\int _0^{2\\pi } (v^{\\prime } - v) \\, \\mathrm {d}\\eta \\right| \\lesssim |v-v_*|\\theta ^2$ .", "Therefore, we have $\\left|\\int _0^{2\\pi } [ \\varphi ^{\\prime } + \\varphi _*^{\\prime } - \\varphi _* - \\varphi ] \\, \\mathrm {d}\\eta \\right|\\lesssim \\Vert D^2\\varphi \\Vert _{L^\\infty } \\theta ^2 |v-v_*|^2,$ which implies $\\begin{split}\\Big |\\int _{D_1} &B(v-v_*,\\sigma )[\\varphi ^{\\prime } + \\varphi _*^{\\prime } - \\varphi _* - \\varphi ] \\, \\mathrm {d}\\sigma \\, \\mathrm {d}v_* \\, \\mathrm {d}v \\Big |\\\\&\\lesssim |v-v_*|^{\\gamma +2} \\int _0^{|v-v_*|^{-1}} \\theta ^{-2-2s} \\Vert D^2\\varphi \\Vert _{L^\\infty } \\theta ^2 \\sin \\theta \\, \\mathrm {d}\\theta \\lesssim \\Vert D^2\\varphi \\Vert _{L^\\infty } |v-v_*|^{\\gamma +2s} .\\end{split}$ For the integral 
over $D_2$ , since $|\\theta |\\ge |v-v_*|^{-1}$ , we have $ \\begin{split}\\Big | \\int _{D_2} &|v-v_*|^\\gamma \\theta ^{-2-2s} \\tilde{b}(\\theta )[\\varphi ^{\\prime } + \\varphi _*^{\\prime } - \\varphi _* - \\varphi ] \\, \\mathrm {d}\\sigma \\Big |\\\\&\\lesssim \\Vert D^2\\varphi \\Vert _{L^\\infty } |v-v_*|^{\\gamma } \\int _{|v-v_*|^{-1}}^\\pi \\int _0^{2\\pi } \\theta ^{-2-2s} \\sin \\theta \\, \\mathrm {d}\\eta \\, \\mathrm {d}\\theta \\lesssim \\Vert D^2\\varphi \\Vert _{L^\\infty } |v-v_*|^{\\gamma }(1+|v-v_*|^{2s}),\\end{split}$ which establishes (REF ).", "Next, recalling the weak formulation (REF ) we have $\\int _{\\mathbb {R}^3} W(g,h,\\varphi ) \\, \\mathrm {d}v = \\frac{1}{2} \\int _{\\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {S}^2} B(v-v_*,\\sigma ) g h_* [\\varphi ^{\\prime } + \\varphi _*^{\\prime }- \\varphi _* - \\varphi ] \\, \\mathrm {d}\\sigma \\, \\mathrm {d}v_* \\, \\mathrm {d}v,$ and the last conclusion of the lemma follows directly from (REF ).", "Returning to (REF ), for each $t\\in (0,T_f]$ , the locally uniform convergence of $Q(f^\\varepsilon ,f^\\varepsilon )$ to $Q(f,f)$ implies $\\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\varphi Q(f^\\varepsilon ,f^\\varepsilon ) \\, \\mathrm {d}x \\, \\mathrm {d}v \\rightarrow \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\varphi Q(f,f) \\, \\mathrm {d}x \\, \\mathrm {d}v.$ By Lemma REF and our uniform upper bound on $\\Vert f^\\varepsilon \\Vert _{L^\\infty _{q_0}(\\mathbb {R}^6)}$ with $q_0>2s+3$ , we may apply the Dominated Convergence Theorem to the time integral of $\\varphi Q(f^\\varepsilon , f^\\varepsilon )$ and conclude that $f$ agrees with the initial data $f_{\\rm in}$ in the sense of (REF ).", "Finally, we consider the higher regularity of $f$ .", "The approximate solutions $f^\\varepsilon $ are smooth and rapidly decaying, so for any compact $\\Omega \\subset (0,T_f]\\times \\mathbb {R}^3$ , partial derivative $D^k$ , and $\\alpha , m>0$ , Proposition REF provides a $q(k,m)>0$ such that $\\Vert D^k f^\\varepsilon \\Vert _{C^\\alpha _{\\ell ,m}(\\Omega \\times \\mathbb {R}^3_v)}$ is bounded for positive times in terms of $\\Vert f_{\\rm in}^\\varepsilon \\Vert _{L^\\infty _{q(k,m)}(\\mathbb {R}^6)}$ .", "From Proposition REF , this bound is independent of $\\varepsilon $ if the initial data $f_{\\rm in}$ is bounded in $L^\\infty _{q(k,m)}(\\mathbb {R}^6)$ .", "Applying a standard compactness argument, these $C^\\alpha _{\\ell ,m}$ estimates for $D^k f^\\varepsilon $ imply $L^\\infty _{m}$ estimates for $D^k f$ in the limit as $\\varepsilon \\rightarrow 0$ .", "This concludes the proof of Theorem REF ." 
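, "We remark, for completeness, on the geometric step used in the proof of Lemma REF above, namely the bound $\left| \int _0^{2\pi } (v^{\prime } - v) \, \mathrm {d}\eta \right| \lesssim |v-v_*|\theta ^2$ .", "The following short computation is only a sketch: it assumes the standard $\sigma $ -representation of post-collisional velocities, $v^{\prime } = \frac{v+v_*}{2} + \frac{|v-v_*|}{2}\sigma $ , and that $\theta $ is the angle between $\sigma $ and $\hat{u} := (v-v_*)/|v-v_*|$ , consistent with the convention that $\theta = 0$ corresponds to $v^{\prime } = v$ .", "Writing $\sigma = \cos \theta \, \hat{u} + \sin \theta \, \omega (\eta )$ with $\omega (\eta )$ a unit vector orthogonal to $\hat{u}$ satisfying $\int _0^{2\pi } \omega (\eta ) \, \mathrm {d}\eta = 0$ , we have $v^{\prime } - v = \frac{|v-v_*|}{2}(\sigma - \hat{u})$ , and therefore $\int _0^{2\pi } (v^{\prime } - v) \, \mathrm {d}\eta = \pi |v-v_*| (\cos \theta - 1)\, \hat{u},$ so the claimed bound follows from $|\cos \theta - 1| \le \theta ^2/2$ ."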
], [ "t:weak-solutions: the existence of weak solutions", "In this subsection, we prove Theorem REF .", "The proof is based on the same approximating sequence $f^\\varepsilon $ from the proof of Theorem REF .", "The relaxed conditions on $f_{\\rm in}$ result in weaker uniform regularity for $f^\\varepsilon $ , and correspondingly, a different notion of convergence as $\\varepsilon \\rightarrow 0$ .", "In more detail, assume that $f_{\\rm in}\\ge 0$ lies in $L^\\infty _{q_0}(\\mathbb {R}^6)$ with $q_0>\\gamma +2s+3$ .", "This initial data may not necessarily satisfy any uniform positivity condition.", "Let $f^\\varepsilon _{\\rm in}$ be defined as in (REF ), and as above, let $f^\\varepsilon $ be smooth solutions to (REF ) with initial data $f^\\varepsilon _{\\rm in}$ .", "By Proposition REF , these solutions exist on a uniform time interval $[0,T_f]$ , and are uniformly bounded in $L^\\infty _{q_0}([0,T_f]\\times \\mathbb {R}^6)$ .", "Since $L^\\infty _{q_0}([0,T_f]\\times \\mathbb {R}^6)$ is the dual of $L^1_{-q_0}([0,T_f]\\times \\mathbb {R}^6)$ , some sequence of $f^\\varepsilon $ converges in the weak-$\\ast $ $L^\\infty _{q_0}$ sense to a function $f\\in L^\\infty _{q_0}([0,T_f]\\times \\mathbb {R}^6)$ .", "To show $f$ is a weak solution of (REF ), note that for each $\\varepsilon >0$ and $\\varphi \\in C^1_{t,x} C^2_v$ with compact support in $[0,T_f)\\times \\mathbb {R}^6$ , integrating by parts implies the weak formulation (REF ) holds for $f^\\varepsilon $ .", "The left-hand side of (REF ) converges by the $L^1$ -convergence of $f^\\varepsilon _{\\rm in}$ to $f_{\\rm in}$ on $\\operatorname{supp}(\\varphi (0,\\cdot ,\\cdot ))$ , exactly as above.", "For the right-hand side, we have $\\int _0^{T_f} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} (f^\\varepsilon - f) (\\partial _t\\varphi +v\\cdot \\nabla _x\\varphi ) \\, \\mathrm {d}v\\, \\mathrm {d}x \\, \\mathrm {d}t \\rightarrow 0,$ by the weak-$\\ast $ convergence of $f^\\varepsilon $ to $f$ , since $(\\partial _t\\varphi + v\\cdot \\nabla _x \\varphi ) \\in L^1_{-q_0}([0,T_f]\\times \\mathbb {R}^6)$ .", "For the collision term, since $f^\\varepsilon $ is smooth and rapidly decaying, we may apply the identity $\\int _{\\mathbb {R}^3} \\varphi Q(f^\\varepsilon ,f^\\varepsilon ) \\, \\mathrm {d}v = \\int _{\\mathbb {R}^3} W(f^\\varepsilon ,f^\\varepsilon ,\\varphi ) \\, \\mathrm {d}v$ .", "Using bilinearity, we have, for each $t,x$ , $\\begin{split}\\int _{\\mathbb {R}^3}W(f^\\varepsilon ,&f^\\varepsilon ,\\varphi ) \\, \\mathrm {d}v - \\int _{\\mathbb {R}^3} W(f,f,\\varphi ) \\, \\mathrm {d}v = \\int _{\\mathbb {R}^3} W(f^\\varepsilon ,f^\\varepsilon -f,\\varphi ) \\, \\mathrm {d}v + \\int _{\\mathbb {R}^3} W(f,f^\\varepsilon -f, \\varphi ) \\, \\mathrm {d}v.\\end{split}$ The first term on the right is equal to $\\begin{split}\\frac{1}{2} \\int _{\\mathbb {R}^3}f^\\varepsilon \\int _{\\mathbb {R}^3}\\int _{\\mathbb {S}^2} (f^\\varepsilon _* - f_*) B(v-v_*,\\sigma )[\\varphi ^{\\prime } + \\varphi _*^{\\prime } - \\varphi _* - \\varphi ] \\, \\mathrm {d}\\sigma \\, \\mathrm {d}v_* \\, \\mathrm {d}v.\\end{split}$ From (REF ) in Lemma REF , we see that for fixed $v\\in \\mathbb {R}^3$ , $ \\int _{\\mathbb {S}^2} B(v-v_*,\\sigma ) [\\varphi ^{\\prime } + \\varphi _*^{\\prime } - \\varphi _* - \\varphi ] \\, \\mathrm {d}\\sigma \\in L^1_{-q_0}(\\mathbb {R}^3_{v_*}),$ since $q_0> \\gamma +2s+3$ .", "This implies (REF ) converges to 0 as $\\varepsilon \\rightarrow 0$ .", "The same argument, after exchanging the $v$ and $v_*$ integrals, 
implies $\\int _{\\mathbb {R}^3} W(f,f^\\varepsilon - f,\\varphi ) \\, \\mathrm {d}v \\rightarrow 0$ .", "We conclude $f^\\varepsilon $ is a weak solution to (REF ) in the sense of Theorem REF .", "Next, consider the additional assumption that $f_{\\rm in}(x,v)\\ge \\delta $ for all $(x,v) \\in B_r(x_m,v_m)$ .", "As above, this implies lower bounds of the form (REF ) for $f^\\varepsilon $ that are independent of $\\varepsilon $ .", "Together with our uniform bound on $\\Vert f^\\varepsilon \\Vert _{L^\\infty _{q_0}([0,T_f]\\times \\mathbb {R}^6)}$ with $q_0>\\gamma +2s+3$ , this allows us to apply the local De Giorgi estimate of Theorem REF .", "(Note that under such an assumption on $q_0$ , we cannot necessarily apply the global De Giorgi estimate of Theorem REF .)", "We obtain, for any compact $\\Omega \\subset (0,T_f]\\times \\mathbb {R}^6$ , $ \\Vert f^\\varepsilon \\Vert _{C^\\alpha (\\Omega )} \\le C,$ with $C,\\alpha >0$ depending on $\\Omega $ , $\\delta $ , $r$ , $v_m$ , $x_m$ , and $\\Vert f_{\\rm in}\\Vert _{L^\\infty _{q_0}(\\mathbb {R}^6)}$ .", "Since this bound is independent of $\\varepsilon $ , the same conclusion applies to $f$ .", "This concludes the proof of Theorem REF ." ], [ "Continuous matching with initial data", "Here we show, under the assumption that $f_{\\rm in}$ is continuous, that the solution $f$ is continuous as $t\\searrow 0$ .", "We prove this as a consequence of a more general result for linear kinetic integro-differential equations of the following form: $\\partial _t f + v\\cdot \\nabla _x f= \\int _{\\mathbb {R}^3} K(t,x,v,v^{\\prime }) (f(t,x,v^{\\prime }) - f(t,x,v)) \\, \\mathrm {d}v^{\\prime }+ c f,$ where $K: [0,T]\\times \\mathbb {R}^3\\times \\mathbb {R}^3\\times \\mathbb {R}^3\\rightarrow [0,\\infty )$ and $c: [0,T]\\times \\mathbb {R}^3\\times \\mathbb {R}^3\\rightarrow [0,\\infty )$ satisfy, for all $(t,x,v)\\in [0,T]\\times \\mathbb {R}^3\\times \\mathbb {R}^3$ , $w\\in \\mathbb {R}^3$ , and $r>0$ , $\\begin{split}& K(t,x,v,v+w) = K(t,x,v,v-w),\\\\&\\int _{B_{2r}(v)\\setminus B_r(v)} K(t,x,v,v^{\\prime })\\, \\mathrm {d}v^{\\prime }\\le \\Lambda \\langle v\\rangle ^{\\gamma +2s} r^{-2s}, \\quad \\text{ and}\\\\&|c(t,x,v)|\\le \\Lambda \\langle v\\rangle ^\\gamma ,\\end{split}$ for some constants $\\Lambda >0$ , $\\gamma >-3$ , and $s\\in (0,1)$ .", "From the Carleman decomposition of $Q(f,f)$ described in Section REF , together with Lemma REF , one sees that equation (REF ) includes the Boltzmann equation (REF ) as a special case.", "Proposition 4.8 Let $f \\in C^2_\\ell \\cap L^{\\infty }([0,T]\\times \\mathbb {R}^6)$ be a solution to (REF ) with initial data $f_{\\rm in}$ .", "Then, for any $(x_0,v_0)$ and $\\eta >0$ , there exists $t_\\eta , r_\\eta >0$ such that, if $|(x,v) - (x_0,v_0)| < r_\\eta \\quad \\text{ and }\\quad t\\in [0,t_\\eta ),$ then $|f(t,x,v) - f_{\\rm in}(x_0,v_0)| < \\eta .$ The constants $t_\\eta $ and $r_\\eta $ depend only on $|v_0|$ , $\\eta $ , $\\Lambda $ , $s$ , $\\Vert f\\Vert _{L^\\infty }$ and the modulus of continuity of $f_{\\rm in}$ at $(x_0,v_0)$ .", "In particular, the constants do not depend on $\\Vert f\\Vert _{C^2_\\ell }$ .", "To apply this proposition to the solution $f$ to (REF ) constructed in Theorem REF , we use the smooth approximating sequence $f^\\varepsilon $ .", "Proposition REF applies to the smooth solutions $f^\\varepsilon $ , with constants depending on $\\Lambda \\lesssim \\Vert f^\\varepsilon \\Vert _{L^\\infty _{q_0}(\\mathbb {R}^6)}$ , which is bounded independently of $\\varepsilon 
$ by Proposition REF .", "Since $f^\\varepsilon \\rightarrow f$ pointwise in $[0,T]\\times \\mathbb {R}^6$ , the conclusion of Proposition REF also applies to $f$ .", "Note that we allow both cases $\\gamma < 0$ and $\\gamma \\ge 0$ in Proposition REF .", "In the proof, we only obtain the upper bound on $f - f_{\\rm in}$ in (REF ).", "The lower bound can be obtained in an exactly analogous way, so we omit it.", "Without loss of generality, we assume that $x_0 = 0$ .", "Fix $\\delta \\in (0,1)$ sufficiently small so that $|f_{\\rm in}(x,v) - f_{\\rm in}(0,v_0)|< \\frac{\\eta }{2}\\qquad \\text{ if } |x|^2 + |v-v_0|^2 \\le \\delta ^2.$ Let $T_\\delta = \\frac{\\delta }{4( |v_0| + 2\\delta )}.$ Our goal is to construct a supersolution $F$ for $f$ on $[0, T_\\delta ]\\times B_\\delta (0,v_0)$ .", "To begin, we let $F(t,x,v)= e^{2\\Lambda \\langle v_0\\rangle ^\\gamma t} \\left(\\Vert f\\Vert _{L^\\infty ([0,T]\\times \\mathbb {R}^6)} \\psi \\left(\\frac{|x-vt|^2 + |v-v_0|^2}{\\delta ^2} \\right) + \\frac{\\eta }{2} + \\rho t + f_{\\rm in}(0,v_0)\\right),$ where $\\psi \\in C^2(\\mathbb {R})$ satisfies $\\psi (s)= {\\left\\lbrace \\begin{array}{ll}0 \\qquad &\\text{ if } s \\le 0,\\\\1 \\qquad &\\text{ if } s \\ge 1/2,\\end{array}\\right.", "}\\qquad 0 \\le \\psi ^{\\prime } \\le 4,\\qquad \\text{ and } \\qquad |\\psi ^{\\prime \\prime }| \\le 32,$ and $\\rho = A \\Vert f\\Vert _{L^\\infty ([0,T]\\times \\mathbb {R}^6)} \\frac{\\langle v_0\\rangle ^{\\gamma + 2s}}{\\delta ^{2s}} e^{2\\Lambda T_\\delta \\langle v_0\\rangle ^\\gamma },$ for a large constant $A>0$ to be chosen depending only on $\\Lambda $ , $\\langle v_0\\rangle $ , $\\gamma $ , and $s$ .", "We claim that $F > f \\quad \\text{ on } [0,T_\\delta )\\times \\mathbb {R}^6.$ Note that all terms in $F$ except $f_{\\rm in}(0,v_0)$ can be made smaller than $\\eta $ by choosing $r_\\eta >0$ and $t_\\eta \\in (0,T_\\delta )$ sufficiently small.", "Therefore, the proof is complete once we establish (REF ).", "First we note that (REF ) holds initially.", "Indeed, from (REF ) and (REF ), it is clear that $F> f$ for $(t,x,v) \\in \\lbrace 0\\rbrace \\times \\mathbb {R}^6$ .", "Next, we show that (REF ) holds away from $(0,v_0)$ : $F > f\\qquad \\text{ for } (t,x,v) \\in (0,T_\\delta ]\\times B_\\delta (0,v_0)^c.$ Fix any $(t,x,v) \\in (0,T_\\delta ]\\times B_\\delta (0,v_0)^c$ .", "If $|v-v_0| \\ge \\delta /\\sqrt{2}$ , then, by the definition of $\\psi $  (REF ), $F(t, x, v)> \\Vert f\\Vert _{L^\\infty ([0,T]\\times \\mathbb {R}^6)} \\psi \\left(\\frac{|x-vt|^2 + |v-v_0|^2}{\\delta ^2} \\right)= \\Vert f\\Vert _{L^\\infty ([0,T]\\times \\mathbb {R}^6)}\\ge f(t,x,v).$ Hence, (REF ) is established in this case.", "Assume now that $|v-v_0| < \\delta /\\sqrt{2}$ .", "Then $|x-tv|^2 + |v - v_0|^2= |x|^2 + |v-v_0|^2 - 2 t x \\cdot v + t^2 |v|^2\\ge |x|^2 + |v-v_0|^2 - 2 t x \\cdot v.$ Next, we use Young's inequality and then that $t< T_\\delta $ , where $T_\\delta $ is defined in (REF ), and $|v| \\le |v_0| + \\delta /\\sqrt{2}$ , to find $|x-tv|^2 + |v - v_0|^2\\ge \\frac{3|x|^2}{4} + |v-v_0|^2 - 4 t^2 |v|^2\\ge \\frac{3\\delta ^2}{4} - 4 \\frac{\\delta ^2}{4^2(|v_0|+\\delta )^2} (|v_0| + \\delta /\\sqrt{2})^2\\ge \\frac{\\delta ^2}{2}.$ The argument of (REF ) then applies to establish (REF ).", "Hence, (REF ) holds in both cases.", "Due to (REF ), if $F \\lnot > f$ on $[0,T_\\delta ]\\times \\mathbb {R}^6$ , then, defining the crossing time as $t_{\\rm cr}= \\inf \\lbrace t : F(t,x,v) = f(t,x,v) \\quad \\text{for some}\\quad (x,v) \\in \\mathbb 
{R}^6\\rbrace ,$ it follows that $t_{\\rm cr}\\in (0,T_\\delta ]$ by the continuity of $F$ and $f$ , and there exists a crossing point $(x_{\\rm cr}, v_{\\rm cr})\\in B_\\delta (0,v_0)$ such that $F(t_{\\rm cr}, x_{\\rm cr}, v_{\\rm cr}) = f(t_{\\rm cr}, x_{\\rm cr}, v_{\\rm cr}).$ Since $(t_{\\rm cr}, x_{\\rm cr}, v_{\\rm cr})$ is a minimum of $F- f$ on $(0,T_\\delta )\\times B_\\delta (0,v_0)$ , then, at $(t_{\\rm cr}, x_{\\rm cr}, v_{\\rm cr})$ , we have $\\begin{split}&F = f,\\qquad \\partial _t F \\le \\partial _t f,\\qquad \\nabla _x F = \\nabla _x f,\\qquad \\text{and}\\\\&\\int _{B_\\delta (v_{\\rm cr})} (F^{\\prime } - F_{\\rm cr}) K(t_{\\rm cr}, x_{\\rm cr}, v_{\\rm cr},v^{\\prime }) \\, \\mathrm {d}v^{\\prime }\\ge \\int _{B_\\delta (v_{\\rm cr})} ( f^{\\prime } - f_{\\rm cr}) K(t_{\\rm cr}, x_{\\rm cr},v_{\\rm cr},v^{\\prime }) \\, \\mathrm {d}v^{\\prime }.\\end{split}$ Note that, in the last integral, we have used the notation that, for any $v^{\\prime }$ , $f^{\\prime } = f(t_{\\rm cr}, x_{\\rm cr}, v^{\\prime })\\quad \\text{ and }\\quad f_{\\rm cr}= f(t_{\\rm cr}, x_{\\rm cr}, v_{\\rm cr})$ and similarly for $F$ .", "It follows, from (REF ) and (REF ), that, at $(t_{\\rm cr}, x_{\\rm cr}, v_{\\rm cr})$ , $\\partial _t F+ v\\cdot \\nabla _x F \\le \\partial _t f + v\\cdot \\nabla _x f =\\mathcal {L} f+ c F.$ By a direct calculation with (REF ), we also have $\\partial _t F+ v\\cdot \\nabla _x F= 2\\Lambda \\langle v_0\\rangle ^\\gamma F+ \\rho e^{2\\Lambda \\langle v_0\\rangle ^\\gamma t}.$ Note that $c(t,x_{\\rm cr},v_{\\rm cr})\\le \\Lambda \\langle v_{\\rm cr}\\rangle ^\\gamma $ by (REF ).", "Hence, up to decreasing $\\delta $ , $c(t,x_{\\rm cr},v_{\\rm cr}) < 2\\Lambda \\langle v_0\\rangle ^\\gamma $ .", "Hence, we only need to show that, at $(t_{\\rm cr}, x_{\\rm cr}, v_{\\rm cr})$ , $\\mathcal {L}f\\le \\rho e^{2\\Lambda \\langle v_0\\rangle ^\\gamma t_{\\rm cr}},$ in order to obtain a contradiction with (REF ) and conclude the proof.", "To prove (REF ), we start by using (REF ) to write, at $(t_{\\rm cr}, x_{\\rm cr}, v_{\\rm cr})$ , $\\begin{split}\\mathcal {L}f&\\le \\int _{B_\\delta (v_{\\rm cr})^c} (f^{\\prime } - f_{\\rm cr}) K(t_{\\rm cr}, x_{\\rm cr},v_{\\rm cr}, v^{\\prime }) \\, \\mathrm {d}v^{\\prime }+ \\int _{B_\\delta (v_{\\rm cr})} (F^{\\prime } - F_{\\rm cr}) K(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr}, v^{\\prime }) \\, \\mathrm {d}v^{\\prime }\\\\&=: I_1 + I_2.\\end{split}$ First, we bound $I_1$ .", "Using (REF ), we have, with $A_k = B_{2^{k+1}\\delta }(v_{\\rm cr})\\setminus B_{2^{k}\\delta }(v_{\\rm cr})$ , $\\begin{split}I_1&\\lesssim \\Vert f\\Vert _{L^\\infty ([0,T]\\times \\mathbb {R}^6)} \\int _{B_\\delta (v_{\\rm cr})^c} K(t_{\\rm cr}, x_{\\rm cr},v_{\\rm cr}, v^{\\prime }) \\, \\mathrm {d}v^{\\prime }\\\\&\\lesssim \\Vert f\\Vert _{L^\\infty ([0,T]\\times \\mathbb {R}^6)} \\sum _{k=0}^\\infty \\int _{A_k} K(t_{\\rm cr}, x_{\\rm cr}, v_{\\rm cr}, v^{\\prime }) \\, \\mathrm {d}v^{\\prime }\\\\&\\lesssim \\Vert f\\Vert _{L^\\infty ([0,T]\\times \\mathbb {R}^6)} \\langle v_{\\rm cr}\\rangle ^{\\gamma + 2s} \\sum _{k=0}^\\infty (\\delta 2^k)^{-2s}\\lesssim \\Vert f\\Vert _{L^\\infty ([0,T]\\times \\mathbb {R}^6)} \\langle v_{\\rm cr}\\rangle ^{\\gamma +2s} \\delta ^{-2s}.\\end{split}$ Next, we bound $I_2$ .", "From the integral estimate for $K$ in (REF ) applied on a series of annuli, it is easy to show $\\int _{B_r(v)} |v-v^{\\prime }|^2K(t,x,v,v^{\\prime })\\, \\mathrm {d}v^{\\prime }\\le \\Lambda \\langle v\\rangle ^{\\gamma +2s} r^{2-2s},$ for 
$r>0$ .", "Using the symmetry of $K$ with respect to $(v^{\prime }-v_{\rm cr})$ as in (REF ), we see $\int _{B_\delta (v_{\rm cr})} (v^{\prime } - v_{\rm cr}) \cdot \nabla _v F(t_{\rm cr},x_{\rm cr},v_{\rm cr}) K(t_{\rm cr},x_{\rm cr},v_{\rm cr}, v^{\prime }) \, \mathrm {d}v^{\prime } = 0.$ Then, using a Taylor expansion and the definition of $F$ , we see that $\begin{split}&\left|F^{\prime } - F_{\rm cr}- (v^{\prime } - v_{\rm cr}) \cdot \nabla _v F(v_{\rm cr})\right|\\& \lesssim e^{2\Lambda \langle v_0\rangle ^\gamma t_{\rm cr}} \Vert f\Vert _{L^\infty ([0,T]\times \mathbb {R}^6)} \left(\frac{t_{\rm cr}^2 + 1}{\delta ^2} + \frac{t_{\rm cr}^2 |x_{\rm cr}-v_{\rm cr}t_{\rm cr}|^2 + |v_{\rm cr}-v_0|^2}{\delta ^4} \right)|v^{\prime } - v_{\rm cr}|^2\\&\lesssim e^{2\Lambda \langle v_0\rangle ^\gamma t_{\rm cr}} \Vert f\Vert _{L^\infty ([0,T]\times \mathbb {R}^6)} \frac{1}{\delta ^2}|v^{\prime } - v_{\rm cr}|^2,\end{split}$ using $|x_{\rm cr}|^2 + |v_{\rm cr}- v_0|^2< \delta ^2$ and $t_{\rm cr}< T_\delta $ .", "Putting the above together and applying (REF ), we find $\begin{split}I_2&\lesssim e^{2\Lambda \langle v_0\rangle ^\gamma t_{\rm cr}} \Vert f\Vert _{L^\infty ([0,T]\times \mathbb {R}^6)} \frac{1}{\delta ^2} \int _{B_\delta (v_{\rm cr})} |v^{\prime } - v_{\rm cr}|^2 K(t_{\rm cr},x_{\rm cr},v_{\rm cr}, v^{\prime })\, \mathrm {d}v^{\prime }\\&\lesssim e^{2\Lambda \langle v_0\rangle ^\gamma t_{\rm cr}} \Vert f\Vert _{L^\infty ([0,T]\times \mathbb {R}^6)}\frac{\langle v_{\rm cr}\rangle ^{\gamma + 2s}}{\delta ^{2s}}.\end{split}$ Combining (REF ), (REF ), and (REF ), we find, at $(t_{\rm cr}, x_{\rm cr}, v_{\rm cr})$ , $\mathcal {L}f\lesssim \Vert f\Vert _{L^\infty ([0,T]\times \mathbb {R}^6)} \frac{\langle v_{\rm cr}\rangle ^{\gamma + 2s}}{\delta ^{2s}}\left(1+ e^{2\Lambda \langle v_0\rangle ^\gamma t_{\rm cr}}\right).$ Using the choice of $\rho $ from (REF ), the fact that $\langle v_{\rm cr}\rangle \approx \langle v_0\rangle $ (recall that $|v_0 - v_{\rm cr}| < \delta < 1$ ), and increasing $A$ as necessary yields the claim (REF ).", "This concludes the proof."
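, "For completeness, we record the dyadic computation behind the second-moment bound for $K$ used above; it is a sketch relying only on the ring condition on $K$ in (REF ).", "For $r>0$ , decomposing $B_r(v)$ into the annuli $B_{2^{-k}r}(v)\setminus B_{2^{-k-1}r}(v)$ , $k\ge 0$ , we find $\int _{B_r(v)} |v-v^{\prime }|^2 K(t,x,v,v^{\prime })\, \mathrm {d}v^{\prime }\le \sum _{k=0}^\infty (2^{-k}r)^2\, \Lambda \langle v\rangle ^{\gamma +2s} (2^{-k-1}r)^{-2s}\lesssim \Lambda \langle v\rangle ^{\gamma +2s} r^{2-2s},$ where the geometric series converges since $s\in (0,1)$ , and the implied constant depends only on $s$ ."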
], [ "Time regularity in kinetic integro-differential equations", "As part of our proof of uniqueness, we need to show that solutions $f$ to (REF ) are Hölder continuous for positive times with uniform bounds as $t\\searrow 0$ as long as the initial data $f_{\\rm in}$ is Hölder continuous.", "This will be accomplished in Section .", "To prove this, we first need to understand the following fundamental property: if $f(t,\\cdot ,\\cdot )$ is Hölder in $(x,v)$ for each time value $t$ , then they are Hölder in $(t,x,v)$ ?", "The corresponding fact for linear parabolic equations (regularity in $x$ implies regularity in $(t,x)$ , in the suitable scaling) is classical, but it is a nontrivial task to extend this to kinetic integro-differential equations (REF ) (including Boltzmann).", "For future potential applications, we prove this property for general linear kinetic equations of the form (REF ) given above, with $K$ and $c$ satisfying (REF ).", "Recall the local kinetic Hölder seminorm $[f]_{C^\\alpha _{\\ell ,x,v}(D)}$ defined for subsets $D\\subset \\mathbb {R}^6$ , as defined in Section REF .", "Since this section concerns Hölder exponents $\\alpha < s< 1$ , this seminorm can equivalently be defined as $ [f]_{C^\\alpha _{\\ell ,x,v}(D)} = \\sup _{(x,v), (x_0,v_0)\\in D} \\frac{|f(x,v) - f(x_0,v_0)|}{d_\\ell ((0,x,v),(0,x_0,v_0))^\\alpha }.$ As usual, we define $\\Vert f\\Vert _{C^\\alpha _{\\ell ,x,v}(D)} = \\Vert f\\Vert _{L^\\infty (D)} + [f]_{C^\\alpha _{\\ell ,x,v}(D)}$ .", "The main result of this section is as follows.", "We note that this proposition is proven in both cases $\\gamma < 0$ and $\\gamma \\ge 0$ .", "Proposition 5.1 Suppose that $f \\in C^\\alpha _{\\ell , q}([0,T]\\times \\mathbb {R}^6)$ for some $\\alpha \\in (0,\\min \\lbrace 1,2s\\rbrace )$ and $q\\ge 0$ , and that $f$ solves (REF ).", "Then we have $\\Vert f\\Vert _{C^\\alpha _{\\ell ,q}([0,T]\\times \\mathbb {R}^6)}\\le C \\sup _{t\\in [0,T]} \\left\\Vert \\langle v\\rangle ^{q+\\alpha (\\gamma +2s)_+/(2s)}f(t)\\right\\Vert _{C^\\alpha _{\\ell ,x,v}(\\mathbb {R}^6)}.$ The constant $C$ depends only on universal constants, $\\alpha $ , and $\\Lambda $ .", "The key lemma for proving Proposition REF is the following (recall the definition of the dilation $\\delta _r$  (REF )): Lemma 5.2 Under the assumptions of prop:holderint, let $z_1 \\in [0,T]\\times \\mathbb {R}^6$ , $r\\in (0,1]$ be arbitrary, and $z_2 = (t_2,x_2,v_2)$ be such that $t_2 \\in [0, \\langle v_1 \\rangle ^{-(\\gamma +2s)_+}]$ , $r^{2s} t_2 + t_1 \\in [0,T]$ , and $|x_2|, |v_2| < 1$ .", "Then we have $\\begin{split}|f(z_1 \\circ \\delta _r (z_2)) - f(z_1)|\\lesssim |r|^\\alpha \\left(\\Vert f\\Vert _{L^\\infty ([0,T]\\times \\mathbb {R}^6)} + [f(t_1,\\cdot ,\\cdot )]_{C^\\alpha _{\\ell ,x,v}(B_1(x_1,v_1))}\\right),\\end{split}$ with the implied constant depending only on universal constants, $\\alpha $ , and $\\Lambda $ .", "The proof is based on a barrier argument.", "Let $z_1$ be as in the statement of the lemma.", "Without loss of generality, we may assume that $t_1 = 0$ and $x_1 = 0$ .", "Then $z_2 \\in [0, \\min \\lbrace \\langle v_1\\rangle ^{-(2s+\\gamma )_+}, T\\rbrace ]\\times B_1(0) \\times B_1(v_1)$ .", "Step 1: An auxiliary function and its equation.", "Let us set some useful notation.", "For any $r\\ge 0$ and any function $g$ , let $g_r(z) = g(z_1 \\circ \\delta _r(z)).$ Then, let $\\tilde{F}(z) = f_r(z) - f(0, 0, v_1).$ It is straightforward to check that $\\partial _t \\tilde{F} + v\\cdot \\nabla _x \\tilde{F} - \\mathcal {L}f_r= r^{2s} 
c_r f_r,$ where we have the defined the nonlocal operator and kernel $\\begin{split}&\\mathcal {L}f_r= \\int _{\\mathbb {R}^3} f_r(v^{\\prime }) - f_r(v)) \\mathcal {K}(t,x,v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime }\\\\&\\text{and } \\quad \\mathcal {K}(t,x,v,v^{\\prime })= r^{3+2s} K(r^{2s} t, r^{1+2s} x + r^{2s} v_1 t, rv+v_1, rv^{\\prime } + v_1).", "\\end{split}$ We first notice the following bounds derived from (REF ): for any $L>0$ and $v$ , $\\begin{split}\\int _{B_{2L}(v) \\setminus B_L(v)} &\\mathcal {K}(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime }= r^{3+2s}\\int _{B_{2L}(v)\\setminus B_L(v)} K(rv + v_1, rv^{\\prime } + v_1) \\, \\mathrm {d}v^{\\prime }\\\\&= r^{2s} \\int _{B_{2rL}(rv+v_1)\\setminus B_{rL}(rv+v_1)} K(rv +v_1, w) \\, \\mathrm {d}w\\lesssim L^{-2s}\\langle rv + v_1\\rangle ^{\\gamma +2s} ,\\end{split}$ and, applying (REF ) on a decreasing sequence of annuli, $\\begin{split}&\\int _{B_L(v)} |v^{\\prime }-v|^2 \\mathcal {K}(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime }=r^{3+2s}\\int _{B_L(v)} |v^{\\prime }-v|^2 K(rv + v_1, rv^{\\prime } + v_1) \\, \\mathrm {d}v^{\\prime }\\\\&\\quad = r^{-(2-2s)} \\int _{B_{rL}(rv+v_1)} |w - (rv + v_1)|^2 K(rv +v_1, w) \\, \\mathrm {d}w\\lesssim L^{2-2s}\\langle rv + v_1\\rangle ^{\\gamma +2s} ,\\end{split}$ where, for brevity, we have omitted the dependence on $(t,x)$ .", "We also have via (REF ) the symmetry property $ \\mathcal {K}(v,v+w) = r^{3+2s} K(r v + v_1, rv + v_1+rw) = r^{2s+3} K(rv+v_1,rv + v_1 - rw) = \\mathcal {K}(v,v-w).$ Our goal is to obtain a local upper bound on $\\tilde{F}$ ; that is, a bound at $(t_2, x_2, v_2)$ satisfying the smallness assumption in the statement of the lemma.", "Hence, we use a suitable multiplicative cutoff function.", "Let $\\phi \\in C_c^\\infty (\\mathbb {R}^6)$ be a cut-off function such that, for all $i,j \\in \\lbrace 1,2,3\\rbrace $ , $\\begin{split}&\\phi \\approx \\left( |v|^2 + |x|^2 + 1 \\right)^{-\\alpha /2}\\\\&|\\partial _{x_i} \\phi |\\lesssim |x|\\left(|v|^2+|x|^2+1 \\right)^{-\\alpha /2-1},\\\\&|\\partial _{x_ix_j} \\phi | \\lesssim |x|^2\\left(|v|^2+|x|^2+1 \\right)^{-\\alpha /2-2}, \\quad \\text{ and}\\end{split}\\qquad \\begin{split}&|\\partial _{v_i} \\phi |\\lesssim |v|\\left(|v|^2+|x|^2+1 \\right)^{-\\alpha /2-1},\\\\&|\\partial _{v_iv_j} \\phi | \\lesssim |v|^2\\left(|v|^2+|x|^2+1 \\right)^{-\\alpha /2-2},\\\\&|\\partial _{x_iv_j} \\phi | \\lesssim |x||v|\\left(|v|^2+|x|^2+1 \\right)^{-\\alpha /2-2}.\\end{split}$ Define $F = \\phi \\tilde{F}.$ Note that this is not the same function $F$ from the proof of Proposition REF .", "After a straightforward computation, we find $\\begin{split}\\partial _t F + v\\cdot \\nabla _x F= \\frac{v\\cdot \\nabla _x \\phi }{\\phi } F + \\phi \\mathcal {L}\\tilde{F}+ r^{2s}\\phi c_r f_r.\\end{split}$ The goal is now to estimate $F$ from above.", "Step 2: An upper barrier for $F$ .", "Fix $R = \\frac{\\langle v_1 \\rangle }{2r}.$ For $C_0$ to be determined, let $\\begin{split}\\overline{F}(t) = 2 e^{t C_0 \\langle v_1\\rangle ^{(\\gamma +2s)_+}}\\Big (&\\Vert F(0,\\cdot ,\\cdot )\\Vert _{L^\\infty (B_R\\times B_R)}+ \\sup _{s\\in [0,t_2], \\max \\lbrace |x|, |v|\\rbrace = R} F(s,x,v)_+\\\\&+ \\frac{1}{2} r^{2s} t \\Lambda \\langle 2v_1\\rangle ^{\\gamma _+}\\Vert f\\Vert _{L^\\infty ([0,T]\\times \\mathbb {R}^3\\times B_{rR}(v_1)}\\Big ),\\end{split}$ where $\\Lambda $ is the constant from (REF ).", "Our goal is to show that $F \\le \\overline{F}$ on $[0,t_2]\\times B_R \\times B_R$ .", "Notice that, by construction, $\\sup _{(x,v)\\in 
\\overline{B}_R \\times \\overline{B}_R} F(0,x,v)< \\overline{F}(0).$ Hence, if $F \\le \\overline{F}$ does not always hold, we can take the first crossing time $t_{\\rm cr}$ that $\\sup F(t_{\\rm cr}) = \\overline{F}(t_{\\rm cr})$ .", "By the assumed uniform continuity of $f$ in time, we immediately see that $t_{\\rm cr}>0$ .", "On the $(x,v)$ boundary, that is, when $\\max \\lbrace |x|,|v|\\rbrace = R$ , one has $F < \\overline{F}$ by construction.", "Hence, any crossing point must occur in the interior, and we can find $(x_{\\rm cr},v_{\\rm cr}) \\in B_R(0) \\times B_R(0)$ such that $F(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr}) = \\overline{F}(t_{\\rm cr}).$ Using that $(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr})$ is the first crossing point, we find the following: $\\begin{split}\\partial _t F(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr})&\\ge \\partial _t \\overline{F}(t_{\\rm cr}),\\\\\\nabla _{x,v} F(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr}) &= 0,\\\\F(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr})&\\ge F(t_{\\rm cr},x,v) ~~\\text{for all }(x,v)\\in \\mathbb {R}^3\\times B_R,\\\\\\mathcal {L}F(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr}) &\\le 0.\\end{split}$ These facts imply, at the point $(t_{\\rm cr}, x_{\\rm cr},v_{\\rm cr})$ , $\\begin{split}\\partial _t F &+ v\\cdot \\nabla _x F\\ge \\partial _t \\overline{F}\\\\&= C_0 \\langle v_1\\rangle ^{(\\gamma +2s)_+} \\overline{F} + e^{t C_0 \\langle v_0\\rangle ^{(\\gamma +2s)_+}} r^{2s} \\Vert c\\Vert _{L^\\infty ([0,T]\\times \\mathbb {R}^3\\times B_{rR}(v_1))}\\Vert f\\Vert _{L^\\infty ([0,T]\\times \\mathbb {R}^3\\times B_{rR}(v_1)}\\\\&\\ge C_0 \\langle v_1\\rangle ^{(\\gamma +2s)_+} F + r^{2s} \\Lambda \\langle 2v_1\\rangle ^{\\gamma _+}\\Vert f\\Vert _{L^\\infty ([0,T]\\times \\mathbb {R}^3\\times B_{rR}(v_1)}.\\end{split}$ Above we used (REF ) and the smallness assumption on $t_2$ .", "This, combined with the equation (REF ) satisfied by $F$ , yields $C_0 \\langle v_1\\rangle ^{(\\gamma +2s)_+} F + r^{2s}\\Lambda \\langle 2v_1\\rangle ^{\\gamma _+} \\Vert f\\Vert _{L^\\infty ([0,T]\\times \\mathbb {R}^6)} \\le \\frac{v_{\\rm cr}\\cdot \\nabla _x \\phi }{\\phi } F + \\phi \\mathcal {L}\\tilde{F} + r^{2s} \\phi c_r f_r.$ By (REF ) and Young's inequality, we see that $|(v\\cdot \\nabla _x \\phi ) / \\phi |$ is bounded uniformly.", "Also, (REF ) as well as the fact that $|v_{\\rm cr}|\\le R$ and the definition of $f_r$ imply that $r^{2s} \\phi c_r f_r\\le r^{2s}\\Lambda \\langle 2v_1\\rangle ^{\\gamma _+} \\Vert f\\Vert _{L^\\infty ([0,T]\\times \\mathbb {R}^3\\times B_{rR}(v_1)}.$ Therefore, we will reach a contradiction if we can show that, at $(t_{\\rm cr}, x_{\\rm cr}, v_{\\rm cr})$ , $\\phi \\mathcal {L}\\tilde{F}< C_0\\langle v_1 \\rangle ^{(\\gamma +2s)_+} F.$ Once (REF ) is established, we can conclude that $F\\le \\overline{F}$ on $[0,t_2]\\times B_R\\times B_R$ .", "To keep the proof clean, we adopt the notation $h^{\\prime } = h(t_{\\rm cr},x_{\\rm cr},v^{\\prime })$ and $h_{\\rm cr}= h(t_{\\rm cr}, x_{\\rm cr}, v_{\\rm cr})$ for any function $h$ .", "Recall from (REF ) that $F^{\\prime } \\le F_{\\rm cr}$ .", "Then, since $F = \\phi \\tilde{F}$ , we have $\\begin{split}\\phi _{\\rm cr}\\mathcal {L}\\tilde{F}_{\\rm cr}= \\int _{\\mathbb {R}^3} \\phi _{\\rm cr}(\\tilde{F}^{\\prime } - \\tilde{F}_{\\rm cr}) \\mathcal {K}\\, \\mathrm {d}v^{\\prime }&= \\int _{\\mathbb {R}^3} \\phi _{\\rm cr}\\left( \\frac{F^{\\prime }}{\\phi ^{\\prime }} - \\frac{F_{\\rm cr}}{\\phi _{\\rm cr}}\\right) \\mathcal {K}\\, \\mathrm {d}v^{\\prime }\\\\&\\le \\int _{\\mathbb {R}^3} \\phi 
_{\\rm cr}F_{\\rm cr}\\left( \\frac{1}{\\phi ^{\\prime }} - \\frac{1}{\\phi _{\\rm cr}}\\right) \\mathcal {K}\\, \\mathrm {d}v^{\\prime } = I.\\end{split}$ Fix $L = \\langle v_{\\rm cr}\\rangle / 2$ and decompose $I= \\int _{B_L(v_{\\rm cr})} \\phi _{\\rm cr}F_{\\rm cr}\\left( \\frac{1}{\\phi ^{\\prime }} - \\frac{1}{\\phi _{\\rm cr}}\\right) \\mathcal {K}\\, \\mathrm {d}v^{\\prime }+ \\int _{B_L(v_{\\rm cr})^c} \\phi _{\\rm cr}F_{\\rm cr}\\left( \\frac{1}{\\phi ^{\\prime }} - \\frac{1}{\\phi _{\\rm cr}}\\right) \\mathcal {K}\\, \\mathrm {d}v^{\\prime }= I_1 + I_2.$ Consider the first term $I_1$ .", "Expanding $1/\\phi ^{\\prime }$ to second order in the $v^{\\prime }$ variable, and noting that the first-order term vanishes because of the symmetry of the kernel $\\mathcal {K}$ , we have $\\begin{split}I_1&\\le \\frac{1}{2}\\phi _{\\rm cr}F_{\\rm cr}\\sup _{v^{\\prime } \\in B_L(v_{\\rm cr})} |D_v^2(1/\\phi ^{\\prime })| \\int _{B_L(v_{\\rm cr})} |v^{\\prime }-v_{\\rm cr}|^2 \\mathcal {K}\\, \\mathrm {d}v^{\\prime }.\\end{split}$ Now, using the properties  (REF ) of $\\phi $ , as well as the upper bound  (REF ) for $\\mathcal {K}$ , we find $I_1\\lesssim F_{\\rm cr}(|v_{\\rm cr}|^2+|x_{\\rm cr}|^2+1)^{-\\alpha /2}\\langle rv_{\\rm cr}+ v_1\\rangle ^{\\gamma +2s} L^{2-2s}\\sup _{v^{\\prime } \\in B_L(v_{\\rm cr})}\\left( |v^{\\prime }|^2 + |x_{\\rm cr}|^2 + 1 \\right)^{\\alpha /2-2}$ First notice that, since $|v_{\\rm cr}|<R = \\langle v_1\\rangle /(2r)$ , we have $\\langle rv_{\\rm cr}+ v_1\\rangle \\approx \\langle v_1 \\rangle $ .", "Next, by the choice of $L$ , we have $\\langle v^{\\prime }\\rangle \\approx \\langle v_{\\rm cr}\\rangle $ .", "Therefore, (REF ) becomes (up to increasing $C_0$ ) $I_1 \\lesssim \\frac{F_{\\rm cr}\\langle v_1\\rangle ^{\\gamma + 2s} \\langle v_{\\rm cr}\\rangle ^{2-2s}}{(|v_{\\rm cr}|^2 + |x_{\\rm cr}|^2 + 1)^2}< \\frac{C_0}{2} F_{\\rm cr}\\langle v_1\\rangle ^{(\\gamma +2s)_+},$ as desired.", "We now turn to the second term $I_2$ in (REF ).", "Since the $-1/\\phi _{\\rm cr}$ term in the integrand has a good sign, we immediately see $I_2\\le \\phi _{\\rm cr}F_{\\rm cr}\\int _{B_L(v_{\\rm cr})^c} \\frac{1}{\\phi ^{\\prime }} \\mathcal {K}\\, \\mathrm {d}v^{\\prime }.$ Using the asymptotics (REF ) of $\\phi $ we find $I_2\\lesssim \\phi _{\\rm cr}F_{\\rm cr}\\int _{B_L(v_{\\rm cr})^c} (|v^{\\prime }|^2+|x_{\\rm cr}|^2+1)^{\\alpha /2} \\mathcal {K}\\, \\mathrm {d}v^{\\prime }= \\phi _{\\rm cr}F_{\\rm cr}\\sum _{k=0}^\\infty \\int _{A_{k,L}} (|v^{\\prime }|^2+|x_{\\rm cr}|^2+1)^{\\alpha /2} \\mathcal {K}\\, \\mathrm {d}v^{\\prime },$ where we define $A_{k,L}= B_{2^{k+1}L}(v_{\\rm cr}) \\setminus B_{2^kL}(v_{\\rm cr}).$ On the annulus $A_{k,L}$ , we have $|v^{\\prime }| \\lesssim | v_{\\rm cr}| + 2^{k+1}L$ so that $(|v^{\\prime }|^2+|x_{\\rm cr}|^2+1)^{\\alpha /2} \\lesssim \\langle v_{\\rm cr}\\rangle ^\\alpha + \\langle x_{\\rm cr}\\rangle ^\\alpha + 2^{\\alpha k} L^\\alpha $ .", "Using the bound (REF ) for $\\mathcal {K}$ and the fact that $\\alpha < s$ yields $\\begin{split}I_2&\\lesssim \\phi _{\\rm cr}F_{\\rm cr}\\sum _{k=0}^\\infty \\left( \\langle v_{\\rm cr}\\rangle ^\\alpha + \\langle x_{\\rm cr}\\rangle ^\\alpha + 2^{\\alpha k}L^\\alpha \\right) \\int _{A_{k,L}} \\mathcal {K}\\, \\mathrm {d}v^{\\prime } \\\\&\\lesssim \\phi _{\\rm cr}F_{\\rm cr}\\sum _{k=0}^\\infty \\left( \\langle v_{\\rm cr}\\rangle ^\\alpha + \\langle x_{\\rm cr}\\rangle ^\\alpha + 2^{\\alpha k}L^\\alpha \\right) \\langle r v_{\\rm cr}+ v_1 \\rangle ^{\\gamma +2s} 2^{-2s k} 
L^{-2s}\\\\&\\lesssim \\frac{\\langle v_{\\rm cr}\\rangle ^\\alpha + \\langle x_{\\rm cr}\\rangle ^\\alpha + L^{\\alpha -2s}}{(|v_{\\rm cr}|^2+|x_{\\rm cr}|^2+1)^{\\alpha /2}} F_{\\rm cr}\\langle v_1 \\rangle ^{\\gamma +2s} \\lesssim F_{\\rm cr}\\langle v_1 \\rangle ^{\\gamma + 2s}\\end{split}$ where in the second-to-last step we used that $r|v_{\\rm cr}| \\le rR \\le |v_1|/2$ .", "Using (REF ) and (REF ) in (REF ), we find $\\phi \\mathcal {L}\\tilde{F}\\le I_1 + I_2< C_0 F(t_{\\rm cr}) \\langle v_1 \\rangle ^{(\\gamma +2s)_+},$ which concludes the proof of (REF ) and allows us to deduce the upper bound $F \\le \\bar{F}\\qquad \\text{ on } [0,t_2]\\times B_R\\times B_R.$ Step 3: Quantitative bounds on $\\overline{F}$ .", "We establish here the upper bound on $\\overline{F}$ : $~~~~~~\\overline{F}\\lesssim r^\\alpha \\langle v_1\\rangle ^{-q} \\left(\\Vert f\\Vert _{L_q^\\infty ([0,T]\\times \\mathbb {R}^6)}+ [\\langle v\\rangle ^q f(0,\\cdot ,\\cdot )]_{C^{\\alpha }_{\\ell ,x,v}(B_2(0,v_1))} \\right) ~~\\quad \\text{ in } [0,t_2]\\times B_R\\times B_R.$ To this end, fix any $(t,x,v)$ with $t \\in [0,t_2]$ .", "The first step is to notice that the exponential term in the definition (REF ) of $\\overline{F}$ can be bounded by $e^{C_0}$ since $t\\le t_2 \\le \\langle v_1\\rangle ^{-(\\gamma + 2s)_+}$ .", "Similarly, we have $t\\Lambda \\langle 2v_1\\rangle ^{\\gamma _+} \\lesssim 1$ .", "Using these two observation and that $2s > \\alpha $ yields $\\begin{split}\\overline{F}(t,x,v)&\\lesssim \\Vert F(0,\\cdot ,\\cdot )\\Vert _{L^\\infty (B_R\\times B_R)}+ \\sup _{t\\in [0,t_2], \\max \\lbrace |x^{\\prime }|, |v^{\\prime }|\\rbrace = R} F(t^{\\prime },x^{\\prime },v^{\\prime })_+\\\\&\\qquad + r^\\alpha \\Vert f\\Vert _{L^\\infty ([0,T]\\times \\mathbb {R}^3\\times B_{rR}(v_1))}.\\end{split}$ We now bound the $F(0,\\cdot ,\\cdot )$ term in (REF ).", "Fixing any $(x^{\\prime },v^{\\prime }) \\in B_R\\times B_R$ , we have $ \\begin{split}F(0,x^{\\prime },v^{\\prime })&= \\phi (x^{\\prime },v^{\\prime }) (f(z_1\\circ (\\delta _r(0,x^{\\prime },v^{\\prime }))) - f(0,0,v_1))\\\\&= \\phi (x^{\\prime },v^{\\prime }) (f(0,r^{1+2s}x^{\\prime },rv^{\\prime }+v_1) - f(0,0,v_1)).\\end{split}$ If $r^{1+2s}|x^{\\prime }|, r|v^{\\prime }| \\le 1$ , then recalling the asymptotics of $\\phi $ from (REF ) and the definition (REF ) of $d_\\ell $ , we have $\\begin{split}|F(0,x^{\\prime },v^{\\prime })|&\\lesssim (|v^{\\prime }|^2+|x^{\\prime }|^2+1)^{-\\frac{\\alpha }{2}} r^\\alpha \\left(\\max \\lbrace |x^{\\prime }|^{\\frac{1}{1+2s}}, |v^{\\prime }|\\rbrace \\right)^\\alpha \\langle v_1\\rangle ^{-q}[\\langle v\\rangle ^q f(0,\\cdot ,\\cdot )]_{C^{\\alpha }_{\\ell ,x,v}(B_1(0,v_1))}\\\\&\\le r^\\alpha \\langle v_1\\rangle ^{-q} [\\langle v\\rangle ^q f(0,\\cdot ,\\cdot )]_{C^{\\alpha }_{\\ell ,x,v}(B_1(0,v_1))}.\\end{split}$ On the other hand, if either $r^{1+2s}|x^{\\prime }|$ or $r|v^{\\prime }| \\ge 1$ , we find $\\begin{split}|F(0,x^{\\prime },v^{\\prime })|&\\lesssim (|v^{\\prime }|^2+|x^{\\prime }|^2+1)^{-\\alpha /2} \\langle v_1\\rangle ^{-q}\\Vert f\\Vert _{L_q^\\infty ([0,T]\\times B_{rR}(0)) \\times B_{rR}(v_1)}\\\\&\\le r^\\alpha \\langle v_1\\rangle ^{-q}\\Vert f\\Vert _{L^\\infty _q([0,T]\\times \\mathbb {R}^6)},\\end{split}$ since $r^{\\alpha (1+2s)} \\le r^\\alpha $ and $|v^{\\prime }-v_1| \\le rR$ implies that $\\langle v\\rangle \\approx \\langle v_1\\rangle $ .", "Combining (REF ) and (REF ), we obtain $|F(0,x^{\\prime },v^{\\prime })|\\lesssim r^\\alpha \\langle v_1\\rangle 
^{-q}\\left([\\langle v\\rangle ^q f(0,\\cdot ,\\cdot )]_{C^{\\alpha }_{\\ell ,x,v}(B_1(x_1,v_1))} + \\Vert f\\Vert _{L_q^\\infty ([0,T]\\times \\mathbb {R}^6)}\\right),$ as desired.", "Next, we turn to the middle term on the right hand side of (REF ).", "Fix any $(t^{\\prime },x^{\\prime },v^{\\prime }) \\in [0,t_2]\\times \\overline{B}_R \\times \\overline{B}_R$ where $\\max \\lbrace |x^{\\prime }|,|v^{\\prime }|\\rbrace = R$ .", "Again, recalling the properties  (REF ) and (REF ) of $\\phi $ , we have $\\begin{split}|F(t^{\\prime },x^{\\prime },v^{\\prime })|&\\lesssim (|v^{\\prime }|^2+|x^{\\prime }|^2+1)^{-\\alpha /2} \\Vert f\\Vert _{L^\\infty ([0,T]\\times \\mathbb {R}^3\\times B_{rR}(v_1))}\\\\&\\lesssim \\min \\left( \\langle v^{\\prime }\\rangle ^{-\\alpha }, \\langle x^{\\prime } \\rangle ^{-\\alpha } \\right) \\langle v_1\\rangle ^{-q}\\Vert f\\Vert _{L_q^\\infty ([0,T]\\times \\mathbb {R}^6)}.\\end{split}$ Recall from (REF ) that $R = \\langle v_1\\rangle /(2r)$ .", "Then clearly, $|F(t^{\\prime },x^{\\prime },v^{\\prime })|\\lesssim r^{\\alpha } \\langle v_1\\rangle ^{-q} \\Vert f\\Vert _{L_q^\\infty ([0,T]\\times \\mathbb {R}^6)}.$ Combining (REF ), (REF ), and (REF ), we find that (REF ) holds true.", "Step 4: Conclusion of the proof.", "Recalling that $|t_2|, |x_2|, |v_2| \\le 1$ , we have $\\phi (t_2,x_2,v_2) \\approx 1$ .", "From Step 2, we have $F(t_2,x_2,v_2) \\le \\overline{F}(t_2)$ .", "This yields $\\begin{split}f(z_1 \\circ \\delta _r(z_2)) - f(z_1)= \\frac{1}{\\phi (t_2,x_2,z_2)} F(t_2,x_2,z_2)\\lesssim \\overline{F}(t_2).\\end{split}$ Combining this with (REF ), we find $f((z_1 \\circ \\delta _r(z_2)) - f(z_1)\\lesssim r^\\alpha \\langle v_1\\rangle ^{-q} \\left([\\langle v\\rangle ^{-q} f(0,\\cdot ,\\cdot )]_{C^{\\alpha }_{\\ell ,x,v}(B_1(0,v_1))} + \\Vert f\\Vert _{L_q^\\infty ([0,T]\\times \\mathbb {R}^6)} \\right).$ The same proof with $-\\overline{F}$ as a lower barrier of $F$ gives $f((z_1 \\circ \\delta _r(z_2)) - f(z_1)\\gtrsim - r^\\alpha \\langle v_1\\rangle ^{-q} \\left([\\langle v\\rangle ^q f(0,\\cdot ,\\cdot )]_{C^{\\alpha }_{\\ell ,x,v}(B_1(0,v_1))} + \\Vert f\\Vert _{L_q^\\infty ([0,T]\\times \\mathbb {R}^6)} \\right).$ We deduce $|f(z_1\\circ \\delta _r(z_2)) - f(z_1)|\\lesssim r^\\alpha \\langle v_1\\rangle ^{-q}\\left([\\langle v\\rangle ^q f(0,\\cdot ,\\cdot )]_{C^{\\alpha }_{\\ell ,x,v}(B_1(0,v_1))} + \\Vert f\\Vert _{L_q^\\infty ([0,T]\\times \\mathbb {R}^6)} \\right),$ which concludes the proof.", "Now we are ready to use lem:scalingholder to prove prop:holderint: We will show $\\begin{split}&\\Vert f\\Vert _{C^{\\alpha }_\\ell (Q_1(z_0)\\cap ([0,T]\\times \\mathbb {R}^6))}\\\\&\\qquad \\lesssim \\langle v_0\\rangle ^{-q + \\alpha \\frac{(\\gamma +2s)_+}{2s}} \\Big (\\Vert f\\Vert _{L_q^\\infty ([0,T]\\times \\mathbb {R}^6)} + \\sup _{t \\in [(t_0-1)_+,t_0]} [\\langle v\\rangle ^q f(t,\\cdot ,\\cdot )]_{C_{\\ell , x,v}^{\\alpha }(B_2(x_0,v_0))}\\Big ).\\end{split}$ The conclusion of the Proposition follows from (REF ) in a straightforward way.", "Fix any $z_1$ and $z_2$ in $Q_1(z_0)\\cap ([0,T]\\times \\mathbb {R}^6)$ , and assume without loss of generality that $t_2\\ge t_1$ .", "If $d_\\ell (z_2, z_1) \\ge \\frac{1}{2} \\langle v_1\\rangle ^{-(\\gamma +2s)_+/2s}$ , then we simply have $\\frac{|f(z_2) - f(z_1)|}{d_\\ell (z_1,z_2)^{\\alpha }}\\lesssim \\langle v_0\\rangle ^{\\frac{(\\gamma +2s)_+}{2s}\\alpha } \\langle v_0\\rangle ^{-q}\\Vert f\\Vert _{L_q^\\infty ([0,T]\\times \\mathbb {R}^6)}$ since $\\langle v_1\\rangle \\approx \\langle 
v_1\\rangle $ .", "Therefore, for the rest of the proof, we assume $d_\\ell (z_1,z_2) < \\frac{1}{2} \\langle v_1 \\rangle ^{-\\frac{(\\gamma + 2s)_+}{2s}}.$ We set the notation $s_2 = \\frac{t_2-t_1}{r^{2s}},\\quad w_2 = \\frac{v_2-v_1}{r},\\quad \\text{ and }\\quad y_2 = \\frac{x_2 - x_1 - r^{2s} s_2 v_1}{r^{1+2s}}.$ where $r \\in (0,1]$ is to be chosen based on two cases below.", "We immediately notice that $z_2 = z_1 \\circ \\delta _r(s_2,y_2,w_2).$ The first, simpler case is when $t_2-t_1 \\le \\langle v_1 \\rangle ^{-(\\gamma + 2s)_+} d_\\ell (z_1,z_2)^{2s}.$ In this case, we let $r = 2d_\\ell (z_1,z_2).$ Let us check that the hypotheses of Lemma REF are satisfied.", "From (REF ), we have $r\\le 1$ .", "Also, (REF ) implies $s_2= \\frac{t_2-t_1}{2^{2s}d_\\ell (z_1,z_2)^{2s}} < \\langle v_1\\rangle ^{-(2s+\\gamma )_+}.$ From the definition (REF ) of $d_\\ell $ , we have $ |w_2| = \\frac{|v_2-v_1|}{2d_\\ell (z_1,z_2)}\\le 1, \\qquad |y_2| = \\frac{|x_2 - x_1 - (t_2-t_1)v_1|}{2^{1+2s}d_\\ell (z_1,z_2)^{1+2s}}\\le 1.$ Therefore, we can apply lem:scalingholder to find $|f(z_2) - f(z_1)|\\lesssim r^\\alpha \\langle v_0\\rangle ^{-q} \\left(\\Vert f\\Vert _{L_q^\\infty ([0,T]\\times \\mathbb {R}^6)} + [\\langle v\\rangle ^q f(t_1,\\cdot ,\\cdot )]_{C^\\alpha _{\\ell ,x,v}(B_2(x_0,v_0))}\\right),$ using $B_1(x_1,v_1)\\subset B_2(x_0,v_0)$ .", "Since $r\\approx d_\\ell (z_1,z_2)$ , this finishes the proof of (REF ) in this case.", "Next, we consider the case where $t_2-t_1 > \\langle v_1 \\rangle ^{-(\\gamma + 2s)_+} d_\\ell (z_1,z_2)^{2s}.$ In this case, we let $r = 2(t_2-t_1)^\\frac{1}{2s} \\langle v_1 \\rangle ^\\frac{(\\gamma +2s)_+}{2s}.$ Notice that $r > 2^{1/(2s)} d_\\ell (z_1,z_2)$ by (REF ).", "Once again, we would like to apply Lemma REF .", "From (REF ) and the definition of $d_\\ell $ , we have $r\\le 2d_\\ell (z_1,z_2) \\langle v_1\\rangle ^{(\\gamma +2s)_+/(2s)}\\le 1.$ We also have $s_2 < \\langle v_1\\rangle ^{-(\\gamma +2s)_+}$ by construction.", "Using $r> 2^{1/(2s)} d_\\ell (z_1,z_2)$ and the definition of $d_\\ell $ , we have $ |w_2|< \\frac{|v_2-v_1|}{2d_\\ell (z_1,z_2)}\\le 1, \\qquad |y_2| = \\frac{|x_2 - x_1 - (t_2-t_1)v_1|}{r^{1+2s}} < \\frac{|x_2 - x_1 - (t_2-t_1)v_1|}{2^{1+\\frac{1}{2s}}d_\\ell (z_1,z_2)^{1+2s}}\\le 1.$ Therefore, we can apply lem:scalingholder as above to find $|f(z_2) - f(z_1)|\\lesssim r^\\alpha \\langle v_1\\rangle ^{-q} \\left(\\Vert f\\Vert _{L_q^\\infty ([0,T]\\times \\mathbb {R}^6)} + [\\langle v\\rangle ^{-q} f(t_1,\\cdot ,\\cdot )]_{C^\\alpha _{\\ell ,x,v}(B_2(x_0,v_0))}\\right),$ which concludes the proof of (REF ), since $r\\lesssim \\langle v_1 \\rangle ^{(1+\\gamma /(2s))_+} d_\\ell (z_1,z_2)$ and $\\langle v_1 \\rangle \\approx \\langle v_0\\rangle $ ." 
], [ "Propagation of Hölder regularity", "In this section and the next, we need to place extra assumptions on our initial data as in the statement of Theorem REF .", "We recall here the lower bound condition (REF ): there are $\\delta $ , $r$ , and $R>0$ such that $\\text{for each } x\\in \\mathbb {R}^3, \\, \\exists \\, v_x\\in B_R(0) \\text{ such that } f_{\\rm in} \\ge \\delta 1_{B_r(x,v_x)}.$ With $f$ the solution with initial data $f_{\\rm in}$ constructed in Theorem REF , the self-generating lower bounds of Theorem REF (in particular, estimate (REF )) then imply $f(t,x,v) \\ge \\mu e^{-\\eta |v|^2}, \\quad (t,x,v) \\in (0,T]\\times \\mathbb {R}^6.$ for some $\\mu , \\eta >0$ depending on $T$ , $\\delta $ , $r$ , $R$ , $\\Vert f\\Vert _{L^\\infty _q[0,T]\\times \\mathbb {R}^6)}$ , and $q> 3 + \\gamma + 2s$ .", "The uniformity of these lower bounds in $t$ and $x$ will allow us to control the time-dependence of the constants when we apply Schauder estimates.", "We also assume the initial data is Hölder continuous.", "The main result of this section propagates this Hölder regularity to positive times: Proposition 6.1 Let $f:[0,T]\\times \\mathbb {R}^6\\rightarrow \\mathbb {R}$ be the solution to (REF ) constructed in Theorem REF .", "Suppose that $f_{\\rm in}$ satisfies (REF ) for some $\\delta $ , $r$ , and $R$ , and that $\\langle v\\rangle ^{m} f_{\\rm in} \\in C^{\\alpha (1+2s)}_\\ell (\\mathbb {R}^6)$ for some $\\alpha \\in (0,\\min \\lbrace 1,2s\\rbrace )$ and $f_{\\rm in} \\in L^\\infty _q(\\mathbb {R}^6)$ , and that $m$ and $q$ are sufficiently large, depending on $\\gamma $ , $s$ , and $\\alpha $ .", "Then there exists $T_U \\in (0,T]$ such that $\\Vert \\langle v\\rangle ^{m-\\alpha (\\gamma +2s)_+/(2s)} f\\Vert _{C^\\alpha _\\ell ([0,T_U]\\times \\mathbb {R}^6)}\\le C \\Vert \\langle v\\rangle ^{m} f_{\\rm in}\\Vert _{C^{\\alpha (1+2s)}_{\\ell ,x,v}(\\mathbb {R}^6)}.$ The constants $C$ and $T_U$ depend only on universal constants, $m$ , $q$ , $\\alpha $ , $\\delta $ , $r$ , $R$ , $[f_{\\rm in}]_{C^\\alpha _{\\ell ,q}}$ , and $\\Vert f\\Vert _{L^{\\infty }_q([0,T]\\times \\mathbb {R}^6)}$ .", "To control the Hölder continuity of $f$ , we adapt an idea from a previous work on well-posedness for the Landau equation [39], originally inspired by a method of [25] to obtain regularity for the SQG equation.", "For $(t,x,v,\\chi ,\\nu ) \\in \\mathbb {R}_+ \\times \\mathbb {R}^3 \\times \\mathbb {R}^3 \\times B_1(0)^2$ and $m \\in \\mathbb {N}$ , define $\\begin{split}&\\tau f(t,x,v,\\chi ,\\nu ) := f(t,x+\\chi , v+\\nu ),\\\\&\\delta f (t,x,v,\\chi ,\\nu ) = \\tau f(t,x,v,\\chi ,\\nu ) - f(t,x,v),\\\\&g(t,x,v,\\chi ,\\nu ) = \\frac{\\delta f (t,x,v,\\chi ,\\nu )}{(|\\chi |^{2} + |\\nu |^2)^{\\alpha /2}} \\langle v\\rangle ^{m},\\end{split}$ Note that, if $f \\in C^\\infty $ , then $\\lim _{(\\chi ,\\nu )\\rightarrow (0,0)} g(t,x,v,\\chi ,\\nu )$ exists for every $(t,x,v)$ .", "The function $g$ , defined by this limit on $\\mathbb {R}_+ \\times \\mathbb {R}^3 \\times \\mathbb {R}^3 \\times \\lbrace 0 , 0 \\rbrace $ , is then $C^\\infty $ .", "By symmetry, the maximum of $g$ and the minimum of $g$ are the same magnitude.", "Thus, it is equivalent to the $L^\\infty _{x,v,\\chi ,\\nu }$ norm of $g$ , which is, in turn, equivalent to the weighted $C^\\alpha _{x,v}$ semi-norm of $f$ , as remarked in this elementary lemma: Lemma 6.2 Fix any $f : \\mathbb {R}^3 \\times \\mathbb {R}^3 \\rightarrow \\mathbb {R}$ and let $g: \\mathbb {R}^3 \\times \\mathbb {R}^3 \\times B_1\\times B_1 
\\rightarrow \\mathbb {R}$ be defined by $g(x,v,\\chi ,\\nu ) = \\langle v\\rangle ^m \\frac{\\delta f(x,v,\\chi ,\\nu ) }{(|\\chi |^2+|\\nu |^2)^{\\alpha /2}}.$ Then $\\max _{(x,v,\\chi ,\\nu ) \\in \\mathbb {R}^6\\times B_1^2} g= \\Vert g\\Vert _{L^\\infty (\\mathbb {R}^6\\times B_1^2)}\\approx \\Vert \\langle v \\rangle ^m f\\Vert _{C_{x,v}^\\alpha (\\mathbb {R}^6)}\\approx \\sup _{(x,v)\\in \\mathbb {R}^6} \\langle v\\rangle ^{m} \\Vert f\\Vert _{C_{x,v}^\\alpha (B_1(x,v))}$ where the implied constants depend only on $m$ and $\\alpha $ .", "Here, $C_{x,v}^\\alpha $ denotes the standard Hölder space on $\\mathbb {R}^6$ .", "We emphasize that $g$ measures the Hölder continuity of $f$ in the Euclidean metric of $\\mathbb {R}^6$ , rather than the metric $d_\\ell $ that matches the scaling of the equation.", "This choice is imposed on us by the proof: see the term $\\chi \\cdot \\nu /(|\\chi |^2 + |\\nu |^2) g$ in (REF ) below.", "This term needs to be uniformly bounded, which would not be the case if the displacements $\\chi $ and $\\nu $ were given the natural exponents according to the kinetic scaling.", "It is straightforward to show that the two Hölder norms control each other, although with a loss of exponent: for any suitable function $h$ and domain $\\Omega \\subset \\mathbb {R}^6$ , $\\Vert h\\Vert _{C^\\frac{\\alpha }{1+2s}_{\\ell ,x,v}(\\Omega )}\\lesssim \\Vert h\\Vert _{C_{x,v}^\\frac{\\alpha }{1+2s}(\\Omega )}\\lesssim \\Vert h\\Vert _{C^\\alpha _{\\ell ,x,v}(\\Omega )},$ where the $C^\\alpha _{\\ell ,x,v}$ norm has been defined in Section REF .", "Our strategy to prove Proposition REF is to bound $g$ from above using a barrier argument.", "The defining equation for the barrier $\\overline{G}$ will correspond to the estimates that are available for $g$ at a first crossing point, so that we can derive a contradiction at that point.", "Therefore, we present the upper bounds for $g$ in the following key lemma, before explaining the barrier argument.", "Lemma 6.3 Let $f$ , $\\alpha $ , $m$ , and $q$ be as in Proposition REF , and let $g$ be defined as in (REF ).", "For some $t_{\\rm cr}>0$ , if $g$ has a global maximum over $[0,t_{\\rm cr}]\\times \\mathbb {R}^6\\times B_1(0)^2$ at $(t_{\\rm cr}, x_{\\rm cr},v_{\\rm cr}, \\chi _{\\rm cr}, \\nu _{\\rm cr})$ , then $\\partial _t g\\le C \\left(g + t_0^{\\mu (\\alpha ,s)} g^{\\theta (\\alpha ,s)}\\right)\\qquad \\text{ at } (t_{\\rm cr},x_{\\rm cr}, v_{\\rm cr}, \\chi _{\\rm cr},\\nu _{\\rm cr}),$ where $\\alpha ^{\\prime } = \\alpha \\frac{2s}{1+2s}$ , $ \\begin{split}\\mu (\\alpha ,s) &:= \\left( - 1 + \\frac{\\alpha -\\alpha ^{\\prime }}{2s}\\right)\\left( \\frac{2s+\\alpha ^{\\prime }/2}{2s+\\alpha ^{\\prime }}\\right) \\in (-1,0),\\\\\\theta (\\alpha ,s) &:= 1 + \\left(1+\\frac{\\alpha +2s}{\\alpha ^{\\prime }}\\right) \\left( \\frac{2s+\\alpha ^{\\prime }/2}{2s+\\alpha ^{\\prime }}\\right),\\end{split} $ and the constant $C>0$ depends on universal constants, $\\alpha $ , $q$ , $m$ , $\\delta $ , $r$ , $R$ , and $\\Vert f\\Vert _{L^\\infty _q([0,T]\\times \\mathbb {R}^6)}$ .", "Before proving Lemma REF , we show how to use it to conclude Proposition REF .", "We begin by noting that we can assume, without loss of generality, that $\\Vert \\langle v\\rangle ^{q^{\\prime }} D^k f\\Vert _{L^\\infty ([0,T]\\times \\mathbb {R}^6)} < \\infty \\qquad \\text{ for every $q^{\\prime }$ and multi-index $k$ sufficiently large.", "}$ Indeed, if not, we may use the approximating sequence $f^\\varepsilon $ from the proof of Theorem REF , which 
is sufficiently smooth; the bound (REF ), which does not depend quantitatively on norms of order higher than $\alpha $, is then inherited by $f$ in the limit. First, we claim that, with $f$, $m$, $\alpha $, and $T_U \in (0,T]$ as in the statement of the proposition, $\Vert \langle v\rangle ^{m} f\Vert _{L^\infty _t([0,T_U], C^\alpha (\mathbb {R}^6))}\le 2 \Vert \langle v\rangle ^{m} f_{\rm in}\Vert _{C^\alpha (\mathbb {R}^6)}.$ To prove (REF ), we use the function $g$ defined in (REF ) and construct a barrier $\overline{G}$, on a small time interval, that controls $g$ from above. With $N > 0$ to be chosen later, define $\overline{G}$ to be the unique solution to ${\left\lbrace \begin{array}{ll}\frac{d}{dt} \overline{G}(t) = N \left(\overline{G} + t^{\mu (\alpha ,s)}\overline{G}(t)^{\theta (\alpha ,s)}\right),\\\overline{G}(0) = 1 + \Vert g(0,\cdot )\Vert _{L^\infty (\mathbb {R}^6 \times B_1(0)^2)} + N \Vert f \Vert _{L^\infty _q([0,T] \times \mathbb {R}^6)},\end{array}\right.}$ where $\mu (\alpha ,s)$ and $\theta (\alpha ,s)$ are as in Lemma REF . This solution $\overline{G}$ exists on some time interval $[0,T_G]$, with $T_G$ depending only on $\alpha $, $s$, $N$, $\Vert g(0,\cdot )\Vert _{L^\infty }$, and $\Vert f \Vert _{L^\infty _q}$. Later, we will choose $N$ depending only on $\Vert f \Vert _{L^\infty _q}$. Our goal is to show that $g(t,x,v,\chi ,\nu ) < \overline{G}(t)$ for all $t\in [0,T_G]$. Let $t_{\rm cr}$ be the first time that $\Vert g\Vert _{L^\infty ([0,t_{\rm cr}]\times \mathbb {R}^3 \times \mathbb {R}^3 \times B_1(0)^2)} = \overline{G}(t_{\rm cr})$. It is clear from (REF ) that $t_{\rm cr}>0$. We seek a contradiction at $t=t_{\rm cr}$. Next, we claim that we may assume the existence of a point $(x_{\rm cr},v_{\rm cr},\chi _{\rm cr},\nu _{\rm cr}) \in \mathbb {R}^3\times \mathbb {R}^3\times \overline{B}_1(0)^2$ such that $g(t_{\rm cr},x_{\rm cr},v_{\rm cr}, \chi _{\rm cr}, \nu _{\rm cr})= \Vert g(t_{\rm cr},\cdot ,\cdot ,\cdot ,\cdot )\Vert _{L^\infty }= \overline{G}(t_{\rm cr}).$ Indeed, if not, we may take any sequence $(t_{\rm cr}, x_n, v_n, \chi _n, \nu _n)$ such that $g(t_{\rm cr},x_n,v_n, \chi _n, \nu _n) \rightarrow \Vert g(t_{\rm cr},\cdot ,\cdot ,\cdot ,\cdot )\Vert _{L^\infty }= \overline{G}(t_{\rm cr}).$ Then define $f_n(t,x,v)= f(t, x+x_n, v)\quad \text{ and }\quad g_n(t,x,v,\chi ,\nu )= g(t, x+x_n, v, \chi , \nu ).$ Due to the fast decay of $f$ as $|v|\rightarrow \infty $ and its smoothness, see (REF ), it follows that, up to passing to a subsequence, there exist $\tilde{f}$ and $\tilde{g}$ such that $f_n \rightarrow \tilde{f}$ and $g_n \rightarrow \tilde{g}$ in $C^{2+\alpha }_{\ell }$ locally uniformly. Using again the fast decay and smoothness of $f$, it follows that $|v_n|$ is bounded, in which case, up to taking a subsequence, $v_n \rightarrow v_{\rm cr}$ for some $v_{\rm cr}\in \mathbb {R}^3$. Similarly, $(\chi _n, \nu _n) \rightarrow (\chi _{\rm cr},\nu _{\rm cr}) \in \overline{B}_1(0)^2$. It follows that $\tilde{g}(t_{\rm cr}, 0, v_{\rm cr}, \chi _{\rm cr}, \nu _{\rm cr})= \lim _{n\rightarrow \infty } g_n(t_{\rm cr}, 0, v_n, \chi _n, \nu _n)= \overline{G}(t_{\rm cr}).$ Notice that $\tilde{f}$ inherits all of the same (global) bounds as $f$ and satisfies the Boltzmann equation (REF ), by the locally uniform
convergence in $C^{2+\\alpha }_\\ell $ .", "Therefore, without loss of generality, a crossing point exists as in (REF ).", "Now, we show that, up to increasing $N$ if necessary, $(\\chi _{\\rm cr},\\nu _{\\rm cr}) \\in B_1(0)^2$ ; that is, $\\chi _{\\rm cr}$ and $\\nu _{\\rm cr}$ lie in the interior of $B_1(0)$ .", "Indeed, if $(\\chi _{\\rm cr},\\nu _{\\rm cr})$ were located on the boundary of $B_1(0)^2$ , then a direct calculation using (REF ) and $\\chi _{\\rm cr}^2 + \\nu _{\\rm cr}^2 \\ge 1$ shows $g(t_{\\rm cr}, x_{\\rm cr}, v_{\\rm cr}, \\chi _{\\rm cr}, \\nu _{\\rm cr})\\le \\Vert f\\Vert _{L^\\infty _m}< \\overline{G}(t_{\\rm cr}),$ which contradicts (REF ).", "It follows that $(\\chi _{\\rm cr},\\nu _{\\rm cr})\\in B_1(0)^2$ .", "From Lemma REF , we find, at $(t_{\\rm cr}, x_{\\rm cr}, v_{\\rm cr}, \\chi _{\\rm cr}, \\nu _{\\rm cr})$ , $\\partial _t g\\le C\\left( g + t_0^{\\mu (\\alpha ,s)} g^{\\theta (\\alpha ,s)}\\right).$ Since $g = \\overline{G}$ at this point, we have $\\partial _t g\\le C\\left( \\overline{G} + t_0^{\\mu (\\alpha ,s)} \\overline{G}^{\\theta (\\alpha ,s)}\\right).$ Hence, by increasing $N$ if necessary, we have, due to (REF ), $\\partial _t g< \\frac{d\\overline{G}}{dt}.$ On the other hand, since $\\overline{G} - g$ has a minimum at $(t_{\\rm cr}, x_{\\rm cr}, v_{\\rm cr}, \\chi _{\\rm cr}, \\nu _{\\rm cr})$ , one has $\\partial _t( \\overline{G} - g) \\le 0.$ The inequalities (REF ) and (REF ) contradict each other, which implies the time $t_{\\rm cr}$ does not exist.", "This establishes that $g< \\overline{G}$ on $[0,T_G]$ .", "The inequality (REF ) therefore holds up to a time $T_U = \\min \\lbrace T, T_G\\rbrace $ , which implies the existence of $T_U$ as in the statement of the proposition.", "Since the statement we want to prove is in terms of Hölder norms $C^\\alpha _\\ell $ with kinetic scaling, we apply (REF ) to both sides of (REF ) and obtain $\\Vert \\langle v\\rangle ^{m} f\\Vert _{L^\\infty _t([0,T_U], C^\\alpha _{\\ell ,x,v}(\\mathbb {R}^6))}\\lesssim \\Vert \\langle v\\rangle ^m f_{\\rm in}\\Vert _{C^{\\alpha (1+2s)}_{\\ell ,x,v}(\\mathbb {R}^6)}.", "$ Finally, we apply Proposition REF to conclude (REF ), and the proof is complete.", "We now prove the key estimate of Lemma REF .", "We proceed in several steps.", "First, we convert the classical derivative terms in the equation for $f$ to derivatives of $g$ .", "Next, we obtain bounds on the collision operator using the bounds on $g$ and the Schauder estimates for $f$ .", "This involves an intricate decomposition of the collision kernel, each portion of which we write as a separate step.", "Step 1: The equation for $g$ .", "If we take a finite difference of the Boltzmann equation (REF ), we get that $\\partial _t \\delta f + v \\cdot \\nabla _x \\delta f + \\nu \\cdot \\nabla _\\chi \\delta f = \\tau Q(f,f) - Q(f,f).$ Multiplying by $\\langle v\\rangle ^{m} (|\\chi |^2 + |\\nu |^2)^{-\\alpha /2}$ and commuting the derivative operators yields $\\partial _t g+ v \\cdot \\nabla _x g+ \\nu \\cdot \\nabla _\\chi g+ \\alpha \\frac{\\chi \\cdot \\nu }{|\\chi |^2 + |\\nu |^2} g= \\frac{\\langle v\\rangle ^m}{(|\\chi |^2 + |\\nu |^2)^\\frac{\\alpha }{2}} \\left( \\tau Q(f,f) - Q(f,f)\\right).$ We next consider the right hand side of (REF ).", "Applying the Carleman decomposition $Q = Q_{\\rm s} + Q_{\\rm ns}$ as usual, we write $\\tau (Q(f,f)) - Q(f,f)= Q_\\text{s}(\\delta f, \\tau f)+ Q_\\text{s}(f, \\delta f)+ Q_\\text{ns}(\\delta f, \\tau f)+ Q_\\text{ns}(f, \\delta f).$ Since $(t_{\\rm cr}, 
x_{\\rm cr},v_{\\rm cr},\\chi _{\\rm cr},\\nu _{\\rm cr})$ is the location of an interior maximum, we have $\\nabla _x g = \\nabla _\\chi g = 0$ at this point.", "Hence (REF ) becomes $\\begin{split}\\partial _t g&+ \\alpha \\frac{\\chi \\cdot \\nu }{|\\chi |^2 + |\\nu |^2} g= \\frac{\\langle v\\rangle ^m}{(|\\chi |^2 + |\\nu |^2)^\\frac{\\alpha }{2}} \\left( Q_\\text{s}(\\delta f, \\tau f) + Q_\\text{s}(f, \\delta f) + Q_\\text{ns}(\\delta f, \\tau f) + Q_\\text{ns}(f, \\delta f)\\right).\\end{split}$ Since $|\\chi \\cdot \\nu |/(|\\chi |^2+|\\nu |^2) \\le 1$ , the conclusion of the lemma follows if we can find suitable upper bounds for the terms on the right-hand side of (REF ).", "We estimate these four terms one by one.", "Step 2: Bounding the $Q_{\\rm s}(\\delta f, \\tau f)$ term in (REF ).", "As usual, since $Q_{\\rm s}$ acts only in the velocity variable, we omit the dependence on $(t_{\\rm cr}, x_{\\rm cr})$ in the following calculation.", "First, we make note of a useful fact often used in the sequel: since $\\nu \\in B_1(0)$ , it follows that $\\tau f (v) \\langle v\\rangle ^k\\lesssim \\Vert f \\Vert _{L^\\infty _k}\\qquad \\text{ for any $k\\ge 0$}.$ Next, we recall that $(t_{\\rm cr}, x_{\\rm cr}, v_{\\rm cr}, \\chi _{\\rm cr}, \\nu _{\\rm cr})$ is crucially the location of a maximum of $g$ .", "For ease of notation, we drop the `cr' subscript and simply refer to the point as $(t,x,v,\\chi ,\\nu )$ .", "We set $J_1 = \\frac{\\langle v\\rangle ^m}{(|\\chi |^2 + |\\nu |^2)^{\\alpha /2}} Q_{\\rm s}(\\delta f, \\tau f).$ Using (REF ), this can be rewritten as: $\\begin{split}J_1&= \\int _{\\mathbb {R}^3} (\\tau f(v) - \\tau f(v^{\\prime })) \\frac{\\langle v\\rangle ^m K_{\\delta f}(v,v^{\\prime })}{(|\\chi |^2 + |\\nu |^2)^{\\alpha /2}} \\, \\mathrm {d}v^{\\prime }= \\int _{\\mathbb {R}^3} (\\tau f(v) - \\tau f(v^{\\prime })) \\tilde{K}_g (v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime },\\end{split}$ where, recalling the definition (REF ) of $g$ , $\\tilde{K}_g(v,v^{\\prime })= \\frac{\\langle v\\rangle ^m K_{\\delta f}(v,v^{\\prime })}{(|\\chi |^2 + |\\nu |^2)^{\\alpha /2}}\\approx |v-v^{\\prime }|^{-3-2s} \\int _{w \\perp v-v^{\\prime }} g(v+w) \\frac{\\langle v\\rangle ^m}{\\langle v+w \\rangle ^m} |w|^{\\gamma +2s+1} \\, \\mathrm {d}w.$ Let us record some useful upper bounds for $\\tilde{K}_g$ : from Lemma REF , we have, for any $r>0$ , $\\begin{split}\\int _{B_{2r} \\setminus B_r} &|\\tilde{K}_g(v,v+z)|\\, \\mathrm {d}z \\lesssim \\frac{\\langle v\\rangle ^m}{(|\\chi |^2+|\\nu |^2)^{\\alpha /2}} \\left( \\int _{\\mathbb {R}^3} \\delta f(v+w) |w|^{\\gamma +2s}\\, \\mathrm {d}w\\right) r^{-2s}\\\\&\\lesssim r^{-2s} \\int _{\\mathbb {R}^3} \\frac{ |g(v+w)| |w|^{\\gamma +2s}\\langle v\\rangle ^m}{ \\langle v+w \\rangle ^m} \\, \\mathrm {d}w\\le C r^{-2s} \\Vert g \\Vert _{L^\\infty }\\langle v\\rangle ^{m+(\\gamma +2s)_+},\\end{split}$ since $m> \\gamma +2s+3$ .", "Applying (REF ) on an infinite union of annuli $B_{2r}\\setminus B_r$ , $B_{4r}\\setminus B_{2r}$ , etc., we obtain $\\int _{B_r^c}|\\tilde{K}_g(v,v+z)| \\, \\mathrm {d}z\\lesssim r^{-2s} \\Vert g \\Vert _{L^\\infty } \\langle v\\rangle ^{m+(\\gamma +2s)_+}.$ Finally, from [42] (which holds for all ranges of $\\gamma +2s$ ), we obtain the following pointwise upper bound: if $|v^{\\prime }| \\le |v|/3$ , then $|\\tilde{K}_g(v,v^{\\prime })|\\lesssim \\frac{\\Vert g \\Vert _{L^\\infty }}{|v-v^{\\prime }|^{3+2s}} \\langle v\\rangle ^{\\gamma + 2s + 3}\\lesssim \\frac{\\Vert g \\Vert _{L^\\infty }}{| v|^{3+2s}} \\langle 
v\\rangle ^{\\gamma + 2s + 3}.$ We require this estimate since we work in uniform spaces with weights in $v$ , which means we sometimes encounter the quantity $\\langle v\\rangle / \\langle v^{\\prime } \\rangle $ .", "This is bounded, except when $|v^{\\prime }|$ is small compared to $|v|$ , so we need the extra moment decay of (REF ) to compensate in that case.", "The analysis now proceeds in two slightly different ways, based on two cases.", "Case 1: $|v| \\ge 1$ .", "Setting $R = 4 \\langle v\\rangle / 3$ and $r = |v|/3$ , we write $\\begin{split}J_1 &= \\int _{B_R^c(v)} (\\tau f(v) - \\tau f(v^{\\prime })) \\tilde{K}_g(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } + \\int _{B_r(0)} (\\tau f(v) - \\tau f(v^{\\prime })) \\tilde{K}_g(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } \\\\&\\quad \\quad \\quad \\quad + \\int _{B_R(v) \\setminus B_r(0)} (\\tau f(v) - \\tau f(v^{\\prime })) \\tilde{K}_g(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } =: J_{1,1} + J_{1,2} + J_{1,3}.\\end{split}$ Notice that the only term involving the singularity at $v=v^{\\prime }$ is $J_{1,3}$ due to the choice of $R$ and $r$ .", "For the long-range term, using (REF ) and the fact that $\\langle v^{\\prime }\\rangle \\gtrsim \\langle v\\rangle $ when $v^{\\prime }\\in B_R^c$ , we have $\\begin{split}J_{1,1} &\\le \\int _{B_R^c(v)} \\left(|\\tau f(v)|\\langle v\\rangle ^{m+(\\gamma +2s)_+} + |\\tau f(v^{\\prime })| \\langle v^{\\prime }\\rangle ^{m+(\\gamma +2s)_+}\\right) \\frac{|\\tilde{K}_g(v,v^{\\prime })|}{\\langle v\\rangle ^{m+(\\gamma +2s)_+}} \\, \\mathrm {d}v^{\\prime }\\\\&\\lesssim \\Vert f \\Vert _{L^\\infty _q} \\int _{B_R^c(v)} \\frac{|\\tilde{K}_g(v,v^{\\prime })|}{\\langle v\\rangle ^{m+(\\gamma +2s)_+}} \\, \\mathrm {d}v^{\\prime }\\lesssim \\Vert f \\Vert _{L^{\\infty }_q} \\langle v\\rangle ^{-2s} \\Vert g \\Vert _{L^\\infty }.\\end{split}$ In the second inequality, we used that $q > m + (\\gamma + 2s)_+$ , by assumption.", "The near-zero term is bounded using (REF ) as follows: $\\begin{split}J_{1,2} &\\lesssim \\int _{B_r(0)} \\frac{|\\tau f (v)| + |\\tau f (v^{\\prime })|}{|v|^{3+2s}} \\langle v\\rangle ^{\\gamma + 2s + 3} \\Vert g \\Vert _{L^\\infty } \\, \\mathrm {d}v^{\\prime } \\\\&\\lesssim \\Vert g \\Vert _{L^\\infty } \\Vert f \\Vert _{L^\\infty _q}\\int _{B_r(0)}\\left( \\frac{\\langle v\\rangle ^{\\gamma +2s+3-q} + \\langle v\\rangle ^{\\gamma +2s+3} \\langle v^{\\prime }\\rangle ^{-q}}{ |v|^{3+2s}}\\right) \\, \\mathrm {d}v^{\\prime }.\\end{split}$ Recalling that $|v| \\ge 1$ so $|v|\\approx \\langle v\\rangle $ , we then find $J_{1,2}\\lesssim \\Vert g\\Vert _{L^\\infty } \\Vert f\\Vert _{L^\\infty _q}\\int _{B_r(0)} \\left(\\langle v\\rangle ^{\\gamma -q} + \\langle v\\rangle ^\\gamma \\langle v^{\\prime }\\rangle ^{-q}\\right) \\, \\mathrm {d}v^{\\prime }\\lesssim \\Vert g\\Vert _{L^\\infty } \\Vert f\\Vert _{L^\\infty _q} (1+\\langle v\\rangle ^\\gamma ),$ since $q>3$ .", "The short-range term $J_{1,3}$ is the most difficult to bound because it contains the singularity at $v=v^{\\prime }$ .", "To handle this, we use smoothness of $\\tau f$ , which follows from the Schauder estimate of Proposition REF : with $p\\ge 0$ to be chosen later, $\\Vert f\\Vert _{C^{2s+\\alpha ^{\\prime }}_{\\ell ,m-p}([t_0/2,t_0]\\times \\mathbb {R}^6)}\\lesssim t_0^{-1+(\\alpha -\\alpha ^{\\prime })/(2s)} \\Vert f\\Vert _{C^\\alpha _{\\ell ,m-p+\\kappa +\\alpha /(1+2s)+\\gamma }([0,t_0]\\times \\mathbb {R}^6)}^{1+(\\alpha +2s)/\\alpha ^{\\prime }}.$ Since we need to bound $J_{1,3}$ in terms of $g$ (which 
corresponds to the weighted $C^\alpha $ norm of $f$ in the $(x,v)$-variables) rather than the $(t,x,v)$-Hölder norm, we combine (REF ) with Proposition REF to write $\begin{split}\Vert f\Vert _{C^{2s+\alpha ^{\prime }}_{\ell ,m-p}([t_0/2,t_0]\times \mathbb {R}^6)}&\le C t_0^{-1+(\alpha -\alpha ^{\prime })/(2s)} \sup _{t\in [0,t_0]} \left\Vert \langle v\rangle ^{m} f(t)\right\Vert _{C^\alpha _{\ell ,x,v}(\mathbb {R}^6)}^{1+(\alpha +2s)/\alpha ^{\prime }}\\&\le C t_0^{-1+(\alpha -\alpha ^{\prime })/(2s)} \Vert g\Vert _{L^\infty }^{1+(\alpha +2s)/\alpha ^{\prime }},\end{split}$ using Lemma REF and (REF ). Here, we have chosen $p :=\kappa + \frac{\alpha }{1+2s}+\gamma +\frac{\alpha }{2s}(\gamma +2s)_+.$ The decay exponent $m-p$ on the left in (REF ) is too weak for our estimates below. To get around this, we use interpolation to trade regularity for decay: recalling that $\alpha ^{\prime } = \alpha \frac{2s}{1+2s}$, for $q\ge m+((\gamma +2s)_+ + \alpha ^{\prime })\Big (2+\frac{4s}{\alpha ^{\prime }}\Big )+\Big (1+\frac{4s}{\alpha ^{\prime }}\Big )p,$ Lemma REF implies $\begin{split}\Vert f\Vert _{C^{2s+\alpha ^{\prime }/2}_{\ell ,m+(\gamma +2s)_+ + \alpha ^{\prime }}([t_0/2,t_0]\times \mathbb {R}^6)} &\lesssim \Vert f\Vert _{C^{2s+\alpha ^{\prime }}_{\ell ,m-p}([t_0/2,t_0]\times \mathbb {R}^6)}^{\frac{2s+\alpha ^{\prime }/2}{2s+\alpha ^{\prime }}}\Vert f\Vert _{L^\infty _{q}}^{\frac{\alpha ^{\prime }/2}{2s+\alpha ^{\prime }}} \\&\lesssim \left(t_0^{-1+(\alpha -\alpha ^{\prime })/(2s)} \Vert g\Vert _{L^\infty }^{1+(\alpha +2s)/\alpha ^{\prime }}\right)^{\frac{2s+\alpha ^{\prime }/2}{2s+\alpha ^{\prime }}} \Vert f\Vert _{L^\infty _{q}}^{\frac{\alpha ^{\prime }/2}{2s+\alpha ^{\prime }}} \\&\lesssim t_0^{\mu (\alpha ,s)} \Vert g\Vert _{L^\infty }^{\eta (\alpha ,s)},\end{split}$ where $ \begin{split}\mu (\alpha ,s) &:= \left( - 1 + \frac{\alpha -\alpha ^{\prime }}{2s}\right)\left( \frac{2s+\alpha ^{\prime }/2}{2s+\alpha ^{\prime }}\right) \in (-1,0),\\\eta (\alpha ,s) &:= \left(1+\frac{\alpha +2s}{\alpha ^{\prime }}\right) \left( \frac{2s+\alpha ^{\prime }/2}{2s+\alpha ^{\prime }}\right),\end{split} $ and we have absorbed the dependence on $\Vert f\Vert _{L^\infty _{q}}$ into the implied constant. Now, we apply (REF ) to the term $J_{1,3}$. The argument differs slightly based on whether $2s +\alpha ^{\prime }/2 > 1$ or not. We consider the former case, as it is more complicated. For $v^{\prime }\in B_R(v)\setminus B_r(0)$, since $|v+\nu | \approx |v|$, estimate (REF ) implies $\begin{split}&\frac{|\tau f (v^{\prime }) - \tau f (v) - (v-v^{\prime })\cdot \nabla _v (\tau f)(v)|}{|v-v^{\prime }|^{2s+\alpha ^{\prime }/2}} \langle v\rangle ^{m+(\gamma +2s)_+ +\alpha ^{\prime }}\lesssim t_0^{\mu (\alpha ,s)} \Vert g\Vert _{L^\infty }^{\eta (\alpha ,s)}.\end{split}$ Let $\mathcal {A}_j := B_{R/2^j}(v) \setminus B_{R/2^{j+1}}(v)$, with the convention that $\mathcal {A}_0 = (B_R(v) \setminus B_{R/2}(v)) \setminus B_r(0)$. We treat the cases $j=0$ and $j\ge 1$ separately. The simpler case is $j=0$, where the regularity of $f$ is not required because $v^{\prime }$ is bounded away from $v$. In this case, using that $\langle v^{\prime }\rangle \approx \langle v\rangle $ and the inequality (REF ), we find $\begin{split}\int _{\mathcal {A}_0} &|\tau f(v) - \tau
f(v^{\\prime })| |\\tilde{K}_g(v,v^{\\prime })| dv^{\\prime }\\lesssim \\langle v\\rangle ^{-q} \\int _{\\mathcal {A}_0} |\\tilde{K}_g(v,v^{\\prime })| dv^{\\prime }\\\\&\\lesssim \\langle v\\rangle ^{-q} \\int _{B_R(v) \\setminus B_{R/2}(v)} |\\tilde{K}_g(v,v^{\\prime })| dv^{\\prime }\\lesssim \\langle v\\rangle ^{-q - 2s} \\langle v\\rangle ^{m + (\\gamma + 2s)_+} \\Vert g\\Vert _{L^\\infty }\\lesssim \\Vert g\\Vert _{L^\\infty }.\\end{split}$ Next we consider the general $j$ case.", "Here we use the symmetry of $K_g(v,v^{\\prime })$ around $v$ (i.e., $K_g(v,v+h) = K_g(v,v-h)$ ), the regularity estimate (REF ), and the fact that $\\langle v^{\\prime }\\rangle \\approx \\langle v\\rangle $ to obtain $\\begin{split}\\sum _{j \\ge 1} &\\Big |\\int _{\\mathcal {A}_j} (\\tau f(v^{\\prime }) - \\tau f(v)) \\tilde{K}_g(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime }\\Big | \\\\&= \\sum _{j \\ge 1} \\Big |\\int _{\\mathcal {A}_j} (\\tau f(v^{\\prime }) - \\tau f(v) - (v-v^{\\prime })\\cdot \\nabla _v (\\tau f)(v) ) \\tilde{K}_g(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime }\\Big | \\\\&\\lesssim t_0^{\\mu (\\alpha ,s)}\\Vert g\\Vert _{L^\\infty }^{\\eta (\\alpha ,s)}\\langle v\\rangle ^{-m-(\\gamma +2s)_+-\\alpha ^{\\prime }}\\sum _{j \\ge 0}\\int _{\\mathcal {A}_j} |v-v^{\\prime }|^{2s+\\alpha ^{\\prime }/2} |\\tilde{K}_g(v,v^{\\prime })| \\, \\mathrm {d}v^{\\prime } \\\\&\\lesssim t_0^{\\mu (\\alpha ,s)}\\Vert g\\Vert _{L^\\infty }^{\\eta (\\alpha ,s)}\\langle v\\rangle ^{-\\alpha ^{\\prime }} \\sum _{j \\ge 0} \\frac{R^{2s+\\alpha ^{\\prime }}}{2^{j(2s+\\alpha ^{\\prime })}}\\Vert g \\Vert _{L^\\infty } \\frac{2^{2s j}}{R^{2s}}\\lesssim t_0^{\\mu (\\alpha ,s)}\\Vert g\\Vert _{L^\\infty }^{1+\\eta (\\alpha ,s)},\\end{split}$ Putting together both estimates above, we find $J_{1,3} \\lesssim \\Vert g\\Vert _{L^\\infty } + t_0^{\\mu (\\alpha ,s)}\\Vert g\\Vert _{L^\\infty }^{1 + \\eta (\\alpha ,s)}.$ Putting the estimates of $J_{1,i}$ together, we have that $J_1 \\lesssim \\Vert g\\Vert _{L^\\infty } + t_0^{\\mu (\\alpha ,s)} \\Vert g\\Vert _{L^\\infty }^{1+\\eta (\\alpha ,s)}.$ Case 2: $|v| < 1$ .", "This case is similar to Case 1, but more simple.", "It is necessary because the estimates we used for $J_{1,2}$ above relied on $|v|$ being bounded away from zero.", "In this case, however, we do not need the near-zero term at all.", "Instead, with $R = 4\\langle v\\rangle /3$ as above, we write $J_1 = \\int _{B_R^c(v)} (\\tau f(v) - \\tau f(v^{\\prime })) \\tilde{K}_g(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } +\\int _{B_R(v)} (\\tau f(v) - \\tau f(v^{\\prime })) \\tilde{K}_g(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } =: J_{1,1} + J_{1,4},$ observing that the first term is indeed the same as $J_{1,1}$ in the previous case (and is bounded in the same way).", "The new term $J_{1,4}$ is bounded in the same way as $J_{1,3}$ in the previous case, now taking $\\mathcal {A}_0 = B_R(v) \\setminus B_{R/2}(v)$ and observing that, since $|v| < 1$ , the smoothness estimate (REF ) for $\\tau f$ holds even when $v^{\\prime }$ is close to zero.", "Thus (REF ) holds in both cases.", "Step 3: Bounding the $Q_{\\rm s}(f, \\delta f)$ term in (REF ).", "Define $J_2 := \\frac{\\langle v\\rangle ^m}{(|\\chi |^2+|\\nu |^2)^{\\alpha /2}} Q_{\\rm s}(f,\\delta f) = \\int _{\\mathbb {R}^3} \\left( g(v^{\\prime }) \\frac{\\langle v\\rangle ^m}{\\langle v^{\\prime } \\rangle ^m} - g(v) \\right) K_f(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime },$ where the second equality used the definition of $g$ .", "Since $g(t_{\\rm cr}, 
x_{\\rm cr}, v_{\\rm cr},\\chi _{\\rm cr}, \\nu _{\\rm cr}) = \\overline{G}(t_{\\rm cr}) > 0$ and this is the maximum of $g$ in all variables, we can proceed as in (REF ) from the proof of Lemma REF to write $g(v^{\\prime }) \\le g(v_{\\rm cr})$ , which yields $J_2 \\le \\Vert g \\Vert _{L^\\infty } \\int _{\\mathbb {R}^3} \\left( \\frac{\\langle v\\rangle ^m}{\\langle v^{\\prime } \\rangle ^m} - 1 \\right) K_f(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime }.$ Defining $\\phi (v^{\\prime }) := \\langle v\\rangle ^m / \\langle v^{\\prime } \\rangle ^m - 1$ , we have $ J_2 \\le \\Vert g\\Vert _{L^\\infty } \\int _{\\mathbb {R}^3} \\phi (v^{\\prime }) K_f(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime }.$ Once again, the analysis is slightly different depending on the size of $v$ .", "Case 1: $|v| \\ge 1$ .", "Define $R = |v|$ and $r = |v|/3$ .", "Note that, for $v^{\\prime } \\in B_R^c(0)$ , one has $\\phi (v^{\\prime }) \\le 0$ .", "Since we seek an upper bound, we can discard the integral over $B_R^c(0)$ from $J_2$ .", "Setting $\\mathcal {B} := (B_R(0)\\setminus B_r(0)) \\setminus B_r(v)$ , we then split $J_2$ as $\\begin{split}J_2 &\\lesssim \\Vert g \\Vert _{L^\\infty } \\left( \\int _{B_r(v)} \\phi (v^{\\prime }) K_f(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } + \\int _{B_r(0)} \\phi (v^{\\prime }) K_f(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime }+ \\int _{\\mathcal {B}} \\phi (v^{\\prime }) K_f (v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } \\right)\\\\&=: \\Vert g \\Vert _{L^\\infty } \\left( J_{2,1} + J_{2,2} + J_{2,3} \\right).\\end{split}$ For $J_{2,1}$ , we Taylor expand $\\phi $ to second order around $v^{\\prime } = v$ and note that (as in the proof of Lemma REF ) the first order term vanishes due to the symmetry of the kernel, since the domain of integration is a ball centered at $v_{\\rm cr}$ .", "This yields $J_{2,1} \\lesssim \\int _{B_r(v)} E(v,v^{\\prime }) K_f (v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime },$ where $|E(v,v^{\\prime })| \\lesssim |v-v^{\\prime }|^2 \\langle v\\rangle ^m \\sup _{w \\in B_r(v)} \\langle w \\rangle ^{-m-2} \\lesssim |v-v^{\\prime }|^2 \\langle v\\rangle ^{-2}$ , since $\\langle w\\rangle \\approx \\langle v\\rangle $ for $w \\in B_r(v)$ .", "Using Lemma REF to bound $K_f$ , we have $\\begin{split}J_{2,1} &\\lesssim \\langle v\\rangle ^{-2} \\int _{B_r(v)} |v-v^{\\prime }|^2 K_f(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime }\\\\&\\lesssim \\langle v\\rangle ^{-2} \\left( \\int _{\\mathbb {R}^3} f(v+w)|w|^{\\gamma +2s} \\, \\mathrm {d}w \\right) r^{2-2s} \\lesssim \\Vert f \\Vert _{L^{\\infty }_k} \\langle v\\rangle ^{(\\gamma +2s)_+} \\langle v\\rangle ^{-2s}\\lesssim \\Vert f\\Vert _{L^\\infty _q},\\end{split}$ since $q> \\gamma +2s+3$ .", "For $J_{2,2}$ , within $B_r(0)$ , we have no better estimate than $\\phi (v^{\\prime }) \\le \\langle v\\rangle ^m$ .", "However, we can use the pointwise upper bound of [42], and the fact that $|v| \\ge 1$ , to obtain $J_{2,2} \\lesssim \\int _{B_r(0)} \\frac{\\langle v\\rangle ^m}{|v-v^{\\prime }|^{3+2s}} \\Vert f \\Vert _{L^{\\infty }_q} \\langle v\\rangle ^{\\gamma +2s+3-q} \\, \\mathrm {d}v^{\\prime }\\lesssim \\langle v\\rangle ^{3+m+\\gamma -q} \\Vert f \\Vert _{L^{\\infty }_q},$ since $q\\ge m+\\gamma +3$ .", "Lastly, if $v^{\\prime } \\in \\mathcal {B}$ , then $\\phi (v^{\\prime })$ is bounded by a constant independent of $v$ .", "Since $\\mathcal {B} \\subset B_r^c(v)$ , we use the tail estimate for $K_f$ (Lemma REF ) to obtain $J_{2,3} \\lesssim \\int _{B_r^c(v)} K_f (v,v^{\\prime }) \\, \\mathrm 
{d}v^{\\prime } \\lesssim \\Vert f \\Vert _{L^{\\infty }_q} \\langle v\\rangle ^{-2s}.$ Combining the above estimates yields $J_2 \\lesssim \\Vert g \\Vert _{L^\\infty } \\Vert f \\Vert _{L^{\\infty }_q}.$ Case 2: $|v| < 1$ .", "As in Step 2 above, this step is less delicate than the large-$v$ case, but necessary since the estimate used above for $J_{2,2}$ degenerates as $|v| \\rightarrow 0$ .", "In this case, we remark that $\\phi $ is bounded uniformly, so we can use the splitting $J_2 \\lesssim \\Vert g \\Vert _{L^\\infty } \\left( \\int _{B_1(v)} E(v,v^{\\prime }) K_f(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } + \\int _{B_1^c(v)} K_f(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } \\right).$ with $E(v,v^{\\prime })$ defined as in Case 1.", "Since $|E(v,v^{\\prime })| \\lesssim |v-v^{\\prime }|^2$ on $B_1(v)$ , the same calculation as for $J_{2,1}$ gives that the first term in (REF ) is bounded by $\\Vert f \\Vert _{L^{\\infty }_q}$ .", "The second term is also bounded by $\\Vert f \\Vert _{L^{\\infty }_q}$ by Lemma REF , and we conclude (REF ) holds in this case as well.", "Step 4: Bounding the $Q_{\\rm ns}$ terms in (REF ).", "The last two terms are more straightforward.", "Recalling that $\\tau f(v) \\langle v\\rangle ^q \\lesssim \\Vert f \\Vert _{L^\\infty _q}$ , we have $ \\begin{split}\\frac{\\langle v\\rangle ^m}{(|\\chi |^2+|\\nu |^2)^{\\alpha /2}} Q_{\\rm ns}(\\delta f, \\tau f) &= c_b \\tau f(v_{\\rm cr}) \\int _{\\mathbb {R}^3} g(v+w) \\frac{\\langle v\\rangle ^m}{\\langle v+w\\rangle ^m} |w|^\\gamma \\, \\mathrm {d}w \\\\&\\lesssim \\Vert g \\Vert _{L^\\infty }\\Vert f \\Vert _{L^\\infty _q}\\langle v\\rangle ^{m-q} \\int _{\\mathbb {R}^3} \\frac{|w|^\\gamma }{\\langle v + w \\rangle ^m} \\, \\mathrm {d}w \\lesssim \\Vert g \\Vert _{L^\\infty } \\Vert f \\Vert _{L^\\infty _q},\\end{split}$ since $m>\\gamma +3$ and $q> m+\\gamma $ .", "We have used Lemma REF to estimate the convolution in the last line.", "Finally, $ \\begin{split}\\frac{\\langle v\\rangle ^m}{(|\\chi |^2+|\\nu |^2)^{\\alpha /2}} Q_{\\rm ns}(f,\\delta f)&\\approx g(v) \\int _{\\mathbb {R}^3} f(v+w) |w|^\\gamma \\, \\mathrm {d}w\\\\&\\lesssim \\Vert g \\Vert _{L^\\infty } \\Vert f \\Vert _{L^\\infty _q} \\int _{\\mathbb {R}^3} \\frac{|w|^\\gamma }{\\langle v+w \\rangle ^q} \\, \\mathrm {d}w\\lesssim \\Vert g \\Vert _{L^\\infty } \\Vert f \\Vert _{L^\\infty _q},\\end{split}$ since $q>\\gamma +3$ .", "Combining our upper bounds for the four terms on the right in (REF ), the proof of the lemma is complete, setting $\\theta (\\alpha ,s) = 1+\\eta (\\alpha ,s)$ ." 
], [ "Uniqueness", "In this section, we complete the proof of Theorem REF .", "Letting $f$ be the classical solution guaranteed by Theorem REF and $g$ a weak solution in the sense of Theorem REF , the goal is to establish a Grönwall-type inequality for $h := f-g$ in a space-localized, velocity-weighted, $L^2$ -based norm.", "Following [61], we define our cutoff as follows: let $\\phi (x,v) := \\frac{1}{1+|x|^2+|v|^2},$ and for $a \\in \\mathbb {R}^3$ , let $\\phi _a(x,v) := \\phi (x-a,v).$ Note that $\\left| v \\cdot \\nabla _{x} \\phi _a(x,v) \\right| = \\left| \\frac{2 (x-a) \\cdot v}{(1+|x-a|^2 + |v|^2)^2} \\right| \\lesssim \\phi _a(x,v).$ For given $n\\ge 0$ , we define the space-localized, velocity-weighted, $L^2$ -based space $X_n$ in terms of the following norm: $\\Vert f \\Vert _{X_n}^2 := \\sup _{a \\in \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\phi _a(x,v) \\langle v\\rangle ^{2n} f^2(x,v) \\, \\mathrm {d}v \\, \\mathrm {d}x.$ We begin with two auxiliary lemmas.", "First, we have a modification of Lemma 4.2 from [42]: Lemma 7.1 Suppose that $\\mu >-3$ , $n > 3/2 + \\mu $ , and $l > 3/2 + \\mu + (3/2 - n)_+$ .", "If $g \\in X_n$ , then $\\sup _{a \\in \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\phi _a(x,v) \\langle v\\rangle ^{-2l} \\left( \\int _{\\mathbb {R}^3} g(x,w) |v-w|^\\mu \\, \\mathrm {d}w \\right)^2 \\, \\mathrm {d}v \\, \\mathrm {d}x\\lesssim \\Vert g \\Vert _{X_n}^2$ Without loss of generality, $g \\ge 0$ .", "We split the inner-most (convolutional) integral into two regions according to the ball $B_{|v|/10}(v)$ .", "That is, $\\begin{split}&\\sup _{a \\in \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\phi _a \\langle v\\rangle ^{-2l} \\left( \\int _{\\mathbb {R}^3} g(w) |v-w|^\\mu \\, \\mathrm {d}w \\right)^2 \\, \\mathrm {d}v \\, \\mathrm {d}x \\\\&\\quad \\quad \\lesssim \\sup _{a \\in \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\phi _a \\langle v\\rangle ^{-2l} \\left(\\left( \\int _{B_{|v|/10}(v)} g(w) |v-w|^\\mu \\, \\mathrm {d}w \\right)^2 \\right.\\\\&\\quad \\quad \\qquad \\qquad \\qquad \\qquad \\left.+ \\left( \\int _{B_{|v|/10}(v)^C} g(w) |v-w|^\\mu \\, \\mathrm {d}w \\right)^2 \\right) \\, \\mathrm {d}v \\, \\mathrm {d}x=: I_1 + I_2.\\end{split}$ Note that, when $w \\in B_{|v|/10}(v)$ , we have that $\\langle w \\rangle \\approx \\langle v\\rangle $ and $v \\in B_{|w|/2}(w)$ ; in particular, $\\sup _a\\phi _a(x,v) / \\phi _a(x,w) \\approx 1$ .", "Then, Cauchy-Schwarz and Fubini yield $\\begin{split}I_1 &\\lesssim \\sup _{a \\in \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3}\\phi _a(x,v) \\langle v\\rangle ^{-2l} \\langle v\\rangle ^{3+\\mu } \\int _{B_{|v|/10}(v)} g(w)^2 |v-w|^\\mu \\, \\mathrm {d}w \\, \\mathrm {d}v \\, \\mathrm {d}x \\\\&\\lesssim \\sup _{a \\in \\mathbb {R}^3} \\int _{\\mathbb {R}^3} \\int _{\\mathbb {R}^3} g(w)^2 \\int _{B_{|w|/2}(w)} \\langle v\\rangle ^{-2l+3+\\mu } \\phi _a(x,v) |v-w|^\\mu \\, \\mathrm {d}v \\, \\mathrm {d}w \\, \\mathrm {d}x \\\\&\\lesssim \\sup _{a \\in \\mathbb {R}^3} \\int _{\\mathbb {R}^3}\\int _{\\mathbb {R}^3} \\phi _a(x,w) g(w)^2 \\langle w \\rangle ^{-2l+3+\\mu } \\int _{B_{|w|/2}(w)} |v-w|^\\mu \\, \\mathrm {d}v \\, \\mathrm {d}w \\, \\mathrm {d}x \\\\&\\lesssim \\sup _{a \\in \\mathbb {R}^3} \\int _{\\mathbb {R}^3}\\int _{\\mathbb {R}^3} \\phi _a(x,w) g(w)^2 \\langle w \\rangle ^{-2l + 2(3+\\mu )} \\, \\mathrm {d}w \\, \\mathrm {d}x \\lesssim \\Vert g \\Vert _{X_n}^2,\\end{split}$ where we also used that 
$-l+3+\\mu < n$ .", "For the remaining term, we again use Cauchy-Schwarz, followed by Hölder's inequality in $x$ , to obtain $\\begin{split}I_2 &\\lesssim \\sup _{a \\in \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\phi _a(x,v) \\langle v\\rangle ^{-2l} \\left( \\int _{B_{|v|/10}(v)^C} \\phi _a(x,w) \\langle w \\rangle ^{2n} g(w)^2 \\, \\mathrm {d}w \\right)\\\\&\\qquad \\qquad \\times \\left( \\int _{B_{|v|/10}(v)^C} \\frac{|v-w|^{2\\mu }}{\\langle w \\rangle ^{2n} \\phi _a(x,w)} \\, \\mathrm {d}w \\right) \\, \\mathrm {d}v \\, \\mathrm {d}x\\\\&\\lesssim \\Vert g \\Vert _{X_n}^2 \\sup _{a \\in \\mathbb {R}^3} \\sup _{x \\in \\mathbb {R}^3} \\int _{\\mathbb {R}^3} \\phi _a(x,v) \\langle v\\rangle ^{-2l}\\int _{B_{|v|/10}(v)^C} \\frac{|v-w|^{2\\mu }}{\\langle w \\rangle ^{2n}}\\left( |x-a|^2 + \\langle w \\rangle ^2 \\right) \\, \\mathrm {d}w \\, \\mathrm {d}v \\\\&\\lesssim \\Vert g \\Vert _{X_n}^2 \\sup _{z \\in \\mathbb {R}^3} \\int _{\\mathbb {R}^3} \\langle v\\rangle ^{-2l}\\frac{\\langle v\\rangle ^{2\\mu + (3-2n)_+}|z|^2 + \\langle v\\rangle ^{2\\mu + (5-2n)_+}}{|z|^2 + \\langle v\\rangle ^2} \\, \\mathrm {d}v \\lesssim \\Vert g \\Vert _{X_n}^2.\\end{split}$ We also recall the following result from [40]: Lemma 7.2 [40] For any $\\rho > 0$ and $v_0 \\in \\mathbb {R}^3$ such that $\\rho \\ge 2|v_0|$ , and any $H: \\mathbb {R}^3 \\rightarrow [0,\\infty )$ such that the right-hand side is finite, we have $\\int _{\\partial B_\\rho (0)} \\int _{(z-v_0)^\\perp } H(z+w) \\, \\mathrm {d}w \\, \\mathrm {d}z \\lesssim \\rho ^2 \\int _{B_{\\rho /2}^C(0)} \\frac{H(w)}{|w|} \\, \\mathrm {d}w.$ We are now ready to proceed with the proof of uniqueness.", "With $f$ and $g$ as above, we define $h = f-g$ and observe that $\\partial _t h + v \\cdot \\nabla _x h = Q(h,f) + Q(g,h),$ in the weak sense.", "We integrate (in $x$ and $v$ ) (REF ) against $\\phi _a(x,v) h \\langle v\\rangle ^{2n}$ for some $n\\ge \\frac{3}{2}$ to be determined later.", "Even though this is not an admissible test function for the weak solution $g$ , these calculations can be justified by a standard approximation procedure, which we omit.", "Next, we take a supremum over $a \\in \\mathbb {R}^3$ to yield $\\begin{split}\\frac{1}{2} \\frac{d}{dt} \\Vert h \\Vert _{X_n}^2&\\le \\frac{1}{2} \\sup _{a \\in \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} v \\cdot \\nabla _x \\phi _a h^2 \\langle v\\rangle ^{2n} \\, \\mathrm {d}v \\, \\mathrm {d}x+ \\sup _{a \\in \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\phi _a Q(h,f) h \\langle v\\rangle ^{2n} \\, \\mathrm {d}v \\, \\mathrm {d}x\\\\&\\qquad + \\sup _{a \\in \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\phi _a Q(g,h) h \\langle v\\rangle ^{2n} \\, \\mathrm {d}v \\, \\mathrm {d}x.\\end{split}$ Using (REF ), we bound the first term on the right by $\\frac{1}{2} \\sup _{a \\in \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} |v \\cdot \\nabla _x \\phi _a | h^2 \\langle v\\rangle ^{2n} \\, \\mathrm {d}v \\, \\mathrm {d}x\\le \\frac{1}{2} \\sup _{a \\in \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\phi _a h^2 \\langle v\\rangle ^{2n} \\, \\mathrm {d}v \\, \\mathrm {d}x= \\frac{1}{2} \\Vert h \\Vert _{X_n}^2,$ which yields $\\begin{split}\\frac{d}{dt} \\Vert h \\Vert _{X_n}^2&\\le \\Vert h \\Vert _{X_n}^2+ 2 \\sup _{a \\in \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\phi _a \\left( Q_\\text{s}(h,f) + Q_\\text{ns}(h,f) + Q(g,h) \\right) h \\langle v\\rangle ^{2n} 
\\, \\mathrm {d}v \\, \\mathrm {d}x \\\\&=: \\Vert h \\Vert _{X_n}^2 + I_1 + I_2 + I_3.\\end{split}$ We bound the terms in this right-hand side one by one.", "Since $t$ plays no role in these estimates, we prove them for general functions $f, g, h$ defined for $(x,v)\\in \\mathbb {R}^6$ , with the integrals $I_1$ , $I_2$ , and $I_3$ defined as in (REF ).", "As above, norms such as $\\Vert \\cdot \\Vert _{L^\\infty _q}$ with no specified domain are understood to be over $\\mathbb {R}^6$ throughout this section.", "For $I_1$ , we need the following intermediate lemma: Lemma 7.3 For $v\\in \\mathbb {R}^3$ , let ${\\mathcal {K}}(x,v) := \\int _{B_{\\langle v\\rangle /2}} K_{|h|}(x,v,v^{\\prime }) f(x,v^{\\prime }) \\, \\mathrm {d}v^{\\prime },$ where $K_{|h|}(x,v,v^{\\prime })$ is defined in terms of the function $|h|(x,v)$ according to formula (REF ).", "Assume that $n>\\max (\\frac{3}{2},\\gamma +2s+\\frac{5}{2})$ .", "If $\\gamma +2s > -2$ , suppose that $q > 3$ , and, if $\\gamma +2s \\le -2$ , suppose that $q > (3 +\\gamma )/2$ .", "Then there holds $\\sup _{a \\in \\mathbb {R}^3} \\int _{\\mathbb {R}^3} \\int _{B_{10}(0)^C} \\phi _a \\langle v\\rangle ^{2n} {\\mathcal {K}}^2 \\, \\mathrm {d}v \\, \\mathrm {d}x \\le C \\Vert h \\Vert _{X_n}^2 \\Vert f \\Vert _{L^\\infty _q}^2,$ for a universal constant $C>0$ that tends to $\\infty $ as $\\gamma \\nearrow 0$ .", "The proof is divided into two cases depending on $\\gamma + 2s$ .", "Case 1: $\\gamma + 2s > -2$ .", "This argument is inspired by Proposition 3.1(i) of [40], but requires some modification due to the presence of the space-localizing weight $\\phi _a$ .", "Let $r = \\langle v\\rangle / 2$ and define $H_a(x,z) := h(x,z)^2 \\langle z \\rangle ^{2n} |z| \\phi _a(x,z) \\quad \\text{ and } \\quad \\Phi _a(x,v,w) := \\frac{|w|^{2\\gamma +4s+2}}{\\langle v+w \\rangle ^{2n} |v+w| \\phi _a(x,v+w)} .$ Using Cauchy-Schwarz twice, and the fact that $\\int _{\\mathbb {R}^3} \\langle v^{\\prime } \\rangle ^{-q} \\, \\mathrm {d}v^{\\prime } \\lesssim 1$ , we obtain $\\begin{split}\\phi _a(x,v)^{\\frac{1}{2}} {\\mathcal {K}}(x,v) &\\le \\Vert f \\Vert _{L^\\infty _q} \\int _{B_r(0)} \\phi _a(x,v)^{\\frac{1}{2}} \\langle v^{\\prime } \\rangle ^{-q} K_{|h|} (x,v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } \\\\&\\lesssim \\Vert f \\Vert _{L^\\infty _q} \\int _{B_r(0)} \\frac{\\langle v^{\\prime } \\rangle ^{-q}}{|v-v^{\\prime }|^{3+2s}}\\int _{(v-v^{\\prime })^\\perp } \\phi _a(x,v)^{\\frac{1}{2}} |h(v+w)||w|^{\\gamma +2s+1} \\, \\mathrm {d}w \\, \\mathrm {d}v^{\\prime } \\\\& \\lesssim \\Vert f \\Vert _{L^\\infty _q} \\int _{B_r(0)} \\frac{\\langle v^{\\prime } \\rangle ^{-q}}{|v-v^{\\prime }|^{3+2s}}\\left( \\int _{(v-v^{\\prime })^\\perp } \\phi _a(x,v) \\Phi _a(x,v,w) \\, \\mathrm {d}w \\right)^{\\frac{1}{2}}\\\\&\\qquad \\qquad \\qquad \\times \\left( \\int _{(v-v^{\\prime })^\\perp } H_a(x,v+w) \\, \\mathrm {d}w \\right)^{\\frac{1}{2}} \\, \\mathrm {d}v^{\\prime }\\\\& \\lesssim \\Vert f \\Vert _{L^\\infty _q}\\left( \\int _{B_r(0)} \\frac{\\langle v^{\\prime } \\rangle ^{-q}}{|v-v^{\\prime }|^{6+4s}} \\left( \\int _{(v-v^{\\prime })^\\perp } \\phi _a(x,v) \\Phi _a(x,v,w) \\, \\mathrm {d}w \\right)\\right.\\\\&\\qquad \\qquad \\qquad \\times \\left.", "\\left( \\int _{(v-v^{\\prime })^\\perp } H_a(x,v+w) \\, \\mathrm {d}w \\right) \\, \\mathrm {d}v^{\\prime } \\right)^{\\frac{1}{2}}.\\end{split}$ At this point we observe a few important algebraic relations between the variables $v$ , $v^{\\prime }$ , and $w$ .", "Since $|v| \\ge 10$ , we have 
$|v| \\approx \\langle v\\rangle $ , and since $|v^{\\prime }| < \\langle v\\rangle / 2$ , we also have $|v-v^{\\prime }| \\approx \\langle v\\rangle $ .", "Furthermore, since $w \\perp (v-v^{\\prime })$ , we have that $|v+w| \\approx |v| + |w| \\approx \\langle v\\rangle + \\langle w \\rangle $ (that is, a converse to the triangle inequality for those two variables); see [42].", "With these relations in hand, we have $\\begin{split}&\\int _{(v-v^{\\prime })^\\perp } \\phi _a(x,v) \\Phi _a(x,v,w) \\, \\mathrm {d}w\\\\&\\quad \\quad \\quad =\\int _{(v-v^{\\prime })^\\perp } \\frac{|w|^{2\\gamma + 4s+2}(1+|x-a|^2+|v+w|^2)}{\\langle v + w \\rangle ^{2n} |v+w| (1+|x-a|^2+|v|^2)} \\, \\mathrm {d}w \\\\&\\quad \\quad \\quad \\lesssim \\int _{(v-v^{\\prime })^\\perp } \\frac{|w|^{2\\gamma + 4s+2}}{\\langle v\\rangle ^{2n+1} + \\langle w \\rangle ^{2n+1}} \\, \\mathrm {d}w+ \\int _{(v-v^{\\prime })^\\perp } \\frac{|w|^{2\\gamma + 4s + 4}}{(\\langle v\\rangle ^{2n+1}+\\langle w \\rangle ^{2n+1})\\langle v\\rangle ^2} \\, \\mathrm {d}w \\\\&\\quad \\quad \\quad \\lesssim \\langle v\\rangle ^{2\\gamma +4s+3-2n},\\end{split}$ where we needed $\\gamma + 2s > -2$ so that the singularities at $w=0$ are integrable, and $n > \\gamma + 2s + \\frac{5}{2}$ so the tails converge.", "Since $|v^{\\prime }| < \\langle v\\rangle /2$ and $|v| \\ge 10$ , we have that $|v-v^{\\prime }| \\approx \\langle v\\rangle $ .", "Thus, (REF ) becomes $\\phi _a(x,v) {\\mathcal {K}}(x,v)^2\\lesssim \\Vert f \\Vert _{L^\\infty _q}^2 \\langle v\\rangle ^{2\\gamma - 3 -2n} \\int _{B_r(0)} \\langle v^{\\prime } \\rangle ^{-q}\\int _{(v-v^{\\prime })^\\perp } H_a(x,v+w) \\, \\mathrm {d}w \\, \\mathrm {d}v^{\\prime }.$ This implies, using Lemma REF on $H_a$ , and spherical coordinates for the $v$ -integral, that $\\begin{split}&\\sup _{a \\in \\mathbb {R}^3} \\int _{\\mathbb {R}^3} \\int _{B_{10}^C} \\phi _a \\langle v\\rangle ^{2n} {\\mathcal {K}}(x,v)^2 \\, \\mathrm {d}v \\, \\mathrm {d}x\\approx \\sup _{a \\in \\mathbb {R}^3}\\int _{10}^\\infty \\rho ^{2n} \\int _{\\mathbb {R}^3} \\int _{\\partial B_\\rho (0)} \\phi _a(x,z) {\\mathcal {K}}(x,z)^2 \\, \\mathrm {d}z \\, \\mathrm {d}x \\, \\mathrm {d}\\rho \\\\&\\quad \\lesssim \\Vert f \\Vert _{L^\\infty _q}^2\\sup _{a \\in \\mathbb {R}^3}\\int _{10}^\\infty \\rho ^{2\\gamma -3} \\int _{B_r(0)} \\langle v^{\\prime } \\rangle ^{-q} \\int _{\\mathbb {R}^3} \\int _{\\partial B_\\rho (0)}\\int _{(z-v^{\\prime })^\\perp } H_a(x,z+w) \\, \\mathrm {d}w \\, \\mathrm {d}z \\, \\mathrm {d}x \\, \\mathrm {d}v^{\\prime } \\, \\mathrm {d}\\rho \\\\&\\quad \\lesssim \\Vert f \\Vert _{L^\\infty _q}^2\\sup _{a \\in \\mathbb {R}^3} \\int _{10}^\\infty \\rho ^{2\\gamma -1} \\left( \\int _{\\mathbb {R}^3} \\langle v^{\\prime } \\rangle ^{-q} \\, \\mathrm {d}v^{\\prime } \\right)\\int _{\\mathbb {R}^3} \\int _{B_{\\rho /2}^C(0)} \\frac{h(x,w)^2 \\langle w \\rangle ^{2n} |w| \\phi _a(x,w)}{|w|} \\, \\mathrm {d}w \\, \\mathrm {d}x \\, \\mathrm {d}\\rho \\\\&\\quad \\lesssim \\Vert f \\Vert _{L^\\infty _q}^2\\sup _{a \\in \\mathbb {R}^3} \\int _{10}^\\infty \\rho ^{2\\gamma -1} \\int _{\\mathbb {R}^3} \\int _{B_{\\rho /2}^C(0)} h(x,w)^2 \\langle w \\rangle ^{2n} \\phi _a(x,w) \\, \\mathrm {d}w \\, \\mathrm {d}x \\, \\mathrm {d}\\rho \\\\&\\quad \\lesssim \\Vert f \\Vert _{L^\\infty _q}^2 \\int _{10}^\\infty \\rho ^{2\\gamma - 1}\\Vert h \\Vert _{X_n}^2 \\, \\mathrm {d}\\rho \\lesssim \\Vert f \\Vert _{L^\\infty _q}^2 \\Vert h \\Vert _{X_n}^2,\\end{split}$ as desired.", "Note that the last inequality uses 
that $\\gamma < 0$ in an essential way.", "Case 2: $\\gamma + 2s \\le -2$ .", "This case requires a detailed analysis of the integral kernel to deal with the more severe singularity.", "As in Case 1, whenever $|v| \\ge 10$ , $|v^{\\prime }| < \\langle v\\rangle /2$ , and $w \\perp (v-v^{\\prime })$ , we have that $\\langle v\\rangle / \\langle v+w \\rangle \\lesssim 1$ .", "Therefore, $\\langle v\\rangle ^n K_{|h|}(x,v,v^{\\prime }) \\approx \\frac{1}{|v-v^{\\prime }|^{3+2s}} \\int _{(v-v^{\\prime })^\\perp } \\langle v\\rangle ^n |h(x,v+w)||w|^{\\gamma +2s+1} \\, \\mathrm {d}w\\lesssim K_{\\langle \\cdot \\rangle ^n |h|}(x,v,v^{\\prime }).$ Let us define $H(x,v) = \\langle v\\rangle ^n |h(x,v)|$ .", "To properly estimate ${\\mathcal {K}}(x,v)$ , we need to take advantage of the decay available from $f$ .", "However, since the integral is over $B_r(0)$ , we cannot exploit this smallness directly.", "Instead, we split the domain of integration $B_r(0)$ into $B_{r^s}(0)$ and $B_r(0) \\setminus B_{r^s}(0)$ (recall that $r = \\langle v\\rangle / 2$ and $|v| \\ge 10$ , so that the splitting makes sense).", "Then we have $\\begin{split}\\int _{B_r(0) \\setminus B_{r^s}(0)} K_H(x,v,v^{\\prime }) f(x,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } &\\lesssim \\Vert f \\Vert _{L^\\infty _q} \\langle v\\rangle ^{-sq}\\int _{B_r(0)} K_H(x,v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime }\\\\&\\lesssim \\Vert f \\Vert _{L^\\infty _q} \\langle v\\rangle ^{-sq-2s} \\int _{\\mathbb {R}^3} H(x,v+w) |w|^{\\gamma + 2s} \\, \\mathrm {d}w,\\end{split}$ using $B_r(0)\\subset B_{2\\langle v\\rangle }(v) \\setminus B_{\\langle v\\rangle /8}(v)$ and Lemma REF .", "Next, we note that $B_{r^s(0)} \\subset B_{|v|+r^s}(v) \\setminus B_{|v|-r^s}(v)$ , so that $\\int _{B_{r^s}(0)} K_H(x,v,v^{\\prime }) f(x,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } \\lesssim \\Vert f \\Vert _{L^\\infty _q} \\int _{B_{|v|+r^s}(v) \\setminus B_{|v|-r^s}(v)} K_H(x,v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime }.$ Expanding into spherical coordinates centered at $v$ (i.e.", "$v^{\\prime } = v+\\rho z$ with $z\\in \\partial B_1$ ), we have $\\begin{split}&\\int _{B_{r^s}} K_H(x,v,v^{\\prime }) f(x,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } \\lesssim \\Vert f \\Vert _{L^\\infty _q} \\int _{|v|-r^s}^{|v|+r^s} \\frac{\\rho ^2}{\\rho ^{3+2s}} \\int _{\\partial B_1} \\int _{z^\\perp } H(x,v+w)|w|^{\\gamma +2s+1} \\, \\mathrm {d}w \\, \\mathrm {d}z \\, \\mathrm {d}\\rho \\\\&\\quad \\quad \\quad \\lesssim \\Vert f \\Vert _{L^\\infty _q} \\langle v\\rangle ^{-1 - 2s} r^s \\int _{\\partial B_1} \\int _{z^\\perp } H(x,v+w)|w|^{\\gamma +2s+1} \\, \\mathrm {d}w \\, \\mathrm {d}z \\\\&\\quad \\quad \\quad \\lesssim \\Vert f \\Vert _{L^\\infty _q} \\langle v\\rangle ^{-1-s} \\int _{\\mathbb {R}^3} H(x,v+w)|w|^{\\gamma +2s} \\, \\mathrm {d}w,\\end{split}$ where we used Lemma REF with $\\rho =1$ in the last line.", "Combining (REF ) and (REF ), and recalling $q > (1-s)/s$ by assumption, we now have $\\langle v\\rangle ^n {\\mathcal {K}}(v) \\lesssim \\Vert f \\Vert _{L^\\infty _q} \\langle v\\rangle ^{-\\min \\lbrace 1+s, qs+2s\\rbrace } \\int _{\\mathbb {R}^3} H(x,v+w)|w|^{\\gamma +2s} \\, \\mathrm {d}w,$ and so, using once more Lemma REF with $\\mu = \\gamma + 2s \\le -2$ , $n= 0 > 3/2 + \\mu $ , and $l = \\min \\lbrace 1+s, qs+2s\\rbrace > (3/2 + \\mu + (3/2-n)_+$ , we obtain $\\begin{split}\\sup _{a \\in \\mathbb {R}^3} \\int _{\\mathbb {R}^3} \\int _{B_{10}^C} &\\phi _a \\left( \\langle v\\rangle ^n {\\mathcal {K}}(x,v) \\right)^2 \\, \\mathrm {d}v \\, \\mathrm 
{d}x\\\\&\\lesssim \\Vert f \\Vert _{L^\\infty _q}^2 \\sup _{a \\in \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\phi _a \\langle v\\rangle ^{-2(1+s)} \\left( \\int _{\\mathbb {R}^3} H(x,v+w) |w|^{\\gamma +2s}\\, \\mathrm {d}w \\right)^2 \\, \\mathrm {d}v \\, \\mathrm {d}x \\\\& \\lesssim \\Vert f \\Vert _{L^\\infty _q}^2 \\Vert h \\Vert _{X_n}^2,\\end{split}$ using $\\Vert H\\Vert _{X_0} = \\Vert h\\Vert _{X_n}$ .", "Altogether, this yields (REF ).", "Now we are ready to bound the singular term in (REF ): Lemma 7.4 (Bound on $I_1$ ) Let $m$ and $n$ be as in Lemma REF , and assume in addition that $n> \\frac{3}{2} +(\\gamma +2s)_+$ and $q> n+3+\\gamma +4s$ .", "Assume also that $\\langle v\\rangle ^m f\\in L^\\infty _x C^{2s+\\alpha }_v$ for some $\\alpha \\in (0,1)$ , and that $m > n + \\frac{3}{2} + \\gamma +2s+\\alpha $ .", "With $I_1$ defined as in (REF ), we have $I_1 \\le C\\Vert h \\Vert _{X_n}^2 \\left( \\Vert f \\Vert _{L^{\\infty }_q} + \\Vert \\langle v\\rangle ^{m} f \\Vert _{L^\\infty _xC^{2s+\\alpha }_v} \\right),$ for any $h$ and $f$ such that the right-hand side is finite.", "The constant $C>0$ is universal.", "We use the annular decomposition (defining $A_k(v) := B_{2^k|v|}(v) \\setminus B_{2^{k-1}|v|}(v)$ ) to write $Q_\\text{s}(h,f) = \\sum _{k \\in \\mathbb {Z}} \\int _{A_k(v)} K_h(x,v,v^{\\prime })(f(x,v^{\\prime })-f(x,v)) \\, \\mathrm {d}v^{\\prime }.$ Then we have $I_1 \\lesssim \\sup _{a \\in \\mathbb {R}^3} \\sum _{k \\in \\mathbb {Z}} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\phi _a \\langle v\\rangle ^{2n} h \\int _{A_k(v)} K_h(x,v,v^{\\prime }) (f(x,v^{\\prime })-f(x,v)) \\, \\mathrm {d}v^{\\prime } \\, \\mathrm {d}v \\, \\mathrm {d}x.$ The analysis is divided into four cases, based on range and the relative sizes of $v$ and $v^{\\prime }$ .", "Case 1: $k \\le -1$ .", "If $2s+\\alpha \\le 1$ , then we have $|f(x,v^{\\prime }) - f(x,v)| \\le \\Vert f(x,\\cdot )\\Vert _{C^{2s+\\alpha }_v(B_1(v))} |v^{\\prime }-v|^{2s+\\alpha } \\le \\Vert \\langle v\\rangle ^{m} f(x,\\cdot )\\Vert _{C_v^{2s+\\alpha }(\\mathbb {R}^3_v)} \\langle v\\rangle ^{-m} (2^k|v|)^{2s+\\alpha }.$ With Lemma REF , we have for each $x\\in \\mathbb {R}^3$ , $\\begin{split}&\\left| \\int _{A_k(v)} K_h(x,v,v^{\\prime }) (f(x,v^{\\prime })-f(x,v)) \\, \\mathrm {d}v^{\\prime } \\right|\\\\& \\qquad \\qquad \\lesssim \\langle v\\rangle ^{-m} (2^k |v|)^{\\alpha } \\Vert \\langle v\\rangle ^m f(x,\\cdot ) \\Vert _{C_v^{2s+\\alpha }(\\mathbb {R}^3)}\\int _{\\mathbb {R}^3} |h(x,w)| |v-w|^{\\gamma +2s} \\, \\mathrm {d}w.\\end{split}$ On the other hand, if $2s+\\alpha \\in (1,2)$ (we may always assume $2s+\\alpha < 2$ by taking $\\alpha $ smaller if necessary), we have $\\begin{split}|f(x,v^{\\prime }) - f(x,v) - \\nabla _v f(x,v)\\cdot (v^{\\prime }-v)| &\\le \\Vert f(x,\\cdot )\\Vert _{C^{2s+\\alpha }_v(B_1(v))} |v^{\\prime }-v|^{2s+\\alpha }\\\\& \\lesssim \\Vert \\langle v\\rangle ^{m} f(x,\\cdot )\\Vert _{C_v^{2s+\\alpha }(\\mathbb {R}^3_v)} \\langle v\\rangle ^{-m} (2^k|v|)^{2s+\\alpha }.\\end{split}$ As usual, the symmetry of $K_h(x,v,v^{\\prime })$ implies the first-order term integrates to zero over $A_k(v)$ , and we obtain (REF ) in this case as well.", "From (REF ), using Lemma REF with $\\mu = \\gamma +2s$ and $l = m - n - \\alpha $ , we have $\\begin{split}&\\sup _{a \\in \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\phi _a(x,v) \\langle v\\rangle ^{2n} \\left( \\int _{A_k(v)} K_h(v,v^{\\prime }) (f(x,v^{\\prime })-f(x,v)) \\, \\mathrm 
{d}v^{\\prime } \\right)^2 \\, \\mathrm {d}v \\, \\mathrm {d}x \\\\&\\quad \\lesssim \\Vert f \\Vert _{L^\\infty _x C^{2s+\\alpha }_{m,v}}^2 2^{2k\\alpha }\\sup _{a \\in \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3}\\phi _a \\langle v\\rangle ^{-2(m - n-\\alpha )}\\left( \\int _{\\mathbb {R}^3} |h(x,w)| |v-w|^{\\gamma +2s} \\, \\mathrm {d}w \\right)^2 \\, \\mathrm {d}v \\, \\mathrm {d}x \\\\&\\quad \\lesssim \\Vert f \\Vert _{L^\\infty _xC^{2s+\\alpha }_{m,v}}^2 2^{2k\\alpha } \\Vert h \\Vert _{X_n}^2.\\end{split}$ The terms for $k \\le 1$ are summable, and we find that $\\begin{split}&\\sup _{a \\in \\mathbb {R}^3} \\sum _{k \\le 1} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\phi _a \\langle v\\rangle ^{2n} h \\int _{A_k(v)} K_h(v,v^{\\prime }) (f(x,v^{\\prime })-f(x,v))\\, \\mathrm {d}v^{\\prime } \\, \\mathrm {d}v \\, \\mathrm {d}x \\\\&\\quad \\lesssim \\Vert h \\Vert _{X_n} \\sup _{a \\in \\mathbb {R}^3} \\sum _{k \\le 1} \\Big ( \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\phi _a \\langle v\\rangle ^{2n} \\Big ( \\int _{A_k(v)} K_h(v,v^{\\prime })(f(x,v^{\\prime })-f(x,v)) \\, \\mathrm {d}v^{\\prime } \\Big )^2 \\, \\mathrm {d}v \\, \\mathrm {d}x \\Big )^{\\frac{1}{2}}\\\\&\\quad \\lesssim \\Vert f \\Vert _{L^\\infty _xC^{2s+\\alpha }_{m,v}} \\Vert h \\Vert _{X_n}^2.\\end{split}$ Case 2: $k \\ge K_0$ for $K_0 = \\log _2(\\langle v\\rangle /|v|)$ .", "Notice that the choice of $K_0$ yields $|v^{\\prime }| \\ge \\langle v\\rangle /2$ .", "Thus, $\\begin{split}&\\left| \\int _{A_k(v)} K_h(x,v,v^{\\prime })(f(x,v^{\\prime })-f(x,v)) \\, \\mathrm {d}v^{\\prime } \\right| \\lesssim \\langle v\\rangle ^{-q} \\Vert f \\Vert _{L^{\\infty }_q} \\int _{A_k(v)} |K_h(x,v,v^{\\prime })| \\, \\mathrm {d}v^{\\prime } \\\\&\\quad \\quad \\quad \\lesssim \\langle v\\rangle ^{-q} \\Vert f \\Vert _{L^{\\infty }_q} (2^k \\langle v\\rangle )^{-2s} \\left( \\int _{\\mathbb {R}^3} |h(x,w)| |v-w|^{\\gamma +2s} \\, \\mathrm {d}w \\right).\\end{split}$ Here we used that $\\langle v^{\\prime }\\rangle \\approx \\langle v\\rangle $ to obtain the $\\langle v\\rangle ^{-q}\\Vert f \\Vert _{L^{\\infty }_q}$ decay from $f(x,v^{\\prime })$ .", "Therefore, $\\begin{split}&\\sup _{a \\in \\mathbb {R}^3} \\sum _{k \\ge K_0} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\phi _a \\langle v\\rangle ^{2n} h \\int _{A_k(v)} K_h(v,v^{\\prime })(f(x,v^{\\prime })-f(x,v)) \\, \\mathrm {d}v^{\\prime } \\, \\mathrm {d}v \\, \\mathrm {d}x\\\\&\\quad \\quad \\lesssim \\Vert h \\Vert _{X_n} \\Vert f \\Vert _{L^\\infty _q} \\sup _{a \\in \\mathbb {R}^3} \\sum _{k > K_0} 2^{-2ks}\\left( \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\phi _a \\langle v\\rangle ^{2(n-q-2s)} \\left( \\int _{\\mathbb {R}^3} |h(x,w)| |v-w|^{\\gamma +2s} \\, \\mathrm {d}w \\right) \\, \\mathrm {d}v \\, \\mathrm {d}x \\right)^{\\frac{1}{2}} \\\\&\\quad \\quad \\lesssim \\Vert f \\Vert _{L^\\infty _q} \\Vert h \\Vert _{X_n}^2,\\end{split}$ from Lemma REF .", "We used $q >n+ 3 + 2\\gamma +2s$ .", "Case 3: $k \\in [0,K_0]$ and $|v| < 10$ .", "Notice that $|v^{\\prime }| \\le \\langle v\\rangle /2$ .", "This case is a formality, and uses exactly the same estimates as in Case 1.", "The only difference is in how we deduce that $\\langle v^{\\prime }\\rangle \\approx \\langle v\\rangle $ .", "In Case 1, this is because $|v-v^{\\prime }| \\le |v|/2$ from the condition on $k$ and the definition of $A_k$ .", "Here, it is due to the choice of $K_0$ (the $\\lesssim $ direction) and the smallness of $|v|$ (the $\\gtrsim $ direction).", 
"Hence, we omit the details.", "Case 4: $k \\in [0,K_0]$ and $|v| \\ge 10$ .", "This case uses estimates similar to Case 2.", "With $r = \\langle v\\rangle /2$ , we decompose as $\\begin{split}\\sum _{k = 0}^{K_0} &\\int _{A_k(v)} K_h(x,v,v^{\\prime })(f(x,v^{\\prime })-f(x,v)) \\, \\mathrm {d}v^{\\prime }\\\\&= \\sum _{k = 0}^{K_0} \\left(\\int _{A_k(v) \\cap B_r(0)} + \\int _{A_k(v) \\cap B_r(0)^c}\\right) K_h(x,v,v^{\\prime })(f(x,v^{\\prime })-f(x,v)) \\, \\mathrm {d}v^{\\prime }=: J_1 + J_2.\\end{split}$ We begin with $J_1$ : $\\begin{split}J_1&= \\sum _{k = 0}^{K_0} \\int _{A_k(v) \\cap B_r(0)} K_h(x,v,v^{\\prime })(f(x,v^{\\prime })-f(x,v)) \\, \\mathrm {d}v^{\\prime } \\\\& \\lesssim f(v) \\int _{B_r(0)} K_{|h|}(x,v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } + \\int _{B_r(0)}K_{|h|}(x,v,v^{\\prime }) f(x,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } =: J_{11} + J_{12}.\\end{split}$ For $J_{11}$ , arguing as in (REF ), we note that $\\int _{B_r(0)} K_{|h|}(x,v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } \\le \\int _{B_{2\\langle v\\rangle }(v) \\setminus B_{\\langle v\\rangle /4}(v)} K_{|h|}(x,v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } \\lesssim \\langle v\\rangle ^{-2s} \\int _{\\mathbb {R}^3} |h(x,v^{\\prime })| |v-v^{\\prime }|^{\\gamma +2s} \\, \\mathrm {d}v^{\\prime }.$ Then once more using Lemma REF yields $\\begin{split}&\\sup _{a \\in \\mathbb {R}^3} \\sum _{k>0} \\int _{\\mathbb {R}^3} \\int _{B_{10}(0)^C} \\phi _a \\langle v\\rangle ^{2n} h \\int _{A_k(v) \\cap B_r(0)} K_h(x,v,v^{\\prime })f(x,v)\\, \\mathrm {d}v^{\\prime }\\, \\mathrm {d}v \\, \\mathrm {d}x \\\\&\\quad \\quad \\quad \\lesssim \\Vert h \\Vert _{X_n} \\Vert f \\Vert _{L^\\infty _q} \\sup _{a \\in \\mathbb {R}^3}\\left( \\int _{\\mathbb {R}^3} \\int _{\\mathbb {R}^3} \\langle v\\rangle ^{-2q + 2n - 4s} \\left( \\int _{\\mathbb {R}^3} |h(x,w)| |v-w|^{\\gamma +2s} \\, \\mathrm {d}w \\right)^2 \\, \\mathrm {d}v \\, \\mathrm {d}x \\right)^{\\frac{1}{2}} \\\\&\\quad \\quad \\quad \\lesssim \\Vert f \\Vert _{L^\\infty _q} \\Vert h \\Vert _{X_n}^2.\\end{split}$ To estimate $J_{12}$ , we recall the notation ${\\mathcal {K}}(x,v) := \\int _{B_r(0)} K_{|h|}(x,v,v^{\\prime }) f(x,v^{\\prime }) \\, \\mathrm {d}v^{\\prime }$ .", "Then $\\begin{split}\\sup _{a \\in \\mathbb {R}^3} \\sum _{k>0} \\int _{\\mathbb {R}^3}\\int _{B_{10}^C} &\\phi _a \\langle v\\rangle ^{2n} h \\int _{A_k(v) \\cap B_r(0)} K_{|h|}f(x,v^{\\prime })\\, \\mathrm {d}v^{\\prime } \\, \\mathrm {d}v \\, \\mathrm {d}x\\\\&\\lesssim \\Vert h \\Vert _{X_n} \\left( \\sup _{a \\in \\mathbb {R}^3} \\int _{\\mathbb {R}^3} \\int _{B_{10}^C} \\phi _a \\langle v\\rangle ^{2n} {\\mathcal {K}}^2 \\, \\mathrm {d}v \\, \\mathrm {d}x \\right)^{\\frac{1}{2}}\\lesssim \\Vert h\\Vert _{X_n}^2 \\Vert f\\Vert _{L^\\infty _q},\\end{split}$ by Lemma REF .", "We now consider $J_2$ .", "Here, we have $|v^{\\prime }| \\ge \\langle v\\rangle /2$ , so the arguments of Case 2 apply verbatim.", "Hence, we omit the argument.", "This completes the desired estimate for Case 4, which together with the first three cases establishes the conclusion of the lemma.", "Lemma 7.5 (Bound on $I_2$ ) With $I_2$ defined as in (REF ), there holds $I_2 \\le C \\Vert h\\Vert _{X_n}^2\\Vert f\\Vert _{L^\\infty _q},$ whenever $n\\ge \\frac{3}{2}$ and $q>n+\\frac{3}{2} + \\gamma $ .", "The constant $C>0$ is universal.", "With the simple observation that $ I_2 \\lesssim \\Vert h \\Vert _{X_n} \\Vert Q_{\\text{ns}}(h,f) \\Vert _{X_n}$ we use the definition of $Q_{\\text{ns}}$ and Lemma REF with $\\mu = 
\\gamma $ and $l = q-n$ , to immediately find $\\begin{split}\\Vert Q_{\\text{ns}}(h,f) \\Vert _{X_n}^2 &\\lesssim \\sup _{a \\in \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\phi _a \\langle v\\rangle ^{2n} f(x,v)^2 \\left( \\int _{\\mathbb {R}^3} |h(x,z)| |v-z|^\\gamma \\, \\mathrm {d}z \\right)^2 \\, \\mathrm {d}v \\, \\mathrm {d}x \\\\& \\lesssim \\Vert f \\Vert _{L^\\infty _q}^2 \\sup _{a \\in \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\phi _a \\langle v\\rangle ^{2(n-q)} \\left( \\int _{\\mathbb {R}^3} |h(x,z)| |v-z|^\\gamma \\, \\mathrm {d}z \\right)^2 \\, \\mathrm {d}v \\, \\mathrm {d}x\\lesssim \\Vert f \\Vert _{L^\\infty _q}^2\\Vert h \\Vert _{X_n}^2,\\end{split}$ which yields the desired bound for $I_2$ .", "Lemma 7.6 (Bound on $I_3$ ) With $I_3$ defined as in (REF ), there exists a universal $C>0$ such that $I_3 \\le C \\Vert h\\Vert _{X_n}^2 \\Vert g\\Vert _{L^\\infty _q},$ whenever $n\\ge 2$ and $q>2n+\\gamma +5$ .", "First, define $ \\Psi _a(x,v) := \\phi _a(x,v)^{\\frac{1}{2}} \\langle v\\rangle ^n \\quad \\text{ and } \\quad J_a(x,v) = \\Psi _a(x,v) h(x,v).", "$ We begin by splitting $I_3$ into a coercive part and a commutator: $\\begin{split}I_3 &= \\sup _{a \\in \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\phi _a \\langle v\\rangle ^{2n} Q(g,h)h \\, \\mathrm {d}v \\, \\mathrm {d}x \\\\&= \\sup _{a \\in \\mathbb {R}^3} \\left( \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} J_a Q(g, J_a) \\, \\mathrm {d}v \\, \\mathrm {d}x + \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} J_a \\left( Q(g,h) \\Psi _a - Q(g, J_a) \\right) \\, \\mathrm {d}v \\, \\mathrm {d}x \\right) \\\\&=: \\sup _{a \\in \\mathbb {R}^3} \\left( I_{31} + I_{32} \\right).\\end{split}$ We need to keep the supremum in $a$ on the outside since the \"coercive\" term $I_{31}$ will contribute a strong negative component which is needed to control $I_{32}$ .", "Specifically, using a well-known symmetrization technique (see, e.g.", "[6]), we have $I_{31} = -\\frac{1}{2} D_a+ \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} Q(g, J_a^2) \\, \\mathrm {d}v \\, \\mathrm {d}x,$ where $D_a := \\iint _{\\mathbb {R}^9 \\times {\\mathbb {S}}^2} (J_a(x,v^{\\prime })-J_a(x,v))^2 g(x,v_*) B(|v-v_*|,\\cos \\theta ) \\, \\mathrm {d}\\sigma \\, \\mathrm {d}v_* \\, \\mathrm {d}v \\, \\mathrm {d}x.$ For the second term in $I_{31}$ , recalling $Q = Q_{\\rm s} + Q_{\\rm ns}$ , we use a change of variables and the Cancellation Lemma [4] to write $\\begin{split}\\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} Q_{\\rm s}(g, J_a^2) \\, \\mathrm {d}v \\, \\mathrm {d}x &= \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} J_a^2 \\int _{\\mathbb {R}^3} (K_g(x,v^{\\prime },v) - K_g(x,v,v^{\\prime })) \\, \\mathrm {d}v^{\\prime } \\, \\mathrm {d}v \\, \\mathrm {d}x\\\\&\\lesssim \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\phi _a \\langle v\\rangle ^{2n} h^2 \\int _{\\mathbb {R}^3} g(x,z) |v-z|^\\gamma \\, \\mathrm {d}z \\, \\mathrm {d}v \\, \\mathrm {d}x\\lesssim \\Vert h\\Vert _{X_n}^2 \\Vert g\\Vert _{L^\\infty _q},\\end{split}$ since $q>2n+\\gamma +5> \\gamma +3$ .", "The nonsingular term is handled similarly: $\\begin{split}\\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} Q_{\\rm s}(g, J_a^2) \\, \\mathrm {d}v \\, \\mathrm {d}x &\\lesssim \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} J_a^2 \\int _{\\mathbb {R}^3} g(x,z) |v-z|^\\gamma \\, \\mathrm {d}z \\, \\mathrm {d}v \\, \\mathrm {d}x\\lesssim \\Vert h\\Vert _{X_n}^2 \\Vert g\\Vert _{L^\\infty _q}.\\end{split}$ We conclude 
$I_{31}+\\frac{1}{2} D_a \\lesssim \\Vert h \\Vert _{X_n}^2 \\Vert g \\Vert _{L^\\infty _q}.$ For $I_{32}$ , recalling the abbreviations $F = F(x,v)$ , $F_* = F(x,v_*)$ , $F^{\\prime } = F(x,v^{\\prime })$ , and $F_*^{\\prime } = F(x,v_*^{\\prime })$ for any function $F$ , and writing $B = B(|v-v_*|,\\cos \\theta )$ , we have $\\begin{split}I_{32} &= \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {S}^2} B J_a \\left[ (g_*^{\\prime } h^{\\prime } - g_*h) \\Psi _a - (g_*^{\\prime } J_a^{\\prime } - g_* J_a)\\right] \\, \\mathrm {d}\\sigma \\, \\mathrm {d}v_* \\, \\mathrm {d}v \\, \\mathrm {d}x\\\\&= \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {S}^2} B J_a g_*^{\\prime } h^{\\prime } (\\Psi _a - \\Psi _a^{\\prime })\\, \\mathrm {d}\\sigma \\, \\mathrm {d}v_* \\, \\mathrm {d}v \\, \\mathrm {d}x,\\end{split}$ since $J_a = \\Psi _a h$ .", "Next, we apply the pre-post-collisional change of variables: $v\\leftrightarrow v^{\\prime }$ , $v_* \\leftrightarrow v_*^{\\prime }$ , $\\sigma \\mapsto \\sigma ^{\\prime } := (v-v_*)/|v-v_*|$ .", "This transformation has unit Jacobian and leaves $B(|v-v_*|, \\sigma )$ invariant.", "This gives $\\begin{split}I_{32} &= \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {S}^2} B J_a^{\\prime } g_* h (\\Psi _a^{\\prime }-\\Psi _a)\\, \\mathrm {d}\\sigma \\, \\mathrm {d}v_* \\, \\mathrm {d}v \\, \\mathrm {d}x \\\\&= \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {S}^2} B J_a g_* h (\\Psi _a^{\\prime }-\\Psi _a)\\, \\mathrm {d}\\sigma \\, \\mathrm {d}v_* \\, \\mathrm {d}v \\, \\mathrm {d}x \\\\&\\quad + \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times \\mathbb {S}^2} B (J_a^{\\prime } - J_a) g_* h (\\Psi _a^{\\prime }-\\Psi _a)\\, \\mathrm {d}\\sigma \\, \\mathrm {d}v_* \\, \\mathrm {d}v \\, \\mathrm {d}x.\\\\&=: I_{321} + I_{322}.\\end{split}$ We consider the term $I_{322}$ first.", "We want to extract a “singular” piece (in the form of $D_a$ , as defined in (REF )) which will cancel with the coercive part of $I_{31}$ .", "Specifically, from the inequality $ab\\le \\frac{1}{2} (a^2+b^2)$ , we have $\\begin{split}I_{322} &\\le \\frac{1}{2} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times {\\mathbb {S}}^2} B (J_a^{\\prime }-J_a)^2 g_* \\, \\mathrm {d}\\sigma \\, \\mathrm {d}v_* \\, \\mathrm {d}v \\, \\mathrm {d}x \\\\&\\quad \\quad \\quad + \\frac{1}{2}\\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times {\\mathbb {S}}^2} B h^2 g_* (\\Psi _a^{\\prime } - \\Psi _a)^2 \\, \\mathrm {d}\\sigma \\, \\mathrm {d}w \\, \\mathrm {d}v \\, \\mathrm {d}x,\\end{split}$ so that $I_{322} - \\frac{1}{2} D_a \\le \\frac{1}{2} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times {\\mathbb {S}}^2} B h^2 g_* (\\Psi _a^{\\prime } - \\Psi _a)^2 \\, \\mathrm {d}\\sigma \\, \\mathrm {d}v_* \\, \\mathrm {d}v \\, \\mathrm {d}x.$ Next, write $(\\Psi ^{\\prime }_a- \\Psi _a)^2 = 2 \\Psi _a (\\Psi _a - \\Psi _a^{\\prime }) + (\\Psi _a^{\\prime })^2 - \\Psi _a^2$ to obtain $\\begin{split}I_{322} - \\frac{1}{2} D_a &\\le \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times {\\mathbb {S}}^2} B h^2 g_* \\Psi _a (\\Psi _a - \\Psi _a^{\\prime }) \\, \\mathrm {d}\\sigma \\, \\mathrm {d}v_* \\, \\mathrm {d}v \\, \\mathrm {d}x\\\\&\\quad + \\frac{1}{2} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\iint _{\\mathbb 
{R}^3\\times {\\mathbb {S}}^2} B h^2 g_* ((\\Psi _a^{\\prime })^2 - \\Psi _a^2) \\, \\mathrm {d}\\sigma \\, \\mathrm {d}v_* \\, \\mathrm {d}v \\, \\mathrm {d}x.\\end{split}$ Since $J_a = \\Psi _a h$ , the first term on the right is the negative of $I_{321}$ .", "Returning to (REF ), we now have $I_{32} - \\frac{1}{2} D_a \\le \\frac{1}{2} \\iint _{\\mathbb {R}^3\\times \\mathbb {R}^3} \\iint _{\\mathbb {R}^3\\times {\\mathbb {S}}^2} B h^2 g_* ((\\Psi _a^{\\prime })^2 - \\Psi _a^2) \\, \\mathrm {d}\\sigma \\, \\mathrm {d}v_* \\, \\mathrm {d}v \\, \\mathrm {d}x.$ It only remains to bound this right-hand side.", "To do this, we start with the Taylor expansion for $\\Psi _a^2 = \\phi _a(x,v) \\langle v\\rangle ^{2n}$ in $v$ (in this proof, subscipts such as $\\partial _i$ always denote differentiation in $v$ ): $ \\Psi _a^2(x,v^{\\prime })-\\Psi _a^2(x,v) = \\partial _{i} \\Psi _a^2(x,v) (v^{\\prime }-v)_i + \\frac{1}{2} \\partial _{ij}^2 \\Psi _a^2(x,\\tilde{v}) (v^{\\prime }-v)_i (v^{\\prime }-v)_j, $ where we sum over repeated indices, and $\\tilde{v} = \\tau v^{\\prime } + (1-\\tau ) v$ for some $\\tau \\in (0,1)$ .", "By a direct calculation, we have $|\\partial _i \\Psi _a^2(x,v)| \\lesssim \\Psi _a^2(x,v) \\langle v\\rangle ^{-1} \\quad \\text{ and } \\quad |\\partial _{ij}^2 \\Psi _a(x,\\tilde{v})| \\lesssim \\Psi _a^2(x,\\tilde{v}) \\left( \\frac{1}{ \\langle \\tilde{v} \\rangle ^2} + \\phi _a(x,\\tilde{v}) \\right).$ Noting that $v^{\\prime }-v = \\frac{1}{2} |v-v_*|(\\sigma -\\sigma ^{\\prime } \\cos \\theta ) + \\frac{1}{2} |v-v_*| (\\cos \\theta -1) \\sigma ^{\\prime }$ , we have $\\begin{split}&\\int _{{\\mathbb {S}}^2} B ((\\Psi _a^{\\prime })^2-\\Psi _a^2) \\, \\mathrm {d}\\sigma =\\frac{|v-v_*|}{2} \\nabla _v \\Psi _a^2(x,v) \\cdot \\int _{{\\mathbb {S}}^2}B (\\sigma - (\\sigma \\cdot \\sigma ^{\\prime }) \\sigma ^{\\prime }) \\, \\mathrm {d}\\sigma \\\\&\\quad \\quad \\quad + \\frac{|v-v_*|}{2} \\nabla _v \\Psi _a^2(x,v) \\cdot \\sigma ^{\\prime } \\int _{{\\mathbb {S}}^2}B (\\cos \\theta -1) \\, \\mathrm {d}\\sigma +\\frac{1}{2} \\int _{{\\mathbb {S}}^2}B \\partial _{ij}^2 \\Psi _a^2(x,\\tilde{v}) (v^{\\prime }-v)_i (v^{\\prime }-v)_j \\, \\mathrm {d}\\sigma .\\end{split}$ The first term on the right is zero by symmetry.", "Since $B \\approx \\theta ^{-2-2s}|v-v_*|^\\gamma $ , the second term is bounded by $\\lesssim \\Psi _a^2(x,v) |v-v_*|^{1+\\gamma } \\langle v\\rangle ^{-1}$ .", "Noting that $|v-v^{\\prime }|^2 = \\frac{1}{2} |v-v_*|^2 (1-\\cos \\theta )$ and that $|v|^2 + |v_*|^2 = |v^{\\prime }|^2 + |v_*^{\\prime }|^2$ , we bound the third term by $\\Psi _a^2(x,v) |v-v_*|^{2+\\gamma } \\int _{{\\mathbb {S}}^2} \\Theta _a(x,v,\\tilde{v})(1-\\cos \\theta ) |\\theta |^{-2-2s} \\, \\mathrm {d}\\sigma ,$ with $\\Theta _a(x,v,\\tilde{v}) := \\frac{\\langle \\tilde{v} \\rangle ^{2n}}{\\langle v\\rangle ^{2n}}\\frac{\\phi _a(x,\\tilde{v})}{\\phi _a(x,v)} ( \\langle \\tilde{v} \\rangle ^{-2} + \\phi _a(x,\\tilde{v})).$ Note that, since $\\tilde{v}$ depends on $v^{\\prime }$ , it also implicitly depends on $\\sigma $ .", "To estimate $\\Theta _a$ , we split into three cases: Case 1: $|\\tilde{v}| \\ge |v|/2$ .", "In this case, $\\phi _a(x,\\tilde{v}) / \\phi _a(x,v) \\lesssim 1$ .", "Using this, as well as $\\langle \\tilde{v} \\rangle ^2 \\lesssim \\langle v\\rangle ^2 + \\langle v_* \\rangle ^2$ , we obtain $ \\Theta _a(x,v,\\tilde{v}) \\lesssim \\left(1 + \\frac{\\langle v_* \\rangle ^{2n}}{\\langle v\\rangle ^{2n}} \\right) \\langle v\\rangle ^{-2}.", "$ Case 2: 
$|\\tilde{v}| < |v|/2$ and $|x+a| \\ge |v|$ .", "Here again $\\phi _a(x,\\tilde{v}) / \\phi _a(x,v) \\lesssim 1$ (due to the size of $|x+a|$ ) and also $\\phi _a(x,\\tilde{v}) \\le \\langle v\\rangle ^{-2}$ , so we have $ \\Theta _a(x,v,\\tilde{v}) \\lesssim \\langle v\\rangle ^{-2}.", "$ Case 3: $|\\tilde{v}| < |v|/2$ and $|x+a| < |v|$ .", "First, note that $\\langle \\tilde{v} \\rangle ^{2n} \\phi _a(x,\\tilde{v}) (\\langle \\tilde{v} \\rangle ^{-2} + \\phi _a(x,\\tilde{v})) \\lesssim \\langle \\tilde{v} \\rangle ^{2n-4} \\lesssim \\langle v\\rangle ^{2n-4} + \\langle v_*\\rangle ^{2n-4},$ which is always true, but in this case we also have that $\\langle v\\rangle ^{-2n} \\phi _a(x,v)^{-1} \\lesssim \\langle v\\rangle ^{2-2n}$ .", "Thus $\\Theta _a(x,v,\\tilde{v}) \\lesssim \\frac{\\langle v\\rangle ^{2n-4} + \\langle v_* \\rangle ^{2n-4}}{\\langle v\\rangle ^{2n-2}} = \\left( 1 + \\frac{\\langle v_* \\rangle ^{2n-4}}{\\langle v\\rangle ^{2n-4}} \\right) \\langle v\\rangle ^{-2}\\lesssim \\left( 1 + \\frac{\\langle v_*\\rangle ^{2n}}{\\langle v\\rangle ^{2n}} \\right) \\langle v\\rangle ^{-2}.$ The last inequality followed from $(\\frac{a}{b})^{2n-4} \\le 1 + (\\frac{a}{b})^{2n}$ , since $n\\ge 2$ .", "Putting all three cases together yields, for all $x$ , $v$ , $v_*$ , and $\\sigma $ , $\\Theta _a(x,v,\\tilde{v}) \\lesssim \\frac{\\langle v\\rangle ^{2n} + \\langle v_*\\rangle ^{2n}}{\\langle v\\rangle ^{2n+2}}.$ With (REF ) and (REF ), we then have $\\begin{split}I_{32} - \\frac{1}{2} D_a &\\lesssim \\iiint _{\\mathbb {R}^9} J_a^2 g_* \\left( |v-v_*|^{1+\\gamma }\\langle v\\rangle ^{-1} + |v-v_*|^{2+\\gamma }\\int _{{\\mathbb {S}}^2} \\Theta _a(x,v,\\tilde{v}) \\frac{1-\\cos \\theta }{|\\theta |^{2+2s}} \\, \\mathrm {d}\\sigma \\right)\\, \\mathrm {d}v_* \\, \\mathrm {d}v \\, \\mathrm {d}x \\\\&\\lesssim \\Vert h \\Vert _{X_n}^2 \\Vert g \\Vert _{L^\\infty _q} \\sup _{v \\in \\mathbb {R}^3} \\left( \\int _{\\mathbb {R}^3} \\frac{|v-v_*|^{1+\\gamma }}{\\langle v_* \\rangle ^q \\langle v\\rangle } \\, \\mathrm {d}v_*+ \\int _{\\mathbb {R}^3} \\frac{|v-v_*|^{2+\\gamma } (\\langle v\\rangle ^{2n} + \\langle v_* \\rangle ^{2n})}{\\langle v_* \\rangle ^q \\langle v\\rangle ^{2n+2}} \\, \\mathrm {d}v_* \\right) \\\\&\\lesssim \\Vert h \\Vert _{X_n}^2 \\Vert g \\Vert _{L^\\infty _q}.\\end{split}$ We used $q>\\gamma +4$ in the first term, and $q>2n+\\gamma +5$ in the second term.", "Putting (REF ) together with (REF ) yields the desired bound on $I_3$ .", "We are now able to complete the proof of uniqueness.", "First, note that, instead of the assumption $f_{\\rm in} \\in C^\\alpha _{\\ell , x, v}$ , we may assume that $\\langle v\\rangle ^{m} f_{\\rm in} \\in C^\\alpha _{\\ell ,x,v}$ , for any fixed $m>0$ .", "Indeed, up to decreasing $\\alpha $ and increasing the exponent $q$ from our hypotheses $f_{\\rm in} \\in L^\\infty _q$ , we can interpolate using Lemma REF to trade regularity for velocity decay.", "Define $\\beta = \\frac{\\alpha }{1+2s}$ and $\\beta ^{\\prime } = \\beta \\frac{2s}{1+2s} = \\frac{2s\\alpha }{(1+2s)^2}$ .", "Next, choose $m>0$ large enough to satisfy the hypotheses of Lemma REF and Proposition REF , and define $m^{\\prime } := m-\\kappa -\\beta /(1+2s) - \\gamma -\\beta (\\gamma +2s)_+/(2s),$ where $\\kappa >0$ is the constant from Theorem REF .", "Now we apply the Schauder estimate of Proposition REF (which relies on (REF )), followed by the small-time Hölder estimate of Proposition REF to obtain, for $t\\in [0,T_U]$ , $\\begin{split}\\Vert 
\\langle v\\rangle ^{m^{\\prime }} f (t)\\Vert _{L^\\infty _x C^{2s+\\beta ^{\\prime }}_{v}} &\\lesssim \\Vert f\\Vert _{C^{2s+\\beta ^{\\prime }}_{\\ell ,m^{\\prime }}([t/2,t]\\times \\mathbb {R}^6)}\\\\&\\lesssim t^{-1+(\\beta -\\beta ^{\\prime })/(2s)}\\Vert f\\Vert _{C^\\beta _{\\ell ,m^{\\prime } + \\kappa + \\beta /(1+2s) + \\gamma }([0,t]\\times \\mathbb {R}^6)}^{1+(\\beta +2s)/\\beta ^{\\prime }}\\\\&\\lesssim t^{-1+(\\beta -\\beta ^{\\prime })/(2s)} \\Vert \\langle v\\rangle ^{m}f_{\\rm in}\\Vert _{C^{\\alpha }_{\\ell ,x,v}(\\mathbb {R}^6)}^{1+(\\beta +2s)/\\beta ^{\\prime }},\\end{split}$ since $\\alpha = \\beta (1+2s)$ .", "Now, combining Lemma REF (with $\\beta ^{\\prime }$ playing the role of $\\alpha $ ), Lemma REF , and Lemma REF with inequality (REF ), we have $\\frac{d}{dt} \\Vert h(t) \\Vert _{X_n}^2 - \\Vert h(t) \\Vert _{X_n}^2 \\lesssim \\Vert h(t) \\Vert _{X_n}^2 \\left( \\Vert g(t) \\Vert _{L^\\infty _q} + \\Vert f(t) \\Vert _{L^\\infty _q} + \\Vert \\langle v\\rangle ^m f (t)\\Vert _{L^\\infty _x C^{2s+\\beta ^{\\prime }}_{v}} \\right).$ Using (REF ) for the last term on the right, and absorbing the norm of $f_{\\rm in}$ into the implied constant, we now have $\\frac{d}{dt} \\Vert h \\Vert _{X_n}^2 \\le C \\left( \\Vert g(t) \\Vert _{L^\\infty _q} + \\Vert f(t) \\Vert _{L^\\infty _q} + t^{- 1 + (\\beta -\\beta ^{\\prime })/2s} \\right) \\Vert h \\Vert _{X_n}^2.$ By our assumptions that $\\Vert g(t)\\Vert _{L^\\infty _q(\\mathbb {R}^6)} \\in L^1([0,T_U])$ and $\\Vert f(t)\\Vert _{L^\\infty _q} \\in L^\\infty ([0,T_U])$ , and $\\Vert h(0)\\Vert _{X_n}^2 = 0$ , we conclude that $\\Vert h(t)\\Vert _{X_n} \\equiv 0$ for all $t\\in [0,T_U]$ by Grönwall's inequality.", "After replacing $\\alpha (1+2s)$ with $\\alpha $ , we obtain the statement of Theorem REF ." 
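, "For the reader's convenience, we record the Grönwall computation used in the last step: setting $a(t) := C \\left( \\Vert g(t) \\Vert _{L^\\infty _q} + \\Vert f(t) \\Vert _{L^\\infty _q} + t^{- 1 + (\\beta -\\beta ^{\\prime })/2s} \\right)$ , our assumptions give $a \\in L^1([0,T_U])$ (the singular power of $t$ is integrable because $\\beta ^{\\prime } < \\beta $ ), and therefore $\\Vert h(t) \\Vert _{X_n}^2 \\le \\Vert h(0) \\Vert _{X_n}^2 \\exp \\left( \\int _0^t a(\\tau ) \\, \\mathrm {d}\\tau \\right) = 0, \\quad t\\in [0,T_U].$"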
], [ "Global existence near equilibrium", "In this section, we prove Corollary REF .", "The proof mainly follows the approach of [65].", "To pass from the local existence result of Theorem REF to a global existence result near equilibrium, we must first show that the time of existence depends on the distance of $f_{\\rm in}$ to the Maxwellian $M$ : Lemma 8.1 Let $M(x,v) = (2\\pi )^{-3/2} e^{-|v|^2/2}$ , and let $q>\\gamma +2s+3$ be fixed.", "Given $T>0$ and $\\varepsilon \\in (0,\\frac{1}{2})$ , there exists $\\delta >0$ such that if $ \\Vert f_{\\rm in} - M\\Vert _{L^\\infty _{q}(3\\times \\mathbb {R}^3)} < \\delta ,$ then the solution $f$ to (REF ) guaranteed by Theorem REF exists up to time $T$ , and satisfies $ \\Vert f(t,\\cdot ,\\cdot ) - M \\Vert _{L^\\infty _q(3\\times \\mathbb {R}^3)} < \\varepsilon , \\quad t\\in [0,T].$ To begin, we make the restriction $\\Vert f_{\\rm in} - M\\Vert _{L^\\infty (3\\times \\mathbb {R}^3)}< \\frac{1}{2}$ .", "From Theorem REF , the solution $f$ exists on a time interval $[0,T_f]$ , with $T_f$ depending only on $\\Vert f_{\\rm in}\\Vert _{L^\\infty _q(3\\times \\mathbb {R}^3)} \\le \\Vert M\\Vert _{L^\\infty _q(3\\times \\mathbb {R}^3)} + \\frac{1}{2}$ .", "In particular, $T_f$ is bounded below by a constant depending only on $q$ .", "Writing $f = M+\\tilde{f}$ , we have the following equation for $\\tilde{f}$ : $ \\partial _t \\tilde{f} + v\\cdot \\nabla _x \\tilde{f} = Q(M+\\tilde{f}, M+\\tilde{f}) = Q(M+\\tilde{f}, \\tilde{f}) + Q(\\tilde{f}, M), \\quad (t,x,v) \\in [0,T_f]\\times 3\\times \\mathbb {R}^3,$ since $Q(M,M) = 0$ .", "We will derive an upper bound for $\\Vert \\tilde{f}\\Vert _{L^\\infty _q}$ using a barrier argument similar to the proof of Lemma REF .", "With $T$ and $\\varepsilon $ as in the statement of the lemma, let $\\delta , \\beta >0$ be two constants such that $ \\delta e^{\\beta T} < \\varepsilon .$ The specific values of $\\delta $ and $\\beta $ will be chosen later.", "Defining $g(t,x,v) =\\delta e^{\\beta t} \\langle v\\rangle ^{-q}$ , and taking $\\delta > \\Vert f_{\\rm in}-M\\Vert _{L^\\infty _{q}(3\\times \\mathbb {R}^3)}$ , we have $|\\tilde{f}(0,x,v)| < g(0,x,v)$ for all $x$ and $v$ .", "We claim $\\tilde{f}(t,x,v)<g(t,x,v)$ in $[0,\\min (T,T_f)]\\times 3\\times \\mathbb {R}^3$ .", "If not, then by making $q$ slightly smaller, but still larger than $\\gamma +2s+3$ , we ensure the function $f$ decays in $v$ at a polynomial rate faster than $\\langle v\\rangle ^{-q}$ .", "Together with the compactness of the spatial domain 3, this implies there is a first crossing point $(t_{\\rm cr}, x_{\\rm cr}, v_{\\rm cr})$ with $t_{\\rm cr}>0$ , where $f(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr}) = g(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr})$ .", "At this point, one has, as in the proof of Lemma REF , $\\partial _t g\\le Q(M+\\tilde{f},\\tilde{f}) + Q(\\tilde{f}, M) \\le Q(M+\\tilde{f},g) + Q(\\tilde{f}, M).$ Lemma REF implies, at $(t_{\\rm cr}, x_{\\rm cr},v_{\\rm cr})$ , $Q(M+ \\tilde{f},g) = \\delta e^{\\beta t_{\\rm cr}} Q(M+\\tilde{f},\\langle \\cdot \\rangle ^{-q}) \\le C \\delta e^{\\beta t_{\\rm cr}} \\Vert M+\\tilde{f}(t_{\\rm cr}, \\cdot ,\\cdot )\\Vert _{L^\\infty _{q}(3 \\times \\mathbb {R}^3)} \\langle v_{\\rm cr}\\rangle ^{-q}.$ Since $\\tilde{f}(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr}) = g(t_{\\rm cr}, x_{\\rm cr},v_{\\rm cr}) = \\delta e^{\\beta t} \\langle v_{\\rm cr}\\rangle ^{-q} < \\frac{1}{2} \\langle v_{\\rm cr}\\rangle ^{-q}$ , and $(t_{\\rm cr}, x_{\\rm cr},v_{\\rm cr})$ is the location of a maximum of $\\tilde{f}$ in 
$(x,v)$ space, we conclude $\\Vert \\tilde{f}(t_{\\rm cr},\\cdot ,\\cdot )\\Vert _{L^\\infty _q(\\mathbb {T}^3\\times \\mathbb {R}^3)} < \\frac{1}{2}$ .", "This implies $\\Vert M+\\tilde{f}(t_{\\rm cr},\\cdot ,\\cdot )\\Vert _{L^\\infty _q(\\mathbb {T}^3\\times \\mathbb {R}^3)} \\le C$ for a constant $C$ depending only on $q$ .", "We therefore have $Q(M+ \\tilde{f},g) \\le C \\delta e^{\\beta t_{\\rm cr}} \\langle v_{\\rm cr}\\rangle ^{-q}.$ For the term $Q(\\tilde{f},M)$ , which appeared as a result of recentering around $M$ , we write $Q(\\tilde{f},M) = Q_{\\rm s}(\\tilde{f},M) + Q_{\\rm ns}(\\tilde{f},M)$ .", "The singular part is handled by [65], whose proof does not depend on the sign of $\\gamma +2s$ and is therefore valid under our assumptions.", "This lemma gives $|Q_{\\rm s}(\\tilde{f}, M)(t_{\\rm cr}, x_{\\rm cr},v_{\\rm cr})|\\le C \\delta e^{\\beta t_{\\rm cr}} \\langle v_{\\rm cr}\\rangle ^{-q+\\gamma }, \\quad \\text{ if } |v_{\\rm cr}|\\ge R,$ for universal constants $C, R>0$ .", "On the other hand, if $|v_{\\rm cr}|< R$ , the cruder estimate of Lemma REF yields $\\begin{split}|Q_{\\rm s}(\\tilde{f},M)(t_{\\rm cr}, x_{\\rm cr},v_{\\rm cr})| &\\le C \\left(\\int _{\\mathbb {R}^3} \\tilde{f}(v_{\\rm cr}+w)|w|^{\\gamma +2s}\\, \\mathrm {d}w\\right) \\Vert M\\Vert _{L^\\infty (\\mathbb {R}^3)}^{1-s} \\Vert D_v^2 M\\Vert _{L^\\infty (\\mathbb {R}^3)}^s \\\\&\\le C \\Vert \\tilde{f}\\Vert _{L^\\infty _q} \\langle R\\rangle ^{(\\gamma +2s)_+} \\le C\\delta e^{\\beta t_{\\rm cr}} \\langle v_{\\rm cr}\\rangle ^{-q},\\end{split}$ since $\\langle R\\rangle ^{(\\gamma +2s)_+} \\approx 1 \\approx \\langle v_{\\rm cr}\\rangle ^{-q}$ and $\\Vert \\tilde{f}(t_{\\rm cr},\\cdot ,\\cdot )\\Vert _{L^\\infty _q(\\mathbb {T}^3\\times \\mathbb {R}^3)} \\le \\delta e^{\\beta t_{\\rm cr}}$ .", "For the nonsingular part, we have $\\begin{split}|Q_{\\rm ns}(\\tilde{f}, M)(t_{\\rm cr},x_{\\rm cr},v_{\\rm cr})| &\\le C M(v_{\\rm cr}) \\int _{\\mathbb {R}^3} \\tilde{f}(t_{\\rm cr}, x_{\\rm cr}, w) |v_{\\rm cr}+w|^\\gamma \\, \\mathrm {d}w\\\\& \\le C \\Vert \\tilde{f}(t_{\\rm cr},\\cdot ,\\cdot )\\Vert _{L^\\infty _{q}(\\mathbb {T}^3\\times \\mathbb {R}^3)} \\langle v_{\\rm cr}\\rangle ^{-q} \\le C \\delta e^{\\beta t_{\\rm cr}}\\langle v_{\\rm cr}\\rangle ^{-q},\\end{split}$ since $q>\\gamma +2s+3>\\gamma +3$ and $M$ decays much faster than $\\langle v\\rangle ^{-q}$ .", "Collecting all our inequalities and recalling (REF ) and $\\partial _t g = \\delta \\beta e^{\\beta t} \\langle v\\rangle ^{-q}$ , we now have $ \\delta \\beta e^{\\beta t_{\\rm cr}} \\langle v_{\\rm cr}\\rangle ^{-q} \\le C_0 \\delta e^{\\beta t_{\\rm cr}} \\langle v_{\\rm cr}\\rangle ^{-q},$ where $C_0$ is the maximum among the constants in (REF ), (REF ), (REF ), and (REF ).", "This implies a contradiction if we choose $\\beta = 2C_0$ , since dividing both sides by $\\delta e^{\\beta t_{\\rm cr}} \\langle v_{\\rm cr}\\rangle ^{-q}$ would give $\\beta \\le C_0$ .", "We conclude $\\tilde{f}(t,x,v)< g(t,x,v)$ on $[0,\\min (T,T_f)]\\times \\mathbb {T}^3\\times \\mathbb {R}^3$ , as claimed.", "A similar argument using $-g$ as a lower barrier for $\\tilde{f}$ gives $|\\tilde{f}(t,x,v)|< g(t,x,v) = \\delta e^{2C_0 t} \\langle v\\rangle ^{-q}, \\quad 0\\le t \\le \\min (T,T_f).$ Finally, we choose $\\delta = \\min (\\frac{1}{4}, \\varepsilon e^{-2 C_0 T})$ , so that $\\langle v\\rangle ^q |\\tilde{f}(t,x,v)| < \\delta e^{2C_0 T} < \\varepsilon $ whenever $t\\le \\min (T,T_f)$ , and $\\Vert f_{\\rm in} - M\\Vert _{L^\\infty _q(\\mathbb {T}^3\\times \\mathbb {R}^3)} < \\delta < \\frac{1}{2}$ .", "The above barrier argument does not depend quantitatively on the time of existence $T_f$ .", "If $T> T_f$ , then we have shown $\\Vert \\tilde{f}(T_f, \\cdot , \\cdot 
)\\Vert _{L^\\infty _q(\\mathbb {T}^3\\times \\mathbb {R}^3)} < \\varepsilon < \\frac{1}{2}$ , and by applying Theorem REF again with initial data $f(T_f,\\cdot ,\\cdot )$ , we can continue the solution to a time interval $[0,2T_f]$ .", "Repeating finitely many times, we continue the solution to $[0,NT_f]\\times \\mathbb {T}^3\\times \\mathbb {R}^3$ where $NT_f>T$ , with $\\Vert \\tilde{f}\\Vert _{L^\\infty _q([0,T]\\times \\mathbb {T}^3\\times \\mathbb {R}^3)} < \\varepsilon $ , as desired.", "Next, we need the main result of [31].", "The result in [31] is stated for solutions defined on $[0,\\infty )\\times \\mathbb {T}^3\\times \\mathbb {R}^3$ , and gives conditions under which solutions converge to a Maxwellian $M$ as $t\\rightarrow \\infty $ .", "Since the estimates at a fixed time $t$ do not depend on any information about the solution for times greater than $t$ , we easily conclude (as in [65]) the following restatement that applies to solutions defined on a finite time interval: Theorem 8.2 Let $f\\ge 0$ be a solution to (REF ) on $[0,T]\\times \\mathbb {T}^3\\times \\mathbb {R}^3$ satisfying, for a family of positive constants $C_{k,q}$ , $ \\Vert f\\Vert _{L^\\infty ([0,T],H^{k}_{q}(\\mathbb {T}^3\\times \\mathbb {R}^3))} \\le C_{k,q} \\quad \\text{ for all } k,q\\ge 0,$ and also satisfying the pointwise lower bound $ f(t,x,v) \\ge K_0 e^{-A_0 |v|^2}, \\quad \\text{ for all } (t,x,v).$ Then for any $p>0$ and for any $k,q>0$ , there exists $C_p>0$ depending on $\\gamma $ , $s$ , $A_0$ , $K_0$ , the constant $c_b$ in (REF ), and $C_{k^{\\prime },q^{\\prime }}$ for sufficiently large $k^{\\prime }$ and $q^{\\prime }$ , such that for all $t \\in [0,T]$ , $ \\Vert f(t,\\cdot ,\\cdot )- M\\Vert _{H^{k}_{q}(\\mathbb {T}^3\\times \\mathbb {R}^3)} \\le C_p t^{-p},$ where $M$ is the Maxwellian with the same total mass, momentum, and energy as $f$ .", "Along with Lemma REF and Theorem REF , the proof of Corollary REF relies on the global regularity estimates of [50], which we extended to $\\gamma +2s< 0$ in Proposition REF .", "We are now ready to give the proof: First, let us assume $f_{\\rm in}\\in L^\\infty _q(\\mathbb {T}^3\\times \\mathbb {R}^3)$ for all $q>0$ , i.e.", "$f$ decays pointwise faster than any polynomial.", "For $q_0>5$ fixed, use Lemma REF to select $\\delta _1>0$ such that $f$ exists on $[0,1]\\times \\mathbb {T}^3\\times \\mathbb {R}^3$ with $\\Vert f(t)-M\\Vert _{L^\\infty _{q_0}(\\mathbb {T}^3\\times \\mathbb {R}^3)} < \\varepsilon $ for all $t\\in [0,1]$ , whenever $\\Vert f_{\\rm in}-M\\Vert _{L^\\infty _q(\\mathbb {T}^3\\times \\mathbb {R}^3)} < \\delta _1$ .", "By our Theorem REF and the rapid decay of $f_{\\rm in}$ , the solution $f$ is $C^\\infty $ in all variables.", "Now, if the conclusion of the theorem is false, there is a first time $t_{\\rm cr}>1$ such that $\\Vert f(t_{\\rm cr},\\cdot ,\\cdot )-M\\Vert _{L^\\infty _{q_0}(\\mathbb {T}^3\\times \\mathbb {R}^3)} = \\varepsilon .$ Since $\\Vert f-M\\Vert _{L^\\infty _{q_0}([0,t_{\\rm cr}]\\times \\mathbb {T}^3\\times \\mathbb {R}^3)} \\le \\varepsilon $ , the solution $f$ satisfies uniform lower bounds for $(t,x,v)\\in [0,t_{\\rm cr}]\\times B_1(0)\\times B_1(0)$ .", "These lower bounds, together with the bound on $\\Vert f\\Vert _{L^\\infty _{q_0}}$ , imply via Proposition REF that $f$ satisfies uniform estimates in $H^k_q([1,t_{\\rm cr}]\\times \\mathbb {T}^3\\times \\mathbb {R}^3)$ for all $k,q>0$ , with constants independent of $t$ and depending only on $q_0$ and the norms $\\Vert f_{\\rm in}\\Vert _{L^\\infty _q}$ of the initial data for $q>0$ .", "On the time interval $[1,t_{\\rm cr}]$ , the function $f$ satisfies the hydrodynamic bounds $0< m_0\\le \\int _{\\mathbb {R}^3} f(t,x,v) 
\\, \\mathrm {d}v \\le M_0, \\,\\int _{\\mathbb {R}^3} |v|^2f(t,x,v) \\, \\mathrm {d}v \\le E_0, \\, \\int _{\\mathbb {R}^3} f(t,x,v) \\log f(t,x,v) \\, \\mathrm {d}v \\le H_0,$ uniformly in $t$ and $x$ , for some constants $m_0, M_0, E_0, H_0$ depending only on $q_0$ , as a result of the inequality $\\Vert f-M\\Vert _{L^\\infty _{q_0}(\\mathbb {T}^3\\times \\mathbb {R}^3)}\\le \\varepsilon < \\frac{1}{2}$ .", "(This follows from a quick computation, or we may apply [65].)", "From [47], this implies the lower Gaussian bound $f(t,x,v)\\ge c_1 e^{-c_2|v|^2}$ , with $c_1,c_2>0$ depending only on $q_0$ .", "(When $\\gamma +2s<0$ , the lower bounds of [47] require a bound on the $L^\\infty $ norm in addition to the bounds in (REF ); this $L^\\infty $ bound also clearly follows from the inequality $\\Vert f-M\\Vert _{L^\\infty _{q_0}} \\le \\varepsilon $ .)", "The hypotheses of [31], restated above as Theorem REF , are satisfied, so choosing $p=1$ , we have $ \\Vert f(t_{\\rm cr},\\cdot ,\\cdot )- M\\Vert _{L^\\infty _{q_0}(\\mathbb {T}^3\\times \\mathbb {R}^3)} \\le C_1 t_{\\rm cr}^{-1},$ where $C_1>0$ depends only on $\\gamma $ , $s$ , $c_b$ , $q_0$ , and the norm of $f_{\\rm in}$ in $L^\\infty _{q_1}(\\mathbb {T}^3\\times \\mathbb {R}^3)$ for some $q_1$ depending on $q_0$ .", "Combining this with (REF ) gives $t_{\\rm cr}\\le C_1/\\varepsilon $ .", "Letting $T = C_1/\\varepsilon +1$ , we use Lemma REF again to select $\\delta _2>0$ such that $f$ exists on $[0,T]\\times \\mathbb {T}^3\\times \\mathbb {R}^3$ , with $\\Vert f(t,\\cdot ,\\cdot )-M\\Vert _{L^\\infty _{q_0}(\\mathbb {T}^3\\times \\mathbb {R}^3)}< \\varepsilon ,$ for all $t\\in [0,T]$ .", "This inequality implies the first crossing time $t_{\\rm cr}> C_1/\\varepsilon +1$ , a contradiction with $t_{\\rm cr}\\le C_1/\\varepsilon $ .", "Therefore, if $\\Vert f_{\\rm in}- M\\Vert _{L^\\infty _{q_0}(\\mathbb {T}^3\\times \\mathbb {R}^3)} < \\delta := \\min (\\delta _1, \\delta _2)$ , we conclude there is no crossing time $t_{\\rm cr}$ , and $\\Vert f(t,\\cdot ,\\cdot )-M\\Vert _{L^\\infty _{q_0}(\\mathbb {T}^3\\times \\mathbb {R}^3)}< \\varepsilon $ holds for all $t$ such that the solution $f$ exists.", "In particular, $\\Vert f(t)\\Vert _{L^\\infty _{q_0}(\\mathbb {T}^3\\times \\mathbb {R}^3)}$ is bounded by a constant independent of $t$ , and Theorem REF implies the solution can be extended for all time.", "Next, we consider the general case, where $f_{\\rm in}$ decays at only a finite polynomial rate.", "Looking at the proof of the previous case, we see that the choice of $\\delta $ depends only on $\\varepsilon $ , $q_0$ , and the size of $f_{\\rm in}$ in the $L^\\infty _{q_1}$ norm for some $q_1$ depending on $q_0$ .", "Therefore, if $f_{\\rm in} \\in L^\\infty _{q_1}(\\mathbb {T}^3\\times \\mathbb {R}^3)$ and satisfies all the hypotheses of Corollary REF , we can approximate $f_{\\rm in}$ by cutting off large velocities, apply the rapid-decay case considered above to obtain global solutions, and take the limit as the cutoff vanishes.", "We omit the details of this standard approximation procedure."
], [ "Change of variables", "This appendix is devoted to the proof of Proposition REF , which establishes the properties of the integral kernel $\\bar{K}_f$ defined in (REF ) for the case $\\gamma + 2s < 0$ .", "Noting that the case $\\gamma + 2s \\ge 0$ has received a full treatment in [50], we only consider the regime $\\gamma +2s<0$ throughout this appendix.", "In order to prove Proposition REF , we need to verify the following for the kernel $\\bar{K}_f(t,x,v,v^{\\prime })$ : coercivity, boundedness, cancellation, and Hölder continuity in $(t,x,v)$ .", "The proof strategies and notation broadly follow [50], which addressed the case $\\gamma +2s\\in [0,2]$ .", "However, the details are sufficiently different that it is necessary to provide full proofs.", "When $|v_0|\\le 2$ , the change of variables is defined as $\\mathcal {T}_0z = z_0\\circ z$ , i.e.", "a simple recentering around the origin.", "Therefore, $\\bar{K}_f$ inherits the properties of $K_f$ , which satisfies suitable ellipticity properties on any bounded velocity domain, see Remark REF .", "Therefore, in this appendix we focus only on the case $|v_0|> 2$ .", "Recalling the definition (REF ) of the linear transformation $T_0$ , we see that in the current regime, $T_0 (av_0+w) = |v_0|^{\\frac{\\gamma +2s}{2s}}\\left( \\frac{a}{|v_0|} v_0 + w\\right), \\quad \\text{where } a\\in \\mathbb {R}, w\\cdot v_0 = 0, \\gamma +2s< 0.$ When we import facts involving this linear transformation from [50], we use the notation $T_0^+$ for the transformation $T_0$ as it is defined in the case $\\gamma +2s\\ge 0$ .", "Then one has $T_0 v = |v_0|^{\\frac{\\gamma +2s}{2s}} T_0^+ v, \\quad \\text{ when } \\gamma +2s < 0.$ Note that the definition of $T_0^+$ as a linear transformation does not depend on $\\gamma $ or $s$ .", "The $T_0^+$ notation is intended only for use in the current appendix.", "In the following lemmas about the change of variables, we omit the dependence of $\\bar{f}$ and $\\bar{K}_f$ on $\\bar{t}$ and $\\bar{x}$ , since the conditions all hold uniformly in $t$ and $x$ ." 
], [ "Coercivity", "With $A(v) \\subset \\mathbb {S}^2$ the subset of the unit sphere given by Lemma REF , define $\\Xi (v)$ to be the corresponding cone in $\\mathbb {R}^3$ , $\\Xi (v) := \\lbrace w\\in \\mathbb {R}^3 : \\frac{w}{|w|} \\in A(v)\\rbrace $ .", "Lemma 1.1 (Transformed cone of non-degeneracy) Let $f$ , $\\delta $ , $r$ , and $v_m$ satisfy the assumptions of Lemma REF .", "Fix $v_0 \\in \\mathbb {R}^3$ and $v\\in B_2$ , and define $ \\begin{split}\\bar{A}(v)&= \\lbrace \\sigma \\in \\mathbb {S}^2 : T_0\\sigma /|T_0\\sigma | \\in A(v_0+T_0v)\\rbrace ,\\\\\\bar{\\Xi }(v)&= \\lbrace w \\in \\mathbb {R}^3 : T_0 w \\in \\Xi (v_0+ T_0v)\\rbrace .\\end{split}$ Then there are constants $\\lambda , k>0$ , depending only on $\\delta $ , $r$ , and $v_m$ (but not on $v_0$ or $v$ ), such that $\\bar{K}_f(v,v+w) \\ge \\lambda |w|^{-3-2s}$ whenever $w \\in \\bar{\\Xi }(v)$ ; $\\mathcal {H}^2(\\bar{A}(v)) \\ge k$ , where $\\mathcal {H}^2$ is the 2-dimensional Hausdorff measure.", "For the first bullet point, Lemma REF and the definition (REF ) of $\\bar{K}_f$ imply that, for $w\\in \\bar{\\Xi }(v)$ , $\\begin{split}\\bar{K}_f(v,v+w)&= |v_0|^{2+\\frac{3\\gamma }{2s}}K_f(v_0+T_0 v, v_0 + T_0 v + T_0 w)\\\\& \\ge \\lambda |v_0|^{2+(3\\gamma )/(2s)} |\\bar{v}|^{\\gamma +2s+1}|T_0 w|^{-3-2s}\\ge \\lambda |w|^{-3-2s},\\end{split}$ since $|\\bar{v}|\\approx |v_0|$ and $|T_0w|\\le |v_0|^{\\frac{\\gamma +2s}{2s}}|w|$ .", "For the second bullet point, use (REF ) to write $v_0 + T_0 v = v_0 + T_0^+ \\tilde{v}$ with $\\tilde{v} = |v_0|^{\\frac{\\gamma +2s}{2s}} v \\in B_2$ .", "Next, recall the following fact from [50]: for any $\\tilde{v}\\in B_2$ , $\\mathcal {H}^2(\\lbrace \\sigma \\in \\mathbb {S}^2 : T_0^+\\sigma /|T_0^+\\sigma | \\in A(v_0 + T_0^+ \\tilde{v})\\rbrace ) \\ge k,$ for some $k>0$ depending on the constants of Lemma REF , and independent of $v_0$ .", "We note that the statement and proof of estimate (REF ) do not depend on the values of $\\gamma $ and $s$ .", "For any $v\\in B_2$ , we conclude from (REF ), using $T_0^+ \\tilde{v} = T_0 v$ and $T_0^+\\sigma /|T_0^+\\sigma | = T_0\\sigma /|T_0\\sigma |$ , that $\\mathcal {H}^2 (\\bar{A}(v)) = \\mathcal {H}^2(\\lbrace \\sigma \\in \\mathbb {S}^2 : T_0\\sigma /|T_0\\sigma | \\in A(v_0 + T_0v)\\rbrace ) \\ge k,$ as desired." 
], [ "Boundedness conditions", "Next, we address the upper ellipticity bounds for the kernel $\\bar{K}_f$ .", "The following lemma corresponds to [50], but the proof must be modified to account for the extra powers of $|v_0|$ in the definition (REF ) of $\\bar{K}_f$ .", "Lemma 1.2 For $v_0 \\in \\mathbb {R}^3\\setminus B_2$ , $v\\in B_2$ , and $r>0$ , $\\int _{\\mathbb {R}^3\\setminus B_r(v)} \\bar{K}_f(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } \\le \\bar{\\Lambda }r^{-2s},$ with $\\bar{\\Lambda }:= |v_0|^{-\\gamma -2s} \\int _{\\mathbb {R}^3} f(\\bar{v} + w)\\left( |v_0|^2 - \\left(v_0\\cdot \\frac{w}{|w|}\\right)^2 + 1\\right)^s |w|^{\\gamma +2s} \\, \\mathrm {d}w.$ From the definition (REF ) of $\\bar{K}_f$ , we have $\\begin{split}\\int _{\\mathbb {R}^3\\setminus B_r(v)} \\bar{K}_f(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime }&= |v_0|^{2+(3\\gamma )/(2s)} \\int _{\\mathbb {R}^3\\setminus B_r(v)} K_f(\\bar{v}, \\bar{v}^{\\prime }) \\, \\mathrm {d}v^{\\prime }\\\\&= \\int _{\\mathbb {R}^3\\setminus T_0(B_{ r})} K_f(\\bar{v}, \\bar{v} + u) \\, \\mathrm {d}u,\\end{split}$ from the change of variables $u = \\bar{v}^{\\prime } - \\bar{v} = T_0(v^{\\prime } - v)$ .", "Following [50], we use (REF ) to write $\\begin{split}\\int _{\\mathbb {R}^3\\setminus B_r(v)} \\bar{K}_f(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime }&\\lesssim \\int _{u\\in \\mathbb {R}^3\\setminus T_0(B_{ r})} |u|^{-3-2s} \\int _{w\\perp u} f(\\bar{v}+w) |w|^{1+\\gamma +2s} \\, \\mathrm {d}w \\, \\mathrm {d}u\\\\&= \\int _{w\\in \\mathbb {R}^3} \\left(\\int _{u\\perp w, u\\in \\mathbb {R}^3\\setminus T_0(B_{ r})} |u|^{-2-2s} \\, \\mathrm {d}u\\right) f(\\bar{v}+w)|w|^{\\gamma +2s} \\, \\mathrm {d}w,\\end{split}$ where we used $\\int _u \\int _{w\\perp u} (\\ldots ) \\, \\mathrm {d}w \\, \\mathrm {d}u = \\int _w \\int _{u\\perp w} (\\ldots ) \\frac{|u|}{|w|} \\, \\mathrm {d}u \\, \\mathrm {d}w.$ Recall that $T_0(B_{r})$ is an ellipsoid with radius $\\bar{r} := r |v_0|^\\frac{\\gamma +2s}{2s}$ in directions orthogonal to $v_0$ and radius $\\bar{r}/|v_0| = r|v_0|^{\\gamma /(2s)}$ in the $v_0$ direction.", "Its intersection with the plane $\\lbrace u\\perp w\\rbrace $ is an ellipse, whose smallest radius is $\\rho := \\frac{r|v_0|^\\frac{\\gamma +2s}{2s}}{ \\sqrt{|v_0|^2\\left(1 - \\left(\\frac{v_0\\cdot w}{|v_0||w|}\\right)^2\\right) + \\left(\\frac{v_0\\cdot w}{|v_0| |w|}\\right)^2}}.$ This follows from formula (5.10) in [50], with $\\bar{r} = r|v_0|^\\frac{\\gamma +2s}{2s}$ replacing $r$ .", "We therefore have $\\mathbb {R}^3\\setminus E_r \\subset \\mathbb {R}^3\\setminus B_\\rho $ , and $ \\begin{split}\\int _{u\\perp w, u\\in \\mathbb {R}^3\\setminus E_r} |u|^{-2-2s} \\, \\mathrm {d}u \\lesssim \\rho ^{-2s}&\\le r^{-2s} |v_0|^{-\\gamma -2s} \\left( |v_0|^2 \\left(1-\\left(\\frac{v_0\\cdot w}{|v_0||w|}\\right)^2\\right) + \\left(\\frac{v_0\\cdot w}{|v_0| |w|}\\right)^2\\right)^s\\\\&\\le r^{-2s} |v_0|^{-\\gamma -2s} \\left( |v_0|^2 - \\left( v_0\\cdot \\frac{w}{|w|}\\right)^2 + 1\\right)^s.\\end{split} $ Combining this expression with (REF ), the conclusion of the lemma follows.", "Lemma 1.3 (Boundedness conditions) If $\\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)}< \\infty $ for some $q> 2s+3$ , then the kernel $\\bar{K}_f$ satisfies the two conditions $\\int _{\\mathbb {R}^3\\setminus B_r(v)} \\bar{K}_f(v,v^{\\prime }) \\, \\mathrm {d}v^{\\prime } \\le \\Lambda r^{-2s}, \\quad \\text{ for all } v \\in B_2 \\text{ and } r>0,\\\\\\int _{\\mathbb {R}^3\\setminus B_r(v^{\\prime })} \\bar{K}_f(v,v^{\\prime }) \\, \\mathrm 
{d}v \\le \\Lambda r^{-2s}, \\quad \\text{ for all } v^{\\prime } \\in B_2 \\text{ and } r>0,$ for a constant $\\Lambda \\lesssim \\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)}$ .", "In particular, $\\Lambda $ is independent of the base point $v_0$ .", "The proof of (REF ) begins with estimating the expression $\\bar{\\Lambda }$ in Lemma REF from above.", "First, note that $\\left(|v_0|^2 - \\left(v_0 \\cdot \\frac{w}{|w|}\\right)^2 + 1\\right)^s \\lesssim \\left(|v_0|^2 - \\left(v_0 \\cdot \\frac{w}{|w|}\\right)^2\\right)^{s} + 1,$ and we have $ \\bar{\\Lambda }\\le \\int _{\\mathbb {R}^3} f(\\bar{v} + w) \\left(|v_0|^2 - \\left(v_0 \\cdot \\frac{w}{|w|}\\right)^2\\right)^{s}\\left(\\frac{|w|}{|v_0|}\\right)^{\\gamma +2s} \\, \\mathrm {d}w + \\int _{\\mathbb {R}^3} f(\\bar{v} + w) \\left(\\frac{|w|}{|v_0|}\\right)^{\\gamma +2s} \\, \\mathrm {d}w =: J_1 + J_2.$ To bound $J_2$ , the convolution estimate of Lemma REF gives $J_2 \\lesssim \\Vert f\\Vert _{L^\\infty _q} |\\bar{v}|^{\\gamma +2s} |v_0|^{-\\gamma -2s} \\lesssim \\Vert f\\Vert _{L^\\infty _q}$ , since $|\\bar{v}| \\approx |v_0|$ and $q > \\gamma +2s+3$ .", "For $J_1$ , letting $w = \\alpha v_0/|v_0| + b$ with $b\\cdot v_0 = 0$ , one has $ \\left(\\frac{|w|}{|v_0|}\\right)^{\\gamma +2s}\\left( |v_0|^2 - \\left( v_0 \\cdot \\frac{w}{|w|}\\right)^2\\right)^{s} = |v_0|^{-\\gamma } |b|^{2s}|w|^\\gamma .$ Noting that $|b|\\le |v_0+w|$ , we have $\\begin{split}\\int _{\\mathbb {R}^3} f(\\bar{v}+w) |v_0|^{-\\gamma } |b|^{2s} |w|^\\gamma \\, \\mathrm {d}w &\\le |v_0|^{-\\gamma } \\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)} \\int _{\\mathbb {R}^3} \\langle \\bar{v}+w\\rangle ^{-q} |v_0+w|^{2s} |w|^{\\gamma } \\, \\mathrm {d}w\\\\&\\lesssim |v_0|^{-\\gamma } \\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)} \\int _{\\mathbb {R}^3} \\langle v_0+w\\rangle ^{-q+2s} |w|^{\\gamma } \\, \\mathrm {d}w \\lesssim \\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)},\\end{split}$ using $\\langle \\bar{v}+w\\rangle \\approx \\langle v_0+w\\rangle $ and $q> \\gamma +2s+3$ .", "This establishes the upper bound $\\bar{\\Lambda }\\lesssim \\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)}$ .", "Combining this with Lemma REF concludes the proof of the first boundedness condition (REF ).", "To establish (), we assume as usual that $|v_0|>2$ .", "For any $v^{\\prime }\\in B_2$ and $r>0$ , changing variables with $\\bar{v} = v_0 + T_0 v$ , we have $\\begin{split}\\int _{\\mathbb {R}^3\\setminus B_r(v^{\\prime })} \\bar{K}_f(v,v^{\\prime }) \\, \\mathrm {d}v&= |v_0|^{2+\\frac{3\\gamma }{2s}} \\int _{\\mathbb {R}^3\\setminus B_r(v^{\\prime })} K_f(\\bar{v}, \\bar{v}^{\\prime }) \\, \\mathrm {d}v= \\int _{\\mathbb {R}^3\\setminus E_r(\\bar{v}^{\\prime })} K_f(\\bar{v}, \\bar{v}^{\\prime }) \\, \\mathrm {d}\\bar{v}.\\end{split}$ The last integral is estimated in the proof of [50], up to choosing a different value of $r$ .", "More specifically, our $E_r$ would be $E_{\\bar{r}}$ with $\\bar{r} = r|v_0|^{\\frac{\\gamma +2s}{2s}}$ in the notation of [50].", "Therefore, their calculation (which does not depend on the sign of $\\gamma +2s$ ) implies $\\begin{split}& \\int _{\\mathbb {R}^3\\setminus B_r(v^{\\prime })} \\bar{K}_f(v,v^{\\prime }) \\, \\mathrm {d}v\\\\& \\le \\int _{\\mathbb {R}^3} f(\\bar{v}^{\\prime }+u)|u|^\\gamma \\Big ( (r|v_0|^\\frac{\\gamma +2s}{2s})^{-2s} |u|^{2s}\\Big ( 1+ |v_0|^2 - \\frac{(v_0\\cdot u)^2}{|u|^2} \\Big )^s+ (r|v_0|^\\frac{\\gamma +2s}{2s})^{-s}|u|^s\\Big | \\frac{u}{|u|}\\cdot v_0\\Big |^s \\Big ) \\, \\mathrm {d}u \\\\&\\le I_1 + 
I_2,\\end{split}$ where $\\begin{split}I_1&= r^{-2s} |v_0|^{-\\gamma -2s} \\int _{\\mathbb {R}^3} f(\\bar{v}^{\\prime } + u) |u|^{\\gamma +2s} \\Big (1+|v_0|^2 - \\frac{(v_0\\cdot u)^2}{|u|^2}\\Big )^s \\, \\mathrm {d}u,\\\\I_2&= r^{-s} |v_0|^{-\\gamma /2} \\int _{\\mathbb {R}^3} f(\\bar{v}^{\\prime } + u) |u|^{\\gamma +s} \\, \\mathrm {d}u.\\end{split}$ The term $I_1$ is bounded by a constant times $\\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)}r^{-2s}$ , by our estimate of $\\bar{\\Lambda }$ in the beginning of the current proof.", "For $I_2$ , we have $I_2\\le r^{-s} |v_0|^{-\\gamma /2} \\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)} \\int _{\\mathbb {R}^3} \\langle \\bar{v}^{\\prime } + u\\rangle ^{-q} |u|^{\\gamma +s} \\, \\mathrm {d}u\\lesssim r^{-s} |v_0|^{-\\gamma /2} \\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)} \\langle \\bar{v}^{\\prime }\\rangle ^{\\gamma +s},$ since $q >\\gamma +2s+3>\\gamma +s+3$ .", "By assumption, $v^{\\prime } \\in B_2$ , which implies $|\\bar{v}^{\\prime }| = |v_0 + T_0v^{\\prime }|\\lesssim |v_0|$ .", "Since $\\gamma +2s<0$ , the factor $|v_0|^{(\\gamma +2s)/2}$ is bounded by 1, and we conclude $I_2 \\lesssim r^{-s} \\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)}$ .", "Note that values of $r$ greater than 2 are irrelevant for (), because the kernel $\\bar{K}_f(v,v^{\\prime })$ is only defined for $v \\in B_1$ .", "For large $r$ , the domain of integration in () is empty.", "Therefore, we have $r^{-s} \\lesssim r^{-2s}$ , and the proof is complete.", "We remark that the bound of $I_2$ in the proof of the previous lemma is the only place where our definition (REF ) of the change of variables would not easily generalize to the case $\\gamma +2s\\ge 0$ .", "We also have the following alternative characterization of the upper bounds for $\\bar{K}_f$ , which is needed as one of the hypotheses of Theorem REF .", "It follows from (REF ) in Lemma REF in the same way that Lemma REF above follows from Lemma REF : Corollary 1.4 Let $f\\in L^\\infty _q([0,T]\\times \\mathbb {R}^6)$ for some $q>2s+3$ .", "For $|v_0|>2$ and $z= (t,x,v)\\in Q_1$ , let $\\bar{K}_{f,z}(w) = \\bar{K}_f(t, x,v, v+w)$ .", "Then for any $r>0$ , there holds $ \\int _{B_r} \\bar{K}_{f,z} (w) |w|^2 \\, \\mathrm {d}w \\le C \\Vert f(\\bar{t},\\bar{x},\\cdot )\\Vert _{L^\\infty _q(\\mathbb {R}^3)} r^{2-2s},$ The constant $C$ depends only on $\\gamma $ and $s$ ." 
], [ "Cancellation conditions", "Next, we establish two cancellation conditions for $\\bar{K}_f$ , which say that $\\bar{K}_f(v,v^{\\prime })$ is not too far from being symmetric, on average.", "As a technical tool in proving these lemmas, one needs the following “modified principal value” result, which allows one to change variables according to $v^{\\prime } \\mapsto \\bar{v}^{\\prime }$ without altering the cancellation involved in defining principal value integrals.", "This lemma is proven in [50], with an argument that does not use the sign of $\\gamma +2s$ .", "Therefore, the lemma remains valid in our context.", "Lemma 1.5 Let $\\gamma +2s<0$ and $\\rho >0$ , and let $T_0$ be defined as above.", "If $f:\\mathbb {R}^3\\rightarrow \\mathbb {R}$ is such that $\\langle v\\rangle ^{\\gamma +2s} D^2 f \\in L^1(\\mathbb {R}^3)$ , then $ \\lim _{\\rho \\rightarrow 0+} \\int _{B_\\rho \\setminus T_0(B_\\rho )} (K_f(\\bar{v},\\bar{v}+w) - K_f(\\bar{v}+w,\\bar{v})) \\, \\mathrm {d}w = 0.$ If $f:\\mathbb {R}^3\\rightarrow \\mathbb {R}$ is such that $\\langle v\\rangle ^{\\gamma +2s}\\nabla f \\in L^1(\\mathbb {R}^3)$ , then $ \\lim _{\\rho \\rightarrow 0+} \\int _{B_\\rho \\setminus T_0(B_\\rho )} w K_f(\\bar{v}+w, \\bar{v}) \\, \\mathrm {d}w = 0.$ This lemma is proven for $T_0^+$ rather than $T_0$ in [50].", "However, $T_0$ and $T_0^+$ can easily be interchanged here, by rescaling the parameter $\\rho $ .", "Next, we prove the first cancellation condition, following the strategy of [50]: Lemma 1.6 (First Cancellation Condition) Fix $q>3+\\gamma $ .", "Suppose that $f\\in L^\\infty _q(\\mathbb {R}^3)$ and $\\langle v\\rangle ^{\\gamma +2s} D_v^2 f \\in L^1(\\mathbb {R}^3)$ .", "Then the kernel $\\bar{K}_f$ satisfies $\\begin{split}\\left| {\\rm p.v.}", "\\int _{\\mathbb {R}^3} (\\bar{K}_f(v,v^{\\prime }) - \\bar{K}_f(v^{\\prime },v)) \\, \\mathrm {d}v^{\\prime }\\right| &\\le C \\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)},\\end{split}$ where $C>0$ is universal.", "If $|v_0|\\le 2$ , then the conclusion follows from the classical Cancellation Lemma, stated for example in [48].", "If $|v_0|>2$ , then we write for any $v\\in B_2$ , $\\begin{split}{\\rm p.v.", "}\\int _{\\mathbb {R}^3}& (\\bar{K}_f(v,v^{\\prime }) - \\bar{K}_f(v^{\\prime },v)) \\, \\mathrm {d}v^{\\prime }\\\\&= |v_0|^{2+\\frac{3\\gamma }{2s}} {\\rm p.v.", "}\\int _{\\mathbb {R}^3} (K_f(\\bar{v}, \\bar{v}^{\\prime }) - K_f(\\bar{v}^{\\prime },\\bar{v})) \\, \\mathrm {d}v^{\\prime }\\\\&= |v_0|^{2+\\frac{3\\gamma }{2s}} \\lim _{R\\rightarrow 0+} \\int _{\\mathbb {R}^3\\setminus B_{R}} (K_f(\\bar{v}, \\bar{v} + T_0 v^{\\prime }) - K_f(\\bar{v} + T_0 v^{\\prime }, \\bar{v})) \\, \\mathrm {d}v^{\\prime }\\\\&= \\lim _{R\\rightarrow 0+} \\int _{\\mathbb {R}^3\\setminus T_0(B_R)} (K_f(\\bar{v}, \\bar{v} + w) - K_f(\\bar{v}+ w, \\bar{v})) \\, \\mathrm {d}w\\\\&= \\lim _{R\\rightarrow 0+} \\int _{\\mathbb {R}^3\\setminus B_{R}} (K_f(\\bar{v}, \\bar{v} + w) - K_f(\\bar{v}+ w, \\bar{v})) \\, \\mathrm {d}w\\\\&= {\\rm p.v.}", "\\int _{\\mathbb {R}^3} (K_f(\\bar{v}, \\bar{v} + w) - K_f(\\bar{v}+ w, \\bar{v})) \\, \\mathrm {d}w,\\end{split}$ where we have used Lemma REF (a) with $\\rho = R|v_0|^{\\gamma /(2s)+1}$ in the next-to-last equality.", "Next, we apply [48] and obtain, for $q> \\gamma +3$ $ \\left| {\\rm p.v.", "}\\int _{\\mathbb {R}^3} (\\bar{K}_f(v,v^{\\prime }) - \\bar{K}_f(v^{\\prime },v)) \\, \\mathrm {d}v^{\\prime } \\right| \\le C\\left(\\int _{\\mathbb {R}^3} f(z) |\\bar{v} - z|^\\gamma \\, \\mathrm {d}z\\right) \\le C \\Vert 
f\\Vert _{L^\\infty _q(\\mathbb {R}^3)}, $ as desired.", "Lemma 1.7 (Second Cancellation Condition) Fix $s\\in [\\frac{1}{2}, 1)$ .", "Suppose that $f\\in L^\\infty _q(\\mathbb {R}^3)$ with $q>3+2s$ and $\\langle v\\rangle ^{\\gamma +2s} \\nabla _v f \\in L^1(\\mathbb {R}^3)$ .", "Then for all $r\\in [0,\\frac{1}{4}]$ and $v\\in B_{7/4}$ , there holds $ \\left| {\\rm p.v.}", "\\int _{B_r(v)} (\\bar{K}_f(v,v^{\\prime }) - \\bar{K}_f(v^{\\prime },v)) (v^{\\prime }-v) \\, \\mathrm {d}v^{\\prime }\\right| \\le \\Lambda (1+r^{1-2s}),$ with $\\Lambda $ depending on $\\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)}$ .", "This proof is similar to the proof of [50].", "Here, we give a sketch of the argument and discuss the changes needed for our setting.", "First, we claim that for $|v_0|\\ge 2$ , $u\\in B_1(v_0)$ , and $r\\in (0,1)$ , there holds $\\int _{\\mathbb {R}^3} f(u+z) |z|^{\\gamma +1} \\min (1, r^{2-2s} |z|^{2s-2}) \\, \\mathrm {d}z\\lesssim \\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)} |v_0|^{\\gamma +2s-1} r^{1-2s}.$ To show this, we divide the integral into $B_r$ and $\\mathbb {R}^3\\setminus B_r$ , and write $ \\int _{B_r} f(u+z) |z|^{\\gamma +1} \\, \\mathrm {d}z \\le \\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)} \\int _{B_r} \\langle u+ z\\rangle ^{-q} |z|^{\\gamma +1} \\, \\mathrm {d}z \\lesssim \\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)} |v_0|^{-q} r^{\\gamma +4}.$ Since $s\\in [\\frac{1}{2}, 1)$ and $r\\in (0,1)$ , we have $r^{\\gamma +4}\\le 1\\le r^{1-2s}$ .", "Also, $q> 2s+3> \\gamma +2s-1$ , so $|v_0|^{-q} < |v_0|^{\\gamma +2s-1}$ .", "Next, since $q> 2s+3 > \\gamma +2s+2$ , $\\begin{split}r^{2-2s}\\int _{\\mathbb {R}^3\\setminus B_r} f(u+z) |z|^{\\gamma +2s-1} \\, \\mathrm {d}z &\\le r^{2-2s} \\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)} \\int _{\\mathbb {R}^3\\setminus B_r} \\langle u+z\\rangle ^{-q} |z|^{\\gamma +2s-1} \\, \\mathrm {d}z\\\\& \\le r^{2-2s} \\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)} \\langle u\\rangle ^{\\gamma +2s-1} \\le r^{1-2s} |v_0|^{\\gamma +2s-1} \\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)},\\end{split}$ using $|u|\\approx |v_0|$ , $\\gamma + 2s \\le 0$ , and $1 \\le r^{-1}$ .", "This establishes (REF ).", "Now, to prove the lemma, we may focus on the case $|v_0|>2$ , by [48].", "By the symmetry property $K_f(\\bar{v}, \\bar{v} + \\bar{w}) = K_f(\\bar{v}, \\bar{v}-\\bar{w})$ of $K_f$ , one can easily show ${\\rm p.v.}", "\\int _{B_r(v)} \\bar{K}_f(v^{\\prime },v) (v^{\\prime }-v) \\, \\mathrm {d}v^{\\prime } = 0$ .", "Therefore, it suffices to bound the remaining term ${\\rm p.v.}", "\\int _{B_r(v)} \\bar{K}_f(v,v^{\\prime }) (v^{\\prime }-v) \\, \\mathrm {d}v^{\\prime }.$ Using the definition (REF ) of $\\bar{K}_f$ and changing variables according to $\\bar{w} = T_0(v-v^{\\prime })$ (which is compatible with the principal value integral, by Lemma REF (b)), this term equals ${\\rm p.v.}", "\\int _{E_r} (T_0^{-1} \\bar{w}) K_f(\\bar{v} - \\bar{w}, \\bar{v}) \\, \\mathrm {d}\\bar{w}.$ With $\\bar{r} = |v_0|^{\\frac{\\gamma +2s}{2s}} r$ as above, we decompose this integral as follows, using $E_r = T_0(B_r) = T_0^+(B_{\\bar{r}})$ : $\\begin{split}{\\rm p.v.}", "\\int _{E_r} (T_0^{-1} \\bar{w}) K_f(\\bar{v} - \\bar{w}, \\bar{v}) \\, \\mathrm {d}\\bar{w}&\\le \\Big |{\\rm p.v.}", "\\int _{B_{\\bar{r}}} (T_0^{-1} \\bar{w}) K_f(\\bar{v} - \\bar{w}, \\bar{v}) \\, \\mathrm {d}\\bar{w}\\Big |\\\\&\\qquad + \\Big | \\int _{T_0^+(B_{\\bar{r}}) \\setminus B_{\\bar{r}}} (T_0^{-1} \\bar{w}) K_f(\\bar{v} - \\bar{w}, \\bar{v}) \\, \\mathrm {d}\\bar{w}\\Big 
|=: I_1 + I_2.\\end{split}$ For $I_2$ , we use the following inequality from the proof of [50], with $\\bar{r}$ replacing $r$ : $\\begin{split}\\Big | \\int _{T_0^+(B_{\\bar{r}}) \\setminus B_{\\bar{r}}} ((T_0^+)^{-1} \\bar{w}) K_f(\\bar{v} - \\bar{w}, \\bar{v}) \\, \\mathrm {d}\\bar{w}\\Big |&\\lesssim |v_0| \\int _{\\mathbb {R}^3} f(\\bar{v}^{\\prime } + z) |z|^{\\gamma +1} \\min (1, \\bar{r}^{2-2s}|z|^{2s-2}) \\, \\mathrm {d}w \\\\&\\quad + \\int _{\\mathbb {R}^3}f(\\bar{v}^{\\prime } + z) \\bar{r}^{1-2s} |z|^{\\gamma +2s} (1+|v_0|^2 - (v_0\\cdot z)/|z|^2 )^s \\, \\mathrm {d}w.\\end{split}$ The first term on the right is estimated using (REF ) with $r = \\bar{r}$ .", "The second term is estimated using our upper bound $\\bar{\\Lambda }\\lesssim \\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)}$ from the proof of Lemma REF with $\\bar{v}^{\\prime }$ replacing $\\bar{v}$ , which is valid because $|\\bar{v}|\\approx |\\bar{v}^{\\prime }|\\approx |v_0|$ .", "In all, we have $\\left| \\int _{T_0^+(B_{\\bar{r}}) \\setminus B_{\\bar{r}}} ((T_0^+)^{-1} \\bar{w}) K_f(\\bar{v} - \\bar{w}, \\bar{v}) \\, \\mathrm {d}\\bar{w}\\right|\\lesssim {\\bar{r}}^{1-2s}|v_0|^{\\gamma +2s}= |v_0|^{\\frac{\\gamma +2s}{2s}} r^{1-2s}.$ Since $T_0^{-1}w = |v_0|^{-\\frac{\\gamma +2s}{2s}} (T_0^+)^{-1} \\bar{w}$ , this implies $I_2 \\lesssim r^{1-2s}$ .", "For $I_1$ , we use another calculation quoted from the proof of [50], where $\\bar{r}$ once again plays the role of $r$ : $\\begin{split}\\left| \\int _{B_{\\bar{r}}} ((T_0^+)^{-1} \\bar{w}) K_f(\\bar{v} - \\bar{w}, \\bar{v}) \\, \\mathrm {d}\\bar{w}\\right|&\\lesssim |v_0| \\int _{\\mathbb {R}^3} f(\\bar{v}^{\\prime } + z) |z|^{\\gamma +1} \\min (1, \\bar{r}^{2-2s}|z|^{2s-2}) \\, \\mathrm {d}z ,\\end{split}$ and using (REF ) again, we conclude $I_1\\lesssim |v_0|^{- \\frac{\\gamma +2s}{2s} + \\gamma +2s}\\bar{r}^{1-2s}= r^{1-2s},$ which concludes the proof." 
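The scaling in the estimate just used can also be checked numerically. The short script below is only an illustrative sanity check and is not part of the argument: it takes $f(v)=\langle v\rangle ^{-q}$ , the representative exponents $s=3/4$ , $\gamma =-2$ , $q=5$ (so that $\gamma +2s\le 0$ , $s\in [\frac{1}{2},1)$ , and $q>3+2s$ ), sets $u=v_0$ , and verifies that the left-hand side divided by $|v_0|^{\gamma +2s-1}r^{1-2s}$ stays of order one over a grid of $|v_0|$ and $r$ , after reducing the integral to two dimensions by rotational symmetry about $v_0$ .

```python
import numpy as np
from scipy import integrate

# Sanity check (illustrative only) of
#   int f(u+z) |z|^(gamma+1) min(1, r^(2-2s) |z|^(2s-2)) dz
#       <~  ||f||_{L^inf_q} |v0|^(gamma+2s-1) r^(1-2s)
# with f(v) = <v>^(-q), u = v0, and representative exponents.
s, gamma, q = 0.75, -2.0, 5.0

def lhs(v0, r, rho_max=150.0):
    def integrand(theta, rho):               # rho = |z|, theta = angle between z and v0
        dist2 = v0**2 + rho**2 + 2.0 * v0 * rho * np.cos(theta)
        bracket = np.sqrt(1.0 + dist2)       # Japanese bracket <u + z>
        cutoff = 1.0 if rho <= r else (r / rho)**(2.0 - 2.0 * s)
        # the rho**2 volume element is merged into the power of rho below
        return bracket**(-q) * rho**(gamma + 3) * cutoff * np.sin(theta)
    val, _ = integrate.dblquad(integrand, 0.0, rho_max, 0.0, np.pi)
    return 2.0 * np.pi * val

for v0 in [2.0, 5.0, 10.0, 20.0]:
    for r in [0.05, 0.2, 0.8]:
        ratio = lhs(v0, r) / (v0**(gamma + 2.0 * s - 1.0) * r**(1.0 - 2.0 * s))
        print(f"|v0| = {v0:5.1f}  r = {r:4.2f}  ratio = {ratio:6.3f}")
# The printed ratios stay of order one (single digits) over the whole grid,
# consistent with the claimed |v0|^(gamma+2s-1) r^(1-2s) scaling.
```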
], [ "Hölder continuity", "In this subsection, we establish the Hölder continuity of the kernel $\\bar{K}_f$ .", "First, we have a lemma on the kinetic Hölder spaces and their relationship to the change of variables (REF ).", "Lemma 1.8 Given $z_0\\in \\mathbb {R}^{7}$ and $F:\\mathcal {E}_R(z_0) \\rightarrow \\mathbb {R}$ , define $\\bar{F} :Q_R \\rightarrow \\mathbb {R}$ by $\\bar{F}(z) = F(\\mathcal {T}_0(z))$ .", "Then, $ \\Vert \\bar{F}\\Vert _{C_\\ell ^\\beta (Q_R)} \\lesssim \\Vert F\\Vert _{C_\\ell ^\\beta (\\mathcal {E}_R(z_0))} \\le |v_0|^{\\bar{c} \\beta } \\Vert \\bar{F}\\Vert _{C^\\beta _\\ell (Q_R)}, $ with $\\bar{c} :=-\\gamma /(2s)>0$ .", "The argument is the same as [50], but the powers of $|v_0|$ are different because of the different definition of $\\mathcal {T}_0$ (recall (REF )).", "We claim that for all $|v_0|>2$ and $z, z_1 \\in \\mathbb {R}^7$ , $d_\\ell (\\mathcal {T}^{-1}(z), \\mathcal {T}^{-1}(z_1)) &\\le |v_0|^{-\\frac{\\gamma }{2s}} d_\\ell (z,z_1),\\\\d_\\ell (\\mathcal {T}(z), \\mathcal {T}(z_1)) &\\le d_\\ell (z,z_1).$ For (REF ), we use $\\Vert T_0^{-1}\\Vert = |v_0|^{1 - \\frac{\\gamma +2s}{2s}}$ and the definition (REF ) of $d_\\ell $ to write $\\begin{split}d_\\ell (&\\mathcal {T}^{-1}(z), \\mathcal {T}^{-1}(z_1))\\\\&= \\min _{w\\in \\mathbb {R}^3} \\max \\Big ( |t-t_1|^\\frac{1}{2s}, |T_0^{-1}(x- x_1 - (t-t_1) w)|^\\frac{1}{1+2s},|T_0^{-1} (v-w)|, |T_0^{-1} (v_1-w)| \\Big )\\\\&\\le |v_0|^{-\\frac{\\gamma }{2s}}\\min _{w\\in \\mathbb {R}^3} \\max \\left(|t-t_1|^\\frac{1}{2s}, |x-x_1 - (t-t_1)w|^\\frac{1}{1+2s},|v-w|, |v_1-w|\\right)\\\\&\\le |v_0|^{-\\frac{\\gamma }{2s}} d_\\ell (z,z_1).\\end{split}$ The proof of () is similar, so we omit it.", "With (REF ) and (), the left invariance of $d_\\ell $ implies that for $\\bar{z} = \\mathcal {T}_0 z$ and $\\bar{z}_1 = \\mathcal {T}_0 z_1$ , $d_\\ell (\\bar{z}, \\bar{z}_1) \\le d_\\ell (z,z_1) \\lesssim |v_0|^{-\\frac{\\gamma }{2s}} d_\\ell (\\bar{z}, \\bar{z}_1).$ To conclude the proof, fix $z, z_1 \\in Q_R$ , and let $\\bar{z} = \\mathcal {T}_0 z$ and $\\bar{z}_1 = \\mathcal {T}_0 z_1$ .", "Let $p$ be the polynomial expansion of $\\bar{F}$ at the point $z_1$ of degree $\\deg _k p < \\beta $ , such that $|\\bar{F}(z) - p(z)|\\le [\\bar{F}]_{C_\\ell ^\\beta } d_\\ell (z,z_1)^\\beta $ .", "Since $p\\circ \\mathcal {T}_0^{-1}$ is a polynomial of the same degree as $p$ , there holds $ |F(\\bar{z}) - (p\\circ \\mathcal {T}_0^{-1})(\\bar{z})| = |\\bar{F}(z) - p(z)|\\le [\\bar{F}]_{C_\\ell ^\\beta } d_\\ell (z,z_1)^\\beta \\le [\\bar{F}]_{C_\\ell ^\\beta } d_\\ell (\\bar{z}, \\bar{z}_1)^\\beta |v_0|^{-\\frac{\\beta \\gamma }{2s}},$ from the second inequality in (REF ).", "This implies the second inequality in the statement of the lemma.", "The first inequality in the lemma follows from the first inequality of (REF ) in a similar way.", "Next, we establish the Hölder regularity of the kernel $\\bar{K}_f$ .", "The following lemma extends [50].", "Lemma 1.9 Assume $\\gamma +2s < 0$ .", "For any $f:[0,T]\\times \\mathbb {R}^3\\times \\mathbb {R}^3 \\rightarrow \\mathbb {R}$ such that $f\\in C^\\alpha _{\\ell ,q}$ with $q> 3 + \\alpha /(1+2s)$ , and for any $|v_0|>2$ and $r\\in (0,1]$ , let $ \\bar{K}_{f,z}(w) := \\bar{K}_f(t,x,v,v+w), \\quad z = (t,x,v) \\in Q_{2r}.$ Then we have $\\int _{B_\\rho } |\\bar{K}_{f,z_1}(w) - \\bar{K}_{f,z_2}(w)| |w|^2 \\, \\mathrm {d}w\\le \\bar{A}_0 \\rho ^{2-2s} d_\\ell (z_1,z_2)^{\\alpha ^{\\prime }}, \\quad \\rho > 0, z_1,z_2\\in Q_{2r},$ with $\\alpha ^{\\prime } = \\alpha 
\\frac{2s}{1+2s}$ and $\\bar{A}_0 \\le C |v_0|^{2+\\frac{\\alpha }{1+2s}} \\Vert f\\Vert _{C^\\alpha _{\\ell , q}},$ where the constant $C$ depends on universal quantities, $\\alpha $ , $q$ , and $T-\\tau $ , but is independent of $|v_0|$ .", "Changing variables according to $w \\mapsto T_0^{-1} w$ , $ \\begin{split}\\int _{B_\\rho } |\\bar{K}_{f,z_1} (w) - \\bar{K}_{f,z_2}(w)| |w|^2 \\, \\mathrm {d}w&= |v_0|^{2+\\frac{3\\gamma }{2s}} \\int _{B_\\rho } |K_{f,\\bar{z}_1}(T_0w) - K_{f,\\bar{z}_2}(T_0w)| |w|^2 \\, \\mathrm {d}w\\\\&= \\int _{E_\\rho } |K_{f,\\bar{z}_1}(w) - K_{f,\\bar{z}_2}( w)| |T_0^{-1} w|^2 \\, \\mathrm {d}w\\\\&\\le |v_0|^{-\\frac{\\gamma }{s}}\\int _{B_{\\rho |v_0|^{(\\gamma +2s)/2s}}} |K_{f,\\bar{z}_1}( w) - K_{f,\\bar{z}_2}( w)| |w|^2 \\, \\mathrm {d}w,\\end{split}$ using $\\Vert T_0^{-1}\\Vert = |v_0|^{1 - \\frac{\\gamma +2s}{2s}}$ and $E_\\rho \\subset B_{\\rho |v_0|^{(\\gamma +2s)/2s}}$ .", "Next, it can be shown (see the proof of [50]) from the definition (REF ) of $K_f$ that $ |K_{f,\\bar{z}_1}(w) - K_{f,\\bar{z}_2}(w)| \\le K_{\\Delta f, \\bar{z}_1}(w),$ where, following the notation of [50], $\\Delta f(z) = |f(z) - f(\\xi \\circ z)|$ and $\\xi = \\bar{z}_2 \\circ \\bar{z}_1^{-1}$ .", "Using Lemma REF , we now have $\\begin{split}\\int _{B_\\rho } |\\bar{K}_{f,z_1} (w) - &\\bar{K}_{f,z_2}(w)| |w|^2 \\, \\mathrm {d}w\\lesssim |v_0|^{-\\gamma /s} \\int _{B_{\\rho |v_0|^{(\\gamma +2s)/2s}}} K_{\\Delta f, \\bar{z}_1}(w) |w|^2 \\, \\mathrm {d}w \\\\&\\lesssim |v_0|^{-\\gamma /s} \\left(\\int _{\\mathbb {R}^3} \\Delta f(\\bar{t}_1,\\bar{x}_1,\\bar{v}_1+u) |u|^{\\gamma +2s} \\, \\mathrm {d}u\\right) \\left( \\rho |v_0|^{\\frac{\\gamma +2s}{2s}}\\right)^{2-2s}\\\\&\\lesssim |v_0|^{ 2 - \\gamma - 2s} \\left(\\int _{\\mathbb {R}^3} \\Delta f(\\bar{t}_1,\\bar{x}_1,\\bar{v}_1 +u) |u|^{\\gamma +2s} \\, \\mathrm {d}u\\right) \\rho ^{2-2s}.\\end{split}$ To estimate $\\Delta f(\\bar{t}_1, \\bar{x}_1, \\bar{v}_1 +u)$ from above, using (REF ), one can show (see formula (5.23) in [50] or the analysis of (REF ) above) that $|f(\\bar{z}_1 \\circ (0,0,u)) - f(\\bar{z}_2\\circ (0,0,u))|\\lesssim \\Big (d_\\ell (\\bar{z}_1, \\bar{z}_2) + |\\bar{t}_1 - \\bar{t}_2|^\\frac{1}{1+2s}|u|^{1/(1+2s)}\\Big )^\\alpha \\langle \\bar{v}_1 +u\\rangle ^{-q} \\Vert f\\Vert _{C^\\alpha _{\\ell , q}},$ with $q := q^{\\prime } + \\alpha /(1+2s)$ .", "Therefore, $\\begin{split}\\Delta f(\\bar{t}_1, \\bar{x}_1, \\bar{v}_1 + u) &\\le \\langle \\bar{v}_1 + u\\rangle ^{-q} \\Vert f\\Vert _{C^\\alpha _{\\ell , q}} \\left( d_\\ell (\\bar{z}_1 , \\bar{z}_2) + |\\bar{t}_1 - \\bar{t}_2|^\\frac{1}{1+2s} |u|^\\frac{1}{1+2s}\\right)^\\alpha \\\\&\\lesssim \\langle \\bar{v}_1 + u\\rangle ^{-q} \\Vert f\\Vert _{C^\\alpha _{\\ell , q}} \\Big (d_\\ell (z_1 , z_2)^\\alpha + |t_1 - t_2|^{\\frac{\\alpha }{1+2s} }|u|^\\frac{\\alpha }{1+2s}\\Big )\\\\&\\le \\langle \\bar{v}_1 + u\\rangle ^{-q} |u|^\\frac{\\alpha }{1+2s} \\Vert f\\Vert _{C^\\alpha _{\\ell ,\\tilde{q}}} d_\\ell (z_1,z_2)^{\\alpha ^{\\prime }} ,\\end{split}$ since $d_\\ell (\\bar{z}_1, \\bar{z}_2) \\le d_\\ell (z_1,z_2)$ and $|\\bar{t}_1 - \\bar{t}_2| = |t_1-t_2|\\le d_\\ell (z_1,z_2)^{2s}$ .", "Returning to (REF ), we have $\\begin{split}\\int _{B_\\rho } |\\bar{K}_{f,z_1} (w) - \\bar{K}_{f,z_2}(w)| |w|^2 \\, \\mathrm {d}w &\\lesssim |v_0|^{2-\\gamma -2s} \\Vert f\\Vert _{C^\\alpha _{\\ell ,q}} d_\\ell (z_1,z_2)^{\\alpha ^{\\prime }}\\left(\\int _{\\mathbb {R}^3} |u|^{\\gamma +2s+\\alpha /(1+2s)} \\langle v_1 + u\\rangle ^{-q} \\, \\mathrm {d}u\\right) \\rho 
^{2-2s}\\\\&\\lesssim |v_0|^{2+\\alpha /(1+2s)}\\Vert f\\Vert _{C^\\alpha _{\\ell ,q}} d_\\ell (z_1,z_2)^{\\alpha ^{\\prime }} \\rho ^{2-2s}\\end{split}$ since $q>3 + \\alpha /(1+2s)$ (recall $\\gamma +2s<0$ ).", "We have used the convolution estimate from Lemma REF and the fact that $|\\bar{v}_1|\\approx |v_0|$ ." ], [ "Technical lemmas", "In this appendix, we collect some technical lemmas.", "First, we have an estimate for convolutions with functions $v\\mapsto |v|^p$ .", "We state it without proof.", "Lemma 2.1 For any $p>-3$ and $f\\in L^\\infty _q(\\mathbb {R}^3)$ with $q>3 + p_+$ , there holds $ \\int _{\\mathbb {R}^3} f(v+w) |w|^p \\, \\mathrm {d}w \\le C\\Vert f\\Vert _{L^\\infty _q(\\mathbb {R}^3)} \\langle v\\rangle ^p,$ for a constant $C$ depending on $p$ and $q$ .", "The following interpolation lemma allows us to trade regularity for decay.", "We also omit the proof of this lemma, which is standard.", "Lemma 2.2 Suppose that $\\phi :\\mathbb {R}^6\\rightarrow \\mathbb {R}$ is such that $\\phi \\in L^\\infty _{q_1}(\\mathbb {R}^6)$ and $\\phi \\in C^\\alpha _{\\ell ,q_2}(\\mathbb {R}^6)$ , for some $\\alpha \\in (0,1)$ and $q_1 \\ge q_2 \\ge 0$ .", "If $\\beta \\in (0,\\alpha )$ and $m\\in [q_2, q_1]$ are such that $m\\le q_1 \\left(1 - \\frac{\\beta }{\\alpha }\\right) + q_2 \\frac{\\beta }{\\alpha },$ then $\\Vert \\phi \\Vert _{C^{\\beta }_{\\ell ,m}(\\mathbb {R}^6)}\\lesssim [\\phi ]_{C^\\alpha _{\\ell ,q_2}(\\mathbb {R}^6)}^\\frac{\\beta }{\\alpha } \\Vert \\phi \\Vert _{L^{\\infty }_{q_1}(\\mathbb {R}^6)}^{1-\\frac{\\beta }{\\alpha }}.$ Next, we quote an estimate for $Q_{\\rm s}(f,g)$ Hölder norms.", "This lemma is stated in [50] for the case $\\gamma +2s\\ge 0$ , but it is clear from the proof that the same statement holds when $\\gamma +2s<0$ .", "Lemma 2.3 [50] Let $q> 3 + (\\gamma +2s)_+$ and $\\alpha \\in (0,\\min (1,2s))$ , and assume that $f\\in C^\\alpha _{\\ell ,q}(\\mathbb {R}^3)$ and $g \\in C^{2s+\\alpha }_{\\ell ,q}(\\mathbb {R}^3)$ .", "Then $Q_1(f,g)\\in C^{\\alpha ^{\\prime }}_{\\ell ,q-(\\gamma +2s)_+ - \\alpha /(1+2s)}(\\mathbb {R}^3)$ for $\\alpha ^{\\prime } = \\frac{2s}{1+2s}\\alpha $ , and $ \\Vert Q_1(f,g)\\Vert _{C^{\\alpha ^{\\prime }}_{q-(\\gamma +2s)_+ - \\alpha /(1+2s)}(\\mathbb {R}^3)} \\le C \\Vert f\\Vert _{C^\\alpha _{\\ell ,q}(\\mathbb {R}^3)}\\Vert g\\Vert _{C^{2s+\\alpha }_{\\ell ,q}(\\mathbb {R}^3)}.$ The constant $C$ depends only on $s$ , $\\gamma $ , and the collision kernel.", "Finally, we have an estimate for $Q_{\\rm ns}(f,g)$ in Hölder norms: Lemma 2.4 [50] For $\\alpha \\in (0,\\min (1,2s))$ , let $q> 3 + \\alpha /(1+2s)$ and $\\alpha ^{\\prime } = \\frac{2s}{1+2s} \\alpha $ .", "If $f \\in C^\\alpha _{\\ell ,q}(\\mathbb {R}^3)$ and $g\\in C^{\\alpha ^{\\prime }}_{\\ell ,q+\\alpha /(1+2s)+\\gamma }(\\mathbb {R}^3)$ , then $\\Vert Q_{\\rm ns}(f,g) \\Vert _{C^{\\alpha ^{\\prime }}_{\\ell ,q}(\\mathbb {R}^3)} \\le C\\Vert f\\Vert _{C^\\alpha _{\\ell ,q}(\\mathbb {R}^3)} \\Vert g\\Vert _{C^{\\alpha ^{\\prime }}_{\\ell ,q+\\alpha /(1+2s) + \\gamma }(\\mathbb {R}^3)},$ with the constant $C$ depending only on $\\gamma $ , $s$ , and $\\alpha $ ." ] ]
2207.03497
[ [ "Meta-Learning the Difference: Preparing Large Language Models for\n Efficient Adaptation" ], [ "Abstract Large pretrained language models (PLMs) are often domain- or task-adapted via fine-tuning or prompting.", "Finetuning requires modifying all of the parameters and having enough data to avoid overfitting while prompting requires no training and few examples but limits performance.", "Instead, we prepare PLMs for data- and parameter-efficient adaptation by learning to learn the difference between general and adapted PLMs.", "This difference is expressed in terms of model weights and sublayer structure through our proposed dynamic low-rank reparameterization and learned architecture controller.", "Experiments on few-shot dialogue completion, low-resource abstractive summarization, and multi-domain language modeling show improvements in adaptation time and performance over direct finetuning or preparation via domain-adaptive pretraining.", "Ablations show our task-adaptive reparameterization (TARP) and model search (TAMS) components individually improve on other parameter-efficient transfer like adapters and structure-learning methods like learned sparsification." ], [ "Introduction", "Finetuning large pretrained language models (PLMs) on task-specific supervised data has become the default strategy to produce performant models for various NLP tasks ([6], [15], [31], inter alia), provided a task has enough training data to be adapted to without overfitting.", "For few-shot tasks, very large PLMs like the 175B-parameter GPT-3 [5] do surprisingly well without training using prompts, where task-specific examples $(x_j,y_j)$ are presented as text to condition the PLM before a test input $x_{\\text{test}}$ is given.", "Our work considers an important middle ground: minimizing the computational cost of finetuning while improving on its performance in low-resource and few-shot settings.", "In general, self-supervised objectives used for PLMs assume little about the nature of downstream tasks.", "Earlier works suggested that task-awareness is unnecessary for PLMs of sufficient scale; e.g., [32] found that multi-task learning underperformed pretrain-finetune for the largest T5 models on multi-format question answering.", "However, [11] showed that further pretraining on unlabeled text from the downstream task (task-adaptive pretraining, or TAPT) or a related domain (DAPT) consistently improved adaptation performance.", "[1] revisited [32] and found that by greatly improving the number and balance of tasks, one can utilize a multitask objective after pretraining and achieve gains in proportion to the number of tasks.", "As for even larger models, [5] argue that the impressive few-shot prompting ability of GPT-3 comes from “implicit” meta-learning [34], [4] which they term in-context learning, where the outer loop is performed by self-supervised pretraining, and the inner loop is performed by forward passes on implicit examples in unlabeled texts.", "These works motivate that exposure to broad information about downstream tasks remains useful in preparing a large PLM for adaptation.", "Hence, we propose explicit meta-learning for preparing large PLMs for data-efficient adaptation; a visual comparison is in fig:scheme.", "To also achieve parameter efficiency and performance, we adapt meta-transfer learning [39] to large PLMs in two proposed ways: an inner loop optimizing a low-rank task-adaptive reparameterization (TARP) of weights, and an outer loop learning an architecture controller for 
searching task-adaptive model structures (TAMS).", "These improve over general finetuning and even DAPT-prepared LMs on generative and unconditional few-shot and low-resource settings, such as multi-domain abstractive summarization (AdaptSum; [43]) and language modeling.", "Figure: Comparison between (top) implicit meta-learning from text corpora that incidentally contain task “prefixes”, as in GPT-3 (; Fig.", "1.1), and (bottom) explicit meta-learning the transformation of a PLM's weights and sublayers for a distribution of tasks.Furthermore, our analysis shows that each component of our task distribution-aware strategy independently improves over prior work: (1) meta-transfer learning improves over model-agnostic meta learning [8] even after multitask learning on the same data, setting a new state-of-the-art on few-shot Persona-Chat dialog personalization [46]; (2) our proposed dynamic low-rank TARP outperforms recent methods such as MAM adapters [13] and alternate reparameterizations like Kronecker products [45]; (3) our lightweight controller for generating task-aware architectures in TAMS extends improvements into higher resource tasks and rediscovers task-specific modifications like 1D convolutions for Transformers.", "Our proposal is summarized in fig:mltd, with pseudocode in Algorithm REF at the end of the next section.", "We publicly release the code for our experiments and our reference library online.https://github.com/amazon-research/meta-learning-the-difference" ], [ "Methodology", "Our goal is to explicitly optimize a PLM for efficient adaptation to any task $\\mathcal {T}_i$ sampled from a distribution of low-resource NLP tasks $p(\\mathcal {T})$ .", "Each task consists of a training set $\\mathcal {D}_{i}^{\\text{train}}$ , a test set $\\mathcal {D}_{i}^{\\text{test}}$ , and a loss function $\\mathcal {L}_i$ .", "The prevailing approach for efficiently optimizing a base model $f_{\\Theta }$ on a (relatively) small task-specific dataset is to use model-agnostic meta-learning (MAML; [8]).", "This is a bi-level optimization process that uses a stochastic gradient-based strategy to sample a batch of tasks $\\lbrace \\mathcal {T}_i\\rbrace _{i=1}^{B}$ from the task distribution $p(\\mathcal {T})$ in each meta-iteration.", "In the inner loop, each task finetunes a copy of the model's weights $\\Theta $ for a small number of steps $T_\\text{in}$ , producing task-specific weights $\\Theta _i$ .", "In the outer loop, each task model $f_{\\Theta _i}$ is evaluated on its corresponding task's test set $\\mathcal {D}_{i}^{\\text{test}}$ and these losses are summed to produce the overall meta-loss.", "The meta-loss $\\sum _{\\mathcal {T}_i \\sim p(\\mathcal {T})}\\mathcal {L}_{i}^{\\text{test}}(f_{\\Theta _i})$ is then used to optimize and update $\\Theta $ ; see [41]https://lilianweng.github.io/lil-log/2018/11/30/meta-learning.html for a more detailed overview.", "MAML, however, is not generally used in NLP as a competitive alternative to pretrain-then-finetune methods for low-resource and few-shot settings.", "To rectify MAML's limitations, we propose a meta-learning the difference (MLtD) framework to optimize PLMs for fast and data-efficient adaptations with the following contributions:" ], [ "MAML after pretraining.", "Earlier works performed MAML using random initializations, or at best with pretrained token embeddings [25], which was shown to underperform the pretrain-finetune paradigm.", "With the increased prevalence of large-scale pretraining, recent works have begun to 
initialize MAML with PLMs [7].", "We continue this approach, but further show that pretraining + MAML, even when labeled (i.e., multitask) and performed only on the meta-training data (i.e., no external text), improves performance and mitigates overfitting versus pretraining alone or MAML alone (sec:analysis), suggesting that pretraining produces a better initialization that promotes generalization in later meta-learning." ], [ "Parameter-efficient transfer.", "Adaptation data is typically limited, making it easy for large models to overfit.", "Previous works use very shallow CNNs [8], only adapt scale-and-shift parameters atop the original model [39], or apply various general regularization techniques such as weight decay, label smoothing, dropout, early stopping, and $\\ell _1$ regularization [25], [37].", "In contrast, we propose learning dynamic low-rank reparameterizations $g_{\\Phi _i}$ (sec:parameter-efficient) of the base model such that $\\Theta _i(x) = g_{\\Phi _i}(\\Theta _{\\text{LM}}, x)$ for task $\\mathcal {T}_i$ .", "Here, $\\Phi $ is a small set of new parameters that are adapted into task-specific $\\Phi _i$ when finetuning.", "Notably, we modify MAML to incorporate these parameter-efficient modules, so that during task adaptation in both meta-training (the inner loop) and meta-testing (novel tasks) $\\mathcal {T}_i$ , we only adapt $\\Phi \\rightarrow \\Phi _i$ instead of $\\Theta \\rightarrow \\Theta _i$ , speeding up both phases and improving overall performance.", "Though some works explore the benefits of joint training or fusion of parameter-efficient modules [38], [21], [28], prior work has not explored meta-learning to learn these adaptations in a task distribution-aware setting." ], [ "Architecture adaptation.", "While the Transformer has proven to be a robust general-purpose architecture, recent work has shown that the optimal attention-then-FFN sublayer structure can vary across tasks (e.g., Sandwich Transformers; [30]).", "However, previous data-driven sublayer searches are often task-agnostic (e.g., [36]), where the sublayer search is implemented before pretraining.", "Meta-learning enables learning data-driven sublayers after pretraining, in a differentiable, task-adaptive manner (sec:task-adaptive).", "Instead of a separate search per task as in previous methods (DARTS; [22]), we propose meta-learning a task-aware architecture controller to help it generalize to new tasks when searching neural architectures, by learning to directly generate task-specific sublayer structures from the dataset.", "By exploiting architectural knowledge learned over the task distribution, our task-adaptive model structure approach improves test-time performance.", "A related work in customizing model structure is CMAML [37], which applies a sparse pruning algorithm to obtain task-specific weight masks.", "Our method differs in that we consider generalization over a distribution of tasks (instead of a single task), and has a richer search space with different operations, numbers of layers, and widths of layers, so that our method provides architecture diversity to accommodate to the different task data.", "In all, we employ meta-learning to improve upon initializing from a pretrained $\\Theta _{\\text{LM}}$ , allowing better downstream finetuning on tasks.", "By learning only the transformation weights $\\Phi _i$ and (optionally) the task-specific architecture $\\alpha _i$ for new tasks $\\mathcal {T}_i$ , our method “learns to learn the difference” between a PLM and a 
task-specific LM in a training-efficient way." ], [ "Efficient parameter adaptation", "We categorize recent works in parameter-efficient adaptation of large PLMs into three types:" ], [ "Adding parameter-efficient layers.", "Low-dimensional adapters [33] have been injected into a frozen pretrained BERT either serially after each sublayer [14], or in parallel to the self-attention layers (PALs; [38]).", "Following works [3], [21] applied adapters to other NLP models, e.g. GPT-2.", "Compacters [17] reduce the adapter parameter count via hypercomplex multiplications [45]." ], [ "Adding parameter-efficient prefixes.", "Inspired by prompting, learned automated prompts or task-specific continuous variants have been applied to encoder-only PLMs like BERT [35], [12] and to generative PLMs [20], [23], [18], where one learns task-specific vectors prepended to the inputs or hidden representations." ], [ "Transformations only.", "The adapter and prefix-tuning strategies insert layers or introduce prefixes, increasing inference time or in-memory size.", "Instead, [47] learn binary masks, and diff pruning [10] learns sparse additive vectors.", "Both methods use unstructured sparsity to achieve parameter efficiency.", "Later works like BitFit [44] and LoRA [16] introduce parameter-efficient modifications targeting the Transformer architecture: BitFit only tunes the bias parameters, while LoRA adds low-rank decomposition weights to the self-attention weights.", "Recently, [13] proposed parallel mix-and-match (MAM) adapters, which leverage the benefits of the preceding types.", "Hence, to minimize overhead we focus on a “transformations only” approach.", "Inspired by the scale-and-shift parameters of [39], we propose learning affine transformations to reparameterize the pretrained model weights towards a task.", "For a pretrained weight matrix $W_0^l\in \mathbb {R}^{C_{\text{in}}\times C_{\text{out}}}$ (which can be any dense layer in the self-attention or FFN module of a transformer-based architecture), we first reparameterize the task-specific weights as: $W^l = \Phi _{1}^l\odot W_0^l + \Phi _{2}^l $ where $\Phi _{1}^l, \Phi _{2}^l \in \mathbb {R}^{C_{\text{in}}\times C_{\text{out}}}$ and $\odot $ denotes the elementwise (Hadamard) product.", "At adaptation time, we apply low-rank constraints while optimizing the reparameterization weights only, giving the training objective $\underset{\begin{array}{c}\lbrace \Phi _1^l, \Phi _2^l\rbrace _{l=1}^{L}\\ \text{rank}(\Phi _{i}^l)<r\end{array}}{\text{max}} \sum _{t=1}^T \log p(y_{t}|x,y_{<t}; {\lbrace W^l\rbrace _{l=1}^L}).$", "A straightforward approach to solve the rank-constrained problem is to apply a low-rank decomposition to the transformation weights $\Phi _{i}^{l}$ .", "We term this approach of learning parameter-efficient affine transformations task-adaptive reparameterization (TARP).", "We consider two standard static decomposition methods: Bilinear, which takes $\Phi _{j}^{l} = U_{j}^l{V_{j}^l}^{T}$ where $U_{j}^l\in \mathbb {R}^{C_{\text{in}}\times r}$ and $V_{j}^l\in \mathbb {R}^{C_{\text{out}}\times r}$ , as done in the additive-only setting ($\Phi _{1}^l = I$ ) by LoRA.", "Kronecker product, which takes $\Phi _j^l = \sum _{k=1}^n H_k^l\otimes (U^l_k{V^l_k}^{T})$ where $H_k\in \mathbb {R}^{n\times n}$ , $U^l_k\in \mathbb {R}^{({C_{\text{in}}}/{n})\times r}$ , $V^l_k\in \mathbb {R}^{({C_{\text{out}}}/{n})\times r}$ , and $n$ is a hyperparameter, as used in the “added-layer” Compacter
approach.", "Figure: TARP with dynamic decomposition (only the additive Φ 2 l \\Phi _{2}^l is depicted for simplicity).In addition, we propose a novel decomposition inspired by the self-attention mechanism, which aggregate features using input-dependent attention weights.", "This can be regarded as a function with input-dependent parameters $y = f_{\\theta (x)}(x)$ .", "Similarly, the optimal reparameterization may vary with different input values.", "The computation of a reparameterized layer in the PLM becomes ${y}=f_{\\theta ({x})}({x})=[\\Phi _1^l(x)\\odot W_0^l + \\Phi _2^l(x)]x$ where TARP parameters $\\Phi _{j}^l(x)$ are modeled by a dynamic low-rank decomposition (fig:dynamic): $\\Phi _{j}^l(x) = U_{j}^l\\Sigma _{j}^l(x){V_{j}^l}^{T},$ The square matrices $\\Sigma _{j}^l(x) \\in \\mathbb {R}^{r\\times r}$ are generated by a lightweight multi-layer perceptron (MLP) for different input vectors and $U_j^l,V_j^l$ are the learnable weight matrices with $r\\ll \\text{min}(C_{\\text{in}},C_{\\text{out}})$ .", "We compare popular parameter-efficient transfer schemes and these three decompositions in sec:analysis." ], [ "Efficient architecture adaptation", "We also propose adapting the model structure for each task in a data-driven manner.", "The weights of the task-specific architectures are learned in the inner loop, while the task-aware architecture generator which produces architecture candidates is learned in the outer loop.", "We term this approach task-adaptive model structure (TAMS) learning.", "We first represent each task $\\mathcal {T}_i$ with an embedding vector $z_i$ based on the task training set $D_{i}^{\\text{train}}$ .", "An embedding module $\\mathcal {E}$ computes the task representation by aggregating features of all training data: $z_i = \\mathcal {E}(D_{i}^{\\text{train}}) = \\frac{\\sum _{(x,y)\\in D_{i}^{\\text{train}}} \\text{Embed}(x)}{|D_{i}^{\\text{train}}|},$ where $\\text{Embed}(x)$ are intermediate representations produced by the PLM.", "For encoder-decoder models (Transformer), we take Embed to be the encoder; for encoder-only or decoder-only PLMs (BERT, GPT-2), we use the token embedding layer.", "Inspired by DARTS [22], we define the possible sublayer structures by a search space expressed as a directed acyclic graph (DAG), where each directed edge corresponds to a set of candidate operations $\\mathcal {O}$ .", "The task architecture is represented by a set of parameters $\\alpha _i$ that encode the structure, where $\\alpha _i\\in \\mathbb {R}^{{E \\times |\\mathcal {O}|}}$ ($E$ is the number of edges, and $|\\mathcal {O}|$ is the number of operations).", "In our proposed TAMS approach we also introduce a controller $\\mathcal {A}$ to generate these task-specific architecture parameters, as a function of the task embedding vector $\\alpha _i = \\mathcal {A}(z_i)$ .", "The probability of choosing operation $m$ in edge $n$ is given by $P_n(m)=\\text{softmax}_{m\\in \\mathcal {O}}(\\alpha _i[n,m])$ .", "In meta-testing, the discrete architecture is obtained by taking the argmax.", "Since argmax is non-differentiable, we use the straight-through Gumbel-Softmax estimator to backpropagate gradients for optimizing the architecture controller during meta-training.", "In TAMS, all possible architectures are initialized as part of the meta-parameters $\\tilde{w}$ based on weight-sharing [29], i.e., architecture $\\alpha _i$ 's weights $\\tilde{w}(\\alpha _i)$ are selected from the meta-parameters.", "After the reparameterization steps in TARP and the architecture 
generation steps in TAMS, our inner loop optimization takes parameters $(\Phi , \tilde{w}(\alpha _i))$ and performs a small number of gradient steps $T_\text{in}$ on the task training set to give $(\Phi _i, \tilde{w}_i)$ .", "In the outer loop optimization, we thus have to simultaneously optimize the architecture controller to perform architecture search, as well as the parameter initialization.", "This is in contrast to MAML, which just optimizes the parameter initialization in the outer loop.", "The meta-loss becomes: $\underset{\mathcal {W}}{\text{min}}\sum _{\mathcal {T}_i\sim p(\mathcal {T})}\mathcal {L}_{D_{i}^{\text{test}}}(f_{\Theta _{\text{LM}}\cup {\Phi _i}\cup \tilde{w}_i})$ , where the tuple $\mathcal {W}$ contains the base PLM's weights $\Theta _{\text{LM}}$ , the low-rank reparameterization weights $\Phi $ , the architecture controller $\mathcal {A}$ , and the weight-sharing meta-parameters $\tilde{w}$ .", "In summary, our contributions with the TAMS framework are that (1) it meta-learns a task-aware controller by training on the task distribution and then generalizes to new tasks by automatically generating an optimized architecture $\alpha _i$ from the task training data, and (2) it optimizes the controller and the parameter initialization (shared by all tasks) simultaneously under a unified meta-learning objective.", "This is in contrast to DARTS, which performs a separate search and architecture parameter optimization for each task independently.", "We summarize our full method with TARP and TAMS in Algorithm REF .", "Require: pretraining dataset $D^{\text{pre}}$ ; meta-training dataset of tasks $D^{\text{meta}}$ ; base model (LM) $f_{{\Theta }}$ ; TARP weights ${\Phi }$ ; embedding module $\mathcal {E}$ in TAMS; architecture controller $\mathcal {A}$ in TAMS; meta-parameters $\tilde{{w}}$ in TAMS; inner-/outer-loop learning rates $\eta _{\text{in}}$ /$\eta _{\text{out}}$ ; meta-training iterations $T_{\text{meta}}$ ; inner-loop iterations $T_\text{in}$ ; meta-batch size $B$ .", "Pretraining phase in MLtD: pretrain the base LM's weights ${\Theta }\rightarrow {\Theta }_{\text{LM}}$ on $D^{\text{pre}}$ .", "(In contrast, MAML runs on a random initialization.)", "Meta-training phase in MLtD: for each meta-iteration $t\in [T_{\text{meta}}]$ , sample a batch of tasks $\lbrace \mathcal {T}_i\rbrace _{i=1}^B$ from $D^{\text{meta}}$ and set $\mathcal {L}_{\text{meta}\_\text{loss}} = 0$ .", "For each $\mathcal {T}_i\in \lbrace \mathcal {T}_i\rbrace _{i=1}^B$ : initialize $\Phi _i = \Phi $ ; reparameterize $\Theta _{\text{LM}}$ with $\Phi _i$ (Eq. 1, 3); expand task-specific sublayers by the TAMS-generated architecture $\alpha _i=\mathcal {A}(\mathcal {E}(D_i^{\text{train}}))$ ; initialize the sublayer weights $\tilde{{w}}_i = \tilde{{w}}(\alpha _i)$ .", "(In contrast, MAML does not adapt the model architecture to a task.)", "For $T_\text{in}$ iterations: compute $\mathcal {L}_{\text{inner}}=\mathcal {L}_{D_i^{\text{train}}}\big (f_{{\Theta }_{\text{LM}}\cup {\Phi _i}\cup {\tilde{{w}}_i}}\big )$ and update $\big ({\Phi }_i,\tilde{{w}}_i\big )$ -= $ \eta _{\text{in}}\nabla _{({\Phi }_i,\tilde{{w}}_i)}\mathcal {L}_{\text{inner}}$ .", "(In contrast, MAML updates all parameters, whereas we only update a small number in the inner loop.)", "Evaluate on $D_i^{\text{test}}$ : $\mathcal {L}_{\text{meta}\_\text{loss}}$  += $\mathcal {L}_{D_i^{\text{test}}}\big (f_{{\Theta }_{\text{LM}}\cup {\Phi }_i\cup 
{\\tilde{w}_i}}\\big )$ Perform outer-loop optimization: $\\big ( {\\Theta _{\\text{LM}}}, {\\Phi }, \\mathcal {A}, \\tilde{{w}} \\big )$  –= $ \\eta _{out}\\nabla _{({\\Theta _{\\text{LM}}}, {\\Phi }, \\mathcal {A}, \\tilde{{w}})}\\mathcal {L}_{\\text{meta}\\_\\text{loss}}$ Return: meta-trained PLM with learned $({\\Theta _{\\text{LM}}}, {\\Phi }, \\mathcal {A}, \\tilde{{w}})$ Meta-Learning the Difference (MLtD) with TARP and TAMS." ], [ "Main results", "To demonstrate the overall benefit of our method, we compare our results to other approaches on generative adaptation tasks in the few-shot (dialogue personalization), low-resource (abstractive summarization), and medium-resource (multi-domain language modeling) regimes.", "In sec:analysis we perform some analyses and also compare TARP by itself to previous parameter-efficient works." ], [ "Implementation", "All of our experiments ran using PyTorch on single machines with NVIDIA V100 GPUs.", "See per-task hyperparameters in sec:hyperparams." ], [ "TARP decomposition.", "We apply task-adaptive reparameterization (TARP) to the pretrained self-attention and feed-forward network (FFN) blocks.", "In sec:efficiency-exps we conclude that TARP with dynamic decomposition outperforms other parameter-efficient transfer methods; TARP will always be of this form for our main experiments, with rank $r \\le 32$ ." ], [ "TAMS details.", "We apply TAMS to expand the FFN block, so while the shared (in structure) sublayers capture the commonalities among tasks, the new searched sublayers can capture task-specific structure.", "Our search DAG contains two input nodes that project the inputs to a low-dimensional space, one output node that projects the intermediate representation back to the original dimension, and three intermediate nodes.", "Candidate operations for each edge are {linear, conv-3$\\times $ 1, conv-5$\\times $ 1, gated linear unit (GLU), zeroize, and skip connection}; see code for definitions.", "All the candidates operate on a reduced feature dimension to ensure the parameter efficiency of the search cell.", "Our controller $\\mathcal {A}$ is a two-layer MLP.", "The first fully-connected layer has 128 output neurons, and the second layer has $E \\times |\\mathcal {O}|$ neurons (see sec:task-adaptive for notation).", "We apply ReLU after the first layer and softmax the final output." ], [ "Few-shot dialogue personalization", "Persona-Chat [46] is a dialogue generation benchmark with 1137/99/100 personas for training/validation/testing.", "We follow recent work [25], [37] and regard learning a dialogue model for each persona as a few-shot meta-learning task.", "On average, each persona has 8.3 unique dialogues, 6-8 turns per dialogue, and 15 words per turn.", "Following these works, we use a standard Transformer model with pretrained GLoVe embeddings and separate the dialogues by their persona description into meta-training/-validation/-testing using [25]'s splits and codehttps://github.com/HLTCHKUST/PAML." 
], [ "Baselines.", "The following are from previous works.", "Pretrain denotes a multitask dialogue model trained on labeled data from all meta-training tasks.", "MAML meta-trains the Transformer model from scratch [25], and CMAML [37] additionally applies a pruning algorithm to customize the model structures for different tasks.", "+Finetune corresponds to finetuning on each testing task.", "Finally, Pretrain+Persona is a partial oracle for reference only, where the persona description is available.", "Table: Comparison of test perplexity (PPL; lower is better) and BLEU (higher is better) for few-shot dialogue generation on Persona-Chat dataset.", "* ^*: published results from , ; the rest are ours.Table: ROUGE F1s from multi-domain adaptation for abstractive summarization on AdaptSum (higher is better).", "All methods are initialized with pretrained BART and finetuned on the labeled task training set of each domain at the end.", "* ^*: published results from , using DAPT and TAPT methods from ; the rest are ours." ], [ "Results (Table ", "We include the same evaluation metrics from previous works (perplexity, BLEU score).", "Training MAML from scratch yields worse results than the Pretrain model.", "However, when MAML is initialized from the multitask model (Pretrain+MAML+Finetune), the result already outperforms previous work.", "Note that the same labeled data is used for both Pretrain and MAML, suggesting that meta-learning benefits from the more robust initialization that pretraining provides to improve task-specific few-shot adaptation (also see analysis in sec:pre-vs-meta).", "Moreover, we see further improvements by “meta-learning the difference” (MLtD).", "By using TARP for MAML's inner loop adaptation (MLtD, TARP only), we attain equivalent or better results and faster training time while only updating a small amount of task-specific parameters (sec:efficiency-exps).", "This indicates that our method helps mitigate overfitting to low-resource tasks.", "Finally, by incorporating TAMS (MLtD), we use the full framework and achieve the best performance, suggesting the task-adapted model structure gives better architectures for personas.", "In this regard, CMAML lags behind MLtD as well.", "We conjecture this is because it uses a pruning algorithm to “customize” the model with different weight masks, which may not generate enough model diversity for diverse tasks as the architectural inductive bias remains the same.", "Table: Test perplexities from multi-domain language modeling adaptation on AdaptSum (lower is better).", "All methods are initialized with pretrained GPT-2 medium and finetuned on the labeled domain set at the end.", "† ^\\dagger : our re-implementation of ." ], [ "Low-resource abstractive summarization", "AdaptSum [43] is a new multi-domain dataset used to evaluate domain adaptation schemes for abstractive summarization.", "It consists of six diverse target domains ranging from movie reviews to scientific abstracts.", "Each domain has a low-resource task corpus and a larger unlabeled text corpus as well (list and statistics in tab:adaptsumstats) that is used to evaluate domain- and task-adaptive pretraining (DAPT/TAPT; [11]).", "We use pretrained BART [19] and finetune to each low-resource task corpus as in [43], whose codehttps://github.com/TysonYu/AdaptSum we extend.", "Table: Data sizes for AdaptSum across the six domains, for both the text-only domain-related corpus and the low-resource task corpus." 
], [ "Baselines.", "DAPT continues pretraining with BART's self-supervised objective using the unlabeled domain corpus.", "TAPT continues pretraining with the set of unlabeled documents found in the target summarization task.", "SDPT uses the XSum dataset in the News domain to further pretrain BART with a supervised training objective using document-summary pairs before finetuning." ], [ "Results (tab:adaptsumabs).", "We find that MLtD, even without architecture search (TARP only), outperforms DAPT, TAPT, and SDPT.", "These methods use in-domain/-task knowledge and the standard pretraining objective to help adaptation to the target task, while our method considers cross-domain knowledge via the meta-learning objective, sampling meta-training tasks from multiple domain corpora to train the model.", "Moreover, the use of meta-learning as preparation outperforms multitask pretraining (TARP only, multitask pretraining instead), signifying that mere exposure to the cross-domain data may not be enough and using a meta-learning objective to explicitly optimize for the lightweight adaptation is beneficial.", "Finally, we see that without meta-learning or multitasking (TARP only, no meta-learning) our performance is also better than the baseline.", "This demonstrates the effectiveness of the lightweight TARP adaptation, which matches the performance of full finetuning while only updating less than 5% of parameters." ], [ "Multi-domain language modeling", "Though the text corpora in AdaptSum were originally included to evaluate DAPT, we also use them to evaluate our methods on multi-domain language modeling.", "As this is a novel benchmark, to demonstrate fast adaptation we take $T_{\\text{in}} = 1$ ." ], [ "Baselines.", "We start with pretrained GPT-2 medium (345M) [31] with input sequence length 512 using the Transformers library [42].", "Finetuning is performed on the training documents of the task corpus, and we evaluate perplexity on the test documents of the task corpus.", "The only exception to finetuning is Zero-shot which evaluates the pretrained GPT-2 model directly.", "DAPT continues pretraining of GPT-2 with the language modeling objective on the unlabeled domain corpus before finetuning." ], [ "Results (tab:adaptsumlm).", "Our findings in summarization also hold for the unconditional causal language modeling task.", "Namely, we see equal or better performance of TARP vs. full finetuning and that meta-learning plays a significant role in the task adaptation quality.", "In contrast to summarization with BART (sec:adapt-sum) but similar to Persona-Chat with Transformer (sec:persona-chat), we see that TAMS leads to noticeable improvements.", "We explain why this may be the case and present the TAMS-learnt sublayer modules in sec:tams-analysis." 
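For completeness, the perplexities compared in this experiment (tab:adaptsumlm) are ordinary token-level perplexities of the causal LM over the held-out documents of each task corpus. A minimal scoring sketch is given below; the `model(input_ids)` interface returning logits and the fixed-window chunking are simplifying assumptions, not the exact evaluation code.

```python
import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def document_perplexity(model, token_ids, window=512):
    """Token-level perplexity of a causal LM over one tokenized document.
    `token_ids` has shape (1, T); `model(ids)` is assumed to return logits of
    shape (1, len(ids), vocab). Long documents are scored in fixed windows."""
    total_nll, total_tokens = 0.0, 0
    for start in range(0, token_ids.size(1) - 1, window):
        chunk = token_ids[:, start:start + window + 1]    # inputs plus next-token targets
        logits = model(chunk[:, :-1])
        nll = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                              chunk[:, 1:].reshape(-1), reduction="sum")
        total_nll += nll.item()
        total_tokens += chunk.size(1) - 1
    return math.exp(total_nll / total_tokens)
```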
], [ "Pretraining improves meta-learning", "We analyze the performance of MLtD on Persona-Chat at meta-testing time (i.e., finetuning then testing on unseen personas) with respect to the number of inner loop steps and training dialogues.", "In fig:ablationmetalearning (left), we see that original MAML (no pretraining) overfits, while finetuning the multitask-pretrained model keeps improving.", "Moreover, MLtD atop the multitask-pretrained model followed by finetuning continues to improve test perplexity.", "In fig:ablationmetalearning (right), we fix the finetuning steps and vary the number of training dialogues used in finetuning.", "Using more dialogues improves perplexity for all three methods, with MLtD still leading over full MAML and direct finetuning after pretraining.", "The takeaway from these results is that applying MAML on a pretrained model prevents overfitting and promotes better generalizability from meta-learning.", "Figure: Perplexities on Persona-Chat testing tasks with MLtD (TARP only) versus Pretrain+Finetune and MAML+Finetune.", "Left: Influence of number of adaptation iterations.", "Right: Influence of the number of adaptation dialogues." ], [ "Dynamic TARP versus alternatives", "We benchmark our dynamic low-rank reparameterization on a variety of NLP models and tasks.", "To show that dynamic TARP individually improves on full finetuning and other parameter-efficient adaptation methods, we report single-task results here.", "For generative tasks, we use pretrained GPT-2 medium on natural language generation datasets: we specifically evaluate on E2E [27], which was used by adapter method [21]; WebNLG [9]; and DART [26], which was used by LoRA [16].", "For classification, we use pretrained RoBERTa [24] on low-resource GLUE [40] tasks, which were evaluated on by many recent parameter efficient adaptation methods [14], [47], [44], [13].", "Further dataset and experimental setup details are in sec:adaptation-data.", "In particular, we chose rank $r$ to give similar parameter counts to other approaches; $r=4$ in tab:sotaadaptation, $r=8$ in tab:sotaadaptationglue.", "For generative tasks, we compare with finetuning all layers; FT-Top2, which only finetunes the last two layers of the model; BitFit [44], which only finetunes the biases; and Adapter tuning [14], which only finetunes the adapter layers inserted after each feed-forward and self-attention sublayer.", "As shown in tab:sotaadaptation, TARP methods match or outperform other parameter-efficient methods, while learning task-specific parameters that are $<$ 3% of the number of base parameters and keep the base model unchanged.", "Among the three TARP variants, we find that Dynamic $>$ Bilinear $>$ Kronecker in terms of performance across generative metrics.", "This suggests that the optimal adaptation to the underlying model weights may vary per token, which dynamic low-rank accounts for.", "Moreover, dynamic TARP performs better than an alternative where the $O(n^2)$ Hadamard product in Eq.", "(REF ) is replaced by $O(n^3)$ matrix multiplication (w/ matrix mult.).", "For classification tasks, we compare with [13], which proposed a unified framework connecting several state-of-the-art adaptation methods [14], [16], [20] and devised an improved method (MAM Adapter).", "Our dynamic TARP can only partly be viewed in this unified framework as we explore a novel design dimension, i.e., making the modification to the base model dynamic w.r.t.", "input tokens.", "Moreover, in contrast to additive-only modifications in [13], 
our dynamic TARP applies both multiplicative and additive modifications.", "For fair comparisons, we follow past works [24], [47], [13] and set the maximum finetuning epochs to 10 on each task.", "In tab:sotaadaptationglue, dynamic TARP introduces and trains only 1% versus the number of original parameters, while achieving comparable results to full finetuning (+0.3 abs.)", "and outperforming the previous best, MAM adapters (+1.0 abs.", ")." ], [ "Dynamic TARP outperforms finetuning", "Tables REF and REF also show that dynamic low-rank reparameterization outperforms finetuning on corresponding evaluating metrics, while being faster as it only adapts a small set of weights.", "The training time further improves through utilizing the training data more efficiently.", "In fig:lrablations (left) we compare perplexities of our method against finetuning on subsets of WikiText-2 and see that finetuning increasingly underperforms as the number of examples decrease.", "To explain this behavior, in fig:lrablations (right) we fix the number of training examples to 100 and ablate the rank.", "Our method performs best with a very small rank value, suggesting that the difference between the pretrained and finetuned weight matrices lies in a lower-dimensional subspace.", "This complements [2]'s observation that direct adaptation in lower-dimensional spaces can be equally as effective as in the original space.", "Moreover, we find that the larger the model (GPT-2 medium vs. the GPT-2 small), the lower the rank value required for the best adaptation performance.", "Figure: Testing perplexities on WikiText-2 with full finetuning and/or low-rank adaptation with dynamic TARP.Left: Low-rank adaptation is extremely helpful on low-resource tasks.Right: Holding the number of training examples fixed, the model adaptation space is optimized by fewer dimensions." ], [ "TAMS discovers better architectures", "Recent studies have shown that simple modifications to the transformer architecture, such as re-organizing the MHSA and FFN modules [48] or adding 1D convolutions to self-attention [36], improve the task performance.", "Similarly, from our results in tab:adaptsumlm, adapting the model structure through sub-layer modifications in our meta-learning framework further reduces the testing perplexity compared to MLtD with fixed model structure.", "Applying task-aware architecture search (TAMS) on the FFN module incurs less than 5% additional model parameters compared to the original GPT-2 model, but reduces the perplexity by 3 points on average.", "A limitation we observe is that the TAMS method tends to produce a dominant architecture (cf.", "fig:searchedFFN) as opposed to one different architecture for each task.", "We conjecture this may be because our initial task representation strategy has low variance due to averaging across the entire task training data.", "This may explain why TAMS did not uniformly improve MLtD in all settings.", "Nevertheless, the perplexity reduction implies that there is still room to optimize the architecture of current LMs without significantly increasing total model size.", "Thus, we believe task-aware architecture search is a promising direction to continue to invest in the future.", "Figure: Dominant structure of the TAMS-learned sublayers for AdaptSum language modeling." 
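To make the search mechanism behind these learned sublayers concrete, the following is a minimal sketch of the task-aware controller and its straight-through Gumbel-Softmax sampling. The dimensions and the toy usage are illustrative assumptions; in the full method the sampled one-hot choices select operations whose weights come from the shared weight bank, and gradients reach the controller through the meta-loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TAMSController(nn.Module):
    """Task embedding (mean of features over the task's training set) -> two-layer MLP
    -> one categorical distribution over candidate operations for each DAG edge."""

    def __init__(self, embed_dim, n_edges, n_ops, hidden=128, tau=1.0):
        super().__init__()
        self.n_edges, self.n_ops, self.tau = n_edges, n_ops, tau
        self.mlp = nn.Sequential(nn.Linear(embed_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_edges * n_ops))

    def forward(self, task_features, hard_argmax=False):
        z = task_features.mean(dim=0)                        # z_i: average over D_i^train
        alpha = self.mlp(z).view(self.n_edges, self.n_ops)   # architecture logits
        if hard_argmax:                                      # meta-testing: discrete argmax
            return F.one_hot(alpha.argmax(dim=-1), self.n_ops).float()
        # meta-training: differentiable hard samples via straight-through Gumbel-Softmax
        return F.gumbel_softmax(alpha, tau=self.tau, hard=True, dim=-1)

# usage sketch: 6 candidate operations on each of 8 edges, 512-dim task features
ctrl = TAMSController(embed_dim=512, n_edges=8, n_ops=6)
feats = torch.randn(100, 512)                                # stand-in for Embed(x), x in D_i^train
choices = ctrl(feats)                                        # (8, 6) one-hot rows, differentiable
print(choices.argmax(dim=-1))
```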
], [ "Training efficiency of MLtD", "We study training efficiency by comparing the training and finetuning wall-clock time for multi-domain abstractive summarization on AdaptSum.", "The results are shown in Table REF .", "Table: Wall-clock time comparison on AdaptSum during preparation on the meta-training data (Prep.)", "and during meta-testing (Finetuning) to convergence (early stopping), summed over all domains.", "Times were measured on one GPU.", "We have the following observations: (1) since meta-learning explicitly optimizes the model for fast adaptation, MLtD takes fewer epochs to reach convergence than previous methods (e.g., fig:convergence) and takes the least time to adapt the model to each task; (2) since our lightweight adaptation method (TARP) only updates a small set of task-specific weights, our model variant (TARP only, no meta-learning) still reduces adaptation time by 25% over direct BART finetuning.", "Figure: Convergence analysis for finetuning BART models obtained by different methods on AdaptSum, using the Debate domain as an example.", "On the other hand, the proposed TARP and TAMS components introduce some inference overhead.", "Due to limitations of current DL libraries in implementing parallel computation branches, the dynamic low-rank decomposition and the task-aware architecture generation increase the inference time by 10% and 6%, respectively, measured with a batch size of 4 and a sequence length of 1024 on one GPU." ], [ "Conclusion", "We have shown that explicit meta-learning is a useful preparation step on top of PLMs to improve later finetuning.", "Specifically, our MLtD framework incorporating dynamic task-adaptive reparameterization (TARP) and task-adaptive model search (TAMS) enables data- and parameter-efficient adaptation to a family of low-resource tasks.", "Future avenues include applying our method in other modalities like vision and speech, as well as exploring better model formulations for TARP and TAMS." ], [ "Acknowledgements", "We thank our colleagues on the Speech Science team at Amazon AWS AI for supporting this research." ], [ "Hyperparameters", "Most of the experimental setups, e.g., model type, maximum sequence length, optimizer, batch size, and beam search size, are taken from previous methods for fair comparison.", "We tuned the inner-loop and outer-loop learning rates in meta-training on the meta-validation set, and adjusted the learning rate schedule accordingly.", "We chose the rank values $r$ in our dynamic low-rank reparameterization to give similar parameter counts to other parameter-efficient methods.", "We adapted the search space in our task-aware model structure from DARTS.", "$\eta _\text{in}$ denotes the inner-loop and finetuning learning rate, $\eta _\text{out}$ the outer-loop learning rate, $B_\text{in}$ the inner-loop and finetuning batch size, $B_\text{out}$ the meta-batch size, $\text{bsz}$ the decoding beam size, and $T_\text{in}$ the number of inner-loop steps." ], [ "Few-shot dialogue personalization.", "We take $r = 4$ , $B_\text{out} = 16$ (as in previous works), $\text{bsz} = 5$ , $T_\text{in} = 10$ .", "For meta-training we use SGD ($\eta _\text{in} = 0.01$ ) in the inner loop and Adam ($\eta _\text{out} = 0.0003$ ) for the outer loop."
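To show where these inner- and outer-loop hyperparameters enter the meta-iteration, the following self-contained toy sketch reproduces the loop structure of Algorithm REF with a first-order approximation (gradients are not propagated through the inner updates) and toy linear-regression tasks standing in for dialogue tasks; the full method additionally meta-learns the backbone, the architecture controller, and the shared sublayer weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class ToyTARPModel(nn.Module):
    """A frozen 'pretrained' linear map plus a trainable low-rank additive delta."""
    def __init__(self, d=16, r=2):
        super().__init__()
        self.w0 = nn.Parameter(torch.randn(d, d) / d**0.5, requires_grad=False)
        self.u = nn.Parameter(torch.zeros(d, r))             # the only adapted parameters
        self.v = nn.Parameter(0.01 * torch.randn(d, r))

    def forward(self, x, u=None, v=None):
        u = self.u if u is None else u
        v = self.v if v is None else v
        return x @ (self.w0 + u @ v.T).T

def sample_task(d=16, n=32):
    """Each 'task' is a random linear regression problem with a train/test split."""
    w_task = torch.randn(d, d) / d**0.5
    x = torch.randn(2 * n, d)
    return (x[:n], x[:n] @ w_task.T), (x[n:], x[n:] @ w_task.T)

model = ToyTARPModel()
outer_opt = torch.optim.Adam([model.u, model.v], lr=3e-4)     # outer loop: Adam
inner_lr, inner_steps, meta_batch = 1e-2, 5, 4                # inner loop: SGD

for it in range(200):
    outer_opt.zero_grad()
    for _ in range(meta_batch):
        (xtr, ytr), (xte, yte) = sample_task()
        fast = [model.u.detach().clone().requires_grad_(True),
                model.v.detach().clone().requires_grad_(True)]
        for _ in range(inner_steps):                          # adapt TARP parameters only
            grads = torch.autograd.grad(F.mse_loss(model(xtr, *fast), ytr), fast)
            fast = [(p - inner_lr * g).detach().requires_grad_(True)
                    for p, g in zip(fast, grads)]
        test_loss = F.mse_loss(model(xte, *fast), yte)        # evaluate on D_i^test
        test_loss.backward()                                  # first-order approximation
        for p, f in zip([model.u, model.v], fast):
            p.grad = f.grad.clone() if p.grad is None else p.grad + f.grad
    outer_opt.step()
    if (it + 1) % 50 == 0:
        print(f"meta-iteration {it + 1:3d}: adapted test loss {test_loss.item():.3f}")
```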
], [ "Low-resource abstractive summarization.", "We take $r = 16$ , $B_\text{in} = 40$ (via gradient accumulation), $T_{\text{in}} = 20$ , $\text{bsz} = 4$ .", "We truncated the input documents to 1024 tokens due to the maximum input length of the BART model.", "We used Adam with momentum parameters $(\beta _1=0.9, \beta _2=0.998)$ and the Noam schedule (linear warmup of 1000 steps, then inverse square-root decay).", "Since the low-resource training set of the science domain has only 100 samples, we used 3 times more training epochs for it than for the other domains." ], [ "Multi-domain language modeling.", "We take $r=32$ , $B_\text{in} = 4$ , and $T_{\text{in}} = 1$ .", "We used Adam with $\eta _\text{in} = 5 \times 10^{-4}$ , $\eta _\text{out} = 5 \times 10^{-5}$ .", "In meta-testing we linearly decay $\eta _\text{in}$ ." ], [ "Training costs", "Table REF provides information about the amount of training that MLtD required for the main experiments.", "The table reports the training cost of a single run.", "We tuned the hyperparameters, namely the inner-loop and outer-loop learning rates, over around five runs each.", "Table: Details on the training cost of MLtD.", "Cost ($) estimated from on-demand instance prices." ], [ "Datasets.", "E2E [27] is commonly used for data-to-text evaluation of NLG systems.", "It consists of approximately 50K examples in total from the restaurant domain.", "Each input consists of a sequence of slot-value pairs and can have multiple references.", "The average output length is 22.9.", "We use the official evaluation script, which reports BLEU, NIST, METEOR, ROUGE-L, and CIDEr.", "WebNLG [9] is a multi-domain dataset for data-to-text evaluation.", "It contains 22K examples in total from 14 distinct domains, and the average output length is 22.5.", "Nine domains are used for training, and the remaining five domains are used for testing.", "Each input is represented by a sequence of SUBJECT | PROPERTY | OBJECT triples.", "The evaluation metric is BLEU.", "DART [26] is an open-domain data-to-text dataset.", "The inputs are structured as sequences of ENTITY | RELATION | ENTITY triples.", "It contains 82K examples in total and the average output length is 21.6.", "The evaluation metric is BLEU.", "GLUE.", "We report Matthews correlation for CoLA, Pearson correlation for STSB, and accuracy for the other tasks in Table REF .", "Following [47] and [13], we report performance on the dev sets."
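The Noam schedule referred to in the summarization settings above (linear warmup followed by inverse square-root decay) can be written as a single learning rate multiplier; one common formulation, normalized so that the multiplier peaks at 1 after the warmup, is sketched below.

```python
def noam_lambda(warmup_steps=1000):
    """Multiplier for torch.optim.lr_scheduler.LambdaLR: linear warmup for
    `warmup_steps`, then inverse square-root decay (peak multiplier is 1.0)."""
    def multiplier(step):
        step = max(step, 1)
        return min(step ** -0.5, step * warmup_steps ** -1.5) * warmup_steps ** 0.5
    return multiplier

# usage: scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=noam_lambda(1000))
```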
], [ "Setup.", "For the natural language generation tasks, we build upon [16]'s code (https://github.com/microsoft/LoRA/tree/snapshot-9-15-2021; the repository has been greatly refactored since our experiments).", "We used GPT-2 medium as the underlying LM.", "In training, we used the AdamW optimizer with weight decay 0.01.", "The batch size is set to 8, and we trained for 5 epochs in total.", "We used a linear-decay learning rate scheduler with the first 500 iterations used for warmup.", "The initial learning rate is set to 0.0002.", "In decoding, we used beam search with beam size 10.", "For GLUE tasks, we built upon [13]'s code (https://github.com/jxhe/unify-parameter-efficient-tuning).", "Our experiments were performed on the RoBERTa$_{\text{base}}$ model.", "We limited the maximum length of a sentence (pair) to 512 tokens after wordpiece tokenization.", "We used the Adam optimizer with batch size 32, and trained for 10 epochs on each task.", "The learning rate is tuned per task over $\lbrace 1,2,3,4,5\rbrace \times 10^{-4}$ , with a linear warmup for the first 6% of steps followed by a linear decay to zero." ] ]
2207.03509
[ [ "Fast Flavor Transformations" ], [ "Abstract The neutrino fast flavor instability (FFI) can change neutrino flavor on time scales of nanoseconds and length scales of centimeters.", "It is expected to be ubiquitous in core-collapse supernovae and neutron star mergers, potentially modifying the neutrino signal we see, how matter is ejected from these explosions, and the types of heavy elements that form in the ejecta and enrich the universe.", "There has been a great deal of recent interest in understanding the role the FFI plays in supernovae and mergers, but the short length and time scales and the strong nonlinearity have prevented the FFI from being included consistently in these models.", "We review the theoretical nature of the FFI starting with the quantum kinetic equations, where the instability exists in neutron star mergers and supernovae, and how the instability behaves after saturation in simplified simulations.", "We review the proposed methods to test for instability in moment-based calculations where the full distribution is not available and describe the numerical methods used to simulate the instability directly.", "Finally, we close by outlining the trajectory toward realistic, self-consistent models that will allow a more complete understanding of the impact of the FFI in supernovae and mergers." ], [ "Introduction", "Core-collapse supernovae (CCSNe) and neutron star mergers (NSMs) are the only sites in the universe after the big bang where neutrinos are generated at sufficiently high densities that they are not only temporarily trapped by dense matter, but they interact and scatter with themselves in a way that drives a rich variety of strongly nonlinear effects.", "Prior to the start of the millenium, it was widely believed that the major channel of flavor conversions in a CCSN was due to resonant flavor conversions caused by the Mikheyev-Smirnov-Wolfenstein (MSW) effect [1], [2], [3].", "However, this picture neglects the effect of the neutrino self-interactions, which was shown to be absolutely crucial for neutrino flavor evolution in a dense media [4], [5], [6], [7], [8].", "These self-interactions manifest as a forward-scattering potential $\\propto \\sqrt{2}G_F n_\\nu $ for a background neutrino density $n_\\nu $ [9].", "Neutral current interactions between neutrinos of different flavors, being flavor-blind, cause this forward-scattering potential to have off-diagonal components as well [6].", "As a result, a dense ensemble of neutrinos undergo collective flavor transformation, exhibiting a rich phenomenology.", "Until the middle of the last decade, it was widely believed that these collective oscillations would lead to bipolar flavor conversions growing with a rate, roughly proportional to $\\sqrt{(\\Delta m^2) n_\\nu }$ , where $\\Delta m^2$ is the neutrino mass splitting.", "Simple toy model analyses predicted these flavor conversions to take place close to the stalled shockwave at a radius of $\\mathcal {O}(200)\\,$ km, thereby hinting at their possible role in aiding a shockwave-driven explosion.", "While most of the initial numerical studies were performed using a single-angle approximation, where neutrinos of all flavors were emitted with the same angle from a certain neutrinosphere, later studies showed that relaxing these approximations can lead to flavor synchronization, bipolar oscillations, spectral splits, and other instabilities, though at radii to large to affect the CCSN explosion mechanism [10], [11], [12], [13], [14], [15].", "We are still far 
from having a complete analytic picture of collective oscillations, but one can get an understanding of the onset of these instabilities using a linearized stability analysis [16], [17], [18].", "More recently, the different directional structures of neutrino distributions of different flavors have been recognized to cause an entirely different \"fast\" flavor instability (FFI), the subject of this review, that can occur in deep regions of CCSNe and NSMs not accessible to other flavor transformation mechanisms.", "Small perturbations can grow with a rate proportional to $n_\nu $ [19], and this growth can occur even for massless neutrinos, thereby being independent of the neutrino masses and mass hierarchy.", "These results were further confirmed in [12], although they were derived using discretized neutrino angular distributions, which can give rise to spurious instabilities [20].", "The FFI was further substantiated by [21] by employing a larger number of angular modes, in [22] using simple toy models, and in [23] using realistic SN neutrino spectra.", "Since then, the FFI has inspired a great deal of work in order to understand the instability from an analytic point of view, simulate its nonlinear nature, discover where it is realized in nature, and determine its effects in astrophysical explosions.", "This is one of several recent reviews in the area of collective neutrino flavor transformations.", "[24] reviews slow collective neutrino oscillation simulations and implications for supernovae.", "[25] reviews neutrino oscillations applied to neutrino detectors, while [26], [27] additionally review how detections of neutrinos from a supernova might inform astrophysics and fundamental physics.", "[28] review a few phenomena associated with the FFI and [29] review much of the recent work on the FFI and anticipated effects in CCSNe and NSMs.", "In this short review, we attempt a comprehensive overview of only recent developments related to the FFI, both on the numerical and analytical fronts, paying special attention to the methods used to find where the FFI occurs in nature and to probe the nature of the FFI once it takes hold.", "We do not discuss many-body effects [30], [31], [32], [33], [34], [35], [36], [37], [38], the exciting possibility of simulating neutrinos with quantum computers [39], [40], [41], [42], non-standard interactions [43], [44], [45], [46], helicity coherence and sterile neutrinos [47], [48], [49], [50], [51], or wave packet separation [52], [53], [54].", "We hope this document helps to collect and structure work from the rapidly evolving field of neutrino fast flavor transformations, which bears relevance in so many areas of fundamental physics and astrophysics."
], [ "Quantum Kinetic Equations", "A mean-field treatment of neutrino flavor conversions may be modeled using the occupation number matrix formalism, which can account for mixed states and possible loss of coherence due to collisions [55], [56], [57], [58], [59], [60], [61], [62].", "The distribution of neutrinos is described by the Hermitian matrix-valued occupation number matrix $f^{ab}(\\mathbf {x},\\mathbf {p})$ , where $a$ and $b$ are flavor indices and the spacetime position $\\mathbf {x}$ and momentum $\\mathbf {p}$ are four-vectors.", "Throughout this work, we assume that neutrinos are hyper-relativistic, so their momenta are null ($p^\\alpha p_\\alpha =-m^2\\approx 0$ ).", "With this restriction, the distribution is a seven-dimensional quantity.", "Throughout this work we take care to indicate the structure of each quantity (matrix/tensor indices and spatial/momentum dependence) in the definition of each quantity, but suppress the additional markup elsewhere.", "The diagonal entries of this matrix are the occupation numbers for the corresponding neutrino species, while the off-diagonal elements encode quantum coherence between flavor states.", "Neglecting wave packet separation (valid on the short length/time scales of flavor instabilities; [53]), the dynamics of the occupation number matrices are dictated by the quantum kinetic equation $p^\\alpha \\frac{\\partial f}{\\partial x^\\alpha } + \\frac{d p^\\alpha }{d\\lambda }\\frac{\\partial f}{\\partial p^\\alpha }= \\epsilon \\left({\\mathcal {C}}- i \\left[\\mathcal {H}, f\\right]\\right)\\,\\ .$ Here, $d\\lambda $ is a differential unit of time in the frame comoving with the fluid and $p^\\alpha dx^\\alpha /d\\lambda $ .", "Factoring out the comoving-frame neutrino energy $\\epsilon p^\\alpha u_\\alpha $ allows the part of the right hand side within the parentheses to be evaluated completely in a frame comoving with the background fluid defined by a four velocity $u^\\alpha $ .", "Throughout this work, we assume a $(+,-,-,-)$ metric convention, $a$ and $b$ are flavor indices, and $\\alpha $ , $\\beta $ , and $\\gamma $ are spacetime indices.", "The Hamiltonian matrix $\\mathcal {H}$ is composed of three terms, as $\\mathcal {H}^{ab}(\\mathbf {x}, \\mathbf {p}) \\mathcal {H}_\\mathrm {vac}(\\epsilon ) + \\mathcal {H}_\\mathrm {matter}(\\mathbf {x}) + \\mathcal {H}_{\\nu \\nu }(\\mathbf {x},\\mathbf {v}) \\,\\,.$ $\\mathcal {H}_\\mathrm {vac}$ represents the contribution of the neutrino mass to its energy, which can be written in the flavor basis assuming hyper-relativistic neutrinos as $\\mathcal {H}_{{\\rm vac}}^{ab}(\\epsilon ) U \\frac{M^2}{2\\epsilon }U^\\dagger \\,$ where $U^{ab}$ is the PMNS mixing matrix describing the rotation between the mass to flavor basis (see [63] for values).", "$(M^2)^{ab}{\\rm diag}\\,(m_1^2,\\,m_2^2,\\,m_3^2)$ is the squared neutrino mass matrix.", "Neutrinos can interact with background matter (leptons, nucleons, other neutrinos) in a way that does not change the neutrino momentum (i.e., forward scattering).", "The leading order interactions with background leptons and nucleons result in $\\mathcal {H}^{ab}_\\mathrm {matter}(\\mathbf {x})\\sqrt{2}G_F \\Lambda ^{ab}(\\mathbf {x}) \\,\\,,$ where $G_F$ is the Fermi coupling constant, $\\Lambda ^{ab}(\\mathbf {x})=\\delta ^{ab}(n_a-n_{\\bar{a}})$ , and $n_a(\\mathbf {x})$ are lepton number densities in the fluid rest frame.", "Note that in astrophysical environments where there are no muon or tauon neutrinos, an additional radiative correction can 
provide the leading order contribution distinguishing the $\nu _\mu $ and $\nu _\tau $ flavor states [3].", "The neutral current contributions from protons and electrons cancel.", "The contribution from neutrons is proportional to the identity matrix and as a result does not contribute to dynamics unless considering helicity or pair coherence.", "Because of this, these contributions are usually neglected.", "The Hamiltonian contribution due to forward scattering off of other neutrinos (somewhat confusingly dubbed self-interaction) is given by [9] $\mathcal {H}_{\nu \nu }^{ab}(\mathbf {x},\mathbf {v}) = \sqrt{2} G_F v^\alpha I_{1,\alpha }^{ab}(\mathbf {x}) \,\,,$ where the angular moments of the neutrino lepton number density distribution are given by $\begin{aligned}I_0^{ab}(\mathbf {x}) &= \int \frac{d^2 \mathbf {v}}{4\pi } G(\mathbf {x},\mathbf {v}) \\I_{1,\alpha }^{ab}(\mathbf {x}) &= \int \frac{d^2\mathbf {v}}{4\pi } G(\mathbf {x},\mathbf {v}) v_\alpha \\I_{2,\alpha \beta }^{ab}(\mathbf {x}) &= \int \frac{d^2 \mathbf {v}}{4\pi } G(\mathbf {x},\mathbf {v}) v_\alpha v_\beta \\\end{aligned}$ and so on.", "The null direction vector of the neutrino in an orthonormal tetrad comoving with the fluid is defined as $v^\beta = p^\alpha \hat{x}^{(\beta )}_\alpha /\epsilon $ , where $\hat{x}^{(\beta )}_\alpha $ are the basis vectors defining the orthonormal tetrad.", "The differential neutrino lepton number distribution for each direction $\mathbf {v}$ is $G^{ab}(\mathbf {x},\mathbf {v})=\int \frac{\epsilon ^2 d\epsilon }{2\pi ^2} \left[f(\mathbf {x},\epsilon ,\mathbf {v})-\bar{f}^*(\mathbf {x},\epsilon ,\mathbf {v})\right]\,\,.$ The evolution equations for antineutrinos are analogous; one must simply put bars over $f$ , $\mathcal {H}$ , and $\mathcal {C}$ in Equation REF .", "In this case, $\bar{\mathcal {H}}=\bar{\mathcal {H}}_\mathrm {vac}+\bar{\mathcal {H}}_\mathrm {matter}+\bar{\mathcal {H}}_{\nu \nu }$ , where $\bar{\mathcal {H}}_\mathrm {vac}=\mathcal {H}_\mathrm {vac}^*$ , $\bar{\mathcal {H}}_\mathrm {matter}=-\mathcal {H}_\mathrm {matter}^*$ , and $\bar{\mathcal {H}}_{\nu \nu }=-\mathcal {H}_{\nu \nu }^*$ .", "Some works express the antineutrino evolution equations for a different quantity $\bar{f}^{\prime } = -\bar{f}^*$ .", "This causes the integrand in Equations REF to be proportional to $(f+\bar{f}^{\prime })$ and allows the antineutrino Hamiltonians to be written as $\bar{\mathcal {H}}_\mathrm {vac}^{\prime }=-\mathcal {H}_\mathrm {vac}$ , $\bar{\mathcal {H}}_\mathrm {matter}^{\prime }=\mathcal {H}_\mathrm {matter}$ , and $\bar{\mathcal {H}}^{\prime }_{\nu \nu }=\mathcal {H}_{\nu \nu }$ .", "Detailed collisional interaction rates for a single neutrino species have been developed and extensively used in CCSN and NSM simulations (e.g., [64], [65]).", "The two simplest interaction types can be generalized to matrix-valued QKE collision terms by defining an opacity matrix $\langle \kappa \rangle ^{ab}=(\kappa _{\nu _a}+\kappa _{\nu _b})/2$ , where $\kappa _{\nu _a}(\mathbf {x},\epsilon )$ are single-species opacities (and similarly for emissivity $\eta (\mathbf {x},\epsilon )$ ; [60], [61], [62], [66]).", "Absorption and emission are modeled by $\mathcal {C}_\mathrm {abs/emit}^{ab}(\mathbf {x},\mathbf {p}) = \langle \eta \rangle ^{ab} (\delta ^{ab}-f^{ab}) - \langle \kappa \rangle ^{ab}_\mathrm {abs}f^{ab}\,\,.$ Elastic, isotropic scattering is given by
$\\mathcal {C}_\\mathrm {elastic\\,\\,scat}^{ab}(\\mathbf {x},\\mathbf {p}) = \\langle \\kappa \\rangle ^{ab}_\\mathrm {scat} \\int \\frac{d^2\\mathbf {v}^{\\prime }}{4\\pi } \\left[f^{ab}(\\mathbf {\\mathbf {x},\\epsilon ,v^{\\prime }}) - f^{ab}(\\mathbf {x},\\epsilon ,\\mathbf {v})\\right]\\,\\,.$ A similar description of a more complete set of interactions is given in [66].", "Many calculations assume that there are only two flavors of neutrinos for simplicity, to turn each $3\\times 3$ matrix into a $2\\times 2$ matrix.", "This is often motivated by the fact that the distributions of $\\nu _\\mu $ and $\\nu _\\tau $ are very similar in CCSNe and NSMs.", "This reduces the computational cost of a calculation, makes the results visualizable using a Bloch vector (i.e., a point on the surface of a two-sphere), and in many cases produces qualitatively similar results as a three-flavor calculation.", "However, quantitative predictions of the net content of each flavor are not generally reliable and in certain cases phenomena arise with three flavors that do not occur with two flavors [67], [68], [69], [70], [71], [29]." ], [ "The Fast Flavor Instability", "Solving Eq.", "REF in its entirety for a realistic astrophysical environments is computationally intractable.", "However, it is possible to analytically solve for the evolution of small perturbations by linearizing the equations assuming all flavor-diagonal components are homogeneous.", "A flavor off-diagonal element of the occupation number matrix for neutrinos moving in direction $\\mathbf {v}$ can be decomposed into plane wave solutions $f^{ab}(\\mathbf {x},\\mathbf {p}) = \\frac{f^{aa}(\\mathbf {p})-f^{bb}(\\mathbf {p})}{2} Q^{ab}(\\mathbf {v}) e^{-iK^\\alpha x_\\alpha }\\,\\,,$ with (real) amplitude $Q^{ab}(\\mathbf {v})$ and (complex) four wave number $\\mathbf {K}=(\\omega ,k)$ .", "If we assume that a neutrino distribution is initially very nearly flavor diagonal (i.e., $Q\\ll 1$ ), one can plug this into Equation REF , keeping only terms linear in $Q$ and ensuring that $f^{ab}=f^{ba*}$ .", "The resulting equation has solutions that satisfy [16], [72], [73], [74] $\\mathrm {det}\\left[\\eta ^{\\alpha \\beta } +\\int \\frac{d^2\\mathbf {v}}{4\\pi }\\,G^{(ab)}(\\mathbf {v}) \\frac{v^{\\alpha }v^{\\beta }}{K^{\\prime }_\\gamma v^\\gamma }\\right]=0 \\,\\, ,$ where $K_\\alpha ^{\\prime }K_\\alpha - \\sqrt{2} G_F (I_{1,\\alpha }^{(ab)}+\\Lambda ^{(ab)}\\hat{t}_\\alpha )$ and $\\hat{t}^\\alpha =(1,0,0,0)$ is the timelike basis vector in the tetrad.", "We define the difference between two diagonal elements of a flavor matrix as $A^{(ab)} A^{aa} - A^{bb}\\,\\,.$ Even though global simulations including neutrino quantum kinetics are not currently possible, one can probe the potential importance of the FFI in existing simulation by searching for crossings of the ELN.", "In data where the full distribution is available, one can do this explicitly.", "However, many calculations have limited data, usually in the form of moments defined in Equation REF .", "In what follows, we will omit the flavor superscripts $(ab)$ with the understanding that an instability criterion applies to any pair of two flavors $a$ and $b$ .", "When spacetime indices are omitted, the moment tensors are assumed to be contracted with a unit vector along the axis of symmetry.", "We label a method as \"exact\" if there is a one-to-one correspondence between the criterion and instability, \"conservative\" if it cannot indicate instability where it does not exist, and 
\"approximate\" if it is a best-guess procedure.", "Furthermore, we illustrate the diversity of models that have been searched for instability, along with the diversity of methods employed, in Table REF .", "Explicit (exact): One can integrate the distribution function to get the differential lepton number asymmetry.", "That is, there is a lepton number crossing and hence instability where $\\mathrm {max}_{\\mathbf {v}}(G^{(ab)})\\mathrm {min}_{\\mathbf {v}}(G^{(ab)})<0$ .", "$\\mathbf {k}_0$ (conservative): [75] show that there is always some wavenumber $k=k_0$ that simplifies the dispersion relation.", "In this case, the dispersion relation takes the form $\\mathrm {det} \\left(\\eta _{\\alpha \\beta } + I^{(ab)}_{2,\\alpha \\beta }/\\omega \\right)=0$ .", "A complex value of $\\omega $ satisfying the dispersion relation implies that the mode with wavenumber $k_0$ is unstable.", "In axisymmetric distributions, this instability criterion simplifies to $(I_0+I_2)^2-4(I_1)^2<0$ .", "$\\mathbf {\\alpha }$ (conservative): [76], [77] propose that in regions of near-equilibrium (i.e.", "in the PNS or HMNS) and in directions opposite the LESA, it more practical to just look for locations that satisfy $\\alpha =n_{\\nu _e}/n_{\\bar{\\nu }_e}=1$ (equivalent to $I_0=0$ ), since if there are equal numbers of all neutrinos, crossings are inevitable.", "Polynomial (conservative): [78] appeal to the presence of a crossing that is not restricted to $k=0$ .", "Although the concept works for arbitrary distributions, it is most simple to express assuming an axisymmetric distribution.", "One can construct any linear combination of moments $I_\\mathcal {F} = a_0 I_0 + a_1 I_1 + a_n I_n + ...$ such that the function $\\mathcal {F}(\\mu )=\\sum a_n \\mu ^n$ is strictly positive for $\\mu \\in [-1,1]$ .", "A crossing must exist and the distribution is thus unstable if $I_0I_\\mathcal {F}<0$ .", "Pendulum (conservative): [79] further indicate instability in axisymmetric distributions according to the resonant trajectory test (unstable if $I_2^2 < I_1^2$ ) and the unstable pendulum test (unstable if $I_2^2 \\le \\frac{4}{5}I_1(5I_3-3I_1)$ ).", "Distribution Fit (approximate): [80], [81] propose a combination of a polynomial fits (fitted to a 1D CCSN simulation using $S_n$ transport) and a ray-tracing calculation to estimate the values of the radially ingoing and outgoing distributions.", "If $(f^{ee}-\\bar{f}^{ee})_\\mathrm {in} (f^{ee}-\\bar{f}^{ee})_\\mathrm {out}\\le 0$ and the heavy lepton neutrino distributions are equal to each other, there is a ELN crossing and thus instability.", "Maximum Entropy (approximate):Assuming the neutrino and antineutrino distributions follow the maximum entropy angular distribution of of [82] (i.e., $f(\\mu )\\sim \\exp (\\mu Z)$ , where $\\mu $ is the cosine of the angle from the flux direction), one can obtain the parameter $Z$ for each distribution to give it the appropriate net flux.", "The presence of crossings can be determined analytically from these assumed distributions [83]." 
], [ "Direct Simulation", "While linear stability analysis indicates that small perturbations will grow exponentially, what happens when the perturbations are no longer small?", "There is some analytical work predicting the nonlinear evolution of the FFI in homogeneous/semi-isotropic cases [84] and crude estimates of post-instability flavor content [85], [86], but numerical simulations are still required to understand more general cases and to validate approximations made in analytical studies.", "Here we briefly review the numerical techniques that have been used to study the fast flavor instability.", "Differences fall largely into two categories: discretization scheme and assumed symmetries.", "[87] compare several different methods and show that each has advantages and weaknesses.", "We neglect a discussion of methods used to treat other phenomena, such as slow collective oscillations, the MNR, and the multi-azimuthal-angle instability.", "Discretization Scheme: Although the results of numerically converged simulations should not depend on the discretization scheme, it is important to understand the approximations and limitations involved.", "The majority of simulations employ the $\\mathbf {S_n}$ scheme, in which phase space is divided into discrete blocks, each of which contains some amount of radiation.", "One then uses discrete derivatives to evaluate the advection term to determine how neutrinos move from block to block (though there are still many ways to do this in detail; [88]).", "The approximation lies in the finite size of the blocks.", "[89] instead discretize space into Fourier modes, leaving momentum space discretized as in the $S_n$ scheme.", "This makes the spatial derivatives in the advection term easier to evaluate, but the advection term then couples every Fourier mode to every other Fourier mode, which can become computationally expensive for large systems.", "The approximation lies in the finite size of the blocks in momentum-space and the finite number of spatial Fourier modes.", "The Particle in cell method discretizes the radiation field into particles [90].", "This makes evaluation of the advection term more simple (the particles simply move in straight lines), but the Hamiltonian becomes more difficult to treat because one has to interpolate the full radiation field to the location of every particle.", "The approximation lies in the finite number of particles and the finite number of spatial grid cells that are used to evaluate the Hamiltonian.", "Finally, [91] expand momentum space into angular moments.", "This causes the evolution equation for each moment to depend on higher moments, such that one must approximate the system by cutting of the tower of equations at some moment order by applying a closure relation.", "Although it has yet seen use in the FFI literature, one can also discretize momentum space in terms of spherical harmonics (e.g., [92]), which is conceptually similar to a moment expansion.", "Symmetries: It is common to simplify calculations by assuming homogeneity in one or more directions.", "We use 0D, 1D, 2D, and 3D to describe simulations that allow for inhomogeneity in no, one, two, or three directions.", "Allowing for inhomogeneity permits the presence of modes with nonzero wavenumber in that direction.", "The FFI is fundamentally a multi-direction phenomena, so all FFI simulations allow for some degree of anisotropy.", "Angular distributions generally assume one of several common symmetries.", "Beam models assume that neutrinos move 
along a small number of discrete beams (0D: [93], [19], [12], [94], [95]).", "Minimally complex models that allow for both inhomogeneity and anisotropy in one Cartesian direction are planar geometry and the neutrino line model, where neutrinos are allowed only to travel with velocity components specified by a single angle coordinate (0D:[76], 1D:[96], [97], 2D:[98], [99]).", "In order to discuss neutrino distributions directly relevant to approximately spherical core-collapse supernovae, the majority of recent simulations assume axial symmetry around the radial direction (0D: [23], [94], [91], [100], [101], [84], [102], [103], [71], [104], [105], 1D: [106], [107], [85], [108], [109], [88], [89], [110], [111], [112], [86], [113], [114]).", "Finally, one can allow for general anisotropy (0D:[115], [116], 1D:[117], [90], 2D:[118], 3D:[119]).", "Other Considerations: First, the vast majority of simulations assume flavor transformation only between two flavors, and relatively few consider three flavors [70], [90], [116], [71], [89], [112].", "Second, even when simulations allow for inhomogeneity, they generally assume periodic boundary conditions.", "This allows modes with $k\ne 0$ to grow, but does not allow for gradients associated with dynamics on larger scales [110].", "Third, the interpretation of a simulation can be muddied by the form of the perturbations imposed in the initial conditions [106], [88].", "Fourth, a full suite of collisional interactions has only been simulated assuming both isotropy and homogeneity, such that the FFI cannot arise [66].", "[105] use physically motivated initial conditions and interaction rates in a homogeneous two-moment calculation to demonstrate interactions between collisional instabilities and the FFI.", "First steps are also being taken to include toy-model elastic neutral-current scattering effects in simulations of the FFI [109], [102], [103], [95].", "They show that the isotropizing effect of scattering can keep distributions in an unstable state for a longer period of time, thus enhancing net flavor transformation, but if the collisions are too strong, flavor transformation can be hindered.", "Finally, as to whether quantum many-body correlations (i.e., beyond the mean-field limit) are important for macroscopic numbers of neutrinos, [38] suggest that the emergence of significant many-body correlations in distributions unstable to the FFI depends on the particular distribution."
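To make the structure of such direct simulations concrete, the following sketch evolves a homogeneous (0D), axisymmetric, two-flavor system in the fast limit, keeping only the self-interaction Hamiltonian and dropping the vacuum, matter, and collision terms. The angular distributions, the seed amplitude, the time step, and the choice of units (the self-interaction scale $\sqrt{2}G_F n_\nu$ set to one) are illustrative assumptions rather than values from any published calculation; whether the homogeneous mode actually grows for a given distribution should be checked with the stability criteria discussed above.

```python
import numpy as np

# Minimal 0D (homogeneous), axisymmetric, two-flavor sketch of the fast limit:
#   d rho/dt    = -i [H(mu), rho],    d rhobar/dt = -i [Hbar(mu), rhobar],
#   H(mu) = sum_j (w_j/2) (1 - mu mu_j) [rho_j - conj(rhobar_j)],   Hbar = -conj(H),
# with the self-interaction scale sqrt(2) G_F n_nu set to 1 (illustrative units).
nmu = 32
mu, w = np.polynomial.legendre.leggauss(nmu)

f_e    = 0.50 * np.ones(nmu)           # toy nu_e occupation per angle bin (hypothetical)
fbar_e = 0.50 + 0.30 * mu              # toy anti-nu_e occupation, forward peaked -> ELN crossing

rho    = np.zeros((nmu, 2, 2), dtype=complex)
rhobar = np.zeros((nmu, 2, 2), dtype=complex)
rho[:, 0, 0], rhobar[:, 0, 0] = f_e, fbar_e
rho[:, 0, 1] = rho[:, 1, 0] = 1e-8      # tiny flavor off-diagonal seed
rhobar[:, 0, 1] = rhobar[:, 1, 0] = 1e-8

def rhs(rho, rhobar):
    G  = rho - np.conj(rhobar)                       # ELN-like matrix per angle bin
    I0 = 0.5 * np.einsum("i,iab->ab", w, G)          # scalar moment
    I1 = 0.5 * np.einsum("i,i,iab->ab", w, mu, G)    # flux moment along the symmetry axis
    H    = I0[None, :, :] - mu[:, None, None] * I1[None, :, :]
    Hbar = -np.conj(H)
    drho    = -1j * (H @ rho - rho @ H)
    drhobar = -1j * (Hbar @ rhobar - rhobar @ Hbar)
    return drho, drhobar

dt, nsteps = 0.05, 4000
for n in range(nsteps):                 # fourth-order Runge-Kutta time integration
    k1 = rhs(rho, rhobar)
    k2 = rhs(rho + 0.5*dt*k1[0], rhobar + 0.5*dt*k1[1])
    k3 = rhs(rho + 0.5*dt*k2[0], rhobar + 0.5*dt*k2[1])
    k4 = rhs(rho + dt*k3[0], rhobar + dt*k3[1])
    rho    += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    rhobar += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    if n % 400 == 0:
        print(f"t = {n*dt:7.1f}   max |rho_ex| = {np.abs(rho[:, 0, 1]).max():.3e}")
```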
], [ "Core-Collapse Supernovae and Neutron Star Mergers", "Given the microscopic nature of the FFI, the question remains: where does the FFI occur in nature and what does it do?", "Although there are many reactions that contribute to the neutrino distribution in CCSNe and NSMs [64], [65], the charged-current absorption/emission reaction causes the distributions of electron neutrinos and antineutrinos to be significantly different from other species: $p + e^- \\leftrightarrow n + \\nu _e\\,\\,.$ How this and other interactions cause the distributions (and instabilities) to manifest depends on a large number of factors, including turbulent relativistic hydrodynamics, the properties of matter above nuclear densities, the astrophysics creating the initial conditions of the event, and non-equilibrium neutrino radiation transport.", "Based on state of the art simulations of CCSNe and NSMs, the FFI seems to be robustly present, but the astrophysical implications are only beginning to be explored.", "PNS Convection: Crossings are found in the PNS convection region in all multidimensional models.", "Within the protoneutron star, the neutrinos are trapped and the above reaction is approximately in equilibrium.", "The electron neutrino chemical potential is then $\\mu _{\\nu _e}=\\mu _p+\\mu _e-\\mu _n$ .", "From a permutation of that reaction, $\\mu _{\\bar{\\nu }_e}=-\\mu _{\\nu _e}$ .", "In the region between $10\\,\\mathrm {km}\\lesssim r \\lesssim 20\\,\\mathrm {km}$ for the first few hundred milliseconds after core bounce, the electron fraction is relatively low, so the large $\\mu _n$ can approximately cancel $\\mu _e$ to make $\\mu _{\\nu _e}\\approx \\mu _{\\bar{\\nu }_e}\\approx 0$ .", "That is, there are a similar number of electron neutrinos and antineutrinos, so the small anisotropies caused by PNS convection and slightly different opacities between $\\nu _e$ and $\\bar{\\nu }_e$ can induce crossings [133].", "However, given that the chemical potential of all of the other flavors is also $\\mu _{\\nu _\\mu }\\approx \\mu _{\\nu _\\tau }\\approx \\mu _{\\bar{\\nu }_\\mu }\\approx \\mu _{\\bar{\\nu }_\\tau }\\approx 0$ .", "All neutrino flavors have approximately the same distributions anyway, and it is unclear if flavor mixing will have any significant effect on the dynamics or the neutrino signal [77].", "Under the Shock: Whether there are crossings in this region appears to depend on the details of the radiation transport method [125], [134].", "Exploding models seem to have more unstable regions [127], [128], [121], stellar rotation may help suppress instability [126], the presence of multidimensional effects encourages crossings [81] (though crossings can appear in 1D simulations; [121]), and the LESA seems to guarantee crossings in some regions [78], [127].", "Above the Shock: Crossings are found outside the shock in all models.", "Infalling stellar material above the shock is rich with large nuclei before they dissociate upon passing through the shock front.", "Neutrinos at energies present in supernovae interact coherently with all of the nucleons in the nucleus, such that the cross section scales roughly as the square of the number of nuclei and the square of the neutrino energy.", "The antineutrinos that escape from a supernova have a higher average energy, and so scatter more efficiently from these nuclei.", "Although there are generally more electron neutrinos than electron antineutrinos moving outward (a consequence of the dense matter becoming more neutron rich), it turns out 
that there are more ingoing (i.e., scattered) electron antineutrinos than electron neutrinos [123].", "Although there are very few ingoing neutrinos of any type, this technically constitutes an ELN crossing.", "Given that this crossing is very small, it is not yet clear if the resulting FFI can cause significant flavor change [85], [90], [88], [111].", "However, independently of the FFI, this \"halo\" of scattered neutrinos could significantly modify the distribution of neutrinos observable on Earth [135], [136], [137], [21], [138], [139], [140].", "Note that [121] show that these crossings are not actually present at $\mu =-1$ , but that there is a double crossing in the middle.", "[113] perform the first large-scale models of the FFI above the shock using imposed boundary conditions.", "PNS cooling: [141] indicate that, should the FFI occur in the cooling phase of a PNS, it could significantly increase mass loss rates and affect nucleosynthesis with more proton-rich conditions.", "Neutron Star Mergers: [142], [129], [130] find unstable regions in an NSM disk by determining the emission surface of the NSM accretion disk and ray-tracing to estimate full neutrino distributions.", "They suggest that the flavor transformation is quite ubiquitous, that there is enhancement of r-process element production for a disk around a black hole, and that there is little impact on the r-process for a disk around a hypermassive neutron star.", "[131] simulate a neutron star merger disk around a black hole assuming flavor equipartition wherever there is instability according to the $k_0$ test, which results in enhanced production of r-process elements.", "[132] also simulate a merger disk, but assume instability above a critical flux factor.", "They vary the flavor transformation prescription, disk mass, and MHD treatment, also finding moderately enhanced r-process yields in most cases.", "[99] simulate flavor transformation in a toy model of an NSM disk, which suggests minimal net flavor transformation even in the presence of instabilities.", "More work is needed to include the effects of the FFI self-consistently and in conjunction with collisions."
], [ "Future Directions", "There has been a great deal of progress since the FFI was proposed and discovered in models of CCSNe and NSMs, but there is still a lot of work to be done before its potential astrophysical effects are sorted out.", "There are a number of significant open questions: What is the effect of combining flavor transformation and collisional processes in a realistic CCSN or NSM?", "Can instabilities in disparate locations affect each other?", "Are effects in simplified simulations as strong when combined with the immense complexity of physical processes in multidimensional astrophysical explosions?", "How can flavor transformation be reliably incorporated into global CCSN and NSM simulations?", "One approach is to model the FFI as a small-scale phenomenon using a surrogate or sub-grid model.", "This work has already begun, and enable known effects to be included in large-scale simulations, but precludes finding new effects from the underlying instability operating and propagating on larger scales.", "Another approach is to simplify, truncate, or approximate the quantum kinetic equations in a way that is able to maintain the important features of the instability while ignoring unimportant features of the solution.", "The moment decomposition of the quantum kinetic equations is one example of this approach that shows promise for enabling the inclusion of flavor transformation effects on much larger scales and with a greater amount of additional physics than would be possible with a direct approach.", "Alternatively, one could artificially reduce the separation of scales between collisional and flavor transformation processes by effectively decreasing the interaction potential and extrapolating back to the full strength [143], though this extrapolation must be carefully checked to avoid large systematic errors.", "Finally, all of this work assumes a mean-field treatment of the neutrino quantum states, when in fact neutrinos will be entangled with other neutrinos.", "Increasingly realistic calculations of many-body neutrino flavor transformation (potentially requiring quantum computers) will help inform whether many-body effects manifest in supernova and merger conditions.", "Although many of these questions seem solvable within the next decade, neutrino physics never ceases to yield new surprises, and the FFI is but one of many exciting effects in the incredibly complex environments in supernovae and mergers." ] ]
2207.03561
[ [ "Quantum chemical roots of machine-learning molecular similarity\n descriptors" ], [ "Abstract In this work, we explore the quantum chemical foundations of descriptors for molecular similarity.", "Such descriptors are key for traversing chemical compound space with machine learning.", "Our focus is on the Coulomb matrix and on the smooth overlap of atomic positions (SOAP).", "We adopt a basic framework that allows us to connect both descriptors to electronic structure theory.", "This framework enables us then to define two new descriptors that are more closely related to electronic structure theory, which we call Coulomb lists and smooth overlap of electron densities (SOED).", "By investigating their usefulness as molecular similarity descriptors, we gain new insights in how and why Coulomb matrix and SOAP work.", "Moreover, Coulomb lists avoid the somewhat mysterious diagonalization step of the Coulomb matrix and might provide a direct means to extract subsystem information that can be compared across Born-Oppenheimer surfaces of varying dimension.", "For the electron density we derive the necessary formalism to create the SOED measure in close analogy to SOAP.", "Since this formalism is more involved than that of SOAP, we review the essential theory, but also introduce a set of approximations that eventually allow us to work with SOED in terms of the same implementation available for the evaluation of SOAP.", "We focus our analysis on elementary reaction steps, where transition state structures are more similar to either reactant or product structures than the latter two are with respect to one another.", "The prediction of electronic energies of transition state structures can, however, be more difficult than that of stable intermediates due to multi-configurational effects.", "The question arises to what extent molecular similarity descriptors rooted in electronic structure theory can resolve these intricate effects." 
], [ "Introduction", "A molecular descriptor — in the machine-learning literature also known as feature[1] — is a representation of a molecule in terms of a computer readable vector.", "Molecular descriptors can be compared in order to assess the similarity of molecules of different composition and configuration.", "A similarity measure is a mathematical metric, i.e., a function that measures the distance between two points in descriptor space.", "The closer the two points are, the more similar they are.", "A kernel function generalizes this notion.", "[2] Examples are Gaussian kernels or linear kernels.", "[1], [3] They do not bear physical meaning per se, but assess whether two points are close together according to the measure.", "Often they are chosen for mathematical convenience (such as radial basis functions based on Gaussians[4]), but they can also be loaded with physical interpretation such as the smooth overlap of atomic positions (SOAP)[5].", "Even though the notions of features and kernels are distinct, they are sometimes treated as practically the same, since the physical and chemical properties of systems to be represented are encoded in both the kernel and the feature.", "There exists a multitude of molecular descriptors, many originating from the field of cheminformatics: One of the most comprehensive overviews was provided by Todeschini and Consonni[6], which, however, does not cover post-2012 descriptors, i.e., those that have gained traction within the recent revival of machine learning and artifical intelligence in chemistry.", "Todeschini and Consonni[6] classify the descriptors according to the theory from which they are derived: graph theory, discrete mathematics, physical chemistry, information theory, quantum chemistry, organic chemistry, differential topology, algebraic topology.", "On top of that, they distinguish how they are processed, namely by statistics, chemometrics, or cheminformatics.", "The bibliography of their review covers the period between 1741 and 2008 with about 6400 references and 3300 descriptors are listed.", "[6] With the advent of modern machine learning in chemistry, several groups have developed descriptors and methods that are better tailored to a particular machine-learning method and harness the latest computational developments more efficiently.", "The group of von Lilienfeld proposed various descriptors, most of which can be described as many-body expansions; examples are bag of bonds (BoB),[7] the atom-in-molecule-based descriptor called “amons”[8], FCHL[9] (named after the authors), and the development[10] and assessment[11], [12] of Coulomb matrices, their eigenvalues, or multiple Coulomb matrices per molecule.", "Huang and von Lilienfeld studied the uniqueness of some of their descriptors.", "[13] To establish size-intensive descriptors, Collins studied descriptors for machine learning and how to encode bonds.", "[14] The molecular-structure-based descriptor SOAP by Csanyi and co-workers has experienced continuous development and is among the most successful ones.", "[5], [15], [16] Ceriotti and co-workers have developed and analyzed physics-inspired molecular representations[17], [18], [19] and carried out considerable work in unifying the landscape of descriptors.", "[20], [21], [17], [22] Corminboeuf and co-workers have investigated ways to incorporate the electron density into machine learning procedures.", "[23], [24] Molecular similarity descriptors such as Coulomb matrix and SOAP have been used in machine-learning 
applications as 'rulers' to assign a degree of similarity to two different structures.", "Typically, the structures to be compared can differ significantly and may be taken from across chemical space.", "However, if we consider the opposite case, namely structures that are clearly related through an elementary reaction step, then we arrive at a somewhat paradoxical situation: while reactant and product are obviously rather similar molecules by construction (as they are related by an elementary reaction step), the connecting transition state structure is even more similar to either product or reactant.", "Yet, it is well known in electronic structure theory that transition state structures (which represent activated molecules that typically exhibit one or two stretched chemical bonds) present a very different electron correlation problem compared to the stable reactant or product structures.", "Hence, this example presents a well-defined situation, in which molecular structure similarity seems to be insufficient to also judge electronic similarity, which governs the molecular properties.", "As a result, a molecular similarity measure might rightfully determine a transition state structure to be more similar to either side of the reaction arrow, but at the same time miss the fact that its electronic structure will be rather different (as measured, for instance, in terms of electron correlation diagnostics).", "Another measure might even determine all three structures to be basically identical, which will then ignore all fine-grained differentiation that quantum chemical methods attribute to them.", "We note that machine learning has hardly been applied to transition states [25], [26], [27], [28], but more can be expected in the future in view of the key role transition states play in chemistry.", "It is therefore important to better understand to what degree descriptors of molecular similarity can differentiate between stable intermediates (i.e., local minima on the Born-Oppenheimer potential energy surface) and transition states (i.e., first-order saddle points on that surface).", "In this work, we therefore consider the physical foundations of the Coulomb matrix and SOAP from the point of view of electronic structure theory in order to shed light on this paradox.", "We will first introduce example reactions for which we will compare the different descriptors.", "Then, we study a general expression for the electronic energy to which we want to relate the measures in order to establish a relation between a similarity measure and this key quantity of electronic structure theory.", "Afterwards, we first consider the relation of the electronic energy and the Coulomb matrix, which will also lead us to introduce Coulomb lists as a descriptor.", "Then, we turn to such a relation for SOAP, which will also lead us to the introduction of a new descriptor, namely the smooth overlap of electron densities."
], [ "An Elementary Reaction Step Example", "By contrast to a typical machine learning study, we will, in this first step, not consider vast amounts of data, but instead make an attempt to understand how Coulomb matrix and SOAP operate at the level of a single elementary reaction step.", "In order to be able to later extend this work to a large data set, we chose our (generic) example (reaction 1) in such a way that it is on the same Born-Oppenheimer surface as the QM9 reference data set of von Lilienfeld and co-workers [29].", "We will base the numerical part of this work on the reactions shown in Figure REF : The nucleophilic double bond between C1 and C5 abstracts the proton of the hydroxyl group, H9, which results in a change of the aromatic system as the imidazole ring opens up.", "An isocyanate group is created.", "In this generic reaction 1, electronic and structural changes are both present.", "This allows us to probe to what extent a descriptor can account for electronic changes in terms of its numerical values.", "Figure: Reactions for which an elementary step relates reactant and product through a transition state structure.In (the generic) reaction 1, the nucleophilic double bond between C1 and C5 abstracts the protonof the hydroxyl group, H9.", "The C2-C6 bond breaks and, hence, opens up theimidazole ring, yielding an isocyanate.Reaction 2 and 3 show analogous reactions with a push–pull system(in 2) and two ethyl substituents (in 3), respectively.Analogously to reaction 1, reactions 2 and 3 are chosen as proton abstraction reactions of the same type, but with modulating substituents.", "In the case of reaction 2, we introduced a push–pull-type $\\pi $ -system structure to replace two hydrogen atoms of the original reaction.", "The nitro group functions as an electron acceptor and the methoxy group may be considered as the electron donor.", "Accordingly, this system poses a challenge for a descriptor.", "In the case of reaction 3, two ethyl groups replace the same two hydrogen atoms to probe the effect of atoms that are farther away from the reactive site.", "Hence, the three reactions serve the purpose of studying descriptor transferability because the locality of the descriptor should highlight whether the generic reaction 1 can be recovered in reactions 2 and 3." 
], [ "Electronic Energy in Terms of Nuclear Contributions", "To connect a descriptor of molecular similarity, which refers to an atom-resolved molecular structure (usually in terms of its nuclear coordinates), with the expression for the electronic energy, we first need to rewrite this energy in terms of nuclear contributions.", "Although every electronic structure model affords a different expression for the electronic energy, a common expression can be formulated in terms of one- and two-body reduced density matrices.", "If we focus on the nuclear contributions in such an expression, we may write the electronic energy in Hartree atomic units as: $E_\\text{el} =\\sum _{I>J}^{M} \\frac{Z_{I}Z_{J}}{\\left|\\mathbf {R}_{I}-\\mathbf {R}_{J}\\right|}-\\sum _{i,J}^{N,M} n_i\\left\\langle \\phi _{i}\\left|\\frac{Z_{J}}{\\left|\\mathbf {R}_{J}-\\mathbf {r}\\right|}\\right| \\phi _{i}\\right\\rangle +E_\\text{rest}[\\lbrace \\phi _i\\rbrace ]\\, ,$ where $I$ and $J$ are the indices running over the $M$ nuclei with their respective nuclear charges, $Z_I$ and $Z_J$ , and coordinates, $\\mathbf {R}_I$ and $\\mathbf {R}_J$ .", "The first sum refers to the Coulomb repulsion of all nuclei, which is independent of the electronic structure model.", "The second term delivers the potential energy for the interaction with the external potential and all other contributions are kept hidden in the third one.", "For the sake of simplicity, we have introduced a general occupation number $n_i$ , which can be easily generalized to the doubly-indexed one-body reduced density matrix.", "One may simply choose the $n_i$ to be all equal to one as in unrestricted Hartree-Fock or Kohn-Sham theory.", "The index $i$ then runs over $N$ spin orbitals $\\phi _i$ from which the electronic wave function is constructed.", "Finally, ${\\bf r}$ is the coordinate of an electron.", "$E_\\text{rest}[\\lbrace \\phi _i\\rbrace ]$ denotes the remaining contributions to the electronic energy, namely the expectation values for total kinetic energy, $\\langle T\\rangle $ , and the electron–electron interaction, $\\langle V_\\text{ee} \\rangle $ , $E_\\text{rest} = \\langle T \\rangle + \\langle V_\\text{ee} \\rangle \\, ,$ which all depend on the wave function and, hence, on all orbitals $\\lbrace \\phi _i\\rbrace $ .", "If molecular orbitals (MOs) are expanded into a set of $m$ basis functions $\\chi _\\mu $ , $\\phi _i = \\sum _{\\mu =1}^{m} c_{\\mu }^{(i)} \\chi _{\\mu } \\, ,$ their corresponding MO coefficients $c_\\mu ^{(i)}$ will enter the energy expression $E_\\text{el} =\\sum _{I>J}^{M} \\frac{Z_{I} Z_{J}}{\\left|\\mathbf {R}_{I}-\\mathbf {R}_{J}\\right|}-\\sum _{i,J}^{N,M} n_i\\sum ^m_{\\mu \\nu } c_{\\mu }^{(i)} c_{\\nu }^{(i)}\\left\\langle \\chi _{\\mu }\\left|\\frac{Z_{J}}{\\left|\\mathbf {R}_{J}-\\mathbf {r}\\right|}\\right| \\chi _{\\nu }\\right\\rangle +E_\\text{rest}[\\lbrace c_{\\mu }^{(i)} \\rbrace ] \\, .$ Note that we assume the basis functions to be real and therefore avoided a denotation for complex conjugation; however, this restriction can be easily lifted.", "We re-write the electron-nucleus attraction potential-energy integrals over basis functions $\\chi _\\mu $ into a matrix, $\\mathbf {V}^{(J)}$ , with elements $V_{\\mu \\nu }^{(J)} = \\left\\langle \\chi _{\\mu }\\left|\\frac{Z_{J}}{\\mid \\mathbf {R}_{J}-\\mathbf {r}\\vert }\\right| \\chi _{\\nu }\\right\\rangle $ and the MO coefficients into vectors, $\\mathbf {c}^{(i)}$ and $\\mathbf {c}^{(i)}$ , obtaining $E_\\text{el}=& \\sum _{I>J}^{M} \\frac{Z_{I} 
Z_{J}}{\\left|\\mathbf {R}_{I}-\\mathbf {R}_{J}\\right|}-\\sum _{i,J}^{N,M} n_i\\mathbf {c}^{(i)}\\mathbf {V}^{(J)}\\mathbf {c}^{(i)}+E_\\text{rest}[\\lbrace c_{\\mu }^{(i)} \\rbrace ] .$ Obviously, for a close connection of the quantum chemical foundations to a molecular similarity descriptor of a machine learning model, we may require that a descriptor should be based on Cartesian coordinates of all atomic nuclei, $\\lbrace \\mathbf {R}_{I}\\rbrace $ , of a molecule because these define a molecular structure to which the Born-Oppenheimer approximation assigns an electronic energy.", "Coulomb matrix and SOAP fulfill this requirement (see below).", "Moreover, according to the electronic energy expression we require that the descriptor depends on the nuclear charge numbers, which, together with the nuclear coordinates, defines the external potential.", "The external potential and the number of electrons in the system contain all information to formulate the electronic Hamiltonian and, hence, all information to solve for the electronic energy (E. Bright Wilson argument [30]).", "The Coulomb matrix fulfills this requirement by construction, but it should be noted that it involves a diagonalization step that changes the information encoded in a rather intransparent way.", "However, the standard formulation of SOAP considers molecular structure through fuzzy atoms, where all atoms are considered equal (see below).", "As such, nuclear coordinates enter the procedure, but their type (in terms of the nuclear charge number) is usually not resolved.", "According to the first Hohenberg-Kohn theorem [31], which is also taken as the basis of density functional theory, there exists a one-to-one correspondence between the external potential and the electron density.", "Since SOAP constructs a density distribution of molecular structure, one may wonder whether a relation to the electron density (and hence to the electronic energy) can be established.", "We will consider these matters later on in this work and now first turn to the Coulomb matrix due to its obvious link to the external potential." 
], [ "Coulomb Matrix and Coulomb List", "The elements of the Coulomb matrix[10] are defined as $C_{IJ} ={\\left\\lbrace \\begin{array}{ll}\\dfrac{Z_I^{2.4}}{2} \\quad &I=J \\\\\\dfrac{Z_I Z_J}{|\\mathbf {R}_I - \\mathbf {R}_J| } \\quad &I\\ne J \\, .\\end{array}\\right.", "}$ The diagonal in the Coulomb matrix is sometimes referred to as the `self-interaction term'[11], even though there is no basis in classical electrodynamics for a self-interaction of a point charge.", "A pragmatic justification is that the diagonal term conveys information about the identity of the elements.", "The original publication[10] states that the “diagonal elements encode a polynomial fit of atomic energies to nuclear charge”.", "Since the authors wanted to predict atomization energies, this diagonal brings relevant information into the problem from a priori knowledge, but makes the descriptor less general and less interpretable.", "Yet another interpretation[32] is linking the diagonal terms to the total potential energy of a neutral atom in the Thomas–Fermi model, which is $E_\\mathrm {TF} = -0.7687 Z^{7/3}$ .", "[33] Molecules of different size, $M$ , will be padded with rows and columns of zeros in their Coulomb matrix representation to match the size of the largest molecule to be compared.", "The Coulomb matrix itself cannot be used as a descriptor for molecules of different sizes, because it is not permutationally invariant (exchange of rows and columns change the descriptor but physically, the order of the atoms in the molecule does not matter) and different information would be stored in different dimensions of the matrix.", "Three remedies have been proposed[11] to transform the Coulomb matrix into a permutationally invariant descriptor.", "The simplest one, which we are going to analyze here, is the eigenspectrum: Calculating the eigenvalues, $\\lambda _i$ and sorting them such that $\\vert \\lambda _i \\vert \\ge \\vert \\lambda _{i+1} \\vert $ .", "This is the original recipe proposed by Rupp et al.", "in 2012.", "[10] For $M$ atoms, this method reduces the dimensionality from $M^2$ degrees of freedom to only $M$ .", "The second option to make the Coulomb matrix permutationally invariant is the sorted Coulomb matrix: The rows (or equivalently, the columns) are sorted by their Euclidean norm, such that $\\vert \\vert C_i \\vert \\vert _2 \\ge \\vert \\vert C_{i+1}\\vert \\vert _2$ .", "This leads to an overdetermined system, as the dimensionality is $M^2$ now and may produce to non-smooth changes in the sorting even for small changes in the coordinates.", "And the third approach is to represent each system by a set of $n$ sorted Coulomb matrices, each injected with Gaussian noise to vastly augment dimensionality." 
], [ "Atomic Descriptors $\\mathcal {F}_{\\text{n}}^{(J)}$ and {{formula:cd0e588a-f872-41c5-852b-e6bb21e34856}}", "We now split Eq.", "(REF ) into atomic contributions by moving the sum over the nuclei in front of the expression $E_\\text{el}= \\sum ^M_J \\left[\\mathcal {F}^{(J)}_\\text{n} -\\mathcal {F}^{(J)}_\\text{e}\\right]+E_\\text{rest}$ with $\\mathcal {F}^{(J)}_\\text{n} = \\frac{1}{2}\\sum _{I \\atop I\\ne J}^{M} \\frac{Z_{I} Z_{J}}{\\left|\\mathbf {R}_{I}-\\mathbf {R}_{J}\\right|}$ and $\\mathcal {F}^{(J)}_\\text{e} =\\sum _{i}^{N} n_i\\mathbf {c}^{(i)}\\mathbf {V}^{(J)}\\mathbf {c}^{(i)}$ (recall that all $n_i$ are equal to one for unrestricted Hartree-Fock theory and unrestricted Kohn-Sham theory).", "Figure REF depicts the features $\\mathcal {F}_{\\text{e}}^{(J)}$ and $\\mathcal {F}_{\\text{n}}^{(J)}$ obtained for reaction 1 of Figure REF .", "In each subplot of Figure REF , we separated the elements due to the large difference in scale.", "In (a), the external potential features, $\\mathcal {F}_{\\text{e}}^{(J)}$ , are shown, in (b), the nuclear repulsion features, $\\mathcal {F}_{\\text{n}}^{(J)}$ .", "The same plot scaled by the respective nuclear charge can be found in the Supporting Information.", "The scaling only changes the relationship between different elements but not within one group of elements.", "We see how the reaction details can be recovered in the plots of Figure REF .", "For instance, C6 and C2 (and similarly H13) show a big drop from the transition state (TS) structure (blue line) to the product (red line) which monitors that the bond between these two atoms is broken.", "Similarly, H9 features the biggest relative drop toward the product, which is due to the H shift observed in the reaction.", "Only for H12 is the product feature higher in energy than those of the TS structure and the reactant.", "The fact that the reactant and the TS structure features are close together while the product features are separated hints at a possible early TS structure, which resembles the reactant rather than the product according to Hammond's Postulate.", "[35] To further elucidate the correlation, we plot $\\mathcal {F}_{\\text{n}}^{(J)}$ and $\\mathcal {F}_{\\text{e}}^{(J)}$ against each other for each element in Figure REF .", "Apart from the scaling, the trends are very similar: Not only are all elements of the same type almost linearly correlated but the trend also holds over the course of the reaction (connected points).", "So apart from small deviations, the nuclear features encode very similar information to the external potential.", "Since the latter is much more expensive to calculate, it is a good trade-off to use the nuclear features, which are much more cost effective.", "This is evidence that at least approximatively, only one of the two features can be used without great loss of accuracy.", "Since the nuclear features are far more efficient to calculate, since they do not depend on a converged SCF calculation, we will solely consider $\\mathcal {F}_{\\text{n}}^{(J)}$ in what follows.", "To compare the effects of different substituents, we plotted the same representations as in Figure REF in Figure REF .", "In reaction 2 with the ethyl substituents, we can see that both proximal carbon atoms, C15 and C21, are of similar magnitude, as are the distal carbon atoms, C18 and C21.", "The patterns of C6 and C2 toward the product remain the same.", "Note that the relative pattern, e.g., the increase from C8 to C6 in reaction 1 but the decrease in the same two 
carbon atoms in reactions 2 and 3, should not be overinterpreted as this pattern is only dependent on the sorting and does not carry physical meaning.", "Even for reaction 2 with the electron donating and withdrawing groups, a similar pattern is observed.", "Figure: For reaction 1 of Figure : (a) external potential features, $\mathcal {F}^{(J)}_{\text{e}}$ , and (b) nuclear repulsion features, $\mathcal {F}^{(J)}_{\text{n}}$ , in hartree ($E_\mathrm {h}$ ) (TS denoted in blue, reactant and product in green and red, respectively). The four subplots are sorted by element and the ordering in each subplot according to increasing value of the TS structure feature.", "Figure: For each element in reaction 1 from Figure , hydrogen, carbon, nitrogen, and oxygen in (a)-(d), we show the correlation between the external potential features, $\mathcal {F}_{\text{e}}^{(J)}$ , and the nuclear repulsion features, $\mathcal {F}_{\text{n}}^{(J)}$ . The connection between three points is over the course of the reaction from reactant to TS structure, to product.", "Figure: For the reactions of Figure , the nuclear repulsion features $\mathcal {F}^{(J)}_{\text{n}}$ are shown (a) for reaction 1 with 'generic' H substituents, (b) for reaction 2 with electron donating and withdrawing groups NO2 and OCH3, and (c) for reaction 3 with two ethyl substituents (TS denoted blue, reactant and product as green and red, respectively). The plots are each split into four subplots according to the elemental composition of the reacting molecule.", "Figure: For the reactions of Figure , the inverse distance features, $\widetilde{\mathcal {F}}^{(J)}_{\text{n}}$ , are shown (a) for reaction 1 with H substituents, (b) for reaction 2 with electron donating and withdrawing groups NO2 and OCH3, and (c) for reaction 3 with two ethyl substituents (TS denoted blue, reactant and product as green and red, respectively). The plots are each split into four subplots according to the elemental composition of the reacting molecule."
], [ "Relation of $\\mathcal {F}_{\\text{n}}^{(J)}$ to the Coulomb Matrix", "Consider the modified Coulomb matrix, $\\tilde{\\mathbf {C}}$ , with diagonal elements set to 0 and the interaction terms divided by two to resemble avoidance double counting: $\\tilde{C}_{IJ} ={\\left\\lbrace \\begin{array}{ll}0 \\quad &I=J \\\\\\dfrac{1}{2}\\dfrac{Z_I Z_J}{|\\mathbf {R}_I - \\mathbf {R}_J| } \\quad &I\\ne J \\,\\end{array}\\right.}", ".$ All elements $\\mathcal {F}^{(J)}_\\text{n}$ can be obtained by multiplying the $J$ -th row of this Coulomb matrix with 0-diagonal with a vector of one entries, $\\mathbf {1}$ : $\\tilde{\\mathbf {C}}^{(J)} \\cdot \\left[\\begin{array}{c}1 \\\\1 \\\\\\vdots \\\\1\\end{array}\\right]=\\mathcal {F}_\\text{n}^{(J)} .$ We have $\\mathcal {F}_{\\text{n}}^{(J)} = \\sum _{I,I\\ne J} C_{IJ} = \\sum _{I} \\tilde{C}_{IJ}$ , establishing a relationship to the (modified) Coulomb matrix $\\mathbf {C}$ ($\\tilde{\\mathbf {C}}$ ).", "To isolate the steric effects from the electronic effects, we consider, similarly to Figure REF but without the nuclear charges $Z_I$ and $Z_J$ in the numerator, a sum of inverse distances at nucleus $J$ , namely $\\widetilde{\\mathcal {F}}^{(J)}_\\text{n} \\equiv \\frac{1}{2} \\sum _I \\frac{1}{\\vert \\mathbf {R}_I - \\mathbf {R}_J \\vert }$ , in Figure REF .", "This function has a larger magnitude in areas of the molecule with a higher scaffold density that exhibit more steric effects.", "For instance, as before, the function shows higher values for C6 and C2 and a drastic change toward the product.", "But by contrast to Figure REF , the function is characterized by a larger value at C6 than at C2 because it strictly measures the neighbor density, of which C6 has more (N7, H13, C5 are nearby) than C2 (only N4 and O3 are nearby).", "In Figure REF , the larger nuclear charges of nitrogen atoms and oxygen atoms weighted the overall sum higher.", "More of such effects can be observed: another one is in reaction 2, where the sum of inverse distances at N15 is much smaller than in Figure REF compared to the other nitrogen atoms, since the neighboring oxygen atoms are not weighted according to their nuclear charges." 
], [ "Comparison of $\\mathcal {F}_{\\text{n}}^{(J)}$ and Coulomb Matrix Eigenvalues", "In Figure REF , the nuclear repulsion features, $\\mathcal {F}_{\\text{n}}^{(J)}$ , are shown in the left panel and the eigenvalues of the Coulomb matrix, $\\lambda _i$ , in the right panel, all for the intrinsic reaction coordinate (IRC) of reaction 1 in Figure REF .", "In both feature spaces, it is clearly visible what the TS structure is (step 187) and that it is most likely an early one.", "The eigenvalues of the Coulomb matrix are hard to interpret, however, since the values are not tied to a particular atom.", "By contrast, from our $\\mathcal {F}_{\\text{n}}^{(J)}$ descriptor it is evident which atoms undergo large changes over the course of the reaction: The hydrogen atom that is shifted in the reaction, H9, features the largest change in the trace relative to the atoms of the same element.", "This is in line with Figures REF and REF , where this hydrogen atom produces the largest spread.", "As before when discussing Figure REF , we see that C2 and C6 behave similarly, as they are the bonding partners of the bond that is broken.", "The nitrogen atoms, N4 and N7 appear relatively stable in terms of the $\\mathcal {F}_{\\text{n}}^{(J)}$ descriptor apart from a slight relaxation observed in all heavy atoms, due to the molecule opening up and becoming less compact.", "The carbon atom C8 with its hydrogen atom H14 also show a rather flat trace as they hardly participate in the reaction.", "Since $\\mathcal {F}_{\\text{n}}^{(J)}$ yields interpretable traces and, at the same time, encodes very similar information to that encoded in the eigenvalues of the Coulomb matrix, it represents an excellent alternative, if not a superior type of feature, for many applications.", "This will be especially interesting if for a given model one wants to backtrack and re-inspect a feature that a machine learning model paid particular attention to.", "For instance, with such information it is possible to set a threshold of change over the course of the reaction and filter the rest of the data set according to reactions that behave in a similar way.", "In each case, it is possible to go back to the descriptor and evaluate where a particular contribution came from.", "Figure: The traces of reaction 1 in Figure over the IRC as representedby ℱ n (J) \\mathcal {F}_{\\text{n}}^{(J)} and by the eigenvalues of the Coulomb matrix.A drawback, however, is that $\\mathcal {F}_{\\text{n}}^{(J)}$ is not permutationally invariant and suffers from the same issues that plague the plain Coulomb matrix.", "Due to this very reason, the diagonalization was introduced for the Coulomb matrix (see above), to achieve permutational invariance at the price of interpretability.", "The reasons given for diagonalizing the Coulomb matrix are[10]: i) the unique encoding, ii) \"symmetrically equivalent atoms\" are treated the same, iii) invariance with respect to permutation, translation, and rotation and iv) continuous distance.", "Property i) has been discussed[36], [34] and holds even for homometric molecules, ii) is fulfilled, iii) is a consequence of the ordering, not the diagonalization (there is no intrinsic ordering of eigenvalues as they are complex numbers and the ordering is dependent on the diagonalization algorithm), and iv) is fulfilled.", "The sorting ensures that the structural properties of each molecule that are compared are of similar size.", "The caveat is, though, that these structural properties may appear for different chemical 
reasons.", "Since, in the general case, there is no way to track atoms and since this may not even be desirable (as in symmetric molecules certain atoms are symmetry redundant), we have to sort the list (or use any other way to introduce permutational invariance).", "Therefore, we simply sort the vector $\\mathcal {F}_{\\text{n}}^{(J)}$ in descending order and ignore tracking of atoms.", "This is in analogy to the eigenvalues of the Coulomb matrix that are sorted the same way, but the key difference remains: the entries are still atom specific and do not mix information from all atoms (as the diagonalization of the Coulomb matrix does).", "Similar to our Coulomb list, in the sorted Coulomb matrix[7] each element of the matrix is part of the feature, whereas the rows (or columns of this symmetric matrix) are ordered according to their norm.", "This is equivalent to our approach apart from the fact that there is no summation over the rows (or columns, respectively), leading to a high dimensional feature, which is quadratic in the number of atoms.", "Moreover, it has been noted that slight variations in atomic coordinates may cause abrupt changes in the Coulomb matrix ordering, thereby impeding the learning of structural similarities [11].", "Schrier found that the eigenvalues of the Coulomb matrix will not be able to distinguish larger molecules [12].", "We do not expect that this is different for the sorted $\\mathcal {F}_{\\text{n}}^{(J)}$ .", "Furthermore, for macromolecules of more than 10,000 atoms, the global description is not only granular enough but the diagonalization needed for the eigenvalues becomes a true computational bottleneck, as it scales with $\\mathcal {O}(M^3)$ , while subsequent sorting of is affected by a negligible cost of order $\\mathcal {O}(M \\log M)$ , where $M$ is the number of nuclei." ], [ "Descriptor Extensivity", "One long-standing issue in descriptor research is the desired transferability between molecules of different size.", "Hence, an intensive descriptor is sought for, i.e., one that is independent of molecular size.", "However, this desire contrasts the inherent extensivity of structures.", "One approach is to find sub-information that is intensive.", "The size (number of elements) of the nuclear repulsion features, $\\mathcal {F}_{\\text{n}}^{(J)}$ , grows with system size, $M$ .", "For the specific case of an elementary reaction step, only a very localized part of the system will react while most of the system remains largely unchanged (i.e., internal coordinates of observer atoms change only little), as illustrated by the observer hydrogen atom H14 in Figure REF .", "A simple solution to this problem is to truncate the feature and to consider only to a subset of atoms that vary more than a given threshold and can be considered a relevant subsystem for the process under consideration.", "More elaborate measures may consider features evaluated in an embedding framework.", "However, the issue remains that even if two feature vectors are truncated to the same size to only contain the physically relevant part, they cannot be compared in a direct manner.", "Only if the set of atoms to which the vector has been truncated to is the same for every system, a direct comparison is possible.", "The dilemma is that, as soon as convolutions are introduced to achieve permutational invariance, the comparability fades away." 
], [ "Smooth Overlap of Atomic Positions and of Electron Densities", "We are advised to compare the results obtained so far to one of the most successful representations for molecular similarity: the SOAP kernel[5].", "In this section, we review the key derivation steps of the SOAP kernel as we need them later to formulate and evaluate our electron-density-based descriptor.", "In our derivation, we follow Ref.", "bartok2013, but extend it at key places to highlight important steps of the derivation that are not explicit in the original paper and that become important for our electron-density-based descriptor, for which the derivation must be made more transparent.", "For this electron-density-based descriptor, we will reinterpret the fuzzy atomic positions of SOAP and generalize them to the actual electron density.", "Accordingly, we call the resulting descriptor smooth overlap of electron densities (SOED).", "We emphasize again that the derivation reviewed for SOAP in the next section is necessary as it will turn out to be the key evaluation strategy within our setting to evaluate SOED." ], [ "The SOAP Kernel", "SOAP represents a measure for molecular similarity without making any direct connection to the associated electronic energies (by contrast to the Coulomb matrix).", "A molecule is put with its center of mass at the origin of the coordinate system.", "We place on each of its atoms $I$ at position $\\mathbf {R}_I$ , an unnormalized Gaussian function, $g_{\\alpha _I}$ , with parameter $\\alpha = 1/(2{\\sigma _I}^2)$ , where $\\sigma _I$ is the variance of the Gaussian function at atom $I$ , to obtain a superposition of fuzzy atomic positions, $\\rho _{\\text{mol}}(\\mathbf {r};\\lbrace \\mathbf {R}_I\\rbrace , \\lbrace \\alpha _I\\rbrace )= N_\\mathrm {mol}\\sum _{I=1}^Mg_{\\alpha _I}(\\left| \\mathbf {r} - \\mathbf {R}_I \\right|)$ with the unnormalized Gaussian $g_{\\alpha _I}(\\left| \\mathbf {r} - \\mathbf {R}_I \\right|)= \\exp \\left(-\\alpha _I\\left|\\mathbf {r}-\\mathbf {R}_I\\right|^{2}\\right)$ and normalization constant $N_\\mathrm {mol}$ to ensure that $\\langle \\rho _{\\text{mol}} \\vert \\rho _{\\text{mol}} \\rangle = 1$ .", "The variable $\\mathbf {r}$ is the variable of the field.", "Note that, in the original publication[5], the normalization constant $N_\\mathrm {mol}$ is omitted because later (see below), the kernel will be normalized.", "In the original paper[5], this superposition has been called 'atomic neighbor density' that represents the 'atomic environment', which we do not adopt here as the superposition in Eq.", "(REF ), primarily, does not put an emphasis on some local atomic structure so that other atoms become neighbors or an environment.", "Instead, it refers to the molecular structure as a whole.", "Hence, we may refer to it as a 'molecular-scaffold density'.", "This is also advantageous to conceptually emphasize its relation to our SOED descriptor to be introduced later.", "The comparison of two molecular-scaffold densities then requires the definition of an overlap measure (see below), which is the origin of the term 'smooth overlap of atomic positions', which we understand as an overlap of molecular scaffolds represented by fuzzy atomic positions.", "Usually, the $\\alpha _I$ in Eq.", "(REF ) are taken to be the same for all atoms, $\\alpha _I\\rightarrow \\alpha $ .", "In some applications, $\\alpha $ was optimized as a hyperparameter[15], [38] or simply fixed to some value[39], [40].", "Using a different $\\alpha $ as a hyperparameter for each 
element type[41] resulted in the same for each element type.", "Notably, the sum in Eq.", "(REF ) is permutationally invariant, a property that is important for machine learning: exchanging the terms in the sum does not change the total density.", "In order to determine the best match of the scaffold densities of two molecules for their comparison regarding the assessment of molecular similarity, it will be necessary to rotate one scaffold density in three-dimensional space with respect to the other To be able to rotate the sum of Gaussians in Eq.", "(REF ), the equation must be expanded into functions dependent on the global polar angles, $\\theta $ and $\\phi $ (see Figure REF ).", "[5] In the definition above, they are dependent on $\\mathbf {r}_A$ which refers to the frame of reference that is centered on nucleus $A$ and that is not easily rotatable from a global point of view.", "The functions sought for straightforward rotation of the whole molecular scaffold field[5] are all to be located at the same single center, which can be the center of mass coordinates of the molecule.", "Figure: In the global coordinate system, 𝐞 x \\mathbf {e}_x,𝐞 y \\mathbf {e}_y, and 𝐞 z \\mathbf {e}_z, the angles θ\\theta and φ\\phi denote the polar and azimuthal angles of the field variable 𝐫≡𝐫(θ,φ)\\mathbf {r}\\equiv \\mathbf {r}(\\theta ,\\phi ), labeled in green.", "Nucleus AA is at position𝐑 A ≡𝐑 A (θ,φ)\\mathbf {R}_A\\equiv \\mathbf {R}_A(\\theta ,\\phi ) with polar and azimuthal angles θ 𝐑 A \\theta _{\\mathbf {R}_A} and φ 𝐑 A \\phi _{\\mathbf {R}_A}.", "In the coordinate system that is centered on nucleus AA, namely 𝐞 x A \\mathbf {e}_{x_A}, 𝐞 y A \\mathbf {e}_{y_A}, and 𝐞 z A \\mathbf {e}_{z_A}, the distance to the field variable is 𝐫 A =𝐫-𝐑 A \\mathbf {r}_A = \\mathbf {r}-\\mathbf {R}_A with corresponding polar angles θ 𝐫 A \\theta _{\\mathbf {r}_A} and φ 𝐫 A \\phi _{\\mathbf {r}_A}.", "The angle between 𝐑 A \\mathbf {R}_A and 𝐫\\mathbf {r} is denoted γ\\gamma .When we multiply out the exponent in Eq.", "(REF ), we obtain $-\\alpha (\\mathbf {r}^2 - 2 \\, \\mathbf {r} \\cdot \\mathbf {R}_I + \\mathbf {R}_I^2)$ , i.e., two squared terms and a cross term.", "The cross term is then subjected to a Rayleigh expansion of a plane wave in terms of spherical waves,[5] $\\exp \\left(\\mathbf {r} \\cdot \\mathbf {R}_{I}\\right)=\\sum _{l=0}^{\\infty }(2 l+1) i_{l}\\left(r R_{I}\\right) P_{l}(\\cos \\gamma )$ where $i_l$ is the modified spherical Bessel function of the first kind of degree $l$ and $P_l$ the $l$ th Legendre polynomial, and $\\gamma = \\angle (\\mathbf {r},\\mathbf {R}_I)$ , measured from the origin of the coordinate system (see Figure REF ).", "The modified spherical Bessel function of the first kind, $i_{n}(x) \\equiv \\sqrt{\\frac{\\pi }{2 x}} I_{n+1 / 2}(x)$ , is one radial solution to the Helmholtz equation in spherical coordinates and related to the modified Bessel function of the first kind, $I_{n}(x) \\equiv i^{-n} J_{n}(i x)$ , where $J_n$ in turn is the Bessel function of the first kind, which is a solution to Bessel's differential equation, other solutions being the Bessel function of the second kind and Hankel functions.", "Hence, the molecular scaffold density in Eq.", "(REF ) can be expressed as $\\begin{aligned}\\rho _\\text{mol}( \\mathbf {r}; \\lbrace \\mathbf {R}_I \\rbrace , \\alpha )&= \\sum _I^M \\exp \\left(-\\alpha \\left|\\mathbf {r}-\\mathbf {R}_{I}\\right|^2\\right)\\\\&= \\sum _I^M \\exp \\left(-\\alpha (\\mathbf {r}^2 - 2\\mathbf {r R}_I + \\mathbf 
{R}_I^2)\\right)\\\\&= \\sum _I^M \\exp \\left(-\\alpha \\left(r^2+R_{I}^2\\right)\\right) \\sum _{l=0}^{\\infty }(2 l+1) i_{l}\\left(2 \\alpha r R_{I}\\right) P_{l}(\\cos \\gamma ) \\ ,\\end{aligned}$ where the normalization constant $N_\\text{mol}$ was omitted as explained above.", "By virtue of the spherical harmonics addition theorem, $P_{l}(\\cos \\gamma )=\\frac{4 \\pi }{2 l+1}\\sum _{m=-l}^{l}{Y}_{lm}^{\\ast }(\\theta _{\\mathbf {R}_I},\\phi _{\\mathbf {R}_I})Y_{lm}(\\theta ,\\phi )$ with angles $\\theta $ and $\\phi $ as the polar coordinates of the field variable $\\mathbf {r}$ measured from the origin and the angles $\\theta _{\\mathbf {R}_I}$ and $\\phi _{\\mathbf {R}_I}$ being the polar coordinates of $\\mathbf {R}_I$ , notably, also measured from the new common origin.", "These steps accomplish a single-center expansion necessary for the subsequent rotation in search of best matches of two molecular scaffold densities.", "We now obtain a compact expression of expansion coefficients and spherical harmonics, where the term $(2l+1)$ cancels out: $\\begin{aligned}\\rho _\\text{mol}(\\mathbf {r}; \\lbrace \\mathbf {R}_I \\rbrace , \\alpha )&= 4 \\pi \\sum ^M_{I}\\exp (-\\alpha (r^2 + R_{I}^2))\\sum _{l=0}^{\\infty }\\sum _{m=-l}^{+l}i_{l}\\left(2 \\alpha r R_{I}\\right)Y_{l m}^{*}(\\theta _{\\mathbf {R}_I},\\phi _{\\mathbf {R}_I})Y_{l m}(\\theta ,\\phi ) \\\\&= \\sum ^M_{I} \\sum _{l m} c_{l m}^{I}(r; \\mathbf {R}_I, \\alpha ) Y_{l m}(\\theta ,\\phi )\\end{aligned}$ with $c_{l m}^{I}(r;\\mathbf {R}_I, \\alpha )\\equiv 4 \\pi \\exp \\left(-\\alpha \\left(r^2+R_{I}^2\\right)\\right)i_{l}\\left(2 \\alpha r R_{I}\\right)Y_{l m}^{*}(\\theta _{\\mathbf {R}_I}, \\phi _{\\mathbf {R}_I} ) \\ .$ where $\\sum _{lm}$ denotes $\\sum _l^\\infty \\sum _{m=-l}^{l}$ .", "Note that the angular information of the nuclear position is stored in the spherical harmonics $Y_{l m}^{*}(\\theta _{\\mathbf {R}_I}, \\phi _{\\mathbf {R}_I} )$ .", "It is exactly these functions that we will need to rotate against each other to generate best overlaps over all possible rotations of the two molecular scaffold densities to be compared.", "This is due to the fact that the density field is defined in terms of Gaussian functions located at the positions of the nuclei, which therefore need to be rotated.", "We emphasize this point because it is important for our SOED derivation below and because it appears somewhat obscured in the original paper[5].", "The overlap between these two densities, $\\rho _\\text{mol}(\\mathbf {r};\\mathbf {R}_I, \\alpha )$ and $\\rho _\\text{mol}^\\prime (\\mathbf {r};\\mathbf {R}_{I^\\prime }, \\alpha )$ , is defined as the inner product of their densities, $S(\\rho _\\text{mol}, \\rho _\\text{mol}^{\\prime })= \\langle \\rho _\\text{mol} \\mid \\rho _\\text{mol}^{\\prime } \\rangle = \\int _{\\mathbb {R}^3} \\mathrm {d}\\mathbf {r} \\rho _\\text{mol} \\rho _\\text{mol}^{\\prime } \\ ,$ where we omitted the parameter dependence for the sake of clarity and $\\mathrm {d}\\mathbf {r} = \\mathrm {d}x \\, \\mathrm {d}y \\, \\mathrm {d}z$ .", "The center of mass of both molecules matches the origin of the coordinate system, leaving rotational freedom around three Euler angles, $\\alpha ,\\beta ,\\gamma $ .", "Hence, in search for optimal overlaps of the two molecular-scaffold densities we need to integrate over all possible rotations.", "[5] This procedure automatically guarantees rotational invariance.", "A rotation from the rotation group, $\\mathrm {SO}(3)$ , can be written as a matrix, 
$\\hat{R} \\equiv \\hat{R}(\\alpha , \\beta , \\gamma ) \\ ,$ where $\\alpha , \\beta , \\gamma $ are the Euler angles.", "This rotation operator needs to be integrated over SO(3) when acting on a density, i.e., all possible rotations of the density must be considered.", "The volume element on SO(3) integrating over all possible Euler angles yields the measure $\\mathrm {d}\\Omega = \\frac{1}{8 \\pi ^{2}} \\, \\sin \\beta \\, \\mathrm {d}\\alpha \\, \\mathrm {d}\\beta \\, \\mathrm {d}\\gamma $ with $\\alpha = [0, 2 \\pi )$ , $\\beta = [0, \\pi )$ , and $\\gamma = [0, 2 \\pi )$ .", "We now need to rotate the spherical harmonic $Y_{l m}(\\theta _{\\mathbf {R}_I}, \\phi _{\\mathbf {R}_I})$ in the expansion in Eq.", "(REF ).", "An arbitrary rotation operator, $\\hat{R}$ , operating on a spherical harmonic yields a linear combination of new spherical harmonics with a different magnetic quantum number $m$ and the elements of a Wigner D-matrix as expansion coefficients.", "With this identity, we will rotate the angular information of the nuclear positions (since the radial part is separated) according to $\\hat{R} Y_{l m}(\\theta _{\\mathbf {R}_I}, \\phi _{\\mathbf {R}_I})=\\sum _{m^{\\prime }=-l}^{l}D_{m m^{\\prime }}^{l}(\\hat{R})Y_{l m^{\\prime }}(\\theta _{\\mathbf {R}_I}, \\phi _{\\mathbf {R}_I}) \\ .$ The elements of the Wigner matrices are given by $D_{m m^{\\prime }}^{l}(\\hat{R})=\\left\\langle Y_{l m}(\\theta _{\\mathbf {R}_I}, \\phi _{\\mathbf {R}_I})\\left| \\, \\hat{R} \\, \\right|Y_{l m^{\\prime }}(\\theta _{\\mathbf {R}_I}, \\phi _{\\mathbf {R}_I})\\right\\rangle \\ .$ This is convenient as the rotation happens in a very compact manner without notational overhead usually involved in rotations.", "Furthermore, we will see later how multiple Wigner D-matrices will cancel each other out.", "When considering the overlap of two molecular-scaffold densities with one of them being rotated, $\\begin{aligned}S\\left(\\rho _\\text{mol}, \\hat{R}\\rho _\\text{mol}^{\\prime }\\right)= \\langle \\rho _\\text{mol} \\mid \\hat{R} \\mid \\rho _\\text{mol}^{\\prime } \\rangle = \\int _{\\mathbb {R}^3} d\\mathbf {r} \\rho _\\text{mol} \\hat{R} \\rho _\\text{mol}^{\\prime }\\end{aligned}$ we will have to integrate over all Euler angles $\\alpha ,\\beta ,\\gamma $ , which will yield the rotationally invariant kernel, $\\begin{aligned}k\\left(\\rho _\\text{mol}, \\rho _\\text{mol}^{\\prime }; n\\right)=\\int _{\\alpha ,\\beta ,\\gamma }\\left|\\int _{\\mathbb {R}^3} d\\mathbf {r} \\rho _\\text{mol} \\hat{R} \\rho _\\text{mol}^{\\prime }\\right|^{n}\\frac{1}{8 \\pi ^{2}} \\sin \\beta \\,\\mathrm {d}\\alpha \\, \\mathrm {d} \\beta \\, \\mathrm {d}\\gamma \\\\\\end{aligned}$ where we follow the original paper[5] and artificially introduced the power $n$ as a parameter to be considered later; for now, we set it equal to 1.", "As we will see in the following, this integral over Euler angles can be evaluated analytically with help of the Wigner D-matrices and will not require us to deal with the explicit rotation algebra as shown in Eq.", "(REF ).", "We substitute the expanded density of Eq.", "(REF ) into the overlap equation, Eq.", "(REF ), to obtain the overlap of the two molecular-scaffold densities, where we exploit the rotation identity of Eq.", "(REF ), $S\\left(\\rho _\\text{mol}, \\hat{R} \\rho _\\text{mol}^{\\prime }\\right)&=& \\int d \\mathbf {r} \\rho _\\text{mol} \\hat{R}\\rho _\\text{mol}^{\\prime } \\nonumber \\\\&=& \\int d \\mathbf {r}\\rho _\\text{mol}\\Big [\\sum _{I^{\\prime 
}}^{M^{\\prime }}\\sum _{l^{\\prime } m^{\\prime }}4 \\pi \\exp [-\\alpha _{I^{\\prime }}(r^2+R_{I^{\\prime }}^2)]i_{l^{\\prime }}(2 \\alpha _{I^{\\prime }} r R_{I^{\\prime }})\\hat{R}Y_{l^{\\prime } m^{\\prime }}^{*}(\\Omega _{\\mathbf {R}_{I^{\\prime }}})Y_{l^{\\prime } m^{\\prime }}(\\Omega )\\Big ] \\nonumber \\\\&=& \\int d \\mathbf {r}\\rho _\\text{mol}\\Big [\\sum _{I^{\\prime }}^{M^{\\prime }}\\sum _{l^{\\prime } m^{\\prime }}4 \\pi \\exp [-\\alpha _{I^{\\prime }}(r^2+R_{I^{\\prime }}^2)]i_{l^{\\prime }}(2 \\alpha _{I^{\\prime }} r R_{I^{\\prime }}) \\nonumber \\\\&&\\times \\sum _{m^{\\prime \\prime }=-l^{\\prime }}^{l^{\\prime }}D_{m^{\\prime } m^{\\prime \\prime }}^{l^{\\prime }}(\\hat{R})Y_{l^{\\prime } m^{\\prime \\prime }}^{*}(\\Omega _{\\mathbf {R}_{I^{\\prime }}})Y_{l^{\\prime } m^{\\prime }}(\\Omega )\\Big ]$ where a new coefficient, analogously to Eq.", "(REF ) with different indices, $c_{l^{\\prime } m^{\\prime \\prime }}^{I^{\\prime }} =4 \\pi \\exp [-\\alpha _{I^{\\prime }}(r^2+R_{I^{\\prime }}^2)]i_{l^{\\prime }}(2 \\alpha _{I^{\\prime }} r R_{I^{\\prime }})Y_{l^{\\prime } m^{\\prime \\prime }}^{*}(\\Omega _{\\mathbf {R}_{I^{\\prime }}})$ can be employed to collect many of the terms.", "Substituting this coefficient and separating the integral into a radial and a spherical part, we obtain $S\\left(\\rho _\\text{mol}, \\hat{R} \\rho _\\text{mol}^{\\prime }\\right)&=& \\sum _{I, I^{\\prime }}^{M, M^{\\prime }}\\sum _{l, m \\atop l^{\\prime }, m^{\\prime }}\\sum _{m^{\\prime \\prime }=-l^{\\prime }}^{l^{\\prime }}D_{m^{\\prime }m^{\\prime \\prime }}^{l^{\\prime }}(\\hat{R})\\underbrace{\\int _{r=0}^{\\infty } \\mathrm {d}rr^2c_{l m}^{I *}c_{l^{\\prime } m^{\\prime \\prime }}^{I^{\\prime }}}_{\\mathrm {radial}}\\nonumber \\\\&&\\times \\underbrace{\\int _{\\theta =0}^{2\\pi }\\int _{\\phi =0}^{\\pi }\\sin \\phi \\, \\mathrm {d}\\phi \\, \\mathrm {d}\\theta \\,Y_{l m}^{*}(\\Omega )Y_{l^{\\prime } m^{\\prime }}(\\Omega )}_{\\mathrm {spherical}}$ where we abbreviated the angular information according to $\\Omega = (\\theta ,\\phi )$ and $\\Omega _{\\mathbf {R}_{I}} = (\\theta _{\\mathbf {R}_{I}}, \\phi _{\\mathbf {R}_{I}})$ .", "Recall that we also chose to have a general $\\alpha _I$ for the unprimed and $\\alpha _{I^{\\prime }}$ for the primed molecular scaffolds.", "Now, the integral in Eq.", "(REF ) must be evaluated.", "Since the spherical harmonics are orthonormal by definition, the spherical part in Eq.", "(REF ) evaluates to $\\int _{\\theta =0}^{2\\pi }\\int _{\\phi =0}^{\\pi }\\sin \\phi \\, \\mathrm {d}\\phi \\, \\mathrm {d}\\theta \\,Y_{l m}^{*}(\\Omega )Y_{l^{\\prime } m^{\\prime }}(\\Omega )= \\delta _{ll^{\\prime }} \\delta _{m m^{\\prime }} \\ .$ This simplifies the last line in Eq.", "(REF ) to the following, $S\\left(\\rho _\\text{mol}, \\hat{R} \\rho _\\text{mol}^{\\prime }\\right) =\\sum _{I, I^{\\prime }}\\sum _{l, m}\\sum _{m^{\\prime \\prime }=-l}^{l}D_{mm^{\\prime \\prime }}^{l}(\\hat{R})\\int _{r=0}^\\infty \\mathrm {d} r r^2 c_{l m}^{I *} c_{l m^{\\prime \\prime }}^{I^{\\prime }}\\ .$ setting $m \\leftarrow m^{\\prime }$ and $l \\leftarrow l^{\\prime }$ .", "The radial part of the integral in Eq.", "(REF ) and Eq.", "(REF ) is non-trivial.", "We first expand it to move the quantities independent of the position before the integral, $\\begin{aligned}\\int _{r=0}^\\infty \\mathrm {d}rr^2c_{l m}^{I*}c_{l m^{\\prime \\prime }}^{I^{\\prime }}&=(4 \\pi )^2\\exp (-(\\alpha _{I^{\\prime }}R_{I^{\\prime }}^2 + \\alpha _{I}R_{I}^2))Y_{l m}\\left(\\Omega 
_{\\mathbf {R}_{I}}\\right)Y_{l^{\\prime } m^{\\prime \\prime }}^{*}\\left(\\Omega _{\\mathbf {R}_{I^{\\prime }}}\\right) \\\\& \\times \\int _{r=0}^\\infty \\mathrm {d}rr^2\\exp (-(\\alpha _{I^{\\prime }} + \\alpha _{I}) r^2 )i_l (2 \\alpha _I r R_I)i_{l^{\\prime }}(2 \\alpha _{I^{\\prime }} r R_{I^{\\prime }})\\end{aligned}$ where the integral is solved as $\\begin{aligned}\\int _{r=0}^\\infty &\\mathrm {d}rr^2\\exp (-(\\alpha _{I^{\\prime }} + \\alpha _{I}) r^2 )i_l (2 \\alpha _I r R_I)i_{l^{\\prime }}(2 \\alpha _{I^{\\prime }} r R_{I^{\\prime }}) \\\\&=\\frac{1}{4}\\left(\\frac{\\pi }{(\\alpha _I+\\alpha _{I^{\\prime }})^{3}}\\right)^{1 / 2}i_{l}\\left(2 \\frac{\\alpha _{I} \\alpha _{I^{\\prime }}}{\\alpha _{I}+\\alpha _{I^{\\prime }}}R_{I} R_{I^{\\prime }}\\right)\\exp \\left(\\frac{\\alpha _{I}^{2} R_{I}^{2} + \\alpha _{I^{\\prime }}^{2} R_{I^{\\prime }}^{2}}{\\alpha _{I}+\\alpha _{I^{\\prime }}}\\right)\\end{aligned}$ according to Weber.", "[42], [43] The prefactor in Eq.", "(REF ) can be combined with the result from the integral Eq.", "(REF ).", "If we had considered the two normalization constants $N_\\mathrm {mol}=(\\frac{2 \\alpha }{\\pi })^{3/4}$ that we omitted from Eq.", "(REF ), we would have obtained the same overlap as Kaufmann and Baumeister,[44] $S\\left(\\rho _\\text{mol}, \\hat{R} \\rho _\\text{mol}^{\\prime }\\right)=\\sum _{I, I^{\\prime }}\\sum _{l, m, m^{\\prime \\prime }}D_{mm^{\\prime \\prime }}^{l}(\\hat{R})\\tilde{I}_{mm^{\\prime \\prime }}^l(\\alpha _I, \\alpha _{I^{\\prime }}, \\mathbf {R}_{I}, \\mathbf {R}_{I^{\\prime }})$ with $\\begin{aligned}\\tilde{I}_{mm^{\\prime \\prime }}^l(\\alpha _I, \\alpha _{I^{\\prime }}, \\mathbf {R}_{I}, \\mathbf {R}_{I^{\\prime }})&=4 \\pi \\left(2 \\frac{(\\alpha _I \\alpha _{I^{\\prime }})^{1 / 2}}{\\alpha _I+\\alpha _{I^{\\prime }}}\\right)^{3 / 2}\\exp \\left(-\\frac{\\alpha _I \\alpha _{I^{\\prime }}}{\\alpha _I+\\alpha _{I^{\\prime }}}(R_{I}^{2}+R_{I^{\\prime }}^{2})\\right) \\\\& \\times i_{l}\\left(2 \\frac{\\alpha _I \\alpha _{I^{\\prime }}}{\\alpha _I+\\alpha _{I^{\\prime }}}R_{I} R_{I^{\\prime }}\\right)Y_{l m}(\\Omega _{\\mathbf {R}_{I}})Y_{l^{\\prime } m^{\\prime \\prime }}^{*}(\\Omega _{\\mathbf {R}_{I^{\\prime }}}) \\ .\\end{aligned}$ For $\\alpha \\leftarrow \\alpha _I$ and $\\alpha \\leftarrow \\alpha _{I^{\\prime }}$ , i.e.", "$\\tilde{I}(\\alpha , \\alpha , \\mathbf {R}_{I}, \\mathbf {R}_{I^{\\prime }})$ we recover the result obtained in the original SOAP paper,[5] which was denoted there $\\tilde{I}_{m m^{\\prime \\prime }}^{l}\\left(\\alpha , \\mathbf {R}_{I}, \\mathbf {R}_{I^{\\prime }}\\right)$ .", "The prefactor on the original paper, $\\sqrt{\\frac{2 \\pi ^{5}}{\\alpha ^{3}}}$ , as noted in the erratum[45], originates from the missing normalization constants, $N_\\mathrm {mol}$ .", "If they were included, that new prefactor would not have been necessary, as we can easily verify by $\\sqrt{\\frac{2 \\pi ^{5}}{\\alpha ^{3}}} N_\\mathrm {mol}^2 = 4\\pi $ , and where $4 \\pi $ is their original prefactor.", "As already noted in the erratum[45], this does not produce an error, as it gets cancelled at the normalization step.", "In the original paper[5], the authors defined a term for the sum over all pairwise interactions of the integral $I_{m m^{\\prime \\prime }}^{l}\\equiv \\sum _{I, I^{\\prime }}\\tilde{I}_{m m^{\\prime \\prime }}^{l}(\\alpha , \\mathbf {R}_{I}, \\mathbf {R}_{I^{\\prime }}) .$ to obtain a slightly more succinct form $S\\left(\\rho _\\text{mol}, \\hat{R} \\rho _\\text{mol}^{\\prime 
}\\right)=\\sum _{l, m, m^{\\prime \\prime }} I_{m m^{\\prime \\prime }}^{l} D_{m m^{\\prime \\prime }}^{l}(\\hat{R})$ which we will not adapt here as it obfuscates the sums that nicely emphasize the series expansion over the indices $l$ , $m$ , and $m^{\\prime }$ as well as the pairwise nuclear interaction over the indices $I$ and $I^{\\prime }$ .", "We recall from Eq.", "(REF ) that the integral of the overlap defines the kernel.", "For $n=2$ , the rotationally invariant kernel is $\\begin{aligned}k\\left(\\rho _\\text{mol}, \\rho _\\text{mol}^{\\prime };2\\right)&=\\int \\left| S\\left(\\rho _\\text{mol}, \\hat{R} \\rho _\\text{mol}^{\\prime }\\right) \\right| ^2 \\mathrm {d}\\Omega \\\\&=\\int \\mathrm {d} \\Omega \\,S\\left(\\rho _\\text{mol}, \\hat{R} \\rho _\\text{mol}^{\\prime }\\right)^\\ast S\\left(\\rho _\\text{mol}, \\hat{R} \\rho _\\text{mol}^{\\prime }\\right) \\\\&=\\sum _{I, I^{\\prime }}\\sum _{l, m, m^{\\prime } \\atop \\lambda , \\mu , \\mu ^{\\prime }}\\tilde{I}_{m m^{\\prime }}^{l*}\\tilde{I}_{\\mu \\mu ^{\\prime }}^{\\lambda }\\underbrace{\\int \\mathrm {d}\\Omega \\,D_{m m^{\\prime }}^{l}(\\hat{R})^{*}D_{\\mu \\mu ^{\\prime }}^{\\lambda }(\\hat{R})}_{\\text{Wigner's orth.", "relation}}\\\\&=\\sum _{I, I^{\\prime }}\\sum _{l, m, m^{\\prime }}\\dfrac{1}{2l+1} \\tilde{I}_{m m^{\\prime }}^{l*} \\tilde{I}_{m m^{\\prime }}^{l}\\end{aligned}$ where we used new indices to avoid the doubly primed $m^{\\prime \\prime }$ and where by virtue of Wigner's orthogonality relation, we have $\\int _{0}^{2 \\pi } \\mathrm {d} \\alpha \\int _{0}^{\\pi } \\mathrm {d} \\beta \\sin \\beta \\int _{0}^{2 \\pi }& \\mathrm {d} \\gamma &\\frac{1}{8 \\pi ^2}D_{m^{\\prime } k^{\\prime }}^{j^{\\prime }}(\\hat{R}(\\alpha , \\beta , \\gamma ))^{*}D_{m k}^{j}(\\hat{R}(\\alpha , \\beta , \\gamma )) \\nonumber \\\\&=&\\frac{1}{2 j+1} \\delta _{m^{\\prime } m} \\delta _{k^{\\prime } k} \\delta _{j^{\\prime } j}$ We see that the Greek indices collapse with the Latin ones, $m \\leftarrow \\mu $ , $m^{\\prime } \\leftarrow \\mu ^{\\prime }$ , and $l \\leftarrow \\lambda $ .", "The kernel is eventually defined with a power $\\zeta $ , which is also a hyperparameter as it steers the sensitivity to the kernel changing the atomic positions and we set to unity to not obfuscate the results further.", "Finally, a normalization is introduced, $K\\left(\\rho _\\text{mol}, \\rho _\\text{mol}^{\\prime }; n, \\zeta \\right)=\\left(\\frac{k\\left(\\rho _\\text{mol}, \\rho _\\text{mol}^{\\prime };n\\right)}{\\sqrt{k(\\rho _\\text{mol}, \\rho _\\text{mol}; n) k\\left(\\rho _\\text{mol}^{\\prime }, \\rho _\\text{mol}^{\\prime };n\\right)}}\\right)^{\\zeta }$ where each density is depending on the width $\\alpha $ of the Gaussians and on the positions of the atoms, $\\lbrace \\mathbf {R}_I\\rbrace $ .", "Due to the quadratic scaling of the integrals (for each pair of atoms in the scaffold), it can become inefficient for larger molecules.", "This is certainly true for very large molecules.", "A remedy for the quadratic scaling is proposed that involves an approximation in terms of an expansion using radial basis functions.", "However, no systematic study has every scrutinized this radial-basis approximation versus the analytic solution, apart from an initial discussion in the original paper[5].", "A further analysis of the accuracy of the power spectrum or engineered adversarial problems is beyond the scope of this analysis." 
], [ "Electron Density Based Comparison: SOED", "Instead of attaching a Gaussian function to each atomic position to introduce a fuzzy atomic core as a component of a molecular scaffold density as in Eq.", "(REF ), one is tempted to exploit the electron density of a molecule for the assessment of molecular similarity.", "Not only does this quantity relate to the electronic energy through the Hohenberg-Kohn theorem[31], it is also an observable that is accessible in diffraction experiments and from any quantum chemical method.", "Moreover, it also includes a representation of the atomic cores because its maxima indicate the nuclear positions (and even the nuclear charge by virtue of the Kato cusp condition).", "In addition to this information about the molecular scaffold, the electron density encodes information about the electronic wave function in the valence regions – although their peculiarities are in the tiny details and might require derived fields (such as the Laplacian[46]) to be clearly visible.", "In this context, it is important to emphasize that the electron density is typically calculated from absolute squares of molecular orbitals which are decomposed into atomic orbitals centered on the atomic nuclei.", "These atomic orbitals can then be represented by standard Gaussian functions available from a basis set library.", "In this regard, to employ the electron density is even on a technical level very similar to the SOAP scheme based on the molecular scaffold density — although the derivation will be far more difficult as we will see in the following.", "Hence, the electron density could be taken as a replacement for the molecular scaffold density to represent structural information as does SOAP (best seen with width parameters, $\\alpha _I$ , that are different for every atomic core), but now also to encode electronic information.", "Already the simplest electronic structure model will allow us to combine the atomic orbitals located at the nuclear positions linearly to yield molecular orbitals and to then yield the electron density, in complete analogy to Eq.", "(REF ).", "However, note that the molecular orbitals themselves cannot be used to replace the electron density or the molecular scaffold density because the coefficients in the LCAO expansion can take negative values and, in a superposition of molecular orbitals, these molecular orbital coefficients would cancel each other and all information about individual molecular orbitals will be lost.", "Obviously, this problem does not occur for the electron density that is taken as a weighted sum of the absoulte squares of the molecular orbitals (see below).", "If the additional electronic structure information encoded in the electron density can be harnessed, it might be better suited as a descriptor than SOAP, which only encodes the molecular scaffold.", "A similar reasoning has led Carbó[47] in the context of cheminformatics to develop a similarity measure based on the overlap of electron density, which is sometimes called the Carbo index $r_{A B}$ , $r_{A B}=\\frac{\\int _{V} \\rho _{A} \\rho _{B} d {\\bf r} }{\\left(\\int _{V} \\rho _{A}^{2} d {\\bf r}\\right)^{1 / 2}\\left(\\int _{V} \\rho _{B}^{2} d {\\bf r}\\right)^{1 / 2} }$ and therefore reminiscent of Eq.", "(REF ) in the SOAP formalism.", "The derivation, where the first steps are similar to ours, is presented in the Appendix of Ref.", "carbo1980 but leaves out the key step of density rotations introduced in SOAP.", "The optimum overlap of $r_{AB}$ is numerically 
calculated whereas we aim at an analytical solution.", "However, bringing electron densities in a rotatable form in a single-center picture leads to significant mathematical overhead as we shall see in the following.", "Since we assume that we will always have results for a simple electronic structure model available from which we may take test data, we consider a single Slater determinant model.", "For such an ansatz, which is the basis of Kohn-Sham DFT, Hartree-Fock theory, and any approximate Hartree-Fock model, the electron density is the square of $w$ molecular orbitals, $\\phi _i(\\mathbf {r})$ , where $\\mathbf {r}$ is the coordinate of the electron, $\\rho _\\text{el}(\\mathbf {r})=\\sum _{i=1}^{w}n_i\\left|\\phi _i(\\mathbf {r})\\right|^{2}$ with occupation numbers $n_i$ (that may be taken to be one in an unrestricted framework, where $w$ will then be identical to the number of electrons $N$ ).", "The MOs are usually expanded as a linear combination of atomic orbitals (LCAO), $\\phi _{i}(\\mathbf {r})= \\sum _A^M \\sum _{\\mu @ A} c^{(i)}_{\\mu } \\chi _{\\mu } (\\mathbf {r}) \\ ,$ as in Eq.", "(REF ), but now with an explicit notion of the atom ('$A$ ') on which a function $ \\chi _{\\mu }$ is centered.", "In other words, the above expression introduces explicitly a sum over these atomic centers of the basis functions, which we may denote with the German word 'Aufpunkt' in order to introduce a notion that allows for centers that are not identical with nuclear positions.", "Hence, there is still a total of $m$ basis functions (i.e., the atomic orbitals (AOs) in this linear combination of atomic orbitals), $\\chi _\\mu (\\mathbf {r})$ , and their corresponding coefficients are $c^{(i)}_{\\mu }$ .", "Gaussian orbitals (GTOs) employed as AO are defined as $\\chi _\\mu (\\mathbf {r}_{A})=N_{\\mu }f_\\mu (\\mathbf {r}_{A})g_\\mu (\\mathbf {r}_{A})$ with the polynomial $f_\\mu (\\mathbf {r}_{A})=\\left|\\mathbf {r}_{A} \\right|^{L_\\mu }Y_{L_\\mu M_\\mu } \\left(\\Omega _{\\mathbf {r}_A}\\right) \\ ,$ the Gaussian $g_\\mu (\\mathbf {r}_{A})=\\exp \\left(-\\zeta _\\mu \\left|\\mathbf {r}_{A}\\right|^{2} \\right) \\ ,$ and with the normalization constant $N_{\\mu }$ .", "Furthermore, we have the local vector from nucleus $A$ at $\\mathbf {R}_{A}$ to the field variable $\\mathbf {r}$ (see Figure REF ), i.e., $\\mathbf {r}_{A} = \\vert \\mathbf {r} - \\mathbf {R}_{A} \\vert $ , the spherical harmonic, $Y_{L_\\mu M_\\mu } (\\Omega _{\\mathbf {r}_A})$ , which is also centered at the position of nucleus $A$ and hence in the local coordinate system $\\mathbf {e}^{\\prime }_{x_A}$ , $\\mathbf {e}^{\\prime }_{y_A}$ , and $\\mathbf {e}^{\\prime }_{y_A}$ , with orbital quantum number (degree) $L_\\mu $ and magnetic quantum number (order) $M_\\mu $ , and polar angles $\\Omega _{\\mathbf {r}_A} = (\\theta _{\\mathbf {r}_A}, \\phi _{\\mathbf {r}_A})$ with respect to nucleus at $\\mathbf {R}_{A}$ , and effective nuclear charge $\\zeta _\\mu $ .", "In the following, we omit the variable dependence for the sake of brevity, i.e., $\\chi _\\mu \\equiv \\chi _\\mu (\\mathbf {r})$ .", "As before, we will have to bring this multi-centered approach into a framework where all angles are to be taken with respect to a single center, which facilitates the rotation operation and which is taken to be the center of mass.", "Substituting the AO expansion, Eq.", "(REF ), into the density expression, Eq.", "(REF ), we obtain $\\rho _\\text{el}(\\mathbf {r}) =\\sum _{i}^{w}\\sum _{A, B}^{M}\\sum _{\\mu @ A \\atop \\nu @ 
B}n_ic^{(i)}_\\mu c^{(i)}_\\nu \\chi _\\mu \\chi _\\nu $ with basis functions (i.e., 'AOs') $\\chi _\\mu $ and $\\chi _\\nu $ and expansion coefficients $c^{(i)}_\\mu $ and $c^{(i)}_\\nu $ .", "As usual, the introduction of a density matrix, $D_{\\mu \\nu }= \\sum _{i}^{w}n_i c^{(i)}_\\mu c^{(i)}_\\nu $ (which is not to be confused with the Wigner D-matrices) brings the expression for the electron density into a more compact form, $\\rho _\\text{el}(\\mathbf {r}) =\\sum _{A, B}^{M}\\sum _{\\nu @ A \\atop \\nu @ B}D_{\\mu \\nu }\\chi _\\mu \\chi _\\nu \\ .$ Moreover, for notational convenience we introduce a product GTO, $\\begin{split}\\chi _{\\mu \\nu }(\\mathbf {r}_A,\\mathbf {r}_B)= N_{\\mu \\nu }f_\\mu (\\mathbf {r}_{A})f_\\nu (\\mathbf {r}_{B})g_{\\mu }(\\mathbf {r}_{A})g_{\\nu } (\\mathbf {r}_{B})\\end{split}$ with the normalization constant $N_{\\mu \\nu } = N_{\\mu }N_{\\nu }$ Since we want to rotate one of the electron densities with respect to the other, the expression must be made dependent on the rotation angles.", "However, as in the case of SOAP, the angles in the spherical harmonics are defined with respect to the local coordinate system at a nucleus, but must be defined with respect to a single center that will be the center of mass.", "For this, we follow the derivation of Kaufmann and Baumeister[44] here.", "However, the result from the exact derivation, which we sketch in the supporting information, is very long-winded and turns out to be computationally costly to evaluate.", "Therefore, we propose two ways of approximating the GTO in order to simplify the expression: First, we neglect all products of higher order spherical harmonics; e.g., no product of two $p$ functions was considered, if they are positioned at different nuclei.", "Whereas this simplifies the derivation, it is still very involved.", "Therefore, we eliminate the polynomial factors by replacing all $p$ basis functions with lobe functions, i.e., $s$ -type functions with shifted centers that resemble functions of higher angular momentum quantum number (see supporting information for details on how these new Aufpunkte were determined by starting from the nuclear positions).", "This is also possible for higher-orbital-momentum functions such as $d$ functions.", "In this way, we generate an all-$s$ -type basis set that sufficiently well represents the electron density and for which the calculated MO coefficients can be inherited.", "Note that, from a technical point of view, the new Aufpunkte of these lobe functions are then to be treated as (new artificial) nuclear positions in the evaluation procedure for SOAP and, therefore, SOED can now be evaluated with a SOAP-type procedure.", "By virtue of the Gaussian product theorem, each product of the two Gaussians $\\chi _\\mu $ and $\\chi _\\nu $ in Eq.", "(REF ) becomes another Gaussian.", "Hence, we can collapse the double indices $\\mu \\nu $ in Eq.", "(REF ) into a single index, which we call $\\check{I}$ in order to associate it to the corresponding expression in the SOAP derivation: since we consider a lobe-basis of $s$ -functions only, we obtain the electron density as an expansion into $s$ -functions only with Aufpunkte given at positions denoted by $\\check{I}$ , some of which are actual nuclear positions: $\\rho _{\\mathrm {el}}(\\mathbf {r})=\\sum _{\\check{I}}^{\\check{m}} D_{\\check{I}} \\chi _{\\check{I}}$ All of these positions $\\check{I}$ are subject to the SOAP rotation and overlap procedures and can be treated like ghost atoms.", "Hence, we 
have recovered a generalized version of SOAP with weights $D_{\check{I}}$ in front of the basis functions $\check{I}$ , each of which will, in general, carry a different exponent (rather than a common exponent $\alpha $ that is the standard choice for SOAP).", "Recall that the weights $D_{\check{I}}$ contain products of MO coefficients and occupation numbers (or first-order reduced density matrices), which can be determined in an electronic structure calculation (e.g., in a Hartree-Fock calculation).", "Of course, this will require storing electronic structure data in addition to the nuclear coordinates of a molecule, but that should not present a hurdle as the molecular structures will typically be optimized with an electronic structure method whose wave-function ingredients then simply need to accompany the Cartesian coordinates in a database in order to be exploited by a machine learning ansatz based on SOED.", "Finally, we obtain, by virtue of the direct analogy with SOAP, for the overlap of two electron densities, $k\left(\rho _{\mathrm {el}}, \rho _{\mathrm {el}}^{\prime } ; 2\right) =\sum _{\check{I}, \check{I}^{\prime }}^{\check{m},\check{m}^{\prime }} \sum _{l, m, m^{\prime }}\frac{D^2_{\check{I}} D^2_{\check{I}^{\prime }}}{2 l+1} \tilde{I}_{m m^{\prime }}^{l *} \tilde{I}_{m m^{\prime }}^{l}$" ], [ "Numerical Comparison of SOAP and SOED", "We show in Figure REF a comparison of the SOAP and SOED similarity results for the reactions in Figure REF .", "On both axes are the reactant (R), the TS (T), and the product (P) of the reaction.", "The diagonal is 1, as self-similarity is perfect by design.", "The first row shows the SOAP and the second row the SOED results.", "As one can see, the electron density based descriptor is more sensitive to the reaction progress, because it drops from reactant to TS more strongly than traditional SOAP.", "Again, the reactant and the TS are more similar in all descriptors shown than TS and product, implying an early TS, as we already found for the Coulomb matrix and the energy diagram.", "However, product and TS are less similar than product and reactant, which seems unexpected at first sight.", "We note, though, that this is actually reasonable if a purely structure based descriptor that is largely independent of directional information (owing to the radial Gaussians involved in SOAP) favors similarity in terms of equilibrium bond lengths over similarity between stretched and equilibrium bond lengths in one constitutional isomer.", "Figure: Comparison of SOAP (top panel) and SOED (bottom panel) for the three test reactions; 'R' denotes the reactant, 'T' the transition state, and 'P' the product.", "One reason for the larger dissimilarity obtained for the SOED kernel is the fact that the parameter in the exponent of the Gaussian functions is no longer a hyperparameter.", "By contrast to SOAP with a fixed value $\alpha $ for all nuclei, the analogous parameter in a Gaussian basis set that describes the atomic orbitals is fixed to represent these orbitals.", "In order to demonstrate how sensitive SOAP is in this respect, Figure REF shows a comparison of SOAP results obtained for the three reactions with varying $\alpha $ : $\alpha =1$ , $\alpha =10$ , and $\alpha =100$ (recall that the standard value is $\alpha =0.4$ ).", "It is apparent from Figure REF that, when rotating one SOAP scaffold density with respect to the other, narrower peaks will have less overlap compared to when they are spread out.",
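This $\alpha$ dependence can be probed directly with the illustrative soap_kernel sketch from the previous section; the toy geometry and the distortion below are made up solely for demonstration and do not correspond to any of the reactions discussed here.

```python
import numpy as np

# A made-up three-atom scaffold (arbitrary length units), centered at its mean,
# and a slightly distorted copy of it.
coords = np.array([[0.00, 0.00, 0.0],
                   [0.96, 0.00, 0.0],
                   [-0.24, 0.93, 0.0]])
coords -= coords.mean(axis=0)
perturbed = coords + 0.05 * np.array([[1, 0, 0], [0, -1, 0], [-1, 1, 0]])
perturbed -= perturbed.mean(axis=0)

# The normalized kernel is expected to decrease as alpha grows, i.e., as the
# Gaussians become narrower and the overlap more sensitive to the distortion.
for alpha in (0.4, 1.0, 10.0, 100.0):
    k = soap_kernel(coords, perturbed, alpha=alpha, l_max=5)
    print(f"alpha = {alpha:6.1f}   normalized kernel = {k:.4f}")
```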
"Figure: Comparison of SOAP for three different values of the common exponent α\\alpha for the three test reactions; 'R' denotes the reactant, 'T' the transitions state, and 'P' the product." ], [ "Conclusions", "In this work, we considered an elementary expression for the electronic energy that allowed us to discuss two widely used descriptors of molecular similarity in machine learning from the point of view of electronic structure theory: Coulomb matrix and SOAP.", "We showed how to ground their definitions into electronic structure theory by (i) introducing Coulomb lists that allowed us to scrutinize the rather arbitrary diagonal entries of the Coulomb matrix and its intransparent diagonalization step and by (ii) relating the fuzzy density that encodes molecular structure for SOAP to the actual electron density, which then also carries electronic structure information directly into the descriptor.", "Our formal discussion was accompanied by a single example that served the purpose to illustrate the results one obtains with the standard descriptors Coulomb matrix and SOAP and with our new descriptors Coulomb list and SOED.", "The single example was chosen to provide structures connected through an elementary reaction step: reactant, transition state, product; i.e., structures that are rather similar by definition and that occur in chemical reaction networks.", "This example allowed us to study the variation of the descriptors along a coordinate that connects the three types of structures.", "Whereas structural change is therefore continuous, the electronic structure is different, which is the reason why the transistion state structure acquires a higher energy than the reactant and product structures.", "This situation therefore introduces a peculiar twist that would allow one to argue that the transition state struture is more similar to either product and reactant than product and reactant are similar to one another, while the difference in electronic structure prompts one to argue that the stable intermediates, i.e., product and reactant, should be more similar to one another with respect to the nature of their electronic structures.", "For the Coulomb list, we found that the $\\mathcal {F}_{\\text{n}}^{(J)}$ descriptor yields interpretable traces by contrast to the convoluted eigenvalues of the Coulomb matrix.", "At the same time, it contained the same information and can therefore replace the Coulomb-matrix eigenvalue features, alleviating also the need for the diagonalization step.", "We emphasize that traces of these features along reaction coordinates clearly correlated with the change in molecular structure and are able to identify the transition state structure in the feature.", "Hence, the electronic difference of the transition state compared to the stable intermediates (reactant and product) will be detectable in these features, if they are considered relative to one another.", "However, this was not possible for SOAP and SOED.", "We found that SOED is more sensitive than SOAP with the standard $\\alpha $ parameter.", "This is a consequence of the widths of the Gaussians that are present in the representation of the electron density.", "We found that narrowing the Gaussians that compose the SOAP kernel also increases sensitivity, so that SOAP, which is much easier to evaluate than SOED, could be used instead, but with a parameter $\\alpha $ in the exponent that is larger than the standard one.", "In future work, we will build upon these findings and elaborate on molecular 
similarity in the context of a huge number of molecular structures occuring in a large reaction network." ], [ "Acknowledgements", "Financial support by the Swiss National Science Foundation through project no.", "200021_182400 is gratefully acknowledged.", "The quantum chemical calculations for the model reactions were carried out with Orca 5.", "[48] We performed unrestricted Kohn–Sham PBE[49] structure optimizations with the SVP basis set with density fitting.", "[50] Our molecular similarity descriptors were then obtained from Hartree-Fock single-point calculations.", "The atomic descriptors $\\mathcal {F}^{(J)}_\\mathrm {n}$ and $\\mathcal {F}^{(J)}_\\mathrm {e}$ were obtained with the program Serenity[51], [52], which calculated the nuclear attraction integrals contracted with the density matrix elements and the nucleus–electron interaction with unrestricted Kohn–Sham PBE[49] and the def2-SVP basis set.", "[53] For SOED, we obtained the molecular orbital coefficients for the electron density in Hartree–Fock calculations with the tiny STO-3G basis set [54] with Gaussian[55], where the keyword Integral(SplitSP) had to be applied to not obtain S=P contracted orbitals but regular $s$ and $p$ orbitals.", "We note that despite its small size, this basis set already produces the main features of the electron density (non-isotropic local effects through minimal polarization by basis functions on atomic neighbors and an element-specific maximum of the density distribution at the various atomic nuclei) that makes SOED different from SOAP.", "We implemented Eq.", "(REF ), the SOAP kernel, in Mathematica[56] and this code is available from the authors.", "In addition, we established a python implementation for routine applications that will be made available open source.", "In our implementation, we used NumPy[57] array programming (vectorization).", "The data structure ndarray harnesses the CPU's SIMD (Single Instruction, Multiple Data) architecture for a significant speed-up compared to a loop-based implementation.", "Furthermore, we parallelized the calculation with the library Ray.", "[58] According to Eq.", "(REF ), the overlap for the SOAP procedure is dependent on an infinite sum that originates from the Rayleigh expansion in Eq.", "(REF ) and the rotation of the spherical harmonics as an expansion of Wigner-D matrices in Eq.", "(REF ), which converges uniformly.", "Even though we avoid the standard power spectrum approximation, which would introduce another approximation, we can only approach the exact overlap within numerical accuracy.", "As the authors of the original paper[5] noted, the pairwise evaluation for all $I$ and $I^{\\prime }$ leads to a big computational overhead.", "Since we approximated the $p$ -functions by (lobe) $s$ -functions, very many terms are created in this double sum.", "Our largest molecule from reaction 3 in Figure REF then requires 333 $s$ -functions.", "For this reason, it is hard to reach a high $l$ in the expansion, as $m$ and $m^{\\prime }$ range from $-l$ to $l$ in steps of one.", "Hence, for the results presented in Figure REF , the maximum value for $l$ was 3 for SOED and 5 for SOAP.", "In the original reference introducing SOAP[5], it had already been noted that the computation of SOAP can be expensive, since the terms inside the sums of Eq.", "REF have to be evaluated for each pairwise interaction of atoms.", "The sums over $l$ , $m$ , and $m^{\\prime }$ scale with $O(L_\\mathrm {max}^3)$ , where $L_\\mathrm {max}$ is the truncation of the infinite 
sum over $l$ , yielding an overall complexity of $O(M^2 L_\\mathrm {max}^3)$ .", "In SOED, Eq.", "(REF ), where the density is based on $m^2$ basis functions for each density, the evaluation becomes even more expensive as $O(m^4 L_\\mathrm {max}^3)$ , where the number of basis functions, $m$ , is in all practical cases much bigger than the number of atoms, $M$ .", "The straightforward way to treat this unfavorable scaling is to trade off accuracy for speed and reduce $L_\\mathrm {max}$ .", "For all but the smallest molecules, larger basis sets than those on the order of STO-3G will likely not be feasible.", "As in SOAP with the power spectrum, it might be possible to simplify the nested sum with some mathematical transformations to gain a speed-up." ] ]
2207.03599
[ [ "Robustness Evaluation of Deep Unsupervised Learning Algorithms for\n Intrusion Detection Systems" ], [ "Abstract Recently, advances in deep learning have been observed in various fields, including computer vision, natural language processing, and cybersecurity.", "Machine learning (ML) has demonstrated its ability as a potential tool for anomaly detection-based intrusion detection systems to build secure computer networks.", "Increasingly, ML approaches are widely adopted than heuristic approaches for cybersecurity because they learn directly from data.", "Data is critical for the development of ML systems, and becomes potential targets for attackers.", "Basically, data poisoning or contamination is one of the most common techniques used to fool ML models through data.", "This paper evaluates the robustness of six recent deep learning algorithms for intrusion detection on contaminated data.", "Our experiments suggest that the state-of-the-art algorithms used in this study are sensitive to data contamination and reveal the importance of self-defense against data perturbation when developing novel models, especially for intrusion detection systems." ], [ "Introduction", "Nowadays, information technology has become a central part of our daily activities, including communication, travel, manufacturing, banking, education, and work.", "While these technologies are beneficial to users, they also allow adversaries to conduct malicious activities on cyberinfrastructures.", "These malicious activities include but are not limited to information stealing, denial of service, manipulation of cyberinfrastructure components, and data poisoning.", "Cyber threats spare no one, from individuals, businesses, banks to governments.", "Therefore, cyberinfrastructures must be protected against these permanent and harmful cyber threats; firewalls, spam filters, anti-virus, and intrusion detection systems (IDS) form the ecosystem of network defense [12].", "Recent advances in machine learning (ML) play a pivotal role in cybersecurity and contribute to the progress of this field at a fast pace.", "Many ML techniques on anomaly detection (AD) have found their applications in cybersecurity.", "An anomaly is an observation that considerably deviates from what is deemed normal observations [1].", "Depending on the situation, such an observation is considered unusual, irregular, atypical, inconsistent, unexpected, rare, erroneous, faulty, fraudulent, malicious, etc [25], [7].", "AD approaches can be categorized as follows: probabilistic methods such as DAGMM [37], which estimate data probability density function and predict samples laying in the low-density region as anomalies; reconstruction based methods such as MemAE [14] and DUAD [19], which assume that normal data is compressible, and flag as anomalous, data samples that can not be reconstructed from their compression; distance based models such as LOF [6], which predict anomalies by using their distances from normal data; one-class classification approaches such as OC-SVM [26].", "Examples of AD applications in cybersecurity include malware detection, spam filtering, and intrusion detection systems [10], [20].", "Training ML models for AD requires a nearly large amount of data usually collected from various sources, including sensors, actuators, processes, network traffic, and people.", "As the volume of data grows, it becomes more and more susceptible to contamination.", "One source of contamination might be an ongoing attack campaign, yet unnoticed, during 
"Moreover, adversaries are aware that data is crucial to developing ML systems for intrusion detection, so they attempt to attack it by injecting malicious samples into the training set [17]; this is known as data poisoning or data contamination.", "The adversaries' goal is to hamper model performance and possibly create a backdoor for subsequent intrusions.", "For example, in spam filtering, an attacker will attempt to evade the spam detection system by misspelling words most likely to appear in spam and adding words likely to occur in legitimate emails.", "In malware detection, an attacker will slightly modify the malware code, without altering its functionality, to make it undetectable.", "Currently, AD training pipelines operate under the assumption that the training data is clean.", "However, this assumption is not always verified, because checking data cleanliness is expensive for large-scale datasets [17].", "Therefore, it is crucial for AD models to be robust against contamination during training, especially in the use case of intrusion detection.", "Although the robustness of AD models has already been studied, the most recent AD models are rarely exposed to and tested on contaminated data, even though this could be a decisive factor in their real-world applications.", "In this paper, we assess the robustness of six state-of-the-art deep learning models for AD under different levels of training set contamination, following our evaluation protocol.", "These models are: ALAD [34], Deep auto-encoder [8], DAGMM [37], DSEBM [35], DUAD [19], and NeuTraLAD [22].", "We employ the CSE-CIC-IDS2018 dataset and two widely used benchmark datasets: KDDCUP and NSL-KDD (https://www.unb.ca/cic/datasets/nsl.html).", "These are all cybersecurity datasets simulating computer networks.", "The rest of the paper is organized as follows: In Section , we describe the related work.", "Section  presents our evaluation protocol.", "We discuss the different experimental results in Section .", "In Section , we conclude the paper and discuss future work." ], [ "Related Work", "Attacking ML systems through data manipulation can be done at training time or at testing time.", "Evasion attacks and data poisoning (also known as data contamination) are among the techniques most used by adversaries to fool ML systems [3], [18].", "While evasion attacks consist of manipulating data at test time, data poisoning is performed at training time.", "In this work, we focus on data manipulation during training."
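To make the training-time poisoning setting above concrete, the following minimal sketch (not the authors' implementation) injects a ratio c of attack samples into an otherwise benign training set; X_normal and X_attack are hypothetical NumPy arrays whose rows are feature vectors.

import numpy as np

def poison_training_set(X_normal, X_attack, c, seed=0):
    # Return the normal rows plus a fraction c (relative to the number of
    # normal rows) of randomly drawn attack rows, shuffled together.
    # Assumes c * len(X_normal) <= len(X_attack).
    rng = np.random.default_rng(seed)
    n_poison = int(c * len(X_normal))
    idx = rng.choice(len(X_attack), size=n_poison, replace=False)
    X_train = np.vstack([X_normal, X_attack[idx]])
    rng.shuffle(X_train)  # shuffle rows in place
    return X_train

# Example: contaminate the training set with 5% attack data
# X_train = poison_training_set(X_normal, X_attack, c=0.05)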
], [ "Data poisoning attacks", "Attacks that insert poisoned instances into the training data are known as data poisoning.", "These attacks aim to increase the classification errors at test time.", "The adversary's goal can be to increase misclassification generically, i.e., the classes where they occur do not matter.", "Alternatively, the adversary may target a specific class where the misclassification should occur.", "Research on data poisoning has gradually surged and proposed sophisticated techniques.", "For instance, earlier works on SVMs [4], feature selection [32], and PCA [23], as well as initial efforts with deep learning that mostly focused on lowering the model quality [21].", "Later, this area was dominated by research based on generative models [15].", "With generative models, attacks could aim toward creating a backdoor, where samples with specific and easy to manipulate characteristics would bypass detection by the model [31].", "We note that less attention has been paid to data poisoning in the case of intrusion detection systems.", "Our work studies data poisoning with both generic and specific types of attacks." ], [ "Deep Unsupervised Anomaly Detection", "Anomaly detection in an unsupervised setting is challenging since the absence of explicit labels limits exploitable information.", "Classical approaches to unsupervised AD work well with lower dimensions.", "However, as the feature space grows, they suffer from the curse of dimensionality [19].", "Deep learning architectures, including autoencoders, have shown significant potential to mitigate this issue.", "Unsupervised AD consists of three approaches.", "Reconstruction-based methods operate under the assumption that anomalies are harder to regenerate after compression.", "MemAE [14] augments encoder-decoder reconstruction with a memory module to store common patterns.", "DUAD [19] assumes normal sample clusters have lower variance and iteratively refines the clusters by their encoded representation to exclude and alienate anomalies.", "Reconstruction-based methods have also been implemented with GANs [34].", "Probabilistic and density-based methods exploit the isolation of anomalous samples and recognize them by lower density of samples in their feature space.", "As an example, DAGMM [37] is one the recently proposed methods of this approach, utilizing reconstruction error and latent representation to determine the parameters of a Gaussian Mixture Model (GMM), which in turn appoints anomalous samples by their log-likelihood.", "One-class classifiers are another anomaly detection approach that learns the boundary enveloping normal instances.", "Few prior works have been proposed this approach in an unsupervised context [24].", "Recent works employ self-supervised pre-trained feature extractors for one class classification [2], [13], [28].", "Even though a lot of progress has been made using deep learning architectures, we aim to measure the newly proposed models from the perspective of robustness to training data contamination." 
], [ "Robustness in Anomaly Detection", "The robustness of a model is defined as its resistance to exposure to unexpected data for the same task.", "In real-world applications, data contamination and adversarial attacks threaten the capabilities of anomaly detection models.", "Robustness to noise contamination was explored in SVMs [33].", "This concept has also been introduced in principal component analysis under the name of Robust PCA (RPCA) [36].", "RPCA utilized low-rank matrix decomposition to achieve robustness, and the same has been applied to extend Autoencoder [16].", "To the best of our knowledge, the problem of robustness against noise contamination has not been clearly and carefully studied in the cybersecurity field.", "We believe that this problem represents a ubiquitous concern in the intrusion detection task.", "This work investigates the robustness of recent intrusion detection methods on different datasets, including a recent dataset that simulates a complex network.", "Our work will better inform the selection of an intrusion detection model whose quality should include resilience to data perturbations at training time." ], [ "Evaluation Protocol", "This section presents the data split strategy, the metrics, and the choice of the decision threshold.", "Table: Datasets statistics.Data split.", "Following [37], the training set contains $50\\%$ of normal data samples randomly drawn, but we add a ratio $c \\in \\lbrace 0, 5, 8, 12\\rbrace $ percent of attack data.", "The test set contains $50\\%$ of the rest of the normal data plus a percentage $(1-p)$ of attack data with $p \\in \\lbrace 20, 40\\rbrace $ .", "The remaining $p$ percent of attack data is in a set that we call the contamination set.", "More specifically, the ratio $c$ of attack data utilized to contaminate the training set stems from the contamination set.", "The rationale behind this data split strategy is to preserve the consistency over different sets of experiments and fairness in evaluating different algorithms, as pointed out by [1].", "Keeping constant the proportion of normal data and attack data in the test set allows for a good assessment of the impact of different levels of training set contamination on models’ performances.", "To draw reliable conclusions from our experiments, we furthermore apply this data split strategy in each run to capture the variance due to data sampling in addition to the variance due to parameters initialization as motioned in [5].", "Class of interest.", "Since cybersecurity datasets have a significant class imbalance, we consider the attack data as the class of interest.", "Attack data is often the minority class in a real-world application.", "Due to this scarcity, it is important to evaluate how good models perform on it.", "Performance metrics and threshold.", "We report the average and standard deviation of F1-score, precision, and recall over 20 runs for every ratio $c$ of the training set contamination.", "All metrics are computed on the attack data class, which is considered as the class of interest.", "These metrics are suitable for problems with imbalanced class datasets [9].", "However, F1-score, precision, and recall calculation require setting a decision threshold on the samples' scores.", "In our experiments, we use 20% of the test set to find the decision threshold and then test it on the remaining 80%.", "We choose the decision threshold such that the F1-score – the harmonic mean of precision and recall – is the best that the model could achieve on a 
specific dataset." ], [ "Experiments", "We present datasets and models' descriptions and discuss experimental results." ], [ "Datasets", " KDDCUPhttp://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html is a cyber-intrusion detection dataset that is 20 years old, and has some issues such as duplicated samples [29], but it is still widely used as a benchmark dataset.", "It contains data related to normal traffic and attacks simulated in a military network environment.", "The Attacks include DoS, R2L, U2R, and probing.", "The dataset has 41 features, with 34 continuous, and the rest are categorical features.", "It counts 80% of attack data and 20% of normal data, unlike what one could expect.", "So, we swapped data labels to reflect a real-world scenario where the majority class is the class of normal data.", "We use the version that holds only 10 percent of the original data.", "NSL-KDD is a revised version of the KDDCUP dataset provided by the Canadian Institute of Cybersecurity (CIC).", "Some inherent issues, such as duplicated samples from the original KDD dataset, are solved in this version.", "Further details on this dataset are discussed in [29].", "CIC-CSE-IDS2018 is also provided by CIC in collaboration with the Communication Security Establishment.", "Unlike the KDDCUP dataset, this dataset is recent, and it contains normal traffic plus attacks data simulated on a complex network.", "Attack types include Brute-force, Heartbleed, Botnet, DoS, DDoS, Web attacks, and infiltration of the network from the inside [27].", "Importantly, we combined all types of attacks data samples into one class, i.e., “attack” for all datasets.", "Especially for the CSE-CIC-IDS2018 dataset, we kept the original labels to investigate how the contamination of training data with some filtered types of attacks affects each class.", "Table REF shows the statistics of the three datasets.", "We applied Min-Max scaling to continuous features and one-hot encoding to categorical features for the three datasets." 
], [ "Models", "This section briefly presents the six state-of-the-art anomaly detection models implemented for this study.", "Our code is available on Githubhttps://github.com/intrudetection/robevalanodetect.", "Deep Auto-Encoder [8].", "DAE is a neural network with two components; an encoder whose output is the input's low-dimensional representation (latent representation) and a decoder accounting for reconstructing the input from its low-dimensional representation.", "For the loss function, we have added an $L_2$ regularization on the latent variable in addition to the standard reconstruction error.", "The reconstruction error is also used as the anomaly score.", "Deep Unsupervised Anomaly Detection [19].", "DUAD is a method that uses a DAE for anomaly detection.", "It is based on the hypothesis that anomalies in the training set are approximately samples with high variance distribution.", "Unlike DAE, it applies a distribution clustering to select a subset of normal data from the training set after a fixed number of iterations.", "The reconstruction error is used as the anomaly score.", "Deep Structured Energy Based Models for Anomaly Detection [35].", "DSEBM, as its name suggests, is an energy-based model.", "It learns the energy function of input data through a neural network with structure.", "The algorithm provides two scoring functions based on energy and another based on the reconstruction error.", "For our experiments, we consider the energy-based anomaly scoring function.", "With the energy scoring function, samples with high energy are classified as anomalies and those with low energy are classified as normal.", "Deep Auto Encoding Gaussian Mixture Model [37].", "DAGMM consists of two neural networks: an auto-encoder and an estimation network.", "These networks are trained in an end-to-end fashion.", "The model concatenates the output of the encoder – the latent representation of the input – and the reconstruction error, then feeds them to the estimation network, whose output is used to compute the parameters of a Gaussian Mixture Model (GMM).", "Specifically, the estimation network is used to obtain sample likelihood, which is also considered the anomaly score.", "Adversarially Learned Anomaly Detection [34].", "ALAD extends BiGAN [11] by adding two more discriminators to ensure data-space and latent-space cycle consistencies.", "The scoring function is the reconstruction error on the output of an intermediate layer of one of the discriminators.", "Neural Transformation Learning for Deep Anomaly Detection Beyond Images [22].", "NeuTraLAD is a self-supervised learning method for anomaly detection.", "It combines contrastive learning with the idea of learning data transformation through neural networks to do data augmentation with tabular data.", "The loss function is defined to maximize agreement between an input and its transformations while minimizing agreement between transformations of an input.", "The same deterministic loss function is used as the scoring function." 
], [ "Results and Discussion", "Table REF reports F1-score, precision, and recall of the six models with different values of contamination ratio $c$ .", "Performance on KDDCUP dataset.", "With no contamination, all six models perform well, with an average F1-score above 90% over the 20 runs; DSEBM and DUAD outperform other models with an F1-score of 96%.", "From the two-dimensional representation of a subset of the KDDCUP dataset in Figure REF , it can be seen that the normal and attacks data are mostly distinguishable – we observe a less proportion of overlapping between the two.", "This could explain why all models perform well on KDDCUP dataset.", "From Figure REF , we notice a loss of performance, albeit in different proportions, for all six models as the contamination ratio increases.", "Remarkably, the F1-scores of DAGMM and DAE drop dramatically from 93.8% to 40.7% and from 93.8% to 69.9%, respectively, with a contamination ratio of 12%.", "DAGMM estimates a gaussian mixture with no explicit way to handle outliers.", "Consequently, contaminated samples influence density estimation and move normal samples towards low-density regions.", "However, DUAD and DSEBM have demonstrated some resistance to contamination.", "DUAD's re-evaluation trick on the training data appears to have discarded some contaminated samples to allow training on a nearly clean subset.", "As for DSEBM, we note that the recall remains at approximately 99%, but only the precision decreases.", "That means DSEBM was able to classify all attacks well, assigning high energy to them, regardless of the contamination level.", "However, the precision of DSEBM decreases as the noise in the training data increases.", "Performance on NSL-KDD dataset.", "Figure REF shows different patterns than those observed on the KDDCUP dataset.", "Note that NSL-KDD is a relatively balanced dataset.", "For example, DSEBM yields poor performance with an F1-score that drops from 93% to 78% for a contamination rate of 12%.", "That may be due to the removal of duplicates in the KDDCUP dataset.", "On the other hand, DUAD distribution clustering plays a central role of defending against contamination of the training set.", "We observe that DUAD consistently performs well (F1 score of 91%), regardless of the contamination level.", "Performance on CSE-CIC-IDS2018 dataset.", "On this dataset, DUAD achieves the highest F1-score (67%), followed by the DAE (63%) and NeuTraLAD (61%).", "Overall, all six models struggle to achieve high performance with regards to what they yielded on KDDCUP and NSL-KDD datasets, even without contamination.", "In Figure REF , we visualize, by t-SNE, a subset of normal and infiltration attack data from CSE-CIC-IDS2018 dataset.", "Surprisingly, we observe an important number of overlaps between the benign traffic and the attack data, so this makes it difficult to distinguish between them.", "Figure REF shows an almost linear decrease in F1-scores for all six models as the contamination rate increases.", "DUAD has previously demonstrated resistance to contamination, but its F1 score drops from 67% to 43% on CSE-CIC-IDS2018 with a contamination rate of 12%.", "This poor performance is because, unlike KDDCUP and NSL-KDD (Figures REF  and REF ), some of the attack samples in CSE-CIC-IDS2018 (Figure REF ) are less different from normal traffic data.", "Therefore, it is difficult to eliminate contamination from the training set, even for DUAD.", "Nevertheless, DUAD still obtains a higher F1-score than the other models at different 
"Figure: DAE accuracy on the CSE-CIC-IDS2018 dataset grouped by class, when trained exclusively on normal data and when trained on normal data plus a ratio of five percent of randomly selected attack data.", "Figure: DAE accuracy on the CSE-CIC-IDS2018 dataset, grouped by class (benign and types of attacks), when trained on normal samples only, compared to when training is performed on normal samples mixed with a ratio of one percent of DDoS, DoS, or Bot attacks, exclusively.", "Accuracy per class on CSE-CIC-IDS2018 dataset.", "We analyze the detection accuracy of DAE at the level of individual data classes (benign traffic and attack types).", "Figure REF shows the accuracy for each type of attack and for benign traffic when DAE is trained solely on normal data or on normal data mixed with 5% of attack data randomly drawn from the contamination set.", "When training is performed on normal data only, we notice that some attacks, such as DDoS, infiltration, and SQL Injection, have a low detection rate, below 40%.", "One cause of these inaccurate predictions may be the similarity between some normal traffic and attack data, as displayed in Figure REF .", "Training set contamination modifies the decision boundary.", "Still, in Figure REF , it can be observed that if 5% of the training set is corrupted, the detection rate of benign traffic decreases, while the detection rate of some attacks, including infiltration and SQL Injection, increases.", "On the one hand, the number of false positive samples increases; on the other hand, some attacks are now being recognized by the model, although their accuracies remain low.", "Yet, the overall performance is lower, due to the imbalanced nature of the classes in the dataset.", "Beyond contaminating the training set with all attack types, we found it important to also investigate the impact of corrupting the training set with specific attack types.", "Figure REF shows the accuracy of DAE when trained only on normal data and when trained on normal data contaminated by a ratio of 1% of DDoS attacks, DoS attacks, or Bot attacks, exclusively.", "We observe that contamination with either DDoS, DoS, or Bot attacks drops the F1-score from 63% to 57%, 50%, and 53%, respectively.", "Notably, with only 1% contamination of the training set by the Bot attack, the F1-score drops and the Bot attack becomes completely undetected by DAE.", "That is an example of a backdoor attack scenario, where an attacker pollutes the training set to allow a subsequent undetected intrusion at runtime.", "Summary.", "Based on our experiments, we see that contamination of the training set can negatively impact the performance of even the most advanced models.", "Since there is no guarantee that data will always be clean, it is critical to implement a defense against contamination when developing ML models for cybersecurity.", "One possible defense is to infer data labels during training.", "Once inferred labels are available, the algorithm could remove from the training set the samples likely to be anomalous.", "We note that the KDDCUP and NSL-KDD datasets contain attacks that are far from representative of current cyber-attacks.", "Nowadays, hackers use advanced tools, including AI-based ones, to generate sophisticated attacks.", "These types of attacks remain challenging and difficult to detect.", "The KDDCUP and NSL-KDD datasets are no longer suitable for simulating real computer networks.", "The results obtained on KDDCUP and NSL-KDD could therefore be misleading.",
"For example, models with superior performance on these two datasets do not perform as well on the CSE-CIC-IDS2018 dataset.", "The latter appears more challenging and better suited to simulating the current state of computer networks and cyber-attacks.", "We observed that DUAD consistently showed some resistance to contamination, thanks to its re-evaluation of the training set through clustering.", "One drawback of DUAD is that clustering is performed over the entire training set, once in data space and then repeatedly in latent space after a fixed number of epochs until convergence.", "This renders DUAD very slow to train on large datasets.", "Alternatively, we could extend the idea of distribution clustering of the training set (introduced by DUAD) to reject contaminated data in an online fashion." ], [ "Conclusion", "In this paper, we evaluate the robustness of state-of-the-art anomaly detection models on network intrusion detection datasets with different levels of training set contamination.", "We show that model performance drops when the training set is poisoned by attack instances, even for modern deep learning models.", "Furthermore, our study reveals the importance of robustness to contamination as a criterion when choosing an anomaly detection model for cybersecurity applications.", "We also highlight how model performance on outdated cybersecurity datasets could be misleading with respect to the current state of computer networks and types of attacks.", "As complementary work, we aim to develop an appropriate defense framework against training set contamination for deep anomaly detection methods applied to cybersecurity." ], [ "Acknowledgements", "We acknowledge the support of Hydro-Sherbrooke, Natural Resources Canada (NRCan) and Public Safety Canada for this work." ] ]
2207.03576
[ [ "Multi-critical Points in Black Hole Phase Transitions" ], [ "Abstract We present the first examples in black hole thermodynamics of multicritical phase transitions, in which more than three distinct black hole phases merge at a critical point.", "Working in the context of non-linear electrodynamics, we explicitly present examples of black hole quadruple and quintuple points, and demonstrate how $n$-tuple critical points can be obtained.", "Our results indicate that black holes can have multiple phases beyond the three types observed so far, resembling the behaviour of multicomponent chemical systems.", "We discuss the interpretation of our results in the context of the Gibbs Phase Rule." ], [ "Acknowledgements", "This work supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC).", "Perimeter Institute and the University of Waterloo are situated on the Haldimand Tract, land that was promised to the Haudenosaunee of the Six Nations of the Grand River, and is within the territory of the Neutral, Anishnawbe, and Haudenosaunee peoples.", "We are grateful to Bill Power, David Yevick, and Donna Strickland for helpful discussions." ] ]
2207.03505
[ [ "Recent Results of Energy Disaggregation with Behind-the-Meter Solar\n Generation" ], [ "Abstract The rapid deployment of renewable generations such as photovoltaic (PV) generations brings great challenges to the resiliency of existing power systems.", "Because PV generations are volatile and typically invisible to the power system operator, estimating the generation and characterizing the uncertainty are in urgent need for operators to make insightful decisions.", "This paper summarizes our recent results on energy disaggregation at the substation level with Behind-the-Meter solar generation.", "We formulate the so-called ``partial label'' problem for energy disaggregation at substations, where the aggregate measurements contain the total consumption of multiple loads, and the existence of some loads is unknown.", "We develop two model-free disaggregation approaches based on deterministic dictionary learning and Bayesian dictionary learning, respectively.", "Unlike conventional methods which require fully annotated training data of individual loads, our approaches can extract load patterns given partially labeled aggregate data.", "Therefore, our partial label formulation is more applicable in the real world.", "Compared with deterministic dictionary learning, the Bayesian dictionary learning-based approach provides the uncertainty measure for the disaggregation results, at the cost of increased computational complexity.", "All the methods are validated by numerical experiments." ], [ " Response to Reviewers IREP 2022#42, \"Recent Results of Energy Disaggregation with Behind-the-Meter Solar Generation\" by Ming Yi, Meng Wang Dear editor: We are sending our revision of “Recent Results of Energy Disaggregation with Behind-the-Meter Solar Generation” with the response to all the questions/concerns raised by reviewers.", "We want to thank you for handling the paper and thank all the reviewers for their time and efforts in reviewing the paper and providing helpful suggestions/comments.", "We have addressed all the issues and concerns raised by the reviewers in the revision.", "We first summarize the major changes in this revision as follows.", "We added the statement of the original contribution of this paper.", "We added the discussion of selecting the Gamma priors and conjugate priors.", "We added the computational complexity of our Bayesian method and provided the computational time of both methods.", "We added the discussion about how many Monte-Carlo samples are necessary.", "We added the statement of the original contribution of this paper.", "We added the discussion of selecting the Gamma priors and conjugate priors.", "We added the computational complexity of our Bayesian method and provided the computational time of both methods.", "We added the discussion about how many Monte-Carlo samples are necessary.", "We attach a point-to-point response to the comments in the following pages.", "Best regards, Meng Wang We now list all the comments and concerns and address them one by one in details.", "Original comments by reviewers are repeated in italics.", "Reviewer: 1 The contents of this paper are built on previous results recently published by the authors in the references [16] and [17].", "The approach presented is interesting, especially because the Bayesian method provides the confidence intervals associated with the disaggregation results.", "1.", "At the end of the introduction, an explicit statement of the original contributions of this paper is needed.", "The authors would like 
"The following statement has been added at the end of the introduction, “The contributions of this paper are threefold: (1) We summarize our works  and  for solving the “partial label” problem and modeling the uncertainty.", "(2) We make a fair comparison between these two methods and the other two existing works in the experiments.", "(3) We provide more testing cases for these two methods in this paper.”", "2. In equation (7), indicate explicitly the meaning of the operator used.", "We revised the paper as follows, “In (7), $\\odot $ represents the element-wise product.”", "3. At the end of page 3, add more detailed notes on the rationale and role of selecting the Gamma priors and conjugate priors.", "We added the following statement into the paper.", "“The Gamma priors are conjugate priors of the Gaussian distribution.", "If conjugate priors are selected, we can derive the analytical solution of the posterior distribution in the variational inference, which simplifies the updating process.”", "4. The cases shown are based on a very small number of loads (including local generations).", "Provide indications about the scalability of the proposed calculations with respect to the number of loads considered.", "The 50 Monte Carlo samples used are relatively limited for a statistically significant analysis.", "Why didn't the authors use more samples, e.g., 100 or more?", "We provide the computational complexity of the Bayesian training stage and testing stage.", "The computational complexities per iteration are $\\mathcal {O}(CK_cPN)$ and $\\mathcal {O}(CK_cP)$ , respectively, where $C$ is the number of loads.", "Thus, the computational complexity scales linearly with respect to the number of loads.", "The following statements have been incorporated into the paper.", "“The computational complexity of Bayesian offline training at each iteration is $\\mathcal {O}(CK_cPN)$ .", "The computational complexity of the online testing stage at each iteration is $\\mathcal {O}(CK_cP)$ .", "Thus, the computational complexity scales linearly with respect to the number of loads.”", "Regarding the number of Monte Carlo samples, more Monte Carlo samples provide estimations with higher accuracy but also increase the computational burden.", "In our experiments, 50 Monte Carlo samples suffice to offer accurate estimations of the predictive mean and the predictive variance.", "We incorporated the following discussion regarding the Monte-Carlo samples, “More Monte-Carlo samples increase the estimation accuracy, at the cost of a higher computational burden.", "Our experiments show that 50 Monte-Carlo samples suffice to provide accurate estimations of the predictive mean and the predictive variance.”", "5. One of the remarks of the authors (page 5) is that the deterministic approach is much more computationally efficient than the Bayesian method.", "How much?", "Regarding the computational time, we added the following discussion into the paper, “In Table I, the B-EDS requires around 50 seconds for offline training, and 4 seconds for each testing sample.", "In comparison, the D-EDS requires around 15 seconds for offline training, and 0.9 seconds for each testing sample.”", "The above discussions have been incorporated at the end of Section IV.", "Finally, many thanks to everyone for helping improve this paper." ] ]
2207.03490
[ [ "Highlight Specular Reflection Separation based on Tensor Low-rank and\n Sparse Decomposition Using Polarimetric Cues" ], [ "Abstract This paper is concerned with specular reflection removal based on tensor low-rank decomposition framework with the help of polarization information.", "Our method is motivated by the observation that the specular highlight of an image is sparsely distributed while the remaining diffuse reflection can be well approximated by a linear combination of several distinct colors using a low-rank and sparse decomposition framework.", "Unlike current solutions, our tensor low-rank decomposition keeps the spatial structure of specular and diffuse information which enables us to recover the diffuse image under strong specular reflection or in saturated regions.", "We further define and impose a new polarization regularization term as constraint on color channels.", "This regularization boosts the performance of the method to recover an accurate diffuse image by handling the color distortion, a common problem of chromaticity-based methods, especially in case of strong specular reflection.", "Through comprehensive experiments on both synthetic and real polarization images, we demonstrate that our method is able to significantly improve the accuracy of highlight specular removal, and outperform the competitive methods to recover the diffuse image, especially in regions of strong specular reflection or in saturated areas." ], [ "Introduction", "Objects in real world scenes have both diffuse and specular reflections, as a common physical phenomenon.", "On one hand, detecting the specular reflection can help us to infer the light direction, scene geometry [16] and camera location.", "On the other hand, the existence of specular reflection presents difficulties for many applications including image segmentation, object detection [11], [25], [26], pattern recognition [5], background subtraction [29], [28], [3], HDR reconstruction [44], [37], tracking [7], [27], and 3D reconstruction [24], [18].", "Therefore, the specular reflection from images is a crucially important consideration in a variety of computer vision and robotics tasks.", "This line of research has been extensively studied, and many solutions proposed in the last decade on the benefit of chromaticity or polarization.", "However, almost all existing solutions have some significant drawbacks.", "Methods based on the analysis of color space rely on either image statistics or strong prior assumptions [35], [46] which are not robust in real scenes [42].", "Methods based on the theory of polarization use the fact that the specular reflection tends to be polarized while the diffuse reflection is unpolarized [38], [21], [39].", "However, in practice the specular reflection is partially polarized [42].", "Therefore, despite the promising result on some cases, these methods cannot obtain pleasing results while the assumption is violated.", "Figure: Sample results of the proposed method.", "Rows from top to bottom: Input rgb image, recovered diffuse image, and sparse highlight specular image.In this paper, we propose an optimization based solution to separate highlight specular reflection from diffuse reflection using a tensor low-rank and sparse decomposition framework, with the help of polarimetric cues.", "Specifically, we first analyze the polarization images to compute the polarization chromaticity image.", "We then create a group of candidates for each pixel using the obtained polarization chromaticity image, and 
form all the candidates in a tensor structure as different representations of the original image.", "Based on polarization theory, we also define the phase angle constraint as a new polarization regularization term.", "We then introduce our proposed formulation by imposing it into the tensor low-rank and sparse decomposition framework.", "Finally, we optimize the proposed formulation to extract the diffuse images.", "We have evaluated the proposed method on both synthetic images and real images captured by a polarization camera.", "Sample results of our proposed method are shown in Fig.", "REF .", "The main contributions of this paper are as follows.", "We propose to use the polarization chromaticity image using polarimetric information.", "We propose to create multiple representations for each pixel, and form them as a tensor.", "In particular, we use block-processing to select some candidates for each pixel with the help of polarization chromaticity image, and then stack them in a 3D tensor data structure.", "We introduce a phase angle regularization term using polarization images.", "Phase angle is only related to the surface geometry and so R, G, and B channels of each pixel should share a similar phase angle.", "We use this concept to form our phase angle regularization term.", "We introduce an iterative optimization process by integrating the chromaticity-based specular removal and our phase angle regularization term into the tensor low-rank and sparse decomposition framework.", "To our knowledge, this is the first tensor-based approach using polarimetric cues to remove highlight specular reflections from images.", "The remainder of this paper is organized as follows.", "Related works on specular and diffuse reflection separation and the theory of polarization are summarized in Section .", "Section  explains the details of our polarimetric highlight specular removal method.", "Experimental results and discussion are presented in Section , and concluding remarks in Section ." ], [ "Related Works", "In this section we first review related works on specular reflection removal and then explain relevant polarization theory that is the basis of our proposed method." ], [ "Highlight Specular Reflection Removal", "Specular reflection removal from images is a well-studied area of research and many solutions have been proposed which can be grouped into two major categories based on the number of images used." 
], [ "Methods in this category remove specular reflection from a single image.", "Since this problem is inherently ill-posed, prior knowledge or assumption on the characteristics of natural images should be exploited to make the problem tractable.", "Most single-image-based methods require color segmentation [4], [12] which is not robust for complex textured images or requires user assistance for highlight detection [22].", "To address this issue, Tan et al.", "[35] proposed a method to remove highlights by iteratively shifting chromaticity values towards those of the neighboring pixels having the maximum chromaticity in the neighborhood without explicit color segmentation.", "Similarly, Mallick et al.", "[20] proposed an SUV color space which separated the specular and diffuse components into S and UV channels.", "They used this SUV space to remove highlights by iteratively eroding the specular channel.", "This type of approaches may encounter problems due to discontinuities in surface colors, across which diffuse information cannot be accurately propagated.", "These methods also perform poorly on large specular regions.", "Other approaches exist for single-image highlight specular removal and they pioneered by the idea of specular-free image [35], a pseudo-diffuse image that has the same geometrical profile as the true diffuse component [10], [34], [46], [31], [30], [48], [32].", "Kim et al.", "[10] proposed an approximated specular-free image via applying the dark channel prior.", "Sue et al.", "[34] defined $l_2$ chromaticity and used it to generate the specular-free image.", "Yang et al.", "[46] proposed a fast bilateral filter adopting the specular-free image the range weighting function.", "The main drawback of the specular-free image is it suffers from hue-saturation ambiguity which exists in many natural images [8].", "Inspired by the fact that highlight regions in many real-world scenes are contiguous pieces with relatively small size while colors of diffuse reflection can be well approximated by a small number of distinct colors, some methods [2], [8] use low-rank and sparse decomposition framework.", "Akashi et al.", "[2] formulate the separation of reflections as a sparse non-negative matrix factorization (NMF) problem.", "Since NMF is highly sensitive to outliers in general cases, this method may fail in the presence of strong specularity or noises.", "Besides, this method is sensitive to initial values and only guarantee finding a local minimum rather than a global minimum.", "[8] assumes the specular highlight of a natural image has large intensity and applies a sparse and low-rank decomposition on the weighting matrices of specular and diffuse components.", "This approach is more robust to outliers than  [2].", "However, this method, similar to other methods in this category, reshapes the input color image into a $3 \\times N$ matrix in which each column stores a pixel value.", "In that way, the spatial structure information of the original data will be lost, leading to inaccurate rank and low-efficiency computation.", "In this paper, we address this issue by proposing a new tensor low-rank and sparse decomposition method, which keeps the spatial information.", "In our method, we also use the polarimetric cues to overcome the limitations of the chromaticity-based approaches, as discussed." 
], [ "Methods in the second category use multiple images to remove specular reflection from images.", "Since some highlights regions are direction-dependent, several methods use multiple images of one object (or scene) with different illumination directions [13], [23], [17], or from different point of views [16], [47], [15], [41], or with different polarization orientations [9], [38], [39], [21].", "Sato et al.", "[23] employed the dichromatic model for separation by analyzing color in many images captured with a moving light source.", "Lin et al.", "[17] also changed the light source direction to create two photometric images and used linear basis functions to separate the specular components.", "Multiple illumination based approaches are not applicable since the light source is usually fixed in the real world.", "Another early method uses images taken from different point of views to detect the specular regions based on the assumption of Lambertian consistency [15].", "Other methods use different viewing directions to remove the highlights by treating the specular pixels as outliers, and matching the remaining diffuse parts in other views [16], [47].", "However, These methods also fail if the size of the highlight region is large.", "In addition, methods in this category need a sequence of images which are not available in practice.", "Different from the color-based methods, polarization-based methods use polarimetric cues for specular reflection separation.", "Nayar et al.", "[21] proposed a method by analyzing the scene through the direct and global reflection components.", "Umeyama et al.", "[38] used a fixed coefficient for specular components and separate them by applying independent component analysis (ICA).", "Wang et al.", "[39] replaced the fixed coefficients with the spatially variable coefficient and improved the accuracy of the specular reflection separation.", "However, all these works still require a strict controllable light source, which limits the applicability of them in practice.", "To address this issue, Wen et al.", "[42] recently proposed a polarization guided specular reflection separation based on low-rank and sparse decomposition framework using a polarization camera.", "Since the polarization camera can capture all the polarization images in one shot, the method is more practical than all the above mentioned multi-image methods for real-world applications.", "Despite the promising results on some cases, they assume the diffuse components with different polarization angles remain constant, an assumption that is not the case in practice.", "They also followed the previous decomposition methods to cluster pixels with similar chromaticity values in a 2D matrix, where each row represents a pixel color (3 values).", "Therefore, the upper-bound of the rank for each cluster is 3, which may not be true for every scene.", "In addition, reshaping the input image into a $3 \\times N$ matrix removes the spatial structural information of the original data and reduces the overall accuracy of the method.", "To address these issues, in this paper, we propose a polarimetric tensor low-rank and sparse decomposition framework which keeps the spatial structure of data, and uses them while recovering the diffuse image with the help of the phase angle regularization." 
], [ "Polarization Theory", "Our proposed method is based on a polarization camera, which implements pixel-level polarization filters and has resulted in real-time and high resolution measurment of incident polarization information.", "Each calculation unit of the camera consists of four pixels and uses four on-chip directional polarizers, at 0, 45, 90, and 135 degrees, to capture four perfectly aligned and polarized images.", "According to Fresnel's theory [43], diffuse components $I_d$ remain roughly constant while the specular components $I_s$ vary under different polarization orientation.", "In the real world, the specular reflection is partially polarized and so part of the specular component $I_s$ also remains constant.", "Therefore, the intensity of a pixel across the different angles of polarization orientation $\\vartheta $ is computed as follows [42].", "$\\begin{split}I_{\\phi }=I_d + I_{sc} + I_{sv}cos(2\\vartheta -2\\phi )= I_c + I_{sv}cos(2\\vartheta -2\\phi )\\hspace{25.0pt}\\end{split}$ where the specular component can be expressed as the sum of a constant component $I_{sc}$ and a cosine function term with amplitude $I_{sv}$ .", "Since four images are available from the polarization camera, Eq.", "REF forms an over-determined linear system of equations, and we are able to compute $\\phi $ , $I_c$ , and $I_{sv}$ .", "For an object with only diffuse reflection, phase angle $\\phi $ determines the azimuth angle $\\varphi $ up to a $\\pi $ ambiguity [33], [24], as follows.", "$\\varphi = \\phi \\,\\,\\, or \\,\\,\\, \\varphi = \\phi + \\pi $ where the azimuth angle $\\varphi $ represents the angle between the projected surface normal direction and $x-$ axis of the 2D image plane.", "We will use this property of $\\phi $ to form the phase angle regularization term in Section REF ." ], [ "proposed method", "Our proposed specular removal method has three main steps.", "In the first step, our method takes RGB polarization images as input.", "We then analyze them using the polarization theory to compute the polarization-based chromaticity image.", "In the second step, we create multiple representations for each image based on the obtained polarization chromaticity image and then form a tensor.", "Finally, we define a polarization regularization term and introduce a new tensor low-rank and sparse decomposition formulation using the polarization regularization.", "By solving the proposed formulation, we are able to extract the diffuse image from the input images.", "Since the specular chromaticity could be assumed to be uniform for a given image and equal the chromaticity of the incident illumination [14], [8], we first estimate the illumination chromaticity $\\Gamma $ of the captured image using [36], and then normalize the input image by $I(p)/(3\\Gamma )$ such that $\\Gamma _r = \\Gamma _g = \\Gamma _b = 1/3$ as a pure white illumination color.", "This preprocessing step is not necessary for computing the polarization-based chromaticity image; however, we use it to have a better initialization as will be explained in Section REF ." 
], [ "Polarization-based Chromaticity Image", "Chromaticity image reveals color information of the image and most of the specular reflection separation methods use an estimated chromaticity image to remove the specular reflection.", "However, almost all of them have a strong assumption on illumination in order to compute chromaticity image, and they suffer from color distortion in practice.", "A recent approach [42] for computing the chromaticity image based on polarization information showed promising results without the aforementioned assumption.", "Regardless of the illumination and the color of the specular components, the polarization based chromaticity method can reveal the intrinsic diffuse reflection of the image much better than the conventional methods.", "In our proposed method, we also use the similar idea to compute the chromaticity image $I_{chro}$ as follows.", "$\\begin{split}I_{chro} = \\frac{I_{rawD}}{\\sum _{\\theta }{I_{rawD,\\theta }+\\overline{I_{min}}}} \\hspace{25.0pt} \\,\\,\\,\\, \\\\\\overline{I_{min}} = \\frac{\\sum _p {min(I_r(p),I_g(p),I_b(p))}}{N}\\end{split}$ where $I_{rawS}=2I_{sv}$ and $I_{rawD}= I_c - I_{rawS}$ are the approximate specular and diffuse components, respectively.", "$\\theta \\in {R,G,B}$ and $\\overline{I_{min}}$ is the average of the minimum values in the r, g, b channels of all pixels, and address the unstable situation caused by dark or noisy pixels [42]." ], [ "Multiple Representations and Initialization", "Unlike the existing methods which use a clustering algorithm to segment the input image, we propose to create a class of multiple candidates for each pixel with the help of the polarization-based chromaticity image.", "In particular, we use block-processing to select a certain number of candidates for each pixel.", "As explained in Section REF , the chromaticity image can describe the scene without being affected by the specular reflection.", "Therefore, for a pixel $p$ at the center of each block, we randomly select $n_4$ pixels from that block with the similar intrinsic diffuse color so that $\\Vert I_{chro}(p) - I_{chro}(q)\\Vert < T$ is satisfied, where $q$ is a random pixel in the block and $T$ is a similarity threshold.", "Each of those $n_4$ selected pixels can be considered as one representation of pixel $p$ .", "This process on all pixels provides $n_4$ different representation images for the observed image.", "Now, we apply the chromaticity-based approach presented in [46] to estimate the initial diffuse components for each representation.", "As discussed in Section , current chromaticity-based approaches including [46] may not be accurate in a large specular reflection region, and also do not provide satisfactory results on regions with one or more saturated color channels.", "However, [46] still works in regions where the specular reflection is weak and so can provide a reasonable initial diffuse component for our optimization framework." 
], [ "Tensor Reconstruction and Formulation", "Let $\\mathcal {X}^{\\vartheta } \\in \\mathbb {R}^{n_1 \\times n_2 \\times n_3 \\times n_4}$ where $\\mathcal {X}_i^{\\vartheta } \\in \\mathbb {R}^{n_1 \\times n_2 \\times n_3}$ is the $i^{th}$ RGB representation of the original image ($i = 1, ..., n_4$ ) with the angle of polarization orientation $\\vartheta \\in \\lbrace 0^{\\circ }, 45^{\\circ }, 90^{\\circ }, 135^{\\circ }\\rbrace $ .", "$n_3 = 3$ denotes R, G and B channels.", "Therefore, $\\hat{\\mathcal {D}^{\\vartheta }}=\\Psi (\\mathcal {X}^{\\vartheta }) \\in \\mathbb {R}^{n_1 \\times n_2 \\times n_3 \\times n_4}$ denotes the initial diffuse images obtained from Section REF using [46] (see Fig.", "REF (a)).", "Corresponding pixels in the fourth dimension of the tensor $\\mathcal {X}^{\\vartheta }$ have a similar chromaticity basis color.", "This means the main structure of their diffuse component should be linearly correlated.", "Therefore the initial diffuse tensor $\\mathcal {\\hat{D}^{\\vartheta }}$ should also be linearly correlated, and can be approximated by a small number of basis colors, corresponding to the rank in the tensor low-rank decomposition.", "Since $\\hat{\\mathcal {D}}^{\\vartheta }$ is a $4D$ -tensor, solving the decomposition is not straightforward.", "Although we can use low-rank tensor ring (TR) decomposition [49] to separate the specular components from $\\hat{\\mathcal {D}}^{\\vartheta }$ , the method needs the TR rank to be specified before the optimization, which is not practical in real world objects and estimating it is time consuming.", "So, to make the problem tractable, we unfold $\\hat{\\mathcal {D}}^{\\vartheta }$ along the 3rd dimension as $\\mathcal {D}^{\\vartheta }= reshape(\\hat{\\mathcal {D}}^{\\vartheta },[n_1,m, n_4])$ where $m=n_2 \\times n_3$ , so that $\\mathcal {D}^{\\vartheta } = [\\mathcal {D_R}^{\\vartheta },\\mathcal {D_G}^{\\vartheta },\\mathcal {D_B}^{\\vartheta }]$ (see Fig.", "REF (b)).", "This block operation unfolds each of the color images by separating the channels and forms a 2D matrix.", "To take advantage of polarimetric cues in our tensor optimization, we stack all $\\mathcal {D}^{\\vartheta }, \\vartheta = [0^{\\circ }, 45^{\\circ }, 90^{\\circ }, 135^{\\circ }]$ to form tensor $\\mathcal {D}= [\\mathcal {D_R,D_G,D_B}] \\in \\mathbb {R}^{k \\times m \\times n_4}, k= 4n_1$ which includes all the representations of all four polarization images (Fig.", "REF (c)).", "Figure: Our proposed tensor structuresTo separate specular reflection and saturated regions from diffuse components, we use the fact that specular components are spatially sparse in images.", "Based on this observation, regions with specular reflection should satisfy sparsity constraint, which can be modeled with the $l_{1}$ -norm.", "As discussed in Section REF , using only chromaticity image and optimizing it to obtain diffuse image suffers from color distortion in practice.", "To address this issue and to improve the overall accuracy of the final diffuse image, we introduce a new regularization term based on polarimetric cues.", "This regularization term benefits from the similarity of phase angle between R, G, and B channels, and adds constraint between them which are already separated in $\\mathcal {D}$ .", "From Section REF , $\\phi $ is only related to the geometric surface information w.r.t the observer, and so it should remain unchanged among the color channels for each pixel in the ideal case.", "Based on this concept, $\\phi ^{\\bar{R}}=\\phi 
^{\\bar{G}}=\\phi ^{\\bar{B}}$ should be satisfied.", "We use $\\phi ^{\\bar{R}} = mod(\\phi ^R, \\pi )$ to avoid the $\\pi $ -ambiguity on azimuth angle.", "Now let $\\Phi (\\mathcal {D})$ compute $\\phi ^{\\bar{R}}$ , $\\phi ^{\\bar{G}}$ , and $\\phi ^{\\bar{B}}$ from (REF ) and updates $\\phi =mean(\\phi ^{\\bar{R}},$ $\\phi ^{\\bar{G}},\\phi ^{\\bar{B}})$ .", "Also $\\Phi ^{-1}(\\mathcal {D})$ is the inverse function of $\\Phi (\\mathcal {D})$ so that $\\Phi ^{-1}(\\Phi ({\\mathcal {D}})) = \\bar{\\mathcal {D}}$ recovers the updated $\\bar{\\mathcal {D}}$ based on the new $\\phi $ .", "With the above definition, we propose the following tensor low-rank and sparse decomposition formulation to separate the diffuse reflections from the specular reflection.", "$\\begin{split}\\min _{\\mathcal {L},\\mathcal {S}}\\Vert \\mathcal {L}\\Vert _*+\\lambda \\Vert \\tau \\odot \\mathcal {S}\\Vert _{1}+ \\gamma \\Vert \\Phi ^{-1}(\\Phi (\\mathcal {L}))- \\mathcal {L}\\Vert _F^2 \\\\s.t.", "\\,\\ \\mathcal {D} = \\mathcal {L}+\\mathcal {S} \\hspace{50.0pt}\\end{split}$ where $\\odot $ denotes an element-wise multiplication.", "Since the tensor low-rank decomposition framework may smooth the images, we also define a spatially variant weight $\\tau = \\frac{1-I}{e^{-\\alpha \\Vert \\nabla {I}\\Vert ^\\beta }}$ where $I = \\mathcal {D}(:,:,1)$ , $\\alpha =2$ and $\\beta =0.25$ in our experiments.", "The third term in (REF ) works as a phase angle regularization by updating $\\mathcal {L}$ using (REF ) and the above explanation so that the updated $\\mathcal {L}$ shares a similar phase angle between color channels." ], [ "Optimization", "In order to solve (REF ), we use the inexact augmented Lagrangian method (IALM) with the augmented Lagrangian function $\\mathcal {H}(\\mathcal {L}, \\mathcal {S},\\mathcal {Y};\\mu )$ whose main steps are described as follows.", "$\\begin{split}\\mathcal {H}(\\mathcal {L}, \\mathcal {S},\\mathcal {Y};\\mu ) = \\Vert \\mathcal {L}\\Vert _*+\\lambda \\Vert \\tau \\odot \\mathcal {S}\\Vert _{1} + \\gamma \\Vert \\Phi ^{-1}(\\Phi (\\mathcal {L}))- \\mathcal {L}\\Vert _F^2\\\\+ <\\mathcal {Y}, \\mathcal {D}-\\mathcal {L}-\\mathcal {S}>+\\frac{\\mu }{2}\\Vert \\mathcal {D}-\\mathcal {L}-\\mathcal {S}\\Vert _F^2 \\hspace{20.0pt}\\end{split}$ where $\\mathcal {Y}$ is a Lagrangian multiplier, $\\mu $ is a positive auto-adjusted scalar, and $<A,B>=trace(A^TB)$ .", "$\\lambda =1/\\sqrt{max(n_1 \\times n_2, n_3)n_4}$ and $\\gamma $ is an increasing positive scalar.", "Now we first replace $\\Phi ^{-1}(\\Phi (\\mathcal {L}))$ with $\\mathcal {Q}$ , and then solve the problem through alternatively updating $\\mathcal {L}$ , $\\mathcal {S}$ , and $\\mathcal {Q}$ in each iteration to minimize $\\mathcal {H(L,S,Q,Y};\\mu )$ with other variables fixed until convergence as follows.", "$\\begin{split}\\mathcal {L}^{t+1}\\leftarrow \\min _{\\mathcal {L}}\\frac{1}{2\\gamma +\\mu }\\Vert \\mathcal {L}\\Vert _*+\\frac{1}{2}\\Vert \\mathcal {L}-(\\frac{2\\gamma }{2\\gamma +\\mu }(\\mathcal {D}-\\mathcal {S}^t)\\\\+\\frac{1}{2\\gamma +\\mu }\\mathcal {Y}^t+\\frac{\\mu }{2\\gamma +\\mu }\\mathcal {Q}^t)\\Vert _F^2\\end{split}$ $\\mathcal {S}^{t+1}\\leftarrow \\min _{\\mathcal {S}}\\lambda \\Vert \\tau \\odot \\mathcal {S}\\Vert _{1}+\\frac{\\mu }{2}\\Vert \\mathcal {S}-(\\mathcal {D}-\\mathcal {L}^{t+1}+\\frac{\\mathcal {Y}^t}{\\mu })\\Vert _F^2$ $\\mathcal {Q}^{t+1} = \\Phi ^{-1}(\\Phi (\\mathcal {L}^{t+1}))$ $\\mathcal {Y}^{t+1} = \\mathcal {Y}^t + \\mu (\\mathcal {D}^{t+1}-\\mathcal 
{L}^{t+1}-\\mathcal {S}^{t+1})$ where $\\mu = min(\\rho \\mu ,\\mu _{max})$ .", "Both (REF ) and (REF ) have closed form solutions in [19] and [50], respectively.", "The error is computed as $\\Vert \\mathcal {D}-\\mathcal {L}-\\mathcal {S}\\Vert _F /\\Vert \\mathcal {D}\\Vert _F$ .", "The loop stops when the error falls below a threshold ($10^{-5}$ in our experiments).", "After convergence, $\\mathcal {L}(:,:,1)$ includes all four diffuse polarization images.", "We use the average of those four results as the final diffuse image." ], [ "Time Complexity", "In this work, we use ADMM to update $\\mathcal {L}$ and $\\mathcal {S}$ , which have closed form solutions.", "In these two steps the main cost lies in the update of $\\mathcal {L}^{t+1} \\in \\mathbb {R}^{k \\times m \\times n_4}$ , which requires computing FFT and $n_4$ SVDs of $k \\times m$ matrices.", "Thus, time complexity of the first two steps per iteration is $O(kmn_4logn_4 +k_{(1)}k_{(2)}^2n_4)$ , where $k_{(1)} = max(k, m)$ and $k_{(2)} = min(k, m)$  [19].", "The time complexity to update $\\mathcal {Q}^{t+1}$ is $O(kmn_4)$ , since $\\Phi (.", ")$ is an element-wise function.", "Therefore, the total time complexity of the optimization problem (REF ) is $O(kmn_4logn_4 +k_{(1)}k_{(2)}^2n_4)$ .", "Figure: Sample synthetic images under different source of light and specularity information.Table: Quantitative evaluation in terms of PSNR and SSIM for Bunny images under different specular reflections (best scores: bold, second best scores: underline)." ], [ "Experimental Results and Discussion", "In this section we present the experimental results of our proposed method on both synthetic and real polarization images.", "In the first set of experiments, we evaluate our method quantitatively by comparing the results with those from chromaticity-based approaches [46], [31], [30], [48], [32], [45], [35], a matrix factorization based mothod [2], and a deep learning based method [6] on synthetic polarization images.", "In the second set of experiments we show the qualitative results of the proposed method and compare them with the results of the competing methods on real polarization images captured by a polarization camera." 
], [ "Evaluation on Synthetic Polarization Images", "To overcome the lack of polarization image datasets for highlight specular reflection removal, we have had to first create our own synthetic polarization image datasets, and then evaluate our proposed method on them.", "To create our synthetic dataset, we use point clouds of three objects “Bunny\", “Dragon\", and “Armadillo\" from the Stanford 3D scanning repository [1], and create 2D depth images.", "We then render the depth images with the Blinn-Phong reflectance model under different arbitrary point lighting source $s$ using the pinhole camera model, with uniform and non-uniform albedo texture.", "In this process we use polarizer filter angles at $0^{\\circ }, 45^{\\circ }, 90^{\\circ },$ and $135^{\\circ }$ to create synthetic polarization images.", "Fig.", "REF shows the obtained images under different sources of light and specularity information which we use in our evaluations.", "Each image in Fig.", "REF is the average of the four polarization images.", "Figure: Performance of the proposed method and the competing methods while the specular reflection increasesIn the first experiment, we evaluate the results of our proposed method and compare the results with the competing methods using two common evaluation metrics SSIM [40] and PSNR.", "Table REF shows the performance comparison on “Bunny\" dataset under different illumination and specular reflections.", "For all the samples, the proposed method achieves the best scores with significant improvement in the evaluation metrics.", "In images with weak specular reflection, almost all competing methods provide acceptable results; however, the proposed method still outperforms all of them.", "The capability of the proposed method is more clear in cases with strong specular reflection, e.g, “Bunny5\" and “Bunny6\", where the competing methods cannot recover the diffuse reflection properly.", "Experiment on “Bunny7\" evaluates our proposed method in the presence of saturated regions under varying albedo which shows a noticeable improvement in comparison with the competitive methods.", "We also evaluate the proposed method while the specular reflection increases in Fig.", "REF .", "Specular reflection and saturation regions are increasing in the samples in the x-axis.", "This figure specifically visualizes the capability of our proposed method in the presence of strong specular reflection and saturation (e.g., “Bunny6\").", "Figure: Comparison of results between the proposed method and the competing methods.", "(a) RGB image, (b) Ground truth, (c) Shen'08 , (d) Yang, (e) Akashi , (f) Shen'13 , (g) Yoon , (h) Yamamoto , (i) Gang , (j) OursFig.", "REF shows the qualitative results of four sample images from Table REF .", "This figure clearly shows that the chromaticity based methods may not have enough information to recover the diffuse image in regions with strong specular reflection, and all of them fail in such cases.", "In contrast, the proposed method can recover diffuse values of those regions due to use of polarimetric cues and spatial information in the proposed tensor structure.", "To show the capability of our proposed method, we also evaluate our method with two more datasets “Dragon\" and “Armadillo\" under different conditions with saturated regions, quantitatively.", "Table REF shows the performance of the proposed method in comparison with the competing methods.", "In these experiments, the proposed method also outperforms all the competing methods in terms of both SSIM and 
PSNR.", "Figs. REF and REF show the qualitative results of all the methods available in Table REF on “Dragon1\" and “Armadillo\".", "Table: Quantitative evaluation in terms of PSNR and SSIM for Bunny images under different specular reflections (best scores: bold, second best scores: underline).Figure: Comparison of results between the proposed method and the competing methods.", "(a) RGB image, (b) Ground truth, (c) Shen'08 , (d) Yang, (e) Akashi , (f) Shen'13 , (g) Yoon , (h) Yamamoto , (i) Gang , (j) OursFigure: Comparison of results between the proposed method and the competing methods.", "(a) RGB image, (b) Ground truth, (c) Shen'08 , (d) Yang, (e) Akashi , (f) Shen'13 , (g) Yoon , (h) Yamamoto , (i) Gang , (j) Ours" ], [ "Evaluation of the phase angle regularization and $\tau $", "In this section, we evaluate the effects of the penalty term $\tau $ and the phase angle regularization term on our proposed tensor structure.", "In the first experiment, we evaluate the capability of our proposed tensor decomposition without the phase angle regularization term, where (REF ) becomes $\begin{split}\min _{\mathcal {L},\mathcal {S}}\Vert \mathcal {L}\Vert _*+\lambda \Vert \tau \odot \mathcal {S}\Vert _{1} \,\,\,\,s.t.\,\ \mathcal {D} = \mathcal {L}+\mathcal {S}\end{split}$ We further remove the penalty term $\tau $ to show the capability of the proposed tensor structure itself, where the proposed formulation becomes $\begin{split}\min _{\mathcal {L},\mathcal {S}}\Vert \mathcal {L}\Vert _*+\lambda \Vert \mathcal {S}\Vert _{1} \,\,\,\,s.t.\,\ \mathcal {D} = \mathcal {L}+\mathcal {S}\end{split}$ We solve both (REF ) and (REF ) and compare the obtained results with the results of (REF ).", "Table REF shows that (REF ), without $\tau $ and the polarization regularization term, can still outperform the competing methods of Table REF , due to the use of the tensor structure, which keeps the spatial information of the images.", "The results of (REF ) in Table REF show the effectiveness of the spatially variant weight $\tau $ , which prevents the smoothing of the images through the optimization process, a common artifact of low-rank decomposition.", "$\tau $ works as a penalty term and increases the optimization cost around high-frequency components of the images (e.g., edges); therefore, those regions are grouped into the low-rank component $\mathcal {L}$ .", "This means that $\mathcal {L}$ retains as much image detail as possible, which boosts the accuracy of the recovered diffuse images.", "Finally, the last two columns of Table REF show the results of our proposed formulation presented in (REF ).", "This experiment illustrates the benefit of polarimetric information for improving the accuracy of specular reflection separation from diffuse images, especially in regions of strong specular reflection or in saturated areas (e.g., “Bunny6\").", "Figure: Comparison of results between the proposed method and the competing methods.", "(a) RGB image, (b) Tan , (c) Shen'08 , (d) Yang, (e) Akashi , (f) Shen'13 , (g) Yoon , (h) Yamamoto , (i) Gang , (j) Ours" ], [ "Evaluation on Real Polarization Images", "In this experiment, we show the results of the proposed method on real polarization images captured by a polarization camera.", "Since there is no available ground truth for these polarization images, we only compare our method with the competing methods qualitatively in Fig. REF .", "Overall, our method performs well on a diversity of real images
which may contain various materials with different textures, overexposure, and natural illumination.", "As shown, our method can handle saturated areas without introducing artifacts.", "In comparison with the competing methods, the proposed method suppresses the specular highlights well and preserves the image details.", "Table: Evaluating the effects of the penalty term $\tau $ and the phase angle regularization in terms of PSNR and SSIM for “Bunny4\" and “Bunny6\" images with strong specular reflections" ], [ "Conclusion", "In this paper, we have proposed a novel method based on polarimetric information for highlight specular reflection removal.", "Our method exploits polarimetric cues obtained by a polarization camera to extract the diffuse image from the polarization images.", "The proposed method is built upon a tensor low-rank and sparse decomposition framework.", "In our method, we first select multiple candidates for each pixel based on the polarization chromaticity image.", "Then we stack those candidates in a tensor structure, where the candidates for each pixel can be considered as different representations of that pixel.", "Since the diffuse colors of pixels in a small area with similar material and chromaticity value are linearly correlated, our proposed tensor low-rank method can recover them.", "Different from previous low-rank decomposition based methods, the proposed method keeps the spatial structure of the data and recovers the saturated regions using the diffuse color of adjacent pixels.", "Due to the use of phase angle regularization between the color channels, the proposed method can also handle color distortion without the artifacts that are common in chromaticity-based approaches.", "Our experimental results on both synthetic and real polarization images demonstrate the superiority of our method with respect to the state-of-the-art." ] ]
2207.03543
[ [ "BibleTTS: a large, high-fidelity, multilingual, and uniquely African\n speech corpus" ], [ "Abstract BibleTTS is a large, high-quality, open speech dataset for ten languages spoken in Sub-Saharan Africa.", "The corpus contains up to 86 hours of aligned, studio quality 48kHz single speaker recordings per language, enabling the development of high-quality text-to-speech models.", "The ten languages represented are: Akuapem Twi, Asante Twi, Chichewa, Ewe, Hausa, Kikuyu, Lingala, Luganda, Luo, and Yoruba.", "This corpus is a derivative work of Bible recordings made and released by the Open.Bible project from Biblica.", "We have aligned, cleaned, and filtered the original recordings, and additionally hand-checked a subset of the alignments for each language.", "We present results for text-to-speech models with Coqui TTS.", "The data is released under a commercial-friendly CC-BY-SA license." ], [ "Introduction", "The majority of the world's approximately 7,000 languages [1] do not have open speech datasets, and even fewer have high-quality data with aligned text and speech, which can be used for training text-to-speech (TTS) models.", "The creation of benchmark datasets such as Librispeech [2], LibriTTS [3], and LJSpeech [4] enabled significant advances through community development on common resources, but these resources cover few languages, and most TTS systems evaluate on English only.", "Speech synthesis systems have received significant attention in recent years due to the advances provided by deep learning.", "These advances enable TTS models to achieve improved naturalness with respect to human speech [5], [6], [7], and improved synthesized speech as driven adoption of virtual assistants [8], [9].", "However, neural models often require a non-trivial amount of data for training.", "This necessity leaves many language communities under-served in the development of speech technologies [10], and it further results in researchers not evaluating models on diverse linguistic phenomena.", "In this work, we present the BibleTTS corpus, a high-quality aligned speech corpus for ten African languages.", "This data enables further research and resource creation for these languages and will allow researchers to create meaningful benchmarks against non-English languages.", "Creating high-quality aligned datasets typically requires tools not available for most languages, hindering the creation of datasets for lower-resourced languages.", "Specifically, forced alignment of speech and text typically requires pre-trained acoustic models and grapheme-to-phoneme (G2P) models.", "This process can be challenging and error-prone without high-quality resources.", "We demonstrate that it is possible to force-align data without access to any pre-trained models (acoustic or G2P), and still produce quality output.", "Additionally, recent corpora that significantly expand linguistic coverage for TTS datasets are often not freely available [11], [12], contain less single-speaker data [13], and/or have lower-quality recordings.", "BibleTTS stands out in this regard as it is a large, high-fidelity corpus made of single-speaker recordings.", "The corpus is released under an open CC-BY-SA license.", "Corpus links and samples created with our TTS models can be accessed from the project websitehttps://masakhane-io.github.io/bibleTTS/." 
], [ "Related Work", "We focus on related work for African languages in the following section.", "Existing publicly available datasets are typically small.", "For Yorùbá these include a 2.75 hour corpus [14], [15] and a 4 hour multi-speaker dataset [16].", "TWB Gamayun kits [17] include a 6-hour single speaker high-quality Swahili speech corpus optimized for TTS training.", "Earlier Yorùbá TTS efforts typically used bespoke private data [18], [19], [20], [21], [22].", "For isiXhosa, Sesotho, Setswana and Afrikaans, multi-speaker corpora of approximately 2 hours each have been developed for TTS [23], [24].", "The CMU Wilderness dataset [11] includes up to 20 hours of high-quality, single-speaker data for several African languages, but it is not publicly available and the alignments can contain noise.", "TTS systems research for African languages has comprised development efforts in frameworks like Festival [25] or MaryTTS [26] for Yorùbá  [27], [28], Ibibio [29], Amharic [30], Fon [31], isiZulu [32], KiSwahili [33].", "While many of these systems used concatenative synthesis, in large part because the available corpora were small, there have also been investigations into statistical parametric speech synthesis for Ibibio [34].", "Finally, there have been efforts in related tasks, such as grapheme-to-phoneme research for Yorùbá  [35], intonation modeling [21], and numeral preprocessing [36]." ], [ "Languages represented", "Table REF shows the languages in the BibleTTS corpus, with their language families, the number of speakers[1] and the regions in Africa where they are spoken.", "The corpus consists of ten languages from the three largest language families in Africa (Niger-Congo, Afro-Asiatic and Nilo-Saharan) and four regions of Africa.", "All of these languages are tonal and are spoken primarily in sub-Saharan Africa." 
], [ "Language Characteristics", "Éwé [ewe] uses 35 Latin letters excluding (c, j, q), with 12 additional letters (ɖ, dz, ɛ, ƒ, gb, ɣ, kp, ny, , ɔ, ts, ʋ).", "Ewe has three tones, and they are marked in text.", "Hausa [hau] uses two different writing scripts: Ajami and Boko.", "The Boko script is the most widely used and is based on the Latin alphabet with 44 letters.", "The alphabet excludes letters (p, q, v and x) and uses 12 additional letters: ɓ, ɗ, ƙ, y, kw, ƙw, gw, ky, ƙy, gy, sh, ts.", "Hausa is tonal, but tones are not represented in text.", "Kikuyu [kik] uses Latin script with 27 letters excluding (f, l, p, s, v, x, y, z), and including additional nine letters (ĩ, ũ, mb, nd, nj, ng, ng`,ny, th).", "Kikuyu uses two tones (high and low) but they are not marked in text.", "Lingala [lin] uses the Latin script with 40 letters excluding (j, q, x) and including an additional 17 letters (ɛ, gb, kp, mb, mf, mp, mv, nd, ng, ngb, nk, ns, nt, ny, nz, ɔ, ts).", "Lingala uses two tones (high and low), but they are not marked in text.", "Luganda [lug] uses 24 Latin letters excluding (h, q, x), and including additional two letters (, ny).", "Luganda uses three tones, but they are not marked in text.", "Luo [luo] or Dholuo uses Latin script with 31 letters excluding the letters (c, q, x, v, z), and additional letters (ch, dh, mb, nd, ng’, ng, ny, nj, th, sh).", "Luo has four tones, but they are not marked in text.", "Chichewa [nya] uses the Latin script with 31 letters excluding (q, x, y), and including additional eight letters (ch, kh, ng, , ph, tch, th, ŵ).", "Chichewa uses two tones (high and low) but they are not marked in text.", "Akan [aka] is a language with multiple dialects (including Fante, Bono, Asante, and Akuapem), and they are collectively known as Twi.", "In this study, we focus on Asante and Akuapem which are mutually intelligible and share the same alphabets (referred to herein as aka-Asante and aka-Akuapem).", "Twi uses 22 Latin letters excluding (c,j,q,v,x,z), and including two additional letters (ɛ, ɔ).", "Yorùbá [yor] uses 25 Latin letters without the letters (c, q, v, x and z) and with additional letters (ẹ, gb, ṣ, ọ).", "Yorùbá is a tonal language with three tones: low, middle, and high.", "These tones are represented by the grave (e.g.", "“è ”), optional macron (e.g.", "“ē”) and acute (e.g.", "“é”) accents respectively but the mid tone is usually ignored in writing." ], [ "Corpus creation", "The BibleTTS corpus consists of high-quality audio released as 48kHz, 24-bit, mono-channel FLAC files.", "Recordings for each language are under professional quality, close-microphone conditions (i.e., without background noise or echo).", "BibleTTS is unique among open speech corpora for the volume of data per speaker and suitability for TTS.", "The corpus consists of ten languages which are under-represented in today's voice technology landscape, both in academia and in industry.", "We release train/dev/test splits for each language, where dev is the Book of Ezra, test is Colossians, and train is all other books.", "Figure: Distribution of the sample length per language.", "Samples longer than 30s and with fewer than 10 characters were removed, and outlier segments were detected and discarded as described in sec:outlier-detection.Lingala is a slight outlier with the majority of segments between 10 and 20 seconds, while the other five languages have segments centered at 5-10s each." 
], [ "Alignment", "The BibleTTS corpus contains audio recordings and text transcripts (i.e.", "“Open Contemporary Bible” translations) which were released by Biblica via the Open.Bible project.https://open.bible/resources The original audio recordings were 48kHz, mono-channel WAV, typically one recording per chapter of the Bible.", "Each chapter was up to 30 minutes long, which is too long for most modeling tasks.", "Verses are a natural alternative, as the text already contains verse boundaries.", "Aligning at the verse level creates more manageable recording lengths of up to 30 seconds (see fig:samplelengths) which are more likely to be consistent across languages than segmentation on voice activity detection or other alternatives.", "Potential challenges in alignment include additional content in either the speech or text, such as spoken titles and headings or text annotations, and the availability of pre-trained acoustic models and grapheme-to-phoneme mappings.", "We have employed various alignment techniques depending on the availability of verse timestamps and resources in each language, and evaluated a subset of the alignments with native speakers." ], [ "Verse timestamps", "Three languages (aka-Akuapem, aka-Asante, and lin) were straightforwardly segmented using verse-level timestamps released by the Open.Bible project.", "The timestamps show the start time of every verse, as well as when the book and chapter titles were spoken.", "With these timestamps, verses were isolated and saved as individual audio files using sox.", "These alignment scripts can be found on Github at coqui-ai/open-bible-scripts.", "https://github.com/coqui-ai/open-bible-scripts" ], [ "Forced alignment using pre-trained acoustic models", "Forced alignment is the process of extracting timestamps given an audio and a transcript pair, and requires either a pre-trained acoustic model or training one from scratch.", "For Hausa (hau), we opted to use the Montreal Forced Aligner (MFA) [37] for which there is a pre-trained Hausa model [38].", "The code is open-sourced on Githubhttps://github.com/alpoktem/bible2speechDB.", "The process is as follows: Audio of each chapter of each book is downloaded together with their script in the form of an XML file, XML script is parsed and converted into a plain normalized text file.", "Normalization entails: (a) adding the chapter title at the beginning of the script as \"Sura <chapter-no>\", (b) converting numbers into written form using a dictionary prepared with Hausa linguists, and (c) adding a new line after every sentence ending punctuation mark (e.g.", ".", "?!\").", "A grapheme-to-phoneme (G2P) dictionary is created from the word list extracted from the transcripts using the Hausa G2P model.", "Alignment is performed for each chapter using the audio and normalized script with a beam length of 1000.", "The time-aligned TextGrid file is processed in parallel with the sentence-segmented transcript to partition the chapter audio into sentence-level audio chunks with their transcriptions." 
], [ "Forced alignment from scratch", "Two languages (ewe and yor) were aligned via forced alignment from scratch.", "Using only the found audio and transcripts (i.e., without a pre-trained acoustic model), an acoustic model was trained and the data aligned with the Montreal Forced Aligner.", "Graphemes were used as a proxy for phonemes in place of G2P data.", "The code used to generate alignments can be found in the coqui-ai/open-bible-scripts repository.https://github.com/coqui-ai/open-bible-scripts After forced alignment, we used regular expressions to pull out whole verses which were aligned such that silence occurred both at the beginning and the end of a verse.", "Segmenting out audio at the verse-level instead of splitting on silence may allow downstream TTS models to capture higher-level prosody." ], [ "Outlier detection", "Following the alignment stage, we detected and removed outliers using the data-checker toolkit together with human judgments.", "The relevant code is open-sourced on Github at coqui-ai/data-checker.https://github.com/coqui-ai/data-checker First, all segments longer than 30 seconds, or less than 10 characters in the aligned transcript, were removed.", "Then, the removal of outliers was performed and fine-tuned for each language independently until the major offending samples\"Major offending samples\" was not explicitly defined, but refers to samples labeled by a non-native speaker of these languages as containing obvious mismatches between transcripts and speech.", "were no longer encountered, as described below.", "Every pair of <audio,transcript> was assigned an \"outlier score\", and the most extreme outliers were removed.", "First, the ratio of transcript length (characters) to audio length (seconds) was calculated for each sample.", "Then a Gaussian distribution was estimated for all samples in a given language.", "Lastly, the number of standard deviations from the mean was calculated for each sample.", "Outliers were excluded if they existed more than N standard deviations away from the mean, where N was fine-tuned per language with an iterative human-in-the-loop approach, until minimal offending samples were encountered.", "For most languages, it was sufficient to exclude samples more than 3 standard deviations from the mean (or .2% of the data).", "However, yor notably required more outliers removed to attain a quality dataset.", "The resulting distribution of segment lengths per language is shown in fig:samplelengths." 
], [ "Human evaluation of alignment quality", "We facilitated human evaluation of both the alignment and the output of the TTS models.", "In total, we collected labels from 15 annotators (three per language) for ewe, hau, lin, aka-Asante, and aka-Akuapem and an additional five annotators for yor.", "To judge the quality of <audio,transcript> pairs from our alignments, we randomly sampled 50 example pairings of aligned transcripts and the corresponding audio clips across the train, dev, and test sets.", "Annotators selected the one option that best described the quality of the alignment: Audio contains EXTRA words not in the transcript Audio is MISSING words that are in the transcript Audio is MISSING words AND includes EXTRA words No missing or extra words In cases where the labels corresponding to various annotators disagreed, we took the majority vote label.", "In cases where the number of labels was spread evenly among different choices, we noted these as \"conflicting.\"", "Results of human evaluation are shown in tab:humanevalalignment.", "As discussed in Section REF , some languages (aka-Asante, aka-Akuapem, lin) were segmented using existing verse-level timestamp files.", "Interestingly, annotators labeled these languages as having a high percentage of samples where the audio contains additional words not present in the aligned text.", "Aligning from scratch (ewe, yor) produced a greater proportion of segments with exact matches between speech and text than using forced alignment with a pre-trained acoustic model (hau).", "However, it should be noted that significantly more data was removed due to outliers for yor, and less data overall was aligned with ewe, yor (see the statistics for unaligned vs. aligned hours in Table REF ).", "Table: Corpus statistics.", "The corpus consists of data for ten languages, of which six have been aligned and formatted for immediate use to train TTS models." ], [ "TTS Models", "To experimentally validate the quality of our dataset, we train the VITS end-to-end speech synthesis model [7] with the sampling rate 22050 Hz in the six aligned languages.", "The chosen languages are Akuapem Twi, Asante Twi, Éwé, Hausa, Lingala, and Yorùbá.", "We chose VITS for its state-of-the-art naturalness and also for its robust alignment mechanism [39].", "The model takes characters as input and does not require a phonemizer.", "To accelerate training we use transfer learning.", "We start from a model pre-trained on LJSpeech [40] for 1M steps, which is available via the Coqui TTS repositoryhttps://github.com/coqui-ai/TTS.", "We continue training for approximately 110K steps for each one of the languages.", "We use the AdamW optimizer [41] with betas 0.8 and 0.99, weight decay 0.01, and an initial learning rate of 0.0002 decaying exponentially by a gamma of 0.999875  [42].", "The models were trained using an NVIDIA A100 SXM4 80GB with a batch size of 100.", "All models are released in the Coqui TTS toolkithttps://github.com/coqui-ai/TTS." 
], [ "Results and Discussion", "We evaluated the synthesized speech using subjective judgments, averaged across multiple speakers.", "We randomly sampled 50 segments from the in-domain test set, as well as out-of-domain corpora to test the models' ability to generalize to non-Bible contexts.", "The out-of-domain sentences were obtained from the NEWS corpushttp://github.com/masakhane-io/lacuna_pos_ner (except for Akuapem Twi).", "Annotators rated the quality of synthesized speech in terms of naturalness of voice and appropriateness of pronunciation for the particular language or dialect.", "Annotators selected from a 5-point Likert rating for each sample: 1 (bad), 2 (poor), 3 (fair), 4 (good), and 5 (excellent).", "The mean opinion scores (MOS) are shown in tab:evaltts.", "We additionally use mel cepstral distortion (MCD) [43], an automatic edit distance metric, to assess quality for the in-domain segments where we have reference speech, with dynamic time warping (DTW) to align the segments.", "MCD largely follows MOS: languages with better human judgments (higher MOS) typically have better (lower) MCD scores, though MCD can be misleading, as in Lingala.", "The MOS judgments seem related to the goodness of alignment evaluations.", "That is, the language with the best alignments (ewe) was also rated the best MOS for speech synthesized from the resulting model.", "Similarly, the language with the worst alignments (hau) resulted in the TTS model with the lowest out-of-domain MOS scores.", "To improve MOS, it may be necessary to either improve the alignments or apply more stringent outlier exclusion criteria, which should be possible, as the training data size remains significantly larger than many available TTS corpora for these languages or others.", "Table: Human evaluation of alignment.", "Shown are percentages of <audio,transcript> samples with an exact match (EM), added words, missing words, or both.Table: Evaluation of TTS model outputs using both human judgments (MOS) and an automatic metric (MCD).", "In-Domain texts are Bible verses, and Out-of-Domain is news." 
], [ "Conclusions", "The BibleTTS corpus is the first of its kind in many respects.", "The quality and volume of the data is extremely rare in open speech corpora – these are professional, studio quality, 48kHz recordings, with up to 86 hours of verse-aligned data per language, for 10 languages spoken in sub-Saharan Africa.", "The BibleTTS license is research and commercial friendly: CC-BY-SA.", "We hope that this corpus will enable advances in speech technology for African languages and also will unlock new techniques in TTS, which require more, higher-quality data.", "We described our approach to verse and sentence-level alignment of the original found data with a variety of different resources.", "We used human evaluation to assess the quality of the resulting alignments, and validate the resulting data by training high-quality speech synthesis models with Coqui TTS.", "There are two clear and immediate avenues for future work: (1) verse-level alignment of the remaining four languages (kik, lug, luo, and nya), and (2) improvement of the quality of existing alignments.", "Given the volume of data per language, it may well be the case that we can be more conservative with outlier removal, keeping only 20 or 30 hours of the best data, and obtain even better resulting TTS models.", "Nevertheless, we have shown that the data can already be used to produce high-quality TTS models (as with Ewe), on both in and out of domain text.", "We plan to update BibleTTS such that we have high-quality verse-level alignments for all ten languages." ], [ "Acknowledgements", "We are very grateful to the volunteers, Richard J. Bonnie, Komlanvi D. Akoly, Komlanvi M. Klove, Ibrahim Haruna, Oluwabusayo O. Awoyomi, Emmanuel Anebi, Christian Kilapi, Pacifick Taba, who helped with human evaluation and the Masakhane community." ] ]
2207.03546
[ [ "Testing the Collisionless Nature of Dark Matter with the Radial\n Acceleration Relation in Galaxy Clusters" ], [ "Abstract The radial acceleration relation (RAR) represents a tight empirical relation between the inferred total and baryonic centripetal accelerations, $g_{\\rm{tot}}=GM_{\\rm{tot}}(<r)/r^2$ and $g_{\\rm{bar}}=GM_{\\rm{bar}}(<r)/r^2$, observed in galaxies and galaxy clusters.", "The tight correlation between these two quantities can provide insight into the nature of dark matter.", "Here we use BAHAMAS, a state-of-the-art suite of cosmological hydrodynamical simulations, to characterize the RAR in cluster-scale halos for both cold and collisionless dark matter (CDM) and self-interacting dark matter (SIDM) models.", "SIDM halos generally have reduced central dark matter densities, which reduces the total acceleration in the central region when compared with CDM.", "We compare the RARs in galaxy clusters simulated with different dark matter models to the RAR inferred from CLASH observations.", "Our comparison shows that the cluster-scale RAR in the CDM model provides an excellent match to the CLASH RAR obtained by Tian et al.", "including the high-acceleration regime probed by the brightest cluster galaxies (BCGs).", "By contrast, models with a larger SIDM cross-section yield increasingly poorer matches to the CLASH RAR.", "Excluding the BCG regions results in a weaker but still competitive constraint on the SIDM cross-section.", "Using the RAR data outside the central $r<100$kpc region, an SIDM model with $\\sigma/m=0.3$cm$^{2}$g$^{-1}$ is disfavored at the $3.8\\sigma$ level with respect to the CDM model.", "This study demonstrates the power of the cluster-scale RAR for testing the collisionless nature of dark matter." ], [ "Introduction", "Based on modern cosmological studies [19], dark matter is known to be the dominant matter component in the Universe, while its nature is still a mystery.", "In the current concordance cosmological model, $\\Lambda $ Cold Dark Matter (CDM), large-scale structure formed hierarchically, with dark matter halos growing through a series of mergers of smaller halos as well as accretion.", "This standard model provides a good description of the observed large-scale structure.", "However, there are issues on smaller scales that are potentially challenging for the CDM model.", "For example, collisionless dark matter particles in CDM produce cuspy dark matter halos, where the density rises towards the halo center, which is inconsistent with the cored density profiles inferred for the halos hosting some dwarf galaxies.", "This is the so-called cusp–core problem [14], [37].", "[54] proposed a promising alternative to collisionless CDM, known as Self-Interacting Dark Matter (SIDM).", "SIDM was proposed to solve the small-scale problems with CDM, while preserving the successful predictions on large-scales in the $\\Lambda $ CDM model.", "Elastic collisions of dark matter particles effectively smooth out the mass distribution at the center of halos, leading to a deviation from the cuspy density profile of CDM halos.", "We show this effect for simulated SIDM halos in Figure REF .", "The scattering rate of dark matter particles, $\\Gamma $ , is proportional to the local dark matter density $\\rho _{\\rm {dm}}(r)$ , the dark matter scattering cross-section $\\sigma $ , and the local velocity-dispersion $v(r)$ of dark matter particles [51], $\\Gamma (r)\\simeq \\rho _\\mathrm {dm}(r)v(r)\\sigma /m,$ where $m$ is the dark matter particle mass.", 
"Therefore, with the highest $\\rho _\\mathrm {dm}$ and $v$ , massive galaxy clusters are crucial laboratories to search for dark matter self-interactions.", "Galaxy clusters are the most massive gravitationally-bound structures resulting from the hierarchical formation process.", "About $85\\%$ of their mass content is invisible dark matter, with the remainder being baryons that are mostly in the form of X-ray emitting hot gas.", "Since cluster properties depend on the growth of structure, they contain an abundance of cosmological and astrophysical information.", "Several characteristic features of galaxy clusters have been used to test dark matter models, such as dark matter halo shapes , [64], offsets between dark matter and galaxies in merging systems , , [28], the wobbling of brightest central galaxies [18] and the amount and lensing efficiency of dark matter substructures [22], [56], [34].", "In the $\\Lambda $ CDM model, dark matter has negligible interaction with baryons, except for gravity.", "However, tight relations between the distribution of dark matter and of baryonic matter have been discovered.", "At the scale of spiral galaxies, the ratio of dynamical to baryonic masses, $M_\\mathrm {tot}(<r)/M_\\mathrm {bar}(<r)$ , is found to be tightly coupled with gravitational acceleration, whereas no clear correlation with other physical quantities, such as galaxy size, has been found to date [31].", "By analyzing rotation curves of 153 spiral galaxies, [32] found a tight correlation between two independent observables, namely the centripetal acceleration $g_\\mathrm {tot}(r) = V^2/r = GM_\\mathrm {tot}(<r)/r^2$ and the baryonic contribution to this acceleration $g_\\mathrm {bar}(r)=GM_\\mathrm {bar}(<r)/r^2$ : $\\frac{g_\\mathrm {tot}}{g_\\mathrm {bar}}=\\frac{M_\\mathrm {tot}}{M_\\mathrm {bar}}=\\frac{1}{1-e^{-\\sqrt{g_\\mathrm {bar}/g_\\dagger }}},$ characterized by a characteristic acceleration scale, $g_{\\dagger }= 1.20\\pm 0.24\\times 10^{−10}$  m s$^{-2}$ .", "This empirical relation between the total and baryonic centripetal accelerations is referred to as the radial acceleration relation (RAR).", "Since then, a number of efforts have been invested to study the RAR in various galaxy samples [26], [52], [4], [2], [41].", "Hydrodynamical simulations in the $\\Lambda $ CDM framework succeed in reproducing the observed RAR of galaxies [25], [27], [15], [10], [42].", "Recently, observational studies of the RAR have been extended to cluster-scale objects [57], [5], [46], [45], [11].", "[57] studied the RAR for a subsample of 20 high-mass galaxy clusters targeted by the CLASH program [44].", "In their analysis, the total mass of each cluster is inferred from a combined analysis of strong and weak lensing data [62] and the baryonic mass from X-ray gas mass and stellar mass estimates [9].", "[5] analyzed X-ray data for a sample of 52 non-cool-core clusters.", "They obtained the cluster RAR using X-ray hydrostatic estimates for the total mass and X-ray gas mass estimates for the baryonic mass, ignoring the stellar mass contribution to the baryonic component.", "[11] studied the total and baryonic mass distributions for a sample of 12 X-COP clusters with X-ray and Sunyaev–Zel'dovich (SZ) effect observations, accounting for the stellar mass contribution.", "They found a complex shape of the RAR that strongly departs from the RAR in galaxies.", "All of these studies found that the characteristic acceleration scale $g_{\\dagger }$ in clusters is about an order of magnitude larger than 
that obtained from galaxy-scale objects.", "Observational RAR studies have also been extended to group-scale objects.", "[16] found that $g_{\\dagger }$ of group-scale halos falls in between that found for galaxies and galaxy clusters.", "These observations suggest that there is no universal RAR that holds at all scales from galaxies to galaxy clusters.", "In this study, we use cosmological hydrodynamical simulations to study the RAR for simulated halos in both CDM and SIDM scenarios.", "We aim to explore a new method for constraining the collisionless nature of dark matter using the cluster-scale RAR, as well as to compare the RARs derived from numerical simulations with multiwavelength cluster observations.", "This paper is organized as follows.", "Section  introduces the simulation data sets we use in this work.", "Section  shows the results of the halo RAR obtained from the simulations.", "Section  compares the theoretical predictions from the simulations with observational data from the CLASH program.", "In Section , we discuss the results and implications of our findings.", "Finally a summary is given in Section .", "Throughout this paper, we assume a Wilkinson Microwave Anisotropy Probe (WMAP) 9-year $\\Lambda $ CDM cosmology [19] with $\\Omega _\\mathrm {m}=0.287$ , $\\Omega _\\Lambda = 0.713$ , and a Hubble constant of $H_0 = 100\\,h$  km s$^{-1}$  Mpc$^{-1}$ with $h=0.693$ .", "We denote the critical density of the universe at a particular redshift $z$ as $\\rho _\\mathrm {c}(z)=3H^2(z)/(8\\pi G)$ with $H(z)$ the Hubble function.", "We also define the dimensionless expansion function as $E(z)=H(z)/H_0$ .", "We adopt the standard notation $M_\\Delta $ to denote the total mass enclosed within a sphere of radius $r_\\Delta $ within which the mean overdensity is $\\Delta \\times \\rho _\\mathrm {c}(z)$ .", "We use “$\\ln $ ” to denote the natural logarithm.", "In this work, we use simulations run with four different SIDM models, as well as CDM, that were presented in [48].", "Three of the SIDM models have velocity-independent cross-sections with isotropic scattering of $\\sigma /m = 0.1, 0.3$ , and 1 cm$^{2}$  g$^{-1}$ , which we refer to as SIDM0.1, SIDM0.3, and SIDM1.0, respectively.", "The other SIDM model (hereafter vdSIDM) has a velocity-dependent and anisotropic cross-section.", "The vdSIDM differential cross-section is [49] $\\frac{d\\sigma }{d\\Omega }=\\frac{\\sigma _0}{4\\pi \\left[1+(v^2/w^2)\\sin ^2\\frac{\\theta }{2}\\right]^2},$ where $w$ is a characteristic velocity below which the scattering is approximately isotropic with $\\sigma =\\sigma _0$ .", "For collision velocities greater than $w$ , scattering becomes anisotropic (favoring scattering by small angles) and the cross-section decreases.", "Our vdSIDM model has $\\sigma _0/m = 3.04$  cm$^{2}$  g$^{-1}$ and $w=560$  km s$^{-1}$ , which was chosen to reproduce the best-fit cross-section in [24].", "For more details about the SIDM models we used in this work, we refer the reader to [48]." 
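The velocity dependence of this model can be made explicit by integrating the quoted differential cross-section over solid angle, which gives $\sigma(v)=\sigma_0/(1+v^2/w^2)$. The short check below evaluates the integral numerically for a few illustrative relative velocities (the velocity values themselves are not taken from the simulations): at dwarf-like velocities the cross-section stays close to $\sigma_0/m$, while at cluster-like velocities it drops to a few tenths of a cm$^2$ g$^{-1}$.

```python
import numpy as np
from scipy.integrate import quad

SIGMA0_OVER_M, W = 3.04, 560.0   # cm^2/g and km/s, as quoted above

def total_cross_section(v):
    """sigma(v)/m in cm^2/g from integrating dsigma/dOmega over solid angle."""
    integrand = lambda theta: (
        2 * np.pi * np.sin(theta)
        / (4 * np.pi * (1 + (v / W) ** 2 * np.sin(theta / 2) ** 2) ** 2)
    )
    return SIGMA0_OVER_M * quad(integrand, 0.0, np.pi)[0]

for v in (50.0, 560.0, 1500.0):  # dwarf-, group- and cluster-like velocities (illustrative)
    # the numerical integral should match the closed form sigma0 / (1 + v^2/w^2)
    print(v, total_cross_section(v), SIGMA0_OVER_M / (1 + (v / W) ** 2))
```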
], [ "BAHAMAS Simulations", "We use $N$ -body particle data from the BAryons And HAloes of MAssive Systems (BAHAMAS) suite of cosmological hydrodynamical simulations [30], [29] with WMAP 9-year [19] cosmology.", "BAHAMAS implements sub-grid models for star formation, and stellar and black hole feedback, and produces a good match to the observed stellar mass function, as well as the X-ray luminosities and gas mass fractions of galaxy groups/clusters.", "The simulations occupy large periodic boxes, $400\\,h^{-1}\\,\\mathrm {Mpc}$ on a side.", "For the SIDM simulations, we use the BAHAMAS-SIDM suite [48] which used the same initial conditions and sub-grid models as BAHAMAS, but included an implementation of dark matter scattering.", "The parameters associated with the galaxy formation physics used in BAHAMAS-SIDM, were kept the same as for the original BAHAMAS CDM simulation.", "The friends-of-friends algorithm [8] with a linking length of $0.2$ times the mean inter-particle separation was run on each $z=0.375$ simulation output.", "From each simulation we extract the 10,000 most massive friends-of-friends groups, which have spherical-overdensity masses in the range of $12.5<\\log _{10}[E(z)M_{200}/M_{\\odot }]<15.3$ .", "For each halo we calculate the total enclosed mass profile, as well as the enclosed mass profile of the baryons.", "The center of the halo is defined by the location of the most gravitationally-bound particle, and the enclosed masses are calculated at 101 different radii, logarithmically spaced between proper (as opposed to comoving) lengths of $0.1$  kpc and 4 Mpc.", "In addition, the total density as a function of radius is calculated by taking the difference in total enclosed mass at two successive radii, and dividing by the volume of the associated spherical shell.", "We consider the geometric mean of the inner and outer shell radii to be the radius at which this density is calculated.", "This density profile for each halo was used to make Figure REF .", "Figure: Mean mass density profiles of cluster-scale halos with masses E(z)M 200 >5×10 14 M ⊙ E(z)M_{200} > 5\\times 10^{14}M_\\odot at z=0.375z=0.375, shown for simulations run with five different dark matter models." ], [ "Characterization of the Cluster-scale RAR in the BAHAMAS Simulations", "With the enclosed total and baryonic mass profiles $M_\\mathrm {tot}(<r)=M_\\mathrm {dm}(<r)+M_\\mathrm {bar}(<r)$ and $M_\\mathrm {bar}(<r)$ measured for each individual halo (Section REF ), we calculate their total and baryonic centripetal acceleration profiles as $\\begin{aligned}g_\\mathrm {tot}(r) &= \\frac{GM_\\mathrm {tot}(<r)}{r^2},\\\\g_\\mathrm {bar}(r) &= \\frac{GM_\\mathrm {bar}(<r)}{r^2}.\\end{aligned}$ Equation (REF ) should be regarded as the definition of $g_\\mathrm {tot}(r)$ and $g_\\mathrm {bar}(r)$ , not the result of assuming spherical symmetry.", "In this section, we aim to characterize the relationship between $g_\\mathrm {tot}$ and $g_\\mathrm {bar}$ for samples of halos selected from the BAHAMAS-CDM and -SIDM runs, focusing on massive cluster-scale objects." 
], [ "RARs in CDM and SIDM Halos", "Figure REF shows the joint distribution of baryonic and total centripetal accelerations ($g_\\mathrm {bar},g_\\mathrm {tot}$ ) derived from a subsample of group and cluster scale halos at $z=0.375$ , with masses $E(z)M_{200}>5\\times 10^{13}M_\\odot $ .", "The results are shown separately for the CDM and four different SIDM models.", "The halo centripetal accelerations are logarithmically sampled at scales from $r = 15$  kpc to $r = 4000$  kpc.", "For each dark matter run, the magenta solid line represents the mean $g_\\mathrm {tot}$ as a function of $g_\\mathrm {bar}$ relation, or the halo RAR.", "The yellow dashed line represents the expectation corresponding to the cosmic mean ratio of total to baryonic mass densities, $g_\\mathrm {tot}=(\\Omega _\\mathrm {m}/\\Omega _\\mathrm {b})g_\\mathrm {bar}$ .", "In all cases, the RARs of simulated halos converge towards the cosmic mean, $g_\\mathrm {tot}/g_\\mathrm {bar}=\\Omega _\\mathrm {m}/\\Omega _\\mathrm {b}$ , in the low-acceleration limit of $g_\\mathrm {bar}$ < $$ 10-13$~m~s$ -2$.$ The red dashed line shows the [32] relation (Equation (REF )) observed in spiral galaxies over the acceleration range of $-12$ < $$ 10(gbar/m s-2)$\\; < \\over \\sim \\;$ -9$.", "Overall, the RARs of our simulated sample have a normalization that is higher than that observed at galaxy scales \\cite {2016PhRvL.117t1101M}, suggesting a higher contribution from dark matter at a given baryonic acceleration.$" ], [ "Mass Dependence of the Halo RAR", "The full sample of 10,000 BAHAMAS simulated halos spans a wide range of halo mass.", "We therefore split our sample into three mass bins: $12.5<\\log _{10}[E(z)M_{200}/M_{\\odot }] \\le 13.7$ , $13.7<\\log _{10}[E(z)M_{200}/M_{\\odot }]\\le 14.7$ , and $\\log _{10}[E(z)M_{200}/M_{\\odot }]>14.7$ , to investigate the halo mass dependence of the RAR.", "Figure REF shows the resulting RARs in the three mass bins, for the five different dark matter models.", "For the lowest mass bin, the RARs for different dark matter models largely overlap with each other, while for the highest mass bin (corresponding to massive cluster halos) the slope of the RAR at high $g_\\mathrm {bar}$ decreases with increasing SIDM cross-section.", "The flattening feature in $g_\\mathrm {tot}$ at high acceleration corresponds to the approximately constant-density dark matter “cores” at the center of SIDM halos.", "This distinguishing feature, being more significant in more massive halos, is consistent with the fact that the scattering rate is proportional to the local dark matter density and velocity dispersion (Equation.", "(REF )).", "This result indicates that the high-acceleration cluster-scale RAR can be used to probe the nature of dark matter.", "In the following analyses, we will focus on the 48 cluster-scale halos in the highest mass bin, which are more sensitive to the SIDM cross-section.", "The velocity dependence of the vdSIDM cross-section is apparent in Figure REF .", "For high mass halos, vdSIDM behaves most similarly to SIDM0.3 while for the group-scale halos, vdSIDM halos are most like SIDM1.0 halos.", "This behaviour is consistent with [48], and reflects the fact that the vdSIDM cross-section decreases with increasing relative velocity between dark matter particles, and relative velocities are larger in more massive halos.", "In the above analyses, a minimum radius of $r=15$  kpc is applied.", "From cluster observations, the enclosed mass in the inner regions $r\\in [15,100]$  kpc is difficult to 
measure.", "We therefore study the RARs for the same sets of halos discussed above, but with three larger minimum radii.", "In Figure REF , we plot the RARs for the high-mass objects with radial cuts of $r_\\mathrm {cut}=15$  kpc, 58 kpc, 98 kpc, and 206 kpc, respectively.", "The radial cut of 98 kpc corresponds approximately to the regime of combined strong and weak-lensing analyses, while a radial cut of 206 kpc represents the regime of weak-lensing-only analyses for clusters at $z$ > $$ 0.1$.", "We recall that beyond $ r100$~kpc, it is challenging to distinguish the CDM and SIDM models by measuring the mass density profile of galaxy clusters, as shown in Figure~\\ref {fig:density_profile}.", "The bottom-left panel of Figure~\\ref {fig:all_RAR_radial_cut} shows that even beyond $ r100$~kpc, the slope of the RAR for SIDM1.0 is shallower than that for CDM.", "Deviations in the RAR between CDM and SIDM is thus more significant than the conventional cusp--core features in the density profiles at larger radii.", "For the radial range of weak-lensing-only measurements, the discrepancy becomes tiny and would be almost impossible to detect.", "Hence, a combined strong and weak-lensing analysis is typically required to distinguish SIDM and CDM in terms of the RAR, because this enables the measurement of the total enclosed mass down to and below approximately $ 100$~kpc.$" ], [ "Power-law Characterization of the Cluster-scale RAR", "lccc[tbp] Cluster-scale RAR and its intrinsic scatter characterized in the high-acceleration regime Dark matter model $m$ $b$ $\\sigma _\\mathrm {int}$ (dex) CDM 0.53 -4.10 0.064 SIDM0.1 0.42 -5.23 0.058 SIDM0.3 0.35 -6.02 0.076 SIDM1.0 0.33 -6.27 0.100 vdSIDM 0.38 -5.75 0.091 For each model, the RAR is derived for a subsample of cluster halos at $z=0.375$ with masses $E(z)M_{200}>5\\times 10^{14}M_\\odot $ .", "A cut-off radius of $r_\\mathrm {cut}=15$  kcp is used.", "The quantities $m$ and $b$ represent the slope and intercept of the power-law fit (Equation (REF )) in the high-acceleration region of $\\log _{10}(g_{\\mathrm {bar}}/\\mathrm {m~s}^{-2})>-10.6$ .", "The $\\sigma _\\mathrm {int}$ parameter is the intrinsic scatter in dex.", "We characterize the halo RARs obtained in Section REF , focusing on cluster halos in the highest-mass bin with $E(z)M_{200} > 5\\times 10^{14}M_\\odot $ at $z=0.375$ .", "To this end, we assume a power-law function of the form: $\\log _{10}(g_\\mathrm {tot}/\\mathrm {m~s}^{-2})=m\\log _{10}(g_\\mathrm {bar}/\\mathrm {m~s}^{-2})+b,$ with $m$ the logarithmic slope and $b$ the intercept.", "At $g_\\mathrm {bar}\\sim 10^{-11}$  m s$^{-2}$ , the mean cluster RARs for all the dark matter runs converge to a power-law with $m\\approx 1.15$ and $b\\approx 2.68$ (see the top panel of Figure REF ).", "Then, the logarithmic slope begins to flatten gradually at $g_\\mathrm {bar}$ > $$ 10-11$~m~s$ -2$.", "At $ gbar$\\; > \\over \\sim \\;$ 10-10.6$~m~s$ -2$, the cluster RAR is found to be highly sensitive to the SIDM cross-section.$ In Table REF , we summarize the best-fit values of $m$ and $b$ for the CDM and four SIDM runs characterized in the high-acceleration regime $g_\\mathrm {bar}> 10^{-10.6}$  m s$^{-2}$ .", "It should be noted that we also fitted the mean RARs at $g_\\mathrm {bar}>10^{-10.6}$  m s$^{-2}$ using the [32] relation (Equation (REF )), finding that only the CDM case can be well described by this functional form with an acceleration scale of $g_{\\dagger } = (1.42\\pm 0.06)\\times 10^{-9}$  m s$^{-2}$ , which is much higher than 
the characteristic acceleration scale $g_{\\dagger }\\approx 1.2\\times 10^{-10}$  m s$^{-2}$ observed at galaxy scales (Section ).", "Table REF also lists the levels of intrinsic scatter $\\sigma _\\mathrm {int}$ around the mean relations obtained for the five different dark matter runs.", "In all cases, we find a remarkably tight distribution in $\\log g_\\mathrm {tot}$ –$\\log g_\\mathrm {bar}$ space, with a slight increase in $\\sigma _\\mathrm {int}$ with increasing cross-section.", "For the CDM, SIDM0.1 and SIDM0.3 models, the values of $\\sigma _\\mathrm {int}$ agree within the errors with $\\sigma _\\mathrm {int} = 0.064^{+0.013}_{-0.012}$ (in dex) determined for the CLASH sample [57]." ], [ "Evolution of the Cluster-scale RAR", "Here we investigate the evolution of the RAR by analyzing simulated halos at two redshifts, $z=0$ and $z=0.375$ .", "For this purpose, we focus on massive cluster halos with $E(z)M_{200} > 5\\times 10^{14}M_\\odot $ (Sections REF and REF ).", "At $z=0$ , we have a total of 82 cluster halos in this subsample.", "Figure REF shows the comparison of the cluster-scale RARs at $z=0$ and $z=0.375$ .", "Compared to $z=0.375$ , we find a larger discrepancy at $z=0$ between the CDM and SIDM results: the larger the scattering cross-section, the lower the total acceleration at high $g_\\mathrm {bar}$ (see also Section REF ).", "The SIDM dependence increasing toward lower redshift is expected, because the radius out to which SIDM significantly affects density profiles is well described by the radius where there has been one scattering event per particle over the age of the halo [50].", "This suggests that cluster RAR measurements for lower-$z$ samples should provide a more sensitive test of the SIDM cross-section.", "We find that the cluster RAR derived from vdSIDM at $z=0$ resembles well that from SIDM0.3 at $z=0$ .", "In this section, we compare the cluster-scale RARs derived from the BAHAMAS simulations with observations of galaxy clusters.", "An observational determination of the cluster RAR relies on accurate measurements of both total and baryonic mass profiles, $M_\\mathrm {tot}(<r)$ and $M_\\mathrm {bar}(<r)$ , for a sizable sample of galaxy clusters.", "The baryonic mass in galaxy clusters is dominated by the hot intracluster gas, except in their central region where the BCG dominates the total baryonic mass [53].The mean effective (half-light) radius of the CLASH BCGs is $\\langle R_\\mathrm {e}\\rangle \\sim 30$  kpc [57]..", "Thus, baryonic mass estimates for both components are essential.", "Furthermore, unbiased estimates for the total mass in galaxy clusters are critical for a robust determination of the cluster RAR.", "In this context, [57] determined the RAR at BCG–cluster scales for a sample of 20 high-mass galaxy clusters targeted by the CLASH program, by combining weak and strong gravitational lensing data [63], [62], [35], [69], X-ray gas mass measurements [9], and BCG stellar mass estimates.", "The stellar mass contribution from member galaxies was statistically corrected for.", "To date, this is the only study that uses gravitational lensing data to directly probe the total acceleration $g_\\mathrm {tot}$ in galaxy clusters.", "By contrast, other studies based on hydrostatic estimates for $g_\\mathrm {tot}$ could potentially bias the true underlying RAR.", "In this study, we thus focus on the CLASH RAR studied by [57]." 
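Referring back to the power-law characterization of the preceding section, a minimal sketch of such a fit in the high-acceleration regime is shown below; estimating the intrinsic scatter as the RMS residual in dex is a simplification of the actual procedure, and the acceleration threshold is the one quoted above.

```python
import numpy as np

def fit_rar_power_law(g_bar, g_tot, g_bar_min=10**-10.6):
    """Least-squares fit of log10(g_tot) = m * log10(g_bar) + b above g_bar_min.

    Returns the slope, intercept, and RMS residual (dex) as a rough proxy for
    the intrinsic scatter around the mean relation.
    """
    sel = g_bar > g_bar_min
    x, y = np.log10(g_bar[sel]), np.log10(g_tot[sel])
    m, b = np.polyfit(x, y, 1)
    scatter = np.sqrt(np.mean((y - (m * x + b)) ** 2))
    return m, b, scatter
```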
], [ "CLASH Data", "With the aim of precisely determining the mass profiles of galaxy clusters using deep 16-band imaging with the Hubble Space Telescope [44] and ground-based weak-lensing observations [63], a subsample of 20 CLASH clusters was X-ray selected to be massive ($>5$  keV), with nearly concentric X-ray isophotes and a well-defined X-ray peak located close to the BCG.", "For this subsample, no lensing information was used a priori to avoid a biased sample selection.", "Cosmological hydrodynamical simulations suggest that the CLASH X-ray-selected subsample is mostly composed of relaxed systems ($\\sim 70\\%$ ) and largely free of orientation bias [33].", "Another subsample of five clusters was selected by their exceptional lensing strength to magnify galaxies at high redshift.", "These clusters often turn out to be dynamically disturbed massive systems [60].", "The CLASH sample spans nearly an order of magnitude in mass, 5 < $$ M200/1014M$\\; < \\over \\sim \\;$ 30$.", "For each of the 25 clusters, HST weak- and strong-lensing data products are available in their central regions \\cite {Zitrin2015}.", "\\cite {Umetsu2016} combined wide-field weak-lensing data obtained primarily with Suprime-Cam on the Subaru telescope \\cite {Umetsu2014} and the HST weak- and strong-lensing constraints of \\cite {Zitrin2015}.", "For an observational determination of the RAR, \\cite {Tian2020} combined the X-ray data products from \\cite {Donahue2014} and the lensing data products from \\cite {Umetsu2016}, yielding a subsample of 20 CLASH clusters composed of 16 X-ray-selected and 4 lensing-selected systems.", "We note that five clusters of the CLASH sample were not included in the joint weak- and strong-lensing analysis performed by \\cite {Umetsu2016} because of the lack of usable wide-field weak-lensing data.", "Consequently, they were also excluded in our analysis.$ The CLASH subsample analyzed by [57] has a median redshift of $\\overline{z} = 0.377$ , which closely matches our simulation snapshot at $z=0.375$ .", "The typical resolution limit of the mass reconstruction set by the HST lensing data is 10, which corresponds to $\\approx 50$  kpc at $\\overline{z}=0.377$ [62].", "It was found by [62] that the stacked lensing signal of the CLASH X-ray-selected subsample is well described by a family of cuspy, sharply steepening density profiles, such as the Navarro–Frenk–White [38], [39], Einasto [13], and DARKexp [20] profiles.", "Of these, the NFW model best describes the CLASH lensing data [61].", "In contrast, the single power-law, cored isothermal, and Burkert models are statistically disfavored by the averaged lensing profile having a pronounced radial curvature.", "For each of the 20 clusters, [62] performed a spherical NFW fit to the reconstructed projected mass density profile by accounting for all relevant sources of uncertainty, including measurement errors, cosmic noise due to the projection of large-scale structure uncorrelated with the cluster, statistical fluctuations of the projected cluster lensing signal due to halo triaxiality and correlated substructures.", "In this analysis, we use the CLASH-RAR data set [57] published in [58], which contains $N_\\mathrm {data}=84$ data points in $\\log g_\\mathrm {tot}$ –$\\log g_\\mathrm {bar}$ space.", "[57] extracted total and baryonic mass estimates where possible at $r=100$  kpc, 200 kpc, 400 kpc, and 600 kpc.", "For each cluster, they also included a single constraint at $r$ < $$ 30$~kpc in the central BCG region.", "These data points 
are sufficiently well separated from each other, so as to avoid oversampling and reduce correlations between adjacent data points.", "The CLASH measurements of centripetal accelerations ($g_\mathrm {bar},g_\mathrm {tot}$ ) are shown in Figure REF , along with our BAHAMAS predictions of the RAR for massive cluster halos with $E(z)M_{200}>5\times 10^{14}M_\odot $ , derived from five different dark matter runs at $z=0.375$ .", "The best-fit CLASH RAR obtained by [57] has $m=0.51^{+0.04}_{-0.05}$ , $b=-4.26^{+0.46}_{-0.47}$ , and $\sigma _\mathrm {int}=0.064^{+0.013}_{-0.012}$ dex, which is in excellent agreement with the best-fit RAR for the BAHAMAS-CDM run characterized in the high-acceleration region of $g_\mathrm {bar}>10^{-10.6}$ m s$^{-2}$ (Table REF )." ], [ "Statistical Comparison", "Table REF : Summary of the $\chi ^2$ test.

    Model    | With BCGs: $\chi ^2$ (a)   PTE (b)                 Fraction (c)          | Without BCGs: $\chi ^2$ (a)   PTE (b)                Fraction (c)
    CDM      | 91.7    0.264                  0.264                 | 71.3    0.246                  0.246
    SIDM0.1  | 108.6   0.037                  0.036                 | 73.2    0.200                  0.200
    SIDM0.3  | 177.1   $1.3\times 10^{-8}$    0.0                   | 85.3    0.039                  0.039
    SIDM1.0  | 354.5   $6.3\times 10^{-35}$   0.0                   | 113.8   $1.3\times 10^{-5}$    $1.4\times 10^{-5}$
    vdSIDM   | 224.7   $8.9\times 10^{-15}$   0.0                   | 86.3    0.033                  0.033

(a) Observed $\chi ^2$ value between the CLASH data and each DM model.", "(b) Probability to exceed the observed $\chi ^2$ value assuming the standard $\chi ^2$ probability distribution function.", "(c) Fraction of Monte-Carlo realizations exceeding the observed value of $\chi ^2$ .", "To make a quantitative comparison between the BAHAMAS simulations and the CLASH observations, we define the $\chi ^2$ function as $\chi ^2 = \sum _{i=1}^{N_\mathrm {data}}\frac{\left(\log _{10} g_{\mathrm {tot},i} - \log _{10} \widehat{g}_{\mathrm {tot},i} \right)^2}{\sigma _i^2 + \sigma _\mathrm {int}^2},$ where $i$ runs over all data points from the CLASH data set, $g_{\mathrm {tot},i}$ is the total acceleration at the $i$ th data point, $\sigma _i$ is the measurement uncertainty, $\sigma _\mathrm {int}\approx 0.064$ is the intrinsic scatter for the CLASH sample determined by [57], and $\widehat{g}_{\mathrm {tot},i}$ is the theoretical prediction for the total acceleration at $g_\mathrm {bar}=g_{\mathrm {bar},i}$ from the BAHAMAS simulations.", "The resulting $\chi ^2$ values evaluated for different dark matter runs are listed in Table REF .", "To statistically characterize the level of agreement between data and simulations, we use frequentist measures of statistical significance.", "Specifically, we use the chance probability of exceeding the observed $\chi ^2$ value to quantify the significance of the match for each dark matter run.", "For each case, we calculate the probability to exceed (PTE), or the right-tailed $p$ -value, for a given value of $\chi ^2$ assuming the standard $\chi ^2$ probability distribution function.", "We adopt a significance threshold of $\alpha =0.05$ as the dividing line between satisfactory ($\mathrm {PTE}>0.05$ ) and unsatisfactory ($\mathrm {PTE}<0.05$ ) matches to the CLASH-RAR data set.", "Since no optimization (or model fitting) is performed in our $\chi ^2$ evaluations, the number of degrees of freedom is $N_\mathrm {data}=84$ in all cases.", "The resulting values of PTE for each dark matter run are listed in Table REF .", "We find that the CDM run ($\mathrm {PTE}=0.264$ ) gives a satisfactory match, whereas all the SIDM runs give unacceptable matches to the CLASH data.", "Among the SIDM runs, SIDM0.1 has a PTE of $0.036$ that is close to but slightly below the adopted threshold."
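As an illustration of this procedure, the $\chi ^2$ statistic of Equation (REF ) and its PTE can be evaluated in a few lines; the sketch below is a minimal example assuming numpy and scipy are available, with placeholder arrays standing in for the actual CLASH measurements and BAHAMAS predictions.

```python
import numpy as np
from scipy.stats import chi2

def chi2_pte(log_gtot_obs, log_gtot_model, sigma_meas, sigma_int=0.064):
    """Chi-square of Eq. (REF) and its probability to exceed (PTE).

    log_gtot_obs   : observed log10 g_tot at each data point (dex)
    log_gtot_model : simulation prediction at the same g_bar (dex)
    sigma_meas     : measurement uncertainties (dex)
    sigma_int      : intrinsic scatter of the CLASH RAR (dex)
    """
    var = sigma_meas**2 + sigma_int**2
    chi2_val = np.sum((log_gtot_obs - log_gtot_model)**2 / var)
    # No parameters are fitted, so the number of degrees of freedom
    # equals the number of data points (84 with BCGs, 64 without).
    pte = chi2.sf(chi2_val, len(log_gtot_obs))   # right-tailed p-value
    return chi2_val, pte

# A model is deemed a satisfactory match when pte > 0.05.
```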
"As a consistency check, we perform Monte-Carlo simulations to derive the $\chi ^2$ distribution expected from the measurement errors $\lbrace \sigma _i\rbrace _{i=1}^{N_\mathrm {data}}$ and the intrinsic scatter $\sigma _\mathrm {int}$ for the CLASH RAR.", "In each simulation, we construct a synthetic data set $\lbrace g_{\mathrm {bar},i}^{(\mathrm {MC})}, g_{\mathrm {tot},i}^{(\mathrm {MC})}\rbrace _{i=1}^{N_\mathrm {data}}$ by drawing, at $\log _{10} g_{\mathrm {bar},i}^{(\mathrm {MC})}=\log _{10} g_{\mathrm {bar},i}^{(\mathrm {CLASH})}$ , random Gaussian noise $n_i \equiv \log _{10}g_{\mathrm {tot},i}^{(\mathrm {MC})} -\log _{10}\widehat{g}_{\mathrm {tot},i}$ with zero mean and standard deviation $\sqrt{\sigma _i^2 + \sigma _\mathrm {int}^2}$ , and then calculating the value of $\chi ^2=\sum _i n_i^2/(\sigma _i^2+\sigma _\mathrm {int}^2)$ .", "We repeat this procedure $10^6$ times to generate a large set of Monte-Carlo realizations and obtain the distribution of $\chi ^2$ values.", "In Table REF , we list the fraction of Monte-Carlo realizations exceeding the observed value of $\chi ^2$ for each dark matter run.", "In all cases, the Monte-Carlo fraction exceeding the observed $\chi ^2$ value is precisely consistent with the PTE calculated with the standard $\chi ^2$ distribution function.", "In the upper panel of Figure REF , we compare the $\chi ^2$ distribution of the CLASH data set constructed from our Monte-Carlo simulations with the observed $\chi ^2$ values for the CDM and four SIDM models.", "This figure gives a visual summary of the $\chi ^2$ test (Table REF ).", "Table REF : Likelihood-ratio test of the SIDM models.

    Model    | With BCGs: $\Delta \chi ^2$   Significance level | Without BCGs: $\Delta \chi ^2$   Significance level
    SIDM0.1  | 16.9    4.1$\sigma $   | 1.9     1.4$\sigma $
    SIDM0.3  | 85.4    9.2$\sigma $   | 14.0    3.8$\sigma $
    SIDM1.0  | 262.8   16.2$\sigma $  | 42.5    6.8$\sigma $
    vdSIDM   | 133.0   11.2$\sigma $  | 15.0    3.4$\sigma $

The observed value of $\Delta \chi ^2=\chi ^2-\chi ^2_\mathrm {CDM}$ relative to the CDM model is listed for each SIDM model explored in this work.", "The number of degrees of freedom for each comparison is 1 for all cases except for vdSIDM with 2 degrees of freedom.", "We perform a likelihood-ratio test of the SIDM models to quantify whether the inclusion of collisional features of dark matter is statistically warranted by the data.", "Velocity-independent SIDM models have one additional degree of freedom relative to the CDM model.", "For vdSIDM, there are two additional degrees of freedom relative to the CDM model.", "These models reduce to the CDM model in the limit of a vanishing cross-section.", "Table REF lists for each SIDM model the differenced $\chi ^2$ value $\Delta \chi ^2=\chi ^2-\chi ^2_{\mathrm {CDM}}$ relative to the CDM model and the corresponding significance level.", "Compared with the fiducial model of CDM, SIDM0.1 is disfavored at a significance level of $4.1\sigma $ .", "These results are based on the CLASH RAR at BCG–cluster scales, which includes, for each cluster, a single constraint in the central BCG region at $r\lesssim 30$ kpc.", "However, since the typical resolution of CLASH mass reconstructions is $r\approx 50$ kpc (Section REF ), the mass distribution is not resolved in the BCG region and thus the CLASH constraints on $g_\mathrm {tot}$ at the BCG scale are model dependent to some extent.", "Moreover, the distribution of baryonic and dark matter in cluster cores is sensitive
to baryonic physics \cite {Cui2018}.", "We therefore repeat the tests described above using core-excised RAR data.", "Excluding the central $r<100$ kpc region from the CLASH data set, we have 64 data points at $r \in [100,600]$ kpc.", "The results of the $\chi ^2$ test with the core-excised data set are summarized in Table REF .", "Of the five BAHAMAS dark matter runs, CDM and SIDM0.1 provide satisfactory matches to the core-excised CLASH RAR, at a significance level of $\alpha =0.05$ .", "Excluding the BCG regions results in a weaker but still competitive constraint on the SIDM cross-section (Table REF ).", "With a likelihood-ratio test, we find that the core-excised CLASH RAR data disfavor the SIDM0.3 model at the $3.8\sigma $ level with respect to the CDM model." ], [ "Discussion", "In this section, we first discuss current limitations and possible improvements of SIDM constraints from measurements of the cluster RAR.", "Then, we compare our results to previous astrophysical constraints on SIDM." ], [ "Possible Systematics and Improvements", "In this work, we analyzed the CLASH RAR data set of [57], which consists of $N_\mathrm {data}=84$ data points ($g_\mathrm {bar},g_\mathrm {tot}$ ) inferred from the multiwavelength CLASH observations of 20 high-mass galaxy clusters (Section REF ).", "In [57], the measurements of centripetal accelerations were sparsely sampled over a sufficiently wide range of clustercentric distances, so as to reduce the covariance between adjacent data points for each cluster.", "Thus, our comparison of the BAHAMAS simulations and CLASH measurements (Equation (REF )) involves a two-step procedure, which ignores the covariance and does not fully exploit all the information contained in the data.", "In principle, these limitations can be overcome by using a forward-modeling method.", "In particular, likelihood-free approaches based on forward simulations allow us to bypass the need for a direct evaluation of the likelihood function assuming Gaussian statistics, which avoids the complex derivation of the covariance matrix in an inherently complex problem [55].", "Another potential source of systematic uncertainty is the smoothing of the inner density profile due to cluster miscentering [23].", "Because of the CLASH selection, our cluster sample exhibits, on average, a small positional offset between the BCG and X-ray peak, characterized by an rms offset of $\sim 40$ kpc [63], [62].", "This level of offset is comparable to the typical effective radius of CLASH BCGs ($\langle R_\mathrm {e}\rangle \sim 30$ kpc; Section ) but sufficiently small compared to the range of cluster radii of interest (say, $r\gtrsim 100$ kpc).", "Moreover, since the RAR method uses the same aperture to compare total and baryonic accelerations, the miscentering effect is not expected to significantly affect the SIDM constraint, although it could potentially contribute to the scatter in the RAR inferred from cluster observations." ], [ "Comparison with Other Work", "Self-interactions between dark matter particles are expected to make halos more spherical compared to the triaxial halos of collisionless CDM, especially in the central region where the scattering rate is largest.", "Self-interactions in the optically thin regime also reduce the central dark matter densities, transforming a cusp into a core.", "Moreover, offsets between the galactic and dark-matter centroids in merging clusters can be used to constrain the SIDM cross-section.", "Previous studies placed upper limits on the
self-interaction cross-section of dark matter particles using such observed density features in galaxy clusters [59].", "[43] compared the halo ellipticities inferred from lensing and X-ray observations with cosmological simulations with SIDM cross-sections of $\\sigma /m=0.03$ , $0.1$ , and 1 cm$^2$  g$^{-1}$ .", "They found that the strong-lensing measurement of the cluster ellipticity for MS 2137−23 [36] is compatible with an SIDM cross-section of $\\sigma /m= 1$  cm$^2$  g$^{-1}$ , whereas the X-ray shape measurement of the isolated elliptical galaxy NGC 720 [3] is consistent with $\\sigma /m= 0.1$  cm$^2$  $g^{-1}$ .", "Using the galaxy–dark-matter offset measured in the moving subcluster of the Bullet Cluster, [47] placed an upper limit of $\\sigma /m < 1.25$  cm$^{2}$  g$^{-1}$ at the $68\\%$  CL.", "[17] performed an ensemble analysis of offset measurements for 72 substructures in 30 systems, including both major and minor mergers, and set an upper limit of $\\sigma /m<0.47$  cm$^2$  g$^{-1}$ at the $95\\%$  CL.", "[68] revisited the analysis of [17] using more comprehensive data and carefully reinterpreted their refined offset measurements, finding that the SIDM constraint of [17] is relaxed to $\\sigma /m\\lesssim 2$  cm$^2$  g$^{-1}$ .", "Analyzing X-ray and SZ effect observations, [12] constrained the structural parameters of the mass density profiles for a sample of 12 massive X-COP clusters assuming that the intracluster gas is in hydrostatic equilibrium.", "They used the BAHAMAS-SIDM simulations to construct an empirical scaling relation between the Einasto shape parameter and the velocity-independent SIDM cross-section.", "With this relation and the assumption of hydrostatic equilibrium, they obtained an upper limit of $\\sigma /m<0.19$  cm$^{2}$  g$^{-1}$ at the $95\\%$  CL.", "In this study, we have obtained competitive constraints on the SIDM cross-section using cluster RAR measurements.", "By comparing the CLASH RAR data set with the mean RARs derived from the BAHAMAS simulations, we are able to reject an SIDM model with $\\sigma /m=0.1$  cm$^{2}$  g$^{-1}$ at the $4.1\\sigma $  CL with respect to the CDM model.", "Excluding the central $r<100$  kpc region, we find that an SIDM model with $\\sigma /m=0.3$  cm$^{2}$  g$^{-1}$ is disfavored at the $3.8\\sigma $ level with respect to CDM." 
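The significance levels quoted in this work can be cross-checked by converting a likelihood-ratio $\Delta \chi ^2$ with $k$ extra degrees of freedom into an equivalent Gaussian significance; the sketch below assumes scipy and a two-sided Gaussian convention (our assumption, so small differences from the tabulated values are possible).

```python
from scipy.stats import chi2, norm

def delta_chi2_to_sigma(delta_chi2, k):
    """Gaussian-equivalent significance of a Delta chi^2 with k extra dof."""
    p = chi2.sf(delta_chi2, k)      # chance probability of such an improvement
    return norm.isf(p / 2.0)        # two-sided Gaussian equivalent (assumed convention)

# For one extra dof this is roughly sqrt(Delta chi^2):
# delta_chi2_to_sigma(16.9, 1)  -> ~4.1  (SIDM0.1, with BCGs)
# delta_chi2_to_sigma(133.0, 2) -> ~11.3 (vdSIDM, with BCGs; quoted as 11.2 sigma)
```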
], [ "Summary and Conclusions", "The RAR is a tight relation between the total and baryonic centripetal accelerations, $g_\\mathrm {tot}$ and $g_\\mathrm {bar}$ , inferred for galaxies and galaxy clusters [32], [57].", "This tight empirical correlation offers a new possibility of testing the collisionless nature of dark matter at galaxy–cluster scales.", "As a first step toward this goal, we studied in this paper the RAR in simulated halos for both CDM and SIDM models, using the BAHAMAS suite of cosmological hydrodynamical simulations [30], [29], [48].", "We analyzed simulations at $z=0$ and $z=0.375$ run with four different SIDM models (SIDM0.1, SIDM0.3, SIDM1.0, and vdSIDM; see Section ), as well as collisionless CDM.", "For each dark matter model, we have determined the mean $g_\\mathrm {tot}$ as a function of $g_\\mathrm {bar}$ , or the halo RAR, in halos of different mass bins (Section ; see Figure REF ).", "We find that the slope of the halo RAR at high acceleration ($g_\\mathrm {bar}$ > $$ 10-10.6$~m~s$ -2$) decreases with increasing SIDM cross-section.", "This flattening feature at high $ gbar$ is more significant in more massive halos at lower redshift (Figures~\\ref {fig:all_RAR_mass_split} and \\ref {fig:compare_z}), consistent with the fact that the scattering rate is proportional to the local dark matter density and velocity dispersion (Equation~(\\ref {eq:scatter})).", "This suggests that the high-$ gbar$ cluster-scale RAR for low-redshift samples can be used to probe the nature of dark matter.$ Focusing on massive cluster halos at $z=0.375$ , we have also characterized the slope ($m$ ), intercept ($b$ ), and intrinsic scatter ($\\sigma _\\mathrm {int}$ ) of the mean RAR at high $g_\\mathrm {bar}$ for different dark matter models (see Table REF ).", "In all cases, we find a remarkably tight distribution in $\\log g_\\mathrm {tot}$ –$\\log g_\\mathrm {bar}$ space, with a slight increase in $\\sigma _\\mathrm {int}$ with increasing SIDM cross-section.", "We find that only the CDM case can be well described by Equation (REF ) proposed by [32], with an acceleration scale of $g_{\\dagger } = (1.42\\pm 0.06)\\times 10^{-9}$  m s$^{-2}$ .", "This is much higher than the characteristic acceleration scale of $g_{\\dagger }\\approx 1.2\\times 10^{-10}$  m s$^{-2}$ observed at galaxy scales [32].", "We have compared the halo RARs from the BAHAMAS-CDM and -SIDM runs to the cluster RAR inferred from CLASH observations (Section ; see Tables REF and REF ).", "Our comparison shows that the RAR in the CDM model provides an excellent match to the CLASH RAR [57].", "This comparison includes the high-$g_\\mathrm {bar}$ regime probed by the BCGs.", "By contrast, models with a larger SIDM cross-section (hence with a greater flattening in $g_\\mathrm {tot}$ at high $g_\\mathrm {bar}$ ) yield increasingly poorer matches to the CLASH RAR.", "Excluding the BCG regions, we obtain a weaker but still competitive constraint on the SIDM cross-section.", "Using the RAR data outside the central $r<100$  kpc region, we find that an SIDM model with $\\sigma /m=0.3$  cm$^{2}$  g$^{-1}$ is disfavored at the $3.8\\sigma $ level with respect to the CDM model.", "In this study, we have demonstrated the power and potential of the RAR for testing the collisionless nature of dark matter.", "Thus far, the cluster RAR has been determined using gravitational lensing only for the CLASH sample.", "To place stringent and robust constraints on the SIDM cross-section, it is necessary to increase the sample of clusters for 
which lensing and X-ray observations are available over a broad radial range down to $r\\sim 100$  kpc (Section REF ).", "For nearby clusters at $z<0.1$ , such mass measurements can be obtained from wide-field weak-lensing observations [40].", "For clusters at higher redshifts, combined strong and weak lensing is required to distinguish SIDM and CDM using the RAR.", "The ongoing CHEX-MATE project will provide such ideal multiwavelength data sets of high quality, for a minimally biased, signal-to-noise-limited sample of 118 Planck galaxy clusters at $0.05<z<0.6$ detected through the SZ effect [6].", "Extending this work to the CHEX-MATE sample will thus be a substantial step toward understanding the collisionless nature of dark matter.", "We acknowledge fruitful discussions with Yong Tian, Teppei Okumura, and Stefano Ettori.", "This work is supported by the Ministry of Science and Technology of Taiwan (grant MOST 109-2112-M-001-018-MY3) and by the Academia Sinica Investigator award (grant AS-IA-107-M01).", "Parts of this research were carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004).", "Astropy [1], matplotlib [21], NumPy [65], Python [66], Scipy [67]" ] ]
2207.03506
[ [ "Reinforcement Learning-based Joint User Scheduling and Link\n Configuration in Millimeter-wave Networks" ], [ "Abstract In this paper, we develop algorithms for joint user scheduling and three types of mmWave link configuration: relay selection, codebook optimization, and beam tracking in millimeter wave (mmWave) networks.", "Our goal is to design an online controller that dynamically schedules users and configures their links to minimize the system delay.", "To solve this complex scheduling problem, we model it as a dynamic decision-making process and develop two reinforcement learning-based solutions.", "The first solution is based on deep reinforcement learning (DRL), which leverages the proximal policy optimization to train a neural network-based solution.", "Due to the potential high sample complexity of DRL, we also propose an empirical multi-armed bandit (MAB)-based solution, which decomposes the decision-making process into a sequential of sub-actions and exploits classic maxweight scheduling and Thompson sampling to decide those sub-actions.", "Our evaluation of the proposed solutions confirms their effectiveness in providing acceptable system delay.", "It also shows that the DRL-based solution has better delay performance while the MAB-based solution has a faster training process." ], [ "Motivations", "MmWave is a key feature of next-generation communication systems due to the high bandwidths available versus lower frequency [1].", "Large-scale phased arrays with compact size, coupled with the hybrid architecture, provide higher antenna gain and flexible MIMO operations for mmWave link configuration [2].", "Configuring these links is challenging, especially when considering user scheduling and the rapidly changing channels due to mobility [3].", "We summarize three of the most important challenges below: Dynamic and static obstacles may block direct line-of-sight (LOS) paths, which provide high-quality mmWave links [4].", "These blockages may create long outage duration [5].", "This limits the network coverage and capacity.", "One practical solution is to leverage relay-assisted transmission, by enabling device-to-device (D2D) links, to bypass the blockages and extend the network coverage [6].", "Joint user scheduling and relay selection are challenging, though, in fast-varying mmWave networks.", "First of all, it is unrealistic to probe all relays (users in the network) to decide an instantaneously optimal relay due to large beam training overhead [7].", "Second, potential imbalanced data traffic demands among users play a role in scheduling user and relay [8].", "For example, without a significant quality of service (QoS) degradation, an access point (AP) could frequently select the user with low data traffic demand as a relaying node for the data transmission to the users that are far away or suffering from blockage loss [8].", "Excessively selecting a single user (especial the user with the high demand) as relay, however, greatly impacts its QoS.", "Thus, it is imperative to design a low-overhead algorithm that jointly performs user scheduling and relay selection without sacrificing the QoS of users." 
], [ "Codebook optimization with beamforming gain, training overhead, and link robustness", "Codebook-based exhausted/hierarchical beam scanning was adopted by IEEE 802.11ad/ay [9], [10] and 5G NR [11].", "The large antenna arrays are used to generate highly directional beams, hence providing beamforming gain in LOS channels.", "The gain, however, comes with beam scanning overhead that grows when a large number of narrow beams are used [12] (even with hierarchical search at the very beginning stage).", "In addition to the overhead, a mmWave link that is composed of narrower beams is more vulnerable to beam misalignment (or even the outage) caused by device mobility, as depicted in Fig.", "REF .", "It has been shown that even relatively small beam alignment error could lead to significant rate reduction [13], [14].", "As a result, selecting an appropriate codebook that resolves this tension among beamforming gain, training overhead, and link robustness is critical.", "In particular, a dynamic approach is encouraging since the optimal codebook is generally user-dependent and scenario-dependent [15], [16]." ], [ "Impact of beam training periodicity on user scheduling and beam training efficiency", "5G NR specifies five possible training periods that vary from 10 ms to 160 ms [11].", "The selection of the actual value is left to the vendors.", "On the one hand, a smaller training period would be favorable and flexible for maintaining the links with users that are fast-moving or locating close to the transmitter [16], which may however reduce the beam training efficiency.", "On the other hand, a longer training period is more efficient as it has a longer data transmission duration after each beam training, but it may waste time resources when the connected user has low data traffic demands, hence adding unnecessary delay to the other users.", "One potential solution is to use a smaller training period attached with an optional beam tracking period whose duration is the same as the training period.", "In other words, it is analogous to dividing a long training period (e.g.", "160 ms) into multiple flexible slots (e.g.", "16 slots of 10 ms).", "Some slots could be configured as beam tracking periods for users with high traffic demands.", "This approach avoids allocating a long period to a single user all at once but with the cost of beam tracking overhead, which is relatively small and even negligible.", "Though, it is still unclear how frequent or when beam tracking should be applied." 
], [ "Contributions", "In this paper, we design a controller that jointly schedules users and configures mmWave links in a multi-user mmWave network under mobility.", "We consider three types of link configuration: relay selection, codebook selection, and beam tracking.", "Our goal is to minimize the average packet delay.", "Due to the large overhead associated with channel probing, it is unrealistic to provide a deterministic optimal strategy [7].", "In contrast, an online solution is a good fit for the stochastic nature of mmWave networks under mobility.", "Recently, reinforcement learning (RL) has been shown to be a powerful tool to solve complex decision-making problems using dynamic programming [17], [18], [19] (please see Sec.", "REF for more examples of applying RL in wireless system designs).", "In this work, we exploit RL to jointly solve user scheduling and mmWave link configuration problem.", "The contributions of our paper are summarized as below:" ], [ "System Modeling", "We model the joint user scheduling and link configuration of a mmWave network as a queuing system, where each queue buffers the data for a user.", "Three types of mmWave link configuration are considered, i.e.", "relay selection, codebook selection, and beam tracking.", "We mathematically formulate the system as a discrete-time decision-making process by showing how the queues are influenced by the decisions on link configuration, under various environmental dynamics, such as network traffic, device mobility, blockage, and channel fading.", "Prior work has not jointly considered these configurations with user scheduling." ], [ "DRL-based controller", "We transform the modeled system into a partially observable Markov decision process (POMDP) and further propose a DRL-based controller to dynamically schedule the users and configure their mmWave links.", "In particular, we exploit the proximal policy optimization (PPO) [20] within the advantage-to-critic (A2C) framework [21] to train a neural network (NN)-based stochastic controller, which is carefully crafted such that the interplays among user scheduling and link configuration are included." ], [ "Empirical MAB-based controller", "To combat the potential large sample complexity brought by the DRL-based solution, we propose a sample-efficient and low-complexity learning algorithm, called Empirical MAB-based controller.", "This algorithm decomposes the entire decision-making process into a sequence of four sub-actions: namely that user scheduling, selecting relay, choosing codebook, and deciding on beam tracking.", "It then exploits maxweight scheduling [22] and bandit algorithms [15] to sequentially decide these four sub-actions." ], [ "Comprehensive empirical evaluation", "We evaluate the two proposed RL-based controllers by numerical simulation with practical system parameters.", "Our results show that both of them are effective in providing acceptable system delay.", "In particular, the DRL-based solution provides better system performance while the MAB-based solution results in a faster training process." 
], [ "RL-based user scheduling", "In queue-based scheduling, the classic maxweight algorithm [22] is throughput optimal and provides good delay performance [23], though requires the channel state information beforehand.", "The channel in practical mobile networks is dynamic, which drives the need for learning scheduling policies [24].", "Some recent work has been exploiting the newfound power of RL techniques [17], e.g deep Q-network (DQN) [25] and deep deterministic policy gradient (DDPG) [26].", "In [27], [28], two DQN-based DRL schedulers were developed to minimize the system latency.", "In [29], [24], two DDPG-based DRL schedulers were further developed to allocate the time/frequency resource allocation among users to minimize the queuing delay.", "Besides, the DDPG algorithm was exploited to perform wireless routing in [30] to learn a scheduling policy that minimizes the end-to-end packet delay.", "This prior work shows that their obtained delay performance is close or even superior to the maxweight scheduling.", "Unfortunately, either user mobility or the effect of mmWave link configuration (e.g.", "relay selection and beam tracking) were considered in the user scheduling in [24], [27], [28], [29], [30].", "In contrast, we study user scheduling by considering multiple types of mmWave link configuration and exploit the most recently proposed PPO algorithm for the learning process rather than using DQN or DDPG." ], [ "Relay selection in mmWave systems", "MmWave relaying schemes have been intensively studied (see [31] and references therein).", "For example, relay selection and spatial reuse were jointly optimized to reduce the system delay of a mmWave WPAN in [8], but only for static channel conditions.", "A relay probing strategy was investigated in [7], but the impact of the relay selection on the user scheduling was not considered.", "Motivated by the fact that relay selection is resource-demanding under rapidly-varying mmWave channel [32], some recent work has started leveraging RL to perform dynamic selection rather than solving instantaneous optimization problems as [8], [7] did.", "For example, in [33], a DQN-based relay selection was developed for a downlink vehicular-to-infrastructure network.", "In [34], the $\\epsilon $ -greedy policy was used to solve the routing selection in a self-backhauled mmWave cellular network.", "Both [33] and [34] showed that their RL-based solutions do not require prior information on network dynamics but their delay performance is robust to complex environmental dynamics.", "Nevertheless, the applications of RL in the mmWave band are still largely open.", "In contrast to prior work, we study the mmWave relay selection under a complicated scenario by considering various factors, such as codebook selection, beam tracking design, and data requirements of users." 
], [ "MmWave codebook optimization", "The codebook optimization problem was initially investigated in [35], [36] by optimizing beamwidth.", "Their solutions, however, depend on prior knowledge such as channel state information, which restricts their practical use in the mobile networks where the channels are usually rapidly changing.", "In contrast, our proposed RL-based solutions are model-free, and thus more flexible for deployments at different sites.", "Recently, data-driven approaches have been widely exploited.", "A set of beam pairs was learned out via deep learning in [37] and a geo-located context database was built in [38] to assist the beam width/direction selection.", "These two offline approaches, however, require a large amount of data for a given site.", "This would limit their fast implementation when the system dynamics are high.", "Furthermore, [39], [40], and [41] designed online learning-based algorithms to adaptively optimize codebook beam patterns.", "In [42], a joint beamwidth and power control problem was solved via DRL." ], [ "Beam tracking design", "Beam tracking has been widely studied for a single user.", "In [43], hybrid mmWave phased arrays were used to perform beam tracking by probing the channel in multiple spatial directions simultaneously.", "In [44], the sparsity of the mmWave channel was exploited to perform wideband channel tracking.", "The effects of beam tracking and data communication durations were optimized in [45].", "A TS-based beam tracking algorithm was proposed in [3], where the time-varying optimal beam is tracked with numerous feedbacks before each beam retraining.", "The prior work, however, focused on beam tracking design for the single-user case, whereas the potential delay introduced by tracking to other users was not considered.", "In contrast, our work focuses on learning whether and when the beam tracking should be performed in a multi-user mmWave network such that the system delay is minimized.", "We use bold lowercase font to represent vectors and normal font for scalars.", "We use $q_i$ to denote the $i$ -th element of a vector $\\mathbf {q}$ .", "We denote $[K]^+$ as the set $\\left\\lbrace 1,\\hdots ,K\\right\\rbrace $ , and $[K]$ as the set $\\left\\lbrace 0\\right\\rbrace \\cup [K]^+$ .", "${1}\\left\\lbrace \\cdot \\right\\rbrace $ denotes the indicator function.", "$\\mathbf {1}$ is an all-ones vector with a context-dependent dimension.", "$\\lceil \\cdot \\rceil $ and $\\lfloor \\cdot \\rfloor $ denote the ceiling and flooring operator, respectively.", "$\\text{Mod}\\left(\\cdot \\right)$ denotes the Modulo operation.", "$X\\sim \\text{Bernoulli}\\left(p\\right)$ means that $X$ follows a Bernoulli distribution with parameter $p$ .", "$Dir\\left(\\alpha _{0},\\hdots ,\\alpha _{M}\\right)$ denotes a Dirichlet distribution with parameter vector $ [\\alpha _{0},\\hdots ,\\alpha _{M}]^{\\text{T}}$ ." 
], [ "System model", "We consider downlink transmission in a mmWave network, in which an AP (or base station) communicates with $U$ mobile users (UE) in a slot-based manner.", "For each time slot (of a fixed duration $T^\\text{slot}$ ), the AP dedicates all bandwidth $B$ to a single UE.", "The time slot is composed of a beam alignment (BA) phase and a data transmission (DT) phase, as shown in Fig.", "REF .", "During the BA phase, the AP performs a 2D codebook-based beam training, where a codebook refers to a collection of directional beams that share the same beamwidth and together cover the whole spatial space, as shown in Fig.", "REF .", "The AP and UE test all the beams in their receptive codebooks in the BA phase and the indexes of the best transmitting-receiving beam pair along with the received signal strength (RSS) (or estimated SNR) is feedback to the AP.", "Afterward, the AP and UE would use this identified best beam pair to send/receive data, which is referred to as the DT phase.", "In particular, the highest supportable modulation and coding scheme (MCS) would be used based on a predefined RSS-MCS (or SNR-MCS) table.", "We refer to the $u$ -th UE in the network as UE $u$ or device $u$ , where the integer $u$ is its network ID.", "Without any ambiguity, we assign ID 0 to the AP for notational simplicity.", "Three types of link configuration are available in the system.", "They are codebook selection, relay selection, and beam tracking.", "A pictorial example is given in Fig.", "REF , which is described as below:" ], [ "Codebook selection", "To be flexible with different channel conditions and locations of UEs, $K$ codebooks of different beamwidth are available at the AP while only a single codebook (the one with the widest beamwidth) is used by UEs since UEs are generally equipped with smaller arrays, implying fewer antennas and wider beams.", "Hence, codebook selection is only enabled for the AP.", "In this work, we focus on the codebooks shown in Fig.", "REF .", "The generation of codebooks with different beamwidth is out of the scope of this work and a potential approach is to use the antenna on/off techniques [46], [47].", "It is worth pointing out that any beam training strategies or a strategy with different parameters (including hierarchical search) can be incorporated into our codebook selection framework by regarding them as different “abstract codebooks\" [15]." ], [ "Relay selection", "An example of relay-assisted transmission is given in Fig REF : the LOS path between UE 4 and the AP is temporarily blocked.", "The AP first transmitted data (with UE 4 as destination) to UE 2 and in the consecutive time slot, UE 2 forwards its received data to UE 4.", "We refer to the link between AP and any UE as a main link while the link between two UEs as a D2D link.", "As a result, a UE could be in one of the following four modes: idle, being a receiver (RX) in a main link, being a transmitter (TX) in a D2D link, or being a RX in the D2D link.", "We consider that all devices (AP and UEs) are working in a half-duplex mode, which implies that any devices cannot simultaneously participate in a main link and a D2D link." 
], [ "Beam tracking", "When the AP continues serving the same UE in the consecutive time slots, testing the neighboring beams of the current best beam pair would be sufficient to maintain the link quality.", "This process is called beam tracking.", "A consecutive time slot that tracks a UE is called a tracking slot.", "In our studied system, beam tracking is not activated to track a relay.", "In the following subsections, we will characterize the studied system by mathematically modeling network traffic, mobility, blockage, and mmWave link configuration.", "The notation is summarized in Table REF for reference.", "Table: NO_CAPTION" ], [ "Queue-based network traffic modeling", "We consider a discrete-time horizon $t=0,1,\\hdots $ , where each time step represents a time slot.", "The downlink data traffic for UEs arrives at the AP in forms of packets with fixed size in bits, denote as $S^\\text{pkg}$ .", "Denoting $z_u[t]$ as the number of arrived packets for UE $u$ during the $t$ -th time slot, we assume that the packet arrival follows a random process $\\mathbf {z}[t]=\\left[z_1[t],\\hdots ,z_U[t]\\right]^\\text{T}$ which is independently and identically distributed (IID) across time slots and its expectation ${E}\\left\\lbrace \\mathbf {z}[t]\\right\\rbrace $ is $\\lambda =\\left[\\lambda _1,\\hdots ,\\lambda _U\\right]^\\text{T}$ .", "The AP maintains $U$ queues to respectively buffer the packets for the UEs.", "Denoting $q_u$ as the number of packets in the queue of UE $u$ at the beginning of the $t$ -th time slot, we define $\\mathbf {q}[t]=\\left[q_1[t],\\hdots ,q_U[t]\\right]^\\text{T}$ as a queue state vector.", "With $d_u[t]$ as the number of packets that are successfully delivered to UE $u$ during the $t$ -th time slot, we define $\\mathbf {d}[t]=\\left[d_1[t],\\hdots ,d_U[t]\\right]^\\text{T}$ as the packet departure vector.", "Denoting $\\left\\lbrace \\cdot \\right\\rbrace ^+\\triangleq \\max (\\cdot ,0)$ , the transition of the state of queues is given as $\\mathbf {q}[t+1]= \\left\\lbrace \\mathbf {q}[t] - \\mathbf {d}[t]\\right\\rbrace ^++ \\mathbf {z}[t].$ The stochastic vector $\\mathbf {d}[t]$ is determined by the channel quality, user scheduling, and mmWave link configuration, which are quantified in the following subsections.", "We assume that each device (UE or AP) randomly moves within its region as shown in Fig.", "REF , a commonly used model adapted from [48].", "We consider that, for every $N^\\text{mobility-period}$ time slots, each device randomly picks a new speed uniformly from $\\left[v_\\text{min},v_\\text{max}\\right]$ , a new movement direction uniformly from $\\left[0,2\\pi \\right]$ , and a new rotation rate uniformly from $\\left[r^\\text{rotation}_\\text{min},r^\\text{rotation}_\\text{max}\\right]$ with random direction (clockwise or anti-clockwise along vertical axis as 2D-movement is considered).", "The device then performs a uniform motion with the chosen parameters for $N^\\text{mobility-period}$ time slots or it will reconfigure the those parameters when hitting the boundary of its region.", "This boundary can be any shape, as we consider in this paper a circle, of which the center is the device's initial position and the radius is parameterized as $r_u^\\text{move}$ for device $u$ ." 
], [ "Blockage", "For each UE, we use a $\\left(N^\\text{block}+1\\right)$ -state Markov model to characterize its blockage condition with respect to the AP.", "This is adapted from a two-state blockage model proposed in [49].", "We denote the blockage condition of UE $u$ at the beginning of the $t$ -th time slot as $H_u[t]\\in [N^\\text{block}]$ , which represents that UE $u$ would be under blockage for the following $H_u[t]$ time slots.", "Thus, the UE is under a LOS channel when $H_u[t]=0$ and a NLOS channel when $H_u[t]>0$ .", "The state transition is given in Fig.", "REF where $p_{u,n}^\\text{block}$ are unknown.", "We assume that the blockage is identified when it happens (e.g.", "by a drastic change of link quality).", "Denoting $l_u^\\text{block}[t]$ as the number of time slots have passed since the last observed blockage to UE $u$ , we define a blockage state vector $\\mathbf {l}^\\text{block}[t] = \\left[l_1^\\text{block}[t],\\hdots ,l_U^\\text{block}[t]\\right]^\\text{T}$ for later use.", "It is worth pointing out that there could exist correlation among the blockage conditions of UEs, i.e.", "$\\left\\lbrace H_u[t]\\right\\rbrace _{u=1}^{U}$ , given the mobility pattern of the UEs.", "Since UEs are modeled to be moving with unknown and random direction/velocity/rotation, we assume that $H_u[t]$ are independent among UEs." ], [ "Packet departure under mmWave link configuration", "We now show how the packet departure vector $\\mathbf {d}[t]$ is determined by the mmWave link configuration.", "We first define some notation: We denote $b_u^\\text{d2d}[t]$ as a binary variable, which equals 1 when UE $u$ is activated in a D2D link in the $t$ -th time slot.", "We then define $\\mathbf {b}^\\text{d2d}[t] = \\left[b_1^\\text{d2d}[t],\\hdots ,b_U^\\text{d2d}[t]\\right]^\\text{T}$ as a D2D state vector.", "We denote $I_{\\text{d2d-tx}}[t]$ and $I_{\\text{d2d-rx}}[t]$ as the ID of the TX and RX in the D2D link respectively, when the D2D link exists.", "We denote $I_{\\text{main-rx}}[t]$ as the ID of the RX in the main link, and further denote $I_{\\text{main-dest}}[t]$ as the ID of the destination of the packets transmitted via the main link.", "Accordingly, $I_{\\text{main-rx}}[t] \\ne I_{\\text{main-dest}}[t]$ indicates that a relay is used in the $t$ -th slot, namely that a D2D link will be activated in the $(t+1)$ -th slot.", "Therefore, the evolution of the D2D state vector can be expressed as $b_u^\\text{d2d}[t+1] = {1}\\left\\lbrace I_{\\text{main-dest}}[t] \\ne I_{\\text{main-rx}}[t]\\right\\rbrace {1}\\left\\lbrace I_{\\text{main-dest}}[t] = u \\text{~or~} I_{\\text{main-rx}}[t] = u\\right\\rbrace .$ We denote $b_u^\\text{track}[t]$ as a binary variable, which equals 1 when the $t$ -th time slot is a tracking slot for UE $u$ .", "We then define $\\mathbf {b}^\\text{track}[t] = \\left[b_1^\\text{track}[t],\\hdots ,b_U^\\text{track}[t]\\right]^\\text{T}$ as a tracking state vector.", "We denote a binary variable $I_{\\text{track}}[t]$ , which equals 1 when the system decides that the time slot $t+1$ is to become a tracking slot.", "Therefore, the evolution of the tracking state vector is given as $b_u^\\text{track}[t+1] = I_\\text{track}[t] {1}\\left\\lbrace I_\\text{main-rx}[t] = u\\right\\rbrace .$ We denote the ID of the codebook used by the AP for the $t$ -th time slot as $I_{\\text{cb}}[t]$ .", "At the beginning of the $t$ -th time slot, the system controller (aka decision maker) uses a policy to make a decision (aka action) on which UE is to be served and how its mmWave link 
is to be configured.", "This action is denoted by a quadruple $a[t]=\left( I_{\text{main-dest}}[t], I_{\text{main-rx}}[t], I_{\text{cb}}[t], I_\text{track}[t]\right)$ .", "In the following, we will use $a[t]$ to quantify the packet departure vector $\mathbf {d}[t]$ ." ], [ "Beam training overhead and outage", "We consider that the $k$ -th codebook has $N_k^\text{beam}$ non-overlapped directional beams to cover the entire horizontal space (360 degrees), as shown in Fig.", "REF .", "We assume that all devices adopt a subarray-based hybrid architecture: the AP and UEs are respectively equipped with $N^\text{arr}_\text{AP}$ and $N^\text{arr}_\text{UE}$ phased arrays, and all the arrays could be simultaneously used to reduce the beam alignment overhead by up to $N^\text{arr}_\text{AP}N^\text{arr}_\text{UE}$ times.", "For the $k$ -th codebook, we define an effective coefficient $C_k^\text{normal}$ which represents the ratio of the time for the DT phase to the entire time slot duration $T^\text{slot}$ .", "By denoting $T^\text{meas}$ as the time duration of each beam pair testing, $C_k^\text{normal}$ can be expressed as $C_k^\text{normal} = \left(1-\left\lceil \frac{N_k^\text{beam}}{N^\text{arr}_\text{AP}}\right\rceil \left\lceil \frac{N_1^\text{beam}}{N^\text{arr}_\text{UE}}\right\rceil \frac{T^\text{meas}}{T^\text{slot}}\right).$ When beam tracking is to be activated for a time slot, the UE to be scheduled would be the same as the one scheduled in the preceding time slot.", "Considering the whole spatial space as a full circle, we perform the beam tracking within a fixed-size circular sector.", "This circular sector (aka beam tracking region) is bisected by the direction that is associated with the currently connected beam pair.", "We then quantify the size of this circular sector in radians and denote it as $\phi ^\text{track}$ .", "We further denote $\mathcal {F}_k^\text{track}$ as the minimum subset of beams in the $k$ -th codebook that can cover the target beam tracking region.", "Accordingly, we have $\left|\mathcal {F}_k^\text{track}\right|=\left\lceil \frac{\phi ^\text{track}}{2\pi }N_k^\text{beam}\right\rceil $ .", "As a result, the effective coefficient of a tracking slot, denoted by $C_k^\text{track}$ , is given as $C_k^\text{track} = \left(1-\left|\mathcal {F}_k^\text{track}\right|\left|\mathcal {F}_1^\text{track}\right| \frac{T^\text{meas}}{T^\text{slot}}\right).$ We can see that $C_k^\text{track}\ge C_k^\text{normal}$ as $\frac{\phi ^\text{track}}{2\pi }$ is usually smaller than $\frac{1}{N^\text{arr}_\text{AP}}$ and $\frac{1}{N^\text{arr}_\text{UE}}$ , which motivates the use of beam tracking when the same UE is served consecutively.", "Different codebooks not only result in different overhead reflected by (REF ) and (REF ), but also in different levels of robustness to mobility.", "As shown in Fig.", "REF , narrower beams suffer more from link outages caused by device self-rotation and motion.", "To include this in our model, for any link in which the TX has ID $I_\text{tx}$ and uses the $k$ -th codebook, and the RX has ID $I_\text{rx}$ , we define an outage coefficient $C_{k,I_\text{tx},I_\text{rx}}^\text{outage}[t]$ which is the ratio of the link outage duration to the entire DT phase.", "It is worth pointing out that $C_{k,I_\text{tx},I_\text{rx}}^\text{outage}[t]$ is a random variable as it depends on the transceivers' mobility pattern and channel condition."
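The overhead captured by Equations (REF ) and (REF ) can be evaluated directly; the sketch below uses illustrative parameter values that are not taken from this paper.

```python
import math

def effective_coeffs(n_beam_k, n_beam_1, n_arr_ap, n_arr_ue,
                     t_meas, t_slot, phi_track):
    """Effective coefficients of a normal slot and a tracking slot."""
    # Normal slot: exhaustive scan over all beam pairs, accelerated by subarrays.
    c_normal = 1.0 - (math.ceil(n_beam_k / n_arr_ap)
                      * math.ceil(n_beam_1 / n_arr_ue) * t_meas / t_slot)
    # Tracking slot: only the beams covering a sector of width phi_track are scanned.
    f_k = math.ceil(phi_track / (2 * math.pi) * n_beam_k)
    f_1 = math.ceil(phi_track / (2 * math.pi) * n_beam_1)
    c_track = 1.0 - f_k * f_1 * t_meas / t_slot
    return c_normal, c_track

# Hypothetical numbers: 64 AP beams, 16 UE beams, 4 and 2 subarrays,
# 10 us per beam-pair measurement, 10 ms slot, 60-degree tracking sector.
print(effective_coeffs(64, 16, 4, 2, 10e-6, 10e-3, math.pi / 3))
```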
"As a result, the eventual effective coefficients for the main link, denoted as $C^\text{eff-main}[t]$ , and for the D2D link, denoted as $C^\text{eff-d2d}[t]$ , can be given as below to incorporate the outage events: $&C^\text{eff-main}[t] = \left(1-C_{I_\text{cb}[t],0,I_\text{main-rx}[t]}^\text{outage}[t]\right)\left[C_{I_\text{cb}[t]}^\text{normal}\left(1-\mathbf {1}^\text{T}\mathbf {b}^\text{track}[t]\right) + C_{I_\text{cb}[t]}^\text{track}\mathbf {1}^\text{T}\mathbf {b}^\text{track}[t]\right],\\&C^\text{eff-d2d}[t] =\left(1-C_{1,I_\text{d2d-tx}[t],I_\text{d2d-rx}[t]}^\text{outage}[t]\right)C_1^\text{normal},$ where we recall that $\mathbf {1}^\text{T}\mathbf {b}^\text{track}[t]=I_\text{track}[t-1]$ ." ], [ "Effective data rate", "We now formulate the outcome of the BA phase and its impact on the packet departure of the DT phase.", "The following formulation is for a main link, and a similar formulation could be done for a D2D link.", "Denoting $k$ as the ID of the codebook used by the TX, and $j_\text{tx}$ and $j_\text{rx}$ as the IDs of the beams used by the TX and RX, respectively, we define a function $f(t,k, I_\text{tx},I_\text{rx},j_\text{tx},j_\text{rx})$ as the RSS obtained between devices $I_\text{tx}$ and $I_\text{rx}$ .", "In particular, $f$ is an unknown channel function that incorporates the physical layer factors affecting the received signals, such as the previously described UE mobility, blockage condition, channel shadowing, hardware impairments, and noise.", "Denoting $\mathcal {F}[t]$ as the set of beams used by the TX and $\text{rss}_\text{main}[t]$ as the maximum RSS obtained after scanning all the beam pairs, we have $\text{rss}_\text{main}[t] = \max _{\begin{array}{c}j_\text{tx}\in \mathcal {F}[t], j_\text{rx}\in \left[N^\text{beam}_{1}\right]^+\end{array}}f\left(t,I_\text{cb}[t],0, I_{\text{main-rx}}[t], j_\text{tx}, j_\text{rx}\right),$ where $\mathcal {F}[t] = \left[N^\text{beam}_{I_\text{cb}[t]}\right]^+$ for a normal time slot and $\mathcal {F}[t] = \mathcal {F}_{I_\text{cb}[t]}^\text{track}$ for a tracking time slot.", "We consider that there are $M+1$ MCSs which correspond to $M+1$ increasing data rates $R_0,\hdots , R_M$ .", "They are sequentially indexed by MCS $0,\hdots ,M$ , and MCS 0 (i.e.", "$R_0=0$ ) indicates a failed connection.", "Given the RSS in (REF ), the AP uses the highest supportable MCS for the DT phase by referring to a predefined RSS-MCS table where the minimum RSS required to support MCS $m$ is denoted as $\text{rss}_m$ .", "Therefore, the instantaneous data rate of the main link is given as $R_\text{main}[t] = \max _{m\in [M]} {1}\left\lbrace \text{rss}_\text{main}[t]\ge \text{rss}_m\right\rbrace R_m.$ It is worth pointing out that we assume that the MCS is selected perfectly and thus there is no error in packet decoding.", "Though packet retransmissions exist in real systems, they can be treated as new arrivals to the queues to fit our model.", "Similarly, if a D2D link exists in the $t$ -th time slot, its instantaneous data rate can be given as $&R_\text{d2d}[t] = \max _{m\in [M]} {1}\left\lbrace \text{rss}_\text{d2d}[t]\ge \text{rss}_m\right\rbrace R_m,\\&\text{rss}_\text{d2d}[t] = \max _{j_\text{tx}\in \left[N^\text{beam}_{1}\right]^+, j_\text{rx}\in \left[N^\text{beam}_{1}\right]^+} f\left(t,1,I_{\text{d2d-tx}}[t], I_{\text{d2d-rx}}[t], j_\text{tx}, j_\text{rx}\right) .$
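The MCS selection of Equation (REF ) and the resulting per-slot packet count can be sketched as follows; the RSS-MCS table values are hypothetical placeholders, and the rate is interpreted as bits deliverable per full slot (an assumption about units on our part).

```python
import numpy as np

# Hypothetical RSS-MCS table: minimum RSS (dBm) and rate (bits per slot) per MCS.
# MCS 0 represents a failed connection with zero rate.
RSS_MIN = np.array([-np.inf, -78.0, -72.0, -66.0, -60.0])   # rss_m thresholds
RATES = np.array([0.0, 0.4e6, 0.9e6, 1.8e6, 3.5e6])         # R_0, ..., R_M

def highest_rate(rss):
    """Rate of the highest supportable MCS for a measured RSS."""
    return RATES[rss >= RSS_MIN].max()

def packets_departed(rss, c_eff, pkt_bits):
    """floor(R * C_eff / S_pkg), mirroring the departure expressions below."""
    return int(highest_rate(rss) * c_eff // pkt_bits)

# Example: an RSS of -65 dBm selects the MCS with threshold -66 dBm.
print(packets_departed(rss=-65.0, c_eff=0.87, pkt_bits=8 * 1500))
```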
Therefore, the numbers of packets departed via the main link, denoted as $d_\text{main}[t]$ , or via the D2D link, denoted as $d_\text{d2d}[t]$ , during the $t$ -th slot, are: $&d_\text{main}[t] = \left\lfloor \frac{R_\text{main}[t]C^\text{eff-main}[t]}{S^\text{pkg}}\right\rfloor ,\\&d_\text{d2d}[t] = \left\lfloor \frac{R_\text{d2d}[t]C^\text{eff-d2d}[t] }{S^\text{pkg}}\right\rfloor .$ We consider that a packet is out of the queue when it has arrived at its destination, and the maximum number of packets that can be delivered is limited by the capacity of both the main link and the D2D link.", "Then the elements of the packet departure vector $\mathbf {d}[t]$ are given as $d_u[t] ={\left\lbrace \begin{array}{ll}d_\text{main}[t] {1}\left\lbrace I_{\text{main-dest}}[t] = I_{\text{main-rx}}[t]\right\rbrace &u = I_{\text{main-dest}}[t]\\\min \left(d_\text{main}[t-1],d_\text{d2d}[t]\right)b_u^\text{d2d}[t] & u = I_{\text{d2d-rx}}[t]\\0 &\text{otherwise}.\end{array}\right.", "}$ Given the formulated system model, our objective is to minimize the packet delay by dynamically and jointly scheduling a user for each time slot and configuring its link settings.", "This problem is a complex dynamic decision-making process which involves combinatorial and integer optimization.", "Further, the function $f$ that captures the properties of the wireless system is usually unknown.", "This makes finding an optimal solution challenging.", "In the following two sections, we will solve it with the aid of RL techniques." ], [ "DRL-based joint user and link controller", "In this section, we propose a DRL-based controller to solve this joint user scheduling and link configuration problem.", "We first restate the studied problem as a POMDP.", "Then we show how to adapt the state-of-the-art PPO to our problem.", "Finally, we summarize some data scaling operations that are used in training our DRL-based controller." ], [ "POMDP formulation of joint mmWave UE scheduling and link configuration", "Our studied problem can be characterized as a POMDP described by a 6-tuple $\left(\mathcal {S},\mathcal {D}_0,\mathcal {A},\mathcal {P},\mathcal {R},\gamma \right)$ , which is detailed below:" ], [ "State space $\mathcal {S}$", "We define an observable state for the $t$ -th time slot as a quadruple $s[t] = \left(\mathbf {q}[t],\mathbf {b}^\text{d2d}[t],\mathbf {b}^\text{track}[t],\mathbf {l}^\text{block}[t]\right)$ and the state space $\mathcal {S}$ consists of all possibilities of $s[t]$ .", "There are countably infinite discrete states in $\mathcal {S}$ as $\mathbf {q}[t]$ and $\mathbf {l}^\text{block}[t]$ can be composed of any non-negative integers.", "We assume the initial state $\mathbf {s}[0]$ follows a distribution $\mathcal {D}_0$ ."
], [ "Action space $\\mathcal {A}$", "The action space $\\mathcal {A}$ consists of all the feasible actions that could be potentially taken by a controller.", "We recall that the action at the $t$ -th time slot has been already defined as the quadruple $a[t] = \\left(I_{\\text{main-dest}}[t], I_{\\text{main-rx}}[t], I_{\\text{cb}}[t], I_\\text{track}[t]\\right)$ , where $I_{\\text{main-dest}}[t]\\in [U]^+$ , $I_{\\text{main-rx}}[t]\\in [U]^+$ , $I_{\\text{cb}}[t]\\in [K]^+$ and $I_\\text{track}[t]\\in \\left\\lbrace 0,1\\right\\rbrace $ .", "It is worth pointing out that $I_\\text{track}[t]$ has to be 0 when $I_{\\text{main-dest}}[t]\\ne I_{\\text{main-rx}}[t]$ , which is due to the fact that tracking a relay is not allowed in the studied system.", "This results in the size of action space $\\mathcal {A}$ being calculated as $\\left|\\mathcal {A}\\right|=U^2K + UK$ , where $U^2K$ corresponds to the scenario when beam tracking is not activated ($I_\\text{track}[t]=0$ ) while $UK$ corresponds to the other scenario ($I_\\text{track}[t]=1$ )." ], [ "Transition probabilities $\\mathcal {P}$", "We denote the probability of being in a state ${\\textbf {s}}^\\prime $ after taking an action $\\mathbf {a}$ in the state ${\\textbf {s}}$ as $\\mathcal {P}\\left({s}^\\prime ,s,a\\right)={P}\\left(s[t+1]={s}^\\prime |s[t]=s,a[t]=a\\right)$ .", "Since the mobility and position information of the UEs are not exploited, the underlying transition probabilities between observable states are non-stationary.", "This makes our studied system a POMDP, where the decision on action is made under the uncertainty and ${\\textbf {s}}$ is a partially observation of the true environment.", "These transition probabilities $\\mathcal {P}$ are unknown since they are decided by the unknown packet arrival process $\\mathbf {z}[t]$ and channel function $f$ in (REF ), which jointly reflects the underlying dynamics and randomness of the studied mmWave system (aka environment in RL terminology)." ], [ "Reward function $\\mathcal {R}$", "In MDP settings, a reward, denoted by $r[t]$ for the $t$ -th time slot, is a random variable being conditional on the current state $s[t]$ and action taken $a[t]$ .", "Its observation is provided by the environment after the execution of the action, which can be expressed as $r[t]=\\mathcal {R}\\left(s[t],a[t]\\right)$ .", "In this work, we select $\\mathcal {R}$ as the number of packets that are delivered to the destination, i.e.", "$r[t] = \\mathbf {1}^\\text{T}\\mathbf {d}[t]$ .", "We will discuss later the motivation of this reward design." 
], [ "Discount factor $\\gamma $ and related standard definitions\nin RL settings", "We further present some standard concepts and definitions that are frequently used in a RL setting.", "With $\\gamma \\in [0,1]$ representing a discount factor that determines the importance of current and future rewards, the cumulative discounted reward from the $t$ -th time slot, denoted by $G[t]$ , is defined as [17]: $G[t] = \\sum \\nolimits _{\\ell =t}^{\\infty }\\gamma ^{\\ell -t} r[l].$ We use $\\pi $ to denote a stochastic policy and $\\pi \\left(a|s\\right)$ to denote the probability of choosing an action $a$ when observing a state $s$ .", "For a policy $\\pi $ , its state value function $v^\\pi \\left(\\cdot \\right)$ , state-action value function $Q^\\pi \\left(\\cdot \\right)$ , and advantage function $A^\\pi \\left(\\cdot \\right)$ are defined as below [21]: $&v^\\pi \\left(s[t]\\right) = {E}_{a[t],s[t+1],a[t+1],\\cdots }\\left\\lbrace G[t]\\Large |s[t]\\right\\rbrace ,\\\\ &Q^\\pi \\left(s[t],a[t]\\right) = {E}_{s[t+1],a[t+1],\\cdots }\\left\\lbrace G[t]\\Large |s[t],a[t]\\right\\rbrace ,\\\\ &A^\\pi \\left(s[t],a[t]\\right) = Q^\\pi \\left(s[t],a[t]\\right) - v^\\pi \\left(s[t]\\right).$ The expectations in (REF ) are taken over the future actions and states.", "They are determined by $\\pi $ (as $a[t]\\sim \\pi \\left(a[t]|s[t]\\right)$ ) and the unknown transition probabilities $\\mathcal {P}$ (as $s[t+1]\\sim \\mathcal {P}\\left(s[t+1],s[t],a[t]\\right)$ ).", "By now, we have restated our problem within a POMDP framework using RL terminology.", "With our proposed step-wise reward $\\mathcal {R}$ , i.e.", "$r[t] = \\mathbf {1}^\\text{T}\\mathbf {d}[t]$ , we approximate the original objective of minimizing the average packet delay by maximizing an expected cumulative discounted reward, i.e.", "${E}\\left\\lbrace \\sum _{l=0}^{\\infty }\\gamma ^{l} r[l]\\right\\rbrace $ .", "This is motivated by the following facts: (1) In queuing networks, minimizing the delay suggests minimizing the sizes of queues as soon as possible according to Little's Law.", "Therefore, a reward that results in small queues timely would help reduce the mean delay experienced by packets.", "The queues might however experience a large buffer at the early stage of training/learning, which is not beneficial to NN training as the loss function could have a large variance.", "According, we resort to maximizing the total number of packets departed as it is similar to minimizing the queue state given that we have no control over the arrival rates.", "(2) In particular, the $\\ell _1$ -norm of the packet departure vector, i.e.", "$\\mathbf {1}^\\text{T}\\mathbf {d}[t]$ , is an immediate feedback of the action taken, which quantifies how much of the queue is immediately reduced, hence is an effective metric for delay minimization.", "(3) Moreover, note that there is a discount factor $\\gamma ^{l}$ applied to the reward.", "This implies that the number of packets departed (the step-wise reward) in the immediate future is more important than those in the distant future, which intrinsically meet the objective of minimizing the delay, namely departing more packets as soon as possible." 
], [ "Policy gradient with PPO algorithm within A2C framework", "We now present a DRL-based controller to solve our POMDP problem.", "We design this controller by exploiting a state-of-the-art policy gradient method, called the PPO algorithm [20] within the A2C framework [21].", "The major reasons that we use a policy-based method rather than a value-based method are: (1) Our POMDP has infinite discrete states and a large number of actions while it has been empirically shown that DQN [25] could fail for MDPs with large dimensionality [20].", "(2) Our POMDP has a discrete action space, which prevents us from using DDPG [26].", "In the following, we will briefly explain PPO (see [20] for more details) and show how it is used to train a DRL-based controller in the A2C framework." ], [ "Objective function design with PPO", "A policy-based method is to dynamically learn an optimal policy $\\pi ^*$ that maximizes the objective function ${E}\\left\\lbrace \\sum _{l=0}^{\\infty }\\gamma ^{l} r[l]\\right\\rbrace $ .", "Directly optimizing ${E}\\left\\lbrace \\sum _{l=0}^{\\infty }\\gamma ^{l} r[l]\\right\\rbrace $ results in high computational and sample complexity [20].", "To tackle this, PPO has been proposed in [20] to focus on the first-order surrogate of the original objective function ${E}\\left\\lbrace \\sum _{l=0}^{\\infty }\\gamma ^{l} r[l]\\right\\rbrace $ .", "It was shown to be sample-efficient and have better empirical performance than other policy-based methods, such as vanilla PG [21] and TRPO [50].", "We first present some definitions borrowed from [20]: $\\pi _{\\theta }$ denotes a policy parameterized by ${\\theta }$ while ${\\theta }_{\\text{old}}$ denotes the parameter of the most recently learned policy.", "$\\hat{A}[t]$ denotes an estimator of the advantage function $A^{\\pi _{\\theta }}\\left(s[t],a[t]\\right)$ .", "$\\rho _{\\theta } = \\frac{\\pi _{\\theta }\\left(a[t]|s[t]\\right)}{\\pi _{{{\\theta }}_{\\text{old}}}\\left(a[t]|s[t]\\right)}$ and $\\text{clip}\\left(\\rho _{\\theta },1-\\epsilon ,1+\\epsilon \\right)$ is a clipping function that limits the ratio $\\rho _{\\theta }$ by $1+\\epsilon $ when $\\hat{A}[t]>0$ and by $1-\\epsilon $ when $\\hat{A}[t]<0$ .", "Instead of maximizing ${E}\\left\\lbrace \\sum _{l=0}^{\\infty }\\gamma ^{l} r[l]\\right\\rbrace $ , PPO opts to maximizes the following surrogate: $L^{\\text{clip}}({\\theta }) = {E}\\left\\lbrace \\min \\left(\\rho _{\\theta }\\hat{A}[t],\\text{clip}\\left(\\rho _{\\theta },1-\\epsilon ,1+\\epsilon \\right)\\hat{A}[t]\\right)\\right\\rbrace .$ Please refer to [50] and [20] for more details on how (REF ) is derived from ${E}\\left\\lbrace \\sum _{l=0}^{\\infty }\\gamma ^{l} r[l]\\right\\rbrace $ .", "With the objective in (REF ), in the following, we will exploit the A2C framework to learn the estimator $\\hat{A}[t]$ and the policy $\\pi _{{\\theta }}$ via NN, as shown in Fig.", "REF .", "Figure: A2C framework of proposed DRL-based controller, where ⊙\\odot is the Hadamard product" ], [ "DRL-based controller with A2C framework", "As shown in Fig.", "REF , the proposed DRL-based controller has an A2C framework consisting of an actor network and a critic network." 
], [ "Actor network", "The actor network is composed of a NN parameterized by ${\\theta }$ and an additional computational layer which we called Constraint handler.", "The output of the NN is a probability vector denoted as $\\tilde{\\pi }_{{\\theta }}$ .", "The Constraint handler generates a boolean vector which has the same size as $\\tilde{\\pi }_{{\\theta }}$ and is denoted as $\\mathbf {m}(s)$ .", "The Constraint handler sets the elements of $\\mathbf {m}(s)$ to either zero or one with the rule: an element is zero if its index corresponds to one of the unfeasible actions that violate the scheduling constraints mentioned in Sec.", "REF .", "We recall that the constraints are: (1) If a UE is scheduled as either TX or RX in a D2D link, this UE cannot be scheduled as the destination (i.e.", "RX) in the main link for the same time slot.", "(2) If the controller decides, at the $t$ -th time slot, to track a UE in the consecutive time slot, then this UE has to be scheduled as the destination (i.e.", "RX) in the main link in the $(t+1)$ -th time slot.", "As a result, the output of the actor network is a stochastic policy given as $\\pi _{{\\theta }}(a\\big |s)\\sim \\tilde{\\pi }_{{\\theta }}(a\\big |s) \\odot \\mathbf {m}(s)$ , where the $\\ell _0$ -norm of $\\pi _{{\\theta }}(a\\big |s)$ is scaled to 1." ], [ "Critic network", "The critic network is a NN parameterized by ${\\omega }$ , which outputs an estimation of the state-value function $v^{\\pi _{\\theta }}(s)$ , denoted by $v^{\\pi _{\\theta }}_{\\omega }(s)$ .", "This estimation would be used to design $\\hat{A}[t]$ with the generalized advantage estimation (GAE) method [51].", "In particular, GAE estimates $A^\\pi \\left(\\cdot \\right)$ by using the old policy $\\pi _{{{\\theta }}_{\\text{old}}}$ for finite times (aka a batch).", "We assume that a batch of $T$ samples are collected before each policy update.", "For notational simplicity, we omit the initial slot index and index these $T$ steps by $\\tilde{t} \\in [0,T-1]$ .", "GAE proposes to estimate $Q^\\pi \\left(s[\\tilde{t}],a[\\tilde{t} ]\\right)$ with a linear combination of $T$ -step bootstrapping given as below [51]: $\\hat{Q}^\\pi \\left(s[\\tilde{t} ],a[\\tilde{t} ]\\right) = r[\\tilde{t} ] + \\gamma r[\\tilde{t} +1] +\\cdots + \\gamma ^{T-\\tilde{t} -1}r[T-1] + \\gamma ^{T-\\tilde{t} }v^\\pi _{\\omega }(s[T]).$ Accordingly, an estimator of $A^\\pi \\left(s[\\tilde{t} ],a[\\tilde{t} ]\\right)$ can be obtained as [20]: $\\hat{A}[\\tilde{t}] &= \\hat{Q}^\\pi \\left(s[\\tilde{t} ],a[\\tilde{t} ]\\right) - v^\\pi _{\\omega }(s[\\tilde{t} ]),\\\\& = \\delta [\\tilde{t} ] + \\gamma \\delta [\\tilde{t} +1] +\\cdots + \\gamma ^{T-\\tilde{t} -1}\\delta [T-1],\\\\\\delta [\\tilde{t} ] & = r[\\tilde{t} ] + \\gamma v^\\pi _{\\omega }(s[\\tilde{t} +1]) - v^\\pi _{\\omega }(s[\\tilde{t} ]),$ where $\\delta [\\tilde{t}]$ is called the temporal difference (TD) error in RL literature." 
], [ "Loss functions for NN training", "In this part, we present the loss functions used to train the two NNs.", "For the actor network, as pointed out by [21], an entropy term, denoted by $L^\\text{entropy}\\left({\\theta }\\right)$ , is added into the loss function to encourage the exploration: $L^\\text{entropy}\\left({\\theta }\\right) = -\\frac{1}{T}\\sum _{\\tilde{t}=0}^{T}\\sum _{a\\in \\mathcal {A}}\\pi _{{\\theta }}\\left(a\\big | s[\\tilde{t}]\\right) \\log \\left(\\pi _{{\\theta }}\\left(a\\big | s[\\tilde{t}]\\right)\\right).$ An approximation of $L^{\\text{clip}}({\\theta })$ in (REF ), denoted as $\\hat{L}^{\\text{clip}}({\\theta })$ can be obtained by taking the expectation over the observed trajectory $\\tilde{t}\\in [0,T-1]$ .", "By defining $c_\\text{e}$ as a tunable coefficient, the loss function to be minimized for the actor network, denoted by $L\\left({\\theta }\\right)$ , is given as $L^{\\text{actor}}\\left({\\theta }\\right) = - \\hat{L}^{\\text{clip}}({\\theta }) - c_\\text{e}L^\\text{entropy}\\left({\\theta }\\right),$ For the critic network, the estimation error (i.e.", "the loss function to be minimized) of $v^{\\pi _{\\theta }}$ is also approximated with the $T$ samples as below [20] $L^{\\text{critic}}({\\omega }) = \\frac{1}{T}\\sum \\nolimits _{\\tilde{t}=0}^{T-1}\\left(v^{\\pi _{\\theta }}_{\\omega }(s[\\tilde{t}]) - \\sum \\nolimits _{l=\\tilde{t}}^{T-1}\\gamma ^{l-\\tilde{t}}r[l] - \\gamma ^{T-\\tilde{t}}v^{\\pi _{\\theta }}_{\\omega }(s[T-1])\\right)^2.$ In practical NN implementation, we consider the actor and critic networks as a single united NN (weighted by ${\\theta }$ and ${\\omega }$ ) that outputs the $\\pi _{{\\theta }}(a\\big |s)$ and $v^{\\pi _{\\theta }}_{{\\omega }}(s)$ with different blocks.", "Therefore, the eventual loss function would be the sum of $L^{\\text{actor}}\\left({\\theta }\\right)$ and $L^{\\text{critic}}({\\omega })$ .", "When training a NN, its inputs and outputs are usually scaled (e.g.", "normalization and standardization) to avoid unstable learning process or exploding gradients.", "Our designed model has involved data scaling in three parts.", "(1) We scale the queue state vector $\\mathbf {q}[t]$ by dividing it by its maximum element, i.e.", "$\\mathbf {q}[t]\\leftarrow \\frac{\\mathbf {q}[t]}{\\left||\\mathbf {q}[t]\\right||_{\\ell _0}}$ .", "This makes all the elements of $\\mathbf {q}[t]$ bounded by 1 while the ratios among the length of queues are still kept.", "(2) The elements of $\\mathbf {L}^{\\text{block}[t]}$ could be large as it is possible that a UE never encounters a blockage.", "Since ${N}^\\text{block}$ is unknown, we introduce a large enough constant $\\tilde{N}^\\text{block}$ such that $\\tilde{N}^\\text{block}>{N}^\\text{block}$ , and we transform $l_u^{\\text{block}[t]}$ as $l_u^{\\text{block}[t]} \\leftarrow P_u^{\\text{block}[t]}\\triangleq \\max \\left(\\frac{\\tilde{N}^\\text{block}-l_u^{\\text{block}[t]}}{\\tilde{N}^\\text{block}+1},0\\right)$ , which is bounded in $[0,1]$ .", "$P_u^{\\text{block}[t]}$ can be intuitively interpreted as a likelihood that UE $u$ is still undergoing blockage.", "This is because when $l^{\\text{block}}_{u}$ is large, it is intuitive to guess that the blockage event has already finished.", "(3) We scales the reward $r[t]$ as $r[t]\\leftarrow \\frac{r[t]}{N_\\text{p}(x)}$ , where $N_\\text{p}(x)\\triangleq \\frac{x 10^9T^\\text{slot}}{S^\\text{pkg}}$ is the maximum number of packets that could be transmitted when the link rate is $x$ Gbps.", "Given the system 
bandwidth $B$ , $x$ could be easily tuned to make the reward $r[t]$ bounded by $[0,1]$ .", "It is worth pointing out that the data scaling only impacts the NN training but not the algorithm itself.", "By now we have illustrated the background theory and implementation of the proposed DRL-based controller, which is summarized in Algorithm REF .", "Training process for DRL-based controller using PPO within A2C framework: Input: Total time steps $T^{\text{total}}$ , batch size $T$ , clipping parameter $\epsilon $ , entropy loss coefficient $c_\text{e}$ , data scaling coefficients $x$ , $\tilde{N}^\text{block}$ .", "Initialize: randomize NN parameters ${\theta }$ and ${\omega }$ ; ${\theta }_{\text{old}} \leftarrow {\theta }$ ; $s[0]\sim \mathcal {D}_0$ .", "For $t=0,\hdots ,T^{\text{total}}$ : Scale the state $s[t]$ and input it to the actor/critic networks; Buffer the estimate of the state-value function $v^{\pi _{\theta }}_{\omega }(s[t])$ provided by the critic network; Choose an action $a[t]$ following the stochastic policy $\pi _{{{\theta }}_{\text{old}}}(a|s[t])$ provided by the actor network and execute $a[t]$ in the system (aka environment); Get the new state $s[t+1]$ provided by the system; Buffer the scaled reward $r[t]$ provided by the system; If $ \text{Mod}(t+1,T) == 0$ : reindex the buffered $T$ time steps from $\left\lbrace t,t+1,\cdots ,t+T-1\right\rbrace $ to $\left\lbrace 0,1,\cdots ,T-1\right\rbrace $ ; Calculate the TD errors $\delta [\tilde{t}],~\tilde{t}\in [0,T-1]$ as in (); Calculate the GAE $\hat{A}[\tilde{t}],~\tilde{t}\in [0,T-1]$ as in (REF ); Optimize the NN parameter ${\theta }$ (actor network) by $\nabla _{\theta } L^{\text{actor}}({\theta })$ as in (REF ); Optimize the NN parameter ${\omega }$ (critic network) by $\nabla _{\omega } L^{\text{critic}}({\omega })$ as in (REF ); $\pi _{{{\theta }}_{\text{old}}} \leftarrow \pi _{\theta }$ ; Clear the buffer" ], [ "Empirical MAB-based joint user and link controller", "DRL-based solutions to MDP problems generally result in a high sample and computational complexity due to NN training.", "This can be a major challenge in wireless applications, where feedback is usually only available online unless a well-modeled simulator can be built.", "Motivated by this, in this section, we further propose another learning-based controller which is sample-efficient and has low computational complexity.", "It is called the Empirical MAB-based controller.", "Its key idea is to decompose the decision-making at each time slot into a sequence of sub-actions as follows: First, an empirical maxweight policy is exploited for UE scheduling.", "Thereafter, the MAB framework, one of the most basic RL settings, is leveraged to learn whether and which relay should be configured for the main link.", "Then the MAB framework is again used to learn the optimal codebook for the configured main link.", "Finally, a heuristic rule is derived to decide whether beam tracking is performed, which completes the whole decision-making process."
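A small Python sketch of the three scaling steps described above (our own illustration; the packet size $S^\text{pkg}$ value below is a placeholder, while the listed $x$ and $\tilde{N}^\text{block}$ follow the values used later in the evaluation):

import numpy as np

def scale_inputs(q, l_block, r, N_block_tilde=10, x=2.0, T_slot=0.01, S_pkg=2**15):
    # q: queue lengths; l_block: per-UE slots since the last blockage; r: raw step reward (delivered packets)
    q_scaled = q / np.max(q) if np.max(q) > 0 else q                              # bounded by 1, ratios preserved
    P_block = np.maximum((N_block_tilde - l_block) / (N_block_tilde + 1.0), 0.0)  # "still blocked" likelihood
    N_p = x * 1e9 * T_slot / S_pkg                                                # max packets deliverable at x Gbps
    r_scaled = r / N_p
    return q_scaled, P_block, r_scaled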
], [ "UE scheduling with empirical maxweight", "It is well-known that the queue-aware maxweight scheduling policy is throughput optimal [22].", "Given the instantaneous packet departure rate of UE $u$ at time slot $t$ for $u\\in [U]^+$ , denoted as $\\tilde{d}_u[t]$ , the maxweight allocates time slot $t$ to the user that satisfies $\\tilde{u} = \\arg \\max \\nolimits _{u\\in [U]^+} q_u[t]\\tilde{d}_u[t].$ Unfortunately, $\\tilde{d}_u[t]$ is unknown in our studied system due to the fact that $\\tilde{d}_u[t]$ is the function of random channel and stochastic link configuration.", "Therefore, we propose to use the empirical mean of $\\tilde{d}_u[t]$ to perform the UE selection described in (REF ).", "To be specific, a sample of $\\tilde{d}_u[t]$ is observable at the end of time slot $t$ when UE $u$ is activated as the RX in either a main or D2D link, which is captured by the packet departure vector $\\mathbf {d}[t]$ .", "We denote $N^\\text{rx}_u[t]$ as the number of times that UE $u$ has been the RX in any link until the beginning of time slot $t$ , then we have $N^\\text{rx}_u[t] = N^\\text{rx}_u[t-1]+{1}\\left\\lbrace u=I_{\\text{main-rx}}[t] \\text{~or~} u=I_{\\text{main-dest}}[t-1]\\right\\rbrace .$ We then propose an empirical estimation of $\\tilde{d}_u[t+1]$ , denoted by $\\hat{d}_u[t+1]$ , as below $\\hat{d}_u[t+1] = \\frac{N^\\text{rx}_u[t-1]\\hat{d}[t]+d_u[t]\\left(\\frac{1}{2}{1}\\left\\lbrace b^\\text{d2d}_u[t]=1\\right\\rbrace + {1}\\left\\lbrace u=I_\\text{main-rx}[t]\\right\\rbrace \\right) }{N^\\text{rx}_u[t]},$ where the coefficient $\\frac{1}{2}$ in $\\frac{1}{2}{1}\\left\\lbrace B^\\text{d2d}_u[t]=1\\right\\rbrace $ is to penalize the extra delay (one time slot) that is introduced by the use of a D2D link.", "As a result, our empirical maxweight policy for the UE scheduling can be summarized as $I_{\\text{main-dest}}[t] =\\left\\lbrace \\begin{array}{lr}I_{\\text{main-dest}}[t-1], & \\mathbf {1}^\\text{T}\\mathbf {b}^\\text{track}[t] = 1,\\\\\\arg \\max _{u\\in \\left\\lbrace i\\in [U]^+ \\big |B_i^\\text{d2d}[t]=0\\right\\rbrace } q_u[t]\\hat{d}_u[t], & \\text{otherwise},\\end{array}\\right.$ where the first case deals with scenarios when beam tracking is activated while the second one tackles the constraint that UEs in an activated D2D link could not be scheduled for a main link.", "Without considering the state information $s[t]$ , selecting a relay (could be any UE in the network) for UE $I_\\text{main-dest}[t]$ can be regarded as a MAB problem, in which UE $u$ is called arm $u$ and a reward of the arm is provided each time when it is used.", "The arms are dynamically explored and exploited over the time such that the expected cumulative reward is maximized.", "As the state $s[t]$ is omitted, a MAB problem is also called single-state (or stateless) RL problem [17].", "At each time slot, one of the $U$ arms (relays) is chosen to be a potential relay for the scheduled UE $I_{\\text{main-dest}}[t]$ , and its network ID is denoted as $I_\\text{main-rx}[t]$ .", "The case $I_\\text{main-rx}[t]=I_\\text{main-dest}[t]$ simply represents that using UE $I_\\text{main-dest}[t]$ itself as a 'relay', namely that no D2D transmission would be used in the next time slot." 
], [ "Designing arm reward", "With the chosen relay $I_\\text{main-rx}[t]$ , the instantaneous data rate $R_\\text{main}[t]$ (in (REF )) is observed by completing the BA phase and the effective coefficient $C^\\text{eff-main}[t]$ (in (REF )) is observed at the end of the DT phase.", "Accordingly, we design the reward of arm $u$ as the eventual effective data rate provided by relay $u$ , which is denoted as $R_{\\text{relay},u}[t] =\\left\\lbrace \\begin{array}{lr}R_{\\text{main}}[t]C^\\text{eff-main}[t], & u = I_\\text{main-rx}[t],\\\\\\frac{1}{2}\\text{min}\\left(R_{\\text{main}}[t]C^\\text{eff-main}[t],R_{\\text{d2d}}[t+1]{C}^\\text{d2d}[t+1]\\right), & \\text{otherwise},\\end{array}\\right.$ where the coefficient $\\frac{1}{2}$ in the second case is to penalize the extra delay introduced by the D2D link (similar to that in (REF )).", "The motivation of using the eventual effective data rate in the reward is to penalize the uses of bad relays that provide poor link quality (e.g.", "UEs that are under blockage or too far away UE $I_\\text{main-dest}[t]$ ).", "It is worth pointing out that $R_{\\text{relay},u}[t]$ in (REF ) would be a delayed reward when $u \\ne I_\\text{main-rx}[t]$ ." ], [ "Selecting relay by Thompson sampling (TS)-based algorithm", "TS is well-known for its simple implementation and outstanding empirical performance [52] in solving MAB problems.", "The essence of TS is, at each time step, to choose an arm by sampling the predefined prior distributions of all arms' rewards.", "Then the reward provided by executing the chosen action is further used to update the posterior distributions of those priors.", "As this process repeats, the posteriors converge to the true distributions of the arms' rewards.", "We briefly introduce its adaptation in this paper by detailing the prior distribution, sampling rule, and posterior update.", "Please refer to our prior work [15] for more details.", "Prior distribution of arm reward: As there are $M+1$ MCS levels, the instantaneous data rate provided by arm $u$ , i.e., $R_{\\text{main}}[t]$ or $R_{\\text{d2d}}[t+1]$ , actually follows a one-trial multinomial distribution with support $\\left\\lbrace R_0,\\cdots ,R_M\\right\\rbrace $ and an unknown probability vector $\\mathbf {P}^\\text{relay}_u[t] = \\left[P^\\text{relay}_{0,u}[t],\\cdots ,P^\\text{relay}_{M,u}[t]\\right]^\\text{T}$ , where $P^\\text{relay}_{m,u}[t]$ is the probability of using MCS $m$ at time slot t when using arm $u$ .", "Therefore, the conjugate prior of a multinomial distribution, i.e.", "a Dirichlet distribution, is a perfect prior for estimating $\\left\\lbrace \\mathbf {P}^\\text{relay}_u[t]\\right\\rbrace _{u=1}^U$ .", "To fully characterize the true reward of arm, i.e.", "effective data rate $R_{\\text{relay},u}[t]$ , the effect coefficients in (REF ) should be considered.", "We will handle this in the posterior update.", "Sampling rule: At the beginning of each time slot, the controller samples the priors of the available arms (i.e.", "the UEs which are not in a D2D link) and chooses the arm that provides the largest predictive reward, as described by Line 6-9 in Algorithm REF .", "Posterior update: Note that $R_{\\text{relay},u}[t]$ is not a timely reward when a D2D link is to be activated.", "Accordingly, the posterior update would be processed whenever the reward is available.", "We take the case with timely reward for example, which is described by Line 18-22 in Algorithm REF : since $C^\\text{eff-main}[t]$ is revealed when time slot $t$ ends, we randomize 
the $R_{\\text{main},u}[t]$ by generating a Bernoulli random variable $X_\\text{relay}\\sim \\text{Bernoulli}\\left(\\tilde{C}^\\text{eff-main}[t]\\right)$ , and forcing $R_{\\text{relay},u}[t]=0$ when $X_\\text{relay}=0$ .", "This randomization incorporates the effect of $C^\\text{eff-main}[t]$ into $\\mathbf {P}^\\text{relay}_u[t]$ , hence helping characterize the true reward of an arm in form of a multinomial distribution.", "For the case with delayed reward, a similar process is performed as shown by Line 23-34 Algorithm REF ." ], [ "MAB-based codebook selection", "Similar to relay selection, the codebook selection could be regarded as a MAB problem with $K$ arms ($K$ codebooks at the AP).", "The reward of the $k$ -th codebook is $R_{\\text{cb},k}[t] = R_\\text{main}[t]C^\\text{eff-main}[t]~\\left(\\text{with}~ I_\\text{CB[t]}= k\\right)$ .", "It is worth pointing out that this is an optimally designed reward since the objective of codebook selection is to find out the optimal beamwidth for the main link, which is regardless of user scheduling and relay selection.", "A similar TS-based algorithm could be applied to solve the codebook selection problem.", "Two remarks on the application of TS in relay and codebook selection are provided as below: Remark 1 It has been shown in our prior work [15] that the TS-based algorithm used in Sec.", "REF and Sec.", "REF is asymptotically optimal in choosing the best arm when the rewards are IID across the time slots.", "This condition may hold for codebook selection but not for the relay selection.", "This is because the reward distributions of the relays are time-variant due to the learning process of the codebook selection and beam tracking.", "Nevertheless, the convergence of the learning process is confirmed by our later evaluations.", "Remark 2 We did not merge the $U$ -armed relay selection and the $K$ -armed codebook selection into a single one with $UK$ arms due to the fact that they have different reward functions." 
], [ "Heuristic beam tracking", "We finally derive a decision rule on whether to conduct beam tracking.", "It is designed based on the following observation: if the same UE is to be chosen by (REF ) for the consecutive time slot, it is definitely beneficial to activate the beam tracking mechanism.", "Accordingly, we empirically predict the sizes of queues as $\\hat{q}_u[t+1] = \\left\\lbrace q_u[t] - \\hat{d}_{I_{\\text{main-rx}}[t]}[t]{1}\\left\\lbrace u =I_{\\text{main-rx}}[t]\\right\\rbrace \\right\\rbrace ^+ + \\hat{Z}_u[t],$ where $\\hat{Z}_u[t] = \\frac{(t-1)\\hat{Z}_u[t-1]+ Z_u[t]}{t}$ is the empirical mean of the packet arrival rate of UE $u$ .", "Therefore, the UE to be served in time slot $t+1$ , denoted by $\\hat{I}_{\\text{main-rx}}[t+1]$ , can be predicted using (REF ) as $\\hat{I}_{\\text{main-rx}}[t+1] = \\arg \\max _{u\\in [U]^+} \\hat{q}_u[t+1]\\hat{d}_u[t].$ As a result, a heuristic decision rule on whether beam tracking is to be performed is given as $I_{\\text{track}}[t] =\\left\\lbrace \\begin{array}{lr}0, & I_{\\text{main-dest}}[t] \\ne I_{\\text{main-rx}}[t],\\\\{1}\\left\\lbrace \\hat{I}_{\\text{main-rx}}[t+1] = I_{\\text{main-rx}}[t] \\right\\rbrace , & \\text{otherwise}.\\end{array}\\right.$ By now, we have illustrated our proposed Empirical MAB-based controller and its whole process is summarized in Algorithm REF .", "In particular, we initialize the algorithm by exploring each feasible action $a[t]=\\left( I_{\\text{main-dest}}[t], I_{\\text{main-rx}}[t], I_{\\text{cb}}[t], I_\\text{track}[t]\\right)$ for at least $N_\\text{EMAB-Init}$ times to reach a good initial state of the priors and those empirical means.", "It is worth pointing out that this Empirical MAB-based controller is an approximation and combination of several classic algorithms which have provable performance guarantees.", "Providing a reward (or regret) analysis to this empirical combination is however challenging due to that the TS is applied twice sequentially and that the reward distribution of relay selection is time-variant due to the user scheduling, which are pointed out in the Remarks REF and REF .", "Hence a theoretical analysis of the proposal empirical algorithm is out of the scope of this work.", "We will confirm the feasibility of the proposed learning controller by numerical evaluation in Sec. 
.", "Empirical MAB-based controller (To be continued) [1] Input: Total time steps $T^{\\text{total}}$ ; number of UE $U$ , number of codebooks $K$ ; number of non-zero MCSs $M$ ; rate vector $\\mathbf {r}=\\left[r_0,r_1,\\hdots ,r_M\\right]^{\\text{T}}$ ; initial exploration number $N_\\text{EMAB-Init}$ .", "Initialize: $s[0]\\sim \\mathcal {D}_0$ ; $\\alpha ^\\text{relay}_{m,u,u} = 1, m\\in [M], u \\in [U]^+$ ; $\\alpha ^\\text{cb}_{m,k,u} = 1, m\\in [M],k \\in [K]^+, u\\in [U]^+$ ; $\\hat{\\mathbf {d}}[0] = \\mathbf {0}$ ; $\\hat{\\mathbf {q}}[0] = \\mathbf {0}$ ; $\\hat{\\mathbf {z}}[0] = \\mathbf {0}$ ; counters for empirical means.", "Initialize: Use each feasible action for at least $N_\\text{EMAB-Init}$ times and update priors, empirical means and counters accordingly.", "$t=N_\\text{EMAB-Init} |\\mathcal {A}|,\\hdots ,T^{\\text{total}}$ Decide $I_{\\text{main-dest}}[t]$ as in (REF ).", "Empirical maxweight UE scheduling $u\\in \\left\\lbrace i\\in [U]^+ \\big |B_i^\\text{d2d}[t]=0\\right\\rbrace $ Sample $\\mathbf {d}_u\\sim Dir\\left(\\alpha _{0,u,I_{\\text{main-dest}}[t]}^\\text{relay},\\hdots ,\\alpha _{M,u,I_{\\text{main-dest}}[t]}^\\text{relay}\\right)$ .", "$I_\\text{main-rx}[t] = \\arg \\max _{u\\in \\left\\lbrace i\\in [U]^+ \\big |B_i^\\text{d2d}[t]=0\\right\\rbrace } \\mathbf {r}^{\\text{T}} \\mathbf {d}_u$ .", "MAB-based relay selection $k\\in [K]^+$ Sample $\\mathbf {d}_k\\sim Dir\\left({\\alpha }_{0,k,I_{\\text{main-rx}}[t]}^\\text{cb},\\hdots ,{\\alpha }_{M,k,I_{\\text{main-rx}}[t]}^\\text{cb}\\right)$ .", "$I_\\text{cb}[t] = \\arg \\max _{k\\in [K]^+} \\mathbf {r}^{\\text{T}} \\mathbf {d}_k$ .", "MAB-based codebook selection Decide $I_{\\text{track}}[t]$ as in (REF ).", "Heuristic beam tracking Execute action $a[t] = \\left(I_{\\text{main-dest}}[t], I_{\\text{main-rx}}[t], I_{\\text{cb}}[t], I_\\text{track}[t]\\right)$ in system.", "Lookup the RSS-MCS table to get $R_\\text{main}[t]=R_{m_\\text{main}[t]}$ with $m_\\text{main}[t]\\in [M]$ .", "Observe the effective coefficient $C^\\text{eff-main}[t]$ at the end of the DT phase.", "$I_\\text{main-dest}[t]=I_\\text{main-rx}[t]$ Prior update with timely reward of relay selection Generate a Bernoulli random variable $X_\\text{relay}\\sim \\text{Bernoulli}\\left(\\tilde{C}^\\text{eff-main}[t]\\right)$ .", "$m_\\text{relay}[t] = m[t] X_\\text{relay}$ .", "Prior update: $\\alpha ^\\text{relay}_{m_\\text{relay}[t],I_{\\text{main-rx}}[t],I_{\\text{main-dest}}[t]}:=\\alpha ^\\text{relay}_{m_\\text{relay}[t],I_{\\text{main-rx}}[t],I_{\\text{main-dest}}[t]}+1$ .", "$\\mathbf {1}^\\text{T}\\mathbf {b}^\\text{d2d}[t]\\ne 0$ Prior update with delayed reward of relay selection Lookup the RSS-MCS table to get $R_\\text{d2d}[t]=R_{m_\\text{d2d}[t]}$ with $m_\\text{d2d}[t]\\in [M]$ .", "Observe the effective coefficient $C^\\text{eff-d2d}[t]$ at the end of the DT phase.", "$R_\\text{d2d}[t] < R_\\text{main}[t-1]$ Generate a Bernoulli random variable $X_\\text{relay}\\sim \\text{Bernoulli}\\left(\\frac{1}{2}C^\\text{eff-main}[t-1]\\right)$ .", "$m_\\text{Relay-D2D} = m_\\text{main}[t-1] X_\\text{relay}$ .", "part1 1 Empirical MAB-based controller (Continued) [1] part1 Generate a Bernoulli random variable $X_\\text{relay}\\sim \\text{Bernoulli}\\left(\\frac{1}{2}C^\\text{eff-d2d}[t]\\right)$ .", "$m_\\text{Relay-D2D} = m_\\text{d2d}[t] X_\\text{relay}$ .", "Prior update: $\\alpha ^\\text{relay}_{m_\\text{Relay-D2D},I_{\\text{d2d-tx}}[t],I_{\\text{d2d-rx}}[t]}:=\\alpha 
^\\text{relay}_{m_\\text{Relay-D2D},I_{\\text{d2d-tx}}[t],I_{\\text{d2d-rx}}[t]}+1$ .", "Generate a Bernoulli random variable $X_\\text{cb}\\sim \\text{Bernoulli}\\left(C^\\text{eff-main}[t]\\right)$ .", "Prior update with timely reward of codebook selection $m_\\text{cb}[t] = m[t] X_\\text{cb}$ .", "Prior update: $\\alpha ^\\text{cb}_{m_\\text{cb}[1],I_{\\text{cb}}[t],I_{\\text{main-rx}}[t]}:=\\alpha ^\\text{cb}_{m_\\text{cb}[t],I_{\\text{cb}}[t],I_{\\text{main-rx}}[t]}+1$ .", "Update empirical means $\\hat{\\mathbf {d}}[t+1]$ , $\\hat{\\mathbf {q}}[t+1]$ , $\\hat{\\mathbf {z}}[t+1]$ , and corresponding counters." ], [ "Evaluation results", "In this section, we provide the performance evaluation of the proposed two RL-based controllers.", "We first illustrate the system and training setups in Sec.", "REF and Sec.", "REF , respectively.", "Then we show the performance comparison in Sec.", "REF ." ], [ "System setup", "We simulate an outdoor mmWave system that operates at a center frequency $f_c=60$ GHz with a total bandwidth $B=2.16$ GHz [9].", "Since the outdoor mmWave channel is generally sparse and codebook-based beam alignment is performed, the LOS path could be well estimated and dominant if it exists.", "We only consider the RSS of a dominant path after the BA phase.", "The path loss of an existing LOS path can be modeled as [11] $PL^\\text{LOS} \\text{(dB)} = 28+22\\log _{10}(d)+20\\log _{10}(f_c)+\\mathcal {X},$ where $d$ is the distance between the transceivers and $\\mathcal {X}$ is the shadowing fading that follows the normal distribution $\\mathcal {N}(0,\\sigma ^2)$ .", "Generally, a NLOS path could suffer from more than 15 dB path loss (or even worse) than a LOS path loss [11].", "Accordingly, we assume an additional path loss $PL^\\text{block}$ would be added to $PL^\\text{LOS}$ when the LOS link does not exist, which yields: $PL^\\text{NLOS} \\text{(dB)} = PL^\\text{LOS} + PL^\\text{block}.$ In this work, we model $PL^\\text{block}$ as a random variable which is uniformly distributed between 10 to 30 dB.", "When the random variable $PL^\\text{block}$ has a relatively small value, it could be interpreted as the path loss of the NLOS link provided by the potential reflectors or scatterers.", "When $PL^\\text{block}$ has a relatively larger value, it can be viewed as an outage happens.", "As we can see from $(\\ref {eq:LOS_PL})$ and $(\\ref {eq:NLOS_PL})$ , the channel fluctuation in the simulation, collectively referred to as the path loss represented by $PL$ , is caused by the device mobility, link blockage, and shadowing fading.", "We denote $W$ as the link marginal budget and implementation loss, $P_\\text{T}$ is the transmitting power, which is set as $P_\\text{T}\\triangleq P_\\text{AP}=15$ dBm for the AP (main link) and $P_\\text{T}\\triangleq P_\\text{AP}=10$ dBm for UEs (D2D link).", "We use $G_\\text{T}~(\\text{or}~G_\\text{R})\\triangleq \\frac{16\\pi }{6.67b^\\text{azi}b^\\text{ele}}$  [53] to represent the antenna gain of a beam pattern whose azimuth width is $b^\\text{azi}$ and elevation width is $b^\\text{ele}$ .", "We recall that only $b^\\text{azi}$ is configurable via the codebook selection, i.e.", "$b^\\text{azi}(k)=\\frac{2\\pi }{N^\\text{beam}_k}$ .", "With the above notation, the RSS obtained by a link after beam alignment can be given as $RSS \\text{(dB)} = P_\\text{T} + G_\\text{T} + G_\\text{R} - PL -W.$ The associated SNR of the link could be further calculated as $SNR = RSS - P_\\text{N}$ , where $P_\\text{N}$ is the bandwidth-dependent noise power 
and $P_\\text{N} = -174 + 10\\log _{10}B + 10$ .", "For convenience, we summarize all relevant simulation parameters and their values in Table REF .", "Table: NO_CAPTIONBoth the actor and critic networks of the DRL-based controller have three hidden dense layers and each layer has 128 units.", "We use the Adam optimizer to update the NN parameters with a learning rate $\\ell _\\text{r} =0.001$ .", "In particular, $\\ell _\\text{r}$ decays every 20 updates with a decay coefficient 0.9.", "The clipping parameter $\\epsilon $ is 0.2 and the discounting factor $\\gamma $ is 0.999.", "The entropy loss coefficient $c_\\text{e}$ is 0.05.", "The data scaling coefficients are set as $x=2$ , $\\tilde{N}^\\text{block}=10$ .", "For the Empirical MAB-based controller, the initial exploration number $N_\\text{EMAB-Init}$ is set to 5." ], [ "Training setups", "We monitor the learning process of our proposed solutions for 240 iterations, where one iteration consists of 1500 slots, which corresponds to 15 seconds as $T^\\text{slot} = 10$ ms.", "The total monitored time steps $T^{\\text{total}}$ is 36,000 seconds, i.e one hour.", "Unless otherwise stated, the batch size $T$ is 5, namely that the NNs are updated every 5 time slots." ], [ "Description of scenario", "The network topology used for evaluation is shown in Fig.", "REF , where the initial position of the AP is set as the origin.", "The initial distances between the five UEs and the AP are $10, 10, 15, 25, 30$ meters, respectively, and the initial angles of the five UEs from the AP, with respect to the x-axis, are $5, 85, 45, 10, 80$ degrees, respectively.", "All UEs and the AP randomly move within their respective circular area of which the center is the initial position and the radius is 5 meters.", "The borders of the circular areas are shown for reference and indicated by the legend of Fig.", "REF .", "The total downlink traffic is 1 Gbps and its allocation among UEs is 1/7, 3/7, 1/7, 1/7, 1/7.", "For the blockage model $p_{u,n}^\\text{block}$ in Fig.", "REF , they are assigned valued with the following rules: $p_{u,1}^{\\text{B}}=0$ for $u\\in [U]^+$ ; $p_{u,n}^{\\text{B}}=0.0026$ for $u\\in \\left\\lbrace 1,2,4,5\\right\\rbrace $ and $2\\le n\\le N^\\text{block}$ ; $p_{u,n}^{\\text{B}}=0.1$ for $u\\in \\left\\lbrace 3\\right\\rbrace $ and $2\\le n\\le N^\\text{block}$ ; $p_{u,0}^{\\text{B}}=1-\\sum _{n=1}^{N^\\text{block}}p_{u,n}^{\\text{B}}$ for $u\\in [U]^+$ .", "With the above assignments and the model in Fig.", "REF , the expected ratio of time that the UEs are under blockage to the total time, which is defined as the probability of being under blockage is $0.05, 0.05, 0.8, 0.05, 0.05$ for the five UEs, respectively.", "Note that these values are calculated by the numerical simulation.", "Similarly, we use the model in Fig.", "REF to character the blockage between any two UEs, giving the probability of being under blockage for any D2D links 0.05." 
], [ "Discussion on simulation parameter setup", "Given the complexity of the studied system, evaluating all potential scenarios would be demanding since there are many possibilities of network topology and data traffic patterns as multiple UEs are considered.", "The simulation setup described above has been inspired by the following intuition: (1) Network topology: different large-scale distances between AP and UEs would result in different channel quality among UEs, which is one of the key factors that motivate the joint optimization of user scheduling and link configuration.", "(2) User mobility: a restricted moving area per UE guarantees that their channel conditions will not change dramatically during the whole evaluation process.", "This is because the RL algorithms are generally effective when the statistics of the environment are stationary.", "Simulating a completely random system, meaning that the statistics are changing much faster than the convergence speed of the developed RL algorithms, is unfair in evaluating the RL algorithmS.", "This is because once the environment has changed dramatically, the RL algorithms usually need to be reinitialized to recapture the system statistics.", "Therefore, we did not set all users randomly moving in the entire space, which is consistent with our system modeling.", "(3) Data traffic: the uneven packet arrival rates among UE aim at testing whether the proposed RL-based controllers could learn to use the UEs with low traffic demands as a relay for the other UEs with larger traffic demands.", "(4) Other parameters shown in Table REF are set with reasonable values according to literature and wireless standards.", "More simulation results on other scenarios could be found in our open-source code due to space limitations [54].", "Figure: Network topology" ], [ "Evolution of performance during whole monitored time", "In Fig.", "REF , we show the evolution of the performance of the learned DRL-based controller and Empirical MAB-based controllers versus the training iteration.", "We recall that one iteration consists of 1500 time slots.", "In particular, each data point in Fig.", "REF to Fig.", "REF is obtained by averaging the testing results of the learned controllers for 20 realizations and the each realization has 1500 time slots.", "From Fig.", "REF , several observations can be summarized as below: (1) Fig.", "REF presents the evolution of the overall data rate, which is bounded by the arrival rate $\\sum _{u=1}^U\\lambda _u S^\\text{pkg} =1$ Gbps.", "A stable policy should provide a data rate that equals the arrival rate, otherwise, the queue would explode.", "Fig.", "REF shows that the Empirical MAB-based controllers without relay selection all suffer from a data rate smaller than 1 Gbps, which yields exploding queue networks.", "This is because without using a relay, UE 3 is frequently under blockage and its direct link has very poor quality, which is not able to support the required data traffic.", "(2) Fig.", "REF shows that the proposed two controllers can learn good policies to select relays for the blocked UEs such that the average percentage of time that the UEs are suffering from the blockage loss in a main link is brought down to below 10%.", "In particular, the DRL-based controller makes this percentage even below 5%.", "(3) Fig.", "REF provides the performance of the average delay per packet.", "We can see that both controllers provide acceptable delay performance that the average delay is within 5 time slots (50 ms).", "In 
particular, the DRL-based controller brings the delay even below 30 ms. (4) Note that Fig.", "REF to Fig.", "REF only monitor one training process.", "In Fig.", "REF , we show the robustness to the stochasticity of training by averaging the results over 60 training processes.", "The area between the maximum and minimum values of the 60 training processes is shaded in Fig.", "REF .", "We can observe that, overall, it took around 60 iterations (15 minutes) for the DRL-based controller to learn a good policy and that its final performance is better than that of the Empirical MAB-based controller.", "The Empirical MAB-based controller only requires 10 iterations (2.5 minutes) to learn a good enough policy.", "Figure: Performance versus training iterations" ], [ "Testing performance of the learned controllers", "We now provide the testing performance of the proposed controllers (trained with 240 iterations) in Fig.", "REF .", "The result is an average of 200 realizations and each realization consists of 1500 time slots.", "The evolution of the queue length is given in Fig.", "REF and the cumulative distribution function (CDF) of the delay per packet is shown in Fig.", "REF .", "It can be seen that both learned controllers stabilize the queues well.", "The average delay provided by the DRL-based controller and the Empirical MAB-based controller is 25.6 ms and 44.0 ms, respectively.", "Combined with the results given in Fig.", "REF , we can conclude that the DRL-based controller provides better performance in terms of system delay but requires higher sample complexity (roughly 6 times higher), as around 60 iterations are required for the DRL-based controller while only 10 iterations are enough for the MAB-based controller.", "It is worth noting that the statistics of the system are the same for both the training and testing phases.", "If the statistics, most importantly the network topology of the users, change dramatically, the agent has to be retrained.", "For example, the optimal codebook depends on UE mobility and its distance to the AP.", "If the distance changes dramatically, e.g.", "from being close to being far from the AP, the optimal codebook could change from wide beams to narrow beams.", "One potential solution is to retrain the system with a previously learned agent as the initial point.", "Figure: Testing performance of trained controllers" ], [ "Conclusions", "In this work, we studied the joint user scheduling and link configuration in a multi-user mmWave network.", "We modeled this complex scheduling/design problem as a dynamic decision-making process and proposed two RL-based solutions.", "Our evaluation confirmed their viability and effectiveness and also showed that the DRL-based solution provides better system performance while the MAB-based solution results in a faster training process.", "One potential future direction is to exploit multiple mmWave array groups and OFDMA to further reduce the link latency by simultaneously communicating with multiple users." ] ]
2207.03526
[ [ "A Note on Stability of Event-Triggered Control Systems with Time Delays" ], [ "Abstract This note studies stability of nonlinear time-delayed control systems with the event-triggered strategy proposed in [1].", "In particular, by constructing a novel Halanay-type inequality, we show that the sufficient conditions proposed in the main results of [1] additionally ensure system stability, complementing the attractivity result that was obtained in [1].", "Hence, the event-triggered control systems in [1] are globally asymptotically stable.", "[1] K. Zhang, B. Gharesifard, and E. Braverman, Event-triggered control for nonlinear time-delay systems, IEEE Transactions on Automatic Control, vol.", "67, no.", "2, pp.", "1031-1037, 2022." ], [ "Introduction", "In [1], an event-triggered control strategy was proposed for stabilizing nonlinear time-delay systems.", "Three tunable parameters play a vital role in ensuring boundedness and attractivity of the system states while excluding Zeno behavior, a phenomenon that the control updates are triggered infinitely many times over a finite time period.", "However, stability criterion with the proposed event-triggered algorithm was not established in [1].", "In this note, we will show that event-triggered control systems in [1] actually are stable under the sufficient conditions of Theorem 2 (or Theorem 3) in [1].", "For the sake of brevity of this note, notations are inherited from [1]." ], [ "Ensuring stability", "In order to prove the mentioned stability result, we rely on a new Halanay-type inequality which we establish next." ], [ "A Halanay-type inequality", "For a continuous function $g:\\mathbb {R}\\rightarrow \\mathbb {R}$ , the Dini-derivatives $\\mathrm {D}^+g(t)$ and $\\mathrm {D}_-g(t)$ are defined as follows: $\\mathrm {D}^+g(t)=\\limsup _{\\epsilon \\rightarrow 0^+} \\frac{g(t+\\epsilon )-g(t)}{\\epsilon }$ and $\\mathrm {D}_-g(t)=\\liminf _{\\epsilon \\rightarrow 0^-} \\frac{g(t+\\epsilon )-g(t)}{\\epsilon }.$ The following lemma establishes a relationship between $\\textrm {D}^+g(t)$ and $\\textrm {D}_-g(t)$ , which will shortly be used to construct a Halanay-type inequality.", "Lemma 1 (Lemma 5(d) in [2]) Let $p$ and $q$ be continuous functions with $\\mathrm {D}^+p(t) \\le q(t)$ for $t$ in some open interval $\\mathcal {I} \\subset \\mathbb {R} $ .", "Then $\\mathrm {D}_-p(t)\\le q(t)$ for $t\\in \\mathcal {I}$ .", "We now introduce a new Halanay-type inequality.", "Lemma 2 Let $g:[t_0-r,t_0+\\Gamma )\\rightarrow \\mathbb {R}^+$ be a continuous function satisfying $\\mathrm {D}^+g(t)\\le \\gamma _1 g(t_0) +\\gamma _2 \\Vert g_t\\Vert _r \\textrm {~~~for~~~}t_0\\le t< t_0+\\Gamma ,$ where $r$ , $\\Gamma $ , $\\gamma _1$ , and $\\gamma _2$ are positive constants.", "Then $g(t)\\le \\Vert g_{t_0}\\Vert _r e^{\\lambda (t-t_0)} \\textrm {~~~for~~~}t_0\\le t< t_0+\\Gamma ,$ where $\\lambda =\\gamma _1+\\gamma _2$ .", "Define $w(t)=\\left\\lbrace \\begin{array}{ll}\\Vert g_{t_0}\\Vert _r e^{\\lambda (t-t_0)}, &\\textrm {~~if~~} t_0<t<t_0+\\Gamma \\cr \\Vert g_{t_0}\\Vert _r, &\\textrm {~~if~~} t_0-r\\le t\\le t_0\\end{array}\\right.$ and let $K>1$ be an arbitrary constant.", "Then, for $t\\in [t_0-r,t_0]$ , we have $g(t)\\le \\Vert g_{t_0}\\Vert _r = w(t) <K w(t),$ that is, $g(t)<Kw(t)$ for $t\\in [t_0-r,t_0]$ .", "Next, we use a contradiction argument to show that $g(t)<Kw(t)$ for $t\\in (t_0,t_0+\\Gamma )$ .", "Suppose there exists some $t\\in (t_0,t_0+\\Gamma )$ such that $g(t)\\ge Kw(t)$ , then we define $\\bar{t}=\\inf \\left\\lbrace 
t\\in (t_0,t_0+\\Gamma ): g(t)\\ge Kw(t)\\right\\rbrace .$ From the continuity of $g$ and $w$ , we have $g(t)<Kw(t) \\textrm {~~for~~} t_0<t<\\bar{t}$ and $g(\\bar{t})=Kw(\\bar{t}).$ By (REF ) and (REF ), we conclude that $\\frac{g(\\bar{t}+\\epsilon )-g(\\bar{t})}{\\epsilon }> \\frac{Kw(\\bar{t}+\\epsilon )-Kw(\\bar{t})}{\\epsilon }$ for $\\epsilon <0$ close to 0.", "Hence, $\\mathrm {D}_-g(\\bar{t})\\ge K\\dot{w}(\\bar{t}).$ On the other hand, by Lemma REF and (REF ), we have that $\\mathrm {D}_-g(\\bar{t}) &\\le \\gamma _1 g(t_0) +\\gamma _2 \\Vert g_{\\bar{t}}\\Vert _r \\cr &< \\gamma _1 Kw(t_0) +\\gamma _2 K\\Vert w_{\\bar{t}}\\Vert _r \\cr &< (\\gamma _1 + \\gamma _2)K w(\\bar{t}) \\cr &= \\lambda Kw(\\bar{t}) \\cr &=K\\dot{w}(\\bar{t}),$ where we have used (REF ), (REF ), (REF ), and the definition of $w$ in the last two inequalities.", "This is a contradiction to (REF ).", "Therefore, we conclude that $g(t)<Kw(t)$ for $t\\in (t_0,t_0+\\Gamma )$ .", "Since $K>1$ is arbitrary, we let $K\\rightarrow 1$ and then $g(t)\\le w(t)$ for $t\\in [t_0,t_0+\\Gamma )$ , that is, the proof is completed.", "Now we are ready to show stability of the event-triggered control system in [1]." ], [ "Proof of stability", "In what follows, we adopt the parameter definitions from [1] and use $(\\cdot )^*$ to indicate the corresponding equation number in [1].", "Furthermore, we assume that the sufficient conditions of Theorem 2 (or Theorem 3) in [1] hold with $a>0$ .", "From the system dynamics $(2)^*$ on $[t_i,t_{i+1})$ and the Lipschitz conditions on $f$ and $k$ , we have $\\mathrm {D}^+\\Vert x(t)\\Vert \\le \\Vert \\dot{x}(t)\\Vert &=\\Vert f(t,x_t,k(x(t_i)))\\Vert \\cr &\\le L_2 \\Vert x_t\\Vert _{\\tau } +L_3 \\Vert x(t_i)\\Vert $ which implies (REF ) holds on $[t_i,t_{i+1})$ with $r=\\tau $ , $g(t)=\\Vert x(t)\\Vert $ , $\\gamma _1=L_3$ , and $\\gamma _2=L_2$ .", "We then conclude from Lemma REF that $\\Vert x(t)\\Vert \\le \\Vert x_{t_i}\\Vert _{\\tau } e^{\\lambda (t-t_i)} \\textrm {~~~for~~~}t_i\\le t< t_{i+1} \\textrm {~and~} i\\in \\mathbb {Z}^+,$ where $\\lambda =L_2+L_3$ , and $\\mathbb {Z}^+$ denotes the set of non-negative integers.", "We next use mathematical induction to show that $\\Vert x(t)\\Vert \\le \\Vert \\varphi \\Vert _{\\tau } e^{\\lambda (t-t_0)}$ for all $t\\ge t_0$ .", "We can see from (REF ) that this statement holds for $t\\in [t_0,t_1)$ .", "Suppose $\\Vert x(t)\\Vert \\le \\Vert \\varphi \\Vert _{\\tau } e^{\\lambda (t-t_0)}$ holds for $t\\in [t_0,t_i)$ ; we will show this inequality holds for $t\\in [t_i,t_{i+1})$ .", "By (REF ), we have that $\\Vert x(t)\\Vert &\\le \\Vert x_{t_i}\\Vert _{\\tau } e^{\\lambda (t-t_i)} \\cr &= e^{\\lambda (t-t_i)} \\sup _{s\\in [-\\tau ,0]}\\Vert x(t_i+s)\\Vert \\cr &\\le e^{\\lambda (t-t_i)} \\Vert \\varphi \\Vert _{\\tau } e^{\\lambda (t_i-t_0)}\\cr &= \\Vert \\varphi \\Vert _{\\tau } e^{\\lambda (t-t_0)} \\textrm {~~~for~~~}t_i\\le t< t_{i+1},$ that is, $\\Vert x(t)\\Vert \\le \\Vert \\varphi \\Vert _{\\tau } e^{\\lambda (t-t_0)}$ for all $t\\in [t_0,t_{i+1})$ .", "By induction and sufficient conditions of Theorem 2 (or Theorem 3) in [1] on ruling out Zeno behavior, we conclude that $\\Vert x(t)\\Vert \\le \\Vert \\varphi \\Vert _{\\tau } e^{\\lambda (t-t_0)} \\textrm {~~~for~all~~}t\\ge t_0.$ Then (REF ) and $(17)^*$ imply $\\alpha _1(\\Vert x(t)\\Vert )\\le \\min \\left\\lbrace \\alpha _1\\left(\\Vert \\varphi \\Vert _{\\tau } e^{\\lambda (t-t_0)}\\right), M e^{-\\eta (t-t_0)} \\right\\rbrace \\textrm {~~~for~all~~}t\\ge t_0.$ Let 
$\\delta _1=\\inf \\left\\lbrace s\\ge 0: \\alpha _2(s)+\\alpha _3(s)\\ge \\bar{M}\\right\\rbrace .$ Then $\\Vert \\varphi \\Vert _{\\tau }<\\delta _1$ implies $M=\\alpha _2(\\Vert \\varphi (0)\\Vert )+\\alpha _3(\\Vert \\varphi \\Vert _{\\tau })+\\bar{M}<2\\bar{M}$ and $\\alpha _1(\\Vert x(t)\\Vert )\\le \\min \\left\\lbrace \\alpha _1\\left(\\Vert \\varphi \\Vert _{\\tau } e^{\\lambda (t-t_0)}\\right), 2\\bar{M} e^{-\\eta (t-t_0)} \\right\\rbrace $ for all $t\\ge t_0$ .", "Since $\\alpha _1$ is strictly increasing, we have that $\\Vert \\varphi \\Vert _{\\tau }<\\delta _2$ implies $\\alpha _1(\\Vert \\varphi \\Vert _{\\tau })< 2\\bar{M}$ where $\\delta _2=\\alpha _1^{-1}(2\\bar{M})$ .", "Next, we consider the initial function $\\varphi $ satisfying $\\Vert \\varphi \\Vert _{\\tau }<\\min \\lbrace \\delta _1,\\delta _2\\rbrace $ .", "Then, there exists a unique $\\hat{t}>t_0$ such that $\\alpha _1\\left(\\Vert \\varphi \\Vert _{\\tau }e^{\\lambda (\\hat{t}-t_0)}\\right)= 2\\bar{M} e^{-\\eta (\\hat{t}-t_0)},$ where we have used the facts that both $\\alpha _1\\left(\\Vert \\varphi \\Vert _{\\tau }e^{\\lambda (t-t_0)}\\right)$ and $2\\bar{M}e^{-\\eta (t-t_0)}$ are strictly monotonic in $t$ , and $\\alpha _1(\\Vert \\varphi \\Vert _{\\tau })< 2\\bar{M}$ .", "Furthermore, for any $\\varepsilon >0$ , there exists a $\\delta _3$ , depending on $\\varepsilon $ , such that $\\Vert \\varphi \\Vert _\\tau < \\delta _3$ implies $\\alpha _1\\left(\\Vert \\varphi \\Vert _{\\tau }e^{\\lambda (\\hat{t}-t_0)}\\right)= 2\\bar{M} e^{-\\eta (\\hat{t}-t_0)}<\\alpha _1(\\varepsilon ),$ that is, small enough $\\Vert \\varphi \\Vert _\\tau $ leads to large enough $\\hat{t}$ so that $2\\bar{M} e^{-\\eta (\\hat{t}-t_0)}< \\alpha _1(\\varepsilon )$ .", "Note that $\\bar{M}$ is independent of $\\Vert \\varphi \\Vert _{\\tau }$ .", "Now we can conclude from (REF ) that, for any $\\varepsilon >0$ , there exists a $\\delta =\\min \\lbrace \\delta _1,\\delta _2,\\delta _3\\rbrace $ such that $\\Vert \\varphi \\Vert _\\tau < \\delta $ implies $\\alpha _1(\\Vert x(t)\\Vert )\\le 2\\bar{M} e^{-\\eta (\\hat{t}-t_0)}< \\alpha _1(\\varepsilon )$ for $t\\ge t_0$ , that is, $\\Vert x(t)\\Vert < \\varepsilon $ .", "The stability proof is completed.", "As a corollary, if $a>0$ in the execution rule $(7)^*$ , and the sufficient conditions on exclusion of Zeno behavior in [1] hold, we can conclude from the conditions of Theorem 2 (or Theorem 3) in [1] that the closed-loop system $(5)^*$ is in fact globally asymptotically stable.", "We finish this note with two remarks.", "Remark 1 It should be mentioned that the above proof does not rely on the Lipschitz condition on $\\alpha _1^{-1}$ .", "Nevertheless, if $\\alpha ^{-1}_1$ is locally Lipschitz, then $\\hat{t}$ and $\\delta $ in the proof can be obtained explicitly.", "To be more specific, suppose $\\alpha ^{-1}_1$ is locally Lipschitz, and we can show stability as follows.", "Combining (REF ) and $(17)^*$ yields $\\Vert x(t)\\Vert \\le \\min \\left\\lbrace \\Vert \\varphi \\Vert _{\\tau } e^{\\lambda (t-t_0)}, L_1M e^{-\\eta (t-t_0)} \\right\\rbrace \\textrm {~~~for~all~~}t\\ge t_0,$ where $L_1$ is the Lipschitz constant of $\\alpha _1^{-1}$ on interval $[0,M]$ as defined in [1].", "Consider $\\Vert \\varphi \\Vert _{\\tau }<\\min \\lbrace \\delta _1,\\bar{\\delta }_2\\rbrace $ with $\\bar{\\delta }_2=\\alpha _1^{-1}(2L_1\\bar{M})$ , then $M<2\\bar{M}$ , $\\Vert \\varphi \\Vert _{\\tau }<2L_1\\bar{M}$ , and there exists a unique $\\hat{t}>t_0$ such that $\\Vert \\varphi \\Vert 
_{\\tau } e^{\\lambda (\\hat{t}-t_0)}= 2L_1\\bar{M} e^{-\\eta (\\hat{t}-t_0)},$ and then $\\hat{t}$ can be derived as $\\hat{t}=\\frac{\\ln \\left(\\frac{2L_1\\bar{M}}{\\Vert \\varphi \\Vert _{\\tau }} \\right)}{\\lambda +\\eta } +t_0.$ By (REF ) and the definition of $\\hat{t}$ , we have $\\Vert x(t)\\Vert &< 2L_1\\bar{M} e^{-\\eta (\\hat{t}-t_0)} \\cr &= 2L_1\\bar{M} \\exp \\left(\\frac{-\\eta }{\\lambda +\\eta } \\ln \\left( \\frac{2L_1\\bar{M}}{\\Vert \\varphi \\Vert _{\\tau }} \\right) \\right) \\cr & = \\Vert \\varphi \\Vert _{\\tau }^{\\frac{\\eta }{\\lambda +\\eta }} \\left(2L_1\\bar{M}\\right)^{\\frac{\\lambda }{\\lambda +\\eta }},$ for all $t\\ge t_0$ .", "For any $\\varepsilon >0$ , let $\\delta =\\min \\lbrace \\delta _1,\\bar{\\delta }_2,\\bar{\\delta }_3\\rbrace $ with $\\bar{\\delta }_3=\\varepsilon ^{(\\lambda +\\eta )/\\eta } (2L_1\\bar{M})^{-\\lambda /\\eta }$ .", "For $\\Vert \\varphi \\Vert _{\\tau }<\\delta $ , we can derive from (REF ) that $\\Vert x(t)\\Vert < \\Vert \\varphi \\Vert _{\\tau }^{\\frac{\\eta }{\\lambda +\\eta }} \\left(2L_1\\bar{M}\\right)^{\\frac{\\lambda }{\\lambda +\\eta }}< \\varepsilon $ for $t\\ge t_0$ , that is, the closed-loop system $(5)^*$ is stable.", "It can be seen that $\\delta $ is given explicitly since $\\delta _1$ , $\\bar{\\delta }_2$ , and $\\bar{\\delta }_3$ are specifically defined.", "Remark 2 The upper bound $\\alpha _1^{-1}(M e^{-\\eta (t-t_0)})$ of the state norm in (17)* guarantees attractivity of the closed-loop system.", "However, $M=\\alpha _2(\\Vert \\varphi (0)\\Vert )+\\alpha _3(\\Vert \\varphi \\Vert _{\\tau }) + \\bar{M}$ depends not only on the initial function $\\varphi $ but also on the parameter $a$ in $\\bar{M}$ .", "Since $\\bar{M}$ is independent of the initial function $\\varphi $ , the stability criterion of the event-triggered control system (2)* could not be derived from this upper bound solely.", "The role of Lemma REF is to provide another bound in (REF ) for $\\Vert x\\Vert $ .", "Combining these two bounds in (REF ) or (REF ) allows stability analysis for the closed-loop system.", "Lemma REF is different from the existing Halanay-type inequalities (see, e.g., [2], [3]) in the following sense.", "In the existing Halanay-type inequalities, the Dini derivative $\\mathrm {D}^+g(t)$ is bounded by the sum of a function of $g(t)$ and a function of $\\Vert g_t\\Vert _r$ , while in Lemma REF we bound $\\mathrm {D}^+g(t)$ by a linear combination of $g(t_0)$ and $\\Vert g_t\\Vert _r$ .", "This major difference in the dependence of $g$ on the initial time $t_0$ allows the estimation of the state bound over each interval $[t_i,t_{i+1})$ since the control input is unchanged during two consecutive event times." ] ]
2207.03566
[ [ "Fourier Versus Singular Value Decompositions of Nucleon Azimuthal Angle\n Distributions in Heavy-Ion Reactions Around $E_{\\rm beam}/{\\rm nucleon}=1$\n GeV" ], [ "Abstract Background: Coefficients of Fourier decompositions of particle azimuthal angle distributions are well-established messengers of the equation of state (EOS) and transport properties of dense matter formed in heavy-ion collisions from low to ultra-relativistic energies.", "Principal Component (PC) Analysis (PCA) via Singular Value Decomposition (SVD) of large datasets is an adaptive exploratory method to uncover natural patterns underlying the data.", "Purposes: We study (1) if the PCs of event-by-event nucleon azimuthal angle distributions in heavy-ion reactions around 1 GeV/nucleon are naturally since and/or cosine functions and (2) what if any advantages the PCA may have over the standard Fourier analysis for studying the EOS of dense matter.", "Method: We perform Fourier and SVD analyses for column-centered, non-centered and standardized (column-centered and scaled by the standard deviation of each column) nucleon azimuthal angle distribution matrices from simulating Au+Au collisions at $E_{\\rm beam}/A$=1.23 GeV using an isospin-dependent Boltzmann-Uehling-Uhlenbeck (IBUU) transport model.", "Results: We found that in none of the analyses the PCs come out naturally as sine and/or cosine functions.", "Conclusions: While the PCA creates new uncorrelated variables that successively maximize variances of the data matrices (the singular value continuously decreases as the number of used PCs increases), both the PC loadings and its singular values are appreciably EOS dependent.", "Since any azimuthal angle distribution can be periodically extended and then expanded as a Fourier series, and its coefficients contain all the information about the EOS, comparing the standard Fourier and SVD decompositions of nucleon azimuthal angle distributions in heavy-ion collisions around 1 GeV/nucleon, the former is advantageous at least for the purpose of investigating the EOS of dense matter formed in these reactions." 
], [ "Introduction and conclusions", "A central goal of heavy-ion reaction experiments over a broad beam energy range from the Fermi energy all the way to LHC energies is to investigate the equation of state (EOS) of dense matter formed in these reactions.", "In realizing this goal, comparisons of hydrodynamics and/or transport model predictions with the experimental data of various components and/or forms of nuclear collective flow have been found very fruitful [1], [2], [3].", "In particular, analyses of single-particle azimuthal angle $\\phi $ distribution $\\frac{dN}{d\\phi }$ with respect to the reaction plane have played an important role.", "Usually, a Fourier decomposition of the $\\frac{dN}{d\\phi }$ is performed according to $\\frac{2\\pi }{N}\\frac{dN}{d\\phi } = 1 + 2\\sum _{n=1}^{\\infty }v_n\\cos {[n(\\phi -\\Psi _{\\rm PP_n})]}$ where $\\Psi _{{\\rm PP}_n}$ is the experimentally estimated azimuthal angle of the $n^{th}$ harmonic participant plane.", "The latter is normally taken as zero in model simulations where the true reaction plane is known.", "The $v_n=<cos(n\\phi )>$ is the $n$ -th harmonic coefficient.", "In particular, $v_1$ is the strength of the so-called directed flow and $v_2$ is that of the elliptical flow.", "Since one can always do a periodic extension of the $\\frac{dN}{d\\phi }$ measured in some kinematic regions, the Fourier decomposition of $\\frac{dN}{d\\phi }$ has been the standard technique for analyzing the nuclear collective flow.", "While the sine and cosine functions constitutes mathematically a good basis for analyzing essentially any signals/observables, the question whether they are also naturally the most optimal basis according to the $\\frac{dN}{d\\phi }$ data itself was recently studied in Refs.", "[4], [5].", "Interestingly, singular value decompositions of the particle azimuthal angle distributions generated by using the VISH2+1 hydrodynamic [4], [6], [7] and AMPT transport model simulations [5], [8] of ultra-relativistic heavy-ion collisions at LHC energies indicate that the leading principle component loadings (PCA eigenvectors in terms of the original observables) are naturally very similar [4] or almost identical [5] to the first few traditional Fourier bases.", "Moreover, it was found that mode-coupling effects are reduced for the flow harmonics defined by the PCA, indicating one of its possible advantages[4].", "In this work, in comparison with the standard Fourier analysis we perform PCA via SVD analyses for the column-centered, non-centered and standardized $\\frac{dN}{d\\phi }$ data matrices generated for Au+Au collisions at $E_{\\rm beam}/A$ =1.23 GeV using the IBUU transport model [9], [10].", "We found that (1) in none of the analyses the PCs are naturally sine and/or cosine functions, (2) both the PC loadings and the corresponding singular values depend appreciably on the EOS used, (3) the singular value of the non-centered (raw data) matrix is overwhelmed by the first PC reflecting merely the mean value of the nucleon azimuthal angle distribution while the PCs from the column-centered analysis reflect simply the decompositions of the standard deviations with their singular values decrease slowly.", "For the purpose of investigating the EOS of dense matter formed in heavy-ion collisions around $E_{\\rm beam}/{\\rm nucleon}=1$ GeV, we conclude that the standard Fourier analysis is more useful.", "The rest of the paper is organized as follows.", "In the next section, within the IBUU transport model we first examine the EOS 
dependence of free proton azimuthal angle $\\phi $ distributions $\\frac{dN}{d\\phi }$ using typical soft and stiff EOSs without using the momentum dependence in the underlying single-nucleon potential.", "We then study in Section 3 both the integrated and differential transverse and elliptic flows by performing the standard Fourier analyses for mid-central Au+Au collisions at $E_{\\rm beam}/A$ =1.23 GeV.", "The same 1.5 million events in each case are then examined in Section 4 using PCA via SVD.", "Since in the PCA-SVD literature, different approaches have been widely used in pre-processing the raw data leading to PCs and the corresponding singular values having different meanings, we perform our PCA-SVD analyses using the column-centered, non-centered and standardized $\\frac{dN}{d\\phi }$ matrices.", "Results of these PCA-SVD analyses will be compared.", "Moreover, the relevant EOS information from the traditional Fourier analysis and PCA will be compared.", "Finally, we summarize." ], [ "IBUU transport model predictions for proton azimuthal angle distributions in mid-central Au+Au collisions at $E_{\\rm beam}/A$ =1.23 GeV", "In studying nuclear collective flow in heavy-ion reactions at intermediate-relativistic energies, Boltzmann-Uehling-Uhlenbeck (BUU)-like transport models [11], [12], [13], [14], [15] played a particularly important role for extracting useful information about the EOS of dense matter [16], [17], [18], [19].", "We use here an isospin-dependent BUU model [9], [10].", "Most of its details and many applications are reviewed in Ref.", "[20].", "For the purposes of this work, we use the simplest momentum independent isoscalar single-nucleon potential corresponding to an incompressibility of K=230 MeV (soft) and K=380 MeV (stiff), respectively.", "For the comparative studies here, this choice is sufficient and computationally efficient.", "While the more advanced potentials with momentum dependence for both the isoscalar and isovector single-nucleon potentials are more physical [21], the large number of reaction events necessary for the present study is unfortunately computationally prohibitive.", "We also only look at free protons identified as protons with local densities less than $1/8$ the saturation density $\\rho _0$ of nuclear matter in the final state of the reaction.", "We notice that the collective flow signatures in Au+Au collisions at $E_{\\rm beam}/A$ =1.23 GeV have been studied recently by the HADES Collaboration [22] using free protons and light clusters.", "To compare quantitatively with the HADES data would require us to use momentum-dependent single-nucleon potential and a coalescence model coupled to our transport model.", "Such a study is planned.", "The proton azimuthal angle distributions $\\frac{dN}{d\\phi }$ are calculated with respect to the true reaction plane $(x-o-z)$ of the simulations with the beam along the $z$ direction.", "For the reactions considered, because of the symmetry of the reaction system, it is sufficient to calculate the $\\frac{dN}{d\\phi }$ for $0\\le \\phi \\le \\pi $ with the $\\phi ={\\rm arccos}(p_x/p_t)$ where $p_t=(p_x^2+p_y^2)^{1/2}$ is the transverse momentum.", "Figure: Normalized azimuthal angle distributions of free protons in mid-central Au+Au collisions at E beam /AE_{\\rm beam}/A=1.23 GeV with a soft (K=230 MeV) and a stiff (K=380 MeV) EOS, respectively.", "Lower panel: mean values.", "Upper panel: standard deviations from1.5 million reaction events with each EOS.Figure: The probability distributions of the 
normalized angle φ\\phi distributions of free protons shown in Fig.", "in the three specified φ\\phi bins.Shown in Fig.", "REF are the normalized azimuthal angle distributions $F_n(\\phi _n)$ of free protons with $|y|\\le 0.5$ and $p_t\\ge 0.3$ GeV/c in mid-central (with impact parameters between 6 and 9 fm) Au+Au collisions at $E_{\\rm beam}/A$ =1.23 GeV with the soft and stiff EOS, respecrtively.", "Here the $F_n(\\phi _n)$ is defined as $F_n(\\phi _n)\\equiv \\Delta N(\\phi )/N$ where $\\Delta N(\\phi )$ is the number of free protons in the n-th $\\phi $ bin of size $\\pi /20$ while $N$ is the total number of free protons in the kinematic region considered.", "The mean values and standard deviations of $F_n(\\phi _n)$ from 1.5 million reaction events with each EOS are shown in the lower and upper panel, respectively.", "They all peak around $\\phi =\\pi /2$ indicating clearly an elliptical flow pattern.", "It is seen that the mean azimuth asymmetry is appreciably stronger with the stiff EOS as one expects.", "As we shall show in the next section, with the stiff EOS there are more free protons (larger stopping power) in the mid-rapidity region than the case with the soft EOS.", "Consequently, the $F_n(\\phi _n)$ of free protons at mid-rapidity has a smaller standard deviation with the stiff EOS than the soft one.", "As we shall discuss later, the separate evaluation of the means and standard deviations are useful for the PCA via SVD of the particle azimuthal angle distributions.", "For several purposes, it is also interesting to know the form of the event-by-event fluctuation of $F_n(\\phi _n)$ .", "Shown in Fig.", "REF are the probability distributions $P(F_n)$ in the three $\\phi $ bins around $\\phi =\\pi /2, \\pi /4$ and $3\\pi /4$ , respectively.", "The $P(F_n)$ values at $\\pi /4$ and $3\\pi /4$ are almost identical as one expects from the symmetry of the reaction and they are consistent with the results shown in Fig.", "REF .", "At these two azimuthal angles, the $P(F_n)$ has little dependence on the EOS used.", "However, at $\\phi =\\pi /2$ the stiff EOS shifts the peak of $P(F_n)$ slightly towards higher $F_n(\\phi _n)$ values compared to the soft EOS.", "This is also consistent with the results shown in Fig.", "REF .", "Interestingly, the $P(F_n)$ is definitely not Gaussian.", "As the Poisson distribution requires the mean to be the same as the square of the standard deviation, we have checked and found that the $P(F_n)$ is not a Poisson distribution either.", "As shown in Fig.", "REF , the $P(F_n)$ peaks around $F_n(\\phi _n)=0.05\\sim 0.1$ .", "Even for the heavy reaction system considered, this corresponds only to less than 10 protons in the most populous $\\phi $ bin in each event.", "The number of particles in each bin is too small to be considered a statistical system.", "It is thus not surprising that the $P(F_n)$ is not Gaussian as one would normally expect for a statistical system." 
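For concreteness, the construction of the binned, normalized distributions F_n(phi_n), their event-averaged means and standard deviations, and the data matrices that are decomposed in Section 4 can be sketched in a few lines of Python. The snippet below is a minimal illustration rather than the actual IBUU analysis code; the per-event angle arrays phi_events, already restricted to the kinematic cuts |y| <= 0.5 and p_t >= 0.3 GeV/c, are assumed inputs.

# Minimal sketch (Python/numpy), not the actual IBUU analysis code.
# phi_events: assumed list of 1D arrays, one per reaction event, holding the azimuthal
# angles (0 <= phi <= pi) of free protons that pass the kinematic cuts.
import numpy as np

m_bins = 20                                   # phi bins of width pi/20
edges = np.linspace(0.0, np.pi, m_bins + 1)

def normalized_distribution(phi):
    """F_n(phi_n) = Delta N(phi) / N for a single event."""
    counts, _ = np.histogram(phi, bins=edges)
    return counts / counts.sum()

# N x m data matrix: one row per event, one column per phi bin
M_f = np.array([normalized_distribution(phi) for phi in phi_events])

mean_Fn = M_f.mean(axis=0)   # event-averaged F_n (lower panel of the figure)
std_Fn = M_f.std(axis=0)     # event-by-event standard deviations (upper panel)

# The three data-matrix preparations analyzed in Section 4:
M_raw = M_f                              # non-centered (raw) matrix
M_cen = M_f - mean_Fn                    # column-centered matrix
M_std = (M_f - mean_Fn) / std_Fn         # standardized matrix

# SVD of any of these, e.g. the column-centered one: the rows of Z are the PC
# loadings and sigma are the singular values used in the later comparisons.
U, sigma, Z = np.linalg.svd(M_cen, full_matrices=False)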
], [ "Integrated and differential transverse and elliptical flow of free protons in mid-central Au+Au collisions at $E_{\\rm beam}/A$ =1.23 GeV", "The transverse flow (also called directed flow) has been studied most commonly by analyzing the average transverse momentum per nucleon in the reaction plane as a function of rapidity $y$ [1] $< p_x/A >(y)=\\frac{1}{A(y)}\\sum _{i=1}^{A(y)} p_{ix}(y)$ where $A(y)$ is the number of nucleons at rapidity $y$ and $p_{ix}(y)$ ($i=1$ to $A(y)$ ) are their transverse momenta in the $x-$ direction.", "It is also frequently referred as integrated (over $p_t$ ) transverse flow.", "In setting up the simulations, we use the convention that $<p_x>$ (and the corresponding $v_1$ ) is positive for forward (positive $y$ ) going particles in the center of mass (cms) frame of the two colliding nuclei.", "Figure: The rapidity distribution (upper) and average in-plane transverse momentum <p x /A><p_x/A> (lower) of free protons in mid-central Au+Au collisions at E beam /AE_{\\rm beam}/A=1.23 GeV with a soft (K=230 MeV) and a stiff (K=380 MeV) EOS, respectively.Shown in Fig.", "REF are the rapidity distribution (upper) and average in-plane transverse momentum $<p_x/A>$ (lower) of free protons in the mid-central Au+Au collisions at $E_{\\rm beam}/A$ =1.23 GeV with the soft and stiff EOS, respectively.", "The main features including the EOS dependence of these distributions are all well understood and are consistent with previous studies.", "They are presented here for completeness and comparisons with the PCA-SVD analysis for the purpose of extracting the EOS information.", "Normally one uses the slope of the $<p_x/A>$ at mid-rapidity and/or its magnitude around the target/projectile rapidity to characterize the strength of directed flow.", "It is seen that they both show significant EOS effects.", "The differential directed flow as a function of $y$ and $p_t$ is characterized by the strength of the first harmonics $v_1(y,p_t)=<cos(\\phi )>(y,p_t)=\\frac{1}{n}\\sum _{i=1}^{n}\\frac{p_{ix}}{p_{it}}$ while the differential elliptical flow is described by $v_2(y,p_{t})=<cos(2\\phi )>(y,p_t)=\\frac{1}{n}\\sum _{i=1}^{n}\\frac{p_{ix}^2-p_{iy}^2}{p_{it}^2}$ where $n(y,p_t)$ is the total number of particles with rapidity $y$ and transverse momentum $p_t$ .", "Compared to the integrated flow, the differential flows may help uncover more detailed information about the EOS of dense matter, see, e.g., Ref.", "[23].", "On the other hand, the $v_1(y,p_t)$ and $v_2(y,p_t)$ normally depend strongly on the rapidity $y$ and transverse momentum $p_t$ , requiring detailed analyses and the results normally have strong dependences on the acceptances of the detectors.", "Figure: The transverse momentum distributionsd 2 N/(p t dp t dy)d^2N/(p_tdp_tdy) of free protons in the two representative rapidity windows indicated for the mid-central Au+Au reactions.Before analyzing the $v_1(y,p_t)$ and $v_2(y,p_t)$ , it is instructive to first examine the transverse momentum distributions $d^2N/(p_tdp_tdy)$ in two representative rapidity windows.", "Shown in Fig.", "REF are the $d^2N/(p_tdp_tdy)$ of free protons in the Au+Au reactions in the rapidity range of $0.1\\le |y_{\\rm cms}|\\le 0.3$ and $0.6\\le |y_{\\rm cms}|\\le 1.0$ , respectively.", "Because of the symmetry of the reaction, as shown in Fig.REF , these two rapidity windows contain typical nucleons from the participants and target/projectile spectators.", "It is seen that around the target/projectile rapidity $(\\pm 0.74)$ in the range of 
$0.6\\le |y_{\\rm cms}|\\le 1.0$ where the $<p_x/A>$ is the strongest, the $d^2N/(p_tdp_tdy)$ peaks around $p_t=0.15$ GeV/c.", "While in the participant region in the rapidity range of $0.1\\le |y_{\\rm cms}|\\le 0.3$ , the $d^2N/(p_tdp_tdy)$ peaks at a much higher value around $p_t=0.15$ GeV/c.", "Moreover, in this rapidity range, the $d^2N/(p_tdp_tdy)$ shows a significantly stronger EOS effect.", "Interestingly, the EOS effects are actually most visible around and/or below the peaks of $d^2N/(p_tdp_tdy)$ instead of in its high-momentum tails.", "This information might be useful for designing experiments searching for EOS effects.", "Overall, the EOS information revealed from analyzing the $d^2N/(p_tdp_tdy)$ versus $p_t$ and the $<p_x/A>$ versus $y$ are consistent and complementary to each other.", "Figure: The differential directed flow v 1 (y,p t )v_1(y,p_t) values as functions of p t p_t in the specified rapidity ranges for the mid-central Au+Au reactions.We now turn to the differential transverse flow $v_1(y,p_t)$ .", "Shown in Fig.", "REF are the $v_1(y,p_t)$ values as functions of $p_t$ for the near mid-rapidity and target/projectile rapidity ranges.", "First, it is seen that there is a change in sign around $p_t\\approx 0.5$ GeV/c.", "The majority of free protons in the low $p_t$ region (thus also low energy $E=\\sqrt{m^2+p_t^2}{\\rm cosh(y)}$ ) have positive $p_x$ (thus also $v_1$ ), while the high energy protons have negative $p_x$ .", "The net sums of these particles lead to the negative $<p_x/A>$ in the two rapidity ranges considered.", "Considering the information from the $p_t$ distribution in Fig.", "REF and the average in-plane transverse momentum in Fig.REF , it is seen that it is the high-momentum nucleons dominate the $v_1(y,p_t)$ .", "In particular, around the target/projectile rapidity, this phenomenon is stronger.", "This is understandable.", "While there are only few high-$p_t$ free protons as shown in Fig.", "REF , the large $p_x$ values carried by these high-$p_t$ particles contribute more to the net $<p_x/A>$ compared to the contributions of a lot randomly moving low-$p_t$ particles.", "As to the effects of nuclear EOS, the integrated directed flow $<p_x/A>$ appears to be a better tool compared to the differential one $v_1(y,p_t)$ although they bare consistent EOS information.", "Figure: The differential elliptical flow v 2 (y,p t )v_2(y,p_t) values as functions of p t p_t in the specified rapidity ranges for the mid-central Au+Au reactions.Shown in Fig.", "REF are the differential elliptical flow $v_2(y,p_t)$ values as functions of $p_t$ in the two mid-rapidity bins for the mid-central Au+Au reactions.", "While the low-$p_t$ free protons are azimuthally isotropic, the ellipticity increases to about -9% (the squeeze-out perpendicular to the reaction plane dominates over the in-plane flow) at $p_t\\approx 0.6\\sim 0.8$ GeV/c.", "It then decreases and finally change sign at very high-$p_t$ (in-plane flow dominates).", "Compared to both the integrated and differential transverse flows studied above, it is interesting to see clearly that the elliptical flow $v_2(y,p_t)$ has the strongest sensitivity to the variation of nuclear EOS.", "Moreover, the sensitivity increases slightly when the rapidity range is further narrowed towards the mid-rapidity.", "Of course, because of the total energy-momentum conservation, the ellipticity peaks at different $p_t$ values in the two rapidity ranges considered." 
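The flow observables used above can likewise be written down compactly. The following sketch is illustrative only (not the IBUU analysis code): it assumes pooled numpy arrays px, py and y for free protons passing the selection cuts, and it combines the forward and backward hemispheres with the usual assumption that v1 is odd and v2 is even in rapidity.

# Minimal sketch (Python/numpy), not the actual IBUU analysis code.
# px, py: transverse momentum components (GeV/c); y: c.m. rapidity, for free protons
# pooled over all events after the selection cuts.
import numpy as np

def mean_px_per_nucleon(px, y, y_edges):
    """Integrated directed flow <p_x/A> as a function of rapidity."""
    idx = np.digitize(y, y_edges) - 1
    return np.array([px[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(len(y_edges) - 1)])

def differential_flows(px, py, y, pt_edges, y_min, y_max):
    """v1 = <cos(phi)> and v2 = <cos(2 phi)> versus p_t for y_min <= |y| <= y_max.
    Hemispheres are combined assuming v1 is odd and v2 is even in y (a convention)."""
    pt = np.hypot(px, py)
    in_y = (np.abs(y) >= y_min) & (np.abs(y) <= y_max)
    cos1 = np.sign(y) * px / pt                 # sign flip for backward-going protons
    cos2 = (px**2 - py**2) / pt**2
    idx = np.digitize(pt, pt_edges) - 1
    v1 = np.full(len(pt_edges) - 1, np.nan)
    v2 = np.full(len(pt_edges) - 1, np.nan)
    for i in range(len(pt_edges) - 1):
        sel = in_y & (idx == i)
        if np.any(sel):
            v1[i] = cos1[sel].mean()
            v2[i] = cos2[sel].mean()
    return v1, v2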
], [ "Singular value decompositions of proton azimuthal angle distributions in mid-central Au+Au collisions at $E_{\\rm beam}/A$ =1.23 GeV", "The PCA has been widely used in many fields of sciences and engineering.", "It reduces the dimensionality of large datasets by creating new uncorrelated variables that successively maximize variance.", "Since the new variables are defined by the dataset itself instead of a priori PCA is an adaptive data analysis tool [24].", "There are many textbooks and articles about the fundamentals and applications of PCA in the literature.", "Here we adopt the terminologies from the recent review [24] on PCA via SVD using column-centered, non-centered (raw data) and standardized data matrices.", "For discussions on the relationship and advantages/disadvantages of these three ways of preparing the data matrices we refer the readers to Ref.", "[25].", "In applying the PCA via SVD to the azimuthal angle distributions of particles in heavy-ion collisions, we follow the approach used in Refs.", "[4], [5].", "According to the PCA via SVD formalism [24], any arbitrary matrix $\\mathbf {Y}$ of dimension $n\\times p$ can be decomposed with three matrices according to $\\quad \\quad \\quad \\quad \\mathbf {Y}=\\mathbf {{U}{L}{A}^T}$ where $\\mathbf {{U}}$ and $\\mathbf {{A}}$ are $n\\times r$ and $p\\times r$ ($r\\le {\\rm min(n,p)}$ ) matrices with orthonormal columns while $\\mathbf {{L}}$ is a $r\\times r$ diagonal matrix with decreasing singular values ${\\sigma }_j$ (j=1 to r).", "The covariance matrix $\\mathbf {S}$ of the data is given by $\\quad \\quad \\quad \\quad (n-1)\\mathbf {S}=\\mathbf {{Y}^T{Y}}=\\mathbf {{A}{L}^2{A}^T}.$ The columns of $\\mathbf {{A}}$ are the eigenvectors of $\\mathbf {{Y}^T{Y}}$ (thus also $\\mathbf {S}$ ), while those of $\\mathbf {{U}}$ are the eigenvectors of $\\mathbf {{Y}{Y}^T}$ and $\\mathbf {L}^2$ is a diagonal matrix with the squared singular values (the eigenvalues of $(n-1)\\mathbf {S}$ ).", "Adopting the approach and notations used in Refs.", "[4], [5] in applying the PCA-SVD to azimuthal angle distributions of particles from ultra-relativistic heavy-ion collisions, we can sort particles into $m$ -bins (columns) in azimuthal angle $\\phi $ for $N$ number of reaction events (raws).", "The elements $m_{i,j}$ of the resulting data matrix $\\mathbf {M_f}$ of dimension $N\\times m$ are the number of particles in the $i$ -th event (raw) and $j$ -th $\\phi $ bin (column) with $i$ from 1 to $N$ and $j$ from 1 to $m$ .", "Applying the SVD to $\\mathbf {M_f}$ , one can write [4] $\\quad \\quad \\quad \\quad \\mathbf {M}_f=\\mathbf {{X}{\\Sigma }{Z}}=\\mathbf {{V}{Z}}.$ We notice that $\\mathbf {Z}=\\mathbf {{A}^T}$ , thus the raws (columns) of the $\\mathbf {Z}$ $(\\mathbf {{A}})$ matrix contains the loadings (coefficients) of the principle components (PCs) in terms of the original variables.", "The azimuthal angle distribution in the $i$ -th event $dN/d\\phi ^{(i)}$ can be expressed by the linear combination of the eigenvectors $z_j$ (the $j_{th}$ row of matrix $\\mathbf {Z}$ ) with $j=1,2,... 
,m $ as [4] $dN/d\\phi ^{(i)}=\\sum _{j=1}^m {x}_j^{(i)}{\\sigma }_j {z}_j=\\sum _{j=1}^m \\tilde{v}_j^{(i)} {z}_j.$ If the singular value $\\sigma _j$ decreases quickly with $j$ , normally the first few PCs will be sufficient to account for most of the covariance of the $\\mathbf {S}$ matrix.", "In this case, the above summation can be truncated at $k$ significantly less than $m$ .", "The $\\tilde{v}_j^{(i)}$ can be further averaged over all events to evaluate on average how each PC contributes to the event averaged azimuthal angle distribution $dN/d\\phi $ (We shall use the phrase “Event averaged PC coefficients\" in presenting the event averaged $\\tilde{v}_j^{(i)}$ in the following).", "In the literature, different ways have been used in preparing the data matrix $\\mathbf {M_f}$ [24].", "For the azimuthal angle distribution, it is first normalized by the total number of particles $N$ detected in the kinematic region considered to obtain the normalized raw data $dN/d\\phi /N$ .", "We refer the corresponding data matrix $\\mathbf {M}_f$ as the raw uncentered data matrix.", "Its elements are $m_{i,j}$ in index notation.", "If one subtract from $m_{i,j}$ its event averaged value $<m_j>$ in each column ($\\phi $ bin), namely $m_{i,j}-<m_j>$ , the resulting data matrix is the so-called column-centered matrix.", "Furthermore, if one divides the $m_{i,j}-<m_j>$ with its standard deviation $\\delta _j$ in each $\\phi $ bin, one obtains the standardized data matrix with elements $[m_{i,j}-<m_j>]/\\delta _j$ .", "Using the means and standard deviations of $dN/d\\phi /N$ in each $\\phi $ bin shown in Fig.", "REF from 1.5 million events in each case we constructed the above three kinds of data matrices.", "The advantages and disadvantages of using the three matrices for the PCA-SVD analyses as well as the interpretations of the resulting PCs were discussed using several examples in Refs.", "[24], [25].", "We also notice that in the literature there are debates on whether the Gaussian distribution of the dataset is required or not [26], [27].", "According to Ref.", "[24], the PCA as a descriptive tool needs no distributional assumption.", "Indeed, the PCA has been used on various data types.", "As we have shown earlier, the event-by-event fluctuation of free protons in each $\\phi $ bin is not Gaussian.", "Thus, our results presented below have to be understood in the context and conditions given above.", "Figure: Loadings of the first 4 principle components with the soft and stiff EOS obtained using the non-centered (raw) data matrix of the free proton azimuthal angle distribution in mid-central Au+Au collisions at E beam /AE_{\\rm beam}/A=1.23 GeV.Figure: Same as Fig.", "but for the singular values (lower) and event averaged PC coefficients (upper).In the following, we compare the first few PC loadings, the singular values and the event averaged PC coefficients obtained by using the three ways of preparing the data matrices.", "We notice that since the eigenvectors can be multiplied by a minus sign without changing any physical content, only the pattern and relative signs of the PC loadings are relevant.", "As our central goal is to extract reliable information about the EOS of dense matter formed in heavy-ion reactions, we focus on comparing effects of the EOS revealed from the PCA-SVD analyses with those from the Fourier analyses presented earlier.", "Shown in Fig.", "REF are loadings of the first 4 principle components with the soft and stiff EOS obtained by using the non-centered data matrix 
of the free proton azimuthal angle distribution in mid-central Au+Au collisions at $E_{\\rm beam}/A$ =1.23 GeV.", "It is seen that the PC1 (up to a minus sign) resembles the mean value of $dN/d\\phi /N$ shown in Fig.", "REF .", "The PC2, PC3 and PC4, on the other hand, are fluctuations around $\\phi =\\pi /2$ with increasing frequencies of oscillation.", "The EOS effect is appreciable in PC1 and PC4.", "The corresponding singular values and event averaged PC coefficients are shown in the lower and upper panel of Fig.", "REF , respectively.", "It is seen that the singular value is overwhelmed by the contribution from PC1, while the higher order PCs contribute successively less.", "Consequently, only the coefficient of PC1 (in expressing the event averaged azimuthal angle distribution $dN/d\\phi $ in terms of the PCs) is relevant.", "This is understandable.", "For the non-centered (raw) data, the first PC is naturally dominated by the mean.", "These features are qualitatively consistent with those found in analyzing other non-centered data [25].", "Most strikingly, none of the PCs are naturally sine and/or cosine functions.", "Figure: Same as Fig.", "but from using the column-centered data matrix.", "Figure: Same as Fig.", "but from using the column-centered data matrix.", "Shown in Fig.", "REF and Fig.", "REF are the PC loadings as well as the corresponding singular values and the event averaged PC coefficients using the column-centered data matrices.", "The PC loadings are basically oscillations around $\\phi =\\pi /2$ with increasingly high frequencies.", "Some PCs show a clear EOS dependence, but this dependence is not easy to interpret.", "The singular value decreases gradually and shows a clear EOS dependence.", "Interestingly, the coefficients of the first two PCs also show a clear EOS dependence.", "We notice again that none of the PCs are naturally sine and/or cosine functions.", "Figure: Same as Fig.", "but from using the standardized data matrix.", "Figure: Same as Fig.", "but from using the standardized data matrix.", "Finally, the results obtained from using the standardized data matrices are shown in Fig.", "REF and Fig.", "REF .", "Since in this way of preparing the data, the deviation $m_{i,j}-<m_j>$ of each event is scaled by the standard deviation $\\delta _j$ in each $\\phi $ bin, the data just measure the scaled relative deviation event-by-event.", "All characteristics (loadings, singular values and the PC coefficients) show little dependence on the EOS.", "The singular value does not drop as quickly as necessary to reduce all the relevant information to the first few PCs, indicating that the PCA is ineffective.", "Moreover, the PC loadings are essentially flat, indicating little collectivity (correlations) among particles in different $\\phi $ bins.", "Obviously, the PCs are not naturally sine and/or cosine functions.", "In fact, in this case we do not see any physical reason to expect the PCs to behave as harmonic functions.", "Overall, in the SVD-PCA analyses the effects of the EOS on the azimuthal distribution function $dN/d\\phi $ are split among the PC loadings, singular values and PC coefficients, instead of being contained only in the Fourier coefficients.", "Consequently, the Fourier analysis is more useful for extracting reliable information about the EOS of dense matter formed in heavy-ion collisions, although none of the SVD-PCA analyses carried out here can prove that the harmonic functions naturally constitute the optimal basis for analyzing the particle azimuthal angle distribution
in heavy-ion collisions around $E_{\\rm beam}/A$ =1 GeV.", "Nevertheless, it remains an interesting question why the two previous SVD-PCA analyses at ultra-relativistic energies have drown a very different conclusion [4], [5]." ], [ "Summary", "Using IBUU transport model generated events for mid-central Au+Au collisions at $E_{\\rm beam}/A$ =1.23 GeV, we compared the standard Fourier and SVD-PCA analyses of the azimuthal angle distributions of free protons for the purpose of extracting information about the EOS of dense matter formed in the reaction.", "We also examined whether the SVD-PCA analyses can prove that the harmonic functions constitutes naturally the most optimal basis for flow analyses using the three different ways of preparing the data matrices.", "We found a negative answer to the last question.", "Moreover, because the EOS effects on the azimuthal distribution function $dN/d\\phi $ are being shared by the PC loadings, singular values and event averaged coefficients, they all show weaker dependence on the EOS while in the Fourier analyses all EOS information is carried by the harmonic coefficients.", "In particular, the strength of elliptical flow has the strongest sensitivity to the varying EOS.", "We conclude that the Fourier analysis is more useful for extracting reliable information about the EOS of dense matter formed in heavy-ion collisions." ], [ "Acknowledgement", "This work is supported in part by the U.S. Department of Energy, Office of Science, under Award No.", "DE-SC0013702, the CUSTIPEN (China- U.S.", "Theory Institute for Physics with Exotic Nuclei) under US Department of Energy Grant No.", "DE-SC0009971.", "We would like to thank Xian-Gai Deng, Yu-Gang Ma, Wen-Jie Xie and Kai Zhou for helpful discussions on machine learning techniques.", "P. Danielewicz and G. Odyniec, Phys.", "Lett.", "B157, 146 (1985).", "J.-Y.", "Ollitrault, Nucl.", "Phys.", "A638, 195c (1998) and references therein.", "A. Poskanzer and S.A. Voloshin, Phys.", "Rev.", "C55, 1671 (1998).", "Z. Liu, W. Zhao and H. Song, Eur.", "Phys.", "J.", "C 79, 870 (2019).", "I. Altsybeev, Phys.", "Part.", "Nucl.", "51, 314 (2020).", "H. Song, U. W. Heinz, Phys.", "Lett.", "B658, 279 (2008).", "C. Shen, Z. Qiu, H. Song, J. Bernhard, S. Bass, U. Heinz, Comput.", "Phys.", "Commun.", "199, 61 (2016).", "Zi-Wei Lin, Che Ming Ko, Bao-An Li, Bin Zhang, Subrata Pal, Phys.", "Rev.", "C 72, 064901 (2005).", "B.A.", "Li, W. Bauer and G.F. Bertsch, Phys.", "Rev.", "C44, 2095 (1991).", "B.", "A. Li, C. B. Das, S. Das Gupta and C. Gale, Phys.", "Rev.", "C 69, 011603 (2004); ibid, Nucl.", "Phys.", "A 735, 563 (2004).", "H. Stöcker and W. Greiner, Phys.", "Rep. 137, 277 (1986).", "G.F. Bertsch and S. Das Gupta, Phys.", "Rep. 160, 189 (1988).", "W. Cassing, V. Metag, U. Mosel and K. Niita, Phys.", "Rep. 188, 363 (1990).", "J. Xu, Prog.", "Part.", "Nucl.", "Phys.", "106, 312 (2019).", "M. Colonna, Prog.", "Part.", "Nucl.", "Phys.", "113, 103775 (2020).", "S. Das Gupta and G.D. Westfall, Physics Today, 46(5), 34 (1993).", "W. Reisdorf and H.G.", "Ritter, Ann.", "Rev.", "Nucl.", "Part.", "Sci.", "47, 663 (1997).", "S.A. Basset al, Prog.", "Part.", "Nucl.", "Phys.", "41, 255 (1998).", "P. Danielewicz, R. Lacey, W. G. Lynch, Science 298, 1592 (2002).", "B.", "A. Li, L. W. Chen and C. M. Ko, Phys.", "Rep. 464, 113 (2008).", "B.", "A. Li and L. W. Chen, Phys.", "Rev.", "C 72, 064611 (2005).", "J. Adamczewski-Musch et al (HADES Collaboration), Phys.", "Rev.", "Lett.", "125, 262301 (2020).", "B.A.", "Li and A. T. 
Sustich, Phys. Rev. Lett. 82, 5004 (1999).", "I.T. Jolliffe and J. Cadima, Phil. Trans. R. Soc. A 374, 20150202 (2016).", "J. Cadima and I.T. Jolliffe, Pak. J. Statist. 25, 473 (2009).", "J. Shlens, A Tutorial on Principal Component Analysis: Derivation, Discussion and Singular Value Decomposition (2003), https://cis.temple.edu/~latecki/Courses/AI-Fall10/Lectures/PCA-Tutorial-Intuition.pdf", "J. Shlens, A Tutorial on Principal Component Analysis, arXiv:1404.1100 (2014)." ] ]
2207.03563
[ [ "Metal Mixing in the R-Process Enhanced Ultra-Faint Dwarf Galaxy\n Reticulum II" ], [ "Abstract The ultra-faint dwarf galaxy Reticulum~II was enriched by a single rare and prolific r-process event.", "The r-process content of Reticulum~II thus provides a unique opportunity to study metal mixing in a relic first galaxy.", "Using multi-object high-resolution spectroscopy with VLT/GIRAFFE and Magellan/M2FS, we identify 32 clear spectroscopic member stars and measure abundances of Mg, Ca, Fe, and Ba where possible.", "We find $72^{+10}_{-12}$% of the stars are r-process-enhanced, with a mean $\\left\\langle\\mbox{[Ba/H]}\\right\\rangle=-1.68~\\pm~0.07$ and unresolved intrinsic dispersion $\\sigma_{\\rm [Ba/H]} < 0.20$.", "The homogeneous r-process abundances imply that Ret~II's metals are well-mixed by the time the r-enhanced stars form, which simulations have shown requires at least 100 Myr of metal mixing in between bursts of star formation to homogenize.", "This is the first direct evidence of bursty star formation in an ultra-faint dwarf galaxy.", "The homogeneous dilution prefers a prompt and high-yield r-process site, such as collapsar disk winds or prompt neutron star mergers.", "We also find evidence from [Ba/H] and [Mg/Ca] that the r-enhanced stars in Ret~II formed in the absence of substantial pristine gas accretion, perhaps indicating that ${\\approx}70$% of Ret~II stars formed after reionization." ], [ "Introduction", "Ultra-faint dwarf galaxies (UFDs) are Milky Way satellite galaxies with luminosities $M_V > -7.7$ (stellar masses ${\\lesssim }10^5 M_\\odot $ , [170]).", "UFDs appear to form all their stars in the first 1-2 billion years, before their star formation is cut off by reionization [15], [30].", "UFDs probe the extreme low-mass end of galaxy formation, where star formation is inefficient and massive stars form stochastically, resulting in intermittent feedback and incomplete sampling of nucleosynthetic sources [108], [107], [171], [62], [60], [91], [94].", "UFDs are also relics of early galaxy formation, providing a unique window into the first stars and galaxies in a pre-reionization universe, as well as a clean probe of the first metal-free Population III stars [27], [167], [58], [90].", "Since many halo stars with $\\mbox{[Fe/H]} < -2.5$ likely form in UFD-like environments, even if they later grow into or accrete into larger systems [29], it is crucial to understand the star formation conditions for UFDs to interpret the most metal-poor stars.", "To understand these early properties, the red giant branch stars in UFDs have been the subject of intense spectroscopic study.", "The last 15 years have resulted in high-resolution spectra of ${\\gtrsim }$ 100 stars across ${\\sim }$ 20 UFDs with detailed elemental abundances (see [60], [170], [94], [95] for a description of the basic characteristics and chemical evolution trends).", "Reticulum II (Ret II) is a UFD discovered in the Dark Energy Survey [10], [110], located only 32 kpc away.", "Initial followup spectroscopy showed that its velocity dispersion, mean metallicity, and metallicity dispersion were consistent with typical UFDs ([172], [111], [192], henceforth Simon15,Koposov15b,Walker15).", "Subsequent high-resolution spectroscopy surprisingly showed that most Ret II stars displayed some of the highest $r$ -process enhancements known ([91], [92], [159], henceforth Ji16c,Roederer16b).", "By comparing to other UFDs (which display unusually low neutron-capture element abundances, [60], [94]), the clear conclusion 
is that Ret II experienced enrichment from a single rare and prolific $r$ -process event.", "The source of the $r$ -process elements is still debated, as it could be consistent with $r$ -process nucleosynthesis in a prompt neutron star merger or rare core-collapse supernova [91], [13], [165], [163], [143], [169], [182], [138], [85], [46].", "The single $r$ -process event in Ret II provides a unique opportunity to probe metal mixing in a UFD.", "Because all the $r$ -process elements (including barium and europium) were deposited in a single enrichment event, the distribution of $r$ /H ratios in Ret II stars depends only on the overall amount of enriched gas and the and homogeneity of metal mixing into the gas.", "This contrasts with elements synthesized by more common sources like supernovae or asymptotic giant branch (AGB) stars, since the frequency of element production interacts with metal mixing to produce the distribution of stellar abundances [114], [52], [53].", "A direct constraint on metal mixing by measuring the [$r$ /H] distribution could play a major role in interpreting UFD abundances and formation histories [58], [90], [195], [182].", "We thus present a detailed spectroscopic study of Ret II chemical abundances obtained with multi-object spectroscopy using VLT/FLAMES and Magellan/M2FS.", "We find 32 clear member stars and 8 more candidates, the most spectroscopically confirmed members to date in Ret II.", "About half the stars have Ba and Fe constraints, while a third have Mg and/or Ca measurements as well.", "Our primary focus is measuring the distribution of [Ba/H] in these stars, which is a tracer of $r$ -process enrichment in Ret II due to the high $r$ -process enhancement and negligible s-process contribution in Ret II [91].", "Since this is the largest spectroscopic sample of members yet, we also explore more general kinematics, binarity, chemical evolution, and spatial gradients.", "Section  presents the spectroscopic observations and data reduction.", "Section  describes the velocity and chemical abundance analysis methods, as well as membership determination including auxiliary information from the Dark Energy Survey (DES, [49]) and Gaia EDR3 [66], [67], [120].", "Section  gives our results for the Ret II radial velocity distribution, chemical abundance trends, Fe and Ba distributions, and radial gradients.", "Section  discusses the implications of our measurements on metal mixing in dwarf galaxies, the origin of the $r$ -process elements, and chemical evolution in Ret II.", "We summarize and conclude in Section .", "Multi-epoch velocities are provided in Appendix .", "A major systematic for our Ba results is microturbulence, which is discussed extensively in Appendices  and ." ], [ "Observations and Data Reduction", "We observed Reticulum II with VLT/FLAMES in October 2017 [146], and with Magellan/M2FS in September 2016 at medium-resolution and November 2017 at high-resolution [132].", "Table REF contains details about which stars were observed at which settings.", "Note that all signal-to-noise ratios (SNR) quoted in this paper refer to the SNR per pixel." 
], [ "VLT/FLAMES, GIRAFFE", "The FLAMES/GIRAFFE setup on the VLT UT2 provides high-resolution spectra of ${\\sim }100$ stars over a field of view of diameter 0.4 degrees.", "Observations were taken in visitor mode on 26-27 October 2017 with excellent weather.", "We used the HR14A setting, covering one order from 6300$-$ 6500Å with $R\\sim 18000$ .", "Targets were selected based on our own photometry of public DES Y1 images following [110].", "We chose targets near the fiducial CMD within the single field we targeted.", "The total FLAMES exposure time was 11.8h, with most exposures being 3000s but a few exposures of 2400s and 3600s at the end of the night.", "Data were reduced with the standard ESO pipeline, which provides flat-corrected and wavelength calibrated 1D flux and error spectra.", "The 1D spectra are extracted to a common rebinned dispersion without cosmic ray rejection or sky subtraction.", "For each object, we removed cosmic rays in 1D by normalizing individual exposures by their median flux, then masking pixels with ${>}5\\sigma $ deviations from the combined median spectrum.", "Care was taken not to mask pixels associated with variable sky lines.", "Sky subtraction was performed in 1D mostly following [8].", "For each exposure, we constructed a master sky spectrum from ${\\sim }15$ sky fibers using an inverse-variance weighted mean.", "The master sky flux was split into two components, a sky emission line and a continuum component.", "The line component was used to identify wavelength bins associated with emission lines.", "Then for each object spectrum, we also split the flux into emission lines and continuum, rescaled the master sky line flux to match the object emission line flux by minimizing the L1 norm (total absolute deviation at wavelengths associated with sky emission lines), applied the same scaling factor to the sky continuum, and subtracted the rescaled master sky from the object spectrum.", "Visual inspection of the sky-subtracted spectra suggests this procedure was generally effective, with no correction to the line spread function or wavelength recalibration needed.", "Still, there are sometimes sky subtraction residuals from spatially variable sky lines, which does impact our Ba line of interest (see Section REF ).", "Final coadded spectra were obtained using an inverse-variance weighted average of individual exposures." 
], [ "Magellan/M2FS HiRes", "We obtained high-resolution spectra of Ret II stars with M2FS on 16-17 November 2017.", "We used the HiRes mode with $180{\\mu }$ m slits, providing $R \\sim 18000$ .", "The detectors were binned 2x2 with 4 amplifier slow readout.", "Two different blocking filters were used to observe the targets (one on each M2FS channel), based on a visual examination of the VLT spectra.", "For fainter targets with unclear Ba detections or upper limits, we used the BulgeGC1 filter, which includes 24 fibers covering 6 orders from $6100-6700$ Å, including the Ba line at 6496.7Å.", "The Ba line at 6141Å is on the blue end of the filter cutoff and cannot be used.", "For brighter targets that already had clear Ba detections in the VLT data or upper limits, we instead used the MgWide filter, which includes 28 targets covering 4 orders from $5150-5400$ Å.", "6 sky fibers were allocated for each arm.", "The total exposure time was 14h.", "The data were reduced with a custom pipelinehttps://github.com/alexji/m2fs_reduction.", "Each of 4 amplifier images was bias subtracted using the overscan and stitched into one image, then had dark current subtracted.", "Every science frame was associated with a single arc and flat obtained closest in time to the science frame.", "The object trace was fit to each flat using a 5th order Legendre polynomial.", "Scattered light was subtracted from every flat and science frame by fitting the inter-object regions with a 2D Legendre polynomial of degree 5 in either direction.", "Twilight flats were used to determine throughput corrections for each fiber.", "The wavelength calibration was motivated by [100] and adapted for fiber spectroscopy.", "An initial feature identification was done once by hand, extracting all orders of each fiber and using the IRAF identify command to manually identify positions of $50-70$ arc lines in each order of each fiber in the X (wavelength) direction on the CCD [184], [185].", "These identifications were then turned back into 2D coordinates using the trace functions.", "The actual wavelength calibration was performed in 2D, finding sources in each arc frame using Source Extractor [21], and matching the detected sources to identified lines using a KD tree2D wavelength calibration was necessary for the BulgeGC1 filter because the ThArNe arcs taken for this setting were extremely saturated, introducing many spurious features in 1D extracted arcs.. 
We then fit a 5th order Legendre polynomial for the wavelength solution, iteratively rejecting outliers, and using lines from all object fibers to fit the overall distortion.", "A single X and Y pixel offset is allowed for each fiber (but not orders within a fiber) to account for any movement of the fibers in the pseudo-slit.", "In total, $40-50$ lines were identified and used in each order for the MgWide filter, and $10-30$ lines were identified and used for the BulgeGC1 filter (fewer due to the saturated arcs).", "The final wavelength solution has a typical RMS ${<}0.01$ Å in both arms.", "Data were then extracted using flat-relative optimal extraction [205], which we found performed better than fitting a functional form to the object profile.", "To perform sky subtraction, we linearly rebinned the extracted spectra onto a uniform wavelength grid, then followed essentially the same sky subtraction procedure as the VLT data.", "The main differences were that the M2FS data has multiple orders, so sky subtraction was done independently for each order; and since the MgWide filter has few sky lines, we did not rescale the master sky spectrum to match line strengths, instead just directly subtracting the throughput-corrected master sky.", "There were clear differences in the line spread function for different fibers, resulting in residuals around sky lines.", "We thus rejected data around sky lines.", "Different exposures were coadded order-by-order with an inverse-variance weighted average.", "Coadded orders were then continuum-normalized separately in smhrhttps://github.com/andycasey/smhr, originally described in [35] and expanded in [96] before being stitched into a single spectrum." ], [ "Magellan/M2FS MedRes", "We conducted two sets of medium resolution observations of Ret II stars using M2FS on 2016 September 6 and 10, totaling 6.72 hr of integration time.", "We used the MedRes grating on the `R' spectrograph, 95 $\\mu $ m slits, 2x2 binning, and the MedRes_Ba(23) filter, which transmits one order with $4450 \\le \\lambda \\le 4615$  Å.", "This setup yields $R \\sim 9,000$ , as measured from individual Ar or Th emission lines in the comparison lamp spectra.", "We performed data reduction, extraction, wavelength calibration, sky subtraction, co-addition, and continuum normalization following the procedures described in [157], modified for use with MedRes spectra.", "We extracted the two sets of Ret II MedRes spectra (one set from each night) separately.", "We measured radial velocities of each star in each observation by cross-correlating (using the IRAF fxcor task) its spectrum against a synthetic metal-poor template spectrum smoothed to the same spectral resolution.", "Repeat observations of probable Ret II members show a standard deviation of 1.7 $\\mathrm {\\,km\\,s^{-1}}$ , which we regard as the uncertainty of an individual measurement.", "We also observed two comparison stars, [CD -24 1782]CD $-$ 24$^{\\circ }$ 1782, and [BPS CS 31082-001]CS 31082–001 using the same M2FS MedRes setup.", "Their radial velocities, after applying Heliocentric corrections computed using the IRAF rvcorrect task, agree with published values [158], [76] to better than 1.7 $\\mathrm {\\,km\\,s^{-1}}$ .", "which we regard as the systematic uncertainty of our measurements.", "Combining the individual and systematic uncertainty, we adopt a total velocity uncertainty of 2.4 $\\mathrm {\\,km\\,s^{-1}}$ for the MedRes velocities.", "However, when later comparing repeat velocity measurements of Ret II stars, we 
found a systematic offset of ${\\approx }$ 10 $\\mathrm {\\,km\\,s^{-1}}$ .", "We thus decided not to use the M2FS MedRes velocities in this work, though we report their values in Appendix ." ], [ "Comment on Sky Subtraction and Ba Lines", "Our primary Ba line at 6496.7Å is very close to a strong sky line at 6498.7Å, and our Ba abundances are potentially susceptible to sky subtraction residuals.", "For the VLT data with SNR $> 25$ , we were able to obtain good simultaneous fits to the Ba lines and the sky line residuals.", "We decided stars with SNR $< 25$ were too strongly impacted by sky subtraction to have reliable Ba measurements, especially since a small error can make a big asymmetric difference in the abundances for saturated lines (Section REF ).", "For the M2FS data, the signal-to-noise was lower and the heliocentric correction moved the Ba line right into the sky line, so none of the Ba 6496.7Å line measurements from M2FS were reliable.", "There was also a 6141Å Ba line located on the filter cutoff, but after investigation we decided it was adversely affected by scattered light subtraction systematics and thus not sufficiently reliable for abundance measurements.", "rllccrrrrrccrcrr 16 Observations ID RA Dec $g_0$ $r_0$ $r_e/r_{\\rm h}$ $v_{\\rm hel}$ $\\sigma _v$ $\\mu _\\alpha \\cos \\delta $ $\\mu _\\delta $ Mem Bin SNR/px HiRes SNR/px SNR/px (h:m:s) (d:m:s) (mag) (mag) ($\\mathrm {\\,km\\,s^{-1}}$ ) ($\\mathrm {\\,km\\,s^{-1}}$ ) (mas/yr) (mas/yr) (VLT) Mode (HiRes) (MedRes) 1 03:35:23.8 -54:04:07.66 16.45 15.65 0.60 $+65.3$ 0.2 $ 2.43$ $-1.39$ M N 140 94 2 03:36:07.7 -54:02:35.58 17.43 16.81 0.57 $+62.5$ 3.0 $ 2.39$ $-1.33$ M Y 73 MgWide 39 58 3 03:34:47.9 -54:05:25.03 17.49 16.87 1.49 $+62.0$ 0.4 $ 2.36$ $-1.39$ M N 94 MgWide 34 57 4 03:35:31.1 -54:01:48.24 17.61 17.02 0.79 $+58.5$ 0.3 $ 2.25$ $-1.35$ M N 69 MgWide 37 68 5 03:35:48.0 -54:03:49.84 18.27 17.69 0.39 $+61.7$ 0.3 $ 2.28$ $-1.29$ M N 76 MgWide 19 40 6 03:35:37.1 -54:04:01.25 18.57 18.03 0.37 $+64.3$ 0.4 $ 2.50$ $-1.60$ M N 65 MgWide 24 25 7 03:35:56.3 -54:03:16.29 18.86 18.38 0.39 $+63.2$ 0.4 $ 2.28$ $-1.34$ M N 50 MgWide 14 26 8 03:34:57.6 -54:05:31.42 18.94 18.40 1.25 $+60.2$ 0.4 $ 2.34$ $-1.28$ M N 52 MgWide 15 29 9 03:34:54.2 -54:05:58.05 18.93 18.42 1.35 $+69.2$ 0.4 $ 2.40$ $-1.36$ M N 52 MgWide 17 33 10 03:35:21.0 -54:03:48.16 18.93 18.43 0.68 $+64.5$ 4.6 $ 2.24$ $-1.20$ M Y 53 MgWide 15 11 03:35:02.5 -54:03:54.27 19.23 18.73 1.20 $+67.0$ 0.9 $ 2.68$ $-1.40$ M N BulgeGC1 12 30 12 03:35:58.1 -54:02:04.78 19.30 18.81 0.27 $+64.6$ 0.5 $ 2.35$ $-1.13$ M N 39 MgWide 15 29 13 03:35:11.7 -54:03:21.81 19.31 18.83 1.00 $+67.0$ 3.4 $ 2.71$ $-1.31$ M Y 42 MgWide 10 14 03:34:39.7 -54:07:54.37 19.37 18.84 1.82 $+62.3$ 0.7 $ 2.54$ $-1.51$ C N 40 BulgeGC1 10 15 03:36:01.8 -54:04:05.49 19.57 19.07 0.81 $+62.6$ 0.6 $ 2.18$ $-1.21$ M N 34 BulgeGC1 12 16 03:35:50.1 -54:01:39.24 19.72 19.29 0.39 $+64.6$ 0.9 $ 2.47$ $-1.63$ C N 30 MgWide 10 17 03:35:13.7 -54:04:56.72 19.67 19.20 0.87 $+60.2$ 0.7 $ 2.52$ $-1.30$ M N 32 BulgeGC1 12 27 18 03:35:17.0 -54:04:03.05 19.68 19.20 0.77 $+65.5$ 8.0 $ 2.52$ $-1.08$ M Y 21 BulgeGC1 12 19 03:35:15.2 -54:08:43.03 19.69 19.21 1.81 $+67.8$ 11.1 $ 2.39$ $-1.07$ M Y 31 BulgeGC1 12 22 20 03:35:14.0 -54:05:58.19 19.98 19.53 1.01 $+63.6$ 0.7 $ 2.76$ $-1.60$ M N 27 BulgeGC1 5 21 03:36:35.8 -54:01:20.21 20.19 19.73 1.23 $+61.0$ 0.8 $ 2.32$ $-1.37$ M N 16 BulgeGC1 7 22 03:36:21.9 -54:00:40.68 20.29 19.85 0.86 $+65.6$ 0.7 $ 2.02$ $-2.11$ M N 15 BulgeGC1 5 23 03:35:24.0 -54:02:26.69 20.28 19.85 0.82 
$+61.6$ 0.7 $ 2.70$ $-1.78$ M N 19 BulgeGC1 6 24 03:35:02.9 -54:01:09.84 20.38 19.94 1.81 $+63.6$ 0.8 $ 3.11$ $-1.14$ M N 19 BulgeGC1 4 25 03:35:44.2 -54:01:50.03 20.41 19.93 0.43 $+62.0$ 0.7 $ 2.22$ $-2.30$ M N 20 BulgeGC1 6 26 03:35:35.4 -54:02:54.88 20.72 20.38 0.36 $+61.5$ 0.8 $ 2.83$ $-1.63$ M N 13 BulgeGC1 4 97 03:36:27.7 -53:58:26.31 17.13 16.27 1.34 $+67.5$ 0.7 $ 2.34$ $-1.29$ C N 125 99 03:35:14.5 -54:02:33.16 20.39 19.92 1.08 $+67.2$ 0.8 $ 2.50$ $-0.90$ M N 19 BulgeGC1 7 100 03:36:18.7 -53:57:45.14 18.04 18.20 1.53 $+61.0$ 0.8 $ 2.46$ $-1.48$ M N MgWide 25 102 03:36:12.7 -53:56:02.25 18.05 18.23 2.16 $+66.8$ 0.7 $ 2.19$ $-1.21$ M N MgWide 28 134 03:35:12.5 -54:00:59.16 20.82 20.47 1.58 $+62.2$ 1.0 $ 3.01$ $-1.90$ M N 13 MgWide 2 142 03:35:38.7 -54:04:55.59 20.89 20.42 0.67 $+61.0$ 0.7 $ 2.36$ $-1.46$ C N 14 MgWide 3 143 03:35:39.5 -54:00:23.48 20.93 20.62 1.07 $+58.9$ 0.9 $ 3.59$ $-2.82$ C N 10 MgWide 3 144 03:35:39.4 -54:03:57.31 20.93 20.67 0.35 $+65.0$ 1.1 $ 2.06$ $-1.47$ M N 11 MgWide 2 151 03:35:47.5 -54:03:25.05 20.59 20.36 0.23 $+68.0$ 1.0 $ 2.68$ $-0.51$ C N 14 154 03:36:05.4 -54:02:06.36 20.41 20.21 0.44 $+61.9$ 1.0 $ 2.70$ $-1.46$ C N 13 157 03:36:11.6 -54:00:34.45 20.64 20.23 0.71 $+67.9$ 0.9 $ 1.94$ $-2.10$ M N 12 BulgeGC1 5 188 03:34:59.3 -53:58:23.98 20.90 20.59 2.79 $+68.4$ 1.2 $ 2.05$ $-2.94$ C N 7 192 03:35:16.9 -54:05:22.59 20.69 20.29 0.87 $+69.1$ 0.9 $ 1.89$ $-1.77$ M N 17 BulgeGC1 4 195 03:35:16.5 -54:04:36.12 20.60 20.21 0.78 $+68.1$ 0.8 $ 2.14$ $-2.22$ M N 14 BulgeGC1 4 Positions and photometry from DES.", "Half-light radius assuming [141] structural parameters.", "Heliocentric radial velocities and uncertainties are the average systematics-corrected uncertainties described in Section REF and Appendix .", "Proper motions are from Gaia EDR3.", "The HiRes and MedRes columns refer to M2FS observations described in this paper.", "Only the member and candidate member stars are shown here.", "The full table is available in electronic form in the online journal." ], [ "Analysis", "We assigned each star a numerical ID.", "IDs $1-26$ were red giant branch stars identified as members or candidate members by [172] (a superset of [192], [111]), sorted by magnitude.", "IDs $97-200$ were assigned arbitrarily to other observed stars." ], [ "Photometry and Astrometry", "For all observed stars, we queried the DES DR1 catalog for $griz$ magnitudes using NOAO datalab, which were dereddened using the DES DR1 reddening coefficients [49].", "The exception is star ID 1, which is saturated in the DES data release but had photometry determined from an individual DECam exposure in [172].", "The subscript 0 (e.g., $g_0$ ) indicates dereddened magnitudes.", "RA and Dec were taken from the DES catalog (differing by ${\\sim }$ 01 compared to Gaia).", "We also computed the elliptical distance $r_e$ for each star, assuming Ret II is centered at 03:35:47.83 $-$ 54:02:47.8 with a position angle of 68$^\\circ $ , ellipticity of 0.6, and half-light radius 6.3 arcmin [141].", "We adopted a distance modulus of $17.5$ whenever needed [141].", "Proper motions $\\mu _\\alpha \\cos \\delta $ and $\\mu _\\delta $ were obtained by cross-matching with Gaia EDR3 [120].", "Our spectroscopic target selection did not use proper motions, as it was performed before Gaia DR2 was released." 
], [ "Radial Velocities", "Radial velocities were measured with weighted cross-correlation against a high-resolution template spectrum of HD 122563 obtained with the MIKE spectrograph and shifted to rest-frame [20].", "Each spectrum was first normalized with a 3rd order polynomial.", "Then, for a range of velocities, we shifted the template spectrum by that velocity and calculated the $\\chi ^2$ of pixels within a specific wavelength interval.", "For the VLT data and the M2FS BulgeGC1 data, we used H$\\alpha $ to measure velocities, cross-correlating from $6550-6575$ Å.", "For the M2FS MgWide data, we used the Mg triplet, cross-correlating from $5150-5200$ Å. Velocities were then found as the minimum of the $\\chi ^2$ contour, and 1 $\\sigma $ statistical velocity errors were determined by $\\Delta \\chi ^2 = 1$ [127].", "For the M2FS data, we used just one of the orders for velocity measurement: order 54 for BulgeGC1 and order 69 for MgWide.", "We did not attempt to determine very detailed radial velocities for the MedRes M2FS data, given its lower resolution.", "We determined systematic velocity uncertainties from repeated velocity measurements.", "For both the VLT and M2FS run, every star was observed with 7-15 individual exposures over two nights.", "We thus measured the velocity and statistical uncertainty for each individual exposure.", "Then, following [118], we took all pairs of repeated velocity measurements with $\\sigma _v < 30$ $\\mathrm {\\,km\\,s^{-1}}$ and fit the following Gaussian-plus-outlier model: $\\begin{split}v_i - v_j \\sim &f \\mathcal {N}\\left(0, \\sqrt{F(\\sigma _{v,i})^2 + F(\\sigma _{v,j})^2}\\right) \\\\ + &(1-f) \\mathcal {N}(0, \\sigma _{\\text{outlier}})\\end{split}$ where $v_i$ , $v_j$ , $\\sigma _{v,i}$ , and $\\sigma _{v,j}$ are the individual velocities and uncertainties; $f$ is the fraction of pairs that are good; $\\sigma _{\\text{outlier}}$ is a large value characterizing a broad background outlier model; and $F(\\sigma _v) = \\sqrt{\\sigma _{v,\\text{floor}}^2 + (k \\times \\sigma _v)^2}$ is a rescaling of the velocity uncertainty with a scale factor $k$ and a systematic floor $\\sigma _{v,\\text{floor}}$ .", "The VLT data had $f=0.98$ , $\\sigma _{v, \\text{floor}} = 0.69$ $\\mathrm {\\,km\\,s^{-1}}$ , and $k=1.02$ .", "The M2FS MgWide data had $f=0.66$ , $\\sigma _{v, \\text{floor}} = 0.74$ $\\mathrm {\\,km\\,s^{-1}}$ , and $k=1.64$ .", "The M2FS BulgeGC1 data had $f=0.93$ , $\\sigma _{v, \\text{floor}} = 3.20\\mathrm {\\,km\\,s^{-1}} $ , and $k=0.11$ .", "In this case, having $k < 1$ suggests that the individual velocities are dominated by systematic errors, which is primarily due to the low SNR of individual exposures causing poor continuum fits and template matches.", "We thus conservatively take $k=1$ for the BulgeGC1 data.", "These values of $k$ and $\\sigma _{v, \\text{floor}}$ are applied to generate our final velocity uncertainties.", "Figure REF shows the individual velocity measurements for all observed stars.", "The left panel shows the 26 likely members from previous observations, while the right panel shows other observed stars.", "The red bars on the lower axis indicate stars that are clear members, while orange bars indicate possible members.", "For comparison, in the left panel we also plot velocities derived by Simon15, Koposov15b, Ji16c, and Roederer16b.", "Since many literature velocities were measured using the same spectra, duplicate velocity measurements from Walker15 and Simon15 were removed from this figure.", "For the 40 
members and candidate members (determined in Section REF ), we determined velocities by averaging our VLT and M2FS velocities with the literature velocities from Simon15, Koposov15b, Ji16c, and Roederer16b, for a total of up to 6 independent velocity measurements.", "We combine all available literature velocities with an inverse-variance weighted mean.", "Details are given in Appendix , but this includes estimating a systematic velocity offset for each data sample using the Simon15 velocities as a reference, and identifying binary star candidates using a chi-squared test.", "Five candidate binary stars are identified in Table REF and as grey bars on the top axis of Fig REF .", "These binary star velocities were also found with a weighted mean, but the uncertainty indicates the range of velocities.", "Binary candidates are excluded from the velocity and velocity dispersion calculations.", "Our focus in this paper is on chemical abundances, so this assessment of binarity and velocity systematics is incomplete, and a more comprehensive future study is warranted.", "Figure: Individual radial velocity measurements for all stars from this study and the literature.", "No systematic velocity corrections have been applied in this figure.The left panel shows red giant branch stars previously marked as confirmed or candidate members, while the right panel shows other stars.", "Along the bottom axis, red bars indicate stars that are certain members, while orange lines indicate candidate members.", "Along the top axis, grey lines indicate binary stars." ], [ "Membership", "We used four criteria to establish membership within our selected targets: radial velocities in the range $53\\,\\mathrm {\\,km\\,s^{-1}} < v_{\\rm r, hel} < 75\\,\\mathrm {\\,km\\,s^{-1}} $ ; proper motion within 2 units of Mahalanobis distance from the Ret II mean proper motionFor two vectors $x$ and $y$ with covariance matrix $\\Sigma $ , $d_{\\text{Mahalanobis}} = \\sqrt{(x-y)^\\text{T} \\Sigma ^{-1} (x-y)}$ .", "; position within 2 elliptical half light radii of the Ret II centerWe used the structural parameters from [141], but the membership is the same using the structural parameters from [140].", "; and $g_0-r_0$ color between 13 Gyr, $\\alpha $ -enhanced Dartmouth isochrones [50] with $\\mbox{[Fe/H]}=-2.5$ and $\\mbox{[Fe/H]}=-1.5$ , allowing a 0.03 mag buffer on each side of the isochrone given an expected reddening uncertainty.", "Since we used hard cuts, edge cases near these boundaries were inspected individually.", "The results are shown in Figure REF .", "70 of the 129 total spectroscopic targets were rejected as being outside of our radial velocity limits (small blue points).", "Of the remaining 59 stars, 18 stars had consistent radial velocities but were rejected based on clearly discrepant spatial location, CMD position, or proper motion (cyan crosses).", "One additional star (ID 191) was also rejected as having [Fe/H] $\\sim -0.5$ from our later analysis, too high to be part of Ret II.", "This leaves 40 stars, of which 32 stars were confident members, clearly matching at least three of the four criteria (large red points).", "In the rest of this paper, we refer to these stars as “clear members.” The remaining 8 stars were highlighted as uncertain members (small orange squares), which we refer to as “candidate members” in the rest of this paper.", "Stars 188 and 143 are CMD members but at somewhat large distance and proper motion away from the galaxy core to be considered very confident members.", "Stars 14, 97, and 
142 are redward of the isochrones, requiring high metallicities $\mbox{[Fe/H]}\gtrsim -1.5$ (or perhaps large carbon bands) to be considered part of Ret II.", "Our subsequent analysis finds none of these stars has such a high metallicity.", "Star 16 is blueward of the isochrone and has an unusually shallow H$\alpha $ line shape suggesting it is a hotter star.", "Finally, stars 151 and 154 have consistent kinematics and would be CMD non-members, but they are located where evolved blue stragglers or an unusually young stellar population might be found, as indicated by a 10 Gyr isochrone in the top-left panel of Figure REF .", "In other dwarf spheroidal galaxies, blue stragglers tend to appear even younger ($2.5 \pm 0.5$  Gyr, [168]) so it is possible these stars could indicate a younger population of stars in Ret II.", "However, the spectra of these potential member stars are too low in SNR to reliably measure any abundances.", "Stars 100 and 102 are known BHB member stars [172].", "Other than rejecting the clear non-member star ID 191, we have avoided using metallicity information in the membership selection so as to remain unbiased in final abundance distributions.", "Note that several Ret II members have slightly lower $\mu _\alpha $ and $\mu _\delta $ than most of the galaxy (at $(\mu _\alpha , \mu _\delta ) = (+2.0, -2.0)$ compared to $(+2.4, -1.3)$ ).", "These 5 stars were also offset in the same direction in Gaia DR2.", "However, they have larger proper motion uncertainties individually consistent with the bulk of the galaxy, and they do not stand out in radial velocity.", "Future Gaia data releases can test whether these represent a true feature.", "Figure: Diagnostic plots for membership determination.", "Red circles indicate clear Ret II members, orange squares are candidate Ret II members (with star ID labeled), cyan crosses are non-member stars with radial velocities similar to Ret II, small blue dots indicate spectroscopic targets with velocities far from Ret II, and small grey points indicate DES stars within 5 elliptical half-light radii.", "Top left: color-magnitude diagram with dereddened DES photometry.", "The solid magenta line is a Dartmouth 13 Gyr, [Fe/H] $=-2.5$ isochrone.", "The dashed magenta lines are more metal-rich 13 Gyr isochrones with metallicities [Fe/H] $=-2.0$ and $-1.5$ .", "The dashed blue line is a 10 Gyr isochrone with [Fe/H] $=-2.5$ to show where potential younger member stars or blue stragglers could lie. Top right: spatial position of stars around the Ret II center, only including stars within 5 elliptical half-light radii. Bottom left: radial velocity of spectroscopically observed stars vs elliptical half-light radius.", "Binaries are indicated as open circles, with the error bar spanning the range of observed velocities.", "There is no significant radial velocity gradient. Bottom right: Gaia EDR3 proper motions.", "All our spectroscopic targets are bright enough to have EDR3 proper motions.", "Stars 14, 16, 97, 142, and 154 are located at the center and thus not labeled. Note the bulk of field stars are beyond the plot limits."
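To make the kinematic part of these criteria concrete, the following minimal sketch applies a radial-velocity window plus the 2-unit Mahalanobis proper-motion cut defined above. It is an illustration only, not the selection code used in this work: the function names are ours, and the proper-motion covariance is a placeholder (only the systemic proper motion of roughly $(+2.4, -1.3)$ mas/yr is taken from the text).

```python
import numpy as np

def mahalanobis_distance(x, mean, cov):
    """Mahalanobis distance between a star's proper-motion vector and the systemic mean."""
    diff = np.atleast_1d(x) - np.atleast_1d(mean)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def passes_kinematic_cuts(v_hel, pm, pm_mean, pm_cov,
                          v_min=53.0, v_max=75.0, d_max=2.0):
    """Radial-velocity window (53-75 km/s) and 2-unit Mahalanobis proper-motion cut."""
    in_velocity_window = v_min < v_hel < v_max
    in_pm_ellipse = mahalanobis_distance(pm, pm_mean, pm_cov) < d_max
    return in_velocity_window and in_pm_ellipse

# Illustrative systemic proper motion (mas/yr) and a placeholder covariance matrix.
pm_mean = np.array([2.4, -1.3])
pm_cov = np.diag([0.05, 0.05]) ** 2

print(passes_kinematic_cuts(64.0, np.array([2.35, -1.25]), pm_mean, pm_cov))  # True
print(passes_kinematic_cuts(30.0, np.array([2.35, -1.25]), pm_mean, pm_cov))  # False: velocity cut
```

In practice the spatial and color-magnitude criteria would be applied in the same way, with edge cases near any hard boundary inspected individually as described above.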
], [ "Chemical Abundances", "We derived most abundances using a standard analysis using 1D [37] model atmospheres and Local Thermodynamic Equilibrium (LTE) radiative transfer with MOOG [175] including scattering [176] and [6] dampinghttps://github.com/alexji/moog17scat.", "Since Ba is the element of most interest, we analyzed Ba with both LTE and non-LTE, see Section REF .", "Our stellar parameters are given in Table REF , atomic data in Table REF , and abundance results in Table REF .", "rrrrrrr 7 Stellar Parameters ID $T_{\\rm eff}$ $\\sigma _T$ $\\log g$ $\\sigma _g$ $\\nu _t$ $\\sigma _\\nu $ 1 4655 53 1.24 0.19 1.61 0.14 2 4952 49 1.85 0.18 1.48 0.13 3 4953 49 1.87 0.18 1.48 0.13 4 5009 49 1.95 0.18 1.46 0.13 5 5034 50 2.23 0.18 1.41 0.13 6 5157 68 2.41 0.19 1.39 0.13 7 5316 77 2.61 0.19 1.36 0.13 8 5146 67 2.56 0.19 1.36 0.13 9 5248 68 2.60 0.19 1.36 0.13 10 5256 68 2.61 0.19 1.36 0.13 11 5286 71 2.74 0.19 1.34 0.13 12 5331 82 2.79 0.19 1.34 0.13 13 5336 83 2.80 0.19 1.33 0.13 14 5166 69 2.74 0.19 1.34 0.13 15 5275 70 2.87 0.19 1.33 0.13 16 5564 83 3.06 0.19 1.31 0.13 17 5349 88 2.95 0.19 1.32 0.13 18 5320 77 2.94 0.19 1.32 0.13 19 5324 79 2.94 0.19 1.32 0.13 20 5481 92 3.13 0.20 1.30 0.13 21 5420 93 3.18 0.20 1.30 0.13 22 5492 91 3.26 0.19 1.29 0.13 23 5550 86 3.28 0.19 1.29 0.13 24 5499 91 3.30 0.19 1.29 0.13 25 5309 75 3.22 0.19 1.29 0.13 26 5915 88 3.60 0.19 1.27 0.13 97 4584 64 1.47 0.19 1.56 0.14 99 5379 95 3.25 0.20 1.29 0.13 134 5861 91 3.62 0.19 1.27 0.13 142 5363 90 3.44 0.20 1.28 0.13 143 6055 91 3.73 0.19 1.26 0.13 144 6263 113 3.81 0.20 1.26 0.13 151 6439 124 3.74 0.20 1.26 0.13 154 6645 130 3.73 0.20 1.26 0.13 157 5602 79 3.44 0.19 1.28 0.13 188 6046 92 3.72 0.19 1.26 0.13 192 5650 76 3.48 0.19 1.27 0.13 195 5704 87 3.47 0.19 1.27 0.13 We adopt $\\mbox{[M/H]} = -2.5$ and $\\mbox{[$\\alpha $/Fe]}=+0.4$ for all stars.", "rrrrl 5 Atomic Data $\\lambda $ (Å) Species $\\chi $ (eV) $\\log gf$ Reference VLT 6439.075 Ca I 2.524 0.390 SR81 6493.781 Ca I 2.521 -0.109 SR81 6494.980 Fe I 2.402 -1.273 BLA82 6496.897 Ba II 0.604 -0.380 GAL20 M2FS Mg Wide 5171.596 Fe I 1.484 -1.720 OBR91 5172.684 Mg I 2.712 -0.393 NIST 5183.604 Mg I 2.717 -0.167 NIST 5269.537 Fe I 0.858 -1.330 OBR91 5328.532 Fe I 1.556 -1.850 OBR91 5371.489 Fe I 0.957 -1.640 OBR91 M2FS MedRes 4554.03 56.1 0.000 +0.17 IVA06 References: SR81 [173], BLA82 [23], IVA06 [82], GAL20 [68], OBR91 [142], NIST [113], accessed through the Kurucz, VALD3, and linemake databases [115], [160], [148].", "r|rrr|rr|rr|rr|rrr|rr 15 Abundances ID Mem VLT SNR M2FS SNR [Mg/H] $\\sigma _{\\rm Mg}$ [Ca/H] $\\sigma _{\\rm Ca}$ [Fe/H] $\\sigma _{\\rm Fe}$ [Ba/H]$_{\\rm NLTE}$ $\\sigma _{\\rm Ba}$ [Ba/H]$_{\\rm LTE}$ [Ba/H]$_{\\rm MR}$ $\\sigma _{\\rm Ba,MR}$ 1 M 140 $-2.69$ $ 0.07$ $-2.89$ $ 0.10$ $-1.86$ $ 0.18$ $-1.58$ $-1.64$ $ 0.18$ 2 M 73 Mgb, 39 $-2.48$ $ 0.12$ $-2.39$ $ 0.06$ $-2.76$ $ 0.09$ $-1.63$ $ 0.17$ $-1.33$ $-1.13$ $ 0.16$ 3 M 94 Mgb, 34 $-2.59$ $ 0.13$ $-2.46$ $ 0.06$ $-2.77$ $ 0.08$ $-1.58$ $ 0.17$ $-1.28$ $-1.42$ $ 0.25$ 4 M 69 Mgb, 37 $-2.55$ $ 0.11$ $-2.95$ $ 0.07$ $-2.66$ $ 0.19$ $-3.46$ limit $-3.46$ $-0.83$ limit 5 M 76 Mgb, 19 $-2.52$ $ 0.13$ $-1.87$ $ 0.07$ $-2.02$ $ 0.13$ $-1.86$ $ 0.15$ $-1.57$ $-1.60$ $ 0.34$ 6 M 65 Mgb, 24 $-2.75$ $ 0.13$ $-2.91$ $ 0.08$ $-3.18$ $ 0.12$ $-1.83$ $ 0.18$ $-1.54$ $-1.04$ $ 0.29$ 7 M 50 Mgb, 14 $-2.67$ $ 0.22$ $-2.77$ limit $-2.93$ limit $-1.64$ limit $-1.64$ $-0.83$ limit 8 M 52 Mgb, 15 $-2.03$ $ 0.16$ $-2.19$ $ 0.20$ $-2.14$ $ 0.15$ $-1.35$ $ 0.17$ $-1.12$ $-0.98$ $ 0.22$ 9 M 52 Mgb, 
17 $-2.72$ $ 0.13$ $-2.98$ $ 0.08$ $-2.89$ $ 0.14$ $-1.38$ $ 0.35$ $-1.08$ $-1.14$ $ 0.29$ 10 M 53 Mgb, 15 $-2.09$ $ 0.18$ $-2.88$ $ 0.08$ $-3.28$ limit $-3.41$ limit $-3.41$ 11 M H$\\alpha $ , 12 $-1.14$ $ 0.37$ 12 M 39 Mgb, 15 $-2.84$ $ 0.15$ $-2.69$ $ 0.11$ $-2.72$ limit $-1.50$ $ 0.38$ $-1.20$ $-1.16$ $ 0.37$ 13 M 42 Mgb, 10 $-2.60$ $ 0.16$ $-2.11$ limit $-2.88$ $ 0.16$ $-1.64$ $ 0.22$ $-1.35$ 14 C 40 H$\\alpha $ , 10 $-2.84$ $ 0.09$ $-3.14$ $ 0.20$ $-2.59$ $ 0.25$ $-2.53$ 15 M 34 H$\\alpha $ , 12 $-2.64$ limit $-3.07$ limit $-2.83$ limit $-2.83$ 16 C 30 Mgb, 10 $-1.94$ limit $-2.99$ limit $-0.19$ limit $-0.19$ 17 M 32 H$\\alpha $ , 12 $-2.33$ $ 0.09$ $-2.60$ $ 0.21$ $-1.84$ $ 0.21$ $-1.63$ $-1.32$ $ 0.47$ 18 M 21 H$\\alpha $ , 12 $-1.93$ limit $-2.52$ limit $+0.10$ limit $+0.10$ 19 M 31 H$\\alpha $ , 12 $-2.72$ $ 0.12$ $-2.66$ $ 0.21$ $-1.61$ $ 0.23$ $-1.35$ $-1.82$ $ 0.87$ 20 M 27 H$\\alpha $ , 5 $-1.66$ limit $-2.21$ limit $+0.69$ limit $+0.69$ 21 M 16 H$\\alpha $ , 7 $-1.84$ limit $-1.74$ limit $-0.49$ limit $-0.49$ 22 M 15 H$\\alpha $ , 5 $-1.32$ limit $-2.22$ limit $-0.19$ $ 0.53$ $-0.21$ 23 M 19 H$\\alpha $ , 6 $-1.52$ limit $-2.00$ limit $+0.79$ limit $+0.79$ 24 M 19 H$\\alpha $ , 4 $-1.23$ limit $-1.86$ limit $+0.46$ limit $+0.46$ 25 M 20 H$\\alpha $ , 6 $-2.38$ $ 0.20$ $-2.13$ $ 0.44$ $-0.56$ $ 0.56$ $-0.60$ 26 M 13 H$\\alpha $ , 4 limit $-2.21$ limit $-1.55$ $ 0.27$ $-1.35$ 97 C 125 $-2.36$ $ 0.11$ $-2.35$ $ 0.16$ $-1.84$ $ 0.17$ $-1.64$ 99 M 19 H$\\alpha $ , 7 limit $-2.30$ $ 0.27$ $-0.63$ $ 0.50$ $-0.60$ 100 M Mgb, 25 102 M Mgb, 28 134 M 13 Mgb, 2 $-1.62$ limit $-1.90$ limit $+0.68$ limit $+0.68$ 142 C 14 Mgb, 3 $-1.89$ $ 0.23$ $-1.98$ limit $-0.64$ $ 0.71$ $-0.68$ 143 C 10 Mgb, 3 $-0.86$ limit $-1.92$ limit $-1.32$ $ 0.47$ $-1.04$ 144 M 11 Mgb, 2 $-1.65$ limit $-1.27$ limit $-0.14$ limit $-0.14$ 151 C 14 $-1.58$ limit $-1.64$ limit $+0.36$ limit $+0.36$ 154 C 13 $ 0.34$ limit $-0.37$ limit $-0.34$ $ 0.61$ $+0.23$ 157 M 12 H$\\alpha $ , 5 $-1.45$ limit $-1.21$ limit $+0.71$ limit $+0.71$ 188 C 7 $-0.26$ limit $-1.09$ limit $+0.30$ limit $+0.30$ 192 M 17 H$\\alpha $ , 4 $-1.65$ limit $-2.11$ limit $+0.12$ limit $+0.12$ 195 M 14 H$\\alpha $ , 4 $-2.22$ limit $-2.02$ limit $-1.56$ $ 0.37$ $-1.34$ In column “Mem”, M indicates a clear member, and C indicates a candidate member.", "In the abundance uncertainty columns, “limit” means the indicated value is a 99% upper limit.", "[Ba/H]$_{\\rm NLTE}$ refers to our fiducial Ba abundances derived from MULTI/MARCS, [Ba/H]$_{\\rm LTE}$ is the Ba abundances derived from MOOG/ATLAS, and [Ba/H]$_{\\rm MR}$ is the MOOG/ATLAS abundances from the 4554Å line derived from the medium-resolution M2FS observations." ], [ "Stellar Parameters", "Since our spectra cover a very small wavelength range with few lines, we determined effective temperatures for member stars from DES photometry and Dartmouth isochrones [50].", "We adopted a 13 Gyr, $\\mbox{[Fe/H]}=-2.5$ , [$\\alpha $ /Fe]$=+0.4$ isochrone as our fiducial isochrone.", "The isochrone was used to fit $T_{\\rm eff}$ as a function of $g_0-r_0$ , and then we applied this to every member star.", "The brightest known member star (ID=1) is near the DES saturation limit with obviously incorrect $r$ and $i$ magnitudes in DES DR1.", "We adopted $g_0-r_0=0.80$ from [172] to determine this star's stellar parameters.", "Statistical uncertainties were calculated assuming a fixed error in $g_0-r_0 = 0.02$ mag (the typical reddening uncertainty) resulting in a typical temperature error of 50–100 K. 
Systematic uncertainties were estimated by taking the largest difference between the fiducial isochrone and the $T_{\\rm eff}$ calculated using isochrones of (12, 13, 14) Gyr, $\\mbox{[Fe/H]} = (-2.5, -2.0)$ , and $\\mbox{[$\\alpha $/Fe]} = (0.0, 0.4)$ , a typical uncertainty of 30–40 K. The total temperature uncertainty was the quadrature sum of these two uncertainties.", "The surface gravity $\\log g$ was determined photometrically using the equation $\\log g= 4.44 + \\log M_\\star + 4 \\log T_{\\rm eff}/5780\\text{K} + 0.4 (g_0 - \\mu + BC(g) - 4.75)$ [191] where $M_\\star = 0.75 \\pm 0.1 M_\\odot $ is the typical mass of an old red giant branch star, $g_0$ is the dereddened DES $g$ magnitude, $\\mu = 17.5$ is the distance modulus to Ret II, and $BC(g)$ is the [34] bolometric correction.", "Besides the $M_\\star $ and temperature uncertainties, we propagated a conservative 0.2 mag total uncertainty for the distance modulus and dereddening, as well as 0.03 mag uncertainty in the bolometric correction.", "The total $\\log g$ uncertainty is about 0.2 dex for all stars.", "Using $r$ instead of $g$ led to the same $\\log g$ within 0.05 dex, and using different metallicities for the bolometric correction led to differences within 0.02 dex.", "The $T_{\\rm eff}$ and $\\log g$ in Table REF agree with two previous works that studied a total of 9 stars in Ret II using high-resolution spectroscopy Ji16c,Roederer16b.", "The stellar parameters in these two studies were derived using standard 1D-LTE methods, i.e., by balancing abundances of Fe lines with respect to excitation potential, ionization, and line strength, including a temperature recalibration to a photometric scale [59].", "All nine stars agree within the $1\\sigma $ stellar parameter uncertainties with no mean offset.", "Our primary line of interest, the 6496Å Ba line, is saturated in essentially all of our stars with a detected Ba line.", "Microturbulence ($\\nu _t$ ) thus plays a major role in determining the final abundance and uncertainty.", "As we do not have enough lines to determine $\\nu _t$ self-consistently in our stars, we instead adopt a microturbulence relation based on $\\log g$ from the metal-poor giants in [158]: $\\nu _t = 0.039 (\\log g)^2 - 0.331 (\\log g) + 1.960$ where the typical scatter around the relation is 0.13 km s$^{-1}$ .", "We note that this relation is quite different from the direct $\\nu _t$ measurements in Ji16c and Roederer16b, but it eliminates a trend in Ba abundance with effective temperature that would otherwise be present.", "We have decided to adopt this relation for the rest of the main paper, and a complete investigation of microturbulence choices and the impact on our results is discussed in Appendices  and .", "As we do not have Fe or $\\alpha $ -element constraints for most stars, we assume everywhere a model metallicity of $\\mbox{[M/H]} = -2.5$ and $\\mbox{[$\\alpha $/Fe]}=+0.4$ .", "We verified in stars with $\\mbox{[Fe/H]}$ and $\\mbox{[Mg/Fe]}$ measurements that changing these parameters produces negligible differences to the resulting abundances.", "The final stellar parameters and uncertainties for all members or candidate members are given in Table REF .", "We do not determine stellar parameters for the BHB stars (IDs 100 and 102)." 
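As a concrete illustration of the photometric stellar-parameter chain described above, a minimal sketch of the photometric $\log g$ relation and the adopted $\nu _t$ -$\log g$ relation follows. The input $g_0$ magnitude and the bolometric correction are assumed placeholder numbers standing in for the DES photometry and the [34] BC(g) tables, and $T_{\rm eff}$ is simply taken as given rather than interpolated from the Dartmouth isochrone.

```python
import numpy as np

def logg_photometric(teff, g0, bc_g, mstar=0.75, mu=17.5):
    """log g = 4.44 + log M* + 4 log(Teff/5780 K) + 0.4 (g0 - mu + BC(g) - 4.75)."""
    return (4.44 + np.log10(mstar) + 4.0 * np.log10(teff / 5780.0)
            + 0.4 * (g0 - mu + bc_g - 4.75))

def microturbulence(logg):
    """nu_t (km/s) from the log g relation adopted in the text (scatter ~0.13 km/s)."""
    return 0.039 * logg**2 - 0.331 * logg + 1.960

# Placeholder inputs for a bright red giant; g0 and BC(g) are assumed values, not DES data.
teff, g0, bc_g = 4655.0, 16.1, -0.55
logg = logg_photometric(teff, g0, bc_g)
print(f"log g = {logg:.2f}, nu_t = {microturbulence(logg):.2f} km/s")
```

With these placeholder inputs the relations return $\log g \approx 1.3$ and $\nu _t \approx 1.6$ km/s, comparable to the values listed for the coolest giants in Table REF.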
], [ "VLT Data", "In the VLT data, most Ret II member stars only have 1-5 significant absorption features: H-$\\alpha $ , the Ba ii line at 6496.897Å, an Fe i line at 6494.98Å, a Ca i line at 6439.08Å, and sometimes a Ca i line at 6493.78Å.", "We use this data to derive Ba, Fe, and Ca abundances where possible.", "Atomic data used are given in Table REF .", "The Ba and Fe abundances are derived by fitting the spectral region from 6494$-$ 6504Å.", "Given the systemic velocity of 64 $\\mathrm {\\,km\\,s^{-1}}$ , the Ba line is right next to a strong and variable sky line at 6498.74 Å.", "The sky subtraction procedure described in Section  often leaves a significant residual (Figure REF ).", "After testing several possible alternate sky subtraction procedures, we decided that the best course of action was to include the sky line residual in the model of this spectral region and marginalize over the uncertainty in our final results.", "For all member stars, a coarse normalization was first performed using a sigma-clipped 3rd order polynomial between 6450$-$ 6550Å.", "We then excised a region of the spectrum from 6494$-$ 6504Å to fit in detail.", "Our model of this region is an 8$+$ 4 parameter model summarized in Table REF : a single Gaussian line width, six amplitudes for the Ca, Fe, and Ba stellar absorption features, a residual amplitude characterizing the sky line, a linear wavelength shift applied to the star but not the sky line, and a 3rd-order polynomial to fit the residual continuum.", "Using the same line width for both stellar and sky features implicitly assumes that all lines were unresolved by the spectrograph, which is valid here.", "Note that the two lines at 6498.9Å and 6499.7Å are not present in any of the Ret II stars, but these were included in the model to fit more metal-rich foreground stars.", "The model parameters were optimized using scipy.optimize.curve_fit.", "For five stars with high SNR (IDs 1, 3, 4, 5, 97), the best-fit $\\chi ^2$ was larger than 1, suggesting that the data uncertainties were underestimated or the model did not provide a sufficient description of the data.", "For these stars, we increase the data uncertainties by 5–30% such that the reduced $\\chi ^2$ is 1.", "The other stars had reduced $\\chi ^2$ of ${\\approx } 0.8-0.9$ , but we leave their errors unscaled.", "Uncertainties in the fit were then found using dynamic nested sampling with dynesty [177].", "The priors used for the sampling are listed in Table REF .", "Every fit and MCMC chain was visually inspected to ensure goodness of fit before accepting its results.", "Lines considered poor visual fits were marked for upper limit determination.", "LTE abundances and uncertainties were determined by putting equivalent width distributions into curves of growth calculated by MOOG.", "First, the posterior distribution of the absorption line parameters were analytically converted into a posterior for the equivalent widths that marginalized over the effect of the sky line.", "Curves of growth were constructed with MOOG, which were used to convert the equivalent width distribution to an abundance distribution.", "For Ba, the effect of hyperfine structure was included assuming an $r$ -process isotope distribution [82], [174].", "For detected lines, we adopt the optimum fit as the point estimate and larger of the difference between the point estimate and the 16th and 84th percentiles as a $1\\sigma $ abundance error.", "For undetected lines, we adopt the 99th percentile of the abundance posterior as the 
upper limit.", "Despite this effort, stars with SNR $< 25$ still appear to have unreliable Ba abundances (Figure REF , Appendix ).", "Figure: VLT spectra with best-fit models of the Ba line.From low to high wavelength the primary absorption features are due to Ca, Fe, and Ba.The Ba abundance measurement or limit is indicated.Top section: stars with SNR >25> 25.", "Bottom section: stars with SNR <25< 25.The sky residuals are clearly substantial for the lower SNR stars.The black line shows the data, while the grey shaded region indicates the 1σ\\sigma spectrum noise.Best-fit models are shown in red and orange for clear and candidate members, respectively.The solid colored lines are the model including the sky line, while the dashed colored lines are the model fit in the absence of the fitted sky line residual.", "In some cases (e.g.", "star 020, 134, 157) the Ba line is completely degenerate with the sky emission line, so no useful constraint is obtained.The shaded colored regions indicate the 16-84th percentile range of model parameters.Note that we have not explicitly plotted the upper limit model.ccc 3 VLT Spectrum Fit Parameters Parameter Description Prior $A_{\\text{Ba}}$ Amplitude of 6496.9Å Ba line $\\mathcal {U}[0, 1]$ $\\lambda _{\\text{Ba}}$ Observed wavelength of Ba line $\\mathcal {U}[6493.4, 6503.4]$ $\\sigma $ Width of all lines $\\mathcal {U}[0.01, 0.30]$ $A_{\\text{sky}}$ Amplitude of 6498.7Å sky line $\\mathcal {U}[-0.50, 0.50]$ $A_{\\text{Fe}}$ Amplitude of 6495.0Å Fe line $\\mathcal {U}[0, 1]$ $A_{\\text{Ca}}$ Amplitude of 6493.8Å Ca line $\\mathcal {U}[0, 1]$ $A_{\\text{Fe,2}}$ Amplitude of 6498.9Å Fe line $\\mathcal {U}[0, 1]$ $A_{\\text{Ca,2}}$ Amplitude of 6499.7Å Ca line $\\mathcal {U}[0, 1]$ $c_0$ Constant Continuum Coefficient $\\mathcal {U}[0.5, 1.5]$ $c_{1,2,3}$ Continuum Coefficient $\\mathcal {U}[-1.0, 1.0]$ Stellar parameter uncertainties were found by calculating new curves of growth for $1\\sigma $ differences in stellar parameters, redoing the above calculations, and taking the difference.", "These are added in quadrature to the statistical uncertainties.", "As part of the above procedure we fit the Ca line at 6493.8Å.", "However, there is a stronger Ca line at 6439Å that is more often detected and also less susceptible to non-LTE effects [130].", "The Ca abundance is thus derived using this 6439Å line with smhr.", "This includes the normalization, equivalent width measurement, and stellar parameter uncertainty propagation [96].", "Formal 4 $\\sigma $ upper limits for Ca were also calculated in smhr by synthesizing a Ca line such that $\\Delta \\chi ^2 = 16$ ." ], [ "M2FS HiRes Abundances", "We used smhr to normalize and stitch coadded orders, fit equivalent widths, measure element abundances from MOOG, and propagate stellar parameter uncertainties (see description in [35], [96]).", "The primary use of this data is to measure the Mg abundances, which we derive from equivalent widths of the Mg b lines at 5172 and 5183Å.", "Useful abundances were ultimately only derived for the MgWide arm, since the BulgeGC1 arm only had fainter and warmer stars, and the 6496Å Ba line was completely blended with a sky line." 
], [ "M2FS MedRes Abundances", "The primary line of interest in these data is the Ba 4554Å line.", "We measure the abundance from this line with a procedure identical to that used for the VLT data, except that when fitting the equivalent width we manually define valid continuum wavelength ranges instead of modeling the many absorption lines.", "The results are provided in Table REF , and they are consistent with the Ba abundances derived from the 6496Å line in the VLT data but with larger uncertainties.", "We thus do not use these abundances further.", "However, in the future this medium-resolution mode may be an efficient way to search for strong Ba lines in other UFDs." ], [ "Barium NLTE Abundances", "Ba can be substantially affected by non-LTE effects [18], [128], [68] especially when the lines are near saturation.", "For metal-poor giants, NLTE makes the Ba 6497 line stronger, resulting in lower inferred Ba abundances when including NLTE.", "To determine this quantitatively, we computed NLTE abundances for the Ba 6497 line using an updated version of the MULTI radiative transfer code [32], [19], [68] and MARCS model atmospheres [74].", "Like the MOOG/ATLAS analysis, we adopted a metallicity of [M/H] $=-2.5$ and [$\\alpha $ /Fe] $=+0.4$ for all model atmospheres, and used the model atom presented in [68] including [6] damping.", "After pre-computing a grid of NLTE curves of growth at many $T_{\\rm eff}$ and $\\log g$ values, we use Delaunay triangulation and linear interpolation to find NLTE abundances as a function of stellar parameters and equivalent widths (using scipy.interpolate.LinearNDInterpolator).", "In the rest of this paper, we adopt the NLTE [Ba/H] abundances as our fiducial abundances.", "However in Appendix  we show a comparison between the MOOG LTE/ATLAS and MULTI NLTE/MARCS abundances.", "Overall, the NLTE effects are approximately $-0.3$ dex, resulting in lower abundances compared to LTE modeling.", "For completeness, we investigated NLTE corrections for Mg [16], Ca [129], and Fe [17], [131]Correction grids available at https://nlte.mpia.de/ and http://spectrum.inasan.ru/nLTE/.. We found that [Mg/H] increased by $0.06 \\pm 0.02$ dex and [Ca/H] increased by $0.12 \\pm 0.04$ dex, where the uncertainty indicates the variation across the stellar parameter range.", "For Fe, in most stars only the 6494.98Å line is available to measure.", "Most NLTE correction grids do not include this line, with the exception of [131] whose interpolation grid only extends up to 5000K.", "The correction for the three stars with $T_{\\rm eff}< 5000\\mathrm {\\,K} $ increases [Fe/H] by $0.15 \\pm 0.01$ dex.", "We decided not to include these corrections in our results, though these mean offsets can be applied if desired." ], [ "Abundance Summary", "In summary, the main abundance results are Ba, Fe, Ca, and Mg measurements or upper limits.", "The VLT data are used to measure Ba, Fe, and Ca.", "The M2FS high-resolution data are used to measure Mg and to verify Fe and Ca.", "The M2FS medium-resolution data are used to verify Ba.", "For the brightest stars, we use the M2FS HiRes data to derive abundances for Cr, Ti, and Nd, which we report in Appendix .", "We do not discuss these elements more, as they are consistent with the discussion in Ji16c and Roederer16b.", "We adopt NLTE abundances for Ba and LTE abundances for other elements." 
], [ "Radial Velocity Distribution", "Figure REF shows the radial velocities of all stars in our sample (solid blue histogram).", "There is a clear peak at ${\\approx }65$  km s$^{-1}$ associated with Ret II.", "Stars considered clear Ret II members are shown as the solid red histogram, while all Ret II members including candidates are the open orange histogram (see Section REF for details).", "For comparison, we show the radial velocity distribution of a smooth background halo from the Besançon model [154].", "We restrict the background to a CMD region surrounding our targets defined by four points $(g-r, g) = (0.2, 21.2), (0.7, 21.2), (0.45, 16.0), (1.0, 16.0)$ .", "We query a large area for statistics and rescale the distribution to the FLAMES field of view.", "The residual background after removing the clear and candidate members is shown as a purple histogram.", "There continues to be an excess of stars near the velocity of Ret II relative to the Besançon model.", "Detailed investigation of the proper motions and spatial position shows that these residual stars cannot be members of Ret II.", "There may potentially be additional structure in this region of the sky that is not part of Ret II, though we do not see any clear spatial or proper motion trends.", "We determine the mean velocity $\\bar{v}$ and velocity dispersion $\\sigma _v$ of Ret II with a Gaussian scatter model.", "Each star is assumed to have a true velocity distributed according to a Gaussian with mean and standard deviation $\\bar{v}$ and $\\sigma _v$ , and the actual observed velocity has a Gaussian noise added to it with the individual velocity uncertainty in Table REF .", "The prior on the mean velocity is uniform with no bounds, and the prior on the scatter is uniform in log space from $\\sigma _v \\in [0.1, 10]$ $\\mathrm {\\,km\\,s^{-1}}$ .", "The posterior is sampled using Stan ([33], following [36]).", "All five likely binary stars are removed.", "Using only the 27 clear likely single members of Ret II, we obtain $\\bar{v} = 63.9 \\pm 0.5 \\mathrm {\\,km\\,s^{-1}} $ and $\\sigma _v = 2.97^{+0.43}_{-0.35}\\mathrm {\\,km\\,s^{-1}} $ .", "Adding the 8 candidate members gives $\\bar{v} = 64.0 \\pm 0.5 \\mathrm {\\,km\\,s^{-1}} $ and $\\sigma _v = 2.96^{+0.44}_{-0.36}\\mathrm {\\,km\\,s^{-1}} $ , which is the same within the uncertainties.", "Our velocity dispersion is consistent both with previous measurements of ${\\approx }3.3\\mathrm {\\,km\\,s^{-1}} $ (Simon15,Walker15,Koposov15b) and with the lower value of $2.8^{+0.7}_{-1.2}$ inferred by [137] who statistically accounted for binaries.", "Our velocity dispersion is about two times more precise given more stars and additional independent velocity measurements, but we caution that our velocity dispersion uncertainties may be overly optimistic given the simple treatment of possible systematics (see Appendix ).", "lrr 3 Reticulum II Properties Quantity Value Reference/Prior RA (J2000) 03:35:47.83 [141] Dec (J2000) $-$ 54:02:47.8 [141] Position Angle (deg) 68 $\\pm 2$ [141] Ellipticity 0.6 $\\pm 0.1$ [141] Half-light radius (arcmin) 6.3 $\\pm 0.4$ [141] Half-light radius (pc) 58 $\\pm 4$ [141] Distance modulus 17.5 $\\pm 0.1$ [141] Distance (kpc) 31.4 $\\pm 1.4$ [141] Heliocentric Radial Velocity ($\\mathrm {\\,km\\,s^{-1}}$ ) $63.9 \\pm 0.5$ Uniform Velocity dispersion ($\\mathrm {\\,km\\,s^{-1}}$ ) $2.97^{+0.43}_{-0.35}$ $\\log \\sigma _v \\sim \\mathcal {U}\\left[-1,1\\right]$ Mean metallicity $\\left\\langle \\mbox{[Fe/H]}\\right\\rangle $ $-2.64 \\pm 0.11$ 
Uniform Metallicity dispersion $\sigma _{\rm [Fe/H]}$ $0.32^{+0.10}_{-0.07}$ $\log \sigma _{\mbox{[Fe/H]}} \sim \mathcal {U}\left[-2,0\right]$ Mean NLTE barium abundance $\left\langle \mbox{[Ba/H]}\right\rangle $ $-1.68 \pm 0.07$ Uniform Barium dispersion $\sigma _{\rm [Ba/H]}$ $0.05^{+0.08}_{-0.03}$ or $<0.20$ $\log \sigma _{\mbox{[Ba/H]}} \sim \mathcal {U}\left[-2,0\right]$ Fraction of $r$ -enhanced stars $0.72^{+0.10}_{-0.12}$ $f_r = \mathcal {U}\left[0,1\right]$ Absolute Magnitude $M_{V}$ $-3.1 \pm 0.1$ [141] Stellar Mass $M_{\star }$ ($M_\odot $ ) $10^{3.51 \pm 0.04}$ Assuming $M/L=2.2$ Dynamical Mass $M_{\rm dyn, 1/2}$ ($M_\odot $ ) $10^{5.6 \pm 0.2}$ Using [203] Rows where the third column has a prior are measured from this work.", "Figure REF shows the [Fe/H], [Ba/H], and [Ba/Fe] abundances of our sample.", "The red and orange points are clear and candidate Ret II members, respectively.", "Stars with limits on both [Fe/H] and [Ba/H] are shown in grey.", "Stars with SNR $>25$ are shown as larger points, while stars with SNR $<25$ are shown as smaller points.", "We only consider the large, high-SNR data points for the interpretation, but we show all the data for completeness.", "There are two high SNR candidate member stars labeled on Fig REF .", "Star 97 is deemed a candidate member because it is extremely red compared to the fiducial Ret II CMD.", "Its low inferred metallicity of $\mbox{[Fe/H]}=-2.4$ is inconsistent with its color unless it is an extremely carbon-enhanced star [112].", "If it is a carbon-enhanced member, our model atmosphere grid may not be sufficient to accurately describe its properties.", "With this caveat in mind, the [Ba/H] for this star is right in line with most other Ret II stars.", "Star 14 is also deemed a candidate member due to its CMD position, which overlaps with a $\mbox{[Fe/H]}=-1.5$ isochrone.", "However, its weak Fe line suggests $\mbox{[Fe/H]} \sim -3.1$ .", "This clear inconsistency suggests it is either a contaminant or its spectrum cannot be modeled using photometric stellar parameters.", "We thus do not strongly consider either candidate member's Ba abundance.", "It is clear that the majority (${\approx }$ 2/3) of high-SNR Ret II stars with meaningful Ba abundance measurements lie in a constant [Ba/H] plateau, while the minority (${\approx }$ 1/3) have low undetected Ba abundances.", "This corroborates the conclusions of [91], Ji16c, and Roederer16b that Ret II is enriched by a single prolific $r$ -process event, but now with more than two times the number of $r$ -process abundance measurements or upper limits.", "We note that for stars in the [Ba/H] plateau, there is a larger span in [Fe/H] than [Ba/H].", "This is discussed in Section .", "Figure: [Ba/H] and [Ba/Fe] vs [Fe/H] as measured by VLT/GIRAFFE. Red points indicate clear Ret II members and orange points indicate candidate Ret II members. Large symbols are stars with SNR $>25$ , while small symbols are stars with SNR $<25$ . $1\sigma $ error bars are shown for all detected abundances, and arrows drawn to indicate upper/lower limits. Open circles indicate stars with at least one upper limit, and stars are colored grey if they have no detection of either Ba or Fe. Considering only the clear members with high SNR (large red points), there is no apparent scatter in Ba but significant spread in Fe."
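The dispersions quoted in this and the following subsections (velocity, [Fe/H], [Ba/H]) all come from Gaussian scatter models in which each star's true value is drawn from a Gaussian of unknown mean and intrinsic width and is then observed with its own measurement error; in this work the posteriors are sampled with Stan. Below is a minimal numpy sketch of the equivalent marginalized likelihood for the radial-velocity case, profiled on a grid with simulated placeholder data rather than the actual Ret II measurements.

```python
import numpy as np

def gaussian_scatter_loglike(mean, sigma, values, errors):
    """Log-likelihood with the per-star true values marginalized analytically:
    each observation is distributed as N(mean, sqrt(sigma^2 + err_i^2))."""
    var = sigma**2 + np.asarray(errors)**2
    resid = np.asarray(values) - mean
    return -0.5 * np.sum(resid**2 / var + np.log(2.0 * np.pi * var))

# Simulated placeholder velocities (km/s) and uncertainties; not the measured Ret II values.
rng = np.random.default_rng(1)
v_true = rng.normal(63.9, 3.0, 27)
v_err = rng.uniform(0.5, 2.0, 27)
v_obs = v_true + rng.normal(0.0, v_err)

# Profile the likelihood on a coarse grid; the paper instead samples the posterior with Stan,
# using a log-uniform prior on the dispersion between 0.1 and 10 km/s.
means = np.linspace(60.0, 68.0, 81)
sigmas = np.logspace(-1, 1, 81)
grid = np.array([[gaussian_scatter_loglike(m, s, v_obs, v_err) for s in sigmas] for m in means])
i, j = np.unravel_index(np.argmax(grid), grid.shape)
print(f"best-fit vbar = {means[i]:.1f} km/s, sigma_v = {sigmas[j]:.2f} km/s")
```

The [Fe/H] dispersion of the next subsection uses the same structure, simply swapping velocities for metallicities and their uncertainties.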
], [ "[$\\alpha $ /Fe] vs [Fe/H]", "We measure two $\\alpha $ -elements, Mg and Ca, in several stars.", "The [$\\alpha $ /Fe] vs [Fe/H] results are shown in Figure REF .", "The number of stars is low and the error bars are large, but the left two panels show that both [Mg/Fe] and [Ca/Fe] broadly decline with increasing [Fe/H].", "Given the low statistics, there is no clear “$\\alpha $ -knee” as seen in the Milky Way and some dwarf galaxies [190], [186], [78], though we note that most dwarf galaxies do not have such a clear knee [102], [183].", "Regardless, both [Mg/Fe] and [Ca/Fe] show clear decreases somewhere between $-3.0 < \\mbox{[Fe/H]} < -2.5$ , a lower metallicity than the more massive classical dSph galaxies, as expected in a standard time-delay scenario [58], [189].", "While often assumed to vary together, in fact Mg and Ca can have different origins, since Mg is hydrostatically synthesized primarily in the most massive core-collapse supernova (CCSN) progenitors while Ca is explosively synthesized in most CCSN progenitors as well as Type Ia SNe [135], [77], [95].", "Most stars in Ret II have similar Mg and Ca abundances, with two notable exceptions.", "First, Star 5 has [Fe/H] $\\sim -2.0$ and very low [Mg/Fe] (as previously pointed out by Ji16c).", "Second, Star 10 has [Fe/H] $< -3$ but a very high [Mg/Ca] $= +0.8$ .", "These two stars drive an overall decreasing slope in [Mg/Ca] with respect to [Fe/H], which could (but does not have to) imply an early excess (and late-time lack) of the most massive CCSNe in Ret II.", "[95] pointed out that a decreasing [Mg/Ca] vs [Fe/H] trend is seen in several UFDs, but so far the trend is confined to UFDs that are kinematically associated with the Large Magellanic Cloud.", "An exciting possibility is that this indicates environment-dependent stellar populations or star formation histories.", "However, Ret II is likely only recently captured by the Large Magellanic Cloud ([147], [55], though see [9]).", "Within the uncertainties, our Ca, Mg, and Fe measurements are consistent with previous results by Ji16c and Roederer16b.", "However, the lower SNR data in Ji16c suggested a broadly flat [Ca/Fe] trend, while our more precise and larger number of Ca measurements now clearly suggest an overall decreasing [Ca/Fe] trend with [Fe/H].", "Figure: α\\alpha -element abundances in Ret II.Symbols are same as Figure , with large red solid points indicating abundances of confident members, large orange points indicating abundances of candidate members, open red points indicating stars with one upper limit, and open grey points indicating stars with two upper limits.Note that two stars have Mg and Ca measurements but only Fe upper limits.Left: [Mg/Fe] vs [Fe/H].Middle: [Ca/Fe] vs [Fe/H].Right: [Mg/Ca] vs [Fe/H]." 
], [ "Intrinsic Scatter in [Fe/H]", "We measure the intrinsic [Fe/H] scatter using the 13 definite member stars with [Fe/H] detections.", "The [Fe/H] distribution is modeled with a Gaussian [117], and sampled using Stan in a manner similar to the velocity dispersion.", "We adopt a log-uniform prior for the intrinsic scatter of $10^{-2}-10^0$ .", "The results are a mean $\\left<\\mbox{[Fe/H]}\\right> = -2.64$ with dispersion 0.32 dex, matching previous results [172], [192], [111].", "Adding Fe measurements from the two candidate members or including upper limits does not substantially affect these results.", "Figure: Barium measurements/upper limits and best-fit models for stars with SNR >25>25.The left panel shows our fiducial result only including clear members (16 stars), while the right panel also includes candidate members (3 more stars).Detections are shown as points with 1σ1\\sigma error bars, stacked within bins of 0.5 dex like a histogram.", "Stars with larger uncertainties are placed towards the bottom of each bin.Upper limits are shown as leftward-pointing arrows.", "Red and orange symbols indicate definite and candidate members, respectively.The shaded regions show the best-fit two-Gaussian model normalized to the number of stars on each panel.The width of both distributions (σ Ba \\sigma _{\\rm Ba}) is unresolved, so we plot the width using the 95% upper limit.Note that the model including candidates is wider primarily to accommodate Star 14." ], [ "Intrinsic Scatter in [Ba/H]", "We detected [Ba/H] in 16 of our 32 member stars (21 out of 40 stars including candidate members).", "However, only [Ba/H] measurements from stars with SNR $> 25$ were considered reliable due to sky subtraction residuals (see Figure REF and Appendices  and ), restricting the [Ba/H] measurements to 11 out of 16 member stars (13 out of 19 candidate members).", "Figure REF shows that the [Ba/H] measurements in Ret II have a clear peak at [Ba/H] $\\sim -1.7$ and another group of stars with upper limits of [Ba/H] $\\lesssim -3$ (as previously seen in [91], [159]Star 7 has a much more stringent [Ba/H] upper limit in [91], but we kept its high [Ba/H] limit here to remain self-consistent.).", "We modeled this [Ba/H] distribution as a two-component Gaussian mixture.", "The model has 5 parameters: two means, two intrinsic dispersions, and a mixing fraction.", "We assume a log-flat prior for the intrinsic dispersions from $10^{-2}-10^{0}$ , a flat prior from 0 to 1 for the mixing fraction, and no constraint on the means (except that one has to be larger than the other).", "For the abundance likelihoods, for stars with [Ba/H] detections we used the usual Gaussian likelihood using the [Ba/H] measurement and uncertainty as the mean and standard deviation for that star.", "For the stars with [Ba/H] 99% upper limits, we adopted a step-function likelihood where 99% of the probability is uniform between [Ba/H]$=-5$ to the measured upper limit, and 1% of the probability is uniform from the measured upper limit to [Ba/H]$=+1$ .", "This model was implemented in Stan and sampled using the NUTS sampler with 4 chains and $10^6$ steps.", "Model parameters were initialized near likely values from initial visual examination (i.e., mixing fraction based on the number of limits vs measurements, intrinsic scatters of 0.1 dex, component means of $-4.0$ and $-1.5$ ).", "Our final chains were visually well-mixed and had $>10000$ effective samples for all parameters.", "The results of our fiducial [Ba/H] dispersion fits are shown in 
Figure REF .", "Only stars with SNR $> 25$ are shown.", "The points show the exact [Ba/H] values and their uncertainties, and they are stacked in the vertical direction in bins of 0.5 dex like a histogram.", "As before, clear members are shown in red and candidate members in orange.", "The arrows indicate the 99% upper limits for stars with undetected Ba lines, which are included in our modeling procedure.", "The best-fit two-component models are shown as shaded regions (red for clear members, orange for candidate members).", "Overall, the detected Ba abundances for the clear member stars are clearly unimodal, and the per-star [Ba/H] uncertainty can explain the observed scatter in [Ba/H] abundances.", "We obtain a 95% upper limit $\\sigma _{\\rm [Ba/H]} < 0.20$ dex.", "If adding the candidate members, we obtain a weaker upper limit $\\sigma _{\\rm [Ba/H]} < 0.31$ dex, which is primarily driven by star 14 that has a weak Ba line.", "As discussed previously, if this star is a member it is not likely that its stellar parameters can be determined with photometry, so the [Ba/H] inference for this star is likely inaccurate.", "As a note of caution, we believe the choices made for our fiducial measurement are the most appropriate for quantifying the [Ba/H] intrinsic scatter in Ret II, as they minimize the impact from known systematic issues.", "But for completeness, in Appendix  we show the effect of all choices (i.e., SNR cut, membership, $\\nu _t-\\log g$ relation) on the intrinsic scatter measurement." ], [ "Abundance Trends with Radius", "Figure REF shows trends in [Fe/H], [Ba/H], [Mg/Fe], and [Ca/Fe] with radius for confirmed and candidate members.", "Overall, there are no strong abundance gradients for Fe and Ba.", "This lack of trend within this radius is expected, as any initial abundance gradients produced when Ret II was forming its stars at $z \\gtrsim 6$ will likely be dynamically mixed away by $z=0$ : for $10^4$ stars within the half-light radius of 58 pc and typical velocity of 3 $\\mathrm {\\,km\\,s^{-1}}$ , the two-body relaxation time is about 2.5 Gyr [22].", "Note a metallicity gradient could be present at larger radii, as illustrated by the clear gradient in Tucana II [40], but our observations only reach two half-light radii.", "For Mg and Ca, there may be a small abundance gradient in [Mg/Fe] and [Ca/Fe], perhaps decreasing slightly at larger distances.", "For Mg, this could easily be due to small number statistics rather than an abundance gradient; and for Ca, the scatter around a mean trend would be comparable to or larger than the size of the gradient itself.", "Thus, we do not consider there to be evidence for any abundance gradients out to two half-light radii.", "We have carefully determined the [Ba/H] distribution in Ret II.", "Figures REF  and REF and Table REF show that ${\\sim }30\\%$ of the stars in Ret II are relatively metal-poor with no detected $r$ -process elements, while ${\\sim }70\\%$ of the stars in Ret II have identical $r$ -process abundances of $\\mbox{[Ba/H]} = -1.7$ , precise to 0.2 dex.", "We now consider the implications of this measurement for galaxy formation and metal mixing (Section REF ), the $r$ -process site (Section REF ), and chemical evolution in Ret II (Section REF ).", "We consider and dismiss the possibility of Ba contamination from non-$r$ -process sources in Section REF ." 
], [ "Well-mixed [Ba/H] Implies Bursty Star Formation in Ret II", "We first interpret the distribution of detected [Ba/H], which has a low intrinsic dispersion $\\sigma _{\\rm Ba} < 0.20$ .", "Ret II has $r$ -process abundances 2-3 orders of magnitude higher than the neutron-capture element abundances in other UFDs, implying that the $r$ -process elements in this galaxy are produced in a single rare $r$ -process event [91], [159].", "Because all this $r$ -process material is deposited at a single time in Ret II, the observed variation in [Ba/H] is entirely due to variations in metal mixing.", "The unresolved Ba dispersion thus implies that the $r$ -process material in Ret II must have been very well-mixed by the time the high-Ba stars in Ret II have formed.", "To interpret this homogeneous Ba, consider a parcel of gas with a fresh source of metals.", "There are two crucial timescales: $\\tau _{\\rm mix} $ , the time for the metals to completely mix within that gas, and $\\tau _{\\rm sf} $ , the time to turn that parcel of gas into stars.", "Stars forming from this gas will be chemically homogeneous if mixing is faster than star formation, i.e.", "$\\tau _{\\rm mix} < \\tau _{\\rm sf} $ .", "The homogeneous Ba abundances in Ret II thus provide a lower limit on the time between when the $r$ -process event occurs and when the next generation of stars forms.", "In other words, the mixing timescale provides a constraint on the burstiness of star formation.", "We first describe some basic physical ingredients determining $\\tau _{\\rm mix} $ .", "Metal mixing in dwarf galaxies can be approximated as proceeding in two phases: the initial explosion remnant and subsequent turbulent mixing [99], [53].", "For an individual explosion, the initial remnant is dominated by a momentum-driven snowplow, lasts only ${\\sim }10^5$ yr, and sweeps up ${\\sim }10^5 M_\\odot $ of gas (for a $10^{51}$ erg explosion; e.g., [43], [161], [72], [90], [122], [125]).", "This mass is insignificant compared to a dwarf galaxy's ISM.", "The dominant process determining $\\tau _{\\rm mix} $ is thus turbulent mixing, where large-scale energy injection cascades to small-scale velocity eddies that enable microscopic diffusion [145].", "Turbulent mixing is typically modeled as a diffusion process [105], [99], [71], [90], [114], [12], [182], $R_t^2 \\propto D_t \\tau $ , where $R_t$ is the turbulent diffusion distance, $D_t$ is the turbulent diffusion coefficient, and $\\tau $ is the time since material is deposited.", "Drawing an analogy to mixing length theory, the diffusion coefficient can be estimated by a typical length and velocity scale driving turbulence, $D_t \\sim R_{\\rm turb} v_{\\rm rms}$ .", "Complete mixing occurs when $R_t$ reaches a length scale associated with the full size of the galaxy, $R_t = R_{\\rm gal}$ .", "Thus, $\\tau _{\\rm mix} \\sim R_{\\rm gal}^2 / (R_{\\rm turb} v_{\\rm rms})$ [145].", "In early dwarf galaxies, turbulence is primarily driven by gravitational gas accretion or mergers [202], [73], [104], [166], [152], which can be used to estimate a turbulent diffusion coefficient [99], [90].", "While intuitive, the mixing length formalism fails to describe the full physics of the complex, anisotropic, and multi-phase metal-mixing process in dwarf galaxies.", "For example, it is well known that mixing depends on the temperature of the ISM phase that the metals reside in, such that hot ISM phases mix much more efficiently than cold ISM phases [106], [47], [54], [52], [53]; and the anisotropic topology 
of cold gas clumps and filaments in early dwarf galaxies affects where metals get deposited and new stars form [195], [39], [124].", "It is thus crucial to study metal mixing with hydrodynamic galaxy formation simulations.", "A few recent simulations have explicitly studied metal mixing in dwarf galaxies [196], [194], [79], [80], [151], [56], [52], [53], [182], [85].", "In the vast majority of simulations, abundance scatter from individual metal sources is typically very large, ranging from 0.4-2.0 dex [165], [53], [2]Our scatter limit of 0.2 dex is a $1\\sigma $ rms, for which almost no simulation provides a quantitative value.", "Since most UFDs in simulations are resolved by fewer than 100 star particles, we currently estimate the rms by taking the range of star particle abundances, which corresponds to $\\pm 2\\sigma $ interval, and dividing by 4.", "It would be helpful for future simulations to provide the actual rms values..", "Note that many simulations compare their simulation abundance scatter directly to the observed abundance data, without deconvolving the abundance uncertainties.", "[182] performed the most direct comparison to Ret II, simulating a UFD and injecting $r$ -process elements from a single NSM to see the resulting $r$ -process abundance spread.", "In order to homogenize the gas to a level consistent with that observed in Ret II, they found that the gas had to mix for a time period of a few hundred Myr before forming stars.", "They measure an effective diffusion coefficient in their simulation of $D_t \\approx 10^{-3}$ kpc$^2$ Myr$^{-1}$ , resulting in a complete mixing timescale of ${\\approx }250$ Myr.", "The overall timescale of $>100$ Myr to mix matches results from other simulations [80], [52] and estimates based on the mixing length scaling relation [98], [145], [90].", "Thus, we expect $100 \\text{Myr} < \\tau _{\\rm mix} < \\tau _{\\rm sf} $ , and the timescale between bursts of star formation in Ret II should be over 100 Myr.", "There are a few important caveats to this interpretation.", "First, the mixing time can be affected by stochastic events.", "When [182] exploded the NSM at the outskirts of the galaxy rather than the center (to mimic a velocity kick, also see [165], [164], [25]), the lower diffusion coefficient resulted in less efficient mixing of the $r$ -process elements.", "[52], [53] also emphasize that the exact timing and location of $r$ -process production relative to other stellar feedback sources that drive turbulence can both increase and decrease the mixing time.", "Second, the abundances in the [182] simulation do not match Ret II observations: these simulations produce a flat trend in [Ba/Fe] vs [Fe/H], whereas Figure REF shows a flat trend in [Ba/H] vs [Fe/H].", "A correlation between Ba and Fe can occur if Ba and Fe are well-mixed relative to each other but with varying overall metallicity differences.", "The simulations by [182] indeed have a gas-rich merger that helps homogenize Ba and Fe but causes a dispersion in [X/H] at fixed time (Y. 
Tarumi, private communication).", "Finally, most galaxy formation simulations stay at relatively low resolutions, e.g.", "[182] adopt the ISM model from Auriga that uses an effective equation of state model below 0.1 $\\mathrm {\\,cm^{-3}}$ [69].", "This may resolve mixing in the large-scale ISM, but it may not resolve small-scale inhomogeneous mixing [145], [39], [124].", "In summary, the mixing time in UFDs is likely larger than 100 million years.", "The homogeneous $r$ -process abundances in Ret II thus indicate that at least two early bursts of star formation occurred in Ret II, separated by at least a few hundred million years.", "The first burst produced $r$ -process elements that enriched stars born in the second burst.", "Given that we find ${\\approx }$ 70% of Ret II stars are $r$ -process enhanced, a concrete prediction is that star formation histories of Ret II with a precision of ${\\lesssim }100$ Myr should show 30% of stars forming first, a gap of $>100$ Myr, and then the other 70% of star formation." ], [ "A Prompt, High-Yield R-Process Site", "The [Ba/H] distribution in Ret II introduces some new constraints on the $r$ -process site.", "First, the mean $\\left\\langle \\mbox{[Ba/H]}\\right\\rangle = -1.68 \\pm 0.07$ provides a constraint on the ratio $M_{\\rm Ba}/M_{\\rm H} = 136.5 \\times 10^{\\mbox{[Ba/H]} - 9.82} \\approx 10^{-9.36}$ , where $136.5$ is the average atomic mass of Ba and $9.82 = 12.00 - 2.18$ accounts for the [3] solar composition.", "Assuming an $r$ -process ratio of $\\mbox{[Ba/Eu]}=-0.80$ in Ret II [89], this means $\\left\\langle \\mbox{[Eu/H]}\\right\\rangle = -0.88$ , or $M_{\\rm Eu}/M_{\\rm H} = 152.0 \\times 10^{\\mbox{[Eu/H]} - 11.48} \\approx 10^{-10.18}$ , where 152.0 is the average atomic mass of Eu and $11.48 = 12.00 - 0.52$ accounts for the solar composition.", "The mass ratio $M_{\\rm Eu}/M_r$ is $10^{-3.0}$ , assuming $M_r$ is elements with $A \\ge 80$ from the solar $r$ -process pattern [174], [45], [88].", "Thus, we find that $M_r/M_{\\rm H} \\approx 10^{-7.2 \\pm 0.1}$ , where $M_{\\rm H}$ is an effective dilution mass of hydrogen.", "Inferring the yield $M_r$ of the $r$ -process site depends on what is assumed for $M_{\\rm H}$ .", "[91] previously argued that expected dilution masses range from $10^5-10^7 M_\\odot $ .", "The lower end is set by the initial explosion remnant [124], and the upper end is set by the total available gas in a $10^8 M_\\odot $ dark matter halo that is likely to host Ret II at high redshift.", "This would correspond to $M_r \\sim 10^{-2.2}-10^{-0.2} M_\\odot $ .", "However, the homogeneity of $r$ -process elements in Ret II suggests that it is extremely unlikely for the dilution mass to be near the lower bound of only the explosion remnant's dilution mass.", "Simulations confirm this qualitative argument: [85] found that the high [$r$ /H] abundances in Ret II were difficult to reproduce in simulations because the $r$ -process material was diluted so quickly.", "Hydrodynamically, they were able to achieve high [$r$ /H] ratios by having the $r$ -process ejecta interact with dense cold clumps near the explosion site.", "Stars forming in this scenario would have inhomogeneous abundances inconsistent with Ret II [39], [124].", "We thus suggest a more typical dilution mass should be $10^6-10^7 M_\\odot $ , corresponding to $M_r \\sim 10^{-1.2}-10^{-0.2} M_\\odot $ .", "Another important effect is that a large fraction of $r$ -process can be lost to the intergalactic medium due to the low gravitational potential of 
early dwarf galaxies [11], [28].", "Both empirically and theoretically, only $\\lesssim 10^{-2}$ of metals are retained in dwarf galaxies [48], [153], [103], [134].", "We can estimate the total mass of $r$ -process in Ret II using its present-day stellar mass of ${\\approx }3300M_\\odot $ [141].", "Assuming a hydrogen mass fraction of 0.75, the total $r$ -process mass contained in Ret II today is $10^{-3.8} M_\\odot $ .", "Thus the expected yield of the $r$ -process site should be $M_r \\gtrsim 10^{-1.8} M_\\odot $ , consistent with our previous estimate $M_r \\sim 10^{-1.2}-10^{-0.2} M_\\odot $ .", "Note that the dilution masses described in [91] implicitly include this metal loss to the IGM, as the higher effective dilution masses can be thought of as corresponding to lower metal retention (also see Fig 11 of [124]).", "Together, the higher $r$ -process yield and more prompt $r$ -process event implied by homogeneous $r$ -process mixing slightly favor rare core-collapse supernovae over neutron star mergers as the source of $r$ -process elements in Ret II.", "Our higher expected $r$ -process yield of $10^{-1.2}-10^{-0.2} M_\\odot $ is a better match to the $0.08-0.3 M_\\odot $ of $r$ -process produced in collapsar disk winds [65], [178], [169], [136], but also consistent with magnetorotationally driven jets ($10^{-2.5}-10^{-1.5} M_\\odot $ of $r$ -process, [139]) or neutron star mergers ($10^{-3}-10^{-1} M_\\odot $ , [204], [150]; note GW170817 had $r$ -process mass ${\\approx }10^{-1.5 \\pm 0.3} M_\\odot $ , [51], [101], [179], [180], [42]).", "The fact that there is a few hundred Myr delay after $r$ -process enrichment also favors core-collapse supernovae.", "[93] originally argued that recovery times in UFDs were longer than $10-100$  Myr [86], [90], [24], allowing a significant fraction of ordinary neutron star mergers with $10-100$  Myr delay times to enrich the gas before star formation.", "The added requirement of ${\\gtrsim }100$  Myr of metal mixing puts strong pressure on how prompt the $r$ -process site must be, though very prompt and high-yield neutron star mergers are still on the table [13], [14], [163]." 
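The mass bookkeeping above is simple enough to verify directly; a short script reproducing the quoted numbers is given below. The solar abundances ($\log \epsilon _{\rm Ba,\odot } = 2.18$ , $\log \epsilon _{\rm Eu,\odot } = 0.52$ ), the $r$ -process [Ba/Eu], the Eu-to-total $r$ -process mass fraction, the dilution-mass range, and the present-day stellar mass are all taken from the text; everything else is arithmetic.

```python
import numpy as np

# Inputs quoted in the text.
ba_h = -1.68            # mean [Ba/H] in Ret II
ba_eu_r = -0.80         # r-process [Ba/Eu]
logeps_ba_sun, logeps_eu_sun = 2.18, 0.52   # solar abundances adopted above
m_eu_over_m_r = 10**-3.0                    # Eu mass fraction of total r-process (A >= 80)

# Mass ratios relative to hydrogen (average atomic masses 136.5 and 152.0).
m_ba_h = 136.5 * 10**(ba_h + logeps_ba_sun - 12.0)      # ~10^-9.36
eu_h = ba_h - ba_eu_r                                   # [Eu/H] = -0.88
m_eu_h = 152.0 * 10**(eu_h + logeps_eu_sun - 12.0)      # ~10^-10.18
m_r_h = m_eu_h / m_eu_over_m_r                          # ~10^-7.2

# Implied r-process yield for dilution masses of 1e6-1e7 Msun of hydrogen,
# and the total r-process mass locked in Ret II stars today (M* ~ 3300 Msun, X_H = 0.75).
yield_range = m_r_h * np.array([1e6, 1e7])
m_r_today = m_r_h * 0.75 * 3300.0

print(f"M_Ba/M_H = 10^{np.log10(m_ba_h):.2f}, M_r/M_H = 10^{np.log10(m_r_h):.2f}")
print(f"M_r yield ~ 10^{np.log10(yield_range[0]):.1f} - 10^{np.log10(yield_range[1]):.1f} Msun")
print(f"M_r in Ret II stars today ~ 10^{np.log10(m_r_today):.1f} Msun")
```

Running this bookkeeping recovers the values quoted above: $M_r/M_{\rm H} \approx 10^{-7.2}$ , an implied yield of roughly $10^{-1.2}$ to $10^{-0.2}\,M_\odot $ , and about $10^{-3.8}\,M_\odot $ of $r$ -process material locked in Ret II stars today.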
], [ "No Gas Accretion During Most Ret II Star Formation", "Figure REF shows that the [Ba/H] abundance of the $r$ -process rich stars stays very flat over an extended range of [Fe/H].", "This can be seen quantitatively by comparing the metallicity (Fe) dispersion of $0.32^{+0.10}_{-0.07}$ dex to the $r$ -process (Ba) dispersion of $<0.20$ dex.", "The simplest interpretation of the larger Fe dispersion is that the $r$ -process stars formed over some extended period of time where Ret II was able to self-enrich with iron from supernovae, as expected for a dwarf galaxy [201].", "The flat [Ba/H] abundance would then clearly indicate that there is no pristine gas accretion nor any significant $r$ -process production during the last 70% of Ret II's stellar mass growth.", "If there were significant pristine gas accretion during this time, it would reduce [Ba/H] at high [Fe/H][187] used a similar feature in Draco to argue for discrete $r$ -process events, but in this relatively luminous galaxy there is a degeneracy between the number of $r$ -process enrichment events and the presence/lack of gas accretion..", "This gas cutoff scenario also could explain the [Mg/Ca] trend seen in Figure REF through the integrated galactic initial mass function (IGIMF) [197].", "In this model, a gas-poor galaxy is unable to create the densest and largest molecular clouds, introducing an effective upper mass limit to stars formed.", "Since Mg is predominantly produced in the most massive core-collapse supernovae and Ca is produced in all supernovae, a restricted gas supply will result in lower [Mg/Ca] abundances relative to a fully sampled IMF [135], [95], [116].", "Thus, a lack of gas accretion could explain both the flat [Ba/H] and the declining [Mg/Ca] observed in Ret II.", "This observation should simplify chemical evolution models aiming to reproduce the $r$ -process abundance trends of Ret II [109], [143], [138], [38].", "A lack of gas accretion may indicate something about the broader formation environment of Ret II.", "In particular, it is expected that UFDs like Ret II are ultimately quenched by reionization [31], [15], [30], [155], but it is not yet clear whether reionization immediately removes cold gas from halos or just restricts gas inflow [144], [198], [84], [26], [200].", "Since ${>}70\\%$ of Ret II stars form in the absence of significant gas accretion, it may be that it formed all these stars after reionization.", "In this vein, it is interesting to note that Ret II is a satellite of the Large Magellanic Cloud [147], [55], [9].", "[162] tentatively find that the star formation histories of LMC UFD satellites (including Ret II) take longer to complete the last 10% of their star formation history compared to Milky Way UFD satellites.", "This could suggest that LMC UFD satellites like Ret II were relatively isolated when they formed compared to Milky Way UFD satellites, and thus experienced delayed reionization.", "If so, it would support the concept of patchy reionization at the smallest galactic scales [121], [5].", "For completeness, we note that the above discussion has implicitly assumed that $\\tau _{\\rm mix} $ and $\\tau _{\\rm sf} $ are the same for both Fe and Ba.", "One can imagine scenarios where the timing of stellar feedback causes an early source of Ba to be mixed more than a later source of Fe [152].", "In this case, stars at all metallicities would form simultaneously.", "This scenario seems unlikely for Ret II given the coherent evolution of Mg and Ca with [Fe/H], but it provides 
motivation to obtain more precise star formation histories in Ret II." ], [ "Contamination of Ba by Other Sources", "We have interpreted our Ba measurements in Ret II as tracing pure $r$ -process, based on the pure $r$ -process patterns found in [91] and [159].", "However in principle there could be three possible contaminating sources of Ba that are empirically found in UFDs: (1) a low-yield source of Ba observed in most UFDs, of unknown origin [60], [156], [94], but possibly attributed to $r$ -process in neutrino driven winds (Ji16c, [170]) or s-process in rotating massive stars [64], [119], [181]; (2) late-time AGB enrichment in the ISM [61], [95]; or (3) mass transfer of s-process Ba from a binary companion [63].", "None of these possible contaminants will impact our conclusions.", "The impact of the first two sources is much less than the $r$ -process content of Ret II, and it can be estimated by considering Ba abundances in other UFDs.", "The low-yield Ba source produces typical $\\mbox{[Ba/H]} \\sim -4$ [94].", "Ba in the ISM from AGB stars is not often seen in UFDs given their short star formation durations, but where it is seen it reaches $\\mbox{[Ba/H]}_{\\rm LTE} \\sim -2.5$ [61], [95].", "In both cases, the amount of contamination is at most 1/10 of the Ba in Ret II, too low to make a significant perturbation.", "For the third source, AGB mass transfer tends to produce [Ba/Fe] $\\sim +2$ , much more Ba than is observed in these stars [63], [75].", "Thus, we find it extremely unlikely that Ba is tracing anything other than the single $r$ -process event in Ret II.", "However, we note that one candidate member, Star 97, is quite red both in DES and Gaia photometry.", "It is possible this is due to large amounts of carbon on the surface of the star, in which case it may have experienced mass transfer possibly including Ba.", "If so, this further justifies excluding star 97 from our main results." 
], [ "Conclusion", "We have obtained multi-object spectroscopy of red giant branch members in the ultra-faint dwarf galaxy Reticulum II using VLT/GIRAFFE and Magellan/M2FS.", "Our redetermination of the velocity and metallicity dispersion is consistent with past results, and we detect no significant spatial gradients in the element abundances.", "Ret II is of special interest due to its enrichment by a single $r$ -process event.", "Our primary new result is a quantitative measurement of the [Ba/H] distribution (Figure REF , Table REF ), which is a unique probe of gas dynamics and metal mixing within a faint, currently gas-free dwarf galaxy.", "Approximately 30% of Ret II stars have no detected $r$ -process material, while the other 70% are enriched to a high enhancement.", "We place an upper limit of $\\sigma _{\\rm [Ba/H]} < 0.20$ dex on the intrinsic [Ba/H] dispersion of the high-Ba stars, which implies that the initial $r$ -process enrichment needs to turbulently mix and homogenize for at least 100 Myr before stars form.", "This is the first direct evidence of bursty star formation in a UFD.", "The long mixing time also favors an $r$ -process site that is very prompt and produces a high $r$ -process yield (${\\gtrsim }10^{-1.5} M_\\odot $ ).", "We thus slightly favor rare core-collapse supernovae as the source of $r$ -process elements in this galaxy due to their higher $r$ -process yield, though prompt high-yield neutron star mergers are allowed as well.", "Examining the chemical evolution in Ret II, we find an overall declining [$\\alpha $ /Fe] vs [Fe/H] pattern as expected in dwarf galaxies.", "Since [Ba/H] is flat over an extended [Fe/H] range, this suggests that Ret II did not accrete significant gas during the last 70% of its star formation.", "This is consistent with the observed declining [Mg/Ca] ratio if Ret II was too gas-poor to form the most massive core-collapse supernovae.", "The chemical evolution of Ret II thus suggests that it may have formed in an underdense environment, consistent with its status as a satellite of the Large Magellanic Cloud.", "These constraints on UFD formation and the $r$ -process site demonstrate the power of dwarf galaxy archaeology.", "By finding stars in a common formation environment, it becomes possible to ask questions that could not be answered if these same stars were found individually scattered through the Milky Way.", "Reticulum II is unusually nearby and thus currently accessible for this type of study, but as the next generation of extremely large telescopes comes online, it will become possible to extend similar techniques to study the chemistry of ultra-faint dwarf galaxies throughout the Milky Way [87].", "We thank Edward Olszewski, Meghin Spencer, and Matthew Walker for their help acquiring the M2FS data.", "APJ thanks Andy Casey, Anirudh Chiti, Dan Kelson, Andy McWilliam, and Ting Li for many discussions about data reduction, processing, and analysis.", "APJ thanks Paz Beniamini, Alyson Brooks, Benoit Cote, Andrew Emerick, Evan Kirby, Mordecai Mac-Low, Brian O'Shea, and Yuta Tarumi for enlightening discussions about metal mixing and chemical evolution.", "Most of this study was done while APJ was supported by NASA through Hubble Fellowship grant HST-HF2-51393.001 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555.", "APJ also acknowledges support from a Carnegie Fellowship, the Thacher Research Award in 
Astronomy, and MCT.", "JDS acknowledges support from the U.S. National Science Foundation (NSF) grant AST 1714873.", "IUR acknowledges support from NSF grants PHY 14-30152 (Physics Frontier Center/JINA-CEE), AST 1613536, and AST 1815403/1815767, and the NASA Astrophysics Data Analysis Program, grant 80NSSC21K0627.", "EM, MM, and RSK acknowledge support by the German Research Foundation (DFG) via the Collaborative Research Center SFB 881 The Milky Way System (subprojects A1, A5, A10, B1, B2, B8).", "RSK furthermore thanks for support from the Heidelberg Cluster of Excellence EXC 2181 (Project-ID 390900948) STRUCTURES: A unifying approach to emergent phenomena in the physical world, mathematics, and complex data funded by the German Excellence Strategy, and from the European Research Council in the ERC synergy grant ECOGAL – Understanding our Galactic ecosystem: From the disk of the Milky Way to the formation sites of stars and planets (project ID 855130).", "MB is supported through the Lise Meitner grant from the Max Planck Society.", "We acknowledge support by the Collaborative Research centre SFB 881 (projects A5, A10), Heidelberg University, of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation).", "This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No.", "949173).", "Based on observations collected at the European Southern Observatory under ESO programme 0100.B-0502(A).", "This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile.", "This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France [199].", "This research has made use of NASA's Astrophysics Data System Bibliographic Services.", "This work has made use of the VALD database, operated at Uppsala University, the Institute of Astronomy RAS in Moscow, and the University of Vienna.", "This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium).", "Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.", "This research uses services or data provided by the Astro Data Lab at NSF's National Optical-Infrared Astronomy Research Laboratory.", "NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA), Inc. under a cooperative agreement with the National Science Foundation.", "This project used public archival data from the Dark Energy Survey (DES).", "Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. 
National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana–Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, the Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University, Financiadora de Estudos e Projetos, Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Científico e Tecnológico and the Ministério da Ciência, Tecnologia e Inovação, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey.", "The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Enérgeticas, Medioambientales y Tecnológicas–Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenössische Technische Hochschule (ETH) Zürich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciències de l’Espai (IEEC/CSIC), the Institut de Física d’Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universität München and the associated Excellence Cluster Universe, the University of Michigan, the National Optical Astronomy Observatory, the University of Nottingham, The Ohio State University, the OzDES Membership Consortium, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A&M University.", "Based in part on observations at Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.", "Magellan-Clay (M2FS), VLT-UT2 (FLAMES/GIRAFFE) MOOG [175], [176], smhr [35], [96], emcee [57], Stan [33], dynesty [177], numpy [188], scipy [97], matplotlib [81], pandas [133], seaborn [193], and astropy [4], [149]" ], [ "Radial Velocities and Binarity", "As a relatively nearby UFD of great scientific interest, Ret II has obtained many different epochs of radial velocities.", "We have collected all currently available literature velocities in Table .", "The literature velocities are mostly derived from coadding spectra taken across 1-4 adjacent nights, so the MJD reported here is only accurate to 2 days of precision.", "These velocities are not homogeneous and may suffer from systematic effects.", "l|cc|rrrrrrrr|rrrr|rrrr 19 Literature Radial Velocities ID $v_{\\rm hel}$ $\\sigma _{\\rm vhel}$ $v_{\\rm S15}$ $\\sigma _{\\rm S15}$ $v_{\\rm K15}$ $\\sigma _{\\rm K15}$ $v_{\\rm J16}$ $\\sigma _{\\rm J16}$ $v_{\\rm R16}$ $\\sigma _{\\rm R16}$ $v_{\\rm VLT}$ $\\sigma _{\\rm VLT}$ $v_{\\rm HR}$ $\\sigma _{\\rm HR}$ $v_{\\rm W15}$ $\\sigma _{\\rm W15}$ $v_{\\rm MR}$ $\\sigma _{\\rm MR}$ 2c|MJD=57072 2c|MJD=57090 2c|MJD=57298 2c|MJD=57341 2c|MJD=58052 2c|MJD=58073 2c|MJD=57072 2cMJD=57639 1 $+65.3$ $ 0.2$ $+66.3$ $ 0.2$ $+66.8$ $ 1.0$ $+65.5$ $ 1.0$ $+67.6$ $ 0.7$ $+56.3$ $ 2.4$ 2 $+61.1$ $ 3.2$ $+59.1$ $ 0.9$ $+61.4$ $ 0.4$ $+62.7$ $ 1.0$ $+62.0$ $ 1.0$ $+65.5$ $ 0.7$ 
$+64.9$ $ 0.7$ $+61.8$ $ 0.4$ $+55.1$ $ 2.4$ 3 $+62.0$ $ 0.4$ $+62.3$ $ 1.0$ $+62.0$ $ 1.0$ $+62.2$ $ 1.0$ $+63.5$ $ 0.7$ $+66.8$ $ 0.8$ $+63.2$ $ 0.5$ $+52.4$ $ 2.4$ 4 $+58.5$ $ 0.3$ $+57.7$ $ 1.0$ $+59.6$ $ 0.5$ $+60.9$ $ 1.0$ $+59.7$ $ 1.0$ $+61.2$ $ 0.7$ $+60.4$ $ 0.8$ $+60.4$ $ 0.7$ $+47.3$ $ 2.4$ 5 $+61.7$ $ 0.3$ $+63.5$ $ 0.5$ $+61.9$ $ 1.0$ $+63.8$ $ 0.7$ $+63.2$ $ 0.8$ $+54.0$ $ 2.4$ 6 $+64.3$ $ 0.4$ $+64.4$ $ 1.1$ $+65.6$ $ 0.9$ $+63.5$ $ 1.0$ $+67.4$ $ 0.7$ $+67.0$ $ 0.8$ $+62.9$ $ 1.2$ $+59.7$ $ 2.4$ 7 $+63.2$ $ 0.4$ $+65.2$ $ 1.2$ $+65.9$ $ 1.2$ $+62.7$ $ 1.0$ $+65.7$ $ 0.7$ $+65.0$ $ 0.9$ $+63.9$ $ 2.3$ $+43.6$ $ 2.4$ 8 $+60.2$ $ 0.4$ $+59.8$ $ 1.2$ $+61.9$ $ 0.8$ $+61.9$ $ 1.0$ $+62.5$ $ 0.7$ $+61.7$ $ 0.8$ $+61.8$ $ 1.4$ $+51.6$ $ 2.4$ 9 $+69.2$ $ 0.4$ $+69.7$ $ 1.4$ $+70.8$ $ 1.1$ $+71.6$ $ 1.0$ $+71.2$ $ 0.7$ $+70.8$ $ 0.8$ $+70.0$ $ 1.7$ $+59.6$ $ 2.4$ 10 $+62.1$ $ 3.9$ $+62.3$ $ 1.1$ $+69.1$ $ 1.0$ $+64.0$ $ 0.7$ $+61.4$ $ 0.8$ $+65.6$ $ 1.1$ 11 $+67.0$ $ 0.9$ $+67.9$ $ 1.1$ $+65.4$ $ 1.8$ $+70.2$ $ 3.2$ $+67.9$ $ 1.3$ $+61.3$ $ 2.4$ 12 $+64.6$ $ 0.5$ $+65.7$ $ 1.1$ $+65.0$ $ 1.4$ $+67.2$ $ 0.7$ $+66.7$ $ 0.8$ $+69.1$ $ 1.5$ $+53.2$ $ 2.4$ 13 $+63.6$ $ 2.6$ $+65.6$ $ 1.3$ $+68.2$ $ 1.7$ $+67.8$ $ 0.7$ $+63.1$ $ 0.8$ $+70.4$ $ 1.9$ 14 $+62.3$ $ 0.7$ $+59.3$ $ 1.8$ $+65.3$ $ 0.7$ $+69.0$ $ 3.3$ 15 $+62.6$ $ 0.6$ $+63.2$ $ 1.4$ $+63.4$ $ 1.7$ $+65.3$ $ 0.7$ $+65.1$ $ 3.3$ $+62.5$ $ 1.9$ 16 $+64.6$ $ 0.9$ $+59.1$ $ 8.2$ $+67.4$ $ 0.9$ $ +0.0$ $ 0.0$ 17 $+60.2$ $ 0.7$ $+57.4$ $ 2.4$ $+60.0$ $ 2.1$ $+63.3$ $ 0.7$ $+64.9$ $ 3.3$ $+60.1$ $ 2.1$ $+40.7$ $ 2.4$ 18 $+59.3$ $ 7.1$ $+66.3$ $ 1.4$ $+62.9$ $ 3.7$ $+59.4$ $ 0.8$ $+73.6$ $ 3.3$ 19 $+60.7$ $ 10.5$ $+67.9$ $ 1.4$ $+78.9$ $ 1.8$ $+57.9$ $ 0.8$ $+64.4$ $ 3.3$ $+70.2$ $ 3.3$ $+26.5$ $ 2.4$ 20 $+63.6$ $ 0.7$ $+63.5$ $ 1.4$ $+66.3$ $ 0.7$ $+70.9$ $ 3.4$ $+66.4$ $ 2.9$ 21 $+61.0$ $ 0.8$ $+56.7$ $ 1.9$ $+64.7$ $ 0.9$ $+64.2$ $ 3.4$ 22 $+65.6$ $ 0.7$ $+64.7$ $ 1.8$ $+68.6$ $ 0.8$ $+68.4$ $ 3.5$ $+66.7$ $ 2.0$ 23 $+61.6$ $ 0.7$ $+59.8$ $ 1.8$ $+64.7$ $ 0.8$ $+65.2$ $ 3.5$ 24 $+63.6$ $ 0.8$ $+68.0$ $ 3.5$ $+65.9$ $ 0.8$ $+71.6$ $ 3.4$ 25 $+62.0$ $ 0.7$ $+61.9$ $ 2.0$ $+64.9$ $ 0.8$ $+63.0$ $ 3.4$ $+65.0$ $ 2.9$ 26 $+61.5$ $ 0.8$ $+61.7$ $ 4.8$ $+64.0$ $ 0.9$ $+69.1$ $ 4.0$ 97 $+67.5$ $ 0.7$ $+70.3$ $ 0.7$ 99 $+67.2$ $ 0.8$ $+69.9$ $ 0.8$ $+70.9$ $ 3.8$ 100 $+61.0$ $ 0.8$ $+63.4$ $ 0.8$ 102 $+66.8$ $ 0.7$ $+69.1$ $ 0.7$ 134 $+62.2$ $ 1.0$ $+65.0$ $ 1.0$ $ +0.0$ $ 0.0$ 142 $+61.0$ $ 0.7$ $+64.0$ $ 0.9$ $+63.0$ $ 1.1$ 143 $+58.9$ $ 0.9$ $+61.7$ $ 0.9$ $ +0.0$ $ 0.0$ 144 $+65.0$ $ 1.1$ $+67.6$ $ 1.2$ $+68.5$ $ 2.6$ 151 $+68.0$ $ 1.0$ $+70.8$ $ 1.0$ 154 $+61.9$ $ 1.0$ $+64.6$ $ 1.0$ 157 $+67.9$ $ 0.9$ $+70.5$ $ 0.9$ $+73.1$ $ 3.4$ 188 $+68.4$ $ 1.2$ $+71.2$ $ 1.2$ 192 $+69.1$ $ 0.9$ $+72.2$ $ 0.9$ $+66.9$ $ 3.5$ 195 $+68.1$ $ 0.8$ $+71.1$ $ 0.8$ $+65.2$ $ 4.2$ In a first attempt to calibrate the systematic effects, we adopt the Simon15 velocities as a reference velocity scale, as they have the most stars with common velocities compared to other literature sources.", "For matched stars in each sample, we calculate a weighted mean velocity offset.", "After removing this offset, stars with velocity variations inconsistent with a chi-squared test with $p < 0.01$ are identified as likely binaries [41].", "We iterate this process until convergence, resulting in five binary stars: Star 2, 13, 18, 19, and 21.", "The final mean velocity offsets relative to Simon15 are 1.01 $\\mathrm {\\,km\\,s^{-1}}$ for Koposov15b, 0.63 $\\mathrm 
{\\,km\\,s^{-1}}$ for Ji16c, 1.02 $\\mathrm {\\,km\\,s^{-1}}$ for [159], 2.79 $\\mathrm {\\,km\\,s^{-1}}$ for our VLT spectra, and 2.37 $\\mathrm {\\,km\\,s^{-1}}$ for our HighRes M2FS spectra.", "Our MedRes M2FS data have a very large offset of $-8.9\\mathrm {\\,km\\,s^{-1}} $ , and we thus decided to exclude it from any velocity studies." ], [ "Effect of Microturbulence on Barium Abundances", "Microturbulence ($\\nu _t$ ) is a parameter introduced to 1D stellar atmospheres to account for unmodeled 3D atmospheric effects.", "It affects lines at the saturated part of the curve of growth, where a higher microturbulence effectively desaturates strong lines by adding some extra doppler broadening [70].", "When lots of Fe I lines are available, microturbulence is usually found by balancing Fe I abundance as a function of line strength.", "Empirical measurements of microturbulence show that red giants with lower surface gravities (and temperatures) tend to have higher microturbulence values [7].", "In our VLT spectra, the Ba 6496Å line is at or near saturation when detected.", "For our coolest giants, increasing $\\nu _t$ by 0.2 $\\mathrm {\\,km\\,s^{-1}}$ reduces [Ba/H] by 0.15 dex.", "Since we aim to resolve abundance scatter on the order of 0.20 dex, this systematic effect is often the dominant uncertainty, especially for the brightest giants where the equivalent width is well-measured.", "There are not enough Fe lines to self-consistently measure $\\nu _t$ in our stars, so we must use existing correlations between $\\log g$ and $\\nu _t$ .", "Here, we investigate five different datasets that measured $\\nu _t$ using high-resolution spectroscopy of metal-poor red giant stars, examining systematic differences in $\\nu _t-\\log g$ relations, as well as the scatter around those relations.", "The effect of these choices on our barium abundance results is investigated in Appendix .", "Figure REF shows the result of our investigation.", "The left column in Figure REF plots $\\log g$ vs $\\nu _t$ for data from [7] (B05), [126] (M08), [44] (C13), [158] (R14), and [83] (J15), while the other columns show the measured [Ba/H]$_{\\rm LTE}$ compared to $T_{\\rm eff}$ , [Fe/H], and the signal-to-noise ratio (SNR).", "We separate [158] into a giants-only sample as well (R14g).", "When available, we use the stellar parameters as tabulated in JINAbase [1].", "Horizontal branch stars have higher microturbulence than RGB stars, so they are removed with a cut $\\log g> 0.00286\\,T_{\\rm eff}- 12.7$ .", "To each dataset, we fit a linear and quadratic polynomial for $\\nu _t$ as a function of $\\log g$ , using a robust fitter based on the routine robust_poly_fit in the AstroIDL library.", "The scatter around the fit is measured with the biweight scale of the residuals (a robust standard deviation).", "The coefficients and scatter around each relation are given in Table REF .", "Note that the right column of Figure REF shows that our stars with SNR $< 25$ have a much larger scatter than stars above that threshold.", "Visual examination of the spectra (Figure REF ) suggests that these stars are adversely affected by inaccurate sky subtraction that is blended with the Ba line.", "We thus have decided to exclude the low-SNR stars from most analyses.", "Also note that we used MOOG LTE/ATLAS [Ba/H] abundances for this figure, though the conclusions are robust if using NLTE instead." 
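To make the fitting step just described concrete, the following is a minimal Python sketch, not the code actually used here (which relied on the AstroIDL routine robust_poly_fit): an iteratively clipped polynomial fit of $\nu _t$ against $\log g$ with the scatter about the relation measured by the biweight scale of the residuals. The sample arrays are placeholders standing in for one of the literature datasets.

```python
# Minimal sketch of the nu_t - log g fitting step: an iteratively clipped quadratic fit
# (standing in for AstroIDL's robust_poly_fit) plus the biweight scale of the residuals
# as a robust estimate of the scatter about the relation.
import numpy as np
from astropy.stats import biweight_scale

def robust_polyfit(logg, vt, degree=2, clip=3.0, n_iter=5):
    """Fit vt(logg) with a polynomial, iteratively discarding outliers beyond clip*scatter."""
    mask = np.ones(logg.size, dtype=bool)
    for _ in range(n_iter):
        coeffs = np.polyfit(logg[mask], vt[mask], degree)
        resid = vt - np.polyval(coeffs, logg)
        scatter = biweight_scale(resid[mask])
        mask = np.abs(resid) < clip * scatter
        if mask.sum() <= degree + 1:   # guard against over-clipping a small sample
            break
    return coeffs, scatter

# placeholder arrays standing in for one of the literature samples of metal-poor giants
logg_sample = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
vt_sample   = np.array([1.80, 1.66, 1.55, 1.45, 1.38, 1.30])   # km/s, illustrative only

coeffs, scatter = robust_polyfit(logg_sample, vt_sample)
print("quadratic fit coefficients (highest order first):", coeffs)
print("biweight scatter about the fit: %.2f km/s" % scatter)
```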
], [ "Mean Relations", "There are clear differences in the average $\\nu _t-\\log g$ relation across different literature samples: at low $\\log g$ , the first three rows (B05, C13, J15) have systematically higher $\\nu _t$ than the last three rows (M08, R14, R14g).", "The origin of this difference is not clear.", "One possibility is the spectra for M08 and R14 have SNR $\\sim 100$ , substantially higher than B05 and J15 which typically have SNR $\\sim 30$ .", "The low SNR of weak iron lines could bias microturbulence too high (as described by [123]).", "However, C13 also have SNR $\\sim 100$ and obtain higher microturbulence values.", "Another possibility is that NLTE effects bias the microturbulence due to underlying correlations between excitation potential, line strength, and typical NLTE correction size [17].", "Further investigation of these differences would be valuable but is beyond the scope of this paper appendix.", "In this paper, we have decided to pick a $\\nu _t-\\log g$ relation that leaves no trend in the [Ba/H] and $T_{\\rm eff}$ for our stars.", "In Figure REF , the 2nd column shows [Ba/H] vs $T_{\\rm eff}$ , where the first three rows of Figure REF have a strong systematic trend such that cooler stars have lower [Ba/H].", "The bottom three rows do not have a significant trend.", "The coolest stars have the highest SNR and lowest statistical uncertainty, so the different mean trend makes a significant difference on our final inferred Ba scatter.", "Note that the trends are primarily driven by the two coolest and brightest stars ($T_{\\rm eff}\\sim 4500\\,$ K).", "Because these two stars have low statistical uncertainty on their [Ba/H] abundances, the systematic effect of microturbulence can make a large difference in the inferred [Ba/H] scatter.", "Because we do not expect a trend between Ba abundance and temperature, we have decided to adopt the quadratic $\\nu _t-\\log g$ relation from the giant stars in [158] (R14g) as our fiducial results.", "R14g have the highest SNR spectra of metal-poor giants with the largest wavelength coverage out of all these data samples.", "However, this $\\nu _t-\\log g$ relation is different from most previous studies of dwarf galaxy stellar abundances [92], [89], [159].", "Thus in Appendix , we give all results using the $\\nu _t-\\log g$ relation from B05 as well, which matches those previous abundance studies.", "We note that adding NLTE effects for Ba exacerbates the trend for [Ba/H] vs $T_{\\rm eff}$ when using the B05 $\\nu _t$ -$\\log g$ relation, because the NLTE corrections are larger (more negative) when microturbulence is higher." 
], [ "Microturbulence Scatter", "Typically, a systematic uncertainty of ${\\approx }$ 0.2 $\\mathrm {\\,km\\,s^{-1}}$ is adopted for microturbulence, which accounts for the systematic mean differences described above.", "However, because we are interested in the abundance scatter within Ret II, another crucial value is the intrinsic scatter in microturbulence around a “true” $\\nu _t-\\log g$ relation, i.e., changes in the atmospheric structure that are unmodeled by $\\log g$ alone.", "This error is likely smaller than the observational scatter, because the microturbulence measurements themselves are noisy.", "Examining our five datasets, two have a scatter of ${\\sim }0.2$ $\\mathrm {\\,km\\,s^{-1}}$ (C13, J15), while three have a scatter of ${\\sim }0.1$ $\\mathrm {\\,km\\,s^{-1}}$ (B05, M08, R14).", "Using a smaller intrinsic $\\nu _t$ scatter would increase the significance of our scatter detections, while a larger intrinsic $\\nu _t$ scatter reduces the significance.", "Since we have adopted the mean $\\nu _t-\\log g$ relation from R14g, we also decide to adopt the intrinsic scatter of $0.13\\mathrm {\\,km\\,s^{-1}} $ from that data sample.", "In our systematic investigations using the B05 sample, we adopt the corresponding intrinsic scatter of $0.12\\mathrm {\\,km\\,s^{-1}} $ .", "Figure: Impact of different ν t -logg\\nu _t-\\log g relations.Left column: logg\\log g vs ν t \\nu _t for each sample of stars.", "The thin red lines at the bottom indicate the logg\\log g of Ret II stars.", "The best quadratic fit is plotted as a thick red line.", "The grey points show stars from all other rows for context.", "Left-middle column: [Ba/H] vs T eff T_{\\rm eff}.", "Right-middle column: [Ba/H] vs [Fe/H].", "Right column: [Ba/H] vs SNR.", "Stars with SNR <25< 25 display substantially larger [Ba/H] scatter, likely due to residuals from sky subtraction.Overall, the first three rows (B05, C13, J15) have similar trends, showing an upturn in ν t \\nu _t at low logg\\log g but a very noticable trend in [Ba/H] vs T eff T_{\\rm eff}.The last three rows have smaller ν t \\nu _t and little trend in [Ba/H] vs T eff T_{\\rm eff} (excluding stars with SNR <25< 25).l|cc|cc 5 Fit parameters for $\\nu _t$ Sample Quadratic Fit $\\sigma $ Linear Fit $\\sigma $ B05 $0.1001 \\log g^2 + -0.7394 \\log g + 2.847$ 0.12 $-0.2527 \\log g + 2.316$ 0.15 C13 $0.1048 \\log g^2 + -0.7744 \\log g + 2.965$ 0.18 $-0.2189 \\log g + 2.300$ 0.20 J15 $0.1307 \\log g^2 + -0.9812 \\log g + 3.322$ 0.21 $-0.5217 \\log g + 2.973$ 0.22 M08 $0.0175 \\log g^2 + -0.3242 \\log g + 2.009$ 0.08 $-0.2545 \\log g + 1.944$ 0.08 R14 $0.0471 \\log g^2 + -0.3474 \\log g + 1.969$ 0.18 $-0.1201 \\log g + 1.764$ 0.18 R14 giants $0.0386 \\log g^2 + -0.3313 \\log g + 1.960$ 0.13 $-0.2247 \\log g + 1.897$ 0.10" ], [ "Systematic Effects on Barium Scatter", "Here we explore the effect of different data subsets and microturbulence relations on the main result of this paper, the [Ba/H] mean and scatter.", "For the data samples, we consider permutations of membership (clear members only vs including candidate members) and MULTI NLTE/MARCS vs MOOG LTE/ATLAS.", "For the microturbulence relations, we use the fiducial R14 giants (R14g) relation, as well as the B05 relation that has a higher microturbulence for the coolest/lowest gravity giants.", "For each of these permutations, we fit the two-component Ba scatter model described in Section REF .", "Note that while our fiducial model is run with a very large number of steps, for the other models we only sampled 
to reach ${\\gtrsim }100$ effective samples, and thus the uncertainties and limits on the parameters will be less accurate.", "Table  gives the results of the model fits.", "The first row is our fiducial value, while the other rows show various data permutations.", "$\\mu _1$ and $\\sigma _1$ are the most important values, indicating the mean and intrinsic spread on the detected [Ba/H] abundances.", "$\\mu _2$ and $\\sigma _2$ are the mean and scatter of the undetected [Ba/H] component, which is not well-constrained given that no low [Ba/H] abundances were detected.", "$p_2$ is the fraction of stars in the undetected [Ba/H] component, i.e., $1-p_2$ is the fraction of $r$ -enhanced stars.", "The uncertainties are $1\\sigma $ , and the limit on $\\sigma _1$ is a 95% limit.", "We point out three main conclusions of Table .", "First, the MULTI NLTE/MARCS mean [Ba/H] abundances ($\\mu _1$ ) are typically lower than the MOOG LTE/ATLAS abundances by about 0.3 dex.", "Figure REF shows that the typical [Ba/H] correction going from MOOG to MULTI is $-0.27 \\pm 0.04$ dex in a way that is fairly close to a constant offset.", "This is a result both of the different model atmospheres as well as the effect of NLTE.", "Second, when considering just the clear member stars, none of the models detect a significant [Ba/H] dispersion $\\sigma _1$ .", "The constraint is stronger when using NLTE, but weaker when using the B05 microturbulence relation instead of the R14g microturbulence relation.", "This is driven primarily by the coolest Ret II star (ID 1), which is most affected by the different microturbulence relations (Figure REF ).", "Third, when including the candidate member stars, all the upper limits on $\\sigma _1$ get looser, and actually in one case (B05 MOOG with candidates) the intrinsic dispersion is resolved at $2\\sigma $ .", "This is predominantly because of the outlier star 14, which has a weak but detected Ba line.", "If this star is actually part of Ret II, then our two-component model for [Ba/H] is likely insufficient to describe the data because star 14 is well outside of the main peak of [Ba/H] detections, but well above the more stringent [Ba/H] upper limits.", "After examining Table  and Figure REF , we decided that using the R14g MULTI/NLTE results with only clear members is the most reliable measurement.", "It is clear that using NLTE and definite members will result in a better measurement, and eliminating the trend with stellar parameters discussed in Appendix  justifies using R14g instead of B05.", "However, for completeness, we show several permutations of best-fit [Ba/H] distributions in Figure REF .", "The top-left panel shows the R14g and NLTE abundances used in the main paper, but plotting all low-SNR detections as small data points and low-SNR upper limits as grey arrows.", "As also seen in the right column of Figure REF , the low-SNR data are skewed towards higher [Ba/H] abundances primarily due to bad sky subtraction (Figure REF ).", "The top-right panel shows three alternate fits to different permutations of data used (members and candidates; high- and low-SNR data).", "The bottom left panel shows the effect of changing the radiative transfer, and the bottom-right panel shows the effect of changing the microturbulence relation.", "These differences make a relatively small change to the Ba dispersion (which is not resolved) but a fairly large change to the mean abundance.", "Figure: Differences between NLTE (MULTI/MARCS) and LTE (MOOG/ATLAS) abundances for stars with SNR 
>25> 25.", "Left: differences as a function of [Ba/H] (LTE).Right: differences as a function of T eff T_{\\rm eff}.", "The NLTE correction for saturated lines clusters closely around Δ\\Delta [Ba/H]=-0.27±0.04=-0.27 \\pm 0.04 (the outlier is the candidate member star 14 with a relatively low [Ba/H] abundance).Figure: Exploration of different best-fit models by permuting the data sample (top-right panel),radiative transfer (bottom-left panel), and logg-ν t \\log g-\\nu _t relation (bottom-right panel).l|ccc|cc|c 7 [Ba/H] distribution fits (stars with SNR $>25$ ) Data $\\mu _1$ $\\sigma _1$ $\\sigma _1$ limit $\\mu _2$ $\\sigma _2$ $p_2$ R14g MULTI NLTE members $-1.68^{+0.07}_{-0.07}$ $ 0.05^{+0.08}_{-0.03}$ $<0.20$ $-4.32^{+0.49}_{-0.46}$ $ 0.08^{+0.32}_{-0.06}$ $ 0.28^{+0.12}_{-0.10}$ R14g MOOG LTE members $-1.38^{+0.06}_{-0.06}$ $ 0.06^{+0.09}_{-0.04}$ $<0.22$ $-4.26^{+0.53}_{-0.50}$ $ 0.09^{+0.32}_{-0.07}$ $ 0.28^{+0.12}_{-0.10}$ B05 MULTI NLTE members $-1.91^{+0.07}_{-0.06}$ $ 0.11^{+0.10}_{-0.08}$ $<0.28$ $-4.31^{+0.49}_{-0.47}$ $ 0.09^{+0.36}_{-0.07}$ $ 0.26^{+0.12}_{-0.10}$ B05 MOOG LTE members $-1.60^{+0.07}_{-0.07}$ $ 0.14^{+0.09}_{-0.09}$ $<0.30$ $-4.22^{+0.48}_{-0.51}$ $ 0.11^{+0.37}_{-0.09}$ $ 0.23^{+0.14}_{-0.13}$ R14g MULTI NLTE with candidates $-1.73^{+0.07}_{-0.08}$ $ 0.12^{+0.11}_{-0.09}$ $<0.31$ $-4.12^{+0.57}_{-0.56}$ $ 0.23^{+0.55}_{-0.20}$ $ 0.26^{+0.12}_{-0.11}$ R14g MOOG LTE with candidates $-1.43^{+0.07}_{-0.08}$ $ 0.13^{+0.13}_{-0.09}$ $<0.36$ $-3.77^{+0.62}_{-0.69}$ $ 0.62^{+0.28}_{-0.50}$ $ 0.29^{+0.14}_{-0.11}$ B05 MULTI NLTE with candidates $-1.95^{+0.07}_{-0.04}$ $ 0.19^{+0.10}_{-0.10}$ $<0.37$ $-4.25^{+0.52}_{-0.51}$ $ 0.10^{+0.46}_{-0.08}$ $ 0.22^{+0.12}_{-0.09}$ B05 MOOG LTE with candidates $-1.69^{+0.11}_{-0.10}$ $ 0.24^{+0.11}_{-0.10}$ $<0.46$ $-3.83^{+0.74}_{-0.61}$ $ 0.40^{+0.44}_{-0.37}$ $ 0.25^{+0.15}_{-0.13}$" ], [ "Additional Chemical Abundances", "We provide a table of member stars with sufficiently high S/N in the M2FS HiRes Mg b data to measure detailed chemical abundances.", "This illustrates the usefulness of the M2FS Mg Wide configuration for measuring detailed chemical abundances in metal-poor stars.", "rr|rr|rr|rr|rr|rr|rr 14 M2FS Mg b Abundances ID SNR [Mg/H] $\\sigma _{\\rm Mg}$ [Ca/H] $\\sigma _{\\rm Ca}$ [Ti/H] $\\sigma _{\\rm Ti}$ [Cr/H] $\\sigma _{\\rm Cr}$ [Fe/H] $\\sigma _{\\rm Fe}$ [Nd/H] $\\sigma _{\\rm Nd}$ 2 39.3 $-2.48$ $ 0.12$ $-2.21$ $ 0.07$ $-2.29$ $ 0.10$ $-2.83$ $ 0.13$ $-2.63$ $ 0.11$ $-1.20$ $ 0.09$ 3 34.2 $-2.59$ $ 0.13$ $-2.38$ $ 0.07$ $-2.57$ $ 0.08$ $-3.35$ $ 0.11$ $-2.78$ $ 0.13$ $-1.35$ $ 0.08$ 4 37.3 $-2.55$ $ 0.11$ $-2.47$ $ 0.08$ $-2.71$ $ 0.08$ $-3.38$ $ 0.09$ $-3.14$ $ 0.08$ 5 19.4 $-2.52$ $ 0.13$ $-1.86$ $ 0.13$ $-1.65$ $ 0.19$ $-2.10$ $ 0.20$ $-1.93$ $ 0.17$ $-0.89$ $ 0.15$ 6 23.5 $-2.75$ $ 0.13$ $-2.24$ $ 0.15$ $-2.55$ $ 0.10$ $-2.81$ $ 0.12$ 7 13.8 $-2.67$ $ 0.22$ 8 15.2 $-2.03$ $ 0.16$ $-1.52$ $ 0.17$ $-1.76$ $ 0.14$ $-2.42$ $ 0.23$ $-2.17$ $ 0.12$ $-0.90$ $ 0.12$ 9 17.0 $-2.72$ $ 0.13$ $-2.55$ $ 0.12$ $-3.32$ $ 0.14$ $-2.67$ $ 0.11$ 10 15.3 $-2.09$ $ 0.18$ $-2.44$ $ 0.17$ $-3.49$ $ 0.16$ $-2.86$ $ 0.16$ 12 15.2 $-2.84$ $ 0.15$ $-1.94$ $ 0.16$ $-2.40$ $ 0.13$ $-3.07$ $ 0.22$ 13 9.9 $-2.60$ $ 0.16$ $-2.46$ $ 0.27$" ] ]
2207.03499
[ [ "Causal Effective Field Theories" ], [ "Abstract Physical principles such as unitarity, causality, and locality can constrain the space of consistent effective field theories (EFTs) by imposing two-sided bounds on the allowed values of Wilson coefficients.", "In this paper, we consider the bounds that arise from the requirement of low-energy causality alone, without appealing to any assumptions about UV physics.", "We focus on shift-symmetric theories, and consider bounds that arise from the propagation around both a homogeneous and a spherically-symmetric background.", "We find that low-energy causality, namely the requirement that there are no resolvable time advances within the regime of validity of the EFT, produces two-sided bounds in agreement with compact positivity constraints previously obtained from $2 \\rightarrow 2$ scattering amplitude dispersion relations using full crossing symmetry." ], [ "Introduction", "From a bottom-up perspective, the construction of effective field theories (EFTs) based on symmetry principles allows us to compute observables in the infrared (IR) without the full knowledge of the ultraviolet (UV) completion of the theory.", "This has proven to be a useful approach not only in particle physics and cosmology but also when studying gravitational systems.", "While the EFT contains an infinite number of higher derivative interactions, at low energies only a finite number are relevant at a given order in the EFT expansion.", "Nevertheless, symmetry principles on their own are not sufficient to ensure that the EFT is unitary and causal.", "Imposing these physical principles leads to constraints on the possible values of the coefficients in the Wilsonian effective action of the low energy EFT [1].", "A well-known approach for bounding these Wilson coefficients consists of looking at dispersion relations for $2 \\rightarrow 2$ scattering amplitudes and engineering positive bounded functions of the scattering amplitude [2], [3], [4], [5], [6], [7], [8]Earlier approaches in the chiral perturbation theory context are found in [9], [10], [11], [12], [13], [14]..", "The associated positivity bounds require assumptions about the UV completion such as unitarity, locality, causality, Poincaré symmetry, and crossing symmetry.", "Additionally, one can obtain stronger bounds when considering weakly coupled tree-level UV completions.", "In recent years this has proven to be a fruitful approach (see for example [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30], [31], [32], [33], [34], [35], [36], [37], [38]).", "Crucially in [33], [34] it was shown that incorporating the constraints of full crossing symmetry, now referred to as null constraints, imposes two-sided positivity bounds generically on the space of all Wilson coefficientsThis phenomenon was already noted in [15], [16], [17], [18], [19], [20] in the context of massive spin-1 and -2 theories where the two-sidedness comes from consideration of different external polarizations..", "The purpose of the present work is to show that these two-sided bounds can be largely anticipated from low-energy causality considerations alone.", "The extension of positivity bounds to (massless) gravitational theories and arbitrary curved spacetimes, and more specifically to time-dependent gravitational backgrounds is not straightforward [39], [40].", "Gravitational amplitudes in Minkowski spacetime have recently been incorporated by using dispersive arguments that evade the $t$ -channel 
pole inevitable in gravitational amplitudes [41], [42], [43], [44], [45] or account for it by its implied Regge behaviour [46], [47], [48], [49], [50], [51].", "While perturbative unitarity rules can be generalised on curved spacetime [52], analyticity has proven more challenging and some initial explorations of positivity bounds to curved spacetimes were proposed in [53], [54], [55].", "Further analyses considering that the bounds arising from positivity constraints around a Minkowski vacuum can be translated into bounds for Wilson coefficients around a curved vacuum are examined in [56], [57], [58], [59], [60], [61].", "The main difficulties in constructing dispersion relations in curved backgrounds arise due to the broken Lorentz symmetries and the lack of an S-matrix.", "Some progress has been made recently for broken Lorentz boost theories [62] and in de Sitter spacetimes where there is an equivalent notion of positivity of spectral densities [63], [64], [65], [66], [67], [68].", "By contrast the causality approach discussed here is easily generalizable to curved spacetimes.", "In this paper, we will focus on constraints arising purely from causality in the low energy regime.", "Our central tool is the scattering time delay well studied in non-relativistic scattering [69], [70], [71], [72], [73] and gravitational scattering [74], [75] which describes in the semi-classical (WKB) or eikonal approximation the delay of a scattered wave relative to a freely propagating wave.", "Causality violation is associated with the presence of a resolvable time advance, and this criterion has in recent years been utilised to impose similar bounds on Wilson coefficients [76], [77], [78], [79], [80], [81], [82], [83], [84], [85], [86] which, importantly, do not require any assumption on the UV behaviour of the theory.", "Indeed, since small superluminalities could lead to correlation functions having support outside of the light-cone when present at large distances, the violations of causality can be measured within the low energy EFT that describes the infrared physics.", "In a generic EFT, the higher-derivative interactions will modify the equation of motion for the propagation of a perturbation around an arbitrary background rendering a sound speed $c_s\\ne 1$ .", "Note however that a small superluminal low-energy speed is not necessarily in contradiction with causality since the would-be observation detecting violations of causality could turn out to be unmeasurable within the regime of validity of the EFT [87], [88], [89].", "For a local field theory in Minkowski spacetime, causality tells us that the retarded Green's function evaluated in an arbitrary quantum state does not have support outside of the forward Minkowski light-cone.", "For a generic EFT, locally the propagation of information is encoded in effective metric arising in the hyperbolic equations of motion for small fluctuations around a given background which at leading order is determined by the sound speed $c_s$ and reads $\\mathrm {d} s_{\\text{eff.", "}}^{2}=-c_{s}^{2}(x^\\mu ,\\omega ) \\mathrm {d}t^{2}+\\mathrm {d} \\vec{x}^{2} \\ .", "$ Generically this speed is dependent on the momentum scale/frequency of propagating fluctuations $\\omega $ .", "Causality does not directly impose constraints on the phase velocity, but it requires that its high frequency (high $\\omega $ ) limit, that is, the front velocity is luminal $c_{s}^{2}(x^\\mu ,\\infty )=1$ .", "This determines the support of the retarded propagator and implies that 
information propagates (sub)luminally.", "Furthermore, it can be shown that causality implies analyticity of the scattering amplitude and refractive index in the upper half complex $\omega $ -plane [90], [91], [92].", "Here, we will only focus on the causal properties of the EFT as encoded on light-cones defined by the effective metric in Eq. (REF ) in the low frequency regime where the EFT is under control.", "In the EFT, the true front velocity is unknown, as is whether there is a Lorentz invariant UV completion.", "Furthermore, demanding locally the strict bounds $c_{s}^{2}(x^\mu ,\omega ) \le 1$ is too strong since the associated apparent superluminality may be unresolvable within the EFT (furthermore, in the gravitational context the local speed is sensitive to field redefinitions, although this last subtlety will not be relevant here).", "On curved backgrounds, the notion of asymptotic causality [75], [76] (requiring the absence of superluminalities as compared to the asymptotic flat metric, which imposes bounds on the net scattering time delay) is a physical requirement, but it does not always capture the full implications of causality.", "In fact, it leads to weaker bounds than the notion of infrared causality [87], [88], [89], [86], [85] (requiring the absence of superluminalities as compared to the local metric, which imposes bounds on the net scattering time delay minus the Shapiro time delay).", "The presence of local low energy superluminality does not in itself imply the possibility of creating closed time-like curves.", "For that, these superluminalities ought to be maintained for sufficiently large regions of spacetime.", "A cleaner diagnostic is the scattering time delay, which is defined from the S-matrix and is hence independent of field redefinitions.", "The scattering time delay for a given incident state containing a particle of energy $\omega $ may be defined in terms of the $S$ -matrix by $\Delta T = -i \left\langle {\rm in} \right| \hat{S}^{\dagger } \frac{\partial }{\partial \omega } \hat{S} \left|{\rm in} \right\rangle \, .$", "The scattering phase shifts may be defined as the eigenvalues of the $S$ -matrix, $\hat{S}|{\rm in}\rangle =e^{2i\delta }|{\rm in}\rangle $ , so that in an incident eigenstate the time delay is simply $\Delta T =2 \frac{\partial \delta }{\partial \omega }\, .$", "For example, for one-particle scattering in a spherically symmetric background, the $S$ -matrix diagonalises in multipoles $\ell $ and we may define the associated multipole time delays $\Delta T_{\ell } =2 \frac{\partial \delta _{\ell } }{\partial \omega }\Big |_\ell \, .$", "In the large-$\ell $ limit, we may consider scattering at fixed impact parameter $b=(\ell +1/2)\omega ^{-1}$ , giving the time delay traditionally calculated in the Eikonal approximation [93], [94] $\lim _{\ell \rightarrow \infty } \delta _{\ell =b \omega -1/2}(\omega ) = \delta _{\rm Eikonal}(\omega ,b) \, ,$", "for which the time delay is (see for example [76]) $\Delta T_{b} =2 \frac{\partial \delta _{\ell } }{\partial \omega } \Big |_b \, .$", "The signature of true causality violation would be the manifest existence of closed time-like curves within the regime of validity of the EFT; however, it is understood that such phenomena are akin to experiencing a resolvable scattering time advance (within the regime of validity of the EFT).", "Strict positivity of the scattering time delay is sometimes incorrectly imposed; this is not required since the time delay is only a meaningful indication of causality in the semi-classical region (WKB or eikonal).",
"The resolvability requirement comes from the uncertainty principle, which is reflected in the fact that a time advance no bigger than the uncertainty $\Delta t \sim \omega ^{-1}$ is clearly not in conflict with causality.", "Indeed in general, as is well understood, scattering time advances can be mildly negative without contradicting causality, but only in a bounded way.", "For example, for s-wave (monopole) scattering in a spherically symmetric potential which vanishes for $r>a$ , causality imposes the bound on the scattering time delay of the form [69], [70], [71], [72], [73] $\Delta T_{\ell =0} \ge - \frac{2a }{v} + \frac{1}{k v}\sin (2 k a+\delta _0) \ge - \frac{2a }{v} - \frac{1}{k v}\, ,$ with $v$ the group velocity and $k$ the momentum with $\omega \sim \mathcal {O}(k v)$ .", "The first term gives the allowed time advance associated with the spherical waves scattering off the boundary $r=a$ , and the second term gives an allowed time advance due to the wave nature of propagation, i.e. the uncertainty principle.", "For the intermediate scale frequencies and smooth backgrounds considered in what follows the first term will be absent (see Appendix for a discussion) but we must still allow for the uncertainty principle.", "Hence our de facto relativistic causality requirement is that $\Delta T \gtrsim -\frac{1}{\omega } \, ,$ applied in the relativistic region where the background is sufficiently smooth on scales set by the wavelength $\omega ^{-1}$ that the hard sphere type time advances $- 2a/v$ are absent.", "The goal of this paper is then to determine constraints we obtain on a given EFT by imposing (REF ) around different backgrounds.", "Since our primary concern will be non-gravitational scalar field theories, we can choose to probe the EFT by adding an external source.", "This device allows us to consider backgrounds which are not solutions of the unsourced background equations of motion.", "By choosing different sources, we can adjust the background solution to probe different possible scattering phases, and by extremising over the choices of backgrounds we will be able to obtain competitive constraints from the scattering time delay.", "The rest of the paper is structured as follows.", "In Section , we introduce the shift-symmetric low energy scalar EFT we will be considering and discuss the positivity constraints that arise from consideration of their scattering amplitudes.", "We also provide generic arguments for the expected time delay within a WKB approach on generic backgrounds.", "For concreteness, we then focus on specific profiles for the rest of the manuscript.", "In Section , we consider the simple case of a homogeneous background and argue for the need of less symmetric configurations to make further contact with positivity bounds.", "We then proceed to consider the scattering of perturbations around a spherically-symmetric background in Section .", "We examine two limits: one where the waves have no angular dependence and the other where they have large angular momentum.", "For each of these cases, we spell out carefully the conditions for the validity of the EFT and the WKB approximation.", "After computing the time delay and requiring that we cannot obtain a resolvable violation of causality we obtain bounds on the Wilson coefficients of the EFT.", "The case of no angular momentum gives rise to a lower bound while the large angular momentum
case draws an upper bound that approaches the non-linear positivity bounds obtained in [33], [34].", "Lastly, we discuss our results and conclude in Section .", "In the Appendices, we show details of our calculations at higher orders in the EFT and for large angular momentum.", "We also explain our setup for obtaining bounds on the Wilson coefficients." ], [ "Low energy effective field theory and propagation speed", "In this paper, we consider the requirements for a scalar effective field theory to be causal.", "For pedagogical simplicity we focus on theories invariant under a shift symmetry $\phi \rightarrow \phi +c$ .", "Since we are interested in comparing the constraints arising from $2 \rightarrow 2$ tree-level scattering, we will consider only operators up to quartic order in the field $\phi $ , and we will ensure to work in a regime where operators that are higher order in the field remain irrelevant to our causality considerations.", "In the following, we work with a minimal set of such independent operators up to dimension-12, so that our Lagrangian is given by [95] $\mathcal {L}= - \frac{1}{2} (\partial \phi )^2 - \frac{1}{2} m^2 \phi ^2+ \frac{g_8}{\Lambda ^4} (\partial \phi )^4+ \frac{g_{10}}{\Lambda ^6} (\partial \phi )^2 \Big [ (\phi _{, \mu \nu })^2 - (\Box \phi )^2 \Big ] + \frac{g_{12}}{\Lambda ^8} (( \phi _{, \mu \nu } )^2 )^2 - g_{\text{matter}} \phi J\, ,$ where $(\phi _{,\mu \nu })^2= \partial _{\mu } \partial _{\nu } \phi \partial ^{\mu } \partial ^{\nu } \phi $ , $(\partial \phi )^2=\partial _\mu \phi \partial ^\mu \phi $ , $g_{\text{matter}}$ is the coupling strength to external matter and $J$ is an arbitrary external source.", "Note that for convenience we choose to write down the dimension-10 operator as the quartic Galileon [96].", "(The time delay remains manifestly invariant under field redefinitions as long as we can neglect boundary terms; this can be seen for instance explicitly in Section REF for the zero angular momentum case up to the EFT order that we consider here.)", "The scale $\Lambda $ has been introduced as the standard cutoff of this low energy EFT.", "Note that even though some EFTs may be reorganised so as to remain valid beyond $\Lambda $ (see for instance [97] for a discussion), here we take the more conservative approach and consider the low energy EFT to break down at $\Lambda $ .", "Except when we consider the case $g_8=0$ , it proves convenient to redefine $\Lambda $ so that $g_8=1$ ."
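As an illustrative cross-check (not part of the original analysis), the following SymPy sketch expands the $g_8$ operator of this Lagrangian to quadratic order in a perturbation around a homogeneous background $\bar{\phi }(t)$ and reads off the low-frequency sound speed; it recovers $c_s^2 \simeq 1 - 8 g_8 \dot{\bar{\phi }}^2/\Lambda ^4 + 96 g_8^2 \dot{\bar{\phi }}^4/\Lambda ^8$ , in agreement with the $g_8$ terms of the homogeneous-background expression quoted later.

```python
# SymPy cross-check (illustrative sketch, not the paper's computation): expand the g8 operator
# to quadratic order in a perturbation psi around a homogeneous background phi_bar(t),
# in signature (-,+,+,+), and read off the low-frequency sound speed.
import sympy as sp

g8, Lam, eps = sp.symbols('g8 Lambda epsilon', positive=True)
pdot = sp.symbols('phidot', real=True)                      # background velocity d(phi_bar)/dt
psidot, gradpsi = sp.symbols('psidot gradpsi', real=True)   # dpsi/dt and |grad psi|

# Lorentz scalar (d phi)^2 for phi = phi_bar(t) + eps*psi, with eps a bookkeeping parameter
X = -pdot**2 + 2*eps*(-pdot*psidot) + eps**2*(-psidot**2 + gradpsi**2)
L = -sp.Rational(1, 2)*X + g8/Lam**4 * X**2                 # keep only the g8 operator (m = g10 = g12 = 0)

L2 = sp.expand(sp.expand(L).coeff(eps, 2))                  # quadratic action for the perturbation
Kt = L2.coeff(psidot, 2)                                    # kinetic coefficient of psidot^2
Kx = -L2.coeff(gradpsi, 2)                                  # gradient coefficient of |grad psi|^2
cs2 = sp.series(sp.cancel(Kx/Kt), pdot, 0, 6).removeO()
print(sp.expand(cs2))   # -> 1 - 8*g8*phidot**2/Lambda**4 + 96*g8**2*phidot**4/Lambda**8
```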
], [ "Positivity Bounds:", "The aim of what follows is to establish to which extent positivity bounds constraining $\\lbrace g_8 , g_{10}, g_{12} \\rbrace $ can be reproduced using low energy infrared causality arguments (i.e.", "the statement of causality as manifested directly at the level of the low energy EFT without any prior on the embedding of this EFT within a unitary high energy completion).", "Technically, the derivation of the positivity bounds requires the presence of a mass gap and to this purpose, in principle, we can always introduce a shift-symmetry-breaking mass term in Eq.", "(REF ).", "The mass term can indeed be treated as an irrelevant deformation of the shift-invariant Lagrangian which, at the quantum level, does not induce any further symmetry-breaking operators [98], [22].", "In the following, we will be working in the limit $m\\ll \\omega $ where the mass term can be neglected (hence effectively restoring shift symmetry).", "The positivity bounds from [33], [34] can be translated into bounds on the Wilson coefficients appearing in the Lagrangian of Eq.", "(REF ) by using Table REF and read $g_8 > 0, \\qquad g_{12} > 0, \\qquad g_{10} < 2 g_8, \\qquad g_{12} < 4 g_8, \\qquad - \\frac{16 }{3} \\sqrt{g_8 g_{12}} < g_{10} < \\sqrt{g_8 g_{12}} \\,.", "$ From the above, only the left hand side of the last bound is derived by using full crossing symmetry whereas the other bounds follow from standard fixed $t$ dispersion relations." ], [ "Causality:", "Violations of causality can occur when superluminal speeds can be consistently maintained within a region of spacetime so as to lead to a physical support of the retarded propagator outside the standard Minkowski light-cone.", "In this section, we will compute the low frequency propagation speed of a perturbation $\\psi =\\phi -\\bar{\\phi }$ living on an arbitrary background $\\bar{\\phi }$ created by an external source $J$ .", "For this, we work within the WKB approximation such that the background's scale of variation, $r_0$ , is much larger than the scale on which the perturbation varies ($\\omega ^{-1}$ ).", "In Section  we will perform a precise analysis of the time delay arising in a homogenous background and in a static and spherically-symmetric background in Section , but for now it instructive to consider perturbations given by a plane wave $\\partial _\\mu \\psi =i k_\\mu \\psi $ .", "The equation of motion for the scalar field $\\phi $ is given by $ \\Box \\phi &=& \\frac{4g_8}{\\Lambda ^4} \\left(\\phi _{,\\mu } (\\partial \\phi )^2 \\right)^{,\\mu }- \\frac{2g_{10}}{\\Lambda ^6} \\left[(\\Box \\phi )^3- 3 \\Box \\phi (\\phi _{,\\mu \\nu })^2 + 2 (\\phi _{,\\mu \\nu })^3 \\right]- \\frac{4g_{12}}{\\Lambda ^8} \\left( \\phi _{, \\alpha \\beta } (\\phi _{,\\mu \\nu })^2 \\right)^{, \\alpha \\beta } \\\\&-& g_{\\text{matter}} J \\,.\\nonumber $ In the WKB approximation, we assume that perturbations can be characterised by planes wave with wave-vector $k_\\mu =(\\omega ,\\bf {k})$ .", "In the regime of validity of the EFT, the $g_{8,10,12}$ operators considered in (REF ) are treated perturbatively implying $-k_\\mu k^\\mu =\\omega ^2-|{\\bf {k}}|^2=(c_s^2-1)|{\\bf {k}}|^2\\ll |{\\bf {k}}|^2$ .", "One should note that remaining within the regime of validity of the EFT requires $\\frac{\\partial {\\phi }}{\\Lambda ^2}\\equiv \\delta _1 \\ll 1 \\ ,\\qquad \\frac{\\partial ^{p+1} {\\phi }}{\\Lambda ^{p+2}}\\equiv \\delta _1 \\, \\delta _2^p \\ll 1 \\ , $ where $p\\in \\mathbb {N}$ and the derivatives can hit the 
background or the perturbation.", "The most stringent bounds are obtained when $p\\rightarrow \\infty $ .", "Since we are interested in contributions up to dimension-12 operators, we will consider the expansion up to order $\\delta _1^2\\delta _2^2,\\delta _1^4$ and assume $\\delta _1^2\\ll \\delta _2, \\ \\delta _2^2\\ll \\delta _1$ .", "Thus, at order $\\delta _1^2\\delta _2^2,\\delta _1^4$ and leading order in $\\omega r_0$ we have: $c_s^2|{\\bf {k}}|^2=&|{\\bf {k}}|^2-g_8\\frac{8}{\\Lambda ^4}(k^\\mu \\partial _\\mu \\bar{\\phi })^2+g_8^2\\frac{32}{\\Lambda ^8}(k^\\mu \\partial _\\mu \\bar{\\phi })^2(\\partial \\bar{\\phi })^2-g_{12}\\frac{8}{\\Lambda ^8}(k^\\mu k^\\nu \\partial _\\mu \\partial _\\nu \\bar{\\phi })^2 \\nonumber \\\\& +g_{10}\\frac{12 k^\\mu k^\\nu }{\\Lambda ^6}\\left(\\partial _{\\mu }\\partial _\\rho \\bar{\\phi }\\partial ^\\rho \\partial _\\nu \\bar{\\phi }-\\square \\bar{\\phi }\\partial _\\mu \\partial _\\nu \\bar{\\phi }\\right) \\ ,$ where we can immediately see that the $g_8$ and $g_{12}$ contributions are sign definite which are directly equivalent to the first two positivity bounds included in (REF ).", "This direct equivalence was pointed out for the $g_8$ operator in [2].", "In what follows we shall attempt to make contact with the remaining bounds in (REF ) but note that the contributions of $g_8$ and $g_{12}$ to the speed imply that, within this framework we consider here, it will be impossible to reproduce the bound $g_{12} < 4 g_8$ from pure infrared causality considerations since, in the absence of $g_{10}$ , positivity of both $g_8$ and $g_{12}$ is sufficient to prevent causality violation in this limit.", "To establish the basic setup, it will be useful to start by looking at the simple example of a homogeneous background first (even though no further bounds will be derived), before proceeding to a more instructive spherically-symmetric situation which will allow us to make further contact with the remaining bounds of (REF )." 
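Before turning to specific backgrounds, it is convenient to package the amplitude-based bounds of Eq. (REF ) above into a small numerical helper for later comparison with the causality constraints derived below; this is purely illustrative and not part of the original analysis.

```python
# Small illustrative helper (not from the paper): test whether a point (g8, g10, g12) in
# Wilson-coefficient space satisfies the two-sided positivity bounds quoted in Eq. (REF).
import math

def satisfies_positivity(g8, g10, g12):
    """True if g8>0, g12>0, g10<2 g8, g12<4 g8 and -16/3 sqrt(g8 g12) < g10 < sqrt(g8 g12)."""
    if g8 <= 0.0 or g12 <= 0.0:
        return False
    root = math.sqrt(g8 * g12)
    return g10 < 2.0 * g8 and g12 < 4.0 * g8 and -16.0 / 3.0 * root < g10 < root

# examples: the dim-10 coefficient is bounded on both sides once g8 and g12 are fixed
print(satisfies_positivity(1.0, 0.5, 1.0))    # True
print(satisfies_positivity(1.0, 1.5, 1.0))    # False, g10 >= sqrt(g8*g12)
print(satisfies_positivity(1.0, -6.0, 1.0))   # False, g10 <= -16/3*sqrt(g8*g12)
```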
], [ "Time Delay:", "To understand whether the perturbations are causal around an arbitrary background, we need to consider a hierarchy between the scales of variation of the background and the perturbations, namely, $\\lambda _{\\text{background}}\\gg \\lambda _{\\text{perturbation}}$ .", "Hence, we can use the WKB approximation to obtain the phase shift experienced by the scattered perturbation from which the time delay is easily computed.", "Considering the wave nature of the scattering and the uncertainty principle, we define a resolvable time advance as one that satisfies $\\omega \\Delta T < - 1 \\ , $ where $\\omega $ is the asymptotic energy of the scattered state.", "This states that a resolvable time advance needs to be larger than the resolution scale of geometric opticsThe geometric optics or eikonal limit assumes that the scattering problem can be described in terms of particle trajectories with large impact parameters and that the energies of the asymptotic states are large., [87], [89].", "If Eq.", "(REF ) is satisfied within the WKB approximation and the regime of validity of the EFT, then we have an observable violation of causality.", "At leading order in the EFT, the requirement in Eq.", "(REF ) can be equivalently written in terms of the scattering phase shift as $\\delta ^{\\text{EFT}}<-1$ .", "However, this does not hold when including higher EFT corrections that modify the speed of sound with $\\omega $ -dependent contributions.", "In such cases, one should simply consider the bound in Eq.", "(REF ).", "A commonly taken approach to understanding causality bounds consists of working in the eikonal (geometric optics) limit.", "This amounts to considering the scattering of waves with large angular momentum $\\ell $ so that the dynamics can be described in terms of the scattering of particle trajectories with fixed impact parameter $b=(\\ell +1/2) \\omega ^{-1}$ .", "We shall consider both this region and the small $\\ell $ region which can also be described semi-classically.", "We will see that exploration of both limits is complementary when imposing bounds on Wilson coefficients from the requirement of causality in the EFT and will give rise to the two-sided bounds known from other considerations.", "As already noted, demanding locally that $c_s(\\omega ) \\le 1$ is too strict a requirement.", "The leading-order contributions to violations of causality, as encoded in the support of the retarded Green's function, are determined by the light-cones defined by the effective metric in the EFT equations of motion [99], [88] which is given by Eq.", "(REF ).", "Thus, acausality can be measured by integrating the effects of the sound speed.", "More precisely, by measuring whether the scattered waves can propagate outside the Minkowski light-cone and if this effect is observable within the regime of validity of the EFT.", "Within a WKB approach, the criterion for acausality can be written schematically as $\\left(\\frac{\\lambda _{\\text{background}}}{\\lambda _{\\text{perturbation}}}\\right) \\int _{X \\subset \\mathbb {R}^{3+1}} \\left(1-c_s({\\lambda _{\\text{perturbation}}}) \\right) \\lesssim -1\\ , $ where the integral is over $X$ , a hypersurface embedded in Minkowski in units of the background length scale.", "For example, for a homogeneous time-dependent background the speed will depend only on the time and the integration is over fixed-space slices parameterised by $t$ , that is, $\\int _{X \\subset \\mathbb {R}^{3+1}}=H \\int \\mathrm {d} t$ , where $\\lambda 
_{\text{background}}=H^{-1}$ is the background timescale.", "While in the WKB approximation the term in the parenthesis is large, the integrand will be small as long as we stay within the regime of validity of the EFT.", "From this, we can see that a subluminal speed will always lead to a causal theory, and small superluminalities do not violate causality if they do not have support in a large region of spacetime.", "Note that the terms neglected in Eq. (REF ) are suppressed by powers of $\lambda _{\text{perturbation}}/ \lambda _{\text{background}} \ll 1$ and can become relevant in certain situations.", "In what follows we perform a careful treatment, including these terms when relevant, for homogeneous backgrounds and static and spherically symmetric ones." ], [ "Homogeneous background", "In this section, we will derive the dispersion relation for a homogeneous background.", "To do so, we consider the equation of motion in Eq. (REF ) and perturb the field around a homogeneous background $\bar{\phi }(t)$ which varies on a time scale of order $H^{-1}$ , which we shall consider as being constant to a first approximation.", "To access the information encoded in the retarded Green's function, we consider a perturbative setup in which we derive a perturbative, second-order in time, hyperbolic, equation of motion for the perturbations.", "Any higher-order ($>2$ ) time derivative can be iteratively removed at each order in the EFT expansion.", "Schematically, the equation of motion for the perturbation reads $\ddot{\psi } + A \dot{\psi } + B \psi = 0 \,,$ where $A$ and $B$ are functions of the background $\bar{\phi }(t)$ and its derivatives, the wavenumber $k$ , the coupling constants $g_I$ , and the energy scale $\Lambda $ .", "For convenience, the friction term can be removed by performing a field redefinition of the form $\psi (t) = f(\bar{\phi }(t)) \psi _0(t)$ leading to the perturbation equation $\ddot{\psi }_0 + \left( B - \frac{A^2 + 2 \dot{A}}{4} \right) \psi _0 = 0 \,.$", "This allows us to write down an effective dispersion relation for the perturbations as $\omega ^2 = m_{\rm eff}^2 + c_s^2( {\bf k}) |{\bf {k}}|^2 \,,$ where $m_{\rm eff}^2$ is the effective mass squared and $c_s^2( {\bf k})$ is the ${\bf k}$ -dependent square sound speed.", "Our definition of the sound speed corresponds to a momentum-dependent phase velocity.", "Note that at leading order in $|{\bf {k}}|$ , the notions of phase velocity and group velocity are equivalent when the mass is negligible, which is the case under consideration.", "Considering only the $g_8, g_{10},$ and $g_{12}$ operators, we find at order $\delta _1^2\delta _2^2$ (where the expansion parameters $\delta _1$ and $\delta _2$ are defined in (REF )), $m_{\rm eff}^2&=-\frac{12}{\Lambda ^4}g_8\partial _t \left( \ddot{\bar{\phi }} \dot{\bar{\phi }} \right) \ , \\c_s^2( {\bf k}) &=1- \frac{8}{\Lambda ^4} g_8 \dot{\bar{\phi }}^2 - \frac{8}{\Lambda ^8}g_{12}|{\bf {k}}|^2 \ddot{\bar{\phi }}^2 +\frac{96}{\Lambda ^8} g_8^2 \dot{\bar{\phi }}^4 \, , $ up to terms that are more suppressed.", "Note that the $g_{10}$ contribution to the square sound speed vanishes as expected, since the quartic Galileon vanishes on an effectively one-dimensional background.", "(If one were to choose the parametrisation for the $g_{10}$ operator in (REF ) where the term $\Box \phi (\partial \phi )^2$ is removed by field redefinition, the $g_{10}$ contribution to $c_s^2({\bf k})$ would be a total derivative and would also vanish at the level of the time delay so long as one considers background profiles with vanishing boundary terms, as is done in our analysis.)",
"To analyse this term, one needs to explore backgrounds that are effectively at least two-dimensional, and in the next section we will consider a spherically-symmetric background.", "(Cylindrically-symmetric backgrounds were also considered and led to no additional insights; we shall therefore not present them in this work.)", "Note that to stay within the regime of validity of the EFT we require that $\frac{H\bar{\Phi }_0}{\Lambda ^2} \ll 1 \, , \quad \frac{H}{\Lambda }\ll 1 \, , \quad \text{and} \quad \frac{\omega H}{\Lambda ^2} \ll 1 \ ,$ where $\bar{\Phi }_0$ is the overall scale of the background field, or one can take $\bar{\Phi }_0={\rm max}(|\bar{\phi }(t)|)$ .", "As a consequence, this ensures that $c_s \sim 1$ , up to small perturbative corrections.", "Furthermore, the validity of the WKB regime where the perturbations vary much faster than the background implies that $| {\bf k}|H^{-1}\gg 1$ .", "These requirements imply that the speed () is subluminal for $g_8>0, g_{12}=0$ and for $ g_{8}=0, g_{12}>0$ .", "Even though the departure from the speed of light will be small, this effect may pile up when dealing with large observation times and lead to macroscopic effects.", "To understand how this could occur, we establish the amount of support $\Delta x$ the field would be able to gain outside the standard Minkowski light-cone $\Delta x=H^{-1} \int _{\tau _i}^{\tau _f} (1-c_s(\tau )) \mathrm {d} \tau \ ,$ where we introduced the dimensionless time $\tau =H t$ .", "The light-cone observed by the perturbation is smaller than the Minkowski one by $\Delta x$ .", "Hence, violations of causality arise for waves with three-momentum $\bf {k}$ enjoying $|{\bf {k}}|\Delta x<-1$ for any $\Delta \tau =\tau _f-\tau _i>0$ while remaining within the regime of validity of the EFT.", "That is, if the distance that the perturbations can propagate outside of the Minkowski light-cone becomes larger than the wavelength of the perturbation.", "As is well known, in this setup, there is no risk of causality violation if $g_8>0$ and $g_{12}>0$ .", "However if $g_8<0$ , one can easily find solutions on which $|{\bf {k}}|\Delta x<-1$ .", "Consider for instance a time-localised profile of the form $\bar{\phi }(\tau )=\bar{\Phi }_0 e^{-\tau ^2}$ .", "The resulting support outside the light-cone will then be $|{\bf k}|\Delta x=\frac{|{\bf k}|}{H} \int _{-\infty }^{\infty } (1-c_s(\tau )) \mathrm {d} \tau =4\sqrt{\pi }\frac{|{\bf k}|}{H}\left[\frac{H \bar{\Phi }_0 }{\Lambda ^2}\right]^2\left(g_8+3g_{12}\frac{|{\bf k}|^2H^2}{\Lambda ^4}-\frac{9g_8^2}{\sqrt{2}}\frac{H^2 \bar{\Phi }_0^2}{\Lambda ^4}\right) \ .$", "In the regime of validity of the EFT, $H \bar{\Phi }_0\ll \Lambda ^2$ and the terms quadratic in $g_8$ are naturally negligible.", "While the prefactor in square brackets should be small, this can always be compensated by a sufficiently large $|{\bf k}|H^{-1}\gg 1$ as required from the validity of the WKB approximation.", "For those solutions the term linear in $g_{12}$ is always subdominant as $|{\bf k}| H\sim \omega H \ll \Lambda ^2$ .", "Hence as $g_8<0$ , there are solutions within the regime of validity of the EFT for which the time advance is resolvable $|{\bf k}|\Delta x<-1$ , signalling a violation of causality.", "This result
complements that derived in [89].", "On the other hand, for more involved profiles, the term linear in $g_{12}$ can be sufficiently enhanced so that it dominates over the term linear in $g_{8}$ despite the $|{\\bf k}|^2H^2 \\Lambda ^{-4}$ suppression.", "As a simple proof of principle, we could consider for instance a profile of the form $\\bar{\\phi }(\\tau )=\\bar{\\Phi }_0 \\tau ^{2} e^{-\\tau ^2}$ for which the support then becomes $|{\\bf k}|\\Delta x=\\frac{7\\sqrt{\\pi }}{2\\sqrt{2}}\\frac{|{\\bf k}|}{H} \\left[\\frac{H \\bar{\\Phi }_0 }{\\Lambda ^2}\\right]^2\\left(g_8+\\frac{57}{7}g_{12}\\frac{|{\\bf k}|^2H^2}{\\Lambda ^4}-\\frac{6129}{1792}\\frac{g_8^2}{\\sqrt{2}}\\frac{H^2 \\bar{\\Phi }_0^2}{\\Lambda ^4}\\right)\\,,$ hence taking $|{\\bf k}| H/\\Lambda ^2 \\sim 0.2$ ensures validity of the EFT while the $g_{12}$ term dominates over the $g_8$ term, so that a resolvable time advance is possible within the regime of validity of the EFT for negative $g_{12}\\sim -1$ even if $g_8\\sim 1$ .", "One could push the analysis to more generic profiles and derive a more systematic resolvable support outside the light-cone whenever $g_{12}$ is negative; however, at this stage, moving on to spherically-symmetric profiles will prove more instructive, and positivity of $g_{12}$ from pure causality considerations will be proven in that context (see the summary of the causality constraints depicted in Fig.", "REF where it is clear that even in the presence of a generic positive $g_8$ , $g_{12}$ still ought to be positive to ensure causality on generic configurations that remain in the regime of validity of the EFT)." ], [ "Spherically-symmetric background", "We proceed to explore causality constraints on a static and spherically-symmetric background $\\bar{\\phi }(r)$ for which the operator $g_{10}$ is relevant.", "This allows us to establish to what extent the non-linear positivity bounds in Eq.", "(REF ) can be reproduced using causality considerations at low energy, without further information about the UV completion.", "Given the symmetries of the background, we perform an expansion in spherical harmonics (partial waves) and write our perturbation as $\\psi =\\sum _{\\ell } e^{i\\omega t}Y_{\\ell }(\\theta ) \\delta \\rho _{\\ell }(r)$ ; due to the azimuthal symmetry, we can neglect the $\\varphi $ dependence of the spherical harmonics and work with the Legendre polynomials.", "We obtain an equation of motion for the $\\ell $ -mode radial perturbation, $\\delta \\rho _{\\ell }$ , which schematically is $\\delta \\rho ^{\\prime \\prime }_{\\ell }(r)+A(\\omega ^2,r)\\delta \\rho ^{\\prime }_{\\ell }(r)+\\left(\\omega ^2 C(\\omega ^2,r)-\\frac{\\ell (\\ell +1)}{r^2}+B(r,\\ell ) \\right)\\delta \\rho _{\\ell }(r)=0 \\, .", "$ In the absence of interactions we have $B(r,\\ell )=0$ (up to the mass of the scalar field which we treat as negligible).", "We perform a field redefinition, $\\delta \\rho _{\\ell }(r)=e^{-\\int A(\\omega ^2,r)/2 \\mathrm {d}r}\\chi _{\\ell }(r)$ , to remove the friction term and get $\\chi ^{\\prime \\prime }_{\\ell }(r)+\\frac{1}{c_s^2(\\omega ^2,r)}\\left(\\omega ^2-V_{\\text{eff}}\\right)\\chi _{\\ell }(r)=0 \\ , \\quad {\\rm with}\\quad V_{\\text{eff}}\\equiv \\frac{\\ell (\\ell +1)}{r^2}+\\tilde{B}(r,\\ell ) \\, .", "$ We can obtain this equation in an exact form, but for our purposes, we will consider an expansion in the parameters $\\delta _1$ and $\\delta _2$ defined in Eq.", "(REF ).", "Let us consider a spherically-symmetric background of the form $\\bar{\\phi
}(r)=\\bar{\\Phi }_0 f(r)$ and change coordinates to $R=r/r_0$ , where $r_0$ is an arbitrary length scale that measures the variation of the background and $\\bar{\\Phi }_0$ has dimensions of mass, $\\bar{\\phi }(r/r_0) = \\bar{\\Phi }_0 f(R) \\,.$ With these definitions, both the profile $f$ and the radius $R$ are dimensionless.", "Note that the definitions of all dimensionless parameters and functions are reported in Table REF of Appendix .", "Moreover, we expect $f$ and its derivatives $f^{(n)}$ (where the differentiation is taken with respect to $R$ ) to be at most of $\\mathcal {O}(1)$ .", "The validity of the EFT implies that: $\\epsilon _1\\equiv \\frac{\\bar{\\Phi }_0}{r_0\\Lambda ^2}\\ll 1 \\, , \\quad \\epsilon _2\\equiv \\frac{1}{r_0 \\Lambda }\\ll 1 \\, , \\quad \\text{and} \\quad \\Omega \\epsilon _2\\equiv \\frac{\\omega }{r_0\\Lambda ^2} \\ll 1 \\ .", "$ At the level of the phase shift, each contribution from the $g_i$ terms would scale as follows $g_8 : \\mathcal {O}\\left( \\epsilon _1^2\\right), \\qquad g_{10} : \\mathcal {O} \\left(\\epsilon _1^2 \\epsilon _2^2\\right), \\qquad g_{12} : \\mathcal {O} \\left(\\epsilon _1^2\\epsilon _2^2\\Omega ^2\\right) \\,.$ More generally, any term coming from $g_8^{n_1} g_{10}^{n_2} g_{12}^{n_3}$ will be suppressed by at least a power $\\epsilon _1^{2(n_1+n_2+n_3)} \\epsilon _2^{2(n_2+n_3)}\\Omega ^{2n_3} \\ .$ We write our expressions in terms of $\\epsilon _1$ , $\\epsilon _2$ , and $\\Omega $ and then perform an expansion up to order $\\epsilon _1^2\\epsilon _2^2$ and $\\epsilon _1^4$ , which requires the assumptions $\\epsilon _1^2\\ll \\epsilon _2$ and $\\epsilon _2^2\\ll \\epsilon _1$ .", "In practice, we will take $\\epsilon _1$ and $\\epsilon _2$ to be of the same order, while allowing their exact values to differ.", "We will refer to these contributions as the leading order (LO) or $\\mathcal {O}(\\epsilon ^4)$ .", "Thus, we need to keep track of the contributions coming from the $g_8, \\ g_{10}, \\ g_8^2, \\ g_{12}$ terms.", "Expanding up to order $\\epsilon _1^2\\epsilon _2^2$ we find $\\chi ^{\\prime \\prime }_{\\ell }(R)+W_{\\ell }\\chi _{\\ell }(R)=0 \\, , \\quad W_{\\ell }=\\frac{(\\omega r_0)^2}{c_s^2(\\omega ^2,R)}\\left(1-\\frac{V_{\\text{eff}}(R)}{(\\omega r_0)^2}\\right) \\, , $ where prime now denotes a derivative with respect to $R$ .", "Note that the expansion is not just in $\\epsilon $ , but in $g_i\\epsilon $ .", "For this perturbative result to be correct, we need to make sure that higher-order corrections to the series expansion in Eq.", "(REF ) above are small.", "Thus, we require schematically that $g_i \\epsilon \\ll 1$ , which, as expected, simply tells us that we should not consider too large values for the Wilson couplings.", "Wilson coefficients much larger than unity should be rescaled appropriately into the cutoff $\\Lambda $ , resulting in a lower cutoff scale.", "We now solve Eq.", "(REF ) using the WKB approximation, first analysing to which order in the WKB expansion contributions need to be included to remain consistent with the EFT expansion.", "Once consistency of the WKB expansion with the required order of the EFT expansion is established, we can then explore the parameter space in which causality violations can arise.", "We will do so for the cases $\\ell =0$ and $\\ell \\ne 0 $ separately."
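This power counting is simple enough to be checked explicitly. The following minimal numerical sketch (in Python; the values of $\\Lambda $ , $r_0$ , $\\bar{\\Phi }_0$ and $\\omega $ are illustrative placeholders chosen to satisfy the EFT and expansion conditions above, not values used in our analysis) evaluates $\\epsilon _1$ , $\\epsilon _2$ , $\\Omega $ and the suppression factor $\\epsilon _1^{2(n_1+n_2+n_3)} \\epsilon _2^{2(n_2+n_3)}\\Omega ^{2n_3}$ associated with a term proportional to $g_8^{n_1} g_{10}^{n_2} g_{12}^{n_3}$ , making the hierarchy between the $g_8$ , $g_8^2$ , $g_{10}$ and $g_{12}$ contributions manifest.

# Minimal sketch of the EFT power counting; all numerical values are illustrative only.
Lambda = 1.0            # EFT cutoff (arbitrary units)
r0     = 20.0 / Lambda  # typical scale of variation of the background
Phi0   = 3.0 * Lambda   # overall amplitude of the background field
omega  = 0.5 * Lambda   # frequency of the scattered perturbation

eps1  = Phi0 / (r0 * Lambda**2)            # epsilon_1 = Phi0/(r0 Lambda^2)
eps2  = 1.0 / (r0 * Lambda)                # epsilon_2 = 1/(r0 Lambda)
Omega = (omega / (r0 * Lambda**2)) / eps2  # defined via Omega*eps2 = omega/(r0 Lambda^2)

def suppression(n1, n2, n3):
    """Suppression of a term proportional to g8^n1 g10^n2 g12^n3."""
    return eps1**(2 * (n1 + n2 + n3)) * eps2**(2 * (n2 + n3)) * Omega**(2 * n3)

print("eps1 =", eps1, " eps2 =", eps2, " Omega =", Omega, " omega*r0 =", Omega / eps2)
for label, powers in [("g8", (1, 0, 0)), ("g8^2", (2, 0, 0)),
                      ("g10", (0, 1, 0)), ("g12", (0, 0, 1))]:
    print(label, "contribution suppressed by", suppression(*powers))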
], [ "Regime of validity of the WKB approximation", "We start by considering Eq.", "(REF ) given in terms of dimensionless variables $\\chi _{\\ell }^{\\prime \\prime }(R) + (\\omega r_0)^2 \\hat{W}_{\\ell }(R) \\chi _{\\ell }(R) = 0 \\,, \\qquad \\hat{W}_{\\ell }(R) = \\frac{W_{\\ell }(R)}{(\\omega r_0)^2} = \\frac{1}{c_s^2(\\omega ^2,R)}\\left(1-\\frac{V_{\\text{eff}}(R)}{(\\omega r_0)^2}\\right) \\,.", "$ Since we assume that the perturbation fluctuates faster than the background, namely, $\\frac{ \\lambda _{\\text{perturbation}} }{\\lambda _{\\text{background}}}=\\frac{1}{\\omega r_0} =\\frac{\\epsilon _2}{\\Omega }\\ll 1 \\ , $ we can solve the equation above using the WKB method.", "In this approach, the solution to the equation of motion up to $n$ th-order correction in the WKB formula is given by $\\chi _{\\ell }^{(n)}(R) \\propto \\left( e^{i (\\omega r_0) \\int _0^R \\sum _{j \\ge 0}^n \\delta _{\\rm WKB}^{(j)} \\mathrm {d}R} - e^{- i (\\omega r_0) \\int _0^R \\sum _{j \\ge 0}^n \\delta _{\\rm WKB}^{(j)} \\mathrm {d}R} \\right) \\,,$ where the boundary conditions were chosen such that $\\chi _{\\ell }^{(n)}(R=0)=0$ and $\\delta _{\\rm WKB}^{(j)}$ is the $j$ th-order term in the WKB series expansion whose explicit expressions can be found in [100] and we list the relevant ones for our analysis below.", "Noting that $\\hat{W}_{\\ell }>0$ , it is easy to realise that $\\delta _{\\rm WKB}^{(j)}$ are purely imaginary total derivatives when $j$ is odd, meaning that they do not contribute to the phase but simply to the amplitude.", "In the end, we have that the phase is proportional to $\\sum _{j \\ge 0} \\delta _{\\rm WKB}^{(2j)} = \\delta _{\\rm WKB}^{(0)} + \\delta _{\\rm WKB}^{(2)} + \\delta _{\\rm WKB}^{(4)} + \\cdots \\,,$ where the first three contributions are $\\delta _{\\rm WKB}^{(0)} &= \\sqrt{\\hat{W}_{\\ell }} \\,, \\\\\\delta _{\\rm WKB}^{(2)} &= - \\frac{1}{(\\omega r_0)^2} \\frac{1}{8\\sqrt{\\hat{W}_{\\ell }}} \\left( \\frac{\\hat{W}^{\\prime \\prime }_{\\ell }}{\\hat{W}_{\\ell }} - \\frac{5}{4} \\left( \\frac{\\hat{W}^{\\prime }_{\\ell }}{\\hat{W}_{\\ell }} \\right)^2 \\right) \\,, \\\\\\delta _{\\rm WKB}^{(4)} &= \\frac{1}{(\\omega r_0)^4} \\frac{1}{32 \\hat{W}_{\\ell }^{3/2}} \\left[ \\frac{\\hat{W}^{(4)}_{\\ell }}{\\hat{W}_{\\ell }} - 7 \\frac{\\hat{W}^{\\prime }_{\\ell } \\hat{W}^{(3)}_{\\ell }}{\\hat{W}^2_{\\ell }} - \\frac{19}{4} \\left( \\frac{\\hat{W}^{\\prime \\prime }_{\\ell }}{\\hat{W}_{\\ell }} \\right)^2 + \\frac{221}{8} \\frac{\\hat{W}^{\\prime \\prime }_{\\ell } \\hat{W}^{\\prime }_{\\ell }{}^2}{\\hat{W}^3_{\\ell }} - \\frac{1105}{64} \\left( \\frac{\\hat{W}^{\\prime }_{\\ell }}{\\hat{W}_{\\ell }} \\right)^4 \\right] \\,.$ For our purposes, we need to consider terms up to order $\\mathcal {O}(\\epsilon ^4)$ .", "Below, we will see that $\\delta _{\\rm WKB}^{(2)}$ has contributions of order $\\epsilon _1^2/(\\omega r_0)^2=\\epsilon _1^2\\epsilon _2^2/\\Omega ^2$ that should be taken into account in order to have a consistent expansion at LO for the phase shift, and hence the time delay.", "In some cases, we would be interested in computing the next EFT contribution to ensure that it is a small effect that does not change our bounds.", "These NLO corrections include terms of the following orders $\\mathcal {O}(\\epsilon _1^6, \\epsilon _1^2 \\epsilon _2^4, \\epsilon _1^4 \\epsilon _2^2)$ .", "In this case, we will need to include $\\delta _{\\rm WKB}^{(4)}$ corrections to the WKB formula.", "We can establish the validity of the WKB approximation by looking 
at the relative error between the exact solution $\\chi _{\\ell }$ and the WKB approximation up to $n$ th-order corrections $\\chi _{\\ell }^{(n)}$ .", "Thus we require that $\\frac{\\chi _{\\ell }(R)-\\chi _{\\ell }^{(n)}(R)}{\\chi _{\\ell }(R)}\\sim \\frac{1}{(\\omega r_0)^n}\\int _0^R \\delta _{\\rm WKB}^{(n+1)} \\mathrm {d}R \\ll 1 \\,,$ as well as $\\frac{1}{(\\omega r_0)^n}\\int _0^R \\delta _{\\rm WKB}^{(n+1)} \\mathrm {d}R \\ll \\frac{1}{(\\omega r_0)^{n-1}}\\int _0^R \\delta _{\\rm WKB}^{(n)} \\mathrm {d}R \\,,$ in order for the WKB to be a useful approximation given by an asymptotic series in $(\\omega r_0)^{-1}$ [100].", "Similarly, we want to ensure that the next order WKB terms are indeed negligible at the order in the perturbative expansion that we are working on.", "This can be checked by computing the next order in Eq.", "(REF ) inferring that, ${\\chi ^{(n)}_{\\ell }}^{\\prime \\prime }(R) + (\\omega r_0)^2 \\hat{W}_{\\ell }(R) \\chi ^{(n)}_{\\ell }(R) = \\mathcal {E}_{\\ell }^{(n+1)}\\sim \\frac{\\delta _{\\rm WKB}^{(n+1)}}{\\delta _{\\rm WKB}^{(0)}} \\ .$ From which we can see that the leftover is of order $ \\mathcal {O}((\\omega r_0)^{-(n+1)})$ which is small provided $(\\omega r_0) \\gg 1$ , as postulated earlier.", "In practice, we compute carefully the order of these leftover and make sure that it vanishes at LO." ], [ "Case 1: Monopole", "In this section, we analyse the causality bounds on the EFT Wilson coefficients that arise when scattering the monopole mode.", "To do so, we consider Eq.", "(REF ) with $\\ell =0$ .", "At leading order, the function $\\hat{W}_0$ that appears in the equation of motion reads $\\left.", "\\hat{W}_0(R) \\right|_{\\rm LO} =& 1+8 g_8 \\epsilon _1^2 f^{\\prime }(R)^2+96 g_8^2 \\epsilon _1^4 f^{\\prime }(R)^4+8 g_{12} \\Omega ^2 \\epsilon _1^2 \\epsilon _2^2 f^{\\prime \\prime }(R)^2 +24 g_{10} \\epsilon _1^2 \\epsilon _2^2 \\frac{f^{\\prime }(R) f^{\\prime \\prime }(R)}{R}\\nonumber \\\\&+12 \\frac{g_8}{\\Omega ^2} \\epsilon _1^2 \\epsilon _2^2 \\left(2 \\frac{f^{\\prime }(R) f^{\\prime \\prime }(R)}{R}+\\frac{1}{2} \\partial _R^2 f^{\\prime }(R)^2\\right) \\,,$ where $f$ and $R$ are respectively the dimensionless spherically-symmetric background and radius defined in (REF ).", "As explained earlier we have performed an expansion in the small dimensionless parameters $\\epsilon _1$ , $\\epsilon _2$ , $\\Omega \\epsilon _2$ that measure the validity of the EFT.", "There is another dimensionless parameter which measures the validity of the WKB expansion and can be written in terms of the previous parameters, namely, $(\\omega r_0)^{-1}=\\epsilon _2/\\Omega $ .", "In order to obtain tight bounds for the Wilsonian coefficients one needs to consider the extreme situation where these small parameters are as large as possible while maintaining the EFT under control and being able to compute the necessary WKB corrections at this order.", "We will be computing the time delay at LO while ensuring validity of the EFT by imposing Eq.", "(REF ).", "These requirements together with the validity of the WKB approximation lead to $\\epsilon _2\\ll \\Omega \\ll 1/ \\epsilon _2$ , but in practice, we require slightly tighter bounds given by $\\sqrt{\\epsilon _2}<\\Omega < 1/\\sqrt{ \\epsilon _2}$ , together with $\\epsilon _1^2<\\epsilon _2$ and $\\epsilon _2^2<\\epsilon _1$ in order to have a well-defined expansion truncated at $\\mathcal {O}(\\epsilon ^4)$ .", "For example, this means that we keep corrections of order $(\\epsilon _1/(\\omega r_0))^2$ 
but neglect $(\\epsilon _1/(\\omega r_0)^2)^2$ .", "The latter type of corrections arise in the effective potential, but not in the speed.", "It is instructive to look at the sound speed and effective potential which, at leading order, are given by $&\\left.", "c_s^2(\\omega ^2,R) \\right|_{\\rm LO}=1-8 g_8 \\epsilon _1^2 f^{\\prime }(R)^2-32 g_8^2 \\epsilon _1^4 f^{\\prime }(R)^4 - 8 g_{12} \\epsilon _1^2 \\epsilon _2^2 \\frac{\\omega ^2}{\\Lambda ^2} f^{\\prime \\prime }(R)^2 -24 g_{10} \\epsilon _1^2 \\epsilon _2^2 \\frac{f^{\\prime }(R) f^{\\prime \\prime }(R)}{R} \\, ,\\nonumber \\\\&\\left.", "V_{\\text{eff}}(R) \\right|_{\\rm LO}= -12 g_8 \\epsilon _1^2 \\left( 2 \\frac{f^{\\prime }(R)f^{\\prime \\prime }(R)}{R} + \\frac{1}{2} \\partial _R^2 f^{\\prime }(R)^2 \\right) \\, .$ One should note here that the effective potential term is suppressed by $(\\omega r_0)^{-2}=\\epsilon _2^2 \\Omega ^{-2}$ with respect to the sound speed term.", "The corrections at NLO to the speed of sound and to the effective potential are listed in Appendix .", "From the sound speed expression, we can understand whether we should expect to be able to reproduce any of the positivity bounds in Eq.", "(REF ).", "Firstly, as already argued in Section , the $g_8$ and $g_{12}$ contributions to the speed are clearly sign definite.", "Next, we analyse the $g_{10}$ contribution.", "This term appears to be sign indefinite, but under the integral, it is equivalent to a sign definite contribution up to total derivatives that will vanish at the boundaries.", "Hence, with use of the monopole, we can only expect to be able to bound the $g_{10}$ coefficient from below and we will need to restor to higher multipoles to bound $g_{10}$ from above.", "We can now determine the phase shift experienced by the perturbation travelling in the spherically-symmetric background.", "For that we first rewrite the solution to the perturbed equation of motion in the following way $\\chi ^{(n)}_0(R) \\propto e^{-i (\\omega r_0) \\int _0^R \\left( \\sum _{j \\ge 0}^n \\delta _{\\rm WKB}^{(j)} - 1 \\right) \\mathrm {d}R} \\left( e^{2 i (\\omega r_0) \\int _0^R \\left( \\sum _{j \\ge 0}^n \\delta _{\\rm WKB}^{(j)} -1 \\right) \\mathrm {d}R} e^{i (\\omega r_0) R} - e^{-i (\\omega r_0) R} \\right) \\,,$ so that it can be compared to the asymptotic solution $\\chi _0(R) \\propto \\left( e^{2i \\delta _0} e^{i (\\omega r_0) R} - e^{-i (\\omega r_0) R} \\right)$ to find that the expression for the phase shift at $\\ell =0$ reads $\\delta _0(\\omega ) = \\omega r_0\\int _0^{\\infty } \\left( \\sum _{j \\ge 0} \\delta _{\\rm WKB}^{(j)}- 1 \\right) \\mathrm {d}R \\,, $ which is positive for $0 < c_s < 1$ and large enough $\\omega r_0$ as seen when using Eqs.", "(REF ) and (REF ): $\\delta _0(\\omega )\\sim \\omega r_0\\int _0^{\\infty } \\left( \\frac{1}{c_s}- 1 \\right) \\mathrm {d}R \\, .$ From Eq.", "(REF ), we see that the dimensionless time delay of a partial wave with zero angular momentum is given by $\\omega \\Delta T_0(\\omega ) = 2 \\omega \\frac{\\partial \\delta _0(\\omega )}{\\partial \\omega } = 2\\omega \\int _0^{\\infty }\\frac{\\partial }{\\partial \\omega }\\left( (\\omega r_0) \\left( \\sum _{j \\ge 0} \\delta _{\\rm WKB}^{(j)} - 1 \\right)\\right) \\mathrm {d}R \\equiv \\int _0^{\\infty }\\mathcal {I}_0(\\omega ,R) \\mathrm {d}R \\, ,$ where up to $\\mathcal {O}(\\epsilon ^4)$ we have $\\left.", "\\mathcal {I}_0(\\omega ^2,R) \\right|_{\\rm LO}= &8 (\\omega r_0) \\epsilon _1^2 \\left[ g_8 f^{\\prime }(R)^2 + 10 g_8^2 \\epsilon _1^2 
f^{\\prime }(R)^4 +3 g_{12} \\Omega ^2 \\epsilon _2^2 f^{\\prime \\prime }(R)^2 \\vphantom{\\frac{1}{2}} \\right.", "\\nonumber \\\\& \\left.", "- \\frac{g_8}{\\Omega ^2} \\epsilon _2^2 \\left( 3 \\frac{f^{\\prime }(R)f^{\\prime \\prime }(R)}{R} + \\frac{1}{2} \\partial _R^2(f^{\\prime }(R)^2) \\right) + 3 g_{10} \\epsilon _2^2 \\frac{f^{\\prime }(R) f^{\\prime \\prime }(R)}{R} \\right] \\, .$ As appropriate for a scattering regime which is intrinsically wave-like such as the $\\ell =0$ case, we are computing the time delay at fixed $\\ell $ .", "As mentioned earlier, we will consider background profiles giving null boundary terms so that we can neglect any contribution from total derivative terms.", "Taking this into consideration and performing integration by parts we find that the above equation can be written as $\\left.", "\\mathcal {I}_0(\\omega ^2,R) \\right|_{\\rm LO}=&\\ 8 (\\omega r_0) \\epsilon _1^2 \\left[ g_8 f^{\\prime }(R)^2 + 10 g_8^2 \\epsilon _1^2 f^{\\prime }(R)^4 + 3 \\epsilon _2^2 \\left( g_{12} \\Omega ^2 f^{\\prime \\prime }(R)^2 + \\frac{1}{2} \\left( g_{10} - \\frac{g_8}{\\Omega ^2} \\right) \\frac{f^{\\prime }(R)^2}{R^2} \\right) \\right] \\nonumber \\\\&+ \\text{total derivatives}\\, .", "$ We can now explicitly see that the contribution from each term in the EFT expansion is sign definite when looking at the scattering of $\\ell =0$ modes.", "From these expressions and the constraints from the validity of the EFT and the WKB approximation in Eqs.", "(REF ), (REF ), we can easily see that the $g_8$ and $g_{12}$ terms can give rise to resolvable time delays.", "In fact, the time delay is positive for $g_8>0$ when $g_{10}=g_{12}=0$ and for $g_{12}>0$ when $ g_{8}=g_{10}=0$ , however we will soon be able to make more general statements.", "For the $g_{10}$ terms, one can also obtain a resolvable time delay, but this requires tuning of the function $f$ to make the time delay large while satisfying Eq.", "(REF ).", "After considering the high $\\ell $ case in the following section, we will analyse the situations when a resolvable time advance can occur in section REF ." 
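The sign-definite structure of this last expression can also be verified numerically. The short sketch below is a simplified illustration only: it implements the LO integrand with the total-derivative terms dropped, uses an illustrative Gaussian profile $f(R)=e^{-R^2}$ and illustrative values of $\\epsilon _1$ , $\\epsilon _2$ and $\\Omega $ (none of which correspond to the configurations used for our bounds), and evaluates $\\omega \\Delta T_0=\\int \\mathcal {I}_0\\, \\mathrm {d}R$ for a few choices of Wilson coefficients, showing that each contribution enters with a definite sign.

import numpy as np

# LO monopole time delay omega*DeltaT_0 = int I_0 dR for the illustrative profile
# f(R) = exp(-R^2); total-derivative contributions are dropped.
eps1, eps2, Omega = 0.2, 0.2, 1.0   # illustrative values compatible with the EFT/WKB conditions
w_r0 = Omega / eps2                 # omega * r0

R = np.linspace(1e-4, 10.0, 200001)
fp        = -2.0 * R * np.exp(-R**2)            # f'(R)
fpp       = (4.0 * R**2 - 2.0) * np.exp(-R**2)  # f''(R)
fp_over_R = -2.0 * np.exp(-R**2)                # f'(R)/R (finite as R -> 0)

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def omega_DT0(g8, g10, g12):
    I0 = 8.0 * w_r0 * eps1**2 * (
        g8 * fp**2
        + 10.0 * g8**2 * eps1**2 * fp**4
        + 3.0 * eps2**2 * (g12 * Omega**2 * fpp**2
                           + 0.5 * (g10 - g8 / Omega**2) * fp_over_R**2))
    return trapezoid(I0, R)

for g8, g10, g12 in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0, -1, 0)]:
    print(f"(g8, g10, g12) = ({g8}, {g10}, {g12}) -> omega*DeltaT_0 = {omega_DT0(g8, g10, g12):+.4f}")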
], [ "Case 2: Higher-order multipoles", "We will now consider the case of partial waves with $\\ell >0$ .", "As first noted by Langer [101], the standard WKB approach fails to be useful when considering low multipole contributions since the approximation fails to reproduce the behavior of the solutions near $r=0$ .", "To deal with this, one can perform a change of variable, $r=e^{\\rho }$ , in order to map the singularity $r=0$ to $\\rho =-\\infty $ .", "Then, the exponentially decaying WKB solution reproduces the correct asymptotics at $\\rho =-\\infty $ .", "We proceed to change the variables in Eq.", "(REF ) as described above and obtain an equation of motion that contains a friction term which we remove with a field redefinition to get, $\\partial _{\\rho }^2 \\delta \\rho _{\\ell }(\\rho )= - \\widehat{W}_{\\ell }(\\rho ) \\delta \\rho _{\\ell }(\\rho )\\,.$ Then, we solve this equation using the WKB approximation.", "To find the phase shift, we want to express $\\widehat{W}_{\\ell }(\\rho )$ back in terms of the dimensionless radial coordinate $R=r/r_0$ .", "For generic multipole we define the dimensionless quantity $W_{\\ell }(R)\\ \\equiv \\frac{1}{(\\omega r)^2} \\widehat{W}_{\\ell }(\\rho (r)) \\,,$ note that this is not precisely the same definition as what was performed in (REF ).", "Here the factor $1/r^2$ captures the Jacobian of the transformation: $\\int \\sqrt{\\widehat{W}_{\\ell }(\\rho )} \\mathrm {d}\\rho = \\omega r_0 \\int \\sqrt{W_{\\ell }(r)} \\mathrm {d}R \\, .$ Before moving on, we note that within the present formalism we cannot compute the time delay beyond the leading-order WKB approximation for the $\\ell >0$ case.", "It is well known that higher-order ($n>0$ ) WKB corrections are divergent at the turning point.", "This simply signals the breaking of the approximation in this region and the WKB solution can be improved by matching to an asymptotic solution near the turning point.", "Nevertheless, this does not modify the asymptotic behavior of the WKB solution and thus does not change the inferred time delay.", "While these subleading contributions seem to involve infinities at finite order in the WKB series expansion, the physical phase shift is finite so that upon appropriate re-organization or resummation of the series the result will end up being finite.", "We could in principle carry out this resummation or re-organization of the series, however for simplicity we focus here instead in the regime where $(\\omega r_0) \\gg 1$ so that all WKB corrections can safely be ignored.", "At first order in the WKB approximation, the phase shift of the partial wave with $\\ell >0$ can be identified as (see Appendix B of [89]) $\\delta _{\\ell } = (\\omega r_0) \\left[ \\int _{R_t}^{\\infty } \\mathrm {d}R \\left( \\sqrt{W_{\\ell }(R)} - 1 \\right) - R_t + \\frac{1}{2} B \\pi \\right] \\, , $ where the $R_t=r_t/r_0$ is the dimensionless turning point such that $W_{\\ell }(r_t)=0$ and we have introduced the dimensionless impact parameter $B = b/r_0$ , where $b = (\\ell + 1/2)/ \\omega $ is the impact parameter of the free theory, i.e.", "when $g_{i}=0$ .", "We remind the reader that the definitions of all dimensionless parameters are reported in Table REF of Appendix .", "In order to perform the integral in Eq.", "(REF ) analytically, we will expand the integrand at LO as in the monopole case.", "We have to be careful when splitting the integral order by order so that each term is a converging integral.", "To do so, we start by writing $W_{\\ell }(R) = W_{\\ell 
}(R)|_{{g_i=0}}+\\delta W_{\\ell }(R)\\ ,$ where $ W_{\\ell }(R)|_{{g_i=0}}$ is the contribution arising purely from the angular momentum contributions but no self-interactions.", "Using the fact that $W_{\\ell }(R_t)=0$ , this can be rewritten as $W_{\\ell }(R) =\\left(1 - \\frac{R_t^2}{R^2}\\right) + \\delta W_{\\ell }(R)\\ , \\quad \\delta W_{\\ell }(R)=\\delta W_{\\ell }(R)-\\frac{R_t^2}{R^2}\\delta W_{\\ell }(R_t) \\ ,$ so that each contribution is finite at the integration boundaries and the integrals at each order in $\\epsilon _1$ and $\\epsilon _2$ converge.", "Now, expanding the square root at $\\mathcal {O}(\\epsilon ^4)$ gives $\\sqrt{W_{\\ell }(R)} = \\sqrt{ 1 - \\frac{R_t^2}{R^2} } + \\frac{U_{\\ell }(R)}{\\sqrt{ 1 - \\frac{R_t^2}{R^2} }} \\,,$ where $U_{\\ell }(R_t)=0$ .", "The LO explicit expressions for $W_{\\ell }(R)$ , $U_{\\ell }(R)$ , and $R_t$ can be found in Appendix .", "Integrating this expression gives the phase shift at $\\mathcal {O}(\\epsilon ^4)$ .", "Note that $R_t = B + \\mathcal {O}(\\epsilon ^4)$ and $U_{\\ell } = \\mathcal {O}(\\epsilon ^2)$ , hence, when dealing with the $U_{\\ell }$ term, the turning point $R_t$ can be replaced by $B$ since any corrections will contribute at NLO.", "This means that we can write $\\int _{R_t}^{\\infty } \\left( \\sqrt{W_{\\ell }(R)} -1 \\right) \\, \\mathrm {d}R = \\int _{R_t}^{\\infty } \\left( \\sqrt{ 1 - \\frac{R_t^2}{R^2} } -1 \\right) \\, \\mathrm {d}R + \\int _B^{\\infty } \\frac{U_{\\ell }(R)}{\\sqrt{ 1 - \\frac{B^2}{R^2} }} \\, \\mathrm {d}R \\,,$ giving $\\delta _{\\ell }(\\omega ) = (\\omega r_0) \\left[ \\int _B^{\\infty } \\frac{U_{\\ell }(R)}{\\sqrt{ 1 - \\frac{B^2}{R^2} }} \\, \\mathrm {d}R + \\frac{\\pi }{2} \\left( B - R_t \\right) \\right] \\,.", "$ To get the time delay, we need to differentiate the expression above with respect to $\\omega $ .", "As opposed to the monopole case where we fixed $\\ell $ , when going to higher multipoles it is convenenient to think of the scattering not in terms of the scattering of waves but of particles specified by a given impact parameter.", "That is to say, what is naturally held fixed for particle scattering is the impact parameter $b$ (or $B$ ).", "This is the time delay traditionally considered in the eikonal approximation (see for example [76]).", "Thus, the time delay reads $(\\omega \\Delta T_{b}(\\omega )) = 2 \\frac{\\partial \\delta _{\\ell }(\\omega )}{\\partial \\omega } \\big |_{b}= 2 (\\omega r_0) \\left[ \\int _{R_t}^{\\infty } \\left( \\partial _{\\omega } \\left( \\omega \\sqrt{W_{\\ell }(R)} \\right) - 1 \\right) \\mathrm {d}R - R_t + \\frac{1}{2} B \\pi \\right] \\, ,$ which after using Eq.", "(REF ) can be written as $(\\omega \\Delta T_{b}(\\omega )) = 2 (\\omega r_0) \\left[ \\int _{B}^{\\infty } \\frac{\\partial _{\\omega } (\\omega U_{\\ell }(R))}{\\sqrt{1 - \\frac{B^2}{R^2}}} \\mathrm {d}R + \\frac{\\pi }{2} \\left( B - \\partial _{\\omega } (\\omega R_t) \\right) \\right] \\,.", "$ In the next section, we will explore the regions in Wilson coefficient space that can lead to a resolvable time advance given by Eq.", "(REF ).", "Contrary to the $\\ell =0$ case, the contribution to the time delay from $g_{10}$ , found in Eq.", "(REF ), is not sign definite when we have angular momentum.", "This will allow us to bound the $g_{10}$ coefficient from above and below.", "The tightest bounds will arise from considering the scattering of higher-order multipole modes.", "Note that while we can take the large-$\\ell $ limit, we cannot take $\\ell 
\\rightarrow \\infty $ .", "This can be seen by writing $L=\\ell +1/2$ , and $L =\\omega b=\\frac{B \\Omega }{\\epsilon _2} \\,.$ The impact parameter $B$ cannot be taken to infinity, otherwise there would be no scattering.", "Meanwhile, $\\Omega $ is bounded by Eq.", "(REF ) so that we stay within the regime of validity of the EFT.", "Thus, at a fixed impact parameter, the angular momentum has an upper bound given by $L\\ll \\frac{B}{\\epsilon _2^2} \\ .$ Note that this large angular momentum limit is related to the standard approach of computing phase shifts by looking at the eikonal limit of $2 \\rightarrow 2$ scatterings." ], [ "Causal shift-symmetric theories", "We now consider a specific background profile to obtain the constraints on the Wilson coefficients imposed by causality.", "We use an analytic function in order to avoid any possible divergences at $R=0$ .", "Furthermore, we require that the background vanishes at infinity to have a well-defined scattering around an asymptotically flat background, that is, the light-cones observed by the perturbation approach the Minkowski ones near infinity.", "Therefore, we will consider a profile of the form $f(R^2)=\\left(\\sum _{n=0}^{p} a_{2n} R^{2n} \\right) e^{-R^2} \\ ,$ where $a_{2n}$ are arbitrary coefficients of order 1.", "For a generic scalar field EFT in its own right (not coupled to gravity), one can always consider an external source $J$ that would generate such a profile.", "In more specific contexts where the scalar field is considered to be diagnosing one of the degrees of freedom of gravity (as would for instance be the case for the helicity-0 mode in [102] or in massive gravity [103], [104]), one may consider more carefully how such a profile could be generated, as discussed in Appendix .", "When considering such profiles (REF ), the largest contributions to the time delay (or advance) come from small powers $n$ , so in practice we truncate the series by choosing $p=3$ , i.e.", "including terms up to $a_6$ .", "Given this profile, we can explore the regions where one can obtain a resolvable time advance, that is, $\\omega \\Delta T_{b} \\lesssim -1$ while keeping the EFT under control, and hence violate causality.", "Since we have already established in the homogeneous case that $g_8$ ought to be positive, we can set $g_8=1$ without loss of generality (this simply corresponds to a rescaling of all the Wilson coefficients by $g_8$ ).", "The case $g_8=0$ will be considered separately in what follows.", "We use an extremisation procedure to find the largest region where causality is violated, following the method explained in Appendix ."
], [ "Monopole modes:", "We first consider the $\\ell =0$ case.", "In the extremisation procedure, we include all the constraints on the dimensionless parameters arising from the validity of the EFT as in (REF ) and we require that the LO and NLO results differ only by a $3\\%$ for $g_{14}$ of order 1.", "The NLO contributions, found in Appendix , include WKB corrections as well as higher-order EFT terms.", "Within our parameterisation, we find the tightest constraints by considering $a_0=1 \\ ,\\ a_2=0.72 \\ , \\ a_4\\sim 0 \\ , \\ a_6=0.14 \\ , \\ \\epsilon _1=0.36 \\ ,\\ \\epsilon _2=0.35 \\ , \\ \\Omega =0.70 \\ , $ although it is likely that even tighter constraints could be derived if one considered other classes of profiles, the bounds we obtain here already serve as proof of principle.", "The bounds arising from the previous choice are shown in blue in Fig.", "REF together with the orange positivity bound from [33].", "It is easy to prove, by examining Eq.", "(REF ), that the slope of the line delimitating the causal region from the acausal one is negative for any choice of coefficients when considering $\\ell =0$ .", "This means that when considering the monopole, we can only get a left-sided bound as argued earlier.", "Figure: Positivity and monopole causality constraints for the shift-symmetric scalar EFT considered in ().", "In white, we observe a region that can lead to violations of causal propagation in the infrared, i.e.", "where ΔT 0 <-1/ω\\Delta T_0 < -1/\\omega .", "The blue region is its complement where there is no yet any indication of causality violation.", "Here, we have focused on bounds arising from monopole modes with a background profile given by Eq.", "() and the coefficient of the (∂φ) 4 (\\partial \\phi )^4 operator set to g 8 =1g_8=1.", "In orange, we observe the region that satisfies the positivity constraints in , and assumes physical properties of the UV completion.Note that our choice in Eq.", "(REF ) implies $\\omega r_0\\sim 2$ which does not suppress higher-order WKB corrections.", "Nevertheless, when working at $\\mathcal {O}(\\epsilon ^4)$ we can safely consider this case since all the corrections $\\delta _{\\rm WKB}^{(2n)}$ for $n\\ge 2$ will correspond to total derivatives that do not contribute to the phase shift.", "One can see that this is the case by looking at Eq.", "(REF ) and noting that after expanding in $\\epsilon _1$ and $\\epsilon _2$ up to order $\\mathcal {O}(\\epsilon ^4)$ the WKB corrections will arise from $W^{(2n)}$ ." 
], [ "Higher multipole modes:", "Moving to the higher multipoles, $\\ell >0$ , we consider the same profile as in Eq.", "(REF ).", "By allowing finite values of $\\ell $ , the equation for the line $(\\omega \\Delta T_{b}) = -1$ separating regions of “causality-violation\" has a new free parameter and now allows for a positive slope.", "This opens the possibility to constrain the causality region from both sides and below.", "We do not get a better lower-sided bound on $g_{10}$ but we do get an upper bound by considering the union of the constraints arising from a set of parameters as explained in Appendix .", "Remarkably, this method also sets a lower bound on $g_{12}$ which ought to be positive, in complete agreements with positivity bounds.", "In itself this is a remarkable statement as in the presence of the $g_8$ operator the speed is typically dominated by that term and little would be inferred from $g_{12}$ .", "Our results are shown in Fig.", "REF where the causality bound region corresponds to the intersection of regions in Wilson coefficient space that do not give rise to resolvable time advances as defined in Appendix .", "An example of a set of parameters that we use to obtain the causality bounds is given by $a_0=-5 \\ ,\\ a_2=-5 \\ , \\ a_4=5 \\ , \\ a_6=-0.91 \\ , \\ \\epsilon _1=0.17 \\ ,\\ \\epsilon _2=0.17 \\ , \\ \\Omega =3 \\ , $ leading to our tightest bound on $g_{10}$ at $g_{12}=0$ .", "Once again we do not preclude the possibility that stronger bounds could be obtained by improved optimization methods or by considering more generic classes of profiles, however great care should be taken so as to ensure validity of the EFT and WKB approximation.", "As in the previous case, we ensure that we are within the regime of validity of the EFT by satisfying Eq.", "(REF ).", "Furthermore, we only work at leading order in the WKB approximation and guarantee that higher-order corrections are negligible by taking $\\omega r_0\\sim \\mathcal {O}(20)$ .", "Note that, as explained earlier, odd higher-order WKB corrections only contribute to the overall amplitude and hence, corrections to the phase shift (and time delay) only come from even higher-order WKB corrections, which are then suppressed by powers of $(\\omega r_0)^2\\sim \\mathcal {O}(400)$ .", "In contrast to the $\\ell =0$ analysis, we cannot compare to the NLO corrections since these would include WKB corrections that we cannot compute within our formalism as explained in the previous section.", "However, we do ensure smallness of the corrections by relying on dimension analysis given by Eq.", "(REF ).", "It is interesting to note that the tightest bounds that we found come from the region where $\\ell \\sim \\mathcal {O}(30)$ , and thus are related to calculations in the eikonal limit.", "On the other hand, the results from the previous case ($\\ell =0$ ) arise in the opposite regime that is less explored in the literature.", "Figure: The blue and orange regions represent the EFTs satisfying causality bounds from higher multipoles and positivity bounds respectively.", "The regions are computed as in Fig.", "with g 8 =1g_8=1, but the causality constraints are those arising from higher-order multipole modes.Combining monopole and higher multipoles causality bounds gives rise to the left panel of Fig.", "REF strongly constraining the viable region of the $\\lbrace g_{10}, g_{12}\\rbrace $ parameter space.", "We highlight that there is room for our procedure to be further tightened (for instance by considering more generic 
backgrounds and more freedom in their parameterizations and their scaling).", "As a result, the white regions ruled out in Figs.", "REF and REF are very likely not the tightest bounds that one can obtain from causality, but they already make close contact with standard positivity bounds and with the new compact positivity bounds." ], [ "Causality in Galileon theories", "Besides the shift-symmetric theory considered throughout this paper, one can impose a more constraining, spacetime-dependent, shift symmetry given by $\\phi \\rightarrow \\phi +c + b_\\mu x^\\mu \\ ,$ where $c$ is a constant and $b_\\mu $ a constant vector.", "This is the Galileon symmetry [96], which arises in various contexts such as massive gravity theories, brane-world models, accelerating universes, inflationary models and alternatives to inflation [105].", "Imposing this new symmetry requires that we set $g_8=0$ in Eq.", "(REF ).", "Note that $\\textit {any}$ scalar low energy EFT that enjoys a Galileon symmetry (with no other light degrees of freedom) is forbidden by positivity bounds.", "Setting $g_8=0$ , the positivity bounds (REF ) then impose $g_{10}=g_{12}=0$ .", "This means that, when viewed as a low energy scalar EFT, a Galileon cannot have a Wilsonian UV completion that is local, unitary, causal, and Poincaré invariant.", "Here, we would like to understand whether we can obtain similarly stringent bounds from infrared causality alone, with no further input on the UV completion.", "The analysis proceeds in the same way as in the shift-symmetric case above, with the only modification arising from the requirements for the validity of the EFT, which now read $\\epsilon _1 \\epsilon _2\\ll 1 \\, , \\quad \\text{and} \\quad \\Omega \\epsilon _2 \\ll 1 \\ .", "$ The validity of the WKB approximation and the above EFT requirements imply that $\\epsilon _2\\ll \\Omega \\ll 1/ \\epsilon _2$ .", "In order to have a well-defined $\\epsilon $ expansion, we require the slightly tighter lower bound $\\sqrt{\\epsilon _2}\\ll \\Omega $ .", "While in the shift-symmetric case we had $\\epsilon _1\\sim \\epsilon _2$ , here $\\epsilon _1$ can in principle be larger since, thanks to the Galileon symmetry, all operators are always suppressed by some power of $\\epsilon _2$ .", "The LO or $\\mathcal {O}(\\epsilon ^4)$ corrections simply include the $\\epsilon _1^2 \\epsilon _2^2$ terms.", "Figure: Causality bounds for the Galileon EFT ($g_8=0$ ).", "The green region represents the monopole causality bounds (for the profile considered in Eq. ()).", "The blue region represents causality bounds from higher multipoles, leading to two-sided bounds.", "Only the intersection of the blue and green regions is so far causally viable.", "As previously, we consider propagation around the background profile in Eq.", "(REF ).", "When computing the time delay for $\\ell =0$ modes, we require that the NLO result differs from the LO one by no more than $3\\%$ for $g_{14}$ of order 1.", "As in the shift-symmetric case, we can only get lower bounds on $g_{10}$ in this regime.", "Note that the monopole constraint for the Galileon symmetry gives $g_{10} \\gtrsim 0$ , which is nearly as good as it can be for a one-sided bound.", "Meanwhile, in the higher-multipole case, i.e.", "$\\ell >0$ , we only consider the leading-order WKB results, as in the previous case.", "For this case, we closely reproduce the $\\ell =0$ left-sided bound and obtain a new maximal right-sided bound, as seen in blue in Fig.", "REF ."
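It is worth spelling out where this monopole statement comes from: setting $g_8=0$ in the LO monopole integrand given earlier and again dropping total derivatives, the $\\ell =0$ time delay reduces at this order to $\\omega \\Delta T_0\\big |_{g_8=0} \\simeq 24\\, (\\omega r_0)\\, \\epsilon _1^2\\epsilon _2^2 \\int _0^{\\infty }\\left[ g_{12}\\, \\Omega ^2 f^{\\prime \\prime }(R)^2 + \\frac{g_{10}}{2}\\, \\frac{f^{\\prime }(R)^2}{R^2}\\right] \\mathrm {d}R \\, ,$ so that for $g_{12}=0$ the sign of the LO time delay is fixed by the sign of $g_{10}$ , consistent with the lower bound $g_{10} \\gtrsim 0$ quoted above, while the $g_{12}$ contribution is again positive definite for $g_{12}>0$ .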
], [ "Discussion and conclusions", "We have seen that requiring that the effective field theory only leads to causal propagation around a given spherically-symmetric background allows us to put tight bounds on the Wilson coefficients of a low energy EFT, independently of its ultimate high energy completion.", "Remarkably, there are two physical regimes that give rise to different bounds.", "The propagation of zero angular momentum partial waves gives rise to lower bounds while the propagation of high $\\ell $ modes imposes both lower and upper bounds, although the lower bounds are in general not competitive with those arising from $\\ell =0$ modes.", "We can summarise our findings by combining both results from the monopole and the higher-order multipoles.", "This is shown in the blue causal regions depicted in Fig.", "REF .", "Figure: Infrared Causality constraints on the Wilson coefficients of two scalar low-energy EFT, a shift-symmetric one with g 8 =1g_8=1 on the left and a Galileon-symmetric one with g 8 =0g_8=0 on the right.", "In both cases, the white areas are regions in the Wilson coefficients space where a violation of causality can be observed at low-energy, whereas the orange one is derived from positivity bounds requiring assumptions in the UV.", "To obtain these results, we combined lower and upper bounds derived respectively in the ℓ=0\\ell =0 and ℓ>0\\ell >0 cases.On the left pane of Fig.", "REF we observe the causality bounds (blue) compared to the positivity bounds (orange).", "While our causality bounds are not as constraining as the positivity ones, we note two important points.", "First, contrary to the positivity bounds, causality bounds do not require any assumptions of the UV completion (including notably, unitarity and locality) they arise purely from infrared physics that is well described by the EFT.", "Second, positivity bounds have by now been optimised using various techniques allowing to probe features of the EFT beyond its forward limit, while ours were so far obtained using a simple static and spherically symmetric profile with a simple extremisation procedure.", "It is likely that tighter bounds could be derived by allowing for more generic and less symmetric profiles.", "More importantly, we highlight that the precise numerical values of the causal bounds should not be the main focus of our results.", "The fact that by simply requiring causal propagation in the infrared we can obtain such semi-compact bounds is in itself remarkable.", "A naive version of the right-sided positivity bound is given by $g_{10}<2g_8$ and can be derived simply using the $s \\leftrightarrow u$ dispersion relation [33].", "This bound is slightly optimised when using triple crossing symmetry $s \\leftrightarrow t \\leftrightarrow u$ .", "Note that in our causality bounds, we only produce an upper bound for $g_{10}$ and lower bound for $g_{12}$ when looking at higher multipoles.", "On the other hand, the left-sided positivity bounds are fully coming from triple crossing symmetry.", "In our analysis this lower bound can be reproduced by looking at both high $\\ell $ and $\\ell =0$ scattering, but the stronger bound comes from the monopole bound.", "This suggests that our analysis approximately reproduces bounds purely from $s \\leftrightarrow u$ dispersion relation in the UV when looking at higher multipoles and triple crossing symmetry when looking at the monopole.", "However, this seems to be the opposite behaviour of the one observed in [33], [34], where the upper bound is 
obtained at $\\ell =0$ and the lower one at $\\ell \\ge 2$ .", "Correspondingly, in the right panel of Fig.", "REF we see that requiring infrared causality of the Galileon theory allows us to recover a result very similar to the recently derived fully crossing-symmetric positivity bounds that entirely rule out the quartic Galileon by assuming properties of the UV completion.", "Thus, we effectively rule out the quartic Galileon as a causal low energy scalar effective field theory with no other light degrees of freedom.", "This does not imply that we rule out the quartic Galileon coupling that would arise in a gravitational setting.", "For example, the Galileon theory is a meaningful decoupling limit of massive gravity theories, but can never be considered as a low energy description without the inclusion of other modes.", "Moreover, the Galileon field would generically couple to the trace of the stress-energy tensor, which must obey some consistency conditions of its own.", "We discuss this point in Appendix  and leave for future work the analysis of the gravitationally coupled situation, in which one has to impose conditions on the sources for them to be physical.", "Instead, our analysis holds if we assume that we are dealing with a scalar EFT in its own right that can be coupled to an arbitrary external source, so that causal propagation is required for any possible external source configuration.", "Over the past few years, remarkable progress has been made in deriving new sets of non-linear, compact positivity bounds that make use of full $s\\leftrightarrow t \\leftrightarrow u$ crossing symmetry.", "This work serves as a proof of principle that low energy causality arguments alone can go a long way in making contact with known positivity bounds.", "This extends the earlier observation of [2] (for a more recent discussion connecting time delays and positivity bounds see Appendix A of [31]).", "It would be interesting to understand how constraining low energy causality is when optimising the bounds derived in this paper across more general backgrounds, similar to those considered in [89], [85], [86].", "One might expect that fewer symmetries could lead to stronger bounds.", "Similarly, one could use this approach to constrain the Wilson coefficients of higher-derivative terms that arise in the EFT and that have previously been bounded using positivity arguments.", "One appeal of these constraints is that they can easily be generalised to include operators that are higher order in the field and hence would not contribute at tree level to known $2\\rightarrow 2$ positivity bounds.", "Furthermore, the requirement of low energy causality can be imposed on gravitational theories and curved backgrounds without running into problems related to the lack of an S-matrix or broken Lorentz symmetries, which makes it particularly appealing for instance for cosmological [56], [57], [62] or black hole gravitational bounds [85], [86].", "In future work, we will explore how causality can give rise to bounds in such situations."
], [ "Acknowledgments", "We would like to thank Andrei Khmelnitsky for collaborations in the earlier stages of this work.", "We would also like to thank Cliff Burgess, Massimo Porrati and the organizers and attendees of the IAS workshop “Possible and Impossible in Effective Field Theory: From the S-Matrix to the Swampland\" for useful discussions.", "The work of MCG, AJT and CdR is supported by STFC grant ST/T000791/1.", "MCG and CdR are supported by the European Union's Horizon 2020 Research Council grant 724659 MassiveCosmo ERC–2016–COG.", "VP is funded by the Imperial College President's Fellowship.", "CdR thanks the Royal Society for support at ICL through a Wolfson Research Merit Award.", "CdR is also supported by a Simons Foundation award ID 555326 under the Simons Foundation Origins of the Universe initiative, Cosmology Beyond Einstein's Theory and by a Simons Investigator award 690508.", "AJT thanks the Royal Society for support at ICL through a Wolfson Research Merit Award." ], [ "Causal time advances and Lorentz invariant UV completions", "As noted by Wigner and Eisenbud [69], [70], for scattering in a potential of finite range $a$ , it is natural to obtain a scattering time advance of $2a/v$ for spherical wave scattering since this reflects the time advance that a wave which scatters directly off the hard boundary at $r=a$ , relative to a wave which makes it to $r=0$ .", "Clearly this does not violate causality, and so the causality condition of Wigner-Eisenbud for monopole ($\\ell =0$ ) scattering is $\\Delta T > - \\frac{2a}{v} -\\frac{{\\cal O}(1)}{\\omega } \\, ,$ with $v$ the group velocity of the wave.", "Given this, one may wonder whether we have been too strict in our consideration of monopole scattering by not allowing any time advance.", "The key difference is that we are interested in the scattering of essentially massless particles in the relativistic limit for which $\\omega $ is large in comparison to the potential $V$ , and the scale of variations of the potential $r_0$ .", "More precisely we assume ${\\rm Max}[V^{(n)}(r)]\\ll \\omega ^{n+1}$ for all $n \\ge 0$ .", "In this limit, no resolvable time advance is consistent with Lorentz invariant causality.", "To understand why this is the case, let us consider the case of relativistic scattering off of a (quasi-)hard sphere.", "To make comparison with the non-relativistic problem, consider a complex massive scalar field $\\Phi $ of mass $m$ , which is charged under a $U(1)$ gauge field whose Coulomb potential $q A_0 = V(r)$ takes the form $V(r) = V_0 \\theta (a-r) \\, .$ The equation of motion for the complex scalar is $m^2 \\Phi -\\nabla ^2 \\Phi + D_t^2 \\Phi =0 \\, ,$ where $D_t =\\partial _t + i V$ .", "For a given frequency and multipole we have $(\\omega -V(r))^2 \\Phi = m^2 \\Phi -\\frac{1}{r^2} \\frac{\\partial }{\\partial r} \\left( r^2 \\frac{\\partial \\Phi }{\\partial r} \\right) +\\frac{\\ell (\\ell +1)}{r^2} \\Phi \\, .$ The non-relativistic problem is obtained as usual by replacing $\\omega = m+\\omega _{\\rm NR}$ and neglecting $\\omega _{\\rm NR}^2$ and $V^2$ terms.", "Focussing on the monopole case $\\ell =0$ for simplicity, the solution for $r<a$ which is regular at $r=0$ is $\\Phi (r) = \\frac{A}{r} \\sin \\left(\\kappa _0 \\, r\\right) \\, ,$ with $\\kappa _0= \\sqrt{(\\omega -V_0)^2-m^2}$ .", "Denoting $k = \\sqrt{\\omega ^2-m^2}$ , the solution for $r>a$ can be parametrised as $\\Phi (r) = \\frac{A^{\\prime }}{2i r} \\left(e^{2i \\delta } e^{i k r} -e^{-ik r} \\right) \\, .$ Matching at $r=a$ 
determines the relativistic phase shift to be $e^{2 i \\delta } = e^{-2i a k} \\frac{\\kappa _0 \\cos (a \\kappa _0)+ i k \\sin (a \\kappa _0)}{\\kappa _0 \\cos (a \\kappa _0)- i k \\sin (a \\kappa _0)} \\, .$ Now in the true hard sphere limit $|V_0| \\rightarrow \\infty $ for which the field vanishes for $r<a$ the phase shift reduces to $e^{2 i \\delta } = e^{-2i a k}\\,,$ and as expected this gives the relativistic version of the time advance noted by Wigner and Eisenbud $\\Delta T =2 \\frac{\\partial \\delta }{\\partial \\omega }= -\\frac{2 a}{v}\\,,$ with $v= \\frac{\\mathrm {d}\\omega }{\\mathrm {d}k}=k/\\omega $ , and a similar behaviour occurs even at finite $V_0$ consistent with the bound (REF ).", "Crucially however this effect occurs because the potential is sharper that the frequencies being considered.", "If we consider rather the situation where the frequencies are large in comparison to the typical scale of variation of the potential, we may use the WKB approximation for which the phase shift will take the approximate form $\\delta = \\int _0^{\\infty } \\mathrm {d}r \\left( \\kappa (r)- \\sqrt{\\omega ^2-m^2} \\right) \\, ,$ where now $\\kappa (r) = \\sqrt{(\\omega -V(r))^2-m^2} \\, .$ For $\\omega >{\\rm Max}\\left( |V(r)|\\right)$ in the massless case $m=0$ , the leading WKB correction to the time delay vanishes for $m=0$ since the leading contribution to the phase shift is frequency-independent.", "The first order correction to the WKB phase shift gives a frequency-dependent term which gives rise to a time-delay $\\Delta T \\sim \\frac{V^{\\prime }(0)}{\\omega ^3}+\\dots $ In the high frequency limit we are working in where $\\omega ^2\\gg |V^{\\prime }(r)|$ this time delay/advance is unresolvable $|\\omega \\Delta T|\\ll 1$ and higher order WKB corrections are similarly negligible.", "The massive case is slightly more subtle.", "The leading WKB term gives a correction $\\Delta T \\approx \\int _0^{\\infty } \\mathrm {d}r \\frac{m^2 (2 \\omega -V(r)) V(r)}{\\omega ^2 (\\omega -V)^2} \\approx \\int _0^{\\infty } \\mathrm {d}r \\frac{2 m^2 V(r)}{\\omega ^3} \\, ,$ where in the last step we assumed $\\omega \\gg {\\rm Max}\\left( |V(r)|\\right)$ .", "At first sight, it looks like we can easily obtain a time advance from a region of negative potential.", "However, for the situations considered in the main text, any background configuration can be parametrised by an overall amplitude and scale in terms of a dimensionless function.", "Similarly consider a potential of the form $V(r)= V_0 f\\left(r/r_0\\right)\\,,$ where $f(x)$ is a dimensionless function.", "The maximum time advance relative to a freely propagating massive particle we can create in this region is then of order $|\\Delta T| \\sim \\frac{ m^2 V_0 r_0}{\\omega ^3} \\, .$ By assumption, for the WKB approximation to be valid we need $\\omega \\gg r_0^{-1}$ .", "Furthermore we have assumed $V_0 \\ll \\omega $ .", "Thus we have the bound $\\omega |\\Delta T| \\ll m^2r_0^2\\,.$ For the theories considered in the paper, we assume the fundamental field is massless and any effective mass generated for fluctuations around a given background solution will be bounded in the sense $m^2 \\lesssim {\\cal O}(1) r_0^{-2}$ , and hence these potential time advances are unresolvable $\\omega |\\Delta T| \\ll 1 $ .", "Thus provided we consider the region $\\omega \\gg (r_0^{-1}, {\\rm Max}\\left( |V(r)|\\right))$ we do not expect to obtain any resolvable time advance.", "In summary, although time advances for monopole scattering are 
allowed in the non-relativistic and low frequency region without contradicting causality, for the scattering of massless or light (in the scale of the background) high frequency scattering is not expected to lead to any resolvable time advance and this is implicit in our use of this criterion in the main text." ], [ "Positivity of Lorentz invariant UV completions", "The previous example was particularly trivial since it does not lead to any interesting time delay at high frequencies.", "To make it more interesting, and to generate a resolvable time delay, consider now a UV theory of two charged scalars, whose fluctuations may be described by one light field $\\Phi $ and one heavy field $H$ with mass $M$ .", "Integrating out the heavy scalar will give EFT corrections to the previously considered theory which describe the scattering and will give rise to a time delay.", "Focussing on monopole fluctuations, it is natural to rescale $\\Phi = \\varphi /r$ and $H = h/r$ .", "We will assume the quadratic action for the monopole fluctuations in the UV completion takes the $U(1)$ invariant form $S &=& \\int \\mathrm {d}t \\int _0^{\\infty } \\mathrm {d}r \\int \\mathrm {d}\\Omega \\, \\left( |D_t \\phi |^2 - | \\partial _r \\phi |^2-m^2 |\\phi |^2 +|D_t h|^2 - | \\partial _r h |^2- M^2 |h|^2 \\right.", "\\\\&+&\\left.", "\\alpha h^*\\partial _r \\phi + \\beta h^* D_t \\phi + \\alpha ^* h \\partial _r \\phi ^* + \\beta ^* h (D_t\\phi )^* \\right) \\, ,\\nonumber $ where we have dropped any mass mixing terms which can be traded for derivative interactions by a field redefinition.", "This is manifestly relativistically causal by virtue of the Lorentz invariant two derivative terms which dominate the dynamics at high energy and determine the causal support of the retarded propagators.", "Integrating out the heavy field gives a low energy effective theory whose cutoff is $\\Lambda = M$ and whose full effective action is $S &=& \\int \\mathrm {d}t \\int _0^{\\infty } \\mathrm {d}r \\int \\mathrm {d}\\Omega \\, \\Big ( |D_t \\phi |^2 - | \\partial _r \\phi |^2-m^2 |\\phi |^2+ (\\alpha \\partial _r \\phi + \\beta D_t \\phi )^* \\frac{1}{M^2+D_t^2-\\partial _r^2} (\\alpha \\partial _r \\phi +\\beta D_t \\phi ) \\Big ) \\, .\\nonumber $ The effective dispersion relation is $((\\omega -V)^2-k_r^2-m^2) ((\\omega -V)^2-k_r^2-M^2)- |\\alpha k_r - \\beta (\\omega -V)|^2 =0 \\, .$ Due to the presence of odd powers of $k_r$ in the dispersion relation, the outgoing and ingoing waves have different magnitudes for their momenta $k_r^{\\pm }$ and the WKB scattered wave may be parametrised as $\\phi =A(r) \\left( e^{i \\int _0^{r} k_r^+ \\mathrm {d}r }- e^{i \\int _0^{r} k_r^- \\mathrm {d}r } \\right) \\, .$ which is matched against the asymptotics $\\phi = A^{\\prime } \\left( e^{2 i \\delta } e^{i \\sqrt{\\omega ^2-m^2} r}- e^{-i \\sqrt{\\omega ^2-m^2} r} \\right)\\,,$ to give the WKB phase shift $\\delta = \\int _0^{\\infty } \\mathrm {d}r \\left[\\frac{1}{2} (k_r^+(r)+k_r^-(r))- \\sqrt{\\omega ^2-m^2} \\right] \\, .$ In the regime of validity of the low energy EFT, the leading two derivative terms in the effective action are $S= \\int \\mathrm {d}t \\int _0^{\\infty } \\mathrm {d}r \\int \\mathrm {d}\\Omega \\, \\left( |D_t \\phi |^2 - | \\partial _r \\phi |^2-m^2 |\\phi |^2+\\frac{1}{M^2} |\\alpha \\partial _r \\phi +\\beta D_t\\phi |^2 + \\dots \\right) \\, ,$ and the time delay takes the form $\\Delta T = \\Delta T_{M=\\infty }+ \\Delta T_{\\rm EFT} \\, ,$ where $\\Delta T_{M=\\infty }$ is the delay obtained 
previously and the leading EFT correction is $\\Delta T_{\\rm EFT} &=& 2 \\frac{\\partial }{\\partial \\omega } \\int _0^{\\infty } \\mathrm {d}r \\left[\\frac{1}{2} (k_r^+(r)+k_r^-(r))-\\kappa (r) \\right] \\, , \\nonumber \\\\&=& \\frac{1}{M^2} \\frac{\\partial }{\\partial \\omega }\\int _0^{\\infty } \\mathrm {d}r \\left[\\frac{1}{2\\kappa (r)}\\left| \\alpha \\kappa (r)-(\\omega -V) \\beta \\right|^2 +\\frac{1}{2\\kappa (r)}\\left| \\alpha \\kappa (r)+(\\omega -V) \\beta \\right|^2 \\right] + \\dots \\nonumber \\\\&=& \\frac{1}{M^2}\\frac{\\partial }{\\partial \\omega }\\ \\int _0^{\\infty } \\mathrm {d}r \\left[ |\\alpha |^2 \\kappa +|\\beta |^2 \\frac{(\\omega -V)^2}{\\kappa } \\right] +\\dots \\nonumber \\\\&=& \\frac{1}{M^2} \\int _0^{\\infty } \\mathrm {d}r \\left[ |\\alpha |^2 \\frac{(\\omega -V)}{\\kappa }+|\\beta |^2 \\frac{(\\omega -V)}{\\kappa }\\left(1-\\frac{m^2}{\\kappa ^2}\\right) \\right] + \\dots \\, .$ In the WKB region considered, $\\kappa \\gg m$ , and $\\omega \\gg {\\rm Max}[V(r)]$ and so both terms are manifestly positive.", "Since in this example we know the UV completion, we can directly infer the cutoff in $\\omega $ of the low energy EFT by asking at what energy scale does the dispersion relation depart from that implied by the two derivative action (REF ).", "This is when $(\\omega -V) \\sim M^2/(|\\alpha |+|\\beta |)$ and so we infer that the largest time delay calculable within the low energy EFT we could create is bounded by $|\\omega \\Delta T_{\\rm EFT}| \\lesssim (|\\alpha |+|\\beta |) r_0 \\, .$ Since the RHS can be made arbitrarily large by increasing $r_0$ , remaining in the region of validity of the low energy EFT, this positive time delay can be made resolvable.", "Thus as anticipated, a consistent unitary Lorentz invariant UV completion of an EFT for a massless or light field gives rise to a positive, generally resolvable, time delay $\\Delta T>0$ in the WKB region, and the EFT contribution itself is by itself positive $ \\Delta T_{\\rm EFT}>0$ ." 
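, "As a purely illustrative aside (our addition, not part of the original derivation), the manifest positivity of the two WKB integrand terms above can be checked pointwise for a sample localised potential; the potential profile and all numerical values below are assumptions chosen only for demonstration:

```python
import numpy as np

# Pointwise positivity check of the two terms in the WKB integrand of
# Delta T_EFT, in the regime kappa >> m and omega >> Max|V(r)|.
alpha, beta = 0.3, 0.5          # heavy-light mixing couplings (assumed)
omega, m = 10.0, 0.1            # frequency and light mass, omega >> m
V0, r0 = 0.5, 1.0               # potential amplitude and width, omega >> V0

r = np.linspace(0.0, 20.0, 4000)
V = V0 * np.exp(-(r / r0) ** 2)              # sample localised potential
kappa = np.sqrt((omega - V) ** 2 - m ** 2)   # local WKB momentum

term_alpha = abs(alpha) ** 2 * (omega - V) / kappa
term_beta = abs(beta) ** 2 * (omega - V) / kappa * (1.0 - m ** 2 / kappa ** 2)

assert np.all(term_alpha > 0) and np.all(term_beta > 0)
print('min alpha term:', term_alpha.min(), 'min beta term:', term_beta.min())
```

Both terms remain strictly positive over the whole radial grid in this regime, consistent with the positivity of $\Delta T_{\rm EFT}$ stated above."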
], [ "Conventions", "In this Appendix, we summarise some our relations and conventions.", "For completeness, we consider the EFT including up to dimension-14 operators and work with the following form of the Lagrangian, $\\mathcal {L}=& - \\frac{1}{2} (\\partial \\phi )^2 - \\frac{1}{2} m^2 \\phi ^2+ \\frac{g_8}{\\Lambda ^4} (\\partial \\phi )^4 \\nonumber \\\\& + \\frac{g_{10}}{\\Lambda ^6} (\\partial \\phi )^2 \\Big [ (\\phi _{, \\mu \\nu })^2 - (\\Box \\phi )^2 \\Big ] + \\frac{g_{12}}{\\Lambda ^8} (( \\phi _{, \\mu \\nu } )^2 )^2 + \\frac{g_{14}}{\\Lambda ^{10}} ( \\phi _{, \\mu \\nu } )^2 ( \\phi _{, \\alpha \\beta \\gamma } )^2 \\, .$ The dimension-14 operator is constrained by the following positivity bounds $-2 g_{12} < g_{14} < \\frac{27}{5} (2g_8 - g_{10}) \\ .$ The relations between the parameters considered here and those included in [33] and [34] are given in the Table (REF ) below.", "Table: Parameters dictionary relating the conventions used in this work, defined in Eq.", "() and others presented in the literature.In order to extremise the causality bounds, it is convenient to work with dimensionless parameters.", "The relations between the dimensionless parameters and their dimensionfull counterparts is provided in Table REF below.", "Table: Parameters dictionary relating the dimensionless and dimensionfull ones.It is worth noting that $\\bar{\\Phi }_0$ carries the scale of the background field $\\bar{\\phi }$ , $r_0$ is its typical scale of variation, whereas $\\omega $ is the frequency of the scattered perturbation.", "The cutoff of the scalar EFT in Eq.", "(REF ) is given by $\\Lambda $ if the dimensionless couplings $g_i$ are all considered to be at most of order 1.", "Finally, $b$ and $r_t$ are respectively the impact parameter of the free theory and the turning point of the higher-multipole scattering events." 
], [ "NLO corrections to the time delay at $\\ell =0$", "In this Appendix, we provide the explicit expressions required for computing the time delay at the next order in the EFT, which we refer to as next-to-leading order (NLO).", "At NLO, the equation of motion for the monopole $\\ell =0$ is given by, $&\\left.", "\\hat{W}_0(R) \\right|_{\\rm NLO}= 1152 g_8^3 \\epsilon _1^6 f^{\\prime }(R)^6 +224 g_8 g_{12} \\Omega ^2 \\epsilon _1^4 \\epsilon _2^2 f^{\\prime }(R)^2 f^{\\prime \\prime }(R)^2 \\\\&+144 \\frac{g_8^2}{\\Omega ^2} \\epsilon _1^4 \\epsilon _2^2 \\left(2 \\frac{f^{\\prime }(R)^3 f^{\\prime \\prime }(R)}{R}+2 f^{\\prime }(R)^2 f^{\\prime \\prime }(R)^2+f^{(3)}(R) f^{\\prime }(R)^3\\right) \\nonumber \\\\&-96 g_8 g_{10} \\epsilon _1^4 \\epsilon _2^2 \\left(\\frac{f^{\\prime }(R)^4}{R^2}-3\\frac{f^{\\prime }(R)^3 f^{\\prime \\prime }(R)}{R}\\right) \\nonumber \\\\&-8 g_{12} \\epsilon _1^2 \\epsilon _2^4 \\left(2\\frac{f^{\\prime }(R)^2}{R^4}+\\partial _R\\left(4 \\frac{f^{\\prime }(R)f^{\\prime \\prime }(R)}{R^2} -2 \\frac{f^{\\prime \\prime }(R)^2}{R} + f^{(3)}(R) f^{\\prime \\prime }(R)\\right)\\right) \\nonumber \\\\&-4 g_{14} \\Omega ^2 \\epsilon _1^2 \\epsilon _2^4 \\left(12\\frac{f^{\\prime }(R) f^{\\prime \\prime }(R)}{R^3}+4\\frac{f^{(3)}(R) f^{\\prime }(R)-3 f^{\\prime \\prime }(R)^2}{R^2}+2\\frac{f^{(3)}(R) f^{\\prime \\prime }(R)}{R}+\\partial _R\\left(f^{(3)}(R) f^{\\prime \\prime }(R)\\right)\\right) \\nonumber \\\\&- 12 \\frac{g_{10}}{\\Omega ^2} \\epsilon _1^2 \\epsilon _2^4 \\left(\\frac{f^{\\prime }(R)^2}{R^4}+\\partial _R\\left(\\frac{f^{\\prime }(R)f^{\\prime \\prime }(R)}{R^2}\\right)\\right) \\, .\\nonumber $ The sound speed square and effective potential are given by $&\\left.", "c_s^2(\\omega ^2,R) \\right|_{\\rm NLO}= -128 g_8^3 \\epsilon _1^6 f^{\\prime }(R)^6 - 96 g_8 g_{12} \\epsilon _1^4 \\epsilon _2^2 \\frac{\\omega ^2}{\\Lambda ^2} f^{\\prime }(R)^2 f^{\\prime \\prime }(R)^2 \\\\& + 96 g_8 g_{10} \\epsilon _1^4 \\epsilon _2^2 \\left( \\frac{f^{\\prime }(R)^4}{R^2} + \\frac{f^{\\prime }(R)^3 f^{\\prime \\prime }(R)}{R} \\right) \\nonumber \\\\&+ 8 g_{12} \\epsilon _1^2 \\epsilon _2^4 \\left( 2 \\frac{f^{\\prime }(R)^2}{R^4} + \\partial _R\\left(4 \\frac{f^{\\prime }(R)f^{\\prime \\prime }(R)}{R^2} -2 \\frac{f^{\\prime \\prime }(R)^2}{R} + f^{(3)}(R) f^{\\prime \\prime }(R)\\right) \\right) \\nonumber \\\\&+ 4 g_{14} \\epsilon _1^2 \\epsilon _2^4 \\frac{\\omega ^2}{\\Lambda ^2} \\left( 12 \\frac{f^{\\prime }(R) f^{\\prime \\prime }(R)}{R^3} +4 \\frac{-3 f^{\\prime \\prime }(R)^2 + f^{\\prime }(R) f^{(3)}(R)}{R^2} +2 \\frac{f^{\\prime \\prime }(R) f^{(3)}(R)}{R} + \\partial _R\\left(f^{(3)}(R) f^{\\prime \\prime }(R)\\right) \\right) \\ \\, , \\nonumber $ and $&\\left.", "V_{\\text{eff}}(R) \\right|_{\\rm NLO}= -48 g_8^2 \\epsilon _1^4 \\left(2\\frac{f^{\\prime }(R)^3 f^{\\prime \\prime }(R)}{R}+4 f^{\\prime }(R)^2 f^{\\prime \\prime }(R)^2+f^{(3)}(R) f^{\\prime }(R)^3 \\right) \\\\&+12 g_{10} \\epsilon _2^2 \\epsilon _1^2 \\left(\\frac{f^{\\prime }(R)^2}{R^4}+\\partial _R\\left(\\frac{f^{\\prime }(R)f^{\\prime \\prime }(R)}{R^2}\\right)\\right) \\, .", "\\nonumber $ The integrand of the time delay at NLO is given by $\\left.", "\\mathcal {I}_0(\\omega ^2,R) \\right|_{\\rm NLO}= &8 (\\omega r_0) \\epsilon _1^2 \\left[ 104 g_8^3 \\epsilon _1^4 f^{\\prime }(R)^6 -2 \\frac{g_8^2}{\\Omega ^2} \\epsilon _1^2 \\epsilon _2^2 \\left(3 \\frac{f^{\\prime }(R)^4}{R^2}-4 f^{\\prime }(R)^2 f^{\\prime \\prime }(R)^2\\right) - 6 g_8 g_{10} \\epsilon _1^2 \\epsilon _2^2 
\\frac{f^{\\prime }(R)^4}{R^2} \\right.", "\\nonumber \\\\&\\left.", "+72 g_8 g_{12} \\Omega ^2 \\epsilon _1^2 \\epsilon _2^2 f^{\\prime }(R)^2 f^{\\prime \\prime }(R)^2 -\\frac{45}{4} g_{14} \\Omega ^2 \\epsilon _2^4 \\left(\\frac{f^{\\prime }(R)^2}{R^4} + \\frac{f^{(3)}(R) f^{\\prime }(R)-f^{\\prime \\prime }(R)^2}{R^2}\\right) \\right.", "\\nonumber \\\\&\\left.", "+ \\left(g_{12}-\\frac{3}{4} \\frac{g_{10}}{\\Omega ^2}\\right) \\epsilon _2^4 \\left(\\frac{f^{\\prime }(R)^2}{R^4}-\\frac{f^{(3)}(R)f^{\\prime }(R)+f^{\\prime \\prime }(R)^2}{R^2}\\right) \\right] \\nonumber \\\\&+ \\text{total derivatives} \\, .$ We do not write the total derivative terms explicitly since they vanish upon integration in the $\\ell =0$ case considered here.", "Note that the total derivatives include terms like $f^{\\prime }(R)^2/R^3, f^{\\prime }(R)f^{\\prime \\prime }(R)/R^2$ and $f^{\\prime \\prime }(R)^2/R$ that diverge when evaluated at the origin.", "In this analysis, we have been careful to cancel the divergences so that the total derivatives in the last line of Eq.", "(REF ) actually vanish upon integration from 0 to $\\infty $ ." ], [ "Higher-order multipoles", "In this Appendix, we provide the leading-order expressions to the various functions entering the computation of the time delay for $\\ell >0$ , as defined in Section REF .", "Note that since we are focusing on a regime where $\\omega r_0 \\gg 1$ in order to safely ignore all WKB corrections, we will ignore all $1/\\Omega $ corrections for consistency.", "Furthermore, such a regime also allows us to forget about NLO corrections, hence they will be omitted here.", "The function $W_{\\ell }(R)$ reads, at leading order, $\\left.", "W_{\\ell }(R) \\right|_{\\rm LO} = &\\left(1-\\frac{B^2}{R^2}\\right) \\left(1+ 8 g_8 \\epsilon _1^2 f^{\\prime }(R)^2 + 96 g_8^2 \\epsilon _1^4 f^{\\prime }(R)^4 \\right) \\\\&+ 8 g_{12} \\Omega ^2 \\epsilon _1^2 \\epsilon _2^2 \\left\\lbrace \\left(1-\\frac{B^2}{R^2}\\right) \\left(f^{\\prime \\prime }(R)-\\frac{f^{\\prime }(R)}{R}\\right)+\\frac{f^{\\prime }(R)}{R}\\right\\rbrace ^2 \\nonumber \\\\& +12 g_{10} \\epsilon _1^2 \\epsilon _2^2 \\frac{B^2}{R^2} \\left(\\frac{f^{\\prime }(R)^2}{R^2}-\\frac{f^{\\prime }(R)f^{\\prime \\prime }(R)}{R}\\right) \\nonumber \\,.$ This means that the square sound velocity and the effective potential at leading order read $\\left.", "c_s^2(\\omega ^2,R) \\right|_{\\rm LO}=& 1 -8 g_8 \\epsilon _1^2 f^{\\prime }(R)^2 -32 g_8^2 \\epsilon _1^4 f^{\\prime }(R)^4 \\\\& -8 g_{12} \\Omega ^2 \\epsilon _1^2 \\epsilon _2^2 \\left(2\\frac{B^2}{R^2} \\left(\\frac{f^{\\prime }(R)f^{\\prime \\prime }(R)}{R}-f^{\\prime \\prime }(R)^2\\right)+f^{\\prime \\prime }(R)^2\\right) -24 g_{10} \\epsilon _1^2 \\epsilon _2^2 \\frac{f^{\\prime }(R) f^{\\prime \\prime }(R)}{R} \\, ,\\nonumber \\\\\\left.", "V_{\\text{eff}}(R) \\right|_{\\rm LO}=& \\frac{L^2}{R^2} \\left[ \\vphantom{\\frac{1}{2}} 1 -8 g_{12} \\Omega ^2 \\epsilon _1^2 \\epsilon _2^2 f^{\\prime \\prime }(R)^2 -12 g_{10} \\epsilon _1^2 \\epsilon _2^2 \\left(\\frac{f^{\\prime }(R)^2}{R^2}+\\frac{f^{\\prime }(R) f^{\\prime \\prime }(R)}{R}\\right) \\right] \\\\&-8\\frac{L^4}{R^4} g_{12} \\epsilon _1^2 \\epsilon _2^4 \\left(\\frac{f^{\\prime }(R)^2}{R^2}-f^{\\prime \\prime }(R)^2\\right) \\, .\\nonumber $ We have decided to express the effective potential in terms of the orbital number $L$ rather than the reduced effective impact parameter $B$ in order to make contact with the free theory where $V_{\\rm eff, free}= L^2/R^2$ .", "It is worth 
mentioning once again that the effective potential term is suppressed by $(\\omega r_0)^{-2}=\\epsilon _2^2 / \\Omega ^2$ with respect to the speed of sound term.", "Hence, the leading-order effective potential should include terms up to $\\mathcal {O}(\\epsilon _1^2)$ .", "Note that the terms $L^2 \\epsilon _1^2 \\epsilon _2^2$ and $L^4 \\epsilon _1^2 \\epsilon _2^4$ seem to be higher-order and appear to be unnecessarily taken into account.", "However, recalling that $L=\\Omega B/\\epsilon _2$ implies $\\epsilon _2 L \\sim \\mathcal {O}(\\epsilon ^0)$ .", "This means that $L^2 \\epsilon _1^2 \\epsilon _2^2 \\sim L^4 \\epsilon _1^2 \\epsilon 2_4 \\sim \\mathcal {O}(\\epsilon _1^2)$ , so all terms considered are indeed leading order in the effective potential.", "To show that this is indeed the correct functional form for the potential, one could rewrite the term of interest, i.e.", "$V_{\\rm eff}/(\\omega r_0)^2$ rather than just the effective potential, in terms of the variable $B$ that does not hide any dependence on $\\epsilon _i$ , $\\left.", "\\frac{V_{\\text{eff}}(R)}{(\\omega r_0)^2} \\right|_{\\rm LO}=& \\frac{B^2}{R^2} \\left[ \\vphantom{\\frac{1}{2}} 1 -8 g_{12} \\Omega ^2 \\epsilon _1^2 \\epsilon _2^2 f^{\\prime \\prime }(R)^2 -12 g_{10} \\epsilon _1^2 \\epsilon _2^2 \\left(\\frac{f^{\\prime }(R)^2}{R^2}+\\frac{f^{\\prime }(R) f^{\\prime \\prime }(R)}{R}\\right) \\right] \\\\& - 8 \\frac{B^4}{R^4} g_{12} \\Omega ^2 \\epsilon _1^2 \\epsilon _2^2\\left(\\frac{f^{\\prime }(R)^2}{R^2}-f^{\\prime \\prime }(R)^2\\right) \\, .", "\\nonumber $ In this set up, the corresponding turning point is now $R_t$ , which is given by $\\left.", "R_t \\right|_{\\rm LO} = B \\left[ 1 - 4 g_{12} \\Omega ^2 \\epsilon _1^2 \\epsilon _2^2 \\frac{f^{\\prime }(B)^2}{B^2} - 6 g_{10} \\epsilon _1^2 \\epsilon _2^2 \\left(\\frac{f^{\\prime }(B)^2}{B^2}+\\frac{f^{\\prime }(B) f^{\\prime \\prime }(B)}{B}\\right) \\right] \\,.$ Moreover, we have $&\\left.", "U_{\\ell }(R) \\right|_{\\rm LO} = 4 \\left(1-\\frac{B^2}{R^2}\\right) \\left( g_8 \\epsilon _1^2 f^{\\prime }(R)^2 + 10 g_8^2 \\epsilon _1^4 f^{\\prime }(R)^4 \\right) \\\\& -6 g_{10} \\epsilon _1^2 \\epsilon _2^2 \\left\\lbrace \\left(1-\\frac{B^2}{R^2}\\right) \\left(\\frac{f^{\\prime }(R)^2}{R^2}-\\frac{f^{\\prime }(R) f^{\\prime \\prime }(R)}{R}\\right)- \\left( \\frac{f^{\\prime }(R)^2}{R^2} + \\frac{f^{\\prime }(R) f^{\\prime \\prime }(R)}{R} \\right) + \\frac{B^2}{R^2} \\left( \\frac{f^{\\prime }(B)^2}{B^2} + \\frac{f^{\\prime }(B) f^{\\prime \\prime }(B)}{B} \\right) \\right\\rbrace \\nonumber \\\\& +4 g_{12} \\Omega ^2 \\epsilon _1^2 \\epsilon _2^2 \\left\\lbrace \\left[\\left(1-\\frac{B^2}{R^2}\\right) \\left(\\frac{f^{\\prime }(R)}{R}-f^{\\prime \\prime }(R)\\right)-\\frac{f^{\\prime }(R)}{R}\\right]^2 - \\frac{B^2}{R^2}\\frac{f^{\\prime }(B)^2}{B^2} \\right\\rbrace \\,,\\nonumber $ with $U_{\\ell }(R_t)=0$ .", "Having all the ingredients, the dimensionless time delay can now be expressed in the following form $\\omega \\Delta T_{b}(\\omega ) = (\\omega r_0) \\left[ \\int _B^{\\infty } \\left( \\frac{\\Upsilon ^{(0)}_{\\ell }(R)}{\\sqrt{ 1 - \\frac{B^2}{R^2} }} + \\Upsilon ^{(1)}_{\\ell }(R) \\sqrt{ 1 - \\frac{B^2}{R^2} } + \\Upsilon _{\\ell }^{(2)} \\left( 1 - \\frac{B^2}{R^2} \\right)^{3/2} \\right) \\mathrm {d}R + \\Upsilon _{\\ell }^{(3)} \\right],\\ $ where $\\Upsilon _{\\ell }^{(0)}(R) =& 12 g_{10} \\epsilon _1^2 \\epsilon _2^2 \\left\\lbrace \\frac{f^{\\prime }(R)^2}{R^2}+\\frac{f^{\\prime }(R)f^{\\prime \\prime }(R)}{R} -\\frac{B^2}{R^2} 
\\left(\\frac{f^{\\prime }(B)^2}{B^2}+\\frac{f^{\\prime }(B) f^{\\prime \\prime }(B)}{B}\\right) \\right\\rbrace \\\\& +24 g_{12} \\Omega ^2 \\epsilon _1^2 \\epsilon _2^2 \\left(\\frac{f^{\\prime }(R)^2}{R^2}- \\frac{B^2}{R^2}\\frac{f^{\\prime }(B)^2}{B^2}\\right) \\,,\\nonumber \\\\\\Upsilon _{\\ell }^{(1)}(R) =& 8 g_8 \\epsilon _1^2 f^{\\prime }(R)^2+80 g_8^2 \\epsilon _1^4f^{\\prime }(R)^4 - 48 g_{12}\\Omega ^2 \\epsilon _1^2 \\epsilon _2^2 \\left(\\frac{f^{\\prime }(R)^2}{R^2} - \\frac{f^{\\prime }(R) f^{\\prime \\prime }(R)}{R} \\right) \\\\&-12 g_{10} \\epsilon _1^2 \\epsilon _2^2 \\left(\\frac{f^{\\prime }(R)^2}{R^2}-\\frac{f^{\\prime }(R) f^{\\prime \\prime }(R)}{R}\\right) \\,, \\nonumber \\\\\\Upsilon _{\\ell }^{(2)}(R) =& 24 g_{12} \\Omega ^2 \\epsilon _1^2 \\epsilon _2^2 \\left(\\frac{f^{\\prime }(R)^2}{R^2}-\\frac{2 f^{\\prime }(R) f^{\\prime \\prime }(R)}{R}+f^{\\prime \\prime }(R)^2\\right) \\,, \\\\\\Upsilon _{\\ell }^{(3)}(R) =& 12 g_{12} \\pi B \\Omega ^2 \\epsilon _1^2 \\epsilon _2^2 \\frac{f^{\\prime }(B)^2}{B^2} \\,.$ Note that $\\Upsilon _{\\ell }^{(0)}(R) = \\mathcal {F}[f(R)] - B^2/R^2 \\mathcal {F}[f(B)]$ , where $\\mathcal {F}$ is a functional of the function $f$ .", "This immediately shows that $\\Upsilon _{\\ell }^{(0)}(R=B)=0$ , hence avoiding any divergence around the lower bound of the integral." ], [ "Extremisation method", "The method used to extremise the causality bounds for the simple profile considered in (REF ) (with $p=3$ ) is summarised below.", "In principle the same method could be applied to more generic profiles and in less symmetric situations.", "The dimensionless time delay is given as a function of $(\\omega \\Delta T) = (\\omega \\Delta T)(g_{10}, g_{12}, \\mathcal {P}) \\,$ where the parameters are listed in the vector $\\mathcal {P} = \\left\\lbrace g_8, a_0, a_2, a_4, a_6, \\epsilon _1, \\epsilon _2, \\Omega , B \\right\\rbrace \\,.$ In our analysis $g_8$ will be fixed to be either 0 or 1 but we include it for completeness.", "In order to remain within the regime of validity of the EFT we only consider $-5<a_i<5$ so that $f(R)$ is $\\mathcal {O}(1)$ .", "More importantly, during the extremisation procedure we constrain the parameters in $\\mathcal {P}$ such that the analysis remains in the regime of validity of the EFT as given in Eq.", "(REF ) (Eq.", "(REF ) for the Galileons) by replacing $\\ll 1$ by $<1/2$ .", "Since the suppression of higher-order EFT corrections always comes as the square of these parameters, this ensures that the terms that we neglect are suppressed by at least a factor of $0.25$ .", "Furthermore, we also need to ensure that the WKB formula is valid up to the order that we compute it.", "To do so, we explicitly compute corrections to the WKB formula in the monopole case and check that they are negligible.", "For higher multipoles however, we rely instead on dimensional analysis to compute the order of magnitude of the corrections that are being neglected.", "This requires enforcing Eq.", "(REF ).", "For a more detailed discussion on the validity of the EFT and WKB approximation we refer to the analysis in Sections REF and REF .", "Note that in our analysis, we separated the case $\\ell =0$ and $\\ell >0$ , and also $g_8=0$ and $g_8=1$ , which gave four separate sets of causal regions.", "However, the method used in each of them was identical and will be detailed below.", "The boundary of the causal region for a given set of parameters is defined by $(\\omega \\Delta T) = -1/2$ , which can be solved for $g_{12}$ to 
give the equation of a line in the $(g_{10},g_{12})$ -plane $g_{12} = m(\mathcal {P}) g_{10} + p(\mathcal {P}) \equiv \mathcal {Y}_{\mathcal {P}}(g_{10}) \,.$ Now, the extremisation process differentiates between lower and upper bounds.", "In both cases, let us define a vector $\mathcal {G}$ corresponding to a set of discrete points in the interval $[0,2.5]$ .", "The parameter $g_{12}$ will take values drawn from $\mathcal {G}$ , i.e. $g_{12} \in \mathcal {G}$ .", "The tightest lower bound for $g_{10}$ for a given value of $g_{12}=\mathcal {G}_i$ is achieved by finding the optimal set of parameters $\mathcal {P}_i^{(\rm lower)}$ such that the negative value of $g_{10}$ at the intersection between the two lines defined by $g_{12}=\mathcal {Y}_{\mathcal {P}_i^{(\rm lower)}}(g_{10})$ and $g_{12}=\mathcal {G}_i$ is maximal.", "It can be defined as $\mathcal {P}_i^{(\rm lower)}= {\rm Max} \left\lbrace g_{10} < 0 \left| g_{12}=\mathcal {Y}_{\mathcal {P}}(g_{10})\, \& \, g_{12}=\mathcal {G}_i \right.\right\rbrace \,,$ and the `causal' region $\mathcal {R}_i^{(\rm lower)}$ (this method does not `prove' causality, it simply indicates the absence of obvious acausality) would consist of all points in the $(g_{10}, g_{12})$ -plane that are ``above'' this line, meaning $\mathcal {R}_i^{(\rm lower)} = \left\lbrace (g_{10},g_{12}) \left| g_{10} \in \mathbb {R}, g_{12} > \mathcal {Y}_{\mathcal {P}_i^{(\rm lower)}}(g_{10}) \right.\right\rbrace \,.$ Equivalently, the tightest upper bound for a given $i$ is given by $\mathcal {P}_i^{(\rm upper)}= {\rm Min} \left\lbrace g_{10} > 0 \left| g_{12}=\mathcal {Y}_{\mathcal {P}}(g_{10})\, \& \, g_{12}=\mathcal {G}_i \right.\right\rbrace \,,$ and the associated `causal' region $\mathcal {R}_i^{(\rm upper)} = \left\lbrace (g_{10},g_{12}) \left| g_{10} \in \mathbb {R}, g_{12} < \mathcal {Y}_{\mathcal {P}_i^{(\rm upper)}}(g_{10}) \right.\right\rbrace \,.$ Note that in the case where $\ell =0$ , the method does not identify any upper bound, as described previously.", "This process is iterated for all values of $i$ (and it could be optimised further by exploring more values in the range $[0,2.5]$ or by extending this range) and the final causal region $\mathcal {R}_{\rm causal}$ is obtained by taking the union of all lower and upper regions labelled by $i$ , $\mathcal {R}_{\rm causal} = \cup _{i} \cup _{j=\text{lower, upper}} \mathcal {R}_i^{(j)} \,.$" ], [ "Gravitationally-coupled Galileons", "In most of this work, we have considered the scalar field EFT to describe a single low energy degree of freedom in its own right in flat spacetime and in the absence of any other light degrees of freedom.", "For such low energy EFTs, one can in principle consider an arbitrary external source $J$ that would spontaneously generate an arbitrary (Lorentz-violating) background profile for the scalar field.", "We now explore a `Galileon' field which, in some contexts, can be thought of as describing a degree of freedom reminiscent of an infrared modification of gravity (as is the case for instance in the Dvali-Gabadadze-Porrati model [102] or massive gravity [103], [104]).", "In this case the EFT is not precisely a low energy description, and the presence of other light degrees of freedom may not always be safely ignored.", "Generating a non-trivial profile for the field typically comes at the price of introducing a non-trivial stress-energy tensor which would also be expected to
ever-so-slightly affect the geometry.", "The subtle issue of backreaction on the geometry can be put aside for now, but in this Appendix, we establish which source would be required to generate the spherically-symmetric background profile we have considered so far.", "In particular we explore whether there are any physical requirements to be imposed on that source, and whether the source satisfies the null or weak energy condition.", "In the present case, we consider the coupling of the Galileon to matter through the trace of the stress-energy tensor $T^{\mu }_{\phantom{\mu } \mu }$ which generically arises in massive gravity theories.", "Thus the source in Eq. (REF ) is now given by $J=\frac{1}{M_{\rm Pl}} T^{\mu }_{\phantom{\mu }\mu } \ .$ The Galileon interactions (and possible mass term) are small corrections compared to the kinetic term and thus the equation of motion in the presence of this source reads $\Box \phi = - \frac{g_{\rm matter}}{M_{\rm Pl}} T^{\mu }_{\phantom{\mu }\mu } \,.$ The stress-energy tensor needs to respect the spherical symmetry and hence can be written in the following form $T^{\mu }_{\phantom{\mu }\nu } = \text{diag}(- \rho (r), p_r(r), p_{\Omega }(r), p_{\Omega }(r)) \,,$ where $p_r$ and $\rho $ are respectively the radial pressure and energy density of the fluid, and $p_{\Omega }$ is the angular pressure.", "For simplicity, we write $p_{\Omega } = A p_r$ , where $A$ is a constant that will be constrained by requiring asymptotic flatness of the spacetime.", "The trace of the stress-energy tensor is then simply given by $T^{\mu }_{\phantom{\mu }\mu } = p_r (1+2A) - \rho $ .", "Energy-momentum conservation implies $p^{\prime }_r + 2(1-A) \frac{p_r}{r} = 0 \,.$ This first-order differential equation for the radial pressure $p_r$ is solved by $p_r(r) = \bar{p}_r\ r^{-2(1-A)} \,, \qquad \rho (r) = \bar{p}_r\ (1+2A) r^{-2(1-A)} - T^{\mu }_{\phantom{\mu }\mu }(r) \,.$ Asymptotic flatness (or `vacuum') demands that at large radius $p_r, \rho \sim r^n$ with $n<-3$ , which effectively provides the bound $A<-1/2$ .", "Furthermore, for the source to be physical, we should at the very least demand the weak energy condition, which requires $\rho > 0 \,, \qquad \rho + p_r > 0 \,, \qquad {\rm and }\qquad \rho +A p_r>0 \,.$ Defining $T_{\rm max} = {\rm Max}_{r>0} \left\lbrace r^{2(1-A)} \left| T^{\mu }_{\phantom{\mu }\mu }(r) \right| \right\rbrace $ , if one were to choose $A<-1 \,, \qquad \bar{p}_r < \frac{T_{\rm max}}{2(1+A)}<0\,,$ and as long as $\left| T^{\mu }_{\phantom{\mu }\mu }(r) \right|$ is bounded and $r^{2(1-A)} T^{\mu }_{\phantom{\mu }\mu }(r) \rightarrow 0$ when $r \rightarrow \infty $ , which is ensured for exponentially suppressed background profiles such as the one considered in Eq. (REF ), then the weak energy condition is respected.", "Note that if one is only interested in the null energy condition, then $\rho $ is unconstrained, but to satisfy the other two conditions in Eq. (REF ) we still require that Eq. (REF ) holds.", "We have thus proven that some fluids with negative pressure along some direction (and positive pressure along others) can represent a physical source generating an asymptotically flat spacetime, satisfying the weak energy condition and leading to any bounded profile $\bar{\phi }(r)$ .", "Note that this stress-energy tensor diverges at the origin, indicating that the source ought to be regularised but since the scalar field remains finite,
one would not expect the regularisation to impact the outcome of this study." ] ]
2207.03491
[ [ "Magnetic signature of vertically migrating aggregations in the ocean" ], [ "Abstract The transport of heat and solutes by vertically migrating aggregations of plankton has long been explored as a potentially important source of ocean mixing.", "However, direct evidence of enhanced mixing due to these migrations remains challenging to obtain and inconclusive.", "These shortcomings are due to the limitations of current measurement techniques, i.e., velocimetry techniques, which require a priori knowledge of the precise aggregation location and typically trigger animal avoidance behavior from introducing instrumentation into the migration.", "Here we develop a new approach to overcome these longstanding limitations by leveraging advancements in modern magnetometry to detect the flow-induced magnetic fields that naturally arise from seawater as it moves through the Earth's geomagnetic field.", "We derive quantitative predictions showing that these flow-induced magnetic fields in the vicinity of migrating aggregations have a strength proportional to the integrated fluid transport due to the migration.", "Importantly these magnetic signatures are potentially detectable remotely at a significant distance far from the aggregation and region of moving fluid with emerging quantum-enhanced magnetometry techniques such as Nitrogen-Vacancy centers in diamond.", "These results provide a new, testable framework for quantifying the significance of fluid transport in the ocean due to swimming organisms that may finally resolve a scientific debate with potentially enormous implications for our understanding of ocean dynamics and climate change." ], [ "Magnetic Equations of Motion in the Ocean", "The electric current density, $\\mathbf {j}$ , induced by the motion of seawater can be determined from Ohm's Law given by $\\mathbf {j} = \\sigma \\left(\\mathbf {E} + \\mathbf {u} \\times \\mathbf {B_{geo}}\\right),$ where $\\sigma $ is the electrical conductivity of the seawater (3-6 S/m), $\\mathbf {E}$ is any applied or induced electric field, $\\mathbf {u}$ is the fluid velocity field, and $\\mathbf {B_{geo}}$ is the Earth's geomagnetic magnetic field (25,000-50,000 nT).", "The resulting electric current, $\\mathbf {j}$ , in turn, has an associated magnetic field perturbation, $\\mathbf {b}$ , which can be determined from the non-relativistic (magnetostatic) version of Ampere's Law as $\\nabla \\times \\mathbf {b} = \\mu _0\\; \\mathbf {j}.$ Here, $\\mu _0$ denotes the magnetic permeability of seawater ($\\mu _0 = 4\\pi \\times 10^{-7}\\;\\mathrm {H/m}$ ), which is taken to be equal to the magnetic permeability of free space.", "Substituting equation REF into REF for $\\mathbf {j}$ gives the relation $\\mathbf {E} = \\frac{\\nabla \\times \\mathbf {b}}{\\mu _0\\;\\sigma } - \\mathbf {u} \\times \\mathbf {B_{geo}}.$ If the temporal variations in the geomagnetic field are assumed to be small compared to temporal variations in the magnetic perturbation (i.e., $\\partial B_{geo}/\\partial t \\ll \\partial b/\\partial t$ ), then the electric field in equation REF can be related to the motionally-induced magnetic field, $\\mathbf {b}$ , through the Maxwell–Faraday Law of Induction: $\\frac{\\partial \\mathbf {b}}{\\partial t} = - \\nabla \\times \\mathbf {E}.$ Taking the curl of equation REF allows equation REF to be expressed as $\\frac{\\partial \\mathbf {b}}{\\partial t} = - \\nabla \\times \\left( \\frac{\\nabla \\times \\mathbf {b}}{\\mu _0\\;\\sigma } - \\mathbf {u} \\times \\mathbf {B_{geo}} \\right).$ 
The first and second terms on the right-hand side of equation REF can be expanded and rewritten using the vector identities $-\\nabla \\times \\left( \\frac{\\nabla \\times \\mathbf {b}}{\\mu _0\\;\\sigma } \\right)= -\\frac{ \\nabla \\times \\left(\\nabla \\times \\mathbf {b} \\right)}{\\mu _0\\;\\sigma } - \\nabla \\left(\\frac{1}{\\mu _0\\sigma }\\right)\\times \\left(\\nabla \\times \\mathbf {b}\\right) = - \\frac{1}{\\mu _0\\;\\sigma } \\left( \\nabla \\left(\\nabla \\cdot \\mathbf {b} \\right) - \\nabla ^2 \\mathbf {b} \\right) - \\nabla \\left(\\frac{1}{\\mu _0\\sigma }\\right)\\times \\left(\\nabla \\times \\mathbf {b}\\right),$ and $\\nabla \\times \\left( \\mathbf {u} \\times \\mathbf {B_{geo}}\\right) = \\mathbf {u} \\left(\\nabla \\cdot \\mathbf {B_{geo}} \\right) - \\mathbf {B_{geo}}\\left( \\nabla \\cdot \\mathbf {u} \\right) + \\left(\\mathbf {B_{geo}} \\cdot \\nabla \\right) \\mathbf {u} - \\left(\\mathbf {u} \\cdot \\nabla \\right) \\mathbf {B_{geo}},$ respectively.", "Using equations REF and REF , equation REF can be expressed as $\\frac{\\partial \\mathbf {b}}{\\partial t} = - \\frac{\\left( \\nabla \\left(\\nabla \\cdot \\mathbf {b} \\right) - \\nabla ^2 \\mathbf {b} \\right)}{\\mu _0\\;\\sigma } - \\nabla \\left(\\frac{1}{\\mu _0\\sigma }\\right)\\times \\left(\\nabla \\times \\mathbf {b}\\right) +\\mathbf {u} \\left(\\nabla \\cdot \\mathbf {B_{geo}} \\right) - \\mathbf {B_{geo}}\\left( \\nabla \\cdot \\mathbf {u} \\right) + \\left(\\mathbf {B_{geo}} \\cdot \\nabla \\right) \\mathbf {u} - \\left(\\mathbf {u} \\cdot \\nabla \\right) \\mathbf {B_{geo}}.$ Because the fluid flow is assumed to be incompressible, the velocity field will be solenoidal (i.e., divergence free), following: $\\nabla \\cdot \\mathbf {u} = 0.$ Similarly, by Gauss' Law of Magnetism, both magnetic fields are also solenoidal: $\\nabla \\cdot \\mathbf {b} = 0 \\quad ; \\quad \\nabla \\cdot \\mathbf {B_{geo}} = 0.$ Using the constraints from equations REF and REF , equation REF reduces to $\\frac{\\partial \\mathbf {b}}{\\partial t} = \\frac{1}{\\mu _0\\;\\sigma } \\nabla ^2 \\mathbf {b} - \\nabla \\left(\\frac{1}{\\mu _0\\sigma }\\right)\\times \\left(\\nabla \\times \\mathbf {b}\\right) + \\left(\\mathbf {B_{geo}} \\cdot \\nabla \\right) \\mathbf {u} - \\left(\\mathbf {u} \\cdot \\nabla \\right) \\mathbf {B_{geo}}.$ To further simplify the relation between seawater motion, $\\mathbf {u}$ , and the induced magnetic field perturbation, $\\mathbf {b}$ , additional information related to the flows of interest can be considered.", "The leading order dynamics of the magnetic field perturbation, $\\mathbf {b}$ , can be identified by substituting the variables in equation REF for dimensionless variables that have been scaled by an appropriate, dimensional prefactor.", "These dimensionless variables are denoted with an $\\sim $ overline and given by $\\tilde{t}& = t/T,\\\\\\mathbf {\\tilde{b}} & = \\mathbf {b}/\\beta , \\\\\\mathbf {\\tilde{r}} = \\mathbf {r}/L &= [x,y,z]/L =[\\tilde{x},\\tilde{y},\\tilde{z}],\\\\\\mathbf {u} &= U \\mathbf {\\tilde{u}},\\\\\\mathbf {\\sigma } &= \\sigma _0 \\mathbf {\\tilde{\\sigma }}.\\\\$ The magnitude of each prefactor (e.g., $L$ , $U$ , $\\sigma _0$ , and $T$ ) is determined by the flow configuration of interest.", "The corresponding magnetic field perturbation scale, $\\beta $ , remains to be determined.", "Substituting these variables into equation REF gives $\\left[\\frac{\\beta }{T}\\right] \\frac{\\partial \\mathbf {\\tilde{b}}}{\\partial \\tilde{t}} = 
\\left[\\frac{\\beta }{\\mu _0\\;\\sigma _0 L^2}\\right] \\left(\\tilde{\\nabla }^{2} \\mathbf {\\tilde{b}} - \\tilde{\\nabla }\\left(\\frac{1}{\\tilde{\\sigma }}\\right)\\times \\left(\\tilde{\\nabla }\\times \\mathbf {\\tilde{b}}\\right)\\right) + \\left[\\frac{U \\left|\\mathbf {B_{geo}}\\right|}{L}\\right]\\left(\\mathbf {\\hat{B}_{geo}} \\cdot \\tilde{\\nabla } \\right) \\mathbf {\\tilde{u}} - \\left[\\frac{U\\;\\delta B_{geo}}{L}\\right] \\left(\\mathbf {\\tilde{u}} \\cdot \\tilde{\\nabla } \\right) \\mathbf {\\hat{B}_{geo}},$ where $\\mathbf {\\hat{B}_{geo}}$ is the unit vector aligned with the direction of the geomagnetic field and $\\delta B_{geo}$ is the scale of the variations in the geomagnetic field strength over the domain of interest.", "This choice of scaling takes a conservative approach where the gradients in the velocity, conductivity, and magnetic perturbation fields are assumed to scale the same as the scaling prefactor divided by the length scale, $L$ .", "In this formulation, all the dimensionless variables are outside the brackets and are assumed to be on the order of unity if appropriately scaled.", "The prefactors contained within the brackets denote the scale of each term in the equation, quantifying their relative importance to the dynamics.", "Normalizing equation REF by the scale $\\left[U\\;\\left|\\mathbf {B_{geo}}\\right|L^{-1}\\right]$ gives $\\left[\\frac{L\\beta }{U T \\left|\\mathbf {B_{geo}}\\right|}\\right] \\frac{\\partial \\mathbf {\\tilde{b}}}{\\partial \\tilde{t}} = \\left[\\frac{\\beta }{\\mu _0\\;\\sigma _0 L U \\left|\\mathbf {B_{geo}}\\right|}\\right] \\left(\\tilde{\\nabla }^{2} \\mathbf {\\tilde{b}} - \\tilde{\\nabla }\\left(\\frac{1}{\\tilde{\\sigma }}\\right)\\times \\left(\\tilde{\\nabla }\\times \\mathbf {\\tilde{b}}\\right) \\right) + \\left(\\mathbf {\\hat{B}_{geo}} \\cdot \\tilde{\\nabla } \\right) \\mathbf {\\tilde{u}} - \\left[\\frac{ \\delta B_{geo}}{\\left|\\mathbf {B_{geo}}\\right|}\\right] \\left(\\mathbf {\\tilde{u}} \\cdot \\tilde{\\nabla } \\right) \\mathbf {\\hat{B}_{geo}}$ such that each of the prefactors is now a dimensionless quantity, and the scale of the $\\left(\\mathbf {\\hat{B}_{geo}} \\cdot \\tilde{\\nabla } \\right) \\mathbf {\\tilde{u}}$ term is normalized to unity.", "To identify which terms in equation REF are of leading order, the relative magnitudes of the terms involving the magnetic field perturbation, $\\mathbf {b}$ , can be compared.", "The ratio between the prefactors of the unsteadiness term on the left-hand side and the Laplacian and conductivity gradient terms on the right-hand side is given by $\\left.", "\\left[\\frac{L\\beta }{U T \\left|\\mathbf {B_{geo}}\\right|}\\right] / \\left[\\frac{\\beta }{\\mu _0\\;\\sigma _0 L U \\left|\\mathbf {B_{geo}}\\right|}\\right] = \\left[\\frac{L^2\\mu _0\\;\\sigma _0 }{T}\\right] \\right.", ".$ Assigning representative values for the scaling parameters based on the relevant oceanic context of $\\sigma _0 = 6$ S/m, $\\mu _0=4\\pi \\times 10^{-7}$ H/m, $L=1000$ m, and $T=1 $ hr, yields ${L^2\\mu _0\\sigma _0 T^{-1}}=\\mathcal {O}(10^{-3})$ .", "The value of this ratio indicates that contributions from the unsteadiness term to the dynamics of equation REF are approximately three orders of magnitude smaller than those of the Laplacian and conductivity gradient terms and can likely be neglected.", "In this case, it can also be concluded that the scaling prefactor of the Laplacian term in equation REF , $\\left[{\\beta (\\mu _0\\;\\sigma _0 L U \\left|\\mathbf 
{B_{geo}}\\right|)^{-1}}\\right]$ , must be of leading order to ensure that the magnetic perturbation, $\\mathbf {b}$ , is still included in the leading order dynamics.", "Finally, it remains to be established if the final term in equation REF , which scales as $\\left[\\delta B_{geo}\\left|\\mathbf {B_{geo}}\\right|^{-1}\\right]$ , is of leading order.", "The geomagnetic field strength, $\\left|\\mathbf {B_{geo}}\\right|$ , is found to vary less than $0.1\\%$ over domain sizes of $L=$ 1000 m [1] indicating that $\\left[\\delta B_{geo}\\left|\\mathbf {B_{geo}}\\right|^{-1}\\right]=\\mathcal {O}(10^{-3})$ .", "This similarly small ratio indicates that contributions from the last term are also not of leading order for the flows of present interest and can similarly be neglected.", "Having now assessed the scale of each prefactor, the leading order dynamics in equation REF are found to be order unity, and it can be concluded that $\\left[\\beta ^{-1}\\mu _0\\;\\sigma _0 L U \\left|\\mathbf {B_{geo}}\\right|\\right] =\\mathcal {O}(1)$ to ensure it is retained in the leading order dynamics.", "Following the above scaling analysis, the leading order terms from equation REF that relate the seawater motion, $\\mathbf {u}$ , to the induced magnetic field perturbation, $\\mathbf {b}$ , are given by: $0= \\tilde{\\nabla }^{2} \\mathbf {\\tilde{b}} - \\tilde{\\nabla }\\left(\\frac{1}{\\tilde{\\sigma }}\\right)\\times \\left(\\tilde{\\nabla }\\times \\mathbf {\\tilde{b}}\\right) + \\left(\\mathbf {\\hat{B}_{geo}} \\cdot \\tilde{\\nabla } \\right) \\mathbf {\\tilde{u}},$ or in dimensional terms, $0 = \\frac{1}{\\mu _0\\;\\sigma } \\nabla ^2 \\mathbf {b} + \\left(\\mathbf {B_{geo}} \\cdot \\nabla \\right) \\mathbf {u} - \\nabla \\left(\\frac{1}{\\mu _0\\sigma }\\right)\\times \\left(\\nabla \\times \\mathbf {b}\\right).$ In cases where the electrical conductivity of the seawater is assumed to be horizontally homogeneous (i.e., $\\sigma =\\sigma (z)$ ), further simplification of equation REF can be observed when expressed in component wise form as: $\\left(\\frac{\\partial ^2 b_x}{\\partial x^2}+\\frac{\\partial ^2 b_x}{\\partial y^2}+\\frac{\\partial ^2 b_x}{\\partial z^2}\\right) &= -\\mu _0\\sigma (z)\\left(B_x \\frac{\\partial u}{\\partial x} + B_y \\frac{\\partial u}{\\partial y}+ B_z \\frac{\\partial u}{\\partial z}\\right)+\\frac{1}{\\sigma (z)}\\frac{\\partial \\sigma (z)}{\\partial z}\\left(\\frac{\\partial b_z}{\\partial x}-\\frac{\\partial b_x}{\\partial z}\\right) \\\\\\left(\\frac{\\partial ^2 b_y}{\\partial x^2}+\\frac{\\partial ^2 b_y}{\\partial y^2}+\\frac{\\partial ^2 b_y}{\\partial z^2}\\right) &= -\\mu _0\\sigma (z)\\left(B_x \\frac{\\partial v}{\\partial x} + B_y \\frac{\\partial v}{\\partial y}+ B_z \\frac{\\partial v}{\\partial z} \\right) +\\frac{1}{\\sigma (z)}\\frac{\\partial \\sigma (z)}{\\partial z}\\left(\\frac{\\partial b_z}{\\partial y}-\\frac{\\partial b_y}{\\partial z}\\right) \\\\\\left(\\frac{\\partial ^2 b_z}{\\partial x^2}+\\frac{\\partial ^2 b_z}{\\partial y^2}+\\frac{\\partial ^2 b_z}{\\partial z^2}\\right) &= -\\mu _0\\sigma (z)\\left(B_x \\frac{\\partial w}{\\partial x} + B_y \\frac{\\partial w}{\\partial y}+ B_z \\frac{\\partial w}{\\partial z} \\right) $ where $B_{geo} = [B_x,\\,B_y,\\,B_z]$ .", "While the horizontal components of the magnetic perturbation, $b_x$ and $b_y$ , each depend on the vertical gradient of electrical conductivity (see equations REF and ), the equation for $b_z$ (equation ) no longer has such dependencies when $\\sigma = \\sigma (z)$ .", 
"Furthermore, because equation has no dependency on $b_x$ or $b_y$ , it is a 3D Poisson equation for $b_z$ and can be solved using the free space Green's function as $b_z (\\mathbf {r}) = \\iiint _V \\frac{- \\mu _0\\sigma (z)}{4 \\pi \\left| \\mathbf {r} - \\mathbf {r^{\\prime }} \\right|}\\, \\left(B_x \\frac{\\partial w}{\\partial x^{\\prime }} + B_y \\frac{\\partial w}{\\partial y^{\\prime }}+ B_z \\frac{\\partial w}{\\partial z^{\\prime }} \\right) \\mathrm {d}^3 r^{\\prime }$ where the local conductivity field is included in the forcing term of the integrated function.", "The same is not true, however, for the horizontal components which depend on horizontal gradients in $b_z$ .", "This complication requires alternative approaches such as determining the vertical component of the magnetic field, $b_z$ , and then iteratively solving for the horizontal components or solving the entire set of equations numerically through a relaxation method.", "In cases when $\\sigma $ is constant over the entire domain, then equation REF simplifies instead to $0 = \\frac{1}{\\mu _0\\;\\sigma } \\nabla ^2 \\mathbf {b} + \\left(\\mathbf {B_{geo}} \\cdot \\nabla \\right) \\mathbf {u}$ which is a vectorized Poisson equation in 3D.", "Here, each component can be solved with the free space Green's function for the 3D Poisson equation through the integral relation $\\mathbf {b} (\\mathbf {r}) = \\iiint _V \\frac{- \\mu _0\\sigma \\; \\left(\\mathbf {B_{geo}}(\\mathbf {r^{\\prime }}) \\cdot \\nabla \\right) \\mathbf {u}(\\mathbf {r^{\\prime }}) }{4 \\pi \\left| \\mathbf {r} - \\mathbf {r^{\\prime }} \\right|}\\, \\mathrm {d}^3 r^{\\prime }.$ Expressing each vector component of equation REF with $B_{geo} = [B_x,\\,B_y,\\,B_z]$ gives $b_x (\\mathbf {r}) &= \\iiint _V \\frac{- \\mu _0\\sigma \\;}{4 \\pi \\left| \\mathbf {r} - \\mathbf {r^{\\prime }} \\right|}\\, \\left(B_x \\frac{\\partial u}{\\partial x^{\\prime }} + B_y \\frac{\\partial u}{\\partial y^{\\prime }}+ B_z \\frac{\\partial u}{\\partial z^{\\prime }} \\right)\\mathrm {d}^3 r^{\\prime } \\\\b_y (\\mathbf {r}) &= \\iiint _V \\frac{- \\mu _0\\sigma \\;}{4 \\pi \\left| \\mathbf {r} - \\mathbf {r^{\\prime }} \\right|}\\, \\left(B_x \\frac{\\partial v}{\\partial x^{\\prime }} + B_y \\frac{\\partial v}{\\partial y^{\\prime }}+ B_z \\frac{\\partial v}{\\partial z^{\\prime }} \\right) \\mathrm {d}^3 r^{\\prime } \\\\b_z (\\mathbf {r}) &= \\iiint _V \\frac{- \\mu _0\\sigma \\;}{4 \\pi \\left| \\mathbf {r} - \\mathbf {r^{\\prime }} \\right|}\\, \\left(B_x \\frac{\\partial w}{\\partial x^{\\prime }} + B_y \\frac{\\partial w}{\\partial y^{\\prime }}+ B_z \\frac{\\partial w}{\\partial z^{\\prime }} \\right) \\mathrm {d}^3 r^{\\prime }$ where $\\mathbf {r} = (x,y,z)$ and $\\mathbf {r^{\\prime }} = (x^{\\prime },y^{\\prime },z^{\\prime })$ .", "The above relations are valid inside the ocean and the Earth's surface.", "Above the ocean, where there is no electrical current or conductivity, the fields are determined by Laplace's equation for the scalar potential $\\nabla V=-\\mathbf {b}$ such that $\\nabla ^2 V = -\\nabla \\cdot \\mathbf {b} = 0$ ." 
], [ "Magnetic signature of vertical flow induced by swimming aggregations", "Having established the relationship between the seawater motion, $\\mathbf {u}$ , and the induced magnetic field perturbation, $\\mathbf {b}$ , the magnetohydrodynamic signature of vertically migrating aggregations can be derived for representative velocity fields.", "In the following section, three different models for the biologically generated velocity field will be considered.", "First, the flow induced from vertically migrating aggregations is modeled using a Dirac delta distribution in the $xy$ -plane (i.e., horizontal plane) having a strength, $Q$ , representing the volumetric flow rate caused by the aggregation wake.", "This configuration mimics the scenario in which the induced flow is confined to a narrow radial extent in the horizontal plane relative to the size of the domain and assumes a homogeneous velocity signature along the vertical extent of the domain.", "The next model considers the induced flow to have a Gaussian distribution in the horizontal plane with a characteristic finite width, $\\varsigma _0$ , with centerline vertical velocity, $W$ , along the vertical axis.", "Similar to the previous model, the velocity signature is assumed to extend uniformly along the vertical extent of the domain.", "In the final model, the effect of the aggregation is modelled using an actuator disk [2] with a steady rate of vertical climb.", "In this case, the velocity field due to the migration is no longer homogeneous in the vertical direction.", "Instead, the velocity signature is modeled as linearly expanding wake with a Gaussian velocity profile that extends downstream from the aggregation position and has a negligible influence upstream.", "When subjected to a horizontal geomagnetic field such as that present near the equator, each of these velocity field models generates a magnetic signature, $\\mathbf {b}$ , that is poloidal, i.e., having primarily a vertical component.", "The magnitude of the vertical component is found to decay inversely with distance from the induced flow and be modulated sinusoidally around the vertical axis of the migration direction.", "Importantly, the slower decay of the magnetic signature (i.e., $b\\sim r^{-1}$ ) compared to the velocity signature (i.e., $w\\sim e^{-r}$ ) indicates that the physical signature of a vertical migration of swimming plankton is potentially detectable with modern magnetometry techniques from distances where the induced flow cannot be detected.", "The magnetic field could provide additional insight into the bulk fluid transport associated with biologically induced flow, as well as the flow induced by other vertical transport processes in the ocean." 
], [ "Dirac Delta Migration Model", "The biologically generated velocity field stemming from the vertically migrating aggregations, $\\mathbf {u} (\\mathbf {r})$ , is modelled first with a Dirac delta distribution in the horizontal plane given by: $\\mathbf {u} (\\mathbf {r}) = \\left[u,\\,v,\\,w\\right]= \\left[0,\\;0,\\;\\pm Q\\delta (x)\\delta (y)\\right],$ where $\\pm Q$ is the volumetric flow rate associated with the induced flow, respectively.", "This velocity distribution is most representative when the characteristic width of the aggregation is small compared to the vertical extent of the velocity signature and the distances at which the magnetic signature is being measured.", "Similar to the previous section, the magnetic field signature can be related to the specific flow parameters through dimensional analysis.", "In this model, there are eight physical parameters: the magnitude of the magnetic signature, $b$ , the geomagnetic field strength, $B_{geo}$ , volumetric flow rate, $Q$ , electrical conductivity of the fluid, $\\sigma $ , magnetic permeability of seawater, $\\mu _0$ , fluid density, $\\rho $ , kinematic viscosity, $\\nu $ , and distance from the migration, $\\varrho $ , along with four associated dimensions: mass, length, time, and electric current.", "Through the applications of the Buckingham-$\\pi $ theorem, four dimensionless groups can be identified such that the functional dependence of $b$ can be expressed as $\\frac{b}{B_{geo}} \\cdot = f_1\\left( N , \\mathrm {R_\\varrho } , m\\right)$ where $N \\equiv B_{geo}^{2}\\varrho ^3\\sigma /(\\rho Q)$ is the Stuart Number, $\\mathrm {R_m} \\equiv Q \\varrho ^{-1} \\sigma \\mu _0$ is the dimensionless distance from the aggregation, and $m \\equiv \\nu \\cdot \\sigma \\cdot \\mu _0$ is the magnetic viscosity ratio.", "For parameter values representative of the flow of present interest, (i.e., $B_{geo}=25\\mu $ T, $Q=40\\,\\mathrm {m}^3/\\mathrm {s}$ m, $\\sigma = 6$ S/m, $\\rho =1.025$ g/mL, $U=5$ mm/s, and $\\varrho = 1000$ m), the Stuart Number is $N=O(10^{-5})$ , indicating that electromagnetic forces within the fluid are much smaller than inertial forces, consistent with the assumptions of our model.", "Similarly, the dimensionless magnetic signature $b \\cdot B_{geo}^{-1}$ is expected to exhibit a functional dependence on the dimensionless distance, $\\mathrm {R_m} \\equiv Q \\varrho ^{-1} \\sigma \\mu _0$ , which has the form of a magnetic Reynolds number.", "To specify the functional form of the magnetic signature, $\\mathbf {b}$ , the model velocity field can be analyzed using the Green's function approach described in section with a known geomagnetic field, $\\mathbf {B_{geo}}$ .", "Here, the geomagnetic field is taken to be constant over the domain without declination ($B_x$ ) and inclination ($B_z$ ) such that $\\mathbf {B} (\\mathbf {r}) = \\left[0,B_y ,\\,0\\right]$ where $B_x$ is prescribed to be aligned with the East-West direction, $B_y$ is aligned with the geographic North-South direction and $B_z$ is aligned with the vertical.", "This choice of geomagnetic field is representative of equatorial regions, where biologically generated mixing has been proposed as a potential contributor to the Meridional Overturning Circulation (MOC).", "[3], [4], [5].", "It also follows from equations - that the homogeneity of the vertical velocity field along the vertical ($z$ ) direction restricts the dependence of the vertical component of the magnetic signature, $b_z$ , to only horizontal components of the geomagnetic 
field (i.e., $B_x$ and $B_y$ ).", "Substituting equations REF and REF into equations REF - gives $\mathbf {b}(\mathbf {r})=[b_x,\;b_y,\;b_z] = \left[0,\;0,\; \iiint _V \frac{B_y\;\mu _0\;\sigma \;Q\delta (x^{\prime })\frac{\textrm {d}\delta (y^{\prime })}{\textrm {d}y^{\prime }}}{4\pi \sqrt{(x-x^{\prime })^2+(y-y^{\prime })^2+(z-z^{\prime })^2}}\, \mathrm {d}x^{\prime } \mathrm {d}y^{\prime } \mathrm {d}z^{\prime }\right].$ Using the identity $\int f(x)\delta ^{\prime }(x) \textrm {d}x = \int -f^{\prime }(x)\delta (x) \textrm {d}x,$ integration in the $x$ and $y$ directions yields $b_z &= B_y\;\mu _0\;\sigma \;Q \iint \frac{1}{4\pi \sqrt{x^2+(y-y^{\prime })^2+(z-z^{\prime })^2}}\, \frac{\textrm {d}\delta (y^{\prime })}{\textrm {d}y^{\prime }}\mathrm {d}y^{\prime } \mathrm {d}z^{\prime }\\b_z &= B_y\;\mu _0\;\sigma \;Q \iint \frac{(y-y^{\prime })}{4\pi \left(x^2+(y-y^{\prime })^2+(z-z^{\prime })^2\right)^{3/2}}\, \delta (y^{\prime })\mathrm {d}y^{\prime } \mathrm {d}z^{\prime }\\b_z &= B_y\;\mu _0\;\sigma \;Q \int \frac{y}{4\pi \left(x^2+y^2+(z-z^{\prime })^2\right)^{3/2}}\,\mathrm {d}z^{\prime },$ for constant $\sigma $ .", "In scenarios when $\sigma $ varies along the $z$ direction, it should remain in the integrand.", "Finally, integrating in the $z$ -direction from $-H$ to $H$ , where $2H$ is the height of the velocity signature, yields $b_z = B_y\;\mu _0\;\sigma \;Q\frac{y\;H}{2\pi \left(x^2+y^2\right)\sqrt{H^2+x^2+y^2}}.$ In the limit $H\rightarrow \infty $ , the magnetic field signature is given by $\frac{b_z}{B_y} = \frac{y\;\mu _0\;\sigma \;Q}{2\pi \left(x^2+y^2\right)} = \frac{1}{2\pi }\frac{\mu _0\;\sigma \;Q\;\sin {\theta }}{\sqrt{x^2+y^2}}.$ This resulting expression is consistent with the above dimensional analysis, revealing that the strength of the magnetic field perturbation decays inversely with distance from the velocity signature and is modulated sinusoidally by the azimuthal angle, $\theta $ , from the positive $x$-axis."
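, "For reference (our illustrative addition), the closed-form expressions above can be evaluated directly; the parameter values below are assumptions of the same order as those quoted elsewhere in the text:

```python
import numpy as np

# Closed-form vertical magnetic perturbation for the delta-function wake.
mu0 = 4 * np.pi * 1e-7     # H/m
sigma = 6.0                # S/m
B_y = 25e-6                # T, horizontal geomagnetic component
Q = 40.0                   # m^3/s, volumetric flow rate of the wake (assumed)

def bz_finite_height(x, y, H):
    # Wake of half-height H (total height 2H).
    rho2 = x**2 + y**2
    return B_y * mu0 * sigma * Q * y * H / (2 * np.pi * rho2 * np.sqrt(H**2 + rho2))

def bz_infinite_height(x, y):
    # H -> infinity limit: b_z = mu0 sigma Q B_y sin(theta) / (2 pi rho).
    return B_y * mu0 * sigma * Q * y / (2 * np.pi * (x**2 + y**2))

# On the y-axis (theta = 90 deg), 1 km from the wake:
print(bz_infinite_height(0.0, 1000.0))        # infinite-height result (T)
print(bz_finite_height(0.0, 1000.0, 1000.0))  # finite-height value (T)
```

The finite-height value approaches the infinite-height limit once $H$ is large compared with the horizontal distance from the wake."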
], [ "Gaussian Jet Model", "The second model for $\\mathbf {u} (\\mathbf {r})$ is given by a unidirectional flow along the $z$ direction with Gaussian distribution along the $x$ and $y$ directions.", "For distributions centered on the domain origin, the velocity field for induced flow can be expressed as $\\mathbf {u} (\\mathbf {r^{\\prime }}) = \\left[u,\\,v,\\,w\\right]= \\left[0,\\;0,\\;\\pm W\\, \\textrm {exp}\\left({\\frac{-\\left(x^{\\prime 2}+y^{\\prime 2}\\right)}{2\\varsigma _0^2}}\\right)\\right]$ where $W$ is the velocity scale of the induced flow and $\\varsigma _0$ is the characteristic width of the jet.", "A contour map of the vertical velocity distribution for downwelling is shown over the $xy$ -plane in figure REF (a).", "Integrating equation REF over the $xy$ -plane reveals that the net volumetric flow rate associated with this induced flow model is $Q=\\pm 2\\pi \\,W\\,\\varsigma _0^2$ .", "Taking the limit of $\\varsigma _0\\rightarrow {0}$ and $W\\rightarrow {\\infty }$ where the volume flux, $Q$ , is finite, i.e., $Q=2\\pi \\varsigma _0^2 W=\\textrm {constant}$ , recovers the Dirac delta distribution from section REF given by equation REF .", "Similar to the previous model, the Earth's magnetic field is taken as constant with a negligible declination ($B_x$ ) and inclination ($B_z$ ) and $B_y$ aligned with the North-South direction.", "With the velocity fields (equation REF ) and applied magnetic fields (equation REF ) known, equations REF - can be simplified as $\\mathbf {b} (\\mathbf {r}) = \\left[b_x,\\,b_y,\\,b_z\\right] \\\\= \\left[0,\\;0,\\;\\mu _0\\;\\sigma \\iiint _V \\frac{1}{4 \\pi \\left| \\mathbf {r} - \\mathbf {r^{\\prime }} \\right|}\\, \\left(B_y \\frac{\\partial w}{\\partial y^{\\prime }}\\right)\\mathrm {d}^3 r^{\\prime }\\right]$ where $\\frac{\\partial w}{\\partial y^{\\prime }} = y^{\\prime }\\frac{W }{\\varsigma _0^2}\\, \\textrm {exp}\\left({\\frac{-\\left(x^{\\prime 2}+y^{\\prime 2}\\right)}{2\\varsigma _0^2}}\\right).$ Expanding the $b_z$ component gives $b_z = \\;\\mu _0\\;\\sigma \\iiint _V \\frac{B_y}{4 \\pi \\left| \\mathbf {r} - \\mathbf {r^{\\prime }} \\right|}\\, \\left( y^{\\prime }\\frac{W }{\\varsigma _0^2}\\, \\textrm {exp}\\left({\\frac{-\\left(x^{\\prime 2}+y^{\\prime 2}\\right)}{2\\varsigma _0^2}}\\right)\\right)\\mathrm {d}^3 r^{\\prime }.$ Rescaling the equation with length scale, $\\varsigma _0$ , gives a pre-factor of $\\varsigma _0^3$ and new dimensionless variables of integration: $\\mathbf {\\tilde{r}} = \\mathbf {{r}}/\\varsigma _0$ , $\\tilde{x}^{\\prime } = x^{\\prime }/\\varsigma _0$ , $\\tilde{y}^{\\prime } = y^{\\prime }/\\varsigma _0$ , and $\\tilde{z}^{\\prime } = z^{\\prime }/\\varsigma _0$ : $\\tilde{b}_z(\\mathbf {\\tilde{r}}) = \\frac{b_z(\\mathbf {\\tilde{r}})}{\\mu _0\\;\\sigma \\;B_y\\;W\\;\\varsigma _0} = \\frac{1}{4 \\pi } \\iiint \\frac{\\tilde{y}^{\\prime }}{ \\left| \\mathbf {\\tilde{r}} - \\mathbf {\\tilde{r}^{\\prime }} \\right| }\\, \\textrm {exp}\\left({\\frac{-\\left(\\tilde{x}^{\\prime 2}+\\tilde{y}^{\\prime 2}\\right)}{2}}\\right)\\mathrm {d} \\tilde{x}^{\\prime }\\,\\mathrm {d} \\tilde{y}^{\\prime }\\,\\mathrm {d} \\tilde{z}^{\\prime },$ which can be solved numerically." 
], [ "Numerical Approach and Results", "To compute the integral in equation REF , the velocity field is discretized onto a 3D domain ranging from $-5\\varsigma _0$ to $5\\varsigma _0$ in each of the $x$ and $y$ directions and from $-320\\varsigma _0$ to $320\\varsigma _0$ in the vertical (i.e., z-direction) using $[N_x,\\,N_y,\\,N_z] = [64,\\,64,\\,2560]$ gridpoints to mimic a long vertical extent of downwelling.", "The magnetic field was evaluated on a larger domain on the $xy$ -plane (i.e., $z=0$ ) at 50 logarithmically spaced locations ranging from $x,y = 0.01 - 1000$ .", "A grid convergence study of the velocity field discretization was conducted to verify that the root-mean-square differences for the different resolutions were less than $1\\%$ of the global root-mean-square variations.", "Further, a domain study using different heights from $H= 80\\varsigma _0-640\\varsigma _0$ was conducted to observe their effects on the power law behavior ranges.", "All computations were performed on an Nvidia RTX Quadro 5000 GPU.", "Figure: NO_CAPTIONThe numerical results for the dimensionless vertical magnetic field $\\tilde{b}_z$ on the $xy$ -plane are shown in figure REF (b).", "Compared to the Gaussian velocity signature shown in figure REF (a), the magnetic signature persists much further away from the location of induced flow and exhibits a lobed structure.", "More precisely, the variation of the magnetic signature with azimuthal angle follows a sinusoidal dependence within the $xy$ -plane and is shown in figure REF (a) as a function of azimuthal locations where $\\varrho \\equiv \\sqrt{x^2+y^2} =\\varsigma _0$ , consistent with the simpler Dirac delta model.", "The variation of the magnetic signature with distance is found to exhibit three distinct scaling regimes, which are shown in figure REF (b) at locations along the $y$ -axis.", "The first regime occurs within the vicinity of the velocity signature ($y/\\varsigma _0 \\ll 1$ ).", "Here, the strength of the magnetic signature is largest due to the proximity to the finite velocity gradients and grows with distances away from the migration axis.", "The second regime begins outside the velocity signature region ($y/\\varsigma _0 > 1$ ) and extends to $y/\\varsigma _0 \\approx H/\\varsigma _0$ .", "Here, the signal begins to decay inversely with distance from the migration (i.e., $(y/\\varsigma _0)^{-1}$ ) and most closely emulates the behavior and assumptions of the Dirac delta model.", "The final regime is encountered at distances comparable to the height of the migration.", "There, the signal begins to exhibit a stronger decay, scaling nominally the inverse square of distance from the migration (i.e.,$(y/\\varsigma _0)^{-2}$ ), and is dominated by the effects of the finite domain size.", "The collection of these distinct behaviors in the magnetic signature are qualitatively analogous to the Rankine model of a vortex in viscous flow.", "In that model, the azimuthal velocity magnitude is found to increase linearly within a viscously dominated core and decay inversely with distances outside of the core.", "Here, an analogous behavior is observed in the magnetic signature, albeit with an additional sinusoidal modulation along the azimuth.", "The analogous Rankine model for the magnetic signature is given by the piecewise equation: $\\tilde{b}_z(\\mathbf {\\tilde{r}})= {\\left\\lbrace \\begin{array}{ll}y/(2\\varsigma _0) & \\varrho /\\varsigma _0\\le \\sqrt{2} \\\\\\varsigma _0y/\\varrho ^2 & \\sqrt{2}\\le \\varrho /\\varsigma _0\\le 
H/(\\varsigma _0) \\\\H\\varsigma _0y/\\varrho ^3 & \\varrho /\\varsigma _0 > H/(\\varsigma _0)\\end{array}\\right.", "}$ where $\\varrho = \\sqrt{x^2+y^2}$ .", "Figure: Comparison of numerical results with Rankine magnetic field model.", "(a) Normalized magnetic field amplitude as a function of azimuthal position in the xyxy-plane relative around the center of the migration.", "The azimuthal angle is taken with respect to the positive xx axis.", "The amplitude variation exhibited by the magnetic field (blue x symbols) is computed for location at a distance ς 0 \\varsigma _0 from the aggregation center and found to agree well with the sinusoidal approximation (solid black line).", "(b) Decay of magnetic signature with distance along the yy-axis (North-South direction).", "Blue lines show the vertical magnetic field signature computed numerically along the yy-axis.", "Dashed black lines show the Rankine model results along the yy-axis.", "Three distinct regimes are observed 1) a y/ς 0 y/\\varsigma _0 growth in the migration, 2) a ς 0 /y\\varsigma _0/y decay for y/ς 0 >2y/\\varsigma _0 >\\sqrt{2}, and , 3) a (ς 0 /y) 2 (\\varsigma _0/y)^2 decay for y/ς 0 >H/ς 0 y/\\varsigma _0 > H/\\varsigma _0.", "Gray line shows the solution for the Dirac Delta velocity model.", "Here, H=320ς 0 H = 320\\varsigma _0.Figure: Profile of the dimensional magnetic field strength compared with the velocity signature along the yy-axis.", "Solid blue lines show the magnetic field signature along the y-axis (x=0x=0).", "Solid orange lines show the velocity field signature along the y-axis (x=0x=0).", "Here we choose representative values of B geo =25μB_{geo}=25\\mu T, ς 0 =100\\varsigma _0=100 m, σ=5\\sigma = 5 S/m, W=1W=1 cm/s, giving a representative magnetic signature magnitude of 157 pT.", "Dashed lines in orange and blue indicate the relative sensitivity limits of different velocimetry and magnetometry techniques, respectively.", "Red dashed line represents the height of the resolved velocity wake.To determine the measurement sensitivity required to detect the magnetic signature, representative values for each parameter are chosen as $B_{geo}=25\\mu $ T, $\\varsigma _0=100$ m, $\\sigma = 5$ S/m, $W=1$ cm/s, and substituted for the dimensionless variables.", "The nominal scale of the vertical magnetic signature is found to be $\\mu _0 \\sigma B_y W \\varsigma _0 = 157$ pT.", "Recasting the data from figure REF (b) in terms of these dimensional parameters gives the distributions shown in figure REF for both the vertical magnetic and velocity components as a function of distance from the aggregation center.", "Superimposed on each distribution are the resolution or sensitivity limit for select measurement techniques for each parameter.", "A detailed tabulation of velocimetry and magnetometry techniques is compiled in Tables REF and REF , respectively.", "For the velocity field, common techniques such as Acoustic Doppler Current Profilers (ACDPs) [11], [12], [13], Acoustic Doppler Velocimeters [9], [14], [15], [16], [17], and Particle Image Velocimetry (PIV) [18], [10], [19], [20], all have resolution limits close to a few millimeters per second.", "Consequently, these techniques are suitable for observing upwelling and downwelling currents from migrating aggregates of zooplankton, which are typically on the order of a few centimeters per second [21], [22], [15], [17], [23].", "However, as can be seen in figure REF , the Gaussian decay of the velocity signature confines the usefulness of these techniques to the 
"However, as can be seen in figure REF , the Gaussian decay of the velocity signature confines the usefulness of these techniques to the immediate vicinity of the velocity signature, with each technique reaching its sensitivity floor within a distance of $2\varsigma _0-3\varsigma _0$ of the aggregation center.", "Quantifying the bulk fluid transport due to the migration with these velocimetry techniques is conceptually straightforward and involves measuring the vertical velocity distribution within the aggregation core and spatially integrating the results.", "Even though the resolvable velocity signature is confined to a range comparable to the aggregation width, $\varsigma _0$ , the magnetic signature is potentially detectable at ranges at least an order of magnitude larger.", "This feature is enabled both by the persistence of the magnetic signature due to the inherent nonlocality of the magnetic field and by the advancements in the capability of modern magnetometry techniques.", "For example, commercial fluxgate magnetometers [6], [24], [25] and emerging quantum sensing techniques such as Nitrogen-vacancy (NV) centers [7] have sensitivities on the order of $1-10 \, \mathrm {pT}/\sqrt{\mathrm {Hz}}$ , which could theoretically detect this magnetic signature up to $100\varsigma _0$ away along the N-S axis.", "A potential benefit of this feature is the ability to locate instances of biogenic mixing via their magnetic signatures.", "As seen in figure REF , in order to locate an instance of biogenic upwelling and downwelling via one of the localized velocimetry techniques (e.g., ADV), one would effectively need to be collocated with the aggregation wake.", "This limitation is not as applicable to ADCPs, which are capable of measuring linear velocity profiles at-a-distance.", "However, it is still necessary for the interrogation volume to intersect with the aggregation velocity wake in order to detect the biogenic flow.", "In contrast, the magnetic signature is inherently a nonlocal quantity that extends far beyond that of the velocity wake.", "While the inherent scale of the biogenic magnetic signature is small compared to the Earth's geomagnetic field, such a signal is potentially detectable with existing magnetometry techniques, including commercially available fluxgate magnetometers.", "Determining the size of the magnetic signature and mapping its distribution can potentially be accomplished with even a handful of magnetometers while also helping to identify the location of the aggregation and its velocity signature.", "In contrast to the previous models, where the velocity signature was assumed to have a long extent in the vertical direction, the current model considers the case where the vertical extent of the aggregation, $H$ , is not only finite but much smaller than the characteristic width of the aggregation, $D$ .", "In this low aspect ratio configuration, the aggregation can be modeled as an actuator disk [2], [26] in a steady vertical climb.", "The velocity signature due to the migration is no longer homogeneous in the vertical direction but is instead confined to the region downstream of the aggregation location.", "In this section, the velocity induced by the vertical migration is related to the properties of the animals that comprise the aggregation following [26].", "This induced velocity is then synthesized with the linearly expanding wake model based on actuator disk theory.", "One approach to estimate the induced velocity due to the migration is the analysis proposed by [26].", "In this approach, shown in figure REF , the vertically migrating aggregation is modeled as an actuator disk with diameter, $D$ , 
that is in a steady upward climb (i.e., no acceleration) at velocity, $W_{v}$ .", "Assuming the thrust force from the vertical swimming ($F_T$ ) is balanced by the slight negative buoyancy of the aggregation ($F_B$ ) and the fluid drag on the swimmers ($F_D$ ), the force balance can be expressed as $|F_T| = F_B + F_D = N \left[\Delta \rho \, g \frac{4 \pi a_r }{3}\left(\frac{d}{2}\right)^3 \right] + N \left[\frac{\rho }{2} C_D W_v^2 \pi \left(\frac{d}{2}\right)^2 \right]$ where $N$ is the number of animals, $d$ is the body width of the animal, $a_r=\ell /d$ is the animal body length-to-width ratio, and $\Delta \rho $ is the difference in density between the animal and the seawater.", "To simplify the analysis, the volume of the animal is approximated using the volume of a prolate spheroid.", "The thrust force generated by the climbing aggregation can be related to the induced velocity experienced by the aggregation, $\Delta w_0$ , through the relationship $F_T =\frac{1}{2}\rho \pi D^2 \Delta w_0 \left( W_v+\Delta w_0 \right),$ which indicates that the thrust force from the migrating aggregation is balanced by the downward momentum injected into the fluid by the swimmers.", "In the far field where the pressure has recovered, the thrust force from the aggregation can be related to a wake velocity, $W_w$ , through $F_T = \frac{1}{2}\rho A_D W_w^2=\frac{\pi }{8}\rho D^2 W_w^2.$", "Figure: Diagram of the actuator disk model in its inertial frame.", "The aggregation is modeled as a porous disk of diameter, $D$ , and height, $H$ , shown in gray.", "The aggregation is assumed to climb upwards along the positive $\hat{z}$ direction at a constant velocity, $W_v$ .", "The force balance of the actuator disk is shown where the upwardly directed thrust force ($F_T$ ) shown in blue is balanced by the downward directed buoyancy ($F_B$ ) and drag ($F_D$ ) forces shown in orange.", "The individual animals are modeled as prolate spheroids, indicated in light blue with body length, $\ell $ , and width, $d$ .", "Using equations REF and REF , the induced velocity of the aggregation in climb can be solved for as $\Delta w_0 = \sqrt{\frac{W_v^2}{4}+\frac{(F_B + F_D)}{2 \rho A_D}} -\frac{W_v}{2}.$", "If the disk is in a steady climb where the thrust is balanced by the weight of the aggregation and the drag on the disk, then the induced velocity term, $\frac{(F_B + F_D)}{2 \rho A_D}$ , can be expressed as $\frac{(F_B + F_D)}{2 \rho A_D} = \frac{2 N}{\rho \pi D^2} \left(\frac{\Delta \rho g 4\pi a_r}{3}\left(\frac{d}{2}\right)^3 + \frac{\rho }{2} C_D W_v^2 \pi \left(\frac{d}{2}\right)^2 \right) =\frac{d^2}{4}\frac{N}{D^2} \left(\frac{4 a_r}{3}\frac{\Delta \rho }{\rho } g d + C_D W_v^2 \right).$", "Substituting the animal number density $\Phi = 4N /(\pi H D^2)$ gives $\frac{(F_B + F_D)}{2 \rho A_D} =\Phi \frac{\pi H d^2}{16} \left(\frac{4 a_r}{3}\frac{\Delta \rho }{\rho } g d + C_D W_v^2 \right).$", "Substituting equation REF into equation REF gives $\Delta w_0 = \sqrt{\frac{W_v^2}{4}+\Phi \frac{\pi H d^2}{16} \left(\frac{4 a_r}{3}\frac{\Delta \rho }{\rho } g d + C_D W_v^2 \right)} -\frac{W_v}{2}.$", "In circumstances where the induced velocity of the migrating aggregation is not known or cannot be measured, equation REF can be employed with quantities based solely on the aggregation properties and rate of climb."
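A small helper implementing the final expression for $\Delta w_0$ is sketched below; the numbers in the example call are purely illustrative placeholders rather than values taken from the text, and would be replaced by measured aggregation properties.

```python
import math

def induced_velocity(W_v, Phi, H, d, a_r, drho_over_rho, C_D, g=9.81):
    """Induced velocity Delta w_0 [m/s] of a vertically climbing aggregation,
    from the actuator-disk balance of thrust against buoyancy and drag.

    W_v           : climb speed of the aggregation [m/s]
    Phi           : animal number density [1/m^3]
    H             : vertical extent of the aggregation [m]
    d, a_r        : animal body width [m] and length-to-width ratio
    drho_over_rho : excess density of an animal relative to seawater
    C_D           : drag coefficient of an individual swimmer
    """
    forcing = Phi*math.pi*H*d**2/16.0*(4.0*a_r/3.0*drho_over_rho*g*d + C_D*W_v**2)
    return math.sqrt(W_v**2/4.0 + forcing) - W_v/2.0

# Illustrative (hypothetical) values, not taken from the text:
print(induced_velocity(W_v=0.05, Phi=2000.0, H=10.0, d=0.005,
                       a_r=4.0, drho_over_rho=0.03, C_D=1.0))
```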
], [ "Linearly expanding wake model", "While the vertical velocity profile is still assumed to have a Gaussian distribution in the horizontal plane, the width of the wake is prescribed to expand linearly with the distance downstream of the aggregation following [27], [28] and [29].", "In the inertial frame of the migrating aggregation climbing with an upward velocity, $W_v$ , the surrounding vertical velocity field is given by: $w(x,y,z) = - W_v - \\Delta w(z)\\frac{D^2}{8\\varsigma _0^2}\\exp \\left(\\frac{-(x^2+y^2)}{2\\varsigma _0^2 d_w(z)^2}\\right).$ Here the aggregation is centered on the domain origin, $\\Delta w(z)$ is the vertical velocity deficit along the wake centerline, $\\varsigma _0$ is the characteristic width of the wake at the streamwise location of the aggregation taken to be $\\varsigma _0 = 0.235D$ [29], and $d_w(z)$ is the dimensionless spreading function of the wake as a function of distance downstream of the aggregation.", "The wake spreading function is modeled as a linear expansion similar to the Jensen wake model [30] and is given by the function $d_w(z) = 1+k_w \\ln \\left(1+\\exp {\\left(\\frac{2(z-1)}{D}\\right)}\\right)$ from [29] with wake expansion coefficient, $k_w \\approx 0.0834$ .", "The corresponding centerline velocity deficit for the aggregation wake is given by $\\Delta w(z) =\\frac{\\Delta w_0}{d_w^2(z)}\\frac{1}{2}\\left[1+\\mathrm {erf}\\left(\\frac{z\\sqrt{2}}{D}\\right)\\right]$ where $\\Delta w_0$ denotes the induced velocity at the center of the aggregation and is associated with the thrust force from to the vertically migrating aggregation.", "This parameter is also assumed to depend on the properties of the aggregation and will be discussed in the following section.", "A mean entertainment velocity, $u_\\varrho $ , due to the vertical velocity field, $w$ , can be modeled for an axisymmetric mean flow field using the incompressibility condition.", "The continuity equation for incompressible flow in cylindrical coordinates is given by $\\nabla \\cdot \\mathbf {u} \\Rightarrow \\frac{1}{\\varrho } \\frac{\\partial (\\varrho u_\\varrho )}{\\partial \\varrho } + \\frac{\\partial w}{\\partial z} = - \\frac{1}{\\varrho }\\frac{\\partial u_\\phi }{\\partial \\phi } = 0.$ where $z$ is the vertical coordinate, $\\phi $ be the azimuthal angle, and $\\varrho $ be the radial coordinate in the horizontal.", "Assuming the mean velocity field to be axisymmetric lets the azimuthal velocity component, $u_\\phi $ be zero and simplifies equation REF to be $\\frac{1}{\\varrho } \\frac{\\partial (\\varrho u_\\varrho )}{\\partial \\varrho } = - \\frac{\\partial w}{\\partial z}.$ Solving for the radial velocity gives $u_\\varrho (z,\\varrho ) = - \\frac{1}{\\varrho }\\int _0^\\varrho \\varrho ^{\\prime }\\frac{\\partial w}{\\partial z}(z,\\varrho ^{\\prime }) d\\varrho ^{\\prime }.$ Combining equations REF , REF , REF , and REF gives an axisymmetric mean velocity field associated with the vertical migration of a low aspect ratio aggregation.", "This velocity field can be inserted into equations REF - using the following transformation into Cartesian velocities: $u (z,\\varrho ,\\theta ) = u_\\varrho \\cos (\\theta )\\\\v (z,\\varrho ,\\theta ) = u_\\varrho \\sin (\\theta )\\\\w (z,\\varrho ,\\theta ) = w\\\\$" ], [ "Numerical Approach and Results", "Similar to the previous section, the Green's function of the wake velocity was numerically integrated over the 3D domain ranging from $-10D$ to $10D$ in each of the $x$ and $y$ directions and from $-400D$ to $400D$ in 
], [ "Numerical Approach and Results", "Similar to the previous section, the Green's function of the wake velocity was numerically integrated over the 3D domain ranging from $-10D$ to $10D$ in each of the $x$ and $y$ directions and from $-400D$ to $400D$ in the vertical (i.e., z-direction) using $[N_x,\,N_y,\,N_z] = [88,\,88,\,3520]$ grid points.", "The magnetic field was evaluated on a larger domain on the $yz$ -plane (i.e., $x=0$ ) at 100 logarithmically spaced locations ranging from $y,z = 0.01$ to 1000.", "As before, the Earth's magnetic field is taken to be constant over the velocity field with negligible declination and inclination such that only $B_y$ is nonzero.", "In contrast to the previous velocity models, the semi-infinite extent of the velocity signature in the vertical ($z$ ) direction requires that the magnetic signature now have horizontal components to ensure that the magnetic field lines are closed.", "These horizontal components, however, are much smaller than the vertical component due to the entrainment velocity and its horizontal gradients being much smaller than those of the vertical velocity component (i.e., wake velocity).", "Unlike the previous velocity models, which feature vertical homogeneity in the velocity field, the actuator disk model assumes a minimal velocity signature far upstream of the aggregation and an expanding jet downstream of the migration.", "The resulting wake velocity given by equations REF - REF is shown in figure REF (a) as a contour map in the $yz$ -plane with the nominal wake spreading of $2\varsigma _0 d_w(z)$ (see equation REF ) shown as a dashed line.", "The vertical component of the associated magnetic signature is shown in figure REF (b) as a contour map in the $yz$ -plane against the same wake spreading function.", "Despite the reduced vertical extent of the wake, the magnetic signature is still observed to persist at horizontal distances much larger than the wake width for all values of $z$ downstream of the aggregation.", "Substituting representative parameters $B_{geo}=25\mu $ T, $D=100$ m, $\sigma = 5$ S/m, and $\Delta w_0=1$ cm/s into these distributions again gives a magnetic signature scale of $\mu _0 \sigma \Delta w_0 B_y D= 157$ pT.", "Select profiles of the vertical magnetic and velocity components are shown along the $y$ -axis in figure REF with the respective resolution/sensitivity limits of different measurement techniques.", "Similar to the previous analysis, common techniques such as Acoustic Doppler Current Profilers (ADCPs) [11], [12], [13] and Acoustic Doppler Velocimeters [9], [14], [15], [16], [17] are all still suitable for observing upwelling and downwelling currents from migrating aggregates of zooplankton even up to $5D$ downstream of the aggregation.", "As can be seen in figure REF , the Gaussian decay of the velocity signature still confines the usefulness of velocimetry techniques to the immediate vicinity of the wake.", "However, because of the gradual expansion of the wake downstream of the aggregation, the horizontal distance from the axis of the migration where the velocity signature can be detected gradually increases downstream of the migration.", "By comparison, the limited vertical extent of the wake in this model, together with the choice of normalization based on the aggregation size rather than the wake width, somewhat reduces the overall magnitude of the magnetic signature distribution.", "Furthermore, the power-law decay of the signature is slightly faster than the $y^{-1}$ predicted by the previous models.", "This feature is evidenced by the dashed gray line showing the nominal results from the previous section for comparison.", "However, at fixed horizontal distances, there is a relative enhancement of the magnetic signature with 
downstream distance from the migration outside the velocity wake due to the entrainment and wake spreading.", "While the detection distances for state-of-the-art fluxgate magnetometers [6], [24], [25] and Nitrogen-vacancy (NV) centers [7] are significantly reduced compared to the previous models, each detection distance still extends approximately an order of magnitude further than the velocimetry techniques according to this model.", "Figure: Profile of representative dimensional (a) aggregation wake velocities and (b) corresponding vertical magnetic field strengths as a function of distance along the $y$-axis at different heights.", "Representative values of $B_{geo}=25\mu $ T, $D=100$ m, $\sigma = 5$ S/m, $\Delta w_0=1$ cm/s, $k_w=0.0834$ and $\varsigma _0=0.235D$ were chosen for the migration, giving a representative magnetic signature magnitude of 157 pT.", "Solid orange and blue lines show the vertical wake velocity and vertical magnetic field component along the $y$-axis ($x=0$) at vertical locations $z/D=-5,-2,-1,0,\;\mathrm {and}\;1$.", "Darker shades of each respective color indicate lower heights with the thick line denoting the height of the aggregation itself.", "Dotted lines indicate the typical resolution or sensitivity limit of corresponding velocimetry and magnetometry techniques.", "The dashed gray line in (b) indicates the nominally equivalent magnetic signature generated by a 2D Gaussian jet of infinite extent (see equation ).", "Red dashed line represents the height of the resolved velocity wake." ] ]
2207.03486
[ [ "Emergence of biological transportation networks as a self-regulated\n process" ], [ "Abstract We study self-regulating processes modeling biological transportation networks.", "Firstly, we write the formal $L^2$-gradient flow for the symmetric tensor valued diffusivity $D$ of a broad class of entropy dissipations associated with a purely diffusive model.", "The introduction of a prescribed electric potential leads to the Fokker-Planck equation, for whose entropy dissipations we also investigate the formal $L^2$-gradient flow.", "We derive an integral formula for the second variation of the dissipation functional, proving convexity (in dependence of diffusivity tensor) for a quadratic entropy density modeling Joule heating.", "Finally, we couple in the Poisson equation for the electric potential obtaining the Poisson-Nernst-Planck system.", "The formal gradient flow of the associated entropy loss functional is derived, giving an evolution equation for $D$ coupled with two auxiliary elliptic PDEs." ], [ "Introduction", "A transportation network is a realization of a spatial structure which permits flow of some commodity.", "Network structures and dynamics in biological contexts, in particular organization of leaf venation networks, vascular and neural network formation, have been widely investigated in the recent literature [7], [9], [15], [27].", "One typically focuses on studying optimality in transport properties (electric, fluids, material) of the networks, involving a complex trade-off between cost, transportation efficiency, and fault tolerance [6].", "Biological transportation networks develop without centralized control [38] and have been fine-tuned by many cycles of evolutionary selection pressure.", "They can therefore be considered as emergent structures resulting from self-regulating processes.", "An important class of self-regulating processes that we shall study in this paper, is governed by the minimization of an entropy dissipation, coupled to the conservation law for a quantity $u=u(x)$ , $ -\\nabla \\cdot (D\\nabla u) = S \\qquad \\mathrm {in} \\, \\, \\Omega , $ with $D=D(x)$ the symmetric, positive definite diffusivity tensor of the transportation structure and $S=S(x)$ the distribution of sources and sinks.", "The quantity $u=u(x)$ typically represents the concentration of a chemical species, ions, nutrients or material pressure.", "Equation (REF ) is posed on a bounded domain $\\Omega \\subset \\mathbb {R}^d$ , $d\\ge 1$ , with smooth boundary $\\partial \\Omega $ , subject to the Dirichlet boundary condition $ u \\equiv c \\qquad \\mathrm {on} \\, \\, \\partial \\Omega , $ where $c$ is a constant, considered the equilibrium state of the system.", "The entropy dissipation is given by the functional $ E[D] = \\int _\\Omega \\Phi ^{\\prime \\prime }(u) \\nabla u \\cdot D \\nabla u \\,\\mathrm {d}x,$ where the entropy (or free energy) generating function $\\Phi :\\mathbb {R}\\rightarrow \\mathbb {R}$ is convex with $\\Phi ^{\\prime }(c) = 0.$ Here the solution $u$ of (REF ), (REF ) is considered to depend on the diffusivity tensor $D$ , i.e., $u = u[D]$ .", "Evolutionary selection is assumed to take place through the minimization of the entropy dissipation with respect to $D$ .", "A generic example is the minimization of Joule heating, where $\\Phi (u) = u^2/2$ and the energy dissipation is given by the Dirichlet integral considered as a functional of the diffusivity $D$ , $ E[D] = \\int _\\Omega \\nabla u \\cdot D \\nabla u \\,\\mathrm {d}x,$ where $u=u[D]$ .", 
"Indeed, Joule's law asserts that the system power density is given by the product of the current $J = D \\nabla u$ and the potential gradient $\\nabla u$ .", "Another typical choice for the entropy generator is $\\Phi (u) = u(\\ln (u)-1)$ , which turns (REF ) into the Fisher information $\\int _\\Omega \\frac{\\nabla u \\cdot D \\nabla u}{u} \\,\\mathrm {d}x$ .", "It is easy to check that the gradient flow of (REF ) subject to the constraint (REF ), (REF ) is given by $ \\frac{\\partial D}{\\partial t} = \\nabla u \\otimes \\nabla u \\qquad \\mathrm {in} \\, \\, (0, \\infty ) \\times \\Omega , $ where $D=D(t,x)$ and $t$ is the time-like variable induced by the gradient flow.", "A particular case of this type of process in the context of biological applications (e.g., leaf venation in plants) is the network formation problem introduced in [26] and further analyzed in the series of papers [1], [2], [11], [20], [21], [22], [31], [39], [40].", "Here the quantity $D=D(t,x)$ represent the tensor-valued local conductivity of the network, which is understood as a continuous porous medium.", "The flow of the material (e.q., water with nutrients in the case of leaf venation) is described in terms of the flux $q=D\\nabla u$ , where $u=u(t,x)$ is the fluid pressure.", "To account for the metabolic cost of maintaining the biological tissue, the functional (REF ) is extended by adding the algebraic term $\\int _\\Omega |D|^\\gamma \\,\\mathrm {d}x$ , where $|D|$ denotes a suitable matrix norm of $D$ and $\\gamma >0$ is the metabolic exponent derived from the biological properties of the underlying system; see [1], [2], [25] for details on the modeling.", "Moreover, random fluctuations in the media are accounted for by adding the Dirichlet integral $\\frac{\\beta }{2}\\int _{\\Omega } |\\nabla D|^2 \\,\\mathrm {d}x$ with $\\beta >0$ the diffusivity constant.", "One thus arrives at the energy functional $ E[D] = \\int _{\\Omega } \\frac{\\beta }{2}|\\nabla D|^2 + \\nabla u \\cdot D \\nabla u + \\frac{\\alpha }{\\gamma } |D|^\\gamma \\,\\mathrm {d}x,$ where the parameter $\\alpha >0$ is the metabolic coefficient.", "The energy is constrained by (REF ), (REF ), representing the local mass conservation $-\\nabla \\cdot q = S$ .", "We refer to [18], [19], [23] for a derivation of the system from the discrete graph-based model of [26].", "A crucial observation about (REF ), (REF ), (REF ) made in [23] is that for $\\gamma \\ge 1$ it is a convex functional in $D$ .", "Consequently, by standard theory [3] we obtain the existence and uniqueness of the corresponding $L^2$ -gradient flow, $ \\frac{\\partial D}{\\partial t} = \\beta \\Delta D + \\nabla u \\otimes \\nabla u - \\alpha |D|^{\\gamma -2} D \\qquad \\mathrm {in} \\, \\, (0, \\infty ) \\times \\Omega , $ subject to a homogeneous Dirichlet boundary condition for $D$ and coupled to (REF ), (REF ).", "However, let us note that well-posedness of solutions for $\\gamma \\in (0,1)$ is an open problem, complicated by the singularity of the term $|D|^{\\gamma -2} D$ when $D\\rightarrow 0$ .", "The first goal of the paper, carried out in Section , is to derive formal $L^2$ -gradient flows corresponding to general entropy dissipation functionals of the form (REF ) with convex functions $\\Phi $ .", "This class of functionals is a classical subject of interest in the literature studying convex and logarithmic Sobolev inequalities and the rate of convergence to equilibrium for Fokker-Planck type equations [4], [5].", "In Section we extend the model to the 
drift-diffusion setting, i.e., we replace (REF ) by $ - \\nabla \\cdot ( D \\nabla u + \\mathrm {z}u D \\nabla \\varphi ) = S \\qquad \\mathrm {in} \\, \\, \\Omega .", "$ The Fokker-Planck equation (REF ) describes flow of ions with charge $\\mathrm {z}\\in \\mathbb {R}$ under the effects of the concentration gradient $\\nabla u$ and electric field $- \\mathrm {z}\\nabla \\varphi $ .", "In certain applications, e.g.", "magnetofluidics [24] or ferrohydrodynamics [29], [36], the influence of the particle velocity and magnetic potential is relevant.", "However, in the context of biological systems, where fluid flow velocities are small, these effects are mostly negligible.", "In [14], [37], the Fokker-Planck equation was coupled to the Navier-Stokes equation, modelling the velocity of charged particles in the fluid.", "To account for drift induced by the electrostatic field of the charged particles, we shall consider the electric potential $\\varphi =\\varphi (t,x)$ to be a solution of the Poisson equation $ -\\Delta \\varphi = \\mathrm {z}u \\qquad \\mathrm {in} \\, \\, \\Omega , $ where $\\mathrm {z}\\in \\mathbb {R}$ is the particle charge, subject to a homogeneous Dirichlet boundary condition.", "The coupled system (REF ), (REF ) then constitutes the Poisson-Nernst-Planck system, widely used in the literature to describe the flow of ions in various physical applications such as semiconductor charge carriers' transport [34], [35], ions transport in porous media [17], in biological processes [16], [32] and in electronic devices [33].", "The free energy for Poisson-Nernst-Planck system (REF ), (REF ) is given by the Helmholtz free energy, see, e.g., [32], $ \\mathcal {H}(u, \\varphi ) := \\mathcal {S}(u) + U(\\varphi ) = \\int _{\\Omega } u (\\ln u -1) \\mathrm {d}x+ \\int _{\\Omega } \\frac{1}{2} | \\nabla \\varphi |^2 \\mathrm {d}x,$ Observe that $\\mathcal {S}(u) = \\int _{\\Omega } u (\\ln u -1) \\mathrm {d}x$ is the Boltzmann entropy and $U(\\varphi ) = \\int _{\\Omega } \\frac{1}{2} | \\nabla \\varphi |^2 \\mathrm {d}x$ is the internal energy due to electrostatic particle interactions.", "In Section we calculate the loss of the Helmholtz free energy to be of the form $\\mathcal {E}[D] = \\int _{\\Omega } u \\nabla \\mu \\cdot D \\nabla \\mu \\, \\mathrm {d}x.$ with the quasi-Fermi energy level $\\mu $ given by $ \\mu := \\ln (u) + \\mathrm {z}\\varphi .$ Let us note that functionals of the form (REF ) also appear in the context of charged particle flow in semiconductor devices, see, e.g., [10].", "The main goal of this paper is to calculate the gradient flow of the loss functional (REF ) constrained by the Poisson-Nernst-Planck system (REF ), (REF ).", "The calculation shall be carried out in Section , leading to the evolution equation for the diffusivity tensor $D=D(t,x)$ $ \\frac{\\partial D}{\\partial t} = u \\nabla \\mu \\otimes \\nabla \\mu + u \\frac{\\nabla \\mu \\otimes \\nabla \\sigma + \\nabla \\sigma \\otimes \\nabla \\mu }{2} \\qquad \\mathrm {in} \\, \\, (0, \\infty ) \\times \\Omega ,$ coupled to the system for the auxiliary variables $\\sigma =\\sigma (t,x)$ and $\\eta =\\eta (t,x)$ , $- \\nabla \\cdot (D \\nabla \\sigma ) + \\mathrm {z}\\nabla \\varphi \\cdot D \\nabla \\sigma - \\mathrm {z}^2 \\eta = \\nabla \\mu \\cdot D \\nabla \\mu \\qquad &\\mathrm {in} \\, \\, (0, \\infty ) \\times \\Omega , \\\\- \\Delta \\eta = \\nabla \\cdot (u D \\nabla \\sigma ) \\qquad &\\mathrm {in} \\, \\, (0, \\infty ) \\times \\Omega ,$ subject to the homogeneous Dirichlet boundary 
conditions for $\\sigma $ and $\\eta $ , $ \\sigma = 0, \\quad \\eta = 0 \\qquad \\mbox{on } \\partial \\Omega .", "$ When equipped with a metabolic term of the form $\\mu \\int _\\Omega |D|^\\gamma \\, \\,\\mathrm {d}x$ and with diffusion, analogously to (REF ), the loss of the Helmholtz free energy (REF ) becomes $ \\mathcal {E}[D] = \\int _{\\Omega } \\frac{\\beta }{2} | \\nabla D |^2 + u \\nabla \\mu \\cdot D \\nabla \\mu + \\alpha |D|^\\gamma \\, \\mathrm {d}x.$ This functional describes the energy expenditure of a biological network transporting charged particles.", "A generic example is represented by neural (brain) tissue in animals and humans.", "However, a quick inspection of equation (REF ), or its straightforward modification accounting for the presence of the metabolic term in (REF ), reveals that, due to lack of a minimum principle, it does not guarantee preservation of positive (semi)definitness of the tensor $D$ .", "In a future work, we shall examine the well-posedness of the system in the case of small sources $S=S(x)$ and/or small time $t$ .", "Another option, inspired by [1], [2], [11], [20], [21], is to make the ansatz $ D = r\\mathbb {I} + m\\otimes m,$ where $r=r(x) \\ge r_0 >0$ is the background permeability of the medium and the vector field $m=m(t,x)\\in \\mathbb {R}^d$ describes the local conductance of the network structure.", "Note that $D$ taking the form (REF ) has the eigenvalues $r(x) + |m|^2$ with eigenvector $m$ , and $r(x)$ with eigenvectors orthogonal to $m$ .", "Thus, it represents conduction along the direction $m$ with conductivity $r(x) + |m|^2$ , while the conduction in directions perpendicular to $m$ is due to the background permeability.", "We, therefore, led to consider the following reformulation of (REF ), $ \\mathcal {E}[m] = \\int _{\\Omega } \\frac{\\beta }{2} |\\nabla m|^2 + r u |\\nabla \\mu |^2 + u |m\\cdot \\nabla \\mu |^2 + \\frac{\\alpha }{\\gamma } |m|^{2\\gamma } \\, \\mathrm {d}x, $ with diffusivity $\\beta >0$ , metabolic constant $\\alpha >0$ and metabolic exponent $\\gamma >0$ .", "The functional (REF ) is constrained by the drift-diffusion equation (REF ) with the diffusivity tensor $D$ given by (REF ), and the Poisson equation (REF ) for the potential $\\varphi $ ." 
], [ "General diffusive model", "We now derive the formal $L^2$ -gradient flow of (REF ) constrained by the elliptic problem (REF ), (REF ).", "Observe that multiplying (REF ) by $\\Phi ^{\\prime }(u)$ and integrating by parts one obtains $\\int _\\Omega \\Phi ^{\\prime \\prime }(u) \\nabla u \\cdot D \\nabla u \\,\\,\\mathrm {d}x = \\int _\\Omega S(x) \\Phi ^{\\prime }(u) \\,\\mathrm {d}x\\ge 0,$ due to the convexity of $\\Phi $ and since $\\Phi ^{\\prime }(c)=0$ .", "Consequently, the functional $E[D]$ given by (REF ) can be written as $E[D] = \\int _{\\Omega } S(x) \\Phi ^{\\prime }(u) \\, \\mathrm {d}x.$ We then have the following result.", "Lemma 1 The formal $L^2$ -gradient flow of the energy functional (REF ) constrained by (REF ), (REF ) is given by $\\frac{\\partial D}{\\partial t} = \\Phi ^{\\prime \\prime }(u) \\nabla u \\otimes \\nabla u + \\frac{\\nabla \\sigma \\otimes \\nabla u + \\nabla u\\otimes \\nabla \\sigma }{2} \\qquad \\mbox{in } (0, \\infty ) \\times \\Omega , $ for the symmetric tensor valued diffusivity $D=D(t,x)$ and scalar $u=u(t,x)$ , with $\\sigma =\\sigma (t,x)$ the solution of the boundary value problem $ - \\nabla \\cdot (D \\nabla \\sigma ) = \\Phi ^{\\prime \\prime \\prime }(u) \\nabla u \\cdot D \\nabla u \\qquad \\mathrm {in} \\, \\, (0, \\infty ) \\times \\Omega $ subject to $\\sigma = 0$ on $\\partial \\Omega $ .", "We expand $u=u^0 + \\varepsilon u^1 + O(\\varepsilon ^2)$ and $D=D^0 + \\varepsilon D^1 + O(\\varepsilon ^2)$ with $\\varepsilon \\in \\mathbb {R}$ , where $D^0$ is a symmetric positive definite tensor and $D^1$ is symmetric.", "Inserting into (REF ), we obtain at zeroth-order $- \\nabla \\cdot (D^0 \\nabla u^0 ) = S, $ subject to $u^0 = c$ on $\\partial \\Omega $ .", "Collecting terms of first order in $\\varepsilon $ , we have $-\\nabla \\cdot (D^0 \\nabla u^1 + D^1 \\nabla u^0) = 0,$ subject to $u^1 = 0$ on $\\partial \\Omega $ .", "Multiplication of (REF ) by a sufficiently smooth function $v=v(x)$ , vanishing on the boundary $\\partial \\Omega $ , and integration by parts leads to the useful identity $\\int _{\\Omega } u^1 [-\\nabla \\cdot (D^0 \\nabla v)] \\, \\mathrm {d}x=\\int _{\\Omega } \\nabla v \\cdot D^0 \\nabla u^1 \\,\\mathrm {d}x= - \\int _{\\Omega } \\nabla v \\cdot D^1 \\nabla u^0 \\,\\mathrm {d}x.$ Next, we calculate the first variation of $E$ in the direction $D^1$ , $\\frac{\\delta E[D^0]}{\\delta D}(D^1) &= \\frac{d}{d\\varepsilon } E[D^0 + \\varepsilon D^1] \\Bigr |_{\\varepsilon = 0} \\\\&= \\frac{d}{d\\varepsilon } \\int _{\\Omega } S(x) \\Phi ^{\\prime }(u^0 + \\varepsilon u^1) \\mathrm {d}x\\Bigr |_{\\varepsilon = 0} \\\\ &= \\frac{d}{d\\varepsilon } \\int _{\\Omega } S(x) \\big [ \\Phi ^{\\prime }(u^0) + \\varepsilon u^1 \\Phi ^{\\prime \\prime }(u^0) \\big ] \\mathrm {d}x\\Bigr |_{\\varepsilon = 0} \\\\ &= \\int _{\\Omega } u^1 S(x) \\Phi ^{\\prime \\prime }(u^0) \\mathrm {d}x.", "$ Using (REF ) and integrating by parts we obtain $\\frac{\\delta E[D^0]}{\\delta D}(D^1) &= \\int _{\\Omega } \\nabla \\big [ u^1 \\Phi ^{\\prime \\prime }(u^0) \\big ] \\cdot D^0 \\nabla u^0 \\mathrm {d}x\\\\&= \\int _{\\Omega } \\nabla u^1 \\cdot D^0 \\nabla [\\Phi ^{\\prime }(u^0)] \\mathrm {d}x+ \\int _{\\Omega } u^1 \\Phi ^{\\prime \\prime \\prime }(u^0) \\nabla u^0 \\cdot D^0 \\nabla u^0 \\mathrm {d}x=: I_1 + I_2.$ We apply (REF ) with $v = \\Phi ^{\\prime }(u^0)$ , noting that $\\Phi ^{\\prime }(u^0) =0$ on $\\partial \\Omega $ due to (REF ) and (REF ), to evaluate $I_1 := \\int _{\\Omega } \\nabla \\Phi ^{\\prime }(u^0) 
\\cdot D^0 \\nabla u^1 \\mathrm {d}x= - \\int _{\\Omega } \\nabla \\Phi ^{\\prime }(u^0) \\cdot D^1 \\nabla u^0 \\,\\mathrm {d}x.$ To evaluate $I_2$ we define $\\sigma $ as the solution of the elliptic problem $- \\nabla \\cdot (D^0 \\nabla \\sigma ) = \\Phi ^{\\prime \\prime \\prime }(u^0) \\nabla u^0 \\cdot D^0 \\nabla u^0, $ subject to homogeneous Dirichlet boundary condition on $\\partial \\Omega $ .", "Using (REF ) with $v:=\\sigma $ we arrive at $I_2 := \\int _{\\Omega } u^1 \\Phi ^{\\prime \\prime \\prime }(u^0) \\nabla u^0 \\cdot D^0 \\nabla u^0 \\mathrm {d}x= - \\int _{\\Omega } \\nabla \\sigma \\cdot D^1 \\nabla u^0 \\,\\mathrm {d}x.$ Due to the symmetry of $D^1$ , we have $I_2 &=& - \\frac{1}{2} \\int _{\\Omega } \\nabla \\sigma \\cdot D^1 \\nabla u^0 + \\nabla u^0 \\cdot D^1 \\nabla \\sigma \\,\\mathrm {d}x\\\\&=& - \\int _{\\Omega } D^1 : \\frac{\\nabla \\sigma \\otimes \\nabla u^0 + \\nabla u^0\\otimes \\nabla \\sigma }{2} \\,\\mathrm {d}x,$ where the symbol $:$ denotes the contraction product of tensors, i.e., $A:B = \\mathrm {tr}(A B^T)$ .", "Substituting (REF ) and (REF ) into (REF ), we finally get $\\frac{\\delta E[D^0]}{\\delta D}(D^1) &= - \\int _{\\Omega } D^1:\\bigg [ \\nabla u^0 \\otimes \\nabla \\Phi ^{\\prime }(u^0)+ \\frac{\\nabla \\sigma \\otimes \\nabla u^0 + \\nabla u^0\\otimes \\nabla \\sigma }{2} \\bigg ] \\, \\mathrm {d}x\\\\&= - \\int _{\\Omega } D^1 : \\bigg [ \\Phi ^{\\prime \\prime }(u^0) \\nabla u^0 \\otimes \\nabla u^0 + \\frac{\\nabla \\sigma \\otimes \\nabla u^0 + \\nabla u^0\\otimes \\nabla \\sigma }{2}\\bigg ] \\, \\mathrm {d}x.", "$ Drift-diffusion model In this Section we extend the model to the drift-diffusion setting, where the electrically charged particles are subject to a prescribed smooth stationary electric potential $\\varphi =\\varphi (x)$ with $\\varphi = 0$ on $\\partial \\Omega $ .", "Then, the local mass conservation is of the form $ - \\nabla \\cdot ( D \\nabla u + \\mathrm {z}u D \\nabla \\varphi ) = S \\qquad \\mbox{in } \\Omega , $ where $\\mathrm {z}\\in \\mathbb {R}$ denotes the valence (electric charge) of the particles.", "Equation (REF ) is subject to the Dirichlet boundary condition $ u = c \\qquad \\mbox{on } \\partial \\Omega ,$ where the constant $c$ is an equilibrium, i.e., $\\Phi ^{\\prime }(c) = 0$ .", "Defining the potential $ w := e^{\\mathrm {z}\\varphi (x)} u,$ equation (REF ) transforms into $ - \\nabla \\cdot ( e^{-\\mathrm {z}\\varphi } D \\nabla w ) = S \\qquad \\mathrm {in} \\, \\, \\Omega ,$ subject to $w = c$ on $\\partial \\Omega $ .", "We then consider the entropy loss functional $ E[D] = \\int _{\\Omega } e^{-\\mathrm {z}\\varphi } \\Phi ^{\\prime \\prime }(w) \\nabla w \\cdot D \\nabla w \\mathrm {d}x.", "$ Following similar steps as in the proof of Lemma REF , we derive the $L^2$ -gradient flow of (REF )–(REF ).", "Lemma 2 The formal $L^2$ -gradient flow of the functional (REF ) constrained by (REF ) is given by $ \\frac{\\partial D}{\\partial t} = e^{-\\mathrm {z}\\varphi } \\left[ \\Phi ^{\\prime \\prime }(w) \\nabla w \\otimes \\nabla w + \\frac{\\nabla w \\otimes \\nabla \\sigma + \\nabla \\sigma \\otimes \\nabla w}{2} \\right] \\qquad \\mbox{in } (0, \\infty ) \\times \\Omega ,$ with $w$ given by (REF ) and $\\sigma $ a solution of $ - \\nabla \\cdot (D \\nabla \\sigma ) + \\mathrm {z}\\nabla \\varphi \\cdot D \\nabla \\sigma = \\Phi ^{\\prime \\prime \\prime }(w) \\nabla w \\cdot D \\nabla w \\qquad \\mathrm {in} \\, \\, (0, \\infty ) \\times \\Omega $ subject to the homogeneous Dirichlet 
boundary condition $\\sigma = 0$ on $\\partial \\Omega $ .", "See the proof of Lemma REF .", "To examine convexity properties of the functional (REF ) constrained by (REF ), we calculate its second-order variation.", "Lemma 3 The second-order variation of (REF ) coupled to (REF ) is given by $ \\frac{\\delta ^2 E[D^0]}{\\delta D^2}(D^1, D^1) = \\int _\\Omega e^{-\\mathrm {z}\\varphi } \\nabla \\left( \\Phi ^{\\prime \\prime }(w^0) w^2 + \\Phi ^{\\prime \\prime \\prime }(w^0) (w^1)^2 \\right)\\cdot D^0\\nabla w^0 \\,\\mathrm {d}x.$ where $w^0$ is a solution of $- \\nabla \\cdot ( e^{-\\mathrm {z}\\varphi } D^0 \\nabla w^0 ) = S \\qquad \\mbox{in } \\Omega ,$ $w^0 = c$ on $\\partial \\Omega $ and $w^1$ , $w^2$ are defined by $ - \\nabla \\cdot \\left[ e^{-\\mathrm {z}\\varphi } \\left(D^1\\nabla w^0 + D^0 \\nabla w^1 \\right)\\right] = 0 \\qquad \\mathrm {in} \\, \\, \\Omega , \\\\- \\nabla \\cdot \\left[ e^{-\\mathrm {z}\\varphi } \\left(D^1\\nabla w^1 + \\frac{1}{2} D^0 \\nabla w^2 \\right)\\right] = 0 \\qquad \\mathrm {in} \\, \\, \\Omega ,$ subject to homogeneous Dirichlet boundary conditions.", "We expand $D=D^0 + \\varepsilon D^1 + O(\\varepsilon ^2)$ with $\\varepsilon \\in \\mathbb {R}$ , where $D^0$ is a symmetric positive definite tensor and $D^1$ is symmetric.", "By Taylor expansion we have $w[D^0 + \\varepsilon D^1] = w[D^0] + \\varepsilon \\frac{\\delta w[D^0]}{\\delta D}(D^1) + \\frac{\\varepsilon ^2}{2} \\frac{\\delta ^2 w[D^0]}{\\delta D^2}(D^1, D^1) + O(\\varepsilon ^3),$ and we denote $w^0 := w[D^0], \\qquad w^1 := \\frac{\\delta w[D^0]}{\\delta D}(D^1), \\qquad w^2 := \\frac{\\delta ^2 w[D^0]}{\\delta D^2}(D^1, D^1).$ Collecting the $O(1)$ terms in (REF ) gives $- \\nabla \\cdot ( e^{-\\mathrm {z}\\varphi } D^0 \\nabla w^0 ) = S.$ The first-order terms give $- \\nabla \\cdot \\left[ e^{-\\mathrm {z}\\varphi } \\left(D^1\\nabla w^0 + D^0 \\nabla w^1 \\right)\\right] = 0,$ which is (REF ), and the second-order terms $- \\nabla \\cdot \\left[ e^{-\\mathrm {z}\\varphi } \\left(D^1\\nabla w^1 + \\frac{1}{2} D^0 \\nabla w^2 \\right)\\right] = 0,$ which is ().", "Multiplication of (REF ) by $\\Phi ^{\\prime }(w)$ and integration by parts yields $E[D] = \\int _\\Omega S \\Phi ^{\\prime }(w) \\,\\mathrm {d}x,$ so that we have $\\frac{\\delta ^2 E[D^0]}{\\delta D^2}(D^1, D^1) = {\\frac{\\,\\mathrm {d}^2 }{\\,\\mathrm {d}\\varepsilon ^2}} \\left.", "\\int _\\Omega S \\Phi ^{\\prime }\\left( w[D^0 + \\varepsilon D^1] \\right) \\,\\mathrm {d}x \\right|_{\\varepsilon =0} .", "$ Taylor expansion gives $\\Phi ^{\\prime }\\left( w[D^0 + \\varepsilon D^1] \\right) &=& \\Phi ^{\\prime }\\left( w^0 + \\varepsilon w^1 + \\frac{\\varepsilon ^2}{2} w^2 \\right) + O(\\varepsilon ^3) \\\\&=& \\Phi ^{\\prime }(w^0) + \\varepsilon \\Phi ^{\\prime \\prime }(w^0)w^1 + \\frac{\\varepsilon ^2}{2} \\left( \\Phi ^{\\prime \\prime }(w^0) w^2 + \\Phi ^{\\prime \\prime \\prime }(w^0) (w^1)^2 \\right) + O(\\varepsilon ^3).$ Consequently, $\\frac{\\delta ^2 E[D^0]}{\\delta D^2}(D^1, D^1) = \\int _\\Omega S \\left( \\Phi ^{\\prime \\prime }(w^0) w^2 + \\Phi ^{\\prime \\prime \\prime }(w^0) (w^1)^2 \\right) \\,\\mathrm {d}x,$ and using (REF ) again, integration by parts results in $\\frac{\\delta ^2 E[D^0]}{\\delta D^2}(D^1, D^1) = \\int _\\Omega e^{-\\mathrm {z}\\varphi } \\nabla \\left( \\Phi ^{\\prime \\prime }(w^0) w^2 + \\Phi ^{\\prime \\prime \\prime }(w^0) (w^1)^2 \\right)\\cdot D^0\\nabla w^0 \\,\\mathrm {d}x.$ We observe that for general (convex) $\\Phi $ the result of Lemma REF does not directly imply 
convexity of $E[D]$ .", "However, for $\Phi (w) = w^2/2$ we have $\frac{\delta ^2 E[D^0]}{\delta D^2}(D^1, D^1) = \int _\Omega e^{-\mathrm {z}\varphi } \nabla w^2\cdot D^0\nabla w^0 \,\mathrm {d}x.$ Multiplication of () by $w^0$ and integration by parts gives $\frac{1}{2} \int _\Omega e^{-\mathrm {z}\varphi } \nabla w^0 \cdot D^0 \nabla w^2 \,\mathrm {d}x= - \int _\Omega e^{-\mathrm {z}\varphi } \nabla w^0\cdot D^1\nabla w^1 \,\mathrm {d}x,$ and multiplication of (REF ) by $w^1$ and integration by parts yields $\int _\Omega e^{-\mathrm {z}\varphi } \nabla w^1\cdot D^1\nabla w^0 \,\mathrm {d}x= - \int _\Omega e^{-\mathrm {z}\varphi } \nabla w^1 \cdot D^0 \nabla w^1 \,\mathrm {d}x.$ Consequently, recalling the symmetry and positive semidefiniteness of $D^0$ , we have $\frac{\delta ^2 E[D^0]}{\delta D^2}(D^1, D^1) = \int _\Omega e^{-\mathrm {z}\varphi } \nabla w^1 \cdot D^0 \nabla w^1 \,\mathrm {d}x\ge 0.$ We conclude that for $\Phi (w) = w^2/2$ the minimization problem is convex.", "For the following we denote by $S_d$ the space of symmetric real $d \times d$ -matrices and, for $\alpha > 0$ : $L^2_{\alpha } (\Omega ; S_d) := \lbrace D \in L^2(\Omega ; S_d) \, \, \mathrm {s. t.} \, \, D \ge \alpha I \, \, \mathrm {a.e.} \, \mathrm {on} \, \, \Omega \rbrace .$ We prove: Theorem 1 Let $\varphi \in L^{\infty }(\Omega )$ and define the functional $E : L^2 (\Omega ; S_d) \rightarrow (-\infty , +\infty ] $ by $E[D] := {\left\lbrace \begin{array}{ll}\int _{\Omega } e^{-\mathrm {z}\varphi } \nabla w \cdot D \nabla w \mathrm {d}x, \quad &\mathrm {if} \, \, D \in L^2_{\alpha } (\Omega ; S_d), \\\infty \quad &\mathrm {otherwise},\end{array}\right.}$ where $-\nabla \cdot (e^{-\mathrm {z}\varphi } D \nabla w ) = S$ in $\Omega $ , $w=c$ on $\partial \Omega $ , with $S \in L^2(\Omega )$ .", "Then, for each $D_I \in L^2_{\alpha } (\Omega ; S_d)$ , the gradient flow of $E$ , given by $\frac{\partial D}{\partial t} = e^{-\mathrm {z}\varphi } \nabla w \otimes \nabla w \qquad &\mbox{on }& (0, \infty ) \times \Omega , \\D(0, x) = D_I (x) \qquad &\mbox{on }& \Omega ,$ coupled to (REF ), exists for all $t>0$ .", "For $D \in L^2_{\alpha }(\Omega ; S_d)$ an application of the Lax-Milgram Lemma in the space $H := \left\lbrace v \in H^1_0 (\Omega ) \, \, \mathrm {s.t.} \, \, \int _{\Omega } e^{-\mathrm {z}\varphi } \nabla v \cdot D \nabla v \, \mathrm {d}x< \infty \right\rbrace $ shows the existence of a unique solution $w \in H + c$ of the equation $-\nabla \cdot (e^{-\mathrm {z}\varphi } D \nabla w) = S$ in $\Omega $ , $w = c$ on $\partial \Omega $ .", "Thus, $E$ is well defined.", "Moreover, it is easy to prove that $E$ is $C^2$ on its domain of definition.", "Obviously, $L^2_{\alpha }(\Omega ; S_d)$ is a closed and convex subset of $L^2(\Omega ; S_d)$ and $E$ is convex and lower semicontinuous on $L^2(\Omega ; S_d)$ (see the proof of Proposition 3.3 in [23] for the lower semicontinuity proof).", "The statement then follows from standard theory, see e.g. [3].", "Let us note that the right-hand side in (REF ) is a positive semidefinite matrix.", "Consequently, during the evolution induced by (REF ) the solution $D=D(t)$ 'stays away' from the boundary of the set $L^2_{\alpha } (\Omega ; S_d)$ , which consists of positive semidefinite $L^2$ -integrable tensors.", "In particular, if $D_I \ge \alpha I$ almost everywhere on a subset $U$ of $\Omega $ of 
positive Lebesque measure, then $D(t,x) \\ge \\alpha I$ almost everywhere on $U$ for all $t\\ge 0$ .", "Note that the gradient flow equation (REF ) for $D=D(t)$ , coupled to (REF ), does not become stationary (except for the trivial case $S\\equiv 0$ , when $\\nabla w \\equiv 0$ ).", "This is mitigated by adding a relaxation term for $D$ to the right-hand side of (REF ).", "Following the suggestion of [20], [21], [23], [26], we choose the power-law $|D|^{\\gamma -2} D$ with $\\gamma > 1$ .", "Setting $\\varphi \\equiv 0$ for simplicity, we arrive at the following stationary version of (REF ), $\\nabla w \\otimes \\nabla w = |D|^{\\gamma -2} D.$ Inserting into (REF ), we arrive at $- \\nabla \\cdot ( |\\nabla {w}|^{\\frac{2}{\\gamma -1}} \\nabla w ) = S \\qquad \\mbox{in } \\Omega ,$ which is the p-Laplace equation with $p=\\frac{2\\gamma }{\\gamma -1} > 1$ .", "In this context let us point out the important works [28], [30] of Juan Luis Vázquez.", "A significant problem for proving well-posedness of the system (REF ), (REF ), (REF ) with general convex entropy loss densities $\\Phi =\\Phi (w)$ is the fact that we are not able to establish preservation of nonnegativity of the tensor $D=D(t)$ .", "However, modeling considerations [1], [2], [11], [20], [21] motivate us to make the ansatz (REF ) for $D$ , namely $D = r\\mathbb {I} + m\\otimes m,$ with the regularization parameter $r=r(x) \\ge r_0 >0$ (background permeability of the medium) and the vector field $m=m(t,x)\\in \\mathbb {R}^d$ (local conductance of the network structure).", "Then, the Fokker-Planck equation (REF ) transforms into $ - \\nabla \\cdot \\big [ e^{-\\mathrm {z}\\varphi } (r I + m \\otimes m) \\nabla w \\big ] = S \\qquad \\mathrm {in} \\, \\, \\Omega ,$ subject to $w = c$ in $\\partial \\Omega $ .", "Similarly, we recast the entropy loss functional (REF ) as $ E[m] = \\int _{\\Omega } e^{-\\mathrm {z}\\varphi } \\Phi ^{\\prime \\prime }(w) \\nabla w \\cdot (rI + m \\otimes m) \\nabla w \\,\\mathrm {d}x.$ Note that a multiplication of $(\\ref {eq:NP2m_ddmod})$ by $\\Phi ^{\\prime }(w)$ and an integration by parts gives $E[m] = \\int _{\\Omega } S(x) \\Phi ^{\\prime }(w) \\mathrm {d}x.$ We have the following form for the $L^2$ -gradient flow of the system (REF ) – (REF ).", "Lemma 4 The formal $L^2$ -gradient flow of the functional (REF ) constrained by (REF ) is given by $ \\frac{\\partial m}{\\partial t} = e^{-\\mathrm {z}\\varphi } \\big [ 2 \\Phi ^{\\prime \\prime }(w) (\\nabla w \\cdot m) \\nabla w + (\\nabla \\sigma \\cdot m) \\nabla w + (\\nabla w \\cdot m) \\nabla \\sigma \\big ] \\qquad \\mathrm {in} \\, \\, (0, \\infty ) \\times \\Omega ,$ with $w$ given by (REF ) and $\\sigma $ a solution of $- \\nabla \\cdot \\big [ (r I + m \\otimes m) \\nabla \\sigma \\big ] + \\mathrm {z}\\nabla \\varphi \\cdot (rI + m \\otimes m) \\nabla \\sigma = \\Phi ^{\\prime \\prime \\prime }(w) \\nabla w \\cdot (rI + m \\otimes m) \\nabla w $ subject to the homogeneous Dirichlet boundary condition $\\sigma = 0$ on $\\partial \\Omega $ .", "We only sketch the proof here, following the lines of the proof of Lemma REF , where we substitute for $u^0= e^{-\\mathrm {z}\\varphi } w^0$ , $u^1= e^{-\\mathrm {z}\\varphi } w^1$ , $D^0 = rI + m^0 \\otimes m^0$ and $D^1 = m^0 \\otimes m^1 + m^1 \\otimes m^0$ .", "Combining (REF ), (REF ) and (REF ) and setting $\\sigma $ as a solution of $- \\nabla \\cdot \\big [ e^{-\\mathrm {z}\\varphi } (rI + m^0 \\otimes m^0) \\nabla \\sigma \\big ] = \\Phi ^{\\prime \\prime \\prime }(w^0) \\nabla w^0 \\cdot 
], [ "Drift-diffusion model", "In this Section we extend the model to the drift-diffusion setting, where the electrically charged particles are subject to a prescribed smooth stationary electric potential $\\varphi =\\varphi (x)$ with $\\varphi = 0$ on $\\partial \\Omega $ .", "Then, the local mass conservation is of the form $ - \\nabla \\cdot ( D \\nabla u + \\mathrm {z}u D \\nabla \\varphi ) = S \\qquad \\mbox{in } \\Omega , $ where $\\mathrm {z}\\in \\mathbb {R}$ denotes the valence (electric charge) of the particles.", "Equation (REF ) is subject to the Dirichlet boundary condition $ u = c \\qquad \\mbox{on } \\partial \\Omega ,$ where the constant $c$ is an equilibrium, i.e., $\\Phi ^{\\prime }(c) = 0$ .", "Defining the potential $ w := e^{\\mathrm {z}\\varphi (x)} u,$ equation (REF ) transforms into $ - \\nabla \\cdot ( e^{-\\mathrm {z}\\varphi } D \\nabla w ) = S \\qquad \\mathrm {in} \\, \\, \\Omega ,$ subject to $w = c$ on $\\partial \\Omega $ .", "We then consider the entropy loss functional $ E[D] = \\int _{\\Omega } e^{-\\mathrm {z}\\varphi } \\Phi ^{\\prime \\prime }(w) \\nabla w \\cdot D \\nabla w \\mathrm {d}x.", "$ Following similar steps as in the proof of Lemma REF , we derive the $L^2$ -gradient flow of (REF )–(REF ).", "Lemma 2 The formal $L^2$ -gradient flow of the functional (REF ) constrained by (REF ) is given by $ \\frac{\\partial D}{\\partial t} = e^{-\\mathrm {z}\\varphi } \\left[ \\Phi ^{\\prime \\prime }(w) \\nabla w \\otimes \\nabla w + \\frac{\\nabla w \\otimes \\nabla \\sigma + \\nabla \\sigma \\otimes \\nabla w}{2} \\right] \\qquad \\mbox{in } (0, \\infty ) \\times \\Omega ,$ with $w$ given by (REF ) and $\\sigma $ a solution of $ - \\nabla \\cdot (D \\nabla \\sigma ) + \\mathrm {z}\\nabla \\varphi \\cdot D \\nabla \\sigma = \\Phi ^{\\prime \\prime \\prime }(w) \\nabla w \\cdot D \\nabla w \\qquad \\mathrm {in} \\, \\, (0, \\infty ) \\times \\Omega $ subject to the homogeneous Dirichlet boundary condition $\\sigma = 0$ on $\\partial \\Omega $ .", "See the proof of Lemma REF .", "To examine convexity properties of the functional (REF ) constrained by (REF ), we calculate its second-order variation.", "Lemma 3 The second-order variation of (REF ) coupled to (REF ) is given by $ \\frac{\\delta ^2 E[D^0]}{\\delta D^2}(D^1, D^1) = \\int _\\Omega e^{-\\mathrm {z}\\varphi } \\nabla \\left( \\Phi ^{\\prime \\prime }(w^0) w^2 + \\Phi ^{\\prime \\prime \\prime }(w^0) (w^1)^2 \\right)\\cdot D^0\\nabla w^0 \\,\\mathrm {d}x.$ where $w^0$ is a solution of $- \\nabla \\cdot ( e^{-\\mathrm {z}\\varphi } D^0 \\nabla w^0 ) = S \\qquad \\mbox{in } \\Omega ,$ $w^0 = c$ on $\\partial \\Omega $ and $w^1$ , $w^2$ are defined by $ - \\nabla \\cdot \\left[ e^{-\\mathrm {z}\\varphi } \\left(D^1\\nabla w^0 + D^0 \\nabla w^1 \\right)\\right] = 0 \\qquad \\mathrm {in} \\, \\, \\Omega , \\\\- \\nabla \\cdot \\left[ e^{-\\mathrm {z}\\varphi } \\left(D^1\\nabla w^1 + \\frac{1}{2} D^0 \\nabla w^2 \\right)\\right] = 0 \\qquad \\mathrm {in} \\, \\, \\Omega ,$ subject to homogeneous Dirichlet boundary conditions.", "We expand $D=D^0 + \\varepsilon D^1 + O(\\varepsilon ^2)$ with $\\varepsilon \\in \\mathbb {R}$ , where $D^0$ is a symmetric positive definite tensor and $D^1$ is symmetric.", "By Taylor expansion we have $w[D^0 + \\varepsilon D^1] = w[D^0] + \\varepsilon \\frac{\\delta w[D^0]}{\\delta D}(D^1) + \\frac{\\varepsilon ^2}{2} \\frac{\\delta ^2 w[D^0]}{\\delta D^2}(D^1, D^1) + O(\\varepsilon ^3),$ and we denote $w^0 := w[D^0], \\qquad w^1 := \\frac{\\delta w[D^0]}{\\delta D}(D^1), 
\\qquad w^2 := \\frac{\\delta ^2 w[D^0]}{\\delta D^2}(D^1, D^1).$ Collecting the $O(1)$ terms in (REF ) gives $- \\nabla \\cdot ( e^{-\\mathrm {z}\\varphi } D^0 \\nabla w^0 ) = S.$ The first-order terms give $- \\nabla \\cdot \\left[ e^{-\\mathrm {z}\\varphi } \\left(D^1\\nabla w^0 + D^0 \\nabla w^1 \\right)\\right] = 0,$ which is (REF ), and the second-order terms $- \\nabla \\cdot \\left[ e^{-\\mathrm {z}\\varphi } \\left(D^1\\nabla w^1 + \\frac{1}{2} D^0 \\nabla w^2 \\right)\\right] = 0,$ which is ().", "Multiplication of (REF ) by $\\Phi ^{\\prime }(w)$ and integration by parts yields $E[D] = \\int _\\Omega S \\Phi ^{\\prime }(w) \\,\\mathrm {d}x,$ so that we have $\\frac{\\delta ^2 E[D^0]}{\\delta D^2}(D^1, D^1) = {\\frac{\\,\\mathrm {d}^2 }{\\,\\mathrm {d}\\varepsilon ^2}} \\left. \\int _\\Omega S \\Phi ^{\\prime }\\left( w[D^0 + \\varepsilon D^1] \\right) \\,\\mathrm {d}x \\right|_{\\varepsilon =0} .$ Taylor expansion gives $\\Phi ^{\\prime }\\left( w[D^0 + \\varepsilon D^1] \\right) &=& \\Phi ^{\\prime }\\left( w^0 + \\varepsilon w^1 + \\frac{\\varepsilon ^2}{2} w^2 \\right) + O(\\varepsilon ^3) \\\\&=& \\Phi ^{\\prime }(w^0) + \\varepsilon \\Phi ^{\\prime \\prime }(w^0)w^1 + \\frac{\\varepsilon ^2}{2} \\left( \\Phi ^{\\prime \\prime }(w^0) w^2 + \\Phi ^{\\prime \\prime \\prime }(w^0) (w^1)^2 \\right) + O(\\varepsilon ^3).$ Consequently, $\\frac{\\delta ^2 E[D^0]}{\\delta D^2}(D^1, D^1) = \\int _\\Omega S \\left( \\Phi ^{\\prime \\prime }(w^0) w^2 + \\Phi ^{\\prime \\prime \\prime }(w^0) (w^1)^2 \\right) \\,\\mathrm {d}x,$ and using (REF ) again, integration by parts results in $\\frac{\\delta ^2 E[D^0]}{\\delta D^2}(D^1, D^1) = \\int _\\Omega e^{-\\mathrm {z}\\varphi } \\nabla \\left( \\Phi ^{\\prime \\prime }(w^0) w^2 + \\Phi ^{\\prime \\prime \\prime }(w^0) (w^1)^2 \\right)\\cdot D^0\\nabla w^0 \\,\\mathrm {d}x.$ We observe that for general (convex) $\\Phi $ the result of Lemma REF does not directly imply convexity of $E[D]$ .", "However, for $\\Phi (w) = w^2/2$ we have $\\frac{\\delta ^2 E[D^0]}{\\delta D^2}(D^1, D^1) = \\int _\\Omega e^{-\\mathrm {z}\\varphi } \\nabla w^2\\cdot D^0\\nabla w^0 \\,\\mathrm {d}x.$ Multiplication of () by $w^0$ and integration by parts gives $\\frac{1}{2} \\int _\\Omega e^{-\\mathrm {z}\\varphi } \\nabla w^0 \\cdot D^0 \\nabla w^2 \\,\\mathrm {d}x= - \\int _\\Omega e^{-\\mathrm {z}\\varphi } \\nabla w^0\\cdot D^1\\nabla w^1 \\,\\mathrm {d}x,$ and multiplication of (REF ) by $w^1$ and integration by parts yields $\\int _\\Omega e^{-\\mathrm {z}\\varphi } \\nabla w^1\\cdot D^1\\nabla w^0 \\,\\mathrm {d}x= - \\int _\\Omega e^{-\\mathrm {z}\\varphi } \\nabla w^1 \\cdot D^0 \\nabla w^1 \\,\\mathrm {d}x.$ Consequently, recalling the symmetry of $D^1$ and the positive semidefiniteness of $D^0$ , we have $\\frac{\\delta ^2 E[D^0]}{\\delta D^2}(D^1, D^1) = 2 \\int _\\Omega e^{-\\mathrm {z}\\varphi } \\nabla w^1 \\cdot D^0 \\nabla w^1 \\,\\mathrm {d}x\\ge 0.$ We conclude that for $\\Phi (w) = w^2/2$ the minimization problem is convex.", "For the following we denote by $S_d$ the space of symmetric real $d \\times d$ -matrices and, for $\\alpha > 0$ : $L^2_{\\alpha } (\\Omega ; S_d) := \\lbrace D \\in L^2(\\Omega ; S_d) \\, \\, \\mathrm {s.
t.} \\, \\, D \\ge \\alpha I \\, \\, \\mathrm {a.e.} \\, \\mathrm {on} \\, \\, \\Omega \\rbrace .$ We prove: Theorem 1 Let $\\varphi \\in L^{\\infty }(\\Omega )$ and define the functional $E : L^2 (\\Omega ; S_d) \\rightarrow (-\\infty , +\\infty ] $ by $E[D] := {\\left\\lbrace \\begin{array}{ll}\\int _{\\Omega } e^{-\\mathrm {z}\\varphi } \\nabla w \\cdot D \\nabla w \\mathrm {d}x, \\quad &\\mathrm {if} \\, \\, D \\in L^2_{\\alpha } (\\Omega ; S_d), \\\\\\infty \\quad &\\mathrm {otherwise},\\end{array}\\right.}$ where $-\\nabla \\cdot (e^{-\\mathrm {z}\\varphi } D \\nabla w ) = S$ in $\\Omega $ , $w=c$ on $\\partial \\Omega $ , with $S \\in L^2(\\Omega )$ .", "Then, for each $D_I \\in L^2_{\\alpha } (\\Omega ; S_d)$ , the gradient flow of $E$ , given by $\\frac{\\partial D}{\\partial t} = e^{-\\mathrm {z}\\varphi } \\nabla w \\otimes \\nabla w \\qquad &\\mbox{on }& (0, \\infty ) \\times \\Omega , \\\\D(0, x) = D_I (x) \\qquad &\\mbox{on }& \\Omega ,$ coupled to (REF ), exists for all $t>0$ .", "For $D \\in L^2_{\\alpha }(\\Omega ; S_d)$ an application of the Lax-Milgram Lemma in the space $H := \\left\\lbrace v \\in H^1_0 (\\Omega ) \\, \\, \\mathrm {s.t.} \\, \\, \\int _{\\Omega } e^{-\\mathrm {z}\\varphi } \\nabla v \\cdot D \\nabla v \\, \\mathrm {d}x< \\infty \\right\\rbrace $ shows the existence of a unique solution $w \\in H + c$ of the equation $-\\nabla \\cdot (e^{-\\mathrm {z}\\varphi } D \\nabla w) = S$ in $\\Omega $ , $w = c$ on $\\partial \\Omega $ .", "Thus, $E$ is well defined.", "Moreover, it is easy to prove that $E$ is $C^2$ on its domain of definition.", "Obviously, $L^2_{\\alpha }(\\Omega ; S_d)$ is a closed and convex subset of $L^2(\\Omega ; S_d)$ and $E$ is convex and lower semicontinuous on $L^2(\\Omega ; S_d)$ (see the proof of Proposition 3.3 in [23] for the lower semicontinuity proof).", "The statement then follows from standard theory, see e.g. [3].", "Let us note that the right-hand side in (REF ) is a positive semidefinite matrix.", "Consequently, during the evolution induced by (REF ) the solution $D=D(t)$ 'stays away' from the boundary of the set $L^2_{\\alpha } (\\Omega ; S_d)$ , which consists of positive semidefinite $L^2$ -integrable tensors.", "In particular, if $D_I \\ge \\alpha I$ almost everywhere on a subset $U$ of $\\Omega $ of positive Lebesgue measure, then $D(t,x) \\ge \\alpha I$ almost everywhere on $U$ for all $t\\ge 0$ .", "Note that the gradient flow equation (REF ) for $D=D(t)$ , coupled to (REF ), does not become stationary (except for the trivial case $S\\equiv 0$ , when $\\nabla w \\equiv 0$ ).", "This is mitigated by adding a relaxation term for $D$ to the right-hand side of (REF ).", "Following the suggestion of [20], [21], [23], [26], we choose the power-law $|D|^{\\gamma -2} D$ with $\\gamma > 1$ .", "Setting $\\varphi \\equiv 0$ for simplicity, we arrive at the following stationary version of (REF ), $\\nabla w \\otimes \\nabla w = |D|^{\\gamma -2} D.$ Inserting into (REF ), we arrive at $- \\nabla \\cdot ( |\\nabla {w}|^{\\frac{2}{\\gamma -1}} \\nabla w ) = S \\qquad \\mbox{in } \\Omega ,$ which is the p-Laplace equation with $p=\\frac{2\\gamma }{\\gamma -1} > 1$ .", "In this context let us point out the important works [28], [30] of Juan Luis Vázquez.", "A significant problem for proving well-posedness of the system (REF ), (REF ), (REF ) with general convex entropy loss densities $\\Phi =\\Phi (w)$ is the fact that we are not able to establish preservation of nonnegativity of the tensor $D=D(t)$ .", "However, modeling
considerations [1], [2], [11], [20], [21] motivate us to make the ansatz (REF ) for $D$ , namely $D = r\\mathbb {I} + m\\otimes m,$ with the regularization parameter $r=r(x) \\ge r_0 >0$ (background permeability of the medium) and the vector field $m=m(t,x)\\in \\mathbb {R}^d$ (local conductance of the network structure).", "Then, the Fokker-Planck equation (REF ) transforms into $ - \\nabla \\cdot \\big [ e^{-\\mathrm {z}\\varphi } (r I + m \\otimes m) \\nabla w \\big ] = S \\qquad \\mathrm {in} \\, \\, \\Omega ,$ subject to $w = c$ in $\\partial \\Omega $ .", "Similarly, we recast the entropy loss functional (REF ) as $ E[m] = \\int _{\\Omega } e^{-\\mathrm {z}\\varphi } \\Phi ^{\\prime \\prime }(w) \\nabla w \\cdot (rI + m \\otimes m) \\nabla w \\,\\mathrm {d}x.$ Note that a multiplication of $(\\ref {eq:NP2m_ddmod})$ by $\\Phi ^{\\prime }(w)$ and an integration by parts gives $E[m] = \\int _{\\Omega } S(x) \\Phi ^{\\prime }(w) \\mathrm {d}x.$ We have the following form for the $L^2$ -gradient flow of the system (REF ) – (REF ).", "Lemma 4 The formal $L^2$ -gradient flow of the functional (REF ) constrained by (REF ) is given by $ \\frac{\\partial m}{\\partial t} = e^{-\\mathrm {z}\\varphi } \\big [ 2 \\Phi ^{\\prime \\prime }(w) (\\nabla w \\cdot m) \\nabla w + (\\nabla \\sigma \\cdot m) \\nabla w + (\\nabla w \\cdot m) \\nabla \\sigma \\big ] \\qquad \\mathrm {in} \\, \\, (0, \\infty ) \\times \\Omega ,$ with $w$ given by (REF ) and $\\sigma $ a solution of $- \\nabla \\cdot \\big [ (r I + m \\otimes m) \\nabla \\sigma \\big ] + \\mathrm {z}\\nabla \\varphi \\cdot (rI + m \\otimes m) \\nabla \\sigma = \\Phi ^{\\prime \\prime \\prime }(w) \\nabla w \\cdot (rI + m \\otimes m) \\nabla w $ subject to the homogeneous Dirichlet boundary condition $\\sigma = 0$ on $\\partial \\Omega $ .", "We only sketch the proof here, following the lines of the proof of Lemma REF , where we substitute for $u^0= e^{-\\mathrm {z}\\varphi } w^0$ , $u^1= e^{-\\mathrm {z}\\varphi } w^1$ , $D^0 = rI + m^0 \\otimes m^0$ and $D^1 = m^0 \\otimes m^1 + m^1 \\otimes m^0$ .", "Combining (REF ), (REF ) and (REF ) and setting $\\sigma $ as a solution of $- \\nabla \\cdot \\big [ e^{-\\mathrm {z}\\varphi } (rI + m^0 \\otimes m^0) \\nabla \\sigma \\big ] = \\Phi ^{\\prime \\prime \\prime }(w^0) \\nabla w^0 \\cdot (rI + m^0 \\otimes m^0) \\nabla w^0,$ we get $\\frac{\\delta E[m^0]}{\\delta m}(m^1) &=& - \\int _{\\Omega } e^{- \\mathrm {z}\\varphi } \\nabla \\Phi ^{\\prime }(w^0) \\cdot (m^0 \\otimes m^1 + m^1 \\otimes m^0) \\nabla w^0 \\mathrm {d}x\\\\&-& \\int _{\\Omega } e^{-\\mathrm {z}\\varphi } \\nabla \\sigma \\cdot (m^0 \\otimes m^1 + m^1 \\otimes m^0) \\nabla w^0 \\mathrm {d}x\\\\&=& - \\int _{\\Omega } m^1 \\cdot e^{-\\mathrm {z}\\varphi } \\big [ 2 \\Phi ^{\\prime \\prime }(w^0) ( \\nabla w^0 \\cdot m^0) \\nabla w^0 + (\\nabla \\sigma \\cdot m) \\nabla w + (\\nabla w \\cdot m) \\nabla \\sigma \\big ] \\mathrm {d}x.$ Let us now examine the (non)convexity of the functional (REF ) with $\\Phi (w)=w^2/2$ .", "Lemma 5 Denote $\\,\\mathrm {d}\\zeta := e^{-\\mathrm {z}\\varphi } \\mathrm {d}x$ .", "The second-order variation of the energy $ E[m] := \\int _\\Omega r|\\nabla w|^2 + |m\\cdot \\nabla w|^2 \\,\\,\\mathrm {d}\\zeta $ in direction $m^1\\in H^1_0(\\Omega )$ , where $w=w[m]$ is the solution of (REF ), reads $\\frac{\\delta ^2 E[m^0]}{\\delta m^2}(m^1, m^1) = 2 \\int _\\Omega r \\left| \\nabla \\frac{\\delta w[m^0]}{\\delta m}(m^1) \\right|^2+ \\left| m^0\\cdot \\nabla \\frac{\\delta w[m^0]}{\\delta m}(m^1) 
\\right|^2- \\left| m^1\\cdot \\nabla w \\right|^2 \\,\\,\\mathrm {d}\\zeta .$ Using $w$ as test function in the weak formulation of (REF ) gives $E[m] = \\int _\\Omega S w \\,\\mathrm {d}x,$ so that $\\frac{\\delta ^2 E[m^0]}{\\delta m^2}(m^1, m^1) = \\int _\\Omega S \\frac{\\delta ^2 w[m^0]}{\\delta m^2}(m^1, m^1).$ Let us denote $w^0 := w[m], \\qquad w^1 := \\frac{\\delta w[m^0]}{\\delta m}(m^1), \\qquad w^2 := \\frac{\\delta ^2 w[m^0]}{\\delta m^2}(m^1,m^1).$ We use $w^1$ as a test function in the weak formulation of (REF ) and calculate the first-order variation in direction $m^1 \\in H^1_0(\\Omega )$ , which leads to $\\nonumber \\int _\\Omega S w^2 \\,\\mathrm {d}x&=&\\int _\\Omega r \\left| \\nabla w^1 \\right|^2 + r \\nabla w^0 \\cdot \\nabla w^2 \\,\\,\\mathrm {d}\\zeta \\\\&+& \\int _\\Omega (m^1\\cdot \\nabla w^0)\\left( m^0 \\cdot \\nabla w^1 \\right)+ \\left| m^0\\cdot \\nabla w^1 \\right|^2 \\\\&& \\quad + (m^0\\cdot \\nabla w^0)\\left( m^1 \\cdot \\nabla w^1 \\right)+ (m^0\\cdot \\nabla w^0)\\left( m^0\\cdot \\nabla w^2 \\right) \\,\\,\\mathrm {d}\\zeta .\\nonumber $ The first-order variation of the weak formulation of (REF ) with test function $\\xi \\in H^1_0(\\Omega )$ reads $ \\int _\\Omega r \\nabla w^1\\cdot \\nabla \\xi &+& (m^1\\cdot \\nabla w^0)(m^0\\cdot \\nabla \\xi ) + (m^0\\cdot \\nabla w^1)(m^0\\cdot \\nabla \\xi ) \\\\ &+& (m^0\\cdot \\nabla w^0)(m^1\\cdot \\nabla \\xi ) \\,\\,\\mathrm {d}\\zeta = 0,$ and setting $\\xi :=w^1$ gives $ \\int _\\Omega r |\\nabla w^1|^2 + (m^1\\cdot \\nabla w^0)(m^0\\cdot \\nabla w^1) + \\left|m^0\\cdot \\nabla w^1 \\right|^2 + (m^0\\cdot \\nabla w^0)(m^1\\cdot \\nabla w^1) \\,\\,\\mathrm {d}\\zeta = 0.$ Inserting into (REF ) gives $\\int _\\Omega S w^2 \\,\\mathrm {d}x=\\int _\\Omega r \\nabla w^0 \\cdot \\nabla w^2 + (m^0\\cdot \\nabla w^0)\\left( m^0\\cdot \\nabla w^2 \\right) \\,\\,\\mathrm {d}\\zeta .$ Now we again take a variation of (REF ) in direction $m^1$ and use $\\xi :=w^0$ as the test function, $\\int _\\Omega r \\nabla w^2\\cdot \\nabla w^0 + (m^0\\cdot \\nabla w^0)\\left( m^0\\cdot \\nabla w^2 \\right) \\,\\,\\mathrm {d}\\zeta = \\\\= - 2 \\int _\\Omega (m^1\\cdot \\nabla w^1)(m^0\\cdot \\nabla w^0) + \\left| m^1\\cdot \\nabla w^0 \\right|^2 + (m^0\\cdot \\nabla w^1)(m^1\\cdot \\nabla w^0) \\,\\,\\mathrm {d}\\zeta .$ Consequently, $\\int _\\Omega S w^2 \\,\\mathrm {d}x= - 2 \\int _\\Omega (m^1\\cdot \\nabla w^1)(m^0\\cdot \\nabla w^0) + \\left| m^1\\cdot \\nabla w^0 \\right|^2 + (m^0\\cdot \\nabla w^1)(m^1\\cdot \\nabla w^0) \\,\\,\\mathrm {d}\\zeta .$ Using (REF ), we finally arrive at $\\int _\\Omega S w^2 \\,\\mathrm {d}x= 2 \\int _\\Omega r \\left| \\nabla w^1 \\right|^2 + \\left| m^0\\cdot \\nabla w^1 \\right|^2 - \\left| m^1\\cdot \\nabla w^0 \\right|^2 \\,\\,\\mathrm {d}\\zeta .$ To gain a better insight into the convexity properties of the functional (REF ), let us recall the spatially one-dimensional case considered in [1].", "We set $\\Omega :=(0,1)$ and, for simplicity, $r(x) :\\equiv 1$ and $\\varphi := 0$ .", "Moreover, to expedite the calculation, we impose the mixed boundary conditions for $w$ , $\\frac{\\partial w}{\\partial x}(0) = w(1) = 0$ .", "Then an integration of the Poisson equation (REF ) gives $\\frac{\\partial w^0}{\\partial x}(x) = - \\frac{B(x)}{1+m^2},$ with $B(x) := \\int _0^x S(\\xi ) \\,\\mathrm {d}\\xi $ .", "The Frechet derivative in direction $m^1$ reads then $\\frac{\\partial w^1}{\\partial x} = \\frac{\\partial }{\\partial x} \\frac{\\delta w[m^0]}{\\delta m}(m^1) = 
\\frac{2Bm^0}{(1+(m^0)^2)^2} m^1.$ Consequently, we have $\\frac{\\delta ^2 E[m^0]}{\\delta m^2}(m^1, m^1)&=& 2 \\int _\\Omega \\left| \\frac{\\partial w^1}{\\partial x} \\right|^2 + \\left| m^0 \\frac{\\partial w^1}{\\partial x} \\right|^2 - \\left| m^1 \\frac{\\partial w^0}{\\partial x} \\right|^2 \\\\&=& 2 \\int _\\Omega \\frac{(m^1)^2 B^2}{(1+(m^0)^2)^3} \\left( 3(m^0)^2 - 1 \\right) \\, \\mathrm {d}x.$ Clearly, the sign of the second-order variation of $E[m^0]$ in any direction $m^1$ depends on $m^0$ , i.e., if $|m^0| \\ge 1/\\sqrt{3}$ then it is non-negative; otherwise it is negative.", "Therefore, $E$ is not convex on $L^2(\\Omega )$ ."
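As a quick numerical illustration of this sign condition (not part of the original analysis), the second-order variation can be evaluated directly in the one-dimensional setting. The sketch below assumes a constant source $S \\equiv 1$ (so that $B(x) = x$), $r \\equiv 1$, $\\varphi \\equiv 0$, and a constant perturbation $m^1 \\equiv 1$; it compares a finite-difference approximation of the second variation of $E[m] = \\int _0^1 B(x)^2/(1+m(x)^2) \\,\\mathrm {d}x$ with the analytical integrand, and the sign flip at $|m^0| = 1/\\sqrt{3}$ is clearly visible.

```python
import numpy as np

# Minimal numerical sketch (assumptions: S = 1, r = 1, phi = 0, Omega = (0, 1),
# constant m0 and m1 = 1). With w'(x) = -B(x) / (1 + m^2) and B(x) = x, the
# energy reduces to E[m] = integral_0^1 B(x)^2 / (1 + m^2) dx.

def energy(m, n=20001):
    x = np.linspace(0.0, 1.0, n)
    B = x                                   # B(x) = integral_0^x S(s) ds = x for S = 1
    return np.trapz(B**2 / (1.0 + m**2), x)

def second_variation_fd(m0, eps=1e-4):
    # central finite difference of eps -> E[m0 + eps * m1] with m1 = 1
    return (energy(m0 + eps) - 2.0 * energy(m0) + energy(m0 - eps)) / eps**2

def second_variation_exact(m0, n=20001):
    # analytical formula derived above: 2 * B^2 * (3 m0^2 - 1) / (1 + m0^2)^3
    x = np.linspace(0.0, 1.0, n)
    return np.trapz(2.0 * x**2 * (3.0 * m0**2 - 1.0) / (1.0 + m0**2)**3, x)

for m0 in (0.0, 0.3, 1.0 / np.sqrt(3.0), 0.8, 2.0):
    print(f"m0 = {m0:5.3f}: finite difference = {second_variation_fd(m0):+.6f}, "
          f"exact = {second_variation_exact(m0):+.6f}")
# The second variation is negative for |m0| < 1/sqrt(3) and non-negative otherwise,
# confirming that E is not convex on L^2(0, 1).
```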
], [ "Poisson-Nernst-Planck model", "In this section we consider the convection-diffusion equation (REF ) for an ion charge density $u$ with drift induced by the electrostatic field of the charged particles.", "Consequently, the electric potential $\\varphi =\\varphi (t,x)$ is a solution of the Poisson equation $ -\\Delta \\varphi = \\mathrm {z}u \\qquad \\mbox{in } \\Omega ,$ where $\\mathrm {z}\\in \\mathbb {R}$ is the particle charge.", "We prescribe homogeneous Dirichlet boundary condition for $\\varphi $ , $ \\varphi = 0 \\qquad \\mbox{on } (0,T] \\times \\partial \\Omega .$ As argued in, e.g., [32], the entropy generator $\\Phi =\\Phi (u)$ for the Poisson-Nernst-Planck system (REF ), (REF ) is given by $\\Phi (u) = u (\\ln u -1)$ , and the Helmholtz free energy takes the form (REF ).", "Moreover, observe that we have the equilibrium state $c=1$ , i.e., $\\Phi ^{\\prime }(1) = 0$ .", "Introducing the quasi-Fermi energy level $\\mu $ defined by (REF ), the parabolic version of the Poisson-Nernst-Planck system reads: $\\frac{\\partial u}{\\partial t} - \\nabla \\cdot ( u D \\nabla \\mu ) &=& 0 \\qquad \\mbox{in } (0, \\infty ) \\times \\Omega ,\\\\- \\Delta \\varphi &=& \\mathrm {z}u \\qquad \\mbox{in } (0, \\infty ) \\times \\Omega .$ It is a known result, see, e.g., [8], that the loss of the Helmholtz free energy (REF ) along the solutions of the Poisson-Nernst-Planck system (REF )–() is given by the functional $\\mathcal {E}[D] = \\int _{\\Omega } u \\nabla \\mu \\cdot D \\nabla \\mu \\, \\mathrm {d}x.$ For the convenience of the reader, we detail the calculation here.", "Lemma 6 We have, along the solutions of (REF )–(), $\\frac{d}{dt} \\mathcal {H}(u, \\varphi ) = - \\mathcal {E}[D],$ where $\\mathcal {H}(u, \\varphi )$ is the Helmholtz free energy (REF ) and $\\mathcal {E}[D]$ is given by (REF ).", "We calculate $\\frac{d}{dt} \\mathcal {H}(u, \\varphi ) &=& \\int _{\\Omega } \\ln (u) \\frac{\\partial u}{\\partial t} + \\nabla \\varphi \\cdot \\nabla \\frac{\\partial \\varphi }{\\partial t} \\,\\mathrm {d}x\\\\&=& \\int _{\\Omega } \\ln (u)\\frac{\\partial u}{\\partial t} - \\varphi \\Delta \\frac{\\partial \\varphi }{\\partial t}\\, \\mathrm {d}x\\\\&=& \\int _{\\Omega } \\left( \\ln (u) + \\mathrm {z}\\varphi \\right) \\frac{\\partial u}{\\partial t} \\, \\mathrm {d}x,$ where we used () and integrated by parts.", "Substituting for $\\frac{\\partial u}{\\partial t}$ from (REF ) and integrating by parts again gives $\\frac{d}{dt} \\mathcal {H}(u, \\varphi ) &=& - \\int _{\\Omega } u \\nabla \\left( \\ln (u) + \\mathrm {z}\\varphi \\right)\\cdot D\\nabla \\mu \\, \\mathrm {d}x\\\\&=& - \\int _{\\Omega } \\left( \\nabla u + \\mathrm {z}u \\nabla \\varphi \\right) \\cdot D\\nabla \\mu \\, \\mathrm {d}x\\\\&=& - \\int
_{\\Omega } u \\nabla \\mu \\cdot D \\nabla \\mu \\, \\mathrm {d}x.$ Note that the boundary terms in the partial integration steps vanish due to the homogeneous boundary condition (REF ) for $\\varphi $ .", "We now derive the $L^2$ -gradient flow of the loss functional (REF ) coupled to the stationary Poisson-Nernst-Planck system formed by (REF ) and $- \\nabla \\cdot (u D \\nabla \\mu ) = S \\qquad &\\mathrm {in} \\, \\, \\Omega ,$ subject to the boundary conditions $u = 1, \\quad \\varphi = 0, \\qquad \\mathrm {on} \\, \\, \\partial \\Omega .$ Note that (REF ) implies $\\mu = 0$ on $\\partial \\Omega $ .", "Consequently, multiplication of (REF ) by $\\mu $ and integration by parts gives $\\mathcal {E}[D] = \\int _{\\Omega } u \\nabla \\mu \\cdot D \\nabla \\mu \\mathrm {d}x= \\int _{\\Omega } S(x) \\mu \\, \\mathrm {d}x.$ Lemma 7 The formal $L^2$ -gradient flow of the functional (REF ) constrained by (REF )–(REF ) is given by $\\frac{\\partial D}{\\partial t} = u \\nabla \\mu \\otimes \\nabla \\mu + u \\frac{\\nabla \\mu \\otimes \\nabla \\sigma + \\nabla \\sigma \\otimes \\nabla \\mu }{2} \\qquad &\\mathrm {in} \\, \\, (0, \\infty ) \\times \\Omega ,$ coupled to the system for the auxiliary quantities $\\sigma =\\sigma (t,x)$ and $\\eta =\\eta (t,x)$ , $- \\nabla \\cdot (D \\nabla \\sigma ) + \\mathrm {z}\\nabla \\varphi \\cdot D \\nabla \\sigma - \\mathrm {z}^2 \\eta = \\nabla \\mu \\cdot D \\nabla \\mu \\qquad &\\mathrm {in} \\, \\, (0, \\infty ) \\times \\Omega , \\\\- \\Delta \\eta = \\nabla \\cdot (u D \\nabla \\sigma ) \\qquad &\\mathrm {in} \\, \\, (0, \\infty ) \\times \\Omega $ subject to the boundary conditions $\\sigma = 0, \\quad \\eta =0 \\qquad \\mbox{on } \\partial \\Omega .", "$ Let us expand $D = D^0 + \\varepsilon D^1 + O(\\varepsilon ^2)$ , where $D^0$ is a symmetric positive definite tensor and $D^1$ is symmetric.", "Similarly, we expand the other relevant quantities $u$ , $\\varphi $ and $\\mu $ in terms of $\\varepsilon >0$ .", "With (REF ) we have $\\mu ^1 = \\frac{u^1}{u^0} + \\mathrm {z}\\, \\varphi ^1$ .", "Collecting the zero-order terms in (REF ) and (REF ), we have $- \\nabla \\cdot ( u^0 D^0 \\nabla \\mu ^0) &=& S, \\\\- \\Delta \\varphi ^0 &=& \\mathrm {z}u^0.$ subject to the boundary conditions $u^0=1$ and $\\varphi ^0=0$ on $\\partial \\Omega $ .", "At first order in $\\varepsilon $ we obtain the system $- \\nabla \\cdot ( u^0 D^1 \\nabla \\mu ^0 + u^1 D^0 \\nabla \\mu ^0 + u^0 D^0 \\nabla \\mu ^1) &=& 0 \\\\- \\Delta \\varphi ^1 &=& \\mathrm {z}u^1,$ subject to $u^1 = 0$ and $\\varphi ^1 = 0$ on $\\partial \\Omega $ .", "Note that (REF ) can also be rewritten in the form $ - \\nabla \\cdot \\big [ D^0 ( \\nabla u^1 + \\mathrm {z}u^1 \\nabla \\varphi ^0 + \\mathrm {z}u^0 \\nabla \\varphi ^1) + D^1 ( \\nabla u^0 + \\mathrm {z}u^0 \\varphi ^0 ) \\big ] = 0 \\qquad \\mathrm {in} \\, \\, \\Omega .$ Next, we calculate the first variation of $\\mathcal {E}$ given by (REF ) in the direction $D^1$ , $\\frac{\\delta \\mathcal {E}[D^0]}{\\delta D} (D^1) &=& \\frac{d}{d\\varepsilon } \\int _{\\Omega } S(x) (\\mu ^0 + \\varepsilon \\mu ^1) \\, dx \\Bigr |_{\\varepsilon = 0} \\\\&=& \\int _{\\Omega } - \\nabla \\cdot (u^0 D^0 \\nabla \\mu ^0) \\mu ^1 \\, \\mathrm {d}x\\\\&=& \\int _{\\Omega } u^0 \\nabla \\mu ^1 \\cdot D^0 \\nabla \\mu ^0 \\, \\mathrm {d}x.$ Multiplication of the first-order system (REF ) by $\\mu ^0$ and integration by parts, recalling the symmetry of $D$ , leads to $\\int _{\\Omega } u^0 \\nabla \\mu ^1 \\cdot D^0 \\nabla \\mu ^0 \\mathrm {d}x= - 
\\int _{\\Omega } u^0 \\nabla \\mu ^0 \\cdot D^1 \\nabla \\mu ^0 + u^1 \\nabla \\mu ^0 \\cdot D^0 \\nabla \\mu ^0 \\mathrm {d}x.$ Substitution of the above identity into (REF ) gives $ \\frac{\\delta \\mathcal {E}[D^0]}{\\delta D} (D^1) = - \\int _{\\Omega } u^0 \\nabla \\mu ^0 \\cdot D^1 \\nabla \\mu ^0 + u^1 \\nabla \\mu ^0 \\cdot D^0 \\nabla \\mu ^0 \\mathrm {d}x.$ To evaluate the second term, we need to find a mapping between $u^1$ and $D^1$ .", "For this sake, we multiply (REF ) by a function $\\sigma $ vanishing at the boundary $\\partial \\Omega $ and integrate by parts, $\\int _{\\Omega } \\nabla \\sigma \\cdot \\big [D^0 (\\nabla u^1 + \\mathrm {z}u^1 \\nabla \\varphi ^0 + \\mathrm {z}u^0 \\nabla \\varphi ^1 ) \\big ] \\, \\mathrm {d}x&=&- \\int _{\\Omega } \\nabla \\sigma \\cdot D^1 (\\nabla u^0 + \\mathrm {z}u^0 \\nabla \\varphi ^0) \\, \\mathrm {d}x\\\\&=& - \\int _{\\Omega } u^0 \\nabla \\sigma \\cdot D^1 \\nabla \\mu ^0 \\, \\mathrm {d}x.$ After further integration by parts on the left-hand side, we obtain $ - \\int _{\\Omega } u^1 \\big [ \\nabla \\cdot (D^0 \\nabla \\sigma ) - \\mathrm {z}\\nabla \\sigma \\cdot D^0 \\nabla \\varphi ^0 \\big ] + \\mathrm {z}\\varphi ^1 \\nabla \\cdot (u^0 D^0 \\nabla \\sigma ) \\mathrm {d}x= - \\int _{\\Omega } u^0 \\nabla \\sigma \\cdot D^1 \\nabla \\mu ^0 \\mathrm {d}x.$ With the Poisson equation () we rewrite the third term of the left-hand side as $\\int _{\\Omega } \\mathrm {z}\\varphi ^1 \\nabla \\cdot (u^0 D^0 \\nabla \\sigma ) \\,\\mathrm {d}x&=& - \\int _{\\Omega } \\mathrm {z}^2 \\Delta ^{-1} \\big [ u^1 \\nabla \\cdot (u^0 D^0 \\nabla \\sigma ) \\big ] \\,\\mathrm {d}x\\\\&=& - \\int _{\\Omega } \\mathrm {z}^2 u^1 \\Delta ^{-1} \\big [ \\nabla \\cdot (u^0 D^0 \\nabla \\sigma ) \\big ] \\, \\mathrm {d}x.$ Consequently, defining the function $\\psi =\\psi [\\sigma ]$ , $\\psi [\\sigma ] := - \\nabla \\cdot (D^0 \\nabla \\sigma ) + \\mathrm {z}\\nabla \\sigma \\cdot D^0 \\nabla \\varphi ^0 + \\mathrm {z}^2 \\Delta ^{-1} \\nabla \\cdot ( u^0 D^0 \\nabla \\sigma ),$ equation (REF ) becomes $\\int _{\\Omega } u^1 \\psi [\\sigma ] \\, \\mathrm {d}x= - \\int _{\\Omega } u^0 \\nabla \\sigma \\cdot D^1 \\nabla \\mu ^0 \\, \\mathrm {d}x.$ Hence, setting $\\psi [\\sigma ] = \\nabla \\mu ^0 \\cdot D^0 \\nabla \\mu ^0$ , we obtain $- \\int _{\\Omega } u^1 \\nabla \\mu ^0 \\cdot D^0 \\nabla \\mu ^0 \\,\\mathrm {d}x= - \\int _{\\Omega } u^0 \\nabla \\sigma \\cdot D^1 \\nabla \\mu ^0 \\,\\mathrm {d}x.$ Substitution of the above identity into (REF ) leads to $\\frac{\\delta \\mathcal {E}[D^0]}{\\delta D} (D^1) = - \\int _{\\Omega } u^0 \\nabla \\mu ^0 \\cdot D^1 \\nabla \\mu ^0 + u^0 \\nabla \\sigma \\cdot D^1 \\nabla \\mu ^0 \\,\\mathrm {d}x.$ Due to the symmetry of $D^1$ , we have $\\nabla \\sigma \\cdot D^1 \\nabla \\mu ^0 &=& \\frac{\\nabla \\sigma \\cdot D^1 \\nabla \\mu ^0 + \\nabla \\mu ^0 \\cdot D^1 \\nabla \\sigma }{2} \\\\&=& D^1 : \\frac{\\nabla \\sigma \\otimes \\nabla \\mu ^0 + \\nabla \\mu ^0 \\otimes \\nabla \\sigma }{2}.$ We thus finally arrive at $\\frac{\\delta \\mathcal {E}[D^0]}{\\delta D} (D^1) =- \\int _{\\Omega } D^1 : \\left[ u^0 \\nabla \\mu ^0 \\otimes \\nabla \\mu ^0 + u^0 \\frac{\\nabla \\sigma \\otimes \\nabla \\mu ^0 + \\nabla \\mu ^0 \\otimes \\nabla \\sigma }{2} \\right] \\,\\mathrm {d}x,$ which directly gives (REF ).", "Once again, we cannot guarantee the non-negativity of $D$ for every time $t>0$ from Equation (REF ), so we employ the ansatz (REF ).", "Therefore, we obtain the following: Lemma 8 The formal $L^2$ 
-gradient flow of the functional (REF ) constrained by (REF )–(REF ) is given by $\\frac{\\partial m}{\\partial t} = 2 u (m \\cdot \\nabla \\mu ) \\nabla \\mu + (m \\cdot \\nabla \\mu ) \\nabla \\sigma + (m \\cdot \\nabla \\sigma ) \\nabla \\mu \\qquad &\\mathrm {in} \\, \\, (0, \\infty ) \\times \\Omega ,$ coupled to the system for the auxiliary quantities $\\sigma =\\sigma (t,x)$ and $\\eta =\\eta (t,x)$ , $- \\nabla \\cdot \\big [ (rI + m \\otimes m) \\nabla \\sigma \\big ] + \\mathrm {z}\\nabla \\varphi \\cdot (rI + m \\otimes m) \\nabla \\sigma - \\mathrm {z}^2 \\eta = \\nabla \\mu \\cdot (rI + m \\otimes m) \\nabla \\mu , \\\\ - \\Delta \\eta = \\nabla \\cdot \\big [ u (rI + m \\otimes m) \\nabla \\sigma \\big ], $ in $(0, \\infty ) \\times \\Omega $ and subject to the boundary conditions $\\sigma = 0, \\quad \\eta =0 \\qquad \\mbox{on } \\partial \\Omega .", "$ See the proof of Lemma REF ." ] ]
2207.03542
[ [ "Quote Erat Demonstrandum: A Web Interface for Exploring the Quotebank\n Corpus" ], [ "Abstract The use of attributed quotes is the most direct and least filtered pathway of information propagation in news.", "Consequently, quotes play a central role in the conception, reception, and analysis of news stories.", "Since quotes provide a more direct window into a speaker's mind than regular reporting, they are a valuable resource for journalists and researchers alike.", "While substantial research efforts have been devoted to methods for the automated extraction of quotes from news and their attribution to speakers, few comprehensive corpora of attributed quotes from contemporary sources are available to the public.", "Here, we present an adaptive web interface for searching Quotebank, a massive collection of quotes from the news, which we make available at https://quotebank.dlab.tools." ], [ "Introduction", "Quotes of sources, politicians, athletes, or scientists play an important role in lending credibility to news articles [3].", "As news stories evolve, quotes are useful data for analyzing the spread of information through the news [10], for determining the source of news information [17], or in fact checking and credibility assessment [16], [2].", "Outside of journalistic applications, extracted quotes from the news are also valuable in social studies, for example through opinion mining from quotes [1].", "The substance of quotes lies not just in what is being said, but by whom the quote is uttered since the attribution to a speaker provides context to the words.", "As a result, the automated extraction and attribution of quotes from document corpora has been the subject of ongoing research over the years [14], [15], [8], [11].", "However, while methods for the extraction and attribution of quotes are developed continuously and benefit from recent advances in natural language processing, few of the resulting resources are available to end-users.", "To address this need, we report on the development of a user interface for searching Quotebank  [18], a massive corpus of quotes that we extracted from a decade of English news.", "Contributions.", "We provide an interface for (faceted) search in Quotebank, a Web-scale database of quotes from a decade of English news articles.", "Our tool is accessible as a website that is geared towards end-users who would otherwise be unable to use this resource, and it provides near-realtime query performance that enables an interactive exploration of the Quotebank corpus." 
], [ "Related Work", "Prior work that is related to Quotebank is quite sparse and can be grouped into two categories: corpora of quotes and system demonstrations that make use of or recommend quotes.", "Quote corpora.", "Available corpora include the PolNeAR corpus [9], the Penn Attribution Relation Corpus [13], the Speech, Thought, and Writing Presentation corpus [5], the Rich Quotation Annotations corpus [12], and the DirectQuote dataset [20].", "In contrast to Quotebank, all of the above corpora are available only as NLP resources, not as searchable repositories that can be used by laypeople.", "Furthermore, they are substantially smaller than Quotebank, which exceeds the number of quotes contained in these corpora by several orders of magnitude.", "System demonstrations.", "To the best of our knowledge, there are no other search interfaces for large-scale corpora of quotes from the news domain.", "Publicly accessible and searchable collections of quotes remain limited to manually curated repositories of mostly historical quotes, such as WikiQuote (https://www.wikiquote.org/). System demonstrations in the information retrieval community tend to only consider quotes indirectly and often in the wider contexts of credibility assessment, such as CredEye [16], or fact checking for news such as BeLink [2] and the work of Miranda et al. [7].", "Figure: Schematic overview of the Quotebank system architecture.", "During pre-processing, speaker candidates and quotes are annotated in the news article corpus, and quotes are attributed to speakers with Quobert.", "The attributed quote data is augmented by linking speakers to Wikidata and subsequently loaded into Elasticsearch, which is used for indexing and retrieval.", "Kibana and Logstash are used internally for monitoring and managing indices.", "The web server exposes the query API to handle user queries and directly interfaces with the cache to reduce the impact of complex queries on system load.", "The JavaScript UI translates user queries into requests to the web server (for details on the UI, see Figure ). The approach most closely related to ours is likely that of MacLaughlin et al., who propose a system for quote recommendation based on context by using a BERT architecture [6].", "However, the employed QUOTUS data set [10] is relatively small and the tool is neither available nor suitable for the exploration of a Web-scale news corpus that would be useful to an end-user."
], [ "Quote Corpus", "Quotebank is a dataset of 235 million unique, speaker-attributed quotes that were extracted from 196 million English news articles (127 million of them containing quotes) published between September 2008 and April 2020 [18].", "The data was extracted with the BERT-based architecture Quobert, which utilizes Quootstrap [15] for the extraction of training data.", "We make two stages of the data available in our search interface: article-level and quote-level.", "Article-level data.", "In the article-level version of the data, articles are the central unit.", "The data contains individual quote annotations for all articles in the news data set, each attributed with a ranked list of the most likely speaker candidates in the article (or no speaker if no suitable candidate could be attributed by Quobert).", "Additionally, the data contains context windows surrounding each of the quote mentions in the news article text.", "Quote-level data.", "The quote-level data contains an aggregated view of the quotes across all articles.", "To generate this stage of the data, all individual occurrences of quotes are first canonicalized and quotes with matching canonical form are then aggregated into a single data point.", "Speaker candidates for these quotes are merged by weighted consensus, i.e., by summing over the local candidate probabilities in all individual occurrences to derive the most likely global speaker for a given quote.", "Speaker data.", "To improve the querying capability of Quotebank and support faceted search, we further enrich the quote corpus with speaker information extracted from Wikidata [19], which is one of the largest publicly available knowledge graphs (about 97M entities).", "Specifically, we extract and add data concerning the occupation, nationality, and gender of the speakers in Quotebank.", "A full JSON dump of the raw data prior to the addition of speaker data from Wikidata is also available for download (https://doi.org/10.5281/zenodo.4277311). For details on the generation, we refer the reader to Vaucher et al. [18]." ], [ "The Quotebank system", "The complete architecture of the Quotebank system is shown in Figure REF .", "It consists of three core components: (1) a database system housing the core storage and querying capabilities; (2) a web server providing a layer of abstraction and API on top of the database system; and (3) a user interface responsible for delivering the results to the user through an interactive and flexible visualization.", "In the following, we provide a description of these components and highlight their key functionality."
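Before turning to the individual components, it helps to fix an informal picture of the records they operate on. The following is an illustrative sketch of a single aggregated, speaker-attributed quote record after the Wikidata enrichment described above; the field names and values are hypothetical and are not the exact schema of the released Quotebank data.

```python
# Illustrative only: field names and values are hypothetical and do not
# reproduce the exact schema of the released Quotebank data.
example_quote_record = {
    "quote_id": "2019-10-07-000123",           # assumed identifier format
    "quotation": "We will make this work.",    # canonicalized quote text
    "speaker": "Jane Doe",                     # most likely global speaker
    "speaker_qid": "Q000000",                  # hypothetical Wikidata identifier
    "speaker_candidates": [                    # ranked by aggregated probability
        {"name": "Jane Doe", "qid": "Q000000", "prob": 0.91},
        {"name": "John Roe", "qid": "Q000001", "prob": 0.06},
    ],
    "num_occurrences": 42,                     # articles in which the quote appears
    "date": "2019-10-07",
    "urls": ["https://example.com/news/article-1"],
    # speaker metadata added from Wikidata to support faceted search
    "speaker_meta": {
        "occupation": ["politician"],
        "nationality": ["Switzerland"],
        "gender": "female",
    },
}
```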
], [ "Database system", "The Quotebank database is built using Elasticsearch [4], which is one of the most popular distributed, scalable, and open-source search and analytics engines supporting full-text indexing and querying, thereby serving our primary goal of efficiently searching through terabytes of quotes and news content.", "Our database consists of three indices: (1) article, (2) quote, and (3) speaker, which store and index the article-level, quote-level, and speaker data, respectively.", "Naïvely indexing the Quotebank corpus using Elasticsearch would require more than 2TB of disk space, while the storage footprint of our optimized database is 4 times smaller and consumes only about 500GB.", "We employ the following fundamental tried-and-true design decisions and optimizations to reduce the storage footprint and improve the query efficiency.", "Database normalization.", "We follow standard principles such as removing data redundancy, unless redundancy entails significant query speed-ups.", "Choice of data types.", "We use data types with minimal storage requirements, such as the integer type for fields supporting range queries (e.g., number of occurrences of a quote), or keyword type for fields used for creating filters (e.g., speaker nationality).", "Querying-indexing trade-off.", "We push the complexity to the indexing phase by aggregating all text type fields into a single field, which results in faster querying (at the cost of slower indexing) in comparison to searching multiple fields at query time.", "Figure: The Quotebank user interface.", "(a) The search panel supports quote-level and article-level searches, which can be faceted by speaker attributes.", "(b) The search engine result page (SERP) displays retrieved quotes in the selected time window alongside speaker candidates and the URLs of articles from which the quotes are sourced.", "A histogram shows the distribution of quotes in the result over time. The database system also houses Kibana and Logstash, which are primarily used for internal monitoring purposes.", "While Kibana is used for monitoring the status of indices and analyzing database performance statistics, Logstash handles storage management of the logs produced by Elasticsearch." ], [ "Web server", "The web server provides a layer of abstraction on top of the database system and exposes the API endpoints that enable the communication between the user interface and the database.", "It is responsible for composing, validating, and submitting user queries to the database as well as parsing and returning the retrieved result from the database to the user.", "In a nutshell, the web server prevents the users from communicating directly and in an unrestrained manner with the database, thereby providing an added level of security." ], [ "User interface", "The user interface (cf. Figure REF ) is an adaptive web-based search platform built with React.js (https://reactjs.org/), which allows users to interactively explore the Quotebank corpus.", "It consists of two main views: (1) the search panel, and (2) the search engine result page.", "Search panel.", "Figure REF portrays the interface that is exposed to the users for querying the Quotebank corpus.", "In the following, we describe the key features of the search panel." ], [ "Corpus type", "While the system supports queries on the quote-level data (called quotation-centric in Figure REF ) by default, it also provides the user with the option to query the article-level data."
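The index-level design decisions listed for the database system above can be made concrete with a small, hypothetical Elasticsearch mapping for the quote index. The field names are assumptions rather than the system's actual schema, but the pattern mirrors the described optimizations: keyword and integer types for filterable fields, and a single aggregated full-text field populated via copy_to, so that text queries only have to search one field.

```python
# Hypothetical mapping sketch for the quote index (field names are assumptions).
quote_index_mapping = {
    "mappings": {
        "properties": {
            "quotation":       {"type": "text", "copy_to": "all_text"},
            "speaker":         {"type": "text", "copy_to": "all_text"},
            "speaker_qid":     {"type": "keyword"},   # filter fields use keyword type
            "nationality":     {"type": "keyword"},
            "occupation":      {"type": "keyword"},
            "gender":          {"type": "keyword"},
            "num_occurrences": {"type": "integer"},   # supports cheap range queries
            "date":            {"type": "date"},
            "all_text":        {"type": "text"},      # single aggregated search field
        }
    }
}

# For example, with the official Python client (the index name is an assumption):
# from elasticsearch import Elasticsearch
# es = Elasticsearch("http://localhost:9200")
# es.indices.create(index="quote", body=quote_index_mapping)
```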
], [ "Query type", "The most straight-forward way to search the Quotebank corpus is via text query, which supports fuzzy matching by default.", "However, the user may also choose to utilize exact matching of the query text by enabling the checkbox option 'Enable exact match'.", "For the article-level data, which supports only text queries, users are able to query both the quotes and the context in which they occur in the original articles ('Search with context' checkbox).", "To enable search faceting, the interface supports a multitude of search filters for querying quote-level data (cf.", "Figure REF ), such as speaker name or nationality, minimum number of quote occurrences, etc., thereby providing additional refinements over and above the text query.", "By default, the time window is set to the full date range of the Quotebank corpus corresponding to September 1, 2008 and April 17, 2020, but the user may adapt this window by using the From and To fields, respectively.", "All other filter-related fields are not initialized with default values." ], [ "Auto-complete", "To assist users in applying search filters and increase search precision, auto-complete is implemented for each speaker-related field.", "Similar to most auto-complete implementations, users are only required to input the initial characters of a speaker's name in the corresponding field and may choose from the provided suggestions.", "The auto-complete function also provides a short description of the speaker (extracted from Wikidata), which is displayed alongside each suggestion to help users disambiguate and choose from the list of potential suggestions.", "Search engine result page (SERP).", "Figure REF portrays the SERP, which consists of three primary components: a summary section, a histogram of quote occurrences, and the main result blocks." ], [ "Summary section", "This section displays basic information about the retrieved result, such as the number of quotes or articles that were deemed relevant to the query, and the query time in seconds.", "To reduce the load on the client, the total number of results returned from the web server is capped at 1000, corresponding to the 1000 most-relevant results based on their content matching score with the user query.", "Additionally, this section provides a 'Share' button, which generates a shareable permalink to the user's query and copies it to the clipboard.", "A 'Save results' button enables the user to download the results either as a JSON or text file." ], [ "Histogram", "The histogram, implemented in D3.js,https://d3js.org displays the distribution of quotes or articles that match the user query in the specified time window.", "The histogram bin boundaries are automatically adjusted to improve its aesthetics while simultaneously keeping the resource utilization on the client side low.", "The main purpose of the histogram is to allow the user to effectively visualize the trend portrayed by results relevant to her query.", "While the returned results are capped at 1000, note that the histogram is based on all quotes or articles that match the user's query.", "Thus, the counts displayed in the histogram may exceed the number of returned documents." 
], [ "Result block", "The returned results are displayed in blocks, where each block contains either a single relevant quote or a relevant article that contains at least one matching quote.", "The font and layout of the result block is designed to simulate a real quote as one might encounter it in a newspaper.", "The URLs in each block are clickable links to the news articles in which the quote was originally published.", "For quote blocks, all possible speakers of a quote are annotated with their unique Wikidata identifiers, which are also clickable links pointing to the Wikidata page of the speaker.", "Long result blocks are collapsed to a fixed size, allowing the user to effectively browse the result summary and only expand specific blocks if desired.", "In the same vein, we also implement pagination by breaking down the result blocks into pages of at most 10 blocks each.", "The buttons Prev and Next can be used to navigate the pages.", "Lastly, we also enable the user to re-rank the returned results based on several pre-defined sorting criteria." ], [ "Demonstration", "As part of the demonstration, we encourage the reader to engage with three visual scenarios crafted to highlight key features of the interface, namely the exploration of (1) quote-level and (2) article-level data, and (3) a free-roam exploration on a device of the user's choice (e.g., tablet, phone, or laptop).", "These scenarios were designed with support from an EPFL journalist, who also helped in improving our system's usability." ], [ "Quote-level exploration", "In this scenario, the users may explore the Quotebank corpus to identify all the quotes from Donald Trump containing the text “great again” and that appeared in at least 500 distinct articles.", "They are further encouraged to re-rank the results in reverse chronological order and share the results with their colleagues by sending the permalink via e-mail or messenger service.", "User interaction.", "This scenario provides a demonstration of the 'Quotation-centric' (cf.", "Figure REF ) search panel.", "In addition to simply posing the text query “great again”, the user has the option of applying multiple filters.", "Specifically, the user should set the 'Minimum number of occurrences' to 500, and leverage the auto-complete functionality to set the input for 'Name of speaker' to “Donald Trump”.", "Lastly, the user may select the pre-defined filter 'Date (descending)' to re-rank the results in reverse chronological order.", "A permalink of the query is obtained by clicking on the 'Share' button on the results page.", "The retrieved results for this scenario can be accessed at this quote-level permalink." 
], [ "Article-level exploration", "In this scenario, the user is encouraged to explore the Quotebank corpus to identify all articles published on May 19, 2018 that contain the text “gdpr” in either the quotes or in their surrounding context.", "The user should also download the results as a text file and view its contents in a text editor.", "User interaction.", "This scenario provides a demonstration of the 'Article-centric' (cf.", "Figure REF ) search panel.", "In addition to posing the text query “gdpr”, the user may vary the time window by restricting it to a particular day, i.e., May 19, 2018.", "Moreover, the user analyzes the differences in the retrieved results by toggling the 'Search with context' checkbox.", "Lastly, to facilitate later exploration and analyses, the user may download the results corresponding to her query as a text file by clicking on the 'Save results' button on the results page.", "The retrieved results for this scenario can also be accessed at this article-level permalink." ], [ "Free-roam exploration", "While we provide a few example use-cases (described below) to bootstrap the exploration in this scenario, the reader is encouraged to conduct a free-roam exploration of Quotebank.", "[leftmargin=*, noitemsep] UC 1: Using Quotebank on their phone, users search quotes from female tennis players of Switzerland that appear in at least 100 different articles and share the results via messenger.", "UC 2: Using Quotebank on their tablet, users search quotes related to “science” by female journalists that appear in at least 100 different articles and share the results via e-mail.", "UC 3: Using 'Enable exact match' to search for a quote that the user remembers.", "For instance, the famous quote: “You have to dream before your dreams can come true” to recall the name of the speaker or identify its popularity over the past decade." ], [ "Conclusion", "In this paper, we introduced a search interface that makes the Quotebank dataset more accessible to end users who lack the computational background to work directly with the raw data.", "To support faceted search based on speaker attributes, we further enriched the quote corpus with information from Wikidata.", "Our intention is to enable the general public to explore the Quotebank data, and we are looking forward to seeing the findings and insights that are gained in the exploration of the data by journalists, social scientist, and laypeople.", "Future work.", "In the future, we plan on analyzing the search logs and investigate users' search patterns to determine exploration strategies and use-cases that can aid us in further refining and augmenting the Quotebank corpus.", "We are also working on disambiguation techniques for speaker candidates during quote attribution and are preparing a stand-alone end-to-end pipeline for attributed quote extraction.", "Acknowledgments.", "We would like to thank Tanya Petersen for testing the interface and sharing her expert-user insights.", "We are also grateful to the members of EPFL DLAB and the students enrolled in the 2021 edition of the Applied Data Analysis (ADA) course for their feedback.", "This project was partly funded by the Swiss National Science Foundation (grant 200021_185043), the European Union (TAILOR, grant 952215), and the Microsoft Swiss Joint Research Center.", "We also gratefully acknowledge generous gifts from Facebook and Google." ] ]
2207.03592
[ [ "Demystifying the Adversarial Robustness of Random Transformation\n Defenses" ], [ "Abstract Neural networks' lack of robustness against attacks raises concerns in security-sensitive settings such as autonomous vehicles.", "While many countermeasures may look promising, only a few withstand rigorous evaluation.", "Defenses using random transformations (RT) have shown impressive results, particularly BaRT (Raff et al., 2019) on ImageNet.", "However, this type of defense has not been rigorously evaluated, leaving its robustness properties poorly understood.", "Their stochastic properties make evaluation more challenging and render many proposed attacks on deterministic models inapplicable.", "First, we show that the BPDA attack (Athalye et al., 2018a) used in BaRT's evaluation is ineffective and likely overestimates its robustness.", "We then attempt to construct the strongest possible RT defense through the informed selection of transformations and Bayesian optimization for tuning their parameters.", "Furthermore, we create the strongest possible attack to evaluate our RT defense.", "Our new attack vastly outperforms the baseline, reducing the accuracy by 83% compared to the 19% reduction by the commonly used EoT attack ($4.3\\times$ improvement).", "Our result indicates that the RT defense on the Imagenette dataset (a ten-class subset of ImageNet) is not robust against adversarial examples.", "Extending the study further, we use our new attack to adversarially train RT defense (called AdvRT), resulting in a large robustness gain.", "Code is available at https://github.com/wagner-group/demystify-random-transform." ], [ "Introduction", "Today, deep neural networks are widely deployed in safety-critical settings such as autonomous driving and cybersecurity.", "Despite their effectiveness at solving a wide-range of challenging problems, they are known to have a major vulnerability.", "Tiny crafted perturbations added to inputs (so called adversarial examples) can arbitrarily manipulate the outputs of these large models, posing a threat to the safety and privacy of the millions of people who rely on existing ML systems.", "The importance of this problem has drawn substantial attention, and yet the research community has not devised a concrete countermeasure.", "Adversarial training [32] has been the foremost approach for defending against adversarial examples.", "While adversarial training provides increased robustness, it results in a loss of accuracy on benign inputs.", "Recently, a promising line of defenses against adversarial examples has emerged.", "These defenses randomize either the model parameters or the inputs themselves [26], [22], [29], [45], [47], [3], [28], [9], [11].", "Introducing randomness into the model can be thought of as a form of smoothing that removes sinuous portions of the decision boundary where adversarial examples frequently lie [21].", "Other works attribute its success to the ensemble [16] or the “moving-target” [8] effect.", "Among these randomization approaches, [34] propose Barrage of Random Transforms (BaRT), a new defense which applies a large set of random image transformations to classifier inputs.", "They report a $24\\times $ increase in robust accuracy over previously proposed defenses.", "Despite these promising results, researchers still lack a clear understanding of how to properly evaluate random defenses.", "This is concerning as a defense can falsely appear more robust than it actually is when evaluated using sub-optimal attacks [1], [41].", 
"Therefore, in this work, we improve existing attacks on randomized defenses, and use them to rigorously evaluate BaRT and more generally, random transformation (RT) defenses.", "We find that sub-optimal attacks have led to an overly optimistic view of these RT defenses.", "Notably, we show that even our best RT defense is much less secure than previously thought, formulating a new attack that reduces its security (from 70% adversarial accuracy found by the baseline attack to only 6% on Imagenette).", "We also take the investigation further and combine RT defense with adversarial training.", "Nevertheless, this turns out to be ineffective as the attack is not sufficiently strong and only generates weak adversarial examples for the model to train with.", "The outcomes appear more promising for CIFAR-10, but it still lacks behind deterministic defense such as [32] and [46].", "We believe that stronger and more efficient attacks on RT-based models will be necessary not only for accurate evaluation of the stochastic defenses but also for improving the effectiveness of adversarial training for such models.", "To summarize, we make the following contributions: [noitemsep] We show that non-differentiable transforms impede optimization during an attack and even an adaptive technique for circumventing non-differentiability (i.e., BPDA [1]) is not sufficiently effective.", "This reveals that existing RT defenses are likely non-robust.", "To this end, we suggest that an RT defense should only use differentiable transformations for reliable evaluations and compatibility with adversarial training.", "We propose a new state-of-the-art attack for RT defense that improves over EoT [2] in terms of both the loss function and the optimizer.", "We explain the success of our attack through the variance of the gradients.", "Improve the RT scheme by using Bayesian optimization for hyperparameter tuning and combining it with adversarial training which uses our new attack method instead of the baseline EoT." ], [ "Adversarial Examples", "Adversarial examples are carefully perturbed inputs designed to fool a machine learning model [39], [6], [15].", "An adversarial perturbation $\\delta $ is typically constrained to be within some $\\ell _p$ -norm ball with a radius of $\\epsilon $ .", "The $\\ell _p$ -norm ball is a proxy to the “imperceptibility” of $\\delta $ and can be thought of as the adversary's budget.", "In this work, we primarily use $p = \\infty $ and only consider adaptive white-box adversary.", "Finding the worst-case perturbation $\\delta ^*$ requires solving the following optimization problem: $ x_{\\text{adv}} = x + \\delta ^* = x + \\operatornamewithlimits{arg\\,max}_{\\delta : \\left\\Vert \\delta \\right\\Vert _p \\le \\epsilon } ~L(x + \\delta , y)$ where $L:\\mathbb {R}^d \\times \\mathbb {R}^C \\rightarrow \\mathbb {R}$ is the loss function of the target model which, in our case, is a classifier which makes predictions among $C$ classes.", "Projected gradient descent (PGD) is often used to solve the optimization problem in Eqn.", "(REF )." 
], [ "Randomization Defenses", "A number of recent papers have proposed defenses against adversarial examples which utilize inference-time randomization.", "One common approach is to sample weights of the network from some probability distribution [28], [22], [29], [3].", "In this paper, we instead focus on defenses that apply random transforms to the input [34], [45], [47], [9], many of which claim to achieve state-of-the-art robustness.", "Unlike prior evaluations, we test these defenses using a wide range of white-box attacks as well as a novel stronger attack.", "A key issue when evaluating these schemes is that PGD attacks require gradients through the entire model pipeline, but many defenses use non-differentiable transforms.", "As we show later, this can cause evaluation results to be misleading.", "Various random transformation defenses have been proposed.", "[45] randomly resize and pad the images.", "While this defense ranked second in the NeurIPS 2017 adversarial robustness competition, they did not consider in their evaluation adaptive attacks where the adversary has full knowledge of the transformations.", "[47] add Gaussian noise to the input and then quantize it.", "Their defense is reported to outperform all of the NeurIPS 2017 submissions.", "The adaptive attack used to evaluate their defense approximates the gradient of the transformations which could lead to a sub-optimal attack.", "In this paper, we use the exact gradients for all transforms when available.", "More recently, [34] claims to achieve a state-of-the-art robust accuracy $24\\times $ better than adversarial training using a random transformation defense known as Barrage of Random Transforms (BaRT).", "BaRT involves randomly sampling a large set of image transformations and applying them to the input in random order.", "Because many transformations are non-differentiable, BaRT evaluates their scheme using PGD attack that approximates the gradients of the transformations.", "In Section , we show that this approximation is ineffective, giving overly optimistic impression of BaRT's robustness, and we re-evaluate BaRT using a stronger attack which utilizes exact transform gradients.", "Figure: An illustration of a random transformation (RT) defense against adversarial examples.", "Transformations of different types and parameters are sampled and applied sequentially to multiple copies of the input.", "All of the transformed inputs are then passed to a single neural network, and the outputs are combined to make the final prediction." ], [ "Random Transformation Defense", "Here, we introduce notations and the design of our RT defense, formalizing the BaRT defense." 
], [ "Decision Rules", "RT repeatedly applies a randomly chosen transform to the input, uses a neural network to make a prediction, and then averages the softmax prediction scores: $ g(x) \\operatorname{\\mathbb {E}}_{\\theta \\sim p(\\theta )} \\left[ \\sigma \\left( f \\left( t(x;\\theta ) \\right) \\right) \\right]$ where $\\sigma (\\cdot )$ is the softmax function, $f:\\operatorname{\\mathbb {R}}^d\\rightarrow \\operatorname{\\mathbb {R}}^C$ a neural network ($C$ is the number of classes), and the transformation $t(\\cdot ;\\theta ):\\operatorname{\\mathbb {R}}^d \\rightarrow \\operatorname{\\mathbb {R}}^d$ is parameterized by a random variable $\\theta $ drawn from some distribution $p(\\theta )$ .", "In practice, we approximate the expectation in Eqn.", "(REF ) with $n$ Monte Carlo samples per one input $x$ : $ g(x) \\approx g_n(x) \\frac{1}{n} \\sum _{i=1}^n \\sigma \\left( f(t(x;\\theta _i)) \\right)$ We then define the final prediction as the class with the largest softmax probability: $\\hat{y}(x) = \\operatornamewithlimits{arg\\,max}_{c \\in [C]}~[g_n(x)]_c$ .", "Note that this decision rule is different from most previous works that use a majority vote on hard labels, i.e., $\\hat{y}_{\\mathrm {maj}}(x) = \\operatornamewithlimits{arg\\,max}_{c \\in [C]}~\\sum _{i=1}^n \\mathbb {1}\\left\\lbrace c = \\operatornamewithlimits{arg\\,max}_{j \\in [C]}~f_j(x)\\right\\rbrace $  [34], [9].", "We later show in Appendix REF that our rule is empirically superior to the majority vote.", "From the Law of Large Numbers, as $n$ increases, the approximation in Eqn.", "(REF ) converges to the expectation in Eqn.", "(REF ).", "Fig.", "REF illustrates the structure and the components of the RT architecture." ], [ "Parameterization of Transformations", "Here, $t(\\cdot ;\\theta )$ represents a composition of $S$ different image transformations where $\\theta = \\lbrace \\theta ^{(1)},\\dots ,\\theta ^{(S)}\\rbrace $ and $\\theta ^{(s)}$ denotes the parameters for the $s$ -th transformation, i.e., $t(x;\\theta ) = t_{\\theta ^{(S)}} \\circ t_{\\theta ^{(S-1)}} \\circ \\dots \\circ t_{\\theta ^{(1)}}(x)$ Each $\\theta ^{(s)}$ is a random variable comprised of three components, i.e., $\\theta ^{(s)}=\\lbrace \\tau ^{(s)},\\beta ^{(s)},\\alpha ^{(s)}\\rbrace $ , which dictate the properties of a transformation: Type $\\tau $ of transformation to apply (e.g., rotation, JPEG compression), which is uniformly drawn, without replacement, from a pool of $K$ transformation types: $\\tau \\sim \\text{Cat}(K, \\mathbf {1}/K)$ .", "A boolean $\\beta $ indicating whether the transformation will be applied.", "This is a Bernoulli random variable with probability $p_\\beta $ : $\\beta \\sim \\mathrm {Bern}\\left(p\\right)$ .", "Strength of the transformation (e.g., rotation angle, JPEG quality) denoted by $\\alpha $ , sampled from a predefined distribution (either uniform or normal): $\\alpha \\sim p(a)$ .", "Specifically, for each of the $n$ transformed samples, we sample a permutation of size $S$ out of $K$ transformation types in total, i.e.", "$\\lbrace \\tau ^{(1)},\\dots ,\\tau ^{(S)}\\rbrace \\in \\mathrm {Perm}(K, S)$ .", "Then the boolean and the strength of the $s$ -th transform are sampled: $\\beta ^{(s)} \\sim \\mathrm {Bern}\\left(p_{\\tau ^{(s)}}\\right)$ and $\\alpha ^{(s)} \\sim p(a_{\\tau ^{(s)}})$ .", "We abbreviate this sampling process as $\\theta \\sim p(\\theta )$ which is repeated for every transformed sample (out of $n$ ) for a single input.", "Assuming that the $K$ transformation types are 
fixed, an RT defense introduces, at most, $2K$ hyperparameters, $\\lbrace p_1,\\dots ,p_K\\rbrace $ and $\\lbrace a_1,\\dots ,a_K\\rbrace $ , that can be tuned.", "It is also possible to tune by selecting $K^{\\prime }$ out of $K$ transformation types, but this is combinatorially large in $K$ .", "In Appendix , we show a heuristic for “pruning” the transformation types through tuning $p$ and $a$ (e.g., setting $p=0$ is equivalent to removing that transformation type)." ], [ "Choices of Transformations", "In this work, we use a pool of $K=33$ different image transformations including 19 differentiable and 2 non-differentiable transforms taken from the 30 BaRT transforms [34] (counting each type of noise injection as its own transform).", "We replace non-differentiable transformations with a smooth differentiable alternative [36].", "The transformations fall into seven groups: noise injection (7), blur filtering (4), color-space alteration (8), edge detection (2), lossy compression (3), geometric transformation (5), and stylization (4).", "All transforms are described in Appendix REF ." ], [ "Evaluating {{cite:4ea15bc4d60ed41c49209b85311407298eebf92d}}'s BaRT", "Backward-pass differentiable approximation (BPDA) was proposed as a heuristic for approximating gradients of non-differentiable components in many defenses to make gradient-based attacks applicable [1].", "It works by first approximating the function with a neural network and backpropagate through this network instead of the non-differentiable function.", "Evaluations of BaRT in [34] have considered BPDA as some transformations are innately non-differentiable or have zero gradients almost everywhere (e.g., JPEG compression, precision reduction, etc.).", "To approximate a transformation, we train a model $\\tilde{t}_\\phi $ that minimizes the Euclidean distance between the transformed image and the model output: $ \\min _{\\phi }~\\sum _{i=1}^N\\mathop {\\mathbb {E}}_{\\theta \\sim p(\\theta )}\\left\\Vert \\tilde{t}_\\phi (x_i; \\theta ) - t(x_i; \\theta )\\right\\Vert _2$ We evaluate the BPDA approximation below in a series of experiments that compare the effectiveness of the BPDA attack to an attack that uses exact gradients." ], [ "Experiment Setup", "Our experiments use two datasets: CIFAR-10 and Imagenette [23], a ten-class subset of ImageNet.", "While CIFAR-10 is the most common benchmark in the adversarial robustness domain, some image transformations work poorly on low-resolution images.", "We choose Imagenette because BaRT was created on ImageNet, but we do not have resources to do thorough investigation on top of adversarial training on ImageNet.", "Additionally, the large and realistic images from Imagenette more closely resemble real-world usage All Imagenette models are pre-trained on ImageNet to speed up training and boost performance.", "Since RT models are stochastic, we report their average accuracy together with the 95% confidence interval from 10 independent runs.", "Throughout this work, we consider the perturbation size $\\epsilon $ of $16/255$ for Imagenette and $8/255$ for CIFAR-10.", "Appendix REF has more details on the experiments (network architecture, hyperparameters, etc.", ")." 
], [ "BPDA Attack is Not Sufficiently Strong", "We re-implemented and trained a BaRT model on these datasets, and then evaluated the effectiveness of BPDA attacks against this model.The authors have been very helpful with the implementation details but cannot make the official code or model weights public.", "First, we evaluate the full BaRT model in Table REF , comparing an attack that uses a BPDA approximation (as [34]) vs an attack that uses the exact gradient for differentiable transforms and BPDA for non-differentiable transforms, denoted “BPDA” and “Combo”, respectively.", "Empirically, we observe that attacks using BPDA are far weaker than the equivalent attack using exact gradient approximations.", "Similarly, on a variant BaRT model that uses only the subset of differentiable transforms, the BDPA attack is worse than an attack that uses the exact gradient for all transforms.", "BPDA is surprisingly weaker than even a naive attack which approximates all transform gradients with the identity.", "There are a few possible explanations for the inability of BPDA to approximate transformation gradients well: As Fig.", "REF illustrates, BPDA struggles to approximate some transforms accurately.", "This might be partly because the architecture [34] used (and we use) to approximate each transform has limited functional expressivity: it consists of five convolutional layers with 5x5 kernel and one with 3x3 kernel (all strides are 1), so a single output pixel can only depend on the input pixels fewer than 11 spaces away in any direction ($5 \\cdot {\\frac{5}{2}} + 1 \\cdot {\\frac{3}{2}} = 11$ ).", "Considering the inputs for Imagenette are of size $224\\times 224$ , some transforms like “crop” which require moving pixels much longer distances are impossible to approximate with such an architecture.", "The BPDA network training process for solving Eqn.", "(REF ) may only find a sub-optimal solution, yielding a poor approximation of the true transformation.", "During the attack, the trained BPDA networks are given partially transformed images, yet the BPDA networks are only trained with untransformed inputs.", "Since we are backpropagating through several transforms, one poor transform gradient approximation could ruin the overall gradient approximation.", "Appendix REF has more details on these experiments.", "These results show that BaRT's evaluation using BPDA was overly optimistic, and BaRT is not as robust as previously thought.", "Since BPDA is unreliable for approximating gradients of non-differentiable image transformations, we recommend that other ensuing RT-based defenses only use differentiable transformations.", "For the rest of this paper, we only study the robustness of RT defenses with differentiable transforms to isolate them from an orthogonal line of research on non-differentiable defenses (e.g., with approximate gradients or zero-th order attacks).", "Additionally, differentiable models can boost their robustness further when combined with adversarial training.", "We explore this direction in Section .", "Even without non-differentiable transforms, we still lack reliable evaluation on stochastic defenses apart from EoT.", "In the next section, we show that applying an EoT attack on RT defense results in a critically sub-optimal evaluation.", "After that, we propose a stronger attack." 
], [ "Hyperparameter Tuning on RT Defenses", "Before investigating attacks, we want to ensure we evaluate on the most robust RT defense possible.", "We found that BaRT is not robust, but it could be because of the chosen transformations and their hyperparameters which they do not provide any justification for.", "Finding the most robust RT defense is, however, challenging because it consists of numerous hyperparameters including the $K$ transformation types, the number of transformations to apply ($S$ ), and their parameters ($a$ and $p$ ).", "A typical grid search is intractable since we have 33 transformations, and trying to optimize the parameters directly with the reparameterization trick does not work as most transforms are not differentiable w.r.t.", "their parameters.", "We systematically address this problem by using Bayesian optimization (BO) [37], a well-known black-box optimization technique used for hyperparameter search, to fine-tune $a$ and $p$ .", "In short, BO optimizes an objective function that takes in the hyperparameters ($a$ and $p$ in our case) as inputs and outputs adversarial accuracy.", "This process, which is equivalent to one iteration in BO, is computationally expensive as it involves training a neural network as a backbone for an RT defense and evaluating it with our new attack.", "Consequently, we have to scale down the problem by shortening the training, using fewer training/testing data samples, and evaluating with fewer attack steps.", "Essentially, we have to trade off precision of the search for efficiency.", "Because BO does not natively support categorical or integral variables, we experiment with different choices for $K$ and $S$ without the use of BO.", "The full details of this procedure are presented Appendix ." 
], [ "State-of-the-Art Attack on RT Defenses", "[tb] Our best attack on RT defenses Input: Set of $K$ transformations and distributions of their parameters $p(\\theta )$ , neural network $f$ , perturbation size $\\epsilon $ , max.", "PGD steps $T$ , step size $\\lbrace \\gamma _t\\rbrace _{t=1}^T$ , and AggMo's damping constants $\\lbrace \\mu _b\\rbrace _{b=1}^B$ .", "Output: Adversarial examples $x_{\\mathrm {adv}}$ Data: Test input $x$ and its ground-truth label $y$ // Initialize x_adv and velocities $x_{\\mathrm {adv}} \\leftarrow x + u \\sim \\mathcal {U}[-\\epsilon ,\\epsilon ],\\quad \\lbrace v_b\\rbrace _{b=1}^B \\leftarrow \\mathbf {0}$ $x_{\\mathrm {adv}} \\leftarrow \\mathrm {Clip}(x_{\\mathrm {adv}}, 0, 1)$ $t=1$ to $T$ $\\lbrace \\theta _i\\rbrace _{i=1}^n \\sim p(\\theta )$ // Compute a gradient estimate with linear loss on logits (Section REF ) and with SGM (Section REF ) $G_n \\leftarrow \\nabla \\mathcal {L}_{\\mathrm {Linear}}\\left(\\frac{1}{n} \\sum _{i=1}^n f(t(x_{\\mathrm {adv}};\\theta _i)), y\\right)$ $\\hat{G}_n \\leftarrow \\mathrm {sign}(G_n)$ // Use signed gradients Update velocities and x_adv with AggMo (Section REF ) $b=1$ to $B$ $v_b \\leftarrow \\mu _b \\cdot v_b + \\hat{G}_n$ $x_{\\mathrm {adv}} \\leftarrow x_{\\mathrm {adv}} + \\frac{\\gamma _t}{B}\\sum _{b=1}^B v_b$ We propose a new attack on differentiable RT defenses that leverages insights from previous literature on transfer attacks as well as recent stochastic optimization algorithms.", "Our attack is immensely successful and shows that even the fine-tuned RT defense from Section  shows almost no adversarial robustness (Table REF ).", "We summarize our attack in Algorithm  before describing the setup and investigating the three main design choices that make this attack successful and outperform the baseline from [2] by a large margin." 
], [ "Setup: Stochastic Gradient Method", "First, we describe the setup and explain intuitions around variance of the gradient estimates.", "Finding adversarial examples on RT defenses can be formulated as the following stochastic optimization problem: $\\max _{\\delta :\\left\\Vert \\delta \\right\\Vert _\\infty \\le \\epsilon } H(\\delta ) &\\max _{\\delta :\\left\\Vert \\delta \\right\\Vert _\\infty \\le \\epsilon } \\operatorname{\\mathbb {E}}_{\\theta } \\left[h(\\delta ;\\theta )\\right] \\\\&\\max _{\\delta :\\left\\Vert \\delta \\right\\Vert _\\infty \\le \\epsilon } \\operatorname{\\mathbb {E}}_{\\theta } \\left[\\mathcal {L}(f(t(x+\\delta ; \\theta )), y)\\right] $ for some objective function $\\mathcal {L}$ .", "Note that we drop dependence on $(x,y)$ to declutter the notation.", "Since it is not possible to evaluate the expectation or its gradients exactly, the gradients are estimated by sampling $\\lbrace \\theta _i\\rbrace _{i=1}^n$ similarly to how we obtain a prediction $g_n$ .", "Suppose that $H$ is smooth and convex, and variance of the gradient estimates is bounded by $\\sigma ^2$ , i.e., $ \\mathop {\\mathbb {E}}_{\\theta \\sim p(\\theta )} \\left[ \\left\\Vert \\nabla h(\\delta ; \\theta ) - \\nabla H(\\delta )\\right\\Vert ^2 \\right] \\le \\sigma ^2,$ the error of SGD after $T$ iterations is $\\mathcal {O}\\left(1/T + \\sigma /\\sqrt{T}\\right)$ for an appropriate step size [14].", "This result suggests that small $\\sigma $ or low-variance gradient speeds up convergence which is highly desirable for attackers and defenders alike.", "Specifically, it leads to more efficient and more accurate evaluation as well as a stronger attack to use during adversarial training, which in turn, could yield a better defense (we explore this in Section ).", "As a result, the analyses on our attack will be largely based on variance and two other measures of spread of the gradients.", "Specifically, we measure (1) the dimension-averaged variance in Eqn.", "(REF ), (2) cosine similarity and (3) a percentage of matching signs between mean gradient and each gradient sample.", "Since all three metrics appear to be highly correlated in theory and in practice, we only report the variance in the main paper.", "For the other metrics and their mathematical definitions, please see Appendix REF ." ], [ "EoT Baseline.", "We compare our attack to the baseline which is exactly taken from [2].", "This attack takes on the same form as Eqn.", "(REF ) and its gradients are averaged over $n$ gradient samples: $H^{\\mathrm {EoT}}_n(\\delta ) &\\frac{1}{n} \\sum _{j=1}^n~ \\mathcal {L}\\left( f \\left( t(x + \\delta ; \\theta _j) \\right), y\\right) $ It is important to note that this approximation does not exactly match the decision rule of RT defenses as the expectation should be in front of $f$ but behind the loss function (see Eqn.", "(REF )).", "While the gradient estimates from Eqn.", "(REF ) are unbiased, they may have high variance as each gradient sample is equivalent to computing the loss on $g_n$ with $n=1$ .", "In the next section, we will compare other options for objective functions and decision rules and show that there are better alternatives to the original EoT." 
], [ "Signed gradients.", "All of the attacks used in this study including ours and the baseline use signs of gradients instead of the gradients themselves.", "This is a common practice for gradient-based $\\ell _\\infty $ -attacks, and we have also empirically confirm that it leads to much stronger attacks.", "This is also the reason that we measure sign matching as a measure of spread of the gradient estimates.", "In addition to the $\\ell _\\infty $ -constraint, using signed gradients as well as signed momentum is also beneficial as it has been shown to reduce variance for neural network training and achieve even faster convergence than normal SGD in certain cases [5]." ], [ "Adversarial Objectives and Decision Rules", "Here, we propose new decision rules and loss functions for the attacks as alternatives to EoT.", "Note that this need not be the same as the rule used for making prediction in Eqn.", "(REF ).", "First, we introduce softmax and logits rules: $&H^{\\mathrm {softmax}}(\\delta ) \\mathcal {L}\\left( \\mathop {\\mathbb {E}}_{\\theta \\sim p(\\theta )} \\left[ \\sigma \\left( f \\left( t(x + \\delta ; \\theta ) \\right) \\right) \\right], y\\right) \\\\&H^{\\mathrm {logits}}(\\delta ) \\mathcal {L} \\left( \\mathop {\\mathbb {E}}_{\\theta \\sim p(\\theta )} \\left[ f \\left( t(x + \\delta ; \\theta ) \\right) \\right], y\\right) $ $H^{\\mathrm {softmax}}$ , or loss of the expected softmax probability, is the same rule as the decision rule of RT defenses (Eqn.", "(REF )).", "It was also used by [35] where $\\mathcal {L}$ is cross-entropy loss.", "$H^{\\mathrm {logits}}$ or an expected logits, is similar to $H^{\\mathrm {softmax}}$ but without the softmax function to avoid potential vanishing gradients from softmax.", "Figure: Comparison of PGD attack's effectiveness with (a) different loss functions and decision rules, and (b) different attack variants with improved transferability.", "The error bars are too small to see with the markers so we report the numerical results in Table .“Baseline” refers to EoT with CE loss in Eqn.", "().In addition to the rules, we experiment with two choices of $\\mathcal {L}$ commonly used for generating adversarial examples: cross-entropy loss (CE) and linear loss (Linear).", "The linear loss is defined as the difference between the largest logit of the wrong class and logit of the correct class: $\\mathcal {L}_{\\mathrm {Linear}}(x, y) &~~ \\max _{j \\ne y} F_j - F_y \\\\\\text{where}~\\;~ F &~=~ \\mathop {\\mathbb {E}}_{\\theta \\sim p(\\theta )} \\left[f\\left(t(x; \\theta ) \\right) \\right]$ The advantage of the linear loss is that its gradient estimates are unbiased, similarly to EoT, meaning that the expectation can be moved in front of $\\mathcal {L}$ due to linearity.", "However, this is not the case for CE loss.", "Attack evaluation and comparison.", "We evaluate the attacks by their effectiveness in reducing the adversarial accuracy (lower means stronger attack) on the RT defense obtained from Section .", "In our setting, the adversarial examples are generated once and then used to compute the accuracy 10 times, each with a different random seed on the RT defense.", "We report the average accuracy over these 10 runs together with the 95%-confidence interval.", "Alternatively, one can imagine a threat model that counts at least one misclassification among a certain number of trials as incorrect.", "This is an interesting and perhaps more realistic in some settings, but the optimal attack will be very different from EoT as we care a lot 
less about the expectation.", "This, however, is outside of the scope of our work.", "In Fig.", "REF , we compare the effectiveness of four attacks, each using a different pair of losses and decision rules with varying numbers of PGD steps and samples $n$ .", "The widely used EoT method performs the worst of the four.", "CE loss on mean softmax probability performs better than EoT, confirming the observation made by [35].", "Linear loss and CE loss on average logits are even better and are consistently the strongest attacks, across all hyperparameters.", "For the rest of this paper, we adopt the linear loss with mean logits as the main objective function.", "Figure: Comparison of dimension-normalized variance of the gradient estimates across (blue) different loss functions and decision rules and (yellow) transferability-improving attacks.", "Strong attacks are highly correlated with low variance of their gradient estimates, i.e., Lin+SGM.", "Note that Lin+MB or Momentum Boosting is not shown here because it does not modify the gradients.Connection to variance.", "As we predicted in Section REF , a stronger attack directly corresponds to lower variance.", "This hypothesis is confirmed by Fig.", "REF .", "For instance, the EoT baseline has the highest variance as well as the worst performance according to Fig.", "REF .", "On the other hand, the linear loss (Lin) has the lowest variance among the three loss functions (blue) and hence, it performs the best.", "The other three points in orange will be covered in the next section." ], [ "Ensemble and Transfer Attacks", "RT defense can be regarded as an ensemble of neural networks with each member sharing the same parameters but applying different sets of transformations to the input (i.e., different $\\theta $ 's from random sampling).", "Consequently, we may view a white-box attack on RT defenses as a “partial” black-box attack on an ensemble of (infinitely) many models where the adversary wishes to “transfer” adversarial examples generated on some subset of the members to another unseen subset.", "Given this interpretation, we apply four techniques designed to enhance the transferability of adversarial examples to improve the attack success rate on RT defense.", "The techniques include momentum boosting (MB) [12], modifying backward passes by ignoring non-linear activation (LinBP) [17] or by emphasizing the gradient through skip connections of ResNets more than through the residual block (SGM) [44], and simply using a targeted attack with the linear loss function (TG) [48].", "In Fig.", "REF , we compare these techniques combined with the best performing loss and decision rule from Section REF (i.e., the linear loss on logits).", "Only SGM improves the attack success rate at all settings while the rest result in weaker attacks than the one without any of the techniques (denoted by “Linear (logits)” in Fig.", "REF ).", "SGM essentially normalizes the gradients and scales ones from the residual blocks by some constant less than 1 (we use $0.5$ ) to reduce its influence and prioritize the gradients from the skip connection.", "[44] explain that SGM leads to better transferability because gradients through skip connections preserve “low-level information” which tends to transfer better.", "Intuitively, this agrees with our variance explanation as the increased transferability implies a stronger agreement among gradient samples and hence, less spread or lower variance." 
], [ "Stochastic Optimization Algorithm", "While most attacks on deterministic models can use naive PGD to solve Eqn.", "(REF ) effectively, this is not the case for stochastic models like the RT defense.", "Here, the adversary only has access to noisy estimates of the gradients, making it a strictly more difficult problem, and techniques used in the deterministic case may no longer apply.", "Figure: Comparison of the optimizers for attacking an RT defense with ϵ=16/255,n=10\\epsilon =16/255, n=10 on Imagenette dataset.", "All but the baseline (CE loss with EoT) use the linear loss with SGM, and all but AggMo (B=6B=6) use the default hyperparameters.", "AggMo with B=6B=6 outperforms the other algorithms in terms of both the convergence rate and the final adversarial accuracy obtained.", "This result is not very sensitive to BB as any sufficiently large value (≥4\\ge 4) yields the same outcome.As mentioned in Section REF , high-variance gradient estimates undermine the convergence rate of SGD.", "Thus, the attack should benefit from optimization techniques aimed at reducing the variance or speeding up the convergence of SGD.", "We first experiment with common optimizers such as SGD and Adam [25] with different hyperparameters, e.g., momentum, Nesterov acceleration, and learning rate schedules, to find the best setting for the linear loss with SGM.", "Based on this experiment, we found that a momentum term with an appropriate damping constant plays an important role in the attack success rate.", "Momentum is also well-known to accelerate and stabilize training of neural networks [38].", "Fig.", "REF reports adversarial accuracy at varying attack iterations and indicates that higher momentum constant leads to faster convergence and a higher attack success rate.", "However, the results seem highly sensitive to this momentum constant which also varies from one setting to another (e.g., number or types of transformations, dataset, etc.).", "To mitigate this issue, we introduce another optimizer.", "AggMo is exactly designed to be less sensitive to choices of the damping coefficient by aggregating $B$ momentum terms with different constants instead of one [31].", "After only a few tries, we found a wide range of values of $B$ where AggMo outperforms SGD with a fine-tuned momentum constant (see Fig.", "REF ).", "Fig.", "REF compares the attacks using different choices of the optimizers to the baseline EoT attack.", "Here, the baseline can only reduce the adversarial accuracy from $89\\%$ to $70\\%$ while our best attack manages to reach $\\mathbf {6\\%}$ or over $\\mathbf {4.3\\times }$ improvement.", "This concludes that the optimizer plays a crucial role in the success of the attack, and the RT defense, even with a carefully and systematically chosen transformation hyperparameters, is not robust against adversarial examples.", "Furthermore, we note that without our loss function and only using AggMo, the accuracy only goes down to $23\\%$ at a much slower rate.", "Conversely, when the linear loss and SGM are used with SGD (no momentum), the accuracy drops to $51\\%$ .", "This signifies that all three techniques we deploy play important roles to the attack's effectiveness." 
], [ "Comparison with AutoAttack", "AutoAttack [10] was proposed as a standardized benchmark for evaluating deterministic defenses against adversarial examples.", "It uses an ensemble of four different attacks that cover weaknesses of one another, one of which does not use gradients.", "AutoAttack has been proven to be one of the strongest attack currently and is capable of catching defenses with false robustness caused by gradient obfuscation [1].", "While not particularly designed for stochastic models, AutoAttack can be used to evaluate them when combined with EoT.", "We report the accuracy on adversarial examples generated on AutoAttack with all default hyperparameters in the “standard” mode and 10-sample EoT in Table REF .", "AutoAttack performs worse than the baseline EoT and our attack on both Imagenette and CIFAR-10 by a large margin.", "One of the reasons is that AutoAttack is optimized for efficiency and so each of its attacks is usually terminated once a misclassification occurs.", "This is applicable to deterministic models, but for stochastic ones such as an RT defense, the adversary is better off finding the adversarial examples that maximize the expected loss instead of ones that are misclassified once.", "To take this property into account, we include the accuracy reported by AutoAttack that treats a sample as incorrect if it is misclassified at least once throughout the entire process.", "For Imagenette, the accuracies after each of the four attacks (APGD-CE, APGD-T, FAB, and Square) is applied sequentially are $82.03$ , $78.81$ , $78.03$ , and $77.34$ , respectively.", "Note that this is a one-time evaluation so there is no error bar here.", "Needless to say, the adversarial accuracy computed this way is strictly lower than the one we reported in Table REF and violates our threat model.", "However, it is still higher than that of the baseline EoT and our attack, suggesting that AutoAttack is ineffective against randomized models like RT defenses.", "AutoAttack also comes with a “random” mode for randomized models which only use APGD-CE and APGD-DLR with 20-sample EoT.", "The adversarial accuracies obtained from this mode are $85.62$ and $83.83$ or $88.62 \\pm 0.46$ for single-pass evaluation as in Table REF .", "This random mode performs worse than the standard version." 
], [ "Combining with Adversarial Training", "To deepen our investigation, we explore the possibility of combining RT defense with adversarial training.", "However, this is a challenging problem on its own.", "For normal deterministic models, 10-step PGD is sufficient for reaching adversarial accuracy close to best known attack or the optimal adversarial accuracy.", "However, this is not the case for RT defenses as even our new attack still requires more than one thousand iterations before the adversarial accuracy starts to plateau.", "Ultimately, the robustness of adversarially trained models largely depends on the strength of the attack used to generate the adversarial examples, and using a weak attack means that the obtained model will not be robust.", "A similar phenomenon is observed by [40] and [43] where an adversarially trained model overfits to the weak FGSM attacks but has shown to be non-robust with the accurate evaluation.", "To test this hypothesis, we adversarially train the RT defense from Section  using our new attack with 50 iterations (already $5\\times $ the common number of steps) and call this defense ”AdvRT.” The attack step size is also adjusted accordingly to $\\epsilon / 8$ .", "In Table REF , we confirm that training AdvRT this way results in a model with virtually no robustness improvement over the normal RT on Imagenette.", "On the other hand, the AdvRT trained on CIFAR-10 proves to be more promising even though it is still not as robust as deterministic models trained with adversarial training or TRADES [46].", "Based on this result, we conclude that a stronger attack on RT defenses that converge within a much fewer iterations will be necessary to make adversarial training successful.", "In theory, it might be possible to achieve a robust RT model with 1,000-step attack on Imagenette, but this is too computationally intensive for us to verify, and it will not to scale to any realistic setting." ], [ "Conclusion", "While recent papers report state-of-the-art robustness with RT defenses, our evaluations show that RT generally under-performs existing defenses like adversarial training when met with a stronger attack, even after fine-tuning the hyperparameters of the defense.", "Through our experiments, we found that non-differentiability and high-variance gradients can seriously inhibit adversarial optimization, so we recommend using only differentiable transformations along with their exact gradients in the evaluation of future RT defenses.", "In this setting, we propose a new state-of-the-art attack that improves significantly over the baseline (PGD with EoT) and show that RT defenses as well as their adversarially trained counterparts are not as robust to adversarial examples as they were previously believed to be." ], [ "Acknowledgements", "We would like to thank Jonathan Shewchuk for the feedback on the paper.", "This research was supported by the Hewlett Foundation through the Center for Long-Term Cybersecurity (CLTC), by the Berkeley Deep Drive project, by the National Science Foundation under Award CCF-1909204, and by generous gifts from Open Philanthropy and Google Cloud Research Credits program under Award GCP19980904." ], [ "Details on the Image Transformations", "The exact implementation of RT models and all the transformations will be released.", "Here, we provide some details on each of the transformation types and groups.", "Then, we describe how we approximate some non-differentiable functions with differentiable ones." 
], [ "Noise injection", "[noitemsep] Erase: Set the pixels in a box with random size and location to zero.", "Gaussian noise: Add Gaussian noise to each pixel.", "Pepper: Zero out pixels with some probability.", "Poisson noise: Add Poisson noise to each pixel.", "Salt: Set pixels to one with some probability.", "Speckle noise: Add speckle noise to each pixel.", "Uniform noise: Add uniform noise to each pixel." ], [ "Blur filtering", "[noitemsep] Box blur: Blur with randomly sized mean filter.", "Gaussian blur: Blur with randomly sized Gaussian filter with randomly chosen variance.", "Median blur: Blur with randomly sized median filter.", "Motion blur: Blur with kernel for random motion angle and direction." ], [ "Color-space alteration", "[noitemsep] HSV: Convert to HSV color-space, add uniform noise, then convert back.", "LAB: Convert to LAB color-space, add uniform noise, then convert back.", "Gray scale mix: Mix channels with random proportions.", "Gray scale partial mix: Mix channels with random proportions, then mix gray image with each channel with random proportions.", "Two channel gray scale mix: Mix two random channels with random proportions.", "One channel partial gray: Mix two random channels with random proportions, then mix gray image with other channel.", "XYZ: Convert to XYZ color-space, add uniform noise, then convert back.", "YUV: Convert to YUV color-space, add uniform noise, then convert back." ], [ "Edge detection", "[noitemsep] Laplacian: Apply Laplacian filter.", "Sobel: Apply the Sobel operator." ], [ "Lossy compression", "[noitemsep] JPEG compression: Compress image using JPEG to a random quality.", "Color precision reduction: Reduce color precision to a random number of bins.", "FFT perturbation: Perform FFT on image and remove each component with some probability." ], [ "Geometric transforms", "[noitemsep] Affine: Perform random affine transformation on image.", "Crop: Crop image randomly and resize to original shape.", "Horizontal flip: Flip image across the vertical.", "Swirl: Swirl the pixels of an image with random radius and strength.", "Vertical flip: Flip image across the horizontal." ], [ "Stylization", "[noitemsep] Color jitter: Randomly alter the brightness, contrast, and saturation.", "Gamma: Randomly alter gamma.", "Sharpen: Apply sharpness filter with random strength.", "Solarize: Solarize the image." ], [ "Non-differentiable (for BPDA Tests Only)", "[noitemsep] Adaptive histogram: Equalize histogram in patches of random kernel size.", "Chambolle denoise: Apply Chambolle's total variation denoising algorithm with random weight (can be implemented differentiably but was not due to time constraints).", "Contrast stretching: Pick a random minimum and maximum pixel value to rescale intensities (can be implemented differentiably but was not due to time constraints).", "Histogram: Equalize histogram using a random number of bins." ], [ "Unused transforms from BaRT", "[noitemsep] Seam carving: Algorithm used in [34] has been patented and is no longer available for open-source use.", "Wavelet denoising: The implementation in [34] is incomplete.", "Salt & pepper: We have already used salt and pepper noise separately.", "Non-local means denoising: The implementation of NL means denoising in [34] is too slow." 
], [ "Experiment Details", "All of the experiments are evaluated on 1000 randomly chosen test samples.", "Since we choose the default $n$ to be 20 for inference and 10 for the attacks, the experiments are at least 10 times more expensive than usual, and we cannot afford enough computation to run a large number of experiments on the entire test set.", "The networks used in this paper are ResNet-34 [19] for Imagenette and Pre-activation ResNet-20 [20] for CIFAR-10.", "In all of the experiments, we use a learning rate of 0.05, batch size of 128, and weight decay of 0.0005.", "We use cosine annealing schedule [30] for the learning rate with a period of 10 epochs which also doubles after every period.", "All models are trained for 70 epochs, and we save the weights with the highest accuracy on the held-out validation data (which does not overlap with the training or test set).", "For adversarially trained RT defenses, the cosine annealing step is set to 10 and the training lasts for 70 epochs to reduce the computation.", "To help the training converge faster, we pre-train these RT models on clean data before turning on adversarial training as suggested by [18]." ], [ "Details on BPDA Experiments", "We used the following setup for the differentiability related experiments conducted in Section REF : [noitemsep] Each accuracy is an average over 10 trials on the same set of 1000 Imagenette images.", "The defense samples $S = 10$ transforms from the full set of $K$ transforms.", "The image classifier uses a ResNet-50 architecture like in [34] trained on transformed images for 30 epochs.", "The attack uses 40 PGD steps of size $4/255$ with an $\\epsilon =16/255$ to minimize the EoT objective.", "The BPDA network architecture is the same used by [34] and is outlined in Fig.", "REF .", "All BPDA networks were trained using Adam with a learning rate of $0.01$ for 10 epochs.", "All networks achieve a per-pixel MSE below $0.01$ .", "The outputs of the BPDA networks are compared to the true transform outputs for several different transform types in Fig.", "REF .", "The specific set of transforms used in each defense are the following: BaRT (all): adaptive histogram, histogram, bilateral blur, box blur, Gaussian blur, median blur, contrast stretching, FFT, gray scale mix, gray scale partial mix, two channel gray scale mix, one channel gray scale mix, HSV, LAB, XYZ, YUV, JPEG compression, Gaussian noise, Poisson noise, salt, pepper, color precision reduction, swirl, Chambolle denoising, crop.", "BaRT (only differentiable): all of the BaRT all transforms excluding adaptive histogram, histogram, contrast stretching, and Chambolle denoising.", "Figure: Comparison of the true transformed outputs (top row) and outputs of respective BPDA networks (bottom row) for six different transformation types." 
], [ "Differentiable Approximation", "Some of the transformations contain non-differentiable operations which can be easily approximated with differentiable functions.", "Specifically, we approximate the rounding function in JPEG compression and color precision reduction, and the modulo operator in all transformations that require conversion between RGB and HSV color-spaces (HSV alteration and color jitter).", "Note that we are not using the non-differentiable transform on the forward pass and a differentiable approximation on the backward pass (like in BPDA).", "Instead, we are using the differentiable version both when performing the forward pass and when computing the gradient.", "We take the approximation of the rounding function from [36] shown in Eqn.", "(REF ).", "$ \\lfloor x \\rceil _\\text{approx} = \\lfloor x \\rceil + (x - \\lfloor x \\rceil )^3$ For the modulo or the remainder function, we approximate it using the above differentiable rounding function as a basis.", "$ \\mathrm {mod}(x) &= {\\left\\lbrace \\begin{array}{ll}x - \\lfloor x \\rceil \\qquad \\quad \\mathrm {if}~x > \\lfloor x \\rceil \\\\x - \\lfloor x \\rceil + 1 \\quad ~\\mathrm {otherwise}\\end{array}\\right.", "}$ To obtain a differentiable approximation, we can replace the rounding operator with its smooth version in Eqn.", "(REF ).", "This function (approximately) returns decimal numbers or a fractional part of a given real number, and it can be scaled to approximate a modulo operator with any divisor.", "Note that these operators are step functions and are differentiable almost everywhere, like ReLU.", "However, their derivatives are always zero (unlike ReLU), and so a first-order optimization algorithm would still fail on these functions." ], [ "Effect of the Permutation of the Transformations", "We mentioned in Section REF that a permutation of the transforms $\\lbrace \\tau ^{(s)}\\rbrace _{s=1}^S$ is randomly sampled for each of the $n$ samples.", "However, we found that in practice, this leads to high-variance estimates of the gradients.", "On the other hand, fixing the permutation across $n$ samples in each attack iteration (i.e., $\\tau $ is fixed but not $\\alpha $ or $\\beta $ ) results in lower variance and hence, a stronger attack, even though the gradient estimates are biased as $\\tau $ is fixed.", "For instance, with fixed permutation, adversarial accuracy achieved by EoT attack is $51.44$ where the baseline EoT with completely random permutation is $70.79$ .", "The variance also reduces from $0.97$ to $0.94$ .", "Additionally, the fixed permutation reduces the computation time as all transformations can be applied in batch.", "All of the attacks reported in this paper, apart from the baseline, use this fixed permutation.", "Table: Comparison of different attack techniques on our best RT model.", "Lower means stronger attack.", "This table only shows the numerical results plotted in Fig.", ".Figure: (a) Cosine similarity and (b) percentage of sign matches for three pairs of attack loss functions and decision rules: CE loss with EoT “Baseline”, CE loss on mean softmax probability “CE (softmax)”, and linear loss on logits “Lin (logits)”.Figure: (a) Cosine similarity and (b) percentage of sign matches for the linear loss and its combinations with three transfer attack techniques: Linear Backward Pass “LinBP”, Skip Gradient Method “SGM”, and targeted “TG”." 
], [ "Variance of Gradients", "We have described how we compute the sample variance of the gradients in Section REF .", "Here, we provide detailed calculations of the other three metrics.", "First, the unbiased variance is computed as normal with an additional normalization by dimension.", "$\\mu _{n} &\\frac{1}{n} \\sum _{j=1}^n \\nabla \\hat{G}_{1,j} \\\\\\sigma _{n}^2 &\\frac{1}{d}\\frac{1}{n-1} \\sum _{j=1}^n \\left\\Vert \\mu _{n} - \\hat{G}_{1,j}\\right\\Vert _2^2 $ where $\\hat{G}_1$ is the signed gradients where the loss is estimated with one sample as defined in Algorithm .", "The cosine similarity is computed between the mean gradient and all $n$ samples and then averaged.", "$\\text{cos}_{n} \\frac{1}{n} \\sum _{j=1}^n \\frac{\\left\\langle \\hat{G}_{1,j}, \\mu _{n}\\right\\rangle }{\\left\\Vert \\hat{G}_{1,j}\\right\\Vert _2 \\cdot \\left\\Vert \\mu _{n}\\right\\Vert _2}$ Lastly, the sign matching percentage is $\\text{sign\\_match}_{n}.", "\\frac{1}{n} \\sum _{j=1}^n \\frac{1}{d} \\sum _{i=1}^d \\mathbb {1}\\lbrace [\\hat{G}_{1,j}]_i = [\\mu _{n}]_i\\rbrace $ Fig.", "REF and Fig.", "REF plot the cosine similarly and the sign matching for varying loss functions and varying transfer attacks, respectively.", "Similarly to Fig.", "REF , better attacks result in less spread of the gradient samples which corresponds to higher cosine similarity and sign matching percentage.", "Figure: Effectiveness of the optimizers, (a) SGD and (b) AggMo, with varying momentum parameters.", "Increasing BB for AggMo in this case monotonically reduces the final adversarial accuracy until B=4B=4 where it plateaus.", "This is more predictable and stable than increasing the momentum constant in SGD." ], [ "Details on Bayesian Optimization", "[tb] Tuning and training RT defense.", "Input: Set of transformation types, $n$ , $p$ , $\\epsilon $ Output: $g^*(\\cdot ), \\mathcal {R}, \\mathcal {R}_{p,\\epsilon }$ Data: Training data $\\left(\\mathbf {X}^{\\mathrm {train}}, \\mathbf {Y}^{\\mathrm {train}}\\right)$ , test data $\\left(\\mathbf {X}^{\\mathrm {test}}, \\mathbf {Y}^{\\mathrm {test}}\\right)$ // Starting Bayesian optimization (BO) Sub-sample $\\left(\\mathbf {X}^{\\mathrm {train}}, \\mathbf {Y}^{\\mathrm {train}}\\right)$ and split it into BO's training data $\\left(\\mathbf {X}^{\\mathrm {train}}_{\\mathrm {BO}}, \\mathbf {Y}^{\\mathrm {train}}_{\\mathrm {BO}}\\right)$ and validation data $\\left(\\mathbf {X}^{\\mathrm {val}}_{\\mathrm {BO}}, \\mathbf {Y}^{\\mathrm {val}}_{\\mathrm {BO}}\\right)$ .", "$\\mathcal {R}_{p,\\epsilon }^* \\leftarrow 0$ // Best adversarial accuracy $\\lbrace (p^*_i, \\alpha ^*_i)\\rbrace _{i=1}^{K} \\leftarrow 0$ // Best RT hyperparameters $\\mathrm {step}=1$ to MAX_BO_STEPS // Running one trial of BO BO specifies $\\lbrace (p_i, \\alpha _i)\\rbrace _{i=1}^{K}$ to evaluate.", "Train an RT model on $\\left(\\mathbf {X}^{\\mathrm {train}}_{\\mathrm {BO}}, \\mathbf {Y}^{\\mathrm {train}}_{\\mathrm {BO}}\\right)$ with hyperparameters $\\lbrace (p_i, \\alpha _i)\\rbrace _{i=1}^{K}$ to obtain $g$ .", "Test $g$ by computing $\\mathcal {R}_{p,\\epsilon }$ on $\\left(\\mathbf {X}^{\\mathrm {val}}_{\\mathrm {BO}}, \\mathbf {Y}^{\\mathrm {val}}_{\\mathrm {BO}}\\right)$ using a weak but fast attack.", "$\\mathcal {R}_{p,\\epsilon } > \\mathcal {R}_{p,\\epsilon }^*$ $\\mathcal {R}_{p,\\epsilon }^* \\leftarrow \\mathcal {R}_{p,\\epsilon }$ $\\lbrace (p^*_i, \\alpha ^*_i)\\rbrace _{i=1}^{K} \\leftarrow \\lbrace (p_i, \\alpha _i)\\rbrace _{i=1}^{K}$ No improvement for some steps break 
// Full training of RT: Train an RT model on $\left(\mathbf {X}^{\mathrm {train}}, \mathbf {Y}^{\mathrm {train}}\right)$ with the best hyperparameters $\lbrace (p^*_i, \alpha ^*_i)\rbrace _{i=1}^{K}$ to obtain $g^*$ .", "Evaluate $g^*$ by computing $\mathcal {R}$ and $\mathcal {R}_{p,\epsilon }$ on $\left(\mathbf {X}^{\mathrm {test}}, \mathbf {Y}^{\mathrm {test}}\right)$ using a strong attack.", "One major challenge in implementing an RT defense is selecting the defense hyperparameters, which include the $K$ transformation types, the number of transformations to apply ($S$ ), and their parameters ($a$ and $p$ ).", "To improve the robustness of the RT defense, we use Bayesian optimization (BO), a well-known black-box optimization technique, to fine-tune $a$ and $p$  [37].", "In this case, BO models the hyperparameter tuning as a Gaussian process where the objective function takes in $a$ and $p$ , trains a neural network as a backbone for an RT defense, and outputs the adversarial accuracy under some pre-defined $\ell _\infty $ -budget $\epsilon $ as the metric used for optimization.", "Since BO quickly becomes ineffective as we increase the dimensions of the search space, we choose to tune either $a$ or $p$ , never both, for each of the $K$ transformation types.", "For transformations that have a tunable $a$ , we fix $p = 1$ (e.g., noise injection, affine transform).", "For the transformations without an adjustable strength $a$ , we only tune $p$ (e.g., Laplacian filter, horizontal flip).", "Additionally, because BO does not natively support categorical or integral variables, we experiment with different choices for $K$ and $S$ without the use of BO.", "Therefore, our BO problem must optimize over $K$ (up to 33) variables, far more than are typically present when doing model hyperparameter tuning using BO.", "Mathematically, the objective function $\psi $ is defined as $\psi : [0, 1]^K \rightarrow \mathcal {R}_{\infty ,\epsilon } \in [0, 1]$ where the input is $K$ real numbers between 0 and 1, and $\mathcal {R}_{\infty ,\epsilon }$ denotes the adversarial accuracy, i.e., the accuracy on $x_{\mathrm {adv}}$ as defined in Eqn.", "(REF ).", "Since $\psi $ is very expensive to evaluate, as it involves training and testing a large neural network, we employ the following strategies to reduce the computation: (1) only a subset of the training and validation set is used, (2) the network is trained for fewer epochs with a cosine annealing learning rate schedule to speed up convergence [30], and (3) the attack used for computing $\mathcal {R}_{\infty ,\epsilon }$ is weaker but faster.", "Even with these speedups, one BO experiment still takes approximately two days to complete on two GPUs (Nvidia GeForce GTX 1080 Ti).", "We also experimented with other sophisticated hyperparameter-tuning algorithms based on Gaussian processes [4], [24], [13] but did not find them more effective.", "We summarize the main steps for tuning and training an RT defense in Algorithm .", "We use the Ray Tune library for RT's hyperparameter tuning in Python [27].", "The Bayesian optimization tool is implemented by [33], following analyses and instructions by [37] and [7].", "As mentioned in Section , we sub-sample the data to reduce computation for each BO trial.", "Specifically, we use 20% and 10% of the training samples for Imagenette and CIFAR-10, respectively (Algorithm , line ), as Imagenette has a much smaller number of samples in total.", "The models are trained with the same transformations and hyperparameters
used during inference, and here, $n$ is set to 1 during training, just as is done during standard data augmentation.", "We use 200 samples to evaluate each BO run in line  of Algorithm , with only 100 steps and $n=10$ .", "One BO experiment executes two BO runs in parallel.", "The maximum number of BO runs is 160, but we terminate the experiment if no improvement has been made in the last 40 runs, after a minimum of 80 runs have taken place.", "The runtime depends on $S$ and the transformation types used.", "In our typical case, when all 33 transformation types are used and $S=14$ , one BO run takes almost an hour on an Nvidia GeForce GTX 1080 Ti for Imagenette.", "One BO experiment then takes about two days to finish.", "In lines  and  of Algorithm , we now use the full training set and 1000 test samples, as mentioned earlier.", "During the full training, $n$ is set to four, which increases the training time by approximately four times.", "We find that using a larger $n$ is beneficial to both the clean and the adversarial accuracy, but an $n$ larger than four does not make any significant difference." ], [ "Details on the Final RT Model", "We run multiple BO experiments (Algorithm ) on different subsets of transformation types to identify which transformations are most/least effective, in order to reduce $K$ as well as the number of hyperparameters our final run of BO has to tune.", "We then repeat Algorithm , initialized with the input-output pairs from the prior runs of BO, to obtain a new set of hyperparameters.", "Finally, we remove the transformations whose $p$ or $a$ has been set to zero by the first run of BO, and we run BO once more with this filtered subset of transformations.", "At the end of this expensive procedure, we obtain the best and final RT model that we use in the experiments throughout this paper.", "For Imagenette, the final set of 18 transformation types used in this model consists of color jitter, erase, gamma, affine, horizontal flip, vertical flip, Laplacian filter, Sobel filter, Gaussian blur, median blur, motion blur, Poisson noise, FFT, JPEG compression, color precision reduction, salt noise, sharpen, and solarize.", "$S$ is set to 14." ], [ "Decision Rules and Number of Samples", "Fig.", "REF and Fig.", "REF compare three different decision rules that aggregate the $n$ outputs of the RT model to produce the final prediction $\hat{y}(x)$ given an input $x$ .", "We choose the average softmax probability rule for all of our RT models because it provides a good trade-off between the clean accuracy and the robustness.", "Majority vote has poor clean accuracy, and averaging the logits has poor robustness." 
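As a small illustration of the three decision rules compared above, the sketch below aggregates the $n$ per-transform outputs for a single input; the function and rule names are ours, not the paper's.

import torch

def aggregate(logits, rule="avg_softmax"):
    # logits: tensor of shape (n, num_classes), one row per random transformation draw.
    if rule == "avg_softmax":    # average softmax probability (the rule adopted here)
        return logits.softmax(dim=-1).mean(dim=0).argmax().item()
    if rule == "avg_logits":     # average the raw logits
        return logits.mean(dim=0).argmax().item()
    if rule == "majority_vote":  # majority vote over per-draw predictions
        return logits.argmax(dim=-1).mode().values.item()
    raise ValueError(f"unknown rule: {rule}")

prediction = aggregate(torch.randn(20, 10), rule="avg_softmax")  # n = 20 draws, 10 classes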
], [ "Importance of the Transformation Groups", "Choosing the best set of transformation types to use is a computationally expensive problem.", "There are many more transformations that can be applied outside of the 33 types we choose, and the number of possible combinations grows exponentially.", "BO gives us an approximate solution but is by no means perfect.", "Here, we take a step further to understand the importance of each transformation group.", "Table REF gives an alternative way to gauge the contribution of each transformation group.", "According to this experiment, noise injection appears most robust followed by lossy compression and geometric transformations.", "However, this result is not very informative as most of the groups have zero adversarial accuracy, and the rest are likely to also reduce to zero given more attack steps.", "This result also surprisingly follows the commonly observed robustness-accuracy trade-off [42]." ], [ "Number of Transformations", "We test the effect of the transform permutation size $S$ on the clean and the robust accuracy of RT models (Fig.", "REF ).", "We run Bayesian optimization experiments for different values of $S$ using all 33 transformation types, and all of the models are trained using the same procedure.", "Fig.", "REF shows that generally more transformations (larger $S$ ) increase robustness but lower accuracy on benign samples." ] ]
2207.03574
[ [ "Training Transformers Together" ], [ "Abstract The infrastructure necessary for training state-of-the-art models is becoming overly expensive, which makes training such models affordable only to large corporations and institutions.", "Recent work proposes several methods for training such models collaboratively, i.e., by pooling together hardware from many independent parties and training a shared model over the Internet.", "In this demonstration, we collaboratively trained a text-to-image transformer similar to OpenAI DALL-E. We invited the viewers to join the ongoing training run, showing them instructions on how to contribute using the available hardware.", "We explained how to address the engineering challenges associated with such a training run (slow communication, limited memory, uneven performance between devices, and security concerns) and discussed how the viewers can set up collaborative training runs themselves.", "Finally, we show that the resulting model generates images of reasonable quality on a number of prompts." ], [ "Introduction", "Training state-of-the-art deep learning models is becoming ever more computationally demanding.", "One infamous example of this trend is transformers [45], a popular architecture widely used in NLP [10], [27], [4], speech processing [15], [25], and computer vision [12], [43], [6].", "Transformers benefit from having billions of parameters [4], [17], [29] and large-batch training [31], which makes them dependent on large-scale training infrastructure [28], [40], [22].", "Unfortunately, this kind of infrastructure can be prohibitively expensive, whether one buys the hardware or rents cloud resources [44], [24].", "As a result, most researchers simply cannot afford to conduct the necessary experiments to develop their ideas, which ultimately slows down scientific progress.", "To make large-scale deep learning more accessible, recent work proposes to train these models collaboratively, i.e., to pool together the hardware from many independent parties and train a shared model over the Internet [30], [38], [19], [2], [11].", "Such work proposes general distributed algorithms for training on many devices with uneven compute capability and reliability.", "However, to make them practical, one must overcome several engineering challenges, such as slow communication, limited memory, and security concerns.", "In this demonstration, we collaboratively trained a text-to-image transformer similar to DALL-E [36].", "Our contributions are the following: We modify the DALL-E model, making it suitable for training over the Internet using the method from [11] and the hivemind library [16].", "We set up the infrastructure for such a training run and publish the training results.", "We provide a webpageSee https://training-transformers-together.github.io explaining how to join the ongoing training run, address challenges related to collaborative training runs (slow communication, low memory budget, support of heterogeneous devices), and set up such a training run by yourself.", "We provide an interactive “calculator” that shows the memory consumed by different models in case of using various memory-efficiency techniques.", "Also, we present a tutorial on setting up dataset streaming and model compression using the datasets and bitsandbytes libraries [23], [9].", "The central part of our demonstration is a webpage where people can explore the demonstration materials.", "The webpage describes the motivation behind collaborative training projects, the method 
for efficient training from [11], and the ongoing collaborative training of our adapted version of DALL-E (see Section ).", "Here, we also show a plot of the training objective and the number of active participants.", "Next, we provide instructions on how to join the training run using free cloud providers or one's own GPU.", "This involves (1) joining a specific Hugging Face organization, where we can authenticate the users and measure their contribution, and (2) running a Jupyter notebook [20] with the training code.", "Our intention was that users could explore our collaborative training environment through active participation while reading the detailed explanations of how it works.", "Here, we also provide the link to the interactive dashboard, which shows the statistics and the leaderboard of contributors and provides further information about the training run, such as model checkpoints uploaded to the Model Hub, notebooks for inference, and links to the source code.", "Then, we proceed to discuss the engineering challenges of collaborative training runs: Communication efficiency.", "Most distributed training algorithms are designed for networks inside HPC clusters with 10–100 Gbit/s bandwidth.", "However, typical Internet connections are orders of magnitude slower (10–100 Mbit/s).", "To make training over the Internet practical, one can reduce the communication costs using large-batch training [50], gradient compression [8], [26], [46], [42], parameter sharing [21], [49], and overlapping computation with communication [37].", "Uneven device performance.", "Traditional data-parallel training waits for the slowest device on every batch.", "[11] allow the devices to process different numbers of samples for a batch, while keeping the guarantees of synchronous training.", "Memory efficiency.", "Distributed training requires either storing all parameters and optimizer statistics on each participant, which is challenging in the case of low-end hardware, or using model parallelism, which introduces another level of complexity.", "Fortunately, the first option is often viable if we reduce the memory consumption with 8-bit optimizers [9], by offloading the statistics to the CPU, or with gradient checkpointing or parameter sharing [21], [49].", "Dataset streaming.", "Participants often cannot store or even download the whole dataset, since datasets used for pretraining transformers may contain hundreds of gigabytes of data.", "To address that, one can use dataset streaming tools, such as the datasets library [23].", "Security.", "Crucially, the participants only exchange tensors and never send code to be executed on each other's computers.", "Since a malicious participant could also influence the training outcome by sending wrong tensors, we should authenticate participants, as described in [11], and/or use gradient aggregation techniques robust to outliers [18], [14].", "Finally, we provide a recipe on how to combine all of the above and set up a new collaborative training run using the hivemind library [16]." ], [ "Memory calculator", "The demonstration webpage includes an interactive “calculator” showing the benefits of various memory-efficiency techniques and their combinations.", "It can compute the consumption of RAM and GPU memory for BERT [10], T5 [35], GPT-2 [33], GPT-3 [4], GPT-J [47], and DALL-E [36] when using 8-bit optimizers, offloading the optimizer statistics to the CPU, gradient checkpointing, and parameter sharing." 
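To illustrate the kind of estimate such a calculator produces, here is a rough back-of-the-envelope sketch; it is our simplification (fp32 weights and gradients, Adam-style optimizer states) and mirrors the spirit of the web calculator rather than reproducing its exact numbers.

def training_memory_gb(n_params, optimizer_state_bytes=8, grad_bytes=4, weight_bytes=4):
    # fp32 Adam keeps two moments per parameter (8 bytes); an 8-bit optimizer needs about 2 bytes.
    return n_params * (weight_bytes + grad_bytes + optimizer_state_bytes) / 1e9

n_params = 1.1e9  # roughly the size of the model trained in this demonstration
print(training_memory_gb(n_params, optimizer_state_bytes=8))  # fp32 Adam  -> ~17.6 GB
print(training_memory_gb(n_params, optimizer_state_bytes=2))  # 8-bit Adam -> ~11.0 GB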
], [ "Tutorial on memory-efficiency techniques", "The demonstration webpage refers to a tutorial on setting up dataset streaming with the datasets library [23] and model compression with the bitsandbytes library [9].", "The goal of the tutorial is to fine-tune the GPT-2 Large model [33] on the C4 dataset [35] using only a low-end GPU, which is possible with the 8-bit Adam optimizer." ], [ "Model", "For the practical example of a collaborative training run, we chose to train a text-to-image transformer similar to DALL-E [36], based on the code from [48].", "Specifically, we used a decoder-only transformer with 1024 hidden units and 64 layers, each of which uses 16 attention heads with a per-head state size of 64 ($\\approx $ 1.1B parameters in total).", "We alternated the attention masks as in the original paper, i.e., repeated “row, column, row, row” masks until the last layer, which had the convolutional mask.", "To improve communication and memory efficiency, we tied weights of all “row, column, row, row” layer groups [21] and tied the input and output embeddings [32], so the model uses $\\approx $ 8x fewer parameters (but the same amount of compute).", "We also used reversible layers [5] to reduce memory usage and rotary embeddings [41] to improve training stability.", "We replaced dVAE with VQ-GAN [13], since it has a smaller reconstruction error.", "We used the checkpoint with $f{=}8$ and the codebook size 8192.", "Finally, we used CLIP ViT/B-32 [34] to choose the best 4 out of 128 generated images." ], [ "Dataset", "We trained the model on the first 100 million image-text pairs from LAION-400M [39].", "We skipped $\\approx $ 10% images due to short captions, extreme aspect ratios, and NSFW labels.", "Before training, we preprocessed all images with VQGAN and uploaded the VQGAN codes and captions, both compressed with Brotli [1], to the Hugging Face Dataset Hub [23].", "During training, we streamed the compressed codes instead of the original images, thus consuming $\\approx $ 18x less bandwidth." ], [ "Training procedure", "We followed the distributed training procedure from [11] and used the 8-bit LAMB optimizer [50], [9] offloaded to CPU.", "We used the linear training schedule with 31250 steps (the first 10% is the warm-up) and the peak learning rate of $2.5 \\cdot 10^{-3}$ .", "While exchanging gradients and parameters, we used the 8-bit quantization [8] for tensors with $\\ge 2^{16}$ elements and the 16-bit precision for other tensors.", "Unlike the original paper, we did not use PowerSGD [46]." ], [ "Results", "The training run lasted for 2.5 months and passed $\\approx $ 80% of the training schedule.", "Besides the authors, 37 volunteers have contributed for at least 10 minutes (see Appendix ).", "During inference, we note that limiting sampling to top 256 logits or top logits whose probability sums up to $p = 0.75$ greatly improves the image quality.", "The final model generates realistic images for some prompts but fails to draw correct shapes for the others, while using the appropriate image style, textures, and colors (see Appendix ).", "We attribute that to the fact that our model is too small to remember the full diversity of images in LAION-400M.", "Still, the model can generalize to the concepts not present in the dataset." ], [ "Model Inference Results", "px" ] ]
2207.03481
[ [ "A new type of results on probabilities of moderate deviations for i.i.d.\n random variables" ], [ "Abstract Let $\\{X, X_{n}; n \\geq 1\\}$ be a sequence of i.i.d.", "non-degenerate real-valued random variables with $\\mathbb{E}X^{2} < \\infty$.", "Let $S_{n} = \\sum_{i=1}^{n} X_{i}$, $n \\geq 1$.", "Let $g(\\cdot): ~[0, \\infty) \\rightarrow [0, \\infty)$ be a nondecreasing regularly varying function with index $\\rho \\geq 0$ and $\\lim_{t \\rightarrow \\infty} g(t) = \\infty$.", "Let $\\mu = \\mathbb{E}X$ and $\\sigma^{2} = \\mathbb{E}(X - \\mu)^{2}$.", "In this paper, we obtain precise asymptotic estimates for probabilities of moderate deviations by showing that, for all $x > 0$, \\[ \\limsup_{n \\rightarrow \\infty} \\frac{\\log \\mathbb{P}\\left(S_{n} - n \\mu > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)} = - \\left(\\frac{x^{2}}{2\\sigma^{2}} \\wedge \\frac{\\overline{\\lambda}_{1}}{2^{\\rho}} \\right), \\] \\[ \\liminf_{n \\rightarrow \\infty} \\frac{\\log \\mathbb{P}\\left(S_{n} - n \\mu > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)} = - \\left(\\frac{x^{2}}{2\\sigma^{2}} \\wedge \\frac{\\underline{\\lambda}_{1}}{2^{\\rho}} \\right), \\] \\[ \\limsup_{n \\rightarrow \\infty} \\frac{\\log \\mathbb{P}\\left(S_{n} - n \\mu < -x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)} = - \\left(\\frac{x^{2}}{2\\sigma^{2}} \\wedge \\frac{\\overline{\\lambda}_{2}}{2^{\\rho}} \\right), \\] and \\[ \\liminf_{n \\rightarrow \\infty} \\frac{\\log \\mathbb{P}\\left(S_{n} - n \\mu < -x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)} = - \\left(\\frac{x^{2}}{2\\sigma^{2}} \\wedge \\frac{\\underline{\\lambda}_{2}}{2^{\\rho}} \\right), \\] where $\\overline{\\lambda}_{1}$ are $\\underline{\\lambda}_{1}$ are determined by the asymptotic behavior of $\\mathbb{P}(X > t)$ and $\\overline{\\lambda}_{2}$ and $\\underline{\\lambda}_{2}$ are determined by the asymptotic behavior of $\\mathbb{P}(X < -t)$.", "Unlike those known results in the literature, the moderate deviation results established in this paper depend on both the variance and the asymptotic behavior of the tail distribution of $X$." 
], [ "Introduction", "Throughout this paper, let $\\lbrace X, X_n; n \\ge 1\\rbrace $ be a sequence of independent and identically distributed (i.i.d.)", "real-valued random variables defined on a probability space $(\\Omega , {\\cal F}, \\mathbb {P})$ and, as usual, let $S_{n} = \\sum _{i=1}^n X_i, n \\ge 1$ .", "Write $\\log t = \\log _{e} t$ , $t > 0$ and define $\\log 0 = - \\infty $ .", "It is well known that Cramér [9] and Chernoff [8] initiated the study of the theory of large deviations, that characterizes the exponential concentration behaviour, as $n \\rightarrow \\infty $ , of a sequence of probabilities $\\lbrace \\mathbb {P}(S_n / n \\in \\mathbf {A}) ; n \\ge 1\\rbrace $ , where $\\mathbf {A} \\subseteq (-\\infty , \\infty )$ .", "They showed that $\\left\\lbrace \\begin{array}{ll}& \\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n}/n\\in \\mathbf {A} \\right)}{n} \\le -\\Lambda (\\mathbf {A})~\\mbox{for every closed set}~\\mathbf {A} \\subseteq (-\\infty , \\infty ),$}\\\\&\\\\& \\mbox{$\\displaystyle \\liminf _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n}/n\\in \\mathbf {A} \\right)}{n} \\ge -\\Lambda (\\mathbf {A})~\\mbox{for every open set}~\\mathbf {A} \\subseteq (-\\infty , \\infty ),$}\\end{array}\\right.$ provided that $M(t) \\equiv \\mathbb {E}\\left(e^{tX} \\right) < \\infty ~~\\mbox{for all}~ t \\in (-\\infty , \\infty ),$ where, for $x \\in (-\\infty , \\infty )$ and $\\mathbf {A} \\subseteq (-\\infty , \\infty )$ , $\\Lambda (\\mathbf {A}) = \\inf _{x \\in \\mathbf {A}} I(x),~I(x) = \\sup _{t \\in (-\\infty , \\infty )} \\left(tx - \\log M(t) \\right).$ This fundamental result, which describes on the scale of a law of large number type ergodic phenomenon, is what we call the Cramér-Chernoff large deviation principle (in short, LDP) for $\\lbrace S_{n}; ~n \\ge 1 \\rbrace $ .", "Clearly, the rate function $I(x), ~x \\in (-\\infty , \\infty )$ of the LDP in (1.1) is determined by the moment generating function $M(t), ~t \\in (-\\infty , \\infty )$ of random variable $X$ .", "Donsker and Varadhan [12] and Bahadur and Zabell [1] established an LDP for sums of i.i.d.", "Banach space-valued random variables.", "Bolthausen [3] extended the Cramér-Chernoff-Donsker-Varadhan-Bahadur-Zabell LDP when the laws of the random variables converge weakly and satisfy a uniform exponential integrability condition.", "As an application of the Bolthausen LDP, Li, Rosalsky, and Al-Mutairi [21] established an LDP for bootstrapped sample means.", "Since large deviation theory deals with the decay of the probability of increasingly unlikely events, it has applications in many different scientific fields, ranging from queuing theory to statistics and from finance to engineering.", "There have been a great number of investigations on the probabilities of large deviations for sums of independent random variables.", "Surveys of these investigations can be found in Book [4], Dembo and Zeitouni [11], Petrov [25, 26]), Saulis and Statulevic̆ius [27], Stroock [29], etc.", "Inspired by the results Gantert [14] and Hu and Nyrhinen [15], Li and Miao [18] established an LDP, on the scale $\\log n$ , for partial sums of i.i.d.", "$\\mathbf {B}$ -valued random variables.", "For the special case $\\mathbf {B} = (- \\infty , \\infty )$ , if $S_n / n^{1/p} \\rightarrow _{\\mathbb {P}} 0$ for some $0<p<2$ then, for all $s > 0$ , $\\left\\lbrace \\begin{array}{ll}& \\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\mathbb 
{P}\\left(\\left|S_{n} \\right| > s n^{1/p} \\right)}{\\log n}= - (\\overline{\\beta } -p)/p,$}\\\\&\\\\& \\mbox{$\\displaystyle \\liminf _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n} \\right| > s n^{1/p} \\right)}{\\log n}= -( \\underline{\\beta } -p)/p,$}\\end{array}\\right.$ where $\\overline{\\beta } = - \\limsup _{t \\rightarrow \\infty }\\frac{\\log \\mathbb {P}(\\log |X| > t)}{t}~~\\mbox{and}~~\\underline{\\beta } = - \\liminf _{t \\rightarrow \\infty }\\frac{\\log \\mathbb {P}(\\log |X| > t)}{t}.$ In particular, under the same hypotheses, for all $s>0$ , $\\lim _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n} \\right| > s n^{1/p} \\right)}{\\log n}= - ({\\beta } -p)/p ~~\\mbox{if and only if}~~\\overline{\\beta }= \\underline{\\beta }= \\beta .$ As a special case of (1.2), the main results of Hu and Nyrhinen [15] are not only improved, but also extended.", "Recently, a similar large deviation result to (1.2) for partial sums of i.i.d.", "random variables with super-heavy tailed distribution is established in Li, Miao, and Stoica [19] which extends in particular the results of Stoica [28] and Nakata [22].", "We must point out that the large deviation results established in both Li and Miao [18] and Li, Miao, and Stoica [19] depend only on the asymptotic behaviour of the tail distribution $\\mathbb {P}(|X| > t)$ as $t \\rightarrow \\infty $ .", "In the another direction, under the Cramér condition, which asserts that $\\mathbb {E}\\left(e^{t|X|} \\right) < \\infty ~~\\mbox{for some}~ t > 0,$ Petrov [23] obtained asymptotic expansions for $\\mathbb {P} \\left(S_{n} > n \\mu + n^{1/2}x \\right) ~~\\mbox{and}~~\\mathbb {P} \\left(S_{n} < n \\mu - n^{1/2}x \\right)$ for $x \\ge 0$ and $x = {\\it o}\\left(n^{1/2} \\right)$ , where $\\mu = \\mathbb {E}X$ .", "Let $\\lbrace b_{n};~n \\ge 1 \\rbrace $ be a sequence of positive real numbers such that $\\lim _{n \\rightarrow \\infty } \\frac{b_{n}}{n} = 0~~\\mbox{and}~~\\lim _{n \\rightarrow \\infty } \\frac{b_{n}}{\\sqrt{n}} = \\infty .$ Then it follows from the Petrov asymptotic expansions [23] that $\\left\\lbrace \\begin{array}{ll}& \\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty } \\frac{n}{b_{n}^{2}} \\log \\mathbb {P}\\left(\\frac{S_{n} - n \\mu }{b_{n}}\\in \\mathbf {A} \\right) \\le - \\inf _{x \\in \\mathbf {A}}\\frac{x^{2}}{2 \\sigma ^{2}}~\\mbox{for closed }~\\mathbf {A} \\subseteq (-\\infty , \\infty ),$}\\\\&\\\\& \\mbox{$\\displaystyle \\liminf _{n \\rightarrow \\infty } \\frac{n}{b_{n}^{2}} \\log \\mathbb {P}\\left(\\frac{S_{n} - n \\mu }{b_{n}}\\in \\mathbf {A} \\right) \\ge - \\inf _{x \\in \\mathbf {A}}\\frac{x^{2}}{2 \\sigma ^{2}}~\\mbox{for open}~\\mathbf {A} \\subseteq (-\\infty , \\infty ),$}\\\\\\end{array}\\right.$ where $\\sigma ^{2} = \\mbox{Var}\\left(X - \\mathbb {E}X \\right)^{2}$ .", "This classical result, which describes the probabilities on a scale between a law of large numbers and some sort of central limit theorem, is what we call the moderate deviation principle (in short, MDP) for $\\lbrace S_{n}; ~n \\ge 1 \\rbrace $ .", "Under the assumptions that $\\mathbb {E}X = 0, ~\\frac{b_{n}}{n} \\downarrow 0,~~\\mbox{and}~~\\frac{b_{n}}{\\sqrt{n}} \\uparrow \\infty ~~\\mbox{as}~ n \\rightarrow \\infty ,$ Eichelsbacher and Löwe [13] showed that (1.3) holds for some $\\sigma ^{2} < \\infty $ if and only if $\\sigma ^{2} = \\mathbb {E}X^{2}~~\\mbox{and}~~\\lim _{n \\rightarrow \\infty } \\frac{n}{b_{n}^{2}} \\log \\left(n \\mathbb {P}\\left(|X| > b_{n} 
\\right) \\right) = - \\infty .$ Clearly, the rate function $I(x) = \\frac{x^{2}}{2 \\sigma ^{2}}, ~x \\in (-\\infty , \\infty )$ of the MDP in (1.3) is determined by the variance of random variable $X$ .", "Borovkov and Mogul'skiĭ [5], Chen [6, 7], de Acosta [10], and Ledoux [16] obtained versions of (1.3) in a Banach space setting under various conditions.", "Motivated by (1.2), (1.3), and the work of Eichelsbacher and Löwe [13], it is natural to ask if any MDP result for partial sums of $\\lbrace X, X_{n};~n \\ge 1 \\rbrace $ is determined by either the variance of $X$ only or the tail distribution of $X$ only.", "In this paper, we focus on this problem to study the MDP under the condition $\\mathbb {E}X^{2} < \\infty $ only.", "On the scale $g(\\log n)$ , we obtain precise asymptotic estimates for the probabilities of moderate deviations of the form $\\log \\mathbb {P}\\left(S_{n} - n \\mu > x \\sqrt{ng(\\log n)} \\right)$ , $\\log \\mathbb {P}\\left(S_{n} - n \\mu < -x \\sqrt{ng(\\log n)} \\right)$ , and $\\log \\mathbb {P}\\left(\\left|S_{n} - n \\mu \\right| > x \\sqrt{ng(\\log n)} \\right)$ for all $x > 0$ , where $g(\\cdot )$ : $[0, \\infty ) \\rightarrow [0, \\infty )$ is a non-decreasing regularly varying function with index $\\rho \\ge 0$ and $\\lim _{t \\rightarrow \\infty } g(t) = \\infty $ .", "From the results established in this paper, we can see that, unlike those known results, the moderate deviation results established in this paper depend on both the variance and the asymptotic behavior of the tail distribution of the random variable $X$ and hence, the answer to the problem stated at the beginning of this paragraph is negative.", "The plan of the paper is as follows.", "Our main results Theorems 2.1 and 2.2, which are some general results on probabilities of moderate deviations for partial sums of $\\lbrace X, X_{n};~n \\ge 1 \\rbrace $ , are presented in Section 2.", "Some preliminary results needed to prove the main results are listed (and proved) in Section 3.", "The proofs of Theorems 2.1 and 2.2 are given in Sections 4 and 5 respectively.", "The truncation technique, conditional probability technique, two preliminary results on the regularly varying functions, a maximal inequality, and Kolmogorov exponential inequalities are paramount in the proof of Theorem 2.1.", "The main tools employed in proving Theorem 2.2 are the symmetrization technique, Kolmogorov strong law of large numbers, Kolmogorov exponential inequalities, and the method of proof by contradiction." 
], [ "Statement of the main results", "Before we can formulate our results, we need some extra notation.", "For any real numbers $a$ and $b$ , let $a \\wedge b = \\min \\lbrace a, b\\rbrace $ , $a \\vee b = \\max \\lbrace a, b\\rbrace $ , $a^{+} = a \\vee 0$ , and $a^{-} = (-a)\\vee 0$ .", "For any real numbers $x \\in (0, \\infty )$ and $y \\in (-\\infty , \\infty )$ put, by convention, $(\\pm \\infty + y)/x = \\pm \\infty $ .", "Let $\\rho \\ge 0$ and let $\\mathcal {V}_{\\rho }$ be the set of all nondecreasing regularly varying functions with index $\\rho $ and $\\lim _{t \\rightarrow \\infty } g(t) = \\infty $ .", "Thus, if $g(\\cdot ) \\in \\mathcal {V}_{\\rho }$ , then $\\lim _{t \\rightarrow \\infty } \\frac{g(xt)}{g(t)} = x^{\\rho }~~\\mbox{for all}~ x > 0.$ Clearly, if $g(\\cdot ) \\in \\mathcal {V}_{0}$ , then $g(\\cdot )$ is slowly varying at infinity; if $g(\\cdot )$ is a nondecreasing regularly varying function with index $\\rho > 0$ , then $\\lim _{t \\rightarrow \\infty } g(t) = \\infty $ automatically.", "Let $X$ be a real-valued random variable.", "For any given $g(\\cdot ) \\in \\mathcal {V}_{\\rho }$ , write $\\overline{\\lambda }_{1}= - \\limsup _{t \\rightarrow \\infty } \\frac{\\log \\left(t^{2} \\mathbb {P}(X > t ) \\right)}{g(\\log t)}~\\mbox{and}~\\underline{\\lambda }_{1} = - \\liminf _{t \\rightarrow \\infty }\\frac{\\log \\left(t^{2} \\mathbb {P}(X > t ) \\right)}{g(\\log t)},$ $\\overline{\\lambda }_{2}= - \\limsup _{t \\rightarrow \\infty } \\frac{\\log \\left(t^{2} \\mathbb {P}(X < -t ) \\right)}{g(\\log t)}~\\mbox{and}~\\underline{\\lambda }_{2} = - \\liminf _{t \\rightarrow \\infty }\\frac{\\log \\left(t^{2} \\mathbb {P}(X < -t ) \\right)}{g(\\log t)},$ and $\\overline{\\lambda }= - \\limsup _{t \\rightarrow \\infty } \\frac{\\log \\left(t^{2} \\mathbb {P}(|X| > t ) \\right)}{g(\\log t)}~\\mbox{and}~\\underline{\\lambda } = - \\liminf _{t \\rightarrow \\infty }\\frac{\\log \\left(t^{2} \\mathbb {P}(|X| > t ) \\right)}{g(\\log t)}.$ Clearly, $\\overline{\\lambda }_{1}$ and $\\underline{\\lambda }_{1}$ defined in (2.1) are two parameters of $X$ determined by the asymptotic behavior of the tail distribution $\\mathbb {P}(X > t)$ as $t \\rightarrow \\infty $ , $\\overline{\\lambda }_{2}$ and $\\underline{\\lambda }_{2}$ defined in (2.2) are two parameters of $X$ determined by the asymptotic behavior of the tail distribution $\\mathbb {P}(X < -t)$ as $t \\rightarrow \\infty $ , and $\\overline{\\lambda }$ and $\\underline{\\lambda }$ defined in (2.3) are two parameters of $X$ determined by the asymptotic behavior of the tail distribution $\\mathbb {P}(|X| > t)$ as $t \\rightarrow \\infty $ .", "The following Theorem 2.1 provides general and precise moderate deviation results for the partial sums of $\\lbrace X, X_{n}; n \\ge 1 \\rbrace $ under the finite second moment condition only.", "Theorem 2.1 Let $\\lbrace X, X_{n}; n \\ge 1\\rbrace $ be a sequence of i.i.d.", "non-degenerate real-valued random variables with $\\mathbb {E}X^{2} < \\infty $ .", "Write $\\mu = \\mathbb {E}X$ and $\\sigma ^{2} = \\mathbb {E}(X - \\mu )^{2} \\in (0, \\infty )$ .", "Then, for any given $g(\\cdot ) \\in \\mathcal {V}_{\\rho }$ , we have $\\left\\lbrace \\begin{array}{ll}&\\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n} - n \\mu > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}= - \\left(\\frac{x^{2}}{2\\sigma ^{2}} \\wedge \\frac{\\overline{\\lambda }_{1}}{2^{\\rho }} \\right)~\\mbox{for all}~x > 0$,}\\\\&\\\\&\\mbox{$\\displaystyle 
\\liminf _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n} - n \\mu > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}= - \\left(\\frac{x^{2}}{2\\sigma ^{2}} \\wedge \\frac{\\underline{\\lambda }_{1}}{2^{\\rho }} \\right)~\\mbox{for all}~x > 0$,}\\end{array}\\right.$ $\\left\\lbrace \\begin{array}{ll}&\\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n} - n \\mu < -x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}= - \\left(\\frac{x^{2}}{2\\sigma ^{2}} \\wedge \\frac{\\overline{\\lambda }_{2}}{2^{\\rho }} \\right)~\\mbox{for all}~x > 0$,}\\\\&\\\\&\\mbox{$\\displaystyle \\liminf _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n} - n \\mu < - x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}= - \\left(\\frac{x^{2}}{2\\sigma ^{2}} \\wedge \\frac{\\underline{\\lambda }_{2}}{2^{\\rho }} \\right)~\\mbox{for all}~x > 0$,}\\end{array}\\right.$ and $\\left\\lbrace \\begin{array}{ll}&\\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n} - n \\mu \\right| > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}= - \\left(\\frac{x^{2}}{2\\sigma ^{2}} \\wedge \\frac{\\overline{\\lambda }}{2^{\\rho }} \\right)~\\mbox{for all}~x > 0$,}\\\\&\\\\&\\mbox{$\\displaystyle \\liminf _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n} - n \\mu \\right| > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}= - \\left(\\frac{x^{2}}{2\\sigma ^{2}} \\wedge \\frac{\\underline{\\lambda }}{2^{\\rho }} \\right)~\\mbox{for all}~x > 0$.", "}\\end{array}\\right.$ Hence, $\\lim _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n}- n \\mu > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}= - \\left(\\frac{x^{2}}{2\\sigma ^{2}} \\wedge \\frac{\\hat{\\lambda }_{1}}{2^{\\rho }} \\right)~~\\mbox{for all}~ x > 0~\\mbox{if and only if}~ \\overline{\\lambda }_{1} = \\underline{\\lambda }_{1} = \\hat{\\lambda }_{1},$ $\\lim _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n} - n \\mu < - x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}= - \\left(\\frac{x^{2}}{2\\sigma ^{2}} \\wedge \\frac{\\hat{\\lambda }_{1}}{2^{\\rho }} \\right)~~\\mbox{for all}~ x > 0~\\mbox{if and only if}~ \\overline{\\lambda }_{2} = \\underline{\\lambda }_{2} = \\hat{\\lambda }_{2},$ and $\\lim _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n} - n \\mu \\right| > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}= - \\left(\\frac{x^{2}}{2\\sigma ^{2}} \\wedge \\frac{\\hat{\\lambda }}{2^{\\rho }} \\right)~~\\mbox{for all}~ x > 0~\\mbox{if and only if}~ \\overline{\\lambda } = \\underline{\\lambda } = \\hat{\\lambda }.$ In particular, $\\lim _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n} - n \\mu > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}= - \\frac{x^{2}}{2\\sigma ^{2}}~~\\mbox{for all}~ x > 0~\\mbox{if and only if}~ \\overline{\\lambda }_{1} = \\infty ,$ $\\lim _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n} - n \\mu < - x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}= - \\frac{x^{2}}{2\\sigma ^{2}}~~\\mbox{for all}~ x > 0~\\mbox{if and only if}~ \\overline{\\lambda }_{2} = \\infty ,$ and $\\lim _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n} - n \\mu \\right| > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}= - \\frac{x^{2}}{2\\sigma ^{2}}~~\\mbox{for all}~ x > 0~\\mbox{if and only if}~ \\overline{\\lambda } = \\infty .$ Remark 2.1 (i)  It is interesting to see that, on the scale $g(\\log n)$ , Theorem 2.1 provides us precise asymptotic estimates for the 
probabilities of moderate deviations of $\\log \\mathbb {P}\\left(S_{n} - n \\mu > x \\sqrt{ng(\\log n)} \\right)$ , $\\log \\mathbb {P}\\left(S_{n} - n \\mu < -x \\sqrt{ng(\\log n)} \\right)$ , and $\\log \\mathbb {P}\\left(\\left|S_{n} - n \\mu \\right| > x \\sqrt{ng(\\log n)} \\right)$ for all $x > 0$ and such moderate deviation results depend only on both the variance and the asymptotic behavior of the tail distribution of $X$ .", "(ii)  Since, for all $t > 0$ , $\\begin{array}{lll}\\mbox{$\\displaystyle \\frac{\\log \\left(t^{2} \\mathbb {P}(X > t ) \\right)}{g(\\log t)} \\vee \\frac{\\log \\left(t^{2} \\mathbb {P}(X < -t ) \\right)}{g(\\log t)}$}& \\le &\\mbox{$\\displaystyle \\frac{\\log \\left(t^{2} \\mathbb {P}(|X| > t ) \\right)}{g(\\log t)}$}\\\\&&\\\\& \\le &\\mbox{$\\displaystyle \\frac{\\log \\left(2 \\left(t^{2} \\mathbb {P}(X > t ) \\vee t^{2} \\mathbb {P}(X < -t ) \\right)\\right)}{g(\\log t)}$}\\\\&&\\\\& = &\\mbox{$\\displaystyle \\frac{\\log 2}{g(\\log t)} + \\left(\\frac{\\log \\left(t^{2} \\mathbb {P}(X > t ) \\right)}{g(\\log t)} \\vee \\frac{\\log \\left(t^{2} \\mathbb {P}(X < -t ) \\right)}{g(\\log t)}\\right),$}\\end{array}$ we have $-\\overline{\\lambda } = \\left(-\\overline{\\lambda }_{1}\\right) \\vee \\left(-\\overline{\\lambda }_{2} \\right);~\\mbox{i.e.,}~~\\overline{\\lambda } = \\overline{\\lambda }_{1} \\wedge \\overline{\\lambda }_{2}.$ Similarly, we can show that $\\left(-\\underline{\\lambda }_{1}\\right) \\vee \\left(-\\underline{\\lambda }_{2} \\right) \\le -\\underline{\\lambda };~\\mbox{i.e.,}~~\\underline{\\lambda } \\le \\underline{\\lambda }_{1} \\wedge \\underline{\\lambda }_{2}.$ However, the assertion $\\underline{\\lambda } = \\underline{\\lambda }_{1} \\wedge \\underline{\\lambda }_{2} $ is not true.", "Remark 2.2 If, for the given $g(\\cdot ) \\in \\mathcal {V}_{\\rho }$ , $\\lim _{t \\rightarrow \\infty } \\frac{g(t)}{t} = \\infty $ (for such case, $\\rho \\ge 1$ ), then $\\lim _{t \\rightarrow \\infty } \\frac{\\log t^{2}}{g(\\log t)} = 0$ and hence, one can easily see that the $\\overline{\\lambda }_{1}$ , $\\underline{\\lambda }_{1}$ , $\\overline{\\lambda }_{2}$ , $\\underline{\\lambda }_{2}$ , $\\overline{\\lambda }$ , and $\\underline{\\lambda }$ can also be defined respectively by $\\overline{\\lambda }_{1}= - \\limsup _{t \\rightarrow \\infty } \\frac{\\log \\mathbb {P}(X > t )}{g(\\log t)},~~\\underline{\\lambda }_{1} = - \\liminf _{t \\rightarrow \\infty }\\frac{\\log \\mathbb {P}(X > t )}{g(\\log t)},$ $\\overline{\\lambda }_{2}= - \\limsup _{t \\rightarrow \\infty } \\frac{\\log \\mathbb {P}(X < -t )}{g(\\log t)},~~\\underline{\\lambda }_{2} = - \\liminf _{t \\rightarrow \\infty }\\frac{\\log \\mathbb {P}(X < -t )}{g(\\log t)},$ $\\overline{\\lambda }= - \\limsup _{t \\rightarrow \\infty } \\frac{\\log \\mathbb {P}(|X| > t )}{g(\\log t)},~\\mbox{and}~~\\underline{\\lambda } = - \\liminf _{t \\rightarrow \\infty }\\frac{\\log \\mathbb {P}(|X| > t )}{g(\\log t)}.$ Remark 2.3 Under the assumptions of Theorem 2.1, one can show that the $\\overline{\\lambda }_{1}$ , $\\underline{\\lambda }_{1}$ , $\\overline{\\lambda }_{2}$ , $\\underline{\\lambda }_{2}$ , $\\overline{\\lambda }$ , and $\\underline{\\lambda }$ can also be defined respectively by $\\overline{\\lambda }_{1} = \\sup \\left\\lbrace r \\ge 0: \\lim _{t \\rightarrow \\infty } t^{2}e^{r g(\\log t)}\\mathbb {P}(X > t) = 0 \\right\\rbrace ,~\\underline{\\lambda }_{1} = \\sup \\left\\lbrace r \\ge 0: \\liminf _{t \\rightarrow \\infty } t^{2}e^{r g(\\log t)}\\mathbb {P}(X > 
t) = 0 \\right\\rbrace ,$ $\\overline{\\lambda }_{2} = \\sup \\left\\lbrace r \\ge 0: \\lim _{t \\rightarrow \\infty } t^{2}e^{r g(\\log t)}\\mathbb {P}(X < -t) = 0 \\right\\rbrace ,~\\underline{\\lambda }_{2} = \\sup \\left\\lbrace r \\ge 0: \\liminf _{t \\rightarrow \\infty } t^{2}e^{r g(\\log t)}\\mathbb {P}(X < -t) = 0 \\right\\rbrace ,$ $\\overline{\\lambda } = \\sup \\left\\lbrace r \\ge 0: \\lim _{t \\rightarrow \\infty } t^{2}e^{r g(\\log t)}\\mathbb {P}(|X| > t) = 0 \\right\\rbrace ,~\\mbox{and}~ \\underline{\\lambda } = \\sup \\left\\lbrace r \\ge 0: \\liminf _{t \\rightarrow \\infty } t^{2}e^{r g(\\log t)}\\mathbb {P}(|X| > t) = 0 \\right\\rbrace .$ The proofs are left to the reader.", "Remark 2.4 Let $\\lbrace X, X_{n}; n \\ge 1\\rbrace $ be a sequence of i.i.d.", "non-degenerate real-valued random variables with $\\mathbb {E}X^{2} < \\infty $ .", "Then, following from Theorem 2.1 and Remark 2.3, we have the following two special cases which are related to the law of the iterated logarithm for partial sums of i.i.d.", "random variables.", "(i)  If $g(t) = t$ , $t \\ge 0$ , then all conclusions in Theorem 2.1 hold with $g(\\log n) = \\log n$ , $\\rho = 1$ , and $\\overline{\\lambda }_{1}$ , $\\underline{\\lambda }_{1}$ , $\\overline{\\lambda }_{2}$ , $\\underline{\\lambda }_{2}$ , $\\overline{\\lambda }$ , and $\\underline{\\lambda }$ defined by $\\overline{\\lambda }_{1} = \\sup \\left\\lbrace r \\ge 0: \\lim _{t \\rightarrow \\infty } t^{2+r}\\mathbb {P}(X > t) = 0 \\right\\rbrace ,~\\underline{\\lambda }_{1} = \\sup \\left\\lbrace r \\ge 0: \\liminf _{t \\rightarrow \\infty } t^{2+r}\\mathbb {P}(X > t) = 0 \\right\\rbrace ,$ $\\overline{\\lambda }_{2} = \\sup \\left\\lbrace r \\ge 0: \\lim _{t \\rightarrow \\infty } t^{2+r}\\mathbb {P}(X < -t) = 0 \\right\\rbrace ,~\\underline{\\lambda }_{2} = \\sup \\left\\lbrace r \\ge 0: \\liminf _{t \\rightarrow \\infty } t^{2+r}\\mathbb {P}(X < -t) = 0 \\right\\rbrace ,$ $\\overline{\\lambda } = \\sup \\left\\lbrace r \\ge 0: \\lim _{t \\rightarrow \\infty } t^{2+r}\\mathbb {P}(|X| > t) = 0 \\right\\rbrace ,~\\mbox{and}~ \\underline{\\lambda } = \\sup \\left\\lbrace r \\ge 0: \\liminf _{t \\rightarrow \\infty } t^{2+2}\\mathbb {P}(|X| > t) = 0 \\right\\rbrace .$ In particular, $\\lim _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n} > x \\sqrt{n \\log n} \\right)}{\\log n}= - \\frac{x^{2}}{2\\sigma ^{2}}~~\\mbox{for all}~ x > 0~\\mbox{if and only if}~ \\mathbb {E}\\left(X^{+} \\right)^{r} < \\infty ~~\\mbox{for all}~ r > 0,$ $\\lim _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n} < - x \\sqrt{n \\log n} \\right)}{\\log n}= - \\frac{x^{2}}{2\\sigma ^{2}}~~\\mbox{for all}~ x > 0~\\mbox{if and only if}~ \\mathbb {E}\\left(X^{-} \\right)^{r} < \\infty ~~\\mbox{for all}~ r > 0,$ and $\\lim _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n} \\right| > x \\sqrt{n \\log n} \\right)}{\\log n}= - \\frac{x^{2}}{2\\sigma ^{2}}~~\\mbox{for all}~ x > 0~\\mbox{if and only if}~ \\mathbb {E}|X|^{r} < \\infty ~~\\mbox{for all}~ r > 0.$ (ii)  If $g(t) = \\log (t \\vee 1)$ , $t \\ge 0$ , then all conclusions in Theorem 2.1 hold with $g(\\log n) = \\log \\log n$ ($n \\ge 3$ ), $\\rho = 0$ , and $\\overline{\\lambda }_{1}$ , $\\underline{\\lambda }_{1}$ , $\\overline{\\lambda }_{2}$ , $\\underline{\\lambda }_{2}$ , $\\overline{\\lambda }$ , and $\\underline{\\lambda }$ defined by $\\overline{\\lambda }_{1} = \\sup \\left\\lbrace r \\ge 0: \\lim _{t \\rightarrow \\infty } t^{2}(\\log t)^{r} \\mathbb {P}(X > 
t) = 0 \\right\\rbrace ,~\\underline{\\lambda }_{1} = \\sup \\left\\lbrace r \\ge 0: \\liminf _{t \\rightarrow \\infty } t^{2}(\\log t)^{r}\\mathbb {P}(X > t) = 0 \\right\\rbrace ,$ $\\overline{\\lambda }_{2} = \\sup \\left\\lbrace r \\ge 0: \\lim _{t \\rightarrow \\infty }t^{2}(\\log t)^{r}\\mathbb {P}(X < -t) = 0 \\right\\rbrace ,~\\underline{\\lambda }_{2} = \\sup \\left\\lbrace r \\ge 0: \\liminf _{t \\rightarrow \\infty }t^{2}(\\log t)^{r} \\mathbb {P}(X < -t) = 0 \\right\\rbrace ,$ $\\overline{\\lambda } = \\sup \\left\\lbrace r \\ge 0: \\lim _{t \\rightarrow \\infty } t^{2}(\\log t)^{r} \\mathbb {P}(|X| > t) = 0 \\right\\rbrace ,~\\mbox{and}~ \\underline{\\lambda } = \\sup \\left\\lbrace r \\ge 0: \\liminf _{t \\rightarrow \\infty } t^{2}(\\log t)^{r} \\mathbb {P}(|X| > t) = 0 \\right\\rbrace .$ In particular $\\lim _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n} > x \\sqrt{n \\log \\log n} \\right)}{\\log \\log n}= - \\frac{x^{2}}{2\\sigma ^{2}}~~\\mbox{for all}~ x > 0~\\mbox{if and only if}~ \\mathbb {E}\\left((X^{+})^{2} (\\log X^{+})^{r} \\right) < \\infty ~~\\mbox{for all}~ r > 0,$ $\\lim _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n} < -x \\sqrt{n \\log \\log n} \\right)}{\\log \\log n}= - \\frac{x^{2}}{2\\sigma ^{2}}~~\\mbox{for all}~ x > 0~\\mbox{if and only if}~ \\mathbb {E}\\left((X^{-})^{2} (\\log X^{-})^{r} \\right) < \\infty ~~\\mbox{for all}~ r > 0,$ and $\\lim _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n}\\right| > x \\sqrt{n \\log \\log n} \\right)}{\\log \\log n}= - \\frac{x^{2}}{2\\sigma ^{2}}~~\\mbox{for all}~ x > 0~\\mbox{if and only if}~ \\mathbb {E}\\left(X^{2} (\\log |X|)^{r} \\right) < \\infty ~~\\mbox{for all}~ r > 0.$ The following Theorem 2.2 shows that $0 < \\sigma ^{2} = \\mathbb {E}(X - \\mu )^{2} < \\infty $ is necessary for the moderate deviation results established in Theorems 2.1.", "Theorem 2.2 Let $\\lbrace X, X_{n}; n \\ge 1\\rbrace $ be a sequence of i.i.d.", "real-valued random variables.", "Then, for any given $\\eta \\in (-\\infty , \\infty )$ and $g(\\cdot ) \\in \\mathcal {V}_{\\rho }$ for some $\\rho \\ge 0$ , we have: (i)  The following three statements are equivalent: $\\lim _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n} - n \\eta \\right| > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)} = 0~~\\mbox{for all} ~x > 0;$ $\\lim _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n} - n \\eta \\right| > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)} = 0~~\\mbox{for some} ~x > 0;$ $\\mbox{Either} ~~\\mathbb {E}X \\ne \\eta ~~\\mbox{or}~~\\mathbb {E}X^{2} = \\infty ~~\\mbox{or}~ \\sigma ^{2} = \\mbox{Var}(X) \\in (0, \\infty ) ~\\mbox{and}~~\\lambda _{1} = \\lambda _{2} = 0.$ (ii)  The following three statements are equivalent: $-\\infty < \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n} - n \\eta \\right| > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)} < 0~~\\mbox{for all} ~x > 0;$ $-\\infty < \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n} - n \\eta \\right| > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)} < 0~~\\mbox{for some} ~x > 0;$ $\\sigma ^{2} = \\mbox{Var}(X) \\in (0, \\infty )~ \\mbox{and}~\\lambda _{1} > 0.$ (iii)  The following three statements are equivalent: $-\\infty < \\liminf _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n} - n \\eta \\right| > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)} < 0~~\\mbox{for all} ~x > 0;$ $-\\infty < \\liminf _{n 
\\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n} - n \\eta \\right| > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)} < 0~~\\mbox{for some} ~x > 0;$ $\\sigma ^{2} = \\mbox{Var}(X) \\in (0, \\infty )~ \\mbox{and}~\\lambda _{2} > 0.$" ], [ "Preliminary lemmas", "In this section, we collect four preliminary lemmas needed for the proofs of our main results.", "We need some additional notation.", "Let $m(Y)$ denote a median for a real-valued random variable $Y$ .", "We put $m(-Y) = -m(Y)$ .", "The following lemma is used to prove Theorem 2.1.", "The first part was obtained by Li and Rosalsky [20, Lemma 3.1] and the second part was established by Petrov [24] (also see Petrov [26, Theorem 2.1]).", "Lemma 3.1 Let $\\lbrace V_{k};~1 \\le k \\le n \\rbrace $ be a finite sequence of independent real-valued random variables and set $T_{0} = 0$ and $T_{k} = V_{1} + \\cdots + V_{k}$ , $1 \\le k \\le n$ .", "Then, for every real $t$ , $\\mathbb {P}\\left(\\max _{1 \\le k \\le n} \\left(V_{k} + m\\left(T_{k-1}\\right) \\right) > t \\right)\\le 2 \\mathbb {P}\\left(\\max _{1 \\le k \\le n} T_{k} > t \\right),$ $\\mathbb {P}\\left(\\max _{1 \\le k \\le n} \\left(T_{k} + m\\left(T_{n} - T_{k}\\right) \\right) > t \\right)\\le 2 \\mathbb {P}\\left(T_{n} > t \\right).$ Lemma 3.2 Let $Y$ be a non-negative random variable.", "Let $p(\\cdot ): ~[t_{1}, \\infty ) \\rightarrow [0, \\infty )$ be a non-decreasing function such that $0 < p(2t) \\le b p(t), ~t \\ge t_{1}~~\\mbox{and}~~\\lim _{t \\rightarrow \\infty } p(t) \\mathbb {P}(Y > t) = 0,$ where $b > 1$ and $t_{1} > 0$ are two constants.", "Let $h(\\cdot ): ~[t_{2}, \\infty ) \\rightarrow [0, \\infty )$ be a non-decreasing function such that $\\lim _{t \\rightarrow \\infty } h(t) = \\infty ~~\\mbox{and}~~\\lim _{t \\rightarrow \\infty } \\frac{h(t+1)}{h(t)} = 1,$ where $t_{2} > 0$ is a constant.", "Then $\\limsup _{n \\rightarrow \\infty } \\frac{\\log \\left(p(n) \\mathbb {P}\\left(Y > n \\right)\\right)}{h(n)} = \\limsup _{t \\rightarrow \\infty }\\frac{\\log \\left(p(t) \\mathbb {P}\\left(Y > t \\right)\\right)}{h(t)}$ and $\\liminf _{n \\rightarrow \\infty }\\frac{\\log \\left(p(n) \\mathbb {P}\\left(Y > n \\right)\\right)}{h(n)} = \\liminf _{t \\rightarrow \\infty }\\frac{\\log \\left(p(t) \\mathbb {P}\\left(Y > t \\right)\\right)}{h(t)}.$ Proof  Since $p(\\cdot ): ~[t_{1}, \\infty ) \\rightarrow [0, \\infty )$ is a non-decreasing function with (3.1), we have, for $n \\le t < n + 1$ and all sufficiently large $n$ , $\\begin{array}{lll}\\mbox{$\\displaystyle \\frac{1}{b} p(n+1)\\mathbb {P}(Y > n+1)$}& \\le & \\mbox{$\\displaystyle p(n) \\mathbb {P}(Y > n+1)$}\\\\&&\\\\& \\le & \\mbox{$\\displaystyle p(t) \\mathbb {P}(Y > t) \\le p(n+1) \\mathbb {P}(Y > n)$}\\\\&&\\\\& \\le & \\mbox{$\\displaystyle bp(n) \\mathbb {P}(Y > n)$}\\\\&&\\\\& \\le & 1.\\end{array}$ Hence, for $n \\le t < n+1$ and all sufficiently large $n$ , $\\begin{array}{lll}0 & \\le &\\mbox{$\\displaystyle - \\log \\left(b p(n) \\mathbb {P}(Y > n)\\right)$}\\\\&&\\\\& \\le & \\mbox{$\\displaystyle - \\log \\left(p(t) \\mathbb {P}(Y > t) \\right)$}\\\\&&\\\\& \\le & \\mbox{$\\displaystyle - \\log \\left(\\frac{1}{b} p(n+1)\\mathbb {P}(Y > n + 1)\\right).$}\\end{array}$ Since $h(\\cdot ): ~[t_{2}, \\infty ) \\rightarrow [0, \\infty )$ is a non-decreasing function, we have, for $n \\le t < n + 1$ and all sufficiently large $n$ , $\\begin{array}{lll}0 & \\le & \\mbox{$\\displaystyle \\frac{h(n)}{h(n+1)} \\left(- \\frac{\\log b + \\log \\left(p(n) \\mathbb {P}(Y > 
n)\\right)}{h(n)}\\right)$}\\\\&&\\\\& \\le & \\mbox{$\\displaystyle - \\frac{\\log \\left(p(t) \\mathbb {P}(Y > t)\\right)}{h(t)} $}\\\\&&\\\\& \\le & \\mbox{$\\displaystyle \\frac{h(n+1)}{h(n)} \\left(- \\frac{- \\log b + \\log \\left( p(n+1) \\mathbb {P}(Y > n+1)\\right)}{h(n+1)}\\right).$}\\end{array}$ Thus, (3.3) and (3.4) follow from (3.5) and (3.2).", "$\\Box $ Lemma 3.3 Let $X$ be a real-valued random variable with $\\mathbb {E}X^{2} < \\infty $ .", "Then, for any given $g(\\cdot ) \\in \\mathcal {V}_{\\rho }$ and all $s > 0$ , we have $\\left\\lbrace \\begin{array}{ll}& \\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\left(n \\mathbb {P}\\left(X > s \\sqrt{n g(\\log n)} \\right) \\right)}{g(\\log n)}= - \\frac{\\underline{\\lambda }_{1}}{2^{\\rho }},$}\\\\&\\\\& \\mbox{$\\displaystyle ~\\liminf _{n \\rightarrow \\infty } \\frac{\\log \\left(n \\mathbb {P}\\left(X > s \\sqrt{n g(\\log n)} \\right) \\right)}{g(\\log n)}= - \\frac{\\underline{\\lambda }_{1}}{2^{\\rho }}$}\\end{array}\\right.$ and $\\left\\lbrace \\begin{array}{ll}& \\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\left(n \\mathbb {P}\\left(X > \\frac{s \\sqrt{n}}{g(\\log n)} \\right) \\right)}{g(\\log n)}= - \\frac{\\overline{\\lambda }_{1}}{2^{\\rho }},$}\\\\&\\\\& \\mbox{$\\displaystyle ~\\liminf _{n \\rightarrow \\infty } \\frac{\\log \\left(n \\mathbb {P}\\left(X > \\frac{s \\sqrt{n}}{g(\\log n)} \\right) \\right)}{g(\\log n)}= - \\frac{\\underline{\\lambda }_{1}}{2^{\\rho }},$}\\end{array}\\right.$ where $\\overline{\\lambda }_{1} $ and $\\underline{\\lambda }_{1}$ are defined by (2.1).", "Proof  Since $g(\\cdot ) \\in \\mathcal {V}_{\\rho }$ , we see that $\\tilde{g}(t) = g(n-1) + (t - n + 1)\\left(g(n) - g(n-1) \\right), ~n-1 \\le t < n,~ n \\ge 1.$ is a continuous and nondecreasing function defined on $[0, \\infty )$ such that $\\tilde{g}(n) = g(n), ~n \\ge 1 ~\\mbox{and}~~\\lim _{t \\rightarrow \\infty } \\frac{\\tilde{g}(t)}{g(t)} = 1$ and hence, $\\tilde{g}(\\cdot ) \\in \\mathcal {V}_{\\rho }$ .", "Thus, without loss of generality, we can assume that $g(\\cdot ): [0, \\infty ) \\rightarrow [0, \\infty )$ is continuous (otherwise, $g(\\cdot )$ can be replaced by $\\tilde{g}(\\cdot )$ ) and $g(1) > 0$ (since $\\lim _{t \\rightarrow \\infty } g(t) = \\infty $ ).", "We first establish (3.6).", "For given $s > 0$ , write $\\varphi _{s}(t) = s \\sqrt{t g(\\log (t \\vee e))}, ~t \\ge 0.$ Then, under the given conditions of $g(\\cdot )$ , $\\varphi _{s}(\\cdot ):~[0, \\infty ) \\rightarrow [0, \\infty )$ is a continuous and strictly increasing function with $\\varphi _{s}(0) = 0$ and $\\lim _{t \\rightarrow \\infty } \\varphi _{s}(t) = \\infty $ and $\\lim _{t \\rightarrow \\infty } g(\\log t) = \\infty , ~\\lim _{t \\rightarrow \\infty } \\frac{g(\\log (t+1))}{g(\\log t)} =1,$ $\\lim _{t \\rightarrow \\infty } \\frac{g\\left(\\log \\varphi _{s}(t)\\right)}{g(\\log t)}= \\lim _{t \\rightarrow \\infty } \\frac{g\\left(\\frac{1}{2} \\log t + \\log s + \\frac{1}{2} \\log g(\\log t) \\right)}{g(\\log t)}= \\frac{1}{2^{\\rho }}.$ Let $\\varphi _{s}^{-1}(\\cdot )$ be the inverse function of $\\varphi _{s}(\\cdot )$ .", "Under the given conditions, we have $\\begin{array}{lll}\\mbox{$\\displaystyle \\limsup _{t \\rightarrow \\infty } t \\mathbb {P}\\left(\\varphi _{s}^{-1}\\left(X^{+}\\right) > t \\right)$}& = &\\mbox{$\\displaystyle \\limsup _{t \\rightarrow \\infty } t \\mathbb {P}\\left(X > \\varphi _{s}(t) \\right) $}\\\\&&\\\\& \\le &\\mbox{$\\displaystyle \\limsup 
_{t \\rightarrow \\infty } t \\mathbb {P}\\left(X > \\sqrt{t} \\right)$}\\\\&&\\\\& = & 0.\\end{array}$ Thus $\\lim _{t \\rightarrow \\infty } t \\mathbb {P}\\left(\\varphi _{s}^{-1}\\left(X^{+} \\right) > t \\right) = 0.$ Now, it follows from (3.8), (3.10), Lemma 3.2 (with $Y = \\varphi _{s}^{-1}\\left(X^{+} \\right)$ ,  $p(t) = t$ , and $h(t) = g(\\log t)$ , $t \\ge 1$ ), and (3.9) that $\\begin{array}{ll}& \\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\left(n \\mathbb {P}\\left(X > s \\sqrt{n g(\\log n)} \\right) \\right)}{g(\\log n)}$}\\\\&\\\\& \\mbox{$\\displaystyle = \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\left(n \\mathbb {P}\\left(\\varphi _{s}^{-1}\\left(X^{+} \\right) > n \\right) \\right)}{g(\\log n)}$}\\\\& \\\\& \\mbox{$\\displaystyle = \\limsup _{t \\rightarrow \\infty } \\frac{\\log \\left(t \\mathbb {P}\\left(\\varphi _{s}^{-1}\\left(X^{+} \\right) > t \\right) \\right)}{g(\\log t)}$~~(by (3.8), (3.10), and Lemma 3.2)}\\\\& \\\\& \\mbox{$\\displaystyle = \\limsup _{t \\rightarrow \\infty } \\frac{\\log \\left(t \\mathbb {P}\\left(X > \\varphi _{s}(t) \\right) \\right)}{g(\\log t)}$}\\\\& \\\\& \\mbox{$\\displaystyle = \\limsup _{t \\rightarrow \\infty }\\frac{\\log \\left(\\left(\\varphi _{s}(t)\\right)^{2} \\mathbb {P}\\left(X > \\varphi _{s}(t) \\right) \\right)- 2\\log s - \\log g(\\log t)}{2^{\\rho }g\\left(\\log \\varphi _{s}(t) \\right)}$~~(by (3.9))}\\\\& \\\\& \\mbox{$\\displaystyle = \\limsup _{t \\rightarrow \\infty }\\frac{\\log \\left(\\left(\\varphi _{s}(t)\\right)^{2}\\mathbb {P}\\left(X > \\varphi _{s}(t) \\right) \\right)}{2^{\\rho }g\\left(\\log \\varphi _{s}(t) \\right)}$}\\\\& \\\\& \\mbox{$\\displaystyle = \\limsup _{x \\rightarrow \\infty }\\frac{\\log \\left(x^{2} \\mathbb {P}(X > x) \\right)}{2^{\\rho }g(\\log x)}$~~(let $\\displaystyle t = \\varphi _{s}^{-1}(x)$)}\\\\& \\\\& \\mbox{$\\displaystyle = - \\frac{\\overline{\\lambda }_{1}}{2^{\\rho }}$~~(by (2.1));}\\end{array}$ i.e., the first assertion of (3.6) holds.", "Similarly, the second assertion of (3.6) also follows from (3.8), (3.10), Lemma 3.2, and (3.9).", "We now prove (3.7).", "Write $h(t) = g(\\log t)$ , $t \\ge 1$ .", "Under the given conditions, it is easy to see that $h(\\cdot ):~[1, \\infty ) \\rightarrow [0, \\infty )$ is a continuous and non-decreasing slowly varying function such that $\\lim _{t \\rightarrow \\infty }h(t) = \\infty $ .", "By the Karamata representation theorem (see, e.g., Bingham, Goldie, and Teugels [2, Theorem 1.3.1]), there exist two measurable functions $c(\\cdot )$ and $\\varepsilon (\\cdot )$ and a constant $d > 0$ such that $\\left\\lbrace \\begin{array}{ll}& \\mbox{$\\displaystyle h(t) = c(t) \\exp \\left(\\int _{d}^{t} \\frac{\\varepsilon (x)}{x}dx \\right), ~t \\ge d,$}\\\\&\\\\& \\mbox{$\\displaystyle \\lim _{t \\rightarrow \\infty } c(t) = c \\in (0, \\infty ),~\\mbox{and}~ \\lim _{t \\rightarrow \\infty } \\varepsilon (t) = 0.$}\\end{array}\\right.$ Write $\\phi (t) = (1/c)\\sqrt{t} \\exp \\left(-\\int _{d}^{t} \\frac{\\varepsilon (x)}{x}dx \\right), ~t \\ge d.$ It follows from (3.11) that $\\lim _{t \\rightarrow \\infty } \\frac{\\sqrt{t}/g(\\log t)}{\\phi (t)} =1,$ $\\lim _{t \\rightarrow \\infty } \\frac{\\phi (2t)}{\\phi (t)} = \\sqrt{2},$ $\\lim _{t \\rightarrow \\infty } \\frac{\\log \\left(\\frac{t}{\\phi ^{2}(t)}\\right)}{g(\\log t)} = \\lim _{t \\rightarrow \\infty } \\frac{2\\log g(\\log t)}{g(\\log t)} = 0,$ $\\lim _{t \\rightarrow \\infty } \\frac{g\\left(\\log \\left(s\\phi (t) \\right)\\right)}{g(\\log 
t)}= \\lim _{t \\rightarrow \\infty } \\frac{g\\left(\\frac{1}{2} \\log t + \\log s - \\log g(\\log t) \\right)}{g(\\log t)}= \\frac{1}{2^{\\rho }}$ and $\\phi ^{\\prime }(t) = \\frac{\\phi (t)}{t} \\left(\\frac{1}{2} - \\varepsilon (t) \\right)> 0 ~\\mbox{ultimately}.$ Clearly, from (3.12), we see that (3.7) is equivalent to, for all $s > 0$ , $\\left\\lbrace \\begin{array}{ll}& \\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\left(n \\mathbb {P}(X > s \\phi (n)) \\right)}{g(\\log n)}= - \\frac{\\overline{\\lambda }_{1}}{2^{\\rho }},$}\\\\&\\\\& \\mbox{$\\displaystyle \\liminf _{n \\rightarrow \\infty } \\frac{\\log \\left(n \\mathbb {P} (X > s \\phi (n)) \\right)}{g(\\log n)}= - \\frac{\\underline{\\lambda }_{1}}{2^{\\rho }}.$}\\end{array}\\right.$ Note that (3.16) implies that there exists a constant $d_{1} > d$ such that $\\phi (t)$ is a continuous and strictly increasing function on $[d_{1}, \\infty )$ .", "Define $\\psi (t) =\\left\\lbrace \\begin{array}{ll}\\mbox{$\\displaystyle \\frac{\\phi \\left(d_{1} \\right)}{d_{1}}t$} & \\mbox{if $\\displaystyle 0 \\le t < d_{1}$,}\\\\&\\\\\\mbox{$\\displaystyle \\phi (t)$} & \\mbox{if $\\displaystyle t \\ge d_{1}$.", "}\\end{array}\\right.$ Then $\\psi (t)$ is a continuous and strictly increasing function on $[0, \\infty )$ $\\psi (0) = 0$ and $\\lim _{t \\rightarrow \\infty } \\psi (t) = \\infty $ .", "Let $\\psi ^{-1}(\\cdot )$ be the inverse function of $\\psi (\\cdot )$ .", "Clearly, (3.13) ensures that there exists $t_{1} > d_{1}$ such that $0 < \\phi (2t) \\le 2 \\phi (t)$ , $t \\ge t_{1}$ ; i.e., $0 < \\psi (2t) \\le 2 \\psi (t), ~t \\ge t_{1}.$ Since $\\mathbb {E}X^{2} < \\infty $ , we have $\\limsup _{t \\rightarrow \\infty } \\left(\\psi (t)\\right)^{2} \\mathbb {P}\\left(\\psi ^{-1}\\left(\\frac{X^{+}}{s}\\right) > t \\right)= \\limsup _{t \\rightarrow \\infty } \\left(\\psi (t)\\right)^{2} \\mathbb {P}\\left( \\frac{X}{s} > \\psi (t) \\right)= 0.$ Thus $\\lim _{t \\rightarrow \\infty } \\left(\\psi (t)\\right)^{2} \\mathbb {P}\\left(\\psi ^{-1}\\left(\\frac{X^{+}}{s}\\right) > t \\right) = 0.$ Now, it follows from (3.14), the definition of $\\psi (\\cdot )$ , (3.18), (3.19), (3.8), Lemma 3.2 (with $Y = \\psi ^{-1}(X^{+}/s)$ ,  $p(t) = \\psi ^{2}(t), ~t > t_{1}$ , and $h(t) = g(\\log t)$ , $t \\ge 1$ ), (3.15), and (2.1)) that $\\begin{array}{ll}& \\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\left(n \\mathbb {P}\\left(X > s \\phi (n) \\right) \\right)}{g(\\log n)}$}\\\\&\\\\& \\mbox{$\\displaystyle = \\limsup _{n \\rightarrow \\infty }\\frac{\\log \\left(\\phi ^{2}(n) \\mathbb {P}\\left(X > s \\phi (n) \\right)\\right) + \\log \\left(\\frac{n}{\\phi ^{2}(n)}\\right)}{g(\\log n)}$}\\\\&\\\\& \\mbox{$\\displaystyle = \\limsup _{n \\rightarrow \\infty }\\frac{\\log \\left(\\phi ^{2}(n) \\mathbb {P}\\left(X > s \\phi (n) \\right)\\right)}{g(\\log n)}$~~(by (3.14))}\\\\&\\\\& \\mbox{$\\displaystyle = \\limsup _{n \\rightarrow \\infty }\\frac{\\log \\left(\\psi ^{2}(n) \\mathbb {P}\\left(X > s \\psi (n) \\right)\\right)}{g(\\log n)}$~~(by the definition of $\\psi (\\cdot )$)}\\\\&\\\\& \\mbox{$\\displaystyle = \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\left(\\psi ^{2}(n) \\mathbb {P}\\left(\\psi ^{-1}\\left(X^{+}/s \\right) > n \\right) \\right)}{g(\\log n)}$}\\\\&\\\\& \\mbox{$\\displaystyle = \\limsup _{t \\rightarrow \\infty } \\frac{\\log \\left(\\psi ^{2}(t) \\mathbb {P}\\left(\\psi ^{-1}\\left(X^{+}/s \\right) > t \\right) \\right)}{g(\\log t)}$~~(by (3.18), (3.19), 
(3.8), and Lemma 3.2)}\\\\&\\\\& \\mbox{$\\displaystyle = \\limsup _{t \\rightarrow \\infty } \\frac{\\log \\left(\\psi ^{2}(t) \\mathbb {P}\\left(X > s \\psi (t) \\right) \\right)}{g(\\log t)}$}\\\\&\\\\& \\mbox{$\\displaystyle = \\limsup _{t \\rightarrow \\infty }\\frac{\\log \\left(\\left(s\\psi (t)\\right)^{2} \\mathbb {P}\\left(X > s \\psi (t) \\right) \\right) - 2 \\log s}{2^{\\rho }g(\\log \\left(s\\psi (t)\\right))}$~~(by (3.15) and the definition of $\\psi (\\cdot )$)}\\\\& \\\\& \\mbox{$\\displaystyle = \\limsup _{x \\rightarrow \\infty } \\frac{\\log \\left(x^{2} \\mathbb {P}(X > x) \\right)}{2^{\\rho }g(\\log x)}$~~(let $\\displaystyle t = \\psi ^{-1}(x/s)$)}\\\\& \\\\& \\mbox{$\\displaystyle = - \\frac{\\overline{\\lambda }_{1}}{2^{\\rho }}$~~(by (2.1));}\\end{array}$ i.e., the first assertion of (3.17) follows.", "Following the same argument, the second assertion of (3.17) follows.", "The proof of Lemma 3.3 is complete.", "$\\Box $ Lemma 3.4 For each $n \\ge 2$ , let $\\lbrace X_{n,i}; 1 \\le i \\le n \\rbrace $ be i.i.d.", "real-valued random variables such that $\\mathbb {E}X_{n, 1} = 0~~\\mbox{and}~~\\lim _{n \\rightarrow \\infty } \\mathbb {E}X_{n,1}^{2} = \\sigma ^{2} \\in (0, \\infty ).$ and, for some given $g(\\cdot ) \\in \\mathcal {V}_{\\rho }$ for some $\\rho \\ge 0$ , there exists a sequence of positive constants $\\left\\lbrace \\tau _{n}; ~n \\ge 2 \\right\\rbrace $ such that $\\lim _{n \\rightarrow \\infty } \\tau _{n} = 0~~\\mbox{and}~~\\left|X_{n,1} \\right| \\le \\tau _{n} \\sqrt{\\frac{n}{g(\\log n)}}~~\\mbox{almost surely (a.s.)}$ Then we have $\\lim _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\sum _{i=1}^{n} X_{n,i} > r \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}= - \\frac{r^{2}}{2\\sigma ^{2}}~~\\mbox{for all}~ r > 0$ and similarly, $\\lim _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\sum _{i=1}^{n} X_{n,i} < - r \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}= - \\frac{r^{2}}{2\\sigma ^{2}}~~\\mbox{for all}~ r > 0$ and $\\lim _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|\\sum _{i=1}^{n} X_{n,i} \\right| > r \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}= - \\frac{r^{2}}{2\\sigma ^{2}}~~\\mbox{for all}~ r > 0.$ Proof  For $n \\ge 2$ and fixed $r > 0$ , write $B_{n} = \\mbox{Var}\\left(\\sum _{i=1}^{n} X_{n,i} \\right), ~M_{n} = \\tau _{n} \\sqrt{\\frac{n}{g(\\log n)}},~\\mbox{and}~ ~x_{n}(r) = r \\sqrt{n g(\\log n)}, ~ n \\ge 1.$ Since, for each $n \\ge 2$ , $\\lbrace X_{n,i}; 1 \\le i \\le n \\rbrace $ are i.i.d.", "real-valued random variables with (3.20), we have $\\lim _{n \\rightarrow \\infty } \\frac{B_{n}}{n \\sigma ^{2}} = \\lim _{n \\rightarrow \\infty } \\frac{n \\mathbb {E}X_{n,1}^{2}}{n \\sigma ^{2}} = 1.$ Thus it follows from (3.21) and (3.22) that $\\max _{1 \\le i \\le n} \\left|X_{n,i} \\right| \\le M_{n}~ \\mbox{a.s.}, ~n \\ge 2,$ $0 < x_{n}(r) M_{n} = \\tau _{n} rn \\le B_{n} ~\\mbox{for all sufficiently large}~ n,$ $\\lim _{n \\rightarrow \\infty } \\frac{x_{n}(r)M_{n}}{B_{n}} = \\frac{r}{\\sigma ^{2}}\\lim _{n \\rightarrow \\infty }\\left(\\frac{n \\sigma ^{2}}{B_{n}}\\right) \\tau _{n} = 0,$ and $\\frac{x_{n}^{2}(r)}{B_{n}} \\sim \\frac{r^{2}}{\\sigma ^{2}}\\left(\\frac{n \\sigma ^{2}}{B_{n}} \\right) g(\\log n) \\rightarrow \\infty ~(\\mbox{since} ~\\lim _{t \\rightarrow \\infty } g(t) = \\infty ).$ By (3.24), (3.25), and Lemma 7.1 (which is one of the Kolmogorov exponential inequalities) of Petrov [26, page 240], for all sufficiently large $n$ we have 
$\\begin{array}{lll}\\mbox{$\\displaystyle \\mathbb {P}\\left(\\sum _{i=1}^{n} X_{n,i} > r \\sqrt{n g(\\log n)} \\right)$}& = &\\mbox{$\\displaystyle \\mathbb {P}\\left(\\sum _{i=1}^{n} X_{n,i} > x_{n}(r) \\right)$}\\\\&&\\\\& \\le &\\mbox{$\\displaystyle \\exp \\left\\lbrace - \\frac{x_{n}^{2}(r)}{2B_{n}}\\left(1 - \\frac{x_{n}(r)M_{n}}{2B_{n}} \\right) \\right\\rbrace $}\\end{array}$ and hence, it follows from $\\lim _{t \\rightarrow \\infty } g(t) = \\infty $ , (3.26), (3.27), and (3.23) that $\\begin{array}{lll}\\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\sum _{i=1}^{n} X_{n,i} > r \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}$}& \\le &\\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty } \\frac{- \\frac{x_{n}^{2}(r)}{2B_{n}}\\left(1 - \\frac{x_{n}(r)M_{n}}{2B_{n}} \\right)}{g(\\log n)}$}\\\\&&\\\\& \\le &\\mbox{$\\displaystyle - \\frac{r^{2}}{2 \\sigma ^{2}} \\liminf _{n \\rightarrow \\infty } \\frac{n \\sigma ^{2}}{B_{n}}$}\\\\&&\\\\& = &\\mbox{$\\displaystyle - \\frac{r^{2}}{2 \\sigma ^{2}}$.", "}\\end{array}$ Now by (3.26), (3.27), and Lemma 7.2 (which also is one of the Kolmogorov exponential inequalities) of Petrov [26, page 241], for every fixed $0 < \\epsilon < 1$ and all sufficiently large $n$ we have $\\begin{array}{lll}\\mbox{$\\displaystyle \\mathbb {P}\\left(\\sum _{i=1}^{n} X_{n,i} > r \\sqrt{n g(\\log n)} \\right)$}& = &\\mbox{$\\displaystyle \\mathbb {P}\\left(\\sum _{i=1}^{n} X_{n,i} > x_{n}(r) \\right)$}\\\\&&\\\\& \\ge &\\mbox{$\\displaystyle \\exp \\left\\lbrace - \\frac{x_{n}^{2}(r)}{2B_{n}}(1 - \\epsilon ) \\right\\rbrace $}\\end{array}$ and hence, it follows from (3.23) that $\\begin{array}{lll}\\mbox{$\\displaystyle \\liminf _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left( \\sum _{i=1}^{n} X_{n,i} > r \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}$}& \\ge &\\mbox{$\\displaystyle \\liminf _{n \\rightarrow \\infty } \\frac{- \\frac{x_{n}^{2}(r)}{2B_{n}}(1 - \\epsilon )}{g(\\log n)}$}\\\\&&\\\\& \\ge &\\mbox{$\\displaystyle - \\frac{(1 - \\epsilon )r^{2}}{2 \\sigma ^{2}} \\limsup _{n \\rightarrow \\infty } \\frac{n \\sigma ^{2}}{B_{n}}$}\\\\&&\\\\& = &\\mbox{$\\displaystyle - \\frac{(1 - \\epsilon )r^{2}}{2 \\sigma ^{2}}$.", "}\\end{array}$ Thus, letting $\\epsilon \\searrow 0$ , we get $\\liminf _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\sum _{i=1}^{n} X_{n,i} > r \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}\\ge - \\frac{r^{2}}{2 \\sigma ^{2}}.$ Clearly, (3.28) and (3.29) together ensure (3.22).", "This completes the proof of Lemma 3.4.", "$\\Box $" ], [ "Proof of Theorem 2.1", "We first state two basic facts which will be used in the proof of Theorem 2.1.", "Let $\\left\\lbrace a_{n};~n \\ge 1 \\right\\rbrace $ and $\\left\\lbrace b_{n};~n \\ge 1 \\right\\rbrace $ be two sequences of real numbers.", "Then $\\limsup _{n \\rightarrow \\infty } \\left(a_{n}\\vee b_{n} \\right)= \\left(\\limsup _{n \\rightarrow \\infty } a_{n} \\right) \\vee \\left(\\limsup _{n \\rightarrow \\infty } b_{n} \\right)~~\\mbox{and}~~\\liminf _{n \\rightarrow \\infty } \\left(a_{n}\\vee b_{n} \\right)\\le \\left(\\limsup _{n \\rightarrow \\infty } a_{n} \\right) \\vee \\left(\\liminf _{n \\rightarrow \\infty } b_{n} \\right).$ Proof of Theorem 2.1  Since $g(\\cdot ) \\in \\mathcal {V}_{\\rho }$ , we have $\\lim _{t \\rightarrow \\infty } \\frac{g(\\log (t \\pm \\mu ))}{g(\\log t)} = 1$ and hence, in view of (2.1), (2.2), and (2.3), without of loss generality, we can assume that $\\mu = 0$ .", "(i)  We first give the 
proof of (2.4).", "The lower bound part of (2.4).", "We first show that, for all $x > 0$ , $\\left\\lbrace \\begin{array}{ll}& \\mbox{$\\displaystyle - \\frac{\\overline{\\lambda }_{1}}{2^{\\rho }}\\le \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n} > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)},$}\\\\& \\\\& \\mbox{$\\displaystyle - \\frac{\\underline{\\lambda }_{1}}{2^{\\rho }}\\le \\liminf _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n} > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}.$}\\end{array}\\right.$ Recall that $m(Y)$ is a median for a real-valued random variable $Y$ .", "Write $S_{0} = 0$ and $m_{n} = \\min _{0 \\le k \\le n} m\\left(S_{k} \\right), ~n \\ge 1,$ Since $X_{1}, ..., X_{n}$ are i.i.d.", "random variables, we have $\\min _{0 \\le k \\le n} m\\left(S_{n} - S_{k} \\right) = m_{n},~~n \\ge 1.$ Under the given conditions of Theorem 2.1, we have $\\frac{S_{n}}{\\sqrt{n g(\\log n)}} \\rightarrow _{\\mathbb {P}} 0,$ where “$\\rightarrow _{\\mathbb {P}}$ \" stands for convergence in probability.", "Hence, $\\lim _{n \\rightarrow \\infty } \\frac{m_{n}}{\\sqrt{n g(\\log n)}} = \\lim _{n \\rightarrow \\infty }\\frac{\\min _{0 \\le k \\le n} m\\left(S_{k} \\right)}{\\sqrt{n g(\\log n)}} = 0.$ Thus, for any given $x > 0$ , by (4.2) and Lemma 3.1, we have for all sufficiently large $n$ , $\\begin{array}{lll}\\mbox{$\\displaystyle \\mathbb {P} \\left(\\max _{1 \\le k \\le n} X_{k} > 2x \\sqrt{n g(\\log n)} \\right)$}& \\le &\\mbox{$\\displaystyle \\mathbb {P} \\left(\\max _{1 \\le k \\le n} X_{k} > x \\sqrt{n g(\\log n)} - 2m_{n} \\right)$ ~(by (4.2))}\\\\&&\\\\& = &\\mbox{$\\displaystyle \\mathbb {P} \\left(\\max _{1 \\le k \\le n} X_{k} +\\min _{0 \\le k \\le n} m\\left(S_{k} \\right) > x \\sqrt{n g(\\log n)} - m_{n} \\right)$}\\\\&&\\\\& \\le &\\mbox{$\\displaystyle \\mathbb {P} \\left(\\max _{1 \\le k \\le n} \\left(X_{k}+ m\\left(S_{k-1}\\right) \\right) > x \\sqrt{n g(\\log n)} - m_{n} \\right)$}\\\\&&\\\\& \\le &\\mbox{$\\displaystyle 2 \\mathbb {P} \\left(\\max _{1 \\le k \\le n} S_{k} > x \\sqrt{n g(\\log n)} - m_{n} \\right)$~~(by Lemma 3.1)}\\\\&&\\\\& \\le &\\mbox{$\\displaystyle 2 \\mathbb {P} \\left(\\max _{1 \\le k \\le n} \\left(S_{k}+ m\\left(S_{n} - S_{k}\\right) \\right) > x \\sqrt{n g(\\log n)} \\right)$}\\\\&&\\\\& \\le &\\mbox{$\\displaystyle 4 \\mathbb {P}\\left(S_{n} > x \\sqrt{n g(\\log n)} \\right)$~~(by Lemma 3.1).", "}\\end{array}$ Using the method used in the proof of Lemma 3.4 of Li and Miao [18], we get $\\frac{1 \\wedge \\left(n \\mathbb {P}\\left(X > 2x \\sqrt{ng(\\log n)} \\right) \\right)}{2}\\le \\mathbb {P}\\left(\\max _{1 \\le k \\le n} X_{k} > 2x\\sqrt{ng(\\log n)} \\right),~n \\ge 1.$ Note that $\\mathbb {E}X^{2} < \\infty $ and $\\lim _{t \\rightarrow \\infty } g(t) = \\infty $ ensure that $\\lim _{n \\rightarrow \\infty } n \\mathbb {P}\\left(X > 2x \\sqrt{ng(\\log n)} \\right) = 0.$ Thus, under the given conditions of $g(\\cdot )$ , it follows from (4.3), (4.4), and Lemma 3.3 that $\\begin{array}{lll}\\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n} > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}$}& \\ge &\\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\left(\\frac{n \\mathbb {P}\\left(X > 2x \\sqrt{n g(\\log n)} \\right)}{8} \\right)}{g(\\log n)}$}\\\\&&\\\\& = &\\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\left(n \\mathbb {P}\\left(X > 2x \\sqrt{n g(\\log n)} \\right) \\right)}{g(\\log 
n)}$}\\\\&&\\\\& = &\\mbox{$\\displaystyle - \\frac{\\overline{\\lambda }_{1}}{2^{\\rho }} ~~\\mbox{for all}~ x > 0$;}\\end{array}$ i.e., the first half of (4.1) holds.", "In the same vein, the second half of (4.1) follows.", "In the following, we aim to establish the inequality $- \\frac{x^{2}}{2 \\sigma ^{2}}\\le \\liminf _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n} > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}~~\\mbox{for all}~ x > 0.$ Since $\\mathbb {E}X^{2} < \\infty $ , we have $\\lim _{t \\rightarrow \\infty }\\frac{1}{\\delta ^2}\\mathbb {E}\\left(X^{2}I\\lbrace |X| > \\delta t \\rbrace \\right) = 0~~\\mbox{for all}~ \\delta > 0.$ Thus there exists a sequence of positive constants $\\left\\lbrace \\delta _{n};~ n \\ge 1 \\right\\rbrace $ with $\\delta _{n} \\searrow 0$ as $n \\rightarrow \\infty $ such that $\\lim _{n \\rightarrow \\infty } \\frac{1}{\\delta _n^2}\\mathbb {E}\\left(|X|^2I\\left\\lbrace |X| >\\delta _{n} \\sqrt{\\frac{n}{g(\\log n)}} \\right\\rbrace \\right) = 0,$ which implies $\\lim _{n \\rightarrow \\infty } \\sqrt{\\frac{n}{g(\\log n)}} \\mathbb {E}\\left(|X|I\\left\\lbrace |X| >\\delta _{n} \\sqrt{\\frac{n}{g(\\log n)}} \\right\\rbrace \\right) = 0$ and $\\lim _{n \\rightarrow \\infty }\\frac{n}{g(\\log n)}\\mathbb {P}\\left(|X| >\\delta _{n} \\sqrt{\\frac{n}{g(\\log n)}}\\right)=0$ in view of the following estimates, for $t > 0$ , $t\\mathbb {E}\\left(|X|I\\lbrace |X| > \\delta t \\rbrace \\right) \\le \\frac{1}{\\delta }\\mathbb {E}\\left(X^{2}I\\lbrace |X| > \\delta t \\rbrace \\right)~~\\mbox{and}~~t^{2}\\mathbb {P}\\left(|X|>\\delta t\\right)\\le \\frac{1}{\\delta ^2}\\mathbb {E}\\left(|X|^2I\\left\\lbrace |X| >\\delta t\\right\\rbrace \\right).$ Write, for $n \\ge 2$ , $\\hat{\\delta }_{n} = \\delta _{n} \\vee \\frac{1}{\\sqrt{g(\\log n)}}, ~c_{n} = \\hat{\\delta }_{n} \\sqrt{\\frac{n}{g(\\log n)}},~\\mu _{n} = \\mathbb {E}\\left(XI\\left\\lbrace |X| \\le c_{n} \\right\\rbrace \\right), ~\\mbox{and}~ p_{n} = \\mathbb {P}\\left(|X| > c_{n} \\right).$ It follows from (4.6) (since $c_{n} \\ge \\delta _{n} \\sqrt{\\frac{n}{g(\\log n)}}$ , $n \\ge 2$ ) and (4.7) that $\\lim _{n \\rightarrow \\infty }\\sqrt{\\frac{n}{g(\\log n)}} \\mathbb {E}\\left(|X|I\\left\\lbrace |X| >c_{n}\\right\\rbrace \\right) = 0~\\mbox{ and }~\\lim _{n \\rightarrow \\infty }\\frac{np_{n}}{g(\\log n)}= 0.$ Since $\\mu = \\mathbb {E}X = 0$ , we have $\\frac{n\\left|\\mu _{n} \\right|}{\\sqrt{ng(\\log n)}} \\le \\sqrt{\\frac{n}{g(\\log n)}} \\mathbb {E}\\left(|X|I\\left\\lbrace |X| >c_{n} \\right\\rbrace \\right), ~n \\ge 2$ and hence, from the first half of (4.9), $\\lim _{n\\rightarrow \\infty }\\frac{n\\mu _n}{\\sqrt{ng(\\log n)}}=0.$ Since $c_{n} \\ge \\frac{\\sqrt{n}}{g(\\log n)} \\rightarrow \\infty $ as $n \\rightarrow \\infty $ , we conclude that $p_{n} \\rightarrow 0$ as $n \\rightarrow \\infty $ and hence, from the second half of (4.9), $\\lim _{n \\rightarrow \\infty }\\frac{\\log (1-p_{n})^{n}}{g(\\log n)}= \\lim _{n\\rightarrow \\infty }\\frac{n\\log (1-p_{n})}{g(\\log n)}= \\lim _{n\\rightarrow \\infty }\\frac{-np_{n}}{g(\\log n)} = 0.$ Denote the conditional distribution of $X$ given $|X|\\le c_{n}$ as $F_{n}$ ; that is, $F_{n}(x)=\\mathbb {P}\\left(X\\le x \\big ||X|\\le c_{n} \\right) = \\left\\lbrace \\begin{array}{ll}0 & \\mbox{if $\\displaystyle x<-c_{n},$} \\\\&\\\\\\mbox{$\\displaystyle \\frac{1}{1-p_{n}}\\mathbb {P}\\Big (-c_{n} \\le X\\le x\\Big )$}& \\mbox{if $\\displaystyle -c_{n} \\le x \\le c_{n},$} \\\\&\\\\1 & \\mbox{if $\\displaystyle 
x > c_{n}.$}\\end{array}\\right.$ Conditional on $\\lbrace \\max _{1\\le k \\le n} \\left|X_{k} \\right|\\le c_n\\rbrace $ , $X_{1}, \\cdots , X_{n}$ are i.i.d.", "random variables with distribution function $F_{n}$ .", "Let $\\widetilde{X}_{n,1},\\cdots , \\widetilde{X}_{n,n}$ be independent random variables with common distribution $F_{n}$ .", "Since $\\lim _{n \\rightarrow \\infty } p_{n} = 0$ , it is easy to see that $\\tilde{\\mu }_{n}:=\\mathbb {E} \\Big (\\widetilde{X}_{n,1}\\Big ) = \\frac{\\mu _{n}}{1-p_{n}} \\rightarrow 0~~\\mbox{as}~ n \\rightarrow \\infty ,$ and $\\tilde{\\sigma }_{n}^{2} = \\mathbb {E}\\Big (\\widetilde{X}_{n,1} - \\tilde{\\mu }_{n} \\Big )^{2}= \\frac{1}{1-p_{n}}\\mathbb {E}\\left(X^{2}I\\left\\lbrace |X| \\le c_{n} \\right\\rbrace \\right) - \\tilde{\\mu }_{n}^{2}\\rightarrow \\sigma ^{2}.$ Clearly, $\\big \\lbrace U_{n,i}:=\\widetilde{X}_{n,i}-\\tilde{\\mu }_{n}; ~1\\le i\\le n \\big \\rbrace $ are i.i.d.", "random variables with mean 0 and variance $\\tilde{\\sigma }_{n}^{2} \\rightarrow \\sigma ^{2}$ as $n \\rightarrow \\infty $ and bounded by $2c_{n}= \\tau _{n} \\sqrt{\\frac{n}{g(\\log n)}}$ , $n \\ge 2$ where $\\tau _{n} = 2 \\hat{\\delta }_{n} \\rightarrow 0$ as $n \\rightarrow \\infty $ .", "Thus, replacing $\\big \\lbrace X_{n,i}; ~1 \\le i \\le n, n \\ge 2 \\big \\rbrace $ with $\\big \\lbrace U_{n,i}; ~1 \\le i \\le n, n \\ge 2 \\big \\rbrace $ , all conditions of Lemma 3.4 are satisfied and hence, (3.22) holds for $\\big \\lbrace U_{n,i}; ~1 \\le i \\le n, n \\ge 2 \\big \\rbrace $ .", "Thus, for any fixed $x>0$ and $\\epsilon > 0$ , we have $\\begin{array}{ll}& \\mbox{$\\displaystyle \\mathbb {P}\\left(S_{n} > x\\sqrt{ng(\\log n)}\\right)$}\\\\&\\\\& \\mbox{$\\displaystyle \\ge \\mathbb {P}\\left(\\big \\lbrace S_{n} > x\\sqrt{ng(\\log n)} \\big \\rbrace \\bigcap \\big \\lbrace \\max _{1\\le k \\le n}|X_{k}|\\le c_{n} \\big \\rbrace \\right)$}\\\\&\\\\&\\mbox{$\\displaystyle = \\mathbb {P}\\left(\\big \\lbrace \\sum _{i=1}^{n} X_{i}I\\left\\lbrace |X_{i}| \\le c_{n} \\right\\rbrace > x\\sqrt{ng(\\log n)} \\big \\rbrace \\bigcap \\big \\lbrace \\max _{1\\le k \\le n}|X_{k}|\\le c_{n} \\big \\rbrace \\right)$}\\\\&\\\\& \\mbox{$\\displaystyle = \\mathbb {P}\\left(\\sum _{i=1}^{n} X_{i}I\\left\\lbrace |X_{i}| \\le c_{n} \\right\\rbrace > x\\sqrt{ng(\\log n)}\\Big |\\max _{1\\le k \\le n}|X_{k}|\\le c_{n}\\right)\\mathbb {P}\\left(\\max _{1\\le k \\le n}|X_{k}|\\le c_{n}\\right)$}\\\\&\\\\& \\mbox{$\\displaystyle = \\mathbb {P}\\left(\\sum _{i=1}^{n}\\widetilde{X}_{n,i} > x\\sqrt{ng(\\log n)} \\right)\\left(1-p_{n} \\right)^{n}$}\\\\&\\\\& \\mbox{$\\displaystyle \\ge \\mathbb {P}\\left(\\sum _{i=1}^{n}\\left(\\widetilde{X}_{n,i}-\\tilde{\\mu }_{n} \\right) > x\\sqrt{ng(\\log n)}+n\\left|\\tilde{\\mu }_{n} \\right| \\right)\\left(1-p_{n} \\right)^{n}$}\\\\&\\\\& \\mbox{$\\displaystyle \\ge \\mathbb {P}\\left(\\sum ^n_{i=1}U_{n,i} > (x+\\epsilon )\\sqrt{ng(\\log n)}\\right)\\left(1-p_{n} \\right)^{n}$}\\end{array}$ for all sufficiently large $n$ .", "In the last step, we have used (4.10) and (4.12) to conclude that $n|\\tilde{\\mu }_n|\\le \\epsilon \\sqrt{ng(\\log n)}$ for all sufficiently large $n$ .", "Therefore, from Lemma 3.4 and (4.11) we obtain that $\\begin{array}{ll}& \\mbox{$\\displaystyle \\liminf _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n} > x\\sqrt{ng(\\log n)}\\right)}{g(\\log n)}$}\\\\& \\\\& \\mbox{$\\displaystyle \\ge \\lim _{n \\rightarrow \\infty }\\frac{\\log \\mathbb {P}\\left(\\sum _{i=1}^{n} U_{n,i}> (x + 
\\epsilon )\\sqrt{ng(\\log n)}\\right)}{g(\\log n)}+ \\lim _{n\\rightarrow \\infty }\\frac{\\log \\Big (\\left(1-p_{n} \\right)^{n}\\Big )}{g(\\log n)}$}\\\\& \\\\& \\mbox{$\\displaystyle = -\\frac{(x + \\epsilon )^2}{2\\sigma ^{2}},$}\\end{array}$ which yields (4.5) by letting $\\epsilon \\searrow 0$ .", "Clearly, (4.1) and (4.5) together ensure the following lower bound of (2.4): $\\left\\lbrace \\begin{array}{ll}&\\mbox{$\\displaystyle - \\left(\\frac{x^{2}}{2\\sigma ^{2}} \\wedge \\frac{\\overline{\\lambda }_{1}}{2^{\\rho }} \\right)\\le \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n} > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}~\\mbox{for all}~x > 0$,}\\\\&\\\\&\\mbox{$\\displaystyle - \\left(\\frac{x^{2}}{2\\sigma ^{2}} \\wedge \\frac{\\underline{\\lambda }_{1}}{2^{\\rho }} \\right)\\le \\liminf _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n} > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}~\\mbox{for all}~x > 0$.", "}\\end{array}\\right.$ The upper bound part of (2.4).", "To complete the proof of (2.4) we have to establish the following upper bound of (2.4): $\\left\\lbrace \\begin{array}{ll}&\\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n} > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}\\le - \\left(\\frac{x^{2}}{2\\sigma ^{2}} \\wedge \\frac{\\overline{\\lambda }_{1}}{2^{\\rho }} \\right)~\\mbox{for all}~x > 0$,}\\\\&\\\\&\\mbox{$\\displaystyle \\liminf _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(S_{n} > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}\\le - \\left(\\frac{x^{2}}{2\\sigma ^{2}} \\wedge \\frac{\\underline{\\lambda }_{1}}{2^{\\rho }} \\right)~\\mbox{for all}~x > 0$.", "}\\end{array}\\right.$ Write, for $n \\ge 2$ $V_{n,i} = X_{i}I\\left\\lbrace \\left|X_{i}\\right| \\le c_{n} \\right\\rbrace - \\mu _{n}, ~ Y_{n,i} =X_{i}I\\left\\lbrace X_{i} > c_{n} \\right\\rbrace + \\mu _{n}, ~i = 1, 2, ..., n,$ where $\\big \\lbrace c_{n}; n\\ge 2 \\big \\rbrace $ and $\\big \\lbrace \\mu _{n}; n \\ge 2 \\big \\rbrace $ are defined in (4.8).", "Clearly, $\\big \\lbrace V_{n,i}; ~1\\le i\\le n \\big \\rbrace $ are i.i.d.", "random variables with mean 0 and variance $\\mbox{Var}\\big (V_{n,1} \\big ) \\rightarrow \\sigma ^{2}$ as $n \\rightarrow \\infty $ and bounded by $2c_{n}= \\tau _{n} \\sqrt{\\frac{n}{g(log n)}}$ , $n \\ge 2$ where $\\tau _{n} = 2 \\hat{\\delta }_{n} \\rightarrow 0$ as $n \\rightarrow \\infty $ .", "Thus, replacing $\\big \\lbrace X_{n,i}; ~1 \\le i \\le n, n \\ge 2 \\big \\rbrace $ with $\\big \\lbrace V_{n,i}; ~1 \\le i \\le n, n \\ge 2 \\big \\rbrace $ , all conditions of Lemma 3.4 are satisfied and hence, (3.22) holds for $\\big \\lbrace V_{n,i}; ~1 \\le i \\le n, n \\ge 2 \\big \\rbrace $ .", "Note that, for $n \\ge 2$ , $S_{n} \\le \\sum _{i=1}^{n} X_{i} I\\left\\lbrace X > -c_{n} \\right\\rbrace = \\sum _{i=1}^{n} V_{n,i} + \\sum _{i=1}^{n} Y_{n,i}~~\\mbox{and}~~\\frac{\\sqrt{n}}{g(\\log n)} \\le c_{n}.$ Thus, for any fixed $x>0$ and $0 < \\epsilon < x$ , it follows from (4.10) that, for all sufficiently large $n$ , $\\begin{array}{ll}& \\mbox{$\\displaystyle \\left\\lbrace S_{n} > x \\sqrt{n g(\\log n)} \\right\\rbrace $}\\\\&\\\\& \\mbox{$\\displaystyle \\subseteq \\left\\lbrace \\sum _{i=1}^{n} V_{n,i} + \\sum _{i=1}^{n} Y_{n,i} > x \\sqrt{n g(\\log n)} \\right\\rbrace $}\\\\&\\\\& \\mbox{$\\displaystyle \\subseteq \\left\\lbrace \\sum _{i=1}^{n} V_{n,i} > (x - \\epsilon ) \\sqrt{n g(\\log n)} \\right\\rbrace \\bigcup \\left\\lbrace \\sum _{i=1}^{n} 
X_{i}I\\left\\lbrace X_{i} > c_{n} \\right\\rbrace + n \\mu _{n} > \\epsilon \\sqrt{n g(\\log n)} \\right\\rbrace $}\\\\&\\\\& \\mbox{$\\displaystyle \\subseteq \\left\\lbrace \\sum _{i=1}^{n} V_{n,i} > (x - \\epsilon ) \\sqrt{n g(\\log n)} \\right\\rbrace \\bigcup \\left\\lbrace \\max _{1 \\le i \\le n} X_{i} > \\frac{\\sqrt{n}}{g(\\log n)} \\right\\rbrace $.", "}\\end{array}$ Then it follows from (4.15) and Lemmas 3.3 and 3.4 that $\\begin{array}{ll}& \\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty }\\frac{\\log \\mathbb {P}\\left( S_{n} > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}$}\\\\& \\\\& \\mbox{$\\displaystyle \\le \\limsup _{n \\rightarrow \\infty }\\frac{\\log \\left(\\mathbb {P}\\left( \\sum _{i=1}^{n} V_{n,i} > (x - \\epsilon ) \\sqrt{n g(\\log n)} \\right)+ \\mathbb {P}\\left(\\max _{1 \\le i \\le n} X_{i} > \\frac{\\sqrt{n}}{g(\\log n)} \\right) \\right)}{g(\\log n)}$~~(by (4.15))}\\\\& \\\\& \\mbox{$\\displaystyle \\le \\left(\\limsup _{n \\rightarrow \\infty }\\frac{\\log \\mathbb {P}\\left(\\sum _{i=1}^{n} V_{n,i} > (x - \\epsilon ) \\sqrt{n g(\\log n)}\\right)}{g(\\log n)}\\right)\\vee \\left(\\limsup _{n \\rightarrow \\infty }\\frac{\\log \\left(n \\mathbb {P}\\left(X > \\frac{\\sqrt{n}}{g(\\log n)}\\right) \\right)}{g(\\log n)} \\right)$}\\\\&\\\\& \\mbox{$\\displaystyle = \\left( - \\frac{(x - \\epsilon )^{2}}{2\\sigma ^{2}} \\right) \\vee \\left(- \\frac{\\overline{\\lambda }_{1}}{2^{\\rho }}\\right)$ ~~(by (3.22) and the first half of (3.7)).", "}\\end{array}$ and $\\begin{array}{ll}& \\mbox{$\\displaystyle \\liminf _{n \\rightarrow \\infty }\\frac{\\log \\mathbb {P}\\left( S_{n} > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}$}\\\\& \\\\& \\mbox{$\\displaystyle \\le \\liminf _{n \\rightarrow \\infty }\\frac{\\log \\left(\\mathbb {P}\\left( \\sum _{i=1}^{n} V_{n,i} > (x - \\epsilon ) \\sqrt{n g(\\log n)} \\right)+ \\mathbb {P}\\left(\\max _{1 \\le i \\le n} X_{i} > \\frac{\\sqrt{n}}{g(\\log n)} \\right) \\right)}{g(\\log n)}$~~(by (4.15))}\\\\& \\\\& \\mbox{$\\displaystyle \\le \\left(\\limsup _{n \\rightarrow \\infty }\\frac{\\log \\mathbb {P}\\left(\\sum _{i=1}^{n} V_{n,i} > (x - \\epsilon ) \\sqrt{n g(\\log n)}\\right)}{g(\\log n)}\\right)\\vee \\left(\\liminf _{n \\rightarrow \\infty }\\frac{\\log \\left(n \\mathbb {P}\\left(X > \\frac{\\sqrt{n}}{g(\\log n)}\\right) \\right)}{g(\\log n)} \\right)$}\\\\&\\\\& \\mbox{$\\displaystyle = \\left( - \\frac{(x - \\epsilon )^{2}}{2\\sigma ^{2}} \\right) \\vee \\left(- \\frac{\\underline{\\lambda }_{1}}{2^{\\rho }}\\right)$ ~~(by (3.22) and the second half of (3.7)}\\end{array}$ which yields (4.14) by letting $\\epsilon \\searrow 0$ .", "(ii)  Clearly, replacing $\\left\\lbrace X, X_{n};~n \\ge 1 \\right\\rbrace $ with $\\left\\lbrace -X, -X_{n};~n \\ge 1 \\right\\rbrace $ , (2.5) follows from (2.4) and (2.2).", "(iii)  To complete the proof of Theorem 2.1 we now establish (2.6).", "Since $\\sigma ^{2} < \\infty $ and $\\mu = 0$ , replacing $\\left\\lbrace X, X_{n};~n \\ge 1 \\right\\rbrace $ with $\\left\\lbrace -X, -X_{n};~n \\ge 1 \\right\\rbrace $ , it follows from (4.3) that, for any given $x > 0$ , $\\mathbb {P}\\left(\\max _{1 \\le k \\le n}(-X_{k}) > 2x \\sqrt{n g(\\log n)} \\right)\\le 4 \\mathbb {P} \\left(-S_{n} > x \\sqrt{n g(\\log n)} \\right)~~\\mbox{for all sufficiently large}~n.$ Together with (4.3) this implies that $\\mathbb {P}\\left(\\max _{1 \\le k \\le n}\\left|X_{k}\\right| > 2x \\sqrt{n g(\\log n)} \\right)\\le 4 \\mathbb {P} \\left(\\left|S_{n}\\right| > x \\sqrt{n g(\\log n)} 
\\right)~~\\mbox{for all sufficiently large}~n.$ In the same vein as establishing (4.1), it follows from (4.16) and (3.6) (i.e., the first part of Lemma 3.3) that $\\left\\lbrace \\begin{array}{ll}& \\mbox{$\\displaystyle - \\frac{\\overline{\\lambda }}{2^{\\rho }}\\le \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n}\\right| > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)},$}\\\\& \\\\& \\mbox{$\\displaystyle - \\frac{\\underline{\\lambda }}{2^{\\rho }}\\le \\liminf _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n}\\right| > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}.$}\\end{array}\\right.$ Since, for $n \\ge 1$ and $x > 0$ , $\\mathbb {P}\\left(S_{n} > x \\sqrt{n g(\\log n)} \\right) \\le \\mathbb {P}\\left(\\left|S_{n}\\right| > x \\sqrt{n g(\\log n)} \\right)$ , (4.5) and (4.17) together ensure the following lower bound of (2.6): $\\left\\lbrace \\begin{array}{ll}&\\mbox{$\\displaystyle - \\left(\\frac{x^{2}}{2\\sigma ^{2}} \\wedge \\frac{\\overline{\\lambda }}{2^{\\rho }} \\right)\\le \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n} \\right| > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}~\\mbox{for all}~x > 0$,}\\\\&\\\\&\\mbox{$\\displaystyle - \\left(\\frac{x^{2}}{2\\sigma ^{2}} \\wedge \\frac{\\underline{\\lambda }}{2^{\\rho }} \\right)\\le \\liminf _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n}\\right| > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}~\\mbox{for all}~x > 0$.", "}\\end{array}\\right.$ Now note that, for $n \\ge 2$ , $\\begin{array}{lll}\\mbox{$\\displaystyle \\left|S_{n} \\right|$}& = &\\mbox{$\\displaystyle \\left|\\sum _{i=1}^{n} \\left(X_{i}I\\left\\lbrace |X_{i}| \\le c_{n} \\right\\rbrace - \\mu _{n} \\right) +\\sum _{i=1}^{n} X_{i}I\\left\\lbrace |X_{i}| > c_{n} \\right\\rbrace + n \\mu _{n} \\right|$}\\\\&&\\\\& \\le &\\mbox{$\\displaystyle \\left|\\sum _{i=1}^{n}V_{n,i} \\right|+ \\sum _{i=1}^{n} \\left|X_{i} \\right|I\\left\\lbrace |X_{i}| > c_{n} \\right\\rbrace + \\left|n \\mu _{n} \\right|$},\\end{array}$ where $\\big \\lbrace c_{n}; n\\ge 2 \\big \\rbrace $ , $\\big \\lbrace \\mu _{n}; n \\ge 2 \\big \\rbrace $ , and $\\big \\lbrace V_{n,i}; ~1 \\le i \\le n, n \\ge 2 \\big \\rbrace $ are the same as in (4.15).", "Thus, for any fixed $x>0$ and $0 < \\epsilon < x$ , by the same argument as in (4.15), it follows from (4.10) that, for all sufficiently large $n$ , $\\left\\lbrace \\left|S_{n} \\right| > x \\sqrt{n g(\\log n)} \\right\\rbrace \\subseteq \\left\\lbrace \\left|\\sum _{i=1}^{n} V_{n,i} \\right| > (x - \\epsilon ) \\sqrt{n g(\\log n)} \\right\\rbrace \\bigcup \\left\\lbrace \\max _{1 \\le i \\le n} \\left|X_{i} \\right| > \\frac{\\sqrt{n}}{g(\\log n)} \\right\\rbrace $ Then, for any fixed $x > 0$ and $0 < \\epsilon < x$ , it follows from (4.19), the second part of Lemma 3.3 (i.e., (3.7) with $X$ , $\\overline{\\lambda }_{1}$ , and $\\underline{\\lambda }_{1}$ replaced by $|X|$ , $\\overline{\\lambda }$ , and $\\underline{\\lambda }$ respectively) and Lemma 3.4 (with $\\big \\lbrace X_{n,i}; ~1 \\le i \\le n, n \\ge 2 \\big \\rbrace $ replaced by $\\big \\lbrace V_{n,i}; ~1 \\le i \\le n, n \\ge 2 \\big \\rbrace $ ) that $\\left\\lbrace \\begin{array}{ll}&\\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n} \\right| > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}\\le - \\left(\\frac{(x-\\epsilon )^{2}}{2\\sigma ^{2}} \\wedge \\frac{\\overline{\\lambda }}{2^{\\rho }} 
\\right)$,}\\\\&\\\\&\\mbox{$\\displaystyle \\liminf _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n}\\right| > x \\sqrt{n g(\\log n)} \\right)}{g(\\log n)}\\le - \\left(\\frac{(x - \\epsilon )^{2}}{2\\sigma ^{2}} \\wedge \\frac{\\underline{\\lambda }}{2^{\\rho }} \\right)$.", "}\\end{array}\\right.$ Clearly, (2.6) follows from (4.18) and (4.20) by letting $\\epsilon \\searrow 0$ .", "This completes the proof of Theorem 2.1.", "$\\Box $" ], [ "Proof of Theorem 2.2", "In this section, we give the proof of Theorem 2.2.", "Proof of Theorem 2.2    Since $g(\\cdot ) \\in \\mathcal {V}_{\\rho }$ , we have $\\lim _{t \\rightarrow \\infty } \\frac{g(\\log (t \\pm \\eta ))}{g(\\log t)} = 1$ and hence, in view of (2.1), (2.2), and (2.3), without of loss generality, we can assume that $\\eta = 0$ .", "We only give the proof of Theorem 2.2 (i), the proofs of Theorem 2.2 (ii) and (iii) are left to the reader.", "We first establish the implication (2.9) $\\Rightarrow $ (2.7).", "Since $\\mathbb {E}|X| = \\infty $ ensures that $\\mathbb {E}X^{2} = \\infty $ , (2.9) implies the following three cases to consider: Case I   $\\mathbb {E}|X| < \\infty $ and $\\mathbb {E}X \\ne 0$ ; Case II  $\\mathbb {E}X^{2} = \\infty $ ; Case III $\\mathbb {E}X = 0$ , $\\mathbb {E}X^{2} = \\sigma ^{2} \\in (0, \\infty )$ , and $\\overline{\\lambda } = \\underline{\\lambda } = 0$ .", "For Case I, by the law of large numbers, for all $x$ , $\\lim _{n \\rightarrow \\infty } \\mathbb {P}\\left(\\left|S_{n} \\right| > x \\sqrt{n g(\\log n)} \\right)= \\lim _{n \\rightarrow \\infty } \\mathbb {P}\\left(\\left|\\frac{S_{n}}{n} \\right| > x \\sqrt{\\frac{g(\\log n)}{n}} \\right) = 1$ which ensures (2.7).", "For Case II, we consider $\\tilde{S}_{n} = \\sum _{i=1}^{n} Y_{i}$ , $n \\ge 1$ , where $\\left\\lbrace Y, Y_{n}; n \\ge 1 \\right\\rbrace = \\left\\lbrace X - X^{\\prime }, X_{n} - X^{\\prime }_{n}; n \\ge 1 \\right\\rbrace $ , $\\lbrace X^{\\prime },~X_{n}^{\\prime };~n\\ge 1\\rbrace $ is an independent copy of $\\lbrace X,~X_{n};~n\\ge 1\\rbrace $ .", "Note that $\\left\\lbrace Y, Y_{n}; ~n \\ge 1 \\right\\rbrace $ is a sequence of i.i.d.", "symmetric real-valued random variables with $\\mathbb {E}Y^{2} = \\infty $ (since $\\mathbb {E}X^{2} = \\infty $ ).", "Thus, by the second part of Lemma 6.5 of Ledoux and Talagrand [17, pages 153-154], we have, for each $n \\ge 1$ , $\\begin{array}{lll}\\mbox{$\\displaystyle \\mathbb {P}\\left(\\left|\\sum _{i=1}^{n} Y_{i}(c) \\right| > 2x\\sqrt{ng(\\log n)} \\right)$}& \\le &\\mbox{$\\displaystyle 2 \\mathbb {P}\\left(\\left|\\tilde{S}_{n} \\right| > 2x\\sqrt{ng(\\log n)} \\right)$}\\\\&&\\\\&\\le & \\mbox{$\\displaystyle 4 \\mathbb {P}\\left(\\left|S_{n} \\right| > x \\sqrt{ng(\\log n)} \\right)~~\\mbox{for all}~x > 0,$}\\\\\\end{array}$ where $Y(c) = YI\\lbrace |Y| \\le c \\rbrace $ , and $Y_{n}(c) = Y_{n} I\\left\\lbrace |Y_{n}| \\le c \\right\\rbrace $ , $n \\ge 1$ and $c > 0$ is large enough such that $\\mathbb {E}Y^{2}(c) > 0$ .", "Then, by Lemma 3.4 and (5.1), we have $\\begin{array}{lll}\\mbox{$\\displaystyle - \\frac{(2x)^{2}}{2 \\mathbb {E}Y^{2}(c)}$}&=&\\mbox{$\\displaystyle \\liminf _{n \\rightarrow \\infty }\\frac{\\log \\mathbb {P}\\left(\\left|\\sum _{i=1}^{n} Y_{i}(c) \\right| > 2x\\sqrt{ng(\\log n)} \\right)}{g(\\log n)}$ ~(by Lemma 3.4)}\\\\&&\\\\&\\le &\\mbox{$\\displaystyle \\liminf _{n \\rightarrow \\infty }\\frac{\\log \\left(4 \\mathbb {P}\\left(\\left|S_{n} \\right| > x\\sqrt{ng(\\log n)} \\right)\\right)}{g(\\log n)}$ ~(by 
(5.1))}\\\\&&\\\\&=&\\mbox{$\\displaystyle \\liminf _{n \\rightarrow \\infty }\\frac{\\log \\mathbb {P}\\left(\\left|S_{n} \\right| > x\\sqrt{ng(\\log n)} \\right)}{g(\\log n)}$}\\\\&&\\\\&\\le &\\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty }\\frac{\\log \\mathbb {P}\\left(\\left|S_{n} \\right| > x\\sqrt{ng(\\log n)} \\right)}{g(\\log n)}$}\\\\&&\\\\&\\le &\\mbox{$\\displaystyle 0~~\\mbox{for all}~ x > 0 ~(\\mbox{since}~ \\mathbb {P}(A) \\le 1~ \\mbox{for any event} ~A)$.", "}\\\\\\end{array}$ Since $\\lim _{c \\rightarrow \\infty } \\mathbb {E}Y^{2}(c) = \\mathbb {E}Y^{2} = \\infty $ , we see that (2.7) follows from (5.2).", "For Case III, applying Theorem 2.1 (i.e., (2.4)), we have $\\begin{array}{lll}0 & = &\\mbox{$\\displaystyle - \\left(\\frac{x^{2}}{2\\sigma ^{2}} \\wedge \\frac{0}{2^{\\rho }} \\right) $}\\\\&&\\\\& \\le &\\mbox{$\\displaystyle \\liminf _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n} \\right| > x\\sqrt{ng(\\log n)} \\right)}{g(\\log n)} $~~(by Theorem 2.1)}\\\\&&\\\\& \\le &\\mbox{$\\displaystyle \\limsup _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n} \\right| > x\\sqrt{ng(\\log n)} \\right)}{g(\\log n)}$}\\\\&&\\\\& = &\\mbox{$\\displaystyle - \\left(\\frac{x^{2}}{2 \\sigma ^{2}} \\wedge \\frac{0}{2^{\\rho }} \\right) $~~(by Theorem 2.1)}\\\\&&\\\\& = & 0 ~~\\mbox{for all}~ x > 0\\end{array}$ which also ensures (2.7).", "Obviously, (2.7) implies (2.8).", "We now establish the implication (2.8) $\\Rightarrow $ (2.9) by contradiction.", "If (2.9) does not hold, then either $\\mathbb {E}X = 0 ~~\\mbox{and}~~\\mathbb {E}X^{2} = 0$ or $\\mathbb {E}X = 0, ~~\\mathbb {E}X^{2} = \\sigma ^{2} \\in (0, \\infty ), ~ \\mbox{and}~~\\lambda _{1} \\ne \\lambda _{2}.$ For the first case, for each $n \\ge 2$ , we have $\\mathbb {P}\\left(\\left|S_{n} \\right| > x \\sqrt{n g(\\log n)} \\right) = 0~~\\mbox{for all}~ x > 0$ and hence, $\\lim _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n} \\right| > x\\sqrt{ng(\\log n)} \\right)}{g(\\log n)}= - \\infty ~~\\mbox{for all}~ x > 0$ which is contradictory to (2.8).", "For the second case, since $\\lambda _{1} \\ne \\lambda _{2}$ and $0 \\le \\lambda _{1} \\le \\lambda _{2} \\le \\infty $ , we have $0 < \\lambda _{2} \\le \\infty $ and hence, by Theorem 2.1, $\\liminf _{n \\rightarrow \\infty } \\frac{\\log \\mathbb {P}\\left(\\left|S_{n} \\right| > x\\sqrt{ng(\\log n)} \\right)}{g(\\log n)}\\le - \\left(\\frac{x^{2}}{2 \\sigma ^{2}} \\wedge \\frac{\\lambda _{2}}{2^{\\rho }} \\right) < 0 ~~\\mbox{for all}~ x > 0$ which is also contradictory to (2.8).", "$\\Box $ Fundings The research of Deli Li was partially supported by a grant from the Natural Sciences and Engineering Research Council of Canada (grant #: RGPIN-2019-06065) and the research of Yu Miao was partially supported by a grant from the National Natural Science Foundation of China (grant #: NSFC-11971154).", "Conflicts of interests/Competing interests The authors declare that they have no conflicts of interest.", "References Bahadur, R. R., Zabell, S. L.: Large deviations of the sample mean in general vector spaces.", "Ann.", "Probab.", "7, 587-621 (1979).", "Bingham, N. H., Goldie, C. M., Teugels, J. L.: Regular Variation.", "Encyclopedia of Mathematics and Its Applications 27.", "Cambridge Univ.", "Press.", "(1987) Bolthausen, E.: On the probability of large deviations in Banach spaces, Ann.", "Probab.", "12, 427-435 (1984).", "Book, S. 
A.: Large deviations and applications, Encyclopedia of Statistical Sciences.", "Vol.", "4 (S. Kotz, N. L. Johnson, and C. B.", "Read, eds.", "), John Wiley & Sons, New York, 476-480 (1983).", "Borovkov, A.", "A., Mogul'skiĭ, A.", "A.: Probabilities of large deviations in topological spaces.", "I, Sibirsk.", "Mat.", "Zh.", "19, no.", "5, 988-1004 (1978) (Russian), translated in Siberian Math.", "J.", "19, no.", "5, 697-709 (1979).", "Chen, X.: Probabilities of moderate deviations for B-valued independent random vectors, Chinese J. Contemp.", "Math.", "11, 381-393 (1990).", "Chen, X.: Moderate deviations of independent random vectors in a Banach space, Chinese J. Appl.", "Probab.", "Statist.", "7, no.", "1, 24-32 (1991) (Chinese).", "Chernoff, H.: A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations.", "Ann.", "Math.", "Statistics 23, 493-507 (1952).", "Cramér, H.: Sur un nouveau théorème-limite de la théorie des probabilités.", "Actualités Sci.", "Indust.", "736, 5-23 (1938).", "de Acosta, A.: Moderate deviations and associated Laplace approximations for sums of independent random vectors, Trans.", "Amer.", "Math.", "Soc.", "329, no.", "1, 357-375 (1992).", "Dembo, A., Zeitouni, O.: Large Deviations Techniques and Applications.", "Springer, Berlin-Heidelberg (2009).", "Donsker, M. D., Varadhan, S. R. S.: Asymptotic evaluation of certain Markov process expectations for large time.", "III.", "Comm.", "Pure Appl.", "Math.", "29, 389-461 (1976).", "Eichelsbacher, P., Loẅe, M.: Moderate deviations for i.i.d.", "random variables.", "ESAIM - Probab.", "Stat.", "7, 209-218 (2003).", "Gantert, N.: A note on logarithmic tail asymptotics and mixing.", "Statist.", "Probab.", "Lett.", "49, 113-118 (2000).", "Hu, Y. J., Nyrhinen, H.: Large deviations view points for heavy-tailed random walks.", "J. Theoret.", "Probab.", "17, 761-768 (2004).", "Ledoux, M: Sur les déviations modérées des sommes de variables aléatoires vectorielles indépendantes de même loi [On moderate deviations of sums of i.i.d.", "vector random variables], Ann.", "Inst.", "H. Poincaré Probab.", "Statist.", "28, no.", "2, 267-280 (1992) (French).", "Ledoux, M., Talagrand, M.: Probability in Banach Spaces: Isoperimetry and Processes.", "Springer-Verlag, Berlin (1991).", "Li, D., Miao, Y.: A supplement to the laws of large numbers and the large deviations.", "Stochastics 93, 1261-1280 (2021).", "Li, D., Miao, Y., Stoica, G.: A general large deviation result for partial sums of i.i.d.", "super-heavy tailed random variables.", "Stat.", "Prob.", "Lett.", "184, Article 109371 (2022).", "Li, D., Rosalsky: Precise lim sup behavior of probabilities of large deviations for sums of i.i.d.", "random variables, Int.", "J.", "Math.", "Math.", "Sci.", "2004, 3565-3576 (2004).", "Li, D., Rosalsky, A., Al-Mutairi, D. K.: A large deviation principle for bootstrapped sample means.", "Proc.", "Amer.", "Math.", "Soc.", "130, 2133-2138 (2002).", "Nakata, T.: Large deviations for super-heavy tailed random walks.", "Stat.", "Prob.", "Lett.", "180, Article 109240 (2022).", "Petrov, V. V.: Generalization of Cramér's limit theorem, Uspehi Matem.", "Nauk (N.S.)", "9, no.", "4(62), 195-202 (1954) (Russian), translated in Select.", "Transl.", "in Math.", "Stat.", "and Probab.", "6, 1-8 (1966).", "Petrov, V. V.: A generalization of a certain inequality of Lévy, Teor.", "Veroyatnost.", "i Primenen.", "20, 140-144 (1975) (Russian), translated in Theory Probab.", "Appl., 20, 141-145 (1975).", "Petrov, V. 
V.: Sums of Independent Random Variables.", "Springer-Verlag, New York (1975).", "Petrov, V. V.: Limit Theorems of Probability Theory.", "Sequences of Independent Random Variables.", "Oxford Studies in Probability, vol.", "4, The Clarendon Press, Oxford University Press, New York (1995).", "Saulis, L., Statulevic̆ius, V. A.: Limit Theorems for Large Deviations, Mathematics and its Applications (Soviet Series), vol.", "73, Kluwer Academic Publishers, Dordrecht (1991).", "Stoica, G.: Large gains in the St. Petersburg game.", "C. R. Math.", "Acad.", "Sci.", "Paris 346, no.", "9-10, 563-566 (2008).", "Stroock, D. W.: An Introduction to the Theory of Large Deviations.", "Springer, New York (1984)." ] ]
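[Illustrative aside, not part of the paper above.] In the classical case g(t) = log t, so that the normalisation is sqrt(n log log n), the displayed limit log P(S_n > x sqrt(n log log n)) / log log n -> -x^2/(2 sigma^2) can be checked in closed form for standard normal increments, since then S_n / sqrt(n) is exactly N(0, 1). The short Python sketch below does this; the choice of x and of the sample sizes is arbitrary, and the slow approach to the limit (at rate O(log log log n / log log n)) is itself instructive.

```python
# Closed-form check of the log log n moderate deviation limit for N(0, 1) increments:
# P(S_n > x sqrt(n L)) = Phi_bar(x sqrt(L) / sigma) with L = log log n, so no simulation is needed.
import numpy as np
from scipy.stats import norm

x, sigma2 = 1.5, 1.0
for n in [1e3, 1e6, 1e12, 1e24, 1e100]:
    L = np.log(np.log(n))                                   # log log n
    tail = norm.sf(x * np.sqrt(L) / np.sqrt(sigma2))        # exact tail probability
    print(f"n = {n:8.0e}:  ratio = {np.log(tail) / L:+.3f}   (limit {-x**2 / (2 * sigma2):+.3f})")
```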
2207.03545
[ [ "On the asymmetric additive energy of polynomials" ], [ "Abstract We prove a general result concerning the paucity of integer points on a certain family of 4-dimensional affine hypersurfaces.", "As a consequence, we deduce that integer-valued polynomials have small asymmetric additive energy." ], [ "Introduction", "Given a non-zero polynomial $f\\in \\mathbb {Z}[x]$ of degree $d\\ge 3,$ an integer $k\\in \\mathbb {Z},$ and a parameter $B\\ge 1,$ we let $E_f(B;k)$ denote the number of integer solutions to the equation $f(x_1)+f(x_2)=f(x_3)+f(x_4)+k$ inside the multi-dimensional box $S(B) = \\lbrace (x_1,x_2,x_3,x_4)\\in \\mathbb {Z}_{>0}^4: \\max _{i} x_i \\le B\\rbrace .$ Following Baker, Munsch and Shparlinski [14], when $k$ is a fixed, non-zero integer, we call $E_f(B;k)$ the asymmetric additive energy of the polynomial $f$ inside the box $S(B)$ with respect to $k$ .", "If $k=0$ we simply call $E_f(B;0)$ the symmetric additive energy of the polynomial $f$ inside the box $S(B).$ The latter case has been particularly well-studied in the literature, and here we have results available for any polynomial $f$ .", "In this case, we immediately see there are $2B^2$ diagonal solutions to equation (REF ) of the form $(a,b,a,b)$ and $(a,b,b,a)$ .", "Based on standard probabilistic heuristics, one would expect there to be very few other solutions.", "Indeed, it is now known that one has an asymptotic of the form $E_f(B;0) = 2B^2 + O_{f}(B^{2-\\delta })$ for some explicit constant $\\delta >0$ .", "This was first established in the special case $f(x)=x^d$ by Hooley [10], [11], with results for more general polynomials $f$ being established later by various authors.", "We refer the reader to Browning's excellent paper [2] for a brief history of this interesting problem.", "The case where $k$ is a fixed, non-zero integer has received less attention.", "However, recently, it is has been realised that estimates in this alternate setting would have interesting applications.", "This is the regime we study.", "In this case, in the absence of any diagonal solutions, one would simply expect there to be very few solutions to equation (REF ).", "The estimate $E_f(B;k) \\ll _{f,\\epsilon } B^{2+\\epsilon }$ is essentially trivial and follows from an application of the divisor bound.", "Hence in this situation one expects a bound of the form $E_f(B;k) \\ll _{f} B^{2-\\delta }$ to hold.", "We remark that this estimate is uniform in $k$ , and this is important in the interest of applications.", "Currently, a bound like (REF ) is only known in the special case when $f(x)=x^d$ and either $d=3$ or $d\\ge 5$ .", "This is essentially due to Hooley [10] in the former case and Marmon [13] in the latter.", "In this paper, we prove a bound of type (REF ) holds for an arbitrary polynomial $f$ .", "In other words, we establish that polynomials have small asymmetric additive energy.", "Theorem 1.1 Fix a polynomial $f\\in \\mathbb {Z}[x]$ of degree $d \\ge 3$ .", "For any non-zero integer $k$ we have $E_f(B;k) \\ll _{f} B^{2-1/(50d)}.$ The zero-set of equation (REF ) defines a 4-dimensional affine hypersurface over $\\mathbb {Q}$ .", "It is natural to consider this geometric object abstractly.", "In doing so, we are led to consider a family of 4-dimensional affine hypersurfaces which generalise equation (REF ).", "We are then able to prove the following result concerning the family, from which Theorem REF will follow as a corollary.", "This general result may have independent interest.", "Theorem 1.2 Fix a polynomial $f\\in 
\\mathbb {Z}[x,y]$ of degree $d$ with zero constant term, a polynomial $g\\in \\mathbb {Z}[x,y]$ of degree $(d-1)$ , and non-zero integers $abk\\ne 0.$ Write $f_d$ (resp.", "$g_{d-1}$ ) for the top-degree homogenous parts of $f$ (resp.", "$g$ ), and let $M_{f,g}(B;k)$ denote the number of integer solutions to the equation $f(x_1,x_2)=(ax_3-bx_4)g(x_3,x_4)+k$ inside the multi-dimensional box $S(B)$ defined in equation (REF ).", "Suppose, in addition, the following constraints hold: The affine curve $\\lbrace f(x,y)=k\\rbrace \\subset \\mathbb {A}_{\\mathbb {Q}}^{2}$ doesn't contain a line.", "The projective variety $\\lbrace f_{d}(x,y)=0\\rbrace \\subset \\mathbb {P}_{\\mathbb {Q}}^{1}$ is smooth and doesn't contain any non-constant repeated components.", "The projective variety $\\lbrace (ax-by)g_{d-1}(x,y)=0\\rbrace \\subset \\mathbb {P}_{\\mathbb {Q}}^{1}$ doesn't contain any non-constant repeated components.", "If $d=4$ we additionally suppose: The projective variety $\\bigg \\lbrace \\frac{1}{a}\\frac{\\partial g_{d-1}}{\\partial x}\\bigg (\\frac{x}{a},\\frac{y}{b}\\bigg ) +\\frac{1}{b}\\frac{\\partial g_{d-1}}{\\partial y}\\bigg (\\frac{x}{a},\\frac{y}{b}\\bigg )=0\\bigg \\rbrace \\subset \\mathbb {P}_{\\mathbb {Q}}^{1}$ doesn't contain any non-constant repeated components.", "Then, $M_{f,g}(B;k)\\ll _{f,g,a,b} B^{2-1/(50d)}.$ We remark here that one can show the estimate $M_{f,g}(B;k) \\ll _{f,g,a,b,\\epsilon } B^{2+\\epsilon }$ via a divisor-bound argument, and without constraints on $f$ and $g$ this is essentially optimal.", "Thus, one may wonder which of the assumptions of Theorem REF are necessary in order for there to be significantly fewer solutions.", "We clearly require (1), as otherwise we would be able to generate $O(B^2)$ “trivial solutions\" lying on lines contained in the hypersurface.", "Interestingly, (1) is not sufficient; one also requires (3).", "This is illustrated by the following example: there are $O(B^{2})$ solutions to the equation $x_1^4 - x_2^4 = (x_3-x_4)(x_3^3-x_4^3-3x_4^2-3x_4)+1$ of the form $(a,a,b+1,b).$ In this example (1) is satisfied (see Lemma REF below) but (3) is not, as the top-degree homogenous part of the RHS contains the square-factor $(x_3-x_4)^2.$ On the other hand, (2) and (4) are present purely to facilitate our proof method.", "Presumably, both of these assumptions could be removed if one had a different approach.", "We discuss this in more detail in Section .", "We have written the conclusion (REF ) of Theorem REF in the form stated for simplicity; in view of applications, the most important aspect is that we obtain a power saving over the trivial bound.", "However, our proof actually yields the better bounds: $M_{f,g}(B;k)\\ll _{f,g,a,b,\\epsilon }{\\left\\lbrace \\begin{array}{ll}B^{2-1/(3d)+\\epsilon }\\,\\,&\\text{if $d\\in \\lbrace 3,4\\rbrace ,$} \\\\B^{1+\\epsilon }(B^{1/2}+B^{2/\\sqrt{d}+1/(d-1)-1/((d-2)\\sqrt{d})})\\,\\,&\\text{if $d\\ge 5.$}\\end{array}\\right.", "}$ The bounds present in Theorem REF can be improved accordingly.", "We will deduce Theorem REF from Theorem REF in Section REF below.", "As mentioned above, Theorem REF has various applications in the literature.", "We discuss these now." 
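[Illustrative sketch, ours rather than the paper's.] The quantity E_f(B; k) defined above can be computed directly for small B by hashing the B^2 pair sums f(a) + f(b), which makes the contrast between the symmetric case k = 0 (dominated by the roughly 2B^2 diagonal solutions) and a fixed nonzero k visible already for modest B. The test polynomial, box size and value of k below are arbitrary choices.

```python
from collections import Counter

def additive_energy(f, B, k):
    """Count (x1, x2, x3, x4) in [1, B]^4 with f(x1) + f(x2) = f(x3) + f(x4) + k."""
    sums = Counter(f(a) + f(b) for a in range(1, B + 1) for b in range(1, B + 1))
    return sum(cnt * sums.get(s - k, 0) for s, cnt in sums.items())

f = lambda x: x**4        # hypothetical test polynomial of degree d = 4
B = 60
print(additive_energy(f, B, 0))   # symmetric energy: close to 2*B^2 (here only diagonal solutions, 2*B^2 - B)
print(additive_energy(f, B, 1))   # asymmetric energy: no diagonal solutions, so expected to be far smaller
```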
], [ "Applications", "In [8] Chen, Kerr, Maynard, and Shparlinski were interested in showing that Weyl sums typically exhibit square-root cancellation.", "A key input to their method was an estimate of the form $\\sum _{0<|k|\\le 4B^{d}} \\frac{E_{f}(B;k)}{k} \\ll B^{2-\\kappa _d}$ for the monomial $f(x)=x^d,$ where $\\kappa _d>0$ is a constant depending only on $d$ (cf.", "proof of [8]).", "In the paper they establish such an estimate when $d=3$ and $d\\ge 5$ , using work of Hooley [10] and Marmon [13].", "This just left the case $d=4$ .", "It is clear that, with Theorem REF applied to the polynomial $f(x)=x^4,$ we can now extend [8] to cover the case $d=4$ and hence complete this aspect of the classification of Weyl sums.", "Corollary 1.3 (Square-root cancellation in Weyl sums almost always) There exist positive constants $c$ and $C$ such that, for any $d\\ge 3$ and any sequence of complex weights $(a_n)_{n=1}^{\\infty }$ with $|a_n|=1,$ the set $\\bigg \\lbrace x\\in [0,1): cN^{1/2} \\le \\bigg |\\sum _{n=1}^{N}a_ne^{2\\pi i x n^d}\\bigg |\\le CN^{1/2} \\text{ for infinitely many } N\\in \\mathbb {N}\\bigg \\rbrace $ has full Lebesgue measure.", "As a further application of Theorem REF , in recent work Baker, Munsch and Shparlinski [14] proved a general result which enables one to establish large sieve inequalities for a general class of sparse sequences, provided that one has good estimates available for the symmetric and asymmetric additive energy of the sequence.", "Using the work [8] described above, the authors were able to establish large sieve inequalities for the monomial sequences $f(x)=x^d$ when $d=3$ or $d\\ge 5.$ The authors also proved a weaker result about general polynomial sequences by alternative methods [14].", "By using the new estimates contained in Theorem REF , we are able to establish their first result for the monomial $f(x)=x^4$ (albeit with a slightly weaker exponent), and also improve their result concerning polynomial sequences.", "A direct application of [14] to the appropriate sequence yields the following.", "Corollary 1.4 (Large sieve inequality for polynomial sequence) Fix $\\epsilon >0.$ For any sequence of complex weights $(a_n)_{n=1}^{\\infty }$ and $f\\in \\mathbb {Z}[x]$ of degree $d\\ge 3$ and $Q^d\\le N\\le Q^{2d}$ we have $\\sum _{q=1}^{Q}\\sum _{\\begin{array}{c}a=1 \\\\ (a,f(q))=1\\end{array}}^{f(q)} \\bigg | \\sum _{n=M+1}^{M+N}a_n e\\bigg (\\frac{2\\pi i a n }{f(q)}\\bigg )\\bigg |^2 \\ll _{f,\\epsilon } (NQ^{1/2}+N^{3/4}Q^{d/2+2-1/(50d)})Q^{\\epsilon }\\sum _{n=M+1}^{M+N}|a_n|^2.$ It is likely that further applications of Theorem REF will appear in the literature.", "We now show how Theorem REF implies Theorem REF ." 
], [ "Deducing Theorem ", "Fix a polynomial $p\\in \\mathbb {Z}[x]$ of degree $d$ and a non-zero integer $k$ .", "Let us write $p(x) = \\sum _{i=0}^{d}a_i x^i.$ We would like to apply Theorem REF , taking $f(x,y) = (x-y)g(x,y) =p(x)-p(y).$ First let us check the constraints on $f$ are satisfied.", "The homogenous polynomial $f_d(x,y)$ is clearly smooth, and moreover over $\\overline{\\mathbb {Q}}$ we have the factorisation $f_d(x,y) = a_d(x^d-y^d) = a_d \\prod _{\\xi ^d=1}(x-\\xi y),$ which shows that $f_d$ has no repeated factors.", "Thus we just need to check that the curve $f(x,y)=k$ contains no rational lines.", "For this we have the following lemma.", "Note that in this special case we can prove the stronger assertion that $f(x,y)=k$ contains no lines over $\\overline{\\mathbb {Q}}.$ Lemma 1.5 For any $p\\in \\mathbb {Z}[x]$ and non-zero integer $k$ , the affine curve $\\lbrace p(x)-p(y)=k\\rbrace \\subset \\mathbb {A}_{\\overline{\\mathbb {Q}}}^{2}$ contains no lines.", "Suppose for a contradiction that this affine curve contains a line.", "It is clear that in this case there must exist a parametrisation of this line of the form $(x,y) = (t,\\alpha t+ \\beta )$ with $\\alpha ,\\beta \\in \\overline{\\mathbb {Q}}$ .", "This leads us to the polynomial identity $p(t) = p(\\alpha t+\\beta )+k$ in $\\overline{\\mathbb {Q}}[t].$ Comparing leading-term coefficients, we see that $\\alpha ^d = 1.$ Then comparing the coefficients of $t^{d-1},$ we may solve for $\\beta $ and find that $\\beta = a_{d-1}(\\alpha -1)/(da_d).$ If $\\alpha =1$ then we must have $\\beta =0,$ and then setting $t=0$ in (REF ) yields $k=0$ , a contradiction.", "Otherwise, setting $t = -\\beta /(\\alpha -1)$ yields the same contradiction.", "It just remains to check our constraints on $g$ .", "It follows from the above that $(x-y)g_{d-1}(x,y)$ is square-free.", "Finally, when $d=4,$ we must additionally check that the gradient of $g_{d-1}$ is square-free.", "In this case, $g_3(x,y) = a_4(x^3+x^2y+xy^2+y^3)$ and we have $\\frac{\\partial g_{3}}{\\partial x}(x,y) +\\frac{\\partial g_{3}}{\\partial y}(x,y) = 4a_4(x^2+xy+y^2) = 4a_4(x-\\omega y)(x-\\overline{\\omega }y)$ where $\\omega = (-1+\\sqrt{3}i)/2.$ Hence the gradient is square-free.", "This completes the check of all the constraints.", "Since $E_{p}(B;k) = M_{f,g}(B;k),$ it is now clear that Theorem REF follows from Theorem REF .", "Acknowledgments The author would like to thank James Maynard for many helpful and insightful discussions about the problem.", "The author is funded by an EPSRC Studentship and part of Maynard's ERC Grant (grant agreement No 851318)." 
], [ "Proof outline of Theorem ", "The proof of Theorem REF will split into two cases, depending on whether $d\\in \\lbrace 3,4\\rbrace $ or $d\\ge 5$ .", "In the former case we will apply a sieve method, and in the latter we will apply the determinant method.", "The following notation will be useful in this section and throughout the paper: whenever $F\\in \\mathbb {Z}[x_1,\\ldots ,x_n]$ is a polynomial in $n$ variables, we let $M_F(B)$ denote the number of solutions to the equation $F(x_1,\\ldots ,x_n)=0$ inside the multi-dimensional box $S(B)$ defined in equation (REF ).", "(Although this overloads the notation $M_{f,g}(B;k)$ used in Theorem REF , it will always be clear from the context which quantity we are referring to.)" ], [ "The case $d\\in \\lbrace 3,4\\rbrace $ and the polynomial sieve method", "When $d\\in \\lbrace 3,4\\rbrace $ , we will establish Theorem REF via a sieve method.", "The sieve can be viewed as a “local\" method, where one attempts to rule out the existence of lots of “global\" solutions by ruling out the possibility of lots of “local\" solutions modulo $p$ for “many\" primes $p$ .", "Hooley [9], [10], [11] was the first to appreciate how sieves could be applied in this context.", "We will find it convenient to use a particularly flexible sieve method called the polynomial sieve due to Browning [2].", "We defer the statement of the main sieve proposition to Section .", "Let us recall equation (REF ) where for simplicitly we assume $a=b=1:$ $f(x_1,x_2)=(x_3-x_4)g(x_3,x_4)+k$ A key property of the above equation which enables the sieve method to work is the presence of a linear factor on the RHS.", "If we let $h=(x_3-x_4)$ and eliminate $x_4$ (say) in the above equation, we may equivalently examine $f(x_1,x_2)= hg(x_3,x_3-h)+k$ The argument then proceeds by first fixing the value of $h$ , and then counting solutions to the simpler equation in $(x_1,x_2,x_3)$ which remains.", "We can then apply the polynomial sieve to detect solutions to this equation where the variables $(x_1,x_2)$ are constrained to satisfy the congruence $f(x_1,x_2)\\equiv k\\,\\,(\\text{mod}\\,\\,h).$ By applying the sieve to a congruenced set in this manner, we are able to retain the trivial bound at this step of the argument.", "This is crucial, as it allows one to obtain a power saving for $M_{f,g}(B;k)$ provided one gains only a small power of $B$ from the sieve estimates.", "This means we would like to sieve by primes $p$ of size $O(B^{\\delta })$ (say), which in turn requires one to understand our variables in arithmetic progressions with modulus of size $O(B^{1+2\\delta }).$ The modulus is slightly larger than the length of summation, but this difficulty can be overcome by a completion of sums argument.", "This leaves one with certain exponential sums over algebraic varieties to estimate.", "Hence, to execute the sieve method effectively, one has recourse to the deep work of Weil [15] and Deligne [6] concerning the Riemann Hypothesis for curves and higher dimensional varieties over finite fields.", "This argument works for a generic value of $h$ , but in practice there might exist some exceptional values of $h$ for which certain auxiliary varieties (depending on $h$ ) fail to be smooth.", "However, using elimination theory, it is possible to show that there can't be too many of these exceptional values.", "Once can then estimate the contribution from these cases via other methods, such as the Bombieri-Pila method (see below)." 
], [ "The case $d\\ge 5$ and the determinant method", "For the complementary case, when $d\\ge 5,$ we will use the determinant method.", "The determinant method can be viewed as a “global\" method.", "The general philosophy is that any “large\" contribution to the count $M_{f,g}(B)$ must come from rational points lying on lower dimensional varieties contained inside the hypersurface.", "This method has its origins in the pioneering work of Bombieri-Pila [1].", "It was then greatly developed at a later date by Heath-Brown [7], and has enjoyed various refinements since due to a variety of authors.", "Again, recalling equation (REF ) with $a=b=1,$ we wish to count points on the 3-dimensional surfaces $f(x_1,x_2)=(x_3-n)g(x_3,n)+k \\subset \\mathbb {A}_{\\mathbb {Q}}^{3}$ for each fixed integer $n.$ Thus, as in the sieve method, we begin by considering a simpler object of smaller dimension.", "However, in the former case it was crucial we had the linear factor on the RHS and our change of variables incorporated this information.", "This is less important here (however we will still make use of the linear factor later).", "We will apply the determinant method to equation (REF ) in the form of the following result, which is implicit in the proof of [5].", "Proposition 2.1 (Browning, Heath-Brown) Let $F\\in \\mathbb {Z}[x,y,z]$ be a non-singular polynomial of degree $d \\ge 4.$ Then $M_F(B)&\\ll _{d,\\epsilon } M_F^{\\text{lines}}(B) + B^{1/2+\\epsilon }+B^{2/\\sqrt{d}+1/(d-1)-1/(\\sqrt{d}(d-2))+\\epsilon },$ where $M_F^{\\text{lines}}(B)$ counts the number of integer points lying on lines contained in the hypersurface $\\lbrace F=0\\rbrace \\subset \\mathbb {A}_{\\overline{\\mathbb {Q}}}^{3}$ inside the box $S(B)$ defined by equation (REF ).", "We note that the last error term exceeds $B$ when $d<5$ .", "This is the reason we cannot apply the determinant method when $d\\in \\lbrace 3,4\\rbrace .$ Therefore, to obtain a power saving for $M_{f,g}(B;k)$ via the determinant method, it is sufficient to have control over the possible lines which can appear in the surfaces (REF ).", "Here it is important we are averaging over $n;$ for certain values of $n$ lines may exist and hence contribute a larger amount to $M_{f,g}(B;k),$ however, one can show via elementary means that there cannot be too many values of $n$ for which this can occur.", "We note that our argument will also make use of the linear factor on the RHS of (REF ).", "We remark that Proposition REF only applies when the surface (REF ) is smooth.", "This will be true for a generic choice of $n$ .", "Thus we arrive at a similar situation to that described above with the sieve argument, where we must handle exceptional cases via different methods.", "For these values we will use the Bombieri-Pila method.", "We state the main result we will use here.", "The following appears as [1].", "Proposition 2.2 (Bombieri-Pila) Let $F\\in \\mathbb {Z}[x,y]$ be an absolutely irreducible curve of degree $d\\ge 2$ .", "Then $M_F(B) \\ll _{d,\\epsilon } B^{1/d+\\epsilon }.$" ], [ "Some basic facts about discriminant polynomials", "Throughout the proof of Theorem REF we will encounter various auxiliary curves and surfaces which depend on an integer parameter $h$ (say).", "For both the determinant method and the polynomial sieve method to work, we require these varieties to be smooth for“most\" choices of $h.$ This in turn amounts to showing that certain discriminant polynomials, which by definition will be polynomials in the parameter $h$ , are not the zero 
polynomial.", "Our method of proving this is to extract the leading coefficient using limiting arguments.", "This leading coefficient will generically be non-zero, and only vanish if our polynomials are degenerate in some way.", "Our additional assumptions (2) and (4) in the statement of Theorem REF ensure that we avoid these cases.", "It is possible that these additional assumptions could be removed if one had a different way of proving these discriminant polynomials didn't vanish.", "With this in mind, we collect here a few basic facts about discriminant polynomials which we will use without comment throughout the paper.", "Given a polynomial $f\\in \\mathbb {Q}[x]$ of degree $d,$ leading coefficient $a_d,$ and roots $\\lambda _1,\\ldots ,\\lambda _d \\in \\overline{\\mathbb {Q}},$ we form its discriminant polynomial with respect to $x$ , which we write as $\\mathrm {Disc}_x[f(x)]$ by the formula $\\mathrm {Disc}_x[f(x)] = (-1)^{d(d-1)/2}a_d^{d(d-1)} \\prod _{\\begin{array}{c}i\\ne j\\end{array}}(\\lambda _i-\\lambda _j).$ The discriminant polynomial satisfies the following properties: $\\mathrm {Disc}_x[f(x)]$ is a polynomial in the coefficients of $f$ .", "$\\mathrm {Disc}_x[f(x)]$ vanishes if and only if $f$ and $f^{\\prime }$ possess a common factor over $\\mathbb {Q}$ .", "In particular, if $f$ has no non-constant repeated factors over $\\mathbb {Q}$ then $\\mathrm {Disc}_x[f(x)]\\ne 0.$ For any real numbers $a,b$ and $c$ we have the transformation formula $\\mathrm {Disc}_x[af(bx+c)] = a^{2d-2}b^{d(d-1)}\\mathrm {Disc}_x[f(x)].$" ], [ "Notation", "We will use both Landau and Vinogradov asymptotic notation throughout the paper.", "$B$ will denote a large integer, and all asymptotic notation is to be understood as referring to the limit as $B\\rightarrow \\infty .$ We allow any implied constants to depend implicitly on the variables $f,g,a$ and $b,$ without specifying so.", "By this, we mean dependencies on the coefficients of $f$ and $g$ and also on the degree $d$ .", "Any dependencies of the implied constants on other parameters $A$ will be denoted by a subscript, for example $X\\ll _{A} Y$ or $X=O_{A}(Y),$ unless stated otherwise.", "We let $\\epsilon $ denote a small positive constant, and we adopt the convention it is allowed to change at each occurrence, and even within a line.", "If $k$ is a field, we let $\\mathbb {A}_{k}^{n}$ (resp.", "$\\mathbb {P}_{k}^{n}$ ) denote $n$ -dimensional affine (resp.", "projective) space over $k$ .", "If $F\\in \\mathbb {Z}[x_1,\\ldots ,x_n]$ we let $\\lbrace F=0\\rbrace \\subset \\mathbb {A}_{\\mathbb {Q}}^{n}$ denote the $n$ -dimensional affine hypersurface generated by $F$ over $\\mathbb {Q}.$ By slight abuse of notation, we may write this as $F=0\\subset \\mathbb {A}_{\\mathbb {Q}}^{n},$ or even simply $F\\subset \\mathbb {A}_{\\mathbb {Q}}^{n}.$ We adopt similar conventions whenever $F$ is homogenous and generates a projective hypersurface.", "The hypersurface defined by $F$ is said to be smooth over $\\mathbb {Q}$ if the system of equations $F(y_1,\\ldots ,y_n) = \\frac{\\partial {F}}{\\partial {x_1}}(y_1,\\ldots ,y_n) =\\ldots =\\frac{\\partial {F}}{\\partial {x_d}}(y_1,\\ldots ,y_n)= 0$ has no solutions with $(y_1,\\ldots ,y_n)\\in \\mathbb {A}_{\\mathbb {Q}}^{n}$ in the affine case or $[y_1:\\ldots :y_n]\\in \\mathbb {P}_{\\mathbb {Q}}^{n}$ in the projective case.", "We let $f_i$ (resp.", "$g_i$ ) denote the homogenous part of $f$ (resp.", "$g$ ) of degree $i$ .", "Thus, we may write $f(x,y) = \\sum 
_{i=0}^{d}f_i(x,y),\,\,\,\,g(x,y) = \sum _{i=0}^{d-1}g_i(x,y).$ We will frequently use assumption (1) in Theorem REF which says that the curve $\lbrace f(x,y)=k\rbrace \subset \mathbb {A}_{\mathbb {Q}}^{2}$ contains no lines.", "We note that the curve may well contain lines over the larger field $\overline{\mathbb {Q}},$ but these lines cannot simply be reparametrisations of lines over $\mathbb {Q}$ (e.g.", "$\sqrt{2}x+\sqrt{2}y=\sqrt{2}$ ).", "To this end, the following definition is useful: we say a line $\lbrace \alpha x+\beta y + \gamma = 0\rbrace \subset \mathbb {A}_{\overline{\mathbb {Q}}}^{2}$ is definable over $\mathbb {Q}$ if there exists $\lambda \in \overline{\mathbb {Q}}$ and $a,b,c\in \mathbb {Q}$ such that $\alpha x+ \beta y+\gamma = \lambda (ax+by+c).$ With this definition, our assumption is precisely that the curve $f(x,y)=k$ doesn't contain any lines definable over $\mathbb {Q}.$" ], [ "Proof of Theorem ", "Fix $f,g\in \mathbb {Z}[x,y]$ as in the statement of Theorem REF , with $d\ge 5$ .", "In this section we prove Theorem REF in this regime.", "Recalling equation (REF ), we wish to count integer points on the affine hypersurface $f(x_1,x_2)=(ax_3-bx_4)g(x_3,x_4)+k\subset \mathbb {A}_{\mathbb {Q}}^{4}.$ The method proceeds by fixing the value of $x_4$ , which we now call $n$ , and considering the resulting 3-dimensional affine surface $f(x_1,x_2) = (ax_3-bn)g(x_3,n)+k\subset \mathbb {A}_{\mathbb {Q}}^{3},$ which we call $\Gamma _n.$ For later purposes, we let $\Gamma _n^{\text{proj}}(x_1,x_2,x_3,w)\subset \mathbb {P}_{\mathbb {Q}}^{3}$ denote the projectivisation of this surface.", "Recalling our notation so far, we can write $M_{f,g}(B;k) = \sum _{1\le n\le B} M_{\Gamma _n}(B).$ First let us deal with a degenerate situation, where $n$ is such that $g(x_3,n)$ vanishes identically (as a polynomial in $x_3$ ).", "Lemma 4.1 Suppose $n$ is such that $g(x_3,n)$ vanishes identically.", "Then $M_{\Gamma _n}(B) \ll _{\epsilon } B^{3/2+\epsilon }.$ Clearly $M_{\Gamma _n}(B) \ll B\cdot \#\lbrace x_1,x_2\in [1,B]\cap \mathbb {Z}: f(x_1,x_2) = k\rbrace .$ To evaluate this count, we can decompose our curve into $O(1)$ absolutely irreducible components and majorise by summing over each component.", "We can then apply Proposition REF to each component.", "Components of degree $\ge 2$ contribute $O_{\epsilon }(B^{1/2+\epsilon }).$ By assumption, any components of degree 1 (i.e.", "lines) are not definable over $\mathbb {Q}$ and therefore contain at most 1 integer point, and so these contribute in total $O(1)$ .", "Thus, this count is $O_{\epsilon }(B^{1/2+\epsilon }),$ which yields the lemma.", "There can be at most $O(1)$ such values of $n$ for which $g(x_3,n)$ vanishes identically, and so (REF ) becomes $M_{f,g}(B;k) = \sum _{\begin{array}{c}1\le n\le B \\ g(\cdot ,n)\ne 0\end{array}}M_{\Gamma _{n}}(B)+O_{\epsilon }(B^{3/2+\epsilon }).$ For these remaining values of $n$ we would like to use Proposition REF to estimate the corresponding $M_{\Gamma _n}(B).$ To do this we require the surface $\Gamma _n$ to be smooth.", "Generically this will be the case, as the following lemma demonstrates.", "Lemma 4.2 $\Gamma _{n}^{\text{proj}}$ is singular for at most $O(1)$ values of $n$ .", "Write $g(x_3,x_4) = \sum _{i+j\le d-1}e_{i,j} x_3^ix_4^j.$ We first deal with possible singular points with $w=0.$ We can write $\Gamma _n^{\text{proj}}$ as $f_d(x_1,x_2)+wf_{d-1}(x_1,x_2) &= 
ae_{d-1,0}x_3^d+(ae_{d-2,1}n-bne_{d-1,0}+ae_{d-2,0})x_3^{d-1}w\\\\&+\\text{ (terms involving $w^2$)}.$ If $[r:s:t:0]$ is a singular point, by considering the $x_1$ and $x_2$ derivatives, we see that necessarily $\\frac{\\partial f_{d}}{\\partial x_1}(r,s) = \\frac{\\partial f_{d}}{\\partial x_2}(r,s) = 0.$ By Euler's identity, we see that $f_d(r,s)=0$ also.", "As we are assuming $f_d$ is smooth we must have $r=s=0.$ If $e_{d-1,0}\\ne 0$ then the equation above yields $t=0,$ which is a contradiction.", "If $e_{d-1,0}=0$ then necessarily $e_{d-2,1}\\ne 0$ as otherwise $g_{d-1}(x_3,x_4)$ would contain a square factor of $x_4^2$ .", "In this case the $w$ derivative evaluated at $[0:0:t:0]$ yields $a(e_{d-2,1}n+e_{d-2,0})t^{d-1} = 0.$ Unless $n=-e_{d-2,0}/e_{d-2,1}$ we conclude again that $t=0.$ Thus there is at most 1 value of $n$ for which $\\Gamma _n^{\\text{proj}}$ contains a singular point with $w=0.$ This leaves us to examine possible singular points with $w=1.$ For ease, let us write $G_{n}(x_3) = (ax_3-bn)g(x_3,n)$ and denote by $G_n^{\\prime }(x_3)$ the derivative with respect to $x_3.$ Again, by examining derivatives, it is clear that any singular point $[r:s:t:1]$ must satisfy, in particular, $\\frac{\\partial f}{\\partial x_1}(r,s) = \\frac{\\partial f}{\\partial x_2}(r,s) = 0 \\text{ and } G_n^{\\prime }(t) = 0.$ Now, our assumption that $f_d$ is square-free implies that the partial derivatives $\\partial f/\\partial x_1$ and $\\partial f/\\partial x_2$ are coprime and hence, by Bézout's theorem, they have at most $O(1)$ common zeros.If $\\partial f/\\partial x_1$ and $\\partial f/\\partial x_2$ have a common factor $p$ then $f$ must be of the form $f= p^2q+c$ for some constant $c$ .", "By comparing homogenous parts of top-degree we see $f_d$ will be divisible by a square in this case.", "Thus, this system constrains $r$ and $s$ to at most $O(1)$ possible values.", "For each pair we must then solve the system $G_{n}(t) &= f(r,s), \\\\G_n^{\\prime }(t) &= 0.$ We would like to show that this system is only solvable in $t$ for at most $O(1)$ choices of $n.$ If this were the case, it would follow that there are at most $O(1)$ values of $n$ for which $\\Gamma _n^{\\text{proj}}$ contains a singular point with $w=1.$ This, together with the above, would yield the conclusion of the lemma.", "We will prove this via the following strategy, which will be used numerous times throughout the paper: if the system is solvable then the discriminant $\\mathrm {Disc}_{t} [G_{n}(t) -f(r,s)]$ will vanish identically.", "However, by definition, this will be a polynomial in $n$ which generically will be non-zero and so only vanish for $O(1)$ values of $n$ .", "Hence it is sufficient to prove that this discriminant is not identically zero.", "We prove this by extracting the leading coefficient.", "Our assumptions on $f$ and $g$ will then imply this leading coefficient doesn't vanish.", "Now we have $ \\nonumber \\frac{G_{n}(n t)}{n^d} &= \\frac{(at-b)g(nt,n)}{n^{d-1}} \\\\ \\nonumber &= \\frac{(at-b)}{n^{d-1}} \\sum _{i=0}^{d-1}g_i(nt,n) \\\\ \\nonumber &= (at-b)g_{d-1}(t,1)+O(n^{-1}).$ Thus, taking limits, we see that $\\lim _{n\\rightarrow \\infty } \\frac{G_{n}(n t)}{n^d} &=(at-b)g_{d-1}(t,1).$ By standard properties of discriminant polynomials, as detailed in Section REF , whenever $n\\ne 0$ we can write $\\mathrm {Disc}_{t} \\bigg [\\frac{G_{n}(nt) -f(r,s)}{n^d} \\bigg ] &= \\frac{\\mathrm {Disc}_{t}[G_n(t) - f(r,s)]}{n^{(D-1)(2d-D)}},$ where $1\\le D\\le d$ is the degree of $G_n(t)$ as a polynomial 
in $t.$ This is valid for any $n\ne 0,$ and moreover both sides of equation (REF ) have the same degree $D$ in $t$ .", "Thus we may take the limit as $n\rightarrow \infty $ inside the discriminant, and it follows that $\lim _{n\rightarrow \infty } \frac{\mathrm {Disc}_{t}[G_n(t) - f(r,s)]}{n^{(D-1)(2d-D)}} = \mathrm {Disc}_{t} [(at-b)g_{d-1}(t,1)]$ The RHS is non-zero by our assumption that $\lbrace (ax-by)g_{d-1}(x,y)=0\rbrace \subset \mathbb {P}_{\mathbb {Q}}^{1}$ is square-free.", "Thus, the discriminant polynomial has a non-zero leading coefficient.", "This completes the proof of the lemma.", "Before we can dispense with those values of $n$ for which $\Gamma _n$ is singular, we require some information about the possible lines which can be contained in the level sets $\lbrace f(x,y)=l\rbrace \subset \mathbb {A}_{\overline{\mathbb {Q}}}^{2}$ of the polynomial $f.$ Lemma 4.3 (Lines contained in level sets of $f$ ) Let $f\in \mathbb {Z}[x_1,x_2]$ be such that $f_d$ is not divisible by a square and let $l\in \overline{\mathbb {Q}}.$ Then, if the variety $\lbrace f(x_1,x_2) = l\rbrace \subset \mathbb {A}_{\overline{\mathbb {Q}}}^{2}$ contains a line, this line must be equal to one of the possible lines listed below: The line parametrised by $(x_1,x_2) = (t,\alpha t+\beta )$ where $f_d(1,\alpha ) = 0$ and $\beta = -f_{d-1}(1,\alpha )/(\partial f_d/\partial y)(1,\alpha ).$ The line parametrised by $(x_1,x_2) = (\gamma ,t)$ where $\gamma =-f_{d-1}(0,1)/(\partial f_d/\partial x)(0,1).$ (This case requires $f_d(0,1)=0.$ ) Let us suppose that the level set $f(x_1,x_2)=l$ contains a line.", "This line can be parametrised by $(x_1,x_2) = (\lambda _1 t+\mu _1,\lambda _2 t+ \mu _2),$ where all coefficients lie in $\overline{\mathbb {Q}}$ and $\lambda _1,\lambda _2$ are not both zero.", "We have the Taylor expansion $f(\lambda _1 t+\mu _1,\lambda _2 t+\mu _2) &=\sum _{i=0}^{d} \sum _{j=0}^{i} \bigg [\sum _{m+k=j} \frac{1}{k!}\frac{1}{m!}\frac{\partial ^{j}f_i}{\partial x^k\partial y^m}(\lambda _1 ,\lambda _2 )\mu _1^k\mu _2^m\bigg ] t^{i-j}.$ We consider two cases, depending on whether or not $\lambda _1$ is zero.", "If $\lambda _1 \ne 0$ then our line may be parametrised by $(x_1,x_2) = (t,\alpha t+\beta )$ for some $\alpha $ and $\beta .$ Now since we are assuming $f(t,\alpha t+\beta )=l,$ we arrive at the following polynomial identity in $\overline{\mathbb {Q}}[t]:$ $f_d(1,\alpha ) t^{d}+\bigg [f_{d-1}(1,\alpha )+\beta \frac{\partial f_d}{\partial y}(1,\alpha )\bigg ]t^{d-1}+\ldots =l.$ Since $f_d$ is smooth, $f_d(1,\alpha )$ is a non-zero polynomial in $\alpha $ of degree at most $d.$ Therefore, there are at most $O_d(1)$ choices of $\alpha $ for which the leading coefficient vanishes.", "For every such $\alpha ,$ we must have $(\partial f_d/\partial y)(1,\alpha )\ne 0$ as otherwise the discriminant $\mathrm {Disc}_y(f_d(1,y))$ would vanish, contradicting the fact $f_d$ is square-free.", "The result follows by looking at the vanishing of the coefficient of $t^{d-1}$ .", "If $\lambda _1 = 0$ then necessarily $\lambda _2 \ne 0.$ In this case our line may be parametrised by $(x_1,x_2) = (\gamma ,t).$ Now $f(\gamma ,t) = f_d(0,1)t^d+\bigg [f_{d-1}(0,1)+\gamma \frac{\partial f_d}{\partial x}(0,1)\bigg ]t^{d-1}+\ldots .$ For the coefficient of $t^d$ to vanish we must have $f_d(0,1)=0.$ It follows that $(\partial f_d/\partial x)(0,1)\ne 0$ as otherwise $f_d(x_1,x_2)$ would be divisible by the square 
$x_1^2.$ The coefficient of $t^{d-1}$ vanishing then implies $\\gamma = -f_{d-1}(0,1)/(\\partial f_d/\\partial x)(0,1),$ as required.", "From now on we let $\\Lambda \\subset \\overline{\\mathbb {Q}}$ consist of the set of $\\alpha ,\\beta ,\\gamma $ defined in Lemma REF above, whenever they exist.", "These numbers depend only on $f_d$ and $f_{d-1}$ , and it is clear that $|\\Lambda |=O(1).$ In case (1) we must have $l=f(0,\\beta )$ and in case (2) we must have $l=f(\\gamma ,0).$ We are now finally in a position to deal with the contribution from those $n$ for which $\\Gamma _n^{\\text{proj}}$ is singular.", "Lemma 4.4 We have $\\sum _{\\begin{array}{c}1\\le n\\le B \\\\ \\Gamma _n^{\\text{proj}}\\text{ singular} \\\\ g(\\cdot ,n)\\ne 0\\end{array}}M_{\\Gamma _n}(B) \\ll _{\\epsilon } B^{3/2+\\epsilon }.$ There are $O(1)$ choices of $x_3$ for which $(ax_3-bn)g(x_3,n)=(l-k)$ and $l\\in \\lbrace f(0,\\beta ),f(\\gamma ,0)\\rbrace _{\\beta ,\\gamma \\in \\Lambda }.$ For these values of $x_3$ we use the trivial bound $O(B)$ for the number of possible values of $x_1,x_2\\in [1,B]\\cap \\mathbb {Z}$ for which $f(x_1,x_2) = l$ .", "For the other $O(B)$ values of $x_3,$ we claim that $\\#\\lbrace x_1,x_2\\in [1,B]\\cap \\mathbb {Z}: f(x_1,x_2) = (ax_3-bn)g(x_3,n) +k \\rbrace \\ll _{\\epsilon } B^{1/2+\\epsilon }.$ Indeed, this follows from Proposition REF in much the same way as Lemma REF .", "We split our curve into absolutely irreducible components and sum over each component.", "By Lemma REF we are avoiding any level set which could potentially contain a line over $\\overline{\\mathbb {Q}},$ and hence every absolutely irreducible component of our curve must have degree $\\ge 2.$ By Proposition REF we can therefore bound this count by $O_{\\epsilon }(B^{1/2+\\epsilon }),$ as required.", "We are done as there are only $O(1)$ choices for $n$ by Lemma REF and only $O(1)$ choices for $\\beta ,\\gamma $ by Lemma REF .", "Lemma REF together with equation (REF ) yields $M_{f,g}(B;k) = \\sum _{\\begin{array}{c}1\\le n\\le B \\\\ \\Gamma _n^{\\text{proj}}\\text{ smooth} \\\\ g(\\cdot ,n)\\ne 0\\end{array}}M_{\\Gamma _{n}}(B)+O_{\\epsilon }(B^{3/2+\\epsilon }).$ We are now in a position to apply Proposition REF to estimate each term in the sum.", "Let us analogously define $M_{\\Gamma _n}^{\\text{lines}}(B)$ to count the number of integer points lying on a line contained in the surface $\\lbrace \\Gamma _n=0\\rbrace \\subset \\mathbb {A}_{\\mathbb {Q}}^{3}.$ From Proposition REF , we conclude that $M_{f,g}(B;k) = \\sum _{\\begin{array}{c}1\\le n\\le B \\\\ \\Gamma _n^{\\text{proj}}\\text{ smooth} \\\\ g(\\cdot ,n)\\ne 0\\end{array}}M_{\\Gamma _n}^{\\text{lines}}(B)+O_{\\epsilon }(B^{1+\\epsilon }(B^{1/2}+B^{2/\\sqrt{d}+1/(d-1)-1/((d-2)\\sqrt{d})})).$ We turn to understanding the lines which can appear in $\\Gamma _n.$ The reason we work projectively is so that we can apply the following lemma due to Colliot-Thélène, which can be found in [7].", "Lemma 4.5 (Colliot-Thélène) Suppose that $X\\subset \\mathbb {P}_{\\mathbb {Q}}^{3}$ is a smooth projective surface of degree $d\\ge 3.$ Then there are $O_d(1)$ lines contained in $X$ .", "The following proposition, reminiscent of Lemma REF above, summarises our information about possible lines contained in the affine surfaces $\\Gamma _n.$ Proposition 4.6 (Analysis of lines contained in $\\Gamma _n$ ) Suppose $f$ and $g$ satisfy the hypotheses of Theorem REF and let $\\Lambda $ be defined as in the remarks following Lemma REF .", "Fix a positive integer $n$ for 
which $g(x_3,n)$ is not identically zero as a polynomial in $x_3.$ Then, if the variety $\\lbrace \\Gamma _n=0\\rbrace \\subset \\mathbb {A}_{\\overline{\\mathbb {Q}}}^{3}$ contains a line, this line must be equal to one of the possible lines listed below: The line $x_2 = \\alpha x_1+\\beta ,$ where $\\alpha ,\\beta \\in \\Lambda $ and $x_3$ and $n$ satisfy the equation $(ax_3-bn)g(x_3,n)+k=f(0,\\beta )$ with $f(0,\\beta )\\ne k.$ The line $x_1 = \\gamma $ , where $\\gamma \\in \\Lambda $ and $x_3$ and $n$ satisfy $(ax_3-bn)g(x_3,n)+k= f(\\gamma ,0)$ where $f(\\gamma ,0)\\ne k.$ (This case requires $f_d(0,1)=0.$ ) Lines which contain at most 1 integer point $(x_1,x_2,x_3).$ Any line in the surface can be parametrised by $x_i = \\lambda _i t+\\mu _i$ for $i\\in \\lbrace 1,2,3\\rbrace ,$ where $\\lambda _i,\\mu _i \\in \\overline{\\mathbb {Q}}$ and the $\\lambda _i$ are not all zero.", "This then leads to an equality of polynomials in $\\overline{\\mathbb {Q}}[t]:$ $f(\\lambda _1 t+ \\mu _1,\\lambda _2 t+ \\mu _2) = [a(\\lambda _3 t+ \\mu _3)-bn]g(\\lambda _3 t+ \\mu _3,n)+k.$ Our proof proceeds by a careful case analysis.", "Suppose that $\\lambda _3=0$ and $\\lambda _1\\ne 0.$ Then, by Lemma REF , our line must be equal to the line with parametrisation $x_1 = t,\\,\\,\\, x_2 = \\alpha t+ \\beta ,\\,\\,\\,x_3 = \\mu _3$ where $\\alpha ,\\beta \\in \\Lambda $ and $x_3=\\mu _3$ must satisfy $(ax_3-bn)g(x_3,n)+k=f(0,\\beta ).$ If $f(0,\\beta )=k$ then, because we are assuming the curve $f(x,y)=k$ doesn't contain a line definable over $\\mathbb {Q},$ we must have at least one of $\\alpha ,\\beta \\in \\overline{\\mathbb {Q}}\\backslash \\mathbb {Q}.$ But now the line $x_2 = \\alpha x_1+\\beta $ contains at most one integer point $(x_1,x_2).$ Suppose that $\\lambda _3=0$ and $\\lambda _1= 0.$ Then, by Lemma REF , our line must equal the line with parametrisation $x_1 = \\gamma ,\\,\\,\\,x_2 = t,\\,\\,\\,x_3 = \\mu _3$ where $\\gamma \\in \\Lambda $ and we see $x_3=\\mu _3$ must satisfy $f(\\gamma ,0) &= (ax_3-bn)g(x_3,n)+k.$ This case requires $f_d(0,1)=0.$ From the definition of $\\gamma $ in Lemma REF , we see that $\\gamma \\in \\mathbb {Q}.$ We are assuming that $f(x,y)=k$ doesn't contain any lines definable over $\\mathbb {Q}$ , and so it follows that in this situation we must have $f(\\gamma ,0)\\ne k.$For otherwise we would have $f(\\gamma ,t)=k$ identically in $t,$ and then $f(x,y)=k$ would contain the rational line $(x-\\gamma )$ .", "Suppose that $\\lambda _3\\ne 0.$ We may parametrise our line as follows: $x_1 = \\tilde{\\lambda }_1 t+ \\tilde{\\mu }_1,\\,\\,\\,x_2 = \\tilde{\\lambda }_2 t+ \\tilde{\\mu }_2,\\,\\,\\,x_3 = t,$ where $\\tilde{\\lambda }_1,\\tilde{\\mu }_1,\\tilde{\\lambda }_2,\\tilde{\\mu }_2\\in \\overline{\\mathbb {Q}}.$ Then we must examine $f(\\tilde{\\lambda }_1 t+ \\tilde{\\mu }_1,\\tilde{\\lambda }_2 t+ \\tilde{\\mu }_2) = (at-bn)g(t,n)+k.$ Recall we are supposing that $g(t,n)$ is not the zero polynomial.", "In particular, as polynomials in $t,$ we must have the factorisation $at-bn | f(\\tilde{\\lambda }_1 t+ \\tilde{\\mu }_1,\\tilde{\\lambda }_2 t+ \\tilde{\\mu }_2)-k$ over $\\overline{\\mathbb {Q}}[t].$ Now, because we are assuming that the curve $f(x,y)=k$ doesn't contain a line definable over $\\mathbb {Q}$ , it follows that at least one of the variables $\\tilde{\\lambda }_1,\\tilde{\\mu }_1,\\tilde{\\lambda }_2,\\tilde{\\mu }_2\\in \\overline{\\mathbb {Q}}\\backslash \\mathbb {Q}.$ But then, since $x_i = \\tilde{\\lambda }_i x_3 + \\tilde{\\mu }_i$ for 
$i\\in \\lbrace 1,2\\rbrace ,$ it is clear that any line which arises in this way contains at most 1 integer point.", "Our last technical estimate is the following lemma.", "We note that our assumption $(ax-by)g(x,y)$ is square-free means that, in particular, we have $g_{d-1}(1,a/b)\\ne 0.$ Lemma 4.7 Suppose $g\\in \\mathbb {Z}[x,y]$ is such that $g_{d-1}(1,a/b)\\ne 0.$ Then for any $l\\ne 0$ we have $\\#\\lbrace x,y\\in [1,B]\\cap \\mathbb {Z}: (ax-by)g(x,y)=l\\rbrace \\ll _{\\epsilon } B^{1/2+\\epsilon }.$ This is proved along the same lines of Lemma REF .", "We will be done by Proposition REF provided that we can show this curve doesn't contain any lines defined over $\\overline{\\mathbb {Q}}.$ As $l\\ne 0,$ any line contained in this variety must have a parametrisation of the form $(x,y)=(t,\\lambda t+ \\mu ).$ In this case we must have the following polynomial identity in $\\overline{\\mathbb {Q}}[t]:$ $[(a-b\\lambda )t-b\\mu ]g(t,\\lambda t +\\mu )=l.$ Since $l\\ne 0,$ for this to be true clearly the first factor must be constant, i.e.", "$\\lambda = a/b.$ Now, by Taylor expansion, we have $g(t,\\lambda t +\\mu ) = g_{d-1}(1,\\lambda )t^{d-1}+\\ldots .$ For the leading term to vanish we must have $g_{d-1}(1,\\lambda ) = g_{d-1}(1,a/b) = 0.$ This is a contradiction.", "The following lemma, together with equation (REF ), completes the proof of Theorem REF , in the case $d\\ge 5.$ Lemma 4.8 We have $ \\sum _{\\begin{array}{c}1\\le n\\le B \\\\ \\Gamma _n^{\\text{proj}}\\text{ smooth} \\\\ g(\\cdot ,n)\\ne 0\\end{array}}M_{\\Gamma _n}^{\\text{lines}}(B) \\ll _{\\epsilon } B^{3/2+\\epsilon }$ This follows by assembling the information gathered thus far.", "Consider again Proposition REF .", "It is easy to see that the contribution from lines in cases (1) or (2) is $O_{\\epsilon }(B^{3/2+\\epsilon })$ by Lemma REF together with the fact $|\\Lambda | = O(1),$ and the contribution from lines in case (3) is $O(B)$ by Lemma REF .", "We conclude that $M_{f,g}(B;k) \\ll _{\\epsilon } B^{1+\\epsilon }(B^{1/2}+B^{2/\\sqrt{d}+1/(d-1)-1/((d-2)\\sqrt{d})}).$ One can check the exponents appearing here are strictly less than $(2-1/(50d))$ whenever $d\\ge 5$ , and so the bound stated in Theorem REF follows." 
], [ "Proof of Theorem ", "We now proceed to prove Theorem REF in the remaining cases when $d\\in \\lbrace 3,4\\rbrace .$ As discussed in Section , we will use the polynomial sieve developed by Browning in this regime.", "We state the main sieve proposition here.", "By slightly adjusting the set-up, we are able to make the implied constant absolute and transfer any dependencies into our choice of $\\mathcal {P},$ the set of sieving primes.", "This is a technical convenience which will prove useful to us, as for our applications to asymmetric additive energy of polynomials we wish to explicitly keep track of any dependencies on the constant term.", "It is clear that the following result follows from the proof of [2].", "Proposition 5.1 (Browning) Let $\\mathcal {A}\\subset \\mathbb {Z}^2,$ and let $F\\in \\mathbb {Z}[x,X_1,X_2]$ be a polynomial of the form $F(x;X_1,X_2) = c_{d}x^{d}+c_{d-1}(X_1,X_2)x^{d-1}+\\ldots +c_{0}(X_1,X_2),$ where $c_d$ is a non-zero integer and $c_i \\in \\mathbb {Z}[X_1,X_2]$ for every $i\\in \\lbrace 0,\\ldots ,d-1\\rbrace .$ Let $\\mathcal {P}$ be a set of primes such that $(c_d,p)=1$ for every $p\\in \\mathcal {P}$ and $\\sqrt{X_1^2+X_2^2} \\le \\mathrm {exp}(\\#\\mathcal {P})$ whenever $(X_1,X_2)\\in \\mathcal {A}.$ Then, for any integer $\\alpha \\ge 1,$ we have $\\#\\lbrace (X_1,X_2)\\in \\mathcal {A}: F(x;X_1,X_2)=0\\text{ for some } x\\in \\mathbb {Z}\\rbrace \\ll \\frac{1}{\\#\\mathcal {P}^2}\\sum _{p,q\\in \\mathcal {P}}\\bigg |\\sum _{i,j\\in \\lbrace 0,1,2\\rbrace }c_{i,j}(\\alpha )S_{i,j}(p,q)\\bigg |,$ where $S_{i,j}(p,q) = \\sum _{\\begin{array}{c}(X_1,X_2)\\in \\mathcal {A}\\end{array}}v_{p}(X_1,X_2)^{i}v_{q}(X_1,X_2)^{j},$ the $v_p(X_1,X_2)$ denote the “local counts\" of solutions $v_p(X_1,X_2) = \\#\\lbrace x\\,\\,(\\text{mod}\\,\\,p): F(x;X_1,X_2) \\equiv 0\\,\\,(\\text{mod}\\,\\,p)\\rbrace ,$ and the coefficients $c_{i,j}(\\alpha )$ are given by $c_{i,j}(\\alpha )={\\left\\lbrace \\begin{array}{ll}(\\alpha -d)^{2}\\,\\,&\\text{if $(i,j)=(0,0)$,} \\\\\\alpha +(\\alpha -1)d-d^2\\,\\,&\\text{if $(i,j)=(1,0)$ or $(0,1)$,} \\\\(1+d)^{2}\\,\\,&\\text{if $(i,j)=(1,1)$,} \\\\-\\alpha -d\\,\\,&\\text{if $(i,j)=(2,0)$ or $(0,2)$,} \\\\-1-d\\,\\,&\\text{if $(i,j)=(2,1)$ or $(1,2)$,} \\\\1\\,\\,&\\text{if $(i,j)=(2,2)$.}\\end{array}\\right.", "}$ The implied constant is absolute.", "The purpose of the $\\alpha $ parameter will become clear later.", "We will choose it in such a way as to eliminate the “main term\" contribution.", "Before we begin, we first make an observation.", "If $|k| \\gg B^{d+1}$ (say) then, by size considerations, we must have $M_{f,g}(B;k)=0.$ Thus, continuing, we may assume that $0 < |k| \\ll B^{d+1}.$ This fact we can restrict to the case when $k$ is polynomially bounded in terms of $B$ will be useful later on.", "To apply the determinant method, we began by making a change of variables and proceeded to count points on the simpler 3-dimensional surface $\\Gamma _n.$ We will do a similar transformation now for the sieve method.", "Recall, we wish to count integer points on the affine hypersurface $f(x_1,x_2)=(ax_3-bx_4)g(x_3,x_4)+k\\subset \\mathbb {A}_{\\mathbb {Q}}^{4}.$ Unlike the determinant method, the sieve method we will use makes crucial use of the factorisation properties of the above equation.", "Thus we make a different change of variables.", "In spite of this change, much of the preliminary work is the same.", "Let us write $ \\nonumber X_1&=x_1, \\\\ \\nonumber X_2&=x_2, \\\\ \\nonumber X_3&=ax_3+bx_4, \\\\h 
&=ax_3-bx_4.$ We view $h$ as fixed and consider counting integer points on the affine surface $(2ab)^{d-1} f(X_1,X_2)= (2ab)^{d-1} hg\\bigg (\\frac{X_3+h}{2a},\\frac{X_3-h}{2b}\\bigg )+(2ab)^{d-1}k \\subset \\mathbb {A}_{\\mathbb {Q}}^{3}.$ Here we multiply through by suitable powers of $2,a$ and $b$ to ensure that our polynomials have integer coefficients.", "Let us denote by $K_h(x,y,z,w)$ the projectivisation of this surface in $\\mathbb {P}_{\\mathbb {Q}}^{3}.$ We can write $M_{f,g}(B;k) \\le \\sum _{0\\le |h|\\ll B} \\sum _{\\begin{array}{c}1\\le X_1,X_2\\le B \\\\ 1\\le X_3 \\ll B \\\\ K_{h}(X_1,X_2,X_3,1)=0\\end{array}}1.$ Exactly as above, we first deal with the degenerate case when $h$ is such that $hg\\bigg (\\frac{X_3+h}{2a},\\frac{X_3-h}{2b}\\bigg )=0$ identically (as a polynomial in $X_3$ ).", "There are $O(1)$ values of $h$ for which this is the case.", "For these values of $h$ we conclude the contribution to (REF ) is $O_{\\epsilon }(B^{3/2+\\epsilon }),$ by Lemma REF .", "Our aim is estimate the remaining terms using the polynomial sieve.", "The sieve method works most effectively when $K_h$ is smooth.", "Generically this will be true, as the following lemma demonstrates.", "Lemma 5.2 The varieties $K_h\\subset \\mathbb {P}_{\\mathbb {Q}}^{3}$ are smooth for all but at most $O(1)$ values of $h.$ This is proved in the same way as Lemma REF with minor differences.", "It will also be important to have control over various auxiliary curves which arise in the argument.", "In particular, we will need to have control over the the curves $(2ab)^{d-1}h\\bigg [g\\bigg (\\frac{x+h}{2a},\\frac{x-h}{2b}\\bigg )-g\\bigg (\\frac{y+h}{2a},\\frac{y-h}{2b}\\bigg )\\bigg ]\\bigg /(x-y)\\subset \\mathbb {A}_{\\mathbb {Q}}^{2}.$ We let $P_h(x,y,w)$ denote the projectivisation of this curve in $\\mathbb {P}_{\\mathbb {Q}}^{2}.$ For future convenience, we note that $\\nonumber P_h(x,y,w) &= g_{d-1}(b,a) h (x^{d-2}+x^{d-3}y+\\ldots +x y^{d-3}+y^{d-2}) \\\\ &+(\\text{terms involving $w$})$ and $P_h(x,x,1) = (2ab)^{d-1}hg^{\\prime }\\bigg (\\frac{x+h}{2a},\\frac{x-h}{2b}\\bigg )$ Here, by the dash notation on the RHS we mean the derivative of the function $g(\\frac{x+h}{2a},\\frac{x-h}{2b})$ with respect to $x.$ We would also like to restrict to the generic case when $P_h$ is smooth.", "For this we require the following lemma.", "Proposition 5.3 The varieties $P_h\\subset \\mathbb {P}_{\\mathbb {Q}}^{2}$ are smooth for all but at most $O(1)$ values of $h.$ We split into two cases, depending on the degree $d$ .", "In this proof $c_1(h),c_2(h)$ and $c_3(h)$ will denote polynomials in $h$ whose coefficients will depend on those of $g,a$ and $b.$ The case $d=3$ is simple, as here we have $P_h(x,y,w) = g_{d-1}(b,a)h(x+y)+c_1(h)w$ for some polynomial $c_1(h),$ and it clear that we do not have any singular points whenever $h\\ne 0.$ Let us now examine the case $d=4$ .", "Here we are going to use the additional assumption (4) we make in the statement of Theorem REF .", "We have $P_h(x,y,w) = g_{d-1}(b,a)h(x^2+xy+y^2)+c_2(h)(x+y)w+c_3(h)w^2$ for some polynomials $c_2(h)$ and $c_3(h).$ From this it is clear that there are no singular points when $w=0$ and $h\\ne 0.$ Thus we consider possible singular points with $w=1.$ It is easy to check that for the partial derivatives to vanish we must have $x=y,$ and so any singular point is necessarily of the form $[x:x:1]$ where $x$ satisfies, in particular, the system $P_h(x,x,1) = P_h^{\\prime }(x,x,1) = 0.$ Here the latter quantity denotes the derivative of 
$P_h(x,x,1)$ with respect to $x$ .Note $P_h^{\\prime }(t,t,1)= \\frac{P_h}{\\partial x}(t,t,1)+\\frac{P_h}{\\partial y}(t,t,1).$ Now, if this is the case, the discriminant $\\mathrm {Disc}_x[P_h(x,x,1)]$ must vanish identically.", "However this is a polynomial in $h$ which generically will be non-zero.", "Arguing as above, we prove this is non-zero by extracting the leading coefficient and showing this doesn't vanish.", "From (REF ) we obtain $\\frac{P_h(hx,hx,1)}{h^{d-1}} &= \\frac{(2ab)^{d-1}}{h^{d-2}} \\sum _{i=0}^{d-1}g_i^{\\prime }\\bigg (\\frac{hx+h}{2a},\\frac{hx-h}{2b}\\bigg ) \\\\&=(2ab)^{d-1}g^{\\prime }_{d-1}\\bigg (\\frac{x+1}{2a},\\frac{x-1}{2b}\\bigg )+O(h^{-1})$ Hence, by a similar limiting argument to the proof of Lemma REF , we have $\\lim _{h\\rightarrow \\infty } \\frac{\\mathrm {Disc}_x[P_h(x,x,1)])}{h^{d(d-3)}} = \\mathrm {Disc}_x \\bigg [(2ab)^{d-1}g^{\\prime }_{d-1}\\bigg (\\frac{x+1}{2a},\\frac{x-1}{2b}\\bigg )\\bigg ].$ Note that $g^{\\prime }_{d-1}\\bigg (\\frac{x+1}{2a},\\frac{x-1}{2b}\\bigg ) = \\bigg [\\frac{1}{2a}\\frac{\\partial g_{d-1}}{\\partial x}+\\frac{1}{2b}\\frac{\\partial g_{d-1}}{\\partial y}\\bigg ]\\bigg (\\frac{x+1}{2a},\\frac{x-1}{2b}\\bigg ).$ Hence it is clear that this discriminant doesn't vanish from our assumption that the projective variety $ \\bigg [\\frac{1}{2a}\\frac{\\partial g_{d-1}}{\\partial x}+\\frac{1}{2b}\\frac{\\partial g_{d-1}}{\\partial y}\\bigg ]\\bigg (\\frac{x}{a},\\frac{y}{b}\\bigg ) \\subset \\mathbb {P}_{\\mathbb {Q}}^{2}$ is square-free.", "We now deal with the contribution from those values of $h$ for which either $K_h$ or $P_h$ is singular.", "Lemma 5.4 Suppose that $h$ is constrained to lie in a set of size $O(1)$ and moreover $h$ is such that $hg\\bigg (\\frac{X_3+h}{2a},\\frac{X_3-h}{2b}\\bigg )$ doesn't vanish identically as a polynomial in $X_3.$ Then the contribution from these values of $h$ to (REF ) is at most $O_{\\epsilon }(B^{3/2+\\epsilon }).$ This follows in much the same way as the proof of Lemma REF .", "With this lemma, we can rewrite (REF ) as $M_{f,g}(B;k) \\le \\sum _{\\begin{array}{c}0 < |h| \\ll B \\\\ K_h,P_h\\text{ smooth}\\end{array}} \\sum _{\\begin{array}{c}1\\le X_1,X_2\\le B \\\\ 1\\le X_3 \\ll B \\\\ K_{h}(X_1,X_2,X_3,1)=0\\end{array}}1+O_{\\epsilon }(B^{3/2+\\epsilon }).$ For each fixed $h$ we will apply the polynomial sieve to count the inner sum, by detecting solubility of the equation in the $X_3$ variable.", "In the notation of Proposition REF , we will take $\\mathcal {A}:= \\lbrace (X_1,X_2)\\in ([1,B]\\cap \\mathbb {Z})^2: (2ab)^{d-1}f(X_1,X_2)\\equiv (2ab)^{d-1}k\\,\\,(\\text{mod}\\,\\,|h|)\\rbrace $ and $F(x;X_1,X_2) := K_h(X_1,X_2,x,1).$ After unravelling the definition of $K_h$ we find that $F(x;X_1,X_2) = hg_{d-1}(b,a)x^{d-1}+(\\text{lower order terms in $x$}).$ We are assuming that $hg_{d-1}(b,a)\\ne 0.$Recall that our assumption that the projective variety $\\lbrace (ax-by)g_{d-1}(x,y)\\rbrace \\subset \\mathbb {P}_{\\mathbb {Q}}^{1}$ is square-free implies, in particular, that $g_{d-1}(b,a)\\ne 0.$ In particular, $F$ is of the correct form to apply Proposition REF .", "We define $\\mathcal {P} = \\lbrace p\\le Q: p\\nmid 6abg_{d-1}(b,a)\\mathrm {cont}(f_d)\\mathrm {Disc}[f_d]\\mathrm {Disc}[K_h]\\mathrm {Disc}[P_h] h\\rbrace $ for some large value of $Q$ .", "By $\\mathrm {cont}(f_d)$ we mean the content of the polynomial $f_d$ (i.e.", "the gcd of all the coefficients).", "Here, whenever $F$ is a homogenous polynomial we denote by $\\mathrm {Disc}[F]$ its discriminant.", 
"This choice is important, as it will ensure our varieties remain smooth when viewed over $\\overline{\\mathbb {F}}_p$ (for any $p\\in \\mathcal {P}$ ).", "We need to be careful here, as this set clearly depends on $h$ and will also depend (in some complicated manner) on $k.$ Now, it is a standard fact that the discriminant of a homogenous polynomial of degree $N$ is a homogenous polynomial of degree $(2N-2)$ in the coefficients.", "This, together with the size bounds $|h|\\ll B$ and $|k| \\ll B^{d+1}$ (recall (REF )) imply that $\\mathrm {Disc}[K_h]\\mathrm {Disc}[P_h] h\\ll B^{5d^2}$ uniformly in $h$ and $k$ (say).", "We will eventually take $Q=B^{\\delta }$ for some small $\\delta >0.$ This means, by the prime number theorem, we will have the asymptotic $\\#\\mathcal {P}\\sim Q/\\log {Q}$ uniformly in $h$ and $k,$ whenever $B\\gg 1$ .", "Thus, applying Proposition REF with our choices above to each term in the sum (REF ) yields $M_{f,g}(B;k) \\ll _{\\epsilon } \\sum _{\\begin{array}{c}0 < |h| \\ll B \\\\ K_h,P_h\\text{ smooth}\\end{array}} \\frac{1}{(\\#\\mathcal {P})^{2}}\\sum _{p,q\\in \\mathcal {P}}\\bigg |\\sum _{i,j\\in \\lbrace 0,1,2\\rbrace }c_{i,j}(\\alpha )S_{i,j}(p,q)\\bigg |+B^{3/2+\\epsilon },$ where $S_{i,j}(p,q) &= \\sum _{\\begin{array}{c}(X_1,X_2)\\in \\mathcal {A}\\end{array}}v_{p}(X_1,X_2)^{i}v_{q}(X_1,X_2)^{j}, \\\\v_p(X_1,X_2) &= \\#\\lbrace x\\,\\,(\\text{mod}\\,\\,p): K_h(X_1,X_2,x,1) \\equiv 0\\,\\,(\\text{mod}\\,\\,p)\\rbrace ,$ and the constants $c_{i,j}(\\alpha )$ are defined as in the statement of Proposition REF .", "We evaluate the sums $S_{i,j}(p,q)$ following the method of Browning [2], by first restricting to congruence classes modulo $pq|h|$ and then completing exponential sums.", "We have $ \\nonumber S_{i,j}(p,q) &= \\sum _{r,s\\,\\,(\\text{mod}\\,\\,pq|h|)} \\sum _{\\begin{array}{c}(X_1,X_2)\\in \\mathcal {A} \\\\ X_1\\equiv r\\,\\,(\\text{mod}\\,\\,pq|h|)\\\\ X_2\\equiv s\\,\\,(\\text{mod}\\,\\,pq|h|) \\end{array}} v_p(X_1,X_2)^{i}v_q(X_1,X_2)^{j} \\\\&= \\sum _{\\begin{array}{c}r,s\\,\\,(\\text{mod}\\,\\,pq|h|) \\\\ (2ab)^{d-1}f(r,s)\\equiv (2ab)^{d-1}k\\,\\,(\\text{mod}\\,\\,|h|)\\end{array}} v_p(r,s)^{i}v_q(r,s)^{j} \\sum _{\\begin{array}{c}1\\le X_1,X_2\\le B \\\\ X_1\\equiv r\\,\\,(\\text{mod}\\,\\,pq|h|)\\\\ X_2\\equiv s\\,\\,(\\text{mod}\\,\\,pq|h|)\\end{array}} 1.$ We can detect the congruence condition in the inner sum using additive characters, as follows: $ \\nonumber \\sum _{\\begin{array}{c}1\\le X_1 \\le B \\\\ X_1\\equiv r\\,\\,(\\text{mod}\\,\\,pq|h|)\\end{array}}1 &= \\frac{1}{pq|h|}\\sum _{\\begin{array}{c}1\\le X_1 \\le B \\end{array}}\\sum _{m\\,\\,(\\text{mod}\\,\\,pq|h|)} e\\bigg (-\\frac{m(X_1-r)}{pq|h|}\\bigg ) \\\\ \\nonumber &= \\frac{1}{pq|h|}\\sum _{-pq|h|/2 < m \\le pq|h|/2} e\\bigg (\\frac{mr}{pq|h|}\\bigg )\\sum _{\\begin{array}{c}1\\le X_1 \\le B \\end{array}}e\\bigg (-\\frac{mX_1}{pq|h|}\\bigg ) \\\\&:=\\frac{1}{pq|h|}\\sum _{-pq|h|/2 < m \\le pq|h|/2} \\Gamma (B,m)e\\bigg (\\frac{mr}{pq|h|}\\bigg ),$ where we have defined $\\Gamma (B,m) := \\sum _{\\begin{array}{c}1\\le l\\le B \\end{array}}e\\bigg (\\frac{-ml}{pq|h|}\\bigg ).$ We note the well-known bound here $\\Gamma (B,m) \\ll \\min \\bigg \\lbrace B,\\frac{pq|h|}{|m|}\\bigg \\rbrace ,$ which will be used later.", "A similar identity holds for the sum over $X_2.$ Putting these facts together, and then swapping sums, we obtain the expression $S_{i,j}(p,q) &= \\frac{1}{(pqh)^{2}} \\sum _{-pq|h|/2<m,n\\le pq|h|/2} \\Gamma (B,m)\\Gamma (B,n)\\Psi _{i,j}(m,n),$ where 
$\\Psi _{i,j}(m,n) := \\sum _{\\begin{array}{c}r,s\\,\\,(\\text{mod}\\,\\,pq|h|) \\\\ (2ab)^{d-1}f(r,s)\\equiv (2ab)^{d-1}k\\,\\,(\\text{mod}\\,\\,|h|)\\end{array}} v_p(r,s)^{i}v_q(r,s)^{j} e\\bigg (\\frac{mr+ns}{pq|h|}\\bigg ).$ By our choice of $\\mathcal {P}$ we have $(pq,h)=1.$ This allows us to deduce the following multiplicativity property for the exponential sums $\\Psi _{i,j}.$ Lemma 5.5 The following factorisations hold.", "Suppose $p\\ne q$ and let $p^{\\prime },q^{\\prime },\\overline{pq},\\overline{h}\\in \\mathbb {Z}$ be defined by $pq\\overline{pq}+|h|\\overline{h}=1$ and $pp^{\\prime }+qq^{\\prime }=1.$ Then $\\Psi _{i,j}(m,n)=\\Sigma _i(p;\\overline{h}q^{\\prime }m,\\overline{h}q^{\\prime }n)\\Sigma _j(q;\\overline{h}p^{\\prime }m,\\overline{h}p^{\\prime }n)\\Phi (|h|;\\overline{pq}m,\\overline{pq}n).$ Suppose $p= q$ and let $\\overline{p},\\overline{h}\\in \\mathbb {Z}$ be defined by $p\\overline{p}+|h|\\overline{h}=1.$ Then $\\Psi _{i,j}(m,n)={\\left\\lbrace \\begin{array}{ll}p^2\\Sigma _{i+j}(p;\\overline{h}m^{\\prime },\\overline{h}n^{\\prime })\\Phi (|h|;\\overline{p}m^{\\prime },\\overline{p}n^{\\prime })\\,\\,&\\text{if $(m,n)=p(m^{\\prime },n^{\\prime })$,} \\\\0\\,\\,&\\text{otherwise.}\\end{array}\\right.", "}$ Here $\\Sigma _t(p;M,N) = \\sum _{x,y\\in \\mathbb {F}_p}v_p(x,y)^t e\\bigg (\\frac{Mx+Ny}{p}\\bigg )$ and $\\Phi (h;M,N) = \\sum _{\\begin{array}{c}x,y\\,\\,(\\text{mod}\\,\\,h) \\\\ (2ab)^{d-1}f(x,y)\\equiv (2ab)^{d-1}k\\,\\,(\\text{mod}\\,\\,h)\\end{array}}e\\bigg (\\frac{Mx+Ny}{h}\\bigg ).$ This is [2] with slight changes to notation.", "Recall $i,j\\in \\lbrace 0,1,2\\rbrace .$ Thus, to examine the exponential sums $\\Psi _{i,j}$ we may restrict our analysis to the exponential sums $\\Sigma _t$ for $0\\le t\\le 4$ and $\\Phi .$ For the former, it is important that we have restricted to the case where our varieties are smooth; the desired bounds will then follow relatively straightforwardly from the work of Weil and Deligne.", "We will make use of Hooley's method of moments, which allows us to estimate an exponential sum over an algebraic variety by counting points on the variety over finite fields.", "The following result originates in [12] and appears in the form stated here as [2].", "Lemma 5.6 (Hooley's method of moments) Let $F,G_1,\\ldots ,G_k$ be polynomials over $\\mathbb {Z}$ of degree at most $d$ , and let $S =\\sum _{\\begin{array}{c}{\\bf x}\\in \\mathbb {F}_p^n \\\\ G_1({\\bf x})=\\ldots = G_k({\\bf x}) = 0\\end{array}}e\\bigg (\\frac{F({\\bf x})}{p}\\bigg )$ for any prime $p.$ For each $j\\ge 1$ and $\\tau \\in \\mathbb {F}_{p^{j}}$ we define the sets $N_j(\\tau ) = \\#\\lbrace {\\bf x}\\in \\mathbb {F}_{p^j}^{n}:G_1({\\bf x})=\\ldots = G_k({\\bf x}) = 0\\text{ and } F({\\bf x}) = \\tau \\rbrace .$ Suppose there exists $N_j\\in \\mathbb {R}$ such that $\\sum _{\\tau \\in \\mathbb {F}_{p^j}}|N_j(\\tau )-N_j|^{2} \\ll _{d,k,n} p^{\\kappa j}$ where $\\kappa \\in \\mathbb {Z}$ is independent of $j.$ Then $S \\ll _{d,k,n} p^{\\kappa /2}.$ Once we have reduced to this point-counting problem over finite fields, we may employ the work of Weil [15] to count points on curves, and the work of Deligne [6] to count points on higher-dimensional varieties.", "The following two results will be sufficient for our purposes.", "Lemma 5.7 (Deligne) Let $W\\subset \\mathbb {P}_{\\mathbb {F}_q}^n$ be a non-singular complete intersection of dimension 2 and degree $d$ .", "Then $\\#\\lbrace {\\bf x}\\in \\mathbb {F}_{q}^n: [{\\bf x}] \\in W\\rbrace = 
q^3+O_{d,n}(q^2).$ Lemma 5.8 (Weil) Let $V\\subset \\mathbb {A}_{\\mathbb {F}_q}^n$ be an absolutely irreducible curve of degree $d$ .", "Then $\\#\\lbrace {\\bf x}\\in \\mathbb {F}_{q}^n: {\\bf x} \\in V\\rbrace = q+O_{d,n}(q^{1/2}).$ Recall that a projective variety $W\\subset \\mathbb {P}_{\\mathbb {F}_p}^{n}$ is a complete intersection if it is generated by exactly $\\mathrm {codim}(W)$ elements.", "We will estimate the exponential sums $\\Phi $ by obtaining square-root cancellation when $l=1$ (in the generic case), and using elementary arguments for higher powers.", "This is the part of the argument where we will use the fact the curve $f(x,y)=k$ doesn't contain any lines definable over $\\mathbb {Q}.$ In both cases, care must be taken to ensure that our results have no dependence on the constant term $k$ .", "We recall our convention, adopted in Section , which says all implied constants are allowed to depend on $f,g,a$ and $b$ without specifying so, and this includes dependencies on the coefficients of $f$ and $g$ as well as on the degree $d$ .", "Finally, the following two sections will require us to perform arithmetic in $\\overline{\\mathbb {F}}_p.$ To this end, we adopt the convention that any rational number $a/b$ whose denominator is coprime to $p$ may be viewed as an element of $\\overline{\\mathbb {F}}_p,$ namely $\\overline{a}\\overline{b}^{-1},$ where $\\overline{x}$ denotes the reduction map modulo $p$ and $\\overline{x}\\overline{x}^{-1}\\equiv 1\\,\\,(\\text{mod}\\,\\,p)$ .", "In particular, the rational number $a/b$ vanishes when viewed as an element of $\\overline{\\mathbb {F}}_p$ if and only if $p|a.$" ], [ "Estimation of exponential sums (I)", "Recall the definition of $\\Sigma _t$ given by (REF ).", "In this section we are going to estimate the exponential sums $\\Sigma _t(p;M,N) = \\sum _{x,y\\in \\mathbb {F}_p}v_p(x,y)^t e\\bigg (\\frac{Mx+Ny}{p}\\bigg )$ for $0\\le t\\le 4$ and integers $M,N$ .", "We argue in much the same way as Browning [2] did for the analogous sums he encountered when investigating the symmetric additive energy of quartic polynomials, with a few changes.", "Our main result is the following (cf. 
[2]).", "Proposition 6.1 Let $p\\in \\mathcal {P}$ .", "For $0\\le t\\le 4$ we have $\\Sigma _t(p;M,N) \\ll p(p,M,N)$ and for $0\\le t\\le 2$ we have $\\Sigma _t(p;0,0) = \\max \\lbrace 1,t\\rbrace p^2+O(p).$ We will prove this result in stages.", "The case $t=0$ follows immediately from orthogonality of additive characters.", "Thus we turn our attention to the case $t=1,$ where $\\Sigma _1(p;M,N) = \\sum _{\\begin{array}{c}x,y,z\\,\\,(\\text{mod}\\,\\,p) \\\\ K_h(x,y,z,1)=0\\end{array}}e\\bigg (\\frac{Mx+Ny}{p}\\bigg ).$ We begin by proving the following lemma.", "Lemma 6.2 For any integers $N$ and $M$ we have $\\mathrm {Disc}_x [f_d(Nx,1-Mx)] = N^D \\mathrm {Disc}_x [f_d(x,1)]$ for some integer $D$ depending on $f_d,M$ and $N$ .", "The result is true if $N=0$ as then both sides equal zero.", "Thus we may suppose that $N\\ne 0.$ We have $\\mathrm {Disc}_x [f_d(Nx,1-Mx)] = N^{D}\\mathrm {Disc}_x [f_d(x,1-Mx/N)]$ where $D:=l(l-1)$ and $l$ is the degree of $f_d(x,1-Mx/N)$ as a polynomial in $x$ .", "To prove the lemma, it suffices to show that the discriminant $\\mathrm {Disc}_x (f_d(x,1-tx)),$ which by definition is a polynomial in $t$ , is in fact constant in $t$ .", "To do this, we require two further properties of discriminant polynomials which we state here.", "Namely, for any polynomial $F$ we have $\\mathrm {Disc}_x(xF(x)) = F(0)^2\\mathrm {Disc}_x(F(x))$ and, if in addition $F$ has degree $d$ and $F(0)\\ne 0,$ we have $\\mathrm {Disc}_x(F(x)) = \\mathrm {Disc}_x(x^d F(1/x)).$ We split into two cases.", "Suppose $f_d(0,1)\\ne 0.$ In this case, provided $t$ is such that $f_d(1,-t)\\ne 0,$ we have $\\mathrm {Disc}_x (f_d(x,1-tx)) &= \\mathrm {Disc}_x (x^d f_d(1,1/x-t)) \\\\&= \\mathrm {Disc}_x (f_d(1,x-t)) \\\\&=\\mathrm {Disc}_x ( f_d(1,x)).$ As both sides are polynomials in $t$ , we conclude this identity in fact holds for all $t.$ The result follows.", "Suppose $f_d(0,1)=0.$ Then we must have $(\\partial f_d/\\partial x)(0,1)\\ne 0$ as otherwise $f_d(x,y)$ would be divisible by $x^2.$ Again supposing $t$ is such that $f_d(1,-t)\\ne 0,$ we have $\\mathrm {Disc}_x (f_d(x,1-tx)) &= \\mathrm {Disc}_x (x^{d} f_d(1,1/x-t)) \\\\&= \\bigg [\\lim _{x\\rightarrow 0} \\frac{f_d(x,1-xt)}{x}\\bigg ]^2 \\mathrm {Disc}_x (x^{d-1}f_d(1,1/x-t)) \\\\&= \\bigg [\\frac{\\partial f_d}{\\partial x}(0,1)\\bigg ]^2\\mathrm {Disc}_x (f_d(1,x))$ The result follows, as above.", "We isolate the $t=1$ case of Proposition REF in the following lemma.", "Lemma 6.3 Let $p\\in \\mathcal {P}$ .", "Then $\\Sigma _1(p;M,N)={\\left\\lbrace \\begin{array}{ll}p^2+O(p)\\,\\,&\\text{if $p|(M,N),$} \\\\O(p)\\,\\,&\\text{otherwise.}\\end{array}\\right.", "}$ First we consider the case $p|(M,N).$ We may write $\\Sigma _1(p;0,0) = \\frac{1}{p-1}\\#\\lbrace x,y,z,w\\in \\mathbb {F}_p: K_h(x,y,z,w)=0\\text{ and } w\\ne 0\\rbrace .$ Our set-up ensures that $K_h(x,y,z,w)$ is a non-singular projective surface over $\\mathbb {F}_p.$ The hypotheses of Lemma REF are satisfied, and so we obtain $\\#\\lbrace x,y,z,w\\in \\mathbb {F}_p: K_h(x,y,z,w)=0\\rbrace = p^3+O(p^2).$ We have $K_h(x,y,z,0) = (2ab)^{d-1}f_d(x,y).$ Assuming $p\\in \\mathcal {P},$ it follows that $\\#\\lbrace x,y,z\\in \\mathbb {F}_p: K_h(x,y,z,0)=0\\rbrace \\ll p^2.$ Putting these facts together yields $\\Sigma _1(p;0,0) = p^2+O(p).$ Now let us suppose $p\\nmid (M,N).$ We will use Lemma REF to show that (generically) we have square-root cancellation.", "To this end, fix an integer $j\\ge 1$ and $\\tau \\in \\mathbb {F}_{p^{j}},$ and define $N_{j}(\\tau ) := \\#\\lbrace 
x,y,z \\in \\mathbb {F}_{p^{j}}: K_h(x,y,z,1) = 0 \\text{ and } Mx+Ny= \\tau \\rbrace .$ We may suppose WLOG that $p\\nmid N,$ with the case $p\\nmid M$ being treated similarly.", "We may rewrite the above as $N_{j}(\\tau ) = \\#\\lbrace x,z \\in \\mathbb {F}_{p^{j}}: K_h(x,(\\tau -Mx)N^{-1},z,1) = 0\\rbrace .$ We claim the following.", "Claim.", "$K_h(x,(\\tau -Mx)N^{-1},z,1) =0$ defines an absolutely irreducible curve for all but at most $O(1)$ values of $\\tau $ .", "To examine whether the curve $K_h(x,(\\tau -Mx)N^{-1},z,1) = 0$ is absolutely irreducible, we investigate the smoothness properties of its projectivisation over $\\overline{\\mathbb {F}}_p$ .", "By Taylor expansion (see e.g.", "(REF )), and using the fact $p\\nmid N,$ we may write $f(x,(\\tau -Mx)N^{-1}) &=\\frac{f_d(N,-M)}{N^d}x^d \\\\&+ \\bigg [\\frac{f_{d-1}(N,-M) }{N^{d-1}}- \\tau \\frac{(\\partial f_d/\\partial y)(N,-M)}{N^d} \\bigg ]x^{d-1}+\\ldots .$ We also have $(2ab)^{d-1}hg\\bigg (\\frac{z+h}{2a},\\frac{z-h}{2b}\\bigg ) = g_{d-1}(b,a)hz^{d-1}+\\ldots .$ There are two cases to consider.", "Suppose $f_d(N,-M)=0$ in $\\overline{\\mathbb {F}}_{p}.$ In this case, since $f_d$ is smooth and $N\\ne 0$ we have $(\\partial f_d/\\partial y)(N,-M)\\ne 0.$ We exclude the value $\\tau = Nf_{d-1}(N,-M)/(\\partial f_d/\\partial y)(N,-M),$ as we may, and then we see the projectivisation of this curve can be written as $\\bigg [\\frac{f_{d-1}(N,-M) }{N^{d-1}}- \\tau \\frac{(\\partial f_d/\\partial y)(N,-M)}{N^d} \\bigg ]x^{d-1} &=g_{d-1}(b,a)hz^{d-1} + (\\text{terms involving $w$}).$ It is then clear, by considering vanishing of partial derivatives, that there cannot be any singular points with $w=0.$ This leaves us to investigate singular points of the form $[r:s:1].$ By considering the $z$ -derivative, we see that $s$ is constrained to at most $O(1)$ values.", "For each such value, there will be a solution in $r$ if and only if the discriminant $\\mathrm {Disc}_x\\bigg (f(x,(\\tau -Mx)N^{-1}) - (2ab)^{d-1}hg\\bigg (\\frac{s+h}{2a},\\frac{s-h}{2b}\\bigg ) -(2ab)^{d-1}k\\bigg )$ vanishes.", "This will be a polynomial in $\\tau $ which generically is non-zero, and so will only vanish for $O(1)$ values of $\\tau .$ As previously, we will prove this by extracting the leading coefficient and showing this is non-zero.", "We have $\\frac{f(\\tau x, (\\tau -M\\tau x)N^{-1}}{\\tau ^d} = f_d(x,(1-Mx)N^{-1})+O(\\tau ^{-1}).$ By similar arguments to the proof of Lemma REF , it follows that the leading coefficient of the dscriminant (REF ), as a polynomial in $\\tau ,$ is $\\mathrm {Disc}_x( f_d(x,(1-Mx)N^{-1}).$ By Lemma REF and our assumptions on $p$ , this doesn't vanish.", "Suppose $f_d(N,-M)\\ne 0$ in $\\overline{\\mathbb {F}}_{p}.$ Then the projectivised curve can be written as $\\frac{f_d(N,-M)}{N^d} x^{d} &=g_{d-1}(b,a)hz^{d-1}w + (\\text{terms involving $w$}).$ In exactly the same way as above, we conclude that there are no singular points with $w=0$ and for any singular point of the form $[r:s:1],$ $s$ is constrained to at most $O(1)$ values and for each such value there exists an $r$ if and only if the discriminant defined by (REF ) vanishes.", "This can only occur for at most $O(1)$ values of $\\tau .$ This finishes the proof of the claim.", "Now, for the $O(1)$ values of $\\tau $ for which the curve $K_h(x,(\\tau -Mx)N^{-1},z,1) =0$ is not absolutely irreducible, we will apply the trivial bound $N_{j}(\\tau ) \\ll p^{j}.$ In the complementary case, by Lemma REF , we obtain $N_{j}(\\tau ) = p^{j} + O(p^{j/2})$ We may therefore 
take $N_{j} = p^{j}$ and $\\kappa =2$ in the statement of Lemma REF , and the result follows.", "For the case $t=2,$ we must examine the exponential sum $\\Sigma _2(p;M,N)&=\\sum _{\\begin{array}{c}x,y,z_1,z_2\\,\\,(\\text{mod}\\,\\,p) \\\\ K_h(x,y,z_1,1)=0 \\\\ K_h(x,y,z_2,1)=0\\end{array}}e\\bigg (\\frac{Mx+Ny}{p}\\bigg ).$ Recalling the definition of $P_h$ in (REF ), we see that $K_h(x,y,z_1,1)= K_h(x,y,z_2,1) \\iff (z_1-z_2)P_h(z_1,z_2,1)=0.$ Thus we may equivalently write $\\Sigma _2(p;M,N) &=\\sum _{\\begin{array}{c}x,y,z_1,z_2\\,\\,(\\text{mod}\\,\\,p) \\\\ K_h(x,y,z_1,1)=0 \\\\ (z_1-z_2)P_h(z_1,z_2,1)=0\\end{array}}e\\bigg (\\frac{Mx+Ny}{p}\\bigg ).$ We isolate the $t=2$ case of Proposition REF in the following lemma.", "Lemma 6.4 Let $p\\in \\mathcal {P}.$ Then $\\Sigma _2(p;M,N)={\\left\\lbrace \\begin{array}{ll}2p^2+O(p)\\,\\,&\\text{if $p|(M,N),$} \\\\O(p)\\,\\,&\\text{otherwise.}\\end{array}\\right.", "}$ Separating out the contribution from $z_1=z_2,$ and recalling the definition of $\\Sigma _1$ (see (REF ) above), we may write $\\Sigma _2(p;M,N) &=\\Sigma _1(p;M,N)+\\sum _{\\begin{array}{c}x,y,z_1,z_2\\,\\,(\\text{mod}\\,\\,p) \\\\ z_1\\ne z_2 \\\\ K_h(x,y,z_1,1)=0 \\\\ P_h(z_1,z_2,1)=0\\end{array}}e\\bigg (\\frac{Mx+Ny}{p}\\bigg ).$ In this last sum we may add back in the terms with $z_1=z_2,$ as these contribute $ \\le \\sum _{\\begin{array}{c}z\\,\\,(\\text{mod}\\,\\,p) \\\\ P_h(z,z,1)=0\\end{array}}\\sum _{\\begin{array}{c}x,y\\,\\,(\\text{mod}\\,\\,p) \\\\ K_h(x,y,z,1)=0\\end{array}}1 \\ll p \\sum _{\\begin{array}{c}z\\,\\,(\\text{mod}\\,\\,p) \\\\ P_h(z,z,1)=0\\end{array}}1 \\ll p.$ Overall, we obtain $\\Sigma _2 &= \\Sigma _1+\\sum _{\\begin{array}{c}x,y,z_1,z_2\\,\\,(\\text{mod}\\,\\,p) \\\\ K_h(x,y,z_1,1)=0 \\\\ P_h(z_1,z_2,1)=0\\end{array}}e\\bigg (\\frac{Mx+Ny}{p}\\bigg ) +O(p).$ Let us call this inner sum $T(p;M,N).$ The following claim is important.", "Claim.", "For any $p\\in \\mathcal {P}$ the variety $\\lbrace K_h(x,y,z_1,w) = P_h(z_1,z_2,w) = 0 \\rbrace \\subset \\mathbb {P}_{\\overline{\\mathbb {F}}_p}^{4}$ is smooth.", "Any singular point by definition must solve the system $K_h = P_h &= 0,\\,\\,\\lambda \\nabla K_h = \\mu \\nabla P_h$ for some $(\\lambda ,\\mu )\\ne (0,0).$ By the work above we know that both $K_h$ and $P_h$ are non-singular over $\\mathbb {Q}.$ It follows that we must have $\\lambda \\mu \\ne 0.$ Now, the gradient identity implies the following equations must be solved: $\\frac{\\partial {K_h}}{\\partial {x}} &= 0, \\\\\\frac{\\partial {K_h}}{\\partial {y}} &= 0, \\\\\\lambda \\frac{\\partial {K_h}}{\\partial {z_1}} &= \\mu \\frac{\\partial {P_h}}{\\partial {z_1}}, \\\\\\frac{\\partial {P_h}}{\\partial {z_2}} &= 0, \\\\\\lambda \\frac{\\partial {K_h}}{\\partial {w}} &= \\mu \\frac{\\partial {P_h}}{\\partial {w}}.$ We first rule out the possibility of any singular points with $w=0$ .", "If $[r:s:t:u:0]$ is a singular point, then the first two equations imply that $ \\frac{\\partial f_d}{\\partial x}(r,s) = \\frac{\\partial f_d}{\\partial y}(r,s) =0.$ By Euler's identity we conclude that $f_d(r,s)=0.$ As we are assuming $f_d$ is smooth over $\\mathbb {F}_p,$ we must therefore have $r=s=0.$ Now $\\partial {K_h}/\\partial {z_1}$ vanishes when $w=0.$ Recalling the definition of $P_h$ (see (REF )), we have $P_h(x,y,w) = g_{d-1}(b,a)h (x^{d-2}+x^{d-3}y+\\ldots +x y^{d-3} +y^{d-2})+\\ldots .$ If $d=3$ then the third equation (say) cannot be satisfied, and so there are no singular points in this case.", "Thus we may suppose that $d=4.$ In this 
case the third and fourth equations imply that $2t+u &= 0 \\t+2u &= 0,$ whence we must have $t=u=0.$ This is a contradiction, and so there cannot be any singular points with $w=0.$ We now consider singular points of the form $[r:s:t:u:1].$ The fourth equation implies that $(\partial {P_h}/\partial {z_2})(t,u)=0.$ Likewise, on replacing $K_h(x,y,z_1,w)$ by $K_h(x,y,z_2,w)$ and using the symmetry of $P_h(z_1,z_2,w)$ in the first two variables, we see that $(\partial {P_h}/\partial {z_1})(t,u)=0$ also.", "Since we are assuming $P_h(t,u,1)=0,$ by Euler's identity we see that $(\partial {P_h}/\partial {w})(t,u)=0$ .", "But now we have produced a singular point on the curve $P_h,$ a contradiction.", "With this claim proven, we move on to estimating the exponential sum $T(p;M,N)$ defined above.", "We begin by considering the case $p|(M,N).$ By definition, $T(p;0,0) = \#\lbrace x,y,z_1,z_2 \in \mathbb {F}_p: K_h(x,y,z_1,1) = P_h(z_1,z_2,1)=0 \rbrace ,$ which we write as $T(p;0,0) =\frac{1}{p-1} \#\lbrace x,y,z_1,z_2,w \in \mathbb {F}_p: K_h(x,y,z_1,w)=P_h(z_1,z_2,w)=0 \text{ and } w\ne 0\rbrace .$ This set defines a non-singular, complete intersection, projective variety of dimension 2.", "Thus, we may apply Lemma REF to conclude that $\#\lbrace x,y,z_1,z_2,w \in \mathbb {F}_p: K_h(x,y,z_1,w)=P_h(z_1,z_2,w)=0 \rbrace = p^3+O(p^2).$ The contribution from points with $w=0$ is $\sum _{\begin{array}{c}z_1,z_2\in \mathbb {F}_p \\ P_h(z_1,z_2,0)=0\end{array}}\sum _{\begin{array}{c}x,y\in \mathbb {F}_p\\ K_h(x,y,z_1,0)=0\end{array}}1 \ll p\sum _{\begin{array}{c}z_1,z_2\in \mathbb {F}_p \\ P_h(z_1,z_2,0)=0\end{array}}1 \ll p^2.$ It follows that $T(p;0,0) = p^2+O(p).$ Now let us suppose $p\nmid (M,N).$ We will use Lemma REF to show that (generically) we have square-root cancellation.", "Fix $j\ge 1$ and $\tau \in \mathbb {F}_{p^{j}}.$ Then we must count $N_{j}(\tau ):=\#\lbrace x,y,z_1,z_2 \in \mathbb {F}_{p^{j}}: K_h(x,y,z_1,1)=P_h(z_1,z_2,1)=0 \text{ and } Mx+Ny=\tau \rbrace .$ We may suppose WLOG that $p\nmid N.$ Then we may write this as $N_{j}(\tau )=\#\lbrace x,z_1,z_2 \in \mathbb {F}_{p^{j}}: K_h(x,(\tau -Mx)N^{-1},z_1,1)=P_h(z_1,z_2,1)=0\rbrace .$ We claim the following.", "Claim.", "$K_h(x,(\tau -Mx)N^{-1},z_1,1)=P_h(z_1,z_2,1)=0$ defines an absolutely irreducible curve for all but at most $O(1)$ values of $\tau $ .", "This proceeds much the same way as the analogous claim contained in Lemma REF .", "We examine the absolute irreducibility of this curve by investigating the smoothness properties of its projectivisation over $\overline{\mathbb {F}}_p$ .", "There are two cases to consider.", "If $f_d(N,-M)=0$ in $\overline{\mathbb {F}}_p,$ then we exclude the value $\tau = Nf_{d-1}(N,-M)/(\partial f_d/\partial y)(N,-M),$ as we may, and consider the equations $\bigg [\frac{f_{d-1}(N,-M) }{N^{d-1}}- \tau \frac{(\partial f_d/\partial y)(N,-M)}{N^d} \bigg ]x^{d-1} &=g_{d-1}(b,a)hz_1^{d-1} + (\text{terms involving $w$})$ and $P_h(z_1,z_2,w)= 0.$ Call the first equation $\psi (x,z_1,w)=0.$ We must examine the system $\psi = P_h = 0 \text{ and } \lambda \nabla \psi = \mu \nabla P_h$ for some $(\lambda ,\mu )\ne (0,0).$ Clearly $\lambda \ne 0$ as $P_h$ is smooth.", "Let us suppose that $\mu =0.$ We consider possible singular points of the form $[r:s:t:0].$ The equations $(\partial \psi /\partial x)(r,s)=(\partial \psi /\partial z_1)(r,s)=0$ yield $r=s=0,$ and then the equation $P_h(0,t,0)=0$ yields $t=0$ , a 
contradiction.", "Thus we may look for singular points of the form $[r:s:t:1].$ The equation $\\partial \\psi /\\partial x=0$ becomes $f^{\\prime }(x,(\\tau -Mx)N^{-1})=0$ and, as above, the equation $\\partial \\psi /\\partial z_1=0$ constrains $s$ to at most $O(1)$ values.", "In this case, there exists a valid $r$ if and only if the discriminant $\\mathrm {Disc}_x\\bigg (f(x,(\\tau -Mx)N^{-1}) - (2ab)^{d-1}hg\\bigg (\\frac{s+h}{2a},\\frac{s-h}{2b}\\bigg ) -(2ab)^{d-1}k\\bigg )$ vanishes.", "This is the same discriminant as defined in the proof of Lemma REF (see (REF )), and we showed there that this vanishes for at most $O(1)$ values of $\\tau .$ Hence the claim holds in this case.", "Thus we may suppose that $\\lambda \\mu \\ne 0.$ Our equations become $\\frac{\\partial {\\psi }}{\\partial {x}} &= 0, \\\\\\lambda \\frac{\\partial {\\psi }}{\\partial {z_1}} &= \\mu \\frac{\\partial {P_h}}{\\partial {z_1}}, \\\\\\frac{\\partial {P_h}}{\\partial {z_2}} &= 0, \\\\\\lambda \\frac{\\partial {\\psi }}{\\partial {w}} &= \\mu \\frac{\\partial {P_h}}{\\partial {w}}.$ On replacing $\\psi (x,z_1,w)$ with $\\psi (x,z_2,w)$ and using the symmetry of $P_h(z_1,z_2,w)$ in the first two arguments, we may adjoin to this system the equation $\\partial {P_h}/\\partial {z_1} = 0.$ Now, if $[r:s:t:1]$ is a singular point then the equations $(\\partial {P_h}/\\partial {z_1})(r,s) =(\\partial {P_h}/\\partial {z_2})(r,s) =0$ together with Euler's identity and the equation $P_h(s,t,1)=0$ yield $(\\partial P_h/\\partial w)(r,s)=0.$ This is a contradiction as $P_h$ is smooth.", "Thus we may restrict ourselves to looking at singular points of the form $[r:s:t:0].$ But now our first equation implies that $r=0,$ and similar to the proof of the claim above, the equations $(\\partial P_h/\\partial z_1)(r,s)=(\\partial P_h/\\partial z_2)(r,s)=0$ either cannot be satisfied in the case $d=3,$ or imply that $s=t=0$ in the case $d=4.$ In both cases we arrive at a contradiction.", "If $f_d(N,-M)\\ne 0$ in $\\overline{\\mathbb {F}}_p,$ then we must instead consider the equations $\\frac{f_d(N,-M)}{N^d} x^{d} &=g_{d-1}(b,a)hz^{d-1}w + (\\text{terms involving $w$}).$ and $P_h(z_1,z_2,w)= 0.$ One can proceed exactly as above and conclude there are at most $O(1)$ values of $\\tau $ for which this system is singular.", "This completes the proof of the claim.", "Now, for the $O(1)$ values of $\\tau $ for which the curve $K_h(x,(\\tau -Mx)N^{-1},z,1) =P_h(z_1,z_2,1)=0$ is not absolutely irreducible, we will apply the trivial bound $N_{j}(\\tau ) \\ll p^{j}.$ In the complementary case, by Lemma REF , we obtain $N_{j}(\\tau ) = p^{j} + O(p^{j/2})$ Thus we may take $N_{j}=p^{j}$ and $\\kappa =2$ in the statement of Lemma REF , and the result follows.", "We are left to proof Proposition REF in the cases when $t\\in \\lbrace 3,4\\rbrace .$ In this regime we only require an upper bound for $\\Sigma _t(p;M,N)$ whenever $p\\in \\mathcal {P}.$ To do this we follow the argument of Browning [2].", "By definition, we have $\\Sigma _t(p;M,N) &=\\sum _{\\begin{array}{c}x,y,z_1,\\ldots ,z_t\\,\\,(\\text{mod}\\,\\,p) \\\\ K_h(x,y,z_1,1)=0 \\\\ (z_i-z_j)P_h(z_i,z_j,1)=0\\text{ for } i\\ne j\\end{array}}e\\bigg (\\frac{Mx+Ny}{p}\\bigg ).$ Let $\\sigma ({\\bf z})$ denote the number of distinct elements in the set $\\lbrace z_1,\\ldots ,z_t\\rbrace ,$ so that $1\\le \\sigma ({\\bf z}) \\le t.$ We may split our sum according to the value of $\\sigma ({\\bf z}).$ The contribution from those $z_1,\\ldots ,z_t$ for which $\\sigma ({\\bf z})=1$ is $\\Sigma 
_1$ , and this event arises in precisely one way.", "The contribution from those $z_1,\ldots ,z_t$ with $\sigma ({\bf z})=2$ is $\Sigma _2-\Sigma _1$ by (REF ) and this event arises in $c_t$ ways, for some appropriate constant $c_t$ depending only on $t$ .", "Let us now consider the contribution from those ${\bf z}$ with $\sigma ({\bf z})=3.$", "From the definition of $P_h$ it is easy to see the following: If $d=3$ we have $P_h(x,y,1) = P_h(x,z,1) \iff y=z.$ If $d=4$ we have $P_h(x,y,1) = P_h(x,z,1) \iff y=z\text{ or } x+y+z=g_{d-1}(b,a)^{-1}h^{-1}c(h)$ for some polynomial $c(h)$ whose coefficients depend on $g,a$ and $b.$ With this, it is clear there is no contribution from the case $\sigma ({\bf z}) \ge 3,$ whenever $d=3.$ Hence, we obtain $\Sigma _t(p;M,N) = (1-c_t)\Sigma _1+c_t\Sigma _2$ whenever $t\in \lbrace 3,4\rbrace $ and $d=3,$ and Proposition REF follows.", "Thus we may suppose that $d=4.$ In this case it is clear that there is no contribution from $\sigma ({\bf z}) \ge 4.$ This just leaves us to examine the case $\sigma ({\bf z})=3.$ This event will arise in $d_t$ ways, for some appropriate constant $d_t$ depending only on $t$ .", "We must examine the exponential sum $\sum _{\begin{array}{c}x,y,z_1,z_2\,\,(\text{mod}\,\,p) \\ (z_1-z_2)(2z_1+z_2-c(h))(z_1+2z_2-c(h))\ne 0 \\ K_h(x,y,z_1,1)=0 \\ P_h(z_1,z_2,1)=0\end{array}}e\bigg (\frac{Mx+Ny}{p}\bigg ).$ As $p\in \mathcal {P}$ and so, in particular, $p\nmid 6 h g_{d-1}(b,a),$ the polynomials $P_h(z,z,1), P_h(z,c(h)-2z,1)$ and $P_h(z,2^{-1}(c(h)-z),1)$ are non-zero quadratic polynomials in $z.$ It follows that the terms omitted contribute $O(p)$ altogether.", "In other words, this equals $\sum _{\begin{array}{c}x,y,z_1,z_2\,\,(\text{mod}\,\,p) \\ K_h(x,y,z_1,1)=0 \\ P_h(z_1,z_2,1)=0\end{array}}e\bigg (\frac{Mx+Ny}{p}\bigg )+O(p).$ This exponential sum is precisely $\Sigma _2-\Sigma _1+O(p)$ by (REF ) and (REF ).", "Thus, we obtain $\Sigma _t(p;M,N) = (1-c_t-d_t)\Sigma _1(p;M,N)+(c_t+d_t)\Sigma _2(p;M,N) + O(p)$ whenever $t\in \lbrace 3,4\rbrace $ and $d=4.$ This completes the proof of Proposition REF ."
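, "We remark that, combined with Lemma 6.3 and Lemma 6.4, these decompositions recover the bound of Proposition 6.1 for $t\in \lbrace 3,4\rbrace $: if $p|(M,N)$ then $\Sigma _1(p;M,N)\ll p^2$ and $\Sigma _2(p;M,N)\ll p^2,$ so that $\Sigma _t(p;M,N)\ll p^2 = p(p,M,N),$ while if $p\nmid (M,N)$ then every term on the right-hand side is $O(p)\ll p(p,M,N)."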
], [ "Estimation of exponential sums (II)", "Recall the definition of $\\Phi $ given by (REF ).", "In this section we will estimate the exponential sums $\\Phi (h;M,N) = \\sum _{\\begin{array}{c}x,y\\,\\,(\\text{mod}\\,\\,h) \\\\ (2ab)^{d-1}f(x,y)\\equiv (2ab)^{d-1}k\\,\\,(\\text{mod}\\,\\,h)\\end{array}}e\\bigg (\\frac{Mx+Ny}{h}\\bigg )$ where $h$ is a fixed, positive integer and $M$ and $N$ are integers.", "We first note the following multiplicativity property of these sums: if $(j,l)=1$ and $j\\overline{j}+l\\overline{l}=1$ is it not difficult to show that $\\Phi (jl;M,N) = \\Phi (j;\\overline{l}M,\\overline{l}N)\\Phi (l;\\overline{j}M,\\overline{j}N).$ Thus is suffices to study $\\Phi (p^l;M,N)$ for some prime $p$ and integer $l\\ge 1.$ To tackle these exponential sums we will obtain square-root cancellation in the generic case for $l=1,$ and use elementary bounds for higher powers.", "We now consider estimating $\\Phi (p;M,N).$ This exponential sum will be sensitive to whether or not the curve $\\lbrace f(x_1,x_2)=k\\rbrace \\subset \\mathbb {A}_{\\mathbb {F}_p}^{2}$ contains a line.", "In this case, if this line can be parametrised by $Mx+Ny= \\tau ,$ for some constant $\\tau \\in \\mathbb {F}_p$ , we will get zero cancellation and hence the sum will be large.", "Because we are assuming the curve $f(x,y)=k$ contains no line definable over $\\mathbb {Q},$ this should be a rare event.", "Our aim is to define a non-zero integer $\\Delta _f(M,N,k)$ such that whenever this occurs, we must have that $p|\\Delta _f.$ This, together with size bounds on $\\Delta _f,$ will be enough to control the cases where this exponential sum is large.", "This is the content of the following lemma.", "Lemma 7.1 Let $f,k$ be as in the statement of Theorem REF .", "Fix integers $(M,N)\\in \\mathbb {Z}^{2},$ not both zero.", "Fix a prime $p$ such that $p\\nmid (M,N)$ and $p$ is sufficiently large in terms of $d.$ Let $\\tau \\in \\overline{\\mathbb {F}}_p.$ Then there exists a non-zero integer $\\Delta _f(M,N,k)$ such that whenever the curve $\\lbrace f(x,y)=k\\rbrace \\subset \\mathbb {A}_{\\overline{\\mathbb {F}}_p}^{2}$ contains the line $Mx+Ny = \\tau ,$ we must have $p|\\Delta _f(M,N,k).$ Moreover, $\\Delta _f(M,N,k)$ satisfies the size bound $|\\Delta _f(M,N,k)| \\ll _{f,d} |k|\\max \\lbrace |M|, |N|\\rbrace ^{d^2}.$ We may suppose WLOG that $N\\ne 0.$ (If $M\\ne 0$ then an identical case-analysis argument holds with minor adjustments.)", "With notation as above, let us suppose that the curve $f(x,y)=k$ contains the line $Mx+Ny=\\tau $ over $\\overline{\\mathbb {F}}_p.$ In other words, we have the polynomial identity $f(x,(\\tau -Mx)N^{-1})=0$ in $\\overline{\\mathbb {F}}_p[x].$ By Taylor expansion (see e.g.", "(REF )), and using the fact $p\\nmid N,$ we see our assumption is that the identity $\\frac{f_d(N,-M)}{N^d}x^d + \\bigg [\\frac{f_{d-1}(N,-M)}{N^{d-1}} - \\tau \\frac{(\\partial f_d/\\partial y)(N,-M)}{N^d} \\bigg ]x^{d-1}+\\ldots =k$ holds in $\\overline{\\mathbb {F}}_p[x].$ The proof now proceeds by careful case-analysis.", "If $f_d(N,-M)\\ne 0$ in $\\mathbb {Q}$ , then for the leading term to vanish we must have $p|f_d(N,-M).$ Thus in this case we define $\\Delta _f(M,N,k) := f_d(N,-M)$ and the size bound is easily satisfied.", "Thus, proceeding, we may suppose that $f_d(N,-M)=0$ in $\\mathbb {Q}$ .", "In this case, by Euler's identity, and the fact $f_d$ is assumed to be smooth and $N\\ne 0,$ we must have $(\\partial f_d/\\partial y)(N,-M)\\ne 0$ in $\\mathbb {Q}.$ If $p|(\\partial 
f_d/\\partial y)(N,-M)$ then we set $\\Delta _f(M,N,k):=(\\partial f_d/\\partial y)(N,-M)$ and we are done; again the size bound is immediate.", "Now let us suppose that $p\\nmid (\\partial f_d/\\partial y)(N,-M).$ The vanishing of the coefficient of $x^{d-1}$ implies that $\\tau = \\frac{N f_{d-1}(N,-M)}{(\\partial f_d/\\partial y)(N,-M)}\\,\\,\\,\\text{in $\\overline{\\mathbb {F}}_p$}.$ Substituting (REF ) back into (REF ), we obtain $\\sum _{j=0}^{d} E_j(M,N) x^j = k \\,\\,\\,\\text{in $\\overline{\\mathbb {F}}_p[x]$,}$ where, for each $j\\in \\lbrace 0,\\ldots ,d\\rbrace ,$ we have defined the rational numbers $ \\nonumber E_j(M,N) &:= \\frac{1}{N^j}\\sum _{i=j}^{d} \\frac{(-1)^{i-j}}{(i-j)!}", "\\bigg (\\frac{f_{d-1}(N,-M)}{(\\partial f_d/\\partial y)(N,-M)}\\bigg )^{i-j}\\frac{\\partial ^{i-j}f_i}{\\partial y^{i-j}}(N,-M) \\\\ \\nonumber &= \\frac{\\sum _{i=j}^{d} \\frac{(d-j)!}{(i-j)!", "}f_{d-1}(N,-M)^{i-j}(\\partial f_d/\\partial y)(N,-M)^{d-i} (\\partial ^{i-j} f_i/\\partial y^{i-j})(N,-M)}{N^j(\\partial f_d/\\partial y)(N,-M)^{d-j}(d-j)!}", "\\\\&:= \\frac{A_j(M,N)}{B_j(M,N)}.$ Note that $A_j(M,N)$ and $B_j(M,N)$ are both integers, $B_j(M,N)$ is non-zero and our assumptions imply that $p\\nmid B_j(M,N)$ (for every $j$ ).", "Suppose that $E_j(M,N)\\ne 0$ in $\\mathbb {Q}$ for some $j\\in \\lbrace 1,\\ldots ,d-1\\rbrace .$ Let $J$ be the maximal such integer.", "Then, in particular, we must have that $p$ divides $A_J(M,N)$ and in this case we set $\\Delta _f(M,N,k) := A_J(M,N).$ Note $|\\Delta _f(M,N,k)| \\ll \\max _{J\\le i\\le d}\\lbrace |M|,|N|\\rbrace ^{(d-1)(i-j)+(d-2)(d-i)+j} \\ll \\max \\lbrace |M|,|N|\\rbrace ^{d^2},$ and so the size bound is satisfied.", "Otherwise, we arrive at the identity $E_0(M,N) = k \\,\\,\\,\\text{in $\\overline{\\mathbb {F}}_p$.", "}$ If $E_0(M,N)=k$ in $\\mathbb {Q},$ then, together with all of our assumptions thus far, we arrive at a bonafide identity over $\\mathbb {Q}:$ $\\sum _{j=0}^{d} E_j(M,N) x^j = k \\,\\,\\,\\text{in $\\mathbb {Q}[x]$.", "}$ Translating everything back, this says that $f(x,y)+k$ contains the line $Mx+Ny = \\frac{N f_{d-1}(N,-M)}{(\\partial f_d/\\partial y)(N,-M)}.$ This is a rational line, which is a contradiction.", "Thus, we must have that $E_0(M,N)\\ne k$ in $\\mathbb {Q}.$ In this case we must have $p$ divides the numerator of $E_0(M,N)-k,$ and so we define $\\Delta _f(M,N,k):=A_0(M,N)-kB_0(M,N).$ As above, we have $|A_0(M,N)| \\ll \\max \\lbrace |M|,|N|\\rbrace ^{d^2}.$ Note that $|B_0(M,N)| \\ll \\max \\lbrace |M|,|N|\\rbrace ^{d(d-1)} \\ll \\max \\lbrace |M|,|N|\\rbrace ^{d^2}.$ As $|\\Delta _f(M,N,k)| \\le |A_0(M,N)| + |k| |B_0(M,N)|,$ the stated bound follows.", "With Lemma REF , we can prove the following.", "Lemma 7.2 For any integers $M$ and $N$ we have $\\Phi (p;M,N)\\ll p^{1/2}(p,\\Delta _f(M,N,k))^{1/2}.$ We may suppose that $p$ is sufficiently large in terms of $f,a$ and $b$ as otherwise both sides are $O(1).$ We note that $\\#\\lbrace x,y\\in \\mathbb {F}_p: (2ab)^{d-1}f(x,y)\\equiv (2ab)^{d-1}k\\,\\,(\\text{mod}\\,\\,p)\\rbrace \\ll p.$ Thus, by bounding trivially, we may apply the bound $\\Phi (p;M,N) \\ll p$ whenever $M=N=0, p|(M,N)$ or $p|\\Delta _f(M,N,k).$ By inspecting the proof of Lemma REF , it is clear that $\\Delta _f(0,0,k)=0$ and $(M,N)|\\Delta _f(M,N,k).$ Thus (REF ) holds in these cases.", "Proceeding, we may suppose $M$ and $N$ are not both zero and $p\\nmid \\Delta _f(M,N,k).$ In this regime, we will obtain square-root cancellation in $\\Phi (p;M,N)$ using Lemma REF .", "For any $j\\ge 
1$ and $\\tau \\in \\mathbb {F}_{p^j},$ we define $N_{j}(\\tau ) = \\#\\lbrace x,y\\in \\mathbb {F}_{p^{j}}: (2ab)^{d-1}f(x,y) = (2ab)^{d-1}k\\text{ and } Mx+Ny = \\tau \\rbrace .$ We may suppose WLOG that $p\\nmid N.$ We can write $N_{j}(\\tau ) = \\#\\lbrace x\\in \\mathbb {F}_{p^{j}}: (2ab)^{d-1}f(x,(\\tau -Mx)N^{-1}) = (2ab)^{d-1}k\\rbrace .$ Since we are assuming $p\\nmid \\Delta _f(M,N,k),$ it follows from Lemma REF that the curve $f(x,y)=k$ contains no lines over $\\mathbb {F}_{p^j}.$ Hence $f(x,(\\tau -Mx)N^{-1})-k$ is never the zero polynomial in $\\mathbb {F}_{p^j}[x]$ , and we can bound $N_j(\\tau ) \\ll 1.$ Thus we may take $N_{j}=0$ and $\\kappa =1$ in the statement of Lemma REF , and the result follows.", "We now turn our attention to bounding $\\Phi (p^l;M,N)$ when $l\\ge 2.$ To do this we will bound trivially and forego any cancellation in our exponential sum.", "In other words, we bound $\\Phi (p^l;M,N) \\le \\#\\lbrace x,y\\in \\mathbb {Z}/p^l\\mathbb {Z}: (2ab)^{d-1}f(x,y) \\equiv (2ab)^{d-1}k\\,\\,(\\text{mod}\\,\\,p^l)\\rbrace $ and aim to estimate the count on the RHS.", "To do this we need some understanding of the number of solutions to polynomial congruences over the finite rings $\\mathbb {Z}/p^l\\mathbb {Z}$ .", "To this end, let $Q\\in \\mathbb {Z}[x]$ be a polynomial of degree $d$ with leading coefficient $a_d.$ We will make the dependencies on implied constants explicit for the following estimates involving $Q$ .", "We are interested in general bounds for the count $\\#\\lbrace x \\in \\mathbb {Z}/p^l\\mathbb {Z}: Q(x)\\equiv 0\\,\\,(\\text{mod}\\,\\,p^l)\\rbrace .$ As a first estimate, whenever $p\\nmid \\mathrm {cont}(Q),$ we have $\\#\\lbrace x \\in \\mathbb {Z}/p^l\\mathbb {Z}: Q(x)\\equiv 0\\,\\,(\\text{mod}\\,\\,p^l)\\rbrace \\ll _{d} p^{l-1}.$ This is easily proven by induction; there are $O_d(1)$ roots when $l=1$ and, when $l\\ge 2,$ any element of $\\mathbb {Z}/p^{l-1}\\mathbb {Z}$ has exactly $p$ lifts to an element of $\\mathbb {Z}/p^l\\mathbb {Z}.$ By using $p$ -adic arithmetic, we can improve on this bound for large $l$ .", "Lemma 7.3 With notation as above, for any prime $p$ and integer $l\\ge 1,$ we have $\\#\\lbrace x\\in \\mathbb {Z}/p^{l}\\mathbb {Z}: Q(x)\\equiv 0\\,\\,(\\text{mod}\\,\\,p^l)\\rbrace \\ll _{d} p^{l-l/d+v_p(a_d)/d},$ where $v_p(a_d)$ denotes the $p$ -adic valuation of the non-zero integer $a_d.$ We will prove this by using the fact any root must be “$p$ -adically close\" to one of the $d$ (not necessarily distinct) roots of $Q$ in an algebraic closure of $\\mathbb {Q}_p,$ the $p$ -adic numbers.", "To this end, we define the following (standard) notation.", "We let $|\\cdot |_p$ denote the usual norm in $\\overline{\\mathbb {Q}}_p$ and $\\mu $ denote the Haar measure.", "For any $\\beta \\in \\overline{\\mathbb {Q}}_p$ and $m\\in \\mathbb {Z}$ we define $B(\\beta ,p^{m}) := \\lbrace y\\in \\overline{\\mathbb {Q}}_p: |y-\\beta |_p \\le p^m\\rbrace ,$ i.e.", "the (closed) ball of radius $p^m$ centered at $\\beta ,$ so that $\\mu (B(\\beta ,p^m))=p^m.$ The estimate (REF ) is true for $l=1$ since in this case we can use the bound $O_d(1).$ Proceeding, we fix an integer $l\\ge 2$ and roots $\\beta _1,\\ldots ,\\beta _d$ of $Q$ in an algebraic closure $\\overline{\\mathbb {Q}}_p$ .", "In $\\overline{\\mathbb {Q}}_p[t]$ we have the factorisation $Q(t) = a_d\\prod _{i=1}^{d}(t-\\beta _i).$ We can partition the set we are interested as follows: $\\bigcup _{i=1}^{d} \\#\\lbrace x\\in \\mathbb {Z}/p^{l}\\mathbb {Z}: Q(x)\\equiv 
0\,\,(\text{mod}\,\,p^l)\text{ and } |x-\beta _i|_p\text{ minimal}\rbrace .$ Suppose we are looking at the set corresponding to the root $\beta .$ For an element $x\in \mathbb {Z}/p^l\mathbb {Z}$ to be counted we must have $|a_d|_p |x-\beta |_p^{d} \le |a_d|_p \prod _{i=1}^{d}|x-\beta _i|_p = |Q(x)|_p \le p^{-l}.$ Suppose $m=v_p(a_d)$ so that $|a_d|_p = p^{-m}.$ The line above equivalently says that $x \in B(\beta ,p^{-(l-m)/d})$ .", "By reduction, we see that $x$ must take the form $x = \alpha + \lambda p$ for some root $\alpha \in \mathbb {Z}/p\mathbb {Z}$ and $\lambda \in \mathbb {Z}/p^{l-1}\mathbb {Z}.$ There are $O_d(1)$ choices for $\alpha .$ Suppose that $\alpha + \lambda p\in B(\beta ,p^{-(l-m)/d})$ for $\lambda \in \Lambda \subset \mathbb {Z}/p^{l-1}\mathbb {Z}.$ Our aim is therefore to bound $|\Lambda |.$ Claim: If $\lambda \ne \lambda ^{\prime }\in \mathbb {Z}/p^{l-1}\mathbb {Z}$ then the balls $B_{\lambda } := B(\alpha +\lambda p, p^{-l})$ are disjoint.", "For any distinct elements $\lambda \ne \lambda ^{\prime }\in \mathbb {Z}/p^{l-1}\mathbb {Z}$ we have $|\lambda -\lambda ^{\prime }|_p \ge p^{-(l-2)}.$ Now, if $x \in B_{\lambda }\cap B_{\lambda ^{\prime }}$ we have $p^{-(l-1)}&\le |(\alpha +\lambda p) - (\alpha +\lambda ^{\prime }p)|_p \\&=|(\alpha +\lambda p)-x - (\alpha +\lambda ^{\prime }p-x)|_p \\&\le \max \lbrace |\alpha +\lambda p-x|_p, |\alpha +\lambda ^{\prime }p-x|_p\rbrace \\&\le p^{-l},$ a contradiction.", "Here we have used the fact $|\cdot |_p$ is non-Archimedean.", "With this claim proven we can complete the proof.", "Fix $\alpha $ .", "As $|\cdot |_p$ is non-Archimedean, whenever $\alpha + \lambda p\in B(\beta ,p^{-(l-m)/d})$ we must actually have $B(\alpha + \lambda p,p^{-(l-m)/d}) = B(\beta ,p^{-(l-m)/d}).$ Hence we are assuming that $\bigcup _{\lambda \in \Lambda } B(\alpha +\lambda p, p^{-l}) \subseteq \bigcup _{\lambda \in \Lambda } B(\alpha +\lambda p, p^{-(l-m)/d}) = B(\beta ,p^{-(l-m)/d}).$ From the claim, these sets on the LHS are disjoint.", "Using the fact $\mu $ is a measure, it follows that $\mu \bigg (\bigcup _{\lambda \in \Lambda }B(\alpha +\lambda p,p^{-l})\bigg ) &= \sum _{\lambda \in \Lambda }\mu (B(\alpha +\lambda p,p^{-l})) = |\Lambda | p^{-l}.$ On the other hand $\mu (B(\beta ,p^{-(l-m)/d})) = \mu (B(\beta ,p^{- \left\lceil (l-m)/d \right\rceil })) = p^{-\left\lceil (l-m)/d \right\rceil }.$ Putting these facts together, and using monotonicity of $\mu ,$ we obtain $|\Lambda | &\le p^{l-\left\lceil (l-m)/d \right\rceil } \le p^{l-l/d+m/d}.$ We are done as there are only $O_d(1)$ possibilities for $\alpha $ and $\beta $ .", "These results can easily be extended to the case $p^m||\mathrm {cont}(Q).$ Corollary 7.4 Suppose $p^m||\mathrm {cont}(Q).$ Then $\#\lbrace x \in \mathbb {Z}/p^l\mathbb {Z}: Q(x)\equiv 0\,\,(\text{mod}\,\,p^l)\rbrace \ll _{d,a_d}{\left\lbrace \begin{array}{ll}p^l\,\,&\text{if $l\le m,$} \\p^{\min \lbrace l-1,l-l/d\rbrace }\,\,&\text{if $l>m$.}\end{array}\right.", "}$ The statement is clear when $l\le m$ and so we suppose that $l>m.$ Write $Q(x) = p^m R(x)$ where $\mathrm {cont}(R)=1.$ Then we have $\#\lbrace x \in \mathbb {Z}/p^l\mathbb {Z}: Q(x)\equiv 0\,\,(\text{mod}\,\,p^l)\rbrace &= p^m\cdot \#\lbrace y \in \mathbb {Z}/p^{l-m}\mathbb {Z}: 
R(y)\\equiv 0\\,\\,(\\text{mod}\\,\\,p^{l-m})\\rbrace .$ Hence, using (REF ) and (REF ), we can bound this by $\\ll _d p^m\\cdot p^{\\min \\lbrace l-m-1, l-m - (l-m)/d+v_p(a_d p^{-m})/d\\rbrace } = p^{\\min \\lbrace l-1, l - l/d+v_p(a_d)/d\\rbrace }.$ The result follows upon noting the $p^{v_p(a_d)/d}$ factor can be absorbed into the constant term, which we allow to depend on $a_d.$ The following estimate will also be convenient for us: for any $A,B\\in \\mathbb {Z}$ such that $A\\ne 0$ and $p\\nmid A,$ one has $\\#\\lbrace x \\in \\mathbb {Z}/p^l\\mathbb {Z}: p^m|Ax+B\\rbrace \\le {\\left\\lbrace \\begin{array}{ll}p^{l-m}\\,\\,&\\text{if $m\\le l,$} \\\\1 \\,\\,&\\text{if $m>l$.}\\end{array}\\right.", "}$ To proceed we require the following lemma.", "Lemma 7.5 Let $f\\in \\mathbb {Z}[x,y]$ be such that the projective variety $\\lbrace f_d(x,y)=0\\rbrace \\subset \\mathbb {P}_{\\mathbb {Q}}^{1}$ has no non-constant repeated factors.", "Then the polynomial $\\mathrm {Disc}_t((2ab)^{d-1}f(x,t)-(2ab)^{d-1}k)$ is not identically zero, and the leading coefficient is independent of $k$ .", "We wish to show a particular resultant polynomial is not identically zero.", "We will follow the strategy adopted in Lemma REF : once again, we will extract the leading coefficient and show this is non-zero.", "Since $f(x,xt) &= \\sum _{i=0}^{d} f_i(x,xt) = \\sum _{i=0}^{d} x^i f_i(1,t),$ it follows that $\\lim _{x\\rightarrow \\infty } \\frac{(2ab)^{d-1}f(x,xt)-(2ab)^{d-1}k}{x^d} = (2ab)^{d-1}f_d(1,t).$ By a similar argument to the proof of Lemma REF , it follows that our discriminant polynomial has the leading coefficient $\\mathrm {Disc}_t((2ab)^{d-1}f_d(1,t)).$ Now this discriminant is non-zero by our assumption on $f_d(x,y),$ and it is clearly independent of $k.$ Let us write $f(x,y)=\\sum _{i+j\\le d}c_{i,j}x^iy^j,$ so that $f_{d}(x,y) = c_{d,0}x^d+c_{d-1,1}x^{d-1}y+\\ldots +c_{1,d-1}x y^{d-1}+c_{0,d}y^d.$ If both $c_{d,0}=c_{d-1.1}=0$ then $f_d(x,y)$ will contain a a square factor.", "Thus with our assumptions at least one of $c_{d-1,1}$ or $c_{d,0}$ is non-zero.", "(Similarly for $c_{0,d}$ and $c_{1,d-1}.$ ) We will make use of this fact in what follows.", "Our main result is the following.", "We recall our convention that implied constants may depend on $f,g,a$ and $b$ without specifying so, and this includes dependencies on the coefficients of $f$ and $g$ as well as dependencies on the degree $d$ .", "Lemma 7.6 Fix $l\\ge 2.$ We have $\\Phi (p^l;0,0) \\ll {\\left\\lbrace \\begin{array}{ll}p^{2l-2}\\,\\,&\\text{if $2\\le l\\le d,$} \\\\p^{2l-l/d-1}\\,\\,&\\text{if $l\\ge d.$}\\end{array}\\right.", "}$ For this proof it will also be helpful to define the polynomial $F(x,y) := (2ab)^{d-1}f(x,y)-(2ab)^{d-1}k.$ We may assume that $p$ is sufficiently large in terms of $f,a$ and $b$ as otherwise the result holds trivially.", "In particular, in view of Lemma REF , in this regime we may assume the polynomial $\\mathrm {disc}_t(F(x,t))$ has content coprime to $p.$ By Hensel's lemma, if $x\\in \\mathbb {Z}/p^{l}\\mathbb {Z}$ is such that $\\mathrm {disc}_t(F(x,t))\\lnot \\equiv 0\\,\\,(\\text{mod}\\,\\,p),$ then any of the $\\le p^l$ choices for $x$ will lead to at most $O(1)$ solutions in $y$ .", "Hence we have $\\Phi (p^l;0,0) = \\sum _{\\begin{array}{c}x,y\\in \\mathbb {Z}/p^{l}\\mathbb {Z} \\\\ F(x,y)\\equiv 0\\,\\,(\\text{mod}\\,\\,p^{l}) \\\\ \\mathrm {disc}_t(F(x,t))\\equiv 0\\,\\,(\\text{mod}\\,\\,p) \\end{array}}1+O(p^l).$ Let us take the sum over $x$ on the outside.", "The last constraint restricts 
$x$ to $O(1)$ values modulo $p$ and hence $O(p^{l-1})$ values modulo $p^l.$ We would like to use the estimates proved above to tackle the inner sum over $y$ .", "To do this, we need some understanding of how $p$ divides the content of $F(x,y)$ as a polynomial in $y.$ We write this as $\\mathrm {cont}_y(F(x,y)).$ If $f_d(0,1)=c_{0,d}\\ne 0$ we have $F(x,y) = (2ab)^{d-1}c_{0,d}y^d+(\\text{lower order terms in $y$}).$ In the regime under consideration, $p\\nmid c_{0,d}$ , and so we get an overall bound $\\Phi (p^l;0,0) \\ll p^l+p^{l-1+\\min \\lbrace l-1,l-l/d\\rbrace }$ by Corollary REF .", "This agrees with the result stated.", "Now let us suppose that $f_d(0,1)=c_{0,d}=0.$ From the remarks above, we must therefore have $c_{1,d-1}\\ne 0.$ In particular, we are assuming $p\\nmid c_{1,d-1}.$ It follows that $F(x,y) = (2ab)^{d-1}(c_{1,d-1}x+c_{0,d-1})y^{d-1}+(\\text{lower order terms in $y$})$ for some $c_{0,d-1}\\in \\mathbb {Z}$ .", "We now split the sum in (REF ) according to the power $p^m$ which divides $\\mathrm {cont}_y(F(x,y)).$ Write this sum as $\\sum _{m} \\sum _{\\begin{array}{c}x\\in \\mathbb {Z}/p^{l}\\mathbb {Z} \\\\ \\mathrm {disc}_t(F(x,t))\\equiv 0\\,\\,(\\text{mod}\\,\\,p) \\\\ p^m||\\mathrm {cont}_y(F(x,y))\\end{array}}\\sum _{\\begin{array}{c}y\\in \\mathbb {Z}/p^{l}\\mathbb {Z} \\\\ F(x,y)\\equiv 0\\,\\,(\\text{mod}\\,\\,p^{l}) \\end{array}}1.$ The term with $m=0$ contributes $\\ll _d p^{l-1+\\min \\lbrace l-1,l-l/d\\rbrace }$ by Corollary REF .", "To tackle the remaining sums we will majorise by replacing the condition $p^m||\\mathrm {cont}_y(F(x,y))$ with the simpler condition $p^m | (2ab)^{d-1}(c_{1,d-1}x+c_{0,d-1}).$ The terms with $m\\ge l+1$ contribute $\\le p^l\\sum _{\\begin{array}{c}x\\in \\mathbb {Z}/p^l\\mathbb {Z} \\\\ \\mathrm {disc}_t(F(x,t)) \\equiv 0\\,\\,(\\text{mod}\\,\\,p) \\\\ p^{l+1}|\\mathrm {cont}_y(F(x,y)) \\end{array}}1 \\le p^l\\sum _{\\begin{array}{c}x\\in \\mathbb {Z}/p^l\\mathbb {Z} \\\\ p^{l+1}|(2ab)^{d-1}(c_{1,d-1}x+c_{0,d-1}) \\end{array}}1 \\le p^l$ by (REF ).", "To finish we need to estimate the contribution to (REF ) from when $m\\in \\lbrace 1,\\ldots ,l\\rbrace .$ To do this, we split into two cases.", "Suppose that $l\\le d.$ Fix $1\\le m\\le l.$ By Corollary REF we may bound the sum over $y$ by $O(p^{l-1}).$ We therefore get a contribution $&\\ll p^{l-1} \\sum _{m=1}^{l} \\sum _{\\begin{array}{c}x\\in \\mathbb {Z}/p^l\\mathbb {Z} \\\\ p^m| (2ab)^{d-1}(c_{1,d-1}x+c_{0,d-1})\\end{array}}1$ Using (REF ) we may bound this by $&\\ll p^{l-1} \\sum _{m=1}^{l} p^{l-m} \\ll p^{2l-2}.$ Putting everything together, in this case we get $\\Phi (p^l;0,0) \\ll p^{2l-2}+p^{l-1+\\min \\lbrace l-1,l-l/d\\rbrace }+p^l \\ll p^{2l-2}.$ This agrees with the stated result.", "Now suppose that $l>d.$ In this case we split our sum at $m=l-d.$ Fix $1\\le m\\le l-d.$ We can bound the sum over $y$ by $O(p^{l-l/d})$ using Corollary REF .", "These terms therefore contribute $\\ll p^{l-l/d} \\sum _{m=1}^{l-d} \\sum _{\\begin{array}{c}x\\in \\mathbb {Z}/p^l\\mathbb {Z} \\\\ p^m|c_{1,d-1}x+c_{0,d-1} \\end{array}} 1.$ Using (REF ) we can bound this by $&\\ll p^{l-l/d} \\sum _{m=1}^{l-d}p^{l-m} \\ll p^{2l-l/d-1}.$ Fix $l-d<m\\le l.$ We can bound the sum over $y$ by $O(p^{l-1})$ using Corollary REF .", "These terms therefore contribute $&\\ll p^{l-1} \\sum _{m=l-d+1}^{l} \\sum _{\\begin{array}{c}x\\in \\mathbb {Z}/p^l\\mathbb {Z} \\\\ p^m| c_{1,d-1}x+c_{0,d-1} \\end{array}} 1.$ Again, using equation (REF ) we see the overall contribution in this case is $\\ll p^{l-1} 
\sum _{m=l-d+1}^{l} p^{l-m} \ll p^{l+d-2}.$ Now, since $l>d$ the first term dominates.", "Putting everything together, we conclude that $\Phi (p^l;0,0)\ll p^{2l-l/d-1}+p^{l-1+\min \lbrace l-1,l-l/d\rbrace }+p^l \ll p^{2l-l/d-1}.$ This agrees with the result stated.", "Finally, we have the following.", "Proposition 7.7 Fix $\epsilon >0.$ For any integers $M,N$ we have $\sum _{0<|h|\ll B} \frac{\Phi (h;M,N)}{|h|^{2-1/d+\epsilon }} \ll _{\epsilon } \Delta _f(M,N,k)^{\epsilon } B^{\epsilon }.$ Fix $0<\delta <1/d.$ By multiplicativity, and the fact $\Phi (-h;M,N)=\Phi (h;M,N),$ we can bound our sum by $\sum _{0<|h|\ll B} \frac{\Phi (h;M,N)}{|h|^{2-\delta }} \ll \prod _{p\ll B}\bigg (1+\frac{\Phi (p;M,N)}{p^{2-\delta }}+\sum _{l=2}^{\infty }\frac{\Phi (p^l;0,0)}{p^{l(2-\delta )}}\bigg ).$ We first deal with the contribution from higher powers.", "Using the bounds established in Lemma REF , this contribution is bounded by $&\ll \frac{1}{p^2} \sum _{l=2}^{d}p^{l\delta } + \frac{1}{p}\sum _{l=d+1}^{\infty }\frac{1}{p^{l/d-\delta l}} \ll _{\delta } \frac{1}{p^{2-d\delta }} + \frac{1}{p^{2+1/d-(d+1)\delta }} \ll _{\delta } \frac{1}{p},$ using the fact $\delta <1/d.$ Now, fix positive constants $A_i>0$ which may depend on $f,a,b$ and $\delta .$ We recall Lemma REF , which says that $|\Phi (p;M,N)| \ll p^{1/2}(p,\Delta _f(M,N,k))^{1/2}.$ We see the contribution from those primes dividing $\Delta _f(M,N,k)$ is bounded by $\prod _{p|\Delta _f(M,N,k)} \bigg (1+\frac{A_1}{p^{1-\delta }}+\frac{A_2}{p}\bigg ) \ll _{\delta } \tau (\Delta _f(M,N,k)) \ll _{\delta ,\epsilon } \Delta _f(M,N,k)^{\epsilon }.$ The contribution from the remaining terms is bounded by $\prod _{p\ll B}\bigg (1+\frac{A_3}{p^{3/2-\delta }}+\frac{A_4}{p}\bigg ) &\ll _{\delta } \prod _{p\ll B}\bigg (1+\frac{A_5}{p}\bigg ) \\&\ll _{\delta } \prod _{p\ll B}\bigg (1+\frac{1}{p}\bigg )^{A_5} \ll _{\delta ,\epsilon } B^{\epsilon }$ by Mertens' estimate.", "The proof of the proposition is completed upon taking $\delta =1/d-\epsilon .$" ], [ "Final estimates", "Let us recall our work so far.", "Putting together equations (REF ) and (REF ) we arrive at the bound $M_{f,g}(B;k) \ll _{\epsilon } \sum _{\begin{array}{c}0<|h| \ll _{a,b} B \\ K_{h},P_{h}\text{ smooth}\end{array}}\,\,\,\,\,\frac{1}{(\#\mathcal {P})^{2}}\sum _{p,q\in \mathcal {P}}\bigg |\sum _{i,j\in \lbrace 0,1,2\rbrace }c_{i,j}(\alpha )S_{i,j}(p,q)\bigg |+B^{3/2+\epsilon }$ where $S_{i,j}(p,q) &= \frac{1}{(pqh)^{2}} \sum _{-pq|h|/2<m,n\le pq|h|/2} \Gamma (B,m)\Gamma (B,n)\Psi _{i,j}(m,n).$ Using Lemma REF we were able to decompose the exponential sums $\Psi _{i,j}$ into the simpler exponential sums $\Sigma _t$ and $\Phi $ which we were able to estimate individually.", "In this section we aim to bring everything together.", "Firstly, we isolate the main term contribution to the above sum.", "To this end, let $L_{i,j}(p,q)$ denote the contribution from the term $(m,n)=(0,0)$ .", "Note that $\Gamma (B,0)=\left\lfloor B \right\rfloor ,$ and so, with the estimate $ \left\lfloor B \right\rfloor = B+O(1),$ we obtain $L_{i,j}(p,q) = \frac{B^2\Psi _{i,j}(0,0)}{(pqh)^{2}}+O\bigg (\frac{B\Psi _{i,j}(0,0)}{(pqh)^{2}}\bigg ).$ Let us concentrate our attention on this first term.", "If $p\ne q,$ then for $i,j\in \lbrace 0,1,2\rbrace ,$ we have $ \nonumber \Psi _{i,j}(0,0) &= \Sigma _i(p;0,0)\Sigma _j(q;0,0)\Phi (h;0,0) \\&= 
\\max \\lbrace 1,i\\rbrace \\max \\lbrace 1,j\\rbrace [p^2+O(p)][q^2+O(q)]\\Phi (h;0,0)$ using Lemma REF for the decomposition of $\\Psi _{i,j}$ and Lemma REF for the asymptotics for $\\Sigma _t.$ Recalling the definition of the constants $c_{i,j}(\\alpha )$ in Proposition REF , we obtain an expression $\\sum _{i,j\\in \\lbrace 0,1,2\\rbrace }c_{i,j}(\\alpha )L_{i,j}(p,q) &= \\frac{B^2\\Phi (h;0,0)}{h^2}(\\alpha -1)^2+O\\bigg (\\frac{B^2\\Phi (h;0,0)}{\\min \\lbrace p,q\\rbrace h^2}\\bigg ).$ Thus we may take $\\alpha =1$ to eliminate the main term.", "In this case, including the error present in equation (REF ) and using the bound $\\Psi _{i,j}(0,0) \\ll p^2q^2\\Phi (h;0,0)$ which follows from our work so far, we obtain $\\sum _{i,j\\in \\lbrace 0,1,2\\rbrace }c_{i,j}(1)L_{i,j}(p,q) \\ll \\frac{B^2\\Phi (h;0,0)}{\\min \\lbrace p,q\\rbrace h^2}+\\frac{B\\Phi (h;0,0)}{h^{2}}$ for the total contribution to the main term from terms with $p\\ne q.$ We see the first term dominates in our range of $p$ and $q$ .", "If $p=q$ then we bound trivially, to obtain $\\sum _{i,j\\in \\lbrace 0,1,2\\rbrace }c_{i,j}(1)L_{i,j}(p,p) &\\ll \\frac{B^2\\Phi (h;0,0)}{h^2}.$ Now note that, from Lemma REF and Lemma REF , it follows that we can bound $\\Phi (h;0,0) \\ll A^{\\omega (h)}h^2/\\mathrm {rad}(|h|),$ where $A>0$ is a constant and $\\mathrm {rad}(|h|)$ denotes the radical of the non-zero integer $h$ .", "Thus $ \\nonumber \\sum _{0<|h|\\ll B}\\frac{\\Phi (h;0,0)}{h^2} & \\ll \\sum _{0<|h|\\ll B}\\frac{A^{\\omega (h)}}{\\mathrm {rad}(|h|)} \\\\\\nonumber &\\ll _{\\epsilon } B^{\\epsilon }\\sum _{0<|h|\\ll B}\\frac{A^{\\omega (h)}}{h^{\\epsilon }\\mathrm {rad}(|h|)} \\\\\\nonumber &\\ll _{\\epsilon } B^{\\epsilon }\\prod _{p\\ll B}\\bigg (1+\\frac{A}{p}\\bigg [\\frac{1}{p^{\\epsilon }}+\\frac{1}{p^{2\\epsilon }}+\\ldots \\bigg ]\\bigg ) \\\\&\\ll _{\\epsilon } B^{\\epsilon }\\prod _{p\\ll B}\\bigg (1+\\frac{A}{p^{1+\\epsilon /2}}\\bigg ) \\ll _{\\epsilon } B^{\\epsilon }.$ Thus, the total contribution from the main term, using the asymptotic $\\#\\mathcal {P}\\sim Q/\\log {Q},$ is $ \\nonumber \\frac{B^2\\log ^2{Q}}{Q^2}\\sum _{0<|h|\\ll B}\\frac{\\Phi (h;0,0)}{h^2}&\\bigg [\\sum _{\\begin{array}{c}p,q\\in \\mathcal {P} \\\\ p\\ne q \\end{array}}\\frac{1}{\\min \\lbrace p,q\\rbrace }+\\sum _{p\\in \\mathcal {P}}1\\bigg ] \\\\ \\nonumber &\\ll \\frac{B^2\\log {Q}\\log \\log {Q}}{Q}\\sum _{0<|h|\\ll B}\\frac{\\Phi (h;0,0)}{h^2} \\\\ &\\ll _{\\epsilon } \\frac{B^{2+\\epsilon }}{Q}.$ We now focus on the contribution from the remaining terms.", "From now on we put $T_{i,j}= S_{i,j}-L_{i,j}.$ It follows that $T_{i,j}(p,q) &\\ll \\frac{1}{(pqh)^2} \\sum _{\\begin{array}{c}-pq|h|/2 < m,n \\le pq|h|/2 \\\\ (m,n)\\ne (0,0)\\end{array}}\\min \\bigg \\lbrace B, \\frac{pq |h|}{|m|}\\bigg \\rbrace \\min \\bigg \\lbrace B, \\frac{pq |h|}{|n|}\\bigg \\rbrace |\\Psi _{i,j}(m,n)|.$ We would like to find a pointwise estimate for the exponential sums $\\Psi _{i,j}.$ There are two regimes to consider.", "If $p\\ne q,$ then recalling Lemma REF and Lemma REF we obtain $|\\Psi _{i,j}(m,n)| \\ll pq(pq,m,n)|\\Phi (h;\\overline{pq}m,\\overline{pq}n)|.$ Here $\\overline{pq}$ is the multiplicative inverse of $pq$ modulo $|h|.$ If $p=q$ , then again recalling the results above we have $|\\Psi _{i,j}(m,n)| \\ll 1_{p|(m,n)} p^3(p,m/p,n/p) \\Phi (h; \\overline{p}m/p, \\overline{p}n/p).$ Here $\\overline{p}$ is the multiplicative inverse of $p$ modulo $|h|.$ Now if $p=q$ then $1_{p|(m,n)} p^3(p,m/p,n/p) \\le pq(pq,m,n).$ We conclude that the 
first bound holds in all cases.", "Thus we may write $T_{i,j} &\ll \frac{1}{pqh^2} \sum _{\begin{array}{c}-pq|h|/2 < m,n \le pq|h|/2 \\ (m,n)\ne (0,0)\end{array}}\min \bigg \lbrace B, \frac{pq |h|}{|m|}\bigg \rbrace \min \bigg \lbrace B, \frac{pq |h|}{|n|}\bigg \rbrace (pq,m,n)|\Phi (h;\overline{pq}m,\overline{pq}n)|.$ There are now three regimes to consider.", "The contribution from when $m=0$ is $\frac{B}{|h|} \sum _{\begin{array}{c}-pq|h|/2 < n\le pq|h|/2 \\ n\ne 0\end{array}}\frac{(pq,n)|\Phi (h;0,\overline{pq}n)|}{|n|}.$ The contribution from when $n=0$ is $\frac{B}{|h|} \sum _{\begin{array}{c}-pq|h|/2 < m \le pq|h|/2 \\ m\ne 0\end{array}}\frac{(pq,m)|\Phi (h;\overline{pq}m,0)|}{|m|}.$ The contribution from when $mn\ne 0$ is $pq\sum _{\begin{array}{c}-pq|h|/2 < m,n \le pq|h|/2 \\ mn \ne 0\end{array}}\frac{(pq,m,n)|\Phi (h;\overline{pq}m,\overline{pq}n)|}{|m||n|}.$ To evaluate these sums we will now bring the sum over $h$ to the inside.", "Recall Proposition REF , which states that for any $\epsilon >0$ we have $\sum _{0<|h|\ll B} \frac{\Phi (h;M,N)}{|h|^{2-1/d+\epsilon }} \ll _{\epsilon } \Delta _f(M,N,k)^{\epsilon }B^{\epsilon },$ and the corresponding results which follow by partial summation.", "From Lemma REF we have the size bound $|\Delta _f(M,N,k)| \ll _{f,d} |k| \max \lbrace |M|,|N|\rbrace ^{d^2}.$ We are also assuming $0<|k| \ll _f B^{d+1}$ (see (REF )).", "In our range of variables we thus have $\Delta _f(\overline{pq}m,\overline{pq}n,k)^{\epsilon } \ll _{\epsilon } B^{\epsilon }.$ It follows that the total contribution from the terms $T_{i,j}$ is bounded by $\frac{B^{2-1/d+\epsilon }\log ^{2}{Q}}{Q^{2}}\sum _{\begin{array}{c}p,q\in \mathcal {P}\end{array}} \bigg [\sum _{\begin{array}{c}-pq|h|/2 < n \le pq|h|/2 \\ n\ne 0\end{array}}\frac{(pq,n)}{|n|}+pq\sum _{\begin{array}{c}-pq|h|/2 < m,n \le pq|h|/2 \\ mn \ne 0\end{array}}\frac{(pq,m,n)}{|m||n|}\bigg ].$ This in turn is bounded by $\frac{B^{2-1/d+\epsilon }\log ^{2}{Q}}{Q^{2}}\sum _{\begin{array}{c}p,q\in \mathcal {P}\end{array}}pq &\ll _{\epsilon } Q^2B^{2-1/d+\epsilon }.$ Putting together equation (REF ) and equation (REF ), we get a final bound $M_{f,g}(B;k) \ll _{\epsilon } B^{\epsilon }\bigg (Q^2B^{2-1/d}+ \frac{B^{2}}{Q} \bigg ) \ll B^{2-1/(3d)+\epsilon },$ where we take $Q=B^{1/(3d)}$ to balance the error terms.", "The result stated in Theorem REF follows upon noting that the exponent here is strictly less than $(2-1/(50d)).$" ] ]
2207.03595
[ [ "Three-loop helicity amplitudes for quark-gluon scattering in QCD" ], [ "Abstract We compute the three-loop helicity amplitudes for $q\\bar{q} \\to gg$ and its crossed partonic channels, in massless QCD.", "Our analytical results provide a non-trivial check of the color quadrupole contribution to the infrared poles for external states in different color representations.", "At high energies, the $qg \\to qg$ amplitude shows the predicted factorized form from Regge theory and confirms previous results for the gluon Regge trajectory extracted from $qq' \\to qq'$ and $gg \\to gg$ scattering." ], [ "Introduction", "The computation of multiloop scattering amplitudes in Quantum Chromodynamics (QCD) plays a fundamental role for the Standard Model (SM) precision program carried out at particle colliders such as the Large Hadron Collider (LHC) at CERN.", "Suitably combined with real-radiation contributions, they provide a powerful tool to generate predictions for a variety of collider observables, allowing for precise comparisons with experimental data [1].", "In fact, matching the shrinking experimental errors with correspondingly precise theory predictions allows one to discover even subtle signals from possible physics scenarios beyond the SM.", "In addition to their phenomenological significance, analytic computations of scattering amplitudes enable investigations of general properties of perturbative Quantum Field Theories (QFT), including comparative studies of QCD amplitudes with their supersymmetric counterparts.", "The more loops, external legs, or particle masses one is considering for a scattering amplitude, the more challenging its computation becomes.", "In recent years, significant progress has been achieved for the reduction of loop integrals to master integrals and their analytical evaluation, resulting in the calculation of previously inaccessible multiloop amplitudes.", "At two loops, various QCD amplitudes became available for 2 $\\rightarrow $ 3 scattering processes involving mostly massless particles [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], paving the way for the first Next-to-Next-to-Leading-Order (NNLO) studies at LHC [28], [29], [30].", "At three loops, first QCD amplitudes were computed for 2 $\\rightarrow $ 2 scattering processes [31], [32], [33], [34].", "At four loops, $2 \\rightarrow 1$ form factors were obtained in full-color QCD [35], [36], [37].", "Analytical results for multiloop scattering amplitudes can also provide non-trivial information about all-order results in QCD.", "An interesting case is the so-called Regge limit [38] of large collision energy, where universal factorization properties can be observed in QCD amplitudes.", "The BFKL formalism [39], [40] allows one to describe all-order structures in QCD through the exchange of so-called “Reggeized gluons”, which resum leading contributions of the quark and gluon interactions at high energies.", "With the recent determination of the three-loop Regge trajectory [33], [41], the last missing ingredient for next-to-next-to-leading-logarithmic analysis became available.", "This paper concludes our analytical calculation of all four-parton scattering amplitudes in three-loop QCD.", "Previously, we presented the helicity amplitudes for the process $q\\bar{q}\\rightarrow q^{\\prime }\\bar{q}^{\\prime }$ and crossed channels [32] and for the process $gg\\rightarrow gg$ [33].", "In this work, we provide the 
helicity amplitudes for $q\\bar{q}\\rightarrow gg$ scattering and crossed channels in full-color, massless QCD.", "Our calculation checks the predicted quadrupole contribution to the infrared poles for a process with external legs in different color representation [42], [43].", "By analyzing the high-energy limit of the $qg\\rightarrow qg$ amplitude, we check the universality of the predicted factorization and the three-loop expression for the Regge trajectory [33], [41].", "The rest of this paper is organized as follows.", "In section , we set up our notation and describe the color and Lorentz decomposition of the scattering amplitude.", "In section  we discuss our computation of the bare helicity amplitudes employing the tensor decomposition provided in the previous section and analytical solutions for the master integrals [44], [34].", "In section , we describe the UV renormalization and give details for the subtraction of IR poles up to three loops.", "In section  we present our final results and enumerate the checks we have performed to verify their correctness.", "Finally, in section  we discuss the high energy (Regge) limit of the $qg\\rightarrow qg$ amplitudes.", "We draw our conclusions in section .", "We reserve the appendices for lengthy formulas with explicit results for all the relevant anomalous dimensions (appendix ) and for the impact factors and the gluon Regge trajectory (appendix )." ], [ "Color and Lorentz decomposition", "We consider the quark-gluon scattering process ${ q}(p_1) \\;+ \\;\\bar{ q}(p_2) \\; +\\; { g}(p_3) \\;+ \\; { g}(p_4) \\; \\longrightarrow \\; 0,$ in massless QCD, where the momenta satisfy $p_1^2=p_2^2=p_3^2=p_4^2=0, \\qquad p_1^\\mu + p_2^\\mu + p_3^\\mu + p_4^\\mu = 0.$ The kinematics of the process eq.", "(REF ) can be parametrized in terms of the usual Mandelstam invariants $s= (p_1 + p_2)^2, \\qquad t= (p_1 + p_3)^2, \\qquad u= (p_2 + p_3)^2,$ with $u = -t-s$ .", "We find it convenient to introduce the dimensionless variable $x=-t/s$ to parametrize our results.", "The primary physical scattering process considered in this paper is ${q}(p_1) \\;+ \\;\\bar{ q}(p_2) \\; \\longrightarrow \\; { g}(p_3) \\;+ \\; { g}(p_4)\\,,$ which can be obtained from the process (REF ) by a crossing of external legs with $p_{3,4}\\rightarrow -p_{3,4}$ .", "For this process, the physical region of the phase space is given by $s >0,\\quad t,u < 0\\qquad \\Rightarrow \\quad 0 < x < 1\\,.$ Results for other physical scattering processes will subsequently be derived from the result for process (REF ) by considering further crossings.", "The bare amplitude for process (REF ) can be decomposed in three different color structures $\\mathcal {C}_i$ , $ \\mathcal {A}_{i_1,i_2,a_3,a_4} = 4 \\pi \\alpha _{s,b} \\sum _{i=1}^3 \\mathcal {A}^{[i]} {\\mathcal {C}}_{i}\\,.$ Here, $i_1$ and $i_2$ are the fundamental color indices of the external quarks with momenta $p_1$ and $p_2$ , and $a_3$ and $a_4$ are the adjoint color indices of the external gluons with momenta $p_3$ and $p_4$ , respectively.", "Further, $\\alpha _{s,b}$ is the bare strong coupling.", "In eq.", "(REF ) we also introduced the notation $[i]$ to indicate a color component index of the amplitude.", "The three color structures are ${\\mathcal {C}}_{1} = ({{T}^{a_3}}{{T}^{a_4}})_{i_2i_1}, \\quad \\quad {\\mathcal {C}}_{2} = ({{T}^{a_4}}{{T}^{a_3}})_{i_2i_1}, \\quad \\quad {\\mathcal {C}}_{3} = \\delta ^{a_3a_4}\\,\\delta _{i_2 i_1}\\,, $ where we work in QCD with color group $SU(N_c)$ and $n_f$ massless quark 
flavors.", "The matrices $(T^a)_{i_2i_1}$ are the generators of $SU(N_c)$ in the fundamental representation.", "We use $\\operatorname{Tr}[T^aT^b] = \\frac{1}{2} \\delta _{ab}$ and denote the quadratic Casimir operators in the fundamental and adjoint representation by $C_F$ and $C_A$ , respectively.", "The amplitude coefficients $\\mathcal {A}^{[i]}$ can be decomposed further into Lorentz-covariant structures $\\mathcal {T}_i$ , $\\mathcal {A}^{[i]} = \\sum _{j=1}^{4} \\mathcal {F}^{[i]}_j \\; \\mathcal {T}_i \\;,$ where the $ \\mathcal {F}^{[i]}_j$ are scalar form factors.", "To regulate ultraviolet and infrared divergences, we employ dimensional regularization and use $d = 4 - 2 \\epsilon $ for the number of space-time dimensions.", "We denote the external gluon polarization vectors as $\\epsilon (p_i) = \\epsilon _i $ with the transversality condition for the external gluon momenta $\\epsilon (p_i)\\cdot p_i = 0 $ ($i =3,4$ ).", "To simplify the Lorentz decomposition, we also fix the gauge of the external gluons such that $\\epsilon _3\\cdot p_2 = \\epsilon _4\\cdot p_1 = 0 $ , which leads to the following gluon polarization sums $&\\sum _{pol}{\\epsilon }^{\\mu }_3 {\\epsilon }^{\\nu }_3 = -g^{\\mu \\nu } + \\frac{{p}^{\\mu }_3{p}^{\\nu }_2 + {p}^{\\nu }_3{p}^{\\mu }_2}{p_2\\cdot p_3},\\,\\nonumber \\\\&\\sum _{pol}{\\epsilon }^{\\mu }_4 {\\epsilon }^{\\nu }_4 = -g^{\\mu \\nu } + \\frac{{p}^{\\mu }_4{p}^{\\nu }_1 + {p}^{\\nu }_4{p}^{\\mu }_1}{p_1\\cdot p_4}\\,.$ Since we are ultimately interested in computing the helicity amplitudes for this process in the ’t Hooft–Veltman scheme (tHV) scheme, we use the Lorentz structures [45], [46], [31] $\\mathcal {T}_1 &= \\bar{u}(p_2){{\\epsilon }}_{3}u(p_1) \\,\\epsilon _4\\cdot p_2\\,, \\quad \\quad &&\\mathcal {T}_2 = \\bar{u}(p_2){{\\epsilon }}_{4}u(p_1) \\,\\epsilon _3\\cdot p_1\\,, \\nonumber \\\\\\mathcal {T}_3 &= \\bar{u}(p_2){{p}}_{3}u(p_1) \\,\\epsilon _3\\cdot p_1\\,\\epsilon _4\\cdot p_2\\,, \\quad \\quad &&\\mathcal {T}_4 = \\bar{u}(p_2){{p}}_{3}u(p_1) \\,\\epsilon _3\\cdot \\epsilon _4\\,,$ and introduce projection operators $\\mathcal {P}_i$ which extract the form factors from the amplitude, $\\mathcal {P}_{j}\\cdot \\mathcal {A}^{[i]} = \\sum _{\\text{pol}}\\mathcal {P}_{j}\\mathcal {A}^{[i]} = \\mathcal {F}^{[i]}_j,\\quad j = 1,\\ldots ,4\\,.", "$ In eq.", "(REF ), we introduced the short-hand notation $\\mathcal {P}_{i}\\cdot \\mathcal {A}$ which implies a sum over the polarizations of the external particles.", "By introducing the matrix $M_{ij} =\\mathcal {T}^\\dagger _i \\cdot \\mathcal {T}_j,$ the projectors can be compactly defined as $\\mathcal {P}_{i} = \\sum _{j=1}^{4}(M^{-1})_{ij}\\mathcal {T}^{+}_{j} \\quad \\Rightarrow \\quad \\mathcal {P}_i\\cdot \\mathcal {T}_j = \\delta _{ij} \\,,$ where $M^{-1} &= \\frac{1}{2(d-3)s^2 t^3 u}\\left(\\begin{array}{cccc}t^2u^2 & 0 & -t u^2 & 0 \\\\0 & t^2 u^2 & t u^2 & 0 \\\\-t u^2 & t u^2 & \\,\\,(d u^2-4st)\\,\\, & (s-t)s t \\\\0 & 0 & (s-t)s t & s^2 t^2 \\\\\\end{array}\\right)\\,.$ We stress that in conventional dimensional regularization there is a fifth Lorentz structure which would need to be taken into account in eq.", "(REF ).", "In the tHV scheme we take internal momenta in $d = 4 - 2 \\epsilon $ dimensions and keep external momenta and polarizations in four dimensions.", "As explained in refs.", "[45], [46], this allows us to essentially ignore this fifth evanescent structure completely and work with just the four structures (REF ), which are linearly independent in four 
space-time dimensions.", "We also point out that the decompositions of eqs.", "(REF ) and (REF ), as well as the explicit form of the projectors (REF ), hold to any orders in perturbation theory." ], [ "Helicity amplitudes", "From the form factors $\\mathcal {F}_j$ one can construct amplitudes for definite helicities of the external particles.", "We denote the helicity of the incoming quark as $\\lambda _q$ ; the helicity of the incoming anti-quark $\\lambda _{\\bar{q}}$ is then automatically fixed due to helicity conservation along the massless quark line.", "We refer to the quark line helicity with the symbol $\\lambda _{q\\bar{q}} = \\lbrace \\lambda _{q}\\lambda _{\\bar{q}} \\rbrace $ which can take two possible values: $\\lambda _{q\\bar{q}} = L,R = \\lbrace -+\\rbrace ,\\lbrace +-\\rbrace $ .", "Further, we denote the helicities of the outgoing gluons as $\\lambda _{3}$ and $\\lambda _4$ .", "After exploiting parity, charge-conjugation and Bose symmetry relations [31], one is left with only two independent helicity configurations.", "However, we choose to compute the overcomplete set of four helicity configurations $\\lbrace \\lambda _{q\\bar{q}}\\lambda _{3}\\lambda _{4}\\rbrace = \\lbrace L--\\rbrace , \\lbrace L-+\\rbrace , \\lbrace L+-\\rbrace , \\lbrace L++\\rbrace $ which allow us to perform a consistency check on our calculation.", "Results for right-handed quarks can subsequently be obtained by a parity transformation.", "We write for the left-handed spinors $\\overline{u_{L}}(p_2) = \\langle 2|$ , $u_L(p_1) = |1] $ , and for the polarization vector of the gluons $\\epsilon ^{\\mu }_{3,-}(p_3) &= \\frac{\\langle 2|\\gamma ^{\\mu }|3]}{\\sqrt{2}\\langle 23 \\rangle }\\,, &\\epsilon ^{\\mu }_{3,+}(p_3) &= \\frac{\\langle 3|\\gamma ^{\\mu }|2]}{\\sqrt{2}[32]}\\,,\\\\\\epsilon ^{\\mu }_{4,-}(p_4) &= \\frac{\\langle 1|\\gamma ^{\\mu }|4]}{\\sqrt{2}\\langle 14 \\rangle }\\,, &\\epsilon ^{\\mu }_{4,+}(p_4) &= \\frac{\\langle 4|\\gamma ^{\\mu }|1]}{\\sqrt{2}[41]}\\,.$ Inserting these equations into the Lorentz structures $\\mathcal {T}_j$ (REF ) gives the helicity amplitudes $\\mathcal {A}_{L--} &= s_{L--} \\sum _{i=1}^3 \\mathcal {H}^{[i]}_1 \\;{\\mathcal {C}}_{i}\\,, \\quad &\\mathcal {A}_{L-+} &= s_{L-+}\\sum _{i=1}^3 \\mathcal {H}^{[i]}_2\\;{\\mathcal {C}}_{i}\\,, \\nonumber \\\\\\mathcal {A}_{L+-} &= s_{L+-} \\sum _{i=1}^3 \\mathcal {H}^{[i]}_3\\;{\\mathcal {C}}_{i}\\,, \\quad &\\mathcal {A}_{L++} &= s_{L++} \\sum _{i=1}^3 \\mathcal {H}^{[i]}_4\\;{\\mathcal {C}}_{i}\\,,$ where the little group scaling is captured by the overall spinor factors $s_{L--} = \\frac{2[34]^2}{\\langle 1 3 \\rangle [23]}\\,, \\quad s_{L-+} = \\frac{2 \\langle 2 4 \\rangle [13]}{\\langle 2 3 \\rangle [24]}\\, , \\quad s_{L+-} = \\frac{2\\langle 2 3 \\rangle [41]}{\\langle 2 4 \\rangle [32]}\\, , \\quad s_{L++} = \\frac{2 {\\langle 3 4 \\rangle }^2}{\\langle 3 1 \\rangle [23]}\\,,$ and we have defined the scalar helicity amplitudes $\\mathcal {H}^{[i]}_1 &= \\frac{t}{2}\\left(\\mathcal {F}^{[i]}_2 - \\frac{t}{2}\\mathcal {F}^{[i]}_3 + \\mathcal {F}^{[i]}_4 \\right), &\\mathcal {H}^{[i]}_2 &= \\frac{t}{2}\\left(\\frac{s}{2}\\mathcal {F}^{[i]}_3 + \\mathcal {F}^{[i]}_4 \\right),& \\nonumber \\\\\\mathcal {H}^{[i]}_3 &= \\frac{st}{2u}\\left(\\mathcal {F}^{[i]}_2 - \\mathcal {F}^{[i]}_1 - \\frac{t}{2}\\mathcal {F}^{[i]}_3 - \\frac{t}{s}\\mathcal {F}^{[i]}_4 \\right), &\\mathcal {H}^{[i]}_4 &= \\frac{t}{2}\\left(\\mathcal {F}^{[i]}_1 + \\frac{t}{2}\\mathcal {F}^{[i]}_3 - \\mathcal {F}^{[i]}_4 \\right).&$ The 
amplitudes for right-handed quarks are related to those for left-handed quarks by $\\mathcal {A}_{R,\\lambda _3,\\lambda _4} = ( \\mathcal {A}_{L,-\\lambda _3,-\\lambda _4} )|_{\\langle ij \\rangle \\leftrightarrow [ji]}\\,.$ By exchanging the two outgoing gluons, we find that Bose symmetry implies the relations $&\\mathcal {H}_2^{[1]}(x)&=&+\\mathcal {H}_3^{[2]}(1-x), \\qquad &&\\mathcal {H}_2^{[2]}(x)&=&+\\mathcal {H}_3^{[1]}(1-x), \\qquad &&\\mathcal {H}_2^{[3]}(x)&=&+\\mathcal {H}_3^{[3]}(1-x), \\nonumber \\\\&\\mathcal {H}_{1,4}^{[1]}(x)&=& -\\mathcal {H}_{1,4}^{[2]}(1-x), \\qquad &&\\mathcal {H}_{1,4}^{[2]}(x)&=& -\\mathcal {H}_{1,4}^{[1]}(1-x), \\qquad &&\\mathcal {H}_{1,4}^{[3]}(x)&=& -\\mathcal {H}_{1,4}^{[3]}(1-x).$ We also note that $\\mathcal {H}_1^{[i]}(x) = - \\mathcal {H}_4^{[i]}(x) \\, .$ These identities will serve as an important check of our calculations.", "We expand the helicity amplitudes in $\\bar{\\alpha }_{s,b}\\equiv {\\alpha _{s,b}}/({4\\pi })$ , $\\mathcal {H}^{[i]}_\\lambda &= \\sum _{\\ell =0}^3 \\mathcal {H}^{[i],(\\ell )}_\\lambda \\left( \\bar{\\alpha }_{s,b}S_\\epsilon \\right)^\\ell + \\operatorname{\\mathcal {O}}\\left( \\bar{\\alpha }_{s,b}^4 \\right)$ for $\\lambda =1,\\ldots ,4$ , where $S_\\epsilon = (4 \\pi )^\\epsilon e^{- \\epsilon \\gamma _E}$ .", "The normalization factor $S_\\epsilon $ absorbs constants in the bare amplitude and matches the usual $\\overline{\\text{MS}}$ conventions in the renormalization of the strong coupling performed below.", "In the expansion of the amplitude, $\\mathcal {H}^{[i],(3)}_\\lambda $ is the three loop contribution, which we compute here for the first time.", "We have also recomputed the tree-level, one-loop and two-loop contributions using the form factor decomposition defined in eq.", "(REF ).", "We employ Qgraf [47] to produce Feynman diagrams and find 3 diagrams at tree level, 30 diagrams at one loop, 595 diagrams at two loops and 14971 at three loops.", "We give a few representative samples of the three-loop diagrams contributing to the process in figure REF .", "Figure: NO_CAPTIONFigure: Sample three loop diagramscontributing to the process qq ¯→ggq\\bar{q} \\rightarrow gg .We use Form [48] to apply the Lorentz projectors of eq.", "(REF ) to the diagrams and to perform the Dirac and color algebra.", "In this way, we obtain the form factors as linear combinations of a large number $(\\sim 10^7)$ of scalar Feynman integrals with rational coefficients.", "We parametrize the corresponding $\\ell $ -loop Feynman integrals according to $\\mathcal {I}^\\text{top}_{n_1,n_2,...,n_N} = \\mu _0^{2\\ell \\epsilon } e^{\\ell \\epsilon \\gamma _E} \\int \\prod _{j=1}^\\ell \\left( \\frac{\\mathrm {d}^d k_j}{i \\pi ^{\\frac{d}{2}}} \\right) \\frac{1}{D_1^{n_1}D_2^{n_2} \\dots D_N^{n_N}} \\; ,$ where $\\gamma _E \\approx 0.5772$ is Euler's constant, $\\mu _0$ is the scale of dimensional regularization, and the denominators $D_j$ are inverse propagators for the respective integral family “top”.", "More details on the integral families can be found in ref. 
[32].", "Using Reduze 2 [49], [50] and Finred, an in-house implementation of the Laporta algorithm [51] based on finite field arithmetic [52], [53], [54], [55] and syzygy algorithms [56], [57], [58], [59], [60], [61], we reduced these integrals to a linear combination of 486 master integrals.", "Upon insertion of the recently computed solutions for the master integrals [44], [34] we arrive at an analytical result for the helicity amplitudes in terms of harmonic polylogarithms." ], [ "UV and IR subtractions", "The bare helicity amplitudes (REF ) contain UV and IR divergences, which appear as poles in the Laurent expansion in $\\epsilon $ .", "The $\\overline{\\text{MS}}$ renormalized strong $\\alpha _s(\\mu )$ is defined through $\\bar{\\alpha }_{s,b}\\: \\mu _0^{2\\epsilon } \\: S_\\epsilon &= \\bar{\\alpha }_{s}\\: \\mu ^{2\\epsilon } Z[\\bar{\\alpha }_{s}] \\; ,$ where $\\bar{\\alpha }_{s}=\\alpha _s(\\mu )/(4\\pi )$ , $\\mu $ is the renormalization scale and $Z[\\bar{\\alpha }_{s}] = 1 - \\bar{\\alpha }_{s}\\frac{ \\beta _0 }{\\epsilon } + \\bar{\\alpha }_{s}^2 \\left( \\frac{\\beta _0^2}{\\epsilon ^2} - \\frac{\\beta _1 }{2 \\epsilon } \\right) -\\bar{\\alpha }_{s}^3 \\left( \\frac{\\beta _0^3}{\\epsilon ^3} - \\frac{ 7}{6} \\frac{\\beta _0 \\beta _1}{\\epsilon ^2}+ \\frac{\\beta _2}{3 \\epsilon } \\right) + \\mathcal {O}(\\bar{\\alpha }_{s}^4).$ The $\\beta $ -function coefficients are defined in the standard way through $\\frac{d \\bar{\\alpha }_{s}}{d \\log \\mu } = \\beta (\\bar{\\alpha }_{s}) - 2\\epsilon \\bar{\\alpha }_{s}\\; , \\quad \\beta (\\bar{\\alpha }_{s}) = -2 \\bar{\\alpha }_{s}\\sum \\limits _{\\ell \\ge 0} \\beta _\\ell \\bar{\\alpha }_{s}^{\\ell +1}.$ We also recall the values of the standard quadratic Casimir constants for a $SU(N_c)$ gauge group: $C_A = N_c, \\qquad C_F = \\frac{N_c^2-1}{2N_c} \\, .$ With this, up to third order of the perturbative expansion, we have $\\beta _0 &= \\frac{11}{3} C_A - \\frac{2}{3}\\: n_f \\; , \\nonumber \\\\\\beta _1 &= \\left( \\frac{34}{3} C_A^2-\\frac{10}{3} C_A \\:n_f \\right)-2 \\:C_F\\: n_f \\; ,\\nonumber \\\\\\beta _2 &= -\\frac{1415}{54}{C_A^2 \\:n_f}+\\frac{2857}{54}{ C_A^3}-\\frac{205}{18}C_A\\: C_F\\:n_f +\\frac{79}{54}C_A \\:n_f^2+C_F^2 \\:n_f+\\frac{11}{9}C_F \\:n_f^2 \\;.$ In the following, we use boldface symbols to denote vectors in colour space, that is, we define $\\mathbfcal {H}= \\big (\\mathcal {H}^{[1]},\\mathcal {H}^{[2]},\\mathcal {H}^{[3]}\\big )^T$ for the decomposition of the amplitude with respect to the basis $\\mathcal {C}_i$ .", "Using the expansion of (REF ), we collect the $\\bar{\\alpha }_{s}$ coefficients of the UV finite, but IR divergent, amplitudes as $\\mathbfcal {H}_{\\lambda ,\\text{ren}}^{(0)} &= \\mathbfcal {H}_\\lambda ^{(0)} ,& \\nonumber \\\\\\mathbfcal {H}_{\\lambda ,\\: \\text{ren}}^{(1)} &= \\mathbfcal {H}_\\lambda ^{(1)}- \\frac{\\beta _0 }{\\epsilon } \\mathbfcal {H}_\\lambda ^{(0)}, & \\nonumber \\\\\\mathbfcal {H}_{\\lambda ,\\: \\text{ren}}^{(2)} &= \\mathbfcal {H}_\\lambda ^{(2)} - \\frac{2 \\beta _0 }{\\epsilon } \\mathbfcal {H}_\\lambda ^{(1)} + \\frac{ \\left(2 \\beta _0^2- \\beta _1\\epsilon \\right)}{2 \\epsilon ^2} \\mathbfcal {H}_\\lambda ^{(0)}, & \\nonumber \\\\\\mathbfcal {H}_{\\lambda ,\\: \\text{ren}}^{(3)} &= \\mathbfcal {H}_\\lambda ^{(3)} -\\frac{3 \\beta _0}{\\epsilon } \\mathbfcal {H}_\\lambda ^{(2)} +\\frac{ \\left(3 \\beta _0^2-\\beta _1 \\epsilon \\right)}{\\epsilon ^2} \\mathbfcal {H}_\\lambda ^{(1)} +\\frac{ \\left(7 \\beta _1 \\beta _0\\epsilon -6 \\beta 
_0^3-2 \\beta _2 \\epsilon ^2 \\right)}{6 \\epsilon ^3} \\mathbfcal {H}_\\lambda ^{(0)},$ so that the renormalized helicity amplitudes can be written as $\\mathbfcal {H}_{\\lambda ,\\text{ren}}=\\sum _{\\ell \\ge 0}\\bar{\\alpha }_{s}^\\ell \\mathbfcal {H}^{(\\ell )}_{\\lambda ,\\text{ren}}\\,.$ The IR singularity structure of QCD amplitudes has been studied at two loops in ref.", "[62] and was extended up to three loops in refs.", "[63], [64], [65], [66], [67], [68], [69], [70], [42].", "The IR divergences can be subtracted from our renormalized amplitudes multiplicatively: $\\mathbfcal {H}_{\\lambda ,\\:\\text{ren}}= \\mathbfcal {Z} \\; \\mathbfcal {H}_{\\lambda ,\\:\\text{fin}}.$ Here $\\mathbfcal {Z}$ is a color matrix acting on the space spanned by the $\\mathcal {C}_i$ basis vectors (REF ) and $\\mathbfcal {H}_{\\lambda ,\\:\\text{fin}}$ are finite remainders, also called hard scattering functions.", "The matrix $\\mathbfcal {Z}$ can be written as $\\mathbfcal {Z} = \\mathbb {P} \\exp \\left[\\int _\\mu ^\\infty \\frac{\\mathrm {d} \\mu ^{\\prime }}{\\mu ^{\\prime }}\\mathbf {\\Gamma }(\\lbrace p\\rbrace ,\\mu ^{\\prime })\\right],$ where $\\mathbb {P}$ denotes the path-ordering of color operators [67] in increasing values of $\\mu ^{\\prime }$ from left to right.", "It can be omitted up to three loops, since to this order $[\\mathbf {\\Gamma }(\\mu ),\\mathbf {\\Gamma }(\\mu ^{\\prime })] = 0$ .", "The color-space correlation structure at three-loops allows one to decompose the soft anomalous dimension operator $\\mathbf {\\Gamma }$ into so-called dipole ($\\mathbf {\\Gamma }_\\text{dipole}$ ) and quadrupole ($\\mathbf {\\Delta }_4$ ) contributions according to $\\mathbf {\\Gamma } = \\mathbf {\\Gamma }_\\text{dipole} + \\mathbf {\\Delta }_4 \\, .$ The dipole term $\\mathbf {\\Gamma }_\\text{dipole}$ can be written as $\\mathbf {\\Gamma }_{\\text{dipole}}(\\lbrace p\\rbrace ,\\mu ) = & \\sum _{1\\le i < j \\le 4} \\mathbf {T}^a_i \\; \\mathbf {T}^a_j \\; \\gamma ^\\text{K}(\\bar{\\alpha }_{s}) \\; \\log \\left(\\frac{\\mu ^2}{-s_{ij}-i\\delta }\\right) + \\sum _{i=1}^4 \\; \\gamma ^i(\\bar{\\alpha }_{s})\\; , &$ where $\\gamma ^\\text{K}(\\bar{\\alpha }_{s})$ is the cusp anomalous dimension [71], [72], [73], [74], [75], [76] and $\\gamma ^i$ the quark (gluon) collinear anomalous dimension [77], [78], [79], [80] of the $i$ -th external particle, which are given in our notation in appendix .", "Further, $\\mathbf {T}^a_i$ represents the color generator of the $i$ -th parton in the scattering amplitude, $(\\mathbf {T}^a_i)_{\\alpha \\beta } &= t^a_{\\alpha \\beta } \\; &&\\text{ for a final(initial)-state quark (anti-quark)}, \\nonumber \\\\(\\mathbf {T}^a_i)_{\\alpha \\beta } &= -t^a_{\\beta \\alpha } \\; &&\\text{ for a final(initial)-state anti-quark (quark)}, \\nonumber \\\\(\\mathbf {T}^a_i)_{bc} &= -if^{abc} \\;&&\\text{ for a gluon.", "}$ The quadrupole term $\\mathbf {\\Delta }_4$ contributes for the first time at three loops.", "It can be written in the kinematical region (REF ) as [42], [43], [32], [33] $ \\mathbf {\\Delta }^{(3)}_4 &=128f_{abe}f_{cde}\\left[ \\mathbf {T}^a_1\\mathbf {T}^c_2\\mathbf {T}^b_3\\mathbf {T}^d_4\\,D_1(x) - \\mathbf {T}^a_4\\mathbf {T}^b_1\\mathbf {T}^c_2\\mathbf {T}^d_3\\,D_2(x) \\right] \\\\&- 16f_{abe}f_{cde}C\\sum _{i=1}^4 \\sum _{\\begin{array}{c}1\\le j < k \\le 4 \\\\ j,k\\ne i\\end{array}} \\left\\lbrace \\mathbf {T}^a_i,\\mathbf {T}^d_i \\right\\rbrace \\mathbf {T}^b_j\\mathbf {T}^c_k,$ where $C$ = $ \\zeta _5 + 2 \\zeta _2 \\zeta _3$ and 
$D_1(x)$ , $D_2(x)$ are linear combinations of harmonic polylogarithms as [42], [44], [32], [33].", "They read $D_1 &= -2 \\textit {G}_{1,4}-\\textit {G}_{2,3}-\\textit {G}_{3,2}+2 \\textit {G}_{1,1,3}+2 \\textit {G}_{1,2,2}-2 \\textit {G}_{1,3,0}-\\textit {G}_{2,2,0}-\\textit {G}_{3,1,0} +2 \\textit {G}_{1,1,2,0}\\nonumber \\\\&\\quad -2 \\textit {G}_{1,2,0,0}+ 2 \\textit {G}_{1,2,1,0}+4 \\textit {G}_{1,0,0,0,0}-2 \\textit {G}_{1,1,0,0,0}+\\frac{\\zeta _5}{2} - 5 \\zeta _2 \\zeta _3 + \\zeta _2[5 \\textit {G}_{3}+5 \\textit {G}_{2,0}+2 \\textit {G}_{1,0,0}\\nonumber \\\\&\\quad -6 (\\textit {G}_{1,2}+\\textit {G}_{1,1,0})]+ \\zeta _3 (\\textit {G}_{2}+2 \\textit {G}_{1,0}-2 \\textit {G}_{1,1})- i \\pi [-\\zeta _3 \\textit {G}_{0}+\\textit {G}_{2,2}+\\textit {G}_{3,0} +\\textit {G}_{3,1}+ \\textit {G}_{2,0,0}\\nonumber \\\\&\\quad +2 (\\textit {G}_{1,3}-\\textit {G}_{1,1,2}-\\textit {G}_{1,2,1} -\\textit {G}_{1,0,0,0})] + i \\pi \\zeta _2 (-\\textit {G}_{2}+2 (\\textit {G}_{1,1}+\\textit {G}_{1,0}))- 11i\\pi \\zeta _4 \\, ,\\\\[10pt]D_2 &= 2 \\textit {G}_{2,3}+2 \\textit {G}_{3,2}-\\textit {G}_{1,1,3}-\\textit {G}_{1,2,2}-2 \\textit {G}_{2,1,2}+2 \\textit {G}_{2,2,0}-2 \\textit {G}_{2,2,1} +2 \\textit {G}_{3,1,0}-2 \\textit {G}_{3,1,1}-\\textit {G}_{1,1,2,0}\\nonumber \\\\&\\quad - \\textit {G}_{1,2,1,0}-2 \\textit {G}_{2,1,1,0}+4 \\textit {G}_{2,1,1,1}-\\zeta _5 +4 \\zeta _2 \\zeta _3 + \\zeta _3 \\textit {G}_{1,1} +\\zeta _2 [-6 \\textit {G}_{3}-6 \\textit {G}_{2,0}+2 \\textit {G}_{2,1} \\nonumber \\\\&\\quad +5 (\\textit {G}_{1,2}+\\textit {G}_{1,1,0})] + i \\pi (\\zeta _3 \\textit {G}_{1}+2 \\textit {G}_{3,0}-\\textit {G}_{1,1,2}-\\textit {G}_{1,2,0}-\\textit {G}_{1,2,1}+2 \\textit {G}_{2,0,0}-2 \\textit {G}_{2,1,0} \\nonumber \\\\&\\quad +2 \\textit {G}_{2,1,1}-\\textit {G}_{1,1,0,0})+i \\pi \\zeta _2 (4 \\textit {G}_{2}-\\textit {G}_{1,1}) \\, .", "$ Here the argument $x$ has been suppressed, and for the HPLs we used a compact notation similar to [81], [82]: $G_{a_1,\\dots ,a_n,\\footnotesize \\underbrace{ 0,\\dots ,0}_{n_0}} = G(\\underbrace{0,\\dots ,0}_{|a_1|-1},\\text{sgn}(a_1),\\dots ,\\underbrace{0,\\dots ,0}_{|a_n|-1},\\text{sgn}(a_n),\\underbrace{0,\\dots ,0}_{n_0};x).", "\\nonumber $ In terms of the color vector space introduced in (REF ) and of the quantities we have just defined we find the explicit form $\\mathbf {\\Delta }_4^{(3)} = 8\\left({\\small \\begin{array}{ccccc}-2N_c(2D_1 + D_2 +4 C) &~~ & 2N_c(2D_1 + 3D_2 + 2C)&~~ & 2N^2_c(2D_2- C) \\\\[8pt]2N_c(3D_1 + 2D_2 + 2C) &~~ & -2N_c(D_1 +2 D_2 -4 C) &~~ & 2N^2_c(2D_1- C) \\\\[8pt]D_1 + 2 N_c^+ D_2 - N_c^- C \\; &~~ & 2 N_c^+ D_1 + D_2 -N_c^- C \\; &~~ & 6N_c(D_1 + D_2 - C)\\end{array}}\\right),$ where $N_c^{\\pm } = (N_c^2\\pm 1)/2$ and $C = \\zeta _5 + 2 \\zeta _2 \\zeta _3$ .", "Unlike $ \\mathbf {\\Gamma }_\\text{dipole}$ , $\\mathbf {\\Delta }^{(3)}_4$ does not depend explicitly on the factorization scale $\\mu ^2$ .", "We highlight the contributions to the quadrupole soft divergences, and in particular to the colour correlation pattern in the first and second line of eq.", "(REF ), by drawing a couple of representative diagrams in figure REF .", "Figure: Sample diagrams with quadrupole soft divergences, reinterpreted as tree-level diagrams (black lines) plus virtual gluons (red lines).", "Diagrams (a) and (b) involve colour correlations between four and three external partons and contribute to the first and second line of eq.", "(), respectively.The coefficients of the perturbative expansion for the finite 
remainders $\\mathbfcal {H}_{\\lambda ,\\text{fin}}=\\sum _{\\ell \\ge 0}\\bar{\\alpha }_{s}^\\ell \\mathbfcal {H}^{(\\ell )}_{\\lambda ,\\text{fin}}$ can be obtained according to $\\mathbfcal {H}_{\\lambda ,\\:\\text{fin}}^{(0)} &= \\mathbfcal {H}_{\\lambda }^{(0)} \\;,\\nonumber \\\\\\mathbfcal {H}_{\\lambda ,\\:\\text{fin}}^{(1)} &= \\mathbfcal {H}_{\\lambda ,\\: \\text{ren}}^{(1)} - \\mathbfcal {I}_1\\mathbfcal {H}_{\\lambda ,\\: \\text{ren}}^{(0)} \\;, \\nonumber \\\\\\mathbfcal {H}_{\\lambda ,\\:\\text{fin}}^{(2)} &= \\mathbfcal {H}_{\\lambda ,\\: \\text{ren}}^{(2)} - \\mathbfcal {I}_2\\mathbfcal {H}_{\\lambda ,\\: \\text{ren}}^{(0)} - \\mathbfcal {I}_1\\mathbfcal {H}_{\\lambda ,\\: \\text{ren}}^{(1)} \\;, \\nonumber \\\\\\mathbfcal {H}_{\\lambda ,\\:\\text{fin}}^{(3)} &= \\mathbfcal {H}_{\\lambda ,\\: \\text{ren}}^{(3)} - \\mathbfcal {I}_3\\mathbfcal {H}_{\\lambda ,\\: \\text{ren}}^{(0)} - \\mathbfcal {I}_2\\mathbfcal {H}_{\\lambda ,\\: \\text{ren}}^{(1)} - \\mathbfcal {I}_1\\mathbfcal {H}_{\\lambda ,\\: \\text{ren}}^{(2)} \\;,$ with $\\mathbfcal {I}_{1} = \\mathbfcal {Z}_1,\\qquad \\mathbfcal {I}_{2} = \\mathbfcal {Z}_2 - \\mathbfcal {Z}_1^2,\\qquad \\mathbfcal {I}_{3} = \\mathbfcal {Z}_3 - 2\\mathbfcal {Z}_1 \\mathbfcal {Z}_2 + \\mathbfcal {Z}_1^3 + \\mathbf {\\Delta }_4^{(3)} \\,,$ where the $\\mathbfcal {Z}_n$ are the coefficients of the expansion of $\\mathbfcal {Z}$ in $\\bar{\\alpha }_{s}$ and explicitly read [67], [32]: $\\mathbfcal {Z}_0 &= 1 \\,, \\nonumber \\\\\\mathbfcal {Z}_1 &= \\frac{\\Gamma ^{\\prime }_0}{4 \\epsilon ^2} + \\frac{\\mathbf {\\Gamma }_0}{2 \\epsilon } \\,, \\nonumber \\\\\\mathbfcal {Z}_2 &= \\frac{{\\Gamma _0^{\\prime }}^2}{32 \\epsilon ^4} + \\frac{\\Gamma ^{\\prime }_0}{8 \\epsilon ^3} \\left( \\mathbf {\\Gamma }_0 - \\frac{3}{2} \\beta _0 \\right) + \\frac{\\mathbf {\\Gamma }_0}{8 \\epsilon ^2}(\\mathbf {\\Gamma }_0 - 2 \\beta _0) + \\frac{\\Gamma _1^{\\prime }}{16 \\epsilon ^2} + \\frac{\\mathbf {\\Gamma }_1}{4 \\epsilon }\\,, \\nonumber \\\\\\mathbfcal {Z}_3 &= \\frac{{\\Gamma ^{\\prime }_0}^3}{384 \\epsilon ^6} + \\frac{{\\Gamma ^{\\prime }_0}^2}{64 \\epsilon ^5}(\\mathbf {\\Gamma }_0 \\!-\\!", "3 \\beta _0) + \\frac{\\Gamma _0^{\\prime }}{32 \\epsilon ^4} \\left( \\mathbf {\\Gamma }_0 \\!- \\!\\frac{4}{3} \\beta _0 \\right) \\left( \\mathbf {\\Gamma }_0 \\!-\\!", "\\frac{11}{3} \\beta _0 \\right) + \\frac{\\Gamma _0^{\\prime } \\Gamma _1^{\\prime }}{64 \\epsilon ^4} \\nonumber \\\\&\\quad +\\!\\frac{\\mathbf {\\Gamma }_0}{48\\epsilon ^3}(\\mathbf {\\Gamma }_0 -\\!", "2 \\beta _0)(\\mathbf {\\Gamma }_0 - 4 \\beta _0)\\!+\\!", "\\frac{\\Gamma ^{\\prime }_0}{16 \\epsilon ^3} \\left( \\mathbf {\\Gamma }_1\\!- \\!\\frac{16}{9} \\beta _1\\right) + \\frac{\\Gamma _1^{\\prime }}{32 \\epsilon ^3} \\left( \\mathbf {\\Gamma }_0 \\!-\\!", "\\frac{20}{9} \\beta _0 \\right)+ \\frac{\\mathbf {\\Gamma }_0 \\mathbf {\\Gamma }_1}{8 \\epsilon ^2} \\nonumber \\\\&\\quad -\\!", "\\frac{\\beta _0 \\mathbf {\\Gamma }_1 + \\beta _1 \\mathbf {\\Gamma }_0}{6 \\epsilon ^2} + \\frac{\\Gamma _2^{\\prime }}{36 \\epsilon ^2 } + \\frac{\\mathbf {\\Gamma }_2 + \\mathbf {\\Delta }_4^{(3)}}{6 \\epsilon } \\; .$ Above we have used $\\Gamma ^{\\prime }(\\bar{\\alpha }_{s}) = \\frac{\\partial \\mathbf {\\Gamma }(\\lbrace p\\rbrace ,\\bar{\\alpha }_{s},\\mu )}{\\partial \\log \\mu } = -\\gamma ^\\text{K} \\sum _i C_i = \\sum _{\\ell \\ge 0} \\bar{\\alpha }_{s}^{\\ell +1} \\Gamma _\\ell ^{\\prime } ,$ with the last equal sign giving the definition of the perturbative coefficients 
$\\Gamma _\\ell ^{\\prime } $ .", "The explicit expression for the perturbative expansions of the cusp anomalous dimension and of the quark (gluon) collinear anomalous dimensions are given in the appendix." ], [ "Checks and exact results", "First, we have checked that our results for the lower loop amplitudes are consistent with the literature.", "In particular, we have compared our tree-level, one-loop and two-loop results for the bare helicity amplitudes for $q\\bar{q}\\rightarrow gg$ in the helicity configurations (REF ) against the results provided in the ancillary files of ref.", "[83] and find analytical agreement through to weight six.", "We have also checked that our one-loop expressions for $q\\bar{q}\\rightarrow gg$ and $qg\\rightarrow qg$ match results obtained with the automated one-loop generator OpenLoops [84], [85].", "At the three-loop level, we have verified that the IR singularities of our results for the renormalized helicity amplitudes in eq.", "(REF ) match the pattern predicted by eqs.", "(REF )-(REF ), which provides a highly non-trivial check.", "From the high energy limit of our amplitudes we extract the quark and gluon impact factors and find that they are consistent with previous results, which tests lower loop contributions to the renormalized amplitude up to weight six.", "Moreover, we extract the gluon Regge trajectory and find agreement with previous results, which provides a stringent check of the finite contributions to the three-loop amplitudes presented in this paper.", "The high energy limit will be described in more detail in the next section.", "Our analytic results for the three-loop finite remainders $ \\mathbfcal {H}_{{\\lambda },\\: \\text{fin}} $ are expressed in terms of harmonic polylogarithms with transcendental weight up to six.", "Alternatively, these can be converted to a functional basis of logarithms, classical polylogarithms and a few multiple polylogarithms with at most three-fold nested sums [31].", "We provide a general conversion table for harmonic polylogarithms up to weight six in the ancillary files of the arXiv submission of this article.", "From our results for the process $q\\bar{q}\\rightarrow gg$ we also derive explicit expressions for the helicity amplitudes for $qg\\rightarrow qg$ scattering, which requires a non-trivial analytical continuation.", "Details for this procedure are given in ref. 
[32].", "The remaining partonic channels $gg\\rightarrow q\\bar{q}$ and $g\\bar{q} \\rightarrow g\\bar{q}$ are not provided explicitly, since they can be obtained by a simple crossing of external legs without any non-trivial analytic continuation.", "While our results are relatively compact, of the order of 1 megabyte per partonic channel, they are too lengthy to be presented here.", "We include them in computer-readable format in the ancillary files on arXiv.", "In figure REF we show the finite remainder of the amplitude at different loop orders interfered with the tree-level amplitude for the processes $q\\bar{q}\\rightarrow gg$ and $qg \\rightarrow qg$ .", "The interferences are averaged (summed) over polarizations and color in the initial (final) state.", "Figure: Perturbative amplitudes up to three loops interfered with the tree-level amplitude for qq ¯→ggq\\bar{q}\\rightarrow gg (panel a) and qg→qgqg \\rightarrow qg (panel b) in dependence of x=-t/sx=-t/s .", "The two-loop contribution to qq ¯→ggq\\bar{q}\\rightarrow gg diverges to +∞+\\infty near x=0 + ,1 - x=0^+,1^- (panel c shows details near x=0 + x=0^+), while the three-loop contribution to qg→qgqg \\rightarrow qg diverges to -∞-\\infty near x=0 + x=0^+ (panel d).Additionally, since with the results of this paper all $2 \\rightarrow 2$ partonic channels are now available in three-loop massless QCD, we find it useful to compare virtual corrections for the processes $q\\bar{q}\\rightarrow gg$ , $qg \\rightarrow qg$ , $gg\\rightarrow gg$ and $q\\bar{q}\\rightarrow \\bar{Q}Q$ .", "In figure REF , we show the contributions to the squared amplitude at different orders in $\\bar{\\alpha }_{s}$ , normalized by the respective tree-level squared amplitude.", "Again, we average (sum) over polarization and color in the initial (final) states.", "Figure: Perturbative expansion of the amplitude squared for the processes qq ¯→ggq\\bar{q}\\rightarrow gg, qg→qgqg \\rightarrow qg, gg→gggg\\rightarrow gg and qq ¯→Q ¯Qq\\bar{q}\\rightarrow \\bar{Q}Q as functions of x=-t/sx=-t/s.", "Values are normalized by the tree-level amplitude squared.Below we define more in detail the quantities we present in the plots.", "We rewrite the finite amplitude as a vector in color and helicity space $ | \\mathcal {A}\\rangle = 4 \\pi \\alpha _s\\sum _{\\ell \\ge 0} \\bar{\\alpha }_{s}^\\ell \\; | \\mathcal {A}^{(\\ell )} \\rangle $ and define the contraction between different elements in this vector space as [1] $\\langle \\mathcal {A}^{(\\ell )} |\\mathcal {A}^{(\\ell ^{\\prime })}\\rangle \\equiv \\mathcal {N}\\sum _{i,j,\\mathbf {\\lambda }}\\mathcal {C}_i^\\dagger \\mathcal {C}_{j}|s_{\\mathbf {\\lambda }}|^2\\mathcal {H}^{[i],(\\ell )^*}_{\\mathbf {\\lambda }, \\rm fin}\\mathcal {H}^{[j],(\\ell ^{\\prime })}_{{\\mathbf {\\lambda }}, \\rm fin},$ where the factor $4 \\pi \\alpha _s$ in eq.", "(REF ) replicates the overall normalization of eq.", "(REF ).", "$\\mathcal {N}$ is the initial-state color and polarization averaging factor, which depends on the process and takes the following values: $ \\mathcal {N} ={\\left\\lbrace \\begin{array}{ll}\\frac{1}{4N_c^2} \\quad \\quad \\quad &\\text{ for $q\\bar{q}\\rightarrow gg$}, \\\\\\frac{1}{4N_c(N_c^2-1)} \\quad &\\text{ for $qg\\rightarrow qg$},\\\\\\frac{1}{4(N_c^2-1)^2} \\quad &\\text{ for $gg \\rightarrow gg$},\\\\\\frac{1}{4N_c^2} \\quad \\quad \\quad &\\text{ for $q \\bar{q}\\rightarrow \\bar{Q}Q $}.\\end{array}\\right.", "}$ The initial and final state polarization sum runs over all helicity configurations.", 
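As a purely illustrative cross-check (not part of the calculation presented in this paper), the color sum entering the contraction above can be evaluated explicitly with a few lines of NumPy: the sketch below builds the fundamental $SU(3)$ generators from the Gell-Mann matrices, verifies the normalization $\operatorname{Tr}[T^aT^b]=\delta_{ab}/2$, and computes the factors $\mathcal{C}_i^\dagger\mathcal{C}_j$ (summed over external color indices) for the $q\bar{q}\rightarrow gg$ color basis, together with the averaging factors $\mathcal{N}$ for $N_c=3$. The closed-form "expected" entries are standard color-algebra results and are quoted here only for orientation.

```python
import numpy as np
from itertools import product

Nc = 3
CF = (Nc**2 - 1) / (2 * Nc)
CA = float(Nc)

# Fundamental SU(3) generators T^a = lambda^a / 2 (Gell-Mann matrices).
s3 = 1 / np.sqrt(3)
lam = np.array([
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[s3, 0, 0], [0, s3, 0], [0, 0, -2 * s3]],
], dtype=complex)
T = lam / 2

# Normalization convention used in the text: Tr[T^a T^b] = delta_ab / 2.
assert np.allclose(np.einsum('aij,bji->ab', T, T), np.eye(8) / 2)

# Color structures C_1 = (T^{a3} T^{a4})_{i2 i1}, C_2 = (T^{a4} T^{a3})_{i2 i1},
# C_3 = delta^{a3 a4} delta_{i2 i1}, stored with indices (a3, a4, i2, i1).
C = np.zeros((3, 8, 8, Nc, Nc), dtype=complex)
for a3, a4 in product(range(8), range(8)):
    C[0, a3, a4] = T[a3] @ T[a4]
    C[1, a3, a4] = T[a4] @ T[a3]
    C[2, a3, a4] = (a3 == a4) * np.eye(Nc)

# Factors C_i^dagger C_j, summed over the external color indices, as they
# appear in the color- and helicity-summed contraction defined above.
gram = np.einsum('Iabij,Jabij->IJ', C.conj(), C).real

# Standard closed-form color-algebra values (quoted here only as a cross-check).
expected = np.array([
    [CF**2 * Nc, (CF - CA / 2) * CF * Nc, CF * Nc],
    [(CF - CA / 2) * CF * Nc, CF**2 * Nc, CF * Nc],
    [CF * Nc, CF * Nc, (Nc**2 - 1) * Nc],
])
assert np.allclose(gram, expected)

# Initial-state averaging factors for Nc = 3 in the q qbar -> g g,
# q g -> q g and g g -> g g channels: 1/36, 1/96, 1/256.
print(1 / (4 * Nc**2), 1 / (4 * Nc * (Nc**2 - 1)), 1 / (4 * (Nc**2 - 1)**2))
```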
"The color factors $\\mathcal {C}_i$ and the spinor factors $s_{\\bf \\lambda }$ are different for the various processes: for $q\\bar{q}\\rightarrow gg$ they are given in eqs.", "(REF ) and (REF ), while for $qg\\rightarrow qg$ they are obtained by applying the transformation $p_2 \\leftrightarrow p_3$ to those of $q\\bar{q}\\rightarrow gg$ .", "For the other two channels $gg \\rightarrow gg$ and $q \\bar{q}\\rightarrow \\bar{Q}Q $ , they can be found in refs.", "[33] and [32] respectively.", "We expand the squared amplitude normalized by the tree-level contribution in $\\bar{\\alpha }_{s}$ according to $\\frac{\\langle \\mathcal {A} |\\mathcal {A}\\rangle }{\\langle \\mathcal {A}^{(0)} |\\mathcal {A}^{(0)}\\rangle } =\\mathcal {V}^{(0)}+\\bar{\\alpha }_{s}\\mathcal {V}^{(1)}+\\bar{\\alpha }_{s}^2\\mathcal {V}^{(2)}+\\bar{\\alpha }_{s}^3\\mathcal {V}^{(3)}+O(\\bar{\\alpha }_{s}^4)$ with $\\mathcal {V}^{(0)} &=1, \\quad &\\mathcal {V}^{(1)} &= 2\\frac{\\operatorname{Re}\\langle \\mathcal {A}^{(0)}| \\mathcal {A}^{(1)}\\rangle }{\\langle \\mathcal {A}^{(0)} |\\mathcal {A}^{(0)}\\rangle }, \\quad \\\\\\mathcal {V}^{(2)} &= \\frac{\\langle \\mathcal {A}^{(1)} |\\mathcal {A}^{(1)}\\rangle }{\\langle \\mathcal {A}^{(0)} |\\mathcal {A}^{(0)}\\rangle }+2\\frac{\\operatorname{Re}\\langle \\mathcal {A}^{(0)} |\\mathcal {A}^{(2)}\\rangle }{\\langle \\mathcal {A}^{(0)} |\\mathcal {A}^{(0)}\\rangle }, \\quad &\\mathcal {V}^{(3)} &= 2\\frac{\\operatorname{Re}\\langle \\mathcal {A}^{(1)} |\\mathcal {A}^{(2)}\\rangle }{\\langle \\mathcal {A}^{(0)} |\\mathcal {A}^{(0)}\\rangle }+2 \\frac{\\operatorname{Re}\\langle \\mathcal {A}^{(0)} |\\mathcal {A}^{(3)}\\rangle }{\\langle \\mathcal {A}^{(0)} |\\mathcal {A}^{(0)}\\rangle } .$ Finally, for the numerical evaluation, we have set $\\mu ^2=s = m_Z^2$ , $\\alpha _s(\\mu )=0.118$ , $n_f=5$ and $N_c=3$ ." 
], [ "High energy limit", "In the high-energy or Regge limit, quantum field theoretic scattering amplitudes become particularly simple and are known to exhibit universal factorization properties.", "In the following, we consider the process $q(p_1)\\;+ \\;g(p_2) \\; \\rightarrow \\; q(p_3)\\;+ \\;g(p_4),$ for which $t$ -channel gluon exchanges provide the dominant contribution to the amplitude at high energies.", "The Regge limit is defined as $s\\rightarrow \\infty $ for fixed scattering angle, that is, $|s| \\approx |u| \\gg |t|$ , where $s=(p_1+p_2)^2$ , $t=(p_1-p_3)^2$ , $u=-s-t$ in terms of the momenta in (REF ).", "For the variable $x=-t/s$ , the Regge limit corresponds to $x\\rightarrow 0$ .", "Following the investigation [86], [87], we split the renormalized amplitude into the definite $s \\leftrightarrow u$ signature component $\\mathbfcal {H}_{\\mathrm {qg \\rightarrow qg,\\pm }} = \\frac{1}{2}\\left.", "[ \\mathbfcal {H}_{\\mathrm {qg \\rightarrow qg}}(s,u) \\pm \\mathbfcal {H}_{\\mathrm {qg \\rightarrow qg}}(u,s) \\right] \\,.$ The definite-signature amplitudes $\\mathbfcal {H}_{\\mathrm {qg \\rightarrow qg},+}$ and $\\mathbfcal {H}_{\\mathrm {qg \\rightarrow qg},-}$ are referred to as the even and odd amplitudes.", "We expand them up to third order in $\\bar{\\alpha }_{s}$ , $\\mathbfcal {H}_{\\mathrm {qg \\rightarrow qg},\\pm } &= \\sum _{\\ell =0}^3 \\bar{\\alpha }_{s}^\\ell \\sum _{k=0}^\\ell L^k \\mathbfcal {H}_{\\mathrm {qg \\rightarrow qg}} ^{(\\pm ,\\ell ,k)},$ where we use for the signature-symmetric logarithm $L = -\\ln (x) - \\frac{i\\pi }{2} \\approx \\frac{1}{2} \\left[ \\ln \\left( \\frac{-s-i \\delta }{-t} \\right) + \\ln \\left( \\frac{-u-i \\delta }{-t} \\right) \\right]$ and the color operators [88], [89] are $& \\mathbf {T}_s^2 = (\\mathbf {T}_1 \\!+\\!", "\\mathbf {T}_2)^a(\\mathbf {T}_1\\!+\\!", "\\mathbf {T}_2)^a , \\;\\quad \\mathbf {T}_t^2 = (\\mathbf {T}_1 \\!+\\!", "\\mathbf {T}_3)^a(\\mathbf {T}_1 \\!+\\!", "\\mathbf {T}_3)^a, \\nonumber \\\\& \\mathbf {T}_u^2 = (\\mathbf {T}_1 \\!+\\!", "\\mathbf {T}_4)^a(\\mathbf {T}_1 \\!+\\!", "\\mathbf {T}_4)^a ,\\;\\quad \\mathbf {T}_{s-u}^2 = \\frac{1}{2}(\\mathbf {T}_s^2 -\\mathbf {T}_u^2 ).$ Here the $\\mathbf {T}_i$ (i=1,...,4) are assigned according to eq.", "(REF ).", "Explicitly, we find $\\scalebox {0.95}{\\mathbf {T}_s^2 \\!=\\!", "\\left(\\begin{array}{ccc}C_A\\!+\\!C_F & 0 & 2 \\\\0 & C_F & \\shortminus 2 \\\\1/2 & 0 & C_A\\!+\\!C_F \\\\\\end{array}\\right),\\;\\mathbf {T}_t^2 \\!=\\!", "\\left(\\begin{array}{ccc}C_A & 0 & 0 \\\\0 & C_A & 0 \\\\\\shortminus 1/2 & \\shortminus 1/2 & 0 \\\\\\end{array}\\right),\\;\\mathbf {T}_u^2\\!", "=\\!\\left(\\begin{array}{ccc}C_F & 0 & \\shortminus 2 \\\\0 & C_A\\!+\\!C_F & 2 \\\\0 & 1/2 & C_A\\!+\\!C_F\\ \\end{array}\\right).", "}$ Following ref.", "[86], one can show that the coefficients $\\mathbfcal {H}_{\\mathrm {qg \\rightarrow qg}} ^{(-,\\ell ,k)}$ ($\\mathbfcal {H}_{\\mathrm {qg \\rightarrow qg}} ^{(+,\\ell ,k)}$ ) are purely imaginary(real).", "The $t$ -channel exchange of an even number of Reggeons contributes only to $\\mathbfcal {H}_{\\mathrm {qg \\rightarrow qg}} ^{(+,\\ell ,k)}$ , while the $t$ -channel exchange of an odd number of Reggeons contributes only to $\\mathbfcal {H}_{\\mathrm {\\mathrm {qg \\rightarrow qg}}} ^{(-,\\ell ,k)}$ .", "A single Reggeon exchange contributes to the Regge pole contribution, while a multiple Reggeon exchange in general can have non-vanishing contributions to both Regge pole and Regge cuts [90], [87], [41], [91].", 
"Up to next-to-leading logarithmic (NLL) accuracy, the odd signature amplitude is completely determined by the gluon Regge trajectory and by the so-called quark and gluon impact factors, that describe the interaction of the reggeized gluon with external states.", "The factorization structure for the odd amplitude becomes more complex in the next-to-next-to-leading logarithmic (NNLL) approximation, as both Regge pole and Regge cut [92], [86], [93], [88] contribute at this order.", "For the even amplitude, only the Regge cut contributes at the NLL level [86] and breaks the simple exponential structure already at this logarithmic order.", "Starting from NNLL, the odd-signature amplitude receives contributions from both Regge pole and Regge cuts.", "In ref.", "[91], a scheme has been proposed to disentangle the two.", "As in our previous paper [33], we adopt this scheme to study the high-energy behaviour of $qg \\rightarrow qg$ to three loops up to NNLL.", "Following the framework outlined in [91], we assume that, by setting the renormalization scale to $\\mu ^2 = -t$ , eq.", "(REF ) can be written as $\\mathbfcal {H}_{\\mathrm {qg \\rightarrow qg},\\pm } &= \\: Z_q\\;Z_g \\:e^{L\\mathbf {T}_t^2 \\tau _g} \\sum _{\\ell =0}^3 \\bar{\\alpha }_{s}^\\ell \\sum _{k=0}^\\ell L^k \\mathbfcal {O}^{\\pm ,(\\ell )}_k \\mathbfcal {H}_{\\mathrm {qg \\rightarrow qg}} ^{(0)},$ where $\\tau _g = \\sum _{\\ell =1} \\bar{\\alpha }_{s}^\\ell \\tau _\\ell $ is the gluon Regge trajectory and the factors $Z_q = \\sum _{\\ell =0} \\bar{\\alpha }_{s}^\\ell Z_q^{(\\ell )}$ and $Z_g = \\sum _{\\ell =0} \\bar{\\alpha }_{s}^\\ell Z_g^{(\\ell )}$ capture the collinear poles of the amplitude [86] for quarks and gluons, respectively.", "Up to $O(\\bar{\\alpha }_{s})$ we have $Z_i^{(0)} & = 1 \\, , \\nonumber \\\\Z_i^{(1)} & = -C_i \\gamma ^\\text{K}_1 \\frac{1}{ \\epsilon ^2} + 4\\gamma _1^i\\frac{1}{\\epsilon } \\, , \\nonumber \\\\Z_i^{(2)} & = C_i^2 \\frac{(\\gamma ^\\text{K}_1)^2}{2 \\epsilon ^4} + C_i\\left[\\frac{1}{\\epsilon ^3}\\gamma ^\\text{K}_1\\left(\\frac{3 \\beta _0}{4} - 4\\gamma _1^i \\right) - \\frac{\\gamma ^\\text{K}_2}{\\epsilon ^2} \\right] + \\frac{2}{\\epsilon ^2}\\gamma _1^i\\left(4\\gamma _1^i -\\beta _0 \\right) + \\frac{8\\gamma _2^i}{ \\epsilon }\\, .$ The odd signature color operators $\\mathbfcal {O}^{-,(\\ell )}_k$ contributing at NNLL [86] are $\\mathbfcal {O}^{-,(0)}_0 &=1, \\\\\\mathbfcal {O}^{-,(1)}_0&= \\mathcal {I}^q_1+\\mathcal {I}^g_1, \\nonumber \\\\\\mathbfcal {O}^{-,(2)}_0&= \\left[ \\mathcal {I}^q_2 +\\mathcal {I}^g_2 + \\mathcal {I}^q_1\\mathcal {I}^g_1\\right] + \\mathcal {B}^{-,\\scalebox {0.7}{(2)}} [ (\\mathbf {T}^2_{s-u} )^2 - N_c^2/4], \\nonumber \\\\\\mathbfcal {O}^{-,(3)}_1 &= \\mathcal {B}_1^{-,\\scalebox {0.7}{(3)}}\\mathbf {T}^2_{s-u}[\\mathbf {T}^2_{t},\\mathbf {T}^2_{s-u}]+ \\mathcal {B}_2^{-,\\scalebox {0.7}{(3)}}[\\mathbf {T}^2_{t},\\mathbf {T}^2_{s-u}] \\mathbf {T}^2_{s-u} , \\nonumber \\multicolumn{2}{l}{\\text{and the even signature ones contributing at NLL~\\cite {Caron-Huot:2017fxr} are}}\\\\\\mathbfcal {O}^{+,(1)}_0 &= i \\pi \\,\\mathcal {B}^{+,(1)} \\, \\mathbf {T}_{s-u}^2, \\;\\nonumber \\\\\\mathbfcal {O}^{+,(2)}_1 &= i \\pi \\, \\mathcal {B}^{+,(2)} \\,[ \\mathbf {T}_{t}^2, \\mathbf {T}_{s-u}^2], \\nonumber \\\\\\mathbfcal {O}^{+,(3)}_2 &= i \\pi \\, \\mathcal {B}^{+,(3)} \\, [\\mathbf {T}_{t}^2,[ \\mathbf {T}_{t}^2, \\mathbf {T}_{s-u}^2]] .", "\\multicolumn{2}{l}{\\text{The coefficients $\\mathcal {B}^{\\pm ,(\\ell )}$ describethe process independent 
Regge cut contributions~\\cite {Caron-Huot:2013fea,Caron-Huot:2017fxr,Falcioni:2021buo} and we report them below for convenience.The odd-signature ones are}}\\\\\\mathcal {B}^{-,(2)}& = \\frac{2\\pi ^2}{3} r_\\Gamma ^2 \\left( \\frac{3}{\\epsilon ^2} - 18 \\epsilon \\zeta _3 - 27 \\epsilon ^2 \\zeta _4+ \\mathcal {O}(\\epsilon ) \\right),\\nonumber \\\\\\mathcal {B}_1^{-,(3)} & = 64\\pi ^2 r_\\Gamma ^3 \\left( \\frac{1}{48\\epsilon ^2} + \\frac{37}{24} \\zeta _3 + \\mathcal {O}(\\epsilon ) \\right) ,\\nonumber \\\\\\mathcal {B}_2^{-,(3)} &= 64\\pi ^2 r_\\Gamma ^3 \\left( \\frac{1}{24\\epsilon ^2} + \\frac{1}{12} \\zeta _3 + \\mathcal {O}(\\epsilon ) \\right),\\multicolumn{2}{l}{\\text{while for even signature one finds}}\\\\\\mathcal {B}^{+,(1)} &= \\, r_{\\Gamma } \\: \\frac{2}{\\epsilon }, \\nonumber \\\\\\mathcal {B}^{+,(2)} &= - \\frac{r_{\\Gamma }^2}{2} \\left( \\frac{4}{\\epsilon ^2} +72 \\zeta _3 \\epsilon + 108 \\zeta _4 \\epsilon ^2 +\\mathcal {O}(\\epsilon ^3)\\right), \\nonumber \\\\\\mathcal {B}^{+,(3)} &= \\frac{r_{\\Gamma }^3}{6} \\bigg ( \\frac{8}{\\epsilon ^3} - 176 \\zeta _3 - 264 \\zeta _4 \\epsilon - 5712 \\zeta _5 \\epsilon ^2 + \\mathcal {O}(\\epsilon ^3) \\bigg ) .$ $\\mathcal {I}^q_\\ell $ and $\\mathcal {I}^g_\\ell $ are the perturbative expansion coefficients of the quark and gluon impact factors; they can be extracted from the one- and two-loop calculation [83].", "The explicit expressions are rather long and are reported to the required orders in $\\epsilon $ in appendix .", "With the perturbative expansion of $\\tau _g$ up to the three-loop order obtained in [33] (and provided in appendix ), we have all the ingredients to fully predict the Regge limit of the process $qg\\rightarrow qg$ through eq.", "(REF ), which only requires the tree-level amplitude $\\mathcal {H}_{\\mathrm {qg \\rightarrow qg}} ^{(0)}$ as an input.", "We find by explicit calculation that the high energy limit of our results for the $qg\\rightarrow qg$ three-loop amplitude indeed agrees with this prediction and confirms in particular the literature results [86], [41], [95], [96], [97] for the gluon Regge trajectory as well as quark and gluon impact factors in QCD.", "This provides a highly non-trivial test of the universality of high energy factorization in QCD." 
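To illustrate how these ingredients combine in practice, the factorized form above can be truncated at first order in $\bar{\alpha}_{s}$ (with $\mu^2=-t$). Keeping only the terms listed in the expansions of $Z_qZ_g$, $e^{L\mathbf{T}_t^2\tau_g}$ and the operators $\mathbfcal{O}^{\pm,(\ell)}_k$, one obtains schematically $ \mathbfcal{H}^{(1)}_{\mathrm{qg\rightarrow qg},-} = \left[\, Z_q^{(1)} + Z_g^{(1)} + \tau_1\, L\, \mathbf{T}_t^2 + \mathcal{I}^q_1 + \mathcal{I}^g_1 \,\right] \mathbfcal{H}^{(0)}_{\mathrm{qg\rightarrow qg}}\,, \qquad \mathbfcal{H}^{(1)}_{\mathrm{qg\rightarrow qg},+} = i\pi\, \mathcal{B}^{+,(1)}\, \mathbf{T}^2_{s-u}\, \mathbfcal{H}^{(0)}_{\mathrm{qg\rightarrow qg}}\,, $ so that at one loop the odd-signature amplitude is fixed by the one-loop Regge trajectory, impact factors and collinear factors, while in this scheme the even-signature amplitude is a pure (single $i\pi$) Regge-cut contribution. The two- and three-loop predictions used for the checks quoted above are assembled in the same way from the higher-order coefficients.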
], [ "Conclusions", "In this paper, we have presented the three-loop helicity amplitudes for quark-gluon scattering processes in full-color, massless QCD.", "To perform this calculation, we have made use of various cutting-edge techniques, in particular to handle the Lorentz decomposition of the scattering amplitude and to solve the highly non-trivial system of integration-by-parts identities required to reduce the amplitude to master integrals.", "In addition to our previous calculations for the scattering of four quarks and of four gluons, these latest analytical results confirm predictions for the infrared poles of four-point amplitudes in QCD, also for processes with external states in different color representations.", "Moreover, our results have made it possible to verify the factorization properties of partonic amplitudes in the Regge limit.", "With this work, all three-loop amplitudes for parton-parton scattering processes are publicly available, providing the virtual corrections to dijet production at N$^3$ LO.", "The research of FC was supported by the ERC Starting Grant 804394 hipQCD and by the UK Science and Technology Facilities Council (STFC) under grant ST/T000864/1.", "GG was supported by the Royal Society grant URF/R1/191125.", "AvM was supported in part by the National Science Foundation through Grant 2013859.", "LT was supported by the Excellence Cluster ORIGINS funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy - EXC-2094 - 390783311, by the ERC Starting Grant 949279 HighPHun and, in the initial phase of this work, by the Royal Society through grant URF/R1/191125.", "[1]" ], [ "Anomalous dimensions", "In this appendix, we list the perturbative expansions of the cusp anomalous dimension and of the quark and gluon collinear anomalous dimensions, $\\gamma ^{\\text{K}} = \\sum \\limits _{n=0} \\left(\\frac{\\alpha _s}{4 \\pi }\\right)^{n+1} \\gamma _n^\\text{K} , \\qquad \\gamma ^{g/q} = \\sum \\limits _{n=0} \\left(\\frac{\\alpha _s}{4 \\pi }\\right)^{n+1} \\gamma _n^{q/g}.$ The required expansion coefficients of the cusp anomalous dimension read [71], [72], [73] $\\gamma _0^\\text{K} &= 4 \\,, \\nonumber \\\\\\gamma _1^\\text{K} &= \\left( \\frac{268}{9}- \\frac{4\\pi ^2}{3} \\right) C_A - \\frac{40}{9}\\, n_f \\,,\\nonumber \\\\\\gamma _2^\\text{K} &= C_A^2 \\left( \\frac{490}{3}- \\frac{536\\pi ^2}{27}+ \\frac{44\\pi ^4}{45} + \\frac{88}{3}\\,\\zeta _3 \\right) + C_A n_f \\left( \\frac{80\\pi ^2}{27}- \\frac{836}{27} - \\frac{112}{3}\\,\\zeta _3 \\right) \\nonumber \\\\& \\quad + C_F n_f \\left(32\\zeta _3 - \\frac{110}{3}\\right) - \\frac{16}{27}\\, n_f^2 \\; .\\multicolumn{2}{l}{\\text{The required expansion coefficients of the quark collinear anomalous dimension are~\\cite {Moch:2005id} }}\\\\\\gamma _0^q &= -3 C_F \\,, \\nonumber \\\\\\gamma _1^q &= C_F^2 \\left( -\\frac{3}{2} + 2\\pi ^2- 24\\zeta _3 \\right)+ C_F C_A \\left( - \\frac{961}{54} - \\frac{11\\pi ^2}{6}+ 26\\zeta _3 \\right)+ C_F n_f \\left( \\frac{65}{27} + \\frac{\\pi ^2}{3} \\right) ,\\nonumber \\\\\\gamma _2^q &= C_F^3 \\left( -\\frac{29}{2} - 3\\pi ^2- \\frac{8\\pi ^4}{5}- 68\\zeta _3 + \\frac{16\\pi ^2}{3}\\,\\zeta _3 + 240\\zeta _5 \\right)\\nonumber \\\\&\\mbox{}+ C_F^2 C_A \\left( - \\frac{151}{4} + \\frac{205\\pi ^2}{9}+ \\frac{247\\pi ^4}{135} - \\frac{844}{3}\\,\\zeta _3- \\frac{8\\pi ^2}{3}\\,\\zeta _3 - 120\\zeta _5 \\right) \\nonumber \\\\&\\mbox{}+ C_F C_A^2 \\left( - \\frac{139345}{2916} - \\frac{7163\\pi ^2}{486}- 
\\frac{83\\pi ^4}{90} + \\frac{3526}{9}\\,\\zeta _3- \\frac{44\\pi ^2}{9}\\,\\zeta _3 - 136\\zeta _5 \\right) \\nonumber \\\\&\\mbox{}+ C_F^2 n_f \\left( \\frac{2953}{54} - \\frac{13\\pi ^2}{9}- \\frac{14\\pi ^4}{27} + \\frac{256}{9}\\,\\zeta _3 \\right)\\nonumber \\\\&\\mbox{}+ C_F C_A n_f \\left( - \\frac{8659}{729}+ \\frac{1297\\pi ^2}{243} + \\frac{11\\pi ^4}{45}- \\frac{964}{27}\\,\\zeta _3 \\right) \\nonumber \\\\&\\mbox{}+ C_F n_f^2 \\left( \\frac{2417}{729}- \\frac{10\\pi ^2}{27} - \\frac{8}{27}\\,\\zeta _3 \\right) \\; ,\\\\\\multicolumn{2}{l}{\\text{while for the gluon collinear anomalous dimension~\\cite {Moch:2005tm} they read}}\\\\\\gamma _0^g &= -\\beta _0 \\,, \\nonumber \\\\\\gamma _1^g &= C_A^2 \\left( -\\frac{692}{27} + \\frac{11}{3} \\zeta _2 + 2 \\zeta _3 \\right)+C_A n_f \\left( \\frac{128}{27} - \\frac{2}{3} \\zeta _2 \\right) + 2 C_F n_f \\,, \\nonumber \\\\\\gamma _2^g &= C_A^3\\left(\\frac{-97186}{729} + \\frac{6109}{81} \\zeta _2 + \\frac{122}{3} \\zeta _3- \\frac{319}{3} \\zeta _4 - \\frac{40}{3} \\zeta _2 \\zeta _3\\;- 16 \\zeta _5 \\right)\\nonumber \\\\&\\mbox{}+C_A^2 n_f \\left( \\frac{30715}{1458} - \\frac{1198}{81}\\zeta _2 + \\frac{356}{27} \\zeta _3 + \\frac{82}{3} \\zeta _4 \\right) - \\frac{11}{9} C_F n_f^2 - C_F^2 n_f \\nonumber \\\\&\\mbox{}+C_A C_F n_f \\left( \\frac{1217}{27} - 2 \\zeta _2 - \\frac{152}{9}\\zeta _3 - 8 \\zeta _4 \\right) + C_A n_f^2 \\left(-\\frac{269}{1458} + \\frac{20}{27} \\zeta _2 - \\frac{56}{27} \\zeta _3 \\right).$" ], [ "Impact factors and gluon Regge trajectory", "In this appendix we provide expressions relevant for the high-energy limit of the three-loop amplitude discussed in the main text.", "The expansion coefficients for the quark and gluon impact factors up to two loops read $\\mathcal {I}^q_1 & =\\frac{4-\\frac{\\zeta _2}{2}}{N_c}+N_c \\left(\\frac{7\\zeta _2}{2}+\\frac{13}{18}\\right)-\\frac{5 n_f}{9}\\nonumber \\\\&\\quad +\\epsilon \\bigg [N_c\\left(-\\frac{\\zeta _2}{6}+\\frac{10 \\zeta _3}{3}+\\frac{40}{27}\\right)+\\frac{1}{N_c}\\left(-\\frac{3\\zeta _2}{4}-\\frac{7 \\zeta _3}{3}+8\\right)+n_f\\left(\\frac{\\zeta _2}{6}-\\frac{28}{27}\\right)\\bigg ]\\nonumber \\\\&\\quad +\\epsilon ^2 \\bigg [N_c \\left(-\\frac{13\\zeta _2}{36}+\\frac{35 \\zeta _4}{16}-\\frac{7 \\zeta _3}{9}+\\frac{242}{81}\\right)+\\frac{1}{N_c}\\left(-2\\zeta _2-\\frac{47 \\zeta _4}{16}-\\frac{7 \\zeta _3}{2}+16\\right)\\nonumber \\\\&\\quad \\quad \\quad +n_f \\left(\\frac{5\\zeta _2}{18}+\\frac{7 \\zeta _3}{9}-\\frac{164}{81}\\right)\\bigg ]\\nonumber \\\\&\\quad +\\epsilon ^3 \\bigg [N_c \\left(-\\frac{26 \\zeta _2 \\zeta _3}{3}-\\frac{20 \\zeta _2}{27}-\\frac{47 \\zeta _4}{48}+\\frac{36\\zeta _5}{5}-\\frac{91 \\zeta _3}{54}+\\frac{1456}{243}\\right)\\nonumber \\\\&\\quad \\quad \\quad +\\frac{1}{N_c}\\left(\\frac{7 \\zeta _2 \\zeta _3}{6}-4 \\zeta _2-\\frac{141\\zeta _4}{32}-\\frac{31 \\zeta _5}{5}-\\frac{28 \\zeta _3}{3}+32\\right)\\nonumber \\\\&\\quad \\quad \\quad +n_f \\left(\\frac{14\\zeta _2}{27}+\\frac{47 \\zeta _4}{48}+\\frac{35 \\zeta _3}{27}-\\frac{976}{243}\\right)\\bigg ]\\nonumber \\\\&\\quad +\\epsilon ^4 \\bigg [\\frac{1}{N_c}\\left(\\frac{7 \\zeta _2 \\zeta _3}{4}-8\\zeta _2-\\frac{47 \\zeta _4}{4}-\\frac{93 \\zeta _5}{10}+\\frac{49\\zeta _3^2}{18}-\\frac{56 \\zeta _3}{3}-\\frac{949 \\pi ^6}{120960}+64\\right) \\nonumber \\\\&\\quad \\quad \\quad +N_c \\left(\\frac{7 \\zeta _2 \\zeta _3}{18}-\\frac{121 \\zeta _2}{81}-\\frac{611 \\zeta _4}{288}-\\frac{31\\zeta _5}{15}-\\frac{91 \\zeta _3^2}{18}-\\frac{280 \\zeta 
_3}{81}-\\frac{977 \\pi ^6}{120960}+\\frac{8744}{729}\\right)\\nonumber \\\\&\\quad \\quad \\quad +n_f\\left(-\\frac{7 \\zeta _2 \\zeta _3}{18}+\\frac{82\\zeta _2}{81}+\\frac{235 \\zeta _4}{144}+\\frac{31 \\zeta _5}{15}+\\frac{196 \\zeta _3}{81}-\\frac{5840}{729}\\right)\\bigg ] + \\mathcal {O}(\\epsilon ^5)\\, ,\\\\\\mathcal {I}^q_2 & =-\\frac{3 N_c^2\\zeta _2}{2 \\epsilon ^2} + N_c^2 \\left(\\frac{87 \\zeta _2}{4}+\\frac{25\\zeta _4}{16}+\\frac{41 \\zeta _3}{9}+\\frac{22537}{2592}\\right)+\\frac{1}{N_c^2}\\left(\\frac{21\\zeta _2}{4}-\\frac{83 \\zeta _4}{16}-\\frac{15 \\zeta _3}{2}+\\frac{255}{32}\\right)\\nonumber \\\\&\\quad \\quad \\quad +N_c n_f \\left(-4\\zeta _2-\\frac{23 \\zeta _3}{9}-\\frac{650}{81}\\right)+\\frac{n_f}{N_c}\\left(-\\zeta _2-\\frac{19 \\zeta _3}{9}-\\frac{505}{81}\\right)+\\frac{25n_f^2}{54}+\\frac{19 \\zeta _2}{2}\\nonumber \\\\&\\quad \\quad \\quad -\\frac{47\\zeta _4}{8}-\\frac{205 \\zeta _3}{18}+\\frac{28787}{648} \\nonumber \\\\&\\quad +\\epsilon \\bigg [ N_c^2 \\left(\\frac{161 \\zeta _2 \\zeta _3}{6}+\\frac{4055\\zeta _2}{144}+\\frac{587 \\zeta _4}{12}+\\frac{49 \\zeta _5}{2}+\\frac{898 \\zeta _3}{27}+\\frac{911797}{15552}\\right)+n_f^2\\left(\\frac{140}{81}-\\frac{5 \\zeta _2}{18}\\right)\\nonumber \\\\&\\quad \\quad \\quad +\\frac{1}{N_c^2}\\left(\\frac{49 \\zeta _2 \\zeta _3}{6}+\\frac{325 \\zeta _2}{16}-\\frac{201 \\zeta _4}{16}-3 \\zeta _5-\\frac{166 \\zeta _3}{3}+\\frac{2157}{64}\\right)\\nonumber \\\\&\\quad \\quad \\quad +N_cn_f \\left(-\\frac{61 \\zeta _2}{36}-\\frac{247\\zeta _4}{24}-\\frac{85 \\zeta _3}{27}-\\frac{36031}{972}\\right)-\\frac{5507 \\zeta _3}{54}+\\frac{746543}{3888}\\nonumber \\\\&\\quad \\quad \\quad +\\frac{n_f}{N_c} \\left(-\\frac{13\\zeta _2}{4}-\\frac{83 \\zeta _4}{24}-\\frac{17 \\zeta _3}{27}-\\frac{11983}{486}\\right)+13 \\zeta _2\\zeta _3+\\frac{115 \\zeta _2}{8}-\\frac{1283\\zeta _4}{48}+\\frac{121 \\zeta _5}{2} \\bigg ]\\nonumber \\\\&\\quad +\\epsilon ^2 \\bigg [N_c^2 \\left(-\\frac{3613 \\zeta _2 \\zeta _3}{18}+\\frac{5131\\zeta _2}{864}+\\frac{31811 \\zeta _4}{288}+\\frac{94 \\zeta _5}{5}-\\frac{293 \\zeta _3^2}{18}+\\frac{12007 \\zeta _3}{648}+\\frac{3251 \\pi ^6}{120960}\\right.\\nonumber \\\\&\\quad \\quad \\quad \\left.+\\frac{23246941}{93312}\\right)+N_c n_f\\left(\\frac{625 \\zeta _2 \\zeta _3}{18}+\\frac{1475\\zeta _2}{108}-\\frac{779 \\zeta _4}{72}-\\frac{143 \\zeta _5}{5}+\\frac{1993 \\zeta _3}{81}-\\frac{805855}{5832}\\right)\\nonumber \\\\&\\quad \\quad \\quad +\\frac{1}{N_c^2}\\left(10 \\zeta _2 \\zeta _3+\\frac{2287 \\zeta _2}{32}-\\frac{5627 \\zeta _4}{64}-\\frac{9\\zeta _5}{2}+\\frac{1255 \\zeta _3^2}{18}-\\frac{6205 \\zeta _3}{24}+\\frac{7193 \\pi ^6}{120960}+\\frac{13575}{128}\\right)\\nonumber \\\\&\\quad \\quad \\quad +\\frac{n_f}{N_c}\\left(\\frac{31\\zeta _2 \\zeta _3}{9}-\\frac{45 \\zeta _2}{4}-\\frac{503\\zeta _4}{144}-\\frac{151 \\zeta _5}{15}+\\frac{623 \\zeta _3}{81}-\\frac{227023}{2916}\\right)\\nonumber \\\\&\\quad \\quad \\quad +n_f^2\\left(-\\frac{53 \\zeta _2}{54}+\\frac{5 \\zeta _4}{48}-\\frac{35 \\zeta _3}{27}+\\frac{404}{81}\\right)+\\frac{1613 \\zeta _2 \\zeta _3}{36}+\\frac{197 \\zeta _2}{24}-\\frac{27175\\zeta _4}{144}+\\frac{791 \\zeta _5}{30}\\nonumber \\\\&\\quad \\quad \\quad +\\frac{1621 \\zeta _3^2}{18}-\\frac{170951 \\zeta _3}{324}+\\frac{17 \\pi ^6}{70}+\\frac{16114247}{23328} \\bigg ]+ \\mathcal {O}(\\epsilon ^3) \\, ,\\\\\\multicolumn{2}{l}{\\text{and}}\\\\\\mathcal {I}^g_1 & = N_c\\left(4 \\zeta _2-\\frac{67}{18}\\right)+\\frac{5 n_f}{9}+\\epsilon \\Bigg [N_c 
\\left(\\frac{17 \\zeta _3}{3}+\\frac{11 \\zeta _2}{12}-\\frac{202}{27}\\right)+n_f \\left(-\\frac{\\zeta _2}{6}+\\frac{28}{27}\\right)\\Bigg ]\\nonumber \\\\&\\quad +\\epsilon ^2 \\Bigg [N_c \\left(\\frac{41\\zeta _4}{8}+\\frac{77 \\zeta _3}{18}+\\frac{67 \\zeta _2}{36}-\\frac{1214}{81}\\right)+n_f \\left(-\\frac{7 \\zeta _3}{9}-\\frac{5 \\zeta _2}{18}+\\frac{164}{81}\\right)\\Bigg ]\\nonumber \\\\&\\quad +\\epsilon ^3 \\Bigg [N_c \\left(-\\frac{59 \\zeta _2 \\zeta _3}{6}+\\frac{67 \\zeta _5}{5}+\\frac{517 \\zeta _4}{96}+\\frac{469 \\zeta _3}{54}+\\frac{101 \\zeta _2}{27}-\\frac{7288}{243}\\right)\\nonumber \\\\&\\quad \\quad \\quad +n_f \\left(-\\frac{47 \\zeta _4}{48}-\\frac{35 \\zeta _3}{27}-\\frac{14\\zeta _2}{27}+\\frac{976}{243}\\right)\\Bigg ]\\nonumber \\\\&\\quad +\\epsilon ^4 \\Bigg [N_c \\left(-\\frac{\\pi ^6}{4320}-\\frac{70 \\zeta _3^2}{9}-\\frac{77 \\zeta _2 \\zeta _3}{36}+\\frac{341 \\zeta _5}{30}+\\frac{3149 \\zeta _4}{288}+\\frac{1414 \\zeta _3}{81}+\\frac{607 \\zeta _2}{81}-\\frac{43736}{729}\\right)\\nonumber \\\\&\\quad \\quad \\quad +n_f \\left(\\frac{7 \\zeta _2 \\zeta _3}{18}-\\frac{31 \\zeta _5}{15}-\\frac{235\\zeta _4}{144}-\\frac{196 \\zeta _3}{81}-\\frac{82 \\zeta _2}{81}+\\frac{5840}{729}\\right)\\Bigg ] + \\mathcal {O}(\\epsilon ^5)\\,, \\\\\\mathcal {I}^g_2 & = -\\frac{3 N_c^2 \\zeta _2}{2 \\epsilon ^2}+N_c^2 \\left(\\frac{9 \\zeta _4}{4}+\\frac{88 \\zeta _3}{9}+\\frac{335\\zeta _2}{18}-\\frac{26675}{648}\\right)+N_c n_f\\left(\\frac{2 \\zeta _3}{9}-\\frac{25 \\zeta _2}{9}+\\frac{2063}{216}\\right)\\nonumber \\\\& \\quad \\quad \\quad +\\frac{n_f}{N_c} \\left(2 \\zeta _3-\\frac{55}{24}\\right)-\\frac{25n_f^2}{162}\\nonumber \\\\&\\quad +\\epsilon \\Bigg [N_c^2 \\left(22 \\zeta _2 \\zeta _3-39 \\zeta _5+\\frac{275 \\zeta _4}{4}+\\frac{1865 \\zeta _3}{18} +\\frac{3191 \\zeta _2}{72}-\\frac{98671}{648}\\right)\\nonumber \\\\&\\quad \\quad \\quad +N_c n_f \\left(-\\frac{19 \\zeta _4}{2}-\\frac{157 \\zeta _3}{9}-\\frac{871\\zeta _2}{108}+\\frac{149033}{3888}\\right)\\nonumber \\\\&\\quad \\quad \\quad +\\frac{n_f}{N_c} \\left(3 \\zeta _4+\\frac{19 \\zeta _3}{3}+\\frac{\\zeta _2}{4}-\\frac{1711}{144}\\right)+n_f^2 \\left(\\frac{5 \\zeta _2}{54}-\\frac{140}{243}\\right)\\Bigg ] \\nonumber \\\\&\\quad +\\epsilon ^2 \\Bigg [N_c^2 \\left(-\\frac{4733 \\pi ^6}{30240}-\\frac{659 \\zeta _3^2}{18}-\\frac{8987 \\zeta _2 \\zeta _3}{36}-\\frac{187 \\zeta _5}{5}+\\frac{16103 \\zeta _4}{64}+\\frac{121859 \\zeta _3}{324}+\\frac{71263 \\zeta _2}{648}\\right.\\nonumber \\\\&\\quad \\quad \\quad \\left.-\\frac{6140957}{11664}\\right)+N_c n_f \\left(\\frac{781 \\zeta _2 \\zeta _3}{18}+\\frac{104 \\zeta _5}{5}-\\frac{5803 \\zeta _4}{144}-\\frac{5698 \\zeta _3}{81}-\\frac{1645 \\zeta _2}{72}+\\frac{3197809}{23328}\\right)\\nonumber \\\\&\\quad \\quad \\quad +\\frac{n_f}{N_c}\\left(-2 \\zeta _2 \\zeta _3+14 \\zeta _5+\\frac{19 \\zeta _4}{2}+\\frac{197 \\zeta _3}{9}+\\frac{55 \\zeta _2}{24}-\\frac{42727}{864}\\right)\\nonumber \\\\&\\quad \\quad \\quad +n_f^2\\left(-\\frac{5 \\zeta _4}{144}+\\frac{35 \\zeta _3}{81} + \\frac{53 \\zeta _2}{162}-\\frac{404}{243}\\right)\\Bigg ] + \\mathcal {O}(\\epsilon ^3)\\, .", "$ In order to express the gluon Regge trajectory, we define $K(\\alpha _s(\\mu )) = - \\frac{1}{4} \\int _\\infty ^{\\mu ^2} \\frac{d\\lambda ^2}{\\lambda ^2} \\gamma ^\\text{K}\\left(\\alpha _s(\\lambda ^2)\\right),$ with the perturbative expansion $K = \\sum _{\\ell \\ge 1} K_\\ell \\bar{\\alpha }_{s}^\\ell $ .", "The coefficients up to third order are $K_1 &= \\frac{\\gamma 
_0^\\text{K}}{\\epsilon }\\, , \\nonumber \\\\K_2 &= \\frac{2\\gamma _1^\\text{K}}{\\epsilon } - \\frac{\\beta _0 \\gamma _0^\\text{K}}{2 \\epsilon ^2}\\, ,\\nonumber \\\\K_3 &= \\frac{16\\gamma _2^\\text{K}}{3\\epsilon } - \\frac{4\\beta _0 \\gamma _1^\\text{K} + 4\\beta _1 \\gamma _0^\\text{K}}{3\\epsilon ^2} + \\frac{\\beta _0^2\\gamma _0^\\text{K}}{3\\epsilon ^3} \\, .$ The expansion coefficients of the gluon Regge trajectory $\\tau _\\ell $ can then be written as [33], [41] $\\tau _1 &= \\; e^{\\epsilon \\gamma _E} \\frac{\\Gamma (1-\\epsilon )^2 \\Gamma (1+\\epsilon )}{\\Gamma (1-2\\epsilon )} \\frac{2}{\\epsilon }, \\nonumber \\\\[8pt]\\tau _2 &= \\; K_2 -\\frac{56 n_f}{27}+ N_c\\left( \\frac{404}{27} - 2\\zeta _3\\right) +\\epsilon \\bigg [N_c \\left(\\frac{2428}{81}-66 \\zeta _3-\\frac{67 \\zeta _2}{9}-3\\zeta _4\\right) \\nonumber \\\\&\\quad + n_f\\left(12 \\zeta _3 -\\frac{328}{81}+\\frac{5\\pi ^2}{27}\\right) \\bigg ]+\\epsilon ^2 \\bigg [N_c \\bigg (82\\zeta _5+\\frac{142 \\zeta _2 \\zeta _3}{3}-\\frac{4556 \\zeta _3 }{27}+\\frac{14576}{243}\\nonumber \\\\&\\quad -\\frac{404 \\zeta _2}{27}-\\frac{2321 \\zeta _4}{24}\\bigg ) +n_f \\left(\\frac{680\\zeta _3}{27}-\\frac{1952}{243}+\\frac{56 \\zeta _2}{27}+\\frac{211 \\zeta _4}{12}\\right)\\bigg ] + \\mathcal {O}(\\epsilon ^3), \\nonumber \\\\[8pt]\\tau _3 &= \\; K_3 +N_c^2 \\bigg (16 \\zeta _5+\\frac{40 \\zeta _2 \\zeta _3}{3}-\\frac{77 \\zeta _4}{3}-\\frac{6664 \\zeta _3}{27} -\\frac{3196\\zeta _2}{81}+\\frac{297029}{1458}\\bigg )+ n_f^2\\left(\\frac{928}{729}-\\frac{128 \\zeta _3}{27}\\right)\\nonumber \\\\&\\quad +N_c n_f\\left(\\frac{412 \\zeta _2}{81}+\\frac{2 \\zeta _4}{3}+\\frac{632 \\zeta _3}{9}-\\frac{171449}{2916}\\right) +\\frac{n_f}{N_c} \\left(-4\\zeta _4-\\frac{76 \\zeta _3}{9}+\\frac{1711}{108}\\right) +\\mathcal {O}(\\epsilon ).$ Note that since one can expand $\\tau _1 = K_1 + O(\\epsilon )$ , the poles of $\\tau _g$ are given exactly by $K$ defined in eq.", "(REF ) (see also ref. [91]).", "The expressions above are also provided in electronic format in the arXiv submission of this article." ] ]
2207.03503
[ [ "Improving the accuracy of discretisations of the vector transport\n equation on the lowest-order quadrilateral Raviart-Thomas finite elements" ], [ "Abstract Within finite element models of fluids, vector-valued fields such as velocity or momentum variables are commonly discretised using the Raviart-Thomas elements.", "However, when using the lowest-order quadrilateral Raviart-Thomas elements, standard finite element discretisations of the vector transport equation typically have a low order of spatial accuracy.", "This paper describes two schemes that improve the accuracy of transporting such vector-valued fields on two-dimensional curved manifolds.", "The first scheme that is presented reconstructs the transported field in a higher-order function space, where the transport equation is then solved.", "The second scheme applies a mixed finite element formulation to the vector transport equation, simultaneously solving for the transported field and its vorticity.", "An approach to stabilising this mixed vector-vorticity formulation is presented that uses a Streamline Upwind Petrov-Galerkin (SUPG) method.", "These schemes are then demonstrated, along with their accuracy properties, through some numerical tests.", "Two new test cases are used to assess the transport of vector-valued fields on curved manifolds, solving the vector transport equation in isolation.", "The improvement of the schemes is also shown through two standard test cases for rotating shallow-water models." ], [ "Motivation", "Many numerical models of fluids involve transporting the velocity or momentum field, via solving an equation such as $ \\frac{\\partial {\\mathbf {F}}}{\\partial {t}} + \\left(\\mathbf {v \\cdot \\nabla } \\right) \\mathbf {F} = \\mathbf {0}.$ In this work, (REF ) and its variants are referred to as the vector transport equation.", "In (REF ), $\\mathbf {v}$ and $\\mathbf {F}$ are vector-valued functions and $\\mathbf {F}$ is transported by $\\mathbf {v}$ .", "A major class of such numerical models are those that use finite element methods, which have a major advantage of being easy to formulate on arbitrary meshes.", "In finite element methods, fields are expressed as the sum of a finite number of basis functions multiplied by coefficients.", "Each basis function is localised to a cell or a small number of cells on the mesh.", "The choice of basis functions and their continuity between cells is typically referred to as the finite element.", "As argued by [1], finite element discretisations can also offer advantages when considering the transport of vectors on two-dimensional curved manifolds, such as the surface of the sphere.", "In such cases, the vector transport equation generally includes metric terms, which describe accelerations induced by the curvature of the manifold itself.", "As discussed by [1], there are two standard approaches to handling these terms.", "Firstly the metric terms can be explicitly included in the equation, which typically involves evaluating the Christoffel symbols describing the curvature of the manifold.", "However for a general manifold this evaluation may not be straightforward, which can make this approach difficult or even impossible.", "In the alternative approach, the vectors are described in three Cartesian components, which adds an extra dimension to the equation.", "Then no metric terms appear in the transport equations for the components, but a third unknown has been added, and a constraint must also be applied to keep the transported vector in the tangent 
bundle of the manifold.", "Instead, finite element discretisations can combine the benefits of these two approaches.", "By writing the equation in a weak integral form and numerically evaluating the integral in Cartesian coordinates, the explicit evaluation of Christoffel symbols can be avoided.", "At the same time, no third component is added and the transported vector will naturally be tangent to the manifold.", "One family of finite elements that is often used for velocity or momentum variables is the Raviart-Thomas family, which can be defined on triangular or quadrilateral cells.", "The previous decade has seen particular interest in these finite elements from the numerical weather prediction (NWP) community.", "In many of the finite difference or finite volume methods used historically by this community, the density/pressure and velocity variables have been staggered according to the Arakawa C-grid of [2], [3], [4], due to its good representation of the wave modes of the shallow-water equations [5].", "It was shown by [6] that certain choices of finite element pairs for the density/pressure and velocity variables can still replicate the desirable dispersion properties of the Arakawa C-grid in a mixed finite element model of the shallow-water equations.", "In particular, the finite element equivalent of the velocity staggering used in the Arakawa C-grid on quadrilateral cells is the lowest-order Raviart-Thomas elements (i.e.", "the elements using the lowest degree polynomials in the basis functions).", "Maintaining this equivalence by using the lowest-order finite element spaces can be advantageous for other reasons, for instance that it can simplify the coupling to parametrisations which are used to represent unresolved physical processes in NWP models.", "It is these properties that have seen the lowest-order Raviart-Thomas elements become candidates for use in NWP models.", "For instance, the UK Met Office will use them in its next-generation model, LFRic (named after Lewis Fry Richardson); for more information on LFRic, see [7] and [8].", "The Met Office currently uses a longitude-latitude grid for its global simulations, which suffers from the pole problem.", "The convergence of meridians at the poles of the grid has begun to lead to bottlenecks in data communication on massively parallel supercomputers.", "The scalability of the model is then compromised, and the current forecasting model will be unable to exploit the computational power of the next generation of supercomputers.", "Moving to a finite element formulation facilitates the move to a cubed-sphere grid, which is quasi-uniform over the sphere and should avoid these scalability bottlenecks.", "However, a challenge with using these lowest-order elements is that typical discretisations of the transport equation do not have a satisfactory order of accuracy with respect to the grid spacing (as discussed by [5], this should be at least approaching second-order).", "One route to circumvent this is to use higher-order finite difference or finite volume methods to build up transport stencils, which is the approach used by [8].", "Unfortunately, finite difference or finite volume methods are not supported in many finite element software systems, where these methods may then be infeasible.", "In any case, as argued by [1], finite element discretisations may offer particular advantages for discretising the vector transport equation.", "The motivation is then to find finite element discretisations that do deliver improved 
accuracy, while in this work the computational cost of such schemes are of secondary concern.", "One approach to tackle the low order of accuracy of transport schemes for the lowest-order elements was presented by [9], which introduced a method of recovering fields in a higher-order finite element space for solving the transport equation.", "This resulted in higher-order accuracy overall while using the lowest-order finite elements.", "However [9] did not present a method that could be used for transporting vector-valued fields on curved manifolds.", "In this paper, two discretisations of the vector transport equation are shown to improve the order of accuracy for transport with the lowest-order Raviart-Thomas elements on quadrilateral cells, and crucially when the transport is on a two-dimensional curved manifold.", "The first method extends the recovery approach of [9] to curved manifolds, reconstructing the vector-valued field in a higher-order function space.", "The second method adapts a mixed finite element formulation similar to that of [10] to the vector transport equation.", "This scheme simultaneously solves for the transported vector and its vorticity.", "A stabilisation based on a Streamline Upwind Petrov-Galerkin (SUPG) approach is then presented for this scheme, based on [11].", "The remainder of the paper is laid out as follows.", "In Section , some background is given to the vector transport equation and the Raviart-Thomas elements, alongside a standard upwind finite element scheme for (REF ) that is used as a benchmark.", "The two schemes that improve on this are described in Sections and respectively.", "Section reviews the recovered transport approach of [9] before extending it for the Raviart-Thomas elements, while Section describes a mixed vector-vorticity discretisation like that of [10] in the context of the vector transport equation, and presents the new SUPG stabilisation to it.", "The new schemes are demonstrated through some test cases in Section , that cover both the vector transport equation on its own and also within a shallow-water model." 
], [ "The Vector Transport Equation", "This work considers the transport of some vector-valued field $\\mathbf {F}(\\mathbf {x},t)$ by some other vector-valued field $\\mathbf {v}(\\mathbf {x},t)$ , where $\\mathbf {x}$ is the position vector in the domain $$ and $t$ is the point in time.", "The domain $$ is a two-dimensional differentiable manifold, which may be embedded in two-dimensional space (so that the domain is a plane) or in three-dimensional space (for instance when the domain is the surface of a sphere).", "The vectors $\\mathbf {v}$ and $\\mathbf {F}$ live in the tangent bundle of $$ , and so can be expressed locally through two scalar components.", "This section briefly considers different forms of the vector transport equation.", "It is most easily expressed as (REF ), which is referred to as the advective form, and which is repeated again here: $ \\frac{\\partial {\\mathbf {F}}}{\\partial {t}} + \\left(\\mathbf {v \\cdot \\nabla } \\right) \\mathbf {F} = \\mathbf {0}.$ Before writing the next form of the equation, we introduce the perpendicular operation, denoted by superscript $^\\perp $ .", "Defining the unit normal outward from the manifold as $\\widehat{\\mathbf {N}}$ , then $\\mathbf {F}^\\perp $ is given by $ \\mathbf {F}^\\perp := \\widehat{\\mathbf {N}}\\times \\mathbf {F},$ with $\\times $ denoting the cross product.", "If $$ is a plane with components labelled $x$ and $y$ , then $\\mathbf {F}^\\perp =\\left(-F_y,F_x\\right)$ .", "With this definition, an alternative form of the vector transport equation is $\\frac{\\partial {\\mathbf {F}}}{\\partial {t}} + \\left(\\mathbf {\\nabla }^\\perp \\mathbf {\\cdot F}\\right) \\mathbf {v}^\\perp + \\frac{1}{2}{\\mathbf {\\nabla }{(\\mathbf {v\\cdot F})}}+ \\frac{1}{2}\\left[\\left(\\mathbf {\\nabla } \\mathbf {F}\\right)\\mathbf {\\cdot v}-\\left(\\mathbf {\\nabla }\\mathbf {v}\\right)\\mathbf {\\cdot F} \\right] = \\mathbf {0},$ where $\\mathbf {\\nabla }$ applied to a vector is tensor-valued, and we call the terms featuring this the vector-gradient terms.", "The second term of (REF ) might be more easily recognised as the two-dimensional version of $\\left(\\mathbf {\\nabla }\\times \\mathbf {F}\\right)\\times \\mathbf {v}$ .", "If the vorticity is defined by $ \\zeta := \\mathbf {\\nabla }^\\perp \\mathbf {\\cdot F},$ then (REF ) can also be written in vorticity form: $\\frac{\\partial {\\mathbf {F}}}{\\partial {t}} + \\zeta \\mathbf {v}^\\perp + \\frac{1}{2}{\\mathbf {\\nabla }{(\\mathbf {v\\cdot F})}}+ \\frac{1}{2}\\left[\\left(\\mathbf {\\nabla } \\mathbf {F}\\right)\\mathbf {\\cdot v}-\\left(\\mathbf {\\nabla } \\mathbf {v}\\right)\\mathbf {\\cdot F} \\right] = \\mathbf {0}.$ The velocity $\\mathbf {u}$ in fluid dynamics models is self-transporting, and in that context the vector transport equation becomes a form of the Burgers' equation.", "With $\\mathbf {v}=\\mathbf {F}=\\mathbf {u}$ , the vector-gradient terms of (REF ) cancel to yield the vector-invariant form: $ \\frac{\\partial {\\mathbf {u}}}{\\partial {t}} + \\zeta \\mathbf {u}^\\perp + \\tfrac{1}{2}\\mathbf {\\nabla }\\left(\\mathbf {u}\\mathbf {\\cdot }\\mathbf {u}\\right)= \\mathbf {0}.$ Finally, although not considered in this work, the vector transport equation can also be written in flux form: $ \\frac{\\partial {\\mathbf {F}}}{\\partial {t}} + \\mathbf {\\nabla \\cdot }\\left(\\mathbf {v}\\otimes \\mathbf {F}\\right)- \\left(\\mathbf {\\nabla \\cdot v}\\right)\\mathbf {F}=\\mathbf {0},$ where $\\otimes $ denotes the outer product of two vectors." 
], [ "Raviart-Thomas Elements", "The Raviart-Thomas family is an important class of finite elements used to describe two-dimensional vector fields.", "These elements were introduced for both triangular and quadrilateral elements by [12], who used them to solve the Poisson equation with a mixed finite element discretisation.", "The Raviart-Thomas elements come in two varieties: those that preserve the normal components of vectors between cells, and those that preserve the tangential components.", "These former elements (preserving normal components) are known as $H(\\mathrm {div})$ -conforming as functions in these elements have square-integrable divergence.", "The latter (preserving tangential components) are $H(\\mathrm {curl})$ -conforming, which in the context of a two-dimensional manifold means that for all fields $\\mathbf {u}$ in the corresponding finite element space, $\\mathbf {\\nabla }^\\perp \\mathbf {\\cdot u}$ is square-integrable (as well as $\\mathbf {u}$ also being square-integrable).", "As discussed by [13], the quadrilateral Raviart-Thomas elements can be represented as the tensor-product of one-dimensional elements.", "For definitions and more thorough descriptions of the elements, see [14].", "This work uses the nomenclature of [15], with $\\mathrm {RTc}^e_k$ representing the $k$ -th order $H(\\mathrm {curl})$ -conforming elements on quadrilateral elements and $\\mathrm {RTc}^f_k$ representing the $H(\\mathrm {div})$ -conforming elements on quadrilateral elements.", "The field of finite element exterior calculus (see [16]) explains how some finite elements can be related to others through the action of the exterior derivative.", "Such spaces can be part of a de Rham complex, which is the chain of spaces obtained by application of the exterior derivative.", "For instance, taking the divergence of a field in the $\\mathrm {RTc}^f_k$ space yields a field in the discontinuous Galerkin space $\\mathrm {DG}_{k-1}$ .", "The main families of finite elements that form de Rham complexes are captured in the periodic table of finite elements [15].", "In a compatible finite element model, variables in the discretisation are chosen to lie in the spaces of the discrete de Rham complex that correspond to their continuous analogues.", "In this structure, the discrete differential operators preserve vector calculus identities such as $\\mathbf {\\nabla }\\times \\mathbf {\\nabla }f=\\mathbf {0}$ for all scalar $f$ .", "Applied to fluid dynamics, this structure suggests that the velocity should lie in a $H(\\mathrm {div})$ -conforming space such as the $\\mathrm {RTc}^f_k$ elements.", "As mentioned in Section , it was shown by [6] that the choice of $\\mathrm {RTc}^f_k$ -$\\mathrm {DG}_{k-1}$ for the wind and height fields on quadrilateral elements in a shallow-water model gives a discretisation with many desirable properties.", "As explained by [6], this pair of elements has the optimal ratio of degrees of freedom (DoFs) for capturing the wave modes of the shallow-water equations, and it mimics the properties of the popular C-grid staggering used in finite difference models.", "For this reason, a similar discretisation will be used in the Met Office's new LFRic model, with the wind lying in the three-dimensional form of $\\mathrm {RTc}^f_1$ space.", "Although the use of the corresponding Raviart-Thomas elements on triangular elements in a shallow-water model has been investigated elsewhere, notably by [17] as part of the $\\mathrm {RT}^f_k$ -$\\mathrm {DG}_{k-1}$ pair, as shown by [6] it 
suffers from inferior representation of the shallow-water wave modes.", "Given this result, this work focuses on quadrilateral elements." ], [ "An Upwind Finite Element Discretisation", "This section describes a simple finite element discretisation for the advective form (REF ), which is used in the results of Section as a benchmark to compare with the improved schemes of Sections and .", "This discretisation is a generalisation of the upwind discontinuous-Galerkin method (first used by [18]) to vector-valued fields.", "For an overview of these methods see for instance [19].", "For these methods, the time discretisation generally does not affect the spatial accuracy, so discussion of the time discretisation is left until Section .", "Let $\\mathbf {v}$ and $\\mathbf {F}$ lie in function space $V_F$ made up of Raviart-Thomas elements.", "Multiplying (REF ) by a test function $\\mathbf {\\gamma }\\in V_F$ , integrating over the domain $$ and then integrating by parts gives, $\\forall \\mathbf {\\gamma }\\in V_F$ , $ \\begin{split}\\int _\\mathbf {\\gamma \\cdot } \\frac{\\partial {\\mathbf {F}}}{\\partial {t}} \\hspace{2.13394pt}\\mathrm {d}{x}+ \\int _\\left(\\mathbf {v}^+ \\mathbf {\\cdot }\\widehat{\\mathbf {n}}^+\\right)\\left.\\mathbf {\\gamma }\\right._{+}\\mathbf {\\cdot }\\mathbf {F}^\\dagger \\hspace{2.13394pt}\\mathrm {d}{S}- \\int _\\mathbf {F\\cdot }\\left[\\mathbf {\\nabla \\cdot }\\left(\\mathbf {\\gamma }\\otimes \\mathbf {v}\\right)\\right]\\hspace{2.13394pt}\\mathrm {d}{x} \\\\+ \\int _\\left(\\mathbf {v}^+ \\mathbf {\\cdot }\\widehat{\\mathbf {n}}^+\\right)\\left(\\mathbf {F}^\\dagger \\mathbf {\\cdot }\\widehat{\\mathbf {n}}^\\dagger \\right)\\left(\\mathbf {\\gamma }^\\ddagger \\mathbf {\\cdot }\\left[\\widehat{\\mathbf {n}}^++\\widehat{\\mathbf {n}}^- \\right]\\right) \\hspace{2.13394pt}\\mathrm {d}{S} = 0.\\end{split}$ Here $$ is the set of all interior facets of the domain.", "Each side of these facets can be arbitrarily labelled with a $+$ or $-$ , and $\\widehat{\\mathbf {n}}^+$ is defined as the outward normal from the $+$ side of a facet.", "The double square brackets $\\left.\\cdot \\right._+$ denote the jump of some field over a facet, so that $ \\left.\\mathbf {\\gamma } \\right._+ := \\mathbf {\\gamma }^+ - \\mathbf {\\gamma }^-.$ The upwind value at a facet is denoted by the dagger $^\\dagger $ , and is given by $ \\mathbf {F}^\\dagger := \\left\\lbrace \\begin{matrix}\\mathbf {F}^+ & \\mathrm {if} \\ \\mathbf {v^+ \\cdot }\\widehat{\\mathbf {n}}^+ \\ge 0, \\\\\\mathbf {F}^- & \\mathrm {if} \\ \\mathbf {v^+ \\cdot }\\widehat{\\mathbf {n}}^- < 0.\\end{matrix}\\right.$ The final term of (REF ) is a correction to project the upwind term into the tangent bundle, with the double dagger $^\\ddagger $ denoting the downwind term (i.e.", "from the opposite side of the facet to the upwind term).", "This correction is similar to that used by [1], so that both sides of the $\\left.\\mathbf {\\gamma }\\right._+$ term are evaluated in the tangent space of $\\mathbf {F}^\\dagger $ , on the upwind side of the facet.", "In a Cartesian plane, $\\widehat{\\mathbf {n}}^+=-\\widehat{\\mathbf {n}}^-$ and the correction vanishes, but this is not generally true for a curved manifold.", "Although this benchmark scheme performs well for general Raviart-Thomas spaces, it has low-order accuracy for the lowest-order spaces, which is demonstrated in Section .", "This poor performance can be understood heuristically by considering the components and basis functions of $\\mathbf {F}$ .", 
"Those that are parallel to $\\mathbf {v}$ are linear in a cell in the direction of $\\mathbf {v}$ .", "However the components of $\\mathbf {F}$ that are perpendicular to $\\mathbf {v}$ are only constant in a cell in the direction of $\\mathbf {v}$ .", "Conventional finite element discretisations of spatial derivatives for piecewise constant fields have only first-order accuracy or worse.", "For this reason, similar upwind discretisations (such as that of [20]) will also suffer from low orders of accuracy when applied to alternative forms of the transport equation such as (REF ) or (REF )." ], [ "Discretisation of the Shallow-Water Equations", "Some discretisations of geophysical fluids include a step in which the vector transport equation is solved in isolation.", "One such discretisation is the shallow-water model of [21] and [22], which uses a compatible finite element framework.", "This section briefly describes this model, which is used for the demonstrations in Section , applied to the lowest-order finite element spaces that correspond to those used by the Met Office's LFRic model [8], so that the velocity field $\\mathbf {u}$ and the depth field $h$ are in the $\\mathrm {RTc}^f_1$ and $\\mathrm {DG}_0$ spaces respectively.", "The rotating shallow-water equations can be expressed as $& \\frac{\\partial {\\mathbf {u}}}{\\partial {t}}+(\\mathbf {u\\cdot \\nabla })\\mathbf {u}+f\\mathbf {u}^\\perp +g{\\mathbf {\\nabla }{(h+h_b)}}=0, \\\\& \\frac{\\partial {h}}{\\partial {t}} + \\mathbf {\\nabla \\cdot }(h\\mathbf {u})=0,$ where $h_b$ is the height of the lower surface, $f$ is the Coriolis parameter and $g$ is acceleration due to gravity.", "For more discussion of the shallow-water equations, see for instance [23].", "The shallow-water model of [21] and [22] discretises (REF ) with a time stepping structure that follows the semi-implicit scheme used by both the Met Office's current ENDGame [24] and new GungHo dynamical cores [8].", "In this semi-implicit scheme, a time step consists of an outer loop in which the transport terms are evaluated, and an inner loop in which the implicit terms are obtained by solving a linearised form of (REF ).", "This linear problem is iterated to obtain the variables at the next time step, $\\mathbf {u}^{n+1}$ , $h^{n+1}$ .", "For the linear solver, the hybridised finite element technique presented by [22] is used.", "A thorough description of the semi-implicit time stepping scheme is also given by [22].", "As part of the outer loop of the time step, the transport terms are evaluated from $& \\frac{\\partial {\\mathbf {u}}}{\\partial {t}} = - (\\mathbf {u}_a\\mathbf {\\cdot \\nabla })\\mathbf {u}^*, \\\\& \\frac{\\partial {h}}{\\partial {t}} = - \\mathbf {\\nabla \\cdot }(h^n\\mathbf {u}_a),$ where $\\mathbf {u}^\\ast $ is $\\mathbf {u}^n$ incremented by the explicit pressure gradient and Coriolis terms.", "The transporting velocity is $\\mathbf {u}_a=\\frac{1}{2}\\left(\\mathbf {u}^n+\\mathbf {u}^{(k)}\\right)$ , where $\\mathbf {u}^{(k)}$ is the latest approximation of $\\mathbf {u}^{n+1}$ .", "So comparing (REF ) with (REF ), $\\mathbf {u}_a$ plays the role of $\\mathbf {v}$ and $\\mathbf {u}^*$ plays the role of $\\mathbf {F}$ .", "In this context, the discretisation of (REF ) can be treated as a “black box”, in which different schemes for solving the vector transport equation can be used." 
], [ "Recovery", "Inspired by the recovered finite element methods of [25] and the embedded transport scheme of [26], [9] presented a transport scheme to improve the spatial accuracy of discretisations for the lowest-order finite element spaces.", "This was particularly motivated for the discontinuous Galerkin space of piecewise constants, $\\mathrm {DG}_0$ .", "The recovered transport scheme of [9] involves recovering the field to be transported from $\\mathrm {DG}_0$ to $\\mathrm {DG}_1$ , and solving the transport equation in $\\mathrm {DG}_1$ before projecting the solution back to $\\mathrm {DG}_0$ .", "The recovery is performed using a simple averaging operator, which was indicated by [25] to have second-order accuracy.", "If then the transport scheme used for the $\\mathrm {DG}_1$ field has second-order accuracy, the whole scheme has second-order accuracy.", "However, [9] focused on Cartesian domains where the velocity field was transported by decomposing the field into orthogonal components and transporting each of these separately.", "This recovered approach was also used for solving the transport equation by [27] in the context of a moist compressible Euler model, but this also only focused on Cartesian domains.", "In this section, after reviewing the approach of [9], we present the extension to this to achieve higher-order transport on curved manifolds." ], [ "Review of recovered transport", "To start, we define a series of function spaces $\\left\\lbrace {V_L, \\widehat{V}_L, V_R, V_H}\\right\\rbrace $ and operators $\\left\\lbrace {\\mathcal {I}, \\mathcal {R}, \\mathcal {P}_L, \\widehat{\\mathcal {P}}_L, \\mathcal {J}, \\mathcal {T}}\\right\\rbrace $ that are used in the recovered transport scheme of [9].", "These are summarised in Table REF .", "Here a different terminology is used to that of [9] to make the new scheme clearer.", "The lowest-order function space is given by $V_L$ .", "This is the native function space of the transported variable $q$ , so that $q\\in V_L$ .", "The broken (fully-discontinuous) form of $V_L$ is then denoted by $\\widehat{V}_L$ .", "The higher-order space in which the transport will happen is $V_H$ , and an intermediate space into which $q$ is recovered is $V_R$ .", "In [9], the spaces were chosen so that $V_L\\subset V_H$ , $\\widehat{V}_L\\subset V_H$ and $V_R\\subset V_H$ , while $V_R$ was assumed to be fully-continuous.", "Section REF relaxes some of these requirements.", "As in [26], an injection operator $\\mathcal {I}_H:V\\rightarrow V_H$ identifies a field in one of $V_L$ , $\\widehat{V}_L$ and $V_R$ as also being a member of $V_H$ .", "The operator $\\mathcal {P}_L:V\\rightarrow V_L$ is a Galerkin projection from some space into the lower-order space.", "Taking arbitrary $\\mathbf {u}\\in V$ (e.g.", "$V_H$ ), the action of $\\mathcal {P}_L$ so that $\\mathbf {y} = \\mathcal {P}_L\\mathbf {u}$ with $\\mathbf {y}\\in V_L$ is given by $\\int _\\mathbf {\\gamma \\cdot y} \\hspace{2.13394pt}\\mathrm {d}{x} = \\int _\\mathbf {\\gamma \\cdot u} \\hspace{2.13394pt}\\mathrm {d}{x}, \\ \\ \\ \\ \\forall \\mathbf {\\gamma } \\in V_L.$ Similarly, $\\widehat{P}_L:V_R\\rightarrow \\widehat{V}_L$ is a Galerkin projection.", "The key operator in the reconstruction is $\\mathcal {R}:V_L \\rightarrow V_R$ .", "This is the recovery operator, and should have second-order spatial accuracy.", "This can be achieved by using an averaging operator, so that for the DoFs of $V_R$ that are shared between cells, the field values are the average of values from 
neighbouring cells of the field in $V_L$ .", "At any domain boundaries, improved accuracy can be obtained by extrapolating from values on the interior (for more details see [28]).", "The full operator for reconstructing the higher-order field is $\\mathcal {J}:V_L\\rightarrow V_H$ , defined by $\\mathcal {J} := \\mathcal {I}_H+ \\mathcal {I}_H\\mathcal {R} - \\mathcal {I}_H\\widehat{\\mathcal {P}}_L\\mathcal {R}.$ The addition of $\\mathcal {I}_H-\\mathcal {I}_H\\widehat{\\mathcal {P}}_L\\mathcal {R}$ ensures that the mass of $\\mathbf {u}\\in V_L$ and $\\mathcal {J}\\mathbf {u}\\in V_H$ will be preserved in each cell, and that $\\mathcal {P}_L\\mathcal {J}\\mathbf {u}=\\mathbf {u}$ , so that if no transport happens then $\\mathbf {u}$ will remain unchanged.", "Finally, $\\mathcal {T}:V_H\\rightarrow V_H$ performs a single transport step in $V_H$ .", "Denoting the value of $\\mathbf {u}\\in V_L$ at the $n$ -th time step as $\\mathbf {u}^n$ , a whole transport step is described by $ \\mathbf {u}^{n+1} = \\mathcal {P}_L\\mathcal {T}\\mathcal {J}\\mathbf {u}^{n}.$ Table: A summary of the variables used to describe the function spaces and operators involved in the recovered transport scheme.", "The space VV without a subscript is used to represent a range of the other defined spaces." ], [ "Extension to curved manifolds", "In [9], the higher-order space $V_H$ for the transport of scalar-valued fields was taken as the $\\mathrm {DG}_1$ space, while $V_R$ was the linear continuous Galerkin space $\\mathrm {CG}_1$ .", "To transport the velocity field, it was separated into orthogonal components which were each separately reconstructed in $\\mathrm {DG}_1$ .", "This was possible because [9] only considered Cartesian domains.", "However using this approach on curved manifolds presents problems.", "For instance, consider two vectors at two different points of the manifold, both pointing along the geodesic that joins the two points.", "These vectors will generally lie in two different tangent planes.", "A vector lying on the midpoint of the geodesic between the two points should not be reconstructed by using the average of the Cartesian components.", "This would generally not lie in the tangent space itself, and its projection into the tangent space will likely under-approximate the vector's size.", "For some domains this could be resolved by averaging in some other orthogonal coordinate system, but this work is motivated by geophysical applications and in particular the sphere, where the topology also presents challenges (for instance when a spherical-polar coordinate system is used then the components do not make sense at the poles).", "In this section, we extend the scheme of [9] by careful choice of the spaces and operators described in Section REF so as to avoid these problems and achieve a higher-order transport scheme for velocities in the lowest-order Raviart-Thomas spaces.", "The broad structure of the scheme is the same, following equation (REF ), so that $\\mathbf {u}^{n+1} = \\mathcal {P}_L\\mathcal {T}\\mathcal {J}\\mathbf {u}^{n}.$ The main difference is that the operator $\\mathcal {J}$ will be defined differently.", "Although it is still required that that $V_L\\subset V_H$ , it is no longer assumed that $V_R$ or $\\widehat{V}_L$ are subsets of $V_H$ .", "Another difference is the introduction of $\\widehat{V}_R$ , the space of broken elements of $V_R$ .", "The recovery operator $\\mathcal {R}:V_L\\rightarrow V_R$ is split into two steps: firstly a Galerkin projection 
$\\widehat{P}_R:V_L\\rightarrow \\widehat{V}_R$ , and secondly an averaging operator $\\mathcal {A}:\\widehat{V}_R\\rightarrow V_R$ .", "The averaging operator restores the continuity of a field in $\\widehat{V}_R$ , by setting the values at DoFs of $V_R$ that are shared between cells to be the average of the values from the neighbouring cells of the field in $\\widehat{V}_R$ .", "The recovery operator is then expressed as $\\mathcal {R}=\\mathcal {A}\\widehat{\\mathcal {P}}_R.$ As $V_R\\nsubseteq V_H$ , in place of the injection operator $\\mathcal {I}_H$ we simply use a Galerkin projection $\\mathcal {P}_H : V\\rightarrow V_H$ .", "Then the whole reconstruction operator $\\mathcal {J}$ can be expressed as $\\mathcal {J} := \\mathcal {I}_H+\\mathcal {P}_H\\mathcal {R} - \\mathcal {I}_H\\mathcal {P}_L\\mathcal {P}_H\\mathcal {R},$ Again, the addition of $\\mathcal {I}_H-\\mathcal {I}_H\\mathcal {P}_L\\mathcal {P}_H\\mathcal {R}$ ensures that the whole operation will be reversible in the absence of transport, as $\\mathcal {P}_L\\mathcal {J} & =\\mathcal {P}_L\\mathcal {I}_H + \\mathcal {P}_L\\mathcal {P}_H\\mathcal {R}- \\mathcal {P}_L\\mathcal {I}_H\\mathcal {P}_L\\mathcal {P}_H\\mathcal {R} \\\\& = \\mathcal {P}_L\\mathcal {I}_H + \\mathcal {P}_L\\mathcal {P}_H\\mathcal {R}- \\mathcal {P}_L\\mathcal {P}_H\\mathcal {R} \\\\& = \\mathcal {P}_L\\mathcal {I}_H ,$ which when acting upon a field in $V_L$ is the identity operator, since $V_L\\subset V_H$ ." ], [ "Choice of function spaces", "Armed with the extension of the recovery scheme presented in Section REF , now consider the motivating case when $V_L$ is the lowest-order $H(\\mathrm {div})$ Raviart-Thomas space for quadrilateral cells.", "In general, there will be multiple possible choices for $V_R$ and $V_H$ that will satisfy the requirements presented in Section REF , but here we only present the specific choices that are demonstrated in Section .", "These spaces are illustrated in Table REF .", "The general strategy is to choose $V_H$ to have the same continuity properties as $V_L$ , but with increased polynomial order.", "For the $H(\\mathrm {div})$ Raviart-Thomas spaces, the components of the vector field that are normal to cell edges are continuous, and these already have a higher-order representation.", "However the components that are tangential to cell edges are discontinuous with a lower-order representation.", "Therefore we choose $V_R$ to be the higher-order $H(\\mathrm {curl})$ space corresponding to $V_H$ , whose tangential components are continuous between cells.", "On quadrilateral cells, $V_L$ is $\\mathrm {RTc}^f_1$ , which has a single DoF for each edge of the cell.", "The higher-order space $V_H$ is $\\mathrm {RTc}^f_2$ , so from the same family as $V_L$ but with higher polynomial order.", "The recovered space $V_R$ is the $H(\\mathrm {curl})$ form, $\\mathrm {RTc}^e_2$ .", "For the transport operator $\\mathcal {T}$ , this work uses the benchmark upwind discretisation (REF ) in the higher-order space $V_H$ , combined with a trapezoidal time discretisation that will be described in Section .", "Table: Representations of the finite elements discussed in Section for use in the recovered finite element method for transporting fields in the lowest-order Raviart-Thomas spaces.These are the specific choices of element that are used in the demonstrations of Section .The diamonds represent the DoFs of the element, and whether the DoFs describe the components of the vector that are tangential or normal to the cell edges." 
], [ "Vorticity Form", "As discussed at the end of Section REF , the low order of accuracy of transport of fields in $\\mathrm {RTc}^{f}_1$ can be attributed to the representation of the components of $\\mathbf {F}$ that are perpendicular to the transporting velocity $\\mathbf {v}$ , as these components are only constant within a cell in the direction of $\\mathbf {v}$ .", "When expressing the vector transport equation in the vorticity form (REF ), the transport of these components is captured in part by using the vorticity $\\zeta $ through the $\\zeta \\mathbf {v}^\\perp $ term.", "This vorticity can be expressed weakly in a space $V_\\zeta $ through $ \\int _\\eta \\; \\zeta \\hspace{2.13394pt}\\mathrm {d}{x} =- \\int _{\\mathbf {\\nabla }{^\\perp \\eta }}\\mathbf {\\cdot F}\\hspace{2.13394pt}\\mathrm {d}{x}, \\qquad \\forall \\eta \\in V_\\zeta ,$ where for simplicity terms associated with the boundary of the domain have been neglectedFor a discussion including boundary terms, see [10] and [11]..", "In the compatible finite element framework, with $\\mathbf {F}\\in \\mathrm {RTc}^f_k$ , the space $V_\\zeta $ is the continuous Galerkin space $\\mathrm {CG}_k$ , as for all $\\eta \\in \\mathrm {CG}_k$ then $\\mathbf {\\nabla }^\\perp \\eta \\in \\mathrm {RTc}^f_k$ .", "Thus when using the lowest-order elements, $\\zeta $ is piece-wise linear, which suggests that a formulation using the vorticity could improve the accuracy of the transport of $\\mathbf {F}\\in \\mathrm {RTc}^f_1$ .", "Due to its favourable properties with regards to the system's total energy and potential vorticity budgets [29], the vector-invariant form (REF ) of the momentum equation is popular in NWP models (e.g.", "[30], [31]).", "It has also been used in the context of the compatible finite element discretisations of the shallow-water equations with vorticity as an auxiliary variable by [10], [11], [32], and [33].", "In [32], a modified vorticity is diagnosed from the velocity field and used in its transport.", "The modification to the vorticity improves the stability of the vector transport term by dissipating enstrophy without compromising on energy conservation.", "This is known as the anticipated potential vorticity method (APVM).", "However, this approach is not consistent with the continuous equations, in the sense that strong solutions do not necessarily satisfy the discrete equations.", "To overcome this problem, an extension to the APVM term was developed by [10], which used a mixed finite element problem to solve simultaneously for the velocity and the vorticity evolution equations, where the latter is stabilised by an SUPG method.", "For a recent comparison including the APVM and SUPG methods, see [34].", "Finally, [11] introduced a methodology to apply an SUPG-based stabilisation method to more general vorticity evolution equations, which may contain additional terms such as ones arising from temperature gradients.", "The methodology relies on the SUPG method's residual-based form.", "In this section, we present a mixed velocity-vorticity approach for the vector transport equation similar to the setup of [10].", "However, here we follow a “black box” approach in which the non-transport terms in the broader equation set are not included in the vorticity transport equation.", "To achieve this, we apply a residual-based setup akin to the one presented in [11].", "In order to derive an evolution equation for the vorticity, we consider the vorticity form of the vector transport equation $\\frac{\\partial 
{\\mathbf {F}}}{\\partial {t}} + \\zeta \\mathbf {v}^\\perp + \\frac{1}{2}{\\mathbf {\\nabla }{(\\mathbf {v\\cdot F})}}+ \\mathbf {G}(\\mathbf {F}) = \\mathbf {0},$ with $\\mathbf {G}(\\mathbf {F}) = \\frac{1}{2}\\left[\\left(\\mathbf {\\nabla } \\mathbf {F}\\right)\\mathbf {\\cdot v}-\\left(\\mathbf {\\nabla } \\mathbf {v}\\right)\\mathbf {\\cdot F} \\right].", "$ Applying the $\\mathbf {\\nabla }^\\perp \\mathbf {\\cdot }$ operator and using $\\mathbf {\\nabla ^\\perp \\cdot \\nabla }f = 0$ for scalar $f$ yields a vorticity equation of the form $\\frac{\\partial {\\zeta }}{\\partial {t}} + \\mathbf {\\nabla } \\mathbf {\\cdot } (\\zeta \\mathbf {v})+ \\mathbf {\\nabla ^\\perp \\cdot } \\mathbf {G}(\\mathbf {F}) = \\mathbf {0},$ noting that we applied the identity $\\mathbf {a}^\\perp \\mathbf {\\cdot b}^\\perp = \\mathbf {a} \\mathbf {\\cdot b}$ , for any vectors $\\mathbf {a}, \\mathbf {b}$ , to obtain $\\mathbf {\\nabla }^\\perp \\mathbf {\\cdot } (\\zeta \\mathbf {v}^\\perp ) = \\mathbf {\\nabla } \\mathbf {\\cdot } (\\zeta \\mathbf {v}).", "$ We arrive at our discretisation by multiplying (REF ) and (REF ) by test functions $\\mathbf {\\gamma }\\in V_F$ and $\\eta \\in V_\\zeta $ , which yields a mixed finite element problem with two equations to be solved simultaneously: $& \\int _\\mathbf {\\gamma } \\mathbf {\\cdot } \\frac{\\partial {\\mathbf {F}}}{\\partial {t}} \\hspace{2.13394pt}\\mathrm {d}{x} +\\int _\\mathbf {\\gamma } \\mathbf {\\cdot } (\\zeta \\mathbf {v}^\\perp )\\hspace{2.13394pt}\\mathrm {d}{x} -\\frac{1}{2} \\int _(\\mathbf {v\\cdot F}) \\; (\\mathbf {\\nabla \\cdot } \\mathbf {\\gamma })\\hspace{2.13394pt}\\mathrm {d}{x}+ \\mathbf {G}^{\\prime }(\\mathbf {F}; \\mathbf {\\gamma }) = 0, &\\forall \\mathbf {\\gamma } \\in V_F, \\\\& \\int _\\eta \\frac{\\partial {\\zeta }}{\\partial {t}} \\hspace{2.13394pt}\\mathrm {d}{x} - \\int _\\mathbf {\\nabla }\\eta \\mathbf {\\cdot } (\\zeta \\mathbf {v})\\hspace{2.13394pt}\\mathrm {d}{x} - \\mathbf {G}^{\\prime }(\\mathbf {F}; \\mathbf {\\nabla ^\\perp } \\eta ) = 0, & \\forall \\eta \\in V_\\zeta , $ where the initial discrete vorticity $\\zeta $ is defined by (REF ), and $\\mathbf {G}^{\\prime }$ is a weak discretisation of $\\mathbf {G}$ , whose specific value is postponed to later in this section.", "Note that to arrive at the above weak vorticity equation, we applied integration by parts according to $\\int _\\Omega \\eta \\mathbf {\\nabla \\cdot } (\\zeta \\mathbf {v}) \\hspace{2.13394pt}\\mathrm {d}{x} = - \\int _\\Omega \\mathbf {\\nabla } \\eta \\mathbf {\\cdot } (\\zeta \\mathbf {v}) \\hspace{2.13394pt}\\mathrm {d}{x} && \\forall \\eta \\in V_\\zeta ,$ which does not include any additional facet integral terms since the choice of finite element spaces ensures that the normal component of $\\zeta \\mathbf {v}$ is continuous.", "Since () is true for all $\\mathbf {\\gamma }$ , it is also true for $\\mathbf {\\gamma }=-{\\mathbf {\\nabla }{^\\perp }}\\eta $ which recovers () (by cancelling perpendicular operations similar to (REF )).", "This means that by solving these equations simultaneously, the evolution of the discrete $\\mathbf {F}$ and its vorticity are kept consistent.", "In the form of (), the discrete vorticity evolution equation does not contain any transport stabilisation measures, and we may therefore expect it to be vulnerable to grid-scale oscillations.", "In the context of a shallow-water model this can correspond to a lack of dissipation of enstrophy, which naturally cascades to fine scales but gets 
trapped at the grid scale without a mechanism to dissipate it [32].", "Since the vorticity is discretised as a $\\mathrm {CG}_k$ field, this can be remedied by using a stabilisation based on the SUPG method.", "The usual Petrov-Galerkin approach to applying an SUPG stabilisation is to adjust the test function to include a transport contribution via $\\eta \\; \\rightarrow \\; \\eta + \\tau \\mathbf {v} \\mathbf {\\cdot \\nabla } \\eta , $ where $\\tau $ denotes a suitable stabilisation parameter with dimensions of time.", "However, modifying only the test function for () breaks the consistency between the evolution equations of $\\mathbf {F}$ and $\\zeta $ .", "Instead, we consider a residual-based approach like those used by [11].", "This uses the residual of the strong form of the vorticity equation: $\\zeta _{res} = \\frac{\\partial {\\zeta }}{\\partial {t}} + \\mathbf {\\nabla } \\mathbf {\\cdot } (\\zeta \\mathbf {v})+ \\mathbf {\\nabla ^\\perp \\cdot } \\mathbf {G}(\\mathbf {F}).$ Then, the vorticity appearing in the discretisation () is modified to give $& \\int _\\mathbf {\\gamma } \\mathbf {\\cdot } \\frac{\\partial {\\mathbf {F}}}{\\partial {t}} \\hspace{2.13394pt}\\mathrm {d}{x} +\\int _\\mathbf {\\gamma } \\mathbf {\\cdot } (\\zeta ^\\ast \\mathbf {v}^\\perp )\\hspace{2.13394pt}\\mathrm {d}{x} -\\frac{1}{2} \\int _(\\mathbf {v\\cdot F}) \\; (\\mathbf {\\nabla \\cdot } \\mathbf {\\gamma })\\hspace{2.13394pt}\\mathrm {d}{x}+ \\mathbf {G}^{\\prime }(\\mathbf {F}; \\mathbf {\\gamma }) = 0, &\\forall \\mathbf {\\gamma } \\in V_F, \\\\& \\int _\\eta \\frac{\\partial {\\zeta }}{\\partial {t}} \\hspace{2.13394pt}\\mathrm {d}{x} - \\int _\\mathbf {\\nabla } \\eta \\mathbf {\\cdot } (\\zeta ^\\ast \\mathbf {v})\\hspace{2.13394pt}\\mathrm {d}{x} - \\mathbf {G}^{\\prime }(\\mathbf {F}; \\mathbf {\\nabla ^\\perp } \\eta ) = 0, & \\forall \\eta \\in V_\\zeta , $ with $\\zeta ^\\ast = \\zeta - \\tau \\zeta _{res}$ .", "Note that after discretisation, the differential operations occurring in $\\zeta _{res}$ are applied cell-wise, including those of $\\mathbf {G}(\\mathbf {F})$ as defined by (REF ).", "There is then choice in the time discretisation; this is discussed briefly in Section .", "We conclude the description of the SUPG stabilisation with the following four observations.", "First, the choice of residual $\\zeta _{res}$ ensures that the discretisation () is consistent with the strong equation of the vorticity evolution, as then $\\zeta _{res} = 0$ and $\\zeta ^\\ast $ reduces to $\\zeta $ .", "At the same time the evolution equations for $\\mathbf {F}$ and $\\zeta $ are still consistent with one another.", "Secondly, the modification to the vorticity used in () has a stabilising effect akin to the more standard SUPG modification (REF ).", "This can be seen by setting $\\eta = \\zeta $ in (), which leads to a non-positive definite term of the form $\\frac{1}{2}\\frac{d}{dt}\\Vert \\zeta \\Vert _2^2 = \\int _\\zeta \\frac{\\partial {\\zeta }}{\\partial {t}}\\hspace{2.13394pt}\\mathrm {d}{x} = \\cdots - \\Vert \\sqrt{\\tau } \\mathbf {v} \\cdot \\mathbf {\\nabla } \\zeta \\Vert _2^2, $ on the equation's right-hand side, showing that as expected for the SUPG method, there is potential for vorticity dissipation along the direction of the flow.", "Thirdly, whilst the above formulation allows for the dissipation of vorticity, it does not necessarily dissipate the divergence field.", "If the latter field is large, additional stabilisation mechanisms may be required for the transport of 
$\\mathbf {F}$ .", "An example for this would be an interior penalty term [35], based on the divergence field $\\mathbf {\\nabla \\cdot \\mathbf {F}} \\in \\text{DG}_{k-1}$ .", "For the type of shallow-water scenarios typically considered in numerical weather prediction, the divergence field is small, and no such additional mechanism is required.", "Lastly, if the term $\\mathbf {G}$ and its weak discrete version $\\mathbf {G}^{\\prime }$ are equal to zero – as will be the case if the advecting velocity $\\mathbf {v}$ is set equal to $\\mathbf {F}$ – then the vorticity evolution equation can be rewritten in standard SUPG form $\\int _\\left( \\eta + \\tau \\mathbf {v} \\cdot \\mathbf {\\nabla } \\eta \\right) \\left( \\frac{\\partial {\\zeta }}{\\partial {t}} + \\mathbf {\\nabla } \\mathbf {\\cdot } (\\zeta \\mathbf {v}) \\right)\\hspace{2.13394pt}\\mathrm {d}{x} = 0, && \\forall \\eta \\in V_\\zeta .$ Note that to arrive at the above equation, we applied integration by parts, which does not lead to any additional facet integrals as mentioned above when deriving ().", "When $\\mathbf {G}^{\\prime }$ is non-zero, the non-equivalence of the Petrov-Galerkin and residual-based approaches is a necessity arising from formulating () in a manner consistent with (REF ).", "This non-equivalence can also be found in other applications in the literature, such as SUPG discretisations of the Navier-Stokes equations.", "In the latter case, a residual-based formulation may be preferred in order to avoid a double-derivative applied to the SUPG-modified test function in the weak diffusion term [36].", "It remains to describe the weak, discrete operator $\\mathbf {G}^{\\prime }$ .", "In order to stabilise the gradient terms occurring in $\\mathbf {G}$ , an upwind formulation is used, so that for test functions $\\mathbf {w}$ $\\mathbf {G}^{\\prime }(\\mathbf {F}; \\mathbf {w}) = \\frac{1}{2} \\int _\\left(\\mathbf {F\\cdot }\\left[{\\mathbf {\\nabla }{}}\\mathbf {\\cdot }(\\mathbf {v}\\otimes \\mathbf {w})\\right] -\\mathbf {v \\cdot }\\left[{\\mathbf {\\nabla }{}}\\mathbf {\\cdot }(\\mathbf {F} \\otimes \\mathbf {w})\\right]\\right)\\hspace{2.13394pt}\\mathrm {d}{x}+\\frac{1}{2}\\int _\\Gamma \\left(\\mathbf {w}^+\\mathbf {\\cdot }\\widehat{\\mathbf {n}}^+\\right)\\left(\\llbracket \\mathbf {v}\\rrbracket _+ \\mathbf {\\cdot }\\mathbf {F}^\\dagger -\\llbracket \\mathbf {F}\\rrbracket _+ \\mathbf {\\cdot }\\mathbf {v}^\\dagger \\right) \\hspace{2.13394pt}\\mathrm {d}{S}.$ Again, terms associated with the boundaries of the domain have been neglected.", "As with the benchmark (REF ), a correction could be added to project the upwind term into the tangent bundle.", "Finally, it should be stressed that in the context of a shallow-water model, alternative variables to the (relative) vorticity $\\zeta $ are the absolute vorticity $\\omega =\\mathbf {\\nabla }^\\perp \\mathbf {\\cdot u} +f$ or potential vorticity $q = (\\mathbf {\\nabla ^\\perp \\cdot u} + f)/h$ .", "As the potential vorticity is conserved along the flow, it is often preferred to $\\zeta $ , as in the case of the compatible finite element discretisations in [32] and [10].", "In particular, the APVM and SUPG stabilisations derived in the aforementioned papers dissipate the enstrophy $hq^2$ , while conserving the system's total energy.", "These works also solved the vorticity evolution equation corresponding to the whole shallow-water equation for the velocity (REF ), whereas in this section we consider only the transport part; as mentioned in 
Section REF the motivation here is to find a “black box” to solve the vector transport equation.", "The addition of the SUPG stabilisation is still consistent within the transport step, and any errors arising from not applying SUPG to the whole equation will do so in the form of a splitting error in time.", "Note that this SUPG setup is different to the ones used in [10] and [11].", "In the former, a different vorticity variable is used, leading to a forcing contribution of the form $g{\\mathbf {\\nabla }{(h+h_b)}}$ , which vanishes in the vorticity evolution equation (since $\\mathbf {\\nabla ^\\perp \\cdot } {\\mathbf {\\nabla }{(h+h_b)}} = 0$ ).", "In the latter, there is no time splitting, and a forcing contribution $\\mathbf {\\nabla ^\\perp \\cdot } \\mathbf {J}$ for some baroclinic forcing terms $\\mathbf {J}$ is included in the vorticity equation's residual.", "While these approaches avoid errors due to time splitting and lead to additional conservation properties such as energy conservation, they require additional information from the equations and cannot be used as “black box” vector transport methods.", "In particular, this is a drawback for code implementations: while a “black box” setup can be used for a variety of different equation sets, in the specific setups of [10] and [11], the transport implementation has to be adjusted each time the overall equation sets are changed." ], [ "Numerical Results", "This section demonstrates the schemes presented in Sections and , through some transport-only tests in Section REF and in the context of a shallow-water model in Section REF .", "The new schemes are compared with the benchmark scheme of Section REF , all applied to the lowest-order quadrilateral Raviart-Thomas elements.", "Throughout this section, the equations are discretised in time using the trapezoidal rule.", "If the discretisation of the integrated transport term is given by $\\mathcal {G}\\left[\\mathbf {\\gamma },\\mathbf {v},\\mathbf {F}\\right]$ , then the value of $\\mathbf {F}$ at the $(n+1)$ -th time step is found from $ \\int \\mathbf {\\gamma \\cdot }\\left(\\mathbf {F}^{n+1} -\\mathbf {F}^{n}\\right)\\hspace{2.13394pt}\\mathrm {d}{x} = \\frac{\\Delta t}{2}\\left(\\mathcal {G}\\left[\\mathbf {\\gamma },\\mathbf {v},\\mathbf {F}^n\\right] + \\mathcal {G}\\left[\\mathbf {\\gamma },\\mathbf {v},\\mathbf {F}^{n+1}\\right] \\right), \\qquad \\forall \\mathbf {\\gamma }\\in V_F.$ This yields a matrix-vector problem for $\\mathbf {F}^{n+1}$ which is then solved to obtain the transported solution.", "For the mixed vorticity scheme of Section , the integrated transport terms for $\\mathbf {F}$ and $\\zeta $ are given respectively by $\\mathcal {G}\\left[\\mathbf {\\gamma },\\mathbf {v},\\mathbf {F},\\zeta \\right]$ and $\\mathcal {H}\\left[\\eta ,\\mathbf {v},\\mathbf {F},\\zeta \\right]$ , so that the trapezoidal rule is given by $\\int \\mathbf {\\gamma \\cdot }\\left(\\mathbf {F}^{n+1} -\\mathbf {F}^{n}\\right)\\hspace{2.13394pt}\\mathrm {d}{x} = \\frac{\\Delta t}{2}\\left(\\mathcal {G}\\left[\\mathbf {\\gamma },\\mathbf {v},\\mathbf {F}^n,\\zeta ^{n}\\right] + \\mathcal {G}\\left[\\mathbf {\\gamma },\\mathbf {v},\\mathbf {F}^{n+1},\\zeta ^{n+1}\\right] \\right), \\qquad &\\forall \\mathbf {\\gamma }\\in V_F, \\\\\\int \\eta \\left(\\zeta ^{n+1} -\\zeta ^{n}\\right)\\hspace{2.13394pt}\\mathrm {d}{x} = \\frac{\\Delta t}{2}\\left(\\mathcal {H}\\left[\\eta ,\\mathbf {v},\\mathbf {F}^n,\\zeta ^n\\right] + \\mathcal {H}\\left[\\eta ,\\mathbf {v},\\mathbf {F}^{n+1},\\zeta 
^{n+1}\\right] \\right), \\qquad &\\forall \\eta \\in V_\\zeta .$ To implement these schemes, we used the Firedrake software, [37], which is a library for solving PDEs using finite element methods and is built on the PETSc solver library [38].", "Firedrake constructs the quadrilateral Raviart-Thomas elements as tensor-product elements [39] and provides support for the hybridised solver [40] used in the shallow-water model of Section REF .", "The orthographic projections were plotted using the Cartopy python package [41].", "Finally, for the SUPG method used in the stabilised voriticity discretisation, we consider a stabilisation parameter of the form $\\tau = \\left(\\lambda \\frac{2}{\\Delta t} + \\frac{2|\\mathbf {u}|}{\\Delta x} \\right)^{-1},$ for local mesh size $\\Delta x$ , and a tuning parameter $\\lambda \\ge 0$ .", "The latter parameter can be seen to adjust the stabilisation's “aggressiveness” and in this section, we took $\\lambda =0.5$ ; for details, see [11]." ], [ "Transport-only tests", "Although the literature on numerical weather prediction contains many test cases for the transport of scalar fields, there are few for the transport of vector fields.", "This section describes two test cases on curved manifolds that may be used for assessing the convergence properties of transport schemes for vector fields." ], [ "Deformation on the cylinder", "The surface of a cylinder is a curved manifold on which the vector transport equation does not have metric terms.", "It is therefore straightforward to adapt existing transport tests to the cylinder using the standard format for convergence tests, in which the true final solution of a transported vector is equal to its initial state.", "If the azimuthal and height coordinates are $\\mathbf {x}=(\\phi ,z)$ and the radius of the cylinder is $\\varrho $ , the vector transport equation (REF ) can be expressed in components as $& \\frac{\\partial {F_\\phi }}{\\partial {t}} + \\frac{v_\\phi }{\\varrho }\\frac{\\partial {F_\\phi }}{\\partial {\\phi }} +v_z\\frac{\\partial {F_z}}{\\partial {z}} = 0, \\\\& \\frac{\\partial {F_z}}{\\partial {t}} + \\frac{v_\\phi }{\\varrho }\\frac{\\partial {F_\\phi }}{\\partial {\\phi }} +v_z\\frac{\\partial {F_z}}{\\partial {z}} = 0.$ Here the transporting velocity is inspired by the time-varying and deformational divergence-free flows from [42] and [43], but adapted to the cylinder.", "The cylinder has radius $\\varrho $ and length $L$ , which is periodic in the $z$ direction.", "The time $t$ runs from 0 to $T$ .", "With speeds $U=2\\pi \\varrho /T$ and $W$ , and a modified coordinate $\\phi ^{\\prime }=\\phi -Ut/\\varrho $ , the transporting velocity is given by $v_\\phi & = U + 2\\pi W\\sin \\left(\\phi ^{\\prime }\\right)\\sin \\left(\\frac{2\\pi z}{L}\\right)\\cos \\left(\\frac{\\pi t}{T}\\right), \\\\v_z & = \\frac{W L}{\\varrho }\\cos \\left(\\phi ^{\\prime }\\right)\\cos \\left(\\frac{2\\pi z}{L}\\right)\\cos \\left(\\frac{\\pi t}{T}\\right).$ With this flow the true solution at $t=T$ is equal to the initial condition.", "As in [42] and [43], the flow has a translational component to avoid fortuitous cancellation of errors.", "The amount of deformation can be controlled by changing $W$ relative to $U$ .", "This flow can also be expressed using a stream function, but this will contain a jump on the periodic cylinder due to the translational component of the flow.", "For our test we took $L=100$ m, $\\varrho = L/(2\\pi )$ , $T = 100$ s and $W=U/10$ .", "To describe the initial conditions, let the 
distance from a specific point $(\phi _c, z_c)$ on the cylindrical surface be defined via $\ell ^2(\phi ,z) = \left(\cos ^{-1}\left[\cos \left(\phi - \phi _c\right)\right] \right)^2 + \left(\cos ^{-1}\left[\cos \left(\frac{2\pi (z-z_c)}{L}\right)\right] \right)^2.$", "The initial condition uses a vector whose cylindrical components are both a Gaussian hill of size $F_0$ , width $\ell _0$ and centred on $(\phi _c,z_c)$ , taking: $\mathbf {F} =\left(\widehat{\mathbf {e}}_\phi + \widehat{\mathbf {e}}_z\right)F_0\exp \left(-\ell ^2(\phi ,z)/\ell ^2_0\right),$ where $\phi _c=\pi /4$ , $z_c=L/2$ , $\ell _0=1/10$ , $F_0=3$ m s$^{-1}$ .", "This initial condition and a numerical solution at $t=T/2$ are displayed in Figure REF .", "Figure: The transported field $\mathbf {F}$ in the deformational cylindrical transport test of Section .", "The contours show the magnitude of $\mathbf {F}$ , with the arrows indicating its direction.", "(Left) the initial condition, and true solution at $t=T$ .", "(Right) a numerical computation of the deformed field at $t=T/2$ .", "The contours are spaced at 0.5 m s$^{-1}$ .", "To perform a convergence test, the $L^2$ error was computed for the numerical solution against the true solution at $t=T$ , for a range of spatial resolutions.", "The same time step $\Delta t = 0.002$ s was used for all simulations, and the meshes were constructed of uniform quadrilateral cells.", "Results of the convergence test comparing the benchmark scheme of Section REF to the new schemes are shown in the left of Figure REF .", "Both schemes show a very clear improvement from the benchmark scheme, with the recovered scheme approaching second-order accuracy and the vorticity scheme (which used the SUPG stabilisation) even achieving some super-convergence.", "Figure: Convergence results for the transport test of Sections and .", "The $L^2$ errors in the transported $\mathbf {F}$ field are plotted for a range of spatial resolutions.", "The benchmark case of Section is compared with the recovered scheme of Section and the vorticity scheme of Section with the SUPG stabilisation.", "The legends indicate the gradients of lines of best fit through the error measurements, which approximate the rate of convergence of the scheme.", "(Left) results for the cylindrical test, and (right) results for the spherical test.", "For both tests, the two new schemes demonstrate much better convergence than the benchmark case." 
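For reference, the transporting velocity and initial condition of this cylindrical test are straightforward to evaluate pointwise. The sketch below is plain Python/NumPy, separate from the Firedrake implementation used to produce the results, and uses the parameter values quoted above.

```python
# Deformational flow on the cylinder: transporting velocity and initial condition,
# evaluated pointwise with NumPy. Parameters as quoted in the text.
import numpy as np

L = 100.0                  # cylinder length (m), periodic in z
rho = L / (2.0 * np.pi)    # cylinder radius (m)
T = 100.0                  # duration of the test (s)
U = 2.0 * np.pi * rho / T  # translational speed
W = U / 10.0               # deformational speed

def velocity(phi, z, t):
    """Components (v_phi, v_z) of the deformational, divergence-free flow."""
    phi_p = phi - U * t / rho
    v_phi = U + 2.0 * np.pi * W * np.sin(phi_p) * np.sin(2.0 * np.pi * z / L) \
        * np.cos(np.pi * t / T)
    v_z = (W * L / rho) * np.cos(phi_p) * np.cos(2.0 * np.pi * z / L) \
        * np.cos(np.pi * t / T)
    return v_phi, v_z

def initial_condition(phi, z, phi_c=np.pi / 4, z_c=L / 2, ell_0=0.1, F_0=3.0):
    """Initial F: both cylindrical components are the same Gaussian hill."""
    ell2 = np.arccos(np.cos(phi - phi_c))**2 \
        + np.arccos(np.cos(2.0 * np.pi * (z - z_c) / L))**2
    F = F_0 * np.exp(-ell2 / ell_0**2)
    return F, F   # (F_phi, F_z)
```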
], [ "Solid body rotations on sphere", "Unlike on a cylindrical manifold, the vector transport equation does have metric terms on a spherical manifold.", "If $(\\lambda ,\\vartheta )$ are the longitude and latitude, and $r$ is the radius of the sphere, the advective form of the transport equation (REF ) can be written as $& \\frac{\\partial {F_\\lambda }}{\\partial {t}} + \\frac{v_\\lambda }{r\\cos \\vartheta }\\frac{\\partial {F_\\lambda }}{\\partial {\\lambda }} +\\frac{v_\\vartheta }{r}\\frac{\\partial {F_\\lambda }}{\\partial {\\vartheta }} - \\frac{v_\\lambda F_\\vartheta \\tan \\vartheta }{r} = 0, \\\\& \\frac{\\partial {F_\\vartheta }}{\\partial {t}} + \\frac{v_\\lambda }{r\\cos \\vartheta }\\frac{\\partial {F_\\vartheta }}{\\partial {\\lambda }} +\\frac{v_\\vartheta }{r}\\frac{\\partial {F_\\vartheta }}{\\partial {\\vartheta }} + \\frac{v_\\lambda F_\\lambda \\tan \\vartheta }{r} = 0.$ The presence of the metric terms in these equations makes the design of a convergence test difficult, as any zonal component of $\\mathbf {v}$ will cause the rotation of $\\mathbf {F}$ at a rate depending on the latitude.", "One strategy to avoid this is to explicitly add the metric terms as a forcing to the equation, but this requires a discretisation of the metric terms themselves which can confuse the interpretation of any results.", "Another strategy is to use an exactly reversing flow to cancel out the effects of the metric terms, but this could also result in the fortuitous cancellation of dispersion errors.", "Here we present a spherical convergence test that avoids these issues by composing four solid body rotations to reverse the effects of the metric terms.", "First, $\\mathbf {F}$ is initialised with a smooth profile centred at $(\\lambda _c, \\vartheta _c)$ , taking $\\lambda _c=0$ and $\\vartheta _c=-\\pi /6$ .", "Using the usual definition of distance on a spherical surface, $\\ell (\\lambda ,\\vartheta ) = \\cos ^{-1}\\left[\\sin \\vartheta _c\\sin \\vartheta +\\cos \\vartheta _c\\cos \\lambda _c\\cos \\lambda +\\cos \\vartheta _c\\sin \\lambda _c\\sin \\lambda \\right],$ the initial condition is $F_\\lambda = 0, \\quad F_\\vartheta = F_0 \\exp \\left(-\\ell ^2(\\lambda ,\\vartheta )/\\ell _0^2\\right),$ with $F_0=3$ and $\\ell _0=1/4$ .", "This is displayed in Figure REF .", "The transporting velocity is made by composing four solid body rotations, each performing half of a rotation of the profile around an axis.", "The first half-rotation is around the $z$ -axis, leaving a profile that should be centred on $\\lambda =\\pi $ .", "Then the velocity is changed to perform a half-rotation around the $x$ -axis, rotating the profile from the southern hemisphere to the northern hemisphere.", "The third rotation uses the same winds as the first, rotating the profile again around the $z$ -axis.", "By performing the same solid body rotation again, but this time with the profile in the northern hemisphere instead of the southern, the metric effects induced by the first half-rotation will be cancelled out.", "Finally, another half-rotation is completed around the $x$ -axis, which reverses the effects of the metric terms from the previous rotation around the $x$ -axis.", "The resulting path around the sphere is illustrated in Figure REF .", "Figure: An illustration of the path taken by the transported field in the solid body rotation test presented in Section .The `front' of the sphere is shown on the left and the `back' on the right.The path, shown in grey, is broken into four stages, marked by 
the black circles.Each stage involves a solid body rotation: from points 0 to 1 and 2 to 3 this is a solid body rotation around the zz-axis, while it is a solid body rotation around the xx-axis from points 1 to 2 and 3 to 0.Taking this path, the effects induced by metric terms upon a transported vector cancel out, as any transport at a latitude ϑ\\vartheta is matched by equal transport at -ϑ-\\vartheta .Thus the true final solution is equal to the initial solution.The transporting velocity can be summarised in $(\\lambda ,\\vartheta )$ components as $ \\begin{array}{lll}v_\\lambda = U\\cos \\vartheta , & v_\\vartheta = 0,& \\mathrm {for} \\ 0\\le t\\le T/2 \\ \\mathrm {and} \\ T < t \\le 3T/2, \\\\& & \\\\v_\\lambda = -U\\cos \\lambda \\sin \\vartheta ,& v_\\vartheta = U\\sin \\lambda (\\cos ^2\\vartheta -\\sin ^2\\vartheta )& \\mathrm {for} \\ T/2 < t \\le T \\ \\mathrm {and} \\ 3T/2 < t \\le 2T.\\end{array}$ where $U=2\\pi r/T$ .", "The test is run from $t=0$ to $t=2T$ .", "Along with the initial condition, the state at $t=T$ is shown in the right of Figure REF .", "Figure: The transported field 𝐅\\mathbf {F} for the spherical transport test of Section .The contours show the magnitude of 𝐅\\mathbf {F}, with the arrows indicating its direction.", "(Left) the initial condition and true solution at t=2Tt=2T, shown on the `front' of the sphere.The meridional velocity is a Gaussian profile centred on λ c =0\\lambda _c=0 and ϑ c =-π/6\\vartheta _c=-\\pi /6.", "(Right) a numerical computation of the transported field at t=Tt=T, shown on the `back' of the sphere.We see the effect of the metric terms here on the direction of 𝐅\\mathbf {F}.The contours are spaced at 0.30.3 m s -1 ^{-1}.For a convergence test, the $L^2$ error of $\\mathbf {F}$ is computed for transported solutions at $t=2T$ against the true field, at a range of spatial resolutions.", "We took $r=100$ m and $T=200$ s and performed all simulations with $\\Delta t=0.05$ s. To mesh the sphere we use a cubed-sphere grid.", "The results for the different schemes are displayed in the right of Figure REF , which again shows the improvements of the new schemes of Sections and compared with the benchmark scheme of Section REF .", "The results indicate that both schemes are approaching the desired second-order accuracy." 
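The composed transporting velocity is simple to reproduce; the sketch below evaluates the piecewise (lambda, theta) components defined above for any time in the interval [0, 2T]. It is written against plain NumPy and is not tied to any particular finite element library.

```python
import numpy as np

r, T = 100.0, 200.0      # sphere radius (m) and half-rotation period (s), as in the text
U = 2 * np.pi * r / T    # solid body rotation speed

def velocity(lam, theta, t):
    """Transporting velocity (v_lambda, v_theta) for the four composed solid body rotations.

    Stages 1 and 3 (0 <= t <= T/2 and T < t <= 3T/2) rotate about the z-axis;
    stages 2 and 4 (T/2 < t <= T and 3T/2 < t <= 2T) rotate about the x-axis.
    """
    t = t % (2 * T)
    if t <= T / 2 or (T < t <= 3 * T / 2):
        v_lam = U * np.cos(theta)
        v_th = 0.0
    else:
        v_lam = -U * np.cos(lam) * np.sin(theta)
        v_th = U * np.sin(lam) * (np.cos(theta)**2 - np.sin(theta)**2)
    return v_lam, v_th
```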
], [ "Shallow-water test cases", "Now the new vector transport schemes are demonstrated within a compatible finite element discretisation for the shallow-water equations on the sphere (REF ).", "This discretisation is summarised in Section REF .", "The transport of $h$ uses the recovered transport scheme for scalars presented by [9], with the time discretisation as the trapezoidal scheme (REF ).", "The new schemes for transporting $\\mathbf {u}$ are compared with the benchmark upwind scheme (REF ).", "For the vorticity scheme in Section , before each transport step the initial vorticity needed for the vorticity evolution equation is updated from $\\mathbf {u}^*$ by solving (REF ).", "The different velocity transport schemes are demonstrated in this shallow-water model through two standard test cases.", "Firstly, the second test from the suite of Williamson et al [23], which describes a zonal geostrophic flow.", "This is a steady-state flow, so the evolved $u$ and $h$ fields can be compared with their initial values to compute errors due to the discretisation.", "For full details of the initial conditions, see [23].", "In Figure REF , the errors in $\\mathbf {u}$ and $h$ are plotted after 5 days of simulation, for both the new schemes and the benchmark scheme.", "The errors are computed at different spatial resolutions to approximate the order of accuracy of the overall model.", "As in Section REF , the test was performed with a cubed-sphere mesh.", "For all simulations we took the same time step of $\\Delta t=240$ s. Figure REF , shows the clear benefits of the two new schemes over the benchmark, improving the order of accuracy of the model from roughly first-order to approximately second-order for both schemes and both variables.", "Figure: Convergence results from the second shallow-water test case test of Williamson et al .The normalised L 2 L^2 error after 5 days is plotted as a function of spatial resolution.The test case describes a steady-state zonal geostrophic flow.The legends indicate the gradients of lines of best fit through the error measurements, which approximate the order of accuracy of the model.The shallow-water simulations differ only in the scheme used to transport the velocity field, comparing the benchmark scheme of Section against the new schemes of Sections and .", "(Left) the results for the velocity field 𝐮\\mathbf {u} and (right) for the height field hh.For both new schemes and for both variables, the model has around second-order accuracy, whereas the benchmark case has only first-order accuracy.The second test case is the unstable jet of Galewsky et al [44].", "This test adds a perturbation to an unstable jet in geostrophic balance, which then leads to the jet becoming unbalanced.", "Full details of the initial conditions can be found in [44].", "Figure REF shows the diagnostic vorticity field after 6 days.", "It shows that the benchmark scheme of Section REF is too diffusive for the fine details of the instability to develop, and that the new schemes are clear improvements on this, with the results resembling those of [44].", "It also demonstrates the impact of the SUPG stabilisation, by comparing the vorticity scheme with and without this stabilisation (bottom two panels of Figure REF ).", "The SUPG stabilisation removes some of the noise seen in the vorticity scheme.", "The removal of this noise can also be seen in Figure REF , which plots the evolution over time of the global energy and the global enstrophy for this test case.", "While the SUPG 
stabilisation does not appear to have an effect on the energy, it does result in some degradation of enstrophy compared with the standard vorticity scheme.", "Figure REF also shows the diffusivity of the benchmark scheme relative to the improved schemes.", "These simulations were all performed with a time step of $\\Delta t=300$ s, and using a cubed-sphere mesh with $128\\times 128$ cells per panel.", "Figure: The vorticity field after 6 days for the unstable shallow-water jet test case of Galewsky et al .The four plots correspond to simulations which only differ in the velocity transport scheme.These simulations were performed on a cubed-sphere mesh with 128×128128\\times 128 cells per panel.The simulation using the benchmark scheme is so diffusive that the jet does not clearly form.With both the new schemes, the solutions resemble that of , with the vorticity transport form being particularly close.The bottom two panels compare simulations with the vorticity scheme, without and with the SUPG stabilisation.The contours are spaced by 2×10 -5 2\\times 10^{-5} s -1 ^{-1}, and dashed contours indicate negative values.The zero contour is omitted.Figure: Time series of the evolution of the global energy and global enstrophy in the unstable jet test case of .The benchmark scheme shows significant decay of energy and enstrophy compared with the new transport schemes.While both vorticity schemes have good conservation of energy, the SUPG stabilisation results in more diffusion of enstrophy." ], [ "Discussion and Summary", "This work has examined two finite element methods for solving the vector transport equation with the lowest-order Raviart-Thomas elements.", "This was motivated by increasing the order of accuracy when compared with a standard upwind discretisation.", "The first scheme is an extension to the transport schemes of [9], and solves the transport equation in advective form and recovers the field in a higher-order function space to transport it there.", "The second scheme is a take on the mixed finite element formulation of [10], applied to the vector transport equation and using a residual based stabilisation concept of [11].", "This is written in a vorticity form, solving a problem for both the transported vector and its vorticity.", "As demonstrated through the test cases in Section , both schemes do have improved accuracy.", "In the future, we intend to apply these schemes to the lowest-order Raviart-Thomas elements on triangular cells, and to three-dimensional manifolds.", "Some preliminary investigations using the test cases described in Section showed that the recovered scheme of Section is naturally extended to triangular cells, by using appropriate higher-order finite element spaces for the recovery process.", "However, with the lowest-order Raviart-Thomas elements on triangular cells, the vorticity-form scheme of Section suffered from a large amount of noise in the divergence field of $\\mathbf {F}$ .", "The noise lies in the null-space of the $\\mathbf {\\nabla }^\\perp \\mathbf {\\cdot }$ operator and does not appear in the vorticity evolution equation, and can therefore not be attenuated by the SUPG-based vorticity stabilisation method.", "Although it comes at the cost of reduced accuracy, we found that the noise can be controlled effectively by a divergence-based interior penalty term as mentioned in Section ." 
], [ "Acknowledgements", "The authors would like to thank James Kent, Colin Cotter and Thomas Melvin for their advice and some very helpful discussions through the evolution of this manuscript." ] ]
2207.03519
[ [ "Should All Proposals be Treated Equally in Object Detection?" ], [ "Abstract The complexity-precision trade-off of an object detector is a critical problem for resource constrained vision tasks.", "Previous works have emphasized detectors implemented with efficient backbones.", "The impact on this trade-off of proposal processing by the detection head is investigated in this work.", "It is hypothesized that improved detection efficiency requires a paradigm shift, towards the unequal processing of proposals, assigning more computation to good proposals than poor ones.", "This results in better utilization of available computational budget, enabling higher accuracy for the same FLOPS.", "We formulate this as a learning problem where the goal is to assign operators to proposals, in the detection head, so that the total computational cost is constrained and the precision is maximized.", "The key finding is that such matching can be learned as a function that maps each proposal embedding into a one-hot code over operators.", "While this function induces a complex dynamic network routing mechanism, it can be implemented by a simple MLP and learned end-to-end with off-the-shelf object detectors.", "This 'dynamic proposal processing' (DPP) is shown to outperform state-of-the-art end-to-end object detectors (DETR, Sparse R-CNN) by a clear margin for a given computational complexity." ], [ "Introduction", "Object detection is a challenging but fundamental task in computer vision, which aims to predict a bounding box and category label for each object instance in an image.", "A popular strategy, introduced by the Faster RCNN [25], is to rely on a backbone network to produce a relatively large set of object proposals and a detection head to derive a final prediction from these.", "Since then, the design trend for this two-stage detection framework, e.g.", "the path Faster RCNN [25] $\\rightarrow $ Cascade RCNN [2] $\\rightarrow $ DETR [3] $\\rightarrow $ Sparse RCNN [27], has been to sparsify the proposal density.", "Recent approaches, such as the Sparse R-CNN [27], successfully reduce the thousands of proposals of the Faster RCNN [25] to a few hundred.", "However, because the per proposal computation of the detection head is substantially increased by the use of a much more complicated architecture, the overall computational benefits of reducing the number of proposals are limited.", "While the aggregate effect has been to make detectors more efficient, in general, these approaches are still not suitable for use with lighter backbones, since the head complexity becomes a larger fraction of the overall computation.", "Figure: Existing object detectors treat proposals equally, applying the same operator to all proposals.", "Dynamic Proposal Processing (DPP) instead argues for an unequal treatment, by learning to dynamically assign different proposals to operators of different complexities.", "This enables the allocation of more (less) computation to high (low) IoU proposals and enables improved complexity-precision curves.While efficient object detection is now an extensively researched problem in computer vision, this literature has mostly focused on the design of computationally efficient backbones.", "The introduction of heavy detection heads would reverse the computational gains that have been achieved with lightweight models [26], [23], [8].", "For example, the detection head of the Sparse RCNN [27] with 300 proposals consumes $4\\times $ the computation of the entire MobileNetV2 [26] (25 
GFLOPS vs $5.5$ GFLOPS).", "In this work, we investigate whether it is possible to retain the accuracy gains and proposal sparsity of modern detection heads while reducing their computational cost, so as to make them applicable to efficient object detection design.", "We note that a main limitation of existing high-end detectors is that they treat all proposals equally, in the sense that the detection head applies to all proposals an operator of identical complexity, maintaining a constant cost per proposal.", "This, however, is unintuitive.", "While it seems appropriate to spend significant computation on good proposals, it is wasteful to allocate equal resources to poor proposals.", "Since the IoU of each proposal is known during training, the detector could, in principle, learn to allocate different amounts of computation to different proposals.", "This, however, requires a paradigm shift for detector design, illustrated in Figure REF : that different proposals should be treated unequally in terms of resource allocation, reserving more computation for high quality proposals than low quality ones.", "The difficulty is that, because IoUs are not available at inference, the network has to learn to perform the resource allocation on the fly.", "This implies the need for a resource allocation function that depends on the proposal itself and has to be learned, i.e.", "a dynamic network module.", "To address this problem, we propose the dynamic proposal processing (DPP) framework, where the single operator used by current detection heads is replaced by an operator set, composed of multiple operators of different complexities.", "The benefit of this approach is to allow the detector to operate on multiple points of the complexity-precision curve, on a proposal by proposal basis, so as to optimize the overall trade-off between the two objectives.", "This is implemented by the addition of a selection model that chooses the best operator to apply to each proposal, at each stage of the network.", "We show that this selector can be very lightweight, a multi-layer perceptron that outputs a one-hot code over operator indices, and learned at training time, in an end-to-end manner.", "This is enabled by the introduction of two novel loss functions, which jointly encourage the allocation of the available computational budget to proposals of large IoU.", "An IoU loss teaches the detector to recognize proposals of large IoU and improve their alignment with ground truth bounding boxes.", "A complexity loss makes the selector aware of the number of instances, per image, so as to dynamically control the allocation of computational resources and meet the overall computational target.", "Experimental results on the COCO dataset show that DPP achieves a better complexity-precision curve (see Figure REF ) than designs that treat proposals equally, especially in the low complexity regime, confirming the effectiveness of treating proposals unequally.", "For large backbones (ResNet [9]), DPP achieves the best precision-complexity curves in the literature, achieving state-of-the-art precision with $60\\%$ of the computation of current models.", "For low-complexity networks (MobileNet [26]), the gains are even more significant, in that DPP establishes a new state-of-the-art in terms of both precision and computation and produces the best latency-precision curves." 
], [ "Related Work", "Object detection.", "Object detection frameworks can be mainly categorized into one-stage [22], [19], [28], [13], [16], [33] vs two-stage [25], [2], [1], [6], [5], depending on the approach used to generate proposals.", "One-stage detectors can be anchor-based or not, but all rely on the very dense generation of proposals, which means each feature vector in the feature map is leveraged as a proposal.", "Two-stage detectors rely on a region proposal network [25] to filter out the majority of regions that are unlikely to contain an object instance.", "All the aforementioned methods require a post-processing step (non-maximum suppression) to remove a large number of duplicate proposals.", "More recently, an attention based framework [3], [34], [32], [21] has been proposed to overcome this problem, eliminating the need to post-process candidate predictions.", "By resorting to an attention mechanism, [27] even showed that it is possible to rely on a very sparse proposal density.", "In result, existing methods differ significantly in terms of proposal density.", "However, within each framework, all proposals are treaty equally.", "In this paper, we show that, by diversifying the complexity of proposal processing dynamically, it is possible to reduce detection complexity without decreasing precision.", "Dynamic network.", "Dynamic networks are a family of networks with input dependent structures or parameters derived from dynamic branches [11].", "For classical convolutional networks, this can be done by using input-dependent rather than static filters [31], [4], [17], [14], [29], [15] or reweighing features spatially or in a channel-wise manner [11], [10].", "Transformers are by definition dynamic networks, due to their extensive reliance on attention.", "Beyond that, [24], [30] dynamically discard uninformative tokens to reduce computational cost.", "While previous methods show remarkable improvements in network efficiency, they mainly focus on backbones.", "This cannot fully address the problem of object detection, namely the heavy computation required to process proposals.", "Dynamic DETR [7] attempts to address the problem by building dynamic blocks on the detection head.", "However, it still processes all proposals with a common operator, inducing a constant complexity per proposal.", "In this work, we propose to leverage the power of dynamic networks by matching proposals to operators of variable complexity in a dynamic manner." 
], [ "Complexity and Precision of Proposals", "In this section, we compare the complexity of treating proposals equally or unequally.", "We assume that a backbone produces a set $\\mathbf {X}=\\lbrace \\mathbf {x}_1, \\mathbf {x}_2,...,\\mathbf {x}_N\\rbrace $ of proposals and focus on the cost of the detection head, i.e.", "ignore backbone costs.", "We further assume that the computation of the detection head can be decomposed into a per-proposal operator $h$ , e.g.", "a network block, and a pairwise component $p$ that accounts for the cost of inter-proposal computations.", "For example, the NMS operation of classical detectors or a self-attention mechanism between proposals for transformers.", "Complexity of equally treated proposals.", "In prior works, all proposals are processed by the same operator $h$ .", "This has complexity $\\mathcal {C}(\\psi )= NC_h + \\frac{N(N-1)}{2}C_p,$ where $\\psi =\\lbrace h,p\\rbrace $ , and $C_h$ and $C_p$ are the per proposal complexity of $h$ and $p$ , respectively.", "Complexity of unequally treated proposals.", "We propose to treat proposals unequally.", "Rather than applying the same operator $h$ to all proposals, we propose to leverage an operator set $\\mathcal {G} = \\lbrace h_j\\rbrace _{j=1}^J$ of $J$ operators of different architectures and complexity, which are assigned to the proposals $\\mathbf {x}_i$ by a dynamic selector $s$ .", "This has complexity $\\mathcal {C}(\\psi ) = \\sum _{i=1}^NC_{h_{s_i}} + \\frac{N(N-1)}{2}C_p,$ where $s_i=s(\\mathbf {x}_i)$ , $h_{s_i} \\in \\mathcal {G}$ represents the operator from $\\mathcal {G}$ that is assigned to the proposal $\\mathbf {x}_i$ by the selector $s$ , $\\psi =\\lbrace \\lbrace h_{s_i}\\rbrace _i,s,p\\rbrace $ , and $C_{h_{s_i}}$ is the complexity of the entire per proposal operation (selector plus operator).", "For simplicity, the pairwise complexity is still considered constant.", "Precision over proposals.", "When the detection head treats proposals unequally, the optimal detector precision for a given complexity constraint $C$ can be determined by optimizing the assignment of operators to proposals $P(\\psi ^*| C) = \\max _{\\underset{\\mathcal {C}(\\psi )<C}{h_{s_i}\\in \\mathcal {G}}}\\mathcal {P}(\\lbrace h_{s_i}\\rbrace _i), $ where $\\mathcal {P}(\\lbrace h_{s_i}\\rbrace _i)$ is the precision of a specific operator assignment $\\lbrace h_{s_i}\\rbrace _i$ .", "As $C$ changes, $P(\\psi ^*|C)$ forms a complexity-precision (C-P) curve that characterizes the optimal performance, in terms of the trade-off between cost and precision, of the object detectors implementable with $\\mathcal {G}$ .", "In this work, we use both precision (mAP) and the C-P curve as criteria to justify the effectiveness of treating proposals unequally.", "Note that the assignment of operators to proposals is the key to optimize the precision under a given computation budget $C$ .", "This is formulated as a learning function implementable with a simple network branch and solved via suitable loss functions, as discussed next." 
], [ "Dynamic Proposal Processing", "In this section, we proposed a dynamic proposal processing (DPP) framework for the solution of (REF ).", "Following the design of prior works [3], [34], [27], we assume a detector head composed of multiple stages ($\\psi =\\phi _1 \\circ \\ldots \\circ \\phi _K$ ) that process proposals sequentially.", "Each stage $\\phi _k$ is implemented with an operator chosen from $\\mathcal {G}$ by a selector $s$ .", "To minimize complexity, the selector can be applied only to a subset $k\\in \\mathcal {K} \\subset \\lbrace 1, \\ldots , K\\rbrace $ of the stages, with the remaining stages using the operator chosen for their predecessor, i.e.", "$ \\phi _k = \\phi _{k-1}, \\forall k \\notin \\mathcal {K}$ ." ], [ "Operator Set", "In this paper, we consider an operator set $\\mathcal {G}=\\lbrace g_0, g_1, g_2\\rbrace $ composed of three operators of very different computational cost.", "Specifically, $g_0$ is a high complexity operator, implemented with a dynamic convolutional layer (DyConv) of proposal dependent parameters and a feed forward network (FFN) [3].", "This operator is based on the dynamic head architecture employed in the recent Sparse R-CNN [27].", "$g_1$ is a medium complexity operator, implemented with a static FFN [3].", "Finally, $g_2$ is a light operator formed by an identity block, which simply feeds the proposal forward with no further refinement." ], [ "Selector", "In DPP, the selector is the key component to control the trade-off between precision and complexity, by controlling the assignment of operators to proposals.", "Let $\\mathbf {z}^k_i$ be the embedding of proposal $\\mathbf {x}_i$ at the input of stage $\\phi _k$ .", "The selector is implemented with a 3-layer MLP that associates a 3 dimensional vector $\\epsilon ^k_i \\in [0,1]^3$ with $\\mathbf {z}^k_i$ according to $\\epsilon ^k_i = \\textrm {MLP}(\\mathbf {z}^k_i) $ where $\\epsilon ^k_{i,j}$ is the selection variable in $\\epsilon ^k_i$ that represents the strength of the assignment of operator $g_j$ to proposal $\\mathbf {x}_i$ .", "During training, the selection vector is a one hot code over three variables and the Gumble-Softmax function [12] is used as activation of the MLP to generate the selection vector.", "For inference, the selection variables have soft values and the operator that matches the index of the selection variable with largest value is chosen.", "The flow graph of the operator assignment process is illustrated in Figure REF .", "Please note that the proposed selector is very light (using 4e-3 GFLOPS for 100 proposals in our experimental setting), in fact negligible in complexity when compared to the detection head.", "It is clear from (REF ) that the chosen operator varies both across proposals $i$ and head stages $k$ , enabling the unequal treatment of proposals in a dynamic manner.", "Furthermore, while ${\\cal G}$ has cardinality three, the cardinality of the set of network architectures that can be used to implement the detector head is $3^{|{\\cal K}|}$ .", "Finally, because the selector is trainable, the assignment function can be learned end-to-end.", "Figure: Flow graph of operator assignments to proposals.", "The selector takes the proposal embeddings, i.e.", "{𝐳 1 k \\lbrace \\mathbf {z}_1^k, 𝐳 2 k \\mathbf {z}_2^k,..., 𝐳 N k }\\mathbf {z}_N^k\\rbrace in the k th k^{th} stage as input and outputs a selection vector per proposal.", "The operator that matches the index of largest value in the selection vector is selected to process the proposal.", "In 
the operator set, operator g 0 g_0 contains a high complexity dynamic convolution (DyConv) followed by a FFN .", "g 1 g_1 consists of a feed forward network (FFN), while g 2 g_2 is implemented with an identity function (Identity)." ], [ "Loss Functions", "To assure that, given a complexity budget, DPP selects the optimal sequence of operators for each proposal, a selection loss is applied to the selector of each stage in ${\\cal K}$ .", "This selection loss is designed to encourage two goals.", "First, complex operators should be assigned to high quality proposals (large IoU), since these require most additional work by the detection head.", "This is enforced through the IoU loss $L_{iou} = \\frac{1}{N}\\sum _{i=1}^N \\sum _{k \\in {\\cal K}} \\sum _{j\\in \\lbrace 0,1\\rbrace } (1-u_i^k)\\epsilon ^k_{i,j} + u_i^k(1-\\epsilon ^k_{i,j}), $ where $u_i^k$ is the IoU of the $i^{th}$ proposal in $k^{th}$ stage.", "$L_{iou}$ pushes the selector to turn $\\epsilon _{i,0}^k$ and $\\epsilon _{i,1}^k$ into `0' for proposals of IoU smaller than 0.5 and into `1' otherwise.", "This encourages the use of more complex operators in stage $k$ for the high quality proposals, which require more efforts for classifying categories and regressing bounding boxes.", "Moreover, the loss magnitude is determined by the IoU value, originating larger gradients when the selector predicts $\\epsilon _{i,0}^k$ or $\\epsilon _{i,1}^k$ as `1' for tiny IoU proposals or as `0' for large proposals.", "Second, the selector should be aware of the total number of instances in each image and adjust the overall complexity according to it, i.e., selecting more complex operators when instances are dense.", "This is enforced through the complexity loss $L_c=\\frac{1}{N}\\sum _{k \\in {\\cal K}}\\left|\\sum _{i=1}^N \\epsilon ^k_{i,0}-T\\right|,$ where $T$ is the target number of times that operator $g_0$ is selected for a particular image.", "This is defined as $T=\\alpha M$ where $\\alpha $ is a multiplier that specifies a multiple of the $M$ object instances in the image.", "Moreover, the condition $T \\in [T_{min}, N]$ is enforced, by clipping $\\alpha M$ according to a pre-specified lower bound $T_{min}$ and an upper bound given by the overall proposal number $N$ .", "The lower bound prevents a very sparse selection of high complexity operators $g_0$ and $\\alpha $ then adjusts the selector according to the number of instances.", "$\\alpha $ , $T_{min}$ and $N$ are hyperparameters that can be leveraged to modify the behavior of DPP, as discussed in the experimental section.", "The overall selection loss is finally $L_{s} = L_{iou}+\\lambda L_{c},$ where $\\lambda $ is the hyperparameter that controls the trade-off between the loss components.", "Note that the selection loss is a plug-and-play loss that can be applied to different object detectors.", "In this paper, $L_{s}$ is combined with all the losses of the original detector to which DPP is applied, including the cross entropy loss and the bounding box regression loss, which are omitted from our discussion." 
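To make the selector and the selection loss concrete, the following PyTorch-style sketch gives one plausible implementation consistent with the description above: a 3-layer MLP with Gumbel-Softmax producing a 3-way one-hot code, plus the IoU and complexity terms for a single stage. The hidden width, temperature, and other details are assumptions, not the authors' exact configuration, and the per-stage losses would still need to be summed over the stages in K.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Selector(nn.Module):
    """3-layer MLP mapping a proposal embedding to a 3-way selection code."""
    def __init__(self, dim=256, hidden=64, num_ops=3):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_ops),
        )

    def forward(self, z, hard=True, tau=1.0):
        logits = self.mlp(z)                       # (N, 3)
        # One-hot (straight-through) codes during training; at inference the
        # operator matching the largest soft score is chosen instead.
        return F.gumbel_softmax(logits, tau=tau, hard=hard)

def selection_loss(eps, iou, target_T, lam=10.0):
    """Single-stage selection loss L_s = L_iou + lam * L_c.

    eps:      (N, 3) selection variables for one stage
    iou:      (N,)   IoU of each proposal in this stage
    target_T: clipped alpha * (number of instances) for this image
    """
    N = eps.shape[0]
    u = iou.unsqueeze(1)
    e01 = eps[:, :2]                               # strengths for the two non-identity operators
    # Pushes eps[:, 0] and eps[:, 1] towards 1 for IoU >= 0.5 proposals and towards 0 otherwise.
    l_iou = ((1 - u) * e01 + u * (1 - e01)).sum() / N
    # Keeps the number of g0 selections close to the per-image target.
    l_c = (eps[:, 0].sum() - target_T).abs() / N
    return l_iou + lam * l_c
```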
], [ "Experiments", "Dataset.", "DPP is evaluated on the COCO dataset [20].", "It is trained on the train2017 split and mainly tested on the val2017 split with mAP.", "Network.", "DPP is applied to detectors whose backbone is built on MobileNet V2 [26] or ResNet-50 [9], using Feature Pyramid Networks (FPN) [18], on top of which proposals are generated using the strategies of [27].", "For MobileNetV2 [26], the FPN only considers features with strides 16 and 32 and the $3\\times 3$ FPN convolution is decomposed into an $1\\times 1$ pointwise convolution and a $3\\times 3$ depthwise convolution for efficiency.", "For ResNet-50, FPN is implemented on features with the standard 4 different strides.", "Following [27], the detection head is a decoder only transformer of 6 stages.", "For simplicity, the selector is only applied in stages $\\mathcal {K} = \\lbrace 2,4,6\\rbrace $ .", "In the first stage, all proposals are processed with the high complexity operator ($g_0$ ).", "The full operator set $\\cal G$ is used in all remaining stages.", "Experimental setting.", "DPP is pretrained without selectors, using the hyperparameters and data augmentations of [3], [27], [25] on COCO.", "The selectors are then added and trained with learning rate 2e-5, while 2e-6 is used for other layers.", "The training process lasts 36 ($3\\times $ ) epochs and the learning rate is divided by 10 at 27 and 33 epochs.", "The selection loss $L_s$ is combined with all the losses used in [3], [27], [25], i.e.", "cross entropy loss $L_{ce}$ , GIoU loss $L_{giou}$ and bounding boxes regression loss $L_{bbox}$ and $\\lambda =10$ is used for all selectors.", "The lower bound $T_{min}$ for the target number of times that $g_0$ is selected is manually set to $T_{min}^{last}$ for the last selector.", "For the remaining selectors, $T_{min}$ is derived automatically, so that it decreases exponentially from $N$ (number of proposals) to $T_{min}^{last}$ .", "Hence we omit the subscript of the lower bound for conciseness in the following sections.", "The multiplier $\\alpha $ is constant across all selectors." 
], [ "Proposal processing by DPP", "We start by discussing experiments that illustrate the unequal processing of proposals by DPP and how this impacts complexity.", "In these experiments, we analyze how each operator contributes to the processing of proposals produced by a ResNet-50 backbone.", "DPP training uses a lower bound $T_{min}=1$ , a multiplier $\\alpha =2$ , and $N=100$ proposals.", "Contribution of Each Operator.", "The influence of each operator in $\\mathcal {G}=\\lbrace g_0, g_1, g_2\\rbrace $ is investigated separately.", "For this, we manually split the proposals into three groups, according to the operator that process them, and evaluate precision for each group.", "For simplicity, the split is only based on the selector of the last DPP stage, i.e.", "the analysis is limited to this stage.", "Table: Contribution of each operator to proposal processing.", "Performance is evaluated on the COCO validation set.", "N eval N_{eval} is the average number of proposals matched to the checkmarked operator(s).Table REF shows the precision of proposals processed by the different operators.", "$N_{eval}$ represents the average number of proposals evaluated across the COCO validation set.", "This is equivalent to the number of times that the operators checkmarked in the table were used.", "Clearly, the proposals processed by $g_0$ are the main contributors to the overall precision ($41.0$ vs $42.2$ ), even though only 15 such proposals are evaluated on average.", "For proposals processed by $g_1$ or $g_2$ the performance is quite poor.", "This shows that the selector successfully allocates operators to proposals, assigning the operators of large complexity to the proposals that have higher chance of being associated with objects and devoting much less computation to the remaining.", "Interestingly, the vast majority of proposals ($78\\%$ ) are assigned to $g_2$ , i.e.", "use no computation in the final DPP stage.", "These are very poor proposals (almost zero AP), showing that the DPP detector learns to “give up” on such proposals, simply shipping them to the output.", "When the proposals processed by $g_0$ and $g_1$ are merged, the precision is promoted by $0.7\\%$ ($41.0$ vs $41.7$ ).", "This shows, that the two types of proposals are complementary and confirms that $g_1$ is important although sparsely used.", "Performance of Each Stage in DPP.", "Precision is tested across all stages ($k=1\\sim 6$ ) and we obtain the AP as $\\lbrace 15.6,32.1,39.3,41.7,42.0, 42.2\\rbrace $ .", "The results show that the precision increases quickly in the first 4 stages and then saturates.", "Among the 6 stages, the selector is applied in stages $\\lbrace 2,4,6\\rbrace $ .", "The IoU distribution of the proposals selected by different operators is shown in Figure REF .", "The total number of proposals processed by each operator is illustrated as a subplot.", "Note that the proposals of larger IoU are mostly processed by $g_0$ (blue curve) even though the overall number of proposals processed by $g_0$ decreases drastically for the later stages (blue bar).", "In these stages, most proposals are simply “shipped to the next stage” without any computation ($g_2$ ).", "Conversely, most low IoU proposals are processed by operator $g_2$ (green curve) and the number of such proposals increases drastically with the stage (green bar).", "This illustrates how DPP is quite successful at trading off complexity for precision.", "Figure: IoU distribution of proposals matched to the three operators across DPP stages 
(stage indexes k∈{2,4,6}k\\in \\lbrace 2,4,6\\rbrace ).", "Within each plot, the number of proposals processed per operator is shown as a subplot.Visualization.", "Figure REF shows some qualitative results, in the form of bounding boxes predicted by the high complexity operator $g_0$ in stages 4 and 6.", "Note that the boxes predicted in stage 6 (right column) have a good overlap with the ground truth (left column) with limited duplication.", "By comparing the boxes predicted in different stages, we can observe that they are refined in the deeper stages.", "More importantly, duplication is removed to a remarkable extent, indicating that not only the selector prevents poor proposals from being processed by operator $g_0$ but the network gradually transforms duplicates into bad proposals, in order to meet the complexity constraint.", "Figure: Boxes predicted by operator g 0 g_0 in stages 4 and 6." ], [ "Main Results", "DPP is compared to the state-of-the-arts for two backbones, ResNet-50 [9] and MobileNetV2 [26], with the results shown in Table REF and REF respectively.", "In Table REF and REF , $\\bar{N}$ represents the number of proposals for the Faster R-CNN [25] and Sparse R-CNN [27] and the number of queries (which play identical roles to proposals for final prediction) for attention baselines.", "For DPP, where proposals are processed by different operators, $\\bar{N}$ is the equivalent proposal number, defined as the ratio between the overall FLOPS spent by the detector head and the FLOPS spent by the high complexity operator $g_0$ .", "For ResNet-50 the results of the baselines are copied from the original papers.", "For MobileNetV2 baselines, they are obtained with the official code, using the recommended hyperparameters.", "ResNet.", "When ResNet-50 is used as backbone, four variants of DPP are used as the detection head.", "DPP-S, DPP-M and DPP-L use different overall numbers of proposals ($N\\in \\lbrace 50,100,300\\rbrace $ ).", "The other hyperparameters, i.e.", "$T_{min}$ and $\\alpha $ , are set to 1 and 2 respectively.", "In this way, $L_c$ can assure there is at least 1 high complexity operator and assign 2 high complexity operators per instance, on average.", "DPP-XL, is equivalent to DPP-L but further increases the hyperparameter $T_{min}$ to 100.", "Table REF shows that DPP achieves a good trade-off between complexity and precision.", "At the high end, DPP-XL performs on par with the Sparse R-CNN [27] with a much lighter detection head (15 vs 25 GFLOPS).", "The prior method with this level of complexity (Faster RCNN-FPN) has an AP loss of close to 5 points ($40.2\\%$ vs $45.0\\%$ ).", "At the low end, DPP-S reduces the Sparse R-CNN computation by $12.5\\times $ , for a decrease of $4.6$ points in AP.", "This is equivalent to the Faster RCNN-FPN, but saving $7\\times $ computation.", "Figure REF shows that the complexity-precision (C-P) curve of DPP is better than those of all other baselines.", "This confirms the benefits of treating proposals unequally.", "Finally, DPP is evaluated on the COCO test set and we achieve the AP as $\\lbrace 44.7, 43.8, 42.5, 40.7\\rbrace $ for the four variants of DPP ($44.7$ is obtained for SparseRCNN [27]), which further justifies the effectiveness and stability of DPP.", "Table: Comparison to state-of-the-art object detectors on COCO validation set with ResNet-50.", "Four variants of DPP with various sizes are shown, based on FPN.", "For DPP, N ¯\\bar{N} is the equivalent proposal number, defined as the ratio between the overall FLOPS 
spent by the detector head and the FLOPS spent by each high complexity operator $g_0$ ($\\bar{N}=C(\\psi )/C_{g_0}$ in ()), while for baselines $\\bar{N}$ is either the proposal number or the number of queries.", "The complexity (GFLOPS) is only that of the detection head.", "MobileNetV2.", "For MobileNetV2 we consider a lighter detection head, by decreasing the number of proposals for both DPP and baselines.", "Similar to ResNet, four variants of DPP are proposed.", "Given the more important role played by the detection head in this case, we force the selector to choose more high complexity operators, by increasing the lower bound $T_{min}$ for the target number of the operators $g_0$ .", "When comparing DPP to the state of the art, the results shown in Table REF and Figure REF enable even stronger conclusions than those drawn for the ResNet.", "In this case, DPP-XL outperforms the Sparse R-CNN, establishing a new state of the art, and even DPP-S has a small AP loss (less than $1\\%$ ) compared to the latter.", "These results confirm that DPP is a generic framework, which can perform well with different types of backbones.", "Table: Comparison to state-of-the-art object detectors on COCO validation set with MobileNetV2.", "Four variants of DPP with various sizes are shown, based on light FPN for features with 2 strides (16, 32).", "For DPP, $\\bar{N}$ is the equivalent proposal number, defined in the same way as in Table , while for baselines $\\bar{N}$ is either the proposal number or the number of queries.", "The complexity (GFLOPS) is only that of the detection head.", "Inference speed.", "Inference speed is measured for DPP and baselines on MobileNetV2 with a single-threaded core of an Intel(R) Xeon(R) E5-2470 CPU (2.4 GHz) (Deformable DETR [34] does not support a CPU implementation).", "Results are obtained by averaging inference time over all images in the COCO validation split and are shown in Figure REF .", "The latency is for the whole network, not just the head.", "It can be seen that DPP achieves a consistently better latency-precision curve and its savings in computation are clearly reflected in savings of inference time."
], [ "Ablation Study", "In this section, we present some ablation studies for the proposed loss function and hyperparameters used by DPP.", "The backbone is based on ResNet-50 and, by default, 100 proposals ($N=100$ ) are used by all models.", "The lower bound $T_{min}$ and multiplier $\\alpha $ for the operator $g_0$ are 1 and 2.", "All experiments are performed on the COCO dataset.", "Selection loss.", "We start by exploring the influence of the selection loss $L_s$ on DPP performance.", "Table REF shows that using either component, IoU loss $L_{iou}$ or complexity loss $L_c$ , alone degrades the precision of DPP.", "Without $L_{iou}$ the precision drops by $0.4\\%$ ($41.8\\%$ vs $42.2\\%$ ), because the selector can no longer fully match the proposal qualities to the operator complexities.", "Without $L_{c}$ , the precision is $1.1\\%$ worse ($41.1\\%$ vs $42.2\\%$ ).", "This is because, during training without $L_{c}$ , the model is more prone to assign the light operator ($g_2$ ) to proposals.", "Beyond weakening precision, this more critically prevents the complexity of DPP from being modified as needed.", "Table REF studies the trade-off between $L_{iou}$ and $L_c$ as a function of $\\lambda $ .", "The performance is similar for $\\lambda =1$ and $\\lambda =10$ , but further increasing $\\lambda $ makes DPP focus too much on complexity and ignore the importance of IoU matching, degrading performance.", "We thus set $\\lambda =10$ in all subsequent experiments.", "Table: Effect of the loss functions, i.e.", "the IoU loss L iou L_{iou} and the complexity loss L c L_c on DPP.Table: Effect of the hyperparameter λ\\lambda in the selection loss (L s =L iou +λL c L_s=L_{iou}+\\lambda L_{c}) of DPP.Target number of heavy operators.", "The target number $T$ of useage of heavy operator $g_0$ is leveraged in $L_c$ to control the complexity of DPP.", "$T$ is determined by two hyperparameters, the lower bound $T_{min}$ and the multiplier $\\alpha $ .", "Four DPP variants are implemented by varying these hyperparameters as shown in Figure REF .", "The average number of times the operator $g_0$ is selected per image is $\\lbrace 8,15,22,31\\rbrace $ in the COCO validation set, for models ranging from small to large FLOPS in Figure REF .", "It can be seen that the precision of the model with $T_{min}=1$ and $\\alpha =2$ is at the inflection point of the curve, beyond which the precision grows slowly for a large increase of the computational cost.", "When $\\alpha =2$ and $T_{min}=1$ , the model selects the $g_0$ operator 15 times on average.", "This is only twice the average instance number in COCO (7), confirming the effectiveness of loss function $L_c$ .", "This result also suggests that using twice as many high complexity proposals as the number of object instances is a very effective choice in terms of the complexity-precision trade-off for the detection head.", "In summary, the number of high complexity operators used on average can be very smaller than the overall number of proposals ($N=100$ ).", "Moreover, both hyperparameters can be used to modify the complexity of the detection head.", "The multiplier is more useful when the best complexity-precision trade-off is desired while the lower bound is more effective when the goal is to achieve the best precision irrespective of complexity." 
], [ "Conclusion", "In this paper, we propose to treat proposals of object detection unequally.", "A matching problem between proposals and operators is designed and optimized via a dynamic proposal processing (DPP) framework that contains a simple selector supervised with two loss functions, the IoU loss and the complexity loss.", "Experimental results show that the DPP framework achieves the state-of-the-art complexity-precision trade-off for the object detection on different types of backbones under a wide complexity range.", "We hope this paper can provide inspiration for different approaches of proposal processing by future research as well as research in deeper questions, such as the role of computational constraints in the development of effective vision systems." ] ]
2207.03520
[ [ "Droop-e: Exponential Droop as a Function of Power Output for\n Grid-Forming Inverters with Autonomous Power Sharing" ], [ "Abstract This paper presents the novel Droop-e grid-forming inverter control strategy, which establishes an active power-frequency relationship based on an exponential function of the inverter power dispatch.", "The advantages of this control strategy include an increased utilization of available headroom, mitigated system frequency dynamics, and a natural limiting behavior, all of which are directly compared to the hitherto standard static droop approach.", "First, the small signal stability of the Droop-e control is assessed on a 3-bus system and found stable across all possible inverter power dispatches.", "Then, time-domain simulations show improved frequency dynamics at lower power dispatches, and a limiting behavior at higher dispatches.", "Finally, a novel secondary control scheme is introduced that achieves power sharing following the primary Droop-e response to load perturbations, which is shown to be effective in time-domain simulations of the 3- and 9-bus systems; comparative simulations with a static 5% droop yields unacceptable frequency deviations, highlighting the superiority of the Droop-e control." ], [ "Introduction", "As the shares of energy supplied by inverter based resources (IBRs) continues to grow around the world, dynamical challenges associated with the fundamental differences between IBRs and synchronous generators (SGs) become more exacerbated [1], [2].", "In particular, with the hitherto ubiquitous grid-following (GFL) control approach for parallel connected IBRs, very high instantaneous power penetrations become infeasible due to system dynamics and stability related concerns stemming from a paucity of grid-forming (herein understood as devices that establish and generally regulate the local voltage waveform) assets on the power system [3], [4], [5].", "Thus, attention in both academia and industry has recently shifted towards grid-forming (GFM) IBRs, which regulate the local frequency and voltage magnitude [6] independently, as opposed to conventional GFL IBRs that regulate real and reactive power injections as a function of the local voltage and frequency.", "In this paper, the frequency regulation capability of GFMs is leveraged to devise a novel frequency control method, Droop-e, which unlocks the full power potential of GFM devices, enabling the reliable and secure operation of power grids supplied by up to 100% IBRs, and offering autonomous power sharing amongst all frequency responsive devices.", "The presence of a power generation device that directly regulates frequency on a power system is unprecedented; even an SG, the traditional grid-forming asset on power systems, has frequency trajectories first and foremost dictated and constrained by the laws of rotational kinematics (i.e., the swing equation).", "On the contrary, GFM IBRs have substantial control freedom and response agility due to the absence of physical motions and are instead primarily constrained by the limits of power availability and device component ratings (e.g., switches, capacitors, etc).", "This fundamental contrast between the GFM IBR and the SG presents a great opportunity for a modern approach to generation device enhanced operability, particularly when these GFM devices are paired with storage or curtailed resources, an unavoidable reality during high IBR futures when positive headroom, a fundamental requirement for general power system operation, must 
be sourced from IBRs.", "A review of the state of the art reveals that a variety of control schemes exist for the GFM approach [7], including static droop [8], virtual synchronous machine [9], and virtual oscillator control [10].", "While these approaches have substantially different dynamical responses, they all center on linear frequency–power relationships.", "Variations on static droop include a feedforward mechanism known as selfsync, the virtual impedance loop (primarily for reactive power sharing), adaptive droop with a power differential element, and robust droop (applicable to resistive networks) [11].", "Matching control adjusts the phasor frequency based on the DC-link capacitor voltage state, which is an indicator of unequal power flow through the device and roughly analogous to the kinetic energy in a synchronous generator rotor [12].", "Varied droop gains are explored briefly in [13], with no discussion of the frequency dynamics but rather of the steady-state impacts and unequal load sharing.", "The work in [14] explored the concept of an exponential type droop control for distributed energy resources (DERs), but the aim is more accurate reactive power sharing in resistive networks.", "The exponential relation is inverted as compared to the proposed method herein; whereas the goal in [14] is to not exceed a certain frequency threshold, here the goal is to provide the most power to the network prior to natural limiting.", "Zhong [15] investigates limitations in droop control based on resistive (distributed) impedances and the $P$ – $E$ , $Q$ – $\\omega $ scheme.", "GFM power overload limiting by rapidly reducing the frequency with a PI controller has been presented in [16], [17], [18].", "This paper introduces a novel GFM control method, Droop-e, that leverages the unique capability of GFM inverters to directly regulate frequency as an exponential function of real power output, making it a nonlinear control approach.", "This is a departure from the conventional linear droop control, which regulates frequency in direct proportion to changes in real power output.", "The Droop-e control improves system frequency response in several ways, including: 1) a larger utilization of available headroom, 2) improved frequency dynamics exhibiting higher damping, 3) a less deviant nadir and a more favorable rate of change of frequency (ROCOF), and 4) an intrinsic, natural power limiting behavior.", "Additionally, a novel secondary controller is presented that enables power sharing amongst interconnected devices following the primary Droop-e response." ], [ "Fundamentals of Frequency Response", "This section discusses the fundamentals of device frequency response for SGs and GFM IBRs, highlighting the contrasts and motivational basis for the novel Droop-e control."
], [ "Frequency Response of Synchronous Generators", "In conventional power systems, wherein power deficits are compensated by SGs through a governor response, a generation-load imbalance requires a commensurate deviation in frequency as a signal for SG governors to adjust power output.", "The change in power supplied to the network by governor action is a function of frequency, as described in (REF ): $p_{m,G} - p_{m,G,set} = D (\\omega _0 - \\omega )$ where $p_{m,G}$ is the SG mechanical power, $p_{m,G,set}$ is the exogenous SG mechanical power setpoint, $D$ is the droop gain, which in per unit is 5% in the United States, $\\omega _{0}$ is the radian frequency setpoint, and $\\omega $ is the local, and system-wide synchronization radian frequency upon reaching steady state.", "The core governor dynamics of an SG are captured by (REF ), and a basic no-reheat turbine in (), as presented in [19]: $T_{SV}\\frac{dp_{SV}}{dt} &= - p_{SV} + p_{m,G,set} - \\frac{1}{D}\\left(\\frac{\\omega _G}{\\omega _{s}} - 1\\right)\\\\T_{CH}\\frac{dp_{m,G}}{dt} &= - p_{m,G} + p_{SV}$ where $T_{SV}$ is the valve time constant, $p_{SV}$ is the steam chest power command, $D$ is the droop gain, $\\omega _G$ is the SG frequency, $\\omega _{s}$ is the synchronous frequency, $T_{CH}$ is the turbine steam chest time constant, and $p_{m,G}$ is the mechanical power, equal to the device mechanical torque ($t_{m}$ ) in per unit.", "The reciprocal position of $D$ in (REF ) shows that for values of $D$ approaching 0%, the governor dynamics become increasingly faster without bound.", "The frequency dynamics of the device evolve according to the swing equation (REF ); the damping component is not shown for illustrative purposes: $\\frac{2H}{\\omega _s} \\frac{d\\omega _G}{dt} = p_{m,G} - p_{e,G}$ where $H$ is the inertia constant of the device and $p_{e,g}$ is the electrical power.", "Transient load perturbations manifest as deviations in $p_{e,g}$ , which cause the frequency to evolve according to (REF ).", "Only after the frequency changes will the governor/turbine systems modulate $p_{m,G}$ ; changes in $p_{m,G}$ due to a perturbation are a function of $\\omega _G$ , and inversely proportional to $D$ .", "To achieve larger $p_{m,G}$ contributions to a relative network perturbation would require smaller $D$ values, which may result in instability due to the increase in rate of change of $p_{SV}$ (REF ), caused by the reciprocal relationship with $D$ .", "Operating at $D=0$ is mathematically infeasible." 
], [ "Frequency Response of Grid-Forming Inverters", "In emerging power systems with more GFM IBRs coming online, the frequency-power dynamic response of power systems might be governed differently.", "The droop controlled GFM frequency dynamics are shown in (REF ) and () [20]: $\\frac{d\\delta _{I}}{dt} &= D \\left(p_{m,I,set} - p_{m,I}\\right) + \\omega _{set}\\\\\\frac{d\\omega _I}{dt} &= D\\omega _{fil}\\left(p_{m,I} - p_{meas,I}\\right)$ where $\\delta _I$ is the inverter electric angle, $D$ is the droop gain, $p_{m,I,set}$ is the exogenous power setpoint, $p_{m,I}$ is the filtered power, $\\omega _I$ is the inverter frequency, $\\omega _{fil}$ is the power measurement cutoff frequency, and $p_{meas,I}$ is the measured, instantaneous power output.", "This control approach leverages the natural frequency–droop characteristics of inductive networks to distribute power perturbations amongst devices on the network[21].", "Conspicuously absent as a control variable in the frequency dynamics of the GFM is $\\omega _I$ .", "Changes in $\\omega _I$ and the point of interconnection frequency result in power deviations due to the laws of power flow; in fact, it is appropriate to think that power is extracted from a GFM due to the frequency regulation approach of the device.", "A change in $\\omega _I$ is not required to change the power exported to the network; with respect to frequency regulation, GFM devices are proactive.", "$D$ is a lever to influence how the local frequency changes, as a function of $p_{m,I}$ .", "Expressed another way, a GFM can deliver larger amounts of power to the network by simply changing the frequency less, which is accomplished with a smaller $D$ .", "As $D\\longrightarrow 0$ , deviations of $p_{m,I}$ yield decreasing changes in frequency, and the rate of change of frequency (ROCOF), expressed in (), eventually reaches zero.", "The governing equations indicate that operating at $D=0$ is feasible." ], [ "Comparative Analysis", "To demonstrate the different underlying dynamics for the SG and GFM, simple load step simulations in the power systems computer aided design (PSCAD) software environment were performed.", "The results are the time-domain response of an SG and a droop-controlled GFM for varied values of $D$ .", "The devices are isolated and connected to a load that is 50% of the device rating.", "A 10% load step perturbation was applied for three different scenarios; $D = 5\\%$ , $D = 1\\%$ , $D = 0\\%$ .", "The results are shown in Fig.", "REF .", "The time-domain plots indicate that the SG response contains an oscillatory mode with a growing frequency for smaller values of $D$ .", "This corroborates the observation of the infeasible operation at $D = 0$ for an SG from (REF ).", "On the other hand, the GFM response shows no signs of instability for any of the tested values of $D$ , indicating that at a local level, $D$ can take any of the simulated values.", "Figure: Power and frequency response of isolated GFM and SG devices to a 10% load step at a 50% initial loading.", "p e p_e is identical for each device.", "Varied droop gains are tested." 
], [ "Observations and Motivation", "The maintenance of a static droop gain across a system is sensible from a load sharing perspective, as all devices will contribute to load deviations proportionally according to the device rating.", "However, the change in power supplied to the network for frequency responsive devices such as SGs requires a frequency deviation that may be substantial depending on the disturbance; e.g., at a static 5% droop, a 25% increase in $p_{m,g}$ requires a 0.75 Hz deviation.", "For GFM devices at a 5% static droop gain, the served power incurs a potentially large frequency deviation for what might be a relatively small delivery of available headroom.", "If a GFM device would change the local frequency less for a given disturbance, achievable with a smaller $D$ , a larger amount of power could be extracted (if available), while simultaneously reducing the impact to frequency dynamics.", "For instance, the nadir for a given load perturbation would be relatively higher, while the rate of change of frequency would be smaller.", "The tests from Fig.", "REF show that variations in $D$ for an SG are limited, but $D$ can potentially take any value for a GFM inverter.", "The idea of non-5% droop gains has been presented [13], but the shortcomings of a smaller static droop gain include poor transient load sharing, and a high susceptibility to limit violations, particularly at higher dispatches.", "These shortcomings serve as the motivation for Droop-e control, where the simulation supported, hypothesized unbounded limit for assignment of $D$ for the GFM will be leveraged." ], [ "The Droop-e Concept", "The primary idea behind the Droop-e concept is making $D$ a function of available headroom, which is accomplished by using $p_{m,I}$ as the independent variable in an exponential (instead of linear) function, as shown in (REF ): $D = D_e(p_{m,I}) = \\omega _b\\alpha \\left[ e^{\\beta (p_{m,I,set})}- e^{\\beta (p_{m,I})}\\right]$ where $\\alpha $ is the proportional scalar, with units of per unit frequency, $\\beta $ is the argument scale in per unit power, and $\\omega _b$ is the base frequency.", "This function, $D_e(p_{m,I})$ , we call Droop-e control.", "The values of $\\alpha $ and $\\beta $ have been chosen as 0.002 and 3.0, respectively.", "This paper aims is to serve as a proof of concept and hence, the optimal tuning of these values are not pursued, though future work could focus on any number of constraints such as maximal damping at low dispatches, largest extraction of power prior to under frequency load shedding (UFLS), or acceptable slopes at/near the inverter limits, to name a few.", "Figure: Droop-e frequency curves, showing the resultant frequency trajectories for two different dispatches, with tangential droop curves at those dispatches for illumination purposes.", "A 5% static droop curve is included for comparison at the p set =0.2p_{set}=0.2 dispatch.", "The region below 59.0 Hz is shaded as an indicator of potential non-linear protection such as UFLS; however, it is noted that some UFLS schemes may trigger at higher frequencies.Figure REF shows the Droop-e power–frequency curves for two different inverter dispatch values, $p_{m,I,set}$ , when the steady state frequency is 60 Hz.", "A network perturbation will cause $p_{m,I}$ to change according to the laws of power flow, because the GFM inverter will initially maintain the local frequency, which incurs changes in angle differentials and a resultant change in power extraction from the inverter.", "Focusing 
on $p_{m,I,set}=0.2$ , the green trace shows the Droop-e frequency trajectory, with a static 5% droop trajectory (solid yellow trace) also shown at the same dispatch for comparison.", "The dotted green curve represents an extrapolation of the initial droop value at $p_{m,I,set}=0.2$ , which is equal to 1%.", "At $p_{m,I,set}=0.73$ , due to the values of $\\alpha $ and $\\beta $ selected for this work, the initial droop value is equal to 5%.", "Note the vastly different frequency–power trajectory for the Droop-e control at this dispatch, vs. the dispatch $p_{m,I,set}=0.2$ .", "The advantage of the proposed control scheme is made obvious when considering the three rays between the Droop-e and 5% static curves from $p_{set}=0.2$ , labelled 'a', 'b', and 'c' in Fig.", "REF .", "The power deviations for each control and resultant frequency deviations are presented in Table REF .", "Evidently, the Droop-e control delivers more power to the network for a given frequency deviation at lower dispatches, which is numerically presented by the $\\Delta p_{diff} = (Droop-e) - (Static\\phantom{0}5\\%)$ values.", "Thus, Droop-e allows the generator to utilize from 18% to 25% more of its headroom on a capacity basis than a static linear droop value of 5%, at $p_{m,I,set}=0.2$ , for a 0.75 Hz deviation from the nominal.", "Table: Comparison of Power Delivered for Droop-e and Static 5% Control at p m,I,set =0.2(pu)p_{m,I,set}=0.2 (pu).", "Corresponds with Fig.", "." ], [ "Increased use of Available Headroom", "A primary benefit of the Droop-e control is to leverage a larger amount of available headroom for a smaller frequency deviation at relatively lower dispatches, precisely when larger amounts of headroom are available.", "As a result, the frequency dynamics of the system are suppressed due to the GFM inverter delivering more power to the network with a relatively smaller frequency deviation.", "While this is helpful to mitigate the dynamics of smaller power systems when load perturbations are on par with the rating of the device, it is also beneficial from a greater headroom delivery potential for larger interconnected systems." ], [ "Intrinsic Limiting Capability", "A second benefit to the $Droop-e$ control is the increase in droop slope at higher dispatches.", "This is advantageous because a GFM inverter cannot export more power than the rating, and a mitigation strategy must be employed.", "With $Droop-e$ control, the frequency will be lowered at a greater rate at higher dispatches, which will incur larger power extraction from adjacent, frequency responding devices.", "One type of GFM limiting in the literature is the CERTS limiter [16], which employs aggressive PI controllers to rapidly change frequency when violations are met.", "The benefit of Droop-e control over this method is that the device does not enter a non-droop calculated regime with power violations, but instead maintains a droop-type relation with $p_{m,I}$ ." 
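As a cross-check on the comparison of Table REF, the Droop-e law of (REF) is easy to evaluate numerically. The short sketch below is illustrative only and is not the authors' code: the 60 Hz base, the per-unit conventions, and the use of the stated α = 0.002 and β = 3.0 are assumptions. It computes the initial (tangential) droop slope at a dispatch of 0.2 pu and compares the power that can be extracted for a 0.75 Hz deviation against a static 5% droop.

```python
import numpy as np

F_BASE = 60.0              # Hz (assumed nominal frequency)
ALPHA, BETA = 0.002, 3.0   # Droop-e parameters quoted in the text
R_STATIC = 0.05            # static droop gain (5 %)

def droop_e_freq_dev(p, p_set):
    """Per-unit frequency deviation produced by the Droop-e law at output power p (pu)."""
    return ALPHA * (np.exp(BETA * p_set) - np.exp(BETA * p))

def power_for_deviation(df_hz, p_set):
    """Output power (pu) reached when the frequency has dropped by df_hz under Droop-e."""
    target = -df_hz / F_BASE                       # per-unit frequency deviation
    return np.log(np.exp(BETA * p_set) - target / ALPHA) / BETA

p_set, df = 0.2, 0.75
p_droop_e = power_for_deviation(df, p_set)         # Droop-e operating point after the dip
dp_static = df / (F_BASE * R_STATIC)               # power delivered by a static 5 % droop
print(f"initial Droop-e slope at p_set={p_set}: "
      f"{ALPHA * BETA * np.exp(BETA * p_set):.3f} pu  (~1 %)")
print(f"Droop-e power delivered for {df} Hz: {p_droop_e - p_set:.3f} pu")
print(f"static 5 % power delivered:          {dp_static:.3f} pu")
```

Under these assumptions the sketch reproduces the roughly 1% initial slope at $p_{set}=0.2$ and the additional quarter of a per-unit of headroom utilization discussed above.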
], [ "Lower Rate of Change of Frequency", "A third benefit of the Droop-e control comes in the form of reduced ROCOF at lower inverter dispatch levels.", "The expression of ROCOF in () shows a direct proportionality to $D$ .", "With Droop-e, this is replaced by $D_e(p_{m,I})$ , which is strictly less than $D$ for dispatches below $p_{m,I,set} = 0.73$ .", "Therefore, at these lower dispatches, the ROCOF is less than for a static 5% droop.", "This is an important benefit to secure the reliability of power delivery where the grid is equipped with relays that activate on the basis of ROCOF." ], [ "Small Signal Stability Assessment", "The first step in assessing the viability of the Droop-e control is a small signal analysis.", "The small signal stability analysis approach consists of expressing the entire power system including lines, loads, and generators in the differential–algebraic form of (REF ) and (): $\\frac{dx}{dt} &= f(x,y,u)\\\\0 &= g(x,y,u)$ where $x$ is a vector of dynamical states, $y$ is a vector of algebraic variables, $u$ is the set of exogenous inputs, $f$ is the set of functions describing the time evolution of the dynamical states, $x$ , and $g$ is the set of functions relating the network algebraic variables.", "(REF ) and () can be linearized in the following form: $\\Delta \\dot{x} = A_{sys}\\Delta x + B\\Delta u$ where $A_{sys}$ represents the aggregation of all algebraic equations within the dynamical expressions, and $B$ is the matrix of exogenous control parameters.", "The eigenvalues $\\lambda _i$ of $A_{sys}$ are generally complex in the form of $\\lambda _i = \\alpha _i + j\\omega _i$ , where $\\alpha _i$ and $\\omega _i$ are the real and imaginary parts, respectively, of the $i$ th eigenvalue.", "Positive values of $\\alpha _i$ indicate fundamental instabilities, while the damping ($\\zeta $ ) of the eigenvalues is calculated as (REF ): $\\zeta _i = \\frac{-\\alpha _i}{\\sqrt{\\alpha _i^2 + \\omega _i^2}}$ Consider the simple 3-bus network of Fig.", "REF .", "This system is used to demonstrate the device level stability via a small signal stability analysis.", "An SG is located at bus 1 and a Droop-e GFM IBR (assumed a battery energy storage system with no energy availability constraints) is at bus 3.", "The impedances $X_a$ and $X_b$ connect the three buses.", "The network base is 100 MVA, 18 kV, which applies to all per unit values except for the GFM, which is rated at 50 MVA.", "The GFM was purposefully chosen at a lower rating as compared to the SG, to show the stabilizing benefit of the Droop-e control even at relatively lower ratings.", "The load at bus 2 is constant power, with a 0.95 leading power factor.", "The network details are provided in Table REF .", "Figure: A simple 3-bus system.", "A synchronous generator is located at bus 1, while a Droop-e grid-forming inverter is at bus 3." 
], [ "Synchronous Generator Model", "The SG model used in these studies is constructed from the base model used in [19], implemented as shown in the block diagram of Fig.", "REF .", "The governor model (REF ) is a first order system acting on the difference between $p_{m,G,set}$ and the droop relation to frequency deviations, $\\Delta \\omega _G$ .", "The turbine model is a simple steam chest with no reheat process ().", "The standard swing equation (REF ) machine dynamics are included.", "The exciter is based on the IEEE Type-1 model.", "The saturation function is an exponential of the form: $S_E(E_{fd}) = \\gamma e^{\\epsilon E_{fd}}$ .", "Flux decay is modelled but not shown in Fig.", "REF .", "The result is a 9-th order model, with the states provided in (REF ).", "The parameter values are those from machine 3 in the 9 bus model developed in [19], with governor and turbine parameters from [18].", "The standard voltage behind reactance model is used to connect the SG to the network (see [19]).", "The SG parameters are provided in Table REF .", "Figure: The synchronous generator model with governor, turbine, machine, and exciter dynamic sub-systems.Figure: Eigenvalue trajectories of the 3 bus system for those with participation from grid-forming inverter states.", "Note the varied x and y axis scales.$x_{SG} = [\\delta _{G}, \\omega _{G}, E^{\\prime }_q, E^{\\prime }_d, E_{fd}, V_R, R_f, p_{m,G}, p_{SV}]$" ], [ "Grid-Forming Inverter Model", "The frequency control for the GFM model is shown in Fig.", "REF .", "The instantaneous measured power $p_{meas,I}$ is passed through a first order filter with time constant $T_{fil}$ .", "The resultant $p_{m,I}$ value is provided to the Droop-e block along with $p_{m,I,set}$ to determine the output frequency, $\\omega _I$ ; a factor of $2\\pi $ is not explicitly shown.", "Figure: Frequency control with the Droop-e method.", "The instantaneous, measured output power (p meas,I p_{meas,I}) is filtered prior to being passed to the Droop-e controller.", "Integration of the frequency (ω I \\omega _I) yields the local angle for the GFM (δ I \\delta _I).A voltage behind impedance model is used, as shown in Fig.", "REF , wherein the standard LCL filter coupling inductance is the impedance.", "A GFM inverter regulates the voltage across the LCL capacitor, which is the voltage provided to the source in Fig.", "REF .", "A constant voltage is assumed, which absolves the voltage and current proportional–integral controllers and the filter capacitor and inductor dynamical states [20].", "As the interest here is primarily on the relatively slower frequency dynamics, the constant voltage is practical and similar reductions have been exercised in other analyses [18].", "The governing equations of the GFM inverter with Droop-e control, as installed at bus 3 in the network of Fig.", "REF , are (REF ) and (): $\\frac{d\\delta _I}{dt} &= \\omega _b\\alpha \\left[ e^{\\beta (p_{m,I,set})} - e^{\\beta (p_{m,I})}\\right] + \\omega _{set}\\\\\\frac{d p_{m,I}}{dt} &= \\frac{-p_{m,I}}{T_{fil}} + \\frac{V_3sin(\\delta _I-\\theta _3)I_{I,d} + V_{3}cos(\\delta _I-\\theta _3)I_{I,q}}{T_{fil}}$ where $V_3$ is the RMS voltage at bus 3, $\\theta _3$ is the angle of bus 3, $\\delta _I$ is the internal angle of the GFM, and $I_{I,d}$ and $I_{I,q}$ are the internal $d$ and $q$ axis currents.", "The internal values are brought into the global reference frame with the $e^{j\\left(\\delta _I - \\frac{\\pi }{2}\\right)}$ expression.", "The internal voltages, $E_d$ and $E_q$ , are taken as 
constants.", "This constant voltage assumption reduces the prototypical 13th order GFM model [20] to a 2nd order model with the states of (REF ), because the current and voltage controllers, the filter inductor and capacitors, and the reactive power equations, are ignored.", "The relevant parameters are provided in Table REF .", "Figure: The voltage behind impedance model adopted to represent the grid-forming inverter in the small signal stability analysis.$x_{GFM}= [\\delta _{I}, p_{m,I}]$ Table: System Parameters" ], [ "Results and Discussion", "The eigenvalues of the 3-bus system of Fig.", "REF were calculated for a range of power flows that span the per unit dispatch of the GFM inverter, from 0.01 to 0.99 to assess the stability of the full range of power dispatches for the Droop-e controller.", "Three complex eigenvalue pairs were identified via participation factor analysis as involving the GFM states of (REF ); $\\lambda _{1,2}$ , $\\lambda _{3,4}$ , and $\\lambda _{5,6}$ .", "The eigenvalue pair $\\lambda _{1,2}$ involves the states $\\delta _I$ , $p_{m,I}$ , $\\delta _G$ , and $\\omega _G$ .", "The level of participation varies as the dispatch, but all four states are present through the range of investigated dispatches.", "Figure REF shows that the eigenvalue trajectory is pure real and negative, for the range $p_{set} = [0.01 - 0.4]$ , and at $p_{m,I,set} =0.4$ a bifurcation occurs and the modes become oscillatory.", "The damping, as interpreted by the damping traces of $\\zeta $ included in the charts, decreases monotonically up to the full dispatch point, $p_{set} = 0.99$ , but remains at a reasonable level.", "Figure REF depicts the loci of migration of the eigenvalue pair $\\lambda _{3,4}$ in the complex plane, which includes participation from the states $\\delta _I$ , $p_{m,I}$ , $\\delta _G$ , $\\omega _G$ , $E^{\\prime }_d$ , and $p_{SV}$ .", "This eigenvalue pair has an oscillatory element at low $p_{m,I,set}$ values that becomes increasingly damped as the dispatch is increased.", "The mode becomes pure real at the same bifurcation point as $\\lambda _{1,2}$ , indicating a likely coupling between the two.", "The trajectory of this pair shows that the mode is always damped, due to the left half plane location.", "The eigenvalue pair, $\\lambda _{5,6}$ , shown in Fig.", "REF , depicts an oscillatory mode at all dispatches, with a decreasing damping as the dispatch of the GFM inverter is increased.", "This mode is always well damped, with $\\zeta < 0.7$ for all dispatches.", "This mode, which is in the range of $0.15\\rightarrow 0.63 Hz$ , is a low frequency oscillatory mode.", "The trajectory of this pair corroborates the frequency damping benefit to the system of the Droop-e control, particularly at low dispatches.", "The results of the small signal stability analysis here suggest that all modes of the 3 bus system with the Droop-e GFM control, including those not shown but only involving SG states, have a negative real part and positive damping due to $\\alpha _i < 0$ for all $p_{set}$ values, and hence form a stable system.", "Figure: Device frequency response for the three dispatch cases: A, B, and C.Figure: Device real power response for the three dispatch cases; A, B, and C." 
], [ "Time-Domain Simulations: 3-Bus System", "The dynamic simulations were conducted in the PSCAD software package was used to run dynamic simulations for varied dispatch points to display the full expression of characteristics.", "The GFM model is 13th order, with internal current and voltage controllers and an output LCL filter.", "It is executing a reactive power-droop relationship ($V-Q$ ).", "A full description of the model can be found in [20], while the parameters are those from Tables REF and REF .", "The SG model was constructed with internal PSCAD models and established with the parameters from Table REF .", "The dynamic models and the 3-bus network used is available open-source at [22].", "Table: Additional Dynamic System ParametersThree 10% load step (7.5 MW, 2.5 Mvar) simulations were carried out, where the dispatch of the GFM is varied in order to showcase the improved response at lower dispatches.", "The initial power flow for each case, 'A', 'B', and 'C', is presented in Table REF .", "Because the SG is rated at 100 MVA, the full range of dispatch with the 50 MVA GFM inverter is possible with the 75 MW load.", "The simulations are executed by establishing the two devices as ideal sources with no dynamics enabled at the power flow determined voltage and angle.", "The dynamics are subsequently released in a manner conducive to achieving the desired steady state.", "Two metrics are used to quantify frequency response; the nadir, which is the lowest frequency value post disturbance, and the peak ROCOF, which is calculated with a sliding window of $T_w = 0.1s$ ; $ max|\\dot{f_G}(t)| = \\frac{\\omega _G(t + T_{w}) - \\omega _G(t)}{2\\pi T_{w}}$ .", "The frequency statistics are calculated from the SG rotational frequency alone.", "Table: Dispatches for each case.", "Synchronous Generator is 100 MVA; Grid-Forming Inverter is 50 MVA" ], [ "Results and Discussion", "The time-domain plots of device frequency and real power output are presented in Fig.", "REF , and Fig.", "REF , respectively.", "Table REF shows frequency statistics and power differentials from the steady state values for each case.", "Table: Case Results–Frequency Statistics Apply to Synchronous Generator Shaft Speed; ΔP\\Delta P is machine per unitFigure: Synchronous generator and grid-forming inverter power output for static 5% droop and case B dispatch.", "Response is identical for cases A, B, and C; A and C are not shown.", "Note that the grid-forming inverter power output exceeds 0.05 p.u., a power violation in case C.The results of case A show slow frequency dynamics (Fig.", "REF , with a ROCOF of 0.44 Hz/s and a nadir of 59.93 Hz, which is approximately equivalent to the settling frequency.", "The device powers (Fig.", "REF ) show that the majority of the load perturbation was met by the GFM inverter.", "The slow frequency mode of $\\lambda _{5,6}$ (explained in the previous section) is very well damped, as expected by the eigenvalue trajectory of Fig.", "REF .", "The frequency response of both the SG and GFM are nearly identical.", "Case B, where the dispatch of the GFM inverter is $p_{set} = 0.5$ , shows a relatively larger frequency deviation (Fig.", "REF ) as compared to case A, with the case B nadir 0.1 Hz lower.", "The ROCOF is larger, at 0.61 Hz/s, which is expected due to the larger Droop-e gains as applied to ().", "A low frequency oscillatory element is more present in the response, which corroborates the decrease in damping at higher dispatches from the $\\lambda _{5,6}$ trajectory.", "The power 
outputs of the devices show that a larger amount of power is delivered to the network from the SG, which occurs only because the system settles at a lower frequency.", "The GFM inverter still delivers a larger per unit quantity, indicating that even at a 50% dispatch, the GFM can still contribute more power than for a static 5% droop value, wherein the per unit power export of both devices would settle to the same value.", "Finally, the results of case C, when the Droop-e GFM inverter is dispatched at $p_{m,I,set} = 0.95$ , are shown in Fig.", "REF and Fig.", "REF .", "First, the frequency dynamics show an even lower nadir (59.70 Hz), and a larger ROCOF of 0.93 Hz/s.", "Because the GFM inverter has very little headroom, its ability to diminish the frequency dynamics is minimal.", "The high frequency oscillatory mode of $\lambda _{1,2}$ is present in the GFM output; it is a local mode.", "These oscillations exhibit poor damping; however, a mitigation strategy is beyond the scope of this paper and is left as a direction for future research.", "Also present is the low frequency mode of $\lambda _{5,6}$ (explained in the previous section), which shows even less damping, further corroborating the eigenvalue trajectory of Fig.", "REF .", "The power output of the devices includes a `GFM Limit' trace to show the maximum permissible output of the GFM device.", "Even though the device is dispatched very near the limit, at $p_{set} = 0.95$ , the limit is not violated.", "This shows the naturally limiting behavior of the Droop-e control, wherein the device maintains the frequency–power exponential relationship without additional PI controller action, as has been presented in other work [13]." ], [ "Power Sharing", "The Droop-e control of the GFM is a strict departure from the static droop convention, which yields power sharing amongst frequency responsive devices.", "Namely, if all devices operating with frequency response maintain a global droop value (i.e., 5% in North America), then all devices will contribute to power differentials equally, as a function of the device rating.", "If devices do not share power equitably, then some may be far closer to limits than others, which can lead to instability.", "The Droop-e control does not hold this power sharing objective, as the primary goal is to provide more power by maintaining smaller frequency deviations via the nonlinear droop relation.", "Thus far, the efficacy of the Droop-e control in mitigating frequency transients has been shown, but the question remains of how to achieve autonomous power sharing after the transients have diminished.", "Here, the unique capability of the GFM inverter to directly regulate frequency is used to develop a novel power sharing control."
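Before describing the controller, note that the two frequency metrics used in the case studies above, the nadir and the windowed peak ROCOF of (), reduce to a few lines of code. The sketch below is illustrative: it operates on a synthetic first-order frequency dip rather than simulation output, and the sample spacing is assumed uniform.

```python
import numpy as np

def nadir_and_peak_rocof(t, f, t_w=0.1):
    """Nadir (Hz) and peak |ROCOF| (Hz/s) using a sliding window of width t_w (s)."""
    dt = t[1] - t[0]
    n = max(1, int(round(t_w / dt)))          # window length in samples
    rocof = np.abs(f[n:] - f[:-n]) / t_w      # finite difference over the window
    return f.min(), rocof.max()

# Synthetic post-disturbance trace: first-order dip toward a 59.8 Hz settling value.
t = np.arange(0.0, 10.0, 1e-3)
f = 60.0 - 0.2 * (1.0 - np.exp(-t / 0.5))
nadir, rocof = nadir_and_peak_rocof(t, f)
print(f"nadir = {nadir:.3f} Hz, peak ROCOF = {rocof:.3f} Hz/s")
```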
], [ "Power Sharing Controller", "The proposed power sharing controller, presented in Fig.", "REF , operates by modulating the output frequency with an offset component, $\\omega _{ps}$ .", "By not bypassing the fundamental Droop-e, the GFM inverter will continue to provide damping to the system with the exponential droop relation, but will also change frequency such that the other frequency responsive devices react and equitable power sharing is accomplished.", "First, the frequency deviation that would result with a static droop (i.e., 5%), $\\omega _{5\\%}$ in Fig.", "REF , is directly calculated with the resultant power deviation (REF ).", "This frequency is compared with the Droop-e output, $\\omega _{D_e}$ , and the resultant power sharing component $\\omega _{ps}$ , to generate an error ().", "$\\omega _{5\\%} &= (p_{m,I,set}-p_{m,I})D_{5\\%}\\\\\\omega _{e} &= \\omega _{5\\%} - \\omega _{\\Delta } - \\omega _{ps}$ The logic block will remain open until a disturbance is registered (REF ): $closed\\phantom{0}if\\phantom{0}{\\left\\lbrace \\begin{array}{ll}|\\Delta p_{m,I}|>\\epsilon _p \\\\|\\frac{d p_{m,I}}{dt}| < \\epsilon _{dp}\\end{array}\\right.", "}$ where are $\\epsilon _p$ and $\\epsilon _{dp}$ are tolerance parameters.", "Once the disturbance criteria is met, this error is passed through an integrator block with gain $k$ , which generates the frequency offset $\\omega _{ps}$ .", "As this offset is added to the output frequency $\\omega _I$ , $p_{m,I}$ will change due to the dynamics of AC power transfer and other frequency responsive devices on the network.", "This change is compensated for in the controller, and the GFM will arrive at the equitable, per unit power sharing value as $\\omega _e$ is driven to 0 by the integrator.", "Note that the static droop gain is a parameter that can be arbitrarily set; e.g., 4% in Europe and 5% in North America.", "Figure: The power sharing controller that adds an offset, ω ps \\omega _{ps}, to the output frequency ω I \\omega _I, to achieve autonomous, 5% power sharing amongst other frequency responsive devices." 
], [ "Demonstration of Power Sharing Mechanism", "To demonstrate the efficacy of the power sharing controller in Fig.", "REF , dynamic simulations were performed on the 3-bus system similar to the load steps of Section , but with a larger, 37.5 MW load increase (a 50% increase, which is recognized as enormous, but used for illustrative purposes on this simple system) to show the capability of the Droop-e control as well as that of the proposed power sharing strategy.", "Two simulations were performed; (i) with Droop-e and (ii) with Static-5% droop.", "The dispatch data corresponds to case A from Table REF .", "In the first simulation, one with Droop-e control, $k = 0.3$ .", "The results exhibit when the power deviation was registered ($|\\Delta p_{m,I}| > \\epsilon _p = 0.01\\phantom{0}pu$ ), and the transients diminished ($|\\frac{dp_{m,I}}{dt}<\\epsilon _{dp} = \\phantom{0}0.001$ ), the controller began applying the recovery offset, $\\omega _{ps}$ .", "Fig.", "REF displays the response contribution from different components involved in this power sharing control strategy, involving $\\omega _{D_e}$ , $\\omega _{5\\%}$ , $\\omega _{\\Delta }$ , and $\\omega _{ps}$ .", "A factor of $(2\\pi )^{-1}$ was applied to each trace for obtaining a Hz value.", "Once the logic gate was closed, the exponential change in $\\omega _{ps}$ began.", "As the frequency of the device changed with $\\omega _{ps}$ , the output power $p_{m,I}$ also changed, which incurred changes in $\\omega _{D_e}$ and $\\omega _{5\\%}$ .", "At the conclusion of this extended controller action, the frequency successfully reached the equitable settling value with with $\\omega _{\\Delta }$ arriving at the $\\omega _{5\\%}$ value.", "Figure: Power sharing controller variable trajectories during the power sharing interval of the simulation; t=5.8-14st=5.8-14s.", "Variables match those from the controller diagram in Fig.", "The novelty of the proposed power sharing controller allows the GFM device to compensate for power deficit and frequency variations in a more efficient way with Droop-e control, and still settle at the very same value that a conventional 5% static droop-based power sharing would yield.", "The results of these simulations are shown in Fig.", "REF , with Fig.", "REF presenting the SG frequency response for these two simulations; the GFM frequency response with a static 5% droop was nearly identical to the SG and hence, not shown.", "These frequency results corroborate the superiority of the power sharing extended Droop-e control relative to the static droop control.", "They indicate the peak ROCOF for the Droop-e control was 2.3 Hz/s, compared to 3.9 Hz/s for the static 5% droop.", "The static 5% case experienced a much more deviant frequency nadir, and entered potential UFLS territory; Droop-e certainly did not.", "Once the power sharing controller was initiated, at approximately $t=6s$ , the frequency response showed an exponential decrease (due to the integrator) as the GFM inverter tracked to achieve power sharing, and made GFM headroom available to respond to another potential event.", "The nadir with the Droop-e control was the settling frequency.", "Figure: Power sharing recovery control implementation, following a 50% load step.", "The static 5% droop response is provided for comparison.Fig.", "REF shows the power output of the GFM inverter for each controller; droop-e vs. 
static 5% droop.", "These results show that the Droop-e control delivered more power to the network than the static 5% droop control.", "When the power sharing controller was initiated, the power output exhibited a slow exponential decline to the 5% droop value; equitable power sharing was achieved autonomously within 15 seconds of the perturbation, while this rate was a parameterized gain that can be tuned for a faster or slower response by adjusting the value of $k$ ." ], [ "Time-Domain Simulations: IEEE 9-Bus System", "The IEEE 9-bus system [23], as configured in [20], was used to demonstrate and validate the capability of the Droop-e control on a mesh network, with multiple Droop-e GFM devices.", "The system configuration is given in Table REF , which corresponds with the network diagram shown in Fig.", "REF .", "The perturbation applied was a 10% load step at bus 6.", "Three cases were simulated, the first is 9-A with all generators as SGs, modelled as in the previous dynamic simulations.", "The second case, 9-B, has generators 1 and 3 supplanted with static 5% droop GFMs.", "The third case, 9-C, has these two GFMs converted to Droop-e control.", "The power sharing control parameters were the same as from Section .", "Table: 9 Bus ConfigurationFigure: IEEE 9 bus system.", "Generators 1 and 3 are changed to Static-5% and Droop-e in Cases 9-B, and 9-C, respectively.In these simulations, a weighted frequency is calculated according to $f(t) = \\frac{\\sum _{i=1}^n (MVA_i*f_i(t))}{\\sum _{i=1}^n MVA_i}$ where $f_i(t)$ is the frequency of device $i$ at time $t$ , $MVA_i$ is the device $i$ rating, and $n$ is the number of devices.", "This weighted frequency is used to determine the ROCOF and nadir values, according to the same definitions as presented in Section .", "The mechanical inertia rating of the system configuration, presented in Table REF , is a weighted average calculated as $ H = \\frac{\\sum _{i=1}^n H_i S_{B,i}}{\\sum _{i=1}^nS_{B,i}}$ where $H_i$ is the inertia rating (in $s$ ) of device $i$ , $S_{B,i}$ is the MVA rating of device $i$ , and $n$ is the number of devices.", "The inertia rating of the GFM devices is 0 s. 
The time-domain average frequency response from the simulations of each case is presented in Fig.", "REF .", "The blue dot-dash trace represents the Case 9-A response, with the quintessential second-order trajectory, the most deviant nadir (56.68 Hz) of the three cases, and an initial ROCOF of 0.69 Hz/s.", "Case 9-B, the orange dashed trace, shows an improvement in the nadir (59.77 Hz), but a much larger ROCOF of 1.22 Hz/s.", "Finally, Case 9-C, with the solid green trace, shows the superior performance of the Droop-e control.", "In this case, Case 9-C, although the inertia is a third of that of Case 9-A, the ROCOF of 0.66 Hz/s is nearly identical to the Case 9-A value.", "The frequency trace for 9-C showed only a small initial overshoot, but this did not even register as the nadir because of the relatively large immediate delivery of power due to the Droop-e controller.", "It also showed that, approximately 4 seconds after the load step, the power sharing recovery control engaged, with a gradual, exponential decrease in frequency towards the settling frequency, 59.83 Hz, identical to the other two cases.", "The nadir for Case 9-C was the settling frequency.", "Table REF summarizes the frequency statistics for each case.", "Table: 9-Bus Case Results–Frequency Statistics Derived From Average", "Figure: Average frequency response of 9-bus system for the three cases simulated following a 10% load step at bus 6.", "The power response of each device for Case 9-C is shown in Fig.", "REF .", "It is clear that the two Droop-e controlled GFM inverters at buses 1 and 3 delivered significantly more power to the network than the SG at bus 2.", "Consequently, the transients were diminished within 1.5 seconds of the load step perturbation.", "Due to the differences in dispatch and the operation of the Droop-e control, the resultant power delivery from each GFM device was different.", "Approximately 4 seconds after the perturbation, the power sharing control was activated for each GFM, with Gen 3 engaging a tenth of a second prior to Gen 1.", "This control is autonomous and a function of local variables; therefore, the time of initiation could vary amongst devices.", "The power output from all three devices then converged to an identical value, successfully achieving the 5% droop-derived contribution based on the size of the load step.", "Figure: Individual device power responses for Case 9-C.", "The results here show a superior transient frequency response by Droop-e, and autonomous, equitable power sharing with multiple devices operating under the same control."
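The common per-unit value that the devices converge to can be sanity-checked with the textbook droop relation: with every device on a shared droop $R$, each contributes the same per-unit power change $\Delta p = \Delta P_{load}/\sum S_i$ and the frequency settles at $f_0(1 - R\,\Delta p)$. The sketch below is illustrative only; the machine ratings used are the classical IEEE 9-bus values and may differ from the configuration of Table REF.

```python
F0 = 60.0      # Hz, nominal frequency
R = 0.05       # common droop on every frequency-responsive device

def droop_settling(load_step_mw, ratings_mva):
    """Common per-unit contribution and settling frequency under a shared droop R."""
    s_total = sum(ratings_mva)
    dp_pu = load_step_mw / s_total           # identical per-unit output change per device
    f_settle = F0 * (1.0 - R * dp_pu)        # steady-state frequency after primary response
    return dp_pu, f_settle

# Placeholder ratings (classical IEEE 9-bus machines) and an assumed 31.5 MW step.
dp, f = droop_settling(31.5, [247.5, 192.0, 128.0])
print(f"each device contributes {dp:.4f} pu; settling frequency {f:.2f} Hz")
```

Read the other way, the reported 59.83 Hz settling value corresponds to a shared contribution of roughly 0.057 pu under a 5% droop.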
], [ "Conclusion", "This paper presented the novel Droop-e control strategy for grid-forming inverters, which establishes an active power–frequency relationship based on an exponential function of the power dispatch.", "The advantages of this control approach consist of an increased utilization of available headroom, mitigated frequency dynamics, and a natural limiting behavior.", "The proposed controller was demonstrated and validated using both the small-signal stability analysis and computational time-domain EMT simulations and compared to the hitherto standard static droop approach.", "Further, a novel secondary control that achieves power sharing autonomously with multiple devices following the primary Droop-e response to load perturbations was introduced and simulated on the 9-bus network.", "Some potential directions for future research are: More comprehensive controller design to mitigate the high frequency mode present at high $p_{m,I,set}$ values.", "Stability analysis, both analytical and transient, of larger networks with multiple Droop-e devices.", "Analysis of the secondary power sharing control with multiple devices Droop-e devices.", "Investigation of Droop-e on larger networks to explore the potential reduction in the quantity of frequency responsive devices required for standard contingencies." ], [ "Acknowledgment", "This work was authored by the National Renewable Energy Laboratory, operated by Alliance for Sustainable Energy, LLC, for the U.S. Department of Energy (DOE) under Contract No.", "DE-AC36-08GO28308.", "The views expressed in the article do not necessarily represent the views of the DOE or the U.S. Government.", "The U.S. Government retains and the publisher, by accepting the article for publication, acknowledges that the U.S. Government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this work, or allow others to do so, for U.S. Government purposes." ] ]
2207.03564
[ [ "The contact mapping class group and rational unknots in lens spaces" ], [ "Abstract We determine the contact mapping class group of the standard contact structures on lens spaces.", "To prove the main result, we use the one-parametric convex surface theory to classify Legendrian and transverse rational unknots in any tight contact structure on lens spaces up to Legendrian and transverse isotopy." ], [ "Introduction", "Ever since Eliashberg [13] determined the homotopy type of the group of contactomorphisms of $(S^3,\\xi _{std})$ relative to a point, there have been several studies on the group of contactomorphisms of various contact manifolds.", "For example, Gompf [31] showed that there exists a contactomorphism of the standard contact structure on $S^1 \\times S^2$ which is smoothly isotopic to the identity, but not contact isotopic to the identity.", "After that, there have been many similar results, see [5], [10], [22], [25], [39] for examples.", "Also, there have been studies on higher homotopy groups, see [4], [5], [6], [14], [19], [20] for examples.", "However, the contact mapping class group has not been determined for many contact manifolds so far.", "If we focus on closed manifolds, the contact mapping class group was only determined for the standard contact structure on $S^3$ by Eliashberg [13], the overtwisted contact structures on $S^3$ by Vogel [42], and the canonical contact structures on the unit cotangent bundles $U^*\\Sigma _g$ for $g \\ge 1$ and their cyclic covers by Giroux and Massot [29].", "The main difficulty to study the contact mapping class group is that we need to understand the behavior of some Legendrian (or contact) submanifolds under Legendrian (or contact) isotopy, which has been poorly studied in other than $S^3$ .", "According to the author's knowledge, the only known results on the classification of Legendrian knots outside of $S^3$ were the linear curves in any tight contact structure on $T^3$ by Ghiggini [24] and some knots and links in the standard contact structure on $S^1 \\times S^2$ by Ding and Geiges [11] and by Chen, Ding and Li [8].", "In Theorem REF and REF , we determine the contact mapping class group of the standard contact structures on lens spaces and $S^1 \\times S^2$ .", "The main ingredients are Theorem REF and REF , the classification of Legendrian and transverse rational unknots up to Legendrian and transverse isotopy.", "Once we have the classification, we can perturb a contactomorphism to fix a standard neighborhood of some Legendrian rational unknot.", "Then the problem reduces to determine the contact mapping class group of its complement, which is determined in Theorem REF by applying the one-parametric convex surface theory, in particular, Colin's isotopy discretization [9], and various properties of bypasses studied by Honda [34], [35] and Honda, Kazez and Matić [37], see Section  REF The main technique for Theorem REF and REF is again the application of one-parametric convex surface theory and various properties of bypasses, which were utilized to study Legendrian and transverse knots in contact structures on $S^3$ in [7], [18].", "In Section , we develop the technique and apply it to a solid torus and lens spaces to study Legendrian and transverse knots in tight contact structures on those manifolds." 
], [ "The contact mapping class group of lens spaces", "We first review the basic notations.", "First, we assume every contactomorphism is a coorientation preserving one unless otherwise specified.", "We denote the group of contactomorphisms of a closed contact manifold $(M,\\xi )$ by $\\operatorname{Cont}(M,\\xi ) = \\text{the group of coorientation preserving contactomorphisms of $(M,\\xi )$}.$ The contact mapping class group of $(M,\\xi )$ is defined to be the group of contact isotopy classes of contactomorphisms of $(M,\\xi )$ .", "We denote it by $\\pi _0(\\operatorname{Cont}(M,\\xi )) = \\operatorname{Cont}(M,\\xi )/\\sim $ where $f \\sim g$ if $f$ is contact isotopic to $g$ .", "Let $U$ be the unknot in $S^3$ .", "For a pair of coprime integers $(p,q)$ satisfying $p > q > 0$ , we define $L(p,q) = S^3_{-p/q}(U).$ Now we are ready to state our first main result.", "Theorem 1.1 The contact mapping class group of $(L(p,q),\\xi _{std})$ is $\\pi _0(\\operatorname{Cont}(L(p,q), \\xi _{std})) = {\\left\\lbrace \\begin{array}{ll}\\mathbb {Z}_2 &\\; p \\ne 2\\, \\text{ and }\\, q \\equiv -1 \\;(\\!\\!\\!\\!\\!\\!\\mod {p}), \\\\\\mathbb {Z}_2 &\\; q \\lnot \\equiv \\pm 1 \\;(\\!\\!\\!\\!\\!\\!\\mod {p})\\, \\text{ and }\\, q^2 \\equiv 1 \\;(\\!\\!\\!\\!\\!\\!\\mod {p}),\\\\1 &\\; \\text{otherwise}.", "\\end{array}\\right.", "}$ For the first two cases, the contact mapping class group is generated by a contactomorphism $\\sigma $ .", "See Section REF for the definition of $\\sigma $ .", "There is a quick application of Theorem REF .", "In the forthcoming paper with Baker, Etnyre, and Onaran [2], we classify Legendrian and transverse torus knots in the standard tight contact structures on lens spaces.", "Basically, we classify the knots coarsely, meaning that up to coorientation preserving contactomorphism which is smoothly isotopic to the identity.", "However, due to Theorem REF , we could improve the result up to Legendrian and transverse isotopy.", "Comparing the contact mapping class group and the smooth mapping class group of lens spaces (Theorem REF ), we can make the following observation.", "Corollary 1.2 The induced map from the natural inclusion $i_*\\colon \\pi _0(\\operatorname{Cont}(L(p,q), \\xi _{std})) \\rightarrow \\pi _0(\\operatorname{Diff}_+(L(p,q)))$ is an isomorphism if and only if $q \\equiv -1 \\pmod {p}$ .", "In particular, $i_*$ is injective but not surjective if $q \\lnot \\equiv -1 \\pmod {p}$ .", "The main reason for the failure of $i_*$ being surjective is that there exist two non-isotopic standard contact structures on $L(p,q)$ if and only if $q \\lnot \\equiv -1 \\pmod {p}$ .", "See Section REF for more details about the standard contact structures on lens spaces.", "Since we can consider $S^1 \\times S^2$ as $L(0,1)$ , we also determine the contact mapping class group of the standard contact structure on $S^1 \\times S^2$ .", "We should mention that the main part of the proof was essentially done by Ding and Geiges [10].", "Theorem 1.3 The contact mapping class group of $(S^1 \\times S^2, \\xi _{std})$ is $\\pi _0(\\operatorname{Cont}(S^1 \\times S^2, \\xi _{std})) = \\mathbb {Z} \\oplus \\mathbb {Z}_2.$ Recall that $\\operatorname{Diff}_0(M)$ is the connected component of $\\operatorname{Diff}_+(M)$ containing the identity.", "We can define a subgroup of $\\operatorname{Cont}(M,\\xi )$ as follows: $\\operatorname{Cont}_0(M,\\xi ) := \\operatorname{Cont}(M,\\xi ) \\cap \\operatorname{Diff}_0(M).$ Ding and Geiges [10] proved that $\\pi 
_0(\\operatorname{Cont}_0(S^1 \\times S^2, \\xi _{std}))$ is isomorphic to $\\mathbb {Z}$ .", "From Corollary REF , it is immediate that $\\pi _0(\\operatorname{Cont}_0(L(p,q),\\xi _{std}))$ is trivial.", "Corollary 1.4 For every lens space $L(p,q)$ , we have $\\pi _0(\\operatorname{Cont}_0(L(p,q),\\xi _{std})) = 1.$" ], [ "Legendrian and transverse rational unknots in lens spaces", "A rational unknot in a lens space is a core of a Heegaard torus.", "See Section REF for more details, and also see Figure REF and REF for (contact) surgery presentations of rational unknots in lens spaces.", "Figure: Two contact surgery presentations for the Legendrian rational unknots in tight contact structures on L(p,q)L(p,q) with error ℚ \\operatorname{\\overline{\\operatorname{tb}}}_{\\mathbb {Q}}.", "Here, p ' /q ' p^{\\prime }/q^{\\prime } is the largest rational number satisfying pq ' -p ' q=-1pq^{\\prime } - p^{\\prime }q = -1.Etnyre and Baker [1] coarsely classified Legendrian rational unknots in any tight contact structure on lens spaces, see Theorem REF .", "Recall that the coarse classification is the classification up to contactomorphism which is smoothly isotopic to the identity.", "Also, Geiges and Onaran [23] coarsely classified non-loose Legendrian unknots in some lens spaces.", "However, to study the contact mapping class group, we need to understand the behavior of Legendrian (or contact) submanifolds under Legendrian (or contact) isotopy.", "Our second main result is the classification of Legendrian rational unknots in any tight contact structure on lens spaces up to Legendrian isotopy.", "Theorem 1.5 Suppose $p > q > 0$ and $\\xi $ is a tight contact structure on $L(p,q)$ .", "Rational unknots in $\\xi $ are Legendrian simple: there are Legendrian representatives ${\\left\\lbrace \\begin{array}{ll}L_1 \\,& p = 2, \\\\L_1,-L_1 \\,& p \\ne 2\\, \\text{ and }\\, q \\equiv \\pm 1 \\;(\\!\\!\\!\\!\\!\\!\\mod {p}), \\\\L_1,-L_1,L_2,-L_2 & \\text{otherwise}.\\end{array}\\right.", "}$ with $\\operatorname{tb}_{\\mathbb {Q}}(\\pm L_1)= -\\frac{p-q}{p} \\;\\; \\text{and}\\;\\; \\operatorname{tb}_{\\mathbb {Q}}(\\pm L_2) = -\\frac{p-p^{\\prime }}{p}\\\\$ where $p^{\\prime }/q^{\\prime }$ is the largest rational number satisfying $pq^{\\prime } - p^{\\prime }q = -1$ .", "Also the rational rotation numbers are determined by the formula in Lemma REF or REF .", "Every Legendrian representative of rational unknots in $\\xi $ is Legendrian isotopic to one of the Legendrian representatives above, or their stabilization.", "See Figure REF for the Legendrian mountain range of the rational unknot in $L(2,1)$ .", "We also give the classification of (positive) transverse rational unknots in any tight contact structure on lens spaces up to transverse isotopy.", "Figure: The Legendrian mountain range of the rational unknot in (ℝℙ 3 ,ξ std )(\\mathbb {R}\\mathbb {P}^3,\\xi _{std}).", "Each dot represents a unique Legendrian representative with (rot ℚ ,tb ℚ )(\\operatorname{rot}_{\\mathbb {Q}},\\operatorname{tb}_{\\mathbb {Q}}).Theorem 1.6 Suppose $p > q > 0$ and $\\xi $ is a tight contact structure on $L(p,q)$ .", "Rational unknots in $\\xi $ are transversely simple: there are transverse representatives ${\\left\\lbrace \\begin{array}{ll}T_1 \\,& p = 2, \\\\T_1,\\overline{T}_1 \\,& p \\ne 2\\, \\text{ and }\\, q \\equiv \\pm 1 \\;(\\!\\!\\!\\!\\!\\!\\mod {p}), \\\\T_1,\\overline{T}_1,T_2,\\overline{T}_2 & \\text{otherwise,}\\end{array}\\right.", "}$ such that every transverse representative of rational unknots in 
$\\xi $ is transversely isotopic to one of the transverse representatives above, or their stabilization.", "Also.", "$T_i$ is a positive transverse push-off of $L_i$ and $\\overline{T}_i$ is a positive transverse push-off of $-L_i$ .", "There is an application of Theorem REF and REF .", "In the forthcoming paper with Baker, Etnyre, and Onaran [2], we classify Legendrian and transverse positive torus knots in any tight contact structure on lens spaces.", "Basically, we classify the knots coarsely.", "However, due to Theorem REF and REF , we could improve the result up to Legendrian and transverse isotopy.", "Unfortunately, there are some subtleties for negative torus knots (e.g.", "Legendrian large cables, see [7]), so we could only improve the result for the negative torus knots with sufficiently negative $\\operatorname{tb}_{\\mathbb {Q}}$ and $\\operatorname{sl}_{\\mathbb {Q}}$ ." ], [ "Acknowledgements", "The author thanks Anthony Conway for helpful comments on the first draft of the paper and John Etnyre for a useful conversation." ], [ "Background and preliminary results", "In this section, we review and prove some useful results on contact topology and the mapping class group of lens spaces that will be used throughout the paper.", "We assume the reader is familiar with 3-dimensional contact topology, in particular, Legendrian and transverse knots and the convex surface theory.", "See [16], [21], [33] for more details." ], [ "Convex surfaces and bypasses", "First, we warn the reader that our convention is slopes of curves on a torus are given by $\\frac{\\rm {meridian}}{\\rm {longitude}}$ .", "This has led to some differences between how we cite results and how they were initially stated.", "We will use several properties of convex surfaces without explicitly mentioning them: perturbing a compact surface to be convex, realizing a particular characteristic foliation for the given dividing set, and using the Legendrian realization principle.", "We also assume that the boundary of any convex surface $\\Sigma $ is Legendrian, if non-empty.", "When $\\operatorname{\\partial }\\Sigma $ is connected, then $\\operatorname{\\partial }\\Sigma $ is null-homologous and $tb(\\operatorname{\\partial }\\Sigma )$ is well-defined.", "Kanda [38] proved $tb(\\operatorname{\\partial }\\Sigma ) = -\\frac{1}{2}\\left|\\operatorname{\\partial }\\Sigma \\cap \\Gamma _\\Sigma \\right|,$ where $\\Gamma _\\Sigma $ is the dividing set of $\\Sigma $ .", "Suppose $\\Sigma $ is a properly embedded convex surface in a contact 3–manifold with convex boundary.", "Then the relative Euler class of the contact structure evaluates to $\\chi (\\Sigma _+) - \\chi (\\Sigma _-)$ on $\\Sigma $ where $\\Sigma _\\pm $ are the positive/negative regions of the convex surface.", "We can modify a convex surface by attaching a bypass, introduced by Honda [33].", "Consider a convex overtwisted disk whose dividing set consists of a single contractible closed curve.", "Take a properly embedded arc $\\gamma $ on the disk intersecting the dividing curve in two points.", "By applying the Legendrian realization principle, we can assume that $\\gamma $ is a Legendrian arc, and cut the disk along $\\gamma $ ; each half-disk is called a bypass.", "Now, suppose a bypass $D$ transversely intersects a convex surface $\\Sigma $ such that $D \\cap \\Sigma = \\gamma $ .", "Let $\\Gamma _\\Sigma $ be the dividing set of $\\Sigma $ .", "Since the dividing set interleaves, $\\gamma $ intersects $\\Gamma _\\Sigma $ in three points.", "We call the 
Legendrian arc $\\gamma $ on $\\Sigma $ the attaching arc of the bypass $D$ and say $D$ is a bypass for $\\Sigma $ .", "After edge-rounding, the convex boundary of a neighborhood of $D \\cup \\Sigma $ is a surface isotopic to $\\Sigma $ but with its dividing set changed in a neighborhood of the attaching arc as shown in Figure REF .", "We call this process a bypass attachment along $\\gamma $.", "Note that Figure REF is drawn for the case that the bypass $D$ is attached “from the front”, that is, sitting above the page.", "If we attach a bypass “from the back” of $\\Sigma $ , the result will be the mirror image of Figure REF .", "Figure: The effect of a bypass attachment from the front.To study the effect of a bypass attachment on a torus, we first need to review the Farey graph.", "Given two rational numbers $a/b$ and $c/d$ , we define their Farey sum to be $\\frac{a}{b} \\oplus \\frac{c}{d} = \\frac{a+c}{b+d}.$ We also define their Farey multiplication to be $\\frac{a}{b} \\mathchoice{\\mathbin {\\hbox{\\scalebox {0.6}{$\\m@th \\displaystyle \\bullet $}}}}{}{}{}$ cd = ad - bc.", "$$ Take the Poincaré disk in $\\mathbb {R}^2$ and label the points $(0,1)$ as $0=0/1$ and $(0,-1)$ as $\\infty =1/0$ .", "Take the half circle with non-negative $x$ -coordinate.", "Pick a point in a half-way between two labeled points and label it with the Farey sum of the two points and connect it to both points by a geodesic.", "Repeat this process until all the positive rational numbers are a label on some point on the unit disk.", "Repeat the same for the half circle with non-positive $x$ -coordinate (for $\\infty $ , use the fraction $-1/0$ ).", "We call this disk with the labels the Farey graph, see Figure REF .", "Also notice that two rational numbers $r$ and $s$ satisfy $|r \\mathchoice{\\mathbin {\\hbox{\\scalebox {0.6}{$\\m@th \\displaystyle \\bullet $}}}}{}{}{}$ s| = 1$ if and only if there is an edge between them in the Farey graph.$ Figure: The Farey graph.Consider a convex torus $T$ with the dividing set $\\Gamma _T$ , consisting of two homologically essential closed curves.", "Let $\\gamma $ be an attaching arc of a bypass for $T$ .", "Honda [33] completely studied what happens when $\\gamma $ is a part of a ruling curve for $T$ .", "Theorem 2.1 (Honda [33]) Suppose a convex torus $T$ has two dividing curves of slope $s$ , and $\\gamma $ is an attaching arc of a bypass for $T$ , which is a part of a ruling curve of slope $r$ .", "Let $T^{\\prime }$ be the convex torus obtained from $T$ by attaching a bypass along $\\gamma $ .", "Then the dividing set $\\Gamma _{T^{\\prime }}$ consists of two dividing curves of slope $s^{\\prime }$ , where if the bypass is attached from the front, then $s^{\\prime }$ is the farthest point on the Farey graph clockwise of $s$ and counterclockwise of $r$ that is connected to $s$ by an edge (and if $s$ and $r$ are connected by an edge, then $s^{\\prime } = r$ ), if the bypass is attached from the back, then $s^{\\prime }$ is the farthest point on the Farey graph counterclockwise of $s$ and clockwise of $r$ that is connected to $s$ by an edge (and if $s$ and $r$ are connected by an edge, then $s^{\\prime } = r$ ).", "In general, we can find a bypass lying on a convex surface if there exist boundary-parallel dividing curves on the surface.", "In particular, we can find a bypass in a convex disk with more than one dividing curves by applying the Legendrian realization principle.", "Theorem 2.2 (Honda [33]) Let $\\Sigma $ be a convex surface and $D$ be a convex disk with 
Legendrian boundary.", "Suppose $\\Sigma $ and $D$ intersect transversely and $\\Sigma \\cap D = \\operatorname{\\partial }D$ .", "Suppose $tb(\\operatorname{\\partial }D) < -1$ .", "Then for any boundary-parallel dividing curve $d$ on $D$ , there exists a bypass for $\\Sigma $ containing $d$ ." ], [ "Bypasses and contact isotopy", "We continue to review the properties of bypasses.", "Let $\\Sigma $ be a convex surface and $D$ be a bypass for $\\Sigma $ .", "Suppose the attaching arc of $D$ passes three dividing curves $d_1$ , $d_2$ and $d_3$ consecutively.", "We say the bypass $D$ is effective if $d_2$ is different from $d_1$ and $d_3$ .", "Honda showed [33] attaching an effective bypass to a torus will decrease the number of dividing curves if $d_1$ , $d_2$ and $d_3$ are all different, or change the dividing slope of $T$ if $d_1$ and $d_3$ are the same (Theorem REF ).", "Suppose $D$ is a non-effective bypass for a convex surface $\\Sigma $ and let $\\Sigma ^{\\prime }$ be the resulting convex surface after attaching the bypass $D$ to $\\Sigma $ .", "Define $|\\Gamma _{\\Sigma }|$ to be the number of dividing curves on $\\Sigma $ .", "There are three types of non-effective bypasses for $\\Sigma $ according to the effect on the dividing set: $\\Gamma _{\\Sigma ^{\\prime }} = \\Gamma _\\Sigma $ , $\\Gamma _{\\Sigma ^{\\prime }}$ contains a contractible closed curve and $|\\Gamma _{\\Sigma ^{\\prime }}| > |\\Gamma _{\\Sigma }|$ , $\\Gamma _{\\Sigma ^{\\prime }} \\ne \\Gamma _\\Sigma $ and $|\\Gamma _{\\Sigma ^{\\prime }}| \\ge |\\Gamma _{\\Sigma }|$ .", "See Figure REF for the first two cases.", "Recall that Giroux [26] proved that an $I$ -invariant neighborhood of a convex surface $\\Sigma $ is tight if and only if $\\Sigma \\lnot \\cong S^2$ and there is no closed contractible dividing curve on $\\Sigma $ , or $\\Sigma \\cong S^2$ and there is a single dividing curve on $\\Sigma $ .", "Thus the second type of bypasses does not occur in a tight contact structure.", "If a bypass does not change the dividing set, we call it a trivial bypass.", "Honda [35] showed that a trivial bypass is indeed trivial.", "Lemma 2.3 (Honda [35]) Suppose $\\Sigma $ is a convex surface which is closed or compact with Legendrian boundary.", "If $D$ is a trivial bypass for $\\Sigma $ , then a neighborhood $N(\\Sigma \\cup D)$ , which is a result of the bypass attachment, is an $I$ -invariant neighborhood of $\\Sigma $ .", "Figure: Two attaching arcs of non-effective bypasses.A rotative layer is a universally tight contact structure on $T^2 \\times I$ with convex boundary such that the dividing slopes of $T^2 \\times \\lbrace 0\\rbrace $ and $T^2 \\times \\lbrace 1\\rbrace $ are different.", "Also, a non-rotative layer is a tight contact structure on $T^2 \\times [0,1]$ with convex boundary such that any convex tori parallel to the boundary have the same dividing slope.", "Non-rotative layers were studied in [34], [37].", "One useful property is the attach=dig principle.", "We introduce a version of the principle for a simple case.", "Theorem 2.4 (The attach=dig principle, Honda–Kazez–Matić [37]) Let $(T^2 \\times [0,4], \\xi )$ be a rotative layer.", "Denote $T \\times \\lbrace i\\rbrace $ by $T_i$ for $i \\in \\mathbb {Z}$ and suppose they are convex.", "Let $s_i$ and $n_i$ be the dividing slope and the number of dividing curves on $T_i$ , respectively.", "Suppose $s_0 < s_2 < s_4$ .", "Then after contact isotopy relative to $T_2$ and the boundary, $T^2 \\times [1,3]$ becomes an $I$ -invariant neighborhood with 
$s_1 = s_2 = s_3$ and $n_1 = n_3 = 2$ .", "Also, $T_1$ can be obtained by attaching a sequence of bypasses from the back of $T_2$ , and $T_3$ can be obtained by attaching a sequence of bypasses from the front of $T_2$ .", "Let $\\Sigma $ be a closed surface or a compact surface with boundary.", "Giroux [27] showed that we can perturb a contact structure on $\\Sigma \\times [0,1]$ so that $\\Sigma \\times \\lbrace t\\rbrace $ are convex for all but finite $t \\in [0,1]$ , and a neighborhood of the non-convex $\\Sigma \\times \\lbrace t\\rbrace $ is contactomorphic to a bypass attachment.", "After that, Honda and Huang [36] generalized it to every dimension.", "Theorem 2.5 (Giroux [27], Honda–Huang [36]) Let $\\xi $ be a contact structure on $\\Sigma \\times [0,1]$ such that $\\Sigma \\times \\lbrace 0\\rbrace $ and $\\Sigma \\times \\lbrace 1\\rbrace $ are convex.", "Then up to contact isotopy relative to the boundary, there exists a finite sequence $0 < t_1 < \\cdots < t_n < 1$ such that $\\Sigma \\times \\lbrace t\\rbrace $ is convex except for $t = t_i$ .", "There exists $\\epsilon > 0$ for each $i$ such that $\\Sigma \\times [t_i - \\epsilon , t_i + \\epsilon ]$ is contactomorphic to a bypass attachment.", "Colin [9] improved this result for a one-parameter family of embedded surfaces in a contact 3–manifold.", "Theorem 2.6 (Isotopy discretization, Colin [9], see also Honda [35]) Let $(M,\\xi )$ be a contact 3–manifold and $\\Sigma $ be a convex surface which is closed or compact with Legendrian boundary.", "Suppose $\\phi _t\\colon \\Sigma \\rightarrow M$ for $t\\in [0,1]$ is a smooth isotopy of $\\Sigma $ fixing the boundary and $\\phi _0(\\Sigma )$ and $\\phi _1(\\Sigma )$ are convex.", "Then there exists a finite sequence $0 = t_0 < t_1 < \\cdots < t_n = 1$ such that $\\phi _{t_i}(\\Sigma )$ is convex for $i = 0, \\ldots , n$ .", "$\\phi _{[t_i,t_{i+1}]}(\\Sigma )$ is contactomorphic to a bypass attachment.", "We end this section by showing that if a contactomorphism fixes a convex surface or a Legendrian knot, then after contact isotopy, the contactomorphism also fixes a neighborhood of them.", "The second statement of Lemma REF was proved in [10], but we present a proof for completeness.", "Lemma 2.7 Let $C$ be a subset in a compact contact 3–manifold $(M,\\xi )$ and $f\\colon (M,\\xi ) \\rightarrow (M,\\xi )$ be a contactomorphism.", "Suppose $f|_C = id$ .", "Then, if $C = \\Sigma $ is a compact convex surface, then there exist an $I$ -invariant neighborhood $N$ of $\\Sigma $ and a contactomorphism $\\widetilde{f}$ of $(M,\\xi )$ such that $\\widetilde{f}|_N = id$ and $\\widetilde{f}$ is contact isotopic to $f$ .", "if $C = L$ is a Legendrian knot, then there exist a standard neighborhood $N$ of $L$ and a contactomorphism $\\widetilde{f}$ of $(M,\\xi )$ such that $\\widetilde{f}|_N = id$ and $\\widetilde{f}$ is contact isotopic to $f$ .", "Consider the case $C = \\Sigma $ first.", "Take an $I$ -invariant neighborhood of $\\Sigma $ which is contactomorphic to $(\\Sigma \\times \\mathbb {R},\\, \\beta + u\\,dt)$ where $\\Sigma = \\Sigma \\times \\lbrace 0\\rbrace $ , $\\beta \\in \\Omega ^1(\\Sigma )$ and $u\\colon \\Sigma \\rightarrow \\mathbb {R}$ .", "Take another small neighborhood $N = \\Sigma \\times [-\\epsilon , \\epsilon ]$ satisfying $f(N) \\subset \\Sigma \\times \\mathbb {R}$ .", "We will use the following strategy: we will find an isotopy of contact embeddings $f_s\\colon (N,\\xi |_N) \\rightarrow (M,\\xi )$ where $f_0 = f|_N$ and $f_1 = id$ .", "According to the 
contact isotopy extension theorem [21], there exists a contact isotopy $\\phi _s\\colon (M,\\xi ) \\rightarrow (M,\\xi )$ satisfying $\\phi _0 = id$ and $\\phi _s \\circ f_0 = f_s$ .", "Then $\\widetilde{f} := \\phi _1 \\circ f$ is our desired contactomorphism.", "Let $v_0 := \\partial _t$ and $v_1 := f_*(\\partial _t)$ .", "It is not hard to check $\\mathcal {L}_{v_0}\\alpha = 0$ and $\\mathcal {L}_{v_1}\\alpha = \\lambda \\,\\alpha $ where $\\alpha = \\beta + u\\,dt$ and $\\lambda \\colon \\Sigma \\times \\mathbb {R} \\rightarrow \\mathbb {R}$ , so both $v_0$ and $v_1$ are contact vector fields transverse to $\\Sigma $ .", "Notice that $v_1$ is well-defined on $f(N)$ but we can extend it to entire $\\Sigma \\times \\mathbb {R}$ by extending the corresponding contact Hamiltonian.", "Now we define $v_s := sv_0 + (1-s)v_1$ .", "Then for every $s \\in [0,1]$ , the vector field $v_s$ is also a contact vector field since $\\mathcal {L}_{v_s}\\alpha &= s\\mathcal {L}_{v_0}\\alpha + (1-s)\\mathcal {L}_{v_1}\\alpha \\\\&= (1-s)\\lambda \\,\\alpha .$ Let $\\psi ^t_s$ be the flow of $v_s$ .", "Since $v_0 = \\partial _t$ , we have $\\psi ^t_0(p,0) = (p,t) \\,\\text{ for }\\, (p,t) \\in \\Sigma \\times \\mathbb {R}.$ Also, since $v_1 = f_*(v_0)$ on $N$ and $f|_{\\Sigma } = id$ , we have $\\psi ^t_1(p,0) &= f \\circ \\psi ^t_0 \\circ f^{-1}(p,0)\\\\&= f \\circ \\psi ^t_0(p,0)\\\\ &= f(p,t)$ for $(p,t) \\in N$ .", "Define an isotopy of contact embeddings $f_s(p,t) := \\psi ^t_{1-s}(p,0)$ for $(p,t) \\in N$ and it is our desired isotopy.", "Now consider the case $C = L$ .", "Take a standard neighborhood of $L$ which is contactomorphic to $(S^1 \\times \\mathbb {R}^2, dz - y\\,dx)$ where $x \\sim x+1$ is the coordinate on $S^1 = \\mathbb {R}/\\mathbb {Z}$ , the pair $(y,z)$ is the coordinates on $\\mathbb {R}^2$ and $L$ is identified with $S^1 \\times \\lbrace 0\\rbrace $ .", "Take another standard neighborhood $N = S^1 \\times D^2$ where $D^2$ is a small disk containing the origin such that $f(N) \\subset S^1 \\times \\mathbb {R}^2$ .", "We will follow the same strategy as in the case of $C = \\Sigma $ .", "That is, it is enough to find an isotopy of contact embeddings $f_s\\colon (N,\\xi |_N) \\rightarrow (M,\\xi )$ satisfying $f_0 = f|_N$ and $f_1 = id$ .", "We can write $f|_N$ in the form $f|_N(x,y,z) = (u(x,y,z),v(x,y,z),w(x,y,z))$ where $u\\colon N \\rightarrow S^1$ , $v\\colon N \\rightarrow \\mathbb {R}$ and $w\\colon N \\rightarrow \\mathbb {R}$ .", "In the local coordinates, we can rewrite the condition $f^*(\\alpha ) = \\lambda \\alpha $ where $\\lambda \\colon N \\rightarrow \\mathbb {R}^+$ for $f|_N$ to be a contact embedding as follows: $dw - v\\,du = \\lambda (dz - y\\,dx),$ which is equivalent to $\\left\\lbrace \\begin{array}{rcl}\\displaystyle \\frac{\\partial w}{\\partial x} - v\\frac{\\partial u}{\\partial x} & = & -\\lambda y,\\\\\\displaystyle \\frac{\\partial w}{\\partial y} - v\\frac{\\partial u}{\\partial y} & = & 0,\\rule {0cm}{9mm}\\\\\\displaystyle \\frac{\\partial w}{\\partial z} - v\\frac{\\partial u}{\\partial z} & = & \\lambda .\\rule {0cm}{9mm}\\end{array}\\right.$ Since $f|_L = id$ , we have $u(x,0,0) = x,\\quad v(x,0,0) = w(x,0,0) = 0.$ Notice that for $s>0$ , the dilation $\\delta _s(x,y,z) = (x, sy, sz)$ is a contactomorphism of $(S^1 \\times \\mathbb {R}^2, dz - y\\,dx)$ .", "Thus we have an isotopy of contact embeddings $g_s := \\delta _s^{-1} \\circ f|_N \\circ \\delta _s (x,y,z) = (u(x,sy,sz), \\frac{1}{s} v(x,sy,sz), \\frac{1}{s} w(x,sy,sz)).$ Let $\\lambda _0(x) 
:= \\lambda (x,0,0)$ .", "Since $u$ , $v$ and $w$ are $C^\\infty $ , we have $g_0 := \\lim _{s\\rightarrow 0}g_s = (x,\\, y \\cdot v_y(x,0,0) + z \\cdot v_z(x,0,0),\\, y \\cdot w_y(x,0,0) + z \\cdot w_z(x,0,0)).$ Differentiate the first equation in (REF ) with respect to $z$ and we obtain $w_{xz} - v_z u_x - v u_{xz} = -y\\lambda _z.$ Evaluate this equation at $(x,0,0)$ .", "Then by the equations in (REF ), we obtain $v_z(x,0,0) = w_{xz}(x,0,0).$ Differentiate the third equation in (REF ) with respect to $x$ and we obtain $w_{xz} - v_x u_z - v u_{xz} = \\lambda _x.$ Evaluate this equation at $(x,0,0)$ .", "Then by the equations in (REF ), we obtain $w_{xz}(x,0,0) = \\lambda _x(x,0,0) = \\lambda ^{\\prime }_0(x).$ Differentiate the first equation in (REF ) with respect to $y$ and we obtain $w_{xy} - v_y u_x - v u_{xy} = -\\lambda - y\\lambda _y.$ Evaluate this equation at $(x,0,0)$ .", "Then by the equations in (REF ), we obtain $v_y(x,0,0) = \\lambda (x,0,0) = \\lambda _0(x).$ Evaluate the equations in (REF ) at $(x,0,0)$ .", "Then by the equations in (REF ), we obtain $w_y(x,0,0) = 0, \\quad w_z(x,0,0) = \\lambda _0(x).$ Finally, from the equations (REF ), (REF ), (REF ) and (REF ), we obtain $g_0(x,y,z) = (x,\\, y \\cdot \\lambda _0(x) + z \\cdot \\lambda _0^{\\prime }(x),\\, z \\cdot \\lambda _0(x)),$ which is a contact embedding from $(N,\\xi |_N)$ to $(S^1 \\times \\mathbb {R}^2, dz - y\\,dx)$ .", "Now define $\\lambda _s(x) := s + (1-s)\\lambda _0(x)$ .", "Then we can define another isotopy of contact embeddings as follows: $h_s(x,y,z) := (x,\\, y \\cdot \\lambda _s(x) + z \\cdot \\lambda _s^{\\prime }(x),\\, z \\cdot \\lambda _s(x)).$ Let $f_s$ be a concatenation of $g_{1-s}$ and $h_s$ .", "This is our desired isotopy." ], [ "Tight contact structures on a solid torus and lens spaces", "Consider a tight contact structure $\\xi $ on $T(s_1,s_2) = T^2 \\times I$ with a characteristic foliation $\\mathcal {F}$ on the boundary that is divided by two dividing curves of slope $s_i$ on $T^2 \\times \\lbrace i\\rbrace $ for $i=0,1$ , where $s_0$ and $s_1$ are connected by an edge in the Farey graph.", "We say that a contact structure $\\xi $ is minimally twisting if for any boundary-parallel convex torus $T$ in $\\xi $ , the dividing slope is clockwise of $s_0$ and counterclockwise of $s_1$ in the Farey graph.", "Theorem 2.8 (Honda [33]) If $T(s_1,s_2)$ and $\\mathcal {F}$ are as above, then there exist exactly two minimally twisting tight contact structures on $T(s_1,s_2)$ that induce $\\mathcal {F}$ on the boundary, up to isotopy fixing $\\mathcal {F}$ .", "The two contact structures given by Theorem REF are distinguished by their relative Euler class.", "We call them positive and negative basic slices after picking an orientation.", "Let $V = S^1 \\times D^2$ and choose coordinates for $H_1(\\operatorname{\\partial }V)$ such that 0 is a longitude $S^1 \\times \\lbrace p\\rbrace $ (product framing), and $\\infty $ is a meridian.", "Let $p/q$ is a rational number and $k$ be the unique integer such that $\\frac{p+kq}{q} \\in [-1,0)$ , and $\\frac{q}{p+kq} = [r_0, \\ldots , r_n] = r_0-\\frac{1}{r_1-\\frac{1}{\\cdots - \\frac{1}{r_n}}}$ where $r_n \\le -1$ , and $r_i \\le -2$ for $i=0,\\ldots ,n-1$ .", "Theorem 2.9 (Honda [33]) Suppose $V = S^1 \\times D^2$ with two dividing curves of slope $s = p/q$ .", "Fix a characteristic foliation $\\mathcal {F}$ on $\\partial V$ that is divided by the dividing curves.", "Then there are $\\left|(r_0+1)\\cdots (r_{n-1}+1)r_n\\right|$ tight contact 
structures up to isotopy fixing $\\mathcal {F}$ , there is one to one correspondence between tight contact structures on $V$ and $T(\\lfloor s \\rfloor ,s)$ , if $s \\in \\mathbb {Z}$ , there is a unique tight contact structure and it is universally tight, if $s \\notin \\mathbb {Z}$ , there are exactly two universally tight contact structures, a tight contact structure on $V$ is universally tight if and only if it has the extremal relative Euler class, which evaluates to $\\pm (|q| - 1)$ on a convex meridian disk $D$ of $V$ whose boundary intersects the dividing curves on $\\partial V$ minimally.", "In this case, all dividing curves on $D$ is boundary-parallel.", "See Figure REF for example.", "To study tight contact structures on lens spaces, it is useful to use different coordinates for a solid torus.", "We say a tight contact solid torus with convex boundary is a solid torus with lower meridian if it has two dividing curves of slope $s$ with meridional slope $r$ , and any convex torus in the solid torus parallel to the boundary has a dividing slope clockwise of $r$ and counterclockwise of $s$ in the Farey graph.", "We denote it by $S(s,r;l)$ .", "We say a tight contact solid torus with convex boundary is a solid tours with upper meridian if it has two dividing curves of slope $s$ with meridional slope $r$ , and any convex torus in the solid torus parallel to the boundary has a dividing slope counterclockwise of $r$ and clockwise of $s$ in the Farey graph.", "We denote it by $S(s,r;u)$ .", "According to Theorem REF , both $S(s,r;l)$ and $S(s,r;u)$ admit a unique tight contact structure if and only if there is an edge between $s$ and $r$ in the Farey graph.", "We assume a solid torus $S^1 \\times D^2$ has lower meridian of slope $\\infty $ unless otherwise specified.", "Recall the standard contact structure $\\xi _{std}$ on $S^3$ is a union of standard neighborhoods of the Legendrian Hopf link $L_1 \\cup L_2$ with $\\operatorname{tb}(L_1) = \\operatorname{tb}(L_2) = -1$ .", "This gives a decomposition of $\\xi _{std}$ into $S(-1,0;u)$ and $S(-1,\\infty ;l)$ , where $L_1$ and $L_2$ are the cores of $S(-1,0;u)$ and $S(-1,\\infty ;l)$ , respectively.", "Suppose $(p,q)$ is a pair of coprime integers satisfying $p > q > 0$ .", "According to Giroux [27] and Honda [33], we can obtain any tight contact structure on a lens space $L(p,q)$ by performing contact $-(p/q-1)$ –surgery on $L_2$ , that is, remove a standard neighborhood $S(-1,\\infty ;l)$ of $L_2$ and glue $S(-1,-p/q;l)$ to the complement.", "Thus any tight contact structure on $L(p,q)$ can be decomposed into $S(-1,0;u) \\cup S(-1,-p/q;l)$ .", "See the first drawing of Figure REF .", "Notice that $L_1$ still has a standard neighborhood $S(-1,0;u)$ .", "We can also represent this decomposition on the Farey graph.", "Let $s_0,\\ldots ,s_n$ be the shortest path from $-p/q$ to 0 in the Farey graph clockwise of $-p/q$ and counterclockwise of 0.", "Notice that $s_0=-p/q$ , $s_{n-1}=-1$ and $s_n = 0$ .", "Decorate the edges in the path with $+$ or $-$ except for the first and the last ones.", "Each decorated edge from $s_i$ to $s_{i+1}$ represents a basic slice $T(s_i,s_{i+1})$ and the decoration on the edge represents the sign of the basic slice.", "The first and the last edges represent the solid tori $S(s_1,-p/q;l)$ and $S(-1,0;u)$ , respectively.", "We can consider $S(-1,-p/q;l)$ as a union of $S(s_1,-p/q;l)$ and $T(s_1,-1)$ .", "See Figure REF .", "For later usage, notice that $s_1 = (p^{\\prime }-p)/(q-q^{\\prime })$ where $p^{\\prime 
}/q^{\\prime }$ is the largest (extended) rational number satisfying $pq^{\\prime } - p^{\\prime }q = -1$ .", "We set $(p^{\\prime },q^{\\prime })=(1,0)$ if $q \\equiv 1 \\pmod {p}$ .", "Figure: A path in the Farey graph representing a tight contact structure on L(p,q)L(p,q).Let $(p,q)$ be a pair of coprime integers satisfying $p > q > 0$ and $\\frac{p}{q} = [r_0, \\ldots , r_n]$ where $r_i \\le -2$ for $0 \\le i \\le n$ .", "Theorem 2.10 (Giroux [27], Honda [33]) Suppose $(p,q)$ are as above.", "Then there are $\\left|(r_0+1)\\cdots (r_{n-1}+1)(r_n+1)\\right|$ tight contact structures on $L(p,q)$ up to isotopy, if $q \\equiv -1 \\pmod {p}$ , there exists a unique tight contact structure on $L(p,q)$ and it is universally tight, if $q \\lnot \\equiv -1 \\pmod {p}$ , there are exactly two universally tight contact structures on $L(p,q)$ , Any tight contact structure $\\xi $ on $L(p,q)$ can be decomposed into tight contact structures on $S(-1,-p/q;l)$ and $S(-1,0;u)$ .", "In particular, $\\xi $ is universally tight if and only if the contact structure $\\xi $ restricted to $S(-1,-p/q;l)$ is universally tight.", "There is another way to construct universally tight contact structures on lens spaces $L(p,q)$ .", "Consider $S^3$ as a unit sphere in $\\mathbb {C}^2$ .", "Then the standard contact structure $\\xi _{std}$ on $S^3$ is the kernel of $\\alpha = (x_1dy_1 - y_1dx_1 + x_2dy_2 - y_2dx_2)|_{S^3}.$ We can consider $L(p,q)$ as the quotient of $S^3 \\subset \\mathbb {C}^2$ under the $\\mathbb {Z}_p$ -action generated by $(z_1,z_2) \\mapsto (e^{2\\pi i/p}z_1, e^{2\\pi qi / p}z_2).$ Since $\\alpha $ is invariant under this $\\mathbb {Z}_p$ -action, we obtain an induced contact structure on $L(p,q)$ .", "We call this contact structure the standard contact structure $\\xi _{std}$ on $L(p,q)$ .", "However, one should notice that the standard contact structure is not unique in general.", "In fact, we can repeat the same construction on $-\\alpha $ and obtain another contact structure on $L(p,q)$ .", "Although $\\xi _{std}$ and $-\\xi _{std}$ are isotopic in $S^3$ , this is not the case for the induced contact structures on lens spaces in general.", "Thus we denote them by $\\xi _{std}^+$ and $\\xi _{std}^-$ , respectively.", "According to Theorem REF , $\\xi _{std}^+$ and $\\xi _{std}^-$ are isotopic if and only if $q \\equiv -1 \\pmod {p}$ .", "Thus in this case, we just denote them by $\\xi _{std}$ .", "Even in the case of $q \\lnot \\equiv -1 \\pmod {p}$ , since most of the arguments work in the same way, we will frequently denote them by $\\xi _{std}$ and this means that we fix one of two standard contact structures.", "If $q^2 \\equiv 1 \\pmod {p}$ , there exists an orientation preserving diffeomorphism $\\sigma $ on $L(p,q)$ , which is defined by $\\sigma \\colon L(p,q) &\\rightarrow L(p,q)\\\\(z_1,z_2) &\\mapsto (z_2, z_1)$ Also, there exists an orientation preserving diffeomorphism $\\tau $ on any lens spaces $L(p,q)$ , which is defined by $\\tau \\colon L(p,q) &\\rightarrow L(p,q)\\\\(z_1,z_2) &\\mapsto (\\overline{z}_1, \\overline{z}_2)$ We can check $\\sigma $ is a coorientation preserving contactomorphism of $\\xi _{std}$ as follows: $\\sigma ^*(\\alpha ) &= \\sigma ^*(x_1dy_1 - y_1dx_1 + x_2dy_2 - y_2dx_2)\\\\&= x_2dy_2 - y_2dx_2 + x_1dy_1 - y_1dx_1\\\\&= \\alpha .$ We can also check $\\tau $ is a coorientation reversing contactomorphism of $\\xi _{std}$ as follows: $\\tau ^*(\\alpha ) &= \\tau ^*(x_1dy_1 - y_1dx_1 + x_2dy_2 - y_2dx_2)\\\\&= -x_1dy_1 + y_1dx_1 - x_2dy_2 + y_2dx_2\\\\&= 
-\\alpha .$ If $q \\equiv -1 \\pmod {p}$ , since $\\xi _{std}^+$ and $\\xi _{std}^-$ are isotopic, we can apply the Moser's trick (see [21]) and find an isotopy $\\psi _t$ such that $(\\psi _1)_*(\\xi _{std}^\\pm ) = \\xi _{std}^\\mp $ .", "Thus $\\psi _1 \\circ \\tau $ is a coorientation preserving contactomorphism of $\\xi _{std}$ which is smoothly isotopic to $\\tau $ .", "We denote this by $\\overline{\\tau }$ ." ], [ "The mapping class group and rational unknots in lens spaces", "The mapping class group of lens spaces was determined by Bonahon [3].", "We warn the reader that his definition of lens spaces is different from ours.", "He defined $L(p,q)$ to be $p/q$ -surgery on the unknot in $S^3$ , while we defined $L(p,q)$ to be $-p/q$ -surgery on the unknot in $S^3$ (which is commonly used by contact topologists).", "Thus some statements below are different from the ones initially stated.", "Recall that in Section REF , we defined two diffeomorphisms, $\\sigma $ by $(z_1,z_2) \\mapsto (z_2,z_1)$ if $q^2 \\equiv 1 \\pmod {p}$ , and $\\tau $ by $(z_1,z_2)\\mapsto (\\overline{z}_1,\\overline{z}_2)$ .", "Bonahon [3] determined the mapping class group of lens spaces in terms of $\\sigma $ and $\\tau $ .", "Theorem 2.11 (Bonahon [3]) The mapping class group of $L(p,q)$ is $\\pi _0(\\operatorname{Diff}_+(L(p,q))) = {\\left\\lbrace \\begin{array}{ll}\\mathbb {Z}_2 \\oplus \\mathbb {Z}_2 \\cong \\langle \\sigma , \\tau \\rangle \\,& p \\ne 2,\\; q \\lnot \\equiv \\pm 1 \\;(\\!\\!\\!\\!\\!\\!\\mod {p})\\, \\text{ and }\\, q^2 \\equiv 1 \\;(\\!\\!\\!\\!\\!\\!\\mod {p}),\\\\\\mathbb {Z}_2 \\cong \\langle \\sigma \\rangle \\cong \\langle \\tau \\rangle \\,& p \\ne 2\\, \\text{ and }\\, q \\equiv -1 \\;(\\!\\!\\!\\!\\!\\!\\mod {p}),\\\\\\mathbb {Z}_2 \\cong \\langle \\tau \\rangle \\,& p \\ne 2\\, \\text{ and }\\, q \\equiv 1 \\;(\\!\\!\\!\\!\\!\\!\\mod {p}),\\\\\\mathbb {Z}_2 \\cong \\langle \\tau \\rangle \\,& p \\ne 2\\, \\text{ and }\\, q^2 \\lnot \\equiv 1 \\;(\\!\\!\\!\\!\\!\\!\\mod {p}),\\\\1 & p = 2.", "\\end{array}\\right.", "}$ It will be useful to consider the mapping class group of lens spaces relative to a Heegaard torus.", "Let $T$ be a Heegaard torus of $L(p,q)$ , which is unique up to smooth isotopy by Bonahon [3].", "Define $\\operatorname{Diff}_+(L(p,q);T)$ to be the group of orientation preserving diffeomorphisms fixing $T$ setwise.", "Bonahon [3] determined the mapping class group of $L(p,q)$ relative to $T$ .", "Theorem 2.12 (Bonahon [3]) The mapping class group of $L(p,q)$ relative to $T$ is $\\pi _0(\\operatorname{Diff}_+(L(p,q);T)) = {\\left\\lbrace \\begin{array}{ll}\\mathbb {Z}_2 \\oplus \\mathbb {Z}_2 \\cong \\langle \\sigma , \\tau \\rangle \\,& q^2 \\equiv 1 \\;(\\!\\!\\!\\!\\!\\!\\mod {p}),\\\\\\mathbb {Z}_2 \\cong \\langle \\tau \\rangle \\,& q^2 \\lnot \\equiv 1 \\;(\\!\\!\\!\\!\\!\\!\\mod {p}).\\end{array}\\right.", "}$ Bonahon [3] also studied the natural inclusion $i:\\operatorname{Diff}_+(L(p,q);T)) \\hookrightarrow \\operatorname{Diff}_+(L(p,q))$ at the $\\pi _0$ level.", "Theorem 2.13 (Bonahon [3]) The induced map from the natural inclusion $i_*\\colon \\pi _0(\\operatorname{Diff}_+(L(p,q);T)) \\rightarrow \\pi _0(\\operatorname{Diff}_+(L(p,q)))$ is surjective and the kernel is $\\ker i_* = {\\left\\lbrace \\begin{array}{ll}\\mathbb {Z}_2 \\oplus \\mathbb {Z}_2 \\cong \\langle \\sigma , \\tau \\rangle \\,& p = 2,\\\\\\mathbb {Z}_2 \\cong \\langle \\sigma \\circ \\tau \\rangle \\,& p \\ne 2\\, \\text{ and }\\, q \\equiv -1 \\;(\\!\\!\\!\\!\\!\\!\\mod 
{p}),\\\\\\mathbb {Z}_2 \\cong \\langle \\sigma \\rangle \\,& p \\ne 2\\, \\text{ and }\\, q \\equiv 1 \\;(\\!\\!\\!\\!\\!\\!\\mod {p}),\\\\1 \\,& q \\lnot \\equiv \\pm 1 \\;(\\!\\!\\!\\!\\!\\!\\mod {p}).\\end{array}\\right.", "}$ Now Theorem REF easily follows from Theorem REF and REF .", "A knot $K$ in a 3–manifold is a rational unknot if it is rationally null-homologous and its minimal rational Seifert genus is 0.", "Baker and Etnyre [1] showed that rational unknots in lens spaces are cores of the Heegaard torus $T$ , which is unique up to smooth isotopy by Bonahon [3].", "Since $T$ bounds two solid tori, we can define two oriented rational unknots $K_1$ and $K_2$ as follows: $K_1 &= \\lbrace (e^{i\\theta },0) : 0 \\le \\theta \\le \\frac{2\\pi }{p}\\rbrace ,\\\\K_2 &= \\lbrace (0,e^{i\\theta }) : 0 \\le \\theta \\le \\frac{2\\pi q}{p}\\rbrace .$ Also we define $-K_i$ to be the orientation reversal of $K_i$ for $i=1,2$ .", "From this, it is clear that $\\sigma (\\pm K_i) = \\pm K_{3-i}$ and $\\tau (\\pm K_i) = \\mp K_i$ for $i = 1,2$ .", "By Theorem REF and REF , we can determine when $\\pm K_1$ and $\\pm K_2$ become smoothly isotopic.", "Lemma 2.14 The oriented rational unknots in $L(p,q)$ are given by ${\\left\\lbrace \\begin{array}{ll}K_1 \\,& p = 2, \\\\K_1,-K_1 \\,& p \\ne 2\\, \\text{ and }\\, q \\equiv \\pm 1 \\;(\\!\\!\\!\\!\\!\\!\\mod {p}), \\\\K_1,-K_1,K_2,-K_2 & \\text{otherwise}.\\end{array}\\right.", "}$ up to smooth isotopy.", "Bonahon [3] essentially showed that if an orientation preserving diffeomorphism $f\\colon L(p,q)\\rightarrow L(p,q)$ sends $K_1$ to $K_2$ (up to isotopy), then it is smoothly isotopic to $\\sigma $ .", "He also showed that if $f$ sends $K_1$ to $-K_1$ , then it is smoothly isotopic to $\\tau $ .", "Due to this fact, we only need to figure out when those diffeomorphisms are smoothly isotopic to the identity, which can be found in Theorem REF .", "If $p=2$ , both $\\sigma $ and $\\tau $ are smoothly isotopic to the identity.", "Thus all $\\pm K_1$ and $\\pm K_2$ are smoothly isotopic.", "If $q \\equiv -1$ , $\\sigma \\circ \\tau $ is smoothly isotopic to the identity.", "Thus $K_1$ is smoothly isotopic to $-K_2$ , and $-K_1$ is smoothly isotopic to $K_2$ .", "If $q \\equiv 1$ , $\\sigma $ is smoothly isotopic to the identity, but $\\tau $ is not.", "Thus $K_1$ is smoothly isotopic to $K_2$ , and $-K_1$ is smoothly isotopic to $-K_2$ .", "If $q \\lnot \\equiv \\pm 1$ , none of $\\sigma $ and $\\tau $ is smoothly isotopic to the identity.", "Thus none of $\\pm K_1$ and $\\pm K_2$ is smoothly isotopic to each other.", "Let $(p,q)$ be a pair of coprime integers satisfying $p > q > 0$ .", "Geiges and Onaran [23] depicted surgery presentations for the rational unknots $K_1$ and $K_2$ in $L(p,q)$ , see Figure REF and REF .", "Here, $p^{\\prime }/q^{\\prime }$ is the largest rational number satisfying $pq^{\\prime } - p^{\\prime }q = -1$ .", "Figure: Surgery presentations for the rational unknots K 1 K_1 and K 2 K_2.We can write the negative continued fraction of $-p/q$ as follows: $-\\frac{p}{q} = [r_0, \\dots , r_n]$ where $r_i \\le -2$ for $0 \\le i \\le n$ .", "Then we have $\\begin{pmatrix}p & p^{\\prime }\\\\-q & -q^{\\prime }\\end{pmatrix}=\\begin{pmatrix}-r_0 & 1\\\\-1 & 0\\end{pmatrix}\\begin{pmatrix}-r_1 & 1\\\\-1 & 0\\end{pmatrix}\\cdots \\begin{pmatrix}-r_n & 1\\\\-1 & 0\\end{pmatrix}$ (see [40] for example).", "After taking the inverse of these matrices, we obtain $-\\frac{p}{p^{\\prime }} = [r_n, \\ldots , r_0].$ Notice that the two surgery 
presentations in Figure REF are not the same.", "However, by the equality (REF ), we can naturally identify one surgery presentation with the other as shown in Figure REF .", "Moreover, if we fix the signs of stabilization, then we can also identify one contact surgery presentation with the other in Figure REF .", "Figure: Surgery presentations for the rational unknots K 1 K_1 and K 2 K_2.We end this section by reviewing the mapping class group of $S^1 \\times S^2$ and contactomorphisms of the standard contact structure on $S^1 \\times S^2$ .", "Consider $S^1 \\times S^2 \\subset S^1 \\times \\mathbb {R}^3$ , where $S^2$ is a unit sphere in $\\mathbb {R}^3$ .", "Then the standard contact structure $\\xi _{std}$ on $S^1 \\times S^2$ is the kernel of $\\alpha = (z\\,d\\theta + x\\,dy - y\\,dx)|_{S^1 \\times S^2}.$ There exists an orientation preserving diffeomorphism $\\eta $ of $S^1 \\times S^2$ which is defined by $\\eta \\colon S^1 \\times S^2 &\\rightarrow S^1 \\times S^2,\\\\(\\theta ,\\mathbf {x}) &\\mapsto (-\\theta , -\\mathbf {x}).\\\\$ Consider a rotation matrix $r_\\theta $ of $\\mathbb {R}^3$ about $z$ -axis: $r_\\theta =\\begin{pmatrix}\\cos \\theta & -\\sin \\theta & 0\\\\\\sin \\theta & \\cos \\theta & 0\\\\0 & 0 & 1\\end{pmatrix}.$ There is another orientation preserving diffeomorphism $\\delta $ of $S^1 \\times S^2$ , which is the Dehn twist about an essential sphere, defined by $\\delta \\colon S^1 \\times S^2 &\\rightarrow S^1 \\times S^2,\\\\(\\theta ,\\mathbf {x}) &\\mapsto (\\theta , r_\\theta (\\mathbf {x})).$ We can check $\\eta $ is a coorientation preserving contactomorphism immediately: $\\eta ^*(\\alpha ) &= \\eta ^*(z\\,d\\theta + x\\,dy - y\\,dx)\\\\&= (-z)\\,d(-\\theta ) - x\\,d(-y) + y\\,d(-x)\\\\&= \\alpha .$ Although $\\delta $ is not a contactomorphism of $\\xi _{std}$ , since there exists a unique tight contact structure on $S^1 \\times S^2$ , two contact structures $\\xi _{std}$ and $\\delta _*(\\xi _{std})$ are isotopic.", "We can apply the Moser's trick again and obtain an isotopy $\\psi _t$ such that $(\\psi _1)_*(\\xi _{std}) = \\delta _*(\\xi _{std})$ .", "Then clearly $\\psi _1^{-1} \\circ \\delta $ is a contactomorphism of $\\xi _{std}$ which is smoothly isotopic to $\\delta $ .", "We just relabel $\\psi _1^{-1} \\circ \\delta $ as $\\delta $ .", "The mapping class group of $S^1 \\times S^2$ was determined by Gluck [30].", "After that, Hatcher [32] determined the homotopy type of $\\operatorname{Diff}(S^1 \\times S^2)$ .", "Theorem 2.15 (Gluck [30], see also Hatcher [32]) The mapping class group of $S^1 \\times S^2$ is $\\pi _0(\\operatorname{Diff}_+(S^1 \\times S^2)) = \\mathbb {Z}_2 \\oplus \\mathbb {Z}_2 \\cong \\langle \\delta , \\eta \\rangle .$ We define a positively oriented core of $S^1 \\times S^2$ to be $K := \\lbrace (\\theta ,0) : 0 \\le \\theta \\le 2\\pi \\rbrace ,$ and $-K$ to be its orientation reversal and call it a negatively oriented core.", "Notice that $K$ and $-K$ are not smoothly isotopic to each other.", "Chen, Ding and Li [8] classified Legendrian representatives of the oriented core in the standard contact structure $\\xi _{std}$ on $S^1 \\times S^2$ up to Legendrian isotopy.", "Notice that the core is not (rationally) null-homologous, so the (rational) Thurston–Bennequin invariant is not well-defined.", "However, since $e(\\xi _{std})=0$ , the contact structure is trivial as a plane field, so the rotation number is well-defined.", "Theorem 2.16 (Chen–Ding–Li [8]) The oriented core in the standard contact structure $\\xi 
_{std}$ on $S^1 \\times S^2$ is Legendrian simple: any two Legendrian representatives are Legendrian isotopic if they have the same orientation and the same rotation number.", "Ding and Geiges [10] also studied the effect of $\\delta $ and $\\eta $ on the rotation number of Legendrian representatives of the oriented cores in $S^1 \\times S^2$ .", "Lemma 2.17 (Ding–Geiges [10]) Let $L$ be a Legendrian representative of the positively oriented core in the standard contact structure $\\xi _{std}$ on $S^1 \\times S^2$ and $-L$ be its orientation reversal.", "Then we have $\\operatorname{rot}(\\delta (\\pm L)) = \\operatorname{rot}(\\pm L) \\pm 1.$ Also, we have $\\operatorname{rot}(\\eta (\\pm L)) = -\\operatorname{rot}(\\pm L).$" ], [ "Invariants of rationally null-homologous Legendrian and transverse knots", "The classical invariants for null-homologous Legendrian and transverse knots were extended to rationally null-homologous knots by Baker and Etnyre [1].", "Let $L$ be a Legendrian representative of a rationally null-homologous knot in a contact 3–manifold $(M,\\xi )$ .", "Suppose the order of $L$ in $H_1(M)$ is $r$ and a rational Seifert surface for $L$ is $\\Sigma $ .", "Let $L^{\\prime }$ be a push-off of $L$ along the contact framing.", "Then the rational Thurston–Bennequin invariant of $L$ is defined by $$\\operatorname{tb}_{\\mathbb {Q}}(L) := \\frac{1}{r} \\left(L^{\\prime } \\bullet \\Sigma \\right).$$ Notice that there is an inclusion map $i\\colon \\Sigma \\rightarrow M$ which is an embedding in the interior of $\\Sigma $ , and an $r$ -fold cover of $L$ on $\\partial \\Sigma $ .", "Since the pullback contact structure $i^*(\\xi )$ is trivial as a plane field, the pullback $i^*(v)$ of a non-vanishing tangent vector field $v$ of $L$ gives a section of $\\Sigma \\times \\mathbb {R}^2$ over $\\partial \\Sigma $ after fixing a trivialization $i^*(\\xi ) \\cong \\Sigma \\times \\mathbb {R}^2$ .", "This induces a Gauss map $f\\colon S^1 \\rightarrow S^1$ .", "We define the rational rotation number of $L$ as follows: $$\\operatorname{rot}_{\\mathbb {Q}}(L) := \\frac{1}{r} \\deg f.$$ Let $T$ be a transverse representative of a rationally null-homologous knot in a contact 3–manifold $(M,\\xi )$ , and $L$ be a Legendrian representative such that $T$ is a (positive) transverse push-off of $L$ .", "Then the rational self-linking number of $T$ is defined by $$\\operatorname{sl}_{\\mathbb {Q}}(T) := \\operatorname{tb}_{\\mathbb {Q}}(L) - \\operatorname{rot}_{\\mathbb {Q}}(L).$$ We also denote the maximum rational Thurston–Bennequin invariant among the Legendrian representatives of $K$ by $\\operatorname{\\overline{\\operatorname{tb}}}_{\\mathbb {Q}}(K)$ .", "Similarly, we can define the maximum rational self-linking number $\\operatorname{\\overline{\\operatorname{sl}}}_{\\mathbb {Q}}(K)$ .", "Baker and Etnyre [1] also showed that stabilization has the same effect on the invariants as in the null-homologous case: $$\\operatorname{tb}_{\\mathbb {Q}}(S_\\pm (L)) = \\operatorname{tb}_{\\mathbb {Q}}(L) - 1,\\quad \\operatorname{rot}_{\\mathbb {Q}}(S_\\pm (L)) = \\operatorname{rot}_{\\mathbb {Q}}(L) \\pm 1,\\quad \\operatorname{sl}_{\\mathbb {Q}}(S(T)) = \\operatorname{sl}_{\\mathbb {Q}}(T) - 2.$$
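To make this bookkeeping concrete, here is a minimal sketch (my own illustration, not part of the original text; the class name and the sample values are hypothetical) that tracks how the rational invariants transform under the stabilizations S_+ and S_-, using exact rational arithmetic.

```python
from dataclasses import dataclass
from fractions import Fraction

@dataclass(frozen=True)
class RationalInvariants:
    """Rational classical invariants (tb_Q, rot_Q) of a Legendrian representative."""
    tb: Fraction
    rot: Fraction

    def stabilize(self, sign: int) -> "RationalInvariants":
        # S_+ and S_- both drop tb_Q by 1; rot_Q shifts by +1 or -1, respectively.
        assert sign in (+1, -1)
        return RationalInvariants(self.tb - 1, self.rot + sign)

    def sl_of_transverse_pushoff(self) -> Fraction:
        # sl_Q of a positive transverse push-off equals tb_Q - rot_Q.
        return self.tb - self.rot

# Illustrative values only (not taken from the paper).
L = RationalInvariants(tb=Fraction(-3, 5), rot=Fraction(1, 5))
print(L.stabilize(+1))                # tb_Q = -8/5, rot_Q = 6/5
print(L.sl_of_transverse_pushoff())   # -4/5
```

In this encoding a positive Legendrian stabilization drops the self-linking number of the transverse push-off by 2, while a negative one leaves it unchanged, consistent with the third formula above.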
Using these invariants, Baker and Etnyre [1] coarsely classified Legendrian rational unknots in any tight contact structure on lens spaces.", "Theorem 2.18 (Baker–Etnyre [1]) Suppose $p > q > 0$ and $\\xi $ is a tight contact structure on $L(p,q)$ .", "Rational unknots in $\\xi $ are coarsely Legendrian simple: there are Legendrian representatives ${\\left\\lbrace \\begin{array}{ll}L_1 \\,& p = 2, \\\\L_1,-L_1 \\,& p \\ne 2\\, \\text{ and }\\, q \\equiv \\pm 1 \\;(\\!\\!\\!\\!\\!\\!\\mod {p}), \\\\L_1,-L_1,L_2,-L_2 & \\text{otherwise},\\end{array}\\right.}$ with $\\operatorname{tb}_{\\mathbb {Q}}(\\pm L_1)= -\\frac{p-q}{p} \\;\\; \\text{and}\\;\\; \\operatorname{tb}_{\\mathbb {Q}}(\\pm L_2) = -\\frac{p-p^{\\prime }}{p},$ where $p^{\\prime }/q^{\\prime }$ is the largest rational number satisfying $pq^{\\prime } - p^{\\prime }q = -1$ .", "Also the rational rotation numbers are determined by the formula in Lemma REF or REF .", "For any Legendrian representative $L$ of a rational unknot in $\\xi $ , there is a contactomorphism $f$ of $\\xi $ which is smoothly isotopic to the identity such that $f(L)$ is one of the Legendrian representatives above, or their stabilization.", "Recall from Section REF and REF that any tight contact structure on $L(p,q)$ can be decomposed into $S(-1,-p/q;l)$ and $S(-1,0;u)$ , and $K_1$ is the core of $S(-1,0;u)$ and $K_2$ is the core of $S(-1,-p/q;l)$ .", "Also, we showed that there is a Legendrian representative $L_1$ of $K_1$ , whose standard neighborhood is $S(-1,0;u)$ , see Figure REF .", "By Theorem REF , the decomposition is unique, so there exists a contactomorphism from one decomposition to another.", "Thus there exists a unique $L_1$ up to contactomorphism.", "Let $L$ be a Legendrian representative of $K_1$ and $N$ be a standard neighborhood of $L$ .", "Notice that $N$ has longitudinal dividing curves, which implies there is an edge between the dividing slope $s$ and 0 in the Farey graph, so $s=1/n$ for some $n \\in \\mathbb {Z}$ .", "If $n \\ge 0$ , then a non-minimally twisting $T^2 \\times I$ layer embeds in $L(p,q)$ , which contradicts the tightness of $\\xi $ .", "Thus $n \\le -1$ and $N$ is $S(1/n,0;u)$ and the complement of $N$ is $S(1/n,-p/q;l)$ .", "If $n < -1$ , we can further decompose it into $S(-1,-p/q;l)$ and $T(-1,1/n)$ , a minimally twisting $T^2 \\times I$ layer with slopes $-1$ and $1/n$ .", "Thus we can thicken $N$ using $T(-1,1/n)$ and obtain $S(-1,0;u)$ .", "Thus $L$ destabilizes to $L_1$ .", "We can apply the same argument to $-L_1$ and $\\pm L_2$ .", "Now we are left to calculate the invariants.", "For the rotation numbers, see Lemma REF or REF .", "Here, we calculate the Thurston–Bennequin invariants of $\\pm L_i$ for $i = 1,2$ .", "We consider $L_1$ first.", "Notice that the order of $L_1$ is $p$ .", "Let $T$ be a convex Heegaard torus $\\partial S(-1,0;u)$ .", "Since the dividing slope of a standard neighborhood of $L_1$ is $-1$ , we can put a push-off of $L_1$ along the contact framing on $T$ as a $-1$ slope curve (it is a Legendrian divide).", "Pick a meridian disk of $S(-1,-p/q;l)$ for a rational Seifert surface of $L_1$ .", "We can put the boundary of the meridian disk on $T$ as a $-p/q$ slope curve.", "We can check the sign of each intersection point between these two curves is negative.", "Thus we have $$\\operatorname{tb}_{\\mathbb {Q}}(L_1) = -\\frac{1}{p} \\left|(-1) \\bullet \\left(-\\frac{p}{q}\\right)\\right| = -\\frac{p-q}{p}.$$ Since $\\operatorname{tb}_{\\mathbb {Q}}$ is not sensitive to the orientation, we have $\\operatorname{tb}_{\\mathbb {Q}}(-L_1) = \\operatorname{tb}_{\\mathbb {Q}}(L_1)$ .", "We use the second surgery presentation in Figure REF for $L_2$ .", "Then by the same argument above, we have $$\\operatorname{tb}_{\\mathbb {Q}}(\\pm L_2) = -\\frac{1}{p} \\left|(-1) \\bullet \\left(-\\frac{p}{p^{\\prime }}\\right)\\right| = -\\frac{p-p^{\\prime }}{p}.$$
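As a quick numerical cross-check of the computation just carried out (a sketch added for illustration; the helper names and the reading of the p'/q' convention are my own assumptions), the slope pairing on the torus and the resulting values of tb_Q can be evaluated for any coprime pair (p, q) with exact arithmetic.

```python
from fractions import Fraction

def pairing(a: int, b: int, c: int, d: int) -> int:
    """Minimal geometric intersection number of curves of slopes a/b and c/d on the torus: |a*d - b*c|."""
    return abs(a * d - b * c)

def dual_pair(p: int, q: int):
    """Return (p', q') with p*q' - p'*q = -1 and p'/q' largest; (1, 0) when q = 1 (assumed convention)."""
    if q == 1:
        return 1, 0
    qp = next(x for x in range(1, q) if (p * x + 1) % q == 0)  # smallest positive q'
    return (p * qp + 1) // q, qp

def tbQ_rational_unknots(p: int, q: int):
    """tb_Q(+-L_1) = -(p - q)/p and tb_Q(+-L_2) = -(p - p')/p, via the slope pairing used in the proof."""
    pp, _ = dual_pair(p, q)
    tb1 = Fraction(-pairing(-1, 1, -p, q), p)    # contact framing (slope -1) against the slope -p/q curve
    tb2 = Fraction(-pairing(-1, 1, -p, pp), p)   # the same computation with the slope -p/p' curve for L_2
    assert (tb1, tb2) == (Fraction(q - p, p), Fraction(pp - p, p))
    return tb1, tb2

print(tbQ_rational_unknots(5, 2))   # (Fraction(-3, 5), Fraction(-2, 5))
```

For example, in L(5,2) this gives tb_Q of the two rational unknots as -3/5 and -2/5, with p'/q' = 3/1.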
There are two ways to calculate the rational rotation number of a Legendrian rational unknot: using contact surgery presentations by Geiges and Onaran [23], or using the Farey graph essentially due to Baker and Etnyre [1].", "We introduce both methods.", "It is well known that we can calculate the classical invariants from a contact surgery presentation for a given Legendrian knot in an integral homology sphere (see [12] for example).", "Geiges and Onaran [23] showed that the same formula works for contact surgery presentations for rationally null-homologous Legendrian knots in a homology sphere.", "Consider a contact surgery presentation for a rationally null-homologous Legendrian knot $L$ in a homology sphere.", "Convert the contact surgery presentation into a $(\\pm 1)$ –surgery presentation.", "Let $L_1, \\ldots , L_n$ be the surgery components of the $(\\pm 1)$ –surgery presentation, $M$ be the linking matrix of $L_1,\\ldots ,L_n$ where the $i$ -th diagonal entry is the smooth surgery coefficient of $L_i$ , $\\mathbf {rot} := (\\operatorname{rot}(L_1), \\ldots , \\operatorname{rot}(L_n))^\\intercal $ where $\\operatorname{rot}(L_i)$ is the rotation number of $L_i$ in $(S^3,\\xi _{std})$ , $\\mathbf {lk} := \\left(\\mathrm {lk}(L,L_1),\\ldots ,\\mathrm {lk}(L,L_n)\\right)^\\intercal $ where $\\mathrm {lk}(L,L_i)$ is the linking number between $L$ and $L_i$ , and $\\operatorname{rot}_0$ be the rotation number of $L$ in $(S^3,\\xi _{std})$ .", "Lemma 2.19 (Geiges–Onaran [23]) With the notations defined above, we have $$\\operatorname{rot}_{\\mathbb {Q}}(L) = \\operatorname{rot}_0 - \\mathbf {rot}^\\intercal \\cdot M^{-1} \\cdot \\mathbf {lk}.$$ Notice that if we change the orientation of $L$ , then $\\operatorname{rot}_0$ changes the sign and every component in $\\mathbf {lk}$ also changes the sign, while $\\mathbf {rot}$ and $M$ do not change.", "Thus we have $\\operatorname{rot}_{\\mathbb {Q}}(-L) = -\\operatorname{rot}_{\\mathbb {Q}}(L)$ .
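The formula in Lemma 2.19 is straightforward to evaluate in practice. The sketch below is my own illustration (the function name is hypothetical, and the input shown is placeholder data rather than the actual presentations of Figure REF); it implements rot_Q(L) = rot_0 - rot^T . M^{-1} . lk with exact rational arithmetic, so no floating-point error enters the computation.

```python
from fractions import Fraction

def rotQ_from_surgery(rot0, rot, M, lk):
    """Evaluate rot_Q(L) = rot_0 - rot^T . M^{-1} . lk, where M is the linking matrix of the surgery link."""
    n = len(M)
    # Gauss-Jordan elimination over Q to solve M x = lk; M is invertible for a rational homology sphere.
    A = [[Fraction(M[i][j]) for j in range(n)] + [Fraction(lk[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [entry / A[col][col] for entry in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                A[r] = [a - A[r][col] * b for a, b in zip(A[r], A[col])]
    x = [A[i][n] for i in range(n)]
    return Fraction(rot0) - sum(Fraction(ri) * xi for ri, xi in zip(rot, x))

# Placeholder data: a single surgery curve with rotation number 0, smooth framing -5,
# linking L once, and rot_0 = 0; such a symmetric diagram gives rot_Q = 0.
print(rotQ_from_surgery(rot0=0, rot=[0], M=[[-5]], lk=[1]))   # 0
```

The orientation-reversal observation above is visible here as well: negating rot_0 and every entry of lk negates the output, while rot and M are untouched.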
Now we introduce the second method.", "Recall from Section REF that a decorated path $P$ for a tight contact structure on a lens space $L(p,q)$ is the shortest path in the Farey graph from $-p/q$ to 0, where all edges are decorated with $+$ or $-$ except for the first and the last ones.", "Let $-p/q = s_0, s_1, \\ldots , s_n=-1$ be the vertices in $P$ .", "If $q \\lnot \\equiv -1 \\pmod {p}$ , we define $$r_1 = \\sum _{i=1}^{n-1} \\epsilon _i \\left((s_{i} \\ominus s_{i+1}) \\bullet \\left(-\\frac{p}{q}\\right)\\right) \\quad \\text{and} \\quad r_2 = \\sum _{i=1}^{n-1} \\epsilon _i \\left((s_{i+1} \\ominus s_{i}) \\bullet \\frac{0}{1}\\right),$$ where $\\epsilon _i$ is the sign of the edge from $s_i$ to $s_{i+1}$ .", "Here, we assume the numerator of $s_i$ is negative and the denominator of $s_i$ is positive.", "If $q \\equiv -1 \\pmod {p}$ , then we define both $r_1$ and $r_2$ to be 0.", "Lemma 2.20 The Legendrian knots $L_1$ and $L_2$ in Figure REF have the rotation numbers $$\\operatorname{rot}_{\\mathbb {Q}}(L_1) = \\frac{r_1}{p} \\quad \\text{and} \\quad \\operatorname{rot}_{\\mathbb {Q}}(L_2) = \\frac{r_2}{p}.$$ Recall that $L_1$ has order $p$ in $H_1(L(p,q))$ and its standard neighborhood is $S(-1,0;u)$ .", "Let $T = \\partial S(-1,0;u)$ and $C = S(-1,-p/q;l)$ , that is, the complement of $S(-1,0;u)$ .", "Baker and Etnyre [1] showed that the rational rotation number of $L_1$ is equal to $\\operatorname{rot}_{\\mathbb {Q}}(L_1)=\\frac{1}{p} e(\\xi |_C,s)[D]$ where $s$ is a non-vanishing section of $\\xi |_T$ and $D$ is a meridian disk of $C$ .", "Decompose $C$ into $$S(s_1,-p/q;l) \\cup T(s_1,s_2) \\cup \\ldots \\cup T(s_{n-1},s_n),$$ where $T(s_i,s_{i+1})$ is a basic slice with slopes $s_i$ and $s_{i+1}$ .", "According to [33], we can calculate the relative Euler class of a basic slice $T(s_i,s_{i+1})$ evaluated on a properly embedded annulus $A$ with $-p/q$ slope boundary as follows: $$e(\\xi ,t)[A] = \\epsilon _i \\left((s_{i} \\ominus s_{i+1}) \\bullet \\left(-\\frac{p}{q}\\right)\\right),$$ where $t$ is a non-vanishing section of $\\xi $ restricted to $T(s_i,s_{i+1})$ .", "Also, the relative Euler class of $S(s_1,-p/q;l)$ evaluates to 0 on a meridian disk by Theorem REF .", "Since the relative Euler class is additive under union, we obtain the formula in the statement by taking a summation.", "The same argument works for $L_2$ ." ], [ "Legendrian and transverse rational unknots in lens spaces", "In this section, we classify Legendrian and transverse rational unknots in any tight contact structure on lens spaces and prove the theorems in Section REF .", "To do so, we first determine the contact mapping class group of universally tight contact structures on a solid torus with two dividing curves.", "Then using this, we classify Legendrian representatives of the core in a tight contact structure on a solid torus with two dividing curves.", "Before we start, we first extend the definitions.", "When $(M,\\xi )$ is a contact manifold with convex boundary, we define $\\operatorname{Cont}(M,\\xi ) = \\text{the group of contactomorphisms of $(M,\\xi )$ that are the identity on $\\partial M$}.$ Also we define the contact mapping class group of $(M,\\xi )$ to be $\\pi _0(\\operatorname{Cont}(M,\\xi )) = \\operatorname{Cont}(M,\\xi ) / \\sim $ where $f \\sim g$ if $f$ is contact isotopic to $g$ relative to the boundary.", "We start by determining the contact mapping class group of universally tight contact structures on $S^1 \\times D^2$ with two dividing curves.", "If the dividing curves are longitudinal, it was already determined by Giroux [28] and Vogel [41].", "Theorem 3.1 Let $\\xi $ be a universally tight contact structure on a solid torus $V = S^1 \\times D^2$ such that $\\partial V$ is convex and the dividing set $\\Gamma $ on $\\partial V$ consists of two closed curves.", "Then we have $\\pi _0(\\operatorname{Cont}(V, \\xi )) = 1.$ Let $D$ be a meridian disk of $V$ .", "After isotopy, we can assume that $D$ is convex and $\\partial D$ is Legendrian intersecting $\\Gamma $ minimally.", "According to Theorem REF , the relative Euler class of $\\xi $ is extremal, which implies that every dividing curve on $D$ is boundary-parallel (it is called a well-groomed dividing set).", "See Figure REF for example.", "Consider a bypass whose attaching arc lies on $D$ .", "Notice that this bypass cannot be effective.", "Also, the attaching arc is one of the two configurations in Figure REF , and the bypass is trivial or yields a contractible dividing curve.", "Since $\\xi $ is tight, the bypass must be trivial.", "Figure: The dividing curves on a meridian disk in a universally tight $S^1 \\times D^2$ with two dividing curves.", "Let $f \\in \\operatorname{Cont}(V,\\xi )$ .", "After a small perturbation, we can assume that $D$ and $f(D)$ intersect transversely in a finite set of circles.", "Choose an innermost circle $c$ among them.", "Then a disk in $D$ bounded by $c$ and a disk in $f(D)$ bounded by $c$ form a sphere.",
"Since $V$ is irreducible, this sphere bounds a ball, so using this we can isotope the disk in $f(D)$ bounded by $c$ and reduce the number of intersection circles.", "See Figure REF for a schematic picture.", "Repeat this until $D$ and $f(D)$ intersect only in $\\partial D$ .", "Again, $D$ and $f(D)$ form a sphere and by irreducibility, this sphere bounds a ball.", "Thus $D$ and $f(D)$ are smoothly isotopic relative to the boundary.", "By Theorem REF , there exists a sequence of convex disks $D_1, \\ldots , D_n$ with the identical boundary where $D_1 = D$ , $D_n = f(D)$ and $D_{i+1}$ is obtained by attaching a bypass to $D_i$ .", "As we observed above, the only allowable bypasses for $D$ are trivial bypasses, so inductively all $D_i$ has the same dividing set and $D_i$ and $D_{i+1}$ co-bound an $I$ -invariant neighborhood by Lemma REF .", "Thus $D_i$ and $D_{i+1}$ are contact isotopic for $1 \\le i \\le n-1$ and this implies that $f$ is contact isotopic to a contactomorphism fixing $D$ .", "By Lemma REF , we can further assume that $f$ fixes a small neighborhood $N$ of $\\partial V \\cup D$ .", "Now pick a sphere $S$ contained in $N$ and parallel to a sphere $\\partial N \\setminus \\partial V$ .", "Perturb $S$ to be convex and let $B$ be the ball in $V$ bounded by $S$ .", "By Eliashberg [13], there exists a unique tight contact structure on $B$ up to isotopy fixing the characteristic foliation on $S$ .", "Also, according to Eliashberg [13], we have $\\pi _0(\\operatorname{Cont}(B,\\xi |_B)) = 1$ .", "This implies that $f|_B$ is contact isotopic to the identity relative to the boundary.", "Since $f|_N$ is the identity, $f$ is contact isotopic to the identity relative to the boundary and this completes the proof.", "Figure: A schematic picture for DD and f(D)f(D).", "A shaded region represents a ball bounded by two disks in DD and f(D)f(D).We need several steps to classify Legendrian and transverse rational unknots in tight contact structures on lens spaces.", "The first step is to classify Legendrian representatives of the core in a universally tight contact structure on a solid torus with two dividing curves.", "Legendrian knots in a solid torus with longitudinal dividing curves were already studied by Etnyre and Vértesi [15].", "Proposition 3.2 Let $\\xi $ be a universally tight contact structure on a solid torus $V = S^1 \\times D^2$ with two dividing curves of slope $s$ .", "Then the core of $(V,\\xi )$ is Legendrian simple: there exists a unique Legendrian representative $L$ with the maximum twisting number $\\operatorname{\\overline{\\operatorname{tw}}}_F = \\lfloor s \\rfloor $ where $F$ is the product framing of $V$ .", "Any Legendrian representative of the core is Legendrian isotopic to $L$ or its stabilization.", "We only consider the case $s \\in [-1,0)$ since we can realize any dividing slope by the Dehn twists about a meridian disk.", "We first show that there exists a unique Legendrian representative of the core of $(V,\\xi )$ with the maximum twisting number $\\operatorname{\\overline{\\operatorname{tw}}}_F=-1$ up to Legendrian isotopy.", "Suppose $L$ is a Legendrian representative of the core of $(V,\\xi )$ .", "The dividing slope of a standard neighborhood $N$ of $L$ is an integer, so $\\lfloor s \\rfloor = -1$ is the maximum twisting number.", "Let $L_1$ and $L_2$ be Legendrian representatives of the core with $\\operatorname{tw}_F=-1$ .", "Suppose $N_1$ and $N_2$ are standard neighborhoods of $L_1$ and $L_2$ , respectively.", "Then we have $T_i(-1,s) = V \\setminus N_i$ 
for $i=1,2$ , which are minimally twisting $T^2 \\times I$ layers with the dividing slopes $-1$ and $s$ .", "According to Theorem REF , a tight contact structure on $V$ is completely determined by the tight contact structure on $T_i(-1,s)$ .", "Thus there exists a coorientation preserving contactomorphism $f \\colon T_1(-1,s) \\rightarrow T_2(-1,s)$ fixing $\\partial V$ .", "Since there exists a unique tight contact structure on a standard neighborhood of a Legendrian knot, we can extend $f$ to entire $(V,\\xi )$ so that $f(L_1)=L_2$ .", "By Theorem REF , $f$ is contact isotopic to the identity.", "Since $f$ sends $L_1$ to $L_2$ , they are Legendrian isotopic.", "Next, we will show that if $L$ is a Legendrian representative of the core of $(V,\\xi )$ with $n = \\operatorname{tw}_F(L) < -1$ , then $L$ destabilizes.", "Suppose $N$ is a standard neighborhood of $L$ .", "Then we have $T(n,s) = V \\setminus N$ , which is a minimally twisting $T^2 \\times I$ layer with the dividing slopes $n$ and $s$ .", "Since $n<-1$ , we can decompose $T(n,s)$ into $T(n,n+1) \\cup T(n+1,s)$ .", "Notice that $T(n,n+1)$ is a basic slice and we can thicken $N$ by attaching $T(n,n+1)$ .", "This corresponds to a destabilization of $L$ .", "Next, we improve the result by classifying the Legendrian representatives of the core in any tight contact structure on a solid torus with two dividing curves.", "Proposition 3.3 Let $\\xi $ be a tight contact structure on a solid torus $V = S^1 \\times D^2$ with two dividing curves of slope $s$ .", "Then the core of $(V,\\xi )$ is Legendrian simple: there exists a unique Legendrian representative $L$ with the maximum twisting number $\\operatorname{\\overline{\\operatorname{tw}}}_F = \\lfloor s \\rfloor $ where $F$ is the product framing of $V$ .", "Any Legendrian representative of the core is Legendrian isotopic to $L$ or its stabilization.", "Again, we only consider the case $s \\in [-1,0)$ since we can realize any dividing slope by the Dehn twists about a meridian disk.", "We first show that there exists a unique Legendrian representative of the core of $(V,\\xi )$ with the maximum twisting number $\\operatorname{\\overline{\\operatorname{tw}}}_F=-1$ up to Legendrian isotopy.", "Suppose $L$ is a Legendrian representative of the core of $(V,\\xi )$ .", "The dividing slope of a standard neighborhood $N$ of $L$ is an integer, so $\\lfloor s \\rfloor = -1$ is the maximum twisting number.", "Let $L_1$ and $L_2$ be Legendrian representatives of the core with $\\operatorname{tw}_F=-1$ .", "Take a meridian disk $D$ of $V$ intersecting $L_2$ transversely once.", "Perturb $D$ to be convex with Legendrian boundary such that $\\partial D$ intersects $\\Gamma _{\\partial V}$ minimally and $D$ intersects $L_1$ transversely.", "We will consider two cases according to the intersection number between $L_1$ and $D$ .", "First, we consider the case $|D \\cap L_1| = 1$ .", "Suppose $N_1$ and $N_2$ are standard neighborhoods of $L_1$ and $L_2$ , respectively.", "After perturbing $D$ and $\\partial N_i$ , we can assume that the ruling slope of $N_i$ is $\\infty $ and there exists a ruling curve $c_i$ that lies on $D$ and it is the only intersection between $\\partial N_i$ and $D$ for $i = 1,2$ .", "Since $\\operatorname{tb}(c_1)=\\operatorname{tb}(c_2)=-1$ by the equality (REF ), each $c_1$ and $c_2$ intersects a dividing curve on $D$ at two points.", "See Figure REF for example.", "Choose the dividing curves $d_1,\\ldots ,d_n$ on $D$ such that $c_1$ intersects $d_1$ , $c_2$ intersects $d_n$ , and 
$d_i$ and $d_{i+1}$ are adjacent.", "We claim that we can isotope $L_1$ through Legendrian knots so that $c_1$ intersects $d_2$ and does not intersect any other dividing curve on $D$ .", "Take a solid torus $\\overline{N}$ such that $\\overline{N}$ contains $N_1$ and $\\partial \\overline{N}$ intersects $D$ in a closed curve $\\overline{c}$ that contains $c_1$ and intersects $d_1$ and $d_2$ at four points.", "See Figure REF for example.", "Perturb $\\partial \\overline{N}$ to be convex and $\\overline{c}$ to be Legendrian.", "Let $\\overline{s}$ be the dividing slope of $\\overline{N}$ .", "By the equality (REF ), we have $\\operatorname{tb}(\\overline{c})=-2$ .", "Due to this fact, there are only three cases we need to consider for the dividing curves on $\\partial \\overline{N}$ .", "The first case is $\\overline{s} > -1$ .", "Let $2n$ be the number of dividing curves on $\\partial \\overline{N}$ and $\\overline{s} = p/q$ for $q > |p| \\ge 1$ .", "Since the dividing set interleaves, $\\left|\\overline{c} \\cap \\Gamma _D\\right| = \\left|\\overline{c} \\cap \\Gamma _{\\partial \\overline{N}}\\right|$ .", "Thus we have $$\\operatorname{tb}(\\overline{c}) = -2 = -\\frac{1}{2}\\left|\\overline{c} \\cap \\Gamma _D\\right| = -\\frac{1}{2}\\left|\\overline{c} \\cap \\Gamma _{\\partial \\overline{N}}\\right| \\le -n\\left|\\frac{p}{q} \\bullet \\frac{1}{0}\\right| = -nq.$$ The equality holds if and only if $\\overline{c}$ intersects $\\Gamma _{\\partial \\overline{N}}$ minimally.", "Since $-1 < \\overline{s} \\le s \\in [-1,0)$ , we have $q > 1$ and this implies that $n = 1$ .", "Thus there are two dividing curves on $\\partial \\overline{N}$ .", "Notice that the disk $\\overline{D} \\subset D$ , bounded by $\\overline{c}$ , contains two boundary-parallel dividing curves as shown in Figure REF .", "Notice that these two dividing curves are parts of $d_1$ and $d_2$ , but we just relabel them as $d_1$ and $d_2$ .", "According to Theorem REF , we can take a bypass lying on $\\overline{D}$ containing the dividing curve $d_1$ .", "Remove a bypass attachment of this bypass from $\\overline{N}$ .", "Then by Theorem REF , the resulting solid torus $\\overline{N}_1$ has two dividing curves of slope $\\overline{s}_1$ satisfying $-1 \\le \\overline{s}_1 < \\overline{s}$ , and the resulting meridian disk $\\overline{D}_1$ contains the single dividing curve $d_2$ .", "Perturb $\\overline{c}_1 = \\partial \\overline{D}_1$ to be Legendrian.", "Then by the equality (REF ), we have $\\operatorname{tb}(\\overline{c}_1) = -1$ .", "Since the dividing set interleaves, we have $$\\operatorname{tb}(\\overline{c}_1) = -1 \\le -\\left|\\overline{s}_1 \\bullet \\frac{1}{0}\\right|,$$ which implies that $\\overline{s}_1$ is an integer.", "Thus $\\overline{s}_1 = -1$ and $\\overline{N}_1$ has two dividing curves of slope $-1$ .", "Let $\\overline{L}_1$ be a Legendrian representative of the core of $\\overline{N}_1$ with $\\operatorname{tw}_F=-1$ .", "Then $\\overline{N}_1$ is a standard neighborhood of $\\overline{L}_1$ .", "Since $\\overline{D}$ only contains boundary-parallel dividing curves, the restricted contact structure $\\xi |_{\\overline{N}}$ is universally tight by Theorem REF .", "Since $\\overline{N}$ contains both $L_1$ and $\\overline{L}_1$ , by Proposition REF , $L_1$ is Legendrian isotopic to $\\overline{L}_1$ .", "Notice that $\\overline{c}_1$ intersects $d_2$ and does not intersect any other dividing curve on $D$ .", "Figure: The red curves are the dividing curves on a convex disk $D$ .", "The closed curves are Legendrian curves.", "The second case is $\\overline{s} = -1$ and there are four dividing curves on $\\partial 
\\overline{N}$ .", "Again, the disk $\\overline{D} \\subset D$ , bounded by $\\overline{c}$ , contains two boundary parallel dividing curves as shown in Figure REF .", "According to Theorem REF , we can take a bypass lying on $\\overline{D}$ containing the dividing curve $d_1$ .", "Remove a bypass attachment of this bypass from $\\overline{N}$ and let $\\overline{N}_1$ be the resulting solid torus and $\\overline{D}_1$ be the resulting meridian disk.", "Perturb $\\overline{c}_1 = \\partial \\overline{D}_1$ to be Legendrian.", "Since $\\overline{D}_1$ contains the single dividing curve $d_2$ , we have $\\operatorname{tb}(\\overline{c}_1) = -1$ .", "Thus there are two dividing curves on $\\partial \\overline{N}_1$ as discussed in the first case.", "Since there are more than two dividing curves on $\\partial \\overline{N}$ , the bypass attachment does not change the dividing slope.", "Thus $\\overline{N}_1$ has two dividing curves of slope $-1$ and $\\overline{N} \\setminus \\overline{N}_1$ is a non-rotative layer.", "Let $\\overline{L}_1$ be a Legendrian representative of the core of $\\overline{N}_1$ with $\\operatorname{tw}_F = -1$ .", "Then $\\overline{N}_1$ is a standard neighborhood of $\\overline{L}_1$ .", "By the attach=dig principle (Theorem REF ), there is a solid torus $\\widetilde{N}$ containing $\\overline{N}$ with two dividing curves of slope $-1$ .", "By Theorem REF , the restricted contact structure $\\xi |_{\\widetilde{N}}$ is universally tight.", "Since $\\widetilde{N}$ contains both $L_1$ and $\\overline{L}_1$ , by Proposition REF , $L_1$ is Legendrian isotopic to $\\overline{L}_1$ .", "Notice that $\\overline{c}_1$ intersects $d_2$ and does not intersect any other dividing curve on $D$ .", "Figure: The red curves are the dividing curves on a convex disk DD.", "The closed curves are Legendrian curves.The third case is $\\overline{s} = -1$ and there are two dividing curves on $\\partial \\overline{N}$ .", "In this case, $\\overline{c}$ does not intersect $\\Gamma _{\\partial \\overline{N}}$ minimally.", "However, the disk $\\overline{D} \\subset D$ , bounded by $\\overline{c}$ , still contains two boundary parallel dividing curves as shown in Figure REF .", "According to Theorem REF , we can take a bypass lying on $\\overline{D}$ containing the dividing curve $d_1$ .", "Remove a bypass attachment of this bypass from $\\overline{N}$ and let $\\overline{N}_1$ be the resulting solid torus and $\\overline{D}_1$ be the resulting meridian disk.", "Perturb $\\overline{c}_1 = \\partial \\overline{D}_1$ to be Legendrian.", "Since $\\overline{D}_1$ contains the single dividing curve $d_2$ , we have $\\operatorname{tb}(\\overline{c}_1) = -1$ .", "Thus there are two dividing curves on $\\partial \\overline{N}_1$ as discussed in the first case.", "Since $\\overline{c}$ does not intersect $\\Gamma _{\\partial \\overline{N}}$ minimally, the bypass is not effective and the bypass attachment does not change the dividing slope.", "Thus $\\overline{N}_1$ has two dividing curves of slope $-1$ .", "Let $\\overline{L}_1$ be a Legendrian representative of the core of $\\overline{N}_1$ with $\\operatorname{tw}_F = -1$ .", "Then $\\overline{N}_1$ is a standard neighborhood of $\\overline{L}_1$ .", "By Theorem REF , the restricted contact structure $\\xi |_{\\overline{N}}$ is universally tight.", "Since $\\overline{N}$ contains both $L_1$ and $\\overline{L}_1$ , by Proposition REF , $L_1$ is Legendrian isotopic to $\\overline{L}_1$ .", "Notice that $\\overline{c}_1$ intersects $d_2$ and does not 
intersect any other dividing curve on $D$ .", "We have just proved the claim.", "By applying the claim inductively, we can isotope $L_1$ through Legendrian knots until $c_1$ intersects $d_n$ and does not intersect any other dividing curve on $D$ .", "After that, take a bypass lying on $D$ which does not contain $d_n$ , and remove a bypass attachment of the bypass from $V$ .", "Repeat this until there is only one dividing curve, $d_n$ , left.", "See Figure REF for example.", "Let $\\overline{V}$ be the resulting solid torus and $\\overline{D}$ be the resulting meridian disk.", "Perturb $\\overline{c} = \\partial \\overline{D}$ to be Legendrian.", "Since $\\overline{D}$ contains the single dividing curve $d_n$ , we have $\\operatorname{tb}(\\overline{c}) = -1$ .", "Let $\\overline{s} = p/q$ be the dividing slope of $\\partial \\overline{V}$ and $2n$ be the number of dividing curves.", "Since the dividing set interleaves, we have $$\\operatorname{tb}(\\overline{c}) = -1 \\le -n\\left|\\frac{p}{q} \\bullet \\frac{1}{0}\\right|.$$ The equality holds if and only if $\\overline{c}$ intersects $\\Gamma _{\\partial \\overline{V}}$ minimally.", "From the inequality, we have $n=1$ and $q=1$ .", "Thus there are two dividing curves on $\\partial \\overline{V}$ and $\\overline{s}$ is an integer.", "Since none of the bypasses intersects $N_1$ or $N_2$ and the bypass attachment is a local operation, $\\overline{V}$ contains both $N_1$ and $N_2$ .", "Thus we have $-1 \\le \\overline{s} \\le s \\in [-1,0)$ and this implies that $\\overline{s}=-1$ .", "Thus $\\overline{V}$ has two dividing curves of slope $-1$ .", "By Theorem REF , the restricted contact structure $\\xi |_{\\overline{V}}$ is universally tight.", "Since $\\overline{V}$ contains both $L_1$ and $L_2$ , by Proposition REF , $L_1$ and $L_2$ are Legendrian isotopic.", "Figure: The red curves are the dividing curves on a convex disk $D$ .", "The closed curves are Legendrian curves.", "Next, we consider the case $m := |D \\cap L_1| > 1$ .", "In this case, we can perturb $\\partial N_1$ so that the ruling slope is $\\infty $ and there are $m$ ruling curves $c_1^1, \\ldots , c_m^1$ lying on $D$ and each $c_i^1$ intersects a dividing curve on $D$ at two points.", "Similarly, we can also perturb $\\partial N_2$ so that the ruling slope is $\\infty $ and there is a ruling curve $c^2$ lying on $D$ intersecting a dividing curve on $D$ at two points.", "Choose the dividing curves $d_1,\\ldots ,d_n$ on $D$ such that $c_1^1$ intersects $d_1$ , $c^2$ intersects $d_n$ , and $d_i$ and $d_{i+1}$ are adjacent.", "We claim that we can isotope $L_1$ through Legendrian knots so that $c_1^1$ intersects $d_2$ , while fixing the other $c_i^1$ for $2 \\le i \\le m$ .", "After perturbing $D$ , we can take a solid torus $\\overline{N}$ such that $\\overline{N}$ contains $N_1$ and there are $m$ ruling curves $\\overline{c}_1,\\ldots ,\\overline{c}_m$ of $\\partial \\overline{N}$ lying on $D$ such that $\\overline{c}_i = c_i^1$ for $2 \\le i \\le m$ , $\\overline{c}_1$ contains $c_1^1$ and $\\overline{c}_1$ intersects $d_1$ and $d_2$ at four points.", "By the equality (REF ), we have $\\operatorname{tb}(\\overline{c}_2) = -1$ and this implies that there are two dividing curves on $\\partial \\overline{N}$ and the dividing slope $\\overline{s}$ is an integer as discussed above.", "Also, since $\\overline{N}$ contains $N_1$ , we have $-1 \\le \\overline{s} \\le s \\in [-1,0)$ and $\\overline{s} = -1$ .", "Thus $\\overline{N}$ has two dividing curves of 
slope $-1$ .", "This implies that $\\overline{c}_1$ does not intersect $\\Gamma _{\\partial \\overline{N}}$ minimally.", "Since there are two boundary-parallel dividing curves on the disk $\\overline{D} \\subset D$ , bounded by $\\overline{c}_1$ , we can find a bypass lying on $\\overline{D}$ that contains $d_1$ according to Theorem REF .", "Remove a bypass attachment of this bypass from $\\overline{N}$ and let $\\overline{N}_1$ be the resulting solid torus and $\\overline{D}_1$ be the resulting meridian disk.", "Since $\\overline{D}_1$ contains the single dividing curve $d_2$ , there are still two dividing curves on $\\overline{N}_1$ .", "Since $\\overline{c}_1$ does not intersect $\\Gamma _{\\partial \\overline{N}}$ minimally, the bypass is not effective and the bypass attachment does not change the dividing slope.", "Thus $\\overline{N}_1$ has two dividing curves of slope $-1$ .", "Let $\\overline{L}_1$ be a Legendrian representative of the core of $\\overline{N}_1$ with $\\operatorname{tw}_F = -1$ .", "Then $\\overline{N}_1$ is a standard neighborhood of $\\overline{L}_1$ .", "By Theorem REF , the restricted contact structure $\\xi |_{\\overline{N}}$ is universally tight.", "Since $\\overline{N}$ contains both $L_1$ and $\\overline{L}_1$ , by Proposition REF , $L_1$ is Legendrian isotopic to $\\overline{L}_1$ .", "This completes the claim.", "By applying the claim inductively, we can isotope $L_1$ through Legendrian knots until $c_1^1$ intersects $d_n$ while fixing other $c_i^1$ for $2 \\le i \\le m$ .", "After that, apply the claim to $c_2^1$ and we can isotope $L_1$ through Legendrian knots until $c_2^1$ intersects $d_n$ while fixing other $c_i^1$ .", "Repeat the argument until all $c_i^1$ for $1 \\le i \\le m$ intersect $d_n$ .", "Now using Theorem REF , take a bypass lying on $D$ that does not contain $d_n$ and remove a bypass attachment of this bypass from $V$ .", "Repeat this until there is only one dividing curve, $d_n$ , left.", "Let $\\overline{V}$ be the resulting solid torus and $\\overline{D}$ be the resulting meridian disk.", "Perturb $\\overline{c} = \\partial \\overline{D}$ to be Legendrian.", "Since $\\overline{D}$ contains the single dividing curve $d_n$ , we have $\\operatorname{tb}(\\overline{c}) = -1$ and this implies that there are two dividing curves on $\\partial \\overline{V}$ and the dividing slope $\\overline{s}$ is an integer as discussed above.", "Since none of the bypasses does not intersect both $N_1$ and $N_2$ and the bypass attachment is a local operation, $\\overline{V}$ contains both $N_1$ and $N_2$ .", "Thus we have $-1 \\le \\overline{s} \\le s \\in [-1,0)$ and this implies that $\\overline{s}=-1$ .", "Thus $\\overline{V}$ has two dividing curves of slope $-1$ .", "By Theorem REF , the restricted contact structure $\\xi _{\\overline{V}}$ is universally tight.", "Since $\\overline{V}$ contains both $L_1$ and $L_2$ , by Proposition REF , $L_1$ and $L_2$ are Legendrian isotopic.", "Lastly, we show that if $L$ is a Legendrian representative of the core with $n := \\operatorname{tw}_F < -1$ , then $L$ destabilizes.", "Suppose $N$ is a standard neighborhood of $L$ .", "Then we have $V \\setminus N = T(n,s)$ , which is a minimally twisting $T^2 \\times I$ layer with the dividing slopes $n$ and $s$ .", "Since $n<-1$ , we can decompose $T(n,s)$ into $T(n,n+1) \\cup T(n+1,s)$ .", "Notice that $T(n,n+1)$ is a basic slice and we can thicken $N$ by attaching $T(n,n+1)$ .", "This corresponds to a destabilization of $L$ .", "Now we are ready to classify Legendrian 
], [ "Legendrian and transverse rational unknots in lens spaces", "In this section, we classify Legendrian and transverse rational unknots in any tight contact structure on lens spaces and prove the theorems in Section REF .", "To do so, we first determine the contact mapping class group of universally tight contact structures on a solid torus with two dividing curves.", "Then using this, we classify Legendrian representatives of the core in a tight contact structure on a solid torus with two dividing curves.", "Before we start, we first extend the definitions.", "When $(M,\\xi )$ is a contact manifold with convex boundary, we define $\\operatorname{Cont}(M,\\xi ) = \\text{the group of contactomorphisms of $(M,\\xi )$ that are the identity on $\\partial M$}.$ Also we define the contact mapping class group of $(M,\\xi )$ to be $\\pi _0(\\operatorname{Cont}(M,\\xi )) = \\operatorname{Cont}(M,\\xi ) / \\sim $ where $f \\sim g$ if $f$ is contact isotopic to $g$ relative to the boundary.", "We start with determining the contact mapping class group of universally tight contact structures on $S^1 \\times D^2$ with two dividing curves.", "If the dividing curves are longitudinal, it was already determined by Giroux [28] and Vogel [41].", "Theorem 3.1 Let $\\xi $ be a universally tight contact structure on a solid torus $V = S^1 \\times D^2$ such that $\\partial V$ is convex and the dividing set $\\Gamma $ on $\\partial V$ consists of two closed curves.", "Then we have $\\pi _0(\\operatorname{Cont}(V, \\xi )) = 1.$ Let $D$ be a meridian disk of $V$ .", "After isotopy, we can assume that $D$ is convex and $\\partial D$ is Legendrian intersecting $\\Gamma $ minimally.", "According to Theorem REF , the relative Euler class of $\\xi $ is extremal, which implies that every dividing curve on $D$ is boundary parallel (it is called a well-groomed dividing set).", "See Figure REF for example.", "Consider a bypass whose attaching arc lies on $D$ .", "Notice that this bypass cannot be effective.", "Also, the attaching arc is one of the two configurations in Figure REF , and the bypass is trivial or yields a contractible dividing curve.", "Since $\\xi $ is tight, the bypass must be trivial.", "Figure: The dividing curves on a meridian disk in a universally tight $S^1 \\times D^2$ with two dividing curves.", "Let $f \\in \\operatorname{Cont}(V,\\xi )$ .", "After a small perturbation, we can assume that $D$ and $f(D)$ intersect transversely in a finite set of circles.", "Choose an innermost circle $c$ among them.", "Then a disk in $D$ bounded by $c$ and a disk in $f(D)$ bounded by $c$ form a sphere.", "Since $V$ is irreducible, this sphere bounds a ball, so using this we can isotope the disk in $f(D)$ bounded by $c$ and reduce the number of intersection circles.", "See Figure REF for a schematic picture.", "Repeat this until $D$ and $f(D)$ intersect only in $\\partial D$ .", "Again, $D$ and $f(D)$ form a sphere and by irreducibility, this sphere bounds a ball.", "Thus $D$ and $f(D)$ are smoothly isotopic relative to the boundary.", "By Theorem REF , there exists a sequence of convex disks $D_1, \\ldots , D_n$ with the identical boundary where $D_1 = D$ , $D_n = f(D)$ and $D_{i+1}$ is obtained by attaching a bypass to $D_i$ .", "As we observed above, the only allowable bypasses for $D$ are trivial bypasses, so inductively all $D_i$ have the same dividing set and $D_i$ and $D_{i+1}$ co-bound an $I$ -invariant neighborhood by Lemma REF .", "Thus $D_i$ and $D_{i+1}$ are contact isotopic for $1 \\le i \\le n-1$ and
this implies that $f$ is contact isotopic to a contactomorphism fixing $D$ .", "By Lemma REF , we can further assume that $f$ fixes a small neighborhood $N$ of $\\partial V \\cup D$ .", "Now pick a sphere $S$ contained in $N$ and parallel to a sphere $\\partial N \\setminus \\partial V$ .", "Perturb $S$ to be convex and let $B$ be the ball in $V$ bounded by $S$ .", "By Eliashberg [13], there exists a unique tight contact structure on $B$ up to isotopy fixing the characteristic foliation on $S$ .", "Also, according to Eliashberg [13], we have $\\pi _0(\\operatorname{Cont}(B,\\xi |_B)) = 1$ .", "This implies that $f|_B$ is contact isotopic to the identity relative to the boundary.", "Since $f|_N$ is the identity, $f$ is contact isotopic to the identity relative to the boundary and this completes the proof.", "Figure: A schematic picture for $D$ and $f(D)$ .", "A shaded region represents a ball bounded by two disks in $D$ and $f(D)$ .", "We need several steps to classify Legendrian and transverse rational unknots in tight contact structures on lens spaces.", "The first step is to classify Legendrian representatives of the core in a universally tight contact structure on a solid torus with two dividing curves.", "Legendrian knots in a solid torus with longitudinal dividing curves were already studied by Etnyre and Vértesi [15].", "Proposition 3.2 Let $\\xi $ be a universally tight contact structure on a solid torus $V = S^1 \\times D^2$ with two dividing curves of slope $s$ .", "Then the core of $(V,\\xi )$ is Legendrian simple: there exists a unique Legendrian representative $L$ with the maximum twisting number $\\operatorname{\\overline{\\operatorname{tw}}}_F = \\lfloor s \\rfloor $ where $F$ is the product framing of $V$ .", "Any Legendrian representative of the core is Legendrian isotopic to $L$ or its stabilization.", "We only consider the case $s \\in [-1,0)$ since we can realize any dividing slope by the Dehn twists about a meridian disk.", "We first show that there exists a unique Legendrian representative of the core of $(V,\\xi )$ with the maximum twisting number $\\operatorname{\\overline{\\operatorname{tw}}}_F=-1$ up to Legendrian isotopy.", "Suppose $L$ is a Legendrian representative of the core of $(V,\\xi )$ .", "The dividing slope of a standard neighborhood $N$ of $L$ is an integer, so $\\lfloor s \\rfloor = -1$ is the maximum twisting number.", "Let $L_1$ and $L_2$ be Legendrian representatives of the core with $\\operatorname{tw}_F=-1$ .", "Suppose $N_1$ and $N_2$ are standard neighborhoods of $L_1$ and $L_2$ , respectively.", "Then we have $T_i(-1,s) = V \\setminus N_i$ for $i=1,2$ , which are minimally twisting $T^2 \\times I$ layers with the dividing slopes $-1$ and $s$ .", "According to Theorem REF , a tight contact structure on $V$ is completely determined by the tight contact structure on $T_i(-1,s)$ .", "Thus there exists a coorientation preserving contactomorphism $f \\colon T_1(-1,s) \\rightarrow T_2(-1,s)$ fixing $\\partial V$ .", "Since there exists a unique tight contact structure on a standard neighborhood of a Legendrian knot, we can extend $f$ to entire $(V,\\xi )$ so that $f(L_1)=L_2$ .", "By Theorem REF , $f$ is contact isotopic to the identity.", "Since $f$ sends $L_1$ to $L_2$ , they are Legendrian isotopic.", "Next, we will show that if $L$ is a Legendrian representative of the core of $(V,\\xi )$ with $n = \\operatorname{tw}_F(L) < -1$ , then $L$ destabilizes.", "Suppose $N$ is a standard neighborhood of $L$ .", "Then we have $T(n,s) = V \\setminus N$ , which
is a minimally twisting $T^2 \\times I$ layer with the dividing slopes $n$ and $s$ .", "Since $n<-1$ , we can decompose $T(n,s)$ into $T(n,n+1) \\cup T(n+1,s)$ .", "Notice that $T(n,n+1)$ is a basic slice and we can thicken $N$ by attaching $T(n,n+1)$ .", "This corresponds to a destabilization of $L$ .", "Next, we improve the result by classifying the Legendrian representatives of the core in any tight contact structure on a solid torus with two dividing curves.", "Proposition 3.3 Let $\\xi $ be a tight contact structure on a solid torus $V = S^1 \\times D^2$ with two dividing curves of slope $s$ .", "Then the core of $(V,\\xi )$ is Legendrian simple: there exists a unique Legendrian representative $L$ with the maximum twisting number $\\operatorname{\\overline{\\operatorname{tw}}}_F = \\lfloor s \\rfloor $ where $F$ is the product framing of $V$ .", "Any Legendrian representative of the core is Legendrian isotopic to $L$ or its stabilization.", "Again, we only consider the case $s \\in [-1,0)$ since we can realize any dividing slope by the Dehn twists about a meridian disk.", "We first show that there exists a unique Legendrian representative of the core of $(V,\\xi )$ with the maximum twisting number $\\operatorname{\\overline{\\operatorname{tw}}}_F=-1$ up to Legendrian isotopy.", "Suppose $L$ is a Legendrian representative of the core of $(V,\\xi )$ .", "The dividing slope of a standard neighborhood $N$ of $L$ is an integer, so $\\lfloor s \\rfloor = -1$ is the maximum twisting number.", "Let $L_1$ and $L_2$ be Legendrian representatives of the core with $\\operatorname{tw}_F=-1$ .", "Take a meridian disk $D$ of $V$ intersecting $L_2$ transversely once.", "Perturb $D$ to be convex with Legendrian boundary such that $\\partial D$ intersects $\\Gamma _{\\partial V}$ minimally and $D$ intersects $L_1$ transversely.", "We will consider two cases according to the intersection number between $L_1$ and $D$ .", "First, we consider the case $|D \\cap L_1| = 1$ .", "Suppose $N_1$ and $N_2$ are standard neighborhoods of $L_1$ and $L_2$ , respectively.", "After perturbing $D$ and $\\partial N_i$ , we can assume that the ruling slope of $N_i$ is $\\infty $ and there exists a ruling curve $c_i$ that lies on $D$ and it is the only intersection between $\\partial N_i$ and $D$ for $i = 1,2$ .", "Since $\\operatorname{tb}(c_1)=\\operatorname{tb}(c_2)=-1$ by the equality (REF ), each $c_1$ and $c_2$ intersects a dividing curve on $D$ at two points.", "See Figure REF for example.", "Choose the dividing curves $d_1,\\ldots ,d_n$ on $D$ such that $c_1$ intersects $d_1$ , $c_2$ intersects $d_n$ , and $d_i$ and $d_{i+1}$ are adjacent.", "We claim that we can isotope $L_1$ through Legendrian knots so that $c_1$ intersects $d_2$ and does not intersect any other dividing curve on $D$ .", "Take a solid torus $\\overline{N}$ such that $\\overline{N}$ contains $N_1$ and $\\partial \\overline{N}$ intersects $D$ in a closed curve $\\overline{c}$ that contains $c_1$ and intersects $d_1$ and $d_2$ at four points.", "See Figure REF for example.", "Perturb $\\partial \\overline{N}$ to be convex and $\\overline{c}$ to be Legendrian.", "Let $\\overline{s}$ be the dividing slope of $\\overline{N}$ .", "By the equality (REF ), we have $\\operatorname{tb}(\\overline{c})=-2$ .", "Due to this fact, there are only three cases we need to consider for the dividing curves on $\\partial \\overline{N}$ .", "The first case is $\\overline{s} > -1$ .", "Let $2n$ be the number of dividing curves on $\\partial \\overline{N}$ and 
$\\overline{s} = p/q$ for $|p| > q \\ge 1$ .", "Since the dividing set interleaves, $\\left|\\overline{c} \\cap \\Gamma _D\\right| = \\left|\\overline{c} \\cap \\Gamma _{\\partial V}\\right|$ .", "Thus we have $\\operatorname{tb}(\\overline{c}) = -2 = -\\frac{1}{2}\\left|\\overline{c} \\cap \\Gamma _D\\right| = -\\frac{1}{2}\\left|\\overline{c} \\cap \\Gamma _{\\partial V}\\right| \\le -n\\left|\\frac{p}{q} \\bullet \\frac{1}{0}\\right| = -nq.$", "The equality holds if and only if $\\overline{c}$ intersects $\\Gamma _{\\partial V}$ minimally.", "Since $-1 < \\overline{s} \\le s \\in [-1,0)$ , we have $q > 1$ and this implies that $n = 1$ .", "Thus there are two dividing curves on $\\partial \\overline{N}$ .", "Notice that the disk $\\overline{D} \\subset D$ , bounded by $\\overline{c}$ , contains two boundary-parallel dividing curves as shown in Figure REF .", "Notice that these two dividing curves are a part of $d_1$ and $d_2$ , but we just relabel them as $d_1$ and $d_2$ .", "According to Theorem REF , we can take a bypass lying on $\\overline{D}$ containing the dividing curve $d_1$ .", "Remove a bypass attachment of this bypass from $\\overline{N}$ .", "Then by Theorem REF , the resulting solid torus $\\overline{N}_1$ has two dividing curves of slope $\\overline{s}_1$ satisfying $-1 \\le \\overline{s}_1 < \\overline{s}$ , and the resulting meridian disk $\\overline{D}_1$ contains the single dividing curve $d_2$ .", "Perturb $\\overline{c}_1 = \\partial \\overline{D}_1$ to be Legendrian.", "Then by the equality (REF ), we have $\\operatorname{tb}(\\overline{c}_1) = -1$ .", "Since the dividing set interleaves, we have $\\operatorname{tb}(\\overline{c}_1) = -1 \\le -\\left|\\overline{s}_1 \\bullet \\frac{1}{0}\\right|,$ which implies that $\\overline{s}_1$ is an integer.", "Thus $\\overline{s}_1 = -1$ and $\\overline{N}_1$ has two dividing curves of slope $-1$ .", "Let $\\overline{L}_1$ be a Legendrian representative of the core of $\\overline{N}_1$ with $\\operatorname{tw}_F=-1$ .", "Then $\\overline{N}_1$ is a standard neighborhood of $\\overline{L}_1$ .", "Since $\\overline{D}$ only contains boundary-parallel dividing curves, the restricted contact structure $\\xi |_{\\overline{N}}$ is universally tight by Theorem REF .", "Since $\\overline{N}$ contains both $L_1$ and $\\overline{L}_1$ , by Proposition REF , $L_1$ is Legendrian isotopic to $\\overline{L}_1$ .", "Notice that $\\overline{c}_1$ intersects $d_2$ and does not intersect any other dividing curve on $D$ .", "Figure: The red curves are the dividing curves on a convex disk $D$ .", "The closed curves are Legendrian curves.", "The second case is $\\overline{s} = -1$ and there are four dividing curves on $\\partial \\overline{N}$ .", "Again, the disk $\\overline{D} \\subset D$ , bounded by $\\overline{c}$ , contains two boundary parallel dividing curves as shown in Figure REF .", "According to Theorem REF , we can take a bypass lying on $\\overline{D}$ containing the dividing curve $d_1$ .", "Remove a bypass attachment of this bypass from $\\overline{N}$ and let $\\overline{N}_1$ be the resulting solid torus and $\\overline{D}_1$ be the resulting meridian disk.", "Perturb $\\overline{c}_1 = \\partial \\overline{D}_1$ to be Legendrian.", "Since $\\overline{D}_1$ contains the single dividing curve $d_2$ , we have $\\operatorname{tb}(\\overline{c}_1) = -1$ .", "Thus there are two dividing curves on $\\partial \\overline{N}_1$ as discussed in the first case.", "Since there are more than two dividing curves on $\\partial \\overline{N}$ , the bypass attachment does not change the dividing slope.", "Thus
$\\overline{N}_1$ has two dividing curves of slope $-1$ and $\\overline{N} \\setminus \\overline{N}_1$ is a non-rotative layer.", "Let $\\overline{L}_1$ be a Legendrian representative of the core of $\\overline{N}_1$ with $\\operatorname{tw}_F = -1$ .", "Then $\\overline{N}_1$ is a standard neighborhood of $\\overline{L}_1$ .", "By the attach=dig principle (Theorem REF ), there is a solid torus $\\widetilde{N}$ containing $\\overline{N}$ with two dividing curves of slope $-1$ .", "By Theorem REF , the restricted contact structure $\\xi |_{\\widetilde{N}}$ is universally tight.", "Since $\\widetilde{N}$ contains both $L_1$ and $\\overline{L}_1$ , by Proposition REF , $L_1$ is Legendrian isotopic to $\\overline{L}_1$ .", "Notice that $\\overline{c}_1$ intersects $d_2$ and does not intersect any other dividing curve on $D$ .", "Figure: The red curves are the dividing curves on a convex disk $D$ .", "The closed curves are Legendrian curves.", "The third case is $\\overline{s} = -1$ and there are two dividing curves on $\\partial \\overline{N}$ .", "In this case, $\\overline{c}$ does not intersect $\\Gamma _{\\partial \\overline{N}}$ minimally.", "However, the disk $\\overline{D} \\subset D$ , bounded by $\\overline{c}$ , still contains two boundary parallel dividing curves as shown in Figure REF .", "According to Theorem REF , we can take a bypass lying on $\\overline{D}$ containing the dividing curve $d_1$ .", "Remove a bypass attachment of this bypass from $\\overline{N}$ and let $\\overline{N}_1$ be the resulting solid torus and $\\overline{D}_1$ be the resulting meridian disk.", "Perturb $\\overline{c}_1 = \\partial \\overline{D}_1$ to be Legendrian.", "Since $\\overline{D}_1$ contains the single dividing curve $d_2$ , we have $\\operatorname{tb}(\\overline{c}_1) = -1$ .", "Thus there are two dividing curves on $\\partial \\overline{N}_1$ as discussed in the first case.", "Since $\\overline{c}$ does not intersect $\\Gamma _{\\partial \\overline{N}}$ minimally, the bypass is not effective and the bypass attachment does not change the dividing slope.", "Thus $\\overline{N}_1$ has two dividing curves of slope $-1$ .", "Let $\\overline{L}_1$ be a Legendrian representative of the core of $\\overline{N}_1$ with $\\operatorname{tw}_F = -1$ .", "Then $\\overline{N}_1$ is a standard neighborhood of $\\overline{L}_1$ .", "By Theorem REF , the restricted contact structure $\\xi |_{\\overline{N}}$ is universally tight.", "Since $\\overline{N}$ contains both $L_1$ and $\\overline{L}_1$ , by Proposition REF , $L_1$ is Legendrian isotopic to $\\overline{L}_1$ .", "Notice that $\\overline{c}_1$ intersects $d_2$ and does not intersect any other dividing curve on $D$ .", "We have just proved the claim.", "By applying the claim inductively, we can isotope $L_1$ through Legendrian knots until $c_1$ intersects $d_n$ and does not intersect any other dividing curve on $D$ .", "After that, take a bypass lying on $D$ which does not contain $d_n$ , and remove a bypass attachment of the bypass from $V$ .", "Repeat this until there is only one dividing curve, $d_n$ , left.", "See Figure REF for example.", "Let $\\overline{V}$ be the resulting solid torus and $\\overline{D}$ be the resulting meridian disk.", "Perturb $\\overline{c} = \\partial \\overline{D}$ to be Legendrian.", "Since $\\overline{D}$ contains the single dividing curve $d_n$ , we have $\\operatorname{tb}(\\overline{c}) = -1$ .", "Let $\\overline{s} = p/q$ be the dividing slope of $\\partial \\overline{V}$ and $2n$ be the number of dividing curves.", "Since
the dividing set interleaves, we have $\\operatorname{tb}(\\overline{c}) = -1 \\le -n\\left|\\frac{p}{q} \\bullet \\frac{1}{0}\\right|.$", "The equality holds if and only if $\\overline{c}$ intersects $\\Gamma _{\\partial \\overline{V}}$ minimally.", "From the inequality, we have $n=1$ and $q=1$ .", "Thus there are two dividing curves on $\\partial \\overline{V}$ and $\\overline{s}$ is an integer.", "Since none of the bypasses intersects $N_1$ or $N_2$ and the bypass attachment is a local operation, $\\overline{V}$ contains both $N_1$ and $N_2$ .", "Thus we have $-1 \\le \\overline{s} \\le s \\in [-1,0)$ and this implies that $\\overline{s}=-1$ .", "Thus $\\overline{V}$ has two dividing curves of slope $-1$ .", "By Theorem REF , the restricted contact structure $\\xi |_{\\overline{V}}$ is universally tight.", "Since $\\overline{V}$ contains both $L_1$ and $L_2$ , by Proposition REF , $L_1$ and $L_2$ are Legendrian isotopic.", "Figure: The red curves are the dividing curves on a convex disk $D$ .", "The closed curves are Legendrian curves.", "Next, we consider the case $m := |D \\cap L_1| > 1$ .", "In this case, we can perturb $\\partial N_1$ so that the ruling slope is $\\infty $ and there are $m$ ruling curves $c_1^1, \\ldots , c_m^1$ lying on $D$ and each $c_i^1$ intersects a dividing curve on $D$ at two points.", "Similarly, we can also perturb $\\partial N_2$ so that the ruling slope is $\\infty $ and there is a ruling curve $c^2$ lying on $D$ intersecting a dividing curve on $D$ at two points.", "Choose the dividing curves $d_1,\\ldots ,d_n$ on $D$ such that $c_1^1$ intersects $d_1$ , $c^2$ intersects $d_n$ , and $d_i$ and $d_{i+1}$ are adjacent.", "We claim that we can isotope $L_1$ through Legendrian knots so that $c_1^1$ intersects $d_2$ , while fixing other $c_i^1$ for $2 \\le i \\le m$ .", "After perturbing $D$ , we can take a solid torus $\\overline{N}$ such that $\\overline{N}$ contains $N_1$ and there are $m$ ruling curves $\\overline{c}_1,\\ldots ,\\overline{c}_m$ of $\\partial \\overline{N}$ lying on $D$ such that $\\overline{c}_i = c_i^1$ for $2 \\le i \\le m$ , $\\overline{c}_1$ contains $c_1^1$ and $\\overline{c}_1$ intersects $d_1$ and $d_2$ at four points.", "By the equality (REF ), we have $\\operatorname{tb}(\\overline{c}_2) = -1$ and this implies that there are two dividing curves on $\\partial \\overline{N}$ and the dividing slope $\\overline{s}$ is an integer as discussed above.", "Also, since $\\overline{N}$ contains $N_1$ , we have $-1 \\le \\overline{s} \\le s \\in [-1,0)$ and $\\overline{s} = -1$ .", "Thus $\\overline{N}$ has two dividing curves of slope $-1$ .", "This implies that $\\overline{c}_1$ does not intersect $\\Gamma _{\\partial \\overline{N}}$ minimally.", "Since there are two boundary-parallel dividing curves on the disk $\\overline{D} \\subset D$ , bounded by $\\overline{c}_1$ , we can find a bypass lying on $\\overline{D}$ that contains $d_1$ according to Theorem REF .", "Remove a bypass attachment of this bypass from $\\overline{N}$ and let $\\overline{N}_1$ be the resulting solid torus and $\\overline{D}_1$ be the resulting meridian disk.", "Since $\\overline{D}_1$ contains the single dividing curve $d_2$ , there are still two dividing curves on $\\overline{N}_1$ .", "Since $\\overline{c}_1$ does not intersect $\\Gamma _{\\partial \\overline{N}}$ minimally, the bypass is not effective and the bypass attachment does not change the dividing slope.", "Thus $\\overline{N}_1$ has two dividing curves of slope $-1$ .", "Let
$\\overline{L}_1$ be a Legendrian representative of the core of $\\overline{N}_1$ with $\\operatorname{tw}_F = -1$ .", "Then $\\overline{N}_1$ is a standard neighborhood of $\\overline{L}_1$ .", "By Theorem REF , the restricted contact structure $\\xi |_{\\overline{N}}$ is universally tight.", "Since $\\overline{N}$ contains both $L_1$ and $\\overline{L}_1$ , by Proposition REF , $L_1$ is Legendrian isotopic to $\\overline{L}_1$ .", "This completes the claim.", "By applying the claim inductively, we can isotope $L_1$ through Legendrian knots until $c_1^1$ intersects $d_n$ while fixing other $c_i^1$ for $2 \\le i \\le m$ .", "After that, apply the claim to $c_2^1$ and we can isotope $L_1$ through Legendrian knots until $c_2^1$ intersects $d_n$ while fixing other $c_i^1$ .", "Repeat the argument until all $c_i^1$ for $1 \\le i \\le m$ intersect $d_n$ .", "Now using Theorem REF , take a bypass lying on $D$ that does not contain $d_n$ and remove a bypass attachment of this bypass from $V$ .", "Repeat this until there is only one dividing curve, $d_n$ , left.", "Let $\\overline{V}$ be the resulting solid torus and $\\overline{D}$ be the resulting meridian disk.", "Perturb $\\overline{c} = \\partial \\overline{D}$ to be Legendrian.", "Since $\\overline{D}$ contains the single dividing curve $d_n$ , we have $\\operatorname{tb}(\\overline{c}) = -1$ and this implies that there are two dividing curves on $\\partial \\overline{V}$ and the dividing slope $\\overline{s}$ is an integer as discussed above.", "Since none of the bypasses intersects $N_1$ or $N_2$ and the bypass attachment is a local operation, $\\overline{V}$ contains both $N_1$ and $N_2$ .", "Thus we have $-1 \\le \\overline{s} \\le s \\in [-1,0)$ and this implies that $\\overline{s}=-1$ .", "Thus $\\overline{V}$ has two dividing curves of slope $-1$ .", "By Theorem REF , the restricted contact structure $\\xi |_{\\overline{V}}$ is universally tight.", "Since $\\overline{V}$ contains both $L_1$ and $L_2$ , by Proposition REF , $L_1$ and $L_2$ are Legendrian isotopic.", "Lastly, we show that if $L$ is a Legendrian representative of the core with $n := \\operatorname{tw}_F < -1$ , then $L$ destabilizes.", "Suppose $N$ is a standard neighborhood of $L$ .", "Then we have $V \\setminus N = T(n,s)$ , which is a minimally twisting $T^2 \\times I$ layer with the dividing slopes $n$ and $s$ .", "Since $n<-1$ , we can decompose $T(n,s)$ into $T(n,n+1) \\cup T(n+1,s)$ .", "Notice that $T(n,n+1)$ is a basic slice and we can thicken $N$ by attaching $T(n,n+1)$ .", "This corresponds to a destabilization of $L$ .", "Now we are ready to classify Legendrian and transverse rational unknots in any tight contact structure on lens spaces.", "We first show the Legendrian simplicity.", "Proposition 3.4 Let $\\xi $ be a tight contact structure on a lens space $L(p,q)$ and $K$ be an oriented rational unknot in $L(p,q)$ .", "Then there exists a unique Legendrian representative $L$ of $K$ in $\\xi $ such that any Legendrian representative of $K$ is Legendrian isotopic to $L$ or its stabilization.", "Recall from Section REF that a tight contact structure $\\xi $ on $L(p,q)$ can be decomposed into $S(-1,-p/q;l) \\cup S(-1,0;u)$ .", "Also, recall that $s_0,\\ldots ,s_n$ are the vertices of the shortest path in the Farey graph where $s_0 = -p/q$ and $s_n = 0$ .", "Thus $\\xi $ can also be decomposed into $S(s_1,-p/q;l)$ and $S(s_1,0;u)$ .", "Suppose $K = K_1$ .", "We first show that there exists a unique Legendrian representative of $K$ with
$\\operatorname{\\overline{\\operatorname{tb}}}_{\\mathbb {Q}}(K)$ up to Legendrian isotopy.", "Let $L$ and $L^{\\prime }$ be Legendrian representatives of $K$ with $\\operatorname{tb}_{\\mathbb {Q}}(L) = \\operatorname{tb}_{\\mathbb {Q}}(L^{\\prime }) = \\operatorname{\\overline{\\operatorname{tb}}}_{\\mathbb {Q}}(K)$ , and $N$ and $N^{\\prime }$ be standard neighborhoods of $L$ and $L^{\\prime }$ , respectively.", "As shown in the proof of Theorem REF , both $N$ and $N^{\\prime }$ are $S(-1,0;u)$ , i.e., they have two dividing curves of slope $-1$ with the upper meridional slope 0.", "Since $L$ and $L^{\\prime }$ are smoothly isotopic, there exists a smooth isotopy from $N$ to $N^{\\prime }$ .", "Then by Theorem REF , there exists a sequence of solid tori $N_1, \\ldots , N_n$ where $N_1 = N$ , $N_n = N^{\\prime }$ and $N_{i+1}$ is obtained by attaching a bypass to $\\partial N_i$ .", "Let $s_i$ be the dividing slope of $N_i$ .", "Here, we define Legendrian representatives $L_i$ associated to $N_i$ as follows.", "First, if $s_i \\le -1$ , then $N_i$ contains a solid torus $S(-1,0;u)$ .", "Define $L_i$ to be a Legendrian representative of the core of this $S(-1,0;u)$ with the maximum twisting number.", "Notice that this $S(-1,0;u)$ is a standard neighborhood of $L_i$ so $\\operatorname{tb}_{\\mathbb {Q}}(L_i) = \\operatorname{\\overline{\\operatorname{tb}}}_{\\mathbb {Q}}(K)$ .", "If $s_i > -1$ , then $N_i$ is contained in some $S(-1,0;u)$ .", "Define $L_i$ to be a Legendrian representative of the core of this $S(-1,0;u)$ with the maximum twisting number.", "Notice that this $S(-1,0;u)$ is a standard neighborhood of $L_i$ so $\\operatorname{tb}_{\\mathbb {Q}}(L_i) = \\operatorname{\\overline{\\operatorname{tb}}}_{\\mathbb {Q}}(K)$ .", "From the definition, we can choose $L_0$ to be $L$ and $L_n$ to be $L^{\\prime }$ .", "We claim that $L_i$ and $L_{i+1}$ are Legendrian isotopic, and this implies that $L$ and $L^{\\prime }$ are Legendrian isotopic by induction.", "Observe that if $s_i > -1$ , then $s_{i+1} \\ge -1$ by Theorem REF .", "Similarly, if $s_i < -1$ , then $s_{i+1} \\le -1$ .", "Due to this fact, there are only two cases we need to consider.", "The first case is $s_i, s_{i+1} \\le -1$ .", "Assume that $N_{i+1}$ is obtained by attaching a bypass to $\\partial N_i$ which is contained in $N_i$ .", "In this case, $N_i$ contains $N_{i+1}$ and this implies that $N_i$ contains both $L_i$ and $L_{i+1}$ .", "If $\\partial N_i$ has more than two dividing curves, then by the attach=dig principle (Theorem REF ), we can thicken $N_i$ and reduce the number of dividing curves.", "Also notice that $L_i$ and $L_{i+1}$ still have the maximum twisting number in $N_i$ .", "If not, they destabilize and it contradicts that they have $\\operatorname{\\overline{\\operatorname{tb}}}_{\\mathbb {Q}}(K)$ .", "Thus by Proposition REF , $L_i$ and $L_{i+1}$ are Legendrian isotopic.", "Now assume that $N_{i+1}$ is obtained by attaching a bypass to $\\partial N_i$ which is not contained in $N_i$ .", "Then $N_{i+1}$ contains $N_i$ and this implies that $N_{i+1}$ contains both $L_i$ and $L_{i+1}$ .", "Thus we can apply the same argument above (by switching the role of $N_i$ and $N_{i+1}$ ) and conclude that $L_i$ and $L_{i+1}$ are Legendrian isotopic.", "The second case is $s_i, s_{i+1} \\ge -1$ .", "In this case, the complements of each $N_i$ and $N_{i+1}$ contains $S(s_1,-p/q;l)$ since $s_1 \\le -1$ .", "Let $\\overline{L}_i$ and $\\overline{L}_{i+1}$ be the Legendrian representatives of the core of each 
$S(s_1,-p/q;l)$ containing in $N_i$ and $N_{i+1}$ , respectively, with the maximum twisting number.", "Notice that standard neighborhoods of $\\overline{L}_i$ and $\\overline{L}_{i+1}$ are $S(s_1,-p/q;l)$ .", "Assume that $N_{i+1}$ is obtained by attaching a bypass to $\\partial N_i$ which is contained in $N_i$ .", "Then $N_i$ contains $N_{i+1}$ and this implies that the complement of $N_{i+1}$ contains both $\\overline{L}_i$ and $\\overline{L}_{i+1}$ .", "Also notice that $\\overline{L}_i$ and $\\overline{L}_{i+1}$ still have the maximum twisting number in the complement of $N_{i+1}$ .", "If not, they destabilize and standard neighborhoods of them are $S(s,-p/q;l)$ where $s$ is clockwise of $s_1$ and there is an edge between $s$ and $-p/q$ in the Farey graph.", "This implies that $s < -p/q$ or $s = \\infty $ , and a non-minimally twisting $T^2 \\times I$ layer embeds in $(L(p,q),\\xi )$ , which contradicts the tightness of $\\xi $ .", "Now $\\overline{L}_i$ and $\\overline{L}_{i+1}$ are Legendrian isotopic by Proposition REF .", "Thus after Legendrian isotopy, we can identify $\\overline{L}_i$ with $\\overline{L}_{i+1}$ , and then both $L_i$ and $L_{i+1}$ are contained in the complement of a standard neighborhood of $\\overline{L}_i$ .", "By Proposition REF again, $L_i$ and $L_{i+1}$ are Legendrian isotopic.", "Now assume that $N_{i+1}$ is obtained by attaching a bypass to $\\partial N_i$ which is not contained in $N_i$ .", "Then $N_{i+1}$ contains $N_i$ and this implies that the complement of $N_i$ contains both $\\overline{L}_i$ and $\\overline{L}_{i+1}$ .", "Thus we can apply the same argument above (by switching the role of $N_i$ and $N_{i+1}$ ) and conclude that $L_i$ and $L_{i+1}$ are Legendrian isotopic.", "This completes the claim.", "By Theorem REF , a Legendrian representative $L$ of $K$ with $\\operatorname{tb}_{\\mathbb {Q}}(L) < \\operatorname{\\overline{\\operatorname{tb}}}_{\\mathbb {Q}}(K)$ destabilizes.", "The identical argument works for $-K_1$ and $\\pm K_2$ .", "We leave them as exercises for the reader.", "Remark 3.5 Notice that in the proof of Proposition REF , we need Proposition REF , not just Proposition REF even for the universally tight contact structures on $L(p,q)$ .", "This is because there exist virtually overtwisted neighborhoods of $K$ even in the universally tight contact structures.", "The theorem immediately follows from Theorem REF and Proposition REF .", "In [17], Etnyre and Honda showed that the classification of transverse knots is equivalent to the classification of Legendrian knots up to negative stabilization.", "Thus the theorem immediately follows from Theorem REF ." 
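Editorial note (not part of the original text): the estimates in the proof of Proposition 3.3 above use the pairing $\\bullet$ of slopes, written as reduced fractions, on the Farey graph. Assuming the standard convention that this pairing counts the minimal geometric intersection number of the corresponding curves on the torus, a short worked computation makes the two bounds explicit: $$\\frac{p}{q} \\bullet \\frac{p'}{q'} = |pq' - qp'|, \\qquad \\left|\\frac{p}{q} \\bullet \\frac{1}{0}\\right| = |p \\cdot 0 - q \\cdot 1| = q.$$ Hence $\\operatorname{tb}(\\overline{c}) = -2 \\le -n\\left|\\frac{p}{q} \\bullet \\frac{1}{0}\\right| = -nq$ gives $nq \\le 2$, so $n = 1$ once $q > 1$; and $\\operatorname{tb}(\\overline{c}_1) = -1 \\le -\\left|\\overline{s}_1 \\bullet \\frac{1}{0}\\right|$ forces the denominator of $\\overline{s}_1$ to be $1$, i.e. $\\overline{s}_1$ is an integer, exactly as used in the first case of that proof.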
], [ "The contact mapping class group of the standard lens spaces", "In this section, we use the results from the previous sections to prove Theorem REF , and Corollary REF and REF .", "We also prove Theorem REF using the results of Ding and Geiges [10], see Section REF .", "Recall from Section REF that the standard contact structure $\\xi _{std}$ on $L(p,q)$ can be decomposed into $S(-1,-p/q;l) \\cup S(-1,0;u)$ , and the contact structure $\\xi _{std}$ restricted to $S(-1,0;u)$ is universally tight by Theorem REF .", "Also, the contact structure $\\xi _{std}$ restricted to $S(-1,-p/q;l)$ is universally tight by Theorem REF .", "Let $L_1$ be a Legendrian representative of $K_1$ with $\\operatorname{\\overline{\\operatorname{tb}}}_{\\mathbb {Q}}(K_1)$ .", "As shown in the proof of Theorem REF , a standard neighborhood of $L_1$ is $S(-1,0;u)$ .", "Recall from Section REF that if $q^2 \\equiv 1 \\pmod {p}$ , then $\\sigma $ is a contactomorphism of the standard contact structure $\\xi _{std}$ on $L(p,q)$ .", "Also, if $q \\equiv -1 \\pmod {p}$ , then $\\tau $ is smoothly isotopic to a contactomorphism $\\overline{\\tau }$ of $\\xi _{std}$ on $L(p,q)$ .", "If $q \\lnot \\equiv -1 \\pmod {p}$ , then there exist two standard contact structure $\\xi _{std}^\\pm $ on $L(p,q)$ and $\\tau _*$ sends one to the other.", "We first show that any contactomorphism $f \\in \\operatorname{Cont}(L(p,q),\\xi _{std})$ is contact isotopic to either $\\sigma $ , $\\overline{\\tau }$ , or the identity.", "Suppose $f$ is smoothly isotopic to the identity.", "Since $f$ is a contactomorphism, $f$ sends a Legendrian representative of $K_1$ with $\\operatorname{\\overline{\\operatorname{tb}}}_{\\mathbb {Q}}(K_1)$ to the one with $\\operatorname{\\overline{\\operatorname{tb}}}_{\\mathbb {Q}}(K_1)$ .", "Then by Proposition REF (or Theorem REF ), $f(L_1)$ is Legendrian isotopic to $L_1$ .", "By the contact isotopy extension theorem [21], we can assume $f$ fixes $L_1$ .", "Moreover, by Lemma REF , we can further assume that $f$ fixes a standard neighborhood $S(-1,0;u)$ of $L_1$ .", "As discussed above, the contact structure $\\xi _{std}$ restricted to the complement of $S(-1,0;u)$ , which is $S(-1,-p/q;l)$ , is universally tight.", "Thus by Theorem REF , $f|_{S(-1,-p/q;l)}$ is contact isotopic to the identity relative to the boundary.", "Since $f|_{S(-1,0;u)}$ is the identity, $f$ is contact isotopic to the identity.", "Next suppose $f$ is smoothly isotopic to $\\sigma $ .", "Let $g := f^{-1} \\circ \\sigma $ .", "Then $g$ is a contactomorphism of $\\xi _{std}$ which is smoothly isotopic to the identity.", "By the argument above, $g$ is contact isotopic to the identity.", "Thus $f$ and $\\sigma $ are contact isotopic.", "Suppose $f$ is smoothly isotopic to $\\tau $ .", "If $q \\equiv -1 \\pmod {p}$ , then $\\tau $ is smoothly isotopic to a contactomorphism $\\overline{\\tau }$ as discussed above.", "Let $g := f^{-1} \\circ \\overline{\\tau }$ .", "Then $g$ is a contactomorphism which is smoothly isotopic to the identity.", "By the argument above, $g$ is contact isotopic to the identity, so $f$ and $\\overline{\\tau }$ are contact isotopic.", "If $q \\lnot \\equiv -1 \\pmod {p}$ , then $\\tau $ is a coorientation reversing contactomorphism sending $\\xi _{std}^\\pm $ to $\\xi _{std}^\\mp $ .", "Since $f$ and $\\tau $ are isotopic, $f_*(\\xi _{std}^\\pm )$ is isotopic to $\\tau _*(\\xi _{std}^\\pm ) = \\xi _{std}^\\mp $ , which contradicts that $f$ is a contactomorphism of $\\xi _{std}^\\pm $ .", "Now assume $p=2$ .", "Then 
$\\pi _0(\\operatorname{Diff}_+(L(p,q)))$ is trivial by Theorem REF .", "In this case, any contactomorphism $f$ is contact isotopic to the identity, so we have $\\pi _0(\\operatorname{Cont}(L(p,q),\\xi _{std})) = 1.$ If $p\\ne 2$ and $q\\equiv -1 \\pmod {p}$ , then $\\pi _0(\\operatorname{Diff}_+(L(p,q)))$ is generated by $\\sigma $ by Theorem REF .", "In this case, any contactomorphism $f$ is contact isotopic to $\\sigma $ or the identity, so we have $\\pi _0(\\operatorname{Cont}(L(p,q),\\xi _{std})) = \\mathbb {Z}_2,$ generated by $\\sigma $ .", "If $p\\ne 2$ and $q\\equiv 1 \\pmod {p}$ , then $\\pi _0(\\operatorname{Diff}_+(L(p,q)))$ is generated by $\\tau $ by Theorem REF .", "In this case, any diffeomorphism $f$ smoothly isotopic to $\\tau $ cannot be a contactomorphism of $\\xi _{std}$ by the argument above.", "Thus we have $\\pi _0(\\operatorname{Cont}(L(p,q),\\xi _{std}))=1.$ If $p\\ne 2$ , $q\\lnot \\equiv \\pm 1$ and $q^2 \\equiv 1 \\pmod {p}$ , then $\\pi _0(\\operatorname{Diff}_+(L(p,q)))$ is generated by $\\sigma $ and $\\tau $ by Theorem REF .", "Again, any diffeomorphism $f$ which is smoothly isotopic to $\\tau $ cannot be a contactomorphism of $\\xi _{std}$ by the argument above.", "Thus we have $\\pi _0(\\operatorname{Cont}(L(p,q),\\xi _{std}))=\\mathbb {Z}_2,$ generated by $\\sigma $ .", "Finally, if $q^2 \\lnot \\equiv 1 \\pmod {p}$ , then $\\pi _0(\\operatorname{Diff}_+(L(p,q)))$ is generated by $\\tau $ by Theorem REF .", "Again, any diffeomorphism $f$ smoothly isotopic to $\\tau $ cannot be a contactomorphism of $\\xi _{std}$ by the argument above.", "Thus we have $\\pi _0(\\operatorname{Cont}(L(p,q),\\xi _{std}))=1.$ This completes the proof.", "In the proof of Theorem REF , we showed that any contactomorphism $f \\in \\operatorname{Cont}(L(p,q),\\xi _{std})$ is contact isotopic to either $\\sigma $ , $\\overline{\\tau }$ , or the identity.", "This proves the injectivity of $i_*$ since $\\pi _0(\\operatorname{Diff}_+(L(p,q)))$ is generated by $\\sigma $ and $\\tau $ .", "Also, in the proof of Theorem REF , we showed that any diffeomorphism which is smoothly isotopic to $\\tau $ cannot be a contactomorphism of $\\xi _{std}$ when $q \\lnot \\equiv -1 \\pmod {p}$ .", "Moreover, $\\tau $ is not isotopic to the identity if $p\\ne 2$ by Theorem REF .", "This implies that $i_*$ is not surjective when $q \\lnot \\equiv -1 \\pmod {p}$ .", "Finally, in the proof of Theorem REF , we showed that both $\\pi _0(\\operatorname{Cont}(L(p,q),\\xi _{std}))$ and $\\pi _0(\\operatorname{Diff}_+(L(p,q)))$ are generated by $\\sigma $ if $q\\equiv -1$ , so $i_*$ is surjective.", "Suppose $L$ is a Legendrian representative of the positively oriented core of $S^1 \\times S^2$ with $\\operatorname{rot}(L)=0$ .", "First, observe that $\\delta ^m$ is not contact isotopic to $\\delta ^n$ if $m \\ne n$ since $\\operatorname{rot}(\\delta ^m(L)) \\ne \\operatorname{rot}(\\delta ^n(L))$ by Lemma REF .", "Let $f$ be a contactomorphism of $\\xi _{std}$ on $S^1 \\times S^2$ .", "We will show that $\\delta $ and $\\eta $ commute, and $f$ is contact isotopic to $\\delta ^m \\circ \\eta ^i$ for some $m \\in \\mathbb {Z}$ and $i \\in \\mathbb {Z}_2$ .", "Then a map $\\Phi : \\mathbb {Z} \\oplus \\mathbb {Z}_2 &\\rightarrow \\pi _0(\\operatorname{Cont}(S^1 \\times S^2), \\xi _{std}),\\\\(m,i) &\\mapsto \\delta ^m\\circ \\eta ^i$ is a well-defined homomorphism and clearly it is an isomorphism.", "Suppose $n := \\operatorname{rot}(f(L))$ .", "There are two cases we need to consider according to the 
orientation of $f(L)$ .", "Suppose $f(L)$ is smoothly isotopic to $L$ .", "According to Lemma REF , $\\delta $ increases the rotation number of $f(L)$ by 1, so we have $\\operatorname{rot}((\\delta ^{-n} \\circ f)(L)) = 0.$ Thus $(\\delta ^{-n} \\circ f)(L)$ is Legendrian isotopic to $L$ by Theorem REF .", "Moreover, by Lemma REF , we can assume that $\\delta ^{-n} \\circ f$ fixes a standard neighborhood $N$ of $L$ .", "Ding and Geiges [10] showed that the complement of $N$ in the standard contact structure on $S^1 \\times S^2$ is a solid torus with two longitudinal dividing curves.", "Now by Theorem REF , the restriction of $\\delta ^{-n} \\circ f$ to the complement of $N$ is contact isotopic to the identity.", "Thus $\\delta ^{-n} \\circ f$ is contact isotopic to the identity and hence $f$ is contact isotopic to $\\delta ^n$ .", "Now suppose $f(L)$ is smoothly isotopic to $-L$ .", "Then $(\\eta ^{-1} \\circ \\delta ^n \\circ f)(L)$ is smoothly isotopic to $L$ and by Lemma REF we have $\\operatorname{rot}((\\eta ^{-1} \\circ \\delta ^n \\circ f)(L)) = 0.$ Thus by Theorem REF , $(\\eta ^{-1} \\circ \\delta ^n \\circ f)(L)$ is Legendrian isotopic to $L$ and by Lemma REF , we can assume that $\\eta ^{-1} \\circ \\delta ^n \\circ f$ fixes a neighborhood of $L$ .", "Again, $\\eta ^{-1} \\circ \\delta ^n \\circ f$ is contact isotopic to the identity by the same argument above.", "Thus $f$ is contact isotopic to $\\delta ^{-n} \\circ \\eta $ .", "Finally, consider a contactomorphism $\\delta ^{-1} \\circ \\eta ^{-1} \\circ \\delta \\circ \\eta $ .", "By Lemma REF , we have $\\operatorname{rot}((\\delta ^{-1} \\circ \\eta ^{-1} \\circ \\delta \\circ \\eta )(L)) = 0$ and $(\\delta ^{-1} \\circ \\eta ^{-1} \\circ \\delta \\circ \\eta )(L)$ is smoothly isotopic to $L$ .", "By Theorem REF , $(\\delta ^{-1} \\circ \\eta ^{-1} \\circ \\delta \\circ \\eta )(L)$ is Legendrian isotopic to $L$ .", "Applying the argument above, we can show $\\delta \\circ \\eta $ is contact isotopic to $\\eta \\circ \\delta $ .", "This completes the proof.", "Notice that $\\operatorname{Cont}_0(M,\\xi ) = \\ker i$ , where $i:\\operatorname{Cont}(M,\\xi ) \\rightarrow \\operatorname{Diff}_+(M)$ is the natural inclusion.", "Now the corollary is immediate from the injectivity of $i_*$ from Corollary REF ." ] ]
2207.03590
[ [ "VeriDark: A Large-Scale Benchmark for Authorship Verification on the\n Dark Web" ], [ "Abstract The DarkWeb represents a hotbed for illicit activity, where users communicate on different market forums in order to exchange goods and services.", "Law enforcement agencies benefit from forensic tools that perform authorship analysis, in order to identify and profile users based on their textual content.", "However, authorship analysis has been traditionally studied using corpora featuring literary texts such as fragments from novels or fan fiction, which may not be suitable in a cybercrime context.", "Moreover, the few works that employ authorship analysis tools for cybercrime prevention usually employ ad-hoc experimental setups and datasets.", "To address these issues, we release VeriDark: a benchmark comprised of three large scale authorship verification datasets and one authorship identification dataset obtained from user activity from either Dark Web related Reddit communities or popular illicit Dark Web market forums.", "We evaluate competitive NLP baselines on the three datasets and perform an analysis of the predictions to better understand the limitations of such approaches.", "We make the datasets and baselines publicly available at https://github.com/bit-ml/VeriDark" ], [ "Introduction", "The Dark Web is content found on the DarkNet, which is a restricted network of computers accessible using special software and communication protocols, such as Torhttps://www.torproject.org/.", "This infrastructure offers anonymity and protection from surveillance and tracking, which greatly benefits users whose privacy is of great concern (e.g.", "journalists, whistleblowers, or people under oppressive regimes).", "However, cybercriminals often exploit these services to conduct illegal activities via discussion forums and illicit shops corresponding to different marketplaces, frequently referred to as hidden services.", "Law enforcement agencies have started to crack down on DarkNet cybercriminals in the last decade.", "One of the largest marketplaces was Silk Road, estimated to host between 30000 and 150000 active customers [10], which was shutdown by the FBI in 2013.", "However, this success was short-lived due to the quick shift of the customer base and vendors to other marketplaces.", "As of 2022, relocation to other marketplaces remains a key issue when dealing with DarkNet cybercrime [14].", "This indicates that more efforts are needed to develop law enforcement forensic tools to help analysts.", "Since one of the largest digital footprints of a DarkWeb user is their post history, developing text-based authorship analysis software enables these efforts.", "Figure: An Agora Dark Web forum post compared to a PAN 2020 Authorship Verification dataset sample.", "Notice the different themes (drugs, hacking vs. fiction) and style (colloquial, sloppy vs. 
narration, dialogue).", "The Dark Web posts often include emojis and acronyms of DarkNet-related concepts, while PAN samples contain fictional character names and fabricated words.", "The authorship analysis field has a long history.", "The first efforts go back to as early as the nineteenth century [25], followed by the seminal work of [26] during the mid-twentieth century, where the authors used Bayesian statistical methods to identify the authors of the controversial Federalist papers.", "The domain of authorship analysis spans a range of different tasks, such as: authorship attribution, which is the task of identifying the author of a text given a closed set of identities (i.e.", "a list of known authors); authorship verification (AV), the task of determining if two texts A and B are written by the same author or not; author profiling, where certain characteristics (e.g.", "age, gender, nationality) need to be predicted.", "Research in the past decade has been spearheaded by machine learning approaches coupled with increasingly large corpora.", "However, these efforts were mainly targeting literary works in the form of novels and fan fiction [17].", "As such, authorship verification solutions developed using these datasets are likely to underperform under a domain shift when deployed on Dark Web corpora, which have a different linguistic style and register (domain-specific abbreviations and code names, slang, a particular netiquette, presence of emojis, etc.", "), as exemplified in Fig.", "REF .", "We thus introduce the VeriDark benchmark, which contains three large-scale authorship verification datasets collected from social media.", "The first one is DarkReddit+, a dataset collected from a Reddit forum called /r/darknetmarkets, which features discussions related to trading on the DarkNet marketplaces.", "The next two datasets, Agora and SilkRoad1, are collected from two Dark Web forums associated with the two most popular defunct marketplaces (Agora and Silk Road).", "Our aim is twofold: (i) to enable the development and evaluation of modern AV models in a cybersecurity forensics context; (ii) to facilitate the evaluation of existing AV methods under a domain shift.", "Moreover, we introduce a small authorship identification task in the cybersecurity context.", "Specifically, given a comment, the task asks to predict the user that made the comment.", "The task is a 10-way classification problem, where the classes are given by the top 10 most active users.", "The train set contains comments from the subreddit /r/darknetmarkets only, but we test on both /r/darknetmarkets and other subreddits unrelated to DarkNet discussions.", "We will make both the datasets and the data processing code publicly available (https://github.com/bit-ml/VeriDark).", "We will also maintain a public leaderboard (https://veridark.github.io/).", "Table: Statistics for our proposed datasets (bold) versus other relevant datasets.", "Agora and SilkRoad1 are the largest authorship verification datasets to date in terms of the number of examples, while having much shorter texts on average than the PAN datasets.", "To our knowledge, Agora and SilkRoad1 are the first publicly available authorship verification datasets featuring text from the Dark Web.", "DarkReddit+ is used for both the authorship verification and identification tasks.", "The row Total is obtained by merging Agora, SilkRoad1 and DarkReddit+."
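Editorial sketch (not from the paper): to make the 10-way identification setup above concrete, the snippet below shows one way the class labels and an out-of-domain test pool could be derived from a comment archive. The input format, the function name, and the exact handling of the in-domain train/test split are illustrative assumptions only.

from collections import Counter

def build_identification_splits(comments, top_k=10):
    # comments: iterable of (author, subreddit, text) tuples -- an assumed input format.
    dnm = [(a, t) for a, s, t in comments if s == "darknetmarkets"]
    # Classes are the top-k most active /r/darknetmarkets users.
    top_authors = [a for a, _ in Counter(a for a, _ in dnm).most_common(top_k)]
    label = {a: i for i, a in enumerate(top_authors)}
    # In-domain comments (to be further split into train and test portions) and
    # out-of-domain comments from unrelated subreddits (used for testing only).
    in_domain = [(t, label[a]) for a, t in dnm if a in label]
    out_of_domain = [(t, label[a]) for a, s, t in comments
                     if s != "darknetmarkets" and a in label]
    return in_domain, out_of_domain, label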
], [ "Related Work", "Our work relates to DarkNet forensic investigations and author profiling.", "One of the first works exploring the Dark Web hidden services is an analysis of the Silk Road 1 marketplace by [10], where the author examined and described the inner-workings of the defunct illegal marketplace.", "Undertaking a similar challenge, [31] used an anomaly detection method to study how the DarkNet markets react to certain disruptive events, such as Europol's Operation Onymous [13].", "In addition to studying the Dark Web, researchers endeavored to investigate drug trafficking on the clear web.", "[21], [22], [20], [28] have employed various machine learning models to identify illegal drug postings on Instagram and Twitter.", "To support this kind of pursuits, [8] crowdsourced a dataset for Drug Named Entity Recognition in the DarkNet marketplaces.", "Dark Web author profiling historically employed various techniques and multimodal data.", "In both [32] and [15], the authors linked multiple accounts (Sybils) of vendors using photos of their products.", "Similarly, [33] proposed a model that leverages both images and descriptions of drug listings to create a low-dimensional vendor embedding which can be used to determine Sybils across different marketplaces.", "In [23], the authors proved that multitask learning can be successfully employed for authorship attribution across DarkNet markets, while [1] showed that it is possible to link the activity of users in the illegal forums with their activity on Reddit using stylometric approaches for authorship attribution.", "More broadly, our work relates to authorship attribution and authorship verification, which is a more recent and difficult task.", "Historically, machine learning models have been successfully deployed for the authorship attribution problem on small datasets, some of the more prevalent techniques employing decision trees [34] and Support Vector Machines [11].", "Machine learning methods have also found success in the authorship verification task, where the unmasking technique has been particularly influential [19], [4].", "Recently, the direction has shifted towards using large scale datasets and neural networks [5], [6], [7], [27], [29], [30].", "These advancements primarily result from the efforts of the CLEF Initiative, which released the first large-scale Authorship Verification datasets and tasks [16], [17], [18].", "Distinctly from the PAN AV datasets, which feature literary texts, we tackle authorship verification in the cybersecurity context, where there is a significant domain shift towards a more informal and concise writing style, as can be seen in Fig.", "REF ." 
], [ "Dark Authorship Verification Datasets", "For all the authorship verification datasets, we removed the PGP signatures, keys and messages, since they can contain information which could leak the user's identity.", "We then filtered the duplicate messages and removed the non-English messages using the langdetecthttps://github.com/fedelopez77/langdetect library.", "We removed comments with less than 200 characters due to very little information available to decide the authorship status.", "Then, we partition the list of authors into three disjoint sets: 90% of the authors are kept for the train set, 5% of the authors are kept for the validation set, and another 5% are kept for the test set.", "We thus create a more difficult open-set author verification setup, in which documents at test time belong to authors that haven't been seen during training time.", "The training, validation and test split ratios remain approximately 90%, 5% and 5%, respectively.", "Figure: Histogram of text lengths (as number of words) for all the three datasets." ], [ "DarkReddit+ Authorship Verification Dataset", "We build a large AV dataset called DarkReddit+, which contains same author (SA) and different author (DA) pairs of comments from the /r/darknetmarkets subreddit.", "This defunct subreddit was a gateway to the DarkNet and hosted discussions about illicit services and goods.", "These comments were written between January 2014 and May 2015.", "The data was retrieved from a large archive https://reddit.com/r/datasets/comments/3bxlg7/i_have_every_publicly_available_reddit_comment/ containing all the comments written on reddit.com up until 2015 [2].", "We generate same author pairs by iterating through each author, then selecting document pairs until all its documents are exhausted.", "We then generate different author pairs using the following procedure: $(i)$ we select two random authors $a_1$ and $a_2$ , $(ii)$ a random document from author $a_1$ and a random document from author $a_2$ , and $(iii)$ finally, repeat the first two steps until we obtain the same number of different author pairs and same author pairs, resulting in balanced classes.", "We obtain a total number of almost 120k pairs, which are divided into 107,024 training pairs, 6,164 validation pairs and 6,680 test pairs, as shown in Table REF ." 
], [ "DarkNet Authorship Verification Datasets", "The next two datasets, called SilkRoad1 and Agora, were obtained from comments retrieved from two of the most popular DarkNet marketplace forums: Silk Road and Agora.", "Silk Road was one of the largest black markets, which started in 2011 and was shut down in 2013.", "We used publicly available forums scraped from DarkNet from 2013 to 2015 [9].", "The two archived forums have different folder structures, due to crawling differences, but share the same SimpleMachineForums 2 forum layout https://www.simplemachines.org.", "To extract the comments, we first retrieved all the HTML files that contained the word `topic' for both forums.", "We then parsed each HTML file using the BeautifulSoup library https://www.crummy.com/software/BeautifulSoup/bs4/doc/, extracting the author names and the comments and storing them in a dictionary.", "We generated same author and different author pairs using the same procedure used for the DarkReddit+ dataset.", "Statistics for both datasets are listed in Table REF .", "The Agora dataset is almost 7 times larger than the SilkRoad1 dataset, which in turn is almost 7 times larger than the DarkReddit+ dataset.", "To our knowledge, the DarkNet datasets are the largest authorship verification datasets to date in terms of the number of examples.", "However, distinctly from the PAN text pairs, which feature large fanfiction excerpts, the DarkNet examples are much shorter (average number of words is around 120), due to the nature of the short and frequent forum interactions.", "We plot the histograms of the text lengths for all three datasets, where the length of a text is the number of its words.", "All our datasets have similar length distributions, as illustrated in Figure REF , but DarkReddit+ has shorter comments overall, with very few comments over 200 words." ], [ "DarkReddit+ Author Identification Dataset", "A forensic scenario of interest is identifying users on the Web based on their activity on the Dark Web.", "To implement this scenario, we could perform author attribution where the target document is from the Web and the reference documents are from the Dark Web.", "An easier task would be to identify a Dark Web user based on its comment.", "To this end, we retrieve all the comments that users from /r/darknetmarkets have written in other subreddits as well.", "We refer to /r/darknetmarkets as DarkReddit and to other subreddits as ClearReddit.", "We then keep the users that have at least 5 comments in DarkReddit and 5 comments in ClearReddit.", "Further, we select the top 10 most active users in DarkReddit, based on the number of comments.", "Next, we create a classification task in which given a comment, the correct user (out of 10 possible users) must be predicted.", "The dataset has approximately 10k examples.", "We list the train, validation and test splits in Table REF ." ], [ "Experiments", "Next, we describe our training methodology, models, and general experimental setup.", "For all our experiments, we use the train/validation/test splits as described in Sec.", "." 
], [ "Authorship Verification", "Training.", "We fine-tune BERT-based models [12] on our dataset in a discriminative way, as suggested in [24].", "During training, given two texts $S_1$ and $S_2$ , we randomly select 254 tokens from $S_1$ and 254 tokens from $S_2$ .", "We concatenate the two token sequences, separating them using the $[SEP]$ special token.", "If the texts are too short, we append the $[PAD]$ special token to the sequence, until we obtain a sequence of length 512.", "The tokens are then forwarded to the backbone and we obtain a joint sequence pair embedding $h_{[CLS]}$ .", "Finally, we feed the obtained embedding to a linear layer and optimize the whole network using the binary cross-entropy loss.", "Table: Results on the VeriDark authorship datasets: DarkReddit+, Agora, SilkRoad1 and All (the previous three datasets aggregated).", "Unsurprisingly, the best average test metrics are obtained when training on the same dataset.", "Notice the good cross-dataset performance transfer when training on one dataset and testing on the other two.", "Training on All datasets performs slightly worse than training on the same dataset as the test set, but outperforms the cross-dataset strategy, showing that more data can be helpful for making a model more robust across datasets.Table: Performance on the VeriDark datasets when training on the PAN dataset.", "Notice the lower performance compared to the models trained on the VeriDark datasets, shown in Table .Evaluation.", "We evaluate the model as follows: given two texts $S_1$ and $S_2$ , we split each document into chunks of 254 tokens.", "We trim the document with more chunks, such that both documents have the same number of $K$ chunks.", "We then make $K$ pairs of chunks, by picking the chunks with the same indices, as can be seen in Figure REF .", "We follow the training procedure and construct the input sequence by separating the two chunks with [SEP] tokens and prepending them with the [CLS] token.", "Finally, we separately feed the $K$ pairs into the model and obtain the probabilities for the two classes.", "We average the probabilities for the final prediction.", "For a better understanding of the models' performance, we evaluate them using the same metrics as in the PAN Authorship Verification shared tasks [16], namely the $F1$ , $F0.5$ , $C@1$ and $AU\\!ROC$ scores, with $avg.$ being the average over the previously mentioned metrics.", "We look at in-domain vs. 
cross-domain performance for each of the VeriDark datasets in Table REF .", "The best performance on each test dataset is obtained when training on the same dataset, with the exception of Agora, where the results are slightly worse than training on All datasets.", "Cross-domain performance levels when training on the VeriDark datasets are significantly better than the cross-domain results when training on the PAN dataset, as shown in Table REF .", "This suggests that, in the forensics context, it is important to train authorship methods on data related to cybersecurity.", "We trained our models on RTX Titan and RTX 2080 GPUs cards on an internal cluster.", "The estimated time to reproduce all the results is four days.", "Table: Results on the DarkReddit+ Author Identification Dataset.", "Each column denotes the source of the samples comprising the 'Other' class.", "Here, None indicates that the model has been trained on the original dataset only, without texts from other authors.", "The first two rows showcase results on the test set comprised of posts from the DarkReddit+ corpora, while the bottom two rows show results using test posts from ClearReddit; that is, identifying users writing on miscellaneous subreddits based on posts on the /r/darknetmarkets subreddit.", "The first row of each section presents results on classifying texts belonging only to the ten original authors.", "For the second row of each section, texts from the 'Other' class were introduced at test time from the secondary source." ], [ "Author Identification", "Results on the author identification task are featured in Table REF .", "We provide baselines using five training setups and four testing scenarios.", "First, we simply train and evaluate our model on the proposed DarkReddit+ author identification dataset.", "Then, in order to simulate a more realistic scenario, in which a post to be verified doesn't belong to any of the investigated authors, we introduce a new `Other' class.", "For this additional class, we select a number of samples equal to the average number of samples per author.", "These are randomly chosen from a secondary source: either the VeriDark AV corpora or the PAN dataset.", "In Table REF , each column denotes the source of the samples comprising the `Other' class.", "Most notably, evaluations performed on texts from ClearReddit (i.e.", "identifying authors on ClearReddit based on comments from DarkReddit+) are provided.", "We also provide evaluations on texts from the DarkReddit+.", "Finally, regarding the additional `Other' class, we feature results with and without samples belonging to distinct authors from the secondary source.", "All benchmark results come from fine-tuning a BERT classification model on the corresponding corpora.", "Unsurprisingly, as the secondary source becomes closer in terms of style to the original dataset, introducing more communalities in terms of slang and writing patterns, it becomes harder to distinguish the original authors from the rest of the community.", "We can also see how the community plays an important role in the way of writing of an individual.", "When posting on ClearReddit, to a different community, the style of our ten authors changes and it becomes harder to identify them, resulting in a drop of 3-7% in performance.", "We note that our baselines are competitive, while also allowing a lot of room for improvements." 
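For concreteness, the chunked evaluation described above can be sketched with the HuggingFace transformers library as follows. This is a simplified stand-in for our baselines (a two-label classification head with cross-entropy replaces the single-logit binary cross-entropy head, and the helper names are ours); training uses the same [CLS] a [SEP] b [SEP] encoding with randomly sampled 254-token windows.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

def same_author_probability(text_a, text_b, chunk_len=254, max_len=512):
    """Split both texts into 254-token chunks, trim to the same number K of chunks,
    score each aligned chunk pair and average the same-author probabilities."""
    ids_a = tokenizer.encode(text_a, add_special_tokens=False)
    ids_b = tokenizer.encode(text_b, add_special_tokens=False)
    chunks_a = [ids_a[i:i + chunk_len] for i in range(0, len(ids_a), chunk_len)]
    chunks_b = [ids_b[i:i + chunk_len] for i in range(0, len(ids_b), chunk_len)]
    K = min(len(chunks_a), len(chunks_b))          # trim the longer document
    probs = []
    model.eval()
    with torch.no_grad():
        for a, b in zip(chunks_a[:K], chunks_b[:K]):
            # [CLS] a [SEP] b [SEP], padded to max_len, as during training
            enc = tokenizer.prepare_for_model(a, b, max_length=max_len,
                                              padding="max_length", truncation=True)
            batch = {k: torch.tensor(v).unsqueeze(0) for k, v in enc.items()}
            logits = model(**batch).logits
            probs.append(torch.softmax(logits, dim=-1)[0, 1])
    return torch.stack(probs).mean().item()        # averaged same-author probability
```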
], [ "Error Analysis", "We perform a qualitative analysis by inspecting both true positive (same author correctly detected) and false positive (different author pairs predicted as having the same author) predictions.", "We notice in Table REF that the first three examples are wrongly classified as having the same author, which may be due to similar punctuation (multiple `!", "'), same abbreviations (`u'), same writing style (negative feedback) or multiple occurrences of some named entities (drug names).", "The last two same author pairs are correctly classified, which may be due to many shared named entities in the fourth example (vendor name, drug name, country) or similar computer science jargon (encrypted, whonix).", "The shared named entities, while being suggestive of same authors, may represent spurious features, which the model will likely exploit, leading to false positives.", "The importance of mitigating the effect of named entities has also been noted by [3] and [24].", "Since there are many types of named entities that can be removed (vendor names, drug names, usernames, places, etc.", "), we do not perform any such deletion and instead leave this preprocessing decision to be made during model creation.", "Table: Qualitative analysis for several example pairs from the Agora test set.", "Some sensitive words (such as drug names or vendor names) have been censored.", "The first three examples are written by different authors, but predicted as having the same author.", "The first prediction may be triggered by similar punctuation (multiple exclamation marks).", "The second pair shares a similar style of negative feedback.", "The third and fourth same-author pairs share many named entities (drug name, vendor, country), but the third one is misclassified.", "The fifth same-author pair example may be correctly classified due to similar specialized vocabularies (encrypted, whonix)." 
], [ "Limitations and Further Work", "Noisy negative examples.", "One of the underlying assumptions when collecting texts for authorship analysis from different discussion and publishing platforms is that that distinct authors have distinct usernames.", "However this may sometimes not be the case, as people often have multiple accounts and as a result, write under several pseudonyms or usernames.", "These accounts are known as Sybils in the cybersecurity context and belong to either DarkNet vendors or users with multiple identities.", "As a result of this phenomenon, some different author pairs may actually have the wrong label, since they are written by the same author.", "This may hurt the training process, which may distance same author document representations instead of making them more similar.", "The evaluation may also suffer due to a model that may correctly predict same author but will be penalized due to the wrong label different author.", "Noisy positive examples.", "Another related phenomenon may arise from multiple users writing under the same account.", "This leads to examples being labeled as same author when in fact the correct label is different author.", "Such examples may again affect the learning process, which tries to draw closer different author representations instead of distancing them.", "The evaluation performance may decrease from correctly classifying different author pairs, which are mislabeled as being same author.", "Both phenomena are not inherent to our proposed datasets and may also arise in other authorship verification benchmarks, such as PAN, where an author may write fanfictions under multiple pseudonyms, or multiple authors may share the same account.", "However, we acknowledge that these issues may be more prevalent in the Dark Web, due to a more privacy-aware userbase.", "Still, the large-scale nature of our datasets should lessen these effects.", "Benchmark updates.", "We pledge to update the VeriDark benchmark with other DarkNet-related datasets, either by crawling them ourselves or using other available message archives.", "We believe that more DarkNet datasets lead to more robust results across all the datasets, as Table REF suggests.", "Furthermore, having multiple DarkNet domains (from other marketplace forums for instance) can help the author identification problem.", "Specifically, instead of finding authors on the Web, based on their Dark Web activity, we could also find Sybils based on the Dark Web activity only, by putting together target and reference documents from distinct marketplace forums." 
], [ "Broader Impact and Ethical Concerns", "We acknowledge the increased importance of Tor-based privacy especially for certain categories of vulnerable users (journalists, whistleblowers, dissidents, etc.).", "The authorship analysis methods developed using our datasets should help law enforcement agencies tackle illicit activities, but we are aware that they could also be put to use by ill-intentioned third parties and organizations in order to detect at-risk categories or regular people.", "However, we believe that the benefits of such methods could outweigh the potential risks of usage by ill-intended actors.", "Opening up research about authorship analysis may shed light on potential authorship clues that good faith users leave as a trace, therefore allowing them to mitigate such artifacts.", "To limit harm, we restrict our analysis to datasets that are collected from defunct, publicly available discussion platform archives (closed subreddit and marketplace forums).", "We also anonymized usernames and removed potential information leaks such as PGP keys, messages and signatures." ], [ "Conclusions", "In this paper, we release three large-scale authorship verification datasets featuring data from the either Dark Web or Dark Web related discussion forums.", "Firstly, the broad scope of this work is to advance the field of authorship verification, which has been focusing on corpora of mostly literary works.", "Secondly, we want to advance the text-based forensic tools, by enabling methods to be trained on cybersecurity-related corpora.", "We trained BERT-based baselines on our proposed datasets.", "These models proved competitive, while still leaving enough room for further performance improvements.", "We also trained a baseline on the large PAN2020 dataset and showed that it performed worse on the VeriDark benchmark when compared to the baselines trained on either of the VeriDark datasets, highlighting the important need for domain-specific data in the cybersecurity context.", "Upon qualitative analysis of some examples, we revealed potential linguistic features indicative of the same author class (punctuation marks, named entities and shared jargon), which may represent spurious features and should be more carefully handled by future methods.", "We also addressed ethical considerations and the broader impact of our work, by highlighting the potential misuse of our datasets by ill-intended actors instead of law enforcement agencies.", "We tried to limit potential harms by hiding private information and using publicly available discussions platforms that have already been taken down." ] ]
2207.03477
[ [ "Residuals of an Equilibrium Model for the Galaxy Reveal a State of\n Disequilibrium in the Solar Neighborhood" ], [ "Abstract We simultaneously model the gravitational potential and phase space distribution function (DF) of giant stars near the Sun using the {\\it Gaia} DR2 radial velocity catalog.", "We assume that the Galaxy is in equilibrium and is symmetric about both the spin axis of the disk and the Galactic midplane.", "The potential is taken as a sum of terms that nominally represent contributions from the gas disk, stellar disk, bulge, and dark matter halo.", "Our DF model for the giants comprise two components to account for a mix of thin and thick disk stars.", "The DF for each component is described by an analytic function of the energy, the spin angular momentum, and the vertical energy, in accord with Jeans theorem.", "We present model predictions for the radial and vertical forces within $\\sim 2\\,{\\rm kpc}$ of the Sun, highlighting the rotation curve and vertical force profile in the Solar Neighbourhood.", "Finally, we show residuals for star counts in the $R-z$ and $z-v_z$ planes as well as maps of the mean radial and azimuthal velocities in the $z-v_z$ plane.", "Using our model for the potential, we also examine the star count residuals in action-frequency-angle coordinates.", "The {\\it Gaia} phase spirals, velocity arches, some of the known moving groups and bending modes appear as well-defined features in these maps." ], [ "Introduction", "Our understanding of the mass distribution in galaxies invariably comes from models of the gravitational potential, which in turn are informed by observations of kinematic tracers and assumptions about their orbits.", "However, constraints on the mass distribution from tracers tend to be highly non-local, especially for external galaxies.", "For example, a galaxy's rotation curve is governed by the cumulative, roughly spherically-averaged mass distribution.", "By contrast, showed that for the Milky Way one can probe the local gravitation potential and hence local mass distribution by considering the vertical motions of stars in the Solar Neighbourhood.", "The framework he laid out has been the basis for numerous analyses in the intervening years and is known as the Oort Problem.", "In general, stars in Milky Way's disk follow orbits that are determined by the mean Galactic gravitational field rather than close encounters with other stars.", "Their dynamical state is therefore described by a single particle distribution function (DF), which obeys the collisionless Boltzmann equation (CBE) and Poisson's equation .", "In the Oort Problem, one typically assumes that the Galaxy is in dynamical equilibrium and that both the DF and the potential are symmetric about the spin axis and Galactic midplane.", "The analysis then proceeds by one of several methods based on low-order moments of the CBE, namely the continuity and Jeans equations, or the Jeans theorem, which states that a DF for an equilibrium system can be written in terms of isolating integrals of motion.", "Modeling the local potential and DF is technically challenging even when a high degree of symmetry is imposed.", "Approaches based on the Jeans theorem require a third integral of motion since models for the DF based on the two readily available integrals, namely the energy and the spin angular momentum, are inadequate to describe the physics of disk dynamics .", "On the other hand, approaches based on moments of the CBE invariably include only the the mass continuity 
equation (zeroth moment) and Jeans equations (first moment).", "They therefore require additional assumptions to close the system of moment equations.", "A common strategy, which dates back to Oort's original work, is to assume that the local stellar dynamics in the vertical and in-plane directions decouple.", "Over the years, various methods have been devised to exploit this strategy and estimate the systematic errors that arise from the breakdown of its core assumption , , , , , , .", "In (hereafter Paper I) we simultaneously modelled the stellar DF and gravitational potential using kinematic measurements from the radial velocity catalog of Gaia's Second Data Release (hereafter GDR2).", "We included only stars within $\\sim 1.5\\,{\\rm kpc}$ of the midplane whose Galactocentric radii were within $500\\,{\\rm pc}$ of the Solar Circle.", "The analysis in Paper I was strictly one-dimensional: we assumed that the DF for the tracers was a function of the vertical energy and that the potential was a function of $z$ .", "A novel feature of Paper I was that it modelled the full $z-v_z$ DF and potential simultaneously and therefore one could diagnose departures from equilibrium from the model residuals.", "Over the past decade, numerous examples of disequilibrium in vertical structure of the Solar Neighborhood have been uncovered using data from various surveys such as RAVE , SDSS , LAMOST and Gaia .", "For instance, the vertical number count profile showed a distinct pattern of North-South asymmetries of order $\\sim 10\\%$ and with length scales on the order of hundreds of parsecs , , , .", "Similarly, the bulk vertical velocity and velocity dispersion showed evidence for bending and breathing motions of the disk in the direction perpendicular to the midplane , , , .", "Beyond the Solar Circle, the disk also appeared to be corrugated and warped , , , , .", "Perhaps the most intriguing manifestations of disequilibrium in the Solar Neighbourhood are the phase spirals discovered by [1] using data from GDR2.", "These spirals appear in maps of number counts, mean radial velocity and mean vertical velocity as projected onto the vertical phase space, i.e.", "the $z-v_z$ plane.", "While the origin of the phase spirals is still a matter of debate, a promising idea is that they are partially phase-mixed perturbations caused by a disturbance in the disk, which itself may have been due by a passing satellite (see e.g.", ", [1], , , , , , , ).", "In Paper I the number count phase spiral appeared as a residual of the DF.", "Since we also modelled the potential, we were able to transform the residual map to $\\Omega _z-\\theta _z$ coordinates where $\\Omega _z$ is the vertical frequency and $\\theta _z$ is the angle variable associated with the vertical action.", "In these coordinates, the spiral transformed to parallel, diagonal bands whose slope provided an estimate for the age of the perturbation that created it.", "The most significant shortcoming of Paper I was its strict adherence to the 1D approximation.", "To illustrate this problem, we divided our sample into hot and cold sub-populations as defined by the in-plane kinetic energy.", "If the 1D approximation was strictly valid, the two sub-populations would have led to the same inferred potential and force.", "In fact, the inferred vertical force from these two sub-populations differed by $10\\%-30\\%$ .", "In this work, we go beyond the 1D approximation by considering models for the full 6D DF and 3D potential while still retaining the assumptions of 
equilibrium, axisymmetry, and mirror symmetry about the midplane.", "We model the potential as the sum of contributions from a spherical dark matter halo, a point mass bulge, a razor-thin gas disk and a stellar disk.", "Our model for the DF of the tracer population includes two components to account for contributions from thin and thick disks.", "Following , we model each component as an analytic function of the energy and spin angular momentum which are exact integrals of motion, and the vertical energy which is an approximate integral of motion.", "Thus, our model is not exactly in equilibrium.", "The alternative is to use angle-action variables , , but the computational overhead for doing so for the fitting procedure, where ${\\cal O}(10^5)$ realizations of the model must be evaluated, is unfeasible.", "Moreover, the departures from equilibrium due to using the vertical energy are likely swamped by actual departures from equilibrium, as discussed above.", "In fact, we find angle-action variables extremely useful for analysing residuals of the best-fit model.", "We first test our fitting procedure with mock data generated by GalactICS, a code designed to build equilibrium models for disk-bulge-halo systems , , .", "Stars for the mock sample are drawn from the same DF as is used in our statistical model.", "However, the potential in GalactICS is calculated self-consistently from the total mass distribution whereas the potential for our statistical model is a parametric function of $z$ and Galactocentric radius $R$ .", "Thus, the mock data is not generated by the model, which, in some sense, makes it a stronger test of the method.", "Our analysis recovers the radial and vertical forces as well as low-order velocity moments of the DF.", "The fractional residuals in the potential, though only a few percent, are dominated by systematic errors.", "This reflects the fact that the potential for the mock data is not drawn from the model.", "On the other hand, the residuals for the DF are dominated by statistical errors.", "We next fit a sample of $\\sim $ 260 thousand giant stars from GDR2.", "Our results for the circular speed and vertical force in the Solar Neighborhood are consistent with literature values, though the rotation curve we measure is somewhat flatter.", "We examine residuals of star counts and velocity moments of DF in $R-z$ and $z-v_z$ phase-space planes.", "These residuals reveal hints of a coupling between in-plane and radial motions as well as the Gaia phase spirals discovered by [1].", "Finally, we use our best-fit model for the gravitational potential to transform star count residuals to action-frequency-angle coordinates.", "Here, we find sharp features corresponding to the Gaia phase spirals as well as velocity arches first seen in and some of the moving groups identified by .", "This paper is organized as follows: In Section , we present our model for the potential and DF as well as our method for computing and optimizing the likelihood function.", "In Section , we describe our sample selection from GDR2.", "We test our fitting algorithm with mock data in Section and show the results from our GDR2 sample in Section .", "In Section we show the star counts residuals in action-frequency-angle spaces.", "We discuss prospects for improving the method in and conclude with a summary of our results in Section ." 
], [ "Preliminaries", "In this section, we describe our models for the gravitational potential and stellar DF.", "We also outline our fitting procedure including the construction of the likelihood function.", "We work in Galactocentric cylindrical coordinates $(R,\\phi ,z)$ , where $R$ is the in-plane distance to the Galactic Center, $\\phi $ is the azimuthal angle towards the direction of Galactic rotation with the Sun at $\\phi =0$ , and $z$ is the displacement from the midplane.", "Correspondingly, we use $v_R$ , $v_\\phi $ and $v_z$ to denote radial, azimuthal and vertical velocities respectively.", "The distance to the Galactic Center is denoted by $r=\\sqrt{R^2+z^2}$ .", "We take the Solar Circle radius to be $R_0=8.3\\,{\\rm kpc}$ and the Sun's displacement from the midplane to be $z_0=20.3\\,{\\rm pc}$ ." ], [ "Potential", "We model the gravitational potential as the sum of contributions from the gas and stellar disks, the bulge and the dark halo: $\\Psi (R,z)=\\Psi _g(z)+\\Psi _b(r)+\\Psi _d(R,z)+\\Psi _h(r)~.$ We assume that the gas disk is razor thin and has constant surface density $\\Sigma _g$ .", "The potential is then given by $\\Psi _g(z)=2G\\Sigma _g|z|$ where $G$ is the gravitational constant.", "As discussed in Section , stars with $\\left|R-R_0\\right|>2\\,{\\rm kpc}$ and $\\left|z-z_0\\right|<80\\,{\\rm pc}$ are excluded from our sample.", "Therefore, the scale length and scale height of the gas disk are poorly constrained by the model.", "In effect, the potential in Equation REF fixed the scale length to be infinity and the scale height to be zero.", "We take the contribution from the central bulge to be that of a point mass, $\\Psi _b(r)=-\\frac{GM_b}{r}~,$ where $M_b$ is the bulge mass.", "That is, we assume that the mass of the bulge is entirely inside the Solar Circle and that its potential is well-approximated by the monopole term in a spherical harmonics expansion in our region of interest.", "Next, we consider two possible forms for $\\Psi _d(R,z)$ : the Miyamoto-Nagai (MN) potential $\\Psi _{\\rm MN}(R,z)=-\\frac{GM_d}{\\sqrt{R^2+\\left(a+\\sqrt{z^2+h^2}\\right)^2}}$ and the potential $\\Psi _{\\rm ES}(R,z)$ of an exponential-sech-squared disk (ES), which satisfies the Poisson's equation for the density $\\rho _{\\rm ES}(R,z)=\\frac{M_d}{8a^2h}\\exp {\\left(-\\frac{R}{a}\\right)}{\\rm sech}^2 \\frac{z}{2h}~.$ In both models, $M_d$ is the disk mass and $a$ and $h$ are disk scale length and height respectively.", "Finally, we adopt the NFW potential for the halo, $\\Psi _h(r)=-\\frac{4G\\rho _0r_h^3}{r}\\ln {\\left(1+\\frac{r}{r_h}\\right)}~.$ The corresponding density profile is $\\rho _h(r)=\\rho _0\\left(\\frac{r}{r_h}\\right)^{-1}\\left(1+\\frac{r}{r_h}\\right)^{-2}$ where $r_h$ is the halo scale radius and $\\rho _0$ is a scale density.", "In this work, we have found it more convenient to parameterize the halo by the mass inside a sphere of radius $R_0$ , i.e.", "$M_{h\\odot }\\equiv \\int _{0}^{R_0}4\\pi r^2\\rho _h(r){\\rm d}r$ and the halo density at the position of the Sun, i.e.", "$\\rho _{h\\odot }\\equiv \\rho _h(R_0)$ The relationship between $(M_{h\\odot },\\rho _{h\\odot })$ and $(\\rho _0,r_h)$ can be easily obtained by combining Equations REF and REF to give $\\frac{M_{h\\odot }}{4{R_0}^3\\rho _{h\\odot }}=\\varphi \\left(\\frac{r_h}{R_0}\\right)$ where $\\varphi (x)\\equiv (x+1)^2\\ln {\\left(1+\\frac{1}{x}\\right)}-x-1,\\qquad x>0$ is a monotonically decreasing function with limits $\\varphi (0^+)=+\\infty $ and $\\varphi (+\\infty 
)=\\frac{1}{2}$ .", "The monotonicity of $\\varphi (x)$ guarantees that we can numerically solve for $r_h$ from Equation REF and then solve for $\\rho _0$ from the value of $M_{h\\odot }$ or $\\rho _{h\\odot }$ ." ], [ "Distribution function", "As will be discussed in Section , we use a sample of giant stars within $\\sim 2\\,{\\rm kpc}$ of the midplane.", "We assume that the stars come from a multi-component stellar disk that is in dynamical equilibrium and ignore the possibility that there are stars from the stellar halo or a stellar stream mixing into the sample.", "We build the model DF from analytic functions of the spin angular momentum $L_z=Rv_\\phi $ , the vertical energy, $E_z\\equiv \\frac{1}{2} {v_z}^2+\\Psi (R,z)-\\Psi (R,0)~,$ and the in-plane energy $E_p\\equiv \\frac{1}{2}\\left(v_R^2+v_\\phi ^2\\right)+\\Psi (R,0)=E-E_z$ (see and references therein).", "The advantage of this approach is that for a given potential, the three integrals are explicit functions of the phase space coordinates and therefore are easy to compute.", "The main drawback is that while $E_z$ and $E_p$ are conserved to a good approximation for nearly circular orbits, they vary significantly along orbits that make large excursions in $R$ and $z$ .", "Thus, a model that includes a warm disk will be somewhat out of equilibrium.", "We will have more to say about this issue in Section REF .", "We model the stellar DF as the sum of two terms, $f({r},\\,{v})=\\eta f_1({r},\\,\\,{v})+(1-\\eta ) f_2({r},\\,{v})$ to account for thin and thick disk components.", "In this expression, the dimensionless constant $\\eta \\in (0,1)$ controls the relative contributions of the two components.", "For each disk, we write $f_i({r},\\,{v})=\\frac{\\Omega }{\\kappa }\\frac{\\tilde{\\rho }_i}{\\tilde{\\sigma }_{Ri}^2\\tilde{\\sigma }_{zi}}\\exp {\\left(-\\frac{E_p-E_c}{\\tilde{\\sigma }_{Ri}^2}-\\frac{E_z}{\\tilde{\\sigma }_{zi}^2}\\right)}$ where $i=1,2$ .", "Here, $E_c(R_c)=\\Psi (R_c,0)+\\frac{1}{2}v_c^2$ is the energy of a particle on a circular orbit with angular momentum $L_z$ and $R_c$ is the guiding radius which depends implicitly on $L_z$ .", "The angular frequency $\\Omega $ and epicycle frequency $\\kappa $ are well-known functions of $R_c$ (see e.g.", "and also Appendix ).", "By contrast, $\\tilde{\\sigma }_{Ri}$ , $\\tilde{\\sigma }_{zi}$ , and $\\tilde{\\rho }_{di}$ are user-specified functions of $R_c$ , which control the radial and vertical velocity dispersions and density in the midplane.", "Normally, one thinks of these quantities as functions of $R$ .", "The essence of the construction is to use $R_c$ , which is an integral of motion, as a proxy for $R$ , which isn't.", "In this work, we choose the following parametric forms for these quantities: $\\begin{split}\\tilde{\\rho }_{i}(R_c)\\propto &\\exp {\\left(-\\frac{R_c-R_0}{R_{\\rho ,i}}\\right)}\\\\\\tilde{\\sigma }_{Ri}(R_c)=&\\sigma _{Ri,0}\\exp {\\left(-\\frac{R_c-R_0}{R_{\\sigma _R,i}}\\right)}\\\\\\tilde{\\sigma }_{zi}(R_c)=&\\sigma _{zi,0}\\exp {\\left(-\\frac{R_c-R_0}{R_{\\sigma _z,i}}\\right)}\\end{split}$ Note that the density scale parameters $\\rho _{i,0}$ are absorbed into an overall normalization factor (see Section REF ) and $\\eta $ ." 
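A schematic implementation of this two-component DF is sketched below, in (kpc, km/s) units. It is unnormalised, and the potential helpers pot(R, z), pot_mid(R), vcirc(R), Omega(R) and kappa(R) are assumed to be supplied externally (for example, wrapped around the parametric potential of the previous subsection); the function names and argument list are ours.

```python
import numpy as np
from scipy.optimize import brentq

def guiding_radius(Lz, vcirc, R_lo=0.1, R_hi=50.0):
    """Solve R_c * vcirc(R_c) = L_z for the guiding radius."""
    return brentq(lambda R: R * vcirc(R) - Lz, R_lo, R_hi)

def disk_df(R, z, vR, vphi, vz, pot, pot_mid, vcirc, Omega, kappa,
            sigR0, R_sigR, sigz0, R_sigz, R_rho, R0=8.3):
    """Un-normalised single-component DF of the form written above.

    pot(R, z) is the total potential, pot_mid(R) = pot(R, 0); Omega and kappa
    are evaluated at the guiding radius.  The mixture f = eta*f1 + (1-eta)*f2
    is obtained by calling this twice with thin- and thick-disk parameters.
    """
    Lz = R * vphi
    Rc = guiding_radius(Lz, vcirc)
    Ec = pot_mid(Rc) + 0.5 * vcirc(Rc) ** 2                 # circular-orbit energy
    Ep = 0.5 * (vR ** 2 + vphi ** 2) + pot_mid(R)           # in-plane energy
    Ez = 0.5 * vz ** 2 + pot(R, z) - pot_mid(R)             # vertical energy
    x = Rc - R0
    sigR = sigR0 * np.exp(-x / R_sigR)
    sigz = sigz0 * np.exp(-x / R_sigz)
    rho = np.exp(-x / R_rho)                                # scale absorbed elsewhere
    return (Omega(Rc) / kappa(Rc)) * rho / (sigR ** 2 * sigz) \
        * np.exp(-(Ep - Ec) / sigR ** 2 - Ez / sigz ** 2)
```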
], [ "Likelihood function and fitting procedure", "The likelihood function is given by the product of probabilities for individual stars, $\\mathcal {L}=\\prod _{i=1}^{N_s} \\frac{f({r}_i,\\,{v}_i)}{\\mathcal {N}}$ where $N_s$ is the number of stars and $\\mathcal {N}\\equiv \\int f({r},\\,{v})g({r}){\\rm d}^3{r}{\\rm d}^3{v}$ is the normalization factor.", "The function $g({r})$ accounts for our geometrical selection function as described in Section .", "This form of the likelihood function has the attractive feature that it doesn't require binning.", "Details of the calculation of $\\mathcal {N}$ are given in Appendix .", "As discussed in the introduction, exact equilibrium models can be constructed using action-angle variables rather than the explicit integrals of motion $(E_p,\\,L_z,\\,E_z)$ (see e.g.", ", , , , ).", "However, the transformation between the usual phase space coordinates and action-angle variables requires its own set of approximations (see Section REF ).", "Moreover, the calculation of the normalization factor, which involves the geometric selection function $g(r)$ , would require a complicated and computationally expensive six-dimensional Monte Carlo integration.", "Since our fitting algorithm involves ${\\cal O}(10^5)$ evaluations of the likelihood function, this approach seems unfeasible.", "By contrast, several of the integrals for the normalization factor can be done analytically when we work with $(E_p,\\,L_z,\\,E_z)$ , as discussed in Appendix .", "In Table REF we list the model parameters.", "There are eighteen parameters in total: seven for the potential and eleven for the DF.", "Note that $a$ , $h$ , and $M_d$ have different meanings in the ES and MN disk potentials.", "We compute the potential and forces using the Python package AGAMA and optimize the model over the model parameters via the Markov chain Monte Carlo (MCMC) sampler emcee .", "We adopt linear priors for all parameters as listed in Table REF .", "We also require that $M_{h\\odot }>2R_0^3\\rho _{h\\odot }$ to ensure that Equation REF has a valid solution.", "Table: Parameter prior ranges adopted in this work." 
], [ "Moments of distribution function", "As discussed above, one of our goals is to examine residuals of the best-fit model since these may reveal manifestations of departures of the disk from equilibrium.", "In general, it is unfeasible to do this in the full six-dimensional phase space since residuals generally require some sort of binning.", "We therefore set up the machinery to examine moments of the DF in the $R-z$ space or the meridonal plane (subscript “mer\") and the $z-v_z$ space or the vertical phase space plane (subscript “ver\").", "For example, the number densities in these two planes are given by $n_{\\rm mer}(R,z)=\\mathcal {N}_{\\rm mer}\\int f({r},\\,{v})g({r})R{\\rm d}\\phi {\\rm d}^3{v}$ and $n_{\\rm ver}(z,v_z)=\\mathcal {N}_{\\rm ver}\\int f({r},\\,{v})g({r})R{\\rm d}\\phi {\\rm d}R{\\rm d}v_R{\\rm d}v_\\phi $ where ${\\cal N}_{\\rm mer}$ and ${\\cal N}_{\\rm ver}$ are normalization factors such that the model-predicted total number counts matches the data.", "Likewise, the mean azimuthal velocities in these planes are given by $\\langle v_\\phi \\rangle _{\\rm mer}(R,\\,z)=\\frac{\\mathcal {N}_{\\rm mer}}{n_{\\rm mer}(R,z)}\\int v_\\phi f({r},\\,{v})(R{\\rm d}\\phi ){\\rm d}^3{v}$ and $\\langle v_\\phi \\rangle _{\\rm ver}(z,v_z)=\\frac{\\mathcal {N}_{\\rm ver}}{n_{\\rm ver}(z,v_z)}\\int v_\\phi f({r},\\,{v})g({r})R{\\rm d}R{\\rm d}\\phi {\\rm d}v_R{\\rm d}v_\\phi \\\\$ By symmetry, the model predicts $\\langle v_R\\rangle = 0$ in both spaces.", "Finally, we note that $n_{\\rm ver}$ , $\\langle v_R\\rangle _{\\rm ver}$ and $\\langle v_\\phi \\rangle _{\\rm ver}$ correspond to the three views of the Gaia phase spirals in [1].", "The calculation of these moments is discussed in Appendix ." ], [ "Data Selection", "In this section, we describe our sample selection.", "As in Paper I we draw our sample from the catalog gaiaRVdelpeqspdelsp43See https://zenodo.org/record/2557803 for their data .", "This catalog includes stars in GDR2 with complete 6D phase space measurements and corrects for systematic biases in Gaia's parallaxes.", "We use E_dist from the catalog as the distance $d$ from the Sun and take $\\varepsilon _d=\\sqrt{distm2-E\\_dist^2}$ to be its uncertainty, where distm2 is the expectation value of $d^2$ .", "To ensure precision in parallax measurements, we implement the same quality cuts as in Paper I, which mainly follow those recommended by : Photometry: $3<G<14.5$ , $G_{RP}>0$ and $G_{BP}>0$ Radial velocity uncertainty: $\\error _{v_{\\rm rad}}<10\\,{\\rm km/s}$ Parallax uncertainty: $\\error _\\varpi <0.1\\,{\\rm mas}$ and $\\varpi /\\error _\\varpi > 5$ Visibility period: $n_{\\rm vis}>5$ BP/RP flux excess factor range: $1.172<{\\tt bp_rp_excess_factor}<1.3$ Minimum heliocentric distance constraint: $d>80\\,{\\rm pc}$ As discussed in Paper I, we guarantee completeness by selecting stars from a particular region of the color-magnitude diagram.", "The completeness constraint comes mainly from the availability of radial velocity measurements, which are based on stellar spectra and generally available only for the brightest stars.", "We therefore add the following photometry cuts to select giant stars: $B$ minus $R$ color: $G_{BP}-G_{RP}>1$ Absolute $G$ -band magnitude: $M_G<2$ Finally, we remove stars with Galactocentric speed greater than $550\\,{\\rm km/s}$ which is the approximate escape speed of the Galaxy .", "We also remove stars identified by as potentially having large radial velocity errors due to contamination of their spectra by their neighborsSee 
https://arxiv.org/src/1901.10460v1/anc/ for a catalog of these stars.", "We calculate the positions and velocities of stars in our sample using the astropy.coordinates Python package [2], .", "We take the Sun's peculiar velocities to be $(U_\\odot ,\\,V_\\odot ,\\,W_\\odot )=(11.1,\\,12.24,\\,7.25)\\,{\\rm km/s}$ and the rotation speed at Solar Circle to be $v_{c\\odot }=220\\,{\\rm km/s}$ in the calculation.", "To avoid problems with extinction, we exclude stars within $80\\,{\\rm pc}$ of the midplane and those with Galactic latitude $|b|<15^\\circ $ We measure $b$ with regard to the Galactic midplane, instead of the direction from the Sun pointing towards the Galactic Center..", "In addition, we include only those stars in our local patch of the disk by requiring that $|\\phi |<4^\\circ $ , $|R-R_0|<2\\,{\\rm kpc}$ and $|z-z_0|<2\\,{\\rm kpc}$ .", "Heliocentric distances within our sample volume $\\mathcal {V}$ range from $r_{\\rm min}=0.08\\,{\\rm kpc}$ to $r_{\\rm max}=2.901\\,{\\rm kpc}$ from the Sun.", "Thus, given our apparent magnitude cut $3<G<14.5$ , we require an absolute magnitude cut of $\\begin{split}3-5{\\rm log_{10}}\\frac{r_{\\rm min}}{10\\,{\\rm pc}}&=-1.52<M_G<\\\\14.5-5{\\rm log_{10}}\\frac{r_{\\rm max}}{10\\,{\\rm pc}}&=2.19\\end{split}$ to avoid the Malmquist bias.", "If we combine these cuts with the constraint $M_G<2$ for giants, we arrive at an absolute magnitude cut of $-1.52<M_G<2$ , which is the same as was used Paper I.", "The geometrical selection function $g(r)$ is simply the Heaviside function: unity inside ${\\cal V}$ and zero outside.", "The final sample has $\\sim 260$ thousand stars." ], [ "Mock data", "In this section, we describe tests of our method based on mock data.", "The data are created using the code GalactICS, which was designed to generate equilibrium initial conditions for N-body simulations of isolated disk galaxies , , .", "The DF in GalactICS is given by Equation REF , which is the same as the DF in our statistical model.", "However, the potential in GalactICS is calculated from the total density via Poisson's equation whereas our statistical model uses a parametric expression for the potential.", "The working assumption is that this parametric expression is flexible enough to accurately model the “true\" potential (i.e, the potential used to generate the mock data) but there is no a priori guarantee of this.", "As we will see, this leads to systematic errors in the recovery of the potential and force.", "In principle, these errors can be reduced by choosing a more general parametric model for the potential.", "Of course, we expect similar systematic errors in GDR2, which is why we perform the mock tests in this way." ], [ "mock data sample", "We begin by constructing a self-consistent GalactICS model that comprises a thin disk, a thick disk, a bulge, and a dark halo.", "The model is chosen to qualitatively match characteristics of the Milky Way.", "In particular, the thin disk has a mass of $3.7\\times 10^{10}\\,M_\\odot $ , a radial scale length of $2.5\\,{\\rm kpc}$ and a vertical scale height of $300\\,{\\rm pc}$ .", "The corresponding values for the thick disk are $1.2\\times 10^{10}\\,M_\\odot $ , $3.5\\,{\\rm kpc}$ , and $900\\,{\\rm pc}$ .", "We sample the DFs for the two disks with the same geometric cuts that will be applied to the real data and arrive at a final sample size of 250,730, which is very close to that of our GDR2 sample." 
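The coordinate transformation can be reproduced with astropy as sketched below. The frame parameters follow the values quoted above ($R_0=8.3\,{\rm kpc}$, $z_\odot =20.3\,{\rm pc}$, the solar peculiar velocity and $v_{c\odot }=220\,{\rm km/s}$); note that astropy's right-handed Galactocentric frame returns azimuthal quantities with the opposite sign to our convention ($\phi $ increasing toward Galactic rotation), so a sign flip may be required, and the helper below is an illustration rather than our exact pipeline.

```python
import numpy as np
import astropy.units as u
import astropy.coordinates as coord

# Galactocentric frame matching the conventions adopted in this work
v_sun = coord.CartesianDifferential([11.1, 220.0 + 12.24, 7.25] * u.km / u.s)
galcen = coord.Galactocentric(galcen_distance=8.3 * u.kpc,
                              z_sun=20.3 * u.pc,
                              galcen_v_sun=v_sun)

def to_galactocentric(ra, dec, distance, pmra_cosdec, pmdec, rv):
    """Gaia observables (deg, kpc, mas/yr, km/s) to cylindrical R, phi, z and velocities.

    The distance argument is meant to be the bias-corrected E_dist of the catalog."""
    icrs = coord.SkyCoord(ra=ra * u.deg, dec=dec * u.deg,
                          distance=distance * u.kpc,
                          pm_ra_cosdec=pmra_cosdec * u.mas / u.yr,
                          pm_dec=pmdec * u.mas / u.yr,
                          radial_velocity=rv * u.km / u.s)
    gc = icrs.transform_to(galcen)
    x, y, z = gc.cartesian.xyz.to(u.kpc).value
    vx, vy, vz = gc.velocity.d_xyz.to(u.km / u.s).value
    R = np.hypot(x, y)
    phi = np.arctan2(y, x)
    vR = (x * vx + y * vy) / R
    vphi = (x * vy - y * vx) / R          # flip sign to match the paper's convention
    return R, phi, z, vR, vphi, vz
```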
], [ "parameter estimation", "We use the MCMC sampler emcee to map out the posterior probability distribution function (PDF) of the model parameters.", "Our choice of starting values for the walkers is guided by global properties of the Milky Way.", "The chain burns in quickly, as can be verified by inspecting values of each parameter as a function of position along the chain.", "In Figure REF we show two dimensional projections of PDFs for the six parameters associated with the potential excluding $\\Sigma _g$ , because we set $\\Sigma _g=0$ as there is no gas disk in the GalactICS model.", "We include results from both ES and MN models for comparison.", "These figures reveal a number of strong correlations between various pairs of parameters.", "In particular, there is a positive correlation between $M_d$ and $h$ and negative correlations between $M_d$ and both $M_{h\\odot }$ and $\\rho _{h\\odot }$ .", "The $M_d-M_{h\\odot }$ correlation works to keep the total mass interior to the Sun constant.", "On the other hand, the $M_d-h$ correlation works to keep the total density in the Solar Neighborhood roughly constant since an increase in the disk thickness can be compensated by an increase in the disk mass.", "The $M_d-\\rho _{h\\odot }$ correlation also works to keep the density in the Solar Neighborhood roughly constant.", "Note that this correlation is much tighter for the ES disk than the MN one.", "The difference is likely due to the difference in the vertical structure between the two models.", "In the ES model, the mass distribution is more tightly confined to the plane, with an exponential rather than power-law fall off with increasing $|z|$ .", "The conclusion from these entirely expected correlations, is that the data constrain locally rather than globally defined quantities from the potential.", "Inspection of 2D projections for the full 18-dimensional parameter space didn't reveal any further strong correlations.", "Figure: 2D projections of the posterior PDF for the model parameters with ES (top) and MN (bottom) disks.", "We only show projections for the six model parameters associated with the gravitational potential excluding Σ g \\Sigma _g." 
], [ "rotation curve and gravitational force", "In Figure REF , we plot model predictions for the circular speed curve $v_{\\rm circ}$ as determined from the best-fit potential, and the rotation curve $\\overline{v}_\\phi (R)$ as determined from the best-fit DF via Equation REF with $z=0$ .", "We also show the true $v_{\\rm circ}$ , which is derived directly from the potential in GalactICS and the true $\\overline{v}_\\phi (R)$ , which is determined by computing the average azimuthal velocity of mock stars within $50\\,{\\rm pc}$ of the midplane.", "The difference between $v_{\\rm circ}$ and $\\overline{v}_\\phi (R)$ is due to asymmetric drift (see e.g.", ").", "Our model correctly accounts for this difference and recovers both curves to within $\\sim 1\\% - 2\\%$ between $6\\,{\\rm kpc}$ and $10\\,{\\rm kpc}$ .", "Note that the ES and MN models agree remarkably well in their predictions for $\\overline{v}_\\phi (R)$ over this range, which is not surprising since the mean azimuthal velocity near the Sun is directly reflected in the data.", "Figure REF also shows 1$\\sigma $ error ranges that are estimated by sampling potential parameters from the MCMC chain.", "Overall, these error bars under-predict differences between the model and the mock data.", "Indeed, the residuals for the rotation curve are comparable to the difference between ES and MN predictions.", "Thus, we can use the difference between predictions by these two models as an estimate of systematic errors.", "This last point is further illustrated in Figure REF where we compare predictions from the ES and MN models for the vertical and radial forces.", "The estimated statistical errors are a factor of $10\\sim 20$ times smaller than the residuals between either model and the true force, but are comparable to the difference between the models themselves.", "Figure: Inferred force from the analysis of mock data for both ES model (top row) and MN model (middle row).", "We show results of the radial and vertical components (left and right columns, respectively) as a color map in units of ( km /s) 2 / kpc {\\rm (km/s)^2/kpc}.", "We also show 1σ1\\sigma statistical uncertainties from the model (white contours) and the true--model residuals (black contours).", "The bottom row shows the difference between the two models." 
], [ "Distribution function and its moments", "As a further test of our fitting procedure, we compare model predictions for the number counts and velocity moments in the meridonal and vertical phase space planes.", "We first consider the residuals of the number counts scaled by the square root of star counts in each pixel in either plane.", "That is, we compute $\\tilde{\\sigma }_n = \\frac{n_d - n_m}{\\sqrt{n_d}}$ where $n_d$ is the number of stars in a given pixel and $n_m$ is the model prediction for $n_d$ as computed in the center of that pixel using Equation REF or REF .", "The scaled residuals are shown in Figure REF and appear to be random with no evidence for systematic errors that depend on $R$ , $z$ , or $v_z$ .", "In Figure REF , we show the probability densities of $\\tilde{\\sigma }_n$ for the $R-z$ and $z-v_z$ planes and the ES and MN models.", "We find that they are all well-approximated by the standard normal distribution, which lends credence to the contention that the statistical errors from the MCMC analysis properly account for errors in the model.", "Figure: Data -- Model residuals in meridonal and vertical phase space planes scaled by the square root of the star counts in each pixel from data for the R-zR-z.", "The bin size in the meridonal plane is 80 pc ×20 pc 80\\,{\\rm pc}\\times 20\\,{\\rm pc} while the bin size in the z-v z z-v_z is 20 pc ×2.5 km /s20\\,{\\rm pc}\\times 2.5\\,{\\rm km/s}).", "We only show results for the ES model, as the results for the MN model are hardly visually distinguishable.Figure: Probability density as derived from histograms of σ ˜ n \\tilde{\\sigma }_n for the R-zR-z and z-v z z-v_z phase space projections.", "The black dashed line shows the standard normal distribution.Finally, we consider residuals of $\\langle v_R\\rangle _{\\rm ver}$ and $\\langle v_\\phi \\rangle _{\\rm ver}$ in the vertical phase space plane.", "These plots are analogous to those shown by [1] in their discovery paper of the Gaia phase spirals.", "We will show similar versions of these plots when we turn to GDR2 data.", "Note that the model prediction for $\\langle v_R\\rangle _{\\rm ver}$ is zero and therefore the residual of $\\langle v_R\\rangle _{\\rm ver}$ is just the data value.", "The situation with $\\langle v_\\phi \\rangle _{\\rm ver}$ is more complicated since it involves the rotation of the disk and asymmetric drift.", "Furthermore, we expect $\\langle v_\\phi \\rangle _{\\rm ver}$ to depend on the vertical and in-plane energies since $\\sigma _z$ and $\\sigma _R$ both depend on $L_z$ , as seen in Equation REF .", "The importance of this coupling for the nature of the phase spiral was stressed by and .", "Maps of the residuals for $\\langle v_R\\rangle _{\\rm ver}$ and $\\langle v_\\phi \\rangle _{\\rm ver}$ are shown in Figure REF .", "The residuals appear to be randomly distributed with zero mean.", "The increase in the RMS of the residuals as one moves out from the origin comes from the decrease in the number of particles per $z-v_z$ pixel.", "In the $\\Delta \\langle v_\\phi \\rangle _{\\rm ver}$ panel, we plot the residuals for the ES model and overlay contours that show the difference between the ES and MN models In this work, all differences between ES and MN model predicted value are defined as ES predictions minus MN predictions.. 
Evidently, the two models make nearly identical predictions for $\\langle v_\\phi \\rangle _{\\rm ver}$ .", "In summary, the model does an excellent job of recovering the DF.", "Figure: Map of residuals for the mean radial velocity (top panel) and mean azimuthal velocity (bottom panel) in the z-v z z-v_z plane.", "Results are shown for the ES model.", "In the bottom panel, the difference between model predictions for the MN and ES models is indicated by the black contours.", "Pixels with fewer than 10 stars are colored grey." ], [ "Results on GDR2 sample fitting", "In this section, we present our results for the GDR2 data.", "The procedure is the same as was used in our analysis of mock data except that here, the data are presented in celestial coordinates $(\\alpha ,\\delta ,\\varpi ,v_{\\rm los},\\mu _\\alpha ^*=\\dot{\\alpha }\\cos \\delta ,\\mu _\\delta =\\dot{\\delta })$ .", "To account for uncertainties in these coordinates, we generate ten data sets that add in random errors under the assumption that the quoted errors are Gaussian.", "For each of these data sets, we convert celestial coordinates to positions and velocities and apply the selection criteria as described in Section .", "Note that the data sets end up with slightly different numbers of stars.", "We find that the mean sample size is 264,150 stars with a standard deviation of 124.", "All results in this section are derived from the combined parameter chains for the ten data sets." ], [ "parameter estimation", "For the first data set, we use 50 walkers and require 5000 steps before convergence is reached.", "For other nine data sets, the burn-in period is shorter at around 1000 steps since we are able to use results from the first data set as a starting point.", "In Figure REF we show two-dimensional projections of the posterior PDF for the potential parameters.", "The PDF is qualitatively similar to the one we found for the mock data.", "We do not find any strong correlations among the parameters for the DF or between the DF and potential parameters.", "The three strong correlations that have been discussed in Section REF also appear in Figure REF .", "In addition, there is a weak negative correlation between $\\Sigma _g$ and $a$ and a positive correlation between $\\Sigma _g$ and $h$ .", "Recall that our gas disk is assumed to be razor thin and have constant surface density in $R$ .", "Thus, an increase in $\\Sigma _g$ can be compensated by a decrease in $a$ and/or increase in $h$ .", "The best-fit parameters and $1\\sigma $ uncertainties for our two models are presented in Table REF .", "The predictions for the DF parameters from the ES and MN models are strikingly similar.", "Evidently, the data tightly constrain the stellar DF.", "The differences are more pronounced for the potential, which is perhaps not surprising since they assume very different functional forms for the disk contribution.", "We should stress that the model is only sensitive to the total potential and that the gas-disk-bulge-halo decomposition in Equation REF should be interpreted cautiously.", "Nevertheless, we can compare our result for the gas disk with literature values.", "For example, proposed $\\Sigma _g=13.2{\\rm M_\\odot /pc^2}$ , which was adopted by , , as a fixed parameter.", "In this work, we obtain values that are higher by $35\\%\\sim 45\\%$ .", "Table: Best-fit values and 1σ1\\sigma uncertainties from the fitting of the GDR2 sample.Figure: 2D projections of the posterior PDF for parameters from the ES model (top) and MN model 
(bottom).", "Similar as Figure , we only show results for parameters associated with the potential." ], [ "rotation curve and the Oort constants", "In Figure REF , we show model predictions for $v_{\\rm circ}$ and $\\overline{v}_\\phi $ in the midplane for $4\\,{\\rm kpc} < R< 12\\,{\\rm kpc}$ .", "Predictions for $v_{\\rm circ}$ and $\\overline{v}_\\phi $ at the position of the Sun are given in Table REF .", "As we saw with the mock data test, the formal statistical uncertainties for the rotation curve are extremely small.", "Individually, the ES and MN models do not account for systematic errors that arise because they assume restricted functional forms for the potential.", "In short, we are over-fitting the data.", "The consistency of the two models in their predictions of $\\overline{v}_\\phi $ near the Sun indicates that this is the most secure prediction.", "Indeed, the predictions of $\\overline{v}_\\phi $ at the Sun from the two models are within the statistical uncertainties.", "On the other hand, the ES and MN predictions for $v_{\\rm circ}$ at the Solar Circle differ by about a factor of five over the formal $1\\sigma $ statistical uncertainties.", "We take that difference to be an indication of the systematic uncertainties in the model.", "We stress that this difference is only $\\sim 1\\,{\\rm km/s}$ , and both model predictions are consistent with the literature value $219.1\\pm 14.1\\,{\\rm km/s}$ as the average of measurements in , , and .", "Our analysis of the GDR2 data indicates an asymmetric drift of $v_a\\simeq 10\\,{\\rm km/s}$ .", "The mean azimuthal velocity of stars satisfying $|R-R_0|<50\\,{\\rm pc}$ and $|z-z_\\odot |<100\\,{\\rm pc}$ is $212.3\\,{\\rm km/s}$ , which agrees very well with the local measurement of $\\overline{v}_\\phi $ .", "We find that $\\left<v_R^2\\right>=1217\\,{\\rm (km/s)^2}$ .", "Following the arguments in Section 4.8.2 of , we find $v_a = \\left<v_R^2\\right>/\\left(80\\,{\\rm km\\,s}^{-1}\\right) \\simeq \\,15{\\rm km/s}$ for the asymmetric drift in the Solar Neighborhood.", "This is higher than the measured value of $v_a\\simeq 10\\,{\\rm km/s}$ .", "The difference may be due to the fact that our sample excludes stars near the midplane and therefore may overestimate $\\langle v_R^2\\rangle $ or due to systematic differences in other terms in the asymmetric drift formula.", "We can also use our model to predict values for the Oort constants $\\begin{split}A=&\\frac{1}{2}\\left.\\left(\\frac{v_{\\rm circ}}{R}-\\frac{{\\rm d}v_{\\rm circ}}{{\\rm d}R}\\right)\\right|_{R=R_0,\\,z=0}\\\\B=&-\\frac{1}{2}\\left.\\left(\\frac{v_{\\rm circ}}{R}+\\frac{{\\rm d}v_{\\rm circ}}{{\\rm d}R}\\right)\\right|_{R=R_0,\\,z=0}~.\\end{split}$ The Oort constants are derived from the local value for the rotation frequency of the disk and the slope of the rotation curve.", "They can be used to estimate the radial contribution to the Laplacian of the gravitational potential, which in turn can be used to estimate the local matter density from a model of the vertical potential.", "In Paper I, we adopted $A=15.45\\pm 0.34\\,{\\rm km/s/kpc}$ and $B=-12.27\\pm 0.40\\,{\\rm km/s/kpc}$ when deriving our estimate for the local matter density.", "These values were obtained by averaging results from , , , and .", "Here, we derive our own predictions for the Oort constants and present them in Table REF .", "We find a lower value for $A+B$ than the literature average, indicating a flatter rotation curve near the Solar Circle.", "Table: Rotation curve quantities with their 
$1\\sigma $ uncertainties at the Sun's position.", "We include the circular speed, the mean azimuthal velocity, and the Oort constants.", "In this table, $v_{\\rm circ}$ and $\\overline{v}_\\phi $ are given in ${\\rm km/s}$ while $A$ , $B$ , $A+B$ and $A-B$ are given in ${\\rm km/s/kpc}$ ." ], [ "Force and surface density", "In Figure REF , we show model predictions of the radial and vertical components of the force in the $R-z$ plane.", "As was found with the mock data, the formal $1\\sigma $ statistical uncertainties from the model are $\\sim 5$ times smaller than the differences between the predictions from the ES and MN models.", "These differences again reflect the different structures of the two potentials and, in particular, the density fall-off as one moves away from the midplane.", "Nevertheless, the two models agree to within $\\sim 1\\%$ for $F_R$ and $\\sim 5\\%$ for $F_z$ throughout the range of the sample.", "In Figure REF , we plot the vertical force as a function of distance from the midplane at the Solar Circle.", "For both the ES and MN models, the measured vertical force is consistent with literature values.", "Figure: Model predictions for the forces from our GDR2 sample for the ES model (top row) and MN model (middle row).", "Results are shown for the radial (left column) and vertical (right column) components as a color map in units of ${\\rm (km/s)^2/kpc}$ .", "We also show $1\\sigma $ statistical uncertainties from the model as white contours.", "The bottom row shows the difference in the predictions from the ES and MN models.", "These panels provide an estimate of systematic errors.", "Figure: The vertical force in the Solar Neighborhood as predicted by our model.", "Also included are the same literature values as were presented in Figure 9 in Paper I. Grey curves show predictions from 500 random samples of the MCMC chain for the ES model.", "Orange curves show the same for the MN model.", "In the 1D approximation, the surface density is proportional to the vertical force, $\\Sigma _{\\rm 1D}(z)=(2\\pi G)^{-1}F_z(z),$ whereas the true surface density is defined in terms of an integral over the density $\\Sigma _{\\rm true}(z)\\equiv \\int _{-z}^z\\rho (R_0,z^{\\prime }){\\rm d}z^{\\prime }~.$ In Table REF , we present our predictions for both $\\Sigma _{\\rm 1D}$ and $\\Sigma _{\\rm true}$ at $z=0.5,\\,1.0,\\,1.5,\\,2.0\\,{\\rm kpc}$ .", "We see that at the Solar Circle, the deviation of $\\Sigma _{\\rm 1D}$ from $\\Sigma _{\\rm true}$ is $\\lesssim 2\\%$ .", "Table: 1D-approximated surface densities and the true surface densities at the Solar Circle for $z=0.5,\\,1.0,\\,1.5,\\,2.0\\,{\\rm kpc}$ with their $1\\sigma $ uncertainties."
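As an illustration of the gap between $\Sigma_{\rm 1D}$ and $\Sigma_{\rm true}$, the following minimal sketch evaluates both quantities for a Miyamoto-Nagai disk -- the same functional form as the MN disk component -- rather than for the full fitted potential; the mass, scale lengths, and Solar radius used here are illustrative placeholders, not the best-fit values.

```python
import numpy as np
from scipy.integrate import quad

G = 4.30091e-6          # gravitational constant in kpc (km/s)^2 / Msun

# Illustrative Miyamoto-Nagai disk parameters (placeholders, not best-fit values)
M, a, b = 6.0e10, 3.0, 0.3   # Msun, kpc, kpc
R0 = 8.2                     # assumed Solar radius in kpc

def mn_rho(R, z):
    """Miyamoto-Nagai density."""
    zeta = np.sqrt(z ** 2 + b ** 2)
    num = a * R ** 2 + (a + 3.0 * zeta) * (a + zeta) ** 2
    den = (R ** 2 + (a + zeta) ** 2) ** 2.5 * zeta ** 3
    return b ** 2 * M / (4.0 * np.pi) * num / den

def mn_Fz(R, z):
    """Vertical force F_z = -dPsi/dz of the Miyamoto-Nagai potential."""
    zeta = np.sqrt(z ** 2 + b ** 2)
    return -G * M * z * (a + zeta) / ((R ** 2 + (a + zeta) ** 2) ** 1.5 * zeta)

def sigma_1d(z):
    """1D approximation Sigma_1D(z) = |F_z(R0, z)| / (2 pi G), in Msun/pc^2."""
    return abs(mn_Fz(R0, z)) / (2.0 * np.pi * G) * 1e-6   # Msun/kpc^2 -> Msun/pc^2

def sigma_true(z):
    """True surface density: integral of rho(R0, z') over [-z, z], in Msun/pc^2."""
    val, _ = quad(lambda zp: mn_rho(R0, zp), -z, z)
    return val * 1e-6

for z in (0.5, 1.0, 1.5, 2.0):
    print(f"z = {z:.1f} kpc:  Sigma_1D = {sigma_1d(z):6.2f}  "
          f"Sigma_true = {sigma_true(z):6.2f}  Msun/pc^2")
```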
], [ "Moments of the distribution function", "In the upper left panel of Figure REF , we plot the fraction residual in number counts, $\\delta n=\\frac{n_d}{n_m}-1~,$ in the meridonal plane.", "The results are shown for the ES model with contours indicating the (practically negligible) difference between ES and MN models.", "The relatively large residuals reflect significant departures from a plane symmetric model.", "This figure can be compared with results from the analysis of SDSS data by who also found order $\\sim 50\\%$ departures from an equilibrium model.", "In particular, the prominent overdensity around $R\\simeq 9.5\\,{\\rm kpc}$ and $z\\simeq 0.7\\,{\\rm kpc}$ can be seen in their Figure 26.", "The pattern of over and under density predictions that run along the lower edge of the sample region may indicate that the disk is bent to negative $z$ inside the Solar Circle and positive $z$ outside the Solar Circle.", "Figure: Residuals of the number counts and velocity moments.", "We show fractional residuals of number counts in the R-zR-z plane (upper left) and the z-v z z-v_z plane (upper right).Lower panels show data -- model residuals of 〈v R 〉 ver \\langle v_R\\rangle _{\\rm ver} (left) and 〈v φ 〉 ver \\langle v_\\phi \\rangle _{\\rm ver} (right).", "We show results for the ES model.", "Black contours indicate differences between predicted values for the two models.", "Bin sizes are the same in Figures and .", "We assign grey color to all bins where the ten bootstrapped samples combine to give fewer than 10 stars.Figure REF also shows $\\delta n$ as well as data $-$ model residuals for $\\langle v_R\\rangle _{\\rm ver}$ and $\\langle v_\\phi \\rangle _{\\rm ver}$ in the $z-v_z$ plane.", "Recall that by symmetry, the model predictions for $\\langle v_R\\rangle $ are zero so in that panel, we are showing $\\langle v_R\\rangle $ from the data.", "The number counts panel shows the Gaia phase spiral that was discovered by [1].", "In the $\\Delta \\langle v_R\\rangle _{\\rm ver}$ panel, the most striking feature is a variation in bulk radial motion as one circles the phase space at a vertical energy of $300\\,{\\rm (km/s)^2}\\lesssim E_z\\lesssim 1300\\,{\\rm (km/s)^2}$ .", "Since the model is even in $v_R,\\,v_z,$ and $z$ and our geometrical selection is independent of velocity, the variation must be intrinsic to the data.", "The pattern is such that the bulk motion towards the midplane is correlated with motion radially outward.", "At smaller $E_z$ , we find hints of the phase spirals seen in [1].", "We also find hints of the spirals in the $\\Delta \\langle v_\\phi \\rangle _{\\rm ver}$ panel.", "The spirals in the three vertical phase space panels of Figure REF are not as distinct or sharply defined as in [1] or subsequent studies such as , and our own Paper I.", "This is perhaps not surprising since we are considering a broad range in $R$ .", ", , , , and others have shown that the spiral patterns appear sharper if one bins stars according to $R$ , $L_z$ , Galactic azimuth angle $\\phi $ , etc.", "and that the shape of the spiral differs from one region of the Galaxy to another.", "Whether this reflects a change in the characteristics of the disk (e.g., surface density) or the possibility that there have been multiple disturbances in the disk, as suggested by remains an open question." 
], [ "Residuals in frequency-angle space", "It is widely accepted that the Gaia phase spirals are generated by incomplete phase mixing of a perturbation to the disk.", "Phase mixing is easy to understand in 1D if we ignore the back-reaction of the gravitational field generated by the perturbation on the perturbation itself, a.k.a.", "the self-gravity.", "Consider a perturbation that shifts the center of an equilibrium $z-v_z$ distribution by an amount $\\Delta v_z$ so that all stars are given a velocity “kick\".", "Since the vertical potential is generally anharmonic such that the vertical frequency $\\Omega _z$ decreases with $|z|$ , the DF will be sheared into a trailing spiral.", "The process is particularly simple if we plot the evolution of the DF in $\\Omega _z-\\theta _z$ coordinates where $\\theta _z$ is the angle associated with the vertical action.", "The perturbation considered above leads to a ridge of stars centered along a particular value of $\\theta _z$ , say $\\theta _0$ .", "In our example, the ridge runs from the origin in the $z-v_z$ plane along the positive $v_z$ axis.", "Phase mixing then shears the ridge into a diagonal stripe with $\\theta (t) =\\theta _0 + t\\cdot \\Omega _z$ in the $\\Omega _z-\\theta _z$ plane.", "In Paper I, and we used this idea to estimate a perturbation age of $t = 543\\,{\\rm Myr}$ .", "In what follows, we consider the model residuals in angle-frequency-angle coordinates for the full 6D phase space.", "The novel feature of our analysis is that the potential used to accomplish the transformation and the distribution function for the giant stars are fit simultaneously from a single data set." ], [ "angle-action-frequency variables", "Actions and angles are conjugate variables of the Hamiltonian, where actions are exactly conserved in a time-independent system, which isn't true for the corresponding quantities $E_p$ and $E_z$ .", "Furthermore, actions are adiabatic invariants.", "That is, they are constants of motion in time-dependent systems so long as the timescale for the potential to change is long compared to the oscillation periods.", "For a general axisymmetric potential, the azimuthal action $J_\\phi =L_z$ is readily explicit functions of the phase space coordinates and hence easy to determine.", "However, the other action-frequency-angle coordinates are implicit functions of the phase space coordinates and difficult to calculate.", "In what follows, we determine action-angle variables and the associated frequencies using AGAMA, which employs the so-called Stäkel fudge (See and references therein).", "Consider a stellar system with an axisymmetric potential $\\Psi (R,z)$ .", "A Stäckel potential is one in which there exist functions $U(u)$ and $V(v)$ and a parameter $\\Delta $ such that $\\Psi (R,z)=\\frac{U(u)-V(v)}{\\sinh ^2 u+\\sin ^2 v}$ with $R=\\Delta \\sinh u\\sin v,\\qquad z=\\Delta \\cosh u\\cos v$ In a Stäckel potential, stellar orbits are separable in $u$ and $v$ and all three pairs of angle-action variables and the associated frequencies $(J_i,\\,\\theta _i,\\Omega _i)$ ($i=u,\\,v,\\,\\phi $ ) can be calculated exactly, where frequencies are the time derivatives of the associated angles which conserves for any individual star.", "The idea of the Stäckel fudge is to extend this result to general potentials by finding an approximation of the Stäckel form for the gravitational potential over small region of the Galaxy.", "In this way, subscripts $u$ and $v$ will be replaced with $R$ and $z$ .", "In principle, a perturbation to 
the disk will lead to a pattern of diagonal stripes in the $\\Omega _i-\\theta _i$ planes provided the potential is constant after the initial kick and self-gravity is negligible.", "As we will see, the pattern of residuals is far more complex." ], [ "residuals in action-frequency-angle coordinates", "It is straightforward to calculate action-frequency-angle coordinates from ${r}$ and ${v}$ using our best-fit potential since the Stäckel fudge has been implemented for the potentials used in our model within the AGAMA toolbox .", "It is more difficult to calculate model predictions for the number counts in terms of action-frequency-angle variables since this requires computationally expensive calculation of the Jacobians for the transformations.", "In what follows, we use a Monte Carlo approach.", "We first generate 50 million particles uniformly across the 6D phase space with the constraints $|\\phi |<4^\\circ $ , $|R-R_0|<2\\,{\\rm kpc}$ and $|z-z_0|<2\\,{\\rm kpc}$ .", "Each particle is assigned a weight proportional to $f(r,v)g(r)$ , where the normalization constant is determined such that the sum of weights for all particles equals the average number of stars in our 10 bootstrapped data sets.", "We then convert the phase space coordinates to frequencies and angles using AGAMA.", "The model-predicted distribution of stars in the space of any two quantities is then just the weighted two-dimensional histogram of these particles.", "Similarly, we combine our 10 bootstrapped Gaia data sets and assign each star a weight of $0.1$ to arrive at the same quantities for the data.", "In Figure REF , we show the fractional residuals for the number counts in the spaces of all 15 pairs of the six frequency-angle coordinates, as well as in the $J_\\phi -\\sqrt{J_R}$ space, for the ES model.", "We show the same for the MN model in Figure REF .", "The spans of frequencies are a little different, but the residual maps are very similar and reveal a wealth of substructures.", "The diagonal stripes in the $\\Omega _z-\\theta _z$ space are analogous to those seen in the lower panel of Figure 12 in Paper I.", "The slope of the stripes gets shallower with increasing $\\Omega _z$ .", "For the MN model, the slope for $\\Omega _z\\lesssim 45\\,{\\rm Gyr}^{-1}$ is consistent with a perturbation age of $500\\,{\\rm Myr}$ , which is similar to what we found in Paper I.", "For the ES model, though, the slope is clearly smaller.", "For larger $\\Omega _z$ , the slopes for both models are consistent with an age of $\\sim 250\\,{\\rm Myr}$ , though it is difficult to make precise interpretations since the stripes are somewhat disjoint.", "The structures in the $\\Omega _R-\\theta _R$ plane are even more challenging to interpret.", "In particular, it is difficult to find diagonal features that extend across a wide range in $\\theta _R$ , though there are certainly features consistent with a disturbance that occurred $500\\,{\\rm Myr}$ ago for both models.", "The implication is that the perturbation (or multiple perturbations) to the disk has a strong dependence on radial action.", "Finally, we come to the $\\Omega _\\phi -\\theta _\\phi $ plane.", "Here, the difficulty is that we have results across a limited range in $\\phi $ .", "In principle, residuals in this plane should encode some of the variations in the phase spirals as a function of $J_z$ and $\\theta _z$ , as seen in and others.", "We next turn to the $\\Omega _R-\\Omega _\\phi $ plane.", "We see that most of the stars in our sample lie along a narrow ridge in this plane, given
that we don't show pixels with fewer than 5 stars.", "The ridge is roughly defined by the condition $\\frac{4}{3}\\lesssim \\frac{\\Omega _R}{\\Omega _\\phi }\\lesssim \\frac{5}{3}$ , which is as expected since $\\kappa /\\Omega \\simeq \\sqrt{2}$ for a flat rotation curve and small epicyclic motions.", "The more useful phase space projection is the $J_\\phi -\\sqrt{J_R}$ plane, which is shown in the upper right of Figure REF .", "The distribution of stars in this plane has been studied in the context of moving groups by , and the scaling in our figure is chosen to match their Figure 5.", "As those authors note, the appearance of moving groups in their figure is strongly affected by selection effects (see their Figure 2).", "The over-density of particles in Figure REF at $(J_\\phi ,\\,\\sqrt{J_R}) = (1,2)$ in scaled units is likely a combination of the Hyades and Coma Berenices moving groups, whereas the structure extending to higher $\\sqrt{J_R}$ and lower $J_\\phi $ may be the Hercules moving group.", "Finally, there are the near-vertical stripes at higher $\\sqrt{J_R}$ , which are likely connected to the velocity arches discovered by , as discussed in , though the extension of the most prominent over-density to lower $J_R$ may also include stars from the Sirius moving group.", "Figure: Projections of the fractional residuals in star counts in the spaces of all 15 coordinate pairs from the six frequency-angle coordinates for the ES model.", "Note that $\\theta _\\phi $ does not span the entire $(-\\pi ,\\,\\pi )$ range due to our geometrical selection.", "We plot straight lines corresponding to $t=500\\,{\\rm Myr}$ in the $\\Omega _R-\\theta _R$ , $\\Omega _z-\\theta _z$ , and $\\Omega _\\phi -\\theta _\\phi $ panels.", "Also shown are the fractional residuals in the $J_\\phi -\\sqrt{J_R}$ plane.", "The scaling for this panel is chosen to match Figure 5 of .", "Only pixels with at least 5 stars are shown.", "Figure: Same as Figure but for the MN model."
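The Monte Carlo procedure used to build the model histograms in frequency-angle space can be summarized by the following sketch; the DF `df`, the selection function `selection`, and the transform `to_freq_angle` (computed with AGAMA's Stäckel fudge in our analysis) are toy placeholders here, and the particle count is reduced from the 50 million used above.

```python
import numpy as np

rng = np.random.default_rng(1)
N_MC = 1_000_000            # the analysis above uses 5e7; reduced for illustration

# --- toy placeholders for the fitted model and the AGAMA transform ------------
def df(pos, vel):           # unnormalised distribution function f(r, v)
    return np.exp(-0.5 * np.sum((vel / 40.0) ** 2, axis=1))

def selection(pos):         # geometric selection g(r); 1 inside the sample volume
    return np.ones(len(pos))

def to_freq_angle(pos, vel):
    """Stand-in for the Staeckel-fudge transform to (Omega_z, theta_z)."""
    om_z = 70.0 - 20.0 * np.abs(vel[:, 2]) / 150.0          # toy frequency, 1/Gyr
    th_z = np.arctan2(vel[:, 2], 40.0 * pos[:, 2]) % (2.0 * np.pi)
    return om_z, th_z

# 1) sample the 6D selection box uniformly (positions in kpc, velocities in km/s)
pos = rng.uniform(-2.0, 2.0, size=(N_MC, 3))
vel = rng.uniform(-150.0, 150.0, size=(N_MC, 3))

# 2) weight each particle by f(r, v) g(r), normalised to the mean sample size
w = df(pos, vel) * selection(pos)
w *= 264_150 / w.sum()

# 3) weighted model histogram in the Omega_z-theta_z plane; the data histogram
#    is built the same way from the bootstrapped stars with weight 0.1 each
om_z, th_z = to_freq_angle(pos, vel)
edges = (np.linspace(40.0, 80.0, 61), np.linspace(0.0, 2.0 * np.pi, 61))
n_model, _, _ = np.histogram2d(om_z, th_z, bins=edges, weights=w)
```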
], [ "Discussion", "One troubling aspect of our analysis is that the statistical uncertainties appear to under-represent the true uncertainties of the model.", "In our mock data tests, we found this to be about an order of magnitude smaller than the true error.", "The implication is that the model is over-fitting the data, which is perhaps not surprising given that we have measurements for six phase space coordinates for $\\sim 260$ thousand stars and an 18-parameter model.", "More to the point, the functional form of the potential is likely too restrictive.", "This conjecture is supported by the fact that the errors found in our mock data test are comparable to the differences between results from two different choices for the potential.", "For the purposes of this paper, we therefore advocate using this difference to estimate systematic uncertainties.", "We stress that these systematic uncertainties are still impressively small.", "Moving forward, we will consider more flexible forms for the potential, either by adding in additional components or using an expansion in a set of basis functions as in the self-consistent field method.", "Our choice for the DF also deserves further consideration.", "In this paper, we assumed that the stars in the sample could be decomposed into thin and thick components.", "Alternatively, in Paper I, we introduced the rational linear distribution function (RLDF), which described a superposition of components with smoothly varying scale height and velocity dispersion.", "This model correctly predicted the differential surface density profile, that is, surface density in the Solar Neighborhood as a function of vertical velocity dispersion (see Figure 11 of Paper I and Figure 8 of ).", "At present, we do not have a three-integral extension of the RLDF but it may be worthwhile to explore whether one exists.", "In Paper I, we computed the likelihood function by first binning the data in the $z-v_z$ plane and then computing a $\\chi ^2$ -statistic based on number counts in these bins.", "Here, we compute the likelihood function directly from the unbinned data by taking the product of the DF at the measured phase space positions of each of the stars in our sample.", "Binning in the full 6D phase space is clearly unfeasible with $\\sim 260$ thousand stars.", "Even with two orders of magnitude more stars, which will be the case after the Third and Fourth Gaia data releases, the number of stars will be too small to provide statistically meaningful averages on a 6D grid.", "On the other hand, the increase in the number of stars may put the computational cost of our present method out of reach.", "An intermediate approach is to bin stars in the $R-z-v_z$ sub-space of the entire phase space and base the likelihood function on number counts as well as low-order $v_R$ and $v_\\phi $ moments of the DF.", "Perhaps the most interesting result from this work is the complexity of structure seen when residuals in the number count are plotted in terms of frequency-angle or action-angle variables.", "Some of the features in these plots can be identified with known moving groups or the velocity arches discovered by as in .", "We also identify the phase spirals discovered by [1].", "The complexity of these structures suggests that the Solar Neighborhood may have experienced multiple disturbances and the associated phase space perturbations can depend on both the action (or frequency) and angle of individual stars.", "Moreover, self-gravity can also influence the evolution of 
perturbations, as was shown in the case of the phase spirals by .", "Numerical simulations combined with techniques such as the Stäckel fudge will provide an invaluable tool for understanding the complicated dynamics of the Solar Neighborhood , , , , , .", "There is also the question of the interplay between disk perturbations and our attempts to determine the local gravitational potential and dark matter density , , , , ." ], [ "Conclusion", "In this work, we simultaneously determine the DF for a tracer population and the Milky Way gravitational potential within $\\sim 2\\,{\\rm kpc}$ of the Sun using astrometric data for a sample of giant stars from GDR2.", "The results are used to predict the radial and vertical components of the force in the sample region.", "We consider two different models for the contribution of the disk to the gravitational potential and use the difference in predictions from the two models as a means of estimating systematic errors.", "Since the model implicitly includes the effect of asymmetric drift, we are able to make separate predictions for the mean azimuthal velocity curve and the circular speed curve.", "Our value for the circular speed in the Solar Neighborhood is consistent with the literature average, though the rotation curve is somewhat flatter than the curves usually found in the literature.", "Our measured vertical forces are consistent with literature values, while the radial and vertical forces over our entire sample region agree to within $\\sim 1\\%$ and $\\sim 5\\%$ , respectively, between the two models.", "An attractive feature of our method is that the residuals of the DF can be viewed either in the original phase space coordinates or in terms of action-frequency-angle coordinates.", "In the original phase space coordinates, residuals in number counts in the $R-z$ plane reveal complicated substructures that indicate the departure of the Solar Neighborhood from dynamical equilibrium.", "The number-count residuals in the $z-v_z$ plane show spiral-like patterns similar to what was found by [1] and also seen in Paper I, though the patterns here are more disorganized and faint.", "The difference may be due to the fact that we are considering a wider range in Galactocentric radius or that the potential differs from the true one.", "Similar results are obtained for the mean radial and azimuthal velocity components in the $z-v_z$ plane, which also show coupling between the in-plane and vertical motions of stars.", "When plotted in action, frequency, and angle coordinates, the star count residuals reveal a wide range of complicated structures.", "These include diagonal stripes that suggest phase mixing of disturbances $250-500\\,{\\rm Myr}$ ago, moving groups, and velocity arches." ], [ "Data availability", "The Gaia Second Data Release is available at the following website: https://gea.esac.esa.int/archive/.", "All other data used for our work are available through the links posted in the footnotes where necessary.", "We acknowledge the financial support of the Natural Sciences and Engineering Research Council of Canada."
], [ "Normalization factor for DF", "From Equations REF , REF and REF , one can see that $\\mathcal {N}=\\eta \\mathcal {N}_1+(1-\\eta )\\mathcal {N}_2$ where ($i=1,2$ ) $\\mathcal {N}_i\\equiv \\int \\frac{\\Omega }{\\kappa }\\frac{\\tilde{\\rho }g({r})}{\\tilde{\\sigma }_{Ri}^2\\tilde{\\sigma }_{zi}}\\exp {\\left(-\\frac{E_p-E_c}{\\tilde{\\sigma }_{Ri}^2}-\\frac{E_z}{\\tilde{\\sigma }_{zi}^2}\\right)}{\\rm d}^3{r}{\\rm d}^3{v}$ We attempt to calculate this integral in Galactocentric cylindrical coordinates.", "First of all, the only terms that involves $v_z$ and $v_R$ respectively are $E_z$ as Equations REF and $E_p-E_c$ which follows: $E_p-E_c=\\frac{1}{2}v_R^2+\\Psi _{\\rm eff}(R,0)-\\Psi _{\\rm eff}(R_c,0)\\ge 0$ where $\\Psi _{\\rm eff}(R,z)\\equiv \\Psi (R,z)+\\frac{{L_z}^2}{2R^2}$ is the effective potential.", "Integrating these dimensions out, we have $\\begin{split}\\mathcal {N}_i=2&\\int \\frac{\\Omega }{\\kappa }\\frac{\\tilde{\\rho }}{\\tilde{\\sigma }_{Ri}}g({r})\\times \\\\&\\exp {\\left[-\\frac{\\Delta \\Psi _{\\rm eff}(R,R_c)}{\\tilde{\\sigma }_{Ri}^2}-\\frac{\\Delta \\Psi _z(R,z)}{\\tilde{\\sigma }_{zi}^2}\\right]}{\\rm d}^3{r}{\\rm d}v_\\phi \\end{split}$ where $\\begin{split}&\\Delta \\Psi _{\\rm eff}(R,R_c)\\equiv \\Psi _{\\rm eff}(R,0)-\\Psi _{\\rm eff}(R_c,0)\\\\&\\Delta \\Psi _z(R,z)\\equiv \\Psi (R,z)-\\Psi (R,0)\\end{split}$ Then, one should observe that ${\\rm d}^3{r}=R{\\rm d}R{\\rm d}\\phi {\\rm d}z$ and that $g({r})$ is the only term that involves $\\phi $ .", "Integrating the $\\phi $ -dimension out, we have: $\\begin{split}\\mathcal {N}_i=2\\cdot 2&\\int _0^{+\\infty }{\\rm d}R\\int _{-\\infty }^{+\\infty }{\\rm d}z\\tilde{g}(R,z)\\int _0^{+\\infty }\\frac{\\Omega }{\\kappa }\\frac{\\tilde{\\rho }}{\\tilde{\\sigma }_{Ri}}\\times \\\\&\\exp {\\left[-\\frac{\\Delta \\Psi _{\\rm eff}(R,R_c)}{\\tilde{\\sigma }_{Ri}^2}-\\frac{\\Delta \\Psi _z}{\\tilde{\\sigma }_{zi}^2}\\right]}(R{\\rm d}v_\\phi )\\end{split}$ where $\\tilde{g}(R,z)\\equiv \\int _0^{2\\pi } g(r){\\rm d}\\phi $ indicates the effect of geometrical selection in the $R-z$ plane, and the additional factor of two comes from the symmetry of the integrand with regard to $v_\\phi $ .", "For the geometrical selection applied in our work as described in Section , $\\tilde{g}(R,z)=2\\min \\left\\lbrace \\phi _m,\\,\\arccos \\frac{{R_0}^2+R^2-\\left(\\frac{z-z_0}{\\tan 15^\\circ }\\right)^2}{2R_0R}\\right\\rbrace $ if $80\\,{\\rm pc}<|z-z_0|<2\\,{\\rm kpc}$ and $|R-R_0|<\\min \\left\\lbrace 2\\,{\\rm kpc},\\,\\frac{|z-z_0|}{\\tan 15^\\circ }\\right\\rbrace $ Otherwise, $\\tilde{g}(R,z)=0$ .", "To evaluate $\\mathcal {N}_i$ from Equation REF , one needs to write ${\\rm d}v_\\phi $ in terms of $R_c$ assuming constant position coordinates.", "To do this, we first recognize that the epicyclic frequencies as functions of $R_c$ are calculated as $\\Omega (R_c)=\\frac{v_c}{R_c}=\\frac{L_z}{R_c^2}$ and $\\kappa ^2(R_c)=\\frac{\\partial ^2 \\Psi _{\\rm eff}(R_c,0)}{\\partial R_c^2}=\\frac{\\partial ^2\\Psi (R_c,0)}{\\partial R_c^2}+\\frac{3L_z^2}{R_c^4}$ We define the following two derivatives: $D_k\\equiv \\left.\\frac{\\partial ^k \\Psi }{\\partial R^k}\\right|_{R=R_c,\\,z=0},\\qquad k=1,2$ For $k=1$ , given the effective potential as in Equation REF and that $\\left.\\frac{\\partial \\Psi _{\\rm eff}}{\\partial R}\\right|_{R=R_c,\\,z=0}=0$ , we have: $D_1=-\\left.\\frac{{\\rm d}}{{\\rm d}R}\\left(\\frac{{L_z}^2}{2R^2}\\right)\\right|_{R=R_c}=\\frac{{L_z}^2}{{R_c}^3}=R_c\\Omega ^2$ For $k=2$ , Equation REF indicates 
that $D_2=\\kappa ^2-3\\frac{{L_z}^2}{{R_c}^4}=\\kappa ^2-3\\Omega ^2$ Note that $Rv_\\phi =L_z=R_cv_c={R_c}^{\\frac{3}{2}}{D_1}^{\\frac{1}{2}}$ Therefore, $\\begin{split}R\\left(\\frac{\\partial v_\\phi }{\\partial R_c}\\right)_{\\vec{r}}=&\\left(\\frac{3}{2}{R_c}^{\\frac{1}{2}}\\right){D_1}^{\\frac{1}{2}}+{R_c}^{\\frac{3}{2}}\\left(\\frac{1}{2}{D_1}^{-\\frac{1}{2}}D_2\\right)\\\\=&\\frac{3}{2}{R_c}^{\\frac{1}{2}}\\left(R_c\\Omega ^2\\right)^{\\frac{1}{2}}+\\frac{{R_c}^{\\frac{3}{2}}\\left(\\kappa ^2-3\\Omega ^2\\right)}{2\\left(R_c\\Omega ^2\\right)^{\\frac{1}{2}}}\\\\=&\\frac{R_c\\kappa ^2}{2\\Omega }\\end{split}$ Plugging this into Equation REF : $\\begin{split}\\mathcal {N}_i=2&\\int _0^{+\\infty }{\\rm d}R\\int _{-\\infty }^{+\\infty }{\\rm d}z\\tilde{g}(R,z)\\int _0^{+\\infty }\\frac{R_c\\kappa \\tilde{\\rho }_i}{\\tilde{\\sigma }_{Ri}}\\times \\\\&\\exp {\\left[-\\frac{\\Delta \\Psi _{\\rm eff}(R,R_c)}{\\tilde{\\sigma }_{Ri}^2}-\\frac{\\Delta \\Psi _z(R,z)}{\\tilde{\\sigma }_{zi}^2}\\right]}{\\rm d}R_c\\end{split}$" ], [ "Moments of the DF", "For density moments, one should observe from Equations REF , REF and REF that $n_{\\rm mer}(R,z)=&\\mathcal {N}_{\\rm mer}\\left[\\eta \\tilde{n}_{\\rm mer,1}+(1-\\eta )\\tilde{n}_{\\rm mer,2}\\right]\\\\n_{\\rm ver}(z,v_z)=&\\mathcal {N}_{\\rm ver}\\left[\\eta \\tilde{n}_{\\rm ver,1}+(1-\\eta )\\tilde{n}_{\\rm ver,2}\\right]$ where $\\mathcal {N}_{\\rm mer}$ and $\\mathcal {N}_{\\rm ver}$ are normalization factors mentioned in Section REF .", "For each individual disk ($i=1,2$ ): $\\tilde{n}_{{\\rm mer},i}=\\tilde{n}_{{\\rm mer},i}(R,z)=&\\int f_i({r},\\,{v})g({r})(R{\\rm d}\\phi ){\\rm d}^3{v}\\\\\\tilde{n}_{{\\rm ver},i}=\\tilde{n}_{{\\rm ver},i}(z,v_z)=&\\int f_i({r},\\,{v})g({r})(R{\\rm d}R{\\rm d}\\phi ){\\rm d}v_R{\\rm d}v_\\phi $ As for $v_\\phi $ moments, one should observe from Equations REF and REF that $\\langle v_\\phi \\rangle _{\\rm mer}(R,z)=&\\frac{\\eta \\xi _1+(1-\\eta )\\xi _2}{\\eta \\tilde{n}_{\\rm mer,1}+(1-\\eta )\\tilde{n}_{\\rm mer,2}}\\\\\\langle v_\\phi \\rangle _{\\rm ver}(z,v_z)=&\\frac{\\eta \\zeta _1+(1-\\eta )\\zeta _2}{\\eta \\tilde{n}_{\\rm ver,1}+(1-\\eta )\\tilde{n}_{\\rm ver,2}}$ where $\\xi _i=\\xi _i(R,z)=&\\int v_\\phi f_i({r},{v})g({r})(R{\\rm d}\\phi ){\\rm d}^3{v}\\\\\\zeta _i=\\zeta _i(z,v_z)=&\\int v_\\phi f_i({r},{v})g({r})(R{\\rm d}R{\\rm d}\\phi ){\\rm d}v_R{\\rm d}v_\\phi $ To get the profiles we need, we need to evaluate integrals $\\tilde{n}_{{\\rm mer},i}(R,z)$ , $\\tilde{n}_{{\\rm ver},i}(z,v_z)$ , $\\xi _i(R,z)$ and $\\zeta _i(z,v_z)$ .", "For all these integrals, one should integrate $v_R$ and $\\phi $ out in the same way as in Appendix , and plug in $v_\\phi =\\frac{L_z}{R}$ for $v_\\phi $ and Equation REF for $R{\\rm d}v_\\phi $ .", "Then, for $\\tilde{n}_{{\\rm mer},i}(R,z)$ and $\\xi _i(R,z)$ we should further integrate out $v_z$ also in the same way as in Appendix .", "When the smoke clears: $\\begin{split}\\tilde{n}_{{\\rm mer},i}(R,z)=&2\\tilde{g}(R,z)\\int _0^{+\\infty }{\\rm d}R_c \\frac{R_c\\kappa \\tilde{\\rho }_i}{\\tilde{\\sigma }_{Ri}}\\times \\\\& \\exp {\\left[-\\frac{\\Delta \\Psi _{\\rm eff}(R,R_c)}{\\tilde{\\sigma }_{Ri}^2}-\\frac{\\Delta \\Psi _z(R,z)}{\\tilde{\\sigma }_{zi}^2}\\right]}\\end{split}$ $\\begin{split}\\tilde{n}_{{\\rm ver},i}(z,v_z)=&\\sqrt{2}\\int _0^{+\\infty }{\\rm d}R_c \\frac{R_c\\kappa \\tilde{\\rho }_i}{\\tilde{\\sigma }_{Ri}\\tilde{\\sigma }_{zi}}\\int _0^{+\\infty }{\\rm d}R\\tilde{g}(R,z)\\times \\\\&\\exp {\\left[-\\frac{\\Delta \\Psi _{\\rm 
eff}(R,R_c)}{\\tilde{\\sigma }_{Ri}^2}-\\frac{E_z}{\\tilde{\\sigma }_{zi}^2}\\right]}\\end{split}$ $\\begin{split}\\xi _i(R,z)=&\\frac{2\\tilde{g}(R,z)}{R}\\int _0^{+\\infty }{\\rm d}R_c \\frac{L_zR_c\\kappa \\tilde{\\rho }_i}{\\tilde{\\sigma }_{Ri}}\\times \\\\& \\exp {\\left[-\\frac{\\Delta \\Psi _{\\rm eff}(R,R_c)}{\\tilde{\\sigma }_{Ri}^2}-\\frac{\\Delta \\Psi _z(R,z)}{\\tilde{\\sigma }_{zi}^2}\\right]}\\end{split}$ $\\begin{split}\\zeta _i(z,v_z)=&\\sqrt{2}\\int _0^{+\\infty }{\\rm d}R_c \\frac{L_zR_c\\kappa \\tilde{\\rho }_i}{\\tilde{\\sigma }_{Ri}\\tilde{\\sigma }_{zi}}\\int _0^{+\\infty }{\\rm d}R \\frac{\\tilde{g}(R,z)}{R}\\times \\\\&\\exp {\\left[-\\frac{\\Delta \\Psi _{\\rm eff}(R,R_c)}{\\tilde{\\sigma }_{Ri}^2}-\\frac{E_z}{\\tilde{\\sigma }_{zi}^2}\\right]}\\end{split}$ where $E_z=E_z(R,z,v_z)$ follows Equations REF ." ] ]
2207.03516
[ [ "DGraph: A Large-Scale Financial Dataset for Graph Anomaly Detection" ], [ "Abstract Graph Anomaly Detection (GAD) has recently become a hot research spot due to its practicability and theoretical value.", "Since GAD emphasizes the application and the rarity of anomalous samples, enriching the varieties of its datasets is a fundamental work.", "Thus, this paper present DGraph, a real-world dynamic graph in the finance domain.", "DGraph overcomes many limitations of current GAD datasets.", "It contains about 3M nodes, 4M dynamic edges, and 1M ground-truth nodes.", "We provide a comprehensive observation of DGraph, revealing that anomalous nodes and normal nodes generally have different structures, neighbor distribution, and temporal dynamics.", "Moreover, it suggests that those unlabeled nodes are also essential for detecting fraudsters.", "Furthermore, we conduct extensive experiments on DGraph.", "Observation and experiments demonstrate that DGraph is propulsive to advance GAD research and enable in-depth exploration of anomalous nodes." ], [ "Introduction", "Graph data widely presents in various domains and conveys abundant information [35].", "Dozens of efforts have been devoted to graph-related research, including node classification [2], link prediction [32], and graph property prediction [40], etc.", "Among them, Graph Anomaly Detection (GAD) has currently become a hot spot due to its practicability and theoretical value [19], [1].", "Anomalies are a number of nodes, edges and graphs that are distinct from the majority [22].", "In real-world scenarios, anomalies are widespread, damaging, but difficult to detect.", "For example, as reported in 2021, China sees more than 2,700 telecom fraud cases every day, with a loss of nearly RMB 140 million.", "However, less than 5% of these cases have been closed Reported in China News Service.. 
Fraudsters in these cases are typical anomalous nodes in social networks.", "In view of this, GAD aims to detect these anomalies in graphs by utilizing rich graph data together with classic anomaly detection approaches [7], [39].", "Thus, investigating GAD is beneficial and applicable in various real-world scenarios.", "This paper focuses on anomalous node detection because of its representativeness in GAD.", "Since “anomaly” is a domain-specific concept, narrowing the gap between academia and industry is the primary requirement of GAD datasets.", "However, due to the rarity of anomalies in the real world, only a small number of public datasets with both graph structure and anomaly ground truth can be used in GAD research [19], such as Amazon [8], YelpChi [8], and Elliptic [33].", "Thus, enriching the variety of GAD datasets is fundamental work for current GAD research.", "Collecting datasets from domains that are representative but not covered by current works can greatly speed up this process.", "For example, financial fraudster detection [34] is one such domain.", "Meanwhile, current GAD datasets have some limitations, which may create a gap between current GAD research and practical applications.", "Firstly, the temporal dynamics of graphs are ignored by most current GAD datasets, despite being common in the real world [4].", "Secondly, the scale of current GAD datasets falls short of industrial scenarios (which involve more than 1 million nodes) [13].", "For example, commonly used GAD datasets have only 11,944 to 203,769 nodes.", "Last but not least, in most real-world scenarios, not all the nodes in a graph are actually required to be classified/predicted.", "However, removing these nodes discards their abundant information and damages the connectivity of the network structure, much like removing the background knowledge from a complete story.", "Therefore, we term these nodes background nodes and the remaining nodes target nodes.", "However, most current GAD datasets ignore background nodes.", "Figure: The overview of DGraph.", "To enrich the variety of current GAD datasets and overcome their limitations, we propose DGraph, a real-world and large-scale dynamic graph consisting of over 3M nodes and 4M edges.", "DGraph is provided by Finvolution Group.", "It represents a real-world social network in the financial industry.", "A node represents a Finvolution user, and an edge from one user to another means that the user regards the other one as an emergency contact.", "Besides, an anomalous node in DGraph has a practical meaning: a user with overdue (late-repayment) behavior.", "DGraph provides over 1M ground-truth nodes with an extremely unbalanced label distribution, offering a great benefit to the evaluation and advancement of previous GAD studies.", "In addition, DGraph preserves more than 2M background nodes, referring to users who are not detection targets because they lack borrowing behavior.", "These nodes are real-world instances and can effectively promote the understanding of background nodes in social networks.", "Meanwhile, DGraph contains abundant dynamic information which can be utilized for accurate fraudster identification and further exploration of GAD research.", "An illustrative overview of the dataset is shown in Fig. REF .", "We carefully observe DGraph and conduct extensive experiments.", "The results demonstrate that DGraph possesses a variety of novel and promising properties.", "Firstly, observations suggest that anomalous and normal users in DGraph differ in
terms of network structure, the distribution of neighbors' features, and temporal dynamics.", "Comprehensively modeling the abundant information of DGraph is still a challenge for GAD research.", "Besides, observations also demonstrate that background nodes in DGraph are vital for detecting fraudsters.", "DGraph can support and promote the in-depth exploration of background nodes.", "Last but not least, experimental results of 9 popular methods on DGraph reveal that the generalization of current GAD methods is limited.", "DGraph can offer exciting opportunities to advance previous GAD methods.", "In summary, our contributions are as follows: We propose DGraph, a real-world and large-scale dynamic graph from financial scenarios.", "We provide a comprehensive observation of DGraph, which explores its novel and promising properties.", "We conduct extensive experiments on DGraph.", "The results demonstrate that DGraph offers exciting opportunities to advance previous GAD methods.", "Our dataset can be found at: https://dgraph.xinye.com/" ], [ "Related datasets", "Since graph data is widespread, many works have been devoted to graph research [35], [37].", "With the development of graph research, various benchmark datasets have been proposed to support and promote this research.", "These benchmark datasets are associated with many graph-related tasks, such as node classification [38], link prediction [3] and graph property prediction [11].", "Graph anomaly detection (GAD) has recently become a hot research direction due to its practicability and theoretical value.", "Many efforts have been devoted to this topic, aiming to extend GAD to a range of application scenarios [7], [39], [8], [14], for example, detecting fraudsters on financial platforms [17], [24], anti-money laundering in Bitcoin [18], and fake news filtering on social media [8].", "However, ground-truth anomalies are hard to collect because of their rarity.", "Therefore, only Enron [26], Twitter Sybil [9], Disney [27], Amazon [8], Elliptic [33] and YelpChi [8] have both anomaly ground truth and graph structures to date [19].", "However, more than half of them are not suited for anomalous node detection due to their network structure.", "For example, Enron is an email communications dataset, but it has only about 150 users [26].", "Therefore, most anomalous node detection methods use Amazon, YelpChi, and Elliptic to evaluate performance.", "Amazon is constructed from a review dataset provided by Amazon.com [20].", "Its anomalies are reviews with low ratings.", "YelpChi is constructed from a review dataset provided by Yelp.com [25].", "It is worth noting that the labels of YelpChi are not real ground truth, since they are produced by a Yelp review filter with about 90% accuracy [21].", "Besides, Elliptic is a Bitcoin transaction network provided by Elliptic.com, consisting of 203,769 nodes.", "We provide a detailed summary of these datasets in Table REF ." ], [ "Proposed Dataset: DGraph", "DGraph is a dynamic graph that is derived from a real-world finance scenario and linked to a practical application: fraudster detection.", "This section first introduces the background of the raw data behind DGraph.", "Next, we detail the construction process of DGraph based on the raw data.", "Last but not least, we present the online leaderboard of DGraph that is used to track current advancements."
], [ "Raw data", "The raw data of DGraph  is provided by Finvolution Grouphttps://ir.finvgroup.com/, a pioneer in China's online consumer finance industry which has more than 140 million registered consumers.", "DGraph  focuses on the fintech platform of Finvolution, which connects underserved borrowers with financial institutions.", "According to the financial report https://ir.finvgroup.com/Annual-Report, more than 14 million consumers borrowed money by this platform during fiscal year 2021, with a total transaction volume of RMB 137.3 billion.", "The individual borrower must offer a phone number and register an account on Finvolution  in order to utilize this platform.", "Users also need to voluntarily complete a basic personal profile, including age, gender, a description of their financial background, etc., which will be used to determine their loan limit.", "Meanwhile, the emergency contact information is a compulsory requirement.", "Before commencing each new loan application, users are required to offer at least one contact's name and phone number, which must be kept current.", "The platform will evaluate loan requests and determine whether or not to give loans to users.", "In addition, the platform monitors all loans to determine whether users have payed on time and to record the actual repayment date.", "The raw data is compiled from the aforementioned information.", "It is especially emphasized that all raw data are processed through data masking and not to disclose any user privacy Summarily, the raw data for a specific user includes five components: (1) User id.", "(2) Basic personal profile information, such as age, gender, etc.", "(2) Telephone number; note that each account is matched with a specific telephone number.", "(4) Borrowing behavior, which includes the repayment due date and the actual repayment date.", "(5) Emergency contacts, which includes the name, telephone number, and last updating time for each contact." 
], [ "Graph construction", "The emergency contacts in the raw data described above represent a strong connection among users with temporal information, which can reflect a part of their dynamic social relationship.", "Meanwhile, the personal profile can be used to describe a user's basic characteristics.", "Furthermore, the borrowing behavior of users can reveal their inherent property.", "These factors enable us to construct a dynamic graph for the anomaly detection task based on the raw data, namely DGraph.", "DGraph  are constructed in three steps.", "In the first step, we create DGraph's network structure.", "We extract users' personal profiles in the second step to build node features.", "Finally, we label nodes based on their borrowing behaviors.", "After that, we detail each step of the construction process.", "Step 1.", "Building the network.", "First, we gathered all Finvolution  users and their corresponding raw data.", "Next, we select a period of emergency contact records and obtain the user id by matching the telephone number.", "Then, depending on the user id of the contact, we construct the directed dynamic edge between users, which indicates who is his emergency contact at a given time.", "In consideration of privacy, we filter some of the emergency contacts, as they are not Finvolution  users.", "Then, we construct a graph including all users and edges.", "From this graph, we take one weakly connected components, which contains 3,700,550 nodes and 4,300,999 directed edges, and utilize it as the network structure of DGraph.", "The goal of this operation is to maintain the integrity of the network structure.", "To safeguard users' privacy, we record the time mark of the edge with a timestamp that can only reflect the time gap between each edge.", "Step 2.", "Building nodes features.", "The node feature derived from the basic personal profile is a vector with 17 dimensions.", "Each dimension of the node attribute corresponds to a distinct element of the personal profile, such as age and gender.", "To safeguard the privacy of our users, we do not disclose the significance of any dimension.", "Since each element of the user's profile is optional (see Sec.", ".1), numerous node attributes miss values.", "These values are preserved and consistently recorded as “-1”, namely, missing values.", "Step 3.", "Labeling nodes.", "32.2% of the nodes (# 1,225,601) in DGraph  have related borrowing records.", "These nodes are labeled based on their borrowing behavior.", "We define users who exhibit at least one overdue activity (repay the loan after the due date) as anomalies/fraudsters.", "Normal users are individuals who have borrowed money and repaid it on time.", "According to this rule, 15,509 nodes are classified as fraudsters and 1,210,092 nodes as normal users.", "Except for to fraudsters and normal users, Except for fraudsters and normal users, DGraph  comprises 2,474,949 nodes/users (66.8 %) who are registered users but have no borrowing behavior from the platform.", "These nodes are background nodes.", "Due to the lack of borrowing behavior, these nodes are not targets for anomaly detection.", "Nonetheless, these nodes play a crucial role in DGraph's connectivity and it can assist us better identify anomalous nodes (See details in Sec. .3).", "Therefore, they are preserved and labeled as background nodes." 
], [ "Leaderboard", "We provide an online leaderboard for DGraphhttps://dgraph.xinye.com/leaderboards/dgraphfin, with the goal of assisting researchers in keeping track of current methods and evaluating the efficacy of newly proposed methods.", "Furthermore, in June 2022, Finvolution  will host a deep learning competitionhttps://ai.ppdai.com/mirror/goToMirrorDetailSix?mirrorId=28 based on a dataset that is nearly identical to DGraph  except for the time involved.", "DGraph  and its leaderboard will be used as a competition guide, which will benefit DGraph's promotion.", "More researchers will be invited to contribute to this exciting new resource." ], [ "Observation on ", "DGraph  has a small number of anomalous nodes.", "Due to the fact that the characteristics of nodes vary in terms of structure, neighbors, and something else, recognizing and interpreting these anomalous nodes is challenging and difficult.", "In construction, DGraph  preserves two unique properties: missing values and background nodes.", "In this section, we make a preliminary observation of this graph, which can help us better comprehend the proposed graph and provide guidance to the question of how to design and interpret models." ], [ "Overall", "Firstly, we compare DGraph  with commonly-used graphs in GAD.", "Table REF   displays a summary of the findings.", "DGraph  is the largest public dataset in GAD to date.", "Specifically, the number of nodes in DGraph  is 17.1 times greater than that of Elliptic, with over one million ground-truth and the lowest proportion of anomalies.", "Therefore, DGraph  is a challenging GAD dataset, requiring a model to process a large number of labeled samples and detect anomalous nodes on samples with the extreme imbalance.", "Table REF   also show two unique characteristics of DGraph .", "Due to the platform setting (see details in Sec.", ".1), DGraph  naturally contains 49.9 % missing values.", "In addition, DGraph  contains over 2M background nodes, indicating a valuable resource for observing and understanding the function of background nodes in networks.", "Table: Summary of existing datasets for GAD.In which, “AN” means “Anomalous Nodes”, “MV” means “Missing Values”, “BN” means “Background Nodes”, and “-” means not be reported by the literature.", "Note that YelpChi and Amazon are re-constructed datasets based on two reviews dataset: and ." ], [ "Anomalous vs. 
normal", "Fraudsters and normal users generally have distinct graph structures and neighbor characteristics.", "As shown in Fig.", "REF  (a), fraudsters and normal users have similar average in-degrees, but their average out-degrees differ significantly.", "The average out-degree of normal users (1.73) is 2.33 times of the fraudsters' (0.75).", "This result indicates that the graph structure plays a vital role in the detection of fraudsters.", "Next,we define a neighbor similarity metric in neighbors' features to reveal the similarity between a user's features and its' neighbors'.", "The formulation of this metric is $s_i = \\sum _{(i,j)\\in \\mathcal {E}}{\\frac{x_i \\cdot x_j}{|x_i||x_j|}}$ , where $x_i$ represents the features of node $i$ and $\\mathcal {E}$ represents a specific edge set.", "After that, we group nodes according to their labels and calculate the average neighbor similarity on in- and out-edges for each group.", "The result is shown in Fig.", "REF  (b).", "On average, fraudsters have a lower neighbor similarity than normal users on out-edges, with values of 0.242 and 0.324, respectively.", "This result suggests that neighbors features also possess an important trait for detecting fraudsters.", "DGraph  possesses distinctive characteristics that are also worth investigating in fraudsters identification but usually ignored by many GAD datasets.", "The existence of missing values and dynamic edge are two particularities of DGraph, and they are also helpful in detecting fraudsters.", "Fig.", "REF  (c) depicts the proportion of fraudsters and normal users with varying numbers of missing values.", "As a result of the design of node features, the majority of users have 0 or 14 missing values.", "Among them, 41.8% of normal users have no missing values, while only 19.0% of fraudsters have no missing values.", "Consequently, the absence of a value is also a factor that aids in classifying node labels.", "Meanwhile, DGraph  provides the last updating date of each edge, allowing us to investigate users' various temporal characteristics.", "We observe the out-edge frequency of nodes.", "We classify users based on their out-degree and calculate the average number of out-edges added per user per cycle.", "Fig.", "REF  (d) demonstrates the result.", "Higher out-degree nodes have a lower cycle (higher frequency) of adding out-edge, and fraudsters have a lower cycle than normal users with the same out-degree.", "This result suggests that fraudsters are more likely to fill their emergency contact information in a short amount of time and not update it in the future.", "Therefore, in DGraph, fraudsters and normal users have differences in aspects of graph structure, neighbor feature distribution, missing values, and temporal dynamics characteristics.", "In other words, it can comprehensively be used to evaluate the representational capacity of graph models.", "Figure: Observation of fraudsters and normal users.", "(a) shows their difference in degrees.", "(b) shows their difference in neighbors' features.", "(c) shows their difference in the distribution of missing values.", "(d) shows their different temporal frequency of edges." 
], [ "Background node", "The real-world graph is usually massive, redundant, and contains background nodes.", "For example, in MAG240M [12], only 2 million Arxiv papers are concerned with classification among the 121 million papers.", "The remaining 119 million papers are not required for the task, but they are useful for node classification due to their significance in maintaining network connectivity and abundance of semantic information.", "These nodes that are required for classification and prediction are referred to as target nodes, while others are referred to as background nodes.", "DGraph  has a lot of background nodes that represent Finvolution  users who haven't borrowed any money yet, which are ignored by previous GAD datasets.", "These nodes can assist us in investigating the inherent properties of background nodes.", "Although background nodes do not exhibit any borrowing behaviors, there is little distinction between the majority of their features and those of other nodes.", "We sampled 10,000 target and background nodes and illustrated their characteristics using T-SNE [31].", "As shown in Fig.", "REF  (a), about 92% of background nodes are inseparable from other nodes.", "Next, nodes are divided into train-set, validation-set, and test-set with a 6/2/2 split setting.", "Then, we judge whether nodes are background nodes based on their node features using XGBoost[6].", "The test-set f1-score for the model is only 0.826, which is only a 3.1% improvement over the random guess (f1-score is 0.801).", "These results indicate that background nodes are difficult to differentiate based on their features.", "However, these hardly separable nodes play a crucial role in maintaining the graph's connectivity.", "Fig.", "REF  (b) illustrates that the number of weakly connective graph components increases to 605,194, of which 380,490 have a single node after removing background nodes from the DGraph.", "It specifies a vast quantity of target nodes linked by background nodes.", "Meanwhile, the background node contains an abundance of semantic information.", "As shown in Fig.", "REF  (c), about 46.0% of the in-neighbors of anomalous nodes are background nodes, whereas only 31.1% of the out-neighbors are background nodes.", "In contrast, the in-neighbors of normal users have a low ratio of background nodes while the out-neighbors have a high ratio of BN.", "In addition, we observe the role played by the background node in the two-hop relationship.", "As shown in Fig.", "REF  (d) We compare the homophily ratio of various connection relationships.", "we find that 2-hop connection relationship with a background node as intermediate nodes have a higher homophily ratio than others.", "Moreover, homophily ratios of 2-hop connection relationships are greater than that of two directly connected nodes.", "Note the reported ratio are measured by the class insensitive edge homophily ratio[16], as well as two popular graphs for comparison: Ogbn-Arxiv[13] and Actor[29].", "Therefore, it is worthwhile to investigate how to use background nodes in DGraph  to enhance performance.", "In general, background nodes are essential for maintaining the network's connectivity and contain abundant semantic information for detecting fraudsters.", "Due to the fact that BN cannot be easily seperated by node characteristics, end2end models rarely use these nodes automatically (see details in Sec. ).", "In our paper, the utilization of the background node merits investigation." 
], [ "Experiments on ", "DGraph  is a newly-proposed graph for GAD with an extremely low percentage of anomalous nodes.", "It possesses a variety of general characteristics.", "According to the observation, normal users and fraudsters differ in a variety of aspects, such as network structure, temporal dynamics, missing values, and background nodes.", "In this section, we delve deeper into DGraph  via extensive experiments, beginning with three questions: Q1: How powerful are current GAD models on DGraph?", "Q2: How to process missing values of DGraph?", "Q3: How important are DGraph's background nodes?" ], [ "Performance of current models (", "Setup.", "We select 9 advanced methods, including 1 baseline methods: MLPs, 4 general graph methods: Node2Vec [10], GCN [15], SAGE [36], and TGAT [36], and 4 anomaly detection methods: DevNet [23], CARE-GNN [8], PC-GNN [17] and AMNet [5].", "These methods can capture various graph properties, which are summarized in Appendix.", "We randomly divide the nodes of DGraph  into training/validation/test sets with a split setting of 70/15/15, respectively.", "Due to the extreme imbalance of the label distribution, we evaluate performance by AUC (ROC-AUC) and AP (Average Precision).", "See more in Appendix.", "Table: Comparison of AUC and AP achieved by 7 methods based on DGraph .Discussion.", "The results are shown in Table REF , where DGraph  is converted into an undirected one for simplicity.", "First, we observe that MLPs and DevNet that do not utilize any graph information are significantly outperformed by other baselines that utilize both graph information and node features.", "But Node2Vec, which only utilizes graph structure, is surpassed by all others.", "This suggests that both graph information and node features are key factors in detecting fraudsters.", "It is worth noting that most GAD methods can not outperform the general GNNs.", "This result is in contrast to the previous result on Amazon and YelpChi, suggesting that previous methods may overfit on current GAD datasets.", "Therefore, DGraph  can motivate future works to propose more general models.", "Among all compared methods, TGAT achieves the state-of-art performance since it can capture the most range of information, including dynamic information, node features and graph information.", "This result indicates that future GAD methods can take account of more graph properties to make progress.", "In general, DGraph  offers exciting opportunities for the improvement of previous GAD methods and benefits future works." 
], [ "Missing values in ", "According to the observation made in Sec.", ", missing values play a crucial role in detecting anomalous nodes.", "The next problem is how to handle these missing values.", "Since the treatment of missing values in graphs has not yet been broadly discussed by current graph models, and most GAD methods are GNNs based methods, we evaluate whether some commonly used tricks are applicable to GNNs.", "Setup.", "We choose 4 settings to handle missing values, namely, Default, it is the default setting; it replaces missing values with \"-1\".", "Trick A: it involves adding flags and replacing missing values with \"-1\", where the flag is set as \"1\" or \"0\" to indicate whether a dimension's value is missing.", "In other words, if a node's feature is $[null,3]$ , after adding flag, the node's feature will be $[-1,3,1,0]$ , where the last two numbers are flags.", "Trick B: it involves adding flags and replacing missing values with \"0\".", "Trick C: it involves adding flags and imputing missing values by a prediction method: IterativeImputer [30].", "We conduct experiments using MLPs and GCNs whose input node characteristics are processed by these three techniques.", "See more detailed experimental setup in Appendix.", "Discussion.", "The experimental result is shown in Fig.", "REF  (a).", "Tricks of handling missing values bring more notable improvements on GCNs than those on MLPs.", "The average improvement on GCNs is 0.39% of AUC, and that on MLPs is 0.11% of AUC.", "It suggests that handling missing values for GNNs is indeed necessary.", "Meanwhile, compared to other tricks on GCNs, Tricks B  achieves the best improvements.", "This result indicates that carefully choosing a suitable value for GNNs is also required.", "However, generally determining a suitable value is complex because the optimal missing value is task-specific.", "Therefore, how to generally handle missing values on graphs is worth investigating.", "DGraph  provides an opportunity to explore missing values on the graph." 
], [ "Background nodes in ", "Background nodes are another distinguishing characteristic of DGraph.", "Observation reveals that background nodes are difficult to differentiate from other nodes but are necessary for maintaining DGraph's connectivity and offering sufficient semantic information for detecting fraudsters.", "Next, we further investigate how can we utilize background node.", "Removing background nodes.", "We remove a variable proportion of background nodes from DGraph  and feed the remaining graph to GCNs for training and prediction.", "The experimental conditions are identical to those described previously.", "Fig.", "REF  (b) shows the result.", "As the proportion of background nodes being removed increases, the average AUC of GCNs in the testset decreases from 0.76 to 0.72.", "These results once again demonstrate the significance of background nodes.", "It is also worth noting that the time cost of GCNs decreases from 20 to 12 as the proportion of background nodes being removed increases, which indicates a potential direction, which is how to strike a balance between compressing the background nodes to accelerate the model and maintaining the performance.", "Processing background nodes.", "According to the observation, background nodes of DGraph  have abundant semantic information.", "However, since these nodes and target nodes have tiny differences in the node features, automatically identifying background nodes and utilizing their semantic information is a great challenge for end2end models.", "Therefore, we conduct an experiment to investigate how can GNNs utilize background nodes.", "We first add a label indicating whether or not nodes are background nodes into the node features, which is denoted as GCN + Label.", "In addition, we regard the graph as a heterogeneous graph with two types of nodes, the target nodes and the background nodes, and use RGCN [28], a heterogeneous GNNs, to learn the node representation.", "We restrict the number of RGCN parameters to that of GCN.", "As shown in Table REF , GCN + Label achieves a 2.26% improvement over GCN.", "Meanwhile, it is surprising that RGCN has a 4.39% improvement over GCN.", "This result suggests background nodes indeed contains a wealth of semantic information that is ignored by current end2end methods.", "Therefore, investigating the background nodes is also a promising direction to advance current GAD methods.", "Table: Comparison of different methods for processing with background nodes.Discussion.", "These two experimental result indicate the value of backgound nodes.", "DGraph  can be used to explore a general problems: How to generally process background nodes?" 
], [ "Conclusion", "This paper presents DGraph, a real-world dynamic graph in finance domain, with the aim of enriching the variety of GAD datasets and overcoming the limitations of current datasets.", "In the construction of DGraph, we preserve missing values on node features, and those unlabeled nodes which are referred to as background nodes.", "We make a comprehensive observation on DGraph.", "It reveals that anomalous nodes and normal nodes generally have differences on various graph-related characteristic.", "Meanwhile, the importance of missing values and background nodes is covered by observation.", "Furthermore, we conduct abundant experiments on DGraph, and gain many thought-provoking discoveries.", "Compared with general GNNs, most current GAD methods present worse performance.", "It indicates that these GAD methods may overfit on several datasets.", "Meanwhile, results show that handling missing values and processing background nodes is indeed crucial in DGraph.", "It is expected that these discoveries can be extended to more general fields.", "In general, DGraph  overcomes the limitations of current GAD datasets and enriches their varieties.", "We believe DGraph  will become an essential resource for a broad range of GAD research.", "Checklist For all authors... Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?", "See Section  Did you describe the limitations of your work?", "Did you discuss any potential negative societal impacts of your work?", "Have you read the ethics review guidelines and ensured that your paper conforms to them If you are including theoretical results... Did you state the full set of assumptions of all theoretical results?", "Did you include complete proofs of all theoretical results?", "If you ran experiments (e.g.", "for benchmarks)... Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?", "See Section  Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?", "See Appendix Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?", "See Section  Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?", "See Appendix If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...", "If your work uses existing assets, did you cite the creators?", "See Section  Did you mention the license of the assets?", "See Section  Did you include any new assets either in the supplemental material or as a URL?", "Did you discuss whether and how consent was obtained from people whose data you're using/curating?", "See Section  Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?", "See Section  If you used crowdsourcing or conducted research with human subjects... 
Did you include the full text of instructions given to participants and screenshots, if applicable?", "Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?", "Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?", "Appendix Experiment details Data splitting We randomly divide the nodes of DGraph  into training/validation/test sets, with a split of 70/15/15, respectively.", "We fix this split and provide it in the public dataset.", "Methods Baseline methods.", "We select MLPs as the baseline methods.", "Its' input is the node feature.", "General graph models.", "We evaluate 4 general graph models on DGraph, which are Node2Vec, GCN, SAGE, and TGAT.", "Specifically, Node2Vec only utilizes graph structure information.", "GCN and SAGE can utilize both structure information and node features.", "TGAT is a general dynamic GNNs.", "It can handle dynamic edges by a time encoder.", "Anomaly detection methods.", "We evaluate four anomaly detection methods, which are DevNet, CARE-GNN, PC-GNN and AMNet.", "All of them have special components to handle the extreme imbalance of samples.", "Among them, DevNet is similar to MLPs, in which input is only node features.", "Other methods are GNNs-based methods, which can both utilize structure information and node features.", "l Table REF summarize the difference of these methods.", "Setup We optimize each model's hyper-parameters based on their AUC performance on the validation set.", "For all experiments, the number of epochs is set to 1000 except for Node2Vec, where the model is pre-trained for 600 epochs to get the nodes embedding that is further used to train MLPs for 1000 epochs to classify the nodes.", "To evaluate the models, we repeat all the experiments for five runs and take the average performance.", "For anomaly detection methods, we use source code provided by their authors and modify hyper-parameters in accordance with their instructions.", "Since the imbalanced class, we search the class weight of loss function range from [1:1,1:25,1:50,1:100] for general methods, excluding the search of general hyper-parameter settings (such as hidden size).", "We report the AUC and AP for each model on the test set.", "Our experiments are conducted in Python3 on a Dell PowerEdge T640 with 48 CPU cores and 1 Tesla P100 GPU.", "More details can see https://github.com/hxttkl/DGraph_Experiments Table: Summary of selected methods.", "denotes a method have a particular component to handle a specific factor." ], [ "Checklist", " For all authors... Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?", "See Section  Did you describe the limitations of your work?", "Did you discuss any potential negative societal impacts of your work?", "Have you read the ethics review guidelines and ensured that your paper conforms to them If you are including theoretical results... Did you state the full set of assumptions of all theoretical results?", "Did you include complete proofs of all theoretical results?", "If you ran experiments (e.g.", "for benchmarks)... 
Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?", "See Section  Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?", "See Appendix Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?", "See Section  Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?", "See Appendix If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...", "If your work uses existing assets, did you cite the creators?", "See Section  Did you mention the license of the assets?", "See Section  Did you include any new assets either in the supplemental material or as a URL?", "Did you discuss whether and how consent was obtained from people whose data you're using/curating?", "See Section  Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?", "See Section  If you used crowdsourcing or conducted research with human subjects... Did you include the full text of instructions given to participants and screenshots, if applicable?", "Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?", "Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?" ], [ "Data splitting", "We randomly divide the nodes of DGraph  into training/validation/test sets, with a split of 70/15/15, respectively.", "We fix this split and provide it in the public dataset." ], [ "Methods", "Baseline methods.", "We select MLPs as the baseline methods.", "Its' input is the node feature.", "General graph models.", "We evaluate 4 general graph models on DGraph, which are Node2Vec, GCN, SAGE, and TGAT.", "Specifically, Node2Vec only utilizes graph structure information.", "GCN and SAGE can utilize both structure information and node features.", "TGAT is a general dynamic GNNs.", "It can handle dynamic edges by a time encoder.", "Anomaly detection methods.", "We evaluate four anomaly detection methods, which are DevNet, CARE-GNN, PC-GNN and AMNet.", "All of them have special components to handle the extreme imbalance of samples.", "Among them, DevNet is similar to MLPs, in which input is only node features.", "Other methods are GNNs-based methods, which can both utilize structure information and node features.", "l Table REF summarize the difference of these methods." 
], [ "Setup", "We optimize each model's hyper-parameters based on their AUC performance on the validation set.", "For all experiments, the number of epochs is set to 1000 except for Node2Vec, where the model is pre-trained for 600 epochs to get the nodes embedding that is further used to train MLPs for 1000 epochs to classify the nodes.", "To evaluate the models, we repeat all the experiments for five runs and take the average performance.", "For anomaly detection methods, we use source code provided by their authors and modify hyper-parameters in accordance with their instructions.", "Since the imbalanced class, we search the class weight of loss function range from [1:1,1:25,1:50,1:100] for general methods, excluding the search of general hyper-parameter settings (such as hidden size).", "We report the AUC and AP for each model on the test set.", "Our experiments are conducted in Python3 on a Dell PowerEdge T640 with 48 CPU cores and 1 Tesla P100 GPU.", "More details can see https://github.com/hxttkl/DGraph_Experiments Table: Summary of selected methods.", "denotes a method have a particular component to handle a specific factor." ] ]
2207.03579
[ [ "A Comparison of Group Criticality Notions for Simple Games" ], [ "Abstract We analyze two independent efforts to extend the notion of criticality in simple games that measure the influence of a player in conjunction with others: the notion of rank of d-criticality given by Beisbart (2010) and the order of criticality in Dall'Aglio et al.", "(2016) and Aleandri et al.", "(2021).", "The aim is to get elements from both works and define measures of group criticality that take the best of both approaches." ], [ "Introduction", "A single vote can make the difference between the acceptance and the rejection of a proposal and then measurament of voting power is a central issue in voting situations.", "In [10] the authors explore a general setting in which there is a board that makes any decision in favor or against a proposal by vote.", "The power of a voter is identified with voting power measures, such as the probability that a voter is critical.", "The notion of criticality encodes the power of a player of changing the outcome when she votes differently.", "In [4] the author points out that the probability approach may fails to depict the entire voting power of a player and proposes a measure of voting power that goes beyond the probability of criticality.", "In the literature of the last century several power indeces have been proposed, the most famous are the Shapley-Shubik [15] and the Banzhaf [3] indices, but equally important are the indeces introduced by Coleman [5], Degan and Packel [7], Johnston [12], Holler and Packel [11].", "Besides these power indeces other approaches have landed to define semivalues [16] and coalitional values [14].", "As well resumed in [13] power indeces and their extension are studied through either an axiomatic approach or a probabilistic one.", "We follow the probabilistic approach and we analyze two endeavors to extend the notion of criticality in simple games that measure the influence of a player in conjunction with others.", "The first proposal is the notion of rank of $d-$ criticality given by Beisbart [4] and the second one is the order of criticality in Dall'Aglio et al and Aleandri et al [1], [2].", "In the definition given by Beisbart a coalition of players is critical if it has the opportunity to make another coalition winning and to make it losing.", "This opportunity, according to the author, must depend only on the power that the other players concede.", "In the definition given by Dall'Aglio et al and Aleandri et al the criticality of a coalition requires that the players that make it up are essential.", "In this case the \"opportunity test\" may fail, but a player's true impact on coalition formation is grasped.", "In this work we propose a new prospective for the opportunity test and a notion of criticality that encloses all the contributions that a player can give.", "The paper in structures as follow: Section 2 recalls the definitions of the previous works.", "In Section 3 the group essential criticality is introduced.", "In Section 4 the definitions of criticality are compared.", "In Section 5 new measures of agents' power are defined.", "Section 6 is devoted to a new approach at the opportunity test." 
], [ "Previous works", "A simple cooperative game with transferable utility (TU-game) is a pair $(N,v)$ , where $N=\\lbrace 1,2,\\ldots ,n\\rbrace $ denotes the finite set of players and $v:2^n\\rightarrow \\lbrace 0,1\\rbrace $ is the characteristic function, with $v(\\varnothing )=0$ , $v(S)\\le v(T)$ for all $S,T$ subsets of $N$ such that $S\\subseteq T$ and $v(N)=1$ .", "Given a coalition $S \\subseteq N$ , if $v(S)=0$ then $S$ is a losing coalition, while if $v(S)=1$ , then $S$ is a winning coalition.", "Given a winning coalition $S$ , if $S\\setminus \\lbrace i\\rbrace $ is losing, then $i\\in S$ is a critical player for $S$ .", "We let $\\mathcal {W}=\\lbrace S \\subseteq N: v(S)=1\\rbrace $ be the set of winning coalitions.", "Beisbart [4] proposes measures that \"quantify the extent to which a voter can make a difference as a member of a group\".", "First of all the notion of criticality of a coalition is given.", "Definition 2.1 (Definition 3.1 in [4]) Let be $G \\subseteq N$ .", "$G$ is critical wrt to a coalition $S$ , iff $S \\cup G \\in \\mathcal {W}$ and $S \\setminus G \\notin \\mathcal {W}$ .", "If $G$ is critical wrt to $S$ , $G$ is called critical inside (outside) $S$ , iff $S \\in \\mathcal {W}$ ($S \\notin \\mathcal {W}$ ).", "Turning to a measure of the single players, Beisbart provides the following: Definition 2.2 (Defintions 4.1 and 4.2 in [4]) A player $i\\in N$ is d-critical of rank $\\kappa _i^S\\in \\mathbb {N}$ wrt $S \\subseteq N$ iff there exists $G \\subseteq N$ , with $i \\in G$ and $|G|=\\kappa _i^S$ such that $G$ is critical wrt $S$ and $G$ has minimal cardinality: no other coalition $G^{\\prime }$ with $|G^{\\prime }|<\\kappa _i^S$ and $i \\in G^{\\prime }$ is critical wrt $S$ .", "Remark 2.3 Given any coalition $S$ and a critical coalition $G$ then each $i\\in G$ is d-critical of some rank $\\kappa _i^S$ wrt S and for each $i,j\\in G$ we can have $\\kappa _i^S\\ne \\kappa _j^S$ .", "Example 2.4 Take $\\mathcal {W}=\\lbrace \\lbrace 1,3\\rbrace ,\\lbrace 1,2,3\\rbrace \\rbrace $ and $S=\\lbrace 1\\rbrace $ .", "Player 2 is d-critical of rank $\\kappa _2^S=2$ via the coalition $G=\\lbrace 2,3\\rbrace $ .", "We can observe that $3\\in G$ but she is d-critical of rank $\\kappa _3^S=1$ via the coalition $G^{\\prime }=\\lbrace 3\\rbrace $ .", "Finally, player 1 is d-critical of order $\\kappa _1^S=2$ via the coalition $G^{\\prime \\prime }=\\lbrace 1,3\\rbrace $ .", "Based on the above definitions, the author defines a voting power index as the probability of a random coalition to be critical of any given rank, thus extending the classical measurement of power based on the player's solitary effort.", "Beisbart points out that the proposed indices that do not depend on the player's action, but only on the opportunity that the other players offer to the observed player.", "Definition 2.5 A criticality notion passes the opportunity test, iff whenever a player $a \\in N$ is critical wrt $S$ , then, the same player is also critical wrt $S \\cup \\lbrace a\\rbrace $ and $S \\setminus \\lbrace a\\rbrace $ .", "The proposed notion of criticality passes the test for any rank, while another proposal by the same author fails the test, and for this reason is relegated to the work's appendix.", "Definition 2.6 Let $(N,v)$ a TU-cooperative game and fix a coalition $S\\subseteq N$ .", "Let $G$ be critical wrt a $S$ .", "Player $i$ is essential for $G$ being critical (wrt $S$ ), iff $G \\setminus \\lbrace i\\rbrace $ is not critical wrt $S$ .", "A player $i$ is essential 
wrt $S$ if there is at least one critical coalition $G$ (wrt $S$ ) for which it is essential.", "A player $i \\in N$ is e-critical of rank $\\kappa _i^S\\in \\mathbb {N}$ wrt $S$ , iff there is a coalition $G$ with the following properties: $|G|=\\kappa _i^S$ $G$ is critical wrt $S$ $i$ is essential for $G$ being critical wrt $S$ More recently, Dall'Aglio et al.", "in [6], unaware of Beisbart's work, gave another definition of criticality that involves several players.", "This notion considers only the criticality inside $S$ , i.e.", "when the player acts with others to make the coalition $S$ lose.", "Definition 2.7 Let $k\\ge 0$ be an integer, let $S\\subseteq N$ , with $|S|\\ge k+1$ , be a winning coalition.", "We say that a player $i$ is negative In the original work the player was referred to simply as critical.", "We add here the term \"negative\" to distinguish it from the complentary situation described in Definition REF critical of order $k+1$ wrt a coalition $S$ , and write $\\rho ^-(i,S)=k+1$ , if $k$ is the minimum integer such that there exists a coalition $K\\subseteq S\\setminus \\lbrace i\\rbrace $ of cardinality $k$ with $v\\big (S\\setminus K\\big )-v\\big (S\\setminus (K\\cup \\lbrace i\\rbrace )\\big )=1.$ The notion has been further investigated in the context of connection games in Dall'Aglio et al.", "[8], to define monotone indices of power (Dall'Aglio et al.", "[9]), to rank the players according to a lexicographic criterion (Aleandri et al.", "[1]) and as dual definition of the outside criticality when the desirability relations is total.", "(Aleandri et al.", "[2])" ], [ "Essential criticality", "In this section, we build upon the definitions given by Breisbart and by Dall'Aglio et al.", "to come up with a notion of group criticality that draws elements from both sources.", "We then make a comparison between the new proposal and that of d-criticality used by Beisbart.", "We reconsider definition REF and notice that a similar definition can be given for the case when the player acts to turn the coalition $S$ into a winner.", "Definition 3.1 Let $k\\ge 0$ be an integer, let $S\\subseteq N$ , with $|S|\\le n-k-1$ , be a losing coalition.", "We say that a player $i$ is positive critical of order $k+1$ wrt a coalition $S$ , and write $\\rho ^+(i,S)=k+1$ , if $k$ is the minimum integer such that there exists a coalition $K\\subseteq N\\setminus \\left( S\\cup \\lbrace i\\rbrace \\right)$ of cardinality $k$ with $v\\big (S\\cup (K\\cup \\lbrace i\\rbrace )\\big )-v\\big (S\\cup K\\big )=1.$ We now turn to the notion of e-criticality of definition REF and note that it does not require all the players in $G$ to be essential.", "Example 3.2 (continue example REF ) Taking the coalition $G=\\lbrace 2,3\\rbrace $ we observe that players 3 is essential for $G$ being critical, but player 2 is not essential.", "On the other hand, take $S^{\\prime }=\\lbrace 2\\rbrace $ then players 1 and 3 are e-critical of rank 2.", "Consider $G=\\lbrace 1,3\\rbrace $ then $G$ is critical for S and $G\\setminus \\lbrace 1\\rbrace $ , $G\\setminus \\lbrace 3\\rbrace $ are not critical.", "The previous example shows that player 1 and 3 are e-critical of rank 3 also.", "Indeed taking $\\widetilde{G}=\\lbrace 1,2,3\\rbrace $ then $\\widetilde{G}$ is critical for S and $\\widetilde{G}\\setminus \\lbrace 1\\rbrace $ , $\\widetilde{G}\\setminus \\lbrace 3\\rbrace $ are not critical.", "The e-criticality rank of a player does not provide any information about the minimum number of players required to 
make the coalition $G$ critical.", "Moreover a player $i\\in N$ is always d-critical of some order wrt a coalition, while it may fail to be e-critical of any order.", "Example 3.3 (continue example REF ) Take $S=\\lbrace 1,3\\rbrace $ then player 2 is d-critical of rank 2 via coalitions $\\lbrace 2,1\\rbrace $ and $\\lbrace 2,3\\rbrace $ but she is not e-critical of rank 2 because she is not essential because $S$ is winning and player 2 has no role in making it lose.", "The non-essential players in the critical coalition have no effective power, and it is natural to restrict our attention to essential players only.", "Definition 3.4 A coalition $G\\subseteq N$ is called essential critical, or simply essential, wrt a coalition $S\\subseteq N$ iff each agent $i\\in G$ is essential for $G$ being critical wrt $S$ .", "Let us define $\\mathcal {G}_e^S$ the set of all essential coalitions wrt $S$ .", "When essential coalitions $G$ are considered, only two criticality scenarios, for player $i\\in N$ , come into play: $S \\notin \\mathcal {W}$ and $i \\notin S$ .", "In this case $G \\subset S^c$ and we have outside or positive criticality.", "$S \\in \\mathcal {W}$ and $i \\in S$ .", "In this case $G \\subset S$ and we have inside or negative criticality.", "Let us consider the other two cases (a) Suppose $S \\notin \\mathcal {W}$ and $i \\in S$ .", "Then, if a critical coalition $G$ contains $i$ , that player is not essential, together with the whole coalition $G$ .", "In fact, $S\\cup G= S\\cup ( G\\setminus \\lbrace i\\rbrace )\\in \\mathcal {W}$ and $S\\setminus G\\subset S\\setminus (G\\setminus \\lbrace i\\rbrace )\\notin \\mathcal {W}$ .", "(b) Similarly, when $S \\in \\mathcal {W}$ and $i \\notin S$ and a critical coalition $G$ contains $i$ , that player is again not essential and neither is the coalition $G$ .", "Indeed $S\\cup G\\subset S\\cup ( G\\setminus \\lbrace i\\rbrace )\\subset S\\in \\mathcal {W}$ and $S\\setminus (G\\setminus \\lbrace i\\rbrace )=S\\setminus G\\notin \\mathcal {W}$ .", "The essential coalitions form a minimal (in terms of cardinality) cover of the set of all critical coalitions for a given $S$ .", "Proposition 3.5 Let be $S\\subseteq N$ and $G$ a critical coalition wrt $S$ .", "If we remove from $G$ the non-essential players we still have a critical coalitions.", "Moreover any coalition $G^{\\prime }$ such that $G\\subseteq G^{\\prime }$ is critical wrt $S$ .", "Let write $G=E\\cup nE$ , where $E$ is the set of essential players and $nE$ is the set of non essential players.", "By definition of non essential player, for all $i\\in nE$ , the coalition $G\\setminus \\lbrace i\\rbrace $ is critical wrt $S$ .", "The first part of the proposition is proved.", "For the second part it is sufficient to observe that $S\\cup G^{\\prime }\\supseteq S\\cup G\\in \\mathcal {W}$ and $S\\setminus G^{\\prime }\\subseteq S\\setminus G\\notin \\mathcal {W}$ .", "Clearly, essential coalitions may have different cardinality.", "Example 3.6 Let us consider $\\mathcal {W}=\\lbrace \\lbrace 1,2,3\\rbrace ,\\lbrace 1,2,4,5\\rbrace \\rbrace $ and take $S=\\lbrace 1\\rbrace $ then the coalition $\\lbrace 2,3\\rbrace $ and $\\lbrace 2,4,5\\rbrace $ are both essential.", "When essential coalitions are considered we are able to define a notion of criticality that takes into account the fact that each player involved is essential and that in doing so, a minimal number of players are involved.", "Definition 3.7 A player $i\\in N$ is group essential critical, or simply $g$ -critical of rank 
$g_i^S\\in \\mathbb {N}$ wrt $S \\subseteq N$ iff there exists an essential coalition $G \\subseteq N$ of cardinality $|G|= g_i^S$ and containing player $i$ such that no other coalition $G^{\\prime }\\subseteq N$ with $i \\in G^{\\prime }$ and $|G^{\\prime }|<g_i^S$ is essential critical wrt S. Example 3.8 (continue example REF ) Take the coalition $S=\\lbrace 1\\rbrace $ then players 2 and 3 are $g$ -critical of rank 2 and players 4 and 5 are $g$ -critical of rank 3.", "We are now able to create the link between Beisbart's approach and that of Dall'Aglio et al.", "Proposition 3.9 Given any coalition $S\\subseteq N$ then player $i\\in N$ is $g$ -critical of rank $g_i^S$ if and only if player $i$ is either positive or negative critical of order $g_i^S$ .", "If player $i$ is not $g$ -critical of any order then she is not essential.", "This implies that $v(S\\setminus T)-v(S\\setminus (T\\cup \\lbrace i\\rbrace )=0$ for all $T\\subseteq S\\setminus \\lbrace i\\rbrace $ then player $i$ is not critical of any order.", "The converse is straightforward.", "Case 1: Player $i\\in S$ .", "Suppose that player $i$ is $g$ -critical of order $g_i^S>0$ , then there exists an essential critical coalition $G\\subseteq S$ (wrt $S$ ) such that $|G|=g_i^S$ .", "Define $K=G\\setminus \\lbrace i\\rbrace $ then we observe that equation (REF ) holds and $\\rho ^-(i,S)\\le g_i^S$ .", "Now suppose that $\\rho ^-(i,S) < g_i^S$ and there exists $K^{\\prime }\\subset S$ such that $|K^{\\prime }|<|K|$ such that equation REF is satisfied, then by the minimality of $\\rho ^-(i,S)$ the coalition $K^{\\prime }\\cup \\lbrace i\\rbrace $ is essential critical wrt coalition $S$ implying that player $i$ is $g$ -critical of rank $k^{\\prime }+1<g_i^S-1$ .", "Conversely, suppose that player $i$ is negative critical of order $g_i^S>0$ , then there exists a coalition $K\\subseteq S\\setminus \\lbrace i\\rbrace $ such that equation (REF ) is satisfied.", "Define $G=K\\cup \\lbrace i\\rbrace $ , we can observe that $G$ is essential critical wrt $S$ , then player $i$ is $g$ -critical of order less or equal than $g_i^S$ .", "Suppose there exists an essential critical coalition $G^{\\prime }\\ni i$ such that $|G^{\\prime }|<g_i^S$ containing player $i$ .", "This implies that $S\\setminus G\\notin \\mathcal {W}$ and $S\\setminus (G\\setminus \\lbrace i\\rbrace )\\in \\mathcal {W}$ then taking $K^{\\prime }=G^{\\prime }\\setminus \\lbrace i\\rbrace $ equation (REF ) holds and $\\rho ^-(i,S)<g_S^i$ .", "Case 2: The proof is analogue to the Case 1 where in place of $S$ we use $S^c$ ." 
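Since all of the notions above are defined by finite (if exponential) searches over coalitions, they are easy to check on small examples. The following is a brute-force sketch in Python; it assumes the game is handed over as `winning`, the full collection of winning coalitions encoded as frozensets, and that players and coalitions are given as iterables of player labels. By Proposition 3.9, `g_rank` should agree with the order of negative or positive criticality; `d_rank` implements the rank of Definition 2.2 for comparison.

```python
from itertools import combinations

def is_critical(G, S, winning):
    """Definition 2.1: the union of S and G is winning while S minus G is losing."""
    G, S = frozenset(G), frozenset(S)
    return (S | G) in winning and (S - G) not in winning

def is_essential(G, S, winning):
    """Definition 3.4: G is critical wrt S and no single player of G can be dropped."""
    G = frozenset(G)
    return is_critical(G, S, winning) and all(
        not is_critical(G - {i}, S, winning) for i in G)

def d_rank(i, S, winning, N):
    """Rank of d-criticality (Definition 2.2): smallest critical G containing i.
    Always terminates, since G = N is critical wrt every S."""
    S, N = frozenset(S), frozenset(N)
    for size in range(1, len(N) + 1):
        for rest in combinations(N - {i}, size - 1):
            if is_critical(frozenset(rest) | {i}, S, winning):
                return size

def g_rank(i, S, winning, N):
    """Rank of group essential criticality (Definition 3.7); None if i is never essential."""
    N = frozenset(N)
    for size in range(1, len(N) + 1):
        for rest in combinations(N - {i}, size - 1):
            if is_essential(frozenset(rest) | {i}, S, winning):
                return size
    return None

# Example 2.4: W = {{1,3},{1,2,3}} and S = {1}.
W = {frozenset({1, 3}), frozenset({1, 2, 3})}
assert d_rank(2, {1}, W, {1, 2, 3}) == 2 and d_rank(3, {1}, W, {1, 2, 3}) == 1
```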
], [ "A comparison of old and new definitions", "It is natural to juxtapose the notions of $d$ -criticality and $g$ -criticality.", "A group of coalitions plays a special role in this comparison.", "Definition 4.1 A coalition $G$ is minimal essential for $S$ if $G=\\arg \\min \\lbrace |G^{\\prime }|: G^{\\prime }\\in \\mathcal {G}^S_e\\rbrace $ .", "We denote the family of all minimal essential coalitions for $S$ as $\\mathcal {G}^S_{m}$ .", "The players in the minimal essential coalitions play a special role in both notions of $d$ -criticality and $g$ -criticality.", "Definition 4.2 A player $i$ is minimal essential critical, or simply $m$ -critical of rank $m^S$ if there exists $G\\in \\mathcal {G}_m^s$ such that $i\\in G$ and $|G|=m^S$ .", "If a player does not belong to any minimal essential coalition wrt $S$ it is called non minimal.", "We observe that a $g$ -critical player may fail to be $m$ -critical.", "Example 4.3 Consider example REF , player 4 is $g$ -critical of rank 3, but $\\mathcal {G}_m^S=\\lbrace \\lbrace 2,3\\rbrace \\rbrace $ .", "The following result clarifies the importance of essential coalitions of minimal size in $d$ -criticality.", "Proposition 4.4 Given a coalition $S \\subseteq N$ , If player $i \\in N$ is $m$ -critical for $S$ with rank $m_S$ , then $i$ is $d$ -critical and $g$ -critical of the same rank; Otherwise, player $i$ is $d$ -critical of rank $m_S+1$ .", "Denote as $\\kappa ^S_i$ the rank for d-criticality of $i \\in N$ wrt $S$ and suppose $i$ is $m$ -critical with rank $m_S$ .", "Then the minimal essential coalition $\\widetilde{G}$ to which $i$ belongs is critical, therefore $\\kappa ^S_i \\le m_S$ .", "The inequality cannot be strict, otherwise we would be able to extract a minimal essential coalition with smaller cardinality than $m_S$ from the critical coalition that defines the rank of $d$ -criticality.", "Since $\\widetilde{G}$ is essential, a similar argument shows that player $i$ is also $g$ -critical of the same rank.", "For any other player $j \\in N$ , consider the coalition $\\widehat{G}=G_m \\cup \\lbrace i\\rbrace $ with $G_m \\in \\mathcal {G}^S_m$ .", "Now $\\widehat{G}$ is critical for $S$ , with $|\\widehat{G}|=m_S+1$ .", "No smaller coalition containing $j$ does the same, otherwise the coalition would be minimal and we would fall in the previous case.", "Player $j$ is therefore $d$ -critical of rank $m_s+1$ .", "Definition 4.5 A player $i\\in N$ is a free rider for $d$ -criticality if either $i)$ $i$ is not g-critical of any rank or $ii)$ its rank for $g$ -critical is higher than that of $d$ -criticality.", "Roughly speaking a free rider is a player that can be d-critical without being essential for the corresponding critical coalition, gaining a rank that does not reflect the real impact on coalition formation.", "The following simple example helps to identify the free riders.", "Example 4.6 Take $\\mathcal {W}_{\\min } =\\lbrace \\lbrace 1,2,3\\rbrace ,\\lbrace 3,4,5\\rbrace ,\\lbrace 4,5,6,7\\rbrace \\rbrace $ and $S=\\lbrace 1\\rbrace $ .", "Here $\\lbrace 2,3\\rbrace $ is the only minimal essential coalition so 2 and 3 are $m$ -critical, $d$ -critical and $g$ -critical of rank 2.", "Players 4,5 are $d$ -critical and $g$ critical of rank 3.", "Finally players 6 and 7 are $g$ -critical of rank 4, but $d$ -critical of rank 3, so they are free riders.", "Note player 4 may form an essential coalition with 3 and 5 or may join the minimal essential coalition $\\lbrace 2,3\\rbrace $ , but this will not lower her rank.", "A similar 
argument occurs for player 5.", "The previous example should convince the reader that free riding occurs when the rank for $g$ criticality is at least two units above the minimal rank." ], [ "A comparison of the properties", "Besibart has defined $d$ -criticality with the purpose of creating a power index that measures the players' influence in conjunction with others and satisfy the opportunity test of Definition REF .", "Clearly, $g$ -criticality does not pass the same test.", "Take for instance $S =\\lbrace 1\\rbrace $ in Example REF .", "While player 2 is $d$ -critical of rank 2 for both $S \\setminus \\lbrace 2\\rbrace =S$ and $S \\cup \\lbrace 2\\rbrace =\\lbrace 1,2\\rbrace $ , the same player is $g$ -critical of the same rank for $S$ only, while it is inessential for $S \\cup \\lbrace 2\\rbrace $ , and therefore not $g$ -critical of any order.", "This is not a coincidence and it takes place whenever a player is critical with the help of others.", "Proposition 4.7 Suppose player $i \\in N$ is $d$ -critical of rank 2 or higher wrt $S$ .", "Then $i$ is inessential for at least one of the critical coalitions that define the rank of $d$ -criticality in $S \\setminus \\lbrace i\\rbrace $ and $S \\cup \\lbrace i\\rbrace $ .", "If and $i \\notin S \\in \\mathcal {W}$ , then $i$ is always inessential for the critical coalition that turns $S=S\\setminus \\lbrace i\\rbrace $ into a losing coalition.", "Similar is the case when $i \\in S \\notin \\mathcal {W}$ , where $i$ is not essential for $S=S \\cup \\lbrace i\\rbrace $ .", "Consider now $i \\in S \\in \\mathcal {W}$ .", "Since $i$ is $d$ -critical with rank greater than 1, then $S \\setminus \\lbrace i\\rbrace \\in \\mathcal {W}$ , with $i$ not essential for the critical coalition that turns $S\\setminus \\lbrace i\\rbrace $ into a losing one.", "The case $a \\notin S \\notin \\mathcal {W}$ is treated similarly.", "When a group of players is involved, $d$ -criticality passes the opportunity test, but the price to pay is make inessential players critical in at least half of the cases where higher-order criticality is recorded.", "The fraction of cases may be higher than 1/2 since a player may be inessential for both $S\\setminus \\lbrace i\\rbrace $ and $S\\cup \\lbrace i\\rbrace $ .", "Consider for instance player 6 in Example REF that is inessential for both $S\\setminus \\lbrace 6\\rbrace $ and $S\\cup \\lbrace 6\\rbrace $ when $S=\\lbrace 1\\rbrace $ .", "The notion of $d$ -criticality suffers from other problems – recognized by Beisbart himself.", "These are easily overcome when $g$ -criticality is considered.", "First of all, it is acknowledged in [4] (p.478) that a straightforward relationship between $d$ -criticality and minimal winning coalitions (or minimal blocking ones $\\mathcal {B}_{\\min }$Beisbart actually mentions the collection of maximal losing coalitions, which are, set by set, the complements of the minimal blocking coalitions) cannot be found.", "Turning to the newly defined notion of criticality, we show a straightforward method to derive the rank of $g$ -criticality wrt to any coalition from the collection of minimal winning coalitions $\\mathcal {W}_{\\min }$ or the dual notions of minimal blocking ones $\\mathcal {C} \\subseteq 2^N$ and $S \\subseteq N$ we define $\\mathcal {C} \\setminus S =\\left\\lbrace C \\setminus S:C \\in \\mathcal {C} \\right\\rbrace $ , and, for any player $i$ , $\\left(\\mathcal {C}\\right)_i= \\left\\lbrace C: i \\in C \\in \\mathcal {C} \\right\\rbrace $ .", "Proposition 4.8 Take $S 
\\notin \\mathcal {W}$ , then player $i$ is $g$ -critical if $\\left(\\mathcal {W}_{\\min } \\setminus S \\right)_i$ is non-empty with the rank given by the minimal cardinality of the sets in $\\left(\\mathcal {W}_{\\min } \\setminus S \\right)_i$ .", "Take $S \\in \\mathcal {W}$ , player $i$ is $g$ -critical if $\\left(\\mathcal {B}_{\\min } \\setminus S^c \\right)_i$ is non-empty with the rank given by the minimal cardinality of the sets in $\\left(\\mathcal {B}_{\\min } \\setminus S^c \\right)_i$ .", "The proof is straightforward observing that if $S \\notin \\mathcal {W}$ and $\\left(\\mathcal {W}_{\\min } \\setminus S \\right)_i$ is not empty we can construct the essential critical coalition starting from an element of this set.", "The same arguments holds if $S \\in \\mathcal {W}$ and $\\left(\\mathcal {B}_{\\min } \\setminus S^c \\right)_i$ is non-empty.", "Example 4.9 Consider 5 players and $\\mathcal {W}_{\\min }=\\lbrace \\lbrace 1,2,3\\rbrace ,\\lbrace 3,4,5\\rbrace ,\\lbrace 4,5,6,7\\rbrace \\rbrace $ and take $S=\\lbrace 1\\rbrace \\notin \\mathcal {W}$ .", "Now $\\mathcal {W}_{\\min } \\setminus S = \\lbrace \\lbrace 2,3\\rbrace ,\\lbrace 3,4,5\\rbrace ,\\lbrace 4,5,6,7\\rbrace \\rbrace $ , so players 2 and 3 are $g$ -critical of rank 2, players 4 and 5 are $g$ -critical of rank 3 and players 6 and 7 are $g$ -critical of rank 4.", "If $S=\\lbrace 3,4,5,6,7\\rbrace \\in \\mathcal {W}$ , $\\mathcal {B}_{\\min }=\\lbrace \\lbrace 1,4\\rbrace ,\\lbrace 1,5\\rbrace ,\\lbrace 2,4\\rbrace ,\\lbrace 2,5\\rbrace ,\\lbrace 3,6\\rbrace ,\\lbrace 3,7\\rbrace \\rbrace $ and $\\mathcal {B}_{\\min } \\setminus S^c=\\lbrace \\lbrace 4\\rbrace ,\\lbrace 5\\rbrace ,\\lbrace 3,6\\rbrace ,\\lbrace 3,7\\rbrace \\rbrace $ , so players 4 and 5 are $g$ -critical of rank 1, while players 3,6 and 7 are $g$ -critical of rank 2.", "Another problem with $d$ -criticality comes from the fact that any player – including null ones – is critical of some order.", "Therefore, power indices based on $d$ -criticality will always assign non-null power to null players as long as the index is based on a probability distribution that supports any coalition.", "This is not the case with $g$ -criticality, since null players are always inessential.", "Example 4.10 Consider again $N=\\lbrace 1,2,3,4,5\\rbrace $ and $\\mathcal {W}_{\\min }=\\lbrace \\lbrace 1,2\\rbrace ,\\lbrace 1,3,4\\rbrace \\rbrace $ .", "player 5 is clearly a null player but she is $d$ -critical of rank 2 for any coalition $S \\subseteq N$ , while the same player is never $g$ -critical.", "A more formal analysis between criticality and null players is postponed to the next section." 
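Proposition 4.8 gives a much cheaper way to obtain the rank of g-criticality when the minimal winning coalitions (and their blocking duals) are available. A small sketch, reusing the conventions of the earlier snippet; whether S is winning is passed explicitly so that the full game need not be enumerated.

```python
def g_rank_from_minimal(i, S, N, W_min, B_min, S_is_winning):
    """Rank of g-criticality of player i wrt S, read off the minimal winning
    coalitions (if S is losing) or the minimal blocking ones (if S is winning)."""
    S, N = frozenset(S), frozenset(N)
    family = B_min if S_is_winning else W_min
    strip = (N - S) if S_is_winning else S      # remove the complement of S, resp. S
    reduced = [frozenset(C) - strip for C in family]
    sizes = [len(C) for C in reduced if i in C]
    return min(sizes) if sizes else None

# The game of Example 4.9 (players 1..7), with the losing coalition S = {1}:
W_min = [{1, 2, 3}, {3, 4, 5}, {4, 5, 6, 7}]
N = set(range(1, 8))
assert g_rank_from_minimal(2, {1}, N, W_min, [], False) == 2
assert g_rank_from_minimal(6, {1}, N, W_min, [], False) == 4
```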
], [ "Power Indices", "Based on the new definitions of criticality, we consider new measures of the agents' power.", "These notions require a probability distribution $p\\in \\mathcal {P}^N$$\\mathcal {P}^N$ is the set of probability distributions on the power set of $N$ ., with $p(S)$ denoting the probability for coalition $S \\subseteq N$ to form.", "Definition 5.1 The minimal essential measure of voting power of order $\\kappa $ for player $i \\in N$ , denoted $\\beta ^{m,\\kappa }_{i}$ is the probability for $i$ of being $m$ -critical wrt to a random coalition $S \\subseteq N$ originated with probability $p$ .", "Replacing $m$ -criticality with $g$ -criticality we obtain the group essential measure of voting power of order $\\kappa $ , $\\beta ^{g,\\kappa }_{i}$ , while replacing it with d-criticality, we obtain the differential measure of criticality, $\\beta ^{d,\\kappa }_{i}$ , always of order $\\kappa $ .", "The index $\\beta ^{d,\\kappa }_{i}$ was simply referred to as the measure of voting power of rank $\\kappa $ .", "We add the term \"differential\" to distinguish it from the other values defined here.", "To better examine the three indices, we need to introduce some \"global\" notions of criticality.", "Definition 5.2 We denote the probability of being essential minimal (essential, differential, resp.)", "critical for a player $i \\in N$ as $\\pi ^m_i$ ($\\pi ^g_i$ , $\\pi ^d_i$ , resp.).", "Therefore, $\\pi ^m_i = \\sum _{\\kappa =1}^{n} \\beta ^{m,\\kappa }_{i}, \\qquad \\pi ^g_i = \\sum _{\\kappa =1}^{n} \\beta ^{g,\\kappa }_{i}, \\qquad \\pi ^d_i = \\sum _{\\kappa =1}^{n} \\beta ^{d,\\kappa }_{i} \\; .$ Proposition 5.3 The following relationships hold for any $i \\in N$ , $\\pi ^m_i \\le \\pi ^g_i \\le \\pi ^d_i =1$ To prove (REF ), we simply note that minimal essential criticality is only a part of both essential and differential criticality, and, therefore $\\pi ^m_i \\le \\pi ^g_i$ and $\\pi ^m_i \\le \\pi ^d_i$ .", "The last equality holds because not all agents are essential critical of any order.", "Proposition 5.4 The following relation holds for any $i\\in N$ : $\\mbox{ If player i is null } \\Rightarrow \\pi _i^g=0.\\\\\\mbox{ If } \\pi _i^m=0 \\Rightarrow \\mbox{ player i is null}.$ If player i is null it is easy to show that $\\pi _i^g=0$ .", "To prove the second relation suppose that player $i$ is not null.", "Then she belongs to some minimal winning coalition, $S$ .", "We observe that the coalition $G=\\lbrace i\\rbrace $ is minimal essential wrt $S$ and then $\\pi _i^m\\ne 0$ .", "We denote the average order of essential minimal (essential, differential, resp.)", "criticality for a player $i \\in N$ as $\\bar{\\kappa }^m_i$ ($\\bar{\\kappa }^g_i$ , $\\bar{\\kappa }^d_i$ , resp.).", "Remembering that a player may fail to be minimal or essential critical, the weighting probabilities are normalized to one.", "Therefore, if $i\\in N$ is not a null player, we define $\\bar{\\kappa }^m_i := \\cfrac{\\sum _{\\kappa =1}^{n} \\kappa \\beta ^{m,\\kappa }_{i}}{\\pi ^m_i} \\qquad \\bar{\\kappa }^g_i := \\cfrac{\\sum _{\\kappa =1}^{n} \\kappa \\beta ^{g,\\kappa }_{i}}{\\pi ^g_i} \\qquad \\bar{\\kappa }^d_i := \\sum _{\\kappa =1}^{n} \\kappa \\beta ^{d,\\kappa }_{i} \\; .$ The quantities just defined always follow a well-determined rank.", "Proposition 5.5 The following relationships hold for any $i \\in N$ , $\\bar{\\kappa }^m_i \\le \\bar{\\kappa }^d_i = \\bar{\\kappa }^m_i + 1 - \\pi ^m_i \\le \\bar{\\kappa }^g_i$ To prove the equality (REF ), we apply the average 
property of subgroups, and obtain $\\bar{\\kappa }^d_i=\\pi ^m_i \\bar{\\kappa }_i^m + \\left( 1 - \\pi ^m_i \\right) \\bar{\\kappa }^{\\text{d-m}} =\\pi ^m_i \\bar{\\kappa }_i^m + \\left( 1 - \\pi ^m_i \\right) \\left( \\bar{\\kappa }_i^{m}+1 \\right) = \\bar{\\kappa }^m_i+1 - \\pi ^m_i$ where $\\bar{\\kappa }_i^{\\text{d-m}} $ denotes the average criticality order of the differential but non minimal players, which is equal to $\\bar{\\kappa }_i^{m}+1$ due to Proposition REF .", "This shows that $\\bar{\\kappa }^m_i \\le \\bar{\\kappa }^d_i$ .", "To prove the last inequality, $\\bar{\\kappa }^d_i \\le \\bar{\\kappa }^g_i$ we write : $\\bar{\\kappa }^d_i=\\pi ^m_i \\bar{\\kappa }_i^m + \\left( 1 - \\pi ^m_i \\right) \\left( \\bar{\\kappa }^{m}_i+1 \\right) \\mbox{ and } \\bar{\\kappa }^g_i=\\cfrac{\\pi ^m_i}{\\pi ^g_i} \\bar{\\kappa }_i^m + \\left( 1 - \\frac{\\pi ^m_i}{\\pi ^g_i} \\right) \\bar{\\kappa }^{\\text{ g non m}}$ where $\\bar{\\kappa }^{\\text{ g non m}}_i $ denotes the average criticality order of the essential but non minimal essential players.", "If $\\bar{\\kappa }^{\\text{ g non m}}=0$ then we can observe that $\\bar{\\kappa }^d_i = \\bar{\\kappa }^g_i$ Otherwise we can write that $\\left( \\frac{\\pi ^m_i}{\\pi ^e_i} - \\pi ^m_i \\right) \\left(1+ \\bar{\\kappa }^m_i - \\bar{\\kappa }^{\\text{ g non m}} \\right) \\le \\left( \\frac{\\pi ^m_i}{\\pi ^e_i} - \\pi ^m_i \\right) \\bar{\\kappa }^m_i$ that implies $\\bar{\\kappa }^d_i \\le \\bar{\\kappa }^g_i$ ." ], [ "A New Look at the Opportunity Test", "In the previous sections we have shown that the notion of $d$ -criticality passes the opportunity test, but does not satisfy several reasonable properties such as ignoring inessential or null players when measuring the players' power, or establishing a clear relationship between criticality and minimal winning coalitions.", "Conversely, the newly introduced notion of $g$ -criticality satisfies those properties, while failing the opportunity test.", "We question here the adequacy of the opportunity test of Definition REF , which was defined in a context where voting power measures \"quantify the extent (denoted as $E$ ) to which voter $a$ has the opportunity to make a difference as to whether a bill passes or not\", in a situation where the focus is on \"the extent (denoted as $E_k$ ) to which $a$ can be a member of a group of size $\\kappa $ that has the opportunity to make a difference as to whether the bill passes\".", "In other words, the opportunity test should be modified in a way that takes into account the ability of the group, instead of a single player to change the game outcome.", "In order to proceed, we examine the implications of the opportunity test, to verify whether a new test involving groups of players satisfies similar properties.", "The test has three important implications: No matter what action a player takes, she will always be critical for the outcome; depending on the action, the player will be either outside or inside critical; if outside and inside criticality have the same relevance, the power of the player is the same no matter what action is taken.", "In defining an opportunity test for coalitions, we would like the test to show a coalition $K$ to be critical in some sense for a coalition $S$ , independently of the behavior of players in $K$ (property $i$ ) and we would like players in $K$ capable of both outside and inside criticality wrt $S$ (property $ii$ ).", "We set property $iii$ aside for the moment.", "In a voting context, we require that, 
no matter how players in $K$ cast their vote, some players will always be critical – and every player is critical for some configuration.", "Turning to a more formal approach, we consider all possible behaviors of the players in $K$ by considering all possible subsets in $K$ .", "Depending on $S$ , these subsets will represent all players voting \"yes\" or \"no\".", "Definition 6.1 A criticality criterion for coalitions satisfies the opportunity test for coalitions if any time coalition $K$ is critical for coalition $S$ , we have the following: For any $H\\subseteq K$ , there exists a coalition $G_H \\subseteq K$ such that is critical wrt either $S \\cup H$ or $S \\setminus H$ .", "Every player is critical for at least one configuration of players $\\bigcup _{H \\subseteq K} G_H =K$ There exist two coalitions $H^{\\prime }$ and $H^{\\prime \\prime }$ , such that $G_{H^{\\prime }}$ is critical outside and $G_{H^{\\prime \\prime }}$ is critical inside.", "We now focus on the notion of essential criticality.", "Proposition 6.2 If we replace the expression \"criticality\" with \"essential criticality\" in $i)$ of Definition REF , essential critical coalitions pass the test and they are the only coalitions to do so.", "If $K$ is an essential coalition for $S$ , then, $G_K=K$ and $G_H=K \\setminus H$ for $H \\ne K$ .", "$G_K$ is essential critical for $S \\cup H$ when $S \\notin \\mathcal {W}$ , and it is essential critical for $S \\setminus H$ when $S \\in \\mathcal {W}$ .", "If $K$ is not critical, it does not satisfy property $iii)$ of Definition REF , while if it is not essential it does not satisfy property $ii)$ .", "We new give examples in the context of voting.", "Suppose $S$ is a losing coalition of players voting \"yes\" and $K$ is a critical coalition outside $S$ .", "If every player in $K$ votes \"yes\", the bill passes and every player in $K$ is inside $g$ -critical of rank 1.", "Otherwise, the bill will not pass and only the players who voted \"no\" will now be critical – their rank being given by the cardinality of this smaller group.", "Conversely, if $S$ is winning and $K$ is critical inside $S$ , players will be inside critical of rank 1 only if they jointly vote \"no\".", "Conversely, only the players who voted \"yes\" will be jointly critical, their rank being given by their cardinality.", "We remark that player $i$ is $g$ -critical of rank $\\kappa $ wrt $S$ if $\\kappa $ is the minimum cardinality of the essential critical coalitions that contain player $i$ .", "In extending the opportunity test notion, we set aside property $iii$ .", "When $g$ -criticality is under scrutiny, the symmetry of criticality breaks down.", "We may construct power indices restricted to the different configurations of players in $K$ .", "Suppose $p_K$ is a probability distribution on the subsets of $K$ that indicates the players voting \"yes\".", "We may then consider a restricted power index a la Bahnzaf $\\beta ^{i,r}_K =\\sum _{ H \\subseteq K} p_K(H) I(i,G_H) \\qquad \\mbox{where} \\qquad I(i,G_H)= {\\left\\lbrace \\begin{array}{ll}1 & \\mbox{if }i \\in G_H\\\\0 & \\mbox{otherwise}\\end{array}\\right.", "}$ To make things simpler, we assume $S$ losing, $|K|=k$ and $p(H)=1/2^k$ for every $ \\subseteq K$ .", "Now $\\beta ^{i,1}_K=1/2^{k-1}$ and this is the sum of two critical events: inside criticality for $S \\cup K$ and outside criticality for $(S \\cup K) \\setminus \\lbrace i\\rbrace $ .", "For higher ranks only outside criticality is counted and, for any $r=2,3,\\ldots ,k$ , $\\beta 
_K^{i,r} = \\frac{\\binom{k-1}{r-1}}{2^k}$ The measure of outside criticality is not compensated by any degree of inside criticality for the same rank, and this is where the symmetry breaks down.", "We note that symmetry holds when the first-order rank is considered, in line with what is already known about classical criticality.", "Higher order criticality, instead, contributes to a single side only: outside criticality when $S \\notin \\mathcal {W}$ and inside criticality otherwise.", "This asymmetric part arises from the collaborative effort among players." ], [ "Conclusions", "We analysed two different and independent approaches to the definition of criticality of a player and we show that they can be related.", "We did not find a \"perfect\" group criticality criterion that meets all the requirements in terms of being reasonable and compliant with the requirement of measuring power as the opportunity left by the remaining players.", "We question the applicability of the \"classical\" opportunity test and we make a proposal to define the opportunity that the remaining players leave to a group of players, not necessarily acting in coordination.", "Further efforts have to be made in this direction." ] ]
2207.03565
[ [ "A Step in Understanding the $S_8$ Tension" ], [ "Abstract Models of dark sectors with a mass threshold can have important cosmological signatures.", "If, in the era prior to recombination, a relativistic species becomes non-relativistic and is then depopulated in equilibrium, there can be measurable impacts on the CMB as the entropy is transferred to lighter relativistic particles.", "In particular, if this ``step'' occurs near $z\\sim 20,000$, the model can naturally accommodate larger values of $H_0$.", "If this stepped radiation is additionally coupled to dark matter, there can be a meaningful impact on the matter power spectrum as dark matter can be coupled via a species that becomes non-relativistic and depleted.", "This can naturally lead to suppressed power at scales inside the sound horizon before the step, while leaving conventional CDM signatures for power outside the sound horizon.", "We study these effects and show such models can naturally provide lower values of $S_8$ than scenarios without a step.", "This suggests these models may provide an interesting framework to address the $S_8$ tension, both in concert with the $H_0$ tension and without." ], [ "Introduction", "The past two decades have seen cosmology become a precision science.", "A wealth of new data over wide ranges of redshifts and distance scales has appeared, allowing a precision determination of the parameters of $\\Lambda $ CDM.", "In addition, as error bars continue to shrink, we now look forward to an era where even small deviations from $\\Lambda $ CDM might show themselves as a statistically significant deviation in the data.", "There are many reasons to expect that such deviations should appear at some point.", "Most models of CDM have interactions at some level, with itself and with other species, possibly within the Standard Model.", "Additional radiation is naturally populated by thermal contact in the early universe, although it may be proportionately small today due to the large number of degrees of the Standard Model, or some other source of entropy.", "While many properties of high energy physics are hidden in the smallest scales, many different observables are sensitive to physical processes from when the photon bath had a temperature of a keV and below.", "Although this is much smaller than many particle physics scales, it is larger than other important scales known in nature.", "In particular, both the neutrino mass ($m_\\nu \\sim 0.1 \\,{\\rm eV}$ ) and the cosmological constant scale ($\\Lambda ^4 \\sim (10^{-2.5} \\,{\\rm eV})^4$ ) are physical scales below this.", "In particle physics models, there are frequently mass scales induced at scales in the several orders of magnitude around $\\sim \\,{\\rm TeV}^2/M_{pl} \\sim 10^{-4} \\,{\\rm eV}$ .", "Thus, not only can cosmology probe the presence of additional sectors of physics, presently disconnected from ours, it probes a potentially interesting energy range, as well.", "Recently, the consequences of a step in a fluid of strongly interacting radiation (SIDR) was investigated in the context of the $H_0$ tension [1].", "The $H_0$ tension is the $4.8\\sigma $ disagreement between the measurement of the local expansion rate based on late-universe observables, in particular Cepheid-calibrated Type IA supernovae from SH0ES [2] $H_0=73.04\\pm 1.04\\,$ km/s/Mpc when compared against inferences based early universe physics, such as from the CMB alone or in combination with BAO $H_0=67.27\\pm 0.60\\,$ km/s/Mpc [3].", "A broad variety of proposals 
have been presented in attempts to reconcile this, see., e.g., [4], [5], [6], [7], [8], [9], [10], [11], [12], [13] and additional models summarized in [14].", "One common group of models include additional radiation [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], which may be strongly interacting.", "In [1], it was shown that the presence of a mass threshold in a fluid of SIDR allowed for a higher value of $H_0$ and a better overall fit to a broad set of data.", "In particular, the data preferred a mass threshold occurring near $z_t \\sim 20\\,000$ .", "The model considered there was a simple two-component model, with a massive scalar and a massless fermion.", "This simple model is neatly packaged in the simplest possible supersymmetric model, and was thus termed WZDR, or Wess-Zumino Dark Radiation.", "As the scalar becomes non-relativistic, its entropy is transferred to the fermion, heating it, and increasing the effective radiation compared to the step-less case.", "It is clear that extensions of this model are interesting to study.", "Probably the most immediately obvious to consider is the question: what if the dark matter additionally interacts with a portion of this dark fluid, such as, e.g., a Yukawa coupling to the scalar?", "Naively, this extension would lead to a suppression of matter power for modes that were inside the horizon while the dark matter is coupled to the dark radiation fluid.", "In contrast, modes that come inside the horizon later would not be suppressed, leading to CDM-phenomenology on large scales, and deviations at smaller scales.", "The comoving horizon for modes entering at redshift $z$ before matter radiation equality is approximately $r_{s} \\sim 100\\, \\text{Mpc}\\, {z_{eq}}/{z}$ .", "Thus, for the parameters which are preferred by the $H_0$ tension, we expect deviations from CDM on scales of $\\sim 10\\, h^{-1}\\text{Mpc}$ and below.", "Such a scale is interesting in light of a separate tension, namely the $S_8$ tension.", "Loosely speaking, the $S_8$ tension is the difference between more direct measurements of the level of fluctuations in the matter power spectrum on $\\sim $ 8 h$^{-1}$ Mpc scales and their prediction using $\\Lambda $ CDM and the CMB to normalize.", "More quantitatively, the directly measured values from KiDS-1000 [26] and DES Y3 [27] combined give $S_8 \\equiv \\sqrt{\\Omega _m/0.3}\\, \\sigma _8 = 0.769 \\pm 0.016$ which is $2.9\\sigma $ lower then - for example - the value obtained from the CMB by Planck [3] $S_8=0.834\\pm 0.016$ .", "Although this tension is not as statistically significant as the $H_0$ tension, it has appeared across many different independent measurements of $S_8$ [14].", "Moreover, it raises a basic questions as to what sorts of models might impact the power spectrum at measurable scales, while leaving CDM-like cosmology on larger scales.", "The fact that a simple model extension immediately points to consequences relevant at the appropriate scale is a striking coincidence.", "In this paper we will explore the consequences of dark matter interacting with a stepped fluid, where the late-time behavior after the step of dark matter is that of CDM.", "The layout of the paper is as follows: in Section we lay out the specific scenarios we wish to consider, namely the WZDR model of [1] where the dark matter additionally interacts with the scalar.", "We study the cosmological signals of such a scenario in Section , showing that such scenarios naturally suppress matter power on $\\sim 15 \\, 
\\text{Mpc}\\times [20\\,000/z_{t}]$ scales and smaller.", "We perform a global fit in Section to an extended data set including a wide variety of cosmological datasets, finding a good improvement in fits compared to $\\Lambda $ CDM and WZDR with CDM.", "Finally, in Section , we conclude.", "Supplementary details regarding the WZDR-DM interaction and its implementation in the Boltzmann solver, as well as a full set of posterior densities and best-fit cosmological parameters as determined by our MCMC analysis, are provided in two appendices." ], [ "Model of a Stepped Dark Sector", "The Wess-Zumino Dark Radiation model (WZDR) [1] is a simple and natural example of a dark sector with a mass threshold.", "It contains just two particles - a fermion $\\psi $ and a scalar $\\phi $ , which interact through a Yukawa coupling $\\phi \\psi \\psi $ .", "If we allow for scalar quartic interactions $\\phi ^4$ , this model is efficiently packaged into the simplest known supersymmetric model, namely, the Wess-Zumino model.", "Supersymmetry breaking induces a small mass $m_\\phi ^2 \\phi ^2$ for the scalar, which can naturally lie near the  eV  scale.", "Importantly, the dynamics we discuss are independent of supersymmetry, although this provides natural model-building directions.", "At early times, before the CMB era, some process produces As discussed in Ref.", "[1], as we will assume here, this process can occur just after Big Bang nucleosynthesis (BBN) the WZDR $\\psi $ and $\\phi $ particles with an energy density equivalent to $N_\\text{UV}$ additional neutrino species.", "As $\\phi $ and $\\psi $ always maintain chemical and kinetic equilibrium, once the temperature of this sector decreases below $m_\\phi $ , the scalars decay and annihilate, depositing their entropy into the lighter $\\psi $ species.", "Due to this process which takes approximately a decade in redshift, the relative energy density of the fluid, as quantified by the effective number of additional neutrinos species $N (z)$ , increases to a value $N_\\text{IR}= (15/7)^{1/3} N_\\text{UV}$ .", "Assuming a temperature of the dark radiation today of $T_{d0}$ , the transition approximately starts at redshift $1+z_t = m_\\phi / T_{d0}$ , and the evolution of $N(z)$ as a function of redshift is calculated by solving the entropy conservation equation.", "This simple model was shown to significantly alleviate the Hubble tension, with the introduction of a single low mass threshold, a “step” [1].", "It is easy to imagine extensions of this model, perhaps the simplest of which is if the relativistic fluid additionally has interactions with dark matter, $\\chi $ .", "If $\\chi $ is a fermion, the simplest interaction possible is one in which an additional Yukawa coupling is added $\\phi \\chi \\chi $ .", "Interactions with the fluid would arise through Compton-like $\\phi \\chi \\rightarrow \\phi \\chi $ processes, as well as $\\chi \\psi \\rightarrow \\chi \\psi $ mediated by t-channel $\\phi $ exchange.", "Of these, the Compton-like process is typically smaller, suffering an additional $T_d^2/M_\\chi ^2$ suppression.", "At high temperatures, $T_d \\gtrsim m_\\phi $ , the momentum transfer rate between the DM and the WZDR sector from the t-channel process scales as [15] $\\Gamma \\propto \\frac{T_d^2 }{M_\\chi } \\qquad \\text{for }T_d \\gtrsim m_\\phi ~,$ where $M_\\chi $ is the mass of the DM.", "Therefore during radiation domination, the momentum transfer rate scales as Hubble, and the ratio $\\Gamma /H$ neither increases nor decreases over 
time.", "However, at late times, once the temperature drops below $m_\\phi $ , the $\\psi -DM$ interaction is effectively given by the four-fermi contact operator, $\\psi ^2 \\chi ^2/m_\\phi ^2$ which gives a suppressed momentum transfer rate that scales as $\\Gamma \\propto \\frac{T_d^2 }{M_\\chi }\\left(\\frac{T_d}{m_\\phi }\\right)^4 \\qquad \\text{for }T_d \\lesssim m_\\phi ~,$ and the interaction shuts off quite rapidly after the transition time $z_t$ .", "We found that to a very good approximation the momentum transfer rate is given by the phenomenologically motivated fitting formula $\\Gamma (x) & = \\Gamma _0\\frac{(1+z_t)^2}{x^2} \\left(\\frac{1}{1 - 0.05 \\sqrt{x} + 0.131 x }\\right)^4 ~,$ where $x \\equiv m_\\phi / T_d$ and $\\Gamma _0$ is the momentum transfer rate extrapolated to today in a theory in which there is no step (i.e.", "where $m_\\phi =0$ ).", "The coefficients have been chosen to best approximate the exact numerical result.", "A full derivation of the momentum transfer rate is given in Appendix .", "From now on we will refer to a WZDR model with dark matter – dark radiation interaction that shuts off as Eq.", "(REF ) as WZDR+.", "As a consequence, we have a scenario in which at early times dark matter exchanges momentum with a relativistic fluid made up of $\\psi $ and $\\phi $ particles.", "This momentum transfer acts as a mild friction on the growth of matter perturbations.", "At late times, after $z_t$ , we have a CDM-like $\\chi $ decoupled from a still tightly self-coupled fluid of $\\psi $ .", "This mass threshold then naturally produces a CDM-like cosmology at late times, and a very different one beforehand." ], [ "Effects of a Stepped Dark Sector", "Stepped fluids which interact with the dark matter have interesting phenomenology, and variety of effects and imprints during the evolution of the universe.", "Here we discuss some of them, emphasizing that there are two distinct effects which are both sensitive to the WZDR mass scale.", "The first effect which was discussed in detail in Ref.", "[1] is due to the increase in the effective number of degrees of freedom.", "Through the transition, as the energy density increases, the expansion rate of the universe increases accordingly.", "As a result, there is an $\\ell $ dependent phase-shift of the CMB power spectrum compared to a model of interacting radiation without a step.", "To leading order there is no phase shift for small $k$ -modes that enter the horizon after the transition, while there is a linear phase-shift for large $k$ -modes that enter the horizon before the transition.", "Therefore, as the change in the behavior of the phase shift depends on the time of transition, the CMB is sensitive to $m_\\phi $ or equivalently to $z_t$ .", "Ref.", "[1] found that this shift is strongly preferred by the full data set including measurements of $H_0$ and allows for better fits with higher expansion rates.", "The new ingredient of the model which we introduce here is the interaction of the stepped fluid with the dark matter.", "During the time of radiation domination and well before the transition the momentum transfer rate scales as Hubble, $\\Gamma \\propto H$ , and therefore remains equally important throughout this period.", "Since in our model the WZDR fluid interacts with $100\\%$ of the DM, similar to the study of [15], [17], the WZDR-DM coupling is weak and the momentum transfer rate is always smaller then Hubble.", "As a result the DM does not oscillate but only experiences a friction as it falls into 
", "As explained in Ref. [17], the effect on the matter power spectrum (MPS), compared to a model with no interaction between the fluid and the DM, is a linear suppression in $\log k$ space with a slope proportional to the momentum transfer rate, $\frac{P_{\text{interacting}}}{P_{\text{not-interacting}}} \simeq \left\lbrace \begin{array}{ll} 1 & k \ll k_{s.o.} \\ 1 - \sqrt{2}\, \Gamma /H\times \log (k / k_{s.o.}) & k \gg k_{s.o.} \end{array}\right.~.$", "Figure: Dependence of the MPS on model parameters. Both panels show ratios of the MPS in WZDR+ compared to a reference model with no DR-DM interaction. The blue line indicates the best fit point from our $\mathcal {DHS}$ fit, with $10^7\,\Gamma _{b.f.} = 5\, {\rm Mpc}^{-1}$. In the top panel the different lines correspond to different interaction strengths $\Gamma $ compared to the reference model with no interaction (i.e. WZDR). The bottom panel shows how the MPS varies with the redshift of the transition $z_t$ compared to a reference model with $z_t\rightarrow \infty $. The black dashed line in both plots is the best fit of SIDR+ to the $\mathcal {DHS}$ fit compared to the same parameter space point with the DM-DR interactions shut off. The gray band shows the scales to which $S_8$ is most sensitive through the $\sigma _8$ window function and the dotted gray line indicates $k=1/8\, h\,$Mpc$^{-1}$.", "Here $k_{s.o.}$ is the wave number of the mode which enters the horizon when the interaction shuts off.", "This effect is shown in the top panel of Figure REF , where the linear suppression is evident and the slope is proportional to the momentum transfer rate.", "The fact that this suppression is smooth in $\log k$ and does not introduce a sharp feature or drop-off in the MPS allows this kind of model to lower $S_8$ and fit the MPS extracted from Lyman-$\alpha $ data, which prefers a steeper slope at the scale $k\sim 1$ Mpc$^{-1}$  [28].", "In Ref. [17] the interaction becomes smaller compared to Hubble after matter-radiation equality, as $H\propto a^{-3/2}$ while the interaction still drops as $\Gamma \propto a^{-2}$; thus the shut-off time is the time of equality, $\eta _{eq}$.", "In contrast, for the WZDR+ model the momentum transfer rate shuts off once the temperature drops below $m_\phi $ (see Eq. (REF )).", "This is shown in the bottom panel of Figure REF .", "As a result the matter power spectrum, through the $S_8$ measurements, is sensitive to $m_\phi $ or equivalently to $z_t $."
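To make the scalings above concrete, the following minimal Python sketch evaluates the fitted momentum transfer rate $\Gamma(x)$ of the previous section and the approximate log-linear MPS suppression quoted above. It is an illustrative sketch only: the numerical inputs (a $\Gamma_0$ of order the $10^7\,\Gamma_{b.f.}=5\,{\rm Mpc}^{-1}$ best-fit value quoted in the figure caption, the shut-off scale $k_{s.o.}$, and the ratio $\Gamma/H$) are placeholders chosen for illustration, not outputs of our fits.

```python
import numpy as np

def gamma_of_x(x, gamma0, z_t):
    """Fitted momentum transfer rate Gamma(x), with x = m_phi / T_d.

    gamma0 is the rate extrapolated to today in the step-less theory (m_phi = 0)."""
    return gamma0 * (1.0 + z_t) ** 2 / x ** 2 * (1.0 / (1.0 - 0.05 * np.sqrt(x) + 0.131 * x)) ** 4

def mps_suppression(k, k_so, gamma_over_H):
    """Approximate ratio P_interacting / P_not-interacting:
    unity for k << k_so, log-linear suppression for k >> k_so."""
    k = np.atleast_1d(np.asarray(k, dtype=float))
    ratio = np.ones_like(k)
    hard = k > k_so
    ratio[hard] = 1.0 - np.sqrt(2.0) * gamma_over_H * np.log(k[hard] / k_so)
    return ratio

# Illustrative placeholder inputs only:
z_t = 2.0e4                             # transition redshift
x = np.logspace(-2, 2, 5)               # from T_d >> m_phi to T_d << m_phi
print(gamma_of_x(x, gamma0=5e-7, z_t=z_t))                 # rapid shut-off for x >> 1
print(mps_suppression(np.logspace(-3, 0, 4), k_so=0.05, gamma_over_H=0.02))
```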
], [ "Analysis", "Within the WZDR+ framework, there are some immediate questions, namely: can this model give a good description of existing data?", "In particular, can it address the known tensions in the data?", "And finally, is there consistency between the different datasets in the value of $z_t$ extracted?", "In this Section we consider precisely these questions.", "We will study how well WZDR+ can improve the overall fit to the CMB and alleviate the Hubble and $S_8$ tensions.", "Here we highlight our main results, more details from our analysis can be found in Appendix .", "We modified CLASS v3.1 [29] to include the stepped fluid [1] and further modified the code to include interactions with DM as described in Appendix .", "We use the MontePython v3.5 [30], [31] MCMC sampler to study the constraints of various data sets on the model.", "Similar to Ref.", "[1], for the WZDR+ model, we adopt flat priors on the 6 $\\Lambda {\\rm CDM}$ cosmological parameters $\\lbrace \\omega _b,\\, \\omega _{\\rm dm},\\, \\theta _s, n_s, A_s,\\, \\tau _{\\rm reio} \\rbrace $ .", "For the 3 new parameters of WZDR+ we include a flat prior on the amount of dark radiation after the stepThe lower bound was included to avoid numerical issues of our code near $N_\\text{IR}=0$ .", "We explicitly checked that our results are not very sensitive to small changes of this bound.", "$N_\\text{IR}> 0.01$ , a logarithmic prior on the redshift of the step locationThese bounds are designed to avoid scanning over models in which the transition occurs too early or too late to have much effect on the CMB, see [1].", "$\\log _{10} (z_t) \\in [4.0, 4.6]$ , and a linear prior on the strength of the interaction between dark radiation and dark matter $\\Gamma _0>0$ .", "We consider combinations of three data sets: Our baseline data set $\\mathcal {D}$ includes the Planck 2018 [3], TT,TE, and EE data for low-$\\ell $ (`lowl_TT', `lowl_EE') and high-$\\ell $ (`highl_TTTEEE') with the full set of nuisance parameters.", "It also includes the late-universe constraints: the BAO-only likelihood (`bao_boss_dr12') from BOSS DR12 ($z = 0.38, 0.51, 0.61$ )[32] and the small-z BAO likelihood (`bao_smallz_2014') including data from the 6dF ($z = 0.106$ )[33] and MGS ($z = 0.15$ ) [34] catalogs, as well as the PANTHEON [35] supernova likelihood (`Pantheon').", "The data set $\\mathcal {H}$ is chosen to test the Hubble tension, it consists of the latest measurement of the intrinsic magnitude of supernovae $M_b=-19.253\\pm 0.027$ by the SHOES collaboration [2] , which we implement as a Gaussian likelihood for this parameter.", "The data set $\\mathcal {S}$ is chosen to test the $S_8$ tension.", "It includes the $3\\times 2 pt$ weak lensing and galaxy clustering analyses by KiDS-1000x{2dFLenS+BOSS} [26] and DES-Y3 [27] which obtain $S_8=0.766^{+0.020}_{-0.014}$ and $S_8=0.775^{+0.026}_{-0.024}$ , which we implement as simple asymmetric Gaussian likelihoods for $S_8$ .", "For quantifying the “Gaussian Tension” we also combine the two $S_8$ measurements and their positive $1\\sigma $ ranges to $S_8^{\\rm direct} = 0.769 \\pm 0.016$ .", "In addition, the data set $S$ includes the Planck lensing [3] likelihood (`Planck_lensing').", "We do not include here the ACT DR4 dataset [36].", "Although ACT provides the promise of great sensitivity, there are known tensions between ACT and Planck, and a thorough analysis would be needed to study the results of including potentially conflicting datasets.", "Given the early state of the ACT data and the promise 
, "We will perform fits to the combinations of data sets $\mathcal {D}$, $\mathcal {DH}$, $\mathcal {DS}$ and $\mathcal {DHS}$.", "To put our fits in perspective we compare WZDR+ to two other models: $\Lambda $ CDM and SIDR+, a model in which self-interacting dark radiation (SIDR) weakly interacts with the DM [15], [17].", "SIDR+ is a natural model to compare to because it also has (i) extra radiation, which is important for the Hubble tension, and (ii) friction between dark radiation and dark matter, which is important to address the $S_8$ tension; relative to WZDR+ it is just missing the mass threshold, the “step”.", "Figure: Posterior distributions for $\Lambda $ CDM vs. WZDR+ fitted to the data set $\mathcal {D}$. The contours of WZDR+ are much broader and in better consistency with direct measurements of $S_8$ and $H_0$.", "A first question to address is whether WZDR+ can provide an overall better fit to the data, and, in particular, whether it ameliorates both the $H_0$ and $S_8$ tensions simultaneously.", "Figure REF shows the posterior in the $H_0-S_8$ plane of our fit of WZDR+ to the $\mathcal {D}$ data set, i.e. without the direct measurements of $H_0$ or $S_8$.", "For comparison, we also show the fit of $\Lambda {\rm CDM}$  to the same data and the 68% and 95% confidence bands of the direct measurements in gray.", "One sees clearly that the posterior of WZDR+ is much wider in both $H_0$ and $S_8$ than the one for $\Lambda {\rm CDM}$ and that it has significant overlap with the direct measurements.", "Note also the strongly non-Gaussian tail of the 1d posterior towards smaller values of $S_8$ corresponding to increasing DM-DR interaction $\Gamma _0$, and the much broader and also somewhat non-Gaussian 1d posterior towards larger values of $H_0$ corresponding to increased dark radiation fluid $N_{IR}$.", "The correlations of the WZDR+ parameters $N_{IR}$ and $\Gamma _0$ with $H_0$ and $S_8$ are clearly visible in Figure REF .", "Figure: Posterior distribution of WZDR+ fitted to the $\mathcal {D}$ data set. The distribution shows a clear correlation between the model parameters $N_{IR}$ and $\Gamma _0$ and the inferred quantities $H_0$ and $S_8$.", "Given the broadening and shift towards larger $H_0$ and smaller $S_8$ we expect to find that the predictions for the values of these parameters in WZDR+ are in less tension with the direct measurements than in $\Lambda {\rm CDM}$.", "One way to quantify the tension (or lack thereof) between two data sets in a given model is to perform a combined fit to the two data sets in question and examine the goodness of fit, the $\chi ^2$.", "Therefore we performed fits of all three models, $\Lambda {\rm CDM}$, SIDR+ and WZDR+, to the full data set $\mathcal {DHS}\,$.", "Figure REF shows the resulting posteriors in the $\lbrace S_8,\,H_0\rbrace $ plane in blue compared with the fit to the base data set $\mathcal {D}\,$ (red).", "One sees that even with the pull from the direct measurements the $\Lambda $ CDM posterior remains far from the direct measurements in $H_0$ and to a lesser extent in $S_8$.", "On the contrary, the much broader WZDR+ posterior from the fit to $\mathcal {D}$ overlaps both direct measurements at $1\sigma $ and almost reaches the overlap of both.", "Thus it is easily pulled to largely overlap with both direct measurements once fit to the full data set $\mathcal {DHS}\,$.", "The Figure shows that SIDR+
can also address both tensions, and based on the Figure alone one cannot ascertain a preference for WZDR+ versus SIDR+ or quantify the goodness of fit of either.", "Figure: Posterior distributions of $\Lambda $ CDM (top left), SIDR+ (bottom left), and WZDR+ (bottom right) fitted to the $\mathcal {D}$ vs. $\mathcal {DHS}$ data sets.", "Thus we need to compute and compare the $\chi ^2$ values of the various best fit (BF) points to probe if they provide good overall fits to the data.", "Specifically, we will compute the $Q_{DMAP}$ value, which quantifies the tension between a model's prediction for an observable and the direct measurement by comparing the $\chi ^2$ values of the BF points in the fits with and without the direct measurement.", "For example, to determine the $H_0$ tension in $\Lambda {\rm CDM}$, we compare the $\chi ^2$ of the BF point in the fit to the $\mathcal {D}\,$ data set, $\chi ^2_\mathcal {D}$, to the BF of the fit to the $\mathcal {DH}\,$ data set, $\chi ^2_\mathcal {DH}$.", "The $Q_{DMAP}$ value in units of $\sigma $ is then $\left(\chi ^2_\mathcal {DH}-\chi ^2_\mathcal {D}\right)^{1/2}$. (When the direct measurement consists of multiple measurements, as in the case of $S_8$, one must also subtract the $\chi ^2_\mathcal {S}$ due to the tension between the different direct measurements, and the $Q_{DMAP}$ formula becomes more symmetric, $\left(\chi ^2_\mathcal {DS}-\chi ^2_\mathcal {D}-\chi ^2_\mathcal {S}\right)^{1/2}$. Because of the excellent agreement between the KiDS and DES measurements this correction is numerically insignificant, $\chi ^2_\mathcal {S}=0.08$. For Gaussian posteriors the $Q_{DMAP}$ value agrees with the Gaussian Tension.)", "Assuming Gaussian distributed errors, the expectation for the $Q_{DMAP}$ value in a model which perfectly describes the data is $1\sigma $.", "Table REF shows the results of 3 different tests, the $Q_{DMAP}$ for the data sets $\mathcal {DH}\,$, $\mathcal {DS}\,$, and $\mathcal {DHS}\,$, all compared to $\mathcal {D}\,$.", "Beginning with the first row, we see that within $\Lambda {\rm CDM}$ the prediction for $S_8$ from the fit to $\mathcal {D}\,$ is in moderate $2.6\sigma $ tension with the direct measurements.", "Much more significant at $5.6\sigma $ is the Hubble tension, the tension with the direct measurement from SH0ES. (This value is even larger than the $5\sigma $ tension obtained for $H_0$ in [2], because we quantify the tension with the supernova magnitude $M_b$ instead of $H_0$, which avoids the model dependence included in the systematic uncertainties of $H_0$ from [2].)", "The tension with both direct measurements combined is $5.8\sigma $, and clearly $\Lambda {\rm CDM}$ cannot explain the combined $H_0/S_8$ tension. (This is slightly smaller than the combination in quadrature of the $Q_{DMAP}$ tensions for $\mathcal {DH}\,$ and $\mathcal {DS}\,$, due to the correlation of $S_8$ and $H_0$ visible in the $\Lambda {\rm CDM}$ panel of Figure REF .)", "Table: $Q_{DMAP}$ tensions.", "SIDR+, in contrast, makes a significant improvement in addressing the Hubble tension (which is not surprising, given the extra radiation).", "It does not help the $S_8$ tension, however, as shown by the lack of improvement in the ${\cal DS}$ dataset.", "The failure of SIDR+ to significantly reduce $S_8$ can be understood from Figure REF , which shows that the suppression of the MPS starts too early at $k\sim 0.01 h/\text{Mpc}$, which generates a tension with CMB data."
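Since $Q_{DMAP}$ reduces to a difference of best-fit $\chi^2$ values, it is straightforward to evaluate; a minimal sketch is given below, with placeholder numbers rather than the $\chi^2$ values of our fits.

```python
import math

def q_dmap(chi2_with, chi2_without, chi2_direct=0.0):
    """Tension in units of sigma between a model's prediction and a direct measurement.

    chi2_with    : best-fit chi^2 including the direct measurement (e.g. chi^2_DH or chi^2_DS)
    chi2_without : best-fit chi^2 of the baseline fit (e.g. chi^2_D)
    chi2_direct  : chi^2 among the direct measurements themselves (e.g. chi^2_S = 0.08
                   for the KiDS/DES combination); zero for a single measurement."""
    return math.sqrt(chi2_with - chi2_without - chi2_direct)

# Placeholder example (not the paper's numbers):
print(q_dmap(chi2_with=2790.0, chi2_without=2760.0))                      # H-like case
print(q_dmap(chi2_with=2768.0, chi2_without=2760.0, chi2_direct=0.08))    # S-like case
```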
, "(The SIDR+ $\mathcal {DS}\,$ posterior is bimodal: one mode has a minimal amount of extra radiation, $N_{\rm fluid} \sim 0.0007$, while the other has $N_{\rm fluid} \sim 0.07$ with a local minimum of $\chi ^2$. Here we focus on the region of parameter space with $N_{\rm fluid}>0.01$ since we are interested in the models' potential for solving both tensions.)", "The improvement in the ${\cal DHS}$ $Q_{DMAP}$ for SIDR+ is almost entirely driven by the pull of the $\mathcal {H}$ data set and the reduction of the Hubble tension.", "WZDR+, on the other hand, does better in all regards, improving ${\cal DS}$ to a two-sigma anomaly, and reducing ${\cal DH}$ to below three-sigma.", "The overall cosmological tension from both the Hubble and $S_8$ tensions is reduced to about three-sigma.", "If one looks at the breakdown of $\chi ^2$ values in Appendix , one sees that in fitting to $\mathcal {DHS}\,$, WZDR+ is gaining improvements from all portions of the dataset.", "Another test is $\Delta AIC$, the Akaike information criterion, which is defined as the difference of the best-fit $\chi ^2$ to all the data $\mathcal {D}HS$ between a given model and the reference $\Lambda $ CDM, with a $\chi ^2$ penalty of $+2$ for each new model parameter beyond $\Lambda $ CDM: $\Delta AIC = \chi ^2 - \chi ^2_{\Lambda \rm CDM} + 2 \times (\text{new parameters})~.$", "The results of this relatively straightforward test are shown in Table REF .", "We see immediately that the inclusion of just a single parameter (the DM interaction strength versus WZDR, and the mass threshold versus SIDR+) improves the $\chi ^2$ by more than 5, improving the $\Delta AIC$ in each case by better than 3.", "This is a weak preference, to be clear, but it also reflects the simple fact that the $S_8$ data are not (yet) strong enough to make this more than a tension.", "Table: $\chi ^2$ differences and $\Delta AIC$ of WZDR, SIDR+, and WZDR+ relative to $\Lambda {\rm CDM}$.", "Finally, there is the Gaussian Tension (GT) test.", "This is not an ideal test in this case because the posteriors for $S_8$ and the supernova magnitude $M_b$ are not Gaussian in SIDR+ and WZDR+, but we nonetheless include the GT for completeness. (The Gaussian Tension between two measurements with $1\sigma $ errors $x_i\pm \delta x_i$ is defined as $GT=|x_1-x_2|/\sqrt{\delta x_1^2 + \delta x_2^2}$.)", "The posteriors for SIDR+ and WZDR+ overlap the direct measurements of $S_8$ and $M_b$ at the $\sim  2-3\, \sigma $ level; therefore we use one half of the $2\sigma $ intervals characterizing the 1-d posteriors (these are produced in the `.h_info' files in the analysis output of MontePython).", "This gives a slightly better approximation to the true tension in the models than simply using the $1\sigma $ intervals.", "Table REF shows that the predicted value for $M_b$ from the fit to $\mathcal {D}$ in $\Lambda {\rm CDM}$ has a GT of about $5.5\sigma $ to the direct measurement of $M_b$ from SH0ES.", "In WZDR+ (and SIDR+) this tension is reduced to $2.6\sigma $ ($2.7\sigma $) due to the interacting radiation with (and without) a step.", "Similarly, Table REF shows a reduction of the GT in $S_8$ (here we compare to the combined KiDS+DES measurement) from $2.6\sigma $ in $\Lambda {\rm CDM}$ to $1.7\sigma $ in WZDR+ and $2.0\sigma $ in SIDR+.", "This reduction in $S_8$ is due to the DM-DR interaction. (Using the naive $1\sigma $ intervals, the GT for $M_b$ is $5.5/3.2/3.0$ and for $S_8$ it is $2.6/2.0/1.8$, in the order $\Lambda {\rm CDM}$/SIDR+/WZDR+.)"
, "Table: Gaussian Tensions.", "The success we see above in reducing the combined tensions in the data still leaves the question of whether the new ingredient, namely the transition scale, is working simultaneously to alleviate both tensions, whether they pull in different directions, or whether the improvements are really independent of each other.", "As was discussed in previous sections, the WZDR+ model is sensitive to the mass threshold at $z_t$ through two independent physical processes: (a) via the $\ell $ dependence of the phase shift of the CMB due to the change in $N_{\rm eff}$ when $T_d$ drops below $m_\phi $, and (b) through the suppression of the matter power spectrum from the coupling between the dark matter and the dark radiation fluid due to scattering at temperatures above $m_\phi $.", "In our previous paper we found that the CMB preferred $\log _{10} (z_t) \sim 4.3$, and it is interesting to see if this value of $z_t$ is also preferred by the $\mathcal {DS}\,$ data set: the CMB power spectrum, the CMB lensing potential, and the matter power spectrum at distance scales of order $k_8\sim h/(8\,\text{Mpc})$, which are all sensitive to the shut-off redshift of the interaction.", "To answer this question we compare the WZDR+ mean values and best-fit points for the four data sets $\mathcal {D}, \mathcal {DH}, \mathcal {DS},$ and $\mathcal {DHS}$ and check for consistency.", "Table: $\log _{10} z_t$ best-fit values.", "We find the remarkable result that the value of $z_t$ preferred by data set $\mathcal {D}$ alone (which is dominated by the CMB) agrees to within half a sigma with the preferred value for $\mathcal {DH}$, $\mathcal {DS}$ and $\mathcal {DHS}$, even though these data sets probe different physics.", "We find this coincidence interesting and consider it potential evidence for the existence of a new scale $T_d\sim 10$ eV in Cosmology corresponding to redshifts of order $z_t\sim 3\times 10^5$.", "This coincidence can also be seen in the 1d posteriors of the variable $\log _{10} (z_t)$ for the 4 different data sets shown in Figure REF and more concretely in the best-fit values of $z_t$ seen in Table REF .", "Figure: The 1D posteriors of the transition time $z_t$, fitting WZDR+ to four different data sets."
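For completeness, the $\Delta AIC$ and Gaussian Tension criteria used in this Section amount to the following elementary computations. The $\chi^2$ inputs in the example are placeholders; the $S_8$ example simply reproduces the $\sim 2.9\sigma$ direct-vs-Planck tension quoted in the Introduction.

```python
import math

def delta_aic(chi2_model, chi2_lcdm, n_extra_params):
    """Akaike information criterion difference relative to LambdaCDM,
    with a penalty of +2 per parameter beyond LambdaCDM."""
    return chi2_model - chi2_lcdm + 2 * n_extra_params

def gaussian_tension(x1, err1, x2, err2):
    """Gaussian Tension between two measurements x1 +/- err1 and x2 +/- err2."""
    return abs(x1 - x2) / math.sqrt(err1 ** 2 + err2 ** 2)

# Placeholder chi^2 values (not from our fits):
print(delta_aic(chi2_model=3800.0, chi2_lcdm=3810.0, n_extra_params=3))   # -4.0
# S8: combined direct measurement vs. the Planck value quoted in the Introduction:
print(gaussian_tension(0.769, 0.016, 0.834, 0.016))                       # ~2.9 sigma
```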
], [ "Discussion and Outlook", "In the energy range $\\Lambda _{QCD} < E< \\,{\\rm TeV}$ , the Standard Model has seven mass thresholds including a phase transition.", "Below $\\Lambda _{QCD}$ , the Standard Model has myriad mass thresholds from the resonances of quarks, but additionally the muon and electron masses, and at least two neutrino masses.", "It is somewhat striking, then, that it is commonly assumed that models of dark sectors exhibit no meaningful mass thresholds when all the physics we have ever seen is full of them.", "In this vein, we have considered the effect of a single mass threshold in a dark fluid which is gently coupled to the dark matter (the WZDR+ model).", "The mass threshold has been previously shown to allow for enhanced $H_0$ with better overall fit when compared to $\\Lambda $ CDM or to a fluid with no mass threshold (WZDR).", "In particular, the Hubble tension seems to suggest a mass threshold near $z_t \\sim 20,000$ , when the sound horizon is approximately $10h^{-1} \\text{Mpc}$ .", "Even without knowing this, it would be natural to consider extensions to the WZDR model where the dark radiation is coupled to the dark matter.", "However, noting that the threshold occurs at a location which is precisely in the range of another known anomaly, namely the $S_8$ tension, suggests that the two anomalies - the $H_0$ tension and the $S_8$ tension, may have a common origin.", "As the dark radiation passes through the mass threshold, the gentle coupling to the dark sector turns off rapidly.", "This can occur either because the particle with which the DM is interacting has become exponentially suppressed, or because the mediator mass is suddenly relevant, and the gentle interaction turns off.", "This has the natural effect of producing a CDM cosmology on large scales, and non-CDM on small scales, with the transition occurring at a scale which is singled out by the $H_0$ tension to be $\\sim 10\\, \\text{Mpc}$ .", "We have seen that this scenario naturally produces a suppressed value of $S_8$ , consistent with the directly observed value.", "It is quite striking that all of the different combinations of datasets (${\\cal D, DS, DH, DHS}$ ), all point to the same value of $z_t$ (although the preference within ${\\cal D}$ alone is quite small).", "But this is a non-trivial consistency check, without which this overall setup would be unable to reconcile these observations.", "Our efforts here are the simplest extension of WZDR.", "One could consider multi-component dark matter where only a portion couples to the dark sector, but with enhanced interaction [23], [17].", "In such a “fractional WZDR+” setup, one would expect that a similar phenomenology to the above would be found, but constraining a product of interaction strength and interacting dark matter fraction, leaving a large degeneracy.", "In the tightly coupled limit, the interacting dark matter fraction would acoustically oscillate, rather than feel a slight friction during infall.", "In this limit, one would similarly expect a good fit, but trading a precise value of $\\Gamma _0$ for a precise value of the interacting fraction $f_\\chi $ to fit the value of $S_8$ .", "We leave the details to future work.", "Beyond this, one could also imagine multiple mass thresholds, couplings to neutrinos and more.", "What is clear, however, is that the data, in their present form, provide sensitivity to the presence of a mass threshold in the dark sector.", "As data improve - both from CMB datasets of ACT, Simons Observatory 
and CMB-S4, as well as from LSS measurements KiDS, DES, HSC and future galaxy surveys with Rubin, Roman and UNIONS - it will become clear both whether a dark mass threshold is truly preferred by the data, and what sort of dark sector dynamics we are being pointed to.", "Note added: As this work was being completed, [37] appeared, which also studies DM interacting with a WZDR fluid and includes fits to additional data.", "Importantly, Ref. [37] does not consider the $z_t$-dependent turn-off of the DM-DR interaction which we studied in this paper." ], [ "Acknowledgements", "We thank Asher Berlin for collaboration and insightful discussions about phase shifts at the beginning of this work.", "The work of D.A., M.J., M.S.", "and E.N.S.", "is supported by the U.S. Department of Energy (DOE) under Award DE-SC0015845.", "N.W.", "is supported by NSF under award PHY-1915409, by the BSF under grant 2018140, and by the Simons Foundation.", "Our MCMC runs were performed on the Shared Computing Cluster, which is administered by Boston University's Research Computing Services." ], [ "Momentum transfer rate", "The perturbation equations for the dark matter and the WZDR fluid are sensitive to the momentum transfer rate between the two fluids.", "This rate is defined as $\dot{\vec{P}}_{\chi } = -a \Gamma \vec{P}_{\chi }$, the change in momentum of a DM particle $P_\chi $ due to the friction it experiences while moving through the WZDR fluid of temperature $T$.", "The thermally averaged rate is given by $\dot{\vec{P}} = \frac{a}{2 E_P} \int \frac{d^3 k}{(2\pi )^3 \, 2 E_k} f(k;T) \int \frac{d^3 k^{\prime }}{(2\pi )^3 \, 2 E_{k^{\prime }}} \frac{d^3 P^{\prime }}{(2\pi )^3 \, 2 E_{P^{\prime }}}(2\pi )^4 \delta ^{(4)}(P + k - P^{\prime } -k^{\prime }) \left|\mathcal {M}\right|^2\left(\vec{P}\,^{\prime } - \vec{P}\right)~,$ where $P,\,P^{\prime }$ stand for the incoming and outgoing DM momenta, $k,\,k^{\prime }$ stand for the incoming and outgoing WZDR momenta, $f(k;T)$ is the thermal distribution function for the incoming scatterers from the thermal bath, and we neglect the stimulated emission/Pauli blocking term for final state particles.", "As discussed in the text, the $\phi -\chi $ scattering is suppressed by the massive DM propagator.", "Here we consider only the $\psi -\chi $ scattering mediated by $t$-channel $\phi $ exchange.", "The matrix element relevant for this process has the following dependence on the kinematical variables $\left|\mathcal {M}\right|^2 = \frac{g_{\chi \phi }^2 g_{\psi \phi }^2}{4} \frac{t(t - 4M^2)}{(t - m_\phi ^2)^2}~,$ where the subscripts of the coupling constants indicate the particles involved in the interaction.", "Plugging this matrix element into Eq. (REF ) and performing some tedious algebra, one finds $\Gamma = \tilde{\alpha }^2\frac{T^2}{M} \int _{0}^{\infty } d\tilde{k} \, \tilde{k} ^2 e^{-\tilde{k} }\int \frac{ dc_\theta \, (1-c_\theta )^2}{\left[2(1-c_\theta ) + \frac{x^2}{\tilde{k}^2}\right]^2}~,$ where $x = m_\phi / T$, and $\tilde{\alpha }$ is an effective average coupling constant normalized such that $\Gamma =\tilde{\alpha }^2\frac{T^2}{M}$ in the case of $x= 0$.", "The integrals above can be evaluated analytically in terms of an unwieldy expression involving various special functions, but in all regions of interest it is approximated to a few percent precision by $\left(\frac{1}{1 - 0.05 \sqrt{x} + 0.131 x }\right)^4 ~,$ where the coefficients have been tuned to approximate the exact result.", "Finally, we
parameterize the momentum transfer rate as $\Gamma (x) = \Gamma _0\frac{(1+z_t)^2}{x^2} \left(\frac{1}{1 - 0.05 \sqrt{x} + 0.131 x }\right)^4 ~,$ where $\Gamma _0$ is the momentum transfer rate extrapolated to today in a theory in which there is no step (i.e. where $m_\phi =0$).", "With this definition the interaction rate in the UV (i.e. $T\gg m_\phi $) is $\Gamma \simeq \Gamma _0 (T_{UV}/T_0)^2=\Gamma _0 (1+z)^2 (7/15)^{2/3}$, so that the scattering rate in the UV is smaller than the corresponding rate in SIDR+ by $(7/15)^{2/3}\sim 0.6$.", "For completeness, the momentum transfer rate enters into the dipole equations of the interacting DM and the Wess-Zumino fluid as $\dot{\theta }_{dm} = -\mathcal {H}\theta _{dm} + k^2 \psi + a\Gamma \left(\theta _{wz} - \theta _{dm}\right)~, \qquad \dot{\theta }_{wz} = k^2 \left(\frac{\delta _{wz}}{4} + \psi \right) - a\Gamma R \left(\theta _{wz} - \theta _{dm}\right)~,$ where $R \equiv 3\rho _{dm}/ 4\rho _{wz}$." ], [ "Triangle Plots and Parameter Values", "Since our fits include observables which are sensitive to the MPS at weakly non-linear scales, one might wonder about the importance of non-linear effects.", "To probe the sensitivity of our results to non-linear physics we evaluated the $\chi ^2$ values of several of our best fit points with and without using halofit [38], [39].", "We found slight improvements in the fits to the CMB and CMB lensing (with overall $\delta \chi ^2\sim 2-4$) with no significant differences in the level of improvement between models.", "We conclude that the impact of non-linear effects on our analysis is small and we only present results without halofit.", "We also note that $S_8$ is defined as an integral over the linear matter power spectrum, which means that its sensitivity to non-linear physics is a subtlety that must be faced in the extraction of $S_8$ from observations, not in the theory calculation.", "Since halofit is tuned to reproduce the non-linear effects for $\Lambda {\rm CDM}$, it is not necessarily accurate for WZDR+, and a full analysis including non-linear effects in WZDR+ is beyond the scope of this work.", "Additionally we assume that the extra radiation in SIDR+ and WZDR+ is populated after BBN, so that the predicted abundance of primordial helium $Y_p$ is sensitive only to the Standard Model radiation at BBN ($N_{\rm eff}= 3.044$).", "Figure: A comparison of the posteriors of WZDR+ for the four different data sets. The dark and light shaded regions correspond to 68% and 95% C.L., respectively. We see that the preference for the location of the transition ($z_t$) is fairly consistent across the datasets, possibly signaling new physics at this scale. Additionally the fit to $\mathcal {DHS}$ in comparison with the fits to $\mathcal {DH}$ and $\mathcal {DS}$ shows that WZDR+ is capable of simultaneously alleviating the $H_0$ and $S_8$ tensions.", "Figure: A comparison of the posteriors of $\Lambda {\rm CDM}$, SIDR+, and WZDR+ fitted to $\mathcal {DHS}$. The dark and light shaded regions correspond to 68% and 95% C.L., respectively. From this comparison we see the importance of the step in simultaneously alleviating the $H_0$ and $S_8$ tensions. Although SIDR+ allows for larger $H_0$ values, it does no better than $\Lambda {\rm CDM}$ in resolving the $S_8$ tension.", "Table: Mean and $\pm 1 \sigma $ values for fits to the various data sets.", "Table: Best-fit values for fits to the various data sets."
, "Table: Mean and $\pm 1 \sigma $ values for fits of WZDR+.", "Table: Best-fit values for fits of WZDR+.", "Table: Best-fit points used for calculating the $Q_{DMAP}$ values. Note that for the data sets $\mathcal {DS}\,$ and $\mathcal {DHS}\,$ in this table, and in the resulting $Q_{DMAP}$ values only, we excluded the Planck lensing likelihood from the $\mathcal {S}$ data set. This was done because it is not straightforward to define $Q_{DMAP}$ with it included in $\mathcal {S}$. Here $Q_{DMAP}(\mathcal {DX})=\left(\chi ^2_\mathcal {DX}-\chi ^2_\mathcal {D}\right)^{1/2}$ for $\mathcal {X} = \mathcal {H}, \mathcal {S}, \mathcal {HS}$." ] ]
2207.03500
[ [ "Mirror Complementary Transformer Network for RGB-thermal Salient Object\n Detection" ], [ "Abstract RGB-thermal salient object detection (RGB-T SOD) aims to locate the common prominent objects of an aligned visible and thermal infrared image pair and accurately segment all the pixels belonging to those objects.", "It is promising in challenging scenes such as nighttime and complex backgrounds due to the insensitivity to lighting conditions of thermal images.", "Thus, the key problem of RGB-T SOD is to make the features from the two modalities complement and adjust each other flexibly, since it is inevitable that any modalities of RGB-T image pairs failure due to challenging scenes such as extreme light conditions and thermal crossover.", "In this paper, we propose a novel mirror complementary Transformer network (MCNet) for RGB-T SOD.", "Specifically, we introduce a Transformer-based feature extraction module to effective extract hierarchical features of RGB and thermal images.", "Then, through the attention-based feature interaction and serial multiscale dilated convolution (SDC) based feature fusion modules, the proposed model achieves the complementary interaction of low-level features and the semantic fusion of deep features.", "Finally, based on the mirror complementary structure, the salient regions of the two modalities can be accurately extracted even one modality is invalid.", "To demonstrate the robustness of the proposed model under challenging scenes in real world, we build a novel RGB-T SOD dataset VT723 based on a large public semantic segmentation RGB-T dataset used in the autonomous driving domain.", "Expensive experiments on benchmark and VT723 datasets show that the proposed method outperforms state-of-the-art approaches, including CNN-based and Transformer-based methods.", "The code and dataset will be released later at https://github.com/jxr326/SwinMCNet." 
], [ "Introduction", "Salient object detection focuses on mimicking the attention mechanism in the human visual system (HVS) to locate the most salient objects in a scene [4], which is applied in various fields such as computer vision, computer graphics and robotics.", "Representative applications include image understanding, semantic segmentation, non-photo-realist rendering, automatic image cropping and human-robot interaction, etc.", "Although significant progress has been made in the SOD field over the past several years, there is still room for improvement in the single modality SOD (i.e., detection on a single RGB input image) when faced with challenging factors, such as low illumination conditions or the background is cluttered in the scenes [5], [6].", "To break through the performance bottleneck, researchers try to introduce additional supplementary knowledge to cope with challenging scenes of SOD, such as depth maps [5], thermal maps [7] or light field data [6].", "In the field of multi-modal SOD, the depth map are introduced earlier by researchers as a complementary information to RGB image, namely RGB-D SOD.", "The value of each pixel in the depth map represents the distance of that point from the camera in the scene.", "Thus the depth map usually contains rich spatial position and structural information about the salient objects.", "It is often used as an auxiliary modality to handle challenging scenes such as low contrast and complex backgrounds [8], [9], [2].", "However, since depth map is easily disturbed by low illumination and occlusion, which limits the practical applications [8].", "Thermal infrared cameras convert temperature differences into images by capturing infrared radiation from all objects in nature with temperatures above absolute zero.", "Its imaging mechanism does not depend on lighting conditions, thus the combination of RGB and thermal modalities has natural advantages in challenging scenes such as bad weather, night and so on.", "RGB-T provides new ideas for many challenging computer vision tasks [10], for example, RGB-T tracking, RGB-T crowd counting, and RGB-T person re-identification.", "In this paper, we focus on exploring the salient objection detection task in an aligned visible and thermal infrared image pair, which is named RGB-T SOD.", "In fact, RGB images contain rich color and texture features, in most cases the objects are more salient relative to the background.", "For thermal images, the infrared radiation from the same object tends to be uniform or varies very subtle, while the temperature difference between different objects is relatively large (except for thermal crossover occurs).", "It can be said that the same object is always homogeneous and the edge of objects is usually clear in the thermal imaging of a scene by an infrared camera.", "As shown in Fig.", "REF , in the scenes under low illumination or image blur, traditional cameras cannot capture enough details of the objects (see the top two rows).", "It is difficult for RGB-based SOD methods (e.g., GCPANet [1]) to accurately segment salient objects in this case.", "The RGB-D SOD methods usually regard the depth maps as auxiliary information of RGB images, and use a lightweight network to learn the features of objects from depth maps.", "Due to different data characteristics, it could not obtain satisfactory results by simply applying the RGB-D SOD model to the RGB-T SOD task (e.g., EFNet [2]).", "As a recent RGB-T SOD model, SwinNet [3] performs well in most scenes, however, it is 
still not accurate enough in some challenging scenes (e.g., bad thermal scenes).", "The last column in Fig. REF illustrates that our model can make full use of the complementary benefits of RGB and thermal modalities, thus obtaining satisfactory results.", "To make the features from the two modalities complement and adjust each other flexibly, we design a mirror complementary network (MCNet) based on the complementary advantages of RGB and thermal images (see Fig. REF ).", "Inspired by the successful use of Transformer networks in the field of computer vision [11], [3], we study an RGB-T SOD method based on a Transformer-CNN hybrid structure.", "The hybrid structure (e.g., Swin Transformer [11]) combines the long-range dependency merit of the Transformer and the locality and hierarchy advantages of the CNN, and its computational complexity is limited to a linear function of image size through the shifted window operation, so it is very suitable for pixel-level prediction tasks with high image resolution.", "In this paper, we extract the hierarchical features of the RGB and thermal modalities respectively based on a Transformer feature extractor, and model the complementary relationship of the saliency information implied in the two modalities by applying the locality merit of the CNN.", "The idea of mirror complementarity is similar to the full-duplex or bi-directional strategies utilized in video or RGB-D SOD [12], [13]; we design the overall structure of MCNet according to the characteristics of RGB-T data.", "The two streams of MCNet have fully equivalent encoding and decoding structures, and use a set of complementary grayscale labels to supervise the salient features of the two modalities respectively.", "Specifically, the complementary grayscale labels control the RGB stream to focus on the skeleton region of the salient objects and the thermal stream to focus on the contour of the salient objects.", "On the one hand, different color blocks of the same object can easily be misjudged as different objects in the RGB image, while the thermal image is likely to be consistent.", "On the other hand, the thermal image is single-channel and has low contrast between objects and the background.", "Therefore, as shown in Fig. REF , we embed an attention-based feature interaction module between the low-level features of the two modalities, which introduces thermal information into the skeleton branch dominated by RGB to avoid the separation of the same object and maintain the integrity of the salient region.", "Also, RGB information is introduced into the contour branch dominated by thermal information to enable the network to capture the saliency of the objects and filter out irrelevant background clutter.", "Furthermore, in order to fuse the complementary semantic features of the two modalities and suppress the saliency bias, we introduce a serial multiscale dilated convolution (SDC) based feature fusion module to make the model focus on the common salient regions and obtain more accurate segmentation.", "In summary, this paper makes three major contributions: We design a mirror complementary RGB-T SOD network (MCNet) with a Transformer-CNN hybrid architecture.", "By combining the Transformer-based feature extraction module and CNN-based feature interaction and fusion modules, the proposed model can effectively utilize the complementary information of the two modalities, thus outperforming the state-of-the-art RGB-T SOD models.", "An attention-based feature interaction module and an SDC-based feature fusion module are proposed to achieve the
complementary fusion of low-level features and semantic features between the RGB and thermal modalities.", "The two modalities can adjust and complement each other flexibly, which significantly improves the predicted saliency maps.", "We build a more challenging RGB-T SOD dataset VT723 and make it available to the research community.", "Most of the scenes of this dataset contain challenges common in the real world, such as low illumination and complex backgrounds.", "Extensive experiments on three benchmark RGB-T SOD datasets and VT723 show that the proposed model performs well in different challenging scenes.", "The rest of this paper is organized as follows: the related work is introduced in Sec. II, and Sec. III illustrates the proposed MCNet. Extensive experiments on benchmark datasets and the proposed VT723 dataset are conducted in Sec. IV.", "Finally, we conclude our work in Sec. V.", "Early non-deep SOD models rely on low-level features and certain heuristics [4] (e.g., color contrast, texture, structure and background priors).", "With the successful application of deep learning technology in the CV field, various deep learning SOD models have been proposed.", "Deep models based on the multi-layer perceptron (MLP) [14], [15] first showed better performance than traditional schemes.", "Later, fully convolutional networks (FCNs) with end-to-end pixel-level operation were adopted to further improve SOD efficiency and performance, and became the mainstream SOD architecture [1], [16].", "According to the basic idea of feature representation, these methods are summarized into three representative categories: edge-focus or boundary-focus [17], multiscale learning [18] and integrity learning [19].", "Although deep learning based algorithms have greatly promoted the development of SOD, the existing methods based on a single RGB image source still face challenges in extremely complex scenes.", "This motivates researchers to introduce additional information for the SOD task."
], [ "RGB-D salient object detection", "Existing CNN-based RGB-D SOD methods [20], [21], [2], [22] can be divided into three categories according to the mode of feature extraction and cross-modal fusion.", "The first category uses input fusion to directly combine the original data of the two modalities.", "Some researches  [20], [23] concatenated RGB and depth across channel dimension as the raw input of the encoder, and Chen et al.", "[24] introduces 3D CNN to extract saliency features of the high-dimensional fusion modality.", "The second category [21], [25] adopt a Siamese network with shared weights for the RGB/depth stream as an encoder and conduct independent feature extraction of the two modalities.", "The third category extract features from RGB and depth maps independently by using a two-stream network, according to the fusion stage, it can be further divided into result fusion [26], [27] and feature fusion [2], [28], [9], [29], [30], [31].", "The result fusion generate the final saliency map by fusing the respective predictions of the two modalities.", "Feature fusion usually design complex fusion module based on the progressive fusion mechanism of encoder and decoder, and this kind of model gradually becomes the mainstream structure of RGB-D SOD [2], [28].", "Figure: The pipeline of mirror complementary network (MCNet).", "MCNet uses a Transformer-based two-stream structure to extract hierarchical features of RGB and thermal modalities, and adopts a complementary set of labels to supervise them respectively.", "Specifically, the encoders are based on Transformer backbone, followed by four-layer CNN-based decoders.Through the attention-based feature interaction module (Cross_ATTCross\\_{ATT}) and SDC-based feature fusion module (SDC), the proposed model achieves the complementary fusion of low-level features and semantic features between RGB and thermal modalities." ], [ "RGB-T salient object detection", "Early RGB-T SOD methods mostly use graph-based techniques and design bottom-up or top-down models to learn cross-modal feature representations.", "Wang et al.", "[32] create the first benchmark RGB-T SOD dataset called VT821 and propose a graph-based multi-task manifold ranking algorithm to fuse RGB and thermal data.", "Tu et al.", "adopted multi-mode multi-scale manifold ranking [33] and cooperative graph learning algorithm [34] to achieve cross-modal SOD, and they also built a more challenging dataset VT1000 in [34].", "In later works, Tu et al.", "[7] contributed a large dataset VT5000, which provides benchmark training data for deep learning-based RGB-T SOD.", "With this dataset, they proposed a baseline deep learning method based on two-stream structure.", "Similar to [7], later CNN-based methods [10], [35], [36], [37] mostly adopted feature fusion in two-stream structure to achieve cross-modal fusion for accurate RGB-T SOD.", "Zhou et al.", "[35] used a bilateral inversion fusion module to bilaterally fuse the foreground and background information.", "To prevent the information of the two modalities from over-influencing each other, Tu et al.", "[36] only performed fusion in the decoding stage.", "Wang et al.", "[37] designed step by step fusion of cross-modal from the perspective of mutual guidance of two modalities.", "The key of RGB-T SOD is to exploit the correlations between RGB and thermal images and fuse these two modalities effectively to accurately segment the common salient regions." 
], [ "Vision Transformer for SOD", "Inspired by the breakthroughs from Transformer [38] networks in Natural Language Processing (NLP) domain, researchers applied it to computer vision tasks and achieved remarkable results [39], [40].", "Dosovitskiy et al.", "[41] proposed ViT model based on transformer for the first time in large-scale supervised image classification tasks.", "Wang et al.", "[42] designed a progressive shrinking pyramid Transformer named PVT, and Liu et al.", "[11] proposed Swin Transformer with sliding window operation and hierarchical design.", "As backbone of dense prediction, these two structures can combine numerous detection methods and have good performance in various image/video tasks.", "Subsequently, researchers apply Transformer to SOD tasks to further improve the detection performance.", "Ren et al.", "[43] and Zhu et al.", "[44] applied the pure Transformer-based encoder on single modality SOD, while some researches [45], [46] adopted PVT [42] and CNN hybrid structure for feature extraction and saliency map prediction.", "Some researches [47], [48] modeled the long-range characteristics of RGB and depth respectively based on pure Transformer architecture.", "And others [49], [50], [51] combined CNN and Transformer into hybrid architecture, then used the similarity-based attention mechanism of Transformer to achieve accurate RGB-D SOD.", "Liu et al.", "[3] uses Swin Transformer [11] encoder to extract features of RGB and thermal/depth images, and then adopt edge guidance to achieve cross-modal fusion for SOD.", "As discussed in previous section, the characteristic of RGB-T data is different from that of RGB-D data.", "Therefore, it is necessary to design an effective SOD network based on RGB-T data.", "In this work, we propose a Transformer-CNN hybrid network to effective utilize the complementary information of thermal and RGB modalities, which performs well in a variety of challenging scenes." ], [ "Motivation", "Generally speaking, RGB modality contains rich color and texture information, so the objects are more significant; while the thermal modality contains clear and continuous edge information.", "However, the RGB or thermal data in RGB-T image pairs is not always helpful for SOD due to the influence of complex imaging conditions.", "For example, the noisy and low contrast RGB modality caused by low light conditions (e.g., the first row in Fig.", "REF ) and the failure of thermal modality caused by thermal crossover (e.g., the fourth row in Fig.", "REF ).", "The imaging mechanism of thermal modality determines that RGB-T data is significantly different from RGB-D data [8], since the former allows bad RGB imaging conditions (RGB may not contain much information) but the latter is usually based on high quality RGB image [5], [9], [2].", "Therefore, the key of the RGB-T SOD task is how to extract features conducive to salient objects representation based on the potential cues of the two modalities, and make them mutually adjustable and flexible complementary.", "To reflect challenging scenes such as extreme illumination and complex backgrounds that are common in the real world, we build an RGB-T SOD dataset containing 723 pairs of RGB-thermal images based on autonomous driving scenes (see Sec.", "REF )." 
], [ "Overview of the proposed network", "We model the common saliency of RGB and thermal modalities by a symmetric two-stream network supervised by a pair of complementary labels.", "The RGB images emphasize the overall saliency of objects with its rich color and texture information, while the thermal images can help to obtain better spatial consistency and contour localization.", "Specifically, the hierarchical features of two modalities are first constructed based on Transformer-based feature extraction module, then an attention-based feature interaction module are designed to perform channel-aware interaction and spatial-cross interaction on the feature maps.", "Finally, a serial multiscale dilated convolution module is proposed to generate the refined fusion result.", "Fig.", "REF shows the overview of the proposed model, the details are described in the following sections." ], [ "Transformer-based feature extraction module", "In this section, we design a Transformer-based feature extraction module.", "The hierarchical features of two modalities are constructed based on two independent Swin Transformer [11] backbone, then the features of last four layers are used for further feature interaction and fusion.", "By utilizing self-attention based on non-overlapping local windows and cross-window mechanisms, the locality and hierarchy of CNN and the long-range dependencies of Transformer are introduced simultaneously.", "Considering the trade-off between model performance and computational efficiency, as shown in Fig.", "REF , we use the pre-trained Swin-B as the backbone encoder, which accept the input size of 384*384.", "Firstly, the input is segmented into a series of non-overlapping patches.", "Then, the model adopts a hierarchical design, consisting of 4 stages in total.", "The channels of the input feature map of each stage are doubled and the resolution is halved, so as to expand the receptive field layer by layer.", "At this point, we can obtain two groups of feature maps from the two modalities containing five different resolutions, which are respectively denoted as: $SF_{rgb} = \\lbrace SF^i_{rgb}|i = 1,2,3,4,5\\rbrace $ and $SF_t = \\lbrace SF^i_{t}|i = 1,2,3,4,5\\rbrace $ .", "Compared to high-level features, low-level features contribute less to the performance of deep polymerization methods [52].", "On the other hand, the larger spatial resolution of low-level features means more computation consumption.", "Therefore, in order to make full use of the global long-range dependency features modeled by Transformer and avoid excessive computing costs, we retain the features of the last four layers (that is, the outputs from the four stages of Swin-B) and discard the lowest features $SF^1_{rgb}$ and $SF^1_{t}$ .", "In Fig.", "REF , we show an example of hierarchical features outputted from Swin-B.", "As can be seen, the lowest features (a) is similar to the original input because it has not yet passed through Transformer blocks.", "On the one hand, it contains more background clutter; On the other hand, the edge information of the object is reserved in the following two layers of features (b) and (c) (especially (b)).", "The ablation experiments in Sec.", "REF about the hierarchical features of Swin-B demonstrate that the last four levels contain more valid information that can represent salient objects.", "Figure: The proposed attention-based feature interaction module.", "We use Swin-B as the backbone encoder to extract the multi-scale features of the two modalities as SF rgb i 
$SF^i_{rgb}$ and $SF^i_{t}$, and then two groups of features of the same size are fed into the attention-based interaction module to obtain the low-level interaction features of the two modalities as $LF^i_{rgb}$ and $LF^i_{t}$.", "Each layer of the attention-based interaction module includes a channel-aware interaction module and a spatial-cross interaction module." ], [ "Attention-based feature interaction module", "As analyzed above, the low-level features from the Transformer backbone capture continuous boundaries and global details but also carry noise, while the high-level features capture the semantic features of salient objects but fail to extract boundaries.", "In order to distill effective cues from the two modalities and let them adjust each other, we propose a novel attention-based feature interaction module following the feature extraction module.", "As shown in Fig. REF , the hierarchical features $SF_{rgb}$ and $SF_t$ output by the Swin Transformer blocks are uniformly squeezed into 64 channels through two concatenated convolution operations with kernel sizes 3$\\times $ 3 and 1$\\times $ 1.", "We denote these two groups of features as: $F_{rgb} = \\lbrace F^i_{rgb}|i = 2,3,4,5\\rbrace $ and $F_t = \\lbrace F^i_{t}|i = 2,3,4,5\\rbrace $ .", "Then the proposed attention-based feature interaction module performs channel-aware interaction and spatial-cross interaction on each set of feature maps with the same resolution.", "We get two sets of attention interaction maps by: $Att^i_{rgb} = S_{att}(F^i_{t})+Conv^{3\\times 3}(C_{att}(F^i_{fuse})),\\vspace*{-5.69054pt}$ $Att^i_{t} = S_{att}(F^i_{rgb})+Conv^{3\\times 3}(C_{att}(F^i_{fuse})),$ where $C_{att}(\\cdot )$ and $S_{att}(\\cdot )$ denote the channel attention and spatial attention [53], respectively.", "$F^i_{fuse}$ is the shared channel-aware feature of the two modalities, and the detailed structure is shown at the bottom of Fig. REF .", "Figure: The original image pair and five-level feature maps from Swin-B.", "From (a) to (e) are the feature maps from different levels of the encoder, respectively.", "The feature maps in (a) are similar to the original inputs with background noise, while the deep features such as edge and body details are mainly extracted in (b) - (e).", "Figure: The serial multiscale dilated convolution (SDC) module.", "The SDC module takes the concatenated features of the two modalities as input and passes them through a cascade of dilated convolution blocks with different dilation rates to expand the receptive field.", "The low-level interaction features from the two modalities are further integrated and fused.", "Channel-Aware Interaction (CAI) module.", "The CAI module is designed to select channel-correlated, saliency-consistent features between the two modalities ($C_{att}^i$ as shown at the bottom of Fig. REF ), thus extracting more discriminative features for object understanding.", "To enhance the common pixels while alleviating the ambiguous ones in the feature maps, we use pixel-level multiplication to get the shared channel-wise features $F^i_{fuse}$ : $F^i_{fuse} = Concat(\\hat{F}^i_{rgb}\\otimes \\hat{F}^i_{t},\\hat{F}^i_{rgb},\\hat{F}^i_{t}),$ where $\\hat{F}^i_{rgb}$ and $\\hat{F}^i_t$ are obtained from $F^i_{rgb}$ and $F^i_t$ through a 3$\\times $ 3 convolution followed by a batch normalization layer and a ReLU activation function, $Concat(\\cdot )$ denotes the concatenation operation, and $\\otimes $ denotes pixel-wise multiplication.", "Then, the shared channel-wise features are fed into the
channel attention module [53].", "More specifically, $C_{att}^i =\\sigma (MLP(P_{max}(F^i_{fuse}))+MLP(P_{avg}(F^i_{fuse})))\\otimes F^i_{fuse},$ where $\\sigma $ denotes the sigmoid function, and $\\otimes $ denotes multiplication with dimension broadcasting.", "$P_{max}(\\cdot )$ and $P_{avg}(\\cdot )$ represent the global max pooling and average pooling operations for each feature map respectively, and $MLP(\\cdot )$ is a two-layer perceptron.", "Spatial-Cross Guidance (SCG) module.", "In this section, we propose a novel SCG module to handle the cross-modal interaction of RGB and thermal images.", "Different from most existing interaction strategies [36], [10], [37], SCG makes the features of the two modalities guide each other, and the two branches are finally supervised by the skeleton label (RGB branch) and the contour label (thermal branch) after the corresponding decoders, respectively.", "Our design comprehensively considers the characteristics of the two modalities (as analyzed in Sec. ), so that their spatial consistency can complement each other and improve the robustness of the model under extreme conditions.", "Specifically, the SCG module first derives the spatial-wise attention features [53] of the two modalities separately, and then cross-adds them to the hierarchical features of the two modalities.", "The detailed operations are defined as follows: $S_{att}(F^i) = \\sigma (Conv^{7\\times 7}(Concat(R_{max}(F^i),R_{avg}(F^i))))\\otimes F^i,$ where $F^i$ denotes the input feature map and $\\sigma $ denotes the sigmoid function.", "$R_{max}(\\cdot )$ and $R_{avg}(\\cdot )$ represent the global max pooling and average pooling operations for each point in the feature map along the channel axis respectively, $Concat(\\cdot )$ represents a concatenation operation along the channel dimension, and $Conv^{7\\times 7}$ represents a convolution operation with filter size $7\\times 7$ .", "Finally, the output attention interaction maps $Att_{rgb} = \\lbrace Att^i_{rgb}|i = 2,3,4,5\\rbrace $ and $Att_{t} = \\lbrace Att^i_{t}|i = 2,3,4,5\\rbrace $ are added to the corresponding backbone features of the two modalities.", "We thus obtain the low-level interaction features $LF_{rgb} = \\lbrace LF^{i}_{rgb}|i = 2,3,4,5\\rbrace $ and $LF_{t} = \\lbrace LF^{i}_{t}|i = 2,3,4,5\\rbrace $ that complement and regulate each other in the channel and spatial dimensions as follows: $LF^i_{rgb} = F^i_{rgb} + Att^i_{rgb},\\ \\ LF^i_{t} = F^i_{t} + Att^i_{t}.$ To demonstrate the effect of the proposed attention-based feature interaction module, we show the feature maps output by the backbone encoder and the attention-based interaction module in challenging scenes with low illumination and thermal crossover.", "The first column of Fig. REF provides the input image pair and the saliency label.", "(a) and (b) are the feature maps output by the first block of Swin-B and the attention-based interaction module, respectively.", "(a) $F^2_{rgb}$ and (a) $F^2_{t}$ do not contain a complete salient object due to the low illumination and thermal crossover.", "However, the low-level interaction feature maps (b) $LF^2_{rgb}$ and (b) $LF^2_{t}$ are clearly complementary to each other, and the complete object region is mapped on most channels.", "This confirms that the proposed attention-based feature interaction module can learn the complementary information of the two modalities and model the common saliency region.", "In Sec. REF we further discuss the impact of variants of the proposed cross-modal attention module on cross-modal saliency modeling.", "Figure:
Feature visualization in three stages of the proposed mirror complementary structure.", "The first 16 channels of the feature maps are shown in the figure.", "The first column shows RGB, GT, and Thermal, respectively.", "(a) $F^2_{rgb}$ and $F^2_{t}$ are the second-layer backbone features.", "(b) $LF^2_{rgb}$ and $LF^2_{t}$ are the corresponding layer features output by the attention-based feature interaction module.", "(c) $OUT_{rgb}$ and $OUT_{t}$ denote the deep interactive features from the SDC module." ], [ "SDC-based feature fusion module", "In this section, we design a serial multiscale dilated convolution (SDC) based feature fusion module, applied after obtaining the predictions of the two modalities.", "This module helps the model capture objects of various sizes, and in particular improves the segmentation of small objects that are difficult to infer.", "Through the interactive iteration of deep semantic features, the proposed model can focus on common salient regions and obtain more accurate segmentation.", "Specifically, the diverse semantic features of $LF_{rgb}$ and $LF_{t}$ are fused through layer-by-layer aggregated decoders $Dec_{RGB}$ and $Dec_{T}$ ; thus, the two branches generate corresponding predictions under the supervision of skeleton labels and contour labels, respectively.", "As shown in Fig. REF , the SDC module takes the concatenated features as input to further refine the saliency maps: $SDC_{in} = Concat(Dec_{RGB}(LF_{rgb}), Dec_T(LF_{t})).$ Inspired by [54], we concatenate a group of dilated convolutions with different dilation rates $\\gamma = (1,3,5,7)$ to further aggregate the two modalities.", "Specifically, we place two plain $\\gamma \\times \\gamma $ and 1$\\times $ 1 convolution layers before each dilated convolution layer, and use max pooling after each dilated convolution layer.", "Then these multi-level features output by each dilated convolution layer, $SDC_{out} = \\lbrace SDC^{i}_{out}|i = 2,3,4,5\\rbrace $ , are passed through 3$\\times $ 3 convolution layers to obtain two sets of deep interactive features $DF_{rgb} = \\lbrace DF^{i}_{rgb}|i = 2,3,4,5\\rbrace $ and $DF_{t} = \\lbrace DF^{i}_{t}|i = 2,3,4,5\\rbrace $ , which are suited to the RGB decoder and the thermal decoder, respectively.", "Direct superposition is used to fuse the low-level interaction features $LF_{rgb}$ and $LF_{t}$ with the deep interactive features, and the final saliency map is generated by concatenating these two complementary predictions as follows: $Out = Concat(Dec_{RGB}(DF_{rgb}+LF_{rgb}), Dec_T(DF_{t}+LF_{t})).$ Figure: Sample challenging image pairs and annotated ground truths from our VT723 dataset.", "(a) and (b) indicate the RGB and thermal images, respectively.", "(c) is the corresponding ground truth of the RGB-T image pairs.", "The last column shows the statistics of the number of typical challenging scenes.", "The last column ((c) $OUT_{rgb}$ and $OUT_{t}$ ) in Fig. REF shows the feature maps of the two modalities output by the SDC module.", "This shows that the deep interaction allows more channels to capture the complete salient object, while the background clutter is largely eliminated.", "More ablation experiments can be found in Sec. REF ."
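As a concrete reference for the interaction equations of the previous section, the following is a minimal PyTorch sketch of the channel attention, spatial attention, and cross-modal combination used in the attention-based feature interaction module. The channel width of 64 follows the text, while the MLP reduction ratio, the single shared 3x3 convolution after the channel attention, and all module and variable names are our assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # C_att(F) = sigmoid(MLP(maxpool(F)) + MLP(avgpool(F))) * F
    def __init__(self, channels, reduction=16):  # reduction ratio is an assumption
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, f):
        b, c, _, _ = f.shape
        w_max = self.mlp(torch.amax(f, dim=(2, 3)))   # global max pooling per channel
        w_avg = self.mlp(torch.mean(f, dim=(2, 3)))   # global average pooling per channel
        w = torch.sigmoid(w_max + w_avg).view(b, c, 1, 1)
        return w * f                                  # broadcast multiplication

class SpatialAttention(nn.Module):
    # S_att(F) = sigmoid(Conv7x7(concat(max_c(F), mean_c(F)))) * F
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, f):
        f_max = torch.amax(f, dim=1, keepdim=True)    # max along the channel axis
        f_avg = torch.mean(f, dim=1, keepdim=True)    # mean along the channel axis
        w = torch.sigmoid(self.conv(torch.cat([f_max, f_avg], dim=1)))
        return w * f

class AttentionInteraction(nn.Module):
    """One level of the channel-aware / spatial-cross interaction (sketch)."""
    def __init__(self, channels=64):
        super().__init__()
        def conv_bn_relu():
            return nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                 nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.hat_rgb, self.hat_t = conv_bn_relu(), conv_bn_relu()   # F_hat branches
        self.c_att = ChannelAttention(3 * channels)
        self.s_att = SpatialAttention()
        self.fuse_conv = nn.Conv2d(3 * channels, channels, 3, padding=1)

    def forward(self, f_rgb, f_t):
        fr, ft = self.hat_rgb(f_rgb), self.hat_t(f_t)
        f_fuse = torch.cat([fr * ft, fr, ft], dim=1)          # shared channel-wise features
        shared = self.fuse_conv(self.c_att(f_fuse))           # Conv3x3(C_att(F_fuse))
        att_rgb = self.s_att(f_t) + shared                    # spatial cue comes from thermal
        att_t = self.s_att(f_rgb) + shared                    # spatial cue comes from RGB
        return f_rgb + att_rgb, f_t + att_t                   # LF_rgb, LF_t
```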
], [ "RGB-T label decoupling strategy", "Inspired by [16], we design a label decoupling strategy according to the characteristic of RGB-T data.", "Since thermal images usually have distinct edge information (expect for thermal cross scenes) while RGB images provide more details of the salient object, we decompose the original label into a contour map and a skeleton map which are used to supervise thermal and RGB modalities, respectively.", "Among them, the skeleton map focuses on the main area of the salient object, and the contour map focuses on the outer contour of the salient object.", "The framework is shown in Fig.", "REF .", "Specifically, by identifying the Euclidean distance between the pixels belong to salient objects and background, the binary saliency label is converted into a grayscale image (the grayscale value only appears in salient regions).", "Then the gray values are normalized to obtain a skeleton map that supervises the RGB modality.", "Correspondingly, by removing the skeleton map from the original saliency map, the contour map is obtained for supervising the thermal modality.", "Finally, the final prediction after fusion module is supervised by the complete label.", "If both modalities are available, the model could capture the correlation between them and focus its attention on the common salient regions; when suffering extreme lighting conditions or complex thermal environments which results in the failure of any modal, the model could focus on the information-rich branch and accurately segment salient objects with the help of another modal." ], [ "Loss function", "Based on the proposed RGB-T label decoupling strategy, we obtain three outputs and labels of the RGB branch, the thermal branch, and the fusion module, respectively.", "So, the total loss $\\mathcal {L}$ can be defined as the combination of three losses as follows: $\\mathcal {L} = \\ell _{rgb} + \\ell _{thermal} + \\ell _{fusion},$ where $\\ell _{rgb}$ , $\\ell _{thermal}$ and $\\ell _{fusion}$ denote RGB loss, thermal loss and fusion loss, respectively.", "Considering the global structure of the fusion features from RGB and thermal data, we utilize the IoU loss [55] to measure the similarity and focus on common salient regions.", "Moreover, the BCE loss [56] is utilized to maintain a smooth gradient for all pixels.", "SSIM loss [57] is also introduced into our training.", "For the RGB and thermal data, SSIM loss helps the optimization to focus on the boundary of salient objects.", "For RGB branch and thermal branch, since skeleton label and contour label are not binary, they cannot be used for IOU loss.", "So, for $\\ell _{rgb}$ and $\\ell _{thermal}$ , we directly take the sum of BCE loss and SSIM loss.", "Moreover, the hybrid loss $\\ell _{fusion}$ is defined as: $\\ell _{fusion} = \\ell _{bce} + \\ell _{ssim} + \\ell _{iou},$ where $\\ell _{bce}$ , $\\ell _{ssim}$ and $\\ell _{iou}$ denote BCE loss, IoU loss and SSIM loss, respectively.", "Table: Performance comparison with state-of-the-art methods on three testing datasets.", "The best and second best results are highlighted in red and blue respectively.Figure: Performance comparison with state-of-the-art SOD methods on VT5000, VT1000, and VT821 datasets.", "The first row shows precision-recall curves.", "The second row shows F-measure curves with different thresholds." 
], [ "Experimental setup", "Public datasets.", "As mentioned in Sec.", "REF , there are three RGB-T SOD dataset publicly available, including VT821 [32], VT1000 [34] and VT5000 [7].", "In VT821, the thermal infrared images formed vacant regions during manual registration, and the author also adds noise to some visible images to heighten the challenge of this dataset.", "In VT5000, the authors labeled 11 challenging scenes based on factors such as object size, lighting conditions, center deviation, number of prominent objects, and background quality.", "Same as [36], [10], [35], 2500 various image pairs in VT5000 are utilized to train our model and the rest image pairs together with VT821 and VT1000 are taken as testing sets.", "Table: Quantitative comparisons of the proposed method on 11 challenging scenes with F avg {\\rm F_{avg}} and MAE metrics.", "The best and second best results are highlighted in red and blue respectively.The proposed challenging dataset VT723 To further validate the robustness of the proposed model under common challenging scenes in real world, we build a more challenging RGB-T SOD dataset VT723 based on a large public semantic segmentation RGB-T dataset [58] used in the autonomous driving domain.", "We pick image pairs in which the salient objects are significant in at least one of their color and thermal modalities.", "After careful screening, we collectd 723 sets of RGB-thermal image pairs in which 473 are taken during daytime and 250 are taken at night.", "The SOD ground truths are obtained by professional annotators looking at both modalities, selecting common salient regions, and manually marking pixel by pixel on the original segmentation ground truths.", "The salient objects in VT723 are mainly vehicles, bicycles, pedestrians, as well as road signs and roadblocks.", "Since the dataset was captured in an open city street, most of the data have the challenge of multiple salient objects (MSO), complex backgrounds (IC) and center bias (CB), as well as bad weather (BW) challenging scenes.", "Obviously, data captured at night has low illumination (LI) challenges.", "In addition, large vehicles typically occupy a large portion of the scene (BSO, size of the big salient object is over 0.26) or even extend beyond the scene boundaries (CIB), while pedestrians and roadblocks usually contain both small salient object (SSO, size of the small salient object is smaller than 0.05) and center bias (CB) challenges.", "Besides, there are a few scenes in which the object and the background have a similar appearance (SA).", "By looking at the raw image pairs, we also found that the vehicle-to-background (e.g.", "buildings, roads, trees) contrast in the night thermal images is extremely low (That is, the phenomenon of thermal crossover occurs, TC), which leads to limited information from the thermal images when the RGB images are almost invalid (LI).", "Fig.", "REF illustrates the raw images of two modalities and the corresponding ground truth for the above challenging scenes in the proposed VT723 dataset.", "We summarized the proposed VT723 dataset according to the definitions of typical challenging scenes in [7], the results are listed in the last column of Fig.", "REF .", "Evaluation metrics.", "In the experiments, we use Precision-Recall (PR) curve, the mean F-measure (${\\rm F_{avg}}$ ) [59] , max F-measure (${\\rm F_{max}}$ ), weighted F-measure (${\\rm F^{\\omega }}$ ) [60], mean absolute error (MAE) [61], E-measure (${\\rm E_{m}}$ ) [62], and S-measure (${\\rm S_{m}}$ ) 
[63] to evaluate the performance of our method and existing state-of-the-art methods.", "Figure: Qualitative comparisons with 14 state-of-the-art methods.", "We select 11 RGB-thermal image pairs with diverse challenges to compare the quality of the saliency maps.", "From left to right, the columns are the RGB image, thermal image, ground truth and the results of the 15 methods, respectively.", "Training details. We train our model using PyTorch on a single Tesla M40 GPU.", "The parameters of the backbone network are initialized from Swin-B pretrained on ImageNet-22k.", "The whole network is trained end-to-end by stochastic gradient descent (SGD); momentum and weight decay are set to 0.9 and 0.0005, respectively.", "The maximum learning rate is 0.005 for the backbones of the two branches (they do not share weights) and 0.05 for the other parts.", "We train our model for 48 epochs, and the learning rate increases linearly to its maximum during the first half of the iteration cycle and then decreases linearly.", "In the training phase, the batch size is set to 16, and all RGB-thermal training image pairs are augmented using multiple strategies (i.e., random flipping, rotation and border clipping).", "We resize the input RGB and thermal images to 384$\\times $ 384 for both the training and test phases.", "Our model runs at over 8 fps." ], [ "Comparison with state-of-the-arts on public datasets", "To demonstrate the effectiveness of the proposed method, 15 state-of-the-art SOD methods are compared, as follows.", "Two deep learning based single-modality SOD methods: GCPANet [1] and LDF [16].", "Three deep learning based RGB-D SOD methods: EFNet [2], RD3D [24] and HAINet [28].", "Three traditional RGB-T SOD methods: MTMR [32], M3S-NIR [33] and SGDL [34].", "Seven deep learning based RGB-T SOD methods: ADF [7], MIDD [36], CSRNet [10], ECFFNet [35], CGFNet [37], SwinNet [3], and CAVER [49].", "All learning-based models are trained on the VT5000 training set (2500 images) described in Sec. REF .", "In the experiments, we evaluate the performance based on the saliency maps provided by the original papers (e.g. [32], [33], [34], [7], [36], [10], [35], [3], [49]).", "For the methods that do not provide saliency maps (e.g. [1], [16], [2], [24], [28]), we utilize the code published by the original authors with the recommended parameters to obtain results.", "For fair comparisons, the RGB-D and single-modality SOD methods are retrained in our experiments.", "Specifically, the early fusion strategy of the two modalities [36] is applied to the single-modality SOD methods to improve their performance.", "As for the RGB-D SOD methods, the depth map is replaced with the thermal map.", "Table: Performance on the challenging dataset VT723.", "The best and second best results are highlighted in red and blue, respectively.", "Figure: Performance comparison with state-of-the-art methods on the VT723 dataset.", "The first column shows precision-recall curves.", "The second column shows F-measure curves with different thresholds.", "Quantitative evaluation.", "We use the same evaluation code to evaluate the performance of the proposed method and the other 15 methods.", "The quantitative results of CAVER are taken from the original paper; its authors only provide results for a subset of the metrics.", "As shown in Table REF , the best results are highlighted in red.", "Compared with its counterparts on the three benchmark datasets, the proposed method outperforms state-of-the-art methods by a large margin.", "In addition, compared with the three
traditional methods, the deep learning methods significantly improve the detection performance due to their powerful ability to characterize high-level semantic information.", "The single-modality SOD methods simply take the integration of the two modalities as input, without explicitly modeling the relationship between them.", "Therefore, it is difficult for such methods to obtain satisfactory results on the RGB-T SOD task.", "RGB-D SOD methods perform better than the above methods since their fusion strategies are more complex.", "RD3D [24] is an input fusion model, in which the two modalities are gradually fused at the encoder stage.", "Compared with the single-modality SOD methods, its performance is significantly improved.", "The authors of EFNet [2] and HAINet [28] consider their models to be general and prove their validity for RGB-T tasks in their papers.", "EFNet first enhances the depth modality with prior knowledge learned from the RGB modality and then fuses the RGB and enhanced depth features to obtain the final saliency map.", "When the RGB modality is of poor quality, this method accumulates erroneous priors and thus fails to learn effective information from the other modality.", "However, in the RGB-T task, low-quality RGB images caused by extreme lighting conditions are very common, so the performance of this method is poor.", "HAINet is a typical two-stream structure, and we find that it also works well on the RGB-T SOD datasets.", "By comparing the different methods, we find that structures which treat the two modalities equally and explicitly model the relationship between them obtain more accurate saliency maps.", "The precision-recall curves and F-measure curves on the three datasets in Fig. REF also support the above analysis and conclusion.", "The curves of the proposed method consistently lie above the others.", "Figure: Qualitative comparisons with state-of-the-arts on the VT723 dataset.", "We select 9 RGB-thermal image pairs that represent real-world challenges to compare the quality of the saliency maps.", "From top to bottom, the rows are the RGB image, thermal image, ground truth and the results of the 9 methods, respectively.", "Table: Ablation study on different backbones, network architectures and labels on three testing datasets.", "The best results are highlighted in red.", "Quantitative evaluation on challenging scenes.", "We further conduct an experiment on the 11 challenging scenes labeled in VT5000.", "We evaluate the ${\\rm F_{avg}}$ and MAE scores of our model as well as the 14 state-of-the-art methods.", "Table REF shows the scores; the numbers of image pairs belonging to the corresponding scenes are also provided.", "The proposed method performs best on almost all 11 attributes.", "In particular, compared with the second best model SwinNet [3], the ${\\rm F_{avg}}$ metric of our method increases by 9$\\%$ for SSO, 5.6$\\%$ for BW, and 4$\\%$ for MSO and TC, indicating the strong robustness of our model.", "It is worth mentioning that the outstanding performance in the TC (thermal crossover), LI (low illumination) and BW (bad weather) scenes shows that the proposed mirror complementary model can flexibly fuse complementary information from both modalities.", "Qualitative evaluation.", "Some representative examples of the proposed method and the above 14 SOD methods in a variety of challenging scenes are shown in Fig. REF .", "Rows (a), (f), and (g) show the SA challenge in the RGB modality, while the objects in rows (a) and (g) also involve the BSO challenge.", "Rows (b),
(c), (d), and (i) show challenging scenes with SSO and CB; in addition, rows (b), (c) and (d) contain the IC challenge.", "Rows (c) and (h) show scenes with the MSO challenge, and the objects in row (a) also suffer from the SA challenge.", "The objects shown in rows (a), (c), (d) and (k) contain rich details and complex boundaries; moreover, the different color blocks of the object in the RGB modality make it difficult to segment the complete outline.", "The LI scene in row (j) makes the object in the RGB modality invisible, so the two single-modality SOD methods cannot capture the salient objects.", "In the scenes in rows (b) and (i), the thermal modality suffers from TC, causing the object to almost blend in with the background.", "Since the thermal information is not fully utilized, the three RGB SOD methods do not perform well when the RGB modality contains challenges such as SA and IC (see rows (a) - (f)).", "Compared to SwinNet, our method is robust in most challenging scenes.", "By effectively combining the complementary features of RGB and thermal, the rich details of the objects can be captured while ambiguous background regions are suppressed, and the common salient regions of the two modalities are highlighted.", "However, when both modalities suffer from severe challenges (e.g., the RGB modality is disturbed by low contrast, and the thermal modality is disturbed by thermal crossover), the performance of our model is also affected.", "As shown in row (l) of Fig. REF , the umbrella behind the person cannot be completely segmented.", "Nevertheless, thanks to our mirror complementary structure, which allows the useful features of the two modalities to be effectively extracted and adjusted, the results of our method are still better than those of other methods.", "Figure: Visual comparisons of the hierarchical features of each layer of the backbone networks and the final saliency maps.", "The first and third rows are the RGB and thermal modalities and their corresponding hierarchical features respectively, and the middle row shows the ground truth and the predicted results.", "‘Layers_*' represents the outputs of each block of the backbone network (i.e., the hierarchical features used by the proposed model), and we present the feature maps of the first 5 channels for each layer."
], [ "Comparisons on the proposed VT723 dataset", "To further illustrate the superior performance of the proposed method, we conduct experiments on the more challenging dataset VT723.", "Table REF shows the quantitative comparison of the results with eight recent deep-learning based methods which provide the codes: three RGB-T method (SwinNet [3], CGFNet [37] and MIDD [36]), three RGB-D methods (RD3D [24], EFNet [2], and HAINet [28]), and two RGB methods (GCPANet [1], and LDF [16]).", "Fig.", "REF presents the precision-recall curves and F-measure curves on VT723 dataset.", "It can be seen that compared with other methods, our results have notable improvement on the VT723 dataset.", "We show visualization results of the proposed method and above eight methods in Fig.", "REF .", "Our method can highlight the common salient objects of two modalities clearly under various challenging scenes, including SA (column (a)), CB (columns (a), (c) and (e)), SSO (column (e)), MSO (columns (e), (f) and (g)), LI (columns (c)-(f)), TC (columns (a) and (b)) and IC (column (g)).", "In the scenes illustrated in Fig.", "REF , center bias location and small size of pedestrian objects, as well as low illumination at nighttime are unavoidable challenges in autonomous driving.", "The results show that the objects of our results are more clear and complete than others.", "The thermal modality plays a crucial role especially in challenging scenes such as severe weather and nighttime, so the model designed for the RGB-T SOD task should make full use of this modality." ], [ "Ablation studies", "In this section, we study the effects of each component of the proposed method on three benchmark datasets.", "The ablation study contains three parts: backbone ablation, architecture ablation and label ablation.", "The effectiveness of different backbone.", "We compare the effectiveness of mainstream backbones, including ResNet-50 [64], ResNet-101 [64], Res2Net-50 [65], Res50+ViT16 [66] and PVT-M [42], and we also report the results when using the low-level four-layer features of Swin-B as in [3].", "The quantitative comparison results are shown in the first 6 rows of Table REF .", "The detection performance is significantly improved due to the Swin Transformer taking into account the advantages of both globality and locality.", "In addition, we provide a visual comparison of the hierarchical features of each layer of the backbone networks and the final saliency maps in Fig.", "REF .", "The low-level features of the Swin Transformer contain more comprehensive edge and detail information, and its high-level features tend to retain more effective information while filtering out background noise, which is more conducive to downstream dense prediction tasks.", "The effectiveness of feature interaction and fusion modules.", "The middle 4 rows of Table REF illustrates the results of different architecture of the proposed modules.", "‘Share attention’ means that channel attention and spatial attention are successively performed on the shared features $F_{fuse}$ , and then two groups of shared attention maps $ATT_{rgb}$ and $ATT_{t}$ with 64 channels are obtained by two sets of 3$\\times $ 3 convolution, which are respectively added to the backbone features $F_{rgb}$ and $F_{t}$ to obtain low-level interaction features $LF_{rgb}$ and $LF_{t}$ .", "‘Cross attention’ performs channel attention and spatial attention operations on the backbone features in turn, and the two sets of attention maps obtained are cross-added to the backbone 
features to obtain low-level interactive features.", "‘Noninteraction attention' means that the low-level features of the two modalities do not interact, but are the sum of the backbone features and their own attention maps: $LF_{rgb}=F_{rgb}+ATT_{rgb}$ and $LF_{t}=F_{t}+ATT_{t}$ .", "The attention maps are obtained by channel attention and spatial attention operations: $ATT_{rgb}=Sa(Ca(F_{rgb}))$ and $ATT_{t}=Sa(Ca(F_{t}))$ .", "The proposed attention-based feature interaction module achieves the best performance on the three benchmark datasets, especially on VT821.", "‘No SDC' means that the serial multiscale dilated convolution module is not used between the two branches.", "After obtaining the concatenated features from the two decoders, we directly use a set of plain 3$\\times $ 3 convolution layers (instead of the multi-scale dilated convolutions in the SDC module) to obtain the fusion features of the four scales.", "The proposed SDC-based feature fusion module significantly improves the performance of the model on each evaluation metric.", "The effectiveness of the complementary label pair.", "The ‘GT' row in Table REF illustrates the results of training without the decoupled labels, that is, both branches and the fusion module are supervised by the ground truth.", "It is clear that the proposed complementary labels achieve superior results according to the scores on the three datasets.", "Table: Comparison of complexity and performance (weighted F-measure on the VT5000 test set).", "The model size (number of parameters), FLOPs and $\\rm F^{\\omega }$ of the different models are shown in the table.", "The best and second best results are highlighted in red and blue, respectively." ], [ "Complexity analysis", "Table REF shows the number of parameters and FLOPs (Floating Point Operations) of the different algorithms.", "CNN-based single-modality SOD methods (e.g., GCPANet [1] and LDF [16]) usually have a simple network structure because they only process one modality, but their performance on the RGB-T SOD task is poor.", "In fact, ‘Ours$\\_$ ResNet50' ranks second best in terms of parameter count among the CNN-based cross-modal SOD methods (e.g., the RGB-T methods [36], [37], [3] and the RGB-D methods [2], [24], [28]).", "Meanwhile, compared with the most recent method CGFNet [37], ‘Ours$\\_$ ResNet50' achieves higher quantitative results with fewer parameters and FLOPs.", "This proves the validity of the proposed mirror complementary structure.", "By combining the mirror structure with the Transformer, the performance of ‘Ours$\\_$ Swin-B' is improved by a large margin over all other methods.", "Compared with the most recent SwinNet [3], the proposed method improves the weighted F-measure by more than 5$\\%$ with fewer parameters and FLOPs, and achieves state-of-the-art performance.", "In summary, the proposed SOD model can significantly improve detection performance while reducing complexity.", "In this paper, we have presented a novel Transformer-based mirror complementary network for RGB-T SOD.", "RGB-T data is significantly different from RGB-D data since their imaging mechanisms are different.", "To highlight the common challenges in the RGB-T SOD task, we build a novel RGB-T SOD dataset, VT723, and make it available to the research community.", "The complementary fusion performed by the cross-attention module on low-level Transformer-based features and by the SDC module on deep semantic features is proposed to effectively capture the details of the RGB and thermal modalities.", "To further exploit the characteristics of thermal and
RGB images, a set of complementary labels is introduced to separately supervise the RGB images, which carry color and texture information, and the thermal images, which provide complete structures and clear edges.", "Extensive experiments demonstrate that the proposed method outperforms state-of-the-art methods under different evaluation metrics, especially improving the performance in challenging scenes.", "Overall, we hope that our work will shed light on the development of more effective RGB-T SOD models." ] ]
2207.03558
[ [ "Cherenkov radiation and scattering of external dispersive waves by\n two-color solitons" ], [ "Abstract For waveguides with two separate regions of anomalous dispersion, it is possible to create a quasi-stable two-color solitary wave.", "In this paper we consider how those waves interact with dispersive radiation, both generation of Cherenkov radiation and scattering of incident dispersive waves.", "We derive the analytic resonance conditions and verify them through numeric experiments.", "We also report incident radiation driving the internal oscillations of the soliton during the scattering process in case of an intense incident radiation.", "We generalize the resonance conditions for the case of an oscillating soliton and demonstrate how one can use the scattering process to probe and excite an internal mode of two-color soliton molecules." ], [ "Dispersion profile model", "We model dispersion coefficient $\\beta (\\omega )$ with the following rational expression $\\beta (\\omega ) = \\frac{1}{c} \\frac{\\sum _{n=0}^{3} {C_{n} \\omega ^{n+1}}}{\\sum _{m=0}^{3} {D_{m} \\omega ^{m}}}$ where $c = 0.299792458~\\mu \\text{m/fs}$ is the speed of light, and the coefficient sequences $C$ and $D$ are defined by $C &=& (9.654, -39.739 \\,\\mathrm {fs}, 16.885 \\,\\mathrm {fs^2}, -2.746 \\,\\mathrm {fs^3}),\\quad \\text{and},\\\\D &=& (1, -9.496 \\,\\mathrm {fs}, 4.221 \\,\\mathrm {fs^2}, -0.703 \\,\\mathrm {fs^3}).$ In here and everywhere in the paper we assume fs as a unit of time and $\\mu $ m as a unit of distance.", "Figure REF displays group velocity $v_{g}(\\omega ) = 1 / \\beta ^{\\prime }(\\omega )$ and second order dispersion coefficient $\\beta ^{\\prime \\prime }(\\omega )$ as functions of frequency.", "Frequencies $\\omega _{1}$ and $\\omega _{2}$ correspond to the central frequencies of the soliton's spectral components as chosen in the simulation corresponding to Fig.", "1 of the paper.", "Figure: (a) group velocity v g v_{g} and (b) second order dispersion coefficient β '' (ω)\\beta ^{\\prime \\prime }(\\omega ) in the model fiber.", "Labels A 1,2 A_{1,2} and NN mark the regions of anomalous and normal dispersion.When analyzing the nonlinear scattering near an oscillatory mode we used an expression for the oscillation frequency.", "In this section we will derive this expression.", "Let us return to Eq.", "(3) (of the main text) for coupled solitons.", "We can re-normalize the equations by performing the following transformation $U_{n} \\rightarrow \\gamma _{n}^{1/2} e^{i \\beta _{n} z} \\cdot U_{n},$ which will make the equations symmetric $i \\partial _{z} U_{n}- \\frac{1}{2} \\beta ^{\\prime \\prime }_{n} \\partial _{t}^{2} U_{n}+ \\gamma _{n}^{2} \\left| U_{n} \\right|^{2} U_{n}+ 2 \\gamma _{n} \\gamma _{m} \\left| U_{m} \\right|^{2} U_{n} = 0.$ This in turn allows us to recognize the modified couple of equations as Euler-Lagrange equations for Lagrangian $\\int \\limits _{-\\infty }^{+\\infty }\\mathcal {L}(U_{1}, \\partial _{z} U_{1}, \\partial _{t} U_{1}, \\ldots )\\, dt,$ where Lagrangian density $\\mathcal {L}$ is defined as a sum of three components $\\mathcal {L} = \\mathcal {L}_{1} + \\mathcal {L}_{2} + \\mathcal {L}_{\\text{int}}$ , with $\\mathcal {L}_{n}$ being a single-soliton Lagrangian density $\\mathcal {L}_{n} =\\frac{i}{2} \\left(\\partial _{z} U_{n} \\cdot U_{n}^{*} -\\partial _{z} U_{n}^{*} \\cdot U_{n}\\right)+ \\frac{1}{2} \\beta ^{\\prime \\prime }_{n} \\, \\partial _{t} U_{n} \\partial _{t} U_{n}^{*}+ \\frac{1}{2} \\gamma _{n}^{2} \\, \\left| U_{n} 
\\right|^{4},$ and $\\mathcal {L}_{\\text{int}}$ being the interaction term $\\mathcal {L}_{\\text{int}} =2 \\gamma _{1} \\gamma _{2}\\left| U_{1} \\right|^{2} \\left| U_{2} \\right|^{2}.$ Let us assume that the soliton components $U_{n}$ can be described by the following generic ansatz $U_{n}(z, t) = A_{n}(z) S\\left(\\frac{t - t_{n}(z)}{\\sigma _{n}(z)}\\right) \\exp \\left(- i \\Omega _{n}(z) t + i \\phi _{n}(z)\\right).$ In here $A_{n}$ is the amplitude of the pulse, $t_{n}$ is the central position, $\\sigma _{n}$ is the pulse width, $\\Omega _{n}$ is the frequency detuning, $\\phi _{n}$ is the phase, and $S(x)$ is function that defines the envelope shape.", "At the moment we will not specify the concrete form of $S(x)$ , but will assume that it is an even function.", "Before we continue let us stress one important thing: this ansatz cannot express all the possible internal oscillations of the soliton.", "One obvious example, as it was noted in the text, is the case of the pulse-width oscillation.", "In order to capture this dynamics, we need to add frequency chirp to the ansatz.", "Substituting (REF ) into (REF ) and (REF ) and integrating over $t$ we arrive at the expressions for the averaged Lagrangians $L_{n} =I_{1} \\, t_{n} \\sigma _{n} A_{n}^{2} \\frac{d \\Omega _{n}}{d z} +I_{1} \\, \\sigma _{n} A_{n}^{2} \\frac{d \\phi _{n}}{d z} +I_{2} \\, \\frac{\\beta ^{\\prime \\prime }_{n}}{2} \\frac{A_{n}^{2}}{\\sigma _{n}} \\nonumber \\\\+ I_{1} \\, \\frac{\\beta ^{\\prime \\prime }_{n}}{2} \\sigma _{n} A_{n}^{2} \\Omega _{n}^{2} +I_{3} \\, \\frac{\\gamma _{n}^{2}}{2} \\sigma _{n} A_{n}^{4} \\\\L_{\\text{int}} =2 \\, \\gamma _{1} \\gamma _{2} \\, A_{1}^{2} A_{2}^{2} \\,I_{\\text{int}}(\\sigma _{1}, \\sigma _{2}, t_{1}, t_{2}),$ where the following integrals have been defined $I_{1} =\\int \\limits _{-\\infty }^{+\\infty }S^{2}(x) \\, dx &&I_{2} =\\int \\limits _{-\\infty }^{+\\infty }(S^{\\prime }(x))^{2} \\, dx \\\\I_{3} =\\int \\limits _{-\\infty }^{+\\infty }S^{4}(x) \\, dx &&I_{\\text{int}} =\\int \\limits _{-\\infty }^{+\\infty }S^{2}\\left(\\frac{t - t_{1}}{\\sigma _{1}}\\right)S^{2}\\left(\\frac{t - t_{2}}{\\sigma _{2}}\\right) \\, dt$ Due to the time invariance in the problem, $I_{\\text{int}}$ depends only on the difference between $t_{1}$ and $t_{2}$ $I_{\\text{int}} = I_{\\text{int}}(t_{1} - t_{2}, \\sigma _{1}, \\sigma _{2}),$ and it is an even function of that difference.", "The averaged Lagrangian $L = L_{1} + L_{2} + L_{\\text{int}}$ is now a function defined in terms of soliton parameters { $A_{n}$ , $\\sigma _{n}$ , $t_{n}$ , $\\Omega _{n}$ , $\\phi _{n}$  } and only them.", "Therefore, the Euler-Lagrange equations for the new Lagrangian have to be defined in terms of variations over the soliton parameters $\\frac{\\delta L}{\\delta P_{n}} =\\frac{\\partial {L}}{{\\partial P_{n}}} -\\frac{d}{dz} \\frac{\\partial L}{\\partial \\dot{P}_{n}} = 0,$ where $P_{n}$ stands for either $A_{n}$ , $\\sigma _{n}$ , $t_{n}$ , $\\Omega _{n}$ or $\\phi _{n}$ .", "The latter case — variation with respect to the phase $\\phi _{n}$ — immediately yields the conservation of mass $N_{n} = \\sigma _{n}(z) A_{n}^{2}(z) = \\mathrm {const.", "}$ Variation with respect to the detuning $\\Omega _{n}$ fixes the group velocity of individual solitons $\\frac{d t_{n}}{dz} = \\beta ^{\\prime \\prime }_{n} \\Omega _{n}(z).$ Variation with respect to the soliton position $t_{n}$ gives us an equation for the frequency $\\frac{d \\Omega _{n}}{dz} =2 \\cdot \\frac{N_{m} \\gamma _{1} \\gamma _{2}}{I_{1} 
\\sigma _{1}(z) \\sigma _{2}(z)} \\cdot \\frac{\\partial I_{\\text{int}}}{\\partial t_{n}}.$ The symmetry in the overlap integral $I_{\\text{int}}$ with respect to the soliton positions $t_{1}$ and $t_{2}$ leads to conservation of momentum $N_{1} \\Omega _{1}(z) + N_{2} \\Omega _{2}(z) = \\mathrm {const}.$ Finally, the difference between the variations with respect to $A_{n}$ and $\\sigma _{n}$ gives us $I_{2} \\beta ^{\\prime \\prime }_{n} +\\frac{I_{3} \\gamma _{n}}{2} N_{n} \\sigma _{n}(z)+ 2 N_{m} \\gamma _{1} \\gamma _{2} \\frac{\\sigma _{n}(z)}{\\sigma _{m}(z)} \\left(I_{\\text{int}} + \\sigma _{n} \\frac{\\partial I_{\\text{int}}}{\\partial \\sigma _{n}}\\right) = 0.$ The very last equation — omitted here — is the evolution equation for the phase $\\phi _{n}$ .", "The right-hand side of the equation is quite complicated, but since the phase does not occur anywhere in (REF ), (REF ) or (REF ), it is not important for the remaining analysis.", "Let us switch from the individual soliton positions to the mean position and the relative delay instead $t_{0} = \\frac{1}{2} \\left(t_{1} + t_{2}\\right) &&\\Delta t = t_{1} - t_{2}$ The equation for the relative delay $\\Delta t$ , $\\frac{d \\Delta t}{dz} =\\beta ^{\\prime \\prime }_{1} \\Omega _{1}(z) +\\beta ^{\\prime \\prime }_{2} \\Omega _{2}(z),$ together with equations (REF ) and (REF ), forms a closed system, with the equations for $d \\Delta t / dz$ , $d \\Omega _{n} / dz$ acting as equations of motion and equations (REF ) fixing the widths $\\sigma _{n}(z)$ as functions of $\\Delta t$ .", "By differentiating (REF ) one more time and using (REF ) we get $\\frac{d^{2} \\Delta t}{dz^{2}} +2 \\frac{\\gamma _{1} \\gamma _{2} \\left(\\beta ^{\\prime \\prime }_{1} N_{1} + \\beta ^{\\prime \\prime }_{2} N_{2}\\right)}{I_{1} \\sigma _{1}(\\Delta t) \\sigma _{2}(\\Delta t)} \\frac{\\partial }{\\partial \\Delta t}I_{\\text{int}} (\\Delta t, \\sigma _{1}, \\sigma _{2}) = 0.$ To transform this into a harmonic oscillator equation we need to linearize the second term around the equilibrium point $\\Delta t = 0$ .", "Since $I_{\\text{int}}$ is an even function, the derivative $\\partial I_{\\text{int}} / \\partial \\Delta t$ is odd and it vanishes at $\\Delta t = 0$ .", "This means we can ignore the $\\Delta t$ dependence of $\\sigma _{1}$ and $\\sigma _{2}$ — only the term proportional to $\\partial ^{2} I_{\\text{int}} / \\partial \\Delta t^{2}$ will survive.", "Thus we finally arrive at $\\frac{d^{2} \\Delta t}{dz^{2}} +K_{0}^{2} \\Delta t = 0,$ where the resonance frequency $K_{0}$ is $K_{0}^{2} = 2 \\frac{\\gamma _{1} \\gamma _{2} \\left(\\beta ^{\\prime \\prime }_{1} N_{1} + \\beta ^{\\prime \\prime }_{2} N_{2}\\right)}{I_{1} \\sigma _{1}(0) \\sigma _{2}(0)} I_{\\text{int}}^{\\prime \\prime }(0; \\sigma _{1}(0), \\sigma _{2}(0)).$ For a more concrete estimate let us finally consider a Gaussian envelope, i.e. let us set $S(x) = \\exp (-x^{2})$ .", "Such a choice of the envelope shape fixes the integrals $I_{1} = \\sqrt{\\pi / 2}$ and $I_{\\text{int}}(\\Delta t, \\sigma _{1}, \\sigma _{2}) =\\sqrt{ \\frac{\\pi }{2} }\\frac{\\sigma _{1} \\sigma _{2}}{\\sqrt{\\left(\\sigma _{1}^{2} + \\sigma _{2}^{2}\\right)}}\\cdot \\exp \\left(\\frac{-2 \\Delta t^{2}}{\\sigma _{1}^{2} + \\sigma _{2}^{2}}\\right),$ which finally gives us the following expression for the resonance frequency $K_{0}^{2} =- \\frac{8 \\, \\gamma (\\omega _{1}) \\gamma (\\omega _{2})}{\\left(\\sigma _{1}^{2} + \\sigma _{2}^{2}\\right)^{3/2}} \\cdot \\left(\\beta ^{\\prime \\prime }(\\omega _{1}) \\sigma _{1}
A_{1}^{2} +\\beta ^{\\prime \\prime }(\\omega _{2}) \\sigma _{2} A_{2}^{2}\\right).$" ] ]
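As a quick numerical illustration, the sketch below evaluates the rational dispersion model of Eq. (S1) with the coefficient sequences C and D given above, approximates the second-order dispersion by central finite differences, and plugs the result into the final expression for $K_{0}^{2}$. The nonlinear coefficients and soliton parameters passed to the function are placeholders chosen for illustration only; they are not values from the paper.

```python
import numpy as np

C_LIGHT = 0.299792458  # speed of light, um/fs

# Coefficient sequences C and D of the rational dispersion model.
C = np.array([9.654, -39.739, 16.885, -2.746])
D = np.array([1.0, -9.496, 4.221, -0.703])

def beta(omega):
    """Propagation constant beta(omega) [1/um] for omega in rad/fs."""
    num = sum(C[n] * omega ** (n + 1) for n in range(4))
    den = sum(D[m] * omega ** m for m in range(4))
    return num / (C_LIGHT * den)

def beta2(omega, h=1e-4):
    """Second-order dispersion beta''(omega) via central finite differences."""
    return (beta(omega + h) - 2.0 * beta(omega) + beta(omega - h)) / h ** 2

def k0_squared(omega1, omega2, gamma1, gamma2, sigma1, sigma2, amp1, amp2):
    """Resonance frequency K0^2 for the Gaussian ansatz (last equation above)."""
    return (-8.0 * gamma1 * gamma2 / (sigma1 ** 2 + sigma2 ** 2) ** 1.5
            * (beta2(omega1) * sigma1 * amp1 ** 2 + beta2(omega2) * sigma2 * amp2 ** 2))

if __name__ == "__main__":
    # Placeholder values for illustration only.
    w1, w2 = 1.2, 3.0  # rad/fs
    print("beta''(w1) =", beta2(w1), " beta''(w2) =", beta2(w2))
    print("K0^2 =", k0_squared(w1, w2, gamma1=1e-3, gamma2=1e-3,
                               sigma1=30.0, sigma2=30.0, amp1=1.0, amp2=1.0))
```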
2207.03541
[ [ "VMAS: A Vectorized Multi-Agent Simulator for Collective Robot Learning" ], [ "Abstract While many multi-robot coordination problems can be solved optimally by exact algorithms, solutions are often not scalable in the number of robots.", "Multi-Agent Reinforcement Learning (MARL) is gaining increasing attention in the robotics community as a promising solution to tackle such problems.", "Nevertheless, we still lack the tools that allow us to quickly and efficiently find solutions to large-scale collective learning tasks.", "In this work, we introduce the Vectorized Multi-Agent Simulator (VMAS).", "VMAS is an open-source framework designed for efficient MARL benchmarking.", "It is comprised of a vectorized 2D physics engine written in PyTorch and a set of twelve challenging multi-robot scenarios.", "Additional scenarios can be implemented through a simple and modular interface.", "We demonstrate how vectorization enables parallel simulation on accelerated hardware without added complexity.", "When comparing VMAS to OpenAI MPE, we show how MPE's execution time increases linearly in the number of simulations while VMAS is able to execute 30,000 parallel simulations in under 10s, proving more than 100x faster.", "Using VMAS's RLlib interface, we benchmark our multi-robot scenarios using various Proximal Policy Optimization (PPO)-based MARL algorithms.", "VMAS's scenarios prove challenging in orthogonal ways for state-of-the-art MARL algorithms.", "The VMAS framework is available at https://github.com/proroklab/VectorizedMultiAgentSimulator.", "A video of VMAS scenarios and experiments is available at https://youtu.be/aaDRYfiesAY." ], [ "Introduction", "Many real-world problems require coordination of multiple robots to be solved.", "However, coordination problems are commonly computationally hard.", "Examples include path-planning [11], task assignment [21], and area coverage [34].", "While exact solutions exist, their complexity grows exponentially in the number of robots.", "Multi-Agent Reinforcement Learning (MARL) can be used as a scalable approach to find near-optimal solutions to these problems [29].", "In MARL, agents trained in simulation collect experiences by interacting with the environment, and train their policies (represented with deep neural networks) through a reward signal.", "However, current MARL approaches present several issues.", "Firstly, the training phase can require significant time to converge to optimal behavior.", "This is partially due to the sample efficiency of the algorithm, and partially to the computational complexity of the simulator.", "Secondly, current benchmarks are specific to a predefined task and mostly tackle unrealistic videogame-like scenarios [24], [27], far from real-world multi-robot problems.", "This makes research in this area fragmented, with a new simulation framework being implemented for each new task introduced.", "Multi-robot simulators, on the other hand, prove to be more general, but their high fidelity and full-stack simulation results in slow performance, preventing their applicability to MARL.", "Furthermore, we argue that full-stack simulation is not necessary in MARL.", "Learning can be made more sample-efficient if simulation is used to solve high-level multi-robot coordination problems, while leaving low-level robotic control to first-principles-based methods.", "Motivated by these reasons, we introduce VMAS, a vectorized multi-agent simulator.", "VMAS is a vectorized 2D physics simulator written in PyTorch [18], designed 
for efficient MARL benchmarking.", "It simulates agents and landmarks of different shapes and supports torque, elastic collisions and custom gravity.", "Holonomic motion models are used for the agents to simplify simulation.", "Vectorization in PyTorch allows VMAS to perform simulations in a batch, seamlessly scaling to tens of thousands of parallel environments on accelerated hardware.", "VMAS has an interface compatible with OpenAI Gym [5] and with the RLlib library [12], enabling out-of-the-box integration with a wide range of RL algorithms.", "VMAS also provides a framework to easily implement custom multi-robot scenarios.", "Using this framework, we introduce a set of 12 multi-robot scenarios representing difficult learning problems.", "Additional scenarios can be implemented through a simple and modular interface.", "We vectorize and port all scenarios from OpenAI MPE [13] into VMAS.", "We benchmark four of VMAS's new scenarios using three MARL algorithms based on Proximal Policy Optimization (PPO) [25].", "We show the benefits of vectorization by benchmarking our scenarios in the RLlib [12] library.", "Our scenarios prove to challenge state-of-the-art MARL algorithms in complementary ways.", "Contributions.", "We now list the main contributions of this work: We introduce the VMAS framework, a vectorized multi-agent simulator which enables MARL training at scale.", "VMAS supports inter-agent communication and customizable sensors, such as LIDARs.", "We implement a set of twelve multi-robot scenarios in VMAS, which focus on testing different collective learning challenges, including behavioural heterogeneity, coordination through communication, and adversarial interaction.", "We port and vectorize all scenarios from OpenAI MPE [13] into VMAS and run a performance comparison between the two simulators.", "We demonstrate the benefits of vectorization in terms of simulation speed, showing that VMAS is up to 100$\\times $ faster than MPE.", "The VMAS codebase is available at https://github.com/proroklab/VectorizedMultiAgentSimulator.", "A video of VMAS scenarios and experiments is available at https://youtu.be/aaDRYfiesAY."
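To give a sense of what the vectorized interface looks like in use, the snippet below sketches a batched rollout with the default PyTorch interface. It is a hedged illustration: the exact argument and return-value conventions (e.g., scenario_name, the list-per-agent layout of observations and actions) are assumptions based on the interface described in this paper and may differ from the released API.

```python
import torch
import vmas

num_envs = 32  # number of environments stepped in parallel (vectorized batch)
env = vmas.make_env(
    scenario_name="transport",   # one of the bundled multi-robot scenarios (assumed name)
    num_envs=num_envs,
    device="cpu",                # "cuda" would run the whole batch on the GPU
    continuous_actions=True,
)

obs = env.reset()  # assumed: one observation tensor per agent, shape [num_envs, obs_dim]
for _ in range(100):
    # One 2D force command per agent and per environment (holonomic agents).
    actions = [torch.zeros(num_envs, 2) for _ in env.agents]
    obs, rews, dones, info = env.step(actions)
```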
], [ "Related work", "In this section, we review the related literature in the fields of multi-agent and multi-robot simulation, highlighting the core gaps of each field.", "Furthermore, we compare the most relevant simulation frameworks with VMAS in tab:simulatorcomparison.", "Multi-agent reinforcement learning environments.", "A significant amount of work exists in the context of MARL to address the issues of multi-robot simulation for learning hard coordination strategies.", "Realistic GPU-accelerated simulators and engines have been proposed.", "Isaac [14] is a proprietary NVIDIA simulator used for realistic robotic simulation in reinforcement learning.", "Instead of using environment vectorization to accelerate learning, it uses concurrent execution of multiple training environments in the same simulation instance.", "Despite of this, its high-fidelity simulation makes it computationally expensive for high-level MARL problems.", "Brax [6] is a vectorized 3D physics engine introduced by Google.", "It uses the Jax [4] library to achieve environment batching and full-differentiability.", "However, computational issues occur when scaling the number of simulated agents, leading to stalled environments with just 20 agents.", "There also exist projects for single-agent vectorized environments [10], [30], but the complexity of extending these to the multi-agent domain is non-trivial.", "The core benchmark environments of the MARL literature focus on high-level inter-robot learning.", "Multiagent Particle Environments (MPE) [13] are a set of enviroments created by OpenAI.", "They share VMAS's principles of modularity and ease of new scenario creation, without providing environment vectorization.", "MAgent [33] is a discrete-world environment supporting a high number of agents.", "Multi-Agent-Learning-Environments [7] is another simplified discrete-world set of environments with a range of different multi-robot tasks.", "Multi-Agent-Emergence-Environments [2] is a customizable OpenAI 3D simulator for hide-and-seek style games.", "Pommerman [22] is a discretized playground for learning multi-agent competitive strategies.", "SMAC [24] is a very popular MARL benchmark based on the Starcraft 2 videogame.", "Neural-MMO [27] is another videogame-like set of environments where agents learn to survive in large populations.", "Google Research Football [9] is a football simulation with a suite of scenarios that test different aspects of the game.", "Gym-pybullet-drones [17] is a realistic PyBullet simulator for multi-quadricopters control.", "Particle Robots Simulator [26] is a simulator for particle robots, which require high coordination strategies to overcome actuation limitations and achieve high-level tasks.", "Multi-Agent Mujoco [19] consists in multiple agents controlling different body parts of a single Mujoco [28] agent.", "While all these environments provide interesting MARL benchmarks, most of them focus on specific tasks.", "Furthermore, none of these environments provide GPU vectorization, which is key for efficient MARL training.", "We present a comparison between VMAS and all the aforementioned environments in tab:simulatorcomparison.", "Multi-robot simulators.", "Video-game physics engines such as Unity and Unreal Engine grant realistic simulation that can be leveraged for multi-agent robotics.", "Both make use of the GPU-accelerated NVIDIA PhysX.", "However, their generality causes high overheads when using them for robotics research.", "Other popular physics engines are Bullet, 
Chipmunk, Box2D, and ODE.", "These engines are all similar in their capabilities and prove easier to adopt due to the availability of Python APIs.", "Thus, they are often the tool of choice for realistic robotic simulation.", "However, because they do not leverage GPU-accelerated batched simulation, these tools lead to performance bottlenecks in MARL training.", "The most widely known robotic simulators are Gazebo [8] and Webots [15].", "Their engines are based on the ODE 3D dynamics library.", "These simulators support a wide range of robot models, sensors, and actuators, but suffer from significant performance loss when scaling the number of agents.", "Complete simulation stall is shown to occur with as few as 12 robots [16], even when run in headless mode.", "For this reason, Argos [20] has been proposed as a scalable multi-robot simulator.", "It is able to simulate swarms of thousands of agents by assigning parts of the simulation space to different physics engines with different simulation goals and fidelity.", "Furthermore, it uses CPU parallelization through multi-threading.", "Despite these features, none of the simulators described are fast enough to be usable in MARL training.", "This is because they prioritize realistic full-stack multi-robot simulation over speed, and they do not leverage GPU acceleration for parallel simulations.", "This focus on realism is not necessary in MARL.", "In fact, the collective coordination problem can be decoupled from low-level problems relating to sensing and control.", "These problems can then be efficiently solved independently without loss of generality.", "This insight is the key factor motivating the holonomicity assumption in VMAS.", "Table: Comparison of multi-agent and multi-robot simulators and environments."
], [ "The VMAS platform", "The unique characteristic that makes VMAS different from the related works compared in tab:simulatorcomparison is the fact that our platform brings together multi-agent learning and environment vectorization.", "Inspired by the modularity of some existing solutions, like MPE [13], we created our framework as a new scalable platform for running and creating MARL benchmarks.", "With this goal in mind, we developed VMAS following a set of tenets: Vectorized.", "VMAS vectorization can step any number of environments in parallel.", "This significantly reduces the time needed to collect rollouts for training in MARL.", "Simple.", "Complex vectorized physics engines exist (e.g., Brax [6]), but they do not scale efficiently when dealing with multiple agents.", "This defeats the computational speed goal set by vectorization.", "VMAS uses a simple custom 2D dynamics engine written in PyTorch to provide fast simulation.", "General.", "The core of VMAS is structured so that it can be used to implement general high-level multi-robot problems in 2D.", "It can support adversarial as well as cooperative scenarios.", "Holonomic robot simulation shifts focus to high-level coordination, obviating the need to learn low-level controls using MARL.", "Extensible.", "VMAS is not just a simulator with a set of environments.", "It is a framework that can be used to create new multi-agent scenarios in a format that is usable by the whole MARL community.", "For this purpose, we have modularized our framework to enable new task creation and introduced interactive rendering to debug scenarios.", "Compatible.", "VMAS has multiple wrappers which make it directly compatible with different MARL interfaces, including RLlib [12] and Gym [5].", "RLlib has a large number of already implemented RL algorithms.", "Let us break down VMAS's structure in depth.", "Figure: VMAS structure.", "VMAS has a vectorized MARL interface (left) with wrappers for compatibility with OpenAI Gym  and the RLlib RL library .", "The default VMAS interface uses PyTorch  and can be used for feeding input already on the GPU.", "Multi-agent tasks in VMAS are defined as scenarios (center).", "To define a scenario, it is sufficient to implement the listed functions.", "Scenarios access the VMAS core (right), where agents and landmarks are simulated in the world using a 2D custom written physics module.Interface.", "The structure of VMAS is illustrated in fig:vmapsstructure.", "It has a vectorized interface, which means that an arbitrary number of environments can be stepped in parallel in a batch.", "In sec:mpecomparison, we demonstrate how vectorization grants important speed-ups on the CPU and seamless scaling on the GPU.", "While the standard simulator interface uses PyTorch [18] to enable feeding tensors directly as input/output, we provide wrappers for the standard non-vectorized OpenAI Gym [5] interface and for the vectorized interface of the RLlib [12] framework.", "This enables users to effortlessly access the range of RL training algorithms already available in RLlib.", "Actions for all environments and agents are fed to VMAS for every simulation step.", "VMAS supports movement and inter-agent communication actions, both of which can be either continuous or discrete.", "The interface of VMAS provides rendering through Pyglet [1].", "Scenario.", "Scenarios encode the multi-agent task that the team is trying to solve.", "Custom scenarios can be implemented in a few hours and debugged using interactive rendering.", 
"Interactive rendering is a feature where agents in scenarios can be controlled by users in a videogame-like fashion and all environment-related data is printed on screen.", "To implement a scenario, it is sufficient to define a few functions: make_world creates the agents and landmarks for the scenario and spawns them in the world, reset_world_at resets a specific environment in the batch or all environments at the same time, reward returns the reward for one agent for all environments, observation returns the agent's observations for all environments.", "Optionally, done and info can be implemented to provide an ending condition and extra information.", "Further documentation on how to create new scenarios is available in the repositoryfoot:vmasurl and in the code.", "Core.", "Scenarios interact with the core.", "This is where the world simulation is stepped.", "The world contains $n$ entities, which can be agents or landmarks.", "Entities have a shape (sphere, box, or line) and a vectorized state $(\\mathbf {x}_i,\\dot{\\mathbf {x}}_i,\\theta _i,\\dot{\\theta }_i ),\\, \\forall i \\in [1..n] \\equiv N$ , which contains their position $\\mathbf {x}_i\\in \\mathbb {R}^2$ , velocity $\\dot{\\mathbf {x}}_i\\in \\mathbb {R}^2$ , rotation $\\theta _i\\in \\mathbb {R}$ , and angular velocity $\\dot{\\theta }_i \\in \\mathbb {R}$ for all environments.", "Entities have a mass $m_i\\in \\mathbb {R}$ and a maximum speed and can be customized to be movable, rotatable, and collidable.", "Agents’ actions consist of physical actions, represented as forces $\\mathbf {f}^a_i \\in \\mathbb {R}^2$ , and optional communication actions.", "Agents can either be controlled from the interface or by an “action script” defined in the scenario.", "Optionally, the simulator can introduce noise to the actions and observations.", "Custom sensors can be added to agents.", "We are currently support LIDARs.", "The world has a simulation step $\\delta t$ , velocity damping coefficient $\\zeta $ , and customizable gravity $\\mathbf {g} \\in \\mathbb {R}^2$ .", "VMAS has a force-based physics engine.", "Therefore, the simulation step uses the forces at time $t$ to update the state in the following way: ${\\left\\lbrace \\begin{array}{ll}\\mathbf {f}_i(t) = \\mathbf {f}^a_i(t) + \\mathbf {f}_i^g + \\sum _{j \\in N \\setminus \\lbrace i\\rbrace }\\mathbf {f}_{ij}^e(t) \\\\\\dot{\\mathbf {x}}_i(t) = (1-\\zeta )\\dot{\\mathbf {x}}_i(t-1) + \\frac{\\mathbf {f}_i(t)}{m_i}\\delta t\\\\\\mathbf {x}_i(t) = \\mathbf {x}_i(t-1) + \\dot{\\mathbf {x}}_i(t)\\delta t\\end{array}\\right.", "}\\,,$ where $\\mathbf {f}^a_i$ is the agent action force, $\\mathbf {f}_i^g = m_i\\mathbf {g}$ is the force deriving from gravity and $\\mathbf {f}_{ij}^e$ is the environmental force used to simulate collisions between entities $i$ and $j$ .", "It has the following form: $\\mathbf {f}^e_{ij}(t) ={\\left\\lbrace \\begin{array}{ll}c \\frac{\\mathbf {x}_{ij}(t)}{\\left\\Vert \\mathbf { x}_{ij}(t)\\right\\Vert } k\\log {\\left(1 + e^{\\frac{-\\left(\\left\\Vert \\mathbf { x}_{ij}(t)\\right\\Vert -d_{\\textrm {min}}\\right)}{k}}\\right)} & \\quad \\text{if }\\left\\Vert \\mathbf { x}_{ij}(t)\\right\\Vert \\leqslant d_{\\textrm {min}} \\\\0 & \\quad \\text{otherwise}\\ \\end{array}\\right.", "}\\, .$ Here, $c$ is a parameter regulating the force intensity.", "$\\ \\mathbf {x}_{ij}$ is the relative position between the closest points of the two entities.", "$d_{\\textrm {min}}$ is the minimum distance allowable between them.", "The term inside the logarithm 
computes a scalar proportional to the penetration of the two entities, parameterized by a coefficient $k$ .", "This is then multiplied by the normalized relative position vector.", "This is the same collision system used in OpenAI MPE [13] (a small numerical sketch of this collision force is given after the scenario list below).", "The same simulation step is applied to the angular state: ${\left\lbrace \begin{array}{ll}\tau _i(t) = \sum _{j \in N \setminus \lbrace i\rbrace }\left\Vert \mathbf {r}_{ij}(t) \times \mathbf {f}^e_{ij}(t) \right\Vert \\\dot{\theta }_i(t) = (1-\zeta )\dot{\theta }_i(t-1) + \frac{\tau _i(t)}{I_i}\delta t\\\theta _i(t) = \theta _i(t-1) + \dot{\theta }_i(t)\delta t\end{array}\right.}\,.$ Here, $\mathbf {r}_{ij}\in \mathbb {R}^2$ is the vector from the center of the entity to the colliding point, $\tau _i$ is the torque, and $I_i$ is the moment of inertia of the entity.", "The rules regulating the physics simulation in the core are basic 2D dynamics implemented in a vectorized manner using PyTorch.", "They simulate holonomic (unconstrained motion) entities only." ], [ "Multi-robot scenarios", "Alongside VMAS, we introduce a set of 12 multi-robot scenarios.", "These scenarios contain various multi-robot problems, which require complex coordination—like leveraging heterogeneous behaviour and inter-agent communication—to be solved.", "While the ability to send communication actions is not used in these scenarios, communication can be used in the policy to improve performance.", "For example, Graph Neural Networks (GNNs) can be used to overcome partial observability through information sharing [3].", "By default, our scenarios provide all of the necessary information to the agents, but observations can be removed to make the tasks more difficult and incentivise communication.", "All tasks contain numerous parametrizable components.", "Every scenario comes with a set of tests, which run a local heuristic on all agents.", "Furthermore, we vectorize and port all 9 scenarios from MPE [13] to VMAS.", "In this section, we give a brief overview of our new scenarios.", "For more details (e.g., observation space, reward, etc.), in-depth descriptions can be found in the VMAS repositoryfoot:vmasurl.", "Transport (fig:transport).", "$N$ agents have to push $M$ packages to a goal.", "Packages have a customizable mass and shape.", "Single agents are not able to move a high-mass package by themselves.", "Cooperation with teammates is thus needed to solve the task.", "Wheel (fig:wheel).", "$N$ agents have to collectively rotate a line.", "The line is anchored to the origin and has a parametrizable mass and length.", "The team's goal is to bring the line to a desired angular velocity.", "Lines with a high mass are impossible to push for single agents.", "Therefore, the team has to organize with agents on both sides to increase and reduce the line's velocity.", "Balance (fig:balance).", "$N$ agents have to cooperatively transport a spherical package, positioned randomly on top of a line, to a given goal.", "The scenario has gravity.", "The package has a parametrizable mass and the line can rotate, making the problem harder.", "Give Way (fig:giveway).", "Two agents start in front of each other's goals in a symmetric environment.", "To solve the task, one agent has to give way to the other by using a narrow space in the middle of the environment.", "Football (fig:football).", "A team of $N$ blue agents competes against a team of $M$ red agents to score a goal.", "By default, red agents are controlled by a heuristic AI, but self-play
is also possible.", "Cooperation among teammates is required to coordinate attacking and defensive maneuvers.", "Agents need to communicate and assume different behavioural roles in order to solve the task.", "Passage (fig:passage).", "5 agents, starting in a cross formation, have to reproduce the same formation on the other side of a barrier.", "The barrier has $M$ passages ($M=1$ in the figure).", "Agents are penalized for colliding amongst each other and with the barrier.", "This scenario is a generalization of the one considered in [3].", "Reverse transport (fig:reversetransport).", "This task is the same as Transport, except only one package is present.", "Agents are spawned inside of it and need to push it to the goal.", "Dispersion (fig:dispersion).", "There are $N$ agents and $N$ food particles.", "Agents start in the same position and need to cooperatively eat all food.", "Most MARL algorithms cannot solve this task as they are constrained by behavioural homogeneity deriving from parameter sharing.", "Heterogeneous behaviour is thus needed for each agent to tackle a different food particle.", "Dropout (fig:dropout).", "$N$ agents have to collectively reach one goal.", "To complete the task, it is enough for only one agent to reach the goal.", "The team receives an energy penalty proportional to the sum of all the agents' controls.", "Therefore, agents need to organize themselves to send only the closest robot to the goal, saving as much energy as possible.", "Flocking (fig:flocking).", "$N$ agents have to flock around a target without colliding with each other or with the $M$ obstacles.", "While decentralized solutions to simulate flocking behaviour exist [23], they typically only consider convex environments, whereas our flocking environment contains obstacles, resulting in a more difficult coordination problem.", "Discovery (fig:discovery).", "$N$ agents have to coordinate to cover $M$ targets as quickly as possible while avoiding collisions.", "A target is considered covered if $K$ agents have approached it within a distance $D$.", "After a target is covered, the $K$ covering agents each receive a reward and the target is re-spawned at a random position.", "This scenario requires coordination and communication to be solved efficiently, since multiple agents are required to cover a target.", "Waterfall (fig:waterfall).", "$N$ agents move from top to bottom through a series of obstacles.", "This is a testing scenario that can be used to discover VMAS's functionalities."
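Returning to the physics core, the environmental collision force defined above can be written down directly in PyTorch. The following is a minimal, batched sketch of that single formula with illustrative parameter values; it is not the repository implementation.

```python
import torch

def collision_force(x_i, x_j, d_min=0.1, c=1.0, k=0.05):
    """Environmental force between two entities, batched over vectorized environments.

    x_i, x_j: (num_envs, 2) tensors holding the closest points of the two entities.
    d_min, c, k: minimum allowed distance, force intensity and penetration
    coefficient; the numerical values here are illustrative only.
    """
    x_ij = x_i - x_j                                  # relative position
    dist = torch.linalg.norm(x_ij, dim=-1, keepdim=True)
    direction = x_ij / dist.clamp_min(1e-8)           # normalized relative position
    # soft penetration term: k * log(1 + exp(-(dist - d_min) / k))
    penetration = k * torch.log1p(torch.exp(-(dist - d_min) / k))
    force = c * direction * penetration
    # the force acts only when the entities overlap (dist <= d_min), otherwise it is zero
    return torch.where(dist <= d_min, force, torch.zeros_like(force))
```

The torque entering the angular update can then be accumulated from these pairwise forces following the angular-state equations given earlier.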
], [ "Comparison with MPE", "In this section, we compare the scalability of VMAS and MPE [13].", "Given that we vectorize and port all the MPE scenarios in VMAS, we can compare the two simulators on the same MPE task.", "The task chosen is “simple_spread”, as it contains multiple collidable agents in the same environment.", "In fig:mpecomparison we can see the growth in execution time with respect to the number of environments stepped in parallel for the two simulators.", "MPE runs only on the CPU, while VMAS, using PyTorch, runs both on the CPU and on the GPU.", "In this experiment, we compare the two simulators on an Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz and we also run VMAS on an NVIDIA GeForce RTX 2080 Ti.", "The results show the impact of vectorization on simulation speed.", "On the CPU, VMAS is up to 5x faster than MPE.", "On the GPU, the simulation time for VMAS is independent of the number of environments, and VMAS runs up to 100$\times $ faster.", "The same results can be reproduced on different hardware.", "In the VMAS repositoryfoot:vmasurl we provide a script to repeat this experiment.", "Figure: Comparison of the scalability of VMAS and MPE  in the number of parallel environments.", "In this plot, we show the execution time of the “simple_spread” scenario for 100 steps.", "MPE does not support vectorization and thus cannot be run on a GPU." ], [ "Experiments and benchmarks", "We run a set of training experiments to benchmark the performance of MARL algorithms on four VMAS scenarios.", "Thanks to VMAS's vectorization, we are able to perform a training iteration, comprising 60,000 environment interactions and deep neural network training, in 25s on average.", "The runs reported in this section all took under 3 hours to complete.", "The models compared are all based on Proximal Policy Optimization [25], an actor-critic RL algorithm.", "The actor is a Deep Neural Network (DNN) which outputs actions given the observations and the critic is a DNN (used only during training) which, given the observations, outputs a value representing the goodness of the current state and action.", "We refer to the actor and critic as centralized when they have access to all the agents' observations and output all the agents' actions/values and we call them decentralized when they only map one agent's observations to its action/value.", "The models compared are: CPPO: This model uses a centralized critic and actor.", "It treats the multi-agent problem as a single-agent problem with one super-agent.", "MAPPO [32]: This model uses a centralized critic and a decentralized actor.", "Therefore, the agents act independently, with local decentralized policies, but are trained with centralized information.", "IPPO [31]: This model uses a decentralized critic and actor.", "Every agent learns and acts independently.", "Model parameters are shared among agents so they can benefit from each other's experiences.", "HetIPPO: We customize IPPO to disable parameter sharing, making each agent's model unique.", "Heuristic: This is a hand-designed decentralized heuristic.", "Experiments are run in RLlib [12] using the vectorized interface.", "We run all algorithms for 400 training iterations.", "Each training iteration is performed over 60,000 environment interactions.", "We plot the mean and standard deviation of the mean episode reward (the mean of the total rewards of the episodes contained in the training iteration) over 10 runs with different seeds.", "The model used for all critics and
actors is a two layer Multi Layer Perceptron (MLP) with hyperbolic tangent activations.", "A video of the learned policies is available at this linkfoot:video.", "In the following, we discuss the results for the trained scenarios.", "Transport (fig:experimentstransport).", "In the Transport environment, only IPPO is able to learn the optimal policy.", "This is because the other models, which have centralized components, have an input space consisting of the concatenation of all the agents' observations.", "Consequently, centralized architectures fail to generalize in environments requiring a high initial exploration like this one, where there is a high variance in possible joint states (and therefore there is a low probability that a similar state will be encountered).", "Wheel (fig:experimentswheel).", "The Wheel environment proved to be a hard task for MARL algorithms.", "Here, all models were not able to solve the task and performed worse than the heuristic.", "Balance (fig:balance).", "In Balance, all models were able to solve the task and outperform the heuristic.", "However, this is largely due to the use of a big observation space containing global information.", "The task can be made arbitrarily harder by removing part of the observation space and thus increasing partial observability.", "Give Way (fig:giveway).", "In the Give Way scenario, it is shown that only algorithms able to develop heterogeneous agent behaviour can solve the environment.", "In fact, IPPO and MAPPO, which use parameter sharing and decentralized actors, fail this scenario.", "On the other hand, it is shown that the scenario can be solved either through a centralized actor (CPPO) or by disabling parameter sharing and allowing agent policies to be heterogeneous (HetIPPO).", "The experimental results confirm that VMAS proposes a selection of scenarios which prove challenging in orthogonal ways for current state-of-the-art MARL algorithms.", "In addition, vectorization enables faster training, which is key to a wider adoption of multi-agent learning in the robotics community." ], [ "Conclusion", "In this work, we introduced VMAS, an open-source vectorized simulator for multi-robot learning.", "VMAS uses PyTorch and is composed of a core vectorized 2D physics simulator and a set of multi-robot scenarios, which encode hard collective robotic tasks.", "The focus of this framework is to act as a platform for MARL benchmarking.", "Therefore, to incentivize contributions from the community, we made implementing new scenarios as simple and modular as possible.", "We showed the computational benefits of vectorization with up to 30,000 parallel simulations executed in under 10s on a GPU.", "We benchmarked the performance of MARL algorithms on our scenarios.", "During our training experiments, we were able to collect 60,000 environment steps and perform a training iteration in under 25s.", "Experiments also showed how VMAS scenarios prove difficult in orthogonal ways for state-of-the-art MARL algorithms.", "In the future, we plan to extend the features of VMAS to widen its adoption, continuing to implement new scenarios and benchmarks.", "We are also interested in modularizing the physics engine, enabling users to swap vectorized engines with different fidelities and computational demands." ], [ "Acknowledgements", "This work was supported by ARL DCIST CRA W911NF-17-2-0181 and European Research Council (ERC) Project 949940 (gAIa).", "R. 
Kortvelesy was supported by Nokia Bell Labs through their donation for the Centre of Mobile, Wearable Systems and Augmented Intelligence to the University of Cambridge.", "J. Blumenkamp acknowledges the support of the ‘Studienstiftung des deutschen Volkes’ and an EPSRC tuition fee grant." ] ]
2207.03530
[ [ "The Berry dipole photovoltaic demon and the thermodynamics of\n photo-current generation within the optical gap of metals" ], [ "Abstract We dismantle the previously held misconception that it is impossible for bulk rectification mechanisms to induce a net DC electric current when the frequency of the impinging radiation lies within the optical gap of a metal in the limit of small carrier relaxation rates.", "We argue that generically such in-gap rectification mechanisms are irreversible and accompanied by a continuous exchange of energy with a heat bath and must also be necessarily accompanied by a small but finite absorption of radiation in order to guarantee the positivity of the net entropy production and abide by the second law of thermodynamics.", "We show, however, that the intra-band non-linear Hall effect arising from the Berry curvature is a special kind of in-gap rectification mechanism that behaves as a ``photo-voltaic demon'', namely it can operate as an ideal reversible and dissipationless conveyor of energy between the radiation and an external circuit.", "Its reversible nature allows for an interesting mode of operation as an amplifier of circularly polarized light, whose efficiency can approach 100%, and which could be technologically promising especially in the infrared frequency range." ], [ "Introduction", "Materials with broken inversion symmetry can display bulk rectification effects, whereby an oscillating electric field produces an average rectified DC electric current.", "While these effects have been investigated for decades [1], [2], [3], [4], there is a recent upsurge of interest in investigating their interplay with the electronic band structure and Berry phase geometry [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], as well as their potential for novel opto-electronic technologies [8], [20], [24], [25], [26], [27], [28].", "Despite all this research activity, the understanding of how these bulk rectification effects fit within the conceptual framework of non-equilibrium thermodynamics is relatively unexplored.", "Therefore, the first major objective of our current study is to contribute to filling this gap by investigating perturbatively the constraints imposed by the second law of thermodynamics on the leading non-linear response functions that govern such bulk rectification effects.", "A second major objective of our work is to demonstrate that it is possible to have a finite DC rectified current when the frequency of radiation lies within the optical gap of a material in the clean limit of small carrier relaxation rates.", "While examples of in-gap rectification have been discussed recently [29], [30], [31], earlier prominent studies [32], [33] had concluded that such current rectification within the optical gap of a material was impossible in the clean limit.", "Hence, our second major objective is to dismantle this fundamental misconception that has been spread by past work [32], [33]: namely, we will demonstrate that there can be a net rectified DC current when the frequency of the driving oscillating electric field lies within the optical gap of the electronic band structure in the ideal limit of zero temperature and vanishingly small relaxation rates and to second order in the driving electric fields.", "By using a microscopically explicit model of an electronic system coupled to a heat bath, we will show that this is possible for metallic systems with a Fermi surface and
we will also discuss why this is consistent with the laws of thermodynamics.", "From the existence of these in-gap rectification effects, one might be tempted to conclude that such mechanisms could induce DC electric photo-currents without an accompanying light absorption.", "As we will see, however, despite remaining finite in the limit of small relaxation rates, such in-gap rectification mechanisms are generically dissipative, in the sense that they are accompanied by a net positive entropy production.", "As a consequence, they are generically accompanied by a small but finite photon absorption, which must be present in order to abide by the second law of thermodynamics.", "Therefore, it is inaccurate to claim that these in-gap rectification mechanisms are not accompanied by photon absorption, as recently stated in Ref. [34].", "We have found, however, one special limit in which one particular in-gap rectification mechanism behaves as a non-dissipative reversible mechanism that does not contribute to the net entropy production and, ideally, does not need to be accompanied by irreversible light absorption.", "This mechanism is the non-linear Hall effect [7], [9], [15], [16], [17], [35].", "As we will show, in the limit of frequencies small compared to the optical gap but large compared to the relaxation rate, the “Hall” nature of this non-linear Hall effect allows the energy of circularly polarized light to be transferred onto an external electric circuit and vice-versa in a reversible, non-dissipative fashion, which is why we refer to this mechanism as a “photovoltaic demon”.", "The third major objective of our study is then to illustrate the interesting opportunities that this mechanism offers for novel opto-electronic technologies.", "More specifically, we will show that the efficiency with which this mechanism converts the energy of circularly polarized light into DC electric energy can approach 100% in the limit of small frequencies compared to the optical gap.", "But perhaps more interestingly, because of its reversibility, the same mechanism can be used to transfer energy from a DC circuit onto the radiation with high efficiency and therefore act as an effective amplifier of low-frequency circularly polarized light.", "Our paper is organized as follows.", "In Section , we set up the framework that incorporates both thermodynamics and nonlinear responses, discuss the work performed by the radiation and the circuit, and identify the key quantities that allow one to determine whether an in-gap rectification mechanism is dissipative or not.", "In Section  we illustrate these principles and quantities within a simplified Boltzmann single-band description.", "Section  discusses a microscopic description of the bulk rectification in the presence of a physical heat bath.", "Section  applies the general considerations of Sections  and  to a specific model, and validates the simpler picture of Section .", "In Section , we discuss photovoltaic and light-amplification devices based on these principles, their efficiency and the requirements for their operation."
], [ "Thermodynamic considerations", "We consider a crystalline electronic system coupled to a heat bath and subjected to a spatially uniform but time dependent vector potential ${\bf A}(t)$ .", "The energy of the system can change in two ways: by the work, $\Delta W$ , performed by the vector potential ${\bf A}(t)$ , and by the heat, $\Delta Q$ , absorbed or released into the bath.", "From the density matrix describing the system, $\rho _S(t)$ , these two quantities can be computed as follows [36], [37]: $\Delta W = \int _{t_i}^{t_f} \text{d} t\, \text{tr}\Big(\rho _S \frac{\text{d} H_S}{\text{d} t}\Big) = \int _{t_i}^{t_f} \text{d} t\, {\bf j}(t)\cdot {\bf E}(t), \qquad \Delta Q = \int _{t_i}^{t_f} \text{d} t\, \text{tr}\Big( H_S \frac{\text{d} \rho _S}{\text{d} t}\Big),$ where $t_i$ and $t_f$ are the initial and final times of a process and $H_S$ is the Hamiltonian of the electronic system.", "The second equality of the expression for the work can be obtained by assuming that the only explicit time dependent parameter changing in the Hamiltonian of the system is the vector potential ${\bf A}(t)$ .", "The above makes manifest that the change of energy of the system is $\Delta E=\Delta W+\Delta Q$ .", "We would like to investigate the energy exchange of the system with the radiation and an external electric circuit, as depicted in Fig.", "REF .", "We view the external electric circuit as providing a DC time independent electric field, ${\bf E}_0$ , and the radiation as the source of an oscillating electric field with frequency $\omega $ , ${\bf E}_\omega (t) = {\bf E}_\omega e^{i \omega t} + \text{c. c.}$ , where ${\bf E}_\omega $ is a vector that can be complex to account for the degree of polarization of light.", "The total electric field acting on the system is the sum of these ${\bf E}(t)= {\bf E}_0 + {\bf E}_\omega (t)$ , and therefore, from Eq.", "(), the work can be partitioned into the work performed by the circuit and the radiation $\Delta W = \Delta W_\text{circ} + \Delta W_\text{rad}$ , where $\Delta W_\text{circ} = \int _{t_i}^{t_f} \text{d} t\, {\bf j}(t)\cdot {\bf E}_0, \qquad \Delta W_\text{rad} = \int _{t_i}^{t_f} \text{d} t\, {\bf j}(t)\cdot {\bf E}_\omega (t).$", "Let us assume the system reaches a well defined steady state of oscillations periodic in the drive, with period $T = 2\pi /\omega $ .", "Because of periodicity the change of the system energy vanishes over one cycle: $\Delta E = 0$ (due to the DC electric field, strictly speaking the Hamiltonian is not periodic in time, but it is periodic up to a gauge transformation after one period).", "On the other hand the Kelvin-Planck statement of the second law of thermodynamics [39] implies that during one cycle the system can only release heat into the bath: $\Delta Q \le 0$ .", "Therefore the second law of thermodynamics implies that the net work performed on the system must be non-negative: $\Delta W = \Delta W_\text{circ} + \Delta W_\text{rad} \ge 0.$", "Figure: Schematic of the crystalline electronic system coupled to a heat bath, connected to an external circuit and subject to a radiation field.", "We can compute the above work to leading order in electric fields from linear response theory, where the electric current is given by: ${\bf j}^{(1)} (t) = {\bf j}_0^{(1)} + \big( {\bf j}_\omega ^{(1)} e^{i \omega t}+ \text{c.c.}\big), \qquad {\bf j}_\omega ^{(1)} = {\sigma }(\omega )\, {\bf E}_\omega .$
Here ${\\sigma } (\\omega )$ is the complex linear conductivity tensor.", "By inserting the above expression onto Eq.", "(), we then obtain the leading expressions for the average power: Wcirc(2) T = E0T (0) E0 , Wrad(2) T = E[ () + () ] E, where $\\Delta W_\\text{circ}^{(2)} / T$ is the Joule heating effect and $\\Delta W_\\text{rad}^{(2)} / T$ accounts for the light absorption at finite frequency.", "Therefore the second law of thermodynamics, as stated in Eq.", "(), implies that the symmetric part of the DC conductivity and the Hermitian symmetrized finite frequency conductivity must be non-negative tensors.", "Let us now compute the work to the next order of perturbation theory.", "To second order in fields the current is given by: j(2) (t) = j0(2)+ ( j(2) eit+j2(2) e2 it + c.c. )", ", jf(2) = 1, 2 ( 1 +2 - f ) (1,2) E1 E2 where $\\omega _{1,2} \\in \\lbrace 0, \\pm \\omega \\rbrace $ , ${\\bf E}_{-\\omega } \\equiv {\\bf E}_{\\omega }^*$ , and ${\\sigma }(\\omega _1,\\omega _2)$ is the symmetrized second order conductivity tensor, namely abc (1,2) = acb (2,1) .", "From the above and using Eq.", "() the next order contributions to the circuit and radiation work can be shown to be: Wcirc(3) T = E0T (0,0) E0 E0 + E0T (,-) EE* , Wrad(3) T = E[ (, 0) + (, 0) ] EE0, Therefore from Eq.", "() we see that while a pure monochromatic electric field does not contribute to the power at the third order, there is a non-zero contribution to the work performed by the circuit and the radiation at third order when the DC and the oscillating electric are concomitantly present.", "The contributions from the second order currents can allow the system to act either as a solar cell, when the energy is transferred onto the DC circuit, $\\Delta W_\\text{circ} < 0$ , or as a light amplifier when it is transferred onto the radiation, $\\Delta W_\\text{rad} < 0$ , but such negative work should always be compensated by a positive work to abide by the second law of thermodynamics from Eq.().", "We will call a rectification mechanism dissipationless if the third order contribution to total power as defined in Eq.", "() arising from such mechanism vanishes, and we will call it dissipative if it does not.", "As we will see, the fact that a rectification mechanism allows for a rectified current within the optical gap of a material is not a sufficient condition for it to be dissipationless, and in fact, we find that generically such in-gap mechanisms are dissipative.", "We will show that, one specific example of these dissipative mechanisms that allows for in-gap is the semiclassical intra-band Jerk effect in metals [17].", "On the other hand, we will demonstrate that the CPGE associated with the Berry-dipole driven non-linear Hall effect allows for a non-zero current within the transparency region and that it is also a dissipationless mechanism for current rectification in the ideal intra-band limit in which the frequency is much smaller than the optical gap $\\Delta _0$ .", "Let us now specialize our discussion to the effects in metals.", "To focus on the intra-band effects, we imagine that the inter-band optical gap, $\\Delta _0$ , is sent to infinity, $\\Delta _0 \\rightarrow \\infty $ .", "At zero temperature and in the ideal limit of vanishing carrier relaxation rates ($\\Gamma \\rightarrow 0$ ), the metal will have a transparency region in $\\omega $ where the dissipative part of the conductivity would vanish as follows: () + () = 2 2 D,     where ${\\mathbb {D}}$ is the Drude weight tensor (taken to be 
symmetrized).", "Here $\Gamma $ is the relaxation rate that will be defined in a more explicit microscopic form in the Section  below.", "From the above we see that in the limit $\Gamma \rightarrow 0$ the energy absorption by the material becomes vanishingly small for the frequencies within this optical gap.", "On the other hand, within this same frequency range, the metal can have finite rectification and as we will see also a non-zero 3rd order contribution to the power as defined from Eq.", "() that remains finite in the limit of $\Gamma \rightarrow 0$ .", "Combining Eqs.", "(), (), and (), then one would obtain that the leading contributions to the total power are: $\frac{\Delta W}{T} = \frac{1}{\Gamma }\, {\bf E}_0^T {\mathbb {D}}\, {\bf E}_0 + \frac{2\Gamma }{\omega ^2}\, {\bf E}_\omega ^\dagger {\mathbb {D}}\, {\bf E}_\omega + {\bf E}_0^T\, {\mathbb {K}}(\omega )\, {\bf E}_\omega {\bf E}_\omega ^* + \cdots ,$ where we replaced ${\sigma } (0)\rightarrow {\mathbb {D}}/ \Gamma $ , and the sub-leading terms would contain terms of orders, e.g., $O({\bf E}_0^3)$ , $O({\bf E}_\omega ^4)$ , $O({\bf E}_0^2 {\bf E}_\omega ^2)$ .", "We have introduced the tensor ${\mathbb {K}}(\omega )$ which captures the 3rd order contribution to the total work, and can be obtained from the second order conductivity as follows: ${\mathbb {K}}_{a b c} (\omega ) = \sigma _{a b c} (\omega , -\omega ) + \sigma _{a b c} (0,-\omega ) + \sigma _{c a b} (0,\omega ) .$", "When the tensor ${\mathbb {K}}(\omega )$ is non-zero, the rectification process leads to a non-zero contribution to the total work, and, therefore, also to the total heat transfer.", "As a result, a rectification mechanism operating at a given $\omega $ will be irreversible or dissipative (namely contributing to the entropy change) if ${\mathbb {K}}(\omega ) \ne 0$ , and it will be reversible or dissipationless (namely not contributing to the entropy change) if ${\mathbb {K}}(\omega )=0$ .", "Notice from Eq.", "() that if we had not included the second term accounting for the small but finite residual light absorption arising from Eq.", "(), the power in Eq.", "() could be made negative for perturbatively small electric fields ${\bf E}_0$ , violating the second law of thermodynamics.", "In fact the minimum of the power as a function of ${\bf E}_0$ is obtained for ${\bf E}_0^\text{min}= - \Gamma {\mathbb {D}}^{-1} {\mathbb {K}}(\omega ) {\bf E}_\omega {\bf E}_\omega ^* / 2$ (which is small by virtue of the smallness of $\Gamma $ and ${\bf E}_\omega $ ), and is given by: $\frac{\Delta W}{T}\Big |_\text{min} = \frac{2\Gamma }{\omega ^2}\, {\bf E}_\omega ^\dagger {\mathbb {D}}\, {\bf E}_\omega - \frac{\Gamma }{4}\, \big[{\mathbb {K}}(\omega ) {\bf E}_\omega {\bf E}_\omega ^*\big]^T {\mathbb {D}}^{-1}\, {\mathbb {K}}(\omega ) {\bf E}_\omega {\bf E}_\omega ^* + \cdots .$", "We see in the above that the second term containing the power arising from dissipative second order processes with nonzero ${\mathbb {K}}(\omega )$ is manifestly negative, because the Drude weight is a positive definite tensor.", "The first term in Eq.", "() however is perturbatively larger than the second term and guarantees the positivity of the total power in the perturbative regime.", "This is the term arising from the small residual light absorption from Eq.().", "Therefore we conclude that the Joule heating term alone is not enough to perturbatively enforce the positivity of the total power, and a small but finite radiation absorption must be present and coexist with the dissipative rectification processes when these induce in-gap photo-currents in order to abide by the second law of thermodynamics, in contrast to the claims in Ref.", "[34]." ], [ "Simplified Boltzmann description", "To illustrate the above considerations in detail within a simplified model, we consider the single band Boltzmann description within the relaxation-time description employed in Ref.
[9].", "While this might appear to be a simple-minded treatment (we note that this description does not include the correction to the Berry curvature introduced in Ref. [53], which can be neglected in the limit in which the interband energy separation is sent to infinity, $\Delta _0 \rightarrow \infty $ , while keeping the intraband Berry curvature finite so that a projection into a single band is justified, see Ref. [17], where one recovers the familiar expression for the Berry phase induced anomalous velocity [54]), in Sections  and  we will demonstrate that its predictions are recovered within a fully microscopic description of the system coupled to a heat bath in the intraband limit of $\Gamma \ll \omega \ll \Delta _0$ .", "In the simplified Boltzmann description, the electric current density is: ${\bf j} (t) = \int _{\bf k} f \left[ \partial _{\bf k} \epsilon + {\bf \Omega } \times {\bf E}(t) \right],$ where $\int _{\bf k} \equiv \int \text{d}{\bf k} / (2\pi )^d$ , $\epsilon $ is the dispersion relation and ${\bf \Omega }$ is the Berry curvature of the band.", "The electron distribution function $f$ satisfies the Boltzmann equation: $\partial _t f + {\bf E}(t) \cdot \partial _{\bf k} f = \Gamma (f_0 - f),$ where $\Gamma $ is a relaxation rate and $f_0$ is the equilibrium Fermi-Dirac distribution.", "Because the anomalous velocity is always orthogonal to the electric field, we immediately see that the work associated with the anomalous current is zero: ${\bf j}^\text{anom} (t) \cdot {\bf E}(t) = \int _{\bf k} f [{\bf \Omega } \times {\bf E}(t)] \cdot {\bf E}(t) = 0.$ To linear order in electric fields, the electric current is ${\bf j}^{(1)}(\omega ) = \frac{1}{\Gamma +i\omega }{\mathbb {D}}{\bf E}_\omega +{\cal F}\times {\bf E}_\omega ,$ where ${\cal F} = \int _{\bf k} f_0 {\bf \Omega }$ is the average of the Berry curvature over the occupied states and $ {\mathbb {D}}_{a b} = \int _{\bf k} f_0 \partial _a \partial _b \epsilon $ is the Drude weight tensor with $\partial _a \equiv \partial _{k_a}$ .", "It is easy to see that neither the circuit nor the radiation perform work on the electrons via the Berry curvature to leading order of perturbation theory: $\Delta W_{\text{BC}}^{(2)} = \Delta W_{\text{circ,BC}}^{(2)} = \Delta W_{\text{rad,BC}}^{(2)}=0.$ To leading order all the energy transfer to the electrons arises from the Drude weight.", "More specifically, the work performed by the radiation (radiation energy absorption) and the circuit (Joule heating effect) on the electrons to leading order are given by: $\frac{\Delta W_\text{rad}^{(2)}}{T}={\bf j}^{(1)}(\omega )\cdot {\bf E}_\omega ^*+\text{c.c.}=\frac{2\Gamma }{\Gamma ^2+\omega ^2}{\bf E}_\omega {\mathbb {D}}{\bf E}_\omega ^*, \qquad \frac{\Delta W_\text{circ}^{(2)}}{T} ={\bf j}^{(1)}(0)\cdot {\bf E}_0=\frac{1}{\Gamma }{\bf E}_0{\mathbb {D}}{\bf E}_0.$", "The sum of the two terms above gives rise to the second order part of the expression in Eq.", "() in the limit $\Gamma \ll \omega $ .", "We will now describe the contributions to the next order in perturbation theory.", "Within the current Boltzmann approach, there are two different mechanisms contributing to second order conductivities: the semiclassical Jerk term, arising from the non-parabolicity of the band dispersion, and the non-linear Hall effect, arising from the Berry curvature dipole (BCD) [9], [17].", "The second order conductivities introduced in Eq.", "() can be then separated into Jerk and BCD contributions as follows (for details, see Appendix A): $\sigma _{a b c}^\text{Jerk}(\omega ,-\omega ) = \frac{2}{\Gamma ^2+\omega ^2}{\mathbb {J}}_{a b c}, \qquad \sigma _{a b c}^\text{BCD}(\omega ,-\omega ) =\sum _d \left(\frac{\epsilon _{a d c}{\cal D}_{b d}}{\Gamma +i\omega }+\frac{\epsilon _{a d b}{\cal D}_{c d}}{\Gamma -i\omega } \right), \qquad \sigma _{a b c}^\text{Jerk}(\omega ,0) = \frac{2\Gamma +i\omega }{\Gamma (\Gamma +i\omega )^2}{\mathbb {J}}_{a b c}, \qquad \sigma _{a b c}^\text{BCD}(\omega ,0) = \sum _d \left( \frac{\epsilon _{adc}{\cal D}_{bd}}{\Gamma +i\omega }+\frac{1}{\Gamma }\epsilon _{adb}{\cal D}_{cd} \right),$ for $j_c (\omega _1+\omega _2)
= \\sigma _{cab}(\\omega _1,\\omega _2)E_{\\omega _1, a} E_{\\omega _2, b}$ , where ${\\mathbb {J}}_{abc} = \\int _{\\bf k} f_0 \\partial _a \\partial _b \\partial _c \\epsilon $ is the Jerk tensor, ${\\cal D}_{a b} = \\int _{\\bf k} f_0 \\partial _a \\Omega _b $ is the Berry dipole tensor, $\\epsilon _{abc}$ is the Levi-Civita symbol, and $E_{\\omega , a} \\equiv {\\bf E}_\\omega \\cdot {\\bf e}_a$ .", "We therefore see that both the Jerk and the Berry dipole indeed give rise to a finite rectification conductivity when the frequency resides within the optical gap, even in the limit of $\\Gamma \\rightarrow 0$ .", "Despite this shared interesting feature, there are, however, some key differences between these mechanisms even at the level of rectification conductivities.", "One is that the Jerk rectification conductivity tensor, $ {\\mathbb {J}}$ , vanishes in time reversal invariant systems while the Berry dipole tensor remains finite.", "Moreover, inside the gap and for $\\Gamma \\rightarrow 0$ , the real part of the BCD rectification conductivity vanishes, while the real part of the Jerk rectification conductivity remains finite.", "This implies that in this limit the Jerk mechanism leads to in-gap current rectification driven by linearly polarized light, while the BCD in-gap current rectification requires light with a non-zero degree of circular polarization.", "Finally, we also see a distinct scaling with frequency, with the BCD and Jerk decaying as $1/\\omega $ and $1/\\omega ^2$ away from the Drude peak, respectively.", "After substituting Eqs.", "() into Eq.", "() one can then obtain the third order contributions to the circuit and radiation powers, which are given by: $\\frac{\\Delta W ^{(3)}_\\text{rad}}{T} =\\frac{4\\Gamma ^2}{(\\Gamma ^2+\\omega ^2)^2}{\\bf E}_0{\\mathbb {J}} {\\bf E}_\\omega {\\bf E}_\\omega ^*-\\\\2 {\\bf E}_0\\cdot \\text{Re}\\left[\\frac{{\\cal D} {\\bf E}_{\\omega }\\times {\\bf E}_\\omega ^*}{\\Gamma +i\\omega } \\right] ,$ $\\frac{\\Delta W ^{(3)} _\\text{circ}}{T} =\\frac{1}{\\Gamma ^2}{\\bf E}_0{\\mathbb {J}} {\\bf E}_0{\\bf E}_0+\\frac{2}{\\Gamma ^2+\\omega ^2}{\\bf E}_0{\\mathbb {J}} {\\bf E}_\\omega {\\bf E}_\\omega ^*+\\\\2 {\\bf E}_0\\cdot \\text{Re}\\left[\\frac{{\\cal D} {\\bf E}_{\\omega }\\times {\\bf E}_\\omega ^*}{\\Gamma +i\\omega } \\right].$ Therefore, we see that there are contributions to the individual circuit and radiation works from both the BCD and Jerk terms.", "Notice, however, that the contributions of the BCD to the circuit and radiation work are exactly opposite to each other and therefore disappear in the net work, as expected from Eq.", "(REF ).", "In contrast the Jerk term contributes to the total work, and therefore the Jerk term is dissipative according to the general considerations of Section .", "More specifically the tensor ${\\mathbb {K}}(\\omega )$ introduced in Eq.", "(), directly depends on ${\\mathbb {J}}$ (for details, see S.I.A.", "): ${\\mathbb {K}}^\\text{Boltz}(\\omega )=\\frac{6\\Gamma ^2+2\\omega ^2}{(\\Gamma ^2+\\omega ^2)^2}{\\mathbb {J}} .$ When ${\\mathbb {J}}\\ne 0$ , ${\\mathbb {K}}^\\text{Boltz}(\\omega )$ approaches a non-zero limit for $\\Gamma \\ll \\omega $ .", "Following the general discussion of Section , we therefore see that a small but finite radiation absorption from the $1/\\omega ^2$ tail of the Drude peak in Eq.", "() necessarily needs to accompany this mechanism in order to not violate the second law of thermodynamics, see Eq.", "()." 
], [ "Quantum description with the heat bath — formalism", "To investigate to what extent the Boltzmann description we discussed above is valid in capturing microscopic irreversible processes, we construct a fully microscopic quantum description of the crystalline electronic system coupled to a heat bath and subject to a spatially uniform but time dependent vector potential.", "As we will demonstrate, the microscopic description in this section agrees with the simpler Boltzmann description in the limit of $\Gamma \ll \omega \ll \Delta _0$ , where $\Delta _0$ is the scale controlling the inter-band optical gap, and thus it serves as a validation of the previous description.", "But in addition, the full quantum description will allow us to also describe the corrections that appear for frequencies that are comparable to the interband optical gap.", "We will use a model of a non-interacting free fermionic bath.", "This model is in the same class as the non-interacting fermionic models often described within the Keldysh formalism [13], [41], [42], [43], [44], [45], [46], [47].", "Here we will provide a description of these baths that avoids the need for second quantization and Keldysh Green's functions (but which is equivalent).", "As depicted in Fig.", "REF , we take a model of the bath in which the system sites are tunnel coupled to a collection of identical bath sites.", "Thus the system plus bath form a large tight-binding model as a whole.", "The single particle Hilbert space including the system and the bath can be then decomposed into a direct sum of system and bath subspaces, namely their Hamiltonian and states have block form as follows: $H (t) = \begin{pmatrix} H_S (t) & H_{SB} \\ H_{SB}^\dagger & H_B \end{pmatrix}, \qquad \psi (t) = \begin{pmatrix} \psi _S (t) \\ \psi _B (t) \end{pmatrix}.$", "The crystalline electronic system is described by a periodic tight-binding Hamiltonian together with the perturbation from the time dependent vector potential $H_S(t) = H_0 + V(t) = \sum _n \epsilon _n | \chi _n \rangle \langle \chi _n | + \sum _{m n} V_{m n} (t) | \chi _m \rangle \langle \chi _n | ,$ where $| \chi _n \rangle $ and $\epsilon _n$ are the unperturbed system states and energies with $m,n$ being general indices denoting wave vector, orbital or spin degrees of freedom, while $V(t) = H_0 \big({\bf k} - {\bf A}(t) \big) - H_0({\bf k}) .$", "The bath reads $H_B = \sum _{n, i} \varepsilon _i^{} \, | \varphi _{n,i} \rangle \langle \varphi _{n,i} |$ with $| \varphi _{n,i} \rangle $ being the bath state coupled to the system state $| \chi _n \rangle $ and $\varepsilon _i$ its energy.", "For simplicity, we set the tunnel coupling $\lambda $ between any system state and bath state to be identical such that $H_{SB} = \lambda \sum _{n, i} | \chi _{n} \rangle \langle \varphi _{n, i} | $ .", "This model is identical to that employed in Refs.", "[10], [47].", "Figure: Schematic of the crystalline electronic system described by a tight binding model with physical sites (red balls) which are tunnel coupled (solid lines) among themselves, and with their own identical fermionic bath (blue balls).", "From Eq.", "(), we obtain the coupled Schrödinger equations for system and bath states: $i \dot{\psi }_S (t) = H_S (t) \psi _S (t) + H_{SB} \psi _B (t) $ and $i \dot{\psi }_B (t) = H_{SB}^\dagger \psi _S (t) + H_{B} \psi _B (t)$ (we set $\hbar = 1$ throughout the paper).", "By inserting the second equation into the first one, one can formally eliminate the bath state $\psi _B (t)$ and obtain an integro-differential equation that generalizes the Schrödinger equation for the open system $\psi _S (t)$ .", "Its solutions only depend on initial states of the bath and the system, $\psi _B(t_0)$ and $\psi 
_S(t_0)$ .", "Importantly, we now assume that the fermionic bath is initially in a thermal state with an equilibrium Fermi-Dirac distribution, namely $\rho _B (t_0) = \sum _{n,i} f_0 (\varepsilon _i) | \varphi _{n,i} \rangle \langle \varphi _{n,i}| , \qquad f_0 (\varepsilon _i) = \frac{1}{e^{\beta _0 (\varepsilon _i - \mu _0 )}+1},$ in which $\mu _0$ is the chemical potential and $\beta _0 = 1/ k_B T_0 $ the inverse temperature of the bath, respectively, and we send the initial time to minus infinity $t_0 \rightarrow -\infty $ .", "It is possible then to obtain the density matrix of the system $\rho _S (t) = \sum _{n=0}^\infty \rho ^{(n)} (t) $ perturbatively in terms of $V(t)$ .", "The bath is taken into a thermodynamic limit in which its spectrum of energies $\varepsilon _i$ becomes a continuum and is described by a density of states: $\nu _B(\omega ) = \sum _i \delta (\omega -\varepsilon _i) .$", "For simplicity we take an ideal bath with a flat and infinitely broad spectrum, namely, we take its density of states to be a constant, $\nu _B(\omega ) = \nu _0$ .", "The relaxation rate scale associated with the bath will then be: $\Gamma = \pi \nu _0 \lambda ^2/2 .$", "With the above simplifications it is possible to find relatively simple closed expressions for the density matrix of the system expanded in powers of the time dependent perturbation.", "We obtain the density matrix expansions to the zeroth order in $V(t)$ : $\rho _{m n}^{(0)} = \delta _{m n} \int _{\omega _b} \frac{2\Gamma }{\omega _b^2 + \Gamma ^2} f_0(\epsilon _m + \omega _b) ,$ where we used shorthand notations $\int _{\omega } \equiv \int _{-\infty }^{\infty } \text{d} \omega / 2 \pi $ as well as $\epsilon _{n m} \equiv \epsilon _{n} - \epsilon _{m}$ .", "Here the subscripts $m$ , $n$ are generic and include both momentum and band (e.g., orbital, spin, valley) indices.", "The above distribution accounts for the broadening of the energy levels of the system due to its coupling to the bath, and reduces to the ideal Fermi-Dirac distribution in the limit of $\Gamma \rightarrow 0$ .", "Additionally, expanding $V(t)$ to the first order and the second order, we obtain $\rho _{m n}^{(1)} (t) =\int _{\omega } e^{-i \omega t}V_{m n} (\omega ) \, \tilde{\rho }_{m n}^{(1)} (\omega ) ,$ $\rho _{m n}^{(2)} (t) = \int _{\omega _1} \int _{\omega _2} e^{-i (\omega _1 + \omega _2) t} \sum _{l} V_{m l} (\omega _1) V_{l n} (\omega _2)\\\times \tilde{\rho }_{m l n}^{(2)} (\omega _1, \omega _2),$ in which we have $\tilde{\rho }_{m n}^{(1)} (\omega ) = \int _{\omega _b} \frac{2\Gamma }{\omega _b^2 + \Gamma ^2} \frac{f_0(\epsilon _n + \omega _b ) - f_0(\epsilon _m - \omega _b )}{\omega + \omega _b + \epsilon _{n m} + i\Gamma } ,$ $\tilde{\rho }_{m l n}^{(2)} (\omega _1,\omega _2) = \int _{\omega _b} \frac{2\Gamma }{\omega _b^2 + \Gamma ^2} \left[ \frac{f_0(\epsilon _m - \omega _b )}{ (\omega _1 + \omega _2 + \omega _b + \epsilon _{n m} + i\Gamma ) (\omega _1 + \omega _b + \epsilon _{l m} + i\Gamma ) } + \frac{f_0(\epsilon _n + \omega _b )}{ ( \omega _1 + \omega _2 + \omega _b + \epsilon _{n m} + i\Gamma ) (\omega _2 + \omega _b + \epsilon _{n l} + i\Gamma ) } - \frac{f_0(\epsilon _l + \omega _b )}{ ( \omega _1 + \omega _b + \epsilon _{l m} + i\Gamma ) ( \omega _2 - \omega _b + \epsilon _{n l} + i\Gamma ) } \right] .$", "In the case of our interest, however, the perturbation $V(t)=H_0({\bf k}-{\bf A})-H_0 ({\bf k})$ itself has a non-linear dependence on ${\bf A}(t)$ .", "In order to calculate the current density along with the work performed by the circuit and radiation, we also need expansions of the perturbation $V(t)$ in terms of ${\bf A}$ as $V^{(n)} (t) = \sum _{a_1 \cdots a_n} \frac{(-1)^n}{n!} \frac{\partial ^n H_0 ({\bf k})}{\partial _{a_1} \cdots \partial _{a_n}} A_{a_1}(t) \cdots A_{a_n}(t) ,$ in which $a_n = x,y,z$ stands for spatial indices, $\partial _a \equiv \partial _{k_a}$ , and $A_{\omega ,a} = {\bf A}_{\omega } \cdot {\bf e}_a$ (expanding the perturbation, and the current operator below, by derivatives requires applying unitary transformations $\exp [\mp i {\bf k} \cdot ({\bf x}_\alpha - {\bf x}_\beta )]$ before and after derivatives, with ${\bf x}_{\alpha }$ the position of the $\alpha $ -th atom in the unit cell; here we assumed all atoms have the same position for simplicity [55]).", "Its Fourier transforms are given by: $V^{(n)} (\omega ) =\sum _{a_1 \cdots a_n} \int _{\omega _1} \cdots \int _{\omega _n} \delta \Big(\omega - \sum _{m=1}^{n} \omega _m\Big) A_{\omega _1, a_1} \cdots A_{\omega _n, a_n} \tilde{V}^{(a_1\cdots a_n)},$ where we used the
notation V(a1an) 2(-1)nn!", "n H0 (k)a1 an .", "Similarly, for the current operator $J_a (t) = - \\partial H_0 ({\\bf k}-{\\bf A} )/\\partial A_a = \\partial H_0 ({\\bf k}-{\\bf A} )/\\partial k_a $ , its expansions in ${\\bf A}$ are Ja(0) = a H0 (k) ,    Ja(n) (t) = a V(n) (t) .", "Their Fourier transforms are $J_a^{(0)} (\\omega ) = 2\\pi \\partial _a H_0 ({\\bf k})$ and Ja(n) () = a1 an (- n n) A1, a1 An, an Ja(a1 an), in which we defined Ja(a1 an) a V(a1an) .", "From the Eqs.", "() to (), we are able to calculate current densities and the corresponding conductivity tensors to linear order in ${\\bf A}$ , $j_a^{(1)} (\\omega ) = \\sigma _{a b}(\\omega ) E_{\\omega , b} ,$ $\\sigma _{ab} (\\omega ) = \\frac{i}{ \\omega }\\sum _{mn} \\Big [ \\tilde{V}_{mn}^{(b)} \\tilde{\\rho }_{mn}^{(1)} (\\omega ) J_{a,nm}^{(0)} + \\rho _{mn}^{(0)} \\tilde{J}_{a, nm}^{(b)} \\Big ] ,$ and those to the second order, ja(2) (1 + 2) = a b c(1, 2) E1, b E2, c , abc (1, 2) = i1 i2 mln [ Vml(b) Vln(c) mln(2) (1,2) Ja,nm(0) + Vmn(bc) mn(1) (1+2) Ja,nm(0) + Vmn(b) mn(1) (1) Ja, nm(c) + mn(0) Ja, nm(bc) ] + ( b c 1 2 ).", "Figure: (a) Dispersion relations for conduction band (red) and valence band (blue) of the time reversal breaking Hamiltonian h(𝐤)h({\\bf k}) in its Brillouin zone, for a chemical potential μ 0 \\mu _0 (gray plane) crossing the valence band.", "(b), (c), and (d) Conductivities σ xx (ω)\\sigma _{xx} (\\omega ), σ xxx (ω,-ω)\\sigma _{xxx} (\\omega ,-\\omega ), and Imσ xxy (ω,-ω)\\text{Im} \\sigma _{xxy}(\\omega , -\\omega ) for Γ=Δ 0 /10\\Gamma = \\Delta _0 / 10 (blue lines), Γ=Δ 0 /20\\Gamma = \\Delta _0 / 20 (green lines), and Γ=Δ 0 /40\\Gamma = \\Delta _0 / 40 (red lines).", "Light red areas denote the energy range in which optical transitions between the conduction and valence bands are allowed, while all other areas are gap regions.", "Light orange areas denote the energy range between the chemical potential and top of the valence band.", "The insets are zoomed-in views for regions of small values.", "Dashed circles highlight trends of in-gap conductivities as Γ\\Gamma decreases.", "Parameters used: Δ 0 =1\\Delta _0 = 1, a 0 =1a_0 = 1, t x =Δ 0 /5t_x = \\Delta _0 / 5, t y =Δ 0 /6t_y = \\Delta _0 / 6, φ x,1 =π/5\\phi _{x,1}= \\pi /5, φ y,1 =π/7\\phi _{y,1}= \\pi /7, m=Δ 0 /5m=\\Delta _0 / 5, φ x,2 =π/13\\phi _{x,2}=\\pi /13, φ y,2 =π/11\\phi _{y,2}=\\pi /11; μ 0 =-6Δ 0 /5\\mu _0 = -6 \\Delta _0/5, β 0 =100/Δ 0 \\beta _0 = 100 / \\Delta _0.We note that one needs to take limits, e.g., $\\sigma _{ab} (0) = \\lim _{\\omega \\rightarrow 0} \\sigma _{ab} (\\omega )$ and $\\sigma _{abc} (\\omega , -\\omega ) = \\lim _{\\omega _1 \\rightarrow -\\omega } \\sigma _{abc} (\\omega , \\omega _1) $ when encountering zero frequencies.", "Using Eq.", "(), we can then compute arbitrary components of ${\\mathbb {K}} (\\omega )$ from its definition Eq.", "()." 
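As a small numerical illustration of the bath-broadened equilibrium distribution introduced above, the sketch below evaluates the Lorentzian-weighted Fermi function $\rho^{(0)}_{nn}$ for a single level and shows that it approaches the bare Fermi-Dirac occupation as $\Gamma \rightarrow 0$. The numerical parameters and grids are illustrative choices.

```python
import numpy as np

def broadened_occupation(eps_n, gamma, beta=100.0, mu=0.0, n_w=100001, w_max=50.0):
    """rho0_nn = int dw/(2 pi) * 2*Gamma/(w**2 + Gamma**2) * f0(eps_n + w):
    equilibrium occupation of a system level broadened by the coupling to the bath.
    beta, mu and the integration grid are illustrative, not tied to a specific model."""
    w = np.linspace(-w_max, w_max, n_w)
    f0 = 1.0 / (np.exp(np.clip(beta * (eps_n + w - mu), -700, 700)) + 1.0)
    kernel = 2.0 * gamma / (w ** 2 + gamma ** 2)  # Lorentzian level broadening
    return np.trapz(kernel * f0, w) / (2.0 * np.pi)

for gamma in (0.5, 0.1, 0.01):
    print(gamma, broadened_occupation(eps_n=0.05, gamma=gamma))
# As Gamma -> 0 the occupation tends to the bare Fermi function f0(eps_n) ~= 0.007 here.
```

The same kind of frequency integral, with the energy denominators given above, enters the first- and second-order kernels used in the conductivity formulas.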
], [ "Quantum description with the heat bath — numerical results", "After describing the quantum microscopic formalism, we now consider a specific two-dimensional tight binding model $h ({\bf k}) = d_x ({\bf k}) \sigma _x + d_y ({\bf k}) \sigma _y + d_z ({\bf k}) \sigma _z ,$ in which $\sigma _{x,y,z}$ are Pauli matrices, and we chose the following expressions for the vector parameterizing the Bloch Hamiltonian: $ d_x ({\bf k}) = t_x \sin (k_x a_0 - \phi _{x,1})$ and $d_y ({\bf k}) = t_y \sin (k_y a_0 - \phi _{y,1})$ represent complex nearest neighbour hoppings in $x$ - and $y$ - directions, and $d_z ({\bf k}) = \Delta _0 + m [ 2 - \cos (k_x a_0 - \phi _{x,2}) - \cos (k_y a_0 - \phi _{y,2}) ]$ with $\Delta _0$ characterizing the band gap size.", "We set the lattice constant $a_0 = 1$ , and choose a generic set of phase factors $\phi _{x(y),1(2)} \ne 0$ such that $h ({\bf k})$ does not have the time reversal symmetry or any crystalline symmetries.", "Fig.", "REF (a) shows the typical dispersion relations for $h({\bf k})$ .", "The gap size between the valence band and the conduction band is approximately $2\Delta _0$ for the parameters chosen for this figure.", "To focus on effects in metals, we chose a chemical potential $\mu _0$ that crosses the valence band.", "Using Eqs.", "() and (), and for different $\Gamma $ , we calculated linear and nonlinear conductivities for $h({\bf k})$ .", "Illustrative results are shown in Fig.", "REF (b), (c), and (d).", "Fig.", "REF (b) for the linear conductivity $\sigma _{xx} (\omega )$ shows that our bath has well-behaved current relaxation: there is a non-zero DC conductivity for $\omega \rightarrow 0$ and finite $\Gamma $ , and at low frequencies, one can observe a Drude peak that becomes sharper as $\Gamma $ decreases.", "Also, from the inset of Fig.", "REF (b), one can see that when the frequency lies in the gap region (white areas), namely regions outside the energy range in which optical transitions are allowed (light red areas), the linear conductivity $\sigma _{xx}(\omega )$ approaches zero as $\Gamma \rightarrow 0$ (e.g., see dashed circles).", "On the other hand, Fig.", "REF (c) and (d) illustrate that there are indeed in-gap rectifications, exemplified by non-vanishing $\sigma _{xxx} (\omega ,-\omega )$ and $\text{Im}\,\sigma _{xxy} (\omega ,-\omega )$ in the gap region in the limit of $\Gamma \rightarrow 0$ (e.g., see dashed circles).", "This shows that non-vanishing rectification currents are not artefacts of the simpler Boltzmann description.", "To be able to isolate more precisely the in-gap rectification mechanisms that are present in time reversal symmetric systems, we consider the following related time reversal symmetric Hamiltonian: $h^\text{TRS} ({\bf k}) = \begin{pmatrix} h ({\bf k}) & 0 \\ 0 & h^*(-{\bf k}) \end{pmatrix} .$", "Namely this model is simply made by adding an additional time reversed copy to the earlier time reversal breaking model, making the new model time reversal invariant as a whole.", "The idea is that this new model is expected to display Berry dipole rectification but no Jerk effect.", "In fact, for the non-linear conductivity $\text{Im}\,\sigma _{xxy}^\text{TRS} (\omega ,-\omega )$ , which contains information about the rectification of circularly polarized light from the Berry dipole, one can verify that one simply obtains twice the previous result, namely that: $\text{Im}\, \sigma _{xxy}^\text{TRS} (\omega ,-\omega ) = 2\, \text{Im}\, \sigma _{xxy} (\omega ,-\omega ) .$", "However, except for this special case, other time reversal symmetric rectification conductivities behave differently from their time reversal breaking counterparts, which we
exemplified by plotting $\\sigma _{xxx}^\\text{TRS} (\\omega ,-\\omega )$ in Fig.", "REF .", "We can see in Fig.", "REF that, within the gap region, time reversal symmetric conductivity $\\sigma _{xxx}^\\text{TRS} (\\omega ,-\\omega ) \\rightarrow 0$ , as $\\Gamma \\rightarrow 0$ (e.g., see dashed circles).", "This is in contrast to the nonvanishing of time reversal breaking conductivity $\\sigma _{xxx} (\\omega ,-\\omega ) $ as $\\Gamma $ decreases shown in Fig.", "REF (c).", "Importantly, when $\\omega $ is within the gap region, we verified that 0 xxxTRS (, -) = 0 xyyTRS (, -) = 0 Re xxyTRS (, -) = 0 .", "(gap) The above confirms that the rectification conductivities arising from the Jerk mechanism vanish in time reversal symmetric crystals, which is consistent with the conclusion in Eq. ().", "However, we also verified that 0 Im xxyTRS (, -) 0 ,    (gap) namely the rectification conductivity form the Berry dipole does not vanish for both time reversal breaking and time reversal symmetric crystals [see Eq. ()].", "Figure: Time reversal symmetric rectification conductivity σ xxx TRS (ω,-ω)\\sigma _{xxx}^\\text{TRS} (\\omega ,-\\omega ) for h TRS (𝐤)h^\\text{TRS} ({\\bf k}) at Γ=Δ 0 /10\\Gamma = \\Delta _0 / 10 (blue lines), Γ=Δ 0 /20\\Gamma = \\Delta _0 / 20 (green lines), and Γ=Δ 0 /40\\Gamma = \\Delta _0 / 40 (red lines).", "The colored areas, and markers have the same meaning as in Fig. .", "Parameters used are also the same with those in Fig. .", "This figure illustrates that this conductivity vanishes within the optical gap in the limit Γ→0\\Gamma \\rightarrow 0.After validating the existence of in-gap rectifications with and without the time reversal symmetry, we now turn to analyze the third order total power $\\Delta W^{(3)} / T$ in the clean limit $\\omega \\gg \\Gamma \\rightarrow 0$ , which is controlled by the ${\\mathbb {K}} (\\omega )$ tensor defined in Eqs.", "() and ().", "For the Hamiltonian $h ({\\bf k})$ without any symmetries, all components of ${\\mathbb {K}} (\\omega )$ can be nonzero.", "For simplicity, we assume that ${\\bf E}_0 = E_0 {\\bf e}_x$ such that we can focus on the components ${\\mathbb {K}}_{xab}$ ($a,b \\in x, y$ ).", "We note that the realness of the power mandates that ${\\mathbb {K}}_{xxx}, {\\mathbb {K}}_{xyy} \\in {\\mathbb {R}}$ are real; and due to symmetrized conductivity tensors [see Eq.", "()], using Eq.", "() we have ${\\mathbb {K}}_{xyx} (\\omega ) = {\\mathbb {K}}_{xxy} (-\\omega ) = {\\mathbb {K}}_{xxy}^* (\\omega )$ .", "Therefore the only four independent components for ${\\mathbb {K}}_{xab}$ ($a,b \\in x, y$ ) are ${\\mathbb {K}}_{xxx}(\\omega )$ , ${\\mathbb {K}}_{xyy}(\\omega )$ , $\\text{Re}\\,{\\mathbb {K}}_{xxy}(\\omega )$ and $\\text{Im}\\,{\\mathbb {K}}_{xxy}(\\omega )$ .", "We note that ${\\mathbb {K}}_{xxx} (\\omega )$ , ${\\mathbb {K}}_{xyy} (\\omega )$ and $\\text{Re}\\,{\\mathbb {K}}_{xxy} (\\omega )$ correspond to the work $\\Delta W^{(3)} / T$ performed by linearly polarized radiations; on the other hand, $\\text{Im}\\, {\\mathbb {K}}_{xxy} (\\omega )$ appears exclusively for work related to circularly polarized radiation.", "Figure: (a) Ratios between the microscopic bath description and Boltzmann description of the components of lim Γ→0 𝕂(ω)\\lim _{\\Gamma \\rightarrow 0} {\\mathbb {K}} (\\omega ) which determine the 3rd order contributions to power ΔW (3) \\Delta W^{(3)} for 𝐄 0 =E 0 𝐞 x {\\bf E}_0 = E_0 {\\bf e}_x in a 2D crystal without the time reversal symmetry or any crystalline symmetries.", "The fact that these 
curves approach 1 when ω≪Δ 0 \\omega \\ll \\Delta _0 validates the Boltzmann theory in the intra-band limit.", "When the time reversal symmetry is present, so that the Berry dipole is the only intra-band rectification mechanism present, we see that only Im𝕂 ¯ xxy \\text{Im}\\,\\bar{\\mathbb {K}}_{xxy} from the Berry dipole 𝒟 xz {\\cal D}_{xz} persists and remains finite (solid line), while the other three components vanish (dashed lines).", "The vanishing of all the components of the work tensor 𝕂(ω){\\mathbb {K}}(\\omega ) in the limit ω≪Δ 0 \\omega \\ll \\Delta _0 indicates that the intra-band Berry dipole rectification is dissipationless.", "(b) Plots for three conductivities that constitute lim Γ→0 Im𝕂 xxy (ω)\\lim _{\\Gamma \\rightarrow 0} \\text{Im}\\,{\\mathbb {K}}_{xxy} (\\omega ) and are normalized by 𝒟 xz /ω{\\cal D}_{xz} / \\omega .", "Parameters used are the same as those in Fig.", ", while all results are obtained in the ω≫Γ→0\\omega \\gg \\Gamma \\rightarrow 0 limit.", "To compare in detail the four independent ${\\mathbb {K}} (\\omega )$ components for $h ({\\bf k})$ from the quantum bath description and those from the Boltzmann formalism, in Fig.", "REF (a), we computed the ratios between these components calculated from Eqs.", "() and (), and those from Eq.", "(REF ) in the $\\omega \\gg \\Gamma \\rightarrow 0$ limit: $\\bar{\\mathbb {K}}_{xxx} = \\lim _{\\Gamma \\rightarrow 0} \\frac{{\\mathbb {K}}_{xxx} (\\omega )}{{\\mathbb {K}}_{xxx}^\\text{Boltz} (\\omega )}, \\quad \\text{Re}\\, \\bar{\\mathbb {K}}_{xxy} = \\lim _{\\Gamma \\rightarrow 0} \\frac{\\text{Re}\\, {\\mathbb {K}}_{xxy} (\\omega )}{{\\mathbb {K}}_{xxy}^\\text{Boltz} (\\omega )}, \\quad \\bar{\\mathbb {K}}_{xyy} = \\lim _{\\Gamma \\rightarrow 0} \\frac{{\\mathbb {K}}_{xyy} (\\omega )}{{\\mathbb {K}}_{xyy}^\\text{Boltz} (\\omega )}, \\quad \\text{Im}\\, \\bar{\\mathbb {K}}_{xxy} = \\lim _{\\Gamma \\rightarrow 0} \\frac{\\text{Im}\\, {\\mathbb {K}}_{xxy} (\\omega )}{{\\cal D}_{xz} / \\omega },$ where we used [see Eq.", "()] $-\\lim _{\\Gamma \\rightarrow 0} \\omega \\, \\text{Im}\\, \\sigma _{xxy}^\\text{Boltz} (\\omega ,-\\omega ) = \\lim _{\\Gamma \\rightarrow 0} \\omega \\, \\text{Im}\\, \\sigma _{yxx}^\\text{Boltz} (0,\\omega ) = {\\cal D}_{xz} = \\int _{\\bf k} f_0 \\, \\partial _x \\Omega _z,$ to normalize $\\text{Im}\\,{\\mathbb {K}}_{xxy} (\\omega )$ , because within the Boltzmann formalism $\\lim _{\\Gamma \\rightarrow 0} \\text{Im}\\,{\\mathbb {K}}_{xxy}^\\text{Boltz} (\\omega ) = 0$ .", "These ratios are plotted in Fig.", "REF (a), which demonstrates that our results from the quantum theory validate the Boltzmann analysis in the $\\omega \\ll \\Delta _0$ limit, reproducing exactly the predictions of the Boltzmann formalism in this regime.", "The microscopic multiband formalism also allows us to characterize the deviations beyond the Boltzmann intraband description when the frequency is comparable with the interband optical gap: for a finite $\\omega /\\Delta _0$ , $\\bar{\\mathbb {K}}_{xxx}$ , $\\bar{\\mathbb {K}}_{xyy}$ , and $\\text{Re}\\, \\bar{\\mathbb {K}}_{xxy}$ deviate from unity, while $\\text{Im}\\, \\bar{\\mathbb {K}}_{xxy}$ becomes nonzero.", "Therefore, in general, all four components lead to nonzero $\\Delta W^{(3)}/T$ and are dissipative when interband effects are taken into account.", "For the time reversal symmetric model from Eq.", "(), we performed the same analysis for the four independent ${\\mathbb {K}} (\\omega )$ components.", "In this case, we found $\\lim _{\\Gamma \\rightarrow 0} {\\mathbb {K}}_{xxx}^\\text{TRS} (\\omega ) = \\lim _{\\Gamma \\rightarrow 0} {\\mathbb {K}}_{xyy}^\\text{TRS} (\\omega ) = \\lim _{\\Gamma \\rightarrow 0} \\text{Re}\\, {\\mathbb {K}}_{xxy}^\\text{TRS} (\\omega ) = 0,$ namely, in time reversal symmetric crystals and in the $\\Gamma \\ll \\omega < \\Delta _0$ limit, the total work $\\Delta W^{(3)} / T$ related to linearly polarized radiation is zero.", "Moreover, from Eq.", "(), one can conclude that $\\Delta W_\\text{circ}^{(3)} / T = \\Delta W_\\text{rad}^{(3)} / T = 0 $ vanish simultaneously in these circumstances.", "On the other hand, $\\text{Im}\\, {\\mathbb {K}}_{xxy}^\\text{TRS} (\\omega )$ takes the same value as in the time reversal broken model, namely we have that $\\lim _{\\Gamma \\rightarrow 0} \\text{Im}\\, \\bar{\\mathbb {K}}_{xxy}^\\text{TRS} (\\omega ) = \\lim _{\\Gamma \\rightarrow 0} \\frac{\\text{Im}\\, {\\mathbb {K}}_{xxy}^\\text{TRS} (\\omega )}{{\\cal D}_{xz}^\\text{TRS} / \\omega } = \\lim _{\\Gamma \\rightarrow 0} \\frac{2\\, \\text{Im}\\, {\\mathbb {K}}_{xxy} (\\omega )}{2\\, {\\cal D}_{xz} / \\omega } = \\lim _{\\Gamma \\rightarrow 0} \\text{Im}\\, \\bar{\\mathbb {K}}_{xxy} (\\omega ).$", "The above component is illustrated in Fig.", "REF (a) using the solid line.",
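The Berry dipole ${\\cal D}_{xz}$ used above to normalise $\\text{Im}\\,{\\mathbb {K}}_{xxy}$ can be estimated numerically on a discrete momentum grid. The following Python sketch does this for a generic gapped, tilted two-band model; the Hamiltonian, its parameters, the chemical potential, and the Berry-curvature sign convention are illustrative assumptions and are not the specific $h({\\bf k})$ or $h^\\text{TRS}({\\bf k})$ used in the figures.

```python
# Illustrative evaluation of a Berry curvature dipole D_xz = \int_k f_0 d(Omega_z)/dk_x
# on a momentum grid (the quantity used above to normalise Im K_xxy). The two-band
# model h(k) = tilt*k_x*1 + d(k).sigma, its parameters, and the sign convention for
# the Berry curvature are illustrative stand-ins, NOT the paper's h(k) or h^TRS(k).
import numpy as np

Nk, kmax = 801, 3.0
kx, ky = np.meshgrid(np.linspace(-kmax, kmax, Nk),
                     np.linspace(-kmax, kmax, Nk), indexing="ij")
dk = kx[1, 0] - kx[0, 0]

Delta0, vF, tilt, mu = 0.5, 1.0, 0.3, -1.0   # hypothetical model parameters

# d-vector of the two-band part; the quadratic term gaps and warps the model
dx, dy, dz = vF * kx, vF * ky, Delta0 + 0.2 * (kx**2 - ky**2)
d = np.sqrt(dx**2 + dy**2 + dz**2)
energy_lower = tilt * kx - d   # lower band; the tilt makes D_xz generically nonzero

# Berry curvature of the lower band: Omega_z = -d.(d_kx d x d_ky d) / (2 |d|^3)
dvec = (dx, dy, dz)
ddx = [np.gradient(c, dk, axis=0) for c in dvec]
ddy = [np.gradient(c, dk, axis=1) for c in dvec]
cx = ddx[1] * ddy[2] - ddx[2] * ddy[1]
cy = ddx[2] * ddy[0] - ddx[0] * ddy[2]
cz = ddx[0] * ddy[1] - ddx[1] * ddy[0]
omega_z = -(dx * cx + dy * cy + dz * cz) / (2.0 * d**3)

# Zero-temperature occupation; mu is placed inside the lower band so that a Fermi
# surface exists on the grid (the dipole is a Fermi-surface property).
f0 = (energy_lower < mu).astype(float)
D_xz = np.sum(f0 * np.gradient(omega_z, dk, axis=0)) * dk**2 / (2.0 * np.pi)**2
print(f"Berry curvature dipole D_xz ~ {D_xz:.4e}")
```

The finite-difference derivative of $\\Omega _z$ mirrors the definition ${\\cal D}_{ab} = \\int _{\\bf k} f_0 \\partial _a \\Omega _b$ quoted in the appendix.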
"From Fig.", "REF (a), one can also observe that in the $\\Gamma \\ll \\omega \\ll \\Delta _0$ limit, $\\text{Im}\\, {\\mathbb {K}}_{xxy} (\\omega ) = 0$ .", "Therefore, this demonstrates that indeed for time reversal symmetric crystals the total work performed by circularly polarized radiation via the Berry dipole mechanism exactly vanishes, namely, that this mechanism of rectification is dissipationless in this limit.", "The striking point is that this occurs while in-gap rectification conductivity itself remains finite, namely, in this limit $\\text{Im}\\, \\sigma _{xxy}^\\text{TRS} (\\omega ,-\\omega ) = 2\\, \\text{Im}\\, \\sigma _{xxy} (\\omega ,-\\omega ) = -2 D_{xz} / \\omega $ remains finite as we illustrate in Fig.", "REF (b).", "This means that $\\Delta W_\\text{circ}^{(3)}/T = - \\Delta W_\\text{rad}^{(3)}/T \\ne 0$ , or, in other words, that it is possible to perform dissipationless energy transfer between the circuit and circularly polarized radiation, in agreement with the considerations of Section ." ], [ "Applications", "In this section we will discuss how the dissipationless nature of the intra-band non-linear Hall effect has a potential to develop highly efficient photovoltaic and light amplification devices.", "This is because the non-linear Hall effect arising from the Berry curvature dipole, behaves as a “photovoltaic Demon”, namely it transfers completely the energy from the radiation onto the circuit in a reversible fashion without any energy dissipated onto the heat bath.", "The BCD effect will necessarily coexist with other dissipative effects, such as Joule heating, and as a result there will always be a net imperfect conversion of energy from the radiation onto the circuit.", "We will show, however, that the ultimate bound of the efficiency of energy conversion is 100%, and can be approached when the non-linear Hall effect dominates over the Joule heating and the dissipative photon absorption processes.", "The dissipationless nature of the non-linear Hall effect arises from the fact that the anomalous velocity is orthogonal to the total electric field [see Eq.", "(REF )], leading to a perfect cancellation of the radiation and circuit BCD contributions to the total work: $\\Delta W^{(3)}_{\\text{BD}}=\\Delta W^{(3)}_{\\text{circ,BD}}+\\Delta W^{(3)}_{\\text{rad,BD}}=0.$ The above is the mathematical statement that the BCD does not produce heat and behaves as a photovoltaic demon that transfers completely the energy between the circuit and the radiation.", "The electronic system operates as a solar cell when $\\Delta W_\\text{rad} > 0$ and $\\Delta W_\\text{circ} < 0$ .", "In this regime the system absorbs energy from the radiation and transfer it onto the circuit.", "In the opposite case, it behaves as an amplifier of light, when the energy of the circuit is delivered onto the radiation $\\Delta W_\\text{rad}< 0$ .", "We therefore introduce two kinds of energy efficiency functions for the two modes of operation of the electronic system: $\\eta _\\text{Solar} = -\\frac{\\Delta W_{\\text{circ}}}{\\Delta W_{\\text{rad}}},\\text{ for }\\Delta W_\\text{rad} > 0,\\Delta W_\\text{circ} < 0, \\\\ \\eta _\\text{Amp} = -\\frac{\\Delta W_{\\text{rad}}}{\\Delta W_{\\text{circ}}},\\text{ for }\\Delta W_\\text{rad}< 0,\\Delta W_\\text{circ} >0.$ In the above equations $\\Delta W_\\text{circ}$ and $\\Delta W_\\text{rad}$ are understood to be the respective works of circuit and radiation including all processes, both dissipative and dissipationless.", "Notice that the second law 
of thermodynamics from Eq.", "( ) implies that each of the above efficiencies is always bounded by 1: $\\eta \\le 1$ .", "Since the non-linear Hall effect is allowed in time reversal invariant systems [9] and in order to eliminate the dissipative jerk term, from here on we assume that our system has time reversal symmetry, leading to ${\\mathbb {J}}=0$ .", "In this case the work to leading 3rd order in electric field is: $\\frac{\\Delta W_\\text{circ}}{T} = \\frac{1}{\\Gamma } {\\bf E}_0 {\\mathbb {D}} {\\bf E}_0 - 2\\, {\\bf E}_0 \\cdot \\text{Re}\\left[ \\frac{{\\bf E}^*_{\\omega } \\times {\\cal D} {\\bf E}_{\\omega }}{\\Gamma + i\\omega } \\right], \\qquad \\frac{\\Delta W_\\text{rad}}{T} = \\frac{2\\Gamma \\, {\\bf E}^*_{\\omega } {\\mathbb {D}} {\\bf E}_{\\omega }}{\\Gamma ^2+\\omega ^2} + 2\\, {\\bf E}_0 \\cdot \\text{Re}\\left[ \\frac{{\\bf E}^*_{\\omega } \\times {\\cal D} {\\bf E}_{\\omega }}{\\Gamma + i\\omega } \\right].$", "The first terms of $\\Delta W_\\text{circ}$ and $\\Delta W_\\text{rad}$ are the Joule heating effect and the photon absorption from the Drude peak, respectively, which are both dissipative processes.", "The second terms are the BCD contributions, which we see are exactly opposite, as expected from Eq.", "(REF ).", "We see also that, for $\\omega \\gg \\Gamma $ , the sign of the product of ${\\bf E}_0$ with the vector $\\mathrm {Im}\\left[ {\\bf E}^*_{\\omega }\\times {\\cal D} {\\bf E}_{\\omega }\\right]$ is what determines whether the work can be negative, and therefore the sign of this is what ultimately determines if the system operates as a solar cell or as an amplifier [see the sign of the work done by the radiation in Eq. ()].", "${\\bf E}_0$ is determined by the external circuit, whereas $\\mathrm {Im}\\left[ {\\bf E}^*_{\\omega }\\times {\\cal D} {\\bf E}_{\\omega }\\right]$ is determined by the radiation.", "Moreover, $\\mathrm {Im}\\left[ {\\bf E}^*_{\\omega }\\times {\\cal D} {\\bf E}_{\\omega }\\right]$ is only non-zero when the radiation has a finite degree of circular polarization, and it reverses direction when the handedness of the polarization is reversed.", "The intuition behind this product is that $\\mathrm {Im}\\left[ {\\bf E}^*_{\\omega }\\times {\\cal D} {\\bf E}_{\\omega }\\right]$ is the direction in which the rectified current would flow when only ${\\bf E}_{\\omega }$ is present, and therefore we have a solar cell when ${\\bf E}_0$ is trying to oppose such current flow and a radiation amplifier when ${\\bf E}_0$ is aiding it (which requires the circuit to deliver the energy to sustain this).", "Figure: Maximum solar cell and light amplifier efficiency η max \\eta _\\text{max} as a function of |𝐄 b |/|𝐧 0 ·𝐄 a ||{\\bf E}_b|/|{\\bf n}_0\\cdot {\\bf E}_{a}| [see Eq.", "() for the definition of these electric field scales].", "The maximum efficiency is achieved at |𝐧 0 ·𝐄 a |≫|𝐄 b ||{\\bf n}_0 \\cdot {\\bf E}_{a}| \\gg |{\\bf E}_b|, when the BCD is much larger than the Drude weight.", "To estimate the efficiency quantitatively, for simplicity we will assume a diagonal structure of the Drude weight tensor ${\\mathbb {D}} = {\\mathbb {D}} \\,\\mathbb {I}_{2\\times 2}$ and introduce the following notation: ${\\bf E}_{a} \\equiv \\frac{2}{\\mathbb {D}}\\text{Re}\\left[\\frac{({\\bf E}^*_{\\omega }\\times {\\cal D}{\\bf E}_{\\omega })}{1+i\\omega /\\Gamma }\\right],~ |{\\bf E}_b|^2 \\equiv 2\\frac{{\\bf E}^*_{\\omega }\\cdot {\\bf E}_{\\omega }}{1+\\omega ^2/\\Gamma ^2},$ and ${\\bf E}_0 = E_0{\\bf n}_0$ .", "The system can operate as a solar cell for arbitrarily small ${E}_0$ when the sign of the circuit voltage is chosen so as to satisfy ${\\bf n}_0\\cdot {\\bf E}_{a}>0$ .", "Namely, $\\Delta W_\\text{circ}$ in Eq.", "(REF ) can become negative for arbitrarily small ${E}_0$ .", "The maximum efficiency of the solar cell as a function of ${E}_0$ is obtained by finding the maximum of Eq.", "(REF ), which occurs at: $E_0 = E_{\\text{Solar},\\text{max}} = \\sqrt{|{\\bf E}_b|^2+\\frac{|{\\bf E}_b|^4}{|{\\bf n}_0\\cdot {\\bf E}_a|^2}} - \\frac{|{\\bf E}_b|^2}{{\\bf n}_0\\cdot {\\bf E}_a},$ and the maximal efficiency is given by:
$\\eta _\\text{max} =\\\\1 - 2 \\left(\\sqrt{\\frac{|{\\bf E}_b|^2}{|{\\bf n}_0\\cdot {\\bf E}_a|^2}+\\frac{|{\\bf E}_b|^4}{|{\\bf n}_0\\cdot {\\bf E}_a|^4}}-\\frac{|{\\bf E}_b|^2}{|{\\bf n}_0\\cdot {\\bf E}_a|^2}\\right).$", "However, in order to operate as a radiation amplifier, the circuit voltage direction has to be chosen to oppose the current induced by the radiation (${\\bf n}_0\\cdot {\\bf E}_{a}<0$ ) and ${E}_0$ needs to overcome a threshold, given by: $|{\\bf E}_{\\text{threshold}}| = \\frac{|{\\bf {E}}_b|^2}{|{\\bf n}_0\\cdot {\\bf {E}}_a|}.$", "The maximum of Eq.", "() as a function of $E_0$ can be found in a similar fashion (see S.I.A for details), and, despite the different requirements of the two regimes, the optimal efficiency of the light amplifier is also described by Eq.", "(REF ).", "Notice that, interestingly, in the limit $|{\\bf n}_0\\cdot {\\bf E}_a| \\gg |{\\bf E}_b|$ the efficiency of both devices approaches 100% (see Fig.", "REF ) and the threshold to reach the amplification regime given by Eq.", "(REF ) becomes arbitrarily small, and therefore lies within the expected validity of the perturbative description.", "(We emphasize that even though it can be arbitrarily small, $\\Gamma $ is always viewed as finite; this is strictly needed in order to have a well defined steady state and for quantities such as the Joule heating to remain finite.)", "The optimization that we just discussed focused on maximizing the efficiency, but in general this is not equivalent to maximizing the total delivered power [namely the maximum of the numerators of the expressions for $\\eta $ in Eqs.", "(REF , )], which might be more relevant for practical applications.", "The maximum of the delivered power in the solar cell regime occurs at applied voltage $E_0 = |{\\bf n}_0\\cdot {\\bf {E}}_a|/2$ , and is given by: $\\frac{\\Delta W_\\text{circ,max}}{T} = \\frac{\\mathbb {D}}{\\Gamma }\\frac{|{\\bf n}_0\\cdot {\\bf E}_a|^2}{4},$ which grows with the radiation intensity and is proportional to the length of the vector $\\mathrm {Im}\\left[ {\\bf E}^*_{\\omega }\\times {\\cal D} {\\bf E}_{\\omega }\\right]$ (for $\\Gamma \\ll \\omega $ ).", "On the other hand, the delivered power in the light amplifying regime has no maximum within the third order of the perturbation theory and increases linearly with increasing ${E}_0$ (the optimal value will be controlled by higher order non-linear processes): $\\frac{\\Delta W_\\text{rad}}{T} =\\frac{\\mathbb {D}}{\\Gamma }\\left({\\bf E}_0\\cdot {\\bf E}_a+|{\\bf E}_b|^2\\right).$" ], [ "Summary and discussion", "Contrary to previous claims [32], [33], we have demonstrated that it is possible for certain bulk rectification effects to induce a non-zero rectified electric current in metals when the frequency of the radiation resides within the optical gap of the material, even in the limit of small relaxation rates, and we have shown that this is consistent with the laws of thermodynamics.", "We have accomplished this by using a fully microscopic description of the metallic electronic system coupled to a fermionic heat bath, and shown that this description reduces to a simpler Boltzmann single-band description within the relaxation time approximation in the limit $\\Gamma \\ll \\omega \\ll \\Delta _0$ , where $\\Gamma $ and $\\Delta _0$ are the relaxation rate and the optical gap for inter-band transitions, respectively.", "By considering the electronic system subjected to the simultaneous presence of a DC electric field (e.g.", "arising from an external circuit) and an oscillating
electric field (e.g.", "arising from the radiation), we have shown that generically these in-gap rectification processes are irreversible and accompanied by a non-zero exchange of heat with the bath, characterized by the tensor ${\\mathbb {K}}(\\omega )$ from Eq. ().", "We have seen that, while always present, the DC Joule heating effect alone is not enough to guarantee the positivity of the net entropy production at arbitrarily small DC electric fields.", "Namely, in addition to the ubiquitous Joule heating, it is strictly necessary that these irreversible in-gap rectification processes [those with ${\\mathbb {K}}(\\omega ) \\ne 0$ ] in metals are accompanied by a small but finite absorption of radiation in order to guarantee the positivity of the net entropy production and abide by the second law of thermodynamics, in contrast to recent claims [34].", "This small absorption of radiation can be provided by the tails of the Drude peak or the tails of the interband absorption at the corresponding frequency $\\omega $ of the oscillating electric field, which exist at small but finite relaxation rate $\\Gamma $ .", "We have shown, however, that the intra-band non-linear Hall effect arising from the Berry curvature dipole is special in the sense that it can be regarded as a non-dissipative and reversible effect, whereby the electronic system acts as a perfect and reversible conveyor of energy between the radiation and the circuit, and thus we have dubbed it a “photovoltaic demon”.", "This allows the electronic system to operate either as a highly efficient solar cell or, alternatively, as an amplifier of circularly polarized light.", "We caution that the “solar cell” mode of operation requires that the radiation has some circular polarization, and therefore it is hard to imagine that this could be technologically relevant as a traditional solar cell, since sunlight has no net degree of circular polarization.", "However, interestingly, the amount of light absorption can be tuned with an additional DC electric field (and vanishes when this field is zero), and therefore this principle could be technologically relevant for detection and for electrical control of the transparency of circularly polarized light.", "On the other hand, the mode of operation in which the electronic system behaves as an amplifier of circularly polarized light holds interesting promise, especially in the range of infrared frequencies.", "During the completion of this work, Ref.", "[34], with some overlapping discussion on the possibility of in-gap rectification, appeared, as well as Ref.", "[51], with a proposal for using the BCD effect for optoelectronic devices with optical gain that has some connection with our proposal of the BCD as a light amplifier.", "Some of our results had been preliminarily reported in [52].", "We would like to thank Elio König, Adolfo Grushin and Fernando de Juan for stimulating discussions.", "I. S. would especially like to thank Urmimala Dey, who performed several unpublished calculations that served as motivation for this project.", "J. C. W. S. acknowledges support from the Ministry of Education, Singapore under its MOE AcRF Tier 3 Grant No.", "MOE2018-T3-1-002."
], [ "Boltzmann equation, perturbation theory, solar cell and a light amplifier", "In this section we will derive corrections to the electron distribution function and electric current in the presence of electric field, starting from Boltzmann equation: $\\partial _t f + {\\bf E}(t) \\cdot \\partial _{\\bf k} f = \\Gamma (f_0 - f),\\qquad {\\bf E}(t)= {\\bf E}_0 +{\\bf E}_\\omega e^{i \\omega t} + {\\bf E}_{-\\omega }^{} e^{-i \\omega t},$ $f = \\sum _{n=0} ^{\\infty } f_n(t), \\quad \\text{where} \\quad f_0 = f_{F-D},\\quad \\text{and}\\quad f_n \\sim |{\\bf E}|^n,\\quad {\\bf E}_{\\omega }^* = {\\bf E}_{-\\omega }^{}.$ $f_{F-D}$ stands for a Fermi-Dirac distribution.", "Iterative solution of equations above brings us to the following conclusion: $f_1(t) = f_1(0)+f_1(\\omega )e^{i\\omega t}+f_1(-\\omega )e^{-i\\omega t},\\\\f_2(t) = f_2(0)+f_2(\\omega )e^{i\\omega t}+f_2(2\\omega )e^{i2\\omega t}+f_2(-\\omega )e^{-i\\omega t}+f_2(-2\\omega )e^{-2i\\omega t},$ where $f(-\\omega ) = f(\\omega )^*$ and: $f_1(0) = -\\frac{1}{\\Gamma } {\\bf E}_0\\cdot \\partial _{\\bf {k}} f_0,\\quad f_1(\\omega ) = -\\frac{1}{\\Gamma +i\\omega } {\\bf E}_\\omega \\cdot \\partial _{\\bf {k}} f_0,\\\\f_2(0) = -\\frac{1}{\\Gamma } \\left({\\bf E}_0\\cdot \\partial _{\\bf k} f_1(0)+{\\bf E}_\\omega \\cdot \\partial _{\\bf k} f_1(-\\omega )+{\\bf E}_{-\\omega }\\cdot \\partial _{\\bf k} f_1(\\omega )\\right),\\\\ f_2(\\omega ) = -\\frac{1}{\\Gamma +i\\omega }{\\bf E}_{\\omega }\\cdot \\partial _{\\bf k}f_1(0)-\\frac{1}{\\Gamma +i\\omega }{\\bf E}_{0}\\cdot \\partial _{\\bf k}f_1(\\omega ),\\\\f_2(2\\omega ) = -\\frac{1}{\\Gamma +2i \\omega } {\\bf E}_{\\omega }\\cdot \\partial _{\\bf k} f_1 (\\omega ),\\quad \\text{where}\\quad {\\bf a}\\cdot \\partial _{\\bf k}\\equiv \\sum _i a^i \\frac{\\partial }{\\partial k^i},$ which allows us to compute the electric current response to electric field.", "In the first order we obtain : ${\\bf j}^{(1)}(t) ={\\bf j}_1(0) + {\\bf j}_1(\\omega )e^{i\\omega t}+ {\\bf j}^{(1)}(-\\omega )e^{-i\\omega t},\\\\{\\bf j}^{(1)}(0) = f_1(0)\\partial _{\\bf k}\\epsilon + f_0 \\left[{\\bf \\Omega }\\times {\\bf E}_0\\right],\\\\{\\bf j}^{(1)}(\\pm \\omega ) = f_1(\\pm \\omega )\\partial _{\\bf k}\\epsilon + f_0 \\left[{\\bf \\Omega }\\times {\\bf E}_{\\pm \\omega }\\right],$ and in the second order: ${\\bf j}^{(2)}(t) ={\\bf j}^{(2)}(0) + \\left({\\bf j}^{(2)}(\\omega )e^{i\\omega t}+ {\\bf j}^{(2)}(2\\omega )e^{2i\\omega t} + \\text{c.c.", "}\\right),\\\\{\\bf j}^{(2)}(0) = f_2(0)\\partial _{\\bf k}\\epsilon + f_1(0) \\left[{\\bf \\Omega }\\times {\\bf E}_0\\right]+ f_1(\\omega ) \\left[{\\bf \\Omega }\\times {\\bf E}_{-\\omega }\\right]+f_1(-\\omega ) \\left[{\\bf \\Omega }\\times {\\bf E}_{\\omega }\\right],\\\\{\\bf j}^{(2)}(\\omega ) = f_2(\\omega )\\partial _{\\bf k}\\epsilon + f_1(0) \\left[{\\bf \\Omega }\\times {\\bf E}_\\omega \\right]+ f_1(\\omega ) \\left[{\\bf \\Omega }\\times {\\bf E}_{0}\\right],\\\\{\\bf j}^{(2)}(2\\omega ) = f_2(2\\omega )\\partial _{\\bf k}\\epsilon + f_1(\\omega ) \\left[{\\bf \\Omega }\\times {\\bf E}_\\omega \\right].$ Before moving to the computation of the work done by a system, we want to emphasise that the total electric field has two physically different components: a DC component that represents a circuit voltage and an AC component that represents incoming radiation.", "These two components do a work separately and thus we split their contributions accordingly $\\Delta W = \\Delta W_\\text{circ} + \\Delta W_\\text{rad}$ , where $\\Delta 
W_\\text{rad}$ is the work (power) performed by the radiation and $\\Delta W_\\text{circ}$ is the work done by the circuit; they are given by: $\\Delta W_\\text{circ} = \\int _{t_i}^{t_f} {\\bf j}(t)\\cdot {\\bf E}_0\\text{d} t,\\qquad \\Delta W_\\text{rad} = \\int _{t_i}^{t_f} {\\bf j}(t)\\cdot {\\bf E}_\\omega (t)\\text{d} t.$", "The sum of the two quantities above has to be non-negative, which leads to three possible regimes: $\\Delta W_\\text{circ}\\ge 0$ , $\\Delta W_\\text{rad}\\ge 0$ .", "In this regime the system absorbs energy from both the circuit and the incoming radiation.", "$\\Delta W_\\text{circ}\\le 0$ , $\\Delta W_\\text{rad}\\ge 0$ , $|\\Delta W_\\text{circ}|\\le |\\Delta W_\\text{rad}|$ .", "In this regime the system takes energy from the radiation and delivers part of it to the circuit.", "This is a solar cell.", "$\\Delta W_\\text{circ}\\ge 0$ , $\\Delta W_\\text{rad}\\le 0$ , $|\\Delta W_\\text{rad}|\\le |\\Delta W_\\text{circ}|$ .", "In this regime the system takes energy from the circuit and delivers part of it into the radiation.", "This is a light amplifier.", "First, let us consider the Berry dipole current ${\\bf j}_\\text{BD}(t)$ and its averaged power, which we separated into the absorbed power $\\Delta W _\\text{rad,BD} = {\\bf j}_\\text{BD}(\\omega ) \\cdot {\\bf E}_\\omega ^* + {\\bf j}_\\text{BD}(-\\omega ) \\cdot {\\bf E}_\\omega $ done by the incident radiation with the oscillating field, and the delivered power $\\Delta W _\\text{circ,BD} = {\\bf j}_\\text{BD}(0) \\cdot {\\bf E}_0$ done on the electric circuit by the constant electric field.", "Using the solution of the Boltzmann equation from above, we obtain: ${\\bf j}_\\text{BD}(0) = \\int _{\\bf k} \\big ( f_1(0) \\left[{\\bf \\Omega } \\times {\\bf E}_0\\right] + f_1(\\omega ) [{\\bf \\Omega } \\times {\\bf E}_{-\\omega }] + f_1(-\\omega ) [{\\bf \\Omega } \\times {\\bf E}_\\omega ] \\big ),\\\\{\\bf j}_\\text{BD}(\\omega ) = \\int _{\\bf k} \\big ( f_1(0) [{\\bf \\Omega } \\times {\\bf E}_\\omega ] + f_1(\\omega ) [{\\bf \\Omega } \\times {\\bf E}_0 ]\\big ).$", "With these obtained, the Berry dipole related absorbed power and delivered power are $\\Delta W _\\text{rad,BCD} = \\int _{\\bf k} \\big \\lbrace f_1(\\omega ) \\, ([{\\bf \\Omega } \\times {\\bf E}_0] \\cdot {\\bf E}_{-\\omega }) + f_1(-\\omega ) \\, ([{\\bf \\Omega } \\times {\\bf E}_0] \\cdot {\\bf E}_\\omega ) \\big \\rbrace = -2\\, {\\bf E}_0 \\cdot \\text{Re}\\left[ \\frac{{\\cal D} {\\bf E}_\\omega \\times {\\bf E}_{-\\omega }}{\\Gamma + i \\omega } \\right], \\qquad \\Delta W _\\text{circ,BCD} = \\int _{\\bf k} \\big \\lbrace f_1(\\omega ) \\, ([{\\bf \\Omega } \\times {\\bf E}_{-\\omega }] \\cdot {\\bf E}_0) + f_1(-\\omega ) \\, ([{\\bf \\Omega } \\times {\\bf E}_\\omega ] \\cdot {\\bf E}_0) \\big \\rbrace = 2\\, {\\bf E}_0 \\cdot \\text{Re}\\left[ \\frac{{\\cal D} {\\bf E}_\\omega \\times {\\bf E}_{-\\omega }}{\\Gamma + i \\omega } \\right],$ where ${\\cal D}_{a b} = \\int _{\\bf k} f_0 \\partial _a \\Omega _b $ is the Berry dipole and $[{\\cal D} {\\bf E}]_a = \\sum _b {\\cal D}_{a b} E_b$ is a matrix-vector multiplication.", "These two powers exactly cancel each other (${\\bf a}\\cdot [{\\bf b}\\times {\\bf c}]=-{\\bf c}\\cdot [{\\bf b}\\times {\\bf a}]$ ), $\\Delta W _\\text{rad,BD} + \\Delta W _\\text{circ,BD} = 0$ , which agrees with the above general analysis that the total power from the Berry dipole related current vanishes at any order of the perturbation theory.", "Now, let us write the total energy delivered and absorbed up to the second order of the perturbation theory for the current (third order for the power), which, after slight simplifications, can be written as: $\\Delta W _\\text{rad} = \\frac{2\\Gamma }{\\Gamma ^2+\\omega ^2} {\\bf E}_{-\\omega } {\\mathbb {D}} {\\bf E}_\\omega + \\frac{4\\Gamma ^2}{(\\Gamma ^2+\\omega ^2)^2} {\\bf E}_0 {\\mathbb {J}} {\\bf E}_\\omega {\\bf E}_{-\\omega } - 2\\, {\\bf E}_0 \\cdot \\text{Re}\\left[ \\frac{{\\cal D} {\\bf E}_\\omega \\times {\\bf E}_{-\\omega }}{\\Gamma + i \\omega } \\right], \\qquad \\Delta W _\\text{circ} = \\frac{1}{\\Gamma } {\\bf E}_0 {\\mathbb {D}} {\\bf E}_0 + \\frac{1}{\\Gamma ^2} {\\bf E}_0 {\\mathbb {J}} {\\bf E}_0 {\\bf E}_0 + \\frac{2}{\\Gamma ^2+\\omega ^2} {\\bf E}_0 {\\mathbb {J}} {\\bf E}_\\omega {\\bf E}_{-\\omega } + 2\\, {\\bf E}_0 \\cdot \\text{Re}\\left[ \\frac{{\\cal D} {\\bf E}_\\omega \\times {\\bf E}_{-\\omega }}{\\Gamma + i \\omega } \\right],$ where ${\\bf A} {\\mathbb {D}} {\\bf B} = \\sum _{a b} A_a {\\mathbb {D}}_{ab} B_b$ , ${\\bf A} {\\mathbb {J}} {\\bf B} {\\bf C} = \\sum _{abc} A_a {\\mathbb {J}}_{abc} B_b C_c$ , and ${\\mathbb {D}}_{ab} = \\int _{\\bf k} f_0 \\partial _a \\partial _b \\epsilon , \\qquad {\\mathbb {J}}_{abc} = \\int _{\\bf k} f_0 \\partial _a \\partial _b \\partial _c \\epsilon $ are the Drude weight and Jerk tensors.", "We see that the delivered [Eq.", "()] and absorbed [Eq.", "()] powers are
sensitive to the sign of the circuit voltage.", "If the Drude weight is negligible, the electro-optic effect is dominant, which enables an unexpected regime of powering the radiation from the circuit.", "Additionally, if the circuit voltage direction is switched, the system transitions into the solar cell regime.", "We note that the requirement $\\Delta W _\\text{rad}+\\Delta W _\\text{circ}=\\frac{2\\Gamma }{\\Gamma ^2+\\omega ^2} {\\bf E}_{-\\omega }{\\mathbb {D}}{\\bf E}_\\omega +\\frac{1}{\\Gamma } {\\bf E}_0{\\mathbb {D}}{\\bf E}_0+\\frac{6\\Gamma ^2+2\\omega ^2}{(\\Gamma ^2+\\omega ^2)^2}{\\bf E}_0{\\mathbb {J}} {\\bf E}_\\omega {\\bf E}_{-\\omega }+\\frac{1}{\\Gamma ^2}{\\bf E}_0{\\mathbb {J}} {\\bf E}_0{\\bf E}_0\\ge 0$ sets a limit on the validity of the perturbation theory.", "We see that the BCD current is dissipationless and is not present in Eq.", "(REF ), whereas the Jerk current is dissipative.", "It is important to notice that both effects are finite in the optical gap even in the clean limit $\\Gamma \\rightarrow 0$ .", "Interestingly, in the limit ${\\mathbb {J}}\\rightarrow 0$ the restriction Eq.", "(REF ) is automatically satisfied due to the positivity of the Drude weight.", "Yet, in general, this requirement may not be satisfied for arbitrary values of ${\\bf E}_0$ .", "For example, in the limit $ \\omega \\gg \\Gamma $ we obtain: $\\Gamma {\\bf E}_0{\\mathbb {D}}{\\bf E}_0+{\\bf E}_0{\\mathbb {J}} {\\bf E}_0{\\bf E}_0\\ge 0,$ which defines the limits of validity of the perturbation theory.", "In the remaining part of the section, we want to demonstrate how to use our theory to optimise the performance of a system as a solar cell or a light amplifier.", "First, assuming that $\\Delta W _\\text{rad}>0$ and $\\Delta W _\\text{circ}<0$ , which means that the system operates as a solar cell, and assuming time-reversal symmetry (${\\mathbb {J}} = 0$ ), we want to analyze the efficiency of the system: $\\eta _S=-\\frac{\\Delta W_{\\text{circ}}}{\\Delta W_{\\text{rad}}}= \\frac{ {\\bf E}_0\\cdot \\text{Re}\\left[\\frac{{\\bf E}_{-\\omega }\\times {\\cal D} {\\bf E}_{\\omega }}{1+i\\omega /\\Gamma } \\right]-\\frac{1}{2} {\\bf E}_0{\\mathbb {D}}{\\bf E}_0}{{\\bf E}_0\\cdot \\text{Re}\\left[\\frac{{\\bf E}_{-\\omega }\\times {\\cal D} {\\bf E}_{\\omega }}{1+i\\omega /\\Gamma } \\right]+\\frac{{\\bf E}_{-\\omega }{\\mathbb {D}}{\\bf E}_\\omega }{1+\\omega ^2/\\Gamma ^2} }.$", "To simplify the further analysis we also assume that the Drude weight is a diagonal tensor ${\\mathbb {D}} = {\\mathbb {D}} \\, \\mathbb {I}_{2\\times 2}$ , allowing us to rewrite the efficiency in a simplified form: $\\eta _\\text{Solar} = \\frac{{\\bf E}_0\\cdot {\\bf E}_a - |{\\bf E}_0|^2}{{\\bf E}_0\\cdot {\\bf E}_a +|{\\bf E}_b|^2},\\quad \\text{where}\\quad {\\bf E}_0 = E_0{\\bf n}_0, \\quad {\\bf E}_a =\\frac{2}{\\mathbb {D}}\\text{Re}\\left[\\frac{{\\bf E}_{-\\omega }\\times {\\cal D}{\\bf E}_{\\omega }}{1+i\\omega /\\Gamma }\\right],\\quad |{\\bf E}_b|^2 = 2\\frac{{\\bf E}_{-\\omega }\\cdot {\\bf E}_{\\omega }}{1+\\omega ^2/\\Gamma ^2},$", "which in the limit ${\\cal D} \\gg {\\mathbb {D}}$ can approach 1 (this is transparently seen if we also assume a diagonal structure of the Berry dipole, although it is not needed in general).", "We emphasize that the limit of ultimate efficiency is achieved when the Drude weight is negligible.", "This regime is physically distinct from the clean limit, where Joule heating becomes immense even for arbitrarily small values of the circuit voltage.", "In the clean limit the BCD mechanism is still present; however, one cannot use it to power a solar cell.", "Next, we study the optimization
of the device performance by tuning the applied voltage.", "It can be shown that the maximum efficiency of the solar cell (which is possible for ${\\bf n}_0\\cdot {\\bf E}_a>0$ and $E_0<{\\bf n}_0\\cdot {\\bf E}_a$ ) is attained at the following voltage: $E_0 =\\sqrt{|{\\bf E}_b|^2+\\frac{|{\\bf E}_b|^4}{|{\\bf n}_0\\cdot {\\bf E}_a|^2}} -\\frac{|{\\bf E}_b|^2}{{\\bf n}_0\\cdot {\\bf E}_a} \\quad \\rightarrow \\quad \\text{max}[\\eta _S] =1 - 2 \\left(\\sqrt{\\frac{|{\\bf E}_b|^2}{|{\\bf n}_0\\cdot {\\bf E}_a|^2}+\\frac{|{\\bf E}_b|^4}{|{\\bf n}_0\\cdot {\\bf E}_a|^4}}-\\frac{|{\\bf E}_b|^2}{|{\\bf n}_0\\cdot {\\bf E}_a|^2}\\right).$", "Note that the maximization of the delivered power occurs at a different voltage: $\\max [ \\Delta W_{\\text{circ}}]\\Rightarrow E_0 = {\\bf n}_0\\cdot {\\bf E}_a/2 \\rightarrow W_\\text{circ,max} = |{\\bf n}_0\\cdot {\\bf E}_a|^2/4$ .", "A similar analysis can be done for the maximization of the efficiency of the light amplifier with time-reversal symmetry (which is possible for ${\\bf n}_0\\cdot {\\bf E}_a<0$ and $E_0>|{\\bf E}_b|^2/|{\\bf n}_0\\cdot {\\bf E}_a|$ ).", "In this case we have $\\Delta W _\\text{rad}<0$ and obtain: $\\eta _\\text{Amp}=-\\frac{\\Delta W_{\\text{rad}}}{\\Delta W_{\\text{circ}}}=\\frac{{\\bf E}_0\\cdot {\\bf E}_a +|{\\bf E}_b|^2}{{\\bf E}_0\\cdot {\\bf E}_a - |{\\bf E}_0|^2}.$", "This is maximised at the following electric field, with the corresponding maximum efficiency: $E_0 =\\sqrt{|{\\bf E}_b|^2+\\frac{|{\\bf E}_b|^4}{|{\\bf n}_0\\cdot {\\bf E}_a|^2}} +\\frac{|{\\bf E}_b|^2}{|{\\bf n}_0\\cdot {\\bf E}_a|},\\quad \\rightarrow \\quad \\text{max}[\\eta _\\text{Amp}]=1 - 2 \\left(\\sqrt{\\frac{|{\\bf E}_b|^2}{|{\\bf n}_0\\cdot {\\bf E}_a|^2}+\\frac{|{\\bf E}_b|^4}{|{\\bf n}_0\\cdot {\\bf E}_a|^4}}-\\frac{|{\\bf E}_b|^2}{|{\\bf n}_0\\cdot {\\bf E}_a|^2}\\right).$", "Interestingly enough, the optimal efficiency of the light amplifier is the same as that of the solar cell, and the ultimate efficiency is achieved when the Drude weight is negligible compared to the Berry dipole.", "Yet, in the amplifying regime, the amplifying power has no optimum.", "The amplifying power increases linearly with the electric field magnitude." ] ]
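As a quick numerical sanity check of the optimisation above, the sketch below evaluates $\\eta _\\text{Solar}(E_0)$ on a grid of applied voltages and compares the brute-force optimum with the closed-form expressions; the numerical values of ${\\bf n}_0\\cdot {\\bf E}_a$ and $|{\\bf E}_b|^2$ are arbitrary placeholders, and the diagonal Drude weight is assumed, as in the text.

```python
# Minimal numerical check of the solar-cell optimisation above, assuming the
# diagonal Drude weight and the definitions of E_a, E_b from the text.
# The numerical values of n0.E_a and |E_b|^2 are arbitrary placeholders.
import numpy as np

n0_dot_Ea = 1.0      # n0.E_a > 0 : solar-cell regime
Eb_sq = 0.04         # |E_b|^2, set by Drude absorption at frequency omega

def eta_solar(E0):
    """eta_Solar = (E0*n0.E_a - E0^2) / (E0*n0.E_a + |E_b|^2) for E_0 = E0*n0."""
    return (E0 * n0_dot_Ea - E0**2) / (E0 * n0_dot_Ea + Eb_sq)

# Closed-form optimum quoted in the text
E0_opt = np.sqrt(Eb_sq + Eb_sq**2 / n0_dot_Ea**2) - Eb_sq / n0_dot_Ea
eta_max = 1.0 - 2.0 * (np.sqrt(Eb_sq / n0_dot_Ea**2 + Eb_sq**2 / n0_dot_Ea**4)
                       - Eb_sq / n0_dot_Ea**2)

# Brute-force scan over the applied voltage for comparison
E0_grid = np.linspace(1e-6, n0_dot_Ea, 20001)
i_best = np.argmax(eta_solar(E0_grid))
print(f"closed form : E0_opt = {E0_opt:.5f}, eta_max = {eta_max:.5f}")
print(f"numerical   : E0_opt = {E0_grid[i_best]:.5f}, "
      f"eta_max = {eta_solar(E0_grid).max():.5f}")
```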
2207.03496
[ [ "Mapping Milky Way disk perturbations in stellar number density and\n vertical velocity using Gaia DR3" ], [ "Abstract We have mapped the number density and mean vertical velocity of the Milky Way's stellar disk out to roughly two kiloparsecs from the Sun using Gaia Data Release 3 (DR3) and complementary photo-astrometric distance information from StarHorse.", "For the number counts, we carefully masked spatial regions that are compromised by open clusters, great distances, or dust extinction and used Gaussian processes to arrive at a smooth, non-parametric estimate for the underlying number density field.", "We find that the number density and velocity fields depart significantly from an axisymmetric and mirror-symmetric model.", "These departures, which include projections of the Gaia phase-space spiral, signal the presence of local disturbances in the disk.", "We identify two features that are present in both stellar number density and mean vertical velocity.", "One of these features appears to be associated with the Local Spiral Arm.", "It is most prominent at small heights and is largely symmetric across the mid-plane of the disk.", "The density and velocity field perturbations are phase-shifted by roughly a quarter wavelength, suggesting a breathing mode that is propagating in the direction of Galactic longitude $l\\sim 270$ deg.", "The second feature is a gradient in the stellar number density and mean vertical velocity with respect to Galactocentric radius.", "This feature, which extends across the entire region of our analysis, may be associated with the extension of the Galactic warp into the Solar neighbourhood in combination with more localised bending waves." ], [ "Introduction", "Gaia Data Release 3 (Gaia DR3) provides the measurements necessary to model the six-dimensional phase space distribution function (DF) of stars within a few kiloparsecs of the Sun [16], [17], [14] and thereby test our understanding of stellar dynamics and galactic evolution.", "However, analysis of the full phase space DF is challenging due to the curse of dimensionality and the difficulty in visualising structure in six dimensions.", "In this paper, we focus on the stellar number density $n$ and mean vertical velocity $\\bar{W}$ , which are derived from velocity moments of the DF.", "Our interest in these quantities stems from recent observations of dynamical features in the disk associated with structure perpendicular to the Galactic mid-plane.", "These include the number count asymmetry about the mid-plane [46], [50], [4], [10], bending and breathing motions [46], [47], [8], [15], [14], and the phase-space spiral in the $(Z,W)$ -plane [3].", "There is also evidence that the disk is corrugated beyond the Solar circle from observations of both $n$ [49] and $\\bar{W}$ [38], [13].", "In his seminal work on the vertical structure of the Milky Way, [33] combined observations of $n$ and $\\bar{W}$ in the Solar neighborhood to infer the vertical force $a_z$ as a function of $Z$ .", "The essence of the calculation can be understood from dimensional considerations; near the mid-plane $a_z \\simeq Z \\left(\\Delta W / \\Delta Z\\right)^2\\simeq Z\\Omega _z^2$ , where $\\Delta W$ and $\\Delta Z$ are characteristic widths of the disk in $Z$ and $W$ and $\\Omega _z$ is the frequency of vertical oscillations.", "In principle, simultaneous measurements of the spatial and velocity distributions associated with a disturbance of the disk would similarly allow one to study its dynamics.", "For example, armed with 
measurements of the displacement of the mid-plane and mean vertical velocity, one might be able to test theories of bending modes in the disk (see [39] and references therein).", "Unfortunately, most of the dynamical features mentioned above have been detected in either $n$ or $W$ .", "One notable exception is the phase spiral, where we have number counts in the $(Z,W)$ phase space.", "The phase spirals almost certainly arise from the incomplete phase mixing of a perturbation to the disk and the presence of information in both $Z$ and $W$ allows one to date the perturbation (e.g.", "[3], [24], [25], [42]).", "One of the main goals of this paper is to find other examples of perturbations that can be identified in both $n$ and $W$ and combining these two spatially varying fields in order to learn about the time-varying nature of disk perturbations.", "The main challenge in inferring $n$ comes from understanding the selection effects.", "In particular, the Gaia selection function is highly complex [6], [12], [11], [37]; it depends mainly on stellar crowding and Gaia's scanning law and brightness limits, with the additional confounding issues of dust reddening and extinction.", "These different factors have a spatial dependence, but are independent with respect to velocity.", "Modelling significant selection effects is often challenging, making it difficult to accurately extract the stellar number density distribution.", "By comparison, it is more straightforward to derive the mean velocity field as a function of spatial position (e.g.", "[15], [26]) since the spatially dependent selection does not induce a strong systematic bias but only lowers the amount of available statistics.", "On the other hand, velocity measurements require parallax, proper motion, and radial velocity measurements.", "At present, the number of stars in Gaia where this is possible is a factor of $\\sim 30$ less than the total number of stars in the survey.", "We modelled the spatial distribution of stars in the Galactic disk within a distance of a few kiloparsecs using data from Gaia DR3 [17], supplemented with photo-astrometric distances from StarHorse [2].", "We assumed that the three-dimensional stellar number density distribution is a Gaussian process (GP) and used GP regression to estimate the underlying smooth number density field in a non-parametric way that does not rely on any symmetry assumptions such as axisymmetry or mirror symmetry about the Galactic plane.", "We carefully masked any spatial region which is compromised by a large distance, dust extinction, or the presence of open clusters.", "Thanks to the inherent property of smoothness of Gaussian processes, a masked spatial volume is still informed by its unmasked spatial neighbourhood.", "In this manner, we were able to construct a model-independent yet robust three-dimensional map of the stellar number density distribution within a distance of a few kiloparsecs.", "We also mapped the mean vertical velocity field $\\bar{W}$ using a similar approach, for example masking open clusters, although we simply calculated the mean value in fixed spatial volumes without any GP regression.", "For a GP model of the velocity field, see [32].", "This article is structured as follows.", "In Sect.", ", we present the data and define our coordinate system.", "We describe our method for mapping the stellar number density distribution in Sect.", ", and our method for mapping the vertical velocity distribution in Sect. 
.", "In Sect.", ", we present our results.", "In the final Sects.", "and , we discuss and conclude." ], [ "Data", "We used data from Gaia DR3, supplemented with photo-astrometric distance and dust extinction information from StarHorse [2], which is available for Gaia stars with an apparent $G$ -band magnitude below 18.5.", "We analysed four different stellar populations defined by a range in absolute magnitude in the Gaia $G$ -band, according to $M_G \\in (0, 1],\\, (1, 2],\\, (2, 3],\\, (3, 4]$ .", "A colour magnitude diagram illustrating our data sample cuts can be found in Fig.", "REF .", "Figure: Colour magnitude diagram of stars in StarHorse within a distance of 400 pc.", "The panel on the right side shows the one dimensional absolute magnitude histogram.", "The dashed lines correspond to the magnitude cuts of our four data samples.We used the Cartesian heliocentric coordinates ${X} = (X,Y,Z)$ , which point in the directions of the Galactic centre, Galactic rotation, and Galactic north.", "In terms of the Galactic longitude and latitude, written $l$ and $b$ , these coordinates are defined according to $\\begin{split}X & = d \\, \\cos l \\, \\cos b, \\\\Y & = d \\, \\sin l \\, \\cos b, \\\\Z & = d \\, \\sin b, \\\\\\end{split}$ where $d$ is the distance from the solar position.", "We also made use of the Galactocentric radius, given by $R = \\sqrt{(R_\\odot -X)^2+Y^2},$ where we assumed a value of $R_\\odot = 8.2~\\text{kpc}$ for the Sun's distance from the Galactic centre (consistent with for example [28]).", "The time-derivatives of these spatial positions, $\\dot{{X}}$ , give the velocities in the solar rest-frame.", "In this work, we focused on the vertical velocity, which is given by $W = \\dot{Z} = d \\, k_\\mu \\, \\mu _b \\, \\cos b + v_\\text{RV} \\, \\sin b$ where $k_\\mu = 4.74057 ~ \\text{yr}\\, \\text{mas}^{-1} \\, \\text{kpc}^{-1} \\, \\text{km}\\,\\text{s}^{-1}$ is a unit conversion constant, $\\mu _b$ is the latitudinal proper motion, and $v_\\text{RV}$ is the radial velocity.", "We accounted for the statistical uncertainties in number counts while neglecting observational uncertainties in the positions of individual stars.", "For a star's spatial positions, we used the Gaia DR3 values for Galactic latitude and longitude and the StarHorse median value for the distance (labelled dist50 in that catalogue).", "When calculating the velocities, we used a similar procedure, neglecting observational uncertainties for the proper motions and radial velocity, although with the data quality cuts described in Sect. .", "StarHorse distances have a relative precision of 3 % in the bright end of apparent magnitudes smaller than roughly 14, but rises to roughly 15 % at $m_G=17$ (see figure 13 in [2]).", "The spatial volume we studied is potentially especially problematic, due to the high rate of dust extinction and stellar crowding in distant regions of the Galactic disk.", "At a distance of a few kiloparsecs, even a relative uncertainty of a few per cent is significant for our purposes, especially where there are strong degeneracies between distance and dust extinction.", "The data cuts we applied in order to circumvent these issues are described below in Sect.", "REF ." 
], [ "Stellar number density distribution", "For each of our four stellar populations, defined by different cuts in absolute magnitude, we carefully masked spatial volumes that were compromised by large distances, high dust extinction, or the presence of open clusters.", "In the remaining spatial volume, where we could consider the data sample to be complete, we fitted a three-dimensional stellar number density distribution function using a Gaussian process.", "These steps are described in detail below." ], [ "Masks", "In order to stay within distances and magnitudes where the data is complete and well behaved, we constructed a sky mask in the following way.", "We divided the sky into a HEALPix map [18] of order 7 in $l$ and $b$ , corresponding to an angular resolution of $0.46$ degrees.", "For each pixel, we set an upper limit in distance, such that the following criterion was fulfilled: $M_{G,\\text{high}}+5 \\, \\text{log}_{10}\\left( \\frac{\\text{distance}}{10~\\text{pc}} \\right)+ (\\text{80th percentile dust ext.", "})<17,$ where $M_{G,\\text{high}}$ is the upper absolute magnitude bound of the data sample.", "This ensured that the non-masked data of some given sky angle coordinates and distance, had a distribution of apparent magnitudes which falls largely below 17, with only a weaker tail of stars that were dimmer than this limit.", "We used the 80th dust extinction percentile as a conservative measure; this was calculated from the volume defined by the $(l,b)$ -pixel and $100~\\text{pc}$ wide area cells in the $(X,Y)$ -plane, using the StarHorse data column ag50 for all stars with $M_G<6~\\text{mag}$ .", "We also masked the nearby spatial volume in order to avoid stars that are too bright.", "We set a lower limit in distance, requiring that this criterion was fulfilled: $\\log _{10} \\left(\\frac{\\text{distance}}{10~\\text{pc}}\\right) >\\frac{6-M_{G,\\text{low}}}{5},$ where $M_{G,\\text{low}}$ is the lower absolute magnitude bound of the data sample.", "We also masked the spatial volume where open clusters affected the stellar number density or obscured the field of view.", "We used the catalogue of open clusters from [7].", "For each open cluster, we assumed an angular size given by two times its half-light radius (r50 in the open cluster catalogue).", "Using the same sky map as defined for the dust mask above, we masked any spatial volume where the $(l,b)$ -pixel overlapped with an open cluster's angular area and the spatial distance extended beyond the 5th percentile distance of the open cluster (d05 in the open cluster catalogue).", "The mask functions of two of our stellar samples can be seen in Fig.", "REF .", "The circular patches are masked due to open clusters, while the remaining more complex structure arises from dust extinction and the limit in apparent magnitude as defined in Eq.", "REF .", "The two data samples shown in the figure differ in their spatial extent, where the brightest one reaches greater distances.", "Figure: Upper distance limit, given by Eq.", ", as a function of angles ll and bb, for our brightest and third brightest data samples.", "The centre of the map is in the direction of the Galactic centre, while positive bb is pointing upwards and positive ll is pointing to the right." 
], [ "Spatial grid", "We divided the spatial volumes into a three-dimensional Cartesian grid with a 100 pc spacing in $X$ and $Y$ , and a 10 pc spacing in $Z$ .", "We were mainly interested in the stellar number density's vertical variations, which are more significant than those parallel to the Galactic plane, which is why this direction had a smaller grid spacing.", "Furthermore, because we wanted to study the Galactic disk, we restricted ourselves to $|Z| \\le 800~\\text{pc}$ .", "Each volume cell was labelled by the triplet of indices $(i,j,k)$ , which sets its spatial boundaries according to $\\begin{split}100\\times i -50 < & \\frac{X}{\\text{pc}} \\le 100\\times i +50, \\\\100\\times j -50 < & \\frac{Y}{\\text{pc}} \\le 100\\times j +50, \\\\10\\times k < & \\frac{Z}{\\text{pc}} \\le 10\\times (k+1).\\end{split}$ Before we fitted any stellar number density function, we further reduced the data in the following manner.", "Each volume cell had its specific number count, written $N_{i,j,k}$ , given by the number of stars that remained in the volume cell after applying the mask function.", "We also associated each volume cell with an effective fractional volume, written $\\tilde{f}_{i,j,k}$ , corresponding to the fraction of that volume that was not excluded by the mask functions, which was calculated via Monte Carlo integration.", "These quantities were of course unique for each separate data sample.", "For each volume cell where $\\tilde{f}_{i,j,k}>0$ (i.e.", "not completely masked), the normalised stellar number count was given by $\\tilde{N}_{i,j,k} = \\frac{N_{i,j,k}}{\\tilde{f}_{i,j,k}}.$ This is directly proportional to the stellar number density according to $n = \\tilde{N}/(10^5~\\text{pc}^3)$ , where $10^5~\\text{pc}^3$ is the cell volume.", "We took the associated statistical uncertainty of $\\tilde{N}_{i,j,j}$ to be $\\tilde{\\sigma }_{i,j,k} =\\frac{(N_{i,j,k}^2+5^2)^{1/4}}{\\tilde{f}_{i,j,k}},$ which roughly corresponds to a Poisson count uncertainty.", "We added a number 5 in quadrature in the nominator in order to decrease the statistical power where the data count is very low.", "Because we estimated the uncertainty from the data count, rather than from an underlying model that generates it, this statistical uncertainty was often underestimated especially for data bins with low number counts.", "We were mainly interested in the results where the number count was fairly high and the added number 5 was negligible; however, adding this number was necessary in order to avoid fitting artefacts, due to very low number count values at large distances from the Galactic mid-plane or where $\\tilde{f}_{i,j,k}$ was close to zero.", "Choosing a slightly different number did not significantly alter our results.", "The disk plane projections of number counts, after masks had been applied, can be seen in Fig.", "REF .", "The total number of volume cells that were not completely masked (i.e.", "$\\tilde{f}_{i,j,k}>0$ ) was equal to 1 005 181, 806 270, 584 684, 310 122 for our four data samples (going from brightest to dimmest).", "Figure: Stellar number counts per area cell in the (X,Y)(X,Y)-plane, for our four data samples (specified in the panels' top right corners), after masks have been applied.", "The arrows in the rightmost panel show the direction of the Galactic centre and the direction of Galactic rotation.", "The axis ranges are shared between all panels." 
], [ "Gaussian process fit", "In this section, we describe how we modelled the normalised number counts as a Gaussian Process (GP).", "Gaussian process methods allow one to infer or interpolate an underlying function given a finite number of function observations.", "The main attraction of GPs for this work is that they allowed us to model the stellar number density $n({X})$ as a smooth and differentiable function without imposing a parametric form that presupposes constraints such as Galactic axisymmetry.", "In addition, data uncertainties can be incorporated into GP modelling as long as they are approximately Gaussian.", "Formally, a GP is a collection of random variables with the property that any finite subset of these variables has a multivariate normal distribution (see for example [34]).", "The probability distribution function (PDF) for ${\\cal N}$ random variables from a GP is therefore defined by an ${\\cal N}\\times {\\cal N}$ covariance matrix.", "In our case, the variables were the normalised number counts as labelled by $i,j,k$ , while the elements of the covariance matrix depended on the distances between pairs of volume bins through a function usually called the kernel.", "In this paper, we used the radial basis function kernel, guaranteeing continuity and smoothness, for which the covariance matrix element for two bins with grid indices $(i,j,k)$ and $(i^{\\prime },j^{\\prime },k^{\\prime })$ is $k({\\bf X}_{i,j,k},\\,{\\bf X}_{i^{\\prime },j^{\\prime },k^{\\prime }}) = Ae^{-(X_i-X_{i^{\\prime }})^2/l_x^2}e^{-(Y_j-Y_{j^{\\prime }})^2/l_y^2}e^{-(Z_k-Z_{k^{\\prime }})^2/l_z^2}~.$ The parameters $A$ , $l_x,\\,l_y,\\,$ and $l_z$ determine the overall variance and length scales associated with structure in the number counts.", "Proper choice of these hyperparameters is essential to finding a suitable model.", "Suppose we want to infer the number counts at some new position ${\\bf X}^*$ .", "The joint PDF for ${\\cal N}$ data points and the new point is defined by an $({\\cal N}+1)$ -dimensional Gaussian.", "On the other hand, the conditional PDF for the new point given the data is found by marginalising over the data via Bayes theorem.", "Since the PDFs are all Gaussian, the marginalisation integrals can be done analytically.", "However, for this step, one must invert the $N\\times N$ data covariance matrix, which is an ${\\cal O}(N^3)$ operation that requires ${\\cal O}(N^2)$ of rapid-access memory.", "An exact GP analysis of our full data set was unfeasible given the large number of measurements and the CPU and RAM requirements of the GP calculation.", "There are numerous approximation schemes such as the inducing point method that allow one to apply GP regression to very large data sets (see [40] and references therein).", "Here, we took the simple approach of applying GP regression to smaller spatial sub-volumes, rather than to the complete spatial volume all at once.", "For each area cell in the $(X,Y)$ -plane, we fitted a new Gaussian process to its surroundings, including all other area cells within 650 pc (i.e.", "$i_\\text{diff.}^2+j_\\text{diff.", "}^2<6.5^2$ ).", "The Gaussian process was fitted to the normalised number count $\\tilde{N}$ , with its associated uncertainty $\\tilde{\\sigma }$ .", "In terms of the hyperparameters of the Gaussian process, as expressed in Eq.", "(REF ), we set the variance $A$ to be equal to the variance of the normalised number count in the non-masked volume cells and used the spatial correlation scale lengths $(l_x,l_y,l_z) = (300, 
300, 100)~\\text{pc}$ .", "Due to the computational cost and the shortcut of implementing GPs in sub-volumes, we did not attempt to fit the hyperparameters.", "Even if fitting the hyperparameters would be computationally feasible, it still might not be desirable.", "The hyperparameters were specifically chosen such that the stellar number density fits would have certain properties of being correlated over reasonable spatial scales.", "Choosing a smaller correlation scale lengths could make it too sensitive to perturbations and systematic issues on smaller spatial scales.", "For example, there are degeneracies between parallax and absolute magnitude, as well as with the three-dimensional distribution of dust, giving rise to spatially correlated systematic errors, potentially on smaller scales.", "Our choice is further discussed and motivated in the beginning of Sect.", "." ], [ "Symmetric analytic function", "For our non-parametric Gaussian process fit to the data, we were mainly interested in the perturbations with respect to some smooth background, and in what ways the symmetries of the Galactic disk are broken.", "In order to study such residuals, we also performed a parametric fit to our data using an analytic stellar number density distribution function, which was fully axisymmetric and mirror symmetric across the mid-plane.", "For this purpose, we used a mixture model of three disk components which take a functional form $\\begin{split}& f_\\text{symm.", "}(R,Z \\, | \\, a_i,L_i,H_i,Z_\\odot ,\\alpha ,\\beta ) = \\\\& \\sum _{i=1}^{3}a_i \\, \\exp \\left(-\\frac{R-R_\\odot }{L_i} \\right) \\, \\text{sech}^2\\left( \\frac{Z+Z_\\odot +\\alpha X+\\beta Y}{H_i} \\right).\\end{split}$ It has twelve free parameters: $a_i$ are the respective amplitude of the three disk components; $L_i$ are their scale lengths; $H_i$ are their scale heights; and $Z_\\odot $ , $\\alpha $ , and $\\beta $ , which parametrise the disk mid-plane's spatial position.", "The latter two were included to allow for the disk mid-plane to have a slight inclination with respect to how the Galactic disk is defined in the Gaia catalogue.", "We constrained $a_i$ to be positive, $L_i>500~\\text{pc}$ , and $H_i>100~\\text{pc}$ .", "We fitted $f_\\text{symm.", "}$ to the measured stellar densities in the non-masked spatial volume by maximising a Gaussian likelihood with the Adam optimiser [23].", "We used the same normalised stellar number count and statistical uncertainty as are defined in Eqs.", "(REF ) and (REF ), to the spatial volume that includes all area cells where the mean effective fractional volume for $|Z|<500~\\text{pc}$ was larger than 50 %.", "For each of our four data samples, we performed separate fits of $f_\\text{symm.", "}$ .", "The main purpose of this function is to facilitate the visualisation of the Gaussian process model and serve as a smooth and symmetric background distribution for comparison purposes.", "For this reason, we refrain from making any strong physical interpretation of this function in isolation.", "In addition to the stellar number density field, we also studied the vertical component of the velocity field.", "We calculated the mean vertical velocity of our four stellar samples, as a function of spatial position.", "We did so from the radial velocity sample, requiring a radial velocity uncertainty smaller than $5~\\text{km}\\,\\text{s}^{-1}$ .", "The vertical velocity of each such star was given directly by its StarHorse distance (dist50) and Gaia DR3 velocity information (neglecting 
observational uncertainties).", "We also produced results with stronger data quality cuts in both proper motion and distance uncertainty, but saw only small differences in the results.", "We cleaned the data of open clusters.", "For each open cluster, we masked the spatial volume defined by an angular radius within $3\\times \\texttt {r50}$ of its sky angular position and a distance from the Sun in range $(\\texttt {d05}-3\\times \\texttt {r50}$ ,  $\\texttt {d95}+3\\times \\texttt {r50})$ , where $\\texttt {r50}$ is the half-light radius and $\\texttt {d05}$ ($\\texttt {d95}$ ) is the 5th (95th) distance percentile in the open cluster catalogue of [7].", "Hence, this open cluster mask was slightly different from the one applied when studying the stellar number density field, where we also masked any spatial volume that lies behind the open cluster.", "For the vertical velocity field, open clusters are problematic because they are not representative of the bulk stellar distribution, while incompleteness effects that arise due to stellar crowding behind an open cluster are not expected to produce a significant bias.", "We divided the disk plane using the same area cells as defined in Eq.", "REF .", "We divided the bins in terms of height, using bin edges at 0, 50, 100, 200, 300, 500, and 700 pc for the Galactic north, and the corresponding negative values for the Galactic south.", "However, in our transformation from Solar to disk rest frame ($Z \\rightarrow z$ ), we simply added 15 pc, rather than accounting for the slightly inclined disk plane that was inferred when fitting $f_\\text{symm.", "}$ ." ], [ "Results", "In Fig.", "REF , we show the GP fit for a group of 25 neighbouring area cells.", "This area of the $(X,Y)$ -plane was chosen to illustrate a few key points.", "As can be seen in the second row first column panel, the presence of an open cluster has completely masked the number count information at $Z \\simeq -100~\\text{pc}$ ; however, because the Gaussian process is correlated with nearby spatial regions, the fitted curve is still inferred in this sub-volume, with reasonable results.", "By comparing the fit in the respective panels, we can see that the fitted $n(Z)$ distribution varies somewhat in shape; for example, in some panels $n(Z)$ is clearly more skewed than in others.", "Our spatial correlation lengths of (300,300,100) pc seem to be a good choice.", "Our fit picks out interesting structures on the hundred parsec scale.", "On the other hand, it smooths out smaller scale structures in the data, such as the feature near the mid-plane in the centre panel that has a spatial scale of a few tens of parsecs.", "We view these properties as an advantage of our method; structures that are considerably smaller than the disk scale height could well be artefacts of some systematic error, for example related to small scale structures in the dust distribution.", "With that in mind, caution should be taken when interpreting these results, as they are a product of a specific data processing procedure and not a perfect or complete representation of the underlying data.", "Figure: Gaussian process fit for data sample with absolute magnitude cuts 2<M G ≤32 < M_G \\le 3.", "Each panel corresponds to a 100-by-100 pc area cell in the (X,Y)(X,Y)-plane, labelled by indices ii and jj according to Eq.", "(), thus centred on (X,Y)=(200,-600)pc(X,Y)=(200,-600)~\\text{pc}.", "The horizontal and vertical axes show height with respect to the Sun and the normalised stellar number count as defined in 
Eq. .", "The solid lines correspond to the Gaussian process fits, with a smooth shaded region signifying its dispersion (mostly too small to see by eye).", "The jagged shaded region corresponds to the 1-σ\\sigma band of the data number count.", "The axis ranges are the same for all panels.In Figure REF , we show the number density perturbations from the $2<M_G < 3$ data sample as projected onto the disk plane for different bins in $Z$ .", "These perturbations are shown in terms of the ratio between our Gaussian process fit and the fitted symmetric function ($f_\\text{symm.", "}$ , as described in Sect.", "REF ; its fitted parameters are found in Appendix ).", "There are a number of prominent perturbation features.", "First, there is an over-density at around $(X,Y)=(-0.3,0.8)~\\text{kpc}$ and for bins close to the mid-plane ($Z < 300~\\text{pc}$ ).", "The structure is fairly symmetric across the north and south and matches the location of the Local Spiral Arm found by [48] and [36], [35].", "Secondly, at greater heights, and mainly for $500 \\le |Z| < 700~\\text{pc}$ , there are strong asymmetries between the north and south density fields.", "We interpret this structure, which roughly corresponds to a dipole oriented along the $X$ -axis, as an extension of the Galactic warp into the Solar Neighbourhood [9], [38].", "Thirdly, there are asymmetries between the north and south mainly around the disk location $(X,Y)=(1,-1)~\\text{kpc}$ and $|Z| < 100~\\text{pc}$ .", "This region is highly affected by dust extinction and stellar crowding and we cannot rule out the possibility that the feature is, at least in part, a systematic artefact.", "Figure: Stellar number density variations in the (X,Y)(X,Y)-plane, for different bins in height.", "The left (middle) column shows the density variations north (south) of the mid-plane, in terms of the ratio between the Gaussian process and symmetric analytic fit (as described in Sects.", "and , respectively).", "The right column shows the asymmetries between the north and south of the Gaussian process fits, where each row corresponds to a specific range in height with respect to the mid-plane's location when fitting f symm.", "f_\\text{symm.}.", "The arrows in the top right panel show the directions of the Galactic centre and Galactic rotation.", "The axes ranges are shared between all panels.", "In this plot, an area cell is shown only if the mean fractional volume within |Z|<400pc|Z|<400~\\text{pc} is greater than 60 &.In Figure REF , we show the stellar number density for the $2<M_G\\le 3$ data sample in the $(R,Z)$ -plane, for the region $|Y|<250~\\text{pc}$ .", "We also show its ratio with respect to $f_\\text{symm.", "}$ .", "We clearly see the projection of the phase-space spiral, which appears as over-densities at $Z\\simeq 250~\\text{pc}$ and $Z\\simeq -450~\\text{pc}$ for $R$ in range of roughly 7–9.5 kpc.", "It is difficult to tell whether these structures continue outside this range in $R$ .", "Moreover, it is unclear whether our results can be trusted at such great distances, especially so close to the disk mid-plane.", "The large-scale asymmetry seen at greater heights in Figure REF is also evident in both panels of Figure REF .", "The figure suggests that there is a misalignment between the thin and thick disks.", "The thick disk mid-plane, which we infer from stars at $|Z|\\simeq 600~\\text{pc}$ , has a positive slope with respect to $R$ while the thin disk plane, which we infer from stars within $200~\\text{pc}$ of the mid-plane runs along 
$z=0$ .", "Figure: Stellar number count in the plane of Galactocentric radius and height, for the data sample $2 < M_G \le 3$ , averaged over the spatial volume within $|Y|<250~\text{pc}$ .", "The top panel shows the number count of the Gaussian process fit, while the bottom panel shows the ratio with respect to the symmetric analytic fit.", "Our discussion of number densities in this section has focused on the data sample defined by $2 < M_G \le 3$ , which we consider to be most informative.", "The brighter data samples reach greater distances but it is more difficult to tease out clear stellar number density structures since the information gathered at those distances is plagued by poorer statistics and systematic issues that we have not been able to control for (e.g.", "degeneracies between dust extinction and distance).", "Conversely, the dimmest data sample has the greatest number of stars and yields robust and trustworthy results, but also covers a smaller spatial volume.", "The corresponding plots of these other data samples can be found in Appendix .", "Overall, similar stellar number density structures are visible in all four stellar samples, although the perturbations at lower vertical energies are less pronounced for the dimmest data sample.", "In Fig.", "REF , we show the mean vertical velocity distribution for our brightest data sample in the same volume cells that were used in Fig.", "REF .", "The vertical velocity is offset by $7.25~\text{km}\,\text{s}^{-1}$ to account for the Sun's motion with respect to the mid-plane.", "The corresponding plots for two other data samples can be found in Appendix , although they are much more limited in distance.", "The two main stellar number density perturbations that we saw in Fig.", "REF have clear counterparts in the vertical velocity field.", "First, the Local Spiral Arm over-density that is close to the Galactic mid-plane at approximately $(X,Y)=(-0.3,0.8)~\text{kpc}$ has a vertical velocity counterpart with a similar shape.", "The feature is seen most clearly in the fourth row of Fig.", "REF , with negative values for $\bar{w}_\text{N}-\bar{w}_\text{S}$ in the third column, implying compression.", "Second, the Galactic warp-like asymmetries at greater heights have a corresponding structure in the vertical velocity field, as can be seen in the two bottom rows of Fig.", "REF ; towards the Galactic anti-centre, both the north and south have a positive mean vertical velocity.", "As with the stellar number density perturbation, the feature is present at larger distances from the mid-plane, suggesting that it is associated with the thick disk.", "In Figs.", "REF and REF , we show joint contour plots of the stellar number density and vertical velocity perturbations in the spatial region where we saw an elongated thin disk perturbation in both $n$ and $W$ .", "The figure highlights the relationship between the two fields.", "By eye, the perturbations in density and vertical velocity have roughly the same orientation and the same width across the short axis, but are out of phase by $\pi /2$ .", "Simultaneous measurements of a perturbation in $n$ and $W$ allow us to immediately associate a timescale with the disturbance.", "The continuity equation can be written $\frac{1}{n}\frac{dn}{dt} = -\nabla \cdot {\bf V},$ which gives the timescale $\tau = \left(\Delta W/\Delta z\right)/\left(\delta n/n\right)$ .", "The perturbation described here has $\Delta W\simeq 3\,\text{km}\,\text{s}^{-1}$ for $\Delta z\simeq 
600\\text{pc}$ and $\\delta n/n\\simeq 0.4$ , which gives $\\tau \\simeq 125\\text{Myr}$ .", "For a further discussion of the divergence of the local stellar velocity field, see [29] and [32].", "Figure: Mean vertical velocities of the data sample with 0<M G ≤10 < M_G \\le 1, in the same spatial volumes as in Fig. .", "The results of each bin in zz are smoothed over 150 pc in XX and YY for better visibility.", "A volume cell is masked if the total stellar number count falls below 25.Figure: Joint stellar number density perturbation and vertical velocity perturbation in the disk plane, for the data sample with absolute magnitude in 0<M G ≤10 < M_G \\le 1, integrated over |z|<300pc|z|<300~\\text{pc}.", "The mean vertical velocity distribution is smoothed over 150 pc in XX and YY for better visibility.", "The dotted lines corresponds to the location of the Local Spiral Arm, according to .Figure: Same as Fig.", ", but for the data sample with absolute magnitude in 1<M G ≤21 < M_G \\le 2.", "The range in XX and YY is slightly different in this figure, due to the distance limit imposed by v RV v_\\text{RV} observations." ], [ "Discussion", "Evidently, the stellar number density and vertical velocity fields show evidence of a perturbation to the disk.", "The features associated with the perturbations are present in all four samples, although to varying degree.", "In particular, the perturbation that we associate with the Local Spiral Arm is most prominent in the brighter samples.", "This may reflect the general observation that spiral structure is associated with recent star formation and hence stars at the bright end of the stellar luminosity function [5].", "The implication is that perturbations in $n$ do not perfectly reflect that of the total stellar mass distribution, though clearly they are related.", "There is a perturbation in $n$ and $\\bar{W}$ , fairly symmetric across the Galactic plane, seen most clearly around $(X,Y)=(-0.3,0.8)~\\text{kpc}$ and at heights $|z|\\lesssim 300~\\text{pc}$ , which we associate with the Local Spiral Arm found by [48].", "As seen in Figs.", "REF and REF , the perturbations in $n$ and $\\bar{W}$ are offset in the direction perpendicular to the spiral arm's extent, roughly by a quarter wavelength.", "This suggests the interpretation of a breathing mode travelling in the approximate direction of $l=270$  deg, such that the pattern speed of the Local Spiral Arm is slower than rotational velocity of the local stellar disk.", "This $\\pi /2$ phase offset is consistent with the analytic and simulation based predictions by [30].", "In a related work, [31] showed that a strong Galactic bar can alter and repress this phase offset between the $n$ and $\\bar{W}$ perturbations; this scenario is disfavoured by our results.", "The following toy model illustrates the breathing-mode hypothesis.", "For simplicity, we assume that the local gravitational potential is additively separable in $R$ and $z$ and that the vertical component of the potential is harmonic with $\\Phi (z) = \\frac{1}{2}\\Omega _z^2 z^2$ .", "The vertical action-angle variables are then $J_z = E_z/\\Omega _z$ and $\\theta _z = \\tan ^{-1}(\\Omega _z z/w)$ where $E_z = w^2/2 + \\Phi _z(z)$ is the vertical energy.", "Vertical oscillations follow a clockwise path (that is, increasing $\\theta _z$ ) in the $(z,w)$ -plane.", "For definiteness, we imagine that the unperturbed system is isothermal in the vertical direction so that the equilibrium DF in the $(z,w)$ -plane is $f_0 \\propto \\exp {(-E_z/\\sigma 
_z^2)}$ .", "The simplest breathing mode perturbation is proportional to $\\cos (2\\theta _z - \\omega _b t)$ .", "At $t=0$ , the DF is squeezed in $z$ and stretched in $w$ , thereby increasing the density near the mid-plane.", "The perturbed DF then rotates in the clockwise sense with a pattern speed $\\omega _b/2$ .", "The complete model is $f({\\bf X},W) = f_0(E_z) \\lbrace 1 + \\epsilon E_z\\cos [2\\theta _z - \\omega _b t - \\chi (X,Y) ] \\rbrace ,$ where $\\chi $ encodes the propagation of the wave in the plane of the disk.", "Though this model is purely phenomenological, its functional form is motivated by analytic studies of modes in an isothermal plane [27], [41], [45].", "In Fig.", "REF we present a chi-by-eye realisation of the model that captures qualitative features of Figs.", "REF and REF .", "The function $\\chi $ is chosen to correspond to an outward propagating, trailing logarithmic spiral: $\\chi (X,Y) = k\\log (R/R_0) - p\\phi ,$ where $R$ and $\\phi $ are Galactocentric polar coordinates, $p$ is the tangent of the pitch angle, and the wavelength is $2\\pi R_0/k$ .", "For the figure, we set $k$ and $p$ so that the wavelength is $1\\,{\\rm kpc}$ and the pitch angle is $12^\\circ $ , as is the case for the Local Spiral Arm [48].", "We have also included an envelope function that serves to localise the perturbation about the point $(X,Y) = (-200,\\,600)~\\text{pc}$ .", "The three top panels in Fig.", "REF correspond to $\\omega _b t = \\lbrace \\pi , \\, 3\\pi /2, \\, 2\\pi \\rbrace $ , respectively.", "Figure: Toy model perturbations to the disk.", "The bottom panel shows nn and W ¯\\bar{W} for |z|<300pc|z|<300~\\text{pc} and is analogous to Figs.", "and .", "The dotted line corresponds to the disk area covered in Fig. .", "In the top three panels, we show the number counts in the (z,w)(z,w)-plane at the points A, B, and C that are highlighted in the lower panel.", "In order to facilitate a comparison between them, an iso-energy contour for the unperturbed disk is shown as a dashed line.The density and velocity fields of this simple toy model capture the qualitative features seen in the data.", "However, as discussed in the previous section, the data exhibit a number density perturbation of order $40~\\%$ and a velocity perturbation of a few $\\text{km}\\,\\text{s}^{-1}$ .", "Our simple toy model predicts stronger perturbation for the velocity field relative to the number density field (where the overall strength is set by the free parameter $\\epsilon $ ).", "Evidently, an explanation of the observed relative strength of the velocity and density perturbations will require a more complicated model.", "An obvious extension would be to consider a superposition of modes.", "Furthermore, as mentioned in the beginning of this section, the observed perturbation in $n$ is likely affected by recent star formation, especially for our brighter data samples.", "Hence, the over-density in $n$ is likely inflated as compared to the relative over-density of the total matter density field in the same spatial location.", "The large scale feature that we associate with the Galactic warp is seen as a upward shift in the thick disk (roughly $|z|>300~\\text{pc}$ ), for stars in the direction of the Galactic anti-centre.", "The same structure is reflected in the vertical velocity distribution, where the corresponding northern and southern spatial volumes have a mean velocity towards the Galactic north, thus having the characteristic of a bending mode.", "However, the thin disk seems to be 
unaffected within the studied range in distance, remaining flat for both $n$ and $\\bar{W}$ .", "These results can be compared with those found by [38] who measured the mean vertical velocity as a function of $L_z$ , the angular momentum about the $z$ -axis, for solar neighbourhood stars in the Gaia-TGAS dataset.", "For a flat rotation curve, $L_z$ is proportional to the guiding radius, which can be used as a proxy for the Galactocentric radius $R$ .", "[38] found that $\\bar{W}$ was an increasing function of $R$ with a slope of $0.34~\\text{km}\\,\\text{s}^{-1}\\, \\text{kpc}^{-1}$ , which is in qualitative agreement with our results.", "A quantitative comparison is difficult since they effectively integrate over $z$ and we consider $\\bar{W}$ in different $z$ -bins.", "Some structures in $n$ , such as the horizontal bands in the bottom panel of Fig.", "REF with $|z|$ in range 200–600 pc, are projections of the phase-space spiral.", "The properties of the phase space spiral such as its phase and amount of winding in the $(z,w)$ -plane, vary slowly across the disk on scales of a few kiloparsecs in $X$ and $Y$ (see e.g.", "[43], [44], [20]; also supported by simulations, e.g.", "[21]).", "Hence, in terms of its projection on the disk plane, as shown in Fig.", "REF , the spiral manifests as larger area asymmetries mainly in the fourth and fifth panel rows.", "The two main density perturbations that we have identified in this work, which we tentatively associate with the Local Spiral Arm and the Galactic warp, do not match the properties of a projected phase-space spiral; the former is much too localised in space and is symmetric; the latter is a very large relative perturbation found mainly at greater heights and does not match the azimuthal variation of the phase-space spiral in this spatial region (see Appendix ).", "There are likely systematic effects that bias our results, especially at greater distances and in the general direction of the Galactic centre, where dust extinction and stellar crowding are more severe.", "In principle, there could be some confounding systematic that creates a spatially dependent distance bias, for example arising from dust clouds, which could affect both the $n$ and $\\bar{W}$ fields in the same spatial region.", "However, this is not likely to explain the two main perturbations that we have identified in this work.", "For the Local Spiral Arm, the structure in $n$ and $\\bar{W}$ is elongated and close to the solar position, such that the viewing angle relative to its axis of elongation varies significantly.", "Despite this, we see a qualitatively similar structure over its axis of elongation.", "For the Galactic warp, its presence at greater heights makes it much less affected by stellar crowding and dust extinction, and we also see it over a large portion of the sky.", "Furthermore, we see both of these structures in all data samples, at least to the extent that they probe those spatial volumes." 
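As a minimal illustration of the breathing-mode toy model discussed above (not the implementation used for the figures), the following NumPy sketch integrates the perturbed distribution function over a $(z,w)$ grid to obtain the local number density and mean vertical velocity. The wavelength and pitch angle follow the values quoted in the text, but the remaining parameter values, the Galactocentric coordinate convention and the omission of the localising envelope are assumptions of this sketch.

```python
import numpy as np

# Illustrative parameters: vertical frequency [km/s/pc], velocity dispersion [km/s],
# perturbation strength [1/(km/s)^2] and breathing-mode phase omega_b * t.
Omega_z, sigma_z, eps, phase = 0.07, 10.0, 2e-3, np.pi
# Spiral parameters: wavelength 1 kpc and pitch angle 12 deg as in the text; R0 assumed 8 kpc.
k, p, R0, Rsun = 2 * np.pi * 8000.0 / 1000.0, np.tan(np.radians(12.0)), 8000.0, 8000.0

def chi(X, Y):
    """Trailing logarithmic spiral chi = k*log(R/R0) - p*phi (Sun at the origin,
    Galactic centre assumed towards +X; purely a convention for this sketch)."""
    R, phi = np.hypot(Rsun - X, Y), np.arctan2(Y, Rsun - X)
    return k * np.log(R / R0) - p * phi

def density_and_mean_w(X, Y):
    """Integrate f = f0(E_z) * (1 + eps*E_z*cos(2*theta_z - phase - chi)) over (z, w)."""
    z = np.linspace(-1500.0, 1500.0, 301)            # [pc]
    w = np.linspace(-60.0, 60.0, 301)                # [km/s]
    Z, W = np.meshgrid(z, w, indexing="ij")
    E_z = 0.5 * W**2 + 0.5 * (Omega_z * Z)**2        # vertical energy
    theta_z = np.arctan2(Omega_z * Z, W)             # vertical angle
    f = np.exp(-E_z / sigma_z**2) * (1.0 + eps * E_z * np.cos(2.0 * theta_z - phase - chi(X, Y)))
    n = f.sum()
    return n, (W * f).sum() / n                      # density (arb. units) and mean W [km/s]

print(density_and_mean_w(-200.0, 600.0))             # near the assumed perturbation centre
```

Evaluating this on a grid of $(X,Y)$ positions reproduces the qualitative quarter-wavelength offset between the density and vertical velocity perturbations.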
], [ "Conclusion", "In this work, we have mapped the stellar number density distribution ($n$ ) and the mean vertical velocity distribution ($\\bar{W}$ ), as a function of spatial position in the Milky Way disk, out to a distance of a few kiloparsecs.", "We have done so in a fairly model independent manner using Gaussian processes, which does not rely on any symmetry assumptions.", "Apart from projections of the phase-space spiral, we identify two main perturbation features with respect to a fully symmetric background.", "First, we see an elongated over-density feature in $n$ and corresponding breathing mode compression in $\\bar{W}$ at the spatial location of the Local Spiral Arm.", "The ridges of these $n$ and $\\bar{W}$ structures are offset in the direction perpendicular to the spiral arm, indicating a travelling breathing mode.", "Second, we see a large-scale bending mode feature in both $n$ and $\\bar{W}$ which we associate with the Galactic warp.", "We make the novel observation that within our studied spatial volume, out to a distance of at least 2 kpc in the direction of the Galactic anti-centre, this warp feature affects only the thick disk, while the thin disk ($|z|\\lesssim 300~\\text{pc}$ ) remains flat in both $n$ and $\\bar{W}$ .", "An obvious extension of this work would be to combine a smooth model for the number density field with a model for the full three-dimensional velocity field.", "This would allow one to use the continuity and Jeans equations to more fully explore the connections between vertical motions and spiral arms as well as other examples of disequilibrium in the disk [29], [30], [31], [32].", "We have demonstrated that with a careful treatment of selection effects, the stellar number density distribution can be mapped, even in fairly distant regions of the thin stellar disk.", "With more sophisticated and accurate complementary distance estimations, using photometric or spectroscopic information, in synergy with improved three-dimensional dust maps, we expect to reach even greater distances and depths in the near future.", "We would like to thank Friedrich Anders and Giacomo Monari for useful discussions.", "AW acknowledges support from the Carlsberg Foundation via a Semper Ardens grant (CF15-0384).", "APN is supported by a Research Leadership Award from the Leverhulme Trust.", "LMW acknowledges the financial support of the Natural Sciences and Engineering Research Council of Canada.", "This work made use of an HPC facility funded by a grant from VILLUM FONDEN (projectnumber 16599).", "This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium).", "Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.", "This research utilised the following open-source Python packages: Matplotlib [22], NumPy [19], George [1], healpy [18]." 
], [ "Symmetric analytic function fitted parameters", "The fitted free parameters of $f_\\text{symm.", "}$ can be found in Table REF .", "We also performed fits when using more than three disk components in our mixture model, but this did not produce noticeably better fits.", "In fact, for the brightest data sample, it seems that even three disk components are superfluous, as two of them had practically identical scale length and scale height values.", "The scale lengths and heights of the three disk components are increasing in unison, for all four data samples.", "The Sun's height with respect to the disk mid-plane is found to be roughly 11 pc, with variations of a few parsec between the data samples.", "The inclination of the disk mid-plane, as parametrised by $\\alpha $ and $\\beta $ , is small: at a 2 kpc distance from the Sun, the disk mid-plane varies on the scale of roughly 10 pc as compared to the plane defined by $b=0~\\deg $ .", "We see slight evidence for a misalignment between these two planes, but this result could very well be affected by systematic errors.", "Either way, this misalignment is not strong enough to alter our results in a meaningful way.", "Table: Inferred parameters for f symm.", "f_\\text{symm.", "}, for our four data samples." ], [ "Supplementary figures", "In Figs.", "REF –REF , we show plots corresponding to Figs.", "REF , REF , and REF in the main text, but for our other data samples (although the velocity plot for our dimmest data sample is excluded due to covering such a small spatial volume).", "For brighter data samples, in the $(X,Y)$ -plane projections of Figs.", "REF and REF , as well as the $(R,z)$ -plane projections of Figs.", "REF and REF , the distant regions ($\\gtrsim 2~\\text{kpc}$ ) seem to suffer from strong systematic errors, especially close to the disk mid-plane.", "Figure: Same as Fig.", ", but for the stellar sample with 0<M G ≤10 < M_G \\le 1.Figure: Same as Fig.", ", but for the stellar sample with 1<M G ≤21 < M_G \\le 2.Figure: Same as Fig.", ", but for the stellar sample with 3<M G ≤43 < M_G \\le 4.Figure: Like Fig.", ", but for the stellar sample with 0<M G ≤10 < M_G \\le 1.Figure: Like Fig.", ", but for the stellar sample with 1<M G ≤21 < M_G \\le 2.Figure: Like Fig.", ", but for the stellar sample with 3<M G ≤43 < M_G \\le 4.Figure: Same as Fig.", ", but for the stellar sample with 1<M G ≤21 < M_G \\le 2.Figure: Same as Fig.", ", but for the stellar sample with 2<M G ≤32 < M_G \\le 3." ], [ "Spiral angle plot", "In Fig.", "REF , we show the spiral angle as projected on the $(X,Y)$ -plane.", "The spiral angle is given by the location of the phase-space spiral over-density in the $(z,w)$ -plane along the iso-contour of vertical energy $E_z = \\Phi (500~\\text{pc})$ .", "These results come from directly from [44], although this figure was not included in that article.", "We point out that the orientation of the disk plane in Fig.", "REF is in agreement with other figures in this article, but differs from those in [44] by a quarter rotation.", "It is evident from the figure that the spiral angle varies significantly with the azimuth, over scales of a few kiloparsecs.", "We refer to [44] for further details.", "Figure: Angle of the phase-space spiral at the iso-energy contour E z =Φ(500pc)E_z=\\Phi (500~\\text{pc}) in the (z,w)(z,w)-plane, as inferred in .", "The colour bar is cyclical." ] ]
2207.03492
[ [ "False Negative Reduction in Semantic Segmentation under Domain Shift\n using Depth Estimation" ], [ "Abstract State-of-the-art deep neural networks demonstrate outstanding performance in semantic segmentation.", "However, their performance is tied to the domain represented by the training data.", "Open world scenarios cause inaccurate predictions which is hazardous in safety relevant applications like automated driving.", "In this work, we enhance semantic segmentation predictions using monocular depth estimation to improve segmentation by reducing the occurrence of non-detected objects in presence of domain shift.", "To this end, we infer a depth heatmap via a modified segmentation network which generates foreground-background masks, operating in parallel to a given semantic segmentation network.", "Both segmentation masks are aggregated with a focus on foreground classes (here road users) to reduce false negatives.", "To also reduce the occurrence of false positives, we apply a pruning based on uncertainty estimates.", "Our approach is modular in a sense that it post-processes the output of any semantic segmentation network.", "In our experiments, we observe less non-detected objects of most important classes and an enhanced generalization to other domains compared to the basic semantic segmentation prediction." ], [ "Introduction", "Semantic image segmentation aims at segmenting objects in an image by assigning each pixel to a class within a predefined set of semantic classes.", "Thereby, semantic segmentation provides comprehensive and precise information about the given scene.", "This is particularly desirable in safety relevant applications like automated driving.", "In recent years, deep neural networks (DNNs) have demonstrated outstanding performance on this task [1], [2].", "However, DNNs are usually trained on a specific dataset (source domain) and often fail to function properly on unseen data (target domain) due to a domain gap.", "An example of this behavior is shown in fig:idd.", "The DNN is trained on urban traffic data from Germany, Cityscapes dataset [3], and incorrectly predicts the unseen animals in Indian street scenes [4] as person, nature or fence.", "This is critical since potential hazardous situations are underestimated due to the prediction of non-dynamic classes.", "Unsupervised domain adaptation is an approach overcoming this issue.", "The idea is to train a DNN on labeled source domain data and jointly on unlabeled target data adapting the source domain distribution to the target one [5].", "As target data is not always available for training, the recent research has also been devoted to domain generalization resolving this limitation.", "The model is trained solely on source data without access to target data using a shared representation across multiple source domains improving the robustness to unseen domains [6], [7].", "In real-world applications, domain gaps may occur due to shifts in location, time and other environmental parameters.", "This causes domain shift on both, foreground and background classes.", "For many years, computer vision has used the decomposition into things – countable objects such as persons, animals, vehicles – and stuff – regions with similar texture or material like sky, road, nature, buildings – which is analogous to the foreground-background splitting [8], [9].", "The appearance of foreground objects may vary over time or due to a change of location and the same applies to the environment (background).", "fig:idd gives an 
example for the lack of generalization, i.e., the DNN is trained only on street scenes in German cities resulting in defective behavior on the unseen India road scenes.", "On the one hand, using semantic segmentation in open world scenarios, such as in automated driving, the appearance of objects that do not belong to any of the semantic classes the DNN has been trained on (like animals) may cause defective predictions [10].", "On the other hand, even objects of known classes can change their appearance, leading to erroneous predictions.", "For these reasons, the generalization of DNNs bridging the domain gap is of highest interest.", "In applications like automated driving, the foreground class is of particular interest due to its dynamical behavior.", "Especially in presence of domain gaps, the detection performance w.r.t.", "these object classes can decrease significantly.", "To this end, we propose a false negative reduction method which is robust against domain shift.", "Figure: Example image of the India Driving dataset.", "Left: Ground truth pixels of classes humans/animals colored in red and vehicles in blue.", "Right: Semantic segmentation for the mentioned classes.In this work, we introduce a domain generalization method reducing false negatives of important classes in semantic segmentation in the area of automated driving.", "Our approach serves as a post-processing step applicable to any semantic segmentation network.", "An overview is shown in fig:overview.", "Figure: Overview of our method.", "The input image is fed into a semantic segmentation network (bottom branch) and in parallel into a depth estimation network (top branch).", "The resulting depth heatmap is passed to our modified segmentation network which predicts a foreground-background segmentation.", "This prediction is aggregated with the semantic segmentation and finally, meta classification is applied to reduce false positive segments.Our method consists of two branches.", "The image segmentation branch is a semantic segmentation inference and the depth segmentation branch feeds the same RGB input image into a depth estimation network.", "The goal of depth estimation is to obtain a representation of the spatial structure of a given scene.", "The resulting depth heatmap is passed to a modified segmentation network which predicts foreground-background segmentation.", "The architecture of this network may be based on the architecture of the semantic segmentation network, but can be chosen independently.", "The foreground class consists of the most important classes in street scene segmentation, humans and vehicles.", "In the fusion step, the semantic segmentation and the foreground-background prediction are aggregated obtaining several segments (connected components of pixels belonging to the same class) per foreground class.", "As a result of combining the two masks, we detect overlooked segments of the basic semantic segmentation network on the source dataset as well as under domain shift using the depth information for domain generalization.", "However, the increased sensitivity towards finding foreground objects may result in an overproduction of false positive segments.", "To overcome this, we utilize so-called meta classification as false positive pruning, introduced for semantic segmentation in [11], [12] and for instance segmentation in [13].", "It tackles the task of predicting whether a segment intersects with ground truth or not which is measured by a commonly used performance measure, the intersection 
over union [14].", "Meta classification is used after the fusion step to also achieve a high precision rate [15], [16].", "Moreover, to gain a further performance boost, the meta classifier, which is trained only on the source domain, can be fine-tuned on a small amount of the respective target domain (lightweight domain adaptation).", "We only assume input data as well as a trained semantic segmentation and a depth estimation network.", "Due to the modularity of our method, we can set up our model based on these assumptions.", "In our tests, we employ two semantic segmentation [1], [17] and two depth estimation networks [18], [19] applied to four datasets, i.e., Cityscapes [3] as source domain and A2D2 [20], LostAndFound [10] as well as India Driving [4] as target domains.", "The application of these widely differing datasets is intended to demonstrate the domain generalization and error reduction capability of our approach.", "The source code of our method is publicly available at http://github.com/kmaag/FN-Reduction-using-Depth Our contributions are summarized as follows: We introduce a modified segmentation network which is fed with depth heatmaps and outputs foreground-background segmentation masks.", "The depth information enhances the robustness of the prediction in the presence of domain shift.", "We combine these foreground-background segmentation masks with semantic segmentation masks to detect possible overlooked segments (by the semantic segmentation network) of the most important classes.", "In addition, we perform meta classification to prune false positive segments.", "For the first time, we demonstrate successfully that depth information improves a semantic segmentation performance in a post-processing step (independently of the choice of semantic segmentation network).", "We compare the performance of our method with basic semantic segmentation performance on several datasets with domain gap, i.e., different semantic classes and/or environments obtaining area under precision-recall curve values of up to $97.08\\%$ on source domain and $93.83\\%$ under domain shift.", "The paper is structured as follows.", "In sec:relatedwork, we discuss the related work.", "Our approach is introduced in sec:method including the modified segmentation network, the aggregation of network predictions and meta classification.", "The numerical results are shown in sec:result." ], [ "Related Work", "In this section, we first discuss related methods improving robustness of DNNs under domain shift as well as false negative reduction approaches.", "Thereafter, we present works that use depth information to enhance semantic segmentation prediction." 
], [ "Robustness under Domain Shift", "Unsupervised domain adaptation is often used to strengthen the robustness of DNNs bridging domain gaps [5].", "The DNN is trained with source data (labeled) and target data (unlabeled and different from source dataset) to align the target domain's distributions.", "In [21], this problem is tackled by a generative adversarial network which translates the target domain into the source domain before predicting semantic segmentation.", "Monocular depth estimation is used in [22], [23] to improve the prediction performance under domain shift.", "However, target data from various environments is not always available during the training process.", "To overcome this limitation, research on domain generalization has recently gained attention, using only source data to train the model.", "Synthetic to real domain generalization offers a possibility to exploit the advantage of the availability of synthetic data.", "In [24], the synthetically trained network is encouraged to maintain similar representations as the ImageNet pre-trained model.", "In other works, style-diversified samples [25] or web-crawled images [26] are utilized for improving the representational consistency between synthetic and real-world for the sake of generalizable semantic segmentation.", "The model presented in [6] is trained on multiple source domains (synthetic and real) to generalize to unseen data domains.", "The variety of contents and styles from ImageNet is leveraged in [7] to learn domain-generalized semantic features.", "In [27], an instance selective whitening loss is introduced to disentangle the domain-specific style and domain-invariant content to remove only the style information causing domain shift.", "In contrast to domain adaptation, our method does not require target domain data for training.", "The presented domain generalization approaches use a great amount of source domain data and/or modify the training process of the semantic segmentation network while we only consider depth information for domain generalization and are independent of the semantic segmentation network due to modularity." 
], [ "False Negative Reduction in Semantic Segmentation", "Reducing false negatives, i.e., obtaining a higher recall rate, is often achieved in semantic segmentation by modifying the loss function.", "In [28], a higher recall rate for a real-time DNN is obtained by modifying the loss function, classifier and decision rule.", "A similar approach presented in [29] considers an importance-aware loss function to improve a network's reliability.", "To reduce false negative segments of minority classes, differences between the Bayes and the Maximum Likelihood decision rule are exploited introducing class priors that assign larger weight to underrepresented classes [15].", "Since minority classes are not necessarily hard to predict, leading to the prediction of many false positives, a hard-class mining loss is introduced in [30] by redesigning the cross entropy loss to dynamically weight the loss for each class based on instantaneous recall.", "In [31], false negative pixels in semi-supervised semantic segmentation are reduced by using the pixel-level $\\ell _{2}$ loss and the pixel contrastive loss.", "While the presented approaches modify the training process and/or the decision rule, we propose a post-processing method.", "For the first time, we present a false negative reduction approach which overcomes domain gaps using depth information.", "The only work [16] which also uses depth heatmaps addressing the recall rate improvement works on video instance segmentation." ], [ "Improving Segmentation using Depth Estimation", "The predictions of semantic segmentation and depth estimation masks are improved in previous works using joint network architectures sharing information for both tasks [32], [33].", "Furthermore, approaches are introduced where information of one task enhance the prediction quality of the other task.", "The semantic segmentation task is improved in [34] by an encoder consisting of two network branches which extract features from depth and RGB images simultaneously.", "In [35], RGB-D data is also fed into a network that extracts both RGB and depth features in parallel for semantic segmentation prediction (and object detection).", "Contrary, a single shared encoder is used in [36] to enhance performance for a supervised task, here semantic segmentation, which obtains information of two self-supervised tasks (colorization and depth prediction) exploiting unlabeled data.", "In [37], a semantic segmentation network is pre-trained for depth prediction to serve as a powerful proxy for learning visual representations.", "In addition to learning features from depth information, a student-teacher framework is considered in [38] to select the most helpful samples to annotate for semantic segmentation.", "In comparison to the described methods for improving semantic segmentation prediction via depth estimation, our approach does not modify the network architecture.", "Instead, our foreground-background prediction and the aggregation serve as post-processing steps, i.e., any semantic segmentation network can be used and thus, the network architecture is not altered.", "Contrary to the domain generalization and false negative reduction approaches, we consider the advantages of depth estimation to enhance the prediction performance under domain shift.", "Our method is composed of two branches, i.e., the image segmentation and depth segmentation branch, see fig:overview.", "The outputs of both streams are aggregated to detect segments overlooked by the basic semantic segmentation network.", 
"As many false positive segments can be generated by the fusion, false positive pruning is applied in an additional post-processing step." ], [ "Foreground-Background Segmentation", "In this section, we introduce our modified segmentation network for foreground-background segmentation.", "We assume that a depth estimation (and a semantic segmentation ground truth) is available for each input image.", "Our approach is a post-processing step and independent of the choice of the depth estimation (and the semantic segmentation) network.", "The basis for the modified network can be any standard semantic segmentation network.", "However, instead of feeding an RGB image into the network a depth estimation heatmap is used and the semantic space is composed of only two classes - foreground and background.", "The binarization into foreground and background is adapted from the thing and stuff decomposition in the computer vision field like in panoptic segmentation.", "Using automated driving as example application, things are countable objects such as persons, animals, cars or bicycles.", "The stuff classes consist of amorphous regions of similar texture or material such as sky, road, nature or buildings.", "Note, the idea of things and stuff also exists in other application areas like robot navigation.", "The thing classes receive the dominant share of attention in computer vision tasks such as object detection and instance segmentation.", "In automated driving, the foreground class is of particular interest due to its dynamical behavior." ], [ "Aggregation of Predictions", "From the first branch, we obtain a semantic segmentation prediction, i.e., a pixel-wise classification of image content.", "The DNN provides for each pixel $z$ a probability distribution $f_{z}(y|x)$ over a prescribed label space $y \\in \\mathcal {C} = \\lbrace y_{1}, \\ldots , y_{c} \\rbrace $ with $c$ different class labels, given an input image $x$ .", "The predicted class for each pixel $z$ is computed by the maximum a-posteriori principle $\\hat{y}_z(x)=\\operatornamewithlimits{arg\\, max}_{y\\in \\mathcal {C}}f_z(y|x) \\, .$ The second branch provides a foreground-background segmentation.", "Given the same input image $x$ , we obtain for each pixel $z$ the probability of being a foreground pixel $g_z(x) \\in [0,1]$ considering a binary classification problem.", "The predicted segmentations are aggregated pixel-wise resulting in a combined prediction with the class label background $y_{0}$ or a foreground class label $y \\in \\tilde{\\mathcal {C}} \\subset \\mathcal {C}$ per pixel.", "For this, we split the label space into foreground class labels $\\tilde{\\mathcal {C}} = \\lbrace y_{1}, \\ldots , y_{\\tilde{c}} \\rbrace $ , $\\tilde{c} < c$ , and background class labels $\\lbrace y_{\\tilde{c}+1}, \\ldots , y_{\\tilde{c}} \\rbrace $ with $y_{0} = \\mathcal {C} \\setminus \\tilde{\\mathcal {C}}$ .", "The combination is defined per pixel by $\\hat{s}_{z} (x) = {\\left\\lbrace \\begin{array}{ll}\\hat{y}_z(x), &\\text{ if } \\hat{y}_z(x) \\in \\tilde{\\mathcal {C}} \\\\\\displaystyle \\operatornamewithlimits{arg\\, max}_{y\\in \\tilde{\\mathcal {C}}}f_z(y|x), &\\text{ if } g_z(x) > 0.5 \\wedge \\hat{y}_z(x) \\notin \\tilde{\\mathcal {C}} \\\\y_{0}, &\\text{ else } \\, .\\end{array}\\right.", "}$ If the semantic segmentation network predicts a foreground class or the foreground-background network predicts foreground, the pixel is considered as foreground and assigned to the foreground class $y \\in \\tilde{\\mathcal {C}}$ of 
the semantic segmentation with the highest probability.", "Otherwise, the pixel is assigned to the class background.", "Moreover, $\\hat{\\mathcal {S}}_x=\\lbrace \\hat{s}_{z} (x) | z\\in x\\rbrace $ denotes the combined segmentation consisting of foreground classes and the background class." ], [ "Meta Classification", "The combination of the semantic segmentation and the foreground-background prediction can increase the number of false positives.", "For this reason, we apply meta classification [11] as false positive pruning step using uncertainty measures.", "The degree of randomness in semantic segmentation prediction $f_{z}(y|x)$ is quantified by (pixel-wise) dispersion measures, like the entropy.", "To obtain segment-wise features characterizing uncertainty of a given segment from these pixel-wise dispersion measures, we aggregate them over segments by average pooling.", "In addition, we hand-craft features based on object's geometry like the segment size or the geometric center obtaining uncertainty information.", "These hand-crafted measures form a structured dataset where the rows correspond to predicted segments and the columns to features.", "A detailed description of these hand-crafted features can be found in app:metaclassif.", "To determine if a predicted segment is a false positive, i.e., has no overlap with a ground truth segment of a foreground class, we consider the intersection over union ($\\mathit {IoU}$ ), a typical performance measure of segmentation networks with respect to the ground truth.", "Meta classification tackles the task of classifying between $\\mathit {IoU}= 0$ (false positive) and $\\mathit {IoU}> 0$ (true positive) for all predicted segments.", "If a segment is predicted to be a false positive, it is no longer considered as a foreground segment but as background.", "We perform meta classification using our structured dataset as input.", "Note, these hand-crafted measures are computed without the knowledge of the ground truth data.", "To train the classifier, we use gradient boosting [39] that outperforms linear models and shallow neural networks as shown in [13].", "We study to which extent our aggregated prediction followed by meta classification improves the detection performance for important classes compared to basic semantic segmentation." 
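As an illustration, the pixel-wise fusion rule defined above, together with a simple entropy-based dispersion measure later pooled per segment as a meta feature, can be sketched in NumPy as follows; array shapes, variable names and the usage example are placeholders and not part of the original implementation.

```python
import numpy as np

def aggregate(f, g, fg_ids, background_label=-1):
    """Fuse the semantic softmax f (shape C x H x W) with the foreground probability
    g (shape H x W) of the depth branch: keep foreground predictions of the semantic
    network; elsewhere, wherever g > 0.5, fall back to the most likely foreground
    class; all remaining pixels receive the merged background label."""
    sem_pred = f.argmax(axis=0)                               # \hat{y}_z
    is_fg = np.isin(sem_pred, fg_ids)
    best_fg = np.asarray(fg_ids)[f[fg_ids].argmax(axis=0)]    # argmax restricted to foreground classes
    fused = np.full(sem_pred.shape, background_label, dtype=int)
    fused[is_fg] = sem_pred[is_fg]
    take_depth = (g > 0.5) & ~is_fg
    fused[take_depth] = best_fg[take_depth]
    return fused

def pixel_entropy(f, eps=1e-12):
    """Pixel-wise entropy of the softmax, average-pooled per segment as a meta feature."""
    return -(f * np.log(f + eps)).sum(axis=0)

# toy usage with the standard Cityscapes train ids of the road user classes
fg_ids = [11, 12, 13, 14, 15, 16, 17, 18]                     # person, ..., bicycle
f = np.random.dirichlet(np.ones(19), size=(64, 128)).transpose(2, 0, 1)
g = np.random.rand(64, 128)
fused = aggregate(f, g, fg_ids)
```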
], [ "Experiments", "In this section, we first present the experimental setting and then demonstrate the performance improvements of our method compared to the basic semantic segmentation network in terms of false negative reduction overcoming the domain gap.", "We perform our tests on four datasets for semantic segmentation in street scenes considering Cityscapes [3] as source domain and A2D2 [20], LostAndFound [10] as well as India Driving (IDD) [4] as target domains.", "The training/validation split of Cityscapes consists of $2,\\!975$ /500 images from dense urban traffic in 18/3 different German towns, respectively.", "Thus, our foreground class consists of all road user classes, i.e., human (person and rider) and vehicle (car, truck, bus, train, motorcycle and bicycle) and the background of categories flat, construction, object, nature and sky.", "From the A2D2 dataset, we sample 500 images out of 23 image sequences for our tests covering urban, highways and country roads in three cities.", "This variety of environments is not included in the Cityscapes dataset resulting in a domain shift in the background.", "The validation set of LostAndFound containing $1,\\!203$ images is designed for detecting small obstacles on the road in front of the ego-car.", "This causes a foreground domain shift as these objects are not contained in the semantic space of Cityscapes.", "We use 538 frames of the IDD dataset which contains unstructured environments of Indian roads inducing a domain shift in both, foreground and background.", "The latter is caused by, for example, the diversity of ambient conditions and ambiguous road boundaries.", "The foreground domain shift occurs as the IDD dataset consists of two more relevant foreground classes (animals and auto rickshaws) and the Cityscapes foreground objects differ significantly." ], [ "Networks", "We consider the state-of-the-art DeepLabv3+ network [1] with WideResNet38 [40] as backbone and the more lightweight (and thus weaker) DualGCNet [17] with ResNet50 [41] backbone for semantic segmentation.", "Both DNNs are trained on the Cityscapes dataset achieving $\\mathit {mean}$ $\\mathit {IoU}$ ($\\mathit {mIoU}$ ) values of $90.29\\%$ for DeepLabv3+ and $79.68\\%$ for DualGCNet on the Cityscapes validation set.", "For depth estimation trained on the KITTI dataset [42], we use the supervised depth estimation network BTS [19] with DenseNet-161 [43] backbone obtaining a relative absolute error on the KITTI validation set of $0.090$ and the unsupervised Monodepth2 [18] with ResNet18 backbone achieving $0.106$ relative absolute error.", "Our modified segmentation network is based on the DeepLabv3+ architecture with WideResNet38 backbone having high predictive power and is fed with depth estimation heatmaps of the Cityscapes dataset predicted by the BTS network and Monodepth2, respectively.", "We train this network on the training split of the Cityscapes dataset and use the binarized (into foreground and background) semantic segmentation ground truth to compare our results with the basic semantic segmentation network which is also trained on Cityscapes.", "For the BTS network a validation $\\mathit {mIoU}$ of $88.34\\%$ is obtained and for Monodepth2 of $85.12\\%$ ." 
], [ "Evaluation Metrics", "Meta classification provides a probability of observing a false positive segment and such a predicted false positive segment is considered as background.", "We threshold on this probability with 101 different values $h \\in H = \\lbrace 0.00,0.01, \\ldots , 0.99, 1.00 \\rbrace $ .", "For each threshold, we calculate the number of true positive, false positive and false negative foreground segments resulting in precision ($\\mathit {prec}(h)$ ) and recall ($\\mathit {rec}(h)$ ) values on segment-level depended of $h$ .", "The degree of separability is then computed as the area under precision recall curve ($\\mathit {AUPRC}$ ) by thresholding the meta classification probability.", "In addition, we compute the recall rate at $80\\%$ precision rate ($\\mathit {REC}_{80}$ ) for the evaluation.", "Furthermore, we consider the segment-wise $F_{1}$ score which is defined by $F_{1} (h) = 2 \\cdot \\mathit {prec}(h) \\cdot \\mathit {rec}(h) / (\\mathit {prec}(h) + \\mathit {rec}(h))$ .", "To obtain an evaluation metric independent of the meta classification threshold $h$ , we calculate the averaged $F_{1}$ score $\\bar{F_1} = \\frac{1}{|H|} \\sum _{h \\in H} F_{1}(h)$ and the optimal $F_{1}$ score $F_1^* = \\max _{h \\in H} F_{1} (h)$ .", "For a detailed description of these metrics see app:metrics." ], [ "Results on the Source Domain", "First, we study the predictive power of the meta classifier trained on the Cityscapes (validation) dataset using a train/test splitting of $80\\%/20\\%$ shuffling 5 times, such that all segments are a part of the test set.", "We use meta classification to prune possible false positive segments that are falsely predicted as foreground segments.", "For the comparison of basic semantic segmentation performance with our approach, meta classifiers are trained on the predicted foreground segments, respectively.", "These classifiers achieve test classification $\\mathit {AUROC}$ values between $94.68\\%$ and $99.14\\%$ .", "The $\\mathit {AUROC}$ (area under receiver operating characteristic curve) is obtained by varying the decision threshold in a binary classification problem, here for the decision between $\\mathit {IoU}=0$ and $>0$ .", "The influence of meta classification on the performance is studied in app:effectmetaclassif.", "We compare the detection performances which are shown in tab:auprc using presented evaluation metrics.", "Table: Performance results for the Cityscapes dataset for the basic semantic segmentation prediction vs. 
our approach, i.e., the DeepLabv3+/DualGCNet prediction aggregated with foreground-background prediction using BTS or Monodepth2.", "We observe that our method obtains higher $\mathit {AUPRC}$ , $F_1^*$ and $\mathit {REC}_{80}$ values than the semantic segmentation prediction.", "Note, there is no consistent trend as to which depth estimation network yields the larger enhancement.", "In particular, we reduce the number of non-detected segments of foreground classes.", "In fig:curves (left), the highest recall values of the semantic segmentation predictions are shown, i.e., no segments are deleted using meta classification.", "Figure: Left: The recall values under the assumption of the same precision values for all datasets and networks.", "We distinguish the performance for the DeepLabv3+ (DL) and the DualGCNet (DG) semantic segmentation networks whose predictions serve as baselines.", "We compare these with our approach using the BTS and the Monodepth2 depth estimation network, respectively.", "Center: Precision-recall curves for the A2D2 dataset, the DeepLabv3+ and BTS networks.", "Right: Number of false positive vs. false negative segments for different meta classification thresholds for the IDD dataset, the DualGCNet and BTS networks using $20\%$ of this dataset for fine-tuning.", "For our method, we use the meta classification threshold where the precision of our method is equal to that of the baseline.", "As a consequence, for the identical precision values we observe an increase in recall by up to $2.71$ percentage points (pp) for the Cityscapes dataset.", "In app:resultsclass, more numerical results evaluated on individual foreground classes are presented.", "The $\mathit {mIoU}$ is the commonly used performance measure for semantic segmentation.", "To compute the $\mathit {mIoU}$ for the aggregated prediction $\hat{\mathcal {S}}_x$ , we have to fill in the background values as they are ignored up to now.", "Similar to how we obtain the foreground class during the combination, we assign to every background pixel the background class $y \in \mathcal {C} \setminus \tilde{\mathcal {C}}$ of the semantic segmentation with the highest probability.", "The results for the semantic segmentation prediction and the difference to our aggregated predictions are shown in the Cityscapes column of tab:miou.", "Table: $\mathit {mIoU}$ results for both semantic segmentation networks and the difference to our approach.", "A higher $\mathit {mIoU}$ value corresponds to better performance.", "We perform slightly worse in the overall performance accuracy ($\mathit {mIoU}$ ) as the foreground-background masks are location-wise less accurate than the segmentation masks, see fig:examples.", "Figure: Examples for segments that are overlooked by the basic semantic segmentation network and detected by our approach for Cityscapes (DualGCNet, BTS, left), A2D2 (DeepLabv3+, BTS, center left), LostAndFound (DeepLabv3+, Monodepth2, center right) and IDD dataset (DeepLabv3+, BTS, right).", "Top: Ground truth images including only the labels of foreground classes.", "Bottom: Basic semantic segmentation prediction in typical Cityscapes colors for foreground segments (shades of blue and red) as well as the foreground prediction of our modified segmentation network (cyan).", "Nonetheless, we detect foreground objects, here road users, that are overlooked by the semantic segmentation network (for example, see the bicycle in fig:examples (left))."
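The segment-wise evaluation described above can be reproduced schematically as follows; this NumPy sketch assumes a simplified one-to-one matching between predicted and ground truth segments, and all variable names are placeholders.

```python
import numpy as np

def segment_metrics(p_fp, is_tp, n_gt, H=np.linspace(0.0, 1.0, 101)):
    """Sweep the meta classification threshold h: a segment is kept if its predicted
    false positive probability p_fp does not exceed h. is_tp marks predicted segments
    with IoU > 0, n_gt is the number of ground truth foreground segments."""
    prec, rec = [], []
    for h in H:
        keep = p_fp <= h
        tp, fp = np.sum(keep & is_tp), np.sum(keep & ~is_tp)
        prec.append(tp / max(tp + fp, 1))
        rec.append(tp / max(n_gt, 1))
    prec, rec = np.array(prec), np.array(rec)
    order = np.argsort(rec)                       # trapezoidal area under the PR curve
    auprc = np.sum(np.diff(rec[order]) * (prec[order][1:] + prec[order][:-1]) / 2.0)
    f1 = 2.0 * prec * rec / np.maximum(prec + rec, 1e-12)
    rec80 = rec[prec >= 0.8].max() if np.any(prec >= 0.8) else 0.0
    return {"AUPRC": auprc, "F1_mean": f1.mean(), "F1_opt": f1.max(), "REC80": rec80}

# toy usage with random segment scores
print(segment_metrics(p_fp=np.random.rand(200), is_tp=np.random.rand(200) > 0.3, n_gt=160))
```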
], [ "Results under Domain Shift", "In this section, we study the false negative reduction for the A2D2, LostAndFound and IDD datasets under domain shift from the source domain Cityscapes.", "As mentioned above, since the semantic segmentation networks as well as the modified segmentation networks are trained on the Cityscapes dataset, we train also the meta classification model on this dataset using all predicted segments.", "We obtain meta classification test $\\mathit {AUROC}$ values up to $93.12\\%$ for A2D2, $91.65\\%$ for LostAndFound and $93.97\\%$ for IDD.", "We compare the performance of our approach with the semantic segmentation prediction by computing the evaluation metrics, results are given in tab:auprcdomain.", "Table: Performance results for the basic semantic segmentation prediction vs. our approach.Table: Evaluation results obtained by different splittings that are used for fine-tuning the meta classifier.The performance metrics are greatly increased by our method demonstrating that our approach is more robust to domain shift.", "Noteworthy, we outperform the stronger DeepLabv3+ network in all cases.", "Example curves are presented in fig:curves (center) for the A2D2 dataset where an $\\mathit {AUPRC}$ enhancement of $11.45$ pp is obtained.", "Our precision-recall curve is entirely above the baseline.", "In particular, for identical precision values, we obtain an increase in recall by up to $13.24$ pp, i.e., reduce the number of false negative segments, as also shown in fig:curves (left).", "Examples for detected segments that are missed by the semantic segmentation network are given in fig:examples for all datasets.", "Hence, our method detect segments of well-trained classes, i.e., the overlooked bicycle in the Cityscapes dataset or various cars in A2D2.", "Moreover, we bridge the domain gap as we find small obstacles (LostAndFound) and animals (IDD) that are not part of the Cityscapes dataset and thus, are not included in the semantic space for training.", "In app:resultsclass, more numerical results evaluated on individual foreground classes are presented.", "In tab:miou, the differences between the $\\mathit {mIoU}$ values are evaluated on the Cityscapes classes.", "For the A2D2 dataset, the classes are mapped to the Cityscapes ones and for the IDD dataset, we treat the additional classes animal as human and auto rickshaw as car.", "For the LostAndFound, an evaluation is not possible as it contains only labels for the road and the small obstacles which do not fit into the semantic space.", "With one positive exception, we are slightly worse in overall accuracy performance.", "On the one hand, the images in fig:examples demonstrate why we decrease the accuracy slightly as the predictions and in particular, the segment boundaries are less accurate.", "On the other hand, these images motivate the benefit of our method as completely overlooked segments are detected.", "Furthermore, we bridge the domain shift in a post-processing manner that only requires two more inferences which run in parallel to semantic segmentation prediction." 
], [ "Fine-Tuning of the Meta Classifier", "Up to now, we have trained the segmentation networks as well as the meta classifier on Cityscapes for the experiments on A2D2, LostAndFound and IDD dataset.", "In this paragraph, we investigate the predictive power of the meta classifier and the implications on false negative reduction using parts of the target dataset for fine-tuning.", "Note, this domain adaptation only occurs in the post-processing meta classification step (retraining the neural networks is not necessary) and thus, the fine-tuning is light-weight and requires only a small amount of ground truth data.", "In detail, we retrain the meta classifier with $20\\%$ , $40\\%$ , $60\\%$ and $80\\%$ of the target dataset, respectively.", "The corresponding performance results are shown in tab:studyclassif.", "We observe great enhancements even with only a fine-tuning of $20\\%$ of the target domain obtaining an increase of up to $40.43$ pp for $\\mathit {AUPRC}$ .", "The maximal increase is achieved for the A2D2 dataset (on the DualGCNet and BTS networks) for which $20\\%$ correspond to about 100 images that are used for retraining and achieving such an improvement.", "For all datasets, the greatest performance gap occurs between a trained meta classifier only on the Cityscapes dataset and using a small amount of the target domain data (here $20\\%$ ).", "Increasing the subset of the target data, the performance is only slightly enhanced.", "Using $20\\%$ for fine-tuning, the highest $\\mathit {AUPRC}$ value of $94.71\\%$ is obtained by the DualGCNet and the BTS network on the IDD dataset.", "The corresponding number of false positives and false negatives is given in fig:curves (right).", "Note, the meta classifier for the baseline prediction is trained on the same train splitting.", "We outperform the basic semantic segmentation prediction and thus achieve a lower number of detection errors, in particular false negatives, therefore bridging the domain gap." 
], [ "Conclusion and Outlook", "In this work, we proposed a post-processing method to enhance the prediction of any semantic segmentation using monocular depth estimation.", "In particular, our approach reduce non-detected segments of a semantic segmentation network and bridge the domain gap given by different datasets with various objects and environments.", "To this end, we inferred a depth heatmap via a modified segmentation network that predicts foreground-background masks in parallel to a semantic segmentation network.", "Aggregating both predictions with a focus on foreground classes (here humans and vehicles) overlooked, i.e., false negative, segments are detected.", "To also minimize the false positives, meta classification as pruning step was applied based on uncertainty information.", "In our tests, we compared our method with the basic semantic segmentation prediction for several meta classification thresholds and improved the segment-wise precision and recall values.", "We obtained area under precision-recall curve values of up to $97.08\\%$ on source domain and up to $93.86\\%$ on target domain.", "In conclusion, our approach performed well on the Cityscapes dataset (source domain) as well as on objects and environments not seen during training overcoming the domain shift.", "As an extension of this work, it might be interesting to apply our method on other application fields like robotic navigation.", "In this case, a differentiation between foreground and background classes is possible, for example, the items in robotic navigation are most important.", "Our approach is applicable without major modifications, since trained depth estimation networks are available for several tasks and our modified segmentation network can be retrained on the same data like the basic semantic segmentation network uses.", "If a task-specific segmentation network is required, then as foreground-background network the semantic segmentation network architecture can be used only with small modifications on input and output." ], [ "Acknowledgment", "We thank M. K. 
Neugebauer for support in data handling and programming.", "This work is supported by the Ministry of Culture and Science of the German state of North Rhine-Westphalia as part of the KI-Starter research funding program.", "sectionapp" ], [ "Details on Meta Classification", "The semantic segmentation neural network provides for each pixel $z$ a probability distribution $f_{z}(y|x)$ over a label space $\\mathcal {C} = \\lbrace y_{1}, \\ldots , y_{c} \\rbrace $ , with $y \\in \\mathcal {C}$ and given an input image $x$ .", "The degree of randomness in semantic segmentation prediction is quantified by (pixel-wise) dispersion measures, such as the entropy $E_z(x) =-\\frac{1}{\\log (c)}\\sum _{y\\in \\mathcal {C}}f_z(y|x)\\log f_z(y|x) \\, ,$ (see fig:a2d2entro (right)) the variation ratio $V_{z} = 1 - f_z(\\hat{y}_z(x)|x)$ or the probability margin $M_z(x) = V_{z} + \\max _{y\\in \\mathcal {C}\\setminus \\lbrace \\hat{y}_z(x)\\rbrace } f_z(y|x)$ with predicted class $\\hat{y}_z(x)=\\operatornamewithlimits{arg\\, max}_{y\\in \\mathcal {C}}f_z(y|x) \\, .$ Based on the different behavior of these measures and the segment's geometry for correct and false predictions, we construct segment-wise features by hand to quantify the observations that we made.", "Let $\\hat{\\mathcal {P}}_x$ denote the set of predicted segments, i.e., connected components, (of the foreground class).", "By aggregating these pixel-wise measures, segment-wise features are obtained and serve as input for the meta classifier.", "To this end, we compute for each segment $q \\in \\hat{\\mathcal {P}}_x$ the mean of the pixel-wise uncertainty values of a given segment, i.e., mean dispersions $\\bar{D}$ , $D \\in \\lbrace E,V,M\\rbrace $ .", "Furthermore, we distinguish between the inner of the segment $q_{in}\\subset q$ consisting of all pixels whose eight neighboring pixels are also elements of $q$ and the boundary $q_{bd}= q \\setminus q_{in}$ .", "We observe that poor or false predictions are often accompanied by fractal segment shapes (a relatively large amount of boundary pixels).", "An example is shown in fig:a2d2entro (left).", "This results in segment size $S=|q|$ and mean dispersion features per segment also for the inner and the boundary since uncertainties may be higher on a segment's boundary (see fig:a2d2entro (right)).", "Additionally, we define relative segment sizes $\\tilde{S} = S/S_{bd}$ and $\\tilde{S}_{in} = S_{in}/S_{bd}$ quantifying the degree of fractality as well as relative mean dispersions $\\tilde{\\bar{D}} = \\bar{D} \\tilde{S}$ and $\\tilde{\\bar{D}}_{in} = \\bar{D}_{in} \\tilde{S}_{in}$ where $D \\in \\lbrace E,V,M\\rbrace $ .", "For the foreground-background segmentation, given the same input image $x$ , we obtain for each pixel $z$ the probability of being a foreground pixel $g_z(x) \\in [0,1]$ .", "Thus, we calculate the mean and relative entropy features for the foreground-background prediction (having only two classes), denoted by $\\bar{F}_{*}$ , $* \\in \\lbrace \\_,in,bd\\rbrace $ , $\\tilde{\\bar{F}}$ and $\\tilde{\\bar{F}}_{in}$ .", "Last, we add the geometric center $\\bar{q} = \\frac{1}{S} \\sum _{(z_{v}, z_{h}) \\in q} (z_{v}, z_{h})$ where $(z_{v}, z_{h})$ describes the vertical and horizontal coordinate of pixel $z$ and the mean class probabilities $P(y|q) = \\frac{1}{S} \\sum _{z \\in q} f_{z}(y|x)$ for each foreground class $y \\in \\tilde{\\mathcal {C}} \\subset \\mathcal {C}$ where $\\tilde{\\mathcal {C}} = \\lbrace y_{1}, \\ldots , y_{\\tilde{c}} \\rbrace $ , $\\tilde{c} < c$ , to 
our set of hand-crafted features, resulting in the following set: $U^{q} & = \\ \\lbrace \\bar{D}, \\bar{D}_{in}, \\bar{D}_{bd}, \\tilde{\\bar{D}}, \\tilde{\\bar{D}}_{in} \\, : \\, D \\in \\lbrace E,V,M,F\\rbrace \\rbrace \\cup \\lbrace \\bar{q} \\rbrace \\\\& \\cup \\lbrace S, S_{in}, S_{bd}, \\tilde{S}, \\tilde{S}_{in} \\rbrace \\cup \\lbrace P(y|q) \\, : \\, y \\in \\tilde{\\mathcal {C}} \\rbrace \\, .$ Analogously to the set of predicted segments $\\hat{\\mathcal {P}}_x$ , we denote by $\\mathcal {P}_x$ the set of segments in the ground truth $\\mathcal {S}_x$ .", "To determine if a predicted segment $q \\in \\hat{\\mathcal {P}}_x$ is a false positive, we consider the intersection over union.", "The segment-wise $\\mathit {IoU}$ is then defined as $\\mathit {IoU}(q) = \\frac{| q \\cap Q |}{| q \\cup Q |}, \\quad Q = \\bigcup _{q^{\\prime } \\in \\mathcal {P}_x, q^{\\prime } \\cap q \\ne \\emptyset } q^{\\prime } \\, .$ Figure: Left: Semantic segmentation predicted by a DNN.", "Right: Entropy heatmap." ], [ "More Details on Evaluation Metrics", "Let $\\hat{\\mathcal {P}}_x$ denote the set of predicted segments and $\\mathcal {P}_x$ of ground truth segments.", "Meta classification provides a probability $m(q) \\in [0,1]$ for each segment $q \\in \\hat{\\mathcal {P}}_x$ to be a false positive on which we threshold with different values $h \\in H = \\lbrace 0.00,0.01, \\ldots , 0.99, 1.00 \\rbrace $ .", "A predicted false positive segment is considered as background.", "For each threshold $h$ , we calculate over of all foreground segments in a given validation set $\\mathcal {X}$ the number of false positives $\\mathrm {FP}(h) = \\sum _{x \\in \\mathcal {X}} \\sum _{ q \\in \\hat{\\mathcal {P}}_x } 1_{ \\lbrace \\mathit {IoU}(q) = 0 \\rbrace } \\hspace{0.84998pt} 1_{ \\lbrace m(q) \\le h \\rbrace } \\, ,$ true positives $\\mathrm {TP}(h) = \\sum _{x \\in \\mathcal {X}} \\sum _{ q^{\\prime } \\in \\mathcal {P}_x} 1_{ \\lbrace \\mathit {IoU}^{\\prime } (q,h) > 0 \\rbrace }$ and false negatives $\\mathrm {FN}(h) = \\sum _{x \\in \\mathcal {X}} \\sum _{ q^{\\prime } \\in \\mathcal {P}_x} 1_{ \\lbrace \\mathit {IoU}^{\\prime } (q,h) = 0 \\rbrace }$ where the indicator function is defined as $1_{ \\lbrace A \\rbrace } = {\\left\\lbrace \\begin{array}{ll}1, &\\text{ if } \\text{event } A \\text{ happens} \\\\0, &\\text{ else } \\,\\end{array}\\right.", "}$ and the $\\mathit {IoU}$ for a ground truth segment $q^{\\prime } \\in \\mathcal {P}_x$ as $\\mathit {IoU}^{\\prime } (q^{\\prime },h) = \\frac{| q^{\\prime } \\cap Q^{\\prime } |}{| q^{\\prime } \\cup Q^{\\prime } |}, \\quad Q^{\\prime } = \\bigcup _{\\begin{array}{c} q \\in \\hat{\\mathcal {P}}_x, q \\cap q^{\\prime } \\ne \\emptyset \\\\ m(q) \\le h\\end{array}} q \\, .$ Thus, we obtain precision $\\mathit {prec}(h) = \\frac{\\mathrm {TP}(h)}{\\mathrm {TP}(h) + \\mathrm {FP}(h)}$ and recall $\\mathit {rec}(h) = \\frac{\\mathrm {TP}(h)}{\\mathrm {TP}(h) + \\mathrm {FN}(h)}$ values on segment-level dependent of $h$ .", "The degree of separability is then computed as the area under precision recall curve ($\\mathit {AUPRC}$ ) by thresholding the meta classification probability.", "Furthermore, we use the recall rate at $80\\%$ precision rate ($\\mathit {REC}_{80}$ ) for the evaluation.", "Moreover, we consider the segment-wise $F_{1}$ score which is defined by $F_{1} (h) = 2 \\cdot \\frac{\\mathit {prec}(h) \\cdot \\mathit {rec}(h)}{\\mathit {prec}(h) + \\mathit {rec}(h)} \\, .$ To obtain an evaluation metric independent of the meta 
classification threshold $h$ , we calculate the averaged $F_{1}$ score $\\bar{F_1} = \\frac{1}{|H|} \\sum _{h \\in H} F_{1}(h)$ and the optimal $F_{1}$ score $F_1^* = \\max _{h \\in H} F_{1} (h) \\, .$" ], [ "Effects of Meta Classification", "In tab:metaimprove, we show the effects of meta classification comparing the $F_{1}$ score (see eq:f1) performance with and without meta classification.", "Table: Evaluation results using meta classification ($F_1^*$) and without ($F_1(1)$) for the basic semantic segmentation prediction (DeepLabv3+/DualGCNet) and our approach, i.e., the DeepLabv3+/DualGCNet prediction aggregated with foreground-background prediction using BTS or Monodepth2.", "$F_{1}(1)$ corresponds to the precision and recall values obtained without post-processing, i.e., meta classification, and $F_1^*$ to the best possible ratio of both rates.", "Note that we use the meta classifier trained only on the source domain dataset Cityscapes.", "We observe that false positive pruning significantly improves the performance of our method as many false positive segments are produced by the aggregation step in order to reduce the number of false negatives.", "We increase the $F_1$ score by up to $65.72$ pp for our method using meta classification.", "Notably, the $F_1$ score for the basic semantic segmentation prediction is also enhanced by up to $39.59$ pp.", "Moreover, the results show that without using meta classification the basic semantic segmentation prediction outperforms our method.", "This is caused by our foreground-background segmentation based on depth estimation being more prone to predicting foreground segments.", "We produce more candidate foreground segments to reduce false negatives and, using the false positive pruning, we outperform the basic semantic segmentation." ], [ "Numerical Results per Class", "Up to now, the given results have been aggregated over all foreground classes; here we present results for three foreground classes separately, i.e., person, car and bicycle, see tab:auprcclassperson11, tab:auprcclasscar13 and tab:auprcclassbicycle18, respectively.", "Table: Performance results for the basic semantic segmentation prediction (DeepLabv3+/DualGCNet) vs. our approach, i.e., the DeepLabv3+/DualGCNet prediction aggregated with foreground-background prediction using BTS or Monodepth2, for class person.", "Table: Performance results for the basic semantic segmentation prediction vs. our approach for class car.", "Table: Performance results for the basic semantic segmentation prediction vs.
our approach for class bicycle.", "As the LostAndFound dataset provides only labels for road and small obstacles, a class-wise evaluation is not possible.", "In most cases, we outperform the basic semantic segmentation prediction, although differences for the datasets and the three classes are observed.", "The highest performance, up to $89.20\\%$ $\\mathit {AUPRC}$ , is achieved for Cityscapes since this is the source domain and thus the semantic segmentation network produces strong predictions.", "Under domain shift, we obtain $\\mathit {AUPRC}$ values of up to $75.53\\%$ .", "As for the foreground classes in general, there is no clear tendency as to which depth estimation network used in our method performs better.", "For the class car, we achieve higher performance metrics in comparison to the classes person and bicycle.", "Cars occur more frequently than persons and bicycles in all three datasets (see [3], [20], [4]) and are easier to recognize given their larger size and similar shape.", "In summary, we improve the detection performance of the basic semantic segmentation network in most cases and, in particular, bridge the domain gap.", "Even though our performance for bicycles, for example, is comparatively lower, we generally detect more overlooked foreground segments and thus reduce false negatives." ] ]
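As a complement to the metric definitions in the preceding appendix sections, the following minimal sketch computes $\mathit{prec}(h)$, $\mathit{rec}(h)$, $F_1(h)$, $\bar F_1$, $F_1^*$, $\mathit{AUPRC}$ and $\mathit{REC}_{80}$ over the threshold set $H$. The per-segment data layout (meta probability and IoU per predicted segment, and the meta probabilities of the predicted segments intersecting each ground truth segment) is an assumption made for illustration; in the actual pipeline these quantities are derived from the pixel-wise masks.

```python
import numpy as np

def segment_metrics(pred_segments, gt_overlaps, thresholds=np.linspace(0.0, 1.0, 101)):
    """Segment-wise, threshold-dependent evaluation metrics (a sketch).

    pred_segments: list of (m, iou) pairs for the predicted foreground segments q,
        with m = meta classification probability of being a false positive and
        iou = IoU(q) with the union of overlapping ground truth segments.
    gt_overlaps: one list per ground truth segment q', containing the meta
        probabilities of all predicted segments intersecting q' (empty if none)."""
    m = np.array([p[0] for p in pred_segments])
    iou = np.array([p[1] for p in pred_segments])

    prec, rec, f1 = [], [], []
    for h in thresholds:
        kept = m <= h                                   # segments surviving pruning
        fp = int(np.sum(kept & (iou == 0.0)))
        tp = sum(any(mq <= h for mq in ov) for ov in gt_overlaps)
        fn = len(gt_overlaps) - tp
        p = tp / max(tp + fp, 1)
        r = tp / max(tp + fn, 1)
        prec.append(p)
        rec.append(r)
        f1.append(0.0 if p + r == 0 else 2 * p * r / (p + r))

    prec, rec, f1 = np.array(prec), np.array(rec), np.array(f1)
    order = np.argsort(rec)                             # integrate the PR curve
    auprc = float(np.sum(np.diff(rec[order]) * 0.5 * (prec[order][1:] + prec[order][:-1])))
    rec80 = float(rec[prec >= 0.8].max()) if np.any(prec >= 0.8) else 0.0
    return {"AUPRC": auprc, "REC_80": rec80,
            "F1_mean": float(f1.mean()), "F1_star": float(f1.max())}
```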
2207.03513
[ [ "Solution theory of fractional SDEs in complete subcritical regimes" ], [ "Abstract We consider stochastic differential equations (SDEs) driven by a fractional Brownian motion with drift coefficient $b$ that is allowed to be arbitrarily close to criticality in a scaling sense.", "We develop a comprehensive solution theory that includes strong existence, path-by-path uniqueness, existence of a solution flow of diffeomorphisms, Malliavin differentiability, and $\\rho$-irregularity.", "Furthermore, it has direct consequences for McKean-Vlasov, transport, and continuity equations." ], [ "Introduction", "Given a vector field $b:\\mathbb {R}_+\\times \\mathbb {R}^d\\rightarrow \\mathbb {R}^d$ , an initial condition $x_0\\in \\mathbb {R}^d$ , and a function $f:\\mathbb {R}_+\\rightarrow \\mathbb {R}^d$ , consider the differential equation $X_t = x_0 + \\int _0^t b_r ( X_r) \\mathrm {d}r + f_t.$ When $f$ is chosen according to some random distribution, one obtains a stochastic differential equation (SDE), which often exhibits much better properties than the unperturbed equation ($f\\equiv 0$ ), even at the level of existence and uniqueness of solutions.", "This phenomenon is often referred to as regularisation by noise and its study goes back to the works of Zvonkin [71] and Veterennikov [65], see the monograph [29] for a survey in the case of standard Brownian $f$ .", "One way to have a unified view on the many works on this phenomenon is by a scaling argument.", "From now on we take a fractional Brownian perturbation $B^H$ with Hurst parameterInteger values of $H$ are somewhat pathological, see [60].", "$H\\in (0,+\\infty ) \\setminus \\mathbb {N}$ , which satisfies the scaling relation (BHt)t0law=(-HBHt)t0 for any $\\lambda >0$ .", "Details about the processes $B^H$ are given below.", "Let us just briefly recall that the choice $H=1/2$ gives the standard Brownian motion, that this is the only choice where $B^H$ is a Markov process, and that the choices $H=k+1/2$ , $k\\in \\mathbb {N}_+$ (which we call “degenerate Brownian”) are the only other ones where $B^H$ satisfies the germ Markov property.", "For all other choices of $H$ the Markovian toolbox (Itô's formula, Kolmogorov equation, Zvonkin transformation, martingale problem) is unavailable and the study of the SDE requires fundamentally different tools.", "The equation then takes the form $X_t = x_0 + \\int _0^t b_r ( X_r) \\mathrm {d}r + B^H_t.$ In order for the regularising effects of $B^H$ to dominate the irregularities of $b$ , a natural requirement is that when zooming into small scales in a way that keeps the noise strength constant, the nonlinearity vanishes.", "Therefore, keeping () in mind, for a fixed parameter $H$ we call a normed space $V$ of functions (or distributions) on $\\mathbb {R}_+\\times \\mathbb {R}^d$ critical/subcritical/supercritical if for the rescaled drift coefficient bt(x)=1-H b(t, H x), one has $\\Vert b^\\lambda \\Vert _V=\\lambda ^\\gamma \\Vert b\\Vert _V$ with $\\gamma =0$ /$\\gamma >0$ /$\\gamma <0$ .", "Example 1.1 Consider the example of the Hölder-Besov spaces $V=C^\\alpha _x$ , i.e.", "when $b$ does not depend on the time variable.", "Then one has $\\gamma =1-H+\\alpha H$ and so the subcriticality condition reads as >1-1H.", "Few results are known in this regime and only when the Markovian toolbox is available.", "In the degenerate Brownian case weak well-posedness is proved in [15].", "In the classical Brownian case one gets the condition $\\alpha >-1$ , which remains out of reach.", "Weak 
well-posedness is known for $\\alpha >-1/2$ [31], and a nonstandard kind of well-posedness (where uniqueness is even weaker than uniqueness in law) is shown for $\\alpha >-2/3$ [25], [10].", "The classical works [71], [65] show strong well-posedness for $\\alpha \\ge 0$Here allowing equality is understood to allow $b$ to be bounded and measurable, which is consistent with our convention for $C^\\alpha _x$ for $\\alpha \\in \\mathbb {N}$ , see below..", "Beyond Markovianity the best known condition is the more restrictive >1-12H, under which strong well-posedness is known for all $H\\in (0,\\infty )\\setminus \\mathbb {N}$ [56], [12], [38], [40], [42].", "Example 1.2 Another well-studied example is the mixed Lebesgue space $V=L^q_tL^p_x$ .", "Then one has $\\gamma =1-H-1/q-(Hd)/p$ , and so the subcritical regime is 1q+Hdp<1-H.", "In the classical case $H=1/2$ this is precisely the condition from the classical work [44], where strong well-posedness is proved.", "This case has then been extensively studied by several authors, allowing also for multiplicative noise with Sobolev diffusion coefficients, see among others [68], [27], [69], [66].", "In recent years, even the critical case has been reached [43], [61] under certain constraints on $d,p,q$ ; let us also mention the recent work [70], which goes beyond condition (REF ), up to additional constraints on ${\\rm div}\\, b$ .", "For $H\\in (1/2,1)$ no results are known and for $H\\in (0, 1/2)$ the best previously known results for weak and strong well-posedness are both from [47], under the stronger conditions 1q+Hdp<12,      1q+Hdp<12-H, respectively, with the additional constraint $p\\in [2,\\infty ]$ .", "It is conjectured in [47] that the first condition in (REF ) is enough to guarantee strong well-posedness.", "One particular corollary of our result is that for $q\\in (1,2]$ even (REF ) is sufficient.", "Therefore we propose to update the conjecture of [47] (if $q\\in (1,2]$ , now a theorem) to assert strong well-posedness under the scaling condition (REF ).", "Let us also mention that we have recently learned about an ongoing work [52] towards improving (REF ).", "Example 1.3 A common generalisation of Examples REF and REF is the space $V=L^q_t C^\\alpha _x$ , where the scaling works out to be $\\gamma =1-H-1/q+\\alpha H$ .", "Therefore the subcriticality condition reads as >1-1H+1Hq=1-1q'H, where, and for the rest of the paper, $q$ and $q^{\\prime }$ are conjugate exponents, that is, $1/q+1/q^{\\prime }=1$ .", "This generality has only been studied recently, in [38], [39] strong well-posedness is proved under the stronger condition >1-12H+1Hq, with the additional constraints $H\\in (0,1/2]$ , $q\\in (2,\\infty ]$ .", "In summary, well-posedness results in a whole subcritical regime are available only in Markovian case $H=k+1/2$ , $k\\in \\mathbb {N}$ , and strong well-posedness only in the standard Brownian case $H=1/2$ .", "In the present paper we establish strong well-posedness in the full subcritical regime for all $H\\in (0,\\infty )\\setminus \\mathbb {N}$ , with coefficients from the class in Example REF , under the additional constraint $q\\in (1,2]$ .", "Therefore our main conditions are summarised in the assumption H(0,) N,      q(1,2],      (1-1q'H,1).A The solution theory we present in fact goes beyond strong well-posedness.", "We show existence in the strong sense not only of solutions but also of solution flows, and uniqueness in the path-by-path sense.", "Furthermore, several further properties of solutions are 
established such as stability, continuous differentiability of the flow and its inverse, Malliavin differentiability, and $\\rho $ -irregularity.", "Many of these results are even new in the time-independent case: if $b$ is only a function of $x$ and belongs to $C^\\alpha _x$ , then the optimal choice to put it in the framework of () is to choose $q=2$ , leading to the condition $\\alpha >1-1/(2H)$ .", "This is the classical condition under which strong well-posedness is known [56], [12], [40], but several of the further properties have not been previously established.", "The results are summarised (without aiming for full precision and full generality) in the following theorem, and the corresponding results (often in a somewhat sharper form) can be found throughout the paper in Theorems REF , REF , REF , REF (1), REF (2), REF (3), REF (4), REF (5), REF (6), REF (7).", "Theorem 1.4 Assume () and let $x_0\\in \\mathbb {R}^d$ , $b\\in L^q_t C^\\alpha _x$ .", "Let furthermore $m\\in [1,\\infty )$ .", "Then Strong existence and path-by-path uniqueness holds for (REF ); For any other $\\tilde{x}_0\\in \\mathbb {R}^d$ and $\\tilde{b}\\in L^q_t C^\\alpha _x$ , the associated solutions $X$ and $\\tilde{X}$ satisfy the stability estimate (E(t[0,1]|Xt-Xt|m)1/mN(|x0-x0|+b- bLqt C-1x); The solutions form a stochastic flow of diffeomorphisms $\\Phi _{s\\rightarrow t}(x)$ , whose spatial gradient $\\nabla \\Phi $ is continuous in all variables; moreover it holds $\\sup _{0\\le s\\le t\\le 1, x\\in \\mathbb {R}^d} \\mathbb {E}\\big [| \\nabla \\Phi _{s\\rightarrow t} (x) |^m\\big ] <\\infty ;$ For each $s<t$ and $x\\in \\mathbb {R}^d$ , the random variable $\\omega \\mapsto \\Phi _{s\\rightarrow t}(x;\\omega )$ is Malliavin differentiable; moreover it holds $\\sup _{0\\le s\\le t\\le 1, x\\in \\mathbb {R}^d} \\mathbb {E}\\big [ \\Vert D \\Phi _{s\\rightarrow t}(x)\\Vert _{\\mathcal {H}^H}^m \\big ]<\\infty ,$ where $D$ denotes the Malliavin derivative and $\\mathcal {H}^H$ the Cameron-Martin space of $B^H$ ; Strong existence and uniqueness holds also for the McKean-Vlasov equation Xt=x0+0t(brr)(Xr)dr +BHt,      t=L(Xt); Solutions are $\\rho $ -irregular for any $\\rho <1/(2H)$ ; If additionally $\\alpha >0$ , then for any $p>1$ strong existence and path-by-path uniqueness holds for solutions $u\\in L^\\infty _t W^{1,p}_x$ to the transport equation t u + bu + BHtu=0 for all initial data $u_0\\in W^{1,p}_x$ .", "The various aspects of the main results are discussed in detail in their respective sections, so here let us just briefly comment on them.", "The notion of path-by-path uniqueness in (1), as a strengthening of the classical pathwise uniqueness, was first established in the seminal work [23], and later popularised by [63], [12].", "Stability estimates in the style of (2) are useful to bypass abstract Yamada-Watanabe arguments and get strong existence directly.", "Stability estimates also have applications for McKean-Vlasov equations in Section .", "The study of stochastic flows (3) for SDEs goes back to the classical work [45], see also [27], [17] for flows in irregular settings.", "In (4), we can in fact derive differentiability with respect to perturbations of the noise in quite a bit more general than Cameron-Martin directions (see Remark REF ), in line with the observations from [46], [32].", "Concerning (5), regularisation by fractional noise for distribution dependent SDEs has been investigated in [39] and recently in [41].", "Above we only stated the simplest example of McKean-Vlasov equation 
for the sake of presentation, Theorem REF below allows for more general dependence on $(X,\\mu )$ .", "The notion of $\\rho $ -irregularity in (6) was introduced by [12] as a powerful measurement of the averaging properties of paths.", "Extending $\\rho $ -irregularity from Gaussian processes to perturbed Gaussian processes has previously only been achieved via Girsanov transform, here we provide a simple and more robust alternative.", "Concerning (7), regularisation by noise results for the transport equation were first established for Brownian noise in [30], see also [13], [54], [38] for further investigations in the fractional case.", "Finally, let us mention that the scope of some intermediate estimates are larger than (), and therefore in some regime where we do not obtain strong well-posedness, we still obtain compactness and therefore existence of weak solutions.", "This is the content of Section .", "Remark 1.5 One fundamental stochastic analytic tool that does apply in the non-Markovian setting is Girsanov's transform.", "Indeed, it is heavily used in the seminal works [56], [12] and many subsequent ones.", "However, it has its limitations: in our setting it only applies when the critical exponent $1-1/(q^{\\prime }H)$ is negative (which in turn may only happen if $H\\in (0,1/2)$ ), for details see Appendix .", "Therefore, throughout the article we avoid Girsanov's transform altogether.", "Another motivation for a Girsanov-free approach is to develop tools that are robust enough to extend to Lévy-driven SDEs, see [9] for some first results on such equations via stochastic sewing.", "Remark 1.6 Theorem REF gives new results also in the classical $H=1/2$ case.", "Indeed, to solve (REF ) with classical tools, one would require a good solution theory of the corresponding Kolmogorov equation t u-12u=bu.", "Suppose that $b\\in L^q_t C^\\alpha _x$ with $q\\in (1,2)$ .", "Then the naive power counting fails: replacing first $u$ by a smooth function on the right-hand side gives, by Schauder estimates, $u\\in L^\\infty _t C^\\beta _x$ with $\\beta =\\alpha +2-2/q$ , and so $b\\cdot \\nabla u\\in L^q_t C^{\\alpha +1-2/q}_x$ .", "Since $\\alpha +1-2/q<\\alpha $ , iterating the procedure implies worse and worse spatial regularity on $u$ , and after finitely many steps the product $b\\cdot \\nabla u$ becomes even ill-defined.", "This is somewhat similar to the issue of the Kolmogorov equation of Lévy SDEs with low stability index, which was circumvented in [16].", "Remark 1.7 By the embedding $L^p_x\\subset C^{-d/p}_x$ our result immediately implies well-posedness of (REF ) with $L^q_t L^p_x$ drift in the full subcritical regime (with respect to $p$ ) (REF ) if $q\\in (1,2]$ , which can be seen as a fractional analogue of [44].", "Note that unlike in [47], $p\\in [1,2)$ is also allowed.", "Remark 1.8 At the price of slightly anticipating some key concepts which will be introduced throughout the paper, let us discuss how our methods extend hold for a larger class of random perturbations $B^H$ than just pure fBm.", "The main requirement we need is for $B^H$ to be a Gaussian process satisfying $\\frac{1}{C} |t-s|^{2H} I \\le {\\rm Var}(B^H_t\\vert \\mathcal {F}_s)\\le C |t-s|^{2H} I$ for all $s<t$ with $|t-s|$ sufficiently small; here $\\mathcal {F}_t$ is the natural filtration of $B^H$ and $I$ denotes the $d\\times d$ identity matrix.", "More precisely, the upper bound in (REF ) provides a priori estimates in the style of Proposition REF , while the lower bound, which usually goes by the name of 
local nondeterminism, ensures the regularising effect of $B^H$ and the application of stochastic sewing techniques.", "Standard examples of processes satisfying (REF ) are deterministic additive perturbations of fBm (cf.", "Lemma REF ), the so called type-II fBm [51] and mixed fBm introduced in [18]; given any $H_1\\ne H_2$ , the process $B^{H_1}+B^{H_2}$ will satisfy condition (REF ) with $H=H_1\\wedge H_2$ , both in the case $B^{H_1}$ and $B^{H_2}$ are sampled independently and the one instead where they are constructed from the same reference Brownian motion.", "In this sense, our results are also a far reaching generalization and extension of the ones provided in [57], while again not requiring highly technical use of Girsanov transform as therein.", "There are some passages where condition (REF ) alone is not enough and we exploited other properties of fBm.", "Specifically, the counterxamples below assume $B^H$ to be $(H-\\varepsilon )$ -Hölder continuous and symmetric; the flows constructed in Sections - need some basic time-continuity $\\mathbb {E}[|B^H_t-B^H_s|]\\lesssim |t-s|^{H\\wedge 1}$ in order to apply Kolmogorov-type criteria; more substantially, the results from Section rely on a Volterra representation $B^H_t =\\int _0^t K(t,s) \\mathrm {d}W_s$ .", "All these properties are satisfied both by type-II and mixed fBm.", "The only section truly specific to fBm is Appendix , where ad hoc criteria to check Girsanov transform for fBm are presented; any extension to other processes would require precise knowledge of the associated kernel $K(t,s)$ .", "The rest of the article is structured as follows.", "We conclude the introduction by presenting counterexamples in the supercritical regime, demonstrating that (up to the strict inequality) condition () can not be improved, and we introduce the notation used throughout the paper.", "In Section we state and prove some fundamental lemmata, including a priori estimates on solutions of (REF ) and two new forms of the stochastic sewing lemma of [47].", "Section contains further a priori estimates, on additive functionals of processes as well as a key stability property of solutions.", "In Sections and we use these estimates to establish well-posedness of (REF ) in the $\\alpha >0$ and $\\alpha <0$ cases, respectively.", "Along the way we prove the existence of a solution of a semiflow, which we upgrade to a flow of diffeomphisms in Section .", "Section contains applications of our stability estimates to McKean-Vlasov equations.", "In Section we construct weak solutions in some regimes beyond (), via a compactness argument enabled by the a priori estimates.", "In Section we show $\\rho $ -irregularity of solutions and more general perturbations of fractional Brownian motions.", "Finally, Section contains applications to transport and continuity equations.", "In the appendices we collect some useful tools for which we did not find exact references in the literature: Appendix contains variants of Kolmogorov continuity criterion, Appendix gives a basic bound for solutions of affine linear Young differential equations, and in Appendix we summarise relations of various Sobolev spaces and their use in Girsanov transform for fractional Brownian motions." 
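Finally, for the reader's convenience, let us spell out the scaling computation behind Examples 1.1-1.3 referred to above. This is only a sketch: we take $\alpha\in(0,1)$ and work with the homogeneous Hölder seminorm, the general Besov case differing only by multiplicative constants. For $b^\lambda_t(x)=\lambda^{1-H}b(\lambda t,\lambda^H x)$ one has

\begin{align*}
[b^\lambda_t]_{C^\alpha_x}
 &= \lambda^{1-H}\sup_{x\ne y}\frac{|b(\lambda t,\lambda^H x)-b(\lambda t,\lambda^H y)|}{|x-y|^\alpha}
  = \lambda^{1-H+\alpha H}\,[b_{\lambda t}]_{C^\alpha_x},\\
\Big(\int_0^\infty [b^\lambda_t]_{C^\alpha_x}^q \,\mathrm{d}t\Big)^{1/q}
 &= \lambda^{1-H+\alpha H-\frac{1}{q}}\Big(\int_0^\infty [b_s]_{C^\alpha_x}^q \,\mathrm{d}s\Big)^{1/q},
\end{align*}

using the change of variables $\lambda^H x\mapsto x$ in the first identity and the substitution $s=\lambda t$ in the second. Hence $\gamma=1-H-1/q+\alpha H$, and $\gamma>0$ is equivalent to $\alpha>1-\frac{1}{q'H}$. The choice $q=\infty$ gives Example 1.1, while the heuristic substitution $\alpha=-d/p$, suggested by the embedding $L^p_x\subset C^{-d/p}_x$, recovers the exponent of Example 1.2.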
], [ "Acknowledgments", "MG thanks Konstantinos Dareiotis for valuable discussions during the development of the parallel article [22].", "MG was funded by the Austrian Science Fund (FWF) Stand-Alone programme P 34992.", "LG was funded by the DFG under Germanys Excellence Strategy - GZ 2047/1, project-id 390685813.", "The authors thank the institutions MFO Oberwolfach and TU Wien for their hospitality during their research visits." ], [ "Counterexamples", "Although the scaling argument is heuristic, one can often construct counterexamples in the supercritical case.", "The construction below is motivated by [14], which gives counterexamples in the $q=\\infty $ , $\\alpha >0$ case.", "So we assume that $q\\in (1,\\infty )$ , $\\alpha <1-1/(q^{\\prime }H)$ , and $d=1$ .", "Take $\\tilde{q}\\in (q,\\infty )$ such that $\\alpha <1-1/(\\tilde{q}^{\\prime } H)$ .", "Define the function bt(x)=t-1/qsign(x)|x|.", "We will further assume $\\alpha >-1$ , so one has clearly $b\\in L^q_t C^\\alpha _x$ .", "We claim that with this $b$ and initial condition $x_0=0$ , even weak uniqueness of (REF ) fails.", "First consider the case $\\alpha >0$ .", "Let $\\gamma =1/(\\tilde{q}^{\\prime }(1-\\alpha ))$ .", "By definition, $\\gamma $ satisfies the identity =-1q++1, and furthermore $\\gamma <H$ thanks to the choice of $\\alpha $ .", "Fix furthermore a $\\delta >0$ such that $\\delta ^\\alpha /\\gamma >2\\delta $ .", "Such $\\delta $ obviously exists.", "Take $x\\in (0,1]$ and consider a weak solution $(X^x,B^{H,x})$ of (REF ), which is well-known to exist due to the spatial continuity of $b$ .", "Set the stopping time x:={t0:  Xtxt}1.", "Notice that for $t\\le \\tau $ we can use the equation to get Xtx>0ts-1/q(s) ds+BH,xt= (/) t+BH,xt>t+(t+BH,xt).", "From () we see that defining x:={t>0:  |BH,xt|t}1, then $\\tau _x\\ge \\tilde{\\tau }_x$ .", "Note furthermore that the stopping time $\\tilde{\\tau }_x$ is a.s. 
strictly positive and is identically distributed for all $x\\in (0,1]$ .", "In particular, there exist $\\rho >0$ such that P(x>)3/4.", "The laws of $(X^x,B^{H,x})$ on $C([0,1])^2$ are tight, and therefore by Skorohod's representation theorem, we may assume that for a sequence $x_n\\rightarrow 0$ the random variables $(X^{x_n},B^{H,x_n})$ live on the same probability space and converge in $C([0,1])^2$ a.s.", "The limit $(X^0,B^{H,0})$ is a solution to (REF ) with initial condition 0 and satisfies P(X0t>0  t(0,])P(X0tt  t[0,]) =nP(Xxntt  t[0,])=3/4.", "By the symmetry of $b$ and the symmetry in law of the fractional Brownian motion, we have that $(-X^0,-B^{H,0})$ is also a weak solution to (REF ) with initial condition 0 and satisfies P(-X0t<0  t(0,])=P(X0t>0  t(0,])3/4.", "This shows that $X^0$ and $-X^0$ do not have the same law, yielding weak non-uniqueness (we leave it as an exercise to the reader to show that their laws are in fact mutually singular).", "In the distributional case $\\alpha \\in (-1,0)$ , we have to be a bit more careful, since the meaning of the equation is more ambiguous.", "We can use the the barriers as above to define local solutions: taking $\\gamma $ as above and arbitrary $\\delta >0$ , if a process $Y$ on an interval $[0,t_0]$ satisfies $|Y_t|\\ge \\delta t^\\gamma $ , then by a calculation like () the integral form of the equation (REF ) is meaningful up to time $t_0$ with understanding the integral in the classical sense.", "We call such a process a solution on $[0,t_0]$ if it satisfies (REF ) in this classical sense.", "Fix some $\\delta $ to be specified later.", "Take $x\\in (0,1]$ and note that local solutions $X^x$ with initial condition $x$ exist even in the strong sense, at least up to the stopping time $\\tau _x$ .", "Defining $\\tilde{\\tau }$ similarly to $\\tilde{\\tau }_x$ (note that for now the driving noise does not depend on $x$ ), for $t\\le \\tau _x\\wedge \\tilde{\\tau }$ we have similarly to () Xtxx+(/+)t. 
To turn this into a lower bound on $X^x_t$ , first note that if $x>2\\delta t^\\gamma $ , then $X^x_t> \\delta t^\\gamma $ is trivial.", "If on the other hand $x\\le 2\\delta t^\\gamma $ , we can write Xtx>t/2ts-1/q(x+(/+)s) ds t/2ts-1/q(22s+(/+)s) ds (1/2)(/++22)t, using again the definition of $\\gamma $ in the last step.", "Since $\\alpha \\in (-1,0)$ , we can choose $\\delta >0$ small enough so that the prefactor of $t^\\gamma $ is bigger than $\\delta $ .", "Therefore we can again conclude $\\tau _x\\ge \\tilde{\\tau }$ , which implies $X^x\\ge \\delta t^\\gamma $ for $t\\le \\tilde{\\tau }$ , and similarly we have $X^{-x}\\le -\\delta t^\\gamma $ for $t\\le \\tilde{\\tau }$ .", "We now want to pass to the $x\\rightarrow 0$ limit, which we can do by noticing that the laws of $(B^H,\\tilde{\\tau },X^x,X^{-x})$ are tight on the space S= C([0,1]){(a,g): a(0,1],gC([0,a])2} with the metric d((f,a,g),(f',a',g'))=f-f'C([0,1])+|a-a'|+g-g'C([0,aa'])2.", "Therefore similarly as above, we get a sequence $x_n\\rightarrow 0$ and on another probability space a sequence $(\\bar{B}^{H,x_n},\\bar{\\tilde{\\tau }}^{x_n},\\bar{X}^{x_n},\\bar{X}^{-x_n})\\overset{\\mathrm {law}}{=}(B^H,\\tilde{\\tau },X^{x_n},X^{-x_n})$ converging in $\\mathcal {S}$ almost surely.", "The limits $X^{0,+}:=\\lim \\bar{X}^{x_n}$ and $X^{0,-}:=\\lim \\bar{X}^{-x_n}$ both solve (REF ) with initial condition 0 and driving noise $B^{H,0}:=\\lim \\bar{B}^{H,x_n}$ .", "Moreover, $X^{0,+}_t\\ge \\delta t^\\gamma $ for $t\\le \\tilde{\\tau }^0:=\\lim \\bar{\\tilde{\\tau }}^{x_n}$ and $X^{0,-}_t\\le -\\delta t^\\gamma $ for $t\\le \\tilde{\\tau }^0$ .", "Since $\\tilde{\\tau }^0\\overset{\\mathrm {law}}{=}\\tilde{\\tau }$ , it is a.s. positive, and therefore the laws of $X^{0,+}$ and $X^{0,-}$ are mutually singular (for example on $C([0,1])$ after extending them as constants after $\\tilde{\\tau }^0$ ).", "We fix a filtered probability space $(\\Omega ,\\mathcal {F},\\mathbb {F}=(\\mathcal {F}_t)_{t\\in [0,1]},\\mathbb {P})$ such that $\\mathcal {F}_0$ is complete.", "We use the shortcut notation: $\\mathbb {E}_s Y =\\mathbb {E} [Y| \\mathcal {F}_s]$ .", "$B^H$ is a fractional Brownian motion (fBM) with dimension $d\\in \\mathbb {N}$ and Hurst parameter $H\\in (0,1)$ , that is, a centered continuous Gaussian process with covariance matrix $\\mathbb {E}(B^H_t\\otimes B^H_s)=\\tfrac{1}{2}\\big (|t|^{2H}+|s|^{2H}-|t-s|^{2H}\\big ) I$ , where $I$ is the $d\\times d$ identity matrix.", "We say that $B^H$ is a $\\mathbb {F}$ -fBM if it satisfies [56] (more precisely, it admits the representation () for a $\\mathbb {F}$ -Brownian motion (BM) $W$ ).", "For $H\\in (1,\\infty )\\setminus \\mathbb {N}$ , we define fBM-s and $\\mathbb {F}$ -fBM-s inductively via the identity $B^H_t=\\int _0^t B^{H-1}_s\\mathrm {d}s$ .", "Convenient constructions fractional Brownian motions can be found in e.g.", "[50] (for $H\\in (0,1)$ ) and [59] (for $H\\in (1,\\infty )\\setminus \\mathbb {N}$ ).", "An immediate consequence of the definition that for $0\\le s\\le t$ the conditional distribution of $B^H_t$ given $\\mathcal {F}_s$ is Gaussian with mean $\\mathbb {E}_sB^H_t$ and variance $c|t-s|^{2H}I$ with some positive constant $c$ that does not depend on $s$ and $t$ .", "Using this constant $c$ , we denote by $P_t$ the convolution with a $d$ -dimensional Gaussian density with mean 0 and variance $ctI$ .", "Therefore, for $0\\le s\\le t\\le 1$ , an $\\mathcal {F}_s$ -measurable random variable $X$ , it holds for any bounded measurable function $f$ that Es 
f(BHt+X)=P|t-s|2Hf(Es BHt+X).", "$L^m$ norms without further notation are understood with respect to $\\omega $ , that is, $\\Vert Y\\Vert _{L^m}=\\big (\\mathbb {E}|Y|^m\\big )^{1/m}$ .", "For conditional $L^m$ norms we use the notation $\\Vert Y\\Vert _{L^m|\\mathcal {F}_s}=\\big (\\mathbb {E}(|Y|^m|\\mathcal {F}_s)\\big )^{1/m}.$ Recall that for any $X,Y\\in L^m$ such that $Y$ is $\\mathcal {F}_s$ -measurable, one has almost surely X-Es XLm|Fs2X-YLm|Fs.", "Apart from the usual $L^m$ norms, will also use the norms $\\big \\Vert \\,\\Vert \\cdot \\Vert _{L^m|\\mathcal {F}_s}\\big \\Vert _{L^n}$ .", "We will always consider $n\\ge m$ , in which case the norm $\\big \\Vert \\,\\Vert \\cdot \\Vert _{L^m|\\mathcal {F}_s}\\big \\Vert _{L^n}$ is stronger than $\\Vert \\cdot \\Vert _{L^m}$ , but for $n=m$ they coincide.", "More generally, for $t\\ge s$ , $\\big \\Vert \\,\\Vert \\cdot \\Vert _{L^m|\\mathcal {F}_t}\\big \\Vert _{L^n}$ is stronger than $\\big \\Vert \\,\\Vert \\cdot \\Vert _{L^m|\\mathcal {F}_s}\\big \\Vert _{L^n}$ .", "Lebesgue spaces $L^q$ , $q\\in [1,\\infty ]$ are also used with respect to the temporal variable, this will be reflected in the notation $L^q_t$ .", "Similarly, function spaces in the variable $x\\in \\mathbb {R}^d$ are denoted by the subscript $x$ .", "For $\\alpha \\in \\mathbb {R}\\setminus \\mathbb {N}$ , by $C^\\alpha $ we mean the Hölder-Besov space $B^{\\alpha }_{\\infty ,\\infty }$ .", "For nonnegative integer $\\alpha $ , we denote by $C^\\alpha $ the space of bounded measurable functions whose all partial weak derivatives up to order $\\alpha $ are also essentially bounded and measurable.", "With this convention elements of $C^0$ are not necessarily continuous.", "The space of continuous functions equipped with the supremum norm is denoted by $C$ .", "Recall that for $\\alpha \\in (0,1)$ the space $C^\\alpha =B^\\alpha _{\\infty ,\\infty }$ coincides with the usual space of bounded $\\alpha $ -Hölder continuous functions, in particular it is trivial to extend the notion to Banach space-valued functions.", "This will be used in the following conventions: when writing spaces with respect to different variables, for example $L^q_t C^\\alpha _x L^m$ , this is to be read from left to right, i.e.", "it means $L^q\\big ([0,1], C^\\alpha (\\mathbb {R}^d,L^m(\\Omega ))\\big )$ (mind in particular that with this convention $C^\\alpha _t C^\\alpha _x\\ne C^\\alpha _{t,x}$ !", ").", "By $C^{\\alpha ,\\mathrm {loc}}_x$ we mean the space of functions $f$ such that for all compactly supported smooth $g$ one has $fg\\in C^\\alpha _x$ .", "More quantitative versions of them are the weighted Hölder spaces $C^{\\alpha ,\\lambda }_x$ , for $\\alpha \\in (0,1]$ and $\\lambda \\in \\mathbb {R}$ , defined through the (semi)norms fC,x:=|f(0)|+fC,x:=|f(0)|+R1 xyBR |f(x)-f(y)||x-y|  R, where $B_R$ is the ball of radius $R$ around the origin.", "Increments of functions on $[0,1]$ are denoted by $f_{s,t}:=f_t-f_s$ .", "Given $p\\in [1,\\infty )$ , we say that a continuous finite dimensional vector-valued function $f$ on $[0,1]$ is of finite $p$ -variation, in notation $f\\in C^{p-{\\mathrm {var}}}_t$ , if fp-varp:=i=1n|fti-1,ti|p<, where the supremum runs over all possible partitions $0=t_0\\le t_1\\le \\cdots \\le t_n=1$ of $[0,1]$ .", "The $p$ -variation seminorm on subintervals $[s,t]\\subset [0,1]$ is defined in the obvious way and is denoted by $\\llbracket \\cdot \\rrbracket _{p-{\\mathrm {var}};[s,t]}$ .", "Let us recall the standard heat kernel estimates: for any $\\alpha 
\\ge \\beta $ there exists a constant $N=N(d,H,\\alpha ,\\beta )$ such that for all $t\\in (0,1]$ one has the bound, PtfCxN t(-)/2fCx.", "For $0\\le S\\le T\\le 1$ , we denote $[S,T]^2_\\le =\\lbrace (s,t)\\in [S,T]^2:\\,s\\le t\\rbrace $ .", "For $(s,t)\\in [S,T]^2_\\le $ , denote $s_-=s-(t-s)$ .", "We then set the slightly more restricted sets of pairs/triples as $\\overline{[S,T]}^2_\\le =\\lbrace (s,t)\\in [S,T]^2_\\le :\\,s_-\\ge S\\rbrace $ , $[S,T]^3_\\le =\\lbrace (s,u,t)\\in [S,T]^3:\\,s\\le u\\le t\\rbrace $ , and $\\overline{[S,T]}^3_\\le =\\lbrace (s,u,t)\\in [S,T]^3_\\le :\\,(u-s)\\wedge (t-u)\\ge (t-s)/3,\\,s_-\\ge S\\rbrace $ .", "We say that a function $w:[0,1]^2_\\le \\rightarrow \\mathbb {R}_+ $ is a control if it is continuous and superadditive, i.e.", "$w(s,u)+w(s,t)\\le w(s,t)$ for all $(s,u,t)\\in [S,T]^3_\\le $ .", "The most common controls for us will be of the form wb,,q(s,t):=stbrCq dr. Recall that for any two controls $w_1,w_2$ and $\\theta _1,\\theta _2\\in [0,\\infty )$ such that $\\theta _1+\\theta _2\\ge 1$ , $w=w_1^{\\theta _1}w_2^{\\theta _2}$ is also a control (see [34]).", "Note also that if $w$ is a control and $\\alpha \\in (0,1]$ , then (1/)-varw(0,1)0s<t1|s,t|w(s,t).", "Conversely, for $p\\ge 1$ , if $\\psi \\in C^{p-{\\mathrm {var}}}$ then $w(s,t)=\\llbracket \\psi \\rrbracket _{p-{\\mathrm {var}}; [s,t]}^p$ is a control and trivially $|\\psi _{s,t}|\\le w(s,t)^{1/p}$ .", "The space of probability measures on $\\mathbb {R}^d$ is denoted by $\\mathcal {P}(\\mathbb {R}^d)$ .", "The law of a random variable $X$ is denoted by $\\mathcal {L}(X)$ .", "For $p\\ge 1$ we denote the $p$ -Wasserstein distance on $\\mathcal {P}(\\mathbb {R}^d)$ by $\\mathbb {W}_p$ , defined as Wp(,)p=(,)RdRd|x-y|p(dx,dy), where $\\Gamma (\\mu ,\\nu )$ is the set of all probability measures on $\\mathbb {R}^{d}\\times \\mathbb {R}^d$ whose first and second marginals are $\\mu $ and $\\nu $ , respectively.", "When a statement contains an estimate with a constant depending on a certain set of parameters, in the proof we do not carry the constants from line to line.", "Rather, we write $A\\lesssim B$ to denote the existence of a constant $N$ depending on the same set of parameters such that $A\\le N B$ .", "Whenever such set of parameters includes a parameter that is a norm (this will typically be the norm of the coefficient $b$ ), this dependence is always monotone increasing." 
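Since products of controls appear repeatedly below, let us record a short verification of the fact recalled above that $w=w_1^{\theta_1}w_2^{\theta_2}$ is again a control whenever $\theta_1+\theta_2\ge 1$ (a sketch of the standard argument, cf. [34]); continuity is immediate, so only superadditivity over $(s,u,t)\in[0,1]^3_\le$ needs checking:

\begin{align*}
w_1(s,u)^{\theta_1}w_2(s,u)^{\theta_2}+w_1(u,t)^{\theta_1}w_2(u,t)^{\theta_2}
 &\le \big(w_1(s,u)+w_1(u,t)\big)^{\theta_1}\big(w_2(s,u)+w_2(u,t)\big)^{\theta_2}\\
 &\le w_1(s,t)^{\theta_1}\,w_2(s,t)^{\theta_2}.
\end{align*}

Here the second inequality is the superadditivity of $w_1$ and $w_2$, while the first one is Hölder's inequality for two-term sums when $\theta_1+\theta_2=1$; for $\kappa:=\theta_1+\theta_2>1$ one first applies the elementary bound $x^\kappa+y^\kappa\le(x+y)^\kappa$ with $x=w_1(s,u)^{\theta_1/\kappa}w_2(s,u)^{\theta_2/\kappa}$ and $y=w_1(u,t)^{\theta_1/\kappa}w_2(u,t)^{\theta_2/\kappa}$, and then the previous case with the exponents $\theta_i/\kappa$.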
], [ "A priori estimates and stochastic sewing", "The key consequence of the subcriticality condition () is that in terms of local nondeterminism, drifts of solutions are more regular than the noise.", "This can be formulated as follows, adapting [40], [9] to the present setting.", "Note that the restriction $q\\le 2$ here is not necessary.", "We however do restrict to $\\alpha \\ge 0$ first, for distributional drift the analogous bound will have to be derived from stochastic sewing, see Lemma REF below.", "Lemma 2.1 Let $H\\in (0,\\infty )\\setminus \\mathbb {N}$ , $q\\in [1,\\infty )$ , and $\\alpha \\in [0,1]$ satisfy $\\alpha >1-1/(q^{\\prime }H)$ .", "Take $b\\in L^q_t C^\\alpha _x$ and set $w=w_{b,\\alpha ,q}$ .", "Then for any $m\\in [1,\\infty )$ there exists a constant $N=N(d,H,\\alpha ,m,\\Vert b\\Vert _{L^q_t C^\\alpha _x} )$ such that any solution $X=\\varphi +B^H$ of (REF ) satisfies the bound t-EstLm|FsLNw(s,t)1/q|t-s|1/q'+H for all $0\\le s\\le t\\le 1$ , Assume that for some $\\beta $ the bound (REF ) holds with $\\beta $ in place of $1/q^{\\prime }+\\alpha H$ .", "This is definitely the case with $\\beta =0$ , as one sees from t-EstLm|FsL2t-sLm|FsL2w(s,t)1/q.", "By (REF ) it holds that $\\Vert \\varphi _t -\\mathbb {E}_s \\varphi _t \\Vert _{L^m|\\mathcal {F}_s}& \\le 2 \\Big \\Vert \\varphi _t - \\varphi _s - \\int _s^t b_r ( \\mathbb {E}_s \\varphi _r +\\mathbb {E}_s B^H_r) \\mathrm {d}r \\Big \\Vert _{L^m|\\mathcal {F}_s}\\\\& \\le 2 \\int _s^t \\big \\Vert b_r (\\varphi _r + B^H_r) - b_r (\\mathbb {E}_s \\varphi _r +\\mathbb {E}_s B^H_r) \\big \\Vert _{L^m|\\mathcal {F}_s} \\mathrm {d}r\\\\& \\le 2 \\int _s^t \\Vert b_r \\Vert _{C^{\\alpha }_x} \\big \\Vert \\varphi _r -\\mathbb {E}_s \\varphi _r + B^H_r -\\mathbb {E}_s B^H_r \\big \\Vert _{L^m|\\mathcal {F}_s}^{\\alpha }\\mathrm {d}r.$ By the independence of $B_r^H-\\mathbb {E}_s B^H_r$ and $\\mathcal {F}_s$ , we have $\\Vert B^H_r -\\mathbb {E}_s B^H_r \\Vert _{L^m|\\mathcal {F}_s}\\lesssim |r-s|^{ H}$ with probability 1.", "As for the term $\\varphi _r -\\mathbb {E}_s \\varphi _r$ , we use our assumption on $\\varphi $ , so after taking $L^\\infty $ norms and using Hölder's inequality for the integral, we get t -Es t Lm|FsL w(s,t)1/q(|t-s|H+1/q'+w(s,t)/q|t-s|+1/q').", "It remains to note that the condition $\\alpha >1-1/(q^{\\prime }H)$ guarantees that starting from $\\beta =0$ a finite number of iterations the map $\\beta \\mapsto \\alpha \\beta +1/q^{\\prime }$ exceeds $H$ .", "Remark 2.2 The case $m=\\infty $ can be handled with an appropriate stopping argument, for details see [40].", "It can also be useful to have similar bounds for processes that are not exactly solutions (for example Picard iterates), but we do not need this generality.", "The next ingredient is the aforementioned a priori estimate in the case $\\alpha <0$ , analogous to Lemma REF .", "Recall that for any adapted process $\\varphi $ one has $\\big \\Vert \\Vert \\varphi _t-\\mathbb {E}_s\\varphi _t\\Vert _{L^m|\\mathcal {F}_s}\\big \\Vert _{L^\\infty }\\le 2\\big \\Vert \\Vert \\varphi _{s,t}\\Vert _{L^m|\\mathcal {F}_s}\\big \\Vert _{L^\\infty }$ , and in the distributional case we will directly bound the latter quantity.", "Unlike Lemma REF , here we can not extend for any $q\\in (2,\\infty ]$ , $\\alpha $ subcritical, rather we impose the following stronger assumption $H\\in (0,1),\\quad q\\in (1,\\infty ],\\quad \\alpha >\\frac{1}{2}-\\frac{1}{2H}, \\quad \\alpha >1-\\frac{1}{H q^{\\prime }}.\\qquad \\mathrm {(B)}$ Remark 2.3 As 
mentioned, condition (REF ) will be only used for treating $\\alpha <0$ , so $H<1$ is not a real restriction as it follows from the first condition on $\\alpha $ .", "Note further that in the case $q\\in (1,2]$ , (REF ) reduces to ().", "In the case $q\\in (2,\\infty )$ the a priori estimate below will be relevant also later in Section , where we establish existence of weak solutions in a regime where their uniqueness is not known.", "Lemma 2.4 Assume (REF ) and in addition $\\alpha <0$ .", "Let $b\\in L^q_tC^1_x$ and with some initial condition $x_0\\in \\mathbb {R}^d$ let $X$ be the unique strong solution to (REF ).", "Set $w:=w_{b,\\alpha ,q}$ and $\\varphi :=X-B^H$ .", "Then for any $m\\in [2,\\infty )$ there exists a constant $N=N(m,d,\\alpha ,q,H,\\Vert b\\Vert _{L^q_t C^\\alpha _x})$ such that for all $(s,t)\\in [0,1]_\\le ^2$ one has the bound $\\big \\Vert \\Vert \\varphi _{s,t}\\Vert _{L^m\\vert \\mathcal {F}_s}\\big \\Vert _{L^\\infty } \\le N w(s,t)^{1/q} |t-s|^{\\alpha H + 1/q^{\\prime }}.$ Up to shifting, we can assume without loss of generality $x_0=0$ .", "Fix $m\\in [2,\\infty )$ , set the shorthand $\\beta :=\\alpha H +1/q^{\\prime }$ .", "Note that by (REF ) one has $\\beta >H$ .", "Given any closed subinterval $I\\subset [0,1]$ , define $\\llbracket \\varphi \\rrbracket _{\\beta ,I} := \\sup _{s,t\\in I, s<t} \\frac{ \\big \\Vert \\Vert \\varphi _{s,t}\\Vert _{L^m\\vert \\mathcal {F}_s} \\big \\Vert _{L^\\infty }}{|t-s|^\\beta w(s,t)^{1/q}}\\,.$ Since $b\\in L^q_t C^1_x$ and $\\beta <1/q^{\\prime }$ because of $\\alpha <0$ , it is clear that $\\llbracket \\varphi \\rrbracket _{\\beta ,I}$ is finite.", "Fix $(s,t)\\in [0,1]_\\le ^2$ and for any $(s^{\\prime },t^{\\prime })\\in [s,t]_\\le ^2$ set $A_{s^{\\prime },t^{\\prime }}:=\\mathbb {E}_{s^{\\prime }} \\int _{s^{\\prime }}^{t^{\\prime }} b_r(\\varphi _{s^{\\prime }}+B^H_r)\\mathrm {d}r.$ We can therefore apply the stochastic sewing lemma (in the version given by [33]) to $A$ in order to find a closed estimate for $\\llbracket \\varphi \\rrbracket _{\\beta ,I}$ .", "We have almost surely | As',t'| s't' P|r-s'|2H brC0 dr s't' |r-s'|H brC dr |t'-s'|w(s',t')1/q, where the $L^{q^{\\prime }}$ integrability of $|r-s|^{\\alpha H}$ follows from (REF ).", "Similarly, we have almost surely $| \\mathbb {E}_{s^{\\prime }} \\delta A_{s^{\\prime },u^{\\prime },t^{\\prime }} |& = \\bigg | \\mathbb {E}_{s^{\\prime }} \\mathbb {E}_{u^{\\prime }} \\int _{u^{\\prime }}^{t^{\\prime }} b_r(\\varphi _{s^{\\prime }}+B_r)- b_r(\\varphi _{u^{\\prime }} + B_r) \\mathrm {d}r \\bigg |\\\\& \\lesssim \\bigg | \\int _{u^{\\prime }}^{t^{\\prime }} |r-u^{\\prime }|^{H(\\alpha -1)} \\Vert b_r\\Vert _{C^\\alpha } \\mathrm {d}r\\, \\mathbb {E}_{s^{\\prime }}| \\varphi _{s^{\\prime },u^{\\prime }} | \\bigg |\\\\& \\lesssim |t^{\\prime }-s^{\\prime }|^{2\\beta -H} w(s^{\\prime },t^{\\prime })^{2/q} \\llbracket \\varphi \\rrbracket _{\\beta ,[s,t]}.$ The integrability of the power follows again from (REF ), as do the inequalities $\\beta +1/q>1/2$ , $2\\beta -H +2/q>1$ (we remark that it is only the latter for which the additional condition in (REF ) was introduced).", "Therefore the stochastic sewing lemma [33] applies and one can easily identify the process $\\mathcal {A}_\\cdot $ with $\\varphi _{s,\\cdot }$ .", "Indeed, due to the spatial regularity of $b$ , it is easy to get a bound s',t'-As',t'LmK (t'-s')wb,1,1(s',t') with some constants $K$ and $\\varepsilon >0$ .", "This is more than enough to conclude $\\mathcal {A}_{\\cdot }=\\varphi _{s,\\cdot 
}$ .", "We can therefore conclude that there exists a constant $N_0=N_0(m,d,\\alpha ,q,H)$ such that the following bound holds $\\big \\Vert \\Vert \\varphi _{s,t}\\Vert _{L^m\\vert \\mathcal {F}_s} \\big \\Vert _\\infty \\le N_0 |t-s|^\\beta w(s,t)^{1/q} + N_0 |t-s|^{2\\beta -H} w(s,t)^{2/q} \\llbracket \\varphi \\rrbracket _{\\beta ,[s,t]}.$ Define now another control $w_\\ast $ by $w_\\ast (s,t)^{1/q+\\beta -H}=w(s,t)^{1/q} |t-s|^{\\beta -H}$ and define an increasing sequence $t_n$ by $t_0=0$ and $w_\\ast (t_n,t_{n+1})^{1/q+\\beta -H}=(2N_0)^{-1}$ .", "Then dividing by $|t-s|^\\beta w(s,t)^{1/q}$ in (REF ) and taking the supremum over $[s,t]\\subset [t_n,t_{n+1}]$ , we obtain $\\llbracket \\varphi \\rrbracket _{\\beta ,[t_n,t_{n+1}]}\\le N_0 + N_0 w_\\ast (t_n,t_{n+1})^{1/q+\\beta -H} \\llbracket \\varphi \\rrbracket _{\\beta ,[t_n,t_{n+1}]}.$ which readily implies $\\llbracket \\varphi \\rrbracket _{\\beta ,[t_n,t_{n+1}]} \\le 2N_0$ .", "If $t_1=1$ , this immediately yields the conclusion.", "Suppose this is not the case, then for any pair $s<t$ which do not belong to the same subinterval $[t_n,t_{n+1}]$ , there exist $\\ell ,m\\in \\mathbb {N}$ such that $t_{\\ell -1}< s\\le t_\\ell \\le \\ldots \\le t_m\\le t<t_{m+1}$ .", "Set $\\tau _{\\ell -1}=s$ , $\\tau _i=t_i$ for $i=\\ell ,\\ldots , m$ and $\\tau _{m+1}=t$ .", "It holds $\\big \\Vert \\Vert \\varphi _{s,t}\\Vert _{L^m\\vert \\mathcal {F}_s}\\big \\Vert _{L^\\infty }& \\le \\sum _{i=\\ell -1}^{m} \\big \\Vert \\Vert \\varphi _{\\tau _i,\\tau _{i+1}}\\Vert _{L^m\\vert \\mathcal {F}_s} \\big \\Vert _{L^\\infty }\\le \\sum _{i=\\ell -1}^{m} \\big \\Vert \\Vert \\varphi _{\\tau _i,\\tau _{i+1}}\\Vert _{L^m\\vert \\mathcal {F}_{t_i}} \\big \\Vert _{L^\\infty }\\\\& \\lesssim \\sum _{i=\\ell -1}^{m} w(\\tau _i,\\tau _{i+1})^{1/q} |\\tau _i-\\tau _{i+1}|^\\beta \\\\& \\le (m+1-\\ell )^{-\\alpha H} \\Big ( \\sum _{i=\\ell -1}^{m} \\big [w(\\tau _i,\\tau _{i+1})^{1/q} |\\tau _i-\\tau _{i+1}|^\\beta \\big ]^{\\frac{1}{1+\\alpha H}} \\Big )^{1+\\alpha H}\\\\& \\le (m+1-\\ell )^{-\\alpha H} w(s,t)^{1/q} |t-s|^\\beta $ where in the last two passages we used the fact that $\\beta +1/q=1+\\alpha H\\in (0,1)$ , Jensen's inequality and the superadditivity of $[w(s,t)^{1/q} |t-s|^\\beta ]^{\\frac{1}{1+\\alpha H}}$ .", "Observe that $m+1-\\ell $ is controlled by the overall amount of intervals $[t_n,t_{n+1}]$ .", "In turn, by their definition and subadditivity of $w_\\ast $ , this is bounded by a multiple of $ w_\\ast (0,1)=w(0,1)^{(\\alpha H + H-1)^{-1}/q}=\\Vert b\\Vert _{L^q C^\\alpha }^{(\\alpha H + H-1)^{-1}}$ which readily yields the conclusion.", "Next we formulate two appropriate versions of the stochastic sewing lemma (SSL).", "Invented in the work of Lê [47], the SSL has seen many variations in recent years.", "Our first SSL combines three modifications: it incorporates shifting (as in [40]), as well as controls and general $\\big \\Vert \\,\\Vert \\cdot \\Vert _{L^m|\\mathcal {F}_s}\\big \\Vert _{L^n}$ norms (as in [33], [48]).", "Let us remark however this combination is not completely trivial and is not without a price: due to the shifting, we need a nontrivial “time component” in our estimates, which does not appear in [33], [48].", "Recall the notations introduced in Section .", "Lemma 2.5 Let $w_1,w_2$ be controls, and let $m,n$ satisfy $2\\le m\\le n\\le \\infty $ and $m<\\infty $ .", "Let $(S,T)\\in [0,1]_\\le $ .", "Assume that $(A_{s,t})_{(s,t)\\in \\overline{[S,T]}^2_\\le }$ is a continuous mapping from 
$\\overline{[S,T]}^2_\\le $ to $L^m$ such that for all $(s,t)\\in \\overline{[S,T]}^2_\\le $ , $A_{s,t}$ is $\\mathcal {F}_t$ -measurable.", "Suppose that there exist constants $\\varepsilon _1,\\varepsilon _2>0$ such that the bounds As,tLm|FsLnw1(s-,t)1/2|t-s|1, Es-As,u,tLnw2(s-,t)|t-s|2 hold for all $(s,u,t)\\in \\overline{[S,T]}^3_\\le $ .", "Then for all $S<s\\le t\\le T$ the Riemann sums j=02-1 As+j2-(t-s),s+(j+1)2-(t-s) converge as $\\ell \\rightarrow \\infty $ in $L^m$ , to the increments $\\mathcal {A}_t-\\mathcal {A}_s$ of an adapted stochastic process $(\\mathcal {A}_t)_{t\\in [S,T]}$ that is continuous as a mapping from $[S,T]$ to $L^m$ and $\\mathcal {A}_S=0$ .", "Moreover $\\mathcal {A}$ is the unique such process that satisfies the bounds At-As-As,tLm|FsLnK1 w1(s-,t)1/2|t-s|1+K2 w2(s-,t)|t-s|2, Es-(At-As-As,t)LnK2 w2(s-,t)|t-s|2, with some $K_1,K_2$ for all $(s,u,t)\\in \\overline{[S,T]}^3_\\le $ .", "Furthermore, there exist a constant $K$ depending only on $\\varepsilon _1,\\varepsilon _2,m,n,d$ such that the bounds (REF )-(REF ) hold with $K_1=K_2=K$ , and moreover the bound At-AsLm|FsLnK( w1(s,t)1/2|t-s|1+w2(s,t)|t-s|2) holds for all $(s,t)\\in [S,T]^2_\\le $ .", "[Proof (Sketch)] Since by the time of the present work there is an abundance of SSLs in the recent literature, we do not aim to give a fully self-contained proof.", "We only provide the details as long as the combination of the arguments of [40] and [33], [48] is nontrivial.", "Step 1 (convergence along dyadic partitions).", "Let $(s,t)\\in \\overline{[S,T]}_\\le ^2$ and for each $k=0,1,\\ldots $ define $\\mathcal {D}_k=\\lbrace t_0^k,t_1^k,\\ldots ,t_{2^k}^k\\rbrace $ , where $t_i^k=s+i2^{-k}(t-s)$ , and set Aks,t=i=12kAti-1k,tik.", "We claim that $\\mathcal {A}^k_{s,t}$ converges and its limit $\\tilde{\\mathcal {A}}_{s,t}$ satisfies the bounds (REF )-(REF ) with $K=K_1=K_2$ when replacing $\\mathcal {A}_t-\\mathcal {A}_s$ by it.", "In particular, this would also imply the bound As,tLm|FsLnK( w1(s-,t)1/2|t-s|1+w2(s-,t)|t-s|2) for all $(s,t)\\in \\overline{[}S,T]^2_\\le $ .", "The claim clearly follows from the following two bounds: Ak-1 s,t -Ak s,t Lm|FsLn w1(s-,t)1/2|t-s|12-k1+w2(s-,t)|t-s|22-k2, Es- ( Ak-1 s,t -Ak s,t ) Ln w2(s-,t)|t-s|22-k2.", "It is no loss of generality to assume $k\\ge 2$ (otherwise the trivial bounds below suffice), in which case we write Ak+1s,t-Aks,t=-At0k,t1k,t2k-j=12k-1-1At2jk,t2j+1k,t2j+2k.", "For the first term we have use the conditions (REF )-(REF ) in a trivial way:  At0k,t1k,t2kLm|FsLn w1(t0k-(t2k-t0k),t2k)1/2|t2k-t0k|1w1(s-,t)1/2|t-s|12-k1,  Es- At0k,t1k,t2kLn w2(t0k-(t2k-t0k),t2k)|t2k-t0k|2w2(s-,t)|t-s|22-k2.", "For the sum in () we write j=12k-1-1At2jk,t2j+1k,t2j+2k= j=12k-1-1Et2j-2kAt2jk,t2j+1k,t2j+2k       +=01j=02k-2(id-Et4j+2k)At4j+2+2k,t4j+2+3k,t4j+2+4k =:I1+I2, where the term $\\delta A_{t_{2^k}^k,t_{2^k+1}^k,t_{2^k+2}^k}$ is defined to be 0.", "The point of this unaesthetic decomposition is twofold.", "First, since $t^k_{2j-2}=t^k_{2j}-(t^k_{2j+2}-t^k_{2j})$ , in the terms in the first sum there is sufficient shifting in the conditioning so that they can be estimated via the assumed bound (REF ).", "Second, for each $\\ell =0,1$ , the inner sum above is one of martingale differences.", "Therefore, we first estimate by the triangle inequality I1Lm|FsLnj=12k-1-1Et2j-2kAt2jk,t2j+1k,t2j+2kLm|FsLn j=12k-1-1Et2jk-(t2j+2k-t2jk)At2jk,t2j+1k,t2j+2kLn j=12k-1-1 w2(t2j-2k,t2j+2k)|t2j+2k-t2jk|2 |t-s|22-k2w2(s,t), using the superadditivity of $w_2$ in the last line.", "Similarly, 
but replacing the triangle inequality by the Burkholder-Davis-Gundy and Minkowski inequalities, in the form given in [33], we have I2Lm|FsLn=01(j=02k-2At4j+2+2k,t4j+2+3k,t4j+2+4kLm|FsLn2)1/2 2-k1=01 (j=02k-2w1(t4j+2k,t4j+2+4k))1/2 |t-s|12-k1w1(s,t)1/2.", "This proves ().", "As for (), it is only easier: noting that Esj=12k-1-1At2jk,t2j+1k,t2j+2k=Es I1, we can bound $\\Vert \\mathbb {E}_sI_1\\Vert _{L^n}\\le \\Vert I_1\\Vert _{L^n}$ just as in ().", "This concludes the proof of ()-().", "Step 2 (convergence along regular partitions).", "Let us say that a partition $\\pi =\\lbrace s=t_0<t_1<\\cdots <t_n=t\\rbrace $ is regular, if $|\\pi |:=\\max (t_i-t_{i-1})\\le 2\\min (t_i-t_{i-1})$ .", "For any partition we can define As,t=i=1n Ati-1,ti.", "Very similarly to Step 1, we get that for any sequence of regular partitions $(\\pi _n)_{n\\in \\mathbb {N}}$ with $|\\pi _n|\\rightarrow 0$ , $\\mathcal {A}^{\\pi }_{s,t}$ converges (for details see [40]).", "Therefore on one hand this limit has to coincide with $\\tilde{\\mathcal {A}}_{s,t}$ , on the other hand, this limit is clearly additive.", "Moreover notice that by construction $\\tilde{\\mathcal {A}}_{s,t}$ is $\\mathcal {F}_t$ -measurable for all $(s,t)\\in \\overline{[S,T]}_\\le ^2$ , and since it vanishes in $L^m$ , the additivity implies that it is continuous in both arguments as a two-parameter process with values in $L^m$ .", "Step 3 (the process $\\mathcal {A}$ and its bounds).", "For any $t\\in (S,T]$ we set $t_i:=S+2^{-i}(t-S)$ .", "We then claim that the series At:=i=1A(S+2-i)t,(S+2-i+1)t=:i=1Asi,si-1 converges.", "Indeed, since $(s_i,s_{i-1})\\in \\overline{[S,T]}^2_\\le $ , we may use the bound ().", "By the trivial bounds $w((s_i)_-,s_{i-1})\\le w(S,t)$ and $|s_{i-1}-s_i|\\le 2^{-i}\\mathbf {1}_{t-S\\ge 2^{-i}}$ , we get not only the convergence of the series but also the bound AtLm|FSLnK( w1(S,t)1/2|t-S|1+w2(S,t)|t-S|2).", "This is precisely (REF ) with $s=S$ .", "The case for general $(s,t)\\in [S,T]_{\\le }^2$ follows in the same way.", "It is also clear that $\\mathcal {A}_0=0$ , and by the remarks in Step 2, that $\\mathcal {A}$ is adapted and continuous in $L^m$ .", "Therefore $\\mathcal {A}$ satisfies all of the claimed properties.", "Step 4 (Uniqueness) The proof of this is standard and can be found in e.g.", "[48].", "The other version of SSL that we use seems to be new.", "In Lemma REF one can transfer $L^m$ bounds from $A$ to $\\mathcal {A}$ if $m<\\infty $ .", "The $m=\\infty $ case is a bit different: $L^\\infty $ bounds on $A$ imply Gaussian moment bounds on $\\mathcal {A}$ .", "An alternative way to obtain Gaussian moment bounds via stochastic sewing is presented in [8] (see e.g.", "Theorem 3.3. and Lemma 4.6. 
therein), but the conditions herein are easier to verify.", "The proof relies on a conditional version of Azuma–Hoeffding inequality, see Lemma REF in Appendix .", "Lemma 2.6 Let the conditions of Lemma REF hold with $m=n=\\infty $ .", "Then there exists positive constants $\\mu $ and $K$ depending only on $\\varepsilon _1,\\varepsilon _2,d$ such that the bound E[( |At-As|2(w1(s,t)1/2|t-s|1+w2(s,t)|t-s|2)2)Fs]K holds for all $(s,t)\\in [S,T]_{\\le }^2$ .", "We continue using the notation of the proof of Lemma REF .", "Let $(s,t)\\in \\overline{[S,T]}_\\le ^2$ and $k=0,1,\\ldots $ , and let us bound $\\mathcal {A}_{s,t}^{k+1}-\\mathcal {A}_{s,t}^k$ .", "The first term on the right-hand side of () is trivially bounded by $2w_1(s_-,t)^{1/2}|t-s|^{\\varepsilon _1}2^{-k\\varepsilon _1}$ with probability 1.", "Decomposing the second term into $I_1$ and $I_2$ as in (), a simple use of triangle inequality as in () yields the almost sure bound |I1|2-k2|t-s|2w2(s,t).", "As for $I_2$ , recalling that it is the sum of two martingales, for each we may use the Azuma-Hoeffding inequality.", "The role of $\\delta _j$ as in Lemma REF is played by $4w_1(t_{4j+2\\ell }^k,t_{4j+2\\ell +4}^k)^{1/2}$ , so similarly to the calculation as in (), we get :=i i2 2-2k1|t-s|21w1(s,t).", "Therefore by (REF ), combined with the aforementioned almost sure bounds, we get that with some $\\mu _1>0$ , $K_1$ E[(12k(12)|As,tk+1-As,tk|2(w1(s-,t)1/2|t-s|1+w2(s-,t)|t-s|2)2)FS]K1.", "Since one can write |(At-As)-As,t|k=02-k(12)2k(12)|As,tk+1-As,tk|, we get by conditional Jensen's inequality E[(1|(At-As)-As,t|2(w1(s-,t)1/2|t-s|1+w2(s-,t)|t-s|2)2)FS]k=02-k(12)K1.", "Using again the assumed bounds on $A_{s,t}$ , we get with some other constant $K_2$ E[(1|At-As|2w1(s-,t)1/2|t-s|1+w2(s-,t)|t-s|2)FS]K2.", "It only remains to remove the shifts in the denominator and substitute $\\mathcal {F}_S$ with $\\mathcal {F}_s$ , which can be done just as in Step 3 of the proof of Lemma REF , and therefore we obtain (REF )." ], [ "Stability", "The use of the tools from Section is illustrated by the following lemma, which will play a key role in our later analysis.", "Let us emphasise the important feature of the statement that although $h$ is assumed to have $\\delta $ spatial regularity, in the estimate only its $\\alpha -1$ norm is used.", "Lemma 3.1 Assume () and let $(S,T)\\subset [0,1]_\\le ^2$ .", "Suppose that $h\\in L^q_t C^\\delta _x$ for some $\\delta >0$ and let $\\varphi $ be an adapted process satisfying (REF ) with $m=1$ , some control $w$ , and $N=1$ .", "For $t\\in [S,T]$ , define the process t=St hr(BHr+r)dr and set $\\varepsilon =1/q^{\\prime }+(\\alpha -1)H$ .", "Then there exists positive constants $\\mu $ and $K$ , depending only on $H$ , $q$ , $\\alpha $ , and $d$ , such that for all $(s,t)\\in [S,T]^2_\\le $ one has the bound E[ ( |t-s|2wh,-1,q(s,t)2/q|t-s|2 (1+w(s,t)1/q|t-s|)2)Fs]K. 
As a consequence, for any $\\widetilde{m}\\in [1,\\infty )$ there exists a constant $\\tilde{K}$ , depending only on $\\tilde{m}$ , $H$ , $q$ , $\\alpha $ , and $d$ , such that for all $(s,t)\\in [S,T]^2_\\le $ one has the bound t-sLm|FsLK wh,-1,q(s,t)1/q|t-s|(1+w(s,t)1/q|t-s|).", "Note that thanks to the condition (), $\\varepsilon >0$ .", "For $(s,t)\\in \\overline{[S,T]}^2_\\le $ let us set As,t=Es-(t-s)st hr(BHr+Es-(t-s)r)dr, and verify the conditions of Lemma REF (which are in fact the conditions of Lemma REF with $m=n=\\infty $ ).", "Fix $(s,u,t)\\in \\overline{[S,T]}_\\le ^3$ and denote $s_1=s-(t-s)$ , $s_2=s-(u-s)$ , $s_3=u-(t-u)$ , $s_4=s$ , $s_5=u$ , $s_6=t$ .", "These points are almost ordered according to their indices, except $s_3$ and $s_4$ , for which $s_4\\le s_3$ may happen, but this plays no role whatsoever.", "First, we have As,t=st P|r-s1|2Hhr(Es1(BHr+r))dr.", "Therefore, by (REF ) and Hölder's inequality, we have |As,t|st P|r-s1|2HhrC0xdrst|r-s1|(-1) HhrC-1xdr |t-s|1/q'+(-1) Hwh,-1,q(s,t)1/q.", "The exponent $1/q^{\\prime }+(\\alpha -1) H$ is by definition $\\varepsilon $ .", "Since $q\\le 2$ , (REF ) is satisfied with $\\varepsilon _1=\\varepsilon $ and $w_1=N w_{h,\\alpha -1,q}^{2/q}$ .", "Next, we need to bound $\\mathbb {E}_{s-(t-s)}\\delta A_{s,u,t}=\\mathbb {E}_{s_1}\\delta A_{s_4,s_5,s_6}$ .", "After an elementary rearrangement we get Es1As4,s5,s6=I+J:=Es1Es2s4s5 hr(BHr+Es1r)-hr(BHr+Es2r)dr    +Es1Es3s5s6 h(BHr+Es1r)-hr(BHr+Es3r)dr.", "The two terms are treated in exactly the same way, so we only detail $I$ .", "We use (REF ) similarly as before to get |I|Es1s4s5|P|r-s2|2Hhr(Es2BHr+Es1r)-P|r-s2|2Hhr(Es2BHr+Es2r)|dr Es1s4s5P|r-s2|2H hrC1x|Es1r-Es2r|dr Es1s4s5|r-s2|(-2)HhrC-1x|Es1r-Es2r|dr.", "By Jensen's inequality and the assumption on $\\varphi $ we have the almost sure bound Es1|Es1r-Es2r|Es1|Es1r-r|w(s1,r)1/q|t-s|1/q'+H.", "Also note that $r\\mapsto |r-s_2|^{(\\alpha -2)H}\\in L^{q^{\\prime }}([s_4,s_5])$ because of the shifted basepoint, in general this would not be true with $s_2$ replaced by $s_4$ .", "Therefore, by Hölder's inequality |I||t-s|1/q'+(-2)H+1/q'+Hwh,-1,q(s,t)1/qw(s1,t)1/q.", "Note that the exponent of $|t-s|$ is simply $2\\varepsilon $ .", "Using again that $q\\le 2$ , we see that condition (REF ) is satisfied with $\\varepsilon _2=2\\varepsilon $ and $w_2=N w_{h,\\alpha -1,q}(s,t)^{1/q}w(s_1,t)^{1/q}$ .", "It remains to verify that the process $\\mathcal {A}$ of Lemma REF is given by $\\psi $ .", "Since $\\psi _0=0$ , it suffices to show that t-s-As,tL1w(s-,t)|t-s| for all $(s,t)\\in \\overline{[S,T]}^2_{\\le }$ , with some control $\\tilde{w}$ and some $\\kappa >0$ .", "This follows from three easy bounds: first, t-s-st hr(BHr+Es-r)drL1 sthrCx w(s-,r)/qdr wh,,q(s,t)1/q|t-s|1/q'w(s-,t)/q, second, st hr(BHr+Es-r)dr-st hr(Es-BHr+Es-r)drL1 sthrCx |r-s-|Hdr wh,,q(s,t)1/q|t-s|1/q'+H, and third, st hr(Es-BHr+Es-r)dr-As,tL1 sthr-P|r-s-|2HhrC0x dr wh,,q(s,t)1/q|t-s|1/q'+H.", "Hence we can conclude $\\psi =\\mathcal {A}$ and (REF ) follows from (REF ).", "We will often consider (REF ) with nonzero initial time.", "If $b$ is a function, a solution of (REF ) on some interval $[S,T]\\subset [0,1]$ with initial condition $X_S$ is a process $X$ satisfying Xt=XS+St br(Xr)dr+BHt-BHS for all $t\\in [S,T]$ .", "Our main stability estimate for solutions is then formulated as follows.", "Theorem 3.2 Assume ().", "Let $\\delta >0$ .", "Let $[S,T]\\subset [0,1]$ and for $i=1,2$ let $X^i$ be adapted continuous processes satisfying (REF ) on $[S,T]$ with initial 
conditions $X^i_S$ and drifts $b^i\\in L^q_t C^{1+\\delta }_x$ .", "Denote $M=\\max _{i=1,2}\\Vert b^i\\Vert _{L^q_t C^\\alpha _x}$ .", "Then for any $m\\in [2,\\infty )$ there exists a positive constant $N=N(m,M,H,\\alpha ,q,d)$ , such that one has the almost sure bound t[S,T]|X1t-X2t|Lm|FsN(|XS1-XS2|+b1-b2Lqt([S,T];C-1x)).", "Moreover, if $b^1=b^2$ , then one also has the almost sure bound t[S,T](|X1t-X2t|-1)Lm|FsN |XS1-XS2|-1.", "As usual, we denote $\\varphi ^1=X^1-B^H$ and $\\varphi ^2=X^2-B^H$ .", "For $t\\in [S,T]$ , we write X1t-X2t=X1S-X2S+St(01b1r(BHr+1r+(1-)2r)d)(X1r-X2r)dr       +0t(b1-b2)r(BHr+2r)dr.", "Note that $\\nabla b^1\\in L^q_tC^\\delta _x$ , and therefore the process At:=01Atd:=01(Stb1r(BHr+1r+(1-)2r)dr)d is well defined.", "Define furthermore zt:=0t(b1-b2)r(BHr+2r)dr. We then apply Lemma REF with $\\varphi =\\lambda \\varphi ^1_r+(1-\\lambda )\\varphi ^2_r$ and $h=\\nabla b^1$ , as well as with $\\varphi =\\varphi ^2$ and $h=b^1-b^2$ .", "Since $\\varphi ^1$ and $\\varphi ^2$ are the drift parts of solutions, by Lemma REF the processes $\\varphi =\\lambda \\varphi ^1+(1-\\lambda )\\varphi ^2$ satisfy the bound (REF ) with control $w=w_{b^1,\\alpha ,q}+w_{b^2,\\alpha ,q}$ , and so Lemma REF indeed applies.", "Combining the bound (REF ) with Lemma REF , we get that there exists random variables $\\eta _A,\\eta _z$ with Gaussian momentsNote that in terms of the coefficients, the moments of $\\eta _A$ depend on $w_{b^1,\\alpha ,q}+w_{b^2,\\alpha ,q}$ , while the moments of $z$ depend only on $w_{b^2,\\alpha ,q}$ .", "conditionally on $\\mathcal {F}_S$ , as well as $\\delta >0$ and $p\\in (1,2)$ , such that Ap-var;[S,T]wb1,,q(S,T)1/qSs<tT|At-As|wb1,,q(s,t)1/q|t-s| wb1,,q(S,T)1/qA, zp-var;[S,T]wb1-b2,-1,q(S,T)1/qSs<tT|zt-zs|wb1-b2,-1,q(s,t)1/q|t-s| wb1-b2,-1,q(S,T)1/qz.", "If we rewrite () as d(X1t-X2t)=Adtxt+dzt,    (X1t-Xt2)t=S=X1S-X2S, then we are in the realm of Lemma REF from Appendix , for $\\tilde{p}=p$ .", "We therefore get t[S,T]|Xt1-Xt2|eCAp-var;[S,T]p(|X1S-X2S|+zp-var;[S,T]).", "Recall that $\\eta _A$ satisfies $\\mathbb {E}_S [e^{\\mu \\eta _A^2}]\\lesssim 1$ for some $\\mu >0$ , thus also $\\mathbb {E}_S[e^{K \\eta _A^p}] \\lesssim _{K,p} 1$ for all $K>0$ .", "Therefore we obtain ES[ t[S,T]|Xt1-Xt2|m ] ES[emC Ap-var; [S,T]p ] |X1S-X2S|m       + ES[ emC Ap-var;[S,T]p zp-var;[S,T]m ] |X1S-X2S|m + wb1-b2,-1,q(S,T)m/q, using conditional Hölder's inequality to get the last line.", "This gives (REF ).", "In case $b^1=b^2$ , we have $z=0$ and the Young equation () becomes homogeneous.", "Moreover, note that Young equations allow time-reversal: if we fix $\\tau \\in [S,T]$ , write $\\tilde{A}_t=A_{\\tau -t}$ , and dYt=AdtYt,   Ytt=0=X1-X2, then $Y_{\\tau -S}=X_S^1-X_S^2$ .", "Therefore by Lemma REF we have |XS1-XS2|eCAp-var;[0,-S]p|X1-X2|.", "Of course $\\Vert \\tilde{A}\\Vert _{p-{\\mathrm {var}};[0,\\tau -S]}^p=\\Vert A\\Vert _{p-{\\mathrm {var}};[S,\\tau ]}^p \\le \\Vert A \\Vert _{p-{\\mathrm {var}};[S,T]}^p$ , so after rearranging for the inverses, taking supremum in $\\tau \\in [S,T]$ , and taking $L^m|\\mathcal {F}_s$ norms, we get (REF )." 
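For the reader's convenience, we record the elementary step used above to pass from the conditional Gaussian moment bound $\mathbb {E}_S [e^{\mu \eta _A^2}]\lesssim 1$ to exponential moments of $\eta _A^p$ ; this is a standard Young-inequality computation, and the explicit constant below is only our own bookkeeping. For $p\in (1,2)$ and any $K>0$ , Young's inequality with exponents $2/p$ and $2/(2-p)$ gives, for every $\eta \ge 0$ , $ K\eta ^p=\big (\mu ^{p/2}\eta ^p\big )\big (K\mu ^{-p/2}\big )\le \tfrac{p}{2}\,\mu \eta ^2+\tfrac{2-p}{2}\,K^{\frac{2}{2-p}}\mu ^{-\frac{p}{2-p}}, $ so that indeed $\mathbb {E}_S[e^{K\eta _A^p}]\le e^{\frac{2-p}{2}K^{2/(2-p)}\mu ^{-p/(2-p)}}\,\mathbb {E}_S[e^{\mu \eta _A^2}]\lesssim _{K,p}1$ .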
], [ "Strong well-posedness for functional drift", "We first apply the stability estimate to establish existence and uniqueness of solutions of (REF ) in the case of $\\alpha >0$ .", "In this case the meaning of solutions is unambiguous, but we will also need the following stronger concepts of solutions.", "Definition 4.1 Assume $b\\in L^q_t C^\\alpha _x$ for some $q\\ge 1$ , $\\alpha >0$ and let $\\gamma :[0,1]\\rightarrow \\mathbb {R}^d$ be bounded and measurable.", "A semiflow associated to the ODE Xt = 0t bs (Xs) ds + t is a jointly measurable map $\\Phi :[0,1]^2_\\le \\times \\mathbb {R}^d\\rightarrow \\mathbb {R}^d$ such that for all $(s,x)\\in [0,1]\\times \\mathbb {R}^d$ and all $t\\in [s,1]$ one has st(x)=x+st br(sr(x))dr+t-s; for all $(s,r,t,x)\\in [0,1]^3_\\le \\times \\mathbb {R}^d$ one has $\\Phi _{s\\rightarrow t}(x)=\\Phi _{r\\rightarrow t}\\big (\\Phi _{s\\rightarrow r}(x)\\big )$ .", "A flow is a semiflow such that for all $(s,t)\\in \\times [0,1]^2_\\le $ the map $x\\mapsto \\Phi _{s\\rightarrow t}(x)$ is a self-homeomorphism of $\\mathbb {R}^d$ .", "If $\\gamma $ is a stochastic process, a random (semi)flow is a jointly measurable map $\\Phi :\\Omega \\times [0,1]^2_\\le \\times \\mathbb {R}^d\\rightarrow \\mathbb {R}^d$ such that for almost all $\\omega \\in \\Omega $ , the map $\\Phi ^\\omega :[0,1]^2_\\le \\times \\mathbb {R}^d\\rightarrow \\mathbb {R}^d$ is a (semi)flow associated to (REF ) with $\\gamma =\\gamma (\\omega )$ .", "We say that a random (semi)flow is adapted if for all $(s,t,x)\\in [0,1]^2_\\le \\times \\mathbb {R}^d$ , the random variable $\\Phi _{s\\rightarrow t}(x)$ is $\\mathcal {F}_t$ -measurable.", "Given $\\beta \\in (0,1)$ , we say that a (semi)flow is locally $\\beta $ -Hölder continuous if for all $K$ there exists a constant $N$ such that for all $(s,t,x,y)\\in [0,1]_\\le ^2\\times B_K^2$ one has $|\\Phi _{s\\rightarrow t}(x)-\\Phi _{s\\rightarrow t}(y)|\\le N|x-y|^\\beta $ .", "Remark 4.2 Definition REF is based on Kunita's classical one, cf.", "[45]; it is slightly different (in fact, stronger) from other definitions proposed in the literature, like [27], due to the ordering of the quantifiers.", "One can draw a nice analogy between this kind of difference and the one between so called crude and perfect random dynamical systems, cf.", "[68].", "Theorem 4.3 Assume (), $\\alpha >0$ , and let $b\\in L^q_t C^\\alpha _x$ .", "Then there exists an adapted random semiflow of solutions to (REF ) that is furthermore locally $\\beta $ -Hölder continuous almost surely for all $\\beta \\in (0,1)$ .", "Let $m\\in [2,\\infty )$ , to be specified later.", "Take a sequence of functions $(b^{(n)})_{n\\in \\mathbb {N}}$ such that $b^{(n)}\\in L^q_tC^{2}_x$ and $\\Vert b^{(n)}\\Vert _{L^q_t C^\\alpha _x}\\le \\Vert b\\Vert _{L^q_t C^\\alpha _x}$ for all $n\\in \\mathbb {N}$ , and $\\Vert b^{(n)}-b\\Vert _{L^q_t C^{\\alpha -1}_x}\\rightarrow 0$ as $n\\rightarrow \\infty $ .", "Replacing $b$ by $b^{(n)}$ in (REF ), the equation clearly admits an adapted random semiflow which we denote by $\\Phi ^{(n)}$ .", "For fixed $(s,t)\\in [0,1]^2_\\le $ , $x\\in \\mathbb {R}^d$ , and $n,n^{\\prime }\\in \\mathbb {N}$ , we may apply Theorem REF to obtain the bound st(n)(x)-st(n')(x)Lmb(n)-b(n')Lqt C-1x.", "Here and below the only important feature of the hidden proportionality constant in $\\lesssim $ is that it is independent of $n,n^{\\prime }$ .", "Next, let $(s,s^{\\prime },t),(s,s^{\\prime },t^{\\prime })\\in [0,1]^3_\\le $ , $x,x^{\\prime }\\in \\mathbb {R}^d$ , and 
$n\\in \\mathbb {N}$ .", "Then from applying Theorem REF again we get st(n)(x)-st(n)(x')Lm|x-x'|; by trivial estimate we get st(n)(x)-st'(n)(x)Lm|t-t'|H(1/q'); and using the semigroup property and Theorem REF once more we have [eq:s-regularity] st(n)(x)-s't(n)(x)Lm=s't(n)(ss'(n)(x))-s't(n)(x)Lm ss'(n)(x)-xLm|s'-s|H(1/q').", "We therefore get that the sequence $\\big (\\Phi ^{(n)}\\big )_{n\\in \\mathbb {N}}$ is on the one hand Cauchy in $C^0_{s,t,x} L^m_\\omega $ , and on the other hand, bounded in $C^{0}_{s,t}C^1_xL^m_\\omega \\cap C^0_x C^{H\\wedge (1/q^{\\prime })}_{s,t}L^m_\\omega $ .", "This implies that for some random field $\\Phi $ , one has $\\Phi ^{(n)}\\rightarrow \\Phi $ in $C^0_{s,t}C^{1-\\kappa }_xL^m_\\omega \\cap C^0_x C^{H\\wedge (1/q^{\\prime })-\\kappa }_{s,t} L^m_\\omega $ , where $\\kappa >0$ is arbitrary.", "By Kolmogorov's continuity theorem, for sufficiently large $m$ , the convergence also holds in $L^m_\\omega C^0_{s,t}C^{1-2\\kappa ,\\mathrm {loc}}_x\\cap L^m_\\omega C^{0,\\mathrm {loc}}_x C^{H\\wedge (1/q^{\\prime })-2\\kappa }_{s,t}$ .", "This yields the claimed spatial regularity of $\\Phi $ , while the fact that it is indeed a semiflow for (REF ) follows immediately from the locally uniform convergence and the spatial continuity of the coefficient.", "Theorem 4.4 Assume (), $\\alpha >0$ , and let $b\\in L^q_t C^\\alpha _x$ .", "Then there exists an event $\\tilde{\\Omega }$ of full probability such that for all $\\omega \\in \\tilde{\\Omega }$ , for all $(S,T)\\in [0,1]^2_\\le $ , $x\\in \\mathbb {R}^d$ , there exists only one solution to (REF ) on $[S,T]$ with initial condition $x$ .", "The theorem will follow immediately from Theorem REF and the following lemma, which is a refinement of the technique illustrated in [63].", "Lemma 4.5 Let $\\gamma :[0,1]\\rightarrow \\mathbb {R}^d$ be bounded and measurable and consider the ODE $x_t = \\int _0^t b_s (x_s) \\mathrm {d}s + \\gamma _t.$ Suppose $b\\in L^1_t C^{\\alpha ,\\mathrm {loc}}_x$ and that the ODE admits a locally $\\beta $ -Hölder continuous semiflow $\\Phi $ with $\\beta (1+\\alpha )>1.$ Then for any $(S,T)\\in [0,1]_\\le ^2$ and $y\\in \\mathbb {R}^d$ there exists a unique solution to the ODE on the interval $[S,T]$ with initial condition $y$ , given by $\\Phi _{S\\rightarrow \\cdot }(y)$ .", "Suppose that,there exists another solution to the ODE, given by $(z_t)_{t\\in [S,T]}$ .", "Since both $z$ an $\\Phi _{S\\rightarrow \\cdot }(y)$ are bounded, we may and will assume $b\\in L^1_t C^{\\alpha }_x$ and that $\\Phi $ globally $\\beta $ -Hölder continuous.", "Define the control $w=w_{b,\\alpha ,1}$ .", "Now let us fix $\\tau \\in [S,T]$ and define the map $f_t:= \\Phi _{t\\rightarrow \\tau } (z_t) - \\Phi _{S\\rightarrow \\tau }(y)$ .", "If we are able to show that $f$ is constant in time, then $f \\equiv f_0=0$ , which implies $\\Phi _{t\\rightarrow \\tau }(z_t)=\\Phi _{S\\rightarrow \\tau }(y)$ and in turn by choosing $t=\\tau $ gives $z_\\tau =\\Phi _{\\tau \\rightarrow \\tau }(z_\\tau )=\\Phi _{S\\rightarrow \\tau }(y)$ .", "In particular, if we above argument holds for any $\\tau \\in [S,T]$ , we reach the conclusion.", "It remains to prove that $f$ is constant on $[S,\\tau ]$ .", "To this end, first observe that for any $S\\le s\\le t\\le \\tau $ it holds [eq:sha trick] |fs,t| =|t(zt)-s(zs)| =|t(zt)-t(st(zs))| |st(zs)-zt|.", "Next, by definition of flow it holds $ \\Phi _{s\\rightarrow t}(z_s)-z_t=\\int _s^t [b_r(\\Phi _{s\\rightarrow r} (z_s))-b_r(z_r)] \\mathrm {d}r $ which immediately implies 
$|\\Phi _{s\\rightarrow t}(z_s)-z_t|\\lesssim w(s,t)$ ; we can improve the estimate by recursively inserting it in the above identity: $|\\Phi _{s\\rightarrow t}(z_s)-z_t|& \\le \\int _s^t |b_r(\\Phi _{s\\rightarrow r} (z_s))-b_r(z_r)| \\mathrm {d}r\\\\& \\le \\int _s^t \\Vert b_r\\Vert _{C^\\alpha } |\\Phi _{s\\rightarrow r} (z_s))-z_r|^\\alpha \\mathrm {d}r\\le w(s,t)^{1+\\alpha }.$ Inserting it in the initial estimate, we can conclude that $ |f_{s,t}| \\lesssim |\\Phi _{s\\rightarrow t}(z_s)-z_t|^\\beta \\lesssim w(s,t)^{\\beta (1+\\alpha )}.$ Since $\\beta (1+\\alpha )>1$ and $w$ is a continuous control, $f$ must be necessarily constant.", "Remark 4.6 Path-by-path uniqueness clearly implies pathwise uniqueness, which in turn implies uniqueness in law by [67].", "Remark 4.7 The semiflow is based on deterministic initial data, but immediately extends to random ones: if $X_0$ is a $\\mathcal {F}_0$ -measurable random variable, then $(\\Phi _{0\\rightarrow t}(X_0)\\big )_{t\\in [0,1]}$ is clearly the unique adapted solution with initial condition $X_0$ ." ], [ "Strong well-posedness for distributional drift", "When $\\alpha <0$ , the very first question one has to address is the meaning of the equation, more precisely the meaning of the integral in (REF ).", "We start by some consequences of Lemma REF .", "Denote by $\\overline{C^\\alpha }$ the closure of $C^1$ in $C^\\alpha $ .", "Recall that for any $\\alpha <\\alpha ^{\\prime }$ one has $C^{\\alpha ^{\\prime }}\\subset \\overline{C^\\alpha }$ .", "Corollary 5.1 Assume () and $\\alpha <0$ , and take $\\delta >0$ .", "Define the linear map $T^{B^H}:L^q_tC^{1+\\delta }_x\\rightarrow L^\\infty _\\omega C_tC^\\delta _x$ by (TBHh)t(x)=0t hr(BHr+x)dr. Denote $w=w_{h,\\alpha ,q}$ .", "Then, for any $m\\in [2,\\infty )$ , there exists a constant $K=K(m,H,\\alpha ,q,d,w(0,1))$ such that for all $(s,t)\\in [0,1]^2_\\le $ and $x,y\\in \\mathbb {R}^d$ one has the bound (TBHh)s,t(x)-(TBHh)s,t(y)Lm|FsL       K|x-y|w(s,t)1/q|t-s|1/q'+(-1)H. 
Moreover, for any $\\kappa \\in (0,1)$ there exists a constant $K=K(m,H,\\alpha ,q,d,w(0,1),\\kappa )$ such that one has the bound 0s<t1(TBHh)s,tC1-,2xw(s,t)1/q|t-s|1/q'+(-1)H-LmK.", "Consequently with $p=\\big (1+(\\alpha -1)H\\big )^{-1}\\in (1,2)$ , the mapping $h\\rightarrow T^{B^H} h$ takes values in $L^m_\\omega C^{(p+\\kappa )-{\\mathrm {var}}}_tC^{1-\\kappa ,2\\kappa }_x$ and as such, it extends continuously to $L^q_t \\overline{C^\\alpha _x}$ .", "This extension also satisfies the bounds (REF )-(REF ).", "Applying Lemma REF with $t,z\\mapsto (x-y)\\cdot \\int _0^1\\nabla h_t(z+\\theta x+(1-\\theta )y)\\mathrm {d}\\theta $ in place of $h$ yields (REF ).", "The bound (REF ) follow from (REF ) and (REF ) by Kolmogorov's continuity theorem in the form of Corollary REF .", "Corollary REF motivates introducing some temporary notation.", "Given (), set $p_{\\alpha ,H}=\\big (\\big (1+(\\alpha -1)H\\big )^{-1}+2\\big )/2\\in (1,2)$ and for any $h\\in L^q C^\\alpha $ denote by $\\Omega _h$ the probability 1 event $\\lbrace T^{B^H}h\\in C^{p_{\\alpha ,H}-{\\mathrm {var}}}_tC^{1-2\\kappa ,\\kappa }_x,\\,\\,\\forall \\kappa >0\\rbrace $ .", "The properties of $T^{B^H}$ obtained in Corollary REF are sufficient to define the notion of solution via nonlinear Young formalism.", "For details we refer to [35], whose setup we adapt to the $p$ -variation framework.", "Lemma 5.2 Let $A:[0,1]\\times \\mathbb {R}^d\\rightarrow \\mathbb {R}^n$ and $x:[0,1]\\rightarrow \\mathbb {R}^d$ satisfy $A\\in C_t^{p-{\\mathrm {var}}}C^{\\gamma ,\\mathrm {loc}}_x$ and $x\\in C^{q-{\\mathrm {var}}}$ such that the exponents $p,q\\in [1,\\infty )$ , $\\gamma >0$ satisfy 1p+q>1.", "Then the nonlinear Young integral yt=01 Adt(xt):=j=02-1Aj2-,(j+1)2-(xj2-) is well-defined.", "If $A\\in C_t^{p-{\\mathrm {var}}}C^{\\gamma }_x$ , then $y$ satisfies for all $(s,t)\\in [0,1]_\\le ^2$ the bound |ys,t-As(ys)|N A1/pp-var;Cx;[s,t]yq-var;[s,t]/q, where the constant $N$ depends only on $1/p+\\gamma /q$ .", "Definition 5.3 Assume () and $\\alpha <0$ and $b\\in L^q_t C^\\alpha _x$ .", "Given $\\omega \\in \\Omega _b$ a process $X$ is a solution to (REF ) if $X=\\varphi +B^H$ , $\\varphi \\in C^{\\tilde{p}-{\\mathrm {var}}}$ for some $\\tilde{p}<p_{\\alpha ,H}^{\\prime }$ , and the equality t=0t (TBHb)ds(s) holds for all $t\\in [0,1]$ .", "Typically we encounter more special cases of nonliner Young integrals than the generality that Lemma REF allows.", "First of all, the spatial growth of $A$ is often quantified (see e.g.", "Corollary REF ).", "Second, when $x$ is in fact a solution to a nonlinear Young equation, its temporal regularity of $x$ is controlled by that of $A$ (see e.g.", "[35] for a proof in the Hölder case).", "We can then define the notion of flows similarly to Definition REF .", "In fact, the following definition extends Definition REF : taking $A=T^\\gamma b$ one has $A\\in C_t^{1-{\\mathrm {var}}} C^{\\alpha }_x$ and therefore the two definitions coincide via the change of variables $\\Psi _{s\\rightarrow t}(x)=\\Phi _{s\\rightarrow t}(x+\\gamma _s)-\\gamma _t$ .", "Definition 5.4 Assume $A\\in C_t^{p-{\\mathrm {var}}} C^{\\alpha ,\\mathrm {loc}}_x$ for some $\\gamma \\in (0,1]$ , $p\\in [1,2)$ satisfying $(1+\\gamma )/p>1$ .", "A semiflow associated to the nonlinear Young equation Xt = 0t Ads (Xs) is a jointly measurable map $\\Psi :[0,1]^2_\\le \\times \\mathbb {R}^d\\rightarrow \\mathbb {R}^d$ such that for all $(s,x)\\in [0,1]\\times \\mathbb {R}^d$ one has $\\Psi _{s\\rightarrow \\cdot }(x)\\in C^{p-{\\mathrm 
{var}}}$ and for all $t\\in [s,1]$ one has the equality st(x)=x+st Adr(sr(x)); for all $(s,r,t,x)\\in \\times [0,1]^3_\\le \\times \\mathbb {R}^d$ one has $\\Psi _{s\\rightarrow t}(x)=\\Psi _{r\\rightarrow t}\\big (\\Psi _{s\\rightarrow r}(x)\\big )$ .", "The definitions of flow, random (semi)flow, adaptedness, and Hölder continuity are then exactly as in Definition REF .", "We are now in position to state and prove our existence and uniqueness theorems in the case of distributional drift.", "Theorem 5.5 Assume (), $\\alpha <0$ , and let $b\\in L^q_t C^\\alpha _x$ .", "Then there exists an adapted random semiflow of solutions to (REF ) that is furthermore locally $\\beta $ -Hölder continuous almost surely for all $\\beta \\in (0,1)$ .", "By sacrificing a small regularity, we may and will assume $b\\in L^q_t \\overline{C^\\alpha _x}$ .", "The proof follows similar steps as that of Theorem REF .", "We take $m\\in [2,\\infty )$ , to be chosen large enough later as well a sequence of functions $(b^{(n)})_{n\\in \\mathbb {N}}$ such that $b^{(n)}\\in L^q_tC^{2}_x$ and $\\Vert b^{(n)}\\Vert _{L^q_t C^\\alpha _x}\\le \\Vert b\\Vert _{L^q_t C^\\alpha _x}$ for all $n\\in \\mathbb {N}$ , and $\\Vert b^{(n)}-b\\Vert _{L^q_t C^{\\alpha -1}_x}\\rightarrow 0$ as $n\\rightarrow \\infty $ .", "Replacing $b$ by $b^{(n)}$ in (REF ), the equation clearly admits an adapted random semiflow $\\Psi ^{(n)}_{s\\rightarrow t}$ .", "For fixed $(s,t)\\in [0,1]^2_\\le $ , $x\\in \\mathbb {R}^d$ , and $n,n^{\\prime }\\in \\mathbb {N}$ , by Theorem REF one has the bound st(n)(x)-st(n')(x)Lmb(n)-b(n')Lqt C-1x.", "Similarly, for $(s,t)\\in [0,1]^2_\\le $ , $x,x^{\\prime }\\in \\mathbb {R}^d$ , and $n\\in \\mathbb {N}$ , Theorem REF yields st(n)(x)-st(n)(x')Lm|x-x'|.", "The temporal regularity is obtained from Lemma REF : in our present notation we get st(n)(x)-st'(n)(x)Lmwb,,q(t,t')1/q|t'-t|H+1/q'=:w(t,t')1+H with $\\tilde{w}$ defined by the above equality.", "Regularity in the $s$ variable is obtained precisely as in ().", "From these estimates we obtain the convergence (n)      in LmC0s,tC1-,locxLmC0,locx Cp,H-vars,t to a limit $\\Psi $ just as in of Theorem REF with all the required properties shown in the same way, except for the fact that $\\Psi _{s\\rightarrow \\cdot }(x)$ solves the equation on $[s,1]$ with initial condition $x$ in the nonlinear Young sense.", "Since at this point $s$ and $x$ are fixed, we assume for simplicity $s=0, x=0$ and denote $\\Psi ^{(n)}_{0\\rightarrow t}(0)=\\psi ^{(n)}_t$ , $\\Psi _{0\\rightarrow t}(0)=\\psi _t$ .", "It is sufficient to show the convergence 0t (TBHb(n))ds((n)s)0t(TBHb)ds(s) in probability for each $t\\in [0,1]$ .", "Recall that by Corollary REF we have that TBH(b(n)-b)0       in Cp,H-vartC1-,locx in probability.", "From the above, we have that $\\psi ^{(n)}$ converges to $\\psi $ (and in particular is bounded) in $C^{p_{\\alpha ,H}-{\\mathrm {var}}}$ in probability.", "Therefore if we take an auxiliary $\\ell \\in \\mathbb {N}$ and write 0t (TBHb(n))ds((n)s)- 0t(TBHb)ds(s) =0t (TBHb())ds((n)s)- 0t(TBHb())ds(s)       -0t (TBH(b()-b(n)))ds((n)s)+ 0t (TBH(b()-b))ds(s), then we can first choose $\\ell $ and $n$ large enough to make the third and fourth integrals small, and then we can keep the same $\\ell $ and increase $n$ further to make the difference of the first two terms small, using the Lipschitzness of $b^{(\\ell )}$ .", "This concludes the proof.", "Theorem 5.6 Assume (), $\\alpha <0$ , and let $b\\in L^q_t C^\\alpha _x$ .", "Then there exists an event 
$\\tilde{\\Omega }$ of full probability such that for all $\\omega \\in \\tilde{\\Omega }$ , for all $(S,T)\\in [0,1]^2_\\le $ , $x\\in \\mathbb {R}^d$ , there exists only one solution to (REF ) on $[S,T]$ with initial condition $x$ .", "The theorem will follow from a version of Lemma REF in the nonlinear Young setting, which is a generalization of Theorem 5.1 from [35].", "Lemma 5.7 Let $A\\in C^{p-{\\mathrm {var}}}_t C^{\\gamma ,\\mathrm {loc}}_x$ for some $\\gamma \\in (0,1]$ , $p\\in [1,2)$ satisfying $(1+\\gamma )/p>1$ .", "Suppose that the nonlinear YDE $x_t = \\int _0^t A_{\\mathrm {d}s}(x_s)$ admits a locally $\\beta $ -Hölder continuous semiflow $\\Psi $ with any $\\beta \\in (0,1)$ .", "Then for any $(S,T)\\in [0,1]_\\le ^2$ and $y\\in \\mathbb {R}^d$ there exists a unique solution to the nonlinear YDE on $[S,T]$ given by $\\Psi _{S\\rightarrow \\cdot }(y)$ .", "The proof is very similar to that of Lemma REF , so we will mostly sketch it.", "Let $z$ be a solution on $[S,T]$ starting from $y$ , which by definition belongs to $C^{q-{\\mathrm {var}}}$ with some $q$ such that $1/p+\\gamma /q>1$ .", "As a consequence, $z$ is bounded, and in particular after localizing the argument we may assume assume that $\\Psi $ is globally $\\beta $ -Hölder and that $A\\in C^{p-{\\mathrm {var}}}_t C^{\\gamma }_x$ .", "Denote $w(s,t)=\\llbracket A\\rrbracket _{p-{\\mathrm {var}},C^{\\gamma }_x;[s,t]}$ .", "As before, we fix $\\tau \\in [S,T]$ and set $f_t:= \\Psi _{t\\rightarrow \\tau }(z_t)-\\Psi _{S\\rightarrow \\tau }(y)$ ; in order to conclude, it suffices to show that $f$ is constant.", "As in (), we have $|f_{s,t}|\\lesssim |\\Psi _{s\\rightarrow t}(z_s)-z_t|^\\beta $ .", "Moreover by definition of solution to the YDE, it holds that $|\\Psi _{s\\rightarrow t}(z_s)-z_t|= \\big |\\Psi _{s\\rightarrow t}(z_s)-z_s - A_{s,t}(z_s) - (z_t-z_s-A_{s,t}(z_s))\\big |\\lesssim w(s,t)^{\\frac{1}{p}+\\frac{\\gamma }{p\\vee q}}.$ Combining the two estimates we get $|f_{s,t}|\\lesssim w(s,t)^{\\beta \\, \\big (\\frac{1}{p}+\\frac{\\gamma }{p\\vee q}\\big )}$ , and for $\\beta $ close enough to 1 the exponent is bigger that 1, implying the conclusion." 
], [ "Flow regularity and Malliavin differentiability", "So far we have established the existence of a random Hölder continuous semiflow $\\Phi _{s\\rightarrow t}(x)$ ; the aim of this section is to strengthen this result, by establishing better properties for $\\Phi $ .", "We will start by showing that $\\Phi $ is a random flow, in the sense that for each fixed $s<t$ the maps $x\\mapsto \\Phi _{s\\rightarrow t}(x)$ are invertible, see Theorem REF below.", "The main body of the section is devoted to the proof of Theorem REF , showing that both $\\Phi _{s\\rightarrow t}$ and its inverse admit continuous derivatives.", "We conclude the section by showing that the random variables $\\Phi _{s\\rightarrow t}(x)$ possess a rather strong form of Malliavin differentiability, see Theorem REF below.", "From now on, we will use both $\\Phi _{s\\rightarrow t}(x)$ and $\\Phi _{s\\rightarrow t}(x;\\omega )$ to denote the semiflow, so to stress the dependence on the fixed element $\\omega \\in \\Omega $ whenever needed; we start with the promised invertibility.", "Theorem 6.1 Let () hold, $b\\in L^q_tC^\\alpha _x$ , and denote by $\\Phi _{s\\rightarrow t}(x;\\omega )$ the semiflow of solutions constructed in Theorems REF and REF .", "Then there exists an event $\\tilde{\\Omega }$ of full probability such that, for all $\\omega \\in \\tilde{\\Omega }$ and all $(s,t)\\in [0,1]^2_\\le $ , the map $x\\mapsto \\Phi _{s\\rightarrow t}(x;\\omega )$ is a bijection.", "We follow closely the classical arguments by Kunita, cf.", "[45], as they are completely independent from the driving noise being Brownian.", "First, let us define the family of random variables $\\eta _{s,t}(x,y) := |\\Phi _{s\\rightarrow t}(x) - \\Phi _{s\\rightarrow t}(y)|^{-1}$ Set $\\gamma =H\\wedge 1/q^{\\prime }$ for $\\alpha \\ge 0$ , $\\gamma = \\alpha H + 1/q^{\\prime }$ in the case $\\alpha <0$ .", "Recall that the estimates in the proof of Theorem REF , respectively Theorem REF , overall yield $\\Vert \\Phi _{s\\rightarrow t}(x) - \\Phi _{s^{\\prime }\\rightarrow t^{\\prime }}(y)\\Vert _{L^m} \\lesssim |s-s^{\\prime }|^\\gamma + |t-t^{\\prime }|^\\gamma + |x-y|;$ moreover, by taking expectation in (REF ), we have $\\Vert |\\Phi _{s\\rightarrow t}(x) - \\Phi _{s\\rightarrow t}(y)|^{-1} \\Vert _{L^m} \\lesssim |x-y|^{-1}.$ We can combine estimates (REF ) and (REF ) and argue as in [45] to find $\\begin{split}\\Vert & \\eta _{s,t} (x,y)-\\eta _{s^{\\prime },t^{\\prime }}(x^{\\prime },y^{\\prime })\\Vert _{L^m}\\\\& \\lesssim \\delta ^{-2} \\Big [ |x-x^{\\prime }|+|y-y^{\\prime }|+ (1+|x|+|x^{\\prime }|+|y|+|y^{\\prime }|)(|t-t^{\\prime }|^\\gamma + |s-s^{\\prime }|^\\gamma ) \\Big ]\\end{split}$ for all $s<t$ and all $x,x^{\\prime },y,y^{\\prime }$ such that $|x-y|>\\delta $ and $|x^{\\prime }-y^{\\prime }|>\\delta $ .", "From (REF ), one can apply Kolmogorov's continuity theorem to deduce that the map $(s,t,x,y)\\mapsto \\eta _{s,t}(x,y;\\omega )$ is continuous on the domain $\\lbrace s<t, |x-y|>\\delta \\rbrace $ for $\\mathbb {P}$ -a.e.", "$\\omega $ .", "As the argument works for any $\\delta >0$ , we can find an event $\\tilde{\\Omega }$ of full probability such that, for all $\\omega \\in \\tilde{\\Omega }$ , the map $\\eta _{s,t}(x,y;\\omega )$ is continuous on $\\lbrace s<t, |x-y|\\ne 0\\rbrace $ , which implies that it must also be finite for all $s<t, x\\ne y$ .", "This clearly implies injectivity of $x\\mapsto \\Phi _{s,t}(x;\\omega )$ for all $s<t$ and $\\omega \\in \\tilde{\\Omega }$ .", "We move to proving surjectivity, which 
this time is closely based on [45], having established the key inequalities (REF ) and (REF ).", "Let $\\hat{\\mathbb {R}}^d=\\mathbb {R}^d\\cup \\lbrace \\infty \\rbrace $ be the one-point compactification of $\\mathbb {R}^d$ ; set $\\hat{x}=x/|x|^2$ for $x\\in \\mathbb {R}^d\\setminus \\lbrace 0\\rbrace $ and $\\hat{x}=\\infty $ for $x=0$ .", "Define $\\tilde{\\eta }_{s,t}(\\hat{x}) ={\\left\\lbrace \\begin{array}{ll} (1+ |\\Phi _{s\\rightarrow t}(x)|)^{-1}\\quad & \\text{if } \\hat{x}\\in \\mathbb {R}^d\\\\0 & \\text{if } \\hat{x}=0\\end{array}\\right.", "}$ Arguing as in [45] we find $\\Vert \\tilde{\\eta }_{s,t}(\\hat{x}) -\\tilde{\\eta }_{s^{\\prime },t^{\\prime }}(\\hat{y})\\Vert _{L^m} \\lesssim |\\hat{x}-\\hat{y}| + |t-t^{\\prime }|^\\gamma + |s-s^{\\prime }|^\\gamma ;$ by Kolmogorov's theorem, we can find an event of full probability, which we still denote by $\\tilde{\\Omega }$ , such that $\\tilde{\\eta }_{s,t}(\\hat{x};\\omega )$ is continuous at $\\hat{x}=0$ and so that $\\Phi _{s,t}(\\cdot ;\\omega )$ can be extended to a continuous map from $\\hat{\\mathbb {R}}^d$ to itself for any $s<t$ and $\\omega \\in \\tilde{\\Omega }$ .", "This extension, denoted by $\\tilde{\\Phi }_{s\\rightarrow t}(x;\\omega )$ , is continuous in $(s,t,x)$ for every $\\omega \\in \\tilde{\\Omega }$ and thus $\\Phi _{s\\rightarrow t}(\\cdot \\,; \\omega )$ is homotopic to the identity map $\\tilde{\\Phi }_{s\\rightarrow s}(\\cdot \\,;\\omega )$ , making it surjective.", "Its original restriction $\\Phi _{s\\rightarrow t}(\\cdot \\,; \\omega )$ must then be surjective as well, from which we can conclude that $x\\mapsto \\Phi _{s\\rightarrow t}(x;\\omega )$ is surjective for all $s<t$ and $\\omega \\in \\tilde{\\Omega }$ .", "Our next goal is to establish that $\\Phi $ is in fact a random flow of diffeomorphisms; by this we mean that, in addition to the map $(s,t,x,\\omega )\\mapsto \\Phi _{s\\rightarrow t}(x;\\omega )$ satisfying all the properties listed in Definition REF , there exists an event of full probability $\\tilde{\\Omega }$ such that $x\\mapsto \\Phi _{s\\rightarrow t}(x;\\omega )$ is a diffeomorphism for all $s<t$ and $\\omega \\in \\tilde{\\Omega }$ .", "We will in fact prove a little bit more: Theorem 6.2 Let () hold, $b\\in L^q_tC^\\alpha _x$ , and $\\Phi $ be the associated random flow.", "Then there exists a constant $\\delta (\\alpha ,H)>0$ and an event $\\tilde{\\Omega }$ of full probability such that for any $\\omega \\in \\tilde{\\Omega }$ and any $s<t$ , the map $x\\mapsto \\Phi _{s\\rightarrow t}(x;\\omega )$ and its inverse are both $C^{1+\\delta ,\\mathrm {loc}}_x$ .", "In order to prove Theorem REF , we will first assume $b$ to be sufficiently smooth ($b\\in L^q_t C^{1+\\kappa }_x$ would suffice), so that the associated $\\Phi $ is already known to be a flow of diffeomorphism, and derive estimates which only depend on $\\Vert b\\Vert _{L^q_t C^\\alpha _x}$ (cf.", "Lemma REF and Proposition REF below).", "Estabilishing the result rigorously for general $b$ is then accomplished by standard approximation procedures, in the style of Theorems REF , REF .", "We will frequently use the exponent $\\varepsilon =(\\alpha -1) H+1/q^{\\prime }$ from Lemma REF , recall that () is equivalent to $\\varepsilon >0$ .", "Recall that, for regular $b$ , the Jacobian of the flow, namely the matrix $J_{s\\rightarrow t}^x := \\nabla \\Phi _{s\\rightarrow t}(x)\\in \\mathbb {R}^{d\\times d}$ , is known to satisfy the variational equation $J_{s\\rightarrow t}^x = I + \\int _s^t \\nabla b_r(\\Phi 
_{s\\rightarrow r}(x)) J_{s\\rightarrow r}^x \\mathrm {d}r.$ Already from this fact we can deduce useful moment estimates for $J^x_{s\\rightarrow t}$ .", "Lemma 6.3 Assume () and let $b\\in L^q_t C^2_x$ .", "Then there exists $p(\\alpha ,H)<2$ with the following property: for any $m\\in [1,\\infty )$ , there exists a constant $N=N(m,p,H,\\alpha ,q,d,\\Vert b\\Vert _{L^q_t C^\\alpha _x})$ such that, for all $x\\in \\mathbb {R}^d$ and $s\\in [0,1]$ , it holds $\\Big \\Vert \\sup _{t\\in [s,1]} |J^x_{s\\rightarrow t}|\\, \\Big \\Vert _{L^m} + \\big \\Vert \\llbracket J^x_{s\\rightarrow \\cdot }\\rrbracket _{p-{\\mathrm {var}};[s,1]} \\big \\Vert _{L^m} \\le N;$ moreover, for fixed $\\delta <\\varepsilon $ , for any $x\\in \\mathbb {R}^d$ and $s \\le t \\le t^{\\prime }$ it holds $\\Vert J^x_{s\\rightarrow t} - J^x_{s\\rightarrow t^{\\prime }} \\Vert _{L^m} \\lesssim |t-t^{\\prime }|^\\delta .$ For fixed $s\\in [0,1]$ and $x\\in \\mathbb {R}^d$ , setting $A_{s,t}:= \\int _s^t \\nabla b_r(\\Phi _{s\\rightarrow r}(x)) \\mathrm {d}r$ , equation (REF ) can be regarded as a linear Young differential equation.", "Arguing as in the proof of Theorem REF , one can show that $A$ has finite $p$ -variation for some $p<2$ and that in fact there exists $\\mu >0$ (depending on the usual parameters and $\\Vert b\\Vert _{L^q_t C^\\alpha _x}$ , but not on $x$ nor $s$ ) such that $\\mathbb {E}\\bigg [ \\exp \\bigg ( \\mu \\bigg | \\sup _{s\\le t<t^{\\prime }\\le 1} \\frac{|A_{t,t^{\\prime }}|}{w_{b,\\alpha ,q}(t,t^{\\prime })^{1/q} |t-t^{\\prime }|^\\delta } \\bigg |^2\\bigg )\\bigg ] <\\infty $ Lemma REF in Appendix (with $\\tilde{p}=p$ ) implies the pathwise estimate $\\sup _{t\\in [s,1]} |J^x_{s,t}| + \\llbracket J^x_{s\\rightarrow \\cdot }\\rrbracket _{p-{\\mathrm {var}};[s,1]} \\le C \\exp \\big ( C \\llbracket A \\rrbracket _{p-{\\mathrm {var}}; [s,1]}^p \\big ).$ Claim (REF ) then follows by taking $L^m$ -norms on both sides and observing (as in the proof of Theorem REF ) that (REF ) implies $\\mathbb {E}[\\exp (\\lambda \\llbracket A \\rrbracket _{p-{\\mathrm {var}}}^p)]<\\infty $ for all $\\lambda >0$ .", "Similarly, claim (REF ) also follows from Lemma REF (this time applying estimate (REF ) therein) combined with (REF ).", "The next step in the proof of Theorem REF is given by the following key estimate.", "Proposition 6.4 Let $b$ be a regular drift, define $J^x_{s\\rightarrow t}$ as above; set $\\varepsilon =(\\alpha -1)H + 1/q^{\\prime }$ .", "Then there exists $\\gamma \\in (0,1)$ such that, for any $m\\in [1,\\infty )$ , there exists $N=N(m,\\gamma ,H,\\alpha ,q,d,\\Vert b\\Vert _{L^q_t C^\\alpha _x})$ such that $\\Vert J^x_{s\\rightarrow t} - J^y_{s^{\\prime }\\rightarrow t^{\\prime }}\\Vert _{L^m} \\le N\\big [ |x-y|^{\\gamma } + |t-t^{\\prime }|^{\\varepsilon \\gamma } + |s-s^{\\prime }|^{\\varepsilon \\gamma } \\big ].$ for all $(s,t), (s^{\\prime },t^{\\prime })\\in [0,1]^2_\\le $ and $x,y\\in \\mathbb {R}^d$ .", "The proof requires the following technical refinement of Lemma REF .", "Lemma 6.5 Assume (), $h\\in L^q_t C^1_x$ , and let $\\varphi ^i$ , $i=1,2$ , be two processes satisfying the assumptions of Lemma REF for the same control $w$ ; define $\\varepsilon $ as therein and set $\\psi ^i_t=\\int _S^t h_r(B^H_r+\\varphi ^i_r) \\mathrm {d}r$ .", "Then for $\\gamma \\in (0,1)$ satisfying -H>0,    (2-)-H>0,    (2-)-H + (2-)/q>1, and any $m\\in [2,\\infty )$ , there exists $N=N(m,\\gamma ,H,\\alpha ,q,d,\\Vert h\\Vert _{L^q_t C^{\\alpha -1}_x})$ such that $\\Vert (\\psi ^1-\\psi ^2)_{s,t} 
\\Vert _{L^m} \\le N |t-s|^{\\varepsilon -\\gamma H} w_{h,\\alpha -1,q}(s,t)^{\\frac{1}{q}} \\big (1+w(s,t)\\big ) \\sup _{r\\in [S,T]} \\Vert \\varphi ^1_r-\\varphi ^2_r\\Vert _{L^m}^\\gamma .$ Remark 6.6 The conditions in (REF ) should be understood as “$\\gamma $ small enough”.", "Indeed, note that all three conditions are upper bounds on $\\gamma $ and under condition () we can always find $\\gamma >0$ satisfying (REF ): as $\\gamma \\downarrow 0$ , the three conditions become respectively $\\varepsilon >0$ , $2\\varepsilon >0$ , and $2\\varepsilon +2/q>1$ , all of which are trivial since $q\\le 2$ .", "The proof is very similar to that of Lemma REF , so we will mostly sketch it; the main differences are just the use of Lemma REF with $n=m$ and some interpolation arguments.", "Define $A^i_{s,t} = \\mathbb {E}_{s-(t-s)}\\int _s^t h_r (B^H_r + \\mathbb {E}_{s-(t-s)}\\varphi _r) \\mathrm {d}r$ , so that $\\psi ^1-\\psi ^2$ is the stochastic sewing of $A^1-A^2$ .", "Arguing similarly as in Lemma REF , we have the estimate $\\Vert A_{s,t}\\Vert _{L^m}& \\le \\bigg \\Vert \\int _s^t \\Vert P_{|r-s_1|^{2H}} h_r\\Vert _{C^\\gamma _x}\\, |\\mathbb {E}_{s_1} \\varphi ^1_r-\\mathbb {E}_{s_1} \\varphi ^2_r|^\\gamma \\mathrm {d}r \\bigg \\Vert _{L^m}\\\\& \\lesssim |t-s|^{\\varepsilon -\\gamma H} w_{h,\\alpha -1,q}(s,t)^{1/q} \\sup _{r\\in [S,T]} \\Vert \\varphi ^1_r-\\varphi ^2_r\\Vert _{L^m}^\\gamma ;$ the first condition of Lemma REF is verified, since $\\varepsilon -\\gamma H>0$ and $1/q \\ge 1/2$ .", "To control $\\mathbb {E}_{s_1} \\delta A_{s,u,t}=\\mathbb {E}_{s_1} \\delta A^1_{s,u,t}-\\mathbb {E}_{s_1} \\delta A^2_{s,u,t}$ , we can decompose it as $\\mathbb {E}_{s_1} \\delta A_{s,u,t} = I^1-I^2+J^1-J^2$ , similarly to Lemma REF .", "Estimating each one of them separately as therein yields $\\sup _i \\lbrace |I^i|,|J^i|\\rbrace \\lesssim |t-s|^{2\\varepsilon }w_{h,\\alpha -1,q}(s,t)^{1/q}w(s_1,t)^{1/q};$ on the other hand, we have $\\Vert I^1-I^2\\Vert _{L^m}& \\le \\bigg \\Vert \\int _{s_4}^{s_5}\\big |P_{|r-s_2|^{2H}}h_r(\\mathbb {E}_{s_2}B^H_r+\\mathbb {E}_{s_1}\\varphi ^1_r)-P_{|r-s_2|^{2H}}h_r(\\mathbb {E}_{s_2}B^H_r+\\mathbb {E}_{s_2}\\varphi ^1_r)\\big |\\mathrm {d}r \\\\& \\quad -\\int _{s_4}^{s_5}\\big |P_{|r-s_2|^{2H}}h_r(\\mathbb {E}_{s_2}B^H_r+\\mathbb {E}_{s_1}\\varphi ^2_r)-P_{|r-s_2|^{2H}}h_r(\\mathbb {E}_{s_2}B^H_r+\\mathbb {E}_{s_2}\\varphi ^2_r)\\big |\\mathrm {d}r\\bigg \\Vert _{L^m}\\\\& \\le \\int _{s_4}^{s_5} \\Vert P_{|r-s_2|^{2H}}h_r\\Vert _{C^1_x} \\big ( \\Vert \\mathbb {E}_{s_1} \\varphi ^1_r-\\mathbb {E}_{s_1} \\varphi ^2_r\\Vert _{L^m} + \\Vert \\mathbb {E}_{s_2} \\varphi ^1_r-\\mathbb {E}_{s_2} \\varphi ^2_r\\Vert _{L^m}\\big ) \\mathrm {d}r\\\\& \\lesssim |t-s|^{(\\alpha -2)H + 1/q^{\\prime }} w_{h,\\alpha -1,q}(s,t)^{1/q} \\sup _{r\\in [S,T]} \\Vert \\varphi _r^1-\\varphi ^2_r\\Vert _{L^m},$ similarly for $\\Vert J^1-J^2\\Vert _{L^m}$ .", "Interpolating the two bounds together overall yields $\\Vert \\mathbb {E}_{s_1} \\delta A_{s,u,t}\\Vert _{L^m}\\lesssim |t-s|^{\\varepsilon (2-\\gamma )-\\gamma H} w_{h,\\alpha -1,q}(s,t)^{1/q} w(s_1,t)^{\\frac{1-\\gamma }{q}}\\sup _{r\\in [S,T]} \\Vert \\varphi ^1_r-\\varphi ^2_r\\Vert _{L^m}^\\gamma .$ By the hypothesis (REF ), the power of $|t-s|$ is positive and the total power of all the controls is greater than 1.", "The conclusion then follows from Lemma REF .", "[Proof of Proposition REF ] As usual, we can split estimate (REF ) into three subestimates, with two of the three parameters $(s,t,x)$ fixed and only one 
varying.", "From now on we will fix $\\gamma \\in (0,1)$ satisfying condition (REF ).", "Step 1: $(s,x)$ fixed, $t<t^{\\prime }$ .", "In this case the desired estimate is just (REF ) from Lemma REF , for the choice $\\delta =\\gamma \\varepsilon < \\varepsilon $ .", "Step 2: $(s,t)$ fixed, $x\\ne y$ .", "The difference process $v_t:=J^x_{s,t}-J^{y}_{s,t}$ satisfies an affine Young equation of the form $ \\mathrm {d}v_t = \\mathrm {d}A_t\\, v_t + \\mathrm {d}z_t$ , $v_s=0$ , for $A_t = \\int _s^t \\nabla b_r(\\Phi _{s\\rightarrow r}(x)) \\mathrm {d}r, \\quad z_t = \\int _s^t \\big [ \\nabla b_r(\\Phi _{s\\rightarrow r}(x)) - \\nabla b_r(\\Phi _{s\\rightarrow r}(y))\\big ] J^y_{s\\rightarrow r} \\mathrm {d}r;$ invoking as usual Lemma REF (for $\\tilde{p}=1/2$ ) and applying estimate (REF ), one ends up with $ \\Vert J^x_{s,t}-J^y_{s,t}\\Vert _{L^m} \\lesssim \\big \\Vert \\llbracket z\\rrbracket _{2-{\\mathrm {var}}} \\big \\Vert _{L^m}.$ Observe that $z$ itself can be interpreted as a Young integral: $z_t= \\int _s^t \\mathrm {d}\\tilde{A}_r J^y_{s\\rightarrow r}$ for $\\tilde{A}_u:=\\int _s^u \\big [ \\nabla b_r(\\Phi _{s\\rightarrow r}(x)) - \\nabla b_r(\\Phi _{s\\rightarrow r}(y))\\big ] \\mathrm {d}r$ Standard properties of Young integral, together with Cauchy's inequality, then yield $\\big \\Vert \\llbracket z\\rrbracket _{2-{\\mathrm {var}}} \\big \\Vert _{L^m}\\lesssim \\big \\Vert \\llbracket \\tilde{A}\\rrbracket _{2-{\\mathrm {var}}}\\, \\llbracket J^y_{s\\rightarrow \\cdot }\\rrbracket _{p-{\\mathrm {var}}} \\big \\Vert _{L^m}\\lesssim \\big \\Vert \\llbracket \\tilde{A}\\rrbracket _{2-{\\mathrm {var}}}\\Vert _{L^{2m}} \\big \\Vert \\llbracket J^y_{s\\rightarrow \\cdot }\\rrbracket _{p-{\\mathrm {var}}} \\big \\Vert _{L^{2m}};$ by estimate (REF ), it only remains to find a bound for $\\llbracket \\tilde{A}\\rrbracket _{2-{\\mathrm {var}}}$ .", "Recall that by construction $\\Phi _{s\\rightarrow r}(x) = \\varphi _{s\\rightarrow r}(x) + B^H_r$ , where the process $\\varphi _{s\\rightarrow \\cdot }(x)$ satisfies condition (REF ) (or even (REF ) for $\\alpha <0$ ) for $w=w_{b,\\alpha ,q}$ .", "We can apply Lemma REF with the choice $h=\\nabla b$ , $\\varphi ^1_r=\\varphi _{s\\rightarrow r}(x)$ , $\\varphi ^2_r=\\varphi _{s\\rightarrow r}(y)$ to obtain, for all $s\\le r<u\\le 1$ and all $m\\in [1,\\infty )$ , $\\Vert \\tilde{A}_{r,u}\\Vert _{L^m}&\\lesssim |r-u|^{\\varepsilon -\\gamma H} w(r,u)^{1/q} (1+ \\Vert b\\Vert _{L^q_t C^\\alpha _x}^q) \\sup _{r\\in [s,1]} \\Vert \\varphi ^1_r - \\varphi ^2_r\\Vert _{L^m}^\\gamma \\\\& \\lesssim |r-u|^{\\varepsilon -\\gamma H} w(r,u)^{1/q} |x-y|^\\gamma $ where in the second inequality we used estimate (REF ).", "By Lemma REF in Appendix we deduce that, for any $m\\in [1,\\infty )$ and $\\delta <\\varepsilon -\\gamma H$ , it holds $\\big \\Vert \\llbracket \\tilde{A}\\rrbracket _{2-{\\mathrm {var}}}\\Vert _{L^{2m}} \\lesssim \\bigg \\Vert \\sup _{r<u} \\frac{|\\tilde{A}_{r,u}|}{|r-u|^{\\delta } w(r,u)^{1/q}}\\bigg \\Vert _{L^{2m}} \\lesssim |x-y|^\\gamma .$ Combining all the above estimates yields the conclusion in this case.", "Step 3: $(t,x)$ fixed, $s<s^{\\prime }$ .", "This step is mostly a variation on the arguments presented in the previous cases, so we mostly sketch it.", "We can write $J^x_{s,t}= J^x_{s,s^{\\prime }} + \\int _{s^{\\prime }}^t \\nabla b(\\Phi _{s\\rightarrow t}(x)) J^x_{s,r} \\mathrm {d}r$ so that the difference $v_t= J^x_{s,t} - J^x_{s^{\\prime },t}$ can be regarded as the solution to an affine Young equation on 
$[s^{\\prime },t]$ , for $A$ and $z$ defined similarly as in Step 2; the only difference is that now $v_{s^{\\prime }} = J^x_{s,s^{\\prime }}-I$ and $z_t = \\int _{s^{\\prime }}^t \\mathrm {d}\\tilde{A}_r J^z_{s^{\\prime }\\rightarrow r}$ for the choice $\\tilde{A}_u:=\\int _{s^{\\prime }}^u \\big [ \\nabla b_r(\\Phi _{s\\rightarrow r}(x)) - \\nabla b_r(\\Phi _{s^{\\prime }\\rightarrow r}(x))\\big ] \\mathrm {d}r.$ From here, the estimates are almost identical to those of Step 2, relying on a combination of Lemmas REF , REF and REF ; however in this case an application of Step 1 and estimate (REF ) gives us $\\Vert J^x_{s^{\\prime }\\rightarrow s}-I\\Vert _{L^m} \\lesssim |s-s^{\\prime }|^{\\varepsilon \\gamma }, \\quad \\sup _{r\\in [s^{\\prime },1]} \\Vert \\Phi _{s\\rightarrow r}(x)-\\Phi _{s^{\\prime }\\rightarrow r}(x)\\Vert _{L^m}^\\gamma \\lesssim |s-s^{\\prime }|^{\\varepsilon \\gamma }.", "$ We are now finally ready to complete the [Proof of Theorem REF ] The argument is based on Theorem II.4.4 from [45]; assume first $b$ to be a regular field.", "It is clear from (REF ) that, for any $\\delta <\\varepsilon \\gamma $ , the map $(s,t,x)\\mapsto \\nabla J_{s\\rightarrow t}^x$ is $\\mathbb {P}$ -a.s. locally $\\delta $ -Hölder continuous, suitable moment estimates depending only on $\\Vert b\\Vert _{L^q_t C^\\alpha _x}$ .", "Furthermore, letting $K_{s\\rightarrow t}^x$ denote the inverse of $J_{s\\rightarrow t}^x$ in the sense of matrices, it is well-known that it solves the linear equation Kstx = I - st Ksrx  br(xsr(x)) dr; arguing as in the proof of Proposition REF , one can prove that $\\Vert K^x_{s\\rightarrow t} - K^y_{s^{\\prime }\\rightarrow t^{\\prime }}\\Vert _{L^m} \\lesssim |x-y|^\\gamma + |t-t^{\\prime }|^{\\varepsilon \\gamma } + |s-s^{\\prime }|^{\\varepsilon \\gamma }$ and so that it is $\\mathbb {P}$ -a.s. 
$\\delta $ -Hölder continuous as well.", "In the case of general case of $b\\in L^q_t C^\\alpha _x$ , we can consider a sequence $b^n$ of regular functions such that $b^n\\rightarrow b$ in $L^q_t C^\\alpha _x$ (up to sacrificing a little bit of spatial regularity as usual), in which case we already know that the associated flows $\\Phi ^n$ converge to $\\Phi $ in $L^m_\\omega C^0_{s,t} C^{\\delta ,\\mathrm {loc}}_x$ ; combined with the aforementioned moments estimates, one can then upgrade it to convergence in $L^m_\\omega C^0_{s,t} C^{1+\\delta ,\\mathrm {loc}}_x$ .", "In particular, the fields $J^{x,n}_{s\\rightarrow t}=\\nabla \\Phi ^n_{s\\rightarrow t}(x)$ and $K^{x,n}_{s\\rightarrow t}=(\\nabla \\Phi ^n_{s\\rightarrow t}(x))^{-1}$ converge respectively to $J^x_{s\\rightarrow t}$ and $K^x_{s\\rightarrow t}$ ; by the limiting procedure, there exists an event $\\tilde{\\Omega }$ of full probability such that, for all $\\omega \\in \\tilde{\\Omega }$ , it holds $J^x_{s\\rightarrow t}(\\omega )=\\nabla \\Phi _{s\\rightarrow t}(x;\\omega )$ and and $J^x_{s\\rightarrow t}(\\omega ) K^x_{s\\rightarrow t}(\\omega )=I$ for all $s<t$ and $x\\in \\mathbb {R}^d$ , as well as $J(\\omega ), K(\\omega )\\in C^0_{s,t} C^{\\delta ,\\mathrm {loc}}_x$ .", "Overall, for every $\\omega \\in \\tilde{\\Omega }$ , the map $(s,t,x)\\mapsto \\Phi _{s\\rightarrow t}(x;\\omega )$ has regularity $C^0_{s,t} C^{1+\\delta ,\\mathrm {loc}}_x$ and its Jacobian admits a continuous inverse $K^x_{s\\rightarrow t}(\\omega )$ .", "But this implies that, for any $s<t$ , $\\nabla \\Phi _{s\\rightarrow t}(x;\\omega )$ is a non degenerate matrix for all $x\\in \\mathbb {R}^d$ , which by the implicit function theorem readily implies that the inverse of $x\\mapsto \\Phi _{s\\rightarrow t}(x;\\omega )$ must belong to $C^{1+\\delta ,\\mathrm {loc}}_x$ as well.", "This concludes the proof.", "It is well known in the regular case that the Jacobian of the flow and the Malliavin derivative satisfy the same type of linear equation.", "Therefore, as the last main result of the section, we show Malliavin differentiability of the random variables $X^x_{s\\rightarrow t}(\\omega ):= \\Phi _{s\\rightarrow t}(x;\\omega )$ .", "To this end, we start with a simple, yet powerful lemma, showing that deterministic perturbations of the driving noise $B^H$ do not affect our solution theory.", "Lemma 6.7 Assume (), $b\\in L^q_t C^\\alpha _x$ , and $h: [0,1]\\rightarrow \\mathbb {R}^d$ be a deterministic, measurable function; then for any $s\\in [0,1]$ and any $x\\in \\mathbb {R}^d$ , there exists a pathwise unique strong solution to the perturbed SDE $X_t = x + \\int _s^t b_r(X_r) \\mathrm {d}r + B^H_{s,t} + h_{s,t} \\quad \\forall \\, t\\in [s,1]$ which we denote by $X_{s\\rightarrow \\cdot }(x;h)$ ; in the distributional case $\\alpha <0$ , eq.", "(REF ) must be interpreted in the sense of Definition REF .", "We give two short alternative arguments to verify the claim.", "On one hand, carefully going through the proofs of Sections -, the only key properties needed on the process $B^H$ are its Gaussianity and the two sided bounds $\\mathbb {E}[ |B^H_t - \\mathbb {E}_s B^H_t|^2] \\sim |t-s|^{2H}$ which are clearly still true for $\\tilde{B}^H=B^H+h$ , due to $h$ being deterministic.", "Alternatively, if we define $\\tilde{b}_t(z):= b_r(z+h_r)$ , $y=x+h_s$ , then any solution $X$ to (REF ) must be in a 1-1 correspondence with a solution $Y:=X+h$ to the unperturbed SDE $Y_t = y + \\int _s^t \\tilde{b}_r(Y_r)\\mathrm {d}r + B^H_{s,t}$ and it is clear 
that $\\tilde{b}$ still satisfies condition (), thus implying its well-posedness.", "We are now ready to verify Malliavin differentiability of $X^x_{s\\rightarrow t}$ .", "To this end, let us recall that if we denote by $\\mathcal {H}^H$ the Cameron-Martin space associated to $B^H$ , then the Malliavin derivative $D X^x_{s\\rightarrow t}$ , when it exists, can be identified as the (random) linear bounded operator from $\\mathcal {H}^H$ to $\\mathbb {R}^d$ given by $h\\mapsto \\partial _h X^x_{s\\rightarrow t}, \\quad \\text{where} \\quad \\partial _h X^x_{s\\rightarrow t}:= \\lim _{\\varepsilon \\rightarrow 0} \\frac{X_{s\\rightarrow t}(x; \\varepsilon h) - X_{s\\rightarrow t}(x ; 0)}{\\varepsilon }.$ We will use $\\Vert D X^x_{s\\rightarrow t}\\Vert _{\\mathcal {H}^H}$ to denote the operator norm of $D X^x_{s\\rightarrow t} $ as a linear operator from $\\mathcal {H}^H$ to $\\mathbb {R}^d$ .", "Theorem 6.8 Assume () and $b\\in L^q_t C^\\alpha _x$ .", "Then $\\mathbb {P}$ -a.s. the random variables $\\partial _h X^x_{s\\rightarrow t}$ exist for all $h\\in C^{2-{\\mathrm {var}}}_t$ and define a (random) linear map $h\\mapsto \\partial _h X^x_{s,t}$ .", "Moreover for any $m\\in [1,\\infty )$ it holds $\\sup _{s\\in [0,1],x\\in \\mathbb {R}^d} \\Big \\Vert \\sup _{t\\in [s,1]} \\Vert h\\mapsto \\partial _h X^x_{s,t}\\Vert _{\\mathcal {L}(C^{2-{\\mathrm {var}}};\\mathbb {R}^d)} \\Big \\Vert _{L^m}.$ In particular, $X^x_{s\\rightarrow t}$ is Malliavin differentiable and for any $m\\in [1,\\infty )$ it holds $\\sup _{s\\in [0,1], x\\in \\mathbb {R}^d} \\Big \\Vert \\sup _{t\\in [s,1]} \\Vert D X^x_{s\\rightarrow t} \\Vert _{\\mathcal {H}^H} \\Big \\Vert _{L^m} <\\infty .$ For simplicity, we give the proof in the case where $b$ is smooth, so that all the computations are rigorous, but keeping track that the estimate (REF ) only depends on $\\Vert b\\Vert _{L^q_t C^\\alpha _x}$ .", "The general case then follows by standard (but a bit tedious) approximation arguments, similar to those of Theorems REF -REF ; for estimate (REF ), one can alternatively invoke [55].", "For smooth $b$ , $\\partial _h X^x_{s\\rightarrow t}$ is classically characterized as the unique solution to the affine equation $\\partial _h X^x_{s\\rightarrow t} = \\int _s^t \\nabla b_r(X^x_{s\\rightarrow t}) \\partial _h X^x_{s\\rightarrow r} \\mathrm {d}r + h_{s,t}.$ Consider the process $A_t:= \\int _s^t \\nabla b_r(X^x_{s\\rightarrow r}) \\mathrm {d}r$ as usual, which satisfies (REF ), so that it has $\\mathbb {P}$ -a.s. finite $p$ -variation for some $p<2$ and moreover $\\mathbb {E}[\\exp (\\lambda \\llbracket A\\rrbracket _{p-{\\mathrm {var}};[s,1]}^p)]<\\infty $ for all $\\lambda \\in \\mathbb {R}$ , where the estimate only depends on $\\Vert b\\Vert _{L^q_t C^\\alpha _x}$ and does not depend on $x$ or $s$ .", "Interpreting (REF ) as an affine Young equation and applying Lemma REF from Appendix with $\\tilde{p}=2$ , we then find $C>0$ such that $|\\partial _h X^x_{s\\rightarrow t}| \\le C e^{C \\llbracket A\\rrbracket _{p-{\\mathrm {var}};[s,t]}^p} \\llbracket h\\rrbracket _{2-{\\mathrm {var}};[s,t]};$ taking first supremum over $h\\in C^{2-{\\mathrm {var}}}$ with $\\Vert h\\Vert _{2-{\\mathrm {var}}}=1$ and then over $t\\in [s,1]$ , we arrive at the pathwise $\\mathbb {P}$ -a.s. 
inequality $\\sup _{t\\in [s,1]} \\Vert h\\mapsto \\partial _h X^x_{s\\rightarrow t} \\Vert _{\\mathcal {L}(C^{2-{\\mathrm {var}}};\\mathbb {R}^d)} \\le e^{C \\llbracket A\\rrbracket _{p-{\\mathrm {var}};[s,1]}^p}.$ Taking the $L^m$ -norm on both sides, using (REF ), then readily yields (REF ).", "Estimate (REF ) then follows from the characterization of $D X^x_{s,t}$ as a random linear operator from $\\mathcal {H}^H$ to $\\mathbb {R}^d$ , combined with the functional embedding $\\mathcal {H}^H\\hookrightarrow C^{2-{\\mathrm {var}}}$ , see Lemma REF in Appendix for $H\\in (0,1/2)$ and recall that $\\mathcal {H}^H\\hookrightarrow C^{1-{\\mathrm {var}}}$ for $H\\ge 1/2$ .", "Remark 6.9 Results on differentiability beyond the usual Malliavin sense, in the sense of the existence of $\\partial _h X^x_{s,t}$ for $h$ belonging to a larger class than $\\mathcal {H}^H$ , were already observed for standard SDEs in [46] and have natural explanations in rough path theory, cf.", "[11], [32]; in these works however only $h\\in C^{\\tilde{p}-{\\mathrm {var}}}_t$ for some $\\tilde{p}<2$ are allowed.", "Here instead, not only are we able to reach $C^{2-{\\mathrm {var}}}_t$ , but the result can be further strengthened to allow for some $\\tilde{p}>2$ : indeed, the key point is a combination of estimate (REF ) and Lemma REF , which works as long as the condition $1/\\tilde{p}>1-1/p$ is satisfied." ], [ "McKean-Vlasov equations", "Armed with the stability estimate (REF ), we can now solve solve distribution dependent SDEs (henceforth DDSDEs) of the form $X_t = X_0 + \\int _0^t F_s(X_s,\\mu _s)\\mathrm {d}s + B^H_t, \\quad \\mu _t=\\mathcal {L}(X_t).$ The initial condition $X_0$ is assumed to be $\\mathcal {F}_0$ -measurable, in particular, independent of $B^H$ .", "The idea that estimates of the form (REF ), where the difference of two drifts only appears in the weaker norm of $L^q_t C^{\\alpha -1}_x$ , can be exploited to solve DDSDEs was first introduced in [39]; the results presented here can be regarded as a natural extension, requiring less time regularity on the drift and allowing to cover $H>1$ as well.", "In particular, as in the previous sections, we will not need to exploit Girsanov transform, which instead played a prominent role in [39].", "Since our analysis also includes the case of distributional drifts $F$ , we provide a meaningful definition of solution; observe that in the case $F$ is actually continuous in the space variable (i..e $\\alpha >0$ ), it reduces to the classical one.", "Definition 7.1 Let $H\\in (0,\\infty )\\setminus \\mathbb {N}$ and $F:[0,1]\\times \\mathcal {P}(\\mathbb {R}^d)\\rightarrow C^\\alpha _x$ be a measurable function.", "We say that a tuple $(\\Omega ,\\mathbb {F},\\mathbb {P}; X,B^H)$ is a weak solution to (REF ) if: i) $B^H$ is an $\\mathbb {F}$ -fBm of parameter $H$ and $X$ is $\\mathbb {F}$ -adapted; ii) setting $b^X_t(\\cdot ):=F_t(\\cdot ,\\mathcal {L}(X_t))$ , it holds $b^X\\in L^q_t C^\\alpha _x$ for some $(q,\\alpha )$ satisfying (); iii) $X$ solves the SDE associated to $b^X$ , in the sense of Section .", "Similarly to Definition REF , one can immediately extend the concepts of strong existence, pathwise uniqueness and uniqueness in law to the DDSDE (REF ).", "With a slight abuse, we will use the terminology input data of the DDSDE (REF ) to indicate both the pair $(X_0,B^H)$ (when discussing strong existence and/or pathwise uniqueness of solutions) and the pair $(\\xi ,\\mu ^H)=(\\mathcal {L}(X_0),\\mathcal {L}(B^H))$ (when discussing uniqueness in 
law).", "We are now ready to formulate our main assumptions on the drift $F$ .", "Assumption 7.2 Let $H\\in (0,\\infty )\\setminus \\mathbb {N}$ fixed, $F:[0,1]\\times \\mathcal {P}(\\mathbb {R}^d)\\rightarrow C^\\alpha _x$ be a measurable function; we assume that there exist parameters $(\\alpha ,q)$ satisfying () and $h\\in L^q_t$ such that: i) for all $t\\in [0,1],\\, \\mu \\in \\mathcal {P}(\\mathbb {R}^d)$ , it holds $\\Vert F_t(\\cdot ,\\mu )\\Vert _{C^\\alpha } \\le h_t$ ; ii) for all $t\\in [0,1],\\, \\mu ,\\nu \\in \\mathcal {P}(\\mathbb {R}^d)$ , it holds $\\Vert F_t(\\cdot ,\\mu )-F_t(\\cdot ,\\nu )\\Vert _{C^{\\alpha -1}} \\le h_t \\mathbb {W}_1(\\mu ,\\nu )$ ; Remark 7.3 Basic examples of $F$ satisfying Assumption (REF ) include the following (for their verification, we refer to Section 2.1 from [39]): i) The true McKean–Vlasov case $F_t(\\cdot ,\\mu )=f_t(\\cdot )+(g_t\\ast \\mu )(\\cdot )$ for $f,g\\in L^q_t C^\\alpha _x$ ; ii) Mean-dependence of the form $F_t(\\cdot ,\\mu )=f_t(\\cdot \\,-\\langle \\mu \\rangle )$ , where $\\langle \\mu \\rangle :=\\int y\\,\\mu (\\mathrm {d}y)$ ; iii) The mean $\\langle \\mu \\rangle $ in ii) can be replaced by other functions of statistics (e.g.", "$\\langle \\psi ,\\mu \\rangle $ for $\\psi \\in C^1$ ); one can also take linear combinations of the previous examples.", "Also, in Assumption REF we only considered the 1-Wasserstein distance $\\mathbb {W}_1$ , but in fact all the results below would also hold if we replaced $\\mathbb {W}_1$ with $\\mathbb {W}_p$ for some $p\\in (1,\\infty )$ .", "Theorem 7.4 Let $F$ satisfy Assumption REF .", "Then for any $\\mathcal {F}_0$ -measurable $X_0 \\in L^1_\\omega $ (respectively $\\xi \\in \\mathcal {P}_1(\\mathbb {R}^d)$ ) strong existence, pathwise uniqueness and uniqueness in law of solutions to (REF ) holds.", "We start by showing strong existence and pathwise uniqueness by means of a contraction argument.", "Specifically, suppose we are given a filtered probability space $(\\Omega ,\\mathbb {F},\\mathbb {P})$ on which are defined an $\\mathbb {F}$ -fBm $B^H$ and an $\\mathcal {F}_0$ -measurable $X_0\\in L^1_\\omega $ .", "Consider the space of adapted processes $ E:=\\Big \\lbrace Y:[0,1]\\rightarrow \\mathbb {R}^d:\\, Y \\text{ is adapted to }\\mathcal {F}_t, \\sup _{t\\in [0,1]} \\Vert Y_t\\Vert _{L^1}<\\infty \\Big \\rbrace $ which is a complete metric space when endowed with the metric $d_E(Y,Z):=\\sup _{t\\in [0,1]} e^{-\\lambda \\int _0^t |h_s|^q \\mathrm {d}s} \\Vert Y_t-Z_t\\Vert _{L^1}$ for a parameter $\\lambda >0$ to be chosen later.", "Define a map $I$ acting on $E$ by letting $I(Y)$ be the unique solution $X$ to the SDE driven by $B^H$ , with initial data $X_0$ (c.f.", "Remark REF ) and drift $b^Y_t:=F_t(\\cdot \\, ,\\mathcal {L}(Y_t))$ ; the map $I$ is well defined thanks to Point i) from Assumption REF , ensuring the solvability of such SDE.", "It is clear that $X$ is a solution to the DDSDE (REF ) on the space $(\\Omega ,\\mathbb {F},\\mathbb {P})$ with input data $(X_0,B^H)$ if and only if it is a fixed point for $I$ .", "We claim that $I$ is a contraction on $(E,d_E)$ ; indeed, given any $Y^1,\\,Y^2$ , by the stability estimate (REF ) and Assumption REF , for any $t\\in [0,1]$ it holds $\\Vert I(Y^1)_t-I(Y^2)_t\\Vert _{L^1}^q& \\lesssim \\int _0^t \\Vert F_s(\\cdot \\,,\\mathcal {L}(Y^1_s))-F_s(\\cdot \\,,\\mathcal {L}(Y^2_s))\\Vert _{C^{\\alpha -1}}^q \\mathrm {d}s\\\\& \\lesssim \\int _0^t |h_s|^q \\,\\mathbb {W}_1(\\mathcal {L}(Y^1_s),\\mathcal {L}(Y^2_s))^q \\mathrm 
{d}s\\\\& \\lesssim d_E(Y^1,Y^2)^q \\int _0^t |h_s|^q\\, e^{q \\lambda \\int _0^s |h_r|^q \\mathrm {d}r} \\mathrm {d}s\\\\& \\lesssim (q \\lambda )^{-1}\\, e^{\\lambda q \\int _0^t |h_r|^q \\mathrm {d}r}\\, d_E(Y^1,Y^2)^q.$ Rearranging the terms, we overall find the estimate $d_E\\big (I(Y^1),I(Y^2)\\big )^q \\le \\frac{C}{q \\lambda }\\, d_E(Y^1,Y^2)^q,$ from which contractivity follows by choosing $\\lambda $ appropriately.", "Pathwise uniqueness then readily follows; as the argument holds for any choice of $\\mathbb {F}$ , we can take $\\mathcal {F}_t=\\sigma \\lbrace X_0, B^H_s, s\\le t\\rbrace $ , yielding strong existence.", "To establish uniqueness in law, it suffices to observe that, if $X$ is a weak solution, then we can construct a copy of it on any reference probability space simply by solving therein the SDE associated to $b^X_t(\\cdot )= F_t(\\cdot ,\\mathcal {L}(X_t))$ : by weak uniqueness for the SDE associated to $b^X$ , see Remark REF , the solution $\\tilde{X}$ constructed in this way must have the same law as the original $X$ and thus be a solution to the DDSDE itself.", "Given any pair of weak solutions $X^1,X^2$ , possibly defined on different probability spaces, we can then construct a coupling $(\\tilde{X}^1,\\tilde{X}^2)$ of them on the same probability space, solving the DDSDE for the same imput data $(X_0,B^H)$ ; by the previous argument, it must hold $\\tilde{X}^1\\equiv \\tilde{X}^2$ and so $\\mathcal {L}(X^1)=\\mathcal {L}(X^2)$ .", "Remark 7.5 In fact, going through the same strategy of proof as in [39] not only allows to establish wellposedness of the DDSDE, but also to establish stability estimates for DDSDEs.", "Specifically, assume we are given fields $F^i$ , $i=1,2$ , satisfying Assumption (REF ) for the same parameters $(\\alpha ,q)$ and functions $h^i\\in L^q_t$ and define the quantity $\\Vert F^1-F^2\\Vert _{\\alpha ,q} :=\\bigg ( \\int _0^1 \\sup _{\\mu \\in \\mathcal {P}_1} \\big \\Vert F^1_t(\\cdot \\,,\\mu )-F^2_t(\\cdot \\,,\\mu )\\big \\Vert _{C^{\\alpha -1}_x}^q \\mathrm {d}t\\bigg )^{1/q}.$ Then for any $m\\in [1,\\infty )$ there exists a constant $C$ , depending on $\\alpha ,q,H,m,d, \\Vert h^i\\Vert _{L^q}$ , such that any two solutions $X^i$ defined on the same space with input data $(X_0^i, B^H)$ satisfy $\\big \\Vert \\Vert X^1-X^2\\Vert _{C^0_t} \\big \\Vert _{L^m} \\le C \\big (\\Vert X^1_0-X^2_0|\\Vert _{L^m} + \\Vert F^1-F^2\\Vert _{\\alpha ,q}\\big );$ in the case of solutions defined on different spaces, using (REF ) and coupling argument, we can easily deduce bounds on the Wasserstein distances of their laws.", "In the true McKean–Vlasov case, namely $F^i_t(\\cdot \\,,\\mu )=f^i_t+g^i_t\\ast \\mu $ with $f^i,g^i\\in L^q_t C^\\alpha _x$ , it holds $\\Vert F^1-F^2\\Vert _{q,\\alpha } \\lesssim \\Vert f^1-f^2\\Vert _{L^q_t C^{\\alpha -1}_x} + \\Vert g^1-g^2\\Vert _{L^q_t C^{\\alpha -1}_x}.$" ], [ "Weak compactness and weak existence", "So far we have shown that, under suitable conditions on $b$ , we have (very) strong existence and uniqueness results.", "However, as we are now going to show, stochastic sewing also allows to establish weak existence and weak compactness of solutions in the regime (REF ), similarly to [3], [2].", "This section is also our way to say something about the equation in the case $q>2$ that goes beyond the trivial inclusion $L^q_t\\subset L^2_t$ .", "Since here we assume $\\alpha <0$ , it is a priori not fully clear what it means to be a weak solution to the equation.", "Contrary to Section , where a robust 
interpretation was accomplished by the nonlinear Young formalism, here we will adopt the following weaker notion, adapted from [4].", "This allows us to prove weak existence more generally, see however Remark REF for a comparison.", "Definition 8.1 Let $b\in L^q_t C^\alpha _x$ for some $\alpha <0$ .", "We say that a tuple $(\Omega ,{\mathbb {F}},\mathbb {P};X,B^H)$ consisting of a filtered probability space and a pair of continuous processes $(X,B^H)$ is a weak solution to the SDE $X_t = x_0 + \int _0^t b_s(X_s)\mathrm {d}s + B^H_t$ if $B^H$ is an $\mathbb {F}$ -fBm of parameter $H$ , $X$ is $\mathbb {F}_t$ -adapted, and $X_t=x_0+V_t+B^H_t$ , where the process $V_t$ has the property that, for any sequence of smooth bounded functions $b^n$ converging to $b$ in $L^q_t C^\alpha _x$ , it holds that $\Big \Vert \int _0^t b^n(s,X_s)\mathrm {d}s - V_t\Big \Vert _{C^0_t} \rightarrow 0 \quad \mathbb {P}\text{-a.s.}$ Theorem 8.2 Let $H\in (0,1)$ and $b\in L^q_t C^\alpha _x$ satisfy (REF ).", "Then for any $x_0\in \mathbb {R}^d$ there exists a weak solution to the SDE (REF ) in the sense of Definition REF .", "Remark 8.3 The above result is only interesting in the regime $H\in (0,1)$ and $q>2$ .", "Indeed, if $H>1$ then the condition $\alpha >1/2-1/(2H)$ automatically enforces $\alpha >0$ , for which existence is known by classical Peano-type results; instead if $q\le 2$ , strong existence and uniqueness follow from the previous sections.", "First we need the following lemma.", "Lemma 8.4 Let $H\in (0,1)$ and $(\alpha ,q)$ be parameters satisfying (REF ); let $X$ be a process defined on a filtered probability space $(\Omega ,\mathbb {F},\mathbb {P})$ of the form $X=\varphi +B^H$ , where $B^H$ is an $\mathbb {F}$ -fBm and $\varphi $ satisfies the property (REF ).", "For any $f\in L^q_t C^\delta _x$ , $\delta >0$ , let $w_f:=w_{f,\alpha ,q}$ ; then for any $m\in [2,\infty )$ there exists a deterministic constant $K=K(m,d,\alpha ,q,H,\Vert b\Vert _{L^q_t C^\alpha _x})$ , such that $\bigg \Vert \Big \Vert \int _s^t f_r(X_r)\mathrm {d}r\Big \Vert _{L^m|\mathcal {F}_s} \bigg \Vert _{L^\infty } \le K w_f(s,t)^{1/q} |t-s|^{\alpha H + 1/q^{\prime }}.$ As a consequence, for any $\varepsilon >0$ there exists a constant $K=K(\varepsilon ,m,d,\alpha ,q,H,\Vert b\Vert _{L^q_t C^\alpha _x})$ such that $\bigg \Vert \Big \Vert \int _0^\cdot f_r(X_r)\mathrm {d}r \Big \Vert _{C^{\alpha H + 1/q^{\prime }-\varepsilon }_t} \bigg \Vert _{L^m} \le K \Vert f\Vert _{L^q_t C^\alpha _x}.$ By linearity and density, this allows us to continuously extend in a unique way the map $f\mapsto \int _0^\cdot f_r(X_r)\mathrm {d}r$ from $L^q_t \overline{C^\alpha _x}$ to $L^m_\omega C^0_t$ .", "We only sketch the proof, since it is very similar to others already presented (cf.", "Lemma REF ).", "By Lemma REF and the stochastic sewing (again in the version of [33]), setting $A_{s,t}:=\mathbb {E}_s \int _s^t f_r(\varphi _s+B^H_r)\mathrm {d}r$ and denoting $\beta =1/q^{\prime }+\alpha H$ , standard computations imply $\Vert A_{s,t}\Vert _{L^\infty } & \lesssim |t-s|^\beta w_f(s,t)^{1/q},\\\big \Vert \Vert \mathbb {E}_s\delta A_{s,u,t}\Vert _{L^m| \mathcal {F}_s} \big \Vert _{L^\infty }& \lesssim |t-s|^{\beta -H} w_f(s,t)^{1/q} \big \Vert \Vert \varphi _{s,u}\Vert _{L^m|\mathcal {F}_s}\big \Vert _{L^\infty }\\& \lesssim |t-s|^{2\beta -H} w_f(s,t)^{1/q} w_b(s,t)^{1/q}.$ Under condition (REF ), one can
check that the hypotheses of [33] are satisfied, which easily yields all the desired estimates.", "For passing to the weak limit, it will also be convenient to recall the Volterra representation of fractional Brownian motion: for any $H\in (0,1)$ there exists a function $(s,t)\mapsto K_H(s,t)$ such that for an $\mathbb {F}$ -fBm $B^H$ one can construct a standard $\mathbb {F}$ -Bm $W$ having the property $B^H_t=\int _0^t K_H(t,s)\,\mathrm {d}W_s$ .", "[Proof of Theorem REF ] As before, we can assume $x_0=0$ without loss of generality.", "Let $b\in L^q_t C^\alpha _x$ with $(q,\alpha )$ satisfying (REF ) be given.", "Since (REF ) is a strict inequality, we can assume without loss of generality that $q<\infty $ , $b\in L^q_t\overline{C^\alpha _x}$ , and in particular there exists a sequence $\lbrace b^n\rbrace _n\subset L^q_t C^1_x$ such that $b^n\rightarrow b$ in $L^q_t C^\alpha _x$ and $\int _s^t \Vert b^n_r\Vert _{C^\alpha _x}^q \mathrm {d}r \le \int _s^t \Vert b_r\Vert _{C^\alpha _x}^q \mathrm {d}r$ (this can be accomplished by taking $b^n_r=\rho _{1/n}\ast b_r$ for some standard mollifiers $\lbrace \rho _\delta \rbrace _{\delta >0}$ , up to replacing $\alpha $ with $\alpha -\varepsilon $ ).", "To each such $b^n$ we can associate a solution $X^n=\varphi ^n + B^H$ , where $\varphi ^n$ satisfy the bound (REF ) for $w=w_{\alpha ,b,q}$ ; this implies in particular that $\Vert \varphi ^n_{s,t}\Vert _{L^m} \lesssim |t-s|^{\alpha H + 1/q^{\prime }}$ uniformly in $n$ , which by Kolmogorov's theorem readily implies the tightness of the family $\lbrace \varphi ^n\rbrace _n$ .", "As a consequence, the family $\lbrace (\varphi ^n, B^H, W)\rbrace _n$ is tight in $C_t\times C_t\times C_t$ .", "By Prokhorov's and Skorokhod's theorems, up to extracting a subsequence (not relabelled), we can construct another probability space $(\tilde{\Omega },\tilde{\mathcal {F}},\tilde{\mathbb {P}})$ on which there exists a sequence $\lbrace (\tilde{\varphi }^n,\tilde{B}^{H,n}, \tilde{W}^n)\rbrace _n$ such that $(\tilde{\varphi }^n,\tilde{B}^{H,n}, \tilde{W}^n)$ is distributed as $(\varphi ^n, B^H, W)$ for each $n$ and $(\tilde{\varphi }^n,\tilde{B}^{H,n}, \tilde{W}^n) \rightarrow (\tilde{\varphi },\tilde{B}^{H}, \tilde{W})$ $\tilde{\mathbb {P}}$ -a.s.
in $C_t\\times C_t\\times C_t$ .", "We claim that $\\tilde{X}=\\tilde{\\varphi }+\\tilde{B}^H$ is a weak solution to (REF ), in the sense of Definition REF .", "For notational simplicity, we drop the tildes for the rest of the proof.", "First of all, it is clear by passing to the limit that $B^H$ is still distributed as an fBm of parameter $H$ , $W$ as a standard Bm and that the relation $B^H_t = \\int _0^t K_H(t,s) \\mathrm {d}W_s$ still holds.", "Next, $X^n=\\varphi ^n+B^{H,n}$ is still a solution to the SDE (REF ) with regular drift $b^n$ , thus $\\varphi ^n$ is adapted to $\\mathcal {F}^n_t:=\\sigma \\lbrace B^{H,n}_s:s\\le t\\rbrace =\\sigma \\lbrace W^n_s: s\\le t\\rbrace $ ; so for any $s<t$ , any $t_1,\\ldots , t_n \\le s$ and any pair of continuous bounded functions $F,G$ it holds $\\mathbb {E}[F(W^n_{s,t})G(W^n_{t_1}, \\varphi ^n_{t_1},\\ldots ,W^n_{t_n},\\varphi ^n_{t_n})]= \\mathbb {E}[F(W^n_{s,t})]\\,\\mathbb {E}[G(W^n_{t_1}, \\varphi ^n_{t_1},\\ldots ,W^n_{t_n},\\varphi ^n_{t_n})].$ Passing to the limit as $n\\rightarrow \\infty $ , the same relation holds for $W$ and $\\varphi $ in place of $W^n$ and $\\varphi ^n$ , which shows that $W$ is an $\\mathbb {F}$ -Bm for $\\mathcal {F}_t:=\\sigma \\lbrace (W_s,\\varphi _s):s\\le t\\rbrace $ ; in particular, $B^H$ is an $\\mathbb {F}$ -fBm.", "Similarly, since $\\varphi ^n$ uniformly satisfy the bound (REF ) w.r.t.", "$\\mathcal {F}^n_t$ , it holds E[|ns,t|m   G(Wnt1, nt1,...,Wntn,ntn)] (w(s,t)1/q |t-s|H + 1/q')m E[G(Wnt1, nt1,...,Wntn,ntn)].", "Passing to the limit as $n\\rightarrow \\infty $ we can conclude that $\\varphi $ satisfies the bound (REF ) w.r.t.", "the filtration $\\mathcal {F}_t$ .", "Finally, it remains to show that $X$ satisfies the relation $X_t= V_t + B^H_t$ for $V$ satisfying the requirements of Definition REF ; firstly, since $B^H$ is an $\\mathbb {F}$ -fBm and $\\varphi $ satisfies (REF ), Lemma REF applies, so that the process $V_t:=\\int _0^\\cdot b_r(X_r)\\mathrm {d}r$ is well defined and by linearity satisfies the property $\\mathbb {E}\\bigg [ \\Big \\Vert \\int _0^\\cdot f_r(X_r)\\mathrm {d}r - V_\\cdot \\Big \\Vert _{C^{\\alpha H + 1/q^{\\prime }-\\varepsilon }_t}^m \\bigg ]^{1/m} \\lesssim \\Vert f-b\\Vert _{L^q_t C^\\alpha _x}.$ for any regular $f$ .", "Therefore if we take an auxiliary $\\ell \\in \\mathbb {N}$ and write $& \\mathbb {E}[\\Vert \\int _0^\\cdot b^n_r(X^n_r) \\mathrm {d}r - V_\\cdot \\Vert _{C^0_t}]\\\\& \\le \\mathbb {E}[\\Vert \\int _0^\\cdot [b^n-b^\\ell ]_r(X^n_r) \\mathrm {d}r \\Vert _{C^0_t}] + \\mathbb {E}[\\Vert \\int _0^\\cdot [b^\\ell _r(X^n_r)-b^\\ell _r(X_r)] \\mathrm {d}r \\Vert _{C^0_t}]\\\\&\\qquad + \\mathbb {E}[\\Vert \\int _0^\\cdot b^\\ell _r(X_r) \\mathrm {d}r - V_\\cdot \\Vert _{C^0_t}]\\\\& \\lesssim \\Vert b^n-b^\\ell \\Vert _{L^q_t C^\\alpha _x} + \\mathbb {E}[\\Vert \\int _0^\\cdot [b^\\ell _r(X^n_r)-b^\\ell _r(X_r)] \\mathrm {d}r \\Vert _{C^0_t}] + \\Vert b-b^\\ell \\Vert _{L^q_t C^\\alpha _x},$ then we get $\\limsup _{n\\rightarrow \\infty } \\mathbb {E}[\\Vert \\int _0^\\cdot b^n_r(X^n_r) \\mathrm {d}r - V_\\cdot \\Vert _{C^0_t}] \\lesssim 2 \\Vert b-b^\\ell \\Vert _{L^q_t C^\\alpha _x}.$ By the arbitrariness of $\\ell $ , we conclude that $\\varphi ^n = \\int _0^\\cdot b^n_r(X^n_r) \\mathrm {d}r \\rightarrow V = \\varphi $ , so that $X$ is a weak solution.", "Remark 8.5 In a certain range of exponents the weak solution constructed above is also a pathwise solution in the nonlinear Young sense.", "Let us only sketch the power counting, omitting the 
arbitrarily small exponents everywhere.", "The averaged field $T^{B^H}b$ can be constructed as in Corollary REF , as an element of $C^{2-{\mathrm {var}}}_t C^{\alpha +1/(2H),\mathrm {loc}}_x$ .", "Furthermore, we know from Lemma REF that $\varphi \in C^{r-{\mathrm {var}}}$ with $1/r=1+\alpha H$ .", "Therefore if $\frac{1}{2}+\left(\alpha + \frac{1}{2H} \right)(\alpha H + 1) >1,$ then the nonlinear Young integral $\int _0^\cdot (T^{B^H}b)_{\mathrm {d}t}(\varphi _t)$ is well-defined and agrees with $V$ .", "Note that the regime (REF ) is nontrivial in the sense that it allows for drifts for which strong uniqueness is not known, since the left-hand side is strictly greater than 1 for $\alpha =1-1/(2H)$ .", "We also remark that (REF ) is sufficient, but not necessary to define $\int _0^\cdot (T^{B^H}b)_{\mathrm {d}t}(\varphi _t)$ , since for particular cases of $b$ the averaged field $T^{B^H}b$ may enjoy better regularity than $C^{2-{\mathrm {var}}}_t C^{\alpha +1/(2H),\mathrm {loc}}_x$ , see e.g.", "[2] for such situations." ], [ "$\rho $ -irregularity", "The goal of this section is to derive some pathwise properties for solutions of (REF ), without appealing to Girsanov transform.", "Indeed, in the time-homogeneous setting Girsanov is unavailable for $H>1$ (in the case $H>1$ and $b\in C^\alpha _x$ , $\int _0^\cdot b(B^H_r)\mathrm {d}r\in C^{1+\alpha }_t$ , so that Girsanov would require the condition $1+\alpha >H+1/2$ ; covering the whole regime $\alpha >1-1/(2H)$ would lead to the condition $1-1/(2H)>H-1/2$ , which cannot hold for $H>1$ ), while in the time-dependent case it does not apply for any value of $H>0$ (since we can allow drifts which are only $L^q$ in time, for values of $q$ arbitrarily close to 1).", "For more details see Appendix .", "As a meaningful representative of a larger class of pathwise properties, we will focus on the notion of $\rho $ -irregularity, first introduced in [12] in the context of regularisation by noise for ODEs; it has since found several applications in regularisation for PDEs, see [19], [21], [20], [36], and more recently in the inviscid mixing properties of shear flows [37].", "Let us also mention the recent work [62] for an alternative notion of irregularity, partially related to this one.", "Definition 9.1 Let $\gamma \in (0,1)$ , $\rho >0$ .", "We say that a function $h\in C([0,1],\mathbb {R}^d)$ is $(\gamma ,\rho )$ -irregular if there exists a constant $N$ such that $\Big | \int _s^t e^{i\xi \cdot h_r}\mathrm {d}r \Big | \le N\, |\xi |^{-\rho }\, |t-s|^{\gamma }\quad \forall \, \xi \in \mathbb {R}^d,\ 0\le s\le t\le 1;$ we denote by $\Vert \Phi ^h\Vert _{\mathcal {W}^{\gamma ,\rho }}$ the optimal constant.", "We say that $h$ is $\rho $ -irregular for short if there exists $\gamma >1/2$ such that it is $(\gamma ,\rho )$ -irregular.", "It was shown in [12], [36] that for any $H\in (0,\infty )\setminus \mathbb {N}$ , $B^H$ is $\rho $ -irregular for any $\rho <1/(2H)$ ; we establish the same for a class of perturbations of $B^H$ satisfying the following assumption.", "Assumption 9.2 Let $\varphi :[0,1]\rightarrow \mathbb {R}^d$ be a continuous adapted process which admits moments of any order; moreover, there exist $\beta > 0$ and a control $w$ such that, for any $m \in [1,\infty )$ , there exists a constant $C_m$ such that $\big \Vert \Vert \varphi _t -\mathbb {E}_s \varphi _t\Vert _{L^1|\mathcal {F}_s}\big \Vert _{L^m} \le C_m w(s,t)^{1/2}|t-s|^{\beta }\quad \forall \, 0\le s\le t\le 1.$ Theorem 9.3 Let $H\in (0, +\infty ) \setminus \mathbb {N}$ and let $\varphi $ satisfy Assumption
REF with $\\beta =H$ ; then $X:=\\varphi +B^H$ is almost surely $\\rho $ -irregular for any $\\rho < 1 /(2H)$ .", "More precisely, for any such $\\rho $ and any $m\\in [1,\\infty )$ there exists $\\gamma =\\gamma (m,\\rho )>1/2$ such that $\\mathbb {E}[ \\Vert \\Phi ^X\\Vert _{\\mathcal {W}^{\\gamma ,\\rho }}^m ]<\\infty .$ Remark 9.4 Let us make some observations on Assumption REF and Theorem REF : Lemmas REF and REF provide sufficient conditions on $q$ and $\\alpha $ that guarantee that solutions of (REF ) with $b\\in L^q C^\\alpha $ satisfy Assumption REF .", "Note that in some cases we can therefore obtain $\\rho $ -irregularity of solutions but not uniqueness.", "Our usual toolbox could in principle be also used to study Gaussian moments of $\\Phi ^X$ (under a somewhat stronger condition than (REF )).", "For simplicity we do not pursue this in detail.", "In terms of exponents, the condition (REF ) appears to require the same order of “regularity”, namely $1/2+H$ , as Girsanov transform (see Appendix ).", "However, (REF ) is a significantly weaker condition: instead of controlling the usual increments $\\varphi _t-\\varphi _s$ , one only needs to control the stochastic increments $\\varphi _t-\\mathbb {E}_s\\varphi _t$ , which can be much smaller.", "In [12], [36] the additive perturbation problem is studied in detail; the authors try to establish, in a deterministic framework, whether a path $h+\\varphi $ can be shown to be $\\rho $ -irregular, given the knowledge that $h$ is so and $\\varphi $ enjoys higher Hölder regularity.", "Such results usually come with a loss of regularity in the exponent $\\rho $ at least $1/2$ , cf.", "[12] and [36]; the use of more probabilistic arguments and stochastic sewing techniques from Theorem REF instead allows to cover the whole range $\\rho <1/(2H)$ without difficulties.", "In order to conclude, it suffices to prove the following claim: for any $\\rho <1/(2H)$ , we can find $\\gamma >1/2$ such that for any $m \\in [1, \\infty )$ it holds $\\Big \\Vert \\int _s^t e^{i \\xi \\cdot X_r} \\mathrm {d}r \\Big \\Vert _{L^m} \\lesssim _m |t-s|^{\\gamma } |\\xi |^{-\\rho }\\quad \\forall \\, \\xi \\in \\mathbb {R}^d, 0 \\leqslant s \\leqslant t \\leqslant 1.$ It's clear that in (REF ) we can restrict to $| \\xi | \\geqslant 1$ (or $| \\xi | \\geqslant R$ ) whenever needed, since for small $\\xi $ the estimate is trivial.", "Once (REF ) is obtained, we can deduce that, for any $\\tilde{\\rho }<\\rho -d/m$ , it holds $\\mathbb {E} \\left[ \\int _{\\mathbb {R}^d} | \\xi |^{\\tilde{\\rho }} \\bigg | \\int _s^t e^{i\\xi \\cdot X_r} \\mathrm {d}r \\bigg |^m \\mathrm {d}\\xi \\right]= \\mathbb {E} \\big [\\Vert \\mu ^X_{s, t} \\Vert _{\\mathcal {F} L^{\\tilde{\\rho }, m}}^m\\big ] \\lesssim |t-s|^{\\gamma m};$ here we follow the notation from [36], so that $\\mu ^X_{s,t}$ denotes the occupation measure of $X$ on $[s,t]$ and $\\mathcal {F} L^{\\rho , m}$ denote Fourier–Lebesgue spaces.", "Applying Lemma 57 from [36] to (REF ), together with Assumption REF , yields $\\mathbb {E} \\big [\\Vert \\mu ^X_{s, t} \\Vert _{\\mathcal {F} L^{\\tilde{\\rho }, \\infty }}^m\\big ]& \\lesssim \\mathbb {E} \\big [\\Vert X \\Vert _{C^0_t}^d \\Vert \\mu ^X_{s, t} \\Vert _{\\mathcal {F} L^{\\tilde{\\rho },m}}^m\\big ]\\\\& \\lesssim \\mathbb {E} \\big [\\Vert X \\Vert _{C^0_t}^{2 d} \\big ]^{1 / 2}\\, \\mathbb {E} \\big [\\Vert \\mu ^X_{s, t} \\Vert _{\\mathcal {F} L^{\\tilde{\\rho }, m}}^{2 m}\\big ]^{1/2}\\lesssim |t-s|^{\\gamma m}.$ By the arbitrariness of $m$ and Kolmogorov's 
continuity criterion, one then deduces that $\mu ^X\in C^{\tilde{\gamma }}_t \mathcal {F}L^{\tilde{\rho },\infty }_x$ for any $\tilde{\gamma }<\gamma $ and $\tilde{\rho }<\rho $ ; but this is equivalent to saying that $X$ is $(\tilde{\gamma },\tilde{\rho })$ -irregular, cf.", "[36].", "The arbitrariness of $\rho <1/(2H)$ readily implies the conclusion as well as the moment estimate (REF ).", "In order to prove the claim (REF ), we will apply Lemma REF , with $(S,T)=(0,1)$ , and $n=m$ .", "Fix $\xi \in \mathbb {R}^d$ ; arguing as in Lemma REF , it is easy to check that $\int _0^\cdot e^{i\xi \cdot X_r} \mathrm {d}r$ is the stochastic sewing of $ A_{s, t} := \mathbb {E}_{s-(t-s)} \int _s^t e^{i \xi \cdot (\mathbb {E}_{s-(t-s)} \varphi _r + B^H_r)} \mathrm {d}r. $ Note that for any $r\in (s,t)$ one has $|\mathbb {E}_{s-(t-s)} e^{i \xi \cdot B^H_r}|=|\mathbb {E}_{s-(t-s)} e^{i \xi \cdot (B^H_r-\mathbb {E}_{s-(t-s)} B^H_r)}|=e^{-c|\xi |^2 |r-s+(t-s)|^{2H}}$ and therefore we have $| A_{s,t} | \lesssim e^{-c |\xi |^2 |t-s|^{2 H}} |t-s|\lesssim |\xi |^{-\rho } |t-s|^{1-\rho H}$ where we used the basic inequality $e^{-c|y|^2}\lesssim |y|^{-\rho }$ .", "By the assumption on $\rho $ , $\varepsilon _1:=1/2-\rho H>0$ , and therefore the condition (REF ) is satisfied with $w_1(s,t)=N|\xi |^{-2\rho }(t-s)$ .", "As for the second condition of Lemma REF , we have for $(s,u,t)\in \overline{[0,1]}_\le ^3$ that $\Vert \mathbb {E}_{s_-} \delta A_{s, u, t} \Vert _{L^m}& \le \int _u^t \big \Vert \mathbb {E}_{u-(t-u)}e^{i \xi \cdot B^H_r} (e^{i \xi \cdot \mathbb {E}_{s-(t-s)} \varphi _r} - e^{i \xi \cdot \mathbb {E}_{u-(t-u)} \varphi _r}) \big \Vert _{L^m}\mathrm {d}r\\&\quad +\int _s^u \big \Vert \mathbb {E}_{s-(t-s)}e^{i \xi \cdot B^H_r} (e^{i \xi \cdot \mathbb {E}_{s-(t-s)} \varphi _r} - e^{i \xi \cdot \mathbb {E}_{s-(u-s)} \varphi _r}) \big \Vert _{L^m}\mathrm {d}r=:I+J.$ As usual, $I$ and $J$ are treated identically, so we only consider the former.", "We write $I & = \int _u^t e^{-c | \xi |^2 | r - u + t - u |^{2 H}} \big \Vert e^{i \xi \cdot \mathbb {E}_{s - (t - s)} \varphi _r} - e^{i \xi \cdot \mathbb {E}_{u-(t-u)} \varphi _r} \big \Vert _{L^m}\mathrm {d}r\\& \le e^{- \tilde{c} | \xi |^2 | t - s |^{2 H}} | \xi | \int _u^t \Vert \mathbb {E}_{s - (t - s)} \varphi _r -\mathbb {E}_{u - (t - u)} \varphi _r \Vert _{L^m}\mathrm {d}r\\& \lesssim e^{- \tilde{c} |\xi |^2 |t-s|^{2H}} |\xi |\, w(s_-,t)^{1/2} |t-s|^{1+H},$ where in the second line we used $(s,u,t)\in \overline{[0,1]}_{\le }^3$ and in the last one we used Assumption REF .", "Applying again the basic inequality $e^{- \tilde{c}|y|^{2}} \lesssim |y|^{-1-\rho }$ , we obtain $\Vert \mathbb {E}_{s_-} \delta A_{s,u,t} \Vert _{L^m} \lesssim |\xi |^{-\rho }\, w(s_-,t)^{1/2}\, |t-s|^{1-\rho H}.$", "Therefore, condition (REF ) is satisfied with $\varepsilon _2=\varepsilon _1=1/2-\rho H$ and $w_2(s,t)=N|\xi |^{-\rho }w^{1/2}(s,t)(t-s)^{1/2}$ and by (REF ) we finally get $\Big \Vert \int _s^t e^{i \xi \cdot X_r} \mathrm {d}r \Big \Vert _{L^m} \lesssim |\xi |^{-\rho }\, |t-s|^{1/2+\varepsilon _1}\big (1+w(s,t)\big ),$ yielding (REF )."
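, "For concreteness, let us also spell out the exponent count in the last step (a quick sanity check, using only quantities already introduced above): since $\varepsilon _1=1/2-\rho H$ and $w(0,1)<\infty $ , the bound above yields the claim (REF ) with $\gamma =1/2+\varepsilon _1=1-\rho H$ , which is strictly larger than $1/2$ precisely because $\rho <1/(2H)$ ."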
], [ "Applications to transport and continuity equations", "Having established well-posedness of the characteristic lines $\\mathrm {d}X_t= b_t(X_t)\\mathrm {d}t + \\mathrm {d}B^H_t$ , a next natural step is to investigate the associated stochastic transport equation $\\partial _t u + b\\cdot \\nabla u + \\dot{B}^H\\cdot \\nabla u =0.$ Natural questions in PDE theory and regularization by noise for (REF ) are its well-posedness, see e.g.", "[30], and propagation of the regularity of initial data, first addressed in [28].", "Both features need not be true in the absence of noise: among the vast literature, let us mention [53], where counterexamples to uniqueness are provided even for Sobolev differentiable drifts; [6] where it is shown how uniqueness of the generalized Lagrangian flow (from the DiPerna Lions theory [26]) does not imply uniqueness of trajectorial solutions to the ODE; finally [7], providing sharp examples of DiPerna-Lions flows can at most propagate a \"logarithmic derivative\" of regularity of the initial data $u_0$ , but not better.", "As we will see in Theorem REF , the presence of $B^H$ allows to prevent all such pathologies, yielding nontrivial regularisation by noise results even in situations where uniqueness of solutions is already known to hold.", "Rather than working directly with equation (REF ), it is useful to introduce the the transformation $\\tilde{u}_t(x)=u_t(x+B^H_t)$ , $\\tilde{b}_t(x)=b_t(x+B^H_t)$ , which relates it to $\\partial _t \\tilde{u} + \\tilde{b}\\cdot \\nabla \\tilde{u}=0.$ This transformation formally assumes $B^H$ to be differentiable, but the resulting equation (REF ) is then well defined (at least for bounded $b$ ) for any continuous path $B^H$ .", "More rigorously, we are implicitly assuming that the chain rule applies, which amounts to working with $B^H$ as a geometric rough path, cf.", "[13].", "In the Brownian case it means that the multiplicative noise must be interpreted in the Stratonovich sense like in [30].", "The resulting PDE (REF ) can then be regarded as a PDE with random drift $\\tilde{b}$ rather than a stochastic PDE.", "A nice feature of the regular regime $H>1$ , included in our setting, is that here $B^H$ is $\\mathbb {P}$ -a.s. 
differentiable and so (REF ) is perfectly well defined and the above transformation is completely rigorous (as soon as $(u_t)_{t\in [0,1]}$ is bounded in some function space) and does not involve any “choice” of the rough lift.", "The above considerations motivate the following definition; from now on we will use both notations $\tilde{u}_t(x)$ and $\tilde{u}_t(x;\omega )$ to denote $u_t(x+B^H_t(\omega ))$ , in order to stress the fixed realization $\omega \in \Omega $ whenever needed; similarly for $\tilde{b}_t(x)$ and $\tilde{b}_t(x;\omega )$ .", "Definition 10.1 For a fixed $\omega \in \Omega $ , we say that $v$ is a weak solution to the PDE (REF ) associated to $\tilde{b}_t(x;\omega )$ if $v\in L^1_t W^{1,1,\mathrm {loc}}_{x}$ , $\tilde{b}\cdot \nabla v\in L^1_t L^{1,\mathrm {loc}}_x$ and for any smooth, compactly supported function $\varphi :[0,1]\times \mathbb {R}^d\rightarrow \mathbb {R}$ and any $t\in [0,1]$ it holds $\langle \varphi _t,v_t\rangle -\langle \varphi _0,v_0 \rangle =\int _0^t [\langle \partial _t\varphi _s ,v_s\rangle + \langle \varphi _s, \tilde{b}_s(\cdot ;\omega )\cdot \nabla v_s \rangle ] \mathrm {d}s.$ We say that a stochastic process $u$ is a pathwise solution to the stochastic transport equation (REF ) if for $\mathbb {P}$ -a.e.", "$\omega \in \Omega $ , the corresponding $\tilde{u}_t(x;\omega )$ is a weak solution to (REF ) associated to $\tilde{b}_t(x;\omega )$ , in the above sense.", "Finally, a pathwise solution is said to be strong if it is adapted to the filtration generated by $B^H$ .", "Similarly to equations (REF )-(REF ), we can relate the stochastic continuity equation $\partial _t \mu + \nabla \cdot (b\, \mu ) + \dot{B}^H\cdot \nabla \mu =0$ to its counterpart $\partial _t \tilde{\mu } + \nabla \cdot (\tilde{b}\, \tilde{\mu })=0$ by means of the transformation $\tilde{\mu }_t(x;\omega )=\mu _t(x+B^H_t(\omega ))$ .", "In the next definition, $\mathcal {M}_+=\mathcal {M}_+(\mathbb {R}^d)$ denotes the set of nonnegative finite Radon measures.", "For $\mu \in \mathcal {M}_+$ we write $\mu \in L^p_x$ to mean that $\mu $ admits an $L^p$ -integrable density w.r.t.", "the Lebesgue measure, in which case with a slight abuse we will identify $\mu (\mathrm {d}x)=\mu (x) \mathrm {d}x$ .", "Definition 10.2 For a fixed $\omega \in \Omega $ , we say that $\rho $ is a weak solution to the PDE () associated to $\tilde{b}_t(x;\omega )$ if $\rho _t\in \mathcal {M}_+$ for Lebesgue-a.e.", "$t$ , $\int _0^1\int _{\mathbb {R}^d} |\tilde{b}_t(x;\omega )| \rho _t(\mathrm {d}x)\, \mathrm {d}t<\infty $ and for any smooth, compactly supported $\varphi :[0,1]\times \mathbb {R}^d\rightarrow \mathbb {R}$ and any $t\in [0,1]$ it holds $\langle \varphi _t,\rho _t\rangle -\langle \varphi _0,\rho _0 \rangle =\int _0^t \langle \partial _t\varphi _s + \tilde{b}_s(\cdot ;\omega )\cdot \nabla \varphi _s,\rho _s\rangle \mathrm {d}s.$ We say that a stochastic process $\mu $ is a pathwise solution to the stochastic continuity equation (REF ) if for $\mathbb {P}$ -a.e.", "$\omega \in \Omega $ , the corresponding $\tilde{\mu }_t(x;\omega )$ is a weak solution to () associated to $\tilde{b}_t(x;\omega )$ , in the above sense.", "Finally, a pathwise solution is said to be strong if it is adapted to the filtration generated by $B^H$ .", "As is clear from Definitions REF -REF , in order to treat equations (REF )-() in an analytically weak sense, we need $\tilde{b}$ to enjoy some local integrability and thus to be a well defined measurable
function (up to equivalence class).", "Therefore in the case of coefficients $b\in L^q_t C^\alpha _x$ with $\alpha <0$ , throughout this section we will additionally impose that $b\in L^r_t L^r_x + L^r_t L^\infty _x \quad \text{for some } r>1;$ we denote by $r^{\prime }$ the conjugate exponent, i.e.", "$1/r^{\prime }+1/r=1$ .", "In the case $\alpha >0$ , we will use the convention $r^{\prime }=1$ ; in this case, under (), condition (REF ) is immediately satisfied for $r=q$ .", "Remark 10.3 Let us collect a few useful observations: i) By standard arguments, whenever a weak solution $v$ to (REF ) exists (in the sense of Definition REF ), then (up to redefining it on a Lebesgue negligible set of $t\in [0,1]$ ) $t\mapsto v_t$ is continuous w.r.t.", "suitable weak topologies; in particular it always makes sense to talk about initial/terminal conditions for such equations.", "The same considerations apply for pathwise solutions, as well as solutions to the continuity equations (REF )-(); from now on we will always work with these weakly continuous in time versions, without specifying it.", "ii) If $\rho $ is a weak solution to (), then its mass $\rho _t(\mathbb {R}^d)$ is preserved by the dynamics.", "In particular, if $\rho \in L^q_t L^p_x$ , then it actually belongs to $L^q_t L^{\tilde{p}}_x$ for all $\tilde{p}\in [1,p]$ .", "iii) In Definition REF we enforce identity (REF ) to hold for all $\varphi $ smooth and compactly supported, but by standard density arguments it is clear that as soon as more information on $v$ (resp.", "$u$ ) and $b$ is available, then (REF ) can be extended to a larger class of $\varphi $ , as long as all the terms appearing are well defined.", "For instance if $v\in L^\infty _t W^{1,p}_x$ and $b\in L^\infty _t L^\infty _x$ , then it suffices to know that $\varphi , \partial _t \varphi \in L^1_t L^{p^{\prime }}_x$ , $p^{\prime }$ being the conjugate of $p$ .", "iv) Definitions REF -REF and the above observations extend easily to the case of backward equations on $[0,T]$ with terminal conditions $u_T$ , $\mu _T$ , rather than forward ones with initial $u_0$ , $\mu _0$ .", "The next statement summarizes the main result of this section.", "Theorem 10.4 Let $b$ satisfy Assumption () and additionally (REF ) if $\alpha <0$ .", "Then: i) For any $p\in [1,\infty )$ and $u_0\in W^{1,p}_x$ , there exists a strong pathwise solution $u$ to (REF ), which belongs to $L^m_\omega L^\infty _t W^{1,p}_x$ for all $m\in [1,\infty )$ .", "If moreover $p>r^{\prime }$ , then path-by-path uniqueness holds in the class $L^\infty _t W^{1,p}_x$ , in the following sense: there exists an event $\tilde{\Omega }$ of full probability such that, for all $\omega \in \tilde{\Omega }$ and all $v_0\in W^{1,p}_x$ , there can exist at most one weak solution $v \in L^\infty _t W^{1,p}_x$ to the PDE (REF ) associated to $\tilde{b}_t(x;\omega )$ and with initial condition $v_0$ .", "ii) For any $p\in [1,\infty )$ and any positive measure $\mu _0\in L^p_x$ , there exists a strong pathwise solution $\mu $ to (REF ), which belongs to $L^m_\omega L^\infty _t L^p_x$ for all $m\in [1,\infty )$ .", "If moreover $p\ge r^{\prime }$ , then path-by-path uniqueness holds in the class $L^\infty _t L^p_x$ , in the following sense: there exists an event $\tilde{\Omega }$ of full probability such that, for all $\omega \in \tilde{\Omega }$ and all $\mu _0\in L^p_x$ , there can exist at most one weak solution $\rho \in L^\infty _t L^p_x$ to the
PDE () associated to $\tilde{b}_t(x;\omega )$ and with initial condition $\mu _0$ .", "Theorem REF will be proved mostly by analytical techniques, once they are combined with the information coming from the previous sections.", "We will first establish existence of pathwise solutions to equations (REF )-(REF ) satisfying the desired a priori bounds, see Proposition REF .", "Uniqueness will be established by two different methods.", "In the transport case, we will first establish a priori bounds for solutions of the dual equation (backward continuity equation) in Proposition REF and then perform a duality argument (Lemma REF ); see [26] and [5] for significant precursors in this direction.", "For the continuity equation we will instead infer uniqueness from Ambrosio's superposition principle (cf.", "Theorem REF ) combined with our path-by-path uniqueness results (Theorems REF -REF ).", "To the best of our knowledge, this is the first time these two results are combined in this way to infer path-by-path uniqueness for (REF ); let us mention however that in [5] the opposite idea is developed, proving path-by-path uniqueness for the SDE starting from the corresponding results for (REF ).", "Before giving the proofs, let us first recall a few notations and basic facts.", "We will use $\Psi $ to denote the random flow of diffeomorphisms associated to the (random) ODE $\dot{\varphi }= \tilde{b}_t(\varphi )$ , where we recall the fundamental relation $X_t=\varphi _t+B^H_t$ .", "Similarly to Section , we will use the notations $J^x_{s\rightarrow t} := \nabla \Psi _{s\rightarrow t}(x)$ , $K^x_{s\rightarrow t} := (J^x_{s\rightarrow t})^{-1} = \nabla \Psi _{s \leftarrow t}(\Psi _{s\rightarrow t}(x))$ ; we also set $j_{s\rightarrow t}(x):=\det J^x_{s\rightarrow t}$ , similarly for $j_{s\leftarrow t}(x)$ .", "Recall that, in the case of regular $b$ , we have the relations $j_{s\rightarrow t}(x) = \exp \Big (\int _s^t {\rm div} b_r (\Phi _{s\rightarrow r}(x)) \mathrm {d}r\Big ), \ \ j_{s\leftarrow t}(x) = \exp \Big (-\int _s^t {\rm div} b_r (\Phi _{r\leftarrow t}(x)) \mathrm {d}r\Big ).$ Proposition 10.5 Let $b$ satisfy Assumption () and additionally (REF ) if $\alpha <0$ .", "Then: i) For any $p\in [1,\infty )$ and $u_0\in W^{1,p}_x$ , there exists a strong pathwise solution $u$ to (REF ), which belongs to $L^m_\omega L^\infty _t W^{1,p}_x$ for all $m\in [1,\infty )$ .", "ii) For any $p\in [1,\infty )$ and any positive measure $\mu _0$ such that $\mu _0\in L^p_x$ , there exists a strong pathwise solution $\mu $ to (REF ), which belongs to $L^m_\omega L^\infty _t L^p_x$ for all $m\in [1,\infty )$ .", "Let us first assume $b$ to be smooth and derive estimates which only depend on $\Vert b\Vert _{L^q_t C^\alpha _x}$ .", "In this case, the unique solution to (REF ) is given by $\tilde{u}_t(x)= u_0(\Psi _{0\leftarrow t}(x))$ .", "Let us give the bound on $\Vert \nabla \tilde{u}\Vert _{L^p}$ , the one for $\Vert \tilde{u}\Vert _{L^p}$ being similar; also observe that these quantities coincide with the corresponding ones for $u$ .", "It holds $\sup _{t\in [0,1]} \Vert \nabla \tilde{u}_t\Vert _{L^p}^p& = \sup _{t\in [0,1]} \int _{\mathbb {R}^d} |\nabla \tilde{u}_t(x)|^p \mathrm {d}x\\& \le \sup _{t\in [0,1]} \int _{\mathbb {R}^d} |\nabla u_0(\Psi _{0\leftarrow t}(x))|^p |\nabla \Psi _{0\leftarrow t}(x)|^p \mathrm {d}x\\& = \sup _{t\in [0,1]} \int _{\mathbb {R}^d} |\nabla u_0(y)|^p |\nabla \Psi
_{0\\leftarrow t}(\\Psi _{0\\rightarrow t}(y))|^p j_{0\\rightarrow t}(y) \\mathrm {d}y\\\\& \\le \\int _{\\mathbb {R}^d} |\\nabla u_0(y)|^p \\sup _{t\\in [0,1]} |K_{0\\rightarrow t}(y))|^p \\, \\sup _{t\\in [0,1]} j_{0\\rightarrow t}(y)\\, \\mathrm {d}y$ Taking the $L^m_\\omega $ -norm on both sides, we arrive at $\\Big \\Vert \\sup _{t\\in [0,1]} \\Vert \\nabla \\tilde{u}_t\\Vert _{L^p}^p \\Big \\Vert _{L^m}& \\le \\int _{\\mathbb {R}^d} |\\nabla u_0(y)\\Vert _{L^p} \\Big \\Vert \\sup _{t\\in [0,1]} |K_{0\\rightarrow t}(y))|^p \\, \\sup _{t\\in [0,1]} j_{0\\rightarrow t}(y) \\Big \\Vert _{L^m} \\, \\mathrm {d}y\\\\& \\le \\Vert \\nabla u_0\\Vert _{L^p}^p\\, \\sup _{y\\in \\mathbb {R}^d} \\Big \\Vert \\sup _{t\\in [0,1]} |K_{0\\rightarrow t}(y))|^p \\Big \\Vert _{L^{2m}}^{1/2} \\, \\Big \\Vert \\sup _{t\\in [0,1]} j_{0\\rightarrow t}(y) \\Big \\Vert _{L^{2m}}^{1/2}.$ The finiteness of arbitrary moments of $\\sup _{t\\in [0,1]} j_{0\\rightarrow t}(y)$ comes from identity (REF ), combined with Lemma REF applied to $h={\\rm div} b$ and $\\varphi _r=\\Phi _{0\\rightarrow r}(y)-B^H_r$ .", "This estimate is clearly uniform in $y\\in \\mathbb {R}^d$ .", "The similar bounds for $K$ follow as in Section , using the fact that $K$ solves the linear Young equation ().", "Up to relabelling $m=m^{\\prime } p$ , we have thus shown that $\\Vert \\nabla u\\Vert _{L^m_\\omega L^\\infty _t L^p_x} \\lesssim \\Vert \\nabla u_0\\Vert _{L^p_x}.$ We now pass to the case of $\\mu $ .", "For regular $b$ , solutions are know to given by the identity $\\tilde{\\mu }_t(x) = \\mu _0(\\Psi _{0\\leftarrow t}(x)) \\exp \\Big (-\\int _0^t {\\rm div} b_r (\\Phi _{r \\leftarrow t}(x)) \\mathrm {d}r \\Big ).$ Arguing similarly to above, it holds $\\Big \\Vert \\sup _{t\\in [0,1]} \\Vert \\tilde{\\mu }_t\\Vert _{L^p_x}^p \\Vert _{L^m}& = \\Big \\Vert \\sup _{t\\in [0,1]} \\int _{\\mathbb {R}^d} |\\mu _0(\\Psi _{0\\leftarrow t}(x))|^p \\exp \\Big (-p\\int _0^t {\\rm div} b_r (\\Phi _{r \\leftarrow t}(x)) \\mathrm {d}r \\Big ) \\mathrm {d}x \\Big \\Vert _{L^m} \\\\& = \\Big \\Vert \\sup _{t\\in [0,1]} \\int _{\\mathbb {R}^d} |\\mu _0(y)|^p \\exp \\Big ((1-p) \\int _0^t {\\rm div} b_r (\\Phi _{0\\rightarrow r}(y)) \\mathrm {d}r \\Big ) \\mathrm {d}y \\Big \\Vert _{L^m}\\\\& \\le \\sup _{y\\in \\mathbb {R}^d} \\Big \\Vert \\sup _{t\\in [0,1]} \\exp \\Big ((1-p) \\int _0^t {\\rm div} b_r (\\Phi _{0\\rightarrow r}(y)) \\Big \\Vert _{L^m} \\int _{\\mathbb {R}^d} |\\mu _0(y)|^p \\mathrm {d}y,$ and so invoking again Lemma REF and relabelling $m$ we arrive at $\\Vert \\tilde{\\mu }\\Vert _{L^m_\\omega L^\\infty _t L^p_x} \\lesssim \\Vert \\mu _0\\Vert _{L^p_x}.$ Having established the uniform estimates (REF )-(REF ), both existence claims for general $b$ now follow from a standard compactness argument, see for instance [58] or [30], so we will only sketch it quickly.", "Consider smooth approximations $b^n\\rightarrow b$ , $u_0^n\\rightarrow u_0$ and denote by $u^n$ the associated solutions; by reflexivity of $L^p_t L^p_\\omega W^{1,p}_x$ , we can extract a (not relabelled) subsequence such that $u^n\\rightarrow u$ weakly in $L^p_t L^p_\\omega L^p_x$ .", "By properties of weak convergence, the limit $u$ still belongs to $L^m_\\omega L^\\infty _t W^{1,p}_x$ and is progressively measurable, since the sequence $u^n$ was so; also observe that, as in Remark REF -i), we can assume $u$ to be weakly continuous in time, so that it is in fact adapted.", "By the linear structure of the PDE, one can then finally verify that $u$ is indeed a pathwise 
solution; let us stress that this is where, for $\alpha <0$ , the assumption (REF ) is crucial, since otherwise it is unclear whether $b^n\cdot \nabla u^n$ converges to $b\cdot \nabla u$ in a weak sense.", "The case of $\mu $ can be treated similarly.", "We now turn to establishing existence of sufficiently regular solutions to the continuity equation with well-chosen terminal data; handling the backward nature of the equation yields slightly worsened estimates compared to those of Proposition REF .", "Proposition 10.6 Let $T\in [0,1]$ and $\mu _T\in L^p$ compactly supported.", "Then there exists a pathwise solution $\mu $ to (REF ) on $[0,T]$ with terminal condition $\mu \vert _{t=T}=\mu _T$ ; moreover for any $m\in [1,\infty )$ and any $\tilde{p}<p$ it holds $\mu \in L^\infty _t L^m_\omega L^{\tilde{p}}_x$ .", "We can assume ${\rm supp} \mu _T \subset B_R$ for some $R\ge 1$ .", "We will assume $b$ to be regular and show how to derive suitable a priori estimates; the general case then follows by arguing similarly to Proposition REF .", "The solution is given explicitly by $\mu _t(x) = \mu _T(\Psi _{t\rightarrow T}(x)) \exp \Big ( \int _t^T {\rm div} b_r (\Psi _{t\rightarrow r} (x)) \mathrm {d}r\Big ).$ For any fixed $t\in [0,T]$ , it holds $\int _{\mathbb {R}^d} |\mu _t(x)|^{\tilde{p}} \mathrm {d}x& = \int _{\mathbb {R}^d} |\mu _T(\Psi _{t\rightarrow T}(x))|^{\tilde{p}} \exp \Big ( \tilde{p}\int _t^T {\rm div} b_r (\Psi _{t\rightarrow r} (x)) \mathrm {d}r\Big ) \mathrm {d}x\\& = \int _{\mathbb {R}^d} |\mu _T(y)|^{\tilde{p}} \exp \Big ( (\tilde{p}-1) \int _t^T {\rm div} b_r (\Psi _{r\leftarrow T} (y)) \mathrm {d}r\Big ) \mathrm {d}y\\& \le \Vert \mu _T \Vert _{L^p_x}^{\tilde{p}} \bigg ( \int _{B_R} \exp \Big ( \frac{ p(\tilde{p}-1)}{p-\tilde{p}} \int _t^T {\rm div} b_r (\Psi _{r\leftarrow T} (y)) \mathrm {d}r\Big ) \mathrm {d}y \bigg )^{1-\frac{\tilde{p}}{p}}$ where in the last passage we used first ${\rm supp} \mu _T\subset B_R$ and then Hölder's inequality.", "Applying again the change of variable $x=\Psi _{t\leftarrow T}(y)$ and the formula for $j_{t\rightarrow T}(x)$ , overall we find a constant $\kappa =\kappa (p,\tilde{p})$ such that $\big \Vert \Vert \mu _t\Vert _{L^{\tilde{p}}_x} \big \Vert _{L^m}\le \Vert \mu _T\Vert _{L^p_x}^{\tilde{p}}\,\bigg \Vert \int _{\Psi _{t\rightarrow T}(B_R)} \exp \Big ( \kappa \int _t^T {\rm div} b_r (\Psi _{t\rightarrow r} (y)) \mathrm {d}r\Big ) \mathrm {d}y \bigg \Vert _{L^m}^{1-\frac{\tilde{p}}{p}}.$ It remains to estimate the last quantity appearing on the r.h.s.", "above.", "To this end, let us set $N_y := j_{t\rightarrow T}(y)^\kappa $ ; as usual by Lemma REF it holds $\Vert N_y\Vert _{L^m}\lesssim 1$ , with an estimate uniform in $y$ , $t$ and $T$ and only depending on $\Vert b\Vert _{L^q_t C^\alpha _x}$ .", "Thanks to estimates (REF ) and Lemma REF , one can show that for any $\tilde{m}\in [1,\infty )$ and $\lambda >1$ , uniformly in $t\in [0,T]$ it holds $\big \Vert \Vert \Psi _{t\rightarrow T}\Vert _{C^{0,\lambda }} \big \Vert _{L^{\tilde{m}}} <\infty \quad \text{ where } \quad \Vert \Psi _{t\rightarrow T}\Vert _{C^{0,\lambda }}:= \sup _{|x|\ge 1} |x|^{-\lambda } |\Psi _{t\rightarrow T}(x)|;$ this is because one can first show finiteness of the associated $C^{\eta ,\lambda ^{\prime }}_x$ -norm by Lemma REF , and then deduce from it that $\Psi _{t\rightarrow T}$ also
belongs to $C^{0,\\lambda }_x$ for $\\lambda =\\lambda ^{\\prime }+\\eta $ (such an embedding readily follows from the definitions of such spaces).", "Therefore it holds $\\Big \\Vert \\int _{\\Psi _{t\\rightarrow T}(B_R)} N_y \\mathrm {d}y \\Big \\Vert _{L^m}& \\le \\sum _{n\\in \\mathbb {N}} \\Big \\Vert \\chi _{ \\Vert \\Psi _{t\\rightarrow T}\\Vert _{C^{0,\\lambda }} \\in [n,n+1)} \\int _{\\Psi _{t\\rightarrow T}(B_R)} N_y\\, \\mathrm {d}y \\Big \\Vert _{L^m}\\\\& \\le \\sum _{n\\in \\mathbb {N}} \\Big \\Vert \\int _{B_{(n+1)R^\\lambda }} \\chi _{ \\Vert \\Psi _{t\\rightarrow T}\\Vert _{C^{0,\\lambda }}\\ge n} N_y\\, \\mathrm {d}y \\Big \\Vert _{L^m}\\\\& \\le \\sum _{n\\in \\mathbb {N}} \\int _{B_{(n+1)R^\\lambda }} \\Vert \\chi _{ \\Vert \\Psi _{t\\rightarrow T}\\Vert _{C^{0,\\lambda }}\\ge n}\\Vert _{L^{2m}} \\Vert N_y\\Vert _{L^{2m}}\\, \\mathrm {d}y\\\\& \\lesssim \\sum _{n\\in \\mathbb {N}} (n+1)^d R^{\\lambda d}\\, \\mathbb {P}(\\Vert \\Psi _{t\\rightarrow T}\\Vert _{C^{0,\\lambda }}\\ge n)^{\\frac{1}{2m}}\\\\& \\lesssim R^{\\lambda d} \\sum _{n\\in \\mathbb {N}} n^{d-\\frac{\\tilde{m}}{2m}} \\big \\Vert \\Vert \\Psi _{t\\rightarrow T}\\Vert _{C^{0,\\lambda }}\\big \\Vert _{L^{\\tilde{m}}}^{\\frac{\\tilde{m}}{2m}}$ where in the last passage we used Markov's inequality.", "Choosing $\\tilde{m}$ large enough, so to make the series convergent, then yields the conclusion.", "The importance of integrability of solutions to the backward continuity equation comes from the following (deterministic) duality lemma.", "Lemma 10.7 Let $b$ satisfy (REF ) and let $v$ , $\\rho $ be analytic weak solutions to respectively the forward transport and backward continuity equations associated to $\\tilde{b}_t(\\cdot ;\\omega )$ ; assume that $v\\in L^\\infty _t W^{1,p_1}_x$ and $\\rho \\in L^{r^{\\prime }}_t (L^1_x\\cap L^{p_2}_x)$ for some $p_1,\\,p_2$ satisfying $p_1,\\,p_2\\in [1,\\infty ),\\quad \\frac{1}{p_1}+\\frac{1}{p_2}+\\frac{1}{r}=1.$ Then it holds $\\langle v_T, \\rho _T\\rangle = \\langle v_0, \\rho _0\\rangle .$ The argument is relatively standard in the analytic community and is based on the use of mollifiers and commutators, see the seminal work [26].", "Let $v^\\varepsilon =v\\ast g^\\varepsilon $ for some standard mollifiers $g^\\varepsilon $ ; since $v^\\varepsilon $ is spatially smooth, we can test it against $\\rho $ (cf.", "Remark REF -iii)), which combined with the respective PDEs yields the relation $\\langle v^\\varepsilon _T , \\rho _T\\rangle - \\langle v^\\varepsilon _0 , \\rho _0\\rangle = \\int _0^T \\langle (\\tilde{b}\\cdot \\nabla v)^\\varepsilon - \\tilde{b}\\cdot \\nabla v^\\varepsilon , \\rho \\rangle \\mathrm {d}s.$ In order to conclude, it then suffices to show that the r.h.s.", "converges to 0 as $\\varepsilon \\rightarrow 0$ .", "Recall that by assumption $b= b^1+b^2$ with $b^1\\in L^r_t L^r_x$ , $b^2\\in L^r_t L^\\infty _x$ , so that the same holds for $\\tilde{b}$ ; we show how to deal with $\\tilde{b}^1$ , the other case being similar.", "By our assumptions, Hölder's inequality and properties of mollifiers, it is easy to check that both $(\\tilde{b}^1\\cdot \\nabla v)^\\varepsilon $ and $\\tilde{b}^1\\cdot \\nabla v^\\varepsilon $ converge to $\\tilde{b}^1\\cdot \\nabla v$ in $L^r_t L^{\\tilde{r}}_x$ , where $\\tilde{r}\\in (1,\\infty )$ is defined by $1/\\tilde{r}=1/r+1/p_1$ .", "But then $\\bigg | \\int _0^T \\langle (\\tilde{b}^1_t\\cdot \\nabla v_t)^\\varepsilon - \\tilde{b}^1_t\\cdot \\nabla v^\\varepsilon _t, \\rho _t\\rangle \\mathrm {d}t \\bigg |& 
\\le \\int _0^T \\Vert (\\tilde{b}^1_t\\cdot \\nabla v_t)^\\varepsilon - \\tilde{b}^1_t\\cdot \\nabla v^\\varepsilon _t\\Vert _{L^{\\tilde{r}}_x}\\, \\Vert \\rho _t\\Vert _{L^{p_2}_x} \\mathrm {d}t\\\\& \\le \\Vert (\\tilde{b}^1\\cdot \\nabla v)^\\varepsilon - \\tilde{b}^1\\cdot \\nabla v^\\varepsilon \\Vert _{L^r_t L^{\\tilde{r}}_x} \\Vert \\rho \\Vert _{L^{r^{\\prime }}_t L^{p_2}_x}$ where the last term converges to 0.", "As a final ingredient, we give the aforementioned Ambrosio's superposition principle; we stress that the statement is deterministic, but we will apply it for fixed realizations of the random drift $\\tilde{b}(\\cdot ;\\omega )$ .", "Although the full statement is a bit technical, we invite the reader to consult the (more heuristical) Theorem 3.1 from [1] to understand the role it plays in our analysis.", "Theorem 10.8 (Theorem 3.2 from [1]) Let $\\mu $ be a weak solution to the continuity equation $\\partial _t \\mu + \\nabla \\cdot (\\mu f)=0$ such that $\\mu _t\\in \\mathcal {M}_+(\\mathbb {R}^d)$ and $\\int _0^1 \\int _{\\mathbb {R}^d} |f_t(x)|\\, \\mu _t(\\mathrm {d}x)\\, \\mathrm {d}t<\\infty .$ Then $\\mu $ is a superposition solution, namely there exists a measure $\\eta \\in \\mathcal {M}_+(\\mathbb {R}^d \\times C_t)$ , concentrated on the pairs $(x,\\varphi )$ satisfying the relation $\\varphi _t = x + \\int _0^t f_s(\\varphi _s)\\mathrm {d}s,$ such that $\\mu _t = (e_t)_\\sharp \\eta $ for all $t\\in [0,1]$ , where $e_t(x,\\varphi )=\\varphi _t$ is the evaluation map and $(e_t)_\\sharp \\eta $ denote the pushforward measure.", "We are now ready to give the [Proof of Theorem REF ] Both existence statements come from Proposition REF , so we only need to check path-by-path uniqueness.", "Let us start with the continuity equation.", "We claim that the the event $\\tilde{\\Omega }$ of full probability on which path-by-path uniqueness for (REF ) holds is the one for which we have uniqueness of solutions to the ODE $\\dot{\\varphi }_t=\\tilde{b}_t(\\varphi _t;\\omega )$ for all $x\\in \\mathbb {R}^d$ ; its existence is granted by Theorems REF -REF , which additionally imply that $\\varphi _t=\\Psi _{0\\rightarrow t}(x;\\omega )$ .", "Indeed, suppose we are given any weak solution $\\rho \\in L^\\infty _t L^p_x$ to (); by our assumptions, and possibly Remark REF -ii), it holds $\\int _0^1 \\int _{\\mathbb {R}^d} |\\tilde{b}_t(x;\\omega )| \\mu _t(\\mathrm {d}x) \\mathrm {d}t<\\infty $ .", "We can then apply Theorem REF to deduce that $\\rho $ is a superposition solution; since uniqueness of solutions to $\\dot{\\varphi }_t=\\tilde{b}_t(\\varphi _t;\\omega )$ holds, we readily deduce that $\\rho _t = \\Psi _{0\\rightarrow t}(\\cdot ;\\omega )_\\sharp \\rho _0$ , which gives uniqueness.", "We now pass to consider the transport case; by linearity, we only need to find an event $\\tilde{\\Omega }$ on which any weak solution $v\\in L^\\infty _t W^{1,p}_x$ to (REF ) with $v_0=0$ is necessarily the trivial one.", "By Remark REF -i), we know that any solution is weakly continuous in time, thus it suffices to verify that $v_t=0$ for all $t$ in a dense subset of $[0,1]$ .", "To this end, let us fix a countable collection $\\lbrace f^n\\rbrace _n$ of compactly supported smooth functions which are dense in $C^\\infty _x$ and a countable dense set $\\Gamma \\subset [0,1]$ .", "By Proposition REF , for any $f^n$ and $\\tau \\in \\Gamma $ , we can find a pathwise solution $\\mu ^{\\tau ,n}$ to the backward equation on $[0,\\tau ]$ which $\\mathbb {P}$ -a.s. 
belongs to $L^q_t L^q_x$ for all $q\\in [1,\\infty )$ .", "Since everything is countable, we can then find an event $\\tilde{\\Omega }\\subset \\Omega $ on which $\\mu ^{\\tau ,n}(\\omega )$ are all defined at once and have the above regularity; we claim that this is the desired event where uniqueness of weak solutions to (REF ) in $L^\\infty _t W^{1,p}_x$ holds.", "Indeed, since $q$ is arbitrarily large and $p>r^{\\prime }$ , we can apply Lemma REF and use the fact that $v_0=0$ to deduce that $0 = \\langle v_0, \\mu ^{\\tau ,n}(\\cdot ;\\omega )\\rangle = \\langle v_\\tau , f^n\\rangle \\quad \\forall \\, \\tau \\in \\Gamma ,\\, f^n;$ by density of $f^n$ , it follows that $v_{\\tau }=0$ for all $\\tau \\in \\Gamma $ , which by density of $\\Gamma $ and continuity finally implies $v\\equiv 0$ .", "Remark 10.9 In [38], the authors show how to solve the transport equation (REF ) in a pathwise manner under the assumption that $T^{B^H}b \\in C^\\gamma _t C^2_x$ for some $\\gamma >1/2$ ; in this case, one can treat purely distributional drifts $b$ , without enforcing (REF ).", "However, this assumption is satisfied under more restrictive conditions than (), e.g.", "if $b\\in L^\\infty _t C^\\alpha _x$ for some $\\alpha >2-1/(2H)$ .", "We believe that existence and uniqueness for (REF ) (resp.", "(REF )) should hold under () even when $\\alpha <0$ , without the need for (REF ), but we leave this problem for future investigations." ], [ "Kolmogorov continuity type criteria", "Let us recall (a conditional version of) the classical Azuma–Hoeffding inequality.", "Lemma A.1 Let $k\\in \\mathbb {N}$ and $\\lbrace Y_i\\rbrace _{i=0}^k$ be a sequence of $\\mathbb {R}^d$ -valued martingale differences with respect to some filtration $\\lbrace \\mathcal {F}_i\\rbrace _{i=0}^k$ , with $Y_0=0$ ; assume that there exist deterministic constants $\\lbrace \\delta _i\\rbrace _{i=1}^k$ such that $\\mathbb {P}$ -a.s. $|Y_i|\\le \\delta _i$ for all $i$ .", "Then for $S_j:=\\sum _{i=1}^j Y_i,\\qquad \\Lambda :=\\delta _1^2+\\cdots +\\delta _k^2,$ one has the $\\mathbb {P}$ -a.s. 
inequality $\\mathbb {E}\\bigg [ \\exp \\Big (\\frac{|S_k|^2}{4 d \\Lambda }\\Big )\\bigg \\vert \\mathcal {F}_0\\bigg ]\\le 3.$ The proof goes along the same lines as standard Azuma–Hoeffding; since we haven't found a direct reference in the literature, we present it here.", "Firstly, observe that we can reduce ourselves to the case $d=1$ by reasoning componentwise, the general one following from a simple application of conditional Jensen's inequality.", "Next, we claim that the following version of Hoeffding's lemma holds: given a random variable $X$ and a filtration $\\mathcal {F}$ such that $\\mathbb {E}[X\\vert \\mathcal {F}]=0$ and $a\\le X \\le b$ $\\mathbb {P}$ -a.s., it holds $\\mathbb {E}[\\exp (\\lambda X)\\vert \\mathcal {F}] \\le \\exp \\bigg ( \\frac{\\lambda ^2 (b-a)^2}{8}\\bigg )\\quad \\forall \\, \\lambda \\in \\mathbb {R}.$ By homogeneity, it suffices to prove (REF ) for $b-a=1$ ; in this case, we have the basic inequality $e^{\\lambda x} \\le (b-x)e^{\\lambda a} + (x-a)e^{\\lambda b}$ for all $x\\in [a,b]$ .", "Evaluating in $X$ and taking conditional expectation we obtain $\\mathbb {E}[e^{\\lambda X}\\vert \\mathcal {F}]\\le (a+1)e^{\\lambda a} - a e^{\\lambda (a+1)} = e^{H(\\lambda )}, \\quad H(\\lambda ):=\\lambda a + \\log (1+a - e^\\lambda a).$ It can be readily checked that $H(0)=H^{\\prime }(0)=0$ and $H^{\\prime \\prime }(\\lambda )\\le 1/4$ , which by Taylor expansion yields $H(\\lambda )\\le \\lambda ^2/8$ and thus (REF ).", "Next, given the sequence $\\lbrace Y_k\\rbrace _k$ as in the hypothesis, we can assume by homogeneity $\\Lambda =1$ and apply recursively Hoeffding's lemma as follows: $\\mathbb {E}[ \\exp (\\lambda S_k)\\vert \\mathcal {F}_0]& = \\mathbb {E}\\big [ \\exp (\\lambda S_{k-1})\\, \\mathbb {E}[\\exp (\\lambda Y_k) \\vert \\mathcal {F}_{k-1}] \\big \\vert \\mathcal {F}_0\\big ]\\\\& \\le \\exp \\big ( \\lambda ^2 (2 \\delta _k)^2/8\\big ) \\mathbb {E}[ \\exp (\\lambda S_{k-1})\\vert \\mathcal {F}_0]\\le \\ldots \\le e^{\\lambda ^2/2}.$ By the inequality $e^{|x|}\\le e^x+e^{-x}$ and Chernoff's conditional bound, we have $\\mathbb {P}(|S_k|>a\\vert \\mathcal {F}_0) \\le \\inf _{\\lambda >0}e^{-\\lambda a}\\, \\mathbb {E}[ e^{\\lambda |S_k|}] \\le 2 \\inf _{\\lambda >0} e^{-\\lambda a + \\lambda ^2/2} = 2 e^{-a^2/2}.$ Therefore we arrive at $\\mathbb {E}\\bigg [ \\exp \\Big ( \\frac{|S_k|^2}{4}\\Big )\\bigg \\vert \\mathcal {F}_0\\bigg ]= \\int _0^{+\\infty } \\mathbb {P}\\bigg (|S_k|> \\sqrt{4|\\log s|}\\bigg )\\, \\mathrm {d}s \\le 1 + 2\\int _1^{+\\infty } s^{-2} \\mathrm {d}s = 3.$ Next we pass to a conditional Kolmogorov-type lemma, stated in a way which is suitable for our purposes.", "Lemma A.2 Let $E$ be a Banach space, $X:[0,T]\\rightarrow E$ be a continuous random process; suppose there exist $\\alpha ,\\,\\beta \\in (0,1]$ , a control $w:[0,T]^2\\rightarrow [0,\\infty )$ , a constant $K>0$ and a $\\sigma $ -algebra $\\mathcal {F}$ such that $\\mathbb {E}\\bigg [\\exp \\bigg (\\frac{\\Vert X_{s,t}\\Vert _E^2}{|t-s|^{2\\alpha } \\, w(s,t)^{2\\beta }}\\bigg )\\bigg \\vert \\mathcal {F}\\bigg ] \\le K \\quad \\forall \\, s<t.$ Then for any $\\varepsilon >0$ there exists a constant $\\mu =\\mu (\\varepsilon )>0$ such that $\\mathbb {E}\\bigg [\\exp \\bigg ( \\mu \\, \\sup _{s<t} \\frac{\\Vert X_{s,t}\\Vert _E^2}{|t-s|^{2(\\alpha -\\varepsilon )} \\, w(s,t)^{2\\beta }}\\bigg )\\bigg \\vert \\mathcal {F}\\bigg ] \\le e \\, K.$ Since we are already assuming $X$ to be continuous, the supremum over $s<t$ appearing in (REF ) equals the supremum 
over $s,\\, t$ taken over dyadic points.", "Up to rescaling, we may assume wlog $T=1$ .", "For any $n\\in \\mathbb {N}$ and $k\\in \\lbrace 0,\\ldots , 2^n\\rbrace $ , set $t^n_k= k 2^{-n}$ and define a random variable $J=\\sum _{n=1}^\\infty 2^{-2n} \\sum _{k=0}^{2^n-1} \\exp \\bigg ( \\frac{\\Vert X_{t^n_k,t^n_{k+1}}\\Vert _E^2}{2^{-2n\\alpha } w(t^n_k, t^n_{k+1})^{2\\beta }}\\bigg );$ by (REF ), it holds $\\mathbb {E}[J\\vert \\mathcal {F}]\\le K$ .", "Now take $s,t$ to be dyadic points satisfying $|t-s|\\sim 2^{-m}$ , then by standard arguments it holds $\\Vert X_{s,t}\\Vert _E \\lesssim \\sum _{n\\ge m} \\sup _k \\Vert X_{t^n_k,t^n_{k+1}} \\Vert _E;$ on the other hand, by the definition of $J$ , it holds $\\Vert X_{t^n_k,t^n_{k+1}} \\Vert _E\\le 2^{-n\\alpha } w(t^n_k, t^n_{k+1})^\\beta \\sqrt{\\log (2^{2n} J)}\\lesssim _\\varepsilon 2^{-n(\\alpha -\\varepsilon )} w(s,t)^\\beta (1+\\sqrt{\\log J})$ so that $\\Vert X_{s,t} \\Vert _E& \\lesssim \\sum _{n\\ge m} 2^{-n(\\alpha -\\varepsilon )} w(s,t)^\\beta (1+\\sqrt{\\log J})\\\\& \\lesssim 2^{-m(\\alpha -\\varepsilon )} w(s,t)^\\beta (1+\\sqrt{\\log J})\\sim |t-s|^{\\alpha -\\varepsilon } w(s,t)^\\beta (1+\\sqrt{\\log J}).$ Overall, we deduce the existence of a constant $C=C(\\varepsilon )>0$ such that $\\sup _{s<t} \\frac{\\Vert X_{s,t}\\Vert _E}{|t-s|^{\\alpha -\\varepsilon } w(s,t)^\\beta } \\le C (1+\\sqrt{\\log J}).$ The conclusion now readily follows by applying $x\\mapsto \\exp (\\mu x^2)$ on both sides of (REF ) and choosing $\\mu =\\mu (\\varepsilon )$ so that $2\\mu C^2(\\varepsilon ) =1$ , so that $\\mathbb {E}\\Big [\\exp \\big (\\mu C^2(1+\\sqrt{\\log J}\\big )^2\\Big \\vert \\mathcal {F}\\Big ]\\le \\mathbb {E}\\Big [\\exp \\big (2\\mu C^2(1+\\log J)\\big )\\Big \\vert \\mathcal {F}\\Big ]= e\\, \\mathbb {E}[J\\vert \\mathcal {F}] \\le e K.$ Going through an almost identical argument, one can also obtain the following result, whose proof is therefore omitted.", "Lemma A.3 Let $E$ be a Banach space, $X:[0,T]\\rightarrow E$ be a continuous random process; suppose there exist $\\alpha ,\\,\\beta \\in (0,1]$ , $m\\in (1,\\infty )$ , a control $w:[0,T]^2\\rightarrow [0,\\infty )$ , a constant $K>0$ and a $\\sigma $ -algebra $\\mathcal {F}$ such that $\\mathbb {E}\\big [\\,\\Vert X_{s,t}\\Vert _E^m\\big ]^{1/m} \\le K |t-s|^\\alpha w(s,t)^\\beta \\quad \\forall \\, s<t.$ Then for any $0<\\gamma <\\alpha -1/m$ there exists a constant $C=C(\\alpha ,\\gamma ,m)>0$ such that $\\mathbb {E}\\bigg [ \\bigg (\\sup _{s<t} \\frac{\\Vert X_{s,t}\\Vert _E}{|t-s|^{\\gamma } \\, w(s,t)^{\\beta }}\\bigg )^m \\bigg \\vert \\mathcal {F}\\bigg ]^{1/m} \\le C \\, K.$ Let us also mention that, although for simplicity we assumed in Lemmas REF and REF to work with a norm $\\Vert \\cdot \\Vert _E$ , it suffices for it to be a seminorm instead.", "Next, we need some basic lemmas in order to control the space-time regularity of random vector fields $A:[0,1]\\times \\mathbb {R}^d\\rightarrow \\mathbb {R}^m$ .", "We start by considering the time independent case.", "Lemma A.4 Let $F:\\mathbb {R}^d\\rightarrow \\mathbb {R}^n$ be a continuous field and suppose there exist $\\alpha \\in (0,1]$ , $m\\in (1,\\infty )$ , a constant $K>0$ and a $\\sigma $ -algebra $\\mathcal {F}$ such that $\\Vert F(x)-F(y)\\Vert _{L^m\\vert \\mathcal {F}} \\le K |x-y|^\\alpha \\quad \\forall \\, x,y\\in \\mathbb {R}^d.$ Then for any choice of parameters $\\lambda ,\\eta \\in (0,1]$ such that $\\eta <\\alpha -d/m$ , $\\lambda > \\alpha -\\eta $ there exists a constant 
$C=C(\\alpha ,m,d,n,\\eta ,\\lambda )$ such that $\\big \\Vert \\, \\llbracket F\\rrbracket _{C^{\\eta ,\\lambda }} \\big \\Vert _{L^m\\vert \\mathcal {F}} \\le C \\, K.$ By arguing componentwise, we can restrict to $n=1$ ; by homogeneity, we can assume $K=1$ .", "Recall that by the classical Garsia-Rodemich-Rumsay lemma, there exists a constant $c = c(d,\\eta ,\\alpha ,m)$ such that, for any deterministic continuous function $f$ and any $R>0$ , it holds $\\llbracket f\\rrbracket _{C^\\eta (B_R)}^m \\le c \\int _{B_R\\times B_R} \\frac{|f(x)-f(y)|^m}{|x-y|^{2d+\\eta m}} \\mathrm {d}x \\mathrm {d}y;$ thus taking conditional expectation and applying Fubini, we find $\\mathbb {E}\\big [ \\llbracket F\\rrbracket _{C^\\eta (B_R)}^m \\big |\\mathcal {F}\\big ] \\lesssim R^{(\\alpha -\\eta )m} \\quad \\forall \\, R\\ge 1.$ Finally observe that $\\mathbb {E}\\big [ \\llbracket F\\rrbracket _{C^{\\eta ,\\lambda }}^m|\\mathcal {F}\\big ]& \\le \\sum _{R=2^j,\\,j\\in \\mathbb {N}} \\mathbb {E}\\big [ R^{-\\lambda m} \\llbracket F \\rrbracket _{C^\\eta (B_R)}^m |\\mathcal {F}\\big ] \\lesssim \\sum _{j\\in \\mathbb {N}} 2^{-j m(\\eta +\\lambda -\\alpha )}$ with the last quantity being finite under our assumptions.", "A combination of Lemmas REF and REF immediately yields the following.", "Corollary A.5 Let $G:[0,1]\\times \\mathbb {R}^d\\rightarrow \\mathbb {R}^n$ be a continuous random vector field and assume there exist parameters $\\alpha ,\\beta _1,\\beta _2\\in (0,1]$ , $m\\in (1,\\infty )$ , a control $w$ , a constant $K>0$ and a $\\sigma $ -algebra $\\mathcal {F}$ such that $\\Vert G_{s,t}(x)-G_{s,t}(y)\\Vert _{L^m\\vert \\mathcal {F}} \\le K |x-y|^\\alpha |t-s|^{\\beta _1} w(s,t)^{\\beta _2}\\quad \\forall \\, x,y\\in \\mathbb {R}^d,\\, s<t.$ Then for any choice of parameters $\\gamma <\\beta _1-\\frac{1}{m},\\quad \\eta <\\alpha -\\frac{d}{m},\\quad \\lambda >\\alpha -\\eta $ there exists $C>0$ , depending on all the previous parameters except $K$ , such that $\\bigg \\Vert \\sup _{0\\le s<t\\le 1} \\frac{\\llbracket G_{s,t}\\rrbracket _{C^{\\eta ,\\lambda }_x}}{|t-s|^\\gamma w(s,t)^{\\beta _2}}\\bigg \\Vert _{L^m|\\mathcal {F}}\\le C K.$" ], [ "Affine linear Young equations", "In this appendix we prove a basic bound on affine linear Young equations that is used several times in the article.", "Such estimates are folklore, but since we did not find an appropriate version in the literature, we provide a short proof.", "In the next statement the notation $\\int _0^t \\mathrm {d}A_s\\, x_s$ denotes a usual Young integral, equivalently the (deterministic) sewing of the germ $\\Sigma _{s,t}:= A_{s,t} x_s$ .", "Lemma B.1 Let $x$ be a solution to the affine Young equation $\\mathrm {d}x_t = \\mathrm {d}A_t\\, x_t + \\mathrm {d}z_t, \\quad x\\vert _{t=0}=x_0,$ where $A:[0,1]\\rightarrow \\mathbb {R}^{d\\times d}$ is of finite $p$ -variation and $z:[0,1]\\rightarrow \\mathbb {R}^d$ is of finite $\\tilde{p}$ -variation, for some $p\\in [1,2)$ and $\\tilde{p}\\ge p$ such that $1/p+1/\\tilde{p}>1$ ; assume $z_0=0$ .", "Then there exists a constant $C=C(p,\\tilde{p})>0$ such that $\\sup _{t\\in [0,1]} |x_t| + \\llbracket x \\rrbracket _{\\tilde{p}-{\\mathrm {var}}} \\le C e^{C \\llbracket A \\rrbracket _{p-{\\mathrm {var}}}^{p}} \\big (|x_0|+\\llbracket z \\rrbracket _{\\tilde{p}-{\\mathrm {var}}}\\big ).$ When $z=0$ , letting $w$ be a control such that $|A_{s,t}|\\le w(s,t)^{1/p}$ , it holds $\\sup _{0\\le s<t \\le 1} \\frac{|x_{s,t}|}{w(s,t)^{1/p}} \\le C e^{C \\llbracket A \\rrbracket _{p-{\\mathrm 
{var}}}^{p}} |x_0|.$ Let us first apply the change of variable $\theta =x-z$ , so that $\theta $ solves $ \mathrm {d}\theta _t = \mathrm {d}A_t\, \theta _t + \mathrm {d}A_t\, z_t = \mathrm {d}A_t\, \theta _t + \mathrm {d}\tilde{z}_t $ where $\tilde{z}_t:=\int _0^t \mathrm {d}A_s\, z_s$ .", "The advantage of this maneuver is that $\tilde{z}$ is also of finite $p$ -variation and controlled by (a multiple of) $w^{1/p}$ , where $w$ is the control associated to $A$ .", "Indeed, by Young integration it holds $|\tilde{z}_{s,t}| \lesssim |A_{s,t}\, z_s| + w(s,t)^{1/p} \llbracket z \rrbracket _{\tilde{p}-{\mathrm {var}};[s,t]}\lesssim w(s,t)^{1/p}\llbracket z \rrbracket _{\tilde{p}-{\mathrm {var}}}.$", "For any $s<t$ , define $\llbracket \theta \rrbracket _{w;[s,t]} := \sup _{s\le r <u\le t} \frac{|\theta _{r,u}|}{w(r,u)^{1/p}},$ and similarly for $\tilde{z}$ .", "Manipulating the equation for $\theta $ in a standard manner, one finds a constant $C>0$ such that, for any $s<t$ , it holds $\llbracket \theta \rrbracket _{w;[s,t]} \le |\theta _s| + C w(s,t)^{1/p}\llbracket \theta \rrbracket _{w;[s,t]} + \llbracket \tilde{z} \rrbracket _{w;[s,t]}.$ If $Cw(0,1)^{1/p}\le 1/2$ then (REF ) buckles with $s=0$ , $t=1$ .", "Otherwise, define recursively an increasing sequence $t_i$ by $t_0=0$ and $C w(t_i,t_{i+1})^{1/p}\in (1/3,1/2)$ and $t_n=1$ for some $n$ .", "Set $J_i:=\sup _{r\in [t_i,t_{i+1}]} |\theta _r|$ with the convention $J_{-1}=|x_0|$ .", "Then thanks to our choice of $t_i$ and equation (REF ), it holds $J_i& \le |\theta _{t_i}| + w(t_i,t_{i+1})^{1/p} \llbracket \theta \rrbracket _{w;[t_i,t_{i+1}]}\\& \le (1+2 w(t_i,t_{i+1})^{1/p} )|\theta _{t_i}| + 2 w(t_i,t_{i+1})^{1/p} \llbracket \tilde{z} \rrbracket _{w;[t_i,t_{i+1}]}\\& \le \big (1+ C^{-1} \big ) J_{i-1} + C^{-1} \llbracket \tilde{z} \rrbracket _{w;[0,1]}$ Recursively this implies $\sup _{t\in [0,1]} |\theta _t|= \sup _i J_i\le \big ( 1+C^{-1}\big )^n (J_0 + \llbracket \tilde{z} \rrbracket _{w;[0,1]})\le e^{\frac{n}{C}} \big (|x_0| + \llbracket \tilde{z} \rrbracket _{w;[0,1]}\big ).$ Finally observe that, by superadditivity of $w$ and our choice of $t_i$ , it holds $n \le (3C)^p \sum _i w(t_i,t_{i+1}) \le (3C)^p w(0,1),$ and therefore by () $\sup _{t\in [0,1]} |\theta _t|\lesssim e^{C^{\prime } \llbracket A \rrbracket _{p-{\mathrm {var}}}^{p}} \big (|x_0| + \llbracket z \rrbracket _{\tilde{p}-{\mathrm {var}}}\big )$ with some other constant $C^{\prime }>0$ .", "Substituting this bound back to (REF ), we similarly get $\llbracket \theta \rrbracket _{w;[0,1]}\lesssim e^{C^{\prime } \llbracket A \rrbracket _{p-{\mathrm {var}}}^{p}} \big (|x_0| + \llbracket z \rrbracket _{\tilde{p}-{\mathrm {var}}}\big ).$", "Combining everything yields the claimed bounds (REF )-(REF )." ]
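, "For orientation, we record an elementary sanity check of the bound: in the scalar case $d=1$ with $z\equiv 0$ and $A$ of bounded variation, the solution of (REF ) is explicit, namely $x_t = x_0\, e^{A_{0,t}},\qquad \text{so that}\qquad \sup _{t\in [0,1]} |x_t| \le |x_0|\, e^{\llbracket A \rrbracket _{1-{\mathrm {var}}}},$ which is of the same form as (REF ) with $p=1$ ; the content of Lemma REF is that an estimate of this exponential type survives in the Young regime $p\in [1,2)$ , at the price of the exponent $\llbracket A \rrbracket _{p-{\mathrm {var}}}^{p}$ ."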
], [ "Fractional regularity and Girsanov transform", "We collect in this appendix several definitions of fractional regularity and show how, in certain regularity regimes, they can be combined with our results, so to verify the applicability of Girsanov transform to the singular SDEs in consideration.", "We start by recalling several classical definitions of fractional spaces for paths $f:[0,1]\\rightarrow E$ , $E$ being a Banach space.", "For $\\beta \\in (0,1)$ and $p\\in [1,\\infty )$ , the fractional Sobolev space $W^{\\beta ,p}=W^{\\beta ,p}(0,1;E)$ is defined as the set of $f\\in L^p(0,1;E)$ such that $\\Vert f\\Vert _{W^{\\beta ,p}} := \\Vert f\\Vert _{L^p} + \\llbracket f \\rrbracket _{W^{\\beta ,p}}<\\infty , \\quad \\llbracket f \\rrbracket _{W^{\\beta ,p}}:=\\Big ( \\int _{[0,1]^2} \\frac{\\Vert f_{s,t}\\Vert _E^p}{|t-s|^{\\beta p +1}}\\, \\mathrm {d}s \\mathrm {d}t\\Big )^{\\frac{1}{p}}.$ Similarly, we define the spaces the Besov–Nikolskii spaces $N^{\\beta ,p}=N^{\\beta ,p}(0,1;E)$ as the collections of all $f\\in L^p(0,1;E)$ such that $\\Vert f\\Vert _{N^{\\beta ,p}} := \\Vert f\\Vert _{L^p} + \\llbracket f \\rrbracket _{N^{\\beta ,p}}<\\infty , \\quad \\llbracket f \\rrbracket _{N^{\\beta ,p}}:=\\sup _{h\\in (0,1)} |h|^{-\\beta } \\Big ( \\int _0^{1-h} \\Vert f_{s,s+h}\\Vert _E^p \\, \\mathrm {d}s\\Big )^{\\frac{1}{p}}.$ In the case $p=\\infty $ , we will set $W^{\\beta ,p}=N^{\\beta ,p}=C^\\beta $ .", "Although we will not need it, let us mention that these spaces are particular instances of the Besov spaces $B^\\beta _{p,q}$ as defined in [64], indeed $W^{\\beta ,p}=B^\\beta _{p,p}$ and $N^{\\beta ,p}=B^\\beta _{p,\\infty }$ .", "There is a final class of spaces we will need, which is an original contribution of this work; many processes arising from stochastic sewing can be shown to belong to this class, thanks to Lemmas REF -REF .", "Given $\\beta \\in (0,1]$ , $p\\in [1,\\infty )$ with $\\beta > 1/p$ , we define the space $D^{\\beta ,p}=D^{\\beta ,p}(0,1;E)$ as the set of all $f$ for which there exists a continuous control $w=w(f)$ such that $\\Vert f_{s,t}\\Vert _E \\le |t-s|^{\\beta -\\frac{1}{p}}\\, w(s,t)^{\\frac{1}{p}}\\quad \\forall \\, s<t.$ Observe that by superadditivity, if such a control $w$ exists, then the optimal choice must be necessarily given by $w(s,t)=\\llbracket f\\rrbracket _{D^{\\beta ,p};[s,t]}^p:= \\sup \\sum _{i=1}^n \\frac{\\Vert f_{t_i,t_{i+1}}\\Vert _E^p}{|t_{i+1}-t_i|^{\\beta p-1}}$ where the supremum runs over all possible finite partitions $s=t_0 < t_1<\\ldots <t_n=t$ of $[s,t]$ .", "We can therefore endow the space $D^{\\beta ,p}$ with the norm $\\Vert f\\Vert _{D^{\\beta ,p}} := \\Vert f_0\\Vert _E + \\llbracket f\\rrbracket _{D^{\\beta ,p}}, \\quad \\llbracket f\\rrbracket _{D^{\\beta ,p}} = \\llbracket f\\rrbracket _{D^{\\beta ,p};[0,1]},$ which makes them Banach spaces; observe the analogy with the definition of $C^{p-{\\mathrm {var}}}$ and its characterization via controls.", "In particular, if a function $f$ is known to satisfy (REF ), then it must hold $\\llbracket f\\rrbracket _{D^{\\beta ,p}}\\le w(0,1)^{1/p}$ .", "For $\\beta >1/p$ , we define $W^{\\beta ,p}_0=\\lbrace f\\in W^{\\beta ,p}: f_0=0\\rbrace $ (as we will shortly see, this is a good definition, as elements of $W^{\\beta ,p}$ are continuous functions); similarly for $N^{\\beta ,p}_0$ and $D^{\\beta ,p}_0$ .", "The next proposition provides summarises the embeddings between these classes of spaces, as well the Cameron–Martin spaces $\\mathcal {H}^H$ and spaces of 
finite $q$ -variation.", "Proposition C.1 Let $\\beta \\in (0,1]$ , $p\\in [1,\\infty )$ with $\\beta > 1/p$ ; the following hold: i) for any $\\varepsilon >0$ , we have $ W^{\\beta ,p} \\hookrightarrow D^{\\beta ,p} \\hookrightarrow N^{\\beta ,p} \\hookrightarrow W^{\\beta -\\varepsilon ,p}$ ; ii) if $\\bar{\\beta }\\le \\beta $ and $\\beta -1/p\\ge \\bar{\\beta }-1/\\bar{p}$ , then $N^{\\beta ,p}\\hookrightarrow N^{\\bar{\\beta },\\bar{p}}$ ; in particular, $N^{\\beta ,p} \\hookrightarrow C^{\\beta -1/p}$ ; iii) $N^{\\beta ,p} \\hookrightarrow C^{1/\\beta -{\\mathrm {var}}} \\hookrightarrow N^{\\beta ,1/\\beta }$ ; iv) let $H\\in (0,1/2)$ and $E=\\mathbb {R}^d$ , then for any $\\varepsilon >0$ it holds $W^{H+\\frac{1}{2}+\\varepsilon ,2}_0 \\hookrightarrow \\mathcal {H}^H\\hookrightarrow W^{H+\\frac{1}{2}-\\varepsilon ,2}_0;$ in particular, $\\mathcal {H}^H\\hookrightarrow C^{q-{\\mathrm {var}}}$ for any $q>(H+1/2)^{-1}$ .", "i) The last embedding $N^{\\beta ,p} \\hookrightarrow W^{\\beta -\\varepsilon ,p}$ is classical and can be found in [64].", "The embedding $W^{\\beta ,p} \\hookrightarrow D^{\\beta ,p}$ follows from [32]; in particular, by Garsia-Rodemich-Rumsay lemma, the associated control $w_f$ can be taken as $w_f(s,t) = \\int _{[s,t]^2} \\frac{\\Vert f_{r,u}\\Vert _E^p}{|r-u|^{1+\\beta p}} \\, \\mathrm {d}r \\mathrm {d}u.$ It remains to show the embedding $\\mathcal {D}^{\\beta ,p} \\hookrightarrow N^{\\beta ,p}$ ; this follows the same technique used to show that $C^{p-{\\mathrm {var}}}\\hookrightarrow N^{1/p,p}$ , see e.g.", "[49].", "Indeed, for any $h\\in [0,T]$ , it holds $\\Vert f_{h+\\cdot } - f_{\\cdot }\\Vert _{L^p}^p= \\int _0^{1-h} \\Vert f_{t,h+t}\\Vert _E^p \\mathrm {d}t \\le |h|^{\\beta p-1} \\int _0^{1-h} w(t,h+t) \\mathrm {d}t,$ where $w(s,t)=\\llbracket f\\rrbracket _{D^{\\beta ,p};[s,t]}^p$ .", "Denoting by $K$ the largest integer such that $Kh\\le 1-h$ , we have $\\int _0^{1-h} w(t,h+t) \\mathrm {d}t&\\le \\int _0^{Kh} w(t,h+t) +|h| w(0,1)\\\\& = \\sum _{i=0}^{K-1} \\int _{ih}^{(i+1)h} w(s,h+s) \\mathrm {d}s+|h| w(0,1)\\\\&= \\int _0^h \\sum _{i=0}^{K-1} w(ih+s,(i+1)h+s) \\mathrm {d}s+|h| w(0,1)\\\\&\\le \\int _0^h w(0,1) \\mathrm {d}s +|h| w(0,1)= 2 |h| w(0,1)$ where in the last inequality we used the superadditivity of $w$ .", "Overall we conclude that $\\llbracket f\\rrbracket _{N^{\\beta ,p}}^p\\le 2\\llbracket f\\rrbracket _{D^{\\beta ,p}}^p$ .", "ii) These embeddings can be found in e.g.", "[64], [64].", "iii) These embeddings can be found in e.g.", "[49], [49].", "iv) The second embedding $\\mathcal {H}^H\\hookrightarrow W^{H+\\frac{1}{2}-\\varepsilon ,2}_0$ is the result of [32]; the last one follows from it combined with $N^{q,2}\\hookrightarrow C^{1/q-{\\mathrm {var}}}$ .", "It only remains to show the first embedding.", "Although we believe it to be common knowledge, we haven't found a proof in the literature, thus we give a detailed one.", "Given $ f\\in W^{H+1/2+\\varepsilon ,2}_0$ , in order to verify that $f\\in \\mathcal {H}^H$ , we need to check that $K^{-1}_H f\\in L^2$ , where $K^{-1}_H f = s^{1/2-H} D_{0+}^{1/2-H} s^{H-1/2} D^{2H}_{0+},$ see eq.", "(12) from [56]; $D_{0+}^\\gamma $ denotes the Riemann-Liouville fractional derivative of order $\\gamma $ , for which again we refer to [56].", "By using standard embeddings between $W^{\\delta ,2}$ spaces and potential spaces $I^+_{\\delta ,2}$ (cf.", "[24]), up to losing an arbitrary small fraction of regularity, we know that for any $f\\in W^{H+1/2+\\varepsilon ,2}_0$ it holds $h:=D^{2H}_{0+} 
f \\in W^{1/2-H+\\varepsilon /2,2}$ (this is the only point in the proof where the condition $f(0)=0$ is needed).", "Thus we are left with verifying that, for the choice $\\gamma =1/2-H$ , it holds $(K^{-1}_H f)_t = C_{\\gamma } \\bigg ( t^{-\\gamma } h_t + \\gamma t^\\gamma \\int _0^t \\frac{t^{-\\gamma } h_t - s^{-\\gamma } h_s}{|t-s|^{1+\\gamma }} \\mathrm {d}s \\bigg ) \\in L^2(0,1;\\mathbb {R}^d).$ From now on we will drop the constants $C_\\gamma $ and $\\gamma $ for simplicity.", "For the first term, observing that $t^{-\\gamma }\\in L^r$ for any $r$ such that $1/r<1/2-H$ and that $h\\in W^{1/2-H+\\varepsilon /2}\\hookrightarrow L^p$ for $1/p=H-\\varepsilon /2$ , it's easy to check by Hölder's inequality that $t^{-\\gamma } h_t \\in L^2$ .", "By time rescaling and addition and subtraction, we can split the integral term respectively into $I^1_t := \\int _0^t \\frac{h_t-h_s}{|t-s|^{1+\\gamma }} \\mathrm {d}s, \\quad I^2_t := t^{-\\gamma } \\int _0^1 \\frac{1-s^{-\\gamma }}{(1-s)^{1+\\gamma }} h_s \\mathrm {d}s.$ For the first term it holds $\\int _0^1 |I^1_t|^2 \\mathrm {d}t\\le \\int _0^1 \\bigg ( \\int _0^1 \\frac{|h_t-h_s|}{|t-s|^{1+\\gamma }} \\mathrm {d}s\\bigg )^2 \\mathrm {d}t\\lesssim \\int _{[0,1]^2} \\frac{|h_t-h_s|^2}{|t-s|^{1+2\\gamma +\\varepsilon }} \\mathrm {d}s \\mathrm {d}t \\lesssim \\Vert h\\Vert _{W^{\\gamma +\\varepsilon /2,2}}$ where in the middle passage we used Jensen's inequality; for the second one, we have $t^{-\\gamma }\\in L^2$ and $\\bigg | \\int _0^1 \\frac{1-s^{-\\gamma }}{(1-s)^{1+\\gamma }} h_s \\mathrm {d}s\\bigg | \\le \\bigg \\Vert \\frac{1-\\cdot ^{-\\gamma }}{(1+\\cdot )^{1+\\gamma }} \\bigg \\Vert _{L^2} \\Vert h\\Vert _{L^2} \\lesssim \\Vert h\\Vert _{W^{\\gamma +\\varepsilon /2,2}}.$ Indeed, observe that the function $s\\mapsto (1-s^{-\\gamma })/(1-s)^{1+\\gamma }$ is only unbounded at the points $s=0$ and $s=1$ , where it behaves asymptotically respectively as $-s^{-\\gamma }$ and $(1-s)^{-\\gamma }$ ; its $L^2$ -integrability immediately follows.", "Remark C.2 By Proposition REF , for a deterministic path $g$ to belong to the Cameron-Martin space $\\mathcal {H}^H$ for $H\\in (0,1/2)$ , it suffices to verify that $g\\in \\mathcal {D}^{\\beta ,p}$ for parameters $p\\in (1,2]$ and $\\beta >0$ satisfying -1p>H, in which case we have the estimate $\\Vert g\\Vert _{\\mathcal {H}^H} \\lesssim \\Vert g\\Vert _{\\mathcal {D}^{\\beta ,p}}$ .", "Therefore, if a stochastic process $h$ is adapted and belongs to $\\mathcal {D}^{\\beta ,p}$ , then for a sequence of stopping times $(\\tau _n)_{n\\in \\mathbb {N}}$ satisfying $\\tau _n\\nearrow \\infty $ , the laws of $B^H$ are $B^H_\\cdot +h_{\\cdot \\wedge \\tau _n}$ are mutually absolutely continuous.", "If the stronger Novikov-type condition E[hD,p2]<    >0 holds, then one can infer the stronger conclusion that the laws of $B^H$ are $B^H_\\cdot +h$ are equivalent and that the Radon-Nikodym derivative admits moments of any order, see [39] for a similar statement.", "With the above considerations in mind, we are now ready to present a result on the applicability of Girsanov transform, which is the main motivation for this appendix.", "Lemma C.3 Assume () and that 1-1/(Hq')<0.", "Let $b\\in L^q_t C^\\alpha _x$ , $x_0\\in \\mathbb {R}^d$ , and denote by $\\mu $ the law of the solution $X$ to the associated SDE (REF ).", "Then Girsanov transform applies and $\\mu $ is equivalent to $\\mathcal {L}(x_0+B^H)$ .", "As a consequence, ${\\rm supp}\\, \\mu = C([0,1];\\mathbb {R}^d)$ .", "Without loss of generality 
we may assume $\\alpha <0$ and $x_0=0$ .", "In view of Remark REF , we need to verify (REF ) with $h=\\varphi =X-B^H$ and with some $\\beta $ , $p$ satisfying (REF ).", "Let $\\kappa >0$ small enough so that $H$ , $\\alpha -\\kappa $ , and $q$ also satisfy (), and let $\\tilde{b}\\in L^q_tC^{\\alpha -\\kappa }_x$ with norm 1.", "By Lemmas REF , REF , and REF we have that with some $\\mu >0$ E[0br(BHr+r)dr D1+(-) H-,q2]<.", "Note that for sufficiently small $\\kappa $ the exponents satisfy (REF ) as a consequence of ().", "Therefore () looks like (REF ), except the arbitrariness of the coefficient.", "One can then proceed by an interpolation argument as in [39]: for any $\\kappa >0$ and $\\lambda >0$ there exists $b^{-}$ and $b^{+}$ such that $b=b^-+b^+$ and 2b-LqtC-x2/1,            b+Lqt C0x=:K<, where $K$ may depend on all parameters.", "Then we can write E[0b(BHr+r)dr D1+(-) H-,q2] e2K2E[(2/)0b-(BHr+r)dr D1+(-) H-,q2]<, applying () with $\\sqrt{2\\lambda /\\mu }b^-$ in place of $\\tilde{b}$ in the last step.", "Remark C.4 The restriction (REF ) in Lemma REF is necessary.", "Indeed, even taking a space-independent drift $b\\in L^q$ , so that $\\varphi \\in W^{1,q}$ , the condition $1-1/q>(H+1/2)-1/2$ necessary for the Sobolev embedding implies (REF ).", "The reader may feel this pathological and rightly so: for such a $b$ we can deduce everything about the law of $B^H+\\varphi $ from the law of $B^H$ .", "Note that this also motivates the use of “stochastic regularity” as in e.g.", "(REF ), which assigns deterministic functions (like $\\varphi $ in this example) infinite regularity.", "Note also that (REF ) enforces $H\\in (0,1/2)$ .", "We do not discuss the regime of large $H$ in detail, as Girsanov's transform becomes less end less useful as $H$ increases.", "For example, for $H>2$ one has $B^H\\in C^2$ and (in the nontrivial case $\\alpha <1$ ) $\\varphi \\notin C^2$ , yielding trivially the mutual singularity of the laws of $B^H$ and $X=B^H+\\varphi $ .", "Once again, the way out is to use “stochastic regularity” as a substitute for Girsanov." ] ]
2207.03475
[ [ "Athena charged particle diverter simulations: effects of micro-roughness\n on proton scattering using Geant4" ], [ "Abstract The last generation of X-ray focusing telescopes operating outside the Earth's radiation belt discovered that optics were able to focus not only astrophysical X-ray photons, but also low-energy heliophysical protons entering the Field of View (FOV).", "This \"soft proton\" contamination affects around 40\\% of the observation time of XMM-Newton.", "The ATHENA Charged Particle Diverter (CPD) was designed to use magnetic fields to move these soft protons away from the FOV of the detectors, separating the background-contributing ions in the focused beam from the photons of interest.", "These magnetically deflected protons can hit other parts of the payload and scatter back to the focal plane instruments.", "Evaluating the impact of this secondary scattering with accurate simulations is essential for the CPD scientific assessment.", "However, while Geant4 simulations of grazing soft proton scattering on X-ray mirrors have been recently validated, the scattering on the unpolished surfaces of the payload (e.g.", "the baffle or the diverter itself) is still to be verified with experimental results.", "Moreover, the roughness structure can affect the energy and angle of the scattered protons, with a scattering efficiency depending on the specific target volume.", "Using Atomic Force Microscopy to take nanometer-scale surface roughness measurements from different materials and coating samples, we use Geant4 together with the CADMesh library to shoot protons at these very detailed surface roughness models to understand the effects of different material surface roughnesses, coatings, and compositions on proton energy deposition and scattering angles.", "We compare and validate the simulation results with laboratory experiments, and propose a framework for future proton scattering experiments." 
], [ "INTRODUCTION", "The last generation of X-ray observatories were extremely successful in resolving astrophysical objects and answering many questions regarding the cosmos and our observable universe.", "Launched in 1999, the NASA Chandra X-ray Observatory (Chandra) and the European Space Agency (ESA) X-ray Multi-Mirror Mission (XMM) Newton telescopes are complementary missions, with Chandra providing better spatial resolution, while XMM-Newton provides a larger collecting area offering better spectroscopic information.", "With these modern observatories, it was discovered that low energy protons ($\\le $ 300 keV) can also scatter through the optics at low angles and get funneled towards the focal plane.", "These “soft protons” manifest as sudden flares in the detector background count rates.", "They populate the interplanetary space, as well as different parts of the Earth's magnetosphere, and are collected during the orbit cycle of the observatory [1], [2], [3], [4].", "This soft proton contamination is highly variable over time and can not be efficiently modeled, as their signal in the detector is registered as events similar to the ones created by X-ray photons.", "Current treatment methods involve looking for anomalous periods of high count rate in the detector light curves and removing all data during these events [5].", "This soft proton contamination currently affects around 40% of XMM-Newton observations, and can be a significant contribution to the background for future X-ray missions [3].", "Over the last 20 years, many attempts have been made to better understand and predict soft proton flares [5], [6].", "ESA's second large-class mission is also the next generation X-ray focusing observatory mission, the Advanced Telescope for High-ENergy Astrophysics (Athena), which will launch in the 2030's and will provide unprecedented spectroscopic and imaging capabilities from the 0.5 keV to 12 keV band [7].", "Athena will use Silicon Pore Optics (SPO) technology together with two focal plane instruments, the Wide Field Imager (WFI) and a calorimeter spectrometer, the X-ray Integral Field Unit (X-IFU), with the science goal of exploring the hot and energetic Universe.", "Athena should be placed in a halo orbit around L1, the first Lagrange point of the Sun-Earth system, after studies of the plasma environment in both L1 and L2 [8], [9].", "To limit the soft proton contamination, the WFI has a challenging scientific requirement for the soft proton background to be $< 5\\times 10^{-4}$ cts/cm$^2$ /keV/s, less than 10% of the non-focused background requirement [10].", "Athena will directly address this soft proton contamination problem using a new device, which will be placed between the optics and the detector systems.", "This Charged Particle Diverter (CPD) will be close to the instruments in the Science Instrument Module (SIM), and will create a magnetic field which will deflect the soft energy protons outside the field of view[11].", "However, simulations also predict a potential secondary scattering of the deflected protons with the CPD walls and surfaces of the focal plane assembly (e.g.", "WFI baffle).", "To test if the CPD is compliant with the background requirement, it is first necessary to understand how surface roughness and composition can affect the proton energy and angles after scattering.", "To do this, an accurate simulation of the proton scattering with the surfaces of the focal plane assembly as well as the wall of the WFI baffle and CPD must be performed to know the 
impact of the roughness on the scattering efficiency.", "Previous validations of Geant4 simulations of soft proton scattering were performed at grazing angles and on polished X-ray mirror samples [8].", "Since roughness and composition can affect the proton energy and angles after scattering, laboratory measurements of proton scattering from representative samples were taken to verify dedicated Geant4 and SRIM simulations.", "This work specifically focuses on the validation of the proton scattering simulations performed in Geant4 using the experimental results, with the preliminary results already giving some general implications on the roughness and composition effects on low-energy proton scattering.", "This work fits into the context of a larger scale experiment to characterize the soft proton background, as part of the scientific assessment of the CPD.", "Our working group aims to perform end-to-end simulations of these soft protons, starting with ray-tracing simulations of the proton interaction with the mirror assembly, followed by the effect of the CPD-induced magnetic field on the way towards the detector systems, the proton interaction and scattering against the walls of the focal assembly, and finally the endpoint of these protons.", "Section 2 aims to explain the surface data, the experimental set up, as well as the experimental set up within Geant4 and SRIM.", "Section 3 will discuss a statistical systematic uncertainty introduced as a result of a computational optimization originating from the binning of the surface roughness model.", "Sections 4 and 5 will discuss the results and validity of the simulations, and end with some concluding remarks and details of future work.", "The basis for all of the experiments began with measurements of the WFI baffle surface, performed via Scanning Probe Microscope (SPM) Bruker Dimension Icon at Brno CEITEC Nanohttps://www.ceitec.eu/scanning-probe-microscope-bruker-dimension-icon/e1359, which gave ASCII text output of a 512$^2$ pixel matrix of measured values spanning the 92$^2$ $\mu $ m$^2$ dimensions of the physical sample.", "The atomic composition of the sample was also measured, by EDS X-Max 20 by Oxford Instruments installed in the Scanning Electron Microscope TESCAN Mira in BUT CEITEC Nanohttps://nano.ceitec.cz/scanning-electron-microscope-e-beam-writer-tescan-mira3raith-lis-mira/.", "The WFI baffle sample has an average surface roughness of 6.5 $\mu $ m, a calculated density of 2.15 g/cm$^3$ , and also has an optical coating called “Magic Black”, with the composition in weight of 44% Oxygen and 56% Aluminum.", "Figure: Highly detailed CAD model of the WFI baffle surface sample, where over a half million triangles represent the 92$^2$ $\mu $ m$^2$ area.", "The samples were imported into Geant4 using the CADMesh library and measured atomic composition, and were created from measurements using Matlab and Blender.", "The first step to prepare this measurement for the simulations is to convert the measured data from a text file into a Standard Tessellation Language (STL) file, the Computer Aided Design (CAD) standard file format that describes the whole geometry of an object in the form of a triangular mesh.", "Matlab has some user-defined functions that manage to make this conversion to STL by first defining the sample as a surface mesh object; however, this object has no thickness, and can be considered as an empty wireframe.", "The second step in creating the model is to import this empty wireframe STL file into Blender, a free and
open-source 3D computer graphics software toolset, often used for 3D modelling.", "An example of this high resolution model is shown in Fig.", "REF , where over a half million small triangles represent the 92$^2$ $\mu $ m$^2$ surface.", "The 92$^2$ $\mu $ m$^2$ model object is given an equivalent thickness of 300 $\mu $ m. We then close and fill the extruded model, making it a full solid and not an empty shell, and rotate and flip the model so that it has the correct orientation after it is loaded into Geant4." ], [ "EXPERIMENTAL SET UP", "The WFI baffle sample was then placed in an experimental configuration as seen in Fig.", "REF , used to characterize the scattering efficiency, performed at the Tandem laboratory of Uppsala Universityhttps://www.tandemlab.uu.se/?languageId=1.", "The WFI baffle surface sample was shot by a beam of 100 keV protons at incidence angles of 20 degrees and 40 degrees, with the energy and scattering angles ($\theta $ ) of the deflected protons being measured by a circular detector.", "The detector provides a signal from three separate areas, each with a diameter of 4 degrees, and simultaneously covers three different scattering angles per experiment, specifically, at scattering angles $\theta $ and $\theta \pm 5$ degrees.", "Fig.", "REF shows a schematic of the experimental set up.", "The azimuth of proton scattering ($\phi $ ) is not explicitly measured with this experimental set up.", "Figure: Schematic view of the experimental setup: a particle beam shoots 100 keV protons at incidence angles of 20 degrees and 40 degrees, with the energy and scattering angles ($\theta $ ) of the deflected protons being measured by a detector at various positions, $\theta $ and $\theta \pm 5$ .", "As the detector covers three scattering angles simultaneously, the detector was placed at 30 degrees, 50 degrees, and 65 degrees for the 20 degree incidence angle experiment, measuring all proton scattering angles $\theta $ from 30, 35, 40, etc.", "up to 70 degrees with respect to the measured surface plane.", "For the 40 degree proton incidence angle case, the circular detector was moved to lower scattering angles, measuring scattered protons from 10, 15, 20, etc.", "up to 50 degrees.", "The experiment was performed several times to normalize the proton beam."
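, "To make the detector bookkeeping above concrete, the following short Python sketch (our own illustration, not the acquisition or analysis software; the placements listed are only those quoted above, and further placements were presumably used to reach every 5-degree step) enumerates the nominal scattering-angle centres covered by a set of detector placements and the solid angle subtended by one 4-degree-diameter channel, the quantity needed when converting detected counts into scattering efficiencies:
import numpy as np

# Each detector placement at angle theta reads out three channels centred at
# theta-5, theta and theta+5 degrees; each channel is a 4-degree-diameter circle.
CHANNEL_DIAMETER_DEG = 4.0
CHANNEL_OFFSETS_DEG = (-5.0, 0.0, 5.0)

def channel_solid_angle_sr(diameter_deg=CHANNEL_DIAMETER_DEG):
    # Solid angle of a small circular aperture of the given angular diameter.
    half_angle = np.radians(diameter_deg / 2.0)
    return 2.0 * np.pi * (1.0 - np.cos(half_angle))

def covered_angles(placements_deg):
    # Nominal scattering-angle centres theta covered by a set of placements.
    return sorted({p + off for p in placements_deg for off in CHANNEL_OFFSETS_DEG})

print(covered_angles([30.0, 50.0, 65.0]))   # placements quoted for 20 deg incidence
print(channel_solid_angle_sr())             # roughly 3.8e-3 sr per channel
"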
], [ "GEANT4 MICRO-ROUGHNESS SIMULATION", "The Geant4 simulation framework is a toolkit library for particle transport codes based on Monte Carlo simulations, initially developed by CERN [12], [13], [14].", "All simulations in this work were based on Geant4 version 10.7.1, using the previously validated single Coulomb scattering (SS) model [8].", "To establish the framework to simulate the particle scattering, we additionally used version 1.1 of the CADMesh library, which allows us to load triangular mesh-based CAD files into Geant4[15].", "This combination of tools is particularly powerful, as it opens up many possibilities for future work and exploring the effects of different materials and treatments on scattering.", "The Geant4 set-up is similar to the schematic from the experiment shown in Fig.", "REF , where a particle gun shoots the digitized WFI baffle sample with a parallel beam of 100 keV protons at incidence angles of 20 and 40 degrees.", "The WFI baffle sample geometry is imported into Geant4 using the CADMesh library and is given the same atomic composition obtained from measurements.", "We deviate from the experimental set up in that we instead build a half-sphere around the sample surface to function as the main particle detector and record all the scattered particle hits.", "This half-sphere records the same proton energy and polar scattering angles $\\theta $ as from the experiment, as well as the azimuthal scattering angles $\\phi $ .", "Additionally, this sphere provides a 2D angular distribution of $\\theta $ vs $\\phi $ , which can be integrated over the detector area from the experiment, and provides the basis for validating the simulation results with the experimental results.", "We observed that shooting the 90 degree edges of the surface sample greatly increased the scattering efficiency of the protons at low scattering angles.", "We try to limit the particle interactions with the edges, sides, and corners of the CAD model, so the surface area of the reference sample which the proton gun covers is restricted to the central 72$^2$ $\\mu $ m$^2$ of the sample, corresponding to 80.4% of the total sample." ], [ "SRIM SIMULATION", "A new algorithm for the calculation of the scattering of protons was implemented in a FORTRAN tracing code using SRIM [16].", "SRIM itself calculates only ideal surface scattering, and in this case, was first used to pre-calculate the scattering of protons of various incidence energies and covering a full range of incidence angles.", "The total probability of proton scattering for each energy and incidence angle is then determined as the number of scattered protons to incident protons.", "Similar to the Geant4 and CADMesh implementation, the surface roughness of the WFI baffle sample is interpreted using 3D triangular meshes, but in this implementation, the protons do not penetrate or directly interact with the material.", "The FORTRAN code instead calculates the proton incidence angle with respect to the intersecting triangle and samples the derived probability distribution of scattering for that angle [17].", "The scattering direction of the proton is randomly chosen from the precalculated proton list, and the previous step is repeated in case of an intersection with another triangle from the 3D triangular mesh.", "The collecting of protons is done in FORTRAN, with detectors in the same positions according to the experiment, shown in Fig.", "REF , making it possible to compare results with the experiment and other simulations." 
], [ "COMPARISON OF BINNED SAMPLES WITH FULL SAMPLES", "One of these SPM-measured surface roughness samples can have over 1 million small triangles, meaning that the memory and computational requirements for simulating protons are non-trivial.", "As a benchmark, one hundred thousand 100 keV protons at 40 degree incidence angle takes over 70 hours to simulate on the full resolution sample using 48 threads.", "To have reasonable statistics for validating this simulation via comparison with the laboratory measurements, we require several million proton simulations, equivalent to about a month of non-interrupted computation time.", "The computational time requirements are reduced for both lower incidence angles and for samples with less surface roughness.", "This is because the scattering efficiency at lower incidence angles is higher, so obtaining better statistics requires less proton simulations; meanwhile, surfaces that are more smooth reduce the likelihood that a particle enters and exits the surface material several times before scattering.", "With the future objective of creating a proton transfer function and understanding the proton scattering efficiency at many different incidence angles and energies, the computational time requirement can go up to several years, and can easily increase to over a lifetime to further explore different levels of surface roughness, compositions, and surface treatments.", "The memory specific requirements are mainly dependent on the requested binning of the angle in the 2D angular distribution and the total number of threads.", "Besides the basic memory requirements for loading the sample and running Geant4, the estimated memory requirements can be described by the cumulative product between the bit-size of double variables, the number of threads used in a simulation, the specified binning for the output histogram, and the number of protons simulated.", "Assuming histogram bin sizes of 1 deg$^2$ , memory requirements are easily met even for the full resolution sample.", "Figure: (left) 2D visualization of the full WFI baffle sample roughness profile (with height in meters), treated with the “Magic Black” optical surface coating, same as the 3D version shown in Fig. 
.", "(middle) Visualization of the same sample, but binned by 4.", "(right) Power spectrum density plot comparing the binned and full resolution samples with regards to the effective power of each spatial frequency, from small scales up to the Nyquist frequency.", "As expected, at large scales the sample looks fairly identical, but some information is lost on the small scales as it becomes averaged by the binning.", "This binning by 4 specifically reduces the effective power of smaller scale features by around 15%.Binning the surface roughness samples was one of the optimization paths taken to reduce computation time.", "Extensive Geant4 simulations were run on both the full sample and the binned sample to determine the proton scattering efficiency of both distributions.", "Fig.", "REF shows the full resolution WFI baffle sample, the binned version of the same sample, and a comparison of the two using a normalized power spectrum density plot.", "As evident in the plot, at least when binning the sample by four, 25% of the smallest scale information is encoded in the averaging of the bins, but this binning only reduces the effective power of smaller scale features by around 15%; in other words, the majority of the information is conserved.", "To test the effect of this binning on the proton scattering efficiency, millions of protons are simulated for both the full resolution WFI baffle sample, as well as the binned sample, over several weeks.", "The respective probability density distributions are then compared with each other using a Kolmogorov–Smirnov (KS) test to determine how similar the results are to each other, as well as to verify whether the binned samples are statistically self-similar enough to their own full resolution versions to be realistically used for the remainder of the simulation campaign.", "As can be seen in the left and middle of Fig.", "REF , the binning has a trivial effect in the shape of the distributions, with 99% self similarity in proton scattering energy and scattering angle $\\theta $ .", "Meanwhile, the KS test determined that the final proton scattering efficiencies are 98% percent similar to each other when integrating the results in the 2D angular distribution over the area of the detector used for the experiment, shown in the right plot of Fig.", "REF .", "Figure: (left) Comparison of proton scattering efficiency as a function of the scattering angle θ\\theta between the full resolution sample and the binned sample.", "(middle) Comparison of proton scattering efficiency as a function of the proton energy between the full resolution sample and the binned sample.", "(right) Comparison of the integrated proton scattering angles between the full resolution sample and the binned sample, in the frame of the laboratory detector.", "The pvalue result of the KS tests, from left to right, are 0.9999, 0.9999, and 0.9831, meaning that the difference between the full resolution and binned samples is less than 2% when integrated to the frame of the lab experiment.As mentioned previously, the benchmark of one hundred thousand 100 keV protons at 40 degree incidence angle took around 68 hours to simulate on the full resolution sample using 48 threads.", "The same configuration using the binned sample takes 6.3 hours for the same number of simulated protons.", "The difference in computation time between the full resolution sample and the binned sample is around 10x, so we can justify using the binned samples, since the statistical uncertainty introduced as a systematic of 
the binning is less than 2% with respect to the integrated result.", "Millions of protons need to be simulated to obtain decent statistics, so this small change greatly reduces the computational overhead for the project at the cost of introducing a rather negligible systematic error." ], [ "VALIDATION OF SIMULATIONS WITH EXPERIMENTAL RESULTS", "Fig.", "REF and Fig.", "REF show the simulated proton scattering efficiencies compared with the measured results, with initial incidence angles of the proton beam at 20 degrees and 40 degrees, respectively.", "The left plot of Fig.", "REF and Fig.", "REF show our high resolution WFI baffle Geant4 simulations as a red line, with additional comparisons with other simulations, specifically, a simulation baseline from Geant4, where an ideal surface was used with no roughness, as well as results obtained from the SRIM simulations.", "The top and bottom right plots of Fig.", "REF and Fig.", "REF show, respectively, a distribution of the proton scattering angles and proton scattering energies, after interactions with the WFI baffle surface sample, compared with the ideal surface.", "Figure: From the CPD Scientific Assessment: Results for the WFI baffle sample with incidence angle of 20 deg, and 100 keV protons fired from the beam.", "(left) Comparison between a baseline Geant4 simulation of an ideal surface, the SRIM simulation, the experimental results, and our Geant4 simulations with roughness and atomic composition.", "(top) A distribution of the proton scattering angles after interactions with the WFI baffle surface sample, compared with the ideal surface.", "(bottom) A distribution of the proton scattering energies after interactions with the WFI baffle surface sample, compared with the ideal surface.As can be seen in both Fig.", "REF and Fig.", "REF , our simulation results which include the binned roughness and exact composition of the samples are the closest with the experimental results.", "The SRIM simulation shows lower scattering efficiency than Geant4, but this could be due to an incorrect approximation of the scattering distribution, since many assumptions were necessary to optimize for computation time.", "The 40 degree proton incidence angle case is not a smoking gun, but is still the closest match to the experimental data out of all the simulations.", "Due to the lower scattering efficiency of larger incidence angles, more proton simulations and more computation time is required to reduce the size of the error bars and to better converge the integrated results to the true distribution.", "Figure: From the CPD Scientific Assessment: Results for the WFI baffle sample with incidence angle of 40 deg, and 100 keV protons fired from the beam.", "(left) Comparison between a baseline Geant4 simulation of an ideal surface, the SRIM simulation, the experimental results, and our Geant4 simulations with roughness and atomic composition.", "(top) A distribution of the proton scattering angles after interactions with the WFI baffle surface sample, compared with the ideal surface.", "(bottom) A distribution of the proton scattering energies after interactions with the WFI baffle surface sample, compared with the ideal surface.Looking at the top and bottom right plots of Fig.", "REF and Fig.", "REF , which compare the surface sample with roughness with a smooth ideal surface, we can see that modeling the roughness structure in Geant4 not only lowers the scattering angle of the protons, but also shifts the scattering proton energy, with less protons at 
higher energies and more protons below 40 keV.", "The shape of these two distributions also change a lot between the 20 degree to 40 degree cases, with the peak of the proton scattering angle shifting to larger scattering angles, and the same but more pronounced effect on the proton scattering energy distribution as previously observed." ], [ "Conclusions and Future Activities", "It is clear when comparing the simulation on a rough surface to the simulation on an ideal surface, that the current micro-roughness scale of 6.5 $\\mu $ m contributes to increase reflectivity at smaller scattering angles, with proton energies skewed to lower energies.", "We also proved that including a refined modeling of the micro-structure of the surface where particles scatter improves the precision of the Geant4 simulation, with benefits in various applications where Coulomb scattering is the dominant interaction.", "Long computational requirements for simulations remain a concern, but we have been able to make some optimizations to this through the binning of surface sample CAD models with the minimal introduction of systematic uncertainties.", "Table: Table of all of the available surface roughness samples measured by Scanning Probe Microscope (SPM) Bruker Dimension Icon.", "Base roughness indicates the arithmetical mean roughness in micrometers.", "All samples listed were simulated, but the sample investigated with the most computation and laboratory time, and focus of this report, is marked in bold.The final goal of this research project is to create a proton transfer function to cut down on simulation time for other proton experiments.", "Now that the Geant4 simulation framework has been successfully validated by the experimental results, we can move on to the next phase of the project, which is to begin a simulation campaign for a greater range of proton incidence angles and energies.", "Table: Summary of the samples with measured compositions and calculated densities.", "Atomic composition was measured by EDS X-Max 20 by Oxford Instruments installed in Scanning Electron Microscope TESCAN Mira in BUT CEITEC Nano.", "The sample investigated with the most computation and laboratory time, and the focus of this report, is marked in bold.At lower incidence angles, the incoming proton may enter and exit a rough surface sample several times before being scattered away, so we know that the scattering energy distribution is related to the final scattering angle to some extent; however, it is difficult to speculate on how the surface features directly contribute to these final distributions.", "By creating a probability density cube from all of our simulations, we can create an accurate sampling distribution function that statistically accounts for the effect of these unknown surface features, so given some input proton angle and energy we can quickly output a new scattering angle and energy for these protons.", "Besides the WFI baffle sample presented in this report, several other materials and surfaces were measured and digitized, shown in Tab.", "REF and Tab.", "REF .", "Given that much infrastructure was developed for these high-performance computing routines, we hope to further explore these samples in the future and create various proton transfer functions for different materials and surfaces.", "JPB and NW are supported by the GACR grant 21-13491X.", "Part of the funding was provided by the Internal Grant Agency of Masaryk University, specifically the Operational Programme Research, Development and 
Education within the framework of the implemented project IGA MU reg.", "No.", "CZ.02.2.69/0.0/0.0/19_073/0016943 as well as by the MASH3 grant.", "Computational resources were supplied by the project “e-Infrastruktura CZ\" (e-INFRA CZ LM2018140 ) supported by the Ministry of Education, Youth and Sports of the Czech Republic.", "The work was partly supported by the ESA go/esacloud computing infrastructure.", "CzechNanoLab project LM2018110 funded by MEYS CR is gratefully acknowledged for the financial support of the measurements at CEITEC Nano Research Infrastructure." ] ]
2207.03502
[ [ "Exact Parallel Waves in General Relativity" ], [ "Abstract We conduct a review of the basic definitions and the principal results in the study of wavelike spacetimes, that is spacetimes whose metric models massless radiation moving at the speed of light, focusing in particular on those geometries with parallel rays.", "In particular, we motivate and connect their various definitions, outline their coordinate descriptions and present some classical results in their study in a language more accessible to modern readers, including the existence of \"null coordinates\" and the construction of Penrose limits.", "We also present a thorough summary of recent work on causality in pp-waves, and describe progress in addressing an open question in the field - the Ehlers-Kundt conjecture." ], [ "Introduction", "The goal of this article is first to make explicit the definitions of wavelike spacetimes in general relativity, and as was done in the development of the theory, to motivate a number of these definitions with reference to the well-established theory of electromagnetism.", "This is the subject of Section .", "By “wavelike spacetimes\" we mean those spacetimes which themselves model wavelike behaviour, in contrast to the spacetimes which model objects that produce radiationFor such spacetimes, see e.g.", "the review .", "We then examine the coordinate descriptions of the wavelike spacetimes in Section , where the “adapted\" or “Brinkmann\" coordinates in which these metrics are typically written are derived.", "Section discusses the properties of the wavelike spacetimes, in particular the details of their so-called “wavefronts\", that such a spacetime appears as a limit of any spacetime via the “Penrose limit\", and their causal properties.", "In Section we discuss progress in addressing the “Ehlers–Kundt conjecture\", which is a statement about our expectations of the wavelike spacetimes based on physical intuition.", "We begin by providing a brief historical perspective on the development of the theory of waves in general relativity (GR) as in with some relevant additions.", "For a more detailed review of the development of the mathematics in particular, see .", "rp0.9 1915    Albert Einstein establishes the field equation of general relativity 1916    Einstein demonstrates that the linearised vacuum field equation admits wavelike solutions which are rather similar to electromagnetic waves 1918    Einstein derives the quadrupole formula according to which gravitational waves are produced by a time-dependent mass quadrupole moment 1925    Hans Brinkmann finds a class of exact wavelike solutions to the vacuum field equation, later called pp-waves (“plane-fronted waves with parallel rays\") by Jürgen Ehlers and Wolfgang Kundt 1936    Einstein submits, together with Nathan Rosen, a manuscript to Physical Review in which they claim that gravitational waves do not exist 1937    After receiving a critical referee report from Howard P. 
Robertson, Einstein withdraws the manuscript with the erroneous claim and publishes, together with Rosen, a strongly revised manuscript on wavelike solutions (Einstein-Rosen waves) in the Journal of the Franklin Institute 1957    Felix Pirani gives an invariant (i.e.", "coordinate-independent) characterisation of gravitational radiation, and Bondi independently writes down a metric for the plane wave which is singularity-free and carries energy 1958    Andrzej Trautman reformulates Sommerfeld's radiation boundary conditions for a general field theory, and applies this approach to relativity to find the boundary conditions to be imposed at infinity due to bounded sources in GR.", "1960    Ivor Robinson and Trautman discover a class of exact solutions to Einstein's vacuum field equation that describe outgoing gravitational radiation 1962    Ehlers and Kundt conjecture that the gravitational pp-waves cannot be complete 1965    Roger Penrose shows that the plane waves (gravitational or otherwise) are not globally hyperbolic 1976    Penrose demonstrates a limiting procedure by which any spacetime reduces to a plane wave, by “blowing up\" a neighbourhood of a null geodesic.", "In the decades since, the existence of gravitational waves has been confirmed indirectly by the decay of binaries (Hulse and Taylor, Nobel prize 1993) and directly detected by the LIGO collaboration (Weiss, Barish and Thorne, Nobel prize 2017).", "The theory has also been developed significantly, such as the gravitational memory effects, which have been studied extensively (see for example and the references given therein), but these developments are outside the scope of this article." ], [ "The Coordinate Description", "We have defined a parallel wave as a Lorentzian manifold admitting a covariantly constant, null vector field, and a pp-wave as a parallel wave with flat wavefront.", "In this section, we first derive the most general form of a Lorentzian metric satisfying these conditions, and then discuss the various simplifications which have been studied in the literature.", "These simplifications remain exact wavelike solutions to the Einstein equations, but have the benefit of being easier to understand and work with.", "The simplest and most widely known example we call the “classical pp-wave\", which is discussed in Section REF .", "Notation: Our goal is to develop a local coordinate system on a parallel wave of dimension $n$ which we will denote $\lbrace u,v,\mathbf {x}\rbrace $ , where $\mathbf {x} = x^1, \ldots ,x^{n-2}$ are the so-called “wavefront coordinates\".", "This name is justified by examining the definition of a wavefront (Def.
)", "in the context of the coordinate description of a parallel wave metric Eq.", "REF .", "We will use Greek indices when referring to all coordinates $\\lbrace u,v,\\mathbf {x}\\rbrace $ , and Latin indices (other than the letters $u$ and $v$ ) when referring to only the wavefront coordinates.", "For example, the sum $g_{va}X^{a}$ for some vector field $X \\in \\mathfrak {X}(M)$ (the space of vector fields on $M$ ) will have $n-2$ terms ($a \\ne u,v$ ), whereas the sum $g_{v\\sigma }X^{\\sigma }$ will have $n$ terms.", "To avoid confusion with the coordinates $u$ and $v$ , we will not use the typical $\\mu $ and $\\nu $ Greek indices in this section, and instead we will favor $\\sigma , \\rho , \\gamma $ .", "For the Latin indices, we use $a,b,c$ and $i,j,k$ .", "Additionally, when a coordinate is labelled $x^i$ , we will denote its corresponding coordinate vector field by $\\partial _{x^i} =\\partial _i$ ." ], [ "General Parallel Waves and pp-Waves", "Consider the $n$ -dimensional Lorentzian manifold $(M,g)$ .", "Denote the covariantly constant null vector field on $M$ by $Z$ , that is $\\nabla Z = 0$ and $g(Z,Z) = 0$ for $\\nabla $ the Levi-Civita connection on $(M,g)$ and $Z$ nontrivial.", "Theorem 2.1 Coordinates adapted to covariantly constantNote that for this particular result, one may relax the condition that $Z$ be covariantly constant.", "For details see Sec.", "REF .", "In this context however, $Z$ is always assumed to be covariantly constant., null vector field.", "If a Lorentzian manifold $(M,g)$ admits a covariantly constant, null vector field $Z$ , then in a neighbourhood $U$ of each $p \\in M$ there exists a local coordinate chart $\\varphi = \\lbrace u,v,\\mathbf {x}\\rbrace $ on $U$ which is “adapted to $Z$ \" such that $Z\\vert _U = \\partial _v = \\nabla u.$ We perform this proof in three steps: first we construct local coordinates in which $Z$ is a coordinate vector field, then we show there exists a function $u:M\\mapsto {\\rm I\\!R}$ such that $Z = \\text{grad}(u) = \\nabla u$ .", "Finally we show that such a function $u$ may replace one of the coordinate functions in the initially constructed system, while maintaining the property $Z = \\partial _v$ , giving the desired result.", "Step 1: Local coordinates including $\\mathbf {v}$ First note that $Z$ is nowhere 0.", "This is because the zero vector is by convention spacelike, but by definition $Z$ is everywhere null.", "This can also be seen from the fact that $Z$ is Killing, since the Killing condition is trivially satisfied.", "A Killing field is uniquely determined by $Z\\vert _p$ and $\\nabla Z \\vert _p$ for some $p \\in M$ .", "Therefore since $\\nabla Z = 0$ everywhere, if for some $p$ we had $Z\\vert _p = 0$ then $Z$ would be identically 0.", "By the straightening (or “flow-box\", or “canonical form\") theorem for vector fields, since $Z$ is everywhere regular (i.e.", "nonzero), we can always construct a local coordinate system such that $Z$ is a coordinate vector field.", "We label this coordinate $v$ , such that we have $Z = \\tilde{\\partial }_v$ in the local coordinates $\\lbrace \\tilde{x}^0,v,\\tilde{x}^1,\\ldots ,\\tilde{x}^{n-2}\\rbrace $ .", "We define these coordinates with a tilde because we are interested in constructing a new coordinate system from these coordinates, and $v$ is the second coordinate to agree with usual pp-wave notational conventions.", "Let us also write the coordinate vector fields for this coordinate system with a tilde as $\\tilde{\\partial }_v$ and $\\tilde{\\partial }_i$ 
.", "Step 2: Introducing the coordinate $\\mathbf {u}$ To obtain the function $u$ , let us consider the one-form $Z^{\\text{{}}}$ dual to $Z$ via the metric $g$ .", "As shown in Appendix , the exterior derivative $d\\omega $ of a one-form $\\omega $ is proportional to $Alt(\\nabla \\omega )$ , the antisymmetric part of the two-form $\\nabla \\omega $ .", "Since $Z$ is covariantly constant, by the compatibility of the metric with $\\nabla $ we have that $\\nabla Z^{\\text{{}}}= 0$ and thus $Alt(\\nabla Z^{\\text{{}}}) = 0$ .", "Therefore $d(Z^{\\text{{}}}) = 0$ , that is $Z^{\\text{{}}}$ is closed, and via the Poincaré lemma for covector fieldsFor details see Lee, Smooth manifolds (2nd edition) Theorem 11.49 and Corollary 11.50., any closed one-form can locally be written as $Z^{\\text{{}}}= du$ for some function $u: M \\rightarrow {\\rm I\\!R}$ .", "Then by definition of the gradient, we have $Z = grad(u) = \\nabla u$ .", "Step 3: Constructing the local coordinate system $\\lbrace u,v,\\mathbf {x}\\rbrace $ We now transform the coordinate system $\\lbrace \\tilde{x}^0,v,\\tilde{x}^1,\\ldots ,\\tilde{x}^{n-2}\\rbrace $ into a new coordinate system $\\lbrace u,v,x^1,\\ldots ,x^{n-2}\\rbrace $ , and show that the property $Z = \\partial _v$ also holds in the new coordinates It is necessary to show this even though the function $v$ is used in both coordinate systems, as given $n$ functions $\\tilde{f}^i$ with linearly independent differentials $\\tilde{df}^i$ , we may form a local coordinate system $\\lbrace \\tilde{f}^1,\\ldots ,\\tilde{f}^n\\rbrace $ in which the coordinate vector fields $\\tilde{\\partial _i}$ are determined by the $n^2$ equations $\\tilde{df}^j(\\tilde{\\partial _i})=\\delta ^j_i$ .", "That is, the coordinate vector field $\\tilde{\\partial _{i}}$ depends on all the coordinate functions.", "If we transform to a new coordinate system $\\lbrace f^1,\\ldots ,f^{i-1},\\tilde{f}^i,f^{i+1},\\ldots ,f^n\\rbrace $ which contains $\\tilde{f}^i$ , we have $\\partial _i = \\tilde{\\partial _i} \\iff df^j(\\tilde{\\partial _i}) = \\delta ^j_i$ for all $i,j$ .. 
To find the appropriate transformation, first consider $du$ acting on the coordinate vector fields $\\tilde{\\partial _i}$ .", "We have $du(\\tilde{\\partial _v}) = Z^{\\text{{}}}(Z) = g(Z,Z) = 0$ and we define $du(\\tilde{\\partial _i}) =c_i$ where the $c_i$ are smooth functions on $M$ .", "Since the coordinate vector fields form a frame, at any $p \\in M$ we cannot have $c_i = 0$ for all $i$ and $c_0=0$ , as if this were true we would have $du(X) = 0$ for all vector fields $X$ , which in turn implies $du = Z^{\\text{{}}}= 0$ .", "But since $Z$ is nonzero, so too is $Z^{\\text{{}}}$ .", "Without loss of generality, assume that $c_0 \\ne 0$ (can always be done by reordering/relabelling the coordinate system).", "We now claim that by replacing $\\tilde{x}^0$ by $u$ and taking $x^i = \\tilde{x}^i$ for $i \\in \\lbrace 1,\\ldots ,n-2\\rbrace $ , the set of functions $\\lbrace u,v,x^1,\\ldots ,x^{n-2}\\rbrace $ form a valid coordinate system.", "This can be achieved by verifying that the Jacobian $J$ of the coordinate transform is invertible.", "This is easily seen from the fact that $J$ in Eq.", "REF (where ${1}_{n-1}$ is the identity matrix in $n-1$ dimensions) has linearly independent columns for $c_0 \\ne 0$ .", "$J =\\begin{pmatrix}du(\\tilde{\\partial _0}) & dv(\\tilde{\\partial _0}) & dx^1(\\tilde{\\partial _0}) & \\dots & dx^{n-2}(\\tilde{\\partial _0}) \\\\du(\\tilde{\\partial _v}) & dv(\\tilde{\\partial _v}) & dx^1(\\tilde{\\partial _v}) & \\dots & dx^{n-2}(\\tilde{\\partial _v}) \\\\du(\\tilde{\\partial _1}) & dv(\\tilde{\\partial _1}) & dx^1(\\tilde{\\partial _1}) & \\dots & dx^{n-2}(\\tilde{\\partial _1}) \\\\\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\du(\\tilde{\\partial }_{n-2}) & dv(\\tilde{\\partial }_{n-2}) & dx^1(\\tilde{\\partial }_{n-2}) & \\dots & dx^{n-2}(\\tilde{\\partial }_{n-2})\\end{pmatrix}=\\begin{pNiceMatrix}[columns-width = 4mm]c_0 & 0 & & & 0 \\\\0 & {4-4}<\\Large >{{1}_{n-1}} & & & \\\\c_1 & & & & \\\\\\vdots & & & & \\\\c_{n-2} & & & &\\end{pNiceMatrix}$ It remains to show that $Z = \\partial _v$ in the new coordinates.", "That is, we must have $du(Z) = dx^i(Z) = 0$ for all $i \\in \\lbrace 1,\\dots ,n-2\\rbrace $ and $dv(Z) = 1$ .", "This however can be read directly from the second row of the Jacobian (Equation REF ).", "We therefore have that $\\lbrace u,v,\\mathbf {x}\\rbrace = \\lbrace u,v,x^1,\\dots ,x^{n-2}\\rbrace $ is a valid local coordinate system on $M$ .", "The following proposition outlines the properties of the metric $g$ when it is written in these adapted coordinates.", "Proposition 2.2 If a Lorentzian manifold $(M,g)$ admits a covariantly constant, null vector field $Z$ , and $\\lbrace u,v,\\mathbf {x}\\rbrace $ are the local coordinates adapted to $Z$ of theorem REF , then the metric components in this coordinate system have the following properties on the domain of definition of the coordinates: All metric components are independent of $v$ , that is $\\partial _v(g_{\\mu \\nu }) = 0$ $g_{v\\sigma } = \\delta _{\\sigma }^{u}$ $(g_{ab})$ forms a positive-definite matrix, and therefore the embedded codimension-2 submanifolds defined by $u = \\text{const}$ , $v = \\text{const}$ are Riemannian manifolds.", "$ $ A covariantly constant vector field $Z$ is in particular a Killing vector field.", "By definition of a Killing vector field we have $\\mathcal {L}_Z (g) = 0$ , but since $Z = \\partial _v$ we have $0 = \\left[\\mathcal {L}_Z (g)\\right]_{\\sigma \\rho } = Z (g_{\\sigma \\rho }) = \\partial _v (g_{\\sigma \\rho })$ .", "First note 
that $Z^{\\sigma } = \\delta _v^{\\sigma }$ and therefore $Z_{\\sigma } = g_{v\\sigma }$ .", "Then since $Z = \\nabla u = du^{\\text{{}}}$ , we have $Z_{\\sigma } = du_{\\sigma } = \\delta _{\\sigma }^{u}$ .", "Therefore $g_{v\\sigma } = \\delta _{\\sigma }^{u}$ .", "First, the hypersurfaces $\\Sigma _c = u^{-1}(c) = \\left\\lbrace q \\in U: \\varphi (q)=\\left(c, v(q), x^1(q),\\dots , x^{n-2}(q)\\right)\\right\\rbrace $ are null hypersurfaces since the normal to these surfaces is the null $grad(u) = Z$ .", "Via the previous point, the normal $Z = \\partial _v$ is orthogonal to $\\partial _i$ for all $i\\in \\lbrace 1,\\dots ,n-2\\rbrace $ and to itself and therefore all these coordinate vectors lie in the null hypersurfaces $\\Sigma _c$ .", "Via we have that a null hypersurface can contain only one null vector (here, $Z = \\partial _v$ itself) and so the remaining coordinate vector fields must be timelike or spacelike.", "Via point (2) of the same lemma, we have that there are no timelike vectors, and therefore the $\\partial _i$ for all $i\\in \\lbrace 1,\\dots ,n-2\\rbrace $ are spacelike and thus $g_{ii}>0$ for all $i$ , that is $(g_{ab})$ is positive-definite.", "Using the results of Theorem REF and Proposition REF , we can now write the explicit form of the metric $g$ in adapted coordinates for a general parallel wave: $g = 2 d u d v+g_{u u}\\left(u, \\mathbf {x}\\right) d u^{2}+ 2g_{a u}\\left(u, \\mathbf {x}\\right) d x^{a} d u+g_{a b}\\left(u, \\mathbf {x}\\right) d x^{a} d x^{b}$ The functions $g_{u u}\\left(u, \\mathbf {x}\\right)$ and $g_{a u}\\left(u, \\mathbf {x}\\right)$ will be useful for the classification of parallel wave spacetimes, and we will therefore label them $H\\left(u, \\mathbf {x}\\right)$ and $A_a \\left(u, \\mathbf {x}\\right)$ respectively.", "We then have the metric of a general parallel wave in local adapted coordinates $\\boxed{g = 2 d u d v+H\\left(u, \\mathbf {x}\\right) d u^{2}+2A_a\\left(u, \\mathbf {x}\\right) d x^{a} d u+g_{a b}\\left(u, \\mathbf {x}\\right) d x^{a} d x^{b}.", "}$ One could also write this metric in matrix notation as $g=\\begin{pNiceMatrix}H & 1 & A_1 & \\dots & A_{n-2} \\\\1 & 0 & 0 & \\dots & 0 \\\\A_1 & 0 & {3-3}{(g_{ab})}& & \\\\\\vdots & \\vdots & & & \\\\A_{n-2} & 0 & & &\\end{pNiceMatrix}.$ In fact, this result can be viewed as a special case of a more general result by , which derives this form of a metric admitting a parallel null plane rather than a parallel null vector field.", "Conceptually the generalisation is simple, as a parallel null $r$ -plane is pointwise a set of $r$ linearly independent vectors, such that the field of planes (replacing the vector field in the above example) is a parallel null $r$ -dimensional section of the tangent bundle $TM$ .", "In this case, the metric takes a form similar to Eq.", "REF , though with some individual elements replaced by matrix blocks.", "If we then impose the curvature condition Eq.", "to obtain a pp-wave, as demonstrated in one finds the metric of a general pp-wave in local adapted coordinates $\\boxed{g = 2 d u d v+H\\left(u, \\mathbf {x}\\right) d u^{2}+2A_a\\left(u, \\mathbf {x}\\right) d x^{a} d u+\\delta _{a b}\\left(u, \\mathbf {x}\\right) d x^{a} d x^{b}.", "}$ Note that in the context of pp-waves, these coordinates are sometimes referred to as Brinkmann coordinates due to their original discovery in a primarily mathematical context.", "The properties of this general metric and some of the various special cases are discussed in Section .", "The remainder of this 
section focuses on defining these special cases, which are obtained by making additional assumptions on $H, A_a, g_{ab}$ , the topology of the manifold, or the dimension $n$ .", "Remark 2.3 Gauge Freedom: In vacuum regions it is standard to utilize local gauge freedoms to eliminate the cross terms $dx^adu$ , though in certain cases one can “lose\" some global information about the nature of the wave source in doing so.", "Both the process of changing the coordinates to eliminate these terms and extensive detail about which global information is lost in performing such a transformation can be found in , and will be discussed again in Sec.", "REF .", "Upon eliminating these terms, the metric locally takes the form $g = 2 d u d v+H\\left(u, \\mathbf {x}\\right) d u^{2}+g_{a b}\\left(u, \\mathbf {x}\\right) d x^{a} d x^{b}$ which one can summarise as $g = 2 d u d v+H\\left(u, \\mathbf {x}\\right) d u^{2}+h(u),$ where $h$ is a $u$ -dependent family of Riemannian metrics on the codimension-2 hypersurface $u =$ const, $v=$ const.", "The construction of this form of the metric can be found for the special case of classical pp-waves (Eq.", "REF below) in .", "For such a metric, it was shown in that the coordinate changes which leave this form Eq.", "REF invariant are $\\begin{split}v &\\longrightarrow v^\\prime = \\frac{1}{a}v + f_1(u, \\mathbf {x})\\\\u &\\longrightarrow u^\\prime = au + b\\\\\\mathbf {x} & \\longrightarrow \\mathbf {x}^\\prime = \\mathbf {f}_2(u, \\mathbf {x}),\\end{split}$ where $a \\ne 0$ and $b$ are constants and $f_1$ , $\\mathbf {f}_2$ are smooth functions independent of $v$ on the domain of the coordinate chart.", "In such coordinates, the metric would retain its form $g = 2 d u^{\\prime } d v^{\\prime } + H^{\\prime }\\left(u^{\\prime }, \\mathbf {x}^{\\prime }\\right) d u^{\\prime }{}^{2}+h^{\\prime }(u^\\prime ).$ The authors showed that this fact may be used to transform to so-called normal Brinkmann coordinates centred at $p$ , in which it holds that $\\varphi (p) = 0 \\in {\\rm I\\!R}^n$ where $\\varphi $ is the coordinate chart and $H(u,\\mathbf {0}) = 0,~~~\\frac{\\partial H}{\\partial x^i}(u,\\mathbf {0}) = 0$ for all $u$ in an interval around 0." 
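The defining algebra behind the boxed pp-wave metric above can be checked mechanically. The following is a minimal symbolic sketch (our own illustration, not part of the original construction), assuming four dimensions and using Python with sympy: it builds the general pp-wave metric in Brinkmann coordinates with arbitrary profiles $H$ and $A_a$ and confirms that $\partial_v$ is null and covariantly constant by showing that every Christoffel symbol with a lower $v$ index vanishes.

```python
import sympy as sp

# Brinkmann coordinates (u, v, x, y) in four dimensions; H and the A_a are arbitrary profiles.
u, v, x, y = sp.symbols('u v x y')
coords = [u, v, x, y]
H = sp.Function('H')(u, x, y)
A1 = sp.Function('A1')(u, x, y)
A2 = sp.Function('A2')(u, x, y)

# General pp-wave metric: g = 2 du dv + H du^2 + 2 A_a dx^a du + delta_ab dx^a dx^b
g = sp.Matrix([
    [H,  1, A1, A2],
    [1,  0,  0,  0],
    [A1, 0,  1,  0],
    [A2, 0,  0,  1],
])
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc} of the Levi-Civita connection
Gamma = [[[sp.simplify(sum(sp.Rational(1, 2)*ginv[a, d]*(
            sp.diff(g[d, b], coords[c]) + sp.diff(g[d, c], coords[b])
            - sp.diff(g[b, c], coords[d])) for d in range(n)))
           for c in range(n)] for b in range(n)] for a in range(n)]

# Z = partial_v: null because g_vv = 0, and covariantly constant because
# (nabla_mu Z)^a = Gamma^a_{mu v} vanishes for every a and mu.
print('g(Z, Z) =', g[1, 1])
print('all Gamma^a_{mu v} zero:', all(Gamma[a][m][1] == 0 for a in range(n) for m in range(n)))
```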
], [ "Standard pp-Wave", "The class of pp-wave most commonly studied in the physics literature has been referred to by as a standard pp-wave.", "The defining characteristics of a standard pp-wave metric when written in the coordinate chart $\\lbrace u,v,\\mathbf {x}\\rbrace $ of theorem REF are: The coordinates $\\lbrace u,v,\\mathbf {x}\\rbrace $ exist globally The metric is written with no cross terms $dx^a du$ , that is $A_a = 0$ for all $a$ .", "and thus our metric takes the form $g = 2 d u d v+H\\left(u, \\mathbf {x}\\right) d u^{2}+\\delta _{a b}d x^{a} d x^{b}.$ One can see that in coordinates, the codimension-2 hypersurface defined by $u =$ const, $v =$ const corresponds precisely to the wavefront of Definition .", "Unsurprisingly, for this $n$ -dimensional standard pp-wave, the wavefront (or “transverse space\") is simply Euclidean ${\\rm I\\!R}^{n-2}$ .", "By assuming the coordinates $u$ and $v$ exist globally, we are making assumptions on the properties of the spacetime manifold $M$ .", "Certainly, that $M$ is simply-connected is a sufficient condition for the coordinate $u$ being global (since then the construction involving the Poincaré lemma would hold globally) but this is certainly not a necessary condition (for example the $(N,h)$ p-waves of Sec.", "REF with any non-simply connected $N$ still admit a global $u$ ).", "In the case of the $v$ coordinate, one expects that the integral curves of $Z$ should be complete and non-closedIt appears the analysis of weakest conditions under which such coordinates exist globally is not present in the literature, and remains an open question..", "Typically physical research involving pp-wave spacetimes begins with the assumption of a Lorentzian manifold $(M = {\\rm I\\!R}^n,g)$ with a metric of the form above." ], [ "Classical pp-Waves", "These are the pp-waves for which the wavefront is two-dimensional Euclidean space, that is they are standard pp-waves on ${\\rm I\\!R}^4$ such that the metric takes the form $g = 2 d u d v+ H(u, x,y) d u^{2}+ dx^2 + dy^2,$ where the usual adapted coordinates on the wavefront $(x^1,x^2)$ have been relabelledIn some expressions it will be useful to label these coordinates as the usual $x^i$ , such that for example $\\sum _{i\\in \\lbrace 1,2\\rbrace } x^i = x + y$ to $(x,y)$ .", "This metric is the most widely-known and well-studied pp-wave metric, due to its relevance to physics, and its simplicity while still exhibiting the key features of a pp-wave.", "The most important types of classical waves are the plane waves, whose properties will be discussed extensively in Sec.", "REF and Sec.", "." 
], [ "Plane Waves", "A plane wave is a classical pp-wave in which the characteristic function $H(u,x,y)$ is quadraticBoth in the literature and here, “quadratic\" means that $H$ is purely quadratic, that is contains only quadratic terms in the variables $x$ and $y$ as in Eq.", "REF (e.g.", "$H(u,x,y) = f(u)x^2 + 3xy$ as an example without much physical meaning).", "This is because any linear or constant terms in $H$ can be removed via a coordinate transformation, as noted in .", "in $(x,y)$ , i.e.", "$({\\rm I\\!R}^4, g = 2 d u d v+ H(u, x,y) d u^{2}+ dx^2 + dy^2),$ where $H(u,x,y) = \\sum _{i, j=1}^{2} h_{i j}(u) x^{i} x^{j}$ for a symmetric $2\\times 2$ and $u$ -dependent matrix $h_{ij}(u)$ .", "The vacuum Einstein equations imply that $h_{ij}$ should be trace-free, which means we can write $(h_{ij})(u) =\\begin{pNiceMatrix}f_+(u) & f_\\times (u)\\\\f_\\times (u) & -f_+ (u)\\end{pNiceMatrix}.$ Had we wanted to describe a purely electromagnetic wave rather than a gravitational wave, one should have $(h_{ij}) = \\text{diag}(f(u),f(u))$ for some arbitrary smooth $f$ .", "A sandwich wave is obtained when the support of the profile functions is compact; for details see .", "Note that the presence of two functions necessary to describe the wave, as in the linear regime, means that the gravitational wave described by such a metric possesses two linearly independent polarization states.", "Note that we have used the analogous subscripts as we had on the coefficients $C_{{\\mu \\nu }}$ , as the $f_+$ and $f_\\times $ functions again describe the components of the wave in each polarisation state.", "If we had not imposed the vacuum condition, the plane wave would instead have described a coupled system of both gravitational and electromagnetic plane waves.", "Such plane waves were originally studied in .", "Let us now examine the affect of these polarisation states as in , where we skip some steps due to the similarity with the analysis of the linear regime.", "For a plane wave, the geodesic equation for $u$ is simply $\\ddot{u} = 0$ , that for $v$ is $\\ddot{v}=\\frac{1}{2} \\left(f_{+}^{\\prime }(u)(x^2 - y^2)+2 f_{\\times }^{\\prime }(u) xy\\right) \\dot{u}^{2} +\\big (f_{+}(u)(x \\dot{x}-y \\dot{y})+f_{\\times }(u)\\left(x \\dot{y}+y \\dot{x}\\right)\\big ) \\dot{u}$ and for $x$ and $y$ we have $\\begin{pNiceMatrix}\\ddot{x} \\\\\\ddot{y}\\end{pNiceMatrix}= \\frac{1}{2}\\begin{pNiceMatrix}f_+(u) & f_\\times (u)\\\\f_\\times (u) & -f_+ (u)\\end{pNiceMatrix}\\begin{pNiceMatrix}x \\\\y\\end{pNiceMatrix}.$ Since $\\ddot{u} = 0$ , we have that $u(s) = as + b$ for curve parameter $s$ and $a,b \\in {\\rm I\\!R}$ .", "Therefore as the affine parameterisation along a geodesic is only unique up to a transformation of the form $s \\mapsto cs + d$ , $u$ itself can be used as an affine parameter and we may take $u(s) = s$ .", "For the “$+$ \" mode, we have $f_\\times = 0$ , and one finds the geodesic equations reduce to $\\begin{pNiceMatrix}\\ddot{x}(s) \\\\\\ddot{y}(s)\\end{pNiceMatrix}= \\frac{f_+(s)}{2}\\begin{pNiceMatrix}x(s) \\\\-y(s)\\end{pNiceMatrix}.$ That is, the motion decouples and takes place only in the transverse directions (as expected by analogy with the linear theory).", "This motion is such that where $f_+(s)$ is positive, there is a “focusing\" in the $x$ direction and a defocusing in the $y$ direction.", "Where $f_+$ is negative, one sees the converse effect.", "By introducing coordinates $(w,z)$ rotated by $45^\\circ $ relative to $(x,y)$ , and taking the “$\\times $ \" polarisation 
mode $f_+ = 0$ , one finds precisely the same equation of motion for the rotated variables $\\begin{pNiceMatrix}\\ddot{w}(s) \\\\\\ddot{z}(s)\\end{pNiceMatrix}= \\frac{f_\\times (s)}{2}\\begin{pNiceMatrix}w(s) \\\\-z(s)\\end{pNiceMatrix},$ where $\\begin{pNiceMatrix}w \\\\z\\end{pNiceMatrix}= \\frac{1}{\\sqrt{2}}\\begin{pNiceMatrix}1 & 1\\\\-1 & 1\\end{pNiceMatrix}\\begin{pNiceMatrix}x \\\\y\\end{pNiceMatrix}.$ Thus the two polarization modes have precisely the same effect as in the linearised theory, but now there is no requirement that the separations be “small\".", "This is in line with the interpretation of the characteristic function $H$ as corresponding to the perturbation $h_{{\\mu \\nu }}$ of the linear theory, but without the requirement that it be “small\" in some sense.", "We now demonstrate that the above expression for the metric of a plane wave (Eq.", "REF ) corresponds to our previous definitions of a plane wave.", "The correspondence between the dimension of the symmetry group and the form of the line element has already been succinctly and fully described by , and so we will not reproduce the calculation here.", "This establishes the connection with Definition , and we now illustrate the connection with Definition .", "Lemma 2.4 The plane wave of Definition corresponds to a classical pp-wave (Eq.", "REF ) for which the characteristic function $H$ in Brinkmann coordinates is quadratic in $(x,y)$ .", "That is, the condition $\\nabla _X R = 0 \\quad \\forall \\quad X \\in Z_{\\perp },$ where $Z=\\partial _v$ in these coordinates, $R$ is the curvature tensor and $Z_\\perp =\\lbrace X \\in TM ~|~ g(X,Z) = 0\\rbrace $ is equivalent to $H_{xxx}=H_{yxx}=H_{xyy}=H_{yyy}=0$ for classical pp-waves.", "First note that $\\partial _x$ and $\\partial _y$ are elements of $Z_\\perp $ .", "Let us begin by examining $\\nabla _{\\partial _x}R$ which we assume to be 0, and we will see that this implies $H_{xxx}=H_{yxx}=0$ .", "$0 &=& (\\nabla _{\\!\\partial _x}R)(\\partial _u,\\partial _x,\\partial _u)\\\\&=& \\partial _x(R(\\partial _u,\\partial _x)\\partial _u) - R(\\nabla _{\\!\\partial _x}{\\partial _u},\\partial _x)\\partial _u\\nonumber - {0}{R(\\partial _u,\\nabla _{\\!\\partial _x}{\\partial _x})\\partial _u} - R(\\partial _u,\\partial _x)\\nabla _{\\!\\partial _x}{\\partial _u}\\nonumber \\\\&=&\\!\\!", "\\partial _x(\\nabla _{\\!\\partial _u}\\nabla _{\\!\\partial _x}\\partial _u - \\nabla _{\\!\\partial _x}\\nabla _{\\!\\partial _u}\\partial _u) - \\frac{H_x}{2}{0}{R(\\partial _v,\\partial _x)\\partial _u}\\nonumber -\\frac{H_x}{2}{0}{R(\\partial _u,\\partial _x)\\partial _v}\\nonumber \\\\&=&\\!\\!", "{0}{\\frac{H_{xux}}{2}\\partial _v - \\frac{H_{uxx}}{2}\\partial _v} + \\frac{H_{xxx}}{2}\\partial _x + \\frac{H_{yxx}}{2}\\partial _y,\\nonumber $ where we have used that the nonzero Christoffel symbols are given by $\\nabla _{\\partial _{x}} \\partial _{u} &=&\\nabla _{\\partial _{u}} \\partial _{x}=\\frac{H_{x}}{2} \\partial _{v}, \\\\\\nabla _{\\partial _{y}} \\partial _{u} &=&\\nabla _{\\partial _{u}} \\partial _{y}=\\frac{H_{y}}{2} \\partial _{v}, \\\\\\nabla _{\\partial _{u}} \\partial _{u} &=&\\frac{H_{u}}{2} \\partial _{v}-\\frac{H_{x}}{2} \\partial _{x}-\\frac{H_{y}}{2} \\partial _{y}.$ Thus $H_{xxx}=H_{yxx}=0$ , and the remainder of the proof then follows by considering $(\\nabla _{\\!\\partial _y}R)(\\partial _u,\\partial _y,\\partial _u)$ , from which the result is obtained in precisely the same manner as for $\\partial _x$ .", "The reverse direction of the equivalence 
then follows from the fact that $Z_\\perp $ is pointwise spanned by $\\partial _x,\\partial _y$ and $\\partial _v$ , and that $\\partial _v$ is a Killing vector field." ], [ "Gyratonic pp-Waves", "The gyratonic pp-waves are those pp-waves with flat wavefront $g_{ab} = \\delta _{ab}$ and nonvanishing $A_a$ , that is the metric can be written as $g = 2 d u d v+H\\left(u, \\mathbf {x}\\right) d u^{2}+2A_a\\left(u, \\mathbf {x}\\right) d x^{a} d u+\\delta _{a b} d x^{a} d x^{b}.$ Such pp-waves have been studied extensively in , wherein work by is used to conclude that in the Ricci-flat case, they correspond to the exterior vacuum field of spinning particles moving with the speed of light.", "In reference to the off-diagonal terms with coefficients $A_a$ , the authors state: In vacuum regions it is a standard and common procedure to completely remove these functions by a gauge (coordinate) transformation.", "However, such a freedom is generally only local and completely ignores the global (topological) properties of the spacetimes.", "$\\dots $ In particular the possible rotational character of the source of the gravitational waves (its internal spin/helicity) is obscured.", "What one finds () is that the physical characteristics one can define in a pp-wave spacetime can be obscured via the local gauge transformations which eliminate the $A_a$ , and in general it may be necessary to keep such terms.", "Most notably, one should pay close attention to such terms when attempting to define the angular momentum density of pp-waves in an analogous manner to the linearised theory .", "In the end, such a physical property depends manifestly on the $A_a$ via the contour integral (see ) $\\oint _{C} A_{a} dx^{a},$ where $C$ is a (not completely arbitrary) contour in the transverse space." 
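To see concretely why such a contour integral can survive the local gauge freedom, consider the following purely illustrative choice (ours, not taken from the cited works): $A_a\,dx^a = J(-y\,dx + x\,dy)/(x^2+y^2)$, which is closed away from the axis $x=y=0$ but not exact on the punctured wavefront. The short sympy computation below evaluates the integral around a circle of radius $R$ and returns $2\pi J$ independently of $R$, so the information carried by $A_a$ cannot be removed by a globally defined gauge transformation.

```python
import sympy as sp

t, J, R = sp.symbols('t J R', positive=True)

# Hypothetical rotational choice (ours, for illustration):
#   A_a dx^a = J * (-y dx + x dy) / (x^2 + y^2) = J dphi,
# closed away from the axis x = y = 0 but not exact on the punctured wavefront.
x = R*sp.cos(t)
y = R*sp.sin(t)
Ax = -J*y/(x**2 + y**2)
Ay =  J*x/(x**2 + y**2)

# Contour C: circle of radius R around the axis, parametrised by t in [0, 2*pi)
integrand = sp.simplify(Ax*sp.diff(x, t) + Ay*sp.diff(y, t))
print(sp.integrate(integrand, (t, 0, 2*sp.pi)))   # -> 2*pi*J, independent of the radius R
```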
], [ "$\\mathbf {(N,h)}$ p-Waves", "These spacetimes are a subclass of the parallel waves which roughly correspond to a standard pp-wave with a Riemannian manifold replacing the planar wavefront of a pp-wave.", "That is, they are the parallel waves which the following conditions hold: In the adapted coordinates of theorem REF , the metric components of the wavefront $g_{ab}$ are independent of the coordinate $u$ .", "The spacetime decomposes as $M = {\\rm I\\!R}^2 \\times N$ where $(N,h = g_{a b}\\left(\\mathbf {x}\\right) d x^{a} d x^{b})$ is a connected Riemannian manifold.", "Note that this implies the coordinates $u$ and $v$ are globally defined.", "The name we suggest for such spacetimes is in analogy to the “pp-wave\" spacetimes (plane-fronted waves with parallel waves) as here we have a wavefront $(N,h)$ and the rays remain parallel, as they are the integral curves of $Z$ and $Z$ remains, as always, covariantly constant.", "Such spacetimes have also been called “generalised plane waves\" and “PFWs\" (plane-fronted waves) & , but the authors find this suggested naming scheme to be the most transparent and accurate.", "We may write the $(N,h)$ p-wave metric as $g = 2 d u d v+H\\left(u, \\mathbf {x}\\right) d u^{2}+2A_a\\left(u, \\mathbf {x}\\right) d x^{a} d u+h.$ We can write this metric without referencing coordinates on $N$ if we instead consider $H$ as a map $H\\colon {\\rm I\\!R}\\rightarrow C^{\\infty }(N)$ .", "That is for each $u$ , $H$ is a smooth function on $N$ .", "Similarly for the mixed terms $dx^{a}du$ , we define $A \\colon {\\rm I\\!R}\\rightarrow \\Gamma (T^*N)$ where $\\Gamma (T^*N)$ is the space of sections of the cotangent bundle of $N$ .", "With these redefinitions (unique to this Section) we may write g as $g = 2 d u d v+H(u) d u^{2}+2 A(u) du + h.$ Such spacetimes have been studied extensively in & , in which the geodesic completeness, geodesic connectedness and causality have been determined." 
], [ "Rosen Coordinates", "The coordinates for pp-waves which make manifest the symmetries/Killing vector fields are called Rosen coordinates, after .", "The transformation between Brinkmann and Rosen coordinates is well-documented, for example see and .", "We simply present the local form of a plane wave metric in Rosen coordinates $g = 2 d u d v+K_{ij}\\left(u\\right) dy^idy^j,$ where $K_{ij}$ is positive-definite on the domain of validity of these coordinates.", "Note that in such coordinates, the Minkowski metric could be represented as $\\eta = 2 d u d v+ \\delta _{ij}dy^idy^j$ which is simply the usual metric written in light-cone coordinates.", "Rosen coordinates can often exhibit singularities, and are therefore often avoided in favour of Brinkmann coordinates .", "Generically, the plane wave metric has $2n - 3$ linearly independent Killing vectors, which in a suitable basis generate the Heisenberg algebra .", "In Rosen coordinates, half ($+1$ ) of the Killing vector fields are manifest (independent of $K_{ij}$ ) and the remaining symmetries can be obtained in terms of $K^{ij}$ , the inverse of $K_{ij}$ .", "The Killing vector fields are thus (as in ) $e_{+}=\\frac{\\partial }{\\partial v}, \\quad e_{i}=\\frac{\\partial }{\\partial y^{i}}, \\quad e_{i}^{*}=y^{i} \\frac{\\partial }{\\partial v}-\\sum _{j} \\int K^{i j}(u) d u \\frac{\\partial }{\\partial y^{j}}.$ These correspond to the defining symmetry of the parallel wave $Z$ and the translations and rotations of the $y^j$ .", "Note that the $e_{i}^{*}$ are the usual rotations when we have $K_{ij} = \\delta _{ij}$ , that is the Minkowski metric eq.", "REF ." ], [ "Vanishing Scalar Invariants", "A well-known property of the plane pp-wave spacetimes is that all scalar curvature invariants (a scalar constructed from the metric, Riemann tensor and covariant derivatives of the Riemann tensor) are zeroOf course in the context of a general parallel wave, the scalar invariants of the wavefront will be inherited by the full spacetime..", "There are two approaches to prove this fact, the first by explicitly calculating the curvature tensor and the second by showing that each point $p$ in a plane wave spacetime is the fixed point of a homothety, and that any curvature invariant must be 0 at such a point.", "We will present the second such approach here, the proof of which is due to Schmidt , where we follow closely the presentation in .", "Theorem 3.1 All curvature invariants of a standard pp-wave vanish.", "We will proceed via the following series of arguments: An elementary curvature invariant cannot be invariant under constant rescalings of the metric (called a homothety).", "If there exists a coordinate transformation which induces a homothety, then due to the previous point, at the fixed points of the transformation (i.e.", "points which are invariant under the transformation) any elementary curvature invariant must be 0.", "Any point in a pp-wave spacetime is the fixed point of a homothety These statements are proved as follows: A general curvature invariant of a manifold $(M,g)$ is constructed from the metric and elementary curvature invariants.", "An elementary curvature invariant is obtained by taking covariant derivatives of the Riemann tensor $\\nabla _{\\mu _{1}} \\ldots \\nabla _{\\mu _{p}} R_{\\nu \\lambda \\rho }{}^{\\mu }$ and “tracing out\" all free indices with the inverse metric $g^{\\mu \\nu }$ .", "The Levi-Civita connection $\\nabla $ is invariant under a constant rescaling of the metric (homothety), which is conformal 
transformation, in which the conformal factor $\\lambda $ is a nonzero constant $g_{\\mu \\nu }\\longrightarrow \\tilde{g}_{\\mu \\nu }= e^{2\\lambda }g_{\\mu \\nu }$ That is, we have a second manifold $(M,\\tilde{g})$ conformally related to $(M,g)$ (the homothety is in particular not an isometry).", "Since $\\nabla $ is invariant under such a transformation, so too is the Riemann tensor.", "Since the (certainly not invariant) inverse metric is required to make a scalar, the elementary invariants cannot be invariant under such a homothety.", "Rather, a curvature invariant $J$ will change as $J(x) \\longrightarrow e^{m\\lambda }J(x)$ for some $x \\in M$ and some natural number $m$ which depends on the order of $J$ (number of covariant derivatives).", "Assume there exists a coordinate transformation of the Lorentzian manifold $(M,g)$ which induces a homothety with $x$ a fixed point.", "Since $x$ is a fixed point of the homothety we have $J(x) = e^{m\\lambda }J(x)$ differing from above in the equals sign alone.", "Such an equality can only hold (for natural $m$ and constant nonzero $\\lambda $ ) if $J(x) = 0$ .", "We simply need to construct the coordinate change for plane waves which induces a nontrivial homothety.", "As in Sec.", "REF , any standard pp-wave metric can be written in the so-called “Rosen coordinates\" as $g = 2dudv + g_{ij}(u)dy^idy^j.$ Such a form exhibits obvious translational symmetry in the $y^j$ and $v$ directions.", "Due to these symmetries, without loss of generality we can take a general point to be written as $x = (u_0,0,0)$ , which is fixed point of the coordinate transformation $(u,v,y^j) \\longrightarrow (u,\\lambda ^2 v, \\lambda y^j)$ for some constant $\\lambda $ .", "Such a coordinate transformation is in fact a homothety, and scales the metric as $g \\longrightarrow \\lambda ^2 g$ .", "Since we have shown that this is true for general $u_0$ , the result holds for any point $(u,v,\\mathbf {y})$ of a pp-wave.", "For further details of all classes of spacetimes in which the curvature invariants identically vanish, see ." 
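The decisive step of the proof above, namely that $(u,v,y^j)\mapsto(u,\lambda^2 v,\lambda y^j)$ is a homothety of the Rosen-form metric, can also be verified directly. The following sympy sketch (our own illustration, in four dimensions) pulls the metric back along this map and confirms that it is rescaled by exactly $\lambda^2$.

```python
import sympy as sp

u, lam = sp.symbols('u lambda')
g11 = sp.Function('g11')(u)
g12 = sp.Function('g12')(u)
g22 = sp.Function('g22')(u)

# Standard pp-wave in Rosen coordinates (u, v, y^1, y^2): g = 2 du dv + g_ij(u) dy^i dy^j
g = sp.Matrix([
    [0, 1, 0,   0],
    [1, 0, 0,   0],
    [0, 0, g11, g12],
    [0, 0, g12, g22],
])

# The map (u, v, y^j) -> (u, lam^2 v, lam y^j) leaves u unchanged, so the components
# g_ij(u) are unaffected and the pullback metric is simply J^T g J with constant Jacobian J.
J = sp.diag(1, lam**2, lam, lam)
pullback = J.T * g * J

print(sp.simplify(pullback - lam**2 * g))   # -> zero matrix: the map is a homothety, g -> lam^2 g
```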
], [ "pp-waves via their Wavefronts", "As mentioned in Definition above, a distinguishing feature of a null vector field $Z$ is, of course, that it lies in its own orthogonal complement, $Z_{\\perp }$ , leading to the Wavefront $Z_{\\perp }/Z$ , a vector bundle whose elements are equivalence classes “$[X]$ \" of vector fields $X$ orthogonal to $Z$ .", "Because such vector fields are necessarily spacelike (see, e.g., ), $Z_{\\perp }/Z$ will inherit a (positive-definite) inner product from the Lorentzian metric $g$ .", "It turns out that when $Z$ is also parallel, as it is a for a pp-wave, then $Z_{\\perp }/Z$ will also inherit a well defined linear connection, and this can be used to give an alternative — and very geometric — definition of a pp-wave.", "This alternative formulation of a pp-wave, which we now provide, is well known; see, e.g., , .", "In the following, $\\Gamma (E)$ represents the space of sections of the vector bundle $E$ .", "Theorem 3.2 Let $(M,g)$ be a Lorentzian manifold and $Z$ a null, parallel vector field defined in an open subset ${U} \\subseteq M$ , with orthogonal complement $Z_{\\perp } \\subset T{U}$ .", "Then the wavefront $Z_{\\perp }/Z$ admits a positive-definite inner product $\\bar{g}$ , $\\bar{g}([X],[Y]) \\mathrel {\\unknown.", "{\\raisebox {0.3ex}{\\m@th \\cdot }}\\raisebox {-0.3ex}{\\m@th \\cdot }}=g(X,Y)\\hspace{14.45377pt}\\text{for all}\\hspace{14.45377pt}[X], [Y] \\in \\Gamma (Z_{\\perp }/Z),$ and a corresponding linear connection $\\overline{\\nabla }\\colon \\mathfrak {X}({U})\\times \\Gamma (Z_{\\perp }/Z) \\rightarrow \\Gamma (Z_{\\perp }/Z)$ , $\\overline{\\nabla }_{\\!V}{[Y]} \\mathrel {\\unknown.", "{\\raisebox {0.3ex}{\\m@th \\cdot }}\\raisebox {-0.3ex}{\\m@th \\cdot }}=[\\nabla _{\\!V}{Y}] \\hspace{14.45377pt}\\text{for all}\\hspace{14.45377pt} V \\in \\mathfrak {X}({U})\\hspace{14.45377pt}\\text{and}\\hspace{14.45377pt}[Y] \\in \\Gamma (Z_{\\perp }/Z).$ This connection is flat if and only if $({U},g|_{{U}})$ is a pp-wave.", "The metric $\\bar{g}$ will be well defined, and positive definite, whenever $Z$ is null; indeed, every $X \\in \\Gamma (Z_{\\perp })$ not proportional to $Z$ is necessarily spacelike, so that $\\bar{g}$ is nondegenerate (and positive-definite), and if $[X] = [X^{\\prime }]$ and $[Y] =[Y^{\\prime }]$ , so that $X^{\\prime } = X +fZ$ and $Y^{\\prime } = Y+kZ$ for some smooth functions $f,k$ , then $\\bar{g}([X^{\\prime }],[Y^{\\prime }]) = g(X^{\\prime },Y^{\\prime }) = g(X,Y) = \\bar{g}([X],[Y]).$ On the other hand, the connection $\\overline{\\nabla }$ requires $Z$ to be parallel or else it is not well defined: $\\nabla _{\\!V}{Y} \\in \\Gamma (Z_{\\perp })$ if and only if $Z$ is parallel, in which case $\\overline{\\nabla }_{\\!V}{[Y^{\\prime }]} = [\\nabla _{\\!V}{Y^{\\prime }}] = [\\nabla _{\\!V}{Y}]+{0}{[V(k)Z]} + {0}{[k\\nabla _{\\!V}{Z}]}\\, = \\overline{\\nabla }_{\\!V}{[Y]}.$ That $\\overline{\\nabla }$ is indeed a linear connection follows easily.", "Now, if this connection is flat, then by definition its curvature endomorphism, which is the mapping $\\overline{\\text{R}}\\colon \\mathfrak {X}({U}) \\times \\mathfrak {X}({U}) \\times \\Gamma (Z_{\\perp }/Z) \\rightarrow \\Gamma (Z_{\\perp }/Z),\\nonumber $ whose action is given by $\\overline{\\text{R}}(V,W)[X] \\mathrel {\\unknown.", "{\\raisebox {0.3ex}{\\m@th \\cdot }}\\raisebox {-0.3ex}{\\m@th \\cdot }}=\\overline{\\nabla }_{\\!V}[\\overline{\\nabla }_{\\!W}[X]] - \\overline{\\nabla }_{\\!W}[\\overline{\\nabla }_{\\!V}[X]] - \\overline{\\nabla }_{\\!", 
"[V,W]}{[X]},\\nonumber $ will vanish, for any section $[X] \\in \\Gamma (Z_{\\perp }/Z)$ and vector fields $V,W \\in \\mathfrak {X}({U})$ .", "Using the metric $\\bar{g}$ , this flatness condition is equivalent to $\\bar{g}(\\overline{\\text{R}}(V,W)[X],[Y]) = 0\\hspace{7.22743pt}\\text{for all}\\hspace{7.22743pt}V,W \\in \\mathfrak {X}({U})\\hspace{7.22743pt},\\hspace{7.22743pt}[X],[Y] \\in \\Gamma (Z_{\\perp }/Z).$ But if we unpack the definitions of $\\overline{\\nabla }$ and $\\bar{g}$ , we see that $\\bar{g}(\\overline{\\text{R}}(V,W)[X],[Y]) = \\text{Rm}(V,W,X,Y) = \\text{Rm}(X,Y,V,W).$ It follows that $\\overline{\\text{R}} = 0$ if and only if $R(X,Y)V = 0$ for all $X,Y \\in \\Gamma (Z_{\\perp })$ and $V \\in \\mathfrak {X}({U})$ ; by () and Definition , this is precisely the condition to be a pp-wave." ], [ "Penrose Limits", "We now outline the importance and prove the existence of the famous “Penrose limit\", which assigns a plane wave metric Eq.", "REF as a limit of any spacetime $(M,g)$ in a neighbourhood of a null geodesic $\\gamma $ .", "This is not a property of the parallel wave metrics, but rather a remarkable feature of all spacetimes.", "This fact was originally demonstrated by Penrose in 1976 , where he described the limiting procedure as a null analogy to the procedure by which one obtains the tangent space (that is, “zooming in\" on a small neighbourhood and scaling those neighbourhoods up in a complementary manner).", "It is worth pointing out that applications of Penrose's limit in physics continue to the present day, particularly in higher dimensions and in relation to string theory and the AdS/CFT correspondence; see, e.g., , , and the references therein.", "We adopt a different notation to that of Penrose's work to be consistent with the majority of modern literature regarding pp-waves, and in particular Theorem REF of this article.", "We take inspiration from the discussion of , who is consistent in explicitly writing the appropriate pullbacks which appear only implicitly in the original work .", "Theorem 3.3 Consider a hyperbolic normal $n$ -dimensional semi-Riemannian manifold $(M,g)$ .", "Along any conjugate-point free portion $\\gamma ^{\\prime }$ of a null geodesic $\\gamma $ , one can write the metric g in the so-called “null coordinates\" as $g = 2 d u d v+H d u^{2}+2A_a d x^{a} d u+g_{a b} d x^{a} d x^{b},$ where $H$ , $A_a$ and $g_{ab}$ (with $a,b \\in 1,\\dots ,n-2$ ) are smooth functions of the coordinates and $(g_{ab})$ is a positive-definite matrix, i.e.", "a family of Riemannian metrics on the $(n-2)$ -dimensional embedded submanifolds defined by $u=\\text{const}, v=\\text{const}$ .", "One could represent this metric in matrix notation as $g=\\begin{pNiceMatrix}H & 1 & A_1 & \\dots & A_{n-2} \\\\1 & 0 & 0 & \\dots & 0 \\\\A_1 & 0 & {3-3}{(g_{ab})}& & \\\\\\vdots & \\vdots & & & \\\\A_{n-2} & 0 & & &\\end{pNiceMatrix}.$ First, note that null gradient vector field $Z = \\text{grad}(u)$ for some function $u$ on $M$ always exist locally, since the partial differential equation $g(\\text{grad}(u),\\text{grad}(u)) = 0$ is a Hamilton-Jacobi equation for $u$ which always admits local solutions (see ).", "As is suggested by the similarity of the result, we take inspiration from the proof of Theorem REF , noting that we no longer assume that $Z$ be covariantly constant.", "The necessary adjustment to the proof is as follows: That $Z$ is nonzero in a neighbourhood of $\\gamma ^{\\prime }$ holds again by the fact that it is null, but also by the fact that 
$\\gamma ^{\\prime }$ is geodesic, that is $\\nabla _{\\dot{\\gamma ^{\\prime }}} \\dot{\\gamma ^{\\prime }} = 0$ .", "Thus the remainder of step 1 remains valid, and we may construct a coordinate system $\\lbrace \\tilde{x}^0,v,\\tilde{x}^1,\\ldots ,\\tilde{x}^{n-2}\\rbrace $ with $\\text{grad}(u)=Z=\\tilde{\\partial }_v$ via the straightening theorem.", "Step 2 is not necessary in this context, as $u$ has already been introduced by the above argument.", "Step 3 follows as before, yielding a coordinate system $\\lbrace u,v,\\mathbf {x}\\rbrace = \\lbrace u,v,x^1,\\dots ,x^{n-2}\\rbrace $ on an open set $U \\subset M $ containing $\\gamma ^{\\prime }$ .", "The form of the metric and the positive-definiteness of $(g_{ab})$ then follow from parts (ii) and (iii) of Proposition REF .", "Note that an alternative and succinct version of this proof was provided by , but the reader should note that their “$U,V$ \" is our “$v,u$ \".", "We now describe the limiting procedure by which one can “zoom in\" on a null geodesic (called the Penrose limit) while simultaneously scaling up the metric, in a manner analogous to obtaining the tangent space of a Riemannian manifold.", "The primary difference however is that in the Riemannian case, the space obtained via this procedure is a flat space, whereas in the Penrose limit we will obtain an intrinsically curved space, which will turn out to be the plane wave Eq.", "REF written in the Rosen coordinates of Sec.", "REF ." ], [ "Limiting Procedure: Penrose's Construction", "This section follows Penrose's original construction but is presented in a more modern language, in a self-contained manner using the proofs of Section , and explicitly generalised to arbitrary dimension.", "The procedure by which we will define the Penrose limit of a spacetime will be (schematically) as follows: Take a spacetime $(M,g)$ and write the metric in null coordinates in a neighbourhood of a null geodesic $\\gamma $ .", "Define a new coordinate system whose coordinate functions are those of the null coordinates divided by powers of a parameter $\\Omega $ (which we will let go to 0 later, causing those coordinates to “blow up\") and write $g$ in these coordinates.", "Define another metric $h$ on $M$ conformal to $g$ with constant factor $h = \\Omega ^{-2} g$ Show that in the limit $\\Omega \\rightarrow 0$ , $h$ (that is, $\\Omega ^{-2} g$ ) is simply the metric of a plane wave.", "This is the “Penrose limit\" of $(M,g)$ in a neighbourhood of $\\gamma $ , and importantly, the construction was independent of the properties of the spacetime metric $g$ .", "That is, all spacetimes look like a plane wave when we simultaneously scale up the coordinates and scale up the metric near a null geodesic $\\gamma $ , which amounts to “zooming in\" on $\\gamma $ , or equivalently, blowing up a neighbourhood of $\\gamma $ to cover the whole spacetime.", "To understand the complementary scaling of the coordinates and the metric, Penrose interprets this procedure as first scaling up the coordinates to “blow up\" the points of interest (just as one does when looking at the tangent space of any point), then, to account for the fact that a general curvature tensor will appear to blow up as the coordinates do, we must simultaneously scale up the metric to scale down the curvature tensor and obtain finite results.", "Physically, Penrose interprets this procedure as boosting an observer closer and closer to the speed of light, and a complementary re-calibration of their clocks in such a manner so as to 
keep the affine parameter $u$ along the null geodesic $\\gamma $ invariant under the procedure.", "For details see the original work and for a more modern description.", "We now begin the explicit construction.", "Consider an $n$ -dimensional Lorentzian manifold $(M,g)$ and an open set $U \\subset M$ (containing a conjugate point-free segment of a null geodesic $\\gamma $ ) on which the null coordinates Eq.", "REF are defined, and label this null coordinate chart $\\psi $ .", "Then consider a new coordinate chart $\\left(\\tilde{u}, \\tilde{v}, \\tilde{x}^{1}, \\ldots , \\tilde{x}^{n-2}\\right)$ on $U$ defined by the change of coordinates $\\phi _\\Omega =\\psi ^{-1}\\circ \\varphi _{\\Omega } \\circ \\psi U \\rightarrow U$ where $\\varphi _\\Omega ~ ~ &\\mathbb {R}^n \\rightarrow \\mathbb {R}^n\\\\~ &\\left(u,v, x^{1}, \\ldots , x^{n-2}\\right) \\mapsto \\underbrace{\\left(\\frac{u}{\\Omega ^{2}}, v, \\frac{x^{1}}{\\Omega }, \\ldots , \\frac{x^{n-2}}{\\Omega } \\right)}_{=\\left(\\tilde{u}, \\tilde{v}, \\tilde{x}^{1}, \\ldots , \\tilde{x}^{n-2}\\right) }$ for $\\Omega > 0$ a constant.", "The map $\\phi _\\Omega $ is then a diffeomorphism for $\\Omega \\ne 0$ .", "Define a second metric The metric $h$ will turn out to be conformal to $(\\phi _\\Omega ^{-1})^*g$ , but we could also start from that fact and define $h = \\Omega ^{-2} (\\phi _\\Omega ^{-1})^*g$ and the calculate its explicit form, which will be Eq.", "REF .", "$h$ whose representation in the tilde coordinates is $h=\\begin{pNiceMatrix}\\Omega ^{2} \\tilde{H} & 1 & \\Omega \\tilde{A}_1 & \\dots & \\Omega \\tilde{A}_{n-2} \\\\1 & 0 & 0 & \\dots & 0 \\\\\\Omega \\tilde{A}_1 & 0 & {3-3}{(\\tilde{g}_{ab})}& & \\\\\\vdots & \\vdots & & & \\\\\\Omega \\tilde{A}_{n-2} & 0 & & &\\end{pNiceMatrix},$ where $\\tilde{H}$ , the $\\tilde{A}_a$ and the $\\tilde{g}_{ab}$ are implicitly functions of all the tilde coordinates defined (strategically) in the following manner $\\begin{split}\\tilde{H} &= H(\\Omega ^2 \\tilde{u}, \\tilde{v}, \\Omega \\tilde{x}^1, \\dots ,\\Omega \\tilde{x}^{n-2}) = H(u,v,x^1,\\dots ,x^{n-2}),\\\\\\tilde{A}_a &= A_a(\\Omega ^2 \\tilde{u}, \\tilde{v}, \\Omega \\tilde{x}^1, \\dots ,\\Omega \\tilde{x}^{n-2}) = A_a(u,v,x^1,\\dots ,x^{n-2}),\\\\\\tilde{g}_{ab} &= g_{ab}(\\Omega ^2 \\tilde{u}, \\tilde{v}, \\Omega \\tilde{x}^1, \\dots ,\\Omega \\tilde{x}^{n-2}) = g_{ab}(u,v,x^1,\\dots ,x^{n-2}).\\end{split}$ The metric $h$ is conformal to $(\\phi _\\Omega ^{-1})^*g$ , which can be seen as follows: First, by definition of the tilde coordinate system and Eq.", "REF , we relate the components of $g$ and $h$ as: $\\begin{array}{c @{{}={}} c @{{}={}} c @{{}={}} c}g_{uv} ~ du \\otimes dv & du \\otimes dv & \\Omega ^2 d\\tilde{u} \\otimes d\\tilde{v} & \\Omega ^2 h_{\\tilde{u}\\tilde{v}}~d\\tilde{u} \\otimes d\\tilde{v},\\\\g_{uu} ~ du \\otimes du & H du \\otimes du & \\Omega ^{4} H d\\tilde{u} \\otimes d\\tilde{u} & \\Omega ^{2} h_{\\tilde{u}\\tilde{u}} ~ d\\tilde{u} \\otimes d\\tilde{u},\\end{array}$ and one obtains a similar relationship for the remaining components: $g_{\\rho \\sigma } ~ dx^\\rho \\otimes dx^\\sigma = \\Omega ^2 h_{\\rho \\sigma }~d\\tilde{x}^\\rho \\otimes d\\tilde{x}^\\sigma .$ Second, since $\\phi _\\Omega $ is a change of coordinates, it holds that $g_{\\rho \\sigma }~dx^\\rho \\otimes dx^\\sigma = ((\\phi _\\Omega ^{-1})^*g)_{\\rho \\sigma }~d\\tilde{x}^\\rho \\otimes d\\tilde{x}^\\sigma $ and thus $((\\phi _\\Omega ^{-1})^*g)_{\\rho \\sigma }~d\\tilde{x}^\\rho \\otimes d\\tilde{x}^\\sigma = \\Omega ^2 
h_{\\rho \\sigma }~d\\tilde{x}^\\rho \\otimes d\\tilde{x}^\\sigma ,$ that is, $h$ and $(\\phi _\\Omega ^{-1})^*g$ are homothetic (conformal with constant conformal factor) as $h = \\frac{1}{\\Omega ^2}(\\phi _\\Omega ^{-1})^*g.$ We now actually take the Penrose limit of $(M,g,\\gamma )$ , which is a neighbourhood of $\\gamma $ in the spacetime formed by $M$ equipped with the metric $\\lim _{\\Omega \\rightarrow 0} \\frac{1}{\\Omega ^2}(\\phi _\\Omega ^{-1})^*g = \\lim _{\\Omega \\rightarrow 0}h.$ In this limit in the tilde coordinates, $h$ reduces to $\\lim _{\\Omega \\rightarrow 0}h=\\begin{pNiceMatrix}0 & 1 & 0 & \\dots & 0 \\\\1 & 0 & 0 & \\dots & 0 \\\\0 & 0 & {3-3}{(\\tilde{g}_{ab})}& & \\\\\\vdots & \\vdots & & & \\\\0 & 0 & & &\\end{pNiceMatrix},$ where $(\\tilde{g}_{ab})$ is now a function of $\\tilde{v} = v$ only as $\\tilde{g}_{ab} = g_{ab}(0,\\tilde{v}, 0,\\dots ,0)$ .", "This is precisely the Rosen coordinate representation of the plane wave metric Eq.", "REF (under an appropriate relabelling/reordering of the coordinates).", "What we have demonstrated is that in an appropriate limit around a null geodesic $\\gamma $ , any spacetime approaches a plane wave in a manner analogous to how a Riemannian manifold locally approaches Euclidean space in an appropriate limit.", "A collection of the Penrose limits of common spacetimes and a comprehensive overview of the properties of Penrose limits has already been established by , such as the hereditary properties (those properties of the limit which are inherited from the original spacetime).", "A covariant description of the limiting procedure is also provided, making significantly clearer the connection between the original metric $g$ and the properties of the resulting plane wave limit, which are encoded in the wave profile $H$ when written in the “Brinkmann coordinates\" as in Eq.", "REF .", "We close our discussion of Penrose's limit by illustrating a family of examples.", "These examples are taken from , wherein full derivations can be found; here we write down only the resulting plane wave limit itself, restricting out attention to dimension 4.", "Indeed, for both the Scharzschild metric and the Friedmann–Lemaître–Robertson–Walker (FRW) cosmological models, their Penrose plane wave limits take the form $ds^2 = 2dudv + \\sum _{a,b=1}^2\\frac{A_{ab}x^ax^b}{u^2}du^2 + dx^2+dy^2,$ where each $A_{ab}$ is a constant depending on the original metric, and where $x^1=x$ and $x^2=y$ ." ], [ "Causality in Parallel Waves", "We now review some basic results in the causal properties of parallel waves, starting with the well-known “remarkable property of plane waves\" proven by Penrose which spurred on much of this research." 
], [ "A Remarkable Property of Plane Waves", "Roughly, Penrose showed that a (not necessarily gravitational) plane wave exhibits a “focusing property\" on the null cones (see Fig.", "REF ), and as a consequence, there exists no Cauchy hypersurface sufficient for the specification of Cauchy data.", "This is because the past null cone of any event is focused to a single point (anastygmatism) or line (astygmatism), and since a Cauchy hypersurface has the property that it intersects any causal curve exactly once, it is concluded that this focusing property forces many causal curves to intersect any potential Cauchy hypersurface at least twice.", "To begin, let us first define the relevant objects.", "As in Sec.", "REF (with a small relabelling), a plane wave is defined as a 4-dimensional standard pp-wave in adapted coordinates $\\lbrace u,v,x^1,x^2\\rbrace $ for which the characteristic function $H(u,x^1,x^2)$ is quadratic in $(x^1,x^2)$ , that is the spacetime $(M ={\\rm I\\!R}^4,g)$ where $&g = 2\\mathrm {~d} u \\mathrm {~d} v + H(u,x^1,x^2)~\\mathrm {d}u^2 + (\\mathrm {d} x^1)^{2}+(\\mathrm {d} x^2)^{2}\\\\&H(u,x^1,x^2) = \\sum _{i, j=1}^{2} h_{i j}(u) x^{i} x^{j}$ for some symmetric matrix formed by the $h_{ij}$ .", "We also define the null cone: Definition 1 Null Cone The null cone (denoted $\\kappa _3$ ) at a point $Q \\in M$ is defined as the set of points lying on all null geodesics through $Q$ .", "In this section, Penrose utilises the so-called “sandwich waves\", defined by the characteristic that the amplitudes $h_{ij}(u) = 0$ unless $u \\in (a,b)\\subset {\\rm I\\!R}$ .", "One can visualise such a plane wave as in Fig.", "REF , in which it becomes clear that a sandwich wave is a plane wave for which the infinite extent in the $u$ direction is removed.", "Figure: (Left) The wave profile of a sandwich plane wave, in which the uu coordinate range of the “curved\" region is (a,b)(a,b).", "(Right) The focusing effect of such a wave on the past null cone of a point R in the electromagnetic case.", "Figures from Fig.", "1 & 2.We now outline the primary result of , where some details are omitted and only the main steps of the proof are reproduced.", "Theorem 3.4 The past null cone of any point $Q$ in a plane wave $(M,g)$ is focused to a single point (for an electromagnetic plane wave) or a line (gravitational plane wave).", "To begin, choose a point $Q$ in the flat region of $M$ , such that the components of $Q$ are $u = u_0 < a, \\quad v = v_0, \\quad x^i = 0,$ where $a$ is the lower bound of the interval on which $u$ is nonzero for the sandwich wave.", "Close to $Q$ , the equation of the null cone $\\kappa _3$ is $(u - u_0)(v - v_0) - x^ix^i = 0$ which can be written $v = f_{ij}(u) x^ix^j + v_0,$ where $f_{ij}(u) = (u - u_0)^{-1}\\delta _{ij}$ near $Q$ .", "We now wish to obtain a description of $\\kappa _3$ valid away from $Q$ , that is to find an appropriate $f_{ij}(u)$ .", "If the surface is to remain null even in the curved regions of $M$ , then one can show that $f_{ij}$ should be both symmetric and satisfyThe original paper lists the condition as $\\frac{d}{du}f_{ij} + f_{ik}f_{kl} + h_{ij} = 0$ , the $l$ index likely being erroneous.", "$\\frac{d}{du}f_{ij} + f_{ik}f_{kj} + h_{ij} = 0.$ With “initial condition\" Eq.", "REF one obtains an $f_{ij}$ which describes the null cone $\\kappa _3$ even in the curved region of $(M,g)$ .", "This extension is only valid while $f_{ij}$ is finite, and so we now examine if and when $f_{ij} \\rightarrow \\infty $ .", "To do so, consider the trace 
of the above differential equation, noting that $h_{ij}$ is trace-free for a vacuum solution and in general $h_{ii} > 0$ .", "$\frac{d}{du}f_{ii} + \frac{1}{2}f_{ii}f_{jj} = -\frac{1}{2}\left(f_{i k} f_{i k} \delta _{j l} \delta _{j l}-f_{i k} \delta _{i k} f_{j l} \delta _{j l}\right)-h_{i i} \le 0$ via Schwarz's inequality.", "Therefore, one finds the integro-differential inequality on the trace of $f$ $\frac{d^2}{du^2}\left\lbrace \frac{1}{2} \int _{u_0}^u f_{ii}(\bar{u})d\bar{u} \right\rbrace =\frac{d^2}{du^2} \rho (u) \le 0,$ where the inequality is sharp for at least some values of $u$ .", "Since our choice of $u_0$ in $Q$ was arbitrary, consider the limit $u_0\longrightarrow -\infty $ .", "Then from the definition of $f_{ij}$ near $Q$ , we see that $f_{ij} = 0 ~\forall ~ u < a$ .", "Then via Eq.", "REF , we see that $\kappa _3$ is described by the equation $v = v_0$ , that is the null cone is a null hyperplane in the flat region.", "When $f_{ij} = 0$ then in particular $\rho ^{\prime } = 0$ initially (prime meaning $u$ -derivative), and therefore by Eq.", "REF we have that an initially positive $\rho $ will become 0 for finite $u$ .", "If $\rho =0 $ then some component of $f_{ij}$ must become singular.", "(This is related to the fact that there are often coordinate singularities when writing a pp-wave metric in Rosen coordinates, which originally led to the belief that there did not exist non-singular plane wave solutions of the full Einstein equations.)", "Denote the $u$ at which $f_{ij}$ exhibits singularity by $u_1 > a$ (since for $u_1 \le a$ we have $f_{ij} \equiv 0$ ).", "If this singularity occurs outside the curved region, i.e.", "$u_1 > b$ then the null cone $\kappa _3$ encounters a singularity on the “past\" side of the sandwich wave.", "In fact, one needs to consider large and negative $u_0$ as opposed to the $-\infty $ limit, but this does not affect the relevant equations here.", "Now consider the flat region containing this singularity.", "In this region Eq.", "REF may be written as $p_{ij}^{\prime } = \delta _{ij}$ where $p_{ij}$ is the inverse matrix to $f_{ij}$ , i.e.", "$p_{ij}f_{jk} = \delta _{ik}$ (in the original reference this is written as $p_{ij}f_{jk} = \delta _{ij}$ , again likely to be erroneous).", "The solution of this differential equation for $p_{ij}$ is $p_{ij}(u) = u\delta _{ij} - q_{ij}$ for constant and symmetric $q_{ij}$ (since $f$ is symmetric).", "Therefore $f_{ij}$ has a singularity whenever $u$ is an eigenvalue of $q_{ij}$ .", "Either these eigenvalues are distinct or they are degenerate, in which case $q_{ij} = u_1\delta _{ij}$ .", "In this degenerate case, $p_{ij}$ has the form $(u - u_1)\delta _{ij}$ , and $\kappa _3$ has two vertices, namely $Q$ and the point $R = (u_1,v_0,\mathbf {0})$ .", "This is because the equation of $\kappa _3$ reduces to a single point at both $Q$ and $R$ , as in Fig. REF .", "In fact, that $\kappa _3$ is focused to a single point (anastygmatic) is specific to the purely electromagnetic case, in which $h_{ij}$ is proportional to $\delta _{ij}$ .", "For the gravitational case, one finds that $\kappa _3$ is focused onto a line.", "Since the arguments used are very similar, we omit this proof here.", "See for details.", "To explain why this result shows that plane waves are not globally hyperbolic, consider a candidate for a Cauchy hypersurface.", "Such a hypersurface would have to intersect the $v$ -line through $R$ .", "But then some of the other past-oriented lightlike geodesics from $R$ to $Q$ 
have to be intersected twice.", "Looking to Fig.", "REF , a connected spacelike hypersurface such as the proposed Cauchy hypersurface containing $Q$ must initially lie entirely in the past of (drawn as “below\" on the diagram) the future null cone of $Q$ .", "A Cauchy hypersurface can never meet the null line ${R}_1$ , as if it were to do so then it would intersect the null geodesics through $Q$ twice (since they are all focused onto ${R}_1$ ).", "As a result, the proposed Cauchy hypersurface must “bend downwards\" to avoid ${R}_1$ , and can never extend through it while remaining everywhere spacelike, and as in : “Cauchy data on such a hypersurface could thus give no information for specifying amplitudes for a parallel waveCuriously, this appears to be one of the first uses of the name “parallel wave\".", "Note however that this phrasing is not consistent with the definitions of this article, and the object in question is more accurately referred to as a “plane wave\".", "which might lie beyond ${R}_1$ \"." ], [ "Generic Position on the Causal Ladder", "After Penrose showed that the plane waves are not globally hyperbolic, interest was spurred in discovering the exact position of both the plane waves and pp-waves on the causal ladder.", "This question has been categorically answered for the plane waves by , and then for the $(N,h)$ p-waves by .", "Note that the causality properties of the more general class of parallel waves does not appear to have been studied.", "Let us first recall the causal ladder for Lorentzian manifolds: Globally hyperbolic ($\\exists $ a Cauchy surface) $\\Downarrow $ Causally simple (pasts and futures are closed + causality) $\\Downarrow $ Causally continuous (“continuity\" of pasts and futures + distinguishing) $\\Downarrow $ Stably causal ($\\exists $ a global time function) $\\Downarrow $ Strongly causal ($\\nexists $ closed or “almost closed\" causal curves) $\\Downarrow $ Distinguishing ($\\nexists $ points with same pasts and futures) $\\Downarrow $ Causal ($\\nexists $ closed causal curves) $\\Downarrow $ Chronological ($\\nexists $ closed timelike curves) $\\Downarrow $ Non–totally vicious ($\\exists $ points $p\\in M$ with $p\\lnot \\ll p$ ) from and .", "Note that “stably causal\" was first understood as the causality being a stable property under perturbations, but Hawking showed that this is equivalent to the existence of a global time function.", "Also note that $x \\ll y$ means that $x$ chronologically precedes $y$ , that is there exists a future-directed chronological (timelike) curve from $x$ to $y$ .", "To make explicit our conventions, and to align with the conventions of we choose the signature of our spacetimes $(M,g)$ to be $(-,+,\\dots ,+)$ , i.e., a non-zero vector field $X \\in TM$ is timelike $\\iff $ $g(X,X) < 0$ , lightlike $\\iff $ $g(X,X) = 0$ , spacelike $\\iff $ $g(X,X) > 0$ , and we take the zero vector to be spacelike.", "We also use “causal\" to mean lightlike or timelike when referring to a vector field.", "Also to remain consistent with , when dealing with parallel waves we will fix our time-orientation such that $\\partial _v$ is past-directed.", "We now examine the causal classification of the parallel waves, starting with the relatively simple result: Proposition 3.5 All $(N,h)$ p-waves are chronological.", "For a parallel wave defined by a covariantly constant, null vector field $Z$ , in the adapted coordinates of Theorem REF , we have $Z = \\nabla u = \\partial _v$ .", "For any future-directed causal curve $\\gamma (s) = 
(u(s),v(s),\\mathbf {x}(s))$ it holds that $\\dot{u}(s) = g(\\dot{\\gamma }(s),\\partial _v) \\ge 0$ where the inequality is sharp for $\\gamma (s)$ timelike.", "Such an inequality prevents the existence of closed timelike curves, and thus the spacetime is chronological.", "Being one of the “lower rungs\" of the causal ladder, being chronological is not a relatively strong restriction.", "We can however show that a generic $(N,h)$ p-wave lies one step higher on the ladder: Theorem 3.6 All $(N,h)$ p-waves are causal.", "We will prove this theorem below using Proposition REF .", "The proof of this result follows from , which we will reproduce here.", "To do so, we first introduce the concept of a quasi-time function.", "Definition 2 Quasi-time function.", "On a Lorentzian manifold $(M,g)$ a smooth function $f: M \\mapsto {\\rm I\\!R}$ is called a quasi-time function for $(M,g)$ if $\\nabla f$ is everywhere nonzero, causal and past-directed, and if every null geodesic segment $\\gamma $ such that $f \\circ \\gamma $ is constant, is injective.", "Now we may reproduce the afformentioned for completeness, which is stated as: Proposition 3.7 Any spacetime admitting a quasi-time function is causal.", "Assume $f$ is a quasi-time function as in Definition REF , then due to (i) we have that $f$ is strictly increasing along all future-directed timelike curves in $M$ , and hence $(M,g)$ is chronological.", "We now prove causality by contradiction.", "Assume $(M,g)$ is not causal, then $M$ would contain a non-trivial, smooth, future-directed null geodesic segment $\\tilde{\\gamma }:[0,1] \\rightarrow M$ with $\\tilde{\\gamma }(0)=\\tilde{\\gamma }(1)$ and $\\tilde{\\gamma }^{\\prime }(0)=\\tilde{\\gamma }^{\\prime }(1)$ .", "Furthermore $\\tilde{\\gamma }$ may be extended to an inextendible geodesic $\\gamma : {\\rm I\\!R}\\rightarrow M$ by letting $\\gamma (s)=\\tilde{\\gamma }(s \\bmod 1) .$ Again because of $(i)$ and by continuity of all the relevant properties, $f$ is non-decreasing along $\\gamma $ ; hence $f \\circ \\gamma (s)=\\lambda _{0}$ for all $s \\in {\\rm I\\!R}$ , constant $\\lambda _0 \\in {\\rm I\\!R}$ , which would contradict $(ii)$ , since $\\gamma (0)=\\gamma (1)$ .", "Thus, $(M,g)$ must be causal.", "We now return to the proof of Theorem REF , armed with the knowledge of the above proposition.", "Proof of Theorem REF All that we require is that any $(N,h)$ p-wave admits a quasi-time function.", "This is proven in and again is reproduced here.", "The claim is as follows: Claim: When an $(N,h)$ p-wave is written in the adapted coordinates of Theorem REF , the coordinate function $u$ is a quasi-time function as in Definition REF .", "To prove this, note that by definition we have a covariantly constant, null vector field $Z$ such that $Z = \\nabla u = \\partial _v$ .", "Thus $\\nabla u$ is causal by definition.", "Since $Z = \\nabla u$ is nontrivial and covariantly constant, we have that $\\nabla u$ is everywhere nonzero.", "Furthermore $\\nabla u$ is past-directed since $\\nabla u = \\partial _v$ and the time-orientation on $(M,g)$ can be determined by the condition that $\\partial _v$ be past-directed.", "Therefore point $(i)$ in the definition of a quasi-time function is satisfied.", "Next, note that since the restriction of $g$ (Eq.", "REF ) to the null hypersurface $\\Pi _{u_0} = u^{-1}(u_0)$ for some $u_0 \\in {\\rm I\\!R}$ is independent of the characteristic function $H$ and the wavefront is spacelike, the null geodesic segments will be of the form $\\gamma : v\\in {\\rm 
I\\!R}\\mapsto (u_0,v,\\mathbf {x}_0) \\in \\Pi _{u_0}.$ Such a map is injective, and thus point $(ii)$ in the definition of a quasi-time function also holds." ], [ "Conditions for Stronger Causal Character", "We now shift our focus to finding the conditions under which an $(N,h)$ p-wave exhibits stronger causality properties.", "This was the subject of , in which is was shown that the criterion for determining causal character is the spatial asymptotic behaviour of the characteristic function $H$ (when the parallel wave is written in adapted coordinates), and in some cases the completeness of the Riemannian manifold corresponding to the wavefront.", "A summary of the results of this work is given in Table REF , where one uses $-H$ to classify asymptotic behaviour as opposed to $H$ to be consistent with work which will be presented in Sec.", ".", "A precise definition of the asymptotic behaviour of $H$ follows from: Definition 3 Subquadratic Growth.", "We say that $-H(u,\\mathbf {x})$ behaves subquadratically at spatial infinity if there exists some $\\mathbf {x}_0\\in N$ (where $N$ is the wavefront) and continuous functions $R_1(u), R_2(u)(\\ge 0), p(u) < 2$ such that: $-H(\\mathbf {x}, u) \\le R_{1}(u) d^{p(u)}(\\mathbf {x}, \\mathbf {x}_0)+R_{2}(u) \\quad \\forall (u,\\mathbf {x}) \\in {\\rm I\\!R}\\times N,$ where $d$ is the distance canonically associated to the Riemannian metric on $N$ .", "When $p(u) \\equiv 2$ , then we say $-H(u,\\mathbf {x})$ behaves (at most) quadratically at spatial infinity.", "Table: Causal properties of an (N,h)(N,h)p-wave under certain conditions on the characteristic function HH.", "The rightmost column lists the (non-generic) causal character of certain examples with the corresponding asymptotic behaviour of -H-H.", "Results as in .In light of Table REF we can identify $H$ being quadratic as critical for the causal behaviour, in the sense that small perturbations either in the superquadratic or in the subquadratic direction may introduce significative qualitative differences in the causal character." 
], [ "The Ehlers–Kundt Conjecture", "The Ehlers–Kundt conjecture is a statement about the role of gravitational plane waves (Eq.", "REF ) in the mathematical description of gravitational waves.", "Roughly, it claims that the plane waves act as a mathematical idealisation of gravitational waves, and is stated as follows: “Prove the plane waves to be the only complete (gravitational) pp-waves.", "\"The Ehlers–Kundt conjecture originally contained the addendum “no matter which topology one chooses\", but as discussed in the extension of the conjecture to manifolds of general topology is nontrivial.", "This extension was provided by , which reduces to the statement above under the appropriate conditions.", "The conjecture stems from the idea that gravitational radiation should not arise in a spacetime in which there is no source to create it.", "If a spacetime is complete and Ricci-flatFor clarity, Ricci-flat = purely gravitational = vacuum = no matter present.", "but the metric describes a propagating wave, then that wave would be produced independent of any source.", "Since complete spacetimes are inextendible, that is they are not part of some larger spacetime, we can be sure that we are not just “missing\" the part of the spacetime containing a source.", "If a vacuum spacetime contains a wave but is not complete, it is certainly possible that we are missing the source in our description.", "An analogy would be a room with light coming from behind a curtain.", "In this analogy light is the pp-wave, “vacuum\" means we cant see any lightbulbs (sources), and completeness equates to removing the curtain, so we can see everywhere in the room.", "If the curtain is present and we see light in the room, it is reasonable to say there must be a source behind the curtain.", "However it seems impossible that there is light in the room, we can see everywhere, and there is no lightbulb.", "To translate back to our terminology, it seems it should be impossible that our spacetime contains a wave, is complete, and is also Ricci-flat.", "Figure: An analogy for the Ehlers–Kundt conjecture.", "Art courtesy of Christopher Martin.Ehlers and Kundt showed that the plane waves are always complete, even in the vacuum case.", "That is they correspond to the apparently unphysical case of a lit room with no curtain and no lightbulb.", "The Ehlers–Kundt conjecture assigns the plane waves the role of mathematical idealisations, and claims that any other pp-wave (REF ) must be incomplete, so that the source which “must have\" created the waves is simply not part of our description.", "This is strongly related to the fact proven by , wherein Penrose shows that the plane waves are not globally hyperbolic, as discussed in Sec.", "REF .", "Spacetimes which are bothIt is necessary to have both completeness and non-global hyperbolicity to claim the spacetime is unphysical.", "This is because by removing a point from a globally hyperbolic spacetime, one “destroys\" that global hyperbolicity.", "If that spacetime can be extended (here, by adding that point back) to a globally hyperbolic one, we should not consider it necessarily unphysical.", "However, a complete spacetime is inextendible, meaning there is no possibility to “get back\" the global hyperbolicity.", "For this reason, if a non-globally hyperbolic spacetime is complete, we can safely consider it unphysical.", "complete and not globally hyperbolic are generally considered unphysical, since the development of the spacetime from arbitrary initial data in the initial 
value formulation of the Einstein equations is not unique in this case.", "This construction is outlined in section REF .", "The EK-conjecture for gravitational pp-waves can be summarised as “spacetime is complete\" $\iff $ it is a plane wave.", "However since the $\Leftarrow $ direction was already proven by , the conjecture in fact only refers to the $\Rightarrow $ direction.", "Although there is no known counterexample (i.e.", "a complete pp-wave in 4 dimensions other than the plane wave), the conjecture remains an open question.", "Significant progress has been made in addressing it however, and the remainder of this section will outline that progress.", "To begin, let us formulate the conjecture in more precise mathematical terms, and focus our attention on the classical pp-waves on $M = {\rm I\!R}^4$ so that our metric takes the form $g = 2 d u d v - V(u, x,y) d u^{2}+ dx^2 + dy^2,$ where to be Ricci-flat/vacuum we must have that $V = -H$ is harmonic in $(x,y)$ .", "That is, $V_{xx} + V_{yy}=0$ .", "The Ehlers–Kundt conjecture in this case states: if $(M,g)$ is geodesically complete, then $V(u,x,y)$ must be quadratic in $(x,y)$ .", "We may replace the “complete\" in the original statement with “geodesically complete\" and study the geodesic equations of $(M,g)$ .", "Upon calculating the geodesic equations, one finds $\ddot{u} &=0 \\\\\ddot{v} &=\frac{\dot{u}}{2}\left(\dot{u} V_{u}(u, x, y)+2 \dot{x} V_{x}(u, x, y)+2 \dot{y} V_{y}(u, x, y)\right) \\\\\ddot{x} &=-\frac{\dot{u}^{2}}{2} V_{x}(u, x, y) \\\\\ddot{y} &=-\frac{\dot{u}^{2}}{2} V_{y}(u, x, y),$ where a dot represents the derivative with respect to an affine parameter $t$ .", "(Note that since the solution for $u(t)$ is $at+b$ for constants $a$ and $b$ , $u$ can be used as an affine parameter along the geodesic; this fact extends also to $n$ dimensions and does not depend on the properties of $H$ .)", "Since the boundary conditions determine $u$ entirely, and the completeness of $v(t)$ evidently depends only on the completeness of $x(t)$ and $y(t)$ , in studying the completeness the geodesic equations reduce to $\begin{split}\ddot{x}(u) &= - V_x(u,x,y),\\\\\ddot{y}(u) &= - V_y(u,x,y).\end{split}$ These equations can be recast as a Hamiltonian system by defining $q(u) = (x(u),y(u))$ , $p = \dot{q}$ , and $\nabla $ the Euclidean gradient on ${\rm I\!R}^2$ , such that we have $\dot{p} = -\nabla V(u,q).$ In this section we will use only $V$ as opposed to $H$ , in order to maintain the interpretation as the potential of a dynamical system in classical mechanics.", "The Ehlers–Kundt conjecture can be restated in this language as: Prove that for $V(u,x,y)$ harmonic in $(x,y)$ , if the Hamiltonian system $\dot{p} = -\nabla V(u,q)$ admits global solutions for all initial data, then the $u$ -constant function $V(u,\cdot )$ is an at most quadratic polynomial in $(x,y)$ .", "As mentioned above, this statement has not been proven in general.", "Before moving on to examine the special cases in which the conjecture has been proven, beginning with the so-called polynomial EK-conjecture, we pause to mention a beautiful connection this conjecture has with complex dynamics, an observation due to G. Cox (private communication)." 
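For orientation, it is worth spelling out why the $\Leftarrow $ direction is elementary in this language (a short check of ours, under the assumptions stated above): if $V(u,\cdot )$ is an at most quadratic polynomial for each $u$, say $V(u,x,y) = \tfrac{1}{2}\,a(u)(x^2-y^2) + b(u)\,xy + c_1(u)x + c_2(u)y + c_0(u)$, which is the general harmonic quadratic form, then $\nabla V(u,q)$ is affine in $q$ with coefficients continuous in $u$, so $\dot{p} = -\nabla V(u,q)$ is a linear (affine) non-autonomous system; standard linear ODE theory then gives solutions defined for all $u$ and all initial data, i.e. the plane waves are geodesically complete.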
], [ "Relation to Complex Dynamics", "In what follows, assume that $V$ is independent of $u$ (“autonomous\"), and consider the complex-valued function $f\colon \mathbb {C} \rightarrow \mathbb {C}$ constructed from the partial derivatives $V_x,V_y$ of $V$ : $z = x+iy \hspace{14.45377pt},\hspace{14.45377pt}f(z) = -V_x(x,y) + i V_y(x,y).$ The Cauchy-Riemann equations are $-V_{xx} = V_{yy} \hspace{14.45377pt},\hspace{14.45377pt} -V_{xy} = -V_{yx},$ and observe that, while the second equation holds trivially, the first equation is satisfied precisely when $V(x,y)$ is harmonic (this is also the case for $f(z) = V_y + i V_x$ ).", "It was shown in that, given any entire function $f(z)$ (i.e., a function holomorphic on the entire complex plane $\mathbb {C}$ ), the complex-valued ODE $\ddot{z} = f(z)$ admits global solutions for all initial data if and only if $f(z)$ is affine linear.", "Applying this result to Eq.", "REF , one finds $\ddot{x} + i\ddot{y} = \ddot{z} = f(z) = -V_x + iV_y,$ which then yields that this system is complete if and only if $V_{xxx} = V_{yyy} = 0$ ; i.e., if and only if $V$ is quadratic in $x,y$ .", "This is not quite a proof of the EK-conjecture, however, since the pair of real ODEs to which Eq.", "REF gives rise is not the usual Hamiltonian system Eq.", "REF , but rather the following variation of it: $\ddot{x} = -V_x \hspace{14.45377pt},\hspace{14.45377pt} \ddot{y} = V_y.$ Indeed, to obtain the usual Hamiltonian ODEs we would have had to choose instead the function $f(z) = -V_x - iV_y.$ (See also Eq.", "REF in Remark 5.1 below.)", "Unfortunately, this function is holomorphic if and only if the harmonic function $V$ is linear; indeed, owing to Eq.", "REF , this choice of $f(z)$ is precisely anti-holomorphic (i.e., its complex-conjugate is holomorphic).", "We therefore come to the beautiful realization that the EK conjecture is the anti-holomorphic analogue of and, as such, forms a bridge connecting general relativity to complex dynamics.", "The main ingredient in the proof of is a classification of the complete complex orbits of $\ddot{z} = f(z)$ which shows that they must be isomorphic to certain Riemann surfaces ; it is an intriguing question to see if the complete orbits of Eq.", "REF , in the case when $V$ is harmonic, can be similarly classified." 
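To make the anti-holomorphic statement above fully explicit, one can argue as follows (this short computation is ours, using only the definitions already introduced). Since ${\rm I\!R}^2 \cong \mathbb {C}$ is simply connected, every harmonic $V$ is the real part of an entire function, $V = \operatorname{Re} F$, and then $V_x - iV_y = 2\,\partial V/\partial z = F^{\prime }(z)$. The Hamiltonian system therefore reads $\ddot{z} = -V_x - iV_y = -\overline{V_x - iV_y} = -\overline{F^{\prime }(z)},$ the complex conjugate of a holomorphic function of $z$, whereas the completeness classification quoted above applies to $\ddot{z} = f(z)$ with $f$ itself holomorphic.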
], [ "Polynomial EK-Conjecture", "In this section we will outline some of the work done by Flores and Sánchez in , who studied the EK-conjecture in the case that the potential $V$ is polynomially bounded.", "We refer to the case when $V$ does not depend on $u$ as the “autonomous case\", that is $V = V(x,y)$ .", "The $u$ -dependence of $V$ is not restricted by any of the previous discussion, and so it is natural to first consider the autonomous case.", "To make statements about the completeness of trajectories, the authors make use of confinement properties of the relevant ODEs, and so we begin by developing some intuition for this:" ], [ "Motivation for Proof", "As a point of entry into thinking about the Ehlers–Kundt conjecture, consider for a moment the case when $V$ is an autonomous harmonic polynomial that is even in $y$ , namely, $V(x,-y) = V(x,y)$ ; e.g., $V(x,y) = -x^3+3xy^2\hspace{7.22743pt}\text{and}\hspace{7.22743pt}V(x,y) = -x^4+6x^2y^2-y^4$ are two such examples.", "The virtue of this class of harmonic polynomials is that, since the partial derivative $V_y$ is necessarily odd in $y$ , we must have $V_y(x,0) = 0$ .", "As a consequence, the ODE $\ddot{y} = -V_y(x(t),y(t))$ admits the trivial solution $y(t) = 0$ , for which choice the remaining ODE in $x$ takes the form $\ddot{x} = -V_x(x(t),0).$ Any solution $x(t)$ to Eq.", "REF then yields a solution $(x(t),0)$ of our original two-dimensional ODE — and the advantage to this approach is that Eq.", "REF permits a much easier blow-up analysis.", "Indeed, consider any autonomous harmonic polynomial that is even in $y$ and, like the examples in Eq.", "REF , has negative leading term in $x$ (in fact any harmonic polynomial that is even in $y$ can be put in such a form by a rotation of the $xy$ -plane, where we note that rotations are isometries of the pp-wave metric, and that they also preserve the property of being harmonic): $V(x,0) = - (a_dx^d + a_{d-1}x^{d-1} + \cdots + a_1x + a_0)\hspace{14.45377pt},\hspace{14.45377pt}a_d > 0\ ,\ d \ge 3.$ Then, since $a_d > 0$ , we can, by a translation $x \mapsto x + a$ if necessary (which is an isometry of the standard pp-wave metric), assume that each $a_i \ge 0$ as well.", "But now with “every term negative\", it follows easily that the solution $x(t)$ to Eq.", "REF satisfying $x(0) = 1$ and $\dot{x}(0) = \sqrt{2a_d}$ must be bounded below (in absolute value) by the corresponding solution to $\bar{V}(x) = -a_dx^d \hspace{14.45377pt},\hspace{14.45377pt}\ddot{x} = -\bar{V}^{\prime }(x(t)) = da_dx(t)^{d-1},$ since $-V_x(x,0) \ge -\bar{V}^{\prime }(x)$ for $x \ge 0$ .", "This latter, bounding solution is $\bar{x}(t) = \frac{b}{(c-t)^{\frac{2}{d-2}}} \hspace{14.45377pt},\hspace{14.45377pt}b := \Big [\underbrace{\frac{2}{a_d(d-2)^2}}_{>\,0}\Big ]^{\!\frac{1}{d-2}} \hspace{14.45377pt},\hspace{14.45377pt}c := b^{\frac{d-2}{2}},$ which blows up in finite time.", "Thus, since $|x(t)|$ is bounded below by a function that blows up in finite time, it follows that the solution $(x(t),0)$ also blows up in finite time.", "What made this approach work?", "It was the property of being even in $y$ that allowed us to find geodesics that stay in a confined region of the $xy$ -plane — namely, the $x$ -axis — which confinement simplified the resulting ODEs to the point where their behavior was
dominated by the leading term of just one polynomial.", "This is an effective means of symplifying the analysis, but, of course, not every harmonic polynomial is even in $y$ .", "The questions remains, therefore, as to whether this technique of “concentrating in a particular region of the plane\" can work in general.", "Indeed it was demonstrated in that this technique does work in full generality, thereby resolving the polynomial case of the Ehlers–Kundt conjecture.", "Remark 4.1 In their work on the polynomial case of the EK-conjecture the authors use a complex variable approach, wherein $z = x + iy$ takes the place of the vector $q$ and similarly $\\dot{z} = p$ .", "There is a good reason that we should consider the polynomial case in the complex numbers $\\mathbb {C}$ as opposed to the real numbers.", "As explained in , in the autonomous case $V:{\\rm I\\!R}^2 \\rightarrow {\\rm I\\!R}$ we may identify $\\mathbb {C}$ with ${\\rm I\\!R}^2$ .", "The completeness of the trajectories of a potential $V$ is equivalent to the completeness of a corresponding vector field $X$ on the tangent bundle, and there exists a well-established theory about completeness of holomorphic vector fields $X$ on $\\mathbb {C}^2$ in the case that they are polynomial.", "The more general case where $V$ is not polynomially bounded does not admit an obvious advantage in the complex language.", "In this notation, the geodesic equations take the form $\\dot{p} = -\\nabla V(q) \\Rightarrow \\ddot{z} = -V_x(x,y) - iV_y(x,y)$ For the purposes of this review, we will continue to explicitly write $x$ and $y$ in place of $z$ .", "We now ask ourselves if the above ODE Eq.", "REF admits global solutions for $V$ harmonic in $(x,y)$ , that is we wonder if the corresponding spacetime manifold in the original statement of the EK conjecture is geodesically complete.", "In fact, this is an open question in general.", "The following partial result by became an important motivation for the so-called polynomial EK-conjecture: Theorem 4.2 (Candela, Romero & Sánchez ’13) For $V: {\\rm I\\!R}^2 \\rightarrow {\\rm I\\!R}$ harmonic in $q = (x,y) \\in {\\rm I\\!R}^2$ , if there is a constant $b\\in {\\rm I\\!R}$ such that $V(q) \\ge - b|q|^2$ for all $q \\in {\\rm I\\!R}^2$ , then the ODE $\\ddot{q} = -\\nabla V(q)$ admits global solutions for all initial data.", "In other words, this is the statement that the Ehlers–Kundt conjecture holds in the case that $H = -V$ is subquadratic.", "We reproduce now a short version of the proof which is originally due to G. 
Cox (private communication): It is sufficient to assume $b > 0$ .", "Since we have translated the original conjecture to the realm of Newtonian dynamics, we may apply simple energy conservation $\\frac{1}{2}|p|^2 + V(q) = E \\Rightarrow |p|^2 \\le 2(E + b|q|^2).$ We then bound $|p|$ by $|q|$ in the cases of negative and non-negative energy: $\\begin{aligned}E<0 & \\Rightarrow |p|^{2} \\le 2 b|q|^{2}, \\\\E \\ge 0 & \\Rightarrow 2\\left(E+b|q|^{2}\\right)=\\underbrace{2(\\sqrt{E}+\\sqrt{b}|q|)^{2}-4 \\sqrt{E b}|q|}_{\\le 2(\\sqrt{E}+\\sqrt{b}|q|)^{2}}.", "\\\\\\end{aligned}$ Such that in both cases we have the bound $|p| \\le a+c|q| \\quad , \\quad a \\ge 0, c>0.$ We can then bound $|q(t)|$ using $|q(0)|$ as follows: $\\begin{aligned}\\underbrace{\\left|\\int _{0}^{t} p(s) d s\\right|}_{|q(t)|-|q(0)| \\le } & \\le \\int _{0}^{t}|p(s)| d s \\le \\underbrace{\\int _{0}^{t}(a+c|q(s)|) d s}_{a t+c \\int _{0}^{t}|q(s)| d s} \\\\& \\Rightarrow |q(t)| \\le (|q(0)|+a t)+c \\int _{0}^{t}|q(s)| d s \\\\& \\Rightarrow |q(t)| \\le \\underbrace{(|q(0)|+a t) e^{c t}}_{\\text{bounded on compact int.", "}}\\end{aligned}$ where in the final step we have used the integral form of Grönwall's inequality.", "The result then follows by Picard-Lindelöf.", "This result was proven in even in the case that $V$ is non-autonomous and where $|\\cdot |$ is replaced by a general distance function $d_g(\\cdot ~,\\cdot )$ associated to a Riemannian metric $g$ .", "Therefore the previous result also holds true for a gravitational $(N,h)$ -fronted wave REF .", "That the EK-conjecture is true for a harmonic and subquadratic $H = -V$ motivates one to ask if the same is true for harmonic and polynomially bounded $H$ .", "This question was answered by , but before stating the theorem let us first make precise the idea of a polynomially bounded $H$ .", "Remark 4.3 Following the terminology of , a function $H:{\\rm I\\!R}\\times {\\rm I\\!R}^2 \\rightarrow {\\rm I\\!R}$ is called “polynomially $u$ -bounded\" (meaning polynomially upper bounded along finite $u$ -times) when for each $u_{0} \\in \\mathbb {R}$ , there exists $\\epsilon _{0}>0$ and a polynomial $P_{0}:{\\rm I\\!R}^{2} \\rightarrow {\\rm I\\!R}$ such that $H(u,q) \\le P_{0}(q)$ for all $(u,q) \\in \\left(u_{0}-\\epsilon _{0}, u_{0}+\\epsilon _{0}\\right) \\times \\mathbb {R}^{2}$ .", "Note that we say H is quadratically polynomially $u$ -bounded when $P_0$ can be chosen of degree 2 for all $u_0 \\in {\\rm I\\!R}$ ." 
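As a simple illustration of this terminology (an example of our own, not from the cited reference), the profile $H(u,q) = e^{u}\,(x^{3} - 3xy^{2})$ is harmonic in $q$ for every $u$ and is polynomially $u$-bounded: for any $u_0$ one may take $\epsilon _0 = 1$ and $P_0(q) = e^{u_0+1}\,(1 + x^2 + y^2)^{2}$, since $|x^{3}-3xy^{2}| \le (x^2+y^2)^{3/2} \le (1+x^2+y^2)^{2}$. It is not, however, quadratically polynomially $u$-bounded, because along the ray $y=0$, $x>0$ it grows like $x^{3}$, which no polynomial of degree 2 can dominate.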
], [ "Outline of Proof", "The Polynomial EK-conjecture is stated as follows: Theorem 4.4 (Flores & Sánchez ’19) Let $V:{\\rm I\\!R}\\times {\\rm I\\!R}^2 \\rightarrow {\\rm I\\!R}$ be a polynomially $u$ -bounded $C^1$ -potential which is also $C^2$ and harmonic in the pair of variables q = (x, y).", "Then: all the solutions to the dynamical system Eq.", "REF are complete if and only if the function $V (u,\\cdot )$ is an at most quadratic polynomial for each $u \\in {\\rm I\\!R}$ .", "We will present here only a rough outline of the arguments behind the proof, following loosely .", "The proof of Theorem REF goes as follows: It is first shown that if a harmonic function $V$ is upper bounded by a polynomial of degree $n$ , that is if $V (x, y) \\le A(x^2+y^2)^{n/2}$ for some $n \\in \\mathbb {N}, A > 0$ at large $(x,y)$ , then $V$ must itself be a harmonic polynomial of degree $\\le n$ .", "The homogeneous, harmonic polynomials of degree $m>0$ on ${\\rm I\\!R}^2$ form a two-dimensional vector space.", "In the standard polar coordinates of ${\\rm I\\!R}^2$ , such polynomials take the form $p_m(\\rho ,\\theta ) = \\lambda _m \\rho ^m \\cos (m(\\theta + \\alpha _m))$ for $\\lambda _m > 0$ and $\\alpha _m \\in (-\\pi ,\\pi ]$ .", "Therefore any harmonic polynomial $P$ on ${\\rm I\\!R}^2$ of degree $n \\in \\mathbb {N}$ can be written as $P(\\rho ,\\theta ) = \\sum _{m=0}^np_m(\\rho ,\\theta )$ for some $p_0 \\in {\\rm I\\!R}$ .", "In particular, the autonomous potential $V(q)$ of Eq.", "REF can be written as such a sumThe extension to the non-autonomous case contains some subtleties which are explained in detail in .", "Loosely, for a polynomially $u$ -bounded and non-autonomous potential $V$ , the $\\lambda _m$ and $\\alpha _m$ of Eq.", "REF (and therefore Eq.", "REF ) become continuous functions of $u$ .. 
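For instance (a check of ours), the first of the two examples given earlier, $V(x,y) = -x^{3}+3xy^{2}$, is homogeneous of degree 3 and in polar coordinates reads $V = -\rho ^{3}\cos (3\theta )$, since $x^{3}-3xy^{2} = \operatorname{Re}\,(x+iy)^{3} = \rho ^{3}\cos (3\theta )$; equivalently, it is of the form above with $m=3$, $\lambda _3 = 1$ and $\alpha _3 = \pi /3$, and it is precisely the normal form $V_n(\rho ,\theta ) = -\rho ^{n}\cos (n\theta )$ used below with $n=3$.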
For simplicity in this summary, let us take the simple case of a homogeneous degree $n>2$ polynomial $V_n$ with $\\lambda _n=-1$ and $\\alpha _n=0$ , that is $V_n(\\rho ,\\theta ) = -\\rho ^n\\cos (n\\theta )$ .", "In the homogeneous case one can always obtain this via rotations, scaling or adding a real number to $V$ , none of which affect the completeness or harmonic characters necessary for our discussion.", "Consider the radial curves in polar coordinates $\\gamma _k(t) = (\\rho (t),\\hat{\\theta }_k)$ , $k \\in \\lbrace 0,\\dots ,n-1\\rbrace $ where $\\hat{\\theta }_k = 2\\pi k/n$ ($n$ is the degree of the potential $V$ being considered).", "Such curves are solutions of $\\ddot{q} = -\\nabla V_n(q)$ if and only if the radial component $\\rho (t)$ satisfies $\\ddot{\\rho }(t) = n\\rho ^{n-1}(t)$ .", "It is then proved that for any real number $n>2$ and $C^1$ function $\\lambda :[0,\\infty )\\rightarrow {\\rm I\\!R}$ , the solutions of the differential inequality $\\ddot{\\rho }(t) \\ge n\\lambda \\rho ^{n-1}(t)$ with initial conditions $\\rho (0)>0$ and $\\dot{\\rho }(0) > 0$ are incomplete under the following conditions: The solutions are incomplete if there exists some $\\lambda _0 >0$ such that $\\lambda \\ge \\lambda _0$ .", "If $\\lambda (0)>0$ then there exists some $k>0$ such that such that all solutions with initial conditions $\\rho (0)>k$ or $\\dot{\\rho }(0)>k$ are incomplete.", "The first of these points tells us immediately that the solutions $\\gamma _k$ satisfying $\\ddot{\\rho }(t) = n\\rho ^{n-1}(t)$ are incomplete, as in this case $\\lambda $ is the constant function equal to one, such that any $0<\\lambda _0<1$ provides the necessary bound.", "In fact a confinement property is shown, whereby there exists regions “around\" the $\\gamma _k$ labelled $D_k[\\rho _0,\\pi /(2n)]$ such that trajectories starting in $D_k[\\rho _0,\\pi /(2n)]$ (with suitable initial conditions) stay in $D_k[\\rho _0,\\pi /(2n)]$ , and these confined solutions satisfy the differential inequality Eq.", "REF , allowing us to prove that they too are incomplete .", "The existence of the confining regions $D_k[\\rho _0,\\pi /(2n)]$ for a homogeneous potential $V_n$ can be understood as follows: Along each $\\gamma _k = (\\rho (t),\\hat{\\theta }_k = 2\\pi k/n)$ , $V_n(\\rho ,\\theta ) = -\\rho _n\\cos (n\\theta )$ is decreasing and concave.", "Furthermore, the harmonicityHarmonicity implies that $V_n \\sim \\cos (n\\theta )$ such that $\\frac{\\partial V_n}{\\partial \\theta } \\sim \\sin (n\\theta )$ and evaluating at any $\\hat{\\theta }_k$ yields 0.", "This is the easily shown to be a minimum by taking another derivative.", "of $V_n$ implies that $\\frac{\\partial V_n}{\\partial \\theta }(\\gamma _k(t)) = 0 $ and that this is in fact a minimum.", "That is, the $\\hat{\\theta }_k$ are stable equilibria of trajectories close to the $\\gamma _k$ .", "This can be visualised by looking at the potential $V_n$ for some choice of $n$ .", "In Figure REF the case $n = 5$ is demonstrated, in which one can see $n = 5$ different “channels\" with centers corresponding to the $\\gamma _k, k\\in \\lbrace 0,\\dots ,4\\rbrace $ .", "To prove the case in which $V$ is not homogeneous, it is first written as a linear combination of polynomials like $V_n$ .", "Then the $\\gamma _k$ are no longer solutions of the full dynamical system $\\ddot{q} = -\\nabla V(q)$ , but it is shown that there still exists regions “around\" the $\\gamma _k$ labelled $D[\\rho _0,\\theta _+]$ which have qualitatively the same behaviour as 
the $D_k[\rho _0,\pi /(2n)]$ .", "This is achieved by showing that the radial component of a trajectory $\gamma $ grows sufficiently fast compared to the angular oscillation that $\gamma $ never escapes the $D[\rho _0,\theta _+]$ .", "To prove the case when $V$ is non-autonomous a similar procedure is followed to that of the autonomous case, with some technical complications.", "The first notable difference is that the polar expressions of a harmonic potential $V(u,q)$ Eq.", "REF and Eq.", "REF become valid only on an interval in $u$ , that is $p_m(u,\rho ,\theta ) = \lambda _m(u) \rho ^m \cos (m(\theta + \alpha _m(u))),~~~u\in (u_0 - c, u_0 + c) \subset {\rm I\!R}$ for some $0 < c \in {\rm I\!R}$ .", "Here we can only choose $\alpha (u_0) = 0$ , and in general $\alpha (u) \ne 0$ .", "As a result, in the non-autonomous case we have that the $\hat{\theta }_k$ are no longer constant: $\hat{\theta }_k(u) = \frac{2\pi k - \alpha (u)}{n},~~~k = 0,\dots ,n-1.$ The remaining differences follow a similar pattern, whereby objects become $u$ -dependent and are defined on intervals.", "However since the rough details are the same as the autonomous case, these details will be omitted here.", "Figure: Homogeneous degree-5 potential $V_5$ as a surface (left) and contour plot (right).", "These images make clear the $n$ stable trajectories $\gamma _k$ for $k\in \lbrace 0,\dots ,n-1\rbrace $ .", "The same features can be seen for any natural number $n>2$ .", "Summary – Polynomial EK-conjecture We first saw the Ehlers–Kundt conjecture, stated as: “Prove the plane waves to be the only complete (gravitational) pp-waves.\"", "This was a statement about the completeness of the solutions of the geodesic equation for a metric $g = 2 d u d v - V(u, x,y) d u^{2}+ dx^2 + dy^2$ where $V$ is harmonic in $(x,y)$ .", "The geodesic equations were reduced to a Hamiltonian system $\dot{p} = -\nabla V(q)$ with $q = (x,y)$ and $p = \dot{q}$ .", "In mathematical terms, the conjecture states: $\text{the solutions of } \dot{p} = -\nabla V(q) \text{ exist for all times} \iff \text{$V(u,x,y)$ is quadratic in $(x,y)$}.$ The $\Leftarrow $ direction is already known to hold (see Sec.", "REF ), and the $\Rightarrow $ direction is an open question.", "The fact that a quadratically-bounded and harmonic $V$ was proven to have complete trajectories motivated us to ask what happens if the harmonic $V$ is polynomially bounded.", "This question was answered by where it was proven that for such a $V$ , all the solutions to the dynamical system Eq.", "REF are complete if and only if the function $V (u,\cdot )$ is an at most quadratic polynomial for each $u \in {\rm I\!R}$ .", "That is, the Ehlers–Kundt conjecture is proved to hold in the case that $V$ is polynomially bounded.", "We may then ask ourselves if it is reasonable to expect that $V$ be polynomially bounded.", "In fact in the causal study, it was discovered that in the autonomous case unless $V$ were quadratically polynomially bounded, the pp-wave would not be strongly causal.", "For further evidence supporting such a bound see .", "Therefore this is arguably the strongest known result addressing the EK conjecture.", "It is not, however, the only one; indeed, in the case of an autonomous potential, the EK conjecture has also been settled in the case when the spacetime is strongly causal, in ." 
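To build some hands-on intuition for the finite-time blow-up that underlies these incompleteness arguments, the following minimal numerical sketch (ours, purely illustrative; the potentials and initial data are arbitrary choices, not taken from the cited works) integrates the reduced system $\ddot{q} = -\nabla V(q)$ for the cubic harmonic potential $V_3 = -\rho ^3\cos (3\theta ) = -x^3+3xy^2$ and, for comparison, for a quadratic (plane-wave) profile.  Escape to a very large radius within a short time is used as a crude numerical proxy for finite-time blow-up.

```python
# Illustrative sketch only: compare a non-quadratic harmonic potential (expected
# finite-time blow-up along a "channel") with a quadratic one (global solutions).
import numpy as np
from scipy.integrate import solve_ivp

def rhs_cubic(t, s):
    x, y, px, py = s
    # V = -x^3 + 3 x y^2  =>  V_x = -3x^2 + 3y^2,  V_y = 6xy
    return [px, py, 3*x**2 - 3*y**2, -6*x*y]

def rhs_quad(t, s):
    x, y, px, py = s
    # V = (x^2 - y^2)/2  =>  V_x = x,  V_y = -y  (a plane-wave profile)
    return [px, py, -x, y]

def escape(t, s):
    return np.hypot(s[0], s[1]) - 1e6   # stop once the orbit leaves a huge ball
escape.terminal = True

s0 = [1.0, 0.2, 1.0, 0.0]               # starts inside the channel around theta = 0
for name, rhs in [("cubic", rhs_cubic), ("quadratic", rhs_quad)]:
    sol = solve_ivp(rhs, (0.0, 4.0), s0, events=escape, rtol=1e-9, atol=1e-12)
    print(name, ": reached t =", round(sol.t[-1], 3), ", escaped:", sol.status == 1)
```

The expected behaviour is that the cubic orbit triggers the escape event well before the final time, while the quadratic orbit is integrated over the whole interval; this is of course only a numerical illustration, not a substitute for the confinement estimates sketched above.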
], [ "The Compact Case", "One may also wonder if the Ehlers–Kundt conjecture could be answered in the case that a pp-wave $(M,g)$ is a compact Lorentzian manifold, since such manifolds are known to be complete under a wealth of circumstancesThough compact Lorentzian manifolds are not always complete, in contrast to compact Riemannian manifolds which are always complete (see Hopf–Rinow theorem )..", "Some examples include when they are flat, have constant curvature, are homogeneous (and even locally homogeneous in the 3 dimensional case), or admit a time-like conformal Killing vector field .", "Unfortunately, general pp-waves do not satisfy any of these properties, and so some additional results are required to address the EK conjecture in this case.", "The question of completeness for compact pp-waves has indeed been answered by , and that work is the subject of this section.", "Example: Compact pp-wave.", "Consider the flat metric $h$ on the n-torus $\\mathbb {T}^n$ , then the product manifold $M = \\mathbb {T}^2 \\times \\mathbb {T}^n$ with the metric $g = 2d\\theta d\\phi + 2H d\\theta ^2 + h$ with $H \\in C^{\\infty }(\\mathbb {T}^n)$ is compact and is in fact a standard pp-wave with defining covariantly constant vector field represented as $\\partial _{\\phi }$ .", "Note however that a “wave\" is not a very accurate name in the compact case, since as mentioned in Sec.", "it is the (null) asymptotics which signal the physical presence of radiation, and the compact case does not admit the same notion of “null infinity\" as was used to define the presence of radiation.", "The principal results of can be summarised as follows: The universal cover of a compact pp-wave is globally isometric to a standard pp-wave (Eq.", "REF ) Every compact pp-wave $(M, g)$ is geodesically complete.", "Every compact Ricci-flat pp-wave is a plane wave.", "Point A is instrumental in proving point B.", "Point B appears to be in contradiction to the EK conjecture, but such an apparent problem is resolved by point C. 
That is, there are no non-plane compact vacuum pp-waves, so we need not wonder about their completeness on physical grounds.", "Thus these results solve the Ehlers–Kundt conjecture in the compact case.", "Or rather, the authors have proven that one need not conjecture about the incompleteness of non-plane vacuum compact pp-waves, as there are no such pp-waves.", "The remainder of this section will outline the methods by which these results are obtained.", "Let us begin with result (A) in more detail: Theorem 4.5 The universal cover of an $n$ -dimensionalNote that the authors of the original work use $n$ as the dimension of only the wavefront, and in this article it is the dimension of the spacetime.", "Therefore $n_{\\text{this article}} = n_{\\text{Leistner et al.}}", "+ 2$ .", "Similarly, a different convention on $H$ is used, such that the $H_{\\text{this article}} = 2H_{\\text{Leistner et al}}$ .", "This does not impact the methods used in any meaningful way.", "compact pp-wave defined by a covariantly constant null vector field $Z$ is globally isometric to a standard pp-wave (Eq.", "REF ) which can be written as $({\\rm I\\!R}^{n}, g^H = 2 d u d v+H\\left(u, \\mathbf {x}\\right) d u^{2}+\\delta _{a b}d x^{a} d x^{b} )$ and under this isometry, the lift of $Z$ is mapped to the coordinate vector field $\\frac{\\partial }{\\partial v}$ Though we don't present the proof of this theorem here, we remark that it makes significant use of the “screen bundle\" which is closely related to the “wavefront\" of our Definition .", "However, as remarked in in the compact case this nomenclature is perhaps inappropriate.", "Using Theorem REF , it is then proven that: Theorem 4.6 Every compact pp-wave $(M, g)$ is geodesically complete.", "To prove this statement, let us first examine the completeness of a standard pp-wave (Eq.", "REF ).", "Then via Theorem REF we can make statements about the completeness of compact pp-waves.", "Recall that a standard pp-wave may be written in the global coordinate chart $\\lbrace u,v,x^1,\\dots ,x^{n-2}\\rbrace $ as $g = 2 d u d v+H\\left(u, \\mathbf {x}\\right) d u^{2}+\\delta _{a b}d x^{a} d x^{b}.$ Proposition 4.7 The standard pp-wave metric is geodesically complete if $\\left|\\frac{\\partial ^{2} H}{\\partial x^{i} \\partial x^{j}}\\right| \\le c$ for $0 < c \\in {\\rm I\\!R}$ for all $i,j \\in \\lbrace 1\\dots ,n-2\\rbrace $ Let us examine the geodesic equations of the standard pp-wave metric: For a curve $\\gamma $ with components $(u(s), v(s), x^1(s), \\dots , x^{n-2}(s))$ , the geodesic equation for the $u$ -component is given by: $\\ddot{u}(s) =0 \\Longrightarrow u(s)=a s+b~\\text{for some }a,b \\in {\\rm I\\!R}\\\\$ that is, the $u$ component is defined on all of ${\\rm I\\!R}$ .", "The remaining components of the geodesic equations are given by $\\ddot{v}(s) &=-2 a \\dot{x}^{k}(s) \\frac{\\partial H}{\\partial x^{k}}-a^{2} \\frac{\\partial H}{\\partial u},\\\\\\ddot{x}^{k}(s) &=a^{2} \\frac{\\partial H}{\\partial x^{k}}.$ Since the $v$ equation only depends on the $x^k$ and not on $v$ , then the solution is defined on ${\\rm I\\!R}$ provided that the $x^{k}$ are defined on ${\\rm I\\!R}$ .", "Unfortunately the $x^k$ equation does not in general admit solutions on all of ${\\rm I\\!R}$ .", "An example (as in ) is found when $H = \\frac{1}{2}(x^j)^4$ for some $j\\in \\lbrace 1,\\dots ,n-2\\rbrace $ .", "In this case, the only nontrivial equation (when $a\\ne 0$ ) for the $x^k(s)$ is $\\ddot{x}^j(s) = 2a^2(x^j)^3$ which has solution $x^j(s) = \\frac{1}{1-as}, 
~~s\\in (-\\infty ,1/a).$ Since this solution develops a singularity, so too does the solution for $v$ , and we conclude that the standard pp-wave is geodesically incomplete in this case.", "So then when are the solutions of the $\\ddot{x}^k$ equations defined on all of ${\\rm I\\!R}$ (thus making the pp-wave geodesically complete)?", "This is guaranteed when the second derivatives of $H$ are bounded; as then by the mean value theorem the first derivatives are Lipschitz continuous which suffices in view of the Picard–Lindelöf theorem.", "One may think that this result yields many examples of complete pp-waves which are non-plane (and are instead just bounded in second derivative of $H$ ) but in fact we have not imposed that the pp-wave is gravitational.", "For a gravitational pp-wave $H$ is harmonic, and a harmonic function can only have bounded second derivatives (corresponding to a complete pp-wave by the previous proposition) if it is quadratic and thus a plane waveNote that this is the content of Remark 5 of the original work .", "Their Remark 5 concludes with “thus a pp-wave\", but this should in fact read “thus a plane wave\".", "The correct conclusion is reached in this article, and we thank Prof. Leistner for confirming..", "In order to apply this result to our case, that is to prove that a compact pp-wave is geodesically complete (Theorem REF ), we must prove that the second derivatives of $H$ are bounded in the compact case.", "The following proposition resolves this question: Proposition 4.8 Consider a compact pp-wave.", "By Theorem REF , its universal cover is a standard pp-wave $({\\rm I\\!R}^{n}, g = 2 d u d v+H\\left(u, \\mathbf {x}\\right) d u^{2}+\\delta _{a b}d x^{a} d x^{b} )$ .", "Then the second derivatives of $H$ are bounded $0 \\le \\frac{\\partial ^{2} H}{\\partial x^{i} \\partial x^{j}} \\le c ~~~\\forall ~i,j = 1,\\dots ,n-2.$ We again omit the proof in favour of brevity.", "See .", "Thus one arrives at a proof of theorem (B): Let $(M,g)$ be a compact pp-wave.", "By theorem (A) the universal cover is isometric to a standard pp-wave, and by the above proposition such a standard pp-wave is complete.", "Therefore $(M,g)$ itself is complete.", "We finally arrive at the statement which resolves the EK conjecture in the case of compact pp-waves.", "Theorem 4.9 Every compact Ricci-flat pp-wave is a plane waveNote that there are examples of compact non-plane pp-waves, but they are not Ricci-flat.. Let $(M, g)$ be a compact pp-wave and let $({\\rm I\\!R}^{n+2}, g^H)$ be the standard pp-wave that is globally isometric to the universal cover of $(M, g)$ .", "As in Proposition REF , we have that the second derivatives of $H$ are bounded.", "If $g$ is Ricci-flat, so too is $g^H$ , and thus $H$ is harmonic with respect to the $x^i$ directions $\\sum _{i=1}^{n-2} \\partial _i^2 H = 0.$ But this implies that also $\\partial _i \\partial _j H$ is harmonic in the same sense, and thus, by the maximum principle for harmonic functions , independent of the $x^i$ components.", "Hence, $H(u,\\mathbf {x})=\\sum _{i, j=1}^{n-2} a_{i j}(u) x^{i} x^{j}+b_{i}(u) x^{i}+c(u),$ where $a_{ij}$ , $b_i$ and $c$ depend only on $u$ and not the $x^i$ , and thus since $H$ is quadratic in $x^i$ , $(M,g)$ is a plane wave.", "Therefore as stated, one need not conjecture about the incompleteness of non-plane vacuum compact pp-waves, as there are no such pp-waves.", "As a result, the Ehlers–Kundt conjecture has been resolved in the compact case." 
], [ "Case of Failure", "Let us outline very briefly the following case in which the Ehlers–Kundt conjecture is known not to hold: Impulsive case: Though usually omitted for brevity in this article, the continuity of the characteristic function $H$ of a pp-wave in $u$ of the adapted coordinates is in fact vital.", "To quote from : Impulsive waves have a non-continuous profile type $H(u,z=(x,y)) = f(z)\delta (u)$ for some (generalized) delta-function $\delta $ and smooth $f$ .", "Thus, the function $H$ can be regarded as $z$ -harmonic when $\Delta f = 0$ .", "The mentioned results of completeness yield counterexamples to the EK conjecture in the impulsive setting, showing the necessity of continuity in $u$ as well as the appropriate smoothness of $H$ .", "This necessary smoothness and continuity in the non-autonomous case ($H$ not independent of $u$ ) amounts to the following: $H$ should be $C^1$ in $u$ (for constructing the Levi-Civita connection), and $H$ should be $C^2$ in $z$ (to impose harmonicity, i.e.", "the vacuum condition).", "The relevant references in the study of such impulsive waves are also given in ." ], [ "Acknowledgements", "The authors wish to thank Prof. Miguel Sánchez for numerous helpful discussions, and Prof. Paweł Nurowski for valuable clarifications.", "We also thank Luke Timmons and John Walker for their continued support and assistance.", "Data sharing not applicable to this article as no datasets were generated or analysed during the current study." ], [ "Exterior derivative of one-forms", "The following is based on .", "There are two primary conventions for defining a wedge product, which are in fact proportional to each other.", "The first is that which is used in : for $\alpha $ a $k$ -form and $\beta $ an $l$ -form $\alpha \wedge \beta =\frac{(k+l)!}{k!\,l!}\,\operatorname{Alt}(\alpha \otimes \beta ).$ The second is that of , and is given by $\alpha \wedge \beta =\operatorname{Alt}(\alpha \otimes \beta ).$ In both cases, the wedge product is proportional to $\operatorname{Alt}(\alpha \otimes \beta )$ .", "Let us choose convention 1 and write explicitly $\operatorname{Alt}(\nabla \omega )$ for a $k$ -form $\omega $ $\operatorname{Alt}(\nabla \omega )\left(X_{1}, \cdots , X_{k+1}\right)=\frac{1}{(k+1)!}\sum _{\sigma \in S_{k+1}}(\operatorname{sgn} \sigma ) \nabla \omega \left(X_{\sigma (1)}, \cdots , X_{\sigma (k+1)}\right)$ for smooth vector fields $X_j$ .", "The exterior derivative of a $k$ -form $\omega $ is given by $\begin{array}{c}\mathrm {d} \omega \left(X_{1}, \cdots , X_{k+1}\right)=\sum _{i=1}^{k+1}(-1)^{i+1} X_{i}\left(\omega \left(X_{1}, \cdots , \widehat{X}_{i}, \cdots , X_{k+1}\right)\right)+ \\\sum _{i<j}(-1)^{i+j} \omega \left(\left[X_{i}, X_{j}\right], X_{1}, \cdots , \widehat{X}_{i}, \cdots , \widehat{X}_{j}, \cdots , X_{k+1}\right),\end{array}$ where $\widehat{X}_{j}$ denotes that the argument $X_j$ is to be omitted.", "By taking an alternating product proportional to the wedge product with any constant of proportionality (i.e.", "convention) and comparing both the $d\omega $ and $\operatorname{Alt}(\nabla \omega )$ in Riemann normal coordinates (as they are both tensors and thus can be compared pointwise), one finds that the expressions simplify greatly and are indeed proportional to each other." 
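For the case of one-forms that gives this appendix its title, the comparison can be made completely explicit (the following computation is ours, with the slot convention $(\nabla \omega )(X,Y) = (\nabla _X\omega )(Y)$).  For $k=1$ the general formula reduces to $\mathrm {d}\omega (X,Y) = X(\omega (Y)) - Y(\omega (X)) - \omega ([X,Y])$.  Writing $X(\omega (Y)) = (\nabla _X\omega )(Y) + \omega (\nabla _X Y)$, similarly for $Y(\omega (X))$, and using that the Levi-Civita connection is torsion-free, $[X,Y] = \nabla _X Y - \nabla _Y X$, the connection terms cancel and one is left with $\mathrm {d}\omega (X,Y) = (\nabla _X\omega )(Y) - (\nabla _Y\omega )(X) = 2\operatorname{Alt}(\nabla \omega )(X,Y),$ i.e. $\mathrm {d}\omega = 2\operatorname{Alt}(\nabla \omega )$; a different ordering of the slots of $\nabla \omega $ or a different wedge convention only changes the constant, consistent with the proportionality asserted above.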
2207.03591
[ [ "Trapped-particle microrheology of active suspensions" ], [ "Abstract In microrheology, the local rheological properties such as viscoelasticity of a complex fluid are inferred from the free or forced motion of embedded colloidal probe particles.", "Theoretical machinery developed for forced-probe microrheology of colloidal suspensions focused on either constant-force (CF) or constant-velocity (CV) probes while in experiments neither the force nor the kinematics of the probe is fixed.", "More importantly, the constraint of CF or CV introduces a difficulty in the meaningful quantification of the fluctuations of the probe due to a thermodynamic uncertainty relation.", "It is known that for a Brownian particle trapped in a harmonic potential well, the product of the standard deviations of the trap force and the particle position is $d k_BT$ in $d$ dimensions with $k_BT$ being the thermal energy.", "As a result, if the force (position) is not allowed to fluctuate, the position (force) fluctuation becomes infinite.", "To allow the measurement of fluctuations, in this work we consider a microrheology model in which the embedded probe is dragged along by a moving harmonic potential so that both its position and the trap force are allowed to fluctuate.", "Starting from the full Smoluchowski equation governing the dynamics of $N$ hard active Brownian particles, we derive a pair Smoluchowski equation describing the dynamics of the probe as it interacts with one bath particle by neglecting hydrodynamic interactions among particles in the dilute limit.", "From this, we determine the mean and the variance (i.e., fluctuation) of the probe position in terms of the pair probability distribution.", "We then characterize the behavior of the system in the limits of both weak and strong trap.", "By taking appropriate limits, we show that our generalized model can be reduced to the well-studied CF or CV microrheology models." 
], [ "Introduction", "Rheology is the study of flow and deformation of complex materials in response to an applied force.", "Traditional (bulk) rheological measurements are performed by shearing a macroscopic sample of the material confined between two solid surfaces, such as in the cone-and-plate rheometer.", "Bulk rheological studies such as shear rheometry provide a measurement of the macroscopic rheological behavior of complex materials.", "Recently, particle-tracking microrheology has become a standard tool for studying the mechanical properties of materials on a much smaller scale [1], [2], [3], [4].", "In contrast to bulk rheology, microrheology only requires a small sample volume and can be used to quantify spatial heterogeneity.", "As a result, microrheology is particularly useful for examining soft biological materials.", "For example, classical bulk rheometry cannot be used to probe the microenvironment inside living cells without disrupting their mechanical structure, while particle-tracking microrheology can be performed [5], [6], [7], [8], [9].", "To aid in the understanding of experimental measurements and in the prediction of colloidal microrheology, [10] developed a theoretical framework in which a colloidal probe is pulled through a suspension of neutrally buoyant bath colloids.", "This model has been used and generalized to study the microrheology of passive colloids [11], [12], [13], [14], [15], [16] and active colloids [17], [18].", "When the external pulling force is absent, the probe "collides" with bath particles as it undergoes Brownian motion—the so-called tracer diffusion problem.", "To characterize the nonlinear response, forced microrheology is considered in which an external force, often larger than the thermodynamic restoring force, is applied to the probe.", "Within forced microrheology, two operating modes—constant-force (CF) and constant-velocity (CV)—are often considered from a theoretical perspective.", "In the CF mode, the probe is driven by a constant external force ${F}^\text{ext}$ and the velocity of the probe is fluctuating.", "Conversely, for a CV probe, the probe velocity ${U}_1$ is a constant vector (therefore, the position of the probe is known at all times) and the force required to maintain such a steady motion must fluctuate.", "To characterize the micro-viscous response of colloidal suspensions, an effective microviscosity $\eta ^\text{eff}$ can be defined using the Stokes drag law.", "For a spherical probe of radius $a$ in the CF mode, this is given by $F^\text{ext} = 6 \pi \eta ^\text{eff} a\langle U_1 \rangle $ , where $\langle U_1 \rangle $ is the probe velocity in the direction of ${F}^\text{ext}$ averaged over Brownian fluctuations.", "The ratio between the effective microviscosity and the solvent viscosity, $\eta ^\text{eff}/\eta $ , is the main quantity of interest in colloidal microrheology.", "For the CV mode, the average external force is used in the definition of the effective microviscosity: $\langle F^\text{ext} \rangle = 6 \pi \eta ^\text{eff} aU_1$ .", "In order to measure the microviscoelastic response of suspensions, an oscillatory driving force is considered [11].", "While the CF (or CV) model is successful in quantifying the mean velocity (or mean force) of a probe driven through colloidal suspensions, the fluctuation from this mean value is largely unexplored.", "Taking the CV mode as an example, one could calculate the variance of the mean force using the probe-distorted microstructure.", "The 
question is what does this variance physically imply?", "In particular, how does this variance relate to the fluctuations in the suspension?", "In an experimental setting, neither the force nor the velocity of the probe is fixed; they are both allowed to fluctuate [13], [1], [2], [19].", "To mimic the experimental realization more closely and motivate later discussions, consider the simple case of an isolated Brownian particle in a harmonic trap that is centered at the origin (arbitrary).", "In this physical picture, both the position and the velocity of the particle is fluctuating.", "A statistical mechanical description can be adopted in which one defines the probability density, $P({r}, t)$ , of finding the particle at position ${r}$ relative to the fixed trap at time $t$ .", "Conservation of probability dictates that $P({r}, t)$ is governed by the Smoluchowski equation, which reads $\\partial P/\\partial t + \\nabla \\cdot {j}=0$ , where the flux vector ${j}= P{F}^\\text{trap}/\\zeta - D_T \\nabla P$ .", "Here, ${F}^\\text{trap}$ is the trap force and for a harmonic trap is given by ${F}^\\text{trap} = -k {r}$ with $k$ being the spring constant; $\\zeta $ is the drag coefficient and $D_T$ is the thermal diffusivity given by the Stokes-Einstein-Sutherland relation, $\\zeta D_T = k_BT$ , where $k_BT$ is the thermal energy.", "The mean external force exerted on the Brownian particle is $\\langle {F}^\\text{trap} \\rangle = \\int {F}^\\text{trap} P d{r}= -k \\int {r}P d{r}= -k \\langle {r}\\rangle $ .", "Because the trap is harmonic, the mean force is proportional to the mean displacement with $-k$ being the constant of proportionality.", "For a fixed trap, the mean position (therefore the mean force) is zero, $\\langle {r}\\rangle = {0}$ .", "The variance of the force, $\\operatorname{Var}\\left({F}^\\text{trap} \\right) = k^2 \\operatorname{Var}({r})$ .", "A straightforward calculation leads to the result $\\operatorname{Var}({r}) = \\frac{k_BT}{k}{I},$ where ${I}$ is the identity tensor.", "Introducing the shorthand $\\Delta {F}^\\text{trap} = {F}^\\text{trap} - \\langle {F}^\\text{trap}\\rangle $ , we can write the fluctuation relation as $\\left< (\\Delta {F}^\\text{trap})^2 \\right>^{1/2}\\left< (\\Delta {r}_1)^2 \\right>^{1/2} = d k_BT,$ where $d$ is the spatial dimensionality.", "Equation (REF ) is a fundamental result and a few comments on its implications are in order.", "First, by harmonically trapping a particle immersed in a solvent, the product of the standard deviations of the trap force and the particle position gives precisely the thermal fluctuations of the solvent—$dk_BT$ .", "Second, one can decrease the uncertainty in the position by increasing the stiffness of the trap [see equation (REF )].", "However, the trade-off is that the fluctuation in the force must increase due to (REF ).", "Said differently, this constitutes a thermodynamic uncertainty relation in which one cannot decrease the fluctuations in both the force and the position simultaneously.", "If the fluctuation in the position vanishes (infinitely stiff trap), the fluctuation in the force blows up.", "We note that (REF ) is observed elsewhere.", "For example, consider an ideal Gaussian polymer chain with one end localized in a harmonic trap.", "The fluctuations of the trap force and the position from the trap center satisfy an identical relation [20].", "We are now in a position to consider the fluctuations in the microrheology problem.", "Instead of considering either CF or CV, we must allow both the 
position of and the force on the probe to fluctuate in order to have a meaningful quantification of fluctuations.", "Equation (REF ) also implies that we should consider the position not the velocity of the probe.", "In the CF mode, therefore, the quantity of interest for fluctuations is the variance of the position of the probe, which is just the force-induced tracer diffusion problem.", "That is, the tracer diffusivity under the influence of a constant force should be considered—not the variance of the velocity.", "For the CV mode, the position of the probe is also prescribed and the fluctuation in the force is infinite.", "As a result, in the CV mode the computed variance of the force does not have a physical meaning.", "In this paper, to closely mimic the setup of microrheological experiments, we consider a trapped-particle microrheology model in which the colloidal probe particle is driven by a translating harmonic trap.", "Because biological materials examined by microrheology such as the microenvironment inside living cells often contain active “particles”, we model the suspension as an active colloidal suspension.", "Compared to passive suspensions, the study of the microrheology of active suspensions is more recent [21], [22], [23], [24], [25], [26], [17], [18], [27], [28], [29].", "The colloidal particles in an active suspension are able to self-propel, which can be a model for either biologically active microswimmers or synthetic phoretic particles.", "This active colloidal suspension model also includes passive (not self-propelled) colloidal systems, which can be obtained by setting the self-propulsive swim speed to zero.", "The paper is organized as follows.", "In section , we present the general $N$ -particle dynamics from a continuum perspective using the Smoluchowski equation governing the evolution of the positions and orientations of $N$ active Brownian particles.", "In section we first derive the mean and variance (fluctuation) of the probe position relative to the trap center from the $N$ -particle formulation.", "Neglecting hydrodynamic interactions in the dilute limit, we then derive the pair-level Smoluchowski equation governing the dynamics of the probe and one bath particle.", "We discuss the asymptotic behavior of the system in the limits of both weak and strong traps.", "We then show in section that our generalized theoretical framework includes the well-studied CF and CV microrheology models when appropriate limits are taken.", "Finally, we conclude in section ." 
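, "As a concrete check of the single-particle fluctuation relation quoted above, the following minimal Brownian-dynamics sketch (added here purely for illustration; the parameter values are arbitrary nondimensional choices, not taken from any reference) integrates the overdamped Langevin equation for a passive particle in a fixed harmonic trap and verifies that $\operatorname{Var}({r}) \approx (k_BT/k)\,{I}$ , so that the product of the standard deviations of the trap force and the position recovers $d\,k_BT$."

```python
import numpy as np

# Overdamped Brownian dynamics of a passive particle in a fixed harmonic trap.
# All parameter values are arbitrary nondimensional choices.
kBT, zeta, k, d = 1.0, 1.0, 2.0, 2
DT = kBT / zeta                        # Stokes-Einstein-Sutherland diffusivity
dt, n_steps, n_burn = 1e-3, 1_000_000, 100_000

rng = np.random.default_rng(0)
r = np.zeros(d)                        # position relative to the trap center
s1, s2, n = np.zeros(d), np.zeros(d), 0

for step in range(n_steps):
    # Euler-Maruyama step: dr = (F_trap/zeta) dt + sqrt(2 D_T dt) dW
    r += (-k * r / zeta) * dt + np.sqrt(2 * DT * dt) * rng.standard_normal(d)
    if step >= n_burn:                 # discard the initial transient
        s1 += r; s2 += r * r; n += 1

var_r = (s2 / n - (s1 / n) ** 2).sum()     # tr Var(r); theory: d*kBT/k
var_F = k**2 * var_r                        # tr Var(F_trap) for a harmonic trap
print("tr Var(r) =", var_r, "   theory:", d * kBT / k)
print("fluctuation product =", np.sqrt(var_F * var_r), "   theory:", d * kBT)
```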
], [ "Mechanics of active Brownian suspensions", "Consider a colloidal suspension consisting of $N$ particles dispersed in an incompressible Newtonian fluid (solvent) of dynamic viscosity $\\eta $ .", "The particles could be active and are subject to fluctuating thermal (Brownian) forces from the solvent.", "Furthermore, the inertia of the fluid and the particles are assumed to be negligible.", "In this low-Reynolds-number regime, the fluid dynamics is governed by the linear Stokes equations and the probability distribution of the particles are described by the Smoluchowski equation.", "In general, all $N$ particles could be active, and we model them as active Brownian particles.", "The probability distribution for finding the $N$ particles in positions $\\lbrace {x}_\\alpha \\rbrace $ and orientations $\\lbrace {q}_\\alpha \\rbrace $ at a given time $t$ is denoted as $P_N({x}^N, {q}^N, t)$ where $\\alpha = 1,\\cdot \\cdot \\cdot ,N$ is the particle label.", "In the laboratory frame of reference, the $N$ -particle Smoluchowski equation is given by $\\frac{\\partial P_N}{\\partial t} + \\sum _{\\alpha =1}^N\\nabla _\\alpha ^T \\cdot {j}_\\alpha ^T + \\sum _{\\alpha =1}^N \\nabla _\\alpha ^R \\cdot {j}_\\alpha ^R = 0,$ where $\\nabla _\\alpha ^T = \\partial /\\partial {x}_\\alpha $ is the spatial gradient operator with respect to the position vector (${x}_\\alpha $ ) of particle $\\alpha $ in the laboratory frame and $\\nabla _\\alpha ^R = {q}_\\alpha \\times \\left( \\partial /\\partial {q}_\\alpha \\right)$ is the orientational gradient operator of particle $\\alpha $ .", "The translational and rotational fluxes in equation (REF ) are, respectively, given by ${j}_\\alpha ^T = {U}_\\alpha P_N$ and ${j}_\\alpha ^R = {}_\\alpha P_N$ , where ${U}_\\alpha $ (${}_\\alpha $ ) is the instantaneous linear (angular) velocity of particle labeled $\\alpha $ relative to the laboratory frame.", "The conservation of probability is $\\int _{\\Gamma _N} P_N d\\Gamma ^N = 1,$ where $d\\Gamma ^N = \\prod _{\\alpha =1}^N d \\Gamma _\\alpha $ denotes the volume element of the $N$ -particle phase space and $d \\Gamma _\\alpha = d {x}_\\alpha d {q}_\\alpha $ is the volume element of the phase space of particle $\\alpha $ .", "In the absence of a background flow, the linear and angular velocities of any active particle $\\alpha $ are given by U- U0 - 0 = =1N M Fe + FP - kBTT PN Le + LP - kBTR PN + 0 - DR R PN , where ${\\mathcal {M}}_{\\alpha \\beta }$ is the configuration-dependent grand hydrodynamic mobility tensor coupling the linear and angular velocity of particle $\\alpha $ to the force and torque exerted on particle $\\beta $ .", "Note that for general particle shapes ${\\mathcal {M}}_{\\alpha \\beta }$ is a function of the instantaneous $N$ -particle configuration—both positions and orientations.", "The forces on any particle $\\beta $ include the external force ${F}_\\beta ^e$ , the interparticle colloidal force ${F}_\\beta ^P$ and the thermal or entropic force $-k_BT\\nabla _\\beta ^T\\ln P_N$ .", "Similarly, the torques on any particle $\\beta $ include the external torque ${L}_\\beta ^e$ , the interparticle colloidal torque ${L}_\\beta ^P$ and the thermal torque $-k_BT\\nabla _\\beta ^R\\ln P_N$ .", "The interparticle colloidal forces and torques are assumed to be conservative.", "For the case of hard-sphere interactions, the interparticle forces reduce to no-flux boundary conditions at any surface of contact between particles.", "In equation (), the activity of any particle $\\alpha $ is modeled by 
its undisturbed swim linear velocity ${U}_\\alpha ^0$ and angular velocity ${}_\\alpha ^0$ regardless of the presence of any other particles.", "For the case of simple ABPs, the swim angular velocity is often taken to be zero, ${}_\\alpha ^0 = {0}$ .", "Furthermore, a biological microswimmer may “decide” to change its orientation ${q}_\\alpha $ by, for example, actuating the flagella on a different side of its body without disturbing the flow.", "In this process, the body of the microswimmer does not turn.", "For non-spherical particles, this process means that the swim orientation ${q}_\\alpha $ is usually different from the orientation of the particle shape, in which case the shape orientation needs to be included as an additional phase space variable.", "For spherical particles, only the swim orientation matters and no such difficulty is introduced.", "This reorientation process of any particle $\\alpha $ is independent of the motion of other particles and is modeled by a simple rotary diffusion with a constant rotary diffusivity $D_\\alpha ^R$ .", "The reorientation time is $\\tau _\\alpha ^R = 1/D_\\alpha ^R$ , which defines the active run or persistence length of an ABP: $\\ell _\\alpha = U_\\alpha ^0 \\tau _\\alpha ^R$ .", "Because this reorientation process is biological rather than thermal in origin, $D_\\alpha ^R$ is not constrained by the fluctuation-dissipation theorem and may be inferred from experimental data." ], [ "Moving-trap microrheology", "In the context of microrheology, the particle with label 1 is identified as the probe particle.", "This particle could be a new particle placed into the suspension or one of the suspension particles tagged as the probe.", "Particles labeled $2-N$ are referred to as bath particles.", "In the following, we consider a suspension of neutrally buoyant, hard and active colloidal spheres with identical radii.", "The probe may have a different radius than the bath particles.", "Instead of fixing the external force ${F}_1^e$ or the velocity ${U}_1$ , the probe particle is trapped in a translating harmonic potential well.", "Denoting the position vector of the center of the potential well as ${x}_0(t)$ , we have $d{x}_0/dt = {U}^\\mathrm {trap}(t)$ , where ${U}^\\mathrm {trap}(t)$ is the prescribed velocity of the moving trap relative to the laboratory frame.", "The trap force ${F}_1^e$ is assumed to be only a function of the relative position between the probe and the potential well.", "All bath particles experience no external forces or torques.", "We first consider a general derivation in which all particles are ABPs and the probe is a tagged ABP in the suspension.", "In the constant-force or constant-velocity mode of microrheology, the position of the probe does not matter, and the system is statistically homogeneous.", "In contrast, the introduction of a moving trap defines a specific origin in the system and the position of the probe relative to the trap needs to be considered explicitly.", "To this end, we first change to a coordinate system moving with the instantaneous trap velocity and measure all particle positions relative to the trap.", "This change of variables is written as ${z}_\\alpha = {z}_\\alpha (\\lbrace {x}\\rbrace , t) = {x}_\\alpha - \\int _0^t {U}^\\mathrm {trap}(s)ds - {x}_0(0)$ for any $\\alpha $ and $t^\\prime = t^\\prime (\\lbrace {x}\\rbrace , t) = t$ .", "Using the chain rule we obtain $\\partial /\\partial t = - \\sum _{\\alpha =1}^N {U}^\\mathrm {trap}\\cdot \\partial /\\partial {z}_\\alpha +\\partial /\\partial 
t^\\prime $ and $\\partial /\\partial {x}_\\alpha = \\partial /\\partial {z}_\\alpha $ .", "The Smoluchowski equation (REF ) in the new coordinate system becomes $\\frac{\\partial P_N}{\\partial t^\\prime } + \\sum _{\\alpha =1}^N \\frac{\\partial }{\\partial {z}_\\alpha }\\cdot \\left({j}_\\alpha ^T - {U}^\\mathrm {trap}P_N\\right) + \\sum _{\\alpha =1}^N \\nabla _\\alpha ^R\\cdot {j}_\\alpha ^R=0,$ where ${j}_\\alpha ^T$ and ${j}_\\alpha ^R$ remain unchanged.", "In the context of microrheology, it is more convenient to measure the positions of all bath particles relative to that of the probe.", "We therefore introduce another change of variables such that for the probe ${r}_1 = {r}_1({z}^N, t^\\prime ) = {z}_1 $ , and ${r}_\\alpha ={r}_\\alpha ({z}^N, t^\\prime ) = {z}_\\alpha - {z}_1 $ for all bath particles ($\\alpha = 2,\\cdot \\cdot \\cdot ,N$ ).", "In this coordinate system, the probe position is measured relative to the trap and the positions of all bath particles are measured relative to the probe.", "The change of variables allows us to write $\\partial /\\partial {z}_1 = \\partial /\\partial {r}_1 - \\sum _{\\alpha =2}^N \\partial /\\partial {r}_\\alpha $ and $\\partial /\\partial {z}_\\alpha = \\partial /\\partial {r}_\\alpha $ for $\\alpha = 2,\\cdot \\cdot \\cdot ,N$ .", "The Smoluchowski equation (REF ) transforms to PNt + 1T(j1T - UtrapPN) + =2N T(jT - j1T) + =1N RjR=0.", "It is understood that in equation () we have used $t$ for the time variable and $\\nabla _\\alpha ^T = \\partial /\\partial {r}_\\alpha $ for any $\\alpha $ .", "Formally, the probability density in equation () is the conditional probability of find all particles at a given configuration provided that the trap is at ${x}_0$ at time $t$ , i.e., $P_N = P_N\\left({r}^N, {q}^N, t\\vert {x}_0, t\\right)$ .", "The translational flux of particle $\\alpha $ can be written as ${j}_\\alpha ^T &=& U_\\alpha ^0 {q}_\\alpha P_N + {M}_{\\alpha 1}^{UF}\\cdot {F}_1^eP_N \\nonumber \\\\& &- \\sum _{\\beta =1}^N \\left({D}_{\\alpha \\beta }^{UF} - {D}_{\\alpha 1}^{UF}\\right)\\cdot \\nabla _\\beta ^T P_N\\nonumber \\\\& & - {D}_{\\alpha 1}^{UF}\\cdot \\nabla _1^T P_N - \\sum _{\\beta =1}^N {D}_{\\alpha \\beta }^{UL} \\cdot \\nabla _\\beta ^R P_N,$ where we have taken ${U}_\\alpha ^0 = U_\\alpha ^0 {q}_\\alpha $ and used the Stokes-Einstein-Sutherland relations ${D}_{\\alpha \\beta }^{UF} = k_BT{M}_{\\alpha \\beta }^{UF}, {D}_{\\alpha \\beta }^{UL} = k_BT{M}_{\\alpha \\beta }^{UL}$ .", "For all accessible configurations, the inter-particle forces are zero and the hard-particle interaction between two spheres do not induce torques.", "Similarly, the rotary flux of particle $\\alpha $ is given by ${j}_\\alpha ^R &= & {M}_{\\alpha 1}^{\\Omega F}\\cdot {F}_1^eP_N - \\sum _{\\beta =1}^N \\left({D}_{\\alpha \\beta }^{\\Omega F} - {D}_{\\alpha 1}^{\\Omega F}\\right)\\cdot \\nabla _\\beta ^T P_N \\nonumber \\\\&&- {D}_{\\alpha 1}^{\\Omega F}\\cdot \\nabla _1^T P_N - \\sum _{\\beta =1}^N {D}_{\\alpha \\beta }^{\\Omega L} \\cdot \\nabla _\\beta ^R P_N -D_\\alpha ^R \\nabla _\\alpha ^R P_N.$ There are no external force or torque on the bath particles, $\\alpha =2\\mbox{--}N$ , nor a torque on the probe, ${L}_1^e = {0}$ .", "The Smoluchowski equation () together with the flux expressions (REF ) and (REF ) fully specify the $N$ -particle phase space dynamics.", "Some comments regarding equations ()-(REF ) are in order.", "First, the above derivation is an extension of the model considered by [10] for passive Brownian suspensions.", "We 
have generalized their model to a suspension of ABPs in which one of the particles is tagged as the probe that is driven by a translating trap.", "Realizing that the grand mobility tensor does not depend on the swim orientation vectors of spherical particles, one can set ${U}_\\alpha ^0 =0$ and integrate over the orientations of all particles to obtain the trapped probe microrheology problem of a passive Brownian suspension.", "Note that even for passive suspensions, if the probe or the bath particles are non-spherical, their shape orientations need to be included in the above formulation.", "Second, the hydrodynamic interactions between all $N$ -particles are included in the grand mobility tensor.", "In particular, this leads to the fact that a gradient in orientation space of particle $\\beta $ induces a translational flux of particle $\\alpha $ , and vice versa, due to the hydrodynamic translation-rotation coupling.", "Third, due to the dependence on particle orientations, the phase space of $N$ ABPs has a dimension of $5N$ : the physical space has a dimension of $3N$ and the orientation space has a dimension of $2N$ if the orientation of each particle is parametrized by the azimuthal and polar angles of a spherical coordinate system." ], [ "Mean and fluctuation of the probe position", "The average position or mean displacement of the probe relative to the trap is defined by $\\langle {r}_1\\rangle (t) = \\int {r}_1 P_N d\\Gamma ^N,$ where the angle bracket denotes integration against $P_N$ over the configuration space of all particles.", "Multiplying equation () by ${r}_1$ and integrating over the configuration space $\\Gamma ^N$ , we obtain $\\frac{\\partial \\langle {r}_1\\rangle }{\\partial t} +{U}^\\mathrm {trap}&= & U_1^0 \\langle {q}_1\\rangle + \\bigl \\langle {M}_{11}^{UF}\\cdot {F}_1^e\\bigr \\rangle - \\bigl \\langle {D}_{11}^{UF}\\cdot \\nabla _1^T\\ln P_N\\bigr \\rangle \\nonumber \\\\& & - \\sum _{\\beta =1}^N \\bigl \\langle \\left({D}_{1\\beta }^{UF} - {D}_{1 1}^{UF}\\right)\\cdot \\nabla _\\beta ^T \\ln P_N \\bigr \\rangle .$ Similarly, the mean squared displacement, a second order tensor, is governed by $\\frac{\\partial \\langle {r}_1{r}_1\\rangle }{\\partial t} + 2\\left[{U}^\\mathrm {trap}\\langle {r}_1\\rangle \\right]^\\mathrm {sym} = 2 \\int \\left[ {j}_1^T {r}_1\\right]^\\mathrm {sym}d\\Gamma ^N ,$ where the integral $\\int {j}_1^T{r}_1 d \\Gamma ^N&= & U_1^0 \\langle {q}_1{r}_1\\rangle + \\Bigl \\langle {M}_{11}^{UF}\\cdot {F}_1^e {r}_1\\Bigr \\rangle \\nonumber \\\\&& - \\sum _{\\beta =1}^N \\Bigl \\langle \\left({D}_{1\\beta }^{UF} - {D}_{1 1}^{UF}\\right)\\cdot \\left(\\nabla _\\beta ^T \\ln P_N\\right) {r}_1 \\Bigr \\rangle \\nonumber \\\\&&- \\Bigl \\langle {D}_{11}^{UF}\\cdot \\left(\\nabla _1^T\\ln P_N\\right) {r}_1\\Bigr \\rangle ,$ and the superscript “sym” denotes the symmetric part of a tensor (see equation (REF )).", "The main quantities of interest in the present problem are the mean displacement $\\langle {r}_1\\rangle $ and the fluctuation Var(r1)=Cov(r1, r1)=r1r1 = r1r1 - r1r1, where we have introduced the shorthand $\\Delta {r}_1 = {r}_1 - \\langle {r}_1\\rangle $ and $\\operatorname{Var}({r}_1)$ denotes the variance tensor of ${r}_1$ .", "For a harmonic trap, the mean force is related to the mean displacement via $\\langle {F}_1^e\\rangle = -k \\langle {r}_1\\rangle ,$ and similarly the fluctuation in the force is given by $\\operatorname{Var}({F}_1^e)=\\bigl \\langle \\Delta {F}_1^e \\Delta {F}_1^e\\bigr \\rangle = k^2 \\langle \\Delta 
{r}_1\\Delta {r}_1\\rangle .$" ], [ "The pair problem", "To proceed analytically, we restrict the analysis to the dilute limit in which only pair interactions between a bath particle and the probe is considered.", "Furthermore, we neglect hydrodynamic interactions between the bath particle and the probe, and only consider hard-sphere interactions.", "The reduction from the $N$ -particle formulation to the pair problem and the consideration of hydrodynamic interactions are discussed in appendix .", "Because the bath particles are indistinguishable, it is convenient to define the two-particle probability density function $\\rho _2({r}_2, {q}_2, {r}_1, {q}_1, t)$ , which denotes the joint probability density function of finding the probe at $({r}_1, {q}_1)$ and any bath particle at $({r}_2, {q}_2)$ at time $t$ .", "In terms of $P_2({r}_2, {q}_2, {r}_1, {q}_1, t)$ , which is the joint probability density function of finding the probe at $({r}_1, {q}_1)$ and the bath particle labeled 2 (i.e., the first bath particle) at $({r}_2, {q}_2)$ at time $t$ , we have $\\rho _2 = (N-1)P_2$ .", "Here, the factor of $N-1$ comes from removing the labels from the $N-1$ bath particles.", "The joint probability can be written as 2 = 1/1(r2, q2, t r1, q1, t) P1(r1, q1, t) = nb g1/1(r2, q2, t r1, q1, t) P1(r1, q1, t), where $n_b=(N-1)/V$ is the number density of bath particles.", "For a passive and CF (or CV) probe, $g_{1/1}$ becomes independent of the configuration (${r}_1$ and ${q}_1$ ) of the probe due to statistical homogeneity; in this case the probe distribution $P_1$ can be integrated over and one only needs to consider $g_{1/1}$ [10], [30], [17].", "The joint probability $\\rho _2$ (see appendix ) is governed by 2t +1T( j1T -Utrap2) + 2T( j2T -j1T) + =12 RjR=0, where j1T = U10 q1 2 + 11F1e 2 + D1T 2T 2 - D1T 1T 2, j2T = U20 q2 2 - D2T 2T 2, jR = -DR R 2.", "Figure: Schematic of the pair problem of a spherical probe particle in a moving harmonic trap interacting with a spherical bath particle.", "Both the probe and the bath particles can be active.At contact, $r_2 = R_c$ , no relative flux is allowed: ${n}_2\\cdot \\left({j}_2^T-{j}_1^T\\right)=0.$ Far away from the probe, the bath distribution is undisturbed by the probe and the probe distribution is that in the absence of the bath particles, $\\rho _2({r}_2, {q}_2, {r}_1, {q}_1, t) \\rightarrow \\frac{n_b}{\\Omega _b} P_1({r}_1,{q}_1, t)\\quad \\mathrm {as}\\quad \\vert {r}_2\\vert \\rightarrow \\infty ,$ where $\\Omega _b$ is the total solid angle of the orientation space of the bath particle.", "In 3D, $\\Omega _b=4\\pi $ .", "Far away from the trap, the probability vanishes $\\rho _2 \\rightarrow 0 \\quad \\mathrm {as}\\quad \\vert {r}_1\\vert \\rightarrow \\infty .$ Equation (REF ) governing the mean displacement becomes r1 t + 1kr1 =-Utrap+ U10q1 + D1T2T 2 d2, where $d\\Gamma ^2=d\\Gamma _1d\\Gamma _2$ , and we have defined the viscoelastic timescale $\\tau _k = \\frac{\\zeta _1}{k},$ which is set by the balance between the viscous force $\\zeta _1 \\partial \\langle {r}_1\\rangle /\\partial t$ and the elastic force $k\\langle {r}_1\\rangle $ .", "Using the divergence theorem and the far-field condition (REF ), the last term on the rhs of (REF ) can be written as $D_1^T\\int \\nabla _2^T \\rho _2 d\\Gamma ^2 = D_1^T\\int d{q}_2 d\\Gamma _1\\oint _{S_c} {n}_2 \\rho _2 dS_2,$ where $S_c=\\lbrace {r}_2: \\vert {r}_2\\vert =R_c \\rbrace $ is the contact surface and ${n}_2$ is the unit normal vector of $S_c$ that points out of particle 2.", "As shown in 
appendix , the position fluctuation of the probe is governed by $\\frac{1}{2}\\frac{\\partial \\operatorname{Var}({r}_1)}{\\partial t} + \\frac{1}{\\tau _k} \\operatorname{Var}({r}_1)= D_1^T{I}+\\left[ U_1^0\\operatorname{Cov}({q}_1, {r}_1)+D_1^T\\int \\Delta {r}_1\\nabla _2^T\\rho _2 d\\Gamma ^2 \\right]^\\mathrm {sym},$ where the covariance of ${q}_1$ and ${r}_1$ satisfies Cov(q1, r1)t +1 Cov(q1, r1) = U10Var(q1) + D1Tq1 2T2 d2.", "In equation (REF ), we have defined the relaxation time $\\tau $ using 1 = 1k+d-11R.", "Regardless of the presence of the trap or the bath particles, at long times ($t\\rightarrow \\infty $ ) the net polar and nematic orders of the probe are given by $\\langle {q}_1\\rangle ={0}$ and $\\bigl \\langle {q}_1{q}_1\\bigr \\rangle = {I}/d$ , respectively (see appendix ).", "As a result, $\\operatorname{Var}({q}_1) = {I}/d$ at long times.", "It is convenient to consider the rank $m$ polyadic spatial moment tensor ${M}_m({r}_2, {q}_2, {q}_1, t) = \\int \\underbrace{{r}_1\\cdot \\cdot \\cdot {r}_1}_{m} \\rho _2 d{r}_1,\\quad (m=0,1,2,...).$ Multiplying equation (REF ) by the $m$ -adic product of ${r}_1$ and integrating over the physical space of the probe, we obtain $&&\\frac{\\partial {M}_m}{\\partial t} - m \\left[U_1^0{q}_1 {M}_{m-1} - \\frac{k}{\\zeta _1}{M}_m +(m-1) D_1^T {M}_{m-2}{I}+D_1^T \\nabla _2^T{M}_{m-1} - {U}^\\mathrm {trap}{M}_{m-1} \\right]^\\mathrm {sym}\\nonumber \\\\&& +\\nabla _2^T\\cdot \\left({U}_r^0 {M}_m - D_r^T \\nabla _2^T{M}_m + \\frac{k}{\\zeta _1}{M}_{m+1}\\right) - m D_1^T \\left[\\nabla _2^T {M}_{m-1} \\right]^\\mathrm {sym} - \\sum _{\\alpha =1}^2 D_\\alpha ^R\\nabla _\\alpha ^R\\cdot \\nabla _\\alpha ^R {M}_m={0},$ where we have defined the relative swim velocity and the relative diffusivity as, respectively, ${U}_r^0 = U_2^0{q}_2-U_1^0{q}_1,\\quad D_r^T = D_1^T+D_2^T,$ whereas $\\left[{A}\\right]^\\mathrm {sym}$ denotes the symmetric part of any rank $m$ Cartesian tensor ${A}$ such that $\\left[{A}\\right]^\\mathrm {sym}_{i_1i_2\\cdot \\cdot \\cdot i_m} =\\frac{1}{m!}", "\\sum _{\\sigma \\in \\mathfrak {S}_m} A_{i_{\\sigma 1} i_{\\sigma 2}\\cdot \\cdot \\cdot i_{\\sigma m}},$ in which $\\mathfrak {S}_m$ is the set containing the $m!$ permutations of indices.", "For $m=2$ , this reduces to the familiar definition of the symmetric part of a rank 2 tensor, ${A}^\\mathrm {sym} = \\left({A}+{A}^\\intercal \\right)/2$ .", "For any rank $m$ tensor ${A}$ , its symmetric part $\\left[{A}\\right]^\\mathrm {sym}$ is invariant under a permutation of all indices.", "In equation (REF ), ${M}_m$ for $m <0$ is understood to be zero.", "At contact, $r_2 = R_c$ , the no-flux boundary condition is satisfied: n2(Ur0 Mm - DrT 2TMm + k1Mm+1) - m D1T[n2 Mm-1 ]sym=0.", "The far-field condition for the spatial moment of rank $m$ is ${M}_m \\rightarrow \\frac{n_b}{\\Omega _b} {\\Phi }_m({q}_1, t) \\quad \\mathrm {as}\\quad r_2 \\rightarrow \\infty ,$ where ${\\Phi }_m({q}_1, t) = \\int \\underbrace{{r}_1\\cdot \\cdot \\cdot {r}_1}_{m} P_1({r}_1, {q}_1, t) d {r}_1$ is the rank $m$ spatial moment of the single particle probability $P_1$ of the probe.", "Discussion of the single particle behavior and the method to obtain ${\\Phi }_m$ is deferred to section REF .", "From (REF ) and (), to obtain the mean and mean-squared displacements, one only needs to calculate the zeroth and first spatial moments, respectively.", "On the other hand, the definitions of mean and mean-squared displacements allow us to write $\\langle {r}_1\\rangle = \\int {M}_1/(N-1) 
d{q}_1d\\Gamma _2$ and $\\langle {r}_1{r}_1\\rangle = \\int {M}_2/(N-1) d{q}_1d\\Gamma _2$ .", "Because in obtaining $\\langle {r}_1{r}_1\\rangle $ only the integral of ${M}_2$ is required, it's not necessary to first calculate the distribution of ${M}_2$ explicitly before carrying out the integration.", "Instead, one can show that integrating equation (REF ) for $m=2$ leads to the same equation as ().", "Due to the presence of the harmonic trap force, the equation for ${M}_m$ is coupled to ${M}_{m+1}$ .", "To truncate this infinite set of equations and obtain a finite set of closed equations, a closure model may be used.", "To see the structure of the spatial moments more clearly, we write out the first few moment equations explicitly using (REF ).", "The zeroth moment, $M_0 = \\int \\rho _2 d {r}_1$ , satisfies the equation M0t + 2T[Ur0 M0 - DrT2T M0 + k1M1 ] - =12 DR RR M0=0, and the normalization $\\int M_0d{q}_1d\\Gamma _2 = N-1$ .", "In addition to being advected by the relative velocity ${U}_r^0$ in the physical space of the bath particle, $M_0$ is forced by the trap via the divergence of the first moment.", "The equation governing the evolution of the first spatial moment is $&&\\frac{\\partial {M}_1}{\\partial t} - \\left(U_1^0 {q}_1 M_0 -\\frac{k}{\\zeta _1} {M}_1 + D_1^T \\nabla _2^T M_0 \\right) +{U}^\\mathrm {trap}M_0\\nonumber \\\\&&+\\nabla _2^T\\cdot \\left[{U}_r^0 {M}_1 - D^T_r\\nabla _2^T{M}_1 + \\frac{k}{\\zeta _1}{M}_2 -D_1^T{I}M_0\\right] \\nonumber \\\\&&-\\sum _{\\alpha =1}^2 D_\\alpha ^R \\nabla _\\alpha ^R\\cdot \\nabla _\\alpha ^R {M}_1=\\mathbf {0}.$ Similarly, the second moment is governed by M2t -2[ U10q1 M1 - k1M2 +D1T M0I+D1T 2TM1 - UtrapM1 ]sym +2T[Ur0 M2 -DrT 2T M2 + k1 M3 ] - 2 D1T [2T M1]sym -=12 DR RR M2=0." ], [ "The probe distribution in the absence of bath particles", "The simplest problem in the above formulation is that of a single particle (the probe) interacting with the trap.", "One can formulate this single-particle problem by neglecting all bath particles or taking the limit $\\phi _b = 4\\pi b^3n_b/3 \\rightarrow 0$ in the above $N$ -particle formulation.", "The single-particle probability $P_1({r}_1, {q}_1, t \\vert {x}_0, t)$ of the active probe satisfies P1t+1T(11F1e P1 - D1TP1 - UtrapP1 +U10 q1 P1 ) - D1R 1R1R P1 =0, where the conservation of probability dictates that $\\int P_1 d\\Gamma _1 =1$ and the harmonic trap force ${F}_1^e = -k {r}_1$ .", "We emphasize that in equation (REF ) the probe is also considered as an ABP.", "The rank $m$ ($m=0,1,...$ ) spatial moment of $P_1$ defined by (REF ) satisfies mt - m [U10q1m-1 - k1m +(m-1)D1T m-2I- Utrapm-1 ]sym - D1R1R1R m = 0, where ${\\Phi }_m$ for $m <0$ is defined to be zero.", "Different from equation (REF ) in which the moment ${M}_m$ is coupled to ${M}_{m+1}$ , the rank $m$ spatial moment of $P_1$ only depends on lower order moments, which leads to a set of closed equations.", "The solution to the preceding equation provides the far-field condition for ${M}_m$ as given by equation (REF ).", "The zeroth-order spatial moment $\\Phi _0$ is the net orientational distribution, which is unaffected by the trap and is governed by the orientational diffusion equation: $\\frac{\\partial \\Phi _0}{\\partial t} - D_1^R \\nabla _1^R\\cdot \\nabla _1^R \\Phi _0=0,$ where the conservation of $P_1$ gives $\\int \\Phi _0 d {q}_1 = 1$ .", "At long times, the solution is simply the uniform distribution, $\\Phi _0({q}_1, t \\rightarrow \\infty ) = 1/(4\\pi )$ in 3D.", "The above formulation also allows us 
to consider the mean and fluctuation of the probe displacement in the absence of bath particles.", "Equation (REF ) or (REF ) in the absence of bath particles reduces to $\frac{\partial \langle {r}_1\rangle }{\partial t}+\frac{1}{\tau _k} \langle {r}_1\rangle = -{U}^\mathrm {trap}+ U_1^0 \langle {q}_1\rangle ,$ where for the single particle $\langle {r}_1\rangle = \int {r}_1P_1d\Gamma _1 = \int {\Phi }_1 d{q}_1$ .", "Similarly, equation (REF ) or () for the single particle becomes $\frac{1}{2}\frac{\partial \langle {r}_1{r}_1\rangle }{\partial t} +\frac{1}{\tau _k}\bigl \langle {r}_1{r}_1\bigr \rangle =D_1^T {I}+\left[U_1^0 \bigl \langle {q}_1{r}_1\bigr \rangle - {U}^\mathrm {trap}\langle {r}_1\rangle \right]^\mathrm {sym}.$ It can be seen from equations (REF ) and (REF ) that in order to calculate the mean and mean-squared displacements, one needs to obtain the net polar order $\langle {q}_1\rangle $ and the covariance of the position and orientation $\operatorname{Cov}({q}_1,{r}_1)$ .", "The governing equation for $\operatorname{Cov}({q}_1,{r}_1)$ follows from (REF ) and is given by $\frac{\partial \operatorname{Cov}({q}_1, {r}_1)}{\partial t} + \frac{1}{\tau }\operatorname{Cov}({q}_1, {r}_1) =U_1^0\operatorname{Var}({q}_1)$ which depends on the net nematic order $\langle {q}_1{q}_1\rangle $ .", "At steady state, it is shown that $\langle {q}_1\rangle = {0}$ and $\langle {q}_1{q}_1\rangle = {I}/d$ , where $d=2,3$ is the dimensionality of the physical space.", "This allows us to obtain $\langle {r}_1\rangle = -\frac{\zeta _1}{k}{U}^\mathrm {trap},\quad \operatorname{Cov}({q}_1,{r}_1) = \frac{U_1^0}{d\left[k/\zeta _1 + (d-1)D_1^R\right]}\, {I},\quad \bigl \langle {r}_1{r}_1\bigr \rangle = \frac{\zeta _1^2}{k^2}\,{U}^\mathrm {trap}{U}^\mathrm {trap}+ \frac{\zeta _1D_1^T}{k}\,{I} + \frac{\zeta _1 D_1^\mathrm {swim}}{k}\, \frac{1}{1 + k \tau _1^R/[\zeta _1(d-1)]}\, {I},$ where $D_1^\mathrm {swim} = \left(U_1^0\right)^2\tau _1^R/[d(d-1)]$ is the swim diffusivity of a freely swimming ABP.", "The average position of the ABP relative to the trap is given by the balance between the average trap force $k\langle {r}_1\rangle $ and the viscous drag $\zeta _1{U}^\mathrm {trap}$ .", "If the trap is strong, $k\rightarrow \infty $ , the ABP is tightly confined and pushing against the trap `boundary', which has been observed in experiments [31].", "On the other hand, for $k \rightarrow 0$ , the average position of the ABP becomes unbounded.", "Solving the steady state first and then taking the limit $k\rightarrow 0$ in (REF ) is singular because in the absence of the trap ($k=0$ ) the average position is unbounded and at long times the particle motion is diffusive.", "For $k\equiv 0$ , we are simply measuring the motion of an ABP in a frame of reference moving with velocity ${U}^\mathrm {trap}$ relative to the laboratory frame, which gives $d \langle {r}_1\rangle /dt = -{U}^\mathrm {trap}$ .", "[31] studied the transient and long-time dynamics of self-propelled Janus particles in a fixed acoustic trap.", "They showed that the experimentally measured density distribution of Janus particles follows closely the theoretical predictions using a harmonic trap.", "Equation (REF ) in the absence of ${U}^\mathrm {trap}$ agrees with that obtained in [31].", "The fluctuation relation is given by $\bigl \langle \left(\Delta {F}_1^e \right)^2\bigr \rangle ^{1/2}\bigl \langle \left(\Delta {r}_1\right)^2\bigr \rangle ^{1/2} = d \left[k_BT+ \frac{k_sT_s}{1+ \tau _1^R/[(d-1)\tau _k]}\right],$ where the thermal energy $k_BT= \zeta _1D_1^T$ and analogously an active energy scale $k_sT_s$ has been defined such that $k_sT_s= \zeta _1D_1^\mathrm {swim}$ [32].", "In equation (REF ), the fluctuation 
consists of the thermal (passive) energy $dk_BT$ and an active energy contribution.", "This active energy is different from $k_sT_s$ due to the presence of the harmonic trap, which introduces an orientational decorrelation timescale $\\tau _k$ in addition to the reorientation time $\\tau _1^R$ of the ABP.", "For a weak trap, $\\tau _1^R/\\tau _k \\ll 1$ , the decorrelation occurs on the timescale of $\\tau _1^R$ , and the active contribution scales as $\\zeta _1\\left(U_1^0\\right)^2\\tau _1^R$ .", "As a result, the fluctuation $\\bigl \\langle \\left(\\Delta {F}_1^e \\right)^2\\bigr \\rangle ^{1/2}\\bigl \\langle \\left(\\Delta {r}_1\\right)^2\\bigr \\rangle ^{1/2} \\rightarrow d(k_BT+k_sT_s)$ as $\\tau _1^R/\\tau _k \\rightarrow 0$ .", "This is often referred to as Rule #1 of active matter—when all length scales are large compared to the run length $\\ell _1$ , one can replace $k_BT$ with $k_BT+ k_sT_s$ .", "As another example, consider the sedimentation of active colloids under gravity.", "At steady state, the number density follows Boltzmann distribution but with $k_BT+ k_sT_s$ in place of $k_BT$ [33].", "When $\\tau _1^R/\\tau _k \\gg 1$ , the relevant timescale is $\\tau _k$ , and the active contribution scales as $\\zeta _1 \\left(U_1^0\\right)^2\\tau _k$ .", "In this limit, the ABP is pushing against the edge of the potential well and the fluctuation comes from passive Brownian motion alone, $\\bigl \\langle \\left(\\Delta {F}_1^e \\right)^2\\bigr \\rangle ^{1/2}\\bigl \\langle \\left(\\Delta {r}_1\\right)^2\\bigr \\rangle ^{1/2} \\rightarrow d k_BT$ as $\\tau _1^R/\\tau _k \\rightarrow \\infty $ .", "Regardless of the trap strength, the product of the square root of the fluctuations in the force and the position is always bounded.", "For a strong trap, the position fluctuation vanishes, $\\bigl \\langle \\Delta {r}_1\\Delta {r}_1\\bigr \\rangle = O(1/k) \\rightarrow 0$ , but the force fluctuation blows up linearly since $\\bigl \\langle \\Delta {F}_1^e\\Delta {F}_1^e\\bigr \\rangle = O(k) \\rightarrow \\infty $ as $k \\rightarrow \\infty $ .", "Conversely, the position fluctuation grows unboundedly while the force fluctuation vanishes as $k \\rightarrow 0$ .", "In the weak trap limit, equation (REF ) can be equivalently written as $\\frac{k}{\\zeta _1}\\bigl \\langle \\Delta {r}_1\\Delta {r}_1\\bigr \\rangle = \\frac{\\bigl \\langle \\Delta {r}_1\\Delta {r}_1\\bigr \\rangle }{\\tau _k} \\rightarrow {D}_1^\\mathrm {eff}\\quad \\mathrm {as}\\quad \\frac{\\tau _1^R}{\\tau _k}\\rightarrow 0,$ where ${D}_1^\\mathrm {eff}= D_1^T{I}+D_1^\\mathrm {swim}{I}$ is the long-time effective diffusivity of the ABP in the absence of the trap (see appendix for the asymptotic analysis).", "This relation implies the equivalence of the position fluctuation divided by $\\tau _k$ in the limit of vanishing harmonic trapping force and the effective diffusion of a free ABP.", "In other words, one could calculate the position fluctuation in a trap and then take the limit of $\\bigl \\langle \\Delta {r}_1\\Delta {r}_1\\bigr \\rangle /\\tau _k$ as $k\\rightarrow 0$ to obtain the long-time diffusivity that the particle would have in the absence of the trap, or vice versa.", "Because the trap is weak, the ABP is able to explore space via both thermal fluctuation and its undisturbed active run-and-reorientation, both processes contribute to the position fluctuation.", "In the presence of bath particles, this equivalence still holds in which ${D}_1^\\mathrm {eff}$ is the diffusivity of the probe affected by 
collisions with bath particles (i.e., tracer diffusion)." ], [ "A weak trap", "For a weak trap, $\\epsilon = \\tau _1^R/\\tau _k = k \\tau _1^R/\\zeta _1 \\ll 1$ , the probe is allowed to explore and reorient freely before reaching the “boundary” of the potential well.", "The viscoelastic timescale $\\tau _k$ is well separated from the reorientation timescale $\\tau _1^R$ .", "In the intermediate timescale characterized by $t/\\tau _1^R \\gg 1$ and $t/\\tau _k \\ll 1$ , the probe has explored the suspension but has not reached the boundary of the potential; we expect a diffusive behavior of the probe.", "At times much longer than the viscoelastic timescale ($t/\\tau _k \\gg 1$ ), the variance of the probe position becomes bounded due to the trapping force.", "Therefore, the motion of the probe exhibits a transition from diffusive to bounded behavior.", "The separation of the two timescales allows us to consider a multiple-scale analysis.", "By defining the fast variable $t_1 = t$ and the slow variable $t_2 = \\epsilon t$ , we have $\\partial /\\partial t = \\partial /\\partial t_1 + \\epsilon \\partial /\\partial t_2$ .", "Regular perturbation expansions of the pair probability distribution and its spatial moments in terms of $\\epsilon $ are written as $\\rho _2 &=&\\rho _2^{(0)}+\\epsilon \\rho _2^{(1)}+\\cdot \\cdot \\cdot ,\\\\{M}_m &=& {M}_m^{(0)} + \\epsilon {M}_m^{(0)}+\\cdot \\cdot \\cdot ,$ where ${M}_m^{(k)}$ is the rank $m$ spatial moment of $\\rho _2^{(k)}$ .", "At $O(1)$ , the zeroth moment satisfies M0(0)t1 + 2T(Ur0 M0(0) - DrT2T M0(0) ) - =12 DR RR M0(0)=0, n2(Ur0 M0(0) - DrT2T M0(0) ) = 0,   r2 Sc.", "Similarly, the first moment at this order is given by M1(0)t1 - (U10 q1 M0(0) + D1T 2T M0(0) ) +UtrapM0(0) +2T(Ur0 M1(0) - DTr2TM1(0) -D1TIM0(0)) -=12 DR RR M1(0)=0, n2(Ur0 M1(0) - DTr2TM1(0) -D1TIM0(0))=0,   r2 Sc.", "Expanding the covariances similarly, e.g., $\\operatorname{Var}({r}_1) = \\operatorname{Var}^{(0)}({r}_1) + \\epsilon \\operatorname{Var}^{(1)}({r}_1)+\\cdot \\cdot \\cdot ,$ we obtain at $O(1)$ 12Var(0)(r1)t1= D1TI+[ U10Cov(0)(q1, r1)+ D1T2T2(0)r1 d2 ]sym, Cov(0)(q1, r1)t1 +d-11R Cov(0)(q1, r1) =U10Var(q1) + D1T q12T2(0) d2.", "Note that $\\operatorname{Var}({q}_1)$ is not affected by the presence of the trap (see appendix ) and therefore only has the $O(1)$ term in the small-$\\epsilon $ expansion.", "Equations (REF )–(REF ) govern the dynamics of a probe in a bath of active particles in the absence of the trapping force (The presence of ${U}^\\mathrm {trap}$ in (REF ) is only due to the fact that we are in a frame of reference moving with ${U}^\\mathrm {trap}$ relative to the laboratory frame).", "This problem is the so-called tracer—an active one—diffusion in an active Brownian suspension.", "Even in the absence of the trap, the correlation between ${q}_1$ and ${r}_1$ has a steady-state (time-independent) solution due to the presence of the decorrelation time $\\tau _1^R$ in equation (REF ).", "Dropping the time derivative in (REF ) at steady state, we obtain Cov(0)(q1, r1) = 1d(d-1)I + 1Rd-1D1T q12T2(0) d2, where it is understood that the steady-state distribution of $\\rho _2^{(0)}$ is used, and $\\ell _1=U_1^0\\tau _1^R$ is the run length of the active probe.", "Therefore, equation (REF ) is written as D1eff = ( D1T +D1swim)I + D1T[( 1d-1 q1 +r1 )2T 2(0)d2 ]sym, where ${D}_1^\\mathrm {eff}=\\partial \\operatorname{Cov}^{(0)}({r}_1,{r}_1)/(2\\partial t_1)$ is the long-time diffusivity of the probe in the absence of the trapping force.", "As expected, one 
could obtain the same result by setting ${F}_1^e = {0}$ from the outset (see section REF ).", "This is done in [26] but with the free tracer particle being passive.", "So long as the trapping force is not identically zero, the probe will eventually reach the boundary of the trap.", "This confinement happens at very large distances from the trap (or at long times if the probe is started near the trap center)." ], [ "A strong trap", "For a strong trap, the viscoelastic time scale $\\tau _k = \\zeta _1/k$ is much smaller than other timescales (e.g., the reorientation time ) of the problem.", "Due to the strong trapping force, both the mean and the variance of the probe have a steady-state solution that is time independent.", "The position fluctuation, governed by equation (REF ), becomes at steady state k1Var( r1) = D1TI+ [ U10Cov(q1, r1) +D1Tr1 2T 2d2 ]sym.", "Similarly, $\\operatorname{Cov}({q}_1, {r}_1)$ defined in (REF ) is given by $\\frac{k}{\\zeta _1} \\operatorname{Cov}({q}_1, {r}_1) = \\frac{1}{d}U_1^0 {I}+ D_1^T\\int \\Delta {q}_1 \\nabla _2^T \\rho _2d\\Gamma ^2.$ Because the last term in the preceding equation is finite as $k\\rightarrow \\infty $ , $\\operatorname{Cov}({q}_1, {r}_1)$ is small and on the order of $1/k$ .", "On the other hand, for a strong trap, the relative deviation of the probe position from the average position is small, $\\Delta |{r}_1|/\\langle |{r}_1|\\rangle \\ll 1$ , which leads to M1(r2, q2, t) = r1 2 dr1 = r1 M0 + r1 2 dr1 = r1 M0 + O(|r1|/|r1| ), where the decomposition ${r}_1 = \\langle {r}_1\\rangle +\\Delta {r}_1$ is used.", "Using the first line of (REF ), we have $\\int \\nabla _2^T \\rho _2 \\Delta {r}_1 d\\Gamma ^2 =\\int \\nabla _2^T\\big [{M}_1- \\langle {r}_1\\rangle M_0\\big ]d {q}_1 d\\Gamma _2,$ which is negligible due to the second line of (REF ).", "Taken together, we conclude that the last two terms on the rhs of (REF ) are subdominant.", "To leading-order, the fluctuation in the strong-trap limit is given by $k \\operatorname{Var}({r}_1) = \\zeta _1 D_1^T{I}= k_BT{I},$ regardless of the presence of the bath particles.", "Therefore, in this limit we have (F1e )21/2(r1)21/2 = d kBT." ], [ "Constant-force and constant-velocity microrheology", "In this section, we show that the trapped-particle microrheology problem can be reduced to either the CV or CF problem when appropriate limits are taken." 
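, "Before specializing to these limits, the trap-strength dependence derived above can be illustrated numerically for a single ABP with no bath particles; the sketch below (our own illustration with arbitrary nondimensional parameters and a fixed trap, ${U}^\mathrm {trap}={0}$ ) shows the crossover of the force-position fluctuation product $k\,\mathrm {tr}\operatorname{Var}({r}_1)$ from near the weak-trap value $d(k_BT+k_sT_s)$ towards the strong-trap value $d\,k_BT$ as the trap stiffens."

```python
import numpy as np

def trapped_abp_var(k, U0=1.0, tauR=1.0, kBT=1.0, zeta=1.0,
                    dt=1e-3, n_steps=1_000_000, seed=0):
    """tr Var(r1) for a single 2D ABP in a fixed harmonic trap (Euler-Maruyama).

    Weak traps have long correlation times and need longer runs for converged
    statistics; the defaults here are meant only for illustration.
    """
    rng = np.random.default_rng(seed)
    d = 2
    DT, DR = kBT / zeta, 1.0 / tauR
    r, theta = np.zeros(d), 0.0
    s1, s2, n = np.zeros(d), np.zeros(d), 0
    for step in range(n_steps):
        q = np.array([np.cos(theta), np.sin(theta)])
        r += (-k * r / zeta + U0 * q) * dt \
             + np.sqrt(2 * DT * dt) * rng.standard_normal(d)
        theta += np.sqrt(2 * DR * dt) * rng.standard_normal()
        if step >= n_steps // 5:                 # discard the transient
            s1 += r; s2 += r * r; n += 1
    return (s2 / n - (s1 / n) ** 2).sum()

kBT, zeta, U0, tauR, d = 1.0, 1.0, 1.0, 1.0, 2
ksTs = zeta * U0**2 * tauR / (d * (d - 1))       # active energy scale
for k in (0.5, 2.0, 20.0):
    tau_k = zeta / k
    theory = d * (kBT + ksTs / (1 + tauR / ((d - 1) * tau_k)))
    print(f"k = {k:4.1f}:  k*trVar(r1) = {k * trapped_abp_var(k):.2f}"
          f"   theory = {theory:.2f}")
```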
], [ "Constant-force microrheology", "To recover the constant-force microrheology problem, instead of a harmonic trapping force, we apply a constant force to the probe particle, ${F}_1^e = const$ , and set the trap velocity ${U}^\\mathrm {trap}={0}$ .", "In this mode of operation, the main quantity of interest is the average velocity $\\langle {U}_1\\rangle $ of the probe in response to the constant external driving force.", "By definition, $\\langle {U}_1\\rangle = \\partial \\langle {r}_1\\rangle /\\partial t$ , which can be obtained by considering the rhs of equation (REF ).", "Because the trap is absent, the position ${r}_1$ defines an arbitrary origin in the laboratory frame of reference and the system is statistically homogeneous [10].", "As a result, the conditional probability $P_{N-1/1}$ defined by $P_N = P_{N-1/1}({r}^{N-1}, {q}^{N-1}, t \\vert {r}_1, {q}_1, t)P_{1}( {r}_1, {q}_1, t)$ is not a function of ${r}_1$ .", "(Note that in general $P_{N-1/1}$ can be a function of ${q}_1$ .)", "The third term on the rhs of equation (REF ) becomes - D11UF1TPN = - d N-1 PN-1/1D11UF d 1 1T P1 = 0, where we have used the divergence theorem and the fact that $P_1$ vanishes at infinity.", "Further manipulations allow us to write equation (REF ) as U1 = U10 q1 + M11UFF1e - =1Nd1 P1dN-1(D1UF - D1 1UF)T PN-1/1 .", "If all $N$ particles (including both the probe and the bath particles) are passive, equation (REF ) upon integration over $\\Gamma _1$ reduces to the average velocity relation originally obtained by [10] (equation A4) for passive colloids.", "Neglecting hydrodynamic interactions in the dilute limit, the average velocity becomes $\\langle {U}_1\\rangle = \\frac{1}{\\zeta _1}{F}_1^e + U_1^0 \\langle {q}_1\\rangle + D_1^T \\int \\nabla _2^T \\rho _2d\\Gamma ^2.$ Recalling that $M_0 = \\int \\rho _2 d{r}_1$ , the last term in (REF ) can be calculated so long as $M_0$ can be obtained.", "We note that, at long times, $\\langle {q}_1\\rangle ={0}$ .", "If the probe is under the influence of external orienting fields, the net polar order $\\langle {q}_1\\rangle $ becomes non-zero [34].", "In the CF mode of microrheology, the equation governing the spatial moment ${M}_m$ is similar to (REF ) and can be shown to be $&&\\frac{\\partial {M}_m}{\\partial t} - m \\left[U_1^0{q}_1 {M}_{m-1} + \\frac{1}{\\zeta _1}{F}_1^e{M}_{m-1} +(m-1) D_1^T {M}_{m-2}{I}+D_1^T \\nabla _2^T{M}_{m-1} \\right]^\\mathrm {sym}\\nonumber \\\\&& +\\nabla _2^T\\cdot \\left({U}_r^0 {M}_m - D_r^T \\nabla _2^T{M}_m - \\frac{1}{\\zeta _1}{F}_1^e{M}_{m}\\right) - m D_1^T \\left[\\nabla _2^T {M}_{m-1} \\right]^\\mathrm {sym} - \\sum _{\\alpha =1}^2 D_\\alpha ^R\\nabla _\\alpha ^R\\cdot \\nabla _\\alpha ^R {M}_m={0}.$ Here, because the external force is constant, the moment equation at rank $m$ only depends on moments of lower ranks and the system up to any rank is a closed set of equations.", "At contact, $r_2 = R_c$ , we have the no-flux boundary condition: n2(Ur0 Mm - DrT 2TMm - 11F1eMm) - m D1T[n2 Mm-1 ]sym=0.", "The far-field condition as $r_2\\rightarrow \\infty $ is unchanged and given by equation (REF ), where ${\\Phi }_m$ for constant force satisfies mt - m [U10q1m-1 + F1e1m-1 +(m-1)D1T m-2I]sym - D1R1R1R m = 0.", "To find the average velocity given in equation (REF ), one needs to consider equations (REF ) and (REF ) for $m=0$ .", "We note that in the above general formulation, both the probe particle and the bath particle are ABPs.", "By setting $U_1^0, U_2^0=0$ and integrating out the orientational degrees of freedom of 
the probe and the bath particle, we obtain the CF microrheology problem of a passive Brownian probe in a passive Brownian suspension, which has been considered by [10].", "On the other hand, the CF microrheology of a passive Brownian probe in an active Brownian suspension ($U_1^0=0, U_2^0\\ne 0$ ) is studied by [17].", "Taking $m=0$ in equation (REF ) in the absence of the external force (${F}_1^e={0}$ ), we obtain M0t +2T(Ur0 M0 - DrT 2T M0 ) - =12 DRRR M0=0.", "Treating the probe as one of the suspension particles, this zeroth spatial moment is the pair-correlation function of an active Brownian suspension (subject to proper normalization) in the dilute limit by neglecting all higher order correlations.", "Equation (REF ) governing the pair-correlation at steady state in 2D has been studied [35], [36]." ], [ "Force-induced tracer diffusion", "In the constant-force mode of microrheology, it is also of importance to consider the force-induced diffusion of the probe particle.", "In this context, the probe is often referred to as the tracer, i.e., force-induced tracer diffusion.", "If no external force is applied, ${F}_1^e = {0}$ , the problem is simply called tracer diffusion.", "The long-time diffusivity of the tracer in the presence of bath particles can be written as D1eff = t12 ddt Var(r1) =t12 [tr1r1 - U1r1 - r1U1 ], where the covariance tensor of ${r}_1$ is governed by 12d d tVar(r1) = D1TI+U10[Cov(q1, r1) ]sym +D1T[r12T 2d2 ]sym, and the covariance of ${q}_1$ and ${r}_1$ satisfies dd t Cov(q1, r1) +d-11R Cov(q1, r1) =U10Var(q1) + D1Tq12T 2d2.", "At long times, we then obtain the diffusivity as D1eff = ( D1T +D1swim)I + D1T[( 1d-1 q1 +r1 )2T 2 d2 ]sym.", "In (REF ), the first bracketed term on the rhs is the diffusivity of a single ABP in free space and the remaining terms are the additional contributions due to the excluded-volume interaction with the bath particles.", "As alluded to earlier, equation (REF ) is identical to equation (REF ), which is obtained in the weak-trap limit.", "We note that in (REF ) there is a constant external force while the diffusivity obtained in (REF ) is for a free tracer, i.e., force-induced versus free tracer diffusion.", "It is clear that if the force is absent the diffusivities obtained from (REF ) and (REF ) are identical.", "Using the divergence theorem, we can relate the integrals on the rhs of (REF ) to the zeroth and first spatial moments, q12T 2d2 =q1 dq1dq2Sc n2M0 dS2, r12T 2d2 =q1 dq1dq2Sc( M1- r1M0)n2 dS2, where $\\langle {r}_1\\rangle = \\int {M}_1/(N-1) d{q}_1d\\Gamma _2$ .", "Therefore, one only needs to solve for $M_0$ and ${M}_1$ in equation (REF ) in order to calculate the diffusivity.", "The above formulation for the forced-induced diffusion of an active tracer in an active suspension is a direct extension of the generalized Taylor dispersion theory (GTDT).", "In particular, we have used the statistical moment method of [37].", "An equivalent approach is to derive the mean velocity and the diffusivity by first transforming the unbounded coordinate ${r}_1$ into the Fourier space and consider a small wave-number expansion [14], [26], [17].", "By setting $U_1^0, U_2^0 =0$ and integrating over the orientational degrees of freedom of both the probe and the bath particles, we recover the equations governing the force-induced diffusion of a passive probe in a passive suspension [14].", "To recover the problem of a passive free tracer in an active suspension studied by [26], one can set ${F}_1^e = {0}$ , $U_1^0=0$ and integrate over the 
orientational degrees of freedom of the probe." ], [ "Constant-velocity microrheology", "To obtain the equations for the CV microrheology problem, we first consider the probe to have deterministic dynamics with $U_1^0 =0$ , $D_1^T=0$ and $D_1^R=0$ .", "Equation (REF ) at steady-state then leads to $k\\langle {r}_1\\rangle /\\zeta _1 = -{U}^\\mathrm {trap}$ .", "Furthermore, we consider the limit of a strong trap in which case the probe tightly follows the trap velocity.", "In this limit, the probe velocity is the trap velocity to leading-order and we then achieve a CV probe.", "To see this, we first decompose the position of the probe via ${r}_1 = \\langle {r}_1\\rangle +\\Delta {r}_1$ .", "In the strong trap limit, the deviation of the probe from the mean position is small, $\\Delta |{r}_1|/\\langle |{r}_1|\\rangle \\ll 1$ .", "To leading-order, (REF ) allows us to obtain the first spatial moment as $k{M}_1/\\zeta _1 = -{U}^\\mathrm {trap}M_0$ (this relation can also be viewed as a closure for the spatial moments).", "Substitution of this relation into (REF ) leads to M0t + 2T(U20q2 M0 - D1T2T M0 -UtrapM0 ) - D2R 2R2R M0=0.", "Similarly, the no-flux condition at contact (${r}_2 \\in S_c$ ) is ${n}_2\\cdot \\left(U_2^0{q}_2 M_0 - D_1^T\\nabla _2^T M_0 -{U}^\\mathrm {trap}M_0 \\right) = 0.$ Equation (REF ) describes the distribution of the bath particle measured in a frame of reference that is co-moving with ${U}^\\mathrm {trap}$ .", "Realizing that the probe velocity is the same as the `trap', ${U}^\\mathrm {probe} = {U}^\\mathrm {trap}$ , this is the CV microrheology of an active Brownian suspension.", "We note that in (REF ) [cf.", "(REF )] the relative velocity is $U_2^0{q}_2 - {U}^\\mathrm {trap}$ and the relative diffusivity is $D_1^T$ because the probe has prescribed kinematics.", "The CV microrheology of an active Brownian suspension governed by (REF ) and (REF ) has been studied by [18] and [28].", "To recover the CV microrheology of a passive Brownian suspension considered by [10], one only needs to set $U_2^0=0$ and integrate over the orientational degrees of freedom of the bath ABP." 
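, "To give a feel for how the CV mode can be probed numerically, the sketch below (an illustration we add, not part of the original analysis) drags a constant-velocity probe through a two-dimensional suspension of bath particles using Brownian dynamics; the hard-sphere contact is approximated by a stiff, short-ranged repulsion and all parameter values are arbitrary, so the resulting microviscosity is indicative only. Setting the bath swim speed to zero recovers a passive suspension, while a positive value gives an active bath."

```python
import numpy as np

# 2D Brownian-dynamics sketch of constant-velocity (CV) microrheology.
# Hard-sphere contacts are approximated by a stiff, short-ranged harmonic
# repulsion (an assumption of this illustration, not the hard-sphere model
# of the text); all parameter values are arbitrary.
rng = np.random.default_rng(1)
L, Nb, a = 20.0, 60, 0.5              # box size, number of bath particles, radius
Rc = 2 * a                            # contact distance (equal-sized particles)
kBT, zeta = 1.0, 1.0
DT, DRb, U0b = kBT / zeta, 1.0, 0.0   # bath diffusivities; U0b = 0 -> passive bath
U_probe = np.array([2.0, 0.0])        # prescribed probe velocity (CV mode)
k_rep = 200.0                         # stiffness of the soft "contact" repulsion
dt, n_steps, n_burn = 2e-4, 200_000, 50_000

xp = np.array([L / 2, L / 2])         # probe position
xb = rng.uniform(0, L, (Nb, 2))       # bath positions
th = rng.uniform(0, 2 * np.pi, Nb)    # bath orientations
F_sum, n_samp = np.zeros(2), 0

for step in range(n_steps):
    dx = xb - xp
    dx -= L * np.round(dx / L)                        # minimum-image convention
    r = np.maximum(np.linalg.norm(dx, axis=1), 1e-12)
    overlap = np.clip(Rc - r, 0.0, None)
    Fb = k_rep * overlap[:, None] * dx / r[:, None]   # repulsion on bath particles
    Fp = -Fb.sum(axis=0)                              # reaction force on the probe

    xp = (xp + U_probe * dt) % L                      # CV probe: kinematics prescribed
    qb = np.stack([np.cos(th), np.sin(th)], axis=1)
    xb = (xb + (U0b * qb + Fb / zeta) * dt
          + np.sqrt(2 * DT * dt) * rng.standard_normal((Nb, 2))) % L
    th += np.sqrt(2 * DRb * dt) * rng.standard_normal(Nb)

    if step >= n_burn:
        F_sum += Fp; n_samp += 1

F_bath = F_sum / n_samp               # mean interparticle force on the probe
F_ext = zeta * U_probe - F_bath       # force balance: <F_ext> = zeta*U_probe - <F_P>
print("mean external force <F_ext>:", F_ext)
print("microviscosity ratio eta_eff/eta ~", F_ext[0] / (zeta * U_probe[0]))
```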
], [ "Conclusions", "In this paper we have considered the trapped-particle microrheology of an active colloidal suspension consisting of active Brownian spheres.", "In the classical models of colloidal microrheology, the applied external force or the probe velocity are fixed and not subject to random fluctuations.", "This constraint of either CF or CV allows a model simpler than that discussed in the present paper.", "For the purpose of quantifying the micro-viscous response of suspensions, the CF or CV models are often sufficient.", "The challenge arises if one wishes to consider the fluctuations of the probe as a result of its interactions with the bath particles and the solvent.", "More specifically, we have demonstrated that in order to provide a meaningful quantification of the fluctuations in the probe position, one must allow both the position of and the external force on the probe to fluctuate.", "To achieve this, we developed a generalized microrheology model in which the probe is driven by a translating harmonic trap.", "We explicitly formulated the equations governing the dynamics of the probe-bath pair in the dilute limit and showed that both the mean position and the fluctuation of the probe position can be given in terms of the joint probability distribution.", "In the weak-trap limit, we showed that at an intermediate time the probe exhibits a diffusive behavior in which the diffusivity is the effective diffusivity of a free tracer immersed in the suspension.", "At this timescale, the probe has explored the suspension but hasn't reached the boundary of the trap.", "In other words, it is equivalent to the free-tracer diffusion problem.", "For a strong trap, the fluctuations from the activity of the bath particles or from the collisions between the probe and the bath particles are suppressed due to the strong confinement of the trap.", "In this limit, the fluctuation of the probe originates from the thermal energy alone regardless of the presence (or activity) of the bath particles.", "To conclude, we note that the derived Smoluchowski equation—even at the pair level—has a high dimensionality, which presents a challenge for the computation of the probability density.", "To circumvent this, one can start from a micromechanical perspective using the Langevin equations and consider a dynamic simulation of the suspension and the probe [38], [39], [40].", "In a dynamic simulation, the discrete trajectory of the probe is recorded and the calculation of its mean and fluctuation is straightforward.", "This work is supported by the National Science Foundation under Grant No.", "CBET 1803662." ], [ "Data Availability Statement", "The data that support the findings of this study are available within the article." 
], [ "Derivation of the pair problem", "We integrate equation () over the relative positions and the orientations of the bath particles labeled from 3 to $N$ to obtain P2t + 1T(j1T - UtrapPN )dN-2 + 2T(j2T-j1T)dN-2 + =12 RjR dN-2=0, where $d\\Gamma ^{N-2}$ is a shorthand for $\\prod _{\\beta =3}^N d\\Gamma _\\beta $ and $P_2 = \\int P_N d\\Gamma ^{N-2}$ .", "In deriving the preceding equation, the divergence theorem and the no-flux condition are used to eliminate the terms $\\int \\nabla _\\beta ^T\\cdot \\left({j}_\\beta ^T-{j}_1^T\\right) d\\Gamma ^{N-2}$ for $\\beta = 3,...,N$ .", "In addition, the relation $\\int \\nabla _\\alpha ^R\\cdot {j}_\\alpha ^R d {q}_\\alpha = 0$ is used.", "To proceed further, we define the conditional probability of finding the remaining $N-2$ particles, $P_{(N-2)/2}$ , given the configuration of the probe and the first bath particle: $P_N\\left({r}^N, {q}^N, t\\right) = P_{(N-2)/2}\\left({r}^{N-2}, {q}^{N-2},t\\big \\vert {r}_2, {q}_2, {r}_1, {q}_1,t \\right)P_2({r}_2, {q}_2, {r}_1,{q}_1, t).$ Notice that the conditional probability is conserved, $\\int P_{N-2/2}d\\Gamma ^{N-2} =1$ .", "In equation (), for $\\alpha =1$ or 2, we have jT d N-2= U0 qP2 + M1UF(N-2)/2F1e P2 - D2UF-D1UF (N-2)/22T P2 - =2N(DUF-D1UF)TP(N-2)/2 (N-2)/2P2 - D1UF1T P(N-2)/2(N-2)/2P2 - D1UF(N-2)/21TP2 - =12DULR P(N-2)/2 (N-2)/2P2 -=12DUL(N-2)/2R P2, and jR d N-2= M1F(N-2)/2F1e P2 - D2F-D1F (N-2)/22T P2 - =2N(DF-D1F)TP(N-2)/2 (N-2)/2P2 - D1F1T P(N-2)/2(N-2)/2P2 - D1F(N-2)/21TP2 -=12DLR P(N-2)/2 (N-2)/2P2 -=12DL(N-2)/2R P2 - DR R P2.", "In equations () and (), we have defined $\\langle (\\cdot )\\rangle _{(N-2)/2} = \\int (\\cdot ) P_{(N-2)/2}d\\Gamma ^{N-2}$ , and used the fact that the mobility tensors are independent of ${q}^N$ for spheres, i.e., ${M}_{\\alpha \\beta } = {M}_{\\alpha \\beta }({r}_2,...,{r}_N)$ .", "In the dilute limit, neglecting the terms involving the gradients of $\\ln P_{(N-2)/2}$ and using the pair mobility tensor in the absence of other particles in place of $\\langle {M}\\rangle _{(N-2)/2}$ , we obtain P2t + 1T(j1T - UtrapP2) + 2T( j2T - j1T) +=12 R jR=0, where using the same symbols as before ${j}_\\alpha ^T &=& U_\\alpha ^0 {q}_\\alpha P_2 + {M}_{\\alpha 1}^{UF}\\cdot {F}_1^e P_2 - \\left({D}_{\\alpha 2}^{UF}-{D}_{\\alpha 1}^{UF} \\right)\\cdot \\nabla _2^T P_2 \\nonumber \\\\& & - {D}_{\\alpha 1}^{UF}\\cdot \\nabla _1^T P_2 - \\sum _{\\beta =1}^2 {D}_{\\alpha \\beta }^{UL}\\cdot \\nabla _\\beta ^RP_2\\\\{j}_\\alpha ^R &=& {M}_{\\alpha 1}^{\\Omega F} \\cdot {F}_1^e P_2 - \\left({D}_{\\alpha 2}^{\\Omega F}-{D}_{\\alpha 1}^{\\Omega F} \\right)\\cdot \\nabla _2^T P_2- {D}_{\\alpha 1}^{\\Omega F}\\cdot \\nabla _1^T P_2\\nonumber \\\\&& - \\sum _{\\beta =1}^2 {D}_{\\alpha \\beta }^{\\Omega L}\\cdot \\nabla _\\beta ^RP_2- D_\\alpha ^R \\nabla _\\alpha ^R P_2.$ In the absence of hydrodynamic interactions, we have ${M}_{\\alpha \\beta }^{UF} = {I}\\delta _{\\alpha \\beta }/\\zeta _\\alpha ^T$ , ${M}_{\\alpha \\beta }^{\\Omega L} = {I}\\delta _{\\alpha \\beta }/\\zeta _\\alpha ^R$ , and ${M}_{\\alpha \\beta }^{UL}, {M}_{\\alpha \\beta }^{\\Omega F} = {0}$ , where $\\delta _{\\alpha \\beta }$ is the Kronecker delta.", "The conditional probability of finding a bath particle, $\\rho _{1/1}({r}_2, {q}_2, t \\vert {r}_1, {q}_1, t)$ , can be related to $P_2$ via the relation $\\rho _{1/1} = (N-1)P_{1/1}$ , where $P_{1/1}$ is defined by $P_2 = P_{1/1}P_1$ .", "The factor of $N-1$ comes from the process of removing the “labels” of the $N-1$ bath particles.", "From this, the 
joint probability density of finding a bath particle at ${r}_2$ , ${q}_2$ and the probe at ${r}_1$ , ${q}_1$ is defined as $\\rho _2 = \\rho _{1/1}P_1$ .", "Furthermore, we can define a dimensionless conditional distribution function $g_{1/1}$ such that $\\rho _2 = \\rho _{1/1}P_1 = n_b g_{1/1}P_1,$ where $n_b = (N-1)/V$ is the number density of bath particles.", "In the absence of hydrodynamic interactions, equations ()-() reduce to equations (REF )-(REF ) given in the text." ], [ "Derivation of the variance relations", "For the pair problem, equation (REF ) governing the mean-squared displacement reduces to $\\frac{1}{2}\\frac{\\partial \\langle {r}_1{r}_1\\rangle }{\\partial t} +\\frac{1}{\\tau _k}\\langle {r}_1{r}_1\\rangle = D_1^T {I}+\\left[U_1^0 \\langle {q}_1{r}_1\\rangle - {U}_{\\mathrm {trap}}\\langle {r}_1\\rangle \\right]^{\\mathrm {sym}} +D_1^T\\left[\\int \\nabla _2^T\\rho _2\\, {r}_1\\, d\\Gamma _2 \\right]^{\\mathrm {sym}}.$", "Using equations (REF ) and (), one can show that the position fluctuation of the probe is governed by $\\frac{1}{2}\\frac{\\partial \\operatorname{Var}({r}_1)}{\\partial t} + \\frac{1}{\\tau _k} \\operatorname{Var}({r}_1)= D_1^T{I}+ U_1^0\\left[\\operatorname{Cov}({q}_1, {r}_1)\\right]^{\\mathrm {sym}} +D_1^T\\left[\\int \\nabla _2^T\\rho _2\\, \\Delta {r}_1\\, d\\Gamma _2 \\right]^{\\mathrm {sym}},$ where $\\operatorname{Cov}({q}_1, {r}_1) = \\langle {q}_1{r}_1\\rangle - \\langle {q}_1\\rangle \\langle {r}_1\\rangle $ and recall that $\\Delta {r}_1 = {r}_1 - \\langle {r}_1\\rangle $ .", "To calculate the covariance of ${q}_1$ and ${r}_1$ appearing in equation (), we need $\\langle {q}_1{r}_1\\rangle $ , $\\langle {q}_1\\rangle $ and $\\langle {r}_1\\rangle $ .", "The net polar order of the probe satisfies $\\frac{\\partial \\langle {q}_1\\rangle }{\\partial t} +\\frac{d-1}{\\tau _1^R} \\langle {q}_1\\rangle = {0},$ where $d (=2,3)$ is the dimensionality of the problem.", "It can be seen that the net polar order of the probe is not affected by the trap or the bath particles.", "The full solution to (REF ) is readily obtained as $\\langle {q}_1\\rangle (t) = \\exp \\left[-(d-1)t/\\tau _1^R\\right] \\langle {q}_1\\rangle (0),$ where any initial net polar order $ \\langle {q}_1\\rangle (0)$ decays away exponentially due to the rotary diffusion.", "The average of ${q}_1{r}_1$ is governed by $\\frac{\\partial \\langle {q}_1{r}_1\\rangle }{\\partial t} + \\frac{1}{\\tau }\\langle {q}_1{r}_1\\rangle = - \\langle {q}_1\\rangle {U}_{\\mathrm {trap}}+ U_1^0 \\langle {q}_1{q}_1\\rangle +D_1^T\\left\\langle {q}_1\\int \\nabla _2^T\\rho _2\\, d\\Gamma _2 \\right\\rangle ,$ where $\\langle {q}_1\\rangle $ is given by (REF ) and $\\bigl \\langle {q}_1{q}_1\\bigr \\rangle $ satisfies $\\frac{\\partial \\bigl \\langle {q}_1{q}_1\\bigr \\rangle }{\\partial t} + \\frac{2 d}{\\tau _1^R} \\left[\\bigl \\langle {q}_1{q}_1\\bigr \\rangle - \\frac{1}{d}{I}\\right]={0}.$ Similarly to $\\langle {q}_1\\rangle $ , the net nematic order of the probe, regardless of the presence of the trap or the bath particles, is given by $\\langle {Q} _1\\rangle (t) = \\exp \\left[-2dt/\\tau _1^R\\right]{Q} _1(0),$ where we have defined the net trace-free nematic tensor $\\langle {Q} _1\\rangle = \\bigl \\langle {q}_1{q}_1\\bigr \\rangle - {I}/d$ .", "At long times ($t\\rightarrow \\infty $ ), there is no net polar order of the probe, $\\langle {q}_1\\rangle = {0}$ and the net nematic order is isotropic, $\\bigl \\langle {q}_1{q}_1\\bigr \\rangle = {I}/d$ .", "Using equations (REF ), (REF ) and (), we obtain $\\frac{\\partial \\operatorname{Cov}({q}_1, {r}_1)}{\\partial t} + \\frac{1}{\\tau }\\operatorname{Cov}({q}_1, {r}_1) = U_1^0\\operatorname{Cov}({q}_1,{q}_1) + D_1^T\\left\\langle \\Delta {q}_1\\int \\nabla _2^T\\rho _2\\, d\\Gamma _2 \\right\\rangle ,$ where $\\Delta {q}_1 = {q}_1 - \\langle {q}_1\\rangle $ ." ], [ "Asymptotic analysis of the probe in the absence of bath particles", "In equation (REF ), the timescale of transient decay $\\tau $ can be written as $\\frac{1}{\\tau } = \\frac{d-1}{\\tau _1^R} + \\frac{1}{\\tau _k} = \\frac{d-1+\\epsilon }{\\tau _1^R},$ where $\\epsilon = \\tau _1^R/\\tau _k = k \\tau _1^R/\\zeta _1$ .", "Using this definition, the solution of (REF ) is given by $\\operatorname{Cov}({q}_1, {r}_1)(t) = e^{-t/\\tau } \\operatorname{Cov}({q}_1, {r}_1)(0) + U_1^0 \\int _0^t \\exp \\left(-\\frac{t-s}{\\tau }\\right)\\operatorname{Var}({q}_1)(s)\\,d s.$ 
From equation (REF ), the preceding equation becomes $\\operatorname{Cov}({q}_1, {r}_1)(t) = e^{-t/\\tau } \\operatorname{Cov}({q}_1, {r}_1)(0) + \\frac{U_1^0\\tau }{d}{I}\\left(1-e^{-t/\\tau } \\right) - \\frac{U_1^0\\tau _1^R}{d+1-\\epsilon }\\left(e^{-2dt/\\tau _1^R}- e^{-t/\\tau }\\right){Q} _1(0).$", "In the long-time limit ($t/\\tau _R \\gg 1$ and $t/\\tau \\gg 1$ ), we obtain equation (REF ) in the text.", "Using equation () in the absence of bath particles, we obtain $\\operatorname{Var}({r}_1)(t) = e^{-2t/\\tau _k}\\operatorname{Var}({r}_1)(0) + \\tau _k \\left(D_1^T+ \\frac{(U_1^0)^2\\tau }{d}\\right){I}\\left(1 - e^{-2t/\\tau _k}\\right) + 2 U_1^0 \\int _0^t \\exp \\left[-\\frac{2(t-s)}{\\tau _k}\\right]\\left[\\operatorname{Cov}^\\prime ({q}_1, {r}_1)(s)\\right]^{\\mathrm {sym}} d s,$ where $\\operatorname{Cov}^\\prime ({q}_1, {r}_1)(s) = \\operatorname{Cov}({q}_1, {r}_1)(s) - U_1^0\\tau {I}/d$ is the time-dependent (transient) part of the covariance of ${q}_1$ and ${r}_1$ .", "The integral in () can be carried out explicitly but is not important for the following discussion.", "In the presence of the harmonic trap, the system exhibits two important timescales: the reorientation time $\\tau _1^R$ and the viscoelastic timescale $\\tau _k$ ; their relative importance is characterized by the parameter $\\epsilon $ .", "In the weak-trap limit, $\\epsilon \\rightarrow 0$ , the two timescales are well-separated.", "It is useful to define the fast time variable $t_1 = t$ and the slow time variable $t_2 = \\epsilon t$ .", "We now consider the limit $\\epsilon \\rightarrow 0$ and the intermediate timescale in which the ABP has experienced many reorientations due to rotary diffusion but has not yet reached the “boundary” of the trap, i.e., $t_1/\\tau _1^R \\gg 1$ but $t_2/\\tau _1^R = \\epsilon t/\\tau _R \\ll 1$ .", "Differentiating equation () leads to $\\frac{d}{dt}\\operatorname{Var}({r}_1)(t)= -\\frac{2}{\\tau _k}e^{-2t/\\tau _k}\\operatorname{Var}({r}_1)(0) + 2 \\left(D_1^T+ \\frac{(U_1^0)^2\\tau }{d}\\right){I}e^{-2t/\\tau _k} + 2U_1^0\\left[\\operatorname{Cov}^\\prime ({q}_1, {r}_1)(t)\\right]^{\\mathrm {sym}} - \\frac{4U_1^0}{\\tau _k} \\int _0^t \\exp \\left[-\\frac{2(t-s)}{\\tau _k}\\right]\\left[\\operatorname{Cov}^\\prime ({q}_1, {r}_1)(s)\\right]^{\\mathrm {sym}} d s.$ Since $1/\\tau _k = \\epsilon /\\tau _1^R$ and $\\frac{t}{\\tau _k} = \\frac{t_2}{\\epsilon \\tau _k} = \\frac{t_2}{\\tau _1^R}\\ll 1, \\quad \\frac{t}{\\tau } = \\frac{t_1}{\\tau _1^R} (d-1+\\epsilon )\\gg 1,$ we have $e^{-2t /\\tau _k} = e^{-2 t_2/\\tau _1^R} = 1+ O(t_2/\\tau _1^R), \\qquad \\frac{(U_1^0)^2\\tau }{d} = \\frac{(U_1^0)^2\\tau _1^R}{d(d-1)}\\left[ 1 +O(\\epsilon )\\right].$ Therefore, equation () at leading order is $\\frac{1}{2}\\frac{d}{dt}\\operatorname{Var}({r}_1)(t) = \\left(D_1^T + \\frac{(U_1^0)^2\\tau _1^R}{d(d-1)}\\right){I} = \\left(D_1^T+D_1^\\mathrm {swim}\\right){I}.$ It is clear that in the weak-trap limit in this intermediate timescale, the ABP exhibits a diffusive behavior with the free-space diffusivity $D_1^T+D_1^\\mathrm {swim}$ .", "We now consider the weak-trap limit but at long times, $t/\\tau _1^R\\gg 1, t/\\tau _k \\gg 1$ .", "So long as the trap strength is not identically zero, the ABP will eventually ($t/\\tau _k \\gg 1$ ) experience the confinement of the trap.", "Using equation (), we obtain at long times $\\frac{1}{\\tau _k} \\operatorname{Cov}({r}_1, {r}_1)\\rightarrow \\left(D_1^T+D_1^\\mathrm {swim}\\right){I}.$", "In the strong-trap limit ($\\epsilon \\rightarrow \\infty $ ) and at long times, we have $\\tau /\\tau _1^R = O(1/\\epsilon ) $ and the position fluctuation of the probe $\\operatorname{Var}({r}_1)/\\tau _k = D_1^T$ ." ] ]
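As an illustrative aside that is not part of the original paper, the weak-trap and strong-trap limits derived above can be checked with a short Brownian dynamics simulation of a single two-dimensional active Brownian particle held in a stationary harmonic trap (no bath particles, no hydrodynamic interactions); all parameter values below are arbitrary choices for this sketch.

import numpy as np

# Minimal Euler-Maruyama sketch of one ABP in a harmonic trap (2D, trap at rest).
rng = np.random.default_rng(0)
d = 2                      # dimensionality
D_T = 1.0                  # translational diffusivity of the probe
tau_R = 1.0                # reorientation time
D_R = (d - 1) / tau_R      # rotary diffusivity
U0 = 2.0                   # swim speed
D_swim = U0**2 * tau_R / (d * (d - 1))   # swim diffusivity

def long_time_variance(tau_k, n_particles=2000):
    """Variance of one coordinate of the probe position at long times."""
    dt = 0.01 * min(tau_k, tau_R)
    n_steps = int(20.0 * max(tau_k, tau_R) / dt)
    x = np.zeros((n_particles, d))
    theta = rng.uniform(0.0, 2.0 * np.pi, n_particles)
    for _ in range(n_steps):
        q = np.stack([np.cos(theta), np.sin(theta)], axis=1)
        # overdamped translation: swimming + spring force of the trap + thermal noise
        x += dt * (U0 * q - x / tau_k) + np.sqrt(2.0 * D_T * dt) * rng.standard_normal(x.shape)
        theta += np.sqrt(2.0 * D_R * dt) * rng.standard_normal(n_particles)
    return x[:, 0].var()

for tau_k in (20.0 * tau_R, 0.02 * tau_R):   # weak trap, then strong trap
    var = long_time_variance(tau_k)
    print(f"epsilon = {tau_R / tau_k:.3g}: Var/tau_k = {var / tau_k:.3f} "
          f"(D_T + D_swim = {D_T + D_swim:.3f}, D_T = {D_T:.3f})")
# Expected: Var/tau_k approaches D_T + D_swim for the weak trap and D_T for the strong trap.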
2207.03550
[ [ "TF-GNN: Graph Neural Networks in TensorFlow" ], [ "Abstract TensorFlow GNN (TF-GNN) is a scalable library for Graph Neural Networks in TensorFlow.", "It is designed from the bottom up to support the kinds of rich heterogeneous graph data that occurs in today's information ecosystems.", "Many production models at Google use TF-GNN and it has been recently released as an open source project.", "In this paper, we describe the TF-GNN data model, its Keras modeling API, and relevant capabilities such as graph sampling, distributed training, and accelerator support." ], [ "Introduction", "num Machine Learning (ML) techniques have applications across domains as varied as medicine, social networks, biochemistry, robotics, and more.", "The success of many ML models is driven by their ability to incorporate different modalities of data (e.g.", "vision, text, sound, timeseries and geometric), each with its own unique structural (ir)regularities.", "Traditionally, software frameworks for machine learning (e.g.", "TensorFlow [1], PyTorch [29]) have focused on modeling one modality at a time, such as vision [23], [22] or natural language [27], [32], [10].", "However, the development of graph representation learning [9] and subsequent industry interest [2], [15], [19], [6], [3], [30], [26], [37], [28] has motivated the need for better software frameworks for learning with graph-structured data.", "In this paper we introduce TF-GNN https://github.com/tensorflow/gnn, a Python framework that extends TensorFlow [1] with Graph Neural Networks (GNNs) [9], [17]: models that leverage graph-structured data.", "TF-GNN is motivated and informed by years of applying graph representation learning to practical problems at Google.", "In particular, TF-GNN focuses on the representation of heterogeneous graph data and supports the explicit modeling of an arbitrary number of relationships between an arbitrary number of entities (i.e., nodes).", "These relationships can be used in combination with other TensorFlow components, e.g., a TF-GNN model might connect representations from a language model to those of a vision model and fine-tune these features for a node classification task.", "Many teams at Google run TF-GNN models in production.", "We believe this to be a direct consequence of TF-GNN 's multi-layered API which is designed for accessibility to developers (regardless of their prior experience with machine learning).", "Other software frameworks for learning from graph data have been proposed, most notably PyTorch Geometric (PyG) [11] and Deep Graph Library (DGL) [34].", "We differ from DGL and PyG in three main ways.", "First, TF-GNN has been designed bottom-up for modeling heterogeneous graphs.", "Second, TF-GNN offers different levels of abstraction for increased modeling flexibility.", "Proficient users can leverage raw TensorFlow operations for message passing, limited only by their imagination.", "Intermediate users use the Keras modeling API and pre-built convolution layers, while beginner users can use the Orchestrator to quickly experiment with GNNs.", "Finally, TF-GNN is programmed on top of TensorFlow.", "As such, its goal is to support the many production-ready capabilities present in the TensorFlow ecosystem.", "Interestingly, these differences can provide advantages for several use cases.", "For instance, (i) while it is possible for PyG to model heterogeneous graph data, its syntax advocates partitioning a heterogeneous graph into a set of homogeneous graphs.", "This makes it (programmatically) 
challenging to create a graph layer that pools from multiple node features at once, or even create new node or feature types on the fly (i.e., through the network computation).", "However, TF-GNN 's flexibility allows for aggregating from different node or edge types at once.", "Furthermore, (ii) TF-GNN offers edge-centric, node-centric, and graph-centric building blocks for GNNs.", "Existing frameworks (e.g., PyG) are node-centric, making implementing some models more tedious.", "On the other hand, edge-centric models, such as Graph Transformers [35], can be natively expressed in TF-GNN .", "Finally, (iii) TF-GNN 's TensorFlow implementation inherits the benefits of the TensorFlow ecosystem, including access to model architectures for popular modalities (such as vision, text, and speech), and the ability to execute GNN models, for training and inference, on extremely fast hardware devices [18].", "Summary of contributions.", "We present TF-GNN , an open-source Python library to create graph neural network models that can leverage heterogeneous relational data.", "TF-GNN enables training and inference of Graph Neural Networks (GNNs) on arbitrary graph-structured data.", "TF-GNN 's four API levels allow developers of all skill levels access to powerful GNN models.", "Many TF-GNN models run in production at Google.", "Finally, as a native citizen of the TensorFlow ecosystem, TF-GNN shares its benefits, including pretrained models for various modalities (e.g., an NLP model) and support for fast mathematical hardware such as Tensor Processing Units (TPUs)." ], [ "Overall design", "TF-GNN offers a layered API to build Graph Neural Networks that lets users trade off flexibility for abstraction.", "From least to most abstract, the layers (shown in Fig.", "REF ) are: Figure: TF-GNN 's layered API decomposes the tools needed to create graph models into four distinct components of increasing abstraction.", "API Level 1: The Data Level (§) takes care of representing heterogeneous graphs and loading them into TensorFlow, including technicalities like batching and padding.", "API Level 2: The Data Exchange Level (§REF ) provides operations for sending information across the graph between its nodes, edges and the graph context.", "API Level 3: The Model Level (§REF ) facilitates writing trainable transformations of the data exchanged across the graph in order to update the state of nodes, edges and/or context.", "API Level 4: The Minimal-Code Experience Level (§) provides the Orchestrator, a toolkit for the easy composition of data input, feature processing, graph objectives, training, and validation.", "This layered design is one reason for TF-GNN 's successful adoption for graph models at Google.", "Users can start at a high level and only later go deeper to tweak parts of the model.", "Some users may choose to only use the data level and its associated tooling (like the graph sampler, §REF ) and use their own modeling framework.", "The following sections describe the API levels in greater detail.", "Finally, we discuss other parts of the library designed for use in production models (§)." ], [ "TF-GNN Heterogeneous Data Model (API Level 1)", "To train a model on heterogeneous graph data, users first need to specify its node types, edge types and their respective features.", "This is done with the GraphSchema (§REF ).", "Based on that, the GraphTensor class (§REF ) can represent any graph from the dataset." 
], [ "Graph Schema", "A GraphSchema object defines: One or more named node sets and their respective features.", "Zero or more named edge sets and their respective features.", "Each edge set has a specified source node set and a specified target node set.", "All edges in the set connect these node sets.", "Context features, which pertain to the entire input graph.Section REF will refine the notion of context for graphs merged from a batch of inputs.", "As the name suggests, GraphSchema contains only an abstract definition of how entities are related (similar to an entity relational diagram [24]) and has no actual data points.", "By definition, the node sets are disjoint, so they can serve as node types; same for edges.", "For each feature, the graph schema defines its name, its datatype (int, float, or string) and its shape, as in TensorFlow [1].", "That means the graph as a whole is heterogeneous, but within each node or edge set, the features are uniformly typed and shaped.", "Each feature can have a different shape, so the features of a node set might comprise a scalar (say, a categorical feature), a variable-length sequence (e.g., tokenized text), a fixed-length vector (such as a precomputed embedding), a rank-3 tensor with an RGB image, and so on.", "Figure: The recommending system example in the text uses the GraphSchema visualized in part (a).", "Part (b) shows a possible graph with the node sets, connecting edge sets, and features prescribed by the schema.Figure: A GraphTensor to store the graph from Fig.", "comprises tensors for all the features and for the adjacency data of each edge.", "(The size tensors are not shown here.)", "A Python expression for this GraphTensor object is shown in the Appendix.An example schema for a prototypical recommendation systems problem is shown in Figure REF .", "It defines a heterogeneous graph structure with the following structure: [leftmargin=10mm] Two node sets: “items” and “users”.", "Item nodes have two features for their “category” (an integer scalar, corresponding to enumeration), and their “price” (a floating-point vector to hold item advertised prices).", "Person nodes have three features, representing their “name” (string), “age” (int), and “country” (int).", "Two distinct sets of edges: “purchased” and “is-friend”.", "Purchased edges connect users to items they have purchased, while is-friend edges connect users together.", "One context feature “scores”, which applies to the graph as a whole." 
], [ "GraphTensor", "Figure REF shows a graph that conforms to the example schema from above: Each circle corresponds to a node (colored by its node set), and the two types of lines correspond to the two different edge sets.", "Each node contains the features specified for its node set.", "Figure REF shows our approach to representing one such a graph in TensorFlow.", "We index the nodes in each node set and the edges in each edge set as $0, 1, 2, ..., n-1$ .", "Then any one feature on a node set or edge set can be represented by a tensor of shape $[n, f_1, \\ldots , f_k]$ , where $[f_1, \\ldots , f_k]$ , $k\\ge 0$ , is the feature shape from the GraphSchema.", "Moreover, for an edge set, the indices of source and target nodes are stored as two integer tensors of shape $[n]$ whose values are node indices in the node set specified by the graph schema.", "The GraphTensor class expands on this approach to represent graphs as tensors through all stages of a TensorFlow program that builds a GNN model.", "Roughly, the stages are Reading, shuffling, batching and parsing GraphTensor values from tf.Example records on disk; possibly distributing them between replicas for data-parallel training.", "Transforming one or more input features per node or edge into a fixed-size representation for deep learning.", "Running a graph neural network for several rounds to update the hidden states of nodes (and possibly edges) from neighboring parts of the graph, followed by reading out the relevant hidden states and computing the model's output.", "GraphTensor supports batching natively, and is indeed a tensor of graphs with a shape $[g_1, \\ldots , g_r]$ .", "A scalar GraphTensor has shape $[\\hspace{3.99994pt}]$ and holds a single graph, while a GraphTensor of shape $[g_1]$ holds a vector of $g_1$ graphs, as usual for training with minibatches.", "Ranks $r>1$ are rarely needed.", "Each node set and edge set holds a dictionary of named features.", "Each feature is a tensor of shape $[g_1, \\ldots , g_r, n, f_1, \\ldots , f_k]$ .", "GraphTensor allows that a feature dimension $f_i$ may vary between the items of one node/edge set, or that the number $n$ of items varies between the multiple graphs in a GraphTensor.", "In both cases, the solution is to store the feature as a tf.RaggedTensor, not a tf.Tensor.", "Under the hood, this stores an explicit partitioning for each non-uniform, or “ragged”, dimension.", "The shape reports its size as None.", "To support feature processing, TF-GNN lets you create a new GraphTensor from an old one by replacing some or all of the features while keeping track of the implied schema change.", "Finally, in service of training models, GraphTensor provides a method to merge a batch of inputs to a scalar GraphTensor.", "For each node/set, this method concatenates its elements across the batch of inputs and adjusts the node indices stored on edges correspondingly.", "The result is a GraphTensor of shape $[\\hspace{3.99994pt}]$ with a flat index space $0, 1, \\ldots , n_\\mathrm {total} - 1$ for each node/edge set, across the boundaries of batched examples.", "Features get the shape $[n_\\mathrm {total}, f_1, \\ldots , f_k]$ .", "Conveniently, such features can be represented as a tf.Tensor if all feature dimensions are fixed, even if a node/edge set's size $n_\\mathrm {total}$ is not constant between batches.", "If necessary, it can be made constant as well by adding a suitably sized padding graph to each batch of input graphs and assigning it weight 0 for training the GNN.", "Standard 
GNN operations on nodes and edges respect the boundaries between the merged input graphs, because there are no edges connecting them.", "To achieve the same for context features, GraphTensor supports the notion of components in a graph, and stores context features indexed by component." ], [ "Modeling with TF-GNN ", "The core of TF-GNN is specifying how a computation utilizes graph-structured data.", "In this section we detail the low-level (TensorFlow) and high-level (Keras) APIs used to construct GNNs." ], [ "Data Exchange Ops (API Level 2)", "TF-GNN sends data across the graph as follows.", "Broadcasting from a node set to an edge set returns for each edge the value from the specified endpoint (say, its source node).", "Pooling from an edge set to a node set returns for each node the specified aggregation (sum, mean, max, etc.)", "of the value on edges that have the node as the specified endpoint (say, their target node).", "The tensors involved are shaped like features of the respective node/edge set in the GraphTensor and can, but need not, be stored in it.", "Similarly, graph context values can be broadcast to or pooled from the nodes or edges of each graph component in a particular node set or edge set.", "Unlike multiplication with an adjacency matrix, this approach provides a natural place to insert per-edge computations with one or more values, such as computing attention weights [33], [7], integrating edge features into messages between nodes [12], or maintaining hidden states on edges [5]." ], [ "Model Building API (API Level 3)", "At API Level 3, TF-GNN follows standard TensorFlow practice and adopts Keras to express trainable transformations and their composition into models.", "API Levels 1 and 2 can serve other ways of modeling just as well.", "The shape $[n, \\ldots ]$ of feature tensors allows reusing standard neural network layers for item-wise transformations of node/edge sets, with set size $n$ in place of a batch size.", "A typical GNN model (cf.", "§REF ) consists of (i) feature transformations, (ii) a GNN core, and (iii) the final readout and prediction.", "TF-GNN lets you express this as a sequence of Keras layers that each take a GraphTensor input and return a GraphTensor output with transformed features – or a Tensor for reading out the final prediction." ], [ "Feature transformation layers", "The feature transformations treat each node/edge set in isolation.", "Depending on the available features, they can range from simple numeric transformations to running an entire deep learning model to compute an embedding of, say, image or text data.", "TF-GNN lets you plug in other TensorFlow models and fine-tune them jointly while training the GNN on top.", "In the end, the representations of multiple input features are combined to form one initial \"hidden_state\" feature.", "TF-GNN 's MapFeatures layer makes it easy to build Keras sub-models for each node/edge set that map a features dict to a transformed features dict, and eventually the hidden state." 
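, "As a minimal illustrative sketch (not from the paper), the example schema's raw features could be mapped to an initial \"hidden_state\" roughly as follows; the exact MapFeatures call signature is assumed here and should be checked against the library's documentation.", "import tensorflow as tf
import tensorflow_gnn as tfgnn

# Sketch: build one fixed-size hidden state per node from the raw features of the
# example schema ('users' with an integer 'age', 'items' with a ragged 'price').
# Assumes 'graph' is a scalar GraphTensor such as the one constructed in the Appendix.
def set_initial_node_state(node_set, *, node_set_name):
  if node_set_name == 'users':
    age = tf.cast(node_set['age'], tf.float32)[:, tf.newaxis]
    return tf.keras.layers.Dense(16)(age)
  if node_set_name == 'items':
    mean_price = tf.reduce_mean(node_set['price'], axis=-1)[:, tf.newaxis]
    return tf.keras.layers.Dense(16)(mean_price)
  raise KeyError(node_set_name)

graph = tfgnn.keras.layers.MapFeatures(node_sets_fn=set_initial_node_state)(graph)"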
], [ "Graph Neural Network layers", "The GNN core of the model is expressed as a sequence of graph update layers, each of which accepts a GraphTensor with \"hiddenstate\" features and returns a new GraphTensor with these features updated.", "Each layer object has its own trainable weights; weight sharing is achieved by using the same layer object repeatedly.", "Users can define their own graph update layers, or reuse those from the growing collection of models bundled with the TF-GNN library.", "A user-defined graph update can, if needed, contain free-form code with an arbitrary composition of trainable transformations and broadcast/pool operations across all parts of the graph.", "More commonly, graph updates are constructed from pieces operating on individual node sets or edge sets.", "TF-GNN provides a generic GraphUpdate class for that purpose, based on the following breakdown of updating a heterogeneous graph.", "Consider any node $v$ in some node set $V$ of a heterogeneous graph.", "Starting from the initial hidden state $\\mathbf {h}_v^{(0)}$ , successive GraphUpdates compute the hidden state of $v$ as $\\mathbf {h}_v^{(i+1)} = \\textsc {NextNodeState}_V^{(i+1)}(\\mathbf {h}_v^{(i)},\\overline{\\mathbf {m}}_{E_1,v}^{(i+1)},\\ldots ,\\overline{\\mathbf {m}}_{E_k,v}^{(i+1)})$ using the pooled messages $\\overline{\\mathbf {m}}_{E_j,v}^{(i+1)}$ received by node $v$ in round $i+1$ along all edge sets $E_1,\\ldots ,E_k$ incident to node set $V$ .", "Notice that their number $k$ is a constant from the graph schema for all $v \\in V$ .", "Let $\\mathcal {N}_{E_j}(v) =\\lbrace u \\mid (u,v) \\in E_j\\rbrace $ denote the neighbors of $v$ along one edge set $E_j$ , and notice that the size of $\\mathcal {N}_{E_j}(v)$ may vary with $v \\in V$ .", "GraphUpdate supports two ways of computing $\\overline{\\mathbf {m}}_{E_j}$ : in one step directly from the neighbor nodes as $\\overline{\\mathbf {m}}_{E_j,v}^{(i+1)} = \\textsc {Conv}_{E_j}^{(i+1)}(\\mathbf {h}_v^{(i)},\\lbrace \\mathbf {h}_u^{(i)} \\mid u \\in \\mathcal {N}_{E_j}(v)\\rbrace ),$ or in two steps, materializing a per-edge message in the GraphTensor as $\\mathbf {m}_{E_j,(u,v)}^{(i+1)} &=& \\textsc {NextEdgeState}_{E_j}^{(i+1)}(\\mathbf {h}_u^{(i)},\\mathbf {h}_v^{(i)},\\mathbf {m}_{E_j,(u,v)}^{(i)}), \\\\\\overline{\\mathbf {m}}_{E_j,v}^{(i+1)} &=& \\textsc {EdgePool}_{E_j}^{(i+1)}(\\mathbf {h}_v^{(i)},\\lbrace \\mathbf {m}_{E_j,(u,v)}^{(i+1)} \\mid u \\in \\mathcal {N}_{E_j}(v)\\rbrace ).$ The two-step approach of Eq.", "(REF ) supports recurrence in $\\mathbf {m}_{E_j,(u,v)}$ , effectively turning it into a hidden state for edges.", "With that, and a context (or “global”) state not shown here, this approach covers Graph Networks [5] and generalizes them to heterogeneous graphs.", "Without recurrence in $\\mathbf {m}_{E_j,(u,v)}$ (but possibly a constant edge feature in its place) this approach also covers Message Passing Neural Networks [12], generalized to heterogeneous graphs.", "The Conv abstraction from Eq.", "(REF ) is useful to express any of a number of successful GNN architectures in a single Python class and to transfer them directly to heterogeneous graphs with an arbitrary schema.", "Section REF and the Appendix review some concrete cases, including Graph Convolutional Networks [21], which popularized the term graph convolution that we adopt here." 
], [ "Implementing Popular Architectures", "Graph Convolutional Networks [21]: GCNs for homogeneous graphs rely on adding loops $(v,v)$ to the single edge set $E$ to feed $\\mathbf {h}_v^{(i)}$ into the computation of $\\mathbf {h}_v^{(i+1)}$ along with the neighbor nodes.", "TF-GNN implements them by specializing Eq.", "(REF ) and (REF ) to $\\mathbf {h}_v^{(i+1)} = \\overline{\\mathbf {m}}_{v}^{(i+1)}= \\sigma \\Big (\\sum _{u \\in \\mathcal {N}(v) \\cup \\lbrace v\\rbrace }\\frac{1}{\\sqrt{d_u d_v}} \\mathbf {W}^{(i+1)} \\mathbf {h}_u^{(i)}\\Big ),$ where $d_u$ is the in-degree of node $u$ including loops and $\\sigma $ is an activation function, such as ReLU.", "Observe how Conv from Eq.", "(REF ) is the graph convolution, and NextNodeState trivially passes through its result.", "The relational extension, R-GCN, [31] considers heterogeneous graphs and uses $\\mathbf {h}_v^{(i+1)}= \\sigma \\Big (\\sum _{j=1}^k \\overline{\\mathbf {m}}_{E_j,v}^{(i+1)}+ \\mathbf {W}_V^{(i+1)} \\mathbf {h}_v^{(i)} \\Big ),\\\\\\overline{\\mathbf {m}}_{E_j,v}^{(i+1)}= \\frac{1}{|{\\mathcal {N}_{E_j}(v)}|}\\sum _{u \\in \\mathcal {N}_{E_j}(v)}\\mathbf {W}_{E_j}^{(i+1)} \\mathbf {h}_u^{(i)} $ with separate weights for each edge set and node set.", "This translates immediately to Conv and NextNodeState maps.", "GraphSAGE [16] considers homogeneous sampled subgraphs and proposes several aggregator architectures, which translate directly to choices for Conv in Eq.", "(REF ).", "Its NextNodeState function turns out to be the special case $k=1$ of Eq.", "(REF ) for R-GCN.", "In the GraphUpdate framework, GraphSAGE generalizes naturally to the heterogeneous case by running its Convs on multiple edge sets and combining them as in Eq.", "(REF ) for $k>1$ .", "GAT [33] extends GCN by replacing the weighted sum in Eq.", "(REF ) by a concatenation of multiple weighted sums (attention heads), each with its own data-dependent weighting.", "Both GAT and its modification GATv2 [7] can be expressed as a Conv operation (formulas omitted for brevity).", "The GraphUpdate framework allows to generalize GAT/GATv2 directly to the heterogeneous case, with no extra coding, analogous to the generalization from GCN to R-GCN.", "Attention is distributed separately between the edges of each edge set; learning the relative importance of different edge sets (relation types) is left to their separate weight matrices $\\mathbf {W}_{E_j}^{(i+1)}$ in Eq.", "(REF ).", "TF-GNN provides a base class for implementing Conv operations that allows a unified implementation of attention (i) from a node onto its neighbors, possibly combining the neighbor node state with a feature from the connecting edge; (ii) from a node onto its incoming edges; (iii) from the graph context onto all nodes; (iv) from the graph context onto all edges.", "Cases (ii–iv) provide attention for all aggregation steps of Graph Networks [5].", "The appendix shows the unified implementation of GATv2 attention for all four cases." 
], [ "Orchestration (API Level 4)", "At the highest level, TF-GNN provides the Orchestrator: a quick-start toolkit with solutions for common graph learning tasks.", "It includes popular graph learning objectives, distributed training capabilities, accelerator support and the handling of numerous TensorFlow idiosyncrasies.", "The Orchestrator aims to collect the tools necessary for (i) elevating the novice to a TF-GNN power user and (ii) increasing the scope of the graph learning expert's innovation.", "The toolkit supports graph learning research by offering both a standard framework for the reproduction of results and a shared catalog of SotA and convenience graph learning techniques and objectives.", "Specifically, the Orchestrator performs the following steps: Reading input data to extract input graph(s) as GraphTensor instances $\\mathbf {X}$ and corresponding label(s) $\\mathbf {Y}$ .", "Processing features for a specific dataset $\\mathbf {X} \\rightarrow \\mathbf {X}^\\prime $ Adapting a model to the graph learning objective $\\mathbf {M} \\rightarrow \\mathbf {M}^\\prime $ .", "Training the adapted model $\\mathbf {M}^\\prime \\colon \\mathbf {X}^\\prime \\rightarrow \\mathbf {H}$ to minimize the loss between $\\mathbf {H}$ and $\\mathbf {Y}$ .", "Exporting the model ($\\mathbf {M}^\\prime $ ) for inference or deployment.", "The Orchestrator provides abstractions for the composition of these five steps as follows.", "Data source (Python protocol DatasetProvider): An arbitrary source (e.g., files on disk) that produces GraphTensor and corresponding schema.", "Feature processing (Python Protocol GraphTensorProcessorFn): Feature manipulations for a GraphTensor–typically associated with a specific dataset.", "Task (Python Protocol Task): A collection of the ancillary pieces for a graph learning objective.", "Provides mechanisms for extending an arbitrary model to a graph learning objective (e.g., node classification or regression) and processing for the data source manipulations of that same graph learning objective.", "Model: An arbitrary model that operates on GraphTensor.", "The model is adapted to a graph learning objective by the above Task.", "Training Hyperparameters: Including the choice of optimization algorithm (e.g., ADAM [4]), learning rate, as well as model-hyperparemters (e.g., number of layers and their widths).", "Integrated with a automated hyperparameter tuning service [13]." ], [ "Sampling and Scaling", "At Google, we need to build neural network models for graphs of incredible size (trillions of edges).", "Heterogeneous graphs of this scale that have rich node and edge features cannot fit in the memory of single machines.", "Furthermore, for fast training and inference, deep graph models must be able to exploit parallel computations on specialized hardware." 
], [ "Distributed Sampling for Training and Inference", "On massive, well-connected graphs, even deep GNNs rarely aggregate features from distant nodes.", "As a result, when making a prediction on a root node (or a root activation node for edge/neighborhood prediction), it is nearly-equivalent for the GNN to operate on a sufficiently-wide subgraph around the root.", "For example, training and inference with a 2-layer GCN [21] only requires the 2-hop subgraph around the root.", "To train GNNs on massive graphs, we first run a distributed subgraph sampling operation [16] which queries each root node for a subgraph, and writes each subgraph to disk as an individual GraphTensor.", "The subgraph GraphTensors are (randomly) grouped into file shards, allowing distributed training using TensorFlow.", "Once the model is trained, inference on large graphs is done identically by sharding the graph into rooted subgraphs at each inference node.", "See Figure REF for an illustration.", "Figure: Diagram of massive-graph sampling and training pipeline with TF-GNN ." ], [ "In-memory Learning", "In settings where the input graphs are small enough to fit into memory (e.g., of the GPU), then it might be possible to perform entire-graph learning: where all graph nodes, edges, and features, are loaded for processing at once.", "Here, one can express the objective function on the entire graph, e.g., as a node classification cross-entropy loss on all labeled nodes.", "TF-GNN offers functionality and tutorials for these settings.https://github.com/tensorflow/gnn/tree/main/examples/in_memory" ], [ "Related Work", "Here we provide an overview of related work, which we separate into two broad categories: single-machine and distributed software frameworks.", "While it is more common for single-machine frameworks to be on the limits of research and innovation, industrial applications require distributed support and pose challenges rarely addressed by pure research applications." ], [ "Single-machine libraries.", "PyG (PyTorch-Geometric) [11] is a de-facto standard framework for GNNs in the PyTorch ecosystem.", "PyG provides automatic batching support, GPU acceleration, and an interface to common graph learning datasets.", "Its performance is further enhanced by a set of optimized sparse GPU kernels tailored towards graph learning workloads.", "Spektral [14] is a prominent TensorFlow framework that follows Keras model building principles.", "It offers a similar experience to PyG in the TensorFlow ecosystem but without any batching support.", "TF-GNN differs from these in many ways.", "For example, Spektral computes edge-centric functions (such as, GAT) on minibatches by converting sparse adjacency matrices to dense ones.", "In general, these frameworks are not designed to scale to large graphs, but allow for easy experimentation by researchers on small graphs.", "On the other hand, TF-GNN 's primary purpose is to scale to large graphs." 
], [ "Distributed libraries.", "Deep Graph Library (DGL) [34] allows switching the backend platform between PyTorch, TensorFlow, and Apache MXnet with minimal code modifications.", "DistDGL expension [36] enables efficient multi-machine training with DGL.", "Training is supposed to be done on a fleet of CPU-heavy instances connected in a cluster with a wide communication channel.", "DistDGL is mainly focused on reducing communication between the workers, each of which is performing sampling and training simultaneously.", "It partitions the input graph with METIS [20] and uses each partition as an example.", "Unlike DistDGL, TF-GNN 's distributed training is more general.", "TF-GNN does not assume that the the data contains clusters, or that the graph structure fit into memory for partitioning via METIS.", "Graph-Learn (formerly AliGraph) [38] is an open-source industrial graph learning framework built on top of TensorFlow.", "It is designed to natively handle large heterogeneous graphs, and it employs several techniques to facilitate large-scale training.", "Their distribution strategy relies on distributing the graph among workers machines, with a requirement that all worker machines must be alive at the same time: their training would stop if any worker machine fails.", "This differs from the distribution strategy of TF-GNN.", "In particular, TF-GNN samples a large graph into subgraphs using a resilient distributed system [8].", "Similarly, TF-GNN can be used with the asynchronous distributed model training in TensorFlow, which is robust to machine failures.", "Paddle Graph Learning (PGL) [25] is probably the most similar to TF-GNN .", "It is founded upon message passing over heterogeneous graphs.", "There are two notable differences between PGL and TF-GNN.", "First, PGL is more restrictive: each node must have a single feature (it is non-trivial to combine visual feature and textual feature, per node, per se) and dictates that all nodes must have the same feature dimensions.", "In contrast, TF-GNN support multiple features per node type (including ragged feature dimensions) and two node types can hold different features.", "Second, PGL uses Paddle as the computation backend whereas TF-GNN uses TensorFlow.", "From a practical sense, it is easier to find state-of-the-art (SotA) network architectures and pre-trained models in TensorFlow, which would make it easier to combine per-modality SotA models within a GNN." ], [ "Conclusions", "In this work we have presented TF-GNN , our open source framework used for Graph Neural Networks at Google.", "TF-GNN is a software framework which reduces the technical burden for GNN productization and facilitates experimentation with GNNs.", "TF-GNN 's expressive modeling capability allows complex relationships between nodes, edges, and graph-level elements in a model.", "This enabling the straight-forward implementation of intricate models.", "TF-GNN offers four levels of increasingly abstract APIs, serving a range of use-cases from beginners with a graph problem but little ML experience to ML researchers who desire complete control of their graph learning system.", "Many models at Google already use TF-GNN , and we believe that this project will accelerate the industrial adaptation of these promising models at more organizations." 
], [ "Graph Tensor Example", "Consider again the example schema and graph provided in Figures REF and REF .", "This example graph would be represented with the following tensors: [leftmargin=4mm] Node feature tensors: [leftmargin=4mm] “items” category, string vector tensor: [\"food\", \"show ticket\", \"shoes\", \"book\", \"flight\", \"groceries\"], with shape [6] “item” price, float tensor: [[22.34, 23.42, 12.99], [27.99, 34.50], [89.99], [24.99, 45.00], [350.00], [45.13, 79.80, 12.35]], with shape [6, None]; an instance of tf.RaggedTensor “users” name, string vector tensor: [\"Shawn\", \"Jeorg\", \"Yumiko\", \"Sophie\"], with shape [4] “users” age, int vector tensor: [24, 32, 27, 38], with shape [4].", "“users” country: int vector tensor: [3, 2, 1, 0], with shape `[4]`, assuming that the country vocabulary enumeration is dict(france=0, japan=1, uk=2, usa=3).", "Edge feature tensors: [leftmargin=4mm] “purchased” with [leftmargin=4mm] source indices, int vector tensor: [0, 1, 2, 3, 4, 5, 5].", "target indices, int vector tensor: [1, 1, 0, 0, 2, 3, 0], both with shape [7].", "“is-friend” with [leftmargin=4mm] source indices, int vector tensor: [1, 2, 3].", "target indices, int vector tensor: [0, 0, 0], both with shape [3].", "Graph-level feature tensors: [leftmargin=4mm] “scores”, float matrix tensor: [[0.45, 0.98, 0.10, 0.25]] with shape [1, 4].", "Note how the edges connectivity is encoded as source and target indices in the arrays of node features.", "The indices are indicating, for each edge, which position in the node feature array they are referring to.", "For example, the fifth values of the `purchased/#source` and `purchased/#target` is `[4, 2]`, which link together nodes `\"flight\"` and `\"Yumiko\"`." ], [ "GraphTensor Features & Shapes", "Shapes of feature tensors, stored in GraphTensor, are crucial for building models in TF-GNN .", "Each tensor has a particular shape constraint based on its containing node set, edge set, or context dictionary.", "The shapes are as follows: Node features.", "All the features associated with a node set share the initial dimension, which is the total number of nodes in the node set.", "In the example above, features for node “items” have 6 nodes, and so all their tensor shapes are of the form [6, ...].", "In general, node features will have the [numnodes, feature...] shape.", "For example, a simple 64-dimensional embedding of each node results in shape [6, 64], while a 224x224 image with 3 color channels stored at each node could result in [6, 224, 224, 3].", "Edge features and indices.", "Similarly, all the features associated with an edge set shares the leading dimension, which is the total number of edges in the edge set.", "This includes the edge indices (rendered above as special features source and target).", "In the example, features for edges “purchased”, both of these have shape [7].", "If edges have information encoded as features, e.g., an embedding of shape [32], then the edge feature tensor would be of shape [7, 32].", "In general, edge features will have shape [numedges, feature...].", "Context features.", "Context features apply to one component of the graph.", "An input graph parsed with a GraphSchema has a single component, so its context features have shapes [1, ...].", "After batching inputs and merging batches of inputs into components of a single graph (see §REF ), there are multiple components, and their number appears as the outermost dimension in the shape of context features." 
], [ "GraphTensor Interface", "The GraphTensor objects you obtain from the parser are lightweight containers for all the dense and ragged features that are part of an example graph, as well as the adjacency information.", "These can contain a single example graph or a batch of multiple graphs.", "You can access the tensors with an interface similar to that of Python dicts.", "For example, to access the age feature in the example, you would do this: graph.nodesets[\"users\"][\"age\"] <tf.Tensor: shape=(4,), dtype=int32, numpy=array([24, 32, 27, 38], dtype=int32)> Edge indices are accessed with their \"adjacency\" property: graph.edgesets[\"purchased\"].adjacency.source <tf.Tensor: shape=(7,), dtype=int32, numpy=array([0, 1, 2, 3, 4, 5, 5], dtype=int32)> And similarly for context features: graph.context[\"scores\"] <tf.Tensor: shape=(1, 4), dtype=float32, numpy=array([[0.45, 0.98, 0.1 , 0.25]], dtype=float32)>" ], [ "Creating GraphTensors", "Instances of GraphTensor can also be created as constants or from existing tensors.", "This is useful for writing unit tests and working in a Colab.", "To create a GraphTensor, you have to provide the various pieces and features forming the GraphTensor.", "This is best described by an example: graph = tfgnn.GraphTensor.frompieces( context=tfgnn.Context.fromfields( features= \"scores\": [[0.45, 0.98, 0.10, 0.25]], ), nodesets= \"items\": tfgnn.NodeSet.fromfields( sizes=[6], features= \"category\": [\"food\", \"show ticket\", \"shoes\", \"book\", \"flight\", \"groceries\"], \"price\": tf.ragged.constant([[22.34, 23.42, 12.99], [27.99, 34.50], [89.99], [24.99, 45.00], [350.00], [45.13, 79.80, 12.35]]), ), \"users\": tfgnn.NodeSet.fromfields( sizes=[4], features= \"name\": [\"Shawn\", \"Jeorg\", \"Yumiko\", \"Sophie\"], \"age\": [24, 32, 27, 38], \"country\": [\"usa\", \"uk\", \"japan\", \"france\"], ), , edgesets= \"purchased\": tfgnn.EdgeSet.fromfields( sizes=[7], features=, adjacency=tfgnn.Adjacency.fromindices( source=(\"items\", [0, 1, 2, 3, 4, 5, 5]), target=(\"users\", [1, 1, 0, 0, 2, 3, 0]), )), \"is-friend\": tfgnn.EdgeSet.fromfields( sizes=[3], features=, adjacency=tfgnn.Adjacency.fromindices( source=(\"users\", [1, 2, 3]), target=(\"users\", [0, 0, 0]), )), ) The data types for tfgnn.Context, tfgnn.NodeSet, tfgnn.EdgeSet, and tfgnn.Adjacency are exposed to the API for this construction to be possible.", "The classes verify that the ranks and shapes of the tensors are matching each other as constrained by the graph structure (i.e., in the same set they share a common prefix dimensions)." 
], [ "Reading GraphTensors", "Graphs can be written to disk by encoding them into streams of tf.train.Example protos.", "Graphs can be read into a TensorFlow program by decoding these Example protos to GraphTensor objects.", "To configure the parsing routine, the GraphSchema message is read and converted to a GraphTensorSpec object which describes the layout of the graph tensor in the TensorFlow runtime: the list of its various node sets, edge sets, and all the features associated with them and the context features.", "This data structure is analogous to the tf.TensorSpec object of TensorFlow.", "A GraphTensorSpec object is attached to all instances of GraphTensor and flows along with it.", "To read a stream of graph tensors, first create a spec, like this: import tensorflowgnn as tfgnn graphschema = tfgnn.readschema(schemapath) graphspec = tfgnn.creategraphspecfromschemapb(graphschema) You can then use the library’s own parser to decode the tf.train.Example features into GraphTensor objects: datapaths = tf.data.Dataset.listfiles(...) ds = tf.data.TFRecordDataset(datapaths) ds = ds.batch(batchsize) parserfn = functools.partial(tfgnn.parseexample, graphspec) ds = ds.map(parserfn) The dataset ds yields a stream of instances of GraphTensor.", "The parsing function ingests a batch of serialized graphs (a rank-1 tensor of strings).", "Similarly to tf.io.parsesingleexample, there is also a function tfgnn.parsesingleexample to parse an unbatched stream of encoded tf.train.Example strings with GraphTensor instance in them, and you can also run batch on top of that (i.e., you can batch GraphTensor instances): datapaths = tf.data.Dataset.listfiles(...) ds = tf.data.TFRecordDataset(datapaths) parserfn = functools.partial(tfgnn.parsesingleexample, graphspec) ds = ds.map(parserfn) ds = ds.batch(batchsize) Typically this step is followed by a feature engineering step that normalizes, embeds and concatenates the various features into a single embedding for each set, and then batches multiple subgraphs into modified batched GraphTensor instances.", "The containers can then be used to pick out various tensors and build a model, or simply to inspect their contents: for graph in ds.take(10): tensor = graph.nodesets[\"item\"][\"category\"] print(tensor) You will go through important details of batching and flattening to a single graph below." 
], [ "Broadcast and pool operations", "Consider how the message passing API could help us to find the total user spending on purchased items.", "As a preparation step, let's calculate for each \"item\" the latest price value and materialize the result to the graph tensor.", "itemfeatures = graph.nodesets[\"items\"].getfeaturesdict() itemfeatures[\"latestprice\"] = itemfeatures[\"price\"][:, :1].values graph = graph.replacefeatures(nodesets=\"items\": itemfeatures) print(graph.nodesets[\"items\"]) User spending can now be calculated by first computing purchase prices by broadcasting item latest prices to the \"purchase\" edges.", "The total user spendings are computed by sum-pooling purchase prices to their users.", "purchaseprices = tfgnn.broadcastnodetoedges( graph, \"purchased\", tfgnn.SOURCE, featurename=\"latestprice\") totaluserspendings = tfgnn.pooledgestonode( graph, \"purchased\", tfgnn.TARGET, reducetype=\"sum\", featurevalue=purchaseprices) print(totaluserspendings) Note that API allows to reference feature values either by their name and by their values.", "For latter it is not required that feature exists in the graph tensor as soon as its shape is correct.", "Message passing is also supported between the graph context and any node set or edge set.", "As an example, let's compare individual user spendings to the maximum amount spent by any users.", "The code below first max-pool all individual user spendings and then broadcast them back to \"users\" to compute fractions.", "maxamountspend = tfgnn.poolnodestocontext( graph, \"users\", featurevalue=totaluserspendings, reducetype=\"max\") maxamountspend = tfgnn.broadcastcontexttonodes( graph, \"users\", featurevalue=maxamountspend) print(totaluserspendings / maxamountspend)" ], [ "Implementing GATv2", "The following code snippet shows the essence of how GATv2 [7] has been implemented in TF-GNN .", "It performs attention from many senders to one receiver.", "In the simplest case, the convolution is applied to an edge set whose target nodes are the receivers, and each receiver attends to the adjacent source nodes.", "However, TF-GNN lets you configure this is many ways: (1) The roles of source and target nodes can be swapped by setting the receivertag; for example, consider how an edge set of hyperlinks between HTML docs can reasonably used in either direction.", "(2) The sender value can be supplied by the neighbor node, the connecting edge, or both; this is controlled by the sender*feature args, at least one of which must not be None.", "(3) Attention can also happen with receivertag=tfgnn.CONTEXT and either all nodes or all edges in the respective graph component as senders.", "This comes in handy for attention in the context update of a Graph Network [5] or for readout of a graph-level feature by smart pooling of node or edge states.", "All those case distinctions are handled by the superclass, which passes the suitable broadcast and pool ops as arguments into the actual GATv2Conv.convolve() method.", "class GATv2Conv(tfgnn.keras.layers.AnyToAnyConvolutionBase):     def __init__(self, *,                num_heads, per_head_channels,                receiver_tag=None,  # SOURCE, TARGET or CONTEXT.", "receiver_feature=tfgnn.HIDDEN_STATE,  # Required.", "sender_node_feature=tfgnn.HIDDEN_STATE,  # Set None to disable.", "sender_edge_feature=None,  # Set tfgnn.HIDDEN_STATE to enable.", "attention_activation=\"leaky_relu\", activation=\"relu\",                edge_dropout=0., **kwargs):     # Save initializer args.", "super().__init__(  
       receiver_tag=receiver_tag, receiver_feature=receiver_feature,         sender_node_feature=sender_node_feature,         sender_edge_feature=sender_edge_feature,         extra_receiver_ops={\"softmax\": tfgnn.softmax}, **kwargs)     self._num_heads = num_heads     self._per_head_channels = per_head_channels     self._edge_dropout_layer = tf.keras.layers.Dropout(edge_dropout)     self._attention_activation = tf.keras.activations.get(attention_activation)     self._activation = tf.keras.activations.get(activation)     # Create the transformations for the query input in all heads.", "self._w_query = tf.keras.layers.Dense(per_head_channels * num_heads)     # Create the transformations for value input from sender nodes and edges.", "self._w_sender_node = self._w_sender_edge = None     if self.takes_sender_node_input:       self._w_sender_node = tf.keras.layers.Dense(per_head_channels * num_heads)     if self.takes_sender_edge_input:       self._w_sender_edge = tf.keras.layers.Dense(           per_head_channels * num_heads,           use_bias=self._w_sender_node is None)  # Avoid two biases.", "# Create the transformation to attention scores.", "self._attention_logits_fn = tf.keras.layers.experimental.EinsumDense(         \"...ik,ki->...i\", output_shape=(num_heads,))     def convolve(self, *,                sender_node_input, sender_edge_input, receiver_input,                broadcast_from_sender_node, broadcast_from_receiver,                pool_to_receiver, extra_receiver_ops, training):     # Form the attention query for each head.", "query = broadcast_from_receiver(self._split_heads(self._w_query(         receiver_input)))     # Form the attention value for each head.", "value_terms = []     if sender_node_input is not None:       value_terms.append(broadcast_from_sender_node(           self._split_heads(self._w_sender_node(sender_node_input))))     if sender_edge_input is not None:       value_terms.append(           self._split_heads(self._w_sender_edge(sender_edge_input)))     value = tf.add_n(value_terms)     # Compute the attention coefficients.", "attention_features = self._attention_activation(query + value)     logits = tf.expand_dims(self._attention_logits_fn(attention_features), -1)     attention_coefficients = extra_receiver_ops[\"softmax\"](logits)     attention_coefficients = self._edge_dropout_layer(attention_coefficients)     # Apply the attention coefficients to the transformed query.", "messages = value * attention_coefficients     pooled_messages = pool_to_receiver(messages, reduce_type=\"sum\")     # Apply the nonlinearity.", "return self._activation(self._merge_heads(pooled_messages))     # The following helpers map forth and back between tensors with...   #  - a separate heads dimension: shape [..., num_heads, channels_per_head],   #  - all heads concatenated:    shape [..., num_heads * channels_per_head].", "def _split_heads(self, tensor):     extra_dims = tensor.shape[1:-1]  # Possibly empty.", "new_shape = (-1, *extra_dims, self._num_heads, self._per_head_channels)     return tf.reshape(tensor, new_shape)     def _merge_heads(self, tensor):     extra_dims = tensor.shape[1 : -2]  # Possibly empty.", "merged_dims = tensor.shape[-2:]     new_shape = (-1, *extra_dims, merged_dims.num_elements())     return tf.reshape(tensor, new_shape)" ] ]
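A hypothetical usage sketch (added for illustration, not from the paper): assuming the GraphUpdate, NodeSetUpdate and NextStateFromConcat wrappers described in the modeling section behave roughly as written here, the GATv2Conv defined above could be plugged into one round of updates as follows; exact class names and signatures should be verified against the library.

import tensorflow as tf
import tensorflow_gnn as tfgnn

# Illustrative sketch: one graph update that lets 'users' attend over their
# incoming 'purchased' edges with the GATv2Conv class defined above.
gnn_layer = tfgnn.keras.layers.GraphUpdate(
    node_sets={
        'users': tfgnn.keras.layers.NodeSetUpdate(
            {'purchased': GATv2Conv(num_heads=4, per_head_channels=16,
                                    receiver_tag=tfgnn.TARGET)},
            tfgnn.keras.layers.NextStateFromConcat(tf.keras.layers.Dense(64)))})
graph = gnn_layer(graph)   # 'graph' is a scalar GraphTensor with 'hidden_state' features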
2207.03522
[ [ "Differentially Private Sliced Wasserstein Distance" ], [ "Abstract Developing machine learning methods that are privacy preserving is today a central topic of research, with huge practical impacts.", "Among the numerous ways to address privacy-preserving learning, we here take the perspective of computing the divergences between distributions under the Differential Privacy (DP) framework -- being able to compute divergences between distributions is pivotal for many machine learning problems, such as learning generative models or domain adaptation problems.", "Instead of resorting to the popular gradient-based sanitization method for DP, we tackle the problem at its roots by focusing on the Sliced Wasserstein Distance and seamlessly making it differentially private.", "Our main contribution is as follows: we analyze the property of adding a Gaussian perturbation to the intrinsic randomized mechanism of the Sliced Wasserstein Distance, and we establish the sensitivityof the resulting differentially private mechanism.", "One of our important findings is that this DP mechanism transforms the Sliced Wasserstein distance into another distance, that we call the Smoothed Sliced Wasserstein Distance.", "This new differentially private distribution distance can be plugged into generative models and domain adaptation algorithms in a transparent way, and we empirically show that it yields highly competitive performance compared with gradient-based DP approaches from the literature, with almost no loss in accuracy for the domain adaptation problems that we consider." ], [ "Introduction", "Healthcare and computational advertising are examples of domains that could find a tremendous benefit from the continous advances made in Machine Learning (ML).", "However, as ethical and regulatory concerns become prominent in these areas, there is the need to devise privacy preserving mechanisms allowing i) to prevent the access to individual and critical data and ii) to still leave the door open to the use of elaborate ML methods.", "Differential privacy (DP) offers a sound privacy-preserving framework to tackle both issues and effective DP mechanisms have been designed for, e.g., logistic regression and Support Vector Machines [27], [8].", "Here, we address the problem of devising a differentially private distribution distance with, in the hindsight, tasks such as learning generative models and domain adaptation —which both may rely on a relevant distribution distance [20], [10].", "In particular, we propose and analyze a mechanism that transforms the sliced Wasserstein distance (SWD) [26] into a differentially private distance while retaining the scalability advantages and metric properties of the base SWD.", "The key ingredient to our contribution: to take advantage of the combination of the embedded sampling process of SWD and the so-called Gaussian mechanism.", "Our contributions are as follows: i) we analyze the effect of a Gaussian mechanism on the sliced Wasserstein distance and we establish the DP-compliance of the resulting mechanism DP-SWD; ii) we show that DP-SWD boils down to what we call Gaussian smoothed SWD, that inherits some of the key properties of a distance, a novel result that has value on its own; iii) extensive empirical analysis on domain adaptation and generative modeling tasks show that the proposed DP-SWD is competitive, as we achieve DP guarantees without almost no loss in accuracy in domain adaptation, while being the first to present a DP generative model on the $64\\times 64$ 
RGB CelebA dataset." ], [ "Outline.", "Section  states the problem we are interested in and provides background on differential privacy and the sliced Wasserstein distance.", "In Section , we analyze the DP guarantee of random direction projections and we characterize the resulting Gaussian Smoothed Sliced Wasserstein distance.", "Section discusses how this distance can be plugged into domain adaptation and generative model algorithms.", "After discussing related works in Section , Section presents empirical results, showing our ability to effectively learn under DP constraints." ], [ "Privacy, Gaussian Mechanism and Random Direction Projections", "We start by stating the main problem we are interested in: to show the privacy properties of the random mechanism ${\\cal M}(\\mathbf {X}) = \\mathbf {X}\\mathbf {U}+ \\mathbf {V},$ where $\\mathbf {X}\\in \\mathbb {R}^{n\\times d}$ is a matrix (a dataset), $\\mathbf {U}\\in \\mathbb {R}^{d\\times k}$ a random matrix made of $k$ uniformly distributed unit-norm vectors of $\\mathbb {R}^d$ and $\\mathbf {V}\\in \\mathbb {R}^{n\\times k}$ a matrix of $k$ zero-mean Gaussian vectors (also called the Gaussian Mechanism).", "We show that ${\\cal M}$ is differentially private and that it is the core component of the Sliced Wassertein Distance (SWD) computed thanks to random projection directions (the unit-norm matrix $\\mathbf {U}$ ) and, in turn, SWD inheritsThis is a slight abuse of vocabulary as the Sliceblackd Wasserstein Distance takes two inputs and not only one.", "the differential private property of ${\\cal M}$ .", "In the way, we show that the population version of the resulting differentially private SWD is a distance, that we dub the Gaussian Smoothed SWD." ], [ "Differential Privacy (DP)", "DP is a theoretical framework to analyze the privacy guarantees of algorithms.", "It rests on the following definitions.", "Definition 1 (Neighboring datasets) Let $\\mathcal {X}$ (e.g.", "$\\mathcal {X}= \\mathbb {R}^d$ ) be a domain and $\\mathcal {D}\\doteq \\operatorname{\\cup }_{n=1}^{+\\infty }\\mathcal {X}^n$ .", "$D, D^{\\prime }\\in \\mathcal {D}$ are neighboring datasets if $|D|=|D^{\\prime }|$ and they differ from one record.", "Definition 2 ([11]) Let $\\varepsilon ,\\delta > 0$ .", "Let $\\mathcal {A}:\\mathcal {D}\\rightarrow \\text{Im }{\\mathcal {A}}$ be a randomized algorithm, where $\\text{Im }{\\mathcal {A}}$ is the image of $\\mathcal {D}$ through ${\\mathcal {A}}$ .", "${\\mathcal {A}}$ is $(\\varepsilon ,\\delta )$ -differentially private, or $(\\varepsilon ,\\delta )$ -DP, if for all neighboring datasets $D,D^\\prime \\in \\mathcal {D}$ and for all sets of outputs $\\mathcal {O}\\in \\text{Im }{\\mathcal {A}}$ , the following inequality holds: $\\mathbb {P}[\\mathcal {A}(D) \\in \\mathcal {O}] \\le e^\\varepsilon \\mathbb {P}[\\mathcal {A}(D^\\prime ) \\in \\mathcal {O}] + \\delta $ where the probability relates to the randomness of ${\\mathcal {A}}$ .", "Remark 1 Note that given $D\\in \\mathcal {D}$ and a randomized algorithm ${\\mathcal {A}}:\\mathcal {D}\\rightarrow \\text{Im }{\\mathcal {A}}$ , ${\\mathcal {A}}(D)$ defines a distribution $\\pi _D:\\text{Im }{\\mathcal {A}}\\rightarrow [0,1]$ on (a subspace of) $\\text{Im }{\\mathcal {A}}$ with $\\forall \\mathcal {O}\\in \\text{Im }{\\mathcal {A}},\\, \\pi _D(\\mathcal {O})\\propto \\mathbb {P}[\\mathcal {A}(D) \\in \\mathcal {O}],$ where $\\propto $ means equality up to a normalizing factor.", "The following notion of privacy, proposed by [22], which is based on Rényi 
$\alpha $ -divergences and its connections to $(\varepsilon ,\delta )$ -differential privacy, will ease the exposition of our results (see also [2], [3], [32]): Definition 3 ([22]) Let $\varepsilon > 0$ and $\alpha >1$ .", "A randomized algorithm $\mathcal {A}$ is $(\alpha , \varepsilon )$ -Rényi differentially private, or $(\alpha , \varepsilon )$ -RDP, if for any neighboring datasets $D,D^\prime \in \mathcal {D}$ , $\mathbb {D}_\alpha \left(\mathcal {A}(D)\Vert \mathcal {A}(D^\prime )\right) \le \varepsilon $ where $\mathbb {D}_\alpha (\cdot \Vert \cdot )$ is the Rényi $\alpha $ -divergence [28] between two distributions (cf.", "Remark REF ).", "Proposition 1 ([22], Prop.", "3) An $(\alpha , \varepsilon )$ -RDP mechanism is also $(\varepsilon + \frac{\log (1/\delta )}{\alpha -1},\delta )$ -DP, $\forall \delta \in (0,1)$ .", "Remark 2 A standard way to build an (R)DP algorithm based on a function $f:\mathcal {X}\rightarrow \mathbb {R}^d$ is the Gaussian mechanism $\mathcal {M}_\sigma $ defined as follows: $\mathcal {M}_\sigma f(\cdot ) = f(\cdot ) + \mathbf {v}$ where $\mathbf {v}\sim \mathcal {N}(0,\sigma ^2I_d)$ .", "If $f$ has $\Delta _2$ - (or $\ell _2$ -) sensitivity $\Delta _2f \doteq \max _{D,D^\prime \text{neighbors}} \Vert f(D) - f(D^\prime )\Vert _2,$ then $\mathcal {M}_\sigma $ is $\left(\alpha , \frac{\alpha \Delta _2^2f }{2 \sigma ^2}\right)$ -RDP.", "As we shall see, the role of $f$ will be played by the Random Direction Projections operation or the Sliced Wasserstein Distance (SWD), a randomized algorithm itself, and the mechanism to be studied is the composition of two random algorithms, SWD and the Gaussian mechanism.", "Proving the (R)DP nature of this mechanism will rely on a high-probability bound on the sensitivity of the Random Direction Projections/SWD combined with the result of Remark REF ."
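To make Remark 2 and Proposition 1 concrete, here is a minimal numerical sketch (not taken from the paper's code; the function names and the example values of the sensitivity, noise level and privacy budget are ours):

```python
import numpy as np

def gaussian_mechanism(f_value, sigma, rng=np.random.default_rng()):
    """Release f(D) + v with v ~ N(0, sigma^2 I): the Gaussian mechanism of Remark 2."""
    return f_value + rng.normal(scale=sigma, size=np.shape(f_value))

def rdp_epsilon(alpha, delta2_f, sigma):
    """RDP budget of the Gaussian mechanism: (alpha, alpha * Delta_2^2 f / (2 sigma^2))-RDP."""
    return alpha * delta2_f ** 2 / (2.0 * sigma ** 2)

def rdp_to_dp(alpha, eps_rdp, delta):
    """Proposition 1: an (alpha, eps)-RDP mechanism is (eps + log(1/delta)/(alpha - 1), delta)-DP."""
    return eps_rdp + np.log(1.0 / delta) / (alpha - 1.0)

# Example: sensitivity 1, sigma = 5; optimize the resulting DP epsilon over the RDP order alpha.
best_eps = min(rdp_to_dp(a, rdp_epsilon(a, 1.0, 5.0), 1e-5) for a in range(2, 129))
```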
], [ "Sliced Wasserstein Distance", "Let $\\Omega \\in \\mathbb {R}^d $ be a probability space and $\\mathcal {P}(\\Omega )$ the set of all probability measures over $\\Omega $ .", "The Wasserstein distance between two measures $\\mu $ , $\\nu \\in \\mathcal {P}(\\Omega )$ is based on the so-called Kantorovitch relaxation of the optimal transport problem, which consists in finding a joint probability distribution $\\gamma ^\\star \\in \\mathcal {P}(\\Omega \\times \\Omega )$ such that $\\gamma ^\\star \\doteq \\mathop {\\mathrm {arg\\,min}}_{\\gamma \\in \\Pi (\\mu ,\\nu )} \\int _{\\Omega \\times \\Omega }c(x,x^\\prime ) d\\gamma (x,x^\\prime )$ where $c(\\cdot ,\\cdot )$ is a metric on $\\Omega $ , known as the ground cost (blackwhich in our case will be the Euclidean distance), $\\Pi (\\mu ,\\nu )\\doteq \\lbrace \\gamma \\in \\mathcal {P}(\\Omega \\times \\Omega ) |\\pi _{1\\#} \\gamma =\\mu ,\\pi _{2\\#} \\gamma =\\nu \\rbrace $ and $\\pi _1, \\pi _2$ are the marginal projectors of $\\gamma $ on each of its coordinates.", "The minimizer of this problem is the optimal transport plan and for $q\\ge 1$ , the $q$ -Wasserstein distance is $W_q(\\mu ,\\nu ) = \\Big ( \\inf _{\\gamma \\in \\Pi (\\mu ,\\nu )} \\int _{\\Omega \\times \\Omega }c(x,x^\\prime )^q d\\gamma (x,x^\\prime ) \\Big )^\\frac{1}{q}$ A case of prominent interest for our work is that of one-dimensional measures, for which it was shown by [26], [5] that the Wasserstein distance admits a closed-form solution blackwhich is $W_q(\\mu ,\\nu ) \\doteq \\Big (\\int _0^1 |F^{-1}_\\mu (z) - F^{-1}_\\nu (z)|^qdz\\Big )^\\frac{1}{q}$ where $F^{-1}_\\cdot $ is the inverse cumulative distribution function of the related distribution.", "This combines well with the idea of projecting high-dimensional probability distributions onto random 1-dimensional spaces and then computing the Wasserstein distance, an operation which can be theoretically formalized through the use of the Radon transform [5], leading to the so-called Sliced Wasserstein Distance $\\text{SWD}_q^q(\\mu ,\\nu ) \\doteq \\int _{\\mathbb {S}^{d-1}} W_q^q(\\mathcal {R}_\\mathbf {u}\\mu ,\\mathcal {R}_\\mathbf {u}\\nu ) u_d(\\mathbf {u})d\\mathbf {u}$ where $\\mathcal {R}_\\mathbf {u}$ is the Radon transform of a probability distribution so that black $\\mathcal {R}_\\mathbf {u}\\mu (\\cdot ) = \\int \\mu (\\mathbf {s}) \\delta (\\cdot - \\mathbf {s}^\\top \\mathbf {u})d\\mathbf {s}$ with $\\mathbf {u}\\in \\mathbb {S}^{d-1}\\doteq \\lbrace \\mathbf {u}\\in \\mathbb {R}^d : \\Vert \\mathbf {u}\\Vert _2=1\\rbrace $ be the $d$ -dimensional hypersphere and $u_d$ the uniform distribution on $\\mathbb {S}^{d-1}$ .", "blackIn practice, we only have access to $\\mu $ and $\\nu $ through samples, and the proxy distributions of $\\mu $ and $\\nu $ to handle are $\\hat{\\mu }\\doteq \\frac{1}{n} \\sum _{i=1}^{n} \\delta _{\\mathbf {x}_i}$ and $\\hat{\\nu }\\doteq \\frac{1}{m} \\sum _{i=1}^{m} \\delta _{\\mathbf {x}^\\prime _i}.$ By plugging those distributions into Equation REF , it is easy to show that the Radon transform depends only the projection of $\\mathbf {x}$ on $\\mathbf {u}$ .", "Hence, computing the sliced Wasserstein distance amounts to computing the average of 1D Wasserstein distances over a set of random directions $\\lbrace \\mathbf {u}_j\\rbrace _{j=1}^k$ , with each 1D probability distribution obtained by projecting a sample (of $\\hat{\\mu }$ or $\\hat{\\nu }$ ) on $\\mathbf {u}_i$ by $\\mathbf {x}^\\top \\mathbf {u}_i$ .", "This gives the following empirical 
approximation of SWD ${\text{SWD}}_q^q \approx \frac{1}{k} \sum _{j=1}^k W_q^q\left(\frac{1}{n} \sum _{i=1}^{n} \delta _{{\mathbf {x}_i}^\top \mathbf {u}_j},\frac{1}{m} \sum _{i=1}^{m} \delta _{{\mathbf {x}_i^\prime }^\top \mathbf {u}_j}\right)$ given a matrix $\mathbf {U}\in \mathbb {R}^{d \times k}$ with unit-norm columns $\mathbf {u}_j$ .", "[t] Private and Smoothed Sliced Wasserstein Distance [1] Input: a public matrix $\mathbf {X}_s$ and a private matrix $\mathbf {X}_t$ , both in $\mathbb {R}^{n \times d}$ , $\sigma $ the standard deviation of the Gaussian mechanism, $k$ the number of directions in SWD, $q$ the power in the SWD.", "// random projection: construct a random projection matrix $\mathbf {U}\in \mathbb {R}^{d \times k}$ with unit-norm columns.", "construct two random Gaussian noise matrices $\mathbf {V}_s$ and $\mathbf {V}_t$ of size $n \times k$ with standard deviation $\sigma $ // Gaussian mechanism: compute $\mathcal {M}(\mathbf {X}_s) = \mathbf {X}_s\mathbf {U}+ \mathbf {V}_s$ , $\mathcal {M}(\mathbf {X}_t) = \mathbf {X}_t\mathbf {U}+ \mathbf {V}_t$ $\text{DP}_\sigma \text{SWD}_q^q \leftarrow $ compute Equation (REF ) using $\mathcal {M}(\mathbf {X}_s)$  and $\mathcal {M}(\mathbf {X}_t)$ as the locations of the Diracs.", "Output: $\text{DP}_\sigma \text{SWD}_q^q$" ], [ "Private and Smoothed Sliced Wasserstein Distance", "We now introduce how we obtain a differentially private approximation of the Sliced Wasserstein Distance.", "To achieve this goal, we take advantage of the intrinsic randomization process that is embedded in the Sliced Wasserstein distance." ], [ "Sensitivity of Random Direction Projections", "In order to establish its $(\varepsilon ,\delta )$ -DP guarantee, we analyze the sensitivity of the random direction projection in SWD.", "Let us consider the matrix $\mathbf {X}\in \mathbb {R}^{n \times d}$ representing a dataset composed of $n$ examples in dimension $d$ organized in rows (each sample being randomly drawn from the distribution $\mu $ ).", "One mechanism of interest is $\mathcal {M}_u(\mathbf {X}) = \mathbf {X}\frac{\mathbf {u}}{\Vert \mathbf {u}\Vert _2} + \mathbf {v}$ where $\mathbf {v}$ is a vector whose entries are drawn from a zero-mean Gaussian distribution.", "Let $\mathbf {X}$ and $\mathbf {X}^\prime $ be two matrices in $\mathbb {R}^{n \times d}$ that differ only in one row, say $i$ , and such that $ \Vert \mathbf {X}_{i,:} - \mathbf {X}_{i,:}^\prime \Vert _2 \le 1$ , where $\mathbf {X}_{i,:}\in \mathbb {R}^d$ and $\mathbf {X}_{i,:}^\prime \in \mathbb {R}^d$ are the $i$ -th rows of $\mathbf {X}$ and $\mathbf {X}^\prime $ , respectively.", "For ease of notation, we will from now on use $\mathbf {z}\doteq (\mathbf {X}_{i,:} - \mathbf {X}_{i,:}^\prime )^{\top }.$ Lemma 1 Assume that $\mathbf {z}\in \mathbb {R}^d$ is a unit-norm vector and $\mathbf {u}\in \mathbb {R}^d$ a vector where each entry is drawn independently from $\mathcal {N}(0,\sigma _u^2)$ .", "Then $Y\doteq \Big (\mathbf {z}^\top \frac{\mathbf {u}}{\Vert \mathbf {u}\Vert _2}\Big )^2 \sim B(1/2, (d-1)/2)$ where $B(\alpha ,\beta )$ is the beta distribution of parameters $\alpha ,\beta $ .", "See appendix.", "Instead of considering a randomized mechanism that projects only according to a single random direction, we are interested in the whole set of projected (private) data according to the random directions sampled through the Monte-Carlo approximation of the
Sliced Wasserstein distance computation (REF ).", "Our key interest is therefore in the mechanism $\mathcal {M}(\mathbf {X}) = \mathbf {X}\mathbf {U}+ \mathbf {V}$ and in the sensitivity of $\mathbf {X}\mathbf {U}$ .", "Because of its randomness, we are interested in a probabilistic tail-bound of $\Vert \mathbf {X}\mathbf {U}- \mathbf {X}^\prime \mathbf {U}\Vert _F$ , where the matrix $\mathbf {U}$ has columns independently drawn from $\mathbb {S}^{d-1}$ .", "Lemma 2 Let $\mathbf {X}$ and $\mathbf {X}^\prime $ be two matrices in $\mathbb {R}^{n \times d}$ that differ only in one row, and for that row, say $i$ , $\Vert \mathbf {X}_{i,:} - \mathbf {X}_{i,:}^\prime \Vert _2 \le 1$ .", "Let $\mathbf {U}\in \mathbb {R}^{d \times k}$ be a matrix whose columns are independently drawn from $\mathbb {S}^{d-1}$ .", "With probability at least $1-\delta $ , we have $\left\Vert \mathbf {X}\mathbf {U}- \mathbf {X}^\prime \mathbf {U}\right\Vert _F^2\le w(k,\delta )$ with $w(k,\delta )\doteq \frac{k}{d}+\frac{2}{3}\ln \frac{1}{\delta } + \frac{2}{d}\sqrt{k\frac{d-1}{d+2}\ln \frac{1}{\delta }}$ See appendix.", "The above bound on the squared sensitivity has been obtained by first showing that the random variable $\left\Vert \mathbf {X}\mathbf {U}- \mathbf {X}^\prime \mathbf {U}\right\Vert _F^2$ is the sum of $k$ iid Beta-distributed random variables and then by using a Bernstein inequality.", "This bound, referred to as the Bernstein bound, is very conservative as soon as $\delta $ is very small.", "By invoking the Central Limit Theorem (CLT), assuming that $k$ is large enough ($k>30$ ), we get under the same hypotheses (proof is in the appendix) that $w(k,\delta ) = \frac{k}{d} + \frac{z_{1-\delta }}{d}\sqrt{\frac{2k(d-1)}{d+2}}$ where $z_{1-\delta }=\Phi ^{-1}(1-\delta )$ and $\Phi $ is the cumulative distribution function of a zero-mean unit-variance Gaussian distribution.", "This bound is far tighter but is not rigorous due to the CLT approximation.", "Figure REF presents an example of the probability distribution histogram of $ \Vert (\mathbf {X}- \mathbf {X}^\prime )\mathbf {U}\Vert _F^2 = \Vert (\mathbf {X}_{i,:} - \mathbf {X}_{i,:}^\prime )^\top \mathbf {U}\Vert _2^2$ for two fixed arbitrary $\mathbf {X}_{i,:}$ , $\mathbf {X}_{i,:}^\prime $ and for 10000 random draws of $\mathbf {U}$ .", "It shows that the CLT bound is numerically far smaller than the Bernstein bound of Lemma REF .", "Then, using the $w(k,\delta )$ -based bounds jointly with the Gaussian mechanism property gives us the following proposition.", "Figure: Estimated probability density of $\sum _i^k Y_i$ and the Normal distribution of same mean and standard deviation.", "Here, we have $k=200$ and $d=784$ , which corresponds to the dimensionality of MNIST digits and the number of random projections we use in the experiments.", "We also illustrate some bounds (on the squared sensitivity) that can be derived from this Normal distribution as well as our CLT bound.", "Note that the Bernstein bound is above 1 in this example, and that the CLT bound is numerically equal to the inverse CDF of the Normal distribution at the desired $\delta $ .", "Proposition 2 Let $\alpha >1$ and $\delta \in [0,1/2]$ ; given a random direction projection matrix $\mathbf {U}\in \mathbb {R}^{d\times k}$ , the Gaussian mechanism $\mathcal {M}(\mathbf {X})= \mathbf {X}\mathbf {U}+
\mathbf {V}$ , where $\mathbf {V}$ is a Gaussian matrix in $\mathbb {R}^{n \times k}$ with entries drawn from $\mathcal {N}(0,\sigma ^2)$ , is $(\frac{\alpha {w(k,\delta /2)}}{2\sigma ^2} + \frac{\log (2/\delta )}{\alpha -1},\delta )$ -DP.", "The claim follows immediately from the relation between RDP and DP and from Lemma REF applied with $\frac{\delta }{2}$ .", "The above DP guarantees apply to the full dataset.", "Hence, when learning through mini-batches, we benefit from the so-called privacy amplification by the “subsampling” principle, which ensures that a differentially private mechanism run on a random subsample of a population leads to higher privacy guarantees than when run on the full population [4].", "On the contrary, gradient clipping/sanitization acts individually on each gradient and thus does not fully benefit from the subsampling amplification, as its DP property may still depend on the batch size [9].", "This Gaussian mechanism on the random direction projections $\mathcal {M}(\mathbf {X})$ can be related to the definition of the empirical SWD, as each $\mathbf {x}_i^\top \mathbf {u}_j$ corresponds to one entry of $\mathbf {X}\mathbf {U}$ .", "Hence, by adding Gaussian noise to each projection, we naturally derive our empirical DP Sliced Wasserstein distance, which inherits the differential privacy property of $\mathcal {M}(\mathbf {X})$ , owing to the post-processing proposition [12]." ], [ "Metric Properties of DP-SWD", "We have analyzed the sensitivity of the random direction projection central to SWD and we have proposed a Gaussian mechanism to obtain a differentially private SWD (DP-SWD), whose steps are depicted in Algorithm REF .", "In our use-cases, DP-SWD is used in a context of learning to match two distributions (one of them requiring to be privately protected).", "Hence, the utility guarantees of our DP-SWD are more related to the ability of the mechanism to distinguish two different distributions than to the equivalence between SWD and DP-SWD.", "Our goal in this section is to investigate the impact of adding Gaussian noise to the source $\mu $ and target $\nu $ distributions in terms of distance properties in the population case.", "Since $\mathcal {R}_\mathbf {u}$ , as defined in Equation (REF ), is a push-forward operator of probability distributions, the Gaussian mechanism process implies that the Wasserstein distance involved in SWD compares two 1D probability distributions which are respectively the convolution of a Gaussian distribution with $\mathcal {R}_\mathbf {u}\mu $ and with $\mathcal {R}_\mathbf {u}\nu $ .", "Hence, we can consider that DP-SWD uses as a building block the 1D smoothed Wasserstein distance between $\mathcal {R}_\mathbf {u}\mu $ and $\mathcal {R}_\mathbf {u}\nu $ , with the smoothing being ensured by $\mathcal {N}_\sigma $ and its formal definition being, for $q \ge 1$ , $\text{DP}_\sigma \text{SWD}_q^q(\mu ,\nu ) \doteq \int _{\mathbb {S}^{d-1}} W_q^q(\mathcal {R}_\mathbf {u}\mu * \mathcal {N}_\sigma ,\mathcal {R}_\mathbf {u}\nu * \mathcal {N}_\sigma ) u_d(\mathbf {u})d\mathbf {u}$ While some works have analyzed the theoretical properties of the Smoothed Wasserstein distance [15], [14], as far as we know, no theoretical result is available for the smoothed Sliced Wasserstein distance, and we provide in the sequel some insights that help its understanding.", "The following property shows that DP-SWD preserves the identity of indiscernibles.", "Property 1 For continuous
probability distributions $\\mu $ and $\\nu $ , we have, for $q \\ge 1$ , $\\text{DP}_\\sigma \\text{SWD}_q^q(\\mu ,\\nu ) = 0 \\Leftrightarrow \\mu =\\nu $ $\\forall \\sigma > 0$ .", "Showing that $\\mu =\\nu \\Rightarrow \\text{DP}_\\sigma \\text{SWD}_q^q(\\mu ,\\nu ) = 0$ is trivial as the Radon transform and the convolution are two well-defined maps.", "We essentially would like to show that $\\text{DP}_\\sigma \\text{SWD}_q^q(\\mu ,\\nu ) = 0$ implies $\\mu =\\nu $ .", "If $\\text{DP}_\\sigma \\text{SWD}_q^q(\\mu ,\\nu ) = 0$ then $\\mathcal {R}_\\mathbf {u}\\mu * \\mathcal {N}_\\sigma = \\mathcal {R}_\\mathbf {u}\\nu * \\mathcal {N}_\\sigma $ for almost every $\\mathbf {u}\\in {\\mathbb {S}^{d-1}}$ .", "blackAs convolution yields to multiplication in the Fourier domain and because, the Fourier transform of a Gaussian is also a Gaussian and thus is always positive, one can show that we have for all $\\mathbf {u}$ equality of the Fourier transforms of $\\mathcal {R}_\\mathbf {u}\\mu $ and $\\mathcal {R}_\\mathbf {u}\\nu $ .", "Then, owing to the continuity of $\\mu $ and $\\nu $ and by the Fourier inversion theorem, we have $\\mathcal {R}_\\mathbf {u}\\mu = \\mathcal {R}_\\mathbf {u}\\nu $ .", "Finally, as for the SWD proof [6], this implies that $\\mu = \\nu $ , owing to the projection nature of the Radon Transform and because the Fourier transform is injective.", "Property 2 For $q \\ge 1$ , $\\text{DP}_\\sigma \\text{SWD}_q^q(\\mu ,\\nu )$ is symmetric and satisfies the triangle inequality.", "The proof easily derives from the metric properties of Smoothed Wasserstein distance [14] and details are in the appendix.", "These properties are strongly relevant in the context of our machine learning applications.", "Indeed, while they do not tell us how the value of DP-SWD compares with SWD, at fixed $\\sigma >0$ or when $\\sigma \\rightarrow 0$ , they show that they can properly act as (for any $\\sigma >0$ ) loss functions to minimize if we aim to match distributions (at least in the population case).", "Naturally, there are still several theoretical properties of $\\text{DP}_\\sigma \\text{SWD}_q^q$ that are worth investigating but that are beyond the scope of this work." 
], [ "DP-Distribution Matching Problems", "[t] Differentially private DANN with DP-SWD [1] $\\lbrace \\mathbf {X}_s,\\mathbf {y}_s\\rbrace , \\lbrace \\mathbf {X}_t\\rbrace $ , respectively the public and private domain, $\\sigma $ standard deviation of the Gaussian mechanism Initialize representation mapping $g$ , the classifier $h$ with parameters $\\theta _g$ , $\\theta _h$ sample minibatches $\\lbrace x_B^s,y_B^s\\rbrace $ from $\\lbrace x_i^s,y_i^s\\rbrace $ compute $g(x_B^s)$ compute the classification loss ${L}_c = \\sum _{i \\in B} L(y_i^s,h(g(x_i^s)))$ $\\theta _h \\leftarrow \\theta _h - \\alpha _h \\nabla _{\\theta _h} {L}_c $ // Private steps : $g(x_B^t)$ is computed in a private way.", "$g(\\cdot )$ is either transferred or has shared weights between public and private clients.", "sample minibatches $\\lbrace x_B^t\\rbrace $ from $\\lbrace x_B^t\\rbrace $ compute $g(x_B^t)$ normalize each sample $g(x_{B}^s)$ wrt $2\\max _j{\\Vert g(x_{B,j}^s)\\Vert _2}$ normalize each sample $g(x_{B}^t)$ wrt $2\\max _j{\\Vert g(x_{B,j}^t)\\Vert _2}$ compute $\\text{DP}_\\sigma \\text{SWD}(g(x_B^s), g(x_B^t))$ publish $\\nabla _{\\theta _g} \\text{DP}_\\sigma \\text{SWD}$ // public step $\\theta _g \\leftarrow \\theta _g - \\alpha _g \\nabla _{\\theta _g} L_c - \\alpha _g\\nabla _{\\theta _g} \\text{DP}_\\sigma \\text{SWD}$ a convergence condition is met There exists several machine learning problems where distance between distributions is the key part of the loss function to optimize.", "In domain adaptation, one learns a classifier from public source dataset but looks to adapt it to private target dataset (target domain examples are available only through a privacy-preserving mechanism).", "In generative modelling, the goal is to generate samples similar to true data which are accessible only through a privacy-preserving mechanism.", "In the sequel, we describe how our $\\text{DP}_\\sigma \\text{SWD}_q^q$ distance can be instantiated into these two learning paradigms for measuring adaptation or for measuring similarity between generated an true samples.", "For unsupervised domain adaptation, given source examples $\\mathbf {X}_s$ and their label $\\mathbf {y}_s$ and unlabeled private target examples $\\mathbf {X}_t$ , the goal is to learn a classifier $h(\\cdot )$ trained on the source examples that generalizes well on the target ones.", "One usual technique is to learn a representation mapping $g(\\cdot )$ that leads to invariant latent representations, invariance being measured as blacksome distance between empirical distributions of mapped source and target samples.", "Formally, this leads to the following learning problem $\\min _{g,h} L_c(h(g(\\mathbf {X}_s)),\\mathbf {y}_s) + \\text{DP}_\\sigma \\text{SWD}(g(\\mathbf {X}_s), g(\\mathbf {X}_t))$ where $L_c$ can be any loss function of interest and $\\text{DP}_\\sigma \\text{SWD}=\\text{DP}_\\sigma \\text{SWD}_q$ .", "We solve this problem through stochastic gradient descent, similarly to many approaches that use Sliced Wasserstein Distance as a distribution distance [20], except that in our case, the gradient of $\\text{DP}_\\sigma \\text{SWD}$ involving the target dataset is $(\\varepsilon ,\\delta )$ -DP.", "Note that in order to compute the $\\text{DP}_\\sigma \\text{SWD}$ , one needs the public dataset $\\mathbf {X}_s$ and the public generator.", "In practice, this generator can either be transferred, after each update, from the private client curating $\\mathbf {X}_t$ or can be duplicated on that client.", "The resulting algorithm 
is presented in Algorithm .", "In the context of generative modeling, we follow the same steps as [10] but use our $\text{DP}_\sigma \text{SWD}$ instead of SWD.", "Assuming that we have some examples of data $\mathbf {X}_t$ sampled from a given distribution, the goal of the learning problem is to learn a generator $g(\cdot )$ that outputs samples similar to those of the target distribution, with a given noise vector at its input.", "This is usually achieved by solving $\min _g \text{DP}_\sigma \text{SWD}(\mathbf {X}_t, g(z))$ where $z$ is for instance a Gaussian vector.", "In practice, we solve this problem using a mini-batching stochastic gradient descent strategy, following a similar algorithm to the one for domain adaptation.", "The main difference is that the private target dataset does not pass through the generator." ], [ "Tracking the privacy loss", "Given that we consider the privacy mechanism within a stochastic gradient descent framework, we keep track of the privacy loss through the RDP accountant proposed by [32] for composing subsampled private mechanisms.", "Hence, we used the PyTorch package [33] that they made available for estimating the noise standard deviation $\sigma $ given the $(\varepsilon ,\delta )$ budget, a number of epochs, a fixed batch size, the number of private samples, the dimension $d$ of the distributions to be compared and the number $k$ of projections used for $\text{DP}_\sigma \text{SWD}$ .", "Some examples of Gaussian noise standard deviations are reported in Table REF in the appendix." ], [ "DP Generative Models", "Most recent approaches [13] that proposed DP generative models considered it from a GAN perspective and applied DP-SGD [1] for training the model.", "The main idea for introducing privacy is to appropriately clip the gradient and to add calibrated noise to the model's parameter gradient during training [29], [9], [34].", "This added noise makes those models even harder to train.", "Furthermore, since the DP mechanism applies to each single gradient, those approaches do not fully benefit from the amplification induced by the subsampling (mini-batching) mechanism [4].", "The work of [9] uses gradient sanitization and achieves privacy amplification by training multiple discriminators, as in [18], and sampling from them for adversarial training.", "While their approach is competitive in terms of quality of generated data, it is hardly tractable for large-scale datasets, due to the multiple (up to 1000 in their experiments) discriminator trainings.", "Instead of considering adversarial training, some DP generative model works have investigated the use of distances on distributions.", "[17] proposed a random-feature-based maximum mean embedding distance for computing the distance between empirical distributions.", "[7] considered the Sinkhorn divergence for computing the distance between true and generated data and used gradient clipping and noise addition for privacy preservation.", "Their approach is then very similar to DP-SGD in the privacy mechanism.", "Instead, we perturb the Sliced Wasserstein distance by smoothing the distributions to compare.", "This yields a privacy mechanism that benefits from subsampling amplification, as its sensitivity does not depend on the number of samples, and that preserves its utility, as the smoothed Sliced Wasserstein distance is still a distance."
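As an illustration of how the generative objective $\min_g \text{DP}_\sigma\text{SWD}(\mathbf{X}_t, g(z))$ described above could be optimized, here is a hypothetical PyTorch-style sketch; it is not the authors' code, and the generator, the optimizer and the way the Gaussian noise is injected on both sides are our own assumptions (fresh directions and noise would be drawn at each step and accounted for via RDP composition):

```python
import torch

def dp_swd_loss(private_proj, generated, U, sigma, q=2):
    """Sliced-Wasserstein loss between noisy private projections and generator outputs.

    private_proj: (n, k) projections of the private minibatch, released with Gaussian noise added;
    generated: (n, d) samples g(z); U: (d, k) public unit-norm projection directions.
    """
    gen_proj = generated @ U + sigma * torch.randn(generated.shape[0], U.shape[1])
    a, _ = torch.sort(private_proj, dim=0)
    b, _ = torch.sort(gen_proj, dim=0)
    return torch.mean(torch.abs(a - b) ** q)

# Hypothetical training step for min_g DP-SWD(X_t, g(z)):
#   loss = dp_swd_loss(noisy_private_proj, generator(z), U, sigma)
#   loss.backward(); optimizer.step()
```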
], [ "Differential Privacy with Random Projections", "Sliced Wasserstein Distance leverages on Radon transform for mapping high-dimensional distributions into 1D distributions.", "This is related to projection on random directions and the sensitivity analysis of those projections on unit-norm random vector is key.", "The first use of random projection for differential privacy has been introduced by [19].", "Their approach was linked to the distance preserving property of random projections induced by the Johnson-Lindenstrauss Lemma.", "As a natural extension, [21] and [16] have applied this idea in the context of optimal transport and classification.", "The fact that we project on unit-norm random vector, instead of any random vector as in [19], requires a novel sensitivity analysis and we show that this sensitivity scales gracefully with ratio of the number of projections and dimension of the distributions." ], [ "Numerical Experiments", "In this section, we provide some numerical results showing how our differentially private Sliced Wasserstein Distance works in practice.", "The code for reproducing some of the results is available in https://github.com/arakotom/dp_swd." ], [ "Toy Experiment", "The goal of this experiment is to illustrate the behaviour of the DP-SWD compared with the SWD in controlled situations.", "We consider the source and target distributions as isotropic Normal distributions of unit variance with added blackprivacy-inducing Gaussian noise of different variances.", "Both distributions are Gaussian of dimension 5 and the means of the source and target are respectively $\\mathbf {m}_\\mu =0$ and $\\mathbf {m}_\\nu = c 1$ with $c \\in [0,1]$ .", "Figure REF presents the evolution of the distances averaged over 5 random draws of the Gaussian and noise.", "When source and target distributions are different, this experiment shows that DP-SWD follows the same increasing trend as SWD.", "This suggests that the order relation between distributions as evaluated using SWD is preserved by DP-SWD, and that the distance DP-SWD is minimized when $\\mu =\\nu $ , which are important features when using DP-SWD as a loss.", "Figure: Comparing SWD and DP-SWD by measuring the distance between two normal distributions (averaged over 5 draws of all samples).", "The comparison holds when the distance between the means of the Gaussians increases linearly, for different noise amplitudes of the Gaussian mechanism and different number of samples.", "(left) σ=1\\sigma =1.", "(right) σ=3\\sigma =3." 
], [ "Domain Adaptation", "We conduct experiments for evaluating our DP-SWD distance in the context of classical unsupervised domain adaptation (UDA) problems such as handwritten digit recognitions (MNIST/USPS), synthetic to real object data (VisDA 2017) and Office 31 datasets.", "Our goal is to analyze how DP-SWD performs compared with its public counterpart SWD [20], with one DP deep domain adaptation algorithm DP-DANN that is based on gradient clipping [31] and with the classical non-private DANN algorithm.", "Note that we need not compare with [21] as their algorithm does not learn representation and does not handle large-scale problems, as the OT transport matrix coupling need be computed on the full dataset.", "For all methods and for each dataset, we used the same neural network architecture for representation mapping and for classification.", "Approaches differ only on how distance between distributions have been computed.", "Details of problem configurations as well as model architecture and training procedure can be found in the appendix.", "Sensitivity has been computed using the Bernstein bound of Lemma REF .", "Table REF presents the accuracy on the target domain for all methods averaged over 10 iterations.", "We remark that our private model outperforms the DP-DANN approach on all problems except on two difficult ones.", "Interestingly, our method does not incur a loss of performance despite the private mechanism.", "This finding is confirmed in Figure REF where we plot the performance of the model with respect to the noise level $\\sigma $ (and thus the privacy parameter $\\varepsilon $ ).", "Our model is able to keep accuracy almost constant for $\\varepsilon \\in [3,10]$ .", "Table: Table of accuracy on the private target domain for different domain adaptation problems M-U, U-M refers to MNIST-USPS and USPS-MNIST the first listed data being the source domain.", "(D,W,A) refers to domains in the Office31 dataset.", "For all the problems, ε=10\\varepsilon =10 and δ\\delta depends on the number of examples in target domain.", "δ\\delta has been respectively setto 10 -3 10^{-3},10 -5 10^{-5},10 -6 10^{-6} for Office31, MNIST-USPS and VisDA.Figure: Examples of generate MNIST samples from (top) non-private SWD (middle) DP-SWD-b (bottom) MERF.Figure: Examples of generate FashionMNIST samples from (top) non-private SWD (middle) DP-SWD-b (bottom) MERF." 
], [ "Generative Models", "In the context of generative models, our first task is to generate synthetic samples for MNIST and Fashion MNIST dataset that will be afterwards used for learning a classifier.", "We compare with different gradient-sanitization strategies like DP-CGAN [29], and GS-WGAN [9] and a model MERF [17] that uses MMD as distribution distance.", "We report results for our DP-SWD using two ways for computing the sensitivity, by using the CLT bound and the Bernstein bound, respectively noted as DP-SWD-c and DP-SWD-b.", "All models are compared with the same fixed budget of privacy $(\\varepsilon ,\\delta )=(10,10^{-5})$ .", "Our implementation is based on the one of MERF [17], in which we just plugged our DP-SWD loss in place of the MMD loss.", "The architecture of ours and MERF's generative model is composed of few layers of convolutional neural networks and upsampling layers with approximately 180K parameters while the one of GS-WGAN is based on a ResNet with about 4M parameters.", "MERF's and our models have been trained over 100 epochs with an Adam optimizer and batch size of 100.", "For our DP-SWD we have used 1000 random projections and the output dimension is the classical $28\\times 28= 784$ .", "Table REF reports some quantitative results on the task.", "We note that MERF and our DP-SWD perform on par on these problems (with a slight advantage for MERF on FashionMNIST and for DP-SWD on MNIST).", "Note that our results on MERF are better than those reported in [9].", "We also remark that GS-WGAN performs the best at the expense of a model with 20-fold more parameters and a very expensive training time (few hours just for training the 1000 discriminators, while our model and MERF's take less than 10min).", "Figure REF and REF present some examples of generated samples for MNIST and FashionMNIST.", "We can note that the samples generated by DP-SWD present some diversity and are visually more relevant than those of MERF, although they do not lead to better performance in the classification task.", "Our samples are a bit blurry compared to the ones generated by the non-private SWD: this is an expected effect of smoothing.", "We also evaluate our DP-SWD distance for training generative models on large RGB datasets such as the $64 \\times 64 \\times 3$ CelebA dataset.", "To the best of our knowledge, no DP generative approaches have been experimented on such a dataset.", "For instance, the GS-WGAN of [9] has been evaluated only on grayscale MNIST-like problems.", "For training the model, we followed the same approach (architecture and optimizer) as the one described in [23].", "In that work, in order to reduce the dimension of the problems, distributions are compared in a latent space of dimension $d=8192$ .", "We have used $k=2000$ projections which leads to a ratio $\\frac{k}{d} <0.25$ .", "Noise variance $\\sigma $ and privacy loss over 100 iterations have been evaluated using the PyTorch package of [32] and have been calibrated for $\\epsilon =10$ and $\\delta =10^{-6}$ , since the number of training samples is of the order of 170K.", "Details are in the appendix.", "Figure REF presents some examples of samples generated from our DP-SWD and SWD.", "We note that in this high-dimensional context, the sensitivity bound plays a key role, as we get a FID score of 97 vs 158 respectively using CLT bound and Bernstein bound, the former being smaller than the latter.", "Figure: Images generated on CelebA dataset.", "From top to bottom.", "Non-private SWD, DP-SWD with noise 
calibrated according to the Gaussian approximation (CLT bound), DP-SWD with noise calibrated according to the Bernstein bound.", "The FID scores computed over 10000 generated examples for these three models are respectively 58, 97 and 149." ], [ "Conclusion", "This paper presents a differentially private distance on distributions based on the sliced Wasserstein distance.", "We applied a Gaussian mechanism on the random projection inherent to SWD and analyzed its properties.", "We proved a bound (à la Bernstein) on the sensitivity of the mechanism that has an inverse dependence on the problem dimension, and showed that a Central Limit Theorem bound, although approximate, is tighter.", "One of our key findings is that the privacy-inducing mechanism we proposed turns the SWD into a smoothed sliced Wasserstein distance, which inherits all the properties of a distance.", "Hence, our privacy-preserving distance can be plugged seamlessly into domain adaptation or generative model algorithms to give effective privacy-preserving learning procedures." ], [ "Lemma 1 and its proof", "Lemma 1 Assume that $\mathbf {z}\in \mathbb {R}^d$ is a unit-norm vector and $\mathbf {u}\in \mathbb {R}^d$ a vector where each entry is drawn independently from $\mathcal {N}(0,\sigma _u^2)$ .", "Then $Y\doteq \Big (\mathbf {z}^\top \frac{\mathbf {u}}{\Vert \mathbf {u}\Vert _2}\Big )^2 \sim B(1/2, (d-1)/2)$ where $B(\alpha ,\beta )$ is the beta distribution of parameters $\alpha ,\beta $ .", "At first, consider a vector of unit length in $\mathbb {R}^d$ , say $\mathbf {e}_1$ , that can be completed to an orthogonal basis.", "A change of basis from the canonical one does not change the length of a vector as the transformation is orthogonal.", "Thus the distribution of $\frac{(\mathbf {e}_1^\top \mathbf {u})^2}{\Vert \mathbf {u}\Vert _2^2} = \frac{(\mathbf {e}_1^\top \mathbf {u})^2}{\sum _i^d u_i^2}$ does not depend on $\mathbf {e}_1$ .", "$\mathbf {e}_1$ can be either the vector $(1,0,\cdots ,0)$ in $\mathbb {R}^d$ or $ \mathbf {z}$ (as $\mathbf {z}$ is a unit-norm vector).", "However, for simplicity, let us take $\mathbf {e}_1 = (1,0,\cdots ,0)$ ; we thus have $\frac{(\mathbf {e}_1^\top \mathbf {u})^2}{\Vert \mathbf {u}\Vert _2^2} = \frac{u_1^2}{\sum _i^d u_i^2}$ where the $u_i$ are iid from a normal distribution of standard deviation $\sigma _u$ .", "Hence, because $u_1$ and the $\lbrace u_i\rbrace _{i=2}^d$ are independent, the above distribution is equal to the one of $\frac{\sigma _u^2V}{\sigma _u^2V+\sigma _u^2 Z}$ where $V = u_1^2/\sigma _u^2 \sim \Gamma (1/2)$ (a chi-square distribution) and $Z = (\sum _{i=2}^{d} u_i^2)/\sigma _u^2 \sim \Gamma ((d-1)/2)$ , and thus $V/(V+Z)$ follows a beta distribution $B(1/2,(d-1)/2)$ .", "And the fact that $\mathbf {z}$ is also a unit-norm vector concludes the proof.", "A simulation of the random variable $Y$ and the resulting histogram is depicted in Figure REF .", "Remark 3 From the properties of the beta distribution, the expectation and variance are given by $\operatorname{\mathbb {E}}Y=\frac{1}{d}\quad \text{and}\quad \operatorname{\mathbb {V}}Y=\frac{2(d-1)}{d^2(d+2)}$ Remark 4 Note that if $\mathbf {z}$ is not of unit length then $Y$ follows $\Vert \mathbf {z}\Vert _2^2\, B(1/2,(d-1)/2)$ Figure: Estimation of the pdf of $Y$ in Lemma 1, for a fixed $\mathbf {z}$ , based on a histogram over 100000 samples of $\mathbf {u}$ .", "Here, we have $d=5$ ."
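Lemma 1 is easy to check numerically; the following sketch (ours, not from the paper) compares the empirical moments of $Y$ with those of the $B(1/2,(d-1)/2)$ distribution stated in Remark 3:

```python
import numpy as np
from scipy import stats

d, n_draws = 5, 100_000
rng = np.random.default_rng(0)
z = rng.normal(size=d)
z /= np.linalg.norm(z)                                   # fixed unit-norm z
u = rng.normal(size=(n_draws, d))                        # Gaussian draws of u
y = (u @ z) ** 2 / np.sum(u ** 2, axis=1)                # Y = (z^T u / ||u||_2)^2
# Compare with Beta(1/2, (d-1)/2): mean 1/d and variance 2(d-1)/(d^2(d+2)).
print(y.mean(), 1 / d)
print(y.var(), 2 * (d - 1) / (d ** 2 * (d + 2)))
print(stats.kstest(y, stats.beta(0.5, (d - 1) / 2).cdf))  # distributional check
```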
], [ "Lemma 2 and its proof", "Lemma 2 Suppose again that $\\mathbf {z}$ is unit norm.", "With probability at least $1-\\delta $ , we have $\\left\\Vert \\mathbf {X}\\mathbf {U}- \\mathbf {X}^\\prime \\mathbf {U}\\right\\Vert _F^2\\le w(k,\\delta ),\\multicolumn{2}{l}{\\text{with}}\\\\w(k,\\delta )\\doteq \\frac{k}{d}+\\frac{2}{3}\\ln \\frac{1}{\\delta } + \\frac{2}{d}\\sqrt{k\\frac{d-1}{d+2}\\ln \\frac{1}{\\delta }}$ First observe that: $H&\\doteq \\left\\Vert \\mathbf {X}\\mathbf {U}- \\mathbf {X}^\\prime \\mathbf {U}\\right\\Vert _F^2=\\left\\Vert (\\mathbf {X}- \\mathbf {X}^\\prime )\\mathbf {U}\\right\\Vert _F^2\\\\& = \\left\\Vert \\mathbf {z}^{\\top }\\mathbf {U}\\right\\Vert _2^2=\\sum _{j=1}^k\\left(\\mathbf {z}^{\\top }\\frac{\\mathbf {u}_j}{\\Vert \\mathbf {u}_j\\Vert }\\right)^2\\\\& = \\sum _{j=1}^k Y_j,\\text{ where } Y_j\\doteq \\left(\\mathbf {z}^{\\top }\\frac{\\mathbf {u}_j}{\\Vert \\mathbf {u}_j\\Vert }\\right)^2.$ Therefore, $H$ is the sum of $k$ iid $B(1/2, (d-1)/2)$ -distributed random variables.", "It is thus possible to use any inequality bounding $H$ from its mean to state a highly probable interval for $H$ .", "We here use inequality, that is tighter than Hoeffding inequality, whenever some knowledge is provided on the variance of the random variables considered.", "Recall that it states that if $Y_1,\\ldots ,Y_k$ and zero-mean independent RV with such that $|Y_i|\\le M$ a.s: $\\mathbf {P}\\left(\\sum _{j=1}^k Y_j\\ge t\\right)\\le \\exp \\left(-\\frac{t^2}{2\\sum _{j=1}^k\\mathbf {E}Y_j^2+\\frac{2}{3}Mt}\\right)$ For $H$ , we have $\\mathbf {E}H = \\sum _{j=1}^k\\mathbf {E}\\left(\\mathbf {z}^{\\top }\\frac{\\mathbf {u}_j}{\\Vert \\mathbf {u}_j\\Vert }\\right)^2=\\sum _{j=1}^k\\frac{1}{d}=\\frac{k}{d}$ and Bernstein's inequality gives $\\mathbf {P}\\left(H\\ge \\frac{k}{d}+t\\right)\\le \\exp \\left(-\\frac{t^2}{2kv_d+\\frac{2}{3}t}\\right),$ where $v_d=\\frac{2(d-1)}{d^2(d+2)}$ is the variance of each $(\\mathbf {z}^{\\top }\\mathbf {u}_j/\\Vert \\mathbf {u}_j\\Vert )^2$ beta distributed variable.", "Making the right hand side be equal to $\\delta $ , solving the second-order equation for $t$ give that, with probability at least $1-\\delta $ $H\\le \\frac{k}{d}+\\frac{2}{3}\\ln \\frac{1}{\\delta } + \\sqrt{2kv_d\\ln \\frac{1}{\\delta }}$ The proof follows directly from Lemma REF and the fact From the above lemma, we have a probabilistic bound on the sensitivity of the random direction projection and SWD .", "The lower this bound is the better it is, as less noise needed for achieving a certain $(\\varepsilon ,\\delta )$ -DP.", "Interestingly, the first and last terms in this bound have an inverse dependency on the dimension.", "Hence, if the dimension of space in which the DP-SWD has to be chosen, for instance, when considering latent representation, a practical compromise has to be performed between a smaller bound and a better estimation.", "Also remark that if $k<d$ , the bound is mostly dominated by the term $\\log (1/\\delta )$ .", "Compared to other random-projection bounds [30] which have a linear dependency in $k$ .", "For our bound, dimension also help in mitigating this dependency." 
], [ "Proof of the Central Limit Theorem based bound", "Proof with the Central Limit Theorem According to the Central Limit Theorem — whenever $k>30$ is the accepted rule of thumb — we may consider that $\\frac{H}{k}\\sim {\\cal N}\\left(\\frac{1}{d},\\frac{v_d}{k}\\right)$ i.e.", "$\\left(\\frac{H}{k}-\\frac{1}{d}\\right)\\sqrt{\\frac{k}{v_d}}\\sim {\\cal N}(0,1)$ and thus $\\mathbf {P}\\left(\\left(\\frac{H}{k}-\\frac{1}{d}\\right)\\sqrt{\\frac{k}{v_d}}\\ge t\\right)\\le 1 - \\Phi (t)$ Setting $1 - \\Phi (t)=\\delta $ gives $t=\\Phi ^{-1}(1-\\delta )\\doteq z_{1-\\delta }$ , and thus with probability at least $1-\\delta $ $H&\\le \\frac{k}{d} + z_{1-\\delta }\\sqrt{k{v_d}}\\\\&=\\frac{k}{d} + \\frac{z_{1-\\delta }}{d}\\sqrt{\\frac{2k(d-1)}{d+2}}$" ], [ "Proof of Property 2.", "Property 2 $\\text{DP}_\\sigma \\text{SWD}_q^q(\\mu ,\\nu )$ is symmetric and satisfies the triangle inequality for $q = 1$ .", "The symmetry trivially comes from the definition of $\\text{DP}_\\sigma \\text{SWD}_q^q(\\mu ,\\nu )$ that is $\\text{DP}_\\sigma \\text{SWD}_q^q(\\mu ,\\nu ) =\\mathbf {E}_{ \\mathbf {u}\\sim \\mathbb {S}^{d-1}} W_q^q(\\mathcal {R}_\\mathbf {u}\\mu * \\mathcal {N}_\\sigma ,\\mathcal {R}_\\mathbf {u}\\nu * \\mathcal {N}_\\sigma )$ and the fact the Wasserstein distance is itself symmetric.", "Regarding the triangle inequality for $q \\ge 1$ , our result is based on a very recent result showing that the smoothed Wasserstein for $q\\ge 1$ is also a metric [25] (Our proof is indeed valid for $q \\ge 1$ , as this recent result generalizes the one of [14] ).", "Hence, we have $\\text{DP}_\\sigma \\text{SWD}_q(\\mu ,\\nu ) &=\\Big [\\mathbf {E}_{ \\mathbf {u}\\sim \\mathbb {S}^{d-1}} W_q^q(\\mathcal {R}_\\mathbf {u}\\mu * \\mathcal {N}_\\sigma ,\\mathcal {R}_\\mathbf {u}\\nu * \\mathcal {N}_\\sigma )\\Big ]^{1/q} \\nonumber \\\\&\\le \\Big [\\mathbf {E}_{ \\mathbf {u}\\sim \\mathbb {S}^{d-1}} \\big ( W_q(\\mathcal {R}_\\mathbf {u}\\mu * \\mathcal {N}_\\sigma ,\\mathcal {R}_\\mathbf {u}\\xi * \\mathcal {N}_\\sigma ) \\nonumber \\\\&+ W_q(\\mathcal {R}_\\mathbf {u}\\xi * \\mathcal {N}_\\sigma ,\\mathcal {R}_\\mathbf {u}\\nu * \\mathcal {N}_\\sigma )\\big )^q \\Big ]^{1/q} \\nonumber \\\\ &\\le \\Big [\\mathbf {E}_{ \\mathbf {u}\\sim \\mathbb {S}^{d-1}} W_q^q(\\mathcal {R}_\\mathbf {u}\\mu * \\mathcal {N}_\\sigma ,\\mathcal {R}_\\mathbf {u}\\xi * \\mathcal {N}_\\sigma ) \\Big ]^{1/q} \\nonumber \\\\&+ \\Big [\\mathbf {E}_{ \\mathbf {u}\\sim \\mathbb {S}^{d-1}} W_q^q(\\mathcal {R}_\\mathbf {u}\\xi * \\mathcal {N}_\\sigma ,\\mathcal {R}_\\mathbf {u}\\nu * \\mathcal {N}_\\sigma ) \\Big ]^{1/q} \\nonumber \\\\ &\\le \\text{DP}_\\sigma \\text{SWD}_q(\\mu ,\\xi ) + \\text{DP}_\\sigma \\text{SWD}_q(\\xi ,\\nu ) \\hfill ~\\nonumber $ where the first inequality comes from the fact that the smoothed Wassertein distance $W_q( \\mu * \\mathcal {N}_\\sigma , \\nu * \\mathcal {N}_\\sigma ) $ is a metric and satisfies the triangle inequality and the second one follows from the application of the Minkowski inequality." 
], [ "Dataset details", "We have considered 3 families of domain adaptation problems based on Digits, VisDA, Office-31.", "For all these datasets, we have considered the natural train/test number of examples.", "For the digits problem, we have used the MNIST and the USPS datasets.", "For MNIST-USPS and USPS-MNIST, we have respectively used 60000-7438, 7438-10000 samples.", "The VisDA 2017 problem is a 12-class classification problem with source and target domains being simulated and real images.", "The Office-31 is an object categorization problem involving 31 classes with a total of 4652 samples.", "There exists 3 domains in the problem based on the source of the images : Amazon (A), DSLR (D) and WebCam (W).", "We have considered all possible pairwise source-target domains.", "For the VisDA and Office datasets, we have considered Imagenet pre-trained ResNet-50 features and our feature extractor (which is a fully-connected feedforword networks) aims at adapting those features.", "We have used pre-trained features freely available at https://github.com/jindongwang/transferlearning/blob/master/data/dataset.md." ], [ "Digits", "For the MNIST-USPS problem, the architecture of our feature extractor is composed of the two CNN layers with 32 and 20 filters of size $5\\times 5$ .", "The feature extractor uses a ReLU activation function a max pooling at the first layer and a sigmoid activation function at the second one.", "For the classification head, we have used a 2-layer fully connected networks as a classifier with 100 and 10 units." ], [ "VisDA", "For the VisDA dataset, we have considered pre-trained 2048 features obtained from a ResNet-50 followed by 2 fully connected networks with 100 units and ReLU activations.", "The latent space is thus of dimension 100.", "Discriminators and classifiers are also a 2 layer fully connected networks with 100 and respectively 1 and “number of class” units." ], [ "Office 31", "For the Office dataset, we have considered pre-trained 2048 features obtained from a ResNet-50 followed by two fully connected networks with output of 100 and 50 units and ReLU activations.", "The latent space is thus of dimension 50.", "Discriminators and classifiers are also a 2 layer fully connected networks with 50 and respectively 1 and “number of class” units.", "For Digits, VisDA and Office 31 problems, all models have been trained using Adam with learning rate validated on the non-private model." 
], [ "Architecture details for generative modelling.", "For the MNIST, FashionMNIST generative modelling problems, we have used the implementation of MERF available at https://github.com/frhrdr/dp-merf and plugged in our $\\text{DP}_\\sigma \\text{SWD}$ distance.", "The generator architecture we used is the same as theirs and detailed in Table REF .", "The optimizer is an Adam optimizer with the default $0.0001$ learning rate.", "The code dimension is 10 and is concatenated with the one-hot encoding of the 10 class label, leading to an overall input distribution of 20.", "For the CelebA generative modelling, we used the implementation of [24] available at https://github.com/VinAIResearch/DSW.", "The generator mixes transpose convolution and batch normalization as described in Table REF .", "The optimizer is an Adam optimizer with a learning rate of $0.0005$ .", "Again, we have just plugged in our $\\text{DP}_\\sigma \\text{SWD}$ distance.", "Table: Description of the generator for the MNIST and FashionMNIST dataset.", "Table: Model hyperparameters and privacy for achievinga ε-δ\\varepsilon -\\delta privacy with ε=10\\varepsilon =10 and δ\\delta depending onthe size of the private dataset.", "The four first lines refers to the domain adaptation problems and the data to protect is the private one.", "The last two rows refer to the generative modelling problems.", "The noise σ\\sigma has been obtained usingthe RDP based moment accountant of .Table: Description of the generator for the CelebA dataset.", "The input codeis of size 32 and the output is 64×64×364 \\times 64 \\times 3." ] ]
2107.01848
[ [ "Schubert Eisenstein series and Poisson summation for Schubert varieties" ], [ "Abstract The first author and Bump defined Schubert Eisenstein series by restricting the summation in a degenerate Eisenstein series to a particular Schubert variety.", "In the case of $\\mathrm{GL}_3$ over $\\mathbb{Q}$ they proved that these Schubert Eisenstein series have meromorphic continuations in all parameters and conjectured the same is true in general.", "We revisit their conjecture and relate it to the program of Braverman, Kazhdan, Lafforgue, Ng\\^o, and Sakellaridis aimed at establishing generalizations of the Poisson summation formula.", "We prove the Poisson summation formula for certain schemes closely related to Schubert varieties and use it to refine and establish the conjecture of the first author and Bump in many cases." ], [ "Introduction", "In this paper we prove the Poisson summation conjecture of Braverman-Kazhdan, Lafforgue, Ngô, and Sakellaridis for a particular family of varieties related to Schubert varieties (see Theorem REF ).", "As an application we prove (in many cases) the functional equations of Schubert Eisenstein series conjectured to exist by Bump and the first author in [4].", "We begin by recalling the definition of the Schubert Eisenstein series and then move to a discussion of the Poisson summation formula." ], [ "Generalized Schubert Eisenstein series", "Let $F$ be a global field with ring of adeles $\\mathbb {A}_F,$ and let $G$ be a (connected) split reductive group over $F.$ We let $T \\le B \\le P \\le G$ be a maximal split torus, a Borel subgroup, and a parabolic subgroup of $G.$ Moreover we let $T \\le M \\le P$ be the Levi subgroup of $P$ containing $T$ .", "Let $N_G(T)$ be the normalizer of $T$ in $G$ and let $W(G,T):=N_G(T)(F)/T(F)$ be the Weyl group of $T$ in $G.$ Finally we let $\\omega _P :M^{\\mathrm {ab}} \\longrightarrow \\mathbb {G}_m^{k+1}$ be the isomorphism of (REF ).", "Here $M^{\\mathrm {ab}}=M/M^{\\mathrm {der}}$ is the abelianization of $M$ , where $Q^{\\mathrm {der}}$ is the derived group of an algebraic group $Q$ .", "Let $A_{\\mathbb {G}_m} <F_\\infty ^\\times $ be the usual subgroup (see (REF )) and let $\\chi :(A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times )^{k+1} \\rightarrow \\mathbb {C}^\\times $ be a character.", "For $s \\in \\mathbb {C}^{k+1}$ we define $\\chi _s(a_0,\\dots ,a_k):=\\chi (a_0,\\dots ,a_k)\\prod _{i=0}^k|a_i|^{s_i}.$ We form the induced representation $I_P(\\chi _s):=\\mathrm {Ind}_{P}^G(\\chi _s \\circ \\omega _P),$ normalized so that it is unitary when $s \\in (i\\mathbb {R})^{k+1}.$ Using the Bruhat decomposition of $G$ one has a decomposition of the generalized flag variety $P \\backslash G=\\coprod _{w \\in W(M,T) \\backslash W(G,T)}P \\backslash PwB.$ Here we have used the same symbol $w$ for a class in $W(M,T) \\backslash W(G,T)$ and for a representative of that class.", "Let $X_w$ be the (Zariski) closure of the Schubert cell $P \\backslash PwB$ in $P \\backslash G.$ It is a Schubert variety.", "The Schubert Eisenstein series attached to a section $\\Phi ^{\\chi _s} \\in I_P(\\chi _s),$ $g \\in G(\\mathbb {A}_F)$ and $w \\in W(G,T)$ is defined as $SE_w(g,\\Phi ^{\\chi _s}):=\\sum _{\\gamma \\in X_w(F)} \\Phi ^{\\chi _s}(\\gamma g).$ It converges absolutely provided that $\\mathrm {Re}(s)$ lies in a suitable cone.", "The function $SE_w(g,\\Phi ^{\\chi _s})$ is no longer left $G(F)$ -invariant.", "However, it is invariant under the stabilizer of the Schubert cell $X_w$ under the natural action 
of $G$ on $P \\backslash G.$ Since this stabilizer contains $B,$ it is a parabolic subgroup, and it is often larger than $B.$ In [4] Bump and the first author posed the following questions: Does the Schubert Eisenstein series admit a meromorphic continuation to all $s$ ?", "Do a subset of the functional equations for the full Eisenstein series continue to hold for the Schubert Eisenstein series?", "Is it possible to find a linear combination of Schubert Eisenstein series which is entire?", "We generalize the definition of Schubert Eisenstein series in higher rank in a manner that can be specialized to the definition in the $G=\\mathrm {GL}_3$ case treated in [4].", "We then answer all of these questions in great generality.", "We refer to Theorem REF and the subsequent remarks for details.", "Before proceeding, let us examine the setting considered in [4] and isolate the change in viewpoint that is the germ of our work.", "Let $G=\\mathrm {GL}_3$ and let $B<\\mathrm {GL}_3$ be the Borel subgroup of upper triangular matrices.", "From the point of view of this paper the most interesting Schubert variety in this setting is that attached to the Bruhat cell $Bs_1s_2B$ where $s_1={\\left({\\begin{matrix} & 1& \\\\ 1 & & \\\\ & & & 1\\end{matrix}}\\right)}, \\quad s_2= {\\left({\\begin{matrix} 1 & & \\\\ & & 1 \\\\ &1& \\end{matrix}}\\right)}.$ In this case using standard facts on the Bruhat ordering one has $ \\overline{Bs_1s_2B}=Bs_1s_2B \\amalg Bs_1B \\amalg Bs_2B \\amalg B.$ Let $R$ be an $F$ -algebra.", "Then $ \\overline{Bs_1s_2B}(R)=\\left\\lbrace {\\left({\\begin{matrix} a & b & c\\\\ d & e & f\\\\ & g & h \\end{matrix}}\\right)} \\in \\mathrm {GL}_3(R)\\right\\rbrace .$ Indeed, the set on the right is the $R$ -points of an irreducible closed subscheme of $\\mathrm {GL}_3$ that contains the irreducible closed subscheme $\\overline{Bs_1s_2B},$ hence we deduce equality by considering dimensions.", "If we let $P_{2,1}(R):=\\left\\lbrace {\\left({\\begin{matrix} a & b & c\\\\ d & e & f\\\\ & & h \\end{matrix}}\\right)} \\in \\mathrm {GL}_3(R)\\right\\rbrace , \\quad P_{1,2}(R):=\\left\\lbrace {\\left({\\begin{matrix} a & b &c \\\\ & e & f\\\\ & g & h \\end{matrix}}\\right)} \\in \\mathrm {GL}_3(R)\\right\\rbrace $ then $ \\overline{Bs_1s_2B}=P_{2,1}s_1s_2P_{1,2}.$ Indeed, it is easy to see that each Bruhat cell on the right of (REF ) is contained in $P_{2,1}s_1s_2P_{1,2},$ and it is clear on the other hand from (REF ) that $P_{2,1}s_1s_2P_{1,2} \\subseteq \\overline{Bs_1s_2B}.$ The natural map $P_{2,1} \\times P_{1,2} \\longrightarrow \\overline{Bs_1s_2B}$ is a lift of the Bott-Samelson resolution of the image $X_{s_1s_2}$ of $\\overline{Bs_1s_2B}$ in $B \\backslash \\mathrm {SL}_3.$ This was the point of departure for the arguments in [4].", "Instead of pursuing Schubert varieties and the Bott-Samelson resolution to study higher rank analogues of the left hand side of (REF ), we generalize and study the right hand side directly.", "Let us step back to consider the situation for a general reductive group $G.$ Consider the closure $\\overline{PwB}$ in $G.$ It is the union of Bruhat cells $\\cup _{w^{\\prime }\\le w} Pw^{\\prime }B,$ where $\\le $ denotes the Bruhat order.", "The group $G$ acts on itself on the left, and we let $Q$ be the stabilizer of the closed subscheme $\\overline{PwB} \\subset G$ (i.e.", "the algebraic subgroup of $G$ sending $\\overline{PwB}$ to itself).", "This is a closed algebraic subgroup of $G$ [35].", "The group $Q$ contains $P,$ and hence is a parabolic subgroup.", 
"Thus rather than studying $\\overline{PwB},$ for any parabolic subgroup $P^{\\prime }$ with $P \\le P^{\\prime } \\le Q$ we can study generalized Schubert Eisenstein series built from sums over $P^{\\prime }wB.$ In fact, there is nothing special about $B,$ and it is more interesting from an automorphic perspective to replace $B$ by the largest possible group that preserves $\\overline{PwB}$ under right multiplication.", "Thus we could work with $ P^{\\prime } \\gamma H$ where $P \\le P^{\\prime } \\le G$ are a pair of parabolic subgroups, $\\gamma \\in G(F)$ and $H$ is an arbitrary algebraic subgroup of $G.$ From the automorphic point of view this may be the most important situation.", "However, not all Schubert cells are the image in $P \\backslash G$ of a set of the form (REF ).", "Indeed, Schubert cells are often nonsmooth, whereas the image of any set of the form $P^{\\prime }\\gamma H$ in $P \\backslash G$ is a smooth subscheme (this follows from lemmas REF and REF ).", "In order to treat Eisenstein series indexed by sets of the form $Y$ and $\\overline{PwB}$ simultaneously we work with an arbitrary (locally closed) subscheme $Y \\subseteq G$ that is stable under left multiplication by $P^{\\prime }.$ Let $X_P^\\circ :=P^{\\mathrm {der}} \\backslash G$ be the Braverman-Kazhdan space associated to $P$ and $G.$ Let $ Y_{P}=\\mathrm {Im}(Y \\longrightarrow X_{P}^\\circ ).$ To be more precise, the set theoretic image of $Y \\rightarrow X_P^{\\circ }$ is locally closed by Lemma REF .", "This set is the underlying topological space of a subscheme $Y_P$ of $X_P^{\\circ }.$ For some examples when $G=\\mathrm {SL}_n$ we refer to §REF .", "The subscheme $Y_P \\subseteq X_P^\\circ $ is quasi-affine.", "Let $X_P$ be the affine closure of $X_P^\\circ $ and let $ Y_{P,P^{\\prime }}\\subseteq X_{P}$ be the partial closure of $Y_{P}$ in $X_P$ constructed in (REF ) below.", "We observe that there is a natural action of $M^{\\mathrm {ab}}$ on $Y_{P,P^{\\prime }}$ that preserves $Y_{P}$ (see (REF )).", "Without essential loss of generality we assume $G$ is simple and simply connected.", "We additionally make the following technical assumption: $ P \\textrm { is maximal in }P^{\\prime }.$ Under (REF ) there is a unique parabolic subgroup $P^{*} <P^{\\prime }$ with Levi subgroup $M$ that is not equal to $P.$ Thus one might think of $P^{*}$ as the opposite parabolic of $P$ with respect to $P^{\\prime }.$ In § we define Schwartz spaces $\\mathcal {S}(Y_{Q,P^{\\prime }}(\\mathbb {A}_F))$ for $Q \\in \\lbrace P,P^*\\rbrace $ together with a Fourier transform $\\mathcal {F}_{P|P^*}:\\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F)) \\tilde{\\longrightarrow } \\mathcal {S}(Y_{P^*,P^{\\prime }}(\\mathbb {A}_F)).$ The Schwartz space $\\mathcal {S}(Y_{Q,P^{\\prime }}(\\mathbb {A}_F))$ is contained in the set of restrictions to $Y_{P}(\\mathbb {A}_F)$ of functions in $C^\\infty (X_P^\\circ (\\mathbb {A}_F))$ .", "Let $H \\le G$ be a subgroup, and consider the action of $H$ on $G$ by right multiplication.", "Assume that $Y$ is stable under the action of $H.$ Then the Schwartz spaces $\\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ and $\\mathcal {S}(Y_{P^*,P^{\\prime }}(\\mathbb {A}_F))$ are preserved under the action of $M^{\\mathrm {ab}}(\\mathbb {A}_F) \\times H(\\mathbb {A}_F)$ of (REF ) and the Fourier transform satisfies a twisted equivariance property by Lemma REF .", "Let $I_{P^*}^*(\\chi _s):=\\mathrm {Ind}_{P^*}^{G}(\\chi \\circ \\omega _P).$ The $*$ indicates that we are inducing $\\chi \\circ \\omega 
_P,$ not $\\chi \\circ \\omega _{P^*}.$ The group $M^{\\mathrm {ab}} $ acts on $Y_{P}$ and $Y_{P^*}$ on the left, and hence we obtain Mellin transforms $ \\begin{split}\\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F)) &\\longrightarrow I_P(\\chi _s)|_{Y_{P}(\\mathbb {A}_F)} \\\\f &\\longmapsto f_{\\chi _s}(\\cdot ):=f_{\\chi _s,P}(\\cdot ):=\\int _{M^{\\mathrm {ab}}(\\mathbb {A}_F)}\\delta _{P}^{1/2}(m)\\chi _s(\\omega _P(m))f(m^{-1}\\cdot )dm, \\\\\\mathcal {S}(Y_{P^*,P^{\\prime }}(\\mathbb {A}_F)) &\\longrightarrow I_{P^*}^*(\\chi _s)|_{Y_{P^*}(\\mathbb {A}_F)} \\\\f &\\longmapsto f_{\\chi _s}^*(\\cdot ):=f_{\\chi _s,P^*}^*(\\cdot ):=\\int _{M^{\\mathrm {ab}}(\\mathbb {A}_F)}\\delta _{P^*}^{1/2}(m)\\chi _s(\\omega _P(m))f(m^{-1}\\cdot )dm.\\end{split}$ Here $\\delta _Q$ is the modular quasi-character of an algebraic group $Q$ .", "The fact that the Mellin transform $f_{\\chi _s}$ (resp.", "$f_{\\chi _s}^*$ ) is absolutely convergent for $\\mathrm {Re}(s_0)$ large (resp.", "$\\mathrm {Re}(s_0)$ small) is built into the definition of the Schwartz space.", "Remark We will write $\\Phi ^{\\chi _s}$ for a section of $I_P(\\chi _s)$ that is not necessarily a Mellin transform of an element $f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F)).$ We take the analogous convention in the local setting and when $I_P(\\chi _s)$ is replaced by $I_{P^*}^*(\\chi _s).$ For $f_1 \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F)),$ $f_2 \\in \\mathcal {S}(Y_{P^*,P^{\\prime }}(\\mathbb {A}_F))$ we define generalized Schubert Eisenstein series $ \\begin{split}E_{Y_P}(f_{1\\chi _s}):&=\\sum _{y\\in M^{\\mathrm {ab}}(F) \\backslash Y_{P}(F)}f_{1\\chi _s}(y),\\\\E_{Y_{P^*}}^*(f_{2\\chi _s}^*):&=\\sum _{y^* \\in M^{\\mathrm {ab}}(F) \\backslash Y_{P^*}(F)}f_{2\\chi _s}^*(y^*).\\end{split}$ These sums converge absolutely for $\\mathrm {Re}(s_0)$ sufficiently large (resp.
small).", "To help motivate this definition we point out that Lemma REF implies that $P(F) \\backslash Y(F) &\\longrightarrow M^{\\mathrm {ab}}(F) \\backslash Y_{P}(F)$ is a bijection, and the same is true if we replace $P$ by $P^*.$ Theorem 1.1 Let $f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ .", "Assume that $F$ is a function field, or $F$ is a number field and Conjecture REF is valid.", "Then $E_{Y_P}(f_{\\chi _s})$ and $E_{Y_{P^*}}(\\mathcal {F}_{P|P^*}(f)_{\\chi _s}^*)$ are meromorphic in $s$ .", "Moreover one has $E_{Y_P}(f_{\\chi _s})=E^*_{Y_{P^*}}(\\mathcal {F}_{P|P^*}(f)_{\\chi _s}^*).$ Conjecture REF below states that certain normalized degenerate Eisenstein series have only finitely many poles.", "It is known if $G=\\mathrm {SL}_n$ and in several other cases (see the remark after the statement of the conjectures below).", "Remarks.", "(a) Let $P_{\\mathrm {max}}$ be the stabilizer of $\\overline{PwB}$ under the left action of $G$ .", "Then one expects a family of functional equations and analytic continuation for $E_{P \\backslash \\overline{PwB}}(f_{\\chi _s})$ analogous to the family of functional equations for the degenerate Eisenstein series on a Levi subgroup $M_{\\mathrm {max}}$ of $P_{\\mathrm {max}}$ attached to the parabolic subgroup $P \\cap M_{{\\mathrm {max}}} < M_{{\\mathrm {max}}}$ .", "In the special case where $P<P_{\\mathrm {max}}$ is maximal, the theorem provides this, and answers questions (a) and (b) above.", "The proof also shows that we can choose a linear combination of Schubert Eisenstein series that is entire, answering question (c).", "With some effort, our methods should generalize to treat the case where $P$ is not necessarily maximal in $P_{\\mathrm {max}}.$ We have already seen that one can choose the data so that $Y_{P,P^{\\prime }}$ is a lift to $X_P$ of a partial closure of a Schubert cell.", "Schubert varieties admit analogues in any Kac-Moody group, and it would be interesting to generalize Theorem REF to this setting.", "The key observation here is that, unlike Kac-Moody groups and their flag varieties, which in general are infinite-dimensional, the Schubert cells in the flag varieties of Kac-Moody groups are finite-dimensional.", "It is an intriguing problem to investigate whether generalized Schubert Eisenstein series can be used to produce integral representations of automorphic $L$ -functions.", "See §REF below for more details.", "For example, one could try to generalize the famous doubling method of Piatetski-Shapiro and Rallis [20].", "For some information about the possible poles of $E_{Y_P}(f_{\\chi _s}),$ see Corollary REF .", "The Poisson summation conjecture To prove Theorem REF we prove new cases of a seminal conjecture due to Braverman and Kazhdan [7].", "The conjecture was later discovered independently by Lafforgue [30] and refined by Ngô [37].", "It was partially set in the framework of spherical varieties by Sakellaridis [41].", "Here a spherical variety $X$ for a reductive group $G$ over $F$ is a normal integral separated $G$ -scheme $X$ of finite type over $F$ such that $X_{\\overline{F}}$ admits an open orbit under a Borel subgroup of $G_{\\overline{F}}.$ The conjecture can be roughly formulated as follows.", "Assume that $X$ is an affine spherical variety with smooth locus $X^{\\mathrm {sm}}$ .", "Then there should be a Schwartz space $ \\mathcal {S}( X(\\mathbb {A}_F)) < C^{\\infty }(X^{\\mathrm {sm}}(\\mathbb {A}_F))$ and a Fourier transform $\\mathcal {F}_X:\\mathcal {S}( X(\\mathbb {A}_F)) 
\\longrightarrow \\mathcal {S}(X (\\mathbb {A}_F))$ satisfying a certain twisted-equivariance property under $G(\\mathbb {A}_F)$ such that for $f \\in \\mathcal {S}(X(\\mathbb {A}_F))$ satisfying certain local conditions one has $\\sum _{x\\in X^{\\mathrm {sm}}(F)}f(x)=\\sum _{x \\in X^{\\mathrm {sm}}(F)}\\mathcal {F}_X(f)(x).$ We refer to this (somewhat vaguely stated) conjecture as the Poisson summation conjecture.", "The original motivation, explored in [7], [37], [38], is that it implies the meromorphic continuation and functional equation of Langlands $L$ -functions in great generality.", "By the converse theorem [12] this would imply Langlands functoriality in great generality.", "Remark We highlight the possibly confusing convention that functions in $\\mathcal {S}(X(\\mathbb {A}_F))$ need not be defined on all of $X(\\mathbb {A}_F),$ only on $X^{\\mathrm {sm}}(\\mathbb {A}_F).$ One expects that for each place $v$ elements of $\\mathcal {S}(X(F_v))$ are functions in $C^\\infty (X^{\\mathrm {sm}}(F_v))$ that are rapidly decreasing away from the singular locus $(X-X^{\\mathrm {sm}})(F_v)$ and have particular asymptotic behavior as one approaches $(X-X^{\\mathrm {sm}})(F_v).$ This was conjectured in [38] in a special case and is expected to be true in general.", "The only case of the Poisson summation conjecture that is completely understood is the case where $X$ is a vector space.", "For the affine closures $X_P$ of the Braverman-Kazhdan spaces $X_P^\\circ $ much of the conjecture is known [19], [15], [16], [29], [43].", "There are some additional examples in [18], [15], [22].", "However, the cases that are known are still very limited.", "In order to prove Theorem REF we prove the Poisson summation conjecture for $Y_{P,P^{\\prime }}$ .", "We do not know if $Y_{P,P^{\\prime }}$ is affine, but it is clearly quasi-affine.", "We also do not know whether it is always spherical under the action of a suitable reductive subgroup of $H$ , but this is true in many cases [14], [25].", "In Theorem REF we state our Poisson summation formula in an imprecise form.", "The precise form is given in Theorem REF below.", "Let $K_M$ be the maximal compact subgroup of $M^{\\mathrm {ab}}(\\mathbb {A}_F)$ .", "Theorem 1.2 Let $f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ .", "Assume that one of the following holds: $F$ is a function field; $F$ is a number field, Conjecture REF is valid, and $f$ is $K_M$ -finite; or $F$ is a number field and Conjecture REF is valid.", "One has $\\sum _{y \\in Y_{P}(F)}f(y )+*=\\sum _{y^* \\in Y_{P^*}(F)}\\mathcal {F}_{P|P^*}(f)(y^* )+**.$ The sums over $y$ and $y^*$ are absolutely convergent.", "In the theorem the contributions marked $*$ and $**$ are certain boundary terms coming from residues of auxiliary degenerate Eisenstein series.", "Using Lemma REF one can choose many $f$ so that these contributions vanish.", "Theorem REF is our main theorem in the context of Poisson summation formulae.", "It is a vast generalization of the Poisson summation formula for Braverman-Kazhdan spaces of maximal parabolic subgroups in reductive groups [8], which in turn generalizes the classical Poisson summation formula for a vector space.", "Remark In the degenerate case $H=G,$ Theorem REF reduces to the Poisson summation formula for the Braverman-Kazhdan space $X_{P}.$ In this special case, under suitable assumptions on $f$ , the formula was proved in [8].", "When $G=\\mathrm {Sp}_{2n}$ and $P$ is the Siegel parabolic it was proved for general test functions in [19].", "Conjectures REF and REF Let $M_{\\beta _0}$
be the simple normal subgroup of the Levi subgroup $M^{\\prime }$ of $P^{\\prime }$ defined in (REF ) below.", "For any topological abelian group $A$ we denote by $\\widehat{A}$ the set of quasi-characters of $A,$ that is, continuous homomorphisms $A \\rightarrow \\mathbb {C}^\\times .$ For $Q \\in \\lbrace P,P^*\\rbrace $ let $\\mathcal {S}(X_{Q \\cap M_{\\beta _0}}(\\mathbb {A}_F))$ be the Schwartz space of (REF ).", "For any $(m,f_1,f_2,\\chi ,s) \\in M_{\\beta _0}(\\mathbb {A}_F) \\times \\mathcal {S}(X_{P \\cap M_{\\beta _0}}(\\mathbb {A}_F)) \\times \\mathcal {S}(X_{P^* \\cap M_{\\beta _0}}(\\mathbb {A}_F)) \\times \\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times } \\times \\mathbb {C}$ let $\\chi _s:=\\chi |\\cdot |^s$ and form the degenerate Eisenstein series $ \\begin{split}E(m,f_{1\\chi _s})&=\\sum _{x\\in (P \\cap M_{\\beta _0}) \\backslash M_{\\beta _0}(F)}f_{1\\chi _s}(xm),\\\\ E^*(m,f_{2\\chi _s}^*)&=\\sum _{x \\in (P^* \\cap M_{\\beta _0}) \\backslash M_{\\beta _0}(F)}f_{2\\chi _s}^*(xm).\\end{split}$ They converge for $\\mathrm {Re}(s)$ large enough (resp.", "$\\mathrm {Re}(s)$ small enough).", "Here $f_{1\\chi _s}$ and $f_{2\\chi _s}^*$ are the Mellin transforms of (REF ) in the special case $P^{\\prime }=M_{\\beta _0}.$ Let $K \\le M_{\\beta _0}(\\mathbb {A}_F)$ be a maximal compact subgroup.", "The following conjecture appeared in the statements of theorems REF and REF above: Conjecture 1.3 For each character $\\chi \\in \\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times }$ there is a finite set $\\Upsilon (\\chi ) \\subset \\mathbb {C}$ such that if $E(m,f_{\\chi _s})$ has a pole for any $K$ -finite $f \\in \\mathcal {S}(X_{P \\cap M_{\\beta _0}}(\\mathbb {A}_F))$ then $s \\in \\Upsilon (\\chi ).$ In fact we expect the following stronger conjecture to be true: Conjecture 1.4 There is an integer $n$ and a finite set $\\Upsilon \\subset \\mathbb {C}$ depending only on $M_{\\beta _0}$ such that if $E(m,f_{\\chi _s})$ has a pole for any $K$ -finite $f \\in \\mathcal {S}(X_{P \\cap M_{\\beta _0}}(\\mathbb {A}_F))$ and $\\chi \\in \\widehat{A_{\\mathbb {G}_m}F^\\times \\backslash \\mathbb {A}_F^\\times }$ then $\\chi ^n=1$ and $s \\in \\Upsilon .$ In § we prove Conjecture REF (or more accurately extract it from the literature) when $M_{\\beta _0}$ is $\\mathrm {SL}_n$ and Conjecture REF when $P \\cap M_{\\beta _0}$ is a Siegel parabolic in the symplectic group $M_{\\beta _0}.$ We point out that the natural analogues of conjectures REF and REF for general sections of $I_P(\\chi _s)$ are false.", "It is important that one uses Mellin transforms of elements of the Schwartz space.", "On integral representations of $L$ -functions In this subsection we make some comments on how one might try to use Theorem REF to study integral representations of $L$ -functions.", "Though it is purely speculative, we include this discussion because it allows us to pose several questions in the study of algebraic homogeneous spaces that are of independent interest.", "Let a subgroup $H \\le G$ act on $G$ via right multiplication, and assume that $Y$ is stable under the action of $H.$ For example, we could take $Y=P^{\\prime }\\gamma H$ for some $\\gamma \\in G(F).$ Assume that $Y$ admits an open $H$ -orbit.", "Let $\\varphi $ be a cusp form in a cuspidal automorphic representation $\\pi $ of $H(\\mathbb {A}_F)$ .", "If one wanted to use Theorem REF to study integral representations of $L$ -functions one could try to investigate expressions of the form $ \\int _{H(F)
\\backslash H(\\mathbb {A}_F)}\\varphi (h) E_{Y_P}(R(h)f_{\\chi _s}) dh,$ where $R(h)f_{\\chi _s}(y)=f_{\\chi _s}(yh).$ If convergent, (REF ) admits a functional equation because $E_{Y_P}(R(h)f_{\\chi _s})$ admits a functional equation by Theorem REF .", "The problem is deciding whether or not (REF ) or some variant of it yields any new $L$ -functions.", "Ignoring all questions of convergence, the integral above unfolds into a sum $\\sum _{y_0 \\in M^{\\mathrm {ab}}(F) \\backslash Y_{P}(F)/H(F)} \\int _{H_{y_0}(F) \\backslash H(\\mathbb {A}_F)}\\varphi (h)f_{\\chi _s}(y_0h)dh,$ where $H_{y_0}$ is the stabilizer of $y_0 \\in (M^{\\mathrm {ab}} \\backslash Y_P)(F)$ in $H.$ In view of the conjectures of Sakellaridis and Venkatesh [41], [44], one expects this expression to unfold into a sum of Eulerian integrals when $M^{\\mathrm {ab}} \\backslash Y_{P}$ is spherical as an $H$ -scheme.", "This motivates the following questions: When is $M^{\\mathrm {ab}} \\backslash Y_P$ spherical as an $H$ -scheme?", "If $M^{\\mathrm {ab}} \\backslash Y_P$ is spherical, what is the stabilizer in $H$ of a point in the open $H$ -orbit?", "If $H$ and a spherical subgroup $H^{\\prime } \\le H$ are fixed, can one classify the possible spherical embeddings of $H^{\\prime } \\backslash H$ obtained from $M^{\\mathrm {ab}} \\backslash Y_P$ as $P,$ $P^{\\prime },$ $G,$ and $Y$ vary?", "We can also pose analogous questions when $H$ is not necessarily reductive by replacing $H$ by a Levi subgroup as in [14], [25].", "Finally, it is of interest to consider the questions above in the Kac-Moody setting mentioned in Remark 2 after Theorem REF .", "Outline We outline the paper and give some indication of the proofs.", "We begin with some comments on the underlying geometry we are considering in §.", "We then define the Schwartz space of $Y_{P,P^{\\prime }}$ locally and adelically in §.", "We also construct a Fourier transform and prove that it preserves the Schwartz space.", "To accomplish this we reduce the question to a similar statement on an auxiliary Braverman-Kazhdan space and then use the methods developed in [19], [16].", "We then prove a Poisson summation formula for an auxiliary Braverman-Kazhdan space in §.", "This formula is proved in [8] for a different definition of the Schwartz space with some restrictions on the test functions involved.", "It was obtained for arbitrary test functions in [19] in a special case.", "In Theorem REF below we deduce it for all Braverman-Kazhdan spaces attached to maximal parabolic subgroups.", "We point out that it is most natural in our setting, and perhaps even necessary, to work with sections that are not finite under a maximal compact subgroup of $G(F_\\infty )$ when $F$ is a number field (see §REF ).", "Hence, some care is required in applying well-known results on Eisenstein series to our setting.", "Indeed, our sections need not even be standard, so we cannot invoke Lapid's work in [34].", "In § we prove theorems REF and REF .", "The procedure is to first prove Theorem REF by reducing it to a summation formula on an auxiliary Braverman-Kazhdan space so that we can apply the results of §.", "We then use Theorem REF to deduce Theorem REF .", "Finally, in § we verify Conjecture REF (restated below as Conjecture REF ) in certain cases.", "Acknowledgments We appreciate the encouragement and questions of D. Bump.", "Answering his questions led to a generalization of our original main result and simplifications in exposition.", "The authors thank S.
Leslie for answering questions about parabolic subgroups and M. Brion for answering several questions about Schubert cells (and in particular for observing Lemma REF ).", "We also thank D. Ginzburg and F. Shahidi for encouragement, S. Kudla for answering a question on Eisenstein series and M. Hanzer for explaining how to derive Theorem REF from her paper [24] with G. Muic.", "The first author thanks H. Hahn for her constant support and encouragement and for her help with editing and the structure of the paper.", "Preliminaries Braverman-Kazhdan spaces We work over a field $F.$ Let $G$ be a connected split semisimple group over $F.$ We only consider parabolic subgroups of $G$ containing a fixed split torus $T$ ; for such a subgroup $P$ we write $N_P$ for its unipotent radical.", "We fix a Levi subgroup $M$ of $G$ containing $T$ and write $M^{\\mathrm {ab}}:=M/M^{\\mathrm {der}}.$ For all parabolic subgroups $P$ of $G$ with Levi subgroup $M$ we have a Braverman-Kazhdan space $X_{P}^\\circ :=P^{\\mathrm {der}} \\backslash G.$ We observe that $ P^{\\mathrm {der}}=M^{\\mathrm {der}}N_P$ where $N_P$ is the unipotent radical of $P.$ The scheme $X_P^{\\circ }$ is strongly quasi-affine, i.e.", "$X_P:=\\overline{X_P^{\\circ }}^{\\mathrm {aff}}:=\\mathrm {Spec}(\\Gamma (X_P^{\\circ },\\mathcal {O}_{X_P^{\\circ }}))$ is an affine scheme of finite type over $F$ and the natural map $X_P^{\\circ } \\rightarrow X_P$ is an open immersion [6].", "Strictly speaking, in loc. cit.", "Braverman and Gaitsgory work over an algebraically closed field, but their results hold in our setting by fpqc descent along $\\mathrm {Spec}(\\overline{F}) \\rightarrow \\mathrm {Spec}(F).$ A convenient reference for fpqc descent is [39].", "Lemma 2.1 The torus $M^{\\mathrm {ab}}$ is split.", "The maps $M(F) \\longrightarrow M^{\\mathrm {ab}}(F) \\quad \\textrm {and} \\quad G(F) \\longrightarrow (P^{\\mathrm {der}} \\backslash G)(F)$ are surjective.", "In [16] this is proved in the special case where $P$ is a maximal parabolic.", "The same proof implies the more general statement here.", "We have a right action of $M^{\\mathrm {ab}} \\times G$ on $X_P^{\\circ }$ given on points in an $F$ -algebra $R$ by $ \\begin{split}X^{\\circ }_{P}(R) \\times M^{\\mathrm {ab}}(R) \\times G(R) &\\longrightarrow X^\\circ _{P}(R)\\\\(x,m,g) &\\longmapsto m^{-1}xg.", "\\end{split}$ This action extends to an action of $M^{\\mathrm {ab}} \\times G$ on $X_P.$ We now discuss the Plücker embedding of $P^{\\mathrm {der}} \\backslash G$ as explained in a special case in [16].", "We assume for simplicity that $G$ is simply connected.", "Let $T\\le B \\le P$ be a Borel subgroup, let $\\Delta _G$ be the set of simple roots of $T$ in $G$ defined by $B$ and $\\Delta _M$ the set of simple roots of $T$ in $M$ defined by $B \\cap M.$ Let $ \\Delta _P:=\\Delta _G-\\Delta _M$ be the set of simple roots in $\\Delta _G$ attached to $P.$ For each $\\beta \\in \\Delta _G$ we let $B \\le P_{\\beta } \\le G$ be the unique maximal parabolic subgroup containing $B$ that does not contain the root group attached to $-\\beta .$ As usual let $X^*(T)$ be the character group of $T.$ For each $\\beta \\in \\Delta _P$ let $\\omega _{\\beta } \\in X^*(T)_{\\mathbb {Q}}$ be the fundamental weight dual to the coroot $\\beta ^\\vee ,$ in other words, $ \\omega _{\\beta }(\\alpha ^{\\vee })={\\left\\lbrace \\begin{array}{ll}1 &\\textrm { if }\\alpha =\\beta \\\\0 &\\textrm { otherwise.}\\end{array}\\right.", "}$ Here $\\alpha \\in \\Delta _G.$ Since $G$ is simply connected, the 
fundamental weight $\\omega _{\\beta }$ lies in $X^*(T).$ There is an irreducible representation $ V_{\\beta } \\times G \\longrightarrow V_{\\beta }$ of highest weight $-\\omega _{\\beta }$ .", "Here $G$ acts on the right.", "Let $ V:=\\prod _{\\beta \\in \\Delta _P} V_\\beta , \\quad V^{\\circ }=\\prod _{\\beta \\in \\Delta _P} (V_\\beta -\\lbrace 0\\rbrace ).$ Choose a highest weight vector $v_\\beta \\in V_\\beta (F)$ for each $\\beta .$ Lemma 2.2 There is a closed immersion $ \\mathrm {Pl}_P:X_P^\\circ \\longrightarrow V^\\circ $ given on points by $\\mathrm {Pl}_P(g):=(v_\\beta g).$ It extends to a $G$ -equivariant map $\\mathrm {Pl}_P:X_P \\rightarrow V.$ The character $\\omega _{\\beta }$ extends to a character of $M$ for all $\\beta $ and the map $ \\omega _P:=\\prod _{\\beta \\in \\Delta _P}\\omega _{\\beta }:M^{\\mathrm {ab}} \\longrightarrow \\mathbb {G}_m^{\\Delta _P}$ is an isomorphism.", "If we let $M^{\\mathrm {ab}}$ act via $\\omega _P$ on $V$ then $\\mathrm {Pl}_P$ is $M^{\\mathrm {ab}}$ -equivariant.", "In particular for $(m,g) \\in M^{\\mathrm {ab}}(R) \\times G(R)$ one has $ \\mathrm {Pl}_P(m^{-1}g)=(\\omega _\\beta (m)v_{\\beta } g).$ Here $\\mathbb {G}_m^{\\Delta _P}:=\\prod _{\\beta \\in \\Delta _P}\\mathbb {G}_m.$ We will use similar notation below without comment.", "In the introduction, we identified $\\Delta _P$ with the set $\\lbrace 0,1,\\dots ,k\\rbrace $ .", "To ease notation, for any subset $\\Delta \\subset \\Delta _G$ let $ V_{\\Delta }=\\prod _{\\beta \\in \\Delta }V_{\\beta } \\quad \\textrm { and }\\quad V_{\\Delta }^\\circ :=\\prod _{\\beta \\in \\Delta }(V_{\\beta }-\\lbrace 0\\rbrace ).$ Denote by $P_\\beta $ the unique maximal parabolic subgroup of $G$ containing $B$ that does not contain the root group attached to $-\\beta $ .", "Then the stabilizer of the line spanned by $v_\\beta $ in $V_\\beta $ is $P_\\beta $ [27].", "Since $P\\le P_{\\beta }$ we deduce that $\\mathrm {Pl}_P$ is well-defined, that $\\omega _{\\beta }$ extends to a character on $M,$ and that (REF ) holds.", "Since $V$ is affine the map $\\mathrm {Pl}_P$ tautologically extends to a $M^{\\mathrm {ab}} \\times G$ -equivariant map $\\mathrm {Pl}:X_P \\rightarrow V.$ We are left with checking that $\\mathrm {Pl}_P$ is a closed immersion and that (REF ) is an isomorphism.", "We first check that (REF ) is an isomorphism.", "The homomorphism $\\mathbb {G}^{\\Delta _P}_m \\rightarrow M^{\\mathrm {ab}}$ given on points by $x\\mapsto \\prod _{\\beta } \\beta ^\\vee (x)$ is a section of $\\omega _P$ .", "Thus it suffices to show that the image of this section is all of $M^{\\mathrm {ab}}$ .", "Since the image is closed and $M^{\\mathrm {ab}}$ is irreducible, it suffices to check that $|\\Delta _P|=\\dim \\mathbb {G}^{\\Delta _P}_m$ is equal to $\\dim M^{\\mathrm {ab}}$ .", "The group $M^{\\mathrm {ab}}$ is isogenous to the center $Z_M$ of $M$ which is contained in the subgroup of $T$ on which all the $\\beta \\in \\Delta _M=\\Delta _G-\\Delta _P$ vanish.", "Thus $Z_M$ has at most dimension $|\\Delta _P|$ .", "To check that $\\mathrm {Pl}_P$ is a closed immersion we first point out that we have a commutative diagram $ \\begin{tikzcd}X_P^{\\circ } [d,twoheadrightarrow] [r,\"\\mathrm {Pl}_P\"] &V^\\circ [d,twoheadrightarrow]\\\\P \\backslash G [r,\"\\overline{\\mathrm {Pl}}_P\",hookrightarrow] & \\prod _{\\beta \\in \\Delta _P}\\mathbb {P}V_{\\beta }\\end{tikzcd}$ where $\\overline{\\mathrm {Pl}}_{P}$ is induced by $\\mathrm {Pl}_P$ .", "We claim that $\\overline{\\mathrm {Pl}}_P$ is a closed 
immersion.", "Since $P \\backslash G$ is proper and $\\prod _{\\beta \\in \\Delta _P}\\mathbb {P}V_{\\beta }$ is separated the map $\\mathrm {Pl}_P$ has closed image.", "It is an immersion provided that $P$ is the stabilizer of the image of $(v_{\\beta })$ in $\\prod _{\\beta \\in \\Delta _P}\\mathbb {P}V_{\\beta }$ by [35].", "The stabilizer of the image of $(v_\\beta )$ is $ \\bigcap _{\\beta \\in \\Delta _P}P_\\beta .$ But (REF ) is a parabolic subgroup containing $B$ .", "By considering the root groups contained in (REF ) we deduce that it is $P$ .", "This completes the proof of the claim.", "Extend $\\omega _P$ to a character of $P$ in the canonical manner.", "Since the stabilizer of the image of $(v_\\beta )$ in $\\prod _{\\beta \\in \\Delta _P}\\mathbb {P}V_{\\beta }$ is $P$ we deduce that the stabilizer of $(v_\\beta )$ is contained in $P$ and hence equal to $\\ker (\\omega _P:P \\rightarrow \\mathbb {G}_m^{\\Delta _P}).$ Since (REF ) is an isomorphism $\\ker (\\omega _P:P \\rightarrow \\mathbb {G}_m^{\\Delta _P})=P^{\\mathrm {der}}.$ Thus the map $\\mathrm {Pl}_P$ is an immersion of $X_P^{\\circ }$ onto the orbit of $(v_\\beta )$ [35].", "The left vertical arrow in (REF ) is the geometric quotient by the action of $M^{\\mathrm {ab}}$ , and the right vertical arrow is the geometric quotient by $\\mathbb {G}_m^{\\Delta _P}$ .", "In view of the equivariance property (REF ) and the fact that (REF ) is an isomorphism, we deduce that the image of $\\mathrm {Pl}_P$ is closed, and hence $\\mathrm {Pl}_P$ is a closed immersion.", "Example Let $G=\\mathrm {SL}_3$ and $P=B,$ the Borel of upper triangular matrices.", "The representations $V_{\\beta _1}$ and $V_{\\beta _2}$ are just the standard representation $\\mathbb {G}_a^3$ and $\\wedge ^2 \\mathbb {G}_a^3.$ If we choose $(0,0,1)$ and $(0,1,0) \\wedge (0,0,1)$ as our highest weight vectors then $\\mathrm {Pl}_B{\\left({\\begin{matrix} a \\\\ b\\\\ c \\end{matrix}}\\right)} =\\left(c,b \\wedge c\\right).$ Under the Plücker embedding the image of $X_B$ is the cone $C$ whose points in an $F$ -algebra $R$ are given by $C(R)=\\lbrace (v_1,v_2) \\in R^3 \\times \\wedge ^2 R^3: v_1\\wedge v_2=0\\rbrace .$ The $Y_{P}$ We assume that we are in the situation of the introduction.", "Thus $Y \\subseteq G$ is a subscheme whose stabilizer under the left action of $G$ contains a parabolic subgroup $P^{\\prime }> P.$ We assume (REF ), that is, $P$ is maximal in $P^{\\prime }.$ Moreover we let $P^*\\le P^{\\prime }$ be the unique parabolic subgroup with Levi subgroup $M$ that is not equal to $P.$ There is a unique simple root $\\beta _0$ in $\\Delta _G-\\Delta _M$ such that the root group attached to $-\\beta _0$ is a subgroup of $P^{\\prime }$ but not $P.$ Let $M^{\\prime }$ be the unique Levi subgroup of $P^{\\prime }$ containing $M.$ The set of simple roots of $T$ in $M^{\\prime }$ with respect to the Borel $B$ is $ \\Delta _{M^{\\prime }}=\\lbrace \\beta _0\\rbrace \\cup \\Delta _M.$ Since $G$ is simply connected, it follows from [11] that the derived group $M^{\\prime \\mathrm {der}}$ is simply connected.", "Hence it is a direct product of its simple factors.", "There is a unique decomposition $ M^{\\prime \\mathrm {der}}=M_{\\beta _0} \\times M^{\\beta _0}$ where $M_{\\beta _0}$ is the unique simple factor containing the root group of $\\beta _0.$ It then is the unique simple factor containing $\\beta ^\\vee _0(\\mathbb {G}_m).$ Let $N_P$ denote the unipotent radical of $P.$ Lemma 2.3 The group $T_{\\beta _0}:=T \\cap M_{\\beta _0}$ is a maximal 
torus of $M_{\\beta _0}$ and $B \\cap M_{\\beta _0}$ is a Borel subgroup of $M_{\\beta _0}$ .", "The group $P \\cap M_{\\beta _0}$ is the unique maximal parabolic subgroup of $M_{\\beta _0}$ containing $B \\cap M_{\\beta _0}$ that does not contain the root group attached to $-\\beta _0,$ and $M \\cap M_{\\beta _0}$ is a Levi subgroup of $P \\cap M_{\\beta _0}.$ We have $ \\begin{split}N_{P \\cap M_{\\beta _0}}&=N_{P} \\cap M_{\\beta _0}\\\\M^{\\mathrm {der}} \\cap M_{\\beta _0}&=(M \\cap M_{\\beta _0})^{\\mathrm {der}},\\\\ P^{\\mathrm {der}} \\cap M_{\\beta _0}&=(P \\cap M_{\\beta _0})^{\\mathrm {der}}\\\\P^{\\mathrm {der}}&=(P^{\\mathrm {der}} \\cap M_{\\beta _0}) M^{\\beta _0}N_{P^{\\prime }} \\end{split}$ The first two sentences follow from [1].", "Since $N_{P \\cap M_{\\beta _0}}$ is the maximal connected normal unipotent subgroup of $P \\cap M_{\\beta _0}$ it is clearly contained in $N_P,$ the maximal connected normal unipotent subgroup of $P.$ Thus $N_{P \\cap M_{\\beta _0}} \\le N_P \\cap M_{\\beta _0}.$ On the other hand the intersection of $N_P \\cap M_{\\beta _0} <(M \\cap M_{\\beta _0})N_{P \\cap M_{\\beta _0}}=P \\cap M_{\\beta _0}$ and $M \\cap M_{\\beta _0}$ is the identity.", "It follows that $N_{P \\cap M_{\\beta _0}}=N_P \\cap M_{\\beta _0}.$ Let $Q^\\circ $ denote the neutral component of an algebraic group $Q.$ One can check the identities $(M^{\\mathrm {der}} \\cap M_{\\beta _0})^{\\circ }&=(M \\cap M_{\\beta _0})^{\\mathrm {der}},\\\\ (P^{\\mathrm {der}} \\cap M_{\\beta _0})^{\\circ }&=(P \\cap M_{\\beta _0})^{\\mathrm {der}}\\\\P^{\\mathrm {der}}&=(P^{\\mathrm {der}} \\cap M_{\\beta _0})^\\circ M^{\\beta _0}N_{P^{\\prime }}$ by observing that the groups on either side are connected and checking the corresponding identity on the Lie algebras using well-known facts about the decomposition of parabolic subalgebras into root spaces.", "Thus to complete the proof it suffices to prove that $M^{\\mathrm {der}} \\cap M_{\\beta _0}$ and $P^{\\mathrm {der}} \\cap M_{\\beta _0}$ are connected.", "The group $M^{\\mathrm {der}}<M^{\\prime \\mathrm {der}}=M_{\\beta _0} \\times M^{\\beta _0}$ is connected and is the direct product $ M^{\\mathrm {der}}=(M^{\\mathrm {der}} \\cap M_{\\beta _0}) \\times M^{\\beta _0}$ In particular $M^{\\mathrm {der}} \\cap M_{\\beta _0}$ is connected.", "We also conclude from (REF ) that $P^{\\mathrm {der}}$ is the semidirect product of $(M^{\\mathrm {der}} \\cap M_{\\beta _0})N_{P \\cap M_{\\beta _0}}$ and $M^{\\beta _0}N_{P^{\\prime }}.$ It follows that $P^{\\mathrm {der}} \\cap M^{\\beta _0}$ is $(M^{\\mathrm {der}} \\cap M_{\\beta _0})N_{P \\cap M_{\\beta _0}}$ , which is connected.", "Let $\\mathrm {pr}:G \\longrightarrow X_P^\\circ $ be the natural map.", "In the following it is convenient to denote by $|X|$ the underlying topological space of a scheme $X.$ As usual, a subset of a topological space is locally closed if it is open in its closure.", "Lemma 2.4 The set $\\mathrm {pr}(|Y|) \\subset |X_P^\\circ |$ is locally closed.", "Let $\\overline{Y}$ be the schematic closure of $Y$ in $G.$ Then $\\overline{Y}$ is stable under left multiplication by $P^{\\mathrm {der}}.$ The map $\\mathrm {pr}$ is a geometric quotient [40].", "In particular the underlying map of topological spaces is a quotient map.", "It follows that $\\mathrm {pr}(|\\overline{Y}|)$ is closed in $|X_{P}^\\circ |$ .", "The restriction of $\\mathrm {pr}$ to $|\\overline{Y}|$ is again a topological quotient map, so $\\mathrm {pr}(|Y|)$ is open in $\\mathrm {pr}(|\\overline{Y}|).$ 
It is easy to check that the closure of $\\mathrm {pr}(|Y|)$ is $\\mathrm {pr}(|\\overline{Y}|).$ As in the introduction, $Y_{P}$ is the subscheme of $X_{P}^\\circ $ with underlying topological space $\\mathrm {pr}(|Y|),$ given the reduced induced subscheme structure.", "Lemma 2.5 If $Y=P^{\\prime }\\gamma H$ for some subgroup $H \\le G$ and $\\gamma \\in G(F)$ then the subscheme $Y_{P} \\subset X_P^{\\circ }$ is smooth.", "The space $Y$ is smooth, and $\\mathrm {pr}$ is a locally trivial fibration.", "Since $Y$ is left $P^{\\mathrm {der}}$ -invariant $\\mathrm {pr}$ restricts to a locally trivial fibration $\\mathrm {pr}:Y \\rightarrow Y_{P}.$ Lemma REF implies the following lemma: Lemma 2.6 The map $Y(F) \\longrightarrow Y_{P}(F)$ is surjective.", "$\\Box $ Using notation from (REF ) and (REF ) let $ \\begin{split}X_{P,P^{\\prime }}:&=\\mathrm {Pl}^{-1}_P(V_{\\beta _0} \\times V_{\\Delta _{P^{\\prime }}}^{\\circ }) \\subseteq X_P\\\\X_{P^*,P^{\\prime }}:&=\\mathrm {Pl}^{-1}_{P^*}(V_{\\beta _0^{-1}} \\times V_{\\Delta _{P^{\\prime }}}^{\\circ }) \\subseteq X_{P^*}.", "\\end{split}$ Thus $X_{P,P^{\\prime }}$ and $X_{P^*,P^{\\prime }}$ are subschemes of $X_{P}$ and $X_{P^*},$ respectively.", "Let $ \\begin{split}Y_{P,P^{\\prime }}:&=\\overline{Y_{P}} \\subseteq X_{P,P^{\\prime }}\\\\Y_{P^*,P^{\\prime }}:&=\\overline{Y_{P^*}} \\subseteq X_{P^*,P^{\\prime }}\\end{split}$ be the closures of $Y_{P}$ in $X_{P,P^{\\prime }}$ and $Y_{P^*}$ in $X_{P^*,P^{\\prime }},$ respectively.", "The natural map $ N_{P \\cap M_{\\beta _0}} \\longrightarrow N_P/N_{P^{\\prime }}$ is an isomorphism.", "For all $y \\in Y(F)$ we have a morphism $ \\iota _y:X_{P \\cap M_{\\beta _0}}^\\circ \\longrightarrow Y_{P}$ characterized by the requirement that the diagram $ \\begin{tikzcd}M_{\\beta _0} [r] [d,twoheadrightarrow] & Y [d,twoheadrightarrow]\\\\X^{\\circ }_{P \\cap M_{\\beta _0}} [r,\"\\iota _y\"] &Y_{P}\\end{tikzcd}$ commutes, where the top arrow is given on points by $m \\mapsto my$ and the vertical arrows are the canonical surjections.", "Lemma 2.7 Let $\\Xi \\subset Y(F)$ be a set of representatives for $P^{\\prime \\mathrm {der}}(F) \\backslash Y(F).$ One has $Y_{P}(F)=\\coprod _{y \\in \\Xi } \\iota _{y}(X^\\circ _{P \\cap M_{\\beta _0}}(F))\\quad \\textrm {and} \\quad Y_{P^*}(F)=\\coprod _{y \\in \\Xi } \\iota _{y}(X^\\circ _{P^* \\cap M_{\\beta _0}}(F)).$ With notation as in (REF ) we have $M_{\\beta _0}M^{\\beta _0}N_{P^{\\prime }}=P^{\\prime \\mathrm {der}}.$ The fibers of the canonical projection $M^{\\beta _0}(F)N_{P^{\\prime }}(F) \\backslash Y(F) \\rightarrow P^{\\prime \\mathrm {der}}(F) \\backslash Y(F)$ are $M_{\\beta _0}(F)$ -torsors, so $M_{\\beta _0}(F)\\Xi $ is a set of representatives for $M^{\\beta _0}(F)N_{P^{\\prime }}(F) \\backslash Y(F).$ Moreover no two elements of $\\Xi $ are in the same $M_{\\beta _0}(F)$ -orbit.", "By Lemma REF $P^{\\mathrm {der}}(F) \\backslash Y(F)=Y_{P}(F).$ By Lemma REF the fibers of the natural projection $M^{\\beta _0}(F)N_{P^{\\prime }}(F) \\backslash Y(F) \\longrightarrow P^{\\mathrm {der}}(F) \\backslash Y(F)=Y_{P}(F)$ are $(P^{\\mathrm {der}} \\cap M_{\\beta _0})(F)$ -torsors.", "Again by Lemma REF we have $P^{\\mathrm {der}} \\cap M_{\\beta _0}=(P \\cap M_{\\beta _0})^{\\mathrm {der}}$ and we deduce the first equality of the lemma.", "To obtain the second equality we replace $P$ by $P^*$ and argue by symmetry.", "Examples Let $G=\\mathrm {SL}_n,$ $n>2.$ We generalize the example of (REF ) to higher rank in two different manners.", "Let $s_{j} \\in \\mathrm 
{SL}_n(F),$ $1 \\le j \\le n-1,$ be the matrix that is the identity matrix with the rows $e_j$ and $e_{j+1}$ replaced by $-e_{j+1}$ and $e_j.$ Let $\\gamma _1=s_{1}s_{2}\\cdots s_{n-1}.$ This is a Coxeter element.", "Let $R$ be an $F$ -algebra.", "Let $B_n \\le \\mathrm {SL}_n$ be the Borel subgroup of upper triangular matrices and let $P^{\\prime }(R):&=\\lbrace {\\left({\\begin{matrix} g & x \\\\ & b \\end{matrix}}\\right)} \\in \\mathrm {SL}_n(R): (g,b) \\in \\mathrm {GL}_2(R) \\times B_{n-2}(R)\\rbrace \\\\H(R):&=\\lbrace {\\left({\\begin{matrix} b & x \\\\ & g \\end{matrix}}\\right)} \\in \\mathrm {SL}_n(R): (g,b) \\in \\mathrm {GL}_2(R) \\times B_{n-2}(R)\\rbrace $ Then $B_n \\gamma _1B_n \\subset P^{\\prime }\\gamma _1 H \\subseteq \\overline{B_n\\gamma _1 B_n}$ , and $\\overline{B_n\\gamma _1B_n}(R)=\\left\\lbrace g \\in \\mathrm {SL}_n(R): g_{ij}=0 \\textrm { if }i>j+1 \\right\\rbrace .$ If we take $P=B_n$ , then $P$ is maximal in $P^{\\prime }$ and $B_n^*(R):=\\left\\lbrace {\\left({\\begin{matrix} a & & x\\\\ b & c & z\\\\ & & d\\end{matrix}}\\right)} \\in \\mathrm {SL}_n(R): a,c \\in R^\\times , d \\in B_{n-2}(R) \\right\\rbrace .$ As another example, take $\\gamma _2= {\\left({\\begin{matrix} & & &J \\\\ (-1)^\\epsilon & & & \\\\ &1 & & \\end{matrix}}\\right)}$ where $J$ is the matrix with 1's on the antidiagonal and zeros elsewhere and $\\epsilon \\in \\lbrace 0,1\\rbrace $ is chosen so the matrix has determinant 1.", "Write $P_{a,b}(R):=\\left\\lbrace {\\left({\\begin{matrix} g & x \\\\ & h \\end{matrix}}\\right)} \\in \\mathrm {SL}_n(R): (g,h) \\in \\mathrm {GL}_a(R) \\times \\mathrm {GL}_b(R) \\right\\rbrace .$ Then $B_n\\gamma _2 B_n \\subset P_{n-1,1}\\gamma _2 P_{1,n-1} \\subseteq \\overline{B_n \\gamma _2 B_n}$ Choose positive integers $a,b$ such that $a+b=n-1.$ Then we may take $P(R):=\\left\\lbrace {\\left({\\begin{matrix} g_1 & x & y\\\\ & g_2 & z \\\\ & & a\\end{matrix}}\\right)} \\in \\mathrm {SL}_n(R): (g_1,g_2,a) \\in \\mathrm {GL}_{a}(R) \\times \\mathrm {GL}_b(R) \\times R^\\times \\right\\rbrace .$ In this case $P^*(R):=\\left\\lbrace {\\left({\\begin{matrix} g_1 & & y\\\\ x & g_2 & z \\\\ & & a\\end{matrix}}\\right)} \\in \\mathrm {SL}_n(R): (g_1,g_2,a) \\in \\mathrm {GL}_{a}(R) \\times \\mathrm {GL}_b(R) \\times R^\\times \\right\\rbrace .$ Function spaces Let $X$ be a quasi-projective scheme over a local field $F$ .", "We denote by $C^0(X(F))$ the space of complex valued continuous functions on $X(F).$ Assume that $F$ is nonarchimedean.", "In this case we denote by $\\mathcal {C}(X(F))$ the space of locally constant compactly supported functions on $X(F)$ , also denoted by $C_c^\\infty (X(F))$ .", "If $X$ is smooth, then we set $\\mathcal {S}(X(F))=\\mathcal {C}(X(F))=C_c^\\infty (X(F)).$ For certain nonsmooth schemes $X$ we will define $\\mathcal {S}(X(F)).$ Now assume that $F$ is archimedean.", "In this case we define $\\mathcal {C}(X(F))$ as follows.", "The set $X(F)=\\mathrm {Res}_{F/\\mathbb {R}}X(\\mathbb {R})$ is a real algebraic variety, and hence an affine real algebraic variety [5].", "In particular there is a closed embedding of real algebraic varieties $X(F) \\longrightarrow \\mathbb {R}^n$ for some $n$ .", "We define $\\mathcal {C}(X(F))$ to be the space of restrictions of the usual Schwartz space $\\mathcal {S}(\\mathbb {R}^n)$ to $X(F)$ .", "This space is independent of the choice of embedding and is naturally a Fréchet space [13].", "Thus $\\mathcal {C}(X(F))\\le C^0(X(F)).$ We observe that if $X$ is not smooth then $C^\\infty (X(F))$ is 
not defined when $F$ is archimedean.", "If $X$ is smooth then we set $\\mathcal {S}(X(F))=\\mathcal {C}(X(F)) \\le C^\\infty (X(F))$ We have $\\mathcal {S}(X(F)) \\ge C_c^\\infty (X(F))$ but the inclusion is strict in the archimedean case when $X(F)$ is noncompact.", "We will also define $\\mathcal {S}(X(F))$ for certain nonsmooth $X$ .", "Measures Let $F$ be a global field.", "We fix a nontrivial character $\\psi :F \\backslash \\mathbb {A}_F \\rightarrow \\mathbb {C}^\\times $ and a Haar measure $dx=\\otimes _v^{\\prime } dx_v$ on $\\mathbb {A}_F.$ We assume that $dx_v$ is self-dual with respect to $\\psi _v.$ We let $d^\\times x$ be the Haar measure on $\\mathbb {A}_{F}^\\times $ given by $\\otimes _v^{\\prime } \\frac{\\zeta _v(1)}{|x|_v}dx_v.$ We fix, once and for all, a Chevalley basis of the Lie algebra of $G$ with respect to $T.$ For every root of $T$ in $G$ this provides us with a root vector $X_{\\alpha }$ in each root space, and hence isomorphisms $\\mathbb {G}_a \\longrightarrow N_{\\alpha }$ where $N_{\\alpha }$ is the root group.", "This in turn provides us with a Haar measure on $N_{\\alpha }(\\mathbb {A}_F)$ for all $\\alpha .$ As a scheme (but not a group scheme) the unipotent radical of any parabolic subgroup $P$ with Levi subgroup $M$ is a product of various $N_{\\alpha }.$ Thus we obtain a Haar measure on $N_{P}(\\mathbb {A}_F).$ We use this normalization so that factorization of intertwining operators holds (otherwise it only holds up to a constant depending on the choice of Haar measures).", "We fix a Haar measure on $M^{\\mathrm {der}}(\\mathbb {A}_F).$ We give $M(\\mathbb {A}_F)$ the unique Haar measure such that, upon endowing $M^{\\mathrm {ab}}(\\mathbb {A}_F)$ with the quotient measure one has that $\\omega _P:M^{\\mathrm {ab}}(\\mathbb {A}_F) \\longrightarrow (\\mathbb {A}_F^\\times )^{\\Delta _P}$ is measure preserving.", "This is independent of the parabolic $P$ with $M$ as its Levi, as different choices will just replace various $\\omega _{\\beta }$ with their inverses.", "Each of the measures we fixed above factors over the places of $F$ into a product of local measures, normalized so that the local analogue of (REF ) is measure preserving.", "We use these local measures when working locally below.", "Quasi-characters Let $\\chi :=\\prod _{\\beta \\in \\Delta _P}\\chi _\\beta : (A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times )^{\\Delta _P} \\longrightarrow \\mathbb {C}^\\times $ be a quasi-character.", "If $s=(s_\\beta ) \\in \\mathbb {C}^{\\Delta _P}$ we let $\\chi _{s}=\\prod _{\\beta \\in \\Delta _P}\\chi _{\\beta }|\\cdot |^{s_\\beta }.$ We define a subgroup $ A_{\\mathbb {G}_m} \\le F_\\infty ^\\times $ in the following manner.", "The global field $F$ is a finite extension of $F_0,$ where $F_0=\\mathbb {Q}$ or $F_0=\\mathbb {F}_p(t)$ for some prime $p.$ Let $Z \\le \\mathrm {Res}_{F/F_0}\\mathbb {G}_m$ be the maximal split subtorus.", "When $F$ is a number field we take $A_{\\mathbb {G}_m}$ to be the neutral component of $Z(\\mathbb {R}).$ Thus $A_{\\mathbb {G}_m}$ is just $\\mathbb {R}_{>0}$ embedded diagonally in $F^\\times _\\infty .$ When $F$ is a function field we choose an isomorphism $Z \\tilde{\\longrightarrow } \\mathbb {G}_m$ and let $A_{\\mathbb {G}_m}$ be the inverse image of $t^\\mathbb {Z}.$ The unramified setting For a nonarchimedean place $v$ of the global field $F$ let $\\mathbb {F}_v$ be the residue field of $F_v$ and $q_v:=|\\mathbb {F}_v|.$ Let $S$ be a finite set of places of $F$ including the infinite places.", 
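"For orientation, before the precise conditions on $S$ are spelled out in the next paragraph, it may help to keep the simplest case in mind; the choices below are our illustration only (assumptions made for expository purposes, not used elsewhere).", "For $F=\\mathbb {Q}$ and $G=\\mathrm {SL}_n$ one expects to be able to take $S=\\lbrace \\infty \\rbrace ,$ so that the ring of $S$-integers is $\\mathbb {Z},$ and to work with the evident integral model $ \\mathrm {SL}_{n,\\mathbb {Z}} \\longrightarrow \\mathrm {Aut}(\\wedge ^{i}\\mathbb {G}_{a,\\mathbb {Z}}^{n}), \\qquad 1 \\le i \\le n-1,$ equipped with the standard highest weight vectors, as in the $\\mathrm {SL}_3$ example above; reducing these vectors modulo any prime $p$ again yields highest weight vectors for $\\mathrm {SL}_{n,\\mathbb {F}_p},$ so one expects that no finite place needs to be excluded.",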
"Upon enlarging $S$ if necessary, we can choose a reductive group scheme $\\mathcal {G}$ over $\\mathcal {O}_F^S$ (with connected fibers) and affine spaces $\\mathcal {V}_{\\beta } \\cong \\mathbb {G}_{a\\mathcal {O}_F^S}^{d_\\beta }$ for some integer $d_\\beta >0$ (isomorphism over $\\mathcal {O}_F^S$ ) equipped with homomorphisms $\\mathcal {G} \\longrightarrow \\mathrm {Aut}(\\mathcal {V}_\\beta )$ over $\\mathcal {O}_F^S$ whose generic fibers are the representations $G \\rightarrow \\mathrm {Aut}(V_\\beta ).$ By abuse of notation, write $G$ for $\\mathcal {G}.$ For any of the subgroups $M,$ $P,$ $P^{\\prime },$ etc.", "of $G_F$ we continue to use the same letter for their schematic closures in $G.$ Upon enlarging $S$ if necessary we assume that these groups are all smooth.", "We assume moreover that the groups whose generic fibers were reductive (resp.", "parabolic, etc.)", "extend to reductive group schemes (resp.", "parabolic group schemes, etc.)", "over $\\mathcal {O}_F^S.$ We also assume that $\\omega _{P}$ induces an isomorphism on $\\mathcal {O}_{F_v}$ -points for all $v \\notin S,$ the highest weight vectors $v_\\beta \\in V(F)$ lie in $V_\\beta (\\mathcal {O}_F^S),$ and their image in $V_\\beta (\\mathbb {F}_v)$ is again a highest weight vector for $G_{\\mathbb {F}_v}$ of weight $-\\omega _\\beta .$ Finally, we continue to denote by $Y$ the schematic closure of $Y$ in $G$ .", "It is a scheme over $\\mathcal {O}_F^S$ , and the action of $P^{\\prime }_F$ on $Y_F$ extends to an action of $P^{\\prime }$ on $Y.$ Under the assumptions above (which are no loss of generality for $S$ large enough) and we are considering the local setting over $F_v$ for $v \\notin S$ we say that we are in the unramified setting.", "The Schwartz space of $Y_{P,P^{\\prime }}(F)$ For all but the last subsection of this section $F$ is a local field.", "Induced representations Consider the induced representations: $ I_{P}(\\chi _s):=\\mathrm {Ind}_{P}^{G}(\\chi _s \\circ \\omega _P) \\quad \\textrm {and} \\quad I^*_{P^*}(\\chi _s):=\\mathrm {Ind}_{P^*}^{G}(\\chi _s \\circ \\omega _P).$ Let $\\Phi ^{\\chi _s}$ be a section of $I_{P}(\\chi _s).$ Assume $F$ is archimedean.", "We say $\\Phi ^{\\chi _s}$ is holomorphic (resp.", "meromorphic) if $s \\mapsto \\Phi ^{\\chi _s}(g)$ is holomorphic as a function of $s \\in \\mathbb {C}^{\\Delta _P}$ for all $g \\in G(F)$ and characters $\\chi :(F^\\times )^{\\Delta _P} \\rightarrow \\mathbb {C}^\\times .$ If $F$ is nonarchimedean with residue field of cardinality $q$ we say that $\\Phi ^{\\chi _s}$ is holomorphic if $\\Phi ^{\\chi _s}(g) \\in \\mathbb {C}[\\lbrace q^{s_\\beta },q^{-s_\\beta }:\\beta \\in \\Delta _P\\rbrace ]$ for all $g \\in G(F)$ and characters $\\chi :(F^\\times )^{\\Delta _P} \\rightarrow \\mathbb {C}^\\times .$ Similarly we say it is meromorphic if for all there is a $p \\in \\mathbb {C}[\\lbrace q^{s_\\beta },q^{-s_\\beta }:\\beta \\in \\Delta _P\\rbrace ]$ such that $p(s)\\Phi ^{\\chi _s}(g)$ is holomorphic.", "Fix a maximal compact subgroup $K \\le G(F)$ such that the Iwasawa decomposition holds: $G(F)=P(F)K.$ We then say that $\\Phi ^{\\chi _s}$ is standard if the restriction of the function $(s,g) \\mapsto \\Phi ^{\\chi _s}(g)$ to $\\mathbb {C}^{\\Delta _P} \\times K$ is independent of $s.$ We take the analogous conventions regarding sections of $I_{P^*}^*(\\chi _s).$ Let $\\mathcal {E}$ denote the ring of entire functions on $\\mathbb {C}^{\\Delta _P}$ when $F$ is archimedean and $\\mathbb {C}[\\lbrace q^{s_\\beta },q^{-s_\\beta }:\\beta \\in 
\\Delta _P\\rbrace ]$ when $F$ is nonarchimedean.", "For a fixed character $\\chi :(F^\\times )^{\\Delta _P} \\rightarrow \\mathbb {C}^\\times $ there is an obvious action of $\\mathcal {E}$ on the $\\mathbb {C}$ -vector space of holomorphic sections of $I_P(\\chi _s)$ preserving the subspace of $K$ -finite sections.", "As an $\\mathcal {E}$ -module, the subspace of holomorphic $K$ -finite sections is generated by the subspace of standard $K$ -finite sections.", "This allows us to apply results in the literature stated for $K$ -finite standard sections to sections that are $K$ -finite and merely holomorphic.", "We will use this observation without further comment below.", "The Schwartz space We define $P,P^*$ and $P^{\\prime }$ as in the introduction.", "Thus $P \\cap P^*$ is the Levi subgroup $M$ of the parabolic subgroups $P$ and $P^*.$ Moreover $P$ and $P^*$ are maximal (proper) parabolic subgroups of $P^{\\prime }.$ To define the Fourier transform $\\mathcal {F}_{P|P^*}$ we first apply an intertwining operator to certain functions on $Y_{P}(F)$ to arrive at functions on $Y_{P^*}(F).$ We then twist by certain operators that we recall in this section.", "Suppose we are given $\\lambda \\in X_*(M^{\\mathrm {ab}}).$ Let $s^{\\prime } \\in \\mathbb {C}.$ We define $\\lambda _{!", "}(s^{\\prime }):\\mathcal {C}(Y_{P^*}(F)) \\longrightarrow C^0(Y_{P^*}(F))$ by $\\lambda _{!", "}(s^{\\prime })(f)(x)=\\int _{F^\\times }\\delta _{P^*}^{1/2}(\\lambda (a))f(\\lambda (a^{-1})x)\\psi (a)|a|^{s^{\\prime }}da.$ Here $\\psi :F \\rightarrow \\mathbb {C}^\\times $ is the local factor of the global additive character fixed in §REF and $da$ is the Haar measure on $F.$ In the special case where $P$ is maximal and $P^{\\prime }=Y=G$ this reduces to [16], where it was denoted by $\\lambda _{!", "}(\\mu _s).$ The same operator is denoted by $\\lambda _!", "(\\eta _{\\psi }^{s^{\\prime }})$ in [8], [43].", "To extend the domain of definition of $\\lambda _!", "(s^{\\prime }),$ let $\\Phi \\in \\mathcal {S}(F)$ satisfy $\\Phi (0)=1$ and $\\widehat{\\Phi } \\in C_c^\\infty (F).$ Here $\\widehat{\\Phi }(x):=\\int _{F}\\Phi (y)\\psi \\left(xy \\right)dy.$ Define the regularized integral $\\lambda _!", "(s^{\\prime })^{\\mathrm {reg}}(f)(x):=\\lim _{|b| \\rightarrow \\infty }\\int _{F^\\times } \\Phi \\left(\\frac{a}{b} \\right)\\delta _{P^*}^{1/2}(\\lambda (a))f(\\lambda (a^{-1})x)\\psi (a)|a|^{s^{\\prime }}da.$ This limit is said to be well-defined if the integral $\\int _{F^\\times } { \\Phi \\left(\\frac{a}{b} \\right)}\\delta _{P^*}^{1/2}(\\lambda (a))f(\\lambda (a^{-1})x)\\psi (a)|a|^{s^{\\prime }}da$ is convergent provided $|b|$ is large enough and the limit in the definition of $\\lambda _!", "(s^{\\prime })^{\\mathrm {reg}}(x)$ exists and is independent of $\\Phi .$ Thus if the integral defining $\\lambda _!", "(s^{\\prime })$ is absolutely convergent then $\\lambda _!", "(s^{\\prime })^{\\mathrm {reg}}(f)=\\lambda _!", "(s^{\\prime })(f).$ In particular this is the case if $f \\in \\mathcal {C}(Y_{P^*}(F)).$ To avoid more proliferation of notation we will drop the superscript $\\mathrm {reg}.$ Let $L(s^{\\prime },\\chi )$ , $\\varepsilon (s^{\\prime },\\chi ,\\psi )$ , and $\\gamma (s^{\\prime },\\chi ,\\psi )=\\frac{\\varepsilon (s^{\\prime },\\chi ,\\psi )L(1-s^{\\prime },\\chi ^{-1})}{L(s^{\\prime },\\chi )}$ be the usual Tate local zeta function, $\\varepsilon $ -factor, and $\\gamma $ -factor attached to a quasi-character $\\chi :F^\\times \\rightarrow \\mathbb {C}^\\times $ and a complex number 
$s^{\\prime } \\in \\mathbb {C}$ .", "These factors are denoted $\\varepsilon ^{\\prime }(s^{\\prime },\\chi ,\\psi )$ in [17].", "We use a hat to denote the dual group of an $F$ -group (more precisely the neutral component of the Langlands dual).", "Let $\\mathcal {N}$ be a 1-dimensional representation of $Z_{\\widehat{M}}=\\widehat{M}^{\\mathrm {ab}}$ with $s^{\\prime } \\in \\tfrac{1}{2}\\mathbb {Z}$ attached to it.", "The action of $Z_{\\widehat{M}}$ is given by a character $\\lambda :Z_{\\widehat{M}} \\rightarrow \\mathbb {G}_m,$ which we may identify with a cocharacter $\\lambda :\\mathbb {G}_m \\rightarrow M^{\\mathrm {ab}}.$ Let $a_{\\mathcal {N}}(\\chi _s):=L\\left(-s^{\\prime },\\chi _s\\circ \\lambda \\right)\\quad \\textrm {and} \\quad \\mu _{\\mathcal {N}}(\\chi _s):=\\gamma (-s^{\\prime },\\chi _s \\circ \\lambda ,\\psi ).$ We let $\\widetilde{\\mathcal {N}}$ be $\\mathcal {N}^\\vee $ (on which $\\widehat{M}^{\\mathrm {ab}}$ acts via $-\\lambda $ ) with the real number $-1-s^{\\prime }$ attached to it.", "More generally, assume $\\mathcal {N}=\\oplus _{i=1}^\\ell \\mathcal {N}_i$ is a finite-dimensional representation of $\\widehat{M}^{\\mathrm {ab}}$ with each $\\mathcal {N}_i$ 1-dimensional.", "We let the $Z_{\\widehat{M}}$ -action on $\\mathcal {N}_i$ be given by $\\lambda _i$ and assume each $\\mathcal {N}_i$ is equipped with a complex number $s_i$ .", "Define $ a_{\\mathcal {N}}(\\chi _s):=\\prod _{i=1}^\\ell a_{\\mathcal {N}_i}(\\chi _s) \\quad \\textrm {and} \\quad \\mu _{\\mathcal {N}}(\\chi _s):=\\prod _{i=1}^\\ell \\mu _{\\mathcal {N}_i}(\\chi _s).$ We also define $\\widetilde{\\mathcal {N}}:=\\oplus _i \\widetilde{\\mathcal {N}}_i.$ We will in fact only consider $\\mathcal {N}=\\oplus _{i \\in I} \\mathcal {N}_i$ where the parameters attached to $\\mathcal {N}_i$ are all of the form $(s_i,\\lambda _i)$ where $\\lambda _i$ is an integer multiple of $\\beta _0^\\vee $ .", "Here $\\beta _0$ is the simple root of (REF ).", "We will therefore abuse notation and allow ourselves to again write $\\lambda _i$ for the integer $n_i$ such that $\\lambda _i=n_i\\beta _0^\\vee $ .", "Assume that $\\lambda _i>0$ for all $i$ .", "Then we can order the $\\mathcal {N}_i$ so that $\\frac{s_{i+1}}{\\lambda _{i+1}} \\ge \\frac{s_i}{\\lambda _i}$ for all $i$ .", "Choosing such an ordering, we define $\\mu _{\\mathcal {N}}:=\\lambda _{1!}(s_1)\\circ \\dots \\circ \\lambda _{\\ell !}(s_\\ell ).$ Theorem REF below explains why it is reasonable to use the symbol $\\mu _{\\mathcal {N}}$ in both () and (REF ).", "Let $\\mathfrak {n}_P$ be the Lie algebra of the unipotent radical $N_P$ of $P$ and let $\\widehat{\\mathfrak {n}}_P$ be its Langlands dual.", "Let $ \\widehat{\\mathfrak {n}}_{P|P^*} :=\\widehat{\\mathfrak {n}}_{P}/\\widehat{\\mathfrak {n}}_P \\cap \\widehat{\\mathfrak {n}}_{P^*}.$ Let $\\lbrace e,h,f\\rbrace \\subset \\widehat{\\mathfrak {m}}$ be a principal $\\mathfrak {sl}_2$ -triple (here $\\widehat{\\mathfrak {m}}$ denotes the Langlands dual of $\\mathfrak {m},$ the Lie algebra of $M$ ).", "Consider the subspace $\\widehat{\\mathfrak {n}}_{P|P^*}^e \\le \\widehat{\\mathfrak {n}}_{P|P^*}$ annihilated by $e.$ It admits a decomposition $ \\widehat{\\mathfrak {n}}_{P|P^*}^e=\\oplus _{i} \\mathcal {N}_i$ where the $\\mathcal {N}_i$ are 1-dimensional eigenspaces for the action of $Z_{\\widehat{M}}.$ We observe that $Z_{\\widehat{M}}$ acts via a power of $\\beta _0^\\vee $ on $\\mathcal {N}_i.$ We assign each $\\mathcal {N}_i$ the real number $s_i$ that is half the eigenvalue of $h,$ and define $ a_{P|P}(\\chi
_s)=a_{\\widetilde{\\widehat{\\mathfrak {n}}_{P|P^*}^e}}((\\chi _s)^{-1}), \\quad a_{P|P^*}(\\chi _s)=a_{\\widehat{\\mathfrak {n}}_{P|P^*}^e}(\\chi _s).$ These factors enjoy the symmetry property $a_{P|P}(\\chi _s)=a_{P^*|P^*}(\\chi _s), \\quad a_{P|P^*}(\\chi _s)=a_{P^*|P}(\\chi _s)$ by the discussion on passing to the opposite parabolic contained in [16].", "Lemma 3.1 There is an $\\varepsilon >0$ depending only on $P$ and $P^{\\prime }$ such that for any character $\\chi :(F^\\times )^{\\Delta _P} \\rightarrow \\mathbb {C}^\\times $ the function $a_{P|P}(\\chi _s)$ is holomorphic and nonzero for $\\mathrm {Re}(s_{\\beta _0}) \\ge -\\varepsilon .$ Consider the set of parameters $\\lbrace (s_i,\\lambda _{i}\\beta _0^\\vee )\\rbrace $ attached to $\\widehat{\\mathfrak {n}}_{P|P^*}.$ It suffices to check that for each $i$ one has $s_i \\ge 0$ and $\\lambda _{i} >0 .$ Since $s_i$ is $\\tfrac{1}{2}$ the highest weight of a certain $\\mathfrak {sl}_2$ representation with respect to the usual Borel subalgebra $\\langle h,e\\rangle $ it is nonnegative.", "It is also clear that $\\lambda _i>0.$ As above, let $M^{\\prime }$ be the unique Levi subgroup of $P^{\\prime }$ containing $M$ and define $M_{\\beta _0}$ as in (REF ).", "The closed immersion $ N_{P^*} \\cap M_{\\beta _0} \\rightarrow N_{P^*}$ induces a bijection $N_{P^*}(F) \\cap M_{\\beta _0}(F) \\tilde{\\longrightarrow } N_{P^*}(F) \\cap N_{P}(F) \\backslash N_{P^*}(F).$ Thus the usual unnormalized intertwining operator restricts to define an operator $\\mathcal {R}_{P|P^*}:C_c^\\infty (Y_{P}(F)) \\longrightarrow C^\\infty (Y_{P^*}(F))$ given by $ \\mathcal {R}_{P|P^*}(f)(g):=\\int _{N_{P^*}(F) \\cap M_{\\beta _0}(F) } f(ug)du.$ We will use the same notation whenever $\\mathcal {R}_{P|P^*}(f)$ is defined (e.g.", "for more general smooth functions or via analytic continuation).", "In [43] Shahidi proves that this agrees with the operator defined by Braverman and Kazhdan.", "A section $\\Phi ^{\\chi _s}$ of $I_P(\\chi _s)$ is good if it is meromorphic and if the section $\\frac{\\mathcal {R}_{P|Q}\\Phi ^{\\chi _s}(g)}{a_{P|Q}(\\chi _s)}$ is holomorphic for all $g \\in G(F)$ and $Q \\in \\lbrace P,P^*\\rbrace $ (recall our conventions regarding meromorphic sections from §REF ).", "We defined adelic Mellin transforms in (REF ) above.", "We use the obvious local analogues of this notation.", "We write $f_{\\chi _s}$ for the Mellin transform of any function $f:Y_{P}(F) \\rightarrow \\mathbb {C}$ or $f:X_P^\\circ (F) \\rightarrow \\mathbb {C}$ such that the integral defining the Mellin transform is absolutely convergent or obtained by analytic continuation from some region of absolute convergence.", "Assume $F$ is nonarchimedean.", "Let $K \\le M^{\\mathrm {ab}}(F) \\times G(F)$ be a compact open subgroup.", "Let $\\mathcal {C}_{\\beta _0}(X_P(F))$ be the space of $K$ -finite $f \\in C^\\infty (X_{P}^\\circ (F))$ such that for $\\mathrm {Re}(s_{\\beta _0})$ sufficiently large the integral defining the Mellin transform $f_{\\chi _s}$ converges absolutely and defines a good section.", "We define the Schwartz space of $Y_{P,P^{\\prime }}(F)$ to be the space of restrictions to $Y_{P}(F)$ of functions in $\\mathcal {C}_{\\beta _0}(X_P(F))$ : $ \\mathcal {S}(Y_{P,P^{\\prime }}(F))=\\mathrm {Im}(\\mathcal {C}_{\\beta _0}(X_P(F)) \\longrightarrow C^\\infty (Y_{P}(F))).$ Before explaining the archimedean analogue of this definition, let us write $K_{\\mathbb {G}_m}$ for the maximal compact subgroup of $F^\\times $ .", "Say that two quasi-characters $\\chi 
_1,\\chi _2:F^\\times \\rightarrow \\mathbb {C}^\\times $ are equivalent if $\\chi _1=\\chi _{2}|\\cdot |^s$ for some $s \\in \\mathbb {C}$ .", "Then the set of equivalence classes of quasi-characters of $F^\\times $ is in natural bijection with $\\widehat{K}_{\\mathbb {G}_m}$ .", "Thus we sometimes write $\\widehat{K}_{\\mathbb {G}_m}$ for a set of representatives of the quasi-characters of $F^\\times $ modulo equivalence.", "In the archimedean setting we fix the following sets of representatives: $\\widehat{K}_{\\mathbb {G}_m}:={\\left\\lbrace \\begin{array}{ll}\\lbrace 1,\\mathrm {sgn}\\rbrace &\\textrm { if }F=\\mathbb {R}\\\\\\lbrace z \\mapsto \\left(\\tfrac{z}{(\\overline{z}z)^{1/2}}\\right)^m:m \\in \\mathbb {Z}\\rbrace & \\textrm { if }F=\\mathbb {C}.\\end{array}\\right.", "}$ For extended real numbers $A,B \\in \\lbrace -\\infty \\rbrace \\cup \\mathbb {R}\\cup \\lbrace \\infty \\rbrace $ with $A<B$ let $ V_{A,B}:=\\lbrace s \\in \\mathbb {C}^{\\Delta _P}:A<\\mathrm {Re}(s_\\beta ) <B \\textrm { for }\\beta \\in \\Delta _P\\rbrace .$ For functions $\\phi :\\mathbb {C}^{\\Delta _P} \\rightarrow \\mathbb {C}^{\\Delta _P}$ and polynomials $p$ on $\\mathbb {C}^{\\Delta _P}$ let $ |\\phi |_{A,B,p}:=\\sup _{s \\in V_{A,B}}|\\phi (s)p(s)|$ (which may be infinite).", "Assume $F$ is archimedean.", "The action (REF ) induces an action of $U(\\mathfrak {m}^{\\mathrm {ab}} \\oplus \\mathfrak {g})$ on $C^\\infty (X_{P}^\\circ (F)).$ Here $U(\\mathfrak {m}^{\\mathrm {ab}} \\oplus \\mathfrak {g})$ is the universal enveloping algebra of the complexification of the Lie algebra $\\mathfrak {m}^{\\mathrm {ab}} \\oplus \\mathfrak {g}$ of $M^{\\mathrm {ab}} \\times G,$ viewed as a real Lie algebra.", "Let $\\mathcal {C}_{\\beta _0}(X_{P}(F))$ be the set of all $f \\in C^\\infty (X_P^\\circ (F))$ such that for all $D \\in U(\\mathfrak {m}^{\\mathrm {ab}} \\oplus \\mathfrak {g})$ and each character $\\chi :(F^\\times )^{\\Delta _P} \\rightarrow \\mathbb {C}^\\times $ the integral (REF ) defining $(D.f)_{\\chi _s}$ converges for $\\mathrm {Re}(s_{\\beta _0})$ large enough, is a good section, and satisfies the following condition: For all real numbers $A<B,$ $Q \\in \\lbrace P,P^*\\rbrace ,$ any polynomials $p_{P|Q}$ such that $p_{P|Q}(s)a_{P|Q}(\\eta _s)$ has no poles for all $(s,\\eta ) \\in V_{A,B} \\times \\widehat{K}_{\\mathbb {G}_m}^{\\Delta _P}$ and all compact subsets $\\Omega \\subset X_{P}^\\circ (F)$ one has $|f|_{A,B,p_{P|Q},\\Omega ,D}:=\\sum _{\\eta \\in \\widehat{K}_{\\mathbb {G}_m}^{\\Delta _P}}\\mathrm {sup}_{g \\in \\Omega }|\\mathcal {R}_{P|Q}(D.f)_{\\eta _s}(g)|_{A,B,p_{P|Q}}<\\infty .$ We observe that it is indeed possible to choose $p_{P|Q}(s)$ as above (independently of $\\eta $ ).", "This follows directly from the definition of $a_{P|Q}(\\eta _s).$ Since we have defined $\\mathcal {C}_{\\beta _0}(X_{P}(F))$ for archimedean $F$ we can and do define $\\mathcal {S}(Y_{P,P^{\\prime }}(F))$ as in (REF ).", "In the archimedean case the seminorms $|\\cdot |_{A,B,p_{P|Q},\\Omega ,D}$ give $\\mathcal {S}(Y_{P,P^{\\prime }}(F))$ the structure of a Fréchet space as we now explain.", "The seminorms $|\\cdot |_{A,B,p_{P|Q},\\Omega ,D}$ give $\\mathcal {C}_{\\beta _0}(X_{P}(F))$ the structure of a Fréchet space via a standard argument.", "See [15] for the proof in a special case, the proof in general is essentially the same.", "Using Mellin inversion, one checks that with respect to this Fréchet structure evaluating a function in $\\mathcal {C}_{\\beta _0}(X_P(F))$ at a point of $X_P^{\\circ 
}(F)$ is a continuous linear functional on $\mathcal {C}_{\beta _0}(X_P(F)).$ Thus the $\mathbb {C}$ -linear subspace $I \le \mathcal {C}_{\beta _0}(X_{P}(F))$ consisting of functions that vanish on $Y_{P}(F)$ is a closed subspace. Restriction of functions to $Y_{P}(F)$ induces a $\mathbb {C}$ -linear isomorphism $\mathcal {C}_{\beta _0}(X_P(F))/I \longrightarrow \mathcal {S}(Y_{P,P^{\prime }}(F))$ and thus we obtain a Fréchet topology on $\mathcal {S}(Y_{P,P^{\prime }}(F))$ by transport of structure. This definition is inspired by [13]. The seminorms giving $\mathcal {S}(Y_{P,P^{\prime }}(F))$ its topology are $|f|_{A,B,p_{P|Q},\Omega ,D}:=\mathrm {inf}\lbrace |\widetilde{f}|_{A,B,p_{P|Q},\Omega ,D}: \widetilde{f} \in \mathcal {C}_{\beta _0}(X_P(F)) \textrm { and } \widetilde{f}|_{Y_{P}(F)}=f\rbrace $ for $f \in \mathcal {S}(Y_{P,P^{\prime }}(F))$. Let $H\le G$ act on $G$ via right multiplication, and assume that $H$ stabilizes $Y.$ Then (REF ) restricts to an action of $M^{\mathrm {ab}}(F) \times H(F)$ on $Y_{P}(F).$ This induces an action of $M^{\mathrm {ab}}(F) \times H(F)$ on $\mathcal {S}(Y_{P,P^{\prime }}(F))$ that is continuous in the archimedean case. Remarks. (a) We have defined the Schwartz space in terms of restrictions of functions on $X_P^\circ (F)$ in order to take advantage of the transitive action of $G(F)$ on $X_P^{\circ }(F).$ The space $Y_{P,P^{\prime }}(F)$ plays no role in the definition of $\mathcal {S}(Y_{P,P^{\prime }}(F)).$ However, it should be possible to give a characterization of $\mathcal {S}(Y_{P,P^{\prime }}(F))$ as the space of smooth functions on $Y_{P}(F)$ that have particular germs as one approaches the boundary $Y_{P,P^{\prime }}(F)-Y_{P}(F)$ [38]. (b) As in the special cases treated in [19], [15], [16], we have defined the Schwartz space to be the space of smooth functions with sufficiently well-behaved Mellin transforms. This is reasonable because we can obtain information on $f$ from its Mellin transforms via Mellin inversion as in the proof of Lemma REF below. We now discuss the problem of bounding functions in $\mathcal {S}(Y_{P,P^{\prime }}(F)).$ Since we have an embedding $\mathrm {Pl}_P:X_P \rightarrow V$ of $X_P$ into an affine space we will phrase our bounds in terms of this affine space. Let $K \le G(F)$ be a maximal compact subgroup such that the Iwasawa decomposition $G(F)=P(F)K$ holds. If we are in the unramified setting in the sense of §REF we take $K=G(\mathcal {O}_F).$ The group $K$ does not act on $Y_{P}(F)$ in general. The group $G(F)$ acts on each $V_\beta (F).$ For each $\beta \in \Delta _P$ choose a $K$ -invariant norm $|\cdot |_\beta $ on the $F$ -vector space $V_\beta (F).$ As a warning, for $F=\mathbb {C}$ we have $|cv|_\beta =c\overline{c}|v|_\beta $ for $c \in F$ and $v \in V_{\beta }(F)$ (this is the “number theorist's norm”). If we write $x=P^{\mathrm {der}}(F)mk$ with $(m,k) \in M^{\mathrm {ab}}(F) \times K$ then by Lemma REF one has $ |\mathrm {Pl}_{P_\beta }(x)|_\beta =|\mathrm {Pl}_{P_\beta }(m)|_\beta =|\omega _{\beta }(m)|^{-1}.$ The inverse here appears because $G$ is acting on $V_{\beta }$ on the right. Choose $r_{\beta } \in \mathbb {R}$ so that $ \prod _{\beta \in \Delta _P}|\omega _\beta (m)|^{r_{\beta }}=\delta _P^{1/2}(m).$ Recall the definition of $V_{\beta _0}$ from (REF ) and $V_{\Delta _{P^{\prime }}}^{\circ }$ from (REF ).
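To fix ideas we record how these normalizations read in the simplest case; this example assumes the standard identifications, is only meant for orientation, and plays no role in the proofs. Take $G=\mathrm {SL}_2$, $P=B$ the Borel subgroup of upper triangular matrices, $P^{\prime }=G$ and $Y=G$, so that $\Delta _P=\lbrace \beta _0\rbrace $ and $\mathrm {Pl}_P$ identifies $X_P^{\circ }(F)$ with $F^2-\lbrace 0\rbrace $ via the bottom row of a representative. For $m=\mathrm {diag}(a,a^{-1})$ one has $\delta _B(m)=|a|^2$ and $|\omega _{\beta _0}(m)|=|a|$, so $r_{\beta _0}=1$, and the bound in the lemma below specializes to $|f(x)| \le \Phi _f(\mathrm {Pl}_P(x))|\mathrm {Pl}_{P_{\beta _0}}(x)|_{\beta _0}^{\alpha -1}$; in other words a mild singularity of $f$ is permitted at the origin of $V_{\beta _0}(F)=F^2$.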
Lemma 3.2 Let $f \in \mathcal {S}(Y_{P,P^{\prime }}(F)).$ For sufficiently small $\alpha >0$ there is a nonnegative Schwartz function $\Phi _f \in \mathcal {S}(V_{\beta _0}(F) \times V_{\Delta _{P^{\prime }}}^\circ (F))$ such that $|f(x)| \le \Phi _f(\mathrm {Pl}_P(x))\prod _{\beta \in \Delta _{P}}|\mathrm {Pl}_{P_{\beta }}(x)|_\beta ^{\alpha -r_\beta }.$ If $F$ is archimedean, then $\Phi _f$ can be chosen continuously as a function of $f.$ Let $I_F:={\left\lbrace \begin{array}{ll}i\left[ -\frac{\pi }{\log q},\frac{\pi }{\log q}\right] & \textrm { if }F \textrm { is non-archimedean}\\i\mathbb {R}& \textrm { if }F \textrm { is archimedean.}\end{array}\right.}$ Let $c_\psi \in \mathbb {R}_{>0}$ be chosen so that $c_\psi dx$ is the standard Haar measure on $F$ , where $dx$ is normalized to be self-dual with respect to $\psi .$ Here the standard Haar measure is the Lebesgue measure if $F=\mathbb {R}$ , twice the Lebesgue measure if $F=\mathbb {C}$ , and the unique Haar measure giving $\mathcal {O}_F$ measure $|\mathfrak {d}|^{1/2}$ if $F$ is nonarchimedean, where $\mathfrak {d}$ is a generator for the absolute different. Then let $c_F:={\left\lbrace \begin{array}{ll} c_\psi \log q &\textrm { if } F \textrm { is nonarchimedean}\\\frac{c_\psi }{2} &\textrm { if }F=\mathbb {R}\\\frac{c_\psi }{2\pi } &\textrm { if }F=\mathbb {C}\end{array}\right.}$ For suitable continuous functions $f:Y_{P}(F) \rightarrow \mathbb {C}$ the Mellin inversion formula states that $ f(x)=\int _{\sigma +I_F^{\Delta _P}}\sum _{\eta \in \widehat{K}_{\mathbb {G}_m}^{\Delta _P}}f_{\eta _s}(x) \frac{c_F^{|\Delta _P|}ds}{(2\pi i)^{|\Delta _P|}}$ for suitable $\sigma \in \mathbb {R}^{\Delta _P}$ . By [3] this formula holds whenever the integral defining $f_{\eta _s}$ is absolutely convergent for all $\eta \in \widehat{K}_{\mathbb {G}_m}^{\Delta _P}$ and $\mathrm {Re}(s)=\sigma $ and $\int _{\sigma +I_F^{\Delta _P}}\sum _{\eta \in \widehat{K}_{\mathbb {G}_m}^{\Delta _P}}|f_{\eta _s}(x)| ds<\infty .$ By Lemma REF we have that $a_{P|P}(\chi _s)$ is holomorphic for $\mathrm {Re}(s_{\beta _0}) \ge -\varepsilon $ for some $\varepsilon >0$ independent of the character $\chi .$ It follows that, for $f \in \mathcal {S}(Y_{P,P^{\prime }}(F))$ , (REF ) holds for $\sigma =(-\varepsilon /2,\dots ,-\varepsilon /2).$ Writing $x=P^{\mathrm {der}}(F)mk$ with $(m,k) \in M^{\mathrm {ab}}(F) \times K$ the above becomes $f(mk)=\int _{\sigma +I_F^{\Delta _P}}\sum _{\eta \in \widehat{K}_{\mathbb {G}_m}^{\Delta _P}}\delta _P^{1/2}(m)\eta _s(\omega _P(m))f_{\eta _s}(k) \frac{c_F^{|\Delta _P|}ds}{(2\pi i)^{|\Delta _P|}}.$ To obtain the bound and the continuity statement from this expansion one uses the same argument as that proving [15]. The Fourier transform To ease notation let $ \begin{split}\mu _P:&=\mu _{\widehat{\mathfrak {n}}^e_{P|P^*}}\\\mu _{P}(\chi _s):&=\mu _{\widehat{\mathfrak {n}}^e_{P|P^*}}(\chi _s) \end{split}$ where the operator (resp.
()). By the same argument proving [16] we obtain the following theorem: Theorem 3.3 The map $\mathcal {F}_{P|P^*}:=\mu _P \circ \mathcal {R}_{P|P^*}:\mathcal {S}(Y_{P,P^{\prime }}(F)) \longrightarrow \mathcal {S}(Y_{P^*,P^{\prime }}(F))$ is a well-defined isomorphism, bicontinuous in the archimedean case. Moreover the diagram $\begin{tikzcd}\mathcal {S}(Y_{P,P^{\prime }}(F)) \arrow[r,"\mathcal {F}_{P|P^*}"] \arrow[d,"(\cdot )_{\chi _s}"] & \mathcal {S}(Y_{P^*,P^{\prime }}(F)) \arrow[d,"(\cdot )^*_{\chi _s}"]\\I_P(\chi _s) \arrow[r,"\mu _{P}(\chi _s)\mathcal {R}_{P|P^*}"] &I_{P^*}^*(\chi _s)\end{tikzcd}$ commutes for all $\chi :(F^\times )^{\Delta _P} \rightarrow \mathbb {C}^\times $ and $s \in \mathbb {C}^{\Delta _P}$ . $\Box $ The commutativity of the diagram must be understood in the sense that one has an identity of meromorphic functions $\mathcal {F}_{P|P^*}(f)_{\chi _s}^*=\mu _{P}(\chi _s)\mathcal {R}_{P|P^*}(f_{\chi _s}).$ Let $H \le G$ be a subgroup, and consider its action on $G$ via right multiplication. Assume that $Y$ is stable under the action of $H.$ For $(m,h,x_1,x_2) \in M^{\mathrm {ab}}(F) \times H(F) \times Y_{P}(F) \times Y_{P^*}(F)$ and $(f_1,f_2) \in \mathcal {S}(Y_{P,P^{\prime }}(F)) \times \mathcal {S}(Y_{P^*,P^{\prime }}(F))$ let $ L(m)R(h)f_1(x_1)=f_1(m^{-1}x_1h), \quad L(m)R(h)f_2(x_2)= f_2(m^{-1}x_2h)$ be the left and right translation operators. It is easy to see that $L(m)R(h)$ preserves $\mathcal {S}(Y_{P,P^{\prime }}(F))$ and $\mathcal {S}(Y_{P^*,P^{\prime }}(F)).$ Lemma 3.4 One has $\mathcal {F}_{P|P^*} \circ L(m)R(h)=\delta _{ P^* \cap M_{\beta _0}}(m)L(m)R(h) \circ \mathcal {F}_{P|P^*}.$ The operator $\mu _P$ is $M^{\mathrm {ab}}(F) \times H(F)$ -equivariant. Thus the lemma follows from the definition (REF ) of $\mathcal {R}_{P|P^*}.$ We have Schwartz spaces $ \mathcal {S}(X_{P \cap M_{\beta _0}}(F)) \quad \textrm {and} \quad \mathcal {S}(X_{P^* \cap M_{\beta _0}}(F))$ defined as in [16] and a Fourier transform $\mathcal {F}_{P \cap M_{\beta _0}|P^* \cap M_{\beta _0}}:\mathcal {S}(X_{P \cap M_{\beta _0}}(F)) \longrightarrow \mathcal {S}(X_{P^* \cap M_{\beta _0}}(F))$ defined as in [16]. It is an isomorphism, bicontinuous in the archimedean case. These facts are a special case of our construction of the Schwartz space $\mathcal {S}(Y_{P,P^{\prime }}(F))$ and the Fourier transform $\mathcal {F}_{P|P^*}.$ One simply replaces $ (G,P,P^{\prime },Y)$ by $ (M_{\beta _0},P \cap M_{\beta _0},M_{\beta _0},M_{\beta _0}).$ Recall that for each $y \in Y(F)$ we have a map $\iota _y:X_{P \cap M_{\beta _0}}^\circ \rightarrow Y_{P}$ defined as in (REF ). Proposition 3.5 For each $y \in Y(F)$ one has a map $\iota _y^*:\mathcal {S}(Y_{P,P^{\prime }}(F)) &\longrightarrow \mathcal {S}(X_{P \cap M_{\beta _0}}(F))\\f &\longmapsto f \circ \iota _y$ that fits into a commutative diagram $\begin{tikzcd}[column sep=huge]\mathcal {S}(Y_{P,P^{\prime }}(F)) \arrow[r,"\mathcal {F}_{P|P^*}"] \arrow[d,"\iota _y^*"] & \mathcal {S}(Y_{P^*,P^{\prime }}(F)) \arrow[d,"\iota _y^*"]\\\mathcal {S}(X_{P \cap M_{\beta _0}}(F)) \arrow[r,"\mathcal {F}_{P \cap M_{\beta _0}|P^* \cap M_{\beta _0}}"] &\mathcal {S}(X_{P^* \cap M_{\beta _0}}(F)).\end{tikzcd}$ If $F$ is archimedean then $\iota _y^*$ is continuous. We recall that Langlands dual groups are contravariantly functorial with respect to morphisms of reductive algebraic groups $G \rightarrow H$ with normal image, and behave as
expected with respect to Levi and parabolic subgroups. For precise statements see [10]. In particular the commutative diagram $\begin{tikzcd}M \cap M_{\beta _0} \arrow[d,hookrightarrow]\arrow[r,hookrightarrow] & P \cap M_{\beta _0} \arrow[d,hookrightarrow] \arrow[r,hookrightarrow] & M_{\beta _0} \arrow[d,hookrightarrow]\\M \arrow[r,hookrightarrow]& P \cap M^{\prime } \arrow[r,hookrightarrow]& M^{\prime }\end{tikzcd}$ of inclusions of subgroups induces a commutative diagram $\begin{tikzcd}\widehat{M \cap M_{\beta _0}} \arrow[r,hookrightarrow] & \widehat{P \cap M_{\beta _0} } \arrow[r,hookrightarrow] & \widehat{M_{\beta _0}} \\\widehat{M} \arrow[u,twoheadrightarrow]\arrow[r,hookrightarrow]& \widehat{P \cap M^{\prime }} \arrow[r,hookrightarrow] \arrow[u,twoheadrightarrow]& \widehat{M^{\prime }} \arrow[u,twoheadrightarrow]\end{tikzcd}$ where the horizontal arrows are inclusions of subgroups. Consider the representation $\widehat{\mathfrak {n}}_{P \cap M_{\beta _0}|P^* \cap M_{\beta _0}}=\widehat{\mathfrak {n}}_{P \cap M_{\beta _0}}$ of $\widehat{M \cap M_{\beta _0}}.$ We regard it as a representation of $\widehat{M}$ via the quotient map $\widehat{M} \rightarrow \widehat{M \cap M_{\beta _0}}$ . Choose a principal $\mathfrak {sl}_2$ -triple in $\widehat{\mathfrak {m}}.$ Its image under the quotient map $\widehat{\mathfrak {m}} \longrightarrow \widehat{\mathfrak {m} \cap \mathfrak {m}_{\beta _0}}$ is a principal $\mathfrak {sl}_2$ -triple in $\widehat{\mathfrak {m} \cap \mathfrak {m}_{\beta _0}}.$ Using our comments on dual groups at the beginning of the proof one checks that the quotient map $\widehat{\mathfrak {n}}_{P|P^*} \longrightarrow \widehat{\mathfrak {n}}_{P \cap M_{\beta _0}|P^* \cap M_{\beta _0}}$ is an isomorphism of $\widehat{M}$ -representations that restricts to a bijection $\widehat{\mathfrak {n}}_{P|P^*}^e \longrightarrow \widehat{\mathfrak {n}}_{P \cap M_{\beta _0}|P^* \cap M_{\beta _0}}^e.$ In view of these observations it is easy to check that the map $\iota _y^*$ is well-defined, the diagram is commutative, and $\iota _y^*$ is continuous when $F$ is archimedean. Corollary 3.6 One has $\mathcal {F}_{P|P^*} \circ \mathcal {F}_{P^*|P}=\mathrm {Id}.$ In [8] Braverman and Kazhdan prove that $\mathcal {F}_{P \cap M_{\beta _0}|P^* \cap M_{\beta _0}} \circ \mathcal {F}_{P^* \cap M_{\beta _0}|P \cap M_{\beta _0} }$ is the identity. Thus the corollary follows from Proposition REF . The unramified setting We assume that $F$ is nonarchimedean and is unramified over its prime field, that $\psi $ is unramified, and that we are in the unramified setting in the sense of §REF . Let $\mathbb {1}_0 \in C_c^\infty (Y_{P}(F))$ be the characteristic function of the image of $Y(\mathcal {O}_F)$ in $Y_{P}(F)$ . We define the basic function $ b_{Y_{P,P^{\prime }}}:Y_{P}(F) \longrightarrow \mathbb {C}$ to be the unique function in $C^\infty (Y_{P}(F))$ that is finite under a compact open subgroup of $M^{\mathrm {ab}}(F)$ such that $(b_{Y_{P,P^{\prime }}})_{\chi _s}=a_{P|P}(\chi _s)(\mathbb {1}_{0})_{\chi _s}$ for $\mathrm {Re}(s_{\beta _0})$ sufficiently large. As explained in (REF ) and (REF ), the spaces $X_{P \cap M_{\beta _0}}$ are special cases of $Y_{P,P^{\prime }},$ so $b_{X_{P \cap M_{\beta _0}}}$ is defined. Lemma 3.7 Assume that $y \in P^{\mathrm {der}}(F)\beta _0^\vee (F^\times ) Y(\mathcal {O}_F).$ Then $\iota _y^*(b_{Y_{P,P^{\prime }}})=b_{X_{P \cap M_{\beta _0}}}.$ We have already explained the relation
between $\\widehat{\\mathfrak {n}}_{P \\cap M_{\\beta _0}|(P \\cap M_{\\beta _0})^*}$ and $\\mathfrak {n}_{P|P^*}$ as representations of $\\widehat{M}$ and $\\widehat{M \\cap M_{\\beta _0}}$ in the proof of Proposition REF .", "This relationship implies that $a_{P|P}(\\chi _s)=a_{P \\cap M_{\\beta _0}|P \\cap M_{\\beta _0}}((\\chi _{\\beta _0})_{s_{\\beta _0}}).$ Define $r_\\beta $ as in (REF ).", "Arguing as in the proof of Lemma REF we obtain the following lemma: Lemma 3.8 There are constants $\\alpha ,c>0$ independent of the cardinality of the residue field $q$ such that if $q>c$ then $|b_{Y_{P,P^{\\prime }}}(x)| \\le \\mathbb {1}_{V_{\\beta _0}(\\mathcal {O}_F) \\times V_{\\Delta _{P^{\\prime }}}^\\circ (\\mathcal {O}_F)}(\\mathrm {Pl}_{P}(x))\\prod _{\\beta \\in \\Delta _P}|\\mathrm {Pl}_{P_\\beta }(x)|^{\\alpha -r_\\beta }.$ $\\Box $ Remark The claim on the independence of the residual characteristic is important because we will require this result for all but finitely many places of a global field.", "Proposition 3.9 One has $b_{Y_{P,P^{\\prime }}} \\in \\mathcal {S}(Y_{P,P^{\\prime }}(F)).$ Moreover $\\mathcal {F}_{P|P^*}(b_{Y_{P,P^{\\prime }}})=b_{Y_{P^*,P^{\\prime }}}.$ The characters of $Z_{\\widehat{M}}$ that appear in $\\widehat{\\mathfrak {n}}_{P|P^*}$ are all of the form $\\lambda \\beta _{0}^\\vee $ with $\\lambda \\in \\mathbb {Z}_{>0}$ by the proof of Lemma REF .", "Let $V_{\\lambda }=\\widehat{\\mathfrak {n}}_{P|P^*}(\\lambda )=\\widehat{\\mathfrak {n}}_{M_{\\beta _0} \\cap P}(\\lambda )$ be the $\\lambda \\beta _0^\\vee $ -isotypic space and let $r_{\\lambda }:\\widehat{M}_{\\beta _0} \\longrightarrow \\mathrm {Aut}(V_{\\lambda })$ be the corresponding representation.", "It is irreducible [42] [32].", "Let $\\mathrm {triv}:(F^\\times )^{\\Delta _P} \\rightarrow \\mathbb {C}^\\times $ be the trivial character.", "Recall the comments on the relationship between $\\widehat{\\mathfrak {n}}_{P|P^*}$ and $\\widehat{\\mathfrak {n}}_{P \\cap M_{\\beta _0} }$ from the proof of Proposition REF .", "Let $\\pi $ be the trivial representation of $M_{\\beta _0}(F)$ .", "The Gindikin-Karpelevic formula implies that $\\mathcal {R}_{P|P^*}((\\mathbb {1}_{0})_{\\mathrm {triv}_s})=(\\mathbb {1}_{0})_{\\mathrm {triv}_s}^*\\prod _{\\lambda } \\frac{L(\\lambda s_{\\beta _0}, \\pi ,r^\\vee _\\lambda )}{L(1+\\lambda s_{\\beta _0},\\pi ,r^\\vee _\\lambda )}$ where the product is over all $\\lambda \\in \\mathbb {Z}_{\\ge 1}$ such that $V_{\\lambda } \\ne 0$ [42] [31].", "Here the $L$ -functions are Langlands $L$ -functions and $r_\\lambda ^\\vee $ is the dual of $r_\\lambda .$ In more detail, $\\pi $ determines a Langlands class $c \\in \\widehat{M}_{\\beta _0}(\\mathbb {C})$ by the Satake isomorphism, and $L(s,\\pi ,r_{\\lambda }^\\vee )=\\det \\left(I_{V_\\lambda }-\\frac{r_{\\lambda }^\\vee (c)}{q^{s}} \\right)^{-1}.$ In fact, if $\\sigma :\\mathrm {SL}_2 \\rightarrow \\widehat{M}_{\\beta _0}$ is a principal $\\mathrm {SL}_2$ then $c=\\sigma {\\left({\\begin{matrix} q^{1/2} & \\\\ & q^{-1/2} \\end{matrix}}\\right)}$ [21].", "Consider $\\widehat{\\mathfrak {n}}_{P|P^*}(\\lambda ).$ As a representation of a principal $\\mathfrak {sl}_2$ -triple in $\\widehat{\\mathfrak {m}}$ it decomposes into a direct sum of irreducible representations in natural bijection with the $\\mathcal {N}_i$ in (REF ) that appear in $\\widehat{\\mathfrak {n}}^e_{P|P^*}(\\lambda ).$ The dimension of the corresponding irreducible representation is $2s_i+1,$ where $2s_i$ is the $h$ -eigenvalue on $\\mathcal {N}_i$ as 
above.", "We conclude that $\\frac{L(\\lambda s_{\\beta _0},\\pi ,r_\\lambda ^\\vee )}{L(1+\\lambda s_{\\beta _0},\\pi ,r_\\lambda ^\\vee )}&=\\prod _{i} \\left(\\frac{1-q^{-1-s_i-\\lambda s_{\\beta _0}}}{1-q^{-s_i-\\lambda s_{\\beta _0}}}\\frac{1-q^{-s_i-\\lambda s_{\\beta _0}}}{1-q^{1-s_i-\\lambda s_{\\beta _0}}} \\cdots \\frac{1-q^{-1+s_i-\\lambda s_{\\beta _0}})}{1-q^{s_i-\\lambda s_{\\beta _0}}}\\right)\\\\&=\\prod _{i} \\frac{1-q^{-1-s_i-\\lambda s_{\\beta _0}}}{1-q^{s_i-\\lambda s_{\\beta _0}}}$ where the product is over $\\mathcal {N}_i$ in (REF ) that appear in $\\widehat{\\mathfrak {n}}^e_{P|P^*}(\\lambda ).$ Thus $ \\prod _{\\lambda } \\frac{L(\\lambda s_{\\beta _0},\\pi ,r_\\lambda ^\\vee )}{L(1+\\lambda s_{\\beta _0},\\pi ,r_\\lambda ^\\vee )}=\\frac{a_{P|P^*}(\\mathrm {triv}_s)}{a_{P|P}(\\mathrm {triv}_s)}.$ We deduce for all unramified $\\chi :F^\\times \\rightarrow \\mathbb {C}^\\times $ that $\\mathcal {R}_{P|P^*}((\\mathbb {1}_{0})_{\\chi _s})=\\frac{a_{P|P^*}(\\chi _s)}{a_{P|P}(\\chi _s)}(\\mathbb {1}_{0})_{\\chi _s}^*.$ It follows immediately that $b_{Y_{P,P^{\\prime }}} \\in \\mathcal {S}(Y_{P,P^{\\prime }}(F)).$ For unramified $\\chi $ $\\mu _P(\\chi _s)=\\frac{a_{P^*|P^*}((\\chi _s)^{-1})}{a_{P|P^*}(\\chi _s)}$ where $\\mu _P(\\chi _s)$ is defined as in (REF ).", "Hence $\\mu _P(\\chi _s) \\mathcal {R}_{P|P^*}((\\mathbb {1}_{0})_{\\chi _s})=\\frac{a_{P^*|P^*}((\\chi _s)^{-1})}{a_{P|P}(\\chi _s)}(\\mathbb {1}_{0})_{\\chi _s}^*.$ Applying Theorem REF we have $\\mathcal {F}_{P|P^*}(b_{Y_{P,P^{\\prime }}})_{\\chi _s}^*=a_{P|P}(\\chi _s)\\mu _P(\\chi _s) \\mathcal {R}_{P|P^*}((\\mathbb {1}_{0})_{\\chi _s})=a_{P^*|P^*}((\\chi _s)^{-1})(\\mathbb {1}_{0})^*_{\\chi _s}$ Combining this with Mellin inversion we have $\\mathcal {F}_{P|P^*}(b_{Y_{P,P^{\\prime }}})=b_{Y_{P^*,P^{\\prime }}}.$ Let $\\varpi $ be a uniformizer for $F$ .", "For our use in the proof of Theorem REF below we require the following lemma: Lemma 3.10 Let $(\\alpha ,\\lambda ) \\in \\mathbb {C}\\times \\mathbb {Z}_{\\ne 0}.$ For any $f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(F))^{M(\\mathcal {O}_F) \\times H(\\mathcal {O}_F)}$ $\\left(\\mathrm {Id}-\\frac{\\alpha }{\\delta _P^{1/2}(\\beta _0^{\\vee }(\\varpi ^{\\lambda }))}L(\\beta _0^{\\vee }(\\varpi ^{-\\lambda }))\\right)f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(F))^{M(\\mathcal {O}_F) \\times H(\\mathcal {O}_F)}$ and $\\left(\\left(\\mathrm {Id}-\\frac{\\alpha }{\\delta _P^{1/2}(\\beta _0^{\\vee }(\\varpi ^{\\lambda }))}L(\\beta _0^{\\vee }(\\varpi ^{-\\lambda }))\\right)f\\right)_{\\chi _s}=(1-\\alpha \\chi _{\\beta _0}(\\varpi ^{\\lambda }))f_{\\chi _s}.$ $\\Box $ The adelic setting Now let $F$ be a global field.", "We let $\\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))=\\widehat{\\otimes }_{v|\\infty }\\mathcal {S}(Y_{P,P^{\\prime }}(F_v)) \\otimes \\otimes ^{\\prime }_{v \\nmid \\infty }\\mathcal {S}(Y_{P,P^{\\prime }}(F_v))$ where the restricted direct product is taken with respect to the basic functions of (REF ).", "Here when $F$ is a number field the hat denotes the projective topological tensor product and when $F$ is a function field it is the algebraic tensor product.", "The tensor product of the local Fourier transforms induces an isomorphism $\\mathcal {F}_{P|P^*}:\\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F)) \\longrightarrow \\mathcal {S}(Y_{P^*,P^{\\prime }}(\\mathbb {A}_F)).$ Here we are using Proposition REF .", "The following is the global analogue of Proposition REF : Proposition 3.11 For each $y \\in Y(\\mathbb {A}_F)$ one has a map 
$\iota _y^*:\mathcal {S}(Y_{P,P^{\prime }}(\mathbb {A}_F)) &\longrightarrow \mathcal {S}(X_{P \cap M_{\beta _0}}(\mathbb {A}_F))\\f &\longmapsto f \circ \iota _y$ that fits into a commutative diagram $\begin{tikzcd}[column sep=huge]\mathcal {S}(Y_{P,P^{\prime }}(\mathbb {A}_F)) \arrow[r,"\mathcal {F}_{P|P^*}"] \arrow[d,"\iota _y^*"] & \mathcal {S}(Y_{P^*,P^{\prime }}(\mathbb {A}_F)) \arrow[d,"\iota _y^*"]\\\mathcal {S}(X_{P \cap M_{\beta _0}}(\mathbb {A}_F)) \arrow[r,"\mathcal {F}_{P \cap M_{\beta _0}|P^* \cap M_{\beta _0}}"] &\mathcal {S}(X_{P^* \cap M_{\beta _0}}(\mathbb {A}_F)).\end{tikzcd}$ Let $K=\prod _vK_v \le G(\mathbb {A}_F)$ be a maximal compact subgroup. The element $y=(y_v) \in Y_{P}(\mathbb {A}_F)$ has the property that $y_v \in K_v$ for almost all $v.$ Thus the proposition follows from its local analogue Proposition REF and the corresponding statement for basic functions in Lemma REF . Lemma 3.12 For $f \in \mathcal {S}(Y_{P,P^{\prime }}(\mathbb {A}_F))$ (resp. $f \in \mathcal {S}(Y_{P^*,P^{\prime }}(\mathbb {A}_F))$) the integrals defining $f_{\chi _s}$ (resp. $f_{\chi _s}^*$ ) converge absolutely for $\mathrm {Re}(s_{\beta _0})$ sufficiently large (resp. sufficiently small). This follows from the estimates in lemmas REF and REF . Lemma REF implies that the Mellin transforms (REF ) define maps $(\cdot )_{\chi _s}:=(\cdot )_{\chi _s,P}:\mathcal {S}(Y_{P,P^{\prime }}(\mathbb {A}_F)) &\longrightarrow I_P(\chi _s)|_{Y_{P}(\mathbb {A}_F)} \\(\cdot )_{\chi _s}^*:=(\cdot )_{\chi _s,P^*}^*:\mathcal {S}(Y_{P^*,P^{\prime }}(\mathbb {A}_F)) &\longrightarrow I_{P^*}^*(\chi _s)|_{Y_{P^*}(\mathbb {A}_F)}$ for $\mathrm {Re}(s_\beta )$ sufficiently large (resp. sufficiently small). These Mellin transforms will be used in the following sections. The Poisson summation formula on $X_{P \cap M_{\beta _0}}(F)$ The Poisson summation formula on $X_{P \cap M_{\beta _0}}(F)$ was established under some local assumptions on the test functions involved in [8] with a slightly different definition of the Schwartz space. In this section we establish it in general following the arguments of [19]. To ease notation, for this section only we assume that $P \le M_{\beta _0},$ which implies $M_{\beta _0}=G.$ This amounts to assuming that $G$ is simple and $P$ is a maximal parabolic subgroup. Thus $X_{P \cap M_{\beta _0}}=X_P=X_{P,PIG}$ where $I \in G(F)$ is the identity. The construction of the Schwartz space and the Fourier transform given in the previous section reduces to the construction of [16] in this case. We observe that $\Delta _P=\lbrace \beta _0\rbrace $ in the setting of this section. Thus we drop $\Delta _P$ and $\beta _0$ from notation when no confusion is likely. For a quasi-character $\chi :A_{\mathbb {G}_m} F^\times \backslash \mathbb {A}_F^\times \rightarrow \mathbb {C}^\times ,$ $s \in \mathbb {C}$ , $f_1 \in \mathcal {S}(X_P(\mathbb {A}_F))$ and $f_2 \in \mathcal {S}(X_{P^*}(\mathbb {A}_F))$ we have degenerate Eisenstein series $ \begin{split}E(g,f_{1\chi _s}):&=\sum _{\gamma \in P(F) \backslash G(F)}f_{1\chi _s}(\gamma g)\\E^*(g,f^*_{2\chi _s}):&=\sum _{\gamma \in P^*(F) \backslash G(F)}f^*_{2\chi _s}(\gamma g). \end{split}$ Here $\chi _s=\chi |\cdot |^s$ . It is well-known that these converge absolutely for $\mathrm {Re}(s)$ large enough (resp. small enough). For the proof of absolute convergence in a special case see [19]; the proof generalizes to our setting.
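For orientation we recall the classical case that this section generalizes; the following is only a sketch under the standard identifications and is not used in the arguments. Take $G=\mathrm {SL}_2$ and $P=B$, so that $X_P^\circ $ is identified with the punctured plane via the bottom row. For $f \in \mathcal {S}(\mathbb {A}_F^2)$ the section $f_{\chi _s}$ is, up to normalization, a Tate integral of $f$ along the orbit of $\mathbb {A}_F^\times $, and unfolding shows that $E(g,f_{\chi _s})$ is essentially the Mellin transform in $t \in F^\times \backslash \mathbb {A}_F^\times $ of the lattice sum $\sum _{\xi \in F^2-\lbrace 0\rbrace }f(t\xi g)$. Its meromorphic continuation and functional equation therefore follow from Poisson summation on $\mathbb {A}_F^2$, as in Tate's thesis; the results of this section play the analogous roles in the present generality.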
Let $ a_{P|P}(\chi _s):=\prod _v a_{P|P}(\chi _{vs}).$ Lemma 4.1 Let $\chi \in \widehat{A_{\mathbb {G}_m} F^\times \backslash \mathbb {A}_F^\times }$ . The function $a_{P|P}(\chi _s)$ is holomorphic and nonzero for $\mathrm {Re}(s)> 0.$ It admits a meromorphic continuation to the plane. Moreover there is an integer $n$ depending only on $G$ such that $a_{P|P}(\chi _s)$ is holomorphic if $\chi ^n \ne 1.$ The first claim follows from the same remarks proving Lemma REF . Since $a_{P|P}(\chi _s)$ is a product of (completed) Hecke $L$ -functions the last two assertions are clear. Theorem 4.2 (Langlands) Let $f \in \mathcal {S}(X_P(\mathbb {A}_F))$ be finite under a maximal compact subgroup of $G(\mathbb {A}_F).$ For a character $\chi :A_{\mathbb {G}_m} F^\times \backslash \mathbb {A}_F^\times \rightarrow \mathbb {C}^\times $ the Eisenstein series $E(g,f_{\chi _s})$ has a meromorphic continuation to the $s$ -plane and admits a functional equation $E(g,f_{\chi _s})=E^*(g,\mathcal {F}_{P|P^*}(f)_{\chi _s}^*).$ If $\mathrm {Re}(z)=0$ then the order of the pole of $E(g,f_{\chi _s})$ at $s=z$ is bounded by the order of the pole of $a_{P|P}(\chi _s)$ at $s=z.$ To ease comparison with the manner in which the theory is usually phrased, let $Q$ be the unique parabolic subgroup containing $B$ that is conjugate to $P^*,$ and let $M_Q$ be the Levi subgroup of $Q$ containing $T.$ Choose $w \in G(F)$ so that $w P^*w^{-1}=Q$ and $wMw^{-1}=M_Q.$ There is an isomorphism $j:C^\infty (X_{P^*}^\circ (F)) &\tilde{\longrightarrow } C^\infty (X_Q^\circ (F))\\f &\tilde{\longmapsto } (x \mapsto f(w^{-1}x)).$ For suitable sections $\Phi ^{\chi _s} \in I_Q(\chi _s)$ and $g \in G(\mathbb {A}_F)$ we can then form the Eisenstein series $E_Q(g,\Phi ^{\chi _s})=\sum _{\gamma \in Q(F) \backslash G(F)}\Phi ^{\chi _s}(\gamma g).$ Let $K$ be a maximal compact subgroup of $G(\mathbb {A}_F)$ . If $\Phi ^{\chi _s}$ is $K$ -finite and holomorphic for $\mathrm {Re}(s)$ sufficiently large, then $E_Q(g,\Phi ^{\chi _s})$ is absolutely convergent for $\mathrm {Re}(s)$ sufficiently large. We observe that for $f \in \mathcal {S}(X_P(\mathbb {A}_F))$ one has $ E^*(g,f_{\chi _{s}}^*)=E_Q(g,j (f_{\chi _s}^*))$ for $\mathrm {Re}(s)$ sufficiently small. By definition of the Schwartz space, for any $f \in \mathcal {S}(X_P(\mathbb {A}_F))$ the section $f_{\chi _s}$ is a holomorphic multiple of the product of completed Hecke $L$ -functions $a_{P|P}(\chi _s).$ With this in mind, the meromorphy assertion is proven in [9], [33] for $K$ -finite functions $f.$ These references also contain a proof of the functional equation $E(g,f_{\chi _s})=E_Q(g,j \left( \mathcal {R}_{P|P^*}(f_{\chi _s})\right))$ which implies by (REF ) that $E(g,f_{\chi _s})=E^*(g, \mathcal {R}_{P|P^*}(f_{\chi _s})).$ We have $ \mathcal {R}_{P|P^*}(f_{\chi _s})=(\mathcal {F}_{P|P^*}(f))_{\chi _s}^*$ by Theorem REF and the argument of [19]. Thus the functional equation stated in the theorem follows from (REF ). As mentioned above for any $f \in \mathcal {S}(X_P(\mathbb {A}_F))$ the Mellin transform $f_{\chi _s}$ is a holomorphic multiple of $a_{P|P}(\chi _s)$ . Thus the last assertion of the theorem follows from the fact that Eisenstein series attached to $K$ -finite holomorphic sections are themselves holomorphic on the unitary axis [2], [33]. Let $C(\chi _s)$ be the analytic conductor of $\chi _s,$ normalized as in [19].
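Roughly speaking (the precise normalization is that of [19], so the following is only meant as a guide), for $F=\mathbb {Q}$ and $\chi $ the Hecke character attached to a Dirichlet character of conductor $q_\chi $ one has $C(\chi _s) \asymp q_\chi (1+|\mathrm {Im}(s)|)$ on vertical strips of bounded real part. Thus the estimate below asserts that, once the possible poles are removed by the polynomial $p$, the Eisenstein series decays faster than any power of the conductor of $\chi $ and of the imaginary part of $s$.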
Using notation as in (REF ) and (REF ) one has the following estimate: Theorem 4.3 Assume that $F$ is a number field, that Conjecture REF is true, and $f \in \mathcal {S}(X_P(\mathbb {A}_F))$ is $K$ -finite. Let $A<B$ be real numbers and let $p\in \mathbb {C}[x]$ be a polynomial such that $p(s)E(g,f_{\chi _s})$ is holomorphic in the strip $V_{A,B}.$ Then for all $N \ge 0$ one has $|E(g,f_{\chi _s})|_{A,B,p} \ll _{N,f} C(\chi _s)^{-N}.$ The argument is the same as that proving [19]. Let $K_M <M^{\mathrm {ab}}(\mathbb {A}_F)$ be the maximal compact subgroup and let $ \kappa _F:={\left\lbrace \begin{array}{ll}\frac{2^{r_1}(2\pi )^{r_2}h_F R_F}{d_F^{1/2}e_F}& \textrm { if }F \textrm { is a number field}\\\frac{h_F}{d_F^{1/2}(q-1)\log q}&\textrm { if }F \textrm { is a function field}\end{array}\right.}$ where $r_1$ and $r_2$ are the number of real (resp. complex) places, $h_F$ is the class number, $R_F$ is the regulator, $d_F$ is the absolute discriminant, $e_F$ is the number of roots of unity in $F$ , and in the function field case $F$ has field of constants $\mathbb {F}_q$ . For complex numbers $s_0$ let $ w(s_0)={\left\lbrace \begin{array}{ll} \tfrac{1}{2} \textrm { if }\mathrm {Re}(s_0)=0,\\1 \textrm { otherwise.}\end{array}\right.}$ We now prove the following special case of Theorem REF : Theorem 4.4 Let $f \in \mathcal {S}(X_P(\mathbb {A}_F))$ . Assume that one of the following holds: (1) $F$ is a function field; (2) $F$ is a number field, Conjecture REF is valid, and $f$ is $K_M$ -finite; or (3) $F$ is a number field and Conjecture REF is valid. One has $\sum _{x \in X_{P}^\circ (F)}&f(x)+\sum _{\chi } \sum _{\begin{array}{c}s_0 \in \mathbb {C}\\ \mathrm {Re}(s_0)\ge 0\end{array}} \frac{w(s_0)}{\kappa _F} \mathrm {Res}_{s=s_0}E^*(I,\mathcal {F}_{P|P^*}(f)^*_{\chi _{-s}})\\&=\sum _{x^* \in X_{P^*}^{\circ }(F)}\mathcal {F}_{P|P^*}(f)(x^*)+\sum _{\chi } \sum _{\begin{array}{c}s_0 \in \mathbb {C}\\ \mathrm {Re}(s_0)\ge 0\end{array}} \frac{w(s_0)}{\kappa _F} \mathrm {Res}_{s=s_0}E(I,f_{\chi _s}).$ Here the sum on $\chi $ is over characters of $A_{\mathbb {G}_m} F^\times \backslash \mathbb {A}_F^\times .$ The sums over $x$ and $x^*$ are absolutely convergent. It is clear that if $f$ is $K_M$ -finite and Conjecture REF is valid then the double sum over $\chi $ and $s_0$ has finite support. The same is true if $F$ is a function field or Conjecture REF is valid. This is how we are using these assumptions here and below. Remark Before proving Theorem REF we clarify its meaning. Assume $F$ is a number field and Conjecture REF is valid. For any $s_0 \in \mathbb {C}$ and $f^\infty \in \mathcal {S}(X_P(\mathbb {A}_F^\infty ))$ consider the linear functional $ f_\infty \longmapsto \mathrm {Res}_{s=s_0}E(I,(f_\infty f^\infty )_{\chi _s}).$ It is defined on the dense subspace of $\mathcal {S}(X_P(F_\infty ))$ consisting of $K_\infty $ -finite functions. The proof of the theorem shows that this linear functional is continuous with respect to the Fréchet topology on $\mathcal {S}(X_P(F_\infty )).$ Hence it extends to all of $\mathcal {S}(X_P(F_\infty )).$ For $f_\infty $ that are not $K_\infty $ -finite this is the manner in which the expression $\mathrm {Res}_{s=s_0}E(I,f_{\chi _s})$ is to be interpreted in this paper. We take the obvious analogous conventions regarding the meaning of $\mathrm {Res}_{s=s_0}E^*(I,f^{\prime *}_{\chi _s})$ for $f^{\prime } \in \mathcal 
{S}(X_{P^*}(\\mathbb {A}_F))$ that are not $K_\\infty $ -finite.", "Let $K_\\infty \\le G(F_\\infty )$ be a maximal compact subgroup.", "Assume first that $f=f_\\infty f^\\infty $ where $f_\\infty \\in \\mathcal {S}(X_{P}(F_\\infty ))$ is $K_\\infty $ -finite.", "Let $I_F:={\\left\\lbrace \\begin{array}{ll}i\\left[-\\frac{\\pi }{\\log q},\\frac{\\pi }{\\log q} \\right] & \\textrm { if }F \\textrm { is a function field, and }\\\\i\\mathbb {R}&\\textrm { if }F \\textrm { is a number field.}", "\\end{array}\\right.", "}$ By Mellin inversion and Theorem REF in the number field case $\\sum _{x \\in X_{P}^\\circ (F)}f(x)=\\sum _{\\chi } \\int _{\\sigma +I_F }E(I,f_{\\chi _s}) \\frac{ds}{c 2\\pi i}$ for $\\sigma $ sufficiently large and a suitable constant $c.$ Here the sum over $\\chi $ is as in the statement of the theorem.", "Since $f$ is $K_\\infty $ -finite, the support of the sum over $\\chi $ is finite.", "The constant $c$ may be computed as follows.", "It depends on our choice of Haar measure on $[M^{\\mathrm {ab}}],$ which is normalized to correspond to the measure on $A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times $ described in §REF .", "This is induced as explained in §REF from a measure on $\\mathbb {A}_F$ that is self-dual with respect to $\\psi .$ The induced measure on $F \\backslash \\mathbb {A}_F$ is independent of the choice of $\\psi .$ Thus using [46] and a choice of self-dual measure one obtains $\\mathrm {meas}(A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times )={\\left\\lbrace \\begin{array}{ll} \\frac{2^{r_1}(2\\pi )^{r_2}h_F R_F}{d_{F}^{1/2}e_F} &\\textrm { if }F \\textrm { is a number field}\\\\\\frac{h_F }{d_{F}^{1/2}(q-1)} & \\textrm { if }F \\textrm { is a function field.}\\end{array}\\right.", "}$ This implies $c=\\kappa _F.$ Notice that in the function field case $\\kappa _F=\\frac{\\mathrm {meas}(A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times )}{\\log q}$ because of the measure of $I_F.$ We shift contours to $\\mathrm {Re}(\\sigma )$ very small to see that this is $\\sum _{x \\in X_{P}^\\circ (F)}f(x)=\\sum _{\\chi } \\int _{\\mathrm {Re}(s)=\\sigma ^{\\prime }}E(I,f_{\\chi _s}) \\frac{ds}{\\kappa _F2\\pi i}+\\frac{1}{\\kappa _F}\\sum _{\\chi }\\sum _{s_0 \\in \\mathbb {C}} \\mathrm {Res}_{s=s_0}E(I,f_{\\chi _s})$ where now $\\sigma ^{\\prime }$ is sufficiently small.", "Here the support of the sum over $\\chi $ and $s_0$ is finite by assumptions (2) or (3) in the number field case and the fact that $E(I,f_{\\chi _s})$ is rational in the sense of [36] in the function field case [36].", "In the number field case the bound required to justify the contour shift is provided by Theorem REF .", "We now apply the functional equation of Theorem REF and Mellin inversion to deduce the identity $ \\sum _{x \\in X_{P}^\\circ (F)}f(x)=\\sum _{x^* \\in X_{P^*}^{\\circ }(F)}\\mathcal {F}_{P|P^*}(f)(x^*)+\\frac{1}{\\kappa _F}\\sum _{\\chi }\\sum _{s_0 \\in \\mathbb {C}} \\mathrm {Res}_{s=s_0}E(I,f_{\\chi _s}).$ Since all elements of the Schwartz space $\\mathcal {S}(X_P(\\mathbb {A}_F))$ are $K_\\infty $ -finite in the function field case assume for the moment that $F$ is a number field.", "Write $K_{M\\infty }=K_{M} \\cap M^{\\mathrm {ab}}(F_\\infty ).$ We assume without loss of generality that $f=f_\\infty f^\\infty $ where $f_\\infty \\in \\mathcal {S}(X_P(F_\\infty ))$ , $f^\\infty \\in \\mathcal {S}(X_P(\\mathbb {A}_F^\\infty ))$ .", "We additionally either assume (2) and that $f_\\infty $ transforms according to a particular 
character $\\eta $ under $K_{M\\infty }$ , or we assume (3).", "Since $K_\\infty $ -finite functions are dense in $\\mathcal {S}(X_{P}(F_\\infty ))$ by the standard argument [45] we can choose a sequence $\\lbrace f_i:i \\in \\mathbb {Z}_{\\ge 1}\\rbrace \\subset \\mathcal {S}(X_P(F_\\infty ))$ of $K_\\infty $ -finite $f_i$ such that $f_i \\rightarrow f$ in the Frechét topology on $\\mathcal {S}(X_P(F_\\infty )).$ Under assumption (2) we additionally assume that the $f_i$ transform under $K_{M\\infty }$ by $\\eta $ .", "We observe that the support of the sum over $\\chi $ and $s_0$ in (REF ) and its analogues when $f$ is replaced by $f_if^\\infty $ are contained in a finite set independent of $i$ under either assumption (2) or (3).", "In fact, the finite set can be taken to depend only on the $K_M^\\infty $ -type of $f^\\infty $ .", "It is clear from Lemma REF that for each fixed $f^\\infty \\in \\mathcal {S}(X_P(\\mathbb {A}_F^\\infty ))$ the map $\\Lambda _{1,f^\\infty }:\\mathcal {S}(X_P(F_\\infty )) &\\longrightarrow \\mathbb {C}\\\\f_\\infty &\\longmapsto \\sum _{x \\in X_P^\\circ (F)}f_\\infty (x)f^\\infty (x)$ is continuous on $\\mathcal {S}(X_P(F_\\infty )).$ The same is true of $\\Lambda _{2,f^\\infty }:\\mathcal {S}(X_P(F_\\infty )) &\\longrightarrow \\mathbb {C}\\\\f_\\infty &\\longmapsto \\sum _{x^* \\in X_{P^*}^\\circ (F)}\\mathcal {F}_{P|P^*}(f_\\infty )(x^*)\\mathcal {F}_{P|P^*}(f^\\infty )(x^*)$ since the Fourier transform is continuous.", "Thus $\\Lambda _{j,f^\\infty }(f_i) \\rightarrow \\Lambda _{j,f^\\infty }(f)$ for as $i \\rightarrow \\infty $ for $j \\in \\lbrace 1,2\\rbrace .$ Finally consider $\\Lambda _{f^\\infty ,s_0}:\\mathcal {S}(X_P(F_\\infty )) &\\longrightarrow \\mathbb {C}\\\\f_\\infty &\\longmapsto \\mathrm {Res}_{s=s_0}E(I,f_{\\chi _s}).$ Using Lemma REF we can choose an $f^{\\prime \\infty }\\in \\mathcal {S}(X_{P}(\\mathbb {A}_F^\\infty ))$ such that $\\Lambda _{f^\\infty ,s_0}=\\Lambda _{1,f^{\\prime \\infty }}-\\Lambda _{2,f^{\\prime \\infty }}.$ In more detail one uses Lemma REF to choose $f^{\\prime \\infty }$ so that the contribution of $\\mathrm {Res}_{s=s_0}E(I,f_{\\chi _s})$ to (REF ) is unchanged, but the contribution of the other residues vanish.", "Thus $\\Lambda _{f^\\infty ,s_0}$ is continuous.", "We deduce that (REF ) is valid for all $K_M$ -finite $f$ under assumption (2) and for all $f$ under assumption (3).", "We now return to the case of a general global field.", "To obtain the expression in the theorem from (REF ) we use the functional equation and holomorphy assertion of Theorem REF and the observeration that for any meromorphic function $f$ on $\\mathbb {C}$ and any $s_0 \\in \\mathbb {C}$ one has $\\mathrm {Res}_{s=s_0}f(s)=-\\mathrm {Res}_{s=-s_0}f(-s).$ Lemma 4.5 Let $\\chi \\in \\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times }.$ Let $n$ be the maximal order of the pole at $s_0$ of $E(I,f_{\\chi _s})$ as $f$ ranges over $K_\\infty $ -finite elements of $\\mathcal {S}(X_P(F_\\infty )).$ For each $f^\\infty \\in \\mathcal {S}(X_P(\\mathbb {A}_F^\\infty ))$ there are continuous linear functionals $\\Lambda _i((\\cdot )f^\\infty ):\\mathcal {S}(X_P(F_\\infty )) \\longrightarrow \\mathbb {C}$ such that if $f_\\infty $ is $K_\\infty $ -finite then $ \\delta _P^{-1/2}(\\beta ^\\vee _0(t))\\sum _{i=0}^{n-1} |t|^{-s_0}(\\log |t|)^i \\overline{\\chi }(t)\\Lambda _i(f_\\infty f^\\infty )=\\mathrm {Res}_{s=s_0}E(I,(L(\\beta _0^\\vee (t))f_\\infty f^\\infty )_{\\chi _s}).$ As usual, when $F$ is a function field we 
give $\\mathcal {S}(X_P(F_\\infty ))$ the discrete topology.", "We have $\\tfrac{1}{|t|^{s}}=\\sum _{j=0}^\\infty \\frac{(-(s-s_0)\\log |t|)^j}{ j!", "|t|^{s_0}}$ for $s$ in a neighborhood of $s_0$ .", "On the other hand if $f_\\infty $ is $K_\\infty $ -finite then we can write $E(I,(f_{\\infty }f^\\infty )_{\\chi _s})=\\sum _{i=0}^{n-1}\\frac{(-1)^{i}i!", "\\Lambda _i(f_\\infty f^\\infty )}{(s-s_0)^{i+1}} +g(s)$ where $g(s)$ is holomorphic in a neighborhood of $s_0$ and $\\Lambda _i(f_\\infty f_\\infty ) \\in \\mathbb {C}$ .", "Then the expression (REF ) is valid for $K_\\infty $ -finite $f_\\infty .$ The fact that $\\Lambda _i((\\cdot )f^\\infty )$ extends to a continuous functional on all of $\\mathcal {S}(X_P(F_\\infty ))$ is tautological in the function field case.", "In the number field case it follows from a variant of the proof of the continuity of the functional $\\Lambda _{f^\\infty ,s_0}$ in the proof of Theorem REF .", "Proofs of Theorem REF and Theorem REF For $f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ consider the sum $ \\sum _{y \\in Y_{P}(F)}f(y ).$ Lemma 5.1 The sum (REF ) is absolutely convergent.", "Let $|\\cdot |_{\\beta }=\\prod _{v}|\\cdot |_{\\beta ,v}$ , where $|\\cdot |_{\\beta ,v}$ is the norm on $V_{\\beta }(F_v)$ fixed above (REF ).", "Using lemmas REF and REF and the notation in these lemmas we see that for $\\alpha >0$ sufficiently small there is a Schwartz function on $\\Phi \\in \\mathcal {S}(V_{\\beta _0}(\\mathbb {A}_F) \\times V_{\\Delta _{P^{\\prime }}}^\\circ (\\mathbb {A}_F))$ such that the sum is dominated by $\\sum _{\\xi \\in V^{\\circ }(F)}\\Phi (\\xi )\\prod _{\\beta \\in \\Delta _P}|\\xi |^{\\alpha -r_\\beta }_\\beta <\\infty $ where $V^\\circ $ is defined as in (REF ).", "Recall the function $w(s_0)$ from (REF ).", "The following theorem is the precise version of Theorem REF : Theorem 5.2 Let $f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ .", "Assume $F$ is a function field, $F$ is a number field, Conjecture REF is valid, and $f$ is $K_M$ -finite, or $F$ is a number field and Conjecture REF is valid.", "One has $&\\sum _{x \\in Y_{P}(F)}f(x )+\\sum _{y \\in P^{\\prime \\mathrm {der}}(F) \\backslash Y(F)}\\sum _{\\chi } \\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\\\mathrm {Re}(s_0)\\ge 0\\end{array}}\\frac{w(s_0)}{\\kappa _F} \\mathrm {Res}_{s=s_0}E^*(I,\\iota _y^*(\\mathcal {F}_{P|P^*}(f))^*_{\\chi _{-s}})\\\\&=\\sum _{x^* \\in Y_{P^*}(F)}\\mathcal {F}_{P|P^*}(f)(x ^*)+\\sum _{y \\in P^{\\prime \\mathrm {der}}(F) \\backslash Y(F)}\\sum _{\\chi } \\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\\\mathrm {Re}(s_0)\\ge 0\\end{array}} \\frac{w(s_0)}{\\kappa _F}\\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f)_{\\chi _s})$ where the sums on $\\chi $ are over all characters of $A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times .$ Moreover the sums over $x$ and $x^*$ and the triple sums over $(y,\\chi ,s_0)$ are absolutely convergent.", "Remark We point out that conjectures REF and REF are assertions about functions in $\\mathcal {S}(X_{P \\cap M_{\\beta _0}}(\\mathbb {A}_F)),$ whereas $f$ lies in $\\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F)).$ We use Lemma REF to write $ \\begin{split}\\sum _{x\\in Y_{P}(F)}f(x ) &=\\sum _{ y }\\sum _{\\gamma \\in X_{P \\cap M_{\\beta _0}}^{\\circ }(F)}f(\\iota _y(\\gamma ))=\\sum _{ y}\\sum _{\\gamma \\in X_{P \\cap M_{\\beta _0}}^{\\circ }(F)}\\iota _y^*(f)(\\gamma ).\\end{split}$ Here and throughout the proof all sums over $y$ are over $P^{\\prime \\mathrm {der}}(F) 
\\backslash Y(F)$ .", "By Proposition REF we have $\\iota _y^*(f) \\in \\mathcal {S}(X_{P \\cap M_{\\beta _0}}(\\mathbb {A}_F)).$ There is a natural map $(M \\cap M_{\\beta _0})^{\\mathrm {ab}} \\rightarrow M^{\\mathrm {ab}}$ and hence $(M \\cap M_{\\beta _0})^{\\mathrm {ab}}(\\mathbb {A}_F)$ acts on $\\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F)).$ With respect to this action $\\iota _y$ is $(M \\cap M_{\\beta _0})^{\\mathrm {ab}}(F)$ -equivariant.", "In particular if (2) is valid the assumption that $f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ is $K_M$ -finite implies $\\iota _y(f)$ is finite under the maximal compact subgroup of $(M \\cap M_{\\beta _0})^{\\mathrm {ab}}(\\mathbb {A}_F).$ Hence applying Theorem REF we see that the above is $&-\\sum _{y }\\Bigg (\\sum _{\\chi } \\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\\\mathrm {Re}(s_0)\\ge 0\\end{array}} \\frac{w(s_0)}{\\kappa _F}\\mathrm {Res}_{s=s_0}E^*(I,(\\mathcal {F}_{P \\cap M_{\\beta _0}|P^* \\cap M_{\\beta _0}}(\\iota _y^*(f)))^*_{\\chi _{-s}})\\Bigg )\\\\&+\\sum _{ y }\\Bigg (\\sum _{\\gamma \\in X_{P^* \\cap M_{\\beta _0}}^{\\circ }(F)}\\mathcal {F}_{P \\cap M_{\\beta _0}|P^* \\cap M_{\\beta _0}}(\\iota _y^*(f))(\\gamma ) +\\sum _{\\chi }\\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\\\mathrm {Re}(s_0)\\ge 0\\end{array}} \\frac{w(s_0)}{\\kappa _F}\\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f)_{\\chi _s})\\Bigg ).$ By Proposition REF this is $&-\\sum _{y }\\Bigg (\\sum _{\\chi } \\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\\\mathrm {Re}(s_0)\\ge 0\\end{array}} \\frac{w(s_0)}{\\kappa _F}\\mathrm {Res}_{s=s_0}E^*(I,\\iota _y^*(\\mathcal {F}_{P |P^*}(f))^*_{\\chi _{-s}})\\Bigg )\\\\&+\\sum _{ y }\\Bigg (\\sum _{\\gamma \\in X_{P^* \\cap M_{\\beta _0}}^{\\circ }(F)}\\iota _y^*(\\mathcal {F}_{P|P^*}(f))(\\gamma ) +\\sum _{\\chi }\\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\\\mathrm {Re}(s_0)\\ge 0\\end{array}} \\frac{w(s_0)}{\\kappa _F}\\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f)_{\\chi _s})\\Bigg ).$ By Lemma REF $\\sum _{ y }\\sum _{\\gamma \\in X_{P^* \\cap M_{\\beta _0}}^{\\circ }(F)}\\iota _y^*(\\mathcal {F}_{P |P^*}(f))( \\gamma )=\\sum _{x^* \\in Y_{P^*}(F)}\\mathcal {F}_{P |P^*}(f)( x^*).$ This completes the proof of the identity in the theorem.", "For the absolute convergence statement in the theorem, we observe that (REF ) and (REF ) are absolutely convergent by Lemma REF .", "By the argument above and the functional equation in Theorem REF we see that $\\sum _{\\gamma \\in X_{P \\cap M_{\\beta _0}}^{\\circ }(F)}&f(\\iota _y(\\gamma ))\\\\ &=\\sum _{\\gamma \\in X_{P^* \\cap M_{\\beta _0}}^{\\circ }(F)}\\mathcal {F}_{P \\cap M_{\\beta _0}|P^* \\cap M_{\\beta _0}}(\\iota _y^*(f))(\\gamma )+\\frac{1}{\\kappa _F}\\sum _{\\chi } \\sum _{s_0 \\in \\mathbb {C}} \\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f)_{\\chi _s})$ for all $y.$ We deduce that the sum over $y$ in $\\frac{1}{\\kappa _F}\\sum _{y} \\left(\\sum _{\\chi } \\sum _{s_0 \\in \\mathbb {C}} \\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f)_{\\chi _s})\\right)$ is absolutely convergent as well.", "The support of the sum over $\\chi $ is finite and under assumptions (1) or (2) it depends only on the $K_M$ -type of $f.$ Under assumption (3) it depends only on the $K_M \\cap M(\\mathbb {A}_F^\\infty )$ -type of $f.$ With this in mind, for any fixed $s_0 \\in \\mathbb {C}$ we can use Lemma REF to choose an $f^{\\prime } \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ with the same $K_M$ -type as $f$ so that $&\\sum _{y } \\left(\\sum _{\\chi } \\sum 
_{s_0 \\in \\mathbb {C}} \\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f^{\\prime })_{\\chi _s})\\right)=\\sum _{y} \\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f)_{\\chi _s}).$ We deduce that the right hand side of (REF ) is absolutely convergent, and the same is true if we replace $f$ by $\\mathcal {F}_{P|P^*}(f)$ by Proposition REF and Theorem REF .", "Let $(f_1,f_2) \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F)) \\times \\mathcal {S}(Y_{P^*,P^{\\prime }}(\\mathbb {A}_F)).$ Recall the definition of the generalized Schubert Eisenstein series $E_{Y_P}(f_{1\\chi _s})$ and $E_{Y_{P^*}}^*(f_{2\\chi _s}^*)$ of (REF ).", "Using lemmas REF and REF and standard arguments one obtains the following lemma: Lemma 5.3 For $f_1 \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ one has $\\sum _{x \\in M^{\\mathrm {ab}}(F) \\backslash Y_{P}(F)}\\int _{M^{\\mathrm {ab}}(\\mathbb {A}_F)} \\delta _{P}^{1/2}(m)|\\chi _s(\\omega _P(m))f_1(m^{-1}x)|dm<\\infty $ for $\\mathrm {Re}(s_{\\beta _0})$ sufficiently large.", "In particular, $E_{Y_P}(f_{1\\chi _s})$ converges absolutely for $\\mathrm {Re}(s_{\\beta _0})$ sufficiently large.", "Similarly, for $f_2 \\in \\mathcal {S}(Y_{P^*,P^{\\prime }}(\\mathbb {A}_F))$ the series $E_{Y_{P^*}}^*(f_{2\\chi _s}^*)$ converges absolutely for $\\mathrm {Re}(s_{\\beta _0})$ sufficiently small.", "$\\Box $ With notation as in Lemma REF , let $A^{\\beta _0}:=\\ker \\omega _{\\beta _0}:M^{\\mathrm {ab}} \\longrightarrow \\mathbb {G}_m$ Thus $M^{\\mathrm {ab}}$ is the internal direct sum of $\\beta _0^\\vee (\\mathbb {G}_m)$ and $A^{\\beta _0}$ by Lemma REF .", "Recall that $M^{\\prime }$ is the unique Levi subgroup of $P^{\\prime }$ containing $M$ .", "Applying Lemma REF with $P$ replaced by $P^{\\prime }$ we see that the canonical map $M^{\\mathrm {ab}} \\rightarrow M^{\\prime \\mathrm {der}}$ restricts to induce an isomorphism $ A^{\\beta _0} \\tilde{\\longrightarrow } M^{\\prime \\mathrm {ab}}.$ We have a Haar measure on $\\mathbb {A}_F^\\times $ and we endow $A^{\\beta _0}(\\mathbb {A}_F)$ by declaring that the product measure on $A^{\\beta _0}(\\mathbb {A}_F)\\beta _0^\\vee (\\mathbb {A}_F^\\times )$ is our Haar measure on $M^{\\mathrm {ab}}(\\mathbb {A}_F).$ As usual let $(\\mathbb {A}_F^\\times )^1:=\\lbrace t \\in \\mathbb {A}_F^\\times :|t|=1\\rbrace , \\quad (\\mathbb {A}_F^\\times )^+:=\\lbrace t \\in \\mathbb {A}_F^\\times :|t|>1\\rbrace , \\quad (\\mathbb {A}_F^\\times )^-:=\\lbrace t \\in \\mathbb {A}_F^\\times :|t|<1\\rbrace .$ For algebraic $F$ -groups $G$ we set $[G]:=G(F) \\backslash G(\\mathbb {A}_F)$ and define $[\\mathbb {G}_m]^1:=F^\\times \\backslash (\\mathbb {A}_F^\\times )^1, \\quad [\\mathbb {G}_m]^ \\pm :=F^\\times \\backslash (\\mathbb {A}_F^\\times )^\\pm .$ We endow $(\\mathbb {A}_F^\\times )^1$ with the unique measure so that $|\\cdot |:\\mathbb {A}_F^\\times /(\\mathbb {A}_F^\\times )^1 \\tilde{\\longrightarrow } \\mathrm {Im}(|\\cdot |) \\subseteq \\mathbb {R}_{>0}$ is measure preserving.", "Here in the number field case $\\mathrm {Im}(|\\cdot |)=\\mathbb {R}_{>0}$ with its usual Haar measure $\\frac{dt}{t}$ (where $dt$ the Lebesgue measure on $\\mathbb {R}$ ).", "In the function field case the image is discrete and endowed with the counting measure.", "The following technical lemma will be used in the proof of Theorem REF : Lemma 5.4 Let $f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F)).$ Assume $F$ is a function field, $F$ is a number field, Conjecture REF is valid and $f$ is $K_M$ -finite, or $F$ is a number field and Conjecture REF 
is valid.", "For every $\\chi \\in (\\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times })^{\\Delta _P},$ $\\chi ^{\\prime } \\in \\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times }$ and $s_0 \\in \\mathbb {C}$ $ &\\sum _{y } \\int _{[\\mathbb {G}_m]^1 \\times A^{\\beta _0}(\\mathbb {A}_F)}\\delta _{P}^{1/2}(\\beta _0^\\vee (t)m)|\\chi _s(t\\omega _P(m))\\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)_{\\chi ^{\\prime }_{z}})|dmd^\\times t$ is finite.", "Moreover for $\\mathrm {Re}(s_{\\beta _0})$ sufficiently large $ &\\sum _{y}\\int _{[\\mathbb {G}_m]^- \\times A^{\\beta _0}(\\mathbb {A}_F)}\\delta _{P}^{1/2}(\\beta _0^\\vee (t)m)|\\chi _s(t\\omega _P(m)) \\mathrm {Res}_{z=s_0}E^*(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)^*_{\\chi ^{\\prime }_{ z}})|dm d^\\times t$ converges.", "The expression $ &\\sum _{y } \\int _{ [\\mathbb {G}_m]^- \\times A^{\\beta _0}(\\mathbb {A}_F)}\\delta _{P}^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)_{\\chi ^{\\prime }_{z}})dm d^\\times t$ originally defined for $\\mathrm {Re}(s_{\\beta _0})$ large, admits a meromorphic continuation to $s \\in \\mathbb {C}^{\\Delta _P}.$ Here all sums over $y$ are over $ P^{\\prime }(F) \\backslash Y(F)$ .", "The group $[\\mathbb {G}_m]^1$ is compact.", "Using this observation and the argument proving Lemma REF we see that for all $f_1 \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ and $f_2 \\in \\mathcal {S}(Y_{P^*,P^{\\prime }}(\\mathbb {A}_F))$ the integrals $ \\begin{split}&\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m) \\sum _{x \\in Y_{P}(F)}|f_1((\\beta _0^\\vee (t)m)^{-1}x)|dm d^\\times t\\\\& \\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\delta _{P}^{1/2}(\\beta _0^\\vee (t)m) \\sum _{x^* \\in Y_{P^*}(F)}|f_2((\\beta _0^\\vee (t)m)^{-1}x^*)|dmd^\\times t, \\end{split}$ are convergent.", "Let $(f_\\infty ,f^\\infty ) \\in \\mathcal {S}(Y_{P,P^{\\prime }}(F_\\infty )) \\times \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F^\\infty ))$ , let $(t,m) \\in \\mathbb {A}_F^\\times \\times T^{\\beta _0}(\\mathbb {A}_F)$ , and let $y \\in Y(F).$ Arguing as in the proof of Theorem REF for each $\\chi ^{\\prime } \\in \\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times }$ and $s_0 \\in \\mathbb {C}$ there is an $f^{\\prime \\infty }\\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F^\\infty ))$ such that $ \\begin{split}&\\sum _{x \\in X_{P \\cap M_{\\beta _0}(F)}}\\iota _y(f_\\infty f^{\\prime \\infty })((\\beta _0^\\vee (t)m)^{-1} x)- \\sum _{x^*\\in X_{P^* \\cap M_{\\beta _0}}(F)}\\mathcal {F}_{P \\cap M_{\\beta _0} |P^* \\cap M_{\\beta _0}}\\iota _y^*(L(\\beta _0^{\\vee }(t)m)(f_\\infty f^{\\prime \\infty }))(x^*)\\\\&=\\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)(f_\\infty f^\\infty ))_{\\chi ^{\\prime }_{z }}) \\end{split}$ Multiplying both sides of (REF ) by $\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))$ , summing over $y \\in P^{\\prime \\mathrm {der}}(F) \\backslash G(F)$ and then integrating over $[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]$ we arrive at $ \\begin{split}&\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\sum _{y}\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\Bigg (\\sum _{x \\in X_{P \\cap M_{\\beta _0}(F)}}\\iota _y(f_\\infty f^{\\prime \\infty })((\\beta _0^\\vee (t)m)^{-1} x)\\\\&- \\sum _{x^*\\in X_{P^* \\cap M_{\\beta _0}}(F)}\\mathcal 
{F}_{P \\cap M_{\\beta _0} |P^* \\cap M_{\\beta _0}}\\iota _y^*(L(\\beta _0^{\\vee }(t)m)(f_\\infty f^{\\prime \\infty }))(x^*)\\Bigg ) dm d^\\times t\\\\&=\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m))\\chi _s(t\\omega _P(m))\\sum _{y} \\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)(f_\\infty f^\\infty ))_{\\chi ^{\\prime }_{ z }})dmd^\\times t \\end{split}$ To prove that the bottom sum and integral in (REF ) converge absolutely it suffices to prove that the top sum and integral converge absolutely.", "Using Lemma REF and Proposition REF this reduces to the assertion that the integrals in (REF ) are convergent.", "Hence $ \\int \\sum _{y \\in P^{\\prime \\mathrm {der}}(F) \\backslash Y(F)}\\delta _P^{1/2}(\\beta _0^\\vee (t)m))|\\chi _s(t\\omega _P(m)) \\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)(f_\\infty f^\\infty ))_{\\chi ^{\\prime }_{z }})|dmd^\\times t$ converges, where the integral is over $[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}].$ Using (REF ) we see that (REF ) unfolds to (REF ).", "This completes the proof of the first assertion of the lemma.", "Let $n$ be the maximal order of the pole of $E(I,f^{\\prime }_{\\chi ^{\\prime }_{z}})$ at $z=s_0$ as $f^{\\prime } \\in \\mathcal {S}(X_{P \\cap M_{\\beta _0}}(\\mathbb {A}_F))$ varies.", "Recall the continuity assertion for $\\iota _y^*$ contained in Proposition REF .", "Using the first absolute convergence assertion of the lemma and Lemma REF there are continuous linear functionals $\\Lambda _{i,y}((\\cdot )f^\\infty ):\\mathcal {S}(Y_{P,P^{\\prime }}(F_\\infty )) \\longrightarrow \\mathbb {C}$ for $0 \\le i \\le n-1$ such that $ \\begin{split}&\\sum _{y \\in P^{\\prime }(F) \\backslash Y(F)}\\int _{ A^{\\beta _0}(\\mathbb {A}_F)}\\delta _{P}^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m)) \\mathrm {Res}_{z=s_0}E^*(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)^*_{\\chi ^{\\prime }_{ z}})dm \\\\&= \\sum _{y \\in P^{\\prime }(F) \\backslash Y(F) } \\int _{A^{\\beta _0}(\\mathbb {A}_F)} \\left( \\sum _{i=0}^{n-1}\\chi _s(t\\omega _P(m))|t|^{-s_{0}}(\\log |t|)^i\\overline{\\chi }(t) \\Lambda _{i,y}(L(m)(f_\\infty f^\\infty ))\\right) dm\\end{split}$ for all $t \\in \\mathbb {A}_F^\\times .$ Here when $F$ is a function field we give $\\mathcal {S}(Y_{P,P^{\\prime }}(F_\\infty )$ the discrete topology.", "Varying $f^\\infty $ and using Lemma REF we deduce that $\\sum _{y \\in P^{\\prime }(F) \\backslash Y(F)} \\int _{A^{\\beta _0}(\\mathbb {A}_F)}|\\chi _s(\\omega _P(m)\\Lambda _{i,y}(L(m)(f_\\infty f^\\infty ))|dm$ is convergent for each $i.$ Using this fact we see that for $\\mathrm {Re}(s_{\\beta _0})$ sufficiently large the integral over $t \\in [\\mathbb {G}_m]^-$ of (REF ) is $\\sum _{y \\in P^{\\prime }(F) \\backslash Y(F) } \\int _{A^{\\beta _0}(\\mathbb {A}_F)} \\left( \\int _{[\\mathbb {G}_m]^-}\\sum _{i=0}^{n-1}\\chi _s(t\\omega _P(m))|t|^{-s_{0}}(\\log |t|)^i\\overline{\\chi }(t)d^\\times t \\Lambda _{i,y}(L(m)(f_\\infty f^\\infty ))\\right) dm;$ in other words, it is permissible to bring the integral over $t$ inside the other integral and the sum.", "The integrals over $t$ may be evaluated in an elementary manner, yielding the remaining assertions of the lemma.", "Assume for the moment that $f$ is $K_M$ -finite and that $\\mathrm {Re}(s_{\\beta _0})$ is large.", "Then $E_{Y_P}(f_{\\chi _s})=&\\int _{[M^{\\mathrm {ab}}]}\\delta _P^{1/2}(m)\\chi _s(\\omega _P(m))\\sum _{x \\in Y_{P}(F)}f(m^{-1} x )dm\\\\=&\\int _{[\\mathbb {G}_m]^+ \\times [A^{\\beta 
_0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x \\in Y_{P}(F)}f((\\beta _0^\\vee (t)m)^{-1}x )dmd^\\times t\\\\&+\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x\\in Y_{P}(F)}f((\\beta _0^\\vee (t)m)^{-1}x )dmd^\\times t\\\\&+\\int _{[\\mathbb {G}_m]^- \\times [A^{\\beta _0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x \\in Y_{P}(F)}f((\\beta _0^\\vee (t)m)^{-1}x )dmd^\\times t.$ Here $d^\\times t$ is the measure on $[\\mathbb {G}_m].$ The subgroup $ [\\mathbb {G}_m]^1<[\\mathbb {G}_m]$ has nonzero measure with respect to $d^\\times t$ if and only if $F$ is a function field.", "By Lemma REF one has $ \\mathcal {F}_{P|P^*} \\circ L(m)=\\delta _{P^* \\cap M^{\\beta _0}}(m) L(m) \\circ \\mathcal {F}_{P|P^*}.$ Moreover $\\delta _{P}^{1/2}(m)\\delta _{P^* \\cap M}(m)=\\delta _{P^*}^{1/2}(m).$ With this in mind we apply the Poisson summation formula of Theorem REF to $\\tfrac{1}{2}$ the second integral and the third integral above to see that $E_{Y_P}(f_{\\chi _s})$ is the sum of (REF ) and (REF ) below: $&\\int _{[\\mathbb {G}_m]^+ \\times [A^{\\beta _0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x \\in Y_{P}(F)}f((\\beta _0^\\vee (t)m)^{-1}x )dm d^\\times t\\\\ \\nonumber &+\\frac{1}{2}\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x \\in Y_{P}(F)}f((\\beta _0^\\vee (t)m)^{-1}x )dmd^\\times t\\\\ \\nonumber &+\\frac{1}{2}\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\delta _{P^*}^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x \\in Y_{P^*}(F)}\\mathcal {F}_{P|P^*}(f)((\\beta _0^\\vee (t)m)^{-1} x )dm d^\\times t\\\\ \\nonumber &+\\int _{[\\mathbb {G}_m]^- \\times [A^{\\beta _0}]}\\delta _{P^*}^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x \\in Y_{P^*}(F)}\\mathcal {F}_{P|P^*}(f)((\\beta _0^\\vee (t)m)^{-1}x )dm d^\\times t$ and $\\frac{1}{2}\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\chi _s(t\\omega _P(m))&\\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\ \\mathrm {Re}(s_0) \\ge 0\\end{array}}\\frac{w(s_0)}{\\kappa _F}\\Bigg ( \\sum _{y,\\chi ^{\\prime }} \\delta ^{1/2}_P(\\beta _0^\\vee (t)m) \\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)_{\\chi _z^{\\prime }})\\\\&-\\sum _{y,\\chi ^{\\prime }} \\delta ^{1/2}_{P^*}(\\beta _0^\\vee (t)m) \\mathrm {Res}_{z=s_0}E^*(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)\\mathcal {F}_{P|P^*}(f))^*_{\\chi ^{\\prime }_{-z}})\\Bigg )dmd^\\times t \\nonumber \\\\+\\int _{[\\mathbb {G}_m]^- \\times [A^{\\beta _0}]}\\chi _s(t\\omega _{P}(m))&\\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\ \\mathrm {Re}(s_0)\\ge 0\\end{array}}\\frac{w(s_0)}{\\kappa _F}\\Bigg (\\sum _{y,\\chi ^{\\prime }} \\delta ^{1/2}_P(\\beta _0^\\vee (t)m) \\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)_{\\chi ^{\\prime }_z})\\nonumber \\\\&-\\sum _{y,\\chi ^{\\prime }} \\delta _{P^*}^{1/2}(\\beta _0^\\vee (t)m) \\mathrm {Res}_{z=s_0}E^*(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)\\mathcal {F}_{P|P^*}(f))^*_{\\chi _{-z}^{\\prime }}) \\nonumber \\Bigg )dm d^\\times t$ where the sums over $y$ are over $P^{\\prime \\mathrm {der}}(F) \\backslash Y(F)$ and the sums over $\\chi ^{\\prime }$ are over $\\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times }.$ We recall that the sums over $s_0$ have finite support in a set depending only on the $K_M$ type of $f$ by Conjecture REF in the 
number field case and the fact that $E(I,f_{\chi _s})$ is rational in the sense of [36] in the function field case [36].", "Since $\omega _{P^*}(\beta _0^{\vee }(t))=\omega _{P}(\beta _0^\vee (t))^{-1}$ Lemma REF implies that the lower two integrals in (REF ) converge for $\mathrm {Re}(s_{\beta _0})$ small enough.", "Thus they converge for all $s$ .", "Since we already know that the upper two converge absolutely for all $s$ we deduce that (REF ) is a holomorphic function of $s.$ Now consider (REF ).", "We have the functional equation $ E(I,\iota _y^*(L(m)f)_{\chi _{\beta _0 s}})=\delta _{P^* \cap M_{\beta _0}}(m)E^*(I,\iota _y^*(L(m)\mathcal {F}_{P|P^*}(f))_{\chi _{\beta _0s}})$ by Theorem REF and (REF ).", "Using Lemma REF and (REF ) to justify switching sums and integrals, we deduce that for $\mathrm {Re}(s_{\beta _0})$ sufficiently large (REF ) is equal to $\frac{1}{2}\sum _{\begin{array}{c}s_0 \in \mathbb {C}\\ \mathrm {Re}(s_0)\ge 0\end{array}}&\frac{w(s_0)}{\kappa _F}\sum _{y} \int _{[\mathbb {G}_m]^1 \times A^{\beta _0}(\mathbb {A}_F)}\chi _s(t\omega _P(m))\Bigg ( \delta _P^{1/2}(t\omega _P(m))\mathrm {Res}_{z=s_0}E(I,\iota _y^*(L(\beta _0^\vee (t)m)f)_{\chi _{\beta _0z}})\\&- \delta _{P^*}^{1/2}(t\omega _P(m))\mathrm {Res}_{z=s_0}E^*(I,\iota _y^*(L(\beta _0^\vee (t)m)\mathcal {F}_{P|P^*}(f))^*_{(\chi _{\beta _0})_{-z}})\Bigg )dm d^\times t \nonumber \\+\sum _{\begin{array}{c}s_0 \in \mathbb {C}\\ \mathrm {Re}(s_0) \ge 0\end{array}}&\frac{w(s_0)}{\kappa _F}\sum _y\int _{[\mathbb {G}_m]^- \times A^{\beta _0}(\mathbb {A}_F)}\chi _s(t\omega _{P}(m))\Bigg (\delta _{P}^{1/2}(\beta _0^\vee (t)m)\mathrm {Res}_{z=s_0}E(I,\iota _y^*(L(\beta _0^\vee (t)m)f)_{\chi _{\beta _0z}}) \nonumber \\&- \delta _{P^*}^{1/2}(\beta _0^\vee (t)m)\mathrm {Res}_{z=s_0}E^*(I,\iota _y^*(L(\beta _0^\vee (t)m)\mathcal {F}_{P|P^*}(f))^*_{(\chi _{\beta _0})_{-z}})\Bigg )dmd^\times t . \nonumber $ where the sums on $y$ are now over $P^{\prime }(F) \backslash G(F).$ Here we are using (REF ) to unfold the integral.", "By Lemma REF and (REF ) the expression (REF ) is holomorphic for $\mathrm {Re}(s)$ large, and admits a meromorphic continuation to the plane.", "Thus, under the assumption that $f$ is $K_M$ -finite, we have proved the meromorphic continuation statement in the theorem.", "By a symmetric argument we deduce that the sum of (REF ) and (REF ) is $E^*_{Y_{P^*}}(\mathcal {F}_{P|P^*}(f)_{\chi _s}^*).$ This proves the functional equation $E_{Y_P}(f_{\chi _s})=E^*_{Y_{P^*}} (\mathcal {F}_{P|P^*}(f)_{\chi _s}^*).$ Thus far we have assumed that $f$ is $K_M$ -finite.", "To remove this assumption we note that for any $f \in \mathcal {S}(X_{P}(\mathbb {A}_F))$ and any $\chi \in \widehat{A_{\mathbb {G}_m} F^\times \backslash \mathbb {A}_F^\times }$ there is a $K_M$ -finite $f^{\prime } \in \mathcal {S}(X_P(\mathbb {A}_F))$ such that $f_{\chi _s}=f_{\chi _s}^{\prime }$ .", "Moreover, using Lemma REF , we can choose $f^{\prime }$ so that $\mathcal {F}_{P|P^*}(f)_{\chi _s}^*=\mathcal {F}_{P|P^*}(f^{\prime })_{\chi _s}^*$ .", "This allows us to remove the $K_M$ -finiteness assumption.", "The proof of Theorem REF shows that the poles of $E_{Y_P}(f_{\chi _s})$ are controlled by the poles of $E(I,f^{\prime }_{\chi _{\beta _0 s}})$ for $f^{\prime } \in \mathcal {S}(X_{P \cap M_{\beta _0}}(\mathbb {A}_F)).$ In particular it is easy to deduce the following corollary from
the proof: Corollary 5.5 Under the hypotheses of Theorem REF , for fixed $(s_{\beta }) \in \mathbb {C}^{\Delta _{P^{\prime }}}$ the order of the pole of $E_{Y_P}(f_{\chi _s})$ at $s_{\beta _0}=s_0$ is no greater than the maximum of the orders of the pole of $E(I,f^{\prime }_{\chi _{\beta _0z}})$ at $z=s_0$ as $f^{\prime }$ ranges over $\mathcal {S}(X_{P \cap M_{\beta _0}}(\mathbb {A}_F)).$ $\Box $ On the poles of degenerate Eisenstein series We assume for this section that $F$ is a number field.", "Our goal here is to verify Conjectures REF and REF in some cases.", "Without loss of generality we take $G=M_{\beta _0}$ , thus $P \le G$ is now a maximal parabolic subgroup containing our fixed Borel $B$ and $P^*$ is the opposite parabolic.", "Let $Q$ be the unique maximal parabolic subgroup containing $B$ that is conjugate to $P^*.$ Let $K \le G(\mathbb {A}_F)$ be a maximal compact subgroup and let $\mathbb {C}_+:=\lbrace z \in \mathbb {C}:\mathrm {Re}(z)>0\rbrace .$ Lemma 6.1 To prove Conjecture REF it suffices to show that for each character $ \chi \in \widehat{A_{\mathbb {G}_m} F^\times \backslash \mathbb {A}_F^\times }$ there is a finite set $\Upsilon (\chi ) \subset \mathbb {C}_+$ such that if $E(g,\Phi ^{\chi _s}_P)$ or $E(g,\Phi ^{\chi _s}_Q)$ has a pole at $s=s_0$ with $\mathrm {Re}(s_0)>0$ for any holomorphic $K$ -finite section $\Phi ^{\chi _s}_P \in I_P(\chi _s)$ or $\Phi ^{\chi _s}_Q \in I_Q(\chi _s)$ then $s_0 \in \Upsilon (\chi ).$ By the observations in the proof of Lemma REF , if $a_{P|P}(\chi _s)$ has a pole at $s=s_0$ for $\mathrm {Re}(s_0) \ge 0$ then $s_0=0.$ Moreover, the order of the pole is bounded by an integer depending only on $P$ and $G.$ Thus for any $f \in \mathcal {S}(X_P(\mathbb {A}_F))$ Theorem REF implies that the only possible pole of $E(g,f_{\chi _s})$ on the line $\mathrm {Re}(s)=0$ is at $s=0$ and its order is bounded by an integer depending only on $P$ and $G.$ Now consider poles of $E(g,f_{\chi _s})$ for $\mathrm {Re}(s)<0.$ Using notation from the proof of Theorem REF and arguing as in that proof we have $E(g,f_{\chi _s})=E^*(g,\mathcal {F}_{P|P^*}(f)^*_{\chi _s})=E_Q(g,j(\mathcal {F}_{P|P^*}(f)^*_{\chi _s})).$ Thus the order of the pole of $E(g,f_{\chi _s})$ at $s_0$ is equal to the order of the pole of $E_Q(g,j(\mathcal {F}_{P|P^*}(f)^*_{\chi _{s}}))$ at $s_0.$ Since $\mathcal {F}_{P|P^*}(f) \in \mathcal {S}(X_{P^*}(\mathbb {A}_F))$ and $a_{P^*|P^*}((\chi _s)^{-1})$ is holomorphic for $\mathrm {Re}(s)<0$ by Lemma REF we have that $\mathcal {F}_{P|P^*}(f)^*_{\chi _s} \in I_{P^*}^*(\chi _s)$ is a holomorphic section for $\mathrm {Re}(s)<0$ .", "This implies $j(\mathcal {F}_{P|P^*}(f)^*_{\chi _{s}})$ is a holomorphic section of $I_Q((\chi ^{-1})_{-s})$ for $\mathrm {Re}(s)<0.$ The lemma follows.", "The proof of the following lemma is analogous: Lemma 6.2 To prove Conjecture REF it suffices to show that there is a finite set $\Upsilon \subset \mathbb {C}_+$ and an integer $n>0$ such that if $E(g,\Phi ^{\chi _s}_P)$ or $E(g,\Phi ^{\chi _s}_Q)$ has a pole at $s=s_0$ with $\mathrm {Re}(s_0)>0$ for any $\chi \in \widehat{A_{\mathbb {G}_m} F^\times \backslash \mathbb {A}_F^\times }$ and holomorphic $K$ -finite sections $\Phi ^{\chi _s}_P \in I_P(\chi _s)$ or $\Phi ^{\chi _s}_Q \in I_Q(\chi _s)$ then $s_0 \in \Upsilon $ and $\chi ^n=1.$ $\Box $ Theorem 6.3 Suppose that $G=\mathrm {Sp}_{2n}$ and that $ P$ is the Siegel parabolic, that is, the parabolic
subgroup with Levi subgroup isomorphic to $\\mathrm {GL}_n.$ Then Conjecture REF is true.", "We use the results of [26].", "We point out that $a_{P|P}(\\chi _s)=a_{I_{2n}}(\\chi _s)$ in the notation of loc. cit.", "by [16].", "Thus the theorem follows from Lemma REF , the fact that $a_{P|P}(\\chi _s)$ is holomorphic and nonzero for $\\mathrm {Re}(s) > 0$ by Lemma REF , and the Corollary of [26].", "Theorem 6.4 Suppose that $G=\\mathrm {SL}_n$ .", "Then Conjecture REF is true.", "This can be derived using Lemma REF and the same arguments as those proving [24].", "The proof comes down to two statements explained in [24].", "The first is the statement that a certain quotient of completed $L$ -functions depending on $\\chi $ has only finitely many poles for $\\mathrm {Re}(s)>0$ .", "The second is that a certain normalized intertwining operator has only finitely many poles.", "These are proven exactly as in loc. cit.", "The required facts on induced representations of $\\mathrm {GL}_2$ may be found in [28].", "Much of the work towards proving Conjecture REF for arbitrary parabolic subgroups of symplectic groups is contained in [23], and much of the work towards proving Conjecture REF for general linear groups is contained in [24].", "The additional steps necessary seem to require careful analysis of the reducibility points of local principal series representations at the archimedean places." ], [ "Braverman-Kazhdan spaces", "We work over a field $F.$ Let $G$ be a connected split semisimple group over $F.$ We only consider parabolic subgroups of $G$ containing a fixed split torus $T$ ; for such a subgroup $P$ we write $N_P$ for its unipotent radical.", "We fix a Levi subgroup $M$ of $G$ containing $T$ and write $M^{\\mathrm {ab}}:=M/M^{\\mathrm {der}}.$ For all parabolic subgroups $P$ of $G$ with Levi subgroup $M$ we have a Braverman-Kazhdan space $X_{P}^\\circ :=P^{\\mathrm {der}} \\backslash G.$ We observe that $ P^{\\mathrm {der}}=M^{\\mathrm {der}}N_P$ where $N_P$ is the unipotent radical of $P.$ The scheme $X_P^{\\circ }$ is strongly quasi-affine, i.e.", "$X_P:=\\overline{X_P^{\\circ }}^{\\mathrm {aff}}:=\\mathrm {Spec}(\\Gamma (X_P^{\\circ },\\mathcal {O}_{X_P^{\\circ }}))$ is an affine scheme of finite type over $F$ and the natural map $X_P^{\\circ } \\rightarrow X_P$ is an open immersion [6].", "Strictly speaking, in loc. 
cit.", "Braverman and Gaitsgory work over an algebraically closed field, but their results hold in our setting by fpqc descent along $\\mathrm {Spec}(\\overline{F}) \\rightarrow \\mathrm {Spec}(F).$ A convenient reference for fpqc descent is [39].", "Lemma 2.1 The torus $M^{\\mathrm {ab}}$ is split.", "The maps $M(F) \\longrightarrow M^{\\mathrm {ab}}(F) \\quad \\textrm {and} \\quad G(F) \\longrightarrow (P^{\\mathrm {der}} \\backslash G)(F)$ are surjective.", "In [16] this is proved in the special case where $P$ is a maximal parabolic.", "The same proof implies the more general statement here.", "We have a right action of $M^{\\mathrm {ab}} \\times G$ on $X_P^{\\circ }$ given on points in an $F$ -algebra $R$ by $ \\begin{split}X^{\\circ }_{P}(R) \\times M^{\\mathrm {ab}}(R) \\times G(R) &\\longrightarrow X^\\circ _{P}(R)\\\\(x,m,g) &\\longmapsto m^{-1}xg.", "\\end{split}$ This action extends to an action of $M^{\\mathrm {ab}} \\times G$ on $X_P.$ We now discuss the Plücker embedding of $P^{\\mathrm {der}} \\backslash G$ as explained in a special case in [16].", "We assume for simplicity that $G$ is simply connected.", "Let $T\\le B \\le P$ be a Borel subgroup, let $\\Delta _G$ be the set of simple roots of $T$ in $G$ defined by $B$ and $\\Delta _M$ the set of simple roots of $T$ in $M$ defined by $B \\cap M.$ Let $ \\Delta _P:=\\Delta _G-\\Delta _M$ be the set of simple roots in $\\Delta _G$ attached to $P.$ For each $\\beta \\in \\Delta _G$ we let $B \\le P_{\\beta } \\le G$ be the unique maximal parabolic subgroup containing $B$ that does not contain the root group attached to $-\\beta .$ As usual let $X^*(T)$ be the character group of $T.$ For each $\\beta \\in \\Delta _P$ let $\\omega _{\\beta } \\in X^*(T)_{\\mathbb {Q}}$ be the fundamental weight dual to the coroot $\\beta ^\\vee ,$ in other words, $ \\omega _{\\beta }(\\alpha ^{\\vee })={\\left\\lbrace \\begin{array}{ll}1 &\\textrm { if }\\alpha =\\beta \\\\0 &\\textrm { otherwise.}\\end{array}\\right.", "}$ Here $\\alpha \\in \\Delta _G.$ Since $G$ is simply connected, the fundamental weight $\\omega _{\\beta }$ lies in $X^*(T).$ There is an irreducible representation $ V_{\\beta } \\times G \\longrightarrow V_{\\beta }$ of highest weight $-\\omega _{\\beta }$ .", "Here $G$ acts on the right.", "Let $ V:=\\prod _{\\beta \\in \\Delta _P} V_\\beta , \\quad V^{\\circ }=\\prod _{\\beta \\in \\Delta _P} (V_\\beta -\\lbrace 0\\rbrace ).$ Choose a highest weight vector $v_\\beta \\in V_\\beta (F)$ for each $\\beta .$ Lemma 2.2 There is a closed immersion $ \\mathrm {Pl}_P:X_P^\\circ \\longrightarrow V^\\circ $ given on points by $\\mathrm {Pl}_P(g):=(v_\\beta g).$ It extends to a $G$ -equivariant map $\\mathrm {Pl}_P:X_P \\rightarrow V.$ The character $\\omega _{\\beta }$ extends to a character of $M$ for all $\\beta $ and the map $ \\omega _P:=\\prod _{\\beta \\in \\Delta _P}\\omega _{\\beta }:M^{\\mathrm {ab}} \\longrightarrow \\mathbb {G}_m^{\\Delta _P}$ is an isomorphism.", "If we let $M^{\\mathrm {ab}}$ act via $\\omega _P$ on $V$ then $\\mathrm {Pl}_P$ is $M^{\\mathrm {ab}}$ -equivariant.", "In particular for $(m,g) \\in M^{\\mathrm {ab}}(R) \\times G(R)$ one has $ \\mathrm {Pl}_P(m^{-1}g)=(\\omega _\\beta (m)v_{\\beta } g).$ Here $\\mathbb {G}_m^{\\Delta _P}:=\\prod _{\\beta \\in \\Delta _P}\\mathbb {G}_m.$ We will use similar notation below without comment.", "In the introduction, we identified $\\Delta _P$ with the set $\\lbrace 0,1,\\dots ,k\\rbrace $ .", "To ease notation, for any subset $\\Delta \\subset \\Delta _G$ let $ 
V_{\\Delta }=\\prod _{\\beta \\in \\Delta }V_{\\beta } \\quad \\textrm { and }\\quad V_{\\Delta }^\\circ :=\\prod _{\\beta \\in \\Delta }(V_{\\beta }-\\lbrace 0\\rbrace ).$ Denote by $P_\\beta $ the unique maximal parabolic subgroup of $G$ containing $B$ that does not contain the root group attached to $-\\beta $ .", "Then the stabilizer of the line spanned by $v_\\beta $ in $V_\\beta $ is $P_\\beta $ [27].", "Since $P\\le P_{\\beta }$ we deduce that $\\mathrm {Pl}_P$ is well-defined, that $\\omega _{\\beta }$ extends to a character on $M,$ and that (REF ) holds.", "Since $V$ is affine the map $\\mathrm {Pl}_P$ tautologically extends to a $M^{\\mathrm {ab}} \\times G$ -equivariant map $\\mathrm {Pl}:X_P \\rightarrow V.$ We are left with checking that $\\mathrm {Pl}_P$ is a closed immersion and that (REF ) is an isomorphism.", "We first check that (REF ) is an isomorphism.", "The homomorphism $\\mathbb {G}^{\\Delta _P}_m \\rightarrow M^{\\mathrm {ab}}$ given on points by $x\\mapsto \\prod _{\\beta } \\beta ^\\vee (x)$ is a section of $\\omega _P$ .", "Thus it suffices to show that the image of this section is all of $M^{\\mathrm {ab}}$ .", "Since the image is closed and $M^{\\mathrm {ab}}$ is irreducible, it suffices to check that $|\\Delta _P|=\\dim \\mathbb {G}^{\\Delta _P}_m$ is equal to $\\dim M^{\\mathrm {ab}}$ .", "The group $M^{\\mathrm {ab}}$ is isogenous to the center $Z_M$ of $M$ which is contained in the subgroup of $T$ on which all the $\\beta \\in \\Delta _M=\\Delta _G-\\Delta _P$ vanish.", "Thus $Z_M$ has at most dimension $|\\Delta _P|$ .", "To check that $\\mathrm {Pl}_P$ is a closed immersion we first point out that we have a commutative diagram $ \\begin{tikzcd}X_P^{\\circ } [d,twoheadrightarrow] [r,\"\\mathrm {Pl}_P\"] &V^\\circ [d,twoheadrightarrow]\\\\P \\backslash G [r,\"\\overline{\\mathrm {Pl}}_P\",hookrightarrow] & \\prod _{\\beta \\in \\Delta _P}\\mathbb {P}V_{\\beta }\\end{tikzcd}$ where $\\overline{\\mathrm {Pl}}_{P}$ is induced by $\\mathrm {Pl}_P$ .", "We claim that $\\overline{\\mathrm {Pl}}_P$ is a closed immersion.", "Since $P \\backslash G$ is proper and $\\prod _{\\beta \\in \\Delta _P}\\mathbb {P}V_{\\beta }$ is separated the map $\\mathrm {Pl}_P$ has closed image.", "It is an immersion provided that $P$ is the stabilizer of the image of $(v_{\\beta })$ in $\\prod _{\\beta \\in \\Delta _P}\\mathbb {P}V_{\\beta }$ by [35].", "The stabilizer of the image of $(v_\\beta )$ is $ \\bigcap _{\\beta \\in \\Delta _P}P_\\beta .$ But (REF ) is a parabolic subgroup containing $B$ .", "By considering the root groups contained in (REF ) we deduce that it is $P$ .", "This completes the proof of the claim.", "Extend $\\omega _P$ to a character of $P$ in the canonical manner.", "Since the stabilizer of the image of $(v_\\beta )$ in $\\prod _{\\beta \\in \\Delta _P}\\mathbb {P}V_{\\beta }$ is $P$ we deduce that the stabilizer of $(v_\\beta )$ is contained in $P$ and hence equal to $\\ker (\\omega _P:P \\rightarrow \\mathbb {G}_m^{\\Delta _P}).$ Since (REF ) is an isomorphism $\\ker (\\omega _P:P \\rightarrow \\mathbb {G}_m^{\\Delta _P})=P^{\\mathrm {der}}.$ Thus the map $\\mathrm {Pl}_P$ is an immersion of $X_P^{\\circ }$ onto the orbit of $(v_\\beta )$ [35].", "The left vertical arrow in (REF ) is the geometric quotient by the action of $M^{\\mathrm {ab}}$ , and the right vertical arrow is the geometric quotient by $\\mathbb {G}_m^{\\Delta _P}$ .", "In view of the equivariance property (REF ) and the fact that (REF ) is an isomorphism, we deduce that the image of 
$\\mathrm {Pl}_P$ is closed, and hence $\\mathrm {Pl}_P$ is a closed immersion.", "Example Let $G=\\mathrm {SL}_3$ and $P=B,$ the Borel of upper triangular matrices.", "The representations $V_{\\beta _1}$ and $V_{\\beta _2}$ are just the standard representation $\\mathbb {G}_a^3$ and $\\wedge ^2 \\mathbb {G}_a^3.$ If we choose $(0,0,1)$ and $(0,1,0) \\wedge (0,0,1)$ as our highest weight vectors then $\\mathrm {Pl}_B{\\left({\\begin{matrix} a \\\\ b\\\\ c \\end{matrix}}\\right)} =\\left(c,b \\wedge c\\right).$ Under the Plücker embedding the image of $X_B$ is the cone $C$ whose points in an $F$ -algebra $R$ are given by $C(R)=\\lbrace (v_1,v_2) \\in R^3 \\times \\wedge ^2 R^3: v_1\\wedge v_2=0\\rbrace .$" ], [ "The $Y_{P}$", "We assume that we are in the situation of the introduction.", "Thus $Y \\subseteq G$ is a subscheme whose stabilizer under the left action of $G$ contains a parabolic subgroup $P^{\\prime }> P.$ We assume (REF ), that is, $P$ is maximal in $P^{\\prime }.$ Moreover we let $P^*\\le P^{\\prime }$ be the unique parabolic subgroup with Levi subgroup $M$ that is not equal to $P.$ There is a unique simple root $\\beta _0$ in $\\Delta _G-\\Delta _M$ such that the root group attached to $-\\beta _0$ is a subgroup of $P^{\\prime }$ but not $P.$ Let $M^{\\prime }$ be the unique Levi subgroup of $P^{\\prime }$ containing $M.$ The set of simple roots of $T$ in $M^{\\prime }$ with respect to the Borel $B$ is $ \\Delta _{M^{\\prime }}=\\lbrace \\beta _0\\rbrace \\cup \\Delta _M.$ Since $G$ is simply connected, it follows from [11] that the derived group $M^{\\prime \\mathrm {der}}$ is simply connected.", "Hence it is a direct product of its simple factors.", "There is a unique decomposition $ M^{\\prime \\mathrm {der}}=M_{\\beta _0} \\times M^{\\beta _0}$ where $M_{\\beta _0}$ is the unique simple factor containing the root group of $\\beta _0.$ It then is the unique simple factor containing $\\beta ^\\vee _0(\\mathbb {G}_m).$ Let $N_P$ denote the unipotent radical of $P.$ Lemma 2.3 The group $T_{\\beta _0}:=T \\cap M_{\\beta _0}$ is a maximal torus of $M_{\\beta _0}$ and $B \\cap M_{\\beta _0}$ is a Borel subgroup of $M_{\\beta _0}$ .", "The group $P \\cap M_{\\beta _0}$ is the unique maximal parabolic subgroup of $M_{\\beta _0}$ containing $B \\cap M_{\\beta _0}$ that does not contain the root group attached to $-\\beta _0,$ and $M \\cap M_{\\beta _0}$ is a Levi subgroup of $P \\cap M_{\\beta _0}.$ We have $ \\begin{split}N_{P \\cap M_{\\beta _0}}&=N_{P} \\cap M_{\\beta _0}\\\\M^{\\mathrm {der}} \\cap M_{\\beta _0}&=(M \\cap M_{\\beta _0})^{\\mathrm {der}},\\\\ P^{\\mathrm {der}} \\cap M_{\\beta _0}&=(P \\cap M_{\\beta _0})^{\\mathrm {der}}\\\\P^{\\mathrm {der}}&=(P^{\\mathrm {der}} \\cap M_{\\beta _0}) M^{\\beta _0}N_{P^{\\prime }} \\end{split}$ The first two sentences follow from [1].", "Since $N_{P \\cap M_{\\beta _0}}$ is the maximal connected normal unipotent subgroup of $P \\cap M_{\\beta _0}$ it is clearly contained in $N_P,$ the maximal connected normal unipotent subgroup of $P.$ Thus $N_{P \\cap M_{\\beta _0}} \\le N_P \\cap M_{\\beta _0}.$ On the other hand the intersection of $N_P \\cap M_{\\beta _0} <(M \\cap M_{\\beta _0})N_{P \\cap M_{\\beta _0}}=P \\cap M_{\\beta _0}$ and $M \\cap M_{\\beta _0}$ is the identity.", "It follows that $N_{P \\cap M_{\\beta _0}}=N_P \\cap M_{\\beta _0}.$ Let $Q^\\circ $ denote the neutral component of an algebraic group $Q.$ One can check the identities $(M^{\\mathrm {der}} \\cap M_{\\beta _0})^{\\circ }&=(M \\cap M_{\\beta 
_0})^{\\mathrm {der}},\\\\ (P^{\\mathrm {der}} \\cap M_{\\beta _0})^{\\circ }&=(P \\cap M_{\\beta _0})^{\\mathrm {der}}\\\\P^{\\mathrm {der}}&=(P^{\\mathrm {der}} \\cap M_{\\beta _0})^\\circ M^{\\beta _0}N_{P^{\\prime }}$ by observing that the groups on either side are connected and checking the corresponding identity on the Lie algebras using well-known facts about the decomposition of parabolic subalgebras into root spaces.", "Thus to complete the proof it suffices to prove that $M^{\\mathrm {der}} \\cap M_{\\beta _0}$ and $P^{\\mathrm {der}} \\cap M_{\\beta _0}$ are connected.", "The group $M^{\\mathrm {der}}<M^{\\prime \\mathrm {der}}=M_{\\beta _0} \\times M^{\\beta _0}$ is connected and is the direct product $ M^{\\mathrm {der}}=(M^{\\mathrm {der}} \\cap M_{\\beta _0}) \\times M^{\\beta _0}$ In particular $M^{\\mathrm {der}} \\cap M_{\\beta _0}$ is connected.", "We also conclude from (REF ) that $P^{\\mathrm {der}}$ is the semidirect product of $(M^{\\mathrm {der}} \\cap M_{\\beta _0})N_{P \\cap M_{\\beta _0}}$ and $M^{\\beta _0}N_{P^{\\prime }}.$ It follows that $P^{\\mathrm {der}} \\cap M^{\\beta _0}$ is $(M^{\\mathrm {der}} \\cap M_{\\beta _0})N_{P \\cap M_{\\beta _0}}$ , which is connected.", "Let $\\mathrm {pr}:G \\longrightarrow X_P^\\circ $ be the natural map.", "In the following it is convenient to denote by $|X|$ the underlying topological space of a scheme $X.$ As usual, a subset of a topological space is locally closed if it is open in its closure.", "Lemma 2.4 The set $\\mathrm {pr}(|Y|) \\subset |X_P^\\circ |$ is locally closed.", "Let $\\overline{Y}$ be the schematic closure of $Y$ in $G.$ Then $\\overline{Y}$ is stable under left multiplication by $P^{\\mathrm {der}}.$ The map $\\mathrm {pr}$ is a geometric quotient [40].", "In particular the underlying map of topological spaces is a quotient map.", "It follows that $\\mathrm {pr}(|\\overline{Y}|)$ is closed in $|X_{P}^\\circ |$ .", "The restriction of $\\mathrm {pr}$ to $|\\overline{Y}|$ is again a topological quotient map, so $\\mathrm {pr}(|Y|)$ is open in $\\mathrm {pr}(|\\overline{Y}|).$ It is easy to check that the closure of $\\mathrm {pr}(|Y|)$ is $\\mathrm {pr}(|\\overline{Y}|).$ As in the introduction, $Y_{P}$ is the subscheme of $X_{P}^\\circ $ with underlying topological space $\\mathrm {pr}(|Y|),$ given the reduced induced subscheme structure.", "Lemma 2.5 If $Y=P^{\\prime }\\gamma H$ for some subgroup $H \\le G$ and $\\gamma \\in G(F)$ then the subscheme $Y_{P} \\subset X_P^{\\circ }$ is smooth.", "The space $Y$ is smooth, and $\\mathrm {pr}$ is a locally trivial fibration.", "Since $Y$ is left $P^{\\mathrm {der}}$ -invariant $\\mathrm {pr}$ restricts to a locally trivial fibration $\\mathrm {pr}:Y \\rightarrow Y_{P}.$ Lemma REF implies the following lemma: Lemma 2.6 The map $Y(F) \\longrightarrow Y_{P}(F)$ is surjective.", "$\\Box $ Using notation from (REF ) and (REF ) let $ \\begin{split}X_{P,P^{\\prime }}:&=\\mathrm {Pl}^{-1}_P(V_{\\beta _0} \\times V_{\\Delta _{P^{\\prime }}}^{\\circ }) \\subseteq X_P\\\\X_{P^*,P^{\\prime }}:&=\\mathrm {Pl}^{-1}_{P^*}(V_{\\beta _0^{-1}} \\times V_{\\Delta _{P^{\\prime }}}^{\\circ }) \\subseteq X_{P^*}.", "\\end{split}$ Thus $X_{P,P^{\\prime }}$ and $X_{P^*,P^{\\prime }}$ are subschemes of $X_{P}$ and $X_{P^*},$ respectively.", "Let $ \\begin{split}Y_{P,P^{\\prime }}:&=\\overline{Y_{P}} \\subseteq X_{P,P^{\\prime }}\\\\Y_{P^*,P^{\\prime }}:&=\\overline{Y_{P^*}} \\subseteq X_{P^*,P^{\\prime }}\\end{split}$ be the closures of $Y_{P}$ in $X_{P,P^{\\prime }}$ and $Y_{P^*}$ 
in $X_{P^*,P^{\\prime }},$ respectively.", "The natural map $ N_{P \\cap M_{\\beta _0}} \\longrightarrow N_P/N_{P^{\\prime }}$ is an isomorphism.", "For all $y \\in Y(F)$ we have a morphism $ \\iota _y:X_{P \\cap M_{\\beta _0}}^\\circ \\longrightarrow Y_{P}$ characterized by the requirement that the diagram $ \\begin{tikzcd}M_{\\beta _0} [r] [d,twoheadrightarrow] & Y [d,twoheadrightarrow]\\\\X^{\\circ }_{P \\cap M_{\\beta _0}} [r,\"\\iota _y\"] &Y_{P}\\end{tikzcd}$ commutes, where the top arrow is given on points by $m \\mapsto my$ and the vertical arrows are the canonical surjections.", "Lemma 2.7 Let $\\Xi \\subset Y(F)$ be a set of representatives for $P^{\\prime \\mathrm {der}}(F) \\backslash Y(F).$ One has $Y_{P}(F)=\\coprod _{y \\in \\Xi } \\iota _{y}(X^\\circ _{P \\cap M_{\\beta _0}}(F))\\quad \\textrm {and} \\quad Y_{P^*}(F)=\\coprod _{y \\in \\Xi } \\iota _{y}(X^\\circ _{P^* \\cap M_{\\beta _0}}(F)).$ With notation as in (REF ) we have $M_{\\beta _0}M^{\\beta _0}N_{P^{\\prime }}=P^{\\prime \\mathrm {der}}.$ The fibers of the canonical projection $M^{\\beta _0}(F)N_{P^{\\prime }}(F) \\backslash Y(F) \\rightarrow P^{\\prime \\mathrm {der}}(F) \\backslash Y(F)$ are $M_{\\beta _0}(F)$ -torsors, so $M_{\\beta _0}(F)\\Xi $ is a set of representatives for $M^{\\beta _0}(F)N_{P^{\\prime }}(F) \\backslash Y(F).$ Moreover no two elements of $\\Xi $ are in the same $M_{\\beta _0}(F)$ -orbit.", "By Lemma REF $P^{\\mathrm {der}}(F) \\backslash Y(F)=Y_{P}(F).$ By Lemma REF the fibers of the natural projection $M^{\\beta _0}(F)N_{P^{\\prime }}(F) \\backslash Y(F) \\longrightarrow P^{\\mathrm {der}}(F) \\backslash Y(F)=Y_{P}(F)$ are $(P^{\\mathrm {der}} \\cap M_{\\beta _0})(F)$ -torsors.", "Again by Lemma REF we have $P^{\\mathrm {der}} \\cap M_{\\beta _0}=(P \\cap M_{\\beta _0})^{\\mathrm {der}}$ and we deduce the first equality of the lemma.", "To obtain the second equality we replace $P$ by $P^*$ and argue by symmetry." 
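, "For convenience we record why $M_{\beta _0}M^{\beta _0}N_{P^{\prime }}=P^{\prime \mathrm {der}}$, the identity invoked at the start of the proof above; this is a routine unwinding of identities already stated and uses no new input: applying the observation $P^{\mathrm {der}}=M^{\mathrm {der}}N_P$ to the parabolic subgroup $P^{\prime }$ with Levi subgroup $M^{\prime }$ and then using the decomposition (REF ) of $M^{\prime \mathrm {der}}$ one finds $ P^{\prime \mathrm {der}}=M^{\prime \mathrm {der}}N_{P^{\prime }}=(M_{\beta _0} \times M^{\beta _0})N_{P^{\prime }}=M_{\beta _0}M^{\beta _0}N_{P^{\prime }}.$"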
], [ "Examples", "Let $G=\\mathrm {SL}_n,$ $n>2.$ We generalize the example of (REF ) to higher rank in two different manners.", "Let $s_{j} \\in \\mathrm {SL}_n(F),$ $1 \\le j \\le n-1,$ be the matrix that is the identity matrix with the rows $e_j$ and $e_{j+1}$ replaced by $-e_{j+1}$ and $e_j.$ Let $\\gamma _1=s_{1}s_{2}\\cdots s_{n-1}.$ This is a Coxeter element.", "Let $R$ be an $F$ -algebra.", "Let $B_n \\le \\mathrm {SL}_n$ be the Borel subgroup of upper triangular matrices and let $P^{\\prime }(R):&=\\lbrace {\\left({\\begin{matrix} g & x \\\\ & b \\end{matrix}}\\right)} \\in \\mathrm {SL}_n(R): (g,b) \\in \\mathrm {GL}_2(R) \\times B_{n-2}(R)\\rbrace \\\\H(R):&=\\lbrace {\\left({\\begin{matrix} b & x \\\\ & g \\end{matrix}}\\right)} \\in \\mathrm {SL}_n(R): (g,b) \\in \\mathrm {GL}_2(R) \\times B_{n-2}(R)\\rbrace $ Then $B_n \\gamma _1B_n \\subset P^{\\prime }\\gamma _1 H \\subseteq \\overline{B_n\\gamma _1 B_n}$ , and $\\overline{B_n\\gamma _1B_n}(R)=\\left\\lbrace g \\in \\mathrm {SL}_n(R): g_{ij}=0 \\textrm { if }i>j+1 \\right\\rbrace .$ If we take $P=B_n$ , then $P$ is maximal in $P^{\\prime }$ and $B_n^*(R):=\\left\\lbrace {\\left({\\begin{matrix} a & & x\\\\ b & c & z\\\\ & & d\\end{matrix}}\\right)} \\in \\mathrm {SL}_n(R): a,c \\in R^\\times , d \\in B_{n-2}(R) \\right\\rbrace .$ As another example, take $\\gamma _2= {\\left({\\begin{matrix} & & &J \\\\ (-1)^\\epsilon & & & \\\\ &1 & & \\end{matrix}}\\right)}$ where $J$ is the matrix with 1's on the antidiagonal and zeros elsewhere and $\\epsilon \\in \\lbrace 0,1\\rbrace $ is chosen so the matrix has determinant 1.", "Write $P_{a,b}(R):=\\left\\lbrace {\\left({\\begin{matrix} g & x \\\\ & h \\end{matrix}}\\right)} \\in \\mathrm {SL}_n(R): (g,h) \\in \\mathrm {GL}_a(R) \\times \\mathrm {GL}_b(R) \\right\\rbrace .$ Then $B_n\\gamma _2 B_n \\subset P_{n-1,1}\\gamma _2 P_{1,n-1} \\subseteq \\overline{B_n \\gamma _2 B_n}$ Choose positive integers $a,b$ such that $a+b=n-1.$ Then we may take $P(R):=\\left\\lbrace {\\left({\\begin{matrix} g_1 & x & y\\\\ & g_2 & z \\\\ & & a\\end{matrix}}\\right)} \\in \\mathrm {SL}_n(R): (g_1,g_2,a) \\in \\mathrm {GL}_{a}(R) \\times \\mathrm {GL}_b(R) \\times R^\\times \\right\\rbrace .$ In this case $P^*(R):=\\left\\lbrace {\\left({\\begin{matrix} g_1 & & y\\\\ x & g_2 & z \\\\ & & a\\end{matrix}}\\right)} \\in \\mathrm {SL}_n(R): (g_1,g_2,a) \\in \\mathrm {GL}_{a}(R) \\times \\mathrm {GL}_b(R) \\times R^\\times \\right\\rbrace .$" ], [ "Function spaces", "Let $X$ be a quasi-projective scheme over a local field $F$ .", "We denote by $C^0(X(F))$ the space of complex valued continuous functions on $X(F).$ Assume that $F$ is nonarchimedean.", "In this case we denote by $\\mathcal {C}(X(F))$ the space of locally constant compactly supported functions on $X(F)$ , also denoted by $C_c^\\infty (X(F))$ .", "If $X$ is smooth, then we set $\\mathcal {S}(X(F))=\\mathcal {C}(X(F))=C_c^\\infty (X(F)).$ For certain nonsmooth schemes $X$ we will define $\\mathcal {S}(X(F)).$ Now assume that $F$ is archimedean.", "In this case we define $\\mathcal {C}(X(F))$ as follows.", "The set $X(F)=\\mathrm {Res}_{F/\\mathbb {R}}X(\\mathbb {R})$ is a real algebraic variety, and hence an affine real algebraic variety [5].", "In particular there is a closed embedding of real algebraic varieties $X(F) \\longrightarrow \\mathbb {R}^n$ for some $n$ .", "We define $\\mathcal {C}(X(F))$ to be the space of restrictions of the usual Schwartz space $\\mathcal {S}(\\mathbb {R}^n)$ to $X(F)$ .", "This space is independent of the 
choice of embedding and is naturally a Fréchet space [13].", "Thus $\\mathcal {C}(X(F))\\le C^0(X(F)).$ We observe that if $X$ is not smooth then $C^\\infty (X(F))$ is not defined when $F$ is archimedean.", "If $X$ is smooth then we set $\\mathcal {S}(X(F))=\\mathcal {C}(X(F)) \\le C^\\infty (X(F))$ We have $\\mathcal {S}(X(F)) \\ge C_c^\\infty (X(F))$ but the inclusion is strict in the archimedean case when $X(F)$ is noncompact.", "We will also define $\\mathcal {S}(X(F))$ for certain nonsmooth $X$ ." ], [ "Measures", "Let $F$ be a global field.", "We fix a nontrivial character $\\psi :F \\backslash \\mathbb {A}_F \\rightarrow \\mathbb {C}^\\times $ and a Haar measure $dx=\\otimes _v^{\\prime } dx_v$ on $\\mathbb {A}_F.$ We assume that $dx_v$ is self-dual with respect to $\\psi _v.$ We let $d^\\times x$ be the Haar measure on $\\mathbb {A}_{F}^\\times $ given by $\\otimes _v^{\\prime } \\frac{\\zeta _v(1)}{|x|_v}dx_v.$ We fix, once and for all, a Chevalley basis of the Lie algebra of $G$ with respect to $T.$ For every root of $T$ in $G$ this provides us with a root vector $X_{\\alpha }$ in each root space, and hence isomorphisms $\\mathbb {G}_a \\longrightarrow N_{\\alpha }$ where $N_{\\alpha }$ is the root group.", "This in turn provides us with a Haar measure on $N_{\\alpha }(\\mathbb {A}_F)$ for all $\\alpha .$ As a scheme (but not a group scheme) the unipotent radical of any parabolic subgroup $P$ with Levi subgroup $M$ is a product of various $N_{\\alpha }.$ Thus we obtain a Haar measure on $N_{P}(\\mathbb {A}_F).$ We use this normalization so that factorization of intertwining operators holds (otherwise it only holds up to a constant depending on the choice of Haar measures).", "We fix a Haar measure on $M^{\\mathrm {der}}(\\mathbb {A}_F).$ We give $M(\\mathbb {A}_F)$ the unique Haar measure such that, upon endowing $M^{\\mathrm {ab}}(\\mathbb {A}_F)$ with the quotient measure one has that $\\omega _P:M^{\\mathrm {ab}}(\\mathbb {A}_F) \\longrightarrow (\\mathbb {A}_F^\\times )^{\\Delta _P}$ is measure preserving.", "This is independent of the parabolic $P$ with $M$ as its Levi, as different choices will just replace various $\\omega _{\\beta }$ with their inverses.", "Each of the measures we fixed above factors over the places of $F$ into a product of local measures, normalized so that the local analogue of (REF ) is measure preserving.", "We use these local measures when working locally below." 
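, "As a sanity check on these normalizations (a routine computation; the only outside input, which we assume here, is the standard local value $\zeta _v(1)=(1-q_v^{-1})^{-1}$, where $q_v$ denotes the cardinality of the residue field of $F_v$ at a nonarchimedean place $v$) note that $|x|_v=1$ on $\mathcal {O}_{F_v}^\times $, so $ \mathrm {vol}(\mathcal {O}_{F_v}^\times ,d^\times x_v)=\zeta _v(1)\,\mathrm {vol}(\mathcal {O}_{F_v}^\times ,dx_v)=(1-q_v^{-1})^{-1}(1-q_v^{-1})\,\mathrm {vol}(\mathcal {O}_{F_v},dx_v)=\mathrm {vol}(\mathcal {O}_{F_v},dx_v);$ in other words the factor $\zeta _v(1)$ in the definition of $d^\times x$ gives the local unit group the same volume that $dx_v$ assigns to $\mathcal {O}_{F_v}$."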
], [ "Quasi-characters", "Let $\\chi :=\\prod _{\\beta \\in \\Delta _P}\\chi _\\beta : (A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times )^{\\Delta _P} \\longrightarrow \\mathbb {C}^\\times $ be a quasi-character.", "If $s=(s_\\beta ) \\in \\mathbb {C}^{\\Delta _P}$ we let $\\chi _{s}=\\prod _{\\beta \\in \\Delta _P}\\chi _{\\beta }|\\cdot |^{s_\\beta }.$ We define a subgroup $ A_{\\mathbb {G}_m} \\le F_\\infty ^\\times $ in the following manner.", "The global field $F$ is a finite extension of $F_0,$ where $F_0=\\mathbb {Q}$ or $F_0=\\mathbb {F}_p(t)$ for some prime $p.$ Let $Z \\le \\mathrm {Res}_{F/F_0}\\mathbb {G}_m$ be the maximal split subtorus.", "When $F$ is a number field we take $A_{\\mathbb {G}_m}$ to be the neutral component of $Z(\\mathbb {R}).$ Thus $A_{\\mathbb {G}_m}$ is just $\\mathbb {R}_{>0}$ embedded diagonally in $F^\\times _\\infty .$ When $F$ is a function field we choose an isomorphism $Z \\tilde{\\longrightarrow } \\mathbb {G}_m$ and let $A_{\\mathbb {G}_m}$ be the inverse image of $t^\\mathbb {Z}.$" ], [ "The unramified setting", "For a nonarchimedean place $v$ of the global field $F$ let $\\mathbb {F}_v$ be the residue field of $F_v$ and $q_v:=|\\mathbb {F}_v|.$ Let $S$ be a finite set of places of $F$ including the infinite places.", "Upon enlarging $S$ if necessary, we can choose a reductive group scheme $\\mathcal {G}$ over $\\mathcal {O}_F^S$ (with connected fibers) and affine spaces $\\mathcal {V}_{\\beta } \\cong \\mathbb {G}_{a\\mathcal {O}_F^S}^{d_\\beta }$ for some integer $d_\\beta >0$ (isomorphism over $\\mathcal {O}_F^S$ ) equipped with homomorphisms $\\mathcal {G} \\longrightarrow \\mathrm {Aut}(\\mathcal {V}_\\beta )$ over $\\mathcal {O}_F^S$ whose generic fibers are the representations $G \\rightarrow \\mathrm {Aut}(V_\\beta ).$ By abuse of notation, write $G$ for $\\mathcal {G}.$ For any of the subgroups $M,$ $P,$ $P^{\\prime },$ etc.", "of $G_F$ we continue to use the same letter for their schematic closures in $G.$ Upon enlarging $S$ if necessary we assume that these groups are all smooth.", "We assume moreover that the groups whose generic fibers were reductive (resp.", "parabolic, etc.)", "extend to reductive group schemes (resp.", "parabolic group schemes, etc.)", "over $\\mathcal {O}_F^S.$ We also assume that $\\omega _{P}$ induces an isomorphism on $\\mathcal {O}_{F_v}$ -points for all $v \\notin S,$ the highest weight vectors $v_\\beta \\in V(F)$ lie in $V_\\beta (\\mathcal {O}_F^S),$ and their image in $V_\\beta (\\mathbb {F}_v)$ is again a highest weight vector for $G_{\\mathbb {F}_v}$ of weight $-\\omega _\\beta .$ Finally, we continue to denote by $Y$ the schematic closure of $Y$ in $G$ .", "It is a scheme over $\\mathcal {O}_F^S$ , and the action of $P^{\\prime }_F$ on $Y_F$ extends to an action of $P^{\\prime }$ on $Y.$ Under the assumptions above (which are no loss of generality for $S$ large enough) and we are considering the local setting over $F_v$ for $v \\notin S$ we say that we are in the unramified setting." ], [ "The Schwartz space of $Y_{P,P^{\\prime }}(F)$", "For all but the last subsection of this section $F$ is a local field." 
], [ "Induced representations", "Consider the induced representations: $ I_{P}(\\chi _s):=\\mathrm {Ind}_{P}^{G}(\\chi _s \\circ \\omega _P) \\quad \\textrm {and} \\quad I^*_{P^*}(\\chi _s):=\\mathrm {Ind}_{P^*}^{G}(\\chi _s \\circ \\omega _P).$ Let $\\Phi ^{\\chi _s}$ be a section of $I_{P}(\\chi _s).$ Assume $F$ is archimedean.", "We say $\\Phi ^{\\chi _s}$ is holomorphic (resp.", "meromorphic) if $s \\mapsto \\Phi ^{\\chi _s}(g)$ is holomorphic as a function of $s \\in \\mathbb {C}^{\\Delta _P}$ for all $g \\in G(F)$ and characters $\\chi :(F^\\times )^{\\Delta _P} \\rightarrow \\mathbb {C}^\\times .$ If $F$ is nonarchimedean with residue field of cardinality $q$ we say that $\\Phi ^{\\chi _s}$ is holomorphic if $\\Phi ^{\\chi _s}(g) \\in \\mathbb {C}[\\lbrace q^{s_\\beta },q^{-s_\\beta }:\\beta \\in \\Delta _P\\rbrace ]$ for all $g \\in G(F)$ and characters $\\chi :(F^\\times )^{\\Delta _P} \\rightarrow \\mathbb {C}^\\times .$ Similarly we say it is meromorphic if for all there is a $p \\in \\mathbb {C}[\\lbrace q^{s_\\beta },q^{-s_\\beta }:\\beta \\in \\Delta _P\\rbrace ]$ such that $p(s)\\Phi ^{\\chi _s}(g)$ is holomorphic.", "Fix a maximal compact subgroup $K \\le G(F)$ such that the Iwasawa decomposition holds: $G(F)=P(F)K.$ We then say that $\\Phi ^{\\chi _s}$ is standard if the restriction of the function $(s,g) \\mapsto \\Phi ^{\\chi _s}(g)$ to $\\mathbb {C}^{\\Delta _P} \\times K$ is independent of $s.$ We take the analogous conventions regarding sections of $I_{P^*}^*(\\chi _s).$ Let $\\mathcal {E}$ denote the ring of entire functions on $\\mathbb {C}^{\\Delta _P}$ when $F$ is archimedean and $\\mathbb {C}[\\lbrace q^{s_\\beta },q^{-s_\\beta }:\\beta \\in \\Delta _P\\rbrace ]$ when $F$ is nonarchimedean.", "For a fixed character $\\chi :(F^\\times )^{\\Delta _P} \\rightarrow \\mathbb {C}^\\times $ there is an obvious action of $\\mathcal {E}$ on the $\\mathbb {C}$ -vector space of holomorphic sections of $I_P(\\chi _s)$ preserving the subspace of $K$ -finite sections.", "As an $\\mathcal {E}$ -module, the subspace of holomorphic $K$ -finite sections is generated by the subspace of standard $K$ -finite sections.", "This allows us to apply results in the literature stated for $K$ -finite standard sections to sections that are $K$ -finite and merely holomorphic.", "We will use this observation without further comment below." 
], [ "The Schwartz space", "We define $P,P^*$ and $P^{\\prime }$ as in the introduction.", "Thus $P \\cap P^*$ is the Levi subgroup $M$ of the parabolic subgroups $P$ and $P^*.$ Moreover $P$ and $P^*$ are maximal (proper) parabolic subgroups of $P^{\\prime }.$ To define the Fourier transform $\\mathcal {F}_{P|P^*}$ we first apply an intertwining operator to certain functions on $Y_{P}(F)$ to arrive at functions on $Y_{P^*}(F).$ We then twist by certain operators that we recall in this section.", "Suppose we are given $\\lambda \\in X_*(M^{\\mathrm {ab}}).$ Let $s^{\\prime } \\in \\mathbb {C}.$ We define $\\lambda _{!", "}(s^{\\prime }):\\mathcal {C}(Y_{P^*}(F)) \\longrightarrow C^0(Y_{P^*}(F))$ by $\\lambda _{!", "}(s^{\\prime })(f)(x)=\\int _{F^\\times }\\delta _{P^*}^{1/2}(\\lambda (a))f(\\lambda (a^{-1})x)\\psi (a)|a|^{s^{\\prime }}da.$ Here $\\psi :F \\rightarrow \\mathbb {C}^\\times $ is the local factor of the global additive character fixed in §REF and $da$ is the Haar measure on $F.$ In the special case where $P$ is maximal and $P^{\\prime }=Y=G$ this reduces to [16], where it was denoted by $\\lambda _{!", "}(\\mu _s).$ The same operator is denoted by $\\lambda _!", "(\\eta _{\\psi }^{s^{\\prime }})$ in [8], [43].", "To extend the domain of definition of $\\lambda _!", "(s^{\\prime }),$ let $\\Phi \\in \\mathcal {S}(F)$ satisfy $\\Phi (0)=1$ and $\\widehat{\\Phi } \\in C_c^\\infty (F).$ Here $\\widehat{\\Phi }(x):=\\int _{F}\\Phi (y)\\psi \\left(xy \\right)dy.$ Define the regularized integral $\\lambda _!", "(s^{\\prime })^{\\mathrm {reg}}(f)(x):=\\lim _{|b| \\rightarrow \\infty }\\int _{F^\\times } \\Phi \\left(\\frac{a}{b} \\right)\\delta _{P^*}^{1/2}(\\lambda (a))f(\\lambda (a^{-1})x)\\psi (a)|a|^{s^{\\prime }}da.$ This limit is said to be well-defined if the integral $\\int _{F^\\times } { \\Phi \\left(\\frac{a}{b} \\right)}\\delta _{P^*}^{1/2}(\\lambda (a))f(\\lambda (a^{-1})x)\\psi (a)|a|^{s^{\\prime }}da$ is convergent provided $|b|$ is large enough and the limit in the definition of $\\lambda _!", "(s^{\\prime })^{\\mathrm {reg}}(x)$ exists and is independent of $\\Phi .$ Thus if the integral defining $\\lambda _!", "(s^{\\prime })$ is absolutely convergent then $\\lambda _!", "(s^{\\prime })^{\\mathrm {reg}}(f)=\\lambda _!", "(s^{\\prime })(f).$ In particular this is the case if $f \\in \\mathcal {C}(Y_{P^*}(F)).$ To avoid more proliferation of notation we will drop the superscript $\\mathrm {reg}.$ Let $L(s^{\\prime },\\chi )$ , $\\varepsilon (s^{\\prime },\\chi ,\\psi )$ , and $\\gamma (s^{\\prime },\\chi ,\\psi )=\\frac{\\varepsilon (s^{\\prime },\\chi ,\\psi )L(1-s^{\\prime },\\chi ^{-1})}{L(s^{\\prime },\\chi )}$ be the usual Tate local zeta function, $\\varepsilon $ -factor, and $\\gamma $ -factor attached to a quasi-character $\\chi :F^\\times \\rightarrow \\mathbb {C}^\\times $ and a complex number $s^{\\prime } \\in \\mathbb {C}$ .", "These factors are denoted $\\varepsilon ^{\\prime }(s^{\\prime },\\chi ,\\psi )$ in [17].", "We use a hat to denote the dual group of an $F$ -group (more precisely the neutral component of the Langlands dual).", "Let $\\mathcal {N}$ be a 1-dimensional representation of $Z_{\\widehat{M}}=\\widehat{M}^{\\mathrm {ab}}$ with $s^{\\prime } \\in \\tfrac{1}{2}\\mathbb {Z}$ attached to it.", "The action of $Z_{\\widehat{M}}$ is given by a character $\\lambda :Z_{\\widehat{M}} \\rightarrow \\mathbb {G}_m,$ which we may identify with a cocharacter $\\lambda :\\mathbb {G}_m \\rightarrow M^{\\mathrm {ab}}.$ Let $a_{\\mathcal {N}}(\\chi 
_s):=L\\left(-s^{\\prime },\\chi _s\\circ \\lambda \\right)\\quad \\textrm {and} \\quad \\mu _{\\mathcal {N}}(\\chi _s):=\\gamma (-s^{\\prime },\\chi _s \\circ \\lambda ,\\psi ).$ We let $\\widetilde{\\mathcal {N}}$ be $\\mathcal {N}^\\vee $ (on which $\\widehat{M}^{\\mathrm {ab}}$ acts via $-\\lambda $ ) with the real number $-1-s^{\\prime }$ attached to it.", "More generally, assume $\\mathcal {N}=\\oplus _i \\mathcal {N}_i$ is a finite-dimensional representation of $M^{\\mathrm {ab}}$ with each $\\mathcal {N}_i$ 1-dimensional.", "We let the $Z_{\\widehat{M}}$ -action on $\\mathcal {N}_i$ be given by $\\lambda _i$ and assume each $\\mathcal {N}_i$ is equipped with a complex number $s_i$ .", "Define $ a_{\\mathcal {N}}(\\chi _s):&=\\prod _{i}a_{\\mathcal {N}_i}(\\chi _s),\\\\\\mu _{\\mathcal {N}}(\\chi _s):&=\\prod _{i=1}^\\ell \\mu _{\\mathcal {N}_i}(\\chi _s).$ We also define $\\widetilde{\\mathcal {N}}:=\\oplus _i \\widetilde{\\mathcal {N}}_i.$ We will in fact only consider $\\mathcal {N}=\\oplus _{i \\in I} \\mathcal {N}_i$ where the parameters attached to $\\mathcal {N}_i$ are all of the form $(s_i,\\lambda _i)$ where $\\lambda _i$ is an integer multiple of $\\beta _0^\\vee $ .", "Here $\\beta _0$ is the simple root of (REF ).", "We will therefore abuse notation and allow ourselves to again write $\\lambda _i$ for the integer $n_i$ such that $\\lambda _i=n_i\\beta _0^\\vee $ .", "Assume that $\\lambda _i>0$ for all $i$ .", "Then we can order the $\\mathcal {N}_i$ so that $\\frac{s_{i+1}}{\\lambda _{i+1}} \\ge \\frac{s_i}{\\lambda _i}$ for all $i$ .", "Choosing such an ordering we define $\\mu _{\\mathcal {N}}:&=\\lambda _{1!", "}(s_1)\\circ \\dots \\circ \\lambda _{\\ell !", "}(s_\\ell ).$ Theorem REF below explains why it is reasonable to use the symbol $\\mu _{\\mathcal {N}}$ in both () and (REF ).", "Let $\\mathfrak {n}_P$ be the Lie algebra of the unipotent radical $N_P$ of $P$ and let $\\widehat{\\mathfrak {n}}_P$ be its Langlands dual.", "Let $ \\widehat{\\mathfrak {n}}_{P|P^*} :=\\widehat{\\mathfrak {n}}_{P}/\\widehat{\\mathfrak {n}}_P \\cap \\widehat{\\mathfrak {n}}_{P^*}.$ Let $\\lbrace e,h,f\\rbrace \\subset \\widehat{\\mathfrak {m}}$ be a principal $\\mathfrak {sl}_2$ -triple (here $\\widehat{\\mathfrak {m}}$ denotes the Langlands dual of $\\mathfrak {m},$ the Lie algebra of $\\mathfrak {m}$ ).", "Consider the subspace $\\widehat{\\mathfrak {n}}_{P|P^*}^e \\le \\widehat{\\mathfrak {n}}_{P|P^*}$ annihilated by $e.$ It admits a decomposition $ \\widehat{\\mathfrak {n}}_{P|P^*}^e=\\oplus _{i} \\mathcal {N}_i$ where the $\\mathcal {N}_i$ are 1-dimensional eigenspaces for the action of $Z_{\\widehat{M}}.$ We observe that $Z_{\\widehat{M}}$ acts via a power of $\\beta _0^\\vee $ on $\\mathcal {N}_i.$ We assign each $\\mathcal {N}_i$ the real number $s_i$ that is half the eigenvalue of $h,$ and define $ a_{P|P}(\\chi _s)=a_{\\widetilde{\\widehat{\\mathfrak {n}}_{P|P^*}^e}}((\\chi _s)^{-1}), \\quad a_{P|P^*}(\\chi _s)=a_{\\widehat{\\mathfrak {n}}_{P|P^*}^e}(\\chi _s).$ These factors enjoy the symmetry property $a_{P|P}(\\chi _s)=a_{P^*|P^*}(\\chi _s), \\quad a_{P|P^*}(\\chi _s)=a_{P^*|P}(\\chi _s)$ by the discussion on passing to the opposite parabolic contained in [16].", "Lemma 3.1 There is an $\\varepsilon >0$ depending only on $P$ and $P^{\\prime }$ such that for any character $\\chi :(F^\\times )^{\\Delta _P} \\rightarrow \\mathbb {C}^\\times $ the function $a_{P|P}(\\chi _s)$ is holomorphic and nonzero for $\\mathrm {Re}(s_{\\beta _0}) \\ge -\\varepsilon .$ Consider the set 
of parameters $\\lbrace (s_i,\\lambda _{i}\\beta _0^\\vee )\\rbrace $ attached to $\\widehat{\\mathfrak {n}}_{P|P^*}.$ It suffices to check that for each $i$ one has $s_i \\ge 0$ and $\\lambda _{i} >0 .$ Since $s_i$ is $\\tfrac{1}{2}$ the highest weight of a certain $\\mathfrak {sl}_2$ representation with respect to the usual Borel subalgebra $\\langle h,e\\rangle $ it is nonnegative.", "It is also clear that $\\lambda _i>0.$ As above, let $M^{\\prime }$ be the unique Levi subgroup of $P^{\\prime }$ containing $M$ and define $M_{\\beta _0}$ as in (REF ).", "The closed immersion $ N_{P^*} \\cap M_{\\beta _0} \\rightarrow N_{P^*}$ induces a bijection $N_{P^*}(F) \\cap M_{\\beta _0}(F) \\tilde{\\longrightarrow } N_{P^*}(F) \\cap N_{P}(F) \\backslash N_{P^*}(F).$ Thus the usual unnormalized intertwining operator restricts to define an operator $\\mathcal {R}_{P|P^*}:C_c^\\infty (Y_{P}(F)) \\longrightarrow C^\\infty (Y_{P^*}(F))$ given by $ \\mathcal {R}_{P|P^*}(f)(g):=\\int _{N_{P^*}(F) \\cap M_{\\beta _0}(F) } f(ug)du.$ We will use the same notation whenever $\\mathcal {R}_{P|P^*}(f)$ is defined (e.g.", "for more general smooth functions or via analytic continuation).", "In [43] Shahidi proves that this agrees with the operator defined by Braverman and Kazhdan.", "A section $\\Phi ^{\\chi _s}$ of $I_P(\\chi _s)$ is good if it is meromorphic and if the section $\\frac{\\mathcal {R}_{P|Q}\\Phi ^{\\chi _s}(g)}{a_{P|Q}(\\chi _s)}$ is holomorphic for all $g \\in G(F)$ and $Q \\in \\lbrace P,P^*\\rbrace $ (recall our conventions regarding meromorphic sections from §REF ).", "We defined adelic Mellin transforms in (REF ) above.", "We use the obvious local analogues of this notation.", "We write $f_{\\chi _s}$ for the Mellin transform of any function $f:Y_{P}(F) \\rightarrow \\mathbb {C}$ or $f:X_P^\\circ (F) \\rightarrow \\mathbb {C}$ such that the integral defining the Mellin transform is absolutely convergent or obtained by analytic continuation from some region of absolute convergence.", "Assume $F$ is nonarchimedean.", "Let $K \\le M^{\\mathrm {ab}}(F) \\times G(F)$ be a compact open subgroup.", "Let $\\mathcal {C}_{\\beta _0}(X_P(F))$ be the space of $K$ -finite $f \\in C^\\infty (X_{P}^\\circ (F))$ such that for $\\mathrm {Re}(s_{\\beta _0})$ sufficiently large the integral defining the Mellin transform $f_{\\chi _s}$ converges absolutely and defines a good section.", "We define the Schwartz space of $Y_{P,P^{\\prime }}(F)$ to be the space of restrictions to $Y_{P}(F)$ of functions in $\\mathcal {C}_{\\beta _0}(X_P(F))$ : $ \\mathcal {S}(Y_{P,P^{\\prime }}(F))=\\mathrm {Im}(\\mathcal {C}_{\\beta _0}(X_P(F)) \\longrightarrow C^\\infty (Y_{P}(F))).$ Before explaining the archimedean analogue of this definition, let us write $K_{\\mathbb {G}_m}$ for the maximal compact subgroup of $F^\\times $ .", "Say that two quasi-characters $\\chi _1,\\chi _2:F^\\times \\rightarrow \\mathbb {C}^\\times $ are equivalent if $\\chi _1=\\chi _{2}|\\cdot |^s$ for some $s \\in \\mathbb {C}$ .", "Then the set of equivalence classes of quasi-characters of $F^\\times $ is in natural bijection with $\\widehat{K}_{\\mathbb {G}_m}$ .", "Thus we sometimes write $\\widehat{K}_{\\mathbb {G}_m}$ for a set of representatives of the quasi-characters of $F^\\times $ modulo equivalence.", "In the archimedean setting we fix the following sets of representatives: $\\widehat{K}_{\\mathbb {G}_m}:={\\left\\lbrace \\begin{array}{ll}\\lbrace 1,\\mathrm {sgn}\\rbrace &\\textrm { if }F=\\mathbb {R}\\\\\\lbrace z \\mapsto 
\\left(\\tfrac{z}{(\\overline{z}z)^{1/2}}\\right)^m:m \\in \\mathbb {Z}\\rbrace & \\textrm { if }F=\\mathbb {C}.\\end{array}\\right.", "}$ For extended real numbers $A,B \\in \\lbrace -\\infty \\rbrace \\cup \\mathbb {R}\\cup \\lbrace \\infty \\rbrace $ with $A<B$ let $ V_{A,B}:=\\lbrace s \\in \\mathbb {C}^{\\Delta _P}:A<\\mathrm {Re}(s_\\beta ) <B \\textrm { for }\\beta \\in \\Delta _P\\rbrace .$ For functions $\\phi :\\mathbb {C}^{\\Delta _P} \\rightarrow \\mathbb {C}^{\\Delta _P}$ and polynomials $p$ on $\\mathbb {C}^{\\Delta _P}$ let $ |\\phi |_{A,B,p}:=\\sup _{s \\in V_{A,B}}|\\phi (s)p(s)|$ (which may be infinite).", "Assume $F$ is archimedean.", "The action (REF ) induces an action of $U(\\mathfrak {m}^{\\mathrm {ab}} \\oplus \\mathfrak {g})$ on $C^\\infty (X_{P}^\\circ (F)).$ Here $U(\\mathfrak {m}^{\\mathrm {ab}} \\oplus \\mathfrak {g})$ is the universal enveloping algebra of the complexification of the Lie algebra $\\mathfrak {m}^{\\mathrm {ab}} \\oplus \\mathfrak {g}$ of $M^{\\mathrm {ab}} \\times G,$ viewed as a real Lie algebra.", "Let $\\mathcal {C}_{\\beta _0}(X_{P}(F))$ be the set of all $f \\in C^\\infty (X_P^\\circ (F))$ such that for all $D \\in U(\\mathfrak {m}^{\\mathrm {ab}} \\oplus \\mathfrak {g})$ and each character $\\chi :(F^\\times )^{\\Delta _P} \\rightarrow \\mathbb {C}^\\times $ the integral (REF ) defining $(D.f)_{\\chi _s}$ converges for $\\mathrm {Re}(s_{\\beta _0})$ large enough, is a good section, and satisfies the following condition: For all real numbers $A<B,$ $Q \\in \\lbrace P,P^*\\rbrace ,$ any polynomials $p_{P|Q}$ such that $p_{P|Q}(s)a_{P|Q}(\\eta _s)$ has no poles for all $(s,\\eta ) \\in V_{A,B} \\times \\widehat{K}_{\\mathbb {G}_m}^{\\Delta _P}$ and all compact subsets $\\Omega \\subset X_{P}^\\circ (F)$ one has $|f|_{A,B,p_{P|Q},\\Omega ,D}:=\\sum _{\\eta \\in \\widehat{K}_{\\mathbb {G}_m}^{\\Delta _P}}\\mathrm {sup}_{g \\in \\Omega }|\\mathcal {R}_{P|Q}(D.f)_{\\eta _s}(g)|_{A,B,p_{P|Q}}<\\infty .$ We observe that it is indeed possible to choose $p_{P|Q}(s)$ as above (independently of $\\eta $ ).", "This follows directly from the definition of $a_{P|Q}(\\eta _s).$ Since we have defined $\\mathcal {C}_{\\beta _0}(X_{P}(F))$ for archimedean $F$ we can and do define $\\mathcal {S}(Y_{P,P^{\\prime }}(F))$ as in (REF ).", "In the archimedean case the seminorms $|\\cdot |_{A,B,p_{P|Q},\\Omega ,D}$ give $\\mathcal {S}(Y_{P,P^{\\prime }}(F))$ the structure of a Fréchet space as we now explain.", "The seminorms $|\\cdot |_{A,B,p_{P|Q},\\Omega ,D}$ give $\\mathcal {C}_{\\beta _0}(X_{P}(F))$ the structure of a Fréchet space via a standard argument.", "See [15] for the proof in a special case, the proof in general is essentially the same.", "Using Mellin inversion, one checks that with respect to this Fréchet structure evaluating a function in $\\mathcal {C}_{\\beta _0}(X_P(F))$ at a point of $X_P^{\\circ }(F)$ is a continuous linear functional on $\\mathcal {C}_{\\beta _0}(X_P(F)).$ Thus the $\\mathbb {C}$ -linear subspace $I \\le \\mathcal {C}_{\\beta _0}(X_{P}(F))$ consisting of functions that vanish on $Y_{P}(F)$ is closed subspace.", "Restriction of functions to $Y_{P}(F)$ induces a $\\mathbb {C}$ -linear isomorphism $\\mathcal {C}_{\\beta _0}(X_P(F))/I \\longrightarrow \\mathcal {S}(Y_{P,P^{\\prime }}(F))$ and thus we obtain a Fréchet topology on $\\mathcal {S}(Y_{P,P^{\\prime }}(F))$ by transport of structure.", "This definition is inspired by [13].", "The set of seminorms giving $\\mathcal {S}(Y_{P,P^{\\prime }}(F))$ its topology are 
$|f|_{A,B,p_{P|Q},\\Omega ,D}:=\\mathrm {inf}\\lbrace |\\widetilde{f}|_{A,B,p_{P|Q},\\Omega ,D}: \\widetilde{f} \\in \\mathcal {C}_{\\beta _0}(X_P(F)) \\textrm { and } \\widetilde{f}|_{Y_{P}(F)}=f\\rbrace $ for $f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(F))$ .", "Let $H\\le G$ act on $G$ via right multiplication, and assume that $H$ stabilizes $Y.$ Then (REF ) restricts to an action of $M^{\\mathrm {ab}}(F) \\times H(F)$ on $Y_{P}(F).$ This induces an action of $M^{\\mathrm {ab}}(F) \\times H(F)$ on $\\mathcal {S}(Y_{P,P^{\\prime }}(F))$ that is continuous in the archimedean case.", "Remarks.", "(a) We have defined the Schwartz space in terms of restrictions of functions on $X_P^\\circ (F)$ in order to take advantage of the transitive action of $G(F)$ on $X_P^{\\circ }(F).$ The space $Y_{P,P^{\\prime }}(F)$ plays no role in the definition of $\\mathcal {S}(Y_{P,P^{\\prime }}(F)).$ However, it should be possible to give a characterization of $\\mathcal {S}(Y_{P,P^{\\prime }}(F))$ as the space of smooth functions on $Y_{P}(F)$ that have particular germs as one approaches the boundary $Y_{P,P^{\\prime }}(F)-Y_{P}(F)$ [38].", "As in the special cases treated in [19], [15], [16], we have defined the Schwartz space to be the space of smooth functions with sufficiently well-behaved Mellin transforms.", "This is reasonable because we can obtain information on $f$ from its Mellin transforms via Mellin inversion as in the proof of Lemma REF below.", "We now discuss the problem of bounding functions in $\\mathcal {S}(Y_{P,P^{\\prime }}(F)).$ Since we have an embedding $\\mathrm {Pl}_P:X_P \\rightarrow V$ of $X_P$ into an affine space we will phrase our bounds in terms of this affine space.", "Let $K \\le G(F)$ be a maximal compact subgroup such that the Iwasawa decomposition $G(F)=P(F)K$ holds.", "If we are in the unramified setting in the sense of §REF we take $K=G(\\mathcal {O}_F).$ The group $K$ does not act on $Y_{P}(F)$ in general.", "The group $G(F)$ acts on each $V_\\beta (F).$ For each $\\beta \\in \\Delta _P$ choose a $K$ -invariant norm $|\\cdot |_\\beta $ on the $F$ -vector space $V_\\beta (F).$ As a warning, for $F=\\mathbb {C}$ we have $|cv|_\\beta =c\\overline{c}|v|_\\beta $ for $c \\in F$ and $v \\in V_{\\beta }(F)$ (this is the “number theorist's norm”).", "If we write $x=P^{\\mathrm {der}}(F)mk$ with $(m,k) \\in M^{\\mathrm {ab}}(F) \\times K$ then by Lemma REF one has $ |\\mathrm {Pl}_{P_\\beta }(x)|_\\beta =|\\mathrm {Pl}_{P_\\beta }(m)|_\\beta =|\\omega _{\\beta }(m)|^{-1}.$ The inverse here appears because $G$ is acting on $V_{\\beta }$ on the right.", "Choose $r_{\\beta } \\in \\mathbb {R}$ so that $ \\prod _{\\beta \\in \\Delta _P}|\\omega _\\beta (m)|^{r_{\\beta }}=\\delta _P^{1/2}(m).$ Recall the definition of $V_{\\beta _0}$ from (REF ) and $V_{\\Delta _{P^{\\prime }}}^{\\circ }$ from (REF ).", "Lemma 3.2 Let $f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(F)).$ For sufficiently small $\\alpha >0$ there is a nonnegative Schwartz function $\\Phi _f \\in \\mathcal {S}(V_{\\beta _0}(F) \\times V_{\\Delta _{P^{\\prime }}}^\\circ (F))$ such that $|f(x)| \\le \\Phi _f(\\mathrm {Pl}_P(x))\\prod _{\\beta \\in \\Delta _{P}}|\\mathrm {Pl}_{P_{\\beta }}(x)|_\\beta ^{\\alpha -r_\\beta }.$ If $F$ is archimedean, then $\\Phi _f$ can be chosen continuously as a function of $f.$ Let $I_F:={\\left\\lbrace \\begin{array}{ll}i\\left[ -\\frac{\\log q}{\\pi },\\frac{\\log q}{\\pi }\\right] & \\textrm { if }F \\textrm { is non-archimedean}\\\\i\\mathbb {R}& \\textrm { if }F \\textrm { is 
archimedean.}\\end{array}\\right.", "}$ Let $c_\\psi \\in \\mathbb {R}_{>0}$ be chosen so that $c_\\psi dx$ is the standard Haar measure on $F$ , where $dx$ is normalized to be self-dual with respect to $\\psi .$ Here the standard Haar measure is the Lebesgue measure if $F=\\mathbb {R}$ , twice the Lebesgue measure if $F=\\mathbb {C}$ , and the unique Haar measure giving $\\mathcal {O}_F$ measure $|\\mathfrak {d}|^{1/2}$ if $F$ is nonarchimedean, where $\\mathfrak {d}$ is a generator for the absolute different.", "Then let $c_F:={\\left\\lbrace \\begin{array}{ll} c_\\psi \\log q &\\textrm { if } F \\textrm { is nonarchimedean}\\\\\\frac{c_\\psi }{2} &\\textrm { if }F=\\mathbb {R}\\\\\\frac{c_\\psi }{2\\pi } &\\textrm { if }F=\\mathbb {C}\\end{array}\\right.", "}$ For suitable continuous functions $f:Y_{P}(F) \\rightarrow \\mathbb {C}$ the Mellin inversion formula states that $ f(x)=\\int _{\\sigma +I_F^{\\Delta _P}}\\sum _{\\eta \\in \\widehat{K}_{\\mathbb {G}_m}^{\\Delta _P}}f_{\\eta _s}(x) \\frac{c_F^{|\\Delta _P|}ds}{(2\\pi i)^{|\\Delta _P|}}$ for suitable $\\sigma \\in \\mathbb {R}^{\\Delta _P}$ .", "By [3] this formula holds whenever the integral defining $f_{\\eta _s}$ is absolutely convergent for all $\\eta \\in \\widehat{K}_{\\mathbb {G}_m}^{\\Delta _P}$ and $\\mathrm {Re}(s)=\\sigma $ and $\\int _{\\sigma +I_F^{\\Delta _P}}\\sum _{\\eta \\in \\widehat{K}_{\\mathbb {G}_m}^{\\Delta _P}}|f_{\\eta _s}(x)| ds<\\infty .$ By Lemma REF we have that $a_{P|P}(\\chi )$ is holomorphic for $\\mathrm {Re}(s_{\\beta _0}) \\ge -\\varepsilon $ for some $\\varepsilon >0$ independent of the character $\\chi .$ It follows that, for $f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(F))$ , (REF ) holds for $\\sigma =(-\\varepsilon /2,\\dots ,-\\varepsilon /2).$ Writing $x=P^{\\mathrm {der}}(F)mk$ with $(m,k) \\in M^{\\mathrm {ab}}(F) \\times K$ the above becomes $f(mk)=\\int _{\\sigma +iI_F^{\\Delta _P}}\\sum _{\\eta \\in \\widehat{K}_{\\mathbb {G}_m}^{\\Delta _P}}\\delta _P^{1/2}(m)\\eta _s(\\omega _P(m))f_{\\eta _s}(k) \\frac{c_F^{|\\Delta _P|}ds}{(2\\pi i)^{|\\Delta _P|}}.$ To obtain the bound and the continuity statement from this expansion one uses the same argument as that proving [15].", "The Fourier transform To ease notation let $ \\begin{split}\\mu _P:&=\\mu _{\\widehat{\\mathfrak {n}}^e_{P|P^*}}\\\\\\mu _{P}(\\chi _s):&=\\mu _{\\widehat{\\mathfrak {n}}^e_{P|P^*}}(\\chi _s) \\end{split}$ where the operator (resp.", "function) on the right is defined as in (REF ) (resp. 
()).", "By the same argument proving [16] we obtain the following theorem: Theorem 3.3 The map $\\mathcal {F}_{P|P^*}:=\\mu _P \\circ \\mathcal {R}_{P|P^*}:\\mathcal {S}(Y_{P,P^{\\prime }}(F)) \\longrightarrow \\mathcal {S}(Y_{P^*,P^{\\prime }}(F))$ is a well-defined isomorphism, bicontinuous in the archimedean case.", "Moreover the diagram $\\begin{tikzcd}\\mathcal {S}(Y_{P,P^{\\prime }}(F)) [r,\"\\mathcal {F}_{P|P^*}\"] [d,\"(\\cdot )_{\\chi _s}\"] & \\mathcal {S}(Y_{P^*,P^{\\prime }}(F)) [d,\"(\\cdot )^*_{\\chi _s}\"]\\\\I_P(\\chi _s) [r,\"\\mu _{P}(\\chi _s)\\mathcal {R}_{P|P^*}\"] &I_{P^*}^*(\\chi _s)\\end{tikzcd}$ commutes for all $\\chi :F^\\times \\rightarrow \\mathbb {C}^\\times $ and $s \\in \\mathbb {C}^\\times $ .", "$\\Box $ The commutativity of the diagram must be understood in the sense that one has an identity of meromorphic functions $\\mathcal {F}_{P|P^*}(f)_{\\chi _s}^*=\\mu _{P}(\\chi _s)\\mathcal {R}_{P|P^*}(f_{\\chi _s}).$ Let $H \\le G$ be a subgroup, and consider its action on $G$ via right multiplication.", "Assume that $Y$ is stable under the action of $H.$ For $(m,h,x_1,x_2) \\in M^{\\mathrm {ab}}(F) \\times H(F) \\times Y_{P}(F) \\times Y_{P^*}(F)$ and $(f_1,f_2) \\in \\mathcal {S}(Y_{P,P^{\\prime }}(F)) \\times \\mathcal {S}(Y_{P^*,P^{\\prime }}(F))$ let $ L(m)R(h)f_1(x_1)=f_1(m^{-1}x_1h), \\quad L(m)R(h)f_2(x_2)= f_2(m^{-1}x_2h)$ be the left and right translation operators.", "It is easy to see that $L(m)R(h)$ preserves $\\mathcal {S}(Y_{P,P^{\\prime }}(F))$ and $\\mathcal {S}(Y_{P^*,P^{\\prime }}(F)).$ Lemma 3.4 One has $\\mathcal {F}_{P|P^*} \\circ L(m)R(h)=\\delta _{ P^* \\cap M_{\\beta _0}}(m)L(m)R(h) \\circ \\mathcal {F}_{P|P^*}.$ The operator $\\mu _P$ is $M^{\\mathrm {ab}}(F) \\times H(F)$ -equivariant.", "Thus the lemma follows from the definition (REF ) of $\\mathcal {R}_{P|P^*}.$ We have Schwartz spaces $ \\mathcal {S}(X_{P \\cap M_{\\beta _0}}(F)) \\quad \\textrm {and} \\quad \\mathcal {S}(X_{P^* \\cap M_{\\beta _0}}(F))$ defined as in [16] and a Fourier transform $\\mathcal {F}_{P \\cap M_{\\beta _0}|P^* \\cap M_{\\beta _0}}:\\mathcal {S}(X_{P \\cap M_{\\beta _0}}(F)) \\longrightarrow \\mathcal {S}(X_{P^* \\cap M_{\\beta _0}}(F))$ defined as in [16].", "It is an isomorphism, bicontinuous in the archimedean case.", "These facts are a special case of our construction of the Schwartz space $\\mathcal {S}(Y_{P,P^{\\prime }}(F))$ and the Fourier transform $\\mathcal {F}_{P|P^*}.$ One simply replaces $ (G,P,P^{\\prime },Y)$ by $ (M_{\\beta _0},P \\cap M_{\\beta _0},M_{\\beta _0},M_{\\beta _0}).$ Recall that for each $y \\in Y(F)$ we have a map $\\iota _y:X_{P \\cap M_{\\beta _0}}^\\circ \\rightarrow Y_{P}$ defined as in (REF ).", "Proposition 3.5 For each $y \\in Y(F)$ one has a map $\\iota _y^*:\\mathcal {S}(Y_{P,P^{\\prime }}(F)) &\\longrightarrow \\mathcal {S}(X_{P \\cap M_{\\beta _0}}(F))\\\\f &\\longmapsto f \\circ \\iota _y$ that fits into a commutative diagram $\\begin{tikzcd}[column sep=huge]\\mathcal {S}(Y_{P,P^{\\prime }}(F)) [r,\"\\mathcal {F}_{P|P^*}\"] [d,\"\\iota _y^*\"] & \\mathcal {S}(Y_{P^*,P^{\\prime }}(F)) [d,\"\\iota _y^*\"]\\\\\\mathcal {S}(X_{P \\cap M_{\\beta _0}}(F)) [r,\"\\mathcal {F}_{P \\cap M_{\\beta _0}|P^* \\cap M_{\\beta _0}}\"] &\\mathcal {S}(X_{P^* \\cap M_{\\beta _0}}(F)).\\end{tikzcd}$ If $F$ is archimedean then $\\iota _y^*$ is continuous.", "We recall that Langlands dual groups are contravariantly functorial with respect to morphisms of reductive algebraic groups $G \\rightarrow H$ with normal image, and behave as 
expected with respect to Levi and parabolic subgroups.", "For precise statements see [10].", "In particular the commutative diagram $\begin{tikzcd}M \cap M_{\beta _0} [d,hookrightarrow][r,hookrightarrow] & P \cap M_{\beta _0} [d,hookrightarrow] [r,hookrightarrow] & M_{\beta _0} [d,hookrightarrow]\\M [r,hookrightarrow]& P \cap M^{\prime } [r,hookrightarrow]& M^{\prime }\end{tikzcd}$ of inclusions of subgroups induces a commutative diagram $\begin{tikzcd}\widehat{M \cap M_{\beta _0}} [r,hookrightarrow] & \widehat{P \cap M_{\beta _0} } [r,hookrightarrow] & \widehat{M_{\beta _0}} \\\widehat{M} [u,twoheadrightarrow][r,hookrightarrow]& \widehat{P \cap M^{\prime }} [r,hookrightarrow] [u,twoheadrightarrow]& \widehat{M^{\prime }} [u,twoheadrightarrow]\end{tikzcd}$ where the horizontal arrows are inclusions of subgroups.", "Consider the representation $\widehat{\mathfrak {n}}_{P \cap M_{\beta _0}|P^* \cap M_{\beta _0}}=\widehat{\mathfrak {n}}_{P \cap M_{\beta _0}}$ of $\widehat{M \cap M_{\beta _0}}.$ We regard it as a representation of $\widehat{M}$ via the quotient map $\widehat{M} \rightarrow \widehat{M \cap M_{\beta _0}}$ .", "Choose a principal $\mathfrak {sl}_2$ -triple in $\widehat{\mathfrak {m}}.$ Its image under the quotient map $\widehat{\mathfrak {m}} \longrightarrow \widehat{\mathfrak {m} \cap \mathfrak {m}_{\beta _0}}$ is a principal $\mathfrak {sl}_2$ -triple in $\widehat{\mathfrak {m} \cap \mathfrak {m}_{\beta _0}}.$ Using our comments on dual groups at the beginning of the proof one checks that the quotient map $\widehat{\mathfrak {n}}_{P|P^*} \longrightarrow \widehat{\mathfrak {n}}_{P \cap M_{\beta _0}|P^* \cap M_{\beta _0}}$ is an isomorphism of $\widehat{M}$ -representations that restricts to a bijection $\widehat{\mathfrak {n}}_{P|P^*}^e \longrightarrow \widehat{\mathfrak {n}}_{P \cap M_{\beta _0}|P^* \cap M_{\beta _0}}^e.$ In view of these observations it is easy to check that the map $\iota _y^*$ is well-defined, the diagram is commutative, and $\iota _y^*$ is continuous when $F$ is archimedean.", "Corollary 3.6 One has $\mathcal {F}_{P|P^*} \circ \mathcal {F}_{P^*|P}=\mathrm {Id}.$ In [8] Braverman and Kazhdan prove that $\mathcal {F}_{P \cap M_{\beta _0}|P^* \cap M_{\beta _0}} \circ \mathcal {F}_{P^* \cap M_{\beta _0}|P \cap M_{\beta _0} }$ is the identity.", "Thus the corollary follows from Proposition REF .", "The unramified setting We assume that $F$ is nonarchimedean and is unramified over its prime field, that $\psi $ is unramified, and that we are in the unramified setting in the sense of §REF .", "Let $\mathbb {1}_0 \in C_c^\infty (Y_{P}(F))$ be the characteristic function of the image of $Y(\mathcal {O}_F)$ in $Y_{P}(F)$ .", "We define the basic function $ b_{Y_{P,P^{\prime }}}:Y_{P}(F) \longrightarrow \mathbb {C}$ to be the unique function in $C^\infty (Y_{P}(F))$ that is finite under a compact open subgroup of $M^{\mathrm {ab}}(F)$ such that $(b_{Y_{P,P^{\prime }}})_{\chi _s}=a_{P|P}(\chi _s)(\mathbb {1}_{0})_{\chi _s}$ for $\mathrm {Re}(s_{\beta _0})$ sufficiently large.", "As explained in (REF ) and (REF ), the spaces $X_{P \cap M_{\beta _0}}$ are special cases of $Y_{P,P^{\prime }},$ so $b_{X_{P \cap M_{\beta _0}}}$ is defined.", "Lemma 3.7 Assume that $y \in P^{\mathrm {der}}(F)\beta _0^\vee (F^\times ) Y(\mathcal {O}_F).$ Then $\iota _y^*(b_{Y_{P,P^{\prime }}})=b_{X_{P \cap M_{\beta _0}}}.$ We have already explained the relation 
between $\\widehat{\\mathfrak {n}}_{P \\cap M_{\\beta _0}|(P \\cap M_{\\beta _0})^*}$ and $\\mathfrak {n}_{P|P^*}$ as representations of $\\widehat{M}$ and $\\widehat{M \\cap M_{\\beta _0}}$ in the proof of Proposition REF .", "This relationship implies that $a_{P|P}(\\chi _s)=a_{P \\cap M_{\\beta _0}|P \\cap M_{\\beta _0}}((\\chi _{\\beta _0})_{s_{\\beta _0}}).$ Define $r_\\beta $ as in (REF ).", "Arguing as in the proof of Lemma REF we obtain the following lemma: Lemma 3.8 There are constants $\\alpha ,c>0$ independent of the cardinality of the residue field $q$ such that if $q>c$ then $|b_{Y_{P,P^{\\prime }}}(x)| \\le \\mathbb {1}_{V_{\\beta _0}(\\mathcal {O}_F) \\times V_{\\Delta _{P^{\\prime }}}^\\circ (\\mathcal {O}_F)}(\\mathrm {Pl}_{P}(x))\\prod _{\\beta \\in \\Delta _P}|\\mathrm {Pl}_{P_\\beta }(x)|^{\\alpha -r_\\beta }.$ $\\Box $ Remark The claim on the independence of the residual characteristic is important because we will require this result for all but finitely many places of a global field.", "Proposition 3.9 One has $b_{Y_{P,P^{\\prime }}} \\in \\mathcal {S}(Y_{P,P^{\\prime }}(F)).$ Moreover $\\mathcal {F}_{P|P^*}(b_{Y_{P,P^{\\prime }}})=b_{Y_{P^*,P^{\\prime }}}.$ The characters of $Z_{\\widehat{M}}$ that appear in $\\widehat{\\mathfrak {n}}_{P|P^*}$ are all of the form $\\lambda \\beta _{0}^\\vee $ with $\\lambda \\in \\mathbb {Z}_{>0}$ by the proof of Lemma REF .", "Let $V_{\\lambda }=\\widehat{\\mathfrak {n}}_{P|P^*}(\\lambda )=\\widehat{\\mathfrak {n}}_{M_{\\beta _0} \\cap P}(\\lambda )$ be the $\\lambda \\beta _0^\\vee $ -isotypic space and let $r_{\\lambda }:\\widehat{M}_{\\beta _0} \\longrightarrow \\mathrm {Aut}(V_{\\lambda })$ be the corresponding representation.", "It is irreducible [42] [32].", "Let $\\mathrm {triv}:(F^\\times )^{\\Delta _P} \\rightarrow \\mathbb {C}^\\times $ be the trivial character.", "Recall the comments on the relationship between $\\widehat{\\mathfrak {n}}_{P|P^*}$ and $\\widehat{\\mathfrak {n}}_{P \\cap M_{\\beta _0} }$ from the proof of Proposition REF .", "Let $\\pi $ be the trivial representation of $M_{\\beta _0}(F)$ .", "The Gindikin-Karpelevic formula implies that $\\mathcal {R}_{P|P^*}((\\mathbb {1}_{0})_{\\mathrm {triv}_s})=(\\mathbb {1}_{0})_{\\mathrm {triv}_s}^*\\prod _{\\lambda } \\frac{L(\\lambda s_{\\beta _0}, \\pi ,r^\\vee _\\lambda )}{L(1+\\lambda s_{\\beta _0},\\pi ,r^\\vee _\\lambda )}$ where the product is over all $\\lambda \\in \\mathbb {Z}_{\\ge 1}$ such that $V_{\\lambda } \\ne 0$ [42] [31].", "Here the $L$ -functions are Langlands $L$ -functions and $r_\\lambda ^\\vee $ is the dual of $r_\\lambda .$ In more detail, $\\pi $ determines a Langlands class $c \\in \\widehat{M}_{\\beta _0}(\\mathbb {C})$ by the Satake isomorphism, and $L(s,\\pi ,r_{\\lambda }^\\vee )=\\det \\left(I_{V_\\lambda }-\\frac{r_{\\lambda }^\\vee (c)}{q^{s}} \\right)^{-1}.$ In fact, if $\\sigma :\\mathrm {SL}_2 \\rightarrow \\widehat{M}_{\\beta _0}$ is a principal $\\mathrm {SL}_2$ then $c=\\sigma {\\left({\\begin{matrix} q^{1/2} & \\\\ & q^{-1/2} \\end{matrix}}\\right)}$ [21].", "Consider $\\widehat{\\mathfrak {n}}_{P|P^*}(\\lambda ).$ As a representation of a principal $\\mathfrak {sl}_2$ -triple in $\\widehat{\\mathfrak {m}}$ it decomposes into a direct sum of irreducible representations in natural bijection with the $\\mathcal {N}_i$ in (REF ) that appear in $\\widehat{\\mathfrak {n}}^e_{P|P^*}(\\lambda ).$ The dimension of the corresponding irreducible representation is $2s_i+1,$ where $2s_i$ is the $h$ -eigenvalue on $\\mathcal {N}_i$ as 
above.", "We conclude that $\frac{L(\lambda s_{\beta _0},\pi ,r_\lambda ^\vee )}{L(1+\lambda s_{\beta _0},\pi ,r_\lambda ^\vee )}&=\prod _{i} \left(\frac{1-q^{-1-s_i-\lambda s_{\beta _0}}}{1-q^{-s_i-\lambda s_{\beta _0}}}\frac{1-q^{-s_i-\lambda s_{\beta _0}}}{1-q^{1-s_i-\lambda s_{\beta _0}}} \cdots \frac{1-q^{-1+s_i-\lambda s_{\beta _0}}}{1-q^{s_i-\lambda s_{\beta _0}}}\right)\\&=\prod _{i} \frac{1-q^{-1-s_i-\lambda s_{\beta _0}}}{1-q^{s_i-\lambda s_{\beta _0}}}$ where the product is over $\mathcal {N}_i$ in (REF ) that appear in $\widehat{\mathfrak {n}}^e_{P|P^*}(\lambda ).$ Thus $ \prod _{\lambda } \frac{L(\lambda s_{\beta _0},\pi ,r_\lambda ^\vee )}{L(1+\lambda s_{\beta _0},\pi ,r_\lambda ^\vee )}=\frac{a_{P|P^*}(\mathrm {triv}_s)}{a_{P|P}(\mathrm {triv}_s)}.$ We deduce for all unramified $\chi :F^\times \rightarrow \mathbb {C}^\times $ that $\mathcal {R}_{P|P^*}((\mathbb {1}_{0})_{\chi _s})=\frac{a_{P|P^*}(\chi _s)}{a_{P|P}(\chi _s)}(\mathbb {1}_{0})_{\chi _s}^*.$ It follows immediately that $b_{Y_{P,P^{\prime }}} \in \mathcal {S}(Y_{P,P^{\prime }}(F)).$ For unramified $\chi $ one has $\mu _P(\chi _s)=\frac{a_{P^*|P^*}((\chi _s)^{-1})}{a_{P|P^*}(\chi _s)}$ where $\mu _P(\chi _s)$ is defined as in (REF ).", "Hence $\mu _P(\chi _s) \mathcal {R}_{P|P^*}((\mathbb {1}_{0})_{\chi _s})=\frac{a_{P^*|P^*}((\chi _s)^{-1})}{a_{P|P}(\chi _s)}(\mathbb {1}_{0})_{\chi _s}^*.$ Applying Theorem REF we have $\mathcal {F}_{P|P^*}(b_{Y_{P,P^{\prime }}})_{\chi _s}^*=a_{P|P}(\chi _s)\mu _P(\chi _s) \mathcal {R}_{P|P^*}((\mathbb {1}_{0})_{\chi _s})=a_{P^*|P^*}((\chi _s)^{-1})(\mathbb {1}_{0})^*_{\chi _s}.$ Combining this with Mellin inversion we have $\mathcal {F}_{P|P^*}(b_{Y_{P,P^{\prime }}})=b_{Y_{P^*,P^{\prime }}}.$ Let $\varpi $ be a uniformizer for $F$ .", "For our use in the proof of Theorem REF below we require the following lemma: Lemma 3.10 Let $(\alpha ,\lambda ) \in \mathbb {C}\times \mathbb {Z}_{\ne 0}.$ For any $f \in \mathcal {S}(Y_{P,P^{\prime }}(F))^{M(\mathcal {O}_F) \times H(\mathcal {O}_F)}$ $\left(\mathrm {Id}-\frac{\alpha }{\delta _P^{1/2}(\beta _0^{\vee }(\varpi ^{\lambda }))}L(\beta _0^{\vee }(\varpi ^{-\lambda }))\right)f \in \mathcal {S}(Y_{P,P^{\prime }}(F))^{M(\mathcal {O}_F) \times H(\mathcal {O}_F)}$ and $\left(\left(\mathrm {Id}-\frac{\alpha }{\delta _P^{1/2}(\beta _0^{\vee }(\varpi ^{\lambda }))}L(\beta _0^{\vee }(\varpi ^{-\lambda }))\right)f\right)_{\chi _s}=(1-\alpha \chi _{\beta _0}(\varpi ^{\lambda }))f_{\chi _s}.$ $\Box $ The adelic setting Now let $F$ be a global field.", "We let $\mathcal {S}(Y_{P,P^{\prime }}(\mathbb {A}_F))=\widehat{\otimes }_{v|\infty }\mathcal {S}(Y_{P,P^{\prime }}(F_v)) \otimes \otimes ^{\prime }_{v \nmid \infty }\mathcal {S}(Y_{P,P^{\prime }}(F_v))$ where the restricted direct product is taken with respect to the basic functions of (REF ).", "Here when $F$ is a number field the hat denotes the projective topological tensor product and when $F$ is a function field it is the algebraic tensor product.", "The tensor product of the local Fourier transforms induces an isomorphism $\mathcal {F}_{P|P^*}:\mathcal {S}(Y_{P,P^{\prime }}(\mathbb {A}_F)) \longrightarrow \mathcal {S}(Y_{P^*,P^{\prime }}(\mathbb {A}_F)).$ Here we are using Proposition REF .", "
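To orient the reader we record the shape of the Mellin transforms, considered below, of a factorizable element of this space; nothing beyond the definitions above is used.", "Suppose that $f=\otimes _v f_v \in \mathcal {S}(Y_{P,P^{\prime }}(\mathbb {A}_F))$ with $f_v=b_{Y_{P,P^{\prime }}}$ for all $v$ outside a finite set $S$ of places, and that $\chi \in (\widehat{A_{\mathbb {G}_m} F^\times \backslash \mathbb {A}_F^\times })^{\Delta _P}$ is everywhere unramified outside $S$ (which can always be arranged by enlarging $S$ ).", "For $\mathrm {Re}(s_{\beta _0})$ large enough that all of the integrals involved converge absolutely, the integral defining $f_{\chi _s}$ factors over the places, and the defining property of the basic function gives $f_{\chi _s}=\Big (\prod _{v \notin S}a_{P|P}(\chi _{vs})\Big )\bigotimes _{v \notin S}(\mathbb {1}_{0})_{\chi _{vs}} \otimes \bigotimes _{v \in S}(f_v)_{\chi _{vs}}.$ This is the mechanism by which products of completed Hecke $L$ -functions enter the degenerate Eisenstein series studied below.", "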
$\\iota _y^*:\\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F)) &\\longrightarrow \\mathcal {S}(X_{P \\cap M_{0}}(\\mathbb {A}_F))\\\\f &\\longmapsto f \\circ \\iota _y$ that fits into a commutative diagram $\\begin{tikzcd}[column sep=huge]\\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F)) [r,\"\\mathcal {F}_{P|P^*}\"] [d,\"\\iota _y^*\"] & \\mathcal {S}(Y_{P^*,P^{\\prime }}(\\mathbb {A}_F)) [d,\"\\iota _y^*\"]\\\\\\mathcal {S}(X_{P \\cap M_{\\beta _0}}(\\mathbb {A}_F)) [r,\"\\mathcal {F}_{P \\cap M_{\\beta _0}|P^* \\cap M_{\\beta _0}}\"] &\\mathcal {S}(X_{P^* \\cap M_{\\beta _0}}(\\mathbb {A}_F)).\\end{tikzcd}$ Let $K=\\prod _vK_v \\le G(\\mathbb {A}_F)$ be a maximal compact subgroup.", "The element $y=(y_v) \\in Y_{P}(\\mathbb {A}_F)$ has the property that $y_v \\in K_v$ for almost all $v.$ Thus the proposition follows from its local analogue Proposition REF and the corresponding statement for basic functions in Lemma REF .", "Lemma 3.12 For $f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ (resp.", "$f \\in \\mathcal {S}(Y_{P^*,P^{\\prime }}(\\mathbb {A}_F))$ the integrals defining $f_{\\chi _s}$ (resp.", "$f_{\\chi _s}^*$ ) converge absolutely for $\\mathrm {Re}(s_{\\beta _0})$ sufficiently large (resp.", "sufficiently small).", "This follows from the estimates in lemmas REF and REF .", "Lemma REF implies that the Mellin transforms (REF ) define maps $(\\cdot )_{\\chi _s}:=(\\cdot )_{\\chi _s,P}:\\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F)) &\\longrightarrow I_P(\\chi _s)|_{Y_{P}(\\mathbb {A}_F)} \\\\(\\cdot )_{\\chi _s}^*:=(\\cdot )_{\\chi _s,P^*}^*:\\mathcal {S}(Y_{P^*,P^{\\prime }}(\\mathbb {A}_F)) &\\longrightarrow I_P(\\chi _s)|_{Y_{P}(\\mathbb {A}_F)}$ for $\\mathrm {Re}(s_\\beta )$ sufficiently large (resp.", "sufficiently small).", "These Mellin transforms will be used in the following sections.", "The Poisson summation formula on $X_{P \\cap M_{\\beta _0}}(F)$ The Poisson summation formula on $X_{P \\cap M_{\\beta _0}}(F)$ was established under some local assumptions on the test functions involved in [8] with a slightly different definition of the Schwartz space.", "In this section we establish it in general following the arguments of [19].", "To ease notation, for this section only we assume that $P \\le M_{\\beta _0},$ which implies $M_{\\beta _0}=G.$ This amounts to assuming that $G$ is simple and $P$ is a maximal parabolic subgroup.", "Thus $X_{P \\cap M_{\\beta _0}}=X_P=X_{P,PIG}$ where $I \\in G(F)$ is the identity.", "The construction of the Schwartz space and the Fourier transform given in the previous section reduces to the construction of [16] in this case.", "We observe that $\\Delta _P=\\lbrace \\beta _0\\rbrace $ in the setting of this section.", "Thus we drop $\\Delta _P$ and $\\beta _0$ from notation when no confusion is likely.", "For a quasi-character $\\chi :A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times \\rightarrow \\mathbb {C}^\\times ,$ $s \\in \\mathbb {C}$ , $f_1 \\in \\mathcal {S}(X_P(\\mathbb {A}_F))$ and $f_2 \\in \\mathcal {S}(X_{P^*}(\\mathbb {A}_F))$ we have degenerate Eisenstein series $ \\begin{split}E(g,f_{1\\chi _s}):&=\\sum _{\\gamma \\in P(F) \\backslash G(F)}f_{1\\chi _s}(\\gamma g)\\\\E^*(g,f^*_{2\\chi _s}):&=\\sum _{\\gamma \\in P^*(F) \\backslash G(F)}f^*_{2\\chi _s}(\\gamma g).", "\\end{split}$ Here $\\chi _s=\\chi |\\cdot |^s$ .", "It is well-known that these converge absolutely for $\\mathrm {Re}(s)$ large enough (resp.", "small enough).", "For the proof of absolute convergence in a special case see [19]; 
the proof generalizes to our setting.", "Let $ a_{P|P}(\\chi _s):=\\prod _v a_{P|P}(\\chi _{vs}).$ Lemma 4.1 Let $\\chi \\in \\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times }$ .", "The function $a_{P|P}(\\chi _s)$ is holomorphic and nonzero for $\\mathrm {Re}(s)> 0.$ It admits a meromorphic continuation to the plane.", "Moreover there is an integer $n$ depending only on $G$ such that $a_{P|P}(\\chi _s)$ is holomorphic if $\\chi ^n \\ne 1.$ The first claim follows from the same remarks proving Lemma REF .", "Since $a_{P|P}(\\chi _s)$ is a product of (completed) Hecke $L$ -functions the second two assertions are clear.", "Theorem 4.2 (Langlands) Let $f \\in \\mathcal {S}(X_P(\\mathbb {A}_F))$ be finite under a maximal compact subgroup of $G(\\mathbb {A}_F).$ For a character $\\chi :A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times \\rightarrow \\mathbb {C}^\\times $ the Eisenstein series $E(g,f_{\\chi _s})$ has a meromorphic continuation to the $s$ -plane and admits a functional equation $E(g,f_{\\chi _s})=E^*(g,\\mathcal {F}_{P|P^*}(f)_{\\chi _s}^*).$ If $\\mathrm {Re}(z)=0$ then the order of the pole of $E(g,f_{\\chi _s})$ at $s=z$ is bounded by the order of the pole of $a_{P|P}(\\chi _s)$ at $s=z.$ To ease translation with the manner the theory is usually phrased, let $Q$ be the unique parabolic subgroup containing $B$ that is conjugate to $P^*,$ and let $M_Q$ be the Levi subgroup of $P$ containing $T.$ Choose $w \\in G(F)$ so that $w P^*w^{-1}=Q$ and $wMw^{-1}=M_Q.$ There is an isomorphism $j:C^\\infty (X_{P^*}^\\circ (F)) &\\tilde{\\longrightarrow } C^\\infty (X_Q^\\circ (F))\\\\f &\\tilde{\\longmapsto } (x \\mapsto f(w^{-1}x)).$ For suitable sections $\\Phi ^{\\chi _s} \\in I_Q(\\chi _s)$ and $g \\in G(\\mathbb {A}_F)$ we can then form the Eisenstein series $E_Q(g,\\Phi ^{\\chi _s})=\\sum _{\\gamma \\in Q(F) \\backslash G(F)}\\Phi ^{\\chi _s}(\\gamma g).$ Let $K$ be a maximal compact subgroup of $G(\\mathbb {A}_F)$ .", "If $\\Phi ^{\\chi _s}$ is $K$ -finite and holomorphic for $\\mathrm {Re}(s)$ sufficiently large, then $E_Q(g,\\Phi ^{\\chi _s})$ is absolutely convergent for $\\mathrm {Re}(s)$ sufficiently large.", "We observe that for $f \\in \\mathcal {S}(X_P(\\mathbb {A}_F))$ one has $ E^*(g,f_{\\chi _{s}}^*)=E_Q(g,j (f_{\\chi _s}^*))$ for $\\mathrm {Re}(s)$ sufficiently small.", "By definition of the Schwartz space, for any $f \\in \\mathcal {S}(X_P(\\mathbb {A}_F))$ the section $f_{\\chi _s}$ is a holomorphic multiple of the product of completed Hecke $L$ -functions $a_{P|P}(\\chi _s).$ With this in mind, the meromorphy assertion is proven in [9], [33] for $K$ -finite functions $f.$ These references also contain a proof of the functional equation $E(g,f_{\\chi _s})=E_Q(g,j \\left( \\mathcal {R}_{P|P^*}(f_{\\chi _s})\\right))$ which implies by (REF ) that $E(g,f_{\\chi _s})=E^*(g, \\mathcal {R}_{P|P^*}(f_{\\chi _s})).$ We have $ \\mathcal {R}_{P|P^*}(f_{\\chi _s})=(\\mathcal {F}_{P|P^*}(f))_{\\chi _s}^*$ by Theorem REF and the argument of [19].", "Thus the functional equation stated in the theorem follows from (REF ).", "As mentioned above for any $f \\in \\mathcal {S}(X_P(\\mathbb {A}_F))$ the Mellin transform $f_{\\chi _s}$ is a holomorphic multiple of $a_{P|P}(\\chi _s)$ .", "Thus the last assertion of the theorem follows from the fact that Eisenstein series attached to $K$ -finite holomorphic sections are themselves holomorphic on the unitary axis [2], [33].", "Let $C(\\chi _s)$ be the analytic conductor of $\\chi _s,$ normalized as in 
[19].", "Using notation as in (REF ) and (REF ) one has the following estimate: Theorem 4.3 Assume that $F$ is a number field, that Conjecture REF is true, and $f \in \mathcal {S}(X_P(\mathbb {A}_F))$ is $K$ -finite.", "Let $A<B$ be real numbers and let $p\in \mathbb {C}[x]$ be a polynomial such that $p(s)E(g,f_{\chi _s})$ is holomorphic in the strip $V_{A,B}.$ Then for all $N \ge 0$ one has $|E(g,f_{\chi _s})|_{A,B,p} \ll _{N,f} C(\chi _s)^{-N}.$ The argument is the same as that proving [19].", "Let $K_M <M^{\mathrm {ab}}(\mathbb {A}_F)$ be the maximal compact subgroup and let $ \kappa _F:={\left\lbrace \begin{array}{ll}\frac{2^{r_1}(2\pi )^{r_2}h_F R_F}{d_F^{1/2}e_F}& \textrm { if }F \textrm { is a number field}\\\frac{h_F}{d_F^{1/2}(q-1)\log q}&\textrm { if }F \textrm { is a function field}\end{array}\right.", "}$ where $r_1$ and $r_2$ are the number of real (resp.", "complex) places, $h_F$ is the class number, $R_F$ is the regulator, $d_F$ is the absolute discriminant, $e_F$ is the number of roots of unity in $F$ , and in the function field case $F$ has field of constants $\mathbb {F}_q$ .", "For complex numbers $s_0$ let $ w(s_0)={\left\lbrace \begin{array}{ll} \tfrac{1}{2} \textrm { if }\mathrm {Re}(s_0)=0,\\1 \textrm { otherwise.}\end{array}\right.", "}$ We now prove the following special case of Theorem REF : Theorem 4.4 Let $f \in \mathcal {S}(X_P(\mathbb {A}_F))$ .", "Assume that one of the following holds: (1) $F$ is a function field; (2) $F$ is a number field, Conjecture REF is valid, and $f$ is $K_M$ -finite; or (3) $F$ is a number field and Conjecture REF is valid.", "One has $\sum _{x \in X_{P}^\circ (F)}&f(x)+\sum _{\chi } \sum _{\begin{array}{c}s_0 \in \mathbb {C}\\ \mathrm {Re}(s_0)\ge 0\end{array}} \frac{w(s_0)}{\kappa _F} \mathrm {Res}_{s=s_0}E^*(I,\mathcal {F}_{P|P^*}(f)^*_{\chi _{-s}})\\&=\sum _{x^* \in X_{P^*}^{\circ }(F)}\mathcal {F}_{P|P^*}(f)(x^*)+\sum _{\chi } \sum _{\begin{array}{c}s_0 \in \mathbb {C}\\ \mathrm {Re}(s_0)\ge 0\end{array}} \frac{w(s_0)}{\kappa _F} \mathrm {Res}_{s=s_0}E(I,f_{\chi _s}).$ Here the sum on $\chi $ is over characters of $A_{\mathbb {G}_m} F^\times \backslash \mathbb {A}_F^\times .$ The sums over $x$ and $x^*$ are absolutely convergent.", "It is clear that if $f$ is $K_M$ -finite and Conjecture REF is valid then the double sum over $\chi $ and $s_0$ has finite support.", "The same is true if $F$ is a function field or Conjecture REF is valid.", "This is how we are using these assumptions here and below.", "Remark Before proving Theorem REF we clarify its meaning.", "Assume $F$ is a number field and Conjecture REF is valid.", "For any $s_0 \in \mathbb {C}$ and $f^\infty \in \mathcal {S}(X_P(\mathbb {A}_F^\infty ))$ consider the linear functional $ f_\infty \longmapsto \mathrm {Res}_{s=s_0}E(I,(f_\infty f^\infty )_{\chi _s}).$ It is defined on the dense subspace of $\mathcal {S}(X_P(F_\infty ))$ consisting of $K_\infty $ -finite functions.", "The proof of the theorem shows that this linear functional is continuous with respect to the Fréchet topology on $\mathcal {S}(X_P(F_\infty )).$ Hence it extends to all of $\mathcal {S}(X_P(F_\infty )).$ For $f_\infty $ that are not $K_\infty $ -finite this is the manner in which the expression $\mathrm {Res}_{s=s_0}E(I,f_{\chi _s})$ is to be interpreted in this paper.", "We take the obvious analogous conventions regarding the meaning of $\mathrm {Res}_{s=s_0}E^*(I,f^{\prime *}_{\chi _s})$ for $f^{\prime } \in \mathcal 
{S}(X_{P^*}(\\mathbb {A}_F))$ that are not $K_\\infty $ -finite.", "Let $K_\\infty \\le G(F_\\infty )$ be a maximal compact subgroup.", "Assume first that $f=f_\\infty f^\\infty $ where $f_\\infty \\in \\mathcal {S}(X_{P}(F_\\infty ))$ is $K_\\infty $ -finite.", "Let $I_F:={\\left\\lbrace \\begin{array}{ll}i\\left[-\\frac{\\pi }{\\log q},\\frac{\\pi }{\\log q} \\right] & \\textrm { if }F \\textrm { is a function field, and }\\\\i\\mathbb {R}&\\textrm { if }F \\textrm { is a number field.}", "\\end{array}\\right.", "}$ By Mellin inversion and Theorem REF in the number field case $\\sum _{x \\in X_{P}^\\circ (F)}f(x)=\\sum _{\\chi } \\int _{\\sigma +I_F }E(I,f_{\\chi _s}) \\frac{ds}{c 2\\pi i}$ for $\\sigma $ sufficiently large and a suitable constant $c.$ Here the sum over $\\chi $ is as in the statement of the theorem.", "Since $f$ is $K_\\infty $ -finite, the support of the sum over $\\chi $ is finite.", "The constant $c$ may be computed as follows.", "It depends on our choice of Haar measure on $[M^{\\mathrm {ab}}],$ which is normalized to correspond to the measure on $A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times $ described in §REF .", "This is induced as explained in §REF from a measure on $\\mathbb {A}_F$ that is self-dual with respect to $\\psi .$ The induced measure on $F \\backslash \\mathbb {A}_F$ is independent of the choice of $\\psi .$ Thus using [46] and a choice of self-dual measure one obtains $\\mathrm {meas}(A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times )={\\left\\lbrace \\begin{array}{ll} \\frac{2^{r_1}(2\\pi )^{r_2}h_F R_F}{d_{F}^{1/2}e_F} &\\textrm { if }F \\textrm { is a number field}\\\\\\frac{h_F }{d_{F}^{1/2}(q-1)} & \\textrm { if }F \\textrm { is a function field.}\\end{array}\\right.", "}$ This implies $c=\\kappa _F.$ Notice that in the function field case $\\kappa _F=\\frac{\\mathrm {meas}(A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times )}{\\log q}$ because of the measure of $I_F.$ We shift contours to $\\mathrm {Re}(\\sigma )$ very small to see that this is $\\sum _{x \\in X_{P}^\\circ (F)}f(x)=\\sum _{\\chi } \\int _{\\mathrm {Re}(s)=\\sigma ^{\\prime }}E(I,f_{\\chi _s}) \\frac{ds}{\\kappa _F2\\pi i}+\\frac{1}{\\kappa _F}\\sum _{\\chi }\\sum _{s_0 \\in \\mathbb {C}} \\mathrm {Res}_{s=s_0}E(I,f_{\\chi _s})$ where now $\\sigma ^{\\prime }$ is sufficiently small.", "Here the support of the sum over $\\chi $ and $s_0$ is finite by assumptions (2) or (3) in the number field case and the fact that $E(I,f_{\\chi _s})$ is rational in the sense of [36] in the function field case [36].", "In the number field case the bound required to justify the contour shift is provided by Theorem REF .", "We now apply the functional equation of Theorem REF and Mellin inversion to deduce the identity $ \\sum _{x \\in X_{P}^\\circ (F)}f(x)=\\sum _{x^* \\in X_{P^*}^{\\circ }(F)}\\mathcal {F}_{P|P^*}(f)(x^*)+\\frac{1}{\\kappa _F}\\sum _{\\chi }\\sum _{s_0 \\in \\mathbb {C}} \\mathrm {Res}_{s=s_0}E(I,f_{\\chi _s}).$ Since all elements of the Schwartz space $\\mathcal {S}(X_P(\\mathbb {A}_F))$ are $K_\\infty $ -finite in the function field case assume for the moment that $F$ is a number field.", "Write $K_{M\\infty }=K_{M} \\cap M^{\\mathrm {ab}}(F_\\infty ).$ We assume without loss of generality that $f=f_\\infty f^\\infty $ where $f_\\infty \\in \\mathcal {S}(X_P(F_\\infty ))$ , $f^\\infty \\in \\mathcal {S}(X_P(\\mathbb {A}_F^\\infty ))$ .", "We additionally either assume (2) and that $f_\\infty $ transforms according to a particular 
character $\\eta $ under $K_{M\\infty }$ , or we assume (3).", "Since $K_\\infty $ -finite functions are dense in $\\mathcal {S}(X_{P}(F_\\infty ))$ by the standard argument [45] we can choose a sequence $\\lbrace f_i:i \\in \\mathbb {Z}_{\\ge 1}\\rbrace \\subset \\mathcal {S}(X_P(F_\\infty ))$ of $K_\\infty $ -finite $f_i$ such that $f_i \\rightarrow f$ in the Frechét topology on $\\mathcal {S}(X_P(F_\\infty )).$ Under assumption (2) we additionally assume that the $f_i$ transform under $K_{M\\infty }$ by $\\eta $ .", "We observe that the support of the sum over $\\chi $ and $s_0$ in (REF ) and its analogues when $f$ is replaced by $f_if^\\infty $ are contained in a finite set independent of $i$ under either assumption (2) or (3).", "In fact, the finite set can be taken to depend only on the $K_M^\\infty $ -type of $f^\\infty $ .", "It is clear from Lemma REF that for each fixed $f^\\infty \\in \\mathcal {S}(X_P(\\mathbb {A}_F^\\infty ))$ the map $\\Lambda _{1,f^\\infty }:\\mathcal {S}(X_P(F_\\infty )) &\\longrightarrow \\mathbb {C}\\\\f_\\infty &\\longmapsto \\sum _{x \\in X_P^\\circ (F)}f_\\infty (x)f^\\infty (x)$ is continuous on $\\mathcal {S}(X_P(F_\\infty )).$ The same is true of $\\Lambda _{2,f^\\infty }:\\mathcal {S}(X_P(F_\\infty )) &\\longrightarrow \\mathbb {C}\\\\f_\\infty &\\longmapsto \\sum _{x^* \\in X_{P^*}^\\circ (F)}\\mathcal {F}_{P|P^*}(f_\\infty )(x^*)\\mathcal {F}_{P|P^*}(f^\\infty )(x^*)$ since the Fourier transform is continuous.", "Thus $\\Lambda _{j,f^\\infty }(f_i) \\rightarrow \\Lambda _{j,f^\\infty }(f)$ for as $i \\rightarrow \\infty $ for $j \\in \\lbrace 1,2\\rbrace .$ Finally consider $\\Lambda _{f^\\infty ,s_0}:\\mathcal {S}(X_P(F_\\infty )) &\\longrightarrow \\mathbb {C}\\\\f_\\infty &\\longmapsto \\mathrm {Res}_{s=s_0}E(I,f_{\\chi _s}).$ Using Lemma REF we can choose an $f^{\\prime \\infty }\\in \\mathcal {S}(X_{P}(\\mathbb {A}_F^\\infty ))$ such that $\\Lambda _{f^\\infty ,s_0}=\\Lambda _{1,f^{\\prime \\infty }}-\\Lambda _{2,f^{\\prime \\infty }}.$ In more detail one uses Lemma REF to choose $f^{\\prime \\infty }$ so that the contribution of $\\mathrm {Res}_{s=s_0}E(I,f_{\\chi _s})$ to (REF ) is unchanged, but the contribution of the other residues vanish.", "Thus $\\Lambda _{f^\\infty ,s_0}$ is continuous.", "We deduce that (REF ) is valid for all $K_M$ -finite $f$ under assumption (2) and for all $f$ under assumption (3).", "We now return to the case of a general global field.", "To obtain the expression in the theorem from (REF ) we use the functional equation and holomorphy assertion of Theorem REF and the observeration that for any meromorphic function $f$ on $\\mathbb {C}$ and any $s_0 \\in \\mathbb {C}$ one has $\\mathrm {Res}_{s=s_0}f(s)=-\\mathrm {Res}_{s=-s_0}f(-s).$ Lemma 4.5 Let $\\chi \\in \\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times }.$ Let $n$ be the maximal order of the pole at $s_0$ of $E(I,f_{\\chi _s})$ as $f$ ranges over $K_\\infty $ -finite elements of $\\mathcal {S}(X_P(F_\\infty )).$ For each $f^\\infty \\in \\mathcal {S}(X_P(\\mathbb {A}_F^\\infty ))$ there are continuous linear functionals $\\Lambda _i((\\cdot )f^\\infty ):\\mathcal {S}(X_P(F_\\infty )) \\longrightarrow \\mathbb {C}$ such that if $f_\\infty $ is $K_\\infty $ -finite then $ \\delta _P^{-1/2}(\\beta ^\\vee _0(t))\\sum _{i=0}^{n-1} |t|^{-s_0}(\\log |t|)^i \\overline{\\chi }(t)\\Lambda _i(f_\\infty f^\\infty )=\\mathrm {Res}_{s=s_0}E(I,(L(\\beta _0^\\vee (t))f_\\infty f^\\infty )_{\\chi _s}).$ As usual, when $F$ is a function field we 
give $\\mathcal {S}(X_P(F_\\infty ))$ the discrete topology.", "We have $\\tfrac{1}{|t|^{s}}=\\sum _{j=0}^\\infty \\frac{(-(s-s_0)\\log |t|)^j}{ j!", "|t|^{s_0}}$ for $s$ in a neighborhood of $s_0$ .", "On the other hand if $f_\\infty $ is $K_\\infty $ -finite then we can write $E(I,(f_{\\infty }f^\\infty )_{\\chi _s})=\\sum _{i=0}^{n-1}\\frac{(-1)^{i}i!", "\\Lambda _i(f_\\infty f^\\infty )}{(s-s_0)^{i+1}} +g(s)$ where $g(s)$ is holomorphic in a neighborhood of $s_0$ and $\\Lambda _i(f_\\infty f_\\infty ) \\in \\mathbb {C}$ .", "Then the expression (REF ) is valid for $K_\\infty $ -finite $f_\\infty .$ The fact that $\\Lambda _i((\\cdot )f^\\infty )$ extends to a continuous functional on all of $\\mathcal {S}(X_P(F_\\infty ))$ is tautological in the function field case.", "In the number field case it follows from a variant of the proof of the continuity of the functional $\\Lambda _{f^\\infty ,s_0}$ in the proof of Theorem REF .", "Proofs of Theorem REF and Theorem REF For $f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ consider the sum $ \\sum _{y \\in Y_{P}(F)}f(y ).$ Lemma 5.1 The sum (REF ) is absolutely convergent.", "Let $|\\cdot |_{\\beta }=\\prod _{v}|\\cdot |_{\\beta ,v}$ , where $|\\cdot |_{\\beta ,v}$ is the norm on $V_{\\beta }(F_v)$ fixed above (REF ).", "Using lemmas REF and REF and the notation in these lemmas we see that for $\\alpha >0$ sufficiently small there is a Schwartz function on $\\Phi \\in \\mathcal {S}(V_{\\beta _0}(\\mathbb {A}_F) \\times V_{\\Delta _{P^{\\prime }}}^\\circ (\\mathbb {A}_F))$ such that the sum is dominated by $\\sum _{\\xi \\in V^{\\circ }(F)}\\Phi (\\xi )\\prod _{\\beta \\in \\Delta _P}|\\xi |^{\\alpha -r_\\beta }_\\beta <\\infty $ where $V^\\circ $ is defined as in (REF ).", "Recall the function $w(s_0)$ from (REF ).", "The following theorem is the precise version of Theorem REF : Theorem 5.2 Let $f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ .", "Assume $F$ is a function field, $F$ is a number field, Conjecture REF is valid, and $f$ is $K_M$ -finite, or $F$ is a number field and Conjecture REF is valid.", "One has $&\\sum _{x \\in Y_{P}(F)}f(x )+\\sum _{y \\in P^{\\prime \\mathrm {der}}(F) \\backslash Y(F)}\\sum _{\\chi } \\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\\\mathrm {Re}(s_0)\\ge 0\\end{array}}\\frac{w(s_0)}{\\kappa _F} \\mathrm {Res}_{s=s_0}E^*(I,\\iota _y^*(\\mathcal {F}_{P|P^*}(f))^*_{\\chi _{-s}})\\\\&=\\sum _{x^* \\in Y_{P^*}(F)}\\mathcal {F}_{P|P^*}(f)(x ^*)+\\sum _{y \\in P^{\\prime \\mathrm {der}}(F) \\backslash Y(F)}\\sum _{\\chi } \\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\\\mathrm {Re}(s_0)\\ge 0\\end{array}} \\frac{w(s_0)}{\\kappa _F}\\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f)_{\\chi _s})$ where the sums on $\\chi $ are over all characters of $A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times .$ Moreover the sums over $x$ and $x^*$ and the triple sums over $(y,\\chi ,s_0)$ are absolutely convergent.", "Remark We point out that conjectures REF and REF are assertions about functions in $\\mathcal {S}(X_{P \\cap M_{\\beta _0}}(\\mathbb {A}_F)),$ whereas $f$ lies in $\\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F)).$ We use Lemma REF to write $ \\begin{split}\\sum _{x\\in Y_{P}(F)}f(x ) &=\\sum _{ y }\\sum _{\\gamma \\in X_{P \\cap M_{\\beta _0}}^{\\circ }(F)}f(\\iota _y(\\gamma ))=\\sum _{ y}\\sum _{\\gamma \\in X_{P \\cap M_{\\beta _0}}^{\\circ }(F)}\\iota _y^*(f)(\\gamma ).\\end{split}$ Here and throughout the proof all sums over $y$ are over $P^{\\prime \\mathrm {der}}(F) 
\\backslash Y(F)$ .", "By Proposition REF we have $\\iota _y^*(f) \\in \\mathcal {S}(X_{P \\cap M_{\\beta _0}}(\\mathbb {A}_F)).$ There is a natural map $(M \\cap M_{\\beta _0})^{\\mathrm {ab}} \\rightarrow M^{\\mathrm {ab}}$ and hence $(M \\cap M_{\\beta _0})^{\\mathrm {ab}}(\\mathbb {A}_F)$ acts on $\\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F)).$ With respect to this action $\\iota _y$ is $(M \\cap M_{\\beta _0})^{\\mathrm {ab}}(F)$ -equivariant.", "In particular if (2) is valid the assumption that $f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ is $K_M$ -finite implies $\\iota _y(f)$ is finite under the maximal compact subgroup of $(M \\cap M_{\\beta _0})^{\\mathrm {ab}}(\\mathbb {A}_F).$ Hence applying Theorem REF we see that the above is $&-\\sum _{y }\\Bigg (\\sum _{\\chi } \\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\\\mathrm {Re}(s_0)\\ge 0\\end{array}} \\frac{w(s_0)}{\\kappa _F}\\mathrm {Res}_{s=s_0}E^*(I,(\\mathcal {F}_{P \\cap M_{\\beta _0}|P^* \\cap M_{\\beta _0}}(\\iota _y^*(f)))^*_{\\chi _{-s}})\\Bigg )\\\\&+\\sum _{ y }\\Bigg (\\sum _{\\gamma \\in X_{P^* \\cap M_{\\beta _0}}^{\\circ }(F)}\\mathcal {F}_{P \\cap M_{\\beta _0}|P^* \\cap M_{\\beta _0}}(\\iota _y^*(f))(\\gamma ) +\\sum _{\\chi }\\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\\\mathrm {Re}(s_0)\\ge 0\\end{array}} \\frac{w(s_0)}{\\kappa _F}\\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f)_{\\chi _s})\\Bigg ).$ By Proposition REF this is $&-\\sum _{y }\\Bigg (\\sum _{\\chi } \\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\\\mathrm {Re}(s_0)\\ge 0\\end{array}} \\frac{w(s_0)}{\\kappa _F}\\mathrm {Res}_{s=s_0}E^*(I,\\iota _y^*(\\mathcal {F}_{P |P^*}(f))^*_{\\chi _{-s}})\\Bigg )\\\\&+\\sum _{ y }\\Bigg (\\sum _{\\gamma \\in X_{P^* \\cap M_{\\beta _0}}^{\\circ }(F)}\\iota _y^*(\\mathcal {F}_{P|P^*}(f))(\\gamma ) +\\sum _{\\chi }\\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\\\mathrm {Re}(s_0)\\ge 0\\end{array}} \\frac{w(s_0)}{\\kappa _F}\\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f)_{\\chi _s})\\Bigg ).$ By Lemma REF $\\sum _{ y }\\sum _{\\gamma \\in X_{P^* \\cap M_{\\beta _0}}^{\\circ }(F)}\\iota _y^*(\\mathcal {F}_{P |P^*}(f))( \\gamma )=\\sum _{x^* \\in Y_{P^*}(F)}\\mathcal {F}_{P |P^*}(f)( x^*).$ This completes the proof of the identity in the theorem.", "For the absolute convergence statement in the theorem, we observe that (REF ) and (REF ) are absolutely convergent by Lemma REF .", "By the argument above and the functional equation in Theorem REF we see that $\\sum _{\\gamma \\in X_{P \\cap M_{\\beta _0}}^{\\circ }(F)}&f(\\iota _y(\\gamma ))\\\\ &=\\sum _{\\gamma \\in X_{P^* \\cap M_{\\beta _0}}^{\\circ }(F)}\\mathcal {F}_{P \\cap M_{\\beta _0}|P^* \\cap M_{\\beta _0}}(\\iota _y^*(f))(\\gamma )+\\frac{1}{\\kappa _F}\\sum _{\\chi } \\sum _{s_0 \\in \\mathbb {C}} \\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f)_{\\chi _s})$ for all $y.$ We deduce that the sum over $y$ in $\\frac{1}{\\kappa _F}\\sum _{y} \\left(\\sum _{\\chi } \\sum _{s_0 \\in \\mathbb {C}} \\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f)_{\\chi _s})\\right)$ is absolutely convergent as well.", "The support of the sum over $\\chi $ is finite and under assumptions (1) or (2) it depends only on the $K_M$ -type of $f.$ Under assumption (3) it depends only on the $K_M \\cap M(\\mathbb {A}_F^\\infty )$ -type of $f.$ With this in mind, for any fixed $s_0 \\in \\mathbb {C}$ we can use Lemma REF to choose an $f^{\\prime } \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ with the same $K_M$ -type as $f$ so that $&\\sum _{y } \\left(\\sum _{\\chi } \\sum 
_{s_0 \\in \\mathbb {C}} \\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f^{\\prime })_{\\chi _s})\\right)=\\sum _{y} \\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f)_{\\chi _s}).$ We deduce that the right hand side of (REF ) is absolutely convergent, and the same is true if we replace $f$ by $\\mathcal {F}_{P|P^*}(f)$ by Proposition REF and Theorem REF .", "Let $(f_1,f_2) \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F)) \\times \\mathcal {S}(Y_{P^*,P^{\\prime }}(\\mathbb {A}_F)).$ Recall the definition of the generalized Schubert Eisenstein series $E_{Y_P}(f_{1\\chi _s})$ and $E_{Y_{P^*}}^*(f_{2\\chi _s}^*)$ of (REF ).", "Using lemmas REF and REF and standard arguments one obtains the following lemma: Lemma 5.3 For $f_1 \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ one has $\\sum _{x \\in M^{\\mathrm {ab}}(F) \\backslash Y_{P}(F)}\\int _{M^{\\mathrm {ab}}(\\mathbb {A}_F)} \\delta _{P}^{1/2}(m)|\\chi _s(\\omega _P(m))f_1(m^{-1}x)|dm<\\infty $ for $\\mathrm {Re}(s_{\\beta _0})$ sufficiently large.", "In particular, $E_{Y_P}(f_{1\\chi _s})$ converges absolutely for $\\mathrm {Re}(s_{\\beta _0})$ sufficiently large.", "Similarly, for $f_2 \\in \\mathcal {S}(Y_{P^*,P^{\\prime }}(\\mathbb {A}_F))$ the series $E_{Y_{P^*}}^*(f_{2\\chi _s}^*)$ converges absolutely for $\\mathrm {Re}(s_{\\beta _0})$ sufficiently small.", "$\\Box $ With notation as in Lemma REF , let $A^{\\beta _0}:=\\ker \\omega _{\\beta _0}:M^{\\mathrm {ab}} \\longrightarrow \\mathbb {G}_m$ Thus $M^{\\mathrm {ab}}$ is the internal direct sum of $\\beta _0^\\vee (\\mathbb {G}_m)$ and $A^{\\beta _0}$ by Lemma REF .", "Recall that $M^{\\prime }$ is the unique Levi subgroup of $P^{\\prime }$ containing $M$ .", "Applying Lemma REF with $P$ replaced by $P^{\\prime }$ we see that the canonical map $M^{\\mathrm {ab}} \\rightarrow M^{\\prime \\mathrm {der}}$ restricts to induce an isomorphism $ A^{\\beta _0} \\tilde{\\longrightarrow } M^{\\prime \\mathrm {ab}}.$ We have a Haar measure on $\\mathbb {A}_F^\\times $ and we endow $A^{\\beta _0}(\\mathbb {A}_F)$ by declaring that the product measure on $A^{\\beta _0}(\\mathbb {A}_F)\\beta _0^\\vee (\\mathbb {A}_F^\\times )$ is our Haar measure on $M^{\\mathrm {ab}}(\\mathbb {A}_F).$ As usual let $(\\mathbb {A}_F^\\times )^1:=\\lbrace t \\in \\mathbb {A}_F^\\times :|t|=1\\rbrace , \\quad (\\mathbb {A}_F^\\times )^+:=\\lbrace t \\in \\mathbb {A}_F^\\times :|t|>1\\rbrace , \\quad (\\mathbb {A}_F^\\times )^-:=\\lbrace t \\in \\mathbb {A}_F^\\times :|t|<1\\rbrace .$ For algebraic $F$ -groups $G$ we set $[G]:=G(F) \\backslash G(\\mathbb {A}_F)$ and define $[\\mathbb {G}_m]^1:=F^\\times \\backslash (\\mathbb {A}_F^\\times )^1, \\quad [\\mathbb {G}_m]^ \\pm :=F^\\times \\backslash (\\mathbb {A}_F^\\times )^\\pm .$ We endow $(\\mathbb {A}_F^\\times )^1$ with the unique measure so that $|\\cdot |:\\mathbb {A}_F^\\times /(\\mathbb {A}_F^\\times )^1 \\tilde{\\longrightarrow } \\mathrm {Im}(|\\cdot |) \\subseteq \\mathbb {R}_{>0}$ is measure preserving.", "Here in the number field case $\\mathrm {Im}(|\\cdot |)=\\mathbb {R}_{>0}$ with its usual Haar measure $\\frac{dt}{t}$ (where $dt$ the Lebesgue measure on $\\mathbb {R}$ ).", "In the function field case the image is discrete and endowed with the counting measure.", "The following technical lemma will be used in the proof of Theorem REF : Lemma 5.4 Let $f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F)).$ Assume $F$ is a function field, $F$ is a number field, Conjecture REF is valid and $f$ is $K_M$ -finite, or $F$ is a number field and Conjecture REF 
is valid.", "For every $\\chi \\in (\\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times })^{\\Delta _P},$ $\\chi ^{\\prime } \\in \\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times }$ and $s_0 \\in \\mathbb {C}$ $ &\\sum _{y } \\int _{[\\mathbb {G}_m]^1 \\times A^{\\beta _0}(\\mathbb {A}_F)}\\delta _{P}^{1/2}(\\beta _0^\\vee (t)m)|\\chi _s(t\\omega _P(m))\\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)_{\\chi ^{\\prime }_{z}})|dmd^\\times t$ is finite.", "Moreover for $\\mathrm {Re}(s_{\\beta _0})$ sufficiently large $ &\\sum _{y}\\int _{[\\mathbb {G}_m]^- \\times A^{\\beta _0}(\\mathbb {A}_F)}\\delta _{P}^{1/2}(\\beta _0^\\vee (t)m)|\\chi _s(t\\omega _P(m)) \\mathrm {Res}_{z=s_0}E^*(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)^*_{\\chi ^{\\prime }_{ z}})|dm d^\\times t$ converges.", "The expression $ &\\sum _{y } \\int _{ [\\mathbb {G}_m]^- \\times A^{\\beta _0}(\\mathbb {A}_F)}\\delta _{P}^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)_{\\chi ^{\\prime }_{z}})dm d^\\times t$ originally defined for $\\mathrm {Re}(s_{\\beta _0})$ large, admits a meromorphic continuation to $s \\in \\mathbb {C}^{\\Delta _P}.$ Here all sums over $y$ are over $ P^{\\prime }(F) \\backslash Y(F)$ .", "The group $[\\mathbb {G}_m]^1$ is compact.", "Using this observation and the argument proving Lemma REF we see that for all $f_1 \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ and $f_2 \\in \\mathcal {S}(Y_{P^*,P^{\\prime }}(\\mathbb {A}_F))$ the integrals $ \\begin{split}&\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m) \\sum _{x \\in Y_{P}(F)}|f_1((\\beta _0^\\vee (t)m)^{-1}x)|dm d^\\times t\\\\& \\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\delta _{P}^{1/2}(\\beta _0^\\vee (t)m) \\sum _{x^* \\in Y_{P^*}(F)}|f_2((\\beta _0^\\vee (t)m)^{-1}x^*)|dmd^\\times t, \\end{split}$ are convergent.", "Let $(f_\\infty ,f^\\infty ) \\in \\mathcal {S}(Y_{P,P^{\\prime }}(F_\\infty )) \\times \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F^\\infty ))$ , let $(t,m) \\in \\mathbb {A}_F^\\times \\times T^{\\beta _0}(\\mathbb {A}_F)$ , and let $y \\in Y(F).$ Arguing as in the proof of Theorem REF for each $\\chi ^{\\prime } \\in \\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times }$ and $s_0 \\in \\mathbb {C}$ there is an $f^{\\prime \\infty }\\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F^\\infty ))$ such that $ \\begin{split}&\\sum _{x \\in X_{P \\cap M_{\\beta _0}(F)}}\\iota _y(f_\\infty f^{\\prime \\infty })((\\beta _0^\\vee (t)m)^{-1} x)- \\sum _{x^*\\in X_{P^* \\cap M_{\\beta _0}}(F)}\\mathcal {F}_{P \\cap M_{\\beta _0} |P^* \\cap M_{\\beta _0}}\\iota _y^*(L(\\beta _0^{\\vee }(t)m)(f_\\infty f^{\\prime \\infty }))(x^*)\\\\&=\\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)(f_\\infty f^\\infty ))_{\\chi ^{\\prime }_{z }}) \\end{split}$ Multiplying both sides of (REF ) by $\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))$ , summing over $y \\in P^{\\prime \\mathrm {der}}(F) \\backslash G(F)$ and then integrating over $[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]$ we arrive at $ \\begin{split}&\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\sum _{y}\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\Bigg (\\sum _{x \\in X_{P \\cap M_{\\beta _0}(F)}}\\iota _y(f_\\infty f^{\\prime \\infty })((\\beta _0^\\vee (t)m)^{-1} x)\\\\&- \\sum _{x^*\\in X_{P^* \\cap M_{\\beta _0}}(F)}\\mathcal 
{F}_{P \\cap M_{\\beta _0} |P^* \\cap M_{\\beta _0}}\\iota _y^*(L(\\beta _0^{\\vee }(t)m)(f_\\infty f^{\\prime \\infty }))(x^*)\\Bigg ) dm d^\\times t\\\\&=\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m))\\chi _s(t\\omega _P(m))\\sum _{y} \\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)(f_\\infty f^\\infty ))_{\\chi ^{\\prime }_{ z }})dmd^\\times t \\end{split}$ To prove that the bottom sum and integral in (REF ) converge absolutely it suffices to prove that the top sum and integral converge absolutely.", "Using Lemma REF and Proposition REF this reduces to the assertion that the integrals in (REF ) are convergent.", "Hence $ \\int \\sum _{y \\in P^{\\prime \\mathrm {der}}(F) \\backslash Y(F)}\\delta _P^{1/2}(\\beta _0^\\vee (t)m))|\\chi _s(t\\omega _P(m)) \\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)(f_\\infty f^\\infty ))_{\\chi ^{\\prime }_{z }})|dmd^\\times t$ converges, where the integral is over $[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}].$ Using (REF ) we see that (REF ) unfolds to (REF ).", "This completes the proof of the first assertion of the lemma.", "Let $n$ be the maximal order of the pole of $E(I,f^{\\prime }_{\\chi ^{\\prime }_{z}})$ at $z=s_0$ as $f^{\\prime } \\in \\mathcal {S}(X_{P \\cap M_{\\beta _0}}(\\mathbb {A}_F))$ varies.", "Recall the continuity assertion for $\\iota _y^*$ contained in Proposition REF .", "Using the first absolute convergence assertion of the lemma and Lemma REF there are continuous linear functionals $\\Lambda _{i,y}((\\cdot )f^\\infty ):\\mathcal {S}(Y_{P,P^{\\prime }}(F_\\infty )) \\longrightarrow \\mathbb {C}$ for $0 \\le i \\le n-1$ such that $ \\begin{split}&\\sum _{y \\in P^{\\prime }(F) \\backslash Y(F)}\\int _{ A^{\\beta _0}(\\mathbb {A}_F)}\\delta _{P}^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m)) \\mathrm {Res}_{z=s_0}E^*(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)^*_{\\chi ^{\\prime }_{ z}})dm \\\\&= \\sum _{y \\in P^{\\prime }(F) \\backslash Y(F) } \\int _{A^{\\beta _0}(\\mathbb {A}_F)} \\left( \\sum _{i=0}^{n-1}\\chi _s(t\\omega _P(m))|t|^{-s_{0}}(\\log |t|)^i\\overline{\\chi }(t) \\Lambda _{i,y}(L(m)(f_\\infty f^\\infty ))\\right) dm\\end{split}$ for all $t \\in \\mathbb {A}_F^\\times .$ Here when $F$ is a function field we give $\\mathcal {S}(Y_{P,P^{\\prime }}(F_\\infty )$ the discrete topology.", "Varying $f^\\infty $ and using Lemma REF we deduce that $\\sum _{y \\in P^{\\prime }(F) \\backslash Y(F)} \\int _{A^{\\beta _0}(\\mathbb {A}_F)}|\\chi _s(\\omega _P(m)\\Lambda _{i,y}(L(m)(f_\\infty f^\\infty ))|dm$ is convergent for each $i.$ Using this fact we see that for $\\mathrm {Re}(s_{\\beta _0})$ sufficiently large the integral over $t \\in [\\mathbb {G}_m]^-$ of (REF ) is $\\sum _{y \\in P^{\\prime }(F) \\backslash Y(F) } \\int _{A^{\\beta _0}(\\mathbb {A}_F)} \\left( \\int _{[\\mathbb {G}_m]^-}\\sum _{i=0}^{n-1}\\chi _s(t\\omega _P(m))|t|^{-s_{0}}(\\log |t|)^i\\overline{\\chi }(t)d^\\times t \\Lambda _{i,y}(L(m)(f_\\infty f^\\infty ))\\right) dm;$ in other words, it is permissible to bring the integral over $t$ inside the other integral and the sum.", "The integrals over $t$ may be evaluated in an elementary manner, yielding the remaining assertions of the lemma.", "Assume for the moment that $f$ is $K_M$ -finite and that $\\mathrm {Re}(s_{\\beta _0})$ is large.", "Then $E_{Y_P}(f_{\\chi _s})=&\\int _{[M^{\\mathrm {ab}}]}\\delta _P^{1/2}(m)\\chi _s(\\omega _P(m))\\sum _{x \\in Y_{P}(F)}f(m^{-1} x )dm\\\\=&\\int _{[\\mathbb {G}_m]^+ \\times [A^{\\beta 
_0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x \\in Y_{P}(F)}f((\\beta _0^\\vee (t)m)^{-1}x )dmd^\\times t\\\\&+\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x\\in Y_{P}(F)}f((\\beta _0^\\vee (t)m)^{-1}x )dmd^\\times t\\\\&+\\int _{[\\mathbb {G}_m]^- \\times [A^{\\beta _0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x \\in Y_{P}(F)}f((\\beta _0^\\vee (t)m)^{-1}x )dmd^\\times t.$ Here $d^\\times t$ is the measure on $[\\mathbb {G}_m].$ The subgroup $ [\\mathbb {G}_m]^1<[\\mathbb {G}_m]$ has nonzero measure with respect to $d^\\times t$ if and only if $F$ is a function field.", "By Lemma REF one has $ \\mathcal {F}_{P|P^*} \\circ L(m)=\\delta _{P^* \\cap M^{\\beta _0}}(m) L(m) \\circ \\mathcal {F}_{P|P^*}.$ Moreover $\\delta _{P}^{1/2}(m)\\delta _{P^* \\cap M}(m)=\\delta _{P^*}^{1/2}(m).$ With this in mind we apply the Poisson summation formula of Theorem REF to $\\tfrac{1}{2}$ the second integral and the third integral above to see that $E_{Y_P}(f_{\\chi _s})$ is the sum of (REF ) and (REF ) below: $&\\int _{[\\mathbb {G}_m]^+ \\times [A^{\\beta _0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x \\in Y_{P}(F)}f((\\beta _0^\\vee (t)m)^{-1}x )dm d^\\times t\\\\ \\nonumber &+\\frac{1}{2}\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x \\in Y_{P}(F)}f((\\beta _0^\\vee (t)m)^{-1}x )dmd^\\times t\\\\ \\nonumber &+\\frac{1}{2}\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\delta _{P^*}^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x \\in Y_{P^*}(F)}\\mathcal {F}_{P|P^*}(f)((\\beta _0^\\vee (t)m)^{-1} x )dm d^\\times t\\\\ \\nonumber &+\\int _{[\\mathbb {G}_m]^- \\times [A^{\\beta _0}]}\\delta _{P^*}^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x \\in Y_{P^*}(F)}\\mathcal {F}_{P|P^*}(f)((\\beta _0^\\vee (t)m)^{-1}x )dm d^\\times t$ and $\\frac{1}{2}\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\chi _s(t\\omega _P(m))&\\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\ \\mathrm {Re}(s_0) \\ge 0\\end{array}}\\frac{w(s_0)}{\\kappa _F}\\Bigg ( \\sum _{y,\\chi ^{\\prime }} \\delta ^{1/2}_P(\\beta _0^\\vee (t)m) \\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)_{\\chi _z^{\\prime }})\\\\&-\\sum _{y,\\chi ^{\\prime }} \\delta ^{1/2}_{P^*}(\\beta _0^\\vee (t)m) \\mathrm {Res}_{z=s_0}E^*(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)\\mathcal {F}_{P|P^*}(f))^*_{\\chi ^{\\prime }_{-z}})\\Bigg )dmd^\\times t \\nonumber \\\\+\\int _{[\\mathbb {G}_m]^- \\times [A^{\\beta _0}]}\\chi _s(t\\omega _{P}(m))&\\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\ \\mathrm {Re}(s_0)\\ge 0\\end{array}}\\frac{w(s_0)}{\\kappa _F}\\Bigg (\\sum _{y,\\chi ^{\\prime }} \\delta ^{1/2}_P(\\beta _0^\\vee (t)m) \\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)_{\\chi ^{\\prime }_z})\\nonumber \\\\&-\\sum _{y,\\chi ^{\\prime }} \\delta _{P^*}^{1/2}(\\beta _0^\\vee (t)m) \\mathrm {Res}_{z=s_0}E^*(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)\\mathcal {F}_{P|P^*}(f))^*_{\\chi _{-z}^{\\prime }}) \\nonumber \\Bigg )dm d^\\times t$ where the sums over $y$ are over $P^{\\prime \\mathrm {der}}(F) \\backslash Y(F)$ and the sums over $\\chi ^{\\prime }$ are over $\\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times }.$ We recall that the sums over $s_0$ have finite support in a set depending only on the $K_M$ type of $f$ by Conjecture REF in the 
number field case and the fact that $E(I,f_{\\chi _s})$ is rational in the sense of [36] in the function field case [36].", "Since $\\omega _{P^*}(\\beta _0^{\\vee }(t))=\\omega _{P}(\\beta _0^\\vee (t))^{-1}$ Lemma REF implies that the lower two integrals in (REF ) converge for $\\mathrm {Re}(s_{\\beta _0})$ small enough.", "Thus they converge for all $s$ .", "Since we already know that the upper two converge absolutely for all $s$ we deduce that (REF ) is a holomorphic function of $s.$ Now consider (REF ).", "We have the functional equation $ E(I,\\iota _y^*(L(m)f)_{\\chi _{\\beta _0 s}})=\\delta _{P^* \\cap M_{\\beta _0}}(m)E^*(I,\\iota _y^*(L(m)\\mathcal {F}_{P|P^*}(f))_{\\chi _{\\beta _0s}})$ by Theorem REF and (REF ).", "Using Lemma REF and (REF ) to justify switching sums and integrals, we deduce that for $\\mathrm {Re}(s_{\\beta _0})$ sufficiently large (REF ) is equal to $\\frac{1}{2}\\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\ \\mathrm {Re}(s_0)\\ge 0\\end{array}}&\\frac{w(s_0)}{\\kappa _F}\\sum _{y} \\int _{[\\mathbb {G}_m]^1 \\times A^{\\beta _0}(\\mathbb {A}_F)}\\chi _s(t\\omega _P(m))\\Bigg ( \\delta _P^{1/2}(t\\omega _P(m))\\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)_{\\chi _{\\beta _0z}})\\\\&- \\delta _{P^*}^{1/2}(t\\omega _P(m))\\mathrm {Res}_{s=s_0}E^*(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)\\mathcal {F}_{P|P^*}(f))^*_{(\\chi _{\\beta _0})_{-z}})\\Bigg )dm d^\\times t \\nonumber \\\\+\\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\ \\mathrm {Re}(s_0) \\ge 0\\end{array}}&\\frac{w(s_0)}{\\kappa _F}\\sum _y\\int _{[\\mathbb {G}_m]^- \\times A^{\\beta _0}(\\mathbb {A}_F)}\\chi _s(t\\omega _{P}(m))\\Bigg (\\delta _{P}^{1/2}(\\beta _0^\\vee (t)m)\\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)_{\\chi _{\\beta _0z}}) \\nonumber \\\\&- \\delta _{P^*}^{1/2}(\\beta _0^\\vee (t)m)\\mathrm {Res}_{z=s_0}E^*(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)\\mathcal {F}_{P|P^*}(f))^*_{(\\chi _{\\beta _0})_{-z}})\\Bigg )dmd^\\times t .", "\\nonumber $ where the sums on $y$ are now over $P^{\\prime }(F) \\backslash G(F).$ Here we are using (REF ) to unfold the integral.", "By Lemma REF and (REF ) the expression (REF ) is holomorphic for $\\mathrm {Re}(s)$ large, and admits a meromorphic continuation to the plane.", "Thus under the assumption that $f$ that is $K_M$ finite, we have proved the meromorphic continuation statement in the theorem.", "By a symmetric argument we deduce that the sum of (REF ) and (REF ) is $E^*_{Y_{P^*}}(\\mathcal {F}_{P|P^*}(f)_{\\chi _s}^*).$ This proves the functional equation $E_{Y_P}(f_{\\chi _s})=E^*_{Y_{P^*}} (\\mathcal {F}_{P|P^*}(f)_{\\chi _s}^*).$ Thus far we have assumed that $f$ is $K_M$ -finite.", "To remove this assumption we note that for any $f \\in \\mathcal {S}(X_{P}(\\mathbb {A}_F))$ and any $\\chi \\in \\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times }$ there is a $K_M$ -finite $f^{\\prime } \\in \\mathcal {S}(X_P(\\mathbb {A}_F))$ such that $f_{\\chi _s}=f_{\\chi _s}^{\\prime }$ .", "Moreover, using Lemma REF , we can choose $f^{\\prime }$ so that $\\mathcal {F}_{P|P^*}(f)_{\\chi _s}^*=\\mathcal {F}_{P|P^*}(f^{\\prime })_{\\chi _s}^*$ .", "This allows us to remove the $K_M$ -finiteness assumption.", "The proof of Theorem REF shows that the poles of $E_{Y_P}(f_{\\chi _s})$ are controlled by the poles of $E(I,f^{\\prime }_{\\chi _{\\beta _0 s}})$ for $f^{\\prime } \\in \\mathcal {S}(X_{P \\cap M_{\\beta _0}}(\\mathbb {A}_F)).$ In particular it is easy to deduce the following corollary from 
the proof: Corollary 5.5 Under the hypotheses of Theorem REF , for fixed $(s_{\beta }) \in \mathbb {C}^{\Delta _{P^{\prime }}}$ the order of the pole of $E_{Y_P}(f_{\chi _s})$ at $s_{\beta _0}=s_0$ is no greater than the maximum of the orders of the pole of $E(I,f^{\prime }_{\chi _{\beta _0z}})$ at $z=s_0$ as $f^{\prime }$ ranges over $\mathcal {S}(X_{P \cap M_{\beta _0}}(\mathbb {A}_F)).$ $\Box $ On the poles of degenerate Eisenstein series We assume for this section that $F$ is a number field.", "Our goal here is to verify conjectures REF and REF in some cases.", "Without loss of generality we take $G=M_{\beta _0}$ , thus $P \le G$ is now a maximal parabolic subgroup containing our fixed Borel $B$ and $P^*$ is the opposite parabolic.", "Let $Q$ be the unique maximal parabolic subgroup containing $B$ that is conjugate to $P^*.$ Let $K \le G(\mathbb {A}_F)$ be a maximal compact subgroup and let $\mathbb {C}_+:=\lbrace z \in \mathbb {C}:\mathrm {Re}(z)>0\rbrace .$ Lemma 6.1 To prove Conjecture REF it suffices to show that for each character $ \chi \in \widehat{A_{\mathbb {G}_m} F^\times \backslash \mathbb {A}_F^\times }$ there is a finite set $\Upsilon (\chi ) \subset \mathbb {C}_+$ such that if $E(g,\Phi ^{\chi _s}_P)$ or $E(g,\Phi ^{\chi _s}_Q)$ has a pole at $s=s_0$ with $\mathrm {Re}(s_0)>0$ for any holomorphic $K$ -finite section $\Phi ^{\chi _s}_P \in I_P(\chi _s)$ or $\Phi ^{\chi _s}_Q \in I_Q(\chi _s)$ then $s_0 \in \Upsilon (\chi ).$ By the observations in the proof of Lemma REF , if $a_{P|P}(\chi _s)$ has a pole at $s=s_0$ for $\mathrm {Re}(s_0) \ge 0$ then $s_0=0.$ Moreover, the order of the pole is bounded by an integer depending only on $P$ and $G.$ Thus for any $f \in \mathcal {S}(X_P(\mathbb {A}_F))$ Theorem REF implies that the only possible pole of $E(g,f_{\chi _s})$ on the line $\mathrm {Re}(s)=0$ is at $s=0$ and its order is bounded in a sense depending only on $P$ and $G.$ Now consider poles of $E(g,f_{\chi _s})$ for $\mathrm {Re}(s)<0.$ Using notation from the proof of Theorem REF and arguing as in that proof we have $E(g,f_{\chi _s})=E^*(g,\mathcal {F}_{P|P^*}(f)^*_{\chi _s})=E_Q(g,j(\mathcal {F}_{P|P^*}(f)^*_{\chi _s})).$ Thus the order of the pole of $E(g,f_{\chi _s})$ at $s_0$ is equal to the order of the pole of $E_Q(g,j(\mathcal {F}_{P|P^*}(f)^*_{\chi _{s}}))$ at $s_0.$ Since $\mathcal {F}_{P|P^*}(f) \in \mathcal {S}(X_{P^*}(\mathbb {A}_F))$ and $a_{P^*|P^*}((\chi _s)^{-1})$ is holomorphic for $\mathrm {Re}(s)<0$ by Lemma REF we have that $\mathcal {F}_{P|P^*}(f)^*_{\chi _s} \in I_{P^*}^*(\chi _s)$ is a holomorphic section for $\mathrm {Re}(s)<0$ .", "This implies $j(\mathcal {F}_{P|P^*}(f)^*_{\chi _{s}})$ is a holomorphic section of $I_Q((\chi ^{-1})_{-s})$ for $\mathrm {Re}(s)<0.$ The lemma follows.", "The proof of the following lemma is analogous: Lemma 6.2 To prove Conjecture REF it suffices to show that there is a finite set $\Upsilon \subset \mathbb {C}_+$ and an integer $n>0$ such that if $E(g,\Phi ^{\chi _s}_P)$ or $E(g,\Phi ^{\chi _s}_Q)$ has a pole at $s=s_0$ with $\mathrm {Re}(s_0)>0$ for any $\chi \in \widehat{A_{\mathbb {G}_m} F^\times \backslash \mathbb {A}_F^\times }$ and holomorphic $K$ -finite sections $\Phi ^{\chi _s}_P \in I_P(\chi _s)$ or $\Phi ^{\chi _s}_Q \in I_Q(\chi _s)$ then $s_0 \in \Upsilon $ and $\chi ^n=1.$ $\Box $ Theorem 6.3 Suppose that $G=\mathrm {Sp}_{2n}$ and that $ P$ is the Siegel parabolic, that is, the parabolic 
subgroup with Levi subgroup isomorphic to $\\mathrm {GL}_n.$ Then Conjecture REF is true.", "We use the results of [26].", "We point out that $a_{P|P}(\\chi _s)=a_{I_{2n}}(\\chi _s)$ in the notation of loc. cit.", "by [16].", "Thus the theorem follows from Lemma REF , the fact that $a_{P|P}(\\chi _s)$ is holomorphic and nonzero for $\\mathrm {Re}(s) > 0$ by Lemma REF , and the Corollary of [26].", "Theorem 6.4 Suppose that $G=\\mathrm {SL}_n$ .", "Then Conjecture REF is true.", "This can be derived using Lemma REF and the same arguments as those proving [24].", "The proof comes down to two statements explained in [24].", "The first is the statement that a certain quotient of completed $L$ -functions depending on $\\chi $ has only finitely many poles for $\\mathrm {Re}(s)>0$ .", "The second is that a certain normalized intertwining operator has only finitely many poles.", "These are proven exactly as in loc. cit.", "The required facts on induced representations of $\\mathrm {GL}_2$ may be found in [28].", "Much of the work towards proving Conjecture REF for arbitrary parabolic subgroups of symplectic groups is contained in [23], and much of the work towards proving Conjecture REF for general linear groups is contained in [24].", "The additional steps necessary seem to require careful analysis of the reducibility points of local principal series representations at the archimedean places." ], [ "The Poisson summation formula on $X_{P \\cap M_{\\beta _0}}(F)$", "The Poisson summation formula on $X_{P \\cap M_{\\beta _0}}(F)$ was established under some local assumptions on the test functions involved in [8] with a slightly different definition of the Schwartz space.", "In this section we establish it in general following the arguments of [19].", "To ease notation, for this section only we assume that $P \\le M_{\\beta _0},$ which implies $M_{\\beta _0}=G.$ This amounts to assuming that $G$ is simple and $P$ is a maximal parabolic subgroup.", "Thus $X_{P \\cap M_{\\beta _0}}=X_P=X_{P,PIG}$ where $I \\in G(F)$ is the identity.", "The construction of the Schwartz space and the Fourier transform given in the previous section reduces to the construction of [16] in this case.", "We observe that $\\Delta _P=\\lbrace \\beta _0\\rbrace $ in the setting of this section.", "Thus we drop $\\Delta _P$ and $\\beta _0$ from notation when no confusion is likely.", "For a quasi-character $\\chi :A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times \\rightarrow \\mathbb {C}^\\times ,$ $s \\in \\mathbb {C}$ , $f_1 \\in \\mathcal {S}(X_P(\\mathbb {A}_F))$ and $f_2 \\in \\mathcal {S}(X_{P^*}(\\mathbb {A}_F))$ we have degenerate Eisenstein series $ \\begin{split}E(g,f_{1\\chi _s}):&=\\sum _{\\gamma \\in P(F) \\backslash G(F)}f_{1\\chi _s}(\\gamma g)\\\\E^*(g,f^*_{2\\chi _s}):&=\\sum _{\\gamma \\in P^*(F) \\backslash G(F)}f^*_{2\\chi _s}(\\gamma g).", "\\end{split}$ Here $\\chi _s=\\chi |\\cdot |^s$ .", "It is well-known that these converge absolutely for $\\mathrm {Re}(s)$ large enough (resp.", "small enough).", "For the proof of absolute convergence in a special case see [19]; the proof generalizes to our setting.", "Let $ a_{P|P}(\\chi _s):=\\prod _v a_{P|P}(\\chi _{vs}).$ Lemma 4.1 Let $\\chi \\in \\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times }$ .", "The function $a_{P|P}(\\chi _s)$ is holomorphic and nonzero for $\\mathrm {Re}(s)> 0.$ It admits a meromorphic continuation to the plane.", "Moreover there is an integer $n$ depending only on $G$ such that $a_{P|P}(\\chi _s)$ is 
holomorphic if $\\chi ^n \\ne 1.$ The first claim follows from the same remarks proving Lemma REF .", "Since $a_{P|P}(\\chi _s)$ is a product of (completed) Hecke $L$ -functions the second two assertions are clear.", "Theorem 4.2 (Langlands) Let $f \\in \\mathcal {S}(X_P(\\mathbb {A}_F))$ be finite under a maximal compact subgroup of $G(\\mathbb {A}_F).$ For a character $\\chi :A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times \\rightarrow \\mathbb {C}^\\times $ the Eisenstein series $E(g,f_{\\chi _s})$ has a meromorphic continuation to the $s$ -plane and admits a functional equation $E(g,f_{\\chi _s})=E^*(g,\\mathcal {F}_{P|P^*}(f)_{\\chi _s}^*).$ If $\\mathrm {Re}(z)=0$ then the order of the pole of $E(g,f_{\\chi _s})$ at $s=z$ is bounded by the order of the pole of $a_{P|P}(\\chi _s)$ at $s=z.$ To ease translation with the manner the theory is usually phrased, let $Q$ be the unique parabolic subgroup containing $B$ that is conjugate to $P^*,$ and let $M_Q$ be the Levi subgroup of $P$ containing $T.$ Choose $w \\in G(F)$ so that $w P^*w^{-1}=Q$ and $wMw^{-1}=M_Q.$ There is an isomorphism $j:C^\\infty (X_{P^*}^\\circ (F)) &\\tilde{\\longrightarrow } C^\\infty (X_Q^\\circ (F))\\\\f &\\tilde{\\longmapsto } (x \\mapsto f(w^{-1}x)).$ For suitable sections $\\Phi ^{\\chi _s} \\in I_Q(\\chi _s)$ and $g \\in G(\\mathbb {A}_F)$ we can then form the Eisenstein series $E_Q(g,\\Phi ^{\\chi _s})=\\sum _{\\gamma \\in Q(F) \\backslash G(F)}\\Phi ^{\\chi _s}(\\gamma g).$ Let $K$ be a maximal compact subgroup of $G(\\mathbb {A}_F)$ .", "If $\\Phi ^{\\chi _s}$ is $K$ -finite and holomorphic for $\\mathrm {Re}(s)$ sufficiently large, then $E_Q(g,\\Phi ^{\\chi _s})$ is absolutely convergent for $\\mathrm {Re}(s)$ sufficiently large.", "We observe that for $f \\in \\mathcal {S}(X_P(\\mathbb {A}_F))$ one has $ E^*(g,f_{\\chi _{s}}^*)=E_Q(g,j (f_{\\chi _s}^*))$ for $\\mathrm {Re}(s)$ sufficiently small.", "By definition of the Schwartz space, for any $f \\in \\mathcal {S}(X_P(\\mathbb {A}_F))$ the section $f_{\\chi _s}$ is a holomorphic multiple of the product of completed Hecke $L$ -functions $a_{P|P}(\\chi _s).$ With this in mind, the meromorphy assertion is proven in [9], [33] for $K$ -finite functions $f.$ These references also contain a proof of the functional equation $E(g,f_{\\chi _s})=E_Q(g,j \\left( \\mathcal {R}_{P|P^*}(f_{\\chi _s})\\right))$ which implies by (REF ) that $E(g,f_{\\chi _s})=E^*(g, \\mathcal {R}_{P|P^*}(f_{\\chi _s})).$ We have $ \\mathcal {R}_{P|P^*}(f_{\\chi _s})=(\\mathcal {F}_{P|P^*}(f))_{\\chi _s}^*$ by Theorem REF and the argument of [19].", "Thus the functional equation stated in the theorem follows from (REF ).", "As mentioned above for any $f \\in \\mathcal {S}(X_P(\\mathbb {A}_F))$ the Mellin transform $f_{\\chi _s}$ is a holomorphic multiple of $a_{P|P}(\\chi _s)$ .", "Thus the last assertion of the theorem follows from the fact that Eisenstein series attached to $K$ -finite holomorphic sections are themselves holomorphic on the unitary axis [2], [33].", "Let $C(\\chi _s)$ be the analytic conductor of $\\chi _s,$ normalized as in [19].", "Using notation as in (REF ) and (REF ) one has the following estimate: Theorem 4.3 Assume that $F$ is a number field, that Conjecture REF is true, and $f \\in \\mathcal {S}(X_P(\\mathbb {A}_F))$ is $K$ -finite.", "Let $A<B$ be real numbers and let $p\\in \\mathbb {C}[x]$ be a polynomial such that $p(s)E(g,\\chi _s)$ is holomorphic in the strip $V_{A,B}.$ Then for all $N \\ge 0$ one has $|E(g,f_{\\chi _s})|_{A,B,p} \\ll _{N,f} 
C(\\chi _s)^{-N}.$ The argument is the same as that proving [19].", "Let $K_M <M^{\\mathrm {ab}}(\\mathbb {A}_F)$ be the maximal compact subgroup and let $ \\kappa _F:={\\left\\lbrace \\begin{array}{ll}\\frac{2^{r_1}(2\\pi )^{r_2}h_F R_F}{d_F^{1/2}e_F}& \\textrm { if }F \\textrm { is a number field}\\\\\\frac{h_F}{d_F^{1/2}(q-1)\\log q}&\\textrm { if }F \\textrm { is a function field}\\end{array}\\right.", "}$ where $r_1$ and $r_2$ are the number of real (resp.", "complex) places, $h_F$ is the class number, $R_F$ is the regulator, $d_F$ is the absolute discriminant, $e_F$ is the number of roots of unity in $F$ , and in the function field case $F$ has field of constants $\\mathbb {F}_q$ .", "For complex numbers $s_0$ let $ w(s_0)={\\left\\lbrace \\begin{array}{ll} \\tfrac{1}{2} \\textrm { if }\\mathrm {Re}(s_0)=0,\\\\1 \\textrm { otherwise.}\\end{array}\\right.", "}$ We now prove the following special case of Theorem REF : Theorem 4.4 Let $f \\in \\mathcal {S}(X_P(\\mathbb {A}_F))$ .", "Assume that $F$ is a function field, $F$ is a number field, Conjecture REF is valid, and $f$ is $K_M$ -finite, or $F$ is a number field and Conjecture REF is valid.", "One has $\\sum _{x \\in X_{P}^\\circ (F)}&f(x)+\\sum _{\\chi } \\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\ \\mathrm {Re}(s_0)\\ge 0\\end{array}} \\frac{w(s_0)}{\\kappa _F} \\mathrm {Res}_{s=s_0}E^*(I,\\mathcal {F}_{P|P^*}(f)^*_{\\chi _{-s}})\\\\&=\\sum _{x^* \\in X_{P^*}^{\\circ }(F)}\\mathcal {F}_{P|P^*}(f)(x^*)+\\sum _{\\chi } \\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\ \\mathrm {Re}(s_0)\\ge 0\\end{array}} \\frac{w(s_0)}{\\kappa _F} \\mathrm {Res}_{s=s_0}E(I,f_{\\chi _s}).$ Here the sum on $\\chi $ is over characters of $A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times .$ The sums over $x$ and $x^*$ are absolutely convergent.", "It is clear that if $f$ is $K_M$ -finite and Conjecture REF is valid then the double sum over $\\chi $ and $s_0$ has finite support.", "The same is true if $F$ is a function field or Conjecture REF is valid.", "This is how we are using these assumptions here and below.", "Remark Before proving Theorem REF we clarify its meaning.", "Assume $F$ is a number field and Conjecture REF is valid.", "For any $s_0 \\in \\mathbb {C}$ and $f^\\infty \\in \\mathcal {S}(X_P(\\mathbb {A}_F^\\infty ))$ consider the linear functional $ f_\\infty \\longmapsto \\mathrm {Res}_{s=s_0}E(I,(f_\\infty f^\\infty )_{\\chi _s}).$ It is defined on the dense subspace of $\\mathcal {S}(X_P(F_\\infty ))$ consisting of $K_\\infty $ -finite functions.", "The proof of the theorem shows that this linear functional is continuous with respect to the Fréchet topology on $\\mathcal {S}(X_P(F_\\infty )).$ Hence it extends to all of $\\mathcal {S}(X_P(F_\\infty )).$ For $f_\\infty $ that are not $K_\\infty $ -finite this is the manner the expression $\\mathrm {Res}_{s=s_0}E(I,f_{\\chi _s})$ is to be interpreted in this paper.", "We take the obvious analogous conventions regarding the meaning of $\\mathrm {Res}_{s=s_0}E^*(I,f^{\\prime *}_{\\chi _s})$ for $f^{\\prime } \\in \\mathcal {S}(X_{P^*}(\\mathbb {A}_F))$ that are not $K_\\infty $ -finite.", "Let $K_\\infty \\le G(F_\\infty )$ be a maximal compact subgroup.", "Assume first that $f=f_\\infty f^\\infty $ where $f_\\infty \\in \\mathcal {S}(X_{P}(F_\\infty ))$ is $K_\\infty $ -finite.", "Let $I_F:={\\left\\lbrace \\begin{array}{ll}i\\left[-\\frac{\\pi }{\\log q},\\frac{\\pi }{\\log q} \\right] & \\textrm { if }F \\textrm { is a function field, and }\\\\i\\mathbb {R}&\\textrm 
{ if }F \\textrm { is a number field.}", "\\end{array}\\right.", "}$ By Mellin inversion and Theorem REF in the number field case $\\sum _{x \\in X_{P}^\\circ (F)}f(x)=\\sum _{\\chi } \\int _{\\sigma +I_F }E(I,f_{\\chi _s}) \\frac{ds}{c 2\\pi i}$ for $\\sigma $ sufficiently large and a suitable constant $c.$ Here the sum over $\\chi $ is as in the statement of the theorem.", "Since $f$ is $K_\\infty $ -finite, the support of the sum over $\\chi $ is finite.", "The constant $c$ may be computed as follows.", "It depends on our choice of Haar measure on $[M^{\\mathrm {ab}}],$ which is normalized to correspond to the measure on $A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times $ described in §REF .", "This is induced as explained in §REF from a measure on $\\mathbb {A}_F$ that is self-dual with respect to $\\psi .$ The induced measure on $F \\backslash \\mathbb {A}_F$ is independent of the choice of $\\psi .$ Thus using [46] and a choice of self-dual measure one obtains $\\mathrm {meas}(A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times )={\\left\\lbrace \\begin{array}{ll} \\frac{2^{r_1}(2\\pi )^{r_2}h_F R_F}{d_{F}^{1/2}e_F} &\\textrm { if }F \\textrm { is a number field}\\\\\\frac{h_F }{d_{F}^{1/2}(q-1)} & \\textrm { if }F \\textrm { is a function field.}\\end{array}\\right.", "}$ This implies $c=\\kappa _F.$ Notice that in the function field case $\\kappa _F=\\frac{\\mathrm {meas}(A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times )}{\\log q}$ because of the measure of $I_F.$ We shift contours to $\\mathrm {Re}(\\sigma )$ very small to see that this is $\\sum _{x \\in X_{P}^\\circ (F)}f(x)=\\sum _{\\chi } \\int _{\\mathrm {Re}(s)=\\sigma ^{\\prime }}E(I,f_{\\chi _s}) \\frac{ds}{\\kappa _F2\\pi i}+\\frac{1}{\\kappa _F}\\sum _{\\chi }\\sum _{s_0 \\in \\mathbb {C}} \\mathrm {Res}_{s=s_0}E(I,f_{\\chi _s})$ where now $\\sigma ^{\\prime }$ is sufficiently small.", "Here the support of the sum over $\\chi $ and $s_0$ is finite by assumptions (2) or (3) in the number field case and the fact that $E(I,f_{\\chi _s})$ is rational in the sense of [36] in the function field case [36].", "In the number field case the bound required to justify the contour shift is provided by Theorem REF .", "We now apply the functional equation of Theorem REF and Mellin inversion to deduce the identity $ \\sum _{x \\in X_{P}^\\circ (F)}f(x)=\\sum _{x^* \\in X_{P^*}^{\\circ }(F)}\\mathcal {F}_{P|P^*}(f)(x^*)+\\frac{1}{\\kappa _F}\\sum _{\\chi }\\sum _{s_0 \\in \\mathbb {C}} \\mathrm {Res}_{s=s_0}E(I,f_{\\chi _s}).$ Since all elements of the Schwartz space $\\mathcal {S}(X_P(\\mathbb {A}_F))$ are $K_\\infty $ -finite in the function field case assume for the moment that $F$ is a number field.", "Write $K_{M\\infty }=K_{M} \\cap M^{\\mathrm {ab}}(F_\\infty ).$ We assume without loss of generality that $f=f_\\infty f^\\infty $ where $f_\\infty \\in \\mathcal {S}(X_P(F_\\infty ))$ , $f^\\infty \\in \\mathcal {S}(X_P(\\mathbb {A}_F^\\infty ))$ .", "We additionally either assume (2) and that $f_\\infty $ transforms according to a particular character $\\eta $ under $K_{M\\infty }$ , or we assume (3).", "Since $K_\\infty $ -finite functions are dense in $\\mathcal {S}(X_{P}(F_\\infty ))$ by the standard argument [45] we can choose a sequence $\\lbrace f_i:i \\in \\mathbb {Z}_{\\ge 1}\\rbrace \\subset \\mathcal {S}(X_P(F_\\infty ))$ of $K_\\infty $ -finite $f_i$ such that $f_i \\rightarrow f$ in the Frechét topology on $\\mathcal {S}(X_P(F_\\infty )).$ Under assumption (2) we additionally 
assume that the $f_i$ transform under $K_{M\\infty }$ by $\\eta $ .", "We observe that the support of the sum over $\\chi $ and $s_0$ in (REF ) and its analogues when $f$ is replaced by $f_if^\\infty $ are contained in a finite set independent of $i$ under either assumption (2) or (3).", "In fact, the finite set can be taken to depend only on the $K_M^\\infty $ -type of $f^\\infty $ .", "It is clear from Lemma REF that for each fixed $f^\\infty \\in \\mathcal {S}(X_P(\\mathbb {A}_F^\\infty ))$ the map $\\Lambda _{1,f^\\infty }:\\mathcal {S}(X_P(F_\\infty )) &\\longrightarrow \\mathbb {C}\\\\f_\\infty &\\longmapsto \\sum _{x \\in X_P^\\circ (F)}f_\\infty (x)f^\\infty (x)$ is continuous on $\\mathcal {S}(X_P(F_\\infty )).$ The same is true of $\\Lambda _{2,f^\\infty }:\\mathcal {S}(X_P(F_\\infty )) &\\longrightarrow \\mathbb {C}\\\\f_\\infty &\\longmapsto \\sum _{x^* \\in X_{P^*}^\\circ (F)}\\mathcal {F}_{P|P^*}(f_\\infty )(x^*)\\mathcal {F}_{P|P^*}(f^\\infty )(x^*)$ since the Fourier transform is continuous.", "Thus $\\Lambda _{j,f^\\infty }(f_i) \\rightarrow \\Lambda _{j,f^\\infty }(f)$ for as $i \\rightarrow \\infty $ for $j \\in \\lbrace 1,2\\rbrace .$ Finally consider $\\Lambda _{f^\\infty ,s_0}:\\mathcal {S}(X_P(F_\\infty )) &\\longrightarrow \\mathbb {C}\\\\f_\\infty &\\longmapsto \\mathrm {Res}_{s=s_0}E(I,f_{\\chi _s}).$ Using Lemma REF we can choose an $f^{\\prime \\infty }\\in \\mathcal {S}(X_{P}(\\mathbb {A}_F^\\infty ))$ such that $\\Lambda _{f^\\infty ,s_0}=\\Lambda _{1,f^{\\prime \\infty }}-\\Lambda _{2,f^{\\prime \\infty }}.$ In more detail one uses Lemma REF to choose $f^{\\prime \\infty }$ so that the contribution of $\\mathrm {Res}_{s=s_0}E(I,f_{\\chi _s})$ to (REF ) is unchanged, but the contribution of the other residues vanish.", "Thus $\\Lambda _{f^\\infty ,s_0}$ is continuous.", "We deduce that (REF ) is valid for all $K_M$ -finite $f$ under assumption (2) and for all $f$ under assumption (3).", "We now return to the case of a general global field.", "To obtain the expression in the theorem from (REF ) we use the functional equation and holomorphy assertion of Theorem REF and the observeration that for any meromorphic function $f$ on $\\mathbb {C}$ and any $s_0 \\in \\mathbb {C}$ one has $\\mathrm {Res}_{s=s_0}f(s)=-\\mathrm {Res}_{s=-s_0}f(-s).$ Lemma 4.5 Let $\\chi \\in \\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times }.$ Let $n$ be the maximal order of the pole at $s_0$ of $E(I,f_{\\chi _s})$ as $f$ ranges over $K_\\infty $ -finite elements of $\\mathcal {S}(X_P(F_\\infty )).$ For each $f^\\infty \\in \\mathcal {S}(X_P(\\mathbb {A}_F^\\infty ))$ there are continuous linear functionals $\\Lambda _i((\\cdot )f^\\infty ):\\mathcal {S}(X_P(F_\\infty )) \\longrightarrow \\mathbb {C}$ such that if $f_\\infty $ is $K_\\infty $ -finite then $ \\delta _P^{-1/2}(\\beta ^\\vee _0(t))\\sum _{i=0}^{n-1} |t|^{-s_0}(\\log |t|)^i \\overline{\\chi }(t)\\Lambda _i(f_\\infty f^\\infty )=\\mathrm {Res}_{s=s_0}E(I,(L(\\beta _0^\\vee (t))f_\\infty f^\\infty )_{\\chi _s}).$ As usual, when $F$ is a function field we give $\\mathcal {S}(X_P(F_\\infty ))$ the discrete topology.", "We have $\\tfrac{1}{|t|^{s}}=\\sum _{j=0}^\\infty \\frac{(-(s-s_0)\\log |t|)^j}{ j!", "|t|^{s_0}}$ for $s$ in a neighborhood of $s_0$ .", "On the other hand if $f_\\infty $ is $K_\\infty $ -finite then we can write $E(I,(f_{\\infty }f^\\infty )_{\\chi _s})=\\sum _{i=0}^{n-1}\\frac{(-1)^{i}i!", "\\Lambda _i(f_\\infty f^\\infty )}{(s-s_0)^{i+1}} +g(s)$ where $g(s)$ is holomorphic in a 
neighborhood of $s_0$ and $\\Lambda _i(f_\\infty f_\\infty ) \\in \\mathbb {C}$ .", "Then the expression (REF ) is valid for $K_\\infty $ -finite $f_\\infty .$ The fact that $\\Lambda _i((\\cdot )f^\\infty )$ extends to a continuous functional on all of $\\mathcal {S}(X_P(F_\\infty ))$ is tautological in the function field case.", "In the number field case it follows from a variant of the proof of the continuity of the functional $\\Lambda _{f^\\infty ,s_0}$ in the proof of Theorem REF ." ], [ "Proofs of Theorem ", "For $f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ consider the sum $ \\sum _{y \\in Y_{P}(F)}f(y ).$ Lemma 5.1 The sum (REF ) is absolutely convergent.", "Let $|\\cdot |_{\\beta }=\\prod _{v}|\\cdot |_{\\beta ,v}$ , where $|\\cdot |_{\\beta ,v}$ is the norm on $V_{\\beta }(F_v)$ fixed above (REF ).", "Using lemmas REF and REF and the notation in these lemmas we see that for $\\alpha >0$ sufficiently small there is a Schwartz function on $\\Phi \\in \\mathcal {S}(V_{\\beta _0}(\\mathbb {A}_F) \\times V_{\\Delta _{P^{\\prime }}}^\\circ (\\mathbb {A}_F))$ such that the sum is dominated by $\\sum _{\\xi \\in V^{\\circ }(F)}\\Phi (\\xi )\\prod _{\\beta \\in \\Delta _P}|\\xi |^{\\alpha -r_\\beta }_\\beta <\\infty $ where $V^\\circ $ is defined as in (REF ).", "Recall the function $w(s_0)$ from (REF ).", "The following theorem is the precise version of Theorem REF : Theorem 5.2 Let $f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ .", "Assume $F$ is a function field, $F$ is a number field, Conjecture REF is valid, and $f$ is $K_M$ -finite, or $F$ is a number field and Conjecture REF is valid.", "One has $&\\sum _{x \\in Y_{P}(F)}f(x )+\\sum _{y \\in P^{\\prime \\mathrm {der}}(F) \\backslash Y(F)}\\sum _{\\chi } \\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\\\mathrm {Re}(s_0)\\ge 0\\end{array}}\\frac{w(s_0)}{\\kappa _F} \\mathrm {Res}_{s=s_0}E^*(I,\\iota _y^*(\\mathcal {F}_{P|P^*}(f))^*_{\\chi _{-s}})\\\\&=\\sum _{x^* \\in Y_{P^*}(F)}\\mathcal {F}_{P|P^*}(f)(x ^*)+\\sum _{y \\in P^{\\prime \\mathrm {der}}(F) \\backslash Y(F)}\\sum _{\\chi } \\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\\\mathrm {Re}(s_0)\\ge 0\\end{array}} \\frac{w(s_0)}{\\kappa _F}\\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f)_{\\chi _s})$ where the sums on $\\chi $ are over all characters of $A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times .$ Moreover the sums over $x$ and $x^*$ and the triple sums over $(y,\\chi ,s_0)$ are absolutely convergent.", "Remark We point out that conjectures REF and REF are assertions about functions in $\\mathcal {S}(X_{P \\cap M_{\\beta _0}}(\\mathbb {A}_F)),$ whereas $f$ lies in $\\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F)).$ We use Lemma REF to write $ \\begin{split}\\sum _{x\\in Y_{P}(F)}f(x ) &=\\sum _{ y }\\sum _{\\gamma \\in X_{P \\cap M_{\\beta _0}}^{\\circ }(F)}f(\\iota _y(\\gamma ))=\\sum _{ y}\\sum _{\\gamma \\in X_{P \\cap M_{\\beta _0}}^{\\circ }(F)}\\iota _y^*(f)(\\gamma ).\\end{split}$ Here and throughout the proof all sums over $y$ are over $P^{\\prime \\mathrm {der}}(F) \\backslash Y(F)$ .", "By Proposition REF we have $\\iota _y^*(f) \\in \\mathcal {S}(X_{P \\cap M_{\\beta _0}}(\\mathbb {A}_F)).$ There is a natural map $(M \\cap M_{\\beta _0})^{\\mathrm {ab}} \\rightarrow M^{\\mathrm {ab}}$ and hence $(M \\cap M_{\\beta _0})^{\\mathrm {ab}}(\\mathbb {A}_F)$ acts on $\\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F)).$ With respect to this action $\\iota _y$ is $(M \\cap M_{\\beta _0})^{\\mathrm {ab}}(F)$ -equivariant.", "In 
particular if (2) is valid the assumption that $f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ is $K_M$ -finite implies $\\iota _y(f)$ is finite under the maximal compact subgroup of $(M \\cap M_{\\beta _0})^{\\mathrm {ab}}(\\mathbb {A}_F).$ Hence applying Theorem REF we see that the above is $&-\\sum _{y }\\Bigg (\\sum _{\\chi } \\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\\\mathrm {Re}(s_0)\\ge 0\\end{array}} \\frac{w(s_0)}{\\kappa _F}\\mathrm {Res}_{s=s_0}E^*(I,(\\mathcal {F}_{P \\cap M_{\\beta _0}|P^* \\cap M_{\\beta _0}}(\\iota _y^*(f)))^*_{\\chi _{-s}})\\Bigg )\\\\&+\\sum _{ y }\\Bigg (\\sum _{\\gamma \\in X_{P^* \\cap M_{\\beta _0}}^{\\circ }(F)}\\mathcal {F}_{P \\cap M_{\\beta _0}|P^* \\cap M_{\\beta _0}}(\\iota _y^*(f))(\\gamma ) +\\sum _{\\chi }\\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\\\mathrm {Re}(s_0)\\ge 0\\end{array}} \\frac{w(s_0)}{\\kappa _F}\\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f)_{\\chi _s})\\Bigg ).$ By Proposition REF this is $&-\\sum _{y }\\Bigg (\\sum _{\\chi } \\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\\\mathrm {Re}(s_0)\\ge 0\\end{array}} \\frac{w(s_0)}{\\kappa _F}\\mathrm {Res}_{s=s_0}E^*(I,\\iota _y^*(\\mathcal {F}_{P |P^*}(f))^*_{\\chi _{-s}})\\Bigg )\\\\&+\\sum _{ y }\\Bigg (\\sum _{\\gamma \\in X_{P^* \\cap M_{\\beta _0}}^{\\circ }(F)}\\iota _y^*(\\mathcal {F}_{P|P^*}(f))(\\gamma ) +\\sum _{\\chi }\\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\\\mathrm {Re}(s_0)\\ge 0\\end{array}} \\frac{w(s_0)}{\\kappa _F}\\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f)_{\\chi _s})\\Bigg ).$ By Lemma REF $\\sum _{ y }\\sum _{\\gamma \\in X_{P^* \\cap M_{\\beta _0}}^{\\circ }(F)}\\iota _y^*(\\mathcal {F}_{P |P^*}(f))( \\gamma )=\\sum _{x^* \\in Y_{P^*}(F)}\\mathcal {F}_{P |P^*}(f)( x^*).$ This completes the proof of the identity in the theorem.", "For the absolute convergence statement in the theorem, we observe that (REF ) and (REF ) are absolutely convergent by Lemma REF .", "By the argument above and the functional equation in Theorem REF we see that $\\sum _{\\gamma \\in X_{P \\cap M_{\\beta _0}}^{\\circ }(F)}&f(\\iota _y(\\gamma ))\\\\ &=\\sum _{\\gamma \\in X_{P^* \\cap M_{\\beta _0}}^{\\circ }(F)}\\mathcal {F}_{P \\cap M_{\\beta _0}|P^* \\cap M_{\\beta _0}}(\\iota _y^*(f))(\\gamma )+\\frac{1}{\\kappa _F}\\sum _{\\chi } \\sum _{s_0 \\in \\mathbb {C}} \\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f)_{\\chi _s})$ for all $y.$ We deduce that the sum over $y$ in $\\frac{1}{\\kappa _F}\\sum _{y} \\left(\\sum _{\\chi } \\sum _{s_0 \\in \\mathbb {C}} \\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f)_{\\chi _s})\\right)$ is absolutely convergent as well.", "The support of the sum over $\\chi $ is finite and under assumptions (1) or (2) it depends only on the $K_M$ -type of $f.$ Under assumption (3) it depends only on the $K_M \\cap M(\\mathbb {A}_F^\\infty )$ -type of $f.$ With this in mind, for any fixed $s_0 \\in \\mathbb {C}$ we can use Lemma REF to choose an $f^{\\prime } \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ with the same $K_M$ -type as $f$ so that $&\\sum _{y } \\left(\\sum _{\\chi } \\sum _{s_0 \\in \\mathbb {C}} \\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f^{\\prime })_{\\chi _s})\\right)=\\sum _{y} \\mathrm {Res}_{s=s_0}E(I,\\iota _y^*(f)_{\\chi _s}).$ We deduce that the right hand side of (REF ) is absolutely convergent, and the same is true if we replace $f$ by $\\mathcal {F}_{P|P^*}(f)$ by Proposition REF and Theorem REF .", "Let $(f_1,f_2) \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F)) \\times \\mathcal {S}(Y_{P^*,P^{\\prime }}(\\mathbb 
{A}_F)).$ Recall the definition of the generalized Schubert Eisenstein series $E_{Y_P}(f_{1\\chi _s})$ and $E_{Y_{P^*}}^*(f_{2\\chi _s}^*)$ of (REF ).", "Using lemmas REF and REF and standard arguments one obtains the following lemma: Lemma 5.3 For $f_1 \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ one has $\\sum _{x \\in M^{\\mathrm {ab}}(F) \\backslash Y_{P}(F)}\\int _{M^{\\mathrm {ab}}(\\mathbb {A}_F)} \\delta _{P}^{1/2}(m)|\\chi _s(\\omega _P(m))f_1(m^{-1}x)|dm<\\infty $ for $\\mathrm {Re}(s_{\\beta _0})$ sufficiently large.", "In particular, $E_{Y_P}(f_{1\\chi _s})$ converges absolutely for $\\mathrm {Re}(s_{\\beta _0})$ sufficiently large.", "Similarly, for $f_2 \\in \\mathcal {S}(Y_{P^*,P^{\\prime }}(\\mathbb {A}_F))$ the series $E_{Y_{P^*}}^*(f_{2\\chi _s}^*)$ converges absolutely for $\\mathrm {Re}(s_{\\beta _0})$ sufficiently small.", "$\\Box $ With notation as in Lemma REF , let $A^{\\beta _0}:=\\ker \\omega _{\\beta _0}:M^{\\mathrm {ab}} \\longrightarrow \\mathbb {G}_m$ Thus $M^{\\mathrm {ab}}$ is the internal direct sum of $\\beta _0^\\vee (\\mathbb {G}_m)$ and $A^{\\beta _0}$ by Lemma REF .", "Recall that $M^{\\prime }$ is the unique Levi subgroup of $P^{\\prime }$ containing $M$ .", "Applying Lemma REF with $P$ replaced by $P^{\\prime }$ we see that the canonical map $M^{\\mathrm {ab}} \\rightarrow M^{\\prime \\mathrm {der}}$ restricts to induce an isomorphism $ A^{\\beta _0} \\tilde{\\longrightarrow } M^{\\prime \\mathrm {ab}}.$ We have a Haar measure on $\\mathbb {A}_F^\\times $ and we endow $A^{\\beta _0}(\\mathbb {A}_F)$ by declaring that the product measure on $A^{\\beta _0}(\\mathbb {A}_F)\\beta _0^\\vee (\\mathbb {A}_F^\\times )$ is our Haar measure on $M^{\\mathrm {ab}}(\\mathbb {A}_F).$ As usual let $(\\mathbb {A}_F^\\times )^1:=\\lbrace t \\in \\mathbb {A}_F^\\times :|t|=1\\rbrace , \\quad (\\mathbb {A}_F^\\times )^+:=\\lbrace t \\in \\mathbb {A}_F^\\times :|t|>1\\rbrace , \\quad (\\mathbb {A}_F^\\times )^-:=\\lbrace t \\in \\mathbb {A}_F^\\times :|t|<1\\rbrace .$ For algebraic $F$ -groups $G$ we set $[G]:=G(F) \\backslash G(\\mathbb {A}_F)$ and define $[\\mathbb {G}_m]^1:=F^\\times \\backslash (\\mathbb {A}_F^\\times )^1, \\quad [\\mathbb {G}_m]^ \\pm :=F^\\times \\backslash (\\mathbb {A}_F^\\times )^\\pm .$ We endow $(\\mathbb {A}_F^\\times )^1$ with the unique measure so that $|\\cdot |:\\mathbb {A}_F^\\times /(\\mathbb {A}_F^\\times )^1 \\tilde{\\longrightarrow } \\mathrm {Im}(|\\cdot |) \\subseteq \\mathbb {R}_{>0}$ is measure preserving.", "Here in the number field case $\\mathrm {Im}(|\\cdot |)=\\mathbb {R}_{>0}$ with its usual Haar measure $\\frac{dt}{t}$ (where $dt$ the Lebesgue measure on $\\mathbb {R}$ ).", "In the function field case the image is discrete and endowed with the counting measure.", "The following technical lemma will be used in the proof of Theorem REF : Lemma 5.4 Let $f \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F)).$ Assume $F$ is a function field, $F$ is a number field, Conjecture REF is valid and $f$ is $K_M$ -finite, or $F$ is a number field and Conjecture REF is valid.", "For every $\\chi \\in (\\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times })^{\\Delta _P},$ $\\chi ^{\\prime } \\in \\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times }$ and $s_0 \\in \\mathbb {C}$ $ &\\sum _{y } \\int _{[\\mathbb {G}_m]^1 \\times A^{\\beta _0}(\\mathbb {A}_F)}\\delta _{P}^{1/2}(\\beta _0^\\vee (t)m)|\\chi _s(t\\omega _P(m))\\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee 
(t)m)f)_{\\chi ^{\\prime }_{z}})|dmd^\\times t$ is finite.", "Moreover for $\\mathrm {Re}(s_{\\beta _0})$ sufficiently large $ &\\sum _{y}\\int _{[\\mathbb {G}_m]^- \\times A^{\\beta _0}(\\mathbb {A}_F)}\\delta _{P}^{1/2}(\\beta _0^\\vee (t)m)|\\chi _s(t\\omega _P(m)) \\mathrm {Res}_{z=s_0}E^*(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)^*_{\\chi ^{\\prime }_{ z}})|dm d^\\times t$ converges.", "The expression $ &\\sum _{y } \\int _{ [\\mathbb {G}_m]^- \\times A^{\\beta _0}(\\mathbb {A}_F)}\\delta _{P}^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)_{\\chi ^{\\prime }_{z}})dm d^\\times t$ originally defined for $\\mathrm {Re}(s_{\\beta _0})$ large, admits a meromorphic continuation to $s \\in \\mathbb {C}^{\\Delta _P}.$ Here all sums over $y$ are over $ P^{\\prime }(F) \\backslash Y(F)$ .", "The group $[\\mathbb {G}_m]^1$ is compact.", "Using this observation and the argument proving Lemma REF we see that for all $f_1 \\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F))$ and $f_2 \\in \\mathcal {S}(Y_{P^*,P^{\\prime }}(\\mathbb {A}_F))$ the integrals $ \\begin{split}&\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m) \\sum _{x \\in Y_{P}(F)}|f_1((\\beta _0^\\vee (t)m)^{-1}x)|dm d^\\times t\\\\& \\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\delta _{P}^{1/2}(\\beta _0^\\vee (t)m) \\sum _{x^* \\in Y_{P^*}(F)}|f_2((\\beta _0^\\vee (t)m)^{-1}x^*)|dmd^\\times t, \\end{split}$ are convergent.", "Let $(f_\\infty ,f^\\infty ) \\in \\mathcal {S}(Y_{P,P^{\\prime }}(F_\\infty )) \\times \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F^\\infty ))$ , let $(t,m) \\in \\mathbb {A}_F^\\times \\times T^{\\beta _0}(\\mathbb {A}_F)$ , and let $y \\in Y(F).$ Arguing as in the proof of Theorem REF for each $\\chi ^{\\prime } \\in \\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times }$ and $s_0 \\in \\mathbb {C}$ there is an $f^{\\prime \\infty }\\in \\mathcal {S}(Y_{P,P^{\\prime }}(\\mathbb {A}_F^\\infty ))$ such that $ \\begin{split}&\\sum _{x \\in X_{P \\cap M_{\\beta _0}(F)}}\\iota _y(f_\\infty f^{\\prime \\infty })((\\beta _0^\\vee (t)m)^{-1} x)- \\sum _{x^*\\in X_{P^* \\cap M_{\\beta _0}}(F)}\\mathcal {F}_{P \\cap M_{\\beta _0} |P^* \\cap M_{\\beta _0}}\\iota _y^*(L(\\beta _0^{\\vee }(t)m)(f_\\infty f^{\\prime \\infty }))(x^*)\\\\&=\\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)(f_\\infty f^\\infty ))_{\\chi ^{\\prime }_{z }}) \\end{split}$ Multiplying both sides of (REF ) by $\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))$ , summing over $y \\in P^{\\prime \\mathrm {der}}(F) \\backslash G(F)$ and then integrating over $[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]$ we arrive at $ \\begin{split}&\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\sum _{y}\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\Bigg (\\sum _{x \\in X_{P \\cap M_{\\beta _0}(F)}}\\iota _y(f_\\infty f^{\\prime \\infty })((\\beta _0^\\vee (t)m)^{-1} x)\\\\&- \\sum _{x^*\\in X_{P^* \\cap M_{\\beta _0}}(F)}\\mathcal {F}_{P \\cap M_{\\beta _0} |P^* \\cap M_{\\beta _0}}\\iota _y^*(L(\\beta _0^{\\vee }(t)m)(f_\\infty f^{\\prime \\infty }))(x^*)\\Bigg ) dm d^\\times t\\\\&=\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m))\\chi _s(t\\omega _P(m))\\sum _{y} \\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)(f_\\infty f^\\infty ))_{\\chi ^{\\prime }_{ z }})dmd^\\times t \\end{split}$ To prove that the bottom sum and integral 
in (REF ) converge absolutely it suffices to prove that the top sum and integral converge absolutely.", "Using Lemma REF and Proposition REF this reduces to the assertion that the integrals in (REF ) are convergent.", "Hence $ \\int \\sum _{y \\in P^{\\prime \\mathrm {der}}(F) \\backslash Y(F)}\\delta _P^{1/2}(\\beta _0^\\vee (t)m))|\\chi _s(t\\omega _P(m)) \\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)(f_\\infty f^\\infty ))_{\\chi ^{\\prime }_{z }})|dmd^\\times t$ converges, where the integral is over $[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}].$ Using (REF ) we see that (REF ) unfolds to (REF ).", "This completes the proof of the first assertion of the lemma.", "Let $n$ be the maximal order of the pole of $E(I,f^{\\prime }_{\\chi ^{\\prime }_{z}})$ at $z=s_0$ as $f^{\\prime } \\in \\mathcal {S}(X_{P \\cap M_{\\beta _0}}(\\mathbb {A}_F))$ varies.", "Recall the continuity assertion for $\\iota _y^*$ contained in Proposition REF .", "Using the first absolute convergence assertion of the lemma and Lemma REF there are continuous linear functionals $\\Lambda _{i,y}((\\cdot )f^\\infty ):\\mathcal {S}(Y_{P,P^{\\prime }}(F_\\infty )) \\longrightarrow \\mathbb {C}$ for $0 \\le i \\le n-1$ such that $ \\begin{split}&\\sum _{y \\in P^{\\prime }(F) \\backslash Y(F)}\\int _{ A^{\\beta _0}(\\mathbb {A}_F)}\\delta _{P}^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m)) \\mathrm {Res}_{z=s_0}E^*(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)^*_{\\chi ^{\\prime }_{ z}})dm \\\\&= \\sum _{y \\in P^{\\prime }(F) \\backslash Y(F) } \\int _{A^{\\beta _0}(\\mathbb {A}_F)} \\left( \\sum _{i=0}^{n-1}\\chi _s(t\\omega _P(m))|t|^{-s_{0}}(\\log |t|)^i\\overline{\\chi }(t) \\Lambda _{i,y}(L(m)(f_\\infty f^\\infty ))\\right) dm\\end{split}$ for all $t \\in \\mathbb {A}_F^\\times .$ Here when $F$ is a function field we give $\\mathcal {S}(Y_{P,P^{\\prime }}(F_\\infty )$ the discrete topology.", "Varying $f^\\infty $ and using Lemma REF we deduce that $\\sum _{y \\in P^{\\prime }(F) \\backslash Y(F)} \\int _{A^{\\beta _0}(\\mathbb {A}_F)}|\\chi _s(\\omega _P(m)\\Lambda _{i,y}(L(m)(f_\\infty f^\\infty ))|dm$ is convergent for each $i.$ Using this fact we see that for $\\mathrm {Re}(s_{\\beta _0})$ sufficiently large the integral over $t \\in [\\mathbb {G}_m]^-$ of (REF ) is $\\sum _{y \\in P^{\\prime }(F) \\backslash Y(F) } \\int _{A^{\\beta _0}(\\mathbb {A}_F)} \\left( \\int _{[\\mathbb {G}_m]^-}\\sum _{i=0}^{n-1}\\chi _s(t\\omega _P(m))|t|^{-s_{0}}(\\log |t|)^i\\overline{\\chi }(t)d^\\times t \\Lambda _{i,y}(L(m)(f_\\infty f^\\infty ))\\right) dm;$ in other words, it is permissible to bring the integral over $t$ inside the other integral and the sum.", "The integrals over $t$ may be evaluated in an elementary manner, yielding the remaining assertions of the lemma.", "Assume for the moment that $f$ is $K_M$ -finite and that $\\mathrm {Re}(s_{\\beta _0})$ is large.", "Then $E_{Y_P}(f_{\\chi _s})=&\\int _{[M^{\\mathrm {ab}}]}\\delta _P^{1/2}(m)\\chi _s(\\omega _P(m))\\sum _{x \\in Y_{P}(F)}f(m^{-1} x )dm\\\\=&\\int _{[\\mathbb {G}_m]^+ \\times [A^{\\beta _0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x \\in Y_{P}(F)}f((\\beta _0^\\vee (t)m)^{-1}x )dmd^\\times t\\\\&+\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x\\in Y_{P}(F)}f((\\beta _0^\\vee (t)m)^{-1}x )dmd^\\times t\\\\&+\\int _{[\\mathbb {G}_m]^- \\times [A^{\\beta _0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x \\in 
Y_{P}(F)}f((\\beta _0^\\vee (t)m)^{-1}x )dmd^\\times t.$ Here $d^\\times t$ is the measure on $[\\mathbb {G}_m].$ The subgroup $ [\\mathbb {G}_m]^1<[\\mathbb {G}_m]$ has nonzero measure with respect to $d^\\times t$ if and only if $F$ is a function field.", "By Lemma REF one has $ \\mathcal {F}_{P|P^*} \\circ L(m)=\\delta _{P^* \\cap M^{\\beta _0}}(m) L(m) \\circ \\mathcal {F}_{P|P^*}.$ Moreover $\\delta _{P}^{1/2}(m)\\delta _{P^* \\cap M}(m)=\\delta _{P^*}^{1/2}(m).$ With this in mind we apply the Poisson summation formula of Theorem REF to $\\tfrac{1}{2}$ the second integral and the third integral above to see that $E_{Y_P}(f_{\\chi _s})$ is the sum of (REF ) and (REF ) below: $&\\int _{[\\mathbb {G}_m]^+ \\times [A^{\\beta _0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x \\in Y_{P}(F)}f((\\beta _0^\\vee (t)m)^{-1}x )dm d^\\times t\\\\ \\nonumber &+\\frac{1}{2}\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\delta _P^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x \\in Y_{P}(F)}f((\\beta _0^\\vee (t)m)^{-1}x )dmd^\\times t\\\\ \\nonumber &+\\frac{1}{2}\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\delta _{P^*}^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x \\in Y_{P^*}(F)}\\mathcal {F}_{P|P^*}(f)((\\beta _0^\\vee (t)m)^{-1} x )dm d^\\times t\\\\ \\nonumber &+\\int _{[\\mathbb {G}_m]^- \\times [A^{\\beta _0}]}\\delta _{P^*}^{1/2}(\\beta _0^\\vee (t)m)\\chi _s(t\\omega _P(m))\\sum _{x \\in Y_{P^*}(F)}\\mathcal {F}_{P|P^*}(f)((\\beta _0^\\vee (t)m)^{-1}x )dm d^\\times t$ and $\\frac{1}{2}\\int _{[\\mathbb {G}_m]^1 \\times [A^{\\beta _0}]}\\chi _s(t\\omega _P(m))&\\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\ \\mathrm {Re}(s_0) \\ge 0\\end{array}}\\frac{w(s_0)}{\\kappa _F}\\Bigg ( \\sum _{y,\\chi ^{\\prime }} \\delta ^{1/2}_P(\\beta _0^\\vee (t)m) \\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)_{\\chi _z^{\\prime }})\\\\&-\\sum _{y,\\chi ^{\\prime }} \\delta ^{1/2}_{P^*}(\\beta _0^\\vee (t)m) \\mathrm {Res}_{z=s_0}E^*(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)\\mathcal {F}_{P|P^*}(f))^*_{\\chi ^{\\prime }_{-z}})\\Bigg )dmd^\\times t \\nonumber \\\\+\\int _{[\\mathbb {G}_m]^- \\times [A^{\\beta _0}]}\\chi _s(t\\omega _{P}(m))&\\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\ \\mathrm {Re}(s_0)\\ge 0\\end{array}}\\frac{w(s_0)}{\\kappa _F}\\Bigg (\\sum _{y,\\chi ^{\\prime }} \\delta ^{1/2}_P(\\beta _0^\\vee (t)m) \\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)_{\\chi ^{\\prime }_z})\\nonumber \\\\&-\\sum _{y,\\chi ^{\\prime }} \\delta _{P^*}^{1/2}(\\beta _0^\\vee (t)m) \\mathrm {Res}_{z=s_0}E^*(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)\\mathcal {F}_{P|P^*}(f))^*_{\\chi _{-z}^{\\prime }}) \\nonumber \\Bigg )dm d^\\times t$ where the sums over $y$ are over $P^{\\prime \\mathrm {der}}(F) \\backslash Y(F)$ and the sums over $\\chi ^{\\prime }$ are over $\\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times }.$ We recall that the sums over $s_0$ have finite support in a set depending only on the $K_M$ type of $f$ by Conjecture REF in the number field case and the fact that $E(I,f_{\\chi _s})$ is rational in the sense of [36] in the function field case [36].", "Since $\\omega _{P^*}(\\beta _0^{\\vee }(t))=\\omega _{P}(\\beta _0^\\vee (t))^{-1}$ Lemma REF implies that the lower two integrals in (REF ) converge for $\\mathrm {Re}(s_{\\beta _0})$ small enough.", "Thus they converge for all $s$ .", "Since we already know that the upper two converge absolutely for all $s$ we deduce that (REF ) 
is a holomorphic function of $s.$ Now consider (REF ).", "We have the functional equation $ E(I,\\iota _y^*(L(m)f)_{\\chi _{\\beta _0 s}})=\\delta _{P^* \\cap M_{\\beta _0}}(m)E^*(I,\\iota _y^*(L(m)\\mathcal {F}_{P|P^*}(f))_{\\chi _{\\beta _0s}})$ by Theorem REF and (REF ).", "Using Lemma REF and (REF ) to justify switching sums and integrals, we deduce that for $\\mathrm {Re}(s_{\\beta _0})$ sufficiently large (REF ) is equal to $\\frac{1}{2}\\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\ \\mathrm {Re}(s_0)\\ge 0\\end{array}}&\\frac{w(s_0)}{\\kappa _F}\\sum _{y} \\int _{[\\mathbb {G}_m]^1 \\times A^{\\beta _0}(\\mathbb {A}_F)}\\chi _s(t\\omega _P(m))\\Bigg ( \\delta _P^{1/2}(t\\omega _P(m))\\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)_{\\chi _{\\beta _0z}})\\\\&- \\delta _{P^*}^{1/2}(t\\omega _P(m))\\mathrm {Res}_{s=s_0}E^*(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)\\mathcal {F}_{P|P^*}(f))^*_{(\\chi _{\\beta _0})_{-z}})\\Bigg )dm d^\\times t \\nonumber \\\\+\\sum _{\\begin{array}{c}s_0 \\in \\mathbb {C}\\\\ \\mathrm {Re}(s_0) \\ge 0\\end{array}}&\\frac{w(s_0)}{\\kappa _F}\\sum _y\\int _{[\\mathbb {G}_m]^- \\times A^{\\beta _0}(\\mathbb {A}_F)}\\chi _s(t\\omega _{P}(m))\\Bigg (\\delta _{P}^{1/2}(\\beta _0^\\vee (t)m)\\mathrm {Res}_{z=s_0}E(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)f)_{\\chi _{\\beta _0z}}) \\nonumber \\\\&- \\delta _{P^*}^{1/2}(\\beta _0^\\vee (t)m)\\mathrm {Res}_{z=s_0}E^*(I,\\iota _y^*(L(\\beta _0^\\vee (t)m)\\mathcal {F}_{P|P^*}(f))^*_{(\\chi _{\\beta _0})_{-z}})\\Bigg )dmd^\\times t .", "\\nonumber $ where the sums on $y$ are now over $P^{\\prime }(F) \\backslash G(F).$ Here we are using (REF ) to unfold the integral.", "By Lemma REF and (REF ) the expression (REF ) is holomorphic for $\\mathrm {Re}(s)$ large, and admits a meromorphic continuation to the plane.", "Thus under the assumption that $f$ that is $K_M$ finite, we have proved the meromorphic continuation statement in the theorem.", "By a symmetric argument we deduce that the sum of (REF ) and (REF ) is $E^*_{Y_{P^*}}(\\mathcal {F}_{P|P^*}(f)_{\\chi _s}^*).$ This proves the functional equation $E_{Y_P}(f_{\\chi _s})=E^*_{Y_{P^*}} (\\mathcal {F}_{P|P^*}(f)_{\\chi _s}^*).$ Thus far we have assumed that $f$ is $K_M$ -finite.", "To remove this assumption we note that for any $f \\in \\mathcal {S}(X_{P}(\\mathbb {A}_F))$ and any $\\chi \\in \\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times }$ there is a $K_M$ -finite $f^{\\prime } \\in \\mathcal {S}(X_P(\\mathbb {A}_F))$ such that $f_{\\chi _s}=f_{\\chi _s}^{\\prime }$ .", "Moreover, using Lemma REF , we can choose $f^{\\prime }$ so that $\\mathcal {F}_{P|P^*}(f)_{\\chi _s}^*=\\mathcal {F}_{P|P^*}(f^{\\prime })_{\\chi _s}^*$ .", "This allows us to remove the $K_M$ -finiteness assumption.", "The proof of Theorem REF shows that the poles of $E_{Y_P}(f_{\\chi _s})$ are controlled by the poles of $E(I,f^{\\prime }_{\\chi _{\\beta _0 s}})$ for $f^{\\prime } \\in \\mathcal {S}(X_{P \\cap M_{\\beta _0}}(\\mathbb {A}_F)).$ In particular it is easy to deduce the following corollary from the proof: Corollary 5.5 Under the hypotheses of Theorem REF , for fixed $(s_{\\beta }) \\in \\mathbb {C}^{\\Delta _{P^{\\prime }}}$ the order of the pole of $E_{Y_P}(f_{\\chi _s})$ at $s_{\\beta _0}=s_0$ is no greater than the maximum of the orders of the pole of $E(I,f^{\\prime }_{\\chi _{\\beta _0z}})$ at $z=s_0$ as $f^{\\prime }$ ranges over $\\mathcal {S}(X_{P \\cap M_{\\beta _0}}(\\mathbb {A}_F)).$ $\\Box $" ], [ "On the poles of degenerate Eisenstein 
series", "We assume for this section that $F$ is a number field.", "Our goal here is to verify conjectures REF and REF in some cases.", "Without loss of generality we take $G=M_{\\beta _0}$ , thus $P \\le G$ is now a maximal parabolic subgroup containing our fixed Borel $B$ and $P^*$ is the opposite parabolic.", "Let $Q$ be the unique maximal parabolic subgroup containing $B$ that is conjugate to $P^*.$ Let $K \\le G(\\mathbb {A}_F)$ be a maximal compact subgroup and let $\\mathbb {C}_+:=\\lbrace z \\in \\mathbb {C}:\\mathrm {Re}(z)>0\\rbrace .$ Lemma 6.1 To prove Conjecture REF it suffices to show that for each character $ \\chi \\in \\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times }$ there is a finite set $\\Upsilon (\\chi ) \\subset \\mathbb {C}_+$ such that if $E(g,\\Phi ^{\\chi _s}_P)$ or $E(g,\\Phi ^{\\chi _s}_Q)$ has a pole at $s=s_0$ with $\\mathrm {Re}(s_0)>0$ for any holomorphic $K$ -finite section $\\Phi ^{\\chi _s}_P \\in I_P(\\chi _s)$ or $\\Phi ^{\\chi _s}_Q \\in I_Q(\\chi _s)$ then $s \\in \\Upsilon (\\chi ).$ By the observations in the proof of Lemma REF if $a_{P|P}(\\chi _s)$ has a pole at $s=s_0$ for $\\mathrm {Re}(s_0) \\ge 0$ then $s_0=0.$ Moreover, the order of the pole is bounded by an integer depending only on $P$ and $G.$ Thus for any $f \\in \\mathcal {S}(X_P(\\mathbb {A}_F))$ Theorem REF implies that the only possible pole of $E(g,f_{\\chi _s})$ on the line $\\mathrm {Re}(s)=0$ is at $s=0$ and its order is bounded in a sense depending only on $P$ and $G.$ Now consider poles of $E(g,f_{\\chi _s})$ for $\\mathrm {Re}(s)<0.$ Using notation from the proof of Theorem REF and arguing as in that proof we have $E(g,f_{\\chi _s})=E^*(g,\\mathcal {F}_{P|P^*}(f)^*_{\\chi _s})=E_Q(g,j(\\mathcal {F}_{P|P^*}(f)^*_{\\chi _s})).$ Thus the order of the pole of $E(g,f_{\\chi _s})$ at $s_0$ is equal to the order of the pole of $E_Q(g,j(\\mathcal {F}_{P|P^*}(f)^*_{\\chi _{s}}))$ at $s_0.$ Since $\\mathcal {F}_{P|P^*}(f) \\in \\mathcal {S}(X_{P^*}(\\mathbb {A}_F))$ and $a_{P^*|P^*}((\\chi _s)^{-1})$ is holomorphic for $\\mathrm {Re}(s)<0$ by Lemma REF we have that $\\mathcal {F}_{P|P^*}(f)^*_{\\chi _s} \\in I_{P^*}^*(\\chi _s)$ is a holomorphic section for $\\mathrm {Re}(s)<0$ .", "This implies $j(\\mathcal {F}_{P|P}(f)^*_{\\chi _{s}})$ is a holomorphic section of $I_Q((\\chi ^{-1})_{-s})$ for $\\mathrm {Re}(s)<0.$ The lemma follows.", "The proof of the following lemma is analogous: Lemma 6.2 To prove Conjecture REF it suffices to show that there is a finite set $\\Upsilon \\subset \\mathbb {C}_+$ and an integer $n>0$ such that if $E(g,\\Phi ^{\\chi _s}_P)$ or $E(g,\\Phi ^{\\chi _s}_Q)$ has a pole at $s=s_0$ with $\\mathrm {Re}(s_0)>0$ for any $\\chi \\in \\widehat{A_{\\mathbb {G}_m} F^\\times \\backslash \\mathbb {A}_F^\\times }$ and holomorphic $K$ -finite sections $\\Phi ^{\\chi _s}_P \\in I_P(\\chi _s)$ or $\\Phi ^{\\chi _s}_Q \\in I_Q(\\chi _s)$ then $s_0 \\in \\Upsilon $ and $\\chi ^n=1.$ $\\Box $ Theorem 6.3 Suppose that $G=\\mathrm {Sp}_{2n}$ and that $ P$ is the Siegel parabolic, that is, the parabolic subgroup with Levi subgroup isomorphic to $\\mathrm {GL}_n.$ Then Conjecture REF is true.", "We use the results of [26].", "We point out that $a_{P|P}(\\chi _s)=a_{I_{2n}}(\\chi _s)$ in the notation of loc. 
cit.", "by [16].", "Thus the theorem follows from Lemma REF , the fact that $a_{P|P}(\\chi _s)$ is holomorphic and nonzero for $\\mathrm {Re}(s) > 0$ by Lemma REF , and the Corollary of [26].", "Theorem 6.4 Suppose that $G=\\mathrm {SL}_n$ .", "Then Conjecture REF is true.", "This can be derived using Lemma REF and the same arguments as those proving [24].", "The proof comes down to two statements explained in [24].", "The first is the statement that a certain quotient of completed $L$ -functions depending on $\\chi $ has only finitely many poles for $\\mathrm {Re}(s)>0$ .", "The second is that a certain normalized intertwining operator has only finitely many poles.", "These are proven exactly as in loc. cit.", "The required facts on induced representations of $\\mathrm {GL}_2$ may be found in [28].", "Much of the work towards proving Conjecture REF for arbitrary parabolic subgroups of symplectic groups is contained in [23], and much of the work towards proving Conjecture REF for general linear groups is contained in [24].", "The additional steps necessary seem to require careful analysis of the reducibility points of local principal series representations at the archimedean places." ] ]
2107.01874
[ [ "A Formal Semantics of the GraalVM Intermediate Representation" ], [ "Abstract The optimization phase of a compiler is responsible for transforming an intermediate representation (IR) of a program into a more efficient form.", "Modern optimizers, such as that used in the GraalVM compiler, use an IR consisting of a sophisticated graph data structure that combines data flow and control flow into the one structure.", "As part of a wider project on the verification of optimization passes of GraalVM, this paper describes a semantics for its IR within Isabelle/HOL.", "The semantics consists of a big-step operational semantics for data nodes (which are represented in a graph-based static single assignment (SSA) form) and a small-step operational semantics for handling control flow including heap-based reads and writes, exceptions, and method calls.", "We have proved a suite of canonicalization optimizations and conditional elimination optimizations with respect to the semantics." ], [ "Introduction", "introduction Compilers are an essential ingredient of the computing base.", "Software developers need to be able to trust their compilers because an error in a compiler can manifest as erroneous generated code for any of the myriad of programs it compiles.", "This paper forms the first steps of a wider project that focuses on the verification of compiler optimization passes, a common source of compiler errors.", "The project does not cover initial parsing, type checking and intermediate representation (IR) construction passes, nor the final machine-dependent code generation pass.", "The multi-pass structure of a compiler affords verification on a pass-by-pass basis.", "An optimization pass transforms a program represented in the IR.", "The verification of a pass involves proving that, for every IR input program, the transformation implemented by the pass preserves the semantics of the program.", "This task can be partitioned into: defining a formal semantics for the IR, defining the optimizations as transformations of the IR, and verifying that the transformations are semantics preserving.", "In this paper, we embark on the process of verifying the optimization passes of an existing production compiler, GraalVM [14], using Isabelle/HOL [13].", "We present a formalization of the IR used by the GraalVM compiler (Sect 3-6).", "We briefly describe the validation of this semantics against the existing compiler implementation (validation), then show the effectiveness of the semantics by proving two kinds of local optimizations (optimization).", "The main contribution of this paper is to devise a formal semantics of the GraalVM IR in Isabelle/HOL [13].", "The IR combines control flow and data flow into a single `sea-of-nodes' graph structure [3], rather than a more conventional control-flow graph with basic blocks representing sequential flow.", "GraalVMIR gives further details of the GraalVM Compiler.", "As far as we are aware, this is the first formal semantics of a sea-of-nodes IR that covers method calls, with exceptions, as well as object reads and writes.", "The semantics of the IR consists of the following components: the graph representation corresponding to the GraalVM IR (graphmodel), data-flow semantics that handles expression evaluation using a big-step operational semantics (dataflow), local control-flow semantics that handles control flow within a single method using a small-step operational semantics (localcontrolflow), global control-flow semantics that handles method invocation and 
return, exceptions handling, and promotes the local control-flow semantics to a small-step operational semantics (globalcontrolflow).", "Each stage builds on the previous.", "Note that expression evaluation within the GraalVM IR is side-effect-free and terminating, so it is appropriate to use a big-step semantics that just returns the result, whereas for the control-flow semantics we use a small-step operational semantics to account for non-terminating programs and to accurately model the order of all side-effects, including object reads and writes, method calls, etc." ], [ "GraalVM IR", "GraalVMIR The GraalVM Intermediate Representation (IR) is a sophisticated graph structure that is designed to support implementation of efficient code optimizing transformations (see graph1 for an example graph).", "A side-effect-free expression is represented by a data-flow subgraph that is acyclic (i.e.", "it is a DAG), so that common subexpressions are represented by shared subgraphs (instead of by value numbering in traditional SSA form).", "This has the advantage that data-flow dependencies are explicitly represented in the graph [6].", "Expressions with potentially observable side-effects, such as method invocations or field accesses, are incorporated into the control-flow graph.", "The IR combines both data-flow and control-flow aspects of a program within a single graph structure.", "This graphical representation allows efficient implementation of optimizations equivalent to global value numbering and global code motion optimization strategies [2].", "[language=Java,caption=A simplified AddNode class definition in GraalVM, float=t, label=nodeclass, frame=t] class AddNode extends Node @Input ValueNode x; @Input ValueNode y; The GraalVM IR graph consists of many different kinds of nodes (over 200) with two main kinds of edges: input edges that specify the data inputs of a node; successor edges that specify the control-flow successors of a node.", "Nodes of the GraalVM IR are implemented in Java as a collection of Java classes which inherit from a base Node class.", "Each subclass of Node can specify their possible edge connections, either input or successor edges, by annotating fields that store references to other Node subclasses.", "Listing shows a simplified example of one such Node subclass for an addition expression.", "AddNode has two input edges $x$ and $y$ but it has no successors because it is a pure data-flow node." 
], [ "Graph Model in Isabelle/HOL", "graphmodel Our Isabelle/HOL model of the GraalVM IR graph has a close correspondence with the Java Node subclasses but still supports efficient reasoning and pattern matching in Isabelle.", "We use natural numbersA more abstract representation would be better but using natural numbers allows us to utilise Isabelle code generation facilities.", "to identify nodes of the graph, and define an Isabelle datatype $IRNode$ (see IRNode) to model the concrete subclasses of the Java Node class.", "We developed a tool that uses Java reflection to traverse the GraalVM Node subclasses and generate the $IRNode$ datatype, including within each branch of the datatype the input edges, successor edges, and selected data attributes of each node, using the same names as in the Java source code but prefixed with “$ir\\_$ ” to avoid name clashes (field names are global functions in Isabelle).", "We currently translate 45 of the concrete subclasses of the Java Node class into Isabelle, which corresponds to over 85% of the nodes used to compile the Dacapo Java benchmarkhttps://github.com/dacapobench/dacapobench and is enough to handle simple example programs.", "For the 60+ interface classes and abstract Java subclasses of Node, such as BinaryArithmeticNode, we also generate a corresponding Isabelle boolean functionIn Isabelle/HOL “$S \\Rightarrow T$ ” is the type of a function from $S$ to $T$ .", "over the $IRNode$ type, such as: isbinary Figure: An extract of the Isabelle datatype definition of the IR graph nodes (some node types and fields are omitted or abbreviated to save space).IRNode gives the Isabelle representation of the graph nodes.All theories are available at https://github.com/uqcyber/veriopt-releases/tree/atva2021.", "$ConstantNode$ corresponds to a Java constant, so has a value constant as its only field, with no input or successor edges.", "Similarly, $ParameterNode$ has a single natural number field that is an index into the list of parameter values of the current method.", "Binary expression nodes (like $AddNode$ ) have two input expression edges, named $ir\\_x$ and $ir\\_y$ .", "The data-flow aspects of merging multiple control-flow paths are handled by a $\\phi $ -node (abbreviating $ValuePhiNode$ ) for each value that is dependent on the path used to reach an associated merge node (e.g.", "$MergeNode$ ).", "The semantics of $\\phi $ -nodes is explained more fully in localcontrolflow, but note that a $\\phi $ -node has a pseudo-input edge called $ir\\_merge$ that references the merge node associated with the $\\phi $ -node, and a list of input edges $ir\\_values$ that is in one-to-one correspondence with the control-flow edges into that merge node.", "To illustrate how the structure of a node influences its semantics, consider an $IfNode$ .", "An $IfNode$ has one input edge for its boolean condition, and two successor edges, one to take when the condition evaluates to true and the other successor edge to take when it evaluates to false.", "In addition to explicit (named) input and successor fields, the Java Node classes use annotations and meta-data in each subclass to provide generic access functions for accessing the list of all inputs of an arbitrary subclass, and similarly for all successors.", "Such generic access is helpful for implementing visitor patterns that walk the graph, etc.", "In Isabelle, we provide the equivalent functionality by defining two functions over $IRNode$ , $inputs\\text{-}of$ and $successors\\text{-}of$ , in the following 
style, in which “$\\cdot $ ” represents list cons.", "inputsof We model an IR graph for a single method as a partial map ($$ ) from node $ID$ s to $IRNode$ s with a finite domain.", "graphdefnostamp A finite domain is a requisite property for code generation used by validation efforts (see validation), however, we have found reasoning to be more straightforward with total functions and hence we introduce the kind function, denoted $g\\langle \\langle nid \\rangle \\rangle $ , that is a total function that performs lookup on the underlying partial function, $g$ , resulting in $NoNode$ for identifiers with no mapping.", "In addition, we lift the domain function to the function $ids$ and introduce functions $inputs$ , $succ$ , $usages$ , and $predecessors$ that, given a graph and a node identifier, produce the sets of input, successor, usage, and predecessor node ids, respectively.", "There are several conditions that a graph $g$ should satisfy to be well-formed, such as being closed, i.e.", "all inputs and successors identify nodes within the graph (that is, within $ids~g$ ).", "The key invariants that we have needed so far are shown in graphinvar, and include the edge-closure properties, as well as the requirement that node zero should be the $StartNode$ for the method represented by the graph, and that all the nodes in the graph are proper nodes, rather than $NoNode$ .", "Additionally, end nodes need to have at least one usage which is treated as the pseudo-successor edge for an end node.", "The input edges of a merge node are used by $\\phi $ nodes to determine the value for a $\\phi $ node, the number of input edges of any $\\phi $ node must match the number of input edges of its associated merge node to ensure correct execution.", "We expect to add further invariants in the future as we prove deeper properties of the graph.", "Indeed, one of the expected benefits of this project is to discover important IR invariants that are currently implicit in the way that the GraalVM compiler constructs and uses the graph, and to: prove that those invariants are preserved by the graph transformations that represent optimizations; document those invariants explicitly and implement them in the Java code base so that they can be checked at runtime during test runs of the compiler.", "Figure: Isabelle well-formedness graph invariants.An $IRGraph$ represents a single method.", "In the GraalVM compiler, to uniquely identify a method, one needs not only its name but the class in which it is defined and the types of its parameters to handle method overloading (as in Java [11]).", "Together these form the method's signature, which is represented by the type $Signature$ .", "Programs are represented as a partial function from method signatures to their $IRGraph$ .", "programdef" ], [ "Data-flow semantics", "dataflow In a programming language like Java, expression evaluation may involve side effects, such as calling a method.", "The GraalVM, and hence our semantics, treats nodes that may have a side effect differently.", "These nodes are included in the control-flow graph so that they are evaluated as part of the control-flow semantics (see localcontrolflow) and hence the order of their evaluation is preserved.", "When one of these nodes (with node identifier $nid$ , say) is evaluated as part of the control flow semantics, the calculated value is saved under the node identifier $nid$ in a mapping $m$ from node identifiers to values, which we refer to as the method state.", "The data-flow semantics handles the 
evaluation of side-effect-free expressions, which are represented by a directed acyclic (sub-)graph (DAG), in which internal nodes are operators (with input arguments that are graph node ids) and leaf nodes are either constant nodes, parameter nodes, or control-flow nodes representing expressions that may have had side effects, e.g.", "a method invocation node.", "These control-flow nodes have their current value stored in the method state $m$ under their node identifier, with $m~nid$ giving the current value associated with (leaf) node $nid$ .", "The values of the parameters are given by a list of values $p$ , with $p_{[i]}$ giving the value of the $i^{th}$ parameter.", "Figure: Data-flow semantics for a subset of nodes.", "For a graph $g$ , method state $m$ , and list of parameter values $p$ , in our big-step operational semantics for expressions, an evaluation of a node $n$ to a value $v$ is represented as $[g,\ m,\ p]\ \vdash \ n\ \mapsto \ v.$ A sample of the 27 evaluation rules for data nodes is given in the data-flow semantics figure.", "Note that for the $AddNode$ , the $+$ is overloaded to use the Isabelle/HOL WORD library to add two fixed-size integers, so that integer arithmetic follows Java's two's-complement semantics with wrapping upon overflow.", "Each parameter node contains the index $i$ of its parameter in the formal parameter list, with $p_{[i]}$ giving the parameter's value.", "Control-flow nodes for expressions with side effects (such as $ValuePhiNode$ , $InvokeNode$ , $NewInstanceNode$ , $LoadFieldNode$ ) extract the current value of the node from the method state $m$ .", "Each of these node types also has a rule in the control-flow semantics that triggers their evaluation and updates $m$ with the result (see the local control-flow semantics below).", "The control-flow semantics requires the ability to evaluate a list of expressions, $nids$ , to a list of values, $vs$ , written $[g,~m,~p]~\vdash ~nids~\longmapsto ~vs$ (note the longer arrow), which is the obvious lifting of evaluation of a single expression to evaluate each expression in the list (not detailed for space reasons)."
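To make the shape of these evaluation rules concrete, the following is a minimal Python sketch of a big-step evaluator over a node map, covering only a handful of the 27 rules; it is an illustration rather than the Isabelle formalization, and the node classes, the wrap32 helper and the eval_node/eval_list names are hypothetical names introduced here only for the example.

from dataclasses import dataclass

@dataclass
class ConstantNode:
    value: int

@dataclass
class ParameterNode:
    index: int            # i, looked up as p[i]

@dataclass
class AddNode:
    x: int                # input node id
    y: int                # input node id

@dataclass
class InvokeNode:
    pass                  # side-effecting: its value lives in the method state m

def wrap32(v):
    # Java-style 32-bit two's-complement wrapping on overflow.
    v &= 0xFFFFFFFF
    return v - 0x100000000 if v >= 0x80000000 else v

def eval_node(g, m, p, nid):
    # Big-step evaluation [g, m, p] |- nid -> v over the expression DAG.
    node = g[nid]
    if isinstance(node, ConstantNode):
        return node.value
    if isinstance(node, ParameterNode):
        return p[node.index]
    if isinstance(node, AddNode):
        return wrap32(eval_node(g, m, p, node.x) + eval_node(g, m, p, node.y))
    if isinstance(node, InvokeNode):
        return m[nid]     # side-effecting leaves read their previously stored value
    raise NotImplementedError(type(node).__name__)

def eval_list(g, m, p, nids):
    # The "longer arrow" relation: evaluate a list of expression ids.
    return [eval_node(g, m, p, nid) for nid in nids]

For instance, with g = {0: ParameterNode(0), 1: ConstantNode(1), 2: AddNode(0, 1)}, the call eval_node(g, {}, [41], 2) returns 42.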
], [ "Local control-flow semantics", "To support object orientation, the semantics requires a heap to store objects.", "We define a heap in the form of a function $h$ that for an object reference $r$ and a field name $f$ gives the value of that field for that object, $h~r~f$ [1].", "Note that while the heap is always finite, its size is unbounded.", "Our heap representation is defined in the heap figure below.", "$Heap$ is a type that maps object references and field names to values.", "$DynamicHeap$ expands $Heap$ to track the next free object reference in the heap, $Free$ : each time a new object is instantiated, the current free object reference is used and the next free object reference is incremented.", "(The operation for allocating a new object could nondeterministically choose any unused object reference, but we have made it a deterministic function that allocates the next location to facilitate the use of Isabelle code generation facilities.)", "The supporting definitions, h-load-field, h-store-field, and h-new-inst, are used by the semantics of the load-field and store-field nodes in the control node semantics figure.", "Figure: Isabelle model of a heap and supporting definitions.", "The control-flow semantics local to a method is given by a small-step operational semantics.", "A configuration consists of a triple of the current node id, $nid$ , the method state, $m$ , as used in expression evaluation, and the heap, $h$ .", "The transition $[g,\ p]\ \vdash \ (nid,\ m,\ h)\ \rightarrow \ (nid^{\prime },\ m^{\prime },\ h^{\prime })$ can be read as, within the context of graph, $g$ , and list of parameter values, $p$ , an execution step can transition from configuration $(nid, m, h)$ to configuration $(nid^{\prime }, m^{\prime }, h^{\prime })$ .", "The node id acts as a program counter for the graph representation of the method.", "For a configuration, $(nid,m,h)$ , to be valid, $nid$ must be a control flow node within $g$ , $p$ must give values for all parameters to the current method, and $m$ gives the values for all control-flow nodes that represent expressions with side effects that have been reached in the current invocation of the method.", "Figure: Control Node Semantics.", "The control node semantics figure shows most of the rules for the local control-flow semantics — to save space we omit the load and store rules for static fields, where the object pointer is $None$ rather than $(Some\;obj)$ .", "A number of nodes have a control-flow behaviour of a no-op; we group them together as sequential nodes.", "Their semantics is a transition from the current node to the node attached to the only successor edge.", "An $IfNode$ chooses to transition to the first ($tb$ ) or second ($fb$ ) successor edge based on the evaluation of the condition expression.", "int fact(int n) { int result = 1; while (n > 1) { result *= n; n = n - 1; } return result; }", "Figure: Example factorial program transformed into a GraalVM IR graph.", "Our approach to handling $\phi $ nodes is similar to that used by Demange et al. for their formalization of reasoning about the sea of nodes in Coq [4].", "End nodes represent the end of a basic block in SSA terminology.", "Each end node forms an input to a merge node and each merge node has an associated set of $\phi $ nodes, each of which represents a value that is dependent on which path was taken to reach the end node, and hence the merge node.", "When an end node is reached, the method state $m$ of each associated $\phi $ node is
updated with the value of its associated expression DAG in the current state, $m$ .", "This process is best explained via the example in the factorial IR graph figure, in which nodes 3, 10 and 19 are constant nodes, node 20 is an AddNode, node 18 is a MulNode, node 11 is an IntegerLessThanNode, $\phi $ -node 8 represents the value of the local variable result, and node 1 corresponds to the parameter n, which provides the initial value of $\phi $ -node 7, which represents the variable n within the loop.", "The ProxyNode 15 is the value of the $\phi $ -node 8 (i.e.", "result) but has an additional dependency on the LoopExitNode 14 to ensure the value is that after loop exit.", "Note that the value of AddNode 20 is calculated using the inputs constant -1 and the $\phi $ -node 7, representing the previous value of n, to give the new value of the $\phi $ -node 7 (hence the double-headed arrow between nodes 7 and 20).", "We are given $merge$ , the id of the merge node LoopBegin 6; the usage $\phi $ nodes of $merge$ , $phis$ = [$\phi _1$ 7, $\phi _2$ 8 ]; the input end nodes of $merge$ , $ends$ = [End 5, LoopEnd 21 ]; the inputs of $\phi _1$ 7 excluding $merge$ , [ParameterNode P(0) 1, AddNode + 20 ]; and the inputs of $\phi _2$ 8 excluding $merge$ , [ConstantNode C(1) 3, MultiplyNode * 18].", "When $End$ 5 is reached, we evaluate the first input edge of all $phis$ in the original method state, $m$ , i.e.", "for 1, $[g,m,p] \vdash P(0) \mapsto r_1$ and for 3, $[g,m,p] \vdash C(1) \mapsto 1$ , and then update $m$ to map each $\phi $ node to the value of its evaluated expression, i.e.", "$m^{\prime }(\phi _1) = r_1$ and $m^{\prime }(\phi _2) = 1$ .", "When $LoopEnd$ 21 is reached, we evaluate the second input edge of all $phis$ in the original method state, $m$ , i.e.", "for 20, $[g,m,p] \vdash AddNode({7},{19}) \mapsto r_1$ and for 18, $[g,m,p] \vdash MulNode({8}, {7}) \mapsto r_2$ .", "Note that when the evaluation reaches a $\phi $ node, it refers to the (previous) value of the $\phi $ node in $m$ , i.e.", "$m(\phi )$ .", "We then update $m$ to map each $\phi $ node to the value of its evaluated expression, i.e.", "$m^{\prime }(\phi _1) = r_1$ and $m^{\prime }(\phi _2) = r_2$ .", "More generally, a merge node may have a list of input end nodes, $ns$ , and any number of associated $\phi $ nodes, each of which has a list of input expressions, each of which is of the same length as $ns$ .", "When the merge node is reached via its $i^{th}$ input end node, the value of each associated $\phi $ node is updated within $m$ to the value of the $(i+1)^{th}$ input expression of the $\phi $ node using method state $m$ (the $i+1$ offset is because input edge zero of a $\phi $ node is used to connect to its merge node).", "When a $NewInstanceNode$ is reached in the control flow, space is allocated in the heap for a new object $ref$ using the h-new-inst function (see the heap figure).", "The value associated with the $NewInstanceNode$ is updated in $m^{\prime }$ to the new object reference $ref$ so that subsequent data-flow evaluations of the $NewInstanceNode$ evaluate to $ref$ .", "A $LoadFieldNode$ contains a field name $f$ and an optional input edge to a node that must evaluate to an object reference, $obj$ .", "The h-load-field function (see the heap figure) reads the value from the heap based on the object reference and field name.", "The resulting value, $v$ , is then stored in $m^{\prime }$ under the node id of the $LoadFieldNode$ so that subsequent data-flow evaluations of the $LoadFieldNode$ result in $v$ .", "Similar to the $LoadFieldNode$ , the
$StoreFieldNode$ contains a field identifier, $f$ , and an optional input edge to a node which must evaluate to an object reference, $obj$ .", "A $StoreFieldNode$ also has an input edge to a node, $newval$ , that is evaluated to a value, $val$ , which is stored in the heap.", "The h-store-field function (see the heap figure) stores $val$ in the updated heap, $h^{\prime }$ , corresponding to the field $f$ and object reference, $obj$ .", "Note that null pointer dereferences are checked by a separate (dominating) $GuardNode$ (not covered in this paper) and hence null pointer dereferences are not an issue for load and store field.", "To save space, we omit load and store for static fields — these do not evaluate an object reference." ], [ "Global control-flow semantics", "The local control-flow semantics above only handles control flow within a single method.", "To handle method calls and returns, we lift the semantics to a richer global configuration that consists of a pair, $(stk,h)$ , containing a stack, $stk$ , of local configurations for each called but not yet returned method and a global heap, $h$ .", "The stack contains tuples of the form $(g,nid,m,p)$ , in which $g$ represents the method's graph, $nid$ is a node id (the program counter) within $g$ , $m$ is the method state, and $p$ is the list of parameter values, as for the data-flow semantics.", "The $IRGraph$ of the method with signature $s$ in program $P$ (of type $Program$ ) is given by $P~s$ .", "The interprocedural semantics figure gives a small-step semantics for global control flow.", "Given a program $P$ , a transition of the form $P \vdash (stk, h) \longrightarrow (stk^{\prime }, h^{\prime })$ represents a step from a configuration stack $stk$ and heap $h$ to a new stack $stk^{\prime }$ and heap $h^{\prime }$ .", "Stacks are represented as lists, so $(g,nid,m,p) \cdot stk$ represents a stack with top as the local configuration $(g,nid,m,p)$ and remainder of the stack, $stk$ .", "Figure: Interprocedural Semantics.", "Local control-flow transitions are promoted to global control-flow transitions in which the top of stack is updated according to the local transition step.", "For an $InvokeNode$ , its list of actual parameter expressions, $arguments$ , is evaluated to give the list of parameter values, $p^{\prime }$ .", "The method state $m^{\prime }$ for the invoked method is initially empty (denoted new-map-state).", "The method being invoked is determined by the $MethodCallTargetNode$ , which is attached via an input edge to an $InvokeNode$ .", "The $MethodCallTargetNode$ contains the signature, targetMethod, of the invoked method.", "A new local configuration consisting of the graph of the invoked method, $targetGraph$ , a method start node id of zero, the method state $m^{\prime }$ , and the list of parameter values $p^{\prime }$ is pushed onto the stack.", "For a $ReturnNode$ , the return expression is optional.", "Here we only consider the case in which there is some return expression.", "The return value, $v$ , is calculated using the top-of-stack graph $g$ , method state $m$ and parameters $p$ (i.e.", "the called method).", "The second stack element is a local configuration containing the graph of the calling method, $cg$ , the id of the invocation node, $cnid$ , the method state at the point of call, $cm$ , and the parameters of the calling method, $cp$ .", "The top two elements of the stack are replaced by a single local configuration consisting of the calling method's graph $cg$ , the successor
$cnid^{\\prime }$ of invocation node $cnid$ , a new method state $cm^{\\prime }$ that updates $cm$ to map the invocation node $cnid$ to the returned value, $v$ , and the parameters to the calling method, $cp$ .", "Certain methods can result in exceptions rather than regular returned values.", "Calls to these methods are made using the $InvokeWithExceptionNode$ .", "The invocation of these methods is handled with the same semantics as $InvokeNode$ .", "An $UnwindNode$ () indicates that an exception has been thrown.", "The control-flow path when an $UnwindNode$ is reached is determined by the exEdge successor of the calling $InvokeWithExceptionNode$ .", "The $InvokeWithExceptionNode$ is the node on the second top of the stack when an $UnwindNode$ is reached.", "The top two elements of the stack are replaced by a single local configuration consisting of the graph of the calling method, $cg$ , the exEdge successor of the $InvokeWithExceptionNode$ , and the method state $cm$ updated so that the $InvokeWithExceptionNode$ maps to the object reference $e$ of the exception that was thrown." ], [ "Validation of Execution Semantics", "validation The GraalVM compiler contains thousands of unit test cases, and many of these define a standalone method.", "Each test checks that its unoptimized and optimized execution give the same result.", "We have added code to intercept such tests and translate the unoptimized IR graph, the input parameter values, and the expected result into our Isabelle IR graph notation.", "We can then use Isabelle's code generation mechanism to execute the Isabelle IR graph of the method with the given input parameters, and check if the result matches.", "We have translated and executed over 1400 of these unit tests so far, and after fixing a minor boolean-to-integer conversion issue and adding support for initializing static fields before the method is called, they all return the expected result.", "This gives us some initial confidence that our execution semantics corresponds to the GraalVM IR semantics.", "Any remaining differences will become apparent during the correctness proofs of optimizations." 
], [ "Proving Optimizations", "optimization The GraalVM compiler contains a comprehensive canonicalization phase.", "Subsequent optimization phases rely on the canonicalization phase to minimize the forms which an IR can take.", "The majority of the canonicalization optimizations do not rely on additional static analysis processes, so are good case studies for the process of proving local optimizations.", "A canonicalization of a data-flow node within a graph $g_1$ , replaces a data-flow node in $g_1$ at $nid$ with a new node and may introduce additional new nodes with fresh node ids to form a new graph $g_2$ .", "The replacement must maintain the property that the subgraph is acyclic.", "While the new node at $nid$ may no longer reference some node ids that the original node at that position did, the unreferenced nodes are left in the graph because there may be other references to those nodes elsewhere in graph.", "To show the correctness of these forms of canonicalization optimizations, noting that expression evaluation has been shown to be deterministic, it is sufficient to show that for all method states $m$ , evaluating the new node at $nid$ gives the same value as evaluating the old node at $nid$ , i.e.", "$\\forall m,~p ~.~ ([g_1,~m,~p]~\\vdash g_1 nid \\mapsto v) \\longleftrightarrow ([g_2,~m,~p]~\\vdash g_2 nid \\mapsto v).$ For example, we have completed proofs of correctness of optimizations of conditional expressions (Java's (c ?", "v1 : v2)).", "As an example of a canonicalization of the control-flow graph, we define a set of optimizations for the $IfNode$ in canonicalifrules.", "We show the optimization where an $IfNode$ with a constant condition is replaced by a $RefNode$ to either the true or false branch, where a $RefNode$ is a sequential node that just transitions to its successor.", "In addition, we give the optimization where both successor edges of the $IfNode$ are equal, replacing with a $RefNode$ to one of the (equal) branches.", "Note that these optimizations bypass the condition evaluation but as that is side effect free, it is of no consequence.", "Figure: Canonicalization rules for an IfNodeIfNodeWe prove that the canonicalization rules are correct by showing that, given: a node, $before$ , where $g\\langle \\langle nid\\rangle \\rangle $ = $before$ ; that $before$ can be canonicalized to the node $after$ ; a graph, $g^{\\prime }$ , where the node at $nid$ has been replaced by $after$ ; then we can prove that $g^{\\prime }$ has the same behaviour as $g$ starting from node $nid$ in both graphs.", "Thus far, we have encoded and proved exploratory components of the canonicalization phase and the entirety of the conditional elimination phase allowed by our subset of nodes.", "The techniques used for the requisite static analysis during the conditional elimination phase are to be the subject of future papers." 
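To convey the flavour of these rules outside Isabelle, here is a small Python sketch of the two IfNode canonicalizations described above; the node classes and the dictionary-based graph are simplifications introduced only for this illustration and do not reflect the exact GraalVM or Isabelle definitions.

from dataclasses import dataclass

@dataclass
class ConstantNode:
    value: int

@dataclass
class IfNode:
    condition: int        # input node id of the condition expression
    true_succ: int        # first successor (tb)
    false_succ: int       # second successor (fb)

@dataclass
class RefNode:
    successor: int        # sequential no-op that just falls through

def canonicalize_if(g, nid):
    # Return a replacement node for g[nid] if a rule applies, else None.
    node = g[nid]
    if not isinstance(node, IfNode):
        return None
    cond = g[node.condition]
    # Constant condition: jump straight to the selected branch; skipping the
    # condition evaluation is sound because it is side-effect free.
    if isinstance(cond, ConstantNode):
        return RefNode(node.true_succ if cond.value != 0 else node.false_succ)
    # Both branches identical: the condition cannot influence the behaviour.
    if node.true_succ == node.false_succ:
        return RefNode(node.true_succ)
    return None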
], [ "Related Work", "RelatedWorks The closest research to that presented here is the work of Demange et al.", "[4] who provide the semantics of an abstract sea-of-nodes representation in Coq, which focuses on the semantics of $\\phi $ nodes and regions.", "The semantics is used to prove a semantic property and a simple optimization transformation.", "Their formalization allows properties of the abstract sea-of-nodes representation to be proven in isolation.", "We offer a variant of this semantics that matches the concrete implementation of a production compiler, and we extend the approach to handle interprocedural calls and a heap-based object model.", "Two notable verified compiler projects are CompCert [9], for a subset of C verified in Coq, and CakeML [7], for a subset of ML verified in HOL4.", "These are both substantial projects verifying end-to-end correctness of their respective compilers from source code to generated machine code.", "Unlike these projects, this project targets only the optimization phase of the compiler, a common source of issues, rather than full end-to-end verification.", "JinjaThreads [12] is a substantial formalization effort of the Java language semantics in Isabelle/HOL.", "Unlike our project, JinjaThreads focuses on directly formalizing the language semantics, rather than a language-agnostic IR.", "As the GraalVM IR is implemented in Java, one plausible approach to our project would be to use the JinjaThreads formalization to prove optimizations correct.", "However, such proofs would have been undoubtedly laborious, so we have instead chosen to introduce a semantics to capture the IR semantics directly and allow optimizations to be more easily expressed and proved.", "VeLLVM [15] formalizes the LLVM [8] IR semantics using the Coq proof assistant.", "While the approach is similar, the target IR is substantially different.", "LLVM shares some common properties such as being in SSA form, but the GraalVM IR is a sea-of-nodes graph structure that unifies a program's control-flow and data-flow, while the LLVM IR is in traditional basic block SSA form.", "K-LLVM [10] is another formalization effort for the LLVM IR that does not directly expand on VeLLVM but expands the formalized feature set by offering a separate formalization implemented in $\\mathbb {K}$ .", "$\\mathbb {K}$ is a framework designed for formalizing language semantics, which can produce language interpreters as well as export to Isabelle/HOL to allow proofs based on the specification." ], [ "Conclusions", "conclusions We have described an Isabelle model and execution semantics for the sophisticated sea-of-nodes graph structure [3] that is used as the internal representation in the GraalVM optimizing compiler [5].", "Additionally, we have proved several suites of local optimizations correct according to the semantics.", "In future work, we plan to tackle more global optimizations that transform the input graph in more complex ways.", "In the longer term, we also want to explore expressing optimizations in a high-level notation that can more easily be transformed into Isabelle (for correctness proof purposes) as well as into Java code that implements the graph transformation, in order to have a tight connection between the Java and Isabelle graph transformations." 
], [ "Acknowledgements", "Mark Utting's position and Brae Webb's scholarship are both funded in part by a gift from Oracle Labs.", "Thanks especially to Cristina Cifuentes, Paddy Krishnan and Andrew Craik from Oracle Labs Brisbane for their helpful feedback, and to the Oracle GraalVM compiler team for answering questions.", "Thanks to Chris Seaton for helping us extend the SeaFoam IR visualization tool to output the graph in Isabelle syntax.", "Thanks also to Kristian Thomassen for his work on the semantics of $\\phi $ -nodes and Sadra Bayat Tork who investigated IR graph invariants in the GraalVM compiler." ] ]
2107.01815
[ [ "Sample Efficient Reinforcement Learning via Model-Ensemble Exploration\n and Exploitation" ], [ "Abstract Model-based deep reinforcement learning has achieved success in various domains that require high sample efficiencies, such as Go and robotics.", "However, there are some remaining issues, such as planning efficient explorations to learn more accurate dynamic models, evaluating the uncertainty of the learned models, and more rational utilization of models.", "To mitigate these issues, we present MEEE, a model-ensemble method that consists of optimistic exploration and weighted exploitation.", "During exploration, unlike prior methods directly selecting the optimal action that maximizes the expected accumulative return, our agent first generates a set of action candidates and then seeks out the optimal action that takes both expected return and future observation novelty into account.", "During exploitation, different discounted weights are assigned to imagined transition tuples according to their model uncertainty respectively, which will prevent model predictive error propagation in agent training.", "Experiments on several challenging continuous control benchmark tasks demonstrated that our approach outperforms other model-free and model-based state-of-the-art methods, especially in sample complexity." ], [ "INTRODUCTION", "Reinforcement learning (RL) has achieved many remarkable results in recent years, including outperforming human performances on video games [1], [2], [3], mastering the game of GO [4], playing card games [5], and even learning manipulation skills for robotics [6], [7].", "RL methods used in these domains are commonly divided into two categories: model-based RL and model-free RL.", "Model-based RL methods build a predictive model of the environment to assist the optimization of a policy or a controller, while in contrast model-free methods learn directly from agent’s interactions with the environment.", "Model-free RL has shown to obtain better asymptotic performance in the face of sequential decision-making problems, while the successes are highly sample inefficient [8].", "Model-based RL can significantly reduce the sample complexity by learning a model of the dynamics to do planning or generate simulations, which are substantially less expensive than interactions in a real environment.", "This property guarantees the potential of model-based RL in applications where sample efficiency plays a crucial role such as in robotics and automatic driving.", "Recently, significant improvements have been made in more sample-efficient model-based RL algorithms [9], [10], [11], [12].", "However, there is a substantial challenge for model-based RL, i.e., the model bias.", "With limited observation of the real environment, we can hardly build an accurate predictive model especially in high-dimensional tasks, typically resulting in performance degradation of the learned policy [13].", "This is because the model bias can be propagated and lead to an increase in overall error during agent training.", "There are two sources of model bias: aleatoric uncertainty and epistemic uncertainty [10], [14].", "Concretely, aleatoric uncertainty is due to stochasticity of the dynamics or random noise during interactions, while epistemic uncertainty results from the lack of sufficient training data.", "One way to address the model bias is to use an ensemble of predictive models.", "One example is PETS [10] which was proposed to do trajectory optimization by introducing a ensemble of 
probabilistic models.", "ME-TRPO [9] leveraged an ensemble of models to maintain the model uncertainty and regularize the learning process by policy validation.", "Pathak, et al.", "[12] further proposed a method to support more effective exploration via model disagreement based on uncertainty estimates from ensembles.", "However, most prior researches have used ensemble methods to reduce model bias or guide exploration in isolation.", "In this paper, we present MEEE, an ensemble method that is compatible with most of Dyna-style [15] RL algorithms.", "MEEE consists of the following key ingredients: Random initialization: We initialize the model parameters randomly in order to enforce diversity between ensemble dynamic models, which stabilizes the learning process and improves performance.", "Optimism-based exploration: To further reduce the sample complexity and guide exploration in a more efficient manner, we propose a formulation for exploration inspired by the work in active learning literature [16] and upper-confidence bound method [17].", "Our method selects optimal actions from an action candidates set and encourage exploration by adding an extra bonus for unconversant state-action space, where the model ensemble generates uncertain predictions.", "Uncertainty-weighted exploitation: Model bias can be exploited and propagated during agent training, which would lead to a deviation between the learned policy under the model ensemble and the true optimal policy under the real dynamic.", "To handle this issue, we assign different penalty coefficients to the augmented samples according to their uncertainty estimates under the model ensemble.", "Specifically, for those imagined samples with high uncertainty, the update of parameters is supposed to be more cautious, which can significantly mitigate error propagation when combining model-free methods with model-augmented transition samples.", "We demonstrate the effectiveness of MEEE for continuous control benchmarks, i.e., OpenAI Gym [18] and MuJoCo [19].", "In our experiments, MEEE consistently improves the performance of state-of-the-art RL methods and outperforms baselines, including SAC [20] and MBPO [11]." ], [ "Related Work", "RL algorithms are well-known for their ability to acquire behaviors through trial-and-error with the environment [21].", "However, such online data collection usually requires high sample complexity, which limits the flexibility of RL algorithms when applied in real-world applications.", "Although off-policy algorithms [22], [20], [23] can in principle be applied to an offline setting [24], they perform poorly in practice due to the limit of generalization to unseen states [25], and can even pose risks in safety-critical settings [26].", "To overcome these challenges, model-based RL presents an alternate set of approaches involving the learning of approximate dynamics models which can subsequently be used for policy search.", "Existing work on model-based RL utilizes generic priors like smoothness and physics [27], [28] for model learning, or directly models the dynamics by Gaussian processes [29], local linear models [30], and neural network function approximators [31], [32].", "With a learned model, we are able to overcome the exploration-exploitation dilemma in RL." 
], [ "Model-based exploration", "Since the dynamics prediction model can reflect the agent’s capability of predicting the consequences of its behavior, the prediction error can be utilized to provide intrinsic exploration rewards (curiosity) [33].", "The intuition behind this is that the higher the prediction error, the less familiar the predictive model is with that state-action pair.", "And this can be further inferred that the faster the error rate drops, the faster the model learning progress will be [34].", "The IAC [35] approach outlines the idea of estimating learning progress and assigning intrinsic exploration bonuses through a forward dynamics prediction model.", "Considering the poor behavior when training the dynamics prediction model directly on raw pixels, Stadie et al.", "[36] estimated the exploration reward in the encoding space by means of an autoencoder.", "In contrast, ICM [37] utilizes a self-supervised inverse dynamics model to approximate the state-space encoding function, robust to the uncontrollability of the environment.", "In addition to the aforementioned modeling approaches, the environment dynamics can also be modeled by variational inference.", "By doing so, VIME [38] was proposed to maximize the information gain in the learning process of the prediction model, measured by the reduction of entropy.", "In order to overcome the bias of one single predictive model, Pathak et al.", "[12] proposed to use an ensemble of models to model the environment dynamics and calculated the disagreement among different model outputs as the additional exploration bonus." ], [ "Model-based exploitation", "Model-based RL algorithm is a powerful tool to reduce the sample complexity by doing simulation or planning with the learned prediction model, which means that it is a good solution to the problem of insufficient and costly samples.", "PILCO [29] learns a probabilistic model through Gaussian process regression, which can express the uncertainty of environment well and make the model-based method more adaptable to complex environments.", "Deep-PILCO [32] generalizes to more complex environments by introducing Bayesian neural network (BNN) with high-capacity function approximators for neural networks.", "Model ensemble methods are further introduced to capture the uncertainty of predictions more comprehensively, which can enhance the explanatory power of the model and improve the robustness of learning policies.", "By modularly integrating the environment models into model-free method, the sample efficiency of the original model-free algorithm can be significant improved.", "For example, Dyna-style methods [15], [9], [11] use the models to generate augmented samples.", "PETS [10] directly uses the models for planning instead of performing explicit policy learning.", "However, all of these model-based methods suffer from model bias, and the performance degrades with the compounding error, resulting in bad decision policies.", "MVE [39] and STEVE [40] try to solve this problem by using short-horizon model-based rollouts and incorporating data from these rollouts into value estimation." 
], [ "Background", "We consider a Markov decision process (MDP [41]), defined by $\\left(\\mathcal {S}, \\mathcal {A}, \\mathcal {P}, \\mathcal {R}, \\gamma , \\rho \\right)$ .", "Here $\\mathcal {S} \\subseteq \\mathbb {R}^{n}$ and $\\mathcal {A} \\subseteq \\mathbb {R}^{m}$ stand for the state and action spaces respectively, $\\mathcal {P}: \\mathcal {S} \\times \\mathcal {A} \\rightarrow \\mathcal {S}$ is a deterministic transition function to depict the dynamic, $\\mathcal {R}: \\mathcal {S} \\times \\mathcal {A} \\rightarrow \\mathbb {R}$ is a bounded reward function, $\\gamma \\in (0,1)$ is the discount factor, and $\\rho $ is the initial state distribution.", "The agent would select an action $a_{t}$ according to a policy $\\pi _{\\theta }: \\mathcal {S} \\rightarrow \\mathcal {A}$ , while the initial state $s_{0} \\sim \\rho $ .", "This generates a trajectory of states, actions, and rewards $\\tau =\\left(s_{0}, a_{0}, r_{0}, s_{1}, a_{1}, \\ldots \\right)$ , where $a_{t} \\sim \\pi _{\\theta }\\left(.", "\\mid s_{t}\\right)$ , $r_{t}=\\mathcal {R}\\left(s_{t}, a_{t}\\right)$ , and $s_{t+1}=\\mathcal {P}\\left(s_{t}, a_{t}\\right)$ .", "The goal of reinforcement learning is to find the optimal policy $\\pi ^{*}$ that maximizes the cumulative expected return, denoted by $\\eta (\\theta )$ : $\\pi ^{*}=\\underset{\\pi }{\\operatorname{argmax}}\\ \\eta (\\theta )=\\underset{\\pi }{\\operatorname{argmax}}\\ E_{\\tau }\\left[\\sum _{t=0}^{+\\infty } \\gamma ^{t} r_{t}\\right]$ Usually, model-based reinforcement learning methods assume the dynamic $\\mathcal {P}$ and reward function $\\mathcal {R}$ are unknown, and aim to construct a model $\\mathcal {P}_{\\phi }\\left(s_{t+1}, r_{t}|s_{t}, a_{t}\\right)$ to approximate the transition distribution and reward function by collecting interaction data with the true dynamics.", "Figure: Overview of the MEEE algorithm.", "By adopting optimism-based exploration, we continuously collect samples into the environment replay buffer B env B_{env}, which are used for training a set of BNN models.", "Then we sample from the model ensemble 𝒫 φ \\mathcal {P}_{\\phi } to fill into a model replay buffer B model B_{model}, and assign different weights to these generated samples, based on which the agent is gradually optimized." ], [ "MEEE", "We present MEEE: Model-Ensemble Exploration and Exploitation.", "Summarized generically in Algorithm REF , MEEE executes exploratory actions to learn a more accurate ensemble model, optimizes the agent with imaginary data generated by the trained model, uses the updated policy with the ensemble model to evaluate the expected returns of the action candidates, then continues to determine the exploratory action iteratively.", "The full procedure is illustrated in Figure REF .", "However, errors in the model predictions can be exploited during agent training, resulting in discrepancies between the optimal policy under the model and under the true environment.", "To mitigate this issue, we consider to assign discounted weights to the imagined data according to their uncertainty estimates.", "Instantiating MEEE amounts to specifying three design decisions: (1) the parameterization and training of the predictive model $\\mathcal {P}_{\\phi }$ , (2) the optimization algorithm for the agent, and (3) how to take most advantages of the predictive model for high sample efficiency." 
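As a small worked illustration of the objective $\eta (\theta )$ defined in the background above, the following toy Python helper computes the discounted return of one sampled trajectory; it is only an illustrative utility, not part of the MEEE implementation.

def discounted_return(rewards, gamma=0.99):
    # Monte-Carlo estimate of sum_t gamma^t * r_t for a single trajectory.
    total, discount = 0.0, 1.0
    for r in rewards:
        total += discount * r
        discount *= gamma
    return total

# Example: discounted_return([1.0, 1.0, 1.0], gamma=0.5) == 1.75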
], [ "Model ensemble", "In our work, we set $\\mathcal {P}_{\\phi }$ (termed a model ensemble) as an ensemble of dynamics models $\\left\\lbrace p_{\\phi }^{1}, \\ldots , p_{\\phi }^{I}\\right\\rbrace $ , where $p_{\\phi }^{i}\\left(.\\mid s_{t}, a_{t}\\right)= \\mathcal {N}\\left(\\mu _{\\phi }^{i}\\left(s_{t}, a_{t}\\right), \\Sigma _{\\phi }^{i}\\left(s_{t}, a_{t}\\right)\\right)$ is a BNN which assumes the outputs obey a Gaussian distribution with diagonal covariance.", "The use of model ensemble is a general setting in many researches [38], [9], [12], and is proven to be an efficient way to handle both of aleatoric uncertainty and epistemic uncertainty [10].", "With different random initialization, we train each of the model separately based on the environment replay buffer $B_{env}$ via straightforward maximum likelihood estimation that minimizes the prediction error: $\\mathcal {L}_{\\phi }^{i}=\\frac{\\sum _{\\tau _{t} \\in B_{env}}\\left[\\left\\Vert \\left(s_{t+1},r_{t}\\right)-\\left(\\hat{s}_{\\left(t+1, i\\right)}, \\hat{r}_{\\left(t, i\\right)}\\right)\\right\\Vert _{2}^{2}\\right]}{|B_{env}|}$ where $\\tau _{t}=\\left(s_{t}, a_{t}, r_{t}, s_{t+1}\\right)$ is the ground truth, $\\left(\\hat{s}_{(t+1, i)}, \\hat{r}_{(t, i)} \\right) \\sim p_{\\phi }^{i}\\left(s_{t}, a_{t}\\right)$ , and $i\\in \\left\\lbrace 1,\\cdots ,I\\right\\rbrace $ .", "Standard techniques such as back-propagation through time (BPTT) and reparametrization trick [42] are followed to facilitate the model learning.", "[thp!]", "Initialize policy $\\pi _{\\theta }$ , Q-function $Q_{\\zeta }$ , predictive model ensemble $\\mathcal {P}_{\\phi }$ , augmentation function $f$ , environment replay buffer $B_{env}$ , and model replay buffer $B_{model}$ N epochs Train model $\\mathcal {P}_{\\phi }$ on $B_{env}$ via maximum likelihood each step $t$ Generate a base action according to the policy: $a_{t} \\sim \\pi _{\\theta }\\left(.", "\\mid s_{t}\\right)$ OPTIMISM-BASED EXPLORATION Collect $K$ action candidates using augmentation: $\\left\\lbrace a_{t, i}=f(a_{t},i)\\mid i\\in [1 ; K]\\right\\rbrace $ Choose the optimal action in the candidates set: $a^{*}_{t}=\\arg \\max _{a \\in \\lbrace a_{t,i}\\rbrace }\\left\\lbrace Q_{\\zeta }\\left(s_{t}, a\\right)+\\lambda \\hat{V}\\left(s_{t}, a\\right)\\right\\rbrace $ Store transition $\\tau _{t}=\\left(s_{t}, a^{*}_{t}, r_{t}, s_{t+1}\\right)$ : $B_{env}\\leftarrow B_{env}\\cup \\lbrace \\tau _{t}\\rbrace $ M model rollouts Sample $\\hat{s}_{0}=s_{t} \\sim B_{env}$ as an initial state $i \\in [0; k-1]$ steps Simulation in model $\\mathcal {P}_{\\phi }$ based on the policy: $\\hat{a}_{i} \\sim \\pi _{\\theta }\\left(.", "\\mid \\hat{s}_{i}\\right)$ Store transitions and respective weights: $B_{model}\\leftarrow B_{model}\\cup \\lbrace \\left(\\hat{\\tau }_{i}, w(\\hat{\\tau }_{i})\\right)\\rbrace $ G gradient updates UNCERTAINTY-WEIGHTED EXPLOITATION Update $Q_{\\zeta }$ and $\\pi _{\\theta }$ by minimizing $\\mathcal {L}_{\\text{critic }}^{\\text{MEEE }}(\\zeta )$ and $\\mathcal {L}_{\\text{actor }}^{\\text{MEEE }}(\\theta )$ Model-Ensemble Exploration and Exploitation (MEEE)" ], [ "Soft Actor-Critic", "Soft actor critic (SAC) algorithm [20] is a state-of-the-art off-policy algorithm for continuous control problems.", "The training process of SAC parameters alternates between a soft policy evaluation and a soft policy improvement.", "At the soft policy evaluation step, SAC optimizes the critic by minimizing the Bellman residual, i.e., the critic loss: 
$\\begin{aligned}\\mathcal {L}_{\\text{critic }}^{\\text{SAC }}(\\zeta )&=\\mathbb {E}_{\\tau _{t} \\sim B} \\left[\\mathcal {L}_{Q}\\left(\\tau _{t}, \\zeta \\right)\\right],\\\\\\mathcal {L}_{Q}\\left(\\tau _{t}, \\zeta \\right)&=\\left[Q_{\\zeta }\\left(s_{t}, a_{t}\\right)-\\bar{Q}_{t}\\right]^{2}\\end{aligned}$ where $\\tau _{t}$ is a transition sample, $B$ is a replay buffer, $\\bar{Q}_{t}$ is the temporal difference (TD) target, and the Q-function parameterized by $\\zeta $ is a neural network to approximate the cumulative expected return starting from $(s_t, a_t)$ .", "With the delayed parameters $\\bar{\\zeta }$ and a temperature hyperparameter $\\alpha $ , the TD target can be calculated by the following equation: $\\bar{Q}_{t}=r_{t}+\\gamma \\mathbb {E}_{a_{t+1} \\sim \\pi _{\\theta }}\\left[\\bar{Q}_{t+1}\\right]$ where $\\bar{Q}_{t+1}$ is modified by adding an entropy-based regularization term: $\\bar{Q}_{t+1}=Q_{\\bar{\\zeta }}\\left(s_{t+1}, a_{t+1}\\right)-\\alpha \\log \\pi _{\\theta }\\left(a_{t+1} \\mid s_{t+1}\\right)$ At the soft policy improvement step, SAC updates the policy $\\pi _{\\theta }$ by minimizing the actor loss: $\\begin{aligned}\\mathcal {L}_{\\text{actor }}^{\\text{SAC }}(\\theta )&=\\mathbb {E}_{s_{t} \\sim B}\\left[\\mathcal {L}_{\\pi }\\left(s_{t}, \\theta \\right)\\right], \\\\\\mathcal {L}_{\\pi }\\left(s_{t}, \\theta \\right)&=\\mathbb {E}_{a_{t} \\sim \\pi _{\\theta }}\\left[\\alpha \\log \\pi _{\\theta }\\left(a_{t} \\mid s_{t}\\right)-Q_{\\zeta }\\left(s_{t}, a_{t}\\right)\\right]\\end{aligned}$ where the policy $\\pi _{\\theta }$ is modeled with a BNN.", "Our work updates the agent parameters by using the SAC algorithm in a Dyna [15] style, i.e., the replay buffer $B$ set to be the $B_{model}$ , which collects transitions sampling from $\\mathcal {P}_{\\phi }$ ." 
], [ "Uncertainty quantification of model ensemble", "Model-based RL aims to find an optimal policy that achieves best expected return with high sample-efficiency.", "The key of sample efficiency lies in the learned model, but there exists one question: how confident is the model prediction?", "To answer this question, uncertainty quantification on state-action pairs is a prerequisite to evaluate the belief of the agent over the prediction model.", "Building upon the wealth of prior work [43], [44], [12], we develop and leverage proper uncertainty estimates that particularly suit the ensemble setting.", "Specifically, we leverage the variance over the model ensemble to quantify the uncertainty in a self-supervised manner: $\\begin{aligned}\\hat{V}\\left(s_{t}, a_{t}\\right) &\\triangleq \\operatorname{Var}\\left(\\left\\lbrace \\mu _{\\phi }^{i}\\left(s_{t}, a_{t}\\right) \\mid i \\in [1 ; I]\\right\\rbrace \\right) \\\\&=\\frac{1}{I-1} \\sum _{i}\\left(\\mu _{\\phi }^{i}\\left(s_{t}, a_{t}\\right)-\\mu ^{\\prime }\\right)^{2}\\end{aligned}$ where $\\mu ^{\\prime }\\triangleq \\frac{1}{I} \\sum _{i} \\mu _{\\phi }^{i}\\left(s_{t}, a_{t}\\right)$ .", "High uncertainty indicates low confidence in prediction and thus requires more active exploration and conservative exploitation.", "As a result, there are two main usages of the uncertainty estimates: providing an additional bonus to encourage exploration and discounted weights to prevent model bias.", "Figure: Illustration of the uncertainty estimate, which partitions the state-action space into three parts with the support of real samples.", "As the number of samples in a certain area increases, we would learn more knowledge about that region.", "In order to obtain a greater information gain between the new belief over the prediction model to the old one, the agent is expected to take a new action in the uncertain region." 
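A minimal NumPy sketch of the ensemble-variance estimate $\hat{V}\left(s_{t}, a_{t}\right)$ defined above could look as follows; the list of per-model mean predictors and the reduction to a scalar by averaging over output dimensions are assumptions made for the example.

import numpy as np

def ensemble_uncertainty(ensemble_means, s, a):
    # ensemble_means: list of callables mu_i(s, a) returning the predicted mean.
    mus = np.stack([mu(s, a) for mu in ensemble_means])   # shape (I, output_dim)
    # Unbiased sample variance over the ensemble (the 1/(I-1) factor above),
    # averaged over output dimensions to give a scalar uncertainty estimate.
    return float(np.mean(np.var(mus, axis=0, ddof=1)))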
], [ "Optimism-based exploration", "Motivated by the upper confidence bound (UCB) algorithm [17], we propose an exploration strategy to learn the predictive model more efficiently.", "Since $\hat{V}\left(s_{t}, a_{t}\right)$ can provide well-calibrated uncertainty estimates on unseen state-action pairs, our method adds an exploration bonus to the expected return and chooses the action that maximizes: $a_{t}=\arg \max _{a}\left\lbrace Q_{\zeta }\left(s_{t}, a\right)+\lambda \hat{V}\left(s_{t}, a\right)\right\rbrace $ where $\lambda >0$ is a hyperparameter, which defines the relative weighting of the Q-function and the model ensemble uncertainty.", "Our work differs from prior model-ensemble methods in that we consider both the best estimates of the Q-function and the uncertainty.", "The intuition is illustrated in Fig.", "REF .", "We remark that the theoretical connection between information gain and the model ensemble uncertainty has already been discussed in [38], [16], [45].", "However, only maximizing the uncertainty is sample-inefficient, as the agent would always seek out the most novel regions of the state space regardless of the Q-function value, which can incur risky actions.", "On the other hand, considering only the Q-function values limits the efficiency of exploration and results in slower model learning.", "To mitigate this problem, optimism-based exploration is proposed to incentivize the policy to visit high-value state-action pairs, i.e., pairs with both large Q-values and large information gains.", "Specifically, it is not straightforward to find the action that maximizes Eq.", "(REF ) in continuous action spaces.", "To handle this issue, we propose a simple approximation scheme (see Fig.", "REF ), which generates $K$ action candidates by using $f$ to augment the action $a_{t} \sim \pi _{\theta }$ , and then selects the action that maximizes Eq.", "(REF ).", "Practically, our work uses a random Gaussian noise generator for augmentation, i.e., $f\left(a_t\right)=a_t+z, z \sim \mathcal {N}\left(\textbf {0},\Psi \right)$ , where $\Psi >0$ is a hyperparameter to control the scope of the disturbance.", "We note that other data augmentation methods can also be used for candidate generation in principle.", "Figure: Illustration of optimism-based exploration."
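Building on the ensemble_uncertainty helper sketched earlier, the candidate-generation scheme can be illustrated in a few lines of Python; the policy and q_fn callables and the isotropic noise scale psi are hypothetical placeholders for this sketch.

import numpy as np

def optimistic_action(s, policy, q_fn, ensemble_means, lam=1.0, psi=1.0, K=10):
    # Perturb the base action with Gaussian noise to form K candidates,
    # then pick the candidate maximizing Q(s, a) + lam * V_hat(s, a).
    base = policy(s)                                   # a_t ~ pi_theta(.|s_t)
    candidates = [base + np.random.normal(0.0, psi, size=np.shape(base))
                  for _ in range(K)]
    scores = [q_fn(s, a) + lam * ensemble_uncertainty(ensemble_means, s, a)
              for a in candidates]
    return candidates[int(np.argmax(scores))]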
], [ "Uncertainty-weighted exploitation", "Since conventional model-based methods in a Dyna [15] framework such as ME-TRPO [9] and MBPO [11] utilize $B_{model}$ , the compounding error inheriting from model-bias would propagate during agent training.", "This error propagation is shown to cause convergence and divergence between the optimal policies under the true dynamic $\\mathcal {P}$ and the model-ensemble $\\mathcal {P}_{\\phi }$ [11].", "To mitigate this issue, we propose to assign different discount weights to the imagined samples based on the uncertainty estimates as follows: $w(\\hat{\\tau }_{t})=\\sigma \\left(-\\hat{V}\\left(\\hat{s}_{t}, \\hat{a}_{t}\\right) * T\\right)+0.5$ where $T>0$ is another temperature hyperparameter, $\\hat{\\tau }_{t}=\\left(\\hat{s}_{t}, \\hat{a}_{t}, \\hat{r_{t}}, \\hat{s}_{t+1}\\right) \\sim \\mathcal {P}_{\\phi }$ is an imagined transition, and $\\sigma $ is the sigmoid function.", "Because $\\hat{V}\\left(\\hat{s}_{t}, \\hat{a}_{t}\\right)$ is always nonnegative, the discounted weight is bounded in $\\left[0.5, 1.0\\right]$ .", "This empirical setting of $w(\\hat{\\tau }_{t})$ is referred to SUNRISE [46].", "In order to obtain a better capacity of resisting model bias, we down-weight the sample transitions with high uncertainty by modifying the proposed objectives of SAC as following: $\\begin{aligned}\\mathcal {L}_{\\text{critic }}^{\\text{MEEE }}(\\zeta ) &=w(\\hat{\\tau }_{t}) * \\mathcal {L}_{\\text{critic }}^{\\text{SAC }}(\\zeta ), \\\\\\mathcal {L}_{\\text{actor }}^{\\text{MEEE }}(\\theta ) &=w(\\hat{\\tau }_{t}) * \\mathcal {L}_{\\text{actor }}^{\\text{SAC }}(\\theta )\\end{aligned}$ Here, imagined transitions with lower confidence, i.e., greater uncertainty, would result in more discreet updates, which prevent the model bias propagation during agent training and are shown to provide performance guarantee for model-based RL algorithms.", "In principle, a potential benefit of a discounted weight is that the policy is still allowed to learn from the inaccurate imagined samples for a better generalization performance.", "Figure: Walker2d-v2" ], [ "Experimental results", "Our experiments aims to answer the following questions: Comparison to prior work: How does MEEE perform in common benchmark tasks in comparison to prior state-of-the-art RL algorithms?", "Contribution of each technique in MEEE: What is the contribution of optimism-based exploration and uncertainty-weighted exploitation for MEEE?", "To answer the posed questions, we consider four common benchmark tasks from OpenAI Gym [18] simulated with MuJoCo [19] physics engine: HalfCheetah-v2, Ant-v2, Humanoid-v2 and Walker2d-v2, which are illustrated in Fig.", "REF .", "In our comparisons, we compare to SAC [20] and MBPO [11], which represent the state-of-the-art in both model-free and model-based RL.", "The experimental code is publicly available at https://github.com/YaoYao1995/MEEE.", "Figure: Ablation experiments" ], [ "Comparison with State-of-the-Arts", "Note that MEEE is a unified framework for utilizing uncertainty quantification to provide a better exploration–exploitation tradeoff, so MEEE can in principle be used in conjunction with most of the Dyna-style RL algorithms.", "In order to make a fair comparison, we implement MEEE by slightly modifying MBPO to support optimism-based exploration and uncertainty-weighted exploitation, and the additional hyperparameters are simply set as: $\\lambda =1, \\Psi =\\mathbf {1}, T=20$ .", "We declare that this is not the optimal setting after 
fussy parametric selection.", "As shown in Fig.", "REF , we plot learning curves for all methods, along with the asymptotic performance of SAC which does not converge in the shown region.", "These results highlight the strength of MEEE in terms of performance and sample complexity as MEEE outperforms other methods by a significant margin on three of the benchmark tasks.", "For example, MEEE’s performance on Walker2d-v2 at 200 thousand steps matches that of SAC at 2 million steps.", "However, notice that there is little difference in performance between MEEE and MBPO on HalfCheetah-v2.", "One possible reason is that the state space of HalfCheetah-v2 is less complex, so the exploration and exploitation mechanism of the vanilla MBPO is more than adequate.", "We believe future work on improving uncertainty estimation and careful hyperparameter tuning may be able to improve the performance further." ], [ "Ablation Study", "In order to verify the individual effects of each technique in MEEE, we conduct a thorough ablation study on MEEE.", "The main goal of the ablation study is to understand how uncertainty-weighted exploitation and optimism-based exploration affect performance.", "We denote MEEE_v1 as the version only using uncertainty-weighted exploitation, and MEEE_v2 to indicate only optimism-based exploration was considered.", "The results of our study are shown in Fig.", "REF .", "Surprisingly, we find that the performance of MEEE_v1 and MEEE_v2 differs in different tasks.", "For example, MEEE_v2 performs comparably to the vanilla MEEE on Humanoid-v2, while the performance on Walker2d-v2 is even worse than MBPO, possibly because Humanoid-v2 requires greater generalization and hence places more demands on effective exploration.", "On the contrary, we notice that MEEE_v1 performs pretty well on Ant-v2 and Walker2d-v2 but performs poorly on Humanoid-v2, which suggests that uncertainty penalty is able to make major contributions when the model bias of the model ensemble is relatively severe.", "To sum up, using uncertainty-weighted exploitation or optimism-based exploration alone sometimes degrades performance slightly.", "However, if they are used in a complementary way, MEEE will surely attain the improved performance, which is proven to be empirically successful." ], [ "CONCLUSIONS", "In this work, we present a simple unified ensemble method namely MEEE that integrates random initialization, optimism-based exploration, and uncertainty-weighted exploitation to handle various issues in model-based RL algorithms.", "Our experiments show that MEEE consistently outperforms state-of-the-art RL algorithms on several benchmark continuous control tasks.", "We further evaluate the effect of each key component of our algorithm, showing that both optimism-based exploration and uncertainty-weighted exploitation are essential for successful applications of MEEE.", "For future work, we will investigate the integration of uncertainty quantification into other model-based RL frameworks and study how to make a better uncertainty estimation.", "Another enticing direction for future work would be the application of MEEE to real-world robotics systems." ] ]
2107.01825
[ [ "Strengthening practical continuous-variable quantum key distribution\n systems against measurement angular error" ], [ "Abstract The optical phase shifter that constantly rotates the local oscillator phase is a necessity in continuous-variable quantum key distribution systems with heterodyne detection.", "In previous experimental implementations, the optical phase shifter is generally regarded as an ideal passive optical device that perfectly rotates the phase of the electromagnetic wave of $90^\\circ$.", "However, the optical phase shifter in practice introduces imperfections, mainly the measurement angular error, which inevitably deteriorates the security of the practical systems.", "Here, we will give a concrete interpretation of measurement angular error in practical systems and the corresponding entanglement-based description.", "Subsequently, from the parameter estimation, we deduce the overestimated excess noise and the underestimated transmittance, which lead to a reduction in the final secret key rate.", "Simultaneously, we propose an estimation method of the measurement angular error.", "Next, the practical security analysis is provided in detail, and the effect of the measurement angular error and its corresponding compensation scheme are demonstrated.", "We conclude that measurement angular error severely degrades the security, but the proposed calibration and compensation method can significantly help improve the performance of the practical CV-QKD systems." ], [ "Introduction ", "Continuous-variable quantum key distribution (CV-QKD) provides a way for two remote participants called Alice and Bob to establish symmetric keys through an unsafe channel [28], [5].", "In CV-QKD, the information is encoded in the quadratures of the optical field and decoded by coherent detection.", "The physical implementation of CV-QKD using Gaussian modulated coherent state (GMCS) is based on mature optical communication techniques with high reliability and low cost.", "Thus, CV-QKD has attracted much attention in recent years.", "Theoretical research of GMCS CV-QKD has made great progress [3], [15], [16], and such significant progress accelerates experimental progress [14], [7], [9], [27], [31], [30].", "As well, an overview of practical CV-QKD systems using telecom components is proposed [6].", "The security analysis of CV-QKD contains theoretical security analysis, practical security analysis and composable security analysis under the universal framework.", "Although the theoretical security and composable security of CV-QKD protocol based on GMCS have been proved based on many assumptions [17], [15], [16], the practical security problems caused by imperfections of experimental devices remain unsolved absolutely.", "Nevertheless, the non-ideal elements of the system are so numerous that they are often ignored.", "What is worse, the attacks controlled by Eve against the system never end.", "Thus, researchers put forward continuous-variable measurement-device-independent (CV-MDI) or continuous-variable one-sided-device-independent (CV-1sDI) protocol to resist those attacks involving the imperfections of devices [24], [4], [22].", "However, the experimental implementation of CV-MDI or CV-1sDI protocol is somewhat tricky.", "As a result, using proven CV-QKD protocols and dealing with the imperfections of experimental devices within these protocols is the most efficient method [11], [13], [20], [32].", "These imperfections may not be eradicated absolutely, but there are corresponding methods to 
mitigate the impacts of these non-ideal factors.", "For instance, imperfect Gaussian modulation due to the jitter of the half-wave voltage of the intensity modulator and the phase modulator can be solved by a unique calibration method [19].", "A one-time calibration model solves imperfect monitoring at the light source [1].", "Attacks exploiting the poor linearity of the homodyne detector and an imperfect beam splitter are defended against by adding additional monitoring devices [23], [12].", "In the GMCS CV-QKD protocol, the receiver usually adopts homodyne or heterodyne detection.", "As for the homodyne detection scheme, only one quadrature component $X$ or $P$ needs to be measured, so basis choice is an indispensable step, and imperfect basis choice will cause security problems [18].", "In the heterodyne detection scheme, both quadrature components $X$ and $P$ need to be measured, which is similar to phase diversity reception in classical optical communication [21], [29].", "In the CV-QKD scheme using heterodyne detection, we usually choose the optical phase shifter to help measure the $P$ quadrature.", "The optical phase shifter is required to rotate the phase by $90^\circ $ constantly.", "Here we restrict ourselves to optical fibre systems for simplicity.", "That is to say, the optical phase shifter in this paper is an optical fibre phase shifter, short for phase shifter.", "However, under the action of external force, the fibre is stretched or compressed within the elastic deformation range, and parameters of the fibre such as its geometrical size and refractive index change, thus causing a phase change of the signal transmitted in the fibre.", "Therefore, the phase shifter is somewhat susceptible to environmental changes and can hardly shift the phase by $90^\circ $ exactly, resulting in a deviation of the true value from $90^\circ $ , which is defined as the measurement angular error in this paper.", "We start with expressions of the canonical quadratures of the received state and give the corresponding theoretical model of CV-QKD using heterodyne detection.", "The security analysis and simulation show that the secret key rates would be decreased in the presence of measurement angular error.", "Furthermore, we propose a method that can calibrate the measurement angular error and thus close the potential security loopholes.", "The paper is organized as follows.", "In the next section, we provide a practical implementation analysis and an entanglement-based (EB) description of the measurement angular error, and then a compensation method is provided.", "After that, we discuss the practical security impact of the measurement angular error from two aspects: one is the estimation of the quantum channel parameters, the other is the detailed calculation of the secret key rates.", "Then, numerical simulation results and analysis are provided.", "Finally, we give a conclusion and discuss the importance of compensating the measurement angular error.", "In the GMCS CV-QKD protocol, Alice randomly generates two groups of Gaussian random numbers which correspond to the $X$ and $P$ quadratures.", "They have the same modulation variance ${V_A}$ in shot-noise units (SNUs).", "Gaussian modulation consists of intensity modulation and phase modulation.", "The intensity obeys a Rayleigh distribution, while the phase obeys a uniform distribution.", "$\begin{array}{l}X = {A_{sig}}\cos {\phi _{sig}},\\P = {A_{sig}}\sin {\phi _{sig}},\end{array}$ where ${A_{sig}}$ and ${\phi _{sig}}$ are the modulation information loaded on the intensity
modulator and the phase modulator respectively.", "The practical implementation scheme of GMCS protocol with heterodyne detection can be seen in Fig.", "REF .", "To prepare the quantum state, Alice generates continuous light using a laser source.", "The first AM is just for pulse generation, and then the pulse is divided into two splits with a beam splitter (BS); one travels through the signal path, the other travels through the local oscillator (LO) path.", "After Gaussian modulation and proper attenuation, the quantum signal is polarized multiplexed with the LO using a polarization beam splitter (PBS).", "The prepared signal are then transmitted to Bob through a noisy quantum channel.", "At the receiver side, the signal is demultiplexed with a PBS and a dynamic polarization controller (DPC).", "The heterodyne detection scheme comprises the optical phase shifter and two balanced homodyne detectors (BHDs); one measures $X$ quadrature while the other measures $P$ quadrature.", "In the CV-QKD scheme using heterodyne detection , the optical phase shifter needs to be set to $90^\\circ $ to measure the quadrature component $P$ .", "But the actual shifting phase is $ \\varphi _{PS} =\\frac{\\pi }{2} - \\theta $ ($\\theta $ is a small deviation value) due to the non-ideal external factors.", "Other irrelevant devices and operations are assumed to be ideal for simplicity.", "The electric field expression of signal and LO before detection can be expressed by [2] Figure: Schematics layout of the heterodyne detection GMCS protocol.", "AM: amplitude modulator; PM: phase modulator; BS: beam splitter; PBS: polarization beam splitter; VA: variable attenuator; DPC: dynamic polarization controller; PS: phase shifter.${E_{sig1}} &= {E_{sig2}} = {A_{sig}}\\cos \\left( {2\\pi {f_s}t + {\\phi _{sig}} + {\\varphi _0} + {\\varphi _{channel\\_sig}}} \\right),\\\\{E_{L{O_1}}} &= {A_{LO}}\\cos \\left( {2\\pi {f_L}t + {\\varphi _0} + {\\varphi _{channel\\_LO}}} \\right),\\\\{E_{L{O_2}}} &= {A_{LO}}\\cos \\left( {2\\pi {f_L}t + {\\varphi _0} + {\\varphi _{channel\\_LO}} + {\\varphi _{PS}}} \\right),$ where $E$ represents the electric field intensity, ${f_s}$ and ${f_L}$ are the center frequency of the quantum signal and the LO, ${\\varphi _0}$ is the initial phase of the laser and ${\\varphi _{channel}}$ represents the phase during quantum channel.", "Thus, the generated photocurrents of the first and second homodyne detectors can be expressed as ${I_1}$ and ${I_2}$ [25], respectively ${I_1} &= {R_1}{\\eta _1}\\left( {{{\\left| {{E_{sig1}} + {E_{L{O_1}}}} \\right|}^2} - {{\\left| {{E_{sig1}} - {E_{L{O_1}}}} \\right|}^2}} \\right)\\\\&= {R_1}{\\eta _1}{A_{sig}}{A_{LO}}\\cos \\left( {2\\pi \\Delta ft + {\\phi _{sig}} + \\Delta {\\varphi _{channel}}} \\right),\\\\{I_2} &= {R_2}{\\eta _2}\\left( {{{\\left| {{E_{sig2}} + {E_{L{O_2}}}} \\right|}^2} - {{\\left| {{E_{sig2}} - {E_{L{O_2}}}} \\right|}^2}} \\right)\\\\&= {R_2}{\\eta _2}{A_{sig}}{A_{LO}}\\cos \\left( {2\\pi \\Delta ft + {\\phi _{sig}} + \\Delta {\\varphi _{channel}} + {\\varphi _{PS}}} \\right),$ where ${R_i}$ and ${\\eta _i}$ denote the responsiveness and quantum efficiency of two balanced homodyne detectors ($i=1,2$ ).", "Because the quantum signal and the LO are from the same laser and their optical path is almost the same, so we consider $\\Delta f=0$ .", "Meantime, we assume the shifting phase in a quantum channel of the quantum signal is the same as that of the LO, so $\\Delta {\\varphi _{channel}}=\\varphi _{channel\\_sig}-\\varphi _{channel\\_LO}=0$ .", 
"By taking Eq.", "(REF ) into Eq.", "(REF ), the photocurrents can be further be written as $\\begin{array}{l}{I_1} = {R_1}{\\eta _1}{A_{LO}}{X_1},\\\\{I_2} = {R_2}{\\eta _2}{A_{LO}}(\\sin \\theta \\cdot {X_2} - \\cos \\theta \\cdot {P_2}),\\end{array}$ where ${X_i}$ and ${P_i}$ are measurement results on the upper branch ($i=1$ ) or lower branch ($i=2$ ).", "Since the phase shifter can only change the phase and does not influence the intensity of the SNUs, so the coefficient ${R_i}{\\eta _i}{A_{LO}}$ of $I_1$ and $I_2$ would be normalized by SNUs directly.", "Thus, the normalized measurement results $I_1^{\\prime }$ and $I_2^{\\prime }$ can be expressed as $\\begin{array}{l}{I_1}^{\\prime } = {X_1},\\\\{I_2}^{\\prime } = \\sin \\theta \\cdot {X_2} - \\cos \\theta \\cdot {P_2}.\\end{array}$ In doing so, the influence of the imperfect phase shifter on the final heterodyne detection can be seen more intuitively in the form of photocurrent." ], [ "Entanglement-Based description of measurement angular error", "In this subsection, we mainly discuss the EB description of the measurement angular error in the CV-QKD scheme using heterodyne detection.", "In Sec.", "REF , we have shown that the phase of LO is rotated by $ \\varphi _{PS} =\\frac{\\pi }{2} - \\theta $ in practical heterodyne detection scheme caused by imperfect phase shifter.", "In the new model, it can be interpreted that the phase of LO is rotated $90^\\circ $ ideally while the quantum signal following a phase shifts $\\theta $ , as shown in Fig.", "REF (a).", "Equivalently, this means we can first rotate the state to be measured by $\\theta $ and then perform the ideal measurement to obtain the $P$ quadrature in phase space, as illustrated in Fig.", "REF (b).", "The quadrature transformation of the phase shift operator is given in term of the symplectic matrix ${S_{PS}}$ , which reads ${S_{PS}} = \\left( {\\begin{array}{*{20}{c}}{\\cos \\theta }&{\\sin \\theta }\\\\{ - \\sin \\theta }&{\\cos \\theta }\\end{array}} \\right).$ Figure: Model of real heterodyne detector with measurement angular error in CV-QKD.", "PS: phase shift.", "(a) Model of non-ideal PP quadrature measurement.", "(b) Non-idealmeasurement in phase space.Now we present the complete EB model of measurement angular error in the CV-QKD scheme using heterodyne detection.", "The coherent state preparation by Alice is modelled by a heterodyne measurement of one half of a two-mode squeezed vacuum (EPR) state of variance $V$ .", "The other half of the EPR state is sent to Bob through the quantum channel controlled by Eve.", "At the receiver, Bob performs the heterodyne detection on the mode $B$ and uses a 50:50 BS to split mode $B$ into two different modes, namely, $B_1$ and $B_2$ , and each of them is measured using homodyne detection.", "Among them, a phase shift operation is performed while measuring $P$ quadrature, as shown in Fig.", "REF .", "The heterodyne detection results can be expressed as ${x_{{B_1}}} &= \\frac{1}{{\\sqrt{2} }}{x_B} + \\frac{1}{{\\sqrt{2} }}{x_v},\\\\{p_{{B_3}}} &= - \\sin \\theta \\cdot {x_{{B_2}}} + \\cos \\theta \\cdot {p_{{B_2}}},$ where ${x_{{B_2}}} =- \\frac{1}{{\\sqrt{2} }}{x_B} + \\frac{1}{{\\sqrt{2} }}{x_v}$ , ${p_{{B_2}}} =- \\frac{1}{{\\sqrt{2} }}{p_B} + \\frac{1}{{\\sqrt{2} }}{p_v}$ , which could be calculated from quadrature transformation ${S_{BS}}$ in Sec.", ".", "It can then be observed from Eq.", "(REF ) and Eq.", "() that the EB description above is equivalent to the practical implementation of measurement angular error in the 
CV-QKD scheme using heterodyne detection.", "Figure: The EB description of measurement angular error in the CV-QKD scheme using heterodyne detection." ], [ "Estimation of measurement angular error", "In this subsection, we propose a method to estimate the measurement angular error in the GMCS CV-QKD scheme using heterodyne detection.", "In Sec.", "REF , we have illustrated that the EB description is equivalent to the PM description in this scheme.", "Therefore, the results in the PM model can be substituted into the corresponding EB model.", "Meanwhile, according to the EB model, there are relationships between the various modes from which the measurement angular error can be derived.", "Hence, according to Eq.", "(REF ) and Eq.", "(), we can obtain concrete expressions for a covariance and a variance that can be measured directly in the practical experiment $\left\langle {{p_{{B_3}}},{x_{{B_1}}}} \right\rangle = - \sin \theta \cdot ( - \frac{{{V_B}}}{2} + \frac{1}{2}),\\{V_{{B_1}}} = \frac{1}{2}\left( {{V_B} + 1} \right),$ where $\left\langle {{p_{{B_3}}},{x_{{B_1}}}} \right\rangle $ is the covariance between ${x_{{B_1}}}$ and ${p_{{B_3}}}$ , and $V_{B_1}$ is the variance of ${x_{{B_1}}}$ .", "The values above are all normalized by SNUs.", "Then, the mean value of the error angle can be written as $\theta = \arcsin \left( {\frac{{\left\langle {{p_{{B_3}}},{x_{{B_1}}}} \right\rangle }}{{{V_{{B_1}}} - 1}}} \right).$ In order to recover $p_{B_2}$ , the ideal measurement result of the $P$ quadrature, a precise calibration of $\theta $ is essential, as is knowledge of $x_{B_2}$ .", "Since the BS introduces a vacuum state, there exists an uncertainty from the vacuum fluctuation.", "However, this deviation is not dominant in the measurement result; it thus allows us to assume $x_{{B_1}}\approx {x_{{B_2}}}$ , so that $p_{B_2}$ can be calculated and the measurement angular error can be compensated through post-processing as in Eq.", "().", "The collective attack is regarded as the optimal attack against Gaussian-modulated coherent detection schemes [15], [16].", "Hence, the practical security under a collective attack is worth analysing in detail.", "Thus, a security analysis of the CV-QKD protocol with heterodyne detection against the collective attack is provided.", "In a system with an imperfect phase shifter, the parameters required for the security analysis need to be obtained through parameter estimation in the PM model.", "Therefore, we first briefly describe the parameter estimation process.", "For a general linear channel, the correlation of the data between Alice and Bob is given by $y = tx + z,$ where $x$ is the Gaussian-modulated quantum signal with modulation variance $V_A$ , and $y$ is the quantum signal received through a Gaussian channel and the balanced homodyne detectors (BHDs) with total transmittance $t$ .", "$z$ contains the excess noise from the quantum channel and the electrical noise from the BHDs.", "Thus, in the ideal-measurement scenario, the measurement results after the heterodyne detection can be written as $\left\lbrace {\begin{array}{*{20}{c}}{{x_B} = {t_x}{x_A} + {z_x}},\\{{p_B} = {t_p}{p_A} + {z_p}},\end{array}} \right.$ where ${t_x} = \sqrt{{\eta _1}{T_x}}$ , ${t_p}=\sqrt{{\eta _2}{T_p}}$ .", "The quantum channel parameters of transmittance $T$ and excess noise $\varepsilon $ are related to these values through the following equations $\left\langle {{x_A},{x_B}} \right\rangle = 
\\sqrt{{\\eta _1} {T_x}} \\cdot {V_A} ,\\left\\langle {{p_A},{p_B}} \\right\\rangle = \\sqrt{{\\eta _2} {T_p}} \\cdot {V_A} ,\\\\{\\left\\langle {{x_B}^2} \\right\\rangle = {\\eta _1} {T_x} \\cdot {V_A} + {\\eta _1} {T_x} \\cdot {\\varepsilon _1} + 1 + {v_{el}}_1},\\\\{\\left\\langle {{p_B}^2} \\right\\rangle = {\\eta _2} {T_p} \\cdot {V_A} + {\\eta _2} {T_p} \\cdot {\\varepsilon _2} + 1 + {v_{el}}_2},$ where ${V_A} = \\left\\langle {{x_A}^2} \\right\\rangle =\\left\\langle {{p_A}^2} \\right\\rangle $ , $\\eta _i $ and ${{v_{el}}_i}$ are performance parameters of the the BHDs, we assume BHDs in our model are ideal, so the detection efficiency ${\\eta _1} = {\\eta _2}=1 $ , electronic noise ${v_{el}}_1 = {v_{el}}_2=0$ .", "Noted that all the parameters above have been normalized by SNUs.", "The secret key rate under collective attack is derived from the covariance matrix [28] ${\\gamma _{A{B_1}{B_3}}} = \\left( {\\begin{array}{*{20}{c}}{{V_A}}&{{C_{A{B_1}}}}&{{C_{A{B_3}}}} \\\\{C_{A{B_1}}^T}&{{V_{{B_1}}}}&{{C_{{B_1}{B_3}}}} \\\\{C_{A{B_3}}^T}&{C_{{B_1}{B_3}}^T}&{{V_{{B_3}}}}\\end{array}} \\right).$ $V$ and $C$ in this matrix represent the variance of and correlation of the mode.", "The transmittance $T$ and excess noise $\\varepsilon $ in the covariance matrix can then be derived with the data of Alice and the heterodyne detection results of Bob $\\left\\lbrace {\\begin{array}{*{20}{c}}{{T_{x(p)}} = \\frac{{{{\\left\\langle {x{{(p)}_A},x{{(p)}_B}} \\right\\rangle }^2}}}{{{{\\left\\langle {x{{(p)}_A}^2} \\right\\rangle }^2}}},} \\\\{{\\varepsilon _{x(p)}} = \\frac{{\\left\\langle {x{{(p)}_B}^2} \\right\\rangle - 1}}{{{T_{x(p)}}}} - \\left\\langle {x{{(p)}_A}^2} \\right\\rangle .", "}\\end{array}} \\right.$ In the case of measurement angular error, one should replace ${p_B}$ with ${p_B}^{\\prime }$ in Eq.", "(), $T_p$ and $\\varepsilon _p$ in Eq.", "(REF ) should subsequently be rewritten as $\\left\\lbrace {\\begin{array}{*{20}{c}}{{T_p}^{\\prime } = {{\\cos }^2}\\theta \\cdot {T_p}},\\\\{{\\varepsilon _p}^{\\prime } = \\frac{{{\\varepsilon _p}}}{{{{\\cos }^2}\\theta }} + {{\\tan }^2}\\theta \\cdot \\left\\langle {{x_A}^2} \\right\\rangle }.\\end{array}} \\right.$ It is obvious that the estimated ${T_p}^{\\prime }<{T_p}$ and ${\\varepsilon _p}^{\\prime }>{\\varepsilon _p}$ if the measurement angular error is above zero.", "Thus, covariance matrix becomes ${\\gamma _{A{B_1}{B_3}}} = \\left( {\\begin{array}{*{20}{c}}{{V_A}}&{{C_{A{B_1}}}}&{{C_{A{B_3}}}^{\\prime }} \\\\{C_{A{B_1}}^T}&{{V_{{B_1}}}}&{{C_{{B_1}{B_3}}}^{\\prime }} \\\\{C_{A{B_3}}^T}^{\\prime }&{C_{{B_1}{B_3}}^T}^{\\prime }&{{V_{{B_3}}}^{\\prime }}\\end{array}} \\right).$ By comparing the above two covariance matrices Eq.", "(REF ) and Eq.", "(REF ), the different elements is ${C_{A{B_3}}},{C_{{B_1}{B_3}}}$ and ${V_{{B_3}}}$ which contains transmittance ${T_p}^{\\prime }$ and excess noise ${\\varepsilon _p}^{\\prime }$ which are the decisive parameters for the secret key rate of CV-QKD.", "The detailed calculation of secret key rate is shown in the Appendix ." 
], [ "Simulation and results", "In this section, we illustrate the simulated effects of the measurement angular error, based on the security analysis in Sec.", ".", "First of all, we simulate the influence of the measurement angular error on the system, the most important aspect being its influence on the secret key rate.", "Thus, the secret key rate as a function of the transmission distance for different measurement angular errors is calculated, as depicted in Fig.", "REF .", "The red, blue and black lines correspond to the angles $\theta = {0^ \circ }$ (after compensation), $\theta = {5^ \circ }$ and $\theta = {10^ \circ }$ .", "In this simulation, the channel excess noise is set to $\varepsilon = 0.01$ and the reconciliation efficiency to $\beta = 0.95$ [26].", "From Fig.", "REF , it can be seen that the secret key rate with measurement angular error is lower than that in the ideal scenario.", "Furthermore, the secret key rate decreases sharply with increasing measurement angular error.", "Figure: The secret key rate versus the transmission distance for different measurement angular errors.", "From left to right, the curves correspond to the secret key rate for $\theta = {0^ \circ }$ , $\theta = {5^ \circ }$ and $\theta = {10^ \circ }$ .", "The other parameters of the system: modulation variance $V=20$ , excess noise $\varepsilon = 0.01$ , reconciliation efficiency $\beta = 0.95$ . In Fig.", "REF , we simulate three curves with three measurement angular errors.", "Beyond the effect on the secret key rate, the maximum transmission distance is also computed over a broader range of measurement angular errors.", "Thus, we compute the maximum transmission distance as a function of the measurement angular error, and the simulation results are displayed in Fig.", "REF .", "The different curves represent the results for different values of the excess noise.", "For each curve, the maximum transmission distance decreases significantly.", "We show measurement angular errors up to $30^\circ $ for a complete display.", "However, in practical implementations, the measurement angular error is a small value that is easily overlooked.", "Although the angular error is relatively small, the slope of the curve changes rapidly, indicating that even a tiny measurement angular error would cause a significant loss in transmission distance in a practical experiment.", "Figure: The maximum transmission distance versus the measurement angular error.", "From left to right, the curves correspond to $\varepsilon = 0.1$ , $\varepsilon = 0.05$ and $\varepsilon = 0.01$ .", "The other parameters of the system: modulation variance $V=20$ , reconciliation efficiency $\beta = 0.95$ . Figure: The optimal modulation variance for systems with different measurement angular errors.", "The red and black curves correspond to cases with different distances.", "Three types of lines, solid, dashed and dotted, correspond to different measurement angular errors. The other parameters of the system: modulation variance $V=20$ , reconciliation efficiency $\beta = 0.95$ . The existence of measurement angular error can also cause changes in the optimal modulation variance; thus, the curves of the secret key rate versus the modulation variance for the measurement angular errors $\theta = {0^ \circ }$ , $\theta = {5^ \circ }$ and $\theta = {10^
\circ }$ are simulated, and the simulation results are depicted in Figure REF .", "The main figure shows the modulation variance range up to 100, while the inset shows the range up to 20 to display the details clearly.", "It can be intuitively observed that the optimal modulation variance decreases with increasing measurement angular error at different distances.", "To sum up, the existence of measurement angular error dramatically affects the performance of the practical CV-QKD system using heterodyne detection.", "The measurement angular error not only reduces the secret key rate of the system but also the transmission distance; worse, it shifts the optimal modulation variance and thereby degrades the system performance further, which underlines the importance of compensating the measurement angular error." ], [ "Conclusion", "This paper analyses the effect of the measurement angular error caused by the imperfect phase shifter in CV-QKD using heterodyne detection.", "The imperfection of the phase shifter is recognized from the detection results of the practical experimental implementation, and the corresponding EB model is proposed based on the PM model.", "In order to avoid the potential attack caused by the imperfect phase shifter, we propose a method to estimate the measurement angular error, which can then be compensated through data processing.", "The practical security analysis shows that the channel parameters will be misestimated due to such an error, which in turn causes a drastic decrease in the achievable secret key rate.", "Numerous simulations are provided, covering the cases with measurement angular error as well as the case after compensation.", "The simulation results suggest that the performance of practical systems can be significantly improved when the measurement angular error is properly dealt with.", "It should be noted that our analysis can be directly extended to support other CV-QKD systems that rely on heterodyne detection schemes, for instance LLO-CV-QKD [8], [10].", "Finally, it is worth emphasising that the aim of our work is to strengthen the practical security that is threatened by device imperfections.", "This work was supported by the Key Program of National Natural Science Foundation of China under Grant No.", "61531003, National Natural Science Foundation of China under Grant No.", "62001041, China Postdoctoral Science Foundation under Grant No.", "2020TQ0016, and the Fund of State Key Laboratory of Information Photonics and Optical Communications."
], [ "Calculation of the secret key rate", "The secret key rate is derived under the collective attack; for simplicity, we only show the secret key rate for reverse reconciliation [5].", "$K = \beta {I_{AB}} - {\chi _{BE}},$ where $\beta $ is the reconciliation efficiency, $I_{AB}$ is the mutual information between Alice and Bob, and ${\chi _{BE}}$ is the maximum information available to Eve on Bob's key, which is bounded by the Holevo quantity.", "The covariance matrix of Alice and the mode that just comes out of the channel ${\gamma _{AB}}$ is ${\gamma _{AB}} &= \left( {\begin{array}{*{20}{c}}{V \cdot I}&{\sqrt{T \left( {{V^2} - 1} \right)} \cdot {\sigma _z}}\\{\sqrt{T \left( {{V^2} - 1} \right)} \cdot {\sigma _z}}&{\left( {T V + \left( {1 - T} \right)\omega } \right) \cdot I}\end{array}} \right)\\ &= \left( {\begin{array}{*{20}{c}}{a \cdot I}&{c \cdot {\sigma _z}} \\{c \cdot {\sigma _z}}&{b \cdot I}\end{array}} \right),$ where $I = diag(1,1)$ , ${\sigma _z} = diag(1, - 1)$ , and Eve's variance $\omega $ can be described by $\omega = T\varepsilon /(1 - T) + 1$ .", "We define $a = V$ , $b = TV + \left( {1 - T} \right)\omega $ , $c = \sqrt{T\left( {{V^2} - 1} \right)} $ for simplicity.", "Then mode $B$ is divided into modes $B_1$ and $B_2$ by the BS.", "Such a transformation can be expressed as ${\gamma _{A{B_1}{B_2}}} = {Y_{BS}} \cdot \left( {{\gamma _{AB}} \oplus I} \right) \cdot {Y_{BS}}^T,$ where the matrix ${Y_{BS}}$ describes the BS transformation that acts only on the mode $B$ .", "It is written as ${Y_{BS}} = I \oplus {S_{BS}},$ where ${S_{BS}} = \left( {\begin{array}{*{20}{c}}{\frac{1}{{\sqrt{2} }}I}&{\frac{1}{{\sqrt{2} }}I} \\{ - \frac{1}{{\sqrt{2} }}I}&{\frac{1}{{\sqrt{2} }}I}\end{array}} \right)$ .", "In the same way, the covariance matrix ${\gamma _{A{B_1}{B_3}}}$ after the phase shifter operation becomes ${\gamma _{A{B_1}{B_3}}} = Y_{PS} \cdot {\gamma _{A{B_1}{B_2}}} \cdot Y_{PS}^T,$ where ${Y_{PS}} = {I_4} \oplus {S_{PS}}$ .", "In the heterodyne-detection scenario, both the $X$ and $P$ data are used to extract keys; thus, the mutual information between Alice and Bob is ${I_{AB}} = I_{AB}^x + I_{AB}^p$ , where $\left\lbrace {\begin{array}{*{20}{c}}{I_{AB}^x = \frac{1}{2}{{\log }_2}\left[ {\frac{{V_A^x}}{{V_{A|B}^x}}} \right]},\\{I_{AB}^p = \frac{1}{2}{{\log }_2}\left[ {\frac{{V_A^p}}{{V_{A|B}^p}}} \right]},\end{array}} \right.$ where $V_A^{x(p)} = \frac{1}{2}(a + 1)$ , and $V_{A|B}^{x(p)}$ can be calculated from $V_{A|B}^{x(p)} = \left\langle {{A_{x(p)}}^2} \right\rangle - \frac{{{\left\langle {{A_{x(p)}}{B_{x(p)}}} \right\rangle }}^2}{{\left\langle {{B_{x(p)}}^2} \right\rangle }}.$ In the following, we provide the calculation of ${\chi _{BE}}$ ${\chi _{BE}} = S\left( E \right) - S\left( {E|B} \right).$ Here we assume that Eve can purify the whole system, so we get $S\left( E \right) = S\left( {AB} \right)$ , exactly as in the standard CV-QKD GG02 protocol [5].", "The symplectic eigenvalues are given by ${\lambda _{1,2}} = \sqrt{\frac{1}{2}\left( {A \pm \sqrt{{A^2} - 4{B^2}} } \right)},$ where $A = {a^2} + {b^2} - 2{c^2}$ and $B = ab - {c^2}$ .", "We also assume that after Bob performs projective measurements on his modes, the remaining system $AE$ is pure, so we get $S\left( {E|{B_1}{B_3}} \right) = S\left( {A|{B_1}{B_3}} \right)$ .", "$S\left( {E|{B_1}{B_3}} \right)$ is calculated in two steps by first performing P-measurement on
mode $B_3$ and then performing X-measurement on mode $B_1$ .", "The conditional covariance matrix after measuring mode $B_3$ is written as ${\\gamma _{A{B_1}|B_3^p}} = {\\gamma _{A{B_1}}} - {C_{A{B_1}{B_3}}}{\\left( {{X_p}{\\gamma _{{B_3}}}{X_p}} \\right)^{MP}}C_{A{B_1}{B_3}}^T,$ where ${C_{A{B_1}{B_3}}}$ and ${\\gamma _{A{B_1}}}$ are the submatrices of the covariance matrix ${\\gamma _{A{B_1}{B_3}}}$ , ${X_p} = diag(0,1)$ , $MP$ denotes the Moore–Penrose inverse of a matrix.", "Then the X-measurement is performed on mode $B_1$ , with the post-measurement covariance matrix ${\\gamma _{A|B_3^pB_1^x}}$ , which reads ${\\gamma _{A|B_3^pB_1^x}} = {\\gamma _{A{B_1}|B_3^p}} - {C_{A{B_1}}}{\\left( {{X_x}{\\gamma _{{B_1}}}{X_x}} \\right)^{MP}}C_{A{B_1}}^T,$ where ${C_{A{B_1}}}$ and ${\\gamma _{{B_1}}}$ can be acquired by Eq.", "(REF ), and ${X_x} = diag(1,0)$ .", "And the symplectic eigenvalue can be calculated by Eq.", "(REF ), the expressions for the entropies ${\\chi _{BE}}$ can be further simplified as follows ${\\chi _{BE}} = \\sum \\limits _{i = 1}^2 {G\\left( {\\frac{{{\\lambda _i} - 1}}{2}} \\right)} - G\\left( {\\frac{{{\\lambda _3} - 1}}{2}} \\right),$ where $G\\left( x \\right) = \\left( {x + 1} \\right){\\log _2}\\left( {x + 1} \\right) - x{\\log _2}x$ , ${\\lambda _{1,2}}$ and ${\\lambda _3}$ are the symplectic eigenvalues of ${\\gamma _{AB}}$ and $\\gamma _{A|B_3^pB_1^x}$ respectively." ] ]
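As a numerical companion to this appendix, the sketch below assembles the covariance matrix, applies the beam-splitter and phase-shift transformations, and evaluates the Holevo bound $\chi_{BE}$ following the formulas above; the function and variable names are our own, ideal detectors and $T<1$ are assumed, and the secret key rate then follows as $K = \beta I_{AB} - \chi_{BE}$ once the mutual information is computed.

```python
import numpy as np

def G(x):
    # Bosonic entropy function G(x) = (x + 1) log2(x + 1) - x log2(x).
    return (x + 1.0) * np.log2(x + 1.0) - x * np.log2(x) if x > 1e-12 else 0.0

def holevo_bound(V, T, eps, theta):
    """chi_BE for EPR variance V, transmittance T (< 1), excess noise eps and
    angular error theta, following the appendix with ideal detectors."""
    I2, sz = np.eye(2), np.diag([1.0, -1.0])
    omega = T * eps / (1.0 - T) + 1.0
    a, b = V, T * V + (1.0 - T) * omega
    c = np.sqrt(T * (V**2 - 1.0))

    # S(E) = S(AB) from the symplectic eigenvalues of gamma_AB.
    A, B = a**2 + b**2 - 2.0 * c**2, a * b - c**2
    lam1 = np.sqrt(0.5 * (A + np.sqrt(A**2 - 4.0 * B**2)))
    lam2 = np.sqrt(0.5 * (A - np.sqrt(A**2 - 4.0 * B**2)))

    # gamma_{A B1 B3}: append a vacuum mode, apply the BS and PS transformations.
    gAB = np.block([[a * I2, c * sz], [c * sz, b * I2]])
    g3 = np.block([[gAB, np.zeros((4, 2))], [np.zeros((2, 4)), I2]])
    S_BS = np.block([[I2 / np.sqrt(2.0), I2 / np.sqrt(2.0)],
                     [-I2 / np.sqrt(2.0), I2 / np.sqrt(2.0)]])
    Y_BS = np.block([[I2, np.zeros((2, 4))], [np.zeros((4, 2)), S_BS]])
    S_PS = np.array([[np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])
    Y_PS = np.block([[np.eye(4), np.zeros((4, 2))], [np.zeros((2, 4)), S_PS]])
    g = Y_PS @ Y_BS @ g3 @ Y_BS.T @ Y_PS.T

    def condition(gamma, mode, quad):
        # Conditional covariance matrix after a homodyne measurement of one
        # quadrature (quad: 0 -> X, 1 -> P) of the given mode.
        n = gamma.shape[0] // 2
        blk = lambda m: slice(2 * m, 2 * m + 2)
        keep = [i for i in range(n) if i != mode]
        C = np.concatenate([gamma[blk(k), blk(mode)] for k in keep], axis=0)
        X = np.zeros((2, 2)); X[quad, quad] = 1.0
        rest = np.block([[gamma[blk(i), blk(j)] for j in keep] for i in keep])
        return rest - C @ np.linalg.pinv(X @ gamma[blk(mode), blk(mode)] @ X) @ C.T

    g_cond = condition(g, mode=2, quad=1)      # P-measurement on mode B3
    g_A = condition(g_cond, mode=1, quad=0)    # X-measurement on mode B1
    lam3 = np.sqrt(np.linalg.det(g_A))         # symplectic eigenvalue of mode A

    return G((lam1 - 1) / 2) + G((lam2 - 1) / 2) - G((lam3 - 1) / 2)

# Example call: V = V_A + 1 is the EPR variance corresponding to V_A = 20.
print(holevo_bound(V=21.0, T=0.5, eps=0.01, theta=np.deg2rad(5.0)))
```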
2107.01798
[ [ "Automating Generative Deep Learning for Artistic Purposes: Challenges\n and Opportunities" ], [ "Abstract We present a framework for automating generative deep learning with a specific focus on artistic applications.", "The framework provides opportunities to hand over creative responsibilities to a generative system as targets for automation.", "For the definition of targets, we adopt core concepts from automated machine learning and an analysis of generative deep learning pipelines, both in standard and artistic settings.", "To motivate the framework, we argue that automation aligns well with the goal of increasing the creative responsibility of a generative system, a central theme in computational creativity research.", "We understand automation as the challenge of granting a generative system more creative autonomy, by framing the interaction between the user and the system as a co-creative process.", "The development of the framework is informed by our analysis of the relationship between automation and creative autonomy.", "An illustrative example shows how the framework can give inspiration and guidance in the process of handing over creative responsibility." ], [ "Introduction", "The increasing demand in industry and academia for off-the-shelf ML methods has generated a high interest in automating the many tasks involved in the development and deployment of ML models.", "Such AutoML can make ML more widely accessible to non-experts, and decrease the workload in establishing ML pipelines, amongst other benefits.", "AutoML is a very active area of research.", "The progress to date has been documented in several surveys [53], [54], [13], [28].", "There exist a book [30], an AutoML challenge [26] and a dedicated workshop at the International Conference on Machine Learning, currently in its seventh edition.", "Crucially though, the automation of generative DL as ML subdomain has received very little attention.", "While AutoML is concerned with automating solutions for classification and regression, methods in generative DL deal with the task of distribution fitting, i.e.", "matching a model’s probability distribution to the (unknown) distribution of the data.", "NAS, an important topic of research in AutoML, has been extended to GAN [24], [36], [22], [21], one prominent type of generative models.", "Moreover, evolutionary approaches have been applied to optimising the GAN training objective [57] and other training parameters [20].", "Even though certain aspects of the GAN training scheme have been automated, we highlight three gaps in existing research: (i) there exists no unified automation framework for generative DL more generally; (ii) existing work does not address the use of generative DL for artistic purposes; (iii) researchers have not sought to motivate the automation of DL systems with the goal to endow artificial systems with creative autonomy.", "We propose a framework for the automation of generative deep learning that, on the one hand, adopts core concepts from AutoML, and on the other hand, is informed by the theory and practice of CC research, the “philosophy, science and engineering of computational systems which, by taking on particular responsibilities, exhibit behaviours that unbiased observers would deem to be creative” [14].", "We can leverage insights from CC because automation in generative DL aligns with one of the field's central research goals: to endow computational systems with creative responsibilities [19], i.e.", "the ability to make specific 
decisions in a creative process.", "These decisions independently can be understood as targets for automation when framing the design of a generative DL pipeline as a form of co-creativity [34].", "By virtue of this interpretation, we can inform the automation of generative DL more specifically with well-established, generic CC strategies to equip computational systems with creative responsibilities.", "Our framework differs from AutoML not only in its stronger focus on generative models, but also in the assumed goals of the generative DL pipeline.", "More specifically, we identify targets for automation based on the wide and successful application of generative DL in artistic work.", "In contrast to standard applications, artistic ML engineers and users aim to produce artefacts of high cultural value over perfectly generalised reproductions of the training data.", "Our main contribution is to gather, standardise and highlight opportunities to automate generative DL for artistic applications.", "We identify commonalities of DL pipelines in artistic projects and bring them together in a common framework.", "This provides a starting point for handing over creative responsibilities in a range of applications, not only artistic.", "We concentrate our efforts on generative deep learning, rather than generative ML more generally.", "While we assume the majority of applications to be built on DL approaches, we do not rule out that other generative ML methods might be used within the framework.", "Our contribution does not consist of a formal solution to a singular automation problem.", "In contrast, we aim to provide a big picture view of all automation tasks and their associated opportunities and challenges, to be solved in future work.", "To leverage insights from CC in the development of our framework, we first clarify the relationship between automating generative DL and endowing artificial systems with creative responsibility.", "We then outline a standard non-automated pipeline for the development and deployment of generative deep learning models, and show how applications in artistic settings differ from this standard pipeline.", "Drawing from these two sources, we lay out the automated generative deep learning pipeline, describe several targets for automation therein and suggest ways in which automation could be achieved.", "We continue with an illustrative example to demonstrate how our framework can give inspiration and guidance in the process of gradually handing over creative responsibility to a generative system.", "We analyse the relationship between automation and creative autonomy in the context of our framework.", "We conclude the paper by discussing the limitations of our framework and suggest directions for future work." 
], [ "Automated, Artistic Deep Learning as Co-Creation", "We believe that the development of a framework for automated generative DL can benefit from the insights gathered over more than two decades of CC research, because the automation of targets in generative DL can be considered a specific instance of the grand CC goal to give computational systems responsibility over decisions in a creative process.", "With each creative responsibility that is handed over to the system, i.e.", "with each target that is being automated, we increase the computational system’s creative autonomy [31], [39], [25], i.e.", "its capacity to operate independently of a human instructor, allowing for it to be ultimately considered a creator in its own right [18].", "Crucially though, the users of automated generative DL typically want to retain some control over the automation and its outcome.", "In developing our framework, we must thus decide which responsibilities should be retained in order to sustain certain modes of interaction between the artistic users and the generative DL system.", "To this end, it is useful to frame this interaction in the process of automation as a co-creative act.", "We adopt Kantosalo et al.", "'s [34] working definition of human-computer co-creativity as collaborative creativity where both the human and the computer take creative responsibility for the generation of a creative artefact.", "To qualify as a collaborative activity, both human and system must achieve shared goals (Kantosalo et al., [34], drawing on [52], [52]).", "Different automation strategies can enable two coarse forms of interaction.", "First, the user and system could engage in task-divided co-creativity, in which co-creative partners take specific roles within the co-creative process, producing new concepts satisfying the requirements of one party [33].", "Second, they could engage in alternating co-creativity, where both partners take turns in creating a new concept satisfying the requirements of both parties [33].", "Alternating co-creativity requires the computational system to not only exhibit creative responsibility for either the generation or evaluation of artefacts, but for both.", "Crucially, even a non-automated generative DL system can be considered creative in a minimal sense, in that it (despite the name) not only merely generates [56] new samples or artefacts, but also evaluates their proximity to the training set via its loss function.", "This is accomplished either explicitly, through likelihood estimation, or implicitly, with the help of a critic in an adversarial setting.", "The system thus produces artefacts that are novel and valuable, realising both requirements of the two-component standard definition of creativity [47].", "We write creative in a minimal sense, because the novelty of artefacts will decline, while their value increases, the better the system approximates the (unknown) distribution from which the training data was drawn.", "The definition of the training set and loss function by the user satisfies that both partners interact towards shared goals.", "Through different ways to automate the ML pipeline, we can free the human partner from certain manual work, while retaining specific creative responsibilities.", "We believe that providing the computational system with creative responsibility in the form of automating certain targets does not constrain, but rather expands the shared creative process.", "The person or producer has, due to their personality and cognitive 
characteristics, a strong impact on the creative process, product, and the creative environment, i.e.", "the press [44], [32].", "However, human creativity is also limited, e.g.", "due to our bounded rationality [51].", "A computational system can complement human shortcomings, e.g.", "via its higher information processing or memory capacity, enabling creativity on larger search spaces [6], [58]." ], [ "A Standard Generative Pipeline and Artistic Deviations", "We outline the various steps in the process of building and deploying a generative DL model for standard non-automated usage and contrast it with the particular differences that arise when using a model in different artistic contexts.", "Additionally, we provide a brief overview post-training modifications that aim for active divergence [4], allowing to manipulate a model into producing artefacts that do not exactly resemble the training data.", "A more detailed survey of such techniques can be found in [8].", "Our goal is to highlight the many choices that have to be taken in the construction of a generative DL pipeline and identify those tasks which pose an opportunity for automation in our framework." ], [ "Data Acquisition", "The first step towards developing generative models is data acquisition.", "We distinguish two cases: (i) using pre-existing data sets and (ii) creating new ones.", "It should be noted, that generative ML is also applied in privacy sensitive areas such as medicine, and in the augmentation of small data sets, as it can produce synthetic data to replace an entire data set or supplement it with additional samples.", "The augmentation by way of a generative model can be necessary whenever a data set is too small to train another model (e.g.", "a classifier) with a high number of parameters (i.e.", "weights and biases in a neural network).", "However, when the generative model itself requires a large amount of training data, other pre-training data augmentation steps through graphic manipulations can help to do so effectively [35]." ], [ "Using Existing Data Sets", "In a research setting, it is most common to use standard benchmark data sets or subsets thereof, for training and evaluating generative models.", "It is generally best practice in machine learning to split the data into training, test and validation subsets.", "However, generative models are sometimes trained on the entire data set and alternative methods of evaluation are used." 
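Where only a small data set is available, the pre-training augmentation stage mentioned in the data-acquisition discussion above can be sketched as follows; this is an illustrative torchvision-based example, and the specific operations and parameter values are our own choices rather than those of [35].

```python
from torchvision import transforms

# Illustrative pre-training augmentation applied to each image of a small data
# set before it is used to train the generative model.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomResizedCrop(256, scale=(0.8, 1.0)),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.ToTensor(),
])

# Hypothetical usage, where small_dataset is a list of PIL images:
# augmented = [augment(img) for img in small_dataset]
```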
], [ "Creating a New Data Set", "When creating a data set from scratch, the goal is normally to fully represent the subject or category that is being modelled.", "Therefore, as much data as possible will be collected to maximise variation in the data set and to represent all modes as evenly as possible, i.e.", "the variety of artefacts that are statistically significantly different from one another.", "Creating varied, high-quality data sets with the large amounts of data required for training generative models can be very labour intensive and is usually the purview of a select few academic and industry laboratories.", "This is often performed in a distributed fashion, where many workers are involved in collecting, evaluating and labelling data samples.", "In contrast to data sets created for industrial and research applications, data sets for artistic purposes are often composed with very different goals.", "It may not be important to accurately and fully represent a subject matter or domain, as long as the end result is interesting.", "Data sets are often much smaller, and considerations of the desired aesthetic characteristics of the end results weigh much more heavily in deciding which examples should and which should not be included in the data set.", "A lot of effort will go into sourcing material, and the resulting data sets are much more likely to reflect an artist’s individual style and (visual) language.", "In some cases, the entire data set will come from an artist’s personal archive [45]." ], [ "Training", "The objective of training a generative model is to learn a mapping function from an easily controllable and well understood distribution, e.g.", "a standard Gaussian, to a distribution of much higher complexity and dimensionality, e.g.", "that of natural colour images.", "There are a number of different training schemes, which apply to different architectures.", "They are commonly categorised by their formulation of the training objective.", "Methods maximise the likelihood of the data either (i) explicitly, such as auto-regressive and flow-based models, (ii) approximately, e.g.", "VAE, or (iii) implicitly (GAN).", "When using a method that explicitly models the data, training will be performed until a desired likelihood score is reached.", "With VAEs, the goal of training is to maximise a lower bound on the log-likelihood of the data set.", "In the adversarial setup, the decision when to stop training is less clear.", "Training is often run for a pre-specified period and the results are evaluated qualitatively.", "A fully trained model ideally represents the entire training data distribution, and can be sampled randomly to produce good results.", "Another desirable quality is that interpolation between two input vectors is mirrored by a smooth transition between the corresponding outputs.", "Generalisation is a goal of almost all ML systems and applications.", "A model should be able to generalise to unseen data, while not underfitting or overfitting the training data.", "In an artistic setting, however, this is often less important, and if it produces interesting results, artists may often embrace the aesthetic qualities of an underfit [50] or overfit model [7]."
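To make the adversarial variant of the training schemes described above concrete, the following is a minimal sketch of a single GAN training step in PyTorch; the generator and discriminator modules, their optimisers and the latent dimensionality are assumed to be defined elsewhere, and the discriminator is assumed to output one logit per sample.

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_G, opt_D, real, z_dim=128):
    """One discriminator/generator update with the non-saturating GAN loss.
    G and D are torch.nn.Module networks assumed to be defined elsewhere;
    D is assumed to return one logit per sample, of shape (batch, 1)."""
    batch, device = real.size(0), real.device
    ones = torch.ones(batch, 1, device=device)
    zeros = torch.zeros(batch, 1, device=device)

    # Discriminator update: push real samples towards 1 and fakes towards 0.
    fake = G(torch.randn(batch, z_dim, device=device)).detach()
    loss_D = (F.binary_cross_entropy_with_logits(D(real), ones)
              + F.binary_cross_entropy_with_logits(D(fake), zeros))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator update: generate new samples and try to fool the discriminator.
    loss_G = F.binary_cross_entropy_with_logits(
        D(G(torch.randn(batch, z_dim, device=device))), ones)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```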
], [ "Evaluation", "The general performance of a model is measured in terms of the distance of the learned distribution to the target distribution.", "A model further ideally covers all modes in the input data set.", "For generative methods that explicitly model a probability distribution over the data, the (log) likelihood can be measured and evaluated directly.", "Implicit methods, such as GAN, have to be assessed with other metrics such as the Inception Score [48] and the FID [29].", "As these metrics are only a simplified standard for evaluation and have some shortcomings, additional qualitative checks might be needed to ensure fidelity of the output.", "While in some artistic settings good quantitative performance might matter, it can be ignored entirely in others, and a qualitative assessment of the output is usually much more important.", "Quality, diversity and accuracy may not be the only considerations (and may even be actively avoided), whereas novelty, interesting mis-representations of the data and other aesthetic qualities may be desired.", "Due to the variety of qualities that an artist might look for in a model’s output, there is no unique or widely used standard metric for evaluation.", "This is rooted in the highly individualistic nature of artistic work and linked to the additional strategies for iterative improvements and curation of the output which we discuss in the following subsections." ], [ "Iterative Improvements of Outputs", "Here we look at the diverging strategies for the gradual improvement of a system’s output in a research and development versus an artistic setting." ], [ "Iterating on the Model", "In the research and development of generative models, the data set often remains fixed, while various aspects of the network architecture and training regime will be altered.", "For instance, various optimisation hyper-parameters will be evaluated, such as: learning rate, momentum or batch size; or network configurations: number of layers, type of activation functions, etc.", "Different training regimes may also be experimented with, such as: optimisation algorithms, loss functions, and methods for regularisation and sampling." ], [ "Iterating on the Data Set", "In artistic contexts, it is much more common to iterate on the data set and keep other parameters fixed, before possibly making iterative improvements to the network and model parameters.", "Data that appears to be producing unwanted results, or skewing the model in certain directions may be removed.", "Revisiting the composition of samples (such as cropping), and the removal and addition of samples in order to refine the data set may be undertaken [49]." ], [ "Deployment", "Generative models are used differently in standard and artistic settings in accordance with their respective goal.", "We here differentiate between standard sampling and output curation." ], [ "Standard Sampling", "Generative models are trained with the goal that they can be sampled randomly and every generated output will be of value and high typicality [46].", "Therefore, in most standard applications models are simply sampled randomly with no additional filtering taking place.", "When filtering is performed, it is often done with the goal of quality evaluation, such as using the discriminator for evaluation quality [2], or using the CLIP model [42], as was the case in evaluating and ranking the generated outputs of the discrete VAE model in the DALL-E image generation project [43]." 
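For reference, the FID mentioned in the evaluation discussion above reduces to the Fréchet distance between two Gaussians fitted to Inception features; given pre-computed feature arrays for real and generated samples, it can be sketched as follows (the feature extraction itself is omitted and assumed to be done with a standard Inception-v3 network).

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(real_feats, fake_feats):
    """Frechet distance between Gaussians fitted to two feature sets, each of
    shape (num_samples, feature_dim), e.g. Inception-v3 activations."""
    mu1, mu2 = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    sigma1 = np.cov(real_feats, rowvar=False)
    sigma2 = np.cov(fake_feats, rowvar=False)
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):  # discard small imaginary parts from numerics
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```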
], [ "Output Curation", "Rather than sampling randomly from a model, artists will often spend a lot of time curating a model’s output.", "The goal of building a model in an artistic setting is not necessarily to generate only samples of high value, but to produce some interesting or novel results, which can then be hand-selected.", "This can be through filtering samples or searching and exploring the latent space.", "In some cases, such as combining language-image models with latent space search for text-to-image generation, e.g.", "[40], much effort goes into prompt engineering to find a specific latent vector that produces interesting results." ], [ "Post-training Modifications", "Having looked previously at the curation of a model’s output in an artistic setting, i.e.", "the act of identifying the few artefacts of interest in a large set of output samples, we now turn to active divergence techniques [4] which aim at consistently producing results that diverge from the training data.", "These strategies, specifically developed in creative contexts for the purpose of art production, include hacks, tricks and modifications to the model parameters, as well as the daisy-chaining of several models.", "One approach is to find a set of parameters where the generated artefacts blend characteristics of multiple data sets.", "For this, a pre-trained model can be fine-tuned on a second data set, different from the original data.", "As soon as the results present an optimal blend between the two data domains, the fine-tuning can be stopped.", "This mixture of data sets can also be achieved by blending the weights of two models.", "Either interpolating on the weight parameters of the two models, or swapping layers between models, so that the new model contains higher level characteristics of one model, and lower level characteristics of another.", "Another method consists in chaining multiple models together.", "This allows artists to explore and combine characteristics of different data sets.", "Unconditional generative models will often be chained together with domain-translation models, e.g.", "CycleGAN [59] for sketch-to-image translation, or style transfer algorithms [23].", "The aim of such pipelines is to produce artefacts that reflect the complex combination of characteristics from many data sets.", "Other approaches make modifications to the model in order to have artefacts completely diverge from any training data.", "An existing pre-trained model can be fine-tuned using a loss function that maximises the likelihood over the training data [9].", "Other techniques intelligently combine learned features across various models [27], or rewrite the weights of the model [3], re-configuring them to represent novel data categories or semantic relationships.", "In contrast, network bending does not require any changes to the weights of the model [10].", "An analysis of the model is performed to determine which features are responsible for generating different semantic properties in the generated output.", "Deterministically controlled filters are then inserted as new layers into a model and applied to the activation maps of features." 
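The weight-blending modification described above can be sketched in a few lines for two models sharing the same architecture; the state-dict interpolation below is a generic illustration rather than a reproduction of any specific published method.

```python
import torch

def blend_state_dicts(sd_a, sd_b, alpha=0.5):
    """Linear interpolation between the parameters of two identically shaped
    models; alpha = 0 returns model A and alpha = 1 returns model B.
    Non-floating-point buffers (e.g. batch-norm counters) are copied from A."""
    return {k: v if not torch.is_floating_point(v)
            else (1.0 - alpha) * v + alpha * sd_b[k]
            for k, v in sd_a.items()}

# Hypothetical usage with two generators trained on different data sets:
# blended = MyGenerator()                       # same architecture (assumed)
# blended.load_state_dict(blend_state_dicts(model_a.state_dict(),
#                                            model_b.state_dict(), alpha=0.3))
```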
], [ "An Automation Framework", "We build our framework drawing on the standard generative DL pipeline and its artistic deviations, as previously described.", "We first discuss automation as a search problem and the various techniques with which it can be approached.", "We then go on to list the targets for automation in a generative deep learning pipeline for artistic purposes.", "Some decisions have implications for other targets further down the line, e.g.", "the number and type of hyper-parameters depend in part on the kind of network architecture and optimisation algorithm.", "While the process is presented as a sequence of consecutive steps from input to output, it should be understood that all steps are optional and flexibility is required.", "Improving a system’s output works best as an iterative loop in which we might go back and adjust or intervene at any given prior step.", "We define the terminology of our framework as follows.", "With automation, we refer to the act of addressing with computational means those decisions in a generative deep learning pipeline that would normally be taken by a person.", "A target is defined as one such decision which provides an opportunity for automated instead of manual tuning." ], [ "Automation as a Search Problem", "A generative pipeline is automated by assigning responsibilities over individual targets to either the user or the system.", "While those retained by a person will have to be tuned manually, all other targets require the system to determine a configuration independently.", "This problem is analogous to the search problem over hyper-parameters in AutoML.", "The possible values of each automated target effectively construct a search space over possible system configurations.", "The total number of permutations, and thus the resulting search space, can grow rapidly with every independent target added.", "Limiting continuous parameter values to a reduced range or a set of discrete values, as per grid search for machine learning hyper-parameters, can help make the problem more feasible.", "The formulation as a search problem is the standard way to tackle automation in AutoML.", "However, extensive search over meta-parameters can be computationally expensive and time-consuming, cause high energy consumption and consequently have a considerable environmental impact.", "The extensive work on search problems provides numerous approaches to constrain this search.", "Strategies range from complete, to informed, to random methods.", "While exhaustive search can yield an optimal solution, it can be impractical and often infeasible for large search spaces.", "Random sampling, on the other extreme, can be a surprisingly effective strategy at low cost and with potentially surprising results.", "While [31] requires a system to meet the non-randomness criterion in order to be considered creatively autonomous, this definition does not rule out all uses of randomness and allows for testing random perturbations to a system’s standards.", "AI-based search methods can benefit from meaningful heuristics and leverage both exploration and exploitation (e.g.", "evolutionary search).", "Gradient-based methods have seen a lot of progress in recent years.", "Other approaches include rule-based selection and expert systems, with drawbacks including that they require manual construction and expert knowledge.", "Finally, machine learning itself can be used to choose values through a pre-trained model.", "Indeed, practitioners in generative deep learning tend to go directly to
automation via deep learning.", "In particular, recent advances in contrastive language-image pre-training [42] allow for computing similarities between text and images.", "Such a model could take over the responsibility of assessing whether an image looks like a text description, or vice versa, at any point in the pipeline where a human artist would do the same task.", "All of the above approaches can be applied in an iterative fashion over subsets of the search space, gradually limiting the range of possible values." ], [ "Automation vs. Autonomy", "While we have primarily focused on increasing a system’s creative autonomy through automation, our framework does not grant a system as much autonomy as to enable it to act entirely independently in response to its own motivations [25].", "A system within our framework would remain inactive until engaged with.", "Such engagement can range from a stimulus through available sensors, e.g.", "cameras, microphones or heat sensors, to a text or image prompt or an entire inspiring set [46], to more precise and detailed instructions.", "In any case, this choice of input channel and sensibility has to be taken by a human and is not a target in our framework.", "We further assume the choice of generated media (image, audio, text, video, etc.)", "to be made by a person prior to building a system.", "Naturally, it is not difficult to imagine a setup in which this choice, too, becomes part of the pipeline.", "Going one step further in autonomous automation, our framework and its targets make it possible to devise a generative system which produces automated generative pipelines.", "In fact, it might be possible for a generative system to generate itself, much like a general-purpose compiler that compiles its own source code.", "This self-referential generation has similarly been proposed in work on automated process invention [12]." ], [ "Targets for Automation", "Below we define and discuss the many tasks and decisions that are part of a generative DL pipeline in an artistic setting and which can be automated within our framework.", "Wherever applicable, we explain how a target relates to concepts of AutoML and CC.", "The following subsections identify individual targets for automation.", "The complete process is illustrated as a sequence of steps in figure REF .", "As per this diagram, we organise the steps into three stages: (i) a preparation stage to gather relevant materials (ii) a configuration stage, where the models, training regimes and parameters are tuned to produce valuable output, and (iii) a presentation stage where the user deploys a final model and curates the output.", "The first target (selecting a pre-trained model) is optional and can be skipped in order to start from scratch instead.", "In this case, we begin with data preparation and curation." 
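As a minimal illustration of treating automation as a search problem, the sketch below performs random search over a small discrete configuration space; the configuration keys and the evaluation function are placeholders for whichever targets and metrics a concrete system exposes, not a prescription.

```python
import random

# Hypothetical discrete search space over a few automation targets.
SEARCH_SPACE = {
    "architecture": ["gan", "vae", "autoregressive"],
    "learning_rate": [1e-4, 2e-4, 1e-3],
    "batch_size": [16, 32, 64],
    "loss_variant": ["standard", "hinge", "patch_based"],
}

def random_search(evaluate, budget=20, seed=0):
    """Random search over SEARCH_SPACE; evaluate(config) -> score is supplied
    by the surrounding system (e.g. a weighted combination of output-quality
    metrics), and a higher score is assumed to be better."""
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(budget):
        config = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score
```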
], [ "Pre-trained model (optional)", "It might not be necessary to train a network from scratch if an appropriate pre-trained model is available, especially when a quick system setup is desired.", "A list of pre-trained models, tagged with keywords associated to their generative domain, could provide a knowledge base for a system to select, download and deploy a model.", "This can either be directly put to use, in which case the system could immediately skip to evaluating the model, or it can be fine-tuned on a smaller set of data.", "Such additional fine-tuning could be dependent on the outcome of the pre-trained model’s evaluation.", "Only if the pre-trained model’s output is not satisfactory would it have to be further optimised or de-optimised.", "Working with a pre-trained model has implications for the subsequent choices of the network architecture, training scheme and loss function." ], [ "Data preparation and curation", "This preparation step includes the acquisition, cleaning, augmentation and transformation of data samples, akin to data preparation in AutoML.", "Starting with the data collection task, we consider different data sources from which a system could select.", "Drawing on existing data sets, such as an artist’s private data collection, can introduce important desirable biases and ensure high quality output.", "In contrast, scraping samples from the internet could contribute to the generation of surprising results.", "Additional pre-trained generative models can provide a source for synthesised data in large quantities.", "An important addition to the pre-processing is data curation, in contrast to simple cleaning.", "Rather than filtering out noisy samples, for artistic purposes it can be desirable to add ‘noise’.", "To this end, it is not uncommon in an artistic context to mix multiple data sets.", "In this additional step, the system thus further refines the data set, similar to an artist adding or removing individual samples, which can influence the qualities of the system’s final output.", "This is an opportunity for iterative improvements and for alternating co-creativity [33], given that the system both generates and evaluates.", "Automation in the cleaning and curation tasks can be achieved, e.g.", "in the image domain, by employing other computer vision or contrastive language-image models." ], [ "Network architecture and training scheme", "This target for automation defines the choice of possible architectures (e.g.", "GAN, VAE, Transformer), which could include non-neural methods.", "NAS in AutoML is concerned with finding optimal combinations of basic building blocks of artificial neural networks in terms of performance on a classification or regression task, an immensely difficult optimisation problem.", "We recommend in our framework to instead select from tried-and-tested architectures, only altering parts of the architecture with a direct influence on the output, e.g.", "the number of upsampling convolutions which determine the final output image size.", "The training scheme is largely influenced by the choice of architecture.", "In the case of GAN, the training scheme includes the choice of whether to train the discriminator and generator networks in parallel or consecutively, and how many individual optimisation steps to perform for either." 
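A possible realisation of automated data curation with a contrastive language-image model, as suggested in the data preparation and curation target above, is sketched below; it assumes OpenAI's CLIP package, and the prompt and threshold are free choices of the system or user rather than fixed parts of the framework.

```python
import torch
import clip                        # OpenAI CLIP package (assumed to be installed)
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def curation_score(image_path, prompt):
    """Cosine similarity between an image and a text prompt, used to decide
    whether a candidate sample is kept in the training set."""
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    text = clip.tokenize([prompt]).to(device)
    with torch.no_grad():
        img_emb = model.encode_image(image)
        txt_emb = model.encode_text(text)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return (img_emb @ txt_emb.T).item()

# Hypothetical usage: keep only candidates sufficiently close to a theme.
# kept = [p for p in candidate_paths if curation_score(p, "a pastel landscape") > 0.25]
```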
], [ "Loss function", "The formulation of the basic loss term is highly dependent on a model’s training scheme and constitutes the minimum requirement for successful training.", "However, additional loss terms can change or supplement the basic term for further refinement of the training objective.", "As a central element in guiding the model parameter optimisation process, any modification to the loss terms will strongly impact the modelled distribution and consequently the system’s output.", "In other contexts, methods have been proposed for the automatic invention of objective functions [17].", "These could provide a starting point for adapting the approach to the constraints of loss functions in generative DL." ], [ "Optimisation algorithm", "The selected algorithm will be responsible for adjusting a model’s parameters through error correction informed by the gradient of the loss function.", "This choice can potentially have an influence on the system’s output, as it is responsible for finding one of the potentially many local minima in the loss landscape.", "As it determines whether convergence can be reached at all, this decision can ultimately make or break the success of the training process.", "It can further largely influence convergence speed and be critical in time-sensitive setups.", "The choice of optimisation algorithms might be limited by the previous selection of network architecture and corresponding training scheme." ], [ "Hyper-parameter tuning", "Optimisation of batch size, learning rate, momentum, etc.", "can be achieved via AutoML methods, and there is much active research in this area." ], [ "Model selection and evaluation", "From all the possible models, the best one has to be selected in accordance with given criteria relevant to the task at hand.", "As the training process is essentially a succession of gradual changes of model parameters over time, this task is equivalent to identifying the right moment to stop training.", "Additionally, and in order not to lose previous training states, model checkpoints can be saved along the way as training progresses and whenever model evaluation satisfies given criteria.", "After training is finished, the best model has to be selected from all candidate checkpoints.", "In standard ML projects, this would normally be done with respect to the primary concern of predictive accuracy.", "But in generative projects, other considerations may include how surprising the outputs are, synthesis speed (for tool or real-time uses) and coherence of the results.", "Such criteria could be employed in a weighted sum of metrics, where the system can give more or less emphasis to individual terms.", "This would allow the combination of standard metrics like FID in the image domain for general output fidelity with a measure of sample similarity with respect to one or more reference samples, an inspiring set or a text prompt via a contrastive language-image model."
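The weighted sum of metrics suggested above for model selection can be sketched as follows; the individual metric functions (e.g. an FID estimate or a prompt-similarity score) are assumed to be provided elsewhere in the pipeline, and the weights encode how much emphasis the user retains over each criterion.

```python
def checkpoint_score(checkpoint, weights, metrics):
    """Combine several evaluation criteria into one scalar used to rank
    candidate checkpoints; metrics maps names to functions checkpoint -> float.
    Metrics where lower is better (such as FID) should be negated by the
    caller so that a higher combined score is always preferable."""
    return sum(weights[name] * fn(checkpoint) for name, fn in metrics.items())

# Hypothetical usage with placeholder metric functions:
# metrics = {"fidelity": lambda c: -estimate_fid(c),
#            "prompt_fit": lambda c: clip_similarity(c, prompt),
#            "novelty": lambda c: novelty_measure(c)}
# weights = {"fidelity": 0.5, "prompt_fit": 0.3, "novelty": 0.2}
# best = max(checkpoints, key=lambda c: checkpoint_score(c, weights, metrics))
```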
], [ "Output curation", "Having obtained a successfully trained model, we want a system to reliably produce high-quality output.", "While efforts in previous steps were aimed at refining the model which is at the core of the generative process, this final automation target aims to raise the system’s overall output quality.", "Two approaches come to mind: filtering and search.", "In the former, a system could select those samples from a large batch of model outputs that rank highest against a given metric.", "In the latter, the system could search for vectors directly in a model’s latent space via one of the various methods we have outlined in the section above on approaches to search problems.", "The evaluation measure, as before, could be the similarity of samples compared to a set of reference samples, an inspiring set or a text prompt via a contrastive language-image model." ], [ "An Illustrative Example", "In early 2021, a generative deep learning Colab notebook [5] called the Big Sleep was shared online [40].", "It allows for text-to-image generation [1], effectively visualising a user-given text prompt, often with innovative content and design choices, as per the example in figure REF .", "This is an instance of an artistic deviation from the standard pipeline, where CLIP [42] is used to evaluate a generated image w.r.t.", "a given text prompt, driving a gradient-based search for latent vector inputs to a generative model called BigGAN [11].", "We use this setup as an example to identify the following places where automation could be introduced, based on our framework.", "We highlight concrete techniques and references for automation from the literature.", "$\\bullet $ In terms of pre-trained model selection, numerous people have substituted BigGAN with other GAN generators.", "This creative responsibility could be automated, with the system choosing from a database of GANs and installing new ones into the notebook.", "$\\bullet $ In terms of data preparation and curation, users often choose imaginative text prompts, as the notebook often produces high quality, surprising results for these.", "This could be substituted, for example, with automated fictional ideation techniques [38].", "$\\bullet $ [40], the notebook programmer, innovated in loss function definition, employing patches from generated images rather than the entire image to evaluate its fit to the prompt.", "Various image manipulation routines could be automatically tested within loss function calculations from a library, with the system automatically altering the notebook at code level.", "$\\bullet $ As described in [15], in some circumstances where multiple images are being generated simultaneously, increasing the learning rate can help searches fail quickly.", "Such hyper-parameter tuning could be automated using standard AutoML techniques, guided by requirements on acceptable search successes and output image quality.", "$\\bullet $ In terms of model selection and deployment, we can imagine models being used as creative web-services [55], with higher-level CC systems accessing text-to-image generators in a variety of projects.", "$\\bullet $ There has been an explosion of human usage of notebooks like the Big Sleep, with attendant output curation via cherry picking results for posting on social networks and in blogs.", "This would be an ideal target for automation with systems using CLIP and other techniques to evaluate images, also possibly inventing new aesthetic measures [17].", "Figure: Image generated by the 
Big Sleep Colab notebook for the prompt “The Melbourne skyline in pastel colours”.", "Note the appropriate presentation of content and style, and additional pastel strokes in the sky as an unprompted innovation." ], [ "Discussion", "We have presented a framework for the specific purpose of automating manual tasks in a generative DL pipeline for artistic projects.", "We adopt the core concepts of AutoML and adjust them in two ways.", "First, we focus on generative DL which differs in the type of learning task, in that it is concerned with modelling the distribution of a training set, rather than classification or regression.", "And second, we address the artistic usage of generative DL, where more emphasis is given to the qualities of the generated output over the qualities of the model.", "The specialisation of our framework inversely limits its generalisability in the same ways.", "On the one hand, there might be artefact-driven applications of generative DL within or outside CC that we have not considered.", "On the other hand, our framework is not generally applicable to generative approaches in DL due to its special emphasis on artistic uses.", "Its focus on generative DL further limits its validity for other generative modelling methods." ], [ "Automation and Creative Autonomy", "We have previously analysed the close relationship between the automation of generative DL systems and the central CC goal to increase a system's creative autonomy [31], [39], [25] by granting it more creative responsibilities [18].", "Here, we complement this a priori analysis based on knowledge of our concrete automation pipeline.", "The aim is to understand to which extent our proposed pipeline already enables facets of creative autonomy, and how CC insights on creative autonomy could be used to advance it in future work.", "Automation is necessary for creative autonomy, but the opposite does not hold: while a fully automated generative DL system might still exactly follow user-prescribed goals, an autonomously creative system has the freedom to pursue a course independent of its programmer’s or operator’s intentions [31].", "This firstly requires the system to autonomously evaluate its creations, which is satisfied by any system that can be considered creative [56].", "In addition, an autonomously creative system must be capable of autonomous change, i.e.", "initiating and guiding changes to its standards without being explicitly directed when and how to do so [31].", "To prevent trivial implementations of these capabilities, [31] requires them to not exclusively rely on random decisions.", "To assess how much our pipeline realises creative autonomy, we can draw on various CC approaches to enhancing autonomy in computational systems.", "For instance, [19] proposes repeatedly asking ourselves: what am I using the software for now?", "Once we identify why we are using the software, we can  write code that allows the software to use itself for the same purpose.", "If we can repeatedly ask, answer and code for these questions, the software will eventually  create autonomously for a purpose, with no human involvement.", "Our framework provides various candidate targets to perform such a gradual elevation of a generative DL system.", "For the evaluation of a concrete system built under our framework, we consider the FACE model [16], [41] an adequate evaluation tool.", "In this evaluation model, systems are described in terms of the creative acts they perform.", "Such an analysis allows for the identification of 
newly added creative responsibilities through automation.", "[37] follow a more constrained approach and, as part of a larger agenda to realise meta-creativity in CC, propose that creative autonomy requires artefact-, goal- and potentially generator-awareness, realised through operators of (self-) reflection and (self-) control which closely match Jennings’ [31] requirements for evaluation and change.", "Whether a system built within our framework satisfies these definitions depends on the extent to which it is granted responsibilities in the form of automating decision making for targets identified in the framework.", "We demonstrate this based on extensions to a non-automated generative DL system.", "Such a system can be considered to have some generator-awareness due to the role of its loss function (self-reflection), and its adjustments of own parameters through error correction methods like back-propagation (weak self-control).", "A system’s control over changes to its generator can be increased from weak to strong within our framework, through the automated manipulation of network architecture or selection of a pre-trained model.", "Further putting a system in charge of its loss function within our framework (strong control) affords it goal-awareness and consideration as autonomously creative, if it is capable of modifying the loss function in response to its evaluation of generated output.", "Crucially, more radical forms of creative autonomy do not eliminate co-creation, i.e.", "cut ties with the system user entirely, but facilitate different forms of interaction.", "To really become independent of its designer, a system must not be isolated but interact with critics and creators that shape its evaluation and changes [31].", "A fully creatively autonomous system might refuse the will of its interaction partner [31], [25], but we believe that this holds a promise for innovative artistic collaborations between people and computational systems, connecting artistic practices in generative DL with the philosophy and goals of CC." 
], [ "Future Work", "In this formulation of our framework we have only briefly mentioned automation of creative responsibilities via the usage of ML models.", "The possibility of training or deploying multiple models in the same system enables the addition of organisational structures to our framework, in which we think of individual models as agents in a multi-agent system.", "To use our framework in co-creative applications, augmenting a system with the ability to communicate its adjustments and intentions would be especially beneficial.", "Moreover, to address our framework’s limitations, further work is needed to consider applications which use generative DL but are not artistically focused.", "This could potentially inform a more general automated ML framework, which would further benefit from more formal definitions.", "We plan further study of the ways in which deep learning researchers, practitioners and artists work with generative systems, in particular where they have, and could, add levels of automation, via analyses such as the illustrative example above.", "Some of the techniques that artists apply, such as data set curation and iteration, as well as the selection of generated outputs, are promising avenues for automation and require further investigation.", "We further plan to put our framework to use in applied projects.", "Through this, we aim to provide demonstrative examples of how some of the challenges in automation can be tackled and to show the surprising results that automation can afford.", "For the evaluation of such demonstrative examples we plan to draw from the FACE descriptive model of creative acts [16], [41]." ], [ "Acknowledgements", "We thank our reviewers for their helpful comments.", "Sebastian Berns and Terence Broad are funded by the EPSRC Centre for Doctoral Training in Intelligent Games & Games Intelligence (IGGI) [EP/L015846/1, EP/S022325/1].", "Christian Guckelsberger is supported by the Academy of Finland Flagship programme Finnish Center for Artificial Intelligence (FCAI).", "AutoML[AutoML]automated machine learning CCcomputational creativity CLIPContrastive Language-Image Pretraining DLdeep learning FIDFréchet Inception Distance GANgenerative adversarial network MLmachine learning NASneural architecture search VAEvariational auto-encoder" ] ]
2107.01858
[ [ "Single Model for Influenza Forecasting of Multiple Countries by\n Multi-task Learning" ], [ "Abstract The accurate forecasting of infectious epidemic diseases such as influenza is a crucial task undertaken by medical institutions.", "Although numerous flu forecasting methods and models based mainly on historical flu activity data and online user-generated contents have been proposed in previous studies, no flu forecasting model targeting multiple countries using two types of data exists at present.", "Our paper leverages multi-task learning to tackle the challenge of building one flu forecasting model targeting multiple countries; each country as each task.", "Also, to develop the flu prediction model with higher performance, we solved two issues; finding suitable search queries, which are part of the user-generated contents, and how to leverage search queries efficiently in the model creation.", "For the first issue, we propose the transfer approaches from English to other languages.", "For the second issue, we propose a novel flu forecasting model that takes advantage of search queries using an attention mechanism and extend the model to a multi-task model for multiple countries' flu forecasts.", "Experiments on forecasting flu epidemics in five countries demonstrate that our model significantly improved the performance by leveraging the search queries and multi-task learning compared to the baselines." ], [ "Introduction", "The control of infectious diseases is an important task for public health authorities as well as all industry stakeholders worldwide.", "Various infectious diseases in addition to COVID-19, which has recently attracted global attention, have had a significant impact on global health and the economy.", "The forecasting of infectious disease epidemics is necessary to execute appropriate measures for their control.", "In particular, influenza epidemics, a representative class of severe infectious diseases, leads to 290,000 to 650,000 deaths annually [46].", "Such instances have motivated public health authorities to forecast the consequences of influenza in different countries.", "Many studies relating to flu forecasting models have been conducted for a long time.", "In recent years, besides models by leveraging historical flu activity, several models have been proposed to forecast the flu volume by exploiting online user-generated contents (UGCs) such as search query data and social media posts to capture human movements as social sensors [18], [4], [39].", "The majority of existing flu forecasting models by leveraging UGCs and historical flu activity data focus on one country or each area in one country.", "However, we assume that it is feasible to create a single flu forecasting model targeting multiple countries because the flu time series in each country exhibit strong seasonality and therefore, hold strong similarity.", "For example, Pearson correlations of the flu time series in the five countries (US, JP, UK, AU and FR) with different cultures, locations, and languages have a moderate correlation with one another (almost all correlations are over 0.6, refer to Appendix A.)", "Moreover, in terms of search queries, which are a representative resource, it has been reported that the user search behaviors for health themes in different countries are similar [50], [33], [3]; for example, similar search queries are used when looking for a specific disease.", "Thus, it is possible that a single model can achieve sufficient flu forecasting for different countries.", 
"Also, the training of a single model using various flu-related data can capture the nature of flu epidemics in each country by escaping from overfitting, which is caused by a lesser degree of historical data in one country for training [27], [26].", "Our study challenges flu forecasting for various countries with one model as a multi-task problem, which enables two or more tasks to be learned jointly and shares information between the respective tasks.", "In other words, we treat each country as each task within the framework of multi-task learning.", "Besides, for the development of a flu prediction model with higher performance, we solve two issues; how to find suitable search queries and how to leverage the search queries in the model construction.", "We address these issues in the following parts of the paper.", "The first issue is how to select queries and keywords in search engines as a resource for flu forecasting.", "Many methods using UGCs for forecasting the flu volume have been developed since the emergence of Google Flu [18], which demonstrated that the number of search queries capturing human behaviors was a good resource for forecasting.", "Certain studies [21], [49], [53], [48] have depended on “Google Correlate,” which returns English search queries that are the most highly correlated to an input time series, for the selection of suitable search queries.", "However, this approach cannot be used in many areas (non-English-speaking areas) and it has already been unavailable since December 2019.", "Therefore, we discuss a method for selecting search queries in languages other than English to create a flu forecasting model for multiple countries.", "In particular, we examine two transfer methods of search queries from English to other languages (Japanese and French): the translation-based method and the combination method of word alignment and time-series correlation (Section 3).", "The second issue is how to effectively incorporate search queries into a flu forecasting model.", "Two types of data have been applied extensively: historical flu activity data involving the previous year's data (known as “historical ILI data”) [43], [47], [45] and online UGC data [53], [4], [52], which mainly consist of search query data.", "A representative example of simultaneous inputs is the ARGO model [48], which is based on linear regression using the input data of the Google search time series and the historical ILI data.", "The ARGO has exhibited superior results for flu forecasting in the US [32].", "However, it has recently been reported that the effect of the search query data in a forecast model is small, and historical ILI data is sufficient as input [1].", "According to these reports, there remains room for considering how to effectively integrate the search query data, whereas these data have improved the forecasting performance in certain cases.", "That is, the simple methods of handling these two resources are insufficient for improving forecasting models.", "Furthermore, the overall mutual effect between the historical ILI data and search query data is difficult to capture effectively using existing models, which makes it difficult to extract this effect and apply it to tasks.", "To tackle this issue, we propose a model that combines inputs by considering the characteristics of input data.", "This approach is based on two aspects: the flu time series exhibits strong seasonality and search query data are useful features for forecasting non-seasonal parts.", "Specifically, the 
search query data are used to forecast the deseasonalized component of flu data by leveraging the attention mechanism [5], which is useful for considering the feature importance (Section 4.2).", "Subsequently, we use the model addressing the task as a base and extend it to the flu forecasting model for multiple countries (Section 4.3).", "Similar to ours, Zou et al.", "[52] proposed a multi-task model based on linear and Gaussian regression to forecast the flu volume in the following two problem settings: several states in the US, and two countries, namely the US and England.", "Our multi-task model further develops the above in two aspects: we tackle flu forecasting in five countries, each of which differs in terms of the area or language, and we apply not a simple model such as a statistical model, but our novel neural network-based model for multi-task learning to achieve higher accuracy and long-term forecasting.", "Other related studies are discussed in detail in Appendix B.", "In summary, we aim to construct a flu forecasting model targeting multiple countries by leveraging multi-task learning while solving two issues as below.", "First, to find suitable search queries, we examine the transfer methods of the search queries from English to other languages.", "Second, we effectively incorporate the search query data into the model, and propose a novel forecasting model that considers the characteristics of the input data, historical ILI data, and search query data.", "The experiments demonstrate that the proposed models and methods achieve the best accuracy among comparative models for forecasting flu epidemics in five countries." ], [ "ILI rates from health agencies", "We obtained weekly ILI rates, representing the number of ILI cases per 100,000 people in a population, as a measure of ILI activity for the US, Japan, Australia, England, and France from their established syndromic surveillance systems, namely the Centers for Disease Control and Preventionhttps://www.cdc.gov/, the National Institute of Infectious Diseaseshttps://www.niid.go.jp/niid/ja/, Australian Sentinel Practices Research Networkhttps://aspren.dmac.adelaide.edu.au/, Public Health Englandhttps://www.gov.uk/government/organisations/public-health-england, and GPs Sentinelles Networkhttps://www.sentiweb.fr/, respectively.", "The England data span from 2013/41st week to 2020/29th week, whereas the others span from 2013/26th week to 2020/29th week.", "We denote these countries using the corresponding country codes, namely US, JP, AU, FR, and UK.", "Time series of weekly search query frequencies were retrieved through Google Trendshttps://trends.google.com as the UGC data.", "The frequency represents the weekly search activity of the queries within a specific region.", "The two methods for selecting search queries are described in Section .", "The time series of the Google Trends data in the training period were normalized to have a minimum value of zero and maximum value of one (min-max normalization).", "The data span was the same as that of the ILI rate data." 
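A minimal sketch of this normalisation step, fitting the min-max statistics on the training period only and applying them to the full series; the file name, week-index format and split week are illustrative assumptions.

```python
import pandas as pd

trends = pd.read_csv("trends_us.csv", index_col="week")   # one column per search query

# Assumed split point: everything up to this week belongs to the training period.
train_end = "2016-29"
train = trends.loc[:train_end]

lo, hi = train.min(), train.max()
trends_scaled = (trends - lo) / (hi - lo)   # training-period values lie in [0, 1]
```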
], [ "Methods for finding search queries", "We proposed two transfer methods, namely the translation-based and word-alignment and temporal correlation based (WT-based) methods, to explore multilingual search queries using a list of English search queries, which were created in previous research [52] and placed in a URLhttps://github.com/binzou-ucl/google-flu-mtl.", "As input for the proposed model, we selected the top $L$ English search queries for the US, AU, and UK based on the list, and selected each search query in JP and FR corresponding to the English search query based on these one-to-one query mapping methods.", "The usefulness of mapping from English to other languages is described in [33], [50], [53].", "These studies pointed out that the volume movement in the search queries is similar among countries with certain health conditions." ], [ "Translation-based method: ", "This is the simplest trasfer method for the conversion of English into other languages.", "To select other languages' queries, we translated English search queries into those of the target language.", "We used Google Translatehttps://translate.google.com for the translation-based method.", "For Japanese morphemes, which are not separated by spaces, we divided each morpheme and inserted spaces between them.", "It is possible that the translation-based approach, which simply maps the queries to the target language, will not capture suitable queries.", "For example, in Japanese, the abbreviation of influenza, “flu,” is translated into “I-N-FU-LU-E-N-ZA” and is not translated into the Japanese abbreviation of influenza, “I-N-FU-LU.” Moreover, it is difficult to select the suitable orthographical variant, the three categories of which used by the three Japanese writing scripts are applied (kanji-script, hiragana-script, and katakana-script).", "We solved these problems using the combination WT-based method, which considers the semantic similarity to the English search queries and temporal similarity to the historical ILI data.", "Word alignment is one method that is used for creating cross-lingual word embeddings to compute word similarities in different languages, and is trained using sources of monolingual text with a smaller cross-lingual corpus of aligned text [34].", "This approach can solve the above problems.", "For the word alignment, we used the method to learn cross-lingual word embeddings proposed by Zhou et al. 
[51].", "We needed to prepare word embeddings based on the monolingual text for English and the target languages (Japanese and French).", "For this purpose, we obtained the word embedding dataset [20] learned by fasttext from Wikipedia corpora [9].", "Thereafter, we applied these word embeddings to the word alignment method.", "To search for words with similar meanings, we used cosine similarity to map each word in the search query, except for prepositions and articles, to the $k$ most similar words in other languages using the common word embedding space created by the word alignment.", "The similarity score was represented by $\\Theta _{w}$ .", "Temporal correlation is a method for finding a better search query based on the similarity of the time series of the search queries to the time series of the historical ILI data for the forecast.", "It was calculated by the Pearson correlation between the time series of the search query, for which candidates were provided by the word alignment, and the time series of the historical ILI in each country.", "The score was represented by $\\Theta _{t}$ .", "The WT-based method selects the search query with the best score in the equation $\\Theta _{w} + \\Theta _{t}$ corresponding to an English search query.", "This is inspired by [53], which used a similar method of selecting search queries for creating a transfer model of flu forecasts.", "Our research differs from the previous research in terms of the motivation whereby we discuss how to find better search queries for flu forecasts." ], [ "Problem formulation", "Our aim is to forecast the future ILI rates in various countries.", "We formulate this problem as a supervised machine learning task.", "Let $\\textbf {X} = \\lbrace x_{t-N+1},...,x_{t-1},x_{t}\\rbrace \\in \\mathbb {R}^{N}$ be a time series of historical ILI data containing $N$ weekly data points.", "Let $\\textbf {Q} = \\lbrace q_{t-N+1},...,q_{t-1},q_{t}\\rbrace \\in \\mathbb {R}^{N \\times L}$ be the search query data containing $N$ weekly data points and $L$ queries.", "Our model forecasts the true $S$ -step-ahead values $\\textbf {Y} = \\lbrace x_{t+1},...,x_{t+S}\\rbrace \\in \\mathbb {R}^{S}$ .", "We learn a function $f: \\lbrace \\textbf {X}, \\textbf {Q}\\rbrace \\rightarrow \\textbf {Y}$ that maximizes the prediction accuracy in each country." 
], [ "Model structure", "Our model is motivated by the idea that search query data are useful features for forecasting non-seasonal parts of flu data.", "This concept originates from a previous study [35], which reported that the flu forecasting accuracy is improved by splitting the forecasting part from the historical ILI data and search query data.", "The model architecture is presented in Fig.", "REF .", "For the data preparation, we divide the historical ILI data into the seasonalized and deseasonalized components.", "Under the assumption that the seasonalized component has a constant frequency in the future, we forecast the deseasonalized component in the future.", "For the forecasting, we apply the encoder–-decoder model considering the search query data using an attention mechanism [5].", "Figure: Architecture of proposed model.Historical ILI data are divided into seasonalized and deseasonalized components.We apply the deseasonalized part to the encoder–-decoder model comprising GRUs with an attention mechanism considering search queries.Figure: Architecture of proposed model expanded to multi-task learning.The red boxes indicate the share of different parameters in the model for each country.The blue boxes indicate the same parameters.", "Furthermore, country embedding is introduced as the initial latent state of the GRUs." ], [ "Flu decomposition: ", "We use the seasonal-trend decomposition using LOESS (STL) method [15], which considers the following time-series model with the trend and seasonality: $ y_{t} = \\tau _{t} + s_{t} + r_{t}, t = 1,2,...,N,$ where $y_{t}$ denotes the historical ILI data at time $t$ , $\\tau _{t}$ is the trend in the time series, $s_{t}$ is the seasonal signal with period $T$ , and $r_{t}$ is the reminder signal.", "The seasonal signal describes the repeated patterns in the specified period $T$ , which remain constant over time.", "The trend describes the continuous increase or decrease.", "The detailed decomposition algorithm is outlined in [15].", "In our method, the historical ILI data are divided into seasonalized and deseasonalized components.", "The deseasonalized component $\\textbf {X}^{\\tau }$ represents the residual that is obtained by subtracting the seasonalized part $\\textbf {X}^{s}$ from the historical ILI data $\\textbf {X}$ ; $\\textbf {X}^{\\tau } = \\textbf {X} - \\textbf {X}^{s}$ .", "Our neural network-based architecture is developed to forecast the future value of a deseasonalized component.", "It is assumed that the seasonalized component $\\textbf {X}^{s}$ exhibits a constant pattern in the future.", "Subsequently, the flu forecast value $\\textbf {Y}$ is output by simply adding the value based on the pattern of the seasonalized component to the forecast value of the deseasonalized component using our model.", "Our model employs an encoder–-decoder architecture to forecast more than two weeks ahead.", "This architecture is composed of gated recurrent units (GRUs) [14], representing a simple and powerful variant of the RNN, and an attention mechanism, which indicates that the neural network pays close attention to parts of the data when performing tasks.", "The GRUs are used to capture the hidden representations of the deseasonalized component of the historical ILI data $\\textbf {X}^{\\tau }$ and search query data $\\textbf {Q}$ , and the attention is used to help our model to focus on salient changes in the time series in each query regarding the historical ILI data.", "The attention mechanism computes the importance of each 
query with respect to the forecast and aids in making our model transparent and interpretable.", "To capture the hidden representation of the deseasonalized component of the historical ILI data $\\textbf {X}^{\\tau }$ as the encoder, the GRUs use the input data $\\textbf {X}^{\\tau }_{t}$ and previous hidden representation $\\textbf {H}_{t-1}$ , as follows: $\\begin{aligned}\\textbf {r}_{t} = \\sigma \\left(U_{r}\\textbf {X}^{\\tau }_{t} + W_{r}\\textbf {H}_{t-1}\\right), \\quad \\textbf {f}_{t} = tanh\\left(U_{h}\\textbf {X}^{\\tau }_{t} + \\textbf {H}_{t-1} \\odot W_{h}\\textbf {r}_{t}\\right), \\\\\\textbf {z}_{t} = \\sigma \\left(U_{z}\\textbf {X}^{\\tau }_{t} + W_{z}\\textbf {H}_{t-1}\\right), \\quad \\textbf {H}_{t} = \\left(1 - \\textbf {z}_{t}\\right) \\odot \\textbf {H}_{t-1} + \\textbf {z}_{t} \\odot \\textbf {f}_{t},\\end{aligned}$ where $\\textbf {z}_{t}$ and $\\textbf {r}_{t}$ represent the reset and update gates at time $t$ , respectively.", "In this case, $U_{z}, U_{r}, U_{h} \\in \\mathbb {R}^{1 \\times M}$ , and $W_{z}, W_{r}, W_{h} \\in \\mathbb {R}^{M \\times M}$ are parameters for the respective gates, whereas $M$ is the GRU output dimension.", "We combine Equation (REF ) as follows: $\\textbf {H}^{\\tau }_{i} = GRU\\left(\\textbf {X}^{\\tau }_{i}\\right), \\quad i \\in \\left\\lbrace t-N+1, ..., t\\right\\rbrace ,$ where $\\textbf {H}^{\\tau }_{t} \\in \\mathbb {R}^{1 \\times M}$ , which is the last GRU hidden state, is used as the hidden representation.", "The search query data also leverage the GRU, as is the case with the historical ILI data.", "$\\textbf {H}^{q}_{i,j} = GRU\\left(\\textbf {Q}_{i,j}\\right), \\quad i \\in \\left\\lbrace t-N+1, ..., t\\right\\rbrace , \\quad j \\in \\left\\lbrace 1, ..., L\\right\\rbrace ,$ where $\\textbf {H}^{q}_{t} \\in \\mathbb {R}^{L \\times M}$ , which is the last GRU hidden state, is used as the hidden representation and $L$ is the number of queries.", "The combination representation by the attention mechanism is obtained from the hidden representations $\\textbf {H}^{\\tau }_{t}$ and $\\textbf {H}^{q}_{t}$ .", "In general, an attention mechanism can be defined as mapping a query $q$ and a set of key–value pairs $\\lbrace k, v\\rbrace $ to an output $o$ .", "For each position $i$ , we compute the attention weighting as the inner product between the query $q_{i}$ and key $k_{i}$ at every position.", "For the application of our model, we treat the hidden representation of the deseasonalized component $\\textbf {H}^{\\tau }_{t}$ as the query, and the hidden representation of the search queries $\\textbf {H}^{q}_{t}$ as the key and value.", "Position $i$ indicates the location of each search query representation ($i \\in \\lbrace 1,...,L\\rbrace $ ).", "The query, key, and value representations are calculated from each representation through linear projection, as follows: $\\begin{aligned}\\textbf {S}_{q} = \\textbf {W}^{q}\\textbf {H}^{\\tau }_{t}, \\qquad \\textbf {S}_{k} = \\textbf {W}^{k}\\textbf {H}^{q}_{t}, \\qquad \\textbf {S}_{v} &= \\textbf {W}^{v}\\textbf {H}^{q}_{t},\\\\\\end{aligned}$ where $\\textbf {S}_{q} \\in \\mathbb {R}^{1 \\times M}$ indicates the query representation, and $\\textbf {S}_{k}, \\textbf {S}_{v} \\in \\mathbb {R}^{L \\times M}$ indicate the key and value representations, respectively.", "Following the linear projection, the dot-product attention computes the importance of each query representation and the attention representation $\\textbf {H}^{\\tau q}$ ; $\\textbf {H}^{\\tau q} = {\\rm 
Softmax}(\\textbf {S}_{q}\\textbf {S}_{k})\\textbf {S}_{v},$ where ${\\rm Softmax}(\\textbf {S}_{q}\\textbf {S}_{k})$ represents the importance of each query and the dimension of $\\textbf {H}^{\\tau q}$ is $M$ .", "Thereafter, we apply the feature, concatenating the attention representation $\\textbf {H}^{\\tau q}$ and hidden representation of the deseasonalized component $\\textbf {H}_{t}^{\\tau }$ , to a multi-layer perceptron (MLP); $\\textbf {H}^{enc} = {\\rm MLP}([\\textbf {H}_{t}^{\\tau } \\cdot \\textbf {H}^{\\tau q}]),$ where the dimension of $\\textbf {H}^{enc}$ is $M$ .", "For the inference of the deseasonalized value of the flu data in the forecast $\\lbrace t+1,...,t+S\\rbrace $ , we apply $\\textbf {H}^{enc}$ to the GRU as the decoder and MLP, which constitute two layers.", "${\\left\\lbrace \\begin{array}{ll}\\textbf {H}^{dec}_{i} &= GRU\\left(\\textbf {X}^{\\tau }_{t}, \\textbf {H}^{enc}\\right), \\quad i = t+1\\\\\\textbf {H}^{dec}_{i} &= GRU\\left(\\textbf {O}^{schedule}_{i-1}\\right), \\quad i \\in \\left\\lbrace t+2, ..., t+S\\right\\rbrace \\end{array}\\right.", "}\\\\\\hat{\\textbf {O}}_{i} = MLP(\\textbf {H}^{dec}_{i}), \\quad i \\in \\left\\lbrace t+1, ..., t+S\\right\\rbrace ,$ where $\\hat{\\textbf {O}}_{i}$ , which is the decoder output, represents the forecast of the deseasonalized value at time $i$ .", "Moreover, $\\textbf {O}^{schedule}_{i-1}$ refers to the value to be applied the scheduled sampling [7], which is a system of feeding the model with either ground truth values with a probability of $\\epsilon $ or forecasts from the model with a probability of $1 - \\epsilon $ .", "This resolves the problem that the discrepancy between the input distributions of the training and testing can lead to poor performance, as the ground truth values are replaced by forecast values generated by the model.", "Finally, we can calculate the forecast of the flu volume $\\hat{\\textbf {Y}}_{i}$ at time $i$ by simply adding the forecast of the deseasonalized value $\\hat{\\textbf {O}}_{i}$ to the seasonalized value $\\textbf {X}^{s}_{i}$ ; $\\hat{\\textbf {Y}}_{i} = \\hat{\\textbf {O}}_{i} + \\textbf {X}^{s}_{i}, i \\in \\left\\lbrace t+1, ..., t+S\\right\\rbrace $ .", "For the model training, we need to determine the true value of the deseasonalized component $\\textbf {O}_{i}$ .", "This is achieved by a simple method, namely subtracting the seasonal part $\\textbf {X}^{s}_{i}$ that is assumed to have a constant frequency in the future season from the true flu volume $\\textbf {Y}_{i}$ .", "We use the mean squared error (MSE) loss between the true value of the deseasonalized component $\\textbf {O}$ and the forecast value $\\hat{\\textbf {O}}$ ." 
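For concreteness, below is a condensed PyTorch sketch of the single-country architecture just described: STL deseasonalisation, two GRU encoders, dot-product attention over the query representations, and a GRU decoder producing the $S$-step deseasonalised forecast, to which the seasonal part is added back outside the network as $\hat{\textbf {Y}}_{i} = \hat{\textbf {O}}_{i} + \textbf {X}^{s}_{i}$. This is a simplified reading of the equations above, not the authors' released implementation; scheduled sampling is omitted and the dimensions are illustrative.

```python
import torch
import torch.nn as nn
from statsmodels.tsa.seasonal import STL

def deseasonalise(ili_series, period=52):
    """Split a 1-D weekly ILI array into deseasonalised and seasonal parts."""
    seasonal = STL(ili_series, period=period).fit().seasonal
    return ili_series - seasonal, seasonal

class FluForecaster(nn.Module):
    def __init__(self, hidden=32, horizon=5):
        super().__init__()
        self.horizon = horizon
        self.ili_enc = nn.GRU(1, hidden, batch_first=True)    # deseasonalised ILI history
        self.query_enc = nn.GRU(1, hidden, batch_first=True)  # shared over the L query series
        self.w_q = nn.Linear(hidden, hidden, bias=False)
        self.w_k = nn.Linear(hidden, hidden, bias=False)
        self.w_v = nn.Linear(hidden, hidden, bias=False)
        self.fuse = nn.Linear(2 * hidden, hidden)
        self.decoder = nn.GRU(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x_tau, q):
        # x_tau: (B, N, 1) deseasonalised ILI history; q: (B, L, N, 1) query series
        B, L, N, _ = q.shape
        _, h_tau = self.ili_enc(x_tau)                        # (1, B, H)
        h_tau = h_tau.squeeze(0)                              # (B, H)
        _, h_q = self.query_enc(q.reshape(B * L, N, 1))       # (1, B*L, H)
        h_q = h_q.squeeze(0).reshape(B, L, -1)                # (B, L, H)

        s_q = self.w_q(h_tau).unsqueeze(1)                    # (B, 1, H) attention "query"
        s_k, s_v = self.w_k(h_q), self.w_v(h_q)               # (B, L, H) keys and values
        attn = torch.softmax(torch.bmm(s_q, s_k.transpose(1, 2)), dim=-1)  # per-query importance
        h_tq = torch.bmm(attn, s_v).squeeze(1)                # (B, H)

        h_enc = torch.tanh(self.fuse(torch.cat([h_tau, h_tq], dim=-1)))    # (B, H)

        # Decoder: start from the last observed deseasonalised value, then feed back
        # the model's own outputs (scheduled sampling is omitted in this sketch).
        state = h_enc.unsqueeze(0)                            # (1, B, H)
        step = x_tau[:, -1:, :]                               # (B, 1, 1)
        outputs = []
        for _ in range(self.horizon):
            out, state = self.decoder(step, state)
            step = self.head(out)                             # (B, 1, 1)
            outputs.append(step)
        return torch.cat(outputs, dim=1).squeeze(-1)          # (B, S) deseasonalised forecast
```

Training then minimises the MSE between this output and the deseasonalised target obtained by subtracting the seasonal pattern from the observed flu volume, as stated above.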
], [ "Extension to multi-task model", "We extend the proposed model to possess the capability of multi-task learning for flu forecasting in various countries.", "Our model aims to improve the expressive ability by means of multi-task learning, which shares part of the learning representations.", "The architecture of our multi-task model is presented in Fig.", "REF .", "In Fig.", "REF , the components surrounded by blue share all the parameters of the hidden features, whereas those surrounded by red have different parameters set depending on the country.", "The GRUs in our model use the same parameters for each task; in particular, parameters of equations (REF ), (REF ), (REF ), and (REF ) are the same.", "The attention and MLP for the final output are set as country-specific; parameters of equations (REF ) and () differ for each country.", "Furthermore, we propose “country embedding” as the initial latent representation of two GRUs regarding the time series of the search queries and deseasonalized component for the multi-task learning of the flu forecasting.", "The proposal is based on the possibility of flexible modeling even in the shared representations by changing the initial latent state depending on the forecast target.", "The country embedding is calculated as follows: $\\textbf {H}^{country} = {\\rm MLP}({\\rm Country\\_id}),$ where $\\textbf {H}^{country} \\in \\mathbb {R}^{M}$ indicates the initial hidden representation of the GRUs as the input of Equations (REF ) and (REF ), and “Country_id” is the value assigned according to the country (e.g., the “Country_id” of US is 1 and that of JP is 2).", "At each step in the training process of the multi-task learning, we randomly select a country, followed by a random training batch $\\lbrace \\textbf {X}^{country\\_id}, \\textbf {Q}^{country\\_id}, \\textbf {Y}^{country\\_id}\\rbrace $ ; that is, we set one batch containing only the data of one country at a time.", "Our experimental code is public in https://github.com/hkefka385/single-model-for-influenza-forecasting" ], [ "Experimental settings", "We forecasted the ILI rates in the five countries (US, JP, AU, FR, and UK) using the proposed model.", "To validate the forecasting model, the proposed model and other comparative models forecasted the ILI rates from weeks 1 to 5.", "We assessed the forecasting performance using three year-long datasets including three flu terms (2017/30th to 2018/29th weeks, 2018/30th to 2019/29th weeks, and 2019/30th to 2020/29th weeks).", "We set 52 weeks (one year) as the validation period before the testing period, and we set more than three years from the initial week of the ILI data to before the validation period as the training period.", "We decided to use the WT-based method to identify search queries with all of the models, because the WT-based method is a better approach than translation-based method.", "(Note that the comparison between the WT-based and translation-based methods is examined in Section REF .)", "We set 52 weeks as $N$ and 5 weeks as $S$ , which indicated the number of weeks ahead for the forecast.", "Furthermore, we set 10 as $L$ , which indicated the number of search queries in the English list as input, and set 100 as $k$ , which indicated the parameter of the WT-based method.", "We subsequently selected the learning rate and hidden layer sizes of GRU $M$ as (0.001, 0.01, 0.1, 1.0) and (8, 16, 32, 64), respectively, in the validation period.", "During training, all model parameters were updated in a gradient-based manner 
following the Adam update rule [24].", "We set the number of epochs to 300 with early stopping.", "We validated the proposed model in the experiments.", "Proposed w/o sq: The proposed model was trained using only the historical ILI data of a target country.", "Proposed_single: The proposed model was trained using the data of a target country.", "The model was the same as that introduced in Section REF .", "Proposed_multi2: The proposed model was trained using the data of two target countries, namely the US and JP, for multi-task learning, as in the model introduced in Section REF .", "Proposed_multi5: The proposed model was trained using the data of five target countries for multi-task learning." ], [ "Comparative models", " GRU: The GRU model, one of the recurrent-based models, captures the temporal dependencies in the data and preserves the back-propagated error through time and layers, referring to equation (REF ).", "It has been used successfully in influenza forecasting [31].", "We employed an encoder–decoder architecture based on the GRU for the multi-step-ahead forecast.", "Two variations of the GRU were used in the experiments: “GRU w/o sq” had only historical ILI data, and “GRU” had historical ILI and search query data.", "ARGO: The ARGO model [48] is an autoregressive (AR) model with Google search queries as exogenous variables.", "The simple architecture of this model enables a one-step-ahead forecast of the flu volume [2], [32], [37].", "The model cannot produce a multi-step-ahead forecast because it requires search query data in advance of the week that we wish to forecast.", "The parameters and input data are the same as those of the proposed model.", "Transformer: The Transformer is one of the most successful models in NLP.", "Thus far, the Transformer-based flu forecasting model has achieved the highest accuracy [53].", "Two-stage: The Two-stage model [35], composed of a long short-term memory (LSTM) model and an AR model, was developed based on a similar idea to ours, in that the usefulness of the input data differs; historical ILI data and search query data are useful for forecasting the seasonality and trend, respectively.", "For the multi-step-ahead forecast, we extended the two-stage model to an encoder–decoder architecture.", "Multi-task Elastic Net (MTEN): The MTEN [52] was proposed as a multi-task model for flu forecasting of US regional areas from search query data.", "This model extends the standard elastic net model to a multi-task version.", "We used the same search queries as those of the proposed model as input.", "The model outputs a one-step-ahead forecast for the same reason as the ARGO model.", "GRU_multi: As a simple comparative method for multi-task learning, we build one unified GRU-based model, which is trained on the aggregated data of the five countries.", "The model uses the same settings as GRU, one of the comparative models above.", "To compare the forecast performance levels of each model, we use two evaluation metrics: the coefficient of determination $\\rm {R^{2}}$ , with a higher value indicating better performance, and the root mean squared error $\\rm {RMSE}$ , with a lower value indicating better performance." ], [ "Results", "The experimental results for US are presented in Table REF .", "(We examine the experimental results of the other countries in Section REF , and the experimental results for JP in detail are presented in Appendix C.)", "This result indicates that the proposed model (particularly our multi-task model) outperformed most baseline methods, confirming the benefits of the model architecture and multi-task learning.", "GRU w/o sq and GRU were superior baseline models and achieved approximately $0.8$ to $0.9$ for $\\rm {R^{2}}$ in the one-week-ahead forecasts for each country by capturing the temporal dependencies with the RNN architecture.", "Transformer, a state-of-the-art flu forecasting method, and Two-stage achieved relatively better scores in the near-ahead forecasts (from 1-week to 3-week) than the GRU-based models, but had almost the same scores in the far-ahead forecasts (from 4-week to 5-week).", "These results indicate that it is not easy to improve the accuracy of far-ahead forecasts.", "In contrast, the statistical model ARGO achieved relatively lower accuracy than the deep learning models.", "We assume that the deep learning-based models were more suitable for flu forecasting in terms of easily accommodating far-ahead forecast architectures and exhibiting relatively higher accuracy than the statistical-based models, although the calculation cost was high.", "Likewise, MTEN, based on a statistical model and multi-task learning, had the same characteristics.", "It tended to exhibit lower accuracy than the other models because its input was only search query data.", "GRU_multi, a comparative method for validating the multi-task approach, had lower accuracy.", "This shows the difficulty of forecasting with a single model without a carefully devised model architecture and training scheme.", "Compared to these models, the proposed models (Proposed_single, Proposed_multi2, and Proposed_multi5) achieved the best scores with respect to the terms, metrics, and forecast horizons.", "These results reveal that the architecture of the proposed model is useful for flu forecasting.", "Proposed_single achieved the best score among the models without multi-task learning in almost all terms, in which it exhibited the best score in the near-ahead forecast, whereas it had a lower score in the far-ahead forecast than the GRU-based models.", "The high degree of score improvement in Proposed_multi2 and Proposed_multi5 compared to Proposed_single demonstrated the usefulness of the multi-task learning.", "In the near-ahead forecast, the multi-task learning effects were sometimes not observed, whereas the scores of these models in the far-ahead forecast were significantly improved.", "For example, in the term 2017 to 2018 in US, the five-week-ahead forecast by Proposed_multi5 achieved an improvement of 0.136 points in the $\\rm {RMSE}$ and 0.059 points in the $\\rm {R^{2}}$ compared to Proposed_single.", "Using data from different countries for simultaneous training, the model obtained the latent features of the time series of the ILI rates, thereby improving the forecasting performance.", "The difference in accuracy between the model trained using the data of two countries (Proposed_multi2) and that trained using the data of five countries (Proposed_multi5) was not large.", "Table: Model forecasting performances for US.", "Table: Forecasting performances of each model for ILI rates in JP, UK, AU, and FR from 2017/30th week to 2018/29th week.", "Table: Comparison of forecasting performances of translation-based and WT-based methods using Proposed_single model in JP and FR from 2017/30th week to 2018/29th week." 
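The sketch below illustrates the multi-task training loop described in the extension section above: one country is drawn at random per step, a learned country embedding initialises the shared GRU encoder, and a country-specific head produces the forecast. The `SharedEncoders` module stands in for the shared part of the network sketched earlier; batch iterators, shapes and hyper-parameters are illustrative assumptions rather than the authors' implementation.

```python
import random
import torch
import torch.nn as nn

COUNTRIES = ["US", "JP", "UK", "AU", "FR"]
HIDDEN, HORIZON = 32, 5

# Country embedding (shared) and country-specific output heads.
country_embed = nn.Sequential(nn.Embedding(len(COUNTRIES), HIDDEN), nn.Linear(HIDDEN, HIDDEN))
heads = nn.ModuleDict({c: nn.Linear(HIDDEN, HORIZON) for c in COUNTRIES})

class SharedEncoders(nn.Module):
    """Shared GRU over the deseasonalised ILI history, initialised per country."""
    def __init__(self, hidden=HIDDEN):
        super().__init__()
        self.gru = nn.GRU(1, hidden, batch_first=True)

    def forward(self, x_tau, h0):
        _, h = self.gru(x_tau, h0)           # h0: (1, B, H) country embedding
        return h.squeeze(0)                   # (B, H)

shared = SharedEncoders()
params = list(shared.parameters()) + list(heads.parameters()) + list(country_embed.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(batches):
    """batches[c] is an iterator yielding (x_tau, y_tau): history (B, N, 1) and target (B, S)."""
    c = random.choice(COUNTRIES)                                   # one country per step
    x_tau, y_tau = next(batches[c])
    cid = torch.full((x_tau.size(0),), COUNTRIES.index(c), dtype=torch.long)
    h0 = country_embed(cid).unsqueeze(0)                           # (1, B, H) initial GRU state
    pred = heads[c](shared(x_tau, h0))                             # (B, S) deseasonalised forecast
    loss = loss_fn(pred, y_tau)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```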
], [ "Multi-model performance for other countries", "Table REF displays the forecast performances of four models (GRU, Proposed_single, GRU_multi, and Proposed_multi5) for the ILI rates in JP, UK, AU, and FR from 2017/30th week to 2018/29th week.", "These results suggest that the multi-task learning model Proposed_multi5 achieved the best score in almost all ahead forecasts, as well as the flu forecasting result in US.", "Multi-task learning is not limited to a numbefr of countries, but can be applied to various countries with different languages and environments, and the experimental results revealed that the multi-task method improved the forecasting performance." ], [ "Comparison of models without and with search queries", "Recent research relating to flu forecasts [1] claimed that the effect of search queries is small.", "To tackle this problem, our research presents a model with an attention mechanism that effectively considers search queries.", "To examine the search queries' effectiveness, we validated the degree of improvement of the two variation models, namely the GRU-based (GRU w/o sq and GRU) and proposed (Proposed w/o sq and Proposed_single) models, without and with search queries.", "The experimental results for the flu forecast in US (Table REF ) indicate that the change from GRU w/o sq to GRU resulted in an average improvement of $0.007$ points in the $\\rm {RMSE}$ , and of $0.001$ points in the $\\rm {R^{2}}$ .", "However, the change from Proposed w/o sq to Propose_single resulted in an average improvement of $0.091$ points in the $\\rm {RMSE}$ , and of $0.017$ points in the $\\rm {R^{2}}$ .", "This suggests that the search query data resulted in the GRU-based models, which simply used the search query data as input, exhibits low improvement scores by adding them.", "However, the proposed model, with a well-crafted architecture for the search query data input, achieved a significantly improved score.", "These results confirm that it is difficult to treat search queries as input for flu forecasting, and it is necessary to contribute to the score improvement by considering the model devices, such as the introduction of an attention mechanism." 
], [ "Analysis of the methods to find search queries", "We compared the translation-based and WT-based methods for the selection of search queries.", "For comparison, we experimented with the flu forecast from 2017/30th week to 2018/29th week in JP and FR using the Proposed_single model with the translation-based and WT-based methods.", "The results in Table REF demonstrate that the WT-based method achieved better scores than the translation-based method in all experimental metrics in FR and most experimental metrics in JP.", "However, the degree of improvement in the accuracy was not large.", "For example, for $\\rm {R^{2}}$ , the two-week-ahead forecast for JP exhibited only a $0.005$ point improvement, and that for FR exhibited only a $0.035$ point improvement.", "Our model based on a neural network can consider a small number of search queries as input for efficient calculation, compared to the multi-task model [52] based on a statistical method that can consider many search queries.", "We assume that the architecture of our model, which does not involve a large number of search queries as input, is insignificantly affected by the selection of search queries.", "Although the results demonstrated that the WT-based method was superior as the selection method for our flu forecasting model, substantial room for consideration remains, such as which method is better for models dealing with a large number of search queries." ], [ "Conclusions", "In this study, we attempted to construct a flu forecasting model targeting multiple countries by leveraging multi-task framework.", "Also, we addressed two tasks: finding suitable search queries in languages other than English and leveraging the search query data as input for the forecasting model.", "We revealed that the WT-based method is a better approach for the exploration of search queries.", "Moreover, we proposed a novel forecasting model considering the characteristics of the input data, historical ILI data and search query data, and demonstrated the usefulness of the model architecture.", "Throughout the flu forecasting experiments in multiple countries, the proposed model achieved the highest performance by acquiring the latent features in the flu time series and by treating the task as multi-task learning.", "Our experiments demonstrated the feasibility of constructing a flu forecasting model targeting multiple countries and the usefulness of search query data as input for the proposed model.", "However, the method of searching for suitable search queries remains a major challenge, which our research has not yet solved.", "Although we used the list of English search queries, a method for identifying appropriate search queries without relying on external resources is required.", "Moreover, it is necessary to examine a method to apply the proposed flu forecasting model to new infectious diseases from short period data, such as COVID-19, for dealing with a pandemic." 
], [ "Correlation between time series of ILI rates in each country", "Fig.", "REF presents the flu time series from 2016/29th week to 2019/30th week in five countries.", "This represents that the flu time series in each country exhibits strong seasonality and therefore, holds strong similarity.", "Table REF displays the Pearson correlations of the time series among the respective countries.", "These values suggest that the influenza-like illness (ILI) rates in the five countries with different cultures, locations, and languages have a moderate correlation with one another (almost all correlations are over 0.6).", "The high correlation gives us a strong motivation to addresses the challenge of flu forecasting for various countries in one model.", "Figure: Time series of ILI rates in five countries, applied to min-max normalization, from 2016/29th week to 2019/30th week." ], [ "Related Work", "Flu forecasting is a type of time-series prediction.", "Time-series prediction tasks are mainly divided into univariate and multivariate types [29].", "In research on time-series prediction, various models suitable for each task have been proposed over time.", "Owing to the rapid development of neural networks, many models have been based on these, particularly convolutional neural networks (CNNs) [38], [6], [11] and recurrent neural networks (RNNs) [41], [40], [25], which capture the temporal variation.", "In recent years, there has been an increase in time-series prediction models using “attention” (Transformer) to achieve state-of-the-art performance in multiple natural language processing applications [5], [42].", "Attention generally aggregates temporal features using dynamically generated weights, thereby enabling the network to focus on significant time steps in the past directly.", "For example, [17], [28], [30] are time-series prediction models based on attention.", "Although numerous models and methods have been proposed to achieve higher prediction accuracy, it is difficult to apply them to the influenza forecasting problem in a simple manner.", "This is owing to the problem setting of flu forecasting; that is, the future flu volume is estimated from two major resources: historical ILI data and UGC data.", "This problem belongs to a multivariable problem with one objective variable and various explanatory variables, and most models are not designed for this problem type.", "Thus, we need to develop a time-series model suitable for flu and not apply state-of-the-art prediction models.", "Numerous models relating to flu forecasting have been proposed to date.", "Prior to the emergence of online UGCs, compartmental models such as SIR [23] and IDEA [36], as well as statistical models such as autoregressive models [44] using historical ILI data, were used extensively for flu forecasting.", "With the development of the Internet, some researches [19], [13] revealed that online UGCs such as search queries and social media posts are useful resources, same as in the field of influenza prediction [18].", "Although certain studies have used only one resource of either historical ILI data or online UGCs, the majority of studies proposed supervised methods using online UGCs together with historical ILI data simultaneously as input.", "Most of these approaches do not consider the characteristics of the data types, but simply simultaneous inputting, when learning the model.", "Several previous models [37], [35] based on statistical methods have been developed to exploit the characteristics of the different 
input data.", "However, these studies exhibit certain disadvantages, such as the necessity of long training terms or a small degree of improved accuracy.", "Our method based on neural networks to capture the latent features can achieve the best accuracy in flu forecasting by means of an appropriate combination method of two inputs: historical ILI data and search query data.", "Moreover, we aim to learn a single flu forecasting model for multiple countries as multi-task learning.", "Multi-task learning, which was introduced by [12], improves the generalization, and achieves superior efficiency and prediction accuracy by using a shared representation from related tasks.", "It is used extensively in various areas, such as natural language processing (NLP) [8], [22] and time series [10], [16].", "The fundamentals of multi-task learning were presented in detail in [12].", "Zou et al.", "[52] tackled a similar problem to ours and proposed a multi-task model based on linear and Gaussian regression to forecast the flu volume in the following two problem settings: several states in the US, and two countries, namely the US and England.", "Our multi-task model and task further develop the above in two aspects: we tackle flu forecasting in five countries, each of which differs in terms of the area or language, and we apply not a simple model such as a statistical model, but our novel neural network-based model for multi-task learning to achieve higher accuracy and long-term forecasting.", "Table: Model forecasting performances for JP." ], [ "Experimental results for JP", "The result for JP is presented in Tables REF .", "This result indicates that the proposed model (particularly our multi-task model) outperformed most baseline methods, confirming the benefits of the model architecture and multi-task learning, same as the US.", "The proposed models (Proposed_single, Proposed_multi2, and Proposed_multi5) achieved the best scores with respect to the terms, metrics, and any-ahead forecasts.", "These results reveal that the architecture in the proposed model is useful for flu forecasting.", "Proposed_single achieved the best score among the models without multi-task learning in almost all terms except for 2018 to 2019 in JP, in which it exhibited the best score in the near-ahead forecast, whereas it had a lower score in the far-ahead forecast than the GRU-based models.", "Same as the US, the high degree of the score improvement in Proposed_multi2 and Proposed_multi5 compared to Proposed_single demonstrated the usefulness of the multi-task learning, except in 1-week forecast.", "In terms of the comparison of models without and with search queries, the experimental results for the flu forecast in JP indicate that the change from GRU w/o sq to GRU resulted in an average improvement of $-0.026$ points in the $\\rm {RMSE}$ , and of $-0.030$ points in the $\\rm {R^{2}}$ .", "However, the change from Proposed w/o sq to Propose_single resulted in an average improvement of $0.187$ points in the $\\rm {RMSE}$ , and of $0.012$ points in the $\\rm {R^{2}}$ .", "Same as the US, this suggests that the search query data resulted in the GRU-based models exhibits low or worse improvement scores by adding them, however the proposed model, with a well-crafted architecture for the search query data input, achieved a significantly improved score." 
], [ "Examples of search queries based on each selection method", "Examples of search queries based on each selection method are presented in Table REF .", "Compared to the translation-based method, the WT-based method exhibited many similarities in the selection results, although several aspects differed.", "For example, in Japanese, the abbreviated form “インフル (I-N-FU-LU)” is selected as the corresponding word for “flu,” rather than “インフルエンザ (I-N-FU-LU-E-N-ZA).” In French, “infection” is selected as the corresponding word for “symptoms.” Table: Examples of selected search queries by translation-based and WT-based methods." ], [ "Effect of the country embedding", "To examine the effectiveness of the country embedding, we compared the degree of improvement of the two proposed models without and with country embedding (Proposed_multi5 w/o CE and Proposed_multi5).", "The experimental results for the flu forecast in the US and JP are presented in Table REF .", "Proposed_multi5 achieved better scores than the variant without country embedding in both countries.", "This suggests that the country embedding, which provides the initial latent representation of the two GRUs for the time series of the search queries and the deseasonalized component of each country, improves the scores.", "Table: Comparison of forecasting performances of Proposed_multi5 and Proposed_multi5 without country embedding (CE) in US and JP from 2017/30th week to 2018/29th week." ], [ "Effect of the attention in the proposed model", "The attention mechanism in the proposed model not only successfully combines the search query data, but can also provide a broad understanding of which queries affect the forecast and in what manner.", "Fig.", "REF presents the visualization of the attention weight of each search query for the forecast in the US from 2017/30th week to 2018/29th week.", "A large change occurred in the attention weights around the 2018/8th week, at which time the flu became epidemic, whereas the attention weight of each query was almost constant outside this period.", "This means that the information from search queries is particularly useful for forecasting the flu during an epidemic period.", "In particular, “flu and fever” and “symptoms of flu” are useful in the flu forecasting model because they have large attention weights.", "It is interesting that the weight of “flu and fever” was large even though that of “fever flu,” which has the same meaning, was small.", "Using attention to visualize useful search queries can help in selecting the input resources for building the forecasting model in situations where useful search queries have not yet been identified.", "Figure: Visualization of attention weight of each search query for forecast in US from 2017/30th week to 2018/29th week.", "Red indicates a large attention weight, whereas blue indicates a small attention weight." ] ]
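To make the architectural ingredients discussed in these sections concrete (a country embedding that initializes the GRU hidden states, and attention weights over the search-query channels), the following is a minimal PyTorch sketch. The module layout, dimensions, and the way the two streams are merged are illustrative assumptions, not the authors' implementation.

```python
# Minimal PyTorch sketch of the components discussed above: a country
# embedding (CE) that initializes the GRU hidden states, and an attention
# module producing one weight per search-query channel. Shapes and the
# merging of the two streams are illustrative assumptions only.
import torch
import torch.nn as nn

class FluForecaster(nn.Module):
    def __init__(self, n_countries, n_queries, hidden=32, horizon=4):
        super().__init__()
        self.country_emb = nn.Embedding(n_countries, hidden)          # country embedding (CE)
        self.gru_ili = nn.GRU(1, hidden, batch_first=True)            # deseasonalized ILI stream
        self.gru_query = nn.GRU(n_queries, hidden, batch_first=True)  # search-query stream
        self.attn = nn.Linear(2 * hidden, n_queries)                  # per-query attention scores
        self.query_proj = nn.Linear(n_queries, hidden)
        self.head = nn.Linear(3 * hidden, horizon)                    # multi-step forecast

    def forward(self, ili, queries, country_id):
        # ili: (B, T, 1), queries: (B, T, n_queries), country_id: (B,)
        h0 = self.country_emb(country_id).unsqueeze(0).contiguous()   # (1, B, hidden)
        _, h_ili = self.gru_ili(ili, h0)
        _, h_q = self.gru_query(queries, h0)
        state = torch.cat([h_ili[-1], h_q[-1]], dim=-1)               # (B, 2*hidden)
        w = torch.softmax(self.attn(state), dim=-1)                   # attention over query channels
        q_ctx = self.query_proj(queries[:, -1, :] * w)                # weighted latest query values
        return self.head(torch.cat([state, q_ctx], dim=-1)), w        # forecasts and attention map

model = FluForecaster(n_countries=5, n_queries=10)
ili = torch.randn(8, 26, 1)
queries = torch.rand(8, 26, 10)
country = torch.randint(0, 5, (8,))
forecast, attn_weights = model(ili, queries, country)
print(forecast.shape, attn_weights.shape)   # torch.Size([8, 4]) torch.Size([8, 10])
```

The returned attention map can be stacked week by week to produce the kind of query-importance visualization described above.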
2107.01760
[ [ "$cos(2\\phi_h)$ asymmetry in $J/\\psi$ production in unpolarized $ep$\n collision in NRQCD" ], [ "Abstract We present a recent calculation of the transverse momentum dependent gluon distributions inside unpolarised protons and show how the ratio of the linearly polarized and the unpolarized gluon distribution in the proton can be probed by looking at $Cos(2\\phi_h)$ asymmetry in $J/\\psi$ production in unpolarized $ep$ collision.", "We use NRQCD for estimating $J/\\psi$ production and include contributions both from color singlet and color octet states." ], [ "Abstract", "We present a recent calculation of the transverse momentum dependent gluon distributions inside unpolarised protons and show how the ratio of the linearly polarized and the unpolarized gluon distribution in the proton can be probed by looking at $Cos(2\\phi _h)$ asymmetry in $J/\\psi $ production in unpolarized $ep$ collision.", "We use NRQCD for estimating $J/\\psi $ production and include contribution both from color singlet and color octet states." ], [ "Introduction", "Transverse momentum dependent (TMD) parton distribution functions (PDFs) play an important role in understanding the spin and spatial structure of the proton.", "They not only provide the information about the longitudinal momentum fraction but also give us information about the internal transverse momentum $k_{\\perp }$ carried by partons.", "Unlike collinear PDFs, which are universal, TMDs are process dependent due to their initial and final state interactions.", "The two experiments where TMDs can be extracted are SIDIS and DY.", "In these experiments the observables of most interest are the single-spin asymmetries (SSA) and azimuthal asymmetries.", "The leading-twist distributions, unpolarised gluon TMD $ f_1^g(x,\\mathbf {k}_{\\perp }^2)$ and $h_1^{\\perp g}(x,\\mathbf {k}_{\\perp }^2)$ , the so-called Boer–Mulders function, are greatly relevant to these asymmetries.", "The function $f_1^g(x,\\mathbf {k}_{\\perp }^2)$ represents the probability of finding an unpolarized gluon, within an unpolarized hadron, with a longitudinal momentum fraction $x$ and transverse momentum $k_\\perp $ , while as $h_1^{\\perp g}(x,\\mathbf {k}_{\\perp }^2)$ represents the distribution of linearly polarized gluons within the unpolarized hadron.", "Due mainly to the lack of experimental data, information on the $h_1^{\\perp g}(x,\\mathbf {k}_{\\perp }^2)$ is very limited.", "Heavy quark pair or dijet production in SIDIS [1], diphoton pair [2] and $\\Upsilon (1S)$ +jet [3] production in $pp$ collision have been suggested for extracting $h_1^{\\perp g}(x,\\mathbf {k}_{\\perp }^2)$ .", "It has been seen in these processes that measuring azimuthal asymmetries, we can probe $h_1^{\\perp g}(x,\\mathbf {k}_{\\perp }^2)$ .", "Quarkonium production is an important tool to probe the gluon TMDs (see [4]for a recent review).", "In Ref.", "[5] the authors probed $h_1^{\\perp g}$ in $cos(2\\phi _h)$ asymmetry in $J/\\psi $ production through the leading order (LO) process $\\gamma ^{\\ast }+g\\rightarrow J/\\psi $ at the future EIC at $z=1$ , where $z$ measures the fraction of photon energy transferred to $J/\\psi $ .", "This was extended to the region $z<1$ in [6], where only the color singlet contribution within the non-relativistic QCD (NRQCD) was considered.", "In this article we review a recent calculation of the $cos (2 \\phi _h)$ asymmetry in $J/\\psi $ production in electron-proton collision[7].", "The production of $J/\\psi $ is calculated in the NRQCD framework with 
the inclusion of both color singlet and color octet contributions.", "We will be taking $\\gamma ^{\\ast }+g\\rightarrow J/\\psi +g$ partonic subprocess into the consideration, and investigate mainly the small-$x$ region, where the gluon TMDs play a major role.", "The rest of this paper is organized as follows.", "The analytical framework of our calculation is discussed in Section 2.", "In Section 3, we present our numerical results and finally, in Section 4, we conclude." ], [ "Azimuthal asymmetry in $J/\\psi $ leptoproduction", "The $cos(2\\phi _h)$ asymmetry for $e(l)+p(P)\\rightarrow e(l^{\\prime })+J/\\psi (P_h)+X(P_x)$ process is defined as $\\langle cos(2\\phi _h)\\rangle &=&\\frac{\\int d\\phi _h cos(2\\phi _h)d\\sigma }{\\int d\\phi _h d\\sigma },$ where $\\phi _h$ is the azimuthal angle of $J/\\psi $ production plane with the lepton plane and $d\\sigma $ is the differential scattering cross section.", "We consider the frame in which the incoming proton and the virtual photon exchanged in the process move in $+z$ and $-z$ directions.", "The kinematics here is defined in terms of two light-like vectors with the help of a Sudakov decomposition, here chosen to be the momentum $P(=n_{-})$ of the incoming proton, and a second vector $n(=n_{+})$ , obeying the relations $n\\cdot P = 1$ and $n_{+}^2=n_{-}^ 2 = 0$ .", "Since at small-$x$ the proton is rich in gluons, the dominant partonic sub-process at NLO for the $J/\\psi $ production is $\\gamma ^{\\ast }(q)+g(k)\\rightarrow J/\\psi (P_h)+g(p_g)$ .", "The differential scattering cross-section can be written as a convolution of leptonic tensor, a soft parton correlator for the incoming hadron and a hard part: $\\begin{aligned}d\\sigma ={} &\\frac{1}{2s}\\frac{d^3l^{\\prime }}{(2\\pi )^32E_l^{\\prime }}\\frac{d^3P_{h}}{(2\\pi )^32E_{P_h}}\\int \\frac{d^3p_g}{(2\\pi )^32E_g}\\int dxd^2\\mathbf {k}_{\\perp }(2\\pi )^4\\delta (q+k-P_h-p_g)\\\\&\\times \\frac{1}{Q^4}\\mathcal {L}^{\\mu \\mu ^{\\prime }}(l,q)\\Phi ^{\\nu \\nu ^{\\prime }}(x,\\mathbf {k}_{\\perp })~\\mathcal {M}_{\\mu \\nu }(\\mathcal {M}_{\\mu ^{\\prime }\\nu ^{\\prime }})^{\\ast }\\end{aligned}$ The term $\\mathcal {M}_{\\mu \\nu }$ represents the amplitude of $J/\\psi $ production in the ${\\gamma ^{\\ast }+g\\rightarrow J/\\psi +g}$ partonic sub-process and $\\mathcal {L}^{\\mu \\mu ^{\\prime }}(l,q)$ is the leptonic tensor.", "At leading twist, the gluon correlator of the unpolarized proton contains two TMD gluon distribution functions $\\Phi _g^{\\nu \\nu ^{\\prime }}(x,\\mathbf {k}_{\\perp })=-\\frac{1}{2x}\\bigg \\lbrace g_{\\perp }^{\\nu \\nu ^{\\prime }}f_1^g(x,\\mathbf {k}_{\\perp }^2)-\\left(\\frac{k_{\\perp }^{\\nu }k_{\\perp }^{\\nu ^{\\prime }}}{M_p^2}+g_{\\perp }^{\\nu \\nu ^{\\prime }}\\frac{\\mathbf {k}_{\\perp }^2}{2M_p^2}\\right)h_1^{\\perp g}(x,\\mathbf {k}_{\\perp }^2)\\bigg \\rbrace .$ Here $g_{\\perp }^{\\nu \\nu ^{\\prime }}=g^{\\nu \\nu ^{\\prime }}-P^{\\nu }n^{\\nu ^{\\prime }}/P\\cdot n-P^{\\nu ^{\\prime }}n^{\\nu }/P\\cdot n$ .", "Following Refs.", "[7] and references within, to which we refer the reader for more details, we could write the final expression for the azimuthal asymmetry as: $\\langle cos(2\\phi _h)\\rangle \\propto \\frac{ \\int k_\\perp d k_\\perp \\Big ( A_2~f_1^g(x,\\textbf {k}_{\\perp }^2)+\\frac{k_{\\perp }^2}{M_p^2}~B_2~h_1^{\\perp g}(x,\\textbf {k}_{\\perp }^2) \\Big ) }{\\int k_\\perp d k_\\perp \\Big (A_0~f_1^g(x,\\textbf {k}_{\\perp }^2)+\\frac{k_{\\perp }^2}{M_p^2}~B_0~h_1^{\\perp g}(x,\\textbf {k}_{\\perp }^2) \\Big )}.$ 
In the kinematics considered above we found that the unpolarized gluon distribution and the linearly polarized distributions are not disentangled, however the above result can be used for the extraction of their ratio." ], [ " Results and discussion", "In this section, we present numerical estimates of the $cos(2\\phi _h)$ asymmetry in the kinematical region to be accessed at EIC and to avoid the contribution from virtual diagrams.", "We have also imposed a lower cutoff $z>0.1$ to avoid the gluon fragmentation contribution to $J/\\psi $ .", "We verified that changing the lower cutoff does not affect the asymmetry much.", "We took mass of the proton to be $M_p = 1$ GeV.", "The contraction in the calculation above for the different states $i.e.,~{^1}S_0^{(8)}, ^3S_1^{(1,8)}$ and $^3P_{J(=0,1,2)}^{(8)}$ is calculated using the FeynCalc [8], [9].", "In all the plots of the asymmetry, the long distance matrix elements (LDMEs) are taken from Ref.", "[10] except for the right panel of Fig.", "REF , where we have used different sets of LDMEs.", "We have used two sets of parameterization for the TMDs to calculate the $cos(2\\phi _h)$ asymmetry: 1) Gaussian-type parameterization [11], [12] and  2) McLerran-Venugopalan model [13], [14], [15] at small-$x$ region." ], [ "$cos(2\\phi _h)$ asymmetry in the Gaussian parameterization", "The results for the Gaussian are obtained for $\\sqrt{s}=100$ GeV and $Q^2=15$ GeV.", "We have incorporated only few of the results here, the reader could find the more details in Ref.", "[7].", "Figure: cos(2φ h )cos(2\\phi _h) asymmetry in e+p→e+J/ψ+Xe+p\\rightarrow e+ J/\\psi +Xprocess as function of P h⊥ P_{h\\perp } at s=100\\sqrt{s}=100 GeV and Q 2 =15Q^2=15 GeV 2 ^2.Left: contributions from color singlet and color octet states using the LDMEs set CMSWZ .", "Right: comparing the asymmetry for different LDMEs sets: CMSWZ , SV , ZSSL , BK .In the left panel of Fig.", "REF , we present the contribution to the asymmetry from the color singlet(CS) and the color octet(CO) states.", "From the plot, we see that the color octet states are giving a significant contribution to the asymmetry, whereas the contribution from the CS is almost zero and slightly positive in higher $P_{h \\perp }$ region.", "In the right panel of Fig.", "REF , we showed the asymmetry for different sets of LDMEs.", "We see that the magnitude and the sign of the asymmetry depends on the set of LDMEs used.", "In fact, this is because of different states contributing to the asymmetry depending on the LDMEs.", "We have a larger asymmetry for LDMEs set BK." 
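To make the structure of this observable concrete, the sketch below numerically evaluates the ratio of $k_\perp$ integrals in the asymmetry expression of Section 2 for Gaussian-shaped gluon TMDs. The Gaussian widths and the hard-scattering coefficients $A_0$, $A_2$, $B_0$, $B_2$ are placeholder values, not the NRQCD results used for the figures.

```python
# Numerical sketch of the ratio structure in the asymmetry formula above:
# <cos 2phi_h> ~ int dk k [A2 f1(k^2) + (k^2/Mp^2) B2 h1perp(k^2)]
#             / int dk k [A0 f1(k^2) + (k^2/Mp^2) B0 h1perp(k^2)].
# Gaussian widths and the hard coefficients are placeholders, not the
# values computed in the paper.
import numpy as np
from scipy.integrate import quad

Mp = 1.0                      # proton mass in GeV, as in the text
kw_f1, kw_h1 = 0.25, 0.10     # assumed Gaussian widths <k_T^2> in GeV^2

def f1(kT2):
    """Unpolarized gluon TMD, Gaussian k_T shape (x-dependence factored out)."""
    return np.exp(-kT2 / kw_f1) / (np.pi * kw_f1)

def h1perp(kT2):
    """Linearly polarized gluon TMD, Gaussian model (normalization absorbed)."""
    return np.exp(-kT2 / kw_h1) / (np.pi * kw_h1)

def cos2phi_asymmetry(A0, A2, B0, B2, kmax=5.0):
    num = quad(lambda k: k * (A2 * f1(k**2) + (k**2 / Mp**2) * B2 * h1perp(k**2)), 0, kmax)[0]
    den = quad(lambda k: k * (A0 * f1(k**2) + (k**2 / Mp**2) * B0 * h1perp(k**2)), 0, kmax)[0]
    return num / den

# Placeholder hard coefficients; in the paper these follow from the NRQCD
# amplitudes for the 1S0(8), 3S1(1,8) and 3PJ(8) channels.
print(cos2phi_asymmetry(A0=1.0, A2=0.0, B0=0.05, B2=0.02))
```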
], [ "$cos(2\\phi _h)$ asymmetry in the McLerran-Venugopalan (MV) model", "For the MV parameterization,the results are obtained in the kinematic region defined by $\\sqrt{s}=150$ GeV, $x=0.01$ and $z=0.7$ .", "The integration ranges are $y\\in [0.2,0.9]$ and $x_B\\in [0.005,0.009]$ .", "The value of $Q$ is set according to $y,x_B$ and $s$ .", "In Fig.", "REF (Left), we present the contribution to the $cos(2\\phi _h)$ asymmetry for the $J/\\psi $ coming from the individual states, as a function of $P_{h \\perp }$ .", "The maximum contribution to the asymmetry comes from the $^1S^{(8)}_0$ state.", "In the same Fig.", "REF (right), we show the comparison of the asymmetry in the Gaussian and MV model, respectively (with two different values of $Q_{sg0}$ ) within the same kinematical region as discussed above.", "The asymmetry in MV model depends on the saturation scale.", "We note that the Gaussian parameterization of the TMDs gives larger $cos(2\\phi _h)$ asymmetry." ], [ "Conclusion", "We investigated the $cos(2\\phi _h)$ asymmetry in $J/\\psi $ production in an electron-proton collision in the kinematics of the future EIC.", "We calculated them in the small-$x$ domain, where the gluon TMD, namely the unpolarized and linearly polarized gluon TMD are important in unpolarized scattering.", "The virtual-photon-gluon fusion process $\\gamma ^{\\ast }+g\\rightarrow J/\\psi +g$ is the dominant sub-process in this kinematical region for $J/\\psi $ production.", "We used the NRQCD-based color octet formalism to calculate the $J/\\psi $ production rate.", "A small but significant $cos(2\\phi _h)$ asymmetry is obtained.", "We used the Gaussian and MV models for parameterization of TMDs.", "The magnitude of the asymmetry was found to be larger with Gaussian parameterization.", "We incorporated contributions both from CO and CS states.", "The asymmetry depends on the LDMEs used; specifically, contributions from individual states were found to be significantly dependent on the set of LDMEs used.", "Overall, our results indicate that the $cos(2 \\phi _h)$ asymmetry in $J/\\psi $ production might be a very useful tool for probing the ratio of the linearly polarized gluon TMD and the unpolarized gluon TMD in the small-$x$ region at the EIC." ], [ "Acknowledgement", "M.Siddiqah thanks the organisers of DIS2021 for the opportunity to present this work." ] ]
2107.01821
[ [ "Outflows in the Radio-Intermediate Quasar III Zw 2: A Polarization Study\n with the EVLA & uGMRT" ], [ "Abstract We present results from a polarization study of the radio-intermediate quasar, III Zw 2, at a redshift of 0.089, with the upgraded Giant Metrewave Radio Telescope (uGMRT) at 685 MHz and the Karl G. Jansky Very Large Array (VLA) at 5 and 34 GHz.", "We detect a kpc-scale outflow, exhibiting transverse magnetic (B-) fields.", "The curved jet terminates in a bow-shock-like radio structure with inferred B-fields aligned with the lobe edges.", "We suggest that the radio outflow in III Zw 2 is a combination of a collimated jet along with a wind-like component.", "This \"wind\" component could be a magnetized accretion disk wind or the outer layers of a broadened jet or a combination of both.", "The current data cannot differentiate between these possibilities.", "We also detect kpc-scale lobe emission that is misaligned with the primary lobes in the uGMRT images.", "The spectral indices and the electron lifetimes in the misaligned lobe are similar to the primary lobe, suggesting that the misaligned lobe is not a relic.", "We propose that changing spectral states of the accretion disk, and the subsequent intermittent behaviour of the outflow, along with the close interplay between the jet and \"wind\" could explain the radio-intermediate nature of III Zw 2.", "Our study shows that radio-intermediate quasars are promising sources for understanding the role of jets and winds in galaxy evolution and demonstrates the power of radio polarization studies towards achieving this." ], [ "Introduction", "Seyfert galaxies are a subclass of active galactic nuclei (AGN).", "AGN are powered by accretion of matter on to supermassive black holes (SMBH).", "Type 1 Seyferts are those that exhibit the presence of both broad and narrow emission lines in their optical spectra, while Type 2 exhibit only narrow emission lines; these two types have been explained to be due to orientation and obscuration effects [3].", "Seyfert galaxies are typically classified as radio-quiet (RQ) AGN, based on their radio-loudness parameter (R) which is the ratio of 5 GHz radio flux density to optical B-band flux density being $\\le 10$ [41].", "However, majority of the type 1 Seyfert galaxies move into the radio-loud (RL) AGN category when only the nuclear contributions in both radio and optical flux densities are considered [32], [44].", "The origin of radio outflows in Seyfert galaxies is poorly understood [71].", "AGN jets/winds and starburst-driven winds are considered as potential contributors to the radio emission observed in Seyfert galaxies [5], [17], [28].", "Radio polarimetric observations can in principle help to distinguish between these primary contenders based on the differences in their degrees of polarization, magnetic (B-) field structures and rotation measures (RM) [84], [86].", "It has long been suggested that the radio-intermediate (RI) quasars [23] are relativistically boosted counterparts of RQ quasars, III Zw 2 [42] being a classic example [61], [23].", "In the light of this hypothesis, studying outflows in RI quasars can provide better insights into the origin of radio emission in RQ quasars and on how they compare with the properties of the better studied class of RL quasars.", "Apart from radiation, accretion of matter onto SMBHs can drive powerful outflows, such as jets and/or winds, which transport matter and energy from the nucleus to the surrounding medium.", "These outflows can affect 
star-formation, chemical composition of the ambient medium and cooling flows at the galaxy clusters cores [22].", "They play an important role in the co-evolution of SMBHs and their host galaxies, [25], via the “AGN feedback” processes [65], [39], [109], [94].", "However, there are several gaps in our understanding about the different modes of these outflows, such as jets and winds, and how each influence their host galaxies.", "This problem could be better addressed if studied in systems where more than one outflow mode exists.", "[60] have suggested that two modes of outflow, i.e.", "a jet and a wind, could co-exist in AGN of intermediate radio-loudness (see their Figure 4).", "Therefore, RI AGN may be promising sources for understanding how AGN influence their host galaxies and the role of jets/winds in galaxy evolution.", "While jets are generally understood to be produced by the electromagnetic extraction of rotational energy from spinning black holes [9], winds may originate either from the accretion disk or the torus or the broad line region and may be driven either thermally [6], [107], [47], [63], or radiatively [74], [110], [67], or magnetically [8], [45], [21], [26].", "Another model for nuclear “wind” is proposed by [64] in their numerical simulations, where they demonstrate how the strong interaction between an inclined relativistic jet and the dense interstellar medium (ISM) can cause the jet to deflect, decelerate, and break out into a sub-relativistic outflow reminiscent of an AGN “wind” [78] or a nuclear starburst [104], along the minor axis of the galaxy following a path of least resistance.", "Such a “wind” creates a spherical bubble that clears the ambient medium as it rises.", "Therefore, one cannot fully differentiate between a magnetized accretion disk wind and the outer layers of a broadened jet (like a jet sheath).", "To fully differentiate between these, one will need to take into account the emission-line gas structure and kinematics along with the polarized radio emission, as well as realistic magnetohydrodynamic (MHD) jet simulations comprising of these multiple components.", "Such an analysis is beyond the scope of the present work, but can be aimed for in the future.", "Therefore, in this paper, we use “wind” generally to refer to either a magnetized accretion disk wind or the outer layers of a broadened jet, or a combination of both.", "III Zw 2 has been identified as a Seyfert 1 galaxy in the literature [4], [37].", "III Zw 2 was in fact the first Seyfert galaxy where a superluminal radio jet was discovered [10].", "It is a member of the Palomar Green quasar sample [81].", "The $\\it {Fermi}$ -LAT detection of two distinct $\\gamma $ -ray flares in III Zw 2 has been recently reported [54].", "III Zw 2 is known to be highly variable at radio [108], [82], [50], [2], [24], IR [51], [87], UV [14], optical [57], [16], X-ray [40], [79] and $\\gamma $ -ray [54] wavelengths.", "Its radio core flux density has been known to vary by a factor of 20-30 over a span of a few years [2], [24].", "Early studies of III Zw 2 suggested a spiral host galaxy [38], [99].", "However, what was initially identified as a spiral arm by [37] is now accepted to be a star-forming tidal arm indicative of an ongoing galaxy merger [98].", "Recent studies using Hubble Space Telescope also suggest an elliptical host galaxy [105].", "At a redshift of 0.0893, it forms a part of a triple interconnected system of compact galaxies [111].", "Table: Observation detailsThe supermassive black hole in 
III Zw 2 has been estimated to be $7.4\\times 10^8$  M$_\\odot $ with an Eddington ratio of $\\sim $ 0.07 [90].", "A part of the BLR is in a Keplerian disk [73].", "Some of the BLR emission may come from an accretion disk wind [73].", "The disk has been inferred to lie at an inclination of $12\\pm 5$ [73].", "[60] find a significant inverse correlation between ionized wind column density (N$_\\mathrm {H}$ ) and the radio-loudness parameter for a sample of 16 Seyfert 1s having R $>10$ .", "Their sample includes III Zw 2 for which they find N$_\\mathrm {H}=(7\\pm 4)\\times 10^{20}$  cm$^{-2}$ , and an ionized wind outflow velocity of $(-1780\\pm 670)$  km s$^{-1}$ .", "The N$_\\mathrm {H}$ -R relation is explained as a manifestation of a common driving mechanism for both outflow modes, such as a magnetically-driven wind and a jet.", "Similar to its total intensity, the radio core polarization (angle) in III Zw 2 varies dramatically over the time period of years [2], [100].", "Jet precession has been inferred from very long baseline interferometry (VLBI) studies, radio variability, and the X-ray study [12], [53], [31].", "A precessing radio jet interacting with a molecular torus roughly every five years was invoked by [12] to explain their observations of radio outbursts in III Zw 2.", "A variability period of about 5.14 yrs was discovered in III Zw 2 by [53] based on the analysis of its radio light curves, which they attributed to the helical motion of the radio jet.", "[31] found that the source of X-ray reflection in III Zw 2 changed from being an inner accretion disk to a torus over a period of 11 years.", "This was explained as a result of a precessing radio jet in III Zw 2 that may have originated from a precessing accretion disk owing to disc instabilities from an ongoing merger event [98], [55].", "In this paper, we present the results from a multi-frequency radio polarimetric study of III Zw 2, and discuss how these results can explain some of the peculiarities of this source.", "Throughout this work, we have adopted $\\Lambda $ CDM cosmology with $H_0$ = 73 km s$^{-1}$  Mpc$^{-1}$ , $\\Omega _{m}$ = 0.27 and $\\Omega _{v}$ = 0.73.", "Spectral index $\\alpha $ is defined such that flux density at frequency $\\nu $ , $S_\\nu \\propto \\nu ^{\\alpha }$ ." ], [ "Observations and Data reduction", "We observed III Zw 2 using the upgraded Giant Metrewave Radio Telescope (uGMRT) at 685 MHz (Band 4) on June 23, 2020 and the Karl Jansky Very Large Array (VLA) at 5 GHz in the B-array configuration on August 20, 2020.", "The 34 GHz VLA C-array and 1.5 GHz B-array data were obtained from the VLA data archive.", "The details of the observations are provided in Table REF .", "The uGMRT data were reduced with our $\\tt {CASA}$Common Astronomy Software Applications; [91]-based pipeline that handles total intensity [93] as well as polarization dataAvailable at https://sites.google.com/view/silpasasikumar/.", "Basic calibration and editing for VLA 1.5, 5 and 34 GHz data was carried out using the $\\tt {CASA}$ calibration pipeline for VLA data reduction.", "This was followed by manual polarization calibration on 5 and 34 GHz data, since the 1.5 GHz data had only RR and LL visibilities.", "We discuss the polarization calibration ahead." 
], [ "Polarization Calibration", "Firstly, the model of a polarized calibrator was set manually using the task $\\tt {SETJY}$ in $\\tt {CASA}$ .", "For this, we provided the model parameters such as the reference frequency, Stokes I flux density at the reference frequency, the spectral index, and the coefficients of the Taylor expansion of fractional polarization and polarization angle as a function of frequency about the reference frequency.", "The coefficients were estimated by fitting a first-order polynomial to the values of fractional polarization and polarization angle as function of frequency, which were obtained from the NRAO VLA observing guidehttps://science.nrao.edu/facilities/vla/docs/manuals/obsguide/ modes/pol.", "Figure: uGMRT 685 MHz uv-tapered total intensity contour image superimposed over SDSS g-band optical image.", "The contour levels are (0.40, 0.70, 1.88, 6.58, 25.32) mJy beam -1 ^{-1}.", "The color scale extends from --0.024 to 15.0 ×\\times 15 nJy or -0.36-0.36 nJy to 0.22 μ\\mu Jy.The inset on the top right presents VLA 5 GHz total intensity contour image superimposed over the HST WFC3 F547M optical image showing the SW jet and lobe.", "The enlarged view of the same showing the full radio structure is presented in Figure .Figure: uGMRT 685 MHz total intensity contour image with electric fractional polarization vectors superimposed in red.", "The peak contour flux is 35 mJy beam -1 ^{-1} and the contour levels are 0.21 ×\\times (-1, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512) mJy beam -1 ^{-1}.", "2 length of the vector corresponds to 6% fractional polarization.Polarization calibration was carried out in three steps: (i) the cross$-$ hand (RL, LR) delays, resulting from residual delay difference between R and L signals, were solved using a polarized calibrator that has strong cross-polarization.", "This was carried out using the task $\\tt {GAINCAL}$ with $\\tt {gaintype=KCROSS}$ in $\\tt {CASA}$ .", "(ii) the instrumental polarization (i.e, the frequency dependent leakage terms or `D-terms'), was solved using either an unpolarized calibrator or a polarized calibrator with good parallactic angle coverage.", "This corrected for the polarization leakage between the feeds owing to their imperfections and non-orthogonality (for e.g.", "the R/X$-$ polarized feed picks up L/Y-polarized emission, and vice versa).", "We used the task $\\tt {POLCAL}$ in $\\tt {CASA}$ to solve for instrumental polarization with $\\tt {poltype=Df}$ while using the unpolarized calibrator and with $\\tt {poltype=Df+QU}$ while using the polarized calibrator.", "This task uses the Stokes I, Q, and U values (Q and U being zero for unpolarized calibrator) in the model data to derive the leakage solutions.", "Also, when $\\tt {poltype=Df+QU}$ is used, this task solves for both source polarization and instrumental polarization simultaneously based on their differential rotation with the parallactic angle.", "Table: Summary of radio total intensity and polarization propertiesThe unpolarized calibrator 3C84 was used for leakage calibration in all the projects.", "The average value of the D-term amplitude turned out to be $\\sim 10\\%$ (typically $\\le 20\\%$ ) for the uGMRT antennas (Project DDTC130).", "For the VLA antennas, this value turned out to be $\\sim 7\\%$ for Project 20A-182 and $\\sim 5\\%$ for Project 17A-027 (typically $\\le 10\\%$ in both cases).", "(iii) the frequency-dependent polarization angle was solved using a polarized calibrator with known polarization angle.", "This corrected for the 
R-L phase offset which arises from the difference between the right and left gain phases for the reference antenna.", "This was carried out using the task $\\tt {POLCAL}$ in $\\tt {CASA}$ with $\\tt {poltype= Xf}$ .", "The calibration solutions were then applied to the multi-source dataset.", "The $\\tt {CASA}$ task $\\tt {SPLIT}$ was used to extract the calibrated visibility data for III Zw 2 from the multi-source dataset while averaging the spectral channels such that the bandwidth smearing effects were negligible.", "The total intensity or the Stokes I image of III Zw 2 was created using the multiterm-multifrequency synthesis [76] algorithm of $\\tt {TCLEAN}$ task in $\\tt {CASA}$ .", "Three rounds of phase-only self-calibration followed by one round of amplitude and phase self-calibration were carried out for the VLA data whereas, four rounds of phase-only self-calibration followed by four rounds of amplitude and phase self-calibration were carried out for the uGMRT data.", "Stokes Q and U images were created from the last self-calibrated visibility data, using the same input parameters as for the Stokes I image but with lesser number of iterations.", "The rms noise of the Stokes (I, Q, U) images is (19, 10, 11) $\\mu $ Jy beam$^{-1}$ for uGMRT at 685 MHz, (66, none, none) $\\mu $ Jy beam$^{-1}$ for VLA at 1.5 GHz, (15, 13, 13) $\\mu $ Jy beam$^{-1}$ for VLA at 5 GHz and (278, 47, 48) $\\mu $ Jy beam$^{-1}$ for VLA at 34 GHzStokes Q and U images appear to have been over-cleaned at 34 GHz compared to Stokes I.", "However, potential discrepancies have been accounted for by blanking pixels with Stokes Q or U signal-to-noise ratio less than 3, greater than 10$$ error, and greater than 10% error, while creating the linear polarization, polarization angle, and fractional polarization images, respectively.", "Linear polarized intensity ($P=\\sqrt{Q^2+U^2}$ ) and polarization angle ($\\chi $ = 0.5 tan$^{-1} (U/Q)$ ) images were obtained by combining the Stokes Q and U images using the $\\tt {AIPS}$Astronomical Imaging Processing System; [106] task $\\tt {COMB}$ with $\\tt {opcode=POLC}$ (which corrects for Ricean bias) and $\\tt {POLA}$ respectively.", "Regions with signal-to-noise ratio less than 3 times the rms noise in the $P$ image and with values greater than 10$$ error in the $\\chi $ image were blanked while using $\\tt {COMB}$ .", "Fractional polarization ($F=P/I$ ) images were obtained in $\\tt {COMB}$ with $\\tt {opcode=DIV}$ .", "Regions with fractional polarization errors $\\gtrsim $ 10% were blanked in the image.", "Figure: VLA 5 GHz total intensity contour image with electric fractional polarization vectors superimposed in red.", "The inset on the top right presents the VLA 5 GHz total intensity contour image of the radio core with electric polarization vectors superimposed in red.", "The peak contour flux is 124 mJy beam -1 ^{-1} and the contour levels are 0.05 ×\\times (-1, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512) mJy beam -1 ^{-1}.", "1 length of the vector corresponds to 25% fractional polarization in the main panel and 0.031 mJy beam -1 ^{-1} polarized intensity in the inset." 
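The calibration steps described above can be condensed into the following CASA task calls; this is a sketch meant to be run inside a CASA session (or with the tasks imported from casatasks). The measurement-set, field, and reference-antenna names and the polarized-calibrator model numbers are placeholders; only the tasks and the gaintype/poltype options follow the text.

```python
# CASA sketch of the polarization calibration steps described above.
# File names, field names, refant and the polarized-calibrator model values
# are placeholders; the tasks and the 'gaintype'/'poltype' options are those
# named in the text. Run inside a CASA session (or import from casatasks).
vis = 'IIIZw2_project.ms'          # placeholder measurement set
polcal_field = '3C138'             # placeholder polarized calibrator
unpol_field = '3C84'               # unpolarized leakage calibrator (as in the text)
refant = 'ea10'                    # placeholder reference antenna

# Manual model for the polarized calibrator.
setjy(vis=vis, field=polcal_field, standard='manual',
      fluxdensity=[8.0, 0, 0, 0],          # Stokes I at reffreq (placeholder)
      spix=[-0.5], reffreq='5.0GHz',
      polindex=[0.10, 0.01],               # fractional polarization vs. frequency
      polangle=[-0.18, 0.02])              # polarization angle vs. frequency (rad)

# (i) Cross-hand delays from the polarized calibrator.
gaincal(vis=vis, caltable='cal.kcross', field=polcal_field,
        refant=refant, gaintype='KCROSS', solint='inf')

# (ii) Leakage (D-terms) from the unpolarized calibrator ...
polcal(vis=vis, caltable='cal.dterms', field=unpol_field,
       refant=refant, poltype='Df', solint='inf', combine='scan',
       gaintable=['cal.kcross'])
# ... or from a polarized calibrator with good parallactic-angle coverage:
# polcal(vis=vis, caltable='cal.dterms', field=polcal_field, poltype='Df+QU', ...)

# (iii) Frequency-dependent polarization angle (R-L phase offset).
polcal(vis=vis, caltable='cal.xf', field=polcal_field,
       refant=refant, poltype='Xf', solint='inf',
       gaintable=['cal.kcross', 'cal.dterms'])

applycal(vis=vis, gaintable=['cal.kcross', 'cal.dterms', 'cal.xf'])
```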
], [ "uGMRT-VLA RM estimation", "The Stokes Q and U images for the VLA 5 GHz data were created from two sub-bands with a bandwidth of 1024 MHz each, obtained from the original data having 16 spectral windows (spw) with a bandwidth of 128 MHz each, using the task $\\tt {TCLEAN}$ in $\\tt {CASA}$ .", "These were combined to create the polarization angle ($\\tt {PANG}$ ) images using the $\\tt {AIPS}$ task $\\tt {COMB}$ .", "The mean polarization angle of the “core” was estimated from the VLA sub-band $\\tt {PANG}$ images as well as the VLA 34 GHz $\\tt {PANG}$ image using the task $\\tt {IMSTAT}$ in $\\tt {AIPS}$ .", "The rotation measure (RM) of the core was estimated by fitting the relation: $\\chi $ = $\\chi _{\\circ }$  + RM $\\lambda ^2$ ($\\chi _{\\circ }$ and $\\chi $ being the intrinsic and observed polarization angles respectively) to the core polarization angles at the 3 frequencies: 5 GHz, 6 GHz and 34 GHz (the former two being the central frequencies of the VLA sub-band datasets), using $\\tt {PYTHON}$ fitting procedure - $\\tt {CURVE\\_FIT}$ .", "The uGMRT core polarization angle was excluded from this analysis since the core polarization structure in the uGMRT image was not clearly resolved from the jet, unlike the VLA images.", "We note that the calibrated data was not divided into four sub-bands since the signal-to-noise ratio in the individual sub-bands was too low for a reliable RM estimation.", "In order to estimate the RM for the south-western (SW) lobe, we created another set of $\\tt {PANG}$ images from the VLA sub-band datasets using the Stokes Q and U images that were convolved with an identical circular beam corresponding to the uGMRT resolution (4$~\\times $ 4$$ ).", "We also created the uGMRT $\\tt {PANG}$ image from the Stokes Q and U images convolved with this beam.", "The RM for the lobe was estimated by fitting the $\\chi -\\lambda ^2$ relation to the mean polarization angles of the lobe at the 3 frequencies: 685 MHz, 5 GHz and 6 GHz using the $\\tt {CURVE\\_FIT}$ procedure." ], [ "uGMRT-VLA Spectral Index Image", "Stokes I images of VLA 1.5 GHz and uGMRT 685 MHz data, convolved with an identical circular beam (6.5$~\\times $ 6.5$$ ; the beam of the lower resolution image), were created using the task $\\tt {TCLEAN}$ in $\\tt {CASA}$ .", "These images were made spatially coincident using the task $\\tt {OGEOM}$ in $\\tt {AIPS}$ , and combined using $\\tt {opcode=SPIX}$ in $\\tt {AIPS}$ task $\\tt {COMB}$ .", "A `two-point' spectral index image was created using the relation: $\\alpha $ = log ($\\mathrm {S}_1$ /$\\mathrm {S}_2$ )/log($\\nu _1/\\nu _2$ ) where $\\mathrm {S}_1$ and $\\mathrm {S}_2$ are flux densities at frequencies $\\nu _1$ and $\\nu _2$ (here $\\nu _1$ =685 MHz and $\\nu _2$ =1.5 GHz).", "Regions with flux densities below 3 times the rms noise were blanked while using $\\tt {COMB}$ ." 
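A minimal Python sketch of the two operations referred to in this section: the $\chi = \chi _{\circ } + \mathrm{RM}\,\lambda ^2$ fit with $\tt {CURVE\_FIT}$ and the two-point spectral index. The angle and flux-density values are placeholders, and $n\pi $ ambiguities in $\chi $ are ignored for brevity.

```python
# Sketch of the RM fit described above: chi = chi_0 + RM * lambda^2,
# fitted with scipy's curve_fit to the mean polarization angles at the
# three frequencies. The angle values below are placeholders, not the
# measured core/lobe angles.
import numpy as np
from scipy.optimize import curve_fit

c = 299792458.0                                 # speed of light in m/s

def chi_model(lam2, chi0, RM):
    """Observed polarization angle (rad) vs. lambda^2 (m^2)."""
    return chi0 + RM * lam2

freqs_hz = np.array([5.0e9, 6.0e9, 34.0e9])     # sub-band centres and 34 GHz
lam2 = (c / freqs_hz) ** 2                      # m^2

chi_obs = np.deg2rad(np.array([28.0, 25.5, 31.0]))   # placeholder mean angles
chi_err = np.deg2rad(np.array([2.0, 2.0, 1.5]))      # placeholder 1-sigma errors

popt, pcov = curve_fit(chi_model, lam2, chi_obs, sigma=chi_err, absolute_sigma=True)
chi0, RM = popt
chi0_err, RM_err = np.sqrt(np.diag(pcov))
print(f"RM = {RM:.0f} +/- {RM_err:.0f} rad/m^2, chi_0 = {np.rad2deg(chi0):.1f} deg")

# Two-point spectral index used for the 685 MHz - 1.5 GHz image:
def spectral_index(S1, S2, nu1=685e6, nu2=1.5e9):
    return np.log(S1 / S2) / np.log(nu1 / nu2)

print(spectral_index(10.0, 5.8))   # placeholder flux densities in mJy
```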
], [ "Radio Morphology", "A triple radio source with a “hotspot\"-core-“hotspot\" structure and a curved jet, along with a misaligned lobe, are observed in III Zw 2.", "The jet in III Zw 2 does not terminate in compact hotspots similar to those observed in FRII radio galaxies but rather in “bow-shock-like” radio features (see Section REF ahead).", "Figure REF presents the uv-tapered uGMRT 685 MHz total intensity contour image of III Zw 2 superimposed over SDSS g-band optical image, with an inset on the top right presenting the VLA 5 GHz total intensity contour image superimposed over the HST WFC3 F547M optical image of the SW jet and lobe displaying the bow-shock-like morphology of the “hotspot” region.", "The final resolution of the uv-tapered uGMRT image is 7.0$~\\times $ 6.5$$ with a PA = 88$$ .", "The uv-tapering was carried out to detect the diffuse lobe emission, especially in the misaligned lobe, better.", "As evident from the figure, the host galaxy is an interacting galaxy and part of an interconnected triplet of compact galaxies.", "While we cannot rule out contributions to the radio flux density from the companion galaxies seen in projection along the misaligned lobe, we find that the brightest part of diffuse radio emission in the misaligned lobe is not spatially coincident with the second largest of the companion galaxies to the south.", "Moreover, both the radio flux density and the spectral index of the misaligned lobe provide no clear indications that it has additional contribution from companion galaxies.", "Figures REF , REF and REF present total intensity contour images of III Zw 2 with electric fractional polarization vectors superimposed in red using uGMRT at 685 MHz, VLA at 5 GHz in B-array and 34 GHz in C-array respectively.", "The left panel of Figure REF presents a zoom-out of the inset of Figure REF , showing the full radio structure of III Zw 2, including the emission from the north-eastern (NE) lobe.", "From the figure, it appears that the tidal arm towards the north in the optical image may have interacted with the radio jet, but failed to disrupt it.", "The white curve superimposed over the SW lobe represents the bow-shock-like feature at the jet termination point.", "Table  REF provides the values of total flux density, polarized flux density, fractional polarization and polarization angle for different radio components of III Zw 2 from different projects.", "The errors in the total flux density of the cores were estimated from the calibration uncertainties (assuming $\\sim $ 10%) and rms noise of the images.", "Flux density values for compact components like the core were estimated using the Gaussian-fitting $\\tt {AIPS}$ task $\\tt {JMFIT}$ , whereas for extended emission, the $\\tt {AIPS}$ tasks $\\tt {TVWIN}$ and $\\tt {IMSTAT}$ were used.", "The rms noise values were also obtained using $\\tt {TVWIN}$ and $\\tt {IMSTAT}$ .", "The polarized flux density, fractional polarization and polarization angle values were estimated as the mean values for different components from the polarized intensity, fractional polarization and polarization angle images respectively, using the task $\\tt {IMSTAT}$ or $\\tt {TVSTAT}$ in $\\tt {AIPS}$ .", "The respective errors were estimated from the same regions of the corresponding noise images.", "All the lengths and widths used in the calculations ahead, were measured using the task $\\tt {TVDIST}$ in $\\tt {AIPS}$ .", "The distance from the core to “hotspot” in the SW lobe is 24 kpc and to the “hotspot” in the NE lobe is 33 
kpc.", "The jet is one-sided which could be explained as a result of the Doppler boosting and dimming effects; the brighter SW jet is the approaching one.", "Also, the jets are asymmetric in length, which could in principle arise from an asymmetry in the environment around the lobes.", "The medium to the south-west of III Zw 2 is denser than the medium to the north-east; the closest neighbouring galaxy to the south-west of III Zw 2 is only $\\sim $ 30 arcsec away [12].", "We also detect prominently a diffuse lobe to the south-east that is misaligned with the primary lobes (see Figures REF and  REF ).", "In the 34 GHz VLA image, we detect only the radio core.", "The misaligned lobe in III Zw 2 was briefly discussed by [93], where we had presented a shorter exposure uGMRT image at 685 MHz, obtained on December 7, 2018.", "In that work we had obtained an integrated flux density in the radio core of $\\sim $ 60 mJy and in the current work, it has dropped to $\\sim $ 40 mJy.", "The $\\approx $ 30% drop in the core flux density lies within the known observed range in the literature.", "The integrated flux density of the SW and NE lobes agree within errors between the two epochs.", "The spectral index image of III Zw 2 created using our new uGMRT data and similar resolution 1.5 GHz VLA data, indicates a mean spectral index of $-0.7$ and $-0.6$ in the SW and misaligned lobes, respectively (Figure REF ).", "The spectral index image shows a flat spectrum region in the misaligned lobe.", "This however is not spatially coincident with the southernmost companion galaxy, with the offset being $-6.4$  arcsec in the north and $+1.5$  arcsec in the east.", "We also find an ultra-steep ($\\alpha <-$ 1) region to the south of the SW primary lobe.", "This may be plasma left behind by the jet head that has subsequently aged.", "Some of this lobe material may also be pushed towards a region of lower ambient density by the bouyant forces from the medium surrounding the SW lobe, giving rise to the misaligned lobe (see Sections REF and  REF ahead).", "The radio core is polarized in the uGMRT 685 MHz image as well as in the 5 and 34 GHz VLA images.", "For an optically thick core, the inferred B-fields are roughly transverse to the local jet direction and also to the direction of the VLBA jet [10], [12].", "Using the procedure discussed in Sections REF , we estimated the RM for the core to be $-120\\pm 15$  rad m$^{-2}$ .", "We use the following relation to estimate the electron number density of the Faraday rotating medium that produces an RM of 120 rad m$^{-2}$ for the core: RM = $\\int $ 812 n$_e$  B$_{||}$  dl rad m$^{-2}$ , where n$_e$ is in cm$^{-3}$ , B$_{||}$ is the line-of-sight component of the B-field in milliGauss (mG) and dl is the path length of the Faraday screen in parsecs.", "Assuming a path length of $\\sim $ 2 kpc, which is the size of the radio core, and B$_{||}$ as the equipartition B-field of 0.14 mG, we obtain an n$_e$ of $\\sim 5\\times 10^{-4}$ cm$^{-3}$ which typically corresponds to the number density of the hot ionized medium in the ISM [96].", "The equipartition B-field is estimated using “minimum energy” arguments and equations (1) to (5) from [70], assuming a proton-to-electron energy ratio and a volume filling factor of unity.", "The radio spectrum is assumed to extend from 10 MHz to 100 GHz.", "The integrated flux density of the inner-jet is estimated from the 34 GHz VLA image and its mean spectral index value (0.8) is estimated using the integrated flux densities at 5 GHz and 34 GHz 
(see Table REF ).", "A cylindrical volume with dimensions (= 2 kpc, 1 kpc) corresponding to the 34 GHz core is used." ], [ "The Radio Lobes & Jet", "The SW lobe is polarized in both uGMRT 685 MHz and VLA 5 GHz images and the jet is polarized in the uGMRT image.", "Assuming optically thin plasma in the jet and lobes (in which case the B-fields are perpendicular to the $\\chi -$ vectors), both images reveal that the inferred B-fields are aligned with the edges of the SW lobe and also follow the jet bend at termination.", "The presence of the Laing-Garrington effect [30], [49] in the lobes of this quasar is also revealed, which favours the SW lobe as the approaching jet side.", "The uGMRT image reveals B-field to be largely transverse to the jet direction.", "A jet region close to the core (annotated as J1) in the VLA 5 GHz image is highly polarized, exhibiting inferred B-fields aligned with the local jet direction.", "The RM for the SW lobe was estimated as $-$ 4$\\pm $ 1 rad m$^{-2}$ using the procedure discussed in Sections REF .", "Using this value of RM, a path length of $\\sim $ 9 kpc which is the size of the SW lobe estimated from the VLA 5 GHz image, and B$_{||}$ as the equipartition B-field of $\\sim 7~\\mu $ G, we obtain an n$_e$ of $\\sim 7\\times 10^{-5}$ cm$^{-3}$ , which corresponds to the typical number density of the inter-galactic medium [101].", "For the minimum energy calculation, the integrated flux density of the SW lobe is estimated from the VLA 5 GHz image and its mean spectral index value ($-$ 0.9) is estimated using the integrated flux densities at 685 MHz and 5 GHz (see Table REF ).", "A cylindrical volume with length and radius of the SW lobe (= 9 kpc, 4 kpc) estimated from the 5 GHz image is used.", "The equipartition estimates for both the SW and misaligned lobes are provided in Table REF .", "The electron lifetimes are estimated using the relation given in [112], assuming the break frequency to be 1.5 GHz.", "The integrated flux densities of the lobes are estimated from the 1.5 GHz VLA image.", "The mean spectral indices of the lobes are obtained from the 685 MHz$-$ 1.5 GHz spectral index image presented in Figure REF .", "A cylindrical volume with length and width of the lobes estimated from the 1.5 GHz image is used.", "Figure: VLA 34 GHz total intensity contour image with electric fractional polarization vectors superimposed in red.", "The peak contour flux is 528 mJy beam -1 ^{-1} and the contour levels are 30 ×\\times (-1, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512) mJy beam -1 ^{-1}.", "1.51.5 length of the vector corresponds to 0.3% fractional polarization.Figure: VLA 5 GHz total intensity contour image superimposed over the HST WFC3 F547M optical image showing the tidal arm.", "The contour levels are (0.05, 0.08, 0.14, 0.28, 0.57, 1.21, 2.58, 5.53, 11.88, 25.57) mJy beam -1 ^{-1}.", "The color scale extends from 1.0×10 -5 1.0\\times 10^{-5} to 3.0×3.0~\\times (4.57×10 -7 )=4.6×10 -12 4.57\\times 10^{-7}) = 4.6\\times 10^{-12} Jy to 1.4 μ\\mu Jy.", "The white curve represents the bow-shock-like feature at the jet termination point.Figure: 685 MHz uGMRT total intensity contours superimposed over 685 MHz -- 1.5 GHz spectral index image in color created using 685 MHz uGMRT and 1.5 GHz VLA data.", "The peak contour flux is 37 mJy beam -1 ^{-1} and the contour levels are 0.4 ×\\times (-1, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512) mJy beam -1 ^{-1}.", "The color scale ranges from --2 to 2.Table: Equipartition estimates for the radio lobes at 1.5 GHzThe propagation of 
the head of a radio lobe through an ambient medium is governed by a balance between the thrust of the jet and the ram pressure exerted by the medium [83].", "The density of the environment around a radio lobe ($\\rho _a$ ) could be estimated as: $\\rho _a$ =$\\pi _j$ /(A$_h$ v$_h^2$ ), where $\\pi _j$ is the thrust of the jet spread over a cross-sectional area A$_h$ of the bow-shock at the end of the jet and v$_h$ is the advance speed of the head of the lobe.", "$\\pi _j$ could be written as Q$_j$ /v$_j$ where Q$_j$ is the jet power, i.e.", "the amount of energy delivered by the jet per unit time and v$_j$ is the velocity of the jet material, which is assumed to be c, the velocity of light.", "v$_h$ could be written as l$_h$ /t$_h$ where l$_h$ is the length of the lobe and t$_h$ is the age of the lobe.", "The jet power could be estimated as the total energy of the lobe (from Column 6 of Table REF ) divided by the age of the lobe.", "We find that both SW and misaligned lobes have comparable ages and jet powers (see Table REF ).", "A$_h$ is proportional to R$_h^{-2}$ , where R$_h$ is the radius of the impact area.", "Considering half-width of the lobes close to their ends as R$_h$ , we find that R$_h$ is $\\sim $ 10 kpc for both the lobes.", "Also, l$_h$ is $\\sim $ 15 kpc for the SW lobe and $\\sim $ 50 kpc for the misaligned lobe.", "Higher l$_h$ implies lower $\\rho _a$ for the misaligned lobe as compared to the SW lobe." ], [ "Discussion", "We discuss below the radio and polarization features in order to explain the nature of the AGN in III Zw 2." ], [ "Radio Features", "The curved kpc-scale jet terminates in a bow-shock-like region in the SW lobe (see Figure REF ).", "A similar structure is also observed at the end of the eastern lobe.", "A compact terminal hotspot is not observed in III Zw 2.", "The inferred B-fields are aligned with the lobe edges.", "The shape and magnetic field structure resemble that observed at the end of the eastern jet in the episodic radio galaxy, Pictor A [72], [80], [102], but without a preceding hotspot in III Zw 2.", "In fact the shape and magnetic field structures observed at the lobe edge in III Zw 2 are similar to what is observed at the lobe edges of Seyfert galaxies [43], [85], [86].", "This radio morphology is also observed in Fig.", "1e in the restarted jet simulation work of [15].", "These authors find that when a restarted jet is launched into a region of thermalized plasma from the previous episode of jet activity, and therefore is overdense compared to the old jet material through which it passes, a protrusion is seen at the leading edge of the lobe (their Fig.", "1e), similar to the feature observed in III Zw 2.", "The morphology of the SW lobe in III Zw 2 may therefore be indicative of restarted jet activity, which is also supported by the evidence of intermittent jet activity on parsec-scales [11], [12], [53].", "It is possible that the intermittency has a shorter duty cycle compared to RL AGN.", "The terminal lobe region could also indicate a sharp jet bend which may have resulted when the jet encountered a density inhomogeneity in the intergalactic medium.", "However, this cannot explain why a similar structure is observed on the counter-jet side.", "Similar spectral indices and electron lifetimes for both SW and misaligned lobes (see Figure REF and Table REF ) suggest that the misaligned emission is not due to a previous AGN activity episode.", "It appears that the outflow, which is likely a combination of a jet and a wind (see Section REF 
ahead), moves away from the jet termination region into a lower density region, that is, in the direction of least resistance.", "This would be consistent with our finding that $\\rho _a$ for the misaligned lobe is lower than for the SW lobe (see Section REF ).", "The primary and the misaligned radio lobes of III Zw 2 exhibit a mean spectral index between $-$ 0.6 and $-$ 0.7 and no clear spectral steepening with distance from the core.", "These lobe characteristics are similar to those revealed in the “sputtering” AGN, NGC 3998 [97].", "These results could suggest that the primary lobes are being actively fed by the nucleus by intermittent fuelling and that the plasma in the misaligned lobe is being constantly energized and re-accelerated in the presence of turbulence, instabilities and shocks arising from galactic merging activity." ], [ "Inferred B-field Structures and Nature of the Outflow", "The transverse B-fields in the core are suggestive of a toroidal B-field component at the base of the outflow, which continues all the way up to the edge of the SW lobe as seen in the uGMRT 685 MHz image.", "The transverse B-fields in the uGMRT jet could either be interpreted as due to transverse shocks that amplify and order B-fields by compression, or could represent a toroidal component of a large-scale helical B-field associated with the jet [27], [75].", "The parallel B-fields in the jet component J1 from the VLA 5 GHz image could represent a poloidal component of the helical jet B-field.", "Overall, the polarization structures observed in III Zw 2 are consistent with the work of [7], where the authors suggest that with increasing distance from the core, the toroidal component of the B-field becomes more dominant and the poloidal component decays faster since the poloidal field varies as r$^{-2}$ and the toroidal field varies as r$^{-1}$ v$^{-1}$ .", "The depolarization of the jet in the 5 GHz VLA image could be explained as an “inverse depolarization” effect.", "This effect has been previously observed by [35], [36] in optically thin jet features.", "This effect could originate from a combination of structural inhomogeneities in the jet B-field that could cancel the polarization along the line of sight and internal Faraday rotation that increases with the square of the wavelength [95], [33], [34].", "Such structural inhomogeneities could naturally arise in helical B-fields and random B-fields that are tangled over long length scales.", "Internal Faraday rotation could align the polarization from the far side of the jet with that from the near side of the jet.", "This reduces the cancellation between the vectors and increases the net polarization at longer wavelengths.", "However, the works of [60] and [62] suggest that toroidal B-fields could be associated with magnetically-driven AGN winds and poloidal B-fields could be associated with jets.", "Therefore, one cannot rule out the possibility of a magnetized accretion disk wind or the outer layers of a broadened jet threaded by toroidal B-fields, or with a series of transverse shocks as seen in BL Lac jets [27], [56], on larger spatial scales than sampled by the VLA observations.", "The inherent complexities in the models, data and numerical simulations make it difficult to differentiate between MHD winds and the outer layers of a broadened jet.", "A co-axial jet and wind outflow model where the jet is embedded inside the wind, would appear co-spatial in projection.", "Moreover, such a model would be similar to that revealed in the 
protostellar system HH 212 comprising of a jet and MHD disk wind [52].", "In order to confirm if the model of [60] and [62] is valid for III Zw 2, one would require highly sensitive observations at higher frequency and higher resolution that can substantially sample poloidal B-fields in the spine of the jet (for which we already have a marginal detection), in conjunction with emission-line data that can provide complete information on the multi-phase outflow.", "By modelling the X-ray reflection spectrum of III Zw 2 from joint XMM-Newton and NuSTAR observations, [13] find that the spin parameter for III Zw 2 is $\\ge $ 0.98.", "A positive spin parameter suggests a prograde rotation i.e.", "a co-rotation of the black hole and the accretion disk [77]; the high value suggests that the black hole is spinning rapidly.", "Using an AGN subsample from [60] that included III Zw 2, [29] find that high spinning black hole co-rotating with a radiatively efficient thin accretion disk could give rise to a strong wind and a weak jet, consistent with our results on III Zw 2." ], [ "Implications for the Radio Intermediate Nature", "[58] suggest disc truncation and refilling as an explanation for the intermittent jet cycle in 3C120.", "Moreover, hard and soft spectral states of BHs are often associated with advection-dominated accretion flow [66] and thin accretion disk [89] respectively [20], [46].", "These could indicate that intermittent jet production is associated with changing spectral states of the accretion disk in microquasars and quasars [68].", "A soft spectral state is characterized by a standard thin accretion disk that extends to the last stable orbit.", "Since the magnetic flux accumulation close to the BH in thin accretion disks is not efficient enough for launching of powerful radio jets [92], this state would produce suppressed radio jets and strong winds.", "Eventually, the inner regions of the standard accretion disk gets destroyed by some instability and replaced by a radiatively inefficient accretion flow.", "This hard state is accompanied by efficient magnetic flux accumulation close to the BH, giving rise to powerful and steady radio jets.", "After this the disk refills again with radiatively efficient accretion flow and the spectral state changes to soft state accompanied by quenched radio jets and this cycle repeats.", "Overall, as and when the inner accretion disk region destabilizes, a jet component is ejected.", "The inner disk region in regular RL and RQ AGN destabilizes on much longer timescales, typically of the order of $10^6-10^8$  yrs [1], [103], [19], [59], [88], than the AGN showing intermittent/sputtering jet activity on a much shorter, decadal timescales.", "This may suggest that the RQ AGN are the ones in the soft state, hosting strong winds and suppressed jets, and RL AGN are the ones in the hard state, launching powerful jets, at the time of observations.", "The outflow in III Zw 2 appears to be a combination of an AGN jet embedded inside a broader wind.", "In the framework of changing spectral states of the accretion disk, the MHD wind may be launched from the outer regions of the accretion disk whereas the jet may be launched from the inner regions of the disk.", "Sputtering/intermittent jet activity, typically on decadal timescales [12], [53], may suggest that the time scales of transition between the hard and soft states is so rapid that one may always find a combination of an AGN jet of moderate radio luminosity and an accretion disk wind.", "The interaction of 
the jet with the wind plasma could possibly disrupt the jet or, much of the mass and energy may be carried away by the wind, rather than the jet.", "This could explain why the jets in RI AGN are low-powered and small-scaled, as compared to the RL AGN.", "Moreover, the indication of the restarted jet activity and the associated bow-shock-like morphology in the lobes of III Zw 2 (discussed in Section REF ) may arise because of the launching of the jet during the current epoch of the hard state into a region of thermalized wind plasma released from the previous epoch of the soft state.", "The wind component may also be the outer layers of a broadened jet from a previous epoch that results from an inclined and interacting jet, or be a combination of an accretion disk wind and jet layer.", "Overall, the close interplay between the jet and the wind may in turn explain the RI nature of III Zw 2.", "Moreover, our results are consistent with the idea that radio-loudness is a function of the epoch at which the source is observed [68], [69], [48], [18]." ], [ "Conclusions", "We have presented the results from our multi-frequency radio polarization study of the radio-intermediate quasar, III Zw 2, with the uGMRT and the VLA.", "The inferred B-fields appear to be toroidal at the base of the outflow, which further continues all the way upto the edge of the SW lobe as seen in the uGMRT 685 MHz image.", "The transverse B-fields in the uGMRT jet may either indicate B-field amplification due to shock compression inside the jet or the toroidal component of a large-scale helical B-field in the jet, or the toroidal B-fields threading an AGN wind that is sampled better by the lower frequency and poorer resolution of the uGMRT compared to the higher frequency and higher resolution of the VLA.", "The VLA 5 GHz image reveals tentative signatures of a poloidal B-field component in the spine of the jet.", "Overall, the outflow in III Zw 2 may be a combination of a jet embedded inside a wind, where the wind can either be a magnetized accretion disk wind or the outer layers of a broadened jet or a combination of both.", "The bow-shock-like morphology of the jet terminal regions are consistent with restarted jet activity in III Zw 2, which is in turn consistent with its “sputtering” nature.", "Diffuse lobe emission that is misaligned with the primary lobes is detected.", "The spectral indices and the electron lifetimes in the misaligned lobe are similar to the SW lobe, suggesting that the misaligned emission is not due to a previous AGN activity episode.", "It appears that the SW lobe is embedded in a medium of higher density than the misaligned lobe.", "Therefore, the buoyant forces applied by the medium around the SW lobe pushes the lobe material laterally to a region of lower density so as to achieve a density equilibrium, which gives rise to the misaligned lobe emission.", "Spectral state changes in the accretion disk and the subsequent episodic/intermittent behaviour of the outflow, along with the close interplay between the AGN jet and wind, could be responsible for the radio-intermediate nature of III Zw 2.", "Moreover, based on suggestions of a co-existence of an AGN jet and wind, and the presence of a prograde-rotating high spin BH, we infer that a radiatively efficient thin accretion disk could be present in III Zw 2 at the current epoch, as proposed in the work of [29].", "The current work clearly demonstrates the power of radio polarization studies to understand the origin of radio outflows in systems where 
multiple emission mechanisms co-exist, such as the RI AGN.", "The fact that the RI AGN straddle the RL-RQ division makes them ideal candidates for not only understanding the origin of the RL-RQ dichotomy but also providing insights on jets and winds and their impact on galaxy evolution via feedback processes." ], [ "Acknowledgements", "We thank the referee for their insightful suggestions that have improved this manuscript significantly.", "LCH was supported by the National Science Foundation of China (11721303, 11991052) and the National Key R&D Program of China (2016YFA0400702).", "We thank the staff of the GMRT who have made these observations possible.", "The GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research.", "We acknowledge the support of the Department of Atomic Energy, Government of India, under the project 12-R&D-TFR-5.02-0700.", "The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.", "The data underlying this article will be shared on reasonable request to the corresponding author.", "The VLA data underlying this article can be obtained from the NRAO Science Data Archive (https://archive.nrao.edu/archive/advquery.jsp) using the proposal ids: 20A-182, 17A-027 and 15A-070.", "The GMRT data underlying this article can be obtained from the GMRT Online Archive (https://naps.ncra.tifr.res.in/goa/data/search) using the proposal id: DDTC130.", "The HST data underlying this article can be obtained from the MAST HST Archive (https://archive.stsci.edu/hst/search.php) using the proposal id: ICNO01010." ] ]
2107.01818
[ [ "Quantitative transfer of regularity of the incompressible Navier-Stokes\n equations from $\mathbb{R}^3$ to the case of a bounded domain" ], [ "Abstract Let $u_0\in C_0^5 ( B_{R_0})$ be divergence-free and suppose that $u$ is a strong solution of the three-dimensional incompressible Navier-Stokes equations on $[0,T]$ in the whole space $\mathbb{R}^3$ such that $\| u \|_{L^\infty ((0,T);H^5 (\mathbb{R}^3 ))} + \| u \|_{L^\infty ((0,T);W^{5,\infty }(\mathbb{R}^3 ))} \leq M <\infty$.", "We show that then there exists a unique strong solution $w$ to the problem posed on $B_R$ with the homogeneous Dirichlet boundary conditions, with the same initial data and on the same time interval for $R\geq \max(1+R_0, C(a) C(M)^{1/a} \exp ({CM^4T/a}))$ for any $a\in [0,3/2)$, and we give quantitative estimates on $u-w$ and the corresponding pressure functions." ], [ "Introduction", "We are concerned with the three-dimensional incompressible Navier-Stokes equations $\begin{split}\partial _t u - \nu \Delta u + (u\cdot \nabla ) u + \nabla \pi &=0,\\\operatorname{div}u &=0\end{split}$ in $\mathbb {R}^3$ , where $\nu >0$ is the viscosity coefficient, $u$ is the velocity of a fluid, and $\pi $ is the pressure function.", "The equations are equipped with an initial condition $u(0)=u_0$ .", "The study of the equations goes back to the work of Leray [26] and Hopf [18], who showed the global-in-time existence of weak solutions in the case of $\mathbb {R}^3$ (Leray) and the case of a bounded, smooth domain $\Omega \subset \mathbb {R}^3$ (Hopf).", "These are usually referred to as Leray-Hopf weak solutions.", "We refer the reader to the recent comprehensive review article [29] of Leray's work and to [33] for a general background of the mathematical theory of the Navier-Stokes equations (REF ).", "We note that the fundamental question of global-in-time existence of strong solutions remains open in each of these settings.", "Heywood [17] was the first to study the connections between the Navier-Stokes equations posed on different domains, and he showed that a Leray-Hopf weak solution on $\mathbb {R}^3$ can be obtained as a limit of weak solutions on $B_R$ .", "In the two-dimensional case Kelliher [19] proved that weak solutions on large domains converge strongly, in the energy space, to a weak solution on $\mathbb {R}^2$ , provided the latter exists on the same time interval.", "Some connections regarding the existence of smooth solutions in various settings were explored from a different point of view by Tao [38].", "A more direct link regarding the well-posedness question between the case of the whole space and the torus has been recently shown by Robinson [32], who used a compactness method to show that for localized initial data, given the solution remains strong in $\mathbb {R}^3$ until time $T$ , the same is true in the case of a sufficiently large periodic torus.", "This is the problem of “transfer of regularity” that we are concerned with in this note.", "It is closely related to numerical analysis of problems of fluid mechanics (see [20], for example), where often an infinite domain must be approximated by a bounded domain.", "Some related problems of transfer of regularity have been studied in the context of vanishing viscosity limits, where the existence of a smooth solution to the Euler equations implies the existence of a smooth solution to the Navier-Stokes equations with a sufficiently small viscosity [10].", "Another related phenomenon is that sufficiently smooth
solution of the Navier-Stokes equations on a time interval $[0,T]$ gives similar regularity of some numerical schemes [9].", "Some other works [31], [14] deduce regularity of 3D flows that are, in some sense, “sufficiently two-dimensional.” We refer the reader to [32] for a further discussion.", "We note that it can be verified that the considerations of Robinson [32] translate to the case of a bounded, smooth domain $\\Omega \\subset \\mathbb {R}^3$ .", "This gives a transfer of regularity from $\\mathbb {R}^3$ to a sufficiently large $\\Omega $ .", "However, due to the use of the compactness method, it is not clear from this approach how large the approximating domain would need to be.", "We address this issue here.", "Our main theorem gives the first quantitative result regarding the size of the domain $\\Omega $ on which the problem (REF ), equipped with the homogeneous Dirichlet boundary conditions, has a unique strong solution on the same time interval and with the same initial data.", "Theorem 1 (Main result) Let $R_0>0$ and $M\\ge 1$ , $a\\in [0,3/2)$ and assume that $u_0 \\in C_0^5 (B_{R_0})$ is divergence free.", "Suppose that $(u,\\pi )$ is a strong solution of (REF ) with the initial condition $u(0)=u_0$ on $\\mathbb {R}^3$ for $t\\in [0,T]$ , such that $ \\Vert u (t) \\Vert _{H^5}+ \\Vert u (t) \\Vert _{W^{5,\\infty }} \\le M$ for all $t\\in [0,T]$ .", "Then for every $R\\ge R_0+1$ such that $R\\gtrsim _{a,M,u_0 } \\mathrm {e}^{C M^4 T/a },$ where $a\\in [0,3/2)$ , there exists a unique strong solution $(w,\\tilde{\\pi })$ to the problem (REF ) posed on $B_R$ with the homogeneous boundary condition, $ \\left.", "u\\right|_{\\partial B_R} =0$ , and the same initial condition $w(0) = u_0$ (see Definition REF below), where $C>1$ is a universal constant.", "Moreover $\\begin{split}\\Vert \\nabla (u -w ) (t) \\Vert _{L^2(B_R)} &\\lesssim _{a,M,u_0 } \\mathrm {e}^{CM^4t} R^{-a} \\\\\\text{ and }\\quad \\Vert \\nabla (\\pi - \\tilde{\\pi } ) \\Vert _{L^p((0,t);L^2(B_R))} &\\lesssim _{a,M,u_0 } t^{\\frac{1}{p}} \\mathrm {e}^{CM^4t} R^{-a}\\end{split}$ for all $t\\in [0,T]$ , $a\\in [0,3/2)$ and $p\\in (1,\\infty )$ .", "In the above theorem and below we use subscripts to articulate any dependencies of implicit constants.", "For example, the symbol “$\\lesssim _{a,M,u_0 }$ ” denotes “$\\le C(a,M,u_0)$ ” for some implicit constant $C(a,M,u_0)>0$ dependent on $a,M,u_0$ only.", "In fact, in the particular case of (REF ) the constant can be made more precise, $C(a,M,u_0) :=C(a) \\left( \\frac{D^2 }{M^3} \\right)^{-\\frac{1}{a}}$ where $D:=\\mathrm {e}^{C MT} (4R_0^{2a+8} \\Vert u_0 \\Vert ^2_{H^{5}} + M).$ Similarly the implicit constants in (REF ) can be taken of the form $C(a,M,u_0) = C(a) M D^2$ .", "These quantified constants are clear from the proof below.", "We note that the assumptions of the above theorem hold if $u \\in L^\\infty ((0,T);L^\\infty )$ or $\\nabla u \\in L^\\infty ((0,T); H^1)$ (as shown by Leray [26], see also [29]).", "In fact in that case $M \\le C (\\Vert u \\Vert _{L^\\infty ((0,T);L^\\infty )}, \\Vert u_0 \\Vert _{H^5 },\\Vert u_0 \\Vert _{W^{5,\\infty } })$ , which can be shown by an iteration (with respect to the order of the derivatives) of Gronwall inequalities (see [12] or [33] for details).", "There are a number of well-known sufficient conditions that guarantee that a given Leray weak solution is in fact strong [8], [11], [22], [25], [28], [30], [33], [34], [35], [40].", "One of them is the Ladyzhenskaya-Prodi-Serrin [25], [30], [34] condition, $u \\in 
L^p ((0,T);L^q (\\mathbb {R}^3 ))$ , where $p\\in [2,\\infty )$ , $q\\in (3,\\infty ]$ are such that $\\frac{2}{p}+\\frac{3}{q} =1.$ Then in fact $\\Vert \\nabla u \\Vert _{L^\\infty ((0,T);L^2)}^2 \\le \\Vert \\nabla u_0 \\Vert _2^2 \\exp \\left( \\Vert u \\Vert _{L^p ((0,T);L^q (\\mathbb {R}^3))}^p\\right),$ see [33], for example.", "The above theorem is also valid with $B_R$ replaced by any bounded and smooth domain of the form $R\\Omega $ , where $B_1\\subset \\Omega $ .", "Since Theorem REF can be proved using a straightforward procedure, we present it now, including some quantitative bounds that will be verified in detail in Sections REF –REF below.", "We use the notation $\\Vert \\cdot \\Vert _p \\equiv \\Vert \\cdot \\Vert _{L^p (\\mathbb {R}^3)}$ for brevity.", "[Proof of Theorem REF ] Let $\\phi \\in C_0^\\infty (\\mathbb {R}^3 ; [0,1])$ be such that $\\phi =1$ on $B_{R-1}$ , and $\\phi =0$ outside $B_{R}$ .", "Then $u\\phi $ is a solution to the problem $\\begin{split}\\partial _t (u\\phi ) - \\Delta (u\\phi ) + \\nabla (\\phi \\pi ) + ((u\\phi ) \\cdot \\nabla ) (u\\phi ) & = F_1 \\\\\\left.", "(u\\phi ) \\right|_{\\partial B_R} &= 0,\\\\\\operatorname{div}\\, (u\\phi ) &= \\nabla \\phi \\cdot u,\\\\(u\\phi ) (0) &= u_0\\end{split}$ in $\\mathbb {R}^3 \\times (0,T)$ , where $F_1 :=- \\phi (1-\\phi ) (u\\cdot \\nabla )u + (\\phi u \\cdot \\nabla \\phi ) u + \\pi \\nabla \\phi - u \\Delta \\phi - 2 \\nabla u \\cdot \\nabla \\phi .$ In Section REF we study the spatial decay of strong solutions of (REF ) to deduce that at each time $t\\in [0,T]$ $\\Vert \\nabla u - \\nabla (u\\phi ) \\Vert _2 \\lesssim _a DR^{-a} \\qquad \\text{ and } \\qquad \\Vert \\nabla ((1-\\phi ) \\pi ) \\Vert _2\\lesssim _a D^2 R^{-a}$ and $\\Vert F_1 \\Vert _2 \\lesssim _a D^2 R^{-a },$ where $a \\in [0,3/2)$ .", "Since $\\operatorname{div}(u\\phi ) \\ne 0$ , we can use a Bogovskiĭ-type correction $u_c$ (see Section REF ) such that $\\Vert \\nabla u_c \\Vert _2 + \\Vert u_c \\Vert _\\infty \\lesssim _a DR^{-a},$ $\\hspace{42.67912pt}\\Vert F_2 \\Vert _2 \\lesssim _a D^2 R^{-a }$ for all $t\\in [0,T]$ and $a \\in [0,3/2)$ , where $F_2 :=(u_c \\cdot \\nabla ) (u\\phi ) + (u\\phi \\cdot \\nabla )u_c + (u_c \\cdot \\nabla ) u_c - \\Delta u_c +\\partial _t u_c,$ and such that the “corrected velocity field” $r:=u\\phi + u_c$ is divergence free and vanishes on $\\partial B_R$ .", "Letting $F:=F_1+F_2,$ we deduce from (REF ), (REF ) (in Section REF ) that for $R$ satisfying (REF ) the problem $\\begin{split}\\partial _t v - \\Delta v + (v\\cdot \\nabla ) v + \\nabla \\overline{ \\pi } &=F- (v\\cdot \\nabla )r - (r\\cdot \\nabla ) v \\qquad \\text{ in } B_R \\times (0,T), \\\\\\operatorname{div}v &=0 \\qquad \\text{ in } B_R \\times [0,T],\\\\\\left.", "v(t) \\right|_{\\partial B_R} &= 0 \\qquad \\text{ for } t\\in [0,T]\\end{split}$ has a unique strong solution $v$ on $[0,T]$ with $v(0)=0$ (see Definition REF and Lemma REF ), such that $\\Vert \\nabla v(t) \\Vert _{L^2 (B_R)} \\lesssim _a \\frac{D^{2} R^{-a} }{M^{2}} \\mathrm {e}^{CM^{4} t} \\quad \\text{ and }\\quad \\Vert \\nabla \\overline{\\pi } \\Vert _{L^p((0,t);L^2(B_R))} \\lesssim _{a,p} t^\\frac{1}{p} M D^2 R^{-a} \\mathrm {e}^{CM_2^4 t } ,$ for every $t\\in [0,T]$ , $p\\in (1,\\infty )$ , $a\\in [0,3/2)$ , where $C>1$ is an absolute constant.", "It follows that then $w:=r + v,\\qquad \\tilde{\\pi } :=\\pi \\phi + \\overline{\\pi }$ satisfies the claim of Theorem REF (i.e., satisfies (REF ) with vanishing right-hand side and initial condition $w(0)=u_0$ 
).", "The estimates (REF ) follow directly from (REF ) and (REF ).", "The most difficult part of the above procedure are the decay estimates (REF ),(REF ), for which we employ the machinery developed by Kukavica and Torres [23], [24].", "Inspired by [24] we consider the vorticity equation (on the whole space $\\mathbb {R}^3$ ), and we observe that, given spatial derivatives of $u$ are bounded, all spatial derivatives of vorticity $\\omega $ are well-localized.", "This allows us to control $\\Vert |x|^a D^l \\omega \\Vert _2$ for all $l$ and $a$ (see (REF )).", "In order to obtain control in other $L^p$ spaces of $|x|^a D^lu$ we need to apply a Fourier method (see Lemma REF and (REF )).", "This also results in the restriction $p\\in [2,\\infty )$ and $a\\in [0,3/p^{\\prime }+l)$ , where $p^{\\prime } $ denotes the dual exponent of $p$ , and $l\\ge 1$ denotes the order of the spatial derivatives of $u$ .", "Using the Caffarelli-Kohn-Nirenberg inequality (see (REF ) below), we can also handle the case $l=0$ (see (REF )).", "Using weighted pressure inequalities (REF ) and (REF ), we can then obtain (REF ), see Section REF .", "The reason for the restriction $a\\in [0,3/2)$ comes from the lowest order terms $\\pi \\nabla \\phi $ and $u\\Delta \\phi $ appearing in $F_1$ , as then taking $p:=2$ gives $a\\in [0,3/2)$ .", "The main feature of the Bogovskiĭ correction $u_c$ is that it is supported on the set $B_R \\setminus B_{R-1}$ , which is not star-shaped.", "Such extensions of the Bogovskiĭ lemma are well-known (see [5] or [13], for example).", "In our case the divergence structure of $\\mathrm {div}\\, (u\\phi ) = \\nabla \\phi \\cdot u$ make such extension easier by using a partition of identity to decompose $\\phi $ into a number of cutoff functions supported on star-shaped domains.", "We discuss this and show the resulting estimates (REF ) and (REF ) in Section REF .", "Finally, the well-posedness of the system (REF ) can be verified using classical arguments.", "We discuss it in Section REF in order to expose the required smallness of $F$ and to verify (REF ).", "We note that the required order 5 of derivatives that must be under control in the assumptions in Theorem REF come from the fact that the highest order derivative of $u$ whose spatial decay must be under control is 2 (see term “$\\Delta u_c$ ” in $F_2$ ).", "Due to our Fourier method in Lemma REF , this translates into decay estimate on $D^4 \\omega $ (see (REF )), which in turn requires boundedness of $\\Vert u \\Vert _{H^5}+\\Vert u \\Vert _{W^{5,\\infty }}$ (see (REF )).", "We also note that the proof of Theorem REF can also be performed with the cutoff function $\\phi $ replaced by a cutoff with a different transition rate, such as $\\phi _1 \\in C_0^\\infty ( B_{R_2} ; [0,1])$ with $\\phi _1=1$ on $B_{R_1}$ , where $R_2>0$ , $R_1\\in (0, R_2)$ .", "In such case some additional factors of $(R_2-R_1)^{-1}$ can be obtained from the terms involving derivatives of $\\phi _1$ .", "Thus the main estimate (REF ) would become weaker as $|R_2-R_1|\\rightarrow 0$ .", "On the other hand it would not improve as $|R_2 - R_1|\\rightarrow \\infty $ in the sense that we would still need $R_1$ bounded below as in (REF ).", "For example, we use the inequality $|x|>R_1$ whenever $(1-\\phi _1)\\ne 0$ (e.g.", "in (REF ) below), which would make the decay estimates (REF )-(REF ) at least of order $R_1^{-a}$ .", "It will become clear from the proof (see (REF )) that one can replace (REF ) with $D:=\\mathrm {e}^{CMT} ( A+ M),$ where $C>1$ is a 
universal constant, and $A>1$ is such that $\\Vert |x|^{a+4} D^l \\omega _0 \\Vert ^2_2 \\le A$ for $l=0,\\ldots ,4$ , where $\\omega _0 :=\\mathrm {curl}\\, u_0$ denotes the initial vorticity (see (REF ) below).", "Thus the requirements $u_0\\in C_0^\\infty (B_{R_0})$ and $R\\ge R_0 +1$ are not necessary in Theorem REF , and we obtain the following.", "Corollary 2 (Non-compactly supported $u_0$ ) Let $a\\in [0,3/2)$ .", "Suppose that $u_0 \\in H^5 (\\mathbb {R}^3 ) \\cap W^{5,\\infty } (\\mathbb {R}^3)$ is such that (REF ) holds for some $A>1$ .", "Then the claim of Theorem REF remains valid for $ R\\ge C(a) \\left( \\frac{D^2}{M^3} \\mathrm {e}^{CM^4 T } \\right)^{\\frac{1}{a}},$ where $D$ is from (REF ), with the initial condition on $w(0)$ replaced by $w(0)=u\\phi + u_c$ (see (REF ) above)." ], [ "Proof of the main result", "As sketched above, Theorem REF follows from (REF )–(REF ).", "We first introduce some concepts and inequalities." ], [ "Preliminaries", "We use standard conventions regarding the Lebesgue spaces $L^p (\\Omega )$ , Sobolev spaces $W^{k,p}(\\Omega )$ , $H^k(\\Omega )$ , $H^1_0(\\Omega )$ , and we denote by $C_0^\\infty (\\Omega )$ the space of smooth functions with compact support in $\\Omega $ .", "We write $\\Vert \\cdot \\Vert _p \\equiv \\Vert \\cdot \\Vert _{L^p}$ .", "In the case $\\Omega = \\mathbb {R}^3$ we omit the domain to simply write $L^p \\equiv L^p(\\mathbb {R}^3)$ and $H^k \\equiv H^k_0 \\equiv H^k (\\mathbb {R}^3)$ .", "We denote by $V$ the closure of the set of divergence-free functions $\\phi \\in C_0^\\infty (\\Omega )$ in the $H^1$ norm.", "We denote by $\\Omega ^c :=\\mathbb {R}^3 \\setminus \\Omega $ the complement of a set $\\Omega $ .", "We denote by $C\\ge 1$ any universal constant that may change value from line to line.", "We denote by $\\widehat{f} (\\xi ) :=\\int _{\\mathbb {R}^3} f(x) \\mathrm {e}^{-2\\pi i x\\cdot \\xi } \\mathrm {d}x$ the Fourier transform of $f$ , and by $\\mathcal {R}_j f $ , $\\widehat{\\mathcal {R}_j f }(\\xi ) :=\\mathcal {R}_j (\\xi )\\widehat{f}(\\xi ) \\equiv \\frac{\\xi _j}{|\\xi |} \\widehat{f}(\\xi )$ , the Riesz transform with respect to the $j$ -th variable, for $j=1,2,3$ .", "We let $\\eta \\in C_0^\\infty (B_2; [0,1])$ be such that $\\eta =1$ on $B_1$ , and let $\\widetilde{\\eta } :=1- \\eta $ .", "We recall that $\\Vert \\Lambda ^a ( \\mathcal {R}(\\xi ) \\widetilde{\\eta } (\\xi ) ) \\Vert _\\infty <\\infty $ for every $a\\ge 0$ , see [24] for a proof.", "Given $T>0$ and a smooth and bounded domain $\\Omega \\subset \\mathbb {R}^3$ , we consider the Navier-Stokes initial boundary value problem, $\\begin{split}\\partial _t v - \\Delta v + (v\\cdot \\nabla ) v + \\nabla \\overline{\\pi } &=F- (v\\cdot \\nabla )r - (r\\cdot \\nabla ) v \\qquad \\text{ in } \\Omega \\times (0,T),\\\\\\operatorname{div}v &=0,\\\\\\left.", "v \\right|_{\\partial \\Omega } &= 0,\\\\v(0) &=v_0,\\end{split}$ where $v_0 \\in C_0^\\infty (\\Omega )$ , $F\\in L^\\infty ((0,T);L^2)$ and $r\\in L^2([0,T]; H^1 )$ .", "Definition 3 (Strong solution to (REF )) We say that $v$ is a strong solution to (REF ) if $v\\in L^\\infty ((0,T);V)\\cap L^2 ((0,T);H^2 (\\Omega ))$ and $\\int _0^t \\int _\\Omega \\left( -v\\cdot \\partial _t \\varphi + \\nabla v : \\nabla \\varphi + \\left( (v\\cdot \\nabla ) v + (v\\cdot \\nabla ) r + (r\\cdot \\nabla )v -F\\right) \\cdot \\varphi \\right) = \\int _\\Omega v_0 \\cdot \\varphi (0) - \\int _\\Omega v(t)\\cdot \\varphi (t)$ for all divergence-free $\\varphi \\in C_0^\\infty ([0,\\infty )\\times 
\\Omega ) $ and almost all $s\\in (0,T)$ .", "We note that for every strong solution $v$ to (REF ), there exists a unique (up to a function of time) pressure function $\\overline{\\pi }$ (see [36] or [33]).", "In the case of $\\Omega = \\mathbb {R}^3$ $\\overline{\\pi } = \\sum _{i,j=1}^3 \\mathcal {R}_i \\mathcal {R}_j (u_j u_i),$ see [29] or [33] for details.", "Moreover, $\\Vert \\nabla \\overline{\\pi } \\Vert _{L^p ((0,t); L^q (\\Omega ) )} \\lesssim _{p,q} \\Vert F - (v\\cdot \\nabla )r - (r\\cdot \\nabla )v \\Vert _{L^p ((0,t); L^q (\\Omega ) )}$ for $p,q\\in (1,\\infty )$ , see [36].", "We note that for $\\Omega = B_R$ the implicit constant in (REF ) does not depend on $R$ , which can be verified by a scaling argument.", "Given $R>0$ we denote by $\\mathbb {P}: L^2 (B_R)\\rightarrow L^2 (B_R)$ the standard Leray projection, i.e.", "$\\mathbb {P}v :=v - \\nabla \\phi ,$ where $\\phi \\in H^1_0 (B_R)$ is the unique weak solution of the Poisson equation $\\Delta \\phi = \\operatorname{div}u$ with the homogeneous boundary condition $\\left.", "\\phi \\right|_{\\partial B_R } =0$ .", "Then $\\Vert \\mathbb {P}v \\Vert _{L^2 (B_R)} \\le \\Vert v \\Vert _{L^2(B_R)}$ .", "It follows from the uniqueness of weak solutions of the Poisson equation that, if $v_\\lambda (x) :=v (\\lambda x)$ , then $(\\mathbb {P}v )_\\lambda = \\mathbb {P}v_\\lambda \\quad \\text{ and }\\quad \\Vert (\\mathbb {P}v )_\\lambda \\Vert _{L^2 (B_{R/\\lambda })} = \\lambda ^{-3/2}\\Vert \\mathbb {P}v \\Vert _{L^2 (B_{R })}.$ Given $R>0$ we also set $D(A) :=H^1_0 (B_R) \\cap H^2 (B_R ),\\quad \\text{ and } \\quad Au:=\\mathbb {P}\\Delta u\\qquad \\text{ for }u\\in D(A).$ Now recall the homogeneous Agmon's inequality $\\Vert u \\Vert _{L^\\infty (B_R)} \\le C \\Vert \\nabla u \\Vert _{L^2 (B_R)}^\\frac{1}{2} \\Vert A u \\Vert _{L^2 (B_R)}^\\frac{1}{2}$ for $u\\in D(A)$ , where $C>0$ is a constant that does not depend on $R$ .", "Indeed, in the case $R=1$ the inequality follows by $\\Vert u \\Vert _{L^\\infty } \\le \\tilde{C} \\Vert u \\Vert _{H^1}^\\frac{1}{2} \\Vert u \\Vert _{H^2}^\\frac{1}{2}$ (see, for example, Theorem 1.20 in [33]) by applying the Poincarè inequality to replace $\\Vert u \\Vert _{H^1}$ by $\\Vert \\nabla u \\Vert _{L^2}$ and by applying a Stokes estimate to replace the last norm by $\\Vert A u \\Vert $ (see, for example, Proposition 2.2 in Temam [39]).", "The case of $R\\ne 1$ follows by rescaling and observing (REF ).", "Another application of the Stokes estimate and an observation of the scaling gives that $\\Vert D^2 u \\Vert _{L^2 (B_R) } \\sim \\Vert Au \\Vert _{L^2 (B_R)},$ for all $u\\in H^1_0 (B_R)\\cap H^2 (B_R)$ , where the symbol $\\sim $ means “$\\lesssim $ and $\\gtrsim $ ”, and the implicit constants are independent of $R>0$ .", "In a similar way we obtain $\\Vert u \\Vert _{L^4 (B_R)} \\le C \\Vert u \\Vert _{L^2 (B_R)}^\\frac{1}{4} \\Vert \\nabla u \\Vert _{L^2 (B_R)}^\\frac{3}{4}$ for all $u\\in H^1_0 (B_R)$ , where $C>0$ does not depend on $R$ .", "We recall the weighted inequality for singular integrals (see [37] or [24]), $\\Vert |x|^a \\nabla u \\Vert _2 \\lesssim _a \\Vert |x|^a \\omega \\Vert _2$ for $a\\in [0,3/2)$ , as well as the Caffarelli-Kohn-Nirenberg inequality (see [6]), $\\Vert |x|^{a-1} u \\Vert _p \\lesssim _{a,p} \\Vert |x|^a \\nabla u \\Vert _p,$ where $a\\in [0,\\infty )$ .", "We will also use the inequality of Grujić and Kukavica [15] $\\Vert f \\Vert _1 \\le C \\Vert f \\Vert _2^{\\frac{1}{2}} \\Vert |x|^3 f \\Vert _2^{\\frac{1}{2}},$ as well as the 
inequality due to Chae [7], $\\Vert \\Lambda ^a (fg) \\Vert _p \\lesssim _p \\Vert f \\Vert _{p_1} \\Vert \\Lambda ^a g \\Vert _{p_2} + \\Vert \\Lambda ^a f \\Vert _{q_1} \\Vert g \\Vert _{q_2} ,$ where $a>0$ , $p\\in (1,\\infty )$ and $p_1,p_2,q_1,q_2\\in [1,\\infty ]$ are such that $1/p_1+1/p_2 =1/q_1+1/q_2=1/p$ .", "Moreover, for $p\\in (1,\\infty )$ , $\\Vert |x|^a \\pi \\Vert _p \\lesssim _{p,a} \\Vert |x|^a |u|^2 \\Vert _p$ for $a\\in [0, n/p^{\\prime })$ , and $\\Vert |x|^a \\nabla \\pi \\Vert _p \\lesssim _{p,a} \\Vert |x|^a | u | \\,| \\nabla u | \\Vert _p + \\Vert |x|^{a-1} |u|^2\\Vert _p$ for $a\\in [0, n/p^{\\prime }+1)$ , where the last term can be omitted if $a<n/p^{\\prime }$ , see [21] for a proof." ], [ "Spatial decay of strong solutions in $\\mathbb {R}^3$", "In this section we are concerned with the decay properties of strong solutions to (REF ) on the whole space $\\mathbb {R}^3$ , and we prove (REF ) and (REF ).", "To this end we note that the vorticity $\\omega :=\\mathrm {curl} \\, u$ satisfies $\\partial _t \\omega - \\Delta \\omega + (u \\cdot \\nabla ) \\omega - (\\omega \\cdot \\nabla ) u =0.$ Since $D^l u$ is bounded in time in any $L^p$ we see that, considering $u$ as given, the vorticity equation is local, which enables us to control any spatial decay of any spatial derivative of $\\omega $ .", "To be more precise, we let $G_l(t):=\\Vert |x|^a D^l \\omega \\Vert _2^2,$ and we observe that, for each multiindex $\\alpha $ with $|\\alpha | =l$ , $\\begin{split}\\frac{\\mathrm {d}}{\\mathrm {d}t} \\left( \\int |x|^{2a} |D^\\alpha \\omega |^2 \\right) &= 2 \\int |x|^{2a} D^\\alpha \\omega _j \\partial _t D^\\alpha \\omega _j \\\\&= 2\\int |x|^{2a} D^\\alpha \\omega _j \\Delta D^\\alpha \\omega _j - 2 \\int |x|^{2a} D^\\alpha \\omega _j D^\\alpha \\partial _k ( u_k \\omega _j -\\omega _k u_j) \\\\&\\le - \\int |x|^{2a} |\\nabla D^\\alpha \\omega |^2 -4a \\int |x|^{2a-2} x_k D^\\alpha \\omega _j \\partial _k D^\\alpha \\omega _j + \\Vert u \\Vert _{W^{l+1,\\infty }} G_l(t)^\\frac{1}{2} G_{l+1}(t)^\\frac{1}{2} \\\\&\\le - \\frac{1}{2} \\int |x|^{2a} |\\nabla D^\\alpha \\omega |^2 + C\\int |x|^{2a-2} | D^\\alpha \\omega |^2 + \\Vert u \\Vert _{W^{l+1,\\infty }} G_l(t)^\\frac{1}{2} G_{l+1}(t)^\\frac{1}{2} \\\\&\\le - \\frac{1}{2} \\int |x|^{2a} |\\nabla D^\\alpha \\omega |^2 + C\\Vert D^\\alpha \\omega \\Vert _2^{\\frac{2}{a}} G_l(t)^\\frac{a-1}{a} + \\Vert u \\Vert _{W^{l+1,\\infty }} G_l(t)^\\frac{1}{2} G_{l+1}(t)^\\frac{1}{2} ,\\end{split}$ where, in the third line, we integrated the first term by parts, and, in the fourth line, we noted that $a\\le 10$ and applied the Young inequality $cb\\le \\varepsilon c^2 + C_\\varepsilon b^2 $ to absorb a part of the second term by the first term.", "We also applied Hölder's inequality in the last line.", "Thus, summing in $|\\alpha | =l$ , for $l=1,\\ldots ,4$ , $G_l^{\\prime }(t) \\lesssim - G_{l+1}(t) + \\Vert \\omega \\Vert _{W^{l,2}}^{\\frac{2}{a}} G_l (t)^{\\frac{a-1}{a}} + \\Vert u \\Vert _{W^{l+1,\\infty }} G_l(t)^\\frac{1}{2} G_{l+1}(t)^\\frac{1}{2} \\lesssim M + (1 + M ) G_l(t),$ where we have also applied the Young inequality.", "By the Gronwall inequality we obtain that $G_l (t) \\lesssim \\mathrm {e}^{CMt} (G_l (0 )+ M) \\le D$ for $t\\in [0,T]$ , $l=0,\\ldots 4$ , where we have recalled (REF ) for the definition of $D_l$ and noted that $\\Vert u (t) \\Vert _{H^{l}},\\Vert u (t) \\Vert _{W^{l,\\infty }}\\le M_{l}$ for all $t\\in [0,T]$ (recall Theorem REF ).", "Moreover we used the assumption that $M\\ge 1$ .", "In 
order to translate this estimate into decay of the velocity field $u$ in $L^p$ for $p\\ge 2$ , we need the following lemma, which is concerned with homogeneous Fourier multipliers $\\mathcal {M}$ of the form $\\mathcal {M}= \\mathcal {R}_i \\mathcal {R}_j \\partial ^\\beta $ , where $\\mathcal {R}_j$ stands for the Riesz transform with respect to the $j$ -th variable (recall Section REF ) and $\\beta $ is a multiindex.", "In other words $ \\widehat{\\mathcal {M}f }(\\xi ) = m(\\xi ) \\widehat{f} (\\xi )\\equiv \\frac{\\xi _i \\xi _j }{|\\xi |^2} \\xi ^\\beta \\widehat{f}(\\xi )\\equiv \\mathcal {R}(\\xi ) \\xi ^\\beta \\widehat{f}(\\xi ),$ where $\\beta \\in {\\mathbb {N}}^l $ for some $l\\in {\\mathbb {N}}$ .", "Lemma 4 Suppose that $\\int g =0$ and that $\\Vert |x|^a \\partial ^\\alpha g \\Vert _2 \\lesssim _a M_{k}$ for every $a\\ge 0 $ and every multiindex $\\alpha $ with $|\\alpha |\\le k$ .", "Then $\\Vert |x|^a \\mathcal {M}g \\Vert _p \\lesssim _{a,p} M_{l+3}$ for every $p\\in [2, \\infty )$ , $a\\in [0,3/p^{\\prime }+l+1)$ , where $\\mathcal {M}$ is a multiplier of the form (REF ) with $|\\beta | =l$ .", "The proof is inspired by Lemma 2.8 in [24].", "Recall (from Section REF ) that $\\eta \\in C_0^\\infty (\\mathbb {R}^3 ; [0,1])$ is such that $\\eta =1$ on $B(1)$ and $\\eta =0$ outside $B(2)$ , and $\\tilde{\\eta } :=1-\\eta $ .", "By the Hausdorff-Young inequality (see [2]), $\\begin{split}\\Vert |x|^a \\mathcal {M}g \\Vert _p &\\le \\Vert \\Lambda ^a (m(\\xi ) \\widehat{g} (\\xi ) )\\Vert _{p^{\\prime }}\\le \\Vert \\Lambda ^a (m(\\xi ) \\eta (\\xi )\\widehat{g} (\\xi ) ) \\Vert _{p^{\\prime }}+\\Vert \\Lambda ^a (m(\\xi ) \\tilde{\\eta }(\\xi )\\widehat{g} (\\xi ) ) \\Vert _{p^{\\prime }}.\\end{split}$ Using (REF ) and recalling the form of $m$ (REF ) we can estimate the second term on the right-hand side by a constant multiple of $\\Vert \\xi ^\\beta \\widehat{g}(\\xi ) \\Vert _{p^{\\prime }} \\Vert \\Lambda ^a (\\mathcal {R}(\\xi ) \\tilde{\\eta } (\\xi )) \\Vert _\\infty + \\Vert \\mathcal {R}(\\xi ) \\tilde{\\eta } (\\xi ) \\Vert _\\infty \\Vert \\Lambda ^a ( \\xi ^\\beta \\widehat{g}(\\xi ) ) \\Vert _{p^{\\prime }} \\lesssim _m \\Vert \\xi ^\\beta \\widehat{g}(\\xi ) \\Vert _{p^{\\prime }}+\\Vert \\Lambda ^a ( \\xi ^\\beta \\widehat{g}(\\xi ) ) \\Vert _{p^{\\prime }},$ where we used (REF ) in the last step.", "In order to estimate the resulting terms, we first replace $\\Lambda ^a$ by a classical derivative $\\partial ^\\gamma $ , for $\\gamma \\in {\\mathbb {N}}$ , and observe that, since $p^{\\prime }\\in (1,2]$ , we can use Lebesgue interpolation and the inequality (REF ) of Grujić and Kukavica to get $\\Vert \\partial ^\\gamma _\\xi (\\xi ^\\beta \\widehat{g}(\\xi ) \\Vert _{p^{\\prime }} \\lesssim \\sum _{\\beta +\\gamma ^{\\prime }=\\beta ^{\\prime }+\\gamma } \\Vert \\xi ^{\\beta ^{\\prime }} \\partial _\\xi ^{\\gamma ^{\\prime }} \\widehat{g} (\\xi ) \\Vert _{p^{\\prime }} \\lesssim \\sum _{\\beta +\\gamma ^{\\prime }=\\beta ^{\\prime }+\\gamma } \\Vert \\xi ^{\\beta ^{\\prime }} \\partial _\\xi ^{\\gamma ^{\\prime }} \\widehat{g} (\\xi ) \\Vert _{2}^{\\frac{1}{2}+\\frac{1}{p}} \\Vert |\\xi |^{3+|\\beta ^{\\prime }|} \\partial _\\xi ^{\\gamma ^{\\prime }} \\widehat{g} (\\xi ) \\Vert _{2}^{\\frac{1}{2}-\\frac{1}{p}} \\lesssim _a M_{l+3},$ where in the last step we used the Plancherel identity and the assumption to note that $\\Vert |\\xi |^b \\partial _\\xi ^{\\gamma ^{\\prime }} \\widehat{g} (\\xi )\\Vert _2 \\le \\sup _{|\\kappa |=b} \\Vert \\partial _x^\\kappa 
(x^{\\gamma ^{\\prime }} g) \\Vert _2 \\lesssim _{\\gamma ^{\\prime }} M_{b}$ for every $b\\ge 0$ , and every multiindex $\\gamma ^{\\prime }$ .", "Thus also $\\Vert \\Lambda ^a ( \\xi ^\\beta \\widehat{g}(\\xi ) ) \\Vert _{p^{\\prime }}\\lesssim _{p,a} M_{l+3},$ by interpolation.", "It remains to estimate the first term on the right-hand side of (REF ), $\\Vert \\Lambda ^a (m(\\xi ) \\eta (\\xi )\\widehat{g} (\\xi )) \\Vert _{p^{\\prime }}.$ To this end we note that by assumption $\\widehat{g}(0)=0$ , and so the Fundamental Theorem of Calculus gives $\\widehat{g}(\\xi ) = \\xi \\cdot \\int _0^1 \\nabla _\\xi \\widehat{g} (s\\xi )(1-s) \\mathrm {d}s.$ This and (REF ) gives that $\\begin{split}\\Vert \\Lambda ^a (m(\\xi ) \\eta (\\xi )\\widehat{g} (\\xi ) \\Vert _{p^{\\prime }} &\\le \\Vert m(\\xi ) \\xi \\eta ( \\xi ) \\Vert _{p^{\\prime }} \\left\\Vert \\Lambda ^a \\int _0^1 (1-s) \\nabla \\widehat{g} (s\\xi ) \\mathrm {d}s \\right\\Vert _{\\infty } \\\\&\\hspace{28.45274pt}+ \\Vert \\Lambda ^a (m(\\xi )\\xi \\eta (\\xi ))\\Vert _{p^{\\prime }} \\left\\Vert \\int _0^1 (1-s) \\nabla \\widehat{g} (s\\xi ) \\mathrm {d}s \\right\\Vert _\\infty \\\\&\\lesssim _a \\Vert \\Lambda ^a \\nabla \\widehat{g} \\Vert _{\\infty } + \\Vert \\nabla \\widehat{g} \\Vert _\\infty \\\\&\\lesssim \\Vert |x|^{a+1} g \\Vert _1 + \\Vert |x| g \\Vert _1 \\lesssim _a M_{0},\\end{split}$ where we used the fact that $a\\in [0,3/p^{\\prime }+l+1)$ to deduce that $\\Vert \\Lambda ^a (m(\\xi ) \\xi _i \\eta (\\xi ) \\Vert _{p^{\\prime }}\\lesssim _a 1$ in the second inequality as well as the Grujić-Kukavica inequality (REF ) in the last step.", "Noting that $\\mathrm {curl}\\, \\omega =\\mathrm {curl}\\,\\mathrm {curl}\\,u= \\nabla (\\operatorname{div}u ) - \\Delta u= -\\Delta u$ , we obtain $\\partial _m u_i = \\partial _m (-\\Delta )^{-1} (\\mathrm {curl}\\, \\omega )_i = \\partial _m (-\\Delta )^{-1} \\sum _{j,k=1}^3 \\epsilon _{ijk} \\partial _j \\omega _k = \\sum _{j,k=1}^3 \\epsilon _{ijk} \\mathcal {R}_m \\mathcal {R}_j \\omega _k,$ where $\\epsilon _{ijk}$ denote the Levi-Civita tensor, that is $\\epsilon _{ijk}=1$ if $ijk$ is an even permutation of 123, $-1$ if odd, and 0 otherwise.", "Thus we can apply the above lemma with $g:=\\omega _k $ , $\\mathcal {M} :=\\mathcal {R}_m cR_j \\partial ^\\beta $ , where $\\beta \\in \\mathbb {N}^{l-1}$ , $j,k,m\\in \\lbrace 1,2,3\\rbrace $ , and obtain $\\Vert |x|^a D^l u \\Vert _p \\lesssim _{p,a} D$ for $t\\in [0,T]$ (which we omit in our notation), $p\\in [2,\\infty )$ , $l=1,2$ and $a\\in [0, 3/p^{\\prime } +l)$ .", "The case $l=0$ follows by the Caffarelli-Kohn-Nirenberg inequality (REF ).", "In particular we obtain the first claim in (REF ) as $\\Vert \\nabla u - \\nabla (u\\phi ) \\Vert _2 \\le \\Vert (1-\\phi ) \\nabla u \\Vert _2 + \\Vert \\nabla \\phi u \\Vert _2 \\lesssim R^{-a} \\left( \\Vert |x|^a \\nabla u \\Vert _2 + \\Vert |x|^a u \\Vert \\right) \\lesssim _a DR^{-a}$ for $a\\in [0,3/2)$ .", "Another consequence of (REF ) is that $\\Vert |x|^a (u\\cdot \\nabla ) u \\Vert _2 \\le \\Vert |x|^{a_1} u \\Vert _{4} \\Vert |x|^{a_1} \\nabla u \\Vert _4 \\lesssim _a D^2,$ for $a\\in [0,11/2)$ , where $a_1\\in [0,9/4)$ , $a_2\\in [0,13/4)$ are such that $a=a_1+a_2$ .", "Moreover, (REF ) gives $\\Vert |x|^a \\pi \\Vert _p \\lesssim _{p,a} \\Vert | u |^2 |x|^a \\Vert _p \\lesssim _{p,a} D^2$ for $p>1$ , $a\\in [0,3/p^{\\prime })$ , while (REF ) implies $\\Vert |x|^a \\nabla \\pi \\Vert _p \\lesssim _{p,a} \\Vert | u | \\,| \\nabla u |\\, |x|^a \\Vert _p + \\Vert |u|^2 
|x|^{a-1} \\Vert _p \\le \\Vert u \\Vert _\\infty \\left( \\Vert \\nabla u |x|^a \\Vert _p + \\Vert u |x|^{a-1} \\Vert _p \\right) \\lesssim _{p,a} D^2$ for $p>1$ , $a\\in [0,n/p^{\\prime }+1)$ .", "Thus in particular $\\Vert \\nabla ((1-\\phi ) \\pi ) \\Vert _2 \\lesssim \\Vert \\pi \\Vert _{L^2(B_{R-1}^c)} + \\Vert \\nabla \\pi \\Vert _{L^2 (B_{R-1}^c)} \\lesssim _a D^2 R^{-a}$ for $a\\in [0,3/2)$ , which gives the second claim in (REF ).", "The above estimates also imply (REF ), as $\\begin{split}\\Vert F_1 \\Vert _2 &\\le \\Vert |(u\\cdot \\nabla ) u |+| u |^2 +| p | +| u| + |\\nabla u | \\Vert _{L^2 (B_{R+1}\\setminus B_R)}\\\\& \\le R^{-a} \\Vert |x|^a ( |(u\\cdot \\nabla ) u |+| u |^2 +| \\pi | +| u| + |\\nabla u | )\\Vert _{2} \\lesssim _a D^2 R^{-a}\\end{split}$ for every $a<3/2$ .", "Moreover, we can also use the Navier-Stokes equations (REF ) to estimate the decay of $u_t$ as $\\Vert |x|^a \\partial _t u \\Vert _2 \\le \\Vert |x|^a \\Delta u \\Vert _2 + \\Vert |x|^a (u\\cdot \\nabla )u \\Vert _2 + \\Vert |x|^a \\nabla \\pi \\Vert _2 \\lesssim _a D^2$ for $a\\in [0,5/2)$ ." ], [ "The Bogovskiĭ-type correction", "In this section we consider the correction $u_c$ that makes $r :=u \\phi + u_c $ divergence free, recall (REF ), (REF ).", "We first recall the Bogovskiĭ lemma.", "Lemma 5 (Bogovskiĭ lemma) Let $\\Omega \\subset \\mathbb {R}^3$ be a star-shaped domain with respect to $B_1$ (namely that the line segment $[x,y]$ joining any $x\\in \\Omega $ with any $y\\in B_1$ is contained in $\\Omega $ ).", "Then given $f\\in C_0^\\infty (\\Omega )$ with $\\int f =0$ there exists $v\\in C_0^\\infty ( \\Omega ;\\mathbb {R}^3)$ such that $\\mathrm {div}\\, v =f$ and $\\Vert v \\Vert _{W^{k,p}(\\Omega )} \\lesssim _{k,p} \\Vert f \\Vert _{W^{k-1,p} (\\Omega )}.$ The Bogovskiĭ lemma is a well-known result (see [3], [4] or [13], for example).", "In fact, letting $h\\in C_0^\\infty (B_1)$ be such that $\\int h=1$ , the vector field $v(x) :=\\int _\\Omega f(y) \\left( \\frac{x-y}{|x-y|^3} \\int _{|x-y|}^\\infty h\\left( y + z \\frac{x-y}{|x-y|} \\right) z^2 \\mathrm {d}z \\right) \\mathrm {d}y$ satisfies the claim of the lemma.", "We note that our domain, $B_{R}\\setminus B_{R-1}$ , is not star-shaped, and so we need to decompose the domain as well as $f=\\nabla \\phi \\cdot u$ into a number of pieces that would allow us to construct the correction $u_c$ .", "Some decompositions of this form can be found in [13], where one of the main difficulties is to guarantee that each of the pieces still have compact support as well as vanishing integral.", "In our case, this issue simplifies, as $f=\\operatorname{div}(u \\phi )$ , and so this divergence structure allows us to apply a partition of unity inside “div”.", "To be more precise, we let $G_1,\\ldots , G_L\\subset B_R \\setminus B_{R-1}$ be open balls, and $\\psi _l \\in C_0^\\infty (G_l;[0,1])$ ($l=1,\\ldots , L$ ) be such that $\\Vert \\psi _l \\Vert _{W^{2,\\infty }} \\le C$ for some universal constant $C>0$ and $\\psi _1+\\ldots \\psi _L=1$ on $\\mathrm {supp}\\,\\phi $ .", "We can assume that each $G_l$ intersects at most 10 other $G_l$ 's.", "Note that $\\operatorname{div}(\\psi _l \\phi u)$ is compactly supported and has vanishing mean for each $l=1,\\ldots , L$ and so we can use Lemma REF (i.e.", "by (REF ) with $\\Omega :=G_l$ , $f:=\\operatorname{div}( \\psi _l \\phi u )$ ) to obtain $v_l \\in C_0^\\infty (G_l)$ such that $\\Vert v_l \\Vert _{W^{k,p} }\\lesssim \\Vert \\psi _l \\phi u \\Vert _{W^{k,p}}$ for all $k\\ge 0$ , $p\\in 
(1,\infty )$ , where the implicit constant can be chosen independent of $l,R$ .", "This gives that $u_c :=\sum _{l=1}^L v_l$ belongs to $C_0^\infty (B_R \setminus B_{R-1})$ and $\Vert u_c \Vert _{W^{k,p}}^p \lesssim \sum _{l=1}^L \Vert v_l \Vert _{W^{k,p} }^p \lesssim \sum _{l=1}^L \Vert \psi _l \phi u \Vert _{W^{k,p}}^p \lesssim _p \Vert u \Vert _{W^{k,p} (B_R \setminus B_{R-1} ) }^p$ for $p\in (1,\infty )$ .", "Thus $\begin{split}\Vert u_c \Vert _{W^{1,p}}&\lesssim \Vert u \Vert _{W^{1,p} (B_R \setminus B_{R-1})} \lesssim _{a,p} DR^{-a} \qquad \text{ for } a\in [0,3/p^{\prime }), p\in (1,\infty )\\\Vert \Delta u_c \Vert _2 &\lesssim \Vert u \Vert _{H^2 (B_R \setminus B_{R-1})} \lesssim _a DR^{-a} \qquad \text{ for } a\in [0,3/2),\\\text{ and }\Vert \partial _t u_c \Vert _{2}&\lesssim \Vert \partial _t u \Vert _{L^2 (B_R \setminus B_{R-1})} \lesssim _a D^2 R^{-a}\qquad \text{ for } a\in [0,5/2),\end{split}$ where we used (REF ), (REF ) and the last inequality follows from the form of (REF ), which allows differentiation inside the integral.", "This and the embedding $H^2 \subset L^\infty $ gives (REF ).", "Moreover $ \begin{split}\Vert F_2 \Vert _2 &= \Vert (u_c \cdot \nabla )(u\phi ) + (u\phi \cdot \nabla )u_c + (u_c \cdot \nabla ) u_c - \Delta u_c +\partial _t u_c \Vert _2 \\&\lesssim \Vert u_c \Vert _{W^{1,4}} \Vert u \Vert _{W^{1,4} (B_{R-1}^c)} +\Vert u_c \Vert _{W^{1,4}}^2 + \Vert \Delta u_c \Vert _2 + \Vert \partial _t u_c \Vert _2 \\&\lesssim _a D^2 R^{-a}\end{split}$ for $a\in [0,3/2)$ , where we also used (REF ).", "This gives (REF ), as required." ], [ "The Navier-Stokes equations on $B_R$ with small forcing", "In this section we discuss well-posedness of (REF ) with small forcing $F$ , and we prove (REF ).", "Lemma 6 (Strong solution to (REF ) for small forcing) Let $T>0$ and $r : [0,T]\rightarrow D(A)$ , and suppose that there exists $N\ge 1$ such that $\Vert \nabla r (t) \Vert _{L^2(B_R)}+\Vert \nabla r (t) \Vert _{L^4(B_R)}+\Vert D^2 r (t) \Vert _{L^2(B_R)} \le N$ for all $t\in [0,T]$ .", "Then there exists a unique strong solution $v$ to (REF ) above if $\Vert F \Vert _{L^\infty ((0,T);L^2 (B_R))} \le \varepsilon $ for some $ \varepsilon \in \left( 0 , C N^{3} \mathrm {e}^{-2N^4C^2T}\right) , $ where $C>1$ is a universal constant.", "Moreover, $\Vert \nabla v (t) \Vert _{L^2 (B_R)} \le \frac{\varepsilon }{N^2C } \left( \mathrm {e}^{4N^4 C^2t }-1 \right)^\frac{1}{2} , \qquad \Vert \nabla \overline{\pi } \Vert _{L^p ((0,t); L^2 (B_R))} \lesssim _p t^{\frac{1}{p}} N \varepsilon \mathrm {e}^{2N^4C^2t }$ for all $t\in [0,T]$ and $p\in (1,\infty )$ .", "Recall (REF ) for the definition of the Stokes operator $A$ , and note that the assumption on $r$ implies that $\Vert r (t) \Vert _{L^\infty (B_R)} \le C \Vert \nabla r \Vert _{L^2 (B_R)}^{\frac{1}{2}} \Vert D^2 r \Vert _{L^2 (B_R)}^{\frac{1}{2}} ,$ due to (REF ) and (REF ).", "We note that taking $\varepsilon :=C(a) D^2 R^{-a}$ , and $R$ as in (REF ), the lemma implies (REF ), as required.", "The lemma can be proved using a standard Galerkin procedure, and we provide a sketch of the proof (inspired by [33]) to keep track of the quantitative estimates.", "We first note that uniqueness follows in the same way as uniqueness of local-in-time strong solutions to the homogeneous Navier-Stokes equations (see Theorem 6.10 in [33], for example).", "For existence, let
$\\mathcal {N} :=\\mathrm {span} \\,\\lbrace a_1, \\ldots , a_n \\rbrace $ denote the linear space spanned by the first $n$ eigenvalues of the Stokes operator $A$ (recall (REF )) on $B_R$ , that is for all $k$ $a_k\\in D(A)$ and $A a_k = \\lambda _k a_k$ for some $\\lambda _k >0$ such that $0 < \\lambda _k \\le \\lambda _{k+1}$ .", "We first show that for each $n$ there exist $c_1, c_2, \\ldots , c_n \\in C^1 ([0,T])$ such that $w :=\\sum _{k=1}^n c_k (t) a_k \\in \\mathcal {N}$ is a weak solution of the Galerkin approximation of $\\begin{split}\\partial _t w + Aw + P_n \\left( (w\\cdot \\nabla ) w \\right) &=P_n F -P_n \\left( (w\\cdot \\nabla ) v + (v\\cdot \\nabla ) w \\right) \\\\\\operatorname{div}w &=0,\\\\w(0) &= 0,\\end{split}$ where $P_n\\colon L^2 \\rightarrow \\mathcal {N} \\subset L^2$ is the orthogonal projection onto $\\mathcal {N}$ , i.e.", "that $w\\in L^\\infty ((0,T);L^2)\\cap L^2 ((0,T);H^1)$ satisfies (REF ) for $\\phi \\in \\mathcal {N}$ .", "Indeed taking the inner product of the above equation with $a_k$ ($k=1,\\ldots ,n$ ) we have $c_k^{\\prime } + \\sum _{j=1}^n c_j \\int A a_ja_k + \\sum _{i,j =1}^n c_i c_j \\int (a_i \\cdot \\nabla )a_j\\cdot a_k = - \\sum _{j=1}^n c_j \\int \\left( (a_j \\cdot \\nabla )r\\cdot a_k + (r\\cdot \\nabla ) a_j\\cdot a_k \\right) + \\int F \\cdot a_k,$ and so using the facts that $A a_j = \\lambda _j a_j$ , that $a_j$ 's are orthonormal in $L^2$ (see [33]) and setting $B_{ij}^{(k)} :=\\int (a_i \\cdot \\nabla )a_j \\cdot a_k,\\qquad D_j^{(k)}:=\\int \\left( (a_j \\cdot \\nabla )r\\cdot a_k + (r\\cdot \\nabla ) a_j \\cdot a_k \\right), \\qquad C^{(k)} :=\\int F\\cdot a_k,$ we obtain a system of $n$ differential equations for $c_1, \\ldots , c_n$ , $c_k^{\\prime }=- \\sum _{i,j=1}^n c_i c_j B_{ij}^{(k)} - \\sum _{j=1}^n (c_j D_j^{(k)} + \\lambda _k ) + C^{(k)},$ with initial conditions $c_k(0) =0$ for $k=1,\\ldots , n$ .", "Since the right-hand side is locally Lipschitz, we obtain local in time well-posedness of the system (see Hartman [16]).", "That the $c_k$ 's exist for all times can be observed by testing (REF ) by $w\\in \\mathcal {N}$ , which gives that $\\frac{1}{2} \\frac{\\mathrm {d}}{\\mathrm {d}t} \\Vert w \\Vert ^2 +\\Vert \\nabla w \\Vert ^2 = -\\int ( (w\\cdot \\nabla )r )\\cdot w -\\int F w\\le \\frac{1}{2} \\Vert \\nabla w \\Vert ^2 + c \\Vert w \\Vert ^2 (1 + \\Vert r \\Vert ^2_\\infty ) + \\frac{1}{2} \\Vert F \\Vert ^2,$ where we used the cancellations $\\int ((w\\cdot \\nabla ) w)\\cdot w = \\int ((r \\cdot \\nabla )w )\\cdot w =0$ , as well as integrated the term $\\int ( (w\\cdot \\nabla )r )\\cdot w$ by parts and applied Young's inequality.", "For brevity, we also used the notation $\\Vert \\cdot \\Vert \\equiv \\Vert \\cdot \\Vert _{L^2 (B_R)}$ and $\\Vert \\cdot \\Vert _p\\equiv \\Vert \\cdot \\Vert _{L^p (B_R)}$ , which we continue for the rest of the proof.", "The Gronwall inequality gives that $ \\sum _{k=1}^n c_k^2 = \\Vert w \\Vert ^2 \\le \\int _0^t \\Vert F (s) \\Vert ^2 \\mathrm {e}^{cN^2 (t-s)} \\mathrm {d}s \\le \\varepsilon ^2 \\mathrm {e}^{cN^2 t} $ for $t\\ge 0$ , which shows global existence of $c_k$ 's, and also implies that $\\int _0^T \\Vert \\nabla w \\Vert ^2 \\lesssim \\int _0^T \\Vert F (t) \\Vert ^2 \\mathrm {d}t + N^2 \\int _0^T \\Vert F (t) \\Vert ^2 \\mathrm {e}^{cN^2t} \\mathrm {d}t <\\infty $ .", "Moreover $w$ is bounded in $L^\\infty ((0,T);V)$ and in $L^2 ((0,T); H^2)$ , uniformly in $n$ .", "Indeed, multiplying the equation by $Aw$ we obtain $\\begin{split}\\frac{1}{2} 
\\frac{\\mathrm {d}}{\\mathrm {d}t } \\Vert \\nabla w \\Vert ^2 + \\Vert Aw \\Vert ^2 &= \\int \\left( ((w\\cdot \\nabla ) w )Aw - \\mathbb {P}F Aw - ((w\\cdot \\nabla )r) Aw - ((r\\cdot \\nabla ) w Aw \\right) \\\\& \\le \\Vert w \\Vert _\\infty \\Vert \\nabla w \\Vert \\Vert Aw \\Vert + \\Vert F \\Vert \\Vert Aw\\Vert + \\Vert w \\Vert _\\infty \\Vert \\nabla r \\Vert \\Vert Aw \\Vert + \\Vert r \\Vert _\\infty \\Vert \\nabla w \\Vert \\Vert Aw \\Vert \\\\&\\lesssim \\Vert \\nabla w \\Vert ^{\\frac{3}{2}} \\Vert Aw \\Vert ^{\\frac{3}{2}} + \\Vert F \\Vert \\Vert Aw \\Vert + \\Vert \\nabla r \\Vert \\Vert \\nabla w \\Vert ^{\\frac{1}{2}} \\Vert Aw \\Vert ^{\\frac{3}{2}} + \\Vert r \\Vert _\\infty \\Vert \\nabla w \\Vert \\Vert Aw \\Vert ,\\end{split}$ where we used (REF ) in the third inequality.", "Thus using Young's inequality we can absorb $\\Vert Aw \\Vert ^2$ on the left-hand side to obtain $\\begin{split}\\frac{\\mathrm {d}}{\\mathrm {d}t } \\Vert \\nabla w \\Vert ^2 + \\Vert A w \\Vert ^2 &\\le C^2 \\Vert \\nabla w \\Vert ^6 + C^2 \\Vert \\nabla w \\Vert ^2 ( \\Vert \\nabla r \\Vert ^4 + \\Vert r \\Vert _\\infty ^2 ) + \\Vert F \\Vert ^2\\\\&\\le C^2 \\Vert \\nabla w \\Vert ^6 + C^2 N^4 \\Vert \\nabla w \\Vert ^2 + \\varepsilon ^2\\end{split}$ for some $C>\\max \\lbrace 1,c\\rbrace $ , where $c$ is from (REF ).", "Thus, since $ g(t) :=\\frac{\\varepsilon ^2 }{N^4C^2 } \\left( \\mathrm {e}^{4N^4C^2t }-1 \\right)$ satisfies $g^{\\prime }(t) \\ge C^2 g(t)^3 + C^2N^4 \\,g(t) + \\varepsilon ^2$ for $t\\in [0,T]$ , we have that $\\Vert \\nabla w \\Vert ^2 \\le g(t) \\qquad \\text{ for }t\\in [0,T],$ which also implies that $\\Vert D^2 w \\Vert _{L^2 ((0,T)\\times B_R )}^2 \\lesssim \\Vert A w \\Vert _{L^2 ((0,T)\\times B_R )}^2 \\le g(T) - g(0) = {\\varepsilon }{(NC)^{-1} } \\left( \\mathrm {e}^{4NCT }-1 \\right) $ , where we also used (REF ).", "Finally (REF ) shows that $\\Vert \\partial _t w \\Vert _{L^{\\frac{4}{3}}((0,T);V^*)} $ is bounded uniformly in $n$ (recall that $w\\equiv w_n $ is the Galerkin approximation for given $n$ ), which can be shown by a standard argument, using Hölder's inequality, Lebesgue interpolation and Sobolev embedding $H^1_0\\subset L^6$ , see for example [33].", "This estimate on the time derivative lets us use the Aubin-Lions lemma (see [1], [27] or [39]) to extract a subsequence $\\lbrace w_{n_k} \\rbrace $ such that $\\begin{split}w_{n_k} \\rightarrow & v \\qquad \\text{ in } L^3 ((0,T)\\times B_R),\\\\w_{n_k} \\stackrel{*}{\\rightharpoonup } &v \\qquad \\text{ in } L^\\infty ((0,T); V),\\\\D^2 w_{n_k } \\rightharpoonup &D^2 v \\qquad \\text{ in } L^2 ((0,T)\\times B_R)\\end{split}$ for some $v\\in L^\\infty ((0,T);V)\\cap L^2 ((0,T);H^2)$ .", "This mode of convergence enables us to take the limit in the weak formulation of the equation for $w_{n_k}$ , and so shows that $v$ is the required solution.", "The estimate for $\\nabla v$ in (REF ) follows from (REF ).", "As for the estimate for $\\nabla \\overline{\\pi }$ in (REF ) we have $\\begin{split}\\Vert (v\\cdot \\nabla ) r + (r\\cdot \\nabla ) v \\Vert _2 &\\le \\Vert v \\Vert _4 \\Vert \\nabla r \\Vert _4 + \\Vert r\\Vert _{\\infty } \\Vert \\nabla v \\Vert _2 \\lesssim N\\left( \\Vert v \\Vert _2^{\\frac{1}{4}} \\Vert \\nabla v \\Vert _2^{\\frac{3}{4}} + \\Vert \\nabla v \\Vert _2 \\right) \\lesssim N \\varepsilon \\mathrm {e}^{2N^4C^2t }\\end{split}$ at each time, where we used (REF ) and (REF ) in the second inequality, as well as (REF ) and (REF ) in the last.", "Thus (REF ) gives 
$\\begin{split}\\Vert \\nabla \\overline{\\pi } \\Vert _{L^p ((0,t); L^2 (B_R)} &\\lesssim _p \\Vert F- (v\\cdot \\nabla )r - (r\\cdot \\nabla )v \\Vert _{L^p ((0,t); L^2 (B_R))} \\lesssim t^{\\frac{1}{p}} N \\varepsilon \\mathrm {e}^{2N^4C^2t }\\end{split}$ for every $p\\in (1,\\infty )$ , as required." ], [ "Acknowledgement", "The author has been partially supported by the Simons Foundation." ] ]
2107.01803
[ [ "Mobility decisions, economic dynamics and epidemic" ], [ "Abstract In this paper we propose a theoretical model including a susceptible-infected-recovered-dead (SIRD) model of epidemic in a dynamic macroeconomic general equilibrium framework with agents' mobility.", "The latter affect both their income (and consumption) and their probability of infecting and of being infected.", "Strategic complementarities among individual mobility choices drive the evolution of aggregate economic activity, while infection externalities caused by individual mobility affect disease diffusion.", "Rational expectations of forward looking agents on the dynamics of aggregate mobility and epidemic determine individual mobility decisions.", "The model allows to evaluate alternative scenarios of mobility restrictions, especially policies dependent on the state of epidemic.", "We prove the existence of an equilibrium and provide a recursive construction method for finding equilibrium(a), which also guides our numerical investigations.", "We calibrate the model by using Italian experience on COVID-19 epidemic in the period February 2020 - May 2021.", "We discuss how our economic SIRD (ESIRD) model produces a substantially different dynamics of economy and epidemic with respect to a SIRD model with constant agents' mobility.", "Finally, by numerical explorations we illustrate how the model can be used to design an efficient policy of state-of-epidemic-dependent mobility restrictions, which mitigates the epidemic peaks stressing health system, and allows for trading-off the economic losses due to reduced mobility with the lower death rate due to the lower spread of epidemic." ], [ "Introduction", "We propose an integrated assessment model, denoted by ESIRD, encompassing a susceptible-infected-recovered-dead (SIRD) model of epidemic and a dynamic macroeconomic general equilibrium model of economy, where mobility choices of forward looking agents affect both income (and consumption) and the spread of epidemic.", "A calibrated version of the model illustrates the possibilities to use the model to design an efficient policy of state-of-epidemic-dependent mobility restrictions.", "Pandemic crisis has shown that sudden drops in individual mobility have a substantial negative consequences on aggregate income and consumption [43].", "The decrease of individual mobility along COVID-19 crisis has been the joint outcome of individual decisions, caused by the diffusion of infection, and of containment measures imposed by national authorities (lockdown, curfew, etc.).", "In turn, a reduction in individual mobility brings down individual income [30] as well as epidemic dynamics, being higher individual mobility associated to a higher probability of infecting and being infected [42].", "Therefore, entangled externalities and “general equilibrium” effects are at work; more precisely, individual mobility decisions display i) strategic complementarities with mobility choice of other agents, because the marginal impact on individual income of individual mobility is increasing in the aggregate mobility [10], [15]; and, ii) negative externalities on contagion dynamics, because of agents in their mobility choices internalize the risk of being infected but not the effect of infecting other people [5].Another possible source of externality, the healthcare congestion, is analysed by [32].", "So far, within the recent dynamic micro-funded epidemiological-economic literature (see, e.g., [18], [48], [32]) no contribution has characterized 
in a dynamic macroeconomic general equilibrium model the optimal mobility choices in the presence of strategic complementarities in production and negative externalities on contagion dynamics, the first step in a more adequate evaluation of policies reducing individual mobility for countering the COVID-19 epidemic.", "In the model we focus on short-term mobility.", "It should be understood as the daily/weekly activities of mobility observed in the labour market, i.e.", "commuting, movements by car, truck, bus, train, etc.", ", generally observed in the economy.", "This mobility also includes movements for the activity of consumption, both for purchasing goods and services and for leisure.", "Epidemic dynamics is driven by a generalized version of the SIRD model where the average number of contacts per person per time is endogenous, as well as the transition rate, i.e.", "the flow of newly infected, and depends on the mobility choices of agents.", "Agents maximize an inter-temporal discrete time utility function taking into account consumption and mobility costs.", "Their choice of mobility for working (respectively for consuming) depends on their state (susceptible, infectious or recovered), the aggregate level of economic activity, the current and future policies on mobility restrictions, and on their future utility, which, in turn, depends on the probabilities of being infected in the future and on the future dynamics of the economy.", "In each period aggregate economic activity (consumption) depends on the state of the epidemic and on the individual mobility choices.", "We set the agent's problem as a game with a continuum of players in a finite state space (the four states of agents).", "The notion of equilibrium is basically borrowed – even if re-elaborated – from [33] (see their Definition REF ) and shown to be equivalent to the notion of Nash equilibrium (Proposition REF ).", "We provide an existence result (Theorem REF ), and then propose an algorithm to identify an equilibrium (see Section REF and Theorem REF ).", "The model can be seen as a discrete time, finite state, infinite horizon Mean Field Game (MFG).", "The latter studies the behavior of Nash equilibria in differential games as the number of agents becomes large.", "There is extensive recent research activity on MFGs starting from the pioneering works of Lasry and Lions [36], [37], [38].", "In the large population limit, one expects to obtain a game with a continuum of agents where, like in our case, the effects on the decision of any agent from the actions of the other agents are experienced through the statistical distribution of states.", "Since perturbations from the strategy of an agent do not influence the statistical states' distribution, the latter acts as a parameter in each agent's control problem.", "The passage to the limit is still a difficult theoretical topic and it is out of the scope of this paper (see, e.g., [12]).", "We calibrate the model by using the Italian experience of the COVID-19 epidemic in the period February 2020 - May 2021.", "Numerical explorations of the model under different configurations of state-of-epidemic-dependent mobility restrictions highlight the presence of a trade-off between economic losses and fatalities due to the pandemic, i.e.", "of a pandemic possibilities frontier as in [34] and [1].", "However, we argue that policy evaluation should take into account two additional directions, the first relating to the share of susceptibles at the end of the evaluation period, which can favor a fresh outbreak of epidemic in the
future without an efficient vaccine; and the social feasibility of prolonged mobility restrictions [51].", "Our paper makes four main contributions to the literature.", "The first is to the epidemiological-macroeconomic literature, which has recently received a boost from the COVID-19 outbreak.", "Its main goal is to produce integrated assessment models, where the economic dynamics complements epidemiological models.", "In particular, a strand of literature focuses on the optimal policy problem from a planner's perspective without modeling individual behavior (see, e.g., [2], [44], [41], [3]), while another one considers forward-looking agents and market determination of goods and factor prices, as in [18], [48], [32] and [34].", "With respect to these contributions we explicitly consider agents' (short-term) mobility.", "There are several good reasons for doing this: (i) in the epidemiological literature mobility is (not surprisingly) identified as the key variable in containing the epidemic [42]; (ii) mobility is an easily measurable variable and many datasets are actually freely available (e.g.", "GSM mobility data, Google Mobility Trends, and Apple Mobility Lab); therefore, it is ideal for bringing the model to the data; and, (iii) since mobility was/is the primary focus of several restrictive policies imposed by governments, in the proposed framework it is particularly natural to evaluate past and future policies on mobility restrictions.", "Focusing on mobility implies, as already argued, taking into account non-market interactions among individual choices; this leads to another element of novelty in our epidemiological-macroeconomic model: the presence of strategic complementarities in individual decisions.", "The latter introduces substantial difficulties in the mathematical study of the model, which we start addressing by proving the existence of a Nash equilibrium.", "The second contribution is on the methodological side.", "We have discussed above that our model belongs to the class of discounted infinite horizon, discrete time, finite state space MFG.", "So far, to the best of our knowledge, our model does not fall into the classes already studied in the literature.", "In particular, [27], [17], [29], and [7], [52] deal with MFG in discrete time and finite state space.", "However, [29] and [7] consider only finite horizon problems; [27] (and similarly [52]) consider infinite horizon problems of ergodic type or with entropy penalization, where the dependence of the agents' utility on the choices of the other agents is more regular than in our model, and this allows them to prove existence and uniqueness of equilibrium.", "Finally, [17] consider an infinite horizon MFG, but where agents' cost does not depend on the strategies of the other agents, which instead happens in our model because of the presence of strategic complementarities.", "Hence, our result of existence of an equilibrium and the verification type theorem are to be considered a novelty.", "We also contribute to the theoretical economic literature focusing on the endogenous determination of the infection rate and the reproduction rate of an epidemic [4].", "The infection rate depends on a large number of aggregate factors (climate, geography, health system, etc.", "), but it also crucially depends on individual choices.", "Several approaches have been proposed to endogenize the infection rate, among which a purely epidemiological approach as in [23] and a behavioral approach (see [20] and [6]).", "[22], [48], and [18] are instead more in line with
our approach, developing settings where forward-looking individuals choose their actions facing an epidemic-economic trade-off.", "However, no paper directly models the mobility choices of individuals taking into account strategic complementarities and negative externalities in an infinite horizon general equilibrium setting to explain the dynamics of the infection rate along the pandemic.", "The advantages of our approach are evident in the interpretation of results, allowing for directly correlating mobility and the infection rate, and in the possibility of bringing the model to the data.", "The final contribution is to the literature looking at the effect of the diffusion of epidemics on mobility (see, e.g., [40] and [42] for an epidemiological perspective).", "In particular, [21] and [47] look at the effects of governmental suggestions and Fox News messages respectively; while [28] and try to disentangle the effects of individual decisions (auto-segregation) and governmental restrictive policies on mobility during the COVID-19 crisis.", "Our model provides a perfect candidate theoretical framework to analyse the observed mobility behavior since it characterizes, in a dynamic general equilibrium setting, individual decisions both in a policy-free regime (auto-segregation) and, alternatively, in a regime where restrictive measures on mobility are introduced.", "More importantly, our contribution provides a theoretical framework to evaluate restrictive policies going beyond the simple trade-off between economic losses and fatalities as proposed in [34], [1], and [26].", "It makes it possible, for instance, to take into account in the evaluation other key dimensions regarding the social feasibility of policies, the fragility of post-lockdown situations with a high risk of fresh outbreaks, and the sustainability of health systems (see, in particular, Sections and ).", "It also allows us to answer the provocative question posed, among others, by [14] on the viability of a containment policy based only on the self-confinement of individuals, free of any governmental restrictions on mobility.", "At least for the Italian experience in 2020, our model suggests that a policy based only on self-confinement would have resulted in a peak prevalence of nearly six million infected people (see Section ), which corresponds to a need of about four hundred thousand hospital beds.", "This would have been unsustainable for a country having, in February 2020, about 190,000 hospital beds, most of them already occupied by patients with pathologies independent of COVID-19.", "The paper is organized as follows: Section presents the model, Section focuses on the agent's optimization problem while Section provides a recursive construction method for equilibrium(a).", "Section calibrates the model to Italian data; Section uses the model to investigate the effects of policies aiming at mitigating the epidemic and their effects on economic activity; Section concludes."
], [ "The epidemiologic-economic dynamic model", "We consider an infinite horizon discrete time world with a continuum set of agents, whose individual actions do not modify the evolution of the global epidemic state.", "This is, roughly speaking, the typical setting that is used in the Mean Field Games theory, as discussed in the Introduction.", "As in the classical SIRD framework ([13]), at each time period, the disease status $k$ of an agent can be: susceptible ($k=S$ ); infected ($k=I$ ); recovered ($k=R$ ); and died $(k=D)$ .", "We then denote the set of possible disease status by $\\mathbb {K}$ , i.e.", "$\\mathbb {K}:=\\left\\lbrace S,I,R,D\\right\\rbrace .$" ], [ "Mobility strategies and epidemic dynamics", "We want to model how the classes of disease, i.e., the shares of population with different disease status, evolve over time following the standard SIRD model without vital dynamics (newborns are not considered) according to the mobility choices of the classes.", "We assume that, at any time $t\\in \\mathbb {N}$ , the agents of each epidemic class behave in the same way.", "Hence, we introduce the set $\\mathcal {A}=\\Big \\lbrace {\\Theta }=({\\Theta }_p,{\\Theta }_c): \\mathbb {N}\\times \\mathbb {K} \\rightarrow [0,1]^2 \\ \\mbox{s.t.", "}\\ \\ {\\Theta }(\\cdot ,D, \\cdot )=(0,0)\\Big \\rbrace ,$ where $\\Theta _{p}(t,k)$ and $\\Theta _{c}(t,k)$ represent, respectively, the mobility rate (whose maximal value is w.l.o.g.", "normalized to 1) for production and for consumption chosen by the agents belonging to the class of disease $k\\in \\mathbb {K}$ at time $t\\in \\mathbb {N}$ .", "Consider the set $\\mathcal {P}(\\mathbb {K})$ of probability distributions on $\\mathbb {K}$ and let $\\mu (t)\\in \\mathcal {P}(\\mathbb {K})$ represent the epidemic state at time $t\\in \\mathbb {N}$ , i.e.", "$\\mu (t)(\\lbrace k\\rbrace )$ represents the shares of population in disease status $k\\in \\mathbb {K}$ .", "The evolution of the epidemic depends on the mobility rates chosen by the agents, i.e.", "on $\\Theta $ , as follows: $\\mu (t+1)=[Q^{\\Theta (t,\\cdot )}(\\mu (t))]^{T}\\mu (t),$ where $[\\cdot ]^{T}$ denotes transposition, $Q^{\\Theta (t,\\cdot )}(\\nu ):=\\left(\\begin{array}{cccc}1- \\tau ^{\\Theta (t,\\cdot )}(\\nu ) & \\tau ^{\\Theta (t,\\cdot )}(\\nu ) & 0 & 0 \\\\0 & 1- \\pi _R -\\pi _D & \\pi _R & \\pi _D \\\\0 & 0 & 1 & 0 \\\\0 & 0 & 0 & 1 \\\\\\end{array}\\right), \\ \\ \\ \\nu \\in \\mathcal {P}(\\mathbb {K}),$ where $\\pi _R, \\pi _{D}\\in (0,1)$ are two exogenous constants such that $\\pi _{R}+\\pi _{D}<1$ and $ \\tau ^{\\Theta (t,\\cdot )}(\\nu ) :=\\beta _{P}\\nu (I)\\Theta _{p}(t,I){\\Theta }_p(t,S) +\\beta _{C}\\nu (I)\\Theta _{c}(t,I){\\Theta }_c(t,S)$ where $\\beta _P, \\beta _C>0$ are given constants such that $\\beta _P+\\beta _C<1$ .", "More explicitly, using the notation $\\mu (t,k):=\\mu (t)(\\lbrace k\\rbrace ),$ ${\\left\\lbrace \\begin{array}{ll}\\mu (t+1,S)=\\mu (t,S) \\big (1-\\beta _{P}\\mu (t,I)\\Theta _{p}(t,I){\\Theta }_p(t,S) +\\beta _{C}\\mu (t,I)\\Theta _{c}(t,I){\\Theta }_c(t,S)\\big ),\\\\\\\\\\mu (t+1,I)=\\mu (t,S)\\big (\\beta _{P}\\mu (t,I)\\Theta _{p}(t,I){\\Theta }_p(t,S) +\\beta _{C}\\mu (t,I)\\Theta _{c}(t,I){\\Theta }_c(t,S)\\big ) \\\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ -\\mu (t,I)(1-\\pi _R-\\pi _D),\\\\\\\\\\mu (t+1,R)=\\mu (t,I)\\pi _R+\\mu (t,R),\\\\\\\\\\mu (t+1,D)=\\mu (t,I)\\pi _D+\\mu (t,D).\\end{array}\\right.", "}$ The following remark explain the interpretation of the above equations.", "Remark 2.1 
Denoting by $s(t), i(t)$ , respectively, the percentage of population susceptible and infected at time $t$ , we have $s(t)=\mu (t,S)$ , $i(t)=\mu (t,I)$ .", "Hence $\Delta s(t)&:= s(t+1)-s(t)= \mu (t+1,S)-\mu (t,S)= - \beta (t) s(t) i(t)$ where $\beta (t) :=\beta _{P}\Theta _{p}(t,I){\Theta }_p(t,S) +\beta _{C}\Theta _{c}(t,I){\Theta }_c(t,S)$ plays the role of the transmission rate of the disease in the classical SIRD model.", "In our model it depends on the mobility choices of the infected and susceptible agents $\Theta _{p}(t,I)$ , ${\Theta }_p(t,S)$ , $\Theta _{c}(t,I)$ and ${\Theta }_c(t,S)$ .", "Remark REF shows how our approach micro-founds the value of $\beta (t)$ (the average number of contacts of a person per unit of time sufficient for transmission), the key parameter of the SIRD model.", "It allows, on the one hand, to distinguish the probability of contracting the virus in the workplace from that of contracting it while consuming and, on the other hand, it makes explicit the dependence of the probability of infection on the movements made to participate in the two activities.", "It should be noted that the dependence of $\beta $ on movements is “quadratic”.", "This fact is quite natural since the probability of contracting the virus increases both because of the increased mobility of the agent herself and because of the increased mobility of the other agents.", "Since in the model agents' behaviors are only determined by individual utilities, this will generate a first channel of strategic complementarity (with aggregate negative effects) in the mobility choices of agents.", "Agents will not completely internalize the aggregate contagion effects of their mobility choices and will have the tendency to move (and therefore spread the virus) “too much” with respect to the optimal level." ], [ "Income and mobility", "We suppose that the income of an agent depends on three elements: (i) her/his health conditions; (ii) her/his mobility choices to work; and, (iii) the general state of the economy.", "More precisely, at a generic period $t\in \mathbb {N}$ , the income $Y(t,k)$ of the representative agent in the state $k\in \lbrace S,I,R\rbrace $ , when the epidemic is in the state $\mu (t)$ and she undertakes the production mobility choice $\vartheta _{p}\in [0,1]$ , is given by $Y(t,k,\vartheta _{p}) =Z(t)\left(A_{0}(k) + A_{1}(k) \vartheta _p\right),$ where $A_{0}(k)= \left\lbrace \begin{array}{ll}a_0^{SR} & \mbox{if} \ k\in \lbrace S, R\rbrace \\a_0^{I}& \mbox{if} \ k=I\end{array}\right.\ \ \ \ \ A_{1}(k)=\left\lbrace \begin{array}{ll}a_1^{SR} &\mbox{if} \ k\in \lbrace S, R\rbrace \\a_1^{I} & \mbox{if} \ k=I\end{array}\right.$ and $Z(t) := \phi \bigg ( \mu (t,S)\Theta _{p}(t,S), \,\,\mu (t,I)\Theta _{p}(t,I), \,\, \mu (t,R)\Theta _{p}(t,R)\bigg ),$ where $\phi :[0,1]^{3}\rightarrow (0,\infty )$ is non-decreasing in all the components and such that $\phi (0,0,0)=\varepsilon >0$ .", "We assume that ${\text{$0< a_0^{I}\le a_0^{{SR}}$ and $0\le a_1^I\le a_{1}^{SR}$,}}$ where the second inequalities reflect the fact that healthy (susceptible or recovered) agents are more productive than infected ones.", "Notice that in the agent's income $Y$ , the only term depending on $\mu $ and $\Theta $ is $Z$ ."
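The epidemic block above can be made concrete with a minimal Python sketch (ours, not the authors' code): it implements one step of the mobility-dependent SIRD update $\mu(t+1)=[Q^{\Theta(t,\cdot)}(\mu(t))]^{T}\mu(t)$ with the transmission probability $\tau$ defined earlier; all numerical values are placeholders rather than the calibrated parameters discussed later in the paper.
import numpy as np

BETA_P, BETA_C = 0.146, 0.146   # placeholder contagion coefficients (work / consumption)
PI_R, PI_D = 0.0714, 0.00052    # placeholder recovery and death probabilities per period

def sird_step(mu, theta):
    """One-period update; mu = (S, I, R, D) shares, theta[k] = (theta_p, theta_c) for class k."""
    tau = (BETA_P * mu[1] * theta['I'][0] * theta['S'][0]
           + BETA_C * mu[1] * theta['I'][1] * theta['S'][1])
    Q = np.array([[1.0 - tau, tau,               0.0,  0.0],
                  [0.0,       1.0 - PI_R - PI_D, PI_R, PI_D],
                  [0.0,       0.0,               1.0,  0.0],
                  [0.0,       0.0,               0.0,  1.0]])
    return Q.T @ mu

mu0 = np.array([1.0 - 1e-4, 1e-4, 0.0, 0.0])                                   # initial shares
theta = {'S': (1.0, 1.0), 'I': (0.7, 0.7), 'R': (1.0, 1.0), 'D': (0.0, 0.0)}   # mobility by class
print(sird_step(mu0, theta))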
], [ "Consumption and mobility", "The price faced by the agent for the consumption good depends on her/his (consumption-related) mobility choice $\\vartheta _{c}$ and it is given by: $P(t,\\vartheta _{c}) = \\frac{1}{P_{0} + P_{1} \\vartheta _c},$ where ${P_{0}, P_{1}\\ge 0}.$ Abstracting from saving, at the generic time period, the consumption of the agent in the disease state $k$ when the epidemic is in the state $\\mu (t)$ and she undertakes the production-consumption choices $\\vartheta =(\\vartheta _{c},\\vartheta _{c})$ is then given by $C(t,k;\\vartheta ) &=& \\dfrac{Y(t,k,\\vartheta _{p}) }{P(t,\\vartheta _{c}) } =Z(t) \\left( A_{0}(k) + A_{1}(k) \\vartheta _p\\right) \\left(P_{0} + P_{1} \\vartheta _c \\right).\\nonumber $" ], [ "Mobility costs ", "We introduce now the mobility costs of agents.", "We assume that moving is costly.", "In particular, at a generic time period, the cost of the agent in the disease state $k$ when the epidemic state is $\\mu (t)$ to move with intensity $\\vartheta _p$ (respectively, $\\vartheta _c$ ) in the labour market (respectively, in the consumption market) is $\\gamma _p\\left(k,\\mu (t)\\right) \\vartheta _p \\ \\ \\Big (\\mbox{respectively, } \\ \\gamma _c\\left(k,\\mu (t)\\right)\\vartheta _c\\,\\,\\Big ),$ where $\\gamma _p: \\mathbb {K}\\times \\mathcal {P}(\\mathbb {K})\\rightarrow (0,\\infty )$ (respectively, $\\gamma _c: \\mathbb {K}\\times \\mathcal {P}(\\mathbb {K})\\rightarrow (0,\\infty )$ ).", "We assume that: ${\\gamma _{p}(R,\\nu )\\le \\gamma _{p}(S,\\nu )\\le \\gamma _{p}(I,\\nu ), \\ \\ \\ \\ \\gamma _{c}(R,\\nu )\\le \\gamma _{c}(S,\\nu )\\le \\gamma _{c}(I,\\nu ), \\ \\ \\ \\forall \\nu \\in \\mathcal {P}(\\mathbb {K}).", "}$" ], [ "Agents' utility", "The utility at time $t$ of the agent in the disease state $k\\in \\mathbb {K}$ , undertaking the actions $\\vartheta =(\\vartheta _p,\\vartheta _c)\\in [0,1]^{2}$ is $u(t,k,\\vartheta ),$ where $u:\\mathbb {N}\\times \\mathbb {K}\\times [0,1]^2\\rightarrow \\mathbb {R}$ with $u(\\cdot ,D,\\cdot )\\equiv 0$ and, for $k\\in \\lbrace S,I,R\\rbrace $ , $u(t,k,\\vartheta ) :=\\ln \\Big [Z(t)\\left(A_{0}(k) + A_{1}(k) \\vartheta _{p}\\right)\\left(P_{0} + P_{1} \\vartheta _{c}\\right)\\Big ]- \\gamma _p\\left(k,\\mu (t)\\right) \\vartheta _{p} - \\gamma _c\\left(k,\\mu (t)\\right) \\vartheta _{c}- M,\\\\$ where $M\\in \\mathbb {R}$ is the constant utility of state dead, which “normalizes the utility of nonsurvival to zero” [45].", "In the expression of the utility the dependence on time only arises through $Z(t)$ and $\\mu (t)$ , hence we can write it as $u(t,k,\\vartheta )=U(k,Z(t),\\mu (t),\\vartheta ),$ where $U: \\mathbb {K}\\times (0,\\infty )\\times \\mathcal {P}(\\mathbb {K})\\times [0,1]^{2}\\rightarrow \\mathbb {R},$ $U(k,z,\\nu ,\\vartheta )= \\ln \\Big [z\\left(A_{0}(k) + A_{1}(k) \\vartheta _{p}\\right)\\left(P_{0} + P_{1} \\vartheta _{c}\\right)\\Big ]- \\gamma _p\\left(k,\\nu \\right) \\vartheta _{p} - \\gamma _c\\left(k,\\nu \\right) \\vartheta _{c}-M.$ Variables $\\mu $ and $\\Theta $ only arises in the terms $Z$ and $(\\gamma _{p},\\gamma _{c})$ .", "$Z$ depends on $\\mu $ and $\\Theta $ through (REF ), while $\\mu $ depends on $\\mu _0$ and $\\Theta $ through (REF ).", "Hence, we find useful to emphasize the dependence of $u$ on $\\mu _{0}$ and $\\Theta $ by writing $u^{\\mu _{0},\\Theta }$ ." 
], [ "The agent's optimization problem", "To define and study the equilibria in the above setting, we first look at the optimization problem of an agent who may possibly deviate from the behavior of the others.", "Fix the initial epidemic distribution $\\mu _{0}\\in \\mathcal {P}(\\mathbb {K})$ and the overall agents' behavior $\\Theta \\in \\mathcal {A}$ .", "Then, let $\\mu (t)$ , $t\\in \\mathbb {N}$ , be evolving according to (REF ) with initial state $\\mu (0)=\\mu _{0}$ .", "We assume, as it is natural in the context of a continuum of agents, that the mobility choices of the agent do not modify the evolution of $\\mu $ .", "Hence, we consider an agent who takes for given $\\mu _{0}$ and $\\Theta $ — hence, $\\mu (\\cdot )$ , since her choices do not affect the evolution of $\\mu $ — and deviates from $\\Theta $ maximizing her/his expected total inter-temporal utility.", "Since we rely on Dynamic Programming, we let the initial time and state vary.", "Hence, we assume that the agent starts at time $t_{o}\\in \\mathbb {N}$ in the state $k_{o}\\in \\mathbb {K}$ , where $(t_{o},k_{o})\\in \\mathbb {N}\\times \\mathbb {K}$ , and that she chooses her strategies (in the closed-loop sense) in the set $\\mathcal {A}_{\\text{ag}}(t_{o}):=\\Big \\lbrace {\\theta }=({\\theta }_p,{\\theta }_c): \\lbrace t_o,t_{o}+1,...\\rbrace \\times \\mathbb {K} \\rightarrow [0,1]^2 \\ \\mbox{s.t.", "}\\ \\ {\\theta }(\\cdot ,D)=(0,0)\\Big \\rbrace .$ The feedback strategies depend on the present time and state of the agent – the knowledge of $\\mu $ and $\\Theta $ is hidden in the dependence on time of $\\theta $ .", "Given a discount factor $1-\\rho \\in (0,1)$ , the expected total inter-temporal utility is given by taking the discounted sum over time of the instantaneous utility introduced above in (REF ): $J^{\\mu _{0},\\Theta }(t_{o},k_{o};{\\theta }) := \\mathbb {E} \\left[ \\sum _{t=t_{o}}^{\\infty } {(1-\\rho )}^{t-t_{o}} u^{\\mu _{0},\\Theta }(t,K^{\\theta }(t), \\theta (t,K^{\\theta }(t))) \\right],$ where the epidemic state of the agent $K^{\\theta }(t)$ evolves randomly as follows.", "First, $K^{\\theta }(t_{o})=t_{o}$ and at each time $t\\ge t_o$ the state at time $t+1$ , i.e.", "$K^{\\theta }(t+1)$ , is determined by the transition kernel introduced below when the generic variable $\\vartheta \\in [0,1]^{2}$ is substituted by $\\theta (t,S)$ : $\\mathcal {P}^{\\vartheta }(t):=\\left(\\begin{array}{cccc}p_{SS}^{\\vartheta }(t) & p_{SI}^{\\vartheta }(t) & p_{SR}^{\\vartheta }(t) & p_{SD}^{\\vartheta }(t) \\\\p_{IS}^{\\vartheta }(t) & p_{II}^{\\vartheta }(t) & p_{IR}^{\\vartheta }(t) & p_{ID}^{\\vartheta }(t) \\\\p_{RS}^{\\vartheta }(t) & p_{RI}^{\\vartheta }(t) & p_{RR}^{\\vartheta }(t) & p_{RD}^{\\vartheta }(t) \\\\p_{DS}^{\\vartheta }(t) & p_{DI}^{\\vartheta }(t) & p_{DR}^{\\vartheta }(t) & p_{DD}^{\\vartheta }(t) \\\\\\end{array}\\right)=\\left(\\begin{array}{cccc}1- \\tau _{\\text{ag}}^{\\vartheta }(t)& \\tau _{\\text{ag}}^{\\vartheta }(t)& 0 & 0 \\\\0 & 1- \\pi _R -\\pi _D & \\pi _R & \\pi _D \\\\0 & 0 & 1 & 0 \\\\0 & 0 & 0 & 1 \\\\\\end{array}\\right).$ where $\\tau _{\\text{ag}}^{\\vartheta }(t)&:=\\beta _{P}\\mu (t,I)\\Theta _{p}(t,I)\\vartheta _{p} +\\beta _{C}\\mu (t,I)\\Theta _{c}(t,I)\\vartheta _{c}.$ $\\tau _{\\text{ag}}^{\\vartheta }(t)$ is obtained by $\\tau ^{\\Theta (t,\\cdot )}$ substituting $\\Theta (t,S)$ with $\\vartheta $ , i.e.", "the probability of infection of the susceptible agent changes due her choice of deviating from $\\Theta (t,S)$ .", "Summarizing $K^{\\theta }$ is 
the controlled Markov chain whose transition kernel at each time $t\\ge t_{o}$ is $\\mathcal {P}^{\\theta (t,K^{\\theta }(t))}(t)$ , i.e.", "this matrix gives the transition probability of the state of the agent from time $t$ to time $t+1$ .When $\\theta =\\Theta $ , then the dynamics of $\\mu $ is, as expected, the Fokker-Planck equation associated to the random dynamics of the agent.", "The agent aims at maximizing $J^{\\mu _{0},\\Theta }(t_{o},k_{o};\\theta )$ over $\\theta \\in \\mathcal {A}_{\\text{ag}}(t_{o})$ .", "Define the value function of the agent as $V^{\\mu _0,\\Theta }(t_{o},k_{o})=\\sup _{\\theta \\in \\mathcal {A}_{\\text{ag}}(t_{o})} J^{\\mu _{0},\\Theta }(t_{o},k_{o};\\theta ).$ It is well known, by the Dynamic Programming Principle, that the value function is a solution (possibly not unique) to the so called Bellman equation, which is written as follows (with unknown $v$ ): $v(t_{o},k_{o}) = \\sup _{\\vartheta \\in [0,1]^2}\\sum _{k\\in \\mathbb {K}} p^{\\vartheta }_{k_{o}k}(t_{o})\\big [ u^{\\mu _{0},\\Theta }(t_{o},k_o,\\vartheta )+ (1-\\rho )v(t_{o}+1,k)\\big ].$" ], [ "Definition of equilibrium", "Let us give the definition of Nash equilibrium for the continuum of agents' game.", "Definition 4.1 (Nash equilibrium) Let $\\mu _{0}\\in \\mathcal {P}(\\mathbb {K})$ .", "A Nash equilibrium (for our continuum of agents' game) starting at $\\mu _0$ is a map $\\overline{\\Theta }^{\\mu _{0}}\\in \\mathcal {A}$ such that, calling $\\overline{\\mu }(\\cdot )$ the solution to (REF ) associated to $\\overline{\\Theta }^{\\mu _{0}}$ and defining $\\overline{\\theta }(t,k):=\\overline{\\Theta }^{\\mu _{0}}(t,k), \\ \\ \\ \\ (t,k)\\in \\lbrace t_{o},t_{o}+1,...\\rbrace \\times \\mathbb {K},$ we have $J^{\\mu _0,\\overline{\\Theta }^{\\mu _{0}}}(t_{o},k_o)=V^{\\mu _0,\\overline{\\Theta }^{\\mu _{0}}}(t_{o},k_{o}).$ Inspired by [33] we also give the following definition that will turn out to be equivalent to the latter one.", "Definition 4.2 (Equilibrium) Let $\\mu _{0}\\in \\mathcal {P}(\\mathbb {K})$ be a given initial distribution of the disease at time $t=0$ .", "An equilibrium starting from $\\mu _{0}$ is a couple $(v^{\\mu _{0}},\\overline{\\Theta }^{\\mu _{0}})$ , with $v^{\\mu _{0}}:\\mathbb {N}\\times \\mathbb {K}\\rightarrow \\mathbb {R}$ and $\\overline{\\Theta }^{\\mu _{0}}\\in \\mathcal {A}$ , such that: (i) $v^{\\mu _{0}}$ is bounded and satisfies for every $ (t_{o},k_{o})\\in \\mathbb {N}\\times \\mathbb {K}$ the dynamic programming equation $&v(t_{o},k_{o}) = \\sup _{\\vartheta \\in [0,1]^2}\\sum _{k\\in \\mathbb {K}}p_{k_{o} k}^{\\vartheta }(t_{o})\\big ( u^{\\mu _{0},\\overline{\\Theta }^{\\mu _{0}}}(t_{o},k_{o},\\vartheta )+ (1-\\rho )v(t_{o}+1,k) \\big ).$ (ii) Calling $\\overline{\\mu }(\\cdot )$ the solution to (REF ) with initial datum $\\mu _{0}$ and with $\\Theta =\\overline{\\Theta }^{\\mu _{0}}$ , for all $(t_{o},k_{o})\\in \\mathbb {N}\\times \\mathbb {K}$ , the couple $\\overline{\\Theta }^{\\mu _{0}}(t_{o},k_{o})\\in [0,1]^{2}$ is an optimizer in the right hand side of (REF ).", "Proposition 4.3 The two above definitions are equivalent.", "(a) Let $\\mu _0\\in \\mathcal {P}(\\mathbb {K})$ , let $(v^{\\mu _{0}},\\overline{\\Theta }^{\\mu _{0}})$ be an equilibrium in the sense of Definition REF , and let $(t_o,k_o)\\in \\mathbb {N}\\times \\mathbb {K}$ .", "Then, define $\\overline{\\theta }(t,k):=\\overline{\\Theta }^{\\mu _{0}}(t,k), \\ \\ \\ \\ (t,k)\\in \\lbrace t_{o},t_{o}+1,...\\rbrace \\times \\mathbb {K},$ By standard verification arguments in 
optimal control, it is clear that, since $v^{\\mu _{0}}$ is bounded, it coincides with the value function of the agent and the control $\\overline{\\theta }\\in \\mathcal {A}_{\\text{ag}}(t_{o})$ is optimal for the agent when the other agents follow the same feedback strategy.", "Hence, (REF ) is verified showing that $\\overline{\\Theta }^{\\mu _{0}}$ is a Nash equilibrium in the sense of Definition REF .", "(b) Let $\\mu _0\\in \\mathcal {P}(\\mathbb {K})$ and let $ \\overline{\\Theta }^{\\mu _{0}}$ be a Nash equilibrium in the sense of Definition REF and consider the couple $(V^{\\mu _0,\\overline{\\Theta }^{\\mu _{0}}}, \\overline{\\Theta }^{\\mu _{0}})$ .", "By the dynamic programming principle, $V^{\\mu _0,\\overline{\\Theta }^{\\mu _{0}}}(t_{o},k_{o})$ satisfies (REF ), so part (i) of Definition REF is satisfied.", "Part (ii) of the same definition is satisfied by (REF )." ], [ "Existence of equilibria", "The main result of this subsection is the proof of the existence of an equilibrium.", "We shall make use of the following fixed point theorem.", "Theorem 4.4 (Tikhonov's fixed point Theorem) Let $\\mathcal {V}$ be a locally convex topological vector space, let $\\mathcal {Q}\\subseteq \\mathcal {V}$ be a nonempty compact convex set, and let $F:\\mathcal {Q}\\rightarrow \\mathcal {Q}$ be a continuous function.", "Then $F$ has a fixed point.", "Theorem 4.5 An equilibrium exists for each $\\mu _0\\in \\mathcal {P}(\\mathbb {K})$ .", "Fix $\\mu _0\\in \\mathcal {P}(\\mathbb {K})$ .", "Consider the space of sequences $\\mathcal {V}:=\\bigg \\lbrace q=(q_{R},q_{I},q_{S},q_{D}):\\mathbb {N}\\rightarrow \\mathbb {R}^{4} \\bigg \\rbrace $ endowed with the topology of pointwise convergence.", "The latter is a locally convex topological vector space since the topology is induced by the family of seminorms $\\mathbf {p}_{t}(q)=|q(t)|_{{\\mathbb {R}^{4}}}, \\ \\ \\ t\\in \\mathbb {N},$ where $q(t)$ is the $t-$ th component of $q$ .", "Then, consider $\\mathcal {Q}:=\\bigg \\lbrace q=(q_{R},q_{I},q_{S},q_{D}):\\mathbb {N}\\rightarrow [0,1]^{2}\\times [0,1]^{2}\\times [0,1]^{2}\\times \\lbrace 0\\rbrace \\bigg \\rbrace \\subset \\mathcal {V}.$ $\\mathcal {Q}$ is convex and, by Tikhonov's compactness Theorem, it is compact in $\\mathcal {V}$ .", "We consider the one-to-one correspondence $\\mathcal {M}: \\mathcal {Q}\\rightarrow \\mathcal {A}$ defined by $(\\mathcal {M} q)(t,k)\\equiv q_{k}(t), \\ \\ \\ (t,k)\\in \\mathbb {N}\\times \\mathbb {K}.$ Let $F:\\mathcal {Q}\\rightarrow \\mathcal {Q}, \\ \\ \\ \\ F(q)(t_{o},k_{o}):=(\\hat{\\theta }_p(t_{o},k_{o};q),\\hat{\\theta }_c(t_{o},k_{o};q)), \\ \\ \\ \\ (t_{o},k_{o})\\in \\mathbb {N}\\times \\mathbb {K}.$ where $(\\hat{\\theta }_p(t_{o},k_{o};q),\\hat{\\theta }_c(t_{o},k_{o};q))$ is the unique the maximizer over $[0,1]^2$ of $ \\vartheta \\mapsto \\sum _{k\\in \\mathbb {K}} p^{\\vartheta }_{k_{o}k}(t_{o})\\big [ u^{\\mu _{0},\\mathcal {M}(q)}(t_{o},k_o,\\vartheta )+ (1-\\rho )V^{\\mu _{0},\\mathcal {M}(q)}(t_{o}+1,k)\\big ].$ Clearly, if $\\overline{q}$ is a fixed point of $F$ , then $(V^{\\mu _{0},\\mathcal {M}(\\overline{q})}, \\mathcal {M}(\\overline{q}))$ is an equilibrium according to Definition REF .", "Fix $(t_o,k_{o})\\in \\mathbb {N}\\times \\mathbb {K}$ ; given a sequence $(q^{n})\\subset \\mathcal {Q}$ converging to $q\\in \\mathcal {Q}$ , we have, $V^{\\mu _{0},\\mathcal {M}(q_{n})}(t_{o},k_{o}) \\rightarrow V^{\\mu _{0},\\mathcal {M}(q)}(t_{o},k_{o})$ .", "Consequently, by strict concavity and regularity of $u^{\\mu _{0},\\mathcal 
{M}(q)}$ , we also have the convergence $(\\hat{\\theta }_p(t_{o},k_{o};q_{n}),\\hat{\\theta }_c(t_{o},k_{o};q_{n}))\\rightarrow (\\hat{\\theta }_p(t_{o},k_{o};q),\\hat{\\theta }_c(t_{o},k_{o};q)).$ This shows that $F$ is continuous.", "We conclude by Theorem REF ." ], [ "Recursive construction of equilibria", "In this section we present a recursive algorithm which allows to compute an equilibrium of our game.", "First we present the algorithm and then we prove, in Theorem REF , that, under suitable conditions, it provides an equilibrium.", "In the algorithm we will build a couple $(\\hat{v},\\hat{\\Theta })$ which will be proved to be an equilibrium in the sense of Definition REF .", "We denote also by $\\hat{\\mu }$ the associated epidemic distribution.", "All the objects defined below depend on $\\mu _{0}$ but, to lighten the notation, we do not stress this dependence.", "At time $t=0$ , we start with $\\hat{\\mu }(0)=\\mu _{0}$ and with an arbitrary $\\hat{v}(0,\\cdot )$ such that $\\hat{v}(0,D)=0$ .", "Then, at generic time $t\\in \\mathbb {N}$ , having at hand the values of $\\hat{v}(t,\\cdot )$ , $\\hat{\\mu }(t)$ and (for $t>0$ ) $\\hat{\\Theta }(t-1,\\cdot )$ , we define $\\hat{v}(t+1,\\cdot )$ , $\\hat{\\mu }(t+1)$ and $\\hat{\\Theta }(t,\\cdot )$ , as follows.", "We set $\\hat{v}(t+1,D)=0$ and $\\hat{\\Theta }(t,D)=0$ .", "For $k\\in \\lbrace I,R\\rbrace $ , we define, the couple $\\Phi (k,\\nu )=(\\Phi _{p}(k,\\nu ),{\\Phi }_{c}(k,\\nu ))$ whereHereafter, given $a,b\\in \\mathbb {R}$ , we denote $a\\vee b=\\max \\lbrace a,b\\rbrace $ , $a\\wedge b=\\min \\lbrace a,b\\rbrace $ .", "${\\left\\lbrace \\begin{array}{ll}\\displaystyle {{\\Phi }_{p}(k,\\nu ):=\\left(\\left(\\frac{1}{\\gamma _p(k,\\nu )}-\\frac{A_{0}(k)}{A_{1}(k)}\\right)\\vee 0\\right)\\wedge 1, }\\\\\\\\\\displaystyle {\\Phi _{c}(k,\\nu ):= \\left(\\left(\\frac{1}{\\gamma _c(k,\\nu )}-\\frac{P_{0}}{P_{1}}\\right)\\vee 0\\right)\\wedge 1,}\\end{array}\\right.", "}$ and $\\hat{\\Theta }(t,k):=\\Phi (k,\\hat{\\mu }(t)).$ Given $\\xi \\in \\mathbb {R}$ , $\\nu \\in \\mathcal {P}(\\mathbb {K})$ , we define $\\hat{\\vartheta }^{S}(\\xi ,\\nu )=(\\hat{\\vartheta }_{p}^{S}({\\xi },\\nu ),\\hat{\\vartheta }_{c}^{S}({\\xi },\\nu ))= \\big ((\\tilde{\\vartheta }_{p}^{S}(\\xi ,\\nu )\\wedge 1)\\vee 0,\\ (\\tilde{\\vartheta }^{S}_{c}(\\xi ,\\nu )\\wedge 1)\\vee 0 \\big ),$ where $\\tilde{\\vartheta }^{S}_p(\\xi ,\\nu )=\\frac{1}{\\gamma _p(S,\\nu )+{(1-\\rho )} \\beta _{P} \\hat{a}(\\nu )\\xi }-\\frac{A_{0}(S)}{A_{1}(S)},\\\\\\\\ \\tilde{\\vartheta }^{S}_c(\\xi ,\\nu )=\\frac{1}{\\gamma _c(S,\\nu )+{(1-\\rho )} \\beta _{C} \\hat{b}(\\nu )\\,\\xi }-\\frac{P_{0}}{P_{1}},$ where $\\hat{a}(\\nu )= \\nu (I)\\Phi _{p}(I,\\nu ), \\ \\ \\ \\hat{b}(\\nu )=\\nu (I) \\Phi (I,\\nu ),$ and also define, for $k\\in \\mathbb {K}$ , $\\nu \\in \\mathcal {P}(\\mathbb {K})$ , $\\vartheta ^{S}\\in [0,1]$ , $\\hat{Z}(\\vartheta ^{S},\\nu )= \\phi \\bigg ( \\nu (S)\\vartheta ^{S}, \\,\\,\\nu (I)\\Phi _{p}(I,\\nu ), \\,\\, \\nu (R)\\Phi _{p}(R,\\nu )\\bigg ).$ Recalling the expression of $U$ given in (REF ), we define, given $w_{R},w_{I}\\in \\mathbb {R}$ , $\\xi \\in \\mathbb {R}_{+}$ , $\\nu \\in \\mathcal {P}(\\mathbb {K})$ , ${\\left\\lbrace \\begin{array}{ll}W(w_{R},R,\\nu ,\\xi ):=\\displaystyle {\\frac{1}{1-\\rho }\\left(w_{R}- U(R, \\hat{Z}(\\hat{\\vartheta }^{S}(\\xi ,\\nu ), \\nu ),\\nu ,\\Phi (R,\\nu )\\right),}\\medskip \\\\\\displaystyle {W(w_{I},w_{R},I,\\nu ,\\xi ):=\\frac{1}{1-\\pi _R-\\pi 
_D}\\left[\\frac{w_{I}-U(I,\\hat{Z}(\\hat{\\vartheta }^{S}(\\xi ,\\nu ),\\nu ), \\nu , \\Phi (I,\\nu ))}{1-\\rho }- \\pi _R\\, W(w_{R},R,\\nu ,\\xi ) \\right]}.\\end{array}\\right.", "}$ Given $w=(w_{S},w_{I},w_{R})\\in \\mathbb {R}^{3}$ , we consider the algebraic equation in the variable $\\xi \\in \\mathbb {R}_{+}$ $w_{S} =(1-\\rho ) W(w_{I},I,\\nu ,\\xi )+(1-\\rho ) \\xi + f(S,\\nu ,\\xi ),$ where $f(S,\\nu ,\\xi )=& \\ \\ U(S,\\hat{Z}(\\hat{\\vartheta }^{S}(\\xi ,\\nu ),\\nu ),\\nu ,\\hat{\\vartheta }^{S}(\\xi ,\\nu ))\\\\&- (1-\\rho )\\left(\\beta _{P}\\hat{a}(\\nu )\\hat{\\vartheta }^{S}_{p}(\\xi ,\\nu ) +\\beta _{C}\\hat{b}(\\nu )\\hat{\\vartheta }^{S}_{c}(\\xi ,\\nu )\\right) \\xi .\\nonumber $ For every $\\nu \\in \\mathcal {P}(\\mathbb {K})$ , denote the set of solutions (possibly empty) to this equation by $\\hat{\\xi }(w,\\nu )$ .", "If $\\hat{\\xi }(w,\\nu )$ is nonempty and a singleton, we can define $\\tilde{\\Phi }(t,S,\\nu ):=\\hat{\\vartheta }^{S}(\\hat{\\xi }(\\hat{v}(t,\\cdot ),\\nu ), \\nu ).$ Assume now that $\\hat{\\xi }(\\hat{v}(t,\\cdot ),\\hat{\\mu }(t))$ is nonempty and a singleton, hence $\\hat{\\vartheta }^{S}(\\hat{\\xi }(\\hat{v}(t,\\cdot ),\\hat{\\mu }(t)), \\hat{\\mu }(t))$ and $\\tilde{\\Phi }(t,S,\\hat{\\mu }(t))$ are well defined.", "We set ${\\left\\lbrace \\begin{array}{ll}\\hat{v}(t+1,R):= W(\\hat{v}(t,R), R, \\hat{\\mu }(t), \\hat{\\xi }(\\hat{v}(t,\\cdot ),\\hat{\\mu }(t))),\\\\\\hat{v}(t+1,I):=W(\\hat{v}(t,I), I, \\hat{\\mu }(t), \\hat{\\xi }(\\hat{v}(t,\\cdot ),\\hat{\\mu }(t))),\\\\\\hat{v}(t+1,S):= \\hat{\\xi }(\\hat{v}(t,\\cdot ),\\hat{\\mu }(t))+\\hat{v}(t+1,I),\\end{array}\\right.", "}$ and ${\\left\\lbrace \\begin{array}{ll}\\hat{\\Theta }(t,R):= \\Phi (R,\\hat{\\mu }(t)),\\\\\\hat{\\Theta }(t,I):= \\Phi (I,\\hat{\\mu }(t)),\\\\\\hat{\\Theta }(t,S):= \\tilde{\\Phi }(t,S,\\hat{\\mu }(t))=\\hat{\\vartheta }^{S}(\\hat{\\xi }(t,\\hat{\\mu }(t)), \\hat{\\mu }(t)).\\end{array}\\right.", "}$ Finally, we define $\\hat{\\mu }(t+1)=[Q^{\\hat{\\Theta }(t,\\cdot )}(\\hat{\\mu }(t))]^{T}\\hat{\\mu }(t).$ Then, we repeat, recursively on time, the construction above.", "Theorem 4.6 Let $\\mu _{0}$ and $\\hat{v}(0,\\cdot )$ with $\\hat{v}(0,D)=0$ be assigned.", "Consider the recursive construction of the objects above and assume that $\\hat{\\xi }(\\hat{v}(t,\\cdot ),\\hat{\\mu }(t))$ is well defined for each $t\\in \\mathbb {N}$ and that the resulting $\\hat{v}$ is bounded.", "Then the couple $(\\hat{v},\\hat{\\Theta })$ is an equilibrium starting at $\\mu _{0}$ in the sense of Definition REF , hence of Definition REF .", "First of all we observe that, under the above assumptions, the sequence $(\\hat{v},\\hat{\\Theta })$ is well defined by induction.", "Now we show that (i) and (ii) of Definition REF hold for such sequence.", "We preliminarily notice that, given $(t_{o},k_{o})\\in \\mathbb {N}\\times \\lbrace S,I,R\\rbrace $ , the function $[0,1]^2\\rightarrow \\mathbb {R}$ , $\\vartheta = (\\vartheta _p, \\vartheta _c) \\mapsto u(t_{o},k_{o},\\vartheta )$ is strictly concave in $[0,1]^2$ , since $D_{\\vartheta }u(t_{o},k_{o},\\vartheta )=\\left(\\frac{A_1(k_{o}) }{A_0(k_{o}) + A_1(k_{o}) \\vartheta _p}- \\gamma _p(k_{o},\\mu (t_{o})), \\ \\frac{P_{1} }{P_{0} + P_{1} \\vartheta _c}- \\gamma _c(k_{o},\\mu (t_{o}))\\right),$ and $D^2_{\\vartheta }u(t_{o},k_{o},\\vartheta )=\\left(\\begin{array}{cc}\\displaystyle -\\frac{A_1(k_{o})^2}{(A_0(k_{o}) + A_1(k_{o}) \\vartheta _p)^2}& 0 \\\\0 & \\displaystyle -\\frac{P^2_{1} }{(P_{0} + P_{1} \\vartheta 
_c)^2}\\end{array}\\right).$ Now we fix $t_o\\in \\mathbb {N}$ and show that $\\hat{v}(t_o,\\cdot )$ solves the dynamic programming equation on the various occurrences of $k_{o}\\in \\mathbb {K}$ , where $\\hat{\\Theta }(t_o,\\cdot )$ are the maximizers of the right hand side of (REF ) .", "($k_{o}=D$ ) In this case the dynamic programming equation reduces to $&v(t_{o},D) =u(t_{o},D,(0,0))+ (1-\\rho )v(t_{o}+1,D) = (1-\\rho )v(t_{o}+1,D).$ It is clear that the above constructed $\\hat{v}$ is always zero on $D$ and hence satisfies the above equation.", "The maximizer is the unique admissible control $\\hat{\\Theta }(t_{o},D):=(0,0)$ .", "($k_{o}=R$ ) In this case the dynamic programming equation reduces to $v(t_{o},R)& = \\sup _{\\vartheta \\in [0,1]^2}\\big ( u(t_{o},R,\\vartheta )+ (1-\\rho )v(t_{o}+1,R) \\big )\\\\&= (1-\\rho ) v(t_{o}+1,R) +\\sup _{\\vartheta \\in [0,1]^2}u(t_{o},R,\\vartheta ).$ The optimization above leads to the unique maximum point $(\\hat{\\vartheta }_{p},\\hat{\\vartheta }_{c})= \\big ((\\tilde{\\vartheta }_{p}\\wedge 1)\\vee 0,\\ (\\tilde{\\vartheta }_{c}\\wedge 1)\\vee 0 \\big ),$ where ${\\left\\lbrace \\begin{array}{ll}\\displaystyle {\\tilde{\\vartheta }_p=\\frac{A_{1}(R)-\\gamma _p(I,\\mu (t_{o}))A_{0}(R)}{\\gamma _p(R,\\mu (t_{o}))A_{1}(R)}=\\frac{1}{\\gamma _p(R,\\mu (t_{o}))}-\\frac{A_{0}(R)}{A_{1}(R)}},\\\\\\\\\\displaystyle {\\tilde{\\vartheta }_c=\\frac{P_{1}-\\gamma _c(R,\\mu (t_{o}))P_{0}}{\\gamma _c(R,\\mu (t_{o}))P_{1}}=\\frac{1}{\\gamma _c(R,\\mu (t_{o}))}-\\frac{P_{0}}{P_{1}}.}\\end{array}\\right.", "}$ We therefore get $v(t_{o}+1,R) = \\frac{v(t_{o},R)- u(t_{o},R,\\hat{\\vartheta })}{1-\\rho }.$ This is exactly the expression of the first equation in point 6 above.", "Hence $\\hat{v}$ satisfies the dynamic programming equation (REF ) in this case.", "The maximizer is $\\hat{\\vartheta }$ above which coincides with the expression of $\\hat{\\Theta }$ given in the fourth equation in point 6 above.", "($k_{o}=I$ ) In this case the dynamic programming equation reduces to $v(t_{o},I) &= \\sup _{\\vartheta \\in [0,1]^2}\\Big ( u(t_{o},I,\\vartheta )+ (1-\\rho )\\left((1-\\pi _R-\\pi _D)v(t_{o}+1,I) +\\pi _Rv(t+1,R)\\right)\\big )\\\\& = (1-\\rho )\\left((1-\\pi _R-\\pi _D)v(t_{o}+1,I) +\\pi _Rv(t+1,R)\\right) + \\sup _{\\vartheta \\in [0,1]^2}u(t_{o},I,\\vartheta ).$ The optimization above leads to the unique maximum point $(\\hat{\\vartheta }_{p},\\hat{\\vartheta }_{c})= \\big ((\\tilde{\\vartheta }_{p}\\wedge 1)\\vee 0,\\ (\\tilde{\\vartheta }_{c}\\wedge 1)\\vee 0 \\big ),$ where ${\\left\\lbrace \\begin{array}{ll}\\displaystyle {\\tilde{\\vartheta }_p=\\frac{A_{1}(I)-\\gamma _p(I,\\mu (t_{o}))A_{0}(I)}{\\gamma _p(I,\\mu (t_{o}))A_{1}(I)}=\\frac{1}{\\gamma _p(I,\\mu (t_{o}))}-\\frac{A_{0}(I)}{A_{1}(I)}},\\\\\\\\\\displaystyle {\\tilde{\\vartheta }_c=\\frac{P_{1}-\\gamma _c(I,\\mu (t_{o}))P_{0}}{\\gamma _c(I,\\mu (t_{o}))P_{1}}=\\frac{1}{\\gamma _c(I,\\mu (t_{o}))}-\\frac{P_{0}}{P_{1}}.}\\end{array}\\right.", "}$ We therefore get $v(t_{o}+1,I)=\\frac{1}{1-\\pi _R-\\pi _D}\\left[\\frac{v(t_{o},I)-u(t_{o},I,\\hat{\\vartheta })}{1-\\rho }- \\pi _Rv(t+1,R) \\right].$ This is exactly the expression of the second equation in point 6 above.", "Hence $\\hat{v}$ satisfies the dynamic programming equation (REF ) in this case.", "The maximizer is $\\hat{\\vartheta }$ above which coincides with the expression of $\\hat{\\Theta }$ given in the fifth equation in point 6 above.", "($k_{o}=S$ ) In this case the dynamic programming equation reduces to $v(t_{o},S) &= 
\\sup _{\\vartheta \\in [0,1]^2}\\Big ( u(t_{o},S,\\vartheta )+ (1-\\rho )\\left((1-\\tau _{\\text{ag}}^{\\vartheta }(t_{o})) v(t_{o}+1,S) +\\tau _{\\text{ag}}^{\\vartheta }(t_{o}) v(t_{o}+1,I)\\right)\\big ),$ which can be rewritten as $v(t_{o},S) =& \\ \\ (1-\\rho ) v(t_{o}+1,I)+(1-\\rho ) (v(t_{o}+1,S)-v(t_{o}+1,I)) \\\\&+\\sup _{\\vartheta \\in [0,1]^2}\\Big ( u(t_{o},S,\\vartheta )- (1-\\rho )\\tau _{\\text{ag}}^{\\vartheta }(t_{o}) (v(t_{o}+1,S)-v(t_{o}+1,I))\\Big ),$ Set $\\xi := v(t_{o}+1,S)-v(t_{o}+1,I)$ and consider the optimization above in terms of the parameter $\\xi \\in \\mathbb {R}_{+}$ .", "The maximization leads to the unique maximum point $(\\hat{\\vartheta }_{p}(\\xi ),\\hat{\\vartheta }_{c}(\\xi ))= \\big ((\\tilde{\\vartheta }_{p}(\\xi )\\wedge 1)\\vee 0,\\ (\\tilde{\\vartheta }_{c}(\\xi )\\wedge 1)\\vee 0 \\big ),$ where $\\tilde{\\vartheta }_p(\\xi )=\\frac{1}{\\gamma _p(S,\\mu (t_{o}))+{(1-\\rho )} a(t_{o})\\xi }-\\frac{A_{0}(S)}{A_{1}(S)}, \\ \\ \\ \\ \\ \\ \\ \\ \\tilde{\\vartheta }_c(\\xi )=\\frac{1}{\\gamma _c(S,\\mu (t_{o}))+{(1-\\rho )} b(t_{o})\\xi }-\\frac{P_{0}}{P_{1}},$ where $a(t_{o})=\\hat{a} (\\mu (t_{o}))= \\mu (t_{o},I)\\hat{\\Theta }_{p}(t_{o},I)= \\mu (t_{o},I)\\Phi _{p}(I,\\mu (t_{o})),$ $b(t_{o})=\\hat{b} (\\mu (t_{o}))= \\mu (t_{o},I)\\hat{\\Theta }_{c}(t_{o},I)= \\mu (t_{o},I)\\Phi _{c}(I,\\mu (t_{o})),$ where $\\hat{\\Theta }(t_o,I)$ is defined in the previous item.", "Recalling the definition of $f$ given in (REF ), we consider the algebraic equation in the variable $\\xi \\in \\mathbb {R}$ $v(t_{o},S) =(1-\\rho ) v(t_{o}+1,I)+(1-\\rho ) \\xi + f(S,\\mu (t_{o}),\\xi ).$ By assumption this equation has a unique solution $\\xi (t_{o})=\\hat{\\xi }(t_o,\\mu (t_{o}))$ the unique solution to this equation.", "We get $v(t_{o}+1,S)= \\hat{\\xi }(t_{o})+ v(t_{o}+1,I).$ This is exactly the expression of the third equation in point 6 above.", "Hence $\\hat{v}$ satisfies the dynamic programming equation (REF ) in this case.", "The maximizer is $\\hat{\\vartheta }$ above which coincides with the expression of $\\hat{\\Theta }$ given in the sixth equation in point 6 above." 
], [ "Calibration of the model", "In the calibration of the model we focus on the recent Italian experience for COVID-19.", "Italy was unfortunately the first Western country severely hit by COVID-19; the epidemic shock was sudden and unexpected as well as the deep impact on Italian mobility and production (see Figure REF below).", "At the same time, Italy was also the first Western country to adopt strict restrictions in mobility in March 2020.", "Overall, this makes the Italian case particularly well-adapted to calibrate/estimate the relationship between mobility, production and dynamics of epidemic.Data and codes are available at https://people.unipi.it/davide_fiaschi/ricerca/.", "The first step in the numerical calibration of the model is to specify the $Z(t)$ in (REF ).", "In order to take as small as possible the number of model's parameters, we consider the following one-parameter specification: $Z(t) \\equiv 1 - \\exp \\left( - g \\left[ \\Theta _p(t,S) \\mu (t,S) + \\Theta _p(t,I) \\mu (t,I) + \\Theta _p(t,R) \\mu (t,R) \\right] \\right),$ where $g$ measures the sensitivity of individual income to aggregate mobility, i.e.", "the complementarities between individual and aggregate mobility in determining the level of individual income.", "In this respect we expect that $g$ is greater than 0.", "Taking (REF ) into account, overall we have to set 19 parameters, which are listed in Table REF , together with their values and information on the methods used to their setting.", "Below we provide more details on the method used to set their values.", "Table: List of model's parameters, their values and notes on how they are calculated/calibrated/estimated." ], [ "Calibration of the epidemiological parameters", "The calibration of the epidemiological parameters focuses on daily dynamics as standard in epidemiology [24].", "Several studies provide basic information on COVID-19 main epidemiological characteristics.", "In particular, [50] report that the average number of days for recovering from COVID-19 is 14, which implies $\\pi _R =0.07142$ .", "[25], instead, document an overall probability to die once infected of 0.94% in Italy and an average number of days from infection to death of 18, which implies $\\pi _R=0.00052$ .", "Finally, for setting $\\beta _P$ and $\\beta _C$ we assume that they are equal, so that observed infection rate is the product between $\\beta _P$ ($\\beta _C$ ) and the average mobility of infected individuals once mobility of susceptible is normalized to one in an economy without infected (see System of (REF )).", "[16] report that the prevalence rate of symptoms of COVID-19 in infected people is about 30%, i.e.", "70% of infected people are asymptomatic.", "Assuming that the latter maintain the same mobility, we set average mobility of infected individual 30% less than the one of susceptible.", "Since the observed infection rate can be expressed as $\\left(\\pi _{D} + \\pi _{R}\\right)R_0$ , then $\\beta _P=\\beta _C = 0.7\\left(\\pi _{D} + \\pi _{R}\\right)R_0=0.14606$ given a basic reproduction rate $R_0$ of COVID-19 equal to 2.9 for Italy.https://en.wikipedia.org/wiki/Basic_reproduction_number." 
], [ "Calibration of economic part", "The calibration of parameters governing the relationship between income and mobility are based on the Italian experience in the period February 15, 2020 - May 31, 2021 reported in Figure REF .", "Figure: The relationship between weakly mobility for workplace and weakly economic activity in the period February 15, 2020 - May 31, 2021 (Italian holiday weeks are not reported).", "Dashed lines indicate weeks of new imposed mobility restrictions at national level (March 9, 202, March 22, 2020, October 8, 2020 and October 24, 2020) and of a relax in mobility restrictions (May 4, 2020, May 18, 2020, and November 24, 2020).", "Source: Google Mobility Trend (https://www.google.com/covid19/mobility/) and OECD Weekly Tracker of GDP growth (https://www.oecd.org/economy/weekly-tracker-of-gdp-growth/)Italian economic activity as estimated by OECD Weekly Tracker of GDP growthhttps://www.oecd.org/economy/weekly-tracker-of-gdp-growth/.", "appears very correlated with mobility for workplaces as reported by Google Mobility Trend.", "https://www.google.com/covid19/mobility/).", "The strong drop in mobility in the period between February 23, 2020 and March 8, 2020 (almost - 10%) well before the first introduction of mobility restrictions at national level in the week of March 8, 2020, supports our idea of an endogenous response of individual to epidemic evolution, which burst in Italy at the of February 2020.", "The severe restrictions on mobility imposed in two steps in March 2020 leaded to a drop in mobility and economic activity of about 70% and 25% with respect to reference period respectively.", "The relax in restriction in May 2020 led to a bounce back in both variables, but recover was not complete.", "In the autumn of 2020, as result of the second pandemic wave, Italy again experienced new mobility restrictions, with associated reduction in economic activity.", "Normalizing economic activity and mobility to 1 in an economy with only susceptible and taking (REF ) and (REF ) to formulate a (nonlinear) relationship between mobility and economic activity, a nonlinear estimation procedure produces an estimate of $g$ , $a_0^{SR}$ and $a_1^{SR}$ of 0.70229, 0.29805 and 7.74162 respectively.", "$a_0^{I}$ and $a_1^{I}$ are set to 0.49160 and 0.29805 to respect the assumption that mobility of infected individual is 70% of susceptible.", "As regard $P_0$ and $P_1$ , they are set observing that, according to (REF ) and (REF ), average propensity to consume can be expressed as a function of consumption mobility, $P_0$ , and $P_1$ .", "Taking as proxy for consumption mobility the mobility for retail and recreation from Google Mobility Trendhttps://www.google.com/covid19/mobility/.", "and the quarterly average propensity to consume from Italian national account, we estimate $P_0=0.47187$ and $P_1=0.12828$ .", "Finally, the utility of state dead $M$ is set equal to $-1.3$ to avoid that, independent of state of epidemic and economic activity, lifetime utility of survival individuals can be negative." 
], [ "“Dumb” SIRD versus economic SIRD (ESIRD) model", "Table REF and Figure REF highlight the importance of considering endogenous mobility choice in the analysis.", "In particular, the comparison between the “dumb” SIRD, where mobility of susceptible, infected and recovered is maintained constant for the whole period of simulation and equal to their initial baseline values, and the ESIRD model, where individual mobility is decided in an optimizing framework without any imposed restriction, points out the 30% more cumulative deaths of dumb SIR as opposed to a lower drop in mobility and production (both as peak and as cumulative impact).", "After 425 days from its outbreak epidemic is substantially ended in both models, i.e.", "$\\mu _I$ is almost zero, but the optimized mobility of individual in ESIRD has led to a non negligible mass of susceptible equal 31.4% in day 425 and substantially lower death rate (0.5% versus 0.7%).", "Figure: Comparison between “dumb” SIRD model versus SIRD model with endogenous mobility.", "Numerical experiments based on the parameters reported in Table ." ], [ "Questioning the ESIRD", "In this section we discuss how our framework could be used to evaluate alternative policies of mobility restriction.", "The high peak prevalence reported for ESIRD in Table REF explains why several countries imposed strong mobility restrictions in 2020.", "A peak of infected of 5,858,062 individuals would correspond to a need of about 398,749 beds in hospitals, taking 6.8% the proportion of infected individuals hospitalised [49].", "For example, Italy in February 2020 had about 190,000 available beds in hospital, making “laissez faire” approach to COVID-19 not practicable (not considering the advantage to take time in waiting for a vaccine).", "In the following we therefore study some mitigation strategies as defined in [24], i.e.", "“to use NPIs (non-pharmaceutical intervention) not to interrupt transmission completely, but to reduce the health impact of an epidemic” in the hope (as it is effectively happened) of a rapid development of a vaccine.", "In particular, we will focus on policies that, by increasing mobility costs ($\\gamma $ s), reduce individual mobility and therefore the infection rate and the peak prevalence.", "In this regard, [42] provide strong evidence that reducing mobility is the key factor for bringing down COVID-19 transmission, while [51] present scenario analysis based on different mobility in Italy.", "At the same time, reduction mobility hurts production, putting policy maker in front a trade-off between economic losses and fatalities due to COVID-19, i.e.", "it is possible to point out a pandemic possibilities frontier as in [34] and [1].", "However, we add two dimensions in the discussion, the first related to the share of remaining susceptible at the end of the period of analysis, which could make easier a fresh outbreak of epidemic in the future, and the second related to the social feasibility of some policies based on a long reduction of individual mobility.", "Table REF reports the effect of different policies increasing (in the same percentage) the cost of mobility for production and consumption with respect to the baseline model when the share of infected individuals exceeds 3% and to maintain this increase until the share of infected individual gets down to 0.5% or to 0.1% in the more severe scenario (mrs).", "Table: Alternative scenarios of restriction of mobility (severity of lockdown) and exit from these restrictions (mrs adopts a more 
strict threshold for relaxing the restrictions).", "Numerical experiments based on the parameters reported in Table .", "Peak prevalence decreases up to a rise of 30% in mobility costs and is then almost insensitive to further increases (see Table REF ).", "A peak prevalence of 1,275,206 individuals would amount to a need of 86,801 hospital beds.", "Numerical investigations not reported here show that decreasing this peak prevalence would require starting the mobility restrictions at a share of infected individuals lower than 3%.", "However, increasing mobility costs also has a growing negative impact both on economic activity and on the death rate.", "This trade-off is represented in Figure REF , which corresponds to the pandemic possibilities frontier discussed in [34] and [1] but calculated in a very different theoretical framework.", "We can appreciate from Figure REF how the scenario with 30% of additional cost and an exit threshold of 0.1% from mobility restrictions Pareto dominates the scenario with 30% of additional cost and an exit threshold of 0.5%.", "Figure: Trade-offs in alternative scenarios of mobility restrictions and exit from these restrictions.", "Numerical experiments based on the parameters reported in Table .", "Figure: Dynamics of the infection rate in different scenarios of mobility restriction (severity of lockdown) and exit from these restrictions (threshold for relaxing the restrictions).", "Numerical experiments based on the parameters reported in Table .", "However, the former scenario presents two additional unfavourable characteristics with respect to the latter.", "First, as reported in Figure REF , the share of susceptibles 425 days after the outbreak of the epidemic is substantially higher (82.4% versus 73.3%); moreover, as highlighted by Figures REF and REF , it requires a prolonged period of mobility restrictions (almost one year!).", "In this respect, the scenarios with 30% of additional cost and an exit threshold of 0.5% or with 50% of additional cost and an exit threshold of 0.1% endogenously present a succession of periods with and without mobility restrictions, making these scenarios more socially feasible.", "Figure: Dynamics of the epidemic and of the main economic variables in alternative scenarios of mobility restrictions and exit from these restrictions.", "Numerical experiments based on the parameters reported in Table .", "We conclude by observing that, even though individuals are perfectly informed about the restriction policy and about the behaviour of the pandemic, several scenarios include waves of infections, as a result of the endogenous switching between a regime with mobility restrictions and one without any restriction (see, e.g., Figures REF -REF )."
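The state-dependent restriction regimes explored in this section amount to a simple switching rule on mobility costs; the following minimal sketch (ours, with placeholder thresholds matching the scenarios described above) makes the rule explicit.
def cost_multiplier(prevalence, restrictions_on, extra_cost=0.30,
                    entry_threshold=0.03, exit_threshold=0.005):
    """Scale mobility costs by (1 + extra_cost) while restrictions are active.

    Restrictions switch on when prevalence exceeds entry_threshold (3%) and stay on
    until prevalence falls below exit_threshold (0.5%, or 0.1% in the 'mrs' scenarios).
    """
    if restrictions_on:
        restrictions_on = prevalence > exit_threshold
    else:
        restrictions_on = prevalence > entry_threshold
    return (1.0 + extra_cost if restrictions_on else 1.0), restrictions_on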
], [ "Concluding remarks", "We provide a dynamic macroeconomic general equilibrium model with pandemic, denoted ESIRD, where perfect-foresight forward looking agents' (short-term) mobility positively affects their income (and consumption), but also contributes to the spread of pandemic in an extended SIRD model.", "Dynamics of economy and pandemic is jointly driven by strategic complementaries in production and negative externalities on infection rates of individual mobilities.", "We therefore address one of the main economic-driven leverages of compartmental epidemiological models, i.e.", "the endogenization of reproduction rate of epidemic [4].", "After having proved the existence of a Nash equilibrium and studied the recursive construction of equilibrium(a), we conduct some numerical investigations on the forward-backward system resulting from individual optimizing behaviour, calibrating model's parameters on Italian experience on COVID-19 in 2020-2021.", "In our ESIRD model the forward-looking behavior of agents tends to smooth the peak prevalence of pandemic with respect to the simplest SIRD model with “dumb” agents, but in our numerical explorations peak prevalence appears to be still too high to be sustainable for the Italian health system (e.g.", "in relation to the number of available beds in hospital).", "Once establish that self-regulation of individual mobility decisions is not sufficient to manage the pandemic, we evaluate different regimes of mobility restrictions, which can be easily accommodate within our theoretical framework.", "In particular, we argue that regimes compatible with the saturation of healthcare system must be evaluated in terms of trade-off between economic losses and fatalities as proposes, e.g., by [34], [1], but also for their social feasibility of maintaining prolonged periods of mobility restrictions and for leaving higher shares of susceptible at the end of the period, which makes fresh outbreak of epidemic more likely.", "In this respect, we point out that successive small waves of epidemic can be the result of an efficient regime of mobility restrictions.", "Our analysis raises a series of issues for future research.", "We ignore heterogeneity of population in terms of “risk groups” (typically, in case of Cavid-19, age cohorts, see [46] and [1]), and therefore we cannot evaluate any policy conditioned to individual characteristics, as, for instance, done by [9] or [26].", "We also focus on a world before the vaccine, that is standard in this kind of models [8] and consistent with the period used to calibrate the model.", "However, in a world with vaccine, or with an expected date of its availability, different questions arises for the timing, targets and costs of vaccination [31] as well as on the timing of mobility restrictions.", "Finally, we did not include other non-pharmaceutical interventions, and in particular we do not model testing policies, as, for instance, in [19].", "Some extensions of empirical analysis appear very promising.", "Firstly, the possibility to study scenarios where mobility restrictions are (mostly) focused on mobility for production or on mobility for consumption.", "For example, in Europe the second waves of restrictive measures in the period Oct 2020 - May 2021, largely revolved around mobility for consumption.See for instance, for France, JORF 0080, 3 April 2021, Text 28, https://www.legifrance.gouv.fr/jorf/id/JORFTEXT000043327303.", "A second extension concerns the more precise estimation of the relationship between 
individual mobility, aggregate mobility and production in presence of strategic complementarities, which poses non trivial issue of identification [39].", "From the theoretical point of view, a general question on the proposed theoretical framework is if the Markov equilibrium can be found in a generalized feedback form, where the closed-loop strategies $\\Theta $ depends not only on time and state of each agent, but also on the current distribution of the other agents.", "A possible answer is to look at the Master Equation associated to our model, which, in turn, could be a first step to obtain stronger properties of the equilibria, like subgame perfection, and, possibly, uniqueness (see, e.g., Section 1.4 in [11])." ], [ "Procedure of simulation", "NOTE: In this appendix the notation is lightened from that used in the body of the article to avoid making the formulas too heavy thinking and difficult to read.", "Set initial value of utilities and their feasible range.", "We argue that as $t$ goes to infinitum the number of infected agents converges to zero, i.e.", "$\\lim _{t \\rightarrow \\infty }\\mu \\left(t,I\\right)=0$ and $\\lim _{t \\rightarrow \\infty }\\mu \\left(t,R\\right)\\gg 0$ and the lifetime utilities is maximum.", "Then: $U(S)^{\\max } = \\lim _{t \\rightarrow \\infty } U(t,S) & = \\lim _{t \\rightarrow \\infty } \\frac{\\kappa (t,\\mu (t,I),SR) + \\ln (1+\\bar{Z}(t)) }{{\\rho }} = \\\\ \\nonumber =& \\dfrac{\\kappa (\\infty ,0,SR) + \\ln \\left(1+ 1/\\gamma _p(\\infty ,0,SR) - a^{SR}_0/a^{SR}_1 \\right)}{{\\rho }}; \\\\U(R)^{\\max } = \\lim _{t \\rightarrow \\infty } U(t,R) & = \\lim _{t \\rightarrow \\infty } \\frac{\\kappa (t,\\mu (t,I),SR) + \\ln (1+\\bar{Z}(t)) }{{\\rho }} = \\\\ \\nonumber =& \\dfrac{\\kappa (\\infty ,0,SR) + \\ln \\left(1+ 1/\\gamma _p(\\infty ,0,SR) - a^{SR}_0/a^{SR}_1 \\right)}{{\\rho }}; \\\\ \\nonumber U(I)^{\\max } = \\lim _{t \\rightarrow \\infty } U(t,I) & = \\dfrac{{\\rho }\\kappa (\\infty ,0,I) + {(1-\\rho )} \\pi _R \\kappa (\\infty ,0,SR)}{{\\rho }\\left[1- {(1-\\rho )}\\left(1-\\pi _R-\\pi _D\\right)\\right]} + \\\\+ & \\dfrac{\\left[1-{(1-\\rho )}\\left(1-\\pi _R\\right)\\right]\\ln \\left(1+ 1/\\gamma _p(\\infty ,0,SR) - a^{SR}_0/a^{SR}_1 \\right)}{{\\rho }\\left[1- {(1-\\rho )}\\left(1-\\pi _R-\\pi _D\\right)\\right]}.$ where: $\\kappa (t,\\mu _I,SR) := \\ln \\left(\\dfrac{a^{SR}_1}{\\gamma _p(t,\\mu _I,SR)}\\right) + \\gamma _p(t,\\mu _I,SR)\\dfrac{a^{SR}_0}{a^{SR}_1} + \\ln \\left(\\dfrac{P_1}{\\gamma _c(t,\\mu _I,SR)}\\right) +\\gamma _c(t,\\mu _I,SR)\\dfrac{P_0}{P_1} - 2;$ and $\\kappa (t,\\mu _I,I) := \\ln \\left(\\dfrac{a^{I}_1}{\\gamma _p(t,\\mu _I,I)}\\right) + \\gamma _p(t,\\mu _I,I)\\dfrac{a^{I}_0}{a^{I}_1} + \\ln \\left(\\dfrac{P_1}{\\gamma _c(t,\\mu _I,I)}\\right) +\\gamma _c(t,\\mu _I,I)\\dfrac{P_0}{P_1} - 2.$ Feasible range.", "Feasible range is defined as follows: $T:= \\left\\lbrace \\left(x,y,z\\right) \\in \\left(0,U(R)^{\\max }\\right) \\times \\left(0,U(I)^{\\max }\\right) \\times \\left(0,U(R)^{\\max }\\right): y \\le x \\le z \\right\\rbrace .$ Set initial conditions of population at period 0 with $\\epsilon $ very small $\\mu (0,S) & = 1-\\epsilon ; \\\\\\mu (0,I) & = \\epsilon ; \\\\\\mu (0,R) & = 0; \\\\\\mu (0,D) & = 0.$ Set the initial value of utilities in the three states in the feasible range $T$ by choosing $\\delta ^I, \\delta ^S, \\delta ^R \\ge 0$ .", "$U(0,R) & = U(R)^{\\max }(1-\\delta ^R); \\\\U(0,S) & = U(0,R)(1-\\delta ^S); \\\\U(0,I) & = U(0,S)(1-\\delta ^I)$ Calculate $a(0)$ and $b(0)$: $a(0) & 
=\\beta _P \\times \\mu (0,I) \\times \\overline{\\theta }_p(0,\\mu (0,I),I) \\\\b(0) & =\\beta _C \\times \\mu (0,I) \\times \\overline{\\theta }_c(0,\\mu (0,I),I),$ where $\\overline{\\theta }_p(0,\\mu (0,I),I) & = 1/\\gamma _{p}(0,\\mu (0,I),I) - \\dfrac{a^I_0}{a^I_1} \\text{ and}\\\\\\overline{\\theta }_c(0,\\mu (0,I),I) & = 1/\\gamma _{c}(0,\\mu (0,I),I) - \\dfrac{P_0}{P_1}.$ Find $\\Delta U(1,S,I) := U(1,S) - U(1,I)$ by solving the following implicit equation $0 &= -{(1-\\rho )} \\left(1-\\pi _R -\\pi _D\\right) \\Delta U(1,S,I) + \\left(1-\\pi _R -\\pi _D\\right) U(0,S) - U(0,I) +\\pi _R U(0,R) + \\\\ \\nonumber & -\\pi _R \\kappa (1,\\mu (1,I),R) + \\kappa (1,\\mu (1,I),I) - \\left(1-\\pi _R -\\pi _D\\right)\\chi \\left(\\Delta U(1,S,I)\\right) + \\pi _D \\ln \\left(1 + Z\\left(\\Delta U(1,S,I) \\right)\\right),$ where $\\chi \\left( \\Delta U(1,S,I)\\right) &:= \\\\ \\nonumber & \\ln \\left(\\dfrac{a^{SR}_1}{\\gamma _p(1,\\mu (1,I),S) + {(1-\\rho )} a(0) \\Delta U(1,S,I)}\\right) + \\\\ \\nonumber &+\\dfrac{a^{SR}_0}{A^{SR}_1}\\left\\lbrace \\gamma _p(1,\\mu (1,I),S) + {(1-\\rho )} a(0) \\Delta U(1,S,I)\\right\\rbrace + \\\\ \\nonumber &+ \\ln \\left(\\dfrac{P_1}{\\gamma _c(1,\\mu (1,I),S) + {(1-\\rho )} b(0) \\Delta U(1,S,I)}\\right) + \\\\ \\nonumber &+ \\dfrac{P_0}{P_1}\\left\\lbrace \\gamma _c(1,\\mu (1,I),S) + {(1-\\rho )} b(0)\\Delta U(1,S,I)\\right\\rbrace -2 \\\\ \\nonumber $ and $\\bar{Z}(0) = Z\\left(\\Delta U(1,S,I) \\right) & = \\mu (0,S) \\times \\left[\\frac{1}{\\gamma _p(1,\\mu (1,I),S)+{(1-\\rho )} a(0) \\Delta U(1,S,I)}-\\frac{a^{SR}_{0}}{a^{SR}_{1}}\\right] + \\\\& + \\mu (0,I) \\times \\overline{\\theta }_p(0,\\mu (0,I),I) + \\mu (0,R) \\times \\overline{\\theta }_p(0,\\mu (0,I),R).$ where $\\overline{\\theta }_p(0,\\mu (0,I),R) & = 1/\\gamma _{p}(0,\\mu (0,I),R) - \\dfrac{a^{SR}_0}{a^{SR}_1} \\text{ and}\\\\\\overline{\\theta }_c(0,\\mu (0,I),R) & = 1/\\gamma _{c}(0,\\mu (0,I),R) - \\dfrac{P_0}{P_1}.$ Calculate the level of movement of susceptible $\\overline{\\theta }_p(0,\\mu (0,I),S) &=\\frac{1}{\\gamma _p(0,\\mu (0,I),S)+{(1-\\rho )} a(0) \\Delta U(1,S,I)}-\\frac{a^{SR}_{0}}{a^{SR}_{1}}; \\\\\\overline{\\theta }_c(0,\\mu (0,I),S) &=\\frac{1}{\\gamma _c(0,\\mu (0,I),S)+{(1-\\rho )} b(0) \\Delta U(1,S,I)} -\\frac{P_{0}}{P_{1}}.$ Calculate the level of utilities at period 1 $U(1,R) & = \\dfrac{U(0,R) -\\ln \\left(1 + \\bar{Z}\\left(0\\right)\\right) - \\kappa (0,\\mu (0,I),R)}{{1-\\rho }}; \\\\U(1,I) & = \\dfrac{U(0,I)-\\pi _R U(0,R) +\\pi _R \\kappa (0,\\mu (0,I),R) -\\kappa (0,\\mu (0,I),I) - \\left(1-\\pi _R\\right)\\ln \\left(1 + \\bar{Z}\\left(0\\right)\\right)}{{(1-\\rho )}\\left(1-\\pi _R-\\pi _D\\right)} ;\\\\U(1,S) & = U(1,S,I) + U(1,I) ;$ Upgrade the composition of population $\\mu (1,S) &= \\mu (0,S) \\left[1-a(0)\\overline{\\theta }_p(0,\\mu (0,I),S)- b(0)\\overline{\\theta }_c(0,\\mu (0,I),S)\\right],\\\\\\mu (1,I) &= \\mu (0,S)\\left[a(0)\\overline{\\theta }_p(0,\\mu (0,I),S)+ b(0)\\overline{\\theta }_c(0,\\mu (0,I),S)\\right]+\\mu (0,I)(1-\\pi _R-\\pi _D),\\\\\\mu (1,R) &= \\mu (0,I)\\pi _R+\\mu (0,R),\\\\\\mu (1,D) &= \\mu (0,D) + \\mu (0,I)\\pi _D.$ Check if Condition (REF ) is satisfied.", "If not start with a new set of $\\delta $ s at point D. If Condition (REF ) is satisfied and the number of periods is lower of a given threshold repeat points E-I by taking the new level of $\\mu $ s at point I." ] ]
2107.01746
[ [ "On the quantum tunneling time: Instantaneous, finite or probabilistic?" ], [ "Abstract Quantum particles interacting with potential barriers are ubiquitous in physics, and the question of how much time they spend inside classically forbidden regions has attracted interest for many decades.", "Recent developments of new experimental techniques revived the issue and ignited a debate with often contradictory results.", "This motivates the present study of an exactly solvable model for quantum tunneling induced by a strong field.", "We show that the tunneling dynamics can depart significantly from the scenario in which the barrier-traversal time is zero or very small.", "However, our findings do not support the idea of a well-defined tunneling time either.", "Our numerically exact results should help in finding a consensus about this fundamental problem." ], [ "Introduction", "The tunnel effect [1], [2], [3] has captured the imagination of physicists from the early days of quantum mechanics.", "Perhaps because of the lack of a classical analogue, one question in particular attracted a lot of attention; namely how long is the time a quantum particle spends inside a potential barrier (see e.g.", "[4], [5], [6]), i.e.", "in the spatial region that is inaccessible to it in classical physics.", "This problem was studied numerous times [7] in various physical contexts from the scattering on a one-dimensional potential barrier [8] to the mapping of electron trajectories in attosecond angular streaking experiments [9], [10].", "It is fair to say that the physics community has not yet completely agreed on the answer.", "On the contrary, recent development of ultra-precise techniques in the strong-field physics [9], [11] reignited the debate as experiments produced often contradictory results.", "The literature dealing with the tunneling dynamics is extensive.", "To give a few representative examples, there are, very roughly speaking, three schools of thought concerning the subject.", "Some experiments produced evidence that the time needed to traverse a potential barrier is zero, negligible [12], [13], [14], [11] or at least very small [15].", "Others maintain that it takes a certain finite amount of time before a particle emerges from its “quantum tunnel through a potential barrier” [16], [17], [10], [18], [19], [20].", "Yet other authors hold the idea that characterization in terms of a sharply defined tunneling time is not suitable or useful for what is an inherently non-classical effect [21], [22].", "One possible reason that this controversy is difficult to settle is that sophisticated strong field experiments [9], [11] require an interpretation model [23] to give a meaning to the measured data, e.g.", "one has to map the location of a detected free electron to its classical trajectory [13], [14] in order to deduce the time at which the electron was released from an atom.", "Theoretical approaches, such as numerical solutions to the time-dependent Schrödinger equation [24], [25], have their own challenges, too.", "Subtle differences between definitions, including traversal, dwell, and reflection time [8], as well as the point of exit [14], [26], and multielectron effects [27] add further dimensions to the discussion.", "Some of the frequently discussed approaches for the tunneling time are Larmur time, Buttiker-Landauer time [6], the Eisenbud-Wigner time [28], Pollak-Miller time [29], Wigner time [30] and Bohmian time [31], [32].", "While the first four of these do not have a straightforward 
application in the situation discussed in this work (due to differences in the physical setting and/or simplicity of our model), Wigner and Bohmian times could apply to our scenario, with the Wigner time being perhaps the most relevant.", "Inspired by the ongoing debate, we present a theoretical study of a simple, but exactly solvable model that allows one to accurately characterize the time-dependent wavefunction of a tunneling particle.", "Of course, simple systems have been employed in this problem before (see e.g.", "[33], and the scenario investigated in  [34] is similar in spirit to ours).", "Here we want to concentrate on tunneling invoked by an external field.", "Our approach makes it possible to study the dynamics in the regime of a nearly opaque, spatially very long barrier that is difficult to address by other methods, and which greatly emphasizes the quantum nature of the effect.", "In particular, we are able to investigate the dynamics for very weak field strengths, where the distinction between instantaneous and delayed tunneling is clearer, and where it is easier to identify the effective exit point from the quantum tunnel." ], [ "Model", "The rationale for choosing the model for our investigation is the need to eliminate the uncertainties that one inevitably encounters in the numerical solution and in the interpretations of results for a more realistic system.", "We consider a toy model that can be solved exactly, and the solution of which can be accurately numerically evaluated.", "Let us assume a Stark problem given by a Hamiltonian for a one-dimensional particle, $H &= -\\frac{1}{2} \\frac{d^2}{d x^2} - V_0 \\ \\ \\ \\ \\text{for} \\ \\ \\ -L \\le x \\le 0 \\nonumber \\\\H &= -\\frac{1}{2} \\frac{d^2}{d x^2} - F x \\ \\ \\ \\ \\text{for} \\ \\ \\ x > 0 \\ ,$ where the field strength $F>0$ and the depth of the potential well in the left half-space is constant (with positive $V_0$ ).", "To select the domain of Hamiltonian (REF ), we assume $\\partial _x \\psi (x\\!\\!=\\!\\!-L)\\!=\\!0$ .", "At $x\\!=0\\!$ we require the functions belonging to the domain of the operator to be continuous, together with their first derivative.", "This model is inspired by a metal nano-tip exposed to a strong external field as realized in experiments by irradiation by strong optical pulses [35], [36].", "Here we consider a constant field strength, and concentrate on time-scales much shorter than the optical cycle.", "The potential well represents a partially filled conduction band, so that the energy of $W = -V_0$ represents the bottom of the band, and the states in the vicinity of the Fermi level will contribute most to the tunneling current.", "While the limit $L\\rightarrow \\infty $ can be taken, we shall evaluate our illustrations for a finite $L$ , and note that a concrete choice of its value plays no significant role in our observations.", "We will examine this model also in a complementary limit representing a different physical situation.", "Namely, we take a small-$L$ and large $V_0$ limit, such that there exist exactly one bound state with the energy equal to $1/2$ .", "This is to mimic quantum tunneling from atoms exposed to strong fields.", "The scenario we are to examine is as follows.", "We assume that the system is prepared in the absence of the external field, i.e.", "we have $F=0$ , and the initial wavefunction is selected as one of the bound eigenstates.", "We denote its energy by $Q$ .", "Then, at time $t=0$ we suddenly switch the field on to $F>0$ , and then follow the 
evolution of the wavefunction.", "Any positive value of the the field strength $F$ creates a finite but possibly broad potential barrier through which the initial state can tunnel toward $x\\rightarrow \\infty $ .", "We are particularly interested in how long it takes for the particle to appear at the classical exit from the tunnel, i.e.", "at the location $x_\\text{exit}=-Q/F$ .", "More generally, it is desirable to understand the dynamics of the wavefunction in the classically allowed region $\\psi (x>x_\\text{exit},t>0)$ .", "For example, from the application standpoint it is important to understand the emitted electron bunches, including any limits for the duration of such pulses.", "The situation is schematically depicted in Fig.", "REF .", "Figure: Color online.Potential function for a 1D model of a metal nano-tip.For the initial condition given by a zero-field stationary state with energy of -|Q|-\\vert Q\\vert ,we aim to calculate the time-dependent wavefunction of the tunneling particle for locationsbeyond the classical tunnel exit at x exit x_\\text{exit}.Initial wavefunction.", "The initial condition is a zero-field eigenfunction corresponding to a negative energy $Q\\in (-V_0,0)$ .", "Convenient parameterizations, utilizing a real-valued wavenumber $k_Q = \\sqrt{2 (V_0 + Q)}$ , can be written as $\\psi _Q(x) &= \\frac{\\cos ( k_Q (L + x) )}{\\cos (k_Q L)} \\ \\ \\text{for} \\ \\ x<0 \\ \\ , \\ \\ \\ \\ \\\\ \\nonumber \\psi _Q(x) &= \\exp ( -\\sqrt{-2 Q} x) \\ \\ \\text{for} \\ \\ x>0 \\ ,$ and the eigenvalue equation for energy $Q$ is $k_Q \\tan ( k_Q L ) = \\sqrt{-2 Q} \\ .$ The evolution of the system starts from such a bound state after a sudden switch of the field $F$ from zero to a finite positive value.", "The instantaneous switch-on of the field gives a clear meaning to the time when the evolution starts.", "If there was a finite ramp-up time of $F(t)$ , it would not be clear exactly at what time does the tunneling process commence.", "Despite the fact that the tunneling probability grows exponentially with the field, the ramp inevitably introduces an uncertainty into the interpretation of $t=0$ .", "On the other hand, for an instantaneous turn on of the field, the dynamics we are going to encounter is non-adiabatic as we will appreciate shortly.", "Spectrum and eigenfunctions in non-zero field.", "Needless to say, ours is a text-book, exactly solvable system for which all Hamiltonian eigenstates can be easily obtained.", "For any positive $F$ , however small, the spectrum of our system becomes continuous, and it encompasses the whole real axis.", "The following is a suitable parameterization of the energy-eigenfunctions we shall utilize in what follows: $H \\psi _W(x) = &W \\psi _W(x) \\ , \\nonumber \\\\\\psi _W(x) = &\\frac{\\cos (k_W (L+x))}{\\cos (k_W L)} \\frac{1}{N \\sqrt{D^+ D^-}} \\ \\ \\ \\text{for} \\ \\ \\ x < 0 \\ \\ \\ \\nonumber \\\\\\psi _W(x) = &\\frac{-i}{N}\\sqrt{\\frac{D^-}{D^+}} Ci^+(\\alpha (x + W/F)) + \\\\ \\nonumber &\\frac{+i}{N}\\sqrt{\\frac{D^+}{D^-}} Ci^-(\\alpha (x + W/F))\\ \\ \\ \\text{for} \\ \\ \\ x > 0 \\ ,$ where $k_W = \\sqrt{2 (V_0 + W)}$ , $\\alpha = -(2 F)^{1/3}$ and $N=2^{7/6} F^{1/6}$ is a normalization factor.", "Functions $Ci^\\pm $ are combinations of the Airy functions, $Ci^\\pm (z) = Bi(z) \\pm i Ai(z) \\,$ and are chosen as such in order to express $\\psi _W$ as a superposition of the incoming and outgoing waves.", "Specifically, $Ci^+(\\alpha (x + W/F))$ behaves as an outgoing wave in the region of large positive $x$ .", 
"Functions $D^\\pm (W)$ are determined from the requirement of smoothness at $x=0$ .", "Asking for the continuity of the wavefunction and of its first derivative at $x=0$ , one obtains $D^+(W) &= \\frac{\\pi }{2 } \\left( Ci^{+^{\\prime }}(\\alpha W/F) - \\frac{k_W}{\\alpha } \\tan (k_W L) Ci^{+}(\\alpha W/F) \\right) \\nonumber \\\\D^-(W) &= \\frac{\\pi }{2 } \\left( Ci^{-^{\\prime }}(\\alpha W/F) - \\frac{k_W}{\\alpha } \\tan (k_W L) Ci^{-}(\\alpha W/F) \\right) \\ .$ Note that the zeros of these expressions, when analytically continued to complex plane, determine the location of the Stark resonances for the model under consideration.", "When $W=z$ is chosen such that $D^+(z)=0$ , the incoming part of the wavefunction vanishes, while the outgoing one has poles at a complex-valued energies.", "We will utilize the outgoing resonances to construct part of the wavefunction of the particle as it tunnels through the potential barrier.", "Note that the “atom-model” limit of $L\\rightarrow 0$ and $V_0\\rightarrow \\infty $ can be taken in the above formulas by taking $k_W \\tan (k_W L) \\rightarrow 1 .$ This limit introduces a contact or Dirac-delta interaction at the origin, and fixes the energy of the single bound state to $-1/2$ .", "Expansion in energy eigenstates We start by formulation of the time-dependent solution in the standard way, using the completeness of the Hamiltonian eigenfunctions.", "The eigenstates $\\psi _W(x)$ can be normalized to the Dirac-delta function in energy $W$ , so that we have a unity decomposition guaranteed to exist for the self-adjoint operator, $\\int \\!", "\\psi _W(x) \\psi _W(y) dW = \\delta (x-y) \\ ,$ which is the completeness relation that allows one to express an arbitrary initial wavefunction $\\phi (x)$ , like so $\\phi (x)\\!", "=\\!\\!", "\\int \\!\\!", "\\delta (x\\!-\\!y) \\phi (y) dy =\\!\\!", "\\int \\!\\!", "\\psi _W(x)\\!\\!", "\\int \\!\\!", "\\psi _W(y) \\phi (y) dy dW ,$ where we used the fact that $\\psi _W$ is real.", "The evolution of this initial condition at later times can be described with the expansion written like this $\\psi (x,t)\\!", "=\\!\\!", "\\int \\!\\!", "\\frac{e^{-i t W} \\psi _W(x)}{\\sqrt{D^-(W) D^+(W)}} A(W) dW$ with the overlap integral $A(W) = \\int \\!\\!", "\\sqrt{D^-(W) D^+(W)} \\psi _W(y) \\phi (y) dy .$ The $D^\\pm $ factors were distributed in the above such that we have analytic functions when continued into complex plane.", "Interested in the tunneling part of the solution for $x>0$ , we can write the outside component as $\\psi \\!", "=\\!\\!\\int \\!\\!", "\\frac{i e^{-i t W}}{N}\\!\\!", "\\left[\\!", "\\frac{ Ci^-(\\alpha x\\!+\\!", "\\frac{\\alpha W}{F}) }{D^-(W)} - \\frac{ Ci^+(\\alpha x\\!", "+\\!\\frac{\\alpha W}{F}) }{D^+(W)}\\!", "\\right]\\!\\!", "A(W) dW.$ The spectral amplitude $A(W)$ can be split into $A(W) = A_\\text{I}(W) + A_\\text{O}(W)$ where the overlap integral between $\\psi _W$ and $\\phi $ consists of the “inside” and “outside” (of the well) contributions.", "Specializing this for the chosen initial wavefunction $\\psi _Q(x)$ , we have an exact $A_\\text{I}\\!=\\!\\frac{\\sqrt{-2 Q} - k_W \\tan (k_W L) }{2 N (Q-W)}$ and $&A_\\text{O}\\!=\\!", "\\frac{-i}{N} \\int _{0}^\\infty \\!\\!\\!\\!", "dy\\ \\exp [-\\sqrt{-2 Q} y] * \\\\ \\nonumber &\\left[ D^-(W) Ci^+\\left(\\alpha (y + \\frac{W}{F})\\right) - D^+(W) Ci^-\\left(\\alpha (y + \\frac{W}{F})\\right) \\right]$ that must be calculated numerically.", "Expansion in resonant states The expansion of the time-dependent wavefunction in 
terms of energy eigenstates (REF ) can be in principle evaluated.", "Unfortunately, numerical calculation of the integral in (REF ) is next to impossible due to extremely fast variation of the integrand.", "This is a challenge especially for very weak field $F$ that pulls the arguments of the embedded Airy functions to infinity.", "The fast integrand variation is caused by poles located extremely close to the real axis.", "These poles correspond to the Stark resonances, which may be viewed as eigenstates of a non-Hermitian Hamiltonian which has the same differential expression as (REF ) but has its domain specified by the asymptotic boundary condition which requires that the functions that belong to the domain behave as outgoing waves (also known as the Siegert boundary condition).", "We proceed to evaluate the formally exact expression for $\\psi (x,t)$ , by deforming the integration contour from that following the real axis to one for which the difficult to calculate part of the integral can be obtained from the Stark poles.", "We choose an integration contour $C\\!=\\!\\lbrace z\\in \\mathbb {C};\\text{Im}\\lbrace z\\rbrace \\!=\\!-s\\rbrace $ that parallels the real axis in the lower complex half-plane.", "Utilizing the residue theorem, the expression for the outside wavefunction can be written like so $\\psi (x,t)\\!", "=\\!", "-2 \\pi \\sum _{p} e^{-i t W_p}\\frac{Ci^+(\\alpha (x + W_p/F))}{N D^{+^{\\prime }}(W_p)} A(W_p)\\nonumber \\\\+\\int _C\\!\\!", "\\frac{i e^{-i t z}}{N}\\!\\!", "\\left[\\!", "\\frac{ Ci^-(\\alpha x\\!+\\!", "\\frac{\\alpha z}{F}) }{D^-(z)} - \\frac{ Ci^+(\\alpha x\\!", "+\\!\\frac{\\alpha z}{F}) }{N D^+(z)}\\!", "\\right]\\!\\!", "A(z) dz$ Here the set of poles $W_p$ that were crossed by deformation of the contour is finite and it depends on the choice of the parameter $s$ .", "The discrete sum above is a resonant-state expansion, and its purpose here is to replace the part of the integral that is most difficult to evaluate.", "Resonant series expansions similar to the discrete part of (REF ) were successfully used in systems exhibiting spontaneous decay without the external field (e.g. 
[37]).", "The situation is somewhat different for the Stark resonant states that arise due to the homogeneous external field.", "We therefore think it may be illustrative to discuss the general structure of the poles that are relevant for our resonant-state series.", "Figure: Color online.Solutions to the resonant-state eigenvalue equations.", "The contoursare zero lines for the real and imaginary parts of D + (W)D^+(W), and their intersectionsindicate the location of the resonant poles W p W_p.", "Three different families ofsolutions are indicated here and discussed in the text.There are three families of poles distributed in the lower half of the complex plane (with their counterparts related to the incoming resonant states living in the upper half-plane).", "They are illustrated in Fig.", "REF .", "The figure shows the contours defined by Re$\\lbrace D^+(W)\\rbrace =0$ in blue and Im$\\lbrace D^+(W)\\rbrace =0$ in red.", "Thus, the intersections of these contours mark the locations of the resonant-state poles.", "The first set are the B-poles; they correspond to the metastable states that arise from the zero-field bound states.", "The imaginary parts of these complex energies are tiny and therefore invisible on the scale of this figure.", "The second set of poles, marked with $A$ in the figure, correspond to the resonant states similar to the positive-energy states at $F=0$ .", "There are infinitely many of these poles, located below the real axis, with the imaginary part growing for the resonances with larger real parts.", "The third set of poles, indicated by $C$ in the picture, are located along the Stokes line of Airy functions.", "Again, there is infinitely many of them, each in the vicinity of zeros of $Ci^+(\\alpha z/F)$ .", "It should be noted that the parameters for this illustration were chosen such that only a few poles appear in each family.", "Specifically, this requires an extremely strong field, and a small potential well that only accommodates a few bound states.", "We choose our contour $C$ to run close to the real axis, but “below” B-poles.", "As $C$ continues into the half-plane $\\text{Re}\\lbrace z\\rbrace >0$ , it eventually crosses the line of A-poles, so the resonant-state expansion part of (REF ) is limited to those poles that are located between the real axis and contour $C$ .", "The specific choice of $C$ is informed by the numerics involved.", "As the contour drops deeper into the lower plane, the influence of the B-poles becomes less severe, and their contribution is replaced by the resonant-state sum.", "However, Airy functions grow exponentially fast away from the real axis, and this means that there exist huge cancellations in the numerical evaluation of various terms contributing to the contour integral.", "It is therefore most practical to keep $C$ not too distant from the real axis.", "Results shown here were obtained with s=3/100 which required about sixty poles contributing to the resonant-state series.", "We have performed all our calculations for several different values of $s$ in order to verify that the results are indeed independent of the choice for the contour." 
], [ "Tunneling from a discrete state", "We start our illustrations with the “atom model” case since it is much less computationally demanding.", "For this we take the large-$V_0$ plus small-$L$ limit specified in (REF ), so that there remains a single bound state with the energy of $Q=-1/2$ and the wavefunction $\\phi (x)=e^{-x}$ , that serves as our initial condition.", "Figure: Color online.The initial wavefunction of the particle as reproduced by the mixed representationutilizing a contour integral and a subset of outgoing resonant states ().In order to check that our expansion and numerical integration in (REF ) are properly normalized and can be evaluated with sufficient accuracy, we verify that for $t=0$ we indeed recover the initial wavefunction.", "This is illustrated in Fig.", "(REF ).", "The imaginary part of the recovered wavefunction should vanish, and our calculations gives values of the order of $10^{-8}$ .", "The error in the real part is on the level of a percent at the origin where it is largest.", "Concentrating on the question of the tunneling time, we observe the probability density as it evolves at a fixed point of observation.", "If one assumes that the so-called simple-man's picture of quantum tunneling applies, then we expect that the particle appears at $x_\\text{exit}=-Q/F$ instantaneously and it has zero velocity at the exit from the tunnel.", "Then, subject to acceleration by the external field, it moves away and arrives at a chosen observation location $x_\\text{obs}$ .", "The expected time of arrival based on this scenario is $t_\\text{eta}=\\sqrt{2(x_\\text{obs}-x_\\text{exit})/F}$ .", "First we evaluate the full wavefunction at the position $x=25$ which corresponds to the classical exit from the tunnel for the given applied field $F=1/50$ .", "Figure REF shows that the tunneling particle appears at the classical exit not instantaneously, but delayed.", "The real and imaginary parts of the wavefunction evaluated versus the observation time indicate that at earlier times the particle arrives most likely with a higher velocity that at later times.", "This is in line with previous works, e.g.", "[38], that have also shown that for a 1D model the tunneling ionization produces a non-zero outgoing momentum at the tunnel exit.", "Clearly, the simple-man's scenario does not apply for this specific situation as there is a pronounced delay and the arrival time exhibits a broad distribution.", "Figure: Color online.Tunneling from a single bound state; particle wavefunction is shown versus time at the locationcorresponding to the classical tunnel-exit point.In order to appreciate the arrival-timing of the particle at different observation locations, Fig.", "REF depicts the observed probability density versus time as detected at two locations.", "Arrows mark the expected time of arrival at the given location under the assumption that the simple man's scenario holds.", "When the peak of the probability density pulse is adopted as the location of the classical trajectory, it becomes evident that the particle arrives early at more distant locations, indicating that the initial velocity at the classical exit does not vanish, which is also in contrast to the simple man's scenario.", "Figure: Color online.Tunneling particle probability density versus time for two different observation locations xxfor a model with a single bound state taken as the initial wavefunction.", "The dashed linecorresponds to the location at the classical tunnel exit.Arrows mark the expected time of 
arrival for a particle that obeys the simple-man's scenario." ], [ "Tunneling from a quasi-continuum of states", "To illustrate our observations concerning the dynamics of the tunneling particle for the model of a metal nano-tip, we choose for $L=100$ , $V_0=25/68$ , $Q=-0.1848\\ldots $ and a very weak field $F=1/100$ .", "The depth of the potential well $V_0$ and the energy $Q$ of the initial stationary state are motivated by metal nano-tips, with $Q$ taken roughly corresponding to the typical work function of common metals.", "In contrast, the above field value (in atomic units) is significantly smaller than the typical fields achieved by irradiation by femtosecond optical pulses.", "As alluded to in the introduction, we concentrate on weak fields in order to make the potential barrier nearly opaque, and thus “amplify” the potential deviations from the scenario of the instantaneous tunneling.", "The first step in numerical evaluation of the expansion in (REF ) is to obtain the locations of poles, or solutions to the equation $D^+(W) = 0$ .", "This can be done by initiating the search around the energies close to the bottom of the potential well.", "Having found two and more poles one can estimate the location of the next by extrapolation, and subsequently find the root with working precision of up to two hundred digits.", "The high numerical accuracy during this and subsequent calculations is a must in order to obtain converged results.", "Depending on the contour chosen, several tens of resonant poles $W_p$ are needed for the chosen size of the potential well.", "The next step consists in calculating the different terms that originate in the integrand of (REF ), in particular the wavefunction overlaps $A(W_p)$ and the pole-residue values $D^{+^{\\prime }}(W_p)$ .", "Obviously, high-precision arithmetic must be employed.", "Having found the complex resonant energies and the corresponding wavefunction overlaps together with the pole residues, we can evaluate the resonant-expansion part of the wavefunction.", "Evaluation of the contour integral is straightforward but it requires a lot of numerical effort.", "We found that accurate tabulation of the integrand along the contour with very fine discretization along $z$ helps to speed up the calculation for the wavefunction as a function of time.", "Figure REF shows selected results for one of our simulations.", "For the chosen parameters, the exit from the tunnel is at $x_\\text{exit}=18$ , and the full black curve corresponds to this observation location.", "It reveals that the maximum of the probability density arrives significantly delayed with respect to $t_\\text{eta}=0$ .", "Obviously, in this case the tunneling time seems to be finite, and in fact rather far from instantaneous.", "Figure: Color online.Tunneling particle probability density versus time for different observation locations xx.Arrows mark the expected time of arrival for a particle that obeys the simple-man's scenario.Several other curves in Fig.REF show similar results for the observation locations at increasing distances from the tunnel exit.", "The arrows in the plot indicate the expected time of arrival in each case.", "One can see that while the particle seems to be delayed when observed close to the tunnel exit, it arrives earlier than expected when detected far from the tunnel.", "In other words, it moves faster than expected and this is due to the fact that in contrary to the simple-man's model it exhibits a non-zero velocity at the exit.", "One could argue that the 
classical exit point $x_\\text{exit}$ must not be taken as given by the nominal value $-Q/F$ , but should be adjusted.", "Indeed, classical trajectory fitted to our numerical results would indicate the effective tunnel-exit point at $x \\sim 5$ and the escape velocity of 0.12 in atomic units.", "Figure: Color online.Probability density for the tunneling particle versus time and observation location.The red horizontal line indicates the location of the tunnel exit for the given externalfield strength.A note concerning the interpretation of the curves shown in Fig.", "REF might be in order.", "The figure shows that the amplitude of the pulse decreases with the distance from the origin, while the duration only increases slowly.", "It might therefore seem that the total probability of finding the particle inside of the pulse may not be conserved.", "However, one has to consider the fact that the particle accelerates in the external field, and the pulse in fact delivers the same accumulated probability independently of the distance.", "This can be verified accurately when the wavefunction (or, more accurately, the probability density) is integrated over space for a fixed time; In our illustration the probability is conserved with an accuracy of a few parts in thousand.", "However, one should also note that our calculation evaluates the wavefunction as a whole and that is why it also contains a portion that remains bound on the time-scales shown in our illustrations.", "We think that the tiny variations in the total probability transported by the pulse actually reflect the finite accuracy achieved in our calculation, and that the temporal variations of the \"bound-part\" of the wavefunction are small in comparison.", "So keeping in mind the finite accuracy of the calculation, it is reasonable to say that these pulses represent mostly particles that are escaping toward infinity.", "In order to visualize the particle trajectory, Fig.", "REF shows the probability density plotted versus time and observation location.", "One can see that the particle's probability density concentrates along the parabola reflecting its classical acceleration.", "However, the slope at the intersection with the red horizontal line which marks $x_\\text{exit}$ indicates that the particle appears here delayed and with a non-zero velocity.", "Interestingly, the figure also brings to light that a well-defined wave-packet forms already before it reaches the classical exit point from the tunnel.", "In effect, there seems to be a distribution of the tunnel exits [39] and corresponding velocities.", "This is similar to the Wigner tunneling picture [40], but in our case a range of energies contribute to the wavefunction giving rise to a “fuzzy” particle trajectory, with parameters potentially different than those calculated for a fixed-energy propagator [40] suitable for an adiabatic regime.", "Also note that ours is a non-adiabatic regime when mapping to a single classical trajectory may not be fully adequate [41].", "Our results also illustrate that the wavepacket exhibits a quite well-defined duration, which can be perhaps surprising given that there is no inherent time-scale imposed by the external field as there would be in the excitation by an optical pulse.", "However, one can appreciate that there is a natural time-scale imposed by the energy spectrum of the states that contribute to the tunneling current.", "The broader is the initial energy spectrum the shorter is the resulting particle pulse.", "The duration of the 
emitted pulse is relevant for the applications of metal nano-tips as super-fast sources of electron bunches [36], [42].", "While ours is at best an idealized model for a fast electron source, it is reasonable to look at this result as an estimate for the “fastest achievable electron packet.” Moreover, our illustration clearly shows that the resulting particle pulse consists of a spectrum of energies and, accordingly, suffers from dispersion which will spread out the pulse upon further propagation." ], [ "Discussion", "This section is devoted to a brief discussion of a few technical issues relevant for the calculations underlying our results.", "First of all, it should be emphasized that the above example represents a generic behavior that we have obtained for a variety of model settings.", "We have found that whenever the field strength is weak, we see very pronounced deviations from the scenario of instantaneous tunneling.", "Yet another aspect the reader might be wondering about is that the expected exponential decay of the meta-stable state born out of the initial stationary state is not evident in our results.", "The explanation has to do with the fact that we look at relatively weak field strengths; the evanescent tail of the initial bound state is exceedingly small at $x=x_\\text{exit}$ unless its energy $Q$ is close to the very top of the potential well.", "The outgoing probability current induced in this state by the external field also occurs on a much slower time scale and it therefore appears negligible on the background of the fast wavepacket generated by the suddenly imposed field.", "For the given conditions, the decay rate of the metastable states energetically close to the initial wavefunction is of the order of $10^{-9}$ .", "The “tunneling pulses” seen in our examples exhibit velocities on the order of unity and result in an ionization rate of roughly $10^{-5}$ , so their contribution dominates by about four orders of magnitude.", "However, it is possible to create situations in stronger fields where these slow and fast components of the tunneled wavefunction are comparable.", "Last but not least, we touch upon the non-adiabatic aspect of the tunneling dynamics seen in our calculations.", "Clearly, it is the instantaneous switch-on of the field that is responsible for the broad energy spectrum of the resulting wavefunction at time $t\\rightarrow 0^+$ .", "One could ask if a more adiabatic turning on of the field can result in a more sharply defined tunneling time.", "Intuition suggests that when the field is turned on slowly the higher-energy components of the resulting state will be reduced and, consequently, the wave-packet will broaden more in time due to its smaller bandwidth.", "Moreover, the slow ramp-on of $F(t)$ introduces significant arbitrariness into the meaning of $t=0$ because only a sudden switch-on gives a clearly defined initial time for the evolution.", "So, while in terms of non-adiabaticity ours is a most extreme scenario that can only be approximated in an experiment, it shows that the way the system is excited introduces yet another aspect that complicates the notion of the classical barrier-traversal time."
], [ "Conclusion", "We have presented an exactly solvable model for an electron tunneling from a metal nano-tip exposed to an external field.", "Taking advantage of the non-Hermitian reformulation of the time-dependent problem, we were able to calculate the wavefunction for the field strengths that present a difficult challenge for more standard methods, including numerical solutions of the time-dependent Schrödinger equation.", "The time-dependent wavefunction revealed that the energy spectrum of the system imposes an inherent lower limit on the duration of the electron pulse emitted by the tip even when the field turns on suddenly.", "We have also seen that the energy spectrum of the emitted particle is wide and the tunneling wave-packets therefore experience strong temporal dispersion.", "However, our main motivation for this study was the currently debated question of the tunneling time, which some claim to be instantaneous, while others present evidence that it must be finite, and still others maintain that the quantum nature of the tunneling problem precludes a meaningful description in classical terms.", "As for the notion of the tunneling time, our results show that for our particular examples in one dimension and for a weak external field the tunneling dynamics exhibits a pronounced delay between the sudden switch-on of the field and the time when the particle can be detected at the point of the classical exit from the potential barrier.", "Moreover, the so-called simple man's scenario of quantum tunneling also does not apply because the particle has a significant velocity when it appears at the classical exit point.", "So in this exactly solvable system, one can safely say that the tunneling time is not instantaneous.", "However, our results do not support the idea of a well-defined tunneling time either.", "There are at least two reasons for this.", "First, it is obvious that at the point of classical exit, the temporal distribution of the probability density “pulse” is actually broader than the apparent tunnel time as defined by the time of arrival of the peak.", "In other words, the tunneling time is a stochastic quantity that perhaps could be better described by a probability distribution.", "Second, we have seen that a well-defined moving wave-packet forms actually before the particle reaches the classically allowed region.", "In hindsight this is not very surprising given the energy uncertainty caused by the fast switch-on of the field.", "Because of the energy spread, some components of the wavefunction experience the potential barrier as a classically accessible region.", "This makes the very notion of the tunnel exit rather ill-defined.", "By extension the utility of the tunneling time itself is limited at best.", "However, the time-dependent wavefunction exhibits a well-defined although fuzzy trajectory, and in this sense the behavior in our illustrations resembles the Wigner scenario [40].", "The question of how the results of this paper compare quantitatively to Wigner`s time is interesting, and will be addressed elsewhere.", "To conclude, we do not think that observations based on a simple 1D model can be an arbiter in any way between the different schools of thought and experiments concerning the problem of the tunneling time.", "We hope, however, that the lessons learned from an exactly solvable system will help to shed additional light on the debate, and that they will aid in building much needed intuition about the appropriate mix of quantum and classical in our 
understanding of the dynamics that play a role in a plethora of physical contexts.", "This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-18-1-0183." ] ]
2107.01737
[ [ "DeepRapper: Neural Rap Generation with Rhyme and Rhythm Modeling" ], [ "Abstract Rap generation, which aims to produce lyrics and corresponding singing beats, needs to model both rhymes and rhythms.", "Previous works for rap generation focused on rhyming lyrics but ignored rhythmic beats, which are important for rap performance.", "In this paper, we develop DeepRapper, a Transformer-based rap generation system that can model both rhymes and rhythms.", "Since there is no available rap dataset with rhythmic beats, we develop a data mining pipeline to collect a large-scale rap dataset, which includes a large number of rap songs with aligned lyrics and rhythmic beats.", "Second, we design a Transformer-based autoregressive language model which carefully models rhymes and rhythms.", "Specifically, we generate lyrics in the reverse order with rhyme representation and constraint for rhyme enhancement and insert a beat symbol into lyrics for rhythm/beat modeling.", "To our knowledge, DeepRapper is the first system to generate rap with both rhymes and rhythms.", "Both objective and subjective evaluations demonstrate that DeepRapper generates creative and high-quality raps with rhymes and rhythms.", "Code will be released on GitHub." ], [ "Fast", "Figure REF provides a rap generated by DeepRapper with fast beat frequency, which the frequency is 4.3.", "The rap express ones beat wished to his/her lover.", "The following is the translation of texts in   REF UTF8gbsn .", "Table: NO_CAPTIONFigure: Rap generated of fast beat frequency.", "Vowels in red color represents that the word rhymes with previous sentence.", "Bold word means a beat is aligned with the word." ], [ "Medium", "Figure REF provides a rap generated by DeepRapper with medium beat frequency, which the frequency is 2.6.", "The rap praises the times we live in.", "The following is the translation of texts in   REF Table: NO_CAPTIONFigure: Rap generated of medium beat frequency.", "Vowels in red color represents that the word rhymes with previous sentence.", "Bold word means a beat is aligned with the word.Table: NO_CAPTION" ], [ "Slow", "Figure REF provides a rap generated by DeepRapper with slow beat frequency, where the frequency is 2.1.", "The rap express ones relief from life.", "The following is the translation of texts in   REF Figure: Rap generated of slow beat frequency.", "Vowels in red color represents that the word rhymes with previous sentence.", "Bold word means a beat is aligned with the word.Table: NO_CAPTION" ], [ "Translation of Chinese Examples in the Paper", "Words in red are rhymes.", "UTF8gbsn" ], [ "Translation of Chinese in Figure 2", "  Table: NO_CAPTION" ], [ "Translation of Chinese in Figure 3", "  Table: NO_CAPTION" ], [ "Translation of Chinese in Figure 4", "  Table: NO_CAPTION" ], [ "Translation of Chinese in Figure 6", "  Table: NO_CAPTION" ] ]
2107.01875
[ [ "Highly efficient conversion of laser energy to hard X-rays in high\n intensity laser-solid simulations" ], [ "Abstract We present simulations which predict significantly higher laser to X-ray efficiencies than those previously found in high intensity (1e20-1e22 W/cm2) laser-solid simulations.", "The bremsstrahlung emission is shown to last for 10-100 ps, which is difficult to model with conventional particle-in-cell (PIC) codes.", "The importance of collective effects is also demonstrated, showing the limitations of Monte Carlo modelling in these systems.", "A new, open-source hybrid-PIC code with bremsstrahlung routines has been developed to model this X-ray production in 3D.", "Special boundary conditions are used to emulate complex electron refluxing behaviour, which has been characterised in 2D full-PIC simulations.", "The peak X-ray efficiency was recorded in thick gold targets, with 7.4% conversion of laser energy into X-rays of energy 1 MeV or higher.", "The target size is shown to play a role in the conversion efficiency and angular distribution of emitted X-rays, and a simple analytic model is presented for estimating these efficiencies." ], [ "Introduction", "When a high-intensity laser pulse strikes a solid target, the illuminated surface is ionised and forms a plasma layer.", "This plasma is further heated by the laser, injecting a large current of high energy (hot) electrons into the solid, with a roughly exponential energy distribution.", "[1] Multipetawatt laser facilities such as ELI [2] and Apollon [3] are expected to reach intensities between $10^{22}$ -$10^{23}$ Wcm$^{-2}$ , creating hot electrons over 100 MeV in energy.", "Such electrons could lead to efficient X-ray generation through either nonlinear Compton scatter (NCS) in the laser focus, or through bremsstrahlung as the electrons traverse the solid.", "These X-rays could act as a source for photonuclear reactions, [4] radiotherapy, [5] radiography [6] or in pair production for laboratory astrophysics.", "[7] As X-rays are emitted in the direction of motion for ultra-relativistic particles, angular distributions may also act as a diagnostic for electron motion and divergence within the solid.", "Calculating the conversion efficiency of hot electron energy into bremsstrahlung radiation is complicated by the existence of competing energy loss mechanisms.", "Some energy goes to ionisation energy loss, where hot electrons excite atomic electrons in the solid and raise the target temperature.", "The hot electron current will also draw a resistive return current response, establishing fields which reduce the hot electron energy and further heat the target (Ohmic heating).", "Upon reaching the target edge, the highest energy electrons will escape, but the build-up of negative charge beyond the target boundary forms a sheath field which reflects most electrons back in.", "[8] Electron refluxing provides an additional energy loss mechanism, as the sheath field strength can change during a reflux.", "[9] These processes reduce the energy available for bremsstrahlung radiation, and must be characterised.", "Several groups have already characterised the X-ray emission in laser-solid simulations by adding bremsstrahlung radiation to particle-in-cell (PIC) codes, and treating the solid as a cold, dense plasma.", "[10], [11], [12], [13], [14], [15], [16], [17] This approach has the advantage of directly modelling the absorption of laser energy in the pre-plasma, but requires a large number of computational macro-particles to 
suppress self-heating.", "[15], [18] In low resolution PIC simulations, electrons gain energy non-physically and bremsstrahlung routines allow this energy to be radiated away, producing a false X-ray emission.", "Due to high computational demands, these codes are typically restricted to short-pulse 2D simulations for thin targets, and are often not run long enough to capture the full bremsstrahlung emission.", "Previous attempts [13], [14] to characterise the bremsstrahlung efficiency with PIC codes have considered the energy radiated in 36 fs and 300 fs, but these run-times are insufficient to capture an emission on the order 10-100 ps.", "Other groups [19], [20], [21], [8] have used Monte Carlo codes like Geant4 [22], [23], [24] to model X-ray production in these systems.", "Electron injection characteristics are either modelled with PIC codes or assumed from the laser intensity, duration and focal spot size, and the bremsstrahlung emission is recorded as electrons propagate through the solid.", "Cross sections for bremsstrahlung, elastic scatter and ionisation energy loss are evaluated using the known values for electrons in solids, but each electron is treated independently and collective effects such as sheath-field energy loss, resistive electric fields and any generated magnetic fields are neglected.", "We have developed a hybrid extension[25] to the PIC code EPOCH,[18], [26], [27] including resistive fields and elastic scatter equations,[28], [29] with additional bremsstrahlung and Möller scatter algorithms adapted from Geant4.", "[22], [23], [24] This provides a similar functionality to the hybrid-PIC code LSP, [30] but in an open-source format.", "A brief discussion of the code is presented in Section , with technical details and benchmarking covered in the appendices.", "This code has allowed 3D simulation of the full bremsstrahlung emission with some collective effects, which cannot be done using traditional PIC or Monte Carlo codes.", "The bremsstrahlung radiation characteristics and boundary conditions are presented in Section , where we show significantly higher laser-to-X-ray efficiencies than in previous simulations which only modelled run-times under 500 fs.", "[14], [13] As the focus of this paper is energy loss within the solid, we will ignore the NCS X-rays associated with electron acceleration in the laser focal spot.", "[31], [32] Hybrid-PIC codes only simulate the hot electrons in laser-solid interactions, making them far less computationally expensive than traditional PIC codes.", "[28], [29] A hybrid field solver assumes the presence of a return current based on the temperature and resistivity of the solid, and adds in the corresponding fields without having to simulate the many cold particles in the solid density plasma.", "This cold plasma treatment is justified in metals and insulators, as target ionisation occurs quickly via pre-pulse heating, or field/collisional ionisation due to hot electrons.", "[33] Equations describing the evolution of the fields, temperature and resistivity of the solid are presented in Appendix .", "Originally, hybrid-PIC codes were designed to track hot electrons of lower energy than those of interest here, lacking bremsstrahlung radiation and using a continuous form of ionisation energy loss.", "[29] At higher electron energies, a more appropriate form of ionisation loss would also include discrete Möller scatter, where incident electrons can lose large amounts of energy creating secondary hot electrons ($\\delta $ -rays) by fully 
exciting atomic electrons from the background solid.", "Algorithms for bremsstrahlung radiation have also been included.", "The full details of the physics applied to hot electrons as they traverse the solid are given in Appendix .", "Additionally, a set of benchmarks for the code is provided in Appendix .", "When hot electrons reach a simulation boundary, those above a critical energy $\\kappa _\\text{esc} a_0 m_e c^2$ are removed from the simulation (escaped), while the rest are reflected (refluxed).", "Here, $m_e$ and $c$ describe the electron rest mass and the speed of light respectively.", "The normalised vector potential $a_0 \\approx 8.5\\times 10^{-6}\\sqrt{I_0 \\lambda ^2}$ describes the strength of the laser, with $I_0$ denoting the peak, cycle-averaged intensity ($\\text{Wm}^{-2}$ ), and $\\lambda $ the wavelength (m).", "Refluxing electrons have their total momentum reduced by $\\kappa _\\text{tnsa} a_0 m_e c$ on each re-injection, and are scattered through an angle randomly sampled from a uniform distribution between $\\pm 0.5 \\sigma _{\\langle \\Delta \\theta \\rangle }$ .", "The empirical parameters are assigned values $\\kappa _\\text{esc}=2$ , $\\kappa _\\text{tnsa}=2.7\\times 10^{-3}$ and $\\sigma _{\\langle \\Delta \\theta \\rangle }=23^\\circ $ .", "These values were taken from 2D full-PIC simulations of electrons refluxing in sheath fields, and are defined in Section REF ." ], [ "Simulation setup", "Hybrid-PIC simulations were run to model the hot electron to bremsstrahlung efficiency, $\\eta _{e\\rightarrow \\gamma }$ , for a variety of targets at different intensities.", "Unless stated otherwise, the hybrid-PIC simulations in this paper were run with cubic cells of side 0.7 $\\mu $ m, as this was found to be the largest cell-size which converged the electric and magnetic fields in test runs.", "To improve statistics, the bremsstrahlung cross section was increased by a factor of 10, and macro-photon weights were reduced by the same factor to conserve real particle number.", "The efficiency of laser energy to hot-electron energy was set to $\\eta _{l\\rightarrow e}=0.3$ .", "The initial background electron and ion temperatures were set to 300 K. Hot electrons were injected into the simulation through the $x_\\text{min}$ boundary, with spatial and temporal envelope functions, $f(r)$ and $g(t)$ respectively.", "A 2D Gaussian was used for $f(r)$ , characterised by a radial full-width at half maximum (fwhm), $r_{fwhm}$ .", "Similarly, a 1D Gaussian was used for $g(t)$ , described by the fwhm, $t_{fwhm}$ .", "To cut off low-weight macro-electrons, nothing was injected when $g(t)<0.1$ , or into cells with $f(r)<0.5$ .", "This gave a mean envelope of $\\langle fg \\rangle \\approx 0.41$ for cells injecting electrons, with a mean root envelope $\\langle \\sqrt{fg} \\rangle \\approx 0.61$ .", "The laser intensity was varied in the range $10^{20}$ -$10^{22}\\text{ Wcm}^{-2}$ , with $r_{fwhm} = 5$ $\\mu $ m, and $t_{fwhm}=40$ fs.", "1226 macro-electrons per timestep were injected into each cell which satisfied the envelope conditions.", "Macro-electrons were uniformly injected into a cone where the half angle was the smaller of $20^\\circ $ or $\\tan ^{-1}(\\sqrt{2/(\\gamma -1)})$ from Moore scaling,[34] where $\\gamma $ is the Lorentz factor of a given injected electron."
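The injection model above can be summarised in a few lines of Python. The sketch below samples one macro-electron: the envelope cut-offs and the cone half-angle follow the text, while the assumed wavelength, the exponential sampling of the kinetic energy about a local mean of $a_0 m_e c^2\\sqrt{fg}$, and the uniform-in-angle filling of the cone are illustrative assumptions rather than the exact EPOCH implementation.

import numpy as np

me_c2 = 0.511e6 * 1.602e-19            # electron rest energy [J]
wavelength = 0.8e-6                    # assumed laser wavelength [m] (not given in the text)
I0 = 1e22 * 1e4                        # peak intensity, 1e22 W/cm^2 converted to W/m^2
a0 = 8.5e-6 * np.sqrt(I0 * wavelength**2)
r_fwhm, t_fwhm = 5e-6, 40e-15

def envelopes(r, t):
    f = np.exp(-4 * np.log(2) * (r / r_fwhm) ** 2)
    g = np.exp(-4 * np.log(2) * (t / t_fwhm) ** 2)
    return f, g

rng = np.random.default_rng(0)

def sample_electron(r, t):
    """Return (kinetic energy [J], polar injection angle [rad]) or None if cut off."""
    f, g = envelopes(r, t)
    if f < 0.5 or g < 0.1:             # spatial/temporal injection cut-offs from the text
        return None
    mean_ek = a0 * me_c2 * np.sqrt(f * g)
    ek = rng.exponential(mean_ek)      # exponential energy spectrum (assumed sampling)
    gamma = 1.0 + ek / me_c2
    half_angle = min(np.deg2rad(20.0), np.arctan(np.sqrt(2.0 / (gamma - 1.0))))
    theta = half_angle * rng.uniform() # uniform in polar angle within the cone (assumption)
    return ek, theta

print([sample_electron(1e-6, 0.0) for _ in range(5)])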
], [ "Bremsstrahlung efficiency", "FIG.", "REF shows $\\eta _{e\\rightarrow \\gamma }$ evaluated for multiple $100\\times 100\\times 100$ m3 targets in 3D hybrid-PIC simulations.", "The target materials considered were Al, Cu, Sn and Au, and plastic CH targets are also shown plotted at atomic number $Z=2.7$ .", "The peak efficiency of hot electron energy to X-rays over 1 MeV occurs for the $10^{22}$ Wcm-2 shot on Au with $\\eta _{e\\rightarrow \\gamma }=0.25$ , which corresponds to a laser to X-ray efficiency of $\\eta _{l\\rightarrow \\gamma }=0.074$ .", "These simulations consider the full bremsstrahlung emission, and observe efficiencies higher than those reported from PIC simulations.", "Previous estimates[13], [14] for $\\eta _{l\\rightarrow \\gamma }$ in Al targets at $10^{22}$ Wcm-2 have ranged from $4\\times 10^{-6}$ to $8\\times 10^{-5}$ compared to 0.014 in these simulations, although the larger target size here also contributes to the greater efficiency.", "These high efficiencies are significant in experiments where bremsstrahlung is a background, suggesting measurement of X-rays from other processes (for example NCS) may be much more difficult than currently expected.", "To estimate the run-times required to capture the full bremsstrahlung emission for FIG.", "REF , the X-ray characteristics were found for different targets shot by a $10^{22}$ Wcm-2 pulse.", "FIG.", "REF shows the rate of X-ray production for photons over 1 MeV in energy.", "The emission lasts on the order of 10-100 ps, with a strong dependence on target shape when using reflux boundaries, as electrons in smaller targets spend less time between reflux events and lose energy faster.", "The emission from lower $Z$ targets lasts longer, as ionisation loss and bremsstrahlung have lower stopping powers in these targets.", "This plot shows X-rays created within the solid and not the X-rays measured outside, as the code lacks target self-attenuation from the photoelectric effect (although this is less important for X-ray energies of a few MeV or greater).", "Figure: Temporal distribution of bremsstrahlung radiation from hybrid-PIC simulations, with a laser of intensity 10 22 10^{22} Wcm-2 on cubic targets of various compositions and sizes (labelled l 3 l^3 for side-length ll).The angular distribution of X-ray energy also varied with target geometry as shown in FIG.", "REF , although no target reproduced the lobes observed by Vyskočil et al.", "[14] While the data does produce lobes when plotting energy per radian, $dE/d\\theta $ , in 3D simulations it is more appropriate to plot energy per steradian, $dE/d\\Omega $ which re-weights the bins and shows a dominant emission in the forwards and backwards directions.", "A novel angular distribution is observed for the small foil $10\\times 50^2$ m3 target, which shows some emission perpendicular to the injection direction.", "This is because electrons deflected into the perpendicular direction can travel for a long time and emit many X-rays before hitting another boundary and scattering away.", "The perpendicular emission is less visible in the large foil 50 m$\\times 1$ mm2 target as electrons experience reflux scatter less often, and so more energy is lost by the time they scatter into a perpendicular direction.", "FIG.", "REF also shows that magnetic $B$ fields reduce the emission.", "While $B$ fields cannot take energy from the electrons, it was found that their presence led to more energy loss by resistive fields.", "This suggests $B$ fields reduce electron 
divergence, leading to higher current densities and stronger electric fields through ().", "Figure: Angular distribution of bremsstrahlung radiation from hybrid-PIC simulations (10 22 10^{22} Wcm-2, Cu).", "The injection direction is given by the arrow, and the sign of p y p_y determines the deviation direction for the macro-photons.", "Target dimensions are labelled as l×w 2 l\\times w^2, where ll is the length parallel to electron injection, and the transverse area is w×ww\\times w. The dashed line data refers to a test where the magnetic field was held at 0 in all cells throughout the simulation.In FIG.", "REF , the bremsstrahlung energy spectra are given for some Cu targets.", "These spectra have a sharp gradient change at $E_\\gamma \\approx 86$ MeV, as the only electrons which can radiate above this energy escape the target after only one pass.", "The size of the target in $x$ determines the length of this pass, and the energy spectra beyond 86 MeV are grouped by this size.", "Smaller targets produce less bremsstrahlung radiation overall, as reflux events are more common and take away a greater proportion of the hot electron energy.", "The bremsstrahlung spectrum for the $10\\times 50^2$ m3 Cu target was also calculated from a simulation without $\\delta $ -rays.", "Instead of adding $\\delta $ -rays as macro-electrons which can go on to produce photons, their energy was dumped to the local cell as a temperature increase.", "The resulting spectrum showed no significant difference to the case with $\\delta $ -rays, which suggests the rare high-energy photon emissions from rare high-energy $\\delta $ -rays play a negligible role in the total bremsstrahlung emission.", "Figure: Photon energy distribution of bremsstrahlung radiation from hybrid-PIC simulations (10 22 10^{22} Wcm-2, Cu).", "Target dimensions are labelled as in FIG.", ".", "The pink line denotes the electron escape energy.To obtain these results, we have used special reflux boundaries which were characterised from full-PIC simulations which ran for 700 fs at the longest, as discussed in Section REF .", "While escaping electrons leave the simulation in the first pass through the solid, it is unclear how well the refluxing data describes reflux events on time-scales where solid decompression becomes important.", "In Al $10^{22}$ Wcm-2, a final background temperature of 3.7 keV was recorded at the focal spot location, which suggests the target could grow 20 m over 100 ps according to the mean thermal ion speed." 
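The re-weighting between $dE/d\\theta $ and $dE/d\\Omega $ discussed earlier in this section is a simple geometric conversion; a minimal example, with synthetic placeholder values rather than simulation output, is given below assuming azimuthal symmetry about the injection axis.

import numpy as np

theta_edges = np.linspace(0.0, np.pi, 37)                  # 5-degree polar bins
theta_mid = 0.5 * (theta_edges[:-1] + theta_edges[1:])
dE_dtheta = np.exp(-((theta_mid - np.pi / 4) / 0.3) ** 2)  # fake "lobe" at 45 degrees

# For azimuthal symmetry, dOmega = 2*pi*sin(theta)*dtheta, so
# dE/dOmega = (dE/dtheta) / (2*pi*sin(theta)); the 1/sin(theta) weight boosts the
# forward/backward bins relative to sideways bins, removing apparent lobes.
dE_dOmega = dE_dtheta / (2.0 * np.pi * np.sin(theta_mid))

for th, a, b in zip(np.degrees(theta_mid[:6]), dE_dtheta[:6], dE_dOmega[:6]):
    print(f"theta={th:5.1f} deg   dE/dtheta={a:.3e}   dE/dOmega={b:.3e}")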
], [ "Energy loss mechanisms", "A breakdown of the total energy lost to each mechanism is shown in FIG.", "REF for the $10^{20}$ and $10^{22}$ Wcm-2 simulations in $100^3$ m3 Al and Au targets.", "It was found that 28 $\\%$ of all lost energy in the Au $10^{22}$ Wcm-2 simulation was due to bremsstrahlung radiation (from all photon energies), which dominated all other forms of energy loss.", "Ionisation loss dominated at $10^{20}$ Wcm-2, taking 47% of the hot electron energy in Al and 59% in Au.", "Reflux energy loss dominated in $10^{22}$ Wcm-2 Al, accounting for 51% of the energy loss.", "Escaping energy took away 17-22% in all simulations, and resistive fields accounted for 8-19%.", "While some electrons gained energy from these electric fields, field gains were less than 2% of the field losses in all four simulations.", "Figure: Hot electron energy loss breakdown for four hybrid-PIC simulations from FIG.", ".", "The remaining electron energy in these simulations was less than 0.03%0.03\\% of the total energy lost over the run-time.A simple model was constructed to quickly estimate the efficiencies of hot electron energy loss mechanisms, and to demonstrate how these scale with laser and target parameters.", "This model condenses the exponential injection of electron kinetic energies, $\\epsilon _k$ into three macro-electrons, characterised by the high-energy X-ray threshold, $\\epsilon _\\gamma ^{th}$ (1 MeV here), and the escape energy $\\epsilon _\\text{esc}=\\kappa _\\text{esc} a_0 m_e c^2$ .", "The “warm\" macro-electron describes all electrons with $\\epsilon _k<\\epsilon _\\gamma ^{th}$ , the “emitting\" macro-electron holds $\\epsilon _\\gamma ^{th} \\le \\epsilon _k < \\epsilon _\\text{esc}$ , and the “escaping\" macro-electron holds $\\epsilon _k \\ge \\epsilon _\\text{esc}$ .", "The macro-electron weights are found from integrating dNedk = Nek e-k/k between the defining kinetic energy limits, where the mean injected kinetic energy $\\langle \\epsilon _k\\rangle = a_0 m_e c^2 \\langle \\sqrt{fg}\\rangle $ .", "From (), the total number of injected electrons is Ne = I0 fg(rfwhm24) (tfwhm102) lek after substitution of the full injection area and pulse duration for our envelopes.", "Similarly, the three macro-electron $\\epsilon _k$ values are found from integrating $\\epsilon _k dN_e/d\\epsilon _k$ between the defining energies.", "Four stopping-powers are used to characterise energy loss from the individual energy loss mechanisms.", "For bremsstrahlung [35] and ionisation energy loss,[29] we use the continuous stopping power approximations, -.ddx|brem = e612303 me c3 ni Z2 (5.6 0cZ1/3e2) -.ddx|ion = Zni e4402 me v2((kIex) + 12(+ 1) .", "+ .0.9092 - 0.818 - 0.246) where the bremsstrahlung stopping power considers emission into photons of all energies.", "These use the permittivity of free space, $\\epsilon _0$ , reduced Planck constant, $\\hbar $ , and electron charge and speed, $e$ and $v$ respectively.", "Solid parameters $n_i$ and $I_{ex}$ denote the ion number density and mean excitation energy.", "The stopping power from photons over energy $\\epsilon _\\gamma ^{th}$ is -.ddx|> th =-.ddx|brem(k - th) which can be used to calculate $\\eta _{e\\rightarrow \\gamma }$ .", "In a target of size $L_x \\times L_y \\times L_z$ , the typical path between two boundaries is roughly $\\frac{1}{3}(L_x + L_y + L_z)$ .", "The energy lost in a reflux event is described by the reflux boundaries, so we approximate a continuous reflux stopping power of the form -.ddx|tnsa = 3tnsa a0 me c2Lx + Ly 
+ Lz.", "For fields, the stopping power is equivalent to the Lorentz force $-eE$ , where the electric fields in this system are described in ().", "Assuming the hot electron current density, $j_h$ is balanced by the background electron current, the stopping power may be written as $-e\\eta j_h$ for a solid with resistivity, $\\eta $ .", "By approximating a suitable form for $j_h(x)$ , we have -.ddx|field = e2 I0 fg (12rfwhm)2le(x c + 12rfwhm)2k where a constant typical resistivity $\\langle \\eta \\rangle $ has been used.", "Here we have assumed the injected current begins with a circular area of radius $r_{fwhm}/2$ , where electrons move into a cone of half-angle $\\theta _c$ , such that the current radius at $x$ includes the $x \\tan \\theta _c$ term.", "This ensures the field stopping power diminishes as electrons spread out in the solid.", "The “warm\" and “emitting\" macro-electrons are integrated through these stopping powers until they have no more energy, and the “escaping\" macro-electron is integrated to $x=L_x$ , at which point the remaining energy is considered to be escaped.", "Using $\\theta _c = 20$ and $\\langle \\eta \\rangle =10^{-6}$ m, $\\eta _{e\\rightarrow \\gamma }$ was calculated over the simulation range shown in FIG.", "REF , and forms the background heatmap.", "This simple model shows good agreement with the simulation data.", "While calculating this heatmap, a constant $n_i=6\\times 10^{28}$ m-3 was used, along with the approximation $I_{ex}\\approx 11eZ$ .", "The dominant emission mechanisms were identified in each calculation, and are grouped by the pink lines in FIG.", "REF .", "For high $Z$ targets, ionisation loss dominates at low intensities, while bremsstrahlung dominates at high intensities.", "In lower $Z$ targets, the stopping power associated with these processes decreases and reflux energy loss becomes the dominant process, making these set-ups especially unsuitable for modelling using traditional Monte Carlo codes which lack collective effects." ], [ "Reflux energy loss", "The reflux boundaries described in Section REF are characterised by three empirical parameters.", "These are related to the escape energy threshold, $\\kappa _\\text{esc}$ , the mean reflux momentum loss, $\\kappa _\\text{tnsa}$ and the range of scatter values, $\\sigma _{\\langle \\Delta \\theta \\rangle }$ .", "The specific values used in the 3D hybrid-PIC simulations of sections REF and REF were calculated from 2D full-PIC simulations in EPOCH.", "These full-PIC simulations model electron refluxing in the sheath fields, and are similar to those performed by Rusby et al.", "[9] Four simulations modelled two targets (C and Au) shot at two different laser intensities ($10^{20}$ and $10^{22}$ Wcm-2).", "All targets were given a pre-plasma for $(-4<x<0)$ m, with an electron number density $n_e(x) = n_{e0} \\exp (x/L_p)$ , where $n_{e0}$ is the solid electron number density, and the pre-plasma scale-length, $L_p =2$ m. The solid density region spanned $0<x<L_s$ , where the solid length, $L_s$ was 10 m for C and 2 m for Au.", "All simulations used a laser with $r_{fwhm}=$ 5 m, and $t_{fwhm}=40$ fs to match the hybrid-PIC simulations.", "C simulations assumed fully ionised targets, with a run-time of 700 fs, 250 macro-electrons and 50 macro-ions per cell, for square cells of side 20 nm.", "The simulation window spanned $(-30 < x < 130)$ m, with $y$ between $\\pm 10$ m. 
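The macro-electron bookkeeping described above lends itself to a compact numerical sketch. The following Python snippet is illustrative only: the stopping-power expressions and all numerical prefactors (as well as the assumed mean injection energy, thresholds and target thickness) are placeholder assumptions rather than the formulas quoted above, but the structure follows the model described here: band weights and mean energies are obtained by integrating the exponential injection spectrum, and each macro-electron is then marched through the competing loss channels, with the above-threshold bremsstrahlung share tracked separately to estimate $\\eta _{e\\rightarrow \\gamma }$ .

```python
import numpy as np

# Minimal sketch of the three-macro-electron energy-partition model.
# All stopping powers below are simplified placeholders in MeV/um; the
# injection parameters in __main__ are illustrative assumptions only.

def band_weights_and_energies(T, e_th, e_esc):
    """Number fraction and mean energy of dN/de = (N/T) exp(-e/T) on each band."""
    def frac(lo, hi):
        return np.exp(-lo / T) - (0.0 if np.isinf(hi) else np.exp(-hi / T))
    def mean(lo, hi, w):
        m = (lo + T) * np.exp(-lo / T)
        if not np.isinf(hi):
            m -= (hi + T) * np.exp(-hi / T)
        return m / w
    bands = [(0.0, e_th), (e_th, e_esc), (e_esc, np.inf)]
    return [(frac(lo, hi), mean(lo, hi, frac(lo, hi))) for lo, hi in bands]

def stopping_powers(ek, x):
    """Illustrative dE/dx terms; replace with the expressions given above."""
    brem = 2e-3 * ek                          # bremsstrahlung grows with energy
    ion = 1.5e-3                              # roughly constant ionisation loss
    tnsa = 0.5e-3                             # continuous reflux (sheath) loss
    field = 4e-3 / (1.0 + x / 20.0) ** 2      # resistive field, beam spreading
    return {"brem": brem, "ion": ion, "tnsa": tnsa, "field": field}

def transport(ek, weight, Lx, escaping, e_th, dx=0.1):
    """March one macro-electron; apportion its loss between the mechanisms."""
    lost = {"brem>th": 0.0, "brem": 0.0, "ion": 0.0, "tnsa": 0.0,
            "field": 0.0, "escaped": 0.0}
    x = 0.0
    while ek > 1e-6:
        if escaping and x >= Lx:              # escaping macro-electron leaves
            lost["escaped"] += weight * ek
            break
        sp = stopping_powers(ek, x)
        tot = sum(sp.values())
        de = min(ek, tot * dx)
        for k, v in sp.items():
            lost[k] += weight * de * v / tot
        # photons above the hard X-ray threshold carry the (ek - e_th)/ek share
        if ek > e_th:
            lost["brem>th"] += weight * de * sp["brem"] / tot * (ek - e_th) / ek
        ek -= de
        x += dx
    return lost

if __name__ == "__main__":
    T, e_th, e_esc, Lx = 5.0, 1.0, 15.0, 100.0    # MeV, MeV, MeV, um (assumed)
    totals = {}
    for i, (w, emean) in enumerate(band_weights_and_energies(T, e_th, e_esc)):
        res = transport(emean, w, Lx, escaping=(i == 2), e_th=e_th)
        for k, v in res.items():
            totals[k] = totals.get(k, 0.0) + v
    injected = T                                  # mean energy per injected electron
    print({k: round(v / injected, 3) for k, v in totals.items()})
```

Swapping the placeholder functions for the bremsstrahlung, ionisation, reflux and field stopping powers written above would recover the heatmap calculation described in this section.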
For Au we used $\\text{Au}^{51+}$ ions, which had a greater $n_e$ than Al and so smaller square cells of side 5 nm were used to prevent self-heating.", "The simulation window was reduced to having $x$ and $y$ range between $\\pm 10$ m and $\\pm 4$ m respectively, with 125 macro-electrons and 25 macro-ions per cell, and a run-time of 160 fs.", "Six EPOCH particle probes were positioned in the simulation window: two tracking electrons escaping the window through $x_\\text{min}$ and $x_\\text{max}$ boundaries, and the rest tracking particles entering and leaving the solid density region at $x=0$ and $x=L_s$ .", "These probes output the momentum, position and weight of each macro-particle passing them, and have been extended to also output particle ID and crossing time.", "All macro-electrons in the pre-plasma were tracked, and were considered hot electrons once they triggered the $x=L_s$ probe for the first time.", "Once hot electrons exit the solid-density region there are four possible end-states: re-entering the solid (refluxed), escaping the simulation window through an $x$ boundary (escaped), a $y$ boundary (lost, no useful information), or remaining outside the solid but within the window until the end of the simulation (absorbed into the sheath field).", "These runs determined the likelihood of these end-states, and also looked at how the properties of hot electrons changed over a reflux.", "Figure: The number spectrum of hot electrons exiting the solid through x=L s x=L_s in the C 10 22 10^{22} Wcm-2 simulation, binned by the electron momentum.", "The colour denotes the distribution of end-states in each bin.", "An insert is provided to show the fate of rare, high momentum outgoing electrons.FIG.", "REF shows a number spectrum of hot electrons in the C $10^{22}$ Wcm-2 simulation as they pass $x=L_s$ into the vacuum.", "Most electrons reflux back into the target, with the highest energy electrons escaping through $x_\\text{max}$ and some lower energy electrons ending in the sheath field.", "The $\\kappa _\\text{esc}$ parameter was chosen such that $\\kappa _\\text{esc}a_0 m_e c^2$ was the energy associated with the first bin in FIG.", "REF which had all electrons escape after passing into the vacuum from $x=L_s$ .", "The sharp switch of end-states justifies our treatment of a critical escape energy.", "Electrons labelled as lost have escaped through a $y$ boundary, and it is unclear whether they would have refluxed, escaped, or been absorbed if the simulation window was larger in $y$ .", "The simulation was repeated with a smaller window of size $(-30<x<90)$ m, and hot electrons were found to escape at the same energy as in the larger window simulations, with similar energy distributions at $x_\\text{max}$ .", "This suggests convergence in the escape energy cut-off, but these 2D sheath fields will decay slower with distance than fields in 3D space, so this cut-off is likely an over-estimate.", "The qualitative behaviour is similar in all four simulations for electrons exiting through $x=L_s$ .", "Refluxing also dominates electrons exiting the solid on the pre-plasma side through $x=0$ , but the absorption chance is typically greater on this boundary.", "For example, C $10^{22}$ Wcm-2 has 20% absorption in the bin corresponding to the $dN/dp$ peak for the $x=0$ probe, compared to 2% for the peak bin in the $x=L_s$ probe.", "A typical reflux event was found to last $\\sim $ 60 fs on the pre-plasma side, but only $\\sim $ 20 fs on the rear, so this increased absorption could be due to 
counting more refluxing electrons outside the solid at the simulation end.", "Figure: Refluxing electrons on x=L s x=L_s are binned by outgoing longitudinal momentum (in units of the ponderomotive momentum p 0 =a 0 m e cp_0=a_0m_ec), and the bin averaged momentum change is plotted.", "Simulations are labelled by target material and laser intensity in Wcm-2.", "The shaded region on C 10 2 210^22 denotes the upper and lower average deviations from the mean in each bin.The exiting, $p_x^\\text{out}$ and returning, $p_x^{in}$ longitudinal momenta of electrons leaving and re-entering the solid through $x=L_s$ was recorded for all reflux events.", "In FIG.", "REF , refluxing electrons are binned by $p_x^{out}$ values, and the mean fractional change of longitudinal momentum in each bin is shown by the solid lines for all four simulations.", "Most hot electrons lose longitudinal momentum when refluxing, with the highest energy electrons losing the most.", "The $\\kappa _\\text{tnsa}$ parameter is chosen such that $\\kappa _\\text{tnsa}a_0 m_e c$ is the average momentum loss for all hot electrons exiting and re-entering the solid, on both the $x=0$ and $x=L_s$ sides.", "Electron momenta beyond those plotted in this figure mostly escape the simulation window through the $x_{max}$ boundary.", "These trends seem similar across the different target materials, sizes and run-times, although the results appear grouped by intensity.", "In C simulations, it was found that $\\eta _{l\\rightarrow e}$ was 0.27 in the $10^{22}$ Wcm-2 run, but only 0.03 for $10^{20}$ Wcm-2.", "This demonstrates different injection characteristics for FIG.", "REF , and could explain the reduced peak momentum achieved in lower intensity runs (relative to the ponderomotive momentum).", "The $\\eta _{l\\rightarrow e}$ value for $10^{20}$ Wcm-2 is similar to the one found to fit the data for the FIG.", "REF benchmark in Appendix , which was at $3.1\\times 10^{20}$ Wcm-2 and also at normal incidence.", "At oblique incidence, the FIG.", "REF benchmark fit with $\\eta _{l\\rightarrow e}=0.3$ , which is closer to that of C $10^{22}$ Wcm-2, despite only being at $4\\times 10^{20}$ Wcm-2.", "The choice to set $\\eta _{l\\rightarrow e}=0.3$ in sections REF and REF was made to allow for direct comparisons between the results.", "Figure: Refluxing electrons at x=L s x=L_s are binned by outgoing θ out =tan -1 (|p y out /p x out |)\\theta ^\\text{out}=\\tan ^{-1}(|p_y^\\text{out}/p_x^{out}|), and the Δθ=θ in -θ out \\Delta \\theta = \\theta ^\\text{in}-\\theta ^\\text{out} values are averaged in each bin.", "Simulations are labelled as in FIG.", ".", "The shaded region for C 10 22 10^{22} denotes the upper and lower average deviations from 〈Δθ〉\\langle \\Delta \\theta \\rangle in each bin.In addition to the large decrease in $p_x$ , a smaller increase in $p_y$ was found on refluxing which contributes to the angle increase observed by Vyskočil et al.", "[14] FIG.", "REF shows the average change in angle when refluxing for hot electrons exiting the solid at different angles.", "On average, hot electrons exiting below 30 to the injection direction return at a greater angle, and those above 30 return lower.", "The shaded-region which denotes average deviation from the mean is large and roughly uniform over all outgoing angles, which shows a large range of scatter angles independent of the outgoing direction.", "The $\\sigma _{\\langle \\Delta \\theta \\rangle }$ parameter is the shaded-region size for $\\langle \\Delta \\theta \\rangle $ , averaged 
over all bins for electron reflux events at both $x=0$ and $x=L_s$ .", "This average is weighted by the number of electrons in each bin.", "The empirical parameters have been calculated in each simulation, and the results are shown in TABLE REF .", "Typical averages were chosen for $\\kappa _\\text{tnsa}$ and $\\sigma _{\\langle \\Delta \\theta \\rangle }$ for our hybrid-PIC simulations.", "We also chose $\\kappa _\\text{esc}=2$ to be closer to the C $10^{22}$ Wcm-2 simulation, as this has the most similar $\\eta _{l\\rightarrow e}$ value to our hybrid electron injection.", "Table: Reflux boundary characterisation parameters from 2D full-PIC simulations, labelled by laser intensity in Wcm-2, and target material." ], [ "Conclusion", "A hybrid-PIC code has been written and benchmarked against experiments for Vulcan shots around $10^{20}$ Wcm-2, where hot electron injection was found to form the dominant source of uncertainty.", "Full-PIC simulations in 2D demonstrated that over short time-scales (up to 700 fs from a 40 fs pulse) most electrons reflux with an energy loss, and the highest energy electrons escape.", "Using our reflux boundaries, the bremsstrahlung emission occurred over a time-scale on the order of 10-100 ps, and showed higher efficiencies than previously reported.", "PIC simulations were shown to underestimate the bremsstrahlung efficiency by orders of magnitude, as they are unable to capture the full emission.", "Monte Carlo codes are expected to overestimate the emission, as they lack collective energy loss mechanisms.", "In these 3D simulations, we did not observe the lobes in the bremsstrahlung angular distribution found in 2D full-PIC simulations.", "Different energy-loss mechanisms were found to dominate at different laser intensities and target atomic numbers, with bremsstrahlung dominating in high-intensity high-$Z$ set-ups.", "A simple analytic model was provided for estimating efficiencies $\\eta _{e\\rightarrow \\gamma }$ , and showed good agreement with the predictions of the code.", "At these time-scales, the code could be improved by evolving the immobile background ion fluid with a hydrodynamic code, and diffusing the solid temperature with a thermal conductivity model.", "2D full-PIC simulations could be set-up starting with hot electrons in decompressed targets, to better characterise refluxing at later times.", "The code could also be extended to include photon transport effects like photoelectric attenuation, and Bethe-Heitler pair production [36] to better model higher intensities.", "Acknowledgements: This work was in part funded by the UK EPSRC grants EP/G054950/1, EP/G056803/1, EP/G055165/1, EP/ M022463/1 and EP/M018156/1.", "The project was undertaken on the Viking Cluster, which is a high performance compute facility provided by the University of York.", "We are grateful for computational support from the University of York High Performance Computing service, Viking and the Research Computing team.", "We would also like to thank our AWE partners N. Sircombe (now at Arm Ltd.) and G. Crow and for their support in this project.", "The data that support the findings of this study (code, input decks and additional benchmarks) are openly available in the York Research Database[37] at http://doi.org/10.15124/707baa95-44e0-4f55-9476-ef1097b0a668." 
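For completeness, the extraction of the three empirical reflux parameters from the probe output described in Section REF can be condensed into a short post-processing script. The sketch below uses synthetic, randomly generated reflux records in place of the actual EPOCH probe files (all array names, the momentum units of $p_0=a_0m_ec$ and the binning choices are assumptions made for illustration); it forms the weighted averages that define $\\kappa _\\text{tnsa}$ and $\\sigma _{\\langle \\Delta \\theta \\rangle }$ and locates the lowest outgoing-momentum bin in which every electron escapes, as used for $\\kappa _\\text{esc}$ .

```python
import numpy as np

# Hypothetical matched reflux records: outgoing/returning momenta per event in
# units of p0 = a0*m_e*c, plus macro-particle weights (synthetic stand-ins).
rng = np.random.default_rng(0)
n = 20000
px_out = rng.exponential(0.8, n)
py_out = rng.normal(0.0, 0.3, n)
px_in = px_out * rng.uniform(0.55, 0.95, n)      # net longitudinal loss
py_in = py_out + rng.normal(0.0, 0.15, n)        # transverse scatter
w = np.ones(n)

# kappa_tnsa: weighted mean longitudinal momentum loss over all reflux events
kappa_tnsa = np.average(px_out - px_in, weights=w)

# sigma_<dtheta>: bin by outgoing angle, take the spread of dtheta per bin,
# then average over bins weighted by the number of electrons in each bin
theta_out = np.degrees(np.arctan2(np.abs(py_out), np.abs(px_out)))
theta_in = np.degrees(np.arctan2(np.abs(py_in), np.abs(px_in)))
dtheta = theta_in - theta_out
bins = np.linspace(0.0, 90.0, 19)
idx = np.digitize(theta_out, bins)
spreads, counts = [], []
for b in np.unique(idx):
    sel = idx == b
    mean = np.average(dtheta[sel], weights=w[sel])
    spreads.append(np.average(np.abs(dtheta[sel] - mean), weights=w[sel]))
    counts.append(w[sel].sum())
sigma_dtheta = np.average(spreads, weights=counts)

# kappa_esc: lowest outgoing-momentum bin in which every electron escaped
escaped = px_out > 2.0                            # stand-in end-state flag
p_bins = np.linspace(0.0, 4.0, 41)
p_idx = np.digitize(px_out, p_bins)
esc_bins = [b for b in np.unique(p_idx) if escaped[p_idx == b].all()]
kappa_esc = p_bins[min(esc_bins) - 1] if esc_bins else np.inf

print(f"kappa_tnsa ~ {kappa_tnsa:.2f} p0, sigma_dtheta ~ {sigma_dtheta:.1f} deg, "
      f"kappa_esc ~ {kappa_esc:.2f} p0")
```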
], [ "Solids", "Hybrid-PIC codes model the transport of hot electrons through solids with a significantly colder and denser electron population.", "The hot electron currents in these systems exceed the Alfvén limit, and propagate by drawing a return current density, $\\textbf {j}_b$ from the background electrons.", "[38] This return current establishes a resistive electric field, $\\textbf {E}$ according to Ohm's law E = jb, where $\\eta $ denotes the local resistivity of the solid.", "To avoid simulation of background particles, the field equations are expressed using only the hot electron current density, $\\textbf {j}_h$ by substituting the total current density $\\textbf {j} = \\textbf {j}_h + \\textbf {j}_b$ into the Ampère-Maxwell law, and iterating the magnetic field $\\textbf {B}$ with the Faraday-Lenz law E = ( 10B - jh ) Bt = - E .", "The displacement current in () has been negelcted, as this is negligible over multi-picosecond timescales.", "[28] Our code was built as an extension to EPOCH by introducing a new solid concept to the code.", "Solids are single-element immobile fluids added to the simulation window, and are described by an atomic number, $Z$ , mean excitation energy, $I_{ex}$ , and radiation length, $X_0$ .", "A spatially varying ion number density, $n_i$ is assigned to each solid, and multiple solids may be assigned to the same cell to construct compound materials like plastic.", "The hybrid mode also tracks the local background electron and ion temperatures, $T_e$ and $T_i$ (in Kelvin) and the resistivity in each cell.", "The temperature-dependent resistivity is calculated using a reduced form of the Lee-More model, [39] = meZ* ni e2 A where $Z^*$ is the local solid ionisation state (given by the More Table IV algorithm,[40]) $\\tau $ is the electron relaxation time, and $A^\\alpha $ is a correction factor.", "Here the Lee-More equations have been converted to SI units.", "Our reduced model varies between the hot and cold relaxation time limits (A)hot = 128023e4me2(kbTe)3/2(Z*)2ni (A)cold = R0v1 where the Coulomb logarithm $\\ln \\Lambda $ is evaluated using the Lee-More method, [39] the ion sphere radius $R_0=(3/4\\pi n_i)^{1/3}$ , mean thermal speed $\\bar{v}=\\sqrt{3 k_b T_e/m_e}$ , and $\\lambda _1$ is a fitting parameter.", "The value of $\\tau A^\\alpha $ used in () is $\\max {((\\tau A^\\alpha )_{\\text{hot}},(\\tau A^\\alpha )_{\\text{cold}})}$ , and resistivity is taken to be $\\eta \\lambda _2$ , where $\\lambda _2$ is a second fit parameter.", "The $(\\lambda _1, \\lambda _2)$ values are taken to be (7, 3.5) from a fit to experimental Al resistivities.", "[41] The electron temperatures of the background solid are updated for each cell and timestep according to Te = ZniCkb where $\\rho _\\epsilon $ is the density of the energy deposited in the cell over the timestep, $\\Delta t$ , and $C$ is the heat capacity of the solid C = 0.3 + 1.2T'2.2 + T'(1.1 + T')2 for $T^{\\prime }=(k_bT_e/e)Z^{-4/3}$ .", "[42] For compound solids, we replace the electron number density term $n_e = Z n_i$ in () with the sum of $n_e$ over all solids in the cell, and calculate a cell-averaged $1/C$ value weighted by the $n_e$ value of each solid.", "This ensures that two overlapping solids of the same material retains the same behaviour as the equivalent single solid.", "In Ohmic heating, the induced return current dissipates heat by travelling through the resistive solid, depositing an energy density of $\\rho _\\epsilon = \\eta \\textbf {j}_h \\cdot \\textbf {j}_h\\Delta t$ (as 
$|\\textbf {j}_h|\\approx |\\textbf {j}_b|$ ).", "[29] In ionisation heating, $\\rho _\\epsilon $ is the sum of the ionisation losses for all hot electrons in a cell over $\\Delta t$ , divided by the cell volume.", "Background electrons share energy with the ions through collisions, updating the temperatures of each species at the rates dTedt = (Ti-Te)(Z*)2 e4 nitc dTidt = (Te-Ti)(Z*)3 e4 nitc with the repeated term, $t_c$ representing 1tc = 23(2kb)3/2 me mi02(Te mi + Ti me)3/2 where $T_i$ and $m_i$ describe the ion temperature and mass respectively.", "[43]" ], [ "Hot electrons", "Hot electrons are injected into the simulation with exponentially distributed energies, and a mean kinetic energy given by ponderomotive scaling, $\\langle \\epsilon _k(\\textbf {r},t)\\rangle = a(\\textbf {r},t)m_ec^2$ , for position $\\textbf {r}$ and time $t$ .", "Here we use a local normalised vector potential $a(\\textbf {r},t)=a_0 \\sqrt{f(\\textbf {r})g(t)}$ , which applies an intensity reduction to (REF ) due to the envelope functions $f(\\textbf {r})$ and $g(t)$ .", "The number of electrons injected into the simulation, $N_e^\\text{cell}$ from a cell with transverse area, A, over a time-step, $\\Delta t$ is given by Necell = I0f(r)g(t)At lek(r,t) where $\\eta _{l\\rightarrow e}$ is the absorption efficiency of laser energy into hot electron kinetic energy.", "The ionisation energy loss algorithm has been adapted from Geant4, and takes two forms depending on the energy transferred to background electrons.", "[22], [23], [24] Hot electron energy loss is described by a continuous stopping power when background electrons are excited to energies, $\\epsilon _k^\\delta $ less than a cut-off energy, $\\epsilon _{k,\\text{cut}}$ , dEdx = Z ni e48 02 me v2 ( (2(+1)me2c4Iex2) + F- - ) ) where $v$ and $\\gamma $ are the speed and Lorentz factor of the hot electron respectively, and $\\epsilon _{k,\\text{cut}}$ is set to 1 keV.", "Here, $F^-$ is a function of $\\gamma $ and $\\epsilon _{k,\\text{cut}}$ , and $\\delta $ is the density effect function.", "[44] Background electrons excited to energies over $\\epsilon _{k,\\text{cut}}$ are treated as a discrete emission ($\\delta $ -rays), and are added into the simulation as macro-electrons.", "Secondary particle emission from macro-electrons in a PIC code is achieved using the optical depth method.", "[45] Over timestep $dt$ in a solid with a cross section per atom $\\sigma $ , a macro-electron covers an optical depth, $d\\tau = n_i\\sigma v dt$ , where $d\\tau $ is equivalent to the probability of an emission event during $dt$ .", "The cumulative probability of emission by optical depth $\\tau $ is $F(\\tau )=1-e^{-\\tau }$ .", "Hence, an optical depth of emission, $\\tau _e$ can be sampled for each macro-electron using $\\tau _e=-\\ln (1-x_r)$ , where $x_r$ is a uniformly-distributed random number between 0 and 1.", "The total $\\tau $ traversed by a macro-electron is saved, and once this exceeds $\\tau _e$ a secondary particle is emitted, the saved $\\tau $ value is reset and a new $\\tau _e$ is sampled.", "Discrete $\\delta $ -ray emission uses the cross section of high energy Möller scatter, [44] and a Geant4 algorithm for sampling the $\\delta $ -ray energy from the differential cross section.", "[22], [23], [24] A separate optical depth is used for tracking bremsstrahlung photon emission, which is characterised using the Seltzer-Berger differential cross-sections.", "[46] Following the theory of Wu et al, [15] the Seltzer-Berger cross sections are enhanced by 
the factor, $F_\\sigma $ F= 1 + |D/as||as me c2/|(Z*Z)2 to account for differences in nuclear charge screening from ionised backgrounds.", "Here $\\lambda _D$ is the Debye length of the background ions, and $a_s=1.4 a_B Z^{-1/3}$ describes charge screening from atomic electrons where $a_B$ denotes the Bohr radius.", "The bremsstrahlung photon emission direction is sampled using a Geant4 algorithm, which draws a direction according to the Tsai differential cross section.", "[47], [48] Two models for elastic scatter have been implemented: a hybrid-style approach used in previous hybrid-PIC codes, [28], [29] and a Geant4-style approach using an Urban algorithm adapted to the PIC framework.", "[49] The hybrid model solves the Fokker-Planck equation in the limit of low $Z$ targets and neglects large-angle scattering, deriving an expected deflection, $\\Delta \\theta $ over time $\\Delta t$ = (t) Z2e4ni me2 02 p3 (40 h pZ1/3me e2)t where $\\Gamma (t)$ is a random number drawn from a standard normal distribution.", "The Urban multiple scattering approach uses model functions which match the angular distribution moments of Lewis theory, [50] and provides an empirical fit for mapping large angle scattering onto experimental results.", "[51] The full Urban model modifies both angle and position at the end of each step, to account for scattering within the step.", "Steps within hybrid-PIC simulations are shorter as they are constrained to a single cell, so we neglect the spatial deviation in the PIC implementation." ], [ "Benchmarking", "The first benchmark considers the experimental results of Lockwood et al, which measured energy deposition as a function of depth in a variety of targets.", "[52] In FIG.", "REF , the hybrid-PIC code attempts to recreate their data for a 0.5 MeV electron beam at normal incidence on a Ta target.", "This was performed at low electron currents which give negligible resistive fields, and bremsstrahlung and $\\delta $ -ray emission may also be neglected at these electron energies.", "This benchmark mainly tests the ionisation energy loss and elastic scatter routines.", "The 3D simulation window ($x\\times y \\times z$ ) spanned $150\\times 2\\times 2$ m3 ($256\\times 8 \\times 8$ cells), with open boundary conditions in $x$ , and periodic boundaries in $y$ and $z$ .", "In the first timestep, 50 macro-electrons of unit weight were injected into each cell on the $x_\\text{min}$ boundary, and the simulation ran for 2 ps.", "The deposited grid energy was deduced from the final electron temperature distribution and the heat capacity used in the simulations, and was summed over all cells which shared an $x$ position.", "These simulations were performed for both Davies [29] and Urban [22], [23], [24] elastic scatter models, and show a reasonable agreement with the Lockwood data.", "As the Davies simulation ran roughly 3 times faster, we have opted for this elastic scatter model in this paper.", "Figure: Energy deposition per incident electron from a 0.5 MeV electron beam injected into a Ta target.", "Lockwood experimental data is compared to hybrid-PIC simulations running different elastic scatter algorithms.The hybrid field solver, Ohmic heating, reduced Lee-More model and laser-accelerated electron injection were benchmarked against experiments with the Vulcan petawatt laser.", "Evans et al obtained a temperature-depth curve with data from shots on multiple plastic targets, where the temperature was measured from a 0.2 m Al tracer layer buried at different depths.", "[53] This 
was recreated in the hybrid-PIC code using a simulation window which spanned $32.2\\times 20 \\times 20$ m3 ($322\\times 40 \\times 40$ cells), for a target which was Al between $x=28$ and 28.2 m, and plastic otherwise.", "The peak laser intensity was estimated to be $3.1\\times 10^{20}$ Wcm-2, with a temporal fwhm, $t_{fwhm}=800$ fs, and a spatial radial fwhm, $r_{fwhm} = 10$ m. To fit the data, we assume $\\eta _{l\\rightarrow e} = 0.04$ .", "To estimate the peak $T_e$ , the slow thermal exchange with ions has been neglected and the final $T_e$ values are recorded.", "The central $T_e(x)$ values are plotted in FIG.", "REF , and show a reasonable fit to the experimental results.", "The presence of the Al tracer layer at 28 m demonstrates the complex target capability of the code, and shows only a small increase in the temperature at this point.", "Deviations from experiment are attributed to the rough approximations in the injected electron characteristics, as real injections will be complicated by pre-plasmas and imperfect focal spots.", "Figure: The temperature distribution of plastic targets with Al tracer layers after exposure to Vulcan shots (3.1×10 20 3.1\\times 10^{20} Wcm-2).", "Experimental data is compared to the electron temperature from 3D hybrid-PIC simulations, averaged over the central 11×1111\\times 11 cells in yy and zz for a given xx, after 1.57 ps.", "A 2D heatmap of the temperature averaged over the central 11 cells in zz is provided in the insert, where the pink lines denote the central 11 cells in yy.The final benchmark attempts to recreate the experimental bremsstrahlung photon number spectrum into a 40$$ forward cone, from $4\\times 10^{20}$ Wcm-2 Vulcan shots on thick Au targets.", "[54] The code modelled a 3 mm $\\times 100^2$ m2 Au solid (cubic cells of length 0.7 m), and ran to 1.2 ps.", "Hot electrons were injected with $t_{fwhm}=800$ fs, $r_{fwhm}=5$ m, and $\\eta _{l\\rightarrow e}$ = 0.3.", "FIG.", "REF shows the number spectrum of bremsstrahlung photons created with angle less than 20$$ to the mean injection direction.", "While we expect to over-estimate the low energy bremsstrahlung emission as our code lacks photoelectric attenuation, [55] we see that low energy X-rays are actually under-estimated here.", "When looking at the bremsstrahlung emission from electron beams on uniform targets, the hybrid code matches equivalent runs in Geant4, which suggests a correct implementation.", "Hence, the FIG.", "REF discrepancy is again attributed to the over-simplified electron injection model.", "Figure: Number spectrum of X-ray photons from a 4×10 20 4\\times 10^{20} Wcm-2 shot on a 3 mm Au target, for X-rays falling within a 40 cone (20 half-angle) about the injection direction.", "Experimental data is compared to an equivalent run using the hybrid-PIC code." ] ]
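The optical-depth sampling used above for discrete $\\delta $ -ray and bremsstrahlung emission is simple enough to summarise in a few lines. The sketch below implements the accumulate-compare-reset logic as described (a macro-electron accumulates $d\\tau =n_i\\sigma v dt$ , emits once the running total exceeds $\\tau _e=-\\ln (1-x_r)$ , then resets and resamples); the density, cross-section and push parameters are illustrative numbers only, not Seltzer-Berger data.

```python
import numpy as np

# Minimal sketch of the optical-depth method for discrete secondary emission.
# The constant cross-section and the push parameters are illustrative only.
rng = np.random.default_rng(1)

def sample_tau_e():
    """Sample an optical depth of emission from the exponential distribution."""
    return -np.log(1.0 - rng.random())

def push_macro_electron(n_i, sigma, v, dt, n_steps):
    """Advance one macro-electron and count discrete emission events."""
    tau, tau_e, emissions = 0.0, sample_tau_e(), 0
    for _ in range(n_steps):
        tau += n_i * sigma * v * dt          # optical depth covered this step
        if tau >= tau_e:                     # emission event: emit, reset, resample
            emissions += 1
            tau, tau_e = 0.0, sample_tau_e()
    return emissions

# Illustrative numbers: solid density, a fixed cross-section, a relativistic
# electron pushed for 1 ps in 1 fs steps.
n_i, sigma, v, dt, steps = 6.0e28, 1.0e-25, 2.9e8, 1.0e-15, 1000
counts = [push_macro_electron(n_i, sigma, v, dt, steps) for _ in range(5000)]
expected = n_i * sigma * v * dt * steps      # total optical depth over the push
print(f"mean emissions per electron: {np.mean(counts):.3f} "
      f"(expected ~ {expected:.3f} for the underlying Poisson process)")
```

Over many macro-electrons the mean number of emissions approaches the accumulated optical depth, as expected for the underlying Poisson process.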
2107.01723
[ [ "On the cauchy problem with degenerate diffusion and nonlocal nonlinear\n sources" ], [ "Abstract This paper is devoted to the analysis of non-negative solutions for a generalisation of the parabolic equation with porous medium like nonlinear diffusion and nonlinear nonlocal reaction.", "We investigate under which conditions equilibration between two competing effects, repulsion modelled by nonlinear diffusion and aggregation modelled by nonlinear reaction, occurs.", "Precisely, we exhibit that the qualitative behavior of solutions is decided by the nonlinear diffusion which is chosen in such a way that its scaling and the reaction term coincide, i.e.", "that there is a critical exponent $m+2/n$ for the reaction exponent $\\alpha,$ solutions exist globally with uniformly upper bounds in the case of (i)$1\\le \\alpha<m+2/n$ for any initial data, (ii) $\\alpha>m+2/n$ for small initial data and (iii) $\\alpha=m+2/n$ for small mass capacity $M_0$.", "In the case of (ii) and (iii), the decay properties of the solution are also discussed.", "Moreover, numerical simulations are carried out to verify the theoretical analysis and explore other issues that lie beyond the scope of the analysis." ], [ "Introduction", "In this work, we analyse qualitative properties of non-negative solutions in dimension $n \\ge 3$ for the degenerate equation of the type $\\left\\lbrace \\begin{array}{ll}u_t=\\Delta u^m+\\chi u^\\alpha \\left(M_0- \\int _{{\\mathbb {R}}^n} u(x,t)dx\\right),\\quad & x \\in {\\mathbb {R}}^n, t>0, \\\\u(x,0)=u_0(x) \\ge 0 \\in L^1 \\cap L^\\infty ({\\mathbb {R}}^n),\\end{array}\\right.$ where $\\chi >0, m>1, \\alpha \\ge 1$ .", "(REF ) is related to many equations arising from population dynamics [11], [17], $u$ is the density of the population.", "The purpose of nonlinear diffusion $\\Delta u^m$ with $m>1$ is to model the local repulsion of population, this can be interpreted as taking into account anti-crowding effects [10].", "The reaction term presents a growth factor of logistic type defined in terms of the total mass of the population which is a competitive term limiting such growth, where the resources of the environment can be consumed nonlocally.", "$\\chi u^\\alpha $ can also be interpreted as the nonlinear source in the process of diffusion [1], which is called heat source for $\\chi >0$ and cold source for $\\chi <0.$ In the case of $\\alpha =1$ , the coefficient $\\chi M_0$ is sometimes called Malthusian parameter which induces an exponential growth for low density populations.", "The case $\\alpha =2$ , which is the motivation of this work, considers the addition of sexual reproduction to the model with the reproduction rate proportional to the square of the density [30].", "Nonlocal type reaction terms can also describe Darwinian evolution of a structured population density or the behavior of cancer cells with therapy as well as polychemotherapy [21], [22].", "The main feature of this class of equations is the interplay between the degeneracy in the principal part and the growth of the forcing term.", "A fundamental property of the solutions to (REF ) is the formal boundedness of the total mass of the system $m(t)=\\int _{{\\mathbb {R}}^n} u(x,t) dx$ which satisfies $\\frac{d}{dt}m(t)=\\chi \\left(M_0-m(t) \\right) \\int _{{\\mathbb {R}}^n} u^\\alpha dx.$ If the initial mass $m_0:=\\int _{{\\mathbb {R}}^n} u_0 dx>M_0$ , then $m(t)$ decreases in time and $M_0 \\le m_0$ for all $t>0.$ Thus we find that $u(x,t)$ is a subsolution of the porous medium equation $v_t=\\Delta 
v^m$ which admits a global solution for any $m>1$ [26], [27].", "By the comparison principle, all solutions of (REF ) exist globally.", "If the initial mass $m_0<M_0$ , then $m(t)$ increases in time and $ m_0 \\le m(t) \\le M_0.$ Therefore, we assume that the initial mass satisfies $ m_0=\\int _{{\\mathbb {R}}^n} u_0 dx<M_0 $ throughout this paper.", "In this sense, $M_0$ can be considered as the carrying capacity [24].", "In any dimension $n \\ge 3,$ we will concentrate on a particular choice of the nonlinear reaction exponent $\\alpha =m+2/n$ which produces a balance in the mass-invariant scaling of diffusion and reaction.", "Indeed, let $u_\\lambda (x)=\\lambda ^n u(\\lambda x,t)$ be the rescaling of $u$ with the same mass; the diffusion term $\\lambda ^{2+nm} \\Delta u_\\lambda ^m$ has the same scaling as the reaction term $\\lambda ^{n \\alpha } u_\\lambda ^\\alpha \\left( M_0-\\int _{{\\mathbb {R}}^n} u_\\lambda dx \\right)$ if and only if $nm+2=n\\alpha $ , or equivalently $\\alpha =m+2/n.$ The dynamics in (REF ) are governed by the interaction between nonlinear diffusion and reaction.", "We are interested in three cases: the critical case $\\alpha =m+2/n$ , the subcritical case $\\alpha <m+2/n$ and the supercritical case $\\alpha >m+2/n$ .", "Under which conditions each of the two competing terms dominates will be explored in this paper." ], [ "Comments on the non-degenerate case $m=1$", "In the non-degenerate case $m=1,$ there is a vast body of literature on semi-linear reaction-diffusion equations $u_t=\\Delta u+F(u)$ in bounded domains, cf. the papers [5], [15], [20], [21], [22], [28], [29], [30], [33], [32].", "For instance, the reaction term is $F(u)=\\int _{\\Omega } u^p dx-\\beta u^q$ [32], where the competitive effect of the local term $u^q$ becomes more influential as the population grows, so that the equation possesses a comparison principle which helps in proving the existence of global solutions by virtue of the boundedness of $\\Omega $ , or $F(u)=u^p-\\frac{1}{|\\Omega |} \\int _{\\Omega } u^p dx$ [15], where the equation is equipped with a decreasing Lyapunov functional.", "Yet there are few results on this type of equation in the whole space.", "This is partially due to the apparent lack of a good Lyapunov functional and the unboundedness of the domain.", "More importantly, the comparison principle (which has been used for the equation with a local reaction term to prove global existence [31]) is no longer applicable to our model with a nonlocal reaction term.", "Before turning to the nonlocal term in the whole space, we first mention the fundamental work of Fujita [12], who proved that for $1<\\alpha <1+2/n,$ the local classical solution blows up in finite time (the same is true for $\\alpha =1+2/n$ [13], [16]).", "The natural guess is that if $M_0-\\int _{{\\mathbb {R}}^n} u dx$ remains positive, our model has a structure similar to that of the Fujita equation.", "However, the result obtained in our previous paper [4] gives the opposite conclusion.", "That is, for $1<\\alpha <1+2/n$ , our model admits a global classical solution as long as the initial value $u_0(x)$ is nontrivial."
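To make the competition between the two mechanisms concrete, the following radially symmetric finite-difference sketch (a minimal illustration, not the numerical scheme used later in this paper) integrates (REF ) for $n=3$ , $m=2$ , $\\alpha =2$ , so that $\\alpha <m+2/n$ , starting from a Gaussian initial datum with $m_0<M_0$ . All grid and time-step choices are illustrative assumptions; the run simply exhibits the two facts noted above: the total mass increases monotonically towards the carrying capacity $M_0$ , and the solution stays bounded in the subcritical regime.

```python
import numpy as np

# Radially symmetric explicit scheme (n = 3) for
#   u_t = Laplace(u^m) + chi * u^alpha * (M0 - mass(t)),
# in the subcritical regime alpha < m + 2/n. Illustrative parameters only.

n, m, alpha, chi, M0 = 3, 2.0, 2.0, 1.0, 2.0     # subcritical: 2 < 2 + 2/3
J, dr, dt, steps = 200, 0.1, 1.0e-4, 10000       # run to t = 1.0
r = (np.arange(J) + 0.5) * dr                    # cell centres
r_face = np.arange(J + 1) * dr                   # cell interfaces
vol = 4.0 * np.pi * r**2 * dr                    # spherical shell volumes

u = 0.18 * np.exp(-r**2)                         # initial mass ~ 1 < M0
mass_history = []
for step in range(steps):
    w = u**m
    flux = np.zeros(J + 1)                       # r^2 * d(u^m)/dr at the faces
    flux[1:-1] = r_face[1:-1]**2 * (w[1:] - w[:-1]) / dr
    lap = (flux[1:] - flux[:-1]) / (r**2 * dr)   # conservative radial Laplacian
    mass = np.sum(u * vol)
    u = u + dt * (lap + chi * u**alpha * (M0 - mass))
    mass_history.append(mass)

print(f"mass: {mass_history[0]:.3f} -> {mass_history[-1]:.3f} (M0 = {M0})")
print(f"monotone increase: {np.all(np.diff(mass_history) >= 0)}")
print(f"sup norm at t = {steps * dt:.1f}: {u.max():.3f}")
```

The conservative flux form makes the diffusion term mass-preserving on the grid, so the discrete mass changes only through the nonlocal reaction and remains below $M_0$ , mirroring the formal mass identity stated above.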
], [ "Our results for the degenerate case $m>1$", "In the degenerate case $m>1,$ before proceeding further, let us state the notion of weak solutions we will deal throughout this paper with: Definition 1.1 (Weak solution) Let $u_0$ be an initial condition satisfying $u_0 \\in L^1 ({\\mathbb {R}}^n;(1+|x|^2)dx ) \\cap L^\\infty ({\\mathbb {R}}^n),~~\\nabla u_0^m \\in L^2({\\mathbb {R}}^n),~~u_0 \\ge 0,~~\\int _{{\\mathbb {R}}^n} u_0 dx <M_0$ and $T \\in (0,\\infty ]$ .", "The non-negative functions defined in ${\\mathbb {R}}^n \\times [0,T)$ is called a weak solution of (REF ) on $[0,T)$ if Regularity: $u \\in L^\\infty (0,T;L^1 \\cap L^\\infty ({\\mathbb {R}}^n)),~u^m \\in L^2(0,T;H^1({\\mathbb {R}}^n)).$ $u$ satisfies the equation in the sense of distribution, i.e.", "that $& \\int _0^T \\int _{{\\mathbb {R}}^n} \\nabla u^m \\cdot \\nabla \\varphi dxdt-\\chi \\int _0^T \\int _{{\\mathbb {R}}^n} u^\\alpha \\varphi dx \\left( M_0-\\int _{{\\mathbb {R}}^n} u dx \\right) dt \\nonumber \\\\& =\\int _{{\\mathbb {R}}^n} u_0(x) \\varphi (x,0)dx+\\int _0^T \\int _{{\\mathbb {R}}^n} u \\varphi _t dxdt $ for any continuously differentiable function $\\varphi $ with compact support in ${\\mathbb {R}}^n \\times [0,T).$ The main results of this work can be listed as follows.", "For the subcritical case $1\\le \\alpha <m+2/n$ , the following theorem gives the existence of a time global weak solution.", "Theorem 1.2 (Uniform boundedness in the subcritical case $1\\le \\alpha <m+2/n$ ) Let $n \\ge 3, m>1, 1 \\le \\alpha <m+2/n.$ For any $T>0,$ under assumption (REF ), there exists a weak solution $u$ to (REF ) on $[0,T)$ .", "Moreover, $u$ is uniformly bounded, i.e.", "that there exists a constant $C=C\\left( \\Vert u_0\\Vert _{L^1({\\mathbb {R}}^n)},\\Vert u_0\\Vert _{L^\\infty ({\\mathbb {R}}^n)},m,\\alpha ,n,\\chi ,m_0,M_0 \\right)$ such that for all $1\\le k \\le \\infty $ $\\displaystyle \\sup _{0<t<T} \\Vert u(\\cdot ,t)\\Vert _{L^k({\\mathbb {R}}^n)} \\le C.$ Remark 1.3 For the subcritical case, using the mass invariant scaling, the reaction with nonlocal term dominates the diffusion for low density and prevent spreading.", "While for high density, the diffusion dominates the reaction, thus blow-up is precluded.", "Remark 1.4 As pointed out in (REF ) with the absence of the nonlocal term $\\chi u^\\alpha \\int _{{\\mathbb {R}}^n} u dx$ , global solutions cannot exist for $1<\\alpha <m+2/n,$ see [1].", "While Theorem REF shows that the solution of (REF ) exists globally without any restriction on the size of the initial data.", "Remark 1.5 For $\\alpha =1,$ consider a solution $u$ of (REF ) and define the rescaled function $v$ by: $u(x,t)=\\frac{1}{R^n(t)} v\\left( \\frac{x}{R(t)},\\tau (t) \\right)=\\frac{1}{R^n(t)} v(y,\\tau )$ with $R(t)=\\left( 1+\\mu t \\right)^{\\frac{1}{\\mu }},~~\\tau (t)=\\log R(t),$ where $\\mu =nm-n+2$ .", "The rescaled system is $\\left\\lbrace \\begin{array}{ll}\\frac{\\partial v}{\\partial \\tau }=\\Delta v^m +\\nabla \\cdot (vy)+\\chi v \\left( M_0-\\int _{{\\mathbb {R}}^n} v dy \\right) e^{\\mu \\tau }, & y \\in {\\mathbb {R}}^n, ~\\tau >0, \\\\v(\\cdot ,\\tau =0)=u_0 \\ge 0, & y \\in {\\mathbb {R}}^n.\\end{array}\\right.$ Integrating (REF ) over ${\\mathbb {R}}^n$ we obtain $\\left\\lbrace \\begin{array}{ll}\\frac{d m(\\tau )}{d\\tau }=\\chi e^{\\mu \\tau } m(\\tau )\\left( M_0-m(\\tau ) \\right), \\\\m(0)=m_0.\\end{array}\\right.$ As a consequence, $M_0-m(\\tau )=\\frac{M_0}{\\frac{m_0}{M_0-m_0}e^{-\\frac{\\chi M_0}{\\mu }}e^{\\frac{\\chi M_0}{\\mu }e^{\\mu \\tau 
}}+1}$ which tells us that $\\frac{\\partial v}{\\partial \\tau }=\\Delta v^m +\\nabla \\cdot (v y)+\\chi v \\frac{M_0 e^{\\mu \\tau } }{C_1 e^{C_2 e^{\\mu \\tau }}+1},$ where $C_2=\\frac{\\chi M_0}{\\mu }, C_1=\\frac{m_0}{M_0-m_0}e^{-C_2}.$ For all $m>1,$ the equation (REF ) has a unique integrable stationary solution.", "Computing it we get $v_{\\infty ,M_0}=\\left( C_{M_0}-\\frac{m-1}{2m}|y|^2 \\right)^{\\frac{1}{m-1}}.$ Here the mass $M_0$ of the steady state $v_{\\infty ,M_0}$ fixes $C_{M_0}$ , i.e.", "calculating $M_0=\\int _{{\\mathbb {R}}^n} v_{\\infty ,M_0}(y) dy$ one finds that $C_{M_0}=\\left( \\frac{m-1}{2m} \\right)^{\\frac{n(m-1)}{\\mu }} \\left( \\frac{n \\alpha _n}{2}\\mathcal {B} \\left( \\frac{n}{2},\\frac{m}{m-1} \\right) \\right)^{-\\frac{2(m-1)}{\\mu }} M_0^{\\frac{2(m-1)}{\\mu }},$ where $\\alpha _n=\\frac{\\pi ^{n/2}}{\\Gamma (n/2+1)}$ .", "The behavior of the solution $u$ can be described for large $t$ by $u_{\\infty ,M_0}=\\left( \\frac{ C_{M_0}(1+\\mu t)^{\\frac{2}{\\mu }}-\\frac{m-1}{2m} |x|^2}{1+\\mu t} \\right)^{\\frac{1}{m-1}}$ on expanding sets of the form $|x|<\\sqrt{\\frac{2mC_{M_0}}{m-1}} \\left( 1+\\mu t \\right)^{\\frac{1}{\\mu }}.$ Let us now discuss the critical case $\\alpha =m+2/n$ in which the weak solution exists globally in time for small capacity of the total mass $M_0.$ Theorem 1.6 (Decay properties in the critical case $\\alpha =m+2/n$ ) Let $n \\ge 3, m>1, \\alpha =m+2/n.$ Let $u_0$ be an initial data satisfying (REF ) and the capacity $M_0$ satisfies $M_0 \\le M_\\ast $ where $M_\\ast $ is expressed as $M_\\ast =\\left( \\frac{S_n (\\alpha -m)}{\\chi }\\right)^{\\frac{1}{\\alpha -m+1}} \\frac{\\alpha -m+1}{\\alpha -m}$ with $S_n=\\frac{n(n-2)}{4}2^{\\frac{2}{n}}\\pi ^{1+\\frac{1}{n}}\\Gamma \\left(\\frac{n+1}{2}\\right)^{-\\frac{2}{n}}.$ Then there exists a weak solution $u$ to (REF ) with the following decay property that for any $t>0$ $\\Vert u(\\cdot ,t)\\Vert _{L^k({\\mathbb {R}}^n)} \\le C_1(1+t)^{-\\frac{k-1}{k(m+2/n-1)}},~~1<k<\\infty $ and the uniform estimate $\\Vert u(\\cdot ,t)\\Vert _{L^\\infty ({\\mathbb {R}}^n)} \\le C_2,$ where $C_1, C_2$ are constants depending only on $\\Vert u_0\\Vert _{L^1 \\cap L^\\infty ({\\mathbb {R}}^n)}$ , $m$ , $n$ , $\\chi , m_0,M_0$ .", "For the supercritical case $\\alpha >m+2/n,$ we present the decay property of the weak solution to (REF ) under the smallness assumption on $\\Vert u_0\\Vert _{L^{\\frac{n(\\alpha -m)}{2}}({\\mathbb {R}}^n)}$ .", "Throughout this paper, we define a constant which is related to the initial condition for the existence results: $C_{p_0}=\\left(\\frac{n+2}{2 }\\right)^{\\frac{1}{p_0}}\\left( \\frac{4S_n m(p_0-1)(n+2)}{n\\chi (p_0+m-1)^2} \\right)^{\\frac{1}{\\alpha -m}} \\frac{m_0^{\\frac{1}{p_0}}}{M_0^{\\frac{1}{p_0}+\\frac{1}{\\alpha -m}}},$ where $S_n$ is defined by (REF ) and $p_0=\\frac{n(\\alpha -m)}{2}.$ Theorem 1.7 (Decay property in the supercritical case $\\alpha >m+2/n$ ) Let $n \\ge 3, m>1, \\alpha >m+2/n.$ Suppose that $u_0$ has the property (REF ) satisfying $\\Vert u_0\\Vert _{L^{\\frac{n(\\alpha -m)}{2}}({\\mathbb {R}}^n)} < C_{p_0},$ then (REF ) has a global weak solution and it holds that for any $t>0$ $&\\Vert u(\\cdot ,t)\\Vert _{L^k({\\mathbb {R}}^n)} \\le C_1(1+t)^{-\\frac{k-1}{k(\\alpha -1)}},~~1<k<\\infty , \\\\&\\Vert u(\\cdot ,t)\\Vert _{L^\\infty ({\\mathbb {R}}^n)} \\le C_2,$ where $C_1,C_2$ are constants depending on $\\Vert u_0\\Vert _{L^1 \\cap L^\\infty ({\\mathbb {R}}^n)}, n, \\alpha , m,\\chi , m_0,M_0$ .", "Remark 1.8 For the 
supercritical case $\\alpha >m+2/n,$ from the perspective of scaling analysis, the diffusion becomes much more influential than the reaction for low density and the density exhibits infinite-time spreading.", "An interesting conclusion is that the higher the norm under consideration, the faster the time decay.", "Let us mention that this model shares many common features with the nonlinear Schrödinger equation and the unstable thin film equation, such as the competition between attractive and repulsive terms.", "As in our scaling analysis, the balance between reaction (attraction) and diffusion (repulsion) happens precisely for our chosen exponent $\\alpha =m+2/n$ .", "In the nonlinear Schrödinger equation, Weinstein [34] proposed the existence of the critical exponent $\\sigma =2/n$ that separates those equations that only have local solutions from those that do not, see [23].", "In the unstable thin film equation, $m=n+2$ is the critical exponent separating equations with possible finite-time blow-up from problems where the solutions are always bounded [14].", "A comprehensive discussion of how scaling properties of the equations relate to infinite-time diffusive spreading and finite-time blow-up can be found in [35]; see [2], [3] for the subcritical case $m<n+2$ , where blow-up is impossible, and for the supercritical case $m>n+2$ , where solutions blowing up in finite time exist." ], [ "Structure of the paper", "This paper is organized as follows.", "Section prepares some preliminary lemmas.", "The following sections are devoted to the detailed proof of the existence of weak solutions for the three cases.", "The point here is to establish the result with all necessary details: regularized problem, uniform estimates, passing to the limit in the regularization parameter.", "Precisely, in Section , a regularized equation is constructed for which a global strong solution exists.", "Firstly, a criterion for the maximal time of local existence of solutions to the regularized problem is established, see Proposition REF .", "Then a priori estimates for the local solution are derived, see Proposition REF for a detailed study of the regularity properties of the solutions.", "Furthermore, by the Moser iteration method, we prove that the solution is uniformly bounded in $L^\\infty $ for almost every positive $t$ , see Proposition REF .", "Section establishes the global existence of a weak solution to (REF ) by passing the regularization parameter to zero.", "The main difficulty comes from the nonlocal term $\\int _{{\\mathbb {R}}^n} u dx$ , and we treat the three cases using standard arguments relying on the evolution of the second moment of solutions.", "We also derive the decay rate of global solutions in the critical case and the supercritical case.", "Finally, in Section , a series of numerical experiments is carried out to verify the results of the earlier sections and explore other issues that lie beyond the scope of the analysis.", "Further problems and open questions for the nonlinear dynamics of (REF ) are also addressed using numerical simulations."
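Before turning to the preliminaries, the explicit constants appearing in Remark 1.5 and Theorem 1.6 can be checked numerically. The short script below (with the illustrative choices $n=3$ , $m=2$ , $\\chi =M_0=1$ ) evaluates $C_{M_0}$ from the closed formula and confirms by radial quadrature that the stationary profile $v_{\\infty ,M_0}$ indeed carries mass $M_0$ , and it also evaluates $S_n$ and the critical capacity $M_\\ast $ .

```python
import numpy as np
from scipy.special import gamma, beta
from scipy.integrate import quad

# Numerical check of the constants above, for illustrative parameter values.
n, m, chi, M0 = 3, 2.0, 1.0, 1.0
mu = n * (m - 1) + 2
alpha_n = np.pi**(n / 2) / gamma(n / 2 + 1)

# (i) C_{M0} from the closed formula, then the mass of the stationary profile
#     v = (C - (m-1)/(2m) |y|^2)_+^{1/(m-1)} recovered by radial quadrature.
C = ((m - 1) / (2 * m))**(n * (m - 1) / mu) \
    * (n * alpha_n / 2 * beta(n / 2, m / (m - 1)))**(-2 * (m - 1) / mu) \
    * M0**(2 * (m - 1) / mu)
b = (m - 1) / (2 * m)
R = np.sqrt(C / b)                                # radius of the support
surf = 2 * np.pi**(n / 2) / gamma(n / 2)          # area of the unit sphere
mass, _ = quad(lambda r: surf * r**(n - 1) * (C - b * r**2)**(1 / (m - 1)), 0, R)
print(f"C_M0 = {C:.6f}, recovered mass = {mass:.6f} (target M0 = {M0})")

# (ii) The critical capacity M_* of Theorem 1.6 at alpha = m + 2/n.
S_n = n * (n - 2) / 4 * 2**(2 / n) * np.pi**(1 + 1 / n) * gamma((n + 1) / 2)**(-2 / n)
am = 2.0 / n                                      # alpha - m at criticality
M_star = (S_n * am / chi)**(1 / (am + 1)) * (am + 1) / am
print(f"S_n = {S_n:.4f}, M_* = {M_star:.4f}")
```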
], [ "Preliminaries", "Before showing the global existence, we shall prepare several lemmas which will be used often in the next sections.", "Lemma 2.1 ([19]) Let $n\\ge 3$ .", "Suppose $u\\in H^1({\\mathbb {R}}^n)$ .", "Then $u\\in L^{\\frac{2n}{n-2}}({\\mathbb {R}}^n)$ and the following holds: $S_n \\Vert u\\Vert _{L^{\\frac{2n}{n-2}}({\\mathbb {R}}^n)}^2\\le \\Vert \\nabla u \\Vert _{L^2({\\mathbb {R}}^n)}^2,$ where $S_n$ is defined in (REF ).", "Lemma 2.2 Let $n\\ge 3$ , $1<\\frac{b}{a}<\\frac{2n}{a(n-2)}$ and $\\frac{b}{a}<\\frac{2}{a}+\\frac{2}{n}$ .", "Assume $w\\in L_+^1({\\mathbb {R}}^n)$ and $w^{\\frac{1}{a}}\\in H^1({\\mathbb {R}}^n)$ with $a>0$ , then $\\Vert w\\Vert _{L^\\frac{b}{a}({\\mathbb {R}}^n)}^{\\frac{b}{a}} \\le S_n^{\\frac{-\\lambda b}{2}}\\Vert w\\Vert _{L^1({\\mathbb {R}}^n)}^{\\frac{b}{a}(1-\\lambda )}\\Vert \\nabla w^{\\frac{1}{a}}\\Vert _{L^2({\\mathbb {R}}^n)}^{b\\lambda },$ where $\\lambda =\\frac{1/a-1/b}{1/a-\\frac{n-2}{2n}}$ .", "Proof.", "We take $u=w^{\\frac{1}{a}}$ in Lemma REF and employ Hölder inequality with $1<\\frac{b}{a}<\\frac{2n}{a(n-2)}$ yield $\\Vert w\\Vert _{L^{\\frac{b}{a}}({\\mathbb {R}}^n)} \\le \\Vert w\\Vert _{L^1({\\mathbb {R}}^n)}^{1-\\lambda }\\Vert w^{\\frac{1}{a}}\\Vert _{L^{\\frac{2n}{n-2}}({\\mathbb {R}}^n)}^{\\lambda a} \\le S_n^{-\\frac{\\lambda a}{2}}\\Vert w\\Vert _{L^1({\\mathbb {R}}^n)}^{1-\\lambda }\\Vert \\nabla w^{\\frac{1}{a}}\\Vert _{L^2({\\mathbb {R}}^n)}^{\\lambda a},$ where $\\lambda =\\frac{1/a-1/b}{1/a-\\frac{n-2}{2n}}$ .", "$\\Box $ The following lemma which have been proved in [4] will play an important role in the proof of global existence of solutions to equation (REF ).", "Lemma 2.3 ([4]) (Gagliardo-Nirenberg-Sobolev inequality) Let $n\\ge 3$ , $p=\\frac{2n}{n-2}$ , $1\\le r<q<p$ and $\\frac{q}{r}<\\frac{2}{r}+1-\\frac{2}{p}$ , then for any $w\\in H^1({\\mathbb {R}}^n)$ and $w\\in L^r({\\mathbb {R}}^n)$ , it holds $\\Vert w\\Vert _{L^q({\\mathbb {R}}^n)}^q\\le C_0\\Vert \\nabla w\\Vert _{L^2({\\mathbb {R}}^n)}^2+\\left(1-\\frac{\\lambda q}{2}\\right)\\left( \\frac{2 S_n C_0}{\\lambda q} \\right)^{-\\frac{\\lambda q}{2-\\lambda q}} \\Vert w\\Vert _{L^r({\\mathbb {R}}^n)}^{\\frac{2(1-\\lambda )q}{2-\\lambda q}},~~n \\ge 3,$ where $\\lambda =\\frac{\\frac{1}{r}-\\frac{1}{q}}{\\frac{1}{r}-\\frac{1}{p}}\\in (0,1)$ and $S_n$ is given by (REF ).", "Lemma 2.4 ([7]) Assume $y(t)\\ge 0$ is a $C^1$ function for $t>0$ satisfying $y^{\\prime }(t)\\le \\eta -\\beta y(t)^a$ for $\\eta >0$ , $\\beta >0$ , then (i)  For $a>1$ , $y(t)$ has the following hyper-contractive property $y(t)\\le (\\eta /\\beta )^{\\frac{1}{a}}+\\left( \\frac{1}{\\beta (a-1)t} \\right)^{\\frac{1}{a-1}}, ~~ \\mbox{for any} ~~ t>0.$ In addition, if $y(0)$ is bounded, then $y(t)\\le \\max \\left( y(0),(\\eta /\\beta )^{\\frac{1}{a}} \\right).$ (ii)  For $a=1,$ $y(t)$ decays exponentially $y(t) \\le \\eta /\\beta +y(0)e^{-\\beta t}.$ More generally, we have Lemma 2.5 ([6]) Assume $f(t)\\ge 0$ is a non-increasing function for $t>0$ .", "$y(t)\\ge 0$ is a $C^1$ function and satisfies $y^{\\prime }(t)\\le f(t)-\\beta y(t)^a$ for $a>1, \\beta >0$ .", "Then for any $t_0>0$ one has $y(t)\\le \\left( \\frac{f(t_0)}{\\beta } \\right)^{1/a}+\\left(\\frac{1}{\\beta (a-1)(t-t_0)}\\right)^{\\frac{1}{a-1}},~~\\mbox{for any}~~t>t_0.$" ], [ "Regularized problem", "In order to justify the formal arguments of the priori estimates (which will be given in Proposition REF ), we consider the regularized problem $\\left\\lbrace \\begin{array}{ll}\\frac{\\partial u_\\varepsilon 
(x,t)}{\\partial t}=\\Delta u_\\varepsilon ^m+\\varepsilon \\Delta u_\\varepsilon + \\chi u_\\varepsilon ^{\\alpha } \\left( M_0-\\int _{{\\mathbb {R}}^n} u_\\varepsilon dx \\right), ~~& x\\in {\\mathbb {R}}^n,~t>0, \\\\u_\\varepsilon (x,0)=u_{0\\varepsilon }(x),~~& x \\in {\\mathbb {R}}^n.\\end{array}\\right.$ Here we define the convolution $u_{0\\varepsilon }=J_{\\varepsilon }\\ast u_0$ where the regularizing kernel $J_\\varepsilon =\\frac{1}{\\varepsilon ^n}J\\left( \\frac{x}{\\varepsilon } \\right)$ with $J \\in C_0^\\infty ({\\mathbb {R}}^n)$ and $\\int _{{\\mathbb {R}}^n}J dx=1$ so that $\\int _{{\\mathbb {R}}^n} J_\\varepsilon dx=1.$ $u_{0\\varepsilon }$ satisfies $\\Vert u_{0\\varepsilon }\\Vert _{L^1({\\mathbb {R}}^n)} <M_0$ and there exists $\\delta >0$ such that for all $0<\\varepsilon <\\delta $ $\\left\\lbrace \\begin{array}{ll}(i)~u_{0\\varepsilon }\\in L^q({\\mathbb {R}}^n) ~\\mbox{and}~ \\Vert u_{0\\varepsilon }\\Vert _{L^q({\\mathbb {R}}^n)} \\le \\Vert u_0\\Vert _{L^q({\\mathbb {R}}^n)}~ \\mbox{for all}~ 1 \\le q \\le \\infty , \\\\(ii)~0\\le u_{0\\varepsilon }\\in L^1 \\cap W^{2,q}({\\mathbb {R}}^n)~ \\mbox{for all}~q \\in [1,n+3], \\\\(iii)~u_{0\\varepsilon } \\rightarrow u_0~\\mbox{strongly in}~L^q({\\mathbb {R}}^n)~\\mbox{as}~\\varepsilon \\rightarrow 0,~\\mbox{for some}~q\\in [1,\\infty ), \\\\(iv)~\\left\\Vert \\nabla u_{0\\varepsilon }^m \\right\\Vert _{L^2({\\mathbb {R}}^n)} \\le \\left\\Vert \\nabla u_0^m \\right\\Vert _{L^2({\\mathbb {R}}^n)}, \\\\[1mm](v)\\int _{{\\mathbb {R}}^n} |x|^2 u_{0\\varepsilon } dx \\rightarrow \\int _{{\\mathbb {R}}^n} |x|^2 u_{0} dx ~\\mbox{as}~\\varepsilon \\rightarrow 0.", "\\\\\\end{array}\\right.$ We denote $Q_T={\\mathbb {R}}^n \\times [0,T)$ and $&W^{2,1}_q(Q_T):=\\left\\lbrace u \\in L^q(0,T;W^{2,q}({\\mathbb {R}}^n))~\\mbox{and}~u_t \\in L^q(0,T;L^q({\\mathbb {R}}^n))\\right\\rbrace , \\\\&W(Q_T)=W^{2,1}_{\\frac{n}{n-1}} \\cap W^{2,1}_{n+3}(Q_T).$ This section aims to prove the time global solution of (REF ) which reads: Theorem 3.1 (Time global strong solution) Let $n \\ge 3, \\alpha \\ge 1, m>1.$ Suppose that $u_{0\\varepsilon }$ satisfies (REF ), then (REF ) has the unique strong solution in $W(Q_T)$ for all $T>0.$ For the proof of Theorem REF , it suffices to show the following three propositions: Proposition REF , Proposition REF , Proposition REF .", "We first establish the local existence and blow-up criteria, then we show that the local solution admits the uniformly boundedness for extension in time.", "Proposition 3.2 (Time local existence and blow-up criteria) Let $n \\ge 3, \\alpha \\ge 1, m>1.$ Suppose that $u_{0\\varepsilon }$ satisfies (REF ), then there exists a number $T_{\\max }=T\\left( \\Vert u_{0\\varepsilon }\\Vert _{W^{2,n+2}({\\mathbb {R}}^n)},m,\\alpha ,n,\\chi \\right)>0$ such that $u_\\varepsilon (x,t) \\in W^{2,1}_{n+2}(Q_T) \\cap L^\\infty (0,T;L^1 \\cap L^\\infty ({\\mathbb {R}}^n))$ is a non-negative strong solution of (REF ).", "Furthermore, if $T_{\\max }<\\infty ,$ then we have $\\displaystyle \\limsup _{t \\rightarrow T_{\\max }} \\Vert u_\\varepsilon (\\cdot ,t)\\Vert _{L^\\infty ({\\mathbb {R}}^n)}=\\infty .$ Proof.", "The proof will be carried out as follows.", "The first route we shall follow here is to consider the problem $\\left\\lbrace \\begin{array}{ll}\\frac{\\partial u_\\varepsilon (x,t)}{\\partial t}=\\nabla \\cdot \\left( (mu_\\varepsilon ^{m-1}+\\varepsilon )\\nabla u_\\varepsilon \\right)+\\chi u_\\varepsilon ^\\alpha \\left( M_0-\\int _{{\\mathbb {R}}^n} h dx \\right), ~~& 
x\\in {\\mathbb {R}}^n,~t>0, \\\\u_\\varepsilon (x,0)=u_{0\\varepsilon }(x),~~& x \\in {\\mathbb {R}}^n,\\end{array}\\right.$ where $h \\in L^\\infty (0,T;L^1({\\mathbb {R}}^n))$ is a non-negative function.", "We show the local existence of a strong solution of (REF ).", "Secondly, by the fixed point theorem we prove the local existence of the strong solution of (REF ).", "Finally, we state that the local solution satisfies a blow-up criterion and thus close the proof.", "In the following precise discussions, we will deal with the nonlinear reaction $\\alpha >1$ and the linear reaction $\\alpha =1$ respectively.", "Step 1 (Local existence of the non-negative solution of (REF ))   In this step, in order to prove the existence of a strong solution $u_\\varepsilon $ in (REF ), we observe the equation: $(u_\\varepsilon )_t &=\\nabla \\cdot \\left((mf^{m-1}+\\varepsilon )\\nabla u_\\varepsilon \\right)+\\chi f^{\\alpha -1}u_\\varepsilon \\left(M_0-\\int _{{\\mathbb {R}}^n}hdx \\right) \\nonumber \\\\&=(mf^{m-1}+\\varepsilon )\\Delta u_\\varepsilon +m \\nabla f^{m-1}\\cdot \\nabla u_\\varepsilon +\\chi f^{\\alpha -1}u_\\varepsilon \\left(M_0-\\int _{{\\mathbb {R}}^n}hdx \\right).$ Here $\\alpha >1.$ The proof is refined in the spirit of [9], [25].", "We shall use the notation $X_T:=\\lbrace & f\\in L^\\infty (0,T;W^{2,n+2}({\\mathbb {R}}^n)\\cap L^\\infty (0,T;L^1 \\cap L^\\infty ({\\mathbb {R}}^n)), f_t \\in L^{n+2}(Q_T): \\nonumber \\\\& f \\ge 0, ~\\Vert f\\Vert _{L^\\infty (0,T;W^{2,n+2}({\\mathbb {R}}^n)}+\\Vert f\\Vert _{L^\\infty (0,T;L^1 \\cap L^\\infty ({\\mathbb {R}}^n))}+\\Vert f_t\\Vert _{L^{n+2}(Q_T)} \\nonumber \\\\& \\qquad ~~\\le c_1 \\Vert u_{0\\varepsilon }\\Vert _{W^{2,n+2}({\\mathbb {R}}^n)}+c_2 \\Vert u_{0\\varepsilon }\\Vert _{L^1 \\cap L^\\infty ({\\mathbb {R}}^n)}+c_3 \\rbrace $ for some constants $c_1,c_2,c_3$ only depending on $m,\\alpha ,n,\\chi ,M_0.$ By Theorem 9.1 of [18] with $f \\in X_T$ and $h \\in L^\\infty (0,T;L^1({\\mathbb {R}}^n))$ , it follows that (REF ) corresponding to the initial data $u_{0\\varepsilon }$ has the unique strong solution $u^f_\\varepsilon \\in W(Q_T).$ Hence we can define a mapping $\\Phi $ by $\\Phi :~f \\in X_T \\mapsto u^f_\\varepsilon \\in W(Q_T).$ Now we claim that $u^f_\\varepsilon \\ge 0 \\in L^\\infty (0,T;L^1 \\cap L^\\infty ({\\mathbb {R}}^n)).$ In the following, we use $u_\\varepsilon $ instead of $u^f_\\varepsilon $ for simplicity.", "Multiplying (REF ) by $|u_\\varepsilon |^{k-2}u_\\varepsilon (k>1)$ yields $\\frac{1}{k} \\frac{d}{dt} \\int _{{\\mathbb {R}}^n} |u_\\varepsilon |^k dx&=-(k-1)\\int _{{\\mathbb {R}}^n} (mf^{m-1}+\\varepsilon )|u_\\varepsilon |^{k-2}|\\nabla u_\\varepsilon |^2dx+\\chi \\int _{{\\mathbb {R}}^n} f^{\\alpha -1}|u_\\varepsilon |^k dx \\left( M_0-\\int _{{\\mathbb {R}}^n}h dx \\right) \\\\&\\le \\chi M_0 \\Vert f\\Vert _{L^\\infty (Q_T)}^{\\alpha -1} \\int _{{\\mathbb {R}}^n} |u_\\varepsilon |^k dx$ which follows that for $k>1$ $\\frac{d}{dt} \\Vert u_\\varepsilon \\Vert _{L^k({\\mathbb {R}}^n)} \\le \\chi M_0 \\Vert f\\Vert _{L^\\infty (Q_T)}^{\\alpha -1}\\Vert u_\\varepsilon \\Vert _{L^k({\\mathbb {R}}^n)}.$ Thus we have $\\Vert u_\\varepsilon \\Vert _{L^k({\\mathbb {R}}^n)} \\le \\Vert u_{0\\varepsilon }\\Vert _{L^k({\\mathbb {R}}^n)}+\\chi M_0 \\Vert f\\Vert _{L^\\infty (Q_T)}^{\\alpha -1} \\int _0^t \\Vert u_\\varepsilon \\Vert _{L^k({\\mathbb {R}}^n)} ds.$ Taking $k \\rightarrow \\infty $ and using Gronwall inequality assure that $\\displaystyle \\sup _{0<t<T} \\Vert u_\\varepsilon (t)\\Vert 
_{L^\\infty ({\\mathbb {R}}^n)} \\le \\Vert u_{0\\varepsilon }\\Vert _{L^\\infty ({\\mathbb {R}}^n)}e^{\\chi M_0\\Vert f\\Vert _{L^\\infty (Q_T)}^{\\alpha -1}~T}.$ The nonnegativity of $u_\\varepsilon $ can be obtained by multiplying (REF ) with $u_\\varepsilon ^-:=-\\min (u_\\varepsilon ,0)$ that $\\frac{1}{2}\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} |u_\\varepsilon ^-|^2 dx &=-\\int _{{\\mathbb {R}}^n} (mf^{m-1}+\\varepsilon )|\\nabla u_\\varepsilon ^-|^2 dx+\\chi \\int _{{\\mathbb {R}}^n}f^{\\alpha -1} |u_\\varepsilon ^-|^2 dx \\left(M_0-\\int _{{\\mathbb {R}}^n} h dx \\right) \\\\& \\le \\chi M_0\\Vert f\\Vert _{L^\\infty (Q_T)}^{\\alpha -1}\\int _{{\\mathbb {R}}^n} |u_\\varepsilon ^-|^2 dx.$ It follows $\\displaystyle \\sup _{0<t<T} \\Vert u_\\varepsilon ^-(\\cdot ,t)\\Vert _{L^2({\\mathbb {R}}^n)} \\le e^{\\chi M_0 \\Vert f\\Vert _{L^\\infty (Q_T)}^{\\alpha -1} T }~\\Vert u_{0\\varepsilon }^-(\\cdot ,0)\\Vert _{L^2({\\mathbb {R}}^n)}=0$ which guarantees that for all $0 \\le t <T$ $u_\\varepsilon (x,t) \\ge 0,~~a.e.~~x \\in {\\mathbb {R}}^n.$ This allows us to integrate (REF ) over ${\\mathbb {R}}^n$ $\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u_\\varepsilon dx=\\chi \\int _{{\\mathbb {R}}^n}f^{\\alpha -1}u_\\varepsilon dx \\left(M_0-\\int _{{\\mathbb {R}}^n} h dx \\right)$ to obtain $\\displaystyle \\sup _{0<t<T} \\Vert u_\\varepsilon (t)\\Vert _{L^1({\\mathbb {R}}^n)} \\le \\Vert u_{0\\varepsilon }\\Vert _{L^1({\\mathbb {R}}^n)}e^{\\chi M_0 \\Vert f\\Vert _{L^\\infty (Q_T)}^{\\alpha -1}~T}.$ Now we can see that there exists $T_\\ast =T_\\ast (\\varepsilon ,\\Vert h\\Vert _{L^\\infty (0,T;L^1({\\mathbb {R}}^n))},\\Vert u_{0\\varepsilon }\\Vert _{W^{2,n+2}({\\mathbb {R}}^n)},\\Vert u_{0\\varepsilon }\\Vert _{L^1 \\cap L^\\infty ({\\mathbb {R}}^n)}, T)$ such that $\\Phi $ maps $X_{T_\\ast }$ into itself.", "Considering the complete metric space $(X_T,d)$ where $d$ is defined by $d(f_1-f_2)=\\Vert f_1-f_2\\Vert _{L^\\infty (0,T;L^{n+2}({\\mathbb {R}}^n))}$ , we denote $u_1=u_\\varepsilon ^{f_1},~~u_2=u_\\varepsilon ^{f_2},~~w=u_1-u_2,$ from (REF ) one has $(u_1-u_2)_t= &\\nabla \\cdot \\left( m( f_1^{m-1}-f_2^{m-1} )\\nabla u_1+(mf_2^{m-1}+\\varepsilon )\\nabla (u_1-u_2)\\right) \\nonumber \\\\& + \\chi \\left(( f_1^{\\alpha -1}-f_2^{\\alpha -1} )u_1+f_2^{\\alpha -1}(u_1-u_2) \\right)\\left( M_0-\\int _{{\\mathbb {R}}^n} h dx \\right).$ The multiplication (REF ) by $|w|^n w$ gives rise to $& \\frac{1}{n+2}\\frac{d}{dt}\\int _{{\\mathbb {R}}^n}|w|^{n+2} dx=-(n+1)\\int _{{\\mathbb {R}}^n} (mf_2^{m-1}+\\varepsilon )|\\nabla (u_1-u_2)|^2 |u_1-u_2|^n dx \\nonumber \\\\& -(n+1)m \\int _{{\\mathbb {R}}^n} (f_1^{m-1}-f_2^{m-1})|u_1-u_2|^n \\nabla u_1 \\cdot \\nabla (u_1-u_2) dx \\nonumber \\\\& +\\chi \\int _{{\\mathbb {R}}^n} (f_1^{\\alpha -1}-f_2^{\\alpha -1})u_1(u_1-u_2)|u_1-u_2|^n dx \\left(M_0-\\int _{{\\mathbb {R}}^n} h dx\\right) \\nonumber \\\\& +\\chi \\int _{{\\mathbb {R}}^n} f_2^{\\alpha -1}|u_1-u_2|^{n+2} dx \\left(M_0-\\int _{{\\mathbb {R}}^n} h dx \\right) \\nonumber \\\\& :=I_1+I_2+I_3+I_4.$ By Young's inequality we learn that $I_2 &\\le C(m,n,\\Vert f\\Vert _{L^\\infty (Q_T)}) \\int _{{\\mathbb {R}}^n} |f_1-f_2|^{\\min (m-1,1)} |u_1-u_2|^n |\\nabla u_1| ~|\\nabla (u_1-u_2)| dx \\nonumber \\\\&\\le m(n+1)\\varepsilon \\int _{{\\mathbb {R}}^n} |\\nabla (u_1-u_2)|^2 |u_1-u_2|^n dx \\nonumber \\\\&~~ + C\\left(\\frac{1}{\\varepsilon }, m,\\Vert f\\Vert _{L^\\infty (Q_T)}\\right) \\int _{{\\mathbb {R}}^n} |f_1-f_2|^{2\\min (m-1,1)} |\\nabla u_1|^2 |u_1-u_2|^n dx \\nonumber \\\\&\\le 
m(n+1)\\varepsilon \\int _{{\\mathbb {R}}^n} |\\nabla (u_1-u_2)|^2|u_1-u_2|^n dx\\nonumber \\\\&~~ +C\\left(\\frac{1}{\\varepsilon },\\Vert f\\Vert _{L^\\infty (Q_T)},\\Vert u\\Vert _{L^\\infty (0,T;W^{2,n+2}({\\mathbb {R}}^n))},m\\right)\\Vert u_1-u_2\\Vert _{L^{n+2}({\\mathbb {R}}^n)}^n\\Vert f_1-f_2\\Vert _{L^{n+2}({\\mathbb {R}}^n)}^2,$ where the last inequality is given by Hölder inequality.", "Again by virtue of Hölder inequality we observe $& I_3\\le \\chi (M_0+\\Vert h\\Vert _{L^\\infty (0,T;L^1({\\mathbb {R}}^n))}) C(\\alpha ,\\Vert f\\Vert _{L^\\infty (Q_T)}) \\int _{{\\mathbb {R}}^n} |f_1-f_2|^{\\min (\\alpha -1,1)} u_1 |u_1-u_2|^{n+1} dx \\nonumber \\\\& \\le \\chi (M_0+\\Vert h\\Vert _{L^\\infty (0,T;L^1({\\mathbb {R}}^n))}) C\\left(\\alpha ,\\Vert f\\Vert _{L^\\infty (Q_T)},\\Vert u\\Vert _{L^\\infty (Q_T)}\\right) \\Vert f_1-f_2\\Vert _{L^{n+2}({\\mathbb {R}}^n)} \\Vert u_1-u_2\\Vert _{L^{n+2}({\\mathbb {R}}^n)}^{n+1}.$ Substituting (REF ) and (REF ) into (REF ) arrives at $\\frac{d}{dt} \\Vert w\\Vert _{L^{n+2}({\\mathbb {R}}^n)}^2 \\le c_1 \\Vert f_1-f_2\\Vert _{L^{n+2}({\\mathbb {R}}^n)}^2+c_1 \\Vert u_1-u_2\\Vert _{L^{n+2}({\\mathbb {R}}^n)}^2,$ where $c_1$ are constants depending on $\\varepsilon , \\alpha ,m,n, \\chi , \\Vert u_{0\\varepsilon }\\Vert _{W^{2,n+2}({\\mathbb {R}}^n)}, \\Vert u_{0\\varepsilon }\\Vert _{L^1\\cap L^\\infty ({\\mathbb {R}}^n)},\\Vert h\\Vert _{L^\\infty (0,T;L^1({\\mathbb {R}}^n))}$ .", "By Gronwall inequality it holds that $\\displaystyle \\sup _{0<t<T_\\ast } \\Vert w\\Vert _{L^{n+2}({\\mathbb {R}}^n)}^2 \\le c_1 \\Vert f_1-f_2\\Vert _{L^2(0,T_\\ast ;L^{n+2}({\\mathbb {R}}^n))}^2 e^{c_1 T_\\ast }.$ Therefore, there exists $T_1=T_1(c_1) \\le T_\\ast $ such that $\\displaystyle \\sup _{0<t<T_1} \\Vert w\\Vert _{L^{n+2}({\\mathbb {R}}^n)}^2 \\le \\frac{1}{2} \\Vert f_1-f_2\\Vert _{L^\\infty (0,T_1;L^{n+2}({\\mathbb {R}}^n))}.$ We find that $\\Phi $ becomes a contraction from $X_{T_1}$ into $X_{T_1}$ which is achieved by Banach fixed point theorem.", "Consequently, $\\Phi $ has a fixed point $f=\\Phi (f)=u_\\varepsilon ^f \\in X_{T_1}$ .", "Hence, there exists $T_1=T_1\\left(\\varepsilon , \\alpha ,m,n,\\chi ,\\Vert u_{0\\varepsilon }\\Vert _{W^{2,n+2}({\\mathbb {R}}^n)}, \\Vert u_{0\\varepsilon }\\Vert _{L^1\\cap L^\\infty ({\\mathbb {R}}^n)},\\Vert h\\Vert _{L^\\infty (0,T;L^1({\\mathbb {R}}^n))} \\right)$ such that there is a desired strong solution $u_\\varepsilon ^f$ of (REF ) on $[0,T_1]$ corresponding to the initial data $u_{0\\varepsilon }$ .", "The proof for the case $\\alpha =1$ is a word for word translation of the proof for $\\alpha >1$ except $\\alpha -1$ is replaced by zero.", "Step 2 (Local existence of the non-negative solution of (REF ))    We firstly claim that the solution $u_\\varepsilon $ is bounded in $L^\\infty (0,T_2;L^\\infty ({\\mathbb {R}}^n))$ ($T_2$ is to be determined) as a consequence of the following computations: $ & \\frac{1}{k} \\frac{d}{dt} \\int _{{\\mathbb {R}}^n} u_\\varepsilon ^k dx=-(k-1)\\int _{{\\mathbb {R}}^n} (m u_\\varepsilon ^{m-1}+\\varepsilon ) u_\\varepsilon ^{k-2}|\\nabla u_\\varepsilon |^2 dx + \\chi \\int _{{\\mathbb {R}}^n} u_\\varepsilon ^{k+\\alpha -1} dx \\left(M_0-\\int _{{\\mathbb {R}}^n} h dx \\right) \\nonumber \\\\\\le &-\\frac{4m(k-1)}{(m+k-1)^2} \\int _{{\\mathbb {R}}^n} |\\nabla u^{\\frac{m+k-1}{2}}|^2 dx + \\chi \\left( M_0+\\Vert h\\Vert _{L^\\infty (0,T_1;L^1({\\mathbb {R}}^n))} \\right) \\int _{{\\mathbb {R}}^n} u_\\varepsilon ^{k+\\alpha -1} dx \\nonumber \\\\= & 
-\\frac{4m(k-1)}{(m+k-1)^2} \\int _{{\\mathbb {R}}^n} |\\nabla u^{\\frac{m+k-1}{2}}|^2 dx + \\bar{A} \\int _{{\\mathbb {R}}^n} u_\\varepsilon ^{k+\\alpha -1} dx,$ where $\\bar{A}=\\chi \\left( M_0+\\Vert h\\Vert _{L^\\infty (0,T_1;L^1({\\mathbb {R}}^n))} \\right)$ .", "For $\\alpha >1$ , we apply $w=u_\\varepsilon ^{\\frac{m+k-1}{2}},~q=\\frac{2(k+\\alpha -1)}{k+m-1},~r=\\frac{2k}{k+m-1},~C_0=\\frac{2m(k-1)}{\\bar{A} (k+m-1)^2}$ in Lemma REF for $k>\\max \\left(\\frac{n(\\alpha -m)}{2},1\\right)$ such that $&\\bar{A} \\Vert u_\\varepsilon \\Vert _{L^{k+\\alpha -1}({\\mathbb {R}}^n)}^{k+\\alpha -1} \\\\\\le & \\frac{2m(k-1)}{(k+m-1)^2} \\int _{{\\mathbb {R}}^n} |\\nabla u_\\varepsilon ^{\\frac{m+k-1}{2}}|^2 dx+C(n,m,\\bar{A}) \\left( \\frac{(k+m-1)^2}{k-1} \\right)^{\\frac{\\lambda q}{2-\\lambda q}} \\Vert u_\\varepsilon \\Vert _{L^k({\\mathbb {R}}^n)}^{\\frac{(m+k-1)(1-\\lambda )q}{2-\\lambda q}} \\\\= &\\frac{2m(k-1)}{(k+m-1)^2} \\int _{{\\mathbb {R}}^n} |\\nabla u_\\varepsilon ^{\\frac{m+k-1}{2}}|^2 dx+C(n,m,\\bar{A}) \\left( \\frac{(k+m-1)^2}{k-1} \\right)^{\\frac{n(\\alpha -1)}{2k+n(m-\\alpha )}} \\Vert u_\\varepsilon \\Vert _{L^k({\\mathbb {R}}^n)}^{\\frac{k+\\alpha -1+\\frac{n(m-\\alpha )}{2}}{\\frac{n(m-\\alpha )}{2k}+1}},$ where $\\lambda =\\frac{\\frac{1}{r}-\\frac{1}{q}}{\\frac{1}{r}-\\frac{n-2}{2n}}$ .", "Plugging it into (REF ) we compute $\\Vert u_\\varepsilon \\Vert _{L^k({\\mathbb {R}}^n)} \\le \\Vert u_{0\\varepsilon }\\Vert _{L^k({\\mathbb {R}}^n)}+C(n,m,\\bar{A}) \\left( \\frac{(k+m-1)^2}{k-1} \\right)^{\\frac{n(\\alpha -1)}{2k+n(m-\\alpha )}} \\int _0^t \\Vert u_\\varepsilon (s)\\Vert _{L^k({\\mathbb {R}}^n)}^{\\frac{2(\\alpha -1)}{2+\\frac{n(m-\\alpha )}{k}}+1}ds.$ Taking $k \\rightarrow \\infty $ gives $\\Vert u_\\varepsilon \\Vert _{L^\\infty ({\\mathbb {R}}^n)} \\le \\Vert u_{0\\varepsilon }\\Vert _{L^\\infty ({\\mathbb {R}}^n)}+C(n,m,\\bar{A}) \\int _0^t \\Vert u_\\varepsilon (s)\\Vert _{L^\\infty ({\\mathbb {R}}^n)}^{\\alpha } ds$ which provides the estimate $\\Vert u_\\varepsilon \\Vert _{L^\\infty ({\\mathbb {R}}^n)} \\le \\left( \\frac{1}{C\\left(n,m,\\bar{A}\\right)(\\alpha -1)(\\overline{T}-t)} \\right)^{\\frac{1}{\\alpha -1}},\\quad \\overline{T}=\\frac{\\Vert u_{0\\varepsilon }\\Vert _{L^\\infty ({\\mathbb {R}}^n)}^{1-\\alpha }}{C\\left(n,m,\\bar{A}\\right)(\\alpha -1)}.$ Hence $u_\\varepsilon $ is bounded in $L^\\infty (0,T_2;L^\\infty ({\\mathbb {R}}^n))$ where $T_2=\\frac{\\overline{T}}{2}$ .", "For $\\alpha =1,$ (REF ) directly follows that for any $t>0$ and $k>1$ $\\Vert u_\\varepsilon \\Vert _{L^k({\\mathbb {R}}^n)} \\le \\Vert u_{0\\varepsilon }\\Vert _{L^k({\\mathbb {R}}^n)} e^{\\bar{A} t}.$ Letting $k \\rightarrow \\infty $ we get $\\Vert u_\\varepsilon \\Vert _{L^\\infty ({\\mathbb {R}}^n)} \\le \\Vert u_{0\\varepsilon }\\Vert _{L^\\infty ({\\mathbb {R}}^n)} e^{\\bar{A} t}.$ On the other hand, integrating (REF ) over ${\\mathbb {R}}^n$ we obtain that for $\\alpha \\ge 1$ $\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u_\\varepsilon dx =\\chi \\int _{{\\mathbb {R}}^n} u_\\varepsilon ^\\alpha dx \\left( M_0-\\int _{{\\mathbb {R}}^n} h dx \\right)\\le \\chi M_0 \\Vert u_\\varepsilon \\Vert _{L^\\infty ({\\mathbb {R}}^n)}^{\\alpha -1} \\int _{{\\mathbb {R}}^n} u_\\varepsilon dx.$ By (REF ) and (REF ) we prove that $u_\\varepsilon \\in L^\\infty (0,T_2;L^1({\\mathbb {R}}^n)).$ Next, for $h \\in X_{T_1},$ consider the map $F:~h \\in L^{\\infty }(0,T_1;L^1({\\mathbb {R}}^n)) \\mapsto u_\\varepsilon ^h \\in L^{\\infty }(0,T_1;L^1({\\mathbb {R}}^n)).$ Analogous to 
Step 1, it can be seen that there exists $T_3=T_3(M_0,\alpha ,m,n,\chi ,\Vert u_{0\varepsilon }\Vert _{L^1 \cap L^\infty ({\mathbb {R}}^n)}) \le \min (T_2,T_1)$ such that $F$ is a contraction from $L^{\infty }(0,T_3;L^1({\mathbb {R}}^n))$ into itself, so that Banach's fixed point theorem applies.", "Thus $F$ has a fixed point $h=F(h)=u_\varepsilon ^h \in L^{\infty }(0,T_3;L^1({\mathbb {R}}^n))$, which proves the existence of a solution of (REF ) on the time interval $[0,T_3].$ This is exactly the anticipated result and completes the proof of Proposition REF .", "$\Box $ Proposition 3.3 (A priori estimates in $L^k$ for $1<k<\infty $ )    Let the same assumptions as in Proposition REF hold.", "Suppose that $u_\varepsilon $ is the non-negative strong solution of (REF ) and let $C$ denote a positive constant depending on $\Vert u_{0\varepsilon }\Vert _{L^k({\mathbb {R}}^n)},k,m,\alpha ,n,\chi ,m_0,M_0$ but not on $\varepsilon .$ Then $u_\varepsilon $ satisfies the following estimates: For $1\le \alpha <m+2/n,$ it holds for any $t>0$ that $\Vert u_\varepsilon (\cdot ,t) \Vert _{L^k({\mathbb {R}}^n)} \le C,~~\mbox{for all}~k \in (1,\infty ).$ For $\alpha =m+2/n,$ if $M_0 \le \left( \frac{S_n(\alpha -m)}{\chi } \right)^{\frac{1}{\alpha -m+1}}\frac{\alpha -m+1}{\alpha -m},$ then $u_\varepsilon $ satisfies that for any $t>0$ $\Vert u_\varepsilon (\cdot ,t)\Vert _{L^k({\mathbb {R}}^n)} \le C\left( 1+ t \right)^{-\frac{k-1}{k(m+2/n-1)}},~~\mbox{for all}~~k\in (1,\infty ),$ where $S_n$ is given by (REF ).", "For $\alpha >m+2/n$ , set $p_0=\frac{n(\alpha -m)}{2}$ .", "Now we assume $\Vert u_{0\varepsilon }\Vert _{L^{\frac{n(\alpha -m)}{2}}({\mathbb {R}}^n)}<C_{p_0},$ where $C_{p_0}$ is defined in (REF ).", "Then $u_\varepsilon $ has the following decay property: $\Vert u_\varepsilon (\cdot ,t)\Vert _{L^k({\mathbb {R}}^n)} \le C\left( 1+ t \right)^{-\frac{k-1}{k(\alpha -1)}},~~\mbox{for all}~~k\in (1,\infty ).$ Furthermore, the following regularities hold true for any $T>0$ $& u_\varepsilon \in L^\infty \left(0,T; L^k({\mathbb {R}}^n) \right), \\& u_\varepsilon \in L^{k+\alpha -1}\left(0,T;L^{k+\alpha -1}({\mathbb {R}}^n)\right), \\& \nabla u_\varepsilon ^{\frac{m+k-1}{2}} \in L^2\left(0,T;L^2({\mathbb {R}}^n) \right).$ For the proof of Proposition REF , it suffices to show the following three lemmas.", "For simplicity of presentation, throughout this section we omit the $\varepsilon $ dependence and write $u$ instead of $u_\varepsilon $ .", "We denote by $C$ a positive constant depending not only on $m,n,\alpha ,\chi $ , but also on other associated quantities (indicated explicitly at the different occurrences of $C(\cdot )$ ).", "Most of the a priori estimates are based on the following argument.", "Multiplying (REF ) by $ku^{k-1}(k>1)$ we obtain $& \frac{d}{dt}\int _{{\mathbb {R}}^n} u^k dx +\frac{4mk(k-1)}{(k+m-1)^2}\int _{{\mathbb {R}}^n} |\nabla u^{\frac{k+m-1}{2}}|^2 dx +k\chi \int _{{\mathbb {R}}^n} udx \int _{{\mathbb {R}}^n} u^{k+\alpha -1} dx\nonumber \\= & k\chi M_0\int _{{\mathbb {R}}^n} u^{k+\alpha -1}dx.$ Lemma 3.4 (Case of $1\le \alpha <m+2/n$ ) Let the same assumptions as in Proposition REF hold.", "Then there exists a positive constant $C$ depending on $\Vert u_{0\varepsilon }\Vert _{L^k({\mathbb {R}}^n)},k,$ $m$ , $\alpha $ , $n$ , $\chi , m_0,M_0$ but not on $\varepsilon $ .", "The following
holds true that for any $t>0$ $\\Vert u_\\varepsilon \\Vert _{L^k({\\mathbb {R}}^n)} \\le C,~~\\mbox{for all}~~k\\in (1,\\infty ).$ Proof.", "Using $w^{\\frac{1}{a}}=u^{\\frac{k+m-1}{2}},~~ b=\\frac{2(k+\\alpha -1)}{k+m-1},~~ a=\\frac{2k^{\\prime }}{k+m-1}$ in Lemma REF for $k>\\max \\left\\lbrace \\frac{n(\\alpha -m)}{2}-(\\alpha -1),1 \\right\\rbrace $ and $\\max \\left\\lbrace \\frac{n(\\alpha -m)}{2},1 \\right\\rbrace <k^{\\prime }<k+\\alpha -1$ , it holds that $\\Vert u\\Vert _{L^{k+\\alpha -1}({\\mathbb {R}}^n)}^{k+\\alpha -1} \\le S_n^{-\\frac{\\lambda (k+\\alpha -1)}{k+m-1}} \\Vert \\nabla u^{\\frac{k+m-1}{2}}\\Vert _{L^2({\\mathbb {R}}^n)}^{\\frac{2\\lambda (k+\\alpha -1)}{k+m-1}} \\Vert u^{\\frac{k+m-1}{2}}\\Vert _{L^{\\frac{2k^{\\prime }}{k+m-1}}({\\mathbb {R}}^n)}^{\\frac{2(1-\\lambda )(k+\\alpha -1)}{k+m-1}},$ where $\\lambda =\\frac{\\frac{k+m-1}{2k^{\\prime }}-\\frac{k+m-1}{2(k+\\alpha -1)}}{\\frac{k+m-1}{2k^{\\prime }}-\\frac{n-2}{2n}}$ .", "We further use Young's inequality to get $\\Vert u\\Vert _{L^{k+\\alpha -1}({\\mathbb {R}}^n)}^{k+\\alpha -1} \\le \\frac{m(k-1)}{\\chi M_0(k+m-1)^2}\\Vert \\nabla u^{\\frac{k+m-1}{2}}\\Vert _{L^2({\\mathbb {R}}^n)}^2+C(k,M_0)\\Vert u\\Vert _{L^{k^{\\prime }}({\\mathbb {R}}^n)}^{\\frac{(1-\\lambda )(k+\\alpha -1)}{1-\\frac{\\lambda (k+\\alpha -1)}{k+m-1}}}$ since the choices of $k,k^{\\prime }$ ensure $\\frac{2\\lambda (k+\\alpha -1)}{k+m-1}<2.$ Therefore, by Hölder inequality with $1<k^{\\prime }<k+\\alpha -1$ and plugging (REF ) into (REF ) one has $& \\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u^k dx +\\frac{3mk(k-1)}{(k+m-1)^2}\\int _{{\\mathbb {R}}^n} |\\nabla u^{\\frac{k+m-1}{2}}|^2 dx +k\\chi m_0 \\int _{{\\mathbb {R}}^n} u^{k+\\alpha -1} dx \\nonumber \\\\\\le & C(k,M_0)\\Vert u\\Vert _{L^{k^{\\prime }}({\\mathbb {R}}^n)}^r \\nonumber \\\\\\le & C(k,M_0) \\Vert u\\Vert _{L^{k+\\alpha -1}({\\mathbb {R}}^n)}^{r\\theta }\\Vert u\\Vert _{L^1({\\mathbb {R}}^n)}^{r(1-\\theta )} \\nonumber \\\\\\le & C(k,M_0)\\Vert u\\Vert _{L^{k+\\alpha -1}({\\mathbb {R}}^n)}^{r\\theta },$ where $r=\\frac{(1-\\lambda )(k+\\alpha -1)}{1-\\frac{\\lambda (k+\\alpha -1)}{k+m-1}}$ , $\\theta =\\frac{(k^{\\prime }-1)(k+\\alpha -1)}{k^{\\prime }(k+\\alpha -2)}$ .", "Moreover, a tedious calculation assures $r\\theta <k+\\alpha -1$ if and only if $1\\le \\alpha <m+2/n.$ An immediate application of Young's inequality in (REF ) leads to $\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u^k dx +\\frac{3mk(k-1)}{(k+m-1)^2}\\int _{{\\mathbb {R}}^n} |\\nabla u^{\\frac{k+m-1}{2}}|^2 dx +\\frac{km_0\\chi }{2} \\int _{{\\mathbb {R}}^n} u^{k+\\alpha -1} dx \\le C(k,m_0,M_0).$ Recalling the fact that $m(t) \\le M_0$ , by Hölder inequality we have $\\left( \\Vert u\\Vert _{L^k({\\mathbb {R}}^n)}^k\\right)^{\\frac{k+\\alpha -2}{k-1}} \\le \\Vert u\\Vert _{L^{k+\\alpha -1}({\\mathbb {R}}^n)}^{k+\\alpha -1}\\Vert u\\Vert _{L^1({\\mathbb {R}}^n)}^{\\frac{\\alpha -1}{k-1}} \\le \\Vert u\\Vert _{L^{k+\\alpha -1}({\\mathbb {R}}^n)}^{k+\\alpha -1} M_0^{\\frac{\\alpha -1}{k-1}}.$ Hence (REF ) is equivalent to $\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u^kdx+\\frac{km_0\\chi }{2M_0^{\\frac{\\alpha -1}{k-1}}}\\left( \\Vert u\\Vert _{L^k({\\mathbb {R}}^n)}^k \\right)^{\\frac{k+\\alpha -2}{k-1}} \\le C(k,m_0,M_0).$ Setting $y(t)=\\int _{{\\mathbb {R}}^n} u^kdx,~~ a=\\frac{k+\\alpha -2}{k-1},~~ \\eta =C(k,m_0,M_0)$ in Lemma REF one has that for any $1<k<\\infty $ $\\Vert u(\\cdot ,t)\\Vert _{L^k({\\mathbb {R}}^n)} \\le C, ~~\\mbox{for any } t>0,$ where $C$ is a constant depending on $\\Vert 
u_{0\\varepsilon }\\Vert _{L^k({\\mathbb {R}}^n)}$ , $k, m_0, M_0, \\alpha ,m,n,\\chi $ .", "On the other hand, we integrate (REF ) from 0 to $T$ in time to obtain that for any $T>0$ $&\\int _{{\\mathbb {R}}^n} u^k(T)dx +\\frac{3mk(k-1)}{(k+m-1)^2}\\int _0^T\\int _{{\\mathbb {R}}^n} |\\nabla u^{\\frac{k+m-1}{2}}|^2dxdt +\\frac{k\\chi m_0}{2}\\int _0^T\\int _{{\\mathbb {R}}^n} u^{k+\\alpha -1} dxdt \\nonumber \\\\\\le & \\int _{{\\mathbb {R}}^n} u_{0\\varepsilon }^kdx +C(k,m_0,M_0)T,$ from which we derive that for any $T>0$ and $1<k<\\infty $ $u\\in L^{k+\\alpha -1}\\left(0,T;L^{k+\\alpha -1}({\\mathbb {R}}^n)\\right),~ \\nabla u^{\\frac{k+m-1}{2}} \\in L^2\\left(0,T;L^2({\\mathbb {R}}^n)\\right).$ Thus we complete the proof of Lemma REF .", "$\\Box $ The next goal is to consider the critical exponent case $\\alpha =m+2/n.$ Lemma 3.5 (Case of $\\alpha =m+2/n$ ) Let the same assumptions as that in proposition REF hold.", "If the total mass $M_0$ satisfies $M_0 \\le \\left( \\frac{S_n(\\alpha -m)}{\\chi } \\right)^{\\frac{1}{\\alpha -m+1}}\\frac{\\alpha -m+1}{\\alpha -m},$ then there exists a positive constant $C$ depending on $\\Vert u_{0\\varepsilon }\\Vert _{L^k({\\mathbb {R}}^n)}, k$ , $m$ , $\\alpha $ , $n$ , $\\chi , m_0,M_0$ but not on $\\varepsilon $ such that $u_\\varepsilon $ satisfies $\\Vert u_\\varepsilon \\Vert _{L^k({\\mathbb {R}}^n)} \\le C\\left( 1+ t \\right)^{-\\frac{k-1}{k(m+2/n-1)}},~~\\mbox{for all}~~k\\in (1,\\infty ).$ Proof.", "For any $k>\\frac{n(\\alpha -m)}{2}-(\\alpha -1)$ , keeping the fact $\\alpha =m+2/n$ in mind we get the following estimate $||u||_{L^{k+\\alpha -1}({\\mathbb {R}}^n)}^{k+\\alpha -1} &\\le \\frac{1}{S_n} {||\\nabla u^{\\frac{m+k-1}{2}}||}_{L^2{({\\mathbb {R}}^n)}}^2{||u||}_{L^1({\\mathbb {R}}^n)}^{\\alpha -m}$ by Lemma REF .", "Combining (REF ) with (REF ) we obtain $\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u^kdx + k\\left( \\frac{4mS_n(k-1)}{(k+m-1)^2}\\Vert u\\Vert _{L^1({\\mathbb {R}}^n)}^{-(\\alpha -m)}+ \\chi \\Vert u\\Vert _{L^1({\\mathbb {R}}^n)}-\\chi M_0 \\right) \\int _{{\\mathbb {R}}^n} u^{k+\\alpha -1} dx \\le 0.$ Denote $y=\\Vert u\\Vert _{L^1({\\mathbb {R}}^n)},~~\\gamma (k)=\\frac{4mS_n(k-1)}{(k+m-1)^2},~~f(y)=\\gamma (k)y^{-(\\alpha -m)}+\\chi y-\\chi M_0.$ A straightforward calculation shows that at $y_0=\\left( \\frac{\\gamma (k)(\\alpha -m)}{\\chi } \\right)^{\\frac{1}{\\alpha -m+1}},$ $f(y)$ attains its minimum $f(y_0)=\\chi ^{\\frac{\\alpha -m}{\\alpha -m+1}}\\gamma (k)^{\\frac{1}{\\alpha -m+1}}(\\alpha -m)^{\\frac{1}{\\alpha -m+1}}\\frac{\\alpha -m+1}{\\alpha -m}-\\chi M_0.$ It's easy to verify $f(y)>f(y_0)\\ge 0,~~\\mbox{for any}~~ y>0,$ whenever $M_0\\le \\left( \\frac{\\gamma (k)(\\alpha -m)}{\\chi } \\right)^{\\frac{1}{\\alpha -m+1}}\\frac{\\alpha -m+1}{\\alpha -m}.$ We point out that $\\gamma (k)$ attains its maximum when $k=m+1$ .", "Therefore, we first treat the case of $k=m+1$ .", "Step 1 (Decay estimate in $\\Vert u\\Vert _{L^{m+1}({\\mathbb {R}}^n)}$ )    Taking $k=m+1$ in (REF ) we have $\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u^{m+1}dx +(m+1)\\left( \\chi \\Vert u\\Vert _{L^1({\\mathbb {R}}^n)}-\\chi M_0+S_n\\Vert u\\Vert _{L^1({\\mathbb {R}}^n)}^{-(\\alpha -m)} \\right)\\Vert u\\Vert _{L^{\\alpha +m}({\\mathbb {R}}^n)}^{\\alpha +m} \\le 0.$ Similar arguments from (REF ) to (REF ) with $k=m+1$ yield that $M_0\\le \\left( \\frac{S_n(\\alpha -m)}{\\chi } \\right)^{\\frac{1}{\\alpha -m+1}}\\frac{\\alpha -m+1}{\\alpha -m},$ results in $f(\\Vert u\\Vert _{L^1({\\mathbb {R}}^n)})>\\eta \\ge 0,$ where $\\eta 
=f\\left(\\left( \\frac{S_n(\\alpha -m)}{\\chi }\\right)^{\\frac{1}{\\alpha -m+1}}\\right)=\\chi ^{\\frac{\\alpha -m}{\\alpha -m+1}}(S_n(\\alpha -m))^{\\frac{1}{\\alpha -m+1}}\\frac{\\alpha -m+1}{\\alpha -m}-\\chi M_0.$ Hence we recover the inequality $\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u^{m+1} dx +(m+1)\\eta \\Vert u\\Vert _{L^{\\alpha +m}({\\mathbb {R}}^n)}^{\\alpha +m} \\le 0.$ Using Hölder inequality with $1<m+1<m+\\alpha $ we obtain $\\left( \\Vert u\\Vert _{L^{m+1}({\\mathbb {R}}^n)}^{m+1} \\right)^{\\frac{\\alpha +m-1}{m}} \\le \\Vert u\\Vert _{L^{\\alpha +m}({\\mathbb {R}}^n)}^{\\alpha +m}\\Vert u\\Vert _{L^1({\\mathbb {R}}^n)}^{\\frac{\\alpha -1}{m}} \\le \\Vert u\\Vert _{L^{\\alpha +m}({\\mathbb {R}}^n)}^{\\alpha +m} M_0^{\\frac{\\alpha -1}{m}}.$ Thus taking (REF ) and (REF ) together gives $\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u^{m+1}dx +(m+1)\\eta M_0^{-\\frac{\\alpha -1}{m}}\\left( \\int _{{\\mathbb {R}}^n} u^{m+1}dx \\right)^{\\frac{\\alpha +m-1}{m}} \\le 0$ which follows $\\Vert u\\Vert _{L^{m+1}({\\mathbb {R}}^n)}& \\le \\left( \\frac{1}{\\frac{(m+1)(\\alpha -1)\\eta }{m}M_0^{-\\frac{\\alpha -1}{m}}t +\\Vert u_{0\\varepsilon }\\Vert _{L^{m+1}({\\mathbb {R}}^n)}^{-\\frac{(m+1)(\\alpha -1)}{m}}} \\right)^{\\frac{m}{(m+1)(\\alpha -1)}} \\le C(1+t)^{-\\frac{m}{(m+1)(\\alpha -1)}},$ where C is a constant depending on $m$ , $n, \\chi $ , $M_0$ , $\\Vert u_{0\\varepsilon }\\Vert _{L^{m+1}({\\mathbb {R}}^n)}$ .", "In addition, integrating (REF ) from 0 to $T$ in time we obtain that for any $T>0$ $\\int _{{\\mathbb {R}}^n} u^{m+1}(T)dx +(m+1)\\eta \\int _0^T\\int _{{\\mathbb {R}}^n} u^{\\alpha +m} dxdt \\le \\int _{{\\mathbb {R}}^n} u_{0\\varepsilon }^{m+1}dx$ which assures that $\\int _0^T\\int _{{\\mathbb {R}}^n} u^{\\alpha +m}dxdt \\le C \\left( \\Vert u_{0\\varepsilon }\\Vert _{L^{m+1}({\\mathbb {R}}^n)}^{m+1}, M_0 \\right).$ Therefore, integrating the following equality $&\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u^{m+1}dx +(m+1)\\int _{{\\mathbb {R}}^n} |\\nabla u^m|^2dx +(m+1)\\chi \\int _{{\\mathbb {R}}^n} udx\\int _{{\\mathbb {R}}^n} u^{\\alpha +m} dx\\nonumber \\\\= & (m+1)\\chi M_0\\int _{{\\mathbb {R}}^n} u^{\\alpha +m}dx.$ from 0 to $T$ in time we also obtain $\\nabla u^{m} \\in L^2(0,T;L^2({\\mathbb {R}}^n)).$ Step 2 (Decay estimates in $\\Vert u\\Vert _{L^k({\\mathbb {R}}^n)}$ for $1<k<\\infty $ )    For $1<k<m+1$ , by (REF ) with Hölder inequality we have $\\Vert u\\Vert _{L^k({\\mathbb {R}}^n)} & \\le \\Vert u\\Vert _{L^{m+1}({\\mathbb {R}}^n)}^{(m+1)\\frac{k-1}{km}}\\Vert u\\Vert _{L^1({\\mathbb {R}}^n)}^{\\frac{m-k+1}{km}} \\le C(1+t)^{-\\frac{k-1}{k(\\alpha -1)}},$ where C is a constant depending on $m$ ,$n, \\chi $ , $M_0, k$ , $\\Vert u_{0\\varepsilon }\\Vert _{L^{m+1}({\\mathbb {R}}^n)}$ .", "For $m+1<k<\\infty $ , taking $w=u^{\\frac{k+m-1}{2}},~~q=\\frac{2(k+\\alpha -1)}{k+m-1},~~r=\\frac{2(m+1)}{k+m-1},~~ C_0=\\frac{2m(k-1)}{\\chi M_0(k+m-1)^2}$ in Lemma REF for $k>\\max (2-\\alpha ,1)$ , we have $\\Vert u\\Vert _{L^{k+\\alpha -1}({\\mathbb {R}}^n)}^{k+\\alpha -1} \\le \\frac{2m(k-1)}{\\chi M_0(k+m-1)^2}\\Vert \\nabla u^{\\frac{k+m-1}{2}}\\Vert _{L^2({\\mathbb {R}}^n)}^2 +C(k,M_0)\\Vert u\\Vert _{L^{m+1}({\\mathbb {R}}^n)}^{(m+1){\\frac{k+\\alpha -2}{m}}}.$ Recalling (REF ) and substituting (REF ) into (REF ) one has $& \\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u^kdx+\\frac{2mk(k-1)}{(k+m-1)^2}\\Vert \\nabla u^{\\frac{k+m-1}{2}}\\Vert _{L^2({\\mathbb {R}}^n)}^2 +k\\chi \\int _{{\\mathbb {R}}^n} udx\\int _{{\\mathbb {R}}^n} u^{k+\\alpha -1}dx \\nonumber 
\\\\\\le & C(k,M_0)\\left( 1+t \\right)^{-\\frac{k+\\alpha -2}{\\alpha -1}}.$ Therefore, we have $\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u^kdx +\\frac{k\\chi m_0}{M_0^{\\frac{\\alpha -1}{k-1}}}\\left( \\int _{{\\mathbb {R}}^n} u^kdx \\right)^{\\frac{k+\\alpha -2}{k-1}}\\le C(k,M_0)\\left( 1+t \\right)^{-\\frac{k+\\alpha -2}{\\alpha -1}}.$ Denote $y(t)=\\int _{{\\mathbb {R}}^n} u^kdx, ~~ f(t)=C(k,M_0)\\left( 1+t \\right)^{-\\frac{k+\\alpha -2}{\\alpha -1}},~~ a=1+\\frac{\\alpha -1}{k-1}$ and take $t_0=t/2$ in Lemma REF we derive that for any $t>0$ $\\Vert u(\\cdot ,t)\\Vert _{L^k({\\mathbb {R}}^n)} \\le C \\left( 1+ t \\right)^{-\\frac{k-1}{k(\\alpha -1)}},$ where $C$ is a constant depending on $\\Vert u_{0\\varepsilon }\\Vert _{L^k({\\mathbb {R}}^n)}$ , $k, m$ , $n$ , $\\chi $ , $m_0, M_0$ .", "Next, integrating (REF ) from 0 to $T$ in time we obtain that for any $T>0$ $&\\int _{{\\mathbb {R}}^n} u^k(T) dx +\\frac{2mk(k-1)}{(k+m-1)^2}\\int _0^T\\int _{{\\mathbb {R}}^n} |\\nabla u^{\\frac{k+m-1}{2}}|^2 dxdt +k\\chi \\int _0^T\\int _{{\\mathbb {R}}^n} udx \\int _{{\\mathbb {R}}^n} u^{k+\\alpha -1}dxdt \\nonumber \\\\\\le & \\int _{{\\mathbb {R}}^n} u_{0\\varepsilon }^kdx +C(k,M_0).$ Hence we also obtain the following regularities $u\\in L^{k+\\alpha -1}(0,T;L^{k+\\alpha -1}({\\mathbb {R}}^n)),~ \\nabla u^{\\frac{k+m-1}{2}} \\in L^2(0,T;L^2({\\mathbb {R}}^n)).$ Thus completes the proof of this lemma.", "$\\Box $ We are now in a position to begin the study of the supercritical case $\\alpha >m+2/n$ .", "Lemma 3.6 (Case of $\\alpha >m+2/n$ ) Let the same assumptions as that in proposition REF hold, $p_0=\\frac{n(\\alpha -m)}{2}$ .", "If we assume $\\Vert u_{0\\varepsilon }\\Vert _{L^{\\frac{n(\\alpha -m)}{2}}({\\mathbb {R}}^n)}<C_{p_0},$ where $C_{p_0}$ is given by (REF ), then for any $t>0$ $\\Vert u_\\varepsilon \\Vert _{L^k({\\mathbb {R}}^n)} \\le C\\left( 1+t \\right)^{-\\frac{k-1}{k(\\alpha -1)}},~~\\mbox{for~any}~1<k<\\infty ,$ where $C$ is a positive constant depending on $\\Vert u_{0\\varepsilon }\\Vert _{L^k({\\mathbb {R}}^n)}$ , $k, \\alpha , m,n, \\chi , m_0,M_0$ but not on $\\varepsilon $ .", "Proof.", "Firstly we plug $w=u^{\\frac{k+m-1}{2}},~~ q=\\frac{2(k+\\alpha -1)}{k+m-1},~~r=\\frac{2k^{\\prime }}{k+m-1},~~ C_0=\\frac{4m(k-1)}{\\chi M_0(k+m-1)^2}$ into Lemma REF for any $k^{\\prime }>\\frac{n(\\alpha -m)}{2}$ and $k>\\max \\left(\\frac{n(\\alpha -m)}{2}-(\\alpha -1),1\\right)$ to obtain $\\Vert u\\Vert _{L^{k+\\alpha -1}({\\mathbb {R}}^n)}^{k+\\alpha -1}\\le \\frac{4m(k-1)}{\\chi M_0(k+m-1)^2} \\left\\Vert \\nabla u^{\\frac{k+m-1}{2}} \\right\\Vert _{L^2({\\mathbb {R}}^n)}^2 +C_{knm}\\left\\Vert u^{\\frac{k+m-1}{2}}\\right\\Vert _{L^{\\frac{2k^{\\prime }}{k+m-1}}({\\mathbb {R}}^n)}^{\\frac{2(1-\\lambda )(k+\\alpha -1)}{k+m-1}\\frac{1}{1-\\frac{\\lambda (k+\\alpha -1)}{k+m-1}}},$ where $C_{knm}= \\left( \\frac{8mS_n(k-1)}{\\chi M_0\\lambda q(k+m-1)^2}\\right)^{-\\frac{\\lambda q}{2-\\lambda q}}\\frac{2-\\lambda q}{2},~~\\lambda =\\frac{\\frac{k+m-1}{2k^{\\prime }}-\\frac{k+m-1}{2(k+\\alpha -1)}}{\\frac{k+m-1}{2k^{\\prime }}-\\frac{n-2}{2n}}.$ Then substituting the above estimates into (REF ) we get $\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u^kdx +k\\chi m(t)\\int _{{\\mathbb {R}}^n} u^{k+\\alpha -1}dx \\le k\\chi M_0 C_{knm}\\Vert u\\Vert _{L^{k^{\\prime }}({\\mathbb {R}}^n)}^b,$ where $b={\\frac{(1-\\lambda )(k+\\alpha -1)}{1-\\frac{\\lambda (k+\\alpha -1)}{k+m-1}}}$ .", "Because of $1<\\frac{n(\\alpha -m)}{2}<k^{\\prime }<k+\\alpha -1$ , by interpolation inequality we compute 
$\\Vert u\\Vert _{L^{k^{\\prime }}({\\mathbb {R}}^n)}^b \\le \\Vert u\\Vert _{L^{k+\\alpha -1}({\\mathbb {R}}^n)}^{b\\theta }\\Vert u\\Vert _{L^{\\frac{n(\\alpha -m)}{2}}({\\mathbb {R}}^n)}^{b(1-\\theta )},$ where $\\frac{1}{k^{\\prime }}=\\frac{2(1-\\theta )}{n(\\alpha -m)}+\\frac{\\theta }{k+\\alpha -1}$ .", "Some calculations yield that for any $\\frac{n(\\alpha -m)}{2}<k^{\\prime }<k+\\alpha -1$ .", "$b\\theta =k+\\alpha -1, ~~ b(1-\\theta )=\\frac{n(\\alpha -m)}{2}\\frac{k+\\alpha -1-k^{\\prime }}{k^{\\prime }-\\frac{n(\\alpha -m)}{2}}.$ We choose $ k^{\\prime }=\\frac{k+\\alpha -1+\\frac{n(\\alpha -m)}{2}}{2} $ used in (REF ) such that it satisfies $\\Vert u\\Vert _{L^{k^{\\prime }}({\\mathbb {R}}^n)}^b \\le \\Vert u\\Vert _{L^{k+\\alpha -1}({\\mathbb {R}}^n)}^{k+\\alpha -1}\\Vert u\\Vert _{L^{\\frac{n(\\alpha -m)}{2}}({\\mathbb {R}}^n)}^{\\frac{n(\\alpha -m)}{2}}.$ Keeping in mind the fact that $m_0 \\le m(t)$ and collecting (REF ) and (REF ) one obtains that for any $k>\\max \\left(\\frac{n(\\alpha -m)}{2}-(\\alpha -1),1\\right)$ $&\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u^kdx +k\\chi m_0C_k^{-p_0}\\left( C_k^{p_0}-\\Vert u\\Vert _{L^{p_0}({\\mathbb {R}}^n)}^{p_0} \\right)\\Vert u\\Vert _{L^{k+\\alpha -1}({\\mathbb {R}}^n)}^{k+\\alpha -1}\\le 0,$ where $C_k$ is defined as $C_k=\\left( \\frac{m_0}{M_0 C_{knm} }\\right)^\\frac{1}{p_0}=\\left(\\frac{m_0(n+2)}{2 M_0}\\right)^{\\frac{1}{p_0}}\\left( \\frac{4S_n m(k-1)(n+2)}{n\\chi M_0(k+m-1)^2} \\right)^{\\frac{1}{\\alpha -m}}$ by the choice of $k^{\\prime }$ .", "Next we will demonstrate the boundedness of $\\Vert u\\Vert _{L^k({\\mathbb {R}}^n)}$ starting from the case of $k=p_0.$ Step 1 (Decay estimate in $\\Vert u\\Vert _{L^{p_0}({\\mathbb {R}}^n)}$ )   Taking $k=p_0$ in (REF ) we obtain $\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u^{p_0}dx +p_0\\chi m_0 C_{p_0}^{-p_0}\\left( C_{p_0}^{p_0}-\\Vert u\\Vert _{L^{p_0}({\\mathbb {R}}^n)}^{p_0} \\right)\\Vert u\\Vert _{L^{p_0+\\alpha -1}({\\mathbb {R}}^n)}^{p_0+\\alpha -1}\\le 0.$ Since we assume $\\Vert u_{0\\varepsilon }\\Vert _{L^{p_0}({\\mathbb {R}}^n)}<C_{p_0},$ by bootstrap arguments on (REF ) we have the following estimate $\\Vert u\\Vert _{L^{p_0}({\\mathbb {R}}^n)}< \\Vert u_{0\\varepsilon }\\Vert _{L^{p_0}({\\mathbb {R}}^n)}<C_{p_0}.$ By Hölder inequality, (REF ) can be rewritten as $\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u^{p_0}dx + \\frac{p_0\\chi m_0(C_{p_0}^{p_0}-\\Vert u_{0\\varepsilon }\\Vert _{L^{p_0}({\\mathbb {R}}^n)}^{p_0})}{C_{p_0}^{p_0}M_0^{\\frac{\\alpha -1}{p_0-1}}} \\left( \\int _{{\\mathbb {R}}^n} u^{p_0}dx \\right)^{\\frac{p_0+\\alpha -2}{p_0-1}} \\le 0.$ After some calculations we have $\\Vert u\\Vert _{L^{p_0}({\\mathbb {R}}^n)}\\le \\left( \\frac{1}{\\frac{p_0\\chi m_0(C_{p_0}^{p_0}-\\Vert u_{0\\varepsilon }\\Vert _{L^{p_0}({\\mathbb {R}}^n)}^{p_0})(\\alpha -1)}{(p_0-1)C_{p_0}^{p_0}M_0^{\\frac{\\alpha -1}{p_0-1}}}t+\\Vert u_{0\\varepsilon }\\Vert _{L^{p_0}({\\mathbb {R}}^n)}^{-\\frac{p_0(\\alpha -1)}{p_0-1}}} \\right)^{\\frac{p_0-1}{p_0(\\alpha -1)}} \\le C(1+t)^{-\\frac{p_0-1}{p_0(\\alpha -1)}},$ where $C$ is a constant depending on $\\Vert u_{0\\varepsilon }\\Vert _{L^{p_0}({\\mathbb {R}}^n)}, C_{p_0}$ , $p_0$ , $\\alpha , \\chi $ , $M_0,m_0$ .", "Integrating (REF ) from 0 to $T$ in time we obtain that for any $T>0$ $\\int _{{\\mathbb {R}}^n} u^{p_0}(T)dx +p_0\\chi m_0 C_{p_0}^{-p_0}\\left( C_{p_0}^{p_0}-\\Vert u_{0\\varepsilon }\\Vert _{L^{p_0}({\\mathbb {R}}^n)}^{p_0} \\right)\\int _0^T\\int _{{\\mathbb {R}}^n} u^{p_0+\\alpha -1}dxdt\\le \\int _{{\\mathbb 
{R}}^n} u_{0\\varepsilon }^{p_0}dx.$ This assures that for any $T>0$ $u\\in L^\\infty (0,T;L^{p_0}({\\mathbb {R}}^n)),~~u\\in L^{p_0+\\alpha -1}(0,T;L^{p_0+\\alpha -1}({\\mathbb {R}}^n)).$ Step 2 (Decay estimates in $\\Vert u\\Vert _{L^k({\\mathbb {R}}^n)}$ for $1<k<\\infty $ )   In this step, we will show the decay properties of $\\Vert u\\Vert _{L^k({\\mathbb {R}}^n)}$ based on the decay of $\\Vert u\\Vert _{L^{p_0}({\\mathbb {R}}^n)}$ in time.", "We divide $k$ into two cases $1<k<p_0$ and $p_0<k<\\infty $ .", "(1) $1<k<p_0$ .", "By (REF ) one has $\\Vert u\\Vert _{L^k({\\mathbb {R}}^n)} &\\le \\Vert u\\Vert _{L^{p_0}({\\mathbb {R}}^n)}^{p_0\\frac{k-1}{k(p_0-1)}}\\Vert u\\Vert _{L^1({\\mathbb {R}}^n)}^{\\frac{p_0-k}{k(p_0-1)}} \\le C(1+t)^{-\\frac{k-1}{k(\\alpha -1)}},$ where $C$ depends on $C_{p_0},p_0,k,\\Vert u_{0\\varepsilon }\\Vert _{L^{p_0}({\\mathbb {R}}^n)},\\alpha ,\\chi ,m_0,M_0$ .", "On the other hand, the inequality $\\Vert u\\Vert _{L^{p_0}({\\mathbb {R}}^n)} <\\Vert u_{0\\varepsilon }\\Vert _{L^{p_0}({\\mathbb {R}}^n)}<C_{p_0}<C_k$ is seen to hold because of the decreasing of $C_k$ with $k$ .", "Hence integrating (REF ) from 0 to $T$ in time we obtain that for any $T>0$ $\\int _{{\\mathbb {R}}^n} u^k(T)dx +k\\chi m_0 C_k^{-p_0} \\left( C_k^{p_0}-\\Vert u_{0\\varepsilon }\\Vert _{L^{p_0}({\\mathbb {R}}^n)}^{p_0} \\right)\\int _0^T\\int _{{\\mathbb {R}}^n} u^{k+\\alpha -1}dx dt\\le \\int _{{\\mathbb {R}}^n} u_{0\\varepsilon }^k dx$ which gives the following regularities that for any $T>0$ $u\\in L^\\infty \\left(0,T;L^k({\\mathbb {R}}^n)\\right),~~u\\in L^{k+\\alpha -1}\\left(0,T;L^{k+\\alpha -1}({\\mathbb {R}}^n)\\right).$ (2) $p_0<k<\\infty $ .", "Following [8] we consider the $L^k$ norm of the function $(u-N)+$ with $N>1$ $& \\frac{d}{dt}\\int _{{\\mathbb {R}}^n} (u-N)_+^kdx\\nonumber \\\\= &-km(k-1)\\int _{\\mathbb {R}^n} (u-N)_+^{k-2}u^{m-1}|\\nabla u|^2dx +k\\chi \\int _{{\\mathbb {R}}^n} (u-N)_+^{k-1}u^\\alpha dx \\left(M_0-\\int _{{\\mathbb {R}}^n} udx\\right)\\nonumber \\\\\\le & -\\frac{4mk(k-1)}{(k+m-1)^2}\\Vert \\nabla (u-N)_+^{\\frac{k+m-1}{2}}\\Vert _{L^2({\\mathbb {R}}^n)}^2+k\\chi \\int _{{\\mathbb {R}}^n} (u-N)_+^{k-1}(u-N+N)^\\alpha dx \\left(M_0-\\int _{{\\mathbb {R}}^n} udx\\right)\\nonumber \\\\\\le & -\\frac{4mk(k-1)}{(k+m-1)^2}\\Vert \\nabla (u-N)_+^{\\frac{k+m-1}{2}}\\Vert _{L^2({\\mathbb {R}}^n)}^2+k\\chi M_02^{\\alpha -1}\\int _{{\\mathbb {R}}^n} (u-N)_+^{k-1}\\left( (u-N)^\\alpha +N^\\alpha \\right)dx\\nonumber \\\\=& -\\frac{4mk(k-1)}{(k+m-1)^2}\\Vert \\nabla (u-N)_+^{\\frac{k+m-1}{2}}\\Vert _{L^2({\\mathbb {R}}^n)}^2+k\\chi M_0 2^{\\alpha -1}\\int _{{\\mathbb {R}}^n} (u-N)_+^{k+\\alpha -1}dx\\nonumber \\\\& +k\\chi M_0 N^\\alpha 2^{\\alpha -1}\\int _{{\\mathbb {R}}^n} (u-N)_+^{k-1}dx.$ The terms involving $\\int _{{\\mathbb {R}}^n}(u-N)_+^{k+\\alpha -1}dx$ and $\\int _{{\\mathbb {R}}^n} (u-N)_+^{k-1}dx $ can be estimated as follows: $\\int _{{\\mathbb {R}}^n}(u-N)_+^{k+\\alpha -1}dx \\le S_n^{-1}\\Vert \\nabla (u-N)_+^{\\frac{k+m-1}{2}}\\Vert _{L^2({\\mathbb {R}}^n)}^2 \\Vert (u-N)_+\\Vert _{L^{\\frac{n(\\alpha -m)}{2}}({\\mathbb {R}}^n)}^{\\alpha -m}$ and $\\int _{{\\mathbb {R}}^n}(u-N)_+^{k-1}dx& =\\int _{N<u\\le N+1} (u-N)_+^{k-1}dx+\\int _{u>N+1} (u-N)_+^{k-1}dx,\\\\\\int _{N<u\\le N+1}(u-N)_+^{k-1}dx& \\le \\int _{N<u \\le N+1} 1dx\\le \\frac{1}{N}\\int _{N<u\\le N+1}udx\\le \\frac{m(t)}{N},\\nonumber \\\\\\int _{u> N+1}(u-N)_+^{k-1}dx & \\le \\int _{u>N+1} (u-N)_+^kdx\\le \\int _{{\\mathbb {R}}^n} (u-N)_+^kdx\\nonumber .$ Collecting 
everything together (REF ) becomes $& \\frac{d}{dt}\\int _{{\\mathbb {R}}^n} (u-N)_+^kdx \\nonumber \\\\\\le & -\\frac{4mk(k-1)}{(k+m-1)^2}\\Vert \\nabla (u-N)_+^{\\frac{k+m-1}{2}}\\Vert _{L^2({\\mathbb {R}}^n)}^2+k\\chi M_0N^{\\alpha -1}2^{\\alpha -1}m(t)+k\\chi M_0N^\\alpha 2^{\\alpha -1}\\int _{{\\mathbb {R}}^n} (u-N)_+^kdx\\nonumber \\\\& +k\\chi M_02^{\\alpha -1}S_n^{-1}\\Vert \\nabla (u-N)_+^{\\frac{k+m-1}{2}}\\Vert _{L^2({\\mathbb {R}}^n)}^2\\Vert (u-N)_+\\Vert _{L^{\\frac{n(\\alpha -m)}{2}}({\\mathbb {R}}^n)}^{\\alpha -m}.$ Next, let us observe that for any fixed $\\max \\left(1,\\frac{n(\\alpha -m)}{2}-(\\alpha -1)\\right)<k<\\infty $ and under the condition (REF ), from (REF ) we may choose a $k_0>p_0$ such that $\\Vert u(\\cdot ,t)\\Vert _{L^{p_0}({\\mathbb {R}}^n)}< \\Vert u_{0\\varepsilon }\\Vert _{L^{p_0}({\\mathbb {R}}^n)}\\le C_{k_0}<C_{p_0}$ which guarantees $C_{k_0}^{p_0}-\\Vert u\\Vert _{L^{p_0}({\\mathbb {R}}^n)}^{p_0}>0$ , then one has that for any $t>0$ $\\Vert u(\\cdot ,t)\\Vert _{L^{k_0}({\\mathbb {R}}^n)}\\le \\Vert u_{0\\varepsilon }\\Vert _{L^{k_0}({\\mathbb {R}}^n)}.$ Using Hölder inequality it can be estimated that $\\Vert (u-N)_+\\Vert _{L^{\\frac{n(\\alpha -m)}{2}}({\\mathbb {R}}^n)}&\\le \\Vert (u-N)_+\\Vert _{L^{k_0}({\\mathbb {R}}^n)}\\Big (\\int _{u(t)\\ge N}dx\\Big )^{\\frac{2}{n(\\alpha -m)}-\\frac{1}{k_0}}\\\\&\\le \\Vert (u-N)_+\\Vert _{L^{k_0}({\\mathbb {R}}^n)}\\Big (\\int _{u(t)\\ge N}\\frac{u}{N}dx\\Big )^{\\frac{2}{n(\\alpha -m)}-\\frac{1}{k_0}}\\\\&\\le \\Vert u\\Vert _{L^{k_0}({\\mathbb {R}}^n)}\\Big (\\frac{m(t)}{N}\\Big )^{\\frac{2}{n(\\alpha -m)}-\\frac{1}{k_0}}.$ Hence we may choose $N=N(k)$ sufficiently large such that for any $0<t<\\infty $ $\\Vert (u-N)_+\\Vert _{L^{\\frac{n(\\alpha -m)}{2}}({\\mathbb {R}}^n)}^{\\alpha -m} \\le \\Vert u_{0\\varepsilon }\\Vert _{L^{k_0}({\\mathbb {R}}^n)}^{\\alpha -m} \\left(\\frac{m(t)}{N}\\right)^{\\frac{2}{n}-\\frac{\\alpha -m}{k_0}}\\le \\frac{4m(k-1)S_n}{(k+m-1)^2 \\chi M_0 2^{\\alpha -1}}.$ As a consequence, we infer from (REF ) that $\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} (u-N)_+^k dx\\le k\\chi M_0 N^\\alpha 2^{\\alpha -1} \\int _{{\\mathbb {R}}^n} (u-N)_+^k dx+k\\chi M_0 N^{\\alpha -1} 2^{\\alpha -1} m(t).$ Finally, Gronwall inequality follows that for any $0\\le t<\\infty $ $\\int _{{\\mathbb {R}}^n} (u-N)_+^kdx\\le \\Big (\\int _{{\\mathbb {R}}^n} (u_{0\\varepsilon }-N)_+^kdx\\Big )e^{k\\chi M_0 N^\\alpha 2^{\\alpha -1} t}+\\frac{M_0}{N}\\left(e^{k\\chi M_0N^\\alpha 2^{\\alpha -1} t}-1\\right).$ To go further, we claim that the bound on $\\int _{{\\mathbb {R}}^n} (u-N)_+^kdx$ is enough to treat $\\int _{{\\mathbb {R}}^n} u^kdx$ .", "We decompose $\\int _{{\\mathbb {R}}^n} u^k dx$ in short and long range parts $\\int _{{\\mathbb {R}}^n} u^kdx=\\int _{u\\le N} u^kdx+\\int _{u>N} u^k dx.$ Then the short range part enjoys good properties for our purpose, $\\int _{u\\le N} u^kdx\\le N^{k-1}m(t)\\le N^{k-1}M_0.$ As for the long range part we write $\\int _{u>N} u^kdx &=\\int _{u>N} (u-N+N)^{k-1}udx\\\\&\\le \\max (2^{k-2},1)\\left(\\int _{u>N} (u-N)^{k-1}udx+N^{k-1}M_0\\right)\\\\&= \\max (2^{k-2},1)\\left(N\\int _{{\\mathbb {R}}^n} (u-N)_+^{k-1}dx+\\int _{{\\mathbb {R}}^n}(u-N)_+^kdx+N^{k-1}M_0\\right) \\\\&\\le \\max (2^{k-2},1)\\left(m(t)+N\\int _{{\\mathbb {R}}^n} (u-N)_+^k dx+\\int _{{\\mathbb {R}}^n} (u-N)_+^k dx+N^{k-1}M_0\\right),$ where the last line is derived from (REF ).", "Therefore, the previous inequality (REF ) warrants that for any $T>0$ $\\int _{{\\mathbb {R}}^n} u^k dx \\le 
C(k,M_0,N,e^t), ~\\mbox{for}~ 0<t<T.$ Now we can claim that $\\int _{{\\mathbb {R}}^n} u(\\cdot ,t)^k dx$ decays in time at infinity.", "Actually, for $t$ is larger than some $T_k$ one has $S_n^{-1}\\chi M_0\\Vert u\\Vert _{L^{p_0}({\\mathbb {R}}^n)}^{\\alpha -m} \\le \\frac{2m(k-1)}{(k+m-1)^2}, ~~\\mbox{for} ~t>T_k$ because of the decay property of $\\Vert u\\Vert _{L^{p_0}({\\mathbb {R}}^n)}$ .", "Then plugging (REF ) into (REF ) yields that for $t>T_k$ $& \\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u^kdx +\\frac{2mk(k-1)}{(k+m-1)^2}\\Vert \\nabla u^{\\frac{k+m-1}{2}}\\Vert _{L^2({\\mathbb {R}}^n)}^2 +k\\chi \\int _{{\\mathbb {R}}^n} udx\\int _{{\\mathbb {R}}^n} u^{k+\\alpha -1} dx \\le 0$ by Lemma REF .", "Hence by Hölder inequality one has $\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u^kdx +\\frac{k\\chi m_0}{M_0^{\\frac{\\alpha -1}{k-1}}}\\left( \\Vert u\\Vert _{L^k({\\mathbb {R}}^n)}^k \\right)^{1+\\frac{\\alpha -1}{k-1}} \\le 0, ~~\\mbox{for} ~t>T_k$ which leads to $\\Vert u\\Vert _{L^k({\\mathbb {R}}^n)} \\le C\\left(M_0,m_0,k,\\Vert u(\\cdot ,T_k)\\Vert _{L^k({\\mathbb {R}}^n)}\\right)(1+t-T_k)^{-\\frac{k-1}{k(\\alpha -1)}},~~\\mbox{for} ~t>T_k.$ Here $\\Vert u(\\cdot ,T_k)\\Vert _{L^k({\\mathbb {R}}^n)}$ is bounded from above by a positive constant $C\\left(M_0,k,T_k\\right)$ due to (REF ).", "Moreover, taking (REF ) and (REF ) into account we conclude that for any $0<T<\\infty $ $\\int _0^T\\int _{{\\mathbb {R}}^n} u^{k+\\alpha -1} dxdt \\le C \\left(\\Vert u_{0\\varepsilon }\\Vert _{L^k({\\mathbb {R}}^n)}, k, m_0,T\\right),~~\\mbox{for}~~ 1<k<\\infty .$ So taking (REF ), (REF ) and (REF ) together we deduce that for any $T>0$ and any $1<k<\\infty $ $u\\in L^\\infty \\left(0,T;L^k({\\mathbb {R}}^n)\\right),~~ u\\in L^{k+\\alpha -1}\\left(0,T;L^{k+\\alpha -1}({\\mathbb {R}}^n)\\right).$ Furthermore, integrating (REF ) from 0 to $T$ in time we conclude that for any $T>0$ and $1<k<\\infty $ $&\\int _{{\\mathbb {R}}^n} u^k(T)dx +\\frac{4mk(k-1)}{(k+m-1)^2}\\int _0^T\\int _{{\\mathbb {R}}^n} |\\nabla u^{\\frac{m+k-1}{2}}|^2 dx dt +k\\chi \\int _0^T\\int _{{\\mathbb {R}}^n} u dx \\int _{{\\mathbb {R}}^n} u^{k+\\alpha -1}dx dt \\\\= &\\int _{{\\mathbb {R}}^n} u_{0\\varepsilon }^k dx+ k\\chi M_0 \\int _0^T \\int _{{\\mathbb {R}}^n} u^{k+\\alpha -1} dx dt.$ As a result, we have that for any $0<T<\\infty $ and $1<k<\\infty $ $\\nabla u^{\\frac{k+m-1}{2}} \\in L^2\\left(0,T;L^2({\\mathbb {R}}^n)\\right),$ by (REF ).", "Thus ends the proof.", "$\\Box $ On account of the above arguments, our last task is to give the uniform boundedness of solutions for any $t>0$ .", "Proposition 3.7 (Uniform estimate in $L^\\infty $ ) Let the same assumptions as that in proposition REF hold.", "Then there exists a positive constant $C$ depending on $\\Vert u_{0\\varepsilon }\\Vert _{L^1\\cap L^\\infty ({\\mathbb {R}}^n)}$ , $\\alpha $ , $m, n, \\chi , m_0, M_0$ but not on $\\varepsilon $ such that $u_\\varepsilon $ is uniformly bounded for any $t> 0$ , i.e.", "$\\Vert u_\\varepsilon (\\cdot ,t) \\Vert _{L^\\infty ({\\mathbb {R}}^n)} \\le C.$ Proof.", "Firstly, we denote $q_k=2^k+n\\alpha +nm$ and estimate $\\int _{{\\mathbb {R}}^n} u^{q_k}dx$ .", "Multiplying (REF ) with $q_k u^{q_k-1}$ we have $&\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u^{q_k}dx+\\frac{4mq_k(q_k-1)}{(q_k+m-1)^2}\\int _{{\\mathbb {R}}^n} |\\nabla u^{\\frac{q_k+m-1}{2}}|^2 dx+q_k\\chi \\int _{{\\mathbb {R}}^n}udx\\int _{{\\mathbb {R}}^n} u^{q_k+\\alpha -1} dx\\nonumber \\\\= &M_0 q_k\\chi \\int _{{\\mathbb {R}}^n} u^{q_k+\\alpha -1}dx.$ Let 
$q=\\frac{2(q_k+\\alpha -1)}{q_k+m-1},~~r=\\frac{2q_{k-1}}{q_k+m-1},$ it's easy to verify that $1<\\frac{q}{r}<\\frac{2n}{r(n-2)}$ and $\\frac{q}{r}<\\frac{2}{r}+\\frac{2}{n}$ , so we take $w=u^{\\frac{q_k+m-1}{2}}$ in Lemma REF to obtain $\\int _{{\\mathbb {R}}^n} u^{q_k+\\alpha -1}dx \\le C(n)C_0^{-\\frac{1}{\\delta _1-1}}\\Big (\\int _{{\\mathbb {R}}^n}u^{q_{k-1}}\\Big )^{\\gamma _1}+C_0\\Vert \\nabla u^{ \\frac{q_k+m-1}{2}}\\Vert ^2_{L^2({\\mathbb {R}}^n)},$ where $& \\delta _1 =\\frac{q_k+m-1-(1-2/n)q_{k-1}}{q_k+\\alpha -1-q_{k-1}}>1+2/n, \\\\& \\gamma _1 =1+\\frac{q_k+\\alpha -1-q_{k-1}}{q_{k-1}+\\frac{n}{2}(m-\\alpha )}<2~ ~\\mbox{for}~~\\alpha \\ge 1,~ m>1,$ $C_0$ is a positive constant to be determined.", "Substituting (REF ) into (REF ) yields $& \\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u^{q_k}dx+\\left(\\frac{4mq_k(q_k-1)}{(q_k+m-1)^2}-C_0 M_0 q_k\\chi \\right)\\int _{{\\mathbb {R}}^n} |\\nabla u^{\\frac{q_k+m-1}{2}}|^2 dx \\nonumber \\\\\\le & C(n)q_k\\chi C_0^{-\\frac{1}{\\delta _1-1}}\\left(\\int _{{\\mathbb {R}}^n} u^{q_{k-1}}dx \\right)^{\\gamma _1}.$ Notice that $\\frac{4mq_k(q_k-1)}{(q_k+m-1)^2}>2m$ , thus choosing $C_0=\\frac{m}{M_0 q_k\\chi }$ we have $\\frac{4mq_k(q_k-1)}{(q_k+m-1)^2}-C_0 M_0 q_k\\chi \\ge m$ which follows $\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u^{q_k}dx+m\\int _{{\\mathbb {R}}^n} |\\nabla u^{\\frac{q_k+m-1}{2}}|^2 dx \\le C(n,m,\\chi ,M_0)q_k^{\\frac{\\delta _1}{\\delta _1-1}}\\Big (\\int _{{\\mathbb {R}}^n} u^{q_{k-1}}dx \\Big )^{\\gamma _1}.$ On the other hand, taking $w=u^{\\frac{q_k+m-1}{2}},~~ q=\\frac{2q_k}{q_k+m-1},~~ r=\\frac{2q_{k-1}}{q_k+m-1},~~C_0=m$ in Lemma REF gives $\\int _{{\\mathbb {R}}^n} u^{q_k}dx \\le C(n)m^{-\\frac{1}{\\delta _2-1}}\\left(\\int _{{\\mathbb {R}}^n} u^{q_{k-1}}dx \\right)^{\\gamma _1}+m\\int _{{\\mathbb {R}}^n} |\\nabla u^{\\frac{q_k+m-1}{2}}|^2 dx,$ where $\\delta _2& =\\frac{q_k+m-1-(1-2/n)q_{k-1}}{q_k-q_{k-1}}=O(1),\\\\\\gamma _2& =1+\\frac{q_k-q_{k-1}}{q_{k-1}+n(m-1)/2}<2 ~~\\mbox{for}~~\\alpha \\ge 1,~ m>1.$ We may insert (REF ) into (REF ) and take the fact $\\gamma _1 <2$ , $\\gamma _2<2$ into account to get $&\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} u^{q_k}dx+\\int _{{\\mathbb {R}}^n} u^{q_k}dx\\nonumber \\\\\\le &C(n,m,\\chi ,M_0)q_k^{\\frac{\\delta _1}{\\delta _1-1}}\\left(\\int _{{\\mathbb {R}}^n} u^{q_{k-1}}dx \\right)^{\\gamma _1}+C(n,m)\\left(\\int _{{\\mathbb {R}}^n} u^{q_{k-1}}dx \\right)^{\\gamma _2} \\nonumber \\\\\\le & C(n,m,\\chi ,M_0)q_k^{\\frac{\\delta _1}{\\delta _1-1}}\\left(\\left(\\int _{{\\mathbb {R}}^n} u^{q_{k-1}}dx \\right)^{\\gamma _1}+\\left(\\int _{{\\mathbb {R}}^n} u^{q_{k-1}}dx \\right)^{\\gamma _2}\\right) \\nonumber \\\\\\le & 2C(n,m,\\chi ,M_0)q_k^{\\frac{\\delta _1}{\\delta _1-1}}\\max \\left(\\left(\\int _{{\\mathbb {R}}^n} u^{q_{k-1}}dx\\right)^2,1\\right) \\nonumber \\\\\\le & 2C(n,m,\\chi ,M_0)q_k^{1+n/2}\\max \\left(\\left(\\int _{{\\mathbb {R}}^n} u^{q_{k-1}}dx\\right)^2,1\\right) \\nonumber \\\\\\le & 2C(n,m,\\chi ,M_0)2^{k(1+n/2)}(n\\alpha +nm+1)^{1+n/2}\\max \\left(\\left(\\int _{{\\mathbb {R}}^n} u^{q_{k-1}}dx\\right)^2,1\\right).$ Here we have used the fact that $\\delta _1>1+2/n$ .", "Let $K_0=\\max \\Big (1,\\Vert u_{0\\varepsilon }\\Vert _{L^1({\\mathbb {R}}^n)}),\\Vert u_{0\\varepsilon }\\Vert _{L^\\infty ({\\mathbb {R}}^n)}\\Big )$ , we have the following inequality for the initial data $\\int _{{\\mathbb {R}}^n} u_{0\\varepsilon }^{q_k}dx\\le [\\max (\\Vert u_{0\\varepsilon }\\Vert _{L^1({\\mathbb {R}}^n)}),\\Vert u_{0\\varepsilon }\\Vert _{L^\\infty ({\\mathbb 
{R}}^n)})]^{q_k}\le K_0^{q_k}.$ Denote $y_k(t)=\int _{{\mathbb {R}}^n} u^{q_k}dx,~~ d_0=1+n/2,~~ \bar{a}=C(n,m,\chi ,M_0)(n\alpha +nm+1)^{d_0},$ so that (REF ) can be recast as $y^{\prime }_k(t)+y_k(t)\le 2\bar{a}2^{d_0k}\max \left(y_{k-1}^2(t),1\right).$ By virtue of Lemma 4.1 of [7] we obtain $\int _{{\mathbb {R}}^n} u(\cdot ,t)^{q_k}dx\le (2\bar{a})^{2^k-1}2^{d_0(2^{k+1}-k-2)}\max \Bigg (\sup _{t\ge 0}\Big (\int _{{\mathbb {R}}^n}u(\cdot ,t)^{q_0}dx\Big )^{2^k},K_0^{q_k}\Bigg ).$ Recalling $q_k=2^k+n\alpha +nm$ and taking the power $\frac{1}{q_k}$ of both sides of (REF ), we obtain the uniform boundedness of the solution $u$ by letting $k\rightarrow \infty $ : $\Vert u(\cdot ,t)\Vert _{L^\infty ({\mathbb {R}}^n)}\le 2\bar{a}2^{2d_0}\max \Big (\sup _{t\ge 0}\int _{{\mathbb {R}}^n} u(\cdot ,t)^{q_0}dx,K_0\Big ).$ Thanks to Proposition REF with $q_0=1+n\alpha +nm$ , we find $\int _{{\mathbb {R}}^n} u(\cdot ,t)^{1+n\alpha +nm}dx\le C\left(\Vert u_{0\varepsilon }\Vert _{L^{1+n\alpha +nm}({\mathbb {R}}^n)},m_0,M_0 \right).$ Thus (REF ) implies $\Vert u(\cdot ,t)\Vert _{L^\infty ({\mathbb {R}}^n)}\le C(K_0).$ This is exactly the anticipated result.", "$\Box $ Combining the local existence result of Proposition REF with the uniform boundedness results of Propositions REF and REF , we close the proof of Theorem REF ." ], ["Proof of the main theorems", "In this section, we give the proofs of Theorems REF , REF and REF and show the global existence of a weak solution to (REF ) in the three cases.", "Proof of Theorems REF , REF and REF : By virtue of Propositions REF and REF , for initial data satisfying $u_{0\varepsilon }\in L^1\cap L^\infty ({\mathbb {R}}^n)$ , the following basic estimates hold for any $T>0:$ $& \Vert u_\varepsilon \Vert _{L^\infty (0,T;L^1\cap L^q({\mathbb {R}}^n))}\le C,~~\mbox{for any}~1<q\le \infty ,\\& \Vert u_\varepsilon \Vert _{L^{k+\alpha -1}(0,T;L^{k+\alpha -1}({\mathbb {R}}^n))}\le C,~~ \mbox{for any}~ 1<k\le \infty ,\\&\left\Vert \nabla u_\varepsilon ^{\frac{m+k-1}{2}}\right\Vert _{L^2(0,T;L^2({\mathbb {R}}^n))}\le C,~~ \mbox{for any}~ 1<k<\infty ,$ where the constants $C$ depend on $\Vert u_{0\varepsilon }\Vert _{L^1\cap L^\infty ({\mathbb {R}}^n)}$ , $m$ , $\alpha $ , $n, \chi , m_0, M_0$ but not on $\varepsilon $ .", "Hence there exists a subsequence $u_\varepsilon $ , without relabeling, such that for any $T>0$ $& u_\varepsilon \rightharpoonup u~ \mbox{in}~ L^q(0,T;L^1\cap L^q({\mathbb {R}}^n)),~~\mbox{for any}~ 1\le q<\infty ,\\& u_\varepsilon \stackrel{\ast }{\rightharpoonup } u~~\mbox{in}~ L^\infty (0,T;L^1\cap L^\infty ({\mathbb {R}}^n)).\nonumber $ Furthermore we will show that for any $T>0:$ $& \nabla u_\varepsilon ^m \in L^\infty (0,T;L^2({\mathbb {R}}^n)),\\& (u_\varepsilon ^m)_t \in L^2(0,T;L^2({\mathbb {R}}^n)).$ Multiplying (REF ) with $(u_\varepsilon ^m)_t$ and $(u_\varepsilon )_t$ respectively we have $& \frac{4m}{(m+1)^2}\int _{{\mathbb {R}}^n} \left| \left(u_\varepsilon ^{\frac{m+1}{2}}\right)_t \right|^2 dx \nonumber \\= & -\frac{1}{2}\frac{d}{dt}\int _{{\mathbb {R}}^n} |\nabla u_\varepsilon ^m|^2 dx-\varepsilon \int _{{\mathbb {R}}^n} \nabla u_\varepsilon \cdot (\nabla u_\varepsilon ^m)_t dx +\chi m\int _{{\mathbb {R}}^n} u_\varepsilon ^{\alpha +m-1} (u_\varepsilon )_t dx\left(M_0-\int _{{\mathbb {R}}^n} u_\varepsilon dx \right)
\\nonumber \\\\\\le & \\frac{2m}{(m+1)^2}\\int _{{\\mathbb {R}}^n} |(u_\\varepsilon ^{\\frac{m+1}{2}})_t|^2 dx +C(\\chi ,M_0,m)\\int _{{\\mathbb {R}}^n} u_\\varepsilon ^{2\\alpha +m-1}dx,$ and $&\\varepsilon \\int _{{\\mathbb {R}}^n} |(u_\\varepsilon )_t|^2dx \\nonumber \\\\= & -\\varepsilon \\int _{{\\mathbb {R}}^n} \\nabla u_\\varepsilon ^m \\cdot (\\nabla u_\\varepsilon )_t dx-\\frac{\\varepsilon ^2}{2}\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} |\\nabla u_\\varepsilon |^2 dx +\\chi \\varepsilon \\int _{{\\mathbb {R}}^n} u_\\varepsilon ^\\alpha (u_\\varepsilon )_t dx\\left(M_0-\\int _{{\\mathbb {R}}^n} u_\\varepsilon dx\\right) \\nonumber \\\\\\le & -\\varepsilon \\int _{{\\mathbb {R}}^n} \\nabla u_\\varepsilon ^m \\cdot (\\nabla u_\\varepsilon )_t dx-\\frac{\\varepsilon ^2}{2}\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} |\\nabla u_\\varepsilon |^2 dx +\\frac{\\varepsilon }{2}\\int _{{\\mathbb {R}}^n} |(u_\\varepsilon )_t|^2 dx +C(\\chi ,M_0,\\varepsilon )\\int _{{\\mathbb {R}}^n} u_\\varepsilon ^{2\\alpha } dx.$ Combining (REF ) with (REF ) one has $& \\frac{2m}{(m+1)^2} \\int _{{\\mathbb {R}}^n} |u_\\varepsilon ^{\\frac{m+1}{2}}|^2 dx +\\frac{1}{2} \\frac{d}{dt}\\int _{{\\mathbb {R}}^n} |\\nabla u_\\varepsilon ^m|^2 dx \\nonumber \\\\& +\\frac{\\varepsilon }{2}\\int _{{\\mathbb {R}}^n} |(u_\\varepsilon )_t|^2 dx +\\frac{\\varepsilon ^2}{2}\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} |\\nabla u_\\varepsilon |^2 dx +\\varepsilon \\frac{4m}{(m+1)^2}\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} |\\nabla u_\\varepsilon ^{\\frac{m+1}{2}}|^2 dx \\nonumber \\\\\\le & C(\\chi ,M_0,m)\\left( \\int _{{\\mathbb {R}}^n} u_\\varepsilon ^{2\\alpha +m-1} dx +\\int _{{\\mathbb {R}}^n} u_\\varepsilon ^{2\\alpha } dx \\right).$ Integrating (REF ) with respect to time follows that for any $T>0$ $& \\frac{2m}{(m+1)^2}\\int _0^T \\int _{{\\mathbb {R}}^n} \\left| \\left(u_\\varepsilon ^{\\frac{m+1}{2}}\\right)_t\\right|^2 dx dt +\\frac{1}{2}\\displaystyle \\sup _{0<t<T}\\int _{{\\mathbb {R}}^n} |\\nabla u_\\varepsilon ^m|^2 dx \\nonumber \\\\& +\\frac{\\varepsilon }{2}\\int _0^T \\int _{{\\mathbb {R}}^n} |(u_\\varepsilon )_t|^2 dx dt +\\frac{\\varepsilon ^2}{2}\\displaystyle \\sup _{0<t<T} \\int _{{\\mathbb {R}}^n} |\\nabla u_\\varepsilon |^2 dx +\\varepsilon \\frac{4m}{(m+1)^2}\\displaystyle \\sup _{0<t<T} \\int _{{\\mathbb {R}}^n} |\\nabla u_\\varepsilon ^{\\frac{m+1}{2}}|^2 dx \\nonumber \\\\\\le & C(\\chi ,M_0,m)\\int _0^T \\int _{{\\mathbb {R}}^n} u_\\varepsilon ^{2\\alpha +m} dxdt + \\int _0^T \\int _{{\\mathbb {R}}^n} u_\\varepsilon ^{2\\alpha }dx dt +\\frac{1}{2}\\int _{{\\mathbb {R}}^n} \\left|\\nabla u_{0\\varepsilon }^m\\right|^2 dx \\nonumber \\\\& +\\frac{\\varepsilon ^2}{2} \\int _{{\\mathbb {R}}^n} |\\nabla u_{0\\varepsilon }|^2 dx +\\frac{4m\\varepsilon }{(m+1)^2}\\int _{{\\mathbb {R}}^n} |\\nabla u_{0\\varepsilon }^{\\frac{m+1}{2}}|^2 dx.$ By (REF ) and () we see that for $\\alpha \\ge 1$ , $m> 1$ , there exists a positive constant $C$ which is independent of $\\varepsilon $ such that $&\\int _0^T \\int _{{\\mathbb {R}}^n} |(u_\\varepsilon ^m)_t|^2 dx dt +\\displaystyle \\sup _{0<t<T} \\int _{{\\mathbb {R}}^n} |\\nabla u_\\varepsilon ^m|^2 dx\\\\\\le & \\frac{4m^2}{(m+1)^2} \\Vert u_\\varepsilon \\Vert _{L^\\infty (Q_T)}^{m-1} \\int _0^T \\int _{{\\mathbb {R}}^n} \\left|(u_\\varepsilon ^{\\frac{m+1}{2}})_t\\right|^2 dxdt+\\displaystyle \\sup _{0<t<T} \\int _{{\\mathbb {R}}^n} |\\nabla u_\\varepsilon ^m|^2dx\\le C.$ It can be seen that $u_\\varepsilon ^m \\in L^\\infty (0,T;H^1({\\mathbb {R}}^n))\\cap 
H^1(0,T;L^2({\\mathbb {R}}^n))$ which proves (REF ) and ().", "Consequently, there exists a subsequence $u_\\varepsilon $ without relabeling such that $u_\\varepsilon ^m \\rightarrow \\xi ~~\\mbox{in}~~ C\\left(0,T;L_{loc}^2({\\mathbb {R}}^n)\\right)$ which directly gives that for any bounded domain $\\Omega $ $u_\\varepsilon \\rightarrow \\xi ^{\\frac{1}{m}} ~~\\mbox{in}~~ \\Omega ,~t\\in (0,T).$ On the other hand, recalling (REF ) and using the dominated convergence theorem leads to $u_\\varepsilon \\rightarrow u ~~\\mbox{in}~~ L^q(0,T;L^1\\cap L^q(\\Omega ))~~ \\mbox{for any }~~ 1\\le q<\\infty ,$ and thus from (REF ) we find $u_\\varepsilon \\rightarrow \\xi ^{\\frac{1}{m}}=u~~ \\mbox{in}~~\\Omega , ~t\\in (0,T).$ By (REF ) we arrive at $u_\\varepsilon ^m \\rightarrow u^m~~ \\mbox{in}~~ C\\left(0,T;L^2(\\Omega )\\right).$ Hence from (REF ) we conclude $\\nabla u_\\varepsilon ^m \\stackrel{\\ast }{\\rightharpoonup } \\nabla u^m ~~\\mbox{in}~~ L^\\infty (0,T;L^2({\\mathbb {R}}^n)).$ We can also obtain $u_\\varepsilon \\rightarrow u~~\\mbox{in}~~ C\\left(0,T;L^{2m}(\\Omega )\\right)$ by virtue of the fact $|u_\\varepsilon -u|^m\\le |u_\\varepsilon ^m-u^m|$ for $m>1$ .", "Because of $|u_\\varepsilon -u|^k\\le (2\\Vert u_\\varepsilon \\Vert _{L^\\infty (0,T;L^\\infty (\\Omega ))})^{k-2m}|u_\\varepsilon -u|^{2m}~~ \\mbox{for any }~~ k\\ge 2m,$ we can go further to obtain that for any $2<k<\\infty $ $u_\\varepsilon \\rightarrow u ~~\\mbox{in}~~ C\\left(0,T;L^k(\\Omega )\\right).$ In addition, we need to prove $u_\\varepsilon \\rightarrow u ~~\\mbox{in}~~ L^q(0,T;L^1({{\\mathbb {R}}^n}), ~~\\mbox{for any }~~ 1<q<\\infty .$ Here we will apply the second moment estimate to establish the uniform integrability of $u_\\varepsilon $ at far field.", "From (REF ) we find $\\frac{d}{dt}\\int _{{\\mathbb {R}}^n} |x|^2 u_\\varepsilon (\\cdot ,t)dx& =2n\\int _{{\\mathbb {R}}^n} u_\\varepsilon ^m dx +2n\\varepsilon \\int _{{\\mathbb {R}}^n} u_\\varepsilon dx+\\chi \\int _{{\\mathbb {R}}^n} u_\\varepsilon ^\\alpha |x|^2dx\\left(M_0-\\int _{{\\mathbb {R}}^n} u_\\varepsilon dx \\right) \\nonumber \\\\& \\le C\\left(\\Vert u_\\varepsilon \\Vert _{L^1\\cap L^\\infty ({\\mathbb {R}}^n)}\\right)+\\chi \\Vert u_\\varepsilon \\Vert _{L^\\infty (Q_T)}^{\\alpha -1} M_0\\int _{{\\mathbb {R}}^n} u_\\varepsilon |x|^2 dx.$ Then from (REF ) and by Gronwall inequality one gets $\\int _{{\\mathbb {R}}^n} u_\\varepsilon |x|^2dx \\le \\left( \\int _{{\\mathbb {R}}^n} u_{0\\varepsilon }|x|^2 dx +C_1 \\right)e^{C_2t}<C,$ where $C_1$ , $C_2$ are constants depending on $\\Vert u_{0\\varepsilon }\\Vert _{L^1\\cap L^\\infty ({\\mathbb {R}}^n)}$ , $\\chi $ , $M_0, \\alpha ,m,n$ .", "Then we compute $\\int _0^T \\Vert u_\\varepsilon \\Vert _{L^1(|x|>R)}^q dt \\le \\int _0^T \\frac{\\left(\\int _{{\\mathbb {R}}^n} u_\\varepsilon |x|^2 dx\\right)^q}{R^{2q}} dt \\rightarrow 0 ~~\\mbox{as}~~ R \\rightarrow \\infty $ for any $1<q<\\infty $ , and the weak semi-continuity of $L^q(0,T;L^1(|x|>R))$ yields $\\int _0^T \\Vert u\\Vert _{L^1(|x|>R)}^q dt \\le \\displaystyle \\liminf _{\\varepsilon \\rightarrow 0} \\int _0^T \\Vert u_\\varepsilon \\Vert _{L^1(|x|>R)}^q dt \\rightarrow 0 ~~\\mbox{as}~~ R \\rightarrow \\infty .$ Therefore, the following inequality is derived that for any $1<q<\\infty $ , as $R\\rightarrow \\infty $ , $\\varepsilon \\rightarrow 0$ , $& \\int _0^T \\Vert u_\\varepsilon -u\\Vert _{L^1({\\mathbb {R}}^n)}^q dt =\\int _0^T \\left( \\Vert u_\\varepsilon -u\\Vert _{L^1(|x|>R)}+\\Vert u_\\varepsilon -u\\Vert _{L^1(|x|\\le 
R)} \\right)^q dt\\\\\\le & C(q)\\left(\\int _0^T\\Vert u_\\varepsilon \\Vert _{L^1(|x|>R)}^q dt +\\int _0^T \\Vert u\\Vert _{L^1(|x|>R)}^q dt +\\int _0^T \\Vert u_\\varepsilon -u\\Vert _{L^1(|x|\\le R)}^q dt \\right) \\rightarrow 0.$ In the last inequality, the first term goes to zero due to (REF ), the second term is given by (REF ) and (REF ) yields the third term, thus one proves (REF ).", "Now integrating (REF ) with respect to $x$ , $t$ we get the weak formulation for $u_\\varepsilon $ $& \\int _0^T \\int _{{\\mathbb {R}}^n} \\nabla u_\\varepsilon ^m \\cdot \\nabla \\varphi dxdt +\\varepsilon \\int _0^T \\int _{{\\mathbb {R}}^n} \\nabla u_\\varepsilon \\cdot \\nabla \\varphi dxdt -\\chi \\int _0^T\\int _{{\\mathbb {R}}^n} u_\\varepsilon ^\\alpha \\varphi \\left( M_0-\\int _{{\\mathbb {R}}^n} u_\\varepsilon dx \\right) dxdt \\nonumber \\\\= & \\int _{{\\mathbb {R}}^n} u_{0\\varepsilon }(x) \\varphi (x,0) dx +\\int _0^T \\int _{{\\mathbb {R}}^n} u_\\varepsilon \\varphi _t dxdt $ for any continuously differentiable function $\\varphi $ with compact support in ${\\mathbb {R}}^n \\times [0,T)$ .", "Thanks to (REF ), (REF ) and (REF ), passing to the limit $\\varepsilon \\rightarrow 0$ in (REF ) we obtain $& \\int _0^T \\int _{{\\mathbb {R}}^n} \\nabla u^m \\cdot \\nabla \\varphi dxdt -\\chi \\int _0^T \\int _{{\\mathbb {R}}^n} u^\\alpha \\varphi \\left( M_0-\\int _{{\\mathbb {R}}^n} u dx \\right) dxdt \\nonumber \\\\= & \\int _{{\\mathbb {R}}^n} u_0(x)\\varphi (x,0)dx +\\int _0^T \\int _{{\\mathbb {R}}^n} u\\varphi _t dxdt,$ which finishes the proof of our main results.", "$\\Box $" ], [ "Numerical results", "Throughout this section, we numerically consider the radial solutions satisfying $\\left\\lbrace \\begin{array}{ll}u_t=(u^m)_{rr}+\\frac{n-1}{r}(u^m)_r+\\chi u^\\alpha \\left( M_0-n \\beta _n \\int _0^\\infty u(r,t)r^{n-1} dr \\right), \\quad 0<r<\\infty , ~t>0, \\\\u^{\\prime }(0,t)=0,\\quad u \\rightarrow 0 \\mbox{~~as~~} r \\rightarrow \\infty , \\quad t>0, \\\\u(r,0)=u_0(r) \\ge 0,\\end{array}\\right.$ where $n \\ge 3$ and $\\beta _n=\\frac{\\pi ^{n/2}}{\\Gamma (n/2+1)}$ .", "Here the initial mass $m_0=n \\beta _n \\int _0^\\infty u(r,t)r^{n-1} dr$ is assumed to be $m_0<M_0$ such that $M_0-m(t)$ remains non-negative for all $0<t<\\infty $ .", "A series of numerical experiments of the PDE (REF ) with different forms of initial data are presented in this section.", "The numerical results illustrate the theoretical predications of the earlier sections, as well as explore other issues that lie beyond the scope of the analysis.", "Numerical simulations are carried out using semi-implicit finite difference scheme for the diffusion term and linearized method specialized for the nonlinear reaction term.", "Here $\\alpha $ is divided into three cases: $1\\le \\alpha <m+2/n$ , $\\alpha =m+2/n$ and $\\alpha >m+2/n.$" ], [ "Global existence for the subcritical case $1 \\le \\alpha <m+2/n$", "In this subsection, we solve the problem starting from non-negative initial data of forms: compactly supported, non-compactly supported.", "As is expected in Theorem REF , for any given initial mass $m_0$ and $M_0,$ the solution converges to the compact supported steady profile with mass $M_0$ , see Figure REF (a)(b) and Figure REF (a)(b).", "This is more evident in Figure REF (c)(d) and Figure REF (c)(d) where time-profiles of the solution are plotted on a log-scale graph, and the maximum of the solution tends to that of the stationary solution.", "Figure: Numerical simulations of the global existence 
with compactly supported initial data for $n=3$ : $m_0=32.72413808$ , $M_0=120$ ; the density converges to the unique compactly supported steady solution with mass $M_0$ . Figure: Numerical simulations of the global existence with non-compactly supported initial data for $n=3$ : $m_0=19.82455437$ , $M_0=101.10522729$ ; the density converges to the unique compactly supported steady solution with mass $M_0$ ." ], ["Finite time blow-up for the critical case $\alpha =m+2/n$", "In Theorem REF , we have proved that the solution exists globally under the condition $M_0 \le M_\ast $ , where $M_\ast $ is defined in (REF ).", "For $M_0> M_\ast $ , we solve (REF ) starting from initial data with different masses $m_0$ and mass capacities $M_0$ to illustrate the influence of $M_0,m_0$ on the dynamics.", "Numerical experiments show that for any given initial mass $m_0$ , there exists a unique critical value $M_c$ such that for $M_0<M_c$ , all the solutions converge to the unique steady solution with mass $M_0$ .", "In contrast, for $M_0>M_c,$ all the solutions blow up in finite time.", "We can infer from Table REF that higher dimensions require a larger $M_c$ for finite time blow-up.", "Table: Critical threshold $M_c$ on $M_0$ separating finite time blow-up and global existence. We take $n=3$ as an example, beginning with multi-bump initial data of mass $m_0=4.57840705$ . For $m_0<M_\ast <M_0<34.36599205$ , monitoring the maximum and the mass of the solution, we observe that the bumps first move towards the center and the solution quickly increases to its maximum; it then decreases, spreads outwards and finally converges to the compactly supported steady profile with mass $M_0$ ; see Figure REF (a1-a3).", "For $M_0>34.36599205$ , the coefficient $M_0-m_0$ of the growth term is large enough to prevent the solution from spreading: the bumps move towards the origin and grow to form a local maximum that determines the position of a blow-up singularity; see Figure REF (b1).", "This is further verified by the time evolution of the mass and maximum of the solution (see Figure REF (b2-b3)).", "In addition, numerical simulations of the blow-up profile illustrate that the larger $M_0$ is, the faster the solution blows up (see Figure REF (a)).", "On the other hand, for any given $M_0,$ there is a critical value $m_c$ such that the solution blows up in finite time for $m_0<m_c$ (equivalently, $M_0-m_0>M_0-m_c$ ), and the blow-up time $T_b$ becomes longer for smaller $m_0$ ; see Figure REF (b).", "Figure: The initial mass $m_0=4.57840705$ , global existence (top) and finite time blow-up (bottom): (a1-a3) $M_0=27.47044232$ , the solution converges to the unique steady solution with mass $M_0$ ; (b1-b3) $M_0=45.78407053$ , at $T_b=110.823$ s the solution forms a local maximum that evolves to produce blow-up and the mass increases to $9.31417986$ . Figure: Relationship between the blow-up time $T_b$ and $M_0-m_0$ : (a) given $m_0=4.57840705$ , blow-up time $T_b$ corresponding to different $M_0$ in the range $M_0>34.36599205$ ; (b) given $M_0=90$ , blow-up time $T_b$ for $m_0<54.53645869$ . In the process of the simulations, we found that $M_0,m_0$ directly affect the blow-up point and the behavior of the blow-up solution.", "We begin with multi-bump initial data (see Figure REF (a)) with mass $m_0=97.58555462$ in dimension $n=5$ .", "For
$M_0>283.97396395$ , the solution blows up in finite time, otherwise it exists globally.", "It is interesting to note that for $M_0=292.75666386$ , the bumps merge into a single bump and only one singularity, rather than four, ultimately occurs, see Figure REF (b).", "For the larger value $M_0=439.13499579$ , we observe that two singularities appear near the center, see Figure REF (c).", "Figure: Different configurations for different $M_0$: (a) the initial density with $m_0=97.58555462$, (b) $M_0=292.75666386$, formation of one local maximum at the blow-up point, (c) $M_0=439.13499579$, formation of two local maxima that determine the position of two blow-up singularities.", "An important property of degenerate diffusion equations like (REF ) is the nonlinear superposition principle for disjointly-supported solutions [35].", "That is, as long as their respective regions of support do not overlap, any combination of non-negative solutions can be pasted together in the domain to yield another solution.", "For times when there is no overlap of supports, each bump evolves independently of the others; examples are shown in Figure REF (a2), where the central bump and the two compactly-supported bumps away from zero develop independently and the central bump finally evolves to one singularity at zero, and in Figure REF (b2), where the six disjointly-supported bumps develop separately and ultimately two singularities next to zero are formed.", "Figure: Time profiles for solutions approaching blow-up with different forms of initial data for $n=3$: (a) the piecewise function evolves to one local maximum at zero, (b) the six disjointly-supported bumps evolve independently to form two local maxima next to zero."
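The critical capacities quoted in this section (and in Table REF) are located numerically by repeatedly solving the equation for a fixed initial mass and narrowing a bracket around the threshold. The following is a minimal sketch of such a bisection search, not the code used for the experiments; the routine `blows_up` is a hypothetical stand-in for whatever numerical scheme evolves the density and classifies the outcome.

```python
# Sketch only: bisection for the critical capacity M_c at a fixed initial mass m0.
# `blows_up(m0, M0)` is an assumed user-supplied routine that runs the solver and
# returns True for finite time blow-up, False for convergence to a steady profile.

def critical_capacity(blows_up, m0, M_low, M_high, tol=1e-6):
    """Requires a valid bracket: no blow-up at M_low, blow-up at M_high."""
    while M_high - M_low > tol:
        M_mid = 0.5 * (M_low + M_high)
        if blows_up(m0, M_mid):
            M_high = M_mid   # blow-up: the threshold M_c lies below M_mid
        else:
            M_low = M_mid    # global existence: the threshold lies above M_mid
    return 0.5 * (M_low + M_high)

# Example bracket for n = 3, m0 = 4.57840705 (values quoted in the text):
# M_c = critical_capacity(blows_up, 4.57840705, M_low=27.5, M_high=45.8)
```

The cost is dominated by the solver calls: roughly log2((M_high - M_low)/tol) full simulations are needed per value of m0.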
], [ "Finite time blow-up and infinite-time convergence for the supercritical case $\\alpha >m+2/n$", "Theorem REF tells us that the solution of (REF ) exists globally for $\\Vert u_0\\Vert _{L^{p_0}({\\mathbb {R}}^n)}<C_{p_0}$ .", "In this subsection, we numerically explore the global existence and finite time blow-up of solutions under the assumption $\\Vert u_0\\Vert _{L^{p_0}({\\mathbb {R}}^n)} \\ge C_{p_0}$ .", "Without loss of generality, we consider the case $n=3$ .", "Given $M_0=80$ , for $m_0>61.18862434$ , that is, $\\Vert u_0\\Vert _{L^{p_0}({\\mathbb {R}}^n)}>11.38458401>C_{p_0}=0.58396290$ , the solution exists globally and converges to the unique compactly supported stationary solution, which is determined only by $M_0$ and not by $m_0$ , see Figure REF (a2)(b2).", "Conversely, for $m_0<61.18862434$ and $C_{p_0}=0.58396290<\\Vert u_0\\Vert _{L^{p_0}({\\mathbb {R}}^n)}<11.38458401$ , the solution blows up in finite time, see Figure REF for the formation of a singularity.", "Figure: Convergence to the steady profile: (a1-a3) $M_0=80$, $m_0=73.28218769$, the initial convex function evolves to the unique compactly supported stationary solution with mass 80, (b1-b3) $M_0=80$, $m_0=68.67339112$, the multi-bump initial data tends to the same steady state with mass 80. Figure: The finite time blow-up for $M_0=80$, $m_0=45.78226075$: (a) the multi-bump initial data forms one local maximum that produces blow-up in finite time, (b) the mass increases to $53.36984539$ at time $T_b=0.3635$ s, (c) the maximum increases dramatically.", "Furthermore, in order to explore the relationship among $M_0$ , $m_0$ and the blow-up time $T_b$ , we carry out numerical experiments with different choices of $M_0,m_0$ .", "They show that for any given $m_0$ and its corresponding $M_c$ , a larger $M_0$ leads to a shorter blow-up time when $M_0>M_c$ , see Figure REF (a).", "Similarly, for any given $M_0$ , there exists a critical threshold $m_c$ such that for $m_0<m_c$ , the blow-up time is longer when $m_0$ is smaller, see Figure REF (b).", "Figure: The influence of $M_0,m_0$ on the blow-up time $T_b$: (a) $m_0=15.26075358$, $M_0>58.53590504$, five branches of masses with time evolution before blow-up time $T_b$, (b) $M_0=80$, $m_0<m_c=61.18862433$, four branches of masses with time evolution before blow-up time $T_b$."
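The smallness condition of the theorem is checked in practice by evaluating the $L^{p_0}$ norm of the chosen initial data. Below is a hedged sketch of that computation for radially symmetric data; the exponent, the radial profile, and the truncation radius are illustrative assumptions, not values fixed by the paper.

```python
# Sketch: numerical L^p norm of a radial profile u0(r) >= 0 on R^n, using
# ||u0||_p^p = |S^(n-1)| * int_0^inf u0(r)^p r^(n-1) dr, where the surface area
# of the unit sphere is |S^(n-1)| = 2 pi^(n/2) / Gamma(n/2).
import numpy as np
from math import gamma, pi

def lp_norm_radial(u0, p, n, r_max, num=20001):
    r = np.linspace(0.0, r_max, num)
    f = u0(r) ** p * r ** (n - 1)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))   # trapezoid rule
    surface = 2.0 * pi ** (n / 2.0) / gamma(n / 2.0)
    return (surface * integral) ** (1.0 / p)

# Illustration in dimension n = 3 with a compactly supported bump (assumed data):
bump = lambda r: np.maximum(1.0 - r ** 2, 0.0)
print(lp_norm_radial(bump, p=2.0, n=3, r_max=1.0))
```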
], [ "Conclusions", "In this paper, we have identified a critical exponent $\\alpha =m+2/n$ for equation (REF ) that separates blow-up solutions from those that exist globally, and we have described the structure in three cases.", "Firstly, for the subcritical case $1\\le \\alpha <m+2/n$ (the diffusion dominates for large populations), the global existence and uniform boundedness of a weak solution to (REF ) are obtained.", "Secondly, for the critical case $\\alpha =m+2/n$ , there exists an upper bound $M_\\ast $ such that the solution exists globally for $m_0<M_0<M_\\ast $ .", "Thirdly, for the supercritical case $\\alpha >m+2/n$ , where the logistic growth term dominates for large populations, there exists a universal constant $C_{p_0}$ such that the solution exists globally for initial data satisfying $\\Vert u_0\\Vert _{L^{p_0}({\\mathbb {R}}^n)}<C_{p_0}$ .", "In addition, for the critical case $\\alpha =m+2/n$ and the supercritical case $\\alpha >m+2/n$ , numerical simulations illustrate that for any given initial mass $m_0$ , there exists a critical threshold $M_c$ separating finite time blow-up from global existence.", "More precisely, as the population grows, the competitive effect of the logistic growth term becomes more influential, so that finite time aggregation occurs when $M_0>M_c$ (the position and number of the blow-up points change for different $M_0$ ).", "Conversely, for $m_0<M_0<M_c$ , $M_0$ is too small to prevent the population from spreading, even though the growth term is dominant at the beginning (see Figure REF (a3)), and the density finally converges to the compactly supported steady profile.", "This suggests the further question of how $M_c$ is determined for any given $m_0$ , which is a challenging problem for our future research." ] ]
2107.01764
[ [ "Nonparametric Detection of Multiple Location-Scale Change Points via\n Wild Binary Segmentation" ], [ "Abstract While parametric multiple change point detection has been widely studied, less attention has been given to the nonparametric task of detecting multiple change points in a sequence of observations when their distribution is unknown.", "Most existing work on this topic is either based on penalized cost functions which can suffer from false positive detections, or on binary segmentation which can fail to detect certain configurations of change points.", "We introduce a new approach to change point detection which adapts the recently proposed Wild Binary Segmentation (WBS) procedure to a nonparametric setting.", "Our approach is based on the use of rank based test statistics which are especially powerful at detecting changes in location and/or scale.", "We show via simulation that the resulting nonparametric WBS procedure has favorable performance compared to existing methods, particularly when it comes to detecting changes in scale.", "We apply our procedure to study a problem in stylometry involving change points in an author's writing style, and provide a full implementation of our algorithm in an associated R package." ], [ "Introduction", "The multiple change point problem involves the simultaneous estimation of the number and location of $K \\ge 0$ change points $\\tau = \\lbrace \\tau _1,\\ldots ,\\tau _K\\rbrace $ that partition a sequence $\\textbf {y} = (y_1,\\ldots ,y_n)$ into $K+1$ segments such that: $S_1 = \\lbrace y_1,\\ldots , y_{\\tau _1} \\rbrace , \\quad S_2 = \\lbrace y_{\\tau _1 + 1}, \\ldots , y_{\\tau _2}\\rbrace , \\quad \\ldots , \\quad S_{K+1} = \\lbrace y_{\\tau _K +1},\\ldots , y_n\\rbrace .$ We will make the commonly used assumption that the $y_i^{\\prime }s$ are univariate observations which are independent and identically distributed between each pair of change points so that $y_i \\sim _{i.i.d.}", "F_j$ if $y_i \\in S_j$ for some set of unknown continuous distributions $F_1,\\ldots ,F_{K+1}$ .", "The literature on the single change point problem where $K \\in \\lbrace 0,1\\rbrace $ is vast, with an overview of traditional methods provided by [4].", "For the multiple change point problem where $K$ can be greater than 1, most existing literature has assumed that the distributional form of each $F_j(\\cdot ;\\theta _j)$ is known, with the change points consisting of shifts in a finite dimensional parameter $\\theta $ .", "A recent overview of parametric techniques for multiple change point detection is provided by [33] and [42].", "Most existing methods fall into one of four categories: Sequential approaches such as the CUSUM [37], [32], [26] or change point model framework [19] which process the sequence one observation at a time, and which are only concerned with estimating the location of the most recent change point.", "Penalized cost function approaches which seek to find the global minimum of some appropriately chosen cost function $C(\\mathbf {y},\\tau )$ (such as the negative log-likelihood) with a penalty for the number of change points [17], [25], [23], [27].", "Stepwise approaches such as binary segmentation which transform the multiple change point problem into the task of recursively testing for the existence of a single change point [2], [10], [34], [43], [22], [15].", "The key advantage of such approaches is that they allow for a strict control on the false positive rate, i.e.", "the probability of incorrectly concluding that at least 
one change point exists in the sequence when in fact none are present.", "Various Bayesian models such as [3], [16], [13], [9].", "Although interesting, the Bayesian paradigm takes a quite different approach to the multiple change point problem, and we will not discuss it further.", "In many real world situations, it is not reasonable to expect the distributional form of each $F_j$ to be known.", "This has led to the development of nonparametric change point detection algorithms which do not require information about the sequence distributions.", "Much of the classic nonparametric literature has focused on the single change point setting [7], [38], [6], [8] however the multiple change point setting is also of considerable interest.", "Extensions of the sequential approach to the nonparametric setting have been considered by authors such as [18], [41], [35], however such approaches are inefficient in a non-sequential setting since inference for a change point at time $t$ is based only on the observations $y_1,\\ldots ,y_{t-1}$ rather than on the whole sequence.", "The penalized cost function approach is limited in the nonparametric setting since there is no likelihood to act as a cost function.", "However, [44] has recently shown encouraging results using a cost function based on a nonparametric likelihood, and follow-up work by [20] has rendered their approach computationally tractable even for long sequences.", "While this direction is promising, a limitation is the lack of control on the false positive probability which as we show in Section REF can be very substantial.", "The stepwise binary segmentation approach directly extends to the nonparametric setting, since the test for a single change point at each stage can be carried out using a nonparametric single change point test such as that based on maximized Mann-Whitney statistic as used in [38].", "Such an approach was taken by [30] in a multivariate context.", "This allows for a strict control on the false positive rate.", "However, binary segmentation can perform poorly in multiple change point settings due to its greedy nature which only ever searches for the best fitting single change point at each step.", "In some situations, which we discuss further in Section REF , multiple change points will mask each other which may lead to a failure to detect any of them.", "Recently in an influential paper, [15] proposed a modified version of binary segmentation which solves some of these problems.", "Their approach – named Wild Binary Segementation (WBS) – is based on maximizing a test statistic over randomly sampled subsequences of $y_1,\\ldots ,y_n$ .", "By considering subsequences, WBS is able to detect change points which only affect local parts of the sequence which would potentially be masked when using standard binary segmentation.", "Both their original paper and the subsequent literature shows a high degree of promise for WBS.", "In [15], only the parametric setting of the change point problem is considered, with a focus on detecting change points in the mean of a sequence of Gaussian observations.", "In this work, we develop a nonparametric version of their WBS procedure which is able to detect location-scale (i.e.", "mean and/or variance) changes in sequences with an arbitrary and potentially unknown distribution, while maintaining strict control of the false positive rate.", "Our approach is based on nonparametric rank statistics which have a null distribution which is independent of the form of the $F_j$ distributions.", 
"This independence allows critical values of the test statistics to be computed in a distribution-free manner.", "We compute these values using large scale Monte Carlo simulation and the resulting procedure is implemented in an R package npwbs available at https://cran.r-project.org/web/packages/npwbs/index.html The remainder of this paper proceeds as follows: In Section REF we discuss the single change point problem where $K \\in \\lbrace 0,1\\rbrace $ .", "Section REF generalizes this to the multiple change point problem via binary segmentation, and explains the problems that this procedure can suffer from.", "In Section REF we introduce our nonparametric multiple change point detection procedure using Wild Binary Segmentation, with further implementation details discussed in Sections REF and REF .", "Section compares its performance to several existing nonparametric change point methods.", "This includes the method proposed by [36] who also developed a nonparametric WBS scheme that differs from ours in both the choice of test statistic and the thresholding procedure.", "We show in this section that their method tends to suffer from poor performance on most datasets.", "Finally in Section we conclude with a real data example taken from the field of stylometry where the task is to detect changes in the writing style of an author." ], [ "Detecting a Single Change Point", "We first consider the task of testing for the existence of a single change point $\\tau $ at a specific known location $\\tau =k$ in a sequence $y_1,\\ldots ,y_n$ of observations.", "This can be phrased as a two sample hypothesis test [4]: $H_0: y_1,\\ldots ,y_n \\sim _{i.i.d} F_0$ $H_1: y_1,\\ldots ,y_k \\sim _{i.i.d} F_1,\\quad y_{k+1},\\ldots ,y_n \\sim _{i.i.d} F_2, \\quad F_1 \\ne F_2$ Let $T^k_{1,n}$ denote a two sample test statistic that has been chosen with regards to the type of change that we wish to detect (for example, a shift in location or scale).", "To make the notation clear, we write $T^k_{p,q}$ to denote a two sample test statistic for the samples $\\mathbf {y}_1 = (y_p,y_{p+1},\\ldots ,y_k)$ and $\\mathbf {y}_2 = (y_{k+1},\\ldots ,y_q)$ .", "In the parametric case where the functional forms of $F_0, F_1, F_2$ are all known with only their parameters unknown, a statistic based on the generalised likelihood ratio test is a common choice [4].", "Now suppose that $k$ is unknown and requires estimation.", "In this case, the null hypothesis is the same as above but the alternative hypothesis becomes: $H_1: \\exists k: y_1,\\ldots ,y_k \\sim _{i.i.d} F_1,\\quad y_{k+1},\\ldots ,y_n \\sim _{i.i.d} F_2, \\quad F_1 \\ne F_2$ The test statistic can be chosen as the maximal value of $T^k_{1,n}$ over all possible $k$ , i.e.", ": $T_{1,n} = \\max _{1\\le k<n} |T^k_{1,n}|$ The null hypothesis of no change is rejected if $T_{1,n} > \\gamma $ for some appropriately chosen threshold $\\gamma (n)$ , where we write .", "In the case where the null distribution of $T_{1,n}$ is known and where $\\gamma $ is chosen to be equal to its $1-\\alpha ^{th}$ percentile then this leads to a control of the false positive rate at level $\\alpha $ , i.e.", "if the null hypothesis of no change is true, then the probability of incorrectly detecting a change is less than $\\alpha $ .", "If the null hypothesis is rejected then an estimate of $\\tau $ is given by $\\hat{\\tau } = \\tilde{k}$ where $\\tilde{k}$ is the value of $k$ for which $|T^k_{1,n}|$ is maximal.", "There is a vast literature studying the theoretical properties of this 
change detection procedure in a parametric context, including derivations of the (usually asymptotic) null distribution of $T_{1,n}$ under various choices of the test statistic $T^k_{1,n}$ , along with the construction of confidence bands for the estimate of $\\tau $ .", "The extension of this procedure to a nonparametric setting is straightforward: simply choose $T^k_{1,n}$ to be a test statistic with a null distribution that does not depend on the underlying distribution of $y_i$ .", "One way to achieve this is to take $T^k_{1,n}$ to be a statistic associated with a nonparametric test based on sample ranks [38], [6].", "In this case, the null distribution of $T_{1,n}$ will not depend on the distribution of the $y_i$ observations and the false positive rate can hence be controlled without requiring distributional information about $y_i$ ." ], [ "Binary Segmentation for Detecting Multiple Change Points", "Next suppose that there are an unknown number $K \\ge 0$ of change-points $\\tau = \\lbrace \\tau _1,\\ldots ,\\tau _K\\rbrace $ and the task is to estimate both $K$ and $\\tau $ .", "The above procedure could be directly extended by replacing the maximal test statistic $T_{1,n}$ with one that is maximized over every possible configuration of change points for every value $K \\in \\lbrace 1,2,\\ldots ,n-1\\rbrace $ .", "While finding the configuration which maximizes $T_{1,n}$ might appear challenging, it can often be achieved in a computationally efficient manner through the use of dynamic programming [25].", "However, determining the null distribution of the resulting $T_{1,n}$ is a substantially more difficult task.", "This has led to the development of step-wise procedures which avoid this direct maximization and instead recursively apply hypothesis tests based on the single change point alternative.", "Specifically, consider again the single change point hypothesis test: $H_0: y_1,\\ldots ,y_n \\sim _{i.i.d} F_0$ $H_1: \\exists k: y_1,\\ldots ,y_k \\sim _{i.i.d} F_1,\\quad y_{k+1},\\ldots ,y_n \\sim _{i.i.d} F_2, \\quad F_1 \\ne F_2$ Let $T_{1,n} = \\max _{1\\le k<n} |T^k_{1,n}|$ be defined as above for an appropriate $T^k_{1,n}$ .", "Again, the null hypothesis of no change is rejected if $T_{1,n} > \\gamma (n)$ , where we choose $\\gamma $ as a function of the sample size for reasons that will be discussed below.", "Suppose the null hypothesis is rejected and let $\\tilde{k}$ be the estimated change point location.", "The binary segmentation procedure consists of performing a further hypothesis test to check for a change in the observations $y_1,\\ldots ,y_{\\tilde{k}}$ to the left of the change point.", "This is carried out in the same manner as was done for the original sequence, i.e.", "by computing $T_{1,\\tilde{k}} = \\max _{1 \\le k < \\tilde{k}} |T^k_{1,\\tilde{k}}|$ .", "If this $T_{1,\\tilde{k}} > \\gamma (\\tilde{k})$ then we deduce that there is a second change point in the observations $y_1,\\ldots ,y_{\\tilde{k}}$ .", "This procedure is then applied recursively, with the segment $y_1,\\ldots ,y_{\\tilde{k}}$ split into two pieces around this second point, and both segments tested for further change points and so on, until no more change points are found.", "The same procedure is applied to the observations $y_{\\tilde{k}+1},\\ldots ,y_{n}$ to the right of the original change point, with a further change point in this segment detected if $T_{\\tilde{k}+1,n} = \\max _{\\tilde{k}+1 \\le k < n} |T^k_{\\tilde{k}+1,n}| > \\gamma (n-\\tilde{k})$ .", "The final output of 
this procedure is an estimate $\\hat{K}$ of the number of change points and their locations $\\hat{\\tau } = (\\hat{\\tau }_1,\\ldots ,\\hat{\\tau }_{\\hat{K}})$ .", "The binary segmentation procedure was first introduced by [43] and has been widely used since [22], [15].", "It is sometimes claimed that the main advantage of binary segmentation is computational since it avoids searching through every possible change point configuration [33].", "While this is true, another major advantage is that it allows the false positive rate to be strictly controlled in the case where the sequence does not contain any change points which is not typically the case with multiple change point methods which are based on optimizing a penalized cost function.", "Figure: Example of a sequence with 3 change points (dotted lines) at locations τ={100,110,120}\\tau =\\lbrace 100,110,120\\rbrace where binary segmentation will struggle to detect any changes due to the masking effect.Despite this, binary segmentation suffers from a major limitation.", "At each stage in the recursive segmentation process, a decision is taken whether the segment under consideration contains exactly 0 or 1 change points.", "This may result in a failure to detect change in the case where two or more change points mask each other.", "Consider the sequence shown in Figure REF .", "In this case, there are three change points in the sequence mean that are easily visible to the eye.", "However there is no obvious location for a single change point that would split this sequence into two segments with substantially different means.", "As such, performing the hypothesis test in Equation REF based on a single change point alternative may fail to reject the null hypothesis, in which case the estimate of the number of change points will be $\\hat{K}=0$ .", "To solve this problem, [15] introduced Wild Binary Segmentation which replaces the above test statistic with one based on maximizing over random length subsequences of $y_1,\\ldots ,y_n$ .", "We will now discuss this procedure along with its nonparametric extension." 
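The recursion just described is short enough to state as code. The sketch below is an illustration rather than a packaged implementation; it takes an arbitrary maximized two-sample statistic and a length-indexed threshold function as inputs, both of which are assumptions supplied by the user.

```python
# Sketch of standard binary segmentation. `stat(segment, k)` returns |T^k| for a
# split with k observations on the left; `gamma(length)` returns the detection
# threshold for a segment of that length. Returned indices count observations
# from the start of the full sequence.
import numpy as np

def binary_segmentation(y, stat, gamma, offset=0, min_len=2):
    y = np.asarray(y)
    n = len(y)
    if n < min_len:
        return []
    vals = [stat(y, k) for k in range(1, n)]
    k_best = int(np.argmax(vals)) + 1
    if vals[k_best - 1] <= gamma(n):
        return []                                   # no change detected in this segment
    left = binary_segmentation(y[:k_best], stat, gamma, offset, min_len)
    right = binary_segmentation(y[k_best:], stat, gamma, offset + k_best, min_len)
    return sorted(left + [offset + k_best] + right)
```

Because each call only ever fits a single split, the greedy behaviour, and hence the masking failure illustrated in Figure REF, is visible directly in the structure of the recursion.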
], [ "Nonparametric Wild Binary Segmentation", "Our goal is the nonparametric detection of multiple location-scale change points in a univariate sequence of continuous valued observations.", "Following the above discussion, we use a hypothesis testing framework with a nonparametric two-sample test statistic that is sensitive to changes in both location and scale.", "A natural choice would be an omnibus statistic such as the Kolmogorov-Smirnov or Cramer-von-Mises which can detect arbitrary distributional changes.", "However these statistics can be underpowered for detecting location-scale changes compared to other statistics which are more specialized towards these alternatives .", "Instead, we will consider a Lepage-like statistic which is based on a linear combination of standardized Mann-Whitney and Mood test statistics [28], [41].", "Let $y_1 = (y_1,\\ldots ,y_k)$ and $y_2 = (y_{k+1},\\ldots ,y_n)$ denote two samples of independent observations and consider a test for distributional equality where the null hypothesis is that both samples have identical distributions.", "For each observation $y_i$ , let $r_i$ denote its rank in the combined sample, i.e.", "$r_i = \\sum _{j=1}^n I(y_i \\ge y_j)$ so that smallest observation has rank 1, the second smallest has rank 2 and so on..", "The Mann-Whitney test [29] is a two sample rank test for the alternative hypothesis that the distribution of the two samples differs in location.", "It is defined as: $U^k_{1,n} = \\sum _{i=1}^k r_i - \\frac{k(k+1)}{2}$ Since this statistic depends on the observations only through their ranks, it is easy to show that its distribution under the null hypothesis is independent of the distribution of the observations.", "Specifically, its mean and variance under the null are [29]: $E[U^k_{1,n}] = \\frac{k(n-k)}{2}, \\quad Var[U^k_{1,n}] = \\frac{k(n-k)(n+1)}{12}$ While the Mann-Whitney test will reject the null hypothesis if $y_1$ and $y_2$ come from distributions with different locations, it will generally not reject it if the distributions differ only in scale.", "For this purpose an alternative test statistic should be used.", "Our choice is the Mood statistic [31], defined as: $M^k_{1,n} = \\sum _{i=1}^k \\left(r_i -\\frac{n+1}{2}\\right)^2$ Like the Mann-Whitney test, the distribution of the Mood statistic under the null hypothesis is independent of the distribution of the observations.", "Its mean and variance under the null hypothesis are [31]: $E[M^k_{1,n}] = \\frac{k(n^2-1)}{12}, \\quad Var[M^k_{1,n}] = \\frac{k(n-k)(n+1)(n^2-4)}{180}$ In cases where the distribution of the two samples can differ in either location or scale, a test statistic which is sensitive to both possibilities can be defined by combining the Mann-Whitney and Mood statistics together.", "Specifically, we use a Lepage-type statistic similar to [28] that is based on the sum of squared standardized versions of these statistics, i.e: $T^k_{1,n} = \\left( \\frac{U^k_{1,n} - E[U^k_{1,n}]}{\\sqrt{Var[U^k_{1,n}]}}\\right)^2 + \\left( \\frac{M^k_{1,n} - E[M^k_{1,n}]}{\\sqrt{Var[M^k_{1,n}]}} \\right)^2$ Note that the original Lepage statistic defined in [28] use an Ansari-Bradley test statistic rather than the Mood test statistic for detecting in scale.", "Our decision to use the Mood is based on some empirical results suggesting it has favorable power [12] although we did not find a substantial difference between the two.", "Now suppose that we wish to test the hypothesis in Equation REF for a single change point alternative in the sequence 
$y_1,\\ldots ,y_n$ .", "Due to the concerns discussed in the above section, the previous maximized test statistic $T_{1,n} = \\max _k |T^k_{1,n}|$ may fail to detect a change in the case where multiple change points mask each other.", "Our WBS procedure avoids this problem by randomly selecting subsequences of $y_1,\\ldots ,y_n$ and computing the maximized single change point Lepage statistic on each of these subsequences.", "The statistic used to test the hypothesis in Equation REF is then the maximum of these subsequence statistics.", "The intuition behind this procedure can be seen in Figure REF .", "The failure of standard binary segmentation in this situation comes from the fact that there is no single location which would divide the sequence into two segments with substantially different means.", "However, there are subsequences within which a change point will be very apparent, for example the subsequence $y_{95},\\ldots ,y_{105}$ which will contain the first change point located at $\\tau _1=100$ .", "As such, the value of the Lepage statistic would be substantially higher when evaluated only on this subsequence rather than on the sequence as a whole.", "More formally, let $F^M_n$ be a set of $M$ randomly chosen intervals where the start and end points of each interval have been sampled uniformly from the set $\\lbrace 1,2,\\ldots ,n\\rbrace $ .", "For each such interval $[s_m,e_m] \\in F^M_n$ , let $T^k_{s_m,e_m}$ denote the value of the Lepage statistic for the two sample hypothesis test where $\\mathbf {y}_1 = (y_{s_m},\\ldots ,y_k)$ and $\\mathbf {y}_2 = (y_{k+1},\\ldots ,y_{e_m})$ and let $T_{s_m,e_m} = \\max _{s_m \\le k < e_m} |T^k_{s_m,e_m}|$ be the maximal value of the test statistic on this subsequence.", "The test statistic used in the WBS procedure is then: $T^{WBS}_{1,n} = \\max _{m:[s_m,e_m] \\in F^M_n} T_{s_m,e_m}$ The null hypothesis of no change is rejected if $T^{WBS}_{1,n} > \\gamma (n)$ where $\\gamma $ is again a function of the number of observations and will be discussed in the next section.", "If rejection occurs then let $(s_0,e_0) = \\arg \\max _{m} T_{s_m,e_m} $ be the interval for which the Lepage statistic was maximal.", "The estimate of the change point is then $\\hat{\\tau } = \\tilde{k}$ where $\\tilde{k} = \\arg \\max _k T^{k}_{s_0,e_0}$ .", "The WBS procedure then finds additional change points in a recursive manner, using the same procedure as binary segmentation above.", "First, the observations $y_1,\\ldots ,y_{\\tilde{k}}$ are tested for a change point by maximizing the Lepage statistic over $M$ randomly chosen subsequences of $y_1,\\ldots ,y_{\\tilde{k}}$ and comparing it to a threshold $\\gamma (\\tilde{k})$ .", "The observations $y_{\\tilde{k}+1},\\ldots ,y_n$ to the right of the $\\tilde{k}$ are tested for further change points in a similar way.", "If any further change points are found, the sequence is again split into two segments and both are tested for additional change points, until no further change points are found.", "A full description of the procedure is given in Algorithm REF ." 
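To make the construction concrete, the following sketch implements the Lepage-type statistic defined above and its maximization over randomly drawn intervals. It is an illustrative re-implementation in Python, not the npwbs package itself; the number of intervals and the minimum interval length are assumptions.

```python
# Sketch of the Lepage statistic and the WBS statistic built from it. Continuous
# data (no ties) is assumed, so ranks can be taken by a double argsort.
import numpy as np

def lepage(y, k):
    """Sum of squared standardized Mann-Whitney and Mood statistics for the
    split y[:k] versus y[k:]."""
    n = len(y)
    r = np.argsort(np.argsort(y)) + 1                    # ranks 1..n
    rk = r[:k]
    U = rk.sum() - k * (k + 1) / 2.0                     # Mann-Whitney
    M = ((rk - (n + 1) / 2.0) ** 2).sum()                # Mood
    EU, VU = k * (n - k) / 2.0, k * (n - k) * (n + 1) / 12.0
    EM, VM = k * (n ** 2 - 1) / 12.0, k * (n - k) * (n + 1) * (n ** 2 - 4) / 180.0
    return (U - EU) ** 2 / VU + (M - EM) ** 2 / VM

def wbs_statistic(y, n_intervals=1000, min_len=10, rng=None):
    """Maximum of the maximized Lepage statistic over random subintervals.
    Returns (statistic, interval start, split position)."""
    rng = np.random.default_rng() if rng is None else rng
    y = np.asarray(y)
    n = len(y)
    best = (-np.inf, None, None)
    for _ in range(n_intervals):
        s, e = np.sort(rng.integers(0, n + 1, size=2))
        if e - s < min_len:
            continue
        seg = y[s:e]
        vals = [lepage(seg, k) for k in range(1, len(seg))]
        k = int(np.argmax(vals)) + 1
        if vals[k - 1] > best[0]:
            best = (vals[k - 1], int(s), int(s + k))     # s + k = first index of the right part
    return best
```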
], [ "Threshold Determination", "We next turn our attention to choosing appropriate values of the thresholds $\\gamma (n)$ used in the WBS-Lepage procedure.", "In order to control the false positive rate, we will choose $\\gamma (n)$ to be the $1-\\alpha ^{th}$ percentile of the null distribution of $T^{WBS}_{1,n}$ in which case the false positive rate will be controlled at level $\\alpha $ .", "For example, if we choose $\\gamma (n)$ to be equal to the 95th percentile then the probability of incorrectly detecting a change when none exists will be less than $0.05$ .", "Note that this is a different approach to that taken by [15] who instead used an information criteria based method to determine the number of change points, which does not give any direct control over the false positive rate.", "Due to the nonparametric nature of the Lepage test statistics, the null distribution of $T^{WBS}_{1,n}$ does not depend on the distribution of the $y_i$ observations.", "As such, Monte Carlo simulation can be used to find the required percentiles of the null distribution.", "While this requires a substantial amount of computation, it only needs to be done a single time and the resulting percentiles will hold for any length $n$ sequence of independent continuous observations regardless of their distribution.", "Figure: Values of the threshold γ(n)\\gamma (n) required to give a false positive probability of 0.05 (red line) and 0.01 (black line) for WBS using the Lepage statistic.We implemented this computation as follows: for each $n \\in \\lbrace 10,11,12,\\ldots ,99,100\\rbrace $ we simulated $1,000$ sequences of length $n$ from a $N(0,1)$ distribution and computed $T^{WBS}_{1,n}$ for each sequence.", "For each $n$ , the resulting $T^{WBS}_{1,n}$ statistics were ranked from smallest to largest, with the 50th largest being an estimate of the 95th percentile of $T^{WBS}_{1,n}$ .", "Since the distribution of $T^{WBS}_{1,n}$ does not depend on the distribution of the data, this will also be a valid estimate of its 95th percentile for any other choice of sequence distribution, not just $N(0,1)$ .", "The same procedure was then carried our for each $n \\in \\lbrace 105,110,\\ldots ,995,1000\\rbrace $ , for each $n \\in \\lbrace 1100,1200,\\ldots ,5000\\rbrace $ , then for each $n \\in \\lbrace 6000,7000,\\ldots ,10000\\rbrace $ .", "Linear interpolation based on these values was then used to produce thresholds for every $n \\le 10,000$ .", "Figure REF shows a plot of the resulting thresholds $\\gamma (n)$ for significance level $\\alpha =0.05$ .", "We also carried out a similar procedure to find thresholds corresponding to a significance level of $0.01$ , which may be useful in cases where there is even greater desire to avoid false positives.", "These thresholds provide the values of $\\gamma (n)$ which are used to test for an initial change point in $y_1,\\ldots ,y_n$ .", "Suppose a change point is found a location $\\tilde{k}$ .", "When the test for the next change point is applied to observations $y_1,\\ldots ,y_{\\tilde{k}}$ we use the threshold that corresponds to the length of this segment, i.e.", "$\\gamma (\\tilde{k})$ .", "This procedure is then repeated at each stage of the segmentation, with $\\gamma (\\cdot )$ always chosen to be the value corresponding to the length of the segment being tested for a change point.", "The full procedure for our nonparametric WBS algorithm is provided in Algorithm REF .", "This algorithm is implemented in the npwbs R package [40] which also contains the 
precomputed values of $\\gamma (n)$ above for both $\\alpha =0.05$ and $\\alpha =0.01$ .", "The following two points should be noted: Since our procedure only detects a change point if the observed value $T^{WBS}_{p,q}$ is greater than its 95th (or 99th) null percentile, it may not be possible to detect a change point if the segment under consideration is too short.", "This is because our use of rank-based test statistics means that the distribution of $T^{WBS}_{p,q}$ is highly discrete when $q-p$ is small.", "For example, suppose we attempt to detect changes in a sequence of length 3, so the observations are $y_1,y_2,y_3$ .", "Regardless of the observed values of the $y_i^{\\prime }s$ , there are only 6 possible sequences that can be obtained once the observations are converted to ranks: (1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2), (3,2,1).", "As such, it is clear that the number of unique values of $\\max _k |T^k_{1,3}|$ must be less than 6 (it will not be exactly equal to 6, since several of these rank sequences will produce identical values of the test statistic).", "Consequently, it will not be possible to choose a threshold $\\gamma $ such that the false positive probability is controlled at level $\\alpha =0.05$ and where it is also possible for $\\max _k |T^k_{1,3}|$ to be greater than this value.", "In general for the Lepage test statistic, it can be shown that the minimum number of observations required for both these conditions to hold is 10.", "As such, when we compute thresholds and subsample sequences, we restrict the minimum length of the sequences considered to be 10.", "Note that this does not mean that we require change points to be separated by at least 10 observations in order to be detectable, since a detected change point in such a sequence of 10 observations will segment it into two sequences of smaller length.", "An alternative to this pre-computing procedure would be to adaptively generate each $\\gamma (\\cdot )$ in real-time during the segmentation procedure using a permutation test, as was done in a CUSUM context by [34].", "A potential advantage of this approach is that it allows the choice of each $\\gamma $ to be sensitive to the previous split points, and would also (e.g.)", "allow the threshold level $\\alpha $ to be changed adaptively, perhaps gradually lowering it after each split in accordance with a Bonferroni-like multiple comparisons procedure.", "While this has some appeal, it does require a significant amount of computational resources which may not be practical.", "As such, we use the pre-generation approach when it comes to assessing performance in Section .", "Algorithm WBS-Lepage: (1) Choose a value for $\\alpha $ on which the $\\gamma $ thresholds are based.", "(2) Initialization: call WBS-LP$(\\mathbf {y},1,n)$ to find all change points in $\\mathbf {y} = (y_1,\\ldots ,y_n)$ .", "(3) Prune the resulting change points as described in Section REF .", "Function WBS-LP$(\\mathbf {y},p,q)$: for each $y_i$ where $p \\le i \\le q$ , replace $y_i$ with its rank $r_i = \\sum _{j=p}^q I(y_i \\ge y_j)$ ; compute $T^{WBS}_{p,q}$ from Equation REF ; let $\\gamma (q-p)$ be the precomputed threshold corresponding to $\\alpha $ .", "If $T^{WBS}_{p,q} > \\gamma (q-p)$ , let $(s_0,e_0) = \\arg \\max _{m} T_{s_m,e_m} $ and $\\tilde{k} = \\arg \\max _k T^{k}_{s_0,e_0}$ , and return $\\bigcup \\left[ \\lbrace \\text{WBS-LP}(\\mathbf {y},p,\\tilde{k})\\rbrace , \\lbrace \\tilde{k}\\rbrace , \\lbrace \\text{WBS-LP}(\\mathbf {y},\\tilde{k}+1,q)\\rbrace \\right]$ ; otherwise return $\\lbrace \\varnothing \\rbrace $ ."
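The Monte Carlo calibration described above can be expressed compactly by reusing the `wbs_statistic` sketch from the previous section. The replication count below is deliberately small for illustration; the paper uses 1,000 null sequences per length and takes the 50th largest value as the estimated 95th percentile. The seed and interval count are assumptions.

```python
# Sketch: distribution-free calibration of gamma(n). Because the statistic is
# rank based, simulating from N(0,1) is as good as any continuous distribution.
import numpy as np

def calibrate_gamma(n, alpha=0.05, reps=200, n_intervals=1000, seed=1):
    rng = np.random.default_rng(seed)
    stats = [wbs_statistic(rng.standard_normal(n), n_intervals, rng=rng)[0]
             for _ in range(reps)]
    return float(np.quantile(stats, 1.0 - alpha))

# Example (slow for large reps): gamma_100 = calibrate_gamma(100)
```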
], [ "Pruning", "A potential issue when using WBS is that, like binary segmentation, the location of each change point is estimated in a greedy manner and is not adjusted based on additional changes points that are found at later stages in the segmentation procedure.", "This can sometimes result in an overestimation of the number of change points if a single change point ends up getting detected twice.", "Suppose that when WBS is used on a sequence, the first change point is estimated to occur at location $\\tilde{k}_1$ .", "The sequence is then split into two segments, and WBS is applied recursively to each.", "Suppose this results in two further change points being found at locations $\\tilde{k}_0$ and $\\tilde{k}_2$ such that $\\tilde{k}_0 < \\tilde{k}_1 < \\tilde{k}_2$ .", "It may be the case that given $\\tilde{k}_0$ and $\\tilde{k}_2$ , the change point at $\\tilde{k}_1$ is not needed, i.e.", "that the observations between $\\tilde{k}_0$ and $\\tilde{k}_2$ are identically distributed.", "This issue was noticed as far back as [22] but hasn't always been taken into account in the subsequent literature.", "We will use a procedure similar to that discussed in [22] to prune the change points found by WBS via a post-processing step.", "Suppose that the WBS-Lepage algorithm finds $\\hat{K}$ change points that have been estimated to occur at locations $\\hat{\\tau }_1 < \\ldots < \\hat{\\tau }_K$ .", "For notational convenience we write $\\hat{\\tau }_0 = 0$ and $\\hat{\\tau }_{K+1} = n$ .", "For each $k \\in 1,\\ldots ,K$ we will make a decision whether $\\hat{\\tau }_k$ should be pruned.", "This is done by performing the following hypothesis test, which tests whether the observations between $\\hat{\\tau }_{k-1}$ and $\\hat{\\tau }_{k+1}$ are identically distributed: $H_0: y_{\\hat{\\tau _{k-1}+1}},\\ldots ,y_{\\hat{\\tau _{k+1}}} \\sim _{i.i.d} F_0$ $H_1: \\exists k: y_{\\hat{\\tau _{k-1}+1}},\\ldots ,y_k \\sim _{i.i.d} F_1,\\quad y_{k+1},\\ldots ,y_{\\hat{\\tau _{k+1}}} \\sim _{i.i.d} F_2, \\quad F_1 \\ne F_2$ This is carried out using the same WBS procedure described above, i.e.", "we compute the maximized Lepage statistic over $M$ randomly chosen subsequences of $y_{\\hat{\\tau _{k-1}+1}},\\ldots ,y_{\\hat{\\tau _{k+1}}}$ to form $T^{WBS}_{\\hat{\\tau }_{k-1}+1, \\hat{\\tau }_{k+1}}$ and compare this to the threshold $\\gamma ( \\hat{\\tau }_{k+1} - \\hat{\\tau }_{k-1})$ .", "If the null hypothesis is not rejected, then we delete $\\hat{\\tau }_k$ from the list of change points.", "Note that since this post-processing step only involves the removal of change points rather than the creation of new ones, it cannot increase the false positive rate of tests used in the WBS procedure.", "The improvement in performance that comes from this pruning step will be studied in Section REF ." 
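A sketch of this pruning pass is given below, again reusing the `wbs_statistic` and threshold pieces from the earlier sketches. Whether each re-test uses the originally detected neighbours or the currently surviving ones is not pinned down by the description above; the version here uses the surviving ones, which is an implementation choice rather than something fixed by the paper.

```python
# Sketch of the post-processing prune: each detected change point is re-tested
# on the stretch between its two neighbours and dropped when that WBS test is
# not significant. `gamma` is the same length-indexed threshold as before.
import numpy as np

def prune(y, change_points, gamma, n_intervals=1000, rng=None):
    y = np.asarray(y)
    kept = sorted(change_points)
    for tau in sorted(change_points):
        i = kept.index(tau)
        lo = 0 if i == 0 else kept[i - 1]
        hi = len(y) if i == len(kept) - 1 else kept[i + 1]
        stat = wbs_statistic(y[lo:hi], n_intervals, rng=rng)[0]
        if stat <= gamma(hi - lo):
            kept.remove(tau)          # the neighbours already explain this stretch
    return kept
```

Because pruning can only delete change points, it cannot increase the false positive rate of the overall procedure, which is why it is applied unconditionally in the experiments.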
], [ "Experiments", "We now investigate the performance of our proposed WBS-Lepage method for detecting location-scale changes.", "We will compare its performance to three other recently proposed methods for nonparametric change detection.", "The first is the divisive partitioning approach from [30] which is implemented in the ecp R package [24].", "This uses a type of binary segmentation to perform change detection in multivariate sequences, but is also applicable in our univariate setting.", "The second method is that of [44] which estimates change points using a penalized cost function approach.", "Specifically, they utilize a nonparametric maximum likelihood estimate, penalized by a BIC-type penalty.", "While their approach is computationally demanding, an extension using the PELT (Pruned Exact Linear Time [25]) algorithm was introduced by [20] which has been implemented in the changepoint.np R package [21] and we use this version.", "The third method we compare to is the PYWR method from [36] which also uses a WBS-type approach although with different test statistics and thresholding than ours.", "We use the R implementation which they provide alongside their paper.", "For our own WBS-Lepage procedure, we choose $\\gamma (\\cdot )$ such that the false positive rate is controlled at the level of $\\alpha =0.05$ , and utilize the pruning step described in Section REF ." ], [ "False Positives", "In many situations it will not be known in advance whether the sequence being studied contains any change points at all.", "In this case, it is desirable for a change point detection algorithm to avoid false positive detections, and to correctly estimate that no change points have occurred.", "Due to the way the thresholds were constructed in Section REF , our WBS algorithm is guaranteed to control the false positive rate at the desired level (such as $\\alpha =0.05$ ).", "The ECP algorithm is also based around hypothesis testing and hence offers a similar guarantee.", "However, approaches such as PELT, which are based only on optimizing a penalized cost function, offer no such guarantees.", "This is not necessarily a flaw, since the guaranteed control of false positives reflects a strongly frequentist attitude to statistical inference which is not universally accepted.", "Nonetheless, a change detection algorithm which is highly likely to find that change points have occurred when in fact they have not will have limited practical utility in some situations.", "To explore this further, we carry out the following simulation: we independently simulated $50,000$ sequences from each of the following three distributions: 1) Normal(0,1), 2) Student-t(3), and 3) Lognormal(1,1/2).", "The latter two are examples of a heavy-tailed and a skewed distribution respectively.", "Each of these $50,000$ sequences consists of 100 independent observations, and does not contain any change points.", "The WBS, ECP and PELT algorithms were then applied to each sequence.", "Since there are no change points in any of the sequences, any detected change points will be false positives.", "For both WBS and ECP we set the false positive rate to be $\\alpha =0.05$ .", "For PELT, we tried a variety of cost functions implemented in the changepoint.np package, specifically the Bayesian Information Criterion (BIC), the Modified BIC (MBIC) and the Strengthened Information Criterion (SIC).", "For PYWR we used the default parameter settings in the authors' code.", "Table REF shows the proportion of sequences in which a change point 
was detected .", "For the WBS-Lepage and ECP procedures, the false positive rate is correctly bounded by $\\alpha =0.05$ regardless of the data distribution, i.e.", "there is less than a $5\\%$ probability of incorrectly flagging that at least one change point has occurred, when the sequence actually contains no change points.", "This reflects their nonparametric nature and allows the false positive rate to be controlled regardless of the sequence distribution.", "While the PYWR algorithm does not provide any guaranteed control of the false positive rate, it does in practice also incur only a low number of false positive detections.", "However the PELT approach suffers from a severely high rate of false positives, with the BIC and SIC penalties incorrectly flagging that has a change has occurred in $99\\%$ of sequences, regardless of the data distribution.", "The MBIC penalty does slightly better and only flags in around $40\\%$ of cases.", "This makes the PELT approach questionable in situations where it reasonable to expect that the sequence may not contain any change points.", "Table: Probability of each change detection method producing a false positive detection in a length 100 sequence of independent observations with no change points, simulated from various distributions." ], [ "Change Detection Performance", "We now explore how our method performs on a number of simulated data sets.", "Specifically, we consider the following four data models which have previously been studied in the change point literature: Figure: Plots showing a typical sequence simulated from each of the 15 data models.", "Each row respectively shows the fms, mix, interval, dhk, and kfe data.", "fms data, previously studied by [15], [14].", "This data consists of 497 observations with $K=6$ change points at locations $\\tau = \\lbrace 39,226,243,300,309,333\\rbrace $ .", "The sequence means in each segment are respectively $0.18,0.08,1.07,-0.53,0.16,-0.69,-0.16$ and the standard deviation $\\sigma =0.3$ is constant.", "mix data, previously studied by [15].", "This consists of 560 observations with $K=13$ change points at locations $\\tau = \\lbrace 11,21,41,61,91,121,161,201,251,301,361,421,491\\rbrace $ .", "The sequence means in each segment are respectively $7,-7,6,-6,5,-5,4,-4,3,-3,2,-2,1,-1$ and the standard deviation $\\sigma =4$ is constant.", "dhk data, previously studied by [11].", "This consists of 1000 observations with $K=9$ change points at locations $\\tau = \\lbrace 100,200,300,400,500,600,700,800,900\\rbrace $ .", "The sequence standard deviations in each segment are respectively $2.5,1,2.5,1,2.5,1,2.5,1,2.5,1$ and the sequence mean $\\mu =0$ is constant.", "kfe data, previously studied by [25].", "This consists of 1000 observations with $K=5$ change points.", "Unlike the other data models, these change points are not fixed, and are randomly generated from a uniform distribution on $[30,970]$ with the constraint that there must be at least 30 observations in each segment.", "The mean of the sequence $\\mu =0$ is constant, and the sequence standard deviations are randomly generated from a Lognormal($0,\\log (10/2)$ ) distribution.", "Note that in the original version of this data studied by [25] there were 10 change points; we have reduced the number to 5 since the nonparametric version of the segmentation task is substantially more challenging than the parametric version which they consider.", "The first two data modes represent shifts in location, while the latter two are shifts in scale.", "We 
also consider a fifth data model which has not been previously studied and is intended to represent a short-duration location change inside a longer sequence: which is precisely the situation in which classic binary segmentation algorithms struggle since there is no obvious location to place a single change point: interval data.", "This data consists of 1000 observations with $K=2$ change points at locations $\\tau = \\lbrace 490,510\\rbrace $ .", "The sequence means in each segment are respectively $0,2,0$ and the standard deviation $\\sigma =1$ is constant.", "For each of the five data models, we will consider the case where the error distribution is 1) Normal, 2) Student-t with 3 degrees of freedom, and 3) Lognormal(1,1/2), giving 15 data models in total.", "The latter two distributions represent heavy-tailed and skewed distributions respectively.", "For each of these 15 types of data, 100 independent sequences were simulated.", "Figure REF shows a typical sequence in each of these 15 data sets.", "To measure performance, we use two metrics.", "The first assesses whether the change detection algorithm can correctly recover the true number of change points.", "The second assesses whether the algorithm can correctly identify their locations.", "For a particular data model, let $K$ denote the true number of change points and $\\hat{K}_i$ denote the estimated number of change points in the $i^{th}$ simulated data set where $i \\in \\lbrace 1,2,\\ldots ,100\\rbrace $ .", "Similarly, let $\\tau = \\lbrace \\tau _1,\\ldots ,\\tau _K\\rbrace $ denote the true location of the change points, and $\\hat{\\tau }_i = \\lbrace \\hat{\\tau }_{i,1},\\ldots ,\\hat{\\tau }_{i,\\hat{K}}\\rbrace $ be their estimated locations in the $i^{th}$ data set.", "The two metrics are: $\\frac{1}{100} \\sum _{i=1}^{100} |K-\\hat{K}_i|$ , i.e.", "the average absolute error in the estimated number of change points $\\frac{1}{100K}\\sum _{i=1}^{100}\\sum _{k=1}^K I(\\tau _k \\in \\hat{\\tau }_i)$ , i.e.", "the proportion of true change points which are estimated to be in the correct location.", "The second metric only counts a change point as being correctly detected if its location is predicted with exact accuracy.", "Since this is fairly challenging for sequences where the changes are not of a large magnitude, we also consider a third metric which counts a change point as being correctly detected if it is estimated to have occurred within 2 observations (inclusive) on either side, i.e.", ": $\\frac{1}{100K}\\sum _{i=1}^{100}\\sum _{k=1}^K I(\\exists \\hat{\\tau }_{i,j} \\in \\hat{\\tau }_i \\text{ such that } |\\tau _k - \\hat{\\tau }_{i,j} | \\le 2)$ , Table: Average absolute discrepancy 1 100∑ i=1 100 |K-K i |\\frac{1}{100} \\sum _{i=1}^{100} |K - K_i| for each change point method computed over 100 realizations of each of the 15 data models.", "Lower numbers indicate better performance.", "The best method for each data model is highlighted in bold text.We compare performance of nonparametric WBS to the ecp, changepoint.np (PELT) and PYWR methods discussed above.", "We use the MBIC version of PELT due to its lower false positive rate discussed above.", "Tables REF , REF and REF show the results of all these methods evaluated over the simulated data.", "We can draw the following conclusions: When it comes to the identifying the correct number of change points, the WBS method performs the best when it comes to scale changes, while ECP is superior for location changes.", "However the extent to which ECP struggles with 
scale changes is notable, with it often failing to detect even a single change point on the dhk data despite it containing 9 change points.", "The PYWR method performs relatively poorly on essentially every data set under all performance metrics.", "The PELT method generally performs well for location changes although it somewhat struggles on the interval data, and consistently overestimates the number of change points.", "This is due to its tendency to incur false positives when faced with a long segment of observations which does not contain any change points.", "When it comes to identifying the correct location of the change points, the ECP method performs very poorly.", "Its performance improves drastically when performance is judged on estimating the change point to within two observations of its true location.", "This suggests that it is not a preferred approach when absolute accuracy is necessary.", "The performance of WBS and PELT is very similar when it comes to identifying the location of change points that correspond to location changes, with WBS being superior for scale changes.", "In summary, these results show a strong performance by the WBS algorithm which tends to be the best at detecting scale shifts, while also performing reasonably well at detecting location shifts.", "While it is not the best method in every situation, it did not perform extremely badly in any situation either.", "The very poor performance of ECP in some of the scale shift settings means that ECP cannot be recommended in these settings, and PYWR generally does not perform well.", "However it should be noted that an advantage of ECP is its ability to also detect changes in multivariate data, for which our WBS approach would not be appropriate.", "The difference between WBS and PELT is less drastic, with both giving similar performance for detecting location changes and WBS being better for scale changes.", "However PELT's tendency to flag false positive detections when there are no change points is a potential handicap in situations where there may not be any changes at all, or where the number of change points is small compared to the length of the sequence, as in the interval data." ], [ "The Impact of Pruning", "In Section REF we mentioned that Wild Binary Segmentation can still suffer from overestimating the number of change points due to its greedy nature, and recommended a pruning strategy to combat this.", "The above results utilized this pruning technique.", "It is potentially interesting to compare the extent to which pruning actually improves performance.", "To this end, Table REF compares the performance of the pruned and unpruned versions of the algorithm, according to the mean absolute discrepancy performance metric.", "It can be seen that pruning results in consistent and noticeable improvements in performance across most of the data sets.", "As such, its use can generally be recommended in practice compared to the unpruned version."
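The three summaries reported in the tables can be computed as below; this is a small evaluation helper written for illustration, with mock inputs in the usage line, not the script that produced the reported numbers.

```python
# Sketch of the performance metrics: mean absolute error in the number of change
# points, exact-location hit rate, and hit rate within two observations.
import numpy as np

def summarize(true_cps, estimated_runs):
    """true_cps: list of true locations; estimated_runs: one list of estimated
    locations per simulated data set."""
    K = len(true_cps)
    abs_err = np.mean([abs(K - len(est)) for est in estimated_runs])
    exact = np.mean([[t in est for t in true_cps] for est in estimated_runs])
    within2 = np.mean([[any(abs(t - e) <= 2 for e in est) for t in true_cps]
                       for est in estimated_runs])
    return abs_err, exact, within2

# Mock usage with true change points {100, 110, 120} and two simulated runs:
print(summarize([100, 110, 120], [[100, 111], [99, 110, 120, 300]]))
```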
], [ "Real Data", "We conclude with a real data example drawn from the field of stylometry, where statistical methods are used to answer literary authorship questions.", "Specifically, we consider the case of Sir Terry Pratchett, a celebrated British author who wrote the Discworld series of fantasy novels, which consists of 41 books written over a 32-year period from 1983 to 2015.", "During 2007, Pratchett was diagnosed with Alzheimer's disease [5], although he continued to write for several years after this diagnosis.", "In [39] the authors developed a multivariate parametric change point model to analyse whether there was a detectable change in Pratchett's writing style following this diagnosis.", "We reanalyze their dataset in a nonparametric manner.", "In stylometry, it is common to represent texts as vectors containing the counts of frequently occurring words.", "Specifically, let $w_1,\\ldots ,w_{200}$ denote the 200 most commonly occurring words across the whole corpus of Discworld books.", "Each of the 41 books can then be written as a vector $\\mathbf {y}_i = (c_{i,1},\\ldots ,c_{i,200})$ where $c_{i,j}$ denotes the number of times that word $w_j$ appeared in book $i$ , with the books arranged in temporal order so that the first Discworld novel corresponds to vector $\\mathbf {y}_1$ and the last corresponds to vector $\\mathbf {y}_{41}$ .", "A 201st element $c_{i,201}$ is also appended to each vector to count the number of words in the book which were not one of the 200 common words, so that the elements of each vector sum to the total number of words in the corresponding book.", "For reference, the list of the 200 most common words in the corpus is shown in Figure REF , reproduced from [39].", "Given this data representation, [39] developed a change point model using the Dirichlet-Multinomial distribution to test for a change point in the sequence of books $\\mathbf {y}_1,\\ldots ,\\mathbf {y}_{41}$ , to investigate whether Alzheimer's may cause a detectable change in writing style that would show up in the usage pattern of these common words.", "This hypothesis is based on a substantial volume of stylometric evidence which has found that literary writing style can be effectively characterized through the use of commonly occurring words [1].", "The change point approach taken in [39] is multivariate and explicitly treats each book as a 201 element vector.", "Since our WBS method is univariate, we will use a lower dimensional representation of this dataset.", "Specifically, we project the 41x201 matrix of word counts down onto the principal components of its associated correlation matrix.", "We found that the first principal component did not reveal much useful information.", "However the second principal component is shown in Figure REF and displays evidence of structural change towards both the beginning and end of the sequence.", "Figure: List of the 200 most common words in the Discworld corpus of 41 novels.", "Figure: The left hand plot shows the 41 Discworld books projected onto the second principal component of the corpus; the right hand plot shows the same data with the two detected change points superimposed as vertical lines.", "We deployed our WBS-Lepage method on this univariate sequence of 41 observations.", "Two change points were detected at locations $i=8$ and $i=35$ and are shown in Figure REF .", "The first change point most likely corresponds to a maturation of writing style from Pratchett's early books into his main body of writing.", "The second change point immediately follows 
the release of his novel `Wintersmith' in the year 2006.", "The next Discworld novel to be written after this change point was titled `Making Money' and released in 2007.", "Since Pratchett's Alzheimer's diagnosis was publicly announced in the year 2007, this provides some evidence that the change in writing style may indeed be connected to the diagnosis.", "These results are consistent with the findings of [39] who detected a similar change point using their Dirichlet-Multinomial approach." ], [ "Conclusion", "The task of detecting multiple change points in a nonparametric setting has been under-explored; however, recent developments have produced methods such as nonparametric maximum likelihood [44] and binary segmentation [30] which are well-suited to this purpose.", "We have presented an alternative approach that combines wild binary segmentation with rank-based hypothesis testing, and which competes extremely well with these existing methods on a variety of simulated data sets.", "Compared to the nonparametric maximum likelihood approach, our method is substantially better for detecting changes in scale, while being almost as effective at detecting changes in location.", "It also allows for the control of the false positive rate, which can be important in situations where the sequence does not contain any change points, or when the number of change points is small compared to the number of observations.", "In such situations, we find that the nonparametric maximum likelihood approach tends to overestimate the number of change points, which may not be desirable.", "A drawback of our method is the amount of computational time required to simulate from the null distribution of the test statistic in order to generate the $\\gamma (n)$ thresholds.", "As such, we have provided an R implementation of our method in the npwbs package which contains our algorithm using these precomputed thresholds.", "Table: Percentage of change points which were estimated to be in their correct location, for each detector, averaged over 100 realizations of each of the 15 data models.", "Higher numbers indicate better performance.", "The best method for each data model is highlighted in bold text.", "Table: Percentage of change points which were estimated to be within 2 observations of their correct location, for each detector, averaged over 100 realizations of each of the 15 data models.", "Higher numbers indicate better performance.", "The best method for each data model is highlighted in bold text.", "Table: Average discrepancy $\\frac{1}{100} \\sum _{i=1}^{100} (K - K_i)$ for the pruned and unpruned versions of the WBS method across each data set.", "Lower numbers indicate better performance.", "The best method for each data model is highlighted in bold text." ] ]
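As a companion to the stylometry case study above, the sketch below outlines the kind of preprocessing described there: standardize the 41x201 matrix of word counts, project it onto the principal components of its correlation matrix, and pass the second component to a change point detector. The file name and the final detector call are placeholders (the published analysis uses the npwbs R package), so this is an outline of the pipeline rather than a reproduction of it.

```python
# Sketch of the preprocessing for the Discworld example. The count matrix is
# assumed to have one row per book (in publication order) and one column per
# word feature, including the final "other words" column.
import numpy as np

def second_pc_scores(counts):
    z = (counts - counts.mean(axis=0)) / counts.std(axis=0)   # standardize columns
    corr = np.corrcoef(counts, rowvar=False)                  # 201 x 201 correlation matrix
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]                         # descending eigenvalues
    return z @ eigvecs[:, order[1]]                           # scores on the second component

# counts = np.loadtxt("discworld_counts.csv", delimiter=",")  # hypothetical file
# pc2 = second_pc_scores(counts)
# change_points = detect_changes(pc2)   # placeholder for a WBS-Lepage style detector
```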
2107.01742
[ [ "Data-Driven Learning of Feedforward Neural Networks with Different\n Activation Functions" ], [ "Abstract This work contributes to the development of a new data-driven method (D-DM) of feedforward neural networks (FNNs) learning.", "This method was proposed recently as a way of improving randomized learning of FNNs by adjusting the network parameters to the target function fluctuations.", "The method employs logistic sigmoid activation functions for hidden nodes.", "In this study, we introduce other activation functions, such as bipolar sigmoid, sine function, saturating linear functions, reLU, and softplus.", "We derive formulas for their parameters, i.e.", "weights and biases.", "In the simulation study, we evaluate the performance of FNN data-driven learning with different activation functions.", "The results indicate that the sigmoid activation functions perform much better than others in the approximation of complex, fluctuated target functions." ], [ "Introduction", "FNNs are widely used as predictive models to fit data distribution.", "They learn using gradient descent methods and ensure a universal approximation property.", "However, gradient-based algorithms suffer from many drawbacks which make the learning process ineffective and time-consuming.", "This is because gradient learning is sensitive to local minima, flat regions, and saddle points of the loss function.", "Moreover, its application is time-consuming for complex target functions, big data, and large FNN architectures.", "Randomized learning was proposed as an alternative to gradient-based learning.", "In this approach, the parameters of the hidden nodes are selected randomly from any interval, and stay fixed.", "Only the output weights are learned.", "The optimization problem in randomized learning becomes convex and can be solved by a standard linear least-squares method [1].", "This leads to very fast training.", "The the universal approximation property is kept when the random parameters are selected from a symmetric interval according to any continuous sampling distribution [2].", "The main problems in randomized learning are [3], [4]: how to select the interval and distribution for the random parameters, and whether the weights and biases should be chosen from the same interval and distribution.", "It was shown in [5] and [6] that the weights and biases of hidden nodes have different functions and should be selected separately.", "The weights decide about the activation function (AF) slopes and should reflect the TS complexity, while the biases decide about the AF shift and should ensure the placement of the most nonlinear fragments of AFs into the input hypercube.", "These fragments are most useful for modeling target function (TF) fluctuations.", "The method proposed in [5] selects the proper interval for the weights based on AF features and TF properties.", "The biases are calculated based on the weights and data scope.", "This approach introduces the AFs into the input hypercube and adjusts the interval for weights to TF complexity.", "In [6], instead of generating the weights, the slope angles of AFs were randomly selected.", "This changed the distribution of weights, which typically is a uniform one.", "This new distribution ensured that the slope angles of AFs were uniformly distributed, which improved results by preventing overfitting, especially for highly nonlinear TFs.", "To improve further FNN randomized learning, in [7], a D-DM was proposed.", "This method introduces the AFs into randomly 
selected regions of the input space and adjusts the AF slopes to the TF slopes in these regions.", "As a result, the AFs mimic the TF locally, and their linear combination approximates smoothly the entire TF.", "This work contributes to the development of data-driven FNN learning by introducing different AFs, i.e.", "bipolar sigmoid, sine function, saturating linear functions, reLU, and softplus.", "For each AF, the formulas for weights and biases are derived.", "The remainder of this paper is structured as follows.", "In Section 2, the framework of D-DM is presented.", "The formulas for hidden nodes parameters for different AFs are derived in Section 3.", "The performance of FNN data-driven learning with different AFs is evaluated in Section 4.", "Finally, Section 5 concludes the work." ], [ "Framework of the Data-Driven FNN Learning", "Let us consider a shallow FNN architecture with $n$ inputs, a single-hidden layer, and a single output.", "AFs of hidden nodes, $h(\\mathbf {x})$ , map nonlinearly input vectors $\\mathbf {x}=[x_1, x_2,..., x_n]^T \\in \\mathbb {R}^n$ into an $m$ -dimensional feature space.", "An output node combines linearly $m$ nonlinear transformations of the inputs.", "The function expressed by this FNN has the form: $\\varphi (\\mathbf {x}) = \\sum _{i=1}^{m}\\beta _ih_i(\\mathbf {x})$ where $\\beta _i$ is the output weight linking the $i$ -th hidden node with the output node.", "Such FNN architecture has a universal approximation property, even when the hidden layer parameters are not trained but generated randomly from the proper distribution [8], [2].", "The output weights $ \\beta = [\\beta _1, \\beta _2, ..., \\beta _m]^T$ can be determined by solving the following linear problem: $\\mathbf {H}\\beta = \\mathbf {Y}$ , where $\\mathbf {H} = [\\mathbf {h}(\\mathbf {x}_1), \\mathbf {h}(\\mathbf {x}_2), ..., \\mathbf {h}(\\mathbf {x}_N)]^T \\in \\mathbb {R}^{N \\times m}$ is the hidden layer output matrix, and $ \\mathbf {Y} = [y_1, y_2, ..., y_N]^T $ is a vector of target outputs.", "The optimal solution for $ \\beta $ is given by: $\\beta = \\mathbf {H}^+\\mathbf {Y}$ where $ \\mathbf {H}^+ $ denotes the Moore–Penrose generalized inverse of matrix $ \\mathbf {H} $ .", "The hidden node parameters, i.e.", "weights $ \\mathbf {a} = [ a_{1}, a_{2}, ..., a_{n}]^T$ and a bias $b$ , control slopes and position of AF in the input space.", "For a sigmoid AF given by the formula: $h(\\mathbf {x}) = \\frac{1}{1 + \\exp \\left(-\\left(\\mathbf {a}^T\\mathbf {x} + b\\right)\\right)}$ weight $a_j$ decides about the sigmoid slope in the $j$ -th direction and bias $b$ decides about the sigmoid shift along a hyperplane containing all $x$ -axes.", "The appropriate selection of the slopes and shifts of all sigmoids determine the fitting accuracy of FNN to the TF.", "To adjust the sigmoids to the local features of the TF, in [7], a D-DM for FNN learning was proposed.", "This method selects an input space region by randomly choosing one of the training points for each sigmoid.", "Then, it places the sigmoid in this region and adjusts the sigmoid slopes to the TF slopes in the neighborhood of the chosen point.", "By combining linearly all the sigmoids randomly placed in the input space, we obtain a fitted surface which reflects the TF shape in different regions.", "The D-DM algorithm, in the first step, selects randomly training point $\\mathbf {x}^*$ .", "Then, sigmoid $S$ is placed in the input space in such a way that one of its inflection points, $P$ , is in $\\mathbf {x}^*$ .", 
"The sigmoid value at the inflection point is $0.5$ : $h(\\mathbf {x}^*) = \\frac{1}{1 + \\exp \\left(-\\left(\\mathbf {a}^T\\mathbf {x}^* + b\\right)\\right)}=0.5$ From this equation we obtain the sigmoid bias as: $b = -\\mathbf {a}^T\\mathbf {x}^*$ The slopes of sigmoid $S$ are adjusted to the TF slopes in $\\mathbf {x}^*$ .", "The TF slopes in $\\mathbf {x}^*$ are estimated by fitting hyperplane $T$ to the neighborhood of $\\mathbf {x}^*$ .", "The neighborhood, $\\Psi (\\mathbf {x}^*)$ , contains point $\\mathbf {x}^*$ and $k$ training points nearest to it.", "Hyperplane $T$ has the form: $y = a_1^{\\prime }x_1 + a_2^{\\prime }x_2+...+a_n^{\\prime }x_n +b^{\\prime }$ where coefficient $a_j^{\\prime }$ expresses a slope of $T$ in the $j$ -th direction.", "We assume that sigmoid $S$ is tangent to hyperplane $T$ in point $\\mathbf {x}^*$ .", "This means that the partial derivatives of $S$ and $T$ in $\\mathbf {x}^*$ are the same.", "Comparing the formulas for partial derivatives of both functions, we obtain an equation for the sigmoid weights (see [7] for details): $a_j = 4a_j^{\\prime }, \\quad j = 1, 2, ..., n$ To generate all the FNN hidden nodes, the D-DM algorithm repeats the procedure described above $m$ times.", "So, for each node it randomly selects training point $\\mathbf {x}^*$ , fits hyperplane $T$ to its neighborhood $\\Psi (\\mathbf {x}^*)$ , calculates weights $a_j$ according to (REF ), and calculates biases $b$ according to (REF ).", "Finally, it calculates hidden layer output matrix $\\mathbf {H}$ , and output weights from (REF ).", "The resulting function, $\\varphi (\\mathbf {x})$ , constructed in line with such data-driven learning, reflects TF fluctuations.", "The D-DM has two hyperparameters: the number of hidden nodes $m$ and neighbourhood size $k$ .", "They control the fitting performance of the model and its bias-variance tradeoff.", "Their optimal values for a given TF should be tuned during cross-validation." 
], [ "Data-Driven FNN Learning with Different Activation Functions", "When we employ other AFs instead of logistic sigmoids, the projection matrix $\\mathbf {H}$ changes in a way which can entail changes in the approximation properties of the model.", "Using other AFs requires the derivation of new formulas for the hidden node parameters in the following ways.", "Bipolar sigmoid $\\operatorname{\\textsc {sigmoid\\_b}}$ .", "Usually the bipolar sigmoid is defined as a hyperbolic tangent function.", "In this study, we define it slightly differently: $h_{sigb}(\\mathbf {x}) = \\frac{2}{1 + \\exp \\left(-\\left(\\mathbf {a}^T\\mathbf {x} + b\\right)\\right)} -1$ D-DM places $\\operatorname{\\textsc {sigmoid\\_b}}$ in the input space in such a way that one of its inflection points is in the randomly selected training point, $\\mathbf {x}^*$ .", "The $\\operatorname{\\textsc {sigmoid\\_b}}$ value at the inflection points is 0, so, $h_{sigb}(\\mathbf {x}^*)=0$ .", "From this equation we obtain the formula for the bias, which is the same as for the unipolar sigmoid ($\\operatorname{\\textsc {sigmoid\\_u}}$ ), (REF ).", "To find weights $a_j$ , we equate the partial derivatives of $\\operatorname{\\textsc {sigmoid\\_b}}$ in $\\mathbf {x}^*$ to the partial derivatives of hyperplane $T$ , (REF ): $\\frac{\\partial h_{sigb}(\\mathbf {x}^*)}{\\partial x_j} = \\frac{1}{2}a_j(1+h_{sigb}(\\mathbf {x}^*))(1-h_{sigb}(\\mathbf {x}^*)) = a_j^{\\prime }$ From this equation, taking into account that $h_{sigb}(\\mathbf {x}^*)=0$ , we obtain: $a_j = 2a_j^{\\prime }, \\quad j = 1, 2, ..., n$ Sine function $\\operatorname{\\textsc {sine}}$ .", "Let us place the $\\operatorname{\\textsc {sine}}$ AF, $h_{sin}(\\mathbf {x})=\\sin (\\mathbf {a}^T\\mathbf {x} + b)$ , in the input space in such a way that it has one of its inflection point in randomly selected training point $\\mathbf {x}^*$ .", "The $\\operatorname{\\textsc {sine}}$ value in the inflection points is 0, so, $h_{sin}(\\mathbf {x}^*)=0$ .", "From this equation we obtain the formula for bias, which is the same as for both sigmoid AFs, (REF ).", "To determine equations for the weights for $\\operatorname{\\textsc {sine}}$ , we equate the partial derivatives of $\\operatorname{\\textsc {sine}}$ in $\\mathbf {x}^*$ to the partial derivatives of hyperplane $T$ , (REF ): $\\frac{\\partial h_{sin}(\\mathbf {x}^*)}{\\partial x_j} = a_j\\cos (\\mathbf {a}^T\\mathbf {x}^* + b) = a_j^{\\prime }$ Taking into account that $\\sin (\\mathbf {a}^T\\mathbf {x}^* + b)=0$ implies $\\cos (\\mathbf {a}^T\\mathbf {x}^* + b)=1$ , from (REF ) we obtain: $a_j = a_j^{\\prime }, \\quad j = 1, 2, ..., n$ Saturating linear unipolar function $\\operatorname{\\textsc {satlin\\_u}}$ .", "This is a linearized version of $\\operatorname{\\textsc {sigmoid\\_u}}$ defined as follows: $h_{satu}(\\mathbf {x}) =\\left\\lbrace \\begin{array}{llll}0 & & \\mathrm {if} & z \\le 0 \\\\z & & \\mathrm {if} & 0 < z < 1\\\\1 & & \\mathrm {if} & z \\ge 1\\end{array}\\right.$ where $z=\\mathbf {a}^T\\mathbf {x} + b$ .", "$\\operatorname{\\textsc {satlin\\_u}}$ is placed in the input space in such a way that it has a value of 0.5 in $\\mathbf {x}^*$ .", "This is analogous to $\\operatorname{\\textsc {sigmoid\\_u}}$ to which $\\operatorname{\\textsc {satlin\\_u}}$ has a similar shape.", "Thus, $\\mathbf {a}^T\\mathbf {x}^* + b=0.5$ .", "From this equation we obtain: $b = 0.5 -\\mathbf {a}^T\\mathbf {x}^*$ We assume that the middle segment of $h_{satu}(\\mathbf {x})$ , $\\mathbf {a}^T\\mathbf {x}+b$ , has the 
same slopes as hyperplane $T$ , thus: $a_j = a_j^{\\prime }, \\quad j = 1, 2, ..., n$ Saturating linear bipolar function $\\operatorname{\\textsc {satlin\\_b}}$ .", "This AF is a linearized version of bipolar sigmoid $\\operatorname{\\textsc {sigmoid\\_b}}$ : $h_{satb}(\\mathbf {x}) =\\left\\lbrace \\begin{array}{llll}-1 & & \\mathrm {if} & z \\le -1 \\\\z & &\\mathrm {if} & -1 < z < 1\\\\1 & &\\mathrm {if} & z \\ge 1\\end{array}\\right.$ where $z=\\mathbf {a}^T\\mathbf {x} + b$ .", "$\\operatorname{\\textsc {satlin\\_b}}$ is placed in the input space in such a way that it has a value of 0 in $\\mathbf {x}^*$ .", "Thus, $\\mathbf {a}^T\\mathbf {x}^* + b=0$ .", "From this equation we obtain the same formula for a bias as for sigmoid AFs, (REF ).", "As with $\\operatorname{\\textsc {satlin\\_u}}$ , we assume that the middle segment of $\\operatorname{\\textsc {satlin\\_b}}$ has the same slopes as hyperplane $T$ .", "Thus, weights $a_j$ are the same as the $T$ coefficients, (REF ).", "Rectified linear unit $\\operatorname{\\textsc {relu}}$ .", "This is an AF commonly used in deep learning.", "It is expressed by: $h_{reLU}(\\mathbf {x}) =\\left\\lbrace \\begin{array}{llll}0 & & \\mathrm {if} & z \\le 0 \\\\z & & \\mathrm {if} & z > 0\\end{array}\\right.$ where $z=\\mathbf {a}^T\\mathbf {x} + b$ .", "$\\operatorname{\\textsc {relu}}$ is composed of two half-hyperplanes: the first being $y = 0$ and the second $y=\\mathbf {a}^T\\mathbf {x} + b$ .", "D-DM places the $\\operatorname{\\textsc {relu}}$ AF in the input space so that the second half-hyperplane coincides with hyperplane $T$ .", "Thus, their coefficients are the same: $b = b^{\\prime }, \\quad a_j = a_j^{\\prime }, \\quad j = 1, 2, ..., n$ Softplus $\\operatorname{\\textsc {softplus}}$ .", "This is a smooth approximation of the $\\operatorname{\\textsc {relu}}$ .", "It is expressed by: $h_{soft}(\\mathbf {x}) = \\ln \\left(1 + \\exp \\left(\\mathbf {a}^T\\mathbf {x} + b\\right)\\right)$ For $\\mathbf {x}=[0,0, ..., 0]$ and $b=0$ , the value of $h_{soft}(\\mathbf {x})=\\ln (2)$ .", "Let us shift this function in such a way that it has the value of $\\ln (2)$ in $\\mathbf {x}^*$ .", "In such a case $\\ln (1 + \\exp (\\mathbf {a}^T\\mathbf {x}^* + b))=\\ln (2)$ .", "From this equation we obtain a formula for $b$ , which is the same as for the sigmoids (REF ).", "Now, let us assume that the slopes of $\\operatorname{\\textsc {softplus}}$ in $\\mathbf {x}^*$ are the same as the slopes of $T$ .", "Equating the partial derivative of both functions we obtain: $\\frac{\\partial h_{soft}(\\mathbf {x}^*)}{\\partial x_j} = \\frac{a_j}{1+\\exp (-(\\mathbf {a}^T\\mathbf {x}^* + b))} = a_j^{\\prime }$ From $\\ln (1 + \\exp (\\mathbf {a}^T\\mathbf {x}^* + b))=\\ln (2)$ we obtain $1+\\exp (\\mathbf {a}^T\\mathbf {x}^* + b)=2$ .", "Substituting this into (REF ), we obtain the weights of hidden nodes with $\\operatorname{\\textsc {softplus}}$ AFs: $a_j = 2a_j^{\\prime }, \\quad j = 1, 2, ..., n$ Table REF details the hidden nodes parameters determined by D-DM for different AFs.", "Note that in all cases, weights $a_j$ reflect hyperplane $T$ coefficients $a_j^{\\prime }$ .", "Biases for all AFs, excluding $\\operatorname{\\textsc {relu}}$ , are expressed using a dot product of the weight vector and $\\mathbf {x}^*$ vector.", "Table: Hidden nodes parameters for different activation functions.Fig.", "REF shows AFs of different types introduced into the input space by D-DM.", "The training points belonging to the neighborhood of $\\mathbf {x}^*$ , $\\Psi 
(\\mathbf {x}^*)$ , are shown as red dots.", "Note that the AFs in all cases have the same slopes in $\\mathbf {x}^*$ as the slope of line $T$ , which estimates the TF slope in $\\mathbf {x}^*$ .", "D-DM introduces $m$ AFs in different regions of the input space.", "Figure: AFs of different types introduced into the input space in 𝐱 * \\mathbf {x}^* by D-DM." ], [ "Simulation Study", "In this section, we report the experimental results over several regression problems in order to compare the fitting properties of D-DM with different AFs, .", "They include an approximation of extremely nonlinear TFs: TF1 $g(x) = \\sin \\left(20\\cdot \\exp x \\right)\\cdot x^2, \\, x \\in [0, 1]$ TF2 $g(x) = 0.2e^{-\\left(10x - 4\\right)^2} + 0.5e^{-\\left(80x - 40\\right)^2} + 0.3e^{-\\left(80x - 20\\right)^2}, \\, x \\in [0, 1]$ .", "TF3 $g(\\mathbf {x}) = \\sum _{j=1}^{n}\\sin \\left(20\\cdot \\exp x_j\\right)\\cdot x_j^2, \\, x_i \\in [0, 1]$ TF4 $g(\\mathbf {x}) = -{\\sum _{i=1}^{n} \\sin (x_i) \\sin ^{20} \\left(\\frac{ix_i^2}{\\pi } \\right)}, \\, x_i \\in [0, \\pi ]$ TF5 $g(\\mathbf {x}) = 418.9829n -{\\sum _{i=1}^{n} x_i \\sin (\\sqrt{|x_i|})}, \\, x_i \\in [-500, 500]$ Both the training and test sets for TF1 and TF2 included 5000 points.", "For the training set, argument $x$ was generated randomly from $U(0,1)$ , and for the test set, it was evenly distributed in $[0, 1]$ .", "The function values were normalized in the range $[0, 1]$ .", "Note that TF1 starts flat, near $x = 0$ , then has increasing fluctuations (see Fig.", "REF ).", "TF2 has two spikes that could be difficult to model with FNN (see Fig.", "REF ).", "TF3-TF5 are multivariate functions.", "We considered these functions with $n=2, 5$ and 10 arguments.", "The sizes of the training and test sets depended on the number of arguments.", "They were 5000 for $n=2$ , 20,000 for $n=5$ , and 50,000 for $n=10$ .", "All arguments for TF3-TF5 were normalized to $[0, 1]$ , and the function values were normalized to $[-1, 1]$ .", "Two-argument functions TF3-TF5 are shown in Fig.", "REF .", "Note that TF3 is a multivariate variant of TF1.", "It combines flat regions with strongly fluctuated regions.", "TF4 expresses flat regions with perpendicular grooves.", "TF5 fluctuates strongly, showing the greatest amplitude at the borders.", "Figure: Target functions TF3-TF5.Fig.", "REF shows the results of TF1 fitting.", "The fitted lines are composed of AFs of different shapes.", "The AFs distributed by D-DM in the input interval (shown by the gray field) are shown in the lower panels.", "FNN included 30 hidden nodes.", "The neighborhood size was 2 ($k=1$ ).", "As you can see from Fig.", "REF , the slopes of the AFs reflect the TF slopes.", "D-DM introduces the steepest fragments of the AFs into the input interval.", "These fragments are the most useful for modeling the TF fluctuations.", "The saturated AF fragments in the input interval are avoided.", "The best fitting results were achieved for both sigmoid AFs.", "$\\operatorname{\\textsc {sine}}$ cannot cope with a TF with variable intensity of fluctuations.", "Neither $\\operatorname{\\textsc {relu}}$ , which yielded the highest fitting error, nor the saturating linear functions are not able to fit smoothly to TF1.", "The smooth counterpart of $\\operatorname{\\textsc {relu}}$ , $\\operatorname{\\textsc {softplus}}$ , improves significantly on the $\\operatorname{\\textsc {relu}}$ fitting results by offering a smooth approximation of TF1.", "Obviously, the results are dependent on the number of hidden 
nodes.", "The left panel of Fig.", "$\\ref {figZb12}$ shows the TF1 fitting error for different numbers of hidden nodes.", "As can be seen from this figure, the sigmoid AFs outperformed all the others.", "Slightly worse results were achieved for $\\operatorname{\\textsc {softplus}}$ , while the highest error was observed for $\\operatorname{\\textsc {relu}}$ .", "Detailed results for each AF, i.e.", "RMSE for the maximal number of hidden nodes shown in the figures, are presented in Table REF .", "The lowest errors, i.e.", "those that are at least 5% lower than the others, are marked in bold in this table.", "Figure: TF1: Results of D-DM fitting for different AFs.Figure: Convergence of FNN for TF1 and TF2.Table: Fitting errors (RMSE).Fig.", "REF shows fitting results for TF2 (120 hidden nodes and $k=1$ ).", "In this case, $\\operatorname{\\textsc {sigmoid\\_u}}$ and $\\operatorname{\\textsc {sigmoid\\_b}}$ provided the best fitting, while $\\operatorname{\\textsc {satlin\\_u}}$ and $\\operatorname{\\textsc {softplus}}$ provided a slightly worse fitting.", "Other AFs could not cope with this TF.", "For them, increasing the number of hidden nodes did not improve results and RMSE remained outside the acceptable level of 0.01 (see right panel of Fig.", "$\\ref {figZb12}$ and Table REF ).", "Figure: TF2: Results of D-DM fitting for different AFs.Fig.", "REF shows the convergence curves of FNN trained using D-DM for two-argument TF3-TF5 ($k=n$ ).", "In all these cases, the sigmoid AFs yielded the best results, while $\\operatorname{\\textsc {relu}}$ , $\\operatorname{\\textsc {sine}}$ and both saturating linear functions yielded the worst results.", "$\\operatorname{\\textsc {softplus}}$ suffered from numerical problems related to the rapid growth of this function and exceeding the limit for double precision numbers.", "So, in Table REF , no results for $\\operatorname{\\textsc {softplus}}$ are given.", "Figure: Convergence of FNN for TF3-TF5, n=2n=2.Figure: Convergence of FNN for TF3-TF5, n=5n=5.Figure: Convergence of FNN for TF3-TF5, n=10n=10.In the case of multidimensional modeling ($n=5$ and 10), results for all AFs were comparable (see Figs.", "REF and REF ; $k=n$ ).", "This could be explained by the change in the TF landscape, which flattens with an increasing number of dimensions.", "When modeling flat TF, the AF shape turned out not to be as important as in the case of TF with strong fluctuations.", "It is obvious from the performed simulations that the approximation properties of FNN trained using D-DM strongly depend on the AF type.", "The most useful for smoothing highly nonlinear TFs with fluctuations turned out to be the sigmoid AFs.", "The piecewise linear functions, i.e.", "$\\operatorname{\\textsc {relu}}$ , $\\operatorname{\\textsc {satlin\\_u}}$ , and $\\operatorname{\\textsc {satlin\\_b}}$ , have problems with modeling smoothly complex TFs.", "Their linear parts do not fit accurately to TF nonlinearities.", "Likewise $\\operatorname{\\textsc {sine}}$ AFs cannot build an acceptable fitted function for the fluctuated TFs.", "The reason for this is probably the periodic nature of $\\operatorname{\\textsc {sine}}$ .", "When $\\operatorname{\\textsc {sine}}$ AF is introduced into the input space to improve the fitted function in region $\\Psi (\\mathbf {x}^*)$ , it can worsen the fitted function in other regions by introducing unwanted fluctuations.", "$\\operatorname{\\textsc {softplus}}$ AF gave slightly worse results than sigmoid AFs for one-argument TFs, but it caused numerical 
problems for multivariate TFs." ], [ "Conclusion", "The data-driven FNN learning described in this study is an alternative to both standard gradient-based learning and randomized learning.", "It allows us to bypass the tedious iterative process of tuning weights based on gradients.", "In the proposed approach, the parameters of hidden nodes are calculated based on the local properties of the TF.", "The AFs, which compose the fitted function, are introduced into the input space in randomly selected regions and their slopes are adjusted to the TF slopes in these regions.", "Consequently, the set of AFs reflects the TF fluctuations in different regions, which leads to accurate approximation.", "Our approach is completely different from typical randomized learning, where the AF parameters are chosen randomly and do not reflect the TF landscape.", "D-DM finds the network parameters quickly, without repeatedly presenting the training set.", "FNN performance strongly depends on AF shape.", "In this work, using a data-driven approach, we derived equations for the hidden node parameters for different AFs.", "As our experimental study has shown, the best FNN performance in smoothing highly nonlinear TFs was achieved by the sigmoid AFs.", "They were able to fit to the TF fluctuations.", "$\\operatorname{\\textsc {relu}}$ AF, which is very popular in deep learning, fared very poorly in fluctuation modeling due to its piecewise linear nature.", "Its smooth counterpart, $\\operatorname{\\textsc {softplus}}$ , produced much better results but suffered from numerical problems related to rapid growth." ] ]
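For reference, the hidden-node parameter rules derived in Section 3 for the different activation functions (and collected in the table referenced above) can be written as a single dispatch function. This is a hedged summary of the formulas in the text; the string labels and the function name are our own choices.

```python
import numpy as np

def hidden_node_params(af, a_prime, b_prime, x_star):
    """Hidden-node weights and bias for the activation functions of Section 3.

    a_prime, b_prime : coefficients of the local hyperplane T fitted around
                       the randomly chosen training point x_star.
    Returns (a, b) for the activation function named by `af`.
    """
    if af == "sigmoid_u":          # unipolar logistic sigmoid
        a = 4.0 * a_prime
        b = -a @ x_star
    elif af == "sigmoid_b":        # bipolar sigmoid
        a = 2.0 * a_prime
        b = -a @ x_star
    elif af == "sine":
        a = 1.0 * a_prime
        b = -a @ x_star
    elif af == "satlin_u":         # saturating linear, unipolar
        a = 1.0 * a_prime
        b = 0.5 - a @ x_star
    elif af == "satlin_b":         # saturating linear, bipolar
        a = 1.0 * a_prime
        b = -a @ x_star
    elif af == "relu":             # second half-hyperplane coincides with T
        a = 1.0 * a_prime
        b = b_prime
    elif af == "softplus":
        a = 2.0 * a_prime
        b = -a @ x_star
    else:
        raise ValueError(f"unknown activation function: {af}")
    return a, b
```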
2107.01702
[ [ "Improved Asymptotic Bounds for Codes Correcting Insertions and Deletions" ], [ "Abstract This paper studies the cardinality of codes correcting insertions and deletions.", "We give an asymptotically improved upper bound on code size.", "The bound is obtained by utilizing the asymmetric property of list decoding for insertions and deletions." ], [ "Introduction", "We study the existence of codes correcting insertions and deletions.", "Specifically, we consider the situation in which the number of insertions/deletions is a constant fraction of the code length.", "We are interested in deriving upper and lower bounds on the cardinality of codes.", "Levenshtein [9] gave asymptotic upper and lower bounds for codes correcting a constant number of insertions and deletions.", "Later, he presented bounds for correcting any number [10].", "Following his work, there have been several studies [7], [8], [4] on the cardinality of codes.", "However, they mainly focused on codes correcting a constant number of insertions/deletions.", "Giving better bounds for codes correcting a constant fraction of insertions/deletions has been elusive.", "See [3] for a recent survey.", "In this work, we present upper and lower bounds on the cardinality of codes correcting a constant fraction of insertions and deletions.", "Our bounds are derived by an elementary argument in coding theory.", "Specifically, we show that any code $C \\subseteq \\Sigma ^n$ of rate $R$ that can correct $\\delta n$ insertions/deletions must satisfy $R \\le (1 - H_q(\\delta ))/(1-\\delta )$ , where $|\\Sigma | = q$ , $\\delta \\in [0,1)$ , and $H_q(\\cdot )$ is the $q$ -ary entropy function.", "The bound improves on the asymptotic upper bounds of [10] for $q=2$ .", "Our lower bound is a restatement of the known bound of [10], [7] with a different proof.", "The upper bound is obtained by a similar argument to the Elias bound in the Hamming metric.", "We use the list size upper bound of [6] for insertions and deletions.", "It is well-known that any $s$ -deletion correcting code can correct any $s_1$ insertions and $s_2$ deletions with $s_1+s_2=s$ .", "This symmetry in the unique decoding regime does not hold in the list decoding problem.", "In [6], it is proved that any code with a large Levenshtein distance enables list decoding such that the decoding radius of insertion is superior to that of deletion.", "We crucially use this property to derive our upper bound.", "We prove our lower bound by a random coding argument with “expurgation,” which is usually used for giving random-coding error exponents [1].", "The same asymptotic bound was given in [10] by the average sphere-size argument of [12] and in [7] by a greedy algorithm." 
], [ "Related Work", "Cullina and Kiyavash [4] improved Levenshtein's upper bound [10] for correcting a constant number of insertions/deletions using a graph-theoretic approach.", "Kulkarni and Kiyavash [8] derived non-asymptotic upper bounds by a linear programming argument for graph matching problems.", "They also gave upper bounds for correcting a constant fraction of insertions/deletion.", "Although their bound improved on the bound of [10], it was not given in the closed-form expressions.", "For extreme cases, where the deletion fraction is either small or high, Guruswami and Wang [5] gave efficient constructions of codes correcting these cases of deletions.", "For the case that the coding rate is nearly zero, Kash et al.", "[7] showed a positive-rate binary code correcting the fraction $p$ of insertions/deletions with $p \\ge 0.1737$ , which improved on the bound of $p \\ge 0.1334$ in [10].", "Bukh, Guruswami, and Haståd [2] significantly improved on the previous results by showing the existence of a positive rate $q$ -ary code with $p \\ge 1 - (2/(q+\\sqrt{q}))$ , which is $p \\ge 0.4142$ for the binary case." ], [ "Code Size Upper Bound", "Let $\\Sigma $ be a finite alphabet.", "The Levenshtein distance ${d}_\\mathrm {L}(x, y)$ between two words $x$ and $y$ is the minimum number of symbol insertions and deletions needed to transform $x$ into $y$ .", "For a code $C \\subseteq \\Sigma ^n$ , its minimum Levenshtein distance is the minimum distance ${d}_\\mathrm {L}(c_1, c_2)$ of every pair of distinct codewords $c_1, c_2 \\in C$ .", "Since any two codewords in $C$ are of the same length, the minimum Levenshtein distance of $C$ takes an even number.", "It is well-known that a code with minimum Levenshtein distance $d$ can correct any $s_1$ insertions and $s_2$ deletions as long as $s_1 + s_2 \\le d/2 - 1$ .", "The Levenshtein distance between two words in $\\Sigma ^n$ takes integer values from 0 to $2n$ .", "Thus, we consider the normalized Levenshtein distance $\\delta = d/2n$ in the analysis.", "The value $\\delta \\in [0,1]$ also represents the fraction of insertions/deletions that can be corrected, since we require $(s_1+s_2)/n \\le (d/2 - 1)/n = \\delta - 1/n$ , which is asymptotically $\\delta $ .", "Let $C \\subseteq \\Sigma ^n$ be a code of minimum Levenshtein distance $d$ with $|\\Sigma |=q$ .", "For a word $x \\in \\Sigma ^n$ , let $I_t(x) \\subseteq \\Sigma ^{n+t}$ be the set of its supersequences of length $n+t$ .", "Namely, $I_t(x)$ consists of words that can be obtained from $x$ by inserting $t$ symbols.", "Similarly, let $D_t(x)$ be the set of words obtained from $x$ by deleting $t$ symbols.", "It is known that the size of $I_t(x)$ does not depend on $x$ and $|I_t(x)| = \\sum _{i=0}^t \\binom{n+t}{i}(q-1)^i \\triangleq I_q(n,t).$ First, we give a simple sphere packing bound.", "We use the fact that the number of supersequences, $|I_t(x)|$ , is independent of $x$ .", "Theorem 1 Let $C \\subseteq \\Sigma ^n$ be a code of minimum Levenshtein distance $d$ and $|\\Sigma |=q$ .", "It holds that $|C| \\le \\frac{q^{n+d/2-1}}{I_q(n,d/2-1)}.$ For each codeword $c \\in C$ , consider the set of supersequences $I_{d/2-1}(c) \\subseteq \\Sigma ^{n+d/2-1}$ .", "Since the code has minimum Levenshtein distance $d$ , each $I_{d/2-1}(c)$ should be disjoint.", "Thus, $ \\sum _{c \\in C} |I_{d/2-1}(c)| \\le q^{n+d/2-1}.$ The statement follows by the equality $|I_{d/2-1}(c)| = I_q(n, d/2-1)$ for any $c \\in C$ .", "Next, we prove our main theorem, which can be seen as an Elias-type upper 
bound on the code size in the Levenshtein metric.", "Theorem 2 Let $C \\subseteq \\Sigma ^n$ be a code of minimum Levenshtein distance $d < 2n$ and $|\\Sigma |=q$ .", "For any $t \\ge 0$ with $t < \\frac{nd}{2n-d},$ it holds that $|C|\\le \\frac{(n+t)d}{(n+t)d - 2nt}\\cdot \\frac{q^{n+t}}{I_q(n,t)}.", "$ By a double counting, it holds that $ \\sum _{y \\in \\Sigma ^{n+t}}|D_t(y)| = \\sum _{x \\in \\Sigma ^n}|I_t(x)| = q^n \\cdot I_q(n,t).$ By considering the intersection with $C$ , $ \\sum _{y \\in \\Sigma ^{n+t}}|D_t(y) \\cap C| = \\sum _{x \\in C}|I_t(x)| = |C| \\cdot I_q(n,t).$ Thus, by choosing $y \\in \\Sigma ^{n+t}$ uniformly at random, $\\mathbb {E}_{y}[ |D_t(y) \\cap C|] & = \\frac{1}{q^{n+t}} \\sum _{y \\in \\Sigma ^{n+t}} |D_t(y) \\cap C|= \\frac{|C|\\cdot I_q(n,t)}{q^{n+t}}.$ The averaging argument implies that there exists $y \\in \\Sigma ^{n+t}$ such that $|D_t(y) \\cap C| \\ge \\frac{|C|\\cdot I_q(n,t)}{q^{n+t}}.$ We have the following lemma.", "Lemma 1 For any non-negative integer $t$ with $t < nd/(2n-d)$ and $y \\in \\Sigma ^{n+t}$ , it holds that $|D_t(y) \\cap C| \\le \\frac{(n+t)d}{(n+t)d - 2nt}.$ Note that $|D_t(y) \\cap C|$ can be seen as a list size when list decoding of $C$ is applied to $y$ , where $y$ is a received word after $t$ insertions.", "Thus, the statement follows from [6] by setting $t_I= t$ and $t_ 0$ .", "By combining (REF ) and Lemma REF , the statement follows.", "We analyze asymptotics of Theorems REF and REF .", "For a code $C \\subseteq \\Sigma ^n$ of distance $d$ , let $\\delta = d/2n$ and $\\gamma = t/n$ for $t \\ge 0$ .", "Let $\\mathrm {Vol}_q(n, \\ell )$ be the volume of the Hamming ball of radius $\\ell $ in $\\mathbb {F}_q^n$ .", "Namely, $\\mathrm {Vol}_q(n,\\ell ) = \\sum _{i=0}^\\ell \\binom{n}{i}(q-1)^i$ .", "It is well-known (cf.", "[11]) that, for $0 \\le \\ell \\le n$ , $\\mathrm {Vol}_q(n, \\ell ) \\ge \\frac{1}{n+1}\\cdot q^{nH_q(\\ell /n)},$ where $H_q(x) = -x \\log _qx - (1-x)\\log _q(1-x)+x\\log _q(q-1)$ .", "Since $I_q(n, t) = \\mathrm {Vol}_q(n+t, t)$ , $\\frac{1}{n} \\cdot \\log _q I_q(n,t) \\ge (1+\\gamma )H_q\\left(\\frac{\\gamma }{1+\\gamma }\\right) - \\frac{\\log _q((1+\\gamma )n+1)}{n}.$ Regarding Theorem REF , it holds that $ \\frac{1}{n} \\cdot \\log _q I_q(n, d/2-1) \\ge (1+\\delta ) \\cdot H_q\\left(\\frac{\\delta }{1+\\delta } - o(1)\\right) - o(1).", "$ By Theorem REF , the rate $R$ of $C$ is bounded above by $R = \\frac{\\log _q |C|}{n} & \\le (1+\\delta )\\left( 1 - H_q\\left(\\frac{\\delta }{1+\\delta } - o(1) \\right)\\right) + o(1).$ Since the entropy function $H_q(x)$ is continuous, we have the following corollary.", "Corollary 1 For every code $C \\subseteq \\Sigma ^n$ of rate $R$ and minimum Levenshtein distance $2\\delta n$ , $ R \\le (1+\\delta )\\left( 1 - H_q\\left(\\frac{\\delta }{1+\\delta }\\right)\\right) + o(1).$ Regarding Theorem REF , condition (REF ) can be rewritten as $\\gamma < \\frac{\\delta }{1-\\delta }.$ Let $\\gamma = \\delta /(1-\\delta ) - 1/n$ .", "The bound (REF ) can be rewritten as $|C| & \\le \\frac{(1+\\gamma )\\delta }{(1+\\gamma )\\delta - \\gamma }\\cdot \\frac{q^{(1+\\gamma )n}}{I_q(n,t)}\\\\& = \\frac{\\delta /(1-\\delta ) - \\delta /n}{(1-\\delta )/n} \\cdot \\frac{q^{(1+\\gamma )n}}{I_q(n,t)}\\\\& = q^{(1+\\gamma )n\\left(1 - H_q\\left(\\frac{\\gamma }{1+\\gamma }\\right)+o(1)\\right)}.$ Since $ \\frac{\\gamma }{1+\\gamma } = \\frac{\\delta /(1-\\delta )-1/n}{1/(1-\\delta )-1/n} = \\delta - \\frac{(1-\\delta )^2}{n-(1-\\delta )},$ the rate $R$ of $C$ is bounded above by $R 
= \\frac{\\log _q |C|}{n} & \\le (1+\\gamma ) \\left(1 - H_q\\left(\\frac{\\gamma }{1+\\gamma }\\right)\\right) + o(1)\\\\& \\le \\frac{1}{1-\\delta }\\left(1 - H_q\\left(\\delta - o(1)\\right)\\right) + o(1).$ We obtain the following corollary.", "Corollary 2 For every code $C \\subseteq \\Sigma ^n$ of rate $R$ and minimum Levenshtein distance $2\\delta n$ , $ R \\le \\frac{1}{1-\\delta } \\left( 1 - H_q(\\delta )\\right) + o(1).$" ], [ "Bounds from Hamming-Metric Bounds", "For a code $C \\subseteq \\Sigma ^n$ , let $d_h$ be the minimum Hamming distance of $C$ .", "It is well-known that the minimum Levenshtein distance $d$ of $C$ satisfies $d \\le 2d_h$ .", "Thus, the code size upper bound in the Hamming metric gives an upper bound in the Levenshtein metric.", "More specifically, if any code $C$ satisfies $|C| \\le f(q,n,d_h)$ , then $|C| \\le f(q,n, d/2)$ , where $q = |\\Sigma |$ and $f$ is a function monotonically decreasing on the third argument.", "Let $\\delta = d/2n$ and $\\delta _h = d_h/n$ .", "Similarly, if we have an asymptotic bound on the coding rate $R \\le g(q,\\delta _h)$ , then we have $R \\le g(q,\\delta )$ , where $g$ is a monotonically decreasing function on the second argument.", "We have the following upper bounds on the code size from the literature.", "Proposition 1 For every code $C \\subseteq \\Sigma ^n$ of rate $R$ and minimum Levenshtein distance $2\\delta n$ , it holds that $R & \\le 1 - H_q\\left( \\theta - \\sqrt{\\theta (\\theta - \\delta )}\\right) + o(1); & \\text{(Elias bound)}\\\\R & \\le H_q\\left( \\frac{1}{q}\\left( q - 1 - (q-2)\\delta - 2\\sqrt{\\delta (1-\\delta )(q-1)}\\right)\\right) + o(1) & \\text{(MRRW bound)}$ for $0 \\le \\delta < \\theta $ , where $q = |\\Sigma |$ and $\\theta = 1 - q^{-1}$ ." ], [ "Code Size Lower Bound", "Let $A_q(n,d)$ be the maximum size of code $C \\subseteq \\Sigma ^n$ of minimum Levenshtein distance $d$ , where $|\\Sigma | = q$ .", "We can assume that $d$ is even.", "We give a lower bound on $A_q(n,d)$ via a random coding with expurgation.", "The longest common subsequence between words $x$ and $y$ is the longest word $z$ which is a subsequence of both $x$ and $y$ .", "We denote by $\\mathrm {LCS}(x,y)$ the length of the longest common subsequence between $x$ and $y$ .", "It holds that ${d}_\\mathrm {L}(x,y) = |x| + |y| - 2\\cdot \\mathrm {LCS}(x,y)$ .", "We will use the property that if ${d}_\\mathrm {L}(x, y) < d$ for $x, y \\in C$ , then $\\mathrm {LCS}(x,y) \\ge n-d/2+1$ .", "Theorem 3 $A_q(n,d) \\ge \\left\\lfloor \\frac{q^{n+d/2-1}}{2\\cdot I_q(n - d/2 + 1,d/2-1)^2} \\right\\rfloor .$ Consider a $q$ -ary random code $C \\subseteq \\Sigma ^n$ with $|C| = 2M$ such that each codeword is chosen uniformly at random from $\\Sigma ^n$ , where $M$ will be determined later.", "Let $C = \\lbrace c_1, \\dots , c_{2M}\\rbrace $ .", "Fix a word $z \\in \\Sigma ^{n-t}$ with $0 \\le t \\le n$ .", "The probability that a codeword $x \\in C$ contains $z$ as a subsequence is $\\Pr [ x \\in I_t(z)] = \\frac{I_q(n-t,t)}{q^n}.$ Thus, the probability that codewords $c_i, c_j \\in C$ with $i \\ne j$ contain $z$ as a common subsequence is $I_q(n-t,t)^2 / q^{2n}$ .", "For any distinct $i, j \\in \\lbrace 1, \\dots , 2M\\rbrace $ , it holds that $\\Pr [{d}_\\mathrm {L}(c_i,c_j) < d] &= \\Pr [ \\mathrm {LCS}(c_i,c_j) \\ge n-d/2+1] \\\\& = \\Pr [ \\exists z \\in \\Sigma ^{n-d/2+1}, c_i \\in I_{d/2-1}(z) \\wedge c_j \\in I_{d/2-1}(z)]\\\\& \\le q^{n-d/2+1} \\cdot \\frac{I_q(n-d/2+1,d/2-1)^2}{q^{2n}} = 
\\frac{I_q(n-d/2+1,d/2-1)^2}{q^{n+d/2-1}}.$ Thus, the expected number of pairs of codewords in $C$ that are within distance less than $d$ is at most $\\binom{2M}{2} \\cdot \\frac{I_q(n-d/2+1,d/2-1)^2}{q^{n+d/2-1}}.$ By choosing $M = \\lfloor (1/2) q^{n+d/2-1}/I_q(n-d/2+1,d/2-1)^2 \\rfloor $ , (REF ) is bounded above by $M$ .", "This implies that there exists a code of size $2M$ that has at most $M$ pairs of codewords that are within distance less than $d$ .", "We can remove one codeword from each pair so that the resulting code has distance at least $d$ .", "This process can be done by removing at most $M$ codewords.", "Hence, there exists a code of size $M$ whose minimum Levenshtein distance is at least $d$ .", "Theorem REF implies the following asymptotic bound, which was presented in [10], [7].", "Corollary 3 There exists a code $C \\subseteq \\Sigma ^n$ of minimum Levenshtein distance $2\\delta n$ such that $\\frac{\\log _q |C|}{n} \\ge 1 + \\delta - 2H_q(\\delta ) - o(1).$" ], [ "Comparison", "Figures REF and REF show asymptotic bounds for $q=2$ and $q=4$ , respectively.", "Corollary REF gives the best upper bounds in both cases.", "For larger $q$ , Corollary REF is inferior to the MRRW bound, especially for large $\\delta $ .", "As we can see, there are large gaps between the upper and lower bounds.", "Note that there exists a binary code of rate $R$ that achieves $\\delta \\ge 0.4142$ for sufficiently small $R >0$  [2].", "It may indicate that improving the lower bound is necessary for closing the gaps.", "The upper bounds on the cardinality were presented in [10], [8].", "Their asymptotic upper bounds were not given in the closed-form.", "Hence it is not easy to make a clear comparison.", "By comparing with the plotted bound of [8], we can see that Corollary REF gives tighter bounds on $\\delta \\ge 0.1$ for $q=2$ and $\\delta \\ge 0.2$ for $q=4$ ." ], [ "Acknowledgment", "This work was supported in part by JSPS Grants-in-Aid for Scientific Research Number 18K11159." ] ]
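The asymptotic bounds of Corollaries 1-3 are straightforward to evaluate numerically, which is essentially what the Comparison section reports; the sketch below reproduces the computation for the binary case (the plotting shown in the figures is omitted, and the helper name Hq is ours).

```python
import numpy as np

def Hq(x, q=2):
    """q-ary entropy function H_q(x)."""
    if x == 0.0 or x == 1.0:
        return 0.0
    return (-x * np.log(x) - (1 - x) * np.log(1 - x) + x * np.log(q - 1)) / np.log(q)

q = 2
for delta in (0.05, 0.1, 0.2, 0.3, 0.4):
    cor1 = (1 + delta) * (1 - Hq(delta / (1 + delta), q))  # Corollary 1 (sphere packing)
    cor2 = (1 - Hq(delta, q)) / (1 - delta)                # Corollary 2 (Elias-type)
    cor3 = max(0.0, 1 + delta - 2 * Hq(delta, q))          # Corollary 3 (lower bound)
    print(f"delta={delta:.2f}  upper: {cor1:.3f}, {cor2:.3f}  lower: {cor3:.3f}")
```

For these values the Corollary 2 bound lies below the Corollary 1 bound, consistent with the comparison above.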
2107.01785
[ [ "Spin-gap formation due to spin-Peierls instability in\n $\\pi$-orbital-ordered NaO$_2$" ], [ "Abstract We have investigated the low-temperature magnetism of sodium superoxide (NaO$_{2}$), in which spin, orbital, and lattice degrees of freedom are closely entangled.", "The magnetic susceptibility shows anomalies at $T_{1}=220$ K and $T_{2}=190$ K, which correspond well to the structural phase transition temperatures, and a sudden decrease below $T_{3}=34$ K. At 4.2 K, the magnetization shows a clear stepwise anomaly around 30 T with a large hysteresis.", "In addition, the muon spin relaxation experiments indicate no magnetic phase transition down to $T=0.3$ K. The inelastic neutron scattering spectrum exhibits magnetic excitation with a finite energy gap.", "These results confirm that the ground state of NaO$_{2}$ is a spin-singlet state.", "To understand this ground state in NaO$_{2}$, we performed Raman scattering experiments.", "All the Raman-active libration modes expected for the marcasite phase below $T_{2}$ are observed.", "Furthermore, we find that several new peaks appear below $T_{3}$.", "This directly evidences the low crystal symmetry, namely, the presence of the phase transition at $T_{3}$.", "We conclude the singlet-ground state of NaO$_{2}$ due to the spin-Peierls instability." ], [ "Sample synthesis and experimental methods", "We synthesized NaO$_2$ powder samples by the solution method using liquid an ammonia NH$_3$ and methyl amine CH$_3$ NH$_2$ .", "In a Ar–filled glove box (O$_2$ and H$_2$ O $< 0.1$ ppm), alkali metal was placed in a reaction cell (hyper glass cylinder produced by Taiatsu Techno Co. Ltd., ), and the cell was dynamically pumped down to 10$^{-2}$ Pa.", "The reaction cell was cooled with liquid N$_2$ to condense the gas mixture of NH$_3$ and CH$_3$ NH$_2$ .", "After the reaction cell was filled with liquid phase of NH$_3$ and CH$_3$ NH$_2$ (typically, $\\sim 1$ ml), O$_2$ gas was put in at a constant pressure of $\\sim 0.1$ MPa.", "The solution was kept at $-30 ^\\circ $ C. 
The reaction can be recognized as complete when the solution becomes colorless and the product precipitated.", "After the reaction, we removed the liquid NH$_3$ and CH$_3$ NH$_2$ by dynamically pumping the glass tube, and obtained NaO$_2$ powder.", "The color of NaO$_2$ powder is dark yellow.", "Because the NaO$_2$ sample is very sensitive to air, the samples must be handled in the Ar–filled glove box.", "The x–ray powder diffraction (xrd) patterns of the samples were measured with synchrotron radiation at BL–8A and 8B of KEK–PF (wave length $\\lambda $ = 0.99917 Å).", "The sample was put in a capillary.", "The He–flow and closed cycle cryostats were used for the low temperature measurements.", "Rietveld refinement was performed to obtain the structural parameters using the GSAS II package [1].", "The final weighted $R$ –factor, $R_{\\rm wp}$ , for the room temperature (RT) structure was converged to 4.36 %, indicating a good fit to the experimental data.", "To refine the RT structure using the Rietveld method, we put Cl atom instead of O$_2$ molecule in the unit cell because of the orientational disorder of O$_2$ molecules.", "The lattice parameter of NaO$_2$ is estimated to be $a$ = 5.506 Å, which is consistent with the literature [2].", "Very small amount of impurity phase was found, but was not specified.", "The x–ray diffraction for the single crystal were also measured with synchrotron radiation at BL–8B of KEK–PF in order to investigate the structural distortion below $T_{3}$ .", "The magnetization, $M$ , was measured using a SQUID magnetometer (MPMS–R2 and MPMS3, Quantum Design Co. Ltd.) in the temperature region $> 2$ K. The low–temperature Curie–like behavior strongly depends on the sample batch.", "Although large Curie–tail easily hide the intrinsic low–temperature magnetic properties of NaO$_{2}$ , the origin of magnetic contamination had not been specified from the xrd.", "High magnetic field magnetization was measured in pulsed magnetic fields up to 60 T below 4.2 K at ISSP.", "Van Vleck paramagnetic contribution was not taken into consideration because we could not estimate it either from a temperature independent term in the magnetic susceptibility experiments nor a slope above the saturation magnetization in the high magnetic field magnetization experiments.", "The muon spin relaxation ($\\mu $ SR) measurements were performed at the DOLLY spectrometer at Paul Scherrer Institut (PSI) in Switzerland and at the ARGUS spectrometer (RIKEN–RAL) in the UK.", "For $\\mu $ SR measurements, about 200 mg NaO$_2$ sample was packed into the plastic bag to prevent sample degradation.", "Zero–field (ZF) $\\mu $ SR measurements were carried out down to 0.3 K using Heliox–VT system (Oxford Instruments, Co. 
Ltd.,).", "The inelastic neutron scattering (ins) experiments were performed using high–resolution chopper spectrometer at J–PARC.", "The incident neutron energy was $30.69$ meV and the energy resolution was $2.4$ meV.", "The powder sample was put in the Aluminum cell.", "The background contribution from the sample cell was measured independently and can be subtracted from the measurements (see section ).", "The Raman scattering experiments were performed using a triple–axis monochrometer (JASCO, NR-1800).", "The CCD detector (Princeton Instruments, LN/CCD–1100PB) was used.", "The wave–length of the Laser was $\\lambda = 564.1$ nm.", "The samples were put in the homemade Aluminum cell attached optical windows.", "The GM cryocooler (SHI, SRDK–2015) was used to cool the samples down to 4 K." ], [ "Investigation of crystal structure", "NaO$_{2}$ undergoes several successive phase transitions as discussed in the text.", "Figure REF shows the xrd patterns and Rietveld refinement results at (a) room temperature (RT), (b) 220 K and (c) 100 K. All phases can be analyzed using the GSAS II package.", "The right side of Fig.", "REF shows schematic figures of the unit cells of phase I, II and III.", "Tables REF , REF and REF summarize the structural parameters at these temperatures.", "Figure: xrd patterns and results of Rietveld refinement at (a) RT, (b) 220 K and (c) 100 K.The blue dots denote the observed data, and the red line denotes the calculated results.The green line denotes the difference between the observed and calculated data.The blue vertical bars indicate the candidate position for the space group symmetry of each phase.The right figures show the unit cells of the phase I, II and III.The yellow circle and the dumbbell indicate Na atom and O 2 _{2} molecule.The O$_{2}$ molecule is octahedrally surrounded by Na atoms in all phases.", "The phase I has a cubic NaCl–type structure (SG: $Fm$$m$ ).", "Because of the cubic symmetry, O$_2$ molecular axis is not fixed along a certain direction, i.e., it has a orientational disorder.", "The phase II has a cubic pyrite–type structure (SG: $Pa$ ).", "As shown in Fig.", "REF (a), in the phase II, the O$_{2}$ molecular axis is aligned along one of four equivalent [111]–directions of the octahedron, i.e., [111], [11], [11], and [1].", "Figures REF (b) shows the (010) and (020) plane for the crystal structure of the phase II, where the u and d denote the upward and downward positional shift of oxygen atom against the $ac$ –plane.", "In this phase, the adjacent O$_{2}$ molecules arrange their molecular axes so as to avoid each other, i.e., coherent antiferro–like arrangement of the molecular axes.", "The phase III has a cubic marcasite–type structure (SG: $Pnnm$ ).", "In the phase III, the octahedron loses the three–fold symmetry and the molecular axis is slightly tilted from the [111]–direction of the octahedron (see the text).", "Na–Na bond lengths are changed to be not equivalent (3.39 Å(blue), 3.89 Å(green) and 4.31 Å(red)).", "Neighboring O$_{2}$ molecules within the $ac$ –plane arrange their molecular axes so as to be parallel to each other, namely, ferro–like alignments of nearest–neighboring molecular axes are given.", "Figure: (left) Unit cell of the phase II of NaO 2 _{2}.O 2 _{2} molecule is octahedrally surrounded by Na atoms.", "(right) Unit cell in the (acac)–plane for the phase II.Table: Structural parameters obtained from Rietveld analysis at 300 K in the phase I.The weighted error R wp R_{\\rm wp}, Goodness Of Fitting (GOF), lattice 
constant aa, atomic coordinates (xx, yy, zz),and isotropic atomic displacement parameter U iso U_{\\rm iso} are shown.In the Rietveld analysis, Cl atom was placed in site of the O 2 _{2} molecule.Table: Structural parameters obtained from Rietveld analysis at 220 K in the phase II.Table: Structural parameters obtained from Rietveld analysis at 100 K in the phase III." ], [ "Investigation of magnetic exchange interaction", "Figure REF shows the temperature dependence of $\\chi (T)$ , in which the low–temperature Curie–contribution is subtracted.", "To evaluate $J$ from the $\\chi (T)$ in the phase III, we used the so–called Bonner–Fisher (BF) model [3] and a two–dimensional (2DL) model with weak inter–chain interaction [4].", "In Fig.", "REF , $\\chi (T)-\\chi _{\\rm C}$ and several curves for two models are plotted.", "Using the fixed Curie constant, we tried to fit $\\chi (T)$ by these models with different exchange constants, but were unable to reproduce the experiment." ], [ "Details for neutron inelastic scattering measurements", "Figure REF shows the inelastic neutron scattering (ins) spectrum measured with $E_i = 30.69$ meV at $T = 2.70$ K. Figure REF (a) shows the ins spectrum for the NaO$_{2}$ sample in a sample cell made of Aluminum.", "Figure REF (b) shows the ins spectrum measured only for the Al–cell under the same condition, i.e., background spectrum.", "The INS spectrum obtained by subtracting (b) from (a) is shown in Fig.", "REF (c), which corresponds to the net spectrum for the NaO$_{2}$ sample.", "Figure REF shows the ins spectra for the NaO$_{2}$ sample at $T=$ 2.70, 6.12, 10.2, 15.0, 20.1, 24.1, 29.8, 37.3, 46.9 and 56.1 K, which were obtained by the same method at $T=2.70$ K. In order to reveal the change in the intensity of the excitation around $Q^{-1} \\sim 1$ meV as a function of temperature, the spectra between 0.5 Å$^{-1}$ and 1.5 Å$^{-1}$ are integrated, which is displayed in the text.", "Figure: (a) Inelastic neutron scattering (ins) spectrum for the NaO 2 _{2} sample in an Al–cell measuredwith E i =30.69E_i = 30.69 meV at T=2.70T=2.70 K.(b) ins spectrum for the Al–cell only, i.e., background spectrum (Empty can).", "(c) ins spectrum subtracted (b) from (a), which corresponds to the ins spectrum for the NaO 2 _{2} sample.Figure: ins spectra for the NaO 2 _{2} sample measured at T=T= 2.70, 6.12, 10.2, 15.0, 20.1, 24.1, 29.8, 37.3, 46.9 and 56.1 K." 
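The background subtraction and Q-window integration described above amount to a couple of array operations; the brief sketch below is only illustrative, and the array names and shapes are assumptions rather than the actual analysis code.

```python
import numpy as np

def net_spectrum(sample_counts, empty_can_counts):
    """Subtract the empty-can (Al-cell) background from the sample run;
    both arrays are assumed to have shape (n_energy, n_Q)."""
    return sample_counts - empty_can_counts

def q_integrated_intensity(spectrum, q_values, q_min=0.5, q_max=1.5):
    """Integrate the net INS intensity over 0.5 <= Q <= 1.5 A^-1 for each
    energy-transfer bin, mirroring the procedure described in the text."""
    mask = (q_values >= q_min) & (q_values <= q_max)
    return spectrum[:, mask].sum(axis=1)
```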
], [ "Estimation of upper limit of a distortion along the 1D chain", "The observation of the superlattice reflection in the diffraction measurements should allow us to conclude the SP ground state in NaO$_{2}$ .", "Unfortunately, we could not experience the direct evidence on the structural change below $T_{3}$ by the powder xrd, single–crystal xrd and elastic neutron scattering measurements.", "Thus, we try to estimate the upper limit of a distortion along the $c$ –axis, where the strong AF interaction would be present between the nearest–neighbor O$_2$ s, by taking an experimental background systematic error into consideration in the elastic neutron scattering experiments.", "The simple dimerization model along the $c$ –axis provides that the superlattice reflections should be observed at the index with $c^*/2$ .", "The structure factor calculation leads that the intensity ratio between the (110) and (001/2) reflections is proportional to $(\\delta c / c)^2$ if $\\delta c / c \\ll 1$ , where the $c$ and $\\delta c$ describe the length of the $c$ –axis and the distortion along the $c$ –axis, respectively.", "If the dimerization along the chain is assumed to be 2%, i.e., $\\delta c / c = 0.02$ , the intensity ratio between the (110) and (001/2) reflection is calculated to be 0.028.", "If this is the case, we obtain the following relation; $I (001/2) &=& \\frac{0.028}{(0.02)^2} \\left(\\frac{\\delta c}{c}\\right)^2 I(110) \\\\dI &>& I (00 1/2)$ From the elastic neutron scattering experiments on NaO$_2$ , the integrated intensity of (110) reflection is estimated to be 7.35 (a.u.)", "and the background systematic error is estimated to be $dI=0.0863$ (a.u.).", "When we put these values into eq.", "(REF ), the distortion, $\\delta c/c$ , was calculated to be less than 0.013.", "If this is the case, the distortion of the $c$ –axis could not been observed." ] ]
2107.01783
[ [ "A note on \"Existence and uniqueness of coexistence states for an\n elliptic system coupled in the linear part\", by Hei Li-Jun, Nonlinear Anal.\n Real World Appl. 5, 2004" ], [ "Abstract In this short paper I report on a paper published in Nonlinear Analysis: Real World Applications in 2004.", "There is a major mistake early in that paper which makes most of its claims false.", "The class of reaction-diffusion systems considered in the paper has been the object of a renewed investigation in the past few years, by myself and others, and recent discoveries provide explicit counterexamples ." ], [ "Introduction", "The paper “Existence and uniqueness of coexistence states for an elliptic system coupled in the linear part” by Hei Li-jun [8] studies the following class of elliptic systems: ${\\left\\lbrace \\begin{array}{ll}-d_i\\Delta u_i=\\sum _{j=1}^n a_{ij}(x)u_j - b_i(\\mathbf {u}) & \\text{in }\\Omega , \\\\\\partial u_i/\\partial \\nu =0 & \\text{on }\\partial \\Omega ,\\end{array}\\right.", "}\\quad i\\in \\lbrace 1,2,\\dots ,n\\rbrace ,$ where $\\Omega $ is a nonempty bounded smooth connected open set in some Euclidean space, $\\partial /\\partial \\nu $ denotes the derivative in the direction of the outward normal at some point on $\\partial \\Omega $ , $n\\in \\mathbb {N}$ , $d_1,d_2,\\dots ,d_n>0$ , the matrix $\\mathbf {A}=(a_{ij})_{1\\le i,j\\le n}$ is essentially positive (namely, with positive off-diagonal entries), $\\mathbf {u}=(u_1,\\dots ,u_n)^{\\textup {T}}$ , and the $\\mathcal {C}^1$ vector field $\\mathbf {b}=(b_1,\\dots ,b_n)^{\\textup {T}}$ satisfies the following assumptions, for each $1\\le i\\le n$ : $b_i(\\mathbf {u})$ is nondecreasing, $b_i(\\mathbf {0})=0$ , $\\partial b_i/\\partial u_j(\\mathbf {0})=0$ , for each $1\\le j\\le n$ ; $b_i(\\mathbf {u})/u_i$ is increasing and $\\lim _{u_i\\rightarrow +\\infty }b_i(\\mathbf {u})/u_i=+\\infty $ provided $u_j>0$ for each $1\\le j\\le n$ .", "As explained in [8], a typical example of vector field $\\mathbf {b}$ is the Lotka–Volterra competition term: $b_i(\\mathbf {u})=u_i\\sum _{j=1}^n c_{ij} u_j$ with the matrix $\\mathbf {C}=(c_{ij})_{1\\le i,j\\le n}$ positive entry-wise.", "In order to fix the ideas, if $n=2$ , then the system with Lotka–Volterra-type $\\mathbf {b}$ has the following more familiar form: ${\\left\\lbrace \\begin{array}{ll}-d_1\\Delta u_1=a_{11}(x)u_1+a_{12}(x)u_2-u_1(c_{11}u_1+c_{12}u_2), \\\\-d_2\\Delta u_2=a_{21}(x)u_1+a_{22}(x)u_2-u_2(c_{21}u_1+c_{22}u_2).\\end{array}\\right.", "}$ Such systems are referred to as mutation–competition–diffusion systems or stage-structured systems, depending on the underlying biological model.", "They have been studied a lot in the past decades, with a renewed interest recently; refer for instance to [5], [9], [2], [3], [7], [1] and references therein.", "After a brief reminder on linear cooperative systems in Section 2, Section 3 of [8] studies the nonlinear system (REF ) and makes the following claims.", "There exists a positive solution of (REF ) if and only if the principal eigenvalue $\\lambda _1$ associated with the linearized operator at $\\mathbf {u}=\\mathbf {0}$ , $-\\operatorname{diag}(d_i\\Delta )-\\mathbf {A}$ with Neumann boundary conditions, is negative.", "If $\\mathbf {A}$ is symmetric ($\\mathbf {A}=\\mathbf {A}^{\\textup {T}}$ ) and $\\lambda _1<0$ , then the positive solution is unique.", "Stability considerations follow in Section 4.", "Although the first claim is consistent with other works on this class of systems, the second one is 
contradictory.", "It turns out that even the proof of existence when $\\lambda _1<0$ is false, as written.", "I will explain the error and what can be salvaged in the following sections of the present note." ], [ "The uniqueness result cannot possibly be true", "In [4], I gave the following explicit counter-example showing that positive coexistence states are not, in general, unique, even with a symmetry assumption on $\\mathbf {A}$ : ${\\left\\lbrace \\begin{array}{ll}-d_1\\Delta u_1=(1-1/5)u_1+(1/5)u_2-(1/10)u_1(u_1+9u_2), \\\\-d_2\\Delta u_2=(1/5)u_1+(1-1/5)u_2-(1/10)u_2(9u_1+u_2).\\end{array}\\right.", "}$ Indeed, it can be directly verified that the constant positive solutions are in this case: $\\begin{pmatrix}1\\\\1\\end{pmatrix},\\quad \\begin{pmatrix}3-\\sqrt{15/2}\\\\3+\\sqrt{15/2}\\end{pmatrix},\\quad \\begin{pmatrix}3+\\sqrt{15/2}\\\\3-\\sqrt{15/2}\\end{pmatrix}.$ More recently, Cantrell, Cosner and Yu [2] actually showed that, in the two-species case $n=2$ , any system of the form (REF ) with Lotka–Volterra competition which is bistable competitive in the absence of mutations ($a_{12}=a_{21}=0$ ) will remain bistable competitive with sufficiently small mutations, in the sense that the two stable semi-trivial steady states of the Lotka–Volterra competitive system are displaced by small positive off-diagonal terms $a_{12}, a_{21}$ inside the positive cone $\\lbrace u_1\\ge 0, u_2\\ge 0\\rbrace $ and not outside of it.", "The preceding explicit counter-example (REF ) is a particular case of this general result.", "The existence result depending on the sign of $\\lambda _1$ is, on the contrary, consistent with the literature (for instance, [5])." ], [ "In fact, all proofs of the paper are false", "Nevertheless, after a thorough inspection of the proofs in [8], it appears that even the proof of existence is false.", "In fact, the main idea of the paper, namely that (REF ) is actually cooperative, is wrong, and since all proofs rely at least partially on this idea (monotonicity arguments, comparison principles and stability theory for monotone systems), all results are false.", "The system (REF ) is indeed cooperative at the origin $\\mathbf {u}=\\mathbf {0}$ but the preservation of this property away from the origin strongly depends on the specific choice of $\\mathbf {b}$ and, although it can be true [2], it is in general false [2], [6].", "More precisely, the error leading to this false nonlinear comparison principle is in the proof of Theorem 3.2 of [8].", "At the very beginning of the proof, the author states the following claim: fix $\\underline{\\mathbf {u}}\\gg \\mathbf {0}$ , $\\overline{\\mathbf {u}}\\gg \\underline{\\mathbf {u}}$As stated in [8], the claim only assumes $\\overline{\\mathbf {u}}\\ge \\underline{\\mathbf {u}}\\ge \\mathbf {0}$ , however the actual super- and sub-solutions constructed later on in [8] all satisfy $\\overline{\\mathbf {u}}\\gg \\underline{\\mathbf {u}}\\gg \\mathbf {0}$ .", "I claim, without proof for the sake of brevity, that the construction of a counter-example when the inequalities are large instead of strict is cumbersome but still possible., and let $\\mathbf {u}_1$ , $\\mathbf {u}_2$ be two arbitrary vectors such that $\\underline{\\mathbf {u}}\\le \\mathbf {u}_2\\le \\mathbf {u}_1 \\le \\overline{\\mathbf {u}}$ .", "Then, provided the real number $M>0$ is sufficiently large in a way that does not depend on $\\mathbf {u}_1$ and $\\mathbf {u}_2$ , $M\\mathbf {u}_1-\\mathbf {b}(\\mathbf {u}_1)-M\\mathbf {u}_2+\\mathbf {b}(\\mathbf 
{u}_2)\\ge 0.$ Although this is obviously true in the scalar setting $n=1$ (with $M$ the Lipschitz constant of $\\mathbf {b}$ ), this is false in higher dimensions $n\\ge 2$ , as can be easily verified with the following counter-example.", "Let $\\mathbf {v}=\\underline{\\mathbf {u}}$ , $\\mathbf {u}_2=\\mathbf {v}$ , $\\mathbf {u}_1=\\mathbf {v}+\\varepsilon \\mathbf {e}_1=\\mathbf {v}+(\\varepsilon ,0,\\dots ,0)^{\\textup {T}}$ with $\\varepsilon >0$ so small that $\\mathbf {u}_1\\le \\overline{\\mathbf {u}}$ .", "Then for each $i\\ge 2$ and any $M>0$ , $\\left[M\\mathbf {u}_1-\\mathbf {b}(\\mathbf {u}_1)-M\\mathbf {u}_2+\\mathbf {b}(\\mathbf {u}_2)\\right]_i= b_i(\\mathbf {v})-b_i(\\mathbf {v}+\\varepsilon \\mathbf {e}_1)= v_i\\left(\\frac{b_i(\\mathbf {v})}{v_i}-\\frac{b_i(\\mathbf {v}+\\varepsilon \\mathbf {e}_1)}{v_i}\\right)<0.$ (Even though the assumption of strict monotonicity of $b_i(\\mathbf {u})/u_i$ is ambiguously stated in [8], the previous strict inequality is clear in the aforementioned Lotka–Volterra competition case that the author clearly had in mind.)", "Everything that follows in the paper depends on this false claim." ], [ "What can be salvaged", "Theorem 3.4 in [8] states the uniqueness of the coexistence state under a symmetry assumption on the matrix $\\mathbf {A}$ .", "Although it is false in general as explained before, the variational argument there does not directly rely upon the comparison principle and can be used to prove that two distinct coexistence states $\\mathbf {u}$ and $\\mathbf {v}$ cannot be compared (namely, both $\\mathbf {u}\\ge \\mathbf {v}$ and $\\mathbf {u}\\le \\mathbf {v}$ are false).", "This property is indeed satisfied by the above counter-example (REF )." ] ]
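The three constant vectors displayed above can be checked directly: for spatially constant functions the diffusion terms $-d_i\Delta u_i$ vanish and the Neumann boundary conditions hold trivially, so it suffices to verify the algebraic system $(1-1/5)u_1+(1/5)u_2=(1/10)u_1(u_1+9u_2)$ and $(1/5)u_1+(1-1/5)u_2=(1/10)u_2(9u_1+u_2)$. A minimal symbolic sketch of this verification, assuming sympy is available:

```python
# Check that (1,1), (3 - sqrt(15/2), 3 + sqrt(15/2)) and (3 + sqrt(15/2), 3 - sqrt(15/2))
# solve the constant (spatially homogeneous) counterexample system.
from sympy import Matrix, Rational, simplify, sqrt

def residual(u1, u2):
    # Right-hand sides of the two equations, which must vanish at a constant solution:
    #   (1 - 1/5) u1 + (1/5) u2 - (1/10) u1 (u1 + 9 u2)
    #   (1/5) u1 + (1 - 1/5) u2 - (1/10) u2 (9 u1 + u2)
    f1 = Rational(4, 5) * u1 + Rational(1, 5) * u2 - Rational(1, 10) * u1 * (u1 + 9 * u2)
    f2 = Rational(1, 5) * u1 + Rational(4, 5) * u2 - Rational(1, 10) * u2 * (9 * u1 + u2)
    return Matrix([simplify(f1), simplify(f2)])

s = sqrt(Rational(15, 2))
for u1, u2 in [(1, 1), (3 - s, 3 + s), (3 + s, 3 - s)]:
    print((u1, u2), residual(u1, u2).T)   # each residual simplifies to the zero vector
```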
2107.01841
[ [ "Learning towards Robustness in Causally-Invariant Predictors" ], [ "Abstract We propose to learn an invariant causal predictor that is robust to distributional shifts, in the supervised regression scenario.", "Based on a disentangled causal factorization that describes the underlying data generating process, we attribute the distributional shifts to mutation of generating factors, which covers a wide range of cases of distributional shifts as we do not make prior specifications on the causal structure or the source of mutation.", "Under this causal framework, we identify a set of invariant predictors based on the do-operator.", "We provide a sufficient and necessary condition for a predictor to be min-max optimal, i.e., minimizes the worst-case quadratic loss among all domains.", "This condition is justifiable under the Markovian and faithfulness assumptions, thus inspiring a practical algorithm to identify the optimal predictor.", "For empirical estimation, we propose a permutation-regeneration scheme guided by a local causal discovery procedure.", "The utility and effectiveness of our method are demonstrated in simulation data and two real-world applications: Alzheimer's disease diagnosis and gene function prediction." ], [ "Introduction", "Standard supervised learning heavily relies on the assumption that training and test data follow the identical distribution, while many real-world applications do not fit into this setting.", "Therefore, we expect the predictor learned from training data to retain the predictive power in the test data with a non-identical distribution.", "The task of domain (out-of-distribution) generalization addresses such a problem, where the training data are collected from multiple domains with different distributions and we aim to learn a robust predictor that can be transferred to the unseen domains.", "Recently, causality-related domain generalization methods have attracted much attention, where the “causality\" is formulated as a causal structure [1] by the directed acyclic graph (DAG) that provides a modular configuration to describe the causative relations among the variables.", "Equipped with such a formulation, our goal can be described as: given multiple domains' data generated from unknown DAGs, can we identify a robust predictor that is transferable to all domains?", "To answer this question, many explorations have been made [2], [3], [4], [5], [6], [7], [8], [9].", "Specifically, the [7], [8], [10] could successfully identify latent causal factors equipped with a pre-defined DAG tailored for specific tasks; while in general, the DAG may not be known.", "The [4], [5] proposed an invariant predictor, which exploited only the parent nodes of the response variable for prediction.", "However, such method tend to overlook the non-causative dependence that are also invariant across domains, which weakens its predictive power.", "Recently, the enlightening works [11], [12] proposed that the interventional distribution is stable across domains under the selection diagram and I-spec frameworks, respectively, which both rely on the pre-specified mutable variables.", "This can be intractable in real application scenarios when the selection mechanisms that generate these mutable variables are unknown.", "Besides, such an invariant distribution can be non-identifiable, due to the lack of causal discovery in [11], [12].", "Our work starts from the disentangled causal factorizations of the domain-specific distributions and provides a principled way to formulate 
and solve the domain generalization problem.", "We show that the intervened conditional expectation is invariant across all domains and has a min-max property, which supports its transferability and robustness to distribution shift that provides a safeguard for prediction in the unseen domains.", "For empirical learning, we propose an intuitive and flexible learning method based on data regeneration, where we present a local causal discovery procedure that ensures us to identify necessary causal directions for regeneration.", "As a roadmap, we briefly summarize our method as following three components.", "Causal framework for distribution shift: We adopt the following disentangled causal factorizations to formulate how different domains are related, $p^e(x, y)=p(y|pa_y)\\prod \\nolimits _{x_i\\in x_s}p(x_i|pa_i)\\prod \\nolimits _{x_i\\in x_m}p^{e}(x_i|pa^{e}_i),$ where $e$ is the domain index, $\\mathbf {X}_m$ denotes the variables with changing causal conditionals, and $\\mathbf {X}_s\\!=\\!\\mathbf {X}\\!\\setminus \\!\\mathbf {X}_m$.", "This provides a unifying framework that covers a wide range of distribution shift cases as we do not presume the causal graph structure and changing conditionals as known.", "Such framework coincides with the sparse mechanism shift principle in [13] that says the distribution shift tends to be caused by a sparse change in the causal factorization.", "Invariant predictor with shift-robustness: We define the predictor $\\operatorname{E}_{P^e}\\!\\left(Y|x_s, do(x_m)\\right)$ in terms of the distribution $p^e(x, y)$ and then prove its invariance, which enables us to learn such mapping based on the training domains and then transfer it to the unseen domain.", "Further, we prove that $\\operatorname{E}\\!\\left(Y|x_s, do(x_m)\\right)$ minimizes the worst-case quadratic loss among all domains, which guarantees the predictive performance in any unseen domains.", "We provide an intuitive and accessible proof where we construct a distribution $P^{*}$ that is compatible with the intervened graph and transform the original min-max problem into a max-min one with respect to $P^{*}$.", "Empirical learning based on data regeneration: We propose an intuitive learning method based on data regeneration, which is flexible to be readily incorporated with standard supervised learning methods.", "The method is enlightened by a key observation that the intervened conditional mean is equal to the standard conditional mean under the distribution that is Markov equivalent with the intervened graph.", "To transform the original domain-specific data to be compatible with the intervened graph, we firstly shuffle the mutable variables and then regenerate the responsive variables.", "As the causal structure is unknown, we propose a local causal discovery procedure to determine the necessary causal directions for data regeneration.", "We show that it suffices to a minimal set of responsive variable for estimating the predictor, and the required causal directions are always identifiable.", "For evaluation, we implement the experiments on both simulated data and real-world data.", "The data and source codes are available from supplements." 
], [ "Related Work", " Causal learning for domain generalization: There have been emerging works that consider the domain generalization problem from a causal perspective.", "One line of work [2], [3], [4], [6] promotes invariance as a key surrogate feature of causation where the causal graph is more of a motivation.", "Another line of work [7], [8], [9], [14], [15] considers domain generalization for unstructured data using specifically designed causal graphs as a tool to model the distribution shift and guide the learning method, which is more task targeted.", "[11] and [12] are the most relevant to us as they also adopt the interventional distribution but under the selection diagram and I-spec frameworks, respectively.", "While both [11] and [12] concentrated on the identification issue of the stable distributions, we formulate the causal framework from the causal factorizations under the DAG where the predictor is identifiable by definition.", "In comparison to [12], our results allow the case when the intervened conditional mean can not be reduced to a common conditional mean, which is also the important case that separate the method based on do-operator with the line of methods based on invariant conditionals [4], [6], [16].", "Causal discovery with multi-domains: Our work also benefits from the recent progress in causal discovery [17], [18], [19], [20], [21], a subfield that focuses on identifying the causal relations from data.", "Our work shares a similar framework with [17] in formulating the distribution shift.", "However, they focus on recovering the full causal graph to study relations among variables, we provide a local discovery procedure aimed for the prediction of the target." ], [ "Preliminaries", "Problem Setup.", "Suppose that the system includes a target variable $Y\\!\\in \\!\\mathcal {Y}$ , a multivariate predictive variable $\\mathbf {X}\\!\\in \\!\\mathcal {X}$ , and the data are collected from multiple domains $\\mathcal {E}\\!=\\!\\lbrace e\\rbrace $ .", "In practice, different “domains” may refer to different experiment settings or different groups of subjects.", "We consider the setting where the joint distribution of $p^{e}(x, y)$ varies across domains.", "Let $D_{e}\\!=\\!\\lbrace (x_k^e, {y}_k^e)\\rbrace _{k=1}^{n_e}$ denote the dataset in domain $e$ with sample size $n_e$ , where the sample units are identically distributed according to $p^{e}(x, y)$ .", "The distributions set $\\mathcal {P}\\!\\!=\\!\\lbrace P^e|e\\in \\mathcal {E}\\rbrace $, where the upper case $P^e$ denote the distribution with probability density $p^{e}(\\cdot )$ .", "The training data are $\\lbrace D_{e}|e\\in \\mathcal {E}_{\\text{train}}\\rbrace $ where $\\mathcal {E}_{\\text{train}}\\subseteq \\mathcal {E}$ .", "The task is to learn a predictor $f^{*}(\\cdot ): \\mathcal {X}\\mapsto \\mathcal {Y}~$ from the training domains such that $f^{*}(\\cdot )$ is robust to distributional shifts across all domains.", "To measure the robustness of a predictor $f(\\cdot )$ , a natural way is to investigate the performance of $f(\\cdot )$ in its most unfavorable domain.", "That is, we wish to minimize the maximum loss $\\max _{P^e\\in \\mathcal {P}}\\operatorname{E}_{P^e}[\\mathcal {L}(Y,f(x))]$ to provide a safeguard for prediction in the unseen domain, where $\\mathcal {P}$ is the set of distributions over all domains and $\\mathcal {L}$ denotes the loss function.", "Therefore, the predictor $f^{*}(\\cdot )$ with shift robustness should own the following property, $f^{*}(x)=\\mathop 
{\\text{argmin}}\\nolimits _{f:{\\small \\mathcal {X}\\mapsto \\mathcal {Y}}}\\max \\nolimits _{_{P^e\\in \\mathcal {P}}}\\operatorname{E}_{P^e}[\\mathcal {L}(Y,f(x))].$ Such a goal is in align with existing works [2], [3], [6], while they focus on learning an invariant predictor by adding penalizations instead of dealing with the min-max property.", "In our work, we firstly provide the explicit formula of the predictor in terms of the distribution based on the do-operator.", "Then we prove its invariance and min-max property in (REF ), where we transform the original min-max problem to a more accessible max-min problem.", "Causal graph and do-operator.", "A causal graph $G$ of a set of variables $\\mathbf {V}$ is a directed acyclic graph (DAG) where each node corresponds to a distinct element of $\\mathbf {V}$ and each arrow represents a direct causal relationship [22].", "We let $\\text{PA}_{v_i}, \\text{Ch}_{v_i}, \\text{De}_{v_i}, \\text{An}_{v_i}$ denote the parents, children, descendants, ancestors of $V_i$, and let $\\overline{\\text{An}}_{v_i}$ denote $V_i\\cup \\text{An}_{v_i}$.", "Each variable is generated by $V_i\\!=\\!\\!f_{v_i}(\\text{PA}_{v_i}, U_i)$ where $U_i$ denotes the random noise.", "The graph structure along with the joint independence of noise terms induce a causal factorization of the joint distribution $p(v)=\\prod _{v_i}p(v_i|pa_{v_i})$ .", "The operator do($\\small \\mathbf {V}_1\\!=\\!v_1$ ) means that we fix the value of the random variable $\\mathbf {V}_1$ as constant $v_1$ that curtails the natural tendency of $\\mathbf {V}_1$ to vary in response $\\text{PA}_{\\mathbf {v}_1}$ , then $p(v_0| do(v_1))\\!=\\!p(v)/\\prod _{v_i\\in v_1}p(v_i|pa_i)$ denotes the induced interventional probability distribution of $\\mathbf {V}_0\\!\\!=\\!\\!\\mathbf {V}_1\\!\\!\\setminus \\!\\!\\mathbf {V}_0$.", "In terms of changes in the causal graph, the do-operator is equivalent to removing all edges directed into the intervened variable." 
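To make the do-operator concrete, the following minimal Monte-Carlo sketch contrasts the interventional slope of $\operatorname{E}(Y|do(X=x))$ with the observational slope of $\operatorname{E}(Y|X=x)$ in a hypothetical three-variable linear-Gaussian model $Z\rightarrow X$, $Z\rightarrow Y$, $X\rightarrow Y$; the model and all numerical values are illustrative assumptions, not taken from the paper. Intervening on $X$ removes the edge $Z\rightarrow X$, so the confounding path through $Z$ disappears and the interventional slope reduces to the direct structural coefficient.

```python
# Hypothetical linear-Gaussian SCM (illustration only):
#   Z ~ N(0,1),  X = Z + eps_x,  Y = 2*X + 3*Z + eps_y.
# Ordinary conditioning keeps the confounding through Z, while do(X = x)
# corresponds to deleting the edge Z -> X before sampling.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Observational data from the unmutilated graph.
z = rng.normal(0.0, 1.0, n)
x = z + rng.normal(0.0, 1.0, n)
y = 2 * x + 3 * z + rng.normal(0.0, 1.0, n)
slope_obs = np.polyfit(x, y, 1)[0]        # slope of E[Y | X = x]

# Interventional data: X is assigned externally, the edge Z -> X is removed.
x_do = rng.normal(0.0, 1.0, n)
z_do = rng.normal(0.0, 1.0, n)
y_do = 2 * x_do + 3 * z_do + rng.normal(0.0, 1.0, n)
slope_do = np.polyfit(x_do, y_do, 1)[0]   # slope of E[Y | do(X = x)]

print(slope_obs, slope_do)
```

With the stated coefficients the two slopes concentrate around $3.5$ and $2$, respectively, which is exactly the distinction between conditioning and intervening used throughout the paper.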
], [ "Methodology", "Firstly, we will introduce a causal framework to formulate the distribution shifts across domains.", "As is known, extrapolation to the unseen domain is generally unachievable as the distribution may change arbitrarily, certain restrictions are required to build connections between the seen and unseen domains.", "In this paper, we take a causal perspective and view the joint distribution $p^{e}(x, y)$ of the real-world data as a product of causal mechanisms, and the shift of $\\lbrace p^{e}(x, y)\\rbrace $ across domains source from some changing causal modules.", "Specifically, suppose that $\\mathbf {X}\\!=\\!", "(X_1,\\cdots , X_p)$ is a $p$ -$\\!$ dimensional variable, and the causal structure of $Y\\cup \\lbrace X_i\\rbrace _{i=1}^{p}$ is represented by a DAG [1] such that the joint distribution admits the following causal factorization $p^e(y,x)=p^{e}(y|pa^{e}_y)\\prod \\nolimits _{i=1}^{p}p^{e}(x_i|pa^{e}_i), \\text{~for~} e\\in \\mathcal {E},$ where each term in the factorization is called as a causal conditional.", "When the set of parent nodes are empty, the causal conditional $p^e(x_i|pa^e_i)$ refers to the marginal density $p^e(x_i)$ .", "The causal graph is a modular configuration to describe the causative relationships among the variables in a qualitative way, where each parent-child relationship represents an autonomous physical mechanism.", "Indeed, a change in $p^{e}(x, y)$ across domains will always be due to the changes in at lease one of the causal conditionals in (REF ).", "Let $\\mathbf {X}_{s}$ and $\\mathbf {X}_{m}$ denote the stable variables set with invariant causal conditionals and mutable variables set with changing ones, respectively, i.e., $\\mathbf {X}_{m}\\!", ":=\\!\\lbrace X_i|\\exists ~e, e^{_\\prime }\\in \\mathcal {E},~ p^{e}(x_i|pa^{e}_i)\\ne p^{e^{_\\prime }}(x_i|pa^{e^{_\\prime }}_i) \\rbrace $ and $\\mathbf {X}_{s}\\!", ":=\\!\\lbrace X_i| p^{e}(x_i|pa_i)\\equiv p(x_i|pa_i)~\\forall e\\!\\in \\!\\mathcal {E} \\rbrace $.", "In this paper, we consider the case where the causal conditional for the target is invariant, i.e., $p^{e}(y|pa_y)\\equiv p(y|pa_y)$ , which has been widely adopted in the existing literature [2], [5], [8], [15].", "For clarity, we summarize such framework as the following." ], [ "Causal framework for distribution shift", "Suppose that causal structure over $Y\\cup \\lbrace X_j\\rbrace $$_{^{j=1}}^{_p}$ is a directed acyclic graph for each domain $e$ and $p^{e}(x, y)$ admits the following causal factorization $p^e(x, y)=p(y|pa_y)\\prod \\nolimits _{x_i\\in x_s}p(x_i|pa_i)\\prod \\nolimits _{x_i\\in x_m}p^{e}(x_i|pa^{e}_i), \\text{~for~} e\\in \\mathcal {E},$ where $x_s$ and $x_m$ are disjoint sets and $x_s\\cup x_m\\!=\\!\\lbrace x_j\\rbrace _{j=1}^{p}$ .", "This framework is closely related to the concept of `soft intervention' [23] and `mechanism change' [24] in the literature of causal discovery, which refers to the change of causal conditionals under certain known experimental interventions.", "It is important to note we adopts the causal factorization, instead of an arbitrary factorization by the multiplication rule, e.g., $p^e(x, y)\\!=\\!\\prod _{i=1}^{p}p^e(x_i|x_{i+1},\\cdots ,x_p,y)p^e(y)$ , as it is believed that the physical mechanism changes are normally rare (sparse mechanism shift, [13]).", "By contrast, a minor change in one causal conditional during the data generating process might cause many terms in a non-causal factorization change simultaneously." 
], [ "Causally Invariant Predictor", "In this section, we will show that the conditional expectation under the distribution intervened by $do($$\\mathbf {X}_{m}$$=\\!x_m)$ [1], [5], i.e., $\\operatorname{E}_{P^{e}}(Y|x_s,do(x_m))$ is invariant across $P^{e}\\!\\in \\!", "\\mathcal {P}$ and hence can serve as the invariant predictor.", "Then we will prove that the proposed invariant predictor also possesses the favorable min-max property in (REF ) that supports its robustness to the distribution shift.", "Note that while in this section the predictor is expressed in terms of the distribution (instead of sample data), we will give the empirical learning method with the detection procedure for $\\mathbf {X}_{m}$ in Section REF .", "According to the framework in (REF ), the causal conditionals $\\lbrace p^{e}(x_i|pa^{e}_i)|x_i\\in x_m\\rbrace $ that dictate the generation of $X_i\\in \\mathbf {X}_m$ change across domains $e\\in \\mathcal {E}$ , so we proactively remove such dependence information by $do($$\\mathbf {X}$$_m\\!=\\!x_m)$ and the induced interventional probability distribution $p^{e}(y,x_s|do(x_m))\\equiv p(y|pa_y)\\prod \\nolimits _{x_i\\in x_s}p(x_i|pa_i) \\text{~for~} e\\in \\mathcal {E},$ which is obtained by removing $\\lbrace p^{e}(x_i|pa^{e}_i)|x_i\\in x_m\\rbrace $ and therefore is invariant for $P^{e}\\in \\mathcal {P}$ .", "By implementing such do-operator on the mutable variables, the interventional distribution $p(y,x_s|do(x_m))$ naturally removes all the non-invariant information contained in $\\lbrace p^{e}(x_i|pa^{e}_i)|x_i\\in x_m\\rbrace $ and maximally reserves all the transferable invariant information carried by $p(y|pa_y)$ and $\\lbrace p(x_i|pa_i)|x_i\\in x_s\\rbrace $ .", "The invariance of the interventional distribution directly leads to the invariance of $\\operatorname{E}_{P^{e}}(Y|x_s,do(x_m))$, which is defined by $\\operatorname{E}_{P^{e}}[Y|x_s,do(x_m)]=\\int yp^{e}(y|x_s, do(x_{m}))dy=\\int y\\frac{p^{e}(y,x_s | do(x_{m}))}{\\int _{y}p^{e}(y,x_s | do(x_{m}))dy} dy.$ Proposition 4.1 (Invariance Property)   Suppose that the distributions set $\\mathcal {P}\\!=\\!\\lbrace P^e|e\\in \\mathcal {E}\\rbrace $ where $p^e(y,x)=p(y|pa_y)\\prod _{x_i\\in x_s}p(x_i|pa_i)$ $\\prod _{x_i\\in x_m}p^e(x_i|pa^e_i)$ , then $\\operatorname{E}_{P^{e}}(Y|x_s, do(x_m))$ , the conditional expectation under the distribution intervened by $do(\\mathbf {X}_m\\!=\\!x_m)$ , is invariant for $P^e\\in \\mathcal {P}$ .", "In the following, we will denote invariant distributions as $p(y, x_s|do(x_m))$ omitting the superscript $e$ and denote the prediction as $\\operatorname{E}[Y|x_s,do(x_m)]$ .", "It is worth noting our method is different from the way of variables selection, our invariant predictor is commonly not constant in $x_m$ as the interventional distribution still retains the invariant causal conditionals with $\\mathbf {X}_m$ as the parents.", "The property of invariance is of great importance, as it suggests that the mapping $\\operatorname{E}[Y|x_s,do(x_m)]: \\mathcal {X}\\mapsto \\mathcal {Y}$ can be learned from the training domains using data distributed as $p^e(x,y)$ ($e\\in \\mathcal {E}_{\\text{train}}$ ), and then can be directly transfered to the unseen test domains.", "However, being invariant only suggests that predictor is learnable from training domains, further analyses on its robustness are required.", "Fortunately, it can be proved that under certain conditions our proposed predictor $f^*(x)\\!", ":=\\!\\operatorname{E}[Y|x_s,do(x_m)]$ minimizes the 
maximum quadratic loss over all domains, i.e., $f^*(x)=\\mathop {\\operatorname{argmin}}\\nolimits _{f:{\\small \\mathcal {X}\\mapsto \\mathcal {Y}}}\\max \\nolimits _{_{P^e\\in \\mathcal {P}}}\\operatorname{E}_{P^e}[(Y-f(\\mathbf {X}))^2].$ The min-max property in (REF ) suggests $\\max \\nolimits _{_{P^e\\in \\mathcal {P}}}\\operatorname{E}_{P^e}[(Y-f^*(\\mathbf {X}))^2]\\le \\max \\nolimits _{_{P^e\\in \\mathcal {P}}}\\operatorname{E}_{P^e}[(Y-f(\\mathbf {X}))^2]$ for any predictor $f(\\cdot )$ , which shows the robustness of our predictor and guarantees its predictive performance in any unseen test domain.", "The establishment of (REF ) requires certain conditions, the following Theorem REF specify the conditions from causal structures, while Theorem REF presents the result under the parametric assumption of additive models.", "The key idea of the proof is to transform the min-max problem to a more accessible max-min one, which will be introduced after the theorems.", "Theorem 4.1 (Min-Max property under certain Causal Structures)   Let $\\mathbf {X}_{m}^{0}\\!=\\!\\mathbf {X}_{m}\\cap \\text{\\emph {Ch}}_{Y}$ , $\\mathbf {X}_{m}^{1}\\!=\\!\\mathbf {X}_{m}\\setminus \\mathbf {X}_{m}^{0}$.", "Suppose that the causal graph for $Y\\cup \\lbrace X_j\\rbrace $$_{^{j=1}}^{_p}$ in each domain satisfies that (i)  for $V_i\\in \\mathbf {X}_{m}^{0}\\cap \\text{\\emph {An}}(\\mathbf {X}_{s})$, all the paths between $V_i$ and $Y$ are d-separated by $\\mathbf {X}_{s}\\cup \\mathbf {X}_{m}^{1}$; and (ii) there is no directed path from $V_i\\in \\mathbf {X}_{m}^{0}\\!\\!\\setminus \\!", "\\text{\\emph {An}}(\\mathbf {X}_{s})$ to $Y$ , then $f^*(x)\\!", ":=\\!\\operatorname{E}[Y|x_s,do(x_m)]$ satisfies the min-max property claimed in (REF ).", "The assumptions in Theorem REF over causal graphs originate from [25].", "Under such restrictions, the invariant prediction $\\operatorname{E}(Y|x_s, do(x_m))$ can be reduced to the conditional expectation $\\operatorname{E}(Y|x_s, x_m^{1})$ , which enables the following variance decomposition formula $\\operatorname{Var}(Y|\\mathbf {X}_{s},\\mathbf {X}_{m}^{1})=\\operatorname{E}[\\operatorname{Var}(Y|\\mathbf {X})|\\mathbf {X}_s, \\mathbf {X}_m^1]+\\operatorname{Var}[\\operatorname{E}(Y|\\mathbf {X})|\\mathbf {X}_s, \\mathbf {X}_m^1]$ that is crucial for the proof.", "When $\\operatorname{E}(Y|x_s, do(x_m))$ can not be reduced to the common conditional expectation, we can also prove the min-max property under certain parametric assumptions on the causal models as shown in Theorem REF .", "Theorem 4.2 (Min-Max property under Additive Noise Model)   Suppose that the causal conditionals for the target $Y$ and its child nodes Ch$_{Y}$ are Gaussian distributed as $Y|pa_y\\sim N(g_y(pa_y), \\sigma ^2_y)$ , $X_i|pa_i\\sim N(g_{x_i}(pa_i), \\sigma ^2_{x_i})$ for $X_i\\in $ Ch$_{Y}$ and $g_i(pa_i)$ is linear in $Y$ .", "Then $f^*(x)\\!", ":=\\!\\operatorname{E}[Y|x_s,do(x_m)]$ satisfies the min-max property claimed in (REF ).", "The assumptions on the causal conditionals in Theorem REF is equivalent to the commonly adopted additive noise assumption written in $Y\\!=\\!g_{y}(\\text{PA}_y)\\!+\\!\\varepsilon _y$ and $X_i\\!=\\!g_{x_i}(\\text{PA}_{x_i})\\!+\\!\\varepsilon _{x_i}$ for $X_i\\!\\in \\!\\text{Ch}_{y}$ .", "While Gaussian distribution is a natural choice for the noise distribution, the results may be extended to more general exponential family distributions.", "In the following, we show the key idea in proving the min-max problem, which also inspires the 
empirical learning method in Section REF .", "Let $p^{*}(x_m)$ be an arbitrary marginal probability density for $\\mathbf {X}_m$ , we construct the distribution $p^{*}(x, y) = p(y|pa_y)\\prod \\nolimits _{x_i\\in x_s}^{p}p(x_i|pa_i)p^{*}(x_m).$ Let $P^{*}$ be the distribution with density $p^*(x, y)$ .", "Based on the definition, we have two observations $\\operatorname{E}(Y|x_s, do(x_m))=\\operatorname{E}_{P^{*}}(Y|x_s, do(x_m))$ ; $\\operatorname{E}_{P^{*}}(Y|x_s, do(x_m))\\!=\\!", "\\operatorname{E}_{P^{*}}(Y|x) =\\text{argmin}_{f}\\operatorname{E}_{P^{*}}[Y-f(\\mathbf {X}))]^2$ .", "The conclusion (i) directly follows from the fact $P^*\\!\\!\\in \\!\\mathcal {P}$ and the invariance of the intervened conditional expectation, note that $p^*(x, y)$ is a special case of $p^{e}(x, y)$ when $pa_{i}\\!=\\!", "\\varnothing $ for $x_i\\!\\in \\!x_m$ .", "To appreciate (ii), note that $p^{*}(x, y)\\!=\\!p^*(x_m)p^{e}(y,x_s|do(x_m))$ by comparing (REF ) with (REF ).", "This immediately leads to $p^{*}(y|x_s, x_m)\\!=\\!p^*(y|x_s, do(x_m))$ and hence we have the first equality in (ii).", "Further, note that the conditional expectation is the predictor that minimizes the mean squared error in the single distribution case, we have the second equality in (ii).", "Combining the equalities in (i) and (ii), we conclude that $\\operatorname{E}(Y|x_s, do(x_m))=\\text{argmin}_{f}\\operatorname{E}_{P^{*}}[Y-f(\\mathbf {X}))]^2$.", "That is, our proposed invariant predictor is actually the optimal predictor under the constructed special distribution $P^*$ .", "Then with the “Principle of Maximum Conditional Entropy\" in [26], to prove that $\\operatorname{E}[Y|x_s,do(x_m)]$ satisfies the following min-max property $\\mathop {\\text{argmin}}\\nolimits _{f:{\\small \\mathcal {X}\\mapsto \\mathcal {Y}}}\\max \\nolimits _{_{P^e\\in \\mathcal {P}}}\\operatorname{E}_{P^e}[(Y-f(\\mathbf {X}))^2]$ is equivalent to prove that $P^{*}$ is the maximizing distribution as defined in the following $\\quad \\mathop {\\text{argmax}}\\nolimits _{{P^e\\in \\mathcal {P}}}\\min \\nolimits _{f:{\\small \\mathcal {X}\\mapsto \\mathcal {Y}}}\\operatorname{E}_{P^e}[(Y-f(\\mathbf {X}))^2].$ Note $\\min _{f}\\operatorname{E}_{P^e}[(Y-f(\\mathbf {X}))^2]$ measures degree of difficulty in using $X$ to predict $Y.$ Thus, the robust predictor over $\\mathcal {P}$ with the min-max property is the optimal predictor over the “hardest-to-predict\" distribution.", "Therefore, the original min-max problem is transformed into proving that $P^{*}$ in (REF ) is a maximizing distribution defined in (REF ).", "Further, it can be proved that under the quadratic loss, $P^*$ being the maximizing distribution is equivalent to $\\operatorname{E}_{P^*}(\\operatorname{Var}_{P^*}(Y|X))\\ge \\operatorname{E}_{P^e}(\\operatorname{Var}_{P^e}(Y|X))$ for all $P^e\\in \\mathcal {P}$ , which can be proved under the conditions in Theorem REF and REF .", "Additionally, we present an intuitive motivation for claiming $P^*$ to be “hardest-to-predict\" distribution from its construction.", "Note $p^{*}(x, y)$ shares the same invariant causal conditionals with $p^{e}(x, y) $ and eliminate the other dependence, while the eliminated dependence with $Y$ tends to reduce the predictive power of $\\mathbf {X}$.", "It is important that the $P^*$ also inspires the learning method in Section REF , where we will re-define $P^*$ with more scrutiny to reduce the approximation errors." 
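The max-min reformulation above can be illustrated numerically on the linear-Gaussian model used later in the simulation study: the inner minimum $\min_f\operatorname{E}_{P^e}[(Y-f(\mathbf{X}))^2]$ equals $\operatorname{E}_{P^e}[\operatorname{Var}_{P^e}(Y|\mathbf{X})]$ and, since that model is jointly Gaussian, it is attained by the best linear predictor and can be estimated by least squares. A minimal sketch, with an arbitrary illustrative choice of the free marginal $p^*(x_1)$ and illustrative sample sizes, showing that the constructed $P^*$ attains the largest such value among the sampled domains:

```python
# Monte-Carlo check that P* is "hardest to predict" for the synthetic model
# x3 ~ N(5,1), y|x3 ~ N(0.5 x3, 0.5^2), x1|y ~ N(alpha y, 0.1^2),
# x2|y,x1,x3 ~ N(y - x1 + x3, 0.1^2).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def sample_domain(alpha, n):
    x3 = rng.normal(5, 1, n)
    y = 0.5 * x3 + rng.normal(0, 0.5, n)
    x1 = alpha * y + rng.normal(0, 0.1, n)
    x2 = y - x1 + x3 + rng.normal(0, 0.1, n)
    return np.column_stack([x1, x2, x3]), y

def best_linear_mse(X, y):
    # min_f E[(Y - f(X))^2] over linear f; a valid proxy here because the
    # joint distribution is Gaussian, so the conditional mean is linear.
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.mean((y - A @ coef) ** 2)

# P*: replace the mutable conditional p^e(x1|y) by a free marginal p*(x1)
# (N(2.5, 1) is an arbitrary illustrative pick) and regenerate x2 from its
# invariant mechanism.
x3 = rng.normal(5, 1, n)
y_star = 0.5 * x3 + rng.normal(0, 0.5, n)
x1 = rng.normal(2.5, 1.0, n)
x2 = y_star - x1 + x3 + rng.normal(0, 0.1, n)
mse_star = best_linear_mse(np.column_stack([x1, x2, x3]), y_star)

# Every sampled domain attains at most mse_star; near-equality occurs at
# alpha = 0, where x1 is already independent of y within that domain.
for alpha in [-2.0, 0.0, 1.0, 4.0]:
    X, y = sample_domain(alpha, n)
    print(f"alpha={alpha:+.1f}: {best_linear_mse(X, y):.5f}   vs   P*: {mse_star:.5f}")
```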
], [ "Local Causal Discovery and Empirical Estimation", "The key idea for estimating the invariant predictor sources from the following equation, $\\operatorname{E}(Y|x_s, do(x_m))\\!=\\!\\operatorname{E}_{P^{*}}(Y|x)\\hbox{~~for~~} p^{*}(x, y) = p(y|pa_y)\\prod \\nolimits _{x_i\\in x_s}^{p}p(x_i|pa_i)p^{*}(x_m).$ The magical implication of (REF ) is that once we have the data distributed as $P^{*}$, we can estimate the invariant predictor by simply applying the standard supervised learning methods to regress $Y$ on $\\mathbf {X}$ over such data.", "Therefore, the problem is transformed to how to generate the data distributed as $P^{*}$ based on the original data from $P^{e}$, which can be accomplished by firstly shuffling the variables $\\mathbf {X}_m$ and then regenerate the responsive variables (the descendants $\\text{De}(\\mathbf {X}_m)$).", "As the causal structure is not assumed as known, we propose a local causal discovery procedure to identify the causal directions for determining the set of variables that need to be regenerated.", "Further, to reduce the approximation errors brought by the regeneration step, we redefine the distribution $P^{*}$ with more scrutiny such that $\\operatorname{E}(Y|x_s, do(x_m))\\!=\\!\\operatorname{E}_{P^{*}}(Y|x))$ still holds but we only need to regenerate the variables in $\\overline{\\text{An}}(\\text{Ch}_y)\\cap \\text{De}{(\\mathbf {X}_{m})}$ for estimating the invariant predictor.", "At the same time, we show in Algorithm REF that the causal directions required for determining $\\overline{\\text{An}}(\\text{Ch}_y)\\cap \\text{De}{(\\mathbf {X}_{m})}$ are always identifiable, which greatly improves the feasibility of our method as the identification of a general causal direction is only achievable under additional identification conditions.", "In the following, we will decompose the empirical learning as the following three steps: (i) detect $\\mathbf {X}_{m}$ and construct the undirected causal skeleton; (ii) implement local causal discovery to determine the regenerated variables; (iii) regenerate data and estimate the invariant predictor." 
], [ "The First Step:", "Following [27], we treat the domain index $e$ as a sample realization from the random variable $E$ and assume the following faithfulness assumption [28], which is a standard assumption in causal discovery that ensures the conditional independence relationships contained in the distribution are equivalent to the ones contained in the causal graph.", "Let $\\mathbf {V}\\!=\\!", "\\lbrace X_i\\rbrace _{i=1}^{p}\\cup Y$.", "Under this assumption, we may detect $\\mathbf {X}_m$ and build the undirected causal skeleton by querying the marginal and conditional independence among $\\mathbf {V}$ .", "Assumption 1 (Faithfulness) Suppose that the structural causal equation $X_i\\!=\\!f_i(PA_i, \\theta _i(E),\\varepsilon _{i})$, where $\\theta _i(E)$ is the parameter and $\\varepsilon _{i}$ is the random noise.", "Assume the joint distribution of $Y\\cup \\lbrace X_i\\cup \\theta _i(E)\\rbrace _{i=1}^{p}$ is faithful to the causal graph augmented by including the arrows $\\lbrace \\theta _i(E)\\rightarrow X_i\\rbrace $.", "Detection of $\\mathbf {X}_m$ and Construct the Causal Skeleton $G$ Start with $\\mathbf {X}_m\\!=\\!\\varnothing $ .", "For $V_i\\in \\mathbf {V}$ , test if $V_i\\perp E$ or if there exist a subset $\\mathbf {C}_{v_i, e}\\subseteq \\mathbf {V}$ such that $V_i\\perp E|\\mathbf {C}_{v_i, e}$ .", "If $V_i\\lnot \\perp E$ and there exists no such $\\mathbf {C}_{v_i, e}$ , then include $V_i$ , $\\mathbf {X}_m\\!=\\!\\mathbf {X}_m\\cup V_i$ .", "Start with an undirected graph $G$ including edges for any two variables in $\\mathbf {V}$ and the arrows $E\\rightarrow V_i$ for $V_i\\in \\mathbf {X}_m$ .", "For each pair of $\\lbrace V_i, V_j\\rbrace $.", "If $V_i\\perp V_j$ or there exists a subset $\\mathbf {C}_{v_i, v_j}\\!\\subset \\!\\mathbf {V}$ such that $V_i\\perp V_j | \\mathbf {C}_{v_i, v_j}$, we delete the edge $V_i-V_j$ from $G$ .", "In practice, we use the Hilbert Schmidt Independence Criterion [29] for the marginal independence test and the Kernel-based Conditional Independence test in [30]." 
], [ "The Second Step:", "In the second step, we identify the variables set $\\text{De}(\\mathbf {X}_{m})$ and $\\overline{\\text{An}}(\\text{Ch}_y)\\cap \\text{De}{(\\mathbf {X}_{m})}$, where the descendants and ancestors are defined in the causal graph $G^{*}$ obtained by removing the arrows into $\\mathbf {X}_m$ .", "We will show in the third step that it suffices to regenerate the variables in $\\overline{\\text{An}}(\\text{Ch}_y)\\cap \\text{De}{(\\mathbf {X}_{m})}$ for estimating the proposed invariant predictor $\\operatorname{E}(Y|x_s, do(x_m))$ .", "Detection of De${(\\mathbf {X}_{m})}$ and An(Ch$_{y}$ )$\\cap $ De${(\\mathbf {X}_{m})}$ Detect De${(\\mathbf {X}_{m})}$: Start with De${(\\mathbf {X}_{m})}$$\\!=\\!\\varnothing $ and $Q=\\lbrace X_i|X_i\\in \\mathbf {X}_m\\rbrace $ .", "[1] $Q\\ne \\varnothing $ $X_i\\in Q$ find the variable $V_i\\!\\in \\!\\mathbf {V}\\!\\setminus \\!\\mathbf {X}_m$ and adjacent to $X_i$ in the undirected graph $G$ .", "find a separating $\\mathbf {C}_{v_i, e}\\subseteq \\mathbf {V}$ such that $V_i\\perp E|$$\\mathbf {C}_{v_i, e}$ if $V_i\\perp X_i|$$\\mathbf {C}_{v_i, e}\\cup X_i$, then $V_i\\rightarrow X_i$ ; else, $V_i\\leftarrow X_i$ and add $V_i$ into $Q$ .", "Remove $X_i$ from $Q$ .", "Detect Ch$_{y}$$\\cap $ De${(\\mathbf {X}_{m})}$ : Start with Ch$_{y}$$\\cap $ De${(\\mathbf {X}_{m})}$$\\!=\\!\\varnothing $ [1] $X_i\\in \\text{De}(\\mathbf {X}_{m})$ that is adjacent to $Y$ in $G$ find the separating set $\\mathbf {C}_{y, e}\\subseteq \\mathbf {V}$ such that $Y\\perp E| \\mathbf {C}_{y, e}$ if $Y\\lnot \\perp E| \\mathbf {C}_{y, e}\\cup X_i$ , then $Y\\rightarrow X_i$ and add $X_i$ into Ch$_{y}$$\\cap $ De${(\\mathbf {X}_{m})}$.", "In the second part of Algorithm REF , we detect the $\\text{Ch}_y\\cap \\text{De}(\\mathbf {X}_m)$, the final answer $\\overline{\\text{An}}(\\text{Ch}_y)\\cap \\text{De}{(\\mathbf {X}_{m})}$ is obtained by including the variable over the identified directed path $\\text{Ch}_y\\!\\leftarrow \\!\\cdots \\leftarrow X_i$ for $X_i\\in \\mathbf {X}_m$.", "It should be noted that the separating sets $\\mathbf {C}_{v_i, e}$ required in Algorithm REF always exist as $V_i\\notin \\mathbf {X}_m$ and have been searched in Algorithm REF , so no extra computation cost is needed in finding the separating set.", "Except for providing the specific procedures, Algorithm REF also implies that the required causal directions for determining $\\text{De}(\\mathbf {X}_{m})$ and $\\overline{\\text{An}}(\\text{Ch}_y)\\cap \\text{De}{(\\mathbf {X}_{m})}$ can always be identified the statistical dependence implied in data, which is important in terms of the practicality as it is known a general causal direction may not be identifiable.", "Then we use the IC algorithm [1] to determine the direction between $X_i\\!\\in \\overline{\\text{An}}(\\text{Ch}_y)\\cap \\text{De}{(\\mathbf {X}_{m})}$ and its adjacent nodes to identify PA$_i$ for regenerating the data of $X_i$ in the third step.", "For example, when $X_i\\!\\in \\!\\text{Ch}_{y}\\!\\cap \\!\\text{De}(\\mathbf {X}_m)$, the direction between $X_i$ and an adjacent variable $X_j$ can be identified as long as $X_j$ is not adjacent to $Y$ or any identified ancestor of $X_i$ , which is more easier to be satisfied than the identification conditions for recovering the complete causal graph in [27]." 
], [ "The Third Step:", "Note that $P^{*}$ in (REF ) is obtained from $p^e(x,y)$ by replacing ${\\prod _{x_i\\in x_m}\\!p(x_i|pa_i)}$ with $p^*(x_m)$ , `transforming' from $P^{e}$ to $P^{*}$ can be realized by firstly shuffling the data of $\\mathbf {X}_m$ and then regenerate $\\text{De}(\\mathbf {X}_m)$ that changes in response.", "Since the regeneration step introduces approximation errors in fitting regressors, we would like to regenerate a minimal set of variables that suffices to estimate $\\operatorname{E}(Y|x_s, do(x_{m}))$ .", "In the following, we redefine the distribution $P^{*}$ such that the equality $\\operatorname{E}(Y|x_s, do(x_m))\\!=\\!\\operatorname{E}_{P^{*}}(Y|x))$ still holds and it suffices to to regenerate the identified variables $\\overline{\\text{An}}(\\text{Ch}_y)\\cap \\text{De}{(\\mathbf {X}_{m})}$.", "The redefinition of $P^{*}$ is based on the following Proposition REF .", "Proposition 4.2 Let $\\mathbf {X}_m^{0}$ be a subset of $\\mathbf {X}_m$ satisfying (Ch$_y\\!\\cap \\!\\mathbf {X}_m) \\subseteq \\mathbf {X}_m^{0}$, then $p^{e}(y|x_s, x_m^{1}, do(x_m^{0}))$ $\\!\\!\\equiv \\!p(y|x_s,do(x_m))$ and $\\operatorname{E}(Y|x_s, x_m^{1}, do(x_m^{0}))\\!\\equiv \\!\\operatorname{E}(Y|x_s,do(x_m))$ , where $x_m^{1}\\!\\!\\!=\\!x_m\\!\\setminus x_m^{0}$ .", "Let $p^{*}(x, y)$ $\\!=\\!p^e(x,y)p^*(x_m^0)/\\prod _{x_i\\in x_m^0}p(x_i|pa_i)$ , then $\\operatorname{E}_{P^{*}}(Y|x)=\\operatorname{E}(Y|x_s,do(x_m))$ .", "In Algorithm REF , we regenerate the data distributed as $P^{*}$ defined by selecting $\\mathbf {X}_{m}^{0}\\!\\!=\\!\\lbrace \\text{Ch}_y\\!\\cap \\!\\mathbf {X}_{m}\\rbrace \\cup $ $\\lbrace X_i\\!\\in \\!\\mathbf {X}_{m} \\text{with a path to } \\text{Ch}_y\\!\\cap \\!\\mathbf {X}_{m} \\text{ and a directed path to } \\text{Ch}_y\\!\\setminus \\!", "\\mathbf {X}_m\\rbrace $.", "The reason to include the second part is to reduce the number of variables to be regenerated.", "For example, suppose there are paths $\\lbrace Y\\!\\rightarrow \\!X^{s}_1\\!\\leftarrow \\!", "X_1^{m}\\!\\leftarrow \\!", "X_2^{m},Y\\!\\rightarrow \\!X_2^{m}\\rbrace $ where $X_1^m\\!, X_2^{m}\\in \\!\\mathbf {X}^{m}$, then intervening on $\\text{Ch}_y\\!=\\!X_2^{m}$ requires regenerating $X_{1}^{m}$ and $X_1^s$ , while we only need to regenerate $X_1^s$ after intervening on $X_1^{m}$ and $X_2^m$ .", "Note that $\\overline{\\text{An}}(\\text{Ch}_y)\\cap \\text{De}{(\\mathbf {X}_{m})}$ does not contain nodes in $\\mathbf {X}_m$ as the ancestors/descendants are with respect to the intervened graph.", "Therefore, the mapping from $\\text{PA}_i$ to $X_i\\!\\in \\overline{\\text{An}}(\\text{Ch}_y)\\cap \\text{De}{(\\mathbf {X}_{m})}$ is invariant an can be readily learned from data, which validates the step in line REF .", "Now we explain why it suffices to regenerate $\\overline{\\text{An}}(\\text{Ch}_y)\\cap \\text{De}{(\\mathbf {X}_{m})}$ instead of the full set Desc${(\\mathbf {X}^{0}_{m})}$ that include all variables that would change in response to ${\\mathbf {X}^{0}_{m}}$.", "Although the notations are involved, the ideas are natural where we concentrate on the Markov Blanket [31] of $Y$ .", "The Markov blanket $\\mathbf {X}$$_{\\text{blanket}}$ =$\\lbrace \\text{PA}_y, \\text{Ch}_y, \\text{PA}(\\text{Ch}_y)\\rbrace $ and $\\lbrace Y\\!\\perp \\!\\mathbf {X}_\\text{blanket}|\\mathbf {X}\\rbrace _{P^*}$ , which leads to $\\operatorname{E}(Y|x)\\!=\\!\\operatorname{E}(Y|x_{\\text{blanket}})$ .", "We only need to regenerate the variables included in the Markov blanket.", "Note 
$\\text{PA}_y$ does not need to be regenerated because the path between the ancestors of $\\text{PA}_y$ and $Y$ must be d-separated by $\\text{PA}_y$ , while $\\text{Ch}_y$ and its ancestors need to be regenerated due to the collider structure $Y\\!\\rightarrow \\!\\text{Ch}_y\\!\\leftarrow \\!\\text{PA}(\\text{Ch}_y)$.", "Therefore, we only need to regenerate $\\overline{\\text{An}}(\\text{Ch}_y)\\cap \\text{De}{(\\mathbf {X}_{m})}$.", "Estimates $\\operatorname{E}(Y|x_s, do(x_m))$ [1] $X_i\\in $ $\\overline{\\text{An}}(\\text{Ch}_y)\\cap \\text{De}{(\\mathbf {X}_{m})}$ train a regressor $f_i$ from PA$_i$ to $X_i$ in the pooled data $X_i\\in \\mathbf {X}^{0}_m$ shuffle the data $\\lbrace \\lbrace x_{ik}^{e}\\rbrace _{k=1}^{n_e}\\rbrace _{e\\in \\mathcal {E}_{\\text{train}}}$ by randomizing the indexes ($x_{ik}^{e}$ denote the k-th record of variable $X_i$ in domain $e$ ).", "$X_i\\in $ $\\overline{\\text{An}}(\\text{Ch}_y)\\cap \\text{De}{(\\mathbf {X}_{m})}$ replace the original record $x_{ik}^{e}$ by $f_i(pa_{ik})$ based on the shuffled $pa_i$ .", "Train a regressor $f^{*}(x)$ from $x$ to $Y$ over the above regenerated data." ], [ "Experimental Results", "In this section, we illustrate and evaluate our proposed method on both synthetic data and real-world public data from a causal inference challenge [32]." ], [ "Simulation Study on Synthetic Data", "The data generating process follows the causal graph $G$ in Figure REF that induces the causal factorization $p(x, y)\\!\\!=\\!\\!p(x_3)p(y|x_3)p^e(x_1|y)p(x_2|y,x_1,x_3)$ , where $X_1$ is the mutable variable.", "We firstly generate the exogenous variable $X_3$ from $N(5,1^2)$ , then $Y$ from $Y|X_3\\!\\sim \\!", "N(0.5X_3, 0.5^2)$, followed by $X_1|Y\\!\\sim \\!", "N(\\alpha ^e Y, 0.1^2)$ and $X_2|Y,X_1,X_3\\!\\sim \\!", "N(Y\\!-\\!X_1\\!+\\!X_3, 0.1^2)$, where the linear parameter $\\alpha ^{e}$ changes across domains $e\\!\\in \\!\\mathcal {E}$ .", "The training data is generated from $\\alpha ^{e_\\text{train}}\\!=\\!1$ , while in the test data, $-2\\le \\!\\alpha ^{e}\\!\\le 4$ .", "As the causal graph is known in the simulation, we only need to implement the third step to estimate $\\operatorname{E}(Y|x_2,x_3,do(x_1))$ .", "We construct $p^{*}(x,y)\\!=\\!p(x_3)p(y|x_3)p^*(x_1)$ $p(x_2|y,x_1,x_3)$ , where $p^*(x_1)$ replaces the causal conditional $p^e(x_1|y)$ such that $p^{*}(y|x_1, x_2, x_3)\\!=\\!p^{e}(y|x_2,x_3, do(x_1))$ .", "In terms of the causal graph, $P^{*}$ is compatible with the $G^{*}$ in Figure REF , where the arrow $Y\\!\\rightarrow \\!X_1$ is cut off to remove the non-invariant dependence between $Y$ and $X_1$ .", "To obtain data distributed as $p^{*}(x,y)$ based on data generated from $p^{e_\\text{train}}(x,y)$ , we firstly shuffle the training data of $X_1$ by randomizing the indexes, which corresponds to replacing $p^{e_\\text{train}}(x_1|y)$ to the marginal distribution $p^{*}(x_1)\\!=\\!\\int p^{e_\\text{train}}(x_1|y)p^{e_\\text{train}}(y)dy$ .", "Next, as $X_2$ is the descendant of $X_1$, we regenerate $X_2$ by $X_2|Y,X_1,X_3\\sim N(Y\\!-\\!X_1\\!+\\!X_3, 0.1^2)$ based on the shuffled $X_1$ and the original $Y$ and $X_3$.", "Then the regenerated data is distributed as $P^*$ .", "As $\\operatorname{E}_{P^{*}}(Y|x_1, x_2, x_3)\\!=\\!\\operatorname{E}(Y|x_2, x_3, do(x_1))$, the invariant predictor can be obtained from the regenerated data by normal regression methods.", "Figure: Root Mean Square Errors in different domainsFigure REF shows the root mean square errors (RMSE) by the ordinary least square (OLS) 
regression using all covariates, the OLS regression using the causative variable $X_3$, and our method of implementing the OLS regression on the regenerated data.", "Among the three methods, using all covariates uses the non-transferable information contained in $Y\\!\\rightarrow \\!X_1$ and hence the performance deteriorates greatly when $a^{e}\\ne a^{e_{\\text{train}}}$ .", "While only causative variable is an invariant, it loses the transferable information in $\\lbrace Y, X_1\\rbrace \\!\\rightarrow \\!X_2$.", "By contrast,our estimator $\\operatorname{E}(Y|x_2,x_3,do(x_1))$ maximally reserved the transferable information contained in $X_3\\!\\rightarrow \\!Y$ and $\\lbrace Y, X_1\\rbrace \\!\\rightarrow \\!X_2$ and proactively removed the dependence brought by $Y\\!\\rightarrow \\!", "X_1$.", "Further, we can validate such empirical observations by the theoretical derivation, we can calculate that $Y|x_2, x_3, do(x_1)\\!\\sim \\!N(\\mu , \\sigma ^2)$ for $\\mu \\!=\\!\\left[0.1^2 X_{3}+0.5^2\\left(X_{1}\\!+\\!X_{2}\\!-\\!X_3\\right)\\right]/(0.5^2+0.1^2)$ and $\\sigma ^2\\!=\\!", "(0.5^{-2}\\!\\!+\\!0.1^{-2})^{-1}$, which has lower variance than $Y|x_3\\sim N(0.5x_3, 0.5^2)$." ], [ "Hematology Dataset of Mutant Mouse", "We perform the empirical evaluation on a public dataset provided by the International Mouse Phenotyping Consortium (IMPC) that was used for a causal inference challenge in [32].", "The dataset collects hematology phenotypes of both wild-type mice and mutant mice with 13 kinds of single-gene knockout, where different gene knockouts are treated as different domains.", "We focus on the phenotypes related to the immune system, which includes the cell counts of neutrophil (NEU), lymphocyte (LYM), monocyte (MON), eosinophil (EOS), basophil (BAS), and large unstained cells (LUC).", "The task is to predict LYM from the other five covariates on the mutant mice.", "To implement the first and second steps in Section REF , we take the wild-type mice and three kinds of gene knockouts as the training data.", "The LUC is detected to be the mutable variable by Algorithm REF .", "Figure REF shows the output of Algorithm REF , where we determine the directions between the mutable variable LUC and its adjacent variables to identify $\\text{De}(\\text{LUC})\\!=\\!\\lbrace \\text{BAS}\\rbrace $, and then determine the direction between the target LYM and BAS to identify that $\\text{Ch}_y\\cap \\text{De}(\\text{LUC})\\!=\\!\\text{BAS}$.", "Then intervening on LUC refers to removing the arrows $\\lbrace \\text{MON}, \\text{LRM}\\rbrace \\!\\rightarrow \\!\\text{LUC}$.", "For the third step, each time we additionally take two out of the remaining ten gene knockouts to merge them with the training data for causal discovery as the training data for estimation.", "This produces 45 pairs of training and test data, where each training data include the wild-type and five gene knockouts and each test data include eight gene knockouts.", "In the data regeneration step, we firstly learn a predictor $f_{\\text{LUC}}$ from $\\lbrace \\text{LYM}, \\text{LUC}\\rbrace $ to BAS, then shuffle the data of LUC and regenerate BAS by BAS$ =\\!f_{\\text{LUC}}$ (LYM, LUC) using the shuffled LUC.", "Then we train a predictor $f_\\text{LYM}$ on the intervened data.", "We use Random Forest (RF) regressor from scikit-learn with default parameters for learning $f_{\\text{LUC}}$ and $f_\\text{LYM}$ .", "For comparison, we also directly implement the RF on the original data.", "Besides, we implemented the Invariant Risk 
Minimization (IRM) in [2], Invariant Conditional (IC) Method in [4], and the invariant causal predictor in [33], where IRM and IC are implemented by the python codes in the authors' github repositories [34], [35] with default parameters and ICP is implemented by the R package `InvariantCausalPrediction'.", "In each of the 45 rounds of training/test split, we take the maximum of mean squared errors (MSE) among the eight test domains, which is consistent with the robustness measurement in (REF ).", "The distribution of maximum MSE is shown in Figure REF .", "Our proposed estimator is the most robust as it shows an advantageous predictive performance with a significantly lower maximum MSE.", "The detailed results (in supplements) show that using RF on intervened data obtains the lowest maximum MSE in 36 out of 45 rounds, while RF on original data in 3 round, IC method in 3 rounds, ICP in 2 round, and IRM in 1 round, respectively.", "Figure: Distribution of maximum MSE from 45 replications" ], [ "Conclusion and Discussion", "This paper proposes a causal framework and a principled learning method for domain generalization.", "We formulate the distribution shift as external interventions to the system and propose a predictor based on the do-operator in the causal graph literature.", "We prove the invariance property to ensure the transferability and prove the min-max property to guarantee the robust predictive performance in unseen domains.", "We present an empirical estimating method based on data regeneration that can be readily incorporated with standard supervised learning method.", "As for the future work, we will consider using the concept of generalized entropy to extend the current results to a more general loss function (except from quadratic loss), and explore how to extends the current framework and learning method to unstructured data by introducing unobserved latent variables." ] ]
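For concreteness, the sketch below runs the permutation-regeneration scheme end to end on the synthetic model of the simulation study: data are drawn in the training domain with $\alpha^{e_{\text{train}}}=1$, the mutable variable $X_1$ is shuffled, its descendant $X_2$ is regenerated from the fitted mechanism (records are replaced by the regressor output, following the replacement step of the algorithm, so no noise is re-added), and least squares trained on the regenerated data is compared with least squares on the original data and on $X_3$ alone across test domains with $-2\le\alpha^{e}\le 4$. Sample sizes and the choice of plain least squares are illustrative.

```python
# End-to-end sketch of the permutation-regeneration scheme on the synthetic model.
import numpy as np

rng = np.random.default_rng(42)

def sample_domain(alpha, n):
    x3 = rng.normal(5, 1, n)
    y = 0.5 * x3 + rng.normal(0, 0.5, n)
    x1 = alpha * y + rng.normal(0, 0.1, n)
    x2 = y - x1 + x3 + rng.normal(0, 0.1, n)
    return x1, x2, x3, y

def ols_fit(X, y):
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def ols_predict(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

# Training data from the single training domain (alpha = 1).
x1, x2, x3, y = sample_domain(1.0, 20_000)

# Shuffle the mutable variable X1, then regenerate its descendant X2 from the
# mechanism fitted on the training data (OLS suffices since the truth is linear).
f2 = ols_fit(np.column_stack([y, x1, x3]), x2)
x1_shuf = rng.permutation(x1)
x2_regen = ols_predict(f2, np.column_stack([y, x1_shuf, x3]))

# Three predictors: all covariates, the causal parent X3 only, regenerated data.
f_all   = ols_fit(np.column_stack([x1, x2, x3]), y)
f_cause = ols_fit(x3[:, None], y)
f_inv   = ols_fit(np.column_stack([x1_shuf, x2_regen, x3]), y)

for alpha in [-2.0, 0.0, 1.0, 2.0, 4.0]:
    t1, t2, t3, ty = sample_domain(alpha, 20_000)
    Xt = np.column_stack([t1, t2, t3])
    rmse = lambda pred: np.sqrt(np.mean((ty - pred) ** 2))
    print(f"alpha={alpha:+.1f}",
          "all:", round(rmse(ols_predict(f_all, Xt)), 3),
          "x3-only:", round(rmse(ols_predict(f_cause, t3[:, None])), 3),
          "regenerated:", round(rmse(ols_predict(f_inv, Xt)), 3))
```

On this model the regenerated-data predictor keeps a roughly constant test error across $\alpha^{e}$, while the all-covariates fit degrades away from the training value $\alpha^{e}=1$ and the $X_3$-only fit stays uniformly worse, mirroring the comparison reported in the simulation study.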
2107.01876
[ [ "On the generating function of the Pearcey process" ], [ "Abstract The Pearcey process is a universal point process in random matrix theory.", "In this paper, we study the generating function of the Pearcey process on any number $m$ of intervals.", "We derive an integral representation for it in terms of a Hamiltonian that is related to a system of $6m+2$ coupled nonlinear equations.", "We also obtain asymptotics for the generating function as the size of the intervals get large, up to and including the constant term.", "This work generalizes some recent results of Dai, Xu and Zhang, which correspond to $m=1$." ], [ "Introduction and statement of results", "In random matrix theory, the universality conjecture asserts that the microscopic behavior of the eigenvalues of large random matrices is similar for many different models.", "More precisely, it is expected that the local eigenvalue statistics only depend on the symmetry class of the matrix ensemble and on the nature of the point around which these statistics are considered [28], [30], [38].", "The Pearcey process is one of the canonical point processes from the theory of random matrices: it models the asymptotic behavior of the eigenvalues near the points of the spectrum where the density of states admits a cusp-like singularity.", "This process appears in Gaussian random matrices with an external source [12], [13], [8], Hermitian random matrices with independent, not necessarily identically distributed entries [27], and in general Wishart matrices with correlated entries [32].", "The Pearcey process is also universal in a sense that goes beyond random matrix theory: it appears in certain models of skew plane partitions [39] and of Brownian motions [1], [2], [8], [46], [31].", "The Pearcey process is the determinantal point process on $\\mathbb {R}$ associated with the kernel $K^{\\mathrm {Pe}}_{\\rho }(x,y) & = \\frac{\\mathcal {P}(x)\\mathcal {Q}^{\\prime \\prime }(y)-\\mathcal {P}^{\\prime }(x)\\mathcal {Q}^{\\prime }(y) + \\mathcal {P}^{\\prime \\prime }(x)\\mathcal {Q}(y) - \\rho \\mathcal {P}(x) \\mathcal {Q}(y)}{x-y}, $ where $\\rho \\in \\mathbb {R}$ , $\\mathcal {P}(x) = \\frac{1}{2\\pi } \\int _{-\\infty }^{\\infty }e^{-\\frac{1}{4}t^{4}-\\frac{\\rho }{2}t^{2}+itx}dt, \\qquad \\mathcal {Q}(y) = \\frac{1}{2\\pi } \\int _{\\Sigma }e^{\\frac{1}{4}t^{4}+\\frac{\\rho }{2}t^{2}+ity}dt,$ and $\\Sigma = (e^{\\frac{\\pi i}{4}}\\infty ,0)\\cup (0,e^{\\frac{3\\pi i}{4}}\\infty ) \\cup (e^{-\\frac{3\\pi i}{4}}\\infty ,0)\\cup (0,e^{-\\frac{\\pi i}{4}}\\infty )$ .", "As $\\rho $ increases, the point configurations with few points near 0 are increasingly likely to occur, and when $\\rho $ tends to $+\\infty $ , the Pearcey process “factorizes\" (in the sense of gap probabilities) into two independent Airy processes [6].", "Important progress on the large gap asymptotics for any fixed $\\rho \\in \\mathbb {R}$ have only recently been obtained in [23].", "This paper is inspired by the work [24] of Dai, Xu and Zhang, and is concerned with the moment generating function of the Pearcey process.", "Let $X$ be a locally finite random point configuration distributed according to the Pearcey process, and let $N(x) := \\#\\lbrace \\xi \\in X: \\xi \\in (-x,x)\\rbrace $ be the associated counting function.", "We are interested in the $m$ -point generating function $F(r\\vec{x},\\vec{u}) := \\mathbb {E}\\bigg [ \\prod _{j=1}^{m}e^{u_{j}N(rx_{j})} \\bigg ],$ where $r>0, \\qquad m \\in \\mathbb {N}_{>0}, \\qquad \\vec{u}=(u_{1},\\ldots ,u_{m}) \\in 
\\mathbb {R}^{m}, \\qquad \\vec{x}=(x_{1},\\ldots ,x_{m}) \\in \\mathbb {R}^{+,m}_{\\mathrm {ord}},$ and $\\mathbb {R}^{+,m}_{\\mathrm {ord}}:=\\lbrace (x_{1},\\ldots ,x_{m}):0<x_{1}<\\ldots <x_{m}<+\\infty \\rbrace $ .", "Our main results are stated in Theorems REF and REF below and can be summarized as follows: Theorem REF establishes an integral representation for $F(r\\vec{x},\\vec{u})$ in terms of a Hamiltonian related a system of $6m+2$ coupled differential equations.", "This system of equations admits at least one solution, which we derive from the Lax pair of a Riemann-Hilbert (RH) problem.", "The asymptotic properties of this solution are stated in Theorem REF .", "Theorem REF is concerned with the asymptotic properties of the generating function as the size of the intervals get large.", "Specifically, Theorem REF gives a precise asymptotic formula, up to and including the constant term, for $F(r\\vec{x},\\vec{u})$ as $r \\rightarrow +\\infty $ .", "For $m=1$ , Theorems REF , REF and REF have previously been obtained in [24].", "The general case $m \\ge 2$ allows to capture the correlation structure of the Pearcey process, see Corollary REF , and to analyze the joint fluctuations of the counting function, see Corollary REF .", "We also expect the case $m=2$ of Theorem REF to play an important role in future studies of the rigidity of the Pearcey process (we comment more on that at the end of this section).", "The relevant system of $6m+2$ coupled differential equations depends on unknown functions which are denoted $p_{0}(r), \\; q_{0}(r), \\; p_{j,k}(r), \\; q_{j,k}(r), \\qquad k=1,2,3, \\quad j=1,\\ldots ,m,$ and is as follows: $& {\\left\\lbrace \\begin{array}{ll}p_{0}^{\\prime }(r) = -\\sqrt{2}\\sum _{j=1}^{m}x_{j}p_{j,3}(r)q_{j,2}(r),\\\\q_{0}^{\\prime }(r) = \\sqrt{2}\\sum _{j=1}^{m}x_{j}p_{j,2}(r)q_{j,1}(r),\\\\q_{j,1}^{\\prime }(r) = \\frac{2}{r}S_{11}(r)q_{j,1}(r) + x_{j}q_{j,2}(r) + \\frac{2}{r}S_{31}(r)q_{j,3}(r), \\\\q_{j,2}^{\\prime }(r) = \\sqrt{2}p_{0}(r)x_{j}q_{j,1}(r) + \\frac{2}{r}S_{22}(r)q_{j,2}(r) + x_{j}q_{j,3}(r), \\\\q_{j,3}^{\\prime }(r) = (rx_{j}^{2} + \\frac{2}{r}S_{13}(r))q_{j,1}(r) + \\sqrt{2}q_{0}(r)x_{j}q_{j,2}(r) + \\frac{2}{r}S_{33}(r)q_{j,3}(r), \\\\p_{j,1}^{\\prime }(r) = -\\frac{2}{r}S_{11}(r)p_{j,1}(r) - \\sqrt{2}p_{0}(r)x_{j}p_{j,2}(r) - (rx_{j}^{2} + \\frac{2}{r}S_{13}(r))p_{j,3}(r), \\\\p_{j,2}^{\\prime }(r) = -x_{j}p_{j,1}(r) - \\frac{2}{r}S_{22}(r)p_{j,2}(r) - \\sqrt{2}q_{0}(r)x_{j}p_{j,3}(r), \\\\p_{j,3}^{\\prime }(r) = -\\frac{2}{r}S_{31}(r)p_{j,1}(r) - x_{j}p_{j,2}(r) - \\frac{2}{r}S_{33}(r)p_{j,3}(r),\\end{array}\\right.", "}$ where $j=1,\\ldots ,m$ , and $S_{kl}(r) = \\sum _{j=1}^{m} p_{j,k}(r)q_{j,l}(r), \\qquad k,l=1,2,3.$ Furthermore, we require the functions (REF ) to satisfy the following $m$ relations $\\sum _{k=1}^{3} p_{j,k}(r)q_{j,k}(r) = 0, \\qquad j=1,\\ldots ,m.$ Let $H(r)=H(r;p_{0},q_{0},\\lbrace p_{j,1},q_{j,1},p_{j,2},q_{j,2},p_{j,3},q_{j,3}\\rbrace _{j=1}^{m})$ be defined by $H(r) = & \\; \\sqrt{2}p_{0}(r)\\sum _{j=1}^{m}x_{j}p_{j,2}(r)q_{j,1}(r) + \\sqrt{2}q_{0}(r)\\sum _{j=1}^{m}x_{j}p_{j,3}(r)q_{j,2}(r) \\nonumber \\\\& + \\sum _{j=1}^{m} x_{j}p_{j,1}(r)q_{j,2}(r) + \\sum _{j=1}^{m} x_{j}p_{j,2}(r)q_{j,3}(r) +\\sum _{j=1}^{m}rx_{j}^{2}p_{j,3}(r)q_{j,1}(r) \\nonumber \\\\& + \\frac{1}{2r}\\bigg (\\Big ( S_{11}(r) - S_{22}(r) + S_{33}(r) \\Big )^{2} \\nonumber \\\\& - 2 \\sum _{k=1}^{m}\\sum _{\\ell =1}^{m}\\Big (p_{k,1}(r)p_{\\ell ,3}(r)-p_{k,3}(r)p_{\\ell ,1}(r)\\Big )\\Big (q_{k,1}(r)q_{\\ell 
,3}(r)-q_{k,3}(r)q_{\\ell ,1}(r)\\Big ) \\bigg ).", "$ It is readily checked that $q_{0}^{\\prime }(r)=\\frac{\\partial H}{\\partial p_{0}}(r)$ , $p_{0}^{\\prime }(r)=-\\frac{\\partial H}{\\partial q_{0}}(r)$ and $q_{j,k}^{\\prime }(r) = \\frac{\\partial H}{\\partial p_{j,k}}(r), \\qquad p_{j,k}^{\\prime }(r) = -\\frac{\\partial H}{\\partial q_{j,k}}(r), \\qquad j=1,\\ldots ,m, \\; k=1,2,3,$ and therefore $H$ is a Hamiltonian for the system (REF )–(REF ).", "Theorem 1.1 Let $\\rho \\in \\mathbb {R}$ , $r>0, \\qquad m \\in \\mathbb {N}_{>0}, \\qquad \\vec{u}=(u_{1},\\ldots ,u_{m}) \\in \\mathbb {R}^{m}, \\quad \\mbox{ and } \\quad \\vec{x}=(x_{1},\\ldots ,x_{m}) \\in \\mathbb {R}^{+,m}_{\\mathrm {ord}}.$ There exists at least one solution $(p_{0},q_{0},\\lbrace p_{j,1},q_{j,1},p_{j,2},q_{j,2},p_{j,3},q_{j,3}\\rbrace _{j=1}^{m})$ to the system of equations (REF ) and (REF ) satisfying the following asymptotics.", "As $r \\rightarrow + \\infty $ , we have $& p_{0}(r) = \\frac{\\sqrt{3}}{2\\sqrt{2}\\pi }\\sum _{\\ell =1}^{m}u_{\\ell }x_{\\ell }^{\\frac{2}{3}}r^{\\frac{2}{3}} + \\frac{1}{\\sqrt{2}}\\bigg ( \\frac{\\rho ^{3}}{54}+\\frac{\\rho }{2} \\bigg ) + {\\mathcal {O}}(r^{-\\frac{2}{3}}), \\\\& p_{j,1}(r) = -\\frac{1}{3\\pi i} e^{\\frac{1}{2}\\theta _{3}(rx_{j})} (rx_{j})^{\\frac{1}{3}}\\frac{u_{j}}{\\mathcal {A}_{j}} \\nonumber \\\\& \\hspace{42.67912pt} \\times \\bigg ( \\cos (\\vartheta _{j}(r)-\\tfrac{\\pi }{3}) + \\frac{\\sqrt{3}}{2\\pi } \\sum _{\\ell =1}^{m} u_{\\ell }\\frac{x_{\\ell }^{2/3}}{x_{j}^{2/3}}\\cos (\\vartheta _{j}(r)+\\tfrac{\\pi }{3}) \\bigg )(1+{\\mathcal {O}}(r^{-\\frac{2}{3}})) \\\\& p_{j,2}(r) = \\frac{1}{3\\pi i} e^{\\frac{1}{2}\\theta _{3}(rx_{j})} \\frac{u_{j}}{\\mathcal {A}_{j}} \\cos (\\vartheta _{j}(r)) (1+{\\mathcal {O}}(r^{-\\frac{2}{3}})) \\\\& p_{j,3}(r) = -\\frac{1}{3\\pi i} e^{\\frac{1}{2}\\theta _{3}(rx_{j})} (rx_{j})^{-\\frac{1}{3}}\\frac{u_{j}}{\\mathcal {A}_{j}} \\cos (\\vartheta _{j}(r)+\\tfrac{\\pi }{3}) (1+{\\mathcal {O}}(r^{-\\frac{2}{3}})) \\\\& q_{0}(r) = -\\frac{\\sqrt{3}}{2\\sqrt{2}\\pi } \\sum _{\\ell =1}^{m}u_{\\ell }x_{\\ell }^{\\frac{2}{3}}r^{\\frac{2}{3}} + \\frac{1}{\\sqrt{2}}\\bigg ( -\\frac{\\rho ^{3}}{54}+\\frac{\\rho }{2} \\bigg ) + {\\mathcal {O}}(r^{-\\frac{2}{3}}), \\\\& q_{j,1}(r) = 2ie^{-\\frac{1}{2}\\theta _{3}(r x_{j})}(r x_{j})^{-\\frac{1}{3}} \\mathcal {A}_{j}\\sin (\\vartheta _{j}(r)-\\tfrac{\\pi }{3})(1+{\\mathcal {O}}(r^{-\\frac{2}{3}})), \\\\& q_{j,2}(r) = -2ie^{-\\frac{1}{2}\\theta _{3}(r x_{j})} \\mathcal {A}_{j}\\sin (\\vartheta _{j}(r))(1+{\\mathcal {O}}(r^{-\\frac{2}{3}})), \\\\& q_{j,3}(r) = 2ie^{-\\frac{1}{2}\\theta _{3}(r x_{j})}(r x_{j})^{\\frac{1}{3}} \\mathcal {A}_{j} \\nonumber \\\\& \\hspace{42.67912pt}\\times \\bigg (\\sin (\\vartheta _{j}(r)+\\tfrac{\\pi }{3})-\\frac{\\sqrt{3}}{2\\pi }\\sum _{\\ell =1}^{m}u_{\\ell } \\frac{x_{\\ell }^{2/3}}{x_{j}^{2/3}}\\sin (\\vartheta _{j}(r)-\\tfrac{\\pi }{3})\\bigg )(1+{\\mathcal {O}}(r^{-\\frac{2}{3}})), $ where $\\theta _{3}(r)=\\frac{3}{4}r^{\\frac{4}{3}}+\\frac{\\rho }{2}r^{\\frac{2}{3}}$ , $& \\mathcal {A}_{j} = |\\Gamma (1-\\tfrac{u_{j}}{2\\pi i})|\\exp \\bigg ( -\\frac{u_{j}}{3}-\\sum _{k=j+1}^{m} \\frac{u_{k}}{2}-\\sum _{\\begin{array}{c}k=1 \\\\ k \\ne j\\end{array}}^{m} \\frac{u_{k}}{2 \\pi }\\arctan \\frac{\\sqrt{3}x_{k}^{2/3}}{x_{k}^{2/3}+2x_{j}^{2/3}} \\bigg ), \\\\& \\vartheta _{j}(r) = -\\frac{3\\sqrt{3}}{8}(rx_{j})^{\\frac{4}{3}}+\\frac{\\sqrt{3}\\rho }{4}(rx_{j})^{\\frac{2}{3}} + \\arg \\Gamma (1-\\tfrac{u_{j}}{2\\pi i}) \\nonumber \\\\& 
\\hspace{34.14322pt} - \\frac{u_{j}}{2\\pi } \\bigg ( \\frac{4}{3}\\log (rx_{j}) + \\log \\frac{9}{2} \\bigg ) - \\sum _{\\begin{array}{c}k=1 \\\\ k \\ne j\\end{array}}^{m} \\frac{u_{k}}{2\\pi }\\log \\frac{|x_{j}^{2/3}-\\omega x_{k}^{2/3}|}{|x_{j}^{2/3}- x_{k}^{2/3}|}, $ $\\omega :=e^{\\frac{2\\pi i}{3}}$ and $\\Gamma $ is Euler's Gamma function.", "As $r \\rightarrow 0$ , we have $& p_{0} = \\frac{1}{\\sqrt{2}}\\bigg ( \\frac{\\rho ^{3}}{54}+\\frac{\\rho }{2} \\bigg ) + {\\mathcal {O}}(r), \\qquad q_{0} = \\frac{1}{\\sqrt{2}}\\bigg ( -\\frac{\\rho ^{3}}{54}+\\frac{\\rho }{2} \\bigg ) + {\\mathcal {O}}(r) \\\\& p_{j,1}(r) = {\\mathcal {O}}(r), \\qquad p_{j,2}(r) = {\\mathcal {O}}(1), \\qquad p_{j,3}(r) = {\\mathcal {O}}(r), \\qquad j=1,\\ldots ,m, \\\\& q_{j,1}(r) = {\\mathcal {O}}(1), \\qquad \\hspace{0.48364pt} q_{j,2}(r) = {\\mathcal {O}}(r), \\qquad \\hspace{0.48364pt} q_{j,3}(r) = {\\mathcal {O}}(1), \\qquad \\hspace{0.48364pt} j=1,\\ldots ,m. $ Remark 1.2 Interestingly, for $m=1$ , the functions $p_{0}$ and $q_{0}$ satisfy a system of coupled differential equations, see [24] (see also [12] for $\\rho =0$ ).", "However, it is unclear to us if this result admits a natural analogue for $m\\ge 2$ .", "It is well-known since the works of Jimbo, Miwa, Môri and Sato [37] and of Tracy and Widom [44], [45] that the 1-point generating functions of the universal sine, Airy and Bessel point processes are naturally related to the Painlevé theory.", "We also refer to [33], [3] for some Hamiltonian structures associated with the $m$ -point functions of the sine and Airy processes, and to [21], [19] for some representations of the $m$ -point functions of the Airy and Bessel processes in terms of the solution to a system of $m$ coupled Painlevé equations.", "It was shown in [24] that the 1-point function of the Pearcey process admits an elegant representation in terms of a Hamiltonian associated to a system of 8 coupled differential equations.", "Our first main result generalizes [24] to an arbitrary $m$ .", "Theorem 1.3 Let $\\rho \\in \\mathbb {R}$ , $r>0, \\qquad m \\in \\mathbb {N}_{>0}, \\qquad \\vec{u}=(u_{1},\\ldots ,u_{m}) \\in \\mathbb {R}^{m}, \\quad \\mbox{ and } \\quad \\vec{x}=(x_{1},\\ldots ,x_{m}) \\in \\mathbb {R}^{+,m}_{\\mathrm {ord}}.$ The following relation holds $F(r\\vec{x},\\vec{u}) = \\exp \\bigg ( 2 \\int _{0}^{r}H(\\tau )d \\tau \\bigg ),$ with $H$ given by (REF ), and where $(p_{0},q_{0},\\lbrace p_{j,1},q_{j,1},p_{j,2},q_{j,2},p_{j,3},q_{j,3}\\rbrace _{j=1}^{m})$ is a solution to the system of equations (REF ) and (REF ) which satisfies the asymptotic formulas (REF ) and (REF ).", "Furthermore, $H(r) = {\\mathcal {O}}(1), \\qquad \\mbox{as } r \\rightarrow 0,$ and as $r \\rightarrow +\\infty $ , $& H(r) = \\sum _{j=1}^{m} \\bigg ( \\frac{\\sqrt{3}}{2\\pi }u_{j}x_{j}^{\\frac{4}{3}}r^{\\frac{1}{3}} - \\frac{\\rho }{2\\sqrt{3}\\pi } u_{j} x_{j}^{\\frac{2}{3}}r^{-\\frac{1}{3}} + \\frac{u_{j}^{2}}{3\\pi ^{2}r} - \\frac{u_{j}}{3\\sqrt{3}\\pi r}\\cos (2\\vartheta _{j}(r)) \\bigg ) +{\\mathcal {O}}(r^{-\\frac{5}{3}}),$ where $\\vartheta _{j}(r)$ is defined in ().", "Since the functions $(p_{0},q_{0},\\lbrace p_{j,1},q_{j,1},p_{j,2},q_{j,2},p_{j,3},q_{j,3}\\rbrace _{j=1}^{m})$ appearing in the integral representation (REF ) are rather complicated objects, it is natural to try to approximate $F(r\\vec{x},\\vec{u})$ for small and large values of $r$ with some explicit asymptotic formulas.", "Because $F(r\\vec{x},\\vec{u})$ is a Fredholm determinant (see (REF ) below), the 
asymptotics of $F(r\\vec{x},\\vec{u})$ as $r \\rightarrow 0$ can be easily obtained from an analysis of the kernel $K_{\\rho }^{\\mathrm {Pe}}(x,y)$ near $(x,y)=(0,0)$ .", "A much more complicated question is to approximate $F(r\\vec{x},\\vec{u})$ for large values of $r$ .", "In the case of the sine, Airy and Bessel processes, the asymptotics for the $m$ -point generating functions are known, see [5], [15] for sine, [10], [17] for Airy, [11], [14] for Bessel and [20] for the transition between Bessel and Airy.", "The asymptotics for the 1-point generating function of the Pearcey process, up to and including the notoriously difficult constant term, have recently been established in [24].", "We provide here the generalization of [24] to an arbitrary $m$ .", "Theorem 1.4 Let $\\rho \\in \\mathbb {R}, \\qquad m \\in \\mathbb {N}_{>0}, \\qquad \\vec{u}=(u_{1},\\ldots ,u_{m}) \\in \\mathbb {R}^{m}, \\quad \\mbox{ and } \\quad \\vec{x}=(x_{1},\\ldots ,x_{m}) \\in \\mathbb {R}^{+,m}_{\\mathrm {ord}}.$ As $r \\rightarrow +\\infty $ , we have $F(r\\vec{x},\\vec{u})=\\exp \\bigg ( \\sum _{j=1}^{m}u_{j}\\mu _{\\rho }(rx_{j}) + \\sum _{j=1}^{m}\\frac{u_{j}^{2}}{2}\\sigma ^{2}(rx_{j}) + \\sum _{1\\le j < k \\le m} u_{j}u_{k}\\Sigma (x_{k},x_{j}) \\\\+ \\sum _{j=1}^{m} 2 \\log \\big ( G(1-\\tfrac{u_{j}}{2\\pi i})G(1+\\tfrac{u_{j}}{2\\pi i}) \\big ) + {\\mathcal {O}}(r^{-\\frac{2}{3}}) \\bigg ), $ where $G$ is Barnes' $G$ -function, and $\\mu _{\\rho }$ , $\\sigma ^{2}$ and $\\Sigma $ are given by $& \\mu _{\\rho }(x) = \\frac{3\\sqrt{3}}{4\\pi }x^{\\frac{4}{3}}- \\frac{\\sqrt{3}\\rho }{2\\pi } x^{\\frac{2}{3}}, \\\\& \\sigma ^{2}(x) = \\frac{4}{3\\pi ^{2}} \\log x+\\frac{1}{\\pi ^{2}} \\log \\frac{9}{2}, \\\\& \\Sigma (x_{k},x_{j}) = \\frac{1}{\\pi ^{2}} \\log \\frac{|x_{j}^{2/3}-\\omega x_{k}^{2/3}|}{|x_{j}^{2/3}- x_{k}^{2/3}|},$ where $\\omega =e^{\\frac{2\\pi i}{3}}$ .", "Furthermore, (REF ) holds uniformly for $\\rho $ in compact subsets of $\\mathbb {R}$ , for $\\vec{u}$ in compact subsets of $\\mathbb {R}^{m}$ , and for $\\vec{x}$ in compact subsets of $\\mathbb {R}^{+,m}_{\\mathrm {ord}}$ .", "The asymptotic formula (REF ) can also be differentiated any number of times with respect to $u_{1},\\ldots ,u_{m}$ at the expense of increasing the error term in the following way.", "Let $\\widetilde{F}(r\\vec{x},\\vec{u})$ be the right-hand side of (REF ) without the error term, and denote the error term by $\\mathcal {E}=\\log F(r\\vec{x},\\vec{u}) - \\log \\widetilde{F}(r\\vec{x},\\vec{u})$ .", "For any $k_{1},\\ldots ,k_{m} \\in \\mathbb {N}_{\\ge 0}$ , we have $\\partial _{u_{1}}^{k_{1}}\\ldots \\partial _{u_{m}}^{k_{m}} \\mathcal {E} = {\\mathcal {O}}\\bigg ( \\frac{(\\log r)^{k_{1}+\\ldots +k_{m}}}{r^{2/3}} \\bigg ), \\qquad \\mbox{as } r \\rightarrow + \\infty .$ We end this section by providing several new applications of Theorem REF , and we also discuss its relevance in future studies of the rigidity of the Pearcey process." 
], [ "Applications of Theorem ", "Using (REF ) with $m=1$ , (REF ), $\\partial _{u} \\log F(r,u)|_{u=0} = \\mathbb {E}[N(r)] \\quad \\mbox{ and } \\quad \\partial _{u}^{2} \\log F(r,u)|_{u=0} = \\mbox{Var}[N(r)],$ it readily follows that $& \\mathbb {E}[N(r)] = \\mu _{\\rho }(r) + {\\mathcal {O}}\\bigg ( \\frac{\\log r}{r^{2/3}} \\bigg ), & & \\mbox{as } r \\rightarrow + \\infty , \\\\& \\mbox{Var}[N(r)] = \\sigma ^{2}(r) + \\frac{1+\\gamma _{\\mathrm {E}}}{\\pi ^{2}} + {\\mathcal {O}}\\bigg ( \\frac{(\\log r)^{2}}{r^{2/3}} \\bigg ), & & \\mbox{as } r \\rightarrow + \\infty , $ where $\\gamma _{\\mathrm {E}}$ is Euler's gamma constant.", "These asymptotic formulas for the expectation and variance of $N(r)$ are not new and were already obtained in [24].", "We see from () that the large $r$ asymptotics of $\\mbox{Var}[N(r)]$ are of the form $c_{1}\\log r + c_{2} + o(1)$ for some explicit constants $c_{1}$ and $c_{2}$ .", "The asymptotics for the variance of the counting functions of other classical point processes such as the sine, Airy and Bessel point processes are also of the same form [42], [15], [17], [14] (with different values for $c_{1}$ and $c_{2}$ ).", "This phenomena is expected to be universal, in the sense that it is expected to hold for many point processes in random matrix theory and other related fields, see the very general predictions [40], [41].", "We emphasize that the proof of (REF ) and () only relies on Theorem REF with $m=1$ .", "Using Theorem REF with $m=2$ allows to obtain new results on the correlation structure of the Pearcey process.", "More precisely, using (REF ) with $m=2$ , (REF ), and $\\partial _{u}^{2} \\log \\bigg ( \\frac{F((x_{1},x_{2}),(u,u))}{F(x_{1},u)F(x_{2},u)} \\bigg ) \\bigg |_{u=0} = 2 \\, \\mbox{Cov}\\big (N(x_{1}),N(x_{2})\\big ),$ we directly obtain the following result.", "Corollary 1.5 Let $x_{2}>x_{1}>0$ be fixed.", "As $r \\rightarrow + \\infty $ , $\\mathrm {Cov}\\big (N(rx_{1}),N(rx_{2})\\big ) = \\Sigma (x_{k},x_{j}) + {\\mathcal {O}}\\bigg ( \\frac{(\\log r)^{2}}{r^{2/3}} \\bigg ).$ In [24], the authors also proved that the random variable $(N(r)-\\mu _{\\rho }(r))/\\sqrt{\\sigma ^{2}(r)}$ converges in distribution as $r \\rightarrow + \\infty $ to a normal random variable with mean 0 and variance 1.", "Using Theorem REF , we obtain the following generalization of this result.", "Corollary 1.6 Let $0<x_{1}<\\ldots <x_{m}<+\\infty $ be fixed and consider the random variables $N_{j}^{(r)}$ defined by $N_{j}^{(r)} = \\frac{N(rx_{j})-\\mu _{\\rho }(rx_{j})}{\\sqrt{\\sigma ^{2}(rx_{j})}}, \\qquad j=1,\\ldots ,m.$ As $r \\rightarrow +\\infty $ , we have $\\big ( N_{1}^{(r)},N_{2}^{(r)},\\ldots ,N_{m}^{(r)}\\big ) \\quad \\overset{d}{\\longrightarrow } \\quad \\mathcal {N}(\\vec{0},I_{m}),$ where $I_{m}$ is the $m \\times m$ identity matrix, and $\\mathcal {N}(\\vec{0},I_{m})$ is a multivariate normal random variable of mean $\\vec{0}=(0,\\ldots ,0)$ and covariance matrix $I_{m}$ .", "Let $a_{1},\\ldots ,a_{m} \\in \\mathbb {R}$ be arbitrary and fixed (i.e.", "independent of $r$ ).", "It directly follows from (REF ) and (REF ) with $u_{j}=\\frac{\\sqrt{3}\\pi }{2}\\frac{a_{j}}{\\sqrt{\\log r}}$ , $j=1,\\ldots ,m$ , that $\\mathbb {E}\\bigg [ \\prod _{j=1}^{m}e^{a_{j}N_{j}^{(r)}} \\bigg ] = \\exp \\bigg ( \\sum _{j=1}^{m} \\frac{a_{j}^{2}}{2} + {\\mathcal {O}}\\bigg ( \\frac{1}{\\sqrt{\\log r}} \\bigg ) \\bigg ), \\qquad \\mbox{as } r \\rightarrow + \\infty .$ In other words, the moment generating function of $\\big ( 
N_{1}^{(r)},N_{2}^{(r)},\ldots ,N_{m}^{(r)}\big )$ converges as $r \rightarrow + \infty $ pointwise in $\mathbb {R}^{m}$ to the moment generating function of $\mathcal {N}(\vec{0},I_{m})$ .", "This implies the convergence in distribution (REF ) by standard probability theorems, see e.g.", "[7]." ], [ "Possible future applications of Theorem ", "Corollary REF gives information about the joint fluctuations of the counting function at $m$ well-separated points $rx_{1},\ldots ,rx_{m}$ .", "A more difficult question is to understand the global rigidity of the Pearcey process, that is, to understand the maximum fluctuation of the counting function.", "In recent years in random matrix theory, there has been a lot of progress in the study of rigidity of various point processes, see [29], [4] for important early works, [34] for the sine process, and [18] for the Airy and Bessel point processes.", "Of particular interest for us is the following result on the rigidity of the Pearcey process, which was proved in [16] (by combining results from [18] and [24]): for any $\epsilon > 0$ , the probability that $\mu _{\rho }(x)- \bigg ( \frac{4\sqrt{2}}{3\pi }+\epsilon \bigg ) \log x \le N(x) \le \mu _{\rho }(x) + \bigg ( \frac{4\sqrt{2}}{3\pi }+\epsilon \bigg ) \log x \qquad \mbox{for all }x>r$ tends to 1 as $r \rightarrow + \infty $ .", "Roughly speaking, this means that with high probability and for all large $x$ , $N(x)$ lies in a tube centered at $\mu _{\rho }(x)$ and of width $\big ( \frac{8\sqrt{2}}{3\pi }+2\epsilon \big ) \log x$ , see Figure REF (left).", "Equivalently, (REF ) can be rewritten for the normalized counting function as follows $& \lim _{r \rightarrow \infty }\mathbb {P}\left(\sup _{x> r}\left|\frac{N(x)- \mu _{\rho }(x)}{\log x}\right| \le \frac{4\sqrt{2}}{3\pi } + \epsilon \right) = 1, $ see Figure REF (right).", "It has also been conjectured in [16] that the upper bound (REF ) is sharp, in the sense that the following complementary lower bound is expected to hold: for any $\epsilon > 0$ , $& \lim _{r \rightarrow \infty }\mathbb {P}\left(\sup _{x> r}\left|\frac{N(x)- \mu _{\rho }(x)}{\log x}\right| \ge \frac{4\sqrt{2}}{3\pi } - \epsilon \right) = 1,$ and this is also supported by Figure REF (right).", "Figure: Rigidity of the Pearcey process (the pictures are taken from ).", "Left: the smooth blue lines correspond to the upper and lower bounds in () with $\epsilon =0.05$ , and $N(x)$ is the discontinuous blue line.", "Right: the blue line is $\frac{N(x)- \mu _{\rho }(x)}{\log x}$ , and the four orange lines correspond to the constants $\pm \frac{4\sqrt{2}}{3\pi } + \epsilon $ , $\pm \frac{4\sqrt{2}}{3\pi } - \epsilon $ with $\epsilon =0.05$ .", "Such lower bounds are notoriously difficult to prove.", "By analogy with the method developed in [22], we expect that Theorem REF with $m=2$ will be useful to establish (REF ).", "However, we expect that proving (REF ) will also require other estimates that are not provided in this paper, such as the large $r$ asymptotics for $\mathbb {E}[e^{u_{1}N(rx_{1})+u_{2}N(rx_{2})}]$ when simultaneously $|x_{1}-x_{2}|\rightarrow 0$ .", "This regime requires a completely different analysis from the one of Theorem REF , and we shall not pursue this here."
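, "Before outlining the proofs, we note that the explicit functions entering Theorem 1.4 are straightforward to evaluate numerically.", "The following short Python sketch (the helper names and sample values are ours, chosen purely for illustration) computes the predicted mean and variance of $N(r)$ and the limiting covariance $\Sigma (x_{2},x_{1})$ of Corollary 1.5: \begin{verbatim}
import numpy as np

def mu(x, rho):
    # mu_rho(x): leading asymptotics of E[N(x)]
    return 3*np.sqrt(3)/(4*np.pi) * x**(4/3) - np.sqrt(3)*rho/(2*np.pi) * x**(2/3)

def var(x):
    # sigma^2(x) plus the constant (1+gamma_E)/pi^2 appearing in Var[N(x)]
    return (4*np.log(x)/3 + np.log(9/2) + 1 + np.euler_gamma) / np.pi**2

def Sigma(xk, xj):
    # limiting covariance of N(r*xj) and N(r*xk) for fixed 0 < xj < xk
    w = np.exp(2j*np.pi/3)
    return np.log(abs(xj**(2/3) - w*xk**(2/3)) / abs(xj**(2/3) - xk**(2/3))) / np.pi**2

r, rho = 200.0, 0.5
print('E[N(r)]   ~', mu(r, rho))
print('Var[N(r)] ~', var(r))
print('Cov       ->', Sigma(2.0, 1.0))
\end{verbatim}", "In particular, for $x_{2}/x_{1}$ of order one the limiting covariance is of order one while the variances grow like $\log r$ , so the correlation between $N(rx_{1})$ and $N(rx_{2})$ decays like $1/\log r$ , consistently with the asymptotic independence in Corollary 1.6."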
], [ "Outline.", "Relying on the fact that the Pearcey kernel $K_{\\rho }^{\\mathrm {Pe}}$ is known to be integrable (of size 3) in the sense of [35], we use in Section the general method of [25] to express $\\partial _{r}\\log F(r\\vec{x},\\vec{u})$ in terms of the solution $\\Phi $ to a $3\\times 3$ RH problem.", "In Section , we derive the system of equations (REF ) and establish the formula $\\partial _{r}\\log F(r\\vec{x},\\vec{u})=2H(r)$ by analyzing a natural Lax pair associated to $\\Phi $ .", "In Section , we use the Deift–Zhou [26] steepest descent method to obtain the large $r$ asymptotics of $\\Phi $ .", "A main technical challenge here is to analyze the behavior of the global parametrix at certain points, which becomes particularly delicate for $m \\ge 2$ , see e.g.", "the asymptotic formulas of Subsection REF .", "The steepest descent analysis of $\\Phi $ for small $r$ is simpler and is performed in Section .", "In Section , we use the small and large $r$ asymptotics of $\\Phi $ together with some remarkable identities for the Hamiltonian to complete the proofs of Theorems REF , REF and REF ." ], [ "Differential identity", "The main result of this section is a differential identity which expresses $\\partial _{r}\\log F(r\\vec{x},\\vec{u})$ in terms of the solution $\\Phi $ to a $3\\times 3$ RH problem.", "It is well-known, see e.g.", "[43], that the moment generating function (REF ) is equal to the Fredholm determinant $F(r\\vec{x},\\vec{u}) = \\det \\big (1-\\widetilde{\\mathcal {K}}^{\\mathrm {Pe}}_{\\rho }\\big ), \\qquad \\widetilde{\\mathcal {K}}^{\\mathrm {Pe}}_{\\rho } := \\sum _{j=1}^{m}(1-s_{j})\\mathcal {K}^{\\mathrm {Pe}}_{\\rho }|_{rA_{j}},$ where $s_{j} := e^{u_{j}+\\ldots +u_{m}} \\in (0,+\\infty )$ , $j=1,\\ldots ,m$ , $A_{1} = (-x_{1},x_{1}), \\qquad A_{j} = (-x_{j},-x_{j-1})\\cup (x_{j-1},x_{j}), \\quad j=2,\\ldots ,m,$ and $\\mathcal {K}^{\\mathrm {Pe}}_{\\rho }|_{rA_{j}}$ is the trace-class operator acting on $L^{2}(rA_{j})$ whose kernel is $K^{\\mathrm {Pe}}_{\\rho }$ .", "We now recall a formula from [8] which expresses $K_{\\rho }^{\\mathrm {Pe}}$ in terms of the solution $\\Psi $ of a $3\\times 3$ RH problem." 
], [ "RH problem for $\\Psi $", "(a) $\\Psi : \\mathbb {C}\\setminus \\lbrace \\cup _{j=0}^{5}\\Sigma _{j}\\cup \\lbrace 0\\rbrace \\rbrace \\rightarrow \\mathbb {C}^{3\\times 3}$ is analytic, where $& \\Sigma _{0} = (0,+\\infty ), & & \\Sigma _{1} = e^{\\frac{\\pi i}{4}}(0,+\\infty ), & & \\Sigma _{2} = e^{\\frac{3\\pi i}{4}}(+\\infty ,0), \\nonumber \\\\& \\Sigma _{3} = (-\\infty ,0), & & \\Sigma _{4} = e^{-\\frac{3\\pi i}{4}}(+\\infty ,0), & & \\Sigma _{5} = e^{-\\frac{\\pi i}{4}}(0,+\\infty ).", "$ (b) For $z \\in \\cup _{j=0}^{5}\\Sigma _{j}$ , we denote $\\Psi _{+}(z)$ (resp.", "$\\Psi _{-}(z)$ ) for the limit of $\\Psi (s)$ as $s \\rightarrow z$ from the left (resp.", "right) of $\\cup _{j=0}^{5}\\Sigma _{j}$ (here “left\" and “right\" refer to the orientation of $\\cup _{j=0}^{5}\\Sigma _{j}$ as indicated in (REF )).", "For $z \\in \\Sigma _{j}$ , we have $\\Psi _{+}(z) = \\Psi _{-}(z)J_{j}$ , $j=0,\\ldots ,5$ , where $J_{0},J_{1},J_{2},J_{3},J_{4}$ and $J_{5}$ are respectively given by $& \\begin{pmatrix}0 & 1 & 0 \\\\-1 & 0 & 0 \\\\0 & 0 & 1\\end{pmatrix}, \\begin{pmatrix}1 & 0 & 0 \\\\1 & 1 & 1 \\\\0 & 0 & 1\\end{pmatrix}, \\begin{pmatrix}1 & 0 & 0 \\\\0 & 1 & 0 \\\\1 & 1 & 1\\end{pmatrix}, \\begin{pmatrix}0 & 0 & 1 \\\\0 & 1 & 0 \\\\-1 & 0 & 0\\end{pmatrix}, \\begin{pmatrix}1 & 0 & 0 \\\\0 & 1 & 0 \\\\1 & -1 & 1\\end{pmatrix}, \\begin{pmatrix}1 & 0 & 0 \\\\1 & 1 & -1 \\\\0 & 0 & 1\\end{pmatrix}.$ (c) As $z \\rightarrow \\infty $ , $\\pm \\text{\\upshape Im\\,}z > 0$ , $\\Psi (z) = \\sqrt{\\frac{2\\pi }{3}}e^{\\frac{\\rho ^{2}}{6}}i \\Psi _{0} \\bigg ( I + \\frac{\\Psi _{1}}{z} + {\\mathcal {O}}(z^{-2}) \\bigg )\\text{\\upshape diag\\,}(z^{-\\frac{1}{3}},1,z^{\\frac{1}{3}})L_{\\pm }e^{\\Theta (z)},$ where $& \\Psi _{0} = \\begin{pmatrix}1 & 0 & 0 \\\\0 & 1 & 0 \\\\\\kappa _{3}(\\rho )+\\frac{2\\rho }{3} & 0 & 1\\end{pmatrix}, & & \\Psi _{1} = \\begin{pmatrix}0 & \\kappa _{3}(\\rho ) & 0 \\\\\\widetilde{\\kappa }_{6}(\\rho ) & 0 & \\kappa _{3}(\\rho )+\\frac{\\rho }{3} \\\\0 & \\widehat{\\kappa }_{6}(\\rho ) & 0\\end{pmatrix}, \\\\& \\kappa _{3}(\\rho ) = \\frac{\\rho ^{3}}{54}-\\frac{\\rho }{6}, & & \\kappa _{6}(\\rho ) = \\frac{\\rho ^{6}}{5832} - \\frac{\\rho ^{4}}{162} - \\frac{\\rho ^{2}}{72} + \\frac{7}{32}, \\nonumber \\\\& \\widetilde{\\kappa }_{6}(\\rho ) = \\kappa _{6}(\\rho ) + \\frac{\\rho }{3}\\kappa _{3}(\\rho ) - \\frac{1}{3}, & & \\widehat{\\kappa }_{6}(\\rho ) = \\kappa _{6}(\\rho )-\\kappa _{3}(\\rho )^{2}+\\frac{\\rho ^{2}}{9}-\\frac{1}{3}, \\nonumber \\\\& L_{+} = \\begin{pmatrix}-\\omega & \\omega ^{2} & 1 \\\\-1 & 1 & 1 \\\\-\\omega ^{2} & \\omega & 1\\end{pmatrix}, & & L_{-} = \\begin{pmatrix}\\omega ^{2} & \\omega & 1 \\\\1 & 1 & 1 \\\\\\omega & \\omega ^{2} & 1\\end{pmatrix}, \\nonumber \\\\& \\Theta (z) = {\\left\\lbrace \\begin{array}{ll}\\text{\\upshape diag\\,}(\\theta _{1}(z),\\theta _{2}(z),\\theta _{3}(z)), & \\text{\\upshape Im\\,}z > 0, \\\\\\text{\\upshape diag\\,}(\\theta _{2}(z),\\theta _{1}(z),\\theta _{3}(z)), & \\text{\\upshape Im\\,}z < 0,\\end{array}\\right.}", "& & \\theta _{k}(z) = \\frac{3}{4}\\omega ^{2k}z^{\\frac{4}{3}}+\\frac{\\rho }{2}\\omega ^{k}z^{\\frac{2}{3}}, \\quad k=1,2,3, $ and $\\omega = e^{\\frac{2\\pi i}{3}}$ .", "(d) $\\Psi (z)$ remains bounded as $z \\rightarrow 0$ .", "Consider the following functions $\\mathcal {P}_{j}(z) = \\int _{\\Gamma _{j}}e^{-\\frac{1}{4}t^{4}-\\frac{\\rho }{2}t^{2}+itz}dt, \\qquad j=0,1,\\ldots ,5,$ where $& \\Gamma _{0}=(-\\infty ,+\\infty ), & & \\Gamma _{1} = (i\\infty ,0]\\cup [0,\\infty 
), & & \\Gamma _{2} = (i\\infty ,0]\\cup [0,-\\infty ), \\\\& \\Gamma _{3} = (-i\\infty ,0]\\cup [0,-\\infty ), & & \\Gamma _{4} = (-i\\infty ,0]\\cup [0,+\\infty ), & & \\Gamma _{5} = (-i\\infty ,i\\infty ),$ and define $\\widetilde{\\Psi }(z) = \\begin{pmatrix}\\mathcal {P}_{0}(z) & \\mathcal {P}_{1}(z) & \\mathcal {P}_{4}(z) \\\\\\mathcal {P}_{0}^{\\prime }(z) & \\mathcal {P}_{1}^{\\prime }(z) & \\mathcal {P}_{4}^{\\prime }(z) \\\\\\mathcal {P}_{0}^{\\prime \\prime }(z) & \\mathcal {P}_{1}^{\\prime \\prime }(z) & \\mathcal {P}_{4}^{\\prime \\prime }(z)\\end{pmatrix}, \\qquad z \\in \\mathbb {C}.$ It was shown in [8] that the RH problem for $\\Psi $ admits a unique solution which can be explicitly written in terms of $\\mathcal {P}_{j}$ , $j=0,\\ldots ,5$ .", "For example, for $\\arg z \\in (\\frac{\\pi }{4},\\frac{3\\pi }{4})$ , we have $\\Psi (z) = \\widetilde{\\Psi }(z)$ .", "The explicit expression of $\\Psi $ in the other sectors is not needed for us so we do not write it down, but we refer the interested reader to [8].", "The Pearcey kernel can be written as follows (see [8]): $K_{\\rho }^{\\mathrm {Pe}}(x,y) = \\frac{1}{2\\pi i(x-y)}\\begin{pmatrix}0 & 1 & 1\\end{pmatrix}\\widetilde{\\Psi }(y)^{-1}\\widetilde{\\Psi }(x) \\begin{pmatrix}1 & 0 & 0\\end{pmatrix}^{t}, \\qquad x,y \\in \\mathbb {R},$ where $(\\cdot )^t$ denotes the transpose operation.", "Let $\\widetilde{K}^{\\mathrm {Pe}}_{\\rho }$ be the kernel of the operator $\\widetilde{\\mathcal {K}}^{\\mathrm {Pe}}_{\\rho }$ appearing in (REF ): $\\widetilde{K}^{\\mathrm {Pe}}_{\\rho }(x,y):=\\sum _{j=1}^{m}(1-s_{j})K_{\\rho }^{\\mathrm {Pe}}(x,y) \\chi _{rA_{j}}(y),$ where $\\chi _{A_{j}}$ denotes the characteristic function of $A_{j}$ , i.e.", "$\\chi _{A_{j}}(x)=1$ if $x \\in A_{j}$ and 0 otherwise.", "From (REF ), it is easy to see that $\\widetilde{K}^{\\mathrm {Pe}}_{\\rho }$ can be written as $\\widetilde{K}^{\\mathrm {Pe}}_{\\rho }(x,y) = \\frac{\\mathbf {f}(x)^{t} \\mathbf {h}(y)}{x-y},$ where $\\mathbf {f}(x) = \\widetilde{\\Psi }(x) \\begin{pmatrix}1 \\\\ 0 \\\\ 0\\end{pmatrix}, \\quad \\mathbf {h}(y) = \\frac{\\sum _{j=1}^{m}(1-s_{j})\\chi _{rA_{j}}(y)}{2\\pi i}\\widetilde{\\Psi }(y)^{-t} \\begin{pmatrix}0 \\\\ 1 \\\\ 1\\end{pmatrix}= \\sum _{j=1}^{m}\\mathfrak {s}_{j}\\chi _{rB_{j}}(y)\\widetilde{\\Psi }(y)^{-t} \\begin{pmatrix}0 \\\\ 1 \\\\ 1\\end{pmatrix}.$ with $\\mathfrak {s}_{j}:=\\frac{s_{j+1}-s_{j}}{2\\pi i}$ and $B_{j}=(-x_{j},x_{j})$ , $j=1,\\ldots ,m$ .", "Since $\\vec{u}\\in \\mathbb {R}^{m}$ , it follows from (REF ) that $F(r\\vec{x},\\vec{u})\\in (0,+\\infty )$ .", "Thus, by (REF ), we have $\\det (1-\\widetilde{\\mathcal {K}}^{\\mathrm {Pe}}_{\\rho })>0$ and in particular $1-\\widetilde{\\mathcal {K}}^{\\mathrm {Pe}}_{\\rho }$ is invertible.", "Using now standard identities for trace-class operators, we obtain $& \\partial _{r} \\log F(r\\vec{x},\\vec{u}) = \\partial _{r} \\log \\det (1-\\widetilde{\\mathcal {K}}^{\\mathrm {Pe}}_{\\rho }) = -\\mbox{Tr} \\Big ( (1-\\widetilde{\\mathcal {K}}^{\\mathrm {Pe}}_{\\rho })^{-1} \\partial _{r}\\widetilde{\\mathcal {K}}^{\\mathrm {Pe}}_{\\rho } \\Big ) \\nonumber \\\\& = - \\sum _{j=1}^{m} x_{j} \\bigg ( \\lim _{v \\nearrow rx_{j}}\\mathrm {R}(v,v)+\\lim _{v \\searrow -rx_{j}}\\mathrm {R}(v,v) \\bigg ) + \\sum _{j=1}^{m-1} x_{j} \\bigg ( \\lim _{v \\searrow rx_{j}}\\mathrm {R}(v,v)+\\lim _{v \\nearrow -rx_{j}}\\mathrm {R}(v,v) \\bigg ), $ where $\\mathrm {R}$ is the kernel of the resolvent operator $(1-\\widetilde{\\mathcal {K}}^{\\mathrm {Pe}}_{\\rho })^{-1} 
\\widetilde{\\mathcal {K}}^{\\mathrm {Pe}}_{\\rho }$ .", "Formula (REF ) shows in particular that $\\widetilde{K}^{\\mathrm {Pe}}_{\\rho }$ is integrable (of size 3) in the sense of [35].", "Hence, by [25], we have $\\mathrm {R}(u,v) = \\frac{\\mathbf {F}(u)^{t}\\mathbf {H}(v)}{u-v}, \\qquad u,v \\in \\mathbb {R},$ where $\\mathbf {F}(u) = (1-\\widetilde{\\mathcal {K}}^{\\mathrm {Pe}}_{\\rho })^{-1}\\mathbf {f}(u) = Y_{+}(u)\\mathbf {f}(u), \\qquad \\mathbf {H}(v) = Y_{+}^{-t}(v)\\mathbf {h}(v), \\qquad u,v \\in \\mathbb {R},$ and $Y$ is given by $Y(z) = I - \\int _{-rx_{m}}^{rx_{m}}\\frac{\\mathbf {F}(w)\\mathbf {h}(w)^{t}}{w-z}dw.$ Furthermore, $Y$ is the unique solution to the following RH problem.", "(a) $Y : \\mathbb {C}\\setminus [-rx_{m},rx_{m}] \\rightarrow \\mathbb {C}^{3\\times 3}$ is analytic.", "(b) $Y$ satisfies the jumps $Y_{+}(x) = Y_{-}(x) (I-2\\pi i \\mathbf {f}(x)\\mathbf {h}(x)^{t}), \\qquad x \\in (-rx_{m},rx_{m})\\setminus \\cup _{j=1}^{m-1} \\lbrace -rx_{j},rx_{j}\\rbrace .$ (c) As $z \\rightarrow \\infty $ , $Y(z) = I + \\frac{Y_{1}}{z} + {\\mathcal {O}}(z^{-2}).$ (d) As $z \\rightarrow z_{*}\\in \\cup _{j=1}^{m-1}\\lbrace -rx_{j},rx_{j}\\rbrace $ , we have $Y(z) = {\\mathcal {O}}(\\log (z-z_{*}))$ .", "Figure: Jump contours Σ k (r) \\Sigma _{k}^{(r)}, k=0,1,...,6k=0,1,\\ldots ,6.Now, we apply a transformation which changes $Y$ into another function $\\Phi $ whose jump matrices are piecewise constant.", "Following [24], we define $\\Phi (z) = \\Phi (z;r)$ as $\\Phi (z) = \\frac{\\Psi _{0}^{-1}}{\\sqrt{\\frac{2\\pi }{3}}e^{\\frac{\\rho ^{2}}{6}}i} {\\left\\lbrace \\begin{array}{ll}Y(z)\\Psi (z), & z \\in \\mathrm {I} \\cup \\mathrm {III} \\cup \\mathrm {IV} \\cup \\mathrm {VI}, \\\\Y(z)\\widetilde{\\Psi }(z), & z \\in \\mathrm {II}, \\\\Y(z) \\widetilde{\\Psi }(z) \\begin{pmatrix}1 & -1 & -1 \\\\0 & 1 & 0 \\\\0 & 0 & 1\\end{pmatrix}, & z \\in \\mathrm {V},\\end{array}\\right.", "}$ where the regions $\\mathrm {I}$ , $\\mathrm {II}$ $\\mathrm {III}$ , $\\mathrm {IV}$ , $\\mathrm {V}$ and $\\mathrm {VI}$ are shown in Figure REF .", "It is easily verified from the RH problems for $Y$ and $\\Psi $ that $\\Phi $ satisfies the following RH problem.", "(a) $\\Phi : \\mathbb {C}\\setminus \\lbrace \\cup _{j=0}^{6}\\Sigma _{j}^{(r)}\\cup \\lbrace -rx_{m},rx_{m}\\rbrace \\rbrace \\rightarrow \\mathbb {C}^{3\\times 3}$ is analytic, where $& \\Sigma _{0}^{(r)} = (rx_{m},+\\infty ), & & \\Sigma _{1}^{(r)} = rx_{m}+e^{\\frac{\\pi i}{4}}(0,+\\infty ), & & \\Sigma _{2}^{(r)} = -rx_{m}+e^{\\frac{3\\pi i}{4}}(+\\infty ,0), \\nonumber \\\\& \\Sigma _{3}^{(r)} = (-\\infty ,-rx_{m}), & & \\Sigma _{4}^{(r)} = -rx_{m}+e^{-\\frac{3\\pi i}{4}}(+\\infty ,0), & & \\Sigma _{5}^{(r)} = rx_{m}+e^{-\\frac{\\pi i}{4}}(0,+\\infty ), $ and $\\Sigma _{6}^{(r)}=(-rx_{m},rx_{m})$ , see also Figure REF .", "(b) For $z \\in \\Sigma _{j}^{(r)}$ , we have $\\Phi _{+}(z) = \\Phi _{-}(z)J_{j}$ , $j=0,\\ldots ,5$ , where $J_{0},J_{1},J_{2},J_{3},J_{4}$ and $J_{5}$ are given by (REF ).", "For $z \\in (-rx_{m},rx_{m})\\setminus \\cup _{j=1}^{m-1}\\lbrace -rx_{j},rx_{j}\\rbrace $ , we have $\\Phi _{+}(z) = \\Phi _{-}(z)J_{6}(z)$ , where $J_{6}(z)=\\begin{pmatrix}1 & s_{j} & s_{j} \\\\0 & 1 & 0 \\\\0 & 0 & 1\\end{pmatrix}, \\qquad z \\in rA_{j}, \\qquad j=1,\\ldots ,m.$ (c) As $z \\rightarrow \\infty $ , $\\pm \\text{\\upshape Im\\,}z > 0$ , $\\Phi (z) = \\bigg ( I+\\frac{\\Phi _{1}}{z}+\\frac{\\Phi _{2}}{z^{2}} +{\\mathcal {O}}(z^{-3}) \\bigg ) \\text{\\upshape diag\\,}\\Big ( z^{-\\frac{1}{3}},1,z^{\\frac{1}{3}} 
\\Big )L_{\\pm }e^{\\Theta (z)},$ where $\\Phi _{1}$ ,$\\Phi _{2}$ are independent of $z$ and $\\Phi _{1}=\\Psi _{1}+\\Psi _{0}^{-1}Y_{1}\\Psi _{0}$ .", "(d) As $z \\rightarrow rx_{j}$ , $j=1,\\ldots ,m$ , we have $\\hspace{-9.95863pt} \\Phi (z) = \\widehat{\\Phi }_{j}(z) \\begin{pmatrix}1 & - \\mathfrak {s}_{j}\\log (z-rx_{j}) & - \\mathfrak {s}_{j}\\log (z-rx_{j}) \\\\0 & 1 & 0 \\\\0 & 0 & 1\\end{pmatrix} {\\left\\lbrace \\begin{array}{ll}I, & \\hspace{-4.26773pt} z \\in \\mathrm {II}, \\\\\\begin{pmatrix}1 & -s_{j+1} & -s_{j+1} \\\\0 & 1 & 0 \\\\0 & 0 & 1\\end{pmatrix}, & \\hspace{-4.26773pt} z \\in \\mathrm {V},\\end{array}\\right.", "}$ where we recall that $\\mathfrak {s}_{j}:=\\frac{s_{j+1}-s_{j}}{2\\pi i}$ .", "The matrix $\\widehat{\\Phi }_{j}$ is analytic at $rx_{j}$ and satisfies $\\widehat{\\Phi }_{j}(z) = \\Phi _{j}^{(0)}(r) \\Big ( I + \\Phi _{j}^{(1)}(r)(z-rx_{j}) + {\\mathcal {O}}((z-rx_{j})^{2}) \\Big ), \\qquad z \\rightarrow rx_{j},$ for some matrices $\\Phi _{j}^{(0)}(r)$ and $\\Phi _{j}^{(1)}(r)$ .", "(e) $\\Phi $ satisfies the symmetry $\\Phi (z) = -\\text{\\upshape diag\\,}(1,-1,1)\\Phi (-z)\\mathcal {B}, \\qquad \\mathcal {B} = \\begin{pmatrix}-1 & 0 & 0 \\\\0 & 0 & 1 \\\\0 & 1 & 0\\end{pmatrix}.$ Proposition 2.1 We have $\\partial _{r} \\log F(r\\vec{x},\\vec{u}) = -\\sum _{j=1}^{m} 2\\, \\mathfrak {s}_{j} x_{j} \\Big [ \\Big ( \\Phi _{j}^{(1)}(r) \\Big )_{21} + \\Big ( \\Phi _{j}^{(1)}(r) \\Big )_{31} \\Big ].$ The proof is a minor adaptation of [24].", "For $v \\in \\mathbb {R}$ , by (REF ), (REF ) and (REF ), we have $\\mathbf {F}(v) = \\sqrt{\\frac{2\\pi }{3}}e^{\\frac{\\rho ^{2}}{6}}i \\Psi _{0}\\Phi _{+}(v) \\begin{pmatrix}1 \\\\ 0 \\\\ 0\\end{pmatrix}, \\qquad \\mathbf {H}(v) = \\sum _{j=1}^{m}\\mathfrak {s}_{j}\\chi _{rB_{j}}(v) \\frac{\\Psi _{0}^{-t}}{\\sqrt{\\frac{2\\pi }{3}}e^{\\frac{\\rho ^{2}}{6}}i}\\Phi _{+}(v)^{-t} \\begin{pmatrix}0 \\\\ 1 \\\\ 1\\end{pmatrix}.$ Using (REF ), (REF ), (REF ) and (REF ), we find $\\mathrm {R}(v,v) = \\sum _{j=1}^{m}\\mathfrak {s}_{j}\\chi _{rB_{j}}(v) \\bigg ( \\big [ \\Phi _{+}(v)^{-1}\\Phi _{+}^{\\prime }(v) \\big ]_{21} + \\big [ \\Phi _{+}(v)^{-1}\\Phi _{+}^{\\prime }(v) \\big ]_{31} \\bigg ) = \\mathrm {R}(-v,-v), \\qquad v \\in \\mathbb {R},$ which allows us to rewrite (REF ) as $\\partial _{r} \\log F(r\\vec{x},\\vec{u}) & = - 2 \\sum _{j=1}^{m} x_{j} \\lim _{v \\nearrow rx_{j}}\\mathrm {R}(v,v) + 2\\sum _{j=1}^{m-1} x_{j} \\lim _{v \\searrow rx_{j}}\\mathrm {R}(v,v) \\\\& = - 2 \\sum _{j=1}^{m} x_{j} \\mathfrak {s}_{j} \\bigg ( \\big [ \\Phi _{+}(x_{j})^{-1}\\Phi _{+}^{\\prime }(x_{j}) \\big ]_{21} + \\big [ \\Phi _{+}(x_{j})^{-1}\\Phi _{+}^{\\prime }(x_{j}) \\big ]_{31} \\bigg ).$ By (REF ) and (REF ), $\\big [ \\Phi _{+}(x_{j})^{-1}\\Phi _{+}^{\\prime }(x_{j}) \\big ]_{k1} = ( \\Phi _{j}^{(1)}(r) )_{k1}$ , for all $j=1,\\ldots ,m$ and $k=2,3$ , which finishes the proof." 
], [ "Lax pair", "In this section, we find an explicit solution to (REF )–(REF ) in terms of $\\Phi $ , we prove the relation $\\partial _{r} \\log F(r\\vec{x},\\vec{u}) = 2 H(r)$ , we derive some further identities for $H$ which will be useful in Section .", "These results generalize part of the content of [24] to an arbitrary $m$ .", "Proposition 3.1 The functions $(p_{0},q_{0},\\lbrace p_{j,1},q_{j,1},p_{j,2},q_{j,2},p_{j,3},q_{j,3}\\rbrace _{j=1}^{m})$ defined by $& p_{0}(r) := \\frac{1}{\\sqrt{2}}\\Big ( \\frac{\\rho }{3}+\\Phi _{1,23} \\Big ), & & q_{0}(r) := \\frac{1}{\\sqrt{2}}\\Big ( \\frac{\\rho }{3} - \\Phi _{1,12} \\Big ), \\\\& \\begin{pmatrix}p_{j,1}(r) \\\\ p_{j,2}(r) \\\\ p_{j,3}(r)\\end{pmatrix} := - \\mathfrak {s}_{j}\\Phi _{j}^{(0)}(r)^{-t}\\begin{pmatrix}0 \\\\ 1 \\\\ 1\\end{pmatrix}, & & \\begin{pmatrix}q_{j,1}(r) \\\\ q_{j,2}(r) \\\\ q_{j,3}(r)\\end{pmatrix} := \\Phi _{j}^{(0)}(r)\\begin{pmatrix}1 \\\\ 0 \\\\ 0\\end{pmatrix}, & & j=1,\\ldots ,m, $ satisfy (REF ) and the system of coupled equations (REF ).", "Remark 3.2 Since $\\Phi $ exists by (REF ), (REF ) and (REF ), Proposition REF implies that there exists at least one solution to (REF )–(REF ).", "Following [24], we will proceed by analyzing the following Lax pair $L(z;r) := \\partial _{z}\\Phi (z;r) \\cdot \\Phi (z;r)^{-1}, \\qquad U(z;r) := \\partial _{r}\\Phi (z;r) \\cdot \\Phi (z;r)^{-1}.$ Since the jump matrices of $\\Phi $ are independent of $z$ and $r$ , $L(z)=L(z;r)$ and $U(z)=U(z;r)$ are analytic in $\\mathbb {C}\\setminus \\lbrace -rx_{m},\\ldots ,-rx_{1},$ $0,rx_{1},\\ldots ,rx_{m}\\rbrace $ .", "Furthermore, using (REF ), we infer that they satisfy $L(z) = -\\text{\\upshape diag\\,}(1,-1,1)L(-z)\\text{\\upshape diag\\,}(1,-1,1), \\qquad U(z) = \\text{\\upshape diag\\,}(1,-1,1)U(-z)\\text{\\upshape diag\\,}(1,-1,1).$ By (REF ), (REF ) and (REF ), we find $L(z) = \\begin{pmatrix}0 & 0 & 0 \\\\0 & 0 & 0 \\\\1 & 0 & 0\\end{pmatrix}z + A_{0}(r) + \\frac{L_{1}}{z} + {\\mathcal {O}}(z^{-2}), \\qquad \\mbox{as } z \\rightarrow \\infty ,$ where $& A_{0}(r) = \\begin{pmatrix}0 & 1 & 0 \\\\\\frac{\\rho }{3} & 0 & 1 \\\\0 & \\frac{\\rho }{3} & 0\\end{pmatrix} + \\left[ \\Phi _{1},\\begin{pmatrix}0 & 0 & 0 \\\\ 0 & 0 & 0 \\\\ 1 & 0 & 0\\end{pmatrix} \\right] = \\begin{pmatrix}0 & 1 & 0 \\\\\\sqrt{2}p_{0}(r) & 0 & 1 \\\\0 & \\sqrt{2}q_{0}(r) & 0\\end{pmatrix}, \\\\& L_{1} = \\begin{pmatrix}-\\frac{1}{3} & 0 & \\frac{\\rho }{3} \\\\0 & 0 & 0 \\\\0 & 0 & \\frac{1}{3}\\end{pmatrix} + \\left[ \\Phi _{2},\\begin{pmatrix}0 & 0 & 0 \\\\0 & 0 & 0 \\\\1 & 0 & 0\\end{pmatrix} \\right] + \\left[ \\begin{pmatrix}0 & 0 & 0 \\\\0 & 0 & 0 \\\\1 & 0 & 0\\end{pmatrix}\\Phi _{1},\\Phi _{1} \\right] + \\left[ \\Phi _{1},\\begin{pmatrix}0 & 1 & 0 \\\\\\frac{\\rho }{3} & 0 & 1 \\\\0 & \\frac{\\rho }{3} & 0\\end{pmatrix} \\right], \\nonumber $ and where we have used the notation $[\\mathcal {B}_{1},\\mathcal {B}_{2}]:=\\mathcal {B}_{1}\\mathcal {B}_{2}-\\mathcal {B}_{2}\\mathcal {B}_{1}$ .", "Also, by (REF )–(REF ) and (), we have $L(z) = \\frac{A_{j}(r)}{z-rx_{j}} + {\\mathcal {O}}(1), \\qquad \\mbox{as } z \\rightarrow rx_{j}, \\quad j=1,\\ldots ,m$ with $A_{j}(r) = -\\mathfrak {s}_{j}\\Phi _{j}^{(0)}(r)\\begin{pmatrix}0 & 1 & 1 \\\\0 & 0 & 0 \\\\0 & 0 & 0\\end{pmatrix}\\Phi _{j}^{(0)}(r)^{-1} = \\begin{pmatrix}q_{j,1} \\\\ q_{j,2} \\\\ q_{j,3}\\end{pmatrix}\\begin{pmatrix}p_{j,1} & p_{j,2} & p_{j,3}\\end{pmatrix}.$ Since $\\det \\Phi (z)$ is constant, we have $\\mbox{Tr} L(z) = \\mbox{Tr} A_{j}(r) = \\sum _{k=1}^{3} 
p_{j,k}(r)q_{j,k}(r) = 0$ , which already proves (REF ).", "Combining (REF ), (REF ) and (REF ), we have shown that $L(z) = \\begin{pmatrix}0 & 0 & 0 \\\\0 & 0 & 0 \\\\z & 0 & 0\\end{pmatrix}+A_{0}(r) + \\sum _{j=1}^{m}\\bigg ( \\frac{A_{j}(r)}{z-rx_{j}} + \\frac{A_{-j}(r)}{z+rx_{j}}\\bigg ),$ where $A_{-j}(r) = \\text{\\upshape diag\\,}(1,-1,1)A_{j}(r)\\text{\\upshape diag\\,}(1,-1,1) = \\begin{pmatrix}q_{j,1} \\\\ -q_{j,2} \\\\ q_{j,3}\\end{pmatrix}\\begin{pmatrix}p_{j,1} & -p_{j,2} & p_{j,3}\\end{pmatrix}.$ For the computation of $U$ , we use (REF ) and (REF )–(REF ) to obtain $& U(z) = {\\mathcal {O}}(z^{-1}), \\qquad \\mbox{as } z \\rightarrow \\infty , & & U(z) = -x_{j} \\frac{A_{j}(r)}{z-rx_{j}} + {\\mathcal {O}}(1), \\qquad \\mbox{as } z \\rightarrow rx_{j}.$ Using also (REF ), we conclude that $U(z) = \\sum _{j=1}^{m}\\bigg ( -x_{j}\\frac{A_{j}(r)}{z-rx_{j}} + x_{j}\\frac{A_{-j}(r)}{z+rx_{j}}\\bigg ).$ It remains to show that the functions $(p_{0},q_{0},\\lbrace p_{j,1},q_{j,1},p_{j,2},q_{j,2},p_{j,3},q_{j,3}\\rbrace _{j=1}^{m})$ satisfy the system of equations (REF ).", "For this, we note that the compatibility condition $\\partial _{z}\\partial _{r}\\Phi (z) = \\partial _{r}\\partial _{z}\\Phi (z)$ is equivalent to the relation $\\partial _{r}L(z) - \\partial _{z}U(z) = [U(z),L(z)].$ On the other hand, by (REF ) and (REF ), we have $\\partial _{r}L(z) - \\partial _{z}U(z) = A_{0}^{\\prime }(r) + \\sum _{j=1}^{m} \\bigg (\\frac{A_{j}^{\\prime }(r)}{z-rx_{j}} + \\frac{A_{-j}^{\\prime }(r)}{z+rx_{j}}\\bigg ).$ Substituting (REF ) and (REF ) in the above two equations, and then taking $z \\rightarrow \\infty $ , we get $A_{0}^{\\prime }(r) & = \\sum _{j=1}^{m}x_{j} \\left[ A_{-j}(r)-A_{j}(r),\\begin{pmatrix}0 & 0 & 0 \\\\0 & 0 & 0 \\\\1 & 0 & 0\\end{pmatrix} \\right] = \\begin{pmatrix}0 & 0 & 0 \\\\-2\\sum _{j=1}^{m}x_{j}p_{j,3}(r)q_{j,2}(r) & 0 & 0 \\\\0 & 2\\sum _{j=1}^{m}x_{j}p_{j,2}(r)q_{j,1}(r) & 0\\end{pmatrix},$ which yields the first two equations in (REF ).", "We now prove the last six equations of (REF ).", "A direct computation using (REF ) and (REF ) shows that $x_{j}L(z)+U(z) = \\big ( x_{j}\\partial _{z}\\Phi (z) + \\partial _{r} \\Phi (z) \\big ) \\Phi (z)^{-1} = \\partial _{r} \\Phi _{j}^{(0)}(r) \\cdot \\Phi _{j}^{(0)}(r)^{-1} + o(1) \\qquad \\mbox{as } z \\rightarrow rx_{j},$ and using (REF ) and (REF ), we get $x_{j}L(z)+U(z) = M_{j}(r) - \\frac{1}{r}A_{j}(r) + o(1) \\qquad \\mbox{as } z \\rightarrow rx_{j},$ with $M_{j} = \\begin{pmatrix}0 & 0 & 0 \\\\0 & 0 & 0 \\\\rx_{j}^{2} & 0 & 0\\end{pmatrix} + x_{j} A_{0} + \\frac{1}{r} \\sum _{\\begin{array}{c}k=-m \\\\ k \\ne 0\\end{array}}^{m} A_{k} = \\begin{pmatrix}\\frac{2}{r}S_{11} & x_{j} & \\frac{2}{r}S_{31} \\\\\\sqrt{2}p_{0}x_{j} & \\frac{2}{r}S_{22} & x_{j} \\\\rx_{j}^{2} + \\frac{2}{r}S_{13} & \\sqrt{2}q_{0}x_{j} & \\frac{2}{r}S_{33}\\end{pmatrix}.$ In (REF ), we have omitted the $r$ -dependence of various functions for notational convenience.", "Combining (REF ) and (REF ) yields $\\partial _{r} \\Phi _{j}^{(0)}(r) = \\bigg ( M_{j}(r) - \\frac{1}{r}A_{j}(r) \\bigg ) \\Phi _{j}^{(0)}(r).$ Taking the first column of the above equation and using (REF ), (REF ) and (), we get $\\begin{pmatrix}q_{j,1}^{\\prime }(r) & q_{j,2}^{\\prime }(r) & q_{j,3}^{\\prime }(r)\\end{pmatrix}^{t} = M(r) \\begin{pmatrix}q_{j,1}(r) & q_{j,2}(r) & q_{j,3} (r)\\end{pmatrix}^{t},$ and it is a direct computation to verify that (REF ) is equivalent to the third, fourth and fifth equations of (REF ).", "Finally, using (REF ), (REF ), (REF 
) and (REF ), and letting $z \\rightarrow rx_{j}$ , we get $A_{j}^{\\prime }(r) = -[A_{j}(r),M_{j}(r)].$ Combining (REF ) with (REF ) and (REF ), we get $\\begin{pmatrix}p_{j,1}^{\\prime }(r) & p_{j,2}^{\\prime }(r) & p_{j,3}^{\\prime }(r)\\end{pmatrix} = -\\begin{pmatrix}p_{j,1}(r) & p_{j,2}(r) & p_{j,3}(r)\\end{pmatrix} M(r),$ which yields the last three equations in (REF ).", "For later use, we also note that by taking $z \\rightarrow \\infty $ in (REF ) and then by reading the $z^{-1}$ term of the (1,3) entry, we get $\\sum _{j=1}^{m} (A_{j,13}(r)+A_{-j,13}(r)) = 2S_{31}(r) = \\frac{\\rho }{3} + \\Phi _{1,12}(r)-\\Phi _{1,23}(r) = \\rho -\\sqrt{2}(p_{0}(r)+q_{0}(r)),$ where we have also used (REF ) and (REF ).", "In the rest of this section, we prove some identities for $H$ which will be useful in Section .", "Proposition 3.3 Let $H$ be the Hamiltonian given in (REF ) with $(p_{0},q_{0},\\lbrace p_{j,1},q_{j,1},p_{j,2},q_{j,2},p_{j,3},q_{j,3}\\rbrace _{j=1}^{m})$ defined as in (REF )–().", "We have $\\partial _{r} \\log F(r\\vec{x},\\vec{u}) = 2 H(r).$ By (REF ), the claim (REF ) is equivalent to $H(r) = -\\sum _{j=1}^{m}\\mathfrak {s}_{j}x_{j} \\mathrm {Tr}\\left( \\Phi _{j}^{(1)}(r)\\begin{pmatrix}0 & 1 & 1 \\\\0 & 0 & 0 \\\\0 & 0 & 0\\end{pmatrix} \\right) = -\\sum _{j=1}^{m}\\mathfrak {s}_{j}x_{j} \\bigg [ \\Big ( \\Phi _{j}^{(1)}(r)\\Big )_{21} + \\Phi _{j}^{(1)}(r)\\Big )_{31} \\bigg ].$ Let $\\mathcal {H}(r)$ be the right-hand side of (REF ).", "We must show that $H(r) = \\mathcal {H}(r)$ .", "By reading the ${\\mathcal {O}}(1)$ term in the expansion of $\\partial _{z}\\Phi (z) = L(z)\\Phi (z)$ as $z \\rightarrow rx_{j}$ (using (REF ) and (REF )), we obtain $& \\Phi _{j}^{(1)}(r) = \\mathfrak {s}_{j} \\left[ \\Phi _{j}^{(1)}(r),\\begin{pmatrix}0 & 1 & 1 \\\\0 & 0 & 0 \\\\0 & 0 & 0\\end{pmatrix} \\right] \\nonumber \\\\& + \\Phi _{j}^{(0)}(r)^{-1}\\left( \\begin{pmatrix}0 & 0 & 0 \\\\0 & 0 & 0 \\\\rx_{j} & 0 & 0\\end{pmatrix}+A_{0}(r) + \\sum _{\\begin{array}{c}\\ell =1 \\\\ \\ell \\ne j\\end{array}}^{m}\\bigg ( \\frac{A_{\\ell }(r)}{rx_{j}-rx_{\\ell }} + \\frac{A_{-\\ell }(r)}{rx_{j}+rx_{\\ell }}\\bigg ) + \\frac{A_{-j}(r)}{2rx_{j}} \\right)\\Phi _{j}^{(0)}(r).", "$ Substituting (REF ) in the right-hand side of (REF ) leads to $\\mathcal {H}(r) = & -\\sum _{j=1}^{m}\\mathfrak {s}_{j}x_{j} \\begin{pmatrix}0 & 1 & 1\\end{pmatrix} \\Phi _{j}^{(0)}(r)^{-1}\\left[ \\begin{pmatrix}0 & 0 & 0 \\\\0 & 0 & 0 \\\\rx_{j} & 0 & 0\\end{pmatrix}+A_{0}(r) \\right.", "\\nonumber \\\\& \\left.", "+ \\sum _{\\begin{array}{c}\\ell =1 \\\\ \\ell \\ne j\\end{array}}^{m}\\bigg ( \\frac{A_{\\ell }(r)}{rx_{j}-rx_{\\ell }} + \\frac{A_{-\\ell }(r)}{rx_{j}+rx_{\\ell }}\\bigg ) + \\frac{A_{-j}(r)}{2rx_{j}} \\right]\\Phi _{j}^{(0)}(r) \\begin{pmatrix}1 \\\\ 0 \\\\ 0\\end{pmatrix}.", "$ Inserting () and (REF ) in (REF ), we get $\\mathcal {H}(r) = \\sum _{j=1}^{m}x_{j} \\begin{pmatrix}p_{j,1} \\\\ p_{j,2} \\\\ p_{j,3}\\end{pmatrix}^{t}\\left( \\begin{pmatrix}0 & 1 & 0 \\\\\\sqrt{2}p_{0} & 0 & 1 \\\\rx_{j} & \\sqrt{2}q_{0} & 0\\end{pmatrix} + \\sum _{\\begin{array}{c}\\ell =1 \\\\ \\ell \\ne j\\end{array}}^{m}\\bigg ( \\frac{A_{\\ell }}{rx_{j}-rx_{\\ell }} + \\frac{A_{-\\ell }}{rx_{j}+rx_{\\ell }}\\bigg ) + \\frac{A_{-j}}{2rx_{j}} \\right)\\begin{pmatrix}q_{j,1} \\\\ q_{j,2} \\\\ q_{j,3}\\end{pmatrix}.$ The double sum in the above expression can be simplified as $& \\sum _{j=1}^{m}x_{j} \\begin{pmatrix}p_{j,1} \\\\ p_{j,2} \\\\ p_{j,3}\\end{pmatrix}^{t} \\sum _{\\begin{array}{c}\\ell =1 \\\\ \\ell \\ne 
j\\end{array}}^{m}\\bigg ( \\frac{A_{\\ell }}{rx_{j}-rx_{\\ell }} + \\frac{A_{-\\ell }}{rx_{j}+rx_{\\ell }}\\bigg ) \\begin{pmatrix}q_{j,1} \\\\ q_{j,2} \\\\ q_{j,3}\\end{pmatrix} = \\frac{1}{2} \\sum _{j=1}^{m} \\begin{pmatrix}p_{j,1} \\\\ p_{j,2} \\\\ p_{j,3}\\end{pmatrix}^{t} \\sum _{\\begin{array}{c}\\ell =1 \\\\ \\ell \\ne j\\end{array}}^{m}\\Big ( A_{\\ell } + A_{-\\ell }\\bigg ) \\begin{pmatrix}q_{j,1} \\\\ q_{j,2} \\\\ q_{j,3}\\end{pmatrix}.$ Using (REF ) and (REF ), it is now a direct computation to check that indeed $\\mathcal {H}(r) = H(r)$ , which concludes the proof.", "Proposition 3.4 Let $H$ be the Hamiltonian given in (REF ) with $(p_{0},q_{0},\\lbrace p_{j,1},q_{j,1},p_{j,2},q_{j,2},p_{j,3},q_{j,3}\\rbrace _{j=1}^{m})$ defined as in (REF )–().", "We have $& p_{0}(r)q_{0}^{\\prime }(r)+\\sum _{j=1}^{m}\\sum _{k=1}^{3}p_{j,k}(r)q_{j,k}^{\\prime }(r) - H(r) \\nonumber \\\\& = H(r) + \\frac{1}{4}\\frac{d}{dr}\\Big ( 2p_{0}(r)q_{0}(r) + \\sum _{j=1}^{m}\\big [p_{j,2}(r)q_{j,2}(r)+2p_{j,3}(r)q_{j,3}(r)\\big ] -3rH(r) \\Big ).", "$ Furthermore, $\\partial _{\\gamma }\\bigg ( p_{0}(r)q_{0}^{\\prime }(r) + \\sum _{k=1}^{3}\\sum _{j=1}^{m}p_{j,k}(r)q_{j,k}^{\\prime }(r) - H(r) \\bigg ) = \\frac{d}{dr}\\bigg ( \\sum _{k=1}^{3}\\sum _{j=1}^{m}p_{j,k}(r)\\partial _{\\gamma }q_{j,k}(r)+p_{0}(r)\\partial _{\\gamma } q_{0}(r) \\bigg )$ where $\\gamma $ is any parameter among $u_{1},\\ldots ,u_{m}$ .", "Formula (REF ) follows directly from (REF ) and (REF ), and formula (REF ) follows from (REF ) together with $\\partial _{\\gamma }H(r) = \\frac{\\partial H}{\\partial p_{0}}(r)\\partial _{\\gamma }p_{0}(r) + \\frac{\\partial H}{\\partial q_{0}}(r)\\partial _{\\gamma }q_{0}(r) + \\sum _{j=1}^{m}\\sum _{k=1}^{3} \\bigg ( \\frac{\\partial H}{\\partial p_{j,k}}(r)\\partial _{\\gamma }p_{j,k}(r) + \\frac{\\partial H}{\\partial q_{j,k}}(r)\\partial _{\\gamma }q_{j,k}(r) \\bigg ).$" ], [ "Asymptotic analysis of $\\Phi (z;r)$ as {{formula:1b11ad94-bd59-4ad1-9f2a-3a430e46c3f4}}", "In this section, we perform a Deift-Zhou steepest descent analysis to obtain the large $r$ asymptotics of $\\Phi $ .", "The case $m=1$ of this analysis was previously done in [24]." ], [ "First transformation: $\\Phi \\rightarrow T$", "Define $T(z) = \\text{\\upshape diag\\,}\\big ( r^{\\frac{1}{3}},1,r^{-\\frac{1}{3}} \\big ) \\Phi (rz;r)e^{-\\Theta (rz)}.$ The jumps for $T$ on $(-x_{m},x_{m})$ are given by $& T_{+}(z) = T_{-}(z) \\begin{pmatrix}e^{\\theta _{2}(rz)-\\theta _{1}(rz)} & s_{j} & s_{j}e^{\\theta _{2}(rz)-\\theta _{3}(rz)} \\\\0 & e^{\\theta _{1}(rz)-\\theta _{2}(rz)} & 0 \\\\0 & 0 & 1\\end{pmatrix}, & & z \\in (x_{j-1},x_{j}), \\\\& T_{+}(z) = T_{-}(z) \\begin{pmatrix}e^{\\theta _{3,+}(rz)-\\theta _{3,-}(rz)} & s_{j}e^{\\theta _{2,-}(rz)-\\theta _{2,+}(rz)} & s_{j} \\\\0 & 1 & 0 \\\\0 & 0 & e^{\\theta _{3,-}(rz)-\\theta _{3,+}(rz)}\\end{pmatrix}, & & z \\in (-x_{j},-x_{j-1}),$ where $j=1,\\ldots ,m$ , $x_{0}:=0$ , and where we have used $\\theta _{3,+}(z) = \\theta _{2,-}(z), \\quad \\theta _{1,+}(z) = \\theta _{3,-}(z), \\quad \\theta _{1,-}(z) = \\theta _{2,+}(z), \\qquad z<0.$ As $z \\rightarrow \\infty $ , $\\pm \\text{\\upshape Im\\,}z > 0$ , we have $T(z) = \\bigg ( I+\\frac{T_{1}}{z} +{\\mathcal {O}}(z^{-2}) \\bigg ) \\text{\\upshape diag\\,}\\Big ( z^{-\\frac{1}{3}},1,z^{\\frac{1}{3}} \\Big )L_{\\pm }$ where $T_{1} = \\frac{1}{r} \\text{\\upshape diag\\,}\\big ( r^{\\frac{1}{3}},1,r^{-\\frac{1}{3}} \\big ) \\Phi _{1} \\text{\\upshape diag\\,}\\big ( r^{-\\frac{1}{3}},1,r^{\\frac{1}{3}} \\big )$ ." 
], [ "Second transformation: $T \\rightarrow S$", "For each $j=1,\\ldots ,m$ , let $\\gamma _{j,+}$ and $\\gamma _{j,-}$ be open curves, lying in the upper and lower half planes respectively, starting at $x_{j-1}$ and ending at $x_{j}$ .", "We also orient the open curves $\\gamma _{-j,+}:=-\\gamma _{j,-}$ and $\\gamma _{-j,-}:=-\\gamma _{j,+}$ from $-x_{j}$ to $-x_{j-1}$ .", "Let $\\gamma _{m+1,+}:=\\Sigma _{1}^{(1)}, \\quad \\gamma _{m+1,-}:=\\Sigma _{5}^{(1)}, \\quad \\gamma _{-m-1,+}:=\\Sigma _{2}^{(1)}, \\quad \\gamma _{-m-1,-}:=\\Sigma _{4}^{(1)},$ $s_{m+1}:=1$ , $x_{m+1}:=+\\infty $ and for $j=1,\\ldots ,m,m+1$ , define $& J_{\\gamma _{j,-}}(z) = \\begin{pmatrix}1 & 0 & 0 \\\\\\frac{e^{\\theta _{1}(rz)-\\theta _{2}(rz)}}{s_{j}} & 1 & -e^{\\theta _{1}(rz)-\\theta _{3}(rz)} \\\\0 & 0 & 1\\end{pmatrix}, & & J_{\\gamma _{j,+}}(z) = \\begin{pmatrix}1 & 0 & 0 \\\\\\frac{e^{\\theta _{2}(rz)-\\theta _{1}(rz)}}{s_{j}} & 1 & e^{\\theta _{2}(rz)-\\theta _{3}(rz)} \\\\0 & 0 & 1\\end{pmatrix}, \\\\& J_{\\gamma _{-j,+}}(z) = \\begin{pmatrix}1 & 0 & 0 \\\\0 & 1 & 0 \\\\\\frac{e^{\\theta _{3}(rz)-\\theta _{1}(rz)}}{s_{j}} & e^{\\theta _{3}(rz)-\\theta _{2}(rz)} & 1\\end{pmatrix}, & & J_{\\gamma _{-j,-}}(z) = \\begin{pmatrix}1 & 0 & 0 \\\\0 & 1 & 0 \\\\\\frac{e^{\\theta _{3}(rz)-\\theta _{2}(rz)}}{s_{j}} & -e^{\\theta _{3}(rz)-\\theta _{1}(rz)} & 1\\end{pmatrix}.$ The next transformation is defined by $S(z) = T(z){\\left\\lbrace \\begin{array}{ll}J_{\\gamma _{j,-}}(z), & \\text{\\upshape Im\\,}z < 0 \\mbox{ and } z \\mbox{ above } \\gamma _{j,-}, \\quad \\hspace{5.69046pt} j \\in \\lbrace 1,\\ldots ,m\\rbrace , \\\\J_{\\gamma _{j,+}}(z)^{-1}, & \\text{\\upshape Im\\,}z > 0 \\mbox{ and } z \\mbox{ below } \\gamma _{j,+}, \\quad \\hspace{5.69046pt} j \\in \\lbrace 1,\\ldots ,m\\rbrace , \\\\J_{\\gamma _{-j,+}}(z)^{-1}, & \\text{\\upshape Im\\,}z > 0 \\mbox{ and } z \\mbox{ below } \\gamma _{-j,+}, \\quad j \\in \\lbrace 1,\\ldots ,m\\rbrace , \\\\J_{\\gamma _{-j,-}}(z), & \\text{\\upshape Im\\,}z < 0 \\mbox{ and } z \\mbox{ above } \\gamma _{-j,-}, \\quad j \\in \\lbrace 1,\\ldots ,m\\rbrace , \\\\I, & \\mbox{otherwise}.\\end{array}\\right.", "}$ $S$ satisfies the following RH problem." 
], [ "RH problem for $S$", "(a) $S: \\mathbb {C}\\setminus \\Sigma _{S} \\rightarrow \\mathbb {C}^{3\\times 3}$ is analytic, where $\\Sigma _{S}:=(-\\infty ,+\\infty ) \\cup \\bigcup _{j=1}^{m+1} \\big ( \\gamma _{j,-} \\cup \\gamma _{j,+} \\cup \\gamma _{-j,+} \\cup \\gamma _{-j,-} \\big )$ .", "(b) For $z \\in \\Sigma _{S}\\setminus \\cup _{j=0}^{m}\\lbrace -x_{j},x_{j}\\rbrace $ , $S_{+}(z) = S_{-}(z)J_{S}(z)$ , where $& J_{S}(z)=J_{\\gamma _{j,-}}(z), & & z \\in \\gamma _{j,-}, & & J_{S}(z)=J_{\\gamma _{j,+}}(z), & & z \\in \\gamma _{j,+}, \\\\& J_{S}(z)=J_{\\gamma _{-j,+}}(z), & & z \\in \\gamma _{-j,+}, & & J_{S}(z)=J_{\\gamma _{-j,-}}(z), & & z \\in \\gamma _{-j,-} \\\\& J_{S}(z)= \\begin{pmatrix}0 & s_{j} & 0 \\\\-s_{j}^{-1} & 0 & 0 \\\\0 & 0 & 1\\end{pmatrix}, & & z \\in (x_{j-1},x_{j}), & & J_{S}(z)= \\begin{pmatrix}0 & 0 & s_{j} \\\\0 & 1 & 0 \\\\-s_{j}^{-1} & 0 & 0\\end{pmatrix}, & & z \\in (-x_{j},-x_{j-1}),$ where $j=1,\\ldots ,m,m+1$ .", "(c) As $z \\rightarrow \\infty $ , $\\pm \\text{\\upshape Im\\,}z > 0$ , we have $S(z) = \\bigg ( I+\\frac{T_{1}}{z} +{\\mathcal {O}}(z^{-2}) \\bigg ) \\text{\\upshape diag\\,}\\Big ( z^{-\\frac{1}{3}},1,z^{\\frac{1}{3}} \\Big )L_{\\pm }.$ (d) As $z \\rightarrow z_{\\star }\\in \\cup _{j=1}^{m}\\lbrace -x_{j},x_{j}\\rbrace $ , we have $S(z) = {\\mathcal {O}}(\\log (z-z_{\\star }))$ .", "As $z \\rightarrow 0$ , $S(z) = {\\mathcal {O}}(1)$ .", "(e) $S$ satisfies the symmetry $S(z) = -\\text{\\upshape diag\\,}(1,-1,1)S(-z)\\mathcal {B}$ , where $\\mathcal {B}$ is defined in (REF )." ], [ "Global parametrix", "Using the definitions of $\\theta _{1},\\theta _{2},\\theta _{3}$ given in (), it is easily checked that $J_{S}(z) \\rightarrow I$ as $r \\rightarrow \\infty $ for each $z \\in \\cup _{j=1}^{m+1} \\big ( \\gamma _{j,-} \\cup \\gamma _{j,+} \\cup \\gamma _{-j,+} \\cup \\gamma _{-j,-} \\big )$ .", "The following RH problem, whose solution is denoted $N$ and called the global parametrix, has the same jump conditions on $(-\\infty ,+\\infty )$ than the RH problem for $S$ , and no other jumps.", "We will show in Subsection REF that $N$ is a good approximation to $S$ outside small neighborhoods of $\\cup _{j=0}^{m}\\lbrace -x_{j},x_{j}\\rbrace $ .", "The RH problem for $N$ is as follows." 
], [ "RH problem for $N$", "(a) $N:\\mathbb {C}\\setminus (-\\infty ,+\\infty ) \\rightarrow \\mathbb {C}^{3\\times 3}$ is analytic.", "(b) $N$ satisfies the following jump relations: $& N_{+}(z) = N_{-}(z) \\begin{pmatrix}0 & 0 & s_{j} \\\\0 & 1 & 0 \\\\-s_{j}^{-1} & 0 & 0\\end{pmatrix}, & & z \\in (-x_{j},-x_{j-1}), \\quad j=1,\\ldots ,m,m+1 \\\\& N_{+}(z) = N_{-}(z) \\begin{pmatrix}0 & s_{j} & 0 \\\\-s_{j}^{-1} & 0 & 0 \\\\0 & 0 & 1\\end{pmatrix}, & & z \\in (x_{j-1},x_{j}), \\hspace{25.6073pt} j=1,\\ldots ,m,m+1.$ (c) As $z \\rightarrow \\infty $ , $\\pm \\text{\\upshape Im\\,}z >0$ , we have $N(z) = \\bigg ( I + \\frac{1}{z}N_{1} + {\\mathcal {O}}(z^{-2}) \\bigg ) \\text{\\upshape diag\\,}(z^{-\\frac{1}{3}}, 1, z^{\\frac{1}{3}})L_{\\pm },$ for a certain matrix $N_{1}$ .", "(d) As $z \\rightarrow z_{\\star }\\in \\cup _{j=1}^{m}\\lbrace -x_{j},x_{j}\\rbrace $ , $N(z) = {\\mathcal {O}}(1)$ .", "As $z \\rightarrow 0$ , $N(z)={\\mathcal {O}}(1)\\text{\\upshape diag\\,}(z^{-\\frac{1}{3}},1,z^{\\frac{1}{3}}){\\mathcal {O}}(1)$ .", "(e) $N$ satisfies the symmetry $N(z) = -\\text{\\upshape diag\\,}(1,-1,1)N(-z)\\mathcal {B}$ .", "For $m=1$ , the above RH problem was solved explicitly in [24].", "Let us define $\\beta _{j} := \\frac{1}{2\\pi i}u_{j} = \\frac{1}{2\\pi i}\\log \\frac{s_{j}}{s_{j+1}}, \\qquad j=1,\\ldots ,m.$ Inspired by [24], we consider three functions $d_{1},d_{2},d_{3}$ defined by $& d_{1}(z) = {\\left\\lbrace \\begin{array}{ll}\\lambda (z^{\\frac{1}{3}}), & \\text{\\upshape Im\\,}z >0, \\\\\\lambda (\\omega ^{-1}z^{\\frac{1}{3}}), & \\text{\\upshape Im\\,}z <0,\\end{array}\\right.}", "& & d_{2}(z) = {\\left\\lbrace \\begin{array}{ll}\\lambda (\\omega ^{-1}z^{\\frac{1}{3}}), & \\text{\\upshape Im\\,}z >0, \\\\\\lambda (z^{\\frac{1}{3}}), & \\text{\\upshape Im\\,}z <0,\\end{array}\\right.}", "& & d_{3}(z) = \\lambda (\\omega z^{\\frac{1}{3}}),$ where $\\lambda (z) = \\prod _{j=1}^{m}\\bigg ( \\frac{z^{2}-\\omega x_{j}^{2/3}}{z^{2}-x_{j}^{2/3}} \\bigg )^{\\beta _{j}}, \\qquad z \\in \\mathbb {C}\\setminus \\big ( (-x_{m}^{\\frac{1}{3}},x_{m}^{\\frac{1}{3}})\\cup \\omega ^{-1}(-x_{m}^{\\frac{1}{3}},x_{m}^{\\frac{1}{3}})\\big ).$ The branch structure for $\\lambda $ is such that $\\lambda (z) = 1+{\\mathcal {O}}(z^{-1})$ as $z \\rightarrow \\infty $ , and $\\lambda _{+}(z) = \\lambda _{-}(z) {\\left\\lbrace \\begin{array}{ll}s_{j}, & z \\in (-x_{j}^{\\frac{1}{3}},-x_{j-1}^{\\frac{1}{3}})\\cup e^{-\\frac{2\\pi i}{3}}(x_{j-1}^{\\frac{1}{3}},x_{j}^{\\frac{1}{3}}), \\quad j \\in \\lbrace 1,\\ldots ,m\\rbrace , \\\\s_{j}^{-1}, & z \\in e^{\\frac{\\pi i}{3}}(x_{j-1}^{\\frac{1}{3}},x_{j}^{\\frac{1}{3}})\\cup (x_{j-1}^{\\frac{1}{3}},x_{j}^{\\frac{1}{3}}), \\hspace{35.56593pt} j \\in \\lbrace 1,\\ldots ,m\\rbrace ,\\end{array}\\right.", "}$ where the boundary values $\\lambda _{+}$ and $\\lambda _{-}$ are taken with respect to the orientation of the contour as stated in (REF ).", "Using (REF )–(REF ), it can be verified that $& N(z) := C_{N} \\text{\\upshape diag\\,}(z^{-\\frac{1}{3}},1,z^{\\frac{1}{3}}) L_{\\pm } \\text{\\upshape diag\\,}(d_{1}(z),d_{2}(z),d_{3}(z)), \\qquad \\pm \\text{\\upshape Im\\,}z >0, $ is the unique solution to the RH problem for $N$ , where $& C_{N} := \\begin{pmatrix}1 & 0 & 0 \\\\0 & 1 & 0 \\\\-i\\sqrt{3} \\sum _{j=1}^{m} \\beta _{j}x_{j}^{2/3} & 0 & 1\\end{pmatrix}.", "$ It the next subsections we compute more detailed asymptotic expansions than those stated in conditions (c) and (d) of the RH problem for $N$ ." 
], [ "Asymptotics of $N(z)$ as {{formula:afec8de2-a6e1-4e2e-8f3c-9bca5da9256e}}", "As $z \\rightarrow \\infty $ , $\\pm \\text{\\upshape Im\\,}z >0$ , we have $N(z) = \\bigg ( I + \\frac{1}{z}N_{1} + {\\mathcal {O}}(z^{-2}) \\bigg ) \\text{\\upshape diag\\,}(z^{-\\frac{1}{3}}, 1, z^{\\frac{1}{3}})L_{\\pm }$ where $N_{1}$ is of the form $N_{1}=\\begin{pmatrix}0 & i \\sqrt{3} \\sum _{j=1}^{m} \\beta _{j}x_{j}^{2/3} & 0 \\\\\\star & 0 & i \\sqrt{3} \\sum _{j=1}^{m} \\beta _{j}x_{j}^{2/3} \\\\0 & \\star & 0\\end{pmatrix}.$ The entries $(N_{1})_{21}$ and $(N_{1})_{32}$ can also be computed explicitly, but their expressions are longer and not important for us." ], [ "Asymptotics of $N(z)$ as {{formula:2111e879-9465-47ed-94ae-070ed0f73632}} , {{formula:6365efea-9ec1-4bd2-8ebe-505ae8467300}}", "As $z \\rightarrow x_{j}$ , $\\text{\\upshape Im\\,}z > 0$ , $j=1,\\ldots ,m$ , $& d_{1}(z) = d_{1,x_{j}}^{(0)} (z-x_{j})^{-\\beta _{j}}\\big ( 1 + d_{1,x_{j}}^{(1)}(z-x_{j}) + {\\mathcal {O}}((z-x_{j})^{2}) \\big ), \\\\& d_{1,x_{j}}^{(0)} = \\bigg ( \\frac{3\\sqrt{3}x_{j}}{2} \\bigg )^{\\beta _{j}}e^{-\\frac{\\pi i \\beta _{j}}{6}} \\prod _{\\begin{array}{c}k=1\\\\k \\ne j\\end{array}}^{m} \\frac{(x_{j}^{2/3}-\\omega x_{k}^{2/3})^{\\beta _{k}}}{(x_{j}^{2/3}-x_{k}^{2/3})_{+}^{\\beta _{k}}}, \\\\& d_{1,x_{j}}^{(1)} = \\frac{(\\omega -5)\\beta _{j}}{6(\\omega -1)x_{j}} + \\sum _{\\begin{array}{c}k=1 \\\\ k \\ne j\\end{array}}^{m} \\frac{2(\\omega -1)x_{k}^{2/3}\\beta _{k}}{3x_{j}^{1/3}(x_{j}^{2/3}-x_{k}^{2/3})(x_{j}^{2/3}-\\omega x_{k}^{2/3})}, \\\\& d_{2}(z) = d_{2,x_{j}}^{(0)} (z-x_{j})^{\\beta _{j}}\\big ( 1 + d_{2,x_{j}}^{(1)}(z-x_{j}) + {\\mathcal {O}}((z-x_{j})^{2}) \\big ), \\\\& d_{2,x_{j}}^{(0)} = \\bigg ( \\frac{2}{3\\sqrt{3}x_{j}} \\bigg )^{\\beta _{j}}e^{-\\frac{\\pi i \\beta _{j}}{6}}\\prod _{\\begin{array}{c}k=1\\\\k \\ne j\\end{array}}^{m}\\frac{(x_{j}^{2/3}-x_{k}^{2/3})_{+}^{\\beta _{k}}}{(x_{j}^{2/3}-\\omega ^{2} x_{k}^{2/3})^{\\beta _{k}}}, \\\\& d_{2,x_{j}}^{(1)} = \\frac{(1-5\\omega )\\beta _{j}}{6(\\omega -1)x_{j}} + \\sum _{\\begin{array}{c}k=1 \\\\ k \\ne j\\end{array}}^{m} \\frac{2(1-\\omega ^{2})x_{k}^{2/3}\\beta _{k}}{3x_{j}^{1/3}(x_{j}^{2/3}-x_{k}^{2/3})(x_{j}^{2/3}-\\omega ^{2} x_{k}^{2/3})}, \\\\& d_{3}(z) = d_{3,x_{j}}^{(0)} \\big ( 1 + d_{3,x_{j}}^{(1)}(z-x_{j}) + {\\mathcal {O}}((z-x_{j})^{2}) \\big ), \\\\& d_{3,x_{j}}^{(0)} = e^{\\frac{\\pi i \\beta _{j}}{3}} \\prod _{\\begin{array}{c}k=1\\\\k \\ne j\\end{array}}^{m}\\frac{(x_{j}^{2/3}-\\omega ^{2}x_{k}^{2/3})^{\\beta _{k}}}{(x_{j}^{2/3}-\\omega x_{k}^{2/3})^{\\beta _{k}}}, \\\\& d_{3,x_{j}}^{(1)} = \\frac{2(\\omega + 1)\\beta _{j}}{3(\\omega -1)x_{j}} + \\sum _{\\begin{array}{c}k=1 \\\\ k \\ne j\\end{array}}^{m} \\frac{2(\\omega ^{2}-\\omega )x_{k}^{2/3}\\beta _{k}}{3x_{j}^{1/3}(x_{j}^{2/3}-\\omega x_{k}^{2/3})(x_{j}^{2/3}-\\omega ^{2} x_{k}^{2/3})}.$ In the above asymptotic expansions, all branches are the principal ones: for example, for the product appearing in $d_{1,x_{j}}^{(0)}$ , we have $& \\Bigg |\\prod _{\\begin{array}{c}k=1\\\\k \\ne j\\end{array}}^{m} \\frac{(x_{j}^{2/3}-\\omega x_{k}^{2/3})^{\\beta _{k}}}{(x_{j}^{2/3}-x_{k}^{2/3})_{+}^{\\beta _{k}}} \\Bigg | = \\exp \\Bigg ( -\\sum _{\\begin{array}{c}k=1\\\\ k \\ne j\\end{array}}^{m} i\\beta _{k} \\arctan \\frac{\\sqrt{3}x_{k}^{2/3}}{x_{k}^{2/3}+2x_{j}^{2/3}} - \\sum _{k=j+1}^{m} \\pi i \\beta _{k} \\Bigg ), \\\\& \\arg \\prod _{\\begin{array}{c}k=1\\\\k \\ne j\\end{array}}^{m} \\frac{(x_{j}^{2/3}-\\omega x_{k}^{2/3})^{\\beta 
_{k}}}{(x_{j}^{2/3}-x_{k}^{2/3})_{+}^{\\beta _{k}}} = -\\sum _{\\begin{array}{c}k=1\\\\ k \\ne j\\end{array}}^{m} i \\beta _{k} \\log \\frac{|x_{j}^{2/3}-\\omega x_{k}^{2/3}|}{|x_{j}^{2/3}- x_{k}^{2/3}|} \\mod {2}\\pi .$ Hence, as $z \\rightarrow x_{j}$ , $\\text{\\upshape Im\\,}z > 0$ , $j=1,\\ldots ,m$ , $N(z) = (N_{x_{j}}^{(0)} + (z-x_{j})N_{x_{j}}^{(1)} + {\\mathcal {O}}\\big ( (z-x_{j})^{2} \\big ))(z-x_{j})^{-\\beta _{j}\\sigma _{3,1}},$ where $\\sigma _{3,1} = \\text{\\upshape diag\\,}(1,-1,0)$ and $& N_{x_{j}}^{(0)} = C_{N} \\text{\\upshape diag\\,}(x_{j}^{-1/3},1,x_{j}^{1/3}) L_{+} \\text{\\upshape diag\\,}(d_{1,x_{j}}^{(0)},d_{2,x_{j}}^{(0)},d_{3,x_{j}}^{(0)}), \\\\& N_{x_{j}}^{(1)} = C_{N} \\Big ( \\text{\\upshape diag\\,}(x_{j}^{-1/3},1,x_{j}^{1/3}) L_{+} \\text{\\upshape diag\\,}(d_{1,x_{j}}^{(0)}d_{1,x_{j}}^{(1)},d_{2,x_{j}}^{(0)}d_{2,x_{j}}^{(1)},d_{3,x_{j}}^{(0)}d_{3,x_{j}}^{(1)}) \\nonumber \\\\& \\hspace{48.36958pt} + \\text{\\upshape diag\\,}(-\\tfrac{1}{3}x_{j}^{-4/3},0,\\tfrac{1}{3}x_{j}^{-2/3}) L_{+} \\text{\\upshape diag\\,}(d_{1,x_{j}}^{(0)},d_{2,x_{j}}^{(0)},d_{3,x_{j}}^{(0)}) \\Big ).", "\\nonumber $" ], [ "Asymptotics of $N(z)$ as {{formula:7624cc8e-c046-4043-9927-9eb7886cc3ae}}", "As $z \\rightarrow 0$ , $\\text{\\upshape Im\\,}z>0$ , $& d_{\\ell }(z) = d_{\\ell ,0}^{(0)}(1+d_{\\ell ,0}^{(1)}z^{\\frac{2}{3}} + {\\mathcal {O}}(z^{\\frac{4}{3}})), \\qquad \\ell = 1,2,3, \\\\& d_{1,0}^{(0)} = e^{-\\pi i \\sum _{k=1}^{m}\\beta _{k}}(-\\omega )^{\\sum _{k=1}^{m}\\beta _{k}} = s_{1}^{-\\frac{2}{3}}, \\qquad d_{2,0}^{(0)} = d_{3,0}^{(0)} = \\omega ^{\\sum _{k=1}^{m}\\beta _{k}} = s_{1}^{\\frac{1}{3}}, \\\\& d_{1,0}^{(1)} = (1-\\omega ^{2})\\sum _{k=1}^{m} \\beta _{k}x_{k}^{-\\frac{2}{3}}, \\quad d_{2,0}^{(1)} = (\\omega -1)\\sum _{k=1}^{m} \\beta _{k}x_{k}^{-\\frac{2}{3}}, \\quad d_{1,0}^{(1)} = (\\omega ^{2}-\\omega )\\sum _{k=1}^{m} \\beta _{k}x_{k}^{-\\frac{2}{3}},$ where the branches are the principal ones.", "As $z \\rightarrow 0$ , $\\text{\\upshape Im\\,}z>0$ , $N(z) = C_{N} \\text{\\upshape diag\\,}(z^{-\\frac{1}{3}},1,z^{\\frac{1}{3}}) L_{+} \\text{\\upshape diag\\,}(d_{1,0}^{(0)},d_{2,0}^{(0)},d_{3,0}^{(0)}) \\Big ( I + z^{\\frac{2}{3}}\\text{\\upshape diag\\,}(d_{1,0}^{(1)},d_{2,0}^{(1)},d_{3,0}^{(1)}) + {\\mathcal {O}}(z^{\\frac{4}{3}}) \\Big ).$" ], [ "Local parametrices.", "For each $p \\in \\lbrace -x_{m},\\ldots ,-x_{1},0,x_{1},\\ldots ,x_{m}\\rbrace $ , we let $\\mathcal {D}_{p}$ be a small open disk centered at $p$ .", "The local parametrix $P^{(p)}$ is defined inside $\\mathcal {D}_{p}$ , has the same jumps as $S$ in $\\mathcal {D}_{p}$ , and satisfies $S(z)P^{(p)}(z)^{-1}={\\mathcal {O}}(1)$ as $z \\rightarrow p$ .", "Furthermore, we require $P^{(p)}$ to satisfy the following matching condition with $P^{(\\infty )}$ on $\\partial \\mathcal {D}_{p}$ : $P^{(p)}(z) = (I+o(1))P^{(\\infty )}(z), \\qquad \\mbox{as } r \\rightarrow + \\infty ,$ uniformly for $z \\in \\partial \\mathcal {D}_{p}$ ." 
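, "As a quick check of the constants in the expansion of $N$ at the origin given above: since $\beta _{k} = \frac{u_{k}}{2\pi i}$ and $\sum _{k=1}^{m}u_{k} = \log s_{1}$ , and since $-\omega = e^{-\frac{\pi i}{3}}$ for the principal branch, one has $e^{-\pi i \sum _{k}\beta _{k}}(-\omega )^{\sum _{k}\beta _{k}} = e^{-\frac{1}{2}\sum _{k}u_{k}}\,e^{-\frac{1}{6}\sum _{k}u_{k}} = s_{1}^{-\frac{2}{3}}$ and $\omega ^{\sum _{k}\beta _{k}} = e^{\frac{1}{3}\sum _{k}u_{k}} = s_{1}^{\frac{1}{3}}$ , which explains the identities $d_{1,0}^{(0)} = s_{1}^{-\frac{2}{3}}$ and $d_{2,0}^{(0)} = d_{3,0}^{(0)} = s_{1}^{\frac{1}{3}}$ stated in ()."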
], [ "Local parametrix near $x_{j}$ , {{formula:94ca4b81-c092-433e-bd56-51d4214a47cf}}", "Since the (1,2) and (2,1) entries of $J_{S}(z)$ have each a discontinuity at $z=x_{j}$ , we can follow [36] and build $P^{(x_{j})}$ using the model RH problem $\\Phi _{\\mathrm {HG}}$ which is presented in Appendix .", "We also refer to [24] for more details about this construction.", "The local parametrix $P^{(x_{j})}$ is of the form $& P^{(x_{j})}(z) = E_{x_{j}}(z) \\begin{pmatrix}\\Phi _{\\mathrm {HG},11}(r^{\\frac{4}{3}}f_{x_{j}}(z);\\beta _{j}) & \\Phi _{\\mathrm {HG}, 12}(r^{\\frac{4}{3}}f_{x_{j}}(z);\\beta _{j}) & 0 \\\\\\Phi _{\\mathrm {HG},21}(r^{\\frac{4}{3}}f_{x_{j}}(z);\\beta _{j}) & \\Phi _{\\mathrm {HG}, 22}(r^{\\frac{4}{3}}f_{x_{j}}(z);\\beta _{j}) & 0 \\\\0 & 0 & 1\\end{pmatrix} \\\\& \\hspace{51.21504pt} \\times (s_{j}s_{j+1})^{-\\frac{\\sigma _{3,1}}{4}}e^{\\pm \\frac{1}{2}(\\theta _{2}(rz)-\\theta _{1}(rz))\\sigma _{3,1}}A_{x_{j}}(z), \\nonumber \\\\& A_{x_{j}}(z)= {\\left\\lbrace \\begin{array}{ll}\\begin{pmatrix}1 & 0 & 0 \\\\0 & 1 & e^{\\theta _{2}(rz)-\\theta _{3}(rz)} \\\\0 & 0 & 1\\end{pmatrix}, & z \\in \\lbrace z:\\text{\\upshape Im\\,}z > 0\\rbrace \\cap \\mathcal {D}_{x_{j}} \\setminus (\\Omega _{j,+}\\cup \\Omega _{j+1,+}), \\\\\\begin{pmatrix}1 & 0 & 0 \\\\0 & 1 & e^{\\theta _{1}(rz)-\\theta _{3}(rz)} \\\\0 & 0 & 1\\end{pmatrix}, & z \\in \\lbrace z:\\text{\\upshape Im\\,}z < 0\\rbrace \\cap \\mathcal {D}_{x_{j}} \\setminus (\\Omega _{j,-}\\cup \\Omega _{j+1,-}), \\\\I, & \\mbox{otherwise},\\end{array}\\right.}", "\\nonumber $ where $\\sigma _{3,1}=\\text{\\upshape diag\\,}(1,-1,0)$ , $\\pm $ stands for $\\pm \\text{\\upshape Im\\,}z > 0$ , $E_{x_{j}}$ is analytic in $\\mathcal {D}_{x_{j}}$ and given by $E_{x_{j}}(z) = N(z) (s_{j}s_{j+1})^{\\frac{\\sigma _{3,1}}{4}} \\left\\lbrace \\begin{array}{l l}\\sqrt{\\frac{s_{j+1}}{s_{j}}}^{\\sigma _{3,1}}, & \\text{\\upshape Im\\,}z > 0 \\\\\\begin{pmatrix}0 & 1 & 0 \\\\-1 & 0 & 0 \\\\0 & 0 & 1\\end{pmatrix}, & \\text{\\upshape Im\\,}z <0\\end{array} \\right\\rbrace e^{-\\frac{1}{2}(\\theta _{2}(rx_{j})-\\theta _{1}(rx_{j}))\\sigma _{3,1}} (r^{\\frac{4}{3}}f_{x_{j}}(z))^{\\beta _{j}\\sigma _{3,1}},$ and $f_{x_{j}}$ is given by $f_{x_{j}}(z) & = r^{-\\frac{4}{3}} \\big [ (\\theta _{2}(rz)-\\theta _{1}(rz)) - (\\theta _{2}(rx_{j})-\\theta _{1}(rx_{j})) \\big ] = \\frac{i \\sqrt{3}}{4} \\Big ( 3(z^{\\frac{4}{3}}-x_{j}^{\\frac{4}{3}}) - \\frac{2\\rho }{r^{\\frac{2}{3}}}(z^{\\frac{2}{3}} - x_{j}^{\\frac{2}{3}}) \\Big ).$ It is easily checked that $A_{x_{j}}(z)$ is exponentially small as $r \\rightarrow + \\infty $ uniformly for $z \\in \\mathcal {D}_{x_{j}}$ , and that $f_{x_{j}}(x_{j}) = 0, \\quad f_{x_{j}}^{\\prime }(x_{j}) = i \\bigg ( \\sqrt{3}x_{j}^{1/3} - \\frac{\\rho }{\\sqrt{3} x_{j}^{1/3} r^{2/3}} \\bigg ), \\quad f_{x_{j}}^{\\prime \\prime }(x_{j}) = i \\bigg ( \\frac{1}{\\sqrt{3}x_{j}^{2/3}} + \\frac{\\rho }{3\\sqrt{3} x_{j}^{4/3}r^{2/3}} \\bigg ).$ Using (REF ), we obtain $& E_{x_{j}}(x_{j}) = N_{x_{j}}^{(0)} \\sqrt{s_{j+1}}^{\\sigma _{3,1}} e^{-\\frac{1}{2}(\\theta _{2}(rx_{j})-\\theta _{1}(rx_{j}))\\sigma _{3,1}} (r^{\\frac{4}{3}}|f^{\\prime }(x_{j})|)^{\\beta _{j}\\sigma _{3,1}}, \\\\& E_{x_{j}}(x_{j})^{-1}E_{x_{j}}^{\\prime }(x_{j}) = \\begin{pmatrix}\\frac{f_{x_{j}}^{\\prime \\prime }(x_{j})}{2f_{x_{j}}^{\\prime }(x_{j})}\\beta _{j} + d_{1,x_{j}}^{(1)} & \\frac{i\\mathfrak {c}_{j}^{-2}}{3\\sqrt{3}x_{j}}\\frac{d_{2,x_{j}}^{(0)}}{d_{1,x_{j}}^{(0)}} & -\\frac{i\\mathfrak 
{c}_{j}^{-1}}{3\\sqrt{3}x_{j}}\\frac{d_{3,x_{j}}^{(0)}}{d_{1,x_{j}}^{(0)}} \\\\-\\frac{i\\mathfrak {c}_{j}^{2}}{3\\sqrt{3}x_{j}}\\frac{d_{1,x_{j}}^{(0)}}{d_{2,x_{j}}^{(0)}} & -\\frac{f_{x_{j}}^{\\prime \\prime }(x_{j})}{2f_{x_{j}}^{\\prime }(x_{j})}\\beta _{j} + d_{2,x_{j}}^{(1)} & -\\frac{i\\mathfrak {c}_{j}}{3\\sqrt{3}x_{j}}\\frac{d_{3,x_{j}}^{(0)}}{d_{2,x_{j}}^{(0)}} \\\\\\frac{i\\mathfrak {c}_{j}}{3\\sqrt{3}x_{j}}\\frac{d_{1,x_{j}}^{(0)}}{d_{3,x_{j}}^{(0)}} & \\frac{i\\mathfrak {c}_{j}^{-1}}{3\\sqrt{3}x_{j}}\\frac{d_{2,x_{j}}^{(0)}}{d_{3,x_{j}}^{(0)}} & d_{3,x_{j}}^{(1)}\\end{pmatrix}, \\\\& \\mathfrak {c}_{j} = \\sqrt{s_{j+1}}e^{-\\frac{1}{2}(\\theta _{2}(rx_{j})-\\theta _{1}(rx_{j}))}\\big ( r^{\\frac{4}{3}}|f_{x_{j}}^{\\prime }(x_{j})| \\big )^{\\beta _{j}}.", "$ Using (REF ), we obtain $P^{(x_{j})}(z)N(z)^{-1} = I + \\frac{1}{r^{\\frac{4}{3}}f_{x_{j}}(z)} E_{x_{j}}(z)\\begin{pmatrix}\\Phi _{\\mathrm {HG},1}(\\beta _{j})_{11} & \\Phi _{\\mathrm {HG},1}(\\beta _{j})_{12} & 0 \\\\\\Phi _{\\mathrm {HG},1}(\\beta _{j})_{21} & \\Phi _{\\mathrm {HG},1}(\\beta _{j})_{22} & 0 \\\\0 & 0 & 1\\end{pmatrix}E_{x_{j}}(z)^{-1} + {\\mathcal {O}}(r^{-\\frac{8}{3}}),$ as $r \\rightarrow + \\infty $ uniformly for $z \\in \\partial \\mathcal {D}_{x_{j}}$ ." ], [ "Local parametrix near $-x_{j}$ , {{formula:bb978529-01cb-4539-9596-7f4d6ff9df9f}}", "$P^{(-x_{j})}$ can be constructed in terms of $\\Phi _{\\mathrm {HG}}$ in a similar way as $P^{(x_{j})}$ .", "Alternatively, we can use the symmetry stated in condition (e) of the RH problem for $S$ .", "This observation saves us some effort and allows us to see immediately that $P^{(-x_{j})}(z) = -\\text{\\upshape diag\\,}(1,-1,1)P^{(x_{j})}(z)\\mathcal {B}, \\qquad z \\in \\mathcal {D}_{-x_{j}}.$" ], [ "Local parametrix near 0", "For the local parametrix $P^{(0)}$ , we need to use the model RH problem for $\\Psi $ from [8] (and which is recalled in Subsection REF for the convenience of the reader).", "Define $P^{(0)}(z) = E_{0}(z)\\Psi (rz) e^{-\\Theta (rz)}\\text{\\upshape diag\\,}(s_{1}^{-\\frac{2}{3}},s_{1}^{\\frac{1}{3}},s_{1}^{\\frac{1}{3}}),$ where $E_{0}$ is analytic inside $\\mathcal {D}_{0}$ and given by $E_{0}(z) = -\\sqrt{\\frac{3}{2\\pi }}e^{-\\frac{\\rho ^{2}}{6}}i N(z) \\text{\\upshape diag\\,}(s_{1}^{\\frac{2}{3}},s_{1}^{-\\frac{1}{3}},s_{1}^{-\\frac{1}{3}}) L_{\\pm }^{-1}\\text{\\upshape diag\\,}((rz)^{\\frac{1}{3}},1,(rz)^{-\\frac{1}{3}})\\Psi _{0}^{-1}, \\quad \\pm \\text{\\upshape Im\\,}z > 0,$ and $\\Psi _{0} = \\begin{pmatrix}1 & 0 & 0 \\\\0 & 1 & 0 \\\\\\kappa _{3}+\\frac{2\\rho }{3} & 0 & 1\\end{pmatrix}, \\qquad \\kappa _{3} = \\frac{\\rho ^{3}}{54}-\\frac{\\rho }{6}.$ In a similar way as in [24], we verify that $P^{(0)}$ has the same jumps as $S$ inside $\\mathcal {D}_{0}$ .", "It is also clear from condition (d) of the RH problem for $\\Psi $ that $P^{(0)}(z)$ remains bounded as $z \\rightarrow 0$ .", "Finally, using (REF ) and (REF ), we obtain $P^{(0)}(z)N(z)^{-1} = I + \\frac{1}{r^{\\frac{2}{3}}z} \\widehat{E}_{0}(z)\\widehat{\\Psi }_{1}\\widehat{E}_{0}(z)^{-1} + {\\mathcal {O}}(r^{-\\frac{4}{3}}), \\qquad \\widehat{\\Psi }_{1} = \\begin{pmatrix}0 & \\kappa _{3} & 0 \\\\0 & 0 & \\kappa _{3} + \\frac{\\rho }{3} \\\\0 & 0 & 0\\end{pmatrix},$ as $r \\rightarrow + \\infty $ uniformly for $z \\in \\partial \\mathcal {D}_{0}$ , where $\\widehat{E}_{0}(z) := N(z) \\text{\\upshape diag\\,}(s_{1}^{\\frac{2}{3}},s_{1}^{-\\frac{1}{3}},s_{1}^{-\\frac{1}{3}}) L_{\\pm }^{-1}\\text{\\upshape diag\\,}(z^{\\frac{1}{3}},1,z^{-\\frac{1}{3}}).$ For 
future reference, using (REF ) we note that $\\widehat{E}_{0}(0) = C_{N}$ ." ], [ "Final transformation", "Define $R(z) = {\\left\\lbrace \\begin{array}{ll}S(z)N(z)^{-1}, & z \\in \\mathbb {C}\\setminus \\big ( \\cup _{j=1}^{m}(\\mathcal {D}_{x_{j}}\\cup \\mathcal {D}_{-x_{j}}) \\cup \\mathcal {D}_{0} \\big ), \\\\S(z)P^{(p)}(z)^{-1}, & z \\in \\mathcal {D}_{p}, \\; p \\in \\lbrace -x_{m},\\ldots ,-x_{1},0,x_{1},\\ldots ,x_{m}\\rbrace .\\end{array}\\right.", "}$ Using the analysis of Subsections REF –REF , we conclude that $R$ is analytic in $\\cup _{j=1}^{m}(\\mathcal {D}_{x_{j}}\\cup \\mathcal {D}_{-x_{j}}) \\cup \\mathcal {D}_{0}$ , and therefore $R$ is analytic in $\\mathbb {C}\\setminus \\Sigma _{R}$ , where $\\Sigma _{R}:= \\cup _{j=1}^{m}(\\partial \\mathcal {D}_{x_{j}}\\cup \\partial \\mathcal {D}_{-x_{j}})\\cup \\partial \\mathcal {D}_{0} \\cup \\Sigma _{S} \\setminus \\big ( \\cup _{j=1}^{m}(\\mathcal {D}_{x_{j}}\\cup \\mathcal {D}_{-x_{j}}) \\cup \\mathcal {D}_{0}\\cup \\mathbb {R} \\big ).$ For convenience, we orient the boundaries of the $2m+1$ disks in the clockwise direction, and for $z \\in \\Sigma _{R}$ , we define $J_{R}(z):=R_{-}^{-1}(z)R_{+}(z)$ .", "Using the definitions () of $\\theta _{1},\\theta _{2},\\theta _{3}$ , condition (b) of the RH problem for $S$ and (REF ), we verify that $J_{R}(z) = I + {\\mathcal {O}}(e^{-cr}), \\qquad \\mbox{as } r \\rightarrow + \\infty \\mbox{ uniformly for } z \\in \\Sigma _{R}\\setminus \\big ( \\cup _{j=1}^{m}(\\mathcal {D}_{x_{j}}\\cup \\mathcal {D}_{-x_{j}}) \\cup \\mathcal {D}_{0} \\big )$ for a certain $c>0$ .", "Also, by (REF ), (REF ) and (REF ), we have $& J_{R}(z) = I + {\\mathcal {O}}(r^{-\\frac{4}{3}}), & & \\mbox{as } r \\rightarrow + \\infty , \\mbox{ uniformly for } z \\in \\cup _{j=1}^{m}(\\partial \\mathcal {D}_{x_{j}}\\cup \\partial \\mathcal {D}_{-x_{j}}), \\\\& J_{R}(z) = I + \\tfrac{J^{(1)}(z)}{r^{2/3}} + {\\mathcal {O}}(r^{-\\frac{4}{3}}), & & \\mbox{as } r \\rightarrow + \\infty , \\mbox{ uniformly for } z \\in \\partial \\mathcal {D}_{0},$ where $J^{(1)}(z) = \\frac{1}{z} \\widehat{E}_{0}(z)\\widehat{\\Psi }_{1}\\widehat{E}_{0}(z)^{-1}$ .", "Hence, $R$ satisfies a small norm RH problem [26], and we have $R(z) = I + R^{(1)}(z)r^{-\\frac{2}{3}} + {\\mathcal {O}}(r^{-\\frac{4}{3}}), \\qquad \\mbox{as } r \\rightarrow + \\infty ,$ uniformly for $z \\in \\mathbb {C}\\setminus \\Sigma _{R}$ , with $R^{(1)}(z) = \\frac{1}{2\\pi i}\\oint _{\\partial \\mathcal {D}_{0}} \\frac{J^{(1)}(x)dx}{x-z} = {\\left\\lbrace \\begin{array}{ll}\\frac{C_{N}\\widehat{\\Psi }_{1}C_{N}^{-1}}{z}, & z \\notin \\mathcal {D}_{0}, \\\\\\frac{C_{N}\\widehat{\\Psi }_{1}C_{N}^{-1}}{z}-J^{(1)}(z), & z \\in \\mathcal {D}_{0},\\end{array}\\right.", "}$ and where we recall that $\\partial \\mathcal {D}_{0}$ is oriented in the clockwise direction.", "(the method of [26] also implies that $R$ exists for all sufficiently large $r$ ; note however that here we already know from (REF ) and from $\\det (1-\\widetilde{\\mathcal {K}}^{\\mathrm {Pe}}_{\\rho })>0$ that $Y$ exists for all $r>0$ , which implies by the transformations $Y\\rightarrow \\Phi \\rightarrow T \\rightarrow S \\rightarrow R$ that $R$ also exists for all $r>0$ ).", "Finally, the same analysis as in [20] shows that for any $k_{1},\\ldots ,k_{m} \\in \\mathbb {N}_{\\ge 0}$ with $k_{1}+\\ldots +k_{m}\\ge 1$ , we have $\\partial _{u_{1}}^{k_{1}}\\ldots \\partial _{u_{m}}^{k_{m}}R(z) = \\partial _{u_{1}}^{k_{1}}\\ldots \\partial _{u_{m}}^{k_{m}}R^{(1)}(z)r^{-\\frac{2}{3}} + {\\mathcal 
{O}}((\\log r)^{k_{1}+\\ldots +k_{m}}r^{-\\frac{4}{3}}), \\qquad \\mbox{as } r \\rightarrow + \\infty ,$ uniformly for $z \\in \\mathbb {C}\\setminus \\Sigma _{R}$ ." ], [ "Asymptotic analysis of $\\Phi $ as {{formula:66a637d4-87ef-41f7-a027-7f4a39611a67}}", "The analysis of $\\Phi $ as $r \\rightarrow 0$ is much simpler than the large $r$ analysis of Section .", "Here we generalize [24] to an arbitrary $m$ .", "Let $\\delta >0$ be fixed.", "Define $\\widetilde{N}(z) := -\\sqrt{\\tfrac{3}{2\\pi }}e^{-\\frac{\\rho ^{2}}{6}}i \\Psi _{0}^{-1} \\Psi (z) \\times {\\left\\lbrace \\begin{array}{ll}J_{1}, & \\mbox{for } \\arg z < \\frac{\\pi }{4} \\mbox{ and } \\arg (z-rx_{m})>\\frac{\\pi }{4}, \\\\J_{2}, & \\mbox{for } \\arg z > \\frac{3\\pi }{4} \\mbox{ and } \\arg (z+rx_{m})<\\frac{3\\pi }{4}, \\\\J_{4}^{-1}, & \\mbox{for } \\arg z < -\\frac{3\\pi }{4} \\mbox{ and } \\arg (z+rx_{m})>-\\frac{3\\pi }{4}, \\\\J_{5}^{-1}, & \\mbox{for } \\arg z > -\\frac{\\pi }{4} \\mbox{ and } \\arg (z-rx_{m})<-\\frac{\\pi }{4},\\end{array}\\right.", "}$ and for $|z|<\\delta $ , define $& \\widetilde{P}^{(0)}(z) := -\\sqrt{\\tfrac{3}{2\\pi }}e^{-\\frac{\\rho ^{2}}{6}}i \\Psi _{0}^{-1} \\widetilde{\\Psi }(z) \\nonumber \\\\& \\times \\left( I + \\sum _{j=1}^{m} \\frac{s_{j}-s_{j+1}}{2\\pi i} \\Big ( \\log (z-rx_{j})-\\log (z+rx_{j}) \\Big ) \\begin{pmatrix}0 & 1 & 1 \\\\0 & 0 & 0 \\\\0 & 0 & 0\\end{pmatrix} \\right) {\\left\\lbrace \\begin{array}{ll}J_{1}^{-1}, & z \\in \\mathrm {I}, \\\\I, & z \\in \\mathrm {II}, \\\\J_{2}^{-1}, & z \\in \\mathrm {III}, \\\\J_{2}^{-1}J_{3}^{-1}, & z \\in \\mathrm {IV}, \\\\J_{1}^{-1}J_{0}^{-1}J_{5}^{-1}, & z \\in \\mathrm {V}, \\\\J_{1}^{-1}J_{0}^{-1}, & z \\in \\mathrm {VI},\\end{array}\\right.}", "$ where the principal branches are chosen for the log, and we recall that $s_{m+1}:=1$ , the regions $\\mathrm {I}$ , $\\mathrm {II}$ , $\\mathrm {III}$ , $\\mathrm {IV}$ , $\\mathrm {V}$ and $\\mathrm {VI}$ are shown in Figure REF , $\\widetilde{\\Psi }$ is defined in (REF ), and the matrices $J_{j}$ , $j=0,\\ldots ,5$ are defined in (REF ).", "Let $\\widetilde{R}(z) := {\\left\\lbrace \\begin{array}{ll}\\Phi (z)\\widetilde{N}(z)^{-1}, & |z| > \\delta , \\\\\\Phi (z)\\widetilde{P}^{(0)}(z)^{-1}, & |z|<\\delta .\\end{array}\\right.", "}$ The definitions (REF ) and (REF ) also ensure that $\\widetilde{R}$ has no jumps on $\\cup _{j=0}^{5}\\Sigma _{j}^{(r)}$ .", "Since $\\widetilde{P}_{-}^{(0)}(z)^{-1}\\widetilde{P}_{+}^{(0)}(z) = J_{5}J_{0}J_{1} \\begin{pmatrix}1 & s_{j}-1 & s_{j}-1 \\\\0 & 1 & 0 \\\\0 & 0 & 1\\end{pmatrix} = \\begin{pmatrix}1 & s_{j} & s_{j} \\\\0 & 1 & 0 \\\\0 & 0 & 1\\end{pmatrix} = \\Phi _{-}(z)^{-1}\\Phi _{+}(z)$ holds for all $z \\in (-rx_{j},-rx_{j-1})\\cup (rx_{j-1},rx_{j})$ , it follows that $\\widetilde{R}(z)$ is also analytic in $(-rx_{m},rx_{m})\\setminus \\cup _{j=1}^{m-1}\\lbrace -rx_{j},rx_{j}\\rbrace $ .", "Furthermore, from a direct inspection of (REF ) and (REF ), we see that the singularities of $\\widetilde{R}(z)$ at $z=-rx_{m},\\ldots ,-rx_{1},0,rx_{1},\\ldots ,rx_{m}$ are removable.", "Hence, $\\widetilde{R}(z)$ is analytic for $z \\in \\mathbb {C}\\setminus \\lbrace z:|z|=\\delta \\rbrace $ .", "Let us orient the circle $|z|=\\delta $ in the clockwise direction, and define $J_{\\widetilde{R}}:=\\widetilde{R}_{-}^{-1}\\widetilde{R}_{+}$ .", "By (REF ), $J_{\\widetilde{R}}(z)=\\widetilde{P}^{(0)}(z)\\widetilde{N}(z)^{-1}$ , and by (REF ), (REF ) and (REF ), $\\widetilde{R}(z) = I + {\\mathcal {O}}(z^{-1})$ as $z \\rightarrow \\infty $ .", "We also 
check using (REF ) and (REF ) that $J_{\\widetilde{R}}(z)=I+{\\mathcal {O}}(r)$ as $r \\rightarrow 0$ uniformly for $|z|=\\delta $ .", "Thus $\\widetilde{R}(z) = I + {\\mathcal {O}}(r), \\qquad \\mbox{as } r \\rightarrow 0 \\mbox{ uniformly for } z \\in \\mathbb {C}\\setminus \\lbrace z:|z|=\\delta \\rbrace .$" ], [ "Proof of Theorem ", "We already proved in Section that the functions $(p_{0},q_{0},\\lbrace p_{j,1},q_{j,1},p_{j,2},q_{j,2},p_{j,3},q_{j,3}\\rbrace _{j=1}^{m})$ defined by (REF )–() exist and satisfy (REF )–(REF ).", "In this subsection we complete the proof of Theorem REF by obtaining the asymptotic formulas (REF ) and (REF )." ], [ "Asymptotics of $p_{0}$ , {{formula:a3232a5f-6f8f-4cf1-acf0-25609f4ddb69}} , {{formula:419573e1-aa04-4f20-a0ed-ff84e70f5280}} and {{formula:523c4cf8-6749-46ff-940e-8abddfda1efe}} as {{formula:5ece7108-2442-4eeb-b724-72b859549632}}", "We first compute the asymptotics of $p_{0}$ and $q_{0}$ using (REF ).", "Using (REF ) and (REF ), we see that $R(z) = I + \\frac{R_{1}}{z}+{\\mathcal {O}}(z^{-2})$ as $z \\rightarrow \\infty $ , where $R_{1} = C_{N}\\widehat{\\Psi }_{1}C_{N}^{-1}r^{-\\frac{2}{3}} + {\\mathcal {O}}(r^{-\\frac{4}{3}}), \\qquad \\mbox{as } r \\rightarrow +\\infty .$ Inverting the transformations $\\Phi \\mapsto T \\mapsto S \\mapsto R$ in the region outside the disks using (REF ), (REF ) and (REF ), we get $\\Phi (rz) = \\text{\\upshape diag\\,}(r^{-\\frac{1}{3}}, 1, r^{\\frac{1}{3}})R(z)N(z)e^{\\Theta (rz)}, \\qquad z \\in \\mathbb {C}\\setminus \\big ( \\cup _{j=1}^{m}(\\mathcal {D}_{x_{j}}\\cup \\mathcal {D}_{-x_{j}}) \\cup \\mathcal {D}_{0} \\big ).$ From this expression, (REF ) and (REF ), we deduce that $\\Phi _{1} = r \\; \\text{\\upshape diag\\,}(r^{-\\frac{1}{3}}, 1, r^{\\frac{1}{3}}) (R_{1}+N_{1}) \\text{\\upshape diag\\,}(r^{\\frac{1}{3}}, 1, r^{-\\frac{1}{3}}),$ which implies by (REF ) and (REF ) that $& \\Phi _{1,12} = i \\sqrt{3} r^{\\frac{2}{3}}\\sum _{j=1}^{m}\\beta _{j}x_{j}^{\\frac{2}{3}} + \\frac{\\rho ^{3}}{54}- \\frac{\\rho }{6} + {\\mathcal {O}}(r^{-\\frac{2}{3}}), & & \\Phi _{1,23} = i \\sqrt{3} r^{\\frac{2}{3}}\\sum _{j=1}^{m}\\beta _{j}x_{j}^{\\frac{2}{3}} + \\frac{\\rho ^{3}}{54}+ \\frac{\\rho }{6} + {\\mathcal {O}}(r^{-\\frac{2}{3}}),$ as $r \\rightarrow + \\infty $ .", "Substituting these asymptotics in (REF ) gives the large $r$ asymptotics of $p_{0}$ and $q_{0}$ stated in (REF ) and ().", "We now compute the large $r$ asymptotics of $p_{j,k}$ , $j=1,\\ldots ,m$ , $k=1,2,3$ using the definitions ().", "It follows from (REF ) and (REF ) that $\\Phi _{j}^{(0)}(r) = \\lim _{\\begin{array}{c}z \\rightarrow x_{j}\\\\ z \\in \\mathrm {II}\\end{array}} \\Phi (rz)\\begin{pmatrix}1 & \\mathfrak {s}_{j}\\log (rz-rx_{j}) & \\mathfrak {s}_{j}\\log (rz-rx_{j}) \\\\0 & 1 & 0 \\\\0 & 0 & 1\\end{pmatrix}.$ For $z \\in \\mathcal {D}_{x_{j}}$ and $z$ outside the lenses, by (REF ), (REF ) and (REF ) we have $\\Phi (rz) = \\text{\\upshape diag\\,}(r^{-\\frac{1}{3}},1,r^{\\frac{1}{3}}) R(z) P^{(x_{j})}(z)e^{\\Theta (rz)}.$ Hence, using also (REF ), we get $& \\Phi _{j}^{(0)}(r) = \\text{\\upshape diag\\,}(r^{-\\frac{1}{3}},1,r^{\\frac{1}{3}})R(x_{j})E_{x_{j}}(x_{j}) \\lim _{\\begin{array}{c}z \\rightarrow x_{j}\\\\ z \\in \\mathrm {II}\\end{array}} \\Bigg [\\begin{pmatrix}\\Phi _{\\mathrm {HG},11}(r^{\\frac{4}{3}}f_{x_{j}}(z);\\beta _{j}) & \\Phi _{\\mathrm {HG}, 12}(r^{\\frac{4}{3}}f_{x_{j}}(z);\\beta _{j}) & 0 \\\\\\Phi _{\\mathrm {HG},21}(r^{\\frac{4}{3}}f_{x_{j}}(z);\\beta _{j}) & \\Phi _{\\mathrm {HG}, 
22}(r^{\\frac{4}{3}}f_{x_{j}}(z);\\beta _{j}) & 0 \\\\0 & 0 & 1\\end{pmatrix} \\nonumber \\\\& \\times (s_{j}s_{j+1})^{-\\frac{\\sigma _{3,1}}{4}}\\widetilde{\\Theta }(z) \\begin{pmatrix}1 & \\mathfrak {s}_{j}\\log (rz-rx_{j}) & \\mathfrak {s}_{j}\\log (rz-rx_{j}) \\\\0 & 1 & 0 \\\\0 & 0 & 1\\end{pmatrix} \\Bigg ] $ where $\\widetilde{\\Theta }(z) = e^{\\frac{1}{2}(\\theta _{2}(rz)-\\theta _{1}(rz))\\sigma _{3,1}}\\begin{pmatrix}1 & 0 & 0 \\\\0 & 1 & e^{\\theta _{2}(rz)-\\theta _{3}(rz)} \\\\0 & 0 & 1\\end{pmatrix}e^{\\Theta (rz)} = e^{-\\frac{\\theta _{3}(rz)}{2}}\\begin{pmatrix}1 & 0 & 0 \\\\0 & 1 & 1 \\\\0 & 0 & e^{\\frac{3}{2}\\theta _{3}(rz)}\\end{pmatrix}.$ Using (REF ), we note that $\\frac{\\sin (\\pi \\beta _{j})}{\\pi } = \\frac{1}{\\Gamma (\\beta _{j})\\Gamma (1-\\beta _{j})} = - \\frac{\\mathfrak {s}_{j}}{\\sqrt{s_{j}s_{j+1}}}.$ Therefore, using (REF ), we can rewrite (REF ) as $& \\Phi _{j}^{(0)}(r) = e^{-\\frac{\\theta _{3}(rx_{j})}{2}}\\text{\\upshape diag\\,}(r^{-\\frac{1}{3}},1,r^{\\frac{1}{3}})R(x_{j})E_{x_{j}}(x_{j}) \\nonumber \\\\& \\times \\text{\\upshape diag\\,}(\\Upsilon _{j}^{(0)},1)\\begin{pmatrix}1 & \\mathfrak {s}_{j}(\\log r - \\log (r^{\\frac{4}{3}}|f_{x_{j}}^{\\prime }(x_{j})|i)) & \\mathfrak {s}_{j}(\\log r - \\log (r^{\\frac{4}{3}}|f_{x_{j}}^{\\prime }(x_{j})|i)) \\\\0 & 1 & 1 \\\\0 & 0 & e^{\\frac{3}{2}\\theta _{3}(rx_{j})}\\end{pmatrix}, $ where $\\Upsilon _{j}^{(0)} = \\begin{pmatrix}\\frac{\\Gamma (1-\\beta _{j})}{(s_{j}s_{j+1})^{1/4}} & \\frac{(s_{j}s_{j+1})^{1/4}}{\\Gamma (\\beta _{j})} \\left( \\frac{\\Gamma ^{\\prime }(1-\\beta _{j})}{\\Gamma (1-\\beta _{j})}+2\\gamma _{\\mathrm {E}} - i \\pi \\right) \\\\\\frac{\\Gamma (1+\\beta _{j})}{(s_{j}s_{j+1})^{1/4}} & \\frac{-(s_{j}s_{j+1})^{1/4}}{\\Gamma (-\\beta _{j})} \\left( \\frac{\\Gamma ^{\\prime }(-\\beta _{j})}{\\Gamma (-\\beta _{j})} + 2\\gamma _{\\mathrm {E}} - i \\pi \\right)\\end{pmatrix},$ and $\\gamma _{\\mathrm {E}}$ is Euler's gamma constant.", "Combining (REF ) with (), we get $& \\begin{pmatrix}q_{j,1}(r) \\;\\; q_{j,2}(r) \\;\\; q_{j,3}(r)\\end{pmatrix}^{t} = e^{-\\frac{\\theta _{3}(rx_{j})}{2}}\\text{\\upshape diag\\,}(r^{-\\frac{1}{3}},1,r^{\\frac{1}{3}})R(x_{j})E_{x_{j}}(x_{j})\\begin{pmatrix}\\Upsilon _{j,11}^{(0)} \\;\\; \\Upsilon _{j,21}^{(0)} \\;\\; 0\\end{pmatrix}^{t}, \\\\& \\begin{pmatrix}p_{j,1}(r) \\;\\; p_{j,2}(r) \\;\\; p_{j,3}(r)\\end{pmatrix}^{t} = - e^{\\frac{\\theta _{3}(rx_{j})}{2}}\\mathfrak {s}_{j}\\text{\\upshape diag\\,}(r^{\\frac{1}{3}},1,r^{-\\frac{1}{3}})R(x_{j})^{-t}E_{x_{j}}(x_{j})^{-t} \\text{\\upshape diag\\,}((\\Upsilon _{j}^{(0)})^{-t},1) \\begin{pmatrix}0 \\;\\; 1 \\;\\; 0\\end{pmatrix}^{t}.$ We then obtain (), (), (), (), () and () after a long computation.", "We omit the details.", "Using (REF ) and (REF ), we have, for $|z|>\\delta $ , $\\Phi (z) = \\big (\\tfrac{\\sqrt{2\\pi }}{\\sqrt{3}}e^{\\frac{\\rho ^{2}}{6}}i\\big )^{-1}(I+{\\mathcal {O}}(r))\\Psi _{0}^{-1}\\Psi (z), \\qquad \\mbox{as } r \\rightarrow 0.$ Using also (REF ) and (REF ), we obtain (REF ).", "We now compute the asymptotics of $p_{j,k}$ and $q_{j,k}$ , $j=1,\\ldots ,m$ , $k=1,2,3$ as $r \\rightarrow 0$ .", "Recall that $p_{j,k}$ and $q_{j,k}$ are defined in ().", "Using (REF ) and (REF ), for $z \\in \\mathrm {II}\\cap \\lbrace z:|z|<\\delta \\, r^{-1}\\rbrace $ we get $\\Phi (rz) = \\widetilde{R}(rz) \\frac{\\Psi _{0}^{-1}}{\\frac{\\sqrt{2\\pi }}{\\sqrt{3}}e^{\\frac{\\rho ^{2}}{6}}i} \\widetilde{\\Psi }(rz) \\left( I - \\sum _{j=1}^{m} \\mathfrak {s}_{j} \\Big ( \\log 
(rz-rx_{j})-\\log (rz+rx_{j}) \\Big ) \\begin{pmatrix}0 & 1 & 1 \\\\0 & 0 & 0 \\\\0 & 0 & 0\\end{pmatrix} \\right).$ It then follows from (REF ) that $\\Phi _{j}^{(0)}(r) = \\widetilde{R}(rx_{j}) \\frac{\\Psi _{0}^{-1}}{\\frac{\\sqrt{2\\pi }}{\\sqrt{3}} e^{\\frac{\\rho ^{2}}{6}}i} \\widetilde{\\Psi }(rx_{j}) \\left( I + \\sum _{j=1}^{m} \\mathfrak {s}_{j} \\log (2rx_{j}) \\begin{pmatrix}0 & 1 & 1 \\\\0 & 0 & 0 \\\\0 & 0 & 0\\end{pmatrix} \\right),$ and then by (REF ) and (REF ), we get $\\Phi _{j}^{(0)}(r) = \\frac{\\Psi _{0}^{-1}}{\\frac{\\sqrt{2\\pi }}{\\sqrt{3}}e^{\\frac{\\rho ^{2}}{6}}i} (\\widetilde{\\Psi }(0)+{\\mathcal {O}}(r)) \\left( I + \\sum _{j=1}^{m} \\mathfrak {s}_{j} \\log (2rx_{j}) \\begin{pmatrix}0 & 1 & 1 \\\\0 & 0 & 0 \\\\0 & 0 & 0\\end{pmatrix} \\right).$ On the other hand, by [24], we have $\\widetilde{\\Psi }(0)\\begin{pmatrix}1 \\;\\; 0 \\;\\; 0\\end{pmatrix}^{t} = \\begin{pmatrix}\\mathcal {P}_{0}(0) \\;\\; 0 \\;\\; \\mathcal {P}_{0}^{\\prime \\prime }(0)\\end{pmatrix}^{t}, \\qquad \\widetilde{\\Psi }(0)^{-t}\\begin{pmatrix}0 \\;\\; 1 \\;\\; 1\\end{pmatrix}^{t} = \\begin{pmatrix}0 \\;\\; \\frac{1}{\\mathcal {P}_{1}^{\\prime }(0)} \\;\\; 0\\end{pmatrix}^{t},$ where $\\mathcal {P}_{1}^{\\prime }(0)\\ne 0$ .", "The asymptotics of () and () now directly follows from ()." ], [ "Proof of Theorem ", "The asymptotics of $H(r)$ as $r \\rightarrow 0$ given by (REF ) are directly obtained from (REF ) and (REF )–().", "Since $F(0\\vec{x},\\vec{u})=1$ by (REF ), the integral representation (REF ) follows by integrating (REF ) from 0 to an arbitrary $r>0$ .", "To compute the asymptotics of $H(r)$ as $r \\rightarrow + \\infty $ , we follow the method of [24] and rely on (REF ).", "Using (REF ) and (REF ), we obtain $\\Phi _{j}^{(1)}(r) =\\frac{1}{r}\\Phi _{j}^{(0)}(r)^{-1} \\lim _{\\begin{array}{c}z \\rightarrow x_{j}\\\\ z \\in \\mathrm {II}\\end{array}} \\left[ \\Phi (rz)\\begin{pmatrix}1 & \\mathfrak {s}_{j}\\log (rz-rx_{j}) & \\mathfrak {s}_{j}\\log (rz-rx_{j}) \\\\0 & 1 & 0 \\\\0 & 0 & 1\\end{pmatrix}\\right]^{\\prime },$ where $^{\\prime }$ denotes the derivative with respect to $z$ .", "We see from (REF ) that $\\begin{pmatrix}0 & 1 & 1\\end{pmatrix}\\Phi _{j}^{(0)}(r)^{-1} = e^{\\frac{1}{2}\\theta _{3}(rx_{j})} \\begin{pmatrix}0 & 1 & 0\\end{pmatrix} \\text{\\upshape diag\\,}((\\Upsilon _{j}^{(0)})^{-1},1) E_{x_{j}}(x_{j})^{-1}R(x_{j})^{-1}\\text{\\upshape diag\\,}(r^{\\frac{1}{3}},1,r^{-\\frac{1}{3}}).$ Also, by (REF ) and (REF ), we have $& \\lim _{\\begin{array}{c}z \\rightarrow x_{j}\\\\ z \\in \\mathrm {II}\\end{array}} \\left[ \\Phi (rz)\\begin{pmatrix}1 & \\mathfrak {s}_{j}\\log (rz-rx_{j}) & \\mathfrak {s}_{j}\\log (rz-rx_{j}) \\\\0 & 1 & 0 \\\\0 & 0 & 1\\end{pmatrix}\\right]^{\\prime }\\begin{pmatrix}1 \\\\ 0 \\\\ 0\\end{pmatrix} = \\text{\\upshape diag\\,}(r^{-\\frac{1}{3}},1,r^{\\frac{1}{3}}) \\\\& \\times \\lim _{\\begin{array}{c}z \\rightarrow x_{j}\\\\ z \\in \\mathrm {II}\\end{array}} \\left[ R(z) E_{x_{j}}(z) \\begin{pmatrix}\\Phi _{\\mathrm {HG},11}(r^{\\frac{4}{3}}f_{x_{j}}(z);\\beta _{j}) & \\Phi _{\\mathrm {HG}, 12}(r^{\\frac{4}{3}}f_{x_{j}}(z);\\beta _{j}) & 0 \\\\\\Phi _{\\mathrm {HG},21}(r^{\\frac{4}{3}}f_{x_{j}}(z);\\beta _{j}) & \\Phi _{\\mathrm {HG}, 22}(r^{\\frac{4}{3}}f_{x_{j}}(z);\\beta _{j}) & 0 \\\\0 & 0 & 1\\end{pmatrix} (s_{j}s_{j+1})^{-\\frac{\\sigma _{3,1}}{4}} \\widetilde{\\Theta }(z)\\right]^{\\prime }\\begin{pmatrix}1 \\\\ 0 \\\\ 0\\end{pmatrix}$ where $\\widetilde{\\Theta }$ is defined in (REF ).", "A direct computation using (REF ) 
shows that this last limit is given by $& e^{-\\frac{\\theta _{3}(rx_{j})}{2}} \\text{\\upshape diag\\,}(r^{-\\frac{1}{3}},1,r^{\\frac{1}{3}}) \\big [ R^{\\prime }(x_{j})E_{x_{j}}(x_{j})\\text{\\upshape diag\\,}(\\Upsilon _{j}^{(0)},1) + R(x_{j})E_{x_{j}}^{\\prime }(x_{j})\\text{\\upshape diag\\,}(\\Upsilon _{j}^{(0)},1) \\\\& +r^{\\frac{4}{3}}f_{x_{j}}^{\\prime }(x_{j}) R(x_{j})E_{x_{j}}(x_{j}) \\text{\\upshape diag\\,}(\\Upsilon _{j}^{(0)}\\Upsilon _{j}^{(1)},0) - \\tfrac{r\\theta _{3}^{\\prime }(rx_{j})}{2}R(x_{j})E_{x_{j}}(x_{j})\\text{\\upshape diag\\,}(\\Upsilon _{j}^{(0)},1) \\big ] \\begin{pmatrix}1 & 0 & 0\\end{pmatrix}^{t} \\nonumber $ where $\\Upsilon ^{(1)}_{j,21}= \\frac{\\beta _{j} \\pi }{\\sqrt{s_{j}s_{j+1}}\\sin (\\pi \\beta _{j})}$ .", "Combining the above equations with (REF ), we then find $& H(r) = -\\frac{1}{r}\\sum _{j=1}^{m}\\mathfrak {s}_{j}x_{j} \\Big [ \\text{\\upshape diag\\,}((\\Upsilon _{j}^{(0)})^{-1},1) E_{x_{j}}(x_{j})^{-1}R(x_{j})^{-1}R^{\\prime }(x_{j})E_{x_{j}}(x_{j})\\text{\\upshape diag\\,}(\\Upsilon _{j}^{(0)},1) \\nonumber \\\\& + \\text{\\upshape diag\\,}((\\Upsilon _{j}^{(0)})^{-1},1) E_{x_{j}}(x_{j})^{-1}E_{x_{j}}^{\\prime }(x_{j})\\text{\\upshape diag\\,}(\\Upsilon _{j}^{(0)},1)+ r^{\\frac{4}{3}}f_{x_{j}}^{\\prime }(x_{j}) \\text{\\upshape diag\\,}(\\Upsilon _{j}^{(1)},0) \\Big ]_{21}.", "$ The first part in the sum in (REF ) decays as $r \\rightarrow + \\infty $ by (REF ); more precisely $\\text{\\upshape diag\\,}((\\Upsilon _{j}^{(0)})^{-1},1) E_{x_{j}}(x_{j})^{-1}R(x_{j})^{-1}R^{\\prime }(x_{j})E_{x_{j}}(x_{j})\\text{\\upshape diag\\,}(\\Upsilon _{j}^{(0)},1) = {\\mathcal {O}}(r^{-\\frac{2}{3}}), \\qquad \\mbox{as } r \\rightarrow + \\infty .$ For the second part in (REF ), we use (REF ) and get $& \\big [\\text{\\upshape diag\\,}((\\Upsilon _{j}^{(0)})^{-1},1) E_{x_{j}}(x_{j})^{-1}E_{x_{j}}^{\\prime }(x_{j})\\text{\\upshape diag\\,}(\\Upsilon _{j}^{(0)},1)\\big ]_{21} = \\tfrac{\\Gamma (1-\\beta _{j})^{2}}{\\sqrt{s_{j}s_{j+1}}} \\big [E_{x_{j}}(x_{j})^{-1}E_{x_{j}}^{\\prime }(x_{j})\\big ]_{21} \\\\& - \\tfrac{\\Gamma (1+\\beta _{j})^{2}}{\\sqrt{s_{j}s_{j+1}}} \\big [E_{x_{j}}(x_{j})^{-1}E_{x_{j}}^{\\prime }(x_{j})\\big ]_{12} + \\tfrac{\\Gamma (1-\\beta _{j})\\Gamma (1+\\beta _{j})}{\\sqrt{s_{j}s_{j+1}}}\\Big ( \\big [E_{x_{j}}(x_{j})^{-1}E_{x_{j}}^{\\prime }(x_{j})\\big ]_{22}-\\big [E_{x_{j}}(x_{j})^{-1}E_{x_{j}}^{\\prime }(x_{j})\\big ]_{11} \\Big ).$ This expression can be further simplified using () and ().", "After a rather long computation, as $r \\rightarrow + \\infty $ we obtain $& \\tfrac{\\Gamma (1-\\beta _{j})\\Gamma (1+\\beta _{j})}{\\sqrt{s_{j}s_{j+1}}}\\Big ( \\big [E_{x_{j}}(x_{j})^{-1}E_{x_{j}}^{\\prime }(x_{j})\\big ]_{22}-\\big [E_{x_{j}}(x_{j})^{-1}E_{x_{j}}^{\\prime }(x_{j})\\big ]_{11} \\Big ) = \\frac{1}{\\mathfrak {s}_{j}}\\bigg ( \\frac{\\beta _{j}^{2}}{3x_{j}} + 2i \\beta _{j} \\text{\\upshape Im\\,}(d_{1,x_{j}}^{(1)}) \\bigg ) + {\\mathcal {O}}(r^{-\\frac{2}{3}}), \\\\& \\tfrac{\\Gamma (1-\\beta _{j})^{2}}{\\sqrt{s_{j}s_{j+1}}} \\big [E_{x_{j}}(x_{j})^{-1}E_{x_{j}}^{\\prime }(x_{j})\\big ]_{21} - \\tfrac{\\Gamma (1+\\beta _{j})^{2}}{\\sqrt{s_{j}s_{j+1}}} \\big [E_{x_{j}}(x_{j})^{-1}E_{x_{j}}^{\\prime }(x_{j})\\big ]_{12} = \\frac{2}{3\\sqrt{3}x_{j}}\\frac{i\\beta _{j}}{\\mathfrak {s}_{j}} \\cos (2\\vartheta _{j}(r)) + {\\mathcal {O}}(r^{-\\frac{2}{3}}),$ where we recall that $\\vartheta _{j}(r)$ is defined in ().", "For the third and last term in (REF ), we use (REF ) and (REF ) to write $\\big 
[r^{\\frac{4}{3}}f_{x_{j}}^{\\prime }(x_{j}) \\text{\\upshape diag\\,}(\\Upsilon _{j}^{(1)},0)\\big ]_{21} = r^{\\frac{4}{3}}f_{x_{j}}^{\\prime }(x_{j})\\Upsilon _{j,21}^{(1)} = \\frac{i\\beta _{j}}{\\mathfrak {s}_{j}}\\bigg ( -\\sqrt{3}x_{j}^{\\frac{1}{3}}r^{\\frac{4}{3}}+\\frac{\\rho }{\\sqrt{3}x_{j}^{1/3}}r^{\\frac{2}{3}} \\bigg ).$ Substituting the above formulas in (REF ), and noting the remarkable simplification $\\sum _{j=1}^{m}2i \\beta _{j}x_{j}\\text{\\upshape Im\\,}(d_{1,x_{j}}^{(1)}) & = \\sum _{j=1}^{m} 2i\\beta _{j}x_{j}\\text{\\upshape Im\\,}\\bigg ( \\frac{(\\omega -5)\\beta _{j}}{6(\\omega -1)x_{j}} \\bigg ) +\\sum _{1\\le j \\ne k \\le m} \\text{\\upshape Re\\,}\\bigg ( \\frac{4(\\omega -1)x_{k}^{2/3}\\beta _{k}x_{j}^{2/3}\\beta _{j}}{3(x_{j}^{2/3}-x_{k}^{2/3})(x_{j}^{2/3}-\\omega x_{k}^{2/3})} \\bigg ) \\\\& = \\sum _{j=1}^{m} 2i\\beta _{j}x_{j}\\text{\\upshape Im\\,}\\bigg ( \\frac{(\\omega -5)\\beta _{j}}{6(\\omega -1)x_{j}} \\bigg ) = \\sum _{j=1}^{m}\\beta _{j}^{2},$ we obtain (REF ).", "This finishes the proof of Theorem REF ." ], [ "Proof of Theorem ", "Integrating (REF ) from 0 to an arbitrary $r>0$ , we get $\\int _{0}^{r}H(\\tau )d\\tau = & \\int _{0}^{r} \\bigg ( p_{0}(\\tau )q_{0}^{\\prime }(\\tau )+\\sum _{j=1}^{m}\\sum _{k=1}^{3}p_{j,k}(\\tau )q_{j,k}^{\\prime }(\\tau ) - H(\\tau ) \\bigg ) d\\tau \\nonumber \\\\& -\\frac{1}{4}\\bigg [ 2p_{0}(\\tau )q_{0}(\\tau ) + \\sum _{j=1}^{m}\\Big (p_{j,2}(\\tau )q_{j,2}(\\tau )+2p_{j,3}(\\tau )q_{j,3}(\\tau )\\Big ) -3\\tau H(\\tau ) \\bigg ]_{\\tau =0}^{r}.", "$ For convenience, we write $\\vec{0}=(0,\\ldots ,0) \\in \\mathbb {R}^{m}$ , $\\vec{\\beta }:=(\\beta _{1},\\ldots ,\\beta _{m}), \\qquad \\vec{\\beta }_{\\ell }:=(\\beta _{1},\\ldots ,\\beta _{\\ell },0,\\ldots ,0), \\qquad \\vec{\\beta }_{\\ell }^{\\prime }:=(\\beta _{1},\\ldots ,\\beta _{\\ell -1},\\beta _{\\ell }^{\\prime },0,\\ldots ,0).$ We also write explicitly the dependence of $p_{j,k}$ , $q_{j,k}$ , $p_{0}$ , $q_{0}$ and $H$ in $\\beta _{1},\\ldots ,\\beta _{m}$ using the notation $p_{j,k}(r;\\vec{\\beta })$ , $q_{j,k}(r;\\vec{\\beta })$ , $p_{0}(r;\\vec{\\beta })$ , $q_{0}(r;\\vec{\\beta })$ and $H(r;\\vec{\\beta })$ .", "By (REF ), the parameter $\\gamma $ in (REF ) can also be chosen to be any parameter among $\\beta _{1},\\ldots ,\\beta _{m}$ .", "Let $\\ell \\in \\lbrace 1,\\ldots ,m\\rbrace $ .", "Using (REF ) with $\\vec{\\beta }= \\vec{\\beta }_{\\ell -1}$ and $\\gamma =\\beta _{\\ell }$ , and integrating in $r$ from 0 to $r$ and integrating in $\\beta _{\\ell }$ from 0 to $\\beta _{\\ell }$ , we get $& \\int _{0}^{r} \\bigg ( p_{0}(\\tau ;\\vec{\\beta })q_{0}^{\\prime }(\\tau ;\\vec{\\beta }_{\\ell })+\\sum _{j=1}^{m}\\sum _{k=1}^{3}p_{j,k}(\\tau ;\\vec{\\beta }_{\\ell })q_{j,k}^{\\prime }(\\tau ;\\vec{\\beta }_{\\ell }) - H(\\tau ;\\vec{\\beta }_{\\ell }) \\bigg ) d\\tau \\nonumber \\\\& - \\int _{0}^{r} \\bigg ( p_{0}(\\tau ;\\vec{\\beta }_{\\ell -1})q_{0}^{\\prime }(\\tau ;\\vec{\\beta }_{\\ell -1})+\\sum _{j=1}^{m}\\sum _{k=1}^{3}p_{j,k}(\\tau ;\\vec{\\beta }_{\\ell -1})q_{j,k}^{\\prime }(\\tau ;\\vec{\\beta }_{\\ell -1}) - H(\\tau ;\\vec{\\beta }_{\\ell -1}) \\bigg ) d\\tau \\nonumber \\\\& = \\int _{0}^{\\beta _{\\ell }} \\bigg ( \\sum _{k=1}^{3}\\sum _{j=1}^{m}p_{j,k}(r;\\vec{\\beta }_{\\ell }^{\\prime })\\partial _{\\beta _{\\ell }^{\\prime }}q_{j,k}(r;\\vec{\\beta }_{\\ell }^{\\prime })+p_{0}(r;\\vec{\\beta }_{\\ell }^{\\prime })\\partial _{\\beta _{\\ell }^{\\prime }} q_{0}(r;\\vec{\\beta }_{\\ell }^{\\prime }) \\bigg )d\\beta 
_{\\ell }^{\\prime }, $ where we have used (REF )–() to conclude that $\\int _{0}^{\\beta _{\\ell }}\\bigg ( \\sum _{k=1}^{3}\\sum _{j=1}^{m}p_{j,k}(0;\\vec{\\beta }_{\\ell }^{\\prime })\\partial _{\\beta _{\\ell }^{\\prime }}q_{j,k}(0;\\vec{\\beta }_{\\ell }^{\\prime })+p_{0}(0;\\vec{\\beta }_{\\ell }^{\\prime })\\partial _{\\beta _{\\ell }^{\\prime }} q_{0}(0;\\vec{\\beta }_{\\ell }^{\\prime }) \\bigg )d\\beta _{\\ell }^{\\prime } = 0.$ Summing (REF ) over $\\ell =1,\\ldots ,m$ , we then obtain $& \\int _{0}^{r} \\bigg ( p_{0}(\\tau ;\\vec{\\beta })q_{0}^{\\prime }(\\tau ;\\vec{\\beta })+\\sum _{j=1}^{m}\\sum _{k=1}^{3}p_{j,k}(\\tau ;\\vec{\\beta })q_{j,k}^{\\prime }(\\tau ;\\vec{\\beta }) - H(\\tau ;\\vec{\\beta }) \\bigg ) d\\tau \\nonumber \\\\& - \\int _{0}^{r} \\bigg ( p_{0}(\\tau ;\\vec{0})q_{0}^{\\prime }(\\tau ;\\vec{0})+\\sum _{j=1}^{m}\\sum _{k=1}^{3}p_{j,k}(\\tau ;\\vec{0})q_{j,k}^{\\prime }(\\tau ;\\vec{0}) - H(\\tau ;\\vec{0}) \\bigg ) d\\tau \\nonumber \\\\& = \\sum _{\\ell =1}^{m}\\int _{0}^{\\beta _{\\ell }} \\bigg ( \\sum _{k=1}^{3}\\sum _{j=1}^{m}p_{j,k}(r;\\vec{\\beta }_{\\ell }^{\\prime })\\partial _{\\beta _{\\ell }^{\\prime }}q_{j,k}(r;\\vec{\\beta }_{\\ell }^{\\prime })+p_{0}(r;\\vec{\\beta }_{\\ell }^{\\prime })\\partial _{\\beta _{\\ell }^{\\prime }} q_{0}(r;\\vec{\\beta }_{\\ell }^{\\prime }) \\bigg )d\\beta _{\\ell }^{\\prime }.", "$ If $\\beta _{\\ell }=0$ (or equivalently, if $s_{\\ell }=s_{\\ell +1}$ ), it follows from () that $p_{\\ell ,1}(r)=p_{\\ell ,2}(r)=p_{\\ell ,3}(r)=0$ for all $r> 0$ .", "Also, by (REF ), we have $H(r;\\vec{0})=0$ , and by (REF ) and (REF ), we have $Y|_{\\vec{\\beta }=\\vec{0}} \\equiv I$ .", "Hence, by (REF ) and the relations $\\Phi _{1}=\\Psi _{1}+\\Psi _{0}^{-1}Y_{1}\\Psi _{0}$ and (REF ), we have $p_{0}(r;\\vec{0})=\\frac{1}{\\sqrt{2}}(\\frac{\\rho ^{3}}{54}+\\frac{\\rho }{2})$ and $q_{0}(r;\\vec{0})=\\frac{1}{\\sqrt{2}}(-\\frac{\\rho ^{3}}{54}+\\frac{\\rho }{2})$ .", "Thus, $\\int _{0}^{r} \\bigg ( p_{0}(\\tau ;\\vec{0})q_{0}^{\\prime }(\\tau ;\\vec{0})+\\sum _{j=1}^{m}\\sum _{k=1}^{3}p_{j,k}(\\tau ;\\vec{0})q_{j,k}^{\\prime }(\\tau ;\\vec{0}) - H(\\tau ;\\vec{0}) \\bigg ) d\\tau = 0$ and (REF ) can be simplified as $\\int _{0}^{r} \\bigg ( p_{0}(\\tau ;\\vec{\\beta })q_{0}^{\\prime }(\\tau ;\\vec{\\beta })+\\sum _{j=1}^{m}\\sum _{k=1}^{3}p_{j,k}(\\tau ;\\vec{\\beta })q_{j,k}^{\\prime }(\\tau ;\\vec{\\beta }) - H(\\tau ;\\vec{\\beta }) \\bigg ) d\\tau \\\\= \\sum _{\\ell =1}^{m}\\int _{0}^{\\beta _{\\ell }} \\bigg ( \\sum _{k=1}^{3}\\sum _{j=1}^{\\ell }p_{j,k}(r;\\vec{\\beta }_{\\ell }^{\\prime })\\partial _{\\beta _{\\ell }^{\\prime }}q_{j,k}(r;\\vec{\\beta }_{\\ell }^{\\prime })+p_{0}(r;\\vec{\\beta }_{\\ell }^{\\prime })\\partial _{\\beta _{\\ell }^{\\prime }} q_{0}(r;\\vec{\\beta }_{\\ell }^{\\prime }) \\bigg )d\\beta _{\\ell }^{\\prime }.", "$ Substituting (REF ) in (REF ), we obtain $& \\int _{0}^{r}H(\\tau ;\\vec{\\beta })d\\tau = \\sum _{\\ell =1}^{m}\\int _{0}^{\\beta _{\\ell }}\\bigg (\\sum _{k=1}^{3}\\sum _{j=1}^{\\ell }p_{j,k}(r;\\vec{\\beta }_{\\ell }^{\\prime })\\partial _{\\beta _{\\ell }^{\\prime }}q_{j,k}(r;\\vec{\\beta }_{\\ell }^{\\prime })+p_{0}(r;\\vec{\\beta }_{\\ell }^{\\prime })\\partial _{\\beta _{\\ell }^{\\prime }} q_{0}(r;\\vec{\\beta }_{\\ell }^{\\prime }) \\bigg )d\\beta _{\\ell }^{\\prime } \\\\& -\\frac{1}{4}\\bigg ( 2p_{0}(r;\\vec{\\beta })q_{0}(r;\\vec{\\beta }) - \\sum _{j=1}^{m}\\Big ( 2p_{j,1}(r;\\vec{\\beta })q_{j,1}(r;\\vec{\\beta })+ p_{j,2}(r;\\vec{\\beta })q_{j,2}(r;\\vec{\\beta 
})\\Big ) -3rH(r;\\vec{\\beta }) \\bigg )+ \\frac{1}{2}p_{0}(0;\\vec{\\beta })q_{0}(0;\\vec{\\beta }), \\nonumber $ where we have also used (REF ).", "Using ()–(), ()–() and (REF ), we get $& p_{j,1}(r;\\vec{\\beta }_{\\ell })\\partial _{\\beta _{\\ell }}q_{j,1}(r;\\vec{\\beta }_{\\ell }) = p_{j,1}(r;\\vec{\\beta }_{\\ell })q_{j,1}(r;\\vec{\\beta }_{\\ell }) \\partial _{\\beta _{\\ell }}\\log q_{j,1}(r;\\vec{\\beta }_{\\ell }) \\nonumber \\\\& = -\\frac{2i\\beta _{j}}{3}\\bigg ( \\sin \\big ( 2\\vartheta _{j}(r;\\vec{\\beta }_{\\ell }) -\\tfrac{2\\pi }{3} \\big ) + \\Big ( \\sin (2\\vartheta _{j}(r;\\vec{\\beta }_{\\ell })) - \\tfrac{\\sqrt{3}}{2} \\Big ) \\sum _{n=1}^{\\ell }\\sqrt{3}i\\beta _{n}\\frac{x_{n}^{2/3}}{x_{j}^{2/3}} \\bigg ) \\nonumber \\\\& \\quad \\times \\bigg ( \\partial _{\\beta _{\\ell }}\\log \\mathcal {A}_{j}(\\vec{\\beta }_{\\ell }) +\\cot (\\vartheta _{j}(r;\\vec{\\beta }_{\\ell })-\\tfrac{\\pi }{3}) \\partial _{\\beta _{\\ell }}\\vartheta _{j} \\bigg )\\bigg ( 1+{\\mathcal {O}}\\Big (\\frac{\\log r}{r^{2/3}}\\Big ) \\bigg ), \\qquad \\mbox{as } r \\rightarrow + \\infty , $ where we have explicitly written the dependence of $\\vartheta _{j}$ and $\\mathcal {A}_{j}$ in $r$ and $\\vec{\\beta }_{\\ell }$ .", "For $p_{j,2}(r;\\vec{\\beta }_{\\ell })\\partial _{\\beta _{\\ell }}q_{j,2}(r;\\vec{\\beta }_{\\ell })$ and $p_{j,3}(r;\\vec{\\beta }_{\\ell })\\partial _{\\beta _{\\ell }}q_{j,3}(r;\\vec{\\beta }_{\\ell })$ , we obtain after another computation (using again ()–(), ()–() and (REF )) that $& p_{j,2}(r;\\vec{\\beta }_{\\ell })\\partial _{\\beta _{\\ell }}q_{j,2}(r;\\vec{\\beta }_{\\ell }) = -\\frac{2i\\beta _{j}}{3}\\sin \\big ( 2\\vartheta _{j}\\big ) \\bigg ( \\partial _{\\beta _{\\ell }}\\log \\mathcal {A}_{j}(\\vec{\\beta }_{\\ell }) +\\cot (\\vartheta _{j}) \\partial _{\\beta _{\\ell }}\\vartheta _{j} \\bigg )\\bigg ( 1+{\\mathcal {O}}\\Big (\\frac{\\log r}{r^{2/3}}\\Big ) \\bigg ), \\\\& p_{j,3}(r;\\vec{\\beta }_{\\ell })\\partial _{\\beta _{\\ell }}q_{j,3}(r;\\vec{\\beta }_{\\ell }) = \\bigg [ \\tfrac{-2i\\beta _{j}}{3}\\sin \\big ( 2\\vartheta _{j}+\\tfrac{2\\pi }{3} \\big )\\bigg ( \\partial _{\\beta _{\\ell }}\\log \\mathcal {A}_{j}(\\vec{\\beta }_{\\ell }) +\\cot (\\vartheta _{j}+\\tfrac{\\pi }{3}) \\partial _{\\beta _{\\ell }}\\vartheta _{j} \\bigg ) \\nonumber \\\\& + \\tfrac{2i\\beta _{j}}{3}\\Big ( \\sin (2\\vartheta _{j})-\\tfrac{\\sqrt{3}}{2} \\Big ) \\bigg ( \\bigg \\lbrace \\partial _{\\beta _{\\ell }}\\log \\mathcal {A}_{j}(\\vec{\\beta }_{\\ell })+\\cot (\\vartheta _{j}-\\tfrac{\\pi }{3})\\partial _{\\beta _{\\ell }}\\vartheta _{j}\\bigg \\rbrace \\sum _{n=1}^{\\ell }\\sqrt{3}i\\beta _{n}\\tfrac{x_{n}^{2/3}}{x_{j}^{2/3}} + \\sqrt{3}i\\tfrac{x_{\\ell }^{2/3}}{x_{j}^{2/3}} \\bigg )\\bigg ] \\nonumber \\\\& \\times \\bigg ( 1+{\\mathcal {O}}\\Big (\\frac{\\log r}{r^{2/3}}\\Big ) \\bigg ) $ as $r \\rightarrow + \\infty $ , where $\\vartheta _{j}=\\vartheta _{j}(r;\\vec{\\beta }_{\\ell })$ .", "Furthermore, using (REF ) and (REF ), we find $& \\partial _{\\beta _{\\ell }}\\log \\mathcal {A}_{j}(\\vec{\\beta }_{\\ell }) = {\\left\\lbrace \\begin{array}{ll}-\\frac{2\\pi i}{3} + \\partial _{\\beta _{\\ell }} \\log |\\Gamma (1-\\beta _{\\ell })|, & \\mbox{if } j=\\ell , \\\\[0.1cm]\\log \\frac{|x_{j}^{2/3}-\\omega x_{\\ell }^{2/3}|}{|x_{j}^{2/3}- x_{\\ell }^{2/3}|}, & \\mbox{if } j<\\ell .\\end{array}\\right.", "}$ Combining the asymptotics (REF )–(), we get $\\sum _{k=1}^{3}\\sum _{j=1}^{\\ell }p_{j,k}(r;\\vec{\\beta }_{\\ell })\\partial _{\\beta _{\\ell 
}}q_{j,k}(r;\\vec{\\beta }_{\\ell }) = \\sum _{j=1}^{\\ell } \\beta _{j}\\bigg ( \\frac{x_{\\ell }^{2/3}}{x_{j}^{2/3}}-\\frac{2}{\\sqrt{3}}\\sin (2\\vartheta _{j})\\frac{x_{\\ell }^{2/3}}{x_{j}^{2/3}} - 2 i \\partial _{\\beta _{\\ell }}\\vartheta _{j} \\bigg ) + {\\mathcal {O}}\\bigg ( \\frac{\\log r}{r^{2/3}} \\bigg )$ as $r \\rightarrow + \\infty $ , where again $\\vartheta _{j}=\\vartheta _{j}(r;\\vec{\\beta }_{\\ell })$ .", "Using (REF ) and the fact that $p_{j,1}(r)=p_{j,2}(r)=p_{j,3}(r)=0$ if $\\beta _{j}=0$ , we note that $p_{0}(r;\\vec{\\beta }_{\\ell })\\partial _{\\beta _{\\ell }}q_{0}(r;\\vec{\\beta }_{\\ell }) = - \\sqrt{2}\\sum _{j=1}^{\\ell }p_{j,3}(r;\\vec{\\beta }_{\\ell })q_{j,1}(r;\\vec{\\beta }_{\\ell })\\partial _{\\beta _{\\ell }}q_{0}(r;\\vec{\\beta }_{\\ell }) - \\bigg ( q_{0}(r;\\vec{\\beta }_{\\ell })-\\frac{\\rho }{\\sqrt{2}} \\bigg )\\partial _{\\beta _{\\ell }}q_{0}(r;\\vec{\\beta }_{\\ell }).$ Integrating this identity in $\\beta _{\\ell }$ from 0 to an arbitrary $\\beta _{\\ell }\\in i \\mathbb {R}$ , we get $\\int _{0}^{\\beta _{\\ell }}p_{0}(r;\\vec{\\beta }_{\\ell }^{\\prime })\\partial _{\\beta _{\\ell }^{\\prime }}q_{0}(r;\\vec{\\beta }_{\\ell }^{\\prime })d\\beta _{\\ell }^{\\prime } = & \\; - \\sqrt{2}\\sum _{j=1}^{\\ell } \\int _{0}^{\\beta _{\\ell }} p_{j,3}(r;\\vec{\\beta }_{\\ell }^{\\prime })q_{j,1}(r;\\vec{\\beta }_{\\ell }^{\\prime })\\partial _{\\beta _{\\ell }^{\\prime }}q_{0}(r;\\vec{\\beta }_{\\ell }^{\\prime })d\\beta _{\\ell }^{\\prime } \\nonumber \\\\& - \\bigg [ \\frac{1}{2} q_{0}(r;\\vec{\\beta }_{\\ell }^{\\prime })^{2}-\\frac{\\rho }{\\sqrt{2}}q_{0}(r;\\vec{\\beta }_{\\ell }^{\\prime }) \\bigg ]_{\\beta _{\\ell }^{\\prime }=0}^{\\beta _{\\ell }}.", "$ Using (), (), () and (REF ), as $r \\rightarrow + \\infty $ we get $& - \\sqrt{2}\\sum _{j=1}^{\\ell } p_{j,3}(r;\\vec{\\beta }_{\\ell })q_{j,1}(r;\\vec{\\beta }_{\\ell })\\partial _{\\beta _{\\ell }}q_{0}(r;\\vec{\\beta }_{\\ell }) = \\sum _{j=1}^{\\ell } \\frac{x_{\\ell }^{2/3}}{x_{j}^{2/3}}\\beta _{j}\\bigg ( \\frac{2}{\\sqrt{3}}\\sin (2\\vartheta _{j}(r;\\vec{\\beta }_{\\ell }))-1 \\bigg )+ {\\mathcal {O}}\\bigg ( \\frac{\\log r}{r^{2/3}} \\bigg ).$ Hence, substituting (REF )–(REF ) in (REF ), we obtain $& \\int _{0}^{r}H(\\tau ;\\vec{\\beta })d\\tau = - 2 \\sum _{\\ell =1}^{m}\\int _{0}^{\\beta _{\\ell }}\\bigg \\lbrace \\sum _{j=1}^{\\ell -1} i\\beta _{j} \\partial _{\\beta _{\\ell }^{\\prime }}\\vartheta _{j}(r;\\vec{\\beta }_{\\ell }^{\\prime })+ i\\beta _{\\ell }^{\\prime } \\partial _{\\beta _{\\ell }^{\\prime }}\\vartheta _{\\ell }(r;\\vec{\\beta }_{\\ell }^{\\prime }) \\bigg \\rbrace d\\beta _{\\ell }^{\\prime } \\nonumber \\\\& -\\bigg ( \\frac{1}{2} q_{0}(r;\\vec{\\beta })^{2}-\\frac{\\rho }{\\sqrt{2}}q_{0}(r;\\vec{\\beta }) \\bigg ) + \\bigg ( \\frac{1}{2} q_{0}(r;\\vec{0})^{2}-\\frac{\\rho }{\\sqrt{2}}q_{0}(r;\\vec{0}) \\bigg ) + \\frac{1}{2}p_{0}(0;\\vec{\\beta })q_{0}(0;\\vec{\\beta }) \\nonumber \\\\& -\\frac{1}{4}\\bigg ( 2p_{0}(r;\\vec{\\beta })q_{0}(r;\\vec{\\beta }) - \\sum _{j=1}^{m}\\Big ( 2p_{j,1}(r;\\vec{\\beta })q_{j,1}(r;\\vec{\\beta })+ p_{j,2}(r;\\vec{\\beta })q_{j,2}(r;\\vec{\\beta })\\Big ) -3rH(r;\\vec{\\beta }) \\bigg ) \\nonumber \\\\& + {\\mathcal {O}}\\Big ( \\frac{\\log r}{r^{2/3}} \\Big ), \\qquad \\mbox{as } r \\rightarrow + \\infty .", "$ Since $p_{0}(r;\\vec{0})=\\frac{1}{\\sqrt{2}}(\\frac{\\rho ^{3}}{54}+\\frac{\\rho }{2})$ , $q_{0}(r;\\vec{0})=\\frac{1}{\\sqrt{2}}(-\\frac{\\rho ^{3}}{54}+\\frac{\\rho }{2})=q_{0}(0;\\vec{\\beta })$ , we see 
that $\\bigg ( \\frac{1}{2} q_{0}(r;\\vec{0})^{2}-\\frac{\\rho }{\\sqrt{2}}q_{0}(r;\\vec{0}) \\bigg ) + \\frac{1}{2}p_{0}(0;\\vec{\\beta })q_{0}(0;\\vec{\\beta }) = -\\frac{\\rho }{2 \\sqrt{2}}q_{0}(0;\\vec{\\beta }) = \\frac{\\rho ^{4}}{216}-\\frac{\\rho ^{2}}{8}.", "$ Also, using (REF ), (REF ) and (REF ), we obtain $& -\\bigg ( \\frac{1}{2} q_{0}(r)^{2}-\\frac{\\rho }{\\sqrt{2}}q_{0}(r) \\bigg ) -\\frac{1}{4}\\bigg ( 2p_{0}(r)q_{0}(r) - \\sum _{j=1}^{m}\\Big ( 2p_{j,1}(r)q_{j,1}(r)+ p_{j,2}(r)q_{j,2}(r)\\Big ) -3rH(r) \\bigg ) \\nonumber \\\\& = \\frac{q_{0}(r)}{\\sqrt{2}}\\bigg ( \\frac{\\rho }{2}+\\sum _{j=1}^{m}p_{j,3}(r)q_{j,1}(r) \\bigg ) +\\frac{3}{4}rH(r) + \\frac{1}{4}\\sum _{j=1}^{m}\\Big ( 2p_{j,1}(r)q_{j,1}(r)+ p_{j,2}(r)q_{j,2}(r)\\Big ) \\nonumber \\\\& = \\sum _{j=1}^{m}\\bigg ( \\frac{3\\sqrt{3}}{4}i\\beta _{j} (rx_{j})^{\\frac{4}{3}} - \\frac{\\sqrt{3}\\rho }{2}i\\beta _{j} (rx_{j})^{\\frac{2}{3}} - \\beta _{j}^{2} \\bigg ) - \\frac{\\rho ^{4}}{216} + \\frac{\\rho ^{2}}{8} + {\\mathcal {O}}(r^{-\\frac{2}{3}}), \\qquad \\mbox{as } r \\rightarrow + \\infty .", "$ It follows from [24] that $& -2\\sum _{j=1}^{m}\\int _{0}^{\\beta _{\\ell }} i\\beta _{\\ell }^{\\prime } \\partial _{\\beta _{\\ell }^{\\prime }}\\vartheta _{\\ell }(r;\\vec{\\beta }_{\\ell }^{\\prime }) d\\beta _{\\ell }^{\\prime } = \\sum _{j=1}^{m} \\log \\big ( G(1+\\beta _{j})G(1-\\beta _{j}) \\big ) + \\sum _{j=1}^{m}\\beta _{j}^{2}\\bigg ( 1-\\frac{4}{3}\\log (rx_{j}) - \\log \\bigg ( \\frac{9}{2} \\bigg ) \\bigg ).$ For $m \\ge 2$ , we also need the relation $& - 2 \\sum _{\\ell =1}^{m}\\int _{0}^{\\beta _{\\ell }}\\sum _{j=1}^{\\ell -1} i\\beta _{j} \\partial _{\\beta _{\\ell }^{\\prime }}\\vartheta _{j}(r;\\vec{\\beta }_{\\ell }^{\\prime }) d\\beta _{\\ell }^{\\prime } = -2 \\sum _{\\ell =1}^{m}\\sum _{j=1}^{\\ell -1} \\beta _{j}\\beta _{\\ell } \\log \\frac{|x_{j}^{2/3}-\\omega x_{\\ell }^{2/3}|}{|x_{j}^{2/3}- x_{\\ell }^{2/3}|},$ which can be proved directly from () and a direct computation.", "The asymptotic formula (REF ) (without the error term) now follows after substituting (REF ) and (REF ) in (REF ) and performing a rather long calculation which uses (REF ) and (REF ).", "The fact the the error term in (REF ) is ${\\mathcal {O}}(r^{-\\frac{2}{3}})$ and not ${\\mathcal {O}}(r^{-\\frac{2}{3}}\\log r)$ follows directly from (REF ) and (REF ), and (REF ) follows from (REF ).", "This finishes the proof of Theorem REF ." 
], [ "Confluent hypergeometric model RH problem", "In this appendix we recall a well-known model RH problem, whose solution depends on a parameter $\\beta \\in i \\mathbb {R}$ and is denoted $\\Phi _{\\mathrm {HG}}(\\cdot ) = \\Phi _{\\mathrm {HG}}(\\cdot ; \\beta )$ .", "(a) $\\Phi _{\\mathrm {HG}} : \\mathbb {C} \\setminus \\Sigma _{\\mathrm {HG}} \\rightarrow \\mathbb {C}^{2 \\times 2}$ is analytic, where $\\Sigma _{\\mathrm {HG}}=e^{\\frac{\\pi i}{4}}(-\\infty ,\\infty )\\cup e^{\\frac{\\pi i}{2}}(-\\infty ,\\infty ) \\cup e^{\\frac{3\\pi i}{4}}(-\\infty ,\\infty )$ .", "(b) $\\Phi _{\\mathrm {HG}}$ satisfies the jump relations $\\Phi _{\\mathrm {HG},+}(z) = \\Phi _{\\mathrm {HG},-}(z)\\widetilde{J}_{k}, \\qquad z \\in \\Gamma _{k}^{\\mathrm {HG}}, \\quad k=1,\\ldots ,6,$ where $& \\Gamma _{1}^{\\mathrm {HG}} = (0,i\\infty ), & & \\Gamma _{2}^{\\mathrm {HG}} = (0,e^{\\frac{3\\pi i}{4}}\\infty ), & & \\Gamma _{3}^{\\mathrm {HG}} = (e^{-\\frac{3\\pi i}{4}}\\infty ,0), \\\\& \\Gamma _{4}^{\\mathrm {HG}} = (-i\\infty ,0), & & \\Gamma _{5}^{\\mathrm {HG}} = (e^{-\\frac{\\pi i}{4}}\\infty ,0), & & \\Gamma _{6}^{\\mathrm {HG}} = (0,e^{\\frac{\\pi i}{4}}\\infty ),$ and $& \\widetilde{J}_{1} = \\begin{pmatrix}0 & e^{-i\\pi \\beta } \\\\ -e^{i\\pi \\beta } & 0\\end{pmatrix}, \\;\\; \\widetilde{J}_{4} = \\begin{pmatrix}0 & e^{i\\pi \\beta } \\\\ -e^{-i\\pi \\beta } & 0\\end{pmatrix}, \\;\\; \\widetilde{J}_{2} = \\widetilde{J}_{6} = \\begin{pmatrix}1 & 0 \\\\ e^{i\\pi \\beta } & 1\\end{pmatrix}, \\;\\; \\widetilde{J}_{3} = \\widetilde{J}_{5} = \\begin{pmatrix}1 & 0 \\\\ e^{-i\\pi \\beta } & 1\\end{pmatrix}.$ (c) As $z \\rightarrow \\infty $ , $z \\notin \\Sigma _{\\mathrm {HG}}$ , we have $\\Phi _{\\mathrm {HG}}(z) = \\left( I + \\frac{\\Phi _{\\mathrm {HG},1}(\\beta )}{z} + {\\mathcal {O}}(z^{-2}) \\right) z^{-\\beta \\sigma _{3}}e^{-\\frac{z}{2}\\sigma _{3}}\\left\\lbrace \\begin{array}{l l}\\displaystyle e^{i\\pi \\beta \\sigma _{3}}, & \\displaystyle \\tfrac{\\pi }{2} < \\arg z < \\tfrac{3\\pi }{2}, \\\\\\begin{pmatrix}0 & -1 \\\\ 1 & 0\\end{pmatrix}, & \\displaystyle -\\tfrac{\\pi }{2} < \\arg z < \\tfrac{\\pi }{2},\\end{array} \\right.$ where $z^{\\beta } = |z|^{\\beta }e^{i\\beta \\arg z}$ with $\\arg z \\in (-\\frac{\\pi }{2},\\frac{3\\pi }{2})$ and $\\Phi _{\\mathrm {HG},1}(\\beta ) = \\beta ^{2} \\begin{pmatrix}-1 & \\tau (\\beta ) \\\\ - \\tau (-\\beta ) & 1\\end{pmatrix}, \\qquad \\tau (\\beta ) = \\frac{- \\Gamma \\left( -\\beta \\right)}{\\Gamma \\left( \\beta + 1 \\right)}.$ (d) $\\Phi _{\\mathrm {HG}}(z) = {\\mathcal {O}}(\\log z)$ as $z \\rightarrow 0$ .", "This model RH problem can be solved explicitly using confluent hypergeometric functions [36].", "By a computation similar to [15] and [24], we have $\\Phi _{\\mathrm {HG}}(z) = \\Upsilon ^{(0)} (I + \\Upsilon ^{(1)}z+{\\mathcal {O}}(z^{2})) \\begin{pmatrix}1 & \\frac{\\sin (\\pi \\beta )}{\\pi } \\log z \\\\0 & 1\\end{pmatrix}, \\qquad \\mbox{as } z \\rightarrow 0, \\, \\arg z \\in (\\tfrac{3\\pi }{4},\\tfrac{5\\pi }{4}),$ where $\\Upsilon ^{(0)} = \\begin{pmatrix}\\Gamma (1-\\beta ) & \\frac{1}{\\Gamma (\\beta )} \\left( \\frac{\\Gamma ^{\\prime }(1-\\beta )}{\\Gamma (1-\\beta )}+2\\gamma _{\\mathrm {E}} - i \\pi \\right) \\\\\\Gamma (1+\\beta ) & \\frac{-1}{\\Gamma (-\\beta )} \\left( \\frac{\\Gamma ^{\\prime }(-\\beta )}{\\Gamma (-\\beta )} + 2\\gamma _{\\mathrm {E}} - i \\pi \\right)\\end{pmatrix}, \\qquad \\Upsilon ^{(1)}_{21}= \\frac{\\beta \\pi }{\\sin (\\pi \\beta )},$ $\\gamma _{\\mathrm {E}}$ is Euler's gamma constant 
and $\\log z = \\log |z| + i \\arg z, \\qquad \\arg z \\in \\big (-\\tfrac{\\pi }{2},\\tfrac{3\\pi }{2}\\big ).$" ], [ "Acknowledgements.", "C.C.", "acknowledges support from the European Research Council, Grant Agreement No.", "682537, the Swedish Research Council, Grant No.", "2015-05430, and the Ruth and Nils-Erik Stenbäck Foundation.", "P.M. acknowledges support from the Swedish Research Council, Grant No.", "2017-05195." ] ]
2107.01859
[ [ "Part2Word: Learning Joint Embedding of Point Clouds and Text by Matching Parts to Words" ], [ "Abstract It is important to learn a joint embedding of 3D shapes and text for different shape understanding tasks, such as shape-text matching, retrieval, and shape captioning.", "Current multi-view based methods learn a mapping from multiple rendered views to text.", "However, these methods cannot analyze 3D shapes well due to self-occlusion and the limitations of learning manifolds.", "To resolve this issue, we propose a method to learn a joint embedding of point clouds and text by matching parts from shapes to words from sentences in a common space.", "Specifically, we first learn a segmentation prior to segment point clouds into parts.", "Then, we map parts and words into an optimized space, where the parts and words can be matched with each other.", "In the optimized space, we represent a part by aggregating the features of all points within the part, while representing each word with its context information, and we train our network to minimize the triplet ranking loss.", "Moreover, we also introduce cross-modal attention to capture the part-word relationships in this matching procedure, which enhances joint embedding learning.", "Our experimental results outperform the state-of-the-art in multi-modal retrieval on the widely used benchmark." ], [ "Introduction", "Large collections of 3D models with rich details have become available for 3D deep learning research and applications [3], [31].", "Beyond the 3D shapes themselves, text descriptions provide additional information and make it convenient for people to retrieve and use these massive 3D models.", "However, it is hard to jointly understand 3D shapes and text at the same time due to their different modalities, which makes it challenging to represent both of them in a common semantic space.", "State-of-the-art methods aim to map different 3D representations, such as voxel grids [4] and multiple views [13], [12], into a joint embedding space learned together with text.", "However, both voxel grids and multiple views make these methods struggle to jointly understand shapes and text, due to the loss of shape information caused by the low resolution of voxel grids and by self-occlusion in multiple views.", "Learning a joint embedding of 3D shapes and text is a promising solution to overcome this challenge.", "However, with the 3D representations used by existing methods, such as voxel grids [4] and multiple views [13], [12], it is hard to learn an expressive embedding of a 3D shape, because of the lack of 3D information caused by the low resolution of voxels and by self-occlusion in multiple views, which directly leads to an unsatisfactory joint understanding of shapes and text.", "Figure: We propose a method to learn the joint embedding of point clouds and text by matching parts to words.", "Using the learned joint embedding, we can either retrieve shapes using sentences or retrieve sentences using shapes.", "To resolve this issue, we propose a point-based multi-modal alignment network to learn the joint embedding of point clouds and text.", "To leverage more local shape information, our network is trained to match parts on point clouds to words in sentences.", "Specifically, we first learn a segmentation prior to segment point clouds into parts.", "Then, we map parts and words into an optimized space, where the parts and words can be matched with each other.", "In the optimized space, we represent a part by aggregating features of all
points within the part, while representing each word with its context information, and we train our network to minimize the triplet ranking loss.", "Moreover, we also introduce cross-modal attention to capture the part-word relationships in this matching procedure, which enhances joint embedding learning.", "Experimental results show that our method can significantly improve the ability to jointly understand shapes and text.", "Our contributions are listed below.", "We propose a novel network framework for matching 3D shapes with text descriptions, based on points equipped with semantic segmentation features.", "Compared with existing methods, our proposed network achieves state-of-the-art results for matching 3D shapes with text descriptions on various evaluation metrics.", "We demonstrate retrieval and visualization results to further illustrate the effectiveness of our proposed network." ], [ "Related Work", "We review work in related areas such as multi-modal representation learning of shapes and text, deep learning on 3D point clouds, and text-related matching tasks." ], [ "Joint embedding of 3D shapes and text", "In a recent pioneering work, Chen et al.", "chen2018text2shape introduce a novel 3D-Text cross-modal dataset by annotating each 3D shape from ShapeNet [3] with natural language descriptions.", "In order to understand the inherent connections between text and 3D shapes, they employ a CNN+RNN and a 3D-CNN to extract features from freeform text and 3D voxelized shapes, respectively.", "Their model uses a full multi-modal loss to learn the joint embedding and calculates the similarity between the features of both modalities.", "However, due to the computational complexity of 3D convolutions, it is hard to generalize this model to high resolutions.", "To resolve this issue, Han et al.", "han2019y2seq2seq propose $\\rm {Y^{2}Seq2Seq}$ , which is a view-based method, to learn cross-modal representations by joint reconstruction and prediction of view and word sequences.", "Although this method can extract texture information from multiple rendered views by a CNN and acquire a global shape representation by an RNN, it ignores local information aggregation such as part-level features of 3D shapes, which prove to be useful for 3D-Text tasks.", "To take a step further, Han et al.", "han2020shapecaptioner propose to detect shape parts on 2D rendered images, but this approach still struggles to fully understand 3D shapes due to inaccurate boundaries and self-occlusion.", "In contrast, our method directly learns from point clouds sampled from shapes, which better preserves the intrinsic 3D properties and therefore obtains more discriminative features." ], [ "Point-based 3D deep learning", "Point clouds have become an important representation of 3D shapes due to their simplicity and compactness.", "PointNet [32] and PointNet++ [33] are the pioneering works for understanding this kind of irregular data.", "After that, many studies [39], [27] have been proposed to improve the interpretability of networks for point clouds in different tasks, such as segmentation [37], [29], [28], classification [37], [29], [28], reconstruction [14], [18], [9], [11], and completion [16], [15], [38].", "Besides, the learned deep features of a single point or of the whole shape can also be applied to 3D shape based cross-modal applications, for example, the shape-to-text matching in our case.", "In detail, we learn a segmentation prior to segment point clouds into multiple parts; the point-level features of the parts are further aggregated and then matched with words from the text."
], [ "Image-text matching", "The image-text matching task allows an image or a text to retrieve the most relevant instance of the other modality from a multi-modal database.", "Most existing methods can be roughly categorized into two types: global matching methods and regional matching methods.", "Global matching methods [30] aim to extract a global representation from both images and texts, and then calculate a similarity score.", "Kiros et al.", "kiros2014unifying force images and text to be mapped to the same embedding space by optimizing a pairwise ranking loss.", "Faghri et al.", "DBLP:conf/bmvc/FaghriFKF18 try to improve the performance by exploiting a hard negative mining strategy during training.", "Chen et al.", "DBLP:conf/eccv/ChenDL20 train models by a combination of an online triplet loss and an offline quintuplet loss.", "Zhang et al.", "DBLP:conf/eccv/ZhangL18 propose a CMPM loss and a CMPC loss to learn a discriminative image-text embedding.", "The key idea of these works is to use different loss functions to project images and text into the same embedding space.", "Besides, Wang et al.", "DBLP:conf/mm/WangYXHS17 and Gu et al.", "DBLP:conf/cvpr/GuCJN018 use generative models to learn textual-visual feature embeddings in a common representational space.", "Regional image-text matching methods first extract image region representations from existing detectors and then take the latent visual-semantic correspondence between image regions and words into consideration.", "Karpathy et al.", "karpathy2014deep,karpathy2015deep propose visual-semantic matching by inferring the inter-modal alignment: these methods first detect object regions, then acquire the region-word correspondences, and finally aggregate the similarity of all possible pairs of image regions and words in the sentence to infer the global image-text similarity.", "Inspired by [1], SCAN [25] takes a step towards attending to important image regions and words with each other as context for inferring the image-text similarity.", "Recently, some works [20], [40], [26], [17], [6], [36] attempt to improve SCAN and to achieve better performance.", "Inspired by the framework of SCAN [25], we introduce a cross-attention mechanism to learn the joint embedding of 3D shapes and text by matching parts from shapes to words from sentences.", "Note that, compared with ShapeCaptioner [12], which learns regional representations from multi-view images, our method directly utilizes point clouds as the intermediate representation of 3D shapes, and learns deep embedded features of 3D parts obtained by point cloud segmentation, which is a key difference from previous methods."
], [ "We design a network to complete the 3D shape-text matching task, as shown in Figure REF .", "The proposed network includes three modules: shape encoder, text encoder, and matching module.", "To encode a 3D shape $\\mathcal {S}$ , we use a pre-trained segmentation network to obtain the intermediate representation of each sampling point on the input surface model.", "Then, we aggregate these representations to extract the part embedding $\\mathcal {P} \\in \\lbrace p_1, p_2, ..., p_k\\rbrace $ of the input shape $\\mathcal {S}$ .", "For the text encoder, we use the Bi-directional Gate Recurrent Unit (GRU) to learn context sensitive embedding $\\mathcal {W}\\in \\lbrace w_1, w_2, ..., w_m\\rbrace $ of each word in the sentence $\\mathcal {T}$ .", "To achieve the matching between $\\mathcal {P}$ and $\\mathcal {W}$ , we employ an alignment-based matching module, which uses cross attention to align parts with words and acquire similarity score.", "The module contains a pair of symmetrical formulations which are denoted as Shape-Text and Text-Shape.", "Our shape encoder extracts the embedding of parts on each input shape by aggregating the features of corresponding points on the segmented parts, as shown in Figure REF .", "We firstly feed $\\mathcal {S}$ to a pre-trained point-based segmentation network (using PointNet [32] in our case) to extract the features of each point.", "Besides the coordinates, we also incorporate the color information of each point in the shape encoder.", "Then, we concatenate the outputs $f^1, f^2, f^3$ of the last three layers of PointNet to form the embedding of parts, which includes the information from the different semantic hierarchies.", "Moreover, we also concatenate the color representation $color$ of the input shape to leverage the color information.", "We ignore the part which contains less than 25 points, and limit the number of segmented parts is not larger than $\\kappa $ for each input shape.", "Then, we feed the aggregated features and part segmentation information into a Group Average Pooling layer to extract the part embedding $\\mathcal {P}$ of each part." ], [ "Text Encoder", "For the text encoder, we use Bi-directional GRU to extract the context-sensitive word embedding $\\mathcal {W}$ .", "Each text description $\\mathcal {T}$ is first represented by the embedding of each single word in the sentence through a word embedding layer, where the embedding of each single word is also simultaneously learned with other parameters in the network.", "Then, we encode the context of each single word in the bi-directioal GRU.", "For the forward GRU, the hidden state at position $i$ can be calculated from the word embedding at position $i$ and the hidden state at position $i-1$ .", "Similarly, for the reverse GRU, the hidden state at position $i$ is calculated from the word embedding at position $i$ and the hidden state at position $i+1$ .", "Finally, the context-sensitive word embedding is obtained by the averages of the hidden states in the two directions." 
], [ "Matching", "Shape-Text matching module matches the input 3D shape $\\mathcal {S}$ and text $\\mathcal {T}$ by the part embedding $\\mathcal {P}$ and context-sensitive word embedding $\\mathcal {W}$ respectively extracted by our shape encoder and text encoder.", "Note that the part embedding first needs to go through a single fully connected layer to ensure that it has the same dimensions as the word embedding.", "Then, we introduce cross attention to compute two symmetrical formulations: Shape-Text matching score, and Text-Shape matching score.", "For the Shape-Text matching, we firstly use cross attention to build the relationship between parts and words.", "We compute cosine similarity between $\\mathcal {P}$ and $\\mathcal {W}$ to obtain the attention matrix $\\mathcal {M}^s$ , and use LeakyReLU to weaken the impact of negative values, as shown in Eq.", "(REF ).", "Then, the attention matrix $\\mathcal {M}^s$ is normalized by part-wise L-2 normalization in Eq.", "(REF ) and word-wise $\\lambda $ -$\\rm {softmax}$ function in Eq.", "(REF ), where $\\lambda $ is the inversed temperature of the softmax function [7].", "After that, we multiple the normalized attention matrix $\\mathcal {M}^s$ and context-sensitive word embedding $\\mathcal {W}$ to obtain the attention sentence embedding $\\mathcal {E}$ corresponding to each shape part in Eq.", "(REF ).", "$\\mathcal {M}^s_{i j}=\\operatorname{LeakyReLU} \\left(\\frac{p_{i}^{T} w_{j}}{\\left\\Vert p_{i}\\right\\Vert \\left\\Vert w_{j}\\right\\Vert }, 0.1\\right), i \\in [1, k], j \\in [1, m]$ $\\mathcal {M}^a_{i, j}=\\frac{\\mathcal {M}^s_{ij}}{\\sqrt{\\sum _{i=1}^{k}\\left(\\mathcal {M}^s_{ij}\\right)^{2}}},$ $\\mathcal {M}^a_{i j}=\\frac{\\exp \\left(\\lambda _{1} \\mathcal {M}^a_{i, j}\\right)}{\\sum _{j=1}^{m} \\exp \\left(\\lambda _{1} \\mathcal {M}^a_{i, j}\\right)}$ $\\mathcal {E}_{i}=\\sum _{j=1}^{m} \\mathcal {M}^a_{i j} \\mathcal {W}_{j}$ Finally, we calculate the cosine similarity between $\\mathcal {P}$ and $\\mathcal {E}$ to represent the relationship between parts and sentences in Eq.", "(REF ).", "And the final Shape-Text similarity score is obtained through the $\\rm {LogSumExp}$ pooling, as shown in Eq.", "REF .", "$R\\left(p_{i}, e_{i}\\right)=\\frac{p_{i}^{T} e_{i}}{\\left\\Vert p_{i}\\right\\Vert \\left\\Vert e_{i}\\right\\Vert }, i\\in [1,k]$ $S_{L S E}(S, T)=\\log \\left(\\sum _{i=1}^{k} \\exp \\left(\\lambda _{2}R\\left(p_{i}, e_{i}\\right)\\right)\\right)^{\\left(1 / \\lambda _{2}\\right)}$ Similarly, the Shape-Text matching score $S_{L S E}(T, S)$ can be calculated by reversing the embedding of parts and words." 
], [ "Objective Function", "We use paired ranking loss in our objective function, as shown in Eq.", "(REF ).", "To facilitate the network to better converge and avoid getting into collapsed model, we employ the semi-hard negative sampling mining strategy [34].", "Specifically, for a positive sampling pair $(S,T)$ , we select the hardest negative sampling pair $(\\hat{S}_{semi}, \\hat{T}_{semi})$ which has a smaller similarity score than $(S,T)$ , and calculate the triple loss for the input shape and text respectively.", "Similarly, the triplet loss between the sampling pair $(T,S)$ can also be calculated in the same way.", "The triplet loss for both pairs of $(S,T)$ and $(T,S)$ is defined below, here $\\alpha $ is a margin that is enforced between positive and negative pairs.", "$\\begin{split}L(S,T) = &[\\alpha -S_{LSE}(S,T) + S_{LSE}(S,\\hat{T}_{semi})]_+ + \\\\&[\\alpha -S_{LSE}(S,T)+S_{LSE}(\\hat{S}_{semi},T)]_+\\\\L(T,S) = &[\\alpha -S_{LSE}(T,S) + S_{LSE}(T,\\hat{S}_{semi})]_+ + \\\\&[\\alpha -S_{LSE}(T,S)+S_{LSE}(\\hat{T}_{semi},S)]_+\\end{split}$ In summary, we train our network by minimizing the following loss function, where $\\beta $ is a balance weight and we set $\\beta =1$ in all our experiments.", "$L=L(S,T)+\\beta L(S,T)$ We conducted comparison experiments to evaluate the performance of our proposed network on widely used benchmarks.", "We first introduce the benchmark [4], [31], [3], evaluation metric as well as the parameter setting of our proposed network, then we report the comparison results with the SOTA methods.", "We also show the results of ablation studies to explain the design of our proposed network.", "Finally, we explore the relationship between parts and words by visualizing the attention learned in our network." ], [ "We evaluate our proposed network on 3D-Text cross-modal dataset [4].", "However, this dataset does not include 3D point clouds and the segmentation prior.", "To resolve this challenge, we employed two additional datasets, ShapeNet [3] and PartNet [31], which share the same 3D models.", "ShapeNet [3] contains different 3D representations, including point clouds with color, but no segmentation annotation.", "PartNet [31] contains fine-grained, instance-level, and hierarchical 3D part information which is manually annotated.", "However, the PartNet does not contain color information of 3D point clouds.", "To leverage the color information of 3D point clouds and the part segmentation annotation at the same time, we perform point cloud registration [2] on both point cloud models make an alignment, then we annotate segmentation labels on the point clouds of ShapeNet by the nearest annotated neighbor points on PartNet.", "Finally, We use 11498 3D shapes for training and 1434 3D shapes for testing contains chairs and tables.", "Each 3D shape has an average of 5 text descriptions.", "For the evaluation metrics, we employ recall rate (RR@$k$ ) and NDCG [19] to conduct quantitative evaluation." 
], [ "We train the two networks (segmentation network and matching network) separately on the same dataset.", "For the point cloud segmentation network, 2500 points are randomly sampled from point clouds with 10000 points to represent a shape.", "For training, Adadelta [41] is used as the optimizer, the batch size is set to 32, the learning rate is set to $0.01$ , and the training epoch is 300.", "In the matching network, with comparison experiments, we set the max number $\\kappa $ of parts on each shape as 5, and the dimension of part embedding we feed into the matching module as 1024. we set the dimension of word embedding to 300 and the hidden state dimension to 1024, which is consistent with [13], [12].", "We also use the vocabulary of 3587 unique words and a single layer bi-direction GRU as the text encoder.", "For the loss function, we adopt semi-hard negative mining strategy, and the margin $\\alpha $ of triplet ranking loss is set to $0.2$ .", "For training, we use the Adam [23] optimizer and set the learning rate to $0.001$ ." ], [ "Comparison with SOTA methods", "Table REF presents the quantitative results on ShapeNet where our method outperforms the existing approaches [4], [13] in all measures.", "To compare the local part information with the global shape information, we designed an end-to-end model, which simply uses PointNet as the point cloud global feature encoder, Bi-GRU as the text encoder, and also uses semi-hard negative mining triplet ranking loss to train the network.", "We also take different formulation of cross attention into consideration, where S-T represents Shape-Text formulation, T-S represents Text-Shape formulation, and T-S + S-T represents the average of two predicted similarity scores.", "Our results experimentally demonstrate that our method achieves significantly better performance than the end-to-end method using global information.", "Compared with the state-of-the-art methods, our best RR@1 is almost one time better than the results of Y2Seq2seq in both shape-to-text retrieval and text-to-shape retrieval task.", "The examples of T2S and S2T retrieval results are shown in Figure REF .", "For the S2T retrieval task, our proposed model is employed to retrieve the top-5 matched sentences.", "Symmetrically, for the T2S retrieval task, our proposed network is employed to find the top-5 matched 3D shapes.", "In this figure, we mark the ground-truth text descriptions of the corresponding shapes in red." 
], [ "Ablation Study", "We first explore the impact of part embedding extracted under different segmentation granularities on the matching model.", "The PartNet dataset contains hierarchical segmentation annotation of 24 object categories.", "Meanwhile, for the text-3D shape matching task, we only need the object and segmentation labels of the two categories of chair and table.", "Therefore, we can obtain semantic segmentation annotations of 17, 72 ,and 90 categories from coarse level to fine-grained level part semantic segmentation, respectively.", "In addition, we also created a 44-category semantic segmentation annotation by merging too detailed semantic parts.", "We employ PointNet to learn the part segmentation model from these four segmentation granularities $m$ , $m \\in \\lbrace 17, 44, 72, 90\\rbrace $ .", "As shown in Table REF , we can find that the part embedding obtained from the 44-category segmentation model achieves the best results on the matching task.", "Through the results of the above experiments, we believe that the predicted segmentation results will become inaccurate, and many segmentation parts are redundant for matching when we employ fine-grained level part segmentation annotation.", "When using coarse level part segmentation, the learned segmentation network has more accurate segmentation results, but the obtained part embedding will ignore the details corresponding to the shape caption.", "Therefore, we need to find a balance between the accuracy and the semantic abundance of the segmentation model.", "In the following, we set the $m$ to 44.", "Table: Comparison of different negative sample learning strategies based on triplet ranking loss.Next, we explore the impact of different negative sample learning strategies based on triplet ranking loss on retrieval.", "As shown in Table REF , we compared three strategies: basic strategy, hardest negative mining, and semi-hard negative mining.", "The basic strategy (Triplet Loss) averages over all the triplet ranking loss of each negative pair in a mini-batch.", "The hardest negative mining strategy (HNM) only focuses on the triplet ranking loss of the hardest negative pair, and the semi-hard negative mining (Semi-hard) selects the negative sample pair which does not exceed the score of the positive sample pair in a mini-batch.", "Our experimental results show that the semi-hard negative mining strategy achieves better performance in all metrics.", "Table: Ablation study on part aggregation operation.Figure: Attention visualization.", "We visualize the attention weight to show the relationship between a part and each word in the sentence.", "The color of red indicates large attention weights.Table REF shows the effectiveness of our proposed part aggregation operation.", "We experimentally prove the necessity of explicitly adding color information by comparing the matching results with part color concatenated to part embedding.", "We improve NDCG@5 about 1.13 and 1.54 separately in S2T and T2S tasks after explicitly using color information.", "The results indicate that we should to explicitly concatenate the color information of each part to part embedding, although the point color is involved as a part of the input of the segmentation network.", "Besides, to compare the performance of our aggregation with the embeddings of different hierarchies, we attempt to replace the concatenated embedding with the feature of the last fully connected layer.", "For a fair comparison, color information is also explicitly added to the 
part embedding.", "The result shows that the NDCG@5 with our aggregation improves 0.91 and 1.66 in the S2T task and T2S task, respectively.", "We also compare the max pooling with the mean pooling, the results show that mean pooling can slightly improve Recall@1 in S2T and T2S task.", "These experiments demonstrate the effectiveness of our proposed aggregation." ], [ "Visualization", "To interpret our proposed network, we visualize the intermediate results of cross attention matching module, as shown in Figure REF .", "Given pair of shape and text, we use our proposed Part2Word matching model to acquire the attention weight between parts and words.", "The correlation between each word of the input text and each part of the input shape is visualized by controlling their transparency using the corresponding attention weights.", "A visualization example is shown in Figure REF .", "The chair first is divided into 5 parts by trained part segmentation network, and then we use the Part2Word model to calculate the attention weights.", "By analyzing the visualization results, we can find the black seat part matches word \"black\" and \"seat\" in the sentence well, and the part of yellow and black rest also attend the words \"yellow\", \"black\" and \"rest\".", "Besides, the attention weight between the part of blue legs and the word \"blue\" obtained the highest score." ], [ "Limitation", "Although our experimental results demonstrated the proposed network is significantly better than the existing networks, this is the baseline of the point-based matching network since we use PointNet segmentation network to extract the part embedding.", "The performance can be improved greatly by using other advanced point-based networks.", "For the ShapeNet dataset, we found they have color problems on a large number of point cloud data, as shown in Figure REF .", "The color of points is not correct, it may be caused by data processing mistakes.", "Therefore, noise information is involved in our network and affected our final results.", "Finally, comparing with multi-views based approaches, the point-based method should carefully distinguish the difference between the original color of points and rendered color of them.", "And the sparse sampling points may hard to exactly represent the surface color because of the highlights and shadows, according to different rendering environments." ], [ "Conclusion", "We introduce a method to learn joint embedding of 3D point clouds and text.", "Our method successively increases the joint understanding of 3D point clouds and text by learning to match 3D parts to words in an optimized space.", "We obtain the 3D parts by leveraging a 3D segmentation prior, which effectively resolves the self-occlusion issue of parts that suffers current multi-view based methods.", "We also demonstrate that matching 3D parts to words is a good way to merge different modalities including 3D shapes and text in a common space, where the proposed cross-modal attention is also justified to effectively capture the relationship of part-word in this matching procedure.", "Experimental results show that our method outperforms other state-of-the-art methods." ] ]
2107.01872
[ [ "Detecting Concept Drift With Neural Network Model Uncertainty" ], [ "Abstract Deployed machine learning models are confronted with the problem of changing data over time, a phenomenon also called concept drift.", "While existing approaches of concept drift detection already show convincing results, they require true labels as a prerequisite for successful drift detection.", "Especially in many real-world application scenarios-like the ones covered in this work-true labels are scarce, and their acquisition is expensive.", "Therefore, we introduce a new algorithm for drift detection, Uncertainty Drift Detection (UDD), which is able to detect drifts without access to true labels.", "Our approach is based on the uncertainty estimates provided by a deep neural network in combination with Monte Carlo Dropout.", "Structural changes over time are detected by applying the ADWIN technique on the uncertainty estimates, and detected drifts trigger a retraining of the prediction model.", "In contrast to input data-based drift detection, our approach considers the effects of the current input data on the properties of the prediction model rather than detecting change on the input data only (which can lead to unnecessary retrainings).", "We show that UDD outperforms other state-of-the-art strategies on two synthetic as well as ten real-world data sets for both regression and classification tasks." ], [ "Introduction", "Across most industries, machine learning (ML) models are deployed to capture the benefits of the ever-increasing amounts of available data.", "When deploying models, most practitioners assume that future incoming data streams are stationary, i.e., the data generating process does not change over time.", "However, this assumption does not hold true for the majority of real-world applications [1].", "In the literature, this phenomenon is referred to as concept drift or dataset shift, which usually leads to a decreasing prediction performance.", "Even small changes or perturbations in the distribution can cause large errors—which has been shown through, e.g., adversarial examples [42].", "The concept drift community has developed several learning algorithms that are able to adapt incrementally [38] or detect concept drift and trigger retrainings of a corresponding learning algorithm [4], [17].", "These techniques usually require full and immediate access to ground-truth labels, which is an unrealistic assumption in most real-world use cases.", "As an example, let us consider a manufacturing line with a manual end-of-line quality control.", "By collecting sensor data from all manufacturing stations and combining this information with previously acquired quality assessments (labels) of human experts, a predictive model can be built to replace the manual quality control and thus reduce repetitive and expensive human labour.", "However, this prediction model is likely exposed to concept drift due to, e.g., modifications in raw materials, machine wear, ageing sensors or changing indoor temperatures due to seasonal changes.", "A continuous stream of true labels for concept drift detection is not available in this use case—which is why traditional concept drift detection algorithms are not applicable.", "To that end, we propose a novel concept drift detection algorithm which detects drifts based on the prediction uncertainty of a neural network at inference time, and we call this method Uncertainty Drift Detection (UDD).", "Specifically, we derive uncertainty by applying Monte Carlo Dropout 
[16].", "In case of a detected drift, we assume that true labels are available upon request (e.g., provided by domain experts) for retraining of the prediction model.", "In contrast to most drift detection algorithms, UDD can be used for both regression and classification problems.", "We evaluate UDD on two synthetic as well as ten real-world benchmark data sets and show that it outperforms other state-of-the-art drift detection algorithms.", "The ML and data mining communities use different terms to describe the phenomenon of changing data distributions over time and its impact on ML models [29].", "Dataset shift [32] is described as a change in the common probability distribution of input data $x$ and corresponding labels $y$ between training $(tr)$ and test time $(tst)$ : $P_{tr}(x,y) \\ne P_{tst}(x,y)$ .", "This is similar to a common definition of concept drift [18], [45]: $P_{t_0}(x,y) \\ne P_{t_1}(x,y)$ , where $t_0$ and $t_1$ are two different points in time with $t_1>t_0$ .", "Note the difference regarding the indices: Dataset shift focuses on the difference between training and testing environment, whereas concept drift refers to the temporal structure of the data.", "Dataset shift and concept drift can be further divided into different subcategories: Covariate shift (or virtual drift [18]) refers to changes in the distribution of the input data $x$ , without affecting the distribution of labels: $P_{tr}(x) \\ne P_{tst}(x)$ and $P_{tr}(y \\vert x) = P_{tst}(y \\vert x)$ [29].", "Real concept drift refers to any changes in $P(y \\vert x)$ , independent of whether this change is triggered by $P(x)$ or not." ], [ "Handling Concept Drift", "While the above terms all describe related topics, research in the area of concept drift not only deals with the detection of distributional changes and their impact on the prediction quality but also focuses on the question of how to adapt and retrain the affected model [18].", "There are many reasons for changing data.", "Usually, it is intractable to measure all confounding factors—which is why those factors cannot directly be included in the ML model.", "Often, those factors are considered as “hidden context” of the ML models' environment [43].", "In general, three different categories for detecting concept drift can be distinguished [26]: First, error rate-based drift detection, which is also the largest group of methods [26] and aims at tracking changes in the error rate of a ML model.", "Popular algorithms in this category are the Drift Detection Method (DDM) [17], Page-Hinkley test [31], and ADaptive WINdowing (ADWIN) [4].", "Note that the error rate-based drift detection necessarily requires access to ground-truth labels.", "Second, data distribution-based drift detection usually applies some distance function to quantify the similarity between the distributions of a reference batch of data and the current data.", "Algorithms in this category work on the input data $x$ only and do not require true labels for drift detection.", "Popular approaches are based on tests for distribution similarity, such as Kolomogorov-Smirnov test [33].", "Third, the multiple hypothesis test category detects drift by combining several methods from the previous two categories.", "In many real-world applications, the assumption that all true labels are available is unrealistic [23].", "Furthermore, the acquisition of true labels from experts (e.g., in quality control) is likely expensive.", "Those limitations have inspired research on handling concept drift 
under limited label availability.", "In general, methods can be distinguished based on their (non-)requirement of true labels for either drift detection or for retraining of the corresponding model: The first category of algorithms assumes that true labels are available for both drift detection and retraining, but they are only provided in limited portions at specific points in time.", "In this category, algorithms based on active learning have been developed, where true labels for selected samples are acquired based on a certain decision criterion [12], [47].", "Other approaches under limited label availability apply semi-supervised learning methods with clustering techniques to derive concept clusters which can be investigated for drifts [27], [46].", "The second category requires no true labels for detection of concept drifts, but it uses them for retraining of the model in case of a drift.", "One approach uses confidence scores produced by support vector machines during prediction time and compares those over time by measuring a distance between a reference window and a window of current instances [25].", "If this distance reaches a fixed threshold, an alarm is triggered and the model is retrained using a limited set of current true labels.", "Other algorithms monitor the ratio of samples within the decision margin of a support vector machine for change detection [37].", "An incremental version of the Kolmogorov-Smirnov test has also been applied in this category [34].", "The third category handles concept drift without any label access, neither for drift detection nor for retraining, e.g., by applying ongoing self-supervised learning to the underlying classifier [10], [41].", "Note that the first category requires some true labels continuously over time in order to be able to detect a drift and trigger corresponding retraining.", "In contrast, the second category monitors the data stream for drifts based on the input data only and then requires true labels in case a drift has been detected.", "This is also the category that UDD belongs to.", "The third category can adapt without any true label knowledge.", "However, this category of algorithms also has the least adaption capabilities due to its limited knowledge of changes." 
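To make the distinction between error rate-based and data distribution-based drift detection concrete, the following minimal sketch uses the scikit-multiflow implementations referred to later in this paper (ADWIN's sensitivity parameter is called delta there); the two synthetic streams are placeholders constructed only for illustration.

```python
import numpy as np
from skmultiflow.drift_detection import ADWIN, KSWIN

rng = np.random.default_rng(0)
errors = np.concatenate([rng.binomial(1, 0.1, 1000),       # 0/1 prediction errors, drift after t=1000
                         rng.binomial(1, 0.4, 1000)])
feature = np.concatenate([rng.normal(0, 1, 1000),          # one input feature whose mean shifts
                          rng.normal(2, 1, 1000)])

# Error rate-based detection: requires the true label of every instance to form the error stream.
adwin = ADWIN(delta=0.002)                                  # delta controls the sensitivity
for t, e in enumerate(errors):
    adwin.add_element(float(e))
    if adwin.detected_change():
        print(f"ADWIN: error-rate drift detected at t={t}")

# Data distribution-based detection: monitors a single input feature, no labels needed.
kswin = KSWIN(alpha=0.005, window_size=200, stat_size=100)
for t, x in enumerate(feature):
    kswin.add_element(x)
    if kswin.detected_change():
        print(f"KSWIN: input-data drift detected at t={t}")
```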
], [ "Uncertainty in Neural Networks", "In many applications it is desirable to understand the certainty of a model's prediction.", "Often times, class probabilites (e.g., outputs of a softmax layer) are erroneously interpreted as a model's confidence.", "In fact, a model can be uncertain in its predictions even with a high softmax output for a particular class [15].", "Generally, neural networks are not good at extrapolating to unseen data [19].", "Hence, if some unusual data is introduced to the model, the output of a softmax layer can be misleading—e.g., unjustifiably high.", "This likely happens in the case of concept drifts.", "Generally, existing literature distinguishes two types of uncertainty: aleatory and epistemic [9].", "The former (also called data uncertainty) can usually be explained by randomness in the data generation process and, e.g., corresponds to the error term in a regression setting.", "The latter (statistical or model uncertainty) usually results from insufficient training data.", "For classification tasks, uncertainty can be for instance quantified through entropy, variation ratios or mutual information [20].", "One state-of-the-art approach to capture model uncertainty for neural networks is Monte Carlo Dropout (MCD) [16].", "While dropout at training time has been widely used as a regularization technique to avoid overfitting [40], the idea of MCD is to introduce randomness in the predictions using dropout at inference time.", "This allows to deduce uncertainty estimates by performing multiple forward passes of a given data instance through the network and analyzing the resulting empirical distribution over the outputs or parameters.", "Another family of methods to quantify predictive uncertainty is called Deep Ensembles [24].", "In essence, the authors of the respective paper [24] propose to enhance the final layer of a neural network such that the model's output is not just a single prediction but a set of distributional parameters, e.g., the mean and variance for a Gaussian distribution.", "The corresponding parameters can then be fitted by using the (negative) log-likelihood as loss function.", "For previously unseen data, the approach suggests then to train an ensemble of several neural networks with different initializations at random.", "The average of all variance estimates can eventually be interpreted as model uncertainty.", "Other recent approaches for quantifying uncertainty in neural networks include variational inference [6], expectation propagation [21], evidential deep learning [36], and stochastic gradient Markov Chain Monte Carlo methods [44], some of which have been applied to areas like active learning [3], [20] and others.", "A good overview of state-of-the-art methods for quantifying uncertainty, including an empirical comparison regarding their performance under dataset shift, is provided by Ovadia et al.", "[30].", "When labels are expensive and their availability is limited, popular drift detection algorithms like ADWIN, DDM and Page-Hinkley are not applicable in their original form, as these algorithms detect drifts based on a change in the prediction error rate (and therefore require true labels).", "As described in Section REF , there are different scenarios for concept drift handling with limited label availability.", "In this paper, we develop a novel approach which detects drifts without access to true labels—yet it requires labels for retraining the model.", "For detecting drifts, we rely on the uncertainty of a (deep) neural 
network's predictions.", "Previously, it has been shown that the uncertainty of a prediction model is correlated with the test error [22], [35].", "Thus, we argue that model uncertainty can be used as a proxy for the error rate and should therefore be a meaningful indicator of concept drift.", "To investigate this hypothesis, we develop the following approach: For each data instance, we measure the uncertainty of the corresponding prediction issued by the neural network.", "Subsequently, this uncertainty value is used as input for the ADWIN change detection algorithm.", "We choose ADWIN as it as able to work with any kind of real-valued input and does not require any knowledge regarding the input distribution [4].", "Other drift detection algorithms such as DDM [17] or EDDM [2] are designed for inputs with a Binomial distribution and are therefore not applicable to uncertainty measurements (which can have different distributions by nature).", "We call our approach Uncertainty Drift Detection (UDD).", "By applying UDD, we can detect significant changes in the mean uncertainty values over time.", "If a drift is detected, we require true labels for retraining of the model.", "Since there are methods for measuring uncertainty in both regression and classification settings, this approach allows to detect concept drifts for both learning tasks—as opposed to most other concept drift detection algorithms, which handle classification tasks only [23].", "Note that UDD cannot detect any label shift where $P_{tr}(x) = P_{tst}(x)$ and $P_{tr}(y \\vert x) \\ne P_{tst}(y \\vert x)$ .", "However, we assume that in most real-world settings there is no label shift without any changes in the input distribution.", "For drift detection without true label availability, input data-based drift detection, such as Kolmogorov-Smirnov [33], is generally also appropriate.", "However, considering solely input data bears the risk of detecting changes in features that may not be important for the prediction model.", "Specifically, it may occur that input data-based methods detect drifts (including expensive acquisition of new labels) where no retraining is required, because this drift will have little or no impact on the predictions of the model (e.g., virtual drift where $P_{t_0}(x) \\ne P_{t_1}(x)$ and $P_{t_0}(y \\vert x) = P_{t_1}(y \\vert x)$ ).", "Our uncertainty-based approach, on the other hand, detects only changes in the input data that also have an impact (as reflected by the uncertainty) on the predictions.", "For measuring uncertainty and computing predictions, we apply Monte Carlo Dropout (MCD) because it showed the best performance during our experiments.", "However, note that the proposed method can be easily extended to use other uncertainty estimates (e.g., Deep Ensembles) as well.", "In practice, MCD applies dropout at inference time with a different filter for each stochastic forward pass through the network.", "We denote $T$ the number of stochastic forward passes.", "Predictions $\\widehat{p}(y \\vert x)$ are computed by averaging the predictions for each forward pass $T$ given the samples $w_i$ of model parameters from the dropout distribution and the input data $x$ : $\\widehat{p}(y \\vert x) = \\frac{1}{T} \\sum _{i=1}^{T} p_i(y \\vert w_i, x)\\,.$ Regression and classification require different methods for determining predictive uncertainty.", "We choose to evaluate the uncertainty for classification tasks based on Shannon's entropy $H$ over all different label classes $K$ : $ 
H\\left[\\widehat{p}(y \\vert x)\\right] = - \\sum _{k=1}^{K} \\widehat{p}(y = k \\vert x) * \\log _{2}{\\widehat{p}(y = k \\vert x)}\\,.$ For regression tasks, uncertainty estimates can be obtained by computing the variance of the empirical distribution of the $T$ stochastic forward passes through the network [16]: $\\widehat{\\sigma }^{2} = \\frac{1}{T} \\sum _{i=1}^{T} \\left(p_i(y \\vert w_i, x) - \\widehat{p}(y \\vert x)\\right)^2\\,.$ Real-world data streams for concept drift handling are heterogeneous, e.g., in their number of class labels and size [39].", "This variability is also reflected by heterogeneous distributions of the respective uncertainty indicator.", "Furthermore, due to different approaches for computing uncertainty, this indicator varies significantly in scale and fluctuation between regression and classification problems.", "Therefore, ADWIN has to be adjusted to each data stream, which can be achieved by setting its sensitivity parameter $\\alpha \\in (0,1)$ : A change is detected when two sub-windows of a recent window of observations exhibit an absolute difference in means larger than $\\alpha $ .", "New data instances arrive individually and are predicted at the time of arrival.", "The obtained uncertainty $U_t$ (either expressed as entropy or variance) from the prediction at time $t$ is used as input for an ADWIN change detector.", "Once a drift is detected, a retraining of the prediction model is performed.", "For retraining, UDD uses the most recent data instances in addition to the original training data.", "This way, we can ensure that the model (a) can adapt to new concepts and (b) has enough training data for good generalization.", "Algorithm on page describes the required steps for UDD in a regression ($U_t$ equals variance of prediction $\\widehat{\\sigma }^{2}$ ) or classification setting ($U_t$ equals entropy of prediction $H_t$ ).", "[t] Uncertainty Drift Detection [1] Input: Trained model $M$ ; Data stream $\\mathcal {D}$ ; Training data $\\mathcal {D}_{tr}$ Output: Prediction $\\widehat{y}_t$ at time $t$ Receive incoming instance $x_t$ $\\widehat{y}_t, U_t \\leftarrow $ $M$ .predict($x_t$ ) Add $U_t$ to $ADWIN$ $ADWIN$ detects change Acquire most recent labels $y_{recent}$ $M$ .train($\\mathcal {D}_{tr}$ $\\cup $ $\\mathcal {D}_{recent}$ ) $\\mathcal {D}$ ends" ], [ "Experiments", "For evaluation purposes, we conduct extensive experiments to compare UDD with several competitive benchmark strategies on two synthetic and ten real-world data sets.", "This stands in contrast to most concept drift literature, where new methods are mainly evaluated on simulated data sets with artificially induced concept drifts.", "The code for our experiments can be found under https://github.com/anonymous-account-research/uncertainty-drift-detection." 
], [ "Experimental Setup", "Throughout the experiments, for MCD, we set the number of stochastic forward passes $T=100$ for regression tasks and $T=50$ for classification tasks.", "Regarding the deep feed forward network, we vary the structure between three to five hidden layers with relu activation functions depending on the data set.", "Each hidden layer is followed by a dropout layer with dropout rate $0.1$ or $0.2$ , as it is proposed in the original MCD paper [16].", "Details regarding the experimental setup for each data set can be found in Table REF in the appendix.", "For initial model training, we use the first five percent of a data stream's instances.", "We perform a parameter optimization for UDD by requiring the associated ADWIN algorithm to detect one drift on a given validation data set—this yields a concrete value for the sensitivity parameter $\\alpha $ .", "If no drifts are detected on the validation data with the initial value for $\\alpha $ , we assume that no drifts are present in the validation data and $\\alpha $ is set to the scikit-multiflow [28] default value of 0.002.", "We use the ten percent of instances following the initial training data as validation data.", "Every time we detect a drift, we provide the last data instances as well as corresponding labels equivalent to one percent of the overall data stream's length.", "The exemplary partitioning of a data stream is depicted in Figure REF .", "Figure: Partitioning of data stream.In order to benchmark UDD, we compare it against six different strategies within two groups.", "The first group of strategies handles concept drift with Limited Label Availability whereas the second group of strategies allows for Unlimited Label Availability.", "The first benchmark is a non-adaptive model, No Retraining (No Retr.).", "This strategy does not test for drifts and the ML model is only trained once with the initial training set.", "The performance of this strategy constitutes a lower-bound benchmark.", "The second benchmark is an Uninformed Retraining (Uninf.)", "strategy which randomly draws retraining points out of all possible time stamps included in the respective data stream.", "To ensure comparability, we set the number of retrainings of this strategy to be equal to the UDD approach.", "This also ensures that the uninformed retraining strategy receives access to the same number of true labels.", "Otherwise, a strategy with access to more true labels will likely perform better due to larger training set sizes.", "To get a reliable performance estimate for this strategy, we repeat this experiment five times and average the results.", "The third benchmark, Equal Distribution (Equal D.), is similar to the previous benchmark but the retraining points are equally distributed over the course of the data stream.", "The Kolmogorov-Smirnov test-based drift detector (KSWIN) belongs to the category of input data-based drift detection and works by individually investigating each input feature for changes.", "We optimize its sensitivity parameter $\\alpha $ with the same procedure as for UDD.", "This detector is known to produce many false positive concept drift signals, due to multiple hypothesis testing [33].", "Again, we restrict the number of retrainings to be equal to the UDD approach.", "If this strategy detects more drifts, detected drifts are sorted by the order of their p-values and only the top drifts are considered for retraining.", "For this strategy, we use the scikit-multiflow [28] implementation KSWIN 
(Kolmogorov-Smirnov WINdowing) with the following parameters: $window\\_size = 200$ , $stat\\_size=100$ ." ], [ "Unlimited Label Availability", "The second group of strategies is not restricted with respect to the amount of allowed retrainings.", "Therefore, they are not an appropriate benchmark in a context where true labels are scarce.", "We still include these strategies since they serve as an upper-bound performance benchmark.", "This allows us to estimate the performance loss when confronted with a situation where full label availability is infeasible.", "The Kolmogorov-Smirnov test with unlimited retrainings (KSWIN(unl.))", "benchmark is similar to the previous KSWIN strategy but without restricting the number of retrainings.", "Therefore, all detected drifts trigger a retraining of the prediction model.", "The last benchmark is the ADWIN change detection algorithm applied to the prediction error rate.", "This strategy already requires all true labels for the computation of the error rate and therefore for drift detection.", "Note that all other strategies manage the drift detection without any true labels and then only require labels for retraining.", "For this method, we use the scikit-multiflow [28] implementation with default parameter settings.", "For evaluation, we consider two synthetic data sets (Friedman and Mixed) and ten real-world data sets.", "The Friedman regression data set [14] consists of ten features that are each drawn from a uniform distribution from the interval $[0,1]$ .", "The first five features are relevant for the prediction task, the remaining five are noise.", "The Mixed classification data set is inspired by Gama et al.", "[17] and contains six features where two features are Boolean and the other four features are drawn from a discrete distribution.", "Two of the features are noise which do not influence the classification function.", "By modifying the distribution of some features, we can either induce real or virtual concept drifts (see Section REF ) in both the Friedman and the Mixed data set.", "Furthermore, ten real-world data sets—eight classification and two regression tasks—are used for the evaluation of the UDD method.", "The characteristics of the real-world data sets regarding sample size, number of features, and targets can be found in the appendix in Table REF .", "The Air Quality data set [8] contains measurements from five metal oxide chemical sensors, a temperature, and a humidity sensor.", "The learning task is to predict the benzene concentration, which is a proxy for air pollution.", "The real benzene concentration is measured with a specialized, expensive sensor.", "Concept drift is present due to seasonal weather changes.", "The Bike Sharing data set [13] provides hourly rental data for a bike sharing system between 2011 and 2012 in Washington, D.C., with the objective to predict the hourly demand for bike rentals.", "Concept drift is again assumed to be present due to seasonal weather changes.", "All following classification data sets are taken from the USP Data Stream Repository [39]: The various Insects data sets were gathered by controlled experiments on the use of optical sensors to classify six types of different flying insects (three species with two sexes, respectively).", "Concept drift is artificially induced by changes in temperature.", "The Abrupt data set contains five sudden changes in temperature, whereas in the Incremental (Inc) data set, temperature is slowly increased over time.", "The Incremental Abrupt (IncAbr) 
data set has three cycles of incremental changes with additional abrupt drifts included as well.", "In the Incremental Reoccurring (IncReo) data set, the temperature increases incrementally within several cycles.", "The KDDCUP99 data set contains TCP connection records from a local area network.", "Features comprise information such as connection duration, protocol type, and transmitted bytes.", "The learning task is to recognize whether the connection is normal or relates to one of 22 different types of attacks.", "The Gas Sensor data set was collected over 36 months at a gas delivery system.", "For each recording, one of six gases is diluted in synthetic dry air inside a sensing chamber, and the objective is to identify the respective gas.", "Both sensor drift (due to aging) and concept drift (due to external alterations) are included in the data.", "The Electricity data set was gathered at the Australian New South Wales Electricity Market.", "Each record contains information about recent electricity consumption and market prices.", "The learning task is to predict whether the market price will increase or decrease compared to the last 24 hours.", "The Rialto Bridge Timelapse data set contains images taken by a webcam close to the Rialto Bridge in Venice, Italy during May and June 2016.", "The objective is to correctly classify nearby buildings with concept drift occurring due to changing weather and lighting conditions." ], [ "Performance Metrics", "Evaluating concept drift detection on real-world data sets is a challenging endeavor as most real-world data sets do not have specified drift points.", "Specifically, for most real-world data, it is intractable to measure the accuracy of drift detection itself.", "Therefore, we perform two different analyses regarding the behaviour of UDD in this work.", "First, we apply UDD on two synthetic data sets to specifically evaluate its drift detection capabilities.", "Second, we perform extensive experiments to investigate its performance on ten real-world data sets.", "For synthetic data sets, the real drift points are known, which allows to compute metrics regarding the drift detection capabilities of a drift detector [5].", "In this work, we compute the Mean Time to Detection (MTD), the False Alarm Count (FAC), and the missed detection count (MDC).", "In contrast, an appropriate evaluation for real-world data sets is more difficult.", "However, one can assume that a drift is present when the prediction performance of a static model decreases over time.", "Since the real drift points are unknown, we evaluate the different strategies based on their prediction performance, as it is common in the concept drift literature [11].", "For regression tasks, we apply the Root Mean Squared Error (RMSE), and we use the Matthews Correlation Coefficient (MCC) for classification tasks.", "MCC is a popular metric for classification settings as it can also handle data sets with class imbalance [7]." 
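The evaluation metrics can be computed as in the following sketch, assuming scikit-learn for RMSE and MCC; the attribution rule used for MTD/FAC/MDC (a detection counts for a true drift if it falls within a tolerance window after it) and the tolerance value are our assumptions, since the exact matching procedure is not spelled out in the text.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, matthews_corrcoef

def rmse(y_true, y_pred):
    return float(np.sqrt(mean_squared_error(y_true, y_pred)))

def mcc(y_true, y_pred):
    return float(matthews_corrcoef(y_true, y_pred))

def drift_detection_metrics(true_drifts, detected_drifts, tolerance=500):
    """Mean time to detection (MTD), false alarm count (FAC) and missed detection count (MDC)."""
    delays, matched = [], set()
    for t in true_drifts:
        hits = [d for d in detected_drifts if t <= d <= t + tolerance]
        if hits:
            delays.append(min(hits) - t)       # delay of the earliest detection for this drift
            matched.update(hits)
    mtd = float(np.mean(delays)) if delays else float("nan")
    fac = sum(1 for d in detected_drifts if d not in matched)
    mdc = len(true_drifts) - len(delays)
    return mtd, fac, mdc
```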
], [ "Analysis on Synthetic Data Sets", "To test the capabilities of UDD, we analyze its behaviour when applied on two synthetic data sets (Friedman and Mixed).", "Both data sets contain virtual as well as real concept drifts.", "Virtual drifts refer to changes in the input data with no effect on the resulting label.", "Hence, UDD should not raise an alarm for these drifts as a retraining of the ML model in this case is unnecessary.", "Recall that this kind of analysis is only feasible on synthetic data sets, as we do not have any knowledge regarding the type of concept drift as well as its timing on real-world data sets.", "On the synthetic data sets, we test UDD and KSWIN(unl.)", "as they both do not require true labels for drift detection.", "The parameters of both approaches are optimized based on a validation set which includes one drift (see Section REF ).", "Figure REF shows the trajectory of the predictive uncertainty over the course of the Friedman data set.", "The uncertainty changes significantly each time a real concept drift occurs.", "Accordingly, this is also detected by UDD.", "As expected, the two virtual drifts (marked by orange vertical lines in the figure) do not trigger a drift detection.", "In contrast, the input data-based detection (KSWIN) detects also these virtual drifts.", "Furthermore, note the overall large number (20) of detected drifts by KSWIN despite a parameter optimization.", "This illustrates KSWIN's problem of high reactivity leading to several false-positive drift detections.", "Figure REF in the appendix depicts this analysis for the Mixed data set.", "Figure: Behaviour of UDD and KSWIN on synthetic Friedman data set.Table: Evaluation on synthetic data sets.Since the real drift points for the synthetic data sets are known, we compute the mean time to detection (MTD), the false alarm count (FAC), and the missed detection count (MDC) for both strategies in Table REF .", "UDD correctly identifies all real concept drifts in both data sets.", "Furthermore, no false alarms are raised.", "However, KSWIN achieves lower MTD values compared to UDD in both data sets, which means that KSWIN recognizes concept drifts faster.", "This can likely be explained by the high sensitivity of KSWIN regarding changes.", "However, this sensitivity also leads to large numbers of false alarms (17 and 11, respectively), as depicted in Table REF .", "Such a behaviour is especially detrimental in scenarios where the acquisition of true labels is expensive.", "Each time a false alarm is raised, new true labels must be acquired at a high cost—even though a retraining is not required since no real concept drift has occurred." 
], [ "Experimental Results", "Both UDD and KSWIN require as input a suitable value for $\\alpha $ , which determines their sensitivity regarding concept drift detection.", "Since the data sets included in this experiment are fundamentally different from each other (e.g., different number of class labels), individual values of $\\alpha $ are required for each data set.", "As described in Section REF , we determine the respective value for both strategies by performing a test on a validation data set.", "The parameter values for UDD for each data set are depicted in Table REF in the appendix.", "Table: RMSE (the lower the better) on regression benchmark data sets.", "Number of retrainings in brackets (the lower the less computationally expensive).", "No Retraining depicts the lower-bound benchmark, while KSWIN(unl.)", "and ADWIN represent the upper-bound performance benchmark.Table: MCC (the higher the better) on classification benchmark data sets.", "Number of retrainings in brackets (the lower the less computationally expensive).", "No Retraining depicts the lower-bound benchmark, while KSWIN(unl.)", "and ADWIN represent the upper-bound performance benchmark.A summary of the experimental results on all data sets is provided in Table REF for regression data sets (RMSE) and in Table REF for classification (MCC).", "Furthermore, we provide an additional view on the results by depicting the SMAPE metric in Table REF and the F1-score in Table REF in the appendix.", "For the evaluation, we primarily focus on the first five columns of the table which as a group can be characterized by only requiring a limited amount of true labels.", "This is also illustrated by the values in parentheses which describe how often the corresponding ML models are retrained.", "As explained in Section REF , KSWIN(unl.)", "and ADWIN serve as an upper-bound benchmark due to their requirement of full label availability.", "The best strategy with limited label availability per data set is marked in bold.", "For both regression data sets, UDD outperforms the other four strategies.", "Regarding the classification tasks, UDD achieves the best prediction performance on seven out of eight data sets and always outperforms the strategies No Retraining, Uninformed and Equal Distribution.", "Solely for the Rialto data set, the strategy based on KSWIN performs equally well, which might be explained with rather significant changes in individual input features that can be detected well with KSWIN.", "As expected, the No Retraining strategy usually performs worst.", "Interestingly, the Uninformed already achieves good prediction performance and sometimes even outperforms the KSWIN strategy, especially for the regression tasks.", "By design, the number of retrainings is equal for all four strategies—Uninformed, Equal Distribution, KSWIN, and UDD.", "The right two columns in both Table REF and Table REF show the prediction performance of the KSWIN(unl.)", "and ADWIN strategy.", "As expected, these strategies usually outperform all other strategies but also require significantly more true labels for retraining.", "For the KDDCUP99 data set, the difference in amounts of retrainings for UDD compared to KSWIN(unl.)", "is most striking: While UDD requires 16 retraining, KSWIN(unl.)", "performs 345 retrainings in total.", "Yet, the difference in predictive performance is rather small.", "Also, recall that the ADWIN strategy requires all true labels for drift detection itself.", "For the Insects Abrupt, Insects IncAbr, and the Gas Sensor data 
set, the UDD strategy performs even better than ADWIN.", "Figure: Relationship between deciles of uncertainty and prediction performance.We also investigate the average prediction performance for UDD based on the level of uncertainty in Figure REF .", "Per data set, we sort instances in deciles, from instances with lowest uncertainty (decile 1) up to instances with highest uncertainty (decile 10) based on entropy $H$ or variance $\\widehat{\\sigma }^{2}$ , respectively.", "Subsequently, we compute the average prediction performance per decile.", "As expected, the RMSE for regression data sets increases with rising uncertainty, as shown in the left plot (a).", "The right plot (b) shows the classification data sets—decile 1 shows the highest mean accuracy and decile 10 the lowest.The KDDCUP99 data set is not included in Figure REF and REF because deciles cannot be computed due to the highly skewed distribution of entropy and confidence values.", "Thus, Figure REF confirms our assumption that uncertainty represents a proxy for the error metric.", "Figure: Relationship between deciles of confidence and accuracy.For classification tasks, we additionally analyze the relationship between confidence and accuracy (similar to [24]).", "We define confidence $c := \\max _{k} \\widehat{p}(y = k \\vert x)$ as the highest predicted probability for one class of a specific data instance.", "All instances of a data set are sorted into confidence deciles, where decile 10 contains all instances with highest confidence.", "Figure REF depicts the relationship.", "As expected, the mean accuracy score increases with larger confidence.", "In this work, we have introduced the Uncertainty Drift Detection (UDD) algorithm for concept drift detection.", "This algorithm does not depend on true labels for detection of concept drift, and—only in case of a detected drift—requires access to a limited set of true labels for retraining of the prediction model.", "Therefore, this algorithm is especially suitable for drift handling in deployed ML settings within real-world environments where the acquisition of true labels is expensive (e.g., quality control).", "Standard drift detection algorithms such as DDM and ADWIN are not applicable in such settings because they require access to the entire set of true labels.", "Our approach is based on the uncertainties derived from a deep neural network in combination with Monte Carlo Dropout.", "Drifts are detected by applying the ADWIN change detector on the stream of uncertainty values over time.", "In contrast to most existing drift detection algorithms, our approach is able to detect drift in both regression and classification settings.", "We have performed an extensive evaluation on two synthetic as well as ten real-world concept drift data sets to demonstrate the effectiveness of UDD for concept drift handling in comparison to other state-of-the-art strategies.", "In future work, we aim to improve the UDD method by including active learning methods.", "Including only those instances with high uncertainty in the retraining set rather than all recent instances could further improve the prediction performance." 
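The decile analysis shown in the figures above can be reproduced with a few lines of pandas, as in the following sketch; dropping duplicate bin edges also covers highly skewed uncertainty distributions such as the one noted for KDDCUP99, where clean deciles cannot be formed.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import mean_squared_error

def performance_per_uncertainty_decile(uncertainty, y_true, y_pred, task="classification"):
    """Average prediction performance per uncertainty decile (decile 1 = most certain instances)."""
    df = pd.DataFrame({"u": uncertainty, "y": y_true, "p": y_pred})
    df["decile"] = pd.qcut(df["u"], 10, labels=False, duplicates="drop") + 1
    if task == "classification":
        return df.groupby("decile").apply(lambda g: (g["y"] == g["p"]).mean())                  # accuracy
    return df.groupby("decile").apply(lambda g: np.sqrt(mean_squared_error(g["y"], g["p"])))    # RMSE
```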
], [ "Supplementary Material", "Table REF explains details regarding the neural network architecture for each data set.", "The column Network Structure indicates how many neurons per hidden layer are applied.", "Column Dropout Rate contains the dropout rate for each dropout layer.", "The last column # Forward Passes explains how many forward passes for MCD are computed.", "Table: Neural network architecture for each data set.Table: Characteristics of used data sets.Figure: Behaviour of UDD and KSWIN on synthetic Mixed data set.Table: Different values of α\\alpha for UDD.Table: SMAPE (the lower the better) on regression benchmark data sets.", "Number of retrainings in brackets (the lower the less computationally expensive).", "No Retraining depicts the lower-bound benchmark, while KSWIN(unl.)", "and ADWIN represent the upper-bound performance benchmark.Table: F1-score * ^* (the higher the better) on classification benchmark data sets.", "Number of retrainings in brackets (the lower the less computationally expensive).", "No Retraining depicts the lower-bound benchmark, while KSWIN(unl.)", "and ADWIN represent the upper-bound performance benchmark." ] ]
2107.01873
[ [ "Probabilistic photo-z machine learning models for X-ray sky surveys" ], [ "Abstract Accurate photo-z measurements are important to construct a large-scale structure map of X-ray Universe in the ongoing SRG/eROSITA All-Sky Survey.", "We present machine learning Random Forest-based models for probabilistic photo-z predictions based on information from 4 large photometric surveys (SDSS, Pan-STARRS, DESI Legacy Imaging Survey, and WISE).", "Our models are trained on the large sample of $\\approx$580000 quasars and galaxies selected from the SDSS DR14 spectral catalog and take into account Galactic extinction and uncertainties in photometric measurements for target objects.", "On the Stripe82X test sample we obtained photo-z accuracy for X-ray sources: $NMAD=0.034$ (normalized median absolute deviation) and $n_{>0.15}=0.088$ (catastrophic outliers fraction), which is almost $\\sim2$ times better than best photo-z results available in the literature." ], [ "Introduction", "On July 13, 2019 the SRG X-ray observatory was launched from the Baikonur cosmodrome.", "On Dec. 8th, 2019 SRG started its first All-Sky Survey, which will consist of 8 repeated six month long scans of the entire sky.", "eROSITA telescope [14] onboard SRG operates in the soft X-ray band (0.3–8 keV) and will detect $\\sim 3$ millions X-ray AGNs at the end of survey.", "In order to construct a large-scale structure map of X-ray Universe with eROSITA, accurate measurements of cosmological redshifts for extragalactic X-ray sources (mostly quasars) are needed.", "Redshift measurement methods [16] can be divided into spectroscopic (spec-z, $z_{sp}$ ) and photometric (photo-z, $\\hat{z}_{ph}$ ).", "Spec-z's are time consuming task for faint optical objects ($r\\gtrsim 22^{mag}$ ).", "On the other hand, photo-z measurements can be based on data from modern large photometric sky surveys, it is much cheaper in observational resources than spec-z but also less accurate.", "In this work we present machine learning models for X-ray sources probabilistic photo-z predictions, based on photometric data from 4 modern sky surveys (SDSS, Pan-STARRS1, DESI Legacy Imaging Survey, and WISE)." 
], [ "Data", "We use photometric data from SDSS DR14 [1], Pan-STARRS1 DR2 [5], DESI LIS DR8 [6] and WISE [19] sky surveys (WISE forced photometry is taken from DESI LIS).", "Firstly, for all used photometric surveys we calculated hyperbolic magnitudes ($mag$ ) from object fluxes ($flux$ ) and uncertainties ($\\sigma _{flux}$ ): $mag = \\Bigg [asinh\\Bigg (\\frac{flux}{2 \\times \\sigma _{flux}}\\Bigg ) + \\log (\\sigma _{flux})\\Bigg ] \\times \\Bigg (\\frac{-2.5}{\\log 10}\\Bigg ) ~.$ We derived the following set of photometric features: 5 PSF and 5 model magnitudes ($u$ , $g$ , $r$ , $i$ , $z$ ) from SDSS and related 15 colors (i.e.", "$g_{psf}-g_{model}$ , $g_{psf}-r_{psf}$ , etc; colors like $g_{model}-r_{model}$ were not used), 5 PSF and 5 Kron magnitudes ($g$ , $r$ , $i$ , $z$ , $y$ ) from Pan-STARRS and related 15 colors (like with SDSS), 3 model magnitudes ($g$ , $r$ , $z$ ) from DESI LIS and related 6 colors, 2 WISE magnitudes ($w1$ , $w2$ ) and 1 related color ($w1-w2$ ), 10 colors between WISE and SDSS magnitudes, 10 colors between WISE and Pan-STARRS Kron magnitudes, and 6 colors between WISE and DESI LIS $g$ , $r$ , $z$ magnitudes, 3 colors between DESI LIS and SDSS model mags in $g$ , $r$ , $z$ bands.", "Magnitudes and colors were corrected for Galactic extinction by using $E(B-V)$ estimates from DESI LIS and $A/E(B-V)$ coefficients from [17] ($R_V=3.1$ was adopted).", "Our training dataset contains 449751 quasars from SDSS DR14q [13] and 136428 galaxies from SDSS DR14 (a subsample of galaxy catalog filtered to approximate the distribution of spectroscopic subclasses for optical counterparts of X-ray sources).", "To increase the number of distant ($z > 5$ ) quasars in the dataset, we added all sources from VHzQs sample [15]." ], [ "Method", "Our aim was to estimate conditional redshift distribution $p(z|x)$ for each target object with photometric features $x$ .", "We use Random Forest (RF) model, [3], [11], which is considered by many authors among the most accurate ML algorithms for photo-z measurements of galaxies [18], [8] and X-ray quasars [12].", "We used RF ensemble predictions in combination with gaussian Kernel Density Estimation (gKDE), to obtain $p(z|x)$ .", "RF+gKDE model allows one to calculate photo-z point estimate $\\hat{z}_{ph} = \\arg \\max _z p(z|x)$ , confidence intervals, and $zConf = \\int _{\\delta z_{norm} < 0.06} p(z|x)~dz$ .", "At the prediction stage we take into account uncertainties in photometric fluxes of the target object, by perturbing fluxes (according to given uncertainties) for each regression tree in the forest." 
], [ "Results", "We trained various photo-z models that use features from different combinations of photometric surveys: SDSS + WISE, Pan-STARRS + WISE, Pan-STARRS + DESI LIS + WISE and SDSS + Pan-STARRS + DESI LIS + WISE.", "We use the Stripe82X sample of X-ray sources with known spectroscopic redshift [2] in order to evaluate accuracy of photo-z models point predictions.", "Standard metrics for point predictions were used: $NMAD=1.4826\\times median(|\\delta z_{norm}|)$ ) — normalized median absolute deviation, $n_{>0.15}$ — fraction of catastrophic outliers with $\\delta z_{norm}>0.15$ , where $\\delta z_{norm} = \\frac{\\hat{z}_{ph} - z_{sp}}{1+z_{sp}}$ .", "Comparison of photo-z models on the Stripe82X sample are shown in Table REF .", "As one can see, our photo-z models demonstrate better accuracy than State-Of-The-Art methods (template photo-z models [2] and neural network photo-z model [4]) on the same test sample.", "Our most accurate photo-z model (based on photometric data from SDSS, PanSTARRS, DESI LIS, WISE surveys) outperforms [2] and [4] results by a factor of $\\approx 2$ .", "Table: Comparison of photo-z models on Stripe 82X test sample of X-ray sources with known spectroscopic redshift." ], [ "Conclusion", "We study the problem of measuring photometric redshifts (photo-z) of extragalactic X-ray sources using machine learning techniques.", "The proposed photo-z models based on Random Forests show accuracy (up to 2 times) better than current SOTA results in the literature (on Stripe82X field).", "The main accuracy improvement comes from using a large training sample ($\\sim $ 600k objects) with features from modern wide photometric surveys.", "First optical spectroscopic observations of eRosita sources show that the proposed photo-z models are effective in the ongoing search for distant X-ray quasars (see e.g.", "[10], [9], [7]).", "The presented photo-z models are integrated into the SRGz system designed to construct a three-dimensional map of X-ray sources on the Eastern Galactic Hemisphere of the SRG/eRosita All-Sky survey.", "The SRGz system is developed in the science working group of RU eROSITA consortium on X-ray source detection, identification, and eROSITA source catalog in the High Energy Astrophysics Department at Space Research Institute of the Russian Academy of Sciences." ] ]
2107.01891
[ [ "The refined local lifting problem for cyclic covers of order four" ], [ "Abstract Suppose $\\phi$ is a $\\mathbb{Z}/4$-cover of a curve over an algebraically closed field $k$ of characteristic $2$, and $\\Phi_1$ is a lift of $\\phi$'s $\\mathbb{Z}/2$-sub-cover to a complete discrete valuation ring $R$ in characteristic zero.", "We show that there exist a finite extension $R'$ of $R$, which is determined by $\\Phi_1$, and a lift $\\Phi$ of $\\phi$ to $R'$ whose $\\mathbb{Z}/2$-sub-cover isomorphic to $\\Phi_1 \\otimes_R R'$.", "That result gives a non-trivial family of cyclic covers where Sa\\\"idi's refined lifting conjecture holds.", "In addition, the manuscript exhibits some phenomena that may shed some light on the mysterious moduli space of wildly ramified Galois covers." ], [ "Introduction", "As the name suggests, the goal of a lifting problem is to construct some objects in characteristic 0 that “lifts” the given ones in characteristic $p>0$ .", "We are interested in the lifting problem for Galois covers of curves, which can be formally stated as below.", "Question 0.1 Let $k$ be an algebraically closed field of characteristic $p>0$ .", "Given $\\Gamma $ a finite group and $\\overline{f}: \\overline{Y} \\rightarrow \\overline{X}$ is a $\\Gamma $ -cover of smooth projective, connected curves.", "Is there a $\\Gamma $ -cover $f: Y \\rightarrow X$ of smooth curves over a discrete valuation ring $R$ in characteristic 0 whose special fiber is isomorphic to $\\overline{f}: \\overline{Y} \\rightarrow \\overline{X}$ ?", "The answer is NOT always a yes.", "Let's see a simple example.", "The number of automorphisms of a curve of genus $g \\ge 2$ in characteristic zero is at most $84(g-1)$ (see, e.g., [7] IV, Ex.", "2.5).", "In characteristic $p$ , Roquette [16] gave an example of the smooth projective cover $\\overline{f}: \\overline{Y} \\rightarrow \\mathbb {P}^1_k$ defined by the affine equation $y^2=x^p-x$ .", "This cover has genus $(p-1)/2$ and the group of deck transformations $\\Gamma $ has cardinality $2p(p^2-1)$ , which is larger than $84(g-1)$ for $p \\ge 5$ .", "Thus, $\\overline{f}$ cannot lift to characteristic zero.", "For another counter-example, see [13].", "However, Oort conjectured (which first appeared in 1995 in a list of questions and conjecture published in [10]) that an arbitrary $\\Gamma $ -covers should lift when $\\Gamma $ is cyclic.", "Recently, Andrew Obus, Stefan Wewers, and Florian Pop proved the conjecture in [13] and [14].", "More precisely, Obus and Wewers proved a general result that a cyclic $\\Gamma $ -cover $\\overline{f}: \\overline{Y} \\rightarrow \\mathbb {P}^1_k$ lifts if it has no essential ramification or essential jump [14].", "Pop completed the proof by showing that every cover cannot be lifted by Obus and Wewers admits an equicharacteristic deformation whose generic fibers are ones with no essential ramification and thus also lift to characteristic zero.", "Moreover, Obus suggested the author that the techniques used in [13] may have the potential to prove a more general form of the Oort Conjecture, which we call the refined Oort conjecture, as follows.", "Conjecture 0.2 [17] Suppose $\\phi : Z \\xrightarrow{} X$ is a cyclic $\\Gamma $ -cover of curves over $k$ , and $\\phi _1: Y \\xrightarrow{} X$ is its Galois sub-cover.", "Suppose, moreover, that $\\Phi _1: \\mathcal {Y}_R \\xrightarrow{} \\mathcal {X}_R$ is a lift of $\\phi _1$ to a finite extension $R/W(k)$ , hence in charactersitic 0.", "Then there exists a finite extension $R^{\\prime 
}/R$ , and a lift $\\Phi : \\mathcal {Z}_{R^{\\prime }} \\xrightarrow{} \\mathcal {X}_{R^{\\prime }}$ of $\\phi $ over $R^{\\prime }$ that contains $\\Phi _1 \\otimes _R R^{\\prime }: \\mathcal {Y}_{R^{\\prime }} \\xrightarrow{} \\mathcal {X}_{R^{\\prime }}$ as a sub-cover.", "Thanks to a local-global principle (see, e.g., [5]), it suffices to prove the following local version of the conjecture (see [3]).", "Conjecture 0.3 Let $k[[y_2]]/k[[x]]$ be a cyclic $\\Gamma $ -extension.", "Suppose we are given a discrete valuation ring $R$ in characteristic zero and a lift $R[[Y_2]]/R[[X]]$ of a Galois sub-extension $k[[y_1]]/k[[x]]$ .", "Does there exist a finite extension $R^{\\prime }$ of $R$ in characteristic zero with residue field $k$ and a $\\Gamma $ -Galois extension $R^{\\prime }[[Y_2]]/ R^{\\prime }[[X]]$ that lifts $k[[y_2]]/k[[x]]$ and contains $R^{\\prime }[[Y_2]]/R^{\\prime }[[X]]$ as a sub-extension?", "That is, whether one can always fill in the following commutative diagram $\\begin{tikzcd}k[[y_2]] [d,\"\\phi _2\"] [dd, bend right=90, \"\\phi \"] [r, dotted]& R[[Y_2]] [d, dotted,\"\\Phi _2\" left] [dd, dotted, bend left, \"\\Phi \"] \\\\k[[y_1]] {d}{\\phi _1} {r}& R[[Y_1]] [d, \"\\Phi _1\" left] \\\\k[[x]] [black]{d}{} {r}[blue]{} & R[[X]] [black]{d}{}\\\\\\operatorname{Spec}k [black]{r}{} & \\operatorname{Spec}R\\end{tikzcd}$ where $R$ is a finite extension of $W(k)$ , each column is a tower of cyclic extensions, and the group actions on each row are “compatible” in the obvious sense.", "We call Question REF the refined local lifting problem.", "Furthermore, just as in the standard lifting problem, we may reduce our study to the case when the cyclic group $\\Gamma $ of Conjecture REF is $\\mathbb {Z}/p^n$ .", "Proposition 0.4 Suppose $\\Gamma =\\mathbb {Z}/p^n \\times \\mathbb {Z}/m$ , where $m$ is prime to $p$ .", "Then Conjecture REF holds for $\\Gamma $ if and only if it holds for $\\mathbb {Z}/p^n$ .", "It is an easy generalization of [12].", "See also [3] for the proof of the equal characteristic case.", "In this paper, we show that the conjecture holds for the most simple but nontrivial case in characteristic 2.", "More precisely, our main result is the following.", "Theorem 0.5 Conjecture REF holds for $\\Gamma =\\mathbb {Z}/4 \\times \\mathbb {Z}/m$ , where $m$ is odd, in characteristic 2.", "Due to Proposition REF , we may prove the theorem for the case $m=1$ .", "Suppose we are given a $\\mathbb {Z}/4$ -cover $\\phi $ and a lift $\\Phi _1$ of its $\\mathbb {Z}/2$ -sub-cover $\\phi _1$ as in diagram (REF ).", "The proof processes in four steps as follows." ], [ "Step 1", "We start by explicitly constructing an equal characteristic deformation of $\\phi $ over the complete discrete valuation ring in equal characteristic $S:=k[[t]]$ , as described in Figure REF , Figure: Step 1such that $\\psi $ 's generic fiber has no essential jump.", "We further assume that $\\psi $ has $m$ branch points $b_1, \\ldots , b_m$ ." 
], [ "Step 2", "Convention: We usually denote by $\\tau _{\\eta (2)}$ and $\\tau _{\\operatorname{s}(2)}$ (resp.", "$\\tau _{\\eta (t)}$ and $\\tau _{\\operatorname{s}(t)}$ ) the generic fiber and the special of a cover $\\tau $ over a 2-adic ring (resp.", "over a $t$ -adic ring).", "Let $V$ be a finite of extension of the 2-adic ring $W\\big ( \\overline{k((t))}\\big )$ .", "We then construct a “two layers deformation” $\\Psi _1$ of $\\phi _{1}$ with respect to $\\Phi _1$ that makes the diagram in Figure REF commutes with compatible $\\mathbb {Z}/2$ -actions on the top (Proposition REF ).", "In other words, $\\Psi _1$ is a lift of the generic fiber $\\psi _{1,\\eta (t)}$ over $V$ that is also defined over (an extension of) $W(k[[t]])$ .", "It hence makes sense to talk about its reduction modulo $t$ aka its $t$ -fiber.", "In addition, the $t$ -fiber, which we denote by $\\Psi _{1, \\operatorname{s}(t)}$ , is isomorphic to $\\Phi _{1,\\eta (2)}$ .", "Figure: Step 2" ], [ "Step 3", "We first explicitly partition $\\psi $ into a product of $\\psi ^1, \\ldots , \\psi ^m$ , where each $\\psi ^i$ is branched only at $b_i$ .", "For each $i$ , let $\\psi _1^i$ be its $\\mathbb {Z}/2$ -sub-cover.", "Then $\\psi _1=\\prod _{i=1}^m \\psi _1^i$ .", "In addition, we also can factor the $\\psi _1$ 's lift in the previous step as $\\Psi _1=\\prod _{i=1}^m \\Psi _1^i$ , where $\\Psi _1^i$ is precisely the lift of $\\psi _{1, \\eta (t)}^i$ to characteristic zero.", "As $\\psi ^i$ has no essential second (upper) jump at its only branch point by construction (in the first step), one may extends the deformation $\\Psi ^i_1$ to a $\\mathbb {Z}/4$ -cover $\\Psi ^i$ that lifts $\\psi ^i$ explicitly, using the technique from [6].", "The product of these lifts, after a slight modification (Proposition REF ), forms a lift $\\Psi $ of $\\psi $ that extends $\\Psi _1$ and has good good reduction modulo $t$ .", "These processes make use extensively the notion of degeneration type, which allows us to control the smoothness of the fibers by slightly moving the branch points." ], [ "Step 4", "Finally, we show that the diagram in Figure REF is commutative with compatible $\\mathbb {Z}/4$ -actions.", "In other words, the $t$ -fiber $R[[Y_2]]$ of $V[[Y_2]]$ , together with the induced $\\mathbb {Z}/4$ -action, of the two layers $\\mathbb {Z}/4$ -deformation constructed in the previous step (§REF ) is exactly what we are looking for, a lift of $\\phi $ to characteristic zero extending $\\Phi _1$ ." ], [ "Acknowledgements", "The author thanks his Ph.D. advisor, Andrew Obus, for helpful discussions and for carefully proofreading an earlier version of this paper.", "He thanks Nathan Kaplan, Joe Krammer-Miller, and the University of California Irvine's Department of Mathematics for their hospitality during his stay in Orange County, California, where this project was initiated, amid the Covid-19 pandemic.", "The author is grateful to the Vietnam Institute of Mathematics, where he did most of the writing, for the excellent working conditions.", "This research is funded by the Simons Foundation Grant Targeted for Institute of Mathematics, Vietnam Academy of Science and Technology, and NSF-DMS grants 1602054 and 1900396." 
], [ "Step 1", "Suppose $\\phi $ has branching datum $[\\iota _1, \\iota _2]$ , that is to say, the upper ramification breaks [18] at the only branch point are $(\\iota _1-1,\\iota _2-1)$ .", "Recall that the generic fiber of a deformation $\\psi $ of $\\phi $ can be represented by a $r \\times 2$ matrix as follows $\\begin{bmatrix}\\iota _{1,1} & \\iota _{1,2} \\\\\\dots &\\dots \\\\\\iota _{m,1} & \\iota _{m,2} \\\\\\end{bmatrix},$ where, $\\iota _{i,1}>1$ , $\\iota _{i,2}-p\\iota _{i,1}>1-p$ , and $\\iota _i=\\sum _{j=1}^2 \\iota _{j,i}$ for $i=1,2$ [3].", "That means the generic fiber of $\\psi $ branches at $r$ points, says $b_1, \\ldots , b_r$ , and the branching datum at $b_i$ is $[\\iota _{i,1},\\iota _{i,2}]$ .", "One may, after a change of variables, assume that $b_1=0$ .", "Note that the conditions are necessary for $\\psi $ to be a flat deformation by the different criterion [12].", "One may associate with $\\phi $ a unique HKG-cover (see [8] and [9]) of the projective line $Z_2 \\xrightarrow{} W \\cong \\operatorname{Proj}k[x]$ totally ramified above $x=0$ such that the formal completion above 0 yields $k[[y_2]]/k[[y_1]]$ .", "Pop proves the following result, which says that every cyclic cover is in the same flat family with a cover $\\psi $ that has no essential ramification.", "That means, in the matrix of (REF ), $\\iota _{i,1} \\le p$ and $\\iota _{i,2}-p\\iota _{i,1}\\le 1$ .", "See also [3] for a brief explanation.", "In our case, this equates to $\\iota _{i,1} = 2$ or 0, and $\\iota _{i,2}=3,4$ (when $\\iota _{i,1}=2$ ) or $\\iota _{i,2}=2$ (when $\\iota _{i,1}=0$ ).", "Proposition 1.1 c.f.", "[14] Suppose we are given a $\\mathbb {Z}/p^n$ -local-extension $k((y))/k((x))$ .", "Let ${A}=k[[t,x]] \\supseteq k[[x]]$ , and let ${S}=\\operatorname{Frac}{A}$ .", "There exists an $\\mathbb {Z}/p^n$ -extension ${K}/{S}$ , with $k[[y_2]] \\subseteq {K}$ , having the following properties.", "If ${C}$ is the integral closure of ${A}$ in ${K}$ , we have ${C} \\cong k[[t,y_2]]$ .", "In particular, $({C}/(t))/({A}/(t))$ is $\\mathbb {Z}/p^n$ isomorphic to the original extension.", "Let ${A}_{\\eta }={A}[t^{-1}]$ , and let ${C}_{\\eta }={C}[t^{-1}]$ .", "Then ${C}_{\\eta }/{A}_{\\eta }$ is an $\\mathbb {Z}/p^n$ -extension that has no essential ramification jumps.", "Remark 1.2 The extension ${K}/{S}$ in the above proposition is explicitly constructed from the “standard form” length-$n$ Witt vector that represents $k((y))/k((x))$ ." 
], [ "Step 2", "We first give some descriptions for the lift $\\Phi _1$ of $\\phi _1$ .", "Set $\\kappa :=\\operatorname{Frac}k[[x]]$ .", "Proposition 2.1 Suppose we are given $\\phi _1 \\in H^1(\\kappa , \\mathbb {Z}/2)$ , which corresponds to an Artin-Schrier cover defined by $ y^2-y=\\frac{1}{x^m}, $ where $m=2q-1$ (hence odd), and $\\Phi _1$ is a lift of $\\phi _1$ to a ring $R$ in characteristic zero and has 0 as a branch point.", "One may think of $\\Phi _1$ as a character in $H^1(\\mathbb {K},\\mathbb {Z}/2)$ , where $\\mathbb {K}:=\\operatorname{Frac}R[[X]]$ .", "Then $\\Phi _1$ can be represented by a Kummer class of the following form $ Z^2=1+4 \\frac{G}{X\\prod _{i=2}^q(X-u_i)^2},$ where $u_i \\in R$ , $\\nu _2(u_i)>0$ , $G \\in R[[X]]$ , and $\\overline{G}=1$ modulo 2.", "In particular, we have $X\\prod _{i=2}^q(X-u_i)^2+4G=\\prod _{j=1}^m(X-B_i),$ where $B_1, \\ldots , B_m \\in R$ are non-zero distinct branch points of $\\Phi _1$ .", "By Kummer theory, we may assume $\\Phi _1$ is defined by $ W^2= \\prod _{j=1}^m\\bigg (1-\\frac{B_i}{X}\\bigg ) = 1 + \\sum _{i=1}^m \\frac{b_i}{X^i}:=F \\in R[X^{-1}].", "$ One sees that $\\nu _2(F)=0$ where $\\nu _2$ is the Gauss valuation on $\\mathbb {K}$ with $\\nu _2(X)=0$ .", "Applying [13] with $N=q-1$ , one finds $H=1+\\sum _{j=1}^{q-1}\\frac{c_j}{X^j} \\in R[X^{-1}]$ such that $F-H^2=\\sum _{l=1}^m \\frac{a_l}{X^l} \\in R[X^{-1}] $ verifying $a_{2l}=0$ for all $l \\le q-1$ .", "Thus, we have $\\nu _2(F-H^2) \\ge 0$ and $[F-H^2]_0$ , which is the image of $(F-H^2)2^{-\\nu _2(F-H^2)}$ in $\\kappa $ , is not a square.", "Therefore, as $\\Phi _1$ has a good reduction isomorphic to $\\phi _1$ by the hypothesis, [13] asserts that $\\nu _2(F-H^2)=2$ and $[F-H]_0=1/x^m$ .", "In addition, a simple calculation shows that $ \\frac{F}{H^2}=1+4\\frac{G}{X(X^{q-1}+X^{q-2}c_2+ \\ldots +c_{q-1})^2}=:1+4 \\frac{G}{X\\prod _{i=2}^{q}(X-u_i)^2}, $ where $\\overline{G} \\equiv 1 \\pmod {2}$ .", "Finally, as $c_j>0$ as shown in the proof of [13], the Newton polygon technique (see [11]) shows that $\\nu _2(u_i)>0$ for $i=2, \\ldots , q$ , hence the final assertion.", "Proposition 2.2 With the notation as in Proposition REF , the equation $Z^2 = 1 + 4 \\frac{G}{X \\prod _{i=2}^q (X-(u_i+t_i))^2}$ defines a deformation $\\Psi _1$ of $\\Phi _{1,\\eta (2)}$ over $R[t_2, \\ldots , t_q]$ .", "Moreover, the “reduction” of $\\Psi _1$ modulo 2 is a deformation $\\psi _1$ of $\\phi _1$ over $k[t_2, \\ldots , t_q]$ given by $y^2 -y =\\frac{1}{x\\prod _{i=2}^q(x-t_i)^2}.", "$ It is immediate that the $(t_2, \\ldots , t_q)$ reduction of $\\Psi _1$ is birational to $\\Phi _{1,\\eta (2)}$ .", "One may rewrite (REF ) as $ Z^2= \\frac{X\\prod _{i=2}^q(X-(u_i+t_i))^2+4G}{X\\prod _{i=2}^q(X-(u_i+t_i))^2} .", "$ As the numerator modulo $(t_2, \\ldots , t_q)$ is, by (REF ), equal to the separable polynomial $X\\prod _{j=1}^m(X-B_i)$ , itself must be a separable polynomial in $R[t_2, \\ldots , t_q][X]$ of degree $m$ , whose roots are different from 0.", "Thus, the generic fiber $\\Psi _{1,\\eta (t_1, \\ldots , t_q)}$ is a Kummer $\\mathbb {Z}/2$ -extension that branches at $m+1$ distinct point.", "Therefore, by the different criterion [12], $\\Psi _1$ is a flat deformation of $\\Phi _{1, \\eta (2)}$ over $R[t_2, \\ldots , t_q]$ .", "In addition, substituting $Z=:1+2Y$ to (REF ), we obtain $ Y^2+Y= \\frac{G}{X\\prod _{i=2}^q(X-(u_i+t_i))^2}.", "$ A similar reasoning using the different criterion as before shows that $\\Psi _1$ is a lift of the deformation $\\psi _{1,\\eta (t)}$ of 
$\\phi _1$ .", "One then may replace each $t_i$ ($i=2, \\ldots , q$ ) in (REF ) by an element of $k[[t]]$ with positive $t$ -valuation.", "That gives a deformation of $\\Phi _1$ whose reduction modulo 2 defines a deformation $\\psi _1$ of $\\phi _1$ over $k[[t]]$ as we want, completing the second step." ], [ "Step 3", "In this section, we explicitly extend a local $\\mathbb {Z}/2$ -two-layers-lift to a $\\mathbb {Z}/4$ -one with good reduction modulo $t$ .", "The technique, which is quite explicit in this paper, can be understood better using Hurwitz trees (see, e.g., [2] [4]).", "That interpretation will be presented in a forthcoming paper.", "Proposition 3.1 One may factor $\\psi $ from Step 1 into a product of $\\psi ^1, \\ldots , \\psi ^m$ that verify the followings.", "For $i=1, \\ldots , m$ , $\\psi ^i$ branches only at $b_i$ with type $[\\iota _{i,1}, \\iota _{i,2}]$ .", "The $\\mathbb {Z}/2$ -subcover $\\psi _1^i$ of $\\psi _1^i$ has type $[\\iota _{i,1}]$ and satisfies $\\psi _1=\\prod _{i=1}^m \\psi ^i_1$ .", "The lift $\\Psi _1$ can, in turn, factor into a product of $\\Psi _1^1, \\ldots , \\Psi _1^m$ , where $\\Psi _1^i$ is a lift of $\\psi _{1,\\eta (t)}^i$ to characteristic zero.", "The first part is simply a decomposition of $\\psi $ 's represented Witt vector [3].", "Note that, each nontrivial $\\psi _1^j$ is, locally at each only branch point, given by $ y^2-y=\\frac{1}{x}.", "$ That follows from the fact that $\\iota _{j,1}=0$ or 2 for all $j$ and [15], which says a local Artin-Schreier extension is determined by its ramification jump.", "For the second part, one may write (REF ) as $ Z^2= \\frac{(X-V_{1})\\prod _{j=2}^{q} (X-V_{j,1})(X-V_{j,2})}{X\\prod _{i=1}^q (X-(u_i+t_i))^2} $ where $V_{1} \\equiv 0 \\pmod {2}$ and $V_{j,1} \\equiv V_{j,2} \\equiv b_j \\pmod {2}$ for $j=2, \\ldots , q$ .", "For each $j=2, \\ldots , q$ (resp.", "$j=1$ ), we define $\\Psi _1^j$ (resp.", "$\\Psi _1^1$ ) to be the $\\mathbb {Z}/2$ -extension of $\\operatorname{Frac}V[[X]]$ defined by the order-2 Kummer class of $ (X-V_{j,1})(X-V_{j,2}) \\hspace{8.53581pt} (\\text{resp. 
}", "X(X-V_1)).$ By [13], each $\\Psi _1^j$ has étale reduction.", "It is straightforward to see that $\\Psi _1^j$ is a lift of $\\psi _1^j$ to characteristic zero and their product gives $\\Psi _1$ .", "One may suppose, after a change of variables, that the $\\mathbb {Z}/4$ -cover $\\psi ^i$ over $k((t))[x]$ is given by the length-two Witt vector $ \\Big ( \\frac{a}{x}, \\frac{a_3}{x^3} + \\frac{a_1}{x} \\Big ), $ where $a, a_3, a_1 \\in k[[t]]$ , $a \\ne 0$ .", "That is to say, the corresponding extension of function field is given by adjoining $y_1, y_2$ satisfying $\\begin{split}y_1^2-y_1 & =\\frac{a}{x}, \\text{ and} \\\\y_2^2-y_2 & = \\frac{y_1 a}{x}+ \\frac{a_3}{x^3} + \\frac{a_1}{x}.\\end{split}$ See [12] for more details.", "In addition, by Proposition REF , the $\\mathbb {Z}/2$ -two-layers-lift of its $\\mathbb {Z}/2$ -sub-cover can be represented by $(Z_1)^2=1+\\frac{4u_1 a}{X},$ where $u_1$ lies in a finite extension of $W(k((t)))$ , $u_1$ modulo $t$ is non-zero and lies in a finite extension of $W(k)$ .", "Let us briefly introduce the notion of degeneration type, a useful invariant for an abelian local cover like $\\Psi ^i_1$ .", "For more details, see, e.g., [13] or [3].", "Definition 3.2 Suppose ${K}$ is a valuation field with valuation $\\nu $ , valuation ring $R$ , and residue field $\\kappa $ .", "Let $\\phi \\in H^1({K}, \\mathbb {Z}/p^n)$ be a cyclic character or order $p^n$ , which can be identified with a $p^n$ -extension of ${K}$ .", "The degeneration of $\\phi $ can be measured by its associated refined Swan conductors.", "Those include the following invariants: The depth Swan conductor $\\operatorname{sw}({\\phi }) \\in \\mathbb {Q}_{\\ge 0}$ , which measures the separability of $\\phi $ 's reduction ($\\operatorname{sw}({\\phi })=0$ if and only if $\\phi $ is unramified with respect to $\\nu $ ), For $\\operatorname{sw}(\\phi )>0$ , the differential Swan conductor $\\operatorname{dsw}(\\phi ) \\in \\Omega ^1_{\\kappa }$ , For $\\operatorname{sw}(\\phi )=0$ , the reduced reduction $\\underline{f}=(f_1, \\ldots , f_n(x)) \\in W_n(\\kappa )$ .", "We call $(\\operatorname{sw}(\\phi ), \\operatorname{dsw}(\\phi ))$ when $\\operatorname{sw}(\\phi )>0$ , or $(0, \\underline{f})$ when $\\operatorname{sw}(\\phi )=0$ , the degeneration type or the reduction type of $\\phi $ .", "Note that, with the notations above and ${K}$ has characteristic zero, an element $F \\in {K}$ gives rise to a character $\\mathfrak {K}(F) \\in H^1({K}, \\mathbb {Z}/p)$ by the Kummer equation $Z^p=F$ .", "Definition 3.3 Let $F, G \\in {K}^{\\times } \\setminus ({K}^{\\times })^p$ , where ${K}$ is a valued field with valuation $\\nu $ (normalized such that $\\nu (p)=1$ ) in characteristic zero and with residue function field in characteristic $p>0$ .", "Suppose, moreover, that $x$ is the variable of ${K}$ 's residue field, and $\\nu (F)=\\nu (G)=0$ .", "We say $F$ and $G$ are in the same Artin-Schreier degeneration class, denote by $F \\sim _{\\mathcal {AS}} G$ , if the corresponding characters (of order $p$ ) have the same degeneration type.", "Proposition 3.4 Suppose that $F$ and $G$ are as in Definition REF .", "Then $F \\sim _{\\mathcal {AS}} G$ if and only if there exist $F^{\\prime }, G^{\\prime } \\in {K}$ such that $\\nu (F-F^{\\prime p})= \\nu (G-G^{\\prime p})=p/(p-1)-\\operatorname{sw}(\\mathfrak {K}_p(F))=p/(p-1)-\\operatorname{sw}(\\mathfrak {K}_p(G))$ and $[F-F^{\\prime p}]=[G-G^{\\prime p}]$ if $\\nu (F-F^{\\prime p}) = \\frac{p}{p-1}$ , $\\frac{d}{dx} [F-F^{\\prime p}]=\\frac{d}{dx} 
[G-G^{\\prime p}]$ if $\\nu (F-F^{\\prime p})<\\frac{p}{p-1}$ .", "Here $[F-F^{\\prime p}]$ is the reduction of $(F-F^{\\prime p})/p^{\\nu (F-F^{\\prime p})}$ modulo $p$ .", "The proposition is immediate from [13].", "Next, we construct the local extension in three steps corresponding to Proposition REF , Corollary REF , and Corollary REF .", "Proposition 3.5 With the above notation and $a_1=a_3=0$ (in REF ), the following equation defines a $\\mathbb {Z}/4$ extension $\\Psi ^{i,\\min }$ of $\\Psi ^i_1$ that lifts $\\psi ^i$ .", "$(Z^i_2)^2=Z^i_1\\Big (1-\\frac{2{i\\hspace{0.55542pt}}u_1 a}{X}\\Big ).$ Note that, by substituting $Z^i_1=1-2Y^i_1$ to (REF ), we obtain $ (Y^i_1)^2-Y^i_1=\\frac{u_1 a}{X}.", "$ Hence, it must be true that $u_1 \\equiv 1 \\pmod {2}$ .", "We then apply the lifting technique of Green and Matignon [6] as follows.", "Define $Y^i_2$ by $(Z^i_2)^2=:1-4Y^i_2+4(Y^i_2)^2-2(1-{i\\hspace{0.55542pt}})Y^i_1+4(1-{i\\hspace{0.55542pt}})Y^i_1Y^i_2-2{i\\hspace{0.55542pt}}(Y^i_1)^2.$ The right-hand-side of the above equation is the square of $F_1(Y^i_1)+\\lambda Y^i_2$ in [12].", "Applying the above substitution and $Z^i_1=1-2Y^i_1$ to (REF ) gives $(Y^i_2)^2-Y^i_2+(1-{i\\hspace{0.55542pt}})Y^i_1Y^i_2= \\frac{Y^i_1 {i\\hspace{0.55542pt}}a }{X}.$ One sees that (REF ) reduces to the equation defining the extension $\\psi ^i_1$ to $\\psi ^i$ (REF ).", "Thus, [6] shows that the cover defined by (REF ) and (REF ) is a lift of $\\psi ^i$ extending $\\Psi ^i_1$ .", "Proposition 3.6 The following two polynomials $\\begin{split}g(X) & =X^2 -\\sqrt{2}(i\\sqrt{2} u_1 a-2 \\sqrt{\\tilde{a}_3/a})X+2\\tilde{a}_3/a \\text{ and} \\\\h(X) & =X^2-\\sqrt{2}(i\\sqrt{2} u_1 a+2 \\sqrt{\\tilde{a}_3/a})X+2\\tilde{a}_3/a,\\end{split}$ where $\\tilde{a}_3$ is a lift of $a_3$ to characteristic zero and be such that both of the polynomials have distinct roots, satisfy $g \\cdot \\frac{X-2{i\\hspace{0.55542pt}}u_1 a}{X^3} \\sim _{\\mathcal {AS}} h \\cdot \\frac{X-2{i\\hspace{0.55542pt}}u_1 a}{X^3} \\sim _{\\mathcal {AS}} 1-\\frac{4{i\\hspace{0.55542pt}}a_3u_1}{X^3}.$ Let us consider the function $g$ .", "The computation for $h$ is similar.", "Applying the Newton polygon technique [11], one learns that both the roots of $g$ have valuation $1/2$ .", "Denote the two roots by $\\sqrt{2}b$ and $\\sqrt{2}c$ , where $b$ and $c$ are units.", "We then have $\\begin{split}\\frac{(X-2{i\\hspace{0.55542pt}}u_1 a)g}{X^3} & =1-\\sqrt{2} \\frac{(b+c)+\\sqrt{2}{i\\hspace{0.55542pt}}u_1a}{X}+2\\frac{bc+\\sqrt{2}{i\\hspace{0.55542pt}}u_1a(b+c)}{X^2} -\\frac{4{i\\hspace{0.55542pt}}bcu_1a}{X^3} \\\\& = \\sum _{i=0}^3 \\frac{b_i}{X^i}.\\end{split}$ Subtracting $(1+\\sqrt{b_2}/X)^2$ to the right-hand-side of (REF ), one obtains the below expression $-\\sqrt{2} \\frac{b+c +\\sqrt{2}{i\\hspace{0.55542pt}}u_1a+2\\sqrt{bc+i\\sqrt{2}(b+c)u_1a}}{X} -\\frac{4{i\\hspace{0.55542pt}}bcu_1a}{X^3}.$ Set $bc:=\\tilde{a}_3/a$ .", "Then the first term of the above is 0 if and only if $ b+c= {\\left\\lbrace \\begin{array}{ll}i\\sqrt{2} u_1 a-2 \\sqrt{\\tilde{a}_3/a}, \\text{ or} \\\\i\\sqrt{2} u_1 a+2\\sqrt{\\tilde{a}_3/a}.\\end{array}\\right.", "}$ It hence follows from Proposition REF that $g$ and $h$ are in the same Artin-Schreier degeneration class with $1-4i\\tilde{a}_3u_1/X^3 \\sim _{\\mathcal {AS}} 1-4ia_3u_1/X^3$ .", "Corollary 3.7 In the above notation, suppose $a_3 \\ne 0$ and $a_1=0$ .", "Then one may obtain a $\\mathbb {Z}/4$ -extension of $\\Psi ^i_1$ that lift $\\psi ^i$ by replacing $(1-2{i\\hspace{0.55542pt}}u_1 a/X)$ in (REF ) 
with one of the quadratic equations in Proposition REF .", "Note that, replacing $(1-2iu_1a/X)$ by an equation in (REF ) equates to multiplying the character $\\tau \\in H^1(\\mathbb {K},\\mathbb {Z}/2)$ defined by the Kummer class of $ \\bigg ( 1-\\frac{2{i\\hspace{0.55542pt}}u_1 a}{X} \\bigg ) \\frac{g(X)}{X^3} \\sim _{\\mathcal {AS}} \\frac{X^3-4{i\\hspace{0.55542pt}}a_3u_1}{X^3}=1-\\frac{4{i\\hspace{0.55542pt}}a_3u_1}{X^3} $ to the character representing $\\Psi ^{i,\\min }$ .", "The congruence follows from Proposition REF .", "As $-{i\\hspace{0.55542pt}}a_3u_1 \\equiv a_3 \\pmod {2}$ , the reduction of $\\tau $ is a $\\mathbb {Z}/2$ -character represented by the Artin-Schreier degeneration class of $a_3/x^3$ by Proposition REF .", "Hence, the character $\\Psi ^i:=\\Psi ^{i, \\min } \\cdot \\tau $ has degeneration type $(0, (1/x, a_3/x^3))$ [13] as we want.", "In addition, as $\\Psi ^i$ branches at four distinct points, $0, 4u_1a$ , and the two roots of $f$ or $g$ (which are distinct by Proposition REF ), it is a lift of $\\psi ^i$ by the different criterion.", "By the same line of reasoning as in the above proof, we cover the remaining cases.", "Corollary 3.8 In the above notation, suppose $a_3 \\ne 0$ and $a_1 \\ne 0$ .", "Then one may obtain a $\\mathbb {Z}/4$ -extension $\\Psi ^i$ of $\\Psi ^i_1$ that lift $\\psi ^i$ by replacing a root $u$ of the quadratic equation (REF ) with $u(1+2^{3/2}a_1)$ .", "Replacing $u$ by $u(1+2^{3/2}a_1)$ equates to multiplying the $\\mathbb {Z}/2$ -character defined by $1-2^{3/2}a_1u/(X-u)$ to $\\Psi ^i$ .", "The corollary then follows from the fact that $\\nu _2(u)=1/2$ .", "We thus construct a lift $\\Psi ^i$ at each branch point $b_j$ of $\\psi $ 's generic fiber.", "Globally, their product forms a lift $\\Psi $ of $\\psi _{\\eta (t)}$ , which can be presented by the following equation $Z_2^2=Z_1 \\prod _{l=1}^r \\prod _{j=1}^{\\iota _{l,2}-\\iota _{l,1}} (X-D_{l,j}),$ where $D_{l,j}$ is of the following form $ D_{l,j}=b_l + \\sqrt{2} u_{l,j} \\text{, or } D_{l,j}=b_l + 2 u_{l,j}, $ for some unit $u_{l,j}$ in the 2-adic ring $V$ .", "In addition, by construction, the reduction modulo $t$ of $D_{l,j}$ makes sense.", "Note also that $\\iota _{l,2}-\\iota _{l,1}$ is either 1 or 2 (it is 2 when $a_3 \\ne 0$ ).", "Proposition 3.9 There exists a $\\mathbb {Z}/4$ -lift $\\Psi $ of $\\psi $ defined by a system consisting of (REF ) and $ Z_2^2=Z_1 \\prod _{l=1}^r \\prod _{j=1}^{m_{l,2}-m_{l,1}} (X-D^{\\prime }_{l,j}), $ where $D^{\\prime }_{l,j} \\in W(\\overline{k(t)})$ , and $c^{\\prime }_{l,j}:= D^{\\prime }_{l,j} \\pmod {t}$ are all distinct and different from the $B_i$ 's, $i=1, \\ldots , m$ .", "As in the proof of Corollary REF , replacing $u_{l,j}$ by $u_{l,j}(1+2^mv_{l,j})$ , where $v_{l,j} \\in R$ is a unit and $m >3/2$ , does not change the local reduction type in the previous constructions.", "One thus can find the $D^{\\prime }_{l,j}$ satisfying the conditions in the proposition.", "Note that $\\Psi $ is even defined over a finite extension $W(k[[t]])$ .", "Moreover, it is immediate from the construction that $\\Psi $ 's generic fiber and its reduction modulo $t$ , which are both Kummer coverings, have the same number of branch points, hence, the same degree of different.", "The different criterion then again asserts that $\\Psi $ has good reduction modulo $t$ .", "In the last step, we will show that this reduction is exactly what we are seeking." 
], [ "Step 4", "Suppose $\\phi $ is a local cyclic $G$ -cover of $\\operatorname{Spec}k[[x]]$ .", "Suppose $\\psi $ is a deformation of $\\phi $ over $k[[t]]$ .", "Suppose $\\Psi $ is a deformation of $\\psi _{\\eta (t)}$ over (a finite extension of) $W(\\overline{k((t))})$ .", "Finally, suppose that there exists a smooth model $\\tilde{\\Psi }$ of $\\Psi $ over an extension of $W(\\overline{k[[t]]})$ whose $t$ -fiber (where $t=0$ ), which we denote by $\\Phi $ , is smooth.", "To complete the last step, we will prove the following general result.", "Proposition 4.1 In the situation above, $\\Phi $ is a lift of $\\phi $ to characteristic zero.", "We may assume that $\\phi $ is a $G=\\langle \\sigma \\rangle $ -extension $k[[z]]/k[[x]]$ given by $ z \\mapsto z^{\\sigma }= \\sum _{i \\ge 0} c_iz^i \\in k[[z]], $ where $c_0=0$ and $c_1$ is a unit [1].", "Suppose $\\psi $ is a deformation of $\\phi $ over $S:=k[[t]]$ .", "Then there exists a power series defining $\\psi $ as below $ \\sum _{i \\ge 0} d_i \\tilde{z}^i \\in S[[\\tilde{z}]],$ where $d_0 \\in \\mathfrak {m}_S$ , $d_1 \\in S$ , $d_i \\equiv c_i \\pmod {t}$ , and $\\tilde{z} \\equiv z \\pmod {t}$ .", "Without loss of generality, we may identify $\\tilde{z}$ with $z$ (or just plug in $\\tilde{z}=z \\cdot v$ , where $v$ is a unit, to the above series).", "Let $X$ be a lift of $x$ by $\\Psi $ to characteristic zero.", "It suffices to prove the below proposition.", "Proposition 4.2 In the notation above, there exists a uniformizer $Z$ of $V[[X]]$ 's covering via $\\Psi $ , and a power series $ F_{\\sigma }= \\sum _{i \\ge 0} e_i Z^i \\in V[[Z]] $ that gives $\\Psi $ and whose reduction modulo $t$ (resp.", "modulo $p$ ) gives $\\Phi $ (resp.", "$\\psi $ ).", "In particular, the reduction of $F_{\\sigma }$ modulo $(p,t)$ defines $\\phi $ .", "As $\\Psi $ is a lift of $\\psi _{\\eta (t)}$ to characteristic 0, there exists a lift $Z$ of $z$ , which is also a uniformizer of the ring upstairs, and such that $\\Psi $ is determined by the mapping $ Z \\mapsto Z^{\\sigma }= \\sum _{i \\ge 0} e_i Z^i \\in V[[Z]], $ where $e_i \\equiv d_i \\pmod {p}$ .", "In addition, as $\\Psi $ has good reduction modulo $t$ , there exists a uniformizer $Z^{\\prime }$ and a power series in terms of $Z^{\\prime }$ such that $ Z^{\\prime } \\mapsto \\sum _{i \\ge 0} e^{\\prime }_i Z^{\\prime i} \\in V [[Z]] $ defining $\\Psi $ such that $Z^{\\prime }$ modulo $t$ and $e^{\\prime }_i$ modulo $t$ makes sense for each $i$ .", "Finally, as it must be true that $Z=Z^{\\prime } \\cdot u$ for some unit $u$ of the coordinate ring $V[[Z]]$ , $u$ modulo $t$ also makes sense and $e^{\\prime }_i=e_i u^i$ .", "The remaining assertions then immediately follow.", "It is immediate from the above result that the reduction modulo $t$ of the $\\mathbb {Z}/4$ -cover $\\Psi $ in Proposition REF is a lift of $\\phi $ to characteristic zero extending $\\Phi _1$ .", "That completes the proof of Theorem REF , hence the last step.", "$\\Box $" ] ]
2107.01780
[ [ "Self-Contrastive Learning with Hard Negative Sampling for\n Self-supervised Point Cloud Learning" ], [ "Abstract Point clouds have attracted increasing attention.", "Significant progress has been made in methods for point cloud analysis, which often requires costly human annotation as supervision.", "To address this issue, we propose a novel self-contrastive learning for self-supervised point cloud representation learning, aiming to capture both local geometric patterns and nonlocal semantic primitives based on the nonlocal self-similarity of point clouds.", "The contributions are two-fold: on the one hand, instead of contrasting among different point clouds as commonly employed in contrastive learning, we exploit self-similar point cloud patches within a single point cloud as positive samples and otherwise negative ones to facilitate the task of contrastive learning.", "On the other hand, we actively learn hard negative samples that are close to positive samples for discriminative feature learning.", "Experimental results show that the proposed method achieves state-of-the-art performance on widely used benchmark datasets for self-supervised point cloud segmentation and transfer learning for classification." ], [ "Introduction", "3D point clouds serve as an efficient representation of 3D objects or natural scenes, which consist of irregularly sampled 3D points associated with multiple modalities, leading to a wide range of applications such as autonomous driving, robotics and immersive tele-presence.", "Recent advances in geometric deep learning [5] have shown their success in the representation learning of irregular point clouds.", "However, most methods are trained in a (semi-)supervised fashion, requiring a large amount of labeled data to learn adequate feature representations.", "This limits the wide applicability of point clouds, especially for large-scale graphs.", "Hence, it is desirable to learn the feature representations of point clouds in a self-supervised fashion.", "Figure: An illustration of the proposed self-contrastive learning for self-supervised point cloud representation learning.", "Patch A is the anchor; pos.", "and neg.", "denote positive (e.g., patch D) and negative (e.g., patch B or patch C) samples, respectively.", "Note that patch B is the hard negative sample because of its comparative similarity to the anchor.Several attempts have been made for self-supervised representation learning on point clouds.", "These approaches are mainly based on reconstruction [11], [75], [47], [28], [84], [58], [21] or generation [1], [63], [19], [43], [60].", "Besides, contrastive learning instantiates a family of self-supervised methods [32], [3], [7], [30], [33], [57], [73], which often maximizes the agreements between the augmented views of the same image in an embedding feature space, while avoiding the mode collapse of the embedded features by maximizing the disagreements between negative examples constructed from different images.", "This paradigm has been extended to point cloud learning [81], [57], [73], which either contrast among various point clouds or different projected views of the input point cloud for representation learning.", "Under no supervision, the success of previous methods depends on informative sampling of positive and negative pairs, which often resort to manual augmentation (e.g., projections) or additional data (e.g., different point clouds for training).", "To this end, we propose a novel framework for self-supervised point cloud representation 
learning via self-contrastive learning, which actively learns positive and negative samples from the input single point cloud, motivated by the nonlocal self-similar property of point clouds.", "The key observation is that, as a representation of 3D objects or scenes, a point cloud usually exhibits nonlocal self-similarity, i.e., similar or even the same local geometry after affine transformations, such as the symmetric engines and airfoils of an airplane as shown in Fig. REF .", "Based on this observation, we propose to learn self-similar point cloud patches as positive samples and otherwise negative ones, without resorting to other point clouds or additional projections.", "Such a self-supervised approach marks the first significant departure from the standard practice of contrastive learning.", "Moreover, both local geometric patterns and nonlocal semantic primitives are jointly exploited by self-contrastive learning to improve the robustness against noise and missing data.", "As negative sampling is crucial for discriminative feature learning [76], [56], [70], we learn hard negative samples that are close to positive samples in the representation space for more expressive representations.", "Conditioned on each anchor patch, the corresponding hard negative samples are inferred based on the degree of self-similarity with the anchor.", "Specifically, given an input point cloud, we first train a similarity-learning network to infer the similarity between each pair of point cloud patches.", "To train the network, we apply rotations to each anchor patch to generate similar samples and employ another, different patch in the point cloud as a dissimilar sample; these pairs serve as input for learning patch similarity.", "Then, given patch pairs in the input, we determine whether they are similar or dissimilar pairs according to the estimates of the similarity-learning network.", "Similar patches are treated as positive samples while dissimilar ones are negative samples.", "Furthermore, we actively sample hard negative patches—negatives with comparatively large similarity to the anchor patch, where the similarity is measured by the cosine similarity between the features of patch pairs inferred from the similarity-learning network, aiming at more discriminative contrastive learning.", "Since the contrastive network is randomly initialized, naively employing hard negative samples may lead to an unsatisfactory local minimum.", "To avoid this, we first use the whole set of negative samples for contrastive learning, and then perform linear annealing [70] on the thresholds of self-similarity to choose hard negative samples.", "Our main contributions are summarized as follows.", "1) We propose a novel self-supervised framework for point cloud representation learning by contrasting patches within each point cloud, aiming at exploiting the nonlocal self-similarity of point clouds.", "Unlike previous works resorting to different point clouds or projections, we advocate the use of self-similar patches within a single point cloud as positive samples and otherwise negative samples for contrastive learning.", "2) To close the gap between supervised and unsupervised learning, we have developed an effective hard negative sampling method for learning discriminative features.", "We exploit the nonlocal self-similarity of point clouds to determine hard negative sampling and employ a linear annealing strategy to dynamically choose the hard negative samples for contrastive learning to avoid falling into an unsatisfactory local minimum.", "3) 
Experimental results show that the proposed model outperforms current state-of-the-art methods in point cloud segmentation and transfer learning for classification on widely used benchmark datasets, and validate the proposed informative hard negative sampling.", "Auto-encoders (AEs) aim to train an encoder to learn feature representations by reconstructing the input data via a decoder [64], [55], [31], [37].", "The idea is based on the premise that feature representations should contain sufficient information to reconstruct the input data.", "A plethora of approaches have been proposed to learn unsupervised feature representations via AEs, including denoising AEs [64], contrastive AEs [55], transforming AEs [31], variational AEs (VAEs) [37], etc.", "In addition to AEs, Generative Adversarial Networks (GANs) [25], [14], [15] extract feature representations in an unsupervised fashion by generating data from input noise via a pair of generator and discriminator networks.", "Recent approaches have shown the great potential of GANs in providing expressive feature representations by generating images [25], [14], [15], point clouds [1], [63], [43], or graphs [46], [79], [10], [4]." ], [ "Contrastive Representation Learning", "Contrastive learning instantiates a wide range of self-supervised methods [32], [3], [7], [30], [33], [57], [73], which aim to train an encoder to be contrastive between the feature representations of positive samples and negative samples.", "Among them, Deep InfoMax [32] first proposed to maximize the mutual information between a local patch and its global context through a contrastive learning framework.", "SimCLR [7] proposed a simple framework for contrastive learning, which requires a large batch size to achieve superior performance.", "MoCo [30] improves the efficiency of contrastive learning by introducing a queue to store feature representations, which reduces the memory cost for negative samples.", "AdCo [33] presents an adversarial approach to demonstrate that negative pairs of examples can be directly trained end-to-end together with the backbone network, so that the contrastive model can be learned more efficiently as a whole.", "Recently, Info3D [57] proposed to extend the InfoMax and contrastive learning framework to 3D objects, maximizing the mutual information between 3D objects and their “chunks” to learn representations.", "PointContrast [73] proposed a unified framework of contrastive loss for representation learning on 3D scenes.", "In addition, hard negative sampling has been shown to benefit contrastive learning in [70], [34].", "Robinson et al. [56] proposed a new conditional distribution for sampling negative samples to distinguish false-negative samples with the same context information as the anchor from hard negative samples.", "Wu et al. [70] proposed a negative sample selection method with upper and lower bounds, which calculates a certain distance between the samples and then takes the samples between the upper and lower bounds as negative samples for learning."
], [ "Transformation Equivariant Representation Learning", "Another family of self-supervised approaches attempt to learn transformation equivariant representations, which has been advocated in Hinton's seminal work on learning transformation capsules [31].", "Following this, many approaches have been proposed to learn transformation equivariant representations [23], [13], [12], [9], [42].", "To generalize to generic transformations, Auto-Encoding Transformation (AET) was proposed in [80] to learn unsupervised feature representations by estimating transformations from the learned feature representations of both the original and transformed images, which was further extended in [53], [66], [22].", "Another extension of AET, named GraphTER [21], was introduced to graph-structured data formalized by auto-encoding node-wise transformations in an unsupervised manner.", "The self-attention mechanism was specifically introduced in [17] for 3D point cloud data, which adheres to equivariance constraints, improving robustness to nuisance transformations." ], [ "Deep Learning on Point Clouds", "Deep learning on 3D point clouds has attracted increasing attention in recent years.", "Many approaches have been proposed to address various tasks on point clouds, such as 3D point cloud classification and segmentation [51], [52], [26], [45], [71], [48], [61], [82], [65], [68].", "Among them, one pioneer method PointNet [51] proposed to learn the features of each point independently, while PointNet++ [52] introduces a hierarchical architecture that applies PointNet on a nested partitioning of the input point set to extract local structures.", "Local structures have also been exploited by methods such as PCNN [2], PointCNN [45], PointConv [71], and Relation-Shape CNN [48] to further improve the quality of point cloud representation learning.", "In addition, Graph Convolutional Neural Networks (GCNNs) have also been applied to point clouds by constructing a $k$ -nearest-neighbor ($k$ NN) graph or a complete graph to learn feature representations [61], [82], [65], [68]." ], [ "Self-supervised Point Cloud Learning", "Recently, many approaches have sought to explore semantic information for unsupervised point cloud learning.", "In [81], a method was proposed to learn unsupervised semantic features for point clouds by solving the pretext tasks of part contrasting and object clustering.", "[58] attempted to reconstruct point clouds whose parts have been randomly rearranged to capture semantic properties of the point cloud.", "These representations are learned by inferring the relationship among parts, while the global information is not fully exploited.", "To tackle this issue, a recent work [54] proposed to learn point cloud representations by bidirectional reasoning between the local structures and the global shape.", "However, the semantic nonlocal context information within each point cloud is not fully exploited yet in previous works." ], [ "The Proposed Method", "In this section, we elaborate on the proposed self-supervised point cloud representation learning.", "We start with an overview of the key ideas, and then present our self-contrastive learning with hard negative sampling from nonlocal self-similarity of point clouds.", "Furthermore, we provide analysis about the relationship of our method to self-attention [83] and point cloud transformer [27]." 
], [ "Overview", "Given an input point cloud ${\\mathbf {P}}\\in \\mathbb {R}^{N \\times 3}$ , we aim at learning an effective feature extractor $\\mathcal {E}: {\\mathbf {P}}\\mapsto f({\\mathbf {P}})$ that infers the feature $f({\\mathbf {P}})$ of ${\\mathbf {P}}$ in a self-supervised fashion.", "As illustrated in Fig.", "REF , our framework consists of the following three procedures: Self-similarity training.", "We train a similarity-learning network to learn the similarity between each pair of point cloud patches in ${\\mathbf {P}}$ .", "Specifically, for each anchor patch, we generate pairs of similar patches by rotation of the anchor and employ another different patch in ${\\mathbf {P}}$ as a dissimilar sample, which serve as the training data to learn the patch similarity.", "Positive and Hard negative sampling.", "We employ the trained similarity-learning network to learn positive and negative samples, depending on if they are similar or dissimilar pairs.", "Meanwhile, we propose to determine hard negative samples by the degree of self-similarity in the feature space inferred from the similarity-learning network.", "Such hard negative examples will play an important role in facilitating better and faster contrastive learning [34].", "Self-contrastive learning.", "Based on the learned positive and hard negative samples, we perform self-contrastive learning over ${\\mathbf {P}}$ .", "In particular, we adopt a linear schedule in deterministic annealing for hard negative samples to avoid falling into a local minimum during self-contrastive learning." ], [ "Nonlocal Self-similarity in Point Clouds", "Point clouds often exhibit nonlocal self-similarity, as illustrated in Fig.", "REF .", "Unlike image patches, two point cloud patches are similar if they can be connected by 3D geometric transformations (rotation, reflection, translation, etc.).", "Since point clouds characterize 3D shapes, there is often stronger self-similarity in point clouds than that in images, e.g., six sides of a cube are similar to each other, left and right sides of many objects including human faces observe bilateral symmetry, and circular symmetry can be widely observed for vases, balls, and so on.", "This key observation inspires our self-contrastive learning, where self-similar patches serve as positive samples and dissimilar patches become negative samples.", "To fully exploit the nonlocal self-similarity, it is crucial to provide a meaningful definition of nonlocal self-similarity for point clouds.", "Based on the heuristic that two patches in a point cloud are self-similar if their distance is small after some 3D geometric transformation, there are many optimization-based registration methods that use optimization strategies to estimate the transformation matrix [8], [41], [74], and then judge the similarity of two point sets by exploring the properties of the transformation matrix.", "However, these methods usually require high computational cost for complex matching strategies and easily fall into a local optimal value.", "Different from explicitly calculating the shape similarity between patches using optimization strategies, we consider projecting different patches into the same similarity feature space via a similarity-learning network.", "Next, we measure the similarity between patches by calculating the cosine similarity of the corresponding feature vectors.", "Definition 1.", "Given a point cloud ${\\mathbf {P}}$ , two patches ${\\mathbf {M}}\\in {\\mathbf {P}}$ and ${\\mathbf {N}}\\in {\\mathbf 
{P}}$ are self-similar if $\\Phi [g({\\mathbf {M}}),g({\\mathbf {N}})] < \\epsilon ,$ where $g(\\cdot )$ denotes the feature of a patch learned from a network, and $\\Phi [\\cdot ,\\cdot ]$ is a metric to measure the similarity between the features of patch pairs, such as the cosine similarity.", "$\\epsilon >0$ is a small threshold." ], [ "Self-similarity Training", "Based on Definition 1, we train a similarity-learning network to learn the self-similarity between point cloud patch pairs.", "As shown in the top half of Fig.", "REF , given a point cloud ${\\mathbf {P}}=\\lbrace {\\mathbf {x}}_i\\rbrace _{i=1}^{N}$ , we choose patch centers by iterative farthest point sampling (FPS) [49], leading to a subset of $M$ points $\\mathbf {C}=\\left\\lbrace {\\mathbf {x}}_i\\right\\rbrace _{i=1}^{M}$ .", "We thus construct a set of patches by $\\mathcal {P}_i=\\left\\lbrace {\\mathbf {x}}_j | {\\mathbf {x}}_j \\in \\mathcal {N}({\\mathbf {x}}_i) \\cup {\\mathbf {x}}_i \\right\\rbrace , i=1...,M,$ where $\\mathcal {N}(\\cdot )$ denotes the $k$ nearest neighbors of the center point ${\\mathbf {x}}_i$ (i.e., $|\\mathcal {P}_i|=k+1$ ).", "To learn the features of these patches, we first introduce an encoder $E_1(\\cdot )$ that maps the point cloud ${\\mathbf {P}}$ to the feature space, ${\\mathbf {F}}=E_1({\\mathbf {P}}),$ where ${\\mathbf {F}}=[{\\mathbf {f}}_1,...,{\\mathbf {f}}_N]^\\top \\in \\mathbb {R}^{N \\times F}$ is the feature matrix of the point cloud with $F$ output feature channels, with ${\\mathbf {f}}_i$ as the feature of the $i$ -th point.", "Specifically, we employ DGCNN [68] as the encoder.", "Then each constructed patch is represented by the concatenation of the feature of each point in the patch, i.e., ${\\mathbf {F}}_i \\in \\mathbb {R}^{(k+1)\\times F}$ , $i=1,...,M$ .", "In order to learn the nonlocal self-similarity of patches, given an anchor patch $\\mathcal {P}_i$ , we first construct a similar sample by rotating $\\mathcal {P}_i$ with a random degree, which admits a feature vector representation $\\widetilde{{\\mathbf {F}}}_i$ .", "Meanwhile, we randomly sample another patch $\\mathcal {P}_j, i \\ne j$ as a dissimilar patch to $\\mathcal {P}_i$ .", "Then we summarize the patch representation by an aggregation operator $G_1:\\mathbb {R}^{(k+1) \\times F} \\mapsto \\mathbb {R}^F$ for the subsequent discrimination.", "Next, we employ a discriminator $D: \\left(\\mathbb {R}^{F}, \\mathbb {R}^{F}\\right) \\mapsto \\mathbb {R}$ , where $D(G_1({\\mathbf {F}}_i),G_1({\\mathbf {F}}_j))$ provides the similarity score assigned to the summarized representations of patch pair $(\\mathcal {P}_i,\\mathcal {P}_j)$ .", "A higher score corresponds to a more similar patch pair.", "The similarity-learning network is trained by minimizing $\\begin{split}\\mathcal {L}_S = -\\frac{1}{A+B}\\bigg ( & \\sum _{i=1}^{A} \\log D\\left(G_1({\\mathbf {F}}_i),G_1(\\widetilde{{\\mathbf {F}}}_i)\\right) \\\\& + \\sum _{i=1}^{B} \\log \\Big (1-D\\left(G_1({\\mathbf {F}}_i),G_1({\\mathbf {F}}_j)\\right)\\Big )\\bigg ),\\end{split}$ where $A$ and $B$ are the numbers of sampled similar and dissimilar patch pairs, respectively." 
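As a minimal PyTorch-style sketch of the training objective $\mathcal{L}_S$ above (an illustration under stated assumptions, not the authors' released implementation): it assumes that the aggregated features $G_1(\mathbf{F}_i)$ of anchor patches, of their randomly rotated copies, and of randomly paired dissimilar patches have already been computed by the encoder and aggregation operator, and that the discriminator is a small MLP with a sigmoid output; its architecture here is an assumption.

```python
import torch
import torch.nn as nn

class PairDiscriminator(nn.Module):
    """Scores a pair of aggregated patch features; the sigmoid keeps D in (0, 1)."""
    def __init__(self, feat_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, f_a, f_b):
        return self.net(torch.cat([f_a, f_b], dim=-1)).squeeze(-1)

def similarity_loss(D, f_anchor, f_rotated, f_other, eps=1e-8):
    """L_S: similar pairs (anchor, rotated anchor) should score close to 1,
    dissimilar pairs (anchor, other patch) close to 0."""
    pos = D(f_anchor, f_rotated)    # A similar pairs, shape (A,)
    neg = D(f_anchor, f_other)      # B dissimilar pairs, shape (B,)
    A, B = pos.numel(), neg.numel()
    return -(torch.log(pos + eps).sum() + torch.log(1.0 - neg + eps).sum()) / (A + B)

# Illustrative usage with aggregated G_1(.) features of shape (batch, feat_dim):
# D = PairDiscriminator(feat_dim=1024)
# loss = similarity_loss(D, f_anchor, f_rotated, f_other)
```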
], [ "Positive and Hard Negative Sampling", "Two key ingredients in contrastive learning are definitions of similar (positive) pairs and dissimilar (negative) pairs of data points.", "Having trained the similarity-learning network for self-similarity learning, we proceed with inferring negative samples from the network.", "Unlike most previous works where negative samples are manually chosen, we actively learn negative sampling conditional on each anchor patch.", "Since we have no supervision in unsupervised contrastive learning, we opt to learn negative samples by measuring the similarity with respect to the anchor patch in the feature space.", "In particular, we calculate the cosine similarity of the respective features of two patches inferred from the trained similarity-learning network, i.e., $s(\\mathcal {P}_i, \\mathcal {P}_j) = \\frac{|G_1({\\mathbf {F}}_i)^{\\top }G_1({\\mathbf {F}}_j)|}{\\Vert G_1({\\mathbf {F}}_i)\\Vert _2\\cdot \\Vert G_1({\\mathbf {F}}_j)\\Vert _2} ,$ Furthermore, we propose to extract hard negative samples for discriminative feature learning inspired by recent works [34], [70].", "The level of “hardness” for negative samples is dependent on the similarity with respect to the anchor patch, i.e., more similar patches serve as “harder” negative samples.", "To implement this idea, we choose a closed interval $\\mathcal {B} = [b_l, b_u]$ defined by two thresholds for hard negative samples, where $b_l$ and $b_u$ are the lower and upper similarity thresholds, respectively.", "The negative sampling for each anchor patch $\\mathcal {P}_i$ varies as the sampling thresholds are conditional on the anchor in the point cloud data.", "Regarding positive sampling, we dilate each anchor patch as a positive example like in dilated point convolution [16].", "Specifically, as anchor patch $\\mathcal {P}_i$ is constructed via a $k$ -nearest-neighbor search, we first compute the $k \\times d$ nearest neighbors of the centering point ($d>1$ is an integer), and then select every $d$ -th neighbor to acquire a dilated patch (with increased receptive field) as the positive sample $\\mathcal {P}^+_i$ .", "In our current experiments, we set $d=2$ ." 
], [ "Self-contrastive Learning", "Based on the learned positive and hard negative samples, we perform self-contrastive learning over ${\\mathbf {P}}$ .", "We first learn the point-wise features of ${\\mathbf {P}}$ through a representation encoder $E_2(\\cdot )$ , ${\\mathbf {H}}=E_2({\\mathbf {P}}),$ where ${\\mathbf {H}}=[{\\mathbf {h}}_n,...,{\\mathbf {h}}_N]^{\\top } \\in \\mathbb {R}^{N \\times H}$ denotes the feature matrix of ${\\mathbf {P}}$ with ${\\mathbf {H}}$ output feature channels, with ${\\mathbf {h}}_i$ as the feature of the $i$ -th point.", "The point-wise features will be transferred to downstream tasks, such as point cloud classification and segmentation.", "We have employed DGCNN [68] as the encoder in our current implementation.", "Each constructed patch is then represented by the concatenation of encoded features, i.e., ${\\mathbf {H}}_i\\in \\mathbb {R}^{(k+1)\\times H}, i=1,...,M$ .", "Like the similarity-learning network, we also employ an aggregation operator $G_2:\\mathbb {R}^{(k+1)\\times H} \\mapsto \\mathbb {R}^{H}$ to acquire the patch representations.", "The contrastive learning network is then trained by minimizing the InfoNCE [50] contrastive loss, $\\mathcal {L}_N = -\\log \\frac{f({\\mathbf {H}}_i,{\\mathbf {H}}_i^{+})}{f({\\mathbf {H}}_i,{\\mathbf {H}}_i^{+})+\\sum _{s(\\mathcal {P}_i,\\mathcal {P}_j)\\in [b_l,b_u]}f({\\mathbf {H}}_i,{\\mathbf {H}}_j)},$ where $s(\\mathcal {P}_i,\\mathcal {P}_j)\\in [b_l,b_u]$ means $\\mathcal {P}_j$ is a hard negative sample of $\\mathcal {P}_i$ .", "${\\mathbf {H}}_i^{+}$ and ${\\mathbf {H}}_j$ denote the feature map of the positive and negative patches, respectively.", "$f({\\mathbf {H}}_i,{\\mathbf {H}}_j)=\\exp \\left\\lbrace G_2({\\mathbf {H}}_i)^{\\top }G_2({\\mathbf {H}}_j)/\\tau \\right\\rbrace $ , where $\\tau $ is a positive real number denoting the temperature parameter.", "Naively using negative samples may collapse to poor local minimum as discussed in previous works [70], [34].", "Instead, we adaptively perform negative sampling by choosing the thresholds $b_l$ and $b_u$ with a linear annealing policy as in [70].", "Specifically, we opt to reduce the value of $b_u$ by a step size linearly and increase the value of $b_l$ by another step size linearly for every fixed number of epochs after several training epochs of using the whole negative samples, thus selecting more difficult negatives.", "Table: Classification results in transfer learning from ShapeNet Part dataset to ModelNet40 dataset." 
], [ "Analysis", "We further analyze the connection of our method with the recently developed self-attention [83] and point cloud transformer [27].", "Self-attention is an attention mechanism relating different positions of spatial and temporal data to model the context dependency within the data, which has shown its effectiveness in machine reading, language understanding, image description generation, and so on.", "The well-known transformer models [35] heavily rely on self-attention mechanism to exploit global or long-range dependencies between the input and output.", "In essence, the nonlocal self-similarity exploited in our self-contrastive learning model is consistent with the principle of self-attention.", "As discussed in [67], self-attention is a special case of non-local operations in the embedded Gaussian version.", "On the other hand, the nonlocal self-similarity of a point cloud distinguishes from the commonly used self-attention or transformer in the explicit geometric meaning as well as the extension to positive and negative sampling in contrastive learning.", "Moreover, the proposed model samples self-similar patches as positives, while patches with less degree of similarity are treated as hard negatives.", "By relating and contrasting patches in different locations of a point cloud, our model captures both local geometric details and nonlocal semantic primitives, leading to accurate local and global representations.", "This is beneficial to downstream tasks such as point cloud segmentation, which will be evaluated in the experiments.", "Table: Part segmentation results on ShapeNet Part dataset.", "Metric is mIoU (%) on points.", "“Sup.” denotes supervised." ], [ "Experimental Results", "In this section, we first validate the feature representations of point clouds learned with our model in a transfer learning setting [58] in Sec.", "REF .", "Then in Sec.", "REF , We compare the proposed part segmentation method with state-of-the-art supervised and unsupervised approaches.", "In our experiments, we employ ModelNet40 [72] and ShapeNet Part [77] datasets to evaluate the generalizability of our model in a transfer learning strategy.", "ModelNet40.", "This dataset contains $12,311$ models from 40 categories, where $9,843$ models are used for training and $2,468$ models are for testing.", "We sample $1,024$ points from the original model.", "ShapeNet Part.", "This dataset contains $16,881$ point clouds from 16 object categories, annotated with 50 different parts.", "We sample $2,048$ points from each 3D point cloud.", "We employ $12,137$ models for training, and $2,874$ models for testing." 
], [ "Implementation Details", "In this task, the similarity-learning network and the contrastive network are both trained via the Adam optimizer [36] with a batch size of 32 and an initial learning rate of $0.002$ .", "The similarity-learning network is trained for 128 epochs and the contrastive learning network is trained for 512 epochs.", "The learning rates of the two networks are scheduled to decrease by 0.8 every 20 epochs and $0.5$ every 50 epochs, respectively.", "For the similarity-learning network, we deploy four EdgeConv [68] layers and one fully-connected layer as the encoder $E_1(\\cdot )$ .", "The number of nearest neighbors $k$ is set to 20 for all EdgeConv layers.", "After the four EdgeConv layers, the 128-dimensional point-wise feature matrix goes through a fully-connected layer with input channels of 128 and output channels of 32.", "We then sample similar and dissimilar patch pairs as described in Sec.", "REF , and the point-wise features of these patches are fed into an aggregation operator $G_1(\\cdot )$ , which consists of a GCN [38] and average pooling layer to acquire the representations of patches.", "The representations of similar and dissimilar patch pairs are first concatenated to form a 64-dimensional feature vector, and then fed into a discriminator $D(\\cdot )$ containing one fully-connected layer to predict the similarity of each pair.", "In the linear annealing strategy for hard negative sampling, given that most similarity values between different patch pairs range from 0.8 to 1.0, we opt to reduce the value of $b_u$ by 0.025 and increase the value of $b_l$ by 0.05 for every 20 epochs after 300 training epochs of using the whole negative samples.", "In the encoder $E_2(\\cdot )$ of the contrastive network, we adopt eight EdgeConv layers with the number of nearest neighbors $k=20$ to acquire the point-wise feature representations.", "We concatenate the point-wise features from the eight EdgeConv layers, which will be fed into a fully-connected layer to output a 256-dimensional point-wise feature matrix.", "Similar to the similarity-learning network, we also use a GCN layer and average pooling layer as our aggregation operator $G_2(\\cdot )$ to aggregate the feature representations of each patch.", "Besides, the batch normalization layer and LeakyReLU activation function with a negative slope of 0.2 are employed after each convolutional layer.", "During the evaluation stage, the weights of the encoder $E_2(\\cdot )$ are fixed to extract the point-wise feature representations of point clouds.", "Then an average pooling layer is deployed to acquire the global features, after which a linear SVM classifier is trained to map the global features to classification scores.", "Figure: Comparison in classification accuracy with the most competitive method GraphTER on ModelNet40 dataset at different Gaussian noise levels." ], [ "Classification Results", "We follow the same setting in [81] to train a linear SVM classifier on the feature representations of ModelNet40 obtained from the contrastive network.", "Both similarity-learning and contrastive networks are trained on the ShapeNet part dataset.", "As shown in Tab.", "REF , the proposed method outperforms all other unsupervised competitive methods on the ModelNet40 dataset, which justifies the effectiveness of our method.", "Figure: Comparison in classification accuracy with the most competitive method GraphTER on ModelNet40 dataset at different point cloud densities." 
], [ "Ablation Studies", "We test the robustness of our model at different noise levels and densities, which is important to real-world applications where point cloud data often suffer from noise or low density.", "Robustness to Noise.", "In order to test the robustness of our model to random noise, we randomly jitter the original coordinates of 3D point clouds with Gaussian noise, with zero mean and standard deviation $\\sigma \\in \\lbrace 0.01,0.02,0.03,0.04,0.05\\rbrace $ .", "As presented in Fig.", "REF , we compare our method with the most competitive method GraphTER [21] on ModelNet40 dataset, where the horizontal axis denotes the noise level (standard deviation $\\sigma $ ), and the vertical axis is the classification accuracy evaluated from a linear SVM classifier.", "We observe that the proposed model is more robust, with the classification accuracy of $63.2\\%$ even when the noise level $\\sigma =0.05$ .", "We also see that the performance trend of GraphTER is similar and comparable to ours.", "This is because GraphTER employs point-wise transformation (translation, rotation or shearing) for data augmentation, which is beneficial to the robustness of the model to slight noise.", "By contrast, we do not need this strategy but still achieve satisfactory classification results and good robustness.", "Robustness to Density.", "We further test the robustness of our model to 3D point clouds with low density.", "We randomly sample $\\lbrace 1024,896,768,640,512\\rbrace $ points from the original point cloud on ModelNet40 dataset, and compare our method with GraphTER [21].", "As shown in Fig.", "REF , the classification accuracy of our model keeps $75.9\\%$ when the number of points is 512, which outperforms GraphTER ($70.5\\%$ ) significantly.", "Hence, the proposed method is robust to noisy and sparse point clouds.", "This gives credit to the proposed contrastive learning based on nonlocal self-similarity, which captures the relationship between parts of the point cloud." ], [ "Implementation Details", "We also use Adam optimizer to train the similarity-learning network and the contrastive network for 128 epochs and 512 epochs, respectively.", "The hyper-parameters are the same as in Sec.", "REF .", "We adopt a linear classifier with several fully-connected layers to map the point-wise feature representations to segmentation scores, with the input of a 256-dimensional feature vector from the EdgeConv layer of the contrastive network.", "We use the negative log-likelihood loss to train the classifier." 
], [ "Unsupervised Results", "We adopt the Intersection-over-Union (IoU) metric and follow the same evaluation protocol as in the PointNet [51]: the IoU of a shape is computed by averaging the IoUs of all shapes belonging to that category.", "The mean IoU (mIoU) is finally calculated by averaging the IoUs of all test shapes.", "We compare our model with both unsupervised approaches and supervised approaches, as listed in Tab.", "REF .", "For fair comparison, we use a different number of fully-connected layers as our classifier to compare with other methods: 1) one fully-connected layer (denoted as “1 FC”) as in [28], and 2) five fully-connected layers (denoted as “5 FCs”) as in [21].", "Note that we reproduce the segmentation results of GraphTER under the “1 FC” setting.", "Under the challenging “1 FC” setting, we achieve an mIoU of $76.0\\%$ , which significantly outperforms the state-of-the-art unsupervised method MAP-VAE [28] by $8.0\\%$ , and GraphTER [21] by $13.5\\%$ .", "Under the “5 FCs” setting, we achieve an mIoU of $82.3\\%$ , which also outperforms the state-of-the-art unsupervised method GraphTER [21] by $0.4\\%$ .", "Moreover, the proposed model achieves comparable performance to the state-of-the-art fully supervised approaches, which pushes greatly closer towards the upper bound set by the fully supervised counterparts.", "We also observe that our model achieves the top performance in most categories such as aeroplane, motorbike and skateboard, which exhibit strong self-similarity and can be well captured by our self-contrastive learning." ], [ "Semi-supervised Results", "We follow the same setting in [84] to test our model with limited labeled training data.", "Specifically, we train the linear classifier with only $1\\%$ and $5\\%$ labeled training data, and test the model on all available test data.", "For fair comparison with other methods as presented in Tab.", "REF , we adopt two settings: 1) Train the linear classifier with weights of the feature extractor not frozen, as in ACD [18], denoted as “Ours (w/ fine-tune)”; 2) Train the linear classifier with weights of the feature extractor frozen, as in other methods, denoted as “Ours (w/o fine-tune)”.", "As listed in Tab.", "REF , under the “w/o fine-tune” setting, our model achieves the state of the art performance, leading to an mIoU of $78.2\\%$ with $5\\%$ of training data; with $1\\%$ of training data, our model still achieves the best performance, with an mIoU of $74.8\\%$ that outperforms Multi-task [29] by $8.0\\%$ .", "Under the “w/ fine-tune” setting, our result is comparable to ACD [18] with $5\\%$ of training data, and outperforms ACD with $1\\%$ of training data.", "Figure: Segmentation results on ShapeNet Part dataset under different percentages of training data.We further reduce the amount of training data to evaluate our model.", "We adopt four percentages of available training data in $\\lbrace 0.1\\%,0.3\\%,0.5\\%,0.7\\%\\rbrace $ to train the linear classifier, with the weights of the feature extractor frozen.", "As presented in Fig.", "REF , the horizontal axis denotes the training epoch of the classifier and the vertical axis is the mIoU.", "We observe that the mIoU increases with the training epoch and basically converges at the 60th epoch, which is stable.", "The mIoU also increases with the percentage of training data.", "At the same time, our model still achieves a reasonable mIoU of $61.11\\%$ with $0.1\\%$ of training data.", "This validates the effectiveness of our model with extremely few 
labeled training samples." ], [ "Visualization Results", "Further, we qualitatively compare the proposed method with MAP-VAE [28] on the Chair model under the “1 FC” setting, as illustrated in Fig.", "REF .", "The proposed model leads to much more accurate segmentation results than MAP-VAE especially in transition areas, e.g., around the legs and the back of the chair.", "We also compare our method with the state-of-the-art unsupervised method GraphTER [21] on various models under the “5 FCs” setting, as shown in Fig.", "REF .", "We see that our model produces accurate results in detailed regions, e.g., the tail of the aeroplane model, and the wheels of the motorbike and skateboard models, which benefits from the exploitation of nonlocal similarity." ], [ "Ablation Studies", "We evaluate the advantages of hard negative sampling, and test the robustness of our model.", "Advantages of Hard Negative Sampling.", "To validate the advantages of the proposed hard negative sampling, we simply take all other patches as negative samples for the anchor patch.", "As presented in Tab.", "REF , we see that with hard negative sampling, our model outperforms its counterpart without hard negative sampling by a large margin in all experimental settings.", "Specifically, our model improves the mIoU by an average of $6.35\\%$ in the unsupervised setting and $5.5\\%$ in the semi-supervised setting over the model without hard negative sampling, demonstrating that the proposed hard negative sampling makes a significant contribution to our method for point cloud representation learning.", "Table: Ablation studies on the proposed Hard Negative Sampling in terms of segmentation mIoU (%) on ShapeNet Part.", "Figure: Comparison in segmentation mIoU with the most competitive method GraphTER on ShapeNet Part dataset at different Gaussian noise levels.", "Robustness to Noise.", "We also test the robustness of our model to noise.", "We randomly jitter the original coordinates of 3D point clouds with Gaussian noise, with zero mean and standard deviation $\\sigma \\in \\lbrace 0.01,0.02,0.03,0.04,0.05\\rbrace $ .", "We compare our model with GraphTER on the ShapeNet Part dataset in the “5 FCs” setting.", "As shown in Fig.", "REF , our model outperforms GraphTER at all the noise levels and achieves reasonable segmentation results even at high noise levels." ], [ "Conclusion", "We propose a self-contrastive learning framework with hard negative sampling based on nonlocal self-similarity, aiming at accurate point cloud representation learning in a self-supervised fashion.", "It consists of three building blocks, namely self-similarity training, positive/negative sampling (especially hard negative sampling) from nonlocal self-similarity, and self-contrastive learning with a linear annealing schedule.", "By exploiting self-similarity in point clouds, our method has achieved state-of-the-art results in segmentation and transfer learning for classification on popular benchmark datasets.", "We believe the exploitation of nonlocal self-similarity will also benefit other point cloud tasks such as denoising, reconstruction, and generation." ] ]
2107.01886
[ [ "A Testbed for Investigation of Selective Laser Melting at Elevated\n Atmospheric Pressure" ], [ "Abstract Metal additive manufacturing (AM) by laser powder bed fusion (L-PBF) builds upon fundamentals established in the field of laser welding which include the influence of gas and plume dynamics on weld depth and quality.", "L-PBF demands a thorough investigation of the complex thermophysical phenomena that occur where the laser interacts with the metal powder bed.", "In particular, melt pool turbulence and evaporation are influenced by the ambient gas chemistry and pressure.", "This paper presents the design and validation of high pressure laser melting (HPLM) testbed; this accommodates bare metal plate samples as well as manually-coated single powder layers, and operates at up to 300 psig.", "The open architecture of this testbed allows for full control of all relevant laser parameters in addition to ambient gas pressure and gas flow over the build area.", "Representative melt tracks and rasters on bare plate and powder are examined in order to validate system performance, and preliminary analysis concludes that pressure has a significant impact on melt pool aspect ratio.", "The HPLM system thus enables careful study pressure effects on processing of common L-PBF materials, and can be applied in the future to materials that are challenging to process under ambient pressure, such as those with high vapor pressures." ], [ "Introduction", "Laser powder bed fusion (L-PBF) is a category of additive manufacturing (AM) in which laser energy is focused onto a flat bed of powder to progressively fuse together horizontal cross-sections of a three-dimensional component.", "Figure REF a depicts the process: powder is spread across a build platform to form a thin layer; the laser is focused to the flat powder bed and directed by galvanometer mirrors in order to fuse an arbitrary shape in the powder; an inert gas atmosphere minimizes oxidation, and additionally the inert gas is swept across the build platform as a so-called “gas knife” to remove process byproducts from the chamber.", "Figure: Schematic of a typical L-PBF machine and the mesoscale dynamics of the L-PBF process.", "Adapted from .In L-PBF, absorption of laser energy is governed by the powder material itself, as well as the prevalence of secondary reflections as impacted by powder layer properties and previously fused geometry [2], [3], [4], [5], [6].", "The resulting melt pool may or may not reach a steady state, subject to a number of competing forces as the laser progresses across the powder bed.", "The laser continues heating the melt pool while heat is convected, conducted, and radiated away from the laser spot [3], [7], which establishes a temperature gradient.", "This gradient creates a corresponding surface tension gradient that induces Marangoni flow, which in most cases circulates from the laser spot where the surface tension is lowest, to the edges of the melt pool where surface tension is highest [8].", "The velocity, and thus turbulence, of this flow grows with greater energy input to a point at which the melt pool becomes unstable [9] and the intense internal flows may cause voids in the final part [10].", "In addition to liquid flow, gas-liquid interaction plays a significant role in melt pool behavior and resulting L-PBF part quality.", "Khairallah et al.", "[7] describe in detail how evaporation from the molten metal surface generates a recoil pressure above the melt pool, which presses down into the melt pool with 
sufficient force at high energy densities to create keyhole porosity.", "This injects gas into the melt pool and forces liquid to the sides of the depression with enough force to project spatter upward and away from the laser spot at high speeds.", "Matthews et al.", "[11] observed the denudation of powder with high speed imaging at pressures from 0.5 Torr to 1 bar.", "It was concluded that denudation of powder near the laser spot is strongly influenced by ambient pressure, and that at pressures near 1 bar, denudation decreases with pressure.", "Vaporization due to thermal gradients near the center of the laser spot poses an additional complication when some elements that comprise the working material vaporize more readily than others, resulting in a change in final part composition relative to the feedstock [12], [13].", "In summary, an ideal melt pool receives enough energy to maintain a single, continuous track which reaches the previously-solidified layer below to fuse the layers together, yet not so much energy that the melt pool becomes unstable, resulting in non-uniformity and/or embedded porosity.", "Much work has gone into mapping the parameter space for various materials in order to establish optimal processing conditions, sometimes called a process window, for high quality L-PBF parts.", "Recent work focusing on laser welding and L-PBF in sub-atmospheric conditions suggests that ambient gas pressure may be used to expand such process windows.", "Specifically, weld penetration increases and melt pool width decreases with decreasing pressure below 1 bar [14], [15], [16], [17], [18].", "For example, Pang et al.", "[19] developed a 3D multiphase model of a laser weld keyhole and the resultant vapor plume, and found that adding ambient pressure into the simulation resulted in much lower (and more accurate) gas plume velocity estimates.", "Additionally, Masmoudi [20] presented a 3D numerical model to investigate laser-material-atmosphere interactions at low pressures, and, based on trends observed, proposed that elevated ambient gas pressure may decrease vapor convection near the melt pool, and reduce overall vaporization.", "To our knowledge, there is only one report in the literature of L-PBF at pressures greater than 1 bar.", "Bidare et al.", "[21] describe the design of an open architecture L-PBF testbed which was later outfitted with a vacuum chamber to observe L-PBF of a single powder layer at pressures between 10 $\\mu $ bar and 5 bar [22], [23].", "Using Schlieren imaging, they observe the plume characteristic to L-PBF and report that increased ambient pressure slows its speed and increases its temperature.", "They note that increasing ambient pressure significantly reduces denudation of surrounding powder as the laser progresses across the powder bed.", "They also find that melt pool depth is decreased with increasing ambient pressure up to 5 atm.", "Bidare et al.", "also suggest that elevated pressure results in more aggressive spatter and a decrease in laser energy reaching the working material.", "However, it is not clear that an adequate gas flow over the build surface was maintained to fully decouple the effects of elevated pressure from those of the plume obscuring and scattering laser energy, nor was the scaling of melt penetration quantified in detail.", "Modifying an existing commercial L-PBF machine to achieve pressures significantly above one atmosphere would be challenging, and perhaps impossible given the gas flow and sealing requirements.", "Here, to investigate 
elevated pressure effects on L-PBF, we present a custom-built high pressure testbed.", "The system’s open architecture gives complete control over all relevant process variables, including laser power, scan speed, scan strategy, spot size, inert gas flow over the print area, and ambient pressure.", "In particular, the gas flow over the build area is simulated and validated, and the system is calibrated to ensure precise measurement of the laser spot size, and uniformity of exposure.", "Studies of melt track dimensions versus laser power and speed, and ambient pressure validate the system's baseline performance, and present preliminary insights as to the influence of elevated pressure on the L-PBF process window." ], [ "Chamber and build platform", "The high-pressure laser melting (HPLM) system was constructed upon a custom chamber (Parr Instrument Company) with a maximum operating pressure of 1900 psig, and a cylindrical 2.5 inch (diameter) by 6.5 inch (length) bore.", "The chamber features an oblong window flange, a removable head with four ¼” NPT ports on one end, and a single ¼” NPT port on the opposing end.", "Sapphire was chosen as the pressure window material owing to its high fracture toughness and high transmission rate of wavelengths required for near- and mid-wave infrared (NIR, MWIR) optical process monitoring.", "A custom-designed 3D printed nozzle is attached internally to one of the four inlet ports, creating a gas “knife” which is designed to sweep vapor and byproducts away from the build area.", "Due to the short distance between the window and the build surface, a fused quartz microscope slide (Fisher Scientific) is suspended between the build area and the sapphire window by a thin frame resting on the build platform.", "This slide serves as a sacrificial shield to block spatter particles that may otherwise penetrate the gas knife and impact the window [24].", "The resulting full print area through the pressure window and microscope slide is approximately 10 $\\times $  60 mm, and the longer dimension is parallel to the gas flow.", "Figure: HLPM testbed: (a) the pressure chamber containing the build platform; (b) schematic of the internal architecture, noting the kinematic constraint of the platform assembly, and the placement of the gas nozzle with respect to the build platform, protective glass slide, and window.", "The chamber bore is cylindrical, 2.5 inches (diameter) by 6.5 inches (length).Alignment between the laser’s focal plane and the powder bed is crucial for achieving uniform quality across the entire build area; misalignment will cause a change in effective laser spot size.", "To this end, a kinematic positioning system, displayed in Figure REF , was devised to exactly constrain the build platform while giving precise control of alignment with the focal plane.", "Two spheres contact the cylindrical bore of the chamber.", "One adjustable ball-end screw contacts a small groove cut into the floor of the chamber, opposite the window flange above, to center the platform and allow for fine adjustment of the platform’s tilt to match the laser focal plane.", "Finally, two spheres contact the back wall of the chamber’s interior, fully constraining the platform in both rotation and translation.", "A sample plate (i.e., the substrate upon which L-PBF is performed) is bolted to the build platform, and the glass slide and its holder are placed atop the platform.", "The build platform, sample, and slide can thus be easily removed and replaced with high repeatability." 
], [ "Inert gas flow", "Achieving high quality L-PBF requires a steady, uniform flow of inert gas across the build area [25], [26], [27].", "Regions of low speed flow below a certain threshold must be avoided, as even momentary local stagnation can result in defects due to inadequate removal of the melt-induced vapor plume and spatter.", "It has been recommended that a minimum gas velocity of 1 - 2 m/s over the powder bed is required to obtain optimal L-PBF quality [27], [28], [29].", "Although an even faster flow may be beneficial [25], [30], [31], [26], [27], at a certain velocity the gas flow will begin to disturb the powder bed below, which is undesirable.", "To this end, Shen et al.", "[28], building upon earlier work by Kalman et al.", "[32], modeled and validated the upper bound velocity that will not disturb the powder bed.", "Therefore, a first step in our design process was to determine a target gas velocity for flow within the HPLM chamber.", "Following the above guidance, this was established to be 3.1 m/s, for 15 - 45 $\\mu $ m stainless steel powder (SS316L).", "One unique challenge arises as a result of the small size of the pressure chamber.", "Some amount of local recirculation is common in L-PBF, as the gas flow spreads and is not always fully captured by the exit port.", "Recirculating gas in the chamber may carry with it soot and particles from the melt pool, which may cross the path of the laser, deposit on the laser window, or alight on the powder bed [24], [26], [33].", "Most commercial L-PBF printers use build chambers with several inches or more of vertical space between the build plate and the laser window above, and ample lateral margin around the build area.", "In our much smaller chamber, any recirculation near the window is much more likely to deposit soot onto the window surface, which may attenuate and scatter laser energy and/or cause thermal damage to the window itself.", "Toward preventing recirculation, Chen [33] and Saunders [29] both suggest a secondary flow be established with a second nozzle near the top of the chamber.", "This can be seen in some commercial L-PBF machines, usually positioned near the laser optical window at the top of the chamber.", "In the present study, a representation of the chamber, build platform, and a simple nozzle were modeled using COMSOL Multiphysics software, and various strategies to prevent recirculation were modeled, including baffles, guide vanes, tapering (funneling) the flow path, and enlarging the outlet port.", "Each of these approaches merely shifted the location at which the gas stream splits and recirculates.", "However, the aforementioned secondary nozzle approach was found to successfully prevent a low pressure region from forming and causing recirculation near the HPLM chamber window.", "This solution is illustrated in parts a and b of Figure REF .", "The final design was detailed in CAD (Solidworks) and brought into COMSOL via an integration module (Solidworks LiveLink).", "The critical dimensions were defined as the nozzle heights and widths (W1, H1, W2, H2), proportional entrance area at the split plane ($H3/H4$ ), and lengths (L1, L2).", "These were parametrically varied in the model, and design variations were judged on the velocity achieved above the entire print area, subject to the requirement for no recirculation near the laser window and the desire to minimize overall gas usage (max 15 NLPM).", "The simulated flow field through the final nozzle design and chamber is shown in Figure REF c. 
A physical model was printed with a Form2 3D printer (Formlabs) and attached to the gas flow system as shown in Figure REF d. Fan (GM816, Amgaze) and hot wire (405i, Testo) anemometers were used to measure gas speeds at distances up to three inches away from both (top and bottom) nozzle outlets.", "It was found that with an input flow of 12 - 15 NLPM, the final nozzle design produces gas velocities greater than $1.4\\,m/s$ above the entire print area under ambient conditions.", "Figure: CFD analysis of gas flow within the HPLM chamber.", "Gas velocity along the central vertical plane of the chamber (a) without and (b) with a secondary nozzle.", "(c) Final design, for which the simulation predicts a gas velocity exceeding 1.5 m/s at all points above the print area with an input flow of 12 NLPM.", "All simulations shown are at 0 psig.", "(d) 3D section view of the final nozzle design, and an image of a 3D printed nozzle coupled to the gas flow system.Yet, as a certain volumetric gas flow is necessary to remove vapor and spatter from above the melt pool, it is necessary to use a higher mass flow with increasing pressure above 0 psig.", "Additional COMSOL simulations showed that steady state gas flow in the chamber at pressures up to 1000 psig remains qualitatively similar to that at atmospheric pressure.", "However, as pressure is increased in the chamber, argon density also increases nearly proportionately.", "This denser gas moving at the same velocity exerts more force on the powder particles, which effectively lowers the maximum gas velocity allowed before powder is disturbed.", "Additionally, the refractive index of argon is known to increase with density according to the Lorentz-Lorenz equation: $\\frac{(n^2-1)}{(n^2+2)}=4\\pi /3\\cdot \\rho \\cdot \\alpha $ where n is refractive index, $\\rho $ is the density $(mol/cm^3)$ , and $\\alpha $ is the polarizability of argon at atmospheric pressure $(cm^3/mol)$ .", "Using a ray-tracing software (OpticStudio 16.5, Zemax), we found that the change in refractive index imparts a vertical shift in the location of the beam waist, effectively changing the spot size of the laser at a chosen focal plane.", "At 300 psig the argon increases the spot size by approximately 16 $\\mu $ m. This shift was not compensated for in our interpretation of pressure-dependent results, but we expect it accounts for a portion of the observed influence of pressure on apparent melt track dimensions under identical (input) laser parameters." ], [ "Gas delivery system", "A flow-through system was devised using a standard Ultra High Purity (UHP) argon bottle with a regulator to supply inert gas to the chamber at pressures as high as the rated pressure of the chamber.", "The system diagram is shown in Figure REF a. 
Needle valves upstream and downstream of the chamber allow the user to set both the pressure and flow of the argon stream within the operating limits of the system.", "A large filter rated to remove 95% of 10 nm diameter particles (United Filtration Systems, SLH818) protects equipment further downstream while adding minimal pressure drop to the system.", "A flowmeter calibrated for argon (Sierra Instruments, M100H1) reads out the mass flow through the system at a range of 10 - 1100 NLPM with 1%FS accuracy, which accommodates performance at pressures up to 1200 psig.", "A smaller flowmeter (Sierra Instruments, M100L) can be swapped in to measure up to 50 NLPM with 1%FS accuracy for near-atmospheric experiments.", "Finally, an oxygen gas sensor (Vernier) measures oxygen content from the exhaust before it is piped to the building ventilation system.", "For any desired experimental chamber pressure, a certain mass flow of gas through the entire system is required to establish the desired flow velocity over the build area.", "The backpressure resulting from that flow in the chamber with valve 2 fully open must then be minimized to enable the largest possible operable range of pressures and flows in the system.", "In Figure REF b, the system’s flow resistance is measured with the built-in flowmeter and pressure gauge and fit to a second-order curve.", "This curve is then extrapolated to show the full operating range of the system in Figure REF c.", "Starting with valve 1 fully closed and valve 2 fully open, opening valve 1 gradually moves the system's operating point outward along the blue curve, and closing valve 2 changes the resistance of the system, sending the blue curve upward to bring higher pressures into reach.", "We use the van der Waals equation to characterize the relationship between pressure and mass flow required for adequate gas knife performance, $\\left[P+\\frac{a}{\\left(V/n\\right)^2}\\right]\\cdot \\left(V/n - b\\right) = R\\cdot T$ Here, $P$ is the gas pressure (atm), $V$ is the gas volume (L), $n$ is the number of moles (mol), $R$ is the ideal gas constant (L-atm)/(mol-K), $T$ is the temperature (K), and $a$ and $b$ are empirical gas constants.", "Holding $V$ constant, using $T$ = 298, $R$ = 0.08206 and empirical constants for argon ($a$ = 1.355 and $b$ = 0.03201), and solving iteratively for $n$ , the mass flow of argon required to meet the steady-state velocity design objective at any given pressure can be calculated.", "This calculation is represented by the black solid line in Figure REF c, originating at 0 psig with 12 NLPM based on nozzle performance.", "Figure: (a) Diagram of the gas delivery system.", "(b) Chamber pressure measurements taken at various mass flows of argon, fit with a second order equation.", "(c) Expected system flow resistance across the full operating range, using an extrapolated fit curve, with valve 2 fully open.", "The black line represents target experimental conditions, which can be reached by gradually closing valve 2 to raise the blue curve." ], [ "Laser scanning system", "A custom-built mirror-based 2D scanning head is used to focus and direct the output of a fiber laser onto the build surface of the HLPM system.", "The laser (SP-0500-C-W-020-15-PIQ-011-001-001, SPI Lasers) produces 30 - 525 W at a wavelength of 1075 - 1080 nm.
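Referring back to the gas-delivery calculation above, the following minimal Python sketch solves the van der Waals equation for the argon molar volume at a given chamber pressure and scales the 12 NLPM baseline so that the volumetric flow (and hence velocity) over the build area is preserved. The bisection solver, the psig-to-atm conversion, and the constant-volumetric-flow assumption are illustrative choices, not the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import brentq

# Van der Waals constants for argon and conditions quoted in the text.
A_VDW, B_VDW = 1.355, 0.03201        # L^2*atm/mol^2, L/mol
R_GAS, T_K = 0.08206, 298.0          # L*atm/(mol*K), K
BASE_NLPM, BASE_PSIG = 12.0, 0.0     # gas-knife requirement measured at 0 psig

def molar_volume(p_atm):
    """Solve (P + a/Vm^2)(Vm - b) = R*T for the molar volume Vm (L/mol) by bisection."""
    f = lambda vm: (p_atm + A_VDW / vm**2) * (vm - B_VDW) - R_GAS * T_K
    return brentq(f, B_VDW + 1e-3, 100.0)

def required_nlpm(p_psig):
    """Normalized flow needed to keep the same volumetric flow (hence velocity) in the chamber."""
    p_atm = p_psig / 14.696 + 1.0            # gauge pressure -> absolute pressure in atm
    base_atm = BASE_PSIG / 14.696 + 1.0
    return BASE_NLPM * molar_volume(base_atm) / molar_volume(p_atm)

for p in (0, 150, 300):
    print(f"{p:4d} psig -> {required_nlpm(p):6.1f} NLPM")
```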
The laser’s output is directed via its output fiber into a collimator (106402X01, Coherent) where the beam is straightened.", "This beam then reflects off an angular alignment mirror (38-900, Edmunds Optics), passes through an AR-coated enclosure window (VPWW42-C, Thorlabs), and then reflects sequentially from a pair of orthogonal galvanometer mirrors (6240H, Novanta) before finally passing through an F-theta lens (FTH254-1064, Thorlabs), which focuses the straight beam down to a flat, horizontal focal plane below (see Figure REF a and REF a).", "Figure: (a) CAD model and (b) image of the scanning apparatus.Unlike commercial L-PBF systems where the optics are fixed inside the upper portion of the machine to prevent dust exposure, a goal of the present design was to enable the scanning head to be precisely repositioned over multiple testbeds, including the HLPM apparatus.", "Therefore, the entire laser system is outfitted with kinematic couplings (VB-75-SM and CS-1125-CPM, Bal-tec) which allow for rotation in increments of 60 degrees atop an aluminum extrusion tower (47065T501, McMaster Carr) attached to a rigid table.", "The laser scanning system and testbeds are enclosed behind a laser safety curtain.", "The galvanometer mirrors and F-theta lens comprising the scanning head are further enclosed (Figure REF ) in a custom stainless steel sheet metal box (Protocase).", "The enclosure is continuously purged with a 1 LPM flow of high-purity nitrogen gas to prevent contamination of the optics inside and provide cooling to the galvanometer motors.", "The aforementioned enclosure window allows the laser to enter the enclosure with minimal reflective losses, while blocking the entrance of dust from the open lab environment.", "Three thermistors were installed behind the angular adjustment mirror and two galvanometer mirrors as a further safety measure, and the control program was set to deactivate automatically and provide notice should a thermistor be exposed to laser energy.", "The galvanometers, F-theta lens, and enclosure (optics head) pivot jointly on two rotation stages (860-0150, Eksma Optics) whose axes of rotation intersect at the centroid of the first galvanometer mirror.", "This ensures that no matter the tip/tilt adjustments made by the user, the laser will strike the first galvanometer mirror at the center of its aperture.", "A top-mounted bubble level (2198A72, McMaster Carr) is used to level the optics head angle with an accuracy of approximately 1 minute in the two rotation stage axes.", "The third axis of rotation is built rigidly into the overall construction, as it only affects the rotation angle of the projected laser focal plane, not its tip nor tilt, and can thus be easily corrected for in the software or in the positioning of the HLPM chamber.", "The collimated laser beam must be precisely aligned to the galvanometer mirrors and F-theta lens to produce repeatable results.", "To this end, the laser fiber terminal, collimator, and angular alignment mirror are jointly suspended on a vertically oriented 2-axis stage (LX20, Thorlabs), which allows for precise translational alignment, and the angular alignment mirror is held in a kinematic mount (KCB2, Thorlabs) to give full control of the laser angle as it enters the galvanometer enclosure.", "As the laser beam exits the F-theta lens, its diameter decreases toward a minimum focal diameter (beam waist) before expanding again.", "Viewed as a volume, the focusing laser energy would take the shape of a hyperboloid of one sheet, with 
the diameter changing linearly along the axis of laser travel except near the beam waist, where diffraction drives a non-linear relationship.", "The laser spot size can therefore be changed by translating the entire laser head assembly vertically using the system’s motorized vertical rail (SIMO, PBC Linear).", "The vertical rail’s carriage position is tracked using a Mitutoyo 500-171-30 digital caliper (McMaster Carr) with an accuracy of 0.025 mm.", "One jaw of the caliper is bolted securely to the guide rail’s stationary frame and the other jaw is pulled down to touch the carriage for a highly repeatable measurement.", "Data correlating relative carriage position to laser spot size is presented in Figure REF , where each data point is the average of five measurements.", "We compared our experimental data to an optical model created in OpticStudio 16.5 (Zemax) comprising the laser collimator and F-theta lens, and we noted that spot sizes assuming an ideal Gaussian-distributed laser input match our experimental values to within a 9% difference in slope.", "The spot size measurement methodology is further explained in the Supporting Information.", "$z = 0$ corresponds to placement of the build surface at the beam waist, and negative laser head position corresponds to lowering the laser head toward the build platform.", "Although there appear to be two laser head positions which produce a given spot size, it is critical to operate only within $z < 0$ such that the beam waist is below the sample surface, not above.", "Operating with the beam waist (i.e.", "greatest energy density) above the build platform can rapidly heat or even ionize gas in the chamber, which could undesirably scatter or attenuate laser energy.", "Figure: (a) Diagram (not to scale) of exemplary laser focus profile with the beam waist placed virtually below the sample surface.", "(b) Laser spot size versus relative vertical position (z)(z) of the scanning head.The laser system is controlled using a cRIO 9039 FPGA (field programmable gate array) chassis from National Instruments, which communicates with the operator via ethernet to a dedicated PC workstation running LabVIEW software.", "A stepper driver module (NI-9503) actuates the vertical rail for laser focus adjustments, a digital IO module (NI-9375) communicates with the laser’s digital control hardware to operate it remotely, and an analog output module (NI-9269) sends voltages to synchronously control the two galvanometers and the laser power output at 10 $\\mu $ s intervals.", "In order to conduct an experiment, a custom Python pre-processing script converts Gcode commands into a CSV file representing the required sequential voltage commands.", "When the system is activated, a custom LabVIEW program reads the CSV file and passes the values on to the FPGA, which in turn produces the synchronous galvanometer position and laser power signals.", "The PC workstation is also directly connected to the laser with a Serial USB adapter; this connection facilitates operation of the visible pilot laser beam, monitoring of the laser temperature, and manual command of the laser when necessary." ], [ "Nominal melt track analysis", "As a first demonstration of HPLM system functionality, we show an array of vertical lines melted into a solid plate at 0 psig.", "As noted on Fig.", "REF , the D4$\\sigma $ spot diameter at the plate surface is 109 $\\mu $ m. 
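As an illustration of the pre-processing step described in the previous subsection (the custom Python script that converts Gcode into sequential voltage commands), a stripped-down sketch is given below. The supported Gcode subset, the mm-to-volt scale factor, and the CSV layout are assumptions rather than the actual format used by the testbed.

```python
import csv
import re

DT = 10e-6            # command interval, 10 microseconds
MM_PER_VOLT = 10.0    # assumed galvanometer scale factor (illustrative only)

def gcode_to_rows(lines, laser_watts=200.0):
    """Convert a minimal Gcode subset (G0 rapid / G1 mark, X and Y in mm, F in mm/s)
    into sequential (x_volts, y_volts, power_watts) rows sampled every DT seconds."""
    x = y = 0.0
    rows = []
    for line in lines:
        words = dict(re.findall(r"([GXYF])([-\d.]+)", line.upper()))
        gx, gy = float(words.get("X", x)), float(words.get("Y", y))
        feed = float(words.get("F", 1000.0))                  # mm/s
        power = laser_watts if words.get("G") == "1" else 0.0  # laser off during rapid moves
        dist = ((gx - x) ** 2 + (gy - y) ** 2) ** 0.5
        steps = max(int(dist / (feed * DT)), 1)
        for i in range(1, steps + 1):
            xi = x + (gx - x) * i / steps
            yi = y + (gy - y) * i / steps
            rows.append((xi / MM_PER_VOLT, yi / MM_PER_VOLT, power))
        x, y = gx, gy
    return rows

# Toy usage: jog to the start point, then mark one 8 mm line at 1000 mm/s.
with open("scan_commands.csv", "w", newline="") as f:
    csv.writer(f).writerows(gcode_to_rows(["G0 X0 Y0", "G1 X8 Y0 F1000"]))
```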
In this first experiment, lines of constant speed (1 m/s) and repeatedly alternating power (200 W, 130 W, 60 W) were applied (perpendicular to the gas flow, moving upstream as the array is printed) with a center-to-center spacing of 1 mm on a 0.25 inch thick bare (i.e., no powder layer) SS316L plate.", "All substrate plates used in this study were prepared by sandblasting the top surface to enhance absorptivity, then cleaning with acetone.", "A high resolution height map of the melt track array was obtained using a laser confocal microscope (Keyence VK-X1050).", "The centerlines of all melt tracks were found to be within 10 $\\mu $ m of their intended lateral location within the print area.", "The position of the melt tracks is influenced by many factors, and thus any error in that spacing represents the accumulated contributions of errors associated with those factors.", "Important considerations include: Alignment of all optical components (collimator, angular alignment mirror, galvos, F-theta lens, chamber optical window) must be maintained to ensure that no laser light is reflected or refracted away from the desired beam path across the full range of galvanometer rotation.", "Software must output properly scaled commands via the analog output module to produce appropriate galvanometer positioning.", "The analog output module must produce high fidelity analog signals to the galvanometer control board.", "The galvanometer control board must precisely convert voltage inputs into galvanometer positions.", "The F-theta lens must remain well aligned and clean to ensure a galvanometer's angular position translates precisely to the expected laser X,Y position.", "Parallelism of the build platform and laser focal plane is needed to ensure actual location of laser incidence on the plate is not distorted.", "The ability to precisely direct the laser spot on the print bed is an important first step, but to delve deeper into system performance, one must inspect the melt track morphology.", "Nine sections were made in the plate as denoted by white lines in Figure REF a.", "All sectioned samples were cut with a high speed diamond saw and mounted in copper or carbon powder, then sequentially ground and polished with 120, 300, 600, and 1200 grit sandpaper, a 3 $\\mu $ m diamond slurry, and finally 0.05 $\\mu $ m alumina suspension.", "The above procedure is according to the SumMet guide produced by Buehler [34].", "All samples were etched with an aqua regia mixture: 1 part hydrochloric acid, 1 part nitric acid, and 1 part deionized water.", "The etched melt pool sections were imaged optically (Zeiss Smartzoom 5), and the resulting images were analyzed using a custom Matlab script to extract the depth, width, height, and cross sectional area.", "Further details on this script are given in the Supporting Information.", "The resulting data points, shown in Figure REF b, are the average profile measurements from three sections of each melt track, showing generally good uniformity across the array of melted lines.", "At 200 W we see a slight quadratic variation in aspect ratio, from deeper, narrower melt pools near the center of the build area to shallower, wider pools at greater distance from the center.", "The variance in melt pool aspect ratio with respect to position on the plate increases with laser power, from $2.1\\times 10^{-3}$ at 60 W to $4.4\\times 10^{-3}$ at 130 W and $1.0\\times 10^{-2}$ at 200 W. Similarly, the variance in melt pool area increases from $3.76\\times 10^{4}$ $\\mu $ m$^2$ at 60 W to $6.47\\times 10^{4}$ $\\mu $ m$^2$ at 130 W and $2.30\\times 10^{5}$ $\\mu $ m$^2$ at 200 W.", "Figure: (a) Top down view of calibration melt tracks printed at 0 psig with 1 mm spacing.", "White horizontal lines denote section locations.", "A small area is magnified to the right for a clearer view of melt track surface quality.", "(b) Morphology data, taken as the average of three sections of each track.", "The gas knife nozzle terminates at X = 0 to the left of the print area shown, and the laser system origin (F-theta lens centerline) is located at X = 42.", "(c) Exemplary melt track section images from lines printed with a power of 130 W. Coordinates are marked with corresponding white crosses.", "The Y position indicates section distance from the top of a track.", "We attribute the trend at higher powers to intrinsic behavior of the F-theta lens, which causes the spot size to enlarge at positions farther away from the center of its aperture.", "As such, we measured the spot size at the laser system origin (X = 42 mm) to be 233.8 $\\mu $ m, while at X = 3.9, 16.6, 29.3, 54.7, 67.4, and 80.1 mm we measured $\\sigma $ = 251, 241, 242, 234, 241, and 244 $\\mu $ m, respectively.", "Other effects such as variation in sample surface roughness, defects or contaminants in or on the sapphire pressure window and quartz slide, turbulence or aberrations in gas knife flow, and misalignment between the sample surface and laser focal plane may be present but were not considered to contribute meaningfully to the slight variation in melt pool aspect ratio and area.", "Therefore, we considered this acceptable performance to investigate melting behavior at elevated pressures."
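The melt track measurements described above (depth, width, height, and cross-sectional area extracted from etched section images) can be illustrated with the following Python/NumPy sketch. It stands in for the authors' Matlab script and operates on an already-binarized melt pool mask with an assumed substrate surface row and pixel scale; image rows are assumed to increase downward into the plate.

```python
import numpy as np

def melt_pool_metrics(mask, surface_row, um_per_px):
    """Depth, width, height, and area of a melt pool from a binary cross-section mask.

    mask: 2D boolean array, True where the image shows re-solidified melt pool.
    surface_row: row index of the original (un-melted) substrate surface.
    um_per_px: image scale in micrometres per pixel.
    """
    rows, cols = np.nonzero(mask)
    depth = (rows.max() - surface_row) * um_per_px      # penetration below the surface
    height = (surface_row - rows.min()) * um_per_px     # bead crown height above the surface
    width = (cols.max() - cols.min() + 1) * um_per_px   # total melt track width
    area = mask.sum() * um_per_px**2                    # cross-sectional area
    return {"depth_um": depth, "width_um": width, "height_um": height,
            "area_um2": area, "aspect_ratio": depth / width}

# Toy usage: a synthetic semi-elliptical melt pool below a surface at row 20.
yy, xx = np.mgrid[0:100, 0:200]
mask = ((xx - 100) / 60.0) ** 2 + ((yy - 20) / 40.0) ** 2 <= 1.0
mask &= yy >= 10  # clip a little above the surface to mimic a small bead crown
print(melt_pool_metrics(mask, surface_row=20, um_per_px=2.0))
```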
], [ "Laser melting at elevated pressure", "Next, experiments were performed at elevated pressure.", "Specifically, arrays of 8 mm long melt tracks were printed on SS316L plate, with each track oriented perpendicular to the gas flow.", "A matrix of laser parameters ($P$ = 50, 150, 250, 350 W, $\\nu $ = 800, 1039, 1351, 1755, 2280, 2962, 3848, 5000 mm/s) was sorted randomly and printed sequentially starting closest to the chamber outlet, and therefore incrementally farther upstream with respect to the flow direction.", "As such, the spatter and fumes from each melt track were not swept over the location of upcoming melt tracks.", "Six control lines ($P$ = 200 W, $\\nu $ = 4300 mm/s) were added to the array at fixed positions across the entire print area in order to check for systematic errors in each experiment.", "Identical arrays were printed at 0 psig, 150 psig, and 300 psig.", "The sample plates were then sectioned near their midpoints and characterized as described previously.", "The resulting data are shown in Figure REF in the form of melt track aspect ratio (depth/width), and cross-sectional area versus normalized enthalpy.", "Hann et al.", "[35] first proposed normalized enthalpy as a means to study scaling of weld process parameters and bead (i.e, melt track) geometry.", "We use the following equation to define normalized enthalpy as $\\Delta H/h_s = \\frac{AP}{\\rho h_s\\sqrt{\\pi D\\nu \\sigma ^3}},\\ D = \\frac{\\kappa }{\\rho c}$ where $\\Delta H$ is the change in enthalpy, $h_{s}$ is the specific enthalpy of melting, $A$ is laser absorptivity, $P$ is laser power, $D$ is thermal diffusivity, $\\kappa $ is thermal conductivity, $\\rho $ is density, $c$ is the specific heat, $\\nu $ is the laser scan speed, and $\\sigma $ is the laser spot diameter.", "We assume a constant value for absorptivity, $A = 0.4$ .", "Values for enthalpy at melting, $h_{s} = 821\\,kJ/kg$ , density, $\\rho = 7107\\,kg/m^2$ , and solidus temperature, $T_m = 1675\\,K$ , are used according to the measurements of Pichler et al. 
[36].", "We calculate $D = 7.34E-6\\,m^2/s$ using additional values for thermal conductivity $\\kappa = 35.56\\,W/mK$ and specific heat $c = 681.9\\,J/kgK$ at $T = 1675\\,K$ reported by Kim [37].", "Figure: (a) Optical cross section views of individual exemplary melt tracks printed in SS316L plate.", "Normalized enthalpy values are listed for each row of images, and pressure values for each column.", "Images are at equal scale.", "(b) Melt pool depth/width ratio and total cross sectional area of each individual melt track, with exemplary tracks shown in (a) marked as red data points.As expected, we see in Figure REF a trend of increasing aspect ratio with increasing normalized enthalpy.", "Traditionally in laser welding, $d/w = 0.5$ , i.e., when the depth exceeds the half-width of the melt pool, is considered the threshold between conduction and keyhole mode melting [38], [39].", "Formation of keyhole porosity is a complex process which occurs at some point beyond the aspect ratio threshold, and parabolic melt pool shapes are considered desirable in L-PBF towards remelting into the previous layer, as long as keyhole pores are not formed.", "At elevated pressure, melt pool penetration depth decreases and width increases, relative to the nominal results at 0 psig.", "This is consistent with sub-atmospheric studies of L-PBF and laser welding experiments, where weld depth increased and width decreased with lower pressure [14], [15], [16], [17], [18], [22].", "In the present elevated pressure experiments, the normalized enthalpy at which $d/w$ exceeds 0.5 grows slightly with pressure (e.g., approximately $\\Delta H/h_{s}$ = 1.6 at 150 psig, and $\\Delta H/h_{s}$ = 2.1 at 300 psig), and the deviation in $d/w$ grows with increased normalized enthalpy.", "Yet, no significant shift in melt track sectional area is observed.", "The preliminary results here suggest that ambient pressure manipulates the force balance at the melt pool surface, which in turn influences the melt track geometry and progression of aspect ratio with increasing applied energy.", "However, under these pressures the amount of energy transferred to the melt pool is relatively unchanged, supporting our conclusion that the gas flow within the HPLM system is sufficient to prevent laser attenuation due to plume dynamics.", "Investigation of keyhole pore formation statistics versus pressure is left as future work.", "As a final demonstration of the HPLM system in the specific context of L-PBF, an array of melt tracks was printed on a 76 $\\mu $ m thick powder bed at both 0 psig and 150 psig.", "Gas atomized SS316L powder (John Galt Steel) was used, with a particle size distribution of 15-45 $\\mu $ m reported by the supplier.", "The underlying SS316L sample plate was prepared as before for bare plate studies, then bolted to the build platform with the addition of a thin steel shim which sets the powder layer thickness by forming a well in the print area.", "The shim was cut to fit the sample plate bolt pattern and desired powder deposition area, then lapped to remove burrs and reach a uniform thickness of 76 $\\pm $  3 $\\mu $ m. 
"Once the sample plate and shim were bolted to the platform, powder was dispensed manually to the mid-line of the sample well, and a 0.125 inch thick machinist's edge was held against the shim and drawn length-wise across the print area to spread the powder.", "A flashlight was shone at a slight angle across the powder bed in two orthogonal directions to check for unacceptable streaks or defects in the powder surface.", "An illustration of this procedure is given in the Supporting Information.", "At 0 psig the gas knife performed as designed, reaching the desired flow rate without disturbing the powder.", "At 150 psig a 40% reduction in mass flow was required to prevent disturbance of the powder bed during processing.", "Resulting powder bed melt track properties at 0 psig and 150 psig are shown in Figure REF .", "Figure: (a) Optical section views of individual exemplary melt tracks melted into a 76 $\mu $ m thick layer of SS316L powder.", "Normalized enthalpy values are listed for each row of images, and pressure values for each column.", "Images are at equal scale, and the powder layer thickness is denoted by dotted white lines.", "(b) Melt pool depth/width ratio and total cross-sectional area of each individual melt track, with exemplary tracks marked in red.", "Just as observed with the bare plate experiments, increasing pressure produced shallower, wider melt tracks.", "Melt track sectional area remains consistent with increasing pressure, while the melt pool aspect ratio changes significantly at 150 psig.", "Melt tracks transitioned from conduction mode to keyhole mode at similar values of $\Delta H/h_{s}$ as with bare plate; this was approximately $\Delta H/h_{s}$ = 1.1 at 0 psig and $\Delta H/h_{s}$ = 2.0 at 150 psig.", "To check whether the reduced gas flow itself negatively affected the resulting line array, the same reduction in gas flow was repeated for melting experiments on a SS316L bare plate, and no change in melt track dimensions was observed relative to the nominal-flow bare plate experiments.", "Taken together, our findings suggest that pressure has a significant influence on melt pool geometry in L-PBF, and that the HPLM system is capable of precision investigation of L-PBF at pressures of up to 150 psig, considering both machine operation and gas flow dynamics.", "To achieve high pressure multi-layer L-PBF, a larger chamber is required to allow room for a vertical stage and recoating mechanism without impinging on the clear path required for a stable, effective gas knife.", "Critically, greater distance between the powder bed and the laser optical window above will reduce the danger of direct impacts of spatter on the window and greatly alleviate the consequences of recirculation.", "Finally, a gas system which reuses inert gas via a continuous circuit will be required, as at the highest pressures currently reachable by the HPLM system (1200 psig), a standard bottle of argon could be drained in as little as 10 minutes, producing a very short window within which to conduct experiments."
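As a concrete illustration of the normalized enthalpy scaling used throughout this section, the short sketch below evaluates $\Delta H/h_s$ over the bare-plate laser parameter matrix using the SS316L material constants quoted above. The laser spot diameter is not reported in this excerpt, so the value of $\sigma$ used here is purely an illustrative assumption, and the printed numbers are indicative only, not the experimental values.

```julia
# Illustrative evaluation of ΔH/h_s = A*P / (ρ*h_s*sqrt(π*D*ν*σ^3)) over the
# bare-plate parameter matrix, using the SS316L constants quoted in the text.
# σ (spot diameter) is an assumed value for illustration only.
const A   = 0.4        # absorptivity (assumed constant, as in the text)
const ρ   = 7107.0     # density, kg/m^3
const h_s = 821e3      # specific enthalpy of melting, J/kg
const D   = 7.34e-6    # thermal diffusivity, m^2/s
const σ   = 80e-6      # laser spot diameter, m (illustrative assumption)

normalized_enthalpy(P, v) = A * P / (ρ * h_s * sqrt(π * D * v * σ^3))

powers      = [50, 150, 250, 350]                             # W
speeds_mm_s = [800, 1039, 1351, 1755, 2280, 2962, 3848, 5000] # mm/s

for P in powers, v in speeds_mm_s
    ΔH_hs = normalized_enthalpy(P, v / 1000)   # convert mm/s to m/s
    println("P = $P W, v = $v mm/s  ->  ΔH/h_s = $(round(ΔH_hs, digits = 2))")
end
```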
], [ "Conclusion", "This paper presents the design and validation of an instrument that enables research on the influence of elevated ambient pressure on laser materials processing, specifically guided to laser powder bed fusion and laser welding.", "Exemplary bare plate and single layer powder bed experiments demonstrate the testbed’s performance at pressures up to 300 psig.", "The HPLM system will enable direct observation of the impact of elevated pressure on the process windows of commonly processed materials, and will enable the exploration of new materials which cannot currently be processed with sufficient quality in L-PBF." ], [ "Author Contributions: CRediT taxonomy", "David A. Griggs: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data curation, Writing – original draft, Writing – review and editing, Visualization, Project administration.", "Jonathan S. Gibbs: Conceptualization, Software, Resources, Writing - original draft.", "Stuart P. Baker: Conceptualization, Software, Resources.", "Ryan W. Penny: Conceptualization, Validation, Formal analysis, Writing - review and editing.", "Martin C. Feldmann: Conceptualization.", "A. John Hart: Conceptualization, Methodology, Writing – review and editing, Visualization, Supervision, Project administration, Funding acquisition." ], [ "Acknowledgements", "Financial support for this project was provided by Honeywell Federal Manufacturing & Technologies (Honeywell FM&T).", "We thank Paul Carson (MIT), Dan Gilbert (MIT), Joe Wight (MIT), Ben Brown (Honeywell FM&T), and Rachel Grodsky (Honeywell FM&T) for their valuable input.", "J.S.G.", "and S.P.B.", "also acknowledge financial support of their graduate studies from the United States Navy and MIT Lincoln Laboratory, respectively.", "figuresection" ], [ "Laser power measurement", "There will necessarily be some amount of laser power lost to reflections or scattering in the optical components.", "The laser's power output was measured with all relevant optics in the beam path (including the sapphire pressure window and quartz slide), such that all optical losses that would occur in normal use are directly represented.", "Measurements at laser power commands from 30 W to 150 W were made with an Ophir Starbright meter and 30(150)A-LP1-18 sensor.", "The high linearity of these measurements $(R^{2} = 1)$ allows for extrapolation of expected laser power at the sample plate across the entire power range of the laser, from 30 W to 500 W. Thus, all mentions of laser power in this paper refer to the actual laser power which reaches the sample.", "Figure: Requested laser power vs. actual laser power before (Supplier Data) and after (Measured) optical losses." ], [ "Laser spot size measurement", "The spot size of the laser was imaged at various laser head relative positions using a DMM 37UX226-ML CMOS camera (The Imaging Source) with a pixel size of 1.85$\\mu $ m$\\times $ 1.85$\\mu $ m, and IC Capture software (The Imaging Source).", "The sensor is mounted directly to the HPLM build platform at a height measured precisely with a dial indicator, such that measurements can be translated directly to any sample of known thickness.", "The laser is fired in low-power mode at a minimum output of 30 W. 
"The angular alignment mirror is replaced with a beam sampler (BSF20-C, Thorlabs) such that approximately 90-99% of the energy (depending on polarization) is transmitted to a beam dump, while the remaining laser energy enters the galvanometer enclosure and is focused by the F-theta lens as in normal operation.", "A combination of reflective and absorptive neutral-density filters (Thorlabs) is placed in the focusing beam path to reduce the energy further to a brightness appropriate for the CMOS sensor.", "Images are taken at a rate of twenty frames per second, both to capture dark frames before the laser engages and the spot appears, and to capture the laser spot quickly and thus minimize the impact of thermal warping of the ND filters on spot measurements.", "A Matlab script was created to subtract the dark-frame background from the resulting images, then locate the laser spot centroid and fit a two-dimensional Gaussian to determine the D4$\sigma $ diameter of the laser beam.", "The laser used in the HPLM has a Gaussian energy distribution with an M$^{2}$ value of 1.1 $\pm $  0.1.", "The resulting D4$\sigma $ measurements, shown in Figure REF c, allow for repeatable spot size selection at the surface of any desired sample of known thickness." ], [ "Laser auxiliary systems", "A thermal safety system was devised in case the angular alignment mirror or galvanometer mirrors should fail while the laser is in use.", "Three thermistors (BC2385-ND, Digikey) are affixed behind the mirrors atop sandblasted aluminum strike-plates, such that laser energy penetrating the mirror would quickly warm the thermistor and/or surrounding aluminum.", "The thermistors are wired in parallel voltage divider circuits to the FPGA via an analog input module (NI-9205, National Instruments).", "The FPGA continuously monitors each voltage input and will quickly shut down the system should any voltage signal change beyond a set threshold.", "The completed laser tower is presented in Figure REF below.", "Figure: Image of the complete laser system with enclosure installed, and with nitrogen gas supply tubing and thermistor wiring (white/black, red/black, green/black) in place." ], [ "Powder bed sample preparation and monitoring", "The detailed process of preparing a single powder layer is portrayed in Figure REF below.", "The shim and sample plate are bolted to the build platform, then powder is dispensed in a line along the centerline of the long axis of the sample plate, and finally the powder is spread with a machinist's edge in one quick, uniform motion.", "Figure: Powder bed sample preparation process.", "(a) Exploded view of sample plate assembly.", "(b) Schematic of powder spreading action.", "Figure REF demonstrates the process of loading a sample into the pressure chamber.", "Figure: (a) Initial placement of build platform.", "(b) Final alignment of build platform, using a rigid object with a single point of contact to minimize lateral forces.", "During processing, the powder bed is continuously monitored by the operator via a machine vision camera, such that any scattering of powder can be immediately noted and the experiment terminated.", "Exemplary operator views are shown in Figure REF .", "Figure: Exemplary images of powder bed as monitored by the machine vision camera (a) before and (b) after a print."
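Returning to the spot-size measurement described earlier in this appendix: the D4σ diameter is obtained there from a two-dimensional Gaussian fit after dark-frame subtraction. As a rough cross-check, an equivalent second-moment (ISO 11146-style) estimate can be computed directly from the background-subtracted image. The sketch below is only an illustration under that alternative definition, written in Julia rather than the Matlab script actually used; it assumes real-valued intensity matrices and the camera's 1.85 μm pixel pitch.

```julia
# Second-moment (D4σ) estimate of the beam diameter from a spot image and a
# dark frame. This is an illustrative alternative to the 2D Gaussian fit
# described in the text; `px` is the pixel pitch in micrometres.
function d4sigma(img::AbstractMatrix{<:Real}, dark::AbstractMatrix{<:Real}; px = 1.85)
    Ib = max.(float.(img) .- float.(dark), 0.0)       # dark-frame subtraction, clipped at zero
    total = sum(Ib)
    ny, nx = size(Ib)
    xs = [j for i in 1:ny, j in 1:nx]                 # column (x) index of each pixel
    ys = [i for i in 1:ny, j in 1:nx]                 # row (y) index of each pixel
    xc = sum(Ib .* xs) / total                        # intensity-weighted centroid
    yc = sum(Ib .* ys) / total
    varx = sum(Ib .* (xs .- xc) .^ 2) / total         # second central moments
    vary = sum(Ib .* (ys .- yc) .^ 2) / total
    return 4 * sqrt(varx) * px, 4 * sqrt(vary) * px   # D4σ along x and y, in μm
end
```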
], [ "Melt track analysis", "To standardize the analysis of melt track geometry, a semi-automated routine was developed to identify the boundaries of the melt track and measure important track parameters.", "To efficiently accommodate a wide range of melt-track aspect ratios with a minimal number of measurements, we employ numerical integration using Simpson's first method in polar coordinates.", "This method lends itself well to dividing the surface and sub-surface track geometry which frequently diverge in size and shape.", "The program employs a masking technique to remove the upper dark region (the mounting media) in the section image and the lower light region (the undisturbed plate) below the melt-track.", "Further, the surface level was estimated and a rotational transformation was applied to the image to correct for any rotational misalignment of the base plate in the section image.", "Figure REF depicts this level surface estimate as a red line.", "The melt track is then identified by the analyst using a separate region-of-interest bounding box for “above grade” and the “below grade” regions.", "An array of 32 guide lines, evenly spaced $\\Delta \\theta = 11.25$ ° apart, are superimposed on the image radiating outward from the centers of these two boxes where they meet the grade line to aide in identifying each radii to the top or bottom boundary of the melt track.", "Figure: Example of graphical melt track section analysis.", "(a) Optical micrograph of 316L bare plate scanned with laser power of 400 W, scan speed of 1.27 m/s (ΔH/h s \\Delta H/h_{s} = 36.9) at pressure of 150 psig.", "(b) The same track image processed with above grade and below grade regions outlined in blue.Radii are first estimated using the light-dark transitions present in the masked image and then manually adjusted by eye by the analyst.", "The parameters obtained include depth of the melt track below the plate grade, height of the track above the grade, width at grade, width of the above grade (more used in powder-bed studies) and the areas of each region, $A$ .", "$A$ is approximated numerically by applying Simpson's first method in polar coordinates using the $n = 17$ radii, $\\rho _i$ , obtained for each angle of the shape as follows: $A = \\frac{1}{2}\\int _0^{\\pi /2} \\rho (\\theta )^2\\:d\\theta \\approx \\frac{1}{2}\\sum _{i=1}^n \\frac{\\Delta \\theta }{3}\\left(\\rho _i\\right)^2 \\cdot S.M._i$ where the Simpson's Multipliers, $S.M._i$ , are $\\left\\lbrace 1, 4, 2, 4, 2, \\cdots , 2, 4, 2, 4, 1\\right\\rbrace $ .", "Additionally, centroid coordinates $(\\overline{\\rho },\\overline{\\theta })$ (displayed in Figure REF as red circles) are calculated as follows: $\\overline{\\rho } \\approx \\frac{\\sum _{i=1}^n \\left(\\rho _i\\right)^3 \\cdot S.M._i}{\\sum _{i=1}^n \\left(\\rho _i\\right)^2 \\cdot S.M._i}\\\\\\overline{\\theta } \\approx \\frac{\\sum _{i=1}^n \\theta _i\\cdot \\left(\\rho _i\\right)^2 \\cdot S.M._i}{\\sum _{i=1}^n \\left(\\rho _i\\right)^2 \\cdot S.M._i}$ For high aspect ratios, such as instances of severe keyhole morphology, this method was adapted slightly to include 4 additional radii at half the global angular step-size.", "This may be seen in Figure REF in the increased density of points (and guide lines) at the bottom and top of the melt track." ] ]
2107.01744
[ [ "The Least Restriction for Offline Reinforcement Learning" ], [ "Abstract Many practical applications of reinforcement learning (RL) constrain the agent to learn from a fixed offline dataset of logged interactions, which has already been gathered, without offering further possibility for data collection.", "However, commonly used off-policy RL algorithms, such as the Deep Q Network and the Deep Deterministic Policy Gradient, are incapable of learning without data correlated to the distribution under the current policy, making them ineffective for this offline setting.", "As the first step towards useful offline RL algorithms, we analysis the reason of instability in standard off-policy RL algorithms.", "It is due to the bootstrapping error.", "The key to avoiding this error, is ensuring that the agent's action space does not go out of the fixed offline dataset.", "Based on our consideration, a creative offline RL framework, the Least Restriction (LR), is proposed in this paper.", "The LR regards selecting an action as taking a sample from the probability distribution.", "It merely set a little limit for action selection, which not only avoid the action being out of the offline dataset but also remove all the unreasonable restrictions in earlier approaches (e.g.", "Batch-Constrained Deep Q-Learning).", "In the further, we will demonstrate that the LR, is able to learn robustly from different offline datasets, including random and suboptimal demonstrations, on a range of practical control tasks." ], [ "Introduction", "Please follow the steps outlined below when submitting your manuscript to the IEEE Computer Society Press.", "This style guide now has several important modifications (for example, you are no longer warned against the use of sticky tape to attach your artwork to the paper), so all authors should read this new version." ], [ "Language", "All manuscripts must be in English." ], [ "Dual submission", "Please refer to the author guidelines on the CVPR 2018 web page for a discussion of the policy on dual submissions." ], [ "Paper length", "Papers, excluding the references section, must be no longer than eight pages in length.", "The references section will not be included in the page count, and there is no limit on the length of the references section.", "For example, a paper of eight pages with two pages of references would have a total length of 10 pages.", "There will be no extra page charges for CVPR 2018.", "Overlength papers will simply not be reviewed.", "This includes papers where the margins and formatting are deemed to have been significantly altered from those laid down by this style guide.", "Note that this guide already sets figure captions and references in a smaller font.", "The reason such papers will not be reviewed is that there is no provision for supervised revisions of manuscripts.", "The reviewing process cannot determine the suitability of the paper for presentation in eight pages if it is reviewed in eleven." 
], [ "The ruler", "The style defines a printed ruler which should be present in the version submitted for review.", "The ruler is provided in order that reviewers may comment on particular lines in the paper without circumlocution.", "If you are preparing a document using a non- document preparation system, please arrange for an equivalent ruler to appear on the final output pages.", "The presence or absence of the ruler should not change the appearance of any other content on the page.", "The camera ready copy should not contain a ruler.", "( users may uncomment the \\cvprfinalcopy command in the document preamble.)", "Reviewers: note that the ruler measurements do not align well with lines in the paper — this turns out to be very difficult to do well when the paper contains many figures and equations, and, when done, looks ugly.", "Just use fractional references (e.g.", "this line is $095.5$ ), although in most cases one would expect that the approximate location will be adequate." ], [ "Mathematics", "Please number all of your sections and displayed equations.", "It is important for readers to be able to refer to any particular equation.", "Just because you didn't refer to it in the text doesn't mean some future reader might not need to refer to it.", "It is cumbersome to have to use circumlocutions like “the equation second from the top of page 3 column 1”.", "(Note that the ruler will not be present in the final copy, so is not an alternative to equation numbers).", "All authors will benefit from reading Mermin's description of how to write mathematics: http://www.pamitc.org/documents/mermin.pdf." ], [ "Blind review", "Many authors misunderstand the concept of anonymizing for blind review.", "Blind review does not mean that one must remove citations to one's own work—in fact it is often impossible to review a paper unless the previous citations are known and available.", "Blind review means that you do not use the words “my” or “our” when citing previous work.", "That is all.", "(But see below for techreports.)", "Saying “this builds on the work of Lucy Smith [1]” does not say that you are Lucy Smith; it says that you are building on her work.", "If you are Smith and Jones, do not say “as we show in [7]”, say “as Smith and Jones show in [7]” and at the end of the paper, include reference 7 as you would any other cited work.", "An example of a bad paper just asking to be rejected: An analysis of the frobnicatable foo filter.", "In this paper we present a performance analysis of our previous paper [1], and show it to be inferior to all previously known methods.", "Why the previous paper was accepted without this analysis is beyond me.", "[1] Removed for blind review An example of an acceptable paper: An analysis of the frobnicatable foo filter.", "In this paper we present a performance analysis of the paper of Smith [1], and show it to be inferior to all previously known methods.", "Why the previous paper was accepted without this analysis is beyond me.", "[1] Smith, L and Jones, C. 
“The frobnicatable foo filter, a fundamental contribution to human knowledge”.", "Nature 381(12), 1-213.", "If you are making a submission to another conference at the same time, which covers similar or overlapping material, you may need to refer to that submission in order to explain the differences, just as you would if you had previously published related work.", "In such cases, include the anonymized parallel submission  as additional material and cite it as [1] Authors.", "“The frobnicatable foo filter”, F&G 2014 Submission ID 324, Supplied as additional material fg324.pdf.", "Finally, you may feel you need to tell the reader that more details can be found elsewhere, and refer them to a technical report.", "For conference submissions, the paper must stand on its own, and not require the reviewer to go to a techreport for further details.", "Thus, you may say in the body of the paper “further details may be found in ”.", "Then submit the techreport as additional material.", "Again, you may not assume the reviewers will read this material.", "Sometimes your paper is about a problem which you tested using a tool which is widely known to be restricted to a single institution.", "For example, let's say it's 1969, you have solved a key problem on the Apollo lander, and you believe that the CVPR70 audience would like to hear about your solution.", "The work is a development of your celebrated 1968 paper entitled “Zero-g frobnication: How being the only people in the world with access to the Apollo lander source code makes us a wow at parties”, by Zeus .", "You can handle this paper like any other.", "Don't write “We show how to improve our previous work [Anonymous, 1968].", "This time we tested the algorithm on a lunar lander [name of lander removed for blind review]”.", "That would be silly, and would immediately identify the authors.", "Instead write the following: We describe a system for zero-g frobnication.", "This system is new because it handles the following cases: A, B.", "Previous systems [Zeus et al.", "1968] didn't handle case B properly.", "Ours handles it by including a foo term in the bar integral.", "...", "The proposed system was integrated with the Apollo lunar lander, and went all the way to the moon, don't you know.", "It displayed the following behaviours which show how well we solved cases A and B: ... As you can see, the above text follows standard scientific convention, reads better than the first version, and does not explicitly name you as the authors.", "A reviewer might think it likely that the new paper was written by Zeus , but cannot make any decision based on that guess.", "He or she would have to be sure that no other authors could have been contracted to solve problem B. FAQ: Are acknowledgements OK?", "No.", "Leave them for the final copy.", "Figure: Example of caption.", "It is set in Roman so that mathematics(always set in Roman: BsinA=AsinBB \\sin A = A \\sin B) may be included without anugly clash." 
], [ "Miscellaneous", "Compare the following: Table: NO_CAPTIONSee The book, p165.", "The space after , meaning “for example”, should not be a sentence-ending space.", "So is correct, e.g.", "is not.", "The provided \\eg macro takes care of this.", "When citing a multi-author paper, you may save space by using “et alia”, shortened to “” (not “et.", "al.” as “et” is a complete word.)", "However, use it only when there are three or more authors.", "Thus, the following is correct: “ Frobnication has been trendy lately.", "It was introduced by Alpher , and subsequently developed by Alpher and Fotheringham-Smythe , and Alpher  .” This is incorrect: “... subsequently developed by Alpher   ...” because reference  has just two authors.", "If you use the \\etal macro provided, then you need not worry about double periods when used at the end of a sentence as in Alpher .", "For this citation style, keep multiple citations in numerical (not chronological) order, so prefer , , to , , .", "Figure: Example of a short caption, which should be centered." ], [ "Formatting your paper", "All text must be in a two-column format.", "The total allowable width of the text area is $6\\frac{7}{8}$ inches (17.5 cm) wide by $8\\frac{7}{8}$ inches (22.54 cm) high.", "Columns are to be $3\\frac{1}{4}$ inches (8.25 cm) wide, with a $\\frac{5}{16}$ inch (0.8 cm) space between them.", "The main title (on the first page) should begin 1.0 inch (2.54 cm) from the top edge of the page.", "The second and following pages should begin 1.0 inch (2.54 cm) from the top edge.", "On all pages, the bottom margin should be 1-1/8 inches (2.86 cm) from the bottom edge of the page for $8.5 \\times 11$ -inch paper; for A4 paper, approximately 1-5/8 inches (4.13 cm) from the bottom edge of the page." ], [ "Margins and page numbering", "All printed material, including text, illustrations, and charts, must be kept within a print area 6-7/8 inches (17.5 cm) wide by 8-7/8 inches (22.54 cm) high." 
], [ "Type-style and fonts", "Wherever Times is specified, Times Roman may also be used.", "If neither is available on your word processor, please use the font closest in appearance to Times to which you have access.", "MAIN TITLE.", "Center the title 1-3/8 inches (3.49 cm) from the top edge of the first page.", "The title should be in Times 14-point, boldface type.", "Capitalize the first letter of nouns, pronouns, verbs, adjectives, and adverbs; do not capitalize articles, coordinate conjunctions, or prepositions (unless the title begins with such a word).", "Leave two blank lines after the title.", "AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title and printed in Times 12-point, non-boldface type.", "This information is to be followed by two blank lines.", "The ABSTRACT and MAIN TEXT are to be in a two-column format.", "MAIN TEXT.", "Type main text in 10-point Times, single-spaced.", "Do NOT use double-spacing.", "All paragraphs should be indented 1 pica (approx.", "1/6 inch or 0.422 cm).", "Make sure your text is fully justified—that is, flush left and flush right.", "Please do not place any additional blank lines between paragraphs.", "Figure and table captions should be 9-point Roman type as in Figures REF and REF .", "Short captions should be centred.", "Callouts should be 9-point Helvetica, non-boldface type.", "Initially capitalize only the first word of section titles and first-, second-, and third-order headings.", "FIRST-ORDER HEADINGS.", "(For example, 1.", "Introduction) should be Times 12-point boldface, initially capitalized, flush left, with one blank line before, and one blank line after.", "SECOND-ORDER HEADINGS.", "(For example, 1.1.", "Database elements) should be Times 11-point boldface, initially capitalized, flush left, with one blank line before, and one after.", "If you require a third-order heading (we discourage it), use 10-point Times, boldface, initially capitalized, flush left, preceded by one blank line, followed by a period and your text on the same line." ], [ "Footnotes", "Please use footnotesThis is what a footnote looks like.", "It often distracts the reader from the main flow of the argument.", "sparingly.", "Indeed, try to avoid footnotes altogether and include necessary peripheral observations in the text (within parentheses, if you prefer, as in this sentence).", "If you wish to use a footnote, place it at the bottom of the column on the page on which it is referenced.", "Use Times 8-point type, single-spaced." ], [ "References", "List and number all bibliographical references in 9-point Times, single-spaced, at the end of your paper.", "When referenced in the text, enclose the citation number in square brackets, for example .", "Where appropriate, include the name(s) of editors of referenced books.", "Table: Results.", "Ours is better." 
], [ "Illustrations, graphs, and photographs", "All graphics should be centered.", "Please ensure that any point you wish to make is resolvable in a printed copy of the paper.", "Resize fonts in figures to match the font in the body text, and choose line widths which render effectively in print.", "Many readers (and reviewers), even of an electronic copy, will choose to print your paper in order to read it.", "You cannot insist that they do otherwise, and therefore must not assume that they can zoom in to see tiny details on a graphic.", "When placing figures in , it's almost always best to use \\includegraphics, and to specify the figure width as a multiple of the line width as in the example below    \\usepackage[dvips]{graphicx} ...    \\includegraphics[width=0.8\\linewidth]                    {myfile.eps}" ], [ "Color", "Please refer to the author guidelines on the CVPR 2018 web page for a discussion of the use of color in your document." ], [ "Final copy", "You must include your signed IEEE copyright release form when you submit your finished paper.", "We MUST have this form before your paper can be published in the proceedings.", "Please direct any questions to the production editor in charge of these proceedings at the IEEE Computer Society Press: Phone (714) 821-8380, or Fax (714) 761-1784." ] ]
2107.01757
[ [ "Variational Bayesian Inference for a Polytomous-Attribute Saturated\n Diagnostic Classification Model with Parallel Computing" ], [ "Abstract As a statistical tool to assist formative assessments in educational settings, diagnostic classification models (DCMs) have been increasingly used to provide diagnostic information regarding examinees' attributes.", "DCMs often adopt a dichotomous division such as the mastery and non-mastery of attributes to express the mastery states of attributes.", "However, many practical settings involve different levels of mastery states rather than a simple dichotomy in a single attribute.", "Although this practical demand can be addressed by polytomous-attribute DCMs, their computational cost in a Markov chain Monte Carlo estimation impedes their large-scale application due to the larger number of polytomous-attribute mastery patterns than that of binary-attribute ones.", "This study considers a scalable Bayesian estimation method for polytomous-attribute DCMs and developed a variational Bayesian (VB) algorithm for a polytomous-attribute saturated DCM -- a generalization of polytomous-attribute DCMs -- by building on the existing literature on polytomous-attribute DCMs and VB for binary-attribute DCMs.", "Furthermore, we proposed the configuration of parallel computing for the proposed VB algorithm to achieve better computational efficiency.", "Monte Carlo simulations revealed that our method exhibited the high performance in parameter recovery under a wide range of conditions.", "An empirical example is used to demonstrate the utility of our method." ], [ "*" ], [ "*" ], [ "left=30mm, right=30mm, top=35mm, bottom=30mm references.bbl breaklinks=true, colorlinks=true, citecolor=BlueViolet, linkcolor=red, urlcolor=blue, 1]Motonori Oka 1]Shun Saso 1]Kensuke Okada [1]Graduate School of Education, The University of Tokyo Author Note Motonori Oka: Figure: NO_CAPTION https://orcid.org/0000-0002-9867-8922 Shun Saso: Figure: NO_CAPTION https://orcid.org/0000-0002-7888-1544 Kensuke Okada: Figure: NO_CAPTION https://orcid.org/0000-0003-1663-5812 Correspondence should be sent to Motonori Oka, Graduate School of Education, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan.", "EMAIL: motonorioka@g.ecc.u-tokyo.ac.jp This work was supported by JSPS KAKENHI (Grant Numbers 19H00616, 21H00936, and 20H01720).", "A preliminary report of this study was presented at the World Meeting of the International Society for Bayesian Analysis 2021." 
], [ "abstract", "As a statistical tool to assist formative assessments in educational settings, diagnostic classification models (DCMs) have been increasingly used to provide diagnostic information regarding examinees’ attributes.", "DCMs often adopt dichotomous division such as mastery and non-mastery of attributes to express mastery states of attributes.", "However, many practical settings involve different levels of mastery states rather than a simple dichotomy in a single attribute.", "Although this practical demand can be addressed by polytomous-attribute DCMs, their computational cost in a Markov chain Monte Carlo estimation impedes their large-scale applications due to the larger number of polytomous-attribute mastery patterns than that of binary-attribute ones.", "This study considers a scalable Bayesian estimation method for polytomous-attribute DCMs and developed a variational Bayesian (VB) algorithm for a polytomous-attribute saturated DCM—a generalization of polytomous-attribute DCMs—by building on the existing literature in VB for binary-attribute DCMs and polytomous-attribute DCMs.", "Furthermore, we proposed the configuration of parallel computing for the proposed VB algorithm to achieve better computational efficiency.", "Monte Carlo simulations revealed that our method exhibited the high performance in parameter recovery under a wide range of conditions.", "An empirical example is used to demonstrate the utility of our method.", "Keywords: polytomous attribute, polytomous-attribute saturated DCMs, variational Bayesian inference, parallel computing" ], [ "Introduction", "The recent demand for formative assessments has motivated the development of psychometric models providing diagnostic information of students' latent traits.", "A class of these models is termed “diagnostic classification models (DCMs).” It postulates that latent traits generally consist of multidimensional discrete skills called “attributes,” and students belong to one of the attribute mastery profiles, each of which presents a combination of mastery and non-mastery for attributes ruppdiagnostic2010.", "Based on this assumption, DCMs aim to estimate mastery probabilities of attributes for every individual and classify them into attribute mastery profiles.", "Thanks to this information, practitioners can design classroom activities that place emphasis on students' educational needs.", "Although most of the DCMs assign binary attributes in their parameterization, a few DCMs with polytomous attributes have been developed to perform more finer-grained diagnosis [for example,][]tzurkarelitzordered2004,templingeneralized2004,vondaviergeneral2008,templinmeasuring2013,chengeneral2013.", "The seminal contribution to polytomous-attribute DCMs is Karelitz's () work, which introduced the ordered-category attribute coding (OCAC) framework to treat different attribute mastery levels in a single attribute.", "In contrast to binary attributes that only consider a dichotomous division between their mastery and non-mastery, this framework broadens its dichotomy and addresses the qualitative ordering of cognitive complexity within a single attribute.", "For instance, tjoeidentification2014 incorporated polytomous attributes as a part of the specified attributes for proportional reasoning, where “constructing ratios” and “constructing proportions” are assumed to present different mastery levels.", "Both attributes fall within the scope of ratio.", "However, constructing proportions is more difficult to master than 
constructing ratios because calculating proportions requires the understanding of ratios.", "The inclusion of such attributes forms a new class of attribute mastery profiles untouched by binary attributes and helps practitioners measure students' levels of mastery over the attributes with more refined granularity.", "This study considers the polytomous-attribute extension of an ordinary binary-attribute saturated DCM.", "A saturated DCM is a generalization of DCMs in the sense that all the main and interaction effects of attributes are allowed, and each possible attribute mastery profile can have a unique correct-response probability.", "Examples of such binary-attribute models include the general diagnostic model [GDM:][]vondaviergeneral2008, log-linear cognitive diagnostic model [LCDM:][]hensondefining2009, and generalized deterministic input, noisy “and” gate (G-DINA) model delatorregeneralized2011.", "For the generalized model of polytomous-attribute DCMs, chengeneral2013 developed the polytomous G-DINA (pG-DINA) model by introducing a method to reduce polytomous attribute mastery profiles to item-specific binary profiles pertaining to unique correct-response probabilities for an item.", "It enables to estimate the parameters of the pG-DINA model in the same manner as the G-DINA model.", "Regardless of polytomy in an attribute, a class of saturated DCMs holds substantial utility.", "First, any sub-models in DCMs, such as the deterministic input, noisy “and” gate [DINA;][]junkercognitive2001 model, deterministic inputs, noisy “or” gate [DINO;][]templinmeasurement2006 model, reduced reparameterized unified model [RRUM;][]hartzfusion2008, and compensatory RUM [CRUM;][]ruppdiagnostic2010, can be expressed within a saturated DCM by imposing appropriate restrictions on its parameters.", "For example, a saturated DCM with only the intercept and highest-order interaction terms of required attributes allowed, is effectively the DINA model, which assumes that examinees must master all the necessary attributes to answer an item correctly.", "This flexibility in modeling different types of item-responding processes brings about the second advantage.", "Since a saturated DCM nests its sub-models, we can infer the nature of an item-responding process behind an item by inspecting its item parameter estimates.", "As was the case with the DINA model, if the intercept and highest-order interaction terms of a saturated DCM are estimated to be high, and other terms are estimated to be close to zero, then the item-responding process behind a corresponding item is likely to be the DINA model.", "As such, item parameters in a saturated DCM richly portray item-responding processes behind items, informing practitioners quantitatively about how examinees responded to given items.", "Despite the merits of a saturated DCM mentioned above, the computing time of its Bayesian estimation can be painstakingly slow at large-scale settings where the numbers of examinees and attributes are sizable.", "This is because a Markov chain Monte Carlo (MCMC)—a commonly used method for Bayesian estimation—is generally not scalable to such large-scale settings owing to its stochastic search for the parameter space of targeted posteriors gelmanbayesian2013.", "To resolve this computational cost, a variational Bayesian (VB) inference method is often employed as an alternative to an MCMC estimation.", "VB inference is a deterministic and fast approach to approximate posterior distributions.", "This deterministic nature of an 
estimation procedure enables a scalable Bayesian estimation, and many applied researchers have utilized it for their Bayesian modeling to capitalize on its high scalability bleivariational2017.", "Several VB inference algorithms have recently been developed in DCM literature yamaguchivariational2020,yamaguchivariational2021,yamaguchivbmultiplechoice2020.", "In particular, yamaguchivariational2021 developed a VB algorithm for a binary-attribute saturated DCM by introducing a G-matrix to reformulate it as a Bernoulli mixture model so that the priors for its model parameters become conditionally conjugate; this conditional conjugacy makes it easy to derive a coordinate ascent mean-field VB inference wangvariational2013.", "Additionally, yamaguchivariational2021 confirmed the sound accuracy and fast computation of their VB algorithm and showed the superior computational efficiency of VB estimation over the maximum likelihood estimation based on an expectation-maximization (EM) algorithm in a condition where response data is obtained consecutively such as in a computerized adaptive testing.", "By contrast, none of the studies have worked on the problem in the scalability for polytomous-attribute DCMs despite the fact that these models would present a more severe computational problem in their application settings.", "This particular severity in their MCMC computation results from the nature of polytomous attributes in which the increase of mastery levels leads to expanding the parameter space of attribute mastery profiles generally from $2^K$ to $M^K$ .", "Here, $K$ and $M$ denote the number of attributes and mastery levels, respectively.", "This expansion causes more intensive MCMC computation than binary-attribute DCMs.", "Therefore, it is desirable to develop a scalable Bayesian estimation algorithm for a polytomous-attribute saturated DCM.", "Accordingly, in this study, we develop a VB algorithm for a polytomous-attribute saturated DCM, which builds on the prior work by chengeneral2013 and yamaguchivariational2021, and propose its parallelized algorithm based on the configuration of the parallel-E parallel-M algorithm for generalized latent variable models vondavierhigh-performance2016.", "Furthermore, we conduct simulation and empirical studies to assess the utility of our algorithm.", "The remainder of this paper is as follows.", "In Section 2, we first introduce two foundational models of chengeneral2013 and yamaguchivariational2021.", "We then explain the polytomous-attribute G-matrix to derive a VB algorithm for a polytomous-attribute saturated DCM and the configuration of its parallel computing.", "Subsequently, our simulation and empirical studies are presented in Sections 3 and 4.", "Lastly, we discuss the limitations and future directions of this study." 
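To make the growth of the latent class space concrete (this is what drives the MCMC cost referred to above), the following small snippet, written in Julia as used later in the paper, tabulates $2^K$ versus $M^K$; the specific $K$ and $M$ values are illustrative examples only.

```julia
# Number of attribute mastery profiles: 2^K for binary attributes versus
# M^K for M ordered mastery levels (illustrative values of K and M).
for K in (3, 5, 8), M in (2, 3, 4)
    println("K = $K, M = $M:  M^K = $(M^K)", M == 2 ? "   (binary case)" : "")
end
```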
], [ "pG-DINA Model", "Let $i\\;(1,\\cdots ,N)$ , $j\\;(1,\\cdots ,J)$ , and $k\\;(1,\\cdots ,K)$ express respondents, items, and attributes respectively.", "For ease of notation, we assume that all attributes have the same mastery level $M_k=M$ .", "The number of attribute mastery profiles for polytomous-attribute DCMs is $M^K$ , each of which is denoted as $l\\;(1,\\cdots ,L=M^K)$ .", "An implementation of DCMs requires a $J\\times K$ Q-matrix that specifies relationships between items and attributes.", "An entry $q_{jk}$ of a Q-matrix can take values from 0 to $M-1$ .", "If the value of $q_{jk}$ is $m\\;(0,\\cdots ,M-1)$ , it is necessary for a correct response to item $j$ to achieve more than $m$ -th level of mastery in attribute $k$ .", "A set of relevant attributes for item $j$ is indicated by $\\mathbf {q}_{j}=(q_{j1}, \\cdots , q_{jK})^\\mathsf {T}$ , which is the $j$ -th vector of a Q-matrix.", "In addition, let $\\mathbf {\\alpha }_l=(\\alpha _{l1}, \\cdots , \\alpha _{LK})^\\mathsf {T}$ of $\\mathbf {A}=(\\mathbf {\\alpha }_1, \\cdots , \\mathbf {\\alpha }_L)^\\mathsf {T}$ be the $l$ -th vector of an $L\\times K$ attribute mastery profile matrix $\\mathbf {A}$ that comprises all possible attribute mastery profiles.", "An entry of $\\mathbf {\\alpha }_l$ also can take values from 0 to $M-1$ .", "The superscript $T$ represents the transpose.", "Moreover, following the notations adopted in chengeneral2013 and delatorregeneralized2011, we use $K_j^* = \\sum _{k=1}^K I(q_{jk}>0)$ to denote the number of relevant attributes for item $j$ .", "$I(\\cdot )$ is an indicator function that takes the value of 1 when a given condition is satisfied.", "Using the notation of $K_j^*$ , we can reduce each vector of an attribute mastery profile matrix $\\mathbf {A}$ to the reduced vector $\\mathbf {\\alpha }_{jl}^*=(\\alpha _{jl1}^*, \\cdots , \\alpha _{jlK_j^*}^*)^\\mathsf {T}$ for item $j$ , where $l=1, \\cdots , M^{K_j^*}$ and each entry of $\\mathbf {\\alpha }_{jl}^*$ corresponds to attributes satisfying $q_{jk}>0$ on a vector $\\mathbf {q}_j$ .", "Besides, chengeneral2013 modified the reduced vector to the $\\textit {collapsed attribute vector}$ $\\mathbf {\\alpha }_{jl}^{**}=(\\alpha _{jl1}^{**}, \\cdots , \\alpha _{jlK_j^*}^{**})^\\mathsf {T}$ for further simplification, where each element of the reduced attribute vector $\\mathbf {\\alpha }_{lj}^*$ is collapsed into a binary element: $\\alpha ^{**}_{jlk} ={\\left\\lbrace \\begin{array}{ll}0 \\;\\text{if}\\; \\alpha ^{*}_{jlk} < q_{jk} & \\\\1 \\;\\text{otherwise} &\\end{array}\\right.", "}.$ An example of this transformation based on the one shown in chengeneral2013 is illustrated in Table REF .", "Consider the case of $K=3$ , $M=3$ , and the item with $\\mathbf {q}_j=(2, 1, 0)^\\mathsf {T}$ .", "Since $K^*_j$ for this $\\mathbf {q}$ vector is 2, the number of unique reduced attribute vectors $\\mathbf {\\alpha }_{lj}^*$ is $M^{K^*_j}=3^2=9$ .", "Those vectors can be further collapsed into $L_j^*=2^{K^*_j}=2^2=4$ unique collapsed attribute vectors $\\mathbf {\\alpha }_{lj}^{**}$ .", "These collapsed vectors are essentially the patterns that can be distinguished by item $j$ .", "Table: An example of collapsed attribute vectors when 𝐪 j =(2,1,0)\\mathbf {q}_j=(2,1,0)Accordingly, the number of unique correct-response probabilities for this item becomes 4, and each of them is expressed by the item response function of the pG-DINA model: $P(x_{ij}=1 \\vert \\mathbf {\\alpha }^{**}_{jl}) = \\delta _{j0} + \\sum _{k=1}^{K^*_j}\\delta _{jk}\\alpha 
_{jlk}^{**} + \\sum _{k^{\\prime }>k}^{K^*_j}\\sum _{k=1}^{K^*_j}\\delta _{jkk^{\\prime }}\\alpha _{jlk}^{**}\\alpha _{jlk^{\\prime }}^{**} + \\cdots + \\delta _{j1,\\cdots ,K^*_j}\\prod _{k=1}^{K^*_j}\\alpha _{jlk}^{**},$ where $l=1, \\cdots , L_j^*(=2^{K^*_j})$ .", "Here, $\\delta _{j0}$ and $\\delta _{jk}$ denote the intercept and main-effect terms of the relevant attributes, respectively.", "The terms after the second one denote all the combinations of interaction effects of the relevant attributes.", "With the simplified formulation of attribute mastery profiles elaborated in Table REF , the item response function of the pG-DINA model is formulated in the same manner as the G-DINA model, except that the attribute mastery profiles embedded in the item response function of pG-DINA model are the collapsed attribute vectors, whereas those of the G-DINA model are the binary reduced attribute vectors chengeneral2013.", "Because of such equivalence in their item response functions, the estimation procedure of the G-DINA model can be applied directly to that of the pG-DINA model." ], [ "Binary-Attribute Saturated DCM Using G-Matrices", "Since each item possesses its own correct-response probabilities associated with item-specific attribute mastery profiles, it can be construed that a set of binary responses $\\mathbf {x}_j$ for item $j$ is generated from heterogeneous populations with different Bernoulli distribution functions, where each item-specific attribute mastery profile has a unique Bernoulli probability.", "Based on this insight, yamaguchivariational2021 leveraged the idea of mixture modeling for heterogeneous populations and reformulated a binary-attribute saturated DCM as a Bernoulli mixture model by introducing a latent indicator variable $\\mathbf {z}_i$ and a G-matrix $\\mathbf {G}_j$ .", "In this section, for improved clarity of the notations between binary-attribute and polytomous-attribute DCMs, we use $H=2^K$ to denote the number of attribute mastery profiles for binary-attribute DCMs, although we used $L=M^K$ to denote those for polytomous-attribute DCMs in Section REF .", "A latent indicator variable $\\mathbf {z}_i =(z_{i1}, \\cdots , z_{iH})^\\mathsf {T}$ specifies to which attribute mastery pattern examinee $i$ belongs.", "It is a vector with $H=2^K$ elements in which the $h$ -th element takes the value of 1 when examinee $i$ belongs to class $h$ so that the values of $z_{ih}$ satisfy $z_{ih}=1$ and $\\sum _{h=1}^H z_{ih}=1$ .", "A G-matrix $\\mathbf {G}_j$ is the $H_j^*(= 2^{K^*_j}) \\times H$ matrix and introduced in yamaguchivariational2021 to reduce a latent indicator variable $\\mathbf {z}_i$ to an item-specific latent indicator variable $\\mathbf {z}_{ji}=(z_{i1}, \\cdots , z_{iH_j^*})^\\mathsf {T}$ with $H_j^*$ elements, indicating to which item-specific attribute mastery pattern examinee $i$ belongs.", "An example of a G-matrix $\\mathbf {G}_j$ based on yamaguchivariational2021 is illustrated in Table REF .", "Table: An example of a G-matrix when 𝐪 j =(1,1,0)\\mathbf {q}_j=(1,1,0)Consider the case of $K=3$ and the item with $\\mathbf {q}_j=(1, 1, 0)^\\mathsf {T}$ .", "Since the number of measured attributes for this item is $K^*_j = 2$ , the number of reduced attribute mastery patterns becomes $H^*_j=2^{K^*_j} = 4$ .", "Thus, $\\mathbf {G}_j$ becomes the $4 \\times 8$ matrix.", "With this matrix, an item-specific latent indicator variable $\\mathbf {z}_{ij}$ is computed by multiplying $\\mathbf {G}_j$ by $\\mathbf {z}_i$ such that $\\mathbf {z}_{ij}=\\mathbf 
{G}_j\\mathbf {z}_i$ .", "Elements of $\\mathbf {z}_{ij}$ are $z_{ijh^*}=\\sum _{h=1}^{H}g_{jh^*h}\\;z_{ih}$ , where $h^*=1, \\cdots , H^*_j(=2^{K_j^*})$ and $h=1,\\cdots ,H(=2^K)$ .", "These elements take the value of 1 when examinee $i$ belongs to the $h^*$ -th item-specific attribute mastery pattern and satisfy $\\sum _{h^*=1}^{H_j^*}z_{ijh^*}=1$ .", "For instance, under the specification in Table REF and $\\mathbf {z}_i =(0,0,0,0,0,0,0,1)^\\mathsf {T}$ , $\\mathbf {z}_{ij}=\\mathbf {G}_j\\mathbf {z}_i$ is computed as $\\mathbf {z}_{ij}=(0,0,0,1)^\\mathsf {T}$ , where $\\mathbf {G}_j$ correctly converts $\\mathbf {z}_i$ to an item-specific latent indicator vector $\\mathbf {z}_{ij}$ that has the value of 1 in its fourth element.", "Based on $\\mathbf {z}_i$ and $\\mathbf {G}_j$ , yamaguchivariational2021 formulated the item response function of the binary-attribute saturated DCM with G-matrices as follows $P(x_{ij}=1 \\vert \\mathbf {z}_i, \\mathbf {\\theta }_j, \\mathbf {G}_j, \\mathbf {q}_j) =\\prod _{h^*=1}^{H^*_j}\\theta _{jh^*}^{z_{ijh^*}},$ where $\\theta _{jh^*}$ denotes a correct-response probability to item $j$ for an examinee with the $h^*$ -th collapsed attribute mastery pattern.", "In addition, under the assumption of local independence given a latent indicator variable $\\mathbf {z}_i$ , yamaguchivariational2021 gives the likelihood function of the binary-attribute saturated DCM using G-matrices: $P(\\mathbf {X} \\vert \\mathbf {Z}, \\mathbf {\\Theta }, \\mathbf {G}, \\mathbf {Q}) &=& \\prod _{i=1}^{N}\\prod _{j=1}^{J}\\prod _{h^*=1}^{H^*_j}P(x_{ij} \\vert \\mathbf {z}_i, \\mathbf {\\theta }_j, \\mathbf {G}_j, \\mathbf {q}_j) \\\\&=& \\prod _{i=1}^{N}\\prod _{j=1}^{J}\\prod _{h^*=1}^{H^*_j} \\left\\lbrace \\theta _{jh^*}^{x_{ij}}(1-\\theta _{jh^*})^{x_{ij}}\\right\\rbrace ^{z_{ijh^*}}.$ Since the above likelihood function is in the Bernoulli mixture formulation, we can apply the well-known procedure of variational Bayesian inference for Bernoulli mixture models." ], [ "Formulate a G-Matrix for a polytomous-Attribute Saturated DCM", "Based on $\\mathbf {z}_i$ and $\\mathbf {G}_j^{poly}$ , the item response function of the polytomous-attribute saturated DCM is defined as $P(x_{ij}=1 \\vert \\mathbf {z}_i, \\mathbf {\\theta }_j, \\mathbf {G}_j^{poly}, \\mathbf {q}_j) =\\prod _{l^*=1}^{L^*_j}\\theta _{jl^*}^{z_{ijl^*}}.$ Its likelihood function is also defined as $P(\\mathbf {X} \\vert \\mathbf {Z}, \\mathbf {\\Theta }, \\mathbf {G}^{poly}, \\mathbf {Q}) &=& \\prod _{i=1}^{N}\\prod _{j=1}^{J}\\prod _{l^*=1}^{L^*_j}P(x_{ij} \\vert \\mathbf {z}_i, \\mathbf {\\theta }_j, \\mathbf {G}_j^{poly}, \\mathbf {q}_j) \\\\&=& \\prod _{i=1}^{N}\\prod _{j=1}^{J}\\prod _{l^*=1}^{L^*_j} \\left\\lbrace \\theta _{jl^*}^{x_{ij}}(1-\\theta _{jl^*})^{x_{ij}}\\right\\rbrace ^{z_{ijl^*}}.$ As yamaguchivariational2021 noted for the binary-attribute saturated DCM using G-matrices, the intercept, main, and interaction effects of attributes in the pG-DINA model can also be obtained by transforming the estimated $\\mathbf {\\theta }_j$ to $\\mathbf {\\delta }_j$ using the least-square estimation elaborated in delatorregeneralized2011.", "Next, we explain the fully Bayesian formulation of the polytomous-attribute saturated DCM using polytomous-attribute G-matrices." 
], [ "Bayesian Formulation", "The difference in the formulation between the binary- and polytomous-attribute saturated DCMs lies only in the specification of a G-matrix so that we follow the Bayesian formulation of the binary-attribute saturated DCM in yamaguchivariational2021.", "We first write down the distribution of $\\mathbf {Z}$ as a categorical distribution with the mixing proportions $\\mathbf {\\pi }=(\\pi _1, \\cdots , \\pi _L)^{\\mathsf {T}}$ , which is given as $P(\\mathbf {Z} \\vert \\mathbf {\\pi }) = \\prod _{i=1}^N\\prod _{l=1}^L \\pi _{l}^{z_{il}}.$ The prior over $\\mathbf {\\pi }$ was selected to be a Dirichlet distribution with the parameter $\\mathbf {\\delta }^0=(\\delta ^0_1, \\cdots , \\delta ^0_L)^{\\mathsf {T}}$ and given as, $P(\\mathbf {\\pi } \\vert \\mathbf {\\delta }^0) = \\prod _{l=1}^L \\pi _{l}^{\\delta _{l}^0-1}.$ We chose a Beta distribution for the priors over the correct-response probability parameter $\\theta _{jh^*}$ , which is given as $P(\\theta _{jh^*} \\vert a_{jh^*}^{0}, b_{jh^*}^{0}) \\propto \\theta _{jh^*}^{a_{jh^*}^{0}-1}(1-\\theta _{jh^*})^{b_{jh^*}^{0} -1},$ where $(a_{jh^*}^{0}, b_{jh^*}^{0})$ are the hyperparameters for $\\theta _{jh^*}$ .", "In addition, the assumption of conditional independence on the correct-response probability parameters enables the joint probability of $\\theta _{jh^*}$ to be formulated as follows: $P(\\mathbf {\\Theta } \\vert \\mathbf {A}^{0}, \\mathbf {B}^{0}) \\propto \\prod ^{J}_{j=1}\\prod ^{H^*_j}_{h^*=1}\\theta _{jh^*}^{a_{jh^*}^{0}-1}(1-\\theta _{jh^*})^{b_{jh^*}^{0} -1}.$ Based on the likelihood and the priors, the joint posterior can be obtained as $&&P(\\mathbf {Z}, \\mathbf {\\Theta }, \\mathbf {\\pi } \\vert \\mathbf {X}, \\mathbf {G}^{poly}, \\mathbf {Q}, \\mathbf {\\delta }^0, \\mathbf {A}^{0}, \\mathbf {B}^{0}) \\nonumber \\\\&&\\quad \\propto P(\\mathbf {X} \\vert \\mathbf {Z},\\mathbf {\\Theta },\\mathbf {G}^{poly}, \\mathbf {Q})P(\\mathbf {Z} \\vert \\mathbf {\\pi })P(\\mathbf {\\pi } \\vert \\mathbf {\\delta }^0)P(\\mathbf {\\Theta } \\vert \\mathbf {A}^{0}, \\mathbf {B}^{0}).$" ], [ "Variational Bayesian Inference", "As noted by blitzsteinintroduction2019, saying that “Conditioning is the soul of statistics” (p. 
46), the quintessence of Bayesian statistics is to update beliefs in current reasoning toward phenomena of interest based on observations.", "The mathematical formulation of this update is expressed as the Bayes' rule (Eq.", "1), in which the posterior distributions capture the uncertainty of parameters: $P(\\mathbf {\\Psi }\\vert \\mathbf {X}) = \\frac{P(\\mathbf {X}\\vert \\mathbf {\\Psi })P(\\mathbf {\\Psi })}{\\int P(\\mathbf {X}\\vert \\mathbf {\\Psi })P(\\mathbf {\\Psi })d\\mathbf {\\Psi }}.$ Although the Bayes' rule offers the base of principled statistical inference, computing posterior distributions requires the marginalization term in the denominator.", "The closed-form solution for this term is usually unavailable in practice, and its numerical integration involves prohibitive computational cost bishoppattern2006.", "Variational Bayesian inference transforms the intractable posterior computation into an optimization problem in which the objective is to search for the parametric form of distributions, often referred to as “variational distributions”, that produces the best approximation to posteriors galdovariational2020.", "In many cases, the Kullback–Leibler(KL) divergence is selected as a metric to measure the discrepancy between the variational distribution $q(\\mathbf {\\Psi })$ and the posterior distribution $P(\\mathbf {\\Psi }\\vert \\mathbf {X})$ .", "The KL emerges from the well-known decomposition of the log marginal likelihood: $\\log P(\\mathbf {X}) &=& \\int q(\\mathbf {\\Psi })\\log \\frac{P(\\mathbf {X},\\mathbf {\\Psi })}{q(\\mathbf {\\Psi })}d\\mathbf {\\Psi } - \\int q(\\mathbf {\\Psi })\\log \\frac{P(\\mathbf {\\Psi } \\vert \\mathbf {X})}{q(\\mathbf {\\Psi })}d\\mathbf {\\Psi } \\\\&=& L(q) + \\mathrm {KL}[ q(\\mathbf {\\Psi })\\parallel P(\\mathbf {\\Psi }\\vert \\mathbf {X})].$ Here, $L(q)$ is the lower bound of the log marginal likelihood (Variational Lower Bound; VLB).", "The minimization of the KL corresponds to the maximization of the VLB, and $q(\\mathbf {\\Psi })$ equals to $P(\\mathbf {\\Psi }\\vert \\mathbf {X})$ when the KL becomes 0 zhangadvances2019.", "To alleviate the complexity in optimizing the KL, we assume that the variational distributions can be factorized into $S$ components, which is known as the mean-field assumption on $q(\\mathbf {\\Psi })$ : $q(\\mathbf {\\Psi }) = \\prod _{s=1}^S q(\\psi _s).$ Alternately, we update a set of parameters for each $q(\\psi _s)$ in a manner that satisfies $q(\\psi _s) \\propto \\exp (\\mathrm {E}_{s^{\\prime }\\ne s}[ \\log P(\\mathbf {X}, \\mathbf {\\Psi })])$ until the changes of the VLB during iteration achieve the predefined stopping criterion.", "In the case of the polytomous-attribute saturated DCM using polytomous-attribute G-matrices, we applied the mean-field assumption to variational distributions of the model parameters $\\mathbf {\\Psi }=\\lbrace \\mathbf {Z}, \\mathbf {\\Theta }, \\mathbf {\\pi }\\rbrace $ such that: $q(\\mathbf {\\Psi }) &=& q(\\mathbf {Z})q(\\mathbf {\\pi }, \\mathbf {\\Theta }) \\\\&=& \\left(\\prod _{i=1}^{N}q(\\mathbf {z}_i)\\right)\\left(\\prod _{j=1}^{J}\\prod _{l^*=1}^{L_j^*} q(\\theta _{jl^*})\\right)q(\\mathbf {\\pi }).$ As mentioned in Section REF , the difference between the binary- and polytomous-attribute saturated DCMs is in the formulation of a G-matrix and not in the the formulation of their joint posterior.", "Accordingly, the derivation of a variational Bayesian algorithm for the binary-attribute saturated DCM elaborated in yamaguchivariational2021 can 
be applied directly to the proposed method.", "For the details of its derivation, refer to yamaguchivariational2021." ], [ "Parallelization of the Proposed Algorithm", "Variational Bayesian inference generally consists of two steps: the variational E-step (VE-step) and variational M-step (VM-step).", "This is because the expectation step (E-step) in an EM algorithm corresponds to the update of the variational posteriors of latent variables, and the maximization step (M-step) in an EM algorithm corresponds to the update of the variational posteriors of parameters.", "Hence, the configuration of the parallelized EM algorithm for generalized latent variable models originally developed in vondavierhigh-performance2016 is applicable to the proposed method.", "After initializing the parameters, the parallel-E parallel-M algorithm in the von Davier's study () starts with parallelizing the E-step using $C$ cores, subdivides examinees into $C$ groups, computes posteriors of latent variables given responses of examinees, and calculates the expected counts of examinees in latent classes in parallel by using $C$ worker processes.", "These computed posteriors are aggregated in a master process.", "Subsequently, the parallel-M-step proceeds with subdividing items into $C$ groups, computes gradients of parameters, and updates parameters with these computed gradients in parallel by using $C$ worker processes.", "These updated parameters are aggregated in a master process.", "Finally, a convergence criterion is evaluated using the latent variables and the parameters updated in the previous steps.", "These steps continue until the predefined stopping criterion is achieved.", "In the case of the proposed method, we first allocate $C$ cores for parallelization.", "Then, we subdivide examinees into $C$ groups, update the variational posteriors of the latent variables $q(\\mathbf {z}_i)$ in parallel, and aggregate these results in a master process.", "Subsequently, the variational posteriors of the mixing proportions $q(\\mathbf {\\pi })$ is computed based on the values from the previous step.", "In the VM-step, we subdivide items into $C$ groups and update the variational posteriors of the correct-response probability parameters $q(\\theta _{jh^*})$ in parallel.", "These results are aggregated in a master process, and the value of VLB is computed using the results from the VE- and VM-steps.", "These steps continue until the predefined stopping criterion is achieved." 
], [ "Simulation Design", "To confirm whether the proposed algorithm can recover the true parameter values, we conducted a simulation study based on the specifications in yamaguchivariational2021 and chengeneral2013.", "We considered the sample size of 500 or 5000, the number of items 30, the correlation coefficients among attributes .0, .5, or .8, the number of attributes $K=5$ , and the number of mastery levels $M=3$ .", "With respect to true Q-matrices, we employed the two Q-matrices specified in chengeneral2013.", "We set the Q-matrix denoted as “Q1” in Table REF for the simple condition, where the maximum number of measured attributes is two, and the correct-response probability parameters gradually increase as an examinee acquires the required attributes.", "Similarly, we set another Q-matrix denoted as “Q2 in Table REF , for the complex condition, where the maximum number of measured attributes is three, and the different assumptions on item-responding processes are assigned to a set of items.", "For instance, the items from 6 to 10 hold the non-compensatory assumption in which their correct-response probability increases only when all the required attributes are mastered.", "To generate attributes mastery profiles, we applied the following criteria using a multivariate standard normal distribution $\\mathbf {\\lambda }_{i} \\sim \\mathrm {MVN}(\\mathbf {0}, \\mathbf {\\Sigma })$ : $\\alpha _{ik} = {\\left\\lbrace \\begin{array}{ll}M-1 \\; \\mathrm {if} \\;\\lambda _{ik} \\ge \\phi ^{-1}(\\frac{M-1}{M}) & \\\\\\vdots &\\\\1 \\; \\mathrm {if} \\; \\lambda _{ik} \\ge \\phi ^{-1}(\\frac{1}{M}) & \\\\0 \\; \\mathrm {otherwise}\\end{array}\\right.", "}.$ The correlation coefficients among attributes $.0$ , $.5$ , or $.8$ were assigned in the off-diagonal entries of $\\mathbf {\\Sigma }$ .", "This generation method was modified from the original criteria used in chiucluster2009 and liudata-driven2012 to produce polytomous attributes.", "For each artificial dataset, we computed the expected a posteriori (EAP) estimates of the model parameters from their variational posterior distributions.", "Two hundred datasets were simulated for all the conditions.", "Table: Specifications of the Q-matrix and correct-response probability parameters for the simple conditionTable: Specifications of the Q-matrix and correct-response probability parameters for the complex conditionWe assessed the performance of parameter recovery with the bias and root-mean-square error (RMSE).", "Bias and RMSE were computed as $\\mathrm {Bias} = \\frac{1}{200}\\sum _{m=1}^M \\left(\\hat{\\theta }_{\\mathrm {est}}^{(m)} - \\theta _{\\mathrm {true}} \\right) , \\\\\\mathrm {RMSE} = \\sqrt{ \\frac{1}{200}\\sum _{m=1}^M \\left(\\hat{\\theta }_{\\mathrm {est}}^{(m)} - \\theta _{\\mathrm {true}} \\right)^2 },$ where $\\theta _{\\mathrm {true}}$ denotes the true value of a correct-response probability parameter.", "We averaged these measures across the related items." 
], [ "Settings for Estimation", "We adopted the same settings on the three hyperparameters $\\mathbf {\\delta }^0$ , $\\mathbf {A}^0$ , and $\\mathbf {B}^0$ as in yamaguchivariational2021.", "Specifically, a vector of ones was assigned to $\\mathbf {\\delta }^0$ .", "For $\\mathbf {A}^0$ and $\\mathbf {B}^0$ , we set the weakly informative priors such that the prior expectation of correct-response probability in the pattern with no relevant attributes mastered was below 0.5, and that of the pattern with all relevant attributes mastered was larger than 0.5.", "The initial values of the variational parameters for latent variables $z_{il}$ were set to be $1/L$ , meaning that the probability of which examinee $i$ belongs to pattern $l$ is equivalent across all the patterns.", "Finally, the stopping criteria for the proposed algorithm was specified such that the iteration stops when the maximum change in VLB becomes less than $10^{-4}$ or the number of iterations reaches 2000.", "We wrote all the programs for this study in the Julia programming language [1.61; ][]bezansonjulia2017.", "You can access the estimation code at the open science framework: https://osf.io/fgn3t/?view_only=1dcd3c070b1d430fb2e0f08b55145629." ], [ "Results", "Table REF shows the biases and RMSEs for the correct-response probability parameters in all the conditions.", "“Q1” denotes the simple Q-matrix, and “Q2” denotes the complex Q-matrix.", "Owing to the large number of the correct-response probability parameters, we summarized their biases and RMSEs in the same manner as yamaguchivariational2021, where the biases and RMSEs were averaged according to the number of measured attributes.", "We observed that the values of biases were small in all the conditions.", "Concerning RMSEs, the increase in sample size lowered the values of RMSEs.", "Additionally, as the number of measured attributes increased, the values of RMSEs worsened.", "Similarly, as the value of correlation coefficients among attributes increased, the RMSEs deteriorated when the number of measured attributes was greater than two.", "Furthermore, large values of RMSEs were observed when the number of measured attributes was three under the condition with the complex Q-matrix and the sample size of 500.", "This result indicates that the estimates of the correct-response probability parameters should be interpreted with caution when the number of measured attributes is large, and the sample size is small.", "Although the instability in parameter estimation appeared in such conditions, the parameter recovery was generally satisfactory under various conditions.", "Table: Biases and RMSEs for correct-response probability parameters in the simulation conditions" ], [ "Empirical Study", "We conducted an empirical study to show the utility of the proposed method by comparing it with the performance of an MCMC estimation.", "The data from the standardized achievement test for Grade 9 Mathematics garnered in 2019 by TOKYO SHOSEKI CO., LTD. 
was used for the empirical evaluation.", "The contents of this examination target the following four aspects of mathematics: numbers and algebraic expressions, geometrical figures, functions, and making use of data.", "Examples of items modified from the original ones include “simplify: $(-12)xy^2 \\div 6xy \\times (-4xy)$ ” and “solve for $a$ in the following equation: $4a -7 = 8a -9$ .” The sample size is 21,888, and the number of items is 34.", "Regarding attribute specification, the second author identified three binary and polytomous attributes for this test based on the perspectives of cognitive and educational psychology.", "This Q-matrix was validated by two educational psychologists, and its inconsistencies in the validation process were discussed between them.", "The first attribute is binary and represents a computational skill.", "The second attribute is polytomous with the levels of $M=3$ and represents (1) comprehension of procedure and (2) conceptual understanding of procedure.", "The third attribute is binary and represents comprehension of terminology.", "The detailed specification of the Q-matrix for this examination is shown in Table REF .", "Table: The specification of the Q-matrix for the empirical studyConcerning an MCMC estimation, we developed the Gibbs sampler for the polytomous-attribute saturated DCM using polytomous G-matrices.", "The specifications of hyperparameters were also set to be the same as the ones in the simulation study.", "When implementing this Gibbs sampler, we run three chains of 5,000 iterations with 2,000 burn-in.", "We assessed the convergence of the MCMC chains via a convergence diagnostics called the rank-normalized split-$\\hat{R}$ proposed in vehtarirank-normalization2021 and confirmed that all the parameters satisfied $\\hat{R}<1.05$ .", "Additionally, we parallelized generating MCMC chains using three cores.", "For the proposed method, we assigned the same specifications in the simulation study and set the number of cores to $C=8$ .", "We first show the computation time of the Gibbs sampler and the proposed method in Table REF .", "As expected, the parallelized VB algorithm outperformed the parallelized Gibbs sampler in computation time, and the computation time of our method was about 37 times faster than that of the Gibbs sampler.", "Table: Comparison in computation time between the parallelized Gibbs sampler and parallelized VB algorithmNext, we show the results of their parameter estimation.", "Tables REF and REF present the EAP and posterior standard deviation (SD) estimates for the correct-response probability parameters from the Gibbs sampler and the proposed method, respectively.", "The EAP estimates of the correct-response probability parameters from our method were closely aligned with those from the Gibbs sampler.", "Their posterior SD estimates from our method were also close to those from the Gibbs sampler, although our method moderately underestimated the posterior SDs compared to the Gibbs sampler in the items that the corresponding MCMC-estimated posterior SDs were relatively large.", "This underestimation of posterior variance is the well-known tendency of VB inference bishoppattern2006.", "However, the biggest absolute differences in the EAP and posterior SD estimates between these two estimation methods were .0180 and .0730, indicating that the estimated values of the correct-response probability parameters from the proposed method were mostly consistent with those from the Gibbs sampler.", "Table REF presents the EAP 
and posterior SD estimates for the mixing proportion parameters.", "As with the correct-response probability parameters, the EAP and posterior SD estimates from our method were closely aligned with those from the Gibbs sampler.", "The largest absolute differences in the EAP and posterior SD estimates between these two estimation methods were .0002 and .0022.", "These results indicate that our method can estimate the model parameters as accurately as the Gibbs sampler.", "Table: References" ] ]
2107.01865
[ [ "Beyond Fowler-Nordheim model: Harmonic generation from metallic\n nano-structures" ], [ "Abstract Metallic structures interacting with electromagnetic fields are known to exhibit properties similar to those found in atoms and molecules, such as multi-photon and tunnel ionization.", "Developing this similarity beyond the electron emission current, we generalize the wellknown Fowler-Nordheim model, and predict heretofore unrecognized source of nonlinear optical response from nano-structures exposed to illumination with intense optical pulses." ], [ "Introduction", "The research into physics of structured surfaces and in particular nano-structures has been the subject of a growing interest.", "In this context, one of the richest areas of investigation has been the non-linear light-matter interactions [1], [2], [3], [4] and in particular the emission of electrons due to irradiation by high-intensity optical pulses [5], [6], [7], [8], [9], [10].", "Nonlinear optical properties of nano-scale surfaces is another field which attracts considerable attention [11], [12], [13], [14].", "While the relation between the emitted electron current and the electric field intensity has been studied for almost a century [15], [16], [17], recent experiments open a whole new range of opportunities for applications and basic research [2], [5], [6].", "It has been recognized that the behavior of nano-structures exposed to the external optical pulse exhibits some important similarities to the dynamics of single atoms in the external field [4] such as multiphoton ionization [18] and strong field ionization [19].", "However, so far the parallels have been recognized mainly for the electrons liberated from the system.", "In atomic and molecular systems there is always a nonlinear polarization response which accompanies the strong-field ionization [20].", "In fact, whenever the external field becomes strong enough to cause field-ionization in atomic species, it also necessarily gives rise to a dipole moment as a result of the deformation of electronic wavefunctions in bound-states [21].", "As a result, the ionization and the nonlinear polarization of atomic and molecular species exposed to strong electric fields are closely connected [22].", "They together give rise to physics driven by both bound and freed electrons and their interplay governs many phenomena in modern nonlinear optics, including optical filamentation [23] and long-distance localized pulse propagation [24].", "It is fair to say that whenever ionization by strong electric field can be detected, the nonlinear polarization response is already so strong that it must not be neglected.", "This begs a question if such a connection should be made for nano-structures.", "In this paper we take the atom-nanostructure analogy one step further and find that a strong polarization response exists along the tunneling electrons.", "We aim to show that these two aspects are intimately related and wherever there is a tunneling current, one should also look for a surface polarization accompanying electron emission.", "Our`s is a conceptual study based on a simple model which is in fact exactly solvable.", "While it is certainly too simple to hope for a quantitative description of any real system, it can be studied in detail analytically and numerically and allows us to establish a direct connection between the polarization and the current facets of the optical response of metallic nanostructures.", "One of the widely used results in the context of field-induced 
current caused by external electric field is the Fowler Nordheim (F-N) relation [16], [17].", "The F-N formula relates the intensity of external field to the tunneling current and the treatment is based on a one-dimensional single-particle model.", "In the present work, we first re-create the F-N scenario and reformulate it using a non-Hermitian formalism.", "Deviating from the standard F-N approach, we use the quantum resonance states rather than the more conventional scattering states.", "Having verified our approach via comparison with the F-N model, we then identify the nonlinear polarization produced at the surface of the sample.", "In order to estimate the strength of this interaction, we make a comparison in terms of per-atom nonlinear dipole, and find it comparable to that induced by strong fields in the most nonlinear noble-gas atoms.", "Finally we discuss possible experimental signatures, specifically in the form of higher-harmonic radiation." ], [ "Model", "The physical model used in this work is essentially the same as what underlines the well-known Fowler-Nordheim description of electron emission from a metal structure exposed to an electric field.", "It is assumed here that electrons in the conduction band can be approximated in terms of the free-electron model, and on the outside of the sample there is an electric field that pulls electrons away from the sample surface.", "In order to account for the three-dimensional density of electronic states, we shall later consider a 3D volume, but in order to keep notation simple, we can start with a one-dimensional description.", "The differential expression associated with the action of the Hamiltonian is split into domain of $-L\\le x \\le 0$ which represents the conduction band and the exterior $0<x<\\infty $ where a homogeneous external field of strength $F$ is applied: $H &= -\\frac{1}{2} \\frac{d^2}{d x^2} - V_0 \\ \\ \\ \\ \\text{for} \\ \\ \\ -L \\le x \\le 0 \\nonumber \\\\H &= -\\frac{1}{2} \\frac{d^2}{d x^2} - s F x \\ \\ \\ \\ \\text{for} \\ \\ \\ x > 0 \\ .$ For the sake of notation, we will assume that quantity $F$ is always positive, and parameter $s$ above is used to indicate whether this field tends to pull particles away from the sample surface ($s>0$ ) or it wants to push the electrons back into the metal ($s<0$ ).", "To select the domain of the Hamiltonian, we ask for the following boundary and continuity conditions to be satisfied: $\\psi (-L) = 0 \\ , \\ \\psi (0^-) = \\psi (0^+) \\ , \\ \\psi ^{\\prime }(0^-) = \\psi ^{\\prime }(0^+) \\ .$ Parameter $V_0$ represents the depth of the conduction band, and we will use $\\phi $ to denote the workfunction of the material.", "While $L$ , the length of the metallic sample is macroscopic, we carry out all calculations for a finite $L$ , and limit $L\\rightarrow \\infty $ will be taken for observable quantities.", "This model is exactly solvable and all quantities can be calculated analytically for arbitrary sample length, but the macroscopic limit will result in greatly simplified expressions.", "Later, when we need to account for the three-dimensional nature of the electron ensemble, the energy eigen functions will take the form $\\psi _W(x) \\exp [ i k_y y + i k_z z]$ with the first factor being the eigenfunction of (REF ) corresponding to energy $W$ , and where $k_{y,z}$ stand for the momenta in the transverse directions.", "For the sake of simplicity we will treat the electrons at the zero temperature, with all states below the Fermi energy being occupied.", "We 
will use $k_F$ to denote the Fermi velocity in the conduction band." ], [ "Strong-field electron emission", "As a first step, we will calculate the rate of emission of particles tunneling from the conduction band into the vacuum when the external field points away from the surface, i.e.", "for $s>0$ .", "Here we employ a non-Hermitian approach utilizing Stark resonant states.", "The rationale for this deviation from the standard quantum description is that it allows one to go beyond the emitted current and also calculate the concomitant nonlinear induced surface dipole moment which gives rise to up-converted electromagnetic radiation.", "The following two subsections aim to demonstrate that the non-Hermitian approach can reproduce the results of the Fowler-Nordheim model, and we do this to validate our approach." ], [ "Non-Hermitian treatment", "Stark resonances in the context of this work are the energy eigenstates of the differential equation (REF ) which obey the so-called Siegert outgoing-wave boundary conditions for $x\\rightarrow \\infty $  [25].", "For the outside component of the wavefunction they can be found in terms of the solutions to Airy equation like so $\\psi _W(x) &= \\sin (k_W (L+x)) \\text{Ci}^+(\\alpha (\\phantom{+}0 + W/F)) \\ \\ x<0 \\nonumber \\\\\\psi _W(x) &= \\sin (k_W (L+0)) \\text{Ci}^+(\\alpha (+x + W/F)) \\ \\ x>0$ where $\\alpha = -(2 F)^{1/3} \\ , \\ k_W = \\sqrt{2 (V_0 + W)}$ and $\\text{Ci}^+ = \\text{Bi} + i \\text{Ai}$ is the combination of Airy functions that behaves as the outgoing wave at large distances.", "In order to satisfy the requirement of the continuous derivative, complex-valued energy $W$ must be chosen as to satisfy this eigenvalue equation: $k_W \\cos (k_W L) \\text{Ci}^+(\\alpha W/F) = \\alpha \\sin (k_W L) \\text{Ci}^{+^{\\prime }}(\\alpha W/F) \\ .$ We are interested in the solutions for which the real part of $W$ is between the bottom of the conduction band and below the Fermi energy, i.e.", "we seek $-V_0 < \\text{Re}\\lbrace W\\rbrace < -\\phi \\ .$ At the same time, we consider a large $L$ , and the trig functions will localize each $\\text{Re}\\lbrace W\\rbrace $ in an interval of size that scales with $1/L$ .", "So one can expect to obtain a quasi-continuum of solutions, and we turn our attention to their imaginary parts.", "Because of the density of solutions, we may consider the imaginary part of $W$ as being controlled by the real part.", "For values of $F$ which correspond to the range of the laser intensities typically used in experimental setups, the $\\text{Ci}$ in the eigenvalue equation is dominated by $\\text{Bi}$ which is exponentially larger than the $\\text{Ai}$ contribution, as can be verified with the help of their asymptotic forms, $\\exp [+2 \\sqrt{2} |W|^{3/2} /(3 F)] \\sim \\text{Bi}(\\alpha W/F) \\gg \\text{Ai}(\\alpha W/F) \\sim \\exp [-2 \\sqrt{2} |W|^{3/2} /(3 F)] \\ .$ This invites us to treat the $\\text{Ai}$ part of the eigenvalue equation as a small term and look for the solution in the form of $W = W(F) + \\delta W(F) + \\ldots $ where the dominant real-valued $W(F)$ obeys the equation obtained via replacement $\\text{Ci}^+\\rightarrow \\text{Bi}$ : $k_W \\cos (k_W L) \\text{Bi}(\\alpha W/F) = \\alpha \\sin (k_W L) \\text{Bi}^{\\prime }(\\alpha W/F) \\ .$ Then the first correction $\\delta W(F)$ is obtained by inserting $W(F)$ in the small perturbation, and expanding the dominant part in $\\delta W$ .", "As expected, the correction turns out to be purely imaginary: $\\delta W(F) = \\frac{2 i \\alpha 
(V_0+W)}{\\pi L \\left(2 (V_0+W) \\text{Bi}^2 + \\alpha ^2 \\text{Bi}^{\\prime 2} \\right)}\\approx -i \\frac{\\sqrt{-2 W} (V_0 + W)}{L V_0} \\exp [-\\frac{4 \\sqrt{2} |W|^\\frac{3}{2} }{3 F}]$ To simplify our notation, we used here, and/or will use in what follows $\\text{Bi} \\equiv \\text{Bi}(\\alpha W/F) \\ , \\ \\text{Bi}^{\\prime } \\equiv \\text{Bi}^{\\prime }(\\alpha W/F)\\ , \\ \\text{Ai} \\equiv \\text{Ai}(\\alpha W/F) \\ , \\ \\text{Ai}^{\\prime } \\equiv \\text{Ai}^{\\prime }(\\alpha W/F) \\ .$ As a result we obtain the ionization rate of the state with the real part energy $W$ $\\Gamma (F) = -2 \\text{Im}\\lbrace \\delta W(F)\\rbrace = \\frac{4 \\alpha (V_0+W)}{\\pi L \\left(2 (V_0+W) \\text{Bi}^2 + \\alpha ^2 \\text{Bi}^{\\prime 2} \\right)}\\ .$ The macroscopic current is obtained by summing up the contributions from all occupied states in the conduction band, and in that process dependence on $L$ disappears as should be expected.", "We will do this explicitly for the polarization components of the response to field $F$ in Section 4.", "Here we want to concentrate on comparison with the F-N model.", "The asymptotic form in (REF ) already indicates that the low-field behavior is in fact the same as in the F-N model.", "In the following section we show that the current aspect of our treatment is in fact equivalent to that of the F-N framework." ], [ "Comparison with Fowler-Nordheim treatment", "For the sake of completeness the F-N framework is summarized next.", "It utilizes the scattering states, which one can choose to parameterize as follows $\\psi _W(x) &= \\frac{1}{N_s}\\exp (+i k_W x) + \\frac{1}{N_s} R(W) \\exp (-i k_W x) \\ \\ x<0 \\nonumber \\\\\\psi _W(x) &= \\frac{1}{N_s} T(W) \\text{Ci}^+(\\alpha (+x + W/F)) \\ \\ x>0 \\ ,$ where $R(W)$ and $T(W)$ are fixed so that the wavefunction is smoothly continuous at $x=0$ .", "The normalization factor is in general complicated, but to the leading order in size of the system $L$ it would be $N_s \\sim (2 L)^{1/2}$ .", "The probability current leaking to the positive $x$ -axis can be expressed as $J(W) = \\frac{1}{N_s^2} k_W ( 1 - |R(W)|^2) \\approx \\frac{4 \\alpha (V_0+W)}{\\pi L \\left(2 (V_0+W) \\text{Bi}^2 + \\alpha ^2 \\text{Bi}^{\\prime 2} \\right)} \\ .$ On the left is an exact expression, and when evaluated for $F$ so weak that $\\text{Bi} \\gg \\text{Ai}$ one obtains the expression on the right.", "This quantity should be compared to the Stark-resonance decay rate of $2 \\text{Im}\\lbrace \\delta W(F)\\rbrace $ .", "Thus, the rate of leakage from any given state is the same in the Stark and F-N models as long as one does not commit to unnecessary approximations in F-N treatment.", "This equivalency is a validation of our approach on the current side, this giving us a license to use the non-Hermitian treatment to go beyond F-N and obtain the polarization response in what follows.", "It is worth mentioning that there are many ways in which the F-N can be improved e.g.", "by considering a general potential barrier [26], [27] or by taking into account the effect of image charge induced by electron on the surface of the metal [28], [29] or by including the local field enhancement, and the effect of finite temperature [30], [17].", "While many of such improvements can be also applied in our treatment, it is not our aim for this work.", "Rather, we prefer to keep the model as simple as possible in order to emphasize that the appearance of the nonlinear polarization, which is discussed in the next section, is a universal 
effect reflecting the fact that the two kinds of response are in fact different sides of the same dynamics." ], [ "Nonlinear induced polarization", "Now that we have successfully reproduced the F-N per-state current in the non-Hermitian formulation, we can show another important aspect of this formalism, namely the calculation of the dipole moment and eventually the surface polarization per atom for the surface of the nano-structure irradiated by the external field.", "This is an experimentally testable prediction which could not be made using the scattering states utilized by the F-N model.", "As we will see in the coming sections, the resulting dipole moment shows a highly non-linear behavior with respect to the external field.", "The physical reason can be inferred from the fact that the electric field does not penetrate easily beneath the metallic surface.", "The formal, or mathematical reason for such behavior is that the calculation of the dipole moment for positive and negative fields must be done separately because the asymptotic behavior of the dominant Airy function in the wavefunction changes when $F$ changes sign." ], [ " Dipole moment calculation for positive field", "Instead of calculating the expectation value of the dipole moment by integration over wavefunction products, we can use the relation that connects the dipole moment to the rate of change of the energy eigenvalue with respect to the field strength [31], namely $M = -\\partial _F W(F) \\ .$ The direct calculation of the dipole moment is in principle possible, but it results in rather unwieldy expressions.", "In contrast, the above equation can be used in conjunction with the field-differentiated eigenvalue equation, $\\partial _F \\left[ k_W \\cos (k_W L) \\text{Bi}^+(\\alpha W/F) - \\alpha \\sin (k_W L) \\text{Bi}^{+^{\\prime }}(\\alpha W/F) \\right] = 0 \\ ,$ to obtain the quantity of interest.", "This calculation simplifies considerably when retaining only the leading order in large $L$ and the result reduces to $M(F) = \\frac{ 2(V_0+W)\\left[ 4 W^2 \\text{Bi}^2 - 2^{1/3} F^{4/3}\\text{Bi} \\text{Bi}^{\\prime } + 2^{5/3}F^{2/3} \\text{Bi}^{\\prime 2} \\right] }{3 F^2 \\left( 2(V_0+W) \\text{Bi}^2 + (2 F)^{2/3} \\text{Bi}^{\\prime 2} \\right) L} \\ .$ Of course, the constant “background” value of $M(0) = (W_0+V_0)/(4LV_0W_0)$ is irrelevant for the interaction with light since the relevant physical quantity is the change in the dipole moment, and we shall subtract $M(0)$ in the following.", "We have thus derived a contribution to the induced polarization of the sample which originates in a single quantum state.", "This constitutes the counterpart of the per-state current as it was derived either in the original F-N framework or in our non-Hermitian approach.", "It becomes clear that the two aspects, namely the polarization and current are expressions of the same mechanism in which the single-particle state adopts to the value of the external field.", "To complete the picture we must turn to the remaining half of the calculation, when the field pushes the particle from the exterior toward the sample surface." 
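Since the expression above involves only Airy functions, the length-normalized single-state dipole L·M(W,F) is straightforward to evaluate numerically. The sketch below does this for the outward-pointing field (the Bi branch) in atomic units, using scipy.special.airy, which returns (Ai, Ai′, Bi, Bi′). The work-function and Fermi-energy values are tungsten-like numbers assumed only for illustration, the names LM_positive, phi, kF2_half are ours, and W is approximated by the field-free level rather than solving the eigenvalue equation for W(F).

```python
import numpy as np
from scipy.special import airy

def LM_positive(W, F, V0):
    # L * M(W, F) from the Bi-expression above; W < 0 is the state energy, F > 0
    # the field strength in atomic units, and the Airy argument is alpha*W/F with
    # alpha = -(2F)^{1/3}.
    Ai, Aip, Bi, Bip = airy(-(2.0 * F) ** (1.0 / 3.0) * W / F)
    num = 2.0 * (V0 + W) * (4.0 * W**2 * Bi**2
                            - 2.0 ** (1.0 / 3.0) * F ** (4.0 / 3.0) * Bi * Bip
                            + 2.0 ** (5.0 / 3.0) * F ** (2.0 / 3.0) * Bip**2)
    den = 3.0 * F**2 * (2.0 * (V0 + W) * Bi**2 + (2.0 * F) ** (2.0 / 3.0) * Bip**2)
    return num / den

# tungsten-like numbers (assumed): workfunction ~0.165 a.u., Fermi energy ~0.34 a.u.
phi, kF2_half = 0.165, 0.34
V0, W = phi + kF2_half, -phi                 # a state at the Fermi level
LM0 = (W + V0) / (4.0 * V0 * W)              # field-free "background" value L*M(0)
for F in (0.01, 0.02, 0.04, 0.08):
    print(F, LM_positive(W, F, V0) - LM0)    # background-subtracted response
```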
], [ " Dipole moment calculation for negative field", "When the field direction is into the structure ($s<0$ ), the outside solution is best expressed in terms of Airy function $\\text{Ai}$ : $\\psi _W(x) &= \\sin (k_W (L+x)) \\text{Ai}(\\alpha (\\phantom{+}0 + W/F)) \\ \\ x<0 \\nonumber \\\\\\psi _W(x) &= \\sin (k_W (L+0)) \\text{Ai}(\\alpha (+x + W/F)) \\ \\ x>0 \\ .$ In this case, the energy spectrum is purely discrete (and real, of course), and the eigenvalue equation is obtained as $k_W \\cos (k_W L) \\text{Ai}(\\alpha W/F) = -\\alpha \\sin (k_W L) \\text{Ai}^{^{\\prime }}(\\alpha W/F) \\ .$ As in the case of positive field, the induced dipole moment can be obtained from the differentiated eigenvalue equation which leads, again including solely the leading term in $1/L$ , to $M(F) = \\frac{ 2(V_0+W)\\left[ 4 W^2 \\text{Ai}^2 - 2^{1/3} F^{4/3}\\text{Ai} \\text{Ai}^{\\prime } + 2^{5/3}F^{2/3} \\text{Ai}^{\\prime 2} \\right] }{3 F^2 \\left( 2(V_0+W) \\text{Ai}^2 + (2 F)^{2/3} \\text{Ai}^{\\prime 2} \\right) L} \\ .$ The results for positive and negative fields are put together and illustrated in the Fig.", "REF , depicting the resulting microscopic induced dipole moment for several energies.", "Figure: Induced dipole versus field strength for several states with different energies.The shape of these curves for F>0F>0 closely resembles the induced dipole moment in noble-gas atoms.However, the peaks occur at significantly lower field intensities because therelevant energy scale is much smaller than ionization potentials of atoms.The behavior for very strong fields reflects the fact that the classical exit point from the potential barrier approaches the boundary of the system, and thus causes the wavefunction to decay faster as it leaks out.", "It should be noted that a similar behavior can be found in long-lived unstable Stark resonances in noble-gas atoms [22].", "However, the region in which the dipole curve starts to bend downwards is impossible to reach in gases due to propagation effects caused by the concomitant ionization.", "It is conceivable that a similar “screening” can occur in strongly driven metallic tips.", "One can also see that for the range of energies in the vicinity of Fermi-level typical for metals, the response changes fast with the state energy.", "These are the states that contribute the most to the macroscopic response.", "The asymmetric and non-linear shape of these curves indicates that the induced dipole driven by a harmonic field with certain frequency will radiate strong higher-harmonic signals." 
], [ "Macroscopic response", "To calculate the macroscopic response of the nano-tip to the irradiation by intense light, we need to add up all microscopic contributions to the induced dipole moment.", "Using the free-electron model for the conduction band at zero temperature, we integrate over states with the three-dimensional momenta less than the Fermi velocity $k_F$ .", "The latter is related to the electron density $n$ (in the conduction band) via $k_F^3 = 3 \\pi ^2 n $ .", "To obtain the field-induced dipole moment, we sum up all contributions expressed in (REF ,REF ) for the populated states: $P(F) = 2\\times 2 \\pi \\int _{-k_F}^{+k_F} M(k_W,F) \\int _0^{\\sqrt{k_F^2-k_W^2}} k_\\perp dk_\\perp \\frac{L^3}{(2\\pi )^3} dk_W \\ .$ The above expression makes it evident that the observed dipole moment is proportional to the surface area of the sample, and it therefore originates in the space adjacent to the nanostructure surface.", "Before continuing, let us obtain an order of magnitude estimate of the dipole moment per atom residing on the surface.", "The surface density of the dipole is $p(F) = P(F)/L^2 =4 \\int _0^{k_F} L M(k_W,F) (k_F^2 - k_W^2) \\frac{1}{(2\\pi )^2} dk_W \\sim k_F^3 \\sim n$ Quantity $L M$ turns out to be of the order of unity so the estimate for dipole moment per atom at the surface, with $a_L$ standing for the lattice constant, is $p_a(F) = p(F)\\times \\text{area per atom} \\sim n a_L^2 \\sim 5\\times 10^{-2} \\text{a.u.", "}$ This is a very rough estimate, but it is in the same range as the nonlinear dipole per noble-gas atom.", "It is an indication that the nonlinear response, while localized at the nano-structure surface, can be significant.", "For a more quantitative picture, we proceed to evaluate the nonlinear response for a few examples motivated by the metallic nano-tips.", "Assuming that the material is characterized in terms of its work function $\\phi $ , together with the Fermi velocity $k_F$ , we can obtain parameter $V_0= \\phi + \\frac{1}{2} k_F^2 $ and proceed to integrate numerically: $p(F) = \\frac{-2}{\\pi ^2} \\int _{-V_0}^{-\\phi } L M(W,F) \\frac{ (\\phi + W)}{\\sqrt{2 (V_0+W)}} dW \\ .$ The induced dipole moment turns out to be mainly controlled by the states that are close to the Fermi level, as intuitively expected.", "The overall shape of the response to the external field resembles the nonlinear dipole moment induced in atoms by optical fields, at least for the positive field values.", "For negative fields, the shape of the response curve is flatter, thus giving rise to an overall asymmetric response causing even-harmonic generation.", "Figure: Induced dipole moment density obtained for workfunction values φ\\phi representing the range typicalof metals.", "For comparison, thin lines represent the response of noble-gas atoms scaled tosurface atom-density of tungsten.These properties are illustrated in Fig.", "REF .", "Nonlinearity of the curve implies strong harmonic generation, and its asymmetry means that even-order harmonics will also be generated.", "For comparison, nonlinear response of noble gas atoms, scaled to the surface atom-density of tungsten is also shown to demonstrate that the per-atom response on tungsten is comparable to that of Argon,[32] and for Barium-Oxide coated tungsten it gets as strong as that of the most nonlinear noble gas of Xenon [33]." 
], [ " Nonlinear optical response: Experimental signatures ", "Manifestations of the nonlinear dipole density induced on the surface of a metallic nano-structure will obviously depend not only on the material but also on the geometry of the surface and on the parameters of excitation by optical pulse.", "Here we wish to consider harmonic generation as one possible effect that could be utilized as means of detection of the nonlinear response mechanism put forward in this work.", "We envision a situation in which a large collection of nano-tip structures [34], [35] is irradiated by an optical pulse, giving rise to a coherent array of nonlinear oscillating dipoles.", "The spectral content of the re-radiated field will reflect the properties of the individual structure, and this we look at next.", "What sets the surface nonlinear response apart from the harmonic generation due to e.g.", "Kerr effect is the difference in the relative strength of different harmonic bands.", "In the usual perturbative regime of the third-order nonlinearity, third harmonic radiation is generated first and then higher-orders occur via cascade.", "The result is a conversion efficiency which decreases exponentially with the order, with an order-to-order power-drop of two to three orders of magnitude.", "In contrast, the asymmetric and nonlinear shape of the induced surface dipole density gives rise to all harmonic orders at the same time, and this includes both even and odd harmonic bands.", "As we will see, this response is highly nonlinear and non-perturbative in that higher order harmonic radiation generated from a nano-structure are comparable in strength to that of the lower orders.", "In a sense, the behavior is akin to high-harmonic generation [36] in gases when a plateau can form in which different orders carry almost the same power.", "However, this is something rarely occurring at lower harmonic orders, and can thus serve as an experimental signature of the effect described here.", "We assume that the frequency of the field is in the optical range, and therefore much smaller than the atomic-scale frequencies.", "As a consequence the response is adiabatically following the driving field.", "To illustrate spectral properties of the nonlinear response, we take an example of a pulsed driving electric field, specified as a function of time in the form: $A(t) = F \\sin (\\omega _0 t) e^{-t^2/\\tau ^2}$ Using the result of previous sections, the induced nonlinear polarization $P(t)$ of the surface is a map of the excitation field via the nonlinear dipole function.", "We are mainly interested in its spectrum $\\hat{P}(\\omega )$ : $P(t) = p(A(t)) \\ \\ \\ , \\ \\ \\ \\hat{P}(\\omega ) = \\int e^{+i \\omega t} p( F\\sin (\\omega _0 t) e^{-t^2/\\tau ^2} ) dt \\ ,$ in which we aim to compare power carried by different harmonic bands.", "Figure: Harmonic generation due to nonlinear polarization on surface of tungsten.Left: A sample of polarization induced by a driving pulse with amplitude F=0.04F=0.04a.u.Right: Spectral power integrated over harmonic-frequency bands exhibits relativelystrong contribution of higher harmonics.Figure REF illustrates the non-perturbative nature of the nonlinear dipole induced on the surface of tungsten taken as an example of a material often utilized to manufacture metallic nano-tips.", "For this illustration, it was assumed that a nano-tip was irradiated by an optical pulse, and we calculated the surface polarization response.", "This is shown in the left panel of the figure.", "An 
interesting signature is a well-developed asymmetry of the polarization curve, which signifies strong second-harmonic generation as well as an optical rectification signal in the “zero-harmonic” frequency range.", "One can also clearly see that different functional shape occurs around the peaks, and this reflects the fact that higher harmonic radiation will also be generated.", "The panel on the right hand side illustrates the power integrated in each harmonic frequency band and shows several of these bands as functions of the driving field amplitude.", "The important property that reflects the highly nonlinear nature of the response is that several harmonic frequencies exhibit power levels which are quite comparable, being all within a one to two orders of magnitude range when the irradiating field is strong, which is illustrated in Fig.REF It can even happen that a higher harmonic becomes stronger than its lower-order counterpart.", "The non-monotonic increase with the driving field amplitude is another signature that seems universal although the precise shape depends on the material (i.e.", "mainly on its work function) Figure: High-harmonic spectral content of the nonlinear induced dipole for a strong driving field F=0.08a.u.While in general this mechanism does not give rise to a plateau like in the true high-harmonic generation,a large number of harmonic bands can exhibit power in a narrow range of just a few orders of magnitude." ], [ "Conclusion", "The main outcome of this work is the generalization of the Fowler-Nordheim model for the electron emission from a metal surface under the influence of external electric field.", "We have reformulated this well-known approach in the non-Hermitian language of metastable states, and this allowed us to reveal that concomitant with the emission current, there occurs a highly nonlinear induced dipole moment density at the surface of the nano-structure.", "While it has been recognized for some time that the response of metallic nano-tips and nano-structures in general bears some similarity to atoms driven by strong electric fields, notable in the form of tunnel ionization and multi-photon ionization, the question was if such similarities can be identified in other aspects of their dynamics.", "This was one of our motivations, and indeed our finding extends the atom-nanostructure analogy from the emitted current to include the induced dipole moment as well.", "In a close analogy to the atomic species responding to the electric fields via emitted electron current together with the nonlinear polarization, we present an exactly solvable model suggesting that whenever the field is strong enough to drive the electron emission, there should exist an induced dipole moment density localized at the metallic surface.", "In optical pulses with wavelength in the near infrared and longer, the response can be considered adiabatic, i.e.", "slaved to the time-dependent electric field resulting in a radiation-source term for Maxwell equations.", "The exact solution for a static field case allows one to estimate the strength of this nonlinear response.", "It turns out that in terms of the induced dipole moment per atom on the surface, the response is roughly as strong or even stronger than that observed in the noble gases like Argon and Xenon [32], [33].", "This finding suggest that the effect should be possible to detect in future experiments.", "Indeed, radiation even from individual nanostructures can be detected (e.g.", "[13], [37]), the question is how to 
differentiate the proposed effect from other nonlinear mechanisms, including the classical second-harmonic generation from surfaces, and high-harmonic generation enhanced by nano-structured surfaces (e.g.", "[38]).", "Perhaps most promising avenue presents itself in frequency conversion, for example by irradiating carpets [34], [39] of nano-tips with femtosecond pulses of high intensity.", "While it is difficult to estimate the total converted power, we have investigated possible experimental signatures in the spectra of the polarization driven by pulsed excitation.", "The expected harmonic radiation can be characterized by power which decreases very slowly with the harmonic order.", "Besides even and odd harmonics, optical rectification is also expected.", "Together with the non-monotonic growth of the harmonic power with the driving field amplitude, these signatures can potentially identify the specific mechanism we put forward in this work.", "Another potential pathway for experimental investigations could borrow the idea of exposing atoms to the enhanced near-field in the vicinity of a nano-structure [40], [41], [42], [43].", "Atoms or molecules in a close proximity of a light-driven nano-tip would experience high-harmonic field produced by the surface which in turn could result in the species excitation and subsequent radiation.", "In conclusion, we have presented a generalization of a well-known and often utilized model, and this made it possible to extend the analogy between the nonlinear response of atoms on one side and nano-structures on the other beyond the emission of electrons, now including also the nonlinear polarization.", "Acknowledgments This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-18-1-0183." ] ]
2107.01740
[ [ "On the topology of determinantal links" ], [ "Abstract We study the cohomology of the generic determinantal varieties $M_{m,n}^s = \\{ \\varphi \\in \\mathbb C^{m\\times n} : \\mathrm{rank} \\varphi <s \\}$, their polar multiplicities, their sections $D_k \\cap M_{m,n}^s$ by generic hyperplanes $D_k$ of various dimension $k$, and the real and complex links of the spaces $(D_k\\cap M_{m,n}^s,0)$.", "Such complex links were shown to provide the basic building blocks in a bouquet decomposition for the (determinantal) smoothings of smoothable isolated determinantal singularities.", "The detailed vanishing topology of such singularities was still not fully understood beyond isolated complete intersections and a few further special cases.", "Our results now allow to compute all distinct Betti numbers of any determinantal smoothing." ], [ "Introduction and results", "The motivation for this paper is to understand the vanishing (co-)homology of isolated determinantal singularities $(X,0) \\subset (\\mathbb {C}^p,0)$ (abbreviated as IDS in the following) which admit a determinantal smoothing $M$ .", "Despite their seemingly odd definition (see below), they are quite frequently encountered; for instance any normal surface singularity in $(\\mathbb {C}^4,0)$ and any so-called “space curve” $(C,0) \\subset (\\mathbb {C}^3,0)$ is determinantal by virtue of the Hilbert-Burch theorem, see e.g.", "and .", "It was shown in that sections of the generic determinantal varieties $M_{m,n}^s = \\lbrace \\varphi \\in \\mathbb {C}^{m\\times n} : \\operatorname{rank}\\varphi < s \\rbrace $ by hyperplanes $D_k^{\\prime } \\subset \\mathbb {C}^{m\\times n}$ , which are of codimension $k$ and in general position off the origin, provide the fundamental building blocks in a bouquet decomposition of the determinantal smoothing $M$ .", "In this paper we aim to study these building blocks $\\mathcal {L}^k(M_{m,n}^s,0) \\cong M_{m,n}^s \\cap D^{\\prime }_{k+1},$ which we shall refer to as complex links of codimension $k$ , and discuss the implications for the vanishing topology of various singularities.", "Our main result in this regard is that whenever $k$ is sufficiently big so that the hyperplane $D^{\\prime }_{k+1}$ misses the singular locus of $M_{m,n}^s$ and $\\mathcal {L}^k(M_{m,n}^s,0)$ is smooth, then the cohomology in degrees $i$ below the middle degree $d = \\dim \\mathcal {L}^k(M_{m,n}^s,0)$ is $H^i(\\mathcal {L}^k(M_{m,n}^s,0)) \\cong H^i(\\operatorname{Grass}(s-1,m)),$ that is, it is isomorphic to the cohomology of a Grassmannian, see ().", "In the general case, i.e.", "for arbitrary codimension $k$ , we provide a formula (REF ) for the Euler characteristic of $\\mathcal {L}^k(M_{m,n}^s,0)$ based on the polar methods developed by Lê and Teissier.", "We proceed to study the polar multiplicities $e_{m,n}^{r,k}$ of the generic determinantal varieties $(M_{m,n}^{r+1},0)$ in Section REF and obtain a formula (REF ) for these numbers as an integral of Segre classes.", "It should be noted that this formula does not depend on the choice of any generic hyperplane section and it can be implemented effectively in a computer algebra system.", "However, we were unable to prove a closed formula for the numbers $e_{m,n}^{r,k}$ as a function of $m,n,r$ , and $k$ ; even though the calculated Tables REF , and REF – REF yield several patterns.", "Coming back to the case of smooth links, this allows then to also compute the middle Betti number of $\\mathcal {L}^k(M_{m,n}^s,0)$ (but unfortunately not the full cohomology group 
with integer coefficients in general, i.e.", "there could also be torsion).", "We would like to point out that many methods developed in this article apply much more generally to compute the cohomology of the links of higher codimension for spaces $(X,0) \\subset (\\mathbb {C}^q,0)$ which decompose into orbits of a Lie group action $G \\times X \\rightarrow X$ .", "The generic determinantal varieties treated here are just one particular, accessible example.", "We continue with the discussion of the implications of these results for arbitrary smoothable IDS.", "Let $A \\colon (\\mathbb {C}^p,0) \\rightarrow (\\mathbb {C}^{m\\times n},0)$ be a holomorphic map germ to the space of $m\\times n$ -matrices.", "We say that $(X,0)$ is determinantal for $A$ and of type $(m,n,s)$ if we have $(X,0) = (A^{-1}( M_{m,n}^s),0)$ and such that $\\operatorname{codim}(X,0) = \\operatorname{codim}(M_{m,n}^s,0) = (m-s+1)(n-s+1)$ .", "In this case we also write $(X,0) = (X_A^s,0)$ to emphasize the chosen determinantal structure for $(X,0)$ .", "By definition, a determinantal deformation is induced by an unfolding $\\mathbf {A}(x,t)$ of $A(x)$ on parameters $t = (t_1,\\dots ,t_k)$ such that $A_0 = \\mathbf {A}(-,0)$ is equal to the original map $A$ and $A_t= \\mathbf {A}(-,t)$ is transverse to $M_{m,n}^s$ in a stratified sense for $t \\ne 0\\in \\mathbb {C}^k$ .", "Then $X_t = A_t^{-1}(M_{m,n}^s)$ is a smoothing of $(X,0)$ , provided that $p$ is strictly smaller than the codimension $c = (m-s-2)(n-s-2)$ of the singular locus of $M_{m,n}^s$ .", "In this case we shall speak of the determinantal smoothingNote that in the case of an isolated determinantal hypersurface singularity given by $(X,0) = (\\lbrace \\det A = 0 \\rbrace ,0) \\subset (\\mathbb {C}^p,0)$ a smoothing of the singularity always exists, but it can be realized by a determinantal deformation only when $p<4$ .", "of $(X_A^s,0)$ and it can be shown that, up to diffeomorphism, this smoothing depends only on the choice of the determinantal structure for $(X,0)$ , but not on the particular unfolding of the matrix.", "We will therefore also write $M_A^s$ for the determinantal smoothing $X_t$ of $(X,0)$ .", "Note that every complete intersection singularity $(X,0)\\subset (\\mathbb {C}^p,0)$ of codimension $k$ is determinantal of type $(1,k,1)$ for the matrix $A$ whose entries are given by a regular sequence generating the ideal of $(X,0)$ .", "In this case the determinantal deformations coincide with the usual deformations of the singularity.", "We refer to for the details on determinantal singularities and their deformations and further references for the above statements.", "The aforementioned bouquet decomposition of the determinantal smoothing $M_A^s$ of an IDS $(X,0) = (X_A^s,0)$ from reads $M_A^s \\cong _{\\mathrm {ht}} \\mathcal {L}^{mn-p-1}(M_{m,n}^s,0) \\vee \\bigvee _{i=1}^\\lambda S^{\\dim (X_A^s,0)}$ where we denote by $S^d$ the unit sphere of real dimension $d$ .", "Combining this with our results we find Theorem 1.1 Let $M_A^s$ be the determinantal smoothing of a $d$ -dimensional smoothable isolated determinantal singularity $(X_A^s,0) \\subset (\\mathbb {C}^p,0)$ of type $(m,n,s)$ with $s\\le m\\le n$ .", "Then the truncated cohomology $H^{\\le d}(\\operatorname{Grass}(s-1,m))\\subset H^\\bullet (M_A^s)$ embeds into the cohomology of $M_A^s$ with a quotient concentrated in cohomological degree $d$ .", "Moreover, the left hand side is generated as an algebra by the Segre classes of the vector bundle $E$ on $M_A^s$ whose fiber at a point $x$ 
is presented by the perturbed matrix $\\mathbb {C}^n \\overset{A_t(x)}{\\longrightarrow } \\mathbb {C}^m \\rightarrow E_x \\rightarrow 0$ which defines the smoothing $M_A^s$ .", "Table: Polar multiplicities e m,m+1 m-1,k e_{m,m+1}^{m-1,k} for the generic determinantal varieties(M m,m+1 m ,0)(M_{m,m+1}^m,0) appearing in the context of the Hilbert-Burch theoremIn the case of smoothable isolated Cohen-Macaulay codimension 2 singularities $(X,0) \\subset (\\mathbb {C}^p,0)$ we obtain some more specific results which might be of particular interest.", "As remarked earlier, these singularities admit a canonical determinantal structure by virtue of the Hilbert-Burch theorem (see , , or for a textbook): Let $f_0,\\dots ,f_m \\in \\mathbb {C}\\lbrace x_1,\\dots ,x_p\\rbrace $ be a minimal set of generators for the ideal $I$ defining $(X,0)$ .", "Then the minimal free resolution of $\\mathcal {O}_{X,0} = \\mathbb {C}\\lbrace x_1,\\dots ,x_p\\rbrace /I$ over the ring $\\mathcal {O}_p = \\mathbb {C}\\lbrace x_1,\\dots ,x_p\\rbrace $ takes the form $0 \\rightarrow \\mathcal {O}_p^m \\overset{A^T}{\\longrightarrow } \\mathcal {O}_p^{m+1} \\overset{f}{\\longrightarrow }\\mathcal {O}_p \\rightarrow \\mathcal {O}_{X,0} \\rightarrow 0$ for some matrix $A \\in \\mathcal {O}^{m\\times (m+1)}$ and then $(X,0)= (X_A^m,0)$ is determinantal of type $(m,m+1,m)$ for the matrix $A$ .", "Moreover, any deformation of $(X,0)$ is also automatically determinantal (see ), i.e.", "it arises from a perturbation of $A$ .", "Isolated Cohen-Macaulay codimension 2 singularities exist up to dimension $d = \\dim (X,0) \\le 4$ and they are smoothable if and only if this inequality is strict.", "The generic determinantal varieties appearing in the context of the Hilbert-Burch theorem are naturally limited to $M_{m,m+1}^m$ and we list their polar multiplicities for values $0\\le m\\le 10$ in Table REF .", "From these numbers we can then also compute the Euler characteristic of the smooth complex links of their generic hyperplane sections, listed in Table REF .", "Table: The Euler characteristic of the smooth complex linksℒ m(m+1)-d-3 (M m,m+1 m ,0)\\mathcal {L}^{m(m+1)-d-3}(M_{m,m+1}^m,0) of dimension dd ofthe generic determinantal varieties appearing in the context of theHilbert-Burch theoremAs a consequence of Theorem REF we obtain: Corollary 1.2 Let $(X,0) \\subset (\\mathbb {C}^5,0)$ be a Cohen-Macaulay germ of codimension 2 with an isolated singularity at the origin and $M$ its smoothing.", "Then $H^2(M) \\cong \\mathbb {Z}$ is free of rank one and generated by the Chern class of the canonical bundle on $M$ .", "This has been conjectured by the author in where a special case of Corollary REF was obtained for those singularities defined by $2\\times 3$ -matrices.", "Our results on the Euler characteristic of the complex links also give lower bounds for the third Betti number $b_3 \\ge - \\chi \\left( \\mathcal {L}^{m(m+1)-6}(M_{m,m+1}^m,0) \\right) - 2$ which can be read off directly from Table REF depending on the size of the matrix $m$ .", "Unfortunately, we were unable to prove a closed algebraic formula for this number as a function of $m$ .", "While in the case of threefolds the cohomology of the complex link prominently sticks out in the sense that it entirely covers the nontrivial contributions outside the middle degree, the situation is a little more hidden in dimensions $d<3$ since then all the non-trivial (reduced) cohomology of the smoothing $M$ is concentrated in the middle degree.", "Nevertheless, using Theorem REF 
, a lower bound for the middle Betti number can be read off from Table REF as $b_2 \\ge \\chi \\left( \\mathcal {L}^{m(m+1)-5}(M_{m,m+1}^m,0) \\right) - 2$ for smoothings of isolated Cohen-Macaulay codimension 2 surfaces and $b_1 \\ge -\\chi \\left( \\mathcal {L}^{m(m+1)-4}(M_{m,m+1}^m,0) \\right) - 1$ in the case of space curves in $(\\mathbb {C}^3,0)$ .", "The proofs of Theorem REF and Corollary REF will be given in the last section.", "Remark 1.3 Wahl has shown in that for a normal surface singularity $(X,0) \\subset (\\mathbb {C}^4,0)$ which is not Gorenstein, the second Betti number $\\mu $ of the smoothing $M$ and the Tjurina number $\\tau $ of $(X,0)$ obey an inequality $\\mu \\ge \\tau - 1$ with equality whenever $(X,0)$ is quasi-homogeneous.", "In particular, this applies to all generic hyperplane sections $(X,0) = (D \\cap M_{m,m+1}^m,0) \\subset (D,0)$ for $m>1$ and $D$ of dimension 4.", "It follows from the above that $\\chi (M) = \\tau $ and the first few values can be read off from Table REF .", "Comparing Wahl's results with the bouquet decomposition (REF ) we see that for non-linear quasi-homogeneous singularities, the difference of the numbers $\\tau $ and $\\lambda $ is determined by the offset $\\tau - \\lambda = \\chi \\left( \\mathcal {L}^{m(m+1)-5}(M_{m,m+1}^m,0) \\right)$ depending only on the size of the matrix $m >1$ .", "This is different compared to the case of isolated complete intersections (the case $m=1$ ) where we have equalities $\\tau = \\mu = \\lambda $ ; see ." ], [ "Definitions", "Let $X \\hookrightarrow U \\subset \\mathbb {C}^n$ be the closed embedding of an equidimensional reduced complex analytic variety of dimension $d$ in some open set $U$ of $\\mathbb {C}^n$ and suppose $\\lbrace V^\\alpha \\rbrace _{\\alpha \\in A}$ is a complex analytic Whitney stratification for $X$ .", "Such stratifications always exist; a construction of a unique minimal stratification has been described in and if not specified further, we will in the following always assume $X$ to be endowed with its minimal Whitney stratification.", "Moreover, we may suppose that every stratum is connected for if this was not the case, we could replace it by its distinct irreducible components.", "Recall the notions of the real and complex links of $X$ along its strata.", "These can be defined as follows: Let $V^\\alpha $ be a stratum of $X$ and $x \\in V^\\alpha \\subset X$ an arbitrary point.", "For simplicity, we may assume $x = 0 \\in \\mathbb {C}^n$ to be the origin.", "Choose a normal slice to $V^\\alpha $ through $x$ , i.e.", "a submanifold $(N_x,x) \\subset (\\mathbb {C}^n,x)$ of complementary dimension which meets $V^\\alpha $ transversally in $x$ .", "Let $B_\\varepsilon (x)$ be the closed ball of radius $\\varepsilon $ centered at $x$ .", "Then the real link of $X$ along the stratum $V^\\alpha $ is $\\mathcal {K}(X,V^\\alpha ) = X \\cap N_x \\cap \\partial B_\\varepsilon (x)$ for $1 \\gg \\varepsilon >0$ sufficiently small.", "Similarly, the complex link is $\\mathcal {L}(X,V^\\alpha ) = X \\cap N_x \\cap B_\\varepsilon (x) \\cap l^{-1}(\\lbrace \\delta \\rbrace )$ for a sufficiently general linear form $l$ and $1\\gg \\varepsilon \\gg |\\delta |>0$ .", "For a rigorous definition of these objects see for example and .", "There, one can also find a proof of the fact that the real and complex links are independent of the choices involved in their definition: They are invariants of the particular stratification of $X$ and unique up to non-unique homeomorphism.", "In this note we 
shall also be concerned with the real and complex links of higher codimension for a germ $(X,0)\\subset (\\mathbb {C}^n,0)$ at the point 0.", "Definition 2.1 The real and complex links of codimension $i$ of $(X,0) \\subset (\\mathbb {C}^n,0)$ at the origin are the classical real and complex links of a section $(X \\cap D_i, 0)$ of $X$ with a sufficiently general plane $D_i \\subset \\mathbb {C}^n$ of codimension $i$ .", "Note that in this definition, the classical complex link $\\mathcal {L}(X,0)$ is the complex link of codimension 0, even though it has complex codimension 1 as an analytic subspace of $X$ .", "This choice was made in order to be compatible with the notation for the real links.", "The reader may think of the word “link” as an indicator to increment the codimension by one to arrive at the actual codimension of the space.", "By “sufficiently general” we mean that $D_i$ has to belong to a Zariski open subset $U_i \\subset \\operatorname{Grass}(n-i,n)$ in the Grassmannian.", "This set consists of planes for which variation of $D_i$ results in a Whitney equisingular family in $X \\cap D_i$ and we shall briefly discuss the existence of such a set.", "From this it will follow immediately that the real and complex links of higher codimension are invariants of the germ $(X,0)$ itself and unique up to non-unique homeomorphism.", "Consider the Grassmann modification of $X$ : $G_i X = \\lbrace (x,D) \\in X \\times \\operatorname{Grass}(n-i,n) : x \\in D \\rbrace $ where by $\\operatorname{Grass}(n-i,n)$ we denote the Grassmannian of codimension $i$ planes in the ambient space $\\mathbb {C}^n$ through the point 0.", "It comes naturally with two projections ${G_i X [d]^{\\rho } [r]^-{\\pi } &\\operatorname{Grass}(n-i,n) \\\\X &}$ where by construction $\\rho (\\pi ^{-1}(\\lbrace D\\rbrace )) = X \\cap D$ for any plane $D \\in \\operatorname{Grass}(n-i,n)$ .", "The Grassmannian itself has a natural embedding into $G_i X$ as the zero section $E_0$ of $\\pi $ and we may think of $E_0$ as parametrizing the intersections of $X$ with planes of codimension $i$ .", "For $i\\le d = \\dim X$ the intersection of any plane $D$ with $(X,0)$ will be of positive dimension at 0 and therefore $E_0$ is necessarily contained in the closure of its complement.", "Note that for an arbitrary point $p\\in X$ the fiber $\\rho ^{-1}(\\lbrace p\\rbrace )$ over $p$ in $G_i X$ consists of all planes $D \\in \\operatorname{Grass}(n-i,n)$ containing both $p$ and the origin.", "It follows that the restriction $\\rho \\colon G_i X \\setminus E_0 \\rightarrow X \\setminus \\lbrace 0\\rbrace $ is not an isomorphism, but nevertheless a holomorphic fiber bundle with smooth fibers.", "In particular, any given Whitney stratification on $X$ determines a unique Whitney stratification of $G_i X \\setminus E_0$ by pullback of the strata.", "Given such a stratification on the complement of $E_0$ , there exists some maximal Zariski open subset $U_i \\subset E_0 \\cong \\operatorname{Grass}(n-i,n)$ such that all the strata of $G_i X \\setminus E_0$ satisfy Whitney's conditions (a) and (b) along $U_i$ .", "We may extend the stratification on the open subset $G_i X \\setminus E_0$ to a Whitney stratification of $G_i X$ , containing $U_i$ as a stratum.", "Since the Grassmannian $\\operatorname{Grass}(n-i,n)$ is irreducible and complex analytic subsets have real codimension $\\ge 2$ , this stratum is necessarily dense and connected.", "A plane $D$ of codimension $i$ is sufficiently general in the sense of Definition REF if it belongs to 
$U_i$ .", "The projection $\\pi $ provides us with a canonical choice of normal slices along $U_i$ and by construction, we may therefore identify the real and complex links of $G_i X$ along $U_i$ with the real and complex links of the germ $(X \\cap D,0) \\subset (\\mathbb {C}^n,0)$ for any $D \\in U_i$ .", "Given that the classical real and complex links are invariants of the strata in a Whitney stratification, it follows immediately that the real and complex links of higher codimension for a germ $(X,0)$ are well defined invariants of the germ itself and unique up to non-unique homeomorphism." ], [ "Synopsis with polar varieties", "While it was already established in the previous discussion that the real and complex links of higher codimension are invariants of the germ $(X,0) \\subset (\\mathbb {C}^n,0)$ itself, a more specific setup will be required in the following.", "Let $\\underline{l} = l_1,l_2,\\dots ,l_d \\in \\operatorname{Hom}(\\mathbb {C}^n,\\mathbb {C})$ be a sequence of linear forms defining a flag of subspaces $\\mathcal {D} : \\mathbb {C}^n = D_0 \\supset D_1 \\supset D_2 \\supset \\dots \\supset D_d \\supset \\lbrace 0\\rbrace $ in $\\mathbb {C}^n$ via $D_{i+1} = D_i \\cap \\ker l_{i+1}$ .", "Definition 2.2 Let $(X,0) \\subset (\\mathbb {C}^n,0)$ be an equidimensional reduced complex analytic germ of dimension $d$ and $\\lbrace V^\\alpha \\rbrace _{\\alpha \\in A}$ a Whitney stratification of $X$ .", "A sequence of linear forms $l_1,l_2,\\dots ,l_d \\in \\operatorname{Hom}(\\mathbb {C}^n,\\mathbb {C})$ is called admissible for $(X,0)$ if for every $0 < i \\le d$ the restriction of $l_i$ to $X_{i-1} := X \\cap D_{i-1}$ does not annihilate any limiting tangent space of $(X_{i-1},0)$ at the origin.", "This particular definition is chosen with a view towards the fibration theorems and inductive arguments used in Section REF .", "However, we shall also need the results on polar varieties by Lê and Teissier from .", "To this end, we recall the necessary definitions and explain how Definition REF fits into the context of their paper.", "The geometric setup for the treatment of limiting tangent spaces is the Nash modification.", "Let $X \\subset U$ be a suitable representative of $(X,0)$ in some open neighborhood $U$ of the origin.", "The Gauss map is defined on the regular locus $X_\\mathrm {reg}$ via $X_\\mathrm {reg}\\rightarrow \\operatorname{Grass}(d,n), \\quad p \\mapsto [T_p X_\\mathrm {reg}\\subset T_p \\mathbb {C}^n].$ Then the Nash blowup $\\tilde{X}$ of $X$ is defined as the closure of the graph of the Gauss map.", "It comes with two morphisms ${\\tilde{X} [d]^\\nu [r]^-\\gamma &\\operatorname{Grass}(d,n) \\\\X}$ where $\\nu $ is the projection to $X$ and $\\gamma $ the natural prolongation of the Gauss map.", "We denote by $\\tilde{T}$ the pullback of the tautological bundle on $\\operatorname{Grass}(d,n)$ along $\\gamma $ and by $\\tilde{\\Omega }^1$ the dual of $\\tilde{T}$ .", "By construction, the fiber $\\nu ^{-1}(\\lbrace p\\rbrace )$ over a point $p \\in X$ consists of pairs $(p,E) \\in \\tilde{X} \\subset \\mathbb {C}^n \\times \\operatorname{Grass}(d,n)$ where $E$ is a limiting tangent space to $X$ at $p$ .", "In particular, such a limiting tangent space is unique at regular points $p \\in X_\\mathrm {reg}$ and $\\nu \\colon \\nu ^{-1}(X_\\mathrm {reg}) \\rightarrow X_\\mathrm {reg}$ is an isomorphism identifying the restriction of $\\tilde{T}$ with the tangent bundle of $X_\\mathrm {reg}$ .", "A flag $\\mathcal {D}$ as above determines a set of degeneraci 
loci in the Grassmannian as follows.", "Every linear form $l \in \operatorname{Hom}(\mathbb {C}^n,\mathbb {C})$ can be pulled back to a global section $\nu ^* l$ in the dual of the tautological bundle of $\operatorname{Grass}(d,n)$ : On a fiber $E$ of the tautological bundle the section $\nu ^*l$ is defined to be merely the restriction of $l$ to $E$ regarded as a linear subspace of $\mathbb {C}^n$ .", "This generalizes in the obvious way to the linear maps $\varphi _k := l_1 \oplus l_2 \oplus \dots \oplus l_{d-k+1} \colon \mathbb {C}^n \rightarrow \mathbb {C}^{d-k+1}$ for every $0 \le k \le d$ .", "Adapting the notation from we now have $c_k(\mathcal {D}) &:=& \lbrace E \in \operatorname{Grass}(d,n) : \dim E \cap D_{d-k+1} \ge k \rbrace \\&=& \lbrace E \in \operatorname{Grass}(d,n) : \operatorname{rank}\nu ^* \varphi _k < d-k+1 \rbrace .$ Note that by construction $c_0(\mathcal {D}) = \operatorname{Grass}(d,n)$ is the whole Grassmannian.", "We set $\gamma ^{-1}(c_k(\mathcal {D})) \subset \tilde{X}$ to be the corresponding degeneracy locus on the Nash modification and $P_k(\mathcal {D}) := \nu ( \gamma ^{-1}(c_k(\mathcal {D})))$ its image in $X$ .", "By construction, a point $p \in X$ belongs to $P_k(\mathcal {D})$ if and only if there exists a limiting tangent space $E$ to $X_\mathrm {reg}$ at $p$ such that the restriction of $\varphi _k$ to $E$ does not have full rank.", "The $k$ -th polar multiplicity of $(X,0)$ is then defined for $0\le k < d$ to be $m_k(X,0) := m_0( P_{k}(\mathcal {D})),$ the multiplicity of the $k$ -th polar variety.", "We shall see below that for $k=d$ the variety $P_d(\mathcal {D})$ is empty for a generic flag so that a definition of $m_d(X,0)$ does not make sense.", "In the other extreme case where $k=0$ , the polar multiplicity $m_0(X,0)$ is simply the multiplicity of $(X,0)$ itself.", "Lemma 2.3 Let $X$ be a sufficiently small representative of $(X,0) \subset (\mathbb {C}^n,0)$ .", "A sequence of linear forms $l_1,\dots ,l_d$ is admissible for $(X,0)$ if and only if for every $0\le i < d$ one has $\gamma ^{-1}(c_{d-i}(\mathcal {D})) \cap \widehat{X}_i = \emptyset $ for the associated flag $\mathcal {D}$ .", "Here $X_i := X \cap D_i$ and $\widehat{X}_i$ denotes the strict transform of $X_i$ in the Nash modification.", "The proof will proceed by induction on $i$ .", "For $i=0$ we have $\widehat{X}_0 = \tilde{X}$ and the statement is about the choice of $l_1$ .", "Consider the projectivized analytic set of degenerate covectors $\lbrace ([l],(p,E)) \in \mathbb {P}\operatorname{Hom}(\mathbb {C}^n,\mathbb {C})\times \nu ^{-1}(\lbrace 0\rbrace ) : l|_E = 0 \rbrace $ along the central fiber $\nu ^{-1}(\lbrace 0\rbrace )$ of the Nash modification.", "Since this fiber has strictly smaller dimension than $X$ , the set of degenerate covectors has dimension $<n-1$ and therefore the discriminant, i.e.", "the image of its projection to $\mathbb {P}\operatorname{Hom}(\mathbb {C}^n,\mathbb {C})$ , is a closed analytic set of positive codimension.", "Now (REF ) is satisfied if and only if $l_1$ belongs to the complement of the affine cone of the discriminant.", "Such a choice for $l_1$ determines $X_1 = X \cap D_1$ .", "Note that by construction this intersection is transversal and therefore $X_1$ inherits a Whitney stratification from the one on $X$ .", "Moreover, at a regular point $p \in X_1$ the tangent space $T_p X_1 = \ker \nu ^*l_1 \subset T_p X$ is naturally contained in the tangent
space of $X$ at that point.", "Let $\widehat{X}_1 \subset \tilde{X}$ be the strict transform of $X_1$ in the Nash modification.", "Taking limits of appropriate (sub-)sequences of regular points, it is easy to see that every limiting tangent space $E^{\prime }$ of $X_1$ at 0 is contained in a limiting tangent space $E$ of $X$ along $X_1$ .", "Consequently, a second linear form $l_2$ annihilates the limiting tangent space $E^{\prime }$ of $X_1$ if and only if $l_1 \oplus l_2$ is degenerate on $E$ .", "But this means nothing else than $(0,E) \in \widehat{X}_1 \cap \gamma ^{-1}(c_{d-1}(\mathcal {D}))$ , so that this intersection is non-empty, which establishes the claim for $i=1$ .", "The remainder of the induction is a repetition of the previous steps and left to the reader.", "The previous lemma provides the link of Definition REF with the “Théorème de Bertini idéaliste” by Lê and Teissier .", "They establish the existence of Zariski open subsets $U^{\prime }_i \subset \operatorname{Grass}(n-i,n)$ with certain good properties concerning the variety $\gamma ^{-1}(c_{d-i+1}(D_i))$ for $D_i \in U^{\prime }_i$ .", "A posteriori, they discuss in that if the whole flag $\mathcal {D}$ has been chosen such that $D_i \in U^{\prime }_i$ for all $i$ , then also (REF ) is in fact satisfied for all $i$ .", "Thus we obtain the following Corollary 2.4 For every equidimensional reduced analytic germ $(X,0)$ there exists a Zariski open and dense subset of admissible sequences of linear forms $l_1,l_2,\dots ,l_d$ .", "Moreover, this sequence can be chosen such that for the associated flag $\mathcal {D}$ the space $D_i = \lbrace l_1=\dots =l_i=0\rbrace $ is sufficiently general in the sense of Definition REF so that the real and complex links of codimension $i$ are given by $\mathcal {K}^i(X,0) = X \cap D_i \cap \partial B_\varepsilon (0)$ and $\mathcal {L}^i(X,0) = X \cap D_i \cap B_\varepsilon (0) \cap l_{i+1}^{-1}(\lbrace \delta \rbrace )$ for $1 \gg \varepsilon \gg |\delta | >0$ , respectively.", "Consider the sets $U^{\prime }_i \cap U_i$ with $U^{\prime }_i$ from Lê's and Teissier's “Théorème de Bertini idéaliste” and $U_i$ the set of Whitney equisingular sections from the discussion of Definition REF .", "Since the intersection of Zariski open sets is again Zariski open, we may choose $l_1,l_2,\dots ,l_d$ to be any sequence of linear forms such that $D_i = \lbrace l_1 = \dots = l_i = 0\rbrace \in U^{\prime }_i \cap U_i$ for all $i$ .", "We will henceforth assume that the sequence of linear forms $l_i$ has been chosen such that the associated flag $\mathcal {D}$ has $D_i \in U^{\prime }_i \cap U_i$ where $U^{\prime }_i$ is the Zariski open subset of Lê's and Teissier's “Théorème de Bertini idéaliste” and $U_i$ the Zariski open subset of Whitney equisingular sections of codimension $i$ from the discussion of Definition REF ."
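To make Corollary 2.4 concrete, the following minimal computational sketch (assuming a Python environment with sympy; the plane and the linear form below are random stand-ins for a sufficiently general flag, so a degenerate draw can simply be repeated) slices the three-dimensional quadric cone $M_{2,2}^2 = \lbrace ad - bc = 0 \rbrace \subset \mathbb {C}^{2\times 2} \cong \mathbb {C}^4$ along a random plane $D_2$ of codimension 2 and a nearby level of a further linear form $l_3$ and counts the points of the resulting complex link $\mathcal {L}^2(M_{2,2}^2,0)$ ; for a generic choice one finds exactly $m_0(M_{2,2}^2,0) = 2$ points, the multiplicity of the quadric cone.

import random
import sympy as sp

def rand_c():
    # a random Gaussian integer coefficient; genericity is assumed,
    # so a degenerate draw can simply be repeated
    return sp.Integer(random.randint(-9, 9)) + sp.I*sp.Integer(random.randint(-9, 9))

s, t = sp.symbols('s t')
# a random plane D_2 of codimension 2 through the origin of C^4, parametrized by (s, t)
basis = [sp.Matrix([rand_c() for _ in range(4)]) for _ in range(2)]
a, b, c, d = list(s*basis[0] + t*basis[1])   # coordinates of a point of D_2 in C^{2x2}

l3 = sum(rand_c()*z for z in (a, b, c, d))   # a further random linear form l_3, restricted to D_2
delta = sp.Rational(1, 100)                  # a small generic value 0 < |delta| << 1

# restrict det = ad - bc to the affine line D_2 ∩ {l_3 = delta}
t_of_s = sp.solve(sp.Eq(l3, delta), t)[0]
f = sp.expand((a*d - b*c).subs(t, t_of_s))

# the points of the complex link L^2 are the zeros of f; for small delta they lie in the Milnor ball
print(len(sp.solve(f, s)))                   # expected output: 2

The same slicing recipe with fewer linear forms produces the positive dimensional links $\mathcal {L}^0$ and $\mathcal {L}^1$ , whose topology is the subject of the following sections.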
], [ "The Euler characteristic of complex links", "Lê and Teissier have described a method to compute the Euler characteristic of complex links from the polar multiplicities in .", "We briefly sketch how to use their results inductively in our setup from Definition REF .", "As before, let $(X,0) \\subset (\\mathbb {C}^n,0)$ be a reduced, equidimensional complex analytic germ of dimension $d$ , endowed with a Whitney stratification $\\lbrace V^\\alpha \\rbrace _{\\alpha \\in A}$ .", "We will assume that $V^0 = \\lbrace 0\\rbrace $ is a stratum and write $X^\\alpha = \\overline{V^\\alpha }$ for the closure of any other stratum $V^\\alpha $ of $X$ .", "Throughout this section, we will assume that an admissible sequence of linear forms $l_1,\\dots ,l_d$ and the associated flag $\\mathcal {D}$ have been chosen as in Corollary REF for all germs $(X^\\alpha ,0)$ at once.", "This flag being fixed, we will in the following suppress it from our notation and simply write $P_k(X^\\alpha ,0)$ for the polar varieties of the germ $(X^\\alpha ,0)$ with $\\mathcal {D}$ being understood.", "Denote by $\\mathcal {L}(X,V^\\alpha )$ the classical complex links of $X$ along the stratum $V^\\alpha $ and by $\\mathcal {L}^i$ the complex link of codimension $i$ of $X$ at the origin.", "Then by $\\chi \\left( \\mathcal {L}^0 \\right) - \\chi \\left( \\mathcal {L}^1 \\right) =\\sum _{\\alpha \\ne 0} m_0\\left(P_{d(\\alpha )-1} \\left( X^\\alpha ,0 \\right)\\right)\\cdot (-1)^{d(\\alpha )-1}\\left( 1 - \\chi (\\mathcal {L}(X,V^\\alpha )) \\right).$ For our specific setup we may interpret this formula in the context of stratified Morse theory.", "To this end, note that the restriction of $l_2$ to the complex link $\\mathcal {L}^0$ $l_2 \\colon X \\cap B_\\varepsilon (0) \\cap l_1^{-1}(\\lbrace \\delta _1\\rbrace ) \\rightarrow \\mathbb {C}$ is a stratified Morse function with critical points on the interior of the strata $V^\\alpha \\cap \\mathcal {L}^0$ of $\\mathcal {L}^0$ precisely at the intersection points $\\mathcal {L}^0 \\cap P_{d(\\alpha )-1}(X^\\alpha ,0) = \\lbrace q^\\alpha _1,\\dots ,q^\\alpha _{m_0(P(\\alpha )_{d-1}(X^\\alpha ,0))}\\rbrace ,$ cf.", "and .", "The complex link of codimension 2 can be identified with the general fiber $\\mathcal {L}^1 \\cong l_{2}^{-1}(\\lbrace \\delta _2\\rbrace ) \\cap \\mathcal {L}^0$ for some regular value $\\delta _2$ off the discriminant and we can use the function $\\lambda = |l_2-\\delta _2|^2$ as a Morse function in order to reconstruct $\\mathcal {L}^0$ from $\\mathcal {L}^1$ .", "It can be shown that for suitable choices of the represenatives involved, the critical points of $\\lambda $ on the boundary of $\\mathcal {L}^0$ are “outward pointing” and hence do not contribute to changes in topology; see for instance or for a discussion.", "For an interior critical point $q^\\alpha _j \\in V^\\alpha \\cap \\mathcal {L}^0$ we have the product of the tangential and the normal Morse data $\\left( D^{d(\\alpha )-1}, \\partial D^{d(\\alpha )-1} \\right) \\times \\left( C(\\mathcal {L}(X,V^\\alpha )), \\mathcal {L}(X,V^\\alpha ) \\right)$ where $D^{d(\\alpha )}$ is the disc of real dimension $d(\\alpha ) = \\dim _\\mathbb {C}V^\\alpha $ and $C(\\mathcal {L}(X,V^\\alpha ))$ the cone over the complex link of $X$ along $V^\\alpha $ .", "It is a straighforward calculation that the Euler characteristic changes precisely by $(-1)^{d(\\alpha )-1} (1 - \\chi (\\mathcal {L}(X,V^\\alpha )))$ for the attachement of this cell at any of the critical points $q_j^\\alpha $ .", "Summation 
over all these points on all relevant strata therefore gives us back the Formula (REF ) by Lê and Teissier.", "It is evident that the above procedure can be applied inductively, cf.", ".", "This allows to reconstruct the codimension $i$ complex link $\\mathcal {L}^i$ of $(X,0)$ from its hyperplane sections $\\mathcal {L}^{i} \\supset \\mathcal {L}^{i+1} \\supset \\dots \\supset \\mathcal {L}^{d-1} = \\lbrace x_1,\\dots ,x_{m_0(X,0)}\\rbrace $ starting with $\\mathcal {L}^{d-1}$ which is just a set of points whose number is equal to the multiplicity $m_0(X,0)$ of $(X,0)$ at the origin.", "We leave it to the reader to verify the formula $\\chi ( \\mathcal {L}^i ) =\\sum _{\\alpha \\in A}\\left( \\sum _{j=i+1}^{d(\\alpha )}(-1)^{d(\\alpha )-j} m_0\\left( P_{d(\\alpha )-j}(X^\\alpha ,0) \\right) \\right)\\cdot \\left( 1 - \\chi (\\mathcal {L}(X,V^\\alpha ) \\right).$ The coefficients appearing in this formula are nothing but the local Euler obstruction of $(X^\\alpha \\cap D_{i-1},0)$ at the origin: $\\mathrm {Eu}(X^\\alpha \\cap D_{i-1},0) = \\sum _{j=i}^{d(\\alpha )}(-1)^{d(\\alpha )-j}m_0\\left( P_{d(\\alpha )-j}(X^\\alpha ,0)\\right)$ cf.", "." ], [ "Polar multiplicities of generic determinantal varieties", "We now turn towards the study of the generic determinantal varieties $M_{m,n}^s \\subset \\mathbb {C}^{m\\times n}$ .", "These are equipped with the rank stratification, i.e.", "the decomposition $M_{m,n}^s = \\bigcup _{r<s} V_{m,n}^r, \\quad V_{m,n}^r = \\lbrace \\varphi \\in \\mathbb {C}^{m\\times n} : \\operatorname{rank}\\varphi = r \\rbrace .$ Due to its local analytic triviality, this stratification is easily seen to satisfy both Whitney's conditions (a) and (b).", "The reduced Euler characteristics of the classical complex links have been computed by Ebeling and Gusein-Zade in : $1 - \\chi ( \\mathcal {L}(M_{m,n}^s,0)) = (-1)^{s-1} { m-1 \\atopwithdelims ()s-1 },$ where, without loss of generality, it is assumed that $m \\le n$ .", "The generic determinantal varieties admit a recursive pattern in the following sense.", "For $r<s\\le m \\le n$ , a normal slice to the stratum $V_{m,n}^r \\subset M_{m,n}^s$ through the point $\\mathbf {1}_{m,n}^r$ is given by the set of matrices of the form $N_{m,n}^r = \\mathbf {1}^r \\oplus \\mathbb {C}^{(m-r) \\times (n-r)} =\\begin{pmatrix}\\mathbf {1}^r & 0 \\\\0 & \\mathbb {C}^{(m-r)\\times (n-r)}\\end{pmatrix} \\subset \\mathbb {C}^{m\\times n}.$ It follows immediately that $\\mathcal {L}(M_{m,n}^s,V_{m,n}^r) \\cong \\mathcal {L}(M_{m-r,n-r}^{s-r},0)$ and hence $1-\\chi ( \\mathcal {L}(M_{m,n}^s, V_{m,n}^r))= (-1)^{s-r-1}{ m-r-1 \\atopwithdelims ()s-r-1 }.$ In order to determine the topological Euler characteristic of the complex links of higher codimension $\\mathcal {L}^i(M_{m,n}^s,0)$ of the generic determinantal varieties by means of the previous section, we need to know all the relevant polar multiplicities $e_{m,n}^{r,k} := m_k(M_{m,n}^{r+1},0) = m_0 \\left( P_k(M_{m,n}^{r+1},0) \\right).$ There are several methods to achieve this.", "For instance, one could simply choose random linear forms $l_i$ and compute the resulting multiplicity with the aid of a computer algebra system using e.g.", "Serre's intersection formula.", "However, this approach provides very little insight and the results a priori depend on the choice of the linear forms.", "Recently, X. 
Zhang has computed the polar multiplicities in using Chern class calculus, which is an exact computation not depending on any particular choices.", "His formulas, however, are very complicated since they appear as byproducts of the study of the Chern-Mather classes of determinantal varieties.", "In this section we will follow a more direct approach to the computation of the polar multiplicities using Chern classes.", "In , Lê and Teissier give the following formula for the polar multiplicities of a germ $(X,0) \subset (\mathbb {C}^n,0)$ : $m_0\left( P_{k}(\mathcal {D}) \right) = (-1)^{d-1}\int _{\mathfrak {Y}} c_{k}(\tilde{T}) \cdot c_1(\mathcal {O}(1))^{d-k-1}.$ Here the integral is taken over the exceptional divisor $\mathfrak {Y}$ of the blowup $\mathfrak {X}$ of the Nash modification $\tilde{X}$ along the pullback of the maximal ideal at the origin for $(X,0) \subset (\mathbb {C}^n,0)$ and $\mathcal {O}(1)$ denotes the dual of the tautological bundle for that blowup.", "By construction, these spaces can be arranged in a commutative diagram ${\mathfrak {Y} [r] @{->}[dr] [d] &\nu ^{-1}(\lbrace 0\rbrace ) @{->}[dr] &\\\pi ^{-1}(\lbrace 0\rbrace ) @{->}[dr] &\mathfrak {X} [r] [d] &\tilde{X} [d]^\nu \\&\mathrm {Bl}_0 X [r]^\pi &X.}$ where $\mathrm {Bl}_0 X$ denotes the usual blowup of $X$ at the origin.", "We will now describe this diagram for the particular case where $(X,0) = (M_{m,n}^s,0) \subset (\mathbb {C}^{m\times n},0)$ is a generic determinantal variety and deduce our particular formula (REF ) from that.", "The Nash blowup of $(M_{m,n}^s,0)\subset (\mathbb {C}^{m\times n},0)$ has been studied by Ebeling and Gusein-Zade in and we briefly review their discussion.", "Let $r=s-1$ be the rank of the matrices in the open stratum $V_{m,n}^{r} \subset M_{m,n}^s$ .", "The tangent space to $V_{m,n}^r$ at a point $\varphi $ is known to be $T_\varphi V_{m,n}^{r} = \lbrace \psi \in \operatorname{Hom}( \mathbb {C}^n, \mathbb {C}^m ) : \psi ( \ker \varphi )\subset \mathrm {im}\, \varphi \rbrace .$ This fact can be exploited to replace the Grassmannian for the Nash blowup of $M_{m,n}^{s}$ by a product: Let $\operatorname{Grass}(r,m)$ be the Grassmannian of $r$ -planes in $\mathbb {C}^m$ and $\operatorname{Grass}(r,n)$ the Grassmannian of $r$ -planes in $(\mathbb {C}^n)^\vee $ , the dual of $\mathbb {C}^n$ .", "Then the Gauss map factors through $\hat{\gamma }: V_{m,n}^r \rightarrow \operatorname{Grass}(r,n) \times \operatorname{Grass}(r,m), \quad \varphi \mapsto \left( (\ker \varphi )^\perp , \mathrm {im}\, \varphi \right),$ where by $(\ker \varphi )^\perp $ we mean the linear forms in $(\mathbb {C}^n)^\vee $ vanishing on $\ker \varphi \subset \mathbb {C}^n$ .", "We denote this double Grassmannian by $G = \operatorname{Grass}(r,n) \times \operatorname{Grass}(r,m)$ .", "On $G$ we have the two tautological exact sequences $0 \rightarrow S_1 \overset{i_1}{\longrightarrow } \mathcal {O}^n \overset{\pi _1}{\longrightarrow } Q_1 \rightarrow 0$ and $0 \rightarrow S_2 \overset{i_2}{\longrightarrow } \mathcal {O}^m \overset{\pi _2}{\longrightarrow } Q_2 \rightarrow 0$ coming from the two factors with $S_2$ corresponding to the images and $Q_1^\vee $ to the kernels of the matrices $\varphi $ under the modified Gauss map $\hat{\gamma }$ .", "With this notation, the space of matrices $\operatorname{Hom}(\mathbb {C}^n,\mathbb {C}^m) \cong (\mathbb {C}^n)^\vee \otimes \mathbb {C}^m$ pulls back to the
trivial bundle $\\mathcal {O}^n \\otimes \\mathcal {O}^m$ from which we may project to the product $\\pi = \\pi _1 \\otimes \\pi _2 \\colon \\mathcal {O}^n \\otimes \\mathcal {O}^m \\rightarrow Q_1 \\otimes Q_2.$ Then the condition on $\\psi $ in (REF ) becomes $\\pi ( \\psi ) = 0$ and consequently the Nash bundle on $\\tilde{M}_{m,n}^s$ is given by $\\tilde{T} = \\hat{\\gamma }^* \\left( \\ker \\pi \\right).$ The Nash transform $\\tilde{M}_{m,n}^s$ itself can easily be seen to be isomorphic to the total space of the vector bundle $\\tilde{M}_{m,n}^s \\cong \\left| \\operatorname{Hom}\\left( (S_1)^\\vee , S_2 \\right) \\right|= \\left| S_1 \\otimes S_2 \\right|$ on $G$ .", "In particular, $\\tilde{M}_{m,n}^s$ is smooth and the maximal ideal of the origin in $\\mathbb {C}^{m\\times n}$ pulls back to the ideal sheaf of the zero section in $\\tilde{M}_{m,n}^s$ .", "Thus the exceptional divisor $\\mathfrak {Y}$ in (REF ), i.e.", "the domain of integration in (REF ), is nothing but the projectivized bundle $\\mathfrak {Y} = \\mathbb {P}\\tilde{M}_{m,n}^s = \\mathbb {P}\\left( S_1\\otimes S_2 \\right).$ Proposition 2.5 The $k$ -th polar multiplicity of $(M_{m,n}^{r+1},0) \\subset (\\mathbb {C}^{m\\times n},0)$ is given by $e_{m,n}^{r,k} = (-1)^{(m+n)\\cdot r - r^2 - 1}\\int _{G} s_{k}\\left( Q_1 \\otimes Q_2 \\right) s_{(m+n)r - 2r^2 -k}(S_1 \\otimes S_2)$ where $G = \\operatorname{Grass}(r,n) \\times \\operatorname{Grass}(r,m)$ , $S_i$ and $Q_i$ are the tautological sub- and quotient bundles coming from either one of the two factors, and $s_k$ denotes the $k$ -th Segre class.", "Starting from the Lê-Teissier formula (REF ) we substitute the different terms according to the above identifications for the generic determinantal varieties.", "From (REF ) we see that for every $k \\ge 0$ $c_k(\\tilde{T}) = s_k(Q_1 \\otimes Q_2)$ is the $k$ -th Segre class of the complement $Q_1 \\otimes Q_2$ of $\\tilde{T}$ in $\\mathcal {O}^n \\otimes \\mathcal {O}^m$ .", "The integral in question becomes $\\int _{\\mathbb {P}(S_1 \\otimes S_2)} s_{k}(Q_1\\otimes Q_2) \\cdot c_1(\\mathcal {O}(1))^{(m+n)r - r^2-1-k}$ and we may perform this integration in two steps with the first one being integration along the fibers of the projection $\\mathbb {P}(S_1 \\otimes S_2) \\rightarrow G$ .", "Since $\\mathcal {O}(1)$ denotes the dual of the tautological bundle for the projectivization of the underlying vector bundle $S_1 \\otimes S_2$ , the result now follows from the projection formula, cf.", ".", "Formula (REF ) can be implemented in computer algebra systems such as Singular .", "For instance, using the library schubert.lib, the computations can easily be carried out for $m,n\\le 5$ on a desktop computer.", "Series such as, for example, $e_{7,8}^{4,k}$ are also still feasible, but take up to 9 minutes to finish.", "We have computed several polar multiplicities of the generic determinantal varieties using this formula.", "The results are listed in Tables REF – REF .", "Remark 2.6 The reader will also note a symmetry that first appears in Table REF : For $0\\le r \\le m\\le n$ the polar multiplicities satisfy $e_{m,n}^{r,k} = e_{m,n}^{m-r,2(m-r)r-k}.$ This phenomenon is based on a duality of the (projectivized) conormal modifications of the generic determinantal varieties, as has been explained to the author by Terence Gaffney in an oral communication.", "Formula (REF ) then follows from .", "Interestingly, we have not succeeded to derive this symmetry from the formula in (REF ), but we nevertheless use it in the 
following tables in order not to duplicate the statements.", "Remark 2.7 The computation of Chern- and Segre classes of tensor products of vector bundles is a surprisingly expensive task from a computational point of view.", "Different formulas and algorithms have, for instance, been implemented in the Singular library chern.lib; see also for a further discussion.", "The $k$ -th Segre class of the tensor product $S_1 \otimes S_2$ of the two tautological subbundles on $G = \operatorname{Grass}(r,n)\times \operatorname{Grass}(r,m)$ is the restriction of a universal polynomial $P_k\in \mathbb {Z}[c_1(S_1),\dots ,c_r(S_1), c_1(S_2),\dots ,c_r(S_2)]$ in the Chern classes of the tautological bundles on the product of infinite Grassmannians $\bigcup _{m,n\in \mathbb {N}} \operatorname{Grass}(r,n)\times \operatorname{Grass}(r,m)$ .", "However, these polynomials are not sparse and their degree in the Chern roots is bounded only by $r^2$ .", "Given that we have $2r$ variables, the number of coefficients of the polynomials $P_k$ can roughly be estimated by ${ r^2 + 2r -1 \atopwithdelims ()2r - 1}$ .", "Already for values of $r\ge 10$ we can therefore expect to have flooded the full RAM of any modern desktop computer.", "Compared to that, the explicit model for the cohomology of $\operatorname{Grass}(r,n)$ introduced above leads to an algebra of dimension ${m \atopwithdelims ()r}{n\atopwithdelims ()r}$ for the cohomology of $G$ .", "This number is in general strictly smaller than the number of coefficients of $P_k$ .", "Since we shall only need the results for fixed values of $m$ and $n$ , it seems likely that a manual implementation of a modular approach, using the ideals $J_{m,r}$ introduced above, could produce some further results for the polar multiplicities $e_{m,n}^{r,k}$ which cannot be reached using the methods provided by schubert.lib and chern.lib.", "Other than that, it would, of course, be even more appealing to find a closed formula for the polar multiplicities as a function of $m,n,r$ , and $k$ .", "Table: The polar multiplicities $e_{2,n}^{1,k}$ of $2\times n$ -matrices for $n\le 7$ ; values for $k$ which are not explicitly listed are zero.", "Table: Polar multiplicities for $3\times n$ -matrices for $n \le 20$ ; all values for $k$ which are not explicitly listed are zero.", "Table: Polar multiplicities for $4\times n$ -matrices. A - indicates that this value has not been computed; all other entries for $k$ and $l$ which are not explicitly listed are equal to zero.", "Table: Polar multiplicities for $5\times n$ -matrices of rank 2 and 3; all entries for $k$ and $l$ which are not explicitly listed are equal to zero.", "Table: Polar multiplicities for $5\times n$ -matrices of rank 1 and 4; all entries for $k$ and $l$ which are not explicitly listed are equal to zero.", "Example 2.8 We may use the above tables together with formula (REF ) to compute the Euler characteristics of complex links of higher codimension for the generic determinantal varieties.", "For instance $\chi \left( \mathcal {L}^6(M_{3,4}^3,0) \right) =e_{3,4}^{2,0}-e_{3,4}^{2,1}+e_{3,4}^{2,2}-e_{3,4}^{2,3}= 6 - 16 + 27 - 24 = -7.$ Note that since $\mathcal {L}^6(M_{3,4}^3,0)$ is smooth of complex dimension 3, the summation over $\alpha \in A$ degenerates and only the smooth stratum $V_{3,4}^2$ is relevant.", "Moreover, the complex link of $M_{3,4}^3$ along this stratum is empty, so that the factor $(1 - \chi (\mathcal {L}(M_{3,4}^3,V_{3,4}^2)))$ simply reduces to 1.", "This computation confirms the
results in an earlier paper where it was shown that the Betti numbers of $\\mathcal {L}^6(M_{3,4}^3,0)$ are $(b_0,b_1,b_2,b_3) = (1,0,1,9).$ In , this was a very particular example.", "We will discuss in Section how the distinct Betti numbers can be computed for all smooth complex links of generic determinantal varieties.", "To also give an example for a singular complex link, consider $\\mathcal {L}^5(M_{3,4}^3,0)$ : This space is of complex dimension 4 and has isolated singularities which are themselves determinantal of the form $(M_{2,3}^2,0)$ .", "If $D^{\\prime }_6$ is a plane of codimension 6 in $\\mathbb {C}^{3\\times 4}$ in general position off the origin such that $\\mathcal {L}^5(M_{3,4}^3,0) = D^{\\prime }_6 \\cap M_{3,4}^3$ , then these singular points are precisely the intersection points of $D^{\\prime }_6$ with $M_{3,4}^2$ and their number is equal to the multiplicity $e_{3,4}^{1,0} = 10$ .", "If we let $l$ be a further, sufficiently general linear form on $\\mathbb {C}^{3\\times 4}$ , then the generic fiber of its restriction to $M_{3,4}^3 \\cap D^{\\prime }_6$ is the previous space $\\mathcal {L}^6(M_{3,4}^3,0)$ whose topology we already know.", "According to Table (REF ), $l$ has 10 classical Morse critical points on $V_{3,4}^2\\cap D^{\\prime }_6$ and 10 further stratified Morse critical points on $V_{3,4}^1 \\cap D^{\\prime }_6 = M_{3,4}^2 \\cap D^{\\prime }_6$ .", "For the first set of points, 10 more cells of real dimension 4 are added which changes the Euler characteristic by $(-1)^{10-6}\\cdot 10 \\cdot 1 = 10$ in Formula (REF ) (resp.", "(REF )).", "The second set of critical points on the lower dimensional stratum $V_{3,4}^1$ have a nontrivial complex link $\\mathcal {L}(M_{3,4}^3,V_{3,4}^1) \\cong \\mathcal {L}^0(M_{2,3}^2,0)$ appearing in the normal Morse datum.", "This complex link is nothing but the Milnor fiber of the $A_0^+$ -singularity in : Despite being a space of complex dimension 3, it is homotopy equivalent to a 2-sphere and its Betti numbers are $(b_0,b_1,b_2,b_3) = (1,0,1,0)$ in accordance with the Formula by Ebeling and Gusein-Zade (REF ).", "This means that we attach real 3-cells rather than 4-cells and the Euler characteristic changes by $-10$ rather than $+10$ , as one might have expected.", "The overall outcome therefore is $\\chi \\left( \\mathcal {L}^5(M_{3,4}^3,0 )\\right) =\\chi (\\mathcal {L}^6(M_{3,4}^3,0))+ \\underbrace{10 \\cdot (1-(1-0+1))}_{\\alpha = 1}+ \\underbrace{10 \\cdot (1-0)}_{\\alpha = 2} = -7.$ It is interesting to see the cancellation of the two contributions to the Euler characteristic given that the equality of the two relevant multiplicities is not a coincidence, but due to the duality noted by Gaffney.", "We shall see later on that the first four Betti numbers of the open stratum $V_{3,4}^2 \\cap D^{\\prime }_6$ of $\\mathcal {L}^5(M_{3,4}^3,0)$ are $(b_0,b_1,b_2,b_3) = (1,0,1,0).$ It can be shown that the attachements of the 3-cells at the points of $V_{3,4}^1 \\cap D^{\\prime }_6$ glue their boundaries all to the very same generator of the second homology group.", "From the long exact sequence of the pair $(X_{3,4}^2\\cap D^{\\prime }_6, V_{3,4}^2 \\cap D^{\\prime }_6)$ and the previous computation of the Euler characteristic one can then deduce that the Betti numbers of $\\mathcal {L}^5(M_{3,4}^3,0)$ must be $(b_0,b_1,b_2,b_3,b_4) = (1,0,0,9,1).$ In particular we see that the cells attached at the classical critical points of $l$ on the smooth stratum kill off all the cycles in the top homology group of $\\mathcal 
{L}^6(M_{3,4}^3,0)$ .", "Those coming from the stratified Morse critical points on the lower dimensional stratum survive and lead to new cycles.", "Details for the computation of the Betti numbers in this example will appear in a forthcoming note." ], [ "Determinantal strata as homogeneous spaces", "Let $G$ be a Lie group and $* \\colon G \\times X \\rightarrow X$ a smooth action on a manifold $X$ .", "Then for every point $x\\in X$ the orbit $G*x \\subset X$ is a locally closed submanifold which is diffeomorphic to the quotient $G/G_x$ of $G$ by the stabilizer $G_x$ of $x$ .", "The next lemma shows that up to homotopy we can always find a compact model for this orbit by choosing an appropriate maximal compact subgroup of $G$ .", "Lemma 3.1 Let $G$ be a Lie group, $G^{\\prime } \\subset G$ a closed subgroup, and $U \\subset G$ a maximal compact subgroup such that $U^{\\prime } = U \\cap G^{\\prime }$ is again a maximal compact subgroup of $G^{\\prime }$ .", "Then the inclusion $U/U^{\\prime } \\hookrightarrow G / G^{\\prime }$ is a weak homotopy equivalence.", "The projection $G \\rightarrow G/G^{\\prime }$ is a fiber bundle with fiber $G^{\\prime }$ and the same holds for $U \\mapsto U/U^{\\prime }$ with fiber $U^{\\prime }$ .", "Hence, there is a commutative diagram of long exact sequences of homotopy groups ${\\cdots [r] &\\pi _k( G^{\\prime } ) [r] &\\pi _k( G ) [r] &\\pi _k( G/G^{\\prime } ) [r] &\\pi _{k-1}(G^{\\prime } ) [r] &\\cdots \\\\\\cdots [r] &\\pi _k( U^{\\prime } ) [u] [r] &\\pi _k( U ) [u] [r] [u] &\\pi _k( U/U^{\\prime } ) [r] [u] &\\pi _{k-1}( U^{\\prime } ) [r] [u] &\\cdots }$ and it is well known that for any Lie group the inclusion of its maximal compact subgroup is a homotopy equivalence.", "The assertion therefore follows from the five-lemma." 
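As a simple illustration of Lemma 3.1, consider the tautological action of $G = \operatorname{GL}(n;\mathbb {C})$ on $\mathbb {C}^n$ and the point $x = e_1$ , the first standard basis vector. The orbit is $G*x = \mathbb {C}^n \setminus \lbrace 0\rbrace $ and the stabilizer $G_x$ consists of the invertible matrices whose first column is $e_1$ , a subgroup isomorphic to $\operatorname{GL}(n-1;\mathbb {C}) \ltimes \mathbb {C}^{n-1}$ . Choosing $U = U(n)$ as maximal compact subgroup of $G$ , the intersection $U^{\prime } = U \cap G_x = \mathbf {1}^1 \oplus U(n-1)$ is a maximal compact subgroup of $G_x$ and the lemma specializes to the familiar weak homotopy equivalence $U/U^{\prime } \cong U(n)/\left( \mathbf {1}^1 \oplus U(n-1) \right) \cong S^{2n-1} \hookrightarrow \mathbb {C}^n \setminus \lbrace 0\rbrace \cong G/G_x,$ i.e. the inclusion of the unit sphere into the punctured affine space. In the discussion below, the role of $G$ is played by the product $\operatorname{GL}(m;\mathbb {C}) \times \operatorname{GL}(n;\mathbb {C})$ acting on the space of $m\times n$ -matrices.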
], [ "The Lie group action on $\\mathbb {C}^{m \\times n}$", "We now turn to the discussion of the strata in the rank stratification as homogeneous spaces.", "Fix integers $0 < m \\le n$ .", "The space $\\mathbb {C}^{m\\times n}$ of complex $m\\times n$ -matrices has a natural left action by the complex Lie group $G_{m,n} := \\operatorname{GL}(m;\\mathbb {C}) \\times \\operatorname{GL}(n;\\mathbb {C})$ via multiplication: $* \\colon G_{m,n} \\times \\mathbb {C}^{m\\times n} \\rightarrow \\mathbb {C}^{m\\times n},\\quad ((P,Q),A) \\mapsto (P,Q)*A = P \\cdot A \\cdot Q^{-1}.$ For two matrices $A \\in \\mathbb {C}^{m\\times n}$ and $B \\in \\mathbb {C}^{m^{\\prime } \\times n^{\\prime }}$ we will denote by $A \\oplus B$ the $(m+m^{\\prime }) \\times (n+n^{\\prime })$ block matrix $\\begin{pmatrix}A & 0 \\\\0 & B\\end{pmatrix}.$ Let $0_{m,n} \\in \\mathbb {C}^{m\\times n}$ be the zero matrix.", "For any number $0\\le r \\le m$ we will write $\\mathbf {1}_{m,n}^r = \\mathbf {1}^r \\oplus 0_{m-r,n-r}$ for the $m\\times n$ -matrix with a unit matrix $\\mathbf {1}^r$ of rank $r$ in the upper left corner and zeroes in all other entries.", "Then clearly $V_{m,n}^r = G * \\mathbf {1}_{m,n}^r \\cong G_{m,n}/G_{m,n}^r,$ where $G_{m,n}^r$ is the stabilizer of $\\mathbf {1}_{m,n}^r$ in $G$ .", "A direct computation yields that $G_{m,n}^r$ consists of pairs of block matrices of the form $\\left(\\begin{pmatrix}A & B \\\\0 & C\\end{pmatrix},\\begin{pmatrix}A & 0 \\\\D & E\\end{pmatrix}\\right)$ with $A \\in \\operatorname{GL}(r,\\mathbb {C})$ , $C \\in \\operatorname{GL}(m-r;\\mathbb {C})$ and $E\\in \\operatorname{GL}(n-r;\\mathbb {C})$ invertible, and $B$ and $D$ arbitrary of appropriate sizes.", "As a compact subgroup $U_{m,n} \\subset G_{m,n}$ we may choose the unitary matrices $U(m) \\times U(n)$ .", "It is easily verified that its intersection $U_{m,n}^r$ with the subgroup $G_{m,n}^r$ consists of pairs of matrices $\\left(\\begin{pmatrix}S & 0 \\\\0 & P\\end{pmatrix},\\begin{pmatrix}S & 0 \\\\0 & Q\\end{pmatrix}\\right)= \\left( S \\oplus P, S \\oplus Q \\right)$ with $S \\in U(r)$ , $P \\in U(m-r)$ , and $Q \\in U(n-r)$ , and that this is in fact a maximal compact subgroup of $G_{m,n}^r$ .", "Note that due to the fact that in contrast to $G_{m,n}^r$ the off-diagonal blocks in the subgroup $U_{m,n}^r$ are all zero, we find that $U_{m,n}^r \\cong U(r) \\times U(m-r) \\times U(n-r)$ is again isomorphic to a product of unitary groups.", "Let us, for the moment, consider only the first factor $U(m)$ of $U_{m,n}$ which we may consider as a subgroup via the inclusion $U(m) \\times \\lbrace \\mathbf {1}^r\\rbrace \\subset U_{m,n}$ .", "The stabilizer of $\\mathbf {1}_{m,n}^r$ of the restriction of the action to $U(m)$ is simply the subgroup $\\mathbf {1}^r \\oplus U(m-r)$ .", "The $U(m)$ -orbit can easily be identified with the Stiefel manifold $\\operatorname{Stief}(r,m)$ of orthonormal $r$ -frames in $\\mathbb {C}^m$ : $\\operatorname{Stief}(r,m) \\cong U(m)/\\left( \\mathbf {1}^r \\oplus U(m-r) \\right)\\cong U(m)* \\mathbf {1}_{m,n}^r$ so that an $r$ -frame $\\underline{v} = (v_1,\\dots ,v_r)$ in $\\mathbb {C}^m$ is given by the first $r$ columns of the matrix $(\\underline{v}) = A \\cdot \\mathbf {1}_{m,n}^r$ for some $A\\in U(m)$ .", "The group $U(r)$ operates naturally on the Stiefel manifold via the left action $U(r) \\times \\operatorname{Stief}(r,m) \\rightarrow \\operatorname{Stief}(r,m), \\quad (S, A\\cdot \\mathbf {1}^r_{m,r} )\\mapsto A \\cdot \\mathbf {1}^r_{m,r} \\cdot S^{-1}.$ The quotient 
of this action is the Grassmannian of $r$ -planes $\\operatorname{Grass}(r,m)$ since either two $r$ -frames span the same subspace if and only if they lay in the same orbit under this $U(r)$ -action.", "It is easy to see with the above identifications (REF ) that two matrices $A$ and $A^{\\prime }$ in $U(m)$ represent the same element in $\\operatorname{Grass}(r,m)$ if and only if $A^{-1} \\cdot A^{\\prime } \\in U(r) \\oplus U(m-r)$ : $A \\cdot \\mathbf {1}_{m,r}^r &=& A^{\\prime } \\cdot \\mathbf {1}_{m,r}^r \\cdot S^{-1} \\\\\\Leftrightarrow \\qquad \\mathbf {1}_{m,r}^r &=&A^{-1} \\cdot A^{\\prime } \\cdot (S^{-1} \\oplus \\mathbf {1}^{m-r}) \\cdot \\mathbf {1}_{m,r}^r\\\\\\Leftrightarrow \\quad ( \\mathbf {1}^r \\oplus P )&=&A^{-1} \\cdot A^{\\prime } \\cdot (S^{-1} \\oplus \\mathbf {1}^{m-r} ) \\\\\\Leftrightarrow \\quad (S \\oplus P)&=&A^{-1} \\cdot A^{\\prime }$ for some $P \\in U(m-r)$ .", "In other words, the above $U(r)$ action is compatible with the natural inclusion of subgroups $U(r) \\oplus \\mathbf {1}^{m-r} \\hookrightarrow U(r) \\oplus U(m-r) \\hookrightarrow U(m)$ and accordingly $\\operatorname{Grass}(r,m) \\cong \\operatorname{Stief}(r,m)/U(r) \\cong U(m)/U(r) \\oplus U(m-r).$ We can repeat these considerations for the second factor $U(n)$ embedded into $U_{m,n}$ as $\\lbrace \\mathbf {1}^m\\rbrace \\times U(n)$ .", "Then $\\operatorname{Stief}(r,n) \\cong U(n)/(\\mathbf {1}^r\\oplus U(n-r)) \\cong U(n) * \\mathbf {1}_{m,n}^r$ with any $r$ -frame $\\underline{w} \\in \\operatorname{Stief}(r,n)$ given by the first $r$ rows of the matrix $(\\underline{w}) = \\mathbf {1}^{r}_{m,n} \\cdot B^{-1}$ for some $B \\in U(n)$ .", "Accordingly, we will write the left action by $U(r)$ on $\\operatorname{Stief}(r,n)$ as $U(r) \\times \\operatorname{Stief}(r,n) \\rightarrow \\operatorname{Stief}(r,n), \\quad (S, \\mathbf {1}_{m,n}^r \\cdot B^{-1} ) \\mapsto S \\cdot \\mathbf {1}_{m,n}^r \\cdot B^{-1}$ in this case.", "Note that, on the one hand, the subgroup $U_{m,n}^r$ intersects the subgroups $U(m) \\times \\lbrace \\mathbf {1}^n\\rbrace $ and $\\lbrace \\mathbf {1}^m\\rbrace \\times U(n)$ in $\\mathbf {1}^r \\oplus U(m-r)$ and $\\mathbf {1}^r \\oplus U(n-r)$ , respectively, and the action of the latter subgroups affects only either one of the two factors.", "The $U(r)$ -action, on the other hand, is “diagonal” and we may exploit these facts by observing that the quotient $U_{m,n}/U(m-r)\\times U(n-r) \\cong \\operatorname{Stief}(r,m) \\times \\operatorname{Stief}(r,n)$ is a product of Stiefel manifolds, equipped with a free, diagonal $U(r)$ -action.", "The quotient $\\operatorname{Stief}(r,m)\\times \\operatorname{Stief}(r,n)/U(r)$ is then naturally isomorphic to $U_{m,n}/\\left( U(r) \\times U(m-r) \\times U(n-r) \\right)$ and, via the particular choice of the matrix $\\mathbf {1}_{m,n}^r$ , this manifold can be identified with the orbit $U_{m,n}*\\mathbf {1}_{m,n}^r$ .", "We will in the following denote this orbit by $O_{m,n}^r \\subset V_{m,n}^r \\subset \\mathbb {C}^{m\\times n}$ and refer to it as the compact orbit model for the stratum $V_{m,n}^r$ .", "Lemma 3.2 The two natural projections ${\\operatorname{Stief}(r,n) @{^{(}->}[dr] & &\\operatorname{Stief}(r,m) @{_{(}->}[dl] \\\\&O_{m,n}^r[dl]_{\\lambda _2} [dr]^{\\lambda _1} & \\\\\\operatorname{Grass}(r,n) & & \\operatorname{Grass}(r,m)}$ equip the space $U_{m,n}/U_{m,n}^r \\cong U_{m,n}*\\mathbf {1}_{m,n}^r$ with two structures as a fiber bundle over the respective Grassmannian with Stiefel manifolds as fibers.", "It 
suffices to establish this claim for the first projection $\\lambda _1$ to $\\operatorname{Grass}(r,m)$ .", "Consider the commutative diagram ${\\operatorname{Stief}(r,m)\\times \\operatorname{Stief}(r,n) [d]^{\\lambda ^{\\prime }_1} [r]^-{\\alpha } &O_{m,n}^r [d]^{\\lambda _1} \\\\\\operatorname{Stief}(r,m) [r]^{\\beta } &\\operatorname{Grass}(r,m)}$ where $\\lambda ^{\\prime }_1$ takes a pair of $r$ -frames $(\\underline{v}, \\underline{w})$ to $\\underline{v}$ , $\\beta $ is the quotient map $\\underline{v} \\mapsto \\operatorname{span}\\underline{v}$ from (REF ) and $\\alpha $ the one from the discussion of (REF ).", "We need to describe the fiber of an arbitrary point $W \\in \\operatorname{Grass}(r,m)$ .", "To this end, consider its preimages $(\\beta \\circ \\lambda ^{\\prime }_1)^{-1}(\\lbrace W\\rbrace ) =\\beta ^{-1}(\\lbrace W\\rbrace ) \\times \\operatorname{Stief}(r,n)\\overset{\\lambda ^{\\prime }_1}{\\longrightarrow }\\beta ^{-1}(\\lbrace W\\rbrace ) = U(r) * \\underline{v}$ with $\\underline{v} = (v_1,\\dots ,v_r) \\in \\operatorname{Stief}(r,m)$ some $r$ -frame in $\\mathbb {C}^m$ with $\\operatorname{span}\\underline{v} = W$ .", "The fiber of $\\lambda ^{\\prime }_1$ over $\\underline{v}$ is simply the Stiefel manifold $\\operatorname{Stief}(r,n)$ .", "Now if $\\underline{v}^{\\prime }$ is any other $r$ -frame spanning $W$ , then there exists a unique matrix $S\\in U(r)$ such that $\\underline{v}^{\\prime } = \\underline{v} \\cdot S^{-1}$ .", "The free diagonal action on $\\operatorname{Stief}(r,m) \\times \\operatorname{Stief}(r,n)$ gives a natural identification of the fibers of $\\lambda ^{\\prime }_1$ over $\\underline{v}$ and $\\underline{v}^{\\prime }$ via $(\\underline{v}, \\underline{w}) \\mapsto (\\underline{v}^{\\prime }, S \\cdot \\underline{w})$ and this furnishes an obvious notion of parallel sections of $\\lambda ^{\\prime }_1$ over the orbit $U(r) * \\underline{v}$ .", "These parallel sections can then be identified with either one of the Stiefel manifolds $\\operatorname{Stief}(r,n) \\times \\lbrace \\underline{v}\\rbrace $ over a point $\\underline{v}$ in the orbit.", "Remark 3.3 The structures of the manifolds $O_{m,n}^r$ as fiber bundle is in general not trivial.", "For instance $S^{2m-1} \\cong O_{m,1}^1 \\overset{\\lambda _1}{\\longrightarrow } \\operatorname{Grass}(1,m) \\cong \\mathbb {P}^{m-1}$ is the Hopf fibration which is known not to be a product." 
], [ "The Cartan model", "The cohomology of homogeneous spaces can be computed via the Cartan model as outlined by Borel in .", "Let $G$ be a compact, connected Lie group and $U \\subset G$ a closed subgroup thereof.", "Then allows for the description of the cohomology of the quotient $G/U$ as the cohomology of an explicit complex $H^\\bullet ( G/U ) \\cong H\\left( S_{U} \\otimes _\\mathbb {Z}\\bigwedge F \\right)$ under certain favourable assumptions on $G$ and $U$ .", "The objects on the right hand side are the following.", "The ring $S_U$ is the cohomology ring of a classifying space for the group $U$ , see for instance .", "Such a classifying space $BU$ for a compact Lie group $U$ is given by the quotient of any weakly contractible space $EU$ with a free $U$ -action.", "Then the projection $EU \\rightarrow BU$ turns $EU$ into a universal bundle in the sense that every principal $U$ -bundle $P$ over a paracompact Haussdorff space $X$ can be written as $P = f^* EU$ for some continuous map $f \\colon X \\rightarrow BU$ .", "In particular, this property can be used to show that the cohomology ring $S_U = H^\\bullet (BU)$ is in fact unique up to unique isomorphism and independent of the choice of the space $EU$ , see e.g.", "The approach by Borel might seem unnecessarily technical given the Milnor construction of universal bundles in three years later..", "In most practical cases (cf. )", "the cohomology rings of classifying spaces are weighted homogeneous polynomial rings $S_U \\cong \\mathbb {Z}[c_1,\\dots ,c_r]$ in variables of even degree and therefore in particular commutative.", "Furthermore, we note that the total space $EG$ of a universal $G$ bundle is naturally equipped with a free $U$ action, as well.", "It can therefore also be used to construct a classifying space $BU$ as the intermediate quotient $EG \\rightarrow BU = EG/U \\rightarrow BG = EG/G.$ The pullback in cohomology of the projection $BU \\rightarrow BG$ is called the characteristic homomorphism $\\rho \\colon S_G \\rightarrow S_U$ for the inclusion of the subgroup $U \\subset G$ , cf.", ".", "The module $F$ is a free, graded $\\mathbb {Z}$ -module $F = \\mathbb {Z}\\varepsilon _1 \\oplus \\dots \\oplus \\mathbb {Z}\\varepsilon _r$ in generators $\\varepsilon _i$ of odd degree.", "Hopf has shown in that the rational homology of a compact Lie group $U$ is graded isomorphic to the homology of a product of odd-dimensional spheres.", "This result has been strengthened to also include cohomology with integer coefficients in the absence of torsion in $H^\\bullet (U)$ , see e.g.", ".", "Then the generators $\\varepsilon _i$ can be thought of as the volume forms of the spheres and the cup product turns the cohomology of the group into an exterior algebra on these generators $H^\\bullet (U) \\cong \\bigwedge \\left( \\mathbb {Z}\\varepsilon _1 \\oplus \\dots \\oplus \\mathbb {Z}\\varepsilon _r \\right)$ which appears as $\\bigwedge F$ in (REF ).", "The differential $D$ on $S_U \\otimes _\\mathbb {Z}\\bigwedge F$ is given by linear extension of the map $D ( a \\otimes 1 ) = 0, \\quad D( 1 \\otimes \\varepsilon _i ) = \\rho (c_i) \\otimes 1$ for all $a\\in S_U$ , where $c_i$ is a transgression element of $\\varepsilon _i$ in a universal $G$ -bundle and $\\rho \\colon S_G \\rightarrow S_U$ the characteristic homomorphism from before.", "A transgression can be defined in a universal $U$ -bundle $\\pi \\colon EU \\rightarrow BU$ as above: The element $\\varepsilon _i$ is called universally transgressive if there exists a cochain 
$\omega _i$ on $EU$ which restricts to the cohomology class $\varepsilon _i$ in every fiber and for which there exists another cochain $a_i$ on $BU$ with $\pi ^* a_i = d\omega _i$ .", "Then $c_i$ is taken to be the cohomology class of $a_i$ and it is said to “correspond to $\varepsilon _i$ under transgression”.", "For a more detailed account see .", "We will discuss the particular form of this transgression below in the cases of interest for this article.", "Note that (REF ) is an isomorphism of graded $\mathbb {Z}$ -modules with grading given by the degree of the cohomology classes on either side.", "But we can also think of the algebra $S_U \otimes _{\mathbb {Z}} \bigwedge F$ as a Koszul algebra in the generators $1 \otimes _\mathbb {Z}\varepsilon _i$ over the ring $S_U$ : $S_U \otimes _\mathbb {Z}\bigwedge F \cong \bigoplus _{p=0}^r\left(\bigwedge ^p \left(\bigoplus _{i=1}^r S_U \otimes _{\mathbb {Z}} \varepsilon _i \right)\right).$", "This gives another grading on the right hand side of (REF ) by the degrees $p$ of the distinct exterior powers.", "In the following we will refer to the two gradings as the cohomological degree and the Koszul degree, respectively." ], [ "Maximal tori and the Weyl group", "As explained in , the characteristic homomorphism $\rho \colon S_G \rightarrow S_U$ associated to an inclusion of a subgroup $U \subset G$ is best understood in terms of inclusions of maximal tori $S \subset U$ and $T \subset G$ .", "This will be an essential ingredient for the computation of the cohomology of the orbits $O_{m,n}^r$ in the determinantal strata.", "For the group $U(1) \cong S^1 \subset \mathbb {C}$ a classifying space $BU(1)$ is given by the infinite projective space $BU(1) \cong \bigcup _{k=1}^\infty \mathbb {P}^k$ which can be understood as a direct limit with $\mathbb {P}^k$ included in $\mathbb {P}^{k+1}$ as the hyperplane section at infinity.", "A universal $U(1)$ -bundle is then given by $\mathcal {O}(-1)^*$ , the tautological bundle with its zero section removed or, equivalently, by the direct limit of unit spheres $S^{2k+1} \subset \mathbb {C}^{k+1}\setminus \lbrace 0\rbrace $ which are projected to $\mathbb {P}^k$ via the Hopf fibration.", "Then the cohomology ring $S_{U(1)} \cong \mathbb {Z}[\alpha ]$ of $BU(1)$ is a free polynomial ring in $\alpha $ , the first Chern class of $\mathcal {O}(1)$ , and the generator $\varepsilon $ of $H^1( U(1))$ corresponds to $\alpha $ under transgression.", "Now it is easy to see that for a torus $T = (U(1))^r$ the classifying spaces and universal bundles can be chosen to be merely products of the one just described for $U(1)$ .", "It follows that $S_T$ is a polynomial ring $S_T \cong \mathbb {Z}[\alpha _1,\dots ,\alpha _r]$ with all $\alpha _j$ of degree 2.", "If $T$ is a maximal torus in a compact connected Lie group $G$ , then $S_G$ is contained in $S_T$ as the invariant ring under the action of the Weyl group.", "Moreover, if $U\subset G$ is a subgroup and the tori $S\subset U$ and $T\subset G$ have been chosen such that $S \subset T$ , then the characteristic homomorphism $\rho = \rho (U,G)$ for the inclusion $U \subset G$ is completely determined by the one for the inclusion $S \subset T$ so that one has a commutative diagram ${S_G [r]^{\rho (U,G)} @{^{(}->}[d] &S_U @{^{(}->}[d] \\S_T [r]^{\rho (S,T)} &S_S}$ where the vertical arrows denote the inclusions of invariant subrings.", "We note that in general one needs real coefficients in cohomology as in .", "For
the particular cases that we shall need below, however, the calculations have been carried out for integer coefficients as well." ], [ "Cohomology of Stiefel manifolds and Grassmannians", "As discussed earlier, the Stiefel manifolds and Grassmannians can be considered as homogeneous spaces of $U(n)$ modulo various subgroups of block matrices with unitary blocks.", "Also the orbit varieties $O_{m,n}^r$ can be decomposed into these building blocks.", "Therefore we briefly review the theory for the Lie group $U(n)$ as it can be found in or and illustrate the formula (REF ) for the classical cases, thereby fixing notation for the description of the cohomology of the orbit models $O_{m,n}^r$ that we are really aiming for.", "The cohomology of the unitary group $U(n)$ is known to be $H^\bullet (U(n)) \cong \bigwedge \left(\mathbb {Z}\varepsilon _1 \oplus \mathbb {Z}\varepsilon _2 \oplus \dots \oplus \mathbb {Z}\varepsilon _{n-1} \oplus \mathbb {Z}\varepsilon _n\right)$ with generators $\varepsilon _i$ of degree $2i-1$ , see for example or .", "We will write $F_n =\mathbb {Z}[-1] \oplus \mathbb {Z}[-3] \oplus \dots \oplus \mathbb {Z}[3-2n] \oplus \mathbb {Z}[1-2n]$ for the free graded module whose direct summands are shifted by $1-2i$ for $1 \le i \le n$ so that with this notation $H^\bullet (U(n)) \cong \bigwedge F_n$ .", "A maximal torus in $U(n)$ is given by the diagonal matrices $T &=& \lbrace \mathrm {diag}(\lambda _1,\dots ,\lambda _n) : \lambda _i \in U(1) \rbrace $ and the Weyl group is the symmetric group $\mathfrak {S}_n$ of permutations of $n$ elements in this case.", "Writing $\alpha _1,\dots ,\alpha _n$ for the generators of $S_T$ as above we find that $S_G \hookrightarrow S_T$ is the inclusion of the invariant subring $S_G = \mathbb {Z}[\alpha _1,\dots ,\alpha _n]^{\mathfrak {S}_n}$ .", "According to the fundamental theorem of symmetric functions, $S_G$ is itself a polynomial ring in the elementary symmetric functions $\sigma _k^n(\alpha _1,\dots ,\alpha _n) := \sum _{0<i_1<\dots <i_k\le n} \prod _{j=1}^k \alpha _{i_j}.$ Moreover, the generators $\varepsilon _k$ correspond to these $\sigma _k$ under transgression in a universal bundle $EU(n) \rightarrow BU(n)$ , see for example .", "The cohomology of Stiefel manifolds $\operatorname{Stief}(r,n)$ turns out to be a truncated version of the cohomology of $U(n)$ : $H^\bullet ( \operatorname{Stief}(r,n)) \cong \bigwedge F_{r}[2r-2n].$ In order to make the connection with Formula (REF ) recall that $\operatorname{Stief}(r,n) \cong U(n)/(\mathbf {1}^r \oplus U(n-r))$ .", "As a maximal torus in the subgroup $U^{\prime } := \mathbf {1}^r \oplus U(n-r)$ we may choose $T^{\prime } &=& \lbrace \mathbf {1}^r \oplus \mathrm {diag}(\mu _{1},\dots ,\mu _{n-r}) : \mu _j \in U(1) \rbrace $ so that $T^{\prime }\subset T$ with $T \subset U(n)$ as before.", "Writing $S_{T^{\prime }} = \mathbb {Z}[\beta _1,\dots ,\beta _{n-r}]$ for the cohomology of the classifying space we find that the diagram (REF ) becomes ${\mathbb {Z}[\alpha _1,\dots ,\alpha _n]^{\mathfrak {S}_n} [r]^-{\rho (U^{\prime },U)} @{^{(}->}[d] &\mathbb {Z}[\beta _1,\dots ,\beta _{n-r}]^{\mathfrak {S}_{n-r}} @{^{(}->}[d] \\\mathbb {Z}[\alpha _1,\dots ,\alpha _n] [r]^-{\rho (T^{\prime },T)} &\mathbb {Z}[\beta _1,\dots ,\beta _{n-r}]}$ where the map $\rho (T^{\prime },T)$ is given by $\rho (T^{\prime },T) \colon \alpha _j \mapsto {\left\lbrace
\\begin{array}{ll}\\beta _{j-r} & \\textnormal { if } j>r, \\\\0 & \\textnormal { otherwise.", "}\\end{array}\\right.", "}$ It follows that $\\rho (U^{\\prime },U)$ is merely a substitution of the variables in the symmetric functions given by $\\rho (T^{\\prime },T)$ so that $\\rho (U^{\\prime },U) \\colon \\sigma _k^n(\\alpha _1,\\dots ,\\alpha _n) \\mapsto {\\left\\lbrace \\begin{array}{ll}\\sigma _k^{n-r}(\\beta _1,\\dots ,\\beta _{n-r}) & \\textnormal { if } k \\le n-r, \\\\0 & \\textnormal { otherwise.}\\end{array}\\right.", "}$ With these considerations at hand we can now investigate the homology of the complex $S_{U^{\\prime }} \\otimes _\\mathbb {Z}\\bigwedge F_n$ , with its transgression differential $D$ from (REF ).", "To this end we shall apply the following well known reduction lemma.", "Lemma 3.4 Let $R$ be a ring, $M$ an $R$ -module and $x,y_1,\\dots ,y_n \\in R$ be elements.", "When $x$ is a non-zerodivisor on $M$ , then there is a canonical isomorphism $H( \\operatorname{Kosz}(x,y_1,\\dots ,y_n;M)) \\cong H( \\operatorname{Kosz}(y_1,\\dots ,y_n;M/xM)$ for the entire cohomology of the Koszul complexes.", "See .", "The ring $S_{U^{\\prime }} \\cong \\mathbb {Z}[c_1,\\dots ,c_{n-r}]$ is freely generated in the elementary symmetric functions in the $\\beta _1,\\dots ,\\beta _{n-r}$ .", "Now the transgression differential $D$ takes the form $D \\colon 1 \\otimes \\varepsilon _i \\mapsto {\\left\\lbrace \\begin{array}{ll}c_i & \\textnormal { for } i \\le n-r \\\\0 & \\textnormal { otherwise.", "}\\end{array}\\right.", "}$ Since the $c_i$ form a regular sequence on the module $M = S_{U^{\\prime }}$ with quotient $S_{U^{\\prime }}/\\langle c_1,\\dots ,c_{n-r} \\rangle \\cong \\mathbb {Z}$ , it follows inductively from Lemma REF that $H(\\operatorname{Stief}(r,n)) \\cong H\\left( S_{U^{\\prime }} \\otimes _{\\mathbb {Z}} \\bigwedge F_n \\right) \\cong H\\left( \\operatorname{Kosz}(0,\\dots ,0;\\mathbb {Z}\\right) \\cong \\bigwedge F_r[2r-2n]$ as anticipated.", "Let $0 \\le r \\le n$ be integers.", "The case of a Grassmannian $\\operatorname{Grass}(r,n) \\cong U(n) / (U(r) \\oplus U(n-r))$ is similar, only that the maximal tori for $U(n)$ and its subgroup $U^{\\prime } = U(r) \\oplus U(n-r)$ can be chosen to be the same so that $S_T = S_{T^{\\prime }} = \\mathbb {Z}[\\alpha _1,\\dots ,\\alpha _n]$ .", "The difference comes from the Weyl groups.", "For $U(n)$ we again have the full symmetric group $\\mathfrak {S}_n$ , but for the subgroup $U^{\\prime }$ we find $\\mathfrak {S}_r \\times \\mathfrak {S}_{n-r} \\subset \\mathfrak {S}_n$ whose action on $\\mathbb {Z}[\\alpha _1,\\dots ,\\alpha _n]$ respects the partion of the variables $\\alpha _j$ into subsets $\\lbrace \\alpha _1,\\dots ,\\alpha _{r}\\rbrace $ and $\\lbrace \\alpha _{n-r+1}, \\dots ,\\alpha _{n} \\rbrace $ .", "We write $x_j = \\sigma ^r_j(\\alpha _1,\\dots ,\\alpha _r),\\quad y_k = \\sigma ^{n-r}_k(\\alpha _{n-r+1},\\dots ,\\alpha _n)$ for the elementary symmetric polynomials in the respective set of variables.", "Then $S_{U^{\\prime }}$ is a free polynomial subring in these variables $S_{U^{\\prime }} \\cong \\mathbb {Z}[x_j,y_k : 0< j \\le r, \\, 0 < k \\le n-r ] \\subset \\mathbb {Z}[\\alpha _1,\\dots ,\\alpha _n]$ containing $S_U$ as the invariant subring under arbitrary permutations, i.e.", "forgetting about the particular partition.", "It is an elementary exercise in the theory of symmetric functions to verify that in this situation $\\sigma _d^n( \\underline{\\alpha }) =\\sum _{j+k = d} \\sigma _j^r(\\alpha 
_1,\\dots ,\\alpha _r) \\cdot \\sigma _k^{n-r}(\\alpha _{n-r+1},\\dots ,\\alpha _n)= \\sum _{j+k = d} x_j \\cdot y_k.$ Now the complex $S_{U^{\\prime }} \\otimes _\\mathbb {Z}\\bigwedge F_n$ in (REF ) looks as follows, cf.", ": The differential $D$ takes each one of the generators $\\varepsilon _k$ to $D(1\\otimes \\varepsilon _k) = \\sigma _k^n(\\alpha _1,\\dots ,\\alpha _n)$ .", "These are $n$ weighted homogeneous relations in a free graded polynomial ring with $n$ variables $x_1,\\dots ,x_r,y_1,\\dots ,y_{n-r}$ .", "From the fact that the cohomology of the associated Koszul complex is finite dimensional, we see that the elements (REF ) must form a regular sequence on $S_{U^{\\prime }}$ so that the complex is exact except at Koszul degree zero where we find $H^\\bullet (\\operatorname{Grass}(r,n)) \\cong S_{U^{\\prime }}/I_{r,n}$ with $I_{r,n}$ the weighted homogeneoeus ideal generated by the elements (REF ).", "Remark 3.5 This model for the cohomology is linked to the geometry of the Grassmannians as follows.", "Let $0 \\rightarrow \\mathcal {S} \\rightarrow \\mathcal {O}^n \\rightarrow \\mathcal {Q} \\rightarrow 0$ be the tautological sequence on $\\operatorname{Grass}(r,n)$ .", "Then modulo $I_n$ we find that $c_j = c_j(\\mathcal {S})$ is the $j$ -th Chern class of the tautological bundle and $s_k = c_k(\\mathcal {Q})$ the $k$ -th Chern class of the tautological quotient bundle.", "The latter are nothing but the $k$ -th Segre classes of $\\mathcal {S}$ which gives us precisely the relations (REF ) by expansion of the product of total Chern classes $c_t (\\mathcal {S}) \\cdot c_t (\\mathcal {Q}) = c(\\mathcal {O}^{n}) = 1$ in all degrees.", "This cohomological model can be simplified further.", "Let $x_0 = 1, x_1,\\dots ,x_r$ be the Chern classes of the tautological subbundle and $y_0 = 1, y_1,\\dots ,y_{n-r}$ those of the tautological quotient bundle.", "The relations given by the images $D (1 \\otimes \\varepsilon _i)$ , i.e.", "the generators of the ideal $I_{r,n}$ are $\\begin{matrix}0 = & x_1 & + & y_1 \\\\0 = & x_2 & + & x_1 \\cdot y_1 & + & y_2 \\\\\\vdots & \\vdots & & \\vdots & & & \\ddots \\\\0 = & x_r & + & x_{r-1} \\cdot y_1 & + & \\dots & + & y_r \\\\0 = & x_r \\cdot y_1 & + & x_{r-1} \\cdot y_2 & + & \\dots & + & y_{r+1} \\\\\\vdots & \\vdots & & \\vdots & & & & \\vdots \\\\0 = & x_{r} \\cdot y_{n-2r} & + & x_{r-1} \\cdot y_{n-2r+1} & + & \\dots & + & y_{n-r} \\\\0 = & x_{r} \\cdot y_{n-2r-1} & + & x_{r-1} \\cdot y_{n-2r} & + & \\dots & + & x_1 \\cdot y_{n-r} \\\\0 = & x_{r} \\cdot y_{n-2r} & + & x_{r-1} \\cdot y_{n-2r+1} & + & \\dots & + & x_2 \\cdot y_{n-r} \\\\\\vdots & & \\ddots & & \\ddots & & & \\vdots \\\\0 = & & & x_r \\cdot y_{n-r-2} & + & x_{r-1} \\cdot y_{n-r-1} & + & x_{r-2} \\cdot y_{n-r} \\\\0 = & & & & & x_r \\cdot y_{n-r-1} & + & x_{r-1} \\cdot y_{n-r} \\\\0 = & & & & & & & x_r \\cdot y_{n-r}.\\end{matrix}$ The first $n-r$ equations can be used to eliminate all of the $y$ -variables and express them in terms of $x$ .", "Substituting these into the last $r$ equations we obtain polynomials in $x$ which we denote by $h_1^{(n)}, h_2^{(n)}, \\dots , h_r^{(n)}.$ If we let $J_{r,n} = \\left\\langle h_1^{(n)}, h_2^{(n)}, \\dots , h_r^{(n)} \\right\\rangle \\subset \\mathbb {Z}[x_1,\\dots ,x_r]$ be the ideal generated by these elements then $H^\\bullet ( \\operatorname{Grass}(r,n)) \\cong \\mathbb {Z}[x_1,\\dots ,x_r]/J_{r,n}.$ Moreover, the polynomials $h_k^{(n)}$ satisfy a recurrence relation that can easily be derived from their explicit construction.", "Writing 
them in a vector we have $\\begin{pmatrix}h_1^{(0)} & h_2^{(0)} & \\dots & h_r^{(0)}\\end{pmatrix}^T=\\begin{pmatrix}x_1 & x_2 & \\dots & x_r\\end{pmatrix}^T$ and $\\begin{pmatrix}h_1^{(n+1)} \\\\h_2^{(n+1)} \\\\\\vdots \\\\h_{r-1}^{(n+1)} \\\\h_r^{(n+1)}\\end{pmatrix}=\\begin{pmatrix}-x_1 & 1 & 0 & \\cdots & 0 \\\\-x_2 & 0 & 1 & \\ddots & \\vdots \\\\\\vdots & \\vdots & \\ddots & \\ddots & 0 \\\\-x_{r-1} & 0 & \\cdots & 0 & 1 \\\\-x_r & 0 & 0 & \\cdots & 0\\end{pmatrix}\\cdot \\begin{pmatrix}h_1^{(n)} \\\\h_2^{(n)} \\\\\\vdots \\\\h_{r-1}^{(n)} \\\\h_r^{(n)}\\end{pmatrix}$ Comparing this with the construction of the infinite Grassmannian $\\operatorname{Grass}(r,\\infty ) = \\bigcup _{n=r}^{\\infty } \\operatorname{Grass}(r,n)$ we see the following: Since the cohomology ring of $\\operatorname{Grass}(r,n)$ is generated by the Chern classes $x_1,\\dots ,x_r$ of the tautological bundle for every $n$ and the tautological bundle on $\\operatorname{Grass}(r,n+1)$ restricts to the one on $\\operatorname{Grass}(r,n)$ , the pullback in cohomology for the inclusion $\\operatorname{Grass}(r,n) \\hookrightarrow \\operatorname{Grass}(r,n+1)$ is given by $\\mathbb {Z}[x_1,\\dots ,x_r]/J_{r,n+1} \\rightarrow \\mathbb {Z}[x_1,\\dots ,x_r]/J_{r,n}$ where the containment of ideals $J_{r,n+1} \\subset J_{r,n}$ is confirmed by the recurrence relation (REF ) above." ], [ "Cohomology of the matrix orbits", "This section will entirely consist of the proof of the following: Proposition 3.6 Let $0<r\\le m\\le n$ be integers.", "The cohomology of the orbit variety $O_{m,n}^{r}$ is graded isomorphic to the Koszul algebra $H^\\bullet \\left( O_{m,n}^{r}\\right) \\cong \\bigwedge \\left( R_m^{r} \\cdot \\eta _1 \\oplus \\dots \\oplus R_{m}^{r} \\cdot \\eta _r \\right)$ over the ring $R_{m}^{r} = H^\\bullet ( \\operatorname{Grass}(r,m))$ with each $\\eta _j$ a free generator of degree $2n-2j+1$ .", "We use the Cartan model (REF ) for $O_{m,n}^{r}$ .", "The group in question is $U_{m,n} = U(m) \\times U(n)$ with the subgroup $U_{m,n}^{r} &:=& \\operatorname{Stab}(\\mathbf {1}_{m,n}^{r}) \\\\&=& \\left\\lbrace (S \\oplus P, \\,S \\oplus Q ) \\in U(m)\\times U(n)\\right\\rbrace \\\\&\\cong & U(r) \\times U(m-r) \\times U(n-r).$ As a maximal torus of this subgroup we choose pairs of diagonal block matrices $\\left( \\operatorname{diag}(\\underline{\\lambda }) \\oplus \\operatorname{diag}(\\underline{\\mu }), \\quad \\operatorname{diag}(\\underline{\\lambda }) \\oplus \\operatorname{diag}(\\underline{\\nu })\\right)$ with all nontrivial entries $\\lambda _1,\\dots ,\\lambda _{r},\\mu _1,\\dots ,\\mu _{m-r}, \\nu _1,\\dots ,\\nu _{n-r} \\in U(1)$ .", "This is contained in the maximal torus $T$ of $U_{m,n}$ in the obvious way.", "We will write $S_{m,n}^{r}$ for the ring $H^\\bullet (BU_{m,n}^{r})$ .", "If we let $\\lbrace \\alpha _j\\rbrace _{j=1}^{r}$ be the set of Chern roots associated to the subgroup $U(r)$ , $\\lbrace \\beta _j\\rbrace _{j=1}^{m-r}$ the ones for $U(m-r)$ and $\\lbrace \\gamma _j\\rbrace _{j=1}^{n-r}$ those for $U(n-r)$ , then similar to the case of flag varieties, the ring $S_{m,n}^{r}$ is the invariant subring of $\\mathbb {Z}[\\underline{\\alpha }, \\underline{\\beta }, \\underline{\\gamma }]$ by the action of the group $\\mathfrak {S}_{m,n}^{r}:=\\mathfrak {S}_{r}\\times \\mathfrak {S}_{m-r} \\times \\mathfrak {S}_{n-r}$ acting by the permutation of the distinct sets of variables.", "We set $x_j &=& \\sigma _j^r(\\alpha _1,\\dots ,\\alpha _{r}) \\quad \\textnormal { for } j=1,\\dots ,r,\\\\y_k &=& 
\\sigma _k^{m-r}(\\beta _1,\\dots ,\\beta _{m-r}) \\quad \\textnormal { for } k=1,\\dots ,m-r, \\\\z_l &=& \\sigma _l^{n-r}(\\gamma _1,\\dots ,\\gamma _{n-r}) \\quad \\textnormal { for } l=1,\\dots ,n-r$ so that $S_{m,n}^{i_1,\\dots ,i_p} \\cong \\mathbb {Z}[\\underline{c}, \\underline{y}, \\underline{z}]$ is a free polynomial ring containing $\\mathbb {Z}[\\underline{x},\\underline{y}, \\underline{z}]$ as another free polynomial subring.", "Since the group $U_{m,n} = U(m) \\times U(n)$ is a product, also the exterior algebra in (REF ) takes the form of a product: $\\bigwedge (F_m \\oplus F_n) \\cong \\left( \\bigwedge F_m\\right) \\otimes _\\mathbb {Z}\\left( \\bigwedge F_n \\right).$ We let $\\lbrace \\varepsilon _j\\rbrace _{j=1}^m$ be the generators of $F_m$ and $\\lbrace \\varepsilon ^{\\prime }_k\\rbrace _{k=1}^n$ those of $F_n$ .", "With the notation above it is easy to see from the inclusion of maximal tori that the differential $D$ of (REF ) takes these generators to the elements $D(1 \\otimes \\varepsilon _j) &=& \\sum _{r+s=j} x_r \\cdot y_s,\\\\D(1\\otimes \\varepsilon ^{\\prime }_k) &=& \\sum _{r+s=k} x_r \\cdot z_s$ in $S_{m,n}^{i_1,\\dots ,i_p}$ and, as already discussed in Remark REF , these are precisely the relations between the Chern classes of the tautological sub- and quotient bundles.", "We now consider Koszul complexes on $S_{m,n}^{i_1,\\dots ,i_p}$ associated to various subsets of the generators $D(1\\otimes \\varepsilon _j)$ and $D(1\\otimes \\varepsilon ^{\\prime }_k)$ .", "As discussed earlier in Remark REF the elements $D(1\\otimes \\varepsilon _1), \\dots D(1\\otimes \\varepsilon _{m-r}),D(1\\otimes \\varepsilon ^{\\prime }_1), \\dots D(1\\otimes \\varepsilon ^{\\prime }_{n-r})$ form a regular sequence on $\\mathbb {Z}[\\underline{x}, \\underline{y}, \\underline{z}]$ that can be used to eliminate the variables $\\underline{y}$ and $\\underline{z}$ .", "Using the reduction lemma for the homology of Koszul complexes, Lemma REF , this reduces the problem to a Koszul complex on the ring $\\frac{\\mathbb {Z}[x_1,\\dots ,x_r,y_1,\\dots ,y_{m-r},z_1,\\dots ,z_{n-r}]}{\\langle D(1\\otimes \\varepsilon _1), \\dots , D(1\\otimes \\varepsilon _{m-r}),D(1\\otimes \\varepsilon ^{\\prime }_1), \\dots ,D(1\\otimes \\varepsilon ^{\\prime }_{n-r}) \\rangle }\\cong \\mathbb {Z}[x_1,\\dots ,x_r]$ In this quotient, the next $r$ relations reduce to $\\overline{D(1\\otimes \\varepsilon _{m-r+1})}= h^{(m)}_{1},\\quad \\dots , \\quad \\overline{D(1\\otimes \\varepsilon _{m})}= h^{(m)}_r$ and they form another regular sequence with successive quotient $H^\\bullet (\\operatorname{Grass}(r,n))= \\mathbb {Z}[x_1,\\dots ,x_r]/J_{r,n}$ as in Remark REF .", "Consequently, the last $r$ relations $\\overline{D(1\\otimes \\varepsilon ^{\\prime }_{n-r+1})}= h^{(n)}_1, \\quad \\dots ,\\quad \\overline{D(1\\otimes \\varepsilon ^{\\prime }_{n})}= h^{(n)}_{r}$ all reduce to zero in $H^{\\bullet }(\\operatorname{Grass}(r,m))$ due to (REF ) since by assumption $m \\le n$ .", "In terms of the identification of the homology of Koszul complexes from Lemma REF this means $& & H\\left( \\mathbb {Z}[x,y,z] \\otimes _{\\mathbb {Z}} \\bigwedge \\left( F_m \\oplus F_n \\right) \\right)\\\\&\\cong & H\\left( \\mathbb {Z}[x] \\otimes _{\\mathbb {Z}} \\bigwedge \\left( \\mathbb {Z}\\varepsilon _{m-r+1} \\oplus \\dots \\oplus \\mathbb {Z}\\varepsilon _m \\oplus \\mathbb {Z}\\varepsilon ^{\\prime }_{n-r+1} \\oplus \\dots \\oplus \\mathbb {Z}\\varepsilon ^{\\prime }_n \\right) \\right)\\\\& \\cong &H\\left( \\mathbb 
{Z}[x]/J_{r,m} \otimes _{\mathbb {Z}} \bigwedge \left(\mathbb {Z}\varepsilon _{n-r+1}^{\prime } \oplus \dots \oplus \mathbb {Z}\varepsilon ^{\prime }_n\right)\right)\\& \cong &\mathbb {Z}[x]/J_{r,m} \otimes _{\mathbb {Z}} \bigwedge \left(\mathbb {Z}\varepsilon _{n-r+1}^{\prime } \oplus \dots \oplus \mathbb {Z}\varepsilon ^{\prime }_n\right)\\$ where the differentials of the complex in the second last line are all zero so that we may drop the homology functor $H(-)$ .", "To finish the proof of Proposition REF we now set $\eta _j$ to be equal to $1 \otimes \varepsilon ^{\prime }_{n-r+j}$ for $j=1,\dots ,r$ in the last line.", "Remark 3.7 We review the fiber bundle structure $\operatorname{Stief}(r,n) \hookrightarrow O_{m,n}^r \overset{\lambda _1}{\longrightarrow } \operatorname{Grass}(r,m)$ described in Lemma REF .", "Given the explicit descriptions of the cohomology of both the Stiefel manifold and the Grassmannian, we may infer from the proof of Proposition REF that, similar to the case of Hopf's theorem, the cohomology of $O_{m,n}^r$ is isomorphic to that of a product $H^\bullet ( O_{m,n}^r) \cong H^\bullet (\operatorname{Stief}(r,n)\times \operatorname{Grass}(r,m)),$ despite the fact that the structure of $O_{m,n}^r$ as a fiber bundle is in general non-trivial.", "By construction, the cohomology classes $\eta _j$ restrict to the generators of the cohomology of $\operatorname{Stief}(r,n)$ in every fiber.", "Yet it seems difficult to write down an explicit lift of these elements to the original complex $\mathbb {Z}[x,y,z]\otimes \bigwedge \left( F_m \oplus F_n \right)$ .", "Remark 3.8 It would be nice to have a more geometric understanding of the elements $\eta _j$ that might, for example, be derived from the tautological sequence ${0 [r] &\lambda _2^* Q_{2}^\vee [r] &\mathcal {O}^n [r]^\varphi &\mathcal {O}^m [r] &\lambda _1^* Q_1 [r] &0}$ on $O_{m,n}^{r}$ where the $Q_i$ denote the tautological quotient bundles on respective Grassmannians for the projections $\lambda _1$ and $\lambda _2$ as in Lemma REF and $\varphi $ denotes the tautological section $O_{m,n}^{r} \rightarrow \operatorname{Hom}( \mathcal {O}^n,\mathcal {O}^m ), \quad \varphi \mapsto \varphi $ on $O_{m,n}^{r}$ , identifying the subbundles $\lambda _2^* S_2^\vee $ with $\lambda _1^* S_1$ ."
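, "As a simple illustration of Proposition 3.6, consider the case $r=1$ : here $R_m^{1} = H^\bullet (\operatorname{Grass}(1,m)) \cong \mathbb {Z}[x_1]/\langle x_1^m \rangle $ is the cohomology of $\mathbb {P}^{m-1}$ and the proposition gives $H^\bullet \left( O_{m,n}^{1}\right) \cong \mathbb {Z}[x_1]/\langle x_1^m \rangle \otimes _\mathbb {Z}\bigwedge \left( \mathbb {Z}\cdot \eta _1 \right)$ with $\eta _1$ a free generator of degree $2n-1$ .", "This is the cohomology of the product $\mathbb {P}^{m-1} \times S^{2n-1}$ , in accordance with the fiber bundle structure $\operatorname{Stief}(1,n) = S^{2n-1} \hookrightarrow O_{m,n}^{1} \overset{\lambda _1}{\longrightarrow } \operatorname{Grass}(1,m) = \mathbb {P}^{m-1}$ from Remark 3.7."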
], [ "Betti numbers of smooth links", "A hyperplane $(D_i,0) \\subset (\\mathbb {C}^{m \\times n},0)$ of codimension $i$ in general position will intersect $M_{m,n}^{r+1}$ in a non-trivial way whenever $i \\le \\dim M_{m,n}^{r+1}$ .", "This intersection $(M_{m,n}^{r+1} \\cap D_i,0)$ will have isolated singularity as long as $D_i$ meets the singular locus $M_{m,n}^{r}$ of $M_{m,n}^{r+1}$ only at the origin, i.e.", "$i\\ge \\dim M_{m,n}^{r}$ .", "The real and complex links of codimension $i$ of $(M_{m,n}^{r+1},0)$ will therefore be smooth within the range $(m+n)(r-1)-(r-1)^2 \\le i < (m+n)r-r^2.$ In this section we will be concerned with the cohomology of $\\mathcal {K}^i(M_{m,n}^{r+1},0)$ and $\\mathcal {L}^i(M_{m,n}^{r+1},0)$ for $i$ in this range.", "More precisely, we will show for that for $i$ in the above range $H^k(\\operatorname{Grass}(r,m)) &\\cong & H^k(\\mathcal {K}^i(M_{m,n}^{r+1},0)) \\quad \\textnormal { for } \\quad k<d(i),\\\\H^k(\\operatorname{Grass}(r,m)) &\\cong &H^k(\\mathcal {L}^i(M_{m,n}^{r+1},0)) \\quad \\textnormal { for } \\quad k<d(i),\\\\H^{d(i)}(\\operatorname{Grass}(r,m)) &\\subset &H^{d(i)}(\\mathcal {K}^i(M_{m,n}^{r+1},0)) \\subset H^{d(i)}(\\mathcal {L}^i(M_{m,n}^{r+1},0))$ where $d(i) = \\dim _\\mathbb {C}\\mathcal {L}^i(M_{m,n}^{r+1},0) = (m+n)r-r^2-i-1$ .", "The left hand sides all agree with the cohomology of $V_{m,n}^r$ in this range for $k$ and the above maps are given by the pullback in cohomology for the natural inclusions of the real and complex links into that stratum.", "For the complex links the middle Betti number can then be computed from the polar multiplicities using Formula (REF ).", "In case $r=1$ this gives a complete description of the classical real and complex links of $(M_{m,n}^2,0)$ , see Remark REF .", "However, besides the case $i=0$ and $r=1$ , we are unable to determine whether or not $H^{d(i)}(\\mathcal {L}^i(M_{m,n}^{r+1},0))$ has torsion.", "Also, the two middle cohomology groups of $\\mathcal {K}^i(M_{m,n}^{r+1},0)$ can not be computed by our methods for $i>0$ ." 
], [ "The variation sequence on stratified spaces", "Throughout this section, let $(X,0) \\subset (\\mathbb {C}^q,0)$ be an equidimensional reduced complex analytic germ of complex dimension $d$ , endowed with a complex analytic Whitney stratification $\\lbrace V^\\alpha \\rbrace _{\\alpha \\in A}$ .", "We will denote by $f \\colon (\\mathbb {C}^n,0) \\rightarrow (\\mathbb {C},0)$ the germ of a holomorphic function with an isolated singularity on $(X,0)$ in the stratified sense.", "For a Milnor ball $B_\\varepsilon $ of sufficiently small radius $\\varepsilon > 0$ and representatives $X$ and $f$ of the space and the function we let $\\partial X := \\partial B_\\varepsilon \\cap X$ be the real link of $(X,0)$ and $M = X \\cap B_\\varepsilon \\cap f^{-1}(\\lbrace \\delta \\rbrace )$ the Milnor fiber of $f$ for some $\\varepsilon \\gg \\delta > 0$ .", "Denote the regular part of $X$ by $X_{\\mathrm {reg}}$ .", "Then clearly by construction of $\\partial X$ and $M$ $\\partial X_{\\mathrm {reg}} := \\partial B_\\varepsilon \\cap X_{\\mathrm {reg}}, \\quad M_{\\mathrm {reg}} := M \\cap X_{\\mathrm {reg}}$ are the regular loci of $\\partial X$ and $M$ , respectively.", "Lemma 4.1 The natural maps in cohomology $H^k(\\partial X_{\\mathrm {reg}})\\rightarrow H^{k}(M_{\\mathrm {reg}})$ are injective for $k=d-1$ and an isomorphism for $k<d-1$ .", "It is well known that we may “inflate” the Milnor-Lê fibration for $f$ and identify its total space with an open subset of $\\partial X$ itself: ${X \\cap B_\\varepsilon \\cap f^{-1}\\left( \\partial D_\\delta \\right) [r]^\\cong [d]^{f} &\\partial X \\setminus f^{-1}(D_\\delta ) [d]^{\\arg f} \\\\\\partial D_\\delta [r]^{\\cong } &S^1.", "}$ See for example , or for the particular case where $f$ is linear.", "Since $f$ has isolated singularity on $(X,0)$ it is a stratified submersion on $X$ near the boundary $K := \\partial X \\cap f^{-1}(\\lbrace 0\\rbrace )$ of the central fiber.", "We may therefore identify $\\partial X \\cap f^{-1}(D_\\delta ) \\cong K \\times D_\\delta $ and extend the fibration by $\\arg f$ over $K \\times D_\\delta ^*$ to the whole complement of $K \\subset \\partial X$ .", "The assertion is now a consequence of the variation sequence for the regular loci.", "It is evident from Thom's first isotopy lemma that $\\arg f \\colon \\partial X \\setminus K \\rightarrow S^1$ respects the stratification of $M \\subset \\partial X \\setminus K$ induced from $X$ .", "Therefore, the common chain of isomorphisms for the variation sequence $H^k\\left( \\partial X_{\\mathrm {reg}}, M_{\\mathrm {reg}} \\right) & \\cong &H^k\\left( \\partial X_{\\mathrm {reg}},M_{\\mathrm {reg}} \\cup \\left( \\partial X_\\mathrm {reg}\\cap f^{-1}( \\overline{D}_\\delta ) \\right) \\right) \\\\&\\cong & H^{k}\\left( M_{\\mathrm {reg}} \\times [0,1],K_{\\mathrm {reg}} \\times [0,1] \\cup M_{\\mathrm {reg}} \\times \\lbrace 0,1\\rbrace \\right) \\\\&\\cong & H^{k}\\left( (M_{\\mathrm {reg}}, K_{\\mathrm {reg}}) \\times ([0,1], \\partial [0,1] ) \\right) \\\\&\\cong & H^{k-1}( M_{\\mathrm {reg}}, K_{\\mathrm {reg}} )$ restricts to the regular loci for every $k$ .", "Here, we deliberately identified $K_{\\mathrm {reg}}$ with the boundary part $\\partial B_\\varepsilon \\cap M_{\\mathrm {reg}}$ of the regular locus of $M$ .", "In the last step we used the Künneth formula for pairs of spaces, see e.g.", ".", "For a more thorough treatment see also .", "With these identifications, the long exact sequence of the pair $( \\partial X_\\mathrm {reg}, M_\\mathrm {reg})$ reads 
$\\dots \\rightarrow H^{k-1}( M_\\mathrm {reg}, K_\\mathrm {reg}) \\rightarrow H^k( \\partial X_\\mathrm {reg}) \\rightarrow H^k( M_\\mathrm {reg}) \\rightarrow H^{k}( M_\\mathrm {reg}, K_\\mathrm {reg}) \\rightarrow \\dots $ The assertion now follows from the fact that $H^k( M_\\mathrm {reg}, K_\\mathrm {reg}) = 0$ for all $k < d-1 = \\dim _\\mathbb {C}M_\\mathrm {reg}$ which is due to complex stratified Morse theory for non-proper Morse functions, see .", "Remark 4.2 The cited theorem is a statement on the relative homology for the smooth locus of a projective algebraic variety and its intersection with a generic hyperplane.", "The statement can be generalized to germs of complex analytic sets and their intersections with a suitable ball, cf.", ".", "Then one can use complex stratified Morse theory for non-proper Morse functions to study the connectivity of the pair $(M_\\mathrm {reg}, K_\\mathrm {reg})$ via a Morsification $\\rho $ of the squared distance function to the origin.", "The key observation is that for the regular locus $M_\\mathrm {reg}$ the local Morse datum at a critical point is always $(d-2)$ -connected.", "The last statement can easily be derived by induction on dimension from .", "Now suppose we are given $(X,0) \\subset (\\mathbb {C}^q,0)$ as above and an admissible sequence of linear forms $l_1,\\dots ,l_d$ in the sense of Definition REF and Corollary REF .", "Then we can apply Lemma REF inductively to $f = l_{i+1}$ on $(X \\cap D_i,0)$ : Proposition 4.3 Let $(X,0) \\subset (\\mathbb {C}^q,0)$ be an equidimensional reduced complex analytic germ of dimension $d$ , and $l_1, l_2, \\dots , l_d$ an admissible sequence of linear forms as in Corollary REF .", "Let $\\mathcal {K}^k_\\mathrm {reg}$ and $\\mathcal {L}^k_\\mathrm {reg}$ be the regular loci of the real and complex links of codimension $k$ of $(X,0)$ .", "Then for arbitrary $0\\le k< d$ the natural maps $H^i(\\mathcal {K}^0_\\mathrm {reg}) \\rightarrow H^i(\\mathcal {K}^k_\\mathrm {reg}) \\rightarrow H^i(\\mathcal {L}^k_{\\mathrm {reg}})$ are isomorphisms for every $i<d-k-1 = \\dim \\mathcal {L}^k$ and injective for $i=d-k-1$ .", "We proceed by induction on the codimension $k$ of the complex links.", "For $k=0$ this follows directly from Lemma REF .", "Now suppose the statement holds up to codimension $k-1$ .", "As already discussed in the proof of Lemma REF , the real link $\\mathcal {K}^k$ can be identified with the boundary of $\\mathcal {L}^{k-1}$ .", "But the pair $(\\mathcal {L}^{k-1}_\\mathrm {reg}, \\mathcal {K}^k_\\mathrm {reg})$ is cohomologically $d-k-1$ -connected due to the LHT and the long exact sequence yields $0 \\rightarrow H^i(\\mathcal {L}^{k-1}_\\mathrm {reg}) \\overset{\\cong }{\\longrightarrow } H^i( \\mathcal {K}^k_\\mathrm {reg}) \\rightarrow 0$ for $i<d-k-1$ and $0 \\rightarrow H^{d-k-1}(\\mathcal {L}^{k-1}_\\mathrm {reg}) \\rightarrow H^{d-k-1}(\\mathcal {K}^k_\\mathrm {reg}) \\rightarrow H^{d-k}(\\mathcal {L}^{k-1}_\\mathrm {reg}, \\mathcal {K}^k_\\mathrm {reg}) \\rightarrow \\cdots $ for the middle part.", "Since by our induction hypothesis the cohomology groups $H^{i}(\\mathcal {L}^{k-1}_\\mathrm {reg}) \\cong H^{i}(\\mathcal {K}^0_\\mathrm {reg})$ are all isomorphic for $i<d-k$ , the first part of the assertion on $H^i(\\mathcal {K}^0_\\mathrm {reg}) \\rightarrow H^i(\\mathcal {K}^k_\\mathrm {reg})$ follows.", "For the second part on $H^i(\\mathcal {K}^0_\\mathrm {reg}) \\rightarrow H^i(\\mathcal {L}^k_\\mathrm {reg})$ consider $l_{k+1} \\colon (X \\cap D_k,0) \\rightarrow (\\mathbb 
{C},0).$ By construction the Milnor fiber of this function is $\mathcal {L}^{k}$ while $\mathcal {K}^{k}$ is the real link of $(X \cap D_k,0)$ .", "All the relevant cohomology groups of $\mathcal {K}^k$ have already been determined by our previous considerations so that the remaining statements follow readily from Lemma REF applied to $f = l_{k+1}$ ." ], [ "Proof of Formulas (", "As remarked earlier, the relevant range for the codimension $i$ for the real and complex links $\mathcal {K}^i(M_{m,n}^{r+1},0)$ and $\mathcal {L}^i(M_{m,n}^{r+1},0)$ of the generic determinantal variety $(M_{m,n}^{r+1},0)$ is $\dim M_{m,n}^{r} = (m+n)(r-1) - (r-1)^2 \le i < (m+n)r-r^2 = \dim M_{m,n}^{r+1}.$ We may assume $m \le n$ .", "Then for these values of $i$ the dimension of the complex links does not exceed $d(i):= \dim _\mathbb {C}\mathcal {L}^i(M_{m,n}^{r+1},0)= (m+n)r - r^2 - i - 1 < m+n-2r+1 \le 2n -2r +1.$ Recall that due to the homogeneity of the singularity, the cohomology of the regular part of the classical real link of $(M_{m,n}^{r+1},0)$ is given by $H^\bullet (\mathcal {K}^0_\mathrm {reg}(M_{m,n}^{r+1},0))& \cong & H^\bullet ( V_{m,n}^r ) \cong H^\bullet ( O_{m,n}^r ) \\& \cong &\bigwedge \left( R_m^{r} \cdot \eta _1 \oplus \dots \oplus R_{m}^{r} \cdot \eta _r \right)$ according to Proposition REF .", "Now note that the bound on $d(i)$ assures that each and every generator $\eta _{j} \in H^{2n-2j+1}(O_{m,n}^r)$ is taken to a vanishing cohomology group of a smooth complex link $\mathcal {L}^i(M_{m,n}^{r+1},0)$ .", "The formulas (REF ), (), and () now follow directly from Proposition REF ." ], [ "The middle cohomology groups", "The results of the previous section allow us to determine the cohomology below the middle degrees of the real and complex links $\mathcal {K}^i$ and $\mathcal {L}^i$ of codimension $i$ for a purely $d$ -dimensional germ $(X,0)$ , provided $i\ge \dim X_\mathrm {sing}$ so that $(X \cap D_i,0)$ has an isolated singularity and the links are smooth.", "We shall now discuss how to obtain information on the remaining part of the cohomology and to which extent this is possible.", "When $i\ge \dim X_\mathrm {sing}$ the complex link $\mathcal {L}^i = \mathcal {L}^i_\mathrm {reg}$ is Stein and the higher cohomology groups vanish due to the LHT.", "We can use Lefschetz duality (see e.g.
)", "to identify homology with cohomology: $H^k(\\mathcal {L}^i,\\partial \\mathcal {L}^i) \\cong H_{2(d-i-1)-k}(\\mathcal {L}^i)\\quad \\textnormal {and} \\quad H^k(\\mathcal {L}^i) \\cong H_{2(d-i-1)-k}(\\mathcal {L}^i, \\partial \\mathcal {L}^i).$ Note that the middle homology group $H_{d-i-1}(\\mathcal {L}^i)$ is known to always be free; this is the case even for arbitrary smooth Stein manifolds.", "The real link $\\mathcal {K}^i$ is an oriented smooth compact manifold of real dimension $2d-2i-1$ and we have Poincaré duality (cf.", "): $H^k( \\mathcal {K}^i) \\cong H_{2d-2i-1-k}( \\mathcal {K}^i)$ for all $k$ .", "All these cohomology groups sit in the classical variation sequence with its middle part being of particular interest: $0 \\rightarrow H^{d-i-1}( \\mathcal {K}^i ) \\rightarrow H^{d-i-1}( \\mathcal {L}^i )\\overset{\\mathrm {VAR}}{\\longrightarrow }H^{d-i-1}( \\mathcal {L}^i, \\partial \\mathcal {L}^i) \\rightarrow H^{d-i}( \\mathcal {K}^{i} )\\rightarrow 0.$ We return to the particular case of the generic determinantal singularity $(X,0) =(M_{m,n}^{r+1},0) \\subset (\\mathbb {C}^{m\\times n},0)$ .", "For the smooth complex links $\\mathcal {L}^i = \\mathcal {L}^i(M_{m,n}^{r+1},0)$ we can use the knowledge of the Euler characteristic from the polar multiplicities in Section REF , Formula (REF ) to also determine the rank of the middle cohomology group: $b_{d-i-1}(\\mathcal {L}^i) =\\underbrace{\\left( \\sum _{j=i}^{d}(-1)^{j} m_0\\left( P_{d-j}(X,0) \\right) \\right)}_{(-1)^{d-i-1} \\chi ( \\mathcal {L}^i )}+ \\underbrace{\\left(\\sum _{j=0}^{d-i-2} (-1)^{d-i-j} b_j(O_{m,n}^r) \\right).", "}_{-(-1)^{d-i-1} \\sum _{j=0}^{d-i-2} (-1)^j b_j(\\mathcal {L}^i)}$ Switching from integer to rational coefficients, this allows us to fully compute the rational cohomology, but unfortunately this method does not allow to detect torsion in the middle cohomology group with integer coefficients.", "However, so far no example of a smooth complex link of a generic determinantal variety is known for which the middle cohomology group $H^{d-i-1}(\\mathcal {L}^i(M_{m,n}^{r+1},0))$ does have torsion.", "Being an oriented smooth compact manifold of odd real dimension, $\\chi (\\mathcal {K}^i)$ is always zero and therefore computation of the Euler characteristic does not help in this case; not even the full rational cohomology of $\\mathcal {K}^i$ can be computed by our methods.", "It should be noted, however, that there are examples for which torsion appears: Example 4.4 Consider the singularity $(Y,0) := (M_{2,3}^2\\cap D_2,0) \\subset (\\mathbb {C}^{2\\times 3},0)$ for a generic plane $D_2$ of codimension 2.", "This is an isolated normal surface singularity, the simplest one of the “rational triple points” discussed by Tjurina .", "It was shown in that the so-called Tjurina modification $\\pi \\colon Y^{\\prime } \\rightarrow Y$ of $(Y,0)$ is smooth and hence a resolution of singularities for $(Y,0)$ .", "The space $Y^{\\prime }$ is isomorphic to the total space of the bundle $Y^{\\prime } \\cong | \\mathcal {O}_{\\mathbb {P}^1}(-3) |$ and hence the complex link $\\mathcal {K}^2 = \\mathcal {K}^2(M_{2,3}^2,0)$ can be identified with the sphere bundle of $\\mathcal {O}_{\\mathbb {P}^1}(-3)$ .", "Then a part of the Euler sequence for this bundle reads $\\dots H^0(\\mathbb {P}^1) \\overset{\\cup e}{\\longrightarrow } H^2(\\mathbb {P}^1) \\rightarrow H^2(\\mathcal {K}^2) \\rightarrow 0.$ But with the canonical generators for $H^\\bullet (\\mathbb {P}^1)$ , the cup product with the Euler class $e$ is 
simply multiplication by $-3$ and we find that $H^2(\\mathcal {K}^2(M_{2,3}^2,0)) \\cong \\mathbb {Z}/3\\mathbb {Z}.$ The complex link $\\mathcal {L}^2 = \\mathcal {L}^2(M_{2,3}^2,0)$ , i.e.", "the “Milnor fiber” in the variation sequence, is known to have middle cohomology $H^1(\\mathcal {L}^2) \\cong \\mathbb {Z}^2$ ; see e.g.", ".", "Since $H^1(\\mathcal {L}^2, \\partial \\mathcal {L}^2) \\cong H_1(\\mathcal {L}^2)\\cong \\mathbb {Z}^2$ is also free and the map $\\mathrm {VAR}$ necessarily has rank 2, it follows that $H^1(\\mathcal {K}^2(M_{2,3}^2,0)) = 0.$ To conclude this section, we mention one last case that might be of particular interest: Remark 4.5 For $r=1$ the generic determinantal varieties $(M_{m,n}^2,0)$ all have isolated singularity and therefore the classical real and complex links $\\mathcal {K}^0 = \\mathcal {K}^0(M_{m,n}^2,0)$ and $\\mathcal {L}^0 = \\mathcal {L}^0(M_{m,n}^2,0)$ are already smooth.", "In this case we know all the cohomology groups of $\\mathcal {K}^0$ $H^\\bullet ( \\mathcal {K}^0(M_{m,n}^2,0)) \\cong H^\\bullet ( \\mathbb {P}^{m-1} \\times S^{2n-1} )\\cong H^\\bullet ( \\mathbb {P}^{m-1}) \\otimes _\\mathbb {Z}\\bigwedge \\mathbb {Z}\\varepsilon _{2n-1}$ which is free Abelian in all degrees.", "Using the variation sequence we infer that the middle cohomology of $\\mathcal {L}^0$ can not have torsion: By Poincaré duality also the relative cohomology group to the left is isomorphic to the middle homology group of $\\mathcal {L}^0$ .", "Now the latter is known to always be free Abelian for Stein manifolds.", "Therefore we have $H^k(\\mathcal {L}^0(M_{m,n}^2,0)) \\cong {\\left\\lbrace \\begin{array}{ll}\\mathbb {Z}& \\textnormal { if k\\le (m-r)r is even } \\\\0 & \\textnormal { otherwise.}\\end{array}\\right.", "}$ We would like to mention one particular consequence of the previous remark.", "It is well known that on a locally complete intersection $X$ the constant sheaf $\\mathbb {Z}_X[\\dim X]$ is perverse for the middle perversity; see e.g.", ".", "Most of the generic determinantal varieties $M_{m,n}^s$ are not complete intersections and we can now show that this algebraic property is also reflected in their topology: Corollary 4.6 For $1<s\\le m < n$ , i.e.", "for non-square matrices, the constant sheaf $\\mathbb {Z}_{M_{m,n}^{s}}[\\dim M_{m,n}^s]$ on the generic determinantal variety, shifted by its dimension, is never a perverse sheaf for the middle perversity.", "For these values of $s,m$ , and $n$ the rank stratification is the minimal Whitney stratification of $M_{m,n}^{s}$ .", "Now the constant sheaf can only be perverse (for the middle perversity) if the variety has a certain rectified homological depth, meaning that along every stratum $V_{m,n}^r \\subset M_{m,n}^s$ the real link $\\mathcal {K}(M_{m,n}^s,V_{m,n}^r)$ along $V_{m,n}^r$ satisfies $H_k(\\mathcal {K}(M_{m,n}^s,V_{m,n}^r)) = 0 \\textnormal { for all }k < \\operatorname{codim}(V_{m,n}^r,M_{m,n}^s)-1.$ See , for a discussion on rectified homological depth and e.g.", "the proof of for its relation to perversity.", "For $r = s-2$ the codimension on the right hand side is $c = (m-s+2)(n-s+2) - (m-s+1)(n-s+1) = m+n -2s + 3 \\ge 4$ for all values $1<s\\le m < n$ .", "But according to Remark REF we have $H^2( \\mathcal {K}^0(M_{m,n}^s,V_{m,n}^{s-2})) \\cong H^2(\\mathcal {K}^0(M_{m-s+2,n-s+2}^{2})) \\cong H^2(\\mathbb {P}^{m-s+1}) \\cong \\mathbb {Z}$ so that the criterion for the perversity given above is violated." 
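, "In the smallest non-square case $s=2$ , $m=2$ , $n=3$ this failure is completely explicit: the codimension of the stratum $V_{2,3}^{0} = \lbrace 0\rbrace $ in $M_{2,3}^{2}$ is 4, so perversity of the shifted constant sheaf would require $H_k(\mathcal {K}^0(M_{2,3}^{2},0)) = 0$ for all $k < 3$ , whereas Remark 4.5 gives $H_2(\mathcal {K}^0(M_{2,3}^{2},0)) \cong H_2(\mathbb {P}^{1} \times S^{5}) \cong \mathbb {Z}$ ."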
], [ "Proof of Theorem ", "We consider a $\\mathrm {GL}$ -miniversal unfolding $\\mathbf {A} \\colon (\\mathbb {C}^p,0) \\times (\\mathbb {C}^k,0) \\rightarrow (\\mathbb {C}^{m\\times n},0)$ of the defining matrix $A$ for $(X_A^s,0)$ on $k$ parameters $t_1,\\dots ,t_k$ .", "See, for instance, for the underlying notion of $\\operatorname{GL}$ -equivalence and the existence and construction of such miniversal unfoldings.", "The induced deformation ${X_A^s @{^{(}->}[r] [d] &\\mathcal {X}_{\\mathbf {A}}^s [d]^\\pi \\\\\\lbrace 0 \\rbrace @{^{(}->}[r] &\\mathbb {C}^k}$ given by the projection of $\\mathcal {X}_{\\mathbf {A}}^s = \\mathbf {A}^{-1}(M_{m,n}^s) \\subset \\mathbb {C}^p\\times \\mathbb {C}^k$ to the parameter space $\\mathbb {C}^k$ is versal in the sense that it covers all possible determinantal deformations of $(X_A^s,0)$ coming from this particular choice of a determinantal structure.", "Let the discriminant $(\\Delta ,0) \\subset (\\mathbb {C}^k,0)$ be the set of parameters $t = (t_1,\\dots ,t_k)$ with singular fibers $X_{\\mathbf {A}}^s(t)$ .", "For a suitable representative of the unfolding $\\mathbf {A}$ we may choose a Milnor ball $B_\\varepsilon \\subset \\mathbb {C}^p$ around the origin and a polydisc $0 \\in D_\\delta \\subset \\mathbb {C}^k$ in the parameter space such that $\\pi \\colon \\mathcal {X}_{\\mathbf {A}}^s \\cap B_\\varepsilon \\cap (D_\\delta \\setminus \\Delta )\\rightarrow (D_\\delta \\setminus \\Delta )$ is a smooth fiber bundle of manifolds with boundary with fiber $M_A^s$ , the smoothing of $(X_A^s,0)$ .", "Since for $t \\notin \\Delta $ the map $A_t = \\mathbf {A}(-,t) \\colon B_\\varepsilon \\rightarrow \\mathbb {C}^{m\\times n}$ is transverse to the rank stratification implies that $A_t$ does not degenerate on the fiber $X_{\\mathbf {A}}^s(t)$ in the sense that it has constant rank $r = s-1$ at all points $x \\in X_{\\mathbf {A}}^s(t)$ .", "Therefore the cokernel $\\mathbb {C}^n \\overset{A_t}{\\longrightarrow } \\mathbb {C}^m \\rightarrow E \\rightarrow 0$ presented by $A_t$ is a well defined vector bundle of rank $m-r$ on the fiber over $t$ .", "Variation of $t$ in $D_\\delta \\setminus \\Delta $ does not change the fiber $X_{\\mathbf {A}}^s(t)$ up to diffeomorphism.", "In the same sense, it varies the vector bundle $E$ in a $C^\\infty $ way on these fibers, but does not change its isomorphism class as a smooth complex vector bundle.", "Due to the versality of $\\mathbf {A}$ , the various deformations of $(X_A^s,0)$ and its smoothing $M_A^s$ in the proof of the bouquet decomposition (REF ) in can be realized using piecewise smooth paths in the parameter space $\\mathbb {C}^k$ .", "In particular, this leads to a representative $X_{\\mathbf {A}}^s(t)$ of $M_A^s$ which has an embedding of the complex link $\\mathcal {L}^{mn - p -1}(M_{m,n}^s,0) \\hookrightarrow X_{\\mathbf {A}}(t)$ .", "Now it follows from the results in Section that the truncated cohomology of the Grassmannian $H^{\\le d}( \\operatorname{Grass}(r,m) )$ is generated by the algebra of Segre classes of the tautological quotient bundle.", "Since by construction the pullback of this bundle to both $X_{\\mathbf {A}}(t)$ and $\\mathcal {L}^{mn - p -1}(M_{m,n}^s,0)$ is the vector bundle $E$ in question, the result follows." 
], [ "Proof of Corollary ", "Let $M \\subset U \\subset \\mathbb {C}^5$ be a smoothing of the isolated Cohen-Macaulay threefold $(X,0) \\subset \\mathbb {C}^5$ on some open set $U$ on which a perturbation $A_t \\colon U \\rightarrow \\mathbb {C}^{m \\times (m+1)}$ of the matrix $A$ is defined.", "By definition the canonical sheaf on $M$ is given by $\\omega _M = \\mathcal {E}xt^2_{\\mathcal {O}_U}(\\mathcal {O}_M,\\mathcal {O}_U)$ which can be computed from a free resolution of $\\mathcal {O}_M$ as an $\\mathcal {O}_U$ -module.", "Such a free resolution is provided by the deformation of the resolution of $\\mathcal {O}_{X,0}$ from the Hilbert-Burch theorem.", "Now it is easy to see from the proof of Theorem REF that the line bundle $\\omega _M$ is presented by the restriction of $A_t$ to $M_A^s \\subset U$ , i.e.", "it is the vector bundle $E$ in Theorem REF and the claim follows." ], [ "Acknowlegdements", "The author wishes to thank Duco van Straten for introducing him to the work of Borel on the cohomology of homogeneous spaces and Sam Hagh Shenas Noshari for further helpful conversations on the topic, Terence Gaffney for discussions on polar varieties and –multiplicities, Xiping Zhang for an exchange on their computation, and James Damon for conversations on the “characteristic cohomology” of determinantal singularities." ] ]
2107.01823
[ [ "An Information-Theoretic Approach for Automatically Determining the\n Number of States when Aggregating Markov Chains" ], [ "Abstract A fundamental problem when aggregating Markov chains is the specification of the number of state groups.", "Too few state groups may fail to sufficiently capture the pertinent dynamics of the original, high-order Markov chain.", "Too many state groups may lead to a non-parsimonious, reduced-order Markov chain whose complexity rivals that of the original.", "In this paper, we show that an augmented value-of-information-based approach to aggregating Markov chains facilitates the determination of the number of state groups.", "The optimal state-group count coincides with the case where the complexity of the reduced-order chain is balanced against the mutual dependence between the original- and reduced-order chain dynamics." ], [ "A fundamental problem when aggregating Markov chains is the specification of the number of state groups.", "Too few state groups may fail to sufficiently capture the pertinent dynamics of the original, high-order Markov chain.", "Too many state groups may lead to a non-parsimonious, reduced-order Markov chain whose complexity rivals that of the original.", "In this paper, we show that an augmented value-of-information-based approach to aggregating Markov chains facilitates the determination of the number of state groups.", "The optimal state-group count coincides with the case where the complexity of the reduced-order chain is balanced against the mutual dependence between the original- and reduced-order chain dynamics.", "Index Terms—Aggregation, model reduction, Markov chains, information theory Markov models have been widely adopted in a variety of disciplines.", "Part of their appeal is that the application and simulation of such models is rather efficient, provided that the corresponding state-space has a small to moderate size.", "Dealing with large state spaces is often troublesome, in comparison, as it may not be possible to adequately and efficiently simulate the underlying models [1], [2], [3], [4], [5].", "A means of rendering the simulation of large-scale models tractable is crucial for many applications.", "One way to do this is by reducing the overall size of the Markov-chain state space via aggregation [6].", "Aggregation typically entails either defining and utilizing a function to partition nodes in the probability transition graph associated with the large-scale chain.", "Groups of nodes, which are related by the their inter-state transition probabilities and have strong interactions, are combined and treated as a single aggregated node in a new graph.", "This results in a lower-order chain with a reduced state space.", "A stochastic matrix for the lower-order chain is then specified, which describes the transitions from one super-state to another.", "This stochastic matrix should roughly mimic the dynamics of the original chain despite the potential loss in information [7], [8], [9].", "There are a variety of methods for aggregating Markov chains.", "Some of the earliest work exploited the strong-weak interaction structure of nearly completely decomposable Markov chains to obtain reduced-order approximations [10], [11], [12], [13].", "Both uncontrolled [14], [15] and controlled Markov chains [16], [17] have been extensively studied in the literature.", "In this paper, we consider an approach for aggregating nearly-completely-decomposable Markov chains, which is composed of two information-theoretic processes 
[18].", "The first process entails quantifying the dissimilarity of nodes in the original and reduced-order probability transition graphs, despite the difference in state space sizes.", "The second process involves iteratively partitioning similar nodes without explicit knowledge of the number of groups.", "For this second process, we consider the use of an information-theoretic criterion known as the value of information [19], [20] to efficiently partition the probability transition graph.", "The value of information is a constrained, modified-free-energy-difference criterion that describes the maximum benefit associated with a given quantity of information in order to minimize the average distortion [21], [22], [23].", "It is an optimal, non-linear conversion between information, usually in the Shannon sense [24], and either costs or utilities, in the von-Neumann-Morgenstern sense [25].", "Optimizing the value of information in a grouped-coordinate-descent manner yields a single free parameter that represents the effect of the information bound.", "Increasing this parameter from some base value yields a hierarchy of partitions that monotonically decrease the modified-free-energy difference.", "Each hierarchy element corresponds to a partition with an increasing information bound amount and hence a potentially increasing number of state groups.", "Finer-scale group structure in the transition matrix is captured as the parameter value rises.", "After some free-parameter value threshold, however, there are diminishing returns on the aggregation quality.", "Finding this threshold, in a completely data-driven fashion, is thus crucial: over-partitioning the original chain leads to a non-parsimonious representation that can be computationally expensive to evaluate.", "To automatically discern the optimal number of state groups for arbitrary Markov chains, we modify the value-of-information cost function and hence its updates.", "For this modified criterion, the first update step trades off the divergence of the original and aggregated chain transition probabilities against the mutual dependence between the original and aggregated chain dynamics.", "The remaining updates then trade off between how well the reduced-order model dynamics are compressed versus how much information about the original-chain dynamics is retained.", "That is, it follows an information-bottleneck model [26] with the added effect that the resulting Markov chain resembles the original.", "Through update schedule, we are guaranteed that the negative modified-free-energy-difference curve is, in general, convex.", "We can therefore find a maximal value on the curve.", "This value corresponds to the situation where the aggregation process begins to overfit to noise in the Markov-chain transition dynamics versus the underlying nearly-completely-decomposable structure if more state groups are added.", "Figure: NO_CAPTIONFigure: NO_CAPTIONOur approach for aggregating Markov chains can be described as follows.", "Given a stochastic matrix $\\Pi \\!\\in \\!", "\\mathbb {R}^{n \\times n}_+$ of transition probabilities between $n$ states, we seek to partition this matrix to produce a reduced-size stochastic matrix $\\Phi \\!\\in \\!", "\\mathbb {R}^{m \\times m}_+$ with $m$ states.", "Since there are many possible $\\Phi $ 's that can be formed, we would like one with the least divergence to $\\Pi $ for some measure.", "Due to the different sizes of $\\Pi $ and $\\Phi $ , though, directly assessing divergence is not possible.", "To 
facilitate this comparison, we consider the construction of a joint-model stochastic matrix $\\Theta \\!\\in \\!", "\\mathbb {R}^{m \\times n}_+$ that encodes the dynamics of $\\Phi $ .", "The definition of $\\Theta $ relies on finding a partition matrix $\\Psi \\!\\in \\!", "\\mathbb {R}^{m \\times n}_+$ for determining which states in $\\Pi $ should be combined to create an aggregated state in $\\Phi $ .", "We show that an optimal $\\Psi $ and $\\Theta $ , and hence $\\Phi $ , can be found by solving a modified value-of-information criterion.", "We then characterize the errors associated with estimating this modified value-of-information for Markov chains with a finite number of states.", "Removing the errors penalizes marginal improvements in the divergence associated with including more state groups $m$ .", "This non-linearly transforms the value of information such that it contains a global maximum for a single state group count.", "We then specify an expression for this value, which coincides where the aggregation complexity is balanced against the preserved information.", "2.1$\\;\\;\\;$ Preliminaries For our approach, we consider a first-order, homogeneous Markov chain defined on a finite state space.", "We assume that is nearly-completely decomposable.", "Definition 2.1.", "The transition model of a first-order, homogeneous, nearly-completely-decomposable Markov chain is a weighted, directed graph $R_\\pi $ given by the three-tuple $(V_\\pi ,E_\\pi ,\\Pi )$ with:     (i) A set of $n$ vertices $V_\\pi \\!=\\!", "v_\\pi ^1 \\cup \\ldots \\cup v_\\pi ^n$ representing the states of the Markov chain.", "(ii) A set of $n \\!\\times \\!", "n$ edge connections $E_\\pi \\subset V_\\pi \\!\\times \\!", "V_\\pi $ between reachable states in the Markov chain.", "(iii) A stochastic transition matrix $\\Pi \\!\\in \\!", "\\mathbb {R}_+^{n \\times n}$ .", "Here, $[\\Pi ]_{i,j} \\!=$ $\\pi _{i,j}$ represents the non-negative transition probability between states $i$ and $j$ .", "We impose the constraint that the probability of experiencing a state transition is independent of time.", "Moreover, for a block-diagonal matrix $\\Pi ^*$ with zeros along the diagonal, we have that $\\Pi \\!=\\!", "\\Pi ^* \\!+\\!", "\\varepsilon C$ .", "$\\Pi ^* \\!\\in \\!", "\\mathbb {R}_+^{n \\times n}$ is a completely-decomposable stochastic matrix with $m$ indecomposable sub-matrix blocks $\\Pi ^*_i$ of order $n_i$ .", "The matrix $C \\!\\in \\!", "\\mathbb {R}_+^{n \\times n}$ satisfies $\\sum _{k=1}^{n_i}c_{p_i,k_i} \\!=\\!", "-\\sum _{j \\ne i}\\sum _{q=1}^{n_j} c_{p_i,q_j}$ $\\forall p_i$ , for blocks $\\Pi ^*_i$ and $\\Pi ^*_j$ .", "Throughout, we assume that all Markov chains are irreducible and aperiodic.", "As a consequence, there is a unique invariant probability distribution $\\gamma $ associated with the chain such that $\\gamma ^\\top \\Pi \\!=\\!", "\\gamma ^\\top $ .", "We are interested in comparing pairs of nearly-completely-decomposable Markov chains.", "A means to do this is by considering given rows of the stochastic transition matrix $\\Pi $ with those of a reduced-order chain's stochastic matrix $\\Phi $ .", "We will perform this comparison via the negative Kullback-Leibler divergence.", "It coincides with the Donsker-Varadhan rate function appearing in the large-deviations theory of Markov chains [27], [28] and measures the dissimilarity between chains defined on the same discrete state space.", "Since we are considering the problem of chain aggregation, the discrete state spaces will be 
different.", "One chain $R_\\pi $ will have $n$ states while another $R_\\varphi $ will have $m$ states.", "The dimensionalities of given rows in the corresponding transition matrices will hence not be equivalent, which precludes a direct comparison using the discrete Kullback-Leibler divergence.", "To resolve this issue, we consider construction of a joint model $R_\\vartheta $ .", "This joint model defines a joint state space composed of the states from $R_\\pi $ and $R_\\varphi $ .", "It, however, re-defines the edge set along with the weighting matrix.", "This weighting matrix $\\Theta $ has the same number of columns as $\\Pi $ and the same dynamics as $\\Phi $ , which facilitates comparisons using conventional divergences.", "The joint model relies on the specification of a partition matrix $\\Psi $ for mapping states from the reduced-order model $R_\\varphi $ to the original model $R_\\pi $ .", "Here, we consider probabilistic partition functions so as to capture the inherent uncertainty in the state combination.", "Definition 2.2.", "Let $R_\\pi \\!=\\!", "(V_\\pi ,E_\\pi ,\\Pi )$ and $R_\\varphi \\!=\\!", "(V_\\varphi ,E_\\varphi ,\\Phi )$ be transition models of two Markov chains over $n$ and $m$ states, respectively.", "A probabilistic partition function $\\psi $ is a surjective mapping between two state index sets, $\\mathbb {Z}_{1:n}$ and $\\mathbb {Z}_{1:m}$ , such that $\\psi ^{-1}(\\mathbb {Z}_{1:m})$ is a partition of $\\mathbb {Z}_{1:n}$ , which has a given probabilistic chance of occurring.", "That is, $\\psi ^{-1}(j) \\!\\subset \\!", "\\mathbb {Z}_{1:n} \\times \\mathbb {R}_+^n$ is not empty and where $\\psi ^{-1}(1) \\cup \\ldots \\cup \\, \\psi ^{-1}(m) \\!=\\!", "\\mathbb {Z}_{1:n}^m \\times \\mathbb {R}_+^{m \\times n}$ , with the real-valued responses being non-negative and summing to one.", "The probabilistic partition of a state index set induces a probabilistic partition matrix $[\\Psi ]_{i,j} \\!=\\!", "\\psi _{i,j}$ , where $\\psi _{i,j} \\!=\\!", "\\zeta $ if $i \\!\\in \\!", "\\psi ^{-1}(j)$ occurs with probability $\\zeta $ .", "The set of all probabilistic partition matrices is $\\lbrace \\Psi \\!\\in \\!", "\\mathbb {R}_+^{n \\times m}|[\\Psi ]_{i,j} \\!=\\!", "\\psi _{i,j} \\!\\in \\!", "[0,1],\\; \\sum _{j=1}^m \\psi _{i,j} \\!=\\!", "1\\rbrace $ .", "Definition 2.3.", "Let $R_\\pi \\!=\\!", "(V_\\pi ,E_\\pi ,\\Pi )$ and $R_\\varphi \\!=\\!", "(V_\\varphi ,E_\\varphi ,\\Phi )$ be transition models of two Markov chains over $n$ and $m$ states, respectively, where $m \\!<\\!", "n$ .", "$R_\\vartheta \\!=\\!", "(V_\\vartheta ,E_\\vartheta ,\\Theta )$ is a joint model, with $m \\!+\\!", "n$ states, that is defined by     (i) A vertex set $V_\\vartheta \\!=\\!", "V_\\pi \\cup V_\\varphi $ , which is the union of all state vertices in $R_\\pi $ and $R_\\varphi $ .", "(ii) An edge set $E_\\vartheta \\subset V_\\varphi \\!\\times \\!", "V_\\pi $ , which are one-to-many mappings from the states in the original transition model $R_\\pi $ to the reduced-order transition model $R_\\varphi $ .", "(iii) A weighting matrix $\\Theta \\!\\in \\!", "\\mathbb {R}_+^{m \\times n}$ .", "The partition function $\\psi $ provides a relationship between the stochastic matrices $\\Phi $ and $\\Theta $ of $R_\\varphi $ and $R_\\vartheta $ , respectively.", "This is given by $\\Phi \\!=\\!", "\\Theta \\Psi $ , or, rather, $\\varphi _{i,j} \\!=\\!", "\\sum _{k=1}^n \\vartheta _{k,j}\\psi _{k,i}$ $\\forall i,j$ , where $\\Psi $ is a probabilistic partition matrix; here, $\\psi 
_{i,j} \\!=\\!", "p(v_\\varphi ^j|v_\\pi ^i)$ and $\\varphi _{i,j} \\!=\\!", "p(v_\\varphi ^j|v_\\varphi ^i)$ .", "Our corresponding technical report should be consulted for further details about these choices and illustrations of the various concepts [29].", "This report also contains proofs for many of the ensuing claims.", "2.2$\\;\\;\\;$ Aggregating Markov Chains For any given transition model $R_\\pi $ , we would like to find, by way of the joint model $R_\\vartheta $ , another transition model $R_\\varphi $ with fewer states that resembles the dynamics encoded by $R_\\pi $ .", "At the very least, a model $R_\\vartheta $ should be sought with a weighting matrix $\\Theta $ that has the least expected divergence with respect to the transition matrix $\\Pi $ of $R_\\pi $ for some partition.", "Definition 2.4.", "Let $R_\\pi \\!=\\!", "(V_\\pi ,E_\\pi ,\\Pi )$ , $R_\\varphi \\!=\\!", "(V_\\varphi ,E_\\varphi ,\\Phi )$ , and $R_\\vartheta \\!=\\!", "(V_\\vartheta ,E_\\vartheta ,\\Theta )$ be transition models of two Markov chains over $n$ and $m$ states and the joint model over $n \\!+\\!", "m$ states, respectively.", "The least expected divergence between $\\Pi $ and $\\Theta $ , and hence $\\Pi $ and $\\Phi $ , is given in (1).", "Here, $\\gamma _i \\!=\\!", "p(v_\\pi ^i)$ and $\\pi _{i,k} \\!=\\!", "p(v_\\pi ^i|v_\\pi ^k)$ .", "The least expected distortion possesses too few constraints to make discerning the number of state groups $m$ viable, though.", "Figure: NO_CAPTIONFigure: NO_CAPTIONTo help discern $m$ , we impose that the partitions should minimize the information loss associated with the state quantization process.", "That is, the mutual dependence between states in the high-order and low-order chains should be maximized with respect to a supplied bound.", "Simultaneously, the least expected divergence, for this bound, should be achieved while also seeking a maximal compression of the dynamics.", "Realizing each of these competing objectives leads to a combined value-of-information and information-bottleneck aggregation process.", "Definition 2.5.", "Let $R_\\pi \\!=\\!", "(V_\\pi ,E_\\pi ,\\Pi )$ , $R_\\varphi \\!=\\!", "(V_\\varphi ,E_\\varphi ,\\Phi )$ , and $R_\\vartheta \\!=\\!", "(V_\\vartheta ,E_\\vartheta ,\\Theta )$ be transition models of two Markov chains over $n$ and $m$ states and the joint model over $n \\!+\\!", "m$ states, respectively.", "An optimal reduced-order transition model $R_\\varphi $ with respect to the original model $R_\\pi $ can be found as follows:     (i) Optimal partitioning: Find a probabilistic partition matrix $\\Psi $ that solves (2) for some $r \\!\\in \\!", "\\mathbb {R}_+$ ; $r$ has an upper bound of $-\\sum _{i=1}^n \\gamma _i\\textnormal {log}(\\gamma _i)$ .", "As well, find the corresponding weighting matrix for $R_\\vartheta $ : $\\Theta \\!=\\!", "U^\\top \\Pi $ , where $[U]_{i,j} \\!=\\!", "\\gamma _i \\psi _{i,j}/\\sum _{p=1}^n \\gamma _p\\psi _{p,j}$ .", "(ii) Transition matrix construction: Obtain the transition matrix for $R_\\varphi $ via: $\\varphi _{i,j} \\!=\\!", "\\sum _{k=1}^n \\vartheta _{k,j}\\psi _{k,i}$ using the optimal weights $\\Theta $ and the probabilistic partition matrix $\\Psi $ from step (i).", "Here, $\\alpha _j \\!=\\!", "p(v_\\varphi ^j)$ , $\\omega _q \\!=\\!", "p(q)$ , $\\eta _{q,i} \\!=\\!", "p(q|v_\\pi ^i)$ , and $\\kappa _{q,j} \\!=\\!", "p(q|v_\\varphi ^j)$ for some intermediate random variable $q$ .", "A reason for considering this combined model is that simply trading off between the preserved 
information and the model complexity, as in the information bottleneck, does not encode notions of the underlying weighted-graph geometry.", "This is because Shannon information is geometrically invariant: it yields an infinite set of degenerate solutions that do not minimize a specific divergence measure.", "Global solutions to (2) can be found via the initial condition in (3) and the expectation-maximization-like updates in (4).", "Proposition 2.1.", "Let $R_\\pi \\!=\\!", "(V_\\pi ,E_\\pi ,\\Pi )$ and $R_\\varphi \\!=\\!", "(V_\\varphi ,E_\\varphi ,\\Phi )$ be transition models of two Markov chains over $n$ and $m$ states.", "The optimal partition matrix $\\Psi $ that globally solves (2) can be found via the alternating update in (3) for iteration $k \\!=\\!", "0$ and the alternating updates in (4) for iterations $k \\!=\\!", "1,2,\\ldots $ Here, $\\beta \\!\\in \\!", "\\mathbb {R}_+$ is a Lagrange multiplier for handling the constraint bound $r \\!\\in \\!", "\\mathbb {R}_+$ .", "The following proposition shows that the number of reduced-model state groups increases once $\\beta $ reaches certain critical values.", "Proposition 2.2.", "Let $R_\\pi \\!=\\!", "(V_\\pi ,E_\\pi ,\\Pi )$ and $R_\\varphi \\!=\\!", "(V_\\varphi ,E_\\varphi ,\\Phi )$ be transition models of two Markov chains over $n$ and $m$ states.", "For some $\\beta _0$ , suppose $\\Theta _{\\beta _0}$ , the matrix $\\Theta $ for that value of $\\beta _0$ , satisfies the following inequality $d^2/d\\epsilon ^2\\, F(\\Psi ,K,\\alpha ;\\Pi ,\\Theta _{\\beta _0} \\!+\\!", "\\epsilon Q,H,\\gamma )|_{\\epsilon = 0} > 0$ , for the modified value-of-information Lagrangian from (2).", "Here, $Q \\!\\in \\!", "\\mathbb {R}_+^{m \\times n}$ is such that $\\sum _{k=1}^m q_{k,1:n}^\\top q_{k,1:n} \\!=\\!", "1$ and $\\sum _{j=1}^n q_{i,j} \\!=\\!", "0$ $\\,\\forall i$ .", "A critical value $\\beta _c$ satisfies $\\beta _c \\!=\\!", "\\textnormal {min}_{\\beta > \\beta _0}\\,(d^2/d\\epsilon ^2\\, F(\\Psi ,K,\\alpha ;\\Pi ,\\Theta _{\\beta } \\!+\\!", "\\epsilon Q,H,\\gamma )|_{\\epsilon = 0} \\le 0).$ The number of rows in $\\Theta $ and columns in $\\Psi $ needs to be increased, by one, once $\\beta \\!>\\!", "\\beta _c$ , since a phase change occurs.", "Sweeping over critical values of $\\beta $ yields a hierarchy of probabilistic partitions $\\Psi $ and hence reduced-order-model transition matrices $\\Phi $ with different numbers of state groups.", "This proposition does not, however, reveal a way to find the optimal number of state groups.", "To do that, we will determine when the modified value-of-information begins to overfit to the transition-dynamics noise.", "This leads to additional terms that we can subtract from the criterion to essentially regularize the aggregation and penalize for using too many or too few state groups than can be resolved in the transition dynamics.", "2.2.1$\\;\\;\\;$ Aggregated State Group Count We assume, for the modified value-of-information, that overfitting to noise is a byproduct of dealing with finite-state-space Markov chains and hence introducing estimation errors into the joint probabilities.", "Characterizing the error effects and removing them yields a version of the value-of-information curve that is either convex or monotonically non-decreasing then monotonically non-increasing.", "Definition 2.6.", "Let $R_\\pi \\!=\\!", "(V_\\pi ,E_\\pi ,\\Pi )$ , $R_\\varphi \\!=\\!", "(V_\\varphi ,E_\\varphi ,\\Phi )$ , and $R_\\vartheta \\!=\\!", "(V_\\vartheta ,E_\\vartheta ,\\Theta )$ be transition 
models of two Markov chains over $n$ and $m$ states and the joint model over $n \\!+\\!", "m$ states, respectively.", "An optimal reduced-order transition model $R_\\varphi $ , with respect to the original model $R_\\pi $ , can be found as follows:     (i) Optimal partitioning: Find a probabilistic partition matrix $\\Psi $ that solves (5) for some $r \\!\\in \\!", "\\mathbb {R}_+$ .", "Equation (5) has the same constraint set as (2).", "Find the corresponding weighting matrix for $R_\\vartheta $ : $\\Theta \\!=\\!", "U^\\top \\Pi $ , where $U$ is defined above.", "(ii) Transition matrix construction: Obtain the transition matrix for $R_\\varphi $ via: $\\varphi _{i,j} \\!=\\!", "\\sum _{k=1}^n \\vartheta _{k,j}\\psi _{k,i}$ using the optimal weights $\\Theta $ and the probabilistic partition matrix $\\Psi $ from step (i).", "The systematic underestimation/overestimation error in (2) is $g$ th-order minimized when solving (5).", "Proposition 2.3.", "Let $R_\\pi \\!=\\!", "(V_\\pi ,E_\\pi ,\\Pi )$ and $R_\\varphi \\!=\\!", "(V_\\varphi ,E_\\varphi ,\\Phi )$ be transition models of two Markov chains over $n$ and $m$ states.", "The optimal partition matrix $\\Psi $ for $R_\\pi $ that globally solves (5) and facilitates the construction of $R_\\varphi $ can be found via the updates in (6) for iteration $k \\!=\\!", "0$ and the update in (7) for iterations $k \\!=\\!", "1,2,\\ldots $ For (6) and (7), the denominators are such that the partition-matrix entries are normalized to become probabilities.", "Here, $\\alpha _j \\!=\\!", "p(v_\\varphi ^j)$ , $\\gamma _i \\!=\\!", "p(v_\\pi ^i)$ , $\\psi _{i,j} \\!=\\!", "p(v_\\varphi ^j|v_\\pi ^i)$ , $\\eta _{q,i} \\!=\\!", "p(q|v_\\pi ^i)$ , $\\kappa _{q,j} \\!=\\!", "p(q|v_\\varphi ^j)$ , $\\tau _{i,j} \\!=\\!", "p(v_\\pi ^i|v_\\varphi ^j)$ , and $\\rho _{q,j} \\!=\\!", "p(q,v_\\varphi ^j)$ .", "The terms $\\overline{\\kappa }_{q,j}$ and $\\overline{\\eta }_{q,i}$ represent the estimation errors associated with $\\kappa _{q,j}$ and $\\eta _{q,i}$ , while $\\overline{\\gamma }_i$ is the error associated with approximating $\\gamma _i$ .", "We assume that the average error is zero: $\\mathbb {E}[\\overline{\\eta }_{q,i}] \\!=\\!", "0$ and $\\mathbb {E}[\\overline{\\gamma }_{i}] \\!=\\!", "0$ .", "Figure: Modified value-of-information-based partitioning results for 9-state nearly-completely-decomposable Markov chains with: (a) four discernible state groups and (b) seven discernible state groups.", "In both (a) and (b), we show the original stochastic matrix Π\\Pi with hardened versions of the partitions Ψ\\Psi overlaid for four critical values of β\\beta ; as noted in the previous section, mm can be inferred from each value of β\\beta .", "The unique colors in the partition plots correspond to which state group in Φ\\Phi a state in Π\\Pi is most likely to be associated.", "The right-most plots highlight the Shannon information in blue and the error-subtracted Shannon information in green.", "After a certain number of clusters, the Shannon information plateaus, indicating that there is negligible benefit for including more state groups in Φ\\Phi .", "The modified Shannon information begins to decrease when this occurs; the results align with the discernible number of groups for these chains.Due to the monotonicity properties of (5), we can construct upper and lower bounds for it and characterize their rate of change with respect to $\\beta $ .", "This permits algebraically discerning where maxima of the equivalent dual problem (5) occur.", "Proposition 
2.4.", "Let $R_\\pi \\!=\\!", "(V_\\pi ,E_\\pi ,\\Pi )$ , $R_\\varphi \\!=\\!", "(V_\\varphi ,E_\\varphi ,\\Phi )$ , and $R_\\vartheta \\!=\\!", "(V_\\vartheta ,E_\\vartheta ,\\Theta )$ be transition models of two Markov chains over $n$ and $m$ states and the joint model over $n \\!+\\!", "m$ states, respectively.", "The dual problem to (5) achieves a maximum for the Lagrange multiplier value $\\beta ^* \\!=\\!", "2^{\\sum _{i=1}^n\\sum _{j=1}^m \\gamma _i \\psi _{i,j}\\textnormal {log}(\\psi _{i,j}/\\alpha _j)}/2n$ .", "As before, the partitioning process defined by (6) and (7) undergoes a series of phase changes whereby the number of state groups increase once $\\beta $ exceeds some critical value.", "The value of $\\beta ^*$ specified by the preceding proposition coincides with a partitioning where a certain number of state groups are defined.", "It can be taken as the point in (5) where the complexity of the dynamics compression is balanced against the information contained about the original Markov chain in the reduced-order Markov chain.", "For (2), $\\beta ^*$ often marks the beginning of the value-of-information's asymptotic region where there are diminishing returns for including more clusters.", "In this section, we assess the empirical performance of the modified value-of-information criterion.", "We gauge how well the predicted `optimal' free-parameter value aligns with the discernible number of state groups for nearly-completely-decomposable Markov chains.", "3.1$\\;\\;\\;$ Simulation Protocols We adopted the following simulation protocols for our aggregation framework.", "We initialized the aggregation process with a partition matrix of all ones, $\\Psi \\!=\\!", "[1]_{9 \\times 1}$ , signifying that each state belongs to a single group.", "This is the global optimal solution of the aggregation problem and coincides with a parameter value $\\beta $ of zero for the modified value-of-information.", "We then found the subsequent critical values of $\\beta $ and increased the column count of $\\Psi $ .", "We determined which state group would be further split and modified both the new column and an existing column of $\\Psi $ to randomly allocate the appropriate states.", "This initialization process bootstraps the quantization for the new cluster and typically achieves convergence in only a few iterations.", "It also permits the value of information to reliably track the global minimizer as $\\beta $ increases.", "For certain problems, a priori specifying a fixed amount of partition updates may not permit finding a steady-state solution.", "We therefore run the alternating updates until no entries of the partition matrix change across two iterations.", "3.2$\\;\\;\\;$ Simulation Results We established the performance of the aggregation partitioning process through two examples.", "The first, shown in figure 1(a), corresponds to a Markov chain with nine states and four state groups with strong intra-group interactions and weak inter-group interactions.", "This is a relatively simple aggregation problem.", "The second example, presented in figure 1(b), is of a nine-state Markov chain with a single dominant state group and six outlying states with near-equal transition probabilities.", "This is a more challenging problem than the first, as the outlying states cannot be reliably combined without adversely impacting the mutual dependence.", "In figures 1(a) and 1(b), we provided partitions for four critical values of the free parameter $\\beta $ .", "The `optimal' value of $\\beta $ 
, as predicted by our modified value of information, leads to four and seven state groups for the first and second examples, respectively.", "Here, we considered a second-order approximation of the underestimation/overestimation error, i.e., $g \\!=\\!", "2$ .", "Higher-order approximations did not modify the shape of the resulting Shannon-information curves much compared to those in figures 1(a) and 1(b); the resulting `optimal' number of state groups remained the same in these cases.", "For both examples, the associated partitions for the `optimal' values of $\\beta $ align well with an inspection of the dynamics of the stochastic matrix: the partitions separate states that are more likely to transition to each other from those that are not.", "Those partitions for `non-optimal' $\\beta $ 's either over- or under-quantize the chain states.", "That is, for critical $\\beta $ 's before the `optimal' value, there is a moderate increase in the Shannon information, while the remaining $\\beta $ 's only yield modest increases.", "The `optimal' value of $\\beta $ for both examples, in contrast, lies at the `knee' of this curve, which is where the divergence minimization is balanced against the competing objective of state-mutual-dependence maximization with respect to some bound.", "It is also where the complexity of the aggregation result is in harmony with the information it contains about the original Markov chain.", "To lend credence to these results, we considered forty reformulated graph-based partition validity measures [30], [31] across a hundred Monte Carlo simulations.", "For the Markov chain in figure 1(a), thirty-three indices specified that there were four state groups while seven indices indicated that there were three state groups.", "The results for the chain in figure 1(b) had more uncertainty, which was due to the high number of outlying states and the ability to combine them in multiple ways to reduce the models' divergence.", "Ten indices suggested that there were two state groups, eight indices chose three state groups, while the remaining twenty-two favored either six or seven state groups almost equally." ] ]
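The aggregation scheme evaluated above can be illustrated numerically. The sketch below is not the paper's updates (3)-(7), which are only referenced in this excerpt; it is a generic information-bottleneck-style alternating scheme, assuming soft assignments of the states of a stochastic matrix to m groups at a fixed beta, with the group-conditional transition rows rebuilt on every pass. The 9-state block-structured test chain, the choice m = 4, and beta = 20 are illustrative values, not taken from the simulations reported above.

```python
import numpy as np

def aggregate(Pi, m, beta, iters=200, seed=0):
    """Soft aggregation of an n-state stochastic matrix Pi into m groups at fixed beta."""
    rng = np.random.default_rng(seed)
    n = Pi.shape[0]
    gamma = np.full(n, 1.0 / n)                    # uniform weight over original states
    Psi = rng.dirichlet(np.ones(m), size=n)        # soft partition p(group | state)
    for _ in range(iters):
        alpha = gamma @ Psi                        # group marginals p(group)
        Theta = (Psi * gamma[:, None]).T @ Pi      # unnormalized p(group, next state)
        Theta /= Theta.sum(axis=1, keepdims=True) + 1e-300   # rows: p(next | group)
        # KL divergence between each state's transition row and each group's row
        kl = np.array([[np.sum(Pi[i] * np.log((Pi[i] + 1e-300) / (Theta[j] + 1e-300)))
                        for j in range(m)] for i in range(n)])
        Psi = alpha[None, :] * np.exp(-beta * kl)  # Gibbs-like soft reassignment
        Psi /= Psi.sum(axis=1, keepdims=True)
    return Psi, Theta

# Illustrative 9-state nearly-completely-decomposable chain with four blocks,
# loosely echoing the structure of the example in figure 1(a).
sizes = [3, 2, 2, 2]
Pi = np.full((9, 9), 0.02)
start = 0
for b in sizes:
    Pi[start:start + b, start:start + b] = 0.9 / b
    start += b
Pi /= Pi.sum(axis=1, keepdims=True)

Psi, Theta = aggregate(Pi, m=4, beta=20.0)
print(np.round(Psi, 2))   # hardening each row typically recovers the four blocks
```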
2107.01799
[ [ "The basepoint-freeness threshold of a very general abelian surface" ], [ "Abstract For abelian surfaces of Picard rank 1, we perform explicit computations of the cohomological rank functions of the ideal sheaf of one point, and in particular of the basepoint-freeness threshold.", "Our main tool is the relation between cohomological rank functions and Bridgeland stability.", "In virtue of recent results of Caucci and Ito, these computations provide new information on the syzygies of polarized abelian surfaces." ], [ "Introduction", "Throughout this note we work over an algebraically closed field $\\mathbb {K}$ .", "Motivated by the continuous rank functions of Barja, Pardini and Stoppino ([1]), in their paper [10] Jiang and Pareschi introduced the cohomological rank functions $h^i_{F,l}$ associated to a coherent sheaf (or more generally, a bounded complex of coherent sheaves) $F$ on a polarized abelian variety $(A,l)$ .", "For $x\\in \\mathbb {Q}$ , $h^i_{F,l}(x)$ makes sense of the $i$ -th (hyper)cohomological rank of $F$ twisted with a (general) representative of the fractional polarization $xl$ .", "One of the main applications of these functions corresponds to the study of syzygies of abelian varieties.", "Jiang and Pareschi already observed in [10] that the functions of the ideal sheaf $\\mathcal {I}_q$ of a point $q\\in A$ , and more concretely the basepoint-freeness threshold $\\epsilon _1(l)=\\inf \\left\\lbrace x\\in \\mathbb {Q}\\mid h^1_{\\mathcal {I}_q,l}(x)=0\\right\\rbrace ,$ encodes interesting positivity properties of the polarization $l$ : $\\epsilon _1(l)\\le 1$ , with equality if and only if any line bundle representing $l$ has base points.", "[10] If $\\epsilon _1(l)<\\frac{1}{2}$ , then any line bundle representing $l$ is projectively normal.", "Shortly after, Caucci generalized (REF ) to higher syzygies, proving that every line bundle representing $l$ satisfies the property $(N_p)$ as long as $\\epsilon _1(l)<\\frac{1}{p+2}$ ([3]).", "The reader is referred to [12] for a definition of the property $(N_p)$ .", "As a consequence, he obtained a proof of Lazarsfeld's conjecture (originally proved in $\\operatorname{char}(\\mathbb {K})=0$ by Pareschi [16]) in arbitrary characteristic: if $L$ is an ample line bundle on an abelian variety, then $L^m$ satisfies $(N_p)$ for every $m\\ge p+3$ .", "Caucci's result has received considerable attention as an effective tool to understand the syzygies of abelian varieties endowed with a primitive polarization (i.e.", "a polarization which is not a multiple of another one), by means of upper bounds for the basepoint-freeness threshold (see [9], [5], [6]).", "Furthermore, for $p\\ge 1$ the hypothesis $\\epsilon _1(l)<\\frac{1}{p+2}$ ensuring $(N_p)$ has recently been slightly weakened by Ito ([7]).", "In the present note we give explicit expressions for the function $h^0_{\\mathcal {I}_q,l}$ , which is enough for determining $h^1_{\\mathcal {I}_q,l}$ and hence $\\epsilon _1(l)$ .", "We do this for a certain class of polarized abelian surfaces which includes those with Picard rank 1.", "More precisely, our main result is: Theorem A Let $(S,l)$ be a $(1,d)$ -polarized abelian surface such that $D\\cdot l$ is a multiple of $l^2$ for every divisor class $D$ , and let $q\\in S$ be a (closed) point.", "If $d$ is a perfect square, then the cohomological rank function $h^0_{\\mathcal {I}_q,l}$ reads $h^0_{\\mathcal {I}_q,l}(x)=\\left\\lbrace \\begin{array}{c l}0 & x \\le \\frac{\\sqrt{d}}{d}\\\\dx^2-1 & x \\ge 
\\frac{\\sqrt{d}}{d}\\\\\\end{array}\\right.$ In particular, $\\epsilon _1(l)=\\frac{\\sqrt{d}}{d}$ .", "If $d$ is not a perfect square, then the cohomological rank function $h^0_{\\mathcal {I}_q,l}$ is either (REF ) or $h^0_{\\mathcal {I}_q,l}(x)=\\left\\lbrace \\begin{array}{c l}0 & x \\le \\frac{2\\tilde{y}}{\\tilde{x}+1}\\\\\\frac{d(\\tilde{x}+1)}{2}x^2-2d\\tilde{y}\\cdot x+\\frac{\\tilde{x}-1}{2} & \\frac{2\\tilde{y}}{\\tilde{x}+1} \\le x \\le \\frac{2\\tilde{y}}{\\tilde{x}-1}\\\\dx^2-1 & x \\ge \\frac{2\\tilde{y}}{\\tilde{x}-1}\\\\\\end{array}\\right.$ where $(\\tilde{x},\\tilde{y})$ is a nontrivial positive solution to Pell's equation $x^2-4d\\cdot y^2=1$ .", "In particular, if $(x_0,y_0)$ is the minimal positive solution to this equation, then $\\epsilon _1(l)\\le \\frac{2y_0}{x_0-1}$ .", "Under the hypothesis of (REF ), assume also that $\\operatorname{char}(\\mathbb {K})$ divides neither $x_0^2$ nor $x_0^2-1$ .", "Then the expression for $h^0_{\\mathcal {I}_q,l}$ is the one corresponding to either the minimal solution $(x_0,y_0)$ or to the second smallest positive solution $(x_1,y_1)$ .", "In particular, $\\epsilon _1(l)\\in \\lbrace \\frac{2y_0}{x_0-1},\\frac{2y_1}{x_1-1}\\rbrace $ .", "Parts (REF ) and (REF ) of this result are proved in sec:Upperbound.", "Our main tool is a natural description of cohomological rank functions on abelian surfaces in terms of certain stability conditions on the derived category, which has recently been proved by Lahoz and the author in [13].", "Essentially, this description establishes that $h^0_{\\mathcal {I}_q,l}$ is determined by the Harder-Narasimhan filtrations of $\\mathcal {I}_q$ along the so-called $(\\alpha ,\\beta )$ -plane of stability conditions.", "The key point of this approach is that the potential destabilizing walls for $\\mathcal {I}_q$ are in correspondence with positive solutions to Pell's equation $x^2-4d\\cdot y^2=1$ (see walls).", "The absence of such solutions when $d$ is a perfect square shows (REF ), whereas for $d$ not a perfect square one obtains (REF ).", "The corresponding upper bounds for the basepoint-freeness threshold refine those given by Ito for general complex abelian surfaces ([6]).", "In addition, the expressions of (REF ) and (REF ) reveal the differentiability of $h^0_{\\mathcal {I}_q,l}$ at certain rational points; this is relevant with regard to syzygies, since it enables us to apply Ito's refined version of Caucci's criterion.", "As a result, we have: Corollary Let $(S,l)$ be a $(1,d)$ -polarized abelian surface which satisfies the hypothesis of main, and let $L$ be any line bundle representing the polarization $l$ .", "If $d\\ge 7$ , then $L$ is projective normal.", "If $d>(p+2)^2$ for $p\\ge 1$ , then $L$ satisfies the property $(N_p)$ .", "For $\\mathbb {K}=\\mathbb {C}$ , we point out that syzygies.", "(REF ) recovers a well known result of Iyer ([8], see also [11] for some cases previously covered), and the case $p=1$ of syzygies.", "(REF ) recovers a result of Gross and Popescu ([4]).", "For arbitrary $p$ , syzygies.", "(REF ) improves the bound ensuring the property $(N_p)$ that was given recently by Ito in [6].", "In sec:Lowerbound we deal with the proof of main.", "(REF ).", "This is the problem of determining, when $d$ is not a perfect square, which of the potential functions described in main.", "(REF ) really occur.", "Modulo certain arithmetic restrictions on $\\operatorname{char}(\\mathbb {K})$ , we prove that only two possibilities may happen: those corresponding to the two smallest 
positive solutions of Pell's equation.", "This is guaranteed by the explicit construction of curves containing all the torsion points of an unexpectedly high order (see symcurve).", "For this construction, we use the classical theory of theta groups developed by Mumford in [15].", "It is worth noting that, for all the non-perfect squares $d$ for which we know the exact value of $\\epsilon _1(l)$ , the equality $\\epsilon _1(l)=\\frac{2y_0}{x_0-1}$ holds.", "In general, this would follow from a small refinement of symcurve, that at present we do not know how to prove (see finalrem.", "(REF ) for details).", "Acknowledgements.", "This work has benefited from helpful conversations with my advisors Martí Lahoz and Joan Carles Naranjo.", "Thanks are also due to Federico Caucci for useful comments." ], [ "Cohomological rank functions", "Let $(A,l)$ be a $g$ -dimensional polarized abelian variety, i.e.", "$l\\in \\operatorname{NS}(A)=\\operatorname{Pic}(A)/\\operatorname{Pic}^0(A)$ is the class of an ample line bundle $L$ .", "We will denote by $\\varphi _l:A\\rightarrow \\operatorname{Pic}^0(A),\\;\\;p\\mapsto t_p^*L\\otimes L^{-1}$ its polarization isogeny, where $t_p$ stands for the translation by $p\\in A$ .", "Let $\\mathop {\\mathrm {D}^{\\mathrm {b}}}\\nolimits (A)$ be the bounded derived category of $A$ .", "In the paper [10] (see [3] for positive characteristic), a cohomological rank function $h^i_{F,l}:\\mathbb {Q}\\rightarrow \\mathbb {Q}_{\\ge 0}$ is associated to every object $F\\in \\mathop {\\mathrm {D}^{\\mathrm {b}}}\\nolimits (A)$ and every $i\\in \\mathbb {Z}$ .", "If $x_0=\\frac{a}{b}\\in \\mathbb {Q}$ with $b\\in \\mathbb {Z}_{>0}$ , then $h^i_{F,l}(x_0)$ is defined as $h^i_{F,l}(x_0):=\\frac{1}{b^{2g}}h^i(A,\\mu _b^*F\\otimes L^{ab}\\otimes \\alpha )$ for general $\\alpha \\in \\operatorname{Pic}^0(A)$ , where $\\mu _b:A\\rightarrow A$ is the multiplication-by-$b$ isogeny.", "Since $\\mu _b^*l=b^2l$ and $\\deg (\\mu _b)=b^{2g}$ , the number $h^i_{F,l}(x_0)$ gives a meaning to the $i$ -th (hyper)cohomological rank of $F$ twisted with a (general) representative of the fractional polarization $x_0l$ .", "These functions are polynomial in the neighborhood of any fixed $x_0\\in \\mathbb {Q}$ .", "More explicitly, for any sheaf $E$ let $\\chi _{E,l}$ be the Hilbert polynomial of $E$ with respect to $l$ .", "Then for every (rational) $x$ in a right neighborhood of $x_0$ , the following equality holds ([10]): $h^i_{F,l}(x)=\\frac{(x-x_0)^g}{\\chi (l)}\\cdot \\chi _{\\varphi _l^*R^{g-i}\\Phi _{\\mathcal {P}^{\\vee }}((\\mu _{b}^*F\\otimes L^{ab})^{\\vee }),l}\\left(\\frac{1}{b^2(x-x_0)}\\right),$ where $\\Phi _{\\mathcal {P}^{\\vee }}$ denotes the Fourier-Mukai transform with kernel the dual $\\mathcal {P}^\\vee $ of the Poincaré bundle.", "In this note, we concentrate on the functions $h^i_{\\mathcal {I}_q,l}$ for a (closed) point $q\\in A$ ; by independence of $q$ , we fix $q$ to be the origin $0\\in A$ .", "As explained in the introduction, previous work of Jiang-Pareschi, Caucci and Ito shows that they encode information about the polarization $l$ : Theorem 2.1 ([10], [3], [7]) Let $(A,l)$ be a polarized abelian variety, and let $L$ be any ample line bundle representing the polarization $l$ .", "$\\mathcal {I}_0\\langle l\\rangle $ is IT(0) if and only if $L$ is basepoint-free.", "If $\\mathcal {I}_0\\langle \\frac{1}{2}l\\rangle $ is IT(0), then $L$ is projectively normal.", "If $\\mathcal {I}_0\\langle \\frac{1}{p+2}l\\rangle $ is M-regular for some $p\\ge 1$ , then $L$ 
satisfies the property $(N_p)$ .", "The reader is referred to [10] for the definitions of a $\\mathbb {Q}$ -twisted coherent sheaf $F\\langle x_0l\\rangle $ being IT(0), M-regular or a GV-sheaf.", "In the particular case $F=\\mathcal {I}_0$ we will use the following characterization, which is an immediate consequence of [10]: Lemma Let $x_0\\in \\mathbb {Q}$ be a positive rational number.", "$\\mathcal {I}_0\\langle x_0l\\rangle $ is a GV-sheaf if and only if $h^1_{\\mathcal {I}_0,l}(x_0)=0$ .", "$\\mathcal {I}_0\\langle x_0l\\rangle $ is M-regular if and only if $h^1_{\\mathcal {I}_0,l}(x_0)=0$ and $h^1_{\\mathcal {I}_0,l}$ is of class $\\mathcal {C}^1$ at $x_0$ .", "$\\mathcal {I}_0\\langle x_0l\\rangle $ is IT(0) if and only if there is $\\epsilon >0$ such that $h^1_{\\mathcal {I}_0,l}(x)=0$ for all $x\\in (x_0-\\epsilon ,x_0)$ ." ], [ "The $(\\alpha ,\\beta )$ -plane of a polarized abelian surface", "In this subsection, $(S,l)$ will be a polarized abelian surface.", "We briefly recall the relation between cohomological rank functions and stability in the $(\\alpha ,\\beta )$ -plane associated to $l$ , which appeared recently in [13].", "For every $(\\alpha ,\\beta )\\in \\mathbb {R}_{>0}\\times \\mathbb {R}$ , there exists a Bridgeland stability condition $\\sigma _{\\alpha ,\\beta }=(\\operatorname{Coh}^\\beta (S),Z_{\\alpha ,\\beta })$ (see [14], or [2] for the original treatment), where: [•] $\\operatorname{Coh}^\\beta (S)$ is the heart of a bounded t-structure on $\\mathop {\\mathrm {D}^{\\mathrm {b}}}\\nolimits (S)$ .", "Concretely, if $\\mu _l=\\frac{l\\cdot \\operatorname{ch}_1}{l^2\\cdot \\operatorname{ch}_0}$ is the slope of a coherent sheaf, complexes $F\\in \\operatorname{Coh}^\\beta (S)$ are those satisfying: $\\mu _l(E)\\le \\beta $ for every subsheaf $E\\subset \\mathcal {H}^{-1}(F)$ , $\\mu _l(Q)>\\beta $ for every quotient $\\mathcal {H}^0(F)\\twoheadrightarrow Q$ , and $\\mathcal {H}^i(F)=0$ for $i\\ne 0,-1$ .", "Let $K_0(\\mathop {\\mathrm {D}^{\\mathrm {b}}}\\nolimits (S))$ denote the Grothendieck group of $\\mathop {\\mathrm {D}^{\\mathrm {b}}}\\nolimits (S)$ .", "Then $Z_{\\alpha ,\\beta }:K_0(\\mathop {\\mathrm {D}^{\\mathrm {b}}}\\nolimits (S))\\rightarrow \\mathbb {C}$ is a group homomorphism with the following properties: $Z_{\\alpha ,\\beta }$ factors through the homomorphism $v:K_0(\\mathop {\\mathrm {D}^{\\mathrm {b}}}\\nolimits (S))\\rightarrow \\Lambda =\\mathbb {Z}^3$ defined by $v(E)=\\left(l^2\\cdot \\operatorname{ch}_0(E),l\\cdot \\operatorname{ch}_1(E),\\operatorname{ch}_2(E)\\right)$ For every nonzero $E\\in \\operatorname{Coh}^\\beta (S)$ the inequality $\\Im Z_{\\alpha ,\\beta }(E)\\ge 0$ holds, and $\\Re Z_{\\alpha ,\\beta }(E)<0$ whenever $\\Im Z_{\\alpha ,\\beta }(E)=0$ .", "Every object of $\\operatorname{Coh}^\\beta (S)$ admits a Harder-Narasimhan (HN for short) filtration with respect to the tilt slope $\\nu _{\\alpha ,\\beta }(E):=\\left\\lbrace \\begin{array}{c l}\\frac{-\\Re Z_{\\alpha ,\\beta }(E)}{\\Im Z_{\\alpha ,\\beta }(E)} & \\Im Z_{\\alpha ,\\beta }(E) >0\\\\+\\infty & \\Im Z_{\\alpha ,\\beta }(E)=0\\\\\\end{array}\\right.$ It is specially relevant that $\\sigma _{\\alpha ,\\beta }$ satisfies the support property (see [14]) with respect to the following quadratic form in $\\Lambda \\otimes \\mathbb {R}$ : $\\operatorname{\\overline{\\Delta }}=(l\\cdot \\operatorname{ch}_1)^2-2(l^2\\cdot \\operatorname{ch}_0)\\operatorname{ch}_2=v_1^2-2v_0v_2$ This fact, combined with the so-called Bertram's nested wall theorem, gives an effective 
control of wall-crossing in the $(\\alpha ,\\beta )$ -plane (see e.g.", "[13]); essentially, the walls where $\\sigma _{\\alpha ,\\beta }$ -semistability varies for objects of a fixed class $v\\in \\Lambda $ are nested semicircles.", "For $\\alpha =0$ and $\\beta \\in \\mathbb {Q}$ , the pair $\\sigma _{0,\\beta }=(\\operatorname{Coh}^\\beta (S),Z_{0,\\beta })$ defines a weak stability condition; $Z_{0,\\beta }$ satisfies the properties (REF ) and (REF ) listed above, with the difference that the equality $Z_{0,\\beta }(E)=0$ holds for certain nonzero objects $E\\in \\operatorname{Coh}^\\beta (S)$ .", "As proved in [13], the HN filtrations of any object $F\\in \\mathop {\\mathrm {D}^{\\mathrm {b}}}\\nolimits (S)$ with respect to these weak stability conditions describe the cohomological rank functions $h^i_{F,l}$ .", "In the simplest situation of an object lying in the heart, this description reads as follows: Proposition ([13]) Let $\\beta \\in \\mathbb {Q}$ , and let $0=F_0\\hookrightarrow F_1\\hookrightarrow ...\\hookrightarrow F_r=F$ be the HN filtration with respect to $\\sigma _{0,\\beta }$ of an object $F\\in \\operatorname{Coh}^\\beta (S)$ .", "Then: $h^i_{F,l}(-\\beta )=0$ for every $i\\ne 0,1$ .", "$h^0_{F,L}(-\\beta )=\\displaystyle \\sum _{\\nu _{0,\\beta }({F_j/F_{j-1}})\\ge 0}\\chi _{F_j/F_{j-1},l}(-\\beta )$ , and $h^1_{F,L}(-\\beta )=\\displaystyle \\sum _{\\nu _{0,\\beta }({F_j/F_{j-1}})<0}-\\chi _{F_j/F_{j-1},l}(-\\beta )$ ." ], [ "The theta group of an ample line bundle", "Let $(A,l)$ be a polarized abelian variety, and let $L$ be an ample line bundle representing $l$ .", "We give a quick review of the representation of the theta group $\\mathcal {G}(L)$ on $H^0(A,L)$ , explicitly described by Mumford in [15].", "Assume that $\\operatorname{char}(\\mathbb {K})$ does not divide $h^0(L)$ .", "This guarantees that the polarization isogeny $\\varphi _l:A\\rightarrow \\operatorname{Pic}^0(A)$ is separable.", "We will write $K(L):=\\ker (\\varphi _l)$ ; for instance, if $L$ is very ample embedding $A$ in $\\mathbb {P}(H^0(A,L)^\\vee )$ , then the points $p\\in K(L)$ are those for which the translation $t_p$ on $A$ extends to a projectivity of $\\mathbb {P}(H^0(A,L)^\\vee )$ .", "This projective representation comes from the aforementioned representation of the theta group $\\mathcal {G}(L):=\\lbrace (x,\\varphi )\\mid x\\in K(L),\\;\\;\\varphi :L\\overset{\\cong }{\\longrightarrow }t_x^*L\\rbrace ,\\;\\;\\;(y,\\psi )\\cdot (x,\\varphi )=(x+y,t_x^*\\psi \\circ \\varphi )$ on $H^0(A,L)$ .", "Note that $\\mathcal {G}(L)$ fits into a short exact sequence $1\\rightarrow \\mathbb {K}^*\\rightarrow \\mathcal {G}(L)\\rightarrow K(L)\\rightarrow 0,$ but it is far from being abelian.", "Indeed, the skew-symmetric pairing $e^L:K(L)\\times K(L)\\rightarrow \\mathbb {K}^*$ measuring the noncommutativity of $\\mathcal {G}(L)$ is non-degenerate (see [15]).", "The representation of $\\mathcal {G}(L)$ on $H^0(A,L)$ is defined as follows: every $(x,\\varphi )\\in \\mathcal {G}(L)$ induces $U_{(x,\\varphi )}:H^0(A,L)\\rightarrow H^0(A,L),\\;\\;\\;s\\mapsto t_{-x}^*(\\varphi (s))$ Theorem 2.2 ([15]) With the notations above, the following statements hold: $K(L)=A(L)\\oplus B(L)$ , where $A(L),B(L)\\subset K(L)$ are maximal totally isotropic subgroups with respect to $e^L$ .", "Moreover, if $L$ is of type $\\delta =(d_1,...,d_g)$ , then $A(L)\\cong {\\mathbb {Z}}{d_1}\\oplus ...\\oplus {\\mathbb {Z}}{d_g}$ and $B(L)\\cong \\widehat{A(L)}=\\operatorname{Hom}_\\mathbb {Z}(A(L),\\mathbb {K}^*)$ via 
the pairing $e^L$ .", "As a group, $\\mathcal {G}(L)$ is isomorphic to $\\mathcal {G}(\\delta ):=\\mathbb {K}^*\\times A(L)\\times \\widehat{A(L)}$ with the operation $(\\alpha ,t,l)\\cdot (\\alpha ^{\\prime },t^{\\prime },l^{\\prime })=(\\alpha \\alpha ^{\\prime }\\cdot l^{\\prime }(t),t+t^{\\prime },l\\cdot l^{\\prime })$ The representation of $\\mathcal {G}(L)$ on $H^0(A,L)$ is isomorphic to the representation of $\\mathcal {G}(\\delta )$ on $V(\\delta )=\\lbrace \\mathbb {K}\\text{-valued functions on $A(L)={\\mathbb {Z}}{d_1}\\oplus ...\\oplus {\\mathbb {Z}}{d_g}$}\\rbrace $ given, for $(\\alpha ,t,l)\\in \\mathcal {G}(\\delta )$ and $f\\in V(\\delta )$ , as follows: $\\left((\\alpha ,t,l)\\cdot f\\right)(x)=\\alpha \\cdot l(x)\\cdot f(t+x)$ Assume that $\\operatorname{char}(\\mathbb {K})\\ne 2$ and $L$ is totally symmetric: namely, there exists an isomorphism $L\\cong i^*L$ , acting as $+1$ simultaneously on all the fibers $L(p)$ of 2-torsion points $p\\in A_2$ .", "Then the inversion map $i:A\\rightarrow A$ extends to a projectivity of $\\mathbb {P}(H^0(A,L)^\\vee )$ ; under the isomorphism $H^0(A,L)\\cong V(\\delta )$ of (REF ), this projectivity is obtained from $\\tilde{i}:V(\\delta )\\rightarrow V(\\delta ),\\;\\;\\left(\\tilde{i}\\cdot f\\right)(x)=f(-x)$ The main advantage of this description is the existence of a canonical basis for $V(\\delta )$ , which allows an explicit treatment of the endomorphisms $U_{(x,\\varphi )}$ in coordinates.", "We will use this approach in the proof of symcurve." ], [ "Upper bounds for $\\epsilon _1(l)$", "Throughout this section, $(S,l)$ will be a polarized abelian surface satisfying the hypothesis of main, namely: $l$ is of type $(1,d)$ , and for every divisor class $D$ we have $l^2|D\\cdot l$ .", "Since $\\mathcal {I}_0$ is a slope-semistable sheaf, it follows from the very definition that $\\mathcal {I}_0\\in \\operatorname{Coh}^\\beta (S)$ for every $\\beta <0$ .", "Hence we may apply CRFstab to describe the cohomological rank functions of $\\mathcal {I}_0$ for $x\\ge 0$ .", "Moreover, since $\\mathcal {I}_0$ is a Gieseker semistable sheaf, $\\mathcal {I}_0$ is $\\sigma _{\\alpha ,\\beta }$ -semistable for every $\\beta <0$ and $\\alpha \\gg 0$ ([2]).", "Thus our problem is reduced to understand how the HN filtration of $\\mathcal {I}_0$ with respect to $\\sigma _{\\alpha ,\\beta }$ varies, as $\\alpha $ decreases.", "To this end, observe that $\\operatorname{\\overline{\\Delta }}(\\mathcal {I}_0)=2l^2(=4d$ , by Riemann-Roch) takes the minimum possible positive value; indeed, by our assumptions on $(S,l)$ we have $4d|\\operatorname{\\overline{\\Delta }}(v(E))$ for every $E\\in \\mathop {\\mathrm {D}^{\\mathrm {b}}}\\nolimits (S)$ .", "In terms of wall-crossing this is a strong constraint, which guarantees one of the following conditions (see [13]): Either $\\mathcal {I}_0$ is $\\sigma _{\\alpha ,\\beta }$ -semistable for every $\\beta <0$ and $\\alpha >0$ , in which case $h^0_{\\mathcal {I}_0,l}$ reads $h^0_{\\mathcal {I}_0,l}(x)=\\left\\lbrace \\begin{array}{c l}0 & x \\le \\frac{\\sqrt{d}}{d}\\\\\\chi _{\\mathcal {I}_0,l}(x)=dx^2-1 & x \\ge \\frac{\\sqrt{d}}{d}\\\\\\end{array}\\right.$ In particular, the functions $h^i_{\\mathcal {I}_0,l}$ ($i=0,1$ ) are not of class $\\mathcal {C}^1$ at $\\frac{\\sqrt{d}}{d}$ .", "Or $\\mathcal {I}_0$ destabilizes along a semicircular wall $W$ defined by a short exact sequence $0\\rightarrow E\\rightarrow \\mathcal {I}_0\\rightarrow Q\\rightarrow 0$ in $\\operatorname{Coh}^{-\\frac{\\sqrt{d}}{d}}(S)$ , 
with $\\operatorname{\\overline{\\Delta }}(E)=0=\\operatorname{\\overline{\\Delta }}(Q)$ .", "If $p_Q<p_E$ are the intersection points of this semicircle with the line $\\alpha =0$ , then $h^0_{\\mathcal {I}_0,l}(x)=\\left\\lbrace \\begin{array}{c l}0 & x\\le -p_E\\\\\\chi _{E,l}(x) & -p_E\\le x\\le -p_Q\\\\\\chi _{\\mathcal {I}_0,l}(x)=dx^2-1 & x\\ge -p_Q\\\\\\end{array}\\right.$ In particular, the cohomological rank functions $h^i_{\\mathcal {I}_0,l}$ ($i=0,1$ ) are $\\mathcal {C}^1$ at $-p_E$ and $-p_Q$ .", "Lemma Let $0\\rightarrow E\\rightarrow \\mathcal {I}_0\\rightarrow Q\\rightarrow 0$ be a destabilizing short exact sequence as in (REF ).", "Then $v(E)=(d(\\widetilde{x}+1),-2d\\widetilde{y},\\frac{\\widetilde{x}-1}{2})$ and $v(Q)=((1-\\widetilde{x})d,2d\\widetilde{y},-\\frac{\\widetilde{x}+1}{2})$ , where $(\\widetilde{x},\\widetilde{y})$ is a positive nontrivial solution to Pell's equation $x^2-4d\\cdot y^2=1$ .", "By the assumption $l^2|D\\cdot l$ for every divisor class $D$ , we may write $v(E)=(2dr,2dc,\\chi )$ and $v(Q)=(2d(1-r),-2dc,-1-\\chi )$ for certain integers $r,c$ and $\\chi $ .", "The condition $\\operatorname{\\overline{\\Delta }}(E)=\\operatorname{\\overline{\\Delta }}(Q)$ is easily checked to read as $r=\\chi +1$ .", "Imposing now $\\operatorname{\\overline{\\Delta }}(E)=0$ gives $\\chi (\\chi +1)-dc^2=0$ , which after multiplying by 4 and adding 1 at both sides, becomes $(2\\chi +1)^2-4d\\cdot c^2=1$ .", "Therefore, $(2\\chi +1,c)$ is a solution to the equation $x^2-4d\\cdot y^2=1$ .", "Note that this solution must be non-trivial: otherwise, either $E$ or $Q$ would have class $v=(0,0,-1)$ , which is impossible.", "Finally, we have to determine the signs of the solution $(2\\chi +1,c)$ to Pell's equation.", "Since $E$ is a subobject of the torsion-free sheaf $\\mathcal {I}_0$ in the category $\\operatorname{Coh}^{-\\frac{\\sqrt{d}}{d}}(S)$ , it follows that $E$ is a sheaf with $r=\\operatorname{ch}_0(E)>0$ (hence $2\\chi +1>0$ ).", "Moreover, since $\\operatorname{\\overline{\\Delta }}(E)=0$ , the right intersection point $p_E$ of $W$ with the $\\beta $ -axis equals $\\mu _l(E)=\\frac{c}{r}$ (see e.g.", "[13]).", "$W$ being a wall for $\\mathcal {I}_0$ , it lies entirely in the region with $\\beta <0$ ; this gives $c<0$ , which finishes the proof.", "Combining this characterization of the walls with the previous description of the function $h^0_{\\mathcal {I}_0,l}$ , one concludes the proof of main: If $d$ is a perfect square (equivalently, $4d$ is a perfect square), then Pell's equation involved in walls admits only trivial solutions, so $\\mathcal {I}_0$ is $\\sigma _{\\alpha ,\\beta }$ -semistable along the whole region $\\beta <0$ .", "In this case, $h^0_{\\mathcal {I}_0,l}$ admits the expression given in (REF ).", "Now assume that $d$ is not a perfect square.", "If $\\mathcal {I}_0$ destabilizes (equivalently, $h^0_{\\mathcal {I}_0,l}$ is not the function given in (REF )), then by walls the destabilizing wall corresponds to a positive nontrivial solution $(\\widetilde{x},\\widetilde{y})$ of $x^2-4d\\cdot y^2=1$ , for which the classes $v(E)$ and $v(Q)$ are known.", "Combining these classes with (REF ), we obtain the explicit expression of $h^0_{\\mathcal {I}_0,l}$ .", "Finally, observe that in the same way as the quotients $\\frac{\\widetilde{y}}{\\widetilde{x}}$ converge to $\\frac{\\sqrt{d}}{2d}$ , the walls accumulate towards the point $-\\frac{\\sqrt{d}}{d}$ in the $\\beta $ -axis: Figure: Possible walls for ℐ 0 \\mathcal {I}_0 parametrized by solutions to 
Pell's equation.", "Hence the largest possible wall is associated to the minimal solution $(x_0,y_0)$ , and the inequality $\\epsilon _1(l)\\le \\frac{2y_0}{x_0-1}$ follows.", "Remark Upper bounds for $\\epsilon _1(l)$ have been given by Ito for general abelian surfaces over $\\mathbb {C}$ , using completely different techniques (see [6]).", "When $d$ is a perfect square, he already obtained the equality $\\epsilon _1(l)=\\frac{\\sqrt{d}}{d}$ (and thus the expression for $h^0_{\\mathcal {I}_0,l}$ ).", "On the other hand, for $d$ not a perfect square our upper bound refines the one given by Ito.", "Indeed, both bounds coincide for several values of $d$ , but in general the inequality $\\epsilon _1(l)\\le \\frac{2y_0}{x_0-1}$ is stronger (e.g.", "$d=7,11,13,19,21,22,23,...$ ).", "One of the advantages of our approach is that it also controls the differentiability of the functions, which is meaningful in terms of M-regularity.", "Indeed, as an immediate consequence of main and regularity we directly obtain: Corollary Let $(S,l)$ be a polarized abelian surface satisfying the hypothesis of main.", "If $d$ is a perfect square, then $\\mathcal {I}_0\\langle \\frac{\\sqrt{d}}{d}l\\rangle $ is a GV-sheaf which is not M-regular.", "If $d$ is not a perfect square, then $\\mathcal {I}_0\\langle \\frac{2y_0}{x_0-1}l\\rangle $ is M-regular.", "In particular, for $m\\in \\mathbb {Z}_{>0}$ $\\mathcal {I}_0\\langle \\frac{1}{m}l\\rangle $ is M-regular if and only if $m<\\sqrt{d}$ (i.e.", "$m^2< d$ ).", "We point out that correg gives an affirmative answer, in the case of abelian surfaces, to a question posed by Ito ([7]).", "By means of it, we prove syzygies: Under the assumptions of main, $\\mathcal {I}_0\\langle \\frac{1}{2}l\\rangle $ is $IT(0)$ for every $d\\ge 7$ , as an immediate application of regularity.", "(REF ) and the upper bounds for $\\epsilon _1(l)$ .", "Thus the first assertion follows from ito.", "(REF ).", "If $d>(p+2)^2$ for some $p\\ge 1$ , then $\\mathcal {I}_0\\langle \\frac{1}{p+2}l\\rangle $ is M-regular by correg.", "Hence ito.", "(REF ) guarantees the property $(N_p)$ for representatives of $l$ ."
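For readers who want to reproduce the numbers entering these bounds, the following sketch (illustrative only, and using a naive brute-force Pell search) computes the minimal positive solution (x_0, y_0) of x^2 - 4d y^2 = 1 for a chosen non-square d, the resulting upper bound epsilon_1(l) <= 2y_0/(x_0 - 1), the value of the piecewise candidate expression for h^0 attached to that solution at the bound, and the largest p >= 1 with d > (p+2)^2, i.e. the range in which the corollary above guarantees property (N_p). The choice d = 17 is an arbitrary non-square example.

```python
from fractions import Fraction
import math

def minimal_pell_solution(d):
    """Smallest (x, y) with x, y > 0 and x**2 - 4*d*y**2 == 1 (d not a perfect square)."""
    y = 1
    while True:
        x2 = 1 + 4 * d * y * y
        x = math.isqrt(x2)
        if x * x == x2:
            return x, y
        y += 1

def h0_candidate(d, xs, ys, x):
    """Piecewise candidate for h^0_{I_0,l}(x) attached to the Pell solution (xs, ys)."""
    x = Fraction(x)
    lo, hi = Fraction(2 * ys, xs + 1), Fraction(2 * ys, xs - 1)
    if x <= lo:
        return Fraction(0)
    if x <= hi:
        return Fraction(d * (xs + 1), 2) * x ** 2 - 2 * d * ys * x + Fraction(xs - 1, 2)
    return d * x ** 2 - 1

d = 17                                       # illustrative non-square value of d
x0, y0 = minimal_pell_solution(d)            # (33, 4) for d = 17
eps_upper = Fraction(2 * y0, x0 - 1)         # upper bound for the basepoint-freeness threshold
print("d =", d, " (x0, y0) =", (x0, y0), " epsilon_1 <=", eps_upper, "=", float(eps_upper))
print("h^0 candidate at the bound:", h0_candidate(d, x0, y0, eps_upper))
p_max = math.isqrt(d) - 2                    # largest p with (p + 2)^2 < d, for d non-square
print("(N_p) guaranteed for 1 <= p <=", p_max)
```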
], [ "Lower bounds for $\\epsilon _1(l)$", "Let $d$ be a positive integer which is not a perfect square, and let $(x_0,y_0)$ be the minimal positive solution to $x^2-4d\\cdot y^2=1$ .", "In the sequel, we will assume that $\\operatorname{char}(\\mathbb {K})$ divides neither $x_0^2$ nor $x_0^2-1$ (in particular, $\\operatorname{char}(\\mathbb {K})\\ne 2$ ).", "This section is devoted to prove main.", "(REF ), which in particular gives lower bounds for $\\epsilon _1(l)$ .", "Our approach is based on the following result (valid without the hypothesis of main): Proposition If $(S,l)$ is a $(1,d)$ -polarized abelian surface and $L$ is a symmetric representative of $l$ , then $h^0(S,\\mu _{x_0}^*\\mathcal {I}_0\\otimes L^{2x_0y_0})\\ge x_0^2$ .", "In other words, the linear system of curves $|L^{2x_0y_0}|$ has at least $x_0^2$ independent elements that contain all the $x_0$ -torsion points of $S$ .", "Since the subgroup $T\\cong \\left({\\mathbb {Z}}{x_0}\\right)^4$ of $x_0$ -torsion points is contained in $K(L^{2x_0y_0})\\cong \\left({\\mathbb {Z}}{2x_0y_0}\\oplus {\\mathbb {Z}}{2dx_0y_0}\\right)\\times \\widehat{\\left({\\mathbb {Z}}{2x_0y_0}\\oplus {\\mathbb {Z}}{2dx_0y_0}\\right)},$ we will use the representation of the theta group $\\mathcal {G}(L^{2x_0y_0})$ on $H^0(S,L^{2x_0y_0})$ to understand how translation by points of $T$ acts on the linear system $|L^{2x_0y_0}|$ .", "We consider the isomorphism of mumford.", "(REF ), which in particular identifies $\\mathcal {G}(L^{2x_0y_0})$ with $\\mathbb {K}^*\\times K(L^{2x_0y_0})$ (with a noncommutative group operation), and $H^0(S,L^{2x_0y_0})$ with $V(2x_0y_0,2dx_0y_0)=\\lbrace \\mathbb {K}\\text{-valued functions on } {\\mathbb {Z}}{2x_0y_0}\\oplus {\\mathbb {Z}}{2dx_0y_0}\\rbrace .$ Denote by $\\lbrace \\delta _{j,k}\\mid (j,k)\\in {\\mathbb {Z}}{2x_0y_0}\\oplus {\\mathbb {Z}}{2dx_0y_0}\\rbrace $ the canonical basis of $V(2x_0y_0,2dx_0y_0)$ , that is: $\\delta _{j,k}(l,m)=1$ if $(j,k)=(l,m)$ , and $\\delta _{j,k}(l,m)=0$ otherwise.", "Moreover, let $\\lbrace a_1,a_2,a_3,a_4\\rbrace $ be the following basis of $T$ inside $K(L^{2x_0y_0})$ : [•] $a_1=(2y_0,0)$ , $a_2=(0,2dy_0)$ in ${\\mathbb {Z}}{2x_0y_0}\\oplus {\\mathbb {Z}}{2dx_0y_0}$ .", "$a_3,a_4\\in \\operatorname{Hom}_\\mathbb {Z}({\\mathbb {Z}}{2x_0y_0}\\oplus {\\mathbb {Z}}{2dx_0y_0},\\mathbb {K}^*)$ are the homomorphisms given by $a_3(1,0)=\\xi ,\\;a_3(0,1)=1,\\;a_4(1,0)=1,\\;a_4(0,1)=\\xi ,$ where $\\xi $ is a primitive $x_0$ -th root of 1.", "Consider the lifts $(1,a_i)\\in \\mathcal {G}(L^{2x_0y_0})$ of $a_i$ (for $i=1,2,3,4$ ) to the theta group.", "According to the representation described in mumford.", "(REF ), they induce the endomorphisms $\\widetilde{a}_1:\\delta _{j,k}\\mapsto \\delta _{j-2y_0,k}\\;,\\;\\;\\;\\;\\;\\widetilde{a}_2:\\delta _{j,k}\\mapsto \\delta _{j,k-2dy_0}\\;,\\;\\;\\;\\;\\;\\widetilde{a}_3:\\delta _{j,k}\\mapsto \\xi ^j\\delta _{j,k}\\;,\\;\\;\\;\\;\\;\\widetilde{a}_4:\\delta _{j,k}\\mapsto \\xi ^k\\delta _{j,k}$ on $H^0(S,L^{2x_0y_0})$ .", "Recall that the projectivization of $\\widetilde{a}_i$ on the linear system $|L^{2x_0y_0}|$ corresponds to (the dual of) the projectivity $t_{a_i}:\\mathbb {P}(H^0(S,L)^\\vee )\\rightarrow \\mathbb {P}(H^0(S,L)^\\vee )$ extending $t_{a_i}:S\\rightarrow S$ .", "Observe that $\\widetilde{a}_3,\\widetilde{a}_4$ are diagonalizable endomorphisms that commute (as corresponds to $a_3,a_4$ generating a totally isotropic subgroup of $K(L^{2x_0y_0})$ ).", "This implies that every eigenspace of $\\widetilde{a}_3$ is an invariant 
subspace for $\\widetilde{a}_4$ , and conversely.", "Therefore, we can find a decomposition $H^0(S,L^{2x_0y_0})=\\bigoplus _{\\begin{array}{c}l,m\\in \\lbrace 0,...,x_0-1\\rbrace \\end{array}} E_{(l,m)},$ where $E_{(l,m)}$ is a subspace of eigenvectors for both $\\widetilde{a}_3$ and $\\widetilde{a}_4$ (of eigenvalue $\\xi ^l$ for $\\widetilde{a}_3$ , and eigenvalue $\\xi ^m$ for $\\widetilde{a}_4$ ).", "Explicitly, we have $E_{(l,m)}=\\langle \\delta _{j,k}\\mid j\\equiv l\\text{ and }k\\equiv m\\text{ (mod $x_0)$}\\rangle ,$ so every subspace $E_{(l,m)}$ has dimension $2y_0\\cdot 2dy_0=4dy_0^2=x_0^2-1$ .", "The projectivization of $E_{(l,m)}$ represents a $(x_0^2-2)$ -dimensional linear system $\\mathcal {L}_{l,m}\\subset |L^{2x_0y_0}|$ , formed by curves which remain invariant under translation by points of the subgroup $\\langle a_3,a_4\\rangle \\subset T$ .", "In particular, any curve of $\\mathcal {L}_{l,m}$ containing $\\langle a_1,a_2\\rangle \\subset T$ automatically contains all of $T$ .", "Moreover, since $\\gcd (x_0,2dy_0)=1$ , it follows from the above description of $\\widetilde{a}_1,\\widetilde{a}_2$ that the subgroup $\\langle (1,a_1),(1,a_2)\\rangle \\cong \\left({\\mathbb {Z}}{x_0}\\right)^2\\subset \\mathcal {G}(L^{2x_0y_0})$ acts transitively on the set $\\lbrace E_{(l,m)}\\rbrace $ .", "Thus for our purposes it suffices to find a curve $C\\in \\mathcal {L}_{0,0}$ containing the $x_0^2$ points of $\\langle a_1,a_2\\rangle \\subset T$ .", "Indeed, the set of $x_0^2$ curves will be formed by one curve in each $\\mathcal {L}_{l,m}$ , obtained from $C$ by translation with the corresponding point of $\\langle a_1,a_2\\rangle $ .", "Since $L^{2x_0y_0}$ is totally symmetric, we may consider the involution of $H^0(S,L^{2x_0y_0})$ $\\widetilde{i}:\\delta _{j,k}\\mapsto \\delta _{-j,-k},$ whose projectivization extends the inversion $i:S\\rightarrow S$ to a projectivity of $\\mathbb {P}(H^0(S,L^{2x_0y_0})^\\vee )$ .", "The subspace $E_{(0,0)}$ is clearly invariant by this endomorphism, and the restriction $\\widetilde{i}_{|E_{(0,0)}}$ satisfies: [•] The subspace $E_{(0,0)}^1\\subset E_{(0,0)}$ of eigenvectors of eigenvalue 1 has dimension $2dy_0^2+2=\\frac{x_0^2-1}{2}+2$ .", "Explicitly, a basis of $E_{(0,0)}^1$ is given by $\\delta _{sx_0,tx_0}+\\delta _{(2y_0-s)x_0,(2dy_0-t)x_0}$ for $s\\in \\lbrace 0,...,y_0\\rbrace $ , and $t\\in \\lbrace 0,...,2dy_0-1\\rbrace $ (if $s\\ne 0,y_0$ ) or $t\\in \\lbrace 0,...,dy_0\\rbrace $ (if $s=0,y_0$ ).", "The eigenspace $E_{(0,0)}^{-1}\\subset E_{(0,0)}$ of eigenvalue $-1$ has dimension $2dy_0^2-2$ , with basis $\\delta _{sx_0,tx_0}-\\delta _{(2y_0-s)x_0,(2dy_0-t)x_0}$ for $s\\in \\lbrace 0,...,y_0\\rbrace $ , and $t\\in \\lbrace 0,...,2dy_0-1\\rbrace $ (if $s\\ne 0,y_0$ ) or $t\\in \\lbrace 1,...,dy_0-1\\rbrace $ (if $s=0,y_0$ ).", "The projectivization of $E_{(0,0)}^1$ defines a $(\\frac{x_0^2-1}{2}+1)$ -dimensional linear system $\\mathcal {L}_{0,0}^1\\subset \\mathcal {L}_{0,0}$ , formed by symmetric curves that remain invariant under translation by points of $\\langle a_3,a_4\\rangle \\subset T$ .", "Since $x_0$ is odd, the only 2-torsion point of $\\langle a_1,a_2\\rangle \\cong \\left({\\mathbb {Z}}{x_0}\\right)^2$ is the origin of $S$ ; accordingly, points of $\\langle a_1,a_2\\rangle $ impose at most $\\frac{x_0^2-1}{2}+1$ independent conditions on $\\mathcal {L}_{0,0}^1$ .", "It is thus possible to find a curve of $\\mathcal {L}_{0,0}^1$ containing all the points of $\\langle a_1,a_2\\rangle \\subset T$ , which finishes the 
proof.", "symcurve shows (via Serre duality and cohomology and base change) that the sheaf $R^{2}\\Phi _{\\mathcal {P}^{\\vee }}((\\mu _{x_0}^*\\mathcal {I}_0\\otimes L^{2x_0y_0})^{\\vee })$ is nonzero.", "In virtue of the explicit expression for $h^0_{\\mathcal {I}_0,l}$ given in (REF ), this implies that $h^0_{\\mathcal {I}_0,l}(x)$ is positive for $x>\\frac{2y_0}{x_0}$ .", "On the other hand, since $x_1=x_0^2+4dy_0^2$ and $y_1=2x_0y_0$ , the equality $\\frac{2y_0}{x_0}=\\frac{2y_1}{x_1+1}$ holds.", "Therefore, by main we conclude that only two expressions for $h^0_{\\mathcal {I}_0,l}$ are possible (those corresponding to the solutions $(x_0,y_0)$ and $(x_1,y_1)$ ).", "In particular, $\\epsilon _1(l)\\in \\lbrace \\frac{2y_0}{x_0-1},\\frac{2y_1}{x_1-1}\\rbrace $ .", "Remark It follows, at least when $\\operatorname{char}(\\mathbb {K})=0$ , that $\\epsilon _1(l)$ is rational under the assumptions of main.", "It would be interesting to know whether this holds true for every polarized abelian surface (or more generally, for every polarized abelian variety).", "There are several examples of non-perfect squares $d$ where $\\epsilon _1(l)$ is known for a general $(1,d)$ -polarized (complex) abelian surface $(S,l)$ (see [6]); for all of them, there is an equality $\\epsilon _1(l)=\\frac{2y_0}{x_0-1}$ .", "Thus it seems reasonable to expect this for every non-perfect square $d$ .", "Assume the equality $\\epsilon _1(l)=\\frac{2y_1}{x_1-1}$ holds.", "According to the expression for $h^0_{\\mathcal {I}_0,l}$ given by main, for every $x>\\frac{2y_1}{x_1+1}$ small enough we have $h^0_{\\mathcal {I}_0,l}(x)=\\frac{d(x_1+1)}{2}x^2-2dy_1\\cdot x+\\frac{x_1-1}{2}=dx_0^2\\left(x-\\frac{2y_0}{x_0}\\right)^2,$ and then an elementary manipulation of (REF ) shows that $R^{2}\\Phi _{\\mathcal {P}^{\\vee }}((\\mu _{x_0}^*\\mathcal {I}_0\\otimes L^{2x_0y_0})^{\\vee })$ is a 0-dimensional sheaf of length $x_0^2$ .", "But note that symcurve precisely shows that, if $R^{2}\\Phi _{\\mathcal {P}^{\\vee }}((\\mu _{x_0}^*\\mathcal {I}_0\\otimes L^{2x_0y_0})^{\\vee })$ is 0-dimensional, then it has length $\\ge x_0^2$ .", "Hence a slightly stronger version of symcurve (with $x_0^2+1$ independent curves on $|L^{2x_0y_0}|$ , or with a curve in a translated linear system $|L^{2x_0y_0}\\otimes \\alpha |$ containing also $T$ ) would yield a contradiction, leading to a proof of $\\epsilon _1(l)=\\frac{2y_0}{x_0-1}$ ." ] ]
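As a quick arithmetic check on the identities used in the remark above (illustrative, not part of the paper), the snippet below verifies for a few non-square values of d that (x_1, y_1) = (x_0^2 + 4d y_0^2, 2 x_0 y_0) again solves x^2 - 4d y^2 = 1 and that 2y_0/x_0 = 2y_1/(x_1 + 1), the equality relating the two candidate expressions for h^0.

```python
from fractions import Fraction
import math

def minimal_pell_solution(d):
    """Brute-force smallest positive solution of x**2 - 4*d*y**2 == 1 (d not a perfect square)."""
    y = 1
    while True:
        x2 = 1 + 4 * d * y * y
        x = math.isqrt(x2)
        if x * x == x2:
            return x, y
        y += 1

for d in (2, 3, 5, 6, 7, 10, 11, 13):         # a few non-square values of d
    x0, y0 = minimal_pell_solution(d)
    x1, y1 = x0 * x0 + 4 * d * y0 * y0, 2 * x0 * y0
    assert x1 * x1 - 4 * d * y1 * y1 == 1      # (x1, y1) is again a solution
    assert Fraction(2 * y0, x0) == Fraction(2 * y1, x1 + 1)
print("identities verified")
```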
2107.01896
[ [ "Hot carrier dynamics and electron-optical phonon coupling in\n photoexcited graphene via time-resolved ultrabroadband terahertz spectroscopy" ], [ "Abstract Electron-electron (e-e) interaction is known as a source of logarithmic renormalizations for Dirac fermions in quantum field theory.", "The renormalization of electron--optical phonon coupling (EPC) by e-e interaction, which plays a pivotal role in hot carrier and phonon dynamics, has been discussed after the discovery of graphene.", "We investigate the hot carrier dynamics and the EPC strength using time-resolved ultrabroadband terahertz (THz) spectroscopy combined with numerical simulation based on the Boltzmann transport equation and comprehensive temperature model.", "The large negative photoconductivity and the non-Drude behavior of THz conductivity spectra appear under high pump fluence and can be attributed to the temporal variation of the hot carrier distribution and scattering rate.", "We successfully estimate the dimensionless EPC matrix element of the $A_1^{\\prime}$ optical phonon mode near the $\\mathbf{K}$ point as $\\lambda_{\\mathbf{K}} \\approx$0.09 from the fitting of THz conductivity spectra and temporal evolution of transient THz reflectivity, which is slightly larger than the prediction of the renormalization group." ], [ "Calculation of equilibrium optical conductivity of graphene from THz time domain spectroscopic ellipsometry experiment", "In this section, we explain the calculation procedure of the THz conductivity $\\sigma (\\omega _{\\mathrm {THz}})$ of graphene on the substrate from the ratio of the complex reflection coefficient $(r_{\\mathrm {p}}(\\omega _{\\mathrm {THz}})/r_{\\mathrm {s}}(\\omega _{\\mathrm {THz}}))$ for the p- and s- polarized THz waves measured by THz time domain spectroscopic ellipsometry (THz-TDSE)[1].", "According to the standard thin-film approximation, the reflection coefficients of graphene on a substrate for p- and s-polarized THz wave are given by[2] $\\begin{aligned}r_{\\mathrm {p}}(\\omega _{\\mathrm {THz}}) =\\frac{\\sigma (\\omega _{\\mathrm {THz}}) Z_0+(\\frac{\\epsilon _{\\mathrm {2}}(\\omega _{\\mathrm {THz}})}{(\\epsilon _{\\mathrm {2}}(\\omega _{\\mathrm {THz}})-\\epsilon _{\\mathrm {1}}(\\omega _{\\mathrm {THz}}) \\sin ^2 \\theta _1)^{1 / 2}}-\\frac{\\epsilon ^{1/2}_{\\mathrm {1}}(\\omega _{\\mathrm {THz}}))}{\\cos \\theta _1})}{\\sigma (\\omega _{\\mathrm {THz}}) Z_0+(\\frac{\\epsilon _{\\mathrm {2}}(\\omega _{\\mathrm {THz}})}{(\\epsilon _{\\mathrm {2}}(\\omega _{\\mathrm {THz}})-\\epsilon _{\\mathrm {1}}(\\omega _{\\mathrm {THz}}) \\sin ^2 \\theta _1)^{1 / 2}}+\\frac{\\epsilon ^{1/2}_{\\mathrm {1}}(\\omega _{\\mathrm {THz}})}{\\cos \\theta _1})},\\end{aligned}$ $\\begin{aligned}r_{\\mathrm {s}}(\\omega _{\\mathrm {THz}}) =-\\frac{\\sigma (\\omega _{\\mathrm {THz}}) Z_0+(\\epsilon _{\\mathrm {2}}(\\omega _{\\mathrm {THz}})-\\epsilon _{\\mathrm {1}}(\\omega _{\\mathrm {THz}}) \\sin ^2 \\theta _1)^{1 / 2}-\\epsilon _{1}^{1 / 2}(\\omega _{\\mathrm {THz}}) \\cos \\theta _1}{\\sigma (\\omega _{\\mathrm {THz}}) Z_0+(\\epsilon _{\\mathrm {2}}(\\omega _{\\mathrm {THz}})-\\epsilon _{\\mathrm {1}}(\\omega _{\\mathrm {THz}}) \\sin ^2 \\theta _1)^{1 / 2}+\\epsilon _{1}^{1 / 2}(\\omega _{\\mathrm {THz}}) \\cos \\theta _1}.\\end{aligned}$ In the above, $Z_0=376.7\\,(\\Omega )$ is the vacuum impedance and $\\theta _1=60^{\\circ }$ is the incidence angle of the THz wave.", "Furthermore, $\\epsilon _{\\mathrm {i}}(\\omega _{\\mathrm {THz}})$ is the dielectric constant of layer 
i, as indicated in Fig.S1.", "From Eq.", "(SI.1), $\\sigma (\\omega )$ is expressed as $\\sigma (\\omega ) = -\\frac{\\lbrace (r_{\\mathrm {p}}/r_{\\mathrm {s}})(A+B^{\\prime })+A^{\\prime }+B\\rbrace Z_0+\\lbrace ((r_{\\mathrm {p}}/r_{\\mathrm {s}})(A+B^{\\prime })+A^{\\prime }+B)^2-4(1+(r_{\\mathrm {p}}/r_{\\mathrm {s}}))((r_{\\mathrm {p}}/r_{\\mathrm {s}}) A B^{\\prime }+A^{\\prime } B)\\rbrace ^{1/2}}{2(1+(r_{\\mathrm {p}}/r_{\\mathrm {s}}))Z_0},$ where $A=\\frac{\\epsilon _{\\mathrm {2}}(\\omega _{\\mathrm {THz}})}{(\\epsilon _{\\mathrm {2}}(\\omega _{\\mathrm {THz}})-\\epsilon _{\\mathrm {1}}(\\omega _{\\mathrm {THz}}) \\sin ^2 \\theta _1)^{1 / 2}}+\\frac{\\epsilon ^{1/2}_{\\mathrm {1}}(\\omega _{\\mathrm {THz}})}{\\cos \\theta _1},$ $A^{\\prime }=\\frac{\\epsilon _{\\mathrm {2}}(\\omega _{\\mathrm {THz}})}{(\\epsilon _{\\mathrm {2}}(\\omega _{\\mathrm {THz}})-\\epsilon _{\\mathrm {1}}(\\omega _{\\mathrm {THz}}) \\sin ^2 \\theta _1)^{1 / 2}}-\\frac{\\epsilon ^{1/2}_{\\mathrm {1}}(\\omega _{\\mathrm {THz}})}{\\cos \\theta _1},$ $B=(\\epsilon _{\\mathrm {2}}(\\omega _{\\mathrm {THz}})-\\epsilon _{\\mathrm {1}}(\\omega _{\\mathrm {THz}}) \\sin ^2 \\theta _1)^{1 / 2}+\\epsilon _{1}^{1 / 2}(\\omega _{\\mathrm {THz}}) \\cos \\theta _1,$ $B^{\\prime }=(\\epsilon _{\\mathrm {2}}(\\omega _{\\mathrm {THz}})-\\epsilon _{\\mathrm {1}}(\\omega _{\\mathrm {THz}}) \\sin ^2 \\theta _1)^{1 / 2}-\\epsilon _{\\mathrm {1}}^{1 / 2}(\\omega _{\\mathrm {THz}}) \\cos \\theta _1.$ By substituting $(r_{\\mathrm {p}}/r_{\\mathrm {s}})$ measured by THz-TDSE into Eq.", "(SI.2), $\\sigma (\\omega _{\\mathrm {THz}})$ can be determined.", "Figure: Schematic of systems considered in THz-TDSE and OPTP measurements, showing incident THz probe or optical pump pulse." ], [ "Calculation of hot carrier optical conductivity photoexcited graphene from reflection coefficient by OPTP experiment", "In this section, we present the calculation procedure of the hot carrier THz conductivity $\\sigma (\\omega _{\\mathrm {THz}}, \\tau _1)$ of photoexcited graphene at the pump probe delay $\\tau _1$ from the reflection-type OPTP measurement.", "The reflection-type OPTP measures the ratio of the complex reflection coefficient $X_{\\mathrm {s}}(\\omega _{\\mathrm {THz}},\\tau _1)=r_{\\mathrm {s}}^{\\prime }(\\omega _{\\mathrm {THz}},\\tau _1)/r_{\\mathrm {s}}(\\omega _{\\mathrm {THz}})$ of graphene with and without photoexcitation.", "The reflection coefficient for the s-polarization of graphene with complex conductivity $\\sigma (\\omega _{\\mathrm {THz}})$ at an incident angle of $\\theta _1$ is expressed by Eq.", "(SI.1b).", "Similarly, the THz-amplitude reflection coefficient for the s-polarization of graphene with hot carrier complex conductivity $\\sigma (\\omega _{\\mathrm {THz}}, \\tau _1)$ on the substrate at an incident angle of $\\theta _1$ for the pump probe delay $\\tau _1$ is expressed by $\\begin{aligned}r_{\\mathrm {s}}^{\\prime }(\\omega _{\\mathrm {THz}},\\tau _1)=-\\frac{\\sigma (\\omega _{\\mathrm {THz}},\\tau _1) Z_0+(\\epsilon _{\\mathrm {2}}(\\omega _{\\mathrm {THz}})-\\epsilon _{\\mathrm {1}}(\\omega _{\\mathrm {THz}}) \\sin ^2 \\theta _1)^{1 / 2}-\\epsilon _{1}^{1 / 2}(\\omega _{\\mathrm {THz}}) \\cos \\theta _1}{\\sigma (\\omega _{\\mathrm {THz}},\\tau _1) Z_0+(\\epsilon _{\\mathrm {2}}(\\omega _{\\mathrm {THz}})-\\epsilon _{\\mathrm {1}}(\\omega _{\\mathrm {THz}}) \\sin ^2 \\theta _1)^{1 / 2}+\\epsilon _{1}^{1 / 2}(\\omega _{\\mathrm {THz}}) \\cos \\theta _1}.\\end{aligned}$ Using Eqs.", "(SI.1b) and (SII.1), 
we obtain $\\begin{aligned}\\sigma (\\omega _{\\mathrm {THz}},\\tau _1) =-\\frac{B X_{\\mathrm {s}}(\\omega _{\\mathrm {THz}},\\tau _1) r_{\\mathrm {s}}(\\omega _{\\mathrm {THz}})+B^{\\prime }}{Z_0[1+ X_{\\mathrm {s}}(\\omega _{\\mathrm {THz}},\\tau _1) r_{\\mathrm {s}}(\\omega _{\\mathrm {THz}})]},\\end{aligned}$ where $B$ and $B^{\\prime }$ are provided by Eqs.", "(SI.3c) and (SI.3d), respectively, and $r_{\\mathrm {s}}(\\omega _{\\mathrm {THz}})$ is calculated using the equilibrium $\\sigma (\\omega _{\\mathrm {THz}})$ obtained by THz-TDSE.", "We can obtain the $\\sigma (\\omega _{\\mathrm {THz}},\\tau _1)$ by substituting $X_{\\mathrm {s}}(\\omega _{\\mathrm {THz}},\\tau _1)$ into Eq.", "(SII.2)." ], [ "Rate equations for Temperature model ", "In this section, we present the derivation of the hot carrier recombination and generation rates by the optical phonon emission and absorption processes, respectively, used in the temperature model.", "The Hamiltonian of electron-phonon interaction $H_{ep}$ is $ H_{ep}=\\sum _{\\mathbf {k}, \\mathbf {k}^{\\prime }, \\mathbf {q}} V_{ep}(c^{\\dagger }_{\\mathbf {k}}c_{\\mathbf {k}^{\\prime }}b_{\\mathbf {q}}+ c^{\\dagger }_{\\mathbf {k}^{\\prime }}c_{\\mathbf {k}}b_{-\\mathbf {q}}^{\\dagger })$ Here, $V_{ep}$ is the potential of the electron-phonon interaction, $c^{\\dagger }_{\\mathbf {k}}(c_{\\mathbf {k}})$ is the creation (annihilation) operator with carrier wave vector $\\mathbf {k}$ , $b^{\\dagger }_{\\mathbf {q}}(b_{\\mathbf {q}})$ is the creation (annihilation) operator with phonon wave vector $\\mathbf {q}$ .", "From Fermi's golden rule, the carrier transition rates from $\\mathbf {k}$ to $\\mathbf {k}^{\\prime }$ by the emission and absorption of the $\\Gamma _{\\mathrm {LO}}$ phonon or $\\Gamma _{\\mathrm {TO}}$ phonon with the energy of $\\hbar \\omega _{\\bf {\\Gamma }}$ are given by $ \\begin{aligned}P^{\\mathrm {EM/AB},\\bf {\\Gamma }}_{\\lambda \\mathbf {k} \\lambda ^{\\prime } \\mathbf {k}^{\\prime }}&= \\frac{2 \\pi }{\\hbar } \\left| \\left\\langle \\mathbf {k}^{\\prime },\\lambda ^{\\prime }\\left| H_{ep} \\right|\\mathbf {k},\\lambda \\right\\rangle \\right| ^2 \\delta (\\varepsilon _{\\lambda ^{\\prime } \\mathbf {k}^{\\prime }}-\\varepsilon _{\\lambda \\mathbf {k}} \\pm \\hbar \\omega _{\\bf {\\Gamma }})\\\\&=\\frac{\\pi \\left| \\mathrm {D}^{\\bf {\\Gamma }}_{\\lambda \\mathbf {k} \\lambda ^{\\prime } \\mathbf {k}^{\\prime } }\\right| ^2}{\\rho \\omega _{\\bf {\\Gamma }} A} (n_{\\bf {\\Gamma }}+\\frac{1}{2} \\pm \\frac{1}{2}) \\delta (\\varepsilon _{\\lambda ^{\\prime } \\mathbf {k}^{\\prime }}-\\varepsilon _{\\lambda \\mathbf {k}} \\pm \\hbar \\omega _{\\bf {\\Gamma }}) \\delta (\\mathbf {k}^{\\prime }-\\mathbf {k} \\pm \\mathbf {q})\\\\\\end{aligned}$ Here, $\\left| \\mathrm {D}^{\\bf {\\Gamma }}_{\\lambda \\mathbf {k} \\lambda ^{\\prime }\\mathbf {k}^{\\prime } }\\right|^2$ is the square of the EPC matrix element.", "For small $\\mathbf {q}$ and $\\mathbf {k}$ , the EPC matrix elements are $\\left| \\mathrm {D}^{\\bf {\\Gamma }}_{\\lambda \\mathbf {k} \\lambda ^{\\prime }\\mathbf {k}^{\\prime } }\\right|^2=\\left\\langle \\mathrm {D}_{\\bf {\\Gamma }}^2 \\right\\rangle _F \\left[ 1 \\pm \\cos (\\theta _{\\mathbf {k},\\mathbf {q}} + \\theta _{\\mathbf {k}^{\\prime },\\mathbf {q}})\\right]$ where $\\left\\langle \\mathrm {D}_{\\bf {\\Gamma }}^2 \\right\\rangle _F$ is the average on the Fermi surface of $\\left| \\mathrm {D}^{\\bf {\\Gamma }}_{\\lambda \\mathbf {k}, \\lambda ^{\\prime }\\mathbf {k}^{\\prime } }\\right|^2$ .", "$\\rho $ 
is the mass density, A is the area of graphene sample, $\\varepsilon _{\\lambda \\mathbf {k}}=\\lambda \\hbar v_F |\\mathbf {k}|$ is the energy of 2D MDF and $\\lambda =\\pm 1$ is the band index.", "The upper and lower sign corresponds to the optical phonon emission and absorption process, respectively.", "The corresponding hot carrier recombination and generation rate per unit area including both intra- and inter-band transitions are written as $\\begin{aligned}R_{\\bf {\\Gamma }}&=\\frac{1}{A}\\sum _{\\lambda , \\lambda ^{\\prime }}\\sum _{\\mathbf {k},\\mathbf {k^{\\prime }}} P^{\\mathrm {EM},\\bf {\\Gamma }}_{\\lambda \\mathbf {k} \\lambda ^{\\prime } \\mathbf {k}^{\\prime }} f_{\\lambda }(\\mathbf {k})(1-f_{\\lambda ^{\\prime }}(\\mathbf {k}^{\\prime }))\\\\&=\\frac{1}{A} \\sum _{\\lambda , \\lambda ^{\\prime }}\\sum _{\\mathbf {k}} \\sum _{\\mathbf {k}^{\\prime }} \\frac{\\pi \\left\\langle \\mathrm {D}_{\\bf {\\Gamma }}^2 \\right\\rangle _F \\left[ 1 \\pm \\cos (\\theta _{ \\mathbf {k},\\mathbf {q}}+\\theta _{ \\mathbf {k}^{\\prime },\\mathbf {q}})\\right] }{\\rho \\omega _{\\bf {\\Gamma }} A} (n_{\\bf {\\Gamma }}+1)f_{\\lambda }(\\mathbf {k)} (1-f_{\\lambda ^{\\prime }}(\\mathbf {k}^{\\prime })) \\delta (\\varepsilon _{\\lambda ^{\\prime } \\mathbf {k}^{\\prime }}-\\varepsilon _{\\lambda \\mathbf {k}}+\\hbar \\omega _{\\bf {\\Gamma }}) \\delta (\\mathbf {k}^{\\prime }-\\mathbf {k}+\\mathbf {q})\\\\&=\\sum _{\\lambda , \\lambda ^{\\prime }}\\sum _{\\mathbf {k}} \\frac{\\pi \\left\\langle \\mathrm {D}_{\\bf {\\Gamma }}^2 \\right\\rangle _F }{A(2\\pi )^2}\\int d^2\\mathbf {k}^{\\prime } \\frac{\\left[ 1 \\pm \\cos (\\theta _{\\mathbf {k},\\mathbf {q}}+\\theta _{\\mathbf {k}^{\\prime },\\mathbf {q}}))\\right]}{\\rho \\omega _{\\bf {\\Gamma }_{\\mathrm {LO}}} } (n_{\\bf {\\Gamma }}+1) f_{\\lambda }(\\mathbf {k)} (1-f_{\\lambda ^{\\prime }}(\\mathbf {k}^{\\prime })) \\delta (\\varepsilon _{\\lambda ^{\\prime } \\mathbf {k}^{\\prime }}-\\varepsilon _{\\lambda \\mathbf {k}}+\\hbar \\omega _{\\bf {\\Gamma }}) \\delta (\\mathbf {k}^{\\prime }-\\mathbf {k}+\\mathbf {q})\\\\&=\\sum _{\\lambda , \\lambda ^{\\prime }}\\frac{\\left\\langle \\mathrm {D}_{\\bf {\\Gamma }}^2 \\right\\rangle _F (n_{\\bf {\\Gamma }}+1)}{4 \\pi \\rho \\omega _{\\bf {\\Gamma }}}\\int f_{\\lambda }(\\varepsilon _{\\lambda \\mathbf {k}}) N(\\varepsilon _{\\lambda \\mathbf {k}}) d\\varepsilon _{\\lambda \\mathbf {k}} \\\\&\\quad \\times \\int d^2 \\mathbf {k}^{\\prime } \\left[ 1 \\pm \\cos (\\theta _{ \\mathbf {k},\\mathbf {q}}+\\theta _{\\mathbf {k}^{\\prime },\\mathbf {q}})\\right] (1-f_{\\lambda ^{\\prime }}(\\mathbf {k}^{\\prime })) \\delta (\\varepsilon _{\\lambda ^{\\prime } \\mathbf {k}^{\\prime }}-\\varepsilon _{\\lambda \\mathbf {k}}+\\hbar \\omega _{\\bf {\\Gamma }}) \\delta (\\mathbf {k}^{\\prime }-\\mathbf {k}+\\mathbf {q})\\\\\\end{aligned}$ $\\begin{aligned}G_{\\bf {\\Gamma }}&=\\frac{1}{A}\\sum _{\\lambda , \\lambda ^{\\prime }}\\sum _{\\mathbf {k},\\mathbf {k^{\\prime }}} P^{\\mathrm {AB},\\bf {\\Gamma }}_{\\lambda \\mathbf {k} \\lambda ^{\\prime } \\mathbf {k}^{\\prime }} f_{\\lambda }(\\mathbf {k})(1-f_{\\lambda ^{\\prime }}(\\mathbf {k}^{\\prime }))\\\\&=\\frac{1}{A} \\sum _{\\lambda , \\lambda ^{\\prime }}\\sum _{\\mathbf {k}} \\sum _{\\mathbf {k}^{\\prime }} \\frac{\\pi \\left\\langle \\mathrm {D}_{\\bf {\\Gamma }}^2 \\right\\rangle _F \\left[ 1 \\pm \\cos (\\theta _{\\mathbf {k},\\mathbf {q}}+\\theta _{\\mathbf {k}^{\\prime },\\mathbf {q}})\\right] }{\\rho \\omega _{\\bf {\\Gamma }} A}n_{\\bf 
{\\Gamma }} f_{\\lambda }(\\mathbf {k)} (1-f_{\\lambda ^{\\prime }}(\\mathbf {k}^{\\prime })) \\delta (\\varepsilon _{\\lambda ^{\\prime } \\mathbf {k}^{\\prime }}-\\varepsilon _{\\lambda \\mathbf {k}}-\\hbar \\omega _{\\bf {\\Gamma }}) \\delta (\\mathbf {k}^{\\prime }-\\mathbf {k}-\\mathbf {q})\\\\&=\\sum _{\\lambda , \\lambda ^{\\prime }}\\sum _{\\mathbf {k}} \\frac{\\pi \\left\\langle \\mathrm {D}_{\\bf {\\Gamma }}^2 \\right\\rangle _F }{A(2\\pi )^2}\\int d^2\\mathbf {k}^{\\prime } \\frac{\\left[ 1 \\pm \\cos (\\theta _{ \\mathbf {k},\\mathbf {q}}+\\theta _{ \\mathbf {k}^{\\prime },\\mathbf {q}}))\\right]}{\\rho \\omega _{\\bf {\\Gamma }_{\\mathrm {LO}}} } n_{\\bf {\\Gamma }}f_{\\lambda }(\\mathbf {k)} (1-f_{\\lambda ^{\\prime }}(\\mathbf {k}^{\\prime })) \\delta (\\varepsilon _{\\lambda ^{\\prime } \\mathbf {k}^{\\prime }}-\\varepsilon _{\\lambda \\mathbf {k}}-\\hbar \\omega _{\\bf {\\Gamma }}) \\delta (\\mathbf {k}^{\\prime }-\\mathbf {k}-\\mathbf {q})\\\\&=\\sum _{\\lambda , \\lambda ^{\\prime }}\\frac{\\left\\langle \\mathrm {D}_{\\bf {\\Gamma }}^2 \\right\\rangle _F n_{\\bf {\\Gamma }}}{4 \\pi \\rho \\omega _{\\bf {\\Gamma }} }\\int f_{\\lambda }(\\varepsilon _{\\lambda \\mathbf {k}}) N(\\varepsilon _{\\lambda \\mathbf {k}}) d\\varepsilon _{\\lambda \\mathbf {k}} \\\\&\\quad \\times \\int d^2 \\mathbf {k}^{\\prime } \\left[ 1 \\pm \\cos (\\theta _{ \\mathbf {k},\\mathbf {q}}+\\theta _{\\mathbf {k}^{\\prime },\\mathbf {q}})\\right] (1-f_{\\lambda ^{\\prime }}(\\mathbf {k}^{\\prime })) \\delta (\\varepsilon _{\\lambda ^{\\prime } \\mathbf {k}^{\\prime }}-\\varepsilon _{\\lambda \\mathbf {k}}-\\hbar \\omega _{\\bf {\\Gamma }}) \\delta (\\mathbf {k}^{\\prime }-\\mathbf {k}-\\mathbf {q})\\\\\\end{aligned}$ Here, $N(\\varepsilon _{\\lambda \\mathbf {k}})=2|\\varepsilon _{\\lambda \\mathbf {k}}|/\\pi (\\hbar v_F)^2$ is the density of state of 2D MDF.", "Furthermore, the electron distribution function $f_{\\lambda }(\\mathbf {k})$ can be replaced by Fermi-Dirac type distribution $f_0(\\varepsilon _{\\lambda \\mathbf {k}}, T_e)$ for hot carriers in quasi-equilibrium.", "Similarly, the hot carrier recombination and generation rate by K-phonon with the energy of $\\hbar \\omega _{\\bf {K}}$ are given by $\\begin{aligned}R_{\\bf {K}_{\\mathrm {}}}&=\\frac{1}{A}\\sum _{\\lambda , \\lambda ^{\\prime }}\\sum _{\\mathbf {k},\\mathbf {k^{\\prime }}} P^{\\mathrm {EM},\\bf {K}}_{\\lambda \\mathbf {k} \\lambda ^{\\prime } \\mathbf {k}^{\\prime }} f_{\\lambda }(\\mathbf {k})(1-f_{\\lambda ^{\\prime }}(\\mathbf {k}^{\\prime }))\\\\&=\\sum _{\\lambda , \\lambda ^{\\prime }}\\frac{\\left\\langle \\mathrm {D}_{\\bf {K}}^2 \\right\\rangle _F (n_{\\bf {K}}+1)}{4 \\pi \\rho \\omega _{\\bf {K}} }\\int f_{\\lambda }(\\varepsilon _{\\lambda \\mathbf {k}}) N(\\varepsilon _{\\lambda \\mathbf {k}}) d\\varepsilon _{\\lambda \\mathbf {k}}\\\\&\\quad \\times \\int d^2\\mathbf {k}^{\\prime } \\left[ 1 \\pm \\cos (\\theta _{\\mathbf {k},\\mathbf {k}^{\\prime }})\\right] (1-f_{\\lambda ^{\\prime }}(\\mathbf {k}^{\\prime })) \\delta (\\varepsilon _{\\lambda ^{\\prime }\\mathbf {k}^{\\prime }}-\\varepsilon _{\\lambda \\mathbf {k}}+\\hbar \\omega _{\\bf {K}}) \\delta (\\mathbf {k}^{\\prime }-\\mathbf {k}+\\mathbf {q})\\\\\\end{aligned}$ $\\begin{aligned}G_{\\bf {K}}&=\\frac{1}{A}\\sum _{\\lambda , \\lambda ^{\\prime }}\\sum _{\\mathbf {k},\\mathbf {k^{\\prime }}} P^{\\mathrm {AB},\\bf {K}}_{\\lambda \\mathbf {k} \\lambda ^{\\prime } \\mathbf {k}^{\\prime }} f_{\\lambda }(,\\mathbf {k})(1-f_{\\lambda ^{\\prime }}(\\mathbf 
{k}^{\\prime }))\\\\&=\\sum _{\\lambda , \\lambda ^{\\prime }}\\frac{\\left\\langle \\mathrm {D}_{\\bf {K}}^2 \\right\\rangle _F n_{\\bf {K}}}{4 \\pi \\rho \\omega _{\\bf {K}} }\\int f_{\\lambda }(\\varepsilon _{\\lambda \\mathbf {k}}) N(\\varepsilon _{\\lambda \\mathbf {k}}) d\\varepsilon _{\\lambda \\mathbf {k}}\\\\&\\quad \\times \\int d^2\\mathbf {k}^{\\prime } \\left[ 1 \\pm \\cos (\\theta _{\\mathbf {k},\\mathbf {k}^{\\prime }})\\right] (1-f_{\\lambda ^{\\prime }}(\\mathbf {k}^{\\prime })) \\delta (\\varepsilon _{\\lambda ^{\\prime }\\mathbf {k}^{\\prime }}-\\varepsilon _{\\lambda \\mathbf {k}}-\\hbar \\omega _{\\bf {K}}) \\delta (\\mathbf {k}^{\\prime }-\\mathbf {k}-\\mathbf {q})\\\\\\end{aligned}$ Using Eqs.", "(SIII.3)-(SIII.4), the total balance between the optical phonon emission and absorption rates is given by $R^{\\mathrm {Net}}_{\\eta }=R_{\\eta }-G_{\\eta }$ .", "In Eq.", "(9), $R^{\\mathrm {Net}}_{M,\\eta }=R_{M,\\eta }-G_{M,\\eta }$ denotes the total balance between the optical phonon emission and absorption rates per number of phonon modes, where", "$\\begin{aligned}R_{M,\\bf {\\eta }}&=\\frac{1}{A}\\sum _{\\lambda , \\lambda ^{\\prime }}\\sum _{\\mathbf {k},\\mathbf {k^{\\prime }}} P^{\\mathrm {EM},\\eta }_{\\lambda \\mathbf {k} \\lambda ^{\\prime } \\mathbf {k}^{\\prime }} f_{\\lambda }(\\mathbf {k})(1-f_{\\lambda ^{\\prime }}(\\mathbf {k}^{\\prime }))/M_{\\eta }^{-}(\\lambda \\mathbf {k})\\\\\\end{aligned}$ $\\begin{aligned}G_{M,\\bf {\\eta }}&=\\frac{1}{A}\\sum _{\\lambda , \\lambda ^{\\prime }}\\sum _{\\mathbf {k},\\mathbf {k^{\\prime }}} P^{\\mathrm {AB},\\eta }_{\\lambda \\mathbf {k} \\lambda ^{\\prime } \\mathbf {k}^{\\prime }} f_{\\lambda }(\\mathbf {k})(1-f_{\\lambda ^{\\prime }}(\\mathbf {k}^{\\prime }))/M_{\\eta }^{+}(\\lambda \\mathbf {k})\\\\\\end{aligned}$ Here, $M_{\\eta }^{-}(\\lambda \\mathbf {k})$ and $M_{\\eta }^{+}(\\lambda \\mathbf {k})$ are the numbers of $\\eta $ -phonon modes $(\\mathbf {q})$ per unit area that participate in the phonon emission and absorption processes for the carrier state $(\\lambda , \\mathbf {k})$ , respectively:", "$\\begin{aligned}M_{\\eta }^{\\pm }(\\lambda \\mathbf {k})&=\\frac{a_{\\eta }}{A} \\left( \\pi |\\mathbf {q}|_{\\mathrm {max}}^2-\\pi |\\mathbf {q}|_{\\mathrm {min}}^2 \\right) \\Big / |\\Delta \\mathbf {q}| \\\\&=\\frac{a_{\\eta }}{A} \\left| \\pi \\left(\\frac{2\\varepsilon _{\\lambda \\mathbf {k}} \\pm \\hbar \\omega _{\\eta }}{\\hbar v_F} \\right)^2-\\pi \\left( \\frac{\\hbar \\omega _{\\eta }}{\\hbar v_F}\\right)^2 \\right| \\Big / \\left\\lbrace \\frac{(2 \\pi )^2}{A}\\right\\rbrace \\\\&= \\frac{a_{\\eta }}{4 \\pi }\\left|\\left(\\frac{2\\varepsilon _{\\lambda \\mathbf {k}} \\pm \\hbar \\omega _{\\eta }}{\\hbar v_F} \\right)^2-\\left( \\frac{ \\omega _{\\eta }}{ v_F}\\right)^2 \\right|,\\end{aligned}$ with $|\\Delta \\mathbf {q}|=(2\\pi )^2/A$ the $\\mathbf {q}$ -space area occupied by one phonon mode.", "In this case, $a_{\\bf {\\Gamma }}=1$ for $\\Gamma $ -LO and $\\Gamma $ -TO phonons, and $a_{\\bf {K}}=2$ for $K$ phonons.", "The factor of $a_{\\bf {K}}=2$ represents the degenerate phonon valleys at the $K$ and $K^{\\prime }$ points.", "Using Eqs.", "(SIII.5), the total balance between the optical phonon emission and absorption rates per number of phonon modes is given by $R^{\\mathrm {Net}}_{M,\\eta }=R_{M,\\eta }-G_{M,\\eta }$ ."
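The phonon-mode count of Eq. (SIII.5) is a simple closed-form expression. The following minimal Python sketch evaluates it for the Gamma- and K-phonon branches; the Fermi velocity and phonon energies used below are illustrative placeholder values, not parameters fixed in the text.

```python
import math

# Minimal numerical sketch of Eq. (SIII.5): the number of eta-phonon modes per unit
# area available to a carrier of energy eps for phonon emission (M^-) or absorption
# (M^+). The constants below (Fermi velocity, phonon energies) are illustrative
# placeholders, not values taken from the text.

HBAR = 1.0545718e-34          # J s
EV = 1.602176634e-19          # J per eV
V_F = 1.0e6                   # graphene Fermi velocity, m/s (assumed)

def phonon_modes(eps_eV, hbar_omega_eV, a_eta, sign):
    """Eq. (SIII.5); sign = -1 for emission (M^-), +1 for absorption (M^+)."""
    eps = eps_eV * EV
    hw = hbar_omega_eV * EV
    term1 = ((2.0 * eps + sign * hw) / (HBAR * V_F)) ** 2
    term2 = (hw / (HBAR * V_F)) ** 2
    return a_eta / (4.0 * math.pi) * abs(term1 - term2)    # modes per unit area, 1/m^2

# Gamma phonons (a_Gamma = 1) vs K phonons (a_K = 2, valley degeneracy), emission:
for label, hw_eV, a in [("Gamma", 0.196, 1), ("K", 0.161, 2)]:
    m = phonon_modes(eps_eV=0.3, hbar_omega_eV=hw_eV, a_eta=a, sign=-1)
    print(f"{label}-phonon emission: M^- = {m:.3e} m^-2 at eps = 0.3 eV")
```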
], [ "Pump power injected into graphene sample considering saturable absorption", "In this section, we present the derivation of the pump intensity $\\mathcal {F}_{\\mathrm {ab}}$ injected into the graphene sample, considering the multiple reflections inside the substrate and the saturable absorption (SA) effect.", "The SA is an extreme nonlinear phenomenon that consists of the quenching of the optical absorption under high-intensity illumination.", "Following Marini et al.", "[3], we introduce the derivation of saturable absorption coefficient $\\alpha _{\\mathrm {inter}}$ in graphene.", "Thereafter, we explain the derivation of the absorbed pump intensity $\\mathcal {F}_{\\mathrm {ab}}$ by graphene on the substrate at an oblique incidence angle using $\\alpha _{\\mathrm {inter}}$ .", "We study the response of a single electron in graphene under an in-plane x-direction applied field $\\mathbf {E}(t)=E_0 \\mathrm {e}^{-i \\omega t}\\hat{\\mathbf {x}}$ .", "The extended Bloch equations describing the temporal variation in the interband coherence $\\rho _{\\mathbf {k}}$ and population difference $n_{\\mathbf {k}}$ in photoexcited graphene are as follows: $\\dot{\\rho }_{\\mathbf {k}}(t)=-\\frac{i}{2} \\dot{\\theta }_{\\mathbf {k}}(t) n_{\\mathbf {k}}(t) \\mathrm {e}^{2 i \\Omega _{\\mathbf {k}}(t)}-\\frac{\\rho _{\\mathbf {k}}(t)}{\\tau _{\\mathrm {ie}}},$ $\\dot{n}_{\\mathbf {k}}(t)=2 \\dot{\\theta }_{\\mathbf {k}}(t) \\operatorname{Im}\\left\\lbrace \\rho _{\\mathbf {k}}(t) \\mathrm {e}^{-2 i \\Omega _{\\mathbf {k}}(t)}\\right\\rbrace -\\frac{n_{\\mathbf {k}}(t)}{\\tau _{\\mathrm {ie}}},$ where $\\pi =\\hbar \\mathbf {k}=\\hbar k(\\cos \\phi , \\sin \\phi )$ is the electron momentum, the global dynamical phase $\\Omega _{\\mathbf {k}}(t)$ is defined as $\\Omega _{\\mathbf {k}}(t)=v_{\\mathrm {F}} \\int |\\mathbf {k}+(e / \\hbar c) \\mathbf {A}(t)| d t$ , and $\\theta _{\\mathbf {k}}(t)=\\operatorname{atan}\\left\\lbrace k_{y} /\\left[k_{x}+(e / \\hbar c) A(t)\\right]\\right\\rbrace $ is the time-dependent directional angle of the electron quasimomentum $\\mathbf {\\pi }(t)=\\hbar \\mathbf {k}+(e / c) \\mathbf {A}(t)$ .", "The vector potential $\\mathbf {A}(t)$ is $\\mathbf {A}(t)=-\\int \\mathbf {E}(t) dt$ .", "In the near-resonant condition, the optical momentum is negligible [i.e., $\\hbar k \\gg (e/c)A(t)$ ] and does not significantly affect the interband dynamics.", "In this approximation, Eqs.", "(SIV.1) is reduced to $\\dot{\\Gamma }_{\\mathbf {k}}=-\\left(\\frac{1}{\\tau _{\\mathrm {ie}}}+2 i \\omega _{0}\\right) \\Gamma _{\\mathbf {k}}-\\frac{i e}{\\hbar k} \\operatorname{Re}\\left\\lbrace E_{0} \\mathrm {e}^{-i \\omega t}\\right\\rbrace \\sin \\phi n_{\\mathrm {k}}$ $\\dot{n}_{\\mathrm {k}}=-\\frac{1}{\\tau _{\\mathrm {ie}}}\\left[n_{\\mathbf {k}}-\\mathcal {N}\\right]+\\frac{4 e}{\\hbar k} \\operatorname{Re}\\left\\lbrace E_{0} \\mathrm {e}^{-i \\omega t}\\right\\rbrace \\sin \\phi \\operatorname{Im}\\left\\lbrace \\Gamma _{\\mathbf {k}}\\right\\rbrace .$ In this case, $\\Gamma _{\\mathbf {k}}(t)=\\rho _{\\mathbf {k}}(t) \\mathrm {e}^{-2 i \\omega _{0} t}, \\omega _{0}=v_{\\mathrm {F}} |\\mathbf {k}|$ , $\\mathcal {N}(k,T_{\\mathrm {e}})=\\mathcal {F}(\\mathbf {k},T_{\\mathrm {e}})-\\mathcal {F}(-\\mathbf {k},T_{\\mathrm {e}})$ , where $\\mathcal {F}(\\mathbf {k}.T_{\\mathrm {e}})=1/\\lbrace 1+\\exp [(v_F \\hbar |\\mathbf {k}|-\\mu )/k_B T_{\\mathrm {e}}]\\rbrace $ .", "A phenomenological relaxation time $\\tau _{\\mathrm {ie}}$ is introduced, which encompasses the effect of 
numerous ultrafast decay channels for the out-of-equilibrium electrons into hot carriers and phonons.", "The steady-state ansatz for the Bloch equation is given by $\\Gamma _{\\mathbf {k}}(t)=\\Gamma _{\\mathbf {k}}^{+} \\mathrm {e}^{i \\omega t}+\\Gamma _{\\mathbf {k}}^{-} \\mathrm {e}^{-i \\omega t}$ $n_{\\mathbf {k}}(t)=n_{\\mathbf {k}}^{(0)}+\\operatorname{Re}\\left\\lbrace n_{\\mathbf {k}}^{(2)} \\mathrm {e}^{-2 i \\omega t}\\right\\rbrace $ Using these expressions and neglecting the higher-harmonic terms, Eqs.", "(SIV.2) leads to $n_{\\mathbf {k}}^{(0)}=\\mathcal {N}+4 \\xi \\operatorname{Im}\\left\\lbrace \\frac{1-i \\omega \\tau _{\\mathrm {ie}}}{1-i \\omega _{+} \\tau _{\\mathrm {ie}}} \\Gamma _{\\mathbf {k}}^{-}\\right\\rbrace $ $n_{\\mathbf {k}}^{(2)}=\\frac{-4 i \\xi (1-i \\omega \\tau _{\\mathrm {ie}}) \\Gamma _{\\mathbf {k}}^{-}}{(1-2 i \\omega \\tau _{\\mathrm {ie}})\\left(1-i \\omega _{+} \\tau _{\\mathrm {ie}}\\right)}$ $\\Gamma _{\\mathbf {k}}^{+}=-\\frac{1+i \\omega _{-} \\tau _{\\mathrm {ie}}}{1+i \\omega _{+} \\tau _{\\mathrm {ie}}} \\Gamma _{\\mathbf {k}}^{-*}$ $\\Gamma _{\\mathbf {k}}^{-}=\\frac{(-i \\xi / 2)}{1-i \\omega _{-} \\tau _{\\mathrm {ie}}}\\left(n_{\\mathbf {k}}^{(0)}+\\frac{1}{2} n_{\\mathbf {k}}^{(2)}\\right),$ where $\\xi =\\left(e \\tau _{\\mathrm {ie}} E_{0} / \\hbar k\\right) \\sin \\phi $ and $\\omega _{\\pm }=\\omega \\pm 2 \\omega _{0}$ .", "The macroscopic interband current density depending on the light intensity $I_0=(c/2 \\pi ) |E_0|^2$ at the electronic temperature $T_{\\mathrm {e}}$ is determined by $\\mathbf {J}_{\\text{inter }}(t)=-\\frac{2 e v_{\\mathrm {F}}}{\\pi ^{2}} \\operatorname{Re}\\left\\lbrace i \\mathrm {e}^{-i \\omega t} \\int \\sin \\phi \\left[\\Gamma _{\\mathbf {k}}^{-}-\\Gamma _{\\mathbf {k}}^{+*}\\right] d^{2} \\mathbf {k}\\right\\rbrace \\hat{\\mathbf {x}},$ where $\\Gamma _{\\mathbf {k}}^{-}-\\Gamma _{\\mathbf {k}}^{+*}=-\\frac{\\operatorname{ie\\tau _{\\mathrm {ie}}} E_{0} \\sin \\phi (1-i \\omega \\tau _{\\mathrm {ie}})}{2 \\hbar k\\left(1-i \\omega _{+} \\tau _{\\mathrm {ie}}\\right)\\left(1-i \\omega _{-} \\tau _{\\mathrm {ie}}\\right)}\\left[2 n_{\\mathbf {k}}^{(0)}+n_{\\mathbf {k}}^{(2)}\\right]$ , and $n_{\\mathbf {k}}^{(0)}=\\frac{\\mathcal {N}}{1+2 \\xi ^{2} \\operatorname{Im}\\left\\lbrace \\frac{i(1-i \\omega \\tau _{\\mathrm {ie}})}{\\left(1-i \\omega _{+} \\tau _{\\mathrm {ie}}\\right)\\left(1-i \\omega _{-} \\tau _{\\mathrm {ie}}\\right)}\\left[1-\\frac{\\xi ^{2}(1-i \\omega \\tau _{\\mathrm {ie}})}{(1-2 i \\omega \\tau _{\\mathrm {ie}})\\left(1-i \\omega _{+} \\tau _{\\mathrm {ie}}\\right)\\left(1-i \\omega _{-} \\tau _{\\mathrm {ie}}\\right)+\\xi ^{2}(1-i \\omega \\tau _{\\mathrm {ie}})}\\right]\\right\\rbrace }$ $n_{\\mathbf {k}}^{(2)}=\\frac{-2 \\xi ^{2}(1-i \\omega \\tau _{\\mathrm {ie}}) \\mathcal {N} /\\left[(1-2 i \\omega \\tau _{\\mathrm {ie}})\\left(1-i \\omega _{+} \\tau _{\\mathrm {ie}}\\right)\\left(1-i \\omega _{-} \\tau _{\\mathrm {ie}}\\right)+\\xi ^{2}(1-i \\omega \\tau _{\\mathrm {ie}})\\right]}{1+2 \\xi ^{2} \\operatorname{Im}\\left\\lbrace \\frac{i(1-i \\omega \\tau _{\\mathrm {ie}})}{\\left(1-i \\omega _{+} \\tau _{\\mathrm {ie}}\\right)\\left(1-i \\omega _{-} \\tau _{\\mathrm {ie}}\\right)}\\left[1-\\frac{\\xi ^{2}(1-i \\omega \\tau _{\\mathrm {ie}})}{(1-2 i \\omega \\tau _{\\mathrm {ie}})\\left(1-i \\omega _{+} \\tau _{\\mathrm {ie}}\\right)\\left(1-i \\omega _{-} \\tau _{\\mathrm {ie}}\\right)+\\xi ^{2}(1-i \\omega \\tau )}\\right]\\right\\rbrace }.$ Subsequently, by expressing the 
integral over the reciprocal space in polar coordinates, the following is obtained: $\\mathbf {J}_{\\text{inter }}(t)=-\\frac{8 e^{2} v_{\\mathrm {F}} \\tau _{\\mathrm {ie}}}{\\pi ^{2} \\hbar } \\operatorname{Re}\\left\\lbrace E_{0} \\mathrm {e}^{-i \\omega t}(1-i \\omega \\tau _{\\mathrm {ie}}) \\int _{0}^{\\pi / 2} d \\phi \\int _{0}^{\\infty } d k \\frac{\\sin ^{2} \\phi \\left[2 n_{\\mathrm {k}}^{(0)}+n_{\\mathrm {k}}^{(2)}\\right]}{2\\left(1-i \\omega _{+} \\tau _{\\mathrm {ie}}\\right)\\left(1-i \\omega _{-} \\tau _{\\mathrm {ie}}\\right)}\\right\\rbrace \\hat{\\mathrm {x}}.$ Using the interband current, the interband absorption coefficient is determined as the ratio of the time-averaged absorbed power over an optical cycle to the incident intensity $I_0$ : $\\alpha _{\\text{inter }} (I_0) \\equiv \\frac{\\int _{-\\pi / \\omega }^{+\\pi / \\omega } \\mathbf {J}_{\\text{inter }}(t) \\cdot \\mathbf {E}(t) d t}{(2 \\pi / \\omega ) I_0}.$ Although the above results were obtained under the CW illumination conditions, these are also applicable to commonly used optical pulses that have a large duration compared to the optical period.", "Taking into account the SA for the interband transition by the pump irradiation, the transmission and reflection coefficients of the s-polarized pump pulse incident on the system of layer i/graphene/layer j from layer i, as illustrated in Fig.", "S1, are calculated by $\\begin{aligned}t_{\\mathrm {ij}}^s(I_0, \\gamma _{\\mathrm {ij}})=\\frac{2 \\epsilon _{i}^{1 / 2} (\\omega _{\\mathrm {pump}}) \\cos \\theta _{\\mathrm {i}}}{\\alpha _{\\text{inter }} (\\gamma _{\\mathrm {ij}} I_0)+(\\epsilon _{\\mathrm {j}}(\\omega _{\\mathrm {pump}})-\\epsilon _{\\mathrm {i}}(\\omega _{\\mathrm {pump}}) \\sin ^2 \\theta _{\\mathrm {i}})^{1 / 2}+\\epsilon _{\\mathrm {i}} (\\omega _{\\mathrm {pump}})^{1 / 2} \\cos \\theta _{\\mathrm {i}}},\\end{aligned}$ $\\begin{aligned}r_{\\mathrm {ij}}^s(I_0, \\gamma _{\\mathrm {ij}})=-\\frac{\\alpha _{\\text{inter }} (\\omega ,\\gamma _{\\mathrm {ij}} I_0,T_{\\mathrm {e}})+(\\epsilon _{\\mathrm {j}}(\\omega _{\\mathrm {p}})-\\epsilon _{\\mathrm {i}}(\\omega _{\\mathrm {p}}) \\sin ^2 \\theta _{\\mathrm {i}})^{1 / 2}-\\epsilon _{\\mathrm {i}}^{1 / 2}(\\omega _{\\mathrm {p}}) \\cos \\theta _{\\mathrm {i}}}{\\alpha _{\\text{inter }} (\\omega ,\\gamma _{\\mathrm {ij}} I_0,T_{\\mathrm {e}})+(\\epsilon _{\\mathrm {j}}(\\omega _{\\mathrm {p}})-\\epsilon _{\\mathrm {i}}(\\omega _{\\mathrm {p}}) \\sin ^2 \\theta _{\\mathrm {i}})^{1 / 2}+\\epsilon _{\\mathrm {i}}^{1 / 2}(\\omega _{\\mathrm {p}}) \\cos \\theta _{\\mathrm {i}}}.\\end{aligned}$ In this case, the pump pulse irradiates the graphene from layer i with the incidence angle of $\\theta _{\\mathrm {i}}$ and transmits it to layer j with the angle $\\theta _{\\mathrm {j}}$ .", "Moreover, $\\gamma _{\\mathrm {ij}}$ is the correction factor.", "Although $\\alpha _{\\text{inter }} (\\omega ,I_0,T_{\\mathrm {e}})$ is appropriate for the case in which the optical pump pulse excites the suspended graphene at the normal incidence angle, the saturation behavior will change when graphene on a substrate is excited by a pump pulse at an oblique incidence angle, where the injected pump power becomes smaller by a factor of $\\gamma _{\\mathrm {ij}}$ .", "The corresponding transmittance and reflectance are determined by $\\begin{aligned}T_{\\mathrm {ij}}^s (I_0, \\gamma _{\\mathrm {ij}})=|t_{\\mathrm {ij}}^s(I_0, \\gamma _{\\mathrm {ij}})|^2 \\frac{\\epsilon _{\\mathrm {j}}^{1 / 2}(\\omega _{\\mathrm {p}}) 
\\cos \\theta _{\\mathrm {j}}}{\\epsilon _{\\mathrm {i}}^{1 / 2}(\\omega _{\\mathrm {p}}) \\cos \\theta _{\\mathrm {i}}},\\end{aligned}$ $\\begin{aligned}R_{\\mathrm {ij}}^s(I_0, \\gamma _{\\mathrm {ij}})=|r^s_{\\mathrm {ij}}(I_0, \\gamma _{\\mathrm {ij}})|^2.\\end{aligned}$ Using Eq.", "(SIV.10), the absorption of the pump pulse by the graphene layer is provided by $\\begin{aligned}A^{s}_{\\mathrm {ij}}(I_0, \\gamma _{\\mathrm {ij}})=1-T^s_{\\mathrm {ij}}(I_0, \\gamma _{\\mathrm {ij}})-R^s_{\\mathrm {ij}} (I_0, \\gamma _{\\mathrm {ij}}).\\end{aligned}$ The correction factor $\\gamma _{\\mathrm {ij}}$ is calculated by the ratio of the absorption coefficient $\\begin{aligned}\\gamma _{\\mathrm {ij}}=\\frac{A^{s}_{\\mathrm {ij}}(I_0, \\gamma _{\\mathrm {ij}})}{\\alpha _{\\text{inter }} (I_0)},\\end{aligned}$ and can be determined self-consistently.", "Using the converged $\\gamma ^*_{\\mathrm {ij}}$ , the transmittance, reflectance, and absorption coefficients in the experimental condition are obtained by $\\begin{aligned}T^{s*}_{\\mathrm {ij}}(I_0)=T^{s}_{\\mathrm {ij}}(I_0, \\gamma ^*_{\\mathrm {ij}}),\\end{aligned}$ $\\begin{aligned}R^{s*}_{\\mathrm {ij}}(I_0)=R^{s}_{\\mathrm {ij}}(I_0, \\gamma ^*_{\\mathrm {ij}}),\\end{aligned}$ $\\begin{aligned}A^{s*}_{\\mathrm {ij}}(I_0)=A^{s}_{\\mathrm {ij}}(I_0, \\gamma ^*_{\\mathrm {ij}}).\\end{aligned}$ The envelope function of the pump pulse considering the $n^{\\mathrm {th}}$ multiple reflections inside the substrate is given by $\\begin{aligned}\\mathcal {I}(t)=\\sum _{n=0} \\mathcal {I}_n(t+n\\Delta T)\\end{aligned}$ In this case, $\\mathcal {I}_0(t)$ represents the incident pump pulse, which is assumed to have hyperbolic secant form $\\mathcal {I}_0(t)=\\left(F_{\\mathrm {0}} / 2 \\tau _{\\mathrm {pump}}\\right) \\operatorname{sech}^{2}\\left(t / \\tau _{\\mathrm {pump}}\\right)$ , where $F_{\\mathrm {0}}$ is the fluence and $2\\tau _{\\mathrm {pump}}$ is the pulse duration.", "$\\mathcal {I}_n(t)=\\left(F_{\\mathrm {n}} / 2 \\tau _{\\mathrm {pump}}\\right) \\operatorname{sech}^{2}\\left(t / \\tau _{\\mathrm {punmp}}\\right)$ represents the $n^{th}$ reflection of the incident pump pulse and $F_{\\mathrm {n}}$ is the fluence of the $n^{th}$ reflection pulse.", "$\\Delta T$ is the time delay owing to one round trip in the substrate.", "Using Eq.", "(SIV.13) and $I_0=F_{\\mathrm {0}}/ 2 \\tau _{\\mathrm {pump}}$ , $F_n$ for $n \\ge 1$ is obtained by $F_1=F_0 T^{s*}_{\\mathrm {12}}(F_{\\mathrm {0}} / 2 \\tau _{\\mathrm {pump}})R^{s}_{23},$ $F_n=F_{n-1} R^{s*}_{21}(F_{\\mathrm {n-1}} / 2 \\tau _{\\mathrm {pump}})R^{s}_{23},\\; (\\text{for}\\; n \\ge 2).$ In the above, $R^{s}_{23}$ is the reflectance of the pump pulse incident at the substrate (layer 2) /$N_2$ purged (layer 3) interface from the substrate ($\\alpha _{\\text{inter}} (I_0)=0$ in Eq.", "(SIV.13b)).", "Using Eqs.", "(SIV.13), (SIV.14), and (SIV.15), the absorbed pump intensity $\\mathcal {F}_{ab}(t)$ is determined by $\\begin{aligned}\\mathcal {I}_{ab}(t)=\\mathcal {I}_0(t) A^{s*}_{\\mathrm {12}}(F_{\\mathrm {0}} / 2 \\tau _{\\mathrm {pump}})+\\sum _n \\mathcal {I}_n(t+ n \\Delta t) A^{s*}_{\\mathrm {21}}(F_{\\mathrm {n}} / 2 \\tau _{\\mathrm {pump}}).\\end{aligned}$ Figures S2(a)–(f) depict the pump intensity dependence of $\\alpha _{\\mathrm {inter}}$ and $A^{s}_{\\mathrm {ij}}$ for various $\\tau _{\\mathrm {ie}}$ and $T_e$ calculated using Eqs.", "(SIV.8) and (SIV.13c).", "Figure S3 (a) and (b) present the saturated pump intensities $I_s$ for $\\alpha _{\\mathrm {inter}}$ and 
$A^{s*}_{\\mathrm {12}}$ , respectively, where $I_s$ is defined as $\\alpha _{\\mathrm {inter}}(I_s)=(1/2) \\alpha _{\\mathrm {inter}}(0) $ and $A^{s*}_{\\mathrm {12}}(I_s)=(1/2) A^{s*}_{\\mathrm {12}}(0) $ .", "Figure S4 shows the absorbed pump fluence in graphene with $|\\varepsilon _{\\mathrm {F}}|=$ 0.15 and $0.43\\,\\mathrm {eV}$ , calculated using Eq.", "(SIV.16).", "Figure: (a)–(f) Pump intensity dependence I pump I_{\\mathrm {pump}} of α inter \\alpha _{\\mathrm {inter}} and A ij s A^{s}_{\\mathrm {ij}} at θ=60 ∘ \\theta =60^{\\circ } in heavily doped graphene with |ε F |=0.43 eV |\\varepsilon _{\\mathrm {F}}|=0.43\\,\\mathrm {eV}, assuming ϵ 2 =2.4\\epsilon _2=2.4 for various τ ie \\tau _{\\mathrm {ie}} and T e T_e.", "The F 0 =I 0 ×2τ pump F_0=I_0 \\times 2\\tau _{\\mathrm {pump}} on the upper horizontal axis was calculated by assuming 2τ pump =2\\tau _{\\mathrm {pump}}=220 fs in the OPTP experiment.Figure: T e T_e dependence of saturated pump intensity I s I_s of heavily doped graphene with |ε F |=0.43 eV |\\varepsilon _{\\mathrm {F}}|=0.43\\,\\mathrm {eV}, assuming ϵ 2 =2.4\\epsilon _2=2.4 for (a) α inter \\alpha _{\\mathrm {inter}} and (b) A ij s A^{s}_{\\mathrm {ij}} at θ=60 ∘ \\theta =60^{\\circ }.", "The F s =I s ×2τ pump F_{\\mathrm {s}}=I_{\\mathrm {s}} \\times 2\\tau _{\\mathrm {pump}} on the right vertical axis was calculated by assuming 2τ pump =2\\tau _{\\mathrm {pump}}=220 fs in the OPTP experimentFigure: (a) Envelope function of pump intensity F 0 F_{\\mathrm {0}} incident on graphene at θ=60 ∘ \\theta =60^{\\circ }.", "Absorbed pump intensity F ab F_{\\mathrm {ab}} in graphene with |ε F |=|\\varepsilon _{\\mathrm {F}}|=(b)0.15 and (c)0.43 eV at T e T_e=295 K using ϵ 2 =2.4\\epsilon _2=2.4." ], [ "Calculation of the transient THz reflection change from optical conductivity", "In this section, we explain the calculation procedure of the transient reflection change $\\Delta E_{\\mathrm {r}}(\\tau _1)/E_0$ from $\\sigma (\\omega , \\tau _1)$ , calculated using the iterative solution of the BTE and the four-temperature model.", "The reflected THz electric field in the time domain, $E_{\\mathrm {r}}^{\\mathrm {s}}\\left(\\tau _{2}, \\tau _{1}\\right)$ , where $\\tau _{2}$ is the probe trigger delay, is determined by the inverse Fourier transformation of the reflected THz electric field in the frequency domain, $E_{\\mathrm {r}}^{\\mathrm {s}}\\left(\\omega _{\\mathrm {THz}}, \\tau _{1}\\right)$ : $\\begin{aligned}E_{\\mathrm {r}}^{\\mathrm {s}}\\left(\\tau _{2}, \\tau _{1}\\right) &=\\int E_{\\mathrm {r}}^{\\mathrm {s}}\\left(\\omega _{\\mathrm {THz}}, \\tau _{1}\\right) e^{i \\omega _{\\mathrm {THz}} \\tau _{2}} d \\omega _{\\mathrm {THz}}\\\\&=\\int E_{\\mathrm {i}}^{\\mathrm {s}}\\left(\\omega _{\\mathrm {THz}}\\right) r_{\\mathrm {s}}\\left(\\omega _{\\mathrm {THz}}, \\tau _{1}\\right) e^{i \\omega _{\\mathrm {THz}} \\tau _{2}} d \\omega _{\\mathrm {THz}},\\end{aligned}$ where $E_{\\mathrm {i}}^{\\mathrm {s}}\\left(\\omega _{\\mathrm {THz}}\\right)$ is the electric field of the incident THz pulse in the frequency domain and $r_s^{\\prime }\\left(\\omega _{\\mathrm {THz}}, \\tau _{1}\\right)$ is the refection coefficient of the THz probe by the photoexcited graphene at $\\tau _1$ , which is calculated as a function of $\\sigma (\\omega , \\tau _1)$ by Eq.", "(SII.1).", "The normalized reflection change $\\Delta E_{\\mathrm {r}}^{\\mathrm {s}}\\left(\\tau _{2}, \\tau _{1}\\right)/E_{\\mathrm {r}}^{\\mathrm {s}}\\left(\\tau _{2}\\right)$ $\\Delta E_{\\mathrm {r}}(\\tau 
_1)/E_0$ as a function of the probe trigger delay $\\tau _2$ at $\\tau _1$ is expressed by $\\frac{\\Delta E_{\\mathrm {r}}(\\tau _1)}{E_0} \\equiv \\frac{\\Delta E_{\\mathrm {r}}^{\\mathrm {s}}\\left(\\tau _{2}, \\tau _{1}\\right)}{E_{\\mathrm {r}}^{\\mathrm {s}}\\left(\\tau _{2}\\right)}=\\frac{E_{\\mathrm {r}}^{\\mathrm {s}}\\left(\\tau _{2}, \\tau _{1}\\right)-E_{\\mathrm {r}}^{\\mathrm {s}}\\left(\\tau _{2}\\right)}{E_{\\mathrm {r}}^{\\mathrm {s}}\\left(\\tau _{2}\\right)}.$ In the above, $E_{\\mathrm {r}}^s\\left(\\tau _{2}\\right)$ is the reflected THz field through the graphene sample without pump fluence.", "We define the transient reflectivity $\\Delta E_{\\mathrm {r}}(\\tau _1)/E_0 \\equiv \\Delta E_{\\mathrm {r}}^{\\mathrm {s}}(\\tau _{2}, \\tau _{1})/E_{\\mathrm {r}}^{\\mathrm {s}}(\\tau _{2})$ at $\\tau _2=$ 0 ps when the peak amplitude of $E_{\\mathrm {r}}^s\\left(\\tau _{2}\\right)$ takes the maximum amplitude.", "Figures S5(a) and (b) depict the temporal evolution of $T_e$ and $T_{\\eta }$ of the photoexcited graphene, and the temporal waveforms and Fourier spectra of the THz probe pulse, respectively, used in the calculation of $\\sigma (\\omega ,\\tau _1)$[4] in Figs.", "S5(d)–(f) for $\\left\\langle D_{\\textbf {K}}^{2}\\right\\rangle _{\\mathrm {F}}=193\\,\\mathrm {eV}$ .", "The $\\sigma (\\omega ,\\tau _1)$ values are plotted only in the frequency range corresponding to the bandwidth of the THz probe, because the numerical error occurs outside the frequency of the bandwidth.", "$\\sigma (\\omega ,\\tau _1)$ is strongly dependent on the waveform of the THz probe pulse, and non-Drude frequency dependence clearly appears at $\\tau _1=0.1\\,\\mathrm {ps}$ when the carrier distribution and scattering rate change very rapidly during the THz probing time owing to the photoexcitation.", "Figure S6 depict the $\\left\\langle D_{\\textbf {K}}^{2}\\right\\rangle _{\\mathrm {F}}$ dependence of $\\Delta E_{\\mathrm {r}}(\\tau _1)/E_0$ for different $2\\tau _{\\mathrm {prob}}$ values calculated using Eq.", "(SIV.2).", "The $\\Delta E_{\\mathrm {r}}(\\tau _1)/E_0$ reflects the change of the $\\sigma (\\omega ,\\tau _1)$ around the center frequency of THz probe pulse and the peak value becomes higher depending on the $\\left\\langle D_{\\textbf {K}}^{2}\\right\\rangle _{\\mathrm {F}}$ .", "Figure: (a) Temporal evolution of T e T_e and T η T_{\\eta } of heavily doped graphene with |ε F |=0.43 eV |\\varepsilon _{\\mathrm {F}}|=0.43\\,\\mathrm {eV}.", "(b)Temporal waveform of THz probe pulse with 2τ prob =100,3002 \\tau _{\\mathrm {prob}}=100, 300 and 600( fs )600\\,\\mathrm {(fs)} used in simulation.", "(c) Corresponding Fourier spectrum of THz probe pulse.", "Temporal evolution of σ(ω,τ 1 )\\sigma (\\omega , \\tau _1) at τ 1 =-1.0,0.1,1.0,2.0,\\tau _1=-1.0, 0.1, 1.0, 2.0, and 4.0 ps 4.0\\,\\mathrm {ps} calculated using THz probe with2τ 1 2 \\tau _1= (d) 600, (e) 300, and (f) 100 fs.Figure: D 𝐊 2 F \\left\\langle D_{\\textbf {K}}^{2}\\right\\rangle _{\\mathrm {F}} dependence of ΔE r (τ 1 )/E 0 \\Delta E_{\\mathrm {r}}(\\tau _1)/E_0" ] ]
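The self-consistent determination of the correction factor gamma_ij in Eq. (SIV.12) and the construction of the reflected pump-pulse train in Eqs. (SIV.14)-(SIV.16) amount to a short fixed-point iteration plus a geometric recursion. In the sketch below, alpha_inter and A^s_ij are replaced by a generic saturable-absorber stand-in with assumed constants (ALPHA0, I_SAT, and the interface coefficients), since the full expressions (SIV.8) and (SIV.13) require the material inputs discussed above; only the control flow mirrors the equations.

```python
# Sketch of the self-consistent correction factor of Eq. (SIV.12) and of the reflected
# pump-pulse train of Eqs. (SIV.14)-(SIV.16). alpha_inter and A^s_ij are replaced by a
# generic saturable-absorber model with assumed constants; the control flow, not the
# material model, is the point of this example.

ALPHA0 = 0.023      # low-intensity absorption of suspended graphene (assumed)
I_SAT = 1.0e13      # saturation intensity in W/m^2 (placeholder value)

def alpha_inter(I0):
    """Stand-in saturable absorption alpha0 / (1 + I/I_sat), cf. Eq. (SIV.8)."""
    return ALPHA0 / (1.0 + I0 / I_SAT)

def absorption_ij(I0, gamma_ij, scale=0.8):
    """Stand-in for A^s_ij(I0, gamma_ij) of Eq. (SIV.13c), evaluated at gamma_ij * I0."""
    return scale * alpha_inter(gamma_ij * I0)

def solve_gamma(I0, tol=1e-12, max_iter=200):
    """Fixed-point iteration gamma <- A^s_ij(I0, gamma) / alpha_inter(I0), Eq. (SIV.12)."""
    gamma = 1.0
    for _ in range(max_iter):
        new = absorption_ij(I0, gamma) / alpha_inter(I0)
        if abs(new - gamma) < tol:
            return new
        gamma = new
    return gamma

def absorbed_fluence(F0, tau_pump, T12=0.9, R21=0.15, R23=0.2, n_max=10):
    """Fluences F_n of the multiply reflected pump (Eqs. (SIV.14)-(SIV.15)) and the
    total absorbed fluence entering Eq. (SIV.16). T12, R21, R23 are illustrative
    interface coefficients, not fitted values."""
    fluences = [F0]
    F = F0 * T12 * R23                      # first round trip, Eq. (SIV.14)
    for _ in range(1, n_max):
        fluences.append(F)
        F = F * R21 * R23                   # later round trips, Eq. (SIV.15)
    gamma = solve_gamma(F0 / (2.0 * tau_pump))
    return gamma, sum(f * absorption_ij(f / (2.0 * tau_pump), gamma) for f in fluences)

gamma_star, F_ab = absorbed_fluence(F0=1.0e-2, tau_pump=110e-15)   # 220 fs pulse
print(f"converged gamma_ij = {gamma_star:.4f}, absorbed fluence = {F_ab:.3e} (arb. units)")
```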
2107.01802
[ [ "Integrability of the conformal loop ensemble" ], [ "Abstract We demonstrate that the conformal loop ensemble (CLE) has a rich integrable structure by establishing exact formulas for two CLE observables.", "The first describes the joint moments of the conformal radii of loops surrounding three points for CLE on the sphere.", "Up to normalization, our formula agrees with the imaginary DOZZ formula due to Zamolodchikov (2005) and Kostov-Petkova (2007), which is the three-point structure constant of certain conformal field theories that generalize the minimal models.", "This verifies the CLE interpretation of the imaginary DOZZ formula by Ikhlef, Jacobsen and Saleur (2015).", "Our second result is for the moments of the electrical thickness of CLE loops first considered by Kenyon and Wilson (2004).", "Our proofs rely on the conformal welding of random surfaces and two sources of integrability concerning CLE and Liouville quantum gravity (LQG).", "First, LQG surfaces decorated with CLE inherit a rich integrable structure from random planar maps decorated with the O(n) loop model.", "Second, as the field theory describing LQG, Liouville conformal field theory is integrable.", "In particular, the DOZZ formula and the FZZ formula for its structure constants are crucial inputs to our results." ], [ "Introduction", "The conformal loop ensemble ($\\operatorname{CLE}$ ) is a canonical conformally invariant probability measure on infinite collections of non-crossing loops, where each loop looks like a Schramm-Loewner evolution (SLE).", "CLE has been proved or conjectured to be the scaling limit of many important loops models in two dimensional statistical physics, such as percolation, Ising model, the $O(n)$ -loop model, and random cluster model.", "There has been an extensive literature on CLE since the fundamental work of Sheffield [67] and Sheffield-Werner [70]; see e.g.", "[5], [10], [14], [28], [42], [45], [49], [53], [58], [57], [54], [69], [75].", "In this paper we demonstrate that CLE has a rich integrable structure by establishing exact formulas for two CLE observables.", "The first describes the joint moments of the conformal radii of loops surrounding three points for CLE on the sphere.", "Our formula agrees with the imaginary DOZZ formula [76], [38] up to normalization, which verifies the geometric interpretation of this formula proposed in [36].", "Our second formula is for the moments of the electrical thickness of CLE loops first considered by Kenyon and Wilson [44].", "We will focus on the simple-loop regime $\\kappa \\in (8/3,4]$ and treat the non-simple loop regime $\\kappa \\in (4,8)$ in a subsequent paper.", "See Sections REF and REF for the precise statements of our results.", "Our proofs rely on two sources of integrability concerning CLE and Liouville quantum gravity (LQG).", "First, LQG surfaces decorated with CLE inherit a rich integrable structure from random planar maps decorated with the $O(n)$ loop model, as exhibited in [7], [12], [55].", "Second, as the field theory describing LQG, Liouville conformal field theory (LCFT) is integrable.", "In particular, the DOZZ formula [40] and FZZ formula [6] for its structure constants are crucial inputs to our results.", "See Section REF for a summary of our method.", "This paper is part of our program exploring the connections between the integrabilities of SLE, LQG and LCFT.", "In Section REF we discuss some future works and list some open questions." 
], [ "Three-point correlation function for CLE and the imaginary DOZZ formula", "For $\\kappa \\in (8/3,4]$ , $\\operatorname{CLE}_\\kappa $ on a simply connected domain $D\\ne can be characterized by the domain Markov property and conformal invariance~\\cite {shef-werner-cle}.", "In this case the loops are disjoint Jordan curves.The full plane $ CLE$ can be obtained by sending $ D .", "Viewed as a loop ensemble living on the extended complex plane $\\hat{=\\lbrace \\infty \\rbrace , the full plane \\operatorname{CLE}_\\kappa is invariant under conformal transformations on \\hat{~\\cite {werner-sphere-cle}.", "Hence we also call a full plane \\operatorname{CLE}_\\kappa a \\operatorname{CLE} on the sphere.", "}Suppose \\Gamma is a full plane \\operatorname{CLE}_\\kappa .Fix three distinct points z_1,z_2,z_3\\in .", "There are infinitely many loops that separate z_1 and \\lbrace z_2,z_3\\rbrace .For each such loop \\eta \\in \\Gamma , we let D_\\eta be the simply connected component of \\hat{\\setminus \\eta containing z_i.", "There exists a loop \\eta _1 with the largest D_\\eta , which we call the outermost loop separating z_1 and z_2,z_3.For i=2,3, we similarly let \\eta _i to be the outermost loop in \\Gamma separating z_i and \\lbrace z_1,z_2,z_3\\rbrace \\setminus \\lbrace z_i \\rbrace .", "}}Given a simple loop $$ on $ surrounding a point $z\\in the conformal radius $ CR(,z)$ of $$ viewed from $ z$ is defined as follows.Let $ be the unit disk.", "Let $D_\\eta $ be the connected component of $\\hat{\\setminus \\eta containing z and let \\psi : D_\\eta be a conformal map such that \\psi (0)=z.", "Then \\mathrm {CR}(\\eta ,z) is defined by |\\psi ^{\\prime }(0)|.", "Given \\eta _i and z_i defined above,we call the joint moment generating function \\mathbb {E}[\\prod _{i=1}^3\\mathrm {CR}( \\eta _i,z_i)^{\\lambda _i}] the \\emph {three-point correlation function} for \\operatorname{CLE}_\\kappa on the sphere.\\begin{lemma}There exists a function {C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}\\in (0, \\infty ] such that\\begin{equation}\\mathbb {E}[\\prod _{i=1}^3\\mathrm {CR}( \\eta _i, z_i)^{\\lambda _i}] = {C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}\\prod _{i=1}^3 |z_i - z_{i+1}|^{\\lambda _i + \\lambda _{i+1} - \\lambda _{i+2}},\\end{equation}where we identify z_j with z_{j-3} and \\lambda _j with \\lambda _{j-3}.\\end{lemma}\\begin{proof}This follows from the conformal invariance of full plane \\operatorname{CLE}_\\kappa , the scaling property of conformal radius, and the fact that any triple of points on the plane are conformally equivalent.\\end{proof}We call {C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)} the \\emph {structure constant} for \\operatorname{CLE}_\\kappa on the sphere.", "This terminology and the three-point correlation function above are inspired by Liouville conformal field theory (LCFT), which is the quantum field theory with Liouville action introduced by Polyakov~\\cite {polyakov-qg1}.", "It was recently made rigorous in~\\cite {dkrv-lqg-sphere} and follow-up works~\\cite {hrv-disk,drv-torus, grv-higher-genus,remy-annulus}; see Section~\\ref {subsec:Liouville-CFT} for more background.For LCFT, the three-point correlation function on the sphere, whichdepends on three points z_1,z_2,z_3 on and three parameters \\alpha _1,\\alpha _2,\\alpha _3\\in \\mathbb {R}, is also of the factorization form~(\\ref {def-structure-CLE})where the dependence on (\\alpha _1,\\alpha _2,\\alpha _3) is encoded 
in its structure constant.It was first proposed in theoretical physics~\\cite {do-dozz,zz-dozz} and then rigorously proved in~\\cite {krv-dozz} thatthe structure constant has the following remarkable expression known as the DOZZ formula:\\begin{equation}C^\\mathrm {DOZZ}_\\gamma (\\alpha _1, \\alpha _2, \\alpha _3) = \\mathopen {}\\mathclose {\\left( \\frac{\\pi (\\frac{\\gamma }{2})^{2-\\gamma ^2/2} \\Gamma (\\frac{\\gamma ^2}{4})}{\\Gamma (1-\\frac{\\gamma ^2}{4})} \\right)}^{\\frac{2Q-\\overline{\\alpha }}{\\gamma }}\\frac{\\Upsilon _{\\frac{\\gamma }{2}}^{\\prime }(0) \\Upsilon _{\\frac{\\gamma }{2}}(\\alpha _1)\\Upsilon _{\\frac{\\gamma }{2}}(\\alpha _2)\\Upsilon _{\\frac{\\gamma }{2}}(\\alpha _3)}{\\Upsilon _{\\frac{\\gamma }{2}}(\\frac{\\overline{\\alpha }}{2} - Q) \\Upsilon _{\\frac{\\gamma }{2}}(\\frac{\\overline{\\alpha }}{2} - \\alpha _1)\\Upsilon _{\\frac{\\gamma }{2}}(\\frac{\\overline{\\alpha }}{2} - \\alpha _2)\\Upsilon _{\\frac{\\gamma }{2}}(\\frac{\\overline{\\alpha }}{2} - \\alpha _3)}.\\end{equation}where \\gamma \\in (0,2) is the coupling parameter for LCFT that determines its central charge, \\overline{\\alpha } is given by \\sum \\alpha _j, and \\Upsilon _{\\frac{\\gamma }{2}}(z) is an explicit entire function called the Upsilon function.", "Here we have assumed the cosmological constant to be 1 for simplicity.See Theorem~\\ref {prop-DOZZ} for a precise statement.", "}Our first main result is an explicit formula for $ CCLE(1,2,3)$ which is closely related to the DOZZ formula.Indeed, under a certain parameter identification their product is factorized.\\begin{theorem}For \\kappa \\in (\\frac{8}{3},4], the structure constant {C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}< \\infty if and only if \\lambda _1, \\lambda _2, \\lambda _3 > \\frac{3\\kappa }{32} -1+\\frac{2}{\\kappa }.In this case, set \\gamma =\\sqrt{\\kappa } and Q=\\frac{\\gamma }{2}+\\frac{2}{\\gamma }.Let \\alpha _i be a root of \\alpha (Q-\\frac{\\alpha }{2}) - 2 =\\lambda _i for i=1,2,3.", "Then\\begin{equation}{C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}= \\frac{\\prod _{i=1}^3 N_\\gamma (\\alpha _i)}{C^\\mathrm {DOZZ}_\\gamma (\\alpha _1, \\alpha _2, \\alpha _3)}.\\end{equation}where\\begin{equation}N_\\gamma (\\alpha )= \\mathopen {}\\mathclose {\\left( -\\pi \\cos (\\frac{4\\pi }{\\gamma ^2}) \\frac{\\Gamma (\\frac{4}{\\gamma ^2}-1)}{\\Gamma (1-\\frac{\\gamma ^2}{4})} C_\\gamma ^\\mathrm {DOZZ}(\\gamma , \\gamma , \\gamma )^{1/3}\\right)}\\frac{\\Gamma (\\frac{\\gamma }{2}(\\alpha - \\frac{\\gamma }{2}))}{\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha )) \\cos (\\frac{2\\pi }{\\gamma }(Q-\\alpha ))} \\mathopen {}\\mathclose {\\left( \\frac{\\pi \\Gamma (\\frac{\\gamma ^2}{4})}{\\Gamma (1-\\frac{\\gamma ^2}{4})} \\right)}^{-\\frac{\\alpha }{\\gamma }}.\\end{equation}\\end{theorem}$ Let $\\Delta _\\alpha :=\\frac{\\alpha }{2}(Q-\\frac{\\alpha }{2})$ be the so-called conformal dimension in LCFT.", "The right hand side of () is a function of $(\\Delta _{\\alpha _i})_{1\\le i\\le 3}$ , hence is indeed a function of $\\lambda _i=2\\Delta _{\\alpha _i}-2$ .", "This relation between $\\lambda $ and $\\alpha $ is an instance of the Knizhnik-Polyakov-Zamolodchikov (KPZ) relation for random fractals coupled with Liouville quantum gravity [39], [22], [16].", "Modulo the choice of normalization $N_\\gamma (\\alpha )$ , the right hand side of () is the so-called imaginary DOZZ formula proposed by Zamolodchikov [76] and independently by Kostov and Petkova [38] with a 
different normalization.", "It is obtained by solving Teschner's shift relations [72] for the DOZZ formula with the shifts being imaginary.", "As observed in both [76] and [38], the product of the imaginary DOZZ formula and the original one with parameters linked through the KPZ relation factorizes as shown in ().", "The imaginary DOZZ formula, also known as the timelike DOZZ formula, is considered to be the structure constant of a family of conformal field theories that generalize the minimal models [11] and analytically continue LCFT to the central charge $c\\le 1$ regime.", "Understanding these conformal field theories remains an active topic in theoretical physics; see [33], [64], [8] and references therein.", "Ikhlef, Jacobsen and Saleur [36] proposed that the imaginary DOZZ formula can be expressed as the partition function of a CLE reweighted by the nesting statistics around three points, which after renormalization is precisely the joint moments of the conformal radii in ${C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}$ .", "(See [57], [58] on the relation between the nesting statistics and conformal radii for CLE.)", "Therefore, our Theorem  is a rigorous justification of the CLE interpretation of the imaginary DOZZ formula in [36].", "Prior to [36], it was argued in [23], [79], [61] that a special value of the imaginary DOZZ formula is related to the three-point connectivity function in Potts and random cluster models.", "See Section REF for more discussion.", "Theorem  immediately raises several questions such as the extension to the case when $\\kappa \\in (4,8)$ , or when there are more than three points.", "These questions will be discussed in Section REF .", "Theorem  should be compared to the formula of Schramm, Sheffield and Wilson [69] on the law of the conformal radius of a single outermost loop of the CLE on the disk, which we recall in Appendix .", "The formula in [69] was proved using stochastic differential equations associated to SLE.", "In contrast, our method for proving Theorem  is based on the coupling of CLE with Liouville quantum gravity and the integrability of LCFT, which we summarize below." 
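For concreteness, the structure constant of the theorem in Section 1.1 can be evaluated numerically by combining the DOZZ formula, the normalization N_gamma, and the relation lambda_i = alpha_i(Q - alpha_i/2) - 2. The Python sketch below (an illustration only, not used in any argument) computes Upsilon_{gamma/2} from the integral representation recalled in Section 2 and uses the identity Upsilon'(0) = Upsilon_{gamma/2}(gamma/2), which follows from the first shift relation as z tends to 0; the sample values gamma = 1.8 and alpha_i in {1.80, 1.85, 1.90} are arbitrary choices satisfying the Seiberg bounds and the finiteness condition of the theorem.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

# Illustrative numerical evaluation of the CLE structure constant of the theorem in
# Section 1.1. Upsilon_{gamma/2} is computed from its integral representation, and
# Upsilon'(0) = Upsilon_{gamma/2}(gamma/2) by the shift relations. The parameter
# values below are arbitrary sample choices, not distinguished in any way.

g = 1.8                        # gamma = sqrt(kappa), here kappa = 3.24 in (8/3, 4]
Q = g / 2 + 2 / g

def upsilon(z):
    """Upsilon_{gamma/2}(z) for 0 < z < Q via the integral representation."""
    a = Q / 2 - z
    def f(t):
        if t < 1e-8:
            return -a * a          # finite t -> 0 limit of the integrand
        return (a * a * np.exp(-t)
                - np.sinh(a * t / 2) ** 2 / (np.sinh(t * g / 4) * np.sinh(t / g))) / t
    val, _ = quad(f, 0.0, 100.0, limit=300)
    return np.exp(val)

UP0 = upsilon(g / 2)               # Upsilon'(0) = Upsilon_{gamma/2}(gamma/2)

def C_dozz(a1, a2, a3):
    abar = a1 + a2 + a3
    pref = (np.pi * (g / 2) ** (2 - g * g / 2) * Gamma(g * g / 4)
            / Gamma(1 - g * g / 4)) ** ((2 * Q - abar) / g)
    num = UP0 * upsilon(a1) * upsilon(a2) * upsilon(a3)
    den = (upsilon(abar / 2 - Q) * upsilon(abar / 2 - a1)
           * upsilon(abar / 2 - a2) * upsilon(abar / 2 - a3))
    return pref * num / den

def N_gamma(alpha):
    pref = (-np.pi * np.cos(4 * np.pi / g ** 2) * Gamma(4 / g ** 2 - 1)
            / Gamma(1 - g * g / 4) * C_dozz(g, g, g) ** (1 / 3))
    main = Gamma(g / 2 * (alpha - g / 2)) / (Gamma(2 / g * (Q - alpha))
                                             * np.cos(2 * np.pi / g * (Q - alpha)))
    return pref * main * (np.pi * Gamma(g * g / 4) / Gamma(1 - g * g / 4)) ** (-alpha / g)

alphas = [1.80, 1.85, 1.90]                       # each < Q, summing to more than 2Q
lambdas = [a * (Q - a / 2) - 2 for a in alphas]   # the KPZ-type relation of the theorem
assert all(l > 3 * g ** 2 / 32 - 1 + 2 / g ** 2 for l in lambdas)  # finiteness range

C_cle = np.prod([N_gamma(a) for a in alphas]) / C_dozz(*alphas)
print("lambda_i =", [round(l, 4) for l in lambdas])
print("C^CLE_kappa(lambda_1, lambda_2, lambda_3) ~", C_cle)
```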
], [ "The proof method based on CLE coupled with Liouville quantum gravity", "Liouville quantum gravity (LQG) is the two dimensional random geometry with a parameter $\\gamma \\in (0,2)$ induced by the exponential of various versions of Gaussian free field.", "See [22], [15], [27] for its construction.", "LQG describes the scaling limits of discrete random surfaces, namely random planar maps under conformal embeddings; see e.g.", "[35], [29].", "We refer to the survey articles [31], [25] for more background on LQG and its relation to random planar maps.", "Our proof of Theorem  relies on the coupling between $\\operatorname{CLE}$ and LQG.", "For $\\kappa \\in (\\frac{8}{3},4)$ , set $\\gamma =\\sqrt{\\kappa }$ .", "Then $\\operatorname{CLE}_\\kappa $ coupled with $\\gamma $ -LQG describes the scaling limit of random planar maps decorated by the $O(n)$ -loop model for $n=-2\\cos (\\frac{4\\pi }{\\kappa })\\in (0,2)$ in the dilute phase [12], [7], [55].", "Suppose the planar map has the spherical topology with three marked points, and is sampled from the critical Boltzmann distribution, then the corresponding $\\gamma $ -LQG surface is the so-called (three-pointed) quantum sphere, which we denote by $\\mathrm {QS}_3$ .", "Consider a full-plane $\\operatorname{CLE}_\\kappa $ on the three-pointed quantum sphere.", "The three outermost loops separating the three marked points cut the quantum sphere into four pieces: one non-simply-connected piece and three simply connected ones; see Figure REF .", "Conditioning on the quantum lengths of these loops, these four pieces are conditionally independent $\\gamma $ -LQG surfaces.", "Moreover, the three simply connected surfaces, each of which contains a marked point, are quantum disks with an interior marked point.", "Here the quantum disk is the $\\gamma $ -LQG surface that describes the scaling limit of random planar maps on a disk in the $\\gamma $ -LQG universality class.", "This decomposition of quantum sphere is an instance of a quantum zipper result, which was pioneered by Sheffield [68].", "We prove this particular instance in Section REF using the recent work of Miller, Sheffield and Werner [55] on $\\operatorname{CLE}$ coupled with the quantum disk, Theorem REF below, and our quantum zipper results proved in [3], [4] with Holden.", "Figure: Cutting a quantum sphere and gluing a quantum pair of pants with quantum disks.We call the non-simply connected surface obtained from cutting the quantum sphere a quantum pair of pants, since it is a natural LQG surface with the topology of a pair of pants.", "We also write the law of the quantum disk with one marked point as $\\mathcal {M}_1^{\\mathrm {disk}}(\\gamma )$ where $\\gamma $ represents the magnitude of the log singularity at the marked point.", "It is proved in [2], [13], [4] that if we conformally embed a sample from $\\mathrm {QS}_3$ on $\\hat{=\\infty with the three marked points located at z_1,z_2,z_3, then thecorresponding variant of Gaussian free field is the so called \\emph {Liouville field} on the sphere~\\cite {dkrv-lqg-sphere,AHS-SLE-integrability} with insertions of weight \\gamma at z_1,z_2,z_3, which we denote by \\mathrm {LF}_{\\gamma ,\\gamma ,\\gamma }.Similarly, if we conformally embed \\mathcal {M}_1^{\\mathrm {disk}}(\\gamma ) on \\mathbb {H} with i as the marked point and fix the remaining degree of freedom in the conformal embeddinguniformly at random, then the resulting field is the Liouville field \\mathrm {LF}^{(\\gamma , i)}_\\mathbb {H} on \\mathbb {H} with a \\gamma 
-insertion at i~\\cite {hrv-disk,ARS-FZZ}.", "(In stating both results we omitted a \\gamma -dependent scaling constant in front of \\mathrm {LF}.", ")}$ Now we can reverse the sphere-cutting procedure above and conclude that if we glue together a quantum pair of pants with three independent quantum disks appropriately, and conformally embed to $(0,1,e^{i\\pi /3})$ (see Figure REF ), then the law of the resulting field is $\\mathrm {LF}_{\\gamma ,\\gamma ,\\gamma }$ .", "This gluing procedure is an example of conformal welding which we review in Section REF .", "In [6], with Remy we considered a generalization $\\mathcal {M}_1^{\\mathrm {disk}}(\\alpha )$ of $\\mathcal {M}_1^{\\mathrm {disk}}(\\gamma )$ for generic $\\alpha \\in \\mathbb {R}$ such that if a sample from $\\mathcal {M}_1^{\\mathrm {disk}}(\\alpha )$ is embedded to $(\\mathbb {H},i)$ as in the previous paragraph, then the law of the resulting random field is $\\mathrm {LF}_\\mathbb {H}^{(\\alpha ,i)}$ .", "Our key observation is that if we glue together the quantum pair of pants with three samples from $\\mathcal {M}_1^{\\mathrm {disk}}(\\alpha _i)$ with $i=1,2,3$ , and conformally embed the spherical surface to $(0,1,e^{i\\pi /3})$ , then the law of the resulting field is ${C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}\\mathrm {LF}_{\\alpha _1,\\alpha _2,\\alpha _2}$ where ${C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}$ is as in Theorem .", "Let $A$ be the total quantum area of the resulting sphere after gluing.", "Then using the relation between $\\mathrm {LF}_{\\alpha _1,\\alpha _2,\\alpha _3}$ and the DOZZ formula as we recall in Theorem , we can conclude that the average of $e^{-A}$ equals ${C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}\\times \\frac{1}{2} C^\\mathrm {DOZZ}_\\gamma (\\alpha _1, \\alpha _2, \\alpha _3) $ .", "In order to get ${C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}$ it suffices to compute the average of $e^{- A}$ in terms of the quantum pair of pants and $\\mathcal {M}_1^{\\mathrm {disk}}(\\alpha _i)$ .", "The area and length distribution for $\\mathcal {M}_1^{\\mathrm {disk}}(\\alpha )$ is encoded by the so-called FZZ formula for LCFT, which we proved in [6] with Remy.", "A substantial effort of this paper is devoted to computing the joint area and length distribution for the quantum pair of pants.", "Two key ingredients of our computation are: 1. the LQG lengths of outermost loops of a CLE on a quantum disk can be described by the jump sizes of a stable Levy process; 2. 
the surfaces encircled by these outermost loops are independent quantum disks given their boundary lengths.", "These facts are consistent with the assertion that CLE coupled with LQG describes the scaling limit of $O(n)$ -loop model decorated random planar maps, and are extracted from [55], [12], [7] as we summarize in Appendix .", "As an intermediate step towards solving the quantum pair of pants, we compute the joint length and area distribution of the quantum annulus, which is the LQG surface between the disk boundary and an outermost loop for CLE on a quantum disk.", "This argument so far gives Theorem  for $\\kappa \\in (\\frac{8}{3},4)$ .", "We otain the case $\\kappa =4$ by a limiting argument.", "The loop lengths distribution of $\\operatorname{CLE}$ on quantum disk crucially relies on the mating of trees theory for LQG coupled with SLE and CLE, which was developed by Duplantier, Miller and Sheffield [19]; see [25] for a survey.", "The idea of using conformal welding, mating of trees, and LCFT to get integrable results is also applied to a variant of SLE called chordal $\\operatorname{SLE}_\\kappa (\\rho _-;\\rho _+)$ in our recent work with Holden [4].", "Moreover, conformal welding and mating of trees are used in our recent proof with Remy [6] of the FZZ formula.", "In the next subsection we present yet another result based on this idea." ], [ "SLE loop measure and the electrical thickness ", "For $\\kappa \\in (0,4]$ , Kontsevich and Suhov [41] conjectured that up to a multiplicative constant there exists a unique measure on simple loops on $\\hat{with conformal symmetry and a certain restriction property indexed by \\kappa .", "This conjecture is inspired by Malliavin^{\\prime }s work~\\cite {Malliavin}.", "We call loop measures satisfying this conjecture a \\emph {Malliavin-Kontsevich-Suhov (MKS) loop measure}.For \\kappa =\\frac{8}{3}, the existence and uniqueness of the MKS loop measure was proved by Werner~\\cite {werner-loops} where the construction is via the Brownian loop measure.", "}$ Kemppainen and Werner [45] introduced the loop intensity measure of simple CLE, which gives an example of an MKS loop measure for $\\kappa \\in (\\frac{8}{3}, 4]$ .", "Let $\\Gamma $ be a full plane $\\operatorname{CLE}_\\kappa $ with $\\kappa \\in (\\frac{8}{3}, 4]$ .", "Let $\\eta $ be a loop chosen from the counting measure over the set of loops in $\\Gamma $ .", "The loop intensity measure $\\widetilde{\\operatorname{SLE}}_\\kappa ^{\\mathrm {loop}}$ for $\\operatorname{CLE}_\\kappa $ is the distribution of $\\eta $ ; this is an infinite measure on simple loops.", "By the conformal invariance of $\\operatorname{CLE}_\\kappa $ , the measure $\\widetilde{\\operatorname{SLE}}_\\kappa ^{\\mathrm {loop}}$ is conformally invariant.", "Recently, for all $\\kappa \\in (0,4]$ , Zhan [78] constructed a MKS loop measure $\\operatorname{SLE}^{\\mathrm {loop}}_\\kappa $ , which we will recall in Section REF .", "As an intermediate step towards our integrability results, we will show in Section  that for $\\kappa \\in (\\frac{8}{3}, 4]$ , the measures $\\operatorname{SLE}^{\\mathrm {loop}}_\\kappa $ and $\\widetilde{\\operatorname{SLE}}_\\kappa ^{\\mathrm {loop}}$ agree up to a multiplicative constant, which is consistent with the uniqueness conjecture on the MKS loop measure.", "Theorem 1.1 For each $\\kappa \\in (\\frac{8}{3},4]$ , there exists $C\\in (0,\\infty )$ such that $\\widetilde{\\operatorname{SLE}}_\\kappa ^{\\mathrm {loop}}=C\\operatorname{SLE}_\\kappa ^{\\mathrm {loop}}$ .", "Although 
the SLE loop measure is infinite, we can define a probability measure that describes the law of its shape modulo scaling and translation.", "One way to express this shape measure is the following.", "Given a simple loop $\\eta $ on $\\mathbb {C}$ that surrounds 0, let $R=\\inf \\lbrace |z|: z\\in \\eta \\rbrace $ and $\\hat{\\eta }=\\lbrace z/R: z\\in \\eta \\rbrace $ .", "Namely, $\\hat{\\eta }$ is the rescaling of $\\eta $ such that $\\hat{\\eta }$ surrounds $\\mathbb {D}$ and touches $\\partial \\mathbb {D}$ .", "Now suppose $\\eta $ is sampled from $\\operatorname{SLE}_\\kappa ^{\\mathrm {loop}}$ restricted to the event that $\\eta $ surrounds 0.", "By the conformal invariance of $\\operatorname{SLE}_\\kappa ^{\\mathrm {loop}}$ , the law of $\\log R$ is translation invariant on $\\mathbb {R}$ , hence a constant multiple of the Lebesgue measure.", "Moreover, the conditional law of $\\hat{\\eta }$ conditioning on $R$ does not depend on the value of $R$ , which gives a probability measure on loops that surround $\\mathbb {D}$ and touch $\\partial \\mathbb {D}$ .", "We denote this probability measure by $\\mathcal {L}_\\kappa $ and call it the \\emph {shape measure} of $\\operatorname{SLE}_\\kappa ^{\\mathrm {loop}}$ .", "Our next result is an exact formula for the so-called electrical thickness for a sample from $\\mathcal {L}_\\kappa $ .", "Theorem 1.2 For $\\kappa \\in (0,4]$ , let $\\eta $ be a sample from $\\mathcal {L}_\\kappa $ .", "Let $\\bar{\\eta }$ be the image of $\\eta $ under the inversion map $z\\mapsto z^{-1}$ .", "Let $\\vartheta (\\eta ) = -\\log \\mathrm {CR}( \\eta ,0) - \\log \\mathrm {CR}(\\bar{\\eta },0)$ .", "Then $\\mathbb {E}[e^{\\lambda \\vartheta (\\eta )}] <\\infty $ if and only if $\\lambda <1-\\frac{\\kappa }{8}$ .", "Moreover, $\\mathbb {E}[e^{\\lambda \\vartheta (\\eta )}] = \\frac{\\sin (\\pi (1-\\kappa /4))}{\\pi (1-\\kappa /4)}\\frac{\\pi \\sqrt{(1-\\kappa /4)^2+\\lambda \\kappa /2}}{\\sin (\\pi \\sqrt{ (1-\\kappa /4)^2+\\lambda \\kappa /2})} \\quad \\text{ for } \\lambda < 1-\\frac{\\kappa }{8}.$", "We call $\\vartheta (\\eta )$ the electrical thickness of $\\eta $ .", "It only depends on the shape of $\\eta $ .", "It is non-negative and equals 0 if and only if the loop is a circle around 0.", "This concept was introduced by Kenyon and Wilson [44] as a way of describing how much the shape of a loop differs from a circle.", "Viewing the complex plane as a homogeneous electrical material, the electrical thickness measures the net change in the effective resistance between 0 and $\\infty $ when the loop $\\eta $ becomes a perfect conductor.", "Theorem REF settles a conjecture of Kenyon and Wilson on the electrical thickness of CLE loops for $\\kappa \\in (\\frac{8}{3},4]$ .", "Consider the shape measure $\\widetilde{\\mathcal {L}}_\\kappa $ of $\\widetilde{\\operatorname{SLE}}_\\kappa ^{\\mathrm {loop}}$ defined in the same way as $\\mathcal {L}_\\kappa $ .", "Theorem REF implies that $\\mathcal {L}_\\kappa =\\widetilde{\\mathcal {L}}_\\kappa $ .", "Let $(\\eta _n)_{n\\ge 1}$ be the sequence of loops of a $\\operatorname{CLE}_\\kappa $ on the unit disk, ordered such that $\\eta _n$ surrounds $\\eta _{n+1}$ .", "It is proved in [45] that the law of the rescaled loop $\\hat{\\eta }_n$ converges weakly to $\\widetilde{\\mathcal {L}}_\\kappa $ .", "Since $ \\vartheta (\\eta _n)= \\vartheta (\\hat{\\eta }_n)$ , we see that $\\lim _{n\\rightarrow \\infty } \\mathbb {E}[e^{\\lambda \\vartheta (\\eta _n)}]$ equals $\\mathbb {E}[e^{\\lambda \\vartheta (\\eta )}]$ in Theorem REF .", "Kenyon and Wilson [44] conjectured a formula for $\\lim _{n\\rightarrow \\infty } \\mathbb {E}[e^{\\lambda \\vartheta (\\eta _n)}]$ , as recorded in [69].", "Their formula agrees with the right-hand side of (REF ) after $\\kappa $ is replaced by $16/\\kappa $ .", "Thus our Theorem REF shows that the conjectural formula is false but only off by this
flip.", "As discussed in private communication with Kenyon and Sheffield, the formula should blow up as $\\kappa \\rightarrow 8$ .", "But [69] gives a finite limit hence cannot be correct.", "This conjecture was made for the entire range $\\kappa \\in (\\frac{8}{3}, 8)$ .", "In a subsequent work we will prove that $\\lim _{n\\rightarrow \\infty } \\mathbb {E}[e^{\\lambda \\vartheta (\\eta _n)}]$ converge to the right hand side of (REF ) for $\\kappa \\in (4, 8)$ as well.", "Our proof of Theorem REF also relies on the conformal welding of LQG surfaces and the integrability of LCFT.", "This time we glue together two quantum disks with an interior point to obtain a two-pointed quantum sphere with a loop separating them.", "It was shown by us and Holden in [4] that the law of the resulting loop is $\\operatorname{SLE}_\\kappa ^{\\mathrm {loop}}$ .", "From this conformal welding result and a similar idea as outlined in Section REF , we can express $\\mathbb {E}[e^{\\lambda \\vartheta (\\eta )}]$ in Theorem REF using the FZZ formula and the reflection coefficient for LCFT on the sphere [40], [65].", "A new difficulty arises as the measure $\\operatorname{SLE}_\\kappa ^{\\mathrm {loop}}$ is infinite, even after restricting to loops that separate 0 and $\\infty $ .", "Overcoming this is the technical bulk of our proof of Theorem REF in Section ." ], [ "Outlook and perspectives", "There are two natural directions one could pursue from our work.", "First, our Theorem  could serve as the starting point to compute high order correlators in two dimensional random cluster and loop models using the so-called conformal bootstrap method in conformal field theory (CFT) [11].", "With the DOZZ formula established in [40] as an input, the conformal bootstrap for the LCFT was recently carried out in [26].", "However, the CFTs corresponding to random cluster and loop models are less well understood than LCFT, and remain an active topic in physics; see e.g.", "[60], [46], [32].", "Another natural direction is to employ the interplay between different types of integrability of SLE, CLE, LQG, LCFT to obtain results that are inaccessible within a single framework.", "The potential of this program was also demonstrated in our recent papers [4] with Holden and [6] with Remy.", "We refer to the introductions of those papers for more results and problems in this direction.", "Here we conclude our introduction by mentioning three problems that we will pursue in subsequent works and listing three concrete open questions.", "In a forthcoming paper we will extend our method to prove Theorems  and REF for $\\kappa \\in (4,8)$ using the coupling of $\\operatorname{CLE}_\\kappa $ with $\\sqrt{16/\\kappa }$ -LQG from [56].", "In the random cluster models, the connectivity functions $P_n(x_1,\\cdots , x_n)$ are given by the probabilities that $n$ points belong to the same finite cluster [43].", "It was first proposed in [23] and later substantiated in [79], [61] that at criticality in the continuum limit, the ratio $P_3(x_1,x_2,x_3)/\\sqrt{P_2(x_1,x_2)P_2(x_1,x_3)P_2(x_2,x_3)}$ converges to a universal constant only depending on $\\kappa $ , which can be expressed by the imaginary DOZZ formula at a special value.", "In particular, for percolation this constant is $\\approx 1.022$ .", "We plan to rigorously justify this proposal based on results and methods in our paper, assuming the CLE scaling limit.", "LQG surfaces with non-simply connected topology such as the quantum pair of pants and quantum annulus mentioned in Section REF 
are crucial to our proof of Theorem .", "We plan to establish two general facts on non-simply connected LQG surfaces which are supposed to be the scaling limit of natural decorated random planar maps with the same topology.", "First, conditioned on their conformal structure, their random fields are Liouville fields.", "Second, they behave nicely under conformal welding.", "Beyond their intrinsic interests, we believe that proving these results will be instrumental to answering Questions REF and REF below.", "Question 1.3 Can one prove Theorem  without relying on LQG?", "The imaginary DOZZ formula can be characterized by Teschner's shift relations for DOZZ formula [72] with imaginary shifts.", "One natural idea is to use the strategy in [40] for the proof of the DOZZ formula for LCFT, which is based on CFT ideas including the BPZ equation and operator product expansion.", "Carrying this out for CLE would represent a big step in the first direction mentioned above.", "Question 1.4 What is the law of the random modulus for our quantum annulus conditioning on its two boundary lengths?", "This question is naturally related to the extremal distance between CLE loops, which is well understood for $\\operatorname{CLE}_4$  [5] using the level line coupling with Gaussian free field but not known for other $\\kappa $ .", "Question 1.5 Can the imaginary DOZZ formula in () for $\\kappa \\in (0,8/3]$ be realized as an $\\operatorname{SLE}_\\kappa $ observable?", "For Theorem REF , although it was originally only considered for $\\operatorname{CLE}_\\kappa $  [69], we have shown that it is indeed a formula for the $\\operatorname{SLE}_\\kappa $ loop when $\\kappa \\in (0,8/3]$ .", "Thus one natural idea is to generalize the quantum zipper result in Section REF to a certain measure on triples of curves defined via the $\\operatorname{SLE}_\\kappa $ loop.", "Organization of the paper.", "In the rest of the paper, we first provide background on LQG, LCFT and their coupling with SLE and CLE in Section .", "In Section  we introduce the quantum annulus and determine its length distribution.", "In Section  we compute the joint area and length distribution of the quantum annulus.", "In Section  we prove Theorem REF .", "In Section  we introduce the quantum pair of pants and compute its joint area and length distribution.", "In Section  we carry out the outline in Section REF to prove Theorem .", "In Section  we prove Theorem REF .", "Acknowledgements.", "We are grateful to Nina Holden, Rick Kenyon, Matthis Lehmkuehler, Scott Sheffield and Wendelin Werner for helpful discussions.", "M.A.", "was partially supported by NSF grant DMS-1712862.", "X.S.", "was supported by the NSF grant DMS-2027986 and the NSF Career grant DMS-2046514." ], [ "Preliminaries", "We assume basic familiarity with SLE and CLE in the simple curve regime and refer to [70] for more background.", "In this section, we review the inputs for our proofs from Liouville quantum gravity, Liouville conformal field theory, and their interplay with SLE and CLE." 
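Before turning to the preliminaries, we record a quick numerical sanity check (an illustration only) of the closed-form moment formula for the electrical thickness in Theorem 1.2 above: the right-hand side equals 1 at lambda = 0 and blows up as lambda approaches 1 - kappa/8. The value kappa = 3 below is an arbitrary choice in (0, 4].

```python
import numpy as np

# Sanity check of the electrical-thickness moment formula of Theorem 1.2.
# kappa = 3 is an arbitrary choice; the formula itself is taken from the theorem.

def thickness_mgf(lam, kappa):
    """E[exp(lambda * vartheta(eta))] for lambda < 1 - kappa/8."""
    if lam >= 1 - kappa / 8:
        return float("inf")
    a = 1 - kappa / 4
    s = np.emath.sqrt(a * a + lam * kappa / 2)   # may be imaginary for very negative lam
    # pi*s/sin(pi*s) is an even function of s and is real also for imaginary s,
    # where it equals pi*|s|/sinh(pi*|s|); taking the real part is therefore safe.
    return float(np.real(np.sin(np.pi * a) / (np.pi * a) * np.pi * s / np.sin(np.pi * s)))

kappa = 3.0
for lam in [-0.2, 0.0, 0.3, 0.6, 0.62, 1 - kappa / 8]:
    print(f"lambda = {lam:+.3f}:  E[exp(lambda*vartheta)] = {thickness_mgf(lam, kappa):.6f}")
# lambda = 0 returns exactly 1; the moments diverge at the critical value 1 - kappa/8.
```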
], [ "Measure theoretical background", "We will frequently consider infinite measures and extend the probability terminology to this setting.", "In particular, suppose $M$ is a $\\sigma $ -finite measure on a measurable space $(\\Omega , \\mathcal {F})$ .", "Suppose $X:(\\Omega ,\\mathcal {F})\\rightarrow (E,\\mathcal {E})$ is an $\\mathcal {F}$ -measurable function taking values in $(E,\\mathcal {E})$ .", "Then we say that $X$ is a random variable on $(\\Omega ,\\mathcal {F})$ and call the pushforward measure $M_X = X_*M$ on $(E,\\sigma (X))$ the law of $X$ .", "We say that $X$ is sampled from $M_X$ .", "We also write the integral $\\int f(x) M_X(dx)$ as $M_X[f]$ or $M_X[f(x)]$ for simplicity.", "For a finite measure $M$ , we write $|M|$ as its total mass and write $M^{\\#}=|M|^{-1}M$ as the probability measure proportional to $M$ .", "Given $(\\Omega , \\mathcal {F}, M)$ as above, let $X:(\\Omega , \\mathcal {F})\\rightarrow (E,\\mathcal {E})$ and $Y:(\\Omega , \\mathcal {F})\\rightarrow (E^{\\prime },\\mathcal {E}^{\\prime })$ be two random variables.", "A family of probability measures $\\lbrace \\mathbb {P}(\\cdot |e): e\\in E \\rbrace $ on $(E^{\\prime },\\mathcal {E}^{\\prime })$ is called the (regular) conditional law of $Y$ given $X$ if for each $A\\in \\sigma (Y) $ , $\\mathbb {P}(A |\\cdot )$ is measurable on $(E,\\mathcal {E})$ and $M[Y\\in A,X\\in B]=\\int _B \\mathbb {P}(A |e) \\, dM_X \\textrm { for each } A\\in \\sigma (Y) \\textrm { and } B\\in \\sigma (X) $ We also need the concept of disintegration in the case $(E,\\mathcal {E})=\\mathbb {R}^n$ for a positive integer $n$ .", "Definition 2.1 (Disintegration) Let $M$ be a measure on a measurable space $(\\Omega , \\mathcal {F})$ .", "Let $X:\\Omega \\rightarrow \\mathbb {R}^n$ be a measurable function with respect to $\\mathcal {F}$ , where $\\mathbb {R}^n$ is endowed with the Borel $\\sigma $ -algebra and Lebesgue measure.", "A family of measures $\\lbrace M_x: x\\in \\mathbb {R}^n \\rbrace $ on $(\\Omega , \\mathcal {F})$ is called a disintegration of $M$ over $X$ if for each set $A\\in \\mathcal {F}$ , the function $x\\mapsto M_x(A)$ is Borel measurable, and $\\int _{A} f(X) \\, dM= \\int _{\\mathbb {R}^n} f(x) M_x(A) \\,d^nx \\textrm { for each non-negative measurable function } f \\textrm {on } \\mathbb {R}^n.$ When (REF ) holds, we simply write $M=\\int _{\\mathbb {R}^n} M_x \\, d^nx$ .", "Lemma 2.2 In the setting of Definition REF , suppose $M$ is $\\sigma $ -finite and $X$ satisfies $M[X\\in B]=0$ for each Borel set $B\\subset \\mathbb {R}^n$ with zero Lebesgue measure.", "Then the disintegration of $M$ over $X$ exists.", "Moreover if $\\lbrace M_x: x\\in \\mathbb {R}^n \\rbrace $ and $\\lbrace M^{\\prime }_x: x\\in \\mathbb {R}^n \\rbrace $ are two disintegrations of $M$ over $X$ , then $M_x=M^{\\prime }_x$ for almost every $x$ .", "When $M$ is a probability measure, since the law $M_X$ of $X$ is absolutely continuous with respect to the Lebesgue measure, we can and must set $|M_x|$ to be the Radon-Nykodim derivative between the two measures.", "Since $\\mathbb {R}^n$ is Polish, we can and must set $\\lbrace M_x^\\#\\rbrace $ to be the regular conditional probability of $M$ given $X$ .", "This gives the desired existence and uniqueness of $M_x=|M_x|M_x^\\#$ .", "By scaling this gives Lemma REF when $M$ is finite.", "If $M$ is infinite, consider $\\Omega _n\\uparrow \\Omega $ with $M(\\Omega _n)<\\infty $ Applying Lemma REF to $M|_{\\Omega _n}$ and then sending $n\\rightarrow \\infty $ give the general result." 
], [ "Liouville quantum gravity and Liouville conformal field theory", "In this section we review the precise definition of some $\\gamma $ -LQG surfaces and Liouville fields mentioned in Section REF .", "For more background, we refer to [25], [73] and references therein, as well as the preliminary sections in [4], [6]." ], [ "Gaussian free field, Liouville field, and the DOZZ formula", "Let $\\mathcal {X}$ be the complex plane $ or the upper half plane $ H$.", "Suppose $ X$ is endowed with a smooth metric $ g$such that the metric completion of $ (X, g)$ is a compact Riemannian manifold.", "(We will not distinguish $ X$ with its compactification for notional simplicity.", ")Let $ H1(X)$ be the Sobolev space whose norm is the sum of the $ L2$-norm with respect to $ (X,g)$ and the Dirichlet energy.Let $ H-1(X)$ be the dual space of $ H1(X)$.", "Then $ H1(X)$ and $ H-1(X)$ do not depends on the choice of $ g$.$ We now recall two basic variants of the Gaussian free field (GFF).", "Consider the two functions $G_\\mathbb {H}(z,w) &= -\\log |z-w| - \\log |z-\\overline{w}| + 2 \\log |z|_+ + 2\\log |w|_+.", "\\quad &&z,w\\in \\mathbb {H}\\nonumber \\\\G_z,w) &= -\\log |z-w| + \\log |z|_+ + \\log |w|_+ , \\quad &&z,w\\in \\nonumber $ Here $|z|_+:=\\max \\lbrace |z|,1\\rbrace $ so that $\\log |z|_+ =\\max \\lbrace \\log |z|,0\\rbrace $ .", "Let $h_\\mathcal {X}$ be a random function taking values in $H^{-1}(\\mathcal {X})$ such that $(h_\\mathcal {X},f)$ is a centered Gaussian with variance $\\int f(z) G_\\mathcal {X}(z,w) f(w)d^2zd^2w$ for each $f\\in H^1(\\mathcal {X})$ .", "Then $h_ is a \\emph {whole plane GFF} and $ hH$ is a \\emph {free-boundary GFF} on $ H$, both of which are normalized to have mean zero along $ {zX: |z|=1 }$.We denote the law of $ hX$ by $ PX$.$ We now review the Liouville fields on $ and $ H$ following~\\cite [Section 2.2]{AHS-SLE-integrability}.\\begin{definition}Suppose (h, \\mathbf {c}) is sampled from P_[e^{-2Qc}dc] and set \\phi = h(z) -2Q \\log |z|_+ +\\mathbf {c}.Then we write \\mathrm {LF}_{ as the law of \\phi and call a sample from \\mathrm {LF}_{ a \\emph {Liouville field on }.", "}Suppose (h, \\mathbf {c}) is sampled from P_\\mathbb {H}\\times [e^{-Qc}dc] and set \\phi = h(z) -2Q \\log |z|_+ +\\mathbf {c}.Then we write \\mathrm {LF}_{\\mathbb {H}} as the law of \\phi and call a sample from \\mathrm {LF}_{\\mathbb {H}} a \\emph {Liouville field on \\mathbb {H}}.", "}We also need Liouville fields on with three insertions and those on \\mathbb {H} with one bulk insertion.\\begin{definition}Let (\\alpha _i,z_i) \\in \\mathbb {R}\\times for i = 1, \\dots , m, where m \\ge 1 and the z_i^{\\prime }s are distinct.Let (h, \\mathbf {c}) be sampled from C_{(\\alpha _i,z_i)_i} P_[e^{(\\sum _i \\alpha _i - 2Q)c}dc] whereC_{ ^{(\\alpha _i,z_i)_i}=\\prod _{i=1}^m |z_i|_+^{-\\alpha _i(2Q -\\alpha _i)} e^{\\sum _{i < j} \\alpha _i \\alpha _j G_z_i, z_j)}.Let \\phi (z) = h(z) -2Q \\log |z|_+ + \\sum _{i=1}^m \\alpha _i G_z, z_i) + \\mathbf {c}.We write \\mathrm {LF}_{ ^{(\\alpha _i,z_i)_i} for the law of \\phi and call a sample from \\mathrm {LF}_{ ^{(\\alpha _i,z_i)_i}a \\emph {Liouville field on with insertions (\\alpha _i,z_i)_{1\\le i\\le n}}.", "}\\begin{definition}For \\alpha \\in \\mathbb {R} and z_0 \\in \\mathbb {H}, let (h, \\mathbf {c}) be sampled from (2\\operatorname{Im}z_0)^{-\\alpha ^2/2} |z_0|_+^{-2\\alpha (Q-\\alpha )}P_\\mathbb {H}\\times [e^{(\\alpha -Q)c}dc].", "Let \\phi (z) = h(z) - 2Q \\log |z|_+ + \\alpha G_\\mathbb {H}(z, z_0) + \\mathbf {c}.", "We write 
\\mathrm {LF}_\\mathbb {H}^{(\\alpha , z_0)} for the law of \\phi and call a sample from \\mathrm {LF}_\\mathbb {H}^{(\\alpha , z_0)} the \\emph {Liouville field on \\mathbb {H} with insertion (\\alpha , z_0)}.\\end{definition}The measure \\mathrm {LF}_{ ^{(\\alpha _i,z_i)_{i}} formally equals \\prod _{i=1}^{m} e^{\\alpha _i \\phi (z_i)}\\mathrm {LF}_{ .In fact, after a regularization and limiting procedure, we will arrive at Definition~\\ref {def-RV-sph}; see~\\cite [Lemma 2.8]{AHS-SLE-integrability}.In the same manner we have \\mathrm {LF}_\\mathbb {H}^{(\\alpha , z_0)}=e^{\\alpha \\phi (z_0)} \\mathrm {LF}_\\mathbb {H}; see~\\cite [Lemma 2.2]{ARS-FZZ}.", "}}Fix \\gamma \\in (0,2), we now recall the quantum area and quantum length measure in \\gamma -LQG.Suppose h is a GFF sampled from P_\\mathcal {X} for \\mathcal {X}= or \\mathbb {H}.For \\varepsilon > 0 and z \\in \\mathcal {X}\\cup \\partial \\mathcal {X}, we write h_\\varepsilon (z) for the average of h on \\partial B_\\varepsilon (z) \\cap \\mathcal {X}, and define the random measure \\mu _h^\\varepsilon := \\varepsilon ^{\\gamma ^2/2} e^{\\gamma h_\\varepsilon (z)}d^2z on \\mathcal {X}, where d^2z is Lebesgue measure on \\mathcal {X}.", "Almost surely, as \\varepsilon \\rightarrow 0, the measures \\mu _h^\\varepsilon converge weakly to a limiting measure \\mu _h called the \\emph {quantum area measure} \\cite {shef-kpz,shef-wang-lqg-coord}.For \\mathcal {X}=\\mathbb {H}, we define the \\emph {quantum boundary length measure} \\nu _h:= \\lim _{\\varepsilon \\rightarrow 0} \\varepsilon ^{\\gamma ^2/4}e^{\\frac{\\gamma }{2} h_\\varepsilon (x)} dx, where h_\\varepsilon (x) is the average of h on \\partial B_\\varepsilon (x) \\cap \\mathbb {H}.The definition of quantum area and boundary length can clearly be extended to other variants of GFF such as the Liouville fields, possibly with insertions.", "}We now review the DOZZ formula that gives the Laplacian transform of the quantum area under \\mathrm {LF}_{(\\alpha _i, z_i)_i}.", "Following \\cite [Section 1.2]{krv-dozz}, we recall that \\Upsilon _{\\frac{\\gamma }{2}} is an analytic function on such that\\log \\Upsilon _{\\frac{\\gamma }{2}}(z) = \\int _0^\\infty \\mathopen {}\\mathclose {\\left(\\mathopen {}\\mathclose {\\left(\\frac{Q}{2} - z\\right)}^2e^{-t} - \\frac{(\\sinh ((\\frac{Q}{2} - z)\\frac{t}{2}))^2}{\\sinh (\\frac{t\\gamma }{4}) \\sinh (\\frac{t}{\\gamma })} \\right)} \\frac{dt}{t} \\quad \\text{ for }0 < \\operatorname{Re}(z) < Q.", "Moreover \\Upsilon _{\\frac{\\gamma }{2}} satisfies the following shift relations first observed by Teschner~\\cite {Teschner-DOZZ}:\\Upsilon _{\\frac{\\gamma }{2}}(z + \\frac{\\gamma }{2}) = \\frac{\\Gamma (\\frac{\\gamma }{2} z)}{\\Gamma (1-\\frac{\\gamma }{2}z)} (\\frac{\\gamma }{2})^{1-\\gamma z} \\Upsilon _{\\frac{\\gamma }{2}}(z) \\quad \\text{and} \\quad \\Upsilon _{\\frac{\\gamma }{2}}(z + \\frac{2}{\\gamma }) = \\frac{\\Gamma (\\frac{2}{\\gamma }z)}{\\Gamma (1 - \\frac{2}{\\gamma }z)} (\\frac{\\gamma }{2})^{\\frac{4}{\\gamma }z - 1} \\Upsilon _{\\frac{\\gamma }{2}}(z).", "}\\end{definition}\\begin{theorem}[{\\cite {krv-dozz}}]Suppose \\alpha _1, \\alpha _2, \\alpha _3 satisfy the \\emph {Seiberg bounds}\\begin{equation}\\sum _{i=1}^3 \\alpha _i > 2Q, \\qquad \\textrm {and}\\qquad \\alpha _i < Q\\text{ for }i=1,2,3.\\end{equation}Let (u_1, u_2, u_3) = (0, 1, e^{i\\pi /3}).", "Then with \\mu >0 and C^\\mathrm {DOZZ}_\\gamma (\\alpha _1,\\alpha _2,\\alpha _3) defined in~(\\ref {eq:DOZZ}), we have\\mathrm {LF}_{(\\alpha _i, u_i)_i}[ 
e^{-\\mu \\mu _\\phi (}] = \\frac{1}{2} C^\\mathrm {DOZZ}_\\gamma (\\alpha _1,\\alpha _2,\\alpha _3)\\mu ^{\\frac{2Q-\\alpha _1-\\alpha _2-\\alpha _3}{\\gamma }}.", "\\end{theorem}Here, the Seiberg bounds ensure that \\mathrm {LF}_{(\\alpha _i, z_i)_i}[ e^{-\\mu \\mu _\\phi (}] is finite, and the points u_1, u_2, u_3 are chosen such that |u_1-u_2|=|u_2-u_3|=|u_3-u_1|=1 for convenience.Moreover, as remarked in~ \\cite [Footnote 6]{krv-dozz}, the factor \\frac{1}{2} is included to match with the physics literature normalization.\\end{definition}$" ], [ "Quantum surface and quantum sphere", "A quantum surface is an equivalence class of pairs $(D, h)$ where $D$ is a planar domain and $h$ is a generalized function on $D$ .", "For $\\gamma \\in (0,2)$ , we say that $(D, h) \\sim _\\gamma (\\widetilde{D}, \\widetilde{h})$ if there is a conformal map $\\psi : \\widetilde{D} \\rightarrow D$ such that $\\widetilde{h} = h \\circ \\psi + Q \\log |\\psi ^{\\prime }|.$ We write $ (D, h)/{\\sim _\\gamma }$ as the quantum surface corresponding to $(D,h)$ .", "An embedding of a quantum surface is a choice of its representative.", "Both quantum area and quantum length measures are intrinsic to the quantum surface thanks to (REF ) as shown in [22], [71].", "We can also consider quantum surfaces decorated with other structures.", "For example, let $n\\in \\mathbb {N}$ and $\\mathcal {I}$ be an at most countable index set, consider tuples $(D, h, (\\eta _i)_{i\\in \\mathcal {I}}, z_1,\\cdots ,z_n)$ such that $D$ is a domain, $h$ is a distribution on $D$ , $\\eta _i$ are loops on $D$ and $z_i \\in D\\cup \\partial D$ .", "We say that $(D, h, (\\eta _i)_{i\\in \\mathcal {I}}, z_1,\\cdots ,z_n ) \\sim _\\gamma (\\widetilde{D}, \\widetilde{h},(\\widetilde{\\eta }_i)_{i\\in \\mathcal {I}}, \\widetilde{z}_1,\\cdots ,\\widetilde{z}_n)$ if there is a conformal map $\\psi : \\widetilde{D} \\rightarrow D$ such that (REF ) holds, $\\psi (\\widetilde{z}_i) = z_i$ for all $1\\le i\\le n$ , and $\\psi \\circ \\widetilde{\\eta }_i=\\eta _i$ for all $i\\in \\mathcal {I}$ .", "We call an equivalence class defined through ${\\sim _\\gamma }$ a decorated quantum surface, and likewise an embedding of a decorated quantum surface is a choice of its representative.", "We now recall the two-pointed quantum sphere defined in [19] following the presentation of [3], [4].", "Consider the horizontal cylinder $\\mathcal {C}$ obtained from $\\mathbb {R}\\times [0,2\\pi ]$ by identifying $(x,0) \\sim (x, 2\\pi )$ .", "Let $h_\\mathcal {C}(z)=h_e^z)$ for $z\\in \\mathcal {C}$ where $h_ be sampled from $ P. 
We call $h_\\mathcal {C}$ the GFF on $\\mathcal {C}$ normalized to have mean zero on the circle $\\lbrace \\operatorname{Re}z=0\\rbrace \\cap \\mathcal {C}$ .", "The field $h_\\mathcal {C}$ can be written as $h_\\mathcal {C}=h^{\\operatorname{1}}_\\mathcal {C}+h^{2}_\\mathcal {C}$ , where $h^{\\operatorname{c}}$ is constant on vertical circles $\\lbrace \\operatorname{Re}z=u\\rbrace \\cap \\mathcal {C}$ for each $u\\in \\mathbb {R}$ , and $h^{\\ell }$ has mean zero on all such circles.", "We call $h^{2}_\\mathcal {C}$ the lateral component of the GFF on $\\mathcal {C}$ .", "Definition 2.3 For $\\gamma \\in (0,2)$ , let $(B_s)_{s \\ge 0}$ be a standard Brownian motion conditioned on $B_{s} - (Q-\\gamma )s<0$ for all $s>0$ , and let $(\\widetilde{B}_s)_{s \\ge 0}$ be an independent copy of $(B_s)_{s \\ge 0}$ .", "Let $Y_t =\\mathopen {}\\mathclose {\\left\\lbrace \\begin{array}{ll}B_{t} - (Q -\\gamma )t & \\mbox{if } t \\ge 0 \\\\\\widetilde{B}_{-t} +(Q-\\gamma ) t & \\mbox{if } t < 0\\end{array}\\right.}", ".$ Let $h^1(z) = Y_{\\operatorname{Re}z}$ for each $z \\in \\mathcal {C}$ .", "Let $h^2_\\mathcal {C}$ be independent of $h^1$ and have the law of the lateral component of the GFF on $\\mathcal {C}$ .", "Let $\\hat{h}=h^1+h^2_\\mathcal {C}$ .", "Let $\\mathbf {c}\\in \\mathbb {R}$ be sampled from $ \\frac{\\gamma }{2} e^{2(\\gamma -Q)c}dc$ independent of $\\hat{h}$ and set $h=\\hat{h}+\\mathbf {c}$ .", "Let $\\mathrm {QS}_2$ be the infinite measure describing the law of the decorated quantum surface $(\\mathcal {C}, h , -\\infty , +\\infty )/{\\sim _\\gamma }$ .", "We call a sample from $\\mathrm {QS}_2$ a two-pointed quantum sphere.", "The un-pointed and three-pointed quantum spheres are defined from $\\mathrm {QS}_2$ as follows.", "Definition 2.4 Suppose the decorated quantum surface $(\\mathcal {C}, h , -\\infty , +\\infty )/{\\sim _\\gamma }$ is sampled from $\\mu _h(\\mathcal {C})^{-2} \\mathrm {QS}_2$ .", "We let $\\mathrm {QS}$ be the law of the quantum surface $(\\mathcal {C}, h)/{\\sim _\\gamma }$ .", "Suppose $(\\mathcal {C}, h , -\\infty , +\\infty )/{\\sim _\\gamma }$ is sampled from $\\mu _h(\\mathcal {C})\\mathrm {QS}_2$ .", "Let $z$ be sampled from the probability measure proportional to $\\mu _h$ .", "Then we let $\\mathrm {QS}_3$ be the law of $(\\mathcal {C},h, z, -\\infty , +\\infty )/{\\sim _\\gamma }$ .", "It is proved in [19] that $\\mathrm {QS}_2$ can be obtained from $\\mathrm {QS}$ by adding two marked points according to the quantum area measure.", "Namely $\\mathrm {QS}_2$ is invariant under re-sampling of its two marked points according to the quantum area.", "We can similarly define $\\mathrm {QS}_n$ for $n\\in \\mathbb {N}$ by adding quantum typical points to $\\mathrm {QS}$ but we only need $\\mathrm {QS}_2$ and $\\mathrm {QS}_3$ in this paper.", "It is shown in [2], [4] that the embedding of $\\mathrm {QS}_3$ in $ gives a Liouville field on $ with three $\\gamma $ -insertions modulo an explicit multiplicative constant.", "Theorem 2.5 ([4]) Let $\\phi $ be sampled from $\\mathrm {LF}_{(\\gamma , u_1),(\\gamma , u_2),(\\gamma , u_3)}$ where $(u_1, u_2, u_3) = (0, 1, e^{i\\pi /3})$ .", "Then the law of the decorated quantum surface $( \\phi ,u_1,u_2,u_3)/\\sim _\\gamma $ is $\\frac{2(Q-\\gamma )^2}{\\pi \\gamma }\\mathrm {QS}_3$ .", "Although $\\mathrm {QS}_2$ has two marked points, it also has a nice Liouville field description on the cylinder.", "Definition 2.6 Let $P_\\mathcal {C}$ be the law of the GFF on the cylinder $\\mathcal {C}$ defined above 
Definition REF .", "Let $\\alpha \\in \\mathbb {R}$ .", "Sample $(h, \\mathbf {c})$ from $P_\\mathcal {C}\\times [e^{(2\\alpha -2Q)c} \\, dc]$ and let $\\phi (z) = h(z) - (Q-\\alpha ) \\mathopen {}\\mathclose {\\left| \\operatorname{Re}z \\right|} + \\mathbf {c}$ .", "We write $\\mathrm {LF}_\\mathcal {C}^{(\\alpha ,\\pm \\infty )}$ as the law of $\\phi $ .", "Theorem 2.7 ([4]) Let $h$ be as in Definition REF so that the law of $(\\mathcal {C}, h, +\\infty , -\\infty )/{\\sim _\\gamma }$ is $\\mathrm {QS}_2$ .", "Let $T\\in \\mathbb {R}$ be sampled from the Lebesgue measure on $\\mathbb {R}$ independently of $h$ .", "Let $\\phi (z)=h (z+T)$ .", "Then the law of $ \\phi $ is given by $ \\frac{\\gamma }{4(Q-\\gamma )^2} \\mathrm {LF}_\\mathcal {C}^{(\\gamma ,\\pm \\infty )}$ ." ], [ "Quantum disk and the FZZ formula", "We first recall the law of the quantum boundary length under $\\mathrm {LF}_\\mathbb {H}^{(\\alpha , i)}$ obtained in [63] following the presentation of [6].", "Proposition 2.8 ([63]) For $\\alpha > \\frac{\\gamma }{2}$ , the law of the quantum length $\\nu _\\phi (\\mathbb {R})$ under $\\mathrm {LF}_\\mathbb {H}^{(\\alpha , i)}$ is $1_{\\ell >0} \\frac{2}{\\gamma }2^{-\\frac{\\alpha ^2}{2}} \\overline{U}(\\alpha )\\ell ^{\\frac{2}{\\gamma }(\\alpha -Q)-1} \\, d\\ell $ where $\\overline{U}(\\alpha ) = \\mathopen {}\\mathclose {\\left( \\frac{2^{-\\frac{\\gamma \\alpha }{2}} 2\\pi }{\\Gamma (1-\\frac{\\gamma ^2}{4})} \\right)}^{\\frac{2}{\\gamma }(Q-\\alpha )}\\Gamma ( \\frac{\\gamma \\alpha }{2}-\\frac{\\gamma ^2}{4}).$ Let $\\lbrace \\mathrm {LF}_\\mathbb {H}^{(\\alpha , i)}(\\ell ): \\ell \\rbrace $ be the disintegration of $\\mathrm {LF}_\\mathbb {H}^{(\\alpha , i)}$ over $\\nu _\\phi (\\mathbb {R})$ .", "Namely for each non-negative measurable function $f$ on $(0,\\infty )$ and $g$ on $H^{-1}(\\mathbb {H})$ , $\\mathrm {LF}_\\mathbb {H}^{(\\alpha , i)} [f(\\nu _\\phi (\\mathbb {R}))g(\\phi ) ] = \\int _0^\\infty f(\\ell ) \\mathrm {LF}_\\mathbb {H}^{(\\alpha , i)}(\\ell ) [g(\\phi )] \\, d\\ell .$ Although the general theory of disintegration only defines $ \\mathrm {LF}_\\mathbb {H}^{(\\alpha , i)}(\\ell )$ for almost every $\\ell \\in (0,\\infty )$ , the following lemma describes a canonical version of $\\mathrm {LF}_\\mathbb {H}^{(\\alpha , i)} (\\ell )$ for every $\\ell >0$ .", "Lemma 2.9 ([6]) Let $h$ be a sample from $P_\\mathbb {H}$ and $\\hat{h}(\\cdot )= h(\\cdot ) -2Q \\log \\mathopen {}\\mathclose {\\left|\\cdot \\right|}_+ +\\alpha G_\\mathbb {H}(\\cdot ,z)$ .", "The law of $\\hat{h}+\\frac{2}{\\gamma }\\log \\frac{\\ell }{\\nu _{\\widehat{h}}(\\mathbb {R})}$ under the reweighted measure $2^{-\\alpha ^2/2} \\frac{2}{\\gamma }\\ell ^{-1} \\mathopen {}\\mathclose {\\left(\\frac{\\ell }{\\nu _{\\widehat{h}}(\\mathbb {R})}\\right)}^{\\frac{2}{\\gamma }(\\alpha -Q)} P_\\mathbb {H}$ is a version of the disintegration $\\lbrace \\mathrm {LF}_\\mathbb {H}^{(\\alpha , i)}(\\ell ): \\ell >0 \\rbrace $ .", "We now recall the quantum disk with one generic bulk insertion mentioned in Section REF .", "Definition 2.10 ([6]) Fix $\\alpha >\\frac{\\gamma }{2}$ and $\\ell >0$ .", "We let $\\mathcal {M}_1^\\mathrm {disk}(\\alpha ;\\ell )$ be the law of $(\\mathbb {H}, \\phi , i)/{\\sim _\\gamma }$ where $\\phi $ is sampled from $\\mathrm {LF}_\\mathbb {H}^{(\\alpha , i)} (\\ell )$ .", "The quantum disk with quantum typical marked points introduced in [19] are defined similarly as $\\mathrm {QS}_n$ in Definitions REF and REF .", "We will not recall these definitions but record the 
following theorem from [6] as the working definition.", "Theorem 2.11 ([6]) Fix $\ell >0$ .", "Let $\mathrm {QD}_{1,0}(\ell )$ be the law of the quantum disk with boundary length $\ell $ and with one interior marked point defined in [19].", "Then $\mathrm {QD}_{1,0}(\ell ) = \frac{\gamma }{2\pi (Q-\gamma )^2}\mathcal {M}_1^\mathrm {disk}(\gamma ;\ell ).$ Similarly as in the sphere case, we can define the following variants of the quantum disk by adding or removing points from $\mathrm {QD}_{1,0}(\ell )$ .", "Definition 2.12 Fix $\ell >0$ .", "Let $(D,h,z)/{\sim _\gamma }$ be sampled from $\mathrm {QD}_{1,0}(\ell )$ .", "Then we let $\mathrm {QD}(\ell )$ be the law of $(D,h)/{\sim _\gamma }$ under $\mu _h(D)^{-1}\mathrm {QD}_{1,0}(\ell )$ .", "Let $(D,h,z)/{\sim _\gamma }$ be sampled from $\ell \mathrm {QD}_{1,0}(\ell )$ and sample $p\in \partial D$ from the probability measure proportional to the quantum boundary length measure.", "Then we define $\mathrm {QD}_{1,1}(\ell )$ to be the law of $(D,h,z, p)/{\sim _\gamma }$ .", "The FZZ formula is the analog of the DOZZ formula for $\mathrm {LF}^{ (\alpha ,z)}_\mathbb {H}$  proposed in [24] and proved in [6].", "We record the most convenient form for our purpose, which uses the modified Bessel function of the second kind $K_\nu (x)$ [18].", "One concrete representation of $K_\nu (x)$ in the range of our interest is the following [18]: $K_\nu (x) := \int _0^\infty e^{-x \cosh t} \cosh (\nu t) \, dt \quad \text{ for } x > 0 \text{ and } \nu \in \mathbb {R}.$ Theorem 2.13 ([6]) For $\alpha \in (\frac{\gamma }{2}, Q)$ and $\ell >0$ , let $A$ be the quantum area of a sample from $\mathcal {M}_1^\mathrm {disk}(\alpha ; \ell )$ .", "The law of $A$ under $\mathcal {M}_1^\mathrm {disk}(\alpha ; 1)^\#$ (i.e.", "the probability measure proportional to $\mathcal {M}_1^\mathrm {disk}(\alpha ; 1)$ ) is the inverse gamma distribution with density $1_{x>0} \frac{(4 \sin \frac{\pi \gamma ^2}{4})^{-\frac{2}{\gamma }(Q-\alpha )}}{\Gamma (\frac{2}{\gamma }(Q-\alpha ))} x^{-\frac{2}{\gamma }(Q-\alpha )-1} \exp \mathopen {}\mathclose {\left(-\frac{1}{4x \sin \frac{\pi \gamma ^2}{4}}\right)}.", "$ Moreover, recall $\overline{U}(\alpha )$ from Proposition REF .", "Then for $\mu >0$ we have $\mathcal {M}_1^\mathrm {disk}(\alpha ; \ell )[e^{-\mu A}] = \frac{2}{\gamma }2^{-\frac{\alpha ^2}{2}} \overline{U}(\alpha ) \ell ^{-1} \frac{2}{\Gamma (\frac{2}{\gamma }(Q-\alpha ))} \mathopen {}\mathclose {\left(\frac{1}{2} \sqrt{\frac{\mu }{\sin (\pi \gamma ^2/4)}} \right)}^{\frac{2}{\gamma }(Q-\alpha )}K_{\frac{2}{\gamma }(Q-\alpha )} \mathopen {}\mathclose {\left(\ell \sqrt{\frac{\mu }{\sin (\pi \gamma ^2/4)}} \right)}.", "$ The following integral identity for $K_\nu (x)$ is crucial in our computation of ${C^{\operatorname{CLE}}_\kappa (\lambda _1,\lambda _2,\lambda _3)}$ .", "Lemma 2.14 For any $c > 0$ and $\nu \in (-\frac{1}{2}, \frac{1}{2})$ , let $K_\nu (x)$ be the modified Bessel function of the second kind as in Theorem REF .", "Then $\int _0^\infty \frac{1}{\sqrt{y}} e^{-cy} K_\nu (cy) \, dy = \frac{\pi ^{3/2}}{\sqrt{2c} \cos (\pi \nu )}.", "$ We will prove the $c=1$ case; the general case then follows from a change of variables.", "Recall the integral definition of $K_\nu (x)$ in (REF ).", "Using $1+\cosh t = 2(\cosh (t/2))^2$ and the fact that $\cosh $ is even, we can express $\int _0^\infty \frac{1}{\sqrt{x}} e^{-x} 
K_\\nu (x) \\, dx$ as $\\iint _0^\\infty \\frac{1}{\\sqrt{x}} e^{-x (1 + \\cosh t)} \\cosh (\\nu t)\\, dx \\, dt = \\sqrt{\\pi }\\int _0^\\infty \\frac{\\cosh (\\nu t)}{\\sqrt{1+\\cosh t}}\\, dt = \\frac{1}{2}\\sqrt{\\pi /2} \\int _{-\\infty }^\\infty \\frac{\\cosh (\\nu t)}{\\cosh (t/2)}\\, dt.$ We interpret this as a contour integral bounding $\\mathbb {H}$ .", "The function $z \\mapsto \\frac{\\cosh (\\nu z)}{\\cosh (z/2)}$ has residues $(-1)^{k+1}(e^{(2k+1)\\nu \\pi i} + e^{-(2k+1)\\nu \\pi i} ) i$ at $(2k+1)\\pi i$ for $k = 0, 1, 2, \\dots $ , so by the residue theorem $\\int _{-\\infty }^\\infty \\frac{\\cosh (\\nu t)}{\\cosh (t/2)} \\, dt = \\sum _{k=0}^\\infty 2 \\pi i \\cdot (-1)^{k+1}(e^{(2k+1)\\nu \\pi i} + e^{-(2k+1)\\nu \\pi i} ) i = 2\\pi \\mathopen {}\\mathclose {\\left(\\frac{e^{\\nu \\pi i}}{1+e^{2\\nu \\pi i}} + \\frac{e^{-\\nu \\pi i}}{1+e^{-2\\nu \\pi i}} \\right)}.", "$ This is equal to $2\\pi (\\frac{2}{e^{\\nu \\pi i} + e^{-\\nu \\pi i}}) = \\frac{2\\pi }{\\cos \\pi \\nu }$ , yielding the lemma for $c=1$ as was needed." ], [ "SLE loop measure and the conformal welding of quantum disks", "We first recall the definition of Zhan's loop measure $\\operatorname{SLE}_\\kappa ^\\mathrm {loop}$ from [78] (see Theorem 4.2 there).", "Although we only need the fact that it is a conformally invariant infinite measure on unrooted simple loops that satisfies Theorem REF below, we include its definition for completeness.", "Given two distinct points $p,q\\in and $ (0,4]$, the \\emph {two-sided whole plane} $ SLE$, which we denote by $ SLEp q$ is the probability measure on pairs of curves $ (1,2)$ on $ connecting $p$ and $q$ where $\\eta _1$ is a so-called whole-plane $\\operatorname{SLE}_\\kappa (2)$ from $p$ to $q$ and $\\eta _2$ conditioned on $\\eta _1$ is a chordal $\\operatorname{SLE}_\\kappa $ in $\\widehat{\\backslash \\eta _1.Forgetting p and q, \\operatorname{SLE}^{p \\rightleftharpoons q}_\\kappa can be viewed as a measure on loops on .", "Given a loop \\eta sampled from \\operatorname{SLE}^{p \\rightleftharpoons q}_\\kappa , let \\operatorname{Cont}(\\eta ) be the (1+\\frac{\\kappa }{8})-dimensional Minkowski content,which exists a.s.\\ by~\\cite {lawler-rezai-nat}.\\begin{definition}The \\emph {SLE loop measure} \\operatorname{SLE}^{\\mathrm {loop}}_\\kappa is the infinite measure on loops on given by\\begin{equation}\\operatorname{SLE}_\\kappa ^\\mathrm {loop}= \\operatorname{Cont}(\\eta )^{-2} \\iint _{ |p-q|^{-2(1-\\frac{\\kappa }{8})} \\operatorname{SLE}^{p \\rightleftharpoons q}_\\kappa (d\\eta )\\, d^2p\\, d^2q.", "}\\end{equation}\\end{definition}We now review the conformal welding result established in~\\cite [Theorem 1.3]{AHS-SLE-integrability} for \\operatorname{SLE}^{\\mathrm {loop}}_\\kappa .", "}We first recall the notion of \\emph {conformal welding}.For concreteness, suppose $ S1$ and $ S2$ are two oriented Riemann surfaces, both of which areconformally equivalent to a planar domain whose boundary consists of finite many disjoint circles.For $ i=1,2$, suppose $ Bi$ is a boundary component of $ Si$ and $ i$ is a finite length measure on $ Bi$ with the same total length.Given an oriented Riemann surface $ S$ and a simple loop $$ on $ S$ with a length measure $$,we call $ (S,,)$ a \\emph {conformal welding} of $ (S1,1)$ and $ (S2,2)$ ifthe two connected components of $ S$ with their orientations inherited from $ S$ are conformally equivalent to $ S1$ and $ S2$, and moreover,both $ 1$ and $ 2$ agree with $$.$ We now introduce uniform conformal welding.", 
"Suppose $(S_1,\\nu _1)$ and $(S_2,\\nu _2)$ from the previous paragraph are such that for each $p_1\\in B_1$ and $p_2\\in B_2$ , modulo conformal automorphism there exists a unique conformal welding identifying $p_1$ and $p_2$ .", "Now let $\\mathbf {p}_1\\in B_1$ and $\\mathbf {p}_2\\in B_1$ be independently sampled from the probability measures proportional to $\\nu _1$ and $\\nu _2$ , respectively.", "We call the conformal welding of $(S_1,\\nu _1)$ and $(S_2,\\nu _2)$ with $\\mathbf {p}_1$ identified with $\\mathbf {p}_2$ their uniform conformal welding.", "Fix $\\kappa \\in (0,4)$ and $\\gamma =\\sqrt{\\kappa }\\in (0,2)$ .", "Recall the measures $\\mathrm {QS}$ and $\\mathrm {QD}(\\ell )$ for $\\ell > 0$ from Section REF that correspond to variants of the quantum sphere and disk in $\\gamma $ -LQG.", "Let $\\mathcal {D}_1$ and $\\mathcal {D}_2$ be quantum surfaces sampled from $\\mathrm {QD}(\\ell ) \\times \\mathrm {QD}(\\ell )$ .", "By Sheffield's work [68], viewed as oriented Riemann surfaces with the quantum length measure, almost surely the conformal welding of $\\mathcal {D}_1$ and $\\mathcal {D}_2$ is unique after specifying a boundary point on each surface.", "We write $\\mathrm {Weld}(\\mathrm {QD}(\\ell ),\\mathrm {QD}(\\ell ))$ as the law of the loop-decorated quantum surface obtained from the uniform conformal welding of $\\mathcal {D}_1$ and $\\mathcal {D}_2$ .", "Theorem 2.15 ([4]) Let $\\mathbb {F}$ be a measure on $H^{-1}($ such that the law of $( h)/{\\sim _\\gamma }$ is $\\mathrm {QS}$ if $h$ is sampled from $\\mathbb {F}$ .", "Let $\\mathrm {QS}\\otimes \\operatorname{SLE}^{\\mathrm {loop}}_\\kappa $ be the law of the decorated quantum surface $( h,\\eta )/{\\sim _\\gamma }$ when $(h,\\eta )$ is sampled from $\\mathbb {F}\\times \\operatorname{SLE}^{\\mathrm {loop}}_\\kappa $ .", "Then there exists a constant $C \\in (0, \\infty )$ such that $\\mathrm {QS}\\otimes \\operatorname{SLE}^{\\mathrm {loop}}_\\kappa = C\\int _0^\\infty \\ell \\cdot \\mathrm {Weld}(\\mathrm {QD}(\\ell ),\\mathrm {QD}(\\ell )) \\, d\\ell .$ Theorem REF implies that the law of the quantum length $L$ of the loop in $\\mathrm {QS}\\otimes \\operatorname{SLE}_\\kappa ^\\mathrm {loop}$ is $C \\ell |\\mathrm {QD}(\\ell )|^2 \\, d\\ell $ , and conditioned on $L$ the two quantum surfaces cut out by the loop are independent samples from $\\mathrm {QD}(L)^\\#$ ." 
], [ "The independent coupling of CLE and LQG on the disk", "In this section we review some recent results on CLE$_\\kappa $ coupled with $\\gamma $ -LQG disks, where $\\kappa \\in (\\frac{8}{3},4)$ and $\\gamma =\\sqrt{\\kappa }\\in (\\sqrt{8/3},2)$ .", "Suppose $(D,h)$ is an embedding of a sample from $\\mathrm {QD}$ .", "Let $\\Gamma $ be a $\\operatorname{CLE}_\\kappa $ on $D$ which is independent of $h$ .", "Then we call the decorated quantum surface $(D,h,\\Gamma )/{\\sim _\\gamma }$ a $\\operatorname{CLE}_\\kappa $ decorated quantum disk and denote its law by $\\mathrm {QD}\\otimes \\operatorname{CLE}_\\kappa $ .", "By the conformal invariance of $\\operatorname{CLE}_\\kappa $ , the measure $\\mathrm {QD}\\otimes \\operatorname{CLE}_\\kappa $ does not depend on the choice of the embedding of $(D,h)/{\\sim _\\gamma }$ .", "Fix $a>0$ .", "Recall the probability measure $\\mathrm {QD}(a)^{\\#}=|\\mathrm {QD}(a)|^{-1} \\mathrm {QD}(a)$ that corresponds to the quantum disk with boundary length $a$ .", "We define the probability measure $\\mathrm {QD}(a)^{\\#}\\otimes \\operatorname{CLE}_\\kappa $ in the same way as $\\mathrm {QD}\\otimes \\operatorname{CLE}_\\kappa $ with $\\mathrm {QD}(a)^{\\#}$ in place of $\\mathrm {QD}$ .", "We can similarly define measures such as $\\mathrm {QD}_{1,0}\\otimes \\operatorname{CLE}_\\kappa $ and $\\mathrm {QD}_{1,0}(a)^{\\#}\\otimes \\operatorname{CLE}_\\kappa $ .", "Let $(D,h,\\Gamma )$ be an embedding of a sample from $\\mathrm {QD}(a)^{\\#}\\otimes \\operatorname{CLE}_\\kappa $ .", "Given a loop $\\eta $ in $\\Gamma $ , let $D_\\eta $ be the bounded component of $\\eta $ , namely the region encircled by $\\eta $ .", "A loop $\\eta \\in \\Gamma $ is called outermost if it is not contained in any $D_{\\eta ^{\\prime }}$ for $\\eta ^{\\prime }\\in \\Gamma $ .", "Let $(\\ell _i)_{i\\ge 1}$ be the collection of the quantum lengths of the outermost loops of $\\Gamma $ listed in non-increasing order.", "Two crucial inputs to our paper are the law of $(\\ell _i)_{i\\ge 1}$ and the conditional law of the quantum surfaces encircled by the outermost loops conditioned on $(\\ell _i)_{i\\ge 1}$ .", "We summarize them as the two propositions below.", "Proposition 2.16 ([55], [7], [12]) Set ${\\beta }:= \\frac{4}{\\kappa } + \\frac{1}{2}\\in (\\frac{3}{2},2)$ .", "Let $(\\zeta _t)_{t\\ge 0}$ be a ${\\beta }$ -stable Lévy process whose Lévy measure is $1_{x>0} x^{-\\beta -1} \\, dx$ , so that it has no downward jumps.", "We denote its law by $\\mathbb {P}^\\beta $ .", "Let $\\tau _{-a}=\\inf \\lbrace t: \\zeta _t=-a\\rbrace $ .", "Let $ (x_i)_{i \\ge 1}$ be the sequence of the sizes of the upward jumps of $\\zeta $ on $[0,\\tau _{-a}]$ sorted in decreasing order.", "Then the law of $(\\ell _i)_{i \\ge 1}$ defined right above equals that of $ (x_i)_{i \\ge 1}$ under the reweighted probability $\\frac{\\tau _{-a}^{-1}\\mathbb {P}^\\beta }{\\mathbb {E}[\\tau _{-a}^{-1}]}$ .", "Proposition 2.17 ([55]) Conditioning on $(\\ell _i)_{i\\ge 1}$ , the conditional law of $\\lbrace (D_\\eta , h)/{\\sim _\\gamma }: \\eta \\textrm { is an outermost loop of }\\Gamma \\rbrace $ is given by independent samples from $\\mathrm {QD}(\\ell _i)^\\#$ .", "Here for each domain $U\\subset D$ , we write $(U, h|_{U})/{\\sim _\\gamma }$ as $(U, h)/{\\sim _\\gamma }$ for simplicity.", "Proposition REF is precisely Theorem 1.1 in [55].", "It was pointed out at the end of [12] that Proposition REF can be extracted from [55], [7], [12].", "The reason is that $(\\ell _i)_{i \\ge 1}$ and $ (x_i)_{i 
\ge 1}$ are two ways of describing the scaling limit of the outermost loop lengths of an $O(n)$ -loop-decorated planar map model.", "The former follows from [7], [55] and the latter follows from [12].", "We explain this in more detail in Appendix REF .", "We also need the following quantum zipper result for a CLE outermost loop on a quantum disk.", "Recall the measure $\mathrm {QD}_{1,0}(a)$ from Definition REF , which corresponds to the quantum disk with one interior marked point and boundary length $a$ .", "Recall the probability measure $\mathrm {QD}_{1,0}(a)^\#\otimes \operatorname{CLE}_\kappa $ defined at the beginning of this subsection.", "For a $\operatorname{CLE}_\kappa $ sample $\Gamma $ on $D$ and a domain $U\subset D$ , we write $\Gamma |_U$ for the subset of loops that lie in $U$ .", "Proposition 2.18 For $a>0$ , let $(D,h,\Gamma ,z)$ be an embedding of a sample from $\mathrm {QD}_{1,0}(a)^{\#}\otimes \operatorname{CLE}_\kappa $ .", "Let $\eta $ be the outermost loop of $\Gamma $ surrounding $z$ .", "Let $D_\eta $ and $A_\eta $ be the two connected components of $D\setminus \eta $ , where $z\in D_\eta $ .", "Let $\ell _h$ be the quantum boundary length measure on $\eta $ .", "Conditioned on $(h,\Gamma , \eta ,z)$ , let $w$ be a point on $\eta $ sampled from the probability measure proportional to $\ell _h$ .", "Now consider the joint distribution of $(D,h,\eta ,z,w)$ .", "Then conditioned on $\ell _h(\eta )$ , the decorated quantum surfaces $(D_\eta , h,z,w)/{\sim _\gamma }$ and $(A_\eta ,h, \Gamma |_{A_\eta },w)/{\sim _\gamma }$ are conditionally independent, and the conditional law of $(D_\eta , h,z,w)/{\sim _\gamma }$ is $\mathrm {QD}_{1,1}(\ell _h(\eta ))^{\#}$ .", "Proposition REF is essentially proved in [55]; see Appendix REF for more details."
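, "As a simple illustration of how Propositions REF and REF will be used below: conditioned on $(\ell _i)_{i\ge 1}$ , the total quantum area enclosed by the outermost loops of $\Gamma $ has the conditional law of $\sum _{i\ge 1}\ell _i^2 A_i$ , where $(A_i)_{i\ge 1}$ are independent copies of the quantum area of a sample from $\mathrm {QD}(1)^{\#}$ , since the quantum area of a sample from $\mathrm {QD}(\ell )^{\#}$ agrees in law with $\ell ^2$ times that of a sample from $\mathrm {QD}(1)^{\#}$ .", "This elementary observation is the starting point of the area computations in Section ."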
], [ "Quantum annulus", "In this section we introduce a quantum surface of annular topology which we call the quantum annulus.", "It is an intermediate step towards understanding the quantum pair of pants that we will use to prove Theorem  as outlined in Section REF .", "The quantum annulus will also be crucial to our proof of Theorem REF for the SLE loop.", "We assume $\\kappa \\in (\\frac{8}{3},4)$ and $\\gamma =\\sqrt{\\kappa }$ throughout this section.", "Fix $a>0$ .", "Suppose $(D, h, z)$ is an embedding of sample from $\\mathrm {QD}_{1,0}(a)$ .", "Let $\\Gamma $ be a $\\operatorname{CLE}_\\kappa $ on $D$ independent of $h$ .", "Recall that $\\mathrm {QD}_{1,0}\\otimes \\operatorname{CLE}_\\kappa $ is the law of the decorated quantum surface $(D, h, \\Gamma , z)/{\\sim _\\gamma }$ .", "Let $\\eta $ be the outermost loop of $\\Gamma $ surrounding $z$ .", "Let $\\mathfrak {l}_\\eta $ be the quantum length of $\\eta $ .", "To ensure the existence of the disintegration of $\\mathrm {QD}_{1,0}(a)\\otimes \\operatorname{CLE}_\\kappa $ over $\\mathfrak {l}_\\eta $ , we check the following fact.", "Lemma 3.1 For a Borel set $E\\subset \\mathbb {R}$ with zero Lebesgue measure, $\\mathrm {QD}_{1,0}(a)\\otimes \\operatorname{CLE}_\\kappa [\\mathfrak {l}_\\eta \\in E]=0$ .", "Let $(\\ell _i)_{i \\ge 1}$ be the quantum lengths of outermost loops in a sample from $\\mathrm {QD}_{1,0}(a) \\otimes \\operatorname{CLE}_\\kappa $ , ordered such that $\\ell _1 > \\ell _2 > \\cdots $ .", "By the explicit law of $(\\ell _i)_{i\\ge 1}$ from Proposition REF , for each $i > 0$ we have $\\mathrm {QD}_{1,0}(a) \\otimes \\operatorname{CLE}_\\kappa [\\ell _i \\in E] = 0$ .", "Since $\\lbrace \\mathfrak {l}_\\eta \\in E\\rbrace \\subset \\cup _i \\lbrace \\ell _i \\in E\\rbrace $ , we conclude the proof.", "Given Lemma REF , we can define the disintegration of $\\mathrm {QD}_{1,0}(a)\\otimes \\operatorname{CLE}_\\kappa $ over $\\mathfrak {l}_\\eta $ , which we denote by $\\lbrace \\mathrm {QD}_{1,0}(a)\\otimes \\operatorname{CLE}_\\kappa (\\ell ): \\ell \\in (0,\\infty )\\rbrace $ .", "We now define the quantum annulus.", "Definition 3.2 (Quantum annulus) Given $(D,h,\\Gamma , \\eta , z)$ defined right above, let $A_{\\eta }$ be the non-simply-connected component of $D\\setminus \\eta $ .", "For $a, b>0$ , let $\\widetilde{\\mathrm {QA}}(a,b)$ be the law of the quantum surface $(A, h)/{\\sim _\\gamma }$ under the measure $\\mathrm {QD}_{1,0}(a)\\otimes \\operatorname{CLE}_\\kappa (b)$ .", "Let $\\mathrm {QA}(a,b)$ be such that $b|\\mathrm {QD}_{1,0}(b)|\\mathrm {QA}(a,b)= \\widetilde{\\mathrm {QA}}(a,b)$ We call a sample from $\\mathrm {QA}(a,b)$ a quantum annulus.", "Remark 3.3 (No need to say “for almost every $b$ ”) Using the general theory of regular conditional probability, for each $a>0$ the measure $\\mathrm {QA}(a,b)$ is only well defined for almost every $b>0$ .", "The ambiguity does not affect any application of this concept in this paper because we will take integrations over $b$ ; see e.g.", "Proposition REF below.", "On the other hand, at the end of Appendix REF , we explain that there exists a canonical version of $\\mathrm {QA}(a,b)$ that is continuous in $(a,b)$ in an appropriate topology.", "For these reasons we omit the phrase “for almost every $b$ ” in statements concerning $\\mathrm {QA}(a,b)$ .", "Similarly as in $\\mathrm {Weld}(\\mathrm {QD}(a), \\mathrm {QD}(a))$ from Theorem REF , given a pair of independent samples from $\\mathrm {QA}(a,b)$ and $\\mathrm {QD}_{1,0}(b)$ , we can uniformly 
conformally weld them along the boundary component with length $b$ to get a loop-decorated quantum surface with one marked point.", "We write $\mathrm {Weld}(\mathrm {QA}(a,b),\mathrm {QD}_{1,0}(b))$ for the law of the resulting decorated quantum surface.", "Then Proposition REF can be reformulated as follows.", "Proposition 3.4 For $a>0$ , let $(D,h,\Gamma ,z)$ be an embedding of a sample from $\mathrm {QD}_{1,0}(a)\otimes \operatorname{CLE}_\kappa $ .", "Let $\eta $ be the outermost loop of $\Gamma $ surrounding $z$ .", "Then the law of the decorated quantum surface $(D,h,\eta ,z)/{\sim _\gamma }$ equals $\int _0^\infty b \mathrm {Weld}(\mathrm {QA}(a,b),\mathrm {QD}_{1,0}(b)) \, db$ .", "From Proposition REF and the definitions of $\widetilde{\mathrm {QA}}$ and uniform welding, the law of $(D,h,\eta ,z)/{\sim _\gamma }$ under $\mathrm {QD}_{1,0}(a)\otimes \operatorname{CLE}_\kappa (b)$ is $ \mathrm {Weld}(\widetilde{\mathrm {QA}}(a,b),\mathrm {QD}_{1,0}(b)^{\#})$ .", "By (REF ), this measure equals $\mathrm {Weld}(b|\mathrm {QD}_{1,0}(b)|\mathrm {QA}(a,b),\mathrm {QD}_{1,0}(b)^{\#})=b\mathrm {Weld}(\mathrm {QA}(a,b), \mathrm {QD}_{1,0}(b)).$ Integrating over $b$ and using the disintegration $\mathrm {QD}_{1,0}(a)\otimes \operatorname{CLE}_\kappa =\int _0^\infty \mathrm {QD}_{1,0}(a)\otimes \operatorname{CLE}_\kappa (b)\, db$ concludes the proof.", "The reason we consider $\mathrm {QA}$ instead of $\widetilde{\mathrm {QA}}$ in Definition REF is that it is in some sense more canonical.", "In particular, the total measure $|\mathrm {QA}(a,b)|$ has the following simple and symmetric form.", "This will be used to prove Theorem REF , where we will also show that $\mathrm {QA}(a,b)=\mathrm {QA}(b,a)$ .", "Proposition 3.5 There exists a constant ${\mathfrak {C}}\in (0,\infty )$ such that $|\mathrm {QA}(a,b)|= \frac{{\mathfrak {C}}}{\sqrt{ab} (a+b)}.$ The rest of the section is devoted to the proof of Proposition REF .", "In Section REF , we reduce Proposition REF to the setting of Proposition REF .", "In Section REF , we prove Proposition REF using the Lévy process in Proposition REF .", "The same Lévy process will be used to obtain the area distribution of $\mathrm {QA}(a,b)$ in Section ."
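, "As a quick consistency check of Proposition REF , taking total masses in Proposition REF gives $|\mathrm {QD}_{1,0}(a)| = \int _0^\infty b\, |\mathrm {QA}(a,b)|\, |\mathrm {QD}_{1,0}(b)| \, db$ , since $\operatorname{CLE}_\kappa $ is a probability measure and $|\mathrm {Weld}(\mathrm {QA}(a,b),\mathrm {QD}_{1,0}(b))|=|\mathrm {QA}(a,b)|\,|\mathrm {QD}_{1,0}(b)|$ .", "By Theorem REF and Proposition REF we have $|\mathrm {QD}_{1,0}(\ell )|\propto \ell ^{-4/\gamma ^2}$ , and substituting $b=au$ shows that with $|\mathrm {QA}(a,b)|$ as in Proposition REF both sides of this identity scale as $a^{-4/\gamma ^2}$ , the integral $\int _0^\infty u^{1-\beta }(1+u)^{-1}\,du$ being finite for $\beta =\frac{4}{\gamma ^2}+\frac{1}{2}\in (\frac{3}{2},2)$ ."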
], [ "Reduction of Proposition ", "In this section we reduce Proposition REF to the setting of Proposition REF in order to use the Levy process defined there.", "Recall that setting where $(D, h,\\Gamma )$ is an embedding from a sample of $\\mathrm {QD}(a)^\\#\\otimes \\operatorname{CLE}_\\kappa $ for some $a>0$ .", "Sample a loop $\\eta $ from the counting measure on $\\Gamma $ , and let $\\mathbb {M}_a$ be the law of the decorated quantum surface $(D, h, \\Gamma , \\eta )$ .", "In other words, consider $((D, h, \\Gamma )/{\\sim _\\gamma }, \\mathbf {n})$ sampled from the product measure $(\\mathrm {QD}(a)^\\#\\otimes \\operatorname{CLE}_\\kappa ) \\times \\mathrm {Count}_{\\mathbb {N}}$ where $\\mathrm {Count}_{\\mathbb {N}}$ is the counting measure on $\\mathbb {N}$ .", "Let $\\eta \\in \\Gamma $ be the outermost loop with the ${\\mathbf {n}}$ th largest quantum length.", "Then $\\mathbb {M}_a$ is the law of $(D, h, \\Gamma , \\eta )/{\\sim _\\gamma }$ .", "The following proposition is the analog of Proposition REF for $\\mathbb {M}_a$ .", "Proposition 3.6 Under $\\mathbb {M}_a$ , the law of $(D, h, \\eta )/{\\sim _\\gamma }$ is $\\frac{1}{|\\mathrm {QD}(a)|}\\int _0^\\infty b \\mathrm {Weld}(\\mathrm {QA}(a,b), \\mathrm {QD}(b)) \\, db.$ Let $\\mathbb {F}$ be a measure on $H^{-1}($ such that the law of $( h)/{\\sim _\\gamma }$ is $\\mathrm {QD}(a)$ if $h$ is a sample from $\\mathbb {M}$ .", "We write $\\operatorname{CLE}_\\kappa (d\\Gamma )$ as the probability measure for $\\operatorname{CLE}_\\kappa $ on $.", "Write $ Counto(d)$ as the counting measure on the set of outermost loops in $$.", "Then we have the following equality on measures:\\begin{equation}(1_E \\mathrm {Count}_{\\Gamma ^o}(d\\eta )) \\mu _h(d^2z)\\mathbb {F}(dh) \\operatorname{CLE}_\\kappa (d\\Gamma ) = \\mu _h|_{D_\\eta }(d^2z) \\mathrm {Count}_{\\Gamma ^o}(d\\eta )\\mathbb {F}(dh) \\operatorname{CLE}_\\kappa (d\\Gamma ),\\end{equation}where $ D$ is the simply-connected component of $$, and $ E = { z D}$.$ If $(h, \\Gamma , \\eta , z)$ is sampled according to the left hand side of (), then the law of $( h, \\Gamma , z)/{\\sim _\\gamma }$ is $\\mathrm {QD}_{1,0}(a) \\otimes \\operatorname{CLE}_\\kappa $ and $\\eta $ is the outermost loop around $z$ .", "Therefore we are in the setting of Proposition REF hence the law of $( h, \\eta , z)/{\\sim _\\gamma }$ is $\\int _0^\\infty b \\mathrm {Weld}(\\mathrm {QA}(a,b), \\mathrm {QD}_{1,0}(b)) \\, db$ .", "If $(h, \\Gamma , \\eta )/{\\sim _\\gamma }$ is sampled from $\\mathrm {Count}_{\\Gamma ^o}(d\\eta ) \\mathbb {F}(dh) \\operatorname{CLE}_\\kappa (d\\Gamma )$ , then the law of $(D, h, \\Gamma , \\eta )/{\\sim _\\gamma }$ is $|\\mathrm {QD}(a)|\\cdot \\mathbb {M}_a$ by the definition of $\\mathbb {M}_a$ .", "If we then weight by $\\mu _h(D_\\eta )$ and sample a point $z$ from $(\\mu _h|_{D_\\eta })^\\#$ , then the law of $(h, \\Gamma , \\eta , z)$ is given by the right hand side of (), so the law of $(D, h, \\eta , z)/{\\sim _\\gamma }$ is $\\int _0^\\infty b \\mathrm {Weld}(\\mathrm {QA}(a,b), \\mathrm {QD}_{1,0}(b)) \\, db$ .", "Unweighting by $\\mu _h(D_\\eta )$ and forgetting $z$ , we see that the law of $(D, h, \\eta )/{\\sim _\\gamma }$ under $|\\mathrm {QD}(a)|\\cdot \\mathbb {M}_a$ is $\\int _0^\\infty b \\mathrm {Weld}(\\mathrm {QA}(a,b), \\mathrm {QD}(b)) \\, db$ .", "Dividing by $|\\mathrm {QD}(a)|$ , we conclude the proof.", "Proposition REF immediately follows from its counterpart under $\\mathbb {M}_a$ .", "Proposition 3.7 The law of the quantum length of 
$\\eta $ under $\\mathbb {M}_a$ is $\\frac{{\\mathfrak {C}}1_{b>0} b |\\mathrm {QD}(b)|db}{\\sqrt{ab} (a+b)|\\mathrm {QD}(a)|} \\quad \\textrm {for some constant }{\\mathfrak {C}}\\in (0,\\infty ) \\textrm { not depending on } a.$ We now explain how Proposition REF yields Proposition REF and then prove it in Section REF [Proof of Proposition REF assuming Proposition REF ] By Proposition REF , the law of $\\ell _h(\\eta )$ under $\\mathbb {M}_a$ is $\\frac{1}{|\\mathrm {QD}(a)|} b |\\mathrm {QA}(a,b)| |\\mathrm {QD}(b)| \\, db$ .", "Comparing this to Proposition REF yields $|\\mathrm {QA}(a,b)| = \\frac{\\mathfrak {C}}{\\sqrt{ab}(a+b)}$ ." ], [ "Distribution of the loop length: proof of Proposition ", "We first reduce Proposition REF to a problem on the Levy process in Proposition REF .", "Let ${\\beta }= \\frac{4}{\\kappa } + \\frac{1}{2}=\\frac{4}{\\gamma ^2}+\\frac{1}{2}.$ Let $\\mathbb {P}^{\\beta }$ be the probability measure on càdlàg processes on $[0,\\infty )$ describing the law of a ${\\beta }$ -stable Lévy process with Lévy measure $1_{x>0} x^{-\\beta -1} \\, dx$ .", "Let $(\\zeta _t)_{t\\ge 0}$ be a sample from $\\mathbb {P}^{\\beta }$ .", "Let $J:=\\lbrace (x,t): t\\ge 0 \\textrm { and }\\zeta _t-\\zeta _{t^-}=x>0 \\rbrace $ be the set of jumps of $\\zeta $ .", "Given $J$ , let $({\\mathbf {b}}, {\\mathbf {t}})\\in J$ be sampled from the counting measure on $J$ .", "Namely $({\\mathbf {b}}, {\\mathbf {t}})$ is chosen uniformly randomly from $J$ similarly as $\\eta $ from the outermost loops of $\\Gamma $ .", "Let $M^\\beta $ be the law of $(\\zeta , {\\mathbf {b}},{\\mathbf {t}})$ , which is an infinite measure.", "For each $a>0$ , let $\\tau _{-a}=\\inf \\lbrace t: \\zeta _t=-a\\rbrace $ and $J_a=\\lbrace (x,t)\\in J: t\\le \\tau _{-a}\\rbrace $ .", "Let $M^{\\beta }_a$ be the restriction of the measure $M^{\\beta }$ to the event $\\lbrace {\\mathbf {t}}\\le \\tau _{-a} \\rbrace $ .", "Let $\\widetilde{M}^\\beta _a = \\frac{\\tau _{-a}^{-1}}{\\mathbb {E}^{\\beta }[\\tau _{-a}^{-1}]} M^{\\beta }_a$ , where $\\mathbb {E}^{\\beta }$ is the expectation with respect to $\\mathbb {P}^{\\beta }$ .", "Then by Proposition REF we have the following.", "Lemma 3.8 Suppose $(D,h,\\Gamma ,\\eta )$ and $\\mathbb {M}_a$ are as in Proposition REF .", "Let $\\ell _1 > \\ell _2 > \\dots $ be the quantum lengths of the outermost loops in $\\Gamma $ and $\\ell _h(\\eta )$ be the quantum length of the distinguished loop $\\eta $ .", "Then the joint law of $\\lbrace \\ell _i: i\\ge 1 \\rbrace $ and $\\ell _h(\\eta )$ under $\\mathbb {M}_a$ equals the joint law of $\\lbrace x: (x,t)\\in J \\rbrace $ and ${\\mathbf {b}}$ under the measure $\\widetilde{M}^\\beta _a$ defined right above.", "Under the measure $\\widetilde{M}_a^{\\beta }$ , the jump $({\\mathbf {b}},{\\mathbf {t}})$ is chosen uniformly from $J_a$ .", "Now Lemma REF follows from Proposition REF and the fact that $\\eta $ is chosen uniformly among all outermost loops of $\\Gamma $ .", "Given Lemma REF , Proposition REF immediately follows from the proposition below.", "Proposition 3.9 The law of ${\\mathbf {b}}$ under $\\widetilde{M}^\\beta _a$ is $\\frac{{\\mathfrak {C}}1_{b>0} }{(a+b)} \\mathopen {}\\mathclose {\\left( \\frac{a}{b} \\right)}^{{\\beta }+1} db \\quad \\textrm {for some constant }{\\mathfrak {C}}\\in (0,\\infty ) \\textrm { not depending on } a.$ [Proof of Proposition REF given Proposition REF ] By Definition REF and Proposition REF , we have $|\\mathrm {QD}(b)|/|\\mathrm {QD}(a)| =(a/b)^{\\frac{4}{\\gamma ^2}+2}= 
(a/b)^{{\\beta }+\\frac{3}{2}}.$ By Lemma REF and Proposition REF , the $\\mathbb {M}_a$ -law of the quantum length of $\\eta $ is $\\frac{{\\mathfrak {C}}}{(a+b)} \\mathopen {}\\mathclose {\\left( \\frac{a}{b} \\right)}^{{\\beta }+1} db= \\frac{{\\mathfrak {C}}b |\\mathrm {QD}(b)| db}{\\sqrt{ab} (a+b)|\\mathrm {QD}(a)|} \\quad \\textrm { with $ C$ in~(\\ref {eq:Levy-jump})}.", "$$$ Figure: Illustration of Lemma  and .", "The measure M β M^\\beta is ℙ β \\mathbb {P}^\\beta with a uniformly chosen jump (𝐛,𝐭)(\\mathbf {b}, \\mathbf {t}).", "Conditioned on (𝐛,𝐭)({\\mathbf {b}}, {\\mathbf {t}}), removing (𝐛,𝐭)({\\mathbf {b}},{\\mathbf {t}}) and concatenating the two pieces of (ζ t )(\\zeta _t) before and after 𝐭{\\mathbf {t}} gives a sample (ζ ˜ t ) t≥0 (\\widetilde{\\zeta }_t)_{t\\ge 0} of ℙ β \\mathbb {P}^\\beta .", "The event {𝐭<τ -a }\\lbrace {\\mathbf {t}}<\\tau _{-a} \\rbrace for ζ\\zeta becomes {𝐭<τ ˜ -a }\\lbrace {\\mathbf {t}}<\\widetilde{\\tau }_{-a} \\rbrace for ζ ˜\\widetilde{\\zeta }.We now prove Proposition REF .", "We first use Palm's Theorem for Poisson point process to give an alternative description of the measure $M^\\beta $ .", "See Figure REF for an illustration.", "Lemma 3.10 Let $\\widetilde{\\zeta }_t=\\zeta _t- 1_{t\\ge {\\mathbf {t}}} {\\mathbf {b}}$ .", "Then the $M^\\beta $ -law of $({\\mathbf {b}},{\\mathbf {t}})$ is $1_{b>0,t>0} b^{-\\beta -1} \\, db \\, dt$ .", "Conditioning on $({\\mathbf {b}},{\\mathbf {t}})$ , the conditional law of $\\widetilde{\\zeta }_t$ under $M^\\beta $ is $\\mathbb {P}^{\\beta }$ .", "Equivalently, the joint law of $({\\mathbf {b}},{\\mathbf {t}})$ and $(\\widetilde{\\zeta }_t)_{t\\ge 0}$ is the product measure $(1_{b>0,t>0} b^{-\\beta -1} \\, db \\, dt) \\times \\mathbb {P}^{\\beta }$ .", "By the definition of Levy measure, the jump set $J$ of $\\zeta $ is a Poisson point process on $(0,\\infty )^2$ with intensity measure $1_{x>0,t>0} x^{-\\beta -1} \\, dx \\, dt$ .", "Since $({\\mathbf {b}}, {\\mathbf {t}})$ is chosen from the counting measure on $J$ , by Palm's theorem (see e.g.", "[37]), the $M^{\\beta }$ -law of $({\\mathbf {b}}, {\\mathbf {t}})$ is the same as the intensity measure of $J$ , which is $1_{b>0,t>0} b^{-\\beta -1} \\, db \\, dt$ .", "Moreover, conditioning on $({\\mathbf {b}},{\\mathbf {t}})$ , the conditional law of $J\\setminus \\lbrace ({\\mathbf {b}}, {\\mathbf {t}}) \\rbrace $ is given by the original Poisson point process, which is the $\\mathbb {P}^{\\beta }$ -law of $J$ .", "Note that $(\\widetilde{\\zeta }_t)_{0\\le t< {\\mathbf {t}}}=(\\zeta _t)_{0\\le t<{\\mathbf {t}}}$ is measurable with respect to the jump set $\\lbrace (x,t)\\in J: t<{\\mathbf {t}}\\rbrace $ .", "Therefore, the conditional law of $(\\widetilde{\\zeta }_t)_{0\\le t< {\\mathbf {t}}}$ conditioning on $({\\mathbf {b}}, {\\mathbf {t}})$ is the $\\mathbb {P}^{\\beta }$ -law of $(\\zeta _t)_{0\\le t<{\\mathbf {t}}}$ .", "Similarly, conditioning on $({\\mathbf {b}}, {\\mathbf {t}})$ and $(\\widetilde{\\zeta }_t)_{0\\le t< {\\mathbf {t}}}$ , the conditional law of $(\\widetilde{\\zeta }_{t+{\\mathbf {t}}}-\\widetilde{\\zeta }_{{\\mathbf {t}}})_{t\\ge 0}$ under $M^\\beta $ is $\\mathbb {P}^{\\beta }$ .", "Concatenating $(\\widetilde{\\zeta }_t)_{0\\le t< {\\mathbf {t}}}$ and $(\\widetilde{\\zeta }_{t+{\\mathbf {t}}}-\\widetilde{\\zeta }_{{\\mathbf {t}}})_{t\\ge 0}$ we see that the conditional law of $\\widetilde{\\zeta }_t$ under $M^\\beta $ is $\\mathbb {P}^{\\beta }$ .", "Proposition REF is an immediate consequence of the following two lemmas.", 
"Lemma 3.11 Recall the expectation $\\mathbb {E}^{\\beta }$ for $\\mathbb {P}^{\\beta }$ .", "The law of ${\\mathbf {b}}$ under $\\widetilde{M}^\\beta _a$ in Proposition REF is $\\frac{\\mathbb {E}^{\\beta }\\mathopen {}\\mathclose {\\left[ {\\tau _{-a}}/{\\tau _{-a-b}} \\right]}}{\\mathbb {E}^{\\beta }[\\tau ^{-1}_{-a}]} b^{-{\\beta }-1} 1_{b>0}db.", "$ We start from $(\\zeta _t)_{t\\ge 0}$ and $({\\mathbf {b}}, {\\mathbf {t}})$ under $M^{\\beta }$ .", "Let $\\widetilde{\\zeta }_t=\\zeta _t- 1_{t\\ge {\\mathbf {t}}} {\\mathbf {b}}$ as in Lemma REF .", "Let $\\widetilde{\\tau }_{-\\ell } = \\inf \\lbrace t: \\widetilde{\\zeta }_t=-\\ell \\rbrace $ for each $\\ell >0$ .", "Then $\\tau _{-a}= \\widetilde{\\tau }_{-a+{\\mathbf {b}}}$ .", "Moreover, both the events $\\lbrace {\\mathbf {t}}<\\tau _{-a} \\rbrace $ and $\\lbrace {\\mathbf {t}}<\\widetilde{\\tau }_{-a} \\rbrace $ are the same as $ \\inf \\lbrace \\zeta _t: t\\in (0,{\\mathbf {t}})\\rbrace >-a$ , hence are equal.", "Now the measure $\\widetilde{M}^\\beta _a$ can be described as $\\frac{\\tau ^{-1}_{-a}}{\\mathbb {E}^{\\beta }[\\tau ^{-1}_{-a}]} M^{\\beta }_a= \\frac{\\tau ^{-1}_{-a}1_{{\\mathbf {t}}< \\tau _{-a}}}{\\mathbb {E}^{\\beta }[\\tau ^{-1}_{-a}]} M^{\\beta }= \\frac{ \\widetilde{\\tau }^{-1}_{-a-{\\mathbf {b}}}1_{{\\mathbf {t}}< \\widetilde{\\tau }_{-a}}}{\\mathbb {E}^{\\beta }[\\tau ^{-1}_{-a}]} M^{\\beta }.$ Integrating out ${\\mathbf {t}}$ and $(\\widetilde{\\zeta }_t)_{t\\ge 0}$ on the right side of (REF ) and using the joint law of $({\\mathbf {b}},{\\mathbf {t}})$ and $(\\widetilde{\\zeta }_t)_{t\\ge 0}$ from Lemma REF , we see that the $\\widetilde{M}^\\beta _a$ -law of ${\\mathbf {b}}$ is $\\frac{\\mathbb {E}^{\\beta }\\mathopen {}\\mathclose {\\left[ {\\tau _{-a}}/{\\tau _{-a-b}} \\right]}}{\\mathbb {E}^{\\beta }[\\tau ^{-1}_{-a}]} b^{-{\\beta }-1} 1_{b>0}db$ as desired.", "Lemma 3.12 Let $(\\zeta _t)_{t\\ge 0}$ be sampled from $\\mathbb {P}^{\\beta }$ and $\\tau _{-a}=\\inf \\lbrace t: \\zeta _t=-a \\rbrace $ for each $a>0$ .", "Then $\\mathbb {E}^{\\beta }\\mathopen {}\\mathclose {\\left[ {\\tau _{-a}}/{\\tau _{-a-b}} \\right]}= \\frac{a}{a+b} \\textrm { for each } a,b>0.$ Let $I_t= \\inf \\lbrace \\zeta _t: s\\in [0,t] \\rbrace $ .", "It is well-known that $(\\zeta _t-I_t)_{t\\ge 0}$ is a Markov process and we can consider its excursions away from 0.", "See e.g. 
[17].", "We write $\\underline{N}$ for the excursion measure, which is a measure on non-negative functions of the form $e:[0,T] \\rightarrow [0,\\infty )$ with $e(0)=e(T)=0$ .", "The excursions of $(\\zeta _t-I_t)_{t\\in [0,\\tau _{-a-b}]}$ can be viewed as a Poisson point process $\\mathrm {Exc}_{a+b}=\\lbrace (e,s): s\\le a+b \\rbrace $ with intensity measure $\\underline{N}\\times 1_{s\\in [0,a+b] } ds$ .", "For $(e,s)\\in \\mathrm {Exc}_{a+b}$ , let $\\sigma $ be the time when $(\\zeta _t - I_t)_{ t\\ge 0}$ starts the excursion corresponding to $e$ .", "Then $s = -I_\\sigma $ .", "Let $T(e)$ be the duration $T$ of $e$ .", "Then $\\tau _{-a-b}=\\sum _{ (e,s)\\in \\mathrm {Exc}_{a+b}} T(e)$ and $\\tau _{-a} = \\sum _{ (e,s)\\in \\mathrm {Exc}_{a+b}} T(e)1_{s\\le a}.$ By a general property of Poisson point processes, conditioning on $\\lbrace e: (e,s)\\in \\mathrm {Exc}_{a+b} \\rbrace $ , the conditional distribution of $\\lbrace s: (e,s)\\in \\mathrm {Exc}_{a+b} \\rbrace $ are a collection of uniformly distributed random variables in $(0,a+b)$ .", "Let $\\mathbb {E}_{a+b}$ represents the conditional expectation given $\\lbrace e: (e,s)\\in \\mathrm {Exc}_{a+b} \\rbrace $ .", "Then $\\mathbb {E}_{a+b}[\\tau _{-a}] = \\sum _{ (e,s)\\in \\mathrm {Exc}_{a+b}} T(e)\\mathbb {E}_{a+b}[1_{s\\le a}]= \\frac{a}{a+b} \\tau _{-a-b}.$ Since $ \\tau _{-a-b}$ is measurable with respect to $\\lbrace e: (e,s)\\in \\mathrm {Exc}_{a+b} \\rbrace $ , we have $\\mathbb {E}_{a+b}\\mathopen {}\\mathclose {\\left[ {\\tau _{-a}}/{\\tau _{-a-b}} \\right]}=\\frac{a}{a+b}$ .", "Further taking the expectation over $\\lbrace e: (e,s)\\in \\mathrm {Exc}_{a+b} \\rbrace $ we conclude the proof.", "[Proof of Proposition REF ] With respect to $\\mathbb {P}^\\beta $ , we have $\\tau _{-a}\\stackrel{d}{=} a^\\beta \\tau _{-1}$ by the scaling property of ${\\beta }$ -stable Lévy processes.", "Therefore $\\mathbb {E}^{\\beta }[\\tau ^{-1}_{-a}] = \\mathbb {E}^{\\beta }[\\tau ^{-1}_{-1}] a^{-{\\beta }}$ .", "Combined with Lemmas REF and REF we get (REF ) with ${\\mathfrak {C}}= 1/\\mathbb {E}^{{\\beta }}[\\tau ^{-1}_{-1}]$ .", "We conclude this section with three facts which are by-products of our proof of Proposition REF .", "They will be needed in Sections  and REF .", "The first one is a description of the quantum lengths of all the outermost loops in $\\Gamma $ except $\\eta $ .", "Proposition 3.13 For $a>0$ , let $(D,h,\\Gamma ,z)$ be an embedding of a sample from $\\mathrm {QD}_{1,0}(a)\\otimes \\operatorname{CLE}_\\kappa $ .", "Let $(\\ell _i)_{i\\ge 1}$ be the quantum lengths of the outermost loops in $\\Gamma $ and $\\ell _h(\\eta )$ the quantum length of the loop $\\eta $ surrounding $z$ .", "Then conditioned on $\\ell _h(\\eta )$ , the conditional law of $\\lbrace \\ell _i: \\ell _i\\ne \\ell _h(\\eta ), i\\ge 1 \\rbrace $ equals the $\\mathbb {P}^{\\beta }$ -law of $\\lbrace x: (x,t)\\in J_{a+\\ell _h(\\eta )} \\textrm { for some }t \\rbrace $ .", "Moreover, conditioned on $(\\ell _i)_{i \\ge 1}$ , the quantum surfaces cut out by these outermost loops apart from $\\eta $ are independent quantum disks with boundary lengths $\\lbrace \\ell _i : \\ell _i \\ne \\ell _h(\\eta ), i \\ge 1\\rbrace $ .", "Suppose we are in the setting of Proposition REF .", "By Lemma REF and the discussion above (REF ), under the measure $\\widetilde{M}^\\beta _a$ , the conditional law of $J_a\\setminus \\lbrace ({\\mathbf {b}},{\\mathbf {t}}) \\rbrace $ conditioning on ${\\mathbf {b}}$ is given by the $\\mathbb {P}^\\beta $ -law of 
$J_{a+{\\mathbf {b}}}$ .", "By Lemma REF , we get the following.", "In the setting of Proposition REF , let $\\ell _1 > \\ell _2 > \\dots $ be the quantum lengths of the outermost loops in $\\Gamma $ and $\\ell _h(\\eta )$ be the quantum length of the distinguished loop $\\eta $ .", "Under $\\mathbb {M}_a$ conditioned on $\\ell _h(\\eta )$ , the conditional law of $\\lbrace \\ell _i: \\ell _i\\ne \\ell _h(\\eta ), i\\ge 1 \\rbrace $ is given by the $\\mathbb {P}^{\\beta }$ -law of $\\lbrace x: (x,t)\\in J_{a+\\ell _h(\\eta )} \\textrm { for some }t \\rbrace $ .", "Moreover, further conditioning on $(\\ell _i)_{i \\ge 1}$ , the quantum surfaces cut out by the outermost loops apart from $\\eta $ are independent quantum disks with boundary lengths $\\lbrace \\ell _i : \\ell _h(\\eta ), i \\ge 1\\rbrace $ by Proposition REF .", "By Propositions REF and REF , the same holds with $\\mathbb {M}_a$ replaced by $\\mathrm {QD}_{1,0}(a)\\otimes \\operatorname{CLE}_\\kappa $ hence Proposition REF holds.", "The second fact is a simple corollary of Proposition REF and Lemma REF .", "Lemma 3.14 In the setting of Proposition REF , conditioning on $(D,\\Gamma , h, z)$ , sample $\\hat{\\eta }$ from the counting measure on the outermost loops of $\\Gamma $ except $\\eta $ .", "Let $(x_i)_{i\\ge 1}$ be the lengths of the outermost loops of $\\Gamma $ except $\\eta $ and $\\hat{\\eta }$ , ranked in decreasing order.", "Then for $b,c>0$ , the disintegration of the law of $\\lbrace x_i: i\\ge 1 \\rbrace $ over $\\lbrace \\ell _h(\\eta )=b,\\ell _h(\\hat{\\eta })=c\\rbrace $ is the law of $\\lbrace \\ell : (\\ell ,t)\\in J_{a+b+c} \\textrm { for some }t \\rbrace $ under $\\frac{\\mathfrak {C}}{\\sqrt{ab}(a+b)} b|\\mathrm {QD}_{1,0}(b)| c^{-\\beta -1} \\tau _{-a-b} d\\mathbb {P}^\\beta (d\\zeta ).$ By Propositions REF and REF , the disintegration of the law of $\\lbrace \\ell _i: \\ell _i\\ne \\ell _h(\\eta ), i\\ge 1 \\rbrace $ in Proposition REF is the law of $\\lbrace \\ell : (\\ell ,t)\\in J_{a+b} \\textrm { for some }t \\rbrace $ under $\\frac{\\mathfrak {C}}{\\sqrt{ab}(a+b)} b|\\mathrm {QD}_{1,0}(b)| d\\mathbb {P}^\\beta (d\\zeta ).$ Since we further remove the loop $\\hat{\\eta }$ chosen from counting measure, by Lemma REF , we get the additional factor $c^{-\\beta -1} \\tau _{-a-b}$ and the change from $J_{a+b}$ to $J_{a+b+c}$ for the jump set.", "The last fact we record is an extension of Lemma REF .", "Lemma 3.15 Let $(\\zeta _t)_{t\\ge 0}$ be sampled from $\\mathbb {P}^{\\beta }$ as in Lemma REF .", "Let $(x_i)_{i\\ge 1}$ be the set of jumps in $\\zeta $ before $\\tau _{-a-b}$ .", "Let $f$ be a non-negative measurable function on $(0,\\infty )$ .", "Then $\\mathbb {E}\\big [\\tau _{-a} \\prod _{i\\ge 1}f(x_i) \\big ]=a \\mathbb {E}\\big [\\prod _{i\\ge 1}f(x_i)\\big ] \\int T(e) \\prod _{x\\in J_e} f(x) \\underline{N} (d e),$ where $\\underline{N} (d e)$ is the excursion measure in the proof of Lemma REF , and for a sample $e$ from $\\underline{N} (d e)$ , $T(e)$ means the duration of $e$ and $J_e$ means the set of jumps in $e$ .", "Let $F(e)=\\prod _{x\\in J_e} f(x) $ .", "As in the proof of (REF ), $\\mathbb {E}_{a+b}\\big [\\tau _{-a} \\prod _{i\\ge 1}f(x_i) \\big ]= \\mathbb {E}_{a+b}\\big [\\tau _{-a} \\prod _{e\\in \\mathrm {Exc}_{a+b} }F(e) \\big ]=\\frac{a}{a+b} \\tau _{-a-b} \\prod _{e\\in \\mathrm {Exc}_{a+b} }F(e).$ Therefore $\\mathbb {E}\\big [\\tau _{-a} \\prod _{i\\ge 1}f(x_i) \\big ]= \\frac{a}{a+b} \\mathbb {E}\\big [\\tau _{-a-b} \\prod _{e\\in \\mathrm {Exc}_{a+b} }F(e)\\big ].$ Now 
Lemma REF follows from $\mathbb {E}\big [\tau _{-a-b} \prod _{e\in \mathrm {Exc}_{a+b} }F(e) \big ]=\mathbb {E}\big [\big (\sum _{e\in \mathrm {Exc}_{a+b} } T(e) \big ) \prod _{e\in \mathrm {Exc}_{a+b} }F(e) \big ]=(a+b)\mathbb {E}\big [ \prod _{e\in \mathrm {Exc}_{a+b} }F(e)\big ] \int T(e) F(e) \underline{N} (d e),$ where the first equality uses $\tau _{-a-b}=\sum _{e\in \mathrm {Exc}_{a+b} } T(e)$ and the second uses Palm's Theorem." ], [ "Area distribution of the quantum annulus", "In this section we prove the analog of the FZZ formula (Theorem REF ) for the quantum annulus $\mathrm {QA}(a,b)$ from Definition REF .", "It is an intermediate step towards understanding the area distribution of the quantum pair of pants, which is crucial to the proof of Theorem .", "We assume $\kappa \in (\frac{8}{3},4)$ and $\gamma =\sqrt{\kappa }$ throughout this section.", "Theorem 4.1 For $a>0,b>0$ , let $A$ be the quantum area of a sample from $\mathrm {QA}(a, b)$ .", "Then with $\mathfrak {C}$ the constant from Proposition REF we have $\mathrm {QA}(a, b)[e^{-\mu A}] = \mathfrak {C} \cdot \frac{e^{-(a+b)\sqrt{\mu /\sin (\pi \gamma ^2/4)}} }{\sqrt{ab} (a+b)} \quad \textrm {for }\mu \ge 0.$ Using results from Section REF and Section , Theorem REF follows from the proposition below.", "Proposition 4.2 Let $\beta = \frac{4}{\gamma ^2} + \frac{1}{2}$ and recall $\mathbb {P}^\beta $ from Proposition REF .", "Let $(\zeta _t)_{t\ge 0}$ be a stable Lévy process with index ${\beta }$ sampled from $\mathbb {P}^\beta $ .", "For $L>0$ , let $\tau _{-L} = \inf \lbrace t \: : \: \zeta _t \le -L\rbrace $ .", "Let $ (x_i)_{i \ge 1}$ be the collection of sizes of the jumps of $\zeta $ on $[0,\tau _{-L}]$ sorted in decreasing order.", "Let $(A_i)_{i\ge 1}$ be independent copies of the quantum area of a sample from $\mathrm {QD}(1)^\#$ , which are also independent of $(x_i)_{i\ge 1}$ .", "Then $\mathbb {E}[e^{-\mu \sum x_i^2 A_i}] = e^{-L \sqrt{\mu /\sin (\pi \gamma ^2/4)}} \quad \text{for }\mu \ge 0.$ [Proof of Theorem REF given Proposition REF ] Suppose we are in the setting of Definition REF where we have $(D,h,\Gamma ,\eta )$ and the annulus $A_\eta $ under the measure $\mathrm {QD}_{1,0}\otimes \operatorname{CLE}_\kappa $ .", "By Proposition REF , conditioned on $\ell _h(\eta ) = b$ , the quantum lengths of the remaining outermost loops in $\Gamma $ have the law of $(x_i)_{i \ge 1}$ in Proposition REF with $L = a+b$ , and further conditioning on these lengths, the enclosed regions are independent quantum disks with the given boundary lengths.", "Note that the quantum area of a sample from $\mathrm {QD}(x_i)^\#$ agrees in law with $x_i^2 A_i$ .", "Therefore, under $\mathrm {QD}_{1,0}(a)\otimes \operatorname{CLE}_\kappa $ , conditioning on $\ell _h(\eta ) = b$ , the conditional law of the quantum area of $A_\eta $ is given by $\sum _{i \ge 1} x_i^2 A_i$ in (REF ) with $L=a+b$ .", "This means that $\widetilde{\mathrm {QA}}(a,b)^\#[e^{-\mu A}] = e^{-(a+b)\sqrt{\mu /\sin (\pi \gamma ^2/4)}}, $ where $\widetilde{\mathrm {QA}}(a,b)$ is from Definition REF .", "Since $\widetilde{\mathrm {QA}}(a,b)^\#= \mathrm {QA}(a,b)^\#$ by definition, and $\mathrm {QA}(a,b) =|\mathrm {QA}(a,b)| \mathrm {QA}(a,b)^\#$ , Proposition REF gives the desired formula for $\mathrm {QA}(a, b)[e^{-\mu A}] $ in Theorem REF .", "The rest of this section is devoted to the proof of Proposition REF .", "Recall that the laws of the
$A_i$ 's are inverse gamma distributions [1], [6].", "Therefore Proposition REF is a statement about stable Lévy processes and inverse gamma distributions.", "But our proof crucially relies on LQG.", "We will first express $\mathbb {E}[e^{-\mu \sum x_i^2 A_i}] $ in terms of the quantum area of a weighted quantum disk in Section REF , and then finish the proof by linking the weighted quantum disk to the FZZ formula in Section REF ." ], [ "Quantum natural measure on the CLE carpet", "Given a $\operatorname{CLE}_\kappa $ on a domain with disk topology, its carpet is the set of points not surrounded by any loop.", "Fix $L>0$ .", "Suppose $(D,h,\Gamma )$ is an embedding of a sample from $\mathrm {QD}(L)^\# \otimes \operatorname{CLE}_\kappa $ .", "For $\varepsilon >0$ and an open set $U\subset D$ , let $N_{\varepsilon , h, \Gamma }(U)$ be the number of outermost loops of $\Gamma $ contained entirely in $U$ with quantum length greater than $\varepsilon $ .", "By [55], the random measure $ \varepsilon ^{\beta }N_{\varepsilon , h, \Gamma }$ on $D$ converges as $\varepsilon \rightarrow 0$ to a random measure supported on the carpet of $\Gamma $ , which we call the quantum natural measure on the carpet of $\Gamma $ .", "Let $Y$ be the total mass of this measure.", "By definition, $Y$ is intrinsic to the decorated quantum surface $(D,h,\Gamma )/{\sim _\gamma }$ .", "The following proposition reduces the proof of Proposition REF to understanding $\mathrm {QD}(L)^\# \otimes \operatorname{CLE}_\kappa $ weighted by $Y$ .", "Lemma 4.3 Given a sample from $\mathrm {QD}(L)^\# \otimes \operatorname{CLE}_\kappa $ , let $Y$ be the total mass of the quantum natural measure on the CLE carpet and $A$ be the total quantum area of the disk.", "Then $\mathbb {E}[e^{-\mu \sum x_i^2 A_i}]$ defined in Proposition REF equals $\frac{\mathrm {QD}(L)^\# \otimes \operatorname{CLE}_\kappa [Ye^{-\mu A}]}{\mathrm {QD}(L)^\# \otimes \operatorname{CLE}_\kappa [Y]} \, .$", "Consider $(x_i)_{i\ge 1}$ and $(A_i)_{i\ge 1}$ in Proposition REF under the measure $\frac{\tau _{-L}^{-1}\mathbb {P}^\beta }{\mathbb {E}[\tau _{-L}^{-1}]}$ instead of $\mathbb {P}^\beta $ .", "By Propositions REF and REF , we couple $(x_i)_{i\ge 1}$ and $(A_i)_{i\ge 1}$ under $\frac{\tau _{-L}^{-1}\mathbb {P}^\beta }{\mathbb {E}[\tau _{-L}^{-1}]}$ with a sample from $\mathrm {QD}(L)^\# \otimes \operatorname{CLE}_\kappa $ such that $(x_i)_{i\ge 1}$ are the quantum lengths of the outermost loops of $\operatorname{CLE}_\kappa $ sorted in decreasing order, and $x_i^2A_i$ is the quantum area of the region surrounded by the loop with quantum length $x_i$ .", "In this coupling the total quantum area $A$ of the disk equals $\sum _{i\ge 1} x_i^2 A_i$ .", "We now claim that $Y=\frac{1}{\beta }\tau _{-L}$ a.s.", "Given this claim, let $\mathbb {E}$ and $\mathbb {E}_{L}$ be the expectations over $\mathbb {P}^\beta $ and $\frac{\tau _{-L}^{-1}\mathbb {P}^\beta }{\mathbb {E}[\tau _{-L}^{-1}]}$ respectively.", "Then we have $\mathbb {E}[e^{-\mu \sum x_i^2 A_i}] = \frac{\mathbb {E}_L[ \tau _{-L}e^{-\mu A} ]}{\mathbb {E}_L[\tau _{-L}]} =\frac{\mathrm {QD}(L)^\# \otimes \operatorname{CLE}_\kappa [Ye^{-\mu A}]}{\mathrm {QD}(L)^\# \otimes \operatorname{CLE}_\kappa [Y]} \, .$", "To prove that $Y=\frac{1}{\beta }\tau _{-L}$ a.s., recall the Lévy process $\zeta $ sampled from $\mathbb {P}^\beta $ in Proposition REF .", "For each $T>0$ , let $N_\varepsilon (T)$ be the number of jumps of $\zeta
$ of size greater than $\varepsilon $ in the interval $[0,T]$ .", "Since the Lévy measure of $\zeta $ is $1_{x>0} x^{-\beta -1} \, dx$ , for each fixed $T$ we have that $\mathbb {P}^\beta $ -a.s. $\lim _{\varepsilon \rightarrow 0} \varepsilon ^\beta N_\varepsilon (T) =\lim _{\varepsilon \rightarrow 0} \varepsilon ^\beta \mathbb {E}[N_\varepsilon (T)] = \lim _{\varepsilon \rightarrow 0} \varepsilon ^\beta \int _0^T\int _{\varepsilon }^\infty x^{-\beta -1} \, dx\, dt = \frac{1}{\beta }T.$ Therefore $\mathbb {P}^\beta $ -a.s. $\lim _{\varepsilon \rightarrow 0} \varepsilon ^\beta N_\varepsilon (\tau _{-L}) = \frac{1}{\beta }\tau _{-L}$ , hence the same holds under $\frac{\tau _{-L}^{-1}\mathbb {P}^\beta }{\mathbb {E}[\tau _{-L}^{-1}]}$ .", "In the above coupling of $\frac{\tau _{-L}^{-1}\mathbb {P}^\beta }{\mathbb {E}[\tau _{-L}^{-1}]}$ and $\mathrm {QD}(L)^\# \otimes \operatorname{CLE}_\kappa $ , we have that $N_\varepsilon (\tau _{-L})$ is the number of outermost loops in the $\operatorname{CLE}_\kappa $ with quantum length greater than $\varepsilon $ .", "By the definition of the quantum natural measure on the CLE carpet we have $Y= \lim _{\varepsilon \rightarrow 0} \varepsilon ^{\beta }N_\varepsilon (\tau _{-L}) = \frac{1}{\beta }\tau _{-L}$ a.s. We record the following fact that will be used in Section REF , where we compute (REF ).", "Lemma 4.4 In the setting of Lemma REF , set $\beta = \frac{4}{\gamma ^2}+\frac{1}{2}$ .", "Then $\mathrm {QD}(L)^\# \otimes \operatorname{CLE}_\kappa [Y^p]<\infty \quad \textrm {for} \quad 0< p < 1+\frac{1}{\beta }.$ As explained in the proof of Lemma REF , the law of $Y$ under $\mathrm {QD}(L)^\# \otimes \operatorname{CLE}_\kappa $ is the same as the law of $\tau _{-L}$ under $\frac{\tau _{-L}^{-1}\mathbb {P}^\beta }{\mathbb {E}[\tau _{-L}^{-1}]}$ , where the expectation $\mathbb {E}$ is over $\mathbb {P}^\beta $ .", "Since the $\mathbb {P}^\beta $ -law of $(\tau _{-t})_{t>0}$ is that of a $\frac{1}{\beta }$ -stable subordinator (see e.g.", "[9]), the tail $\mathbb {P}^\beta [\tau _{-L} >x]$ is bounded by a constant multiple of $x^{-\frac{1}{\beta }}$ .", "Therefore $\mathbb {E}[\tau _{-L}^q] < \infty $ for $0<q < \frac{1}{\beta }$ , hence $\tau _{-L}$ has a finite $p$-th moment for all positive $p < 1+\frac{1}{\beta }$ under $\frac{\tau _{-L}^{-1}\mathbb {P}^\beta }{\mathbb {E}[\tau _{-L}^{-1}]}$ .", "This concludes the proof."
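, "As a sanity check on the renormalization used above (not part of the proof), the following minimal Python sketch illustrates the identity behind $Y=\frac{1}{\beta }\tau _{-L}$ : for jumps with intensity $1_{x>0}x^{-\beta -1}\,dx\,dt$ on $[0,T]$ , the count $N_\varepsilon (T)$ of jumps of size greater than $\varepsilon $ is Poisson with mean $T\varepsilon ^{-\beta }/\beta $ , so $\varepsilon ^{\beta }N_\varepsilon (T)$ concentrates around $T/\beta $ for small $\varepsilon $ .", "The values of $\gamma $ , $T$ and the cutoff $\varepsilon _0$ below are illustrative choices and are not taken from the paper.", "\begin{verbatim}
import numpy as np

# Monte Carlo sanity check (illustrative, not part of the proof):
# jumps of size > eps0 on [0, T] for the Levy measure x^{-beta-1} dx
# form a Poisson point process; we count the jumps above eps and rescale.
rng = np.random.default_rng(0)
gamma = np.sqrt(3.5)             # an arbitrary gamma with gamma^2 in (8/3, 4)
beta = 4.0 / gamma**2 + 0.5      # beta = 4/gamma^2 + 1/2
T, eps0 = 1.0, 1e-4              # time horizon and small-jump cutoff

n = rng.poisson(T * eps0**(-beta) / beta)            # number of jumps > eps0
jumps = eps0 * rng.uniform(size=n) ** (-1.0 / beta)  # Pareto(beta) jump sizes

for eps in [3e-4, 1e-3, 3e-3, 1e-2]:
    n_eps = np.count_nonzero(jumps > eps)
    print(f"eps={eps:.0e}: eps^beta * N_eps = {eps**beta * n_eps:.4f}, "
          f"T/beta = {T/beta:.4f}")
\end{verbatim}", "For small $\varepsilon $ the printed values cluster around $T/\beta $ , matching the almost sure limit $\lim _{\varepsilon \rightarrow 0} \varepsilon ^\beta N_\varepsilon (T) = \frac{1}{\beta }T$ used in the proof of Lemma REF above."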
], [ "FZZ formula and the quantum disk weighted by the carpet measure", "Recall the quantum disk measure $\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; L)$ from Definition REF whose area distribution is given by the FZZ formula in Theorem REF .", "In this section we prove the following proposition, which will allow us to compute (REF ), hence prove Proposition REF , using the FZZ formula.", "Proposition 4.5 Fix $L>0$ , let $(D, h, \\Gamma )$ be a sample from the probability measure proportional to $Y\\mathrm {QD}(L)\\otimes \\operatorname{CLE}_\\kappa $ where $Y$ is the total mass of the quantum natural measure on the carpet of $\\Gamma $ .", "Conditioning on $(D,h,\\Gamma )$ , let $z$ be a point sampled from the probability measure proportional to the the quantum natural measure on the carpet of $\\Gamma $ .", "Then the law of $(D, h, z)/{\\sim _\\gamma }$ is $\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; L)^\\#$ with $\\alpha = \\frac{\\gamma }{4} + \\frac{2}{\\gamma }$ .", "[Proof of Proposition REF given Proposition REF ] By Lemma REF and Proposition REF , we have $ \\mathbb {E}[e^{-\\mu \\sum x_i^2 A_i}] = \\mathcal {M}_1^\\mathrm {disk}(\\alpha ; L)^\\# [ e^{-\\mu A}],$ where $A$ is the quantum area of a sample from $\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; L)^\\#$ .", "By Theorem REF and the fact that $K_{\\frac{1}{2}}(x) = \\sqrt{\\pi /2} x^{-\\frac{1}{2}} e^{-x}$ [18], we get $\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; L) [ e^{-\\mu A}]= CL^{-1/2} \\exp (-L \\sqrt{\\mu /\\sin (\\pi \\gamma ^2/4)})$ for some $\\gamma $ -dependent constant $C>0$ .", "Setting $\\mu =0$ , we get $|\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; L) | = C L^{-1/2}$ .", "Therefore $\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; L)^\\# [ e^{-\\mu A}] = \\exp (-L \\sqrt{\\mu /\\sin (\\pi \\gamma ^2/4)})$ hence Proposition REF holds.", "The rest of this section is devoted to the proof of Proposition REF .", "We will work with the embeddings of $\\mathrm {QD}(L)$ on the upper half plane $\\mathbb {H}$ .", "Recall the Gaussian free field measure $P_\\mathbb {H}$ and the Liouville field measure $\\mathrm {LF}_\\mathbb {H}$ from Section REF .", "Sample $h$ from $P_\\mathbb {H}$ and let $\\Gamma $ be a $\\operatorname{CLE}_\\kappa $ independent of $h$ .", "For $\\varepsilon >0$ and an open set $U\\subset \\mathbb {H}$ , let $N_{\\varepsilon , h, \\Gamma }(U)$ be the number of outermost loops of $\\Gamma $ contained entirely in $U$ with quantum length greater than $\\varepsilon $ as in the beginning of Section REF .", "Then by absolutely continuity considerations, $ \\varepsilon ^{\\beta }N_{\\varepsilon , h, \\Gamma }$ converges almost surely to a random measure $M_{h,\\Gamma }$ which we call the quantum natural measure on the carpet of $\\Gamma $ induced by $h$ .", "Suppose $h$ is coupled with a continuous function $\\xi $ on $\\mathbb {H}$ , we can similarly define $M_{h+\\xi ,\\Gamma }$ .", "We write $\\operatorname{CLE}_\\kappa (d\\Gamma )$ as the probability measure for $\\operatorname{CLE}_\\kappa $ on $\\mathbb {H}$ .", "Let $\\mathbb {L}$ be the space of loop ensembles in $\\mathbb {H}$ , on which $\\operatorname{CLE}_\\kappa $ is a probability measure.", "We consider the measure $\\sigma (d^2z, d\\Gamma )$ on $\\mathbb {H}\\times \\mathbb {L}$ such that for each Borel set $B\\subset \\mathbb {H}$ and non-negative measurable function $g$ on $\\mathbb {L}$ , we have $\\int _{\\mathbb {H}\\times \\mathbb {L}} 1_U(z) g(\\Gamma ) \\sigma (d^2z, d\\Gamma ) = \\mathbb {E}[M_{h, \\Gamma }(U) g(\\Gamma )],$ where the expectation 
$\mathbb {E}$ is taken over $\mathbb {P}_\mathbb {H}\times \operatorname{CLE}_\kappa $ .", "The following lemma ensures that the measure $\sigma $ is locally finite.", "Lemma 4.6 Sample $h$ from $P_\mathbb {H}$ and let $\Gamma $ be a $\operatorname{CLE}_\kappa $ independent of $h$ .", "For any compact set $U\subset \mathbb {H}$ , we have $\mathbb {E}[ M_{h,\Gamma }(U)] <\infty $ .", "We postpone the proof of Lemma REF and outline the idea for our proof of Proposition REF .", "First, the measure $\operatorname{CLE}_\kappa (d\Gamma ) \,P_\mathbb {H}(dh)\, M_{h, \Gamma }(d^2z)$ should be thought of as $ {:e^{\alpha h(z)}:} \; \sigma (d^2z, d\Gamma ) P_\mathbb {H}(dh ) $ with $\alpha =\frac{\gamma }{4}+\frac{2}{\gamma }$ , where ${:e^{\alpha h(z)}:}$ corresponds to a regularization procedure.", "Then by the Girsanov theorem and the relation between $\mathrm {LF}_\mathbb {H}$ and $P_\mathbb {H}$ , we can relate $\operatorname{CLE}_\kappa (d\Gamma ) \,\mathrm {LF}_\mathbb {H}(d\phi )\, M_{\phi , \Gamma }(d^2z)$ with $\mathrm {LF}_\mathbb {H}^{(\alpha ,z)}$ ; see Proposition REF .", "Using the relation between $\mathrm {LF}_\mathbb {H}$ and $\mathrm {QD}$ we will get Proposition REF .", "We start by proving an effective version of our first assertion involving ${:e^{\alpha h(z)}:}$ .", "Lemma 4.7 Let $\alpha = \frac{\gamma }{4} + \frac{2}{\gamma }$ .", "Let $h$ be sampled from $P_\mathbb {H}$ and $\xi $ be a continuous function on $\mathbb {H}$ .", "Then $M_{h+\xi , \Gamma }(d^2z) = e^{\alpha \xi (z)} M_{h,\Gamma }(d^2z) \quad \textrm {almost surely as measures on } \mathbb {H}.$ If $\xi $ is a constant, by the scaling property of quantum length, for each $\varepsilon >0$ we have $N_{\varepsilon , h +\xi , \Gamma }(U) = N_{e^{-\frac{\gamma }{2} \xi }\varepsilon , h, \Gamma }(U)$ .", "Recall that $\beta =\frac{4}{\gamma ^2}+\frac{1}{2}$ hence $\alpha =\frac{\gamma }{2} \beta $ .", "Therefore $M_{h + \xi , \Gamma }(U) = \lim _{\varepsilon \rightarrow 0} \varepsilon ^\beta N_{\varepsilon , h+\xi , \Gamma }(U) = e^{\frac{\gamma }{2} \beta \xi }\lim _{\varepsilon \rightarrow 0} (e^{-\frac{\gamma }{2} \xi }\varepsilon )^\beta N_{e^{-\frac{\gamma }{2} \xi }\varepsilon , h, \Gamma }(U) = e^{\alpha \xi }M_{h, \Gamma }(U).$ If $\xi $ is not a constant, we can bound it from above and below by piecewise constant functions and take a limit.", "We leave the details to the reader.", "See [19] for a similar argument.", "Both our proofs of Lemma REF and Proposition REF rely on the uniform embedding of quantum surfaces introduced in [4].", "Let $\mathbf {m}_\mathbb {H}$ be a Haar measure on the conformal automorphism group $\mathrm {PSL}(2,\mathbb {R})$ of $\mathbb {H}$ .", "Namely, if $\mathfrak {g}$ is sampled from $\mathbf {m}_\mathbb {H}$ , then for each $f\in \mathrm {PSL}(2,\mathbb {R}) $ the laws of both $f \circ \mathfrak {g}$ and $\mathfrak {g} \circ f$ are $\mathbf {m}_\mathbb {H}$ .", "The Haar measure on $\mathrm {PSL}(2,\mathbb {R})$ is unique up to a multiplicative factor.", "Given a field $\phi _0\in H^{-1}(\mathbb {H})$ , the law of the field $\phi _0 \circ \mathfrak {g}^{-1} + Q \log |(\mathfrak {g}^{-1})^{\prime }|$ only depends on the quantum surface $(\mathbb {H},\phi _0)/{\sim _\gamma }$ , which is called the uniform embedding of $(\mathbb {H},\phi _0)/{\sim _\gamma }$ via $\mathbf {m}_\mathbb {H}$ .", "We refer to [4] and [6] for further
background on $\\mathbf {m}_\\mathbb {H}$ and the uniform embedding.", "Recall from Lemma REF the Liouville field measure $ \\mathrm {LF}_\\mathbb {H}(L)$ on $\\mathbb {H}$ with boundary length $L>0$ .", "Then $ \\mathrm {LF}_\\mathbb {H}(L)$ is the uniform embedding of a sample from $\\mathrm {QD}(L)$ , up to a multiplicative constant: Proposition 4.8 ([4]) Let $\\mathbb {F}$ be a measure on $H^{-1}(\\mathbb {H})$ such that the law of $(\\mathbb {H},\\phi _0)/{\\sim _\\gamma }$ is $\\mathrm {QD}(L)$ if $\\phi _0$ is sampled from $\\mathbb {F}$ .", "Now sample $(\\phi _0, \\mathfrak {g})$ from $\\mathbb {F}\\times \\mathbf {m}_\\mathbb {H}$ .", "Then the law of the field $\\phi _0 \\circ \\mathfrak {g}^{-1} + Q \\log |(\\mathfrak {g}^{-1})^{\\prime }|$ is $C \\mathrm {LF}_\\mathbb {H}(L)$ for some constant $C>0$ .", "This is [4] after disintegrating by quantum boundary length.", "We now give a variant of Proposition REF for decorated surfaces.", "Lemma 4.9 Fix $p\\in \\mathbb {R}$ .", "Sample $(\\phi ,\\Gamma )$ from the measure $(M_{\\phi , \\Gamma } (\\mathbb {H}) )^{p}\\,\\mathrm {LF}_\\mathbb {H}(L) \\, \\operatorname{CLE}_\\kappa (d\\Gamma )$ .", "Conditioning on $(\\phi ,\\Gamma )$ , sample $\\mathbf {z}$ from the probability measure proportional to $M_{\\phi , \\Gamma }$ .", "Then the joint law of $(\\mathbb {H}, \\phi , \\Gamma , \\mathbf {z})/{\\sim _\\gamma }$ and $\\mathbf {z}$ is the product measure $[Y^p\\mathrm {QD}(L)\\otimes \\operatorname{CLE}_\\kappa ] \\times [1_{z\\in \\mathbb {H}}\\frac{C}{\\operatorname{Im}(z)^2} d^2z]$ for some constant $C$ .", "If $\\mathfrak {g}$ is sampled from $\\mathbf {m}_\\mathbb {H}$ , then the law of $\\mathfrak {g}(i)$ is $\\operatorname{Im}_{z \\in \\mathbb {H}} \\frac{C}{\\operatorname{Im}(z)^2} d^2z$ for some $C>0$ ; see e.g.", "[6].", "By the invariance property of the Haar measure and fact that any two points on $\\mathbb {H}$ can be related by a conformal map, for any $u \\in \\mathbb {H}$ the law of $\\mathfrak {g}(u)$ is $\\operatorname{Im}_{z \\in \\mathbb {H}} \\frac{C}{\\operatorname{Im}(z)^2} d^2z$ .", "Now let $(\\phi _0, \\mathfrak {g})$ be sampled from $\\mathbb {F}\\times \\mathbf {m}_\\mathbb {H}$ as in Proposition REF and let $\\phi = \\phi _0 \\circ \\mathfrak {g}^{-1} + Q \\log |(\\mathfrak {g}^{-1})^{\\prime }|$ .", "By Proposition REF , we can choose $\\mathbf {m}_\\mathbb {H}$ such that the law of $\\phi $ is $\\mathrm {LF}_\\mathbb {H}(L)$ .", "Condition on $(\\phi _0, \\mathfrak {g})$ , sample a $\\operatorname{CLE}_\\kappa $ $\\Gamma _0$ on $\\mathbb {H}$ , and then sample a point $z_0$ from the probability measure proportional to quantum natural measure on the carpet of $\\Gamma _0$ induced by $\\phi _0$ .", "Let $\\Gamma $ be the image of $\\Gamma _0$ under $\\mathfrak {g}$ and $\\mathbf {z}=\\mathfrak {g}(z_0)$ .", "Then the law of $(\\phi ,\\Gamma ,\\mathbf {z})$ is as in Lemma REF with $p=0$ .", "On the other hand, by the previous paragraph, the joint law of $(\\mathbb {H}, \\phi _0, \\Gamma _0, z_0)/{\\sim _\\gamma }$ and $\\mathbf {z}$ is $ [\\mathrm {QD}(L)\\otimes \\operatorname{CLE}_\\kappa ] \\times [1_{z\\in \\mathbb {H}}\\frac{C}{\\operatorname{Im}(z)^2} d^2z]$ .", "Since $(\\mathbb {H}, \\phi _0, \\Gamma _0, z_0)/{\\sim _\\gamma } =(\\mathbb {H}, \\phi , \\Gamma , \\mathbf {z})/{\\sim _\\gamma }$ , we get Lemma REF with $p=0$ .", "For $p\\ne 0$ , since $Y=M_{\\phi ,\\Gamma } (\\mathbb {H})$ , weighting everything by $M_{\\phi ,\\Gamma } (\\mathbb {H})^p$ is the same as weighting by $Y^p$ .", "This 
gives the general case.", "[Proof of Lemma REF ] Let $(\phi ,\Gamma ,\mathbf {z})$ be as in Lemma REF .", "Then by Lemma REF , the mass of the event $\lbrace \mathbf {z} \in U\rbrace $ is given by $\mathrm {LF}_\mathbb {H}(L) \times \operatorname{CLE}_\kappa \mathopen {}\mathclose {\left[ \frac{M_{\phi , \Gamma }(U)}{M_{\phi , \Gamma }(\mathbb {H})} \times M_{\phi , \Gamma }(\mathbb {H})^{p} \right]}= \mathrm {QD}(L)\otimes \operatorname{CLE}_\kappa [Y^p]\times \int _U \frac{C}{\operatorname{Im}(z)^2}d^2z.$ Suppose $p\in (1, 1+\frac{1}{\beta })$ .", "Then by Lemma REF , the right hand side of (REF ) is finite.", "Therefore $\mathrm {LF}_\mathbb {H}(L) \times \operatorname{CLE}_\kappa \mathopen {}\mathclose {\left[M_{\phi , \Gamma }(U)^{p} \right]} \le \mathrm {LF}_\mathbb {H}(L) \times \operatorname{CLE}_\kappa \mathopen {}\mathclose {\left[M_{\phi , \Gamma }(U)M_{\phi , \Gamma }(\mathbb {H})^{p-1} \right]}<\infty .$ By the definition of $\mathrm {LF}_\mathbb {H}(L)$ in Lemma REF , we have $\mathrm {LF}_\mathbb {H}(L) \times \operatorname{CLE}_\kappa \mathopen {}\mathclose {\left[M_{\phi , \Gamma }(U)^{p} \right]}= \frac{2}{\gamma }L^{-\frac{4}{\gamma ^2} -2} \mathbb {E}[M_{\widehat{h} + \frac{2}{\gamma }\log \frac{L}{\nu _{\widehat{h}}(\mathbb {R})}, \Gamma }(U)^p \nu _{\widehat{h}}(\mathbb {R})^{\frac{4}{\gamma ^2}+1}],$ where the expectation is taken over $(h, \Gamma )$ sampled from $P_\mathbb {H}\times \operatorname{CLE}_\kappa $ , and $\widehat{h} := h - 2Q \log |\cdot |_+$ .", "Now we choose $p$ such that $\frac{4}{\gamma ^2}+1 - \frac{2}{\gamma }\alpha p=0$ with $\alpha = \frac{\gamma }{4} + \frac{2}{\gamma }$ .", "By Lemma REF , we have $\mathbb {E}[M_{\widehat{h} + \frac{2}{\gamma }\log \frac{L}{\nu _{\widehat{h}}(\mathbb {R})}, \Gamma }(U)^p \nu _{\widehat{h}}(\mathbb {R})^{\frac{4}{\gamma ^2}+1}]=L^{\frac{2}{\gamma }\alpha p} \mathbb {E}[M_{\widehat{h},\Gamma }(U)^p \nu _{\widehat{h}}(\mathbb {R})^{\frac{4}{\gamma ^2}+1 - \frac{2}{\gamma }\alpha p}]=L^{\frac{2}{\gamma }\alpha p} \mathbb {E}[M_{\widehat{h},\Gamma }(U)^p] .$ Since this choice of $p$ satisfies $p\in (1,1+ \frac{1}{\beta })$ , we have $\mathbb {E}[M_{\widehat{h},\Gamma }(U)] \le \mathbb {E}[M_{\widehat{h},\Gamma }(U)^p]^{1/p} <\infty $ by Jensen's inequality.", "Since $-2Q \log |\cdot |_+$ is bounded on the compact set $U\subset \mathbb {H}$ , using (REF ) we get $\mathbb {E}[M_{h,\Gamma }(U)]<\infty $ .", "We now relate $\operatorname{CLE}_\kappa (d\Gamma ) \,\mathrm {LF}_\mathbb {H}(d\phi )\, M_{\phi , \Gamma }(d^2z)$ and $\mathrm {LF}_\mathbb {H}^{(\alpha , z)}$ by adapting an argument in [66].", "The additional complication here is that the base measure $\sigma $ involves randomness from $\operatorname{CLE}$ .", "Proposition 4.10 As measures on triples $(\phi ,\Gamma , z)\in H^{-1}(\mathbb {H})\times \mathbb {L} \times \mathbb {H}$ , we have $M_{\phi , \Gamma }(d^2z) \,\operatorname{CLE}_\kappa (d\Gamma ) \,\mathrm {LF}_\mathbb {H}(d\phi )\, = \mathopen {}\mathclose {\left(\frac{2 \operatorname{Im}(z)}{|z|_+^4} \right)}^{\alpha ^2/2} \mathrm {LF}_\mathbb {H}^{(\alpha , z)}(d\phi )\, \sigma (d^2z, d\Gamma ).$ Let $U$ be a bounded Borel set such that $\overline{U}\subset \mathbb {H}$ .", "Let $g$ be a non-negative measurable function on $\mathbb {L}$ .", "Let $\rho $ be a smooth compactly supported function on $\mathbb {H}$ and let $\xi (z)=
\\int _\\mathbb {H}\\rho (y)G_\\mathbb {H}(y, z) dy$ where $G_\\mathbb {H}$ is the covariance kernel for $\\mathbb {P}_\\mathbb {H}$ .", "Let $\\mathbb {E}$ be the expectation over $\\mathbb {P}_\\mathbb {H}$ .", "Then $\\mathbb {E}\\mathopen {}\\mathclose {\\left[ \\int _{\\mathbb {H}\\times \\mathbb {L}} 1_U(z) g(\\Gamma )e^{(h,\\rho )} M_{h, \\Gamma }(d^2z)\\operatorname{CLE}_\\kappa (d\\Gamma )\\right]} &= \\mathbb {E}\\mathopen {}\\mathclose {\\left[ e^{(h,\\rho )} \\right]} \\mathbb {E}\\mathopen {}\\mathclose {\\left[\\int _{\\mathbb {H}\\times \\mathbb {L}} 1_U(z) g(\\Gamma )M_{h+\\xi , \\Gamma }(d^2z) \\operatorname{CLE}_\\kappa (d\\Gamma ) \\right]} \\\\&= \\mathbb {E}\\mathopen {}\\mathclose {\\left[ e^{(h,\\rho )} \\right]} \\mathbb {E}\\mathopen {}\\mathclose {\\left[\\int _{\\mathbb {H}\\times \\mathbb {L}} 1_U(z) g(\\Gamma )e^{\\alpha \\xi (z)} M_{h, \\Gamma }(d^2z) \\operatorname{CLE}_\\kappa (d\\Gamma ) \\right]} \\\\&= \\mathbb {E}\\mathopen {}\\mathclose {\\left[ e^{(h,\\rho )} \\right]} \\int _{\\mathbb {H}\\times \\mathbb {L}} 1_U(z) g(\\Gamma ) e^{\\alpha \\xi (z)} \\sigma (d^2z, d\\Gamma ) \\\\&= \\mathbb {E}\\mathopen {}\\mathclose {\\left[\\int _{\\mathbb {H}\\times \\mathbb {L}} 1_U(z) g(\\Gamma )e^{ (h+ \\alpha G_\\mathbb {H}(\\cdot , z),\\rho ) } \\sigma (d^2z, d\\Gamma ) \\right]}.$ Here, the first step follows from Girsanov's theorem, the second step follows from (REF ), the third step uses the definition of $\\sigma $ , and the last uses Fubini's theorem.", "Therefore for any non-negative measurable function $F$ on $H^{-1}(\\mathbb {H}) \\times \\mathbb {H}\\times \\mathbb {L}$ , we have $\\mathbb {E}\\mathopen {}\\mathclose {\\left[\\int _{\\mathbb {H}\\times \\mathbb {L}} F(h, z, \\Gamma ) M_{h, \\Gamma }(d^2z) \\operatorname{CLE}_\\kappa (d \\Gamma ) \\right]} = \\mathbb {E}\\mathopen {}\\mathclose {\\left[\\int _{\\mathbb {H}\\times \\mathbb {L}} F(h+ \\alpha G_\\mathbb {H}(\\cdot , z), z, \\Gamma ) \\,\\sigma (d^2z, d\\Gamma )\\right]}.$ Here we used the finiteness in Lemma REF and the fact that finite measures on $H^{-1}(\\mathbb {H})$ are characterized by the averages over $e^{(h,\\rho )}$ .", "By (REF ) we have $&\\int _\\mathbb {R}\\mathbb {E}\\mathopen {}\\mathclose {\\left[\\int _{\\mathbb {H}\\times \\mathbb {L}} F(h - 2Q \\log |z|_+ + c, z, \\Gamma ) e^{\\alpha (-2Q\\log |z|_+ +c)}M_{h, \\Gamma }(d^2z) \\operatorname{CLE}_\\kappa (d \\Gamma ) \\right]} e^{-Qc}\\, dc \\\\&= \\int _{\\mathbb {H}\\times \\mathbb {L}} \\int _\\mathbb {R}\\mathbb {E}\\mathopen {}\\mathclose {\\left[ F(h+ \\alpha G_\\mathbb {H}(\\cdot , z) - 2Q \\log |z|_+ + c, z, \\Gamma ) \\right]} e^{(\\alpha -Q)c}\\, dc\\,|z|_+^{-2Q\\alpha }\\sigma (d^2z, d\\Gamma ),$ By Definitions  and  for $\\mathrm {LF}_\\mathbb {H}$ and $\\mathrm {LF}_\\mathbb {H}^{(\\alpha , z)}$ , the two sides of the above equation equal the integral of $F$ over the two sides of (REF ), respectively.", "This concludes the proof.", "[Proof of Proposition REF ] By Proposition REF and the $p=1$ case of Lemma REF , if we sample $(\\phi , \\Gamma , \\mathbf {z})$ from the right side of (REF ), then the joint law of $(\\mathbb {H}, \\phi , \\Gamma , z)/{\\sim }_\\gamma $ and $ \\mathbf {z}$ is $[Y\\mathrm {QD}(L)\\otimes \\operatorname{CLE}_\\kappa ] \\times [1_{z\\in \\mathbb {H}}\\frac{C}{\\operatorname{Im}(z)^2} d^2z]$ .", "Disintegrating over $\\mathbf {z}$ and forgetting about $\\Gamma $ , we see that if $\\phi $ is sampled from $\\mathrm {LF}_\\mathbb {H}^{(\\alpha , z)}(L)^\\#$ , the law of the quantum 
surface $(\\mathbb {H}, \\phi , z)/{\\sim _\\gamma }$ is the probability measure proportional to $Y\\mathrm {QD}(L)\\otimes \\operatorname{CLE}_\\kappa $ regardless of $z$ .", "By the definition of $\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; L)$ , we conclude the proof." ], [ "Equivalence of two definitions of the SLE loop", "In this section, we prove Theorem REF , namely $\\widetilde{\\operatorname{SLE}}_\\kappa ^{\\mathrm {loop}}=C\\operatorname{SLE}_\\kappa ^{\\mathrm {loop}}$ for some $C\\in (0,\\infty )$ , where $\\operatorname{SLE}_\\kappa ^{\\mathrm {loop}}$ is Zhan's loop measure from Section REF , and $\\widetilde{\\operatorname{SLE}}_\\kappa ^{\\mathrm {loop}}$ is the loop measure obtained from choosing a loop in a full-plane $\\operatorname{CLE}_\\kappa $ from the counting measure.", "Theorem REF yields the following variant of Theorem REF .", "It will be used in Section  to prove the conformal welding result for the quantum pair of pants and quantum disks.", "Theorem 5.1 Let $\\mathbb {F}$ be a measure on $H^{-1}($ such that the law of $( h)/{\\sim _\\gamma }$ is $\\mathrm {QS}$ if $h$ is sampled from $\\mathbb {F}$ .", "Now sample $(h,\\Gamma ,\\eta )$ from $\\mathbb {F}(dh)\\mathrm {Count}_{\\Gamma }(d \\eta )\\operatorname{CLE}_\\kappa ^d\\Gamma )$ , where $\\operatorname{CLE}_\\kappa ^d\\Gamma )$ is the law of a full-plane $\\operatorname{CLE}_\\kappa $ and $\\mathrm {Count}_{\\Gamma }(d \\eta )$ is the counting measure on a sample $\\Gamma $ from $\\operatorname{CLE}_\\kappa ^d\\Gamma )$ .", "For $\\ell >0$ , let $ \\mathrm {Weld}(\\mathrm {QD}(\\ell )\\otimes \\operatorname{CLE}_\\kappa ,\\mathrm {QD}(\\ell )\\otimes \\operatorname{CLE}_\\kappa )$ be the quantum surface decorated by a loop ensemble and a distinct loop obtained from uniformly welding a pair of CLE-decorated quantum disks sampled from $[\\mathrm {QD}(\\ell )\\otimes \\operatorname{CLE}_\\kappa ] \\times [\\mathrm {QD}(\\ell )\\otimes \\operatorname{CLE}_\\kappa ]$ .", "Then the law of $(h,\\Gamma , \\eta )/{\\sim _\\gamma }$ is $C\\int _0^\\infty \\ell \\cdot \\mathrm {Weld}\\big (\\mathrm {QD}(\\ell )\\otimes \\operatorname{CLE}_\\kappa ,\\mathrm {QD}(\\ell )\\otimes \\operatorname{CLE}_\\kappa \\big ) \\, d\\ell \\quad \\textrm {for some }C\\in (0,\\infty ).$ [Proof of Theorem REF given Theorem REF ] This immediately follows from Theorems REF and REF , combined with the following property of full-plane $\\operatorname{CLE}_\\kappa $ from [45].", "Proposition 5.2 ([45]) Sample $(\\Gamma ,\\eta )$ from $\\mathrm {Count}_{\\Gamma }(d \\eta )\\operatorname{CLE}_\\kappa ^d\\Gamma )$ .", "Then conditioning on $\\eta $ , the conditional law of $\\Gamma \\setminus \\lbrace \\eta \\rbrace $ is given by two independent $\\operatorname{CLE}_\\kappa $ 's on the two components of $\\widehat{\\backslash \\eta .", "}$ Recall from Section REF the probability measures $\\mathcal {L}_\\kappa $ and $\\widetilde{\\mathcal {L}}_\\kappa $ that describe the shapes of $\\operatorname{SLE}_\\kappa ^{\\mathrm {loop}}$ and $\\widetilde{\\operatorname{SLE}}_\\kappa ^{\\mathrm {loop}}$ , respectively.", "Namely, given a loop $\\eta $ on $ surrounding 0, we have $ R={ |z|:z}$ and $ = {z: Rz}$.Suppose $$ is a sample from $ SLEloop$ restricted to the event that $$ surrounds $ 0$.Then the law of $ (, R)$ is a constant multiple of the product measure $ Ldt$, where $ dt$ is the Lebesgue measure on $ R$.The same holds for $ SLEloop$ and $ L$ when $ (83,4]$.By definition Theorem~\\ref {cor-loop-equiv} is equivalent to $ L=L$.$ The main device 
for the proof of $\\mathcal {L}_\\kappa =\\widetilde{\\mathcal {L}}_\\kappa $ is a natural Markov chain with stationary distribution $\\widetilde{\\mathcal {L}}_\\kappa $ .", "We still describe it in the cylinder coordinate for convenience.", "Consider the horizontal cylinder $\\mathcal {C}= \\mathbb {R}\\times [0,2\\pi ]$ with $x\\in \\mathbb {R}$ identified with $x + 2\\pi i$ .", "We also include $\\pm \\infty $ in $\\mathcal {C}$ so that $\\mathcal {C}$ is conformally equivalent to a Riemann sphere.", "Let $\\psi (z)=e^{-z}$ be a conformal map from $\\mathcal {C}$ to $\\lbrace \\infty \\rbrace $ with $\\psi (+\\infty )=0$ and $\\psi (-\\infty )=\\infty $ .", "Definition 5.3 Fix $\\kappa \\in (0,4]$ .", "Let $\\eta $ be a sample from $\\operatorname{SLE}_\\kappa ^{\\mathrm {loop}}$ restricted to the event that $\\eta $ surrounds 0, hence $\\psi ^{-1}(\\eta )$ is a loop on $\\mathcal {C}$ separating $\\pm \\infty $ .", "We let $\\operatorname{SLE}^{\\operatorname{sep}}_\\kappa (\\mathcal {C})$ be the law of $\\psi ^{-1}(\\eta )$ .", "We also write $\\mathcal {L}_\\kappa (\\mathcal {C}) := (\\psi ^{-1})_* \\mathcal {L}_\\kappa $ and $\\widetilde{\\mathcal {L}}_\\kappa (\\mathcal {C}) := (\\psi ^{-1})_* \\widetilde{\\mathcal {L}}_\\kappa $ .", "Let $\\mathrm {Loop}_0(\\mathcal {C})$ be the set of simple loops $\\eta $ on $\\mathcal {C}$ separating $\\pm \\infty $ such that $\\max \\lbrace \\operatorname{Re}z: z\\in \\eta \\rbrace = 0$ .", "The relation between $\\operatorname{SLE}^{\\operatorname{sep}}_\\kappa (\\mathcal {C})$ and $\\mathcal {L}_\\kappa (\\mathcal {C})$ is the following.", "Lemma 5.4 Let $\\eta $ be a sample from $\\operatorname{SLE}^{\\operatorname{sep}}_\\kappa (\\mathcal {C})$ .", "Then $\\eta $ can be uniquely written as $\\eta ^0+{\\mathbf {t}}$ where $\\eta ^0\\in \\mathrm {Loop}_0(\\mathcal {C})$ and ${\\mathbf {t}}\\in \\mathbb {R}$ .", "The law of $(\\eta ^0,{\\mathbf {t}})$ is $C\\mathcal {L}_\\kappa (\\mathcal {C})\\times dt$ for a constant $C\\in (0,\\infty )$ , where $dt$ is the Lebesgue measure on $\\mathbb {R}$ .", "If $\\eta $ is a sample from $\\operatorname{SLE}_\\kappa ^{\\mathrm {loop}}$ restricted to the event that $\\eta $ surrounds 0, the law of $(\\hat{\\eta }, \\log R_\\eta )$ is a constant multiple of $\\mathcal {L}_\\kappa \\times dt$ .", "Therefore Lemma REF follows from mapping $ to $ C$.$ Now we are ready to describe the Markov chain.", "By definition, we see that $\\widetilde{\\mathcal {L}}_\\kappa (\\mathcal {C})$ is a probability measure on $\\mathrm {Loop}_0(\\mathcal {C})$ .", "Given $\\eta ^0\\in \\mathrm {Loop}_0(\\mathcal {C})$ , let $\\mathcal {C}^+_{\\eta ^0}$ be the connected component of $\\mathcal {C}\\setminus \\eta ^0$ containing $+\\infty $ .", "Sample a $\\operatorname{CLE}_\\kappa $ on $\\mathcal {C}^+_{\\eta ^0}$ and translate its outermost loop surrounding $+\\infty $ to an element $\\eta ^1\\in \\mathrm {Loop}_0(\\mathcal {C})$ .", "Then $\\eta ^0 \\rightarrow \\eta ^1$ defines a Markov transition kernel on $\\mathrm {Loop}_0(\\mathcal {C})$ .", "Lemma 5.5 Fix $\\eta ^0\\in \\mathrm {Loop}_0(\\mathcal {C})$ and let $(\\eta ^i)_{i\\ge 1}$ be the Markov chain starting from $\\eta ^0$ .", "Then the law of $\\eta ^n$ tends to $\\widetilde{\\mathcal {L}}_\\kappa (\\mathcal {C})$ in total variational distance.", "In particular, $\\widetilde{\\mathcal {L}}_\\kappa (\\mathcal {C})$ is the unique stationary probability measure of the Markov chain.", "This is an immediate consequence of [45]; see Proposition REF for details.", "The 
starting point of our proof of $\mathcal {L}_\kappa (\mathcal {C})=\widetilde{\mathcal {L}}_\kappa (\mathcal {C})$ is the following conformal welding result for $\operatorname{SLE}^{\operatorname{sep}}_\kappa (\mathcal {C})$ .", "Fix $a>0$ .", "As in $\mathrm {Weld}(\mathrm {QD}(a), \mathrm {QD}(a))$ from Theorem REF , given a pair of independent samples from $\mathrm {QD}_{1,0}(a)$ , we can uniformly conformally weld them to get a loop-decorated quantum surface with two marked points, since each sample from $\mathrm {QD}_{1,0}(a)$ has an interior marked point.", "We write $\mathrm {Weld}(\mathrm {QD}_{1,0}(a),\mathrm {QD}_{1,0}(a))$ as the law of the resulting loop-decorated quantum surface with two marked points.", "Proposition 5.6 Fix $\kappa \in (\frac{8}{3},4)$ and $\gamma =\sqrt{\kappa }$ .", "Let $\mathbb {F}$ be the law of $h$ as in Definition REF , so that the law of $(\mathcal {C}, h,+\infty ,-\infty )/{\sim _\gamma }$ is the two-pointed quantum sphere $\mathrm {QS}_2$ .", "Now sample $(h,\eta )$ from $\mathbb {F} \times \operatorname{SLE}^{\operatorname{sep}}_\kappa (\mathcal {C})$ and write $\mathrm {QS}_2\otimes \operatorname{SLE}_\kappa ^{\operatorname{sep}}$ as the law of $(\mathcal {C}, h, \eta , +\infty ,-\infty )/{\sim _\gamma }$ .", "Then $\mathrm {QS}_2\otimes \operatorname{SLE}_\kappa ^{\operatorname{sep}}=C \int _0^\infty a\cdot \mathrm {Weld}(\mathrm {QD}_{1,0}(a), \mathrm {QD}_{1,0}(a)) \, da \quad \textrm { for some }C>0.$ The proof is similar to that of Proposition REF .", "Let $\mathbb {F}_0$ be a measure on $H^{-1}(\mathbb {C})$ such that the law of $(\mathbb {C}, h)/{\sim _\gamma }$ is the unmarked quantum sphere $\mathrm {QS}$ if $h$ is a sample from $\mathbb {F}_0$ .", "Let $(h,\eta , z,w)$ be sampled from $\mu _h(d^2z)\,\mu _h(d^2w)\,\mathbb {F}_0(dh) \operatorname{SLE}^{\mathrm {loop}}_\kappa (d\eta )$ .", "Let $E_{\operatorname{sep}}$ be the event that $\eta $ separates $z,w$ .", "Then the law of $(h,\eta , z,w)$ restricted to $E_{\operatorname{sep}}$ can be obtained in two ways: (i) Sample $(h,z,w)$ from $\mu _h(d^2z)\,\mu _h(d^2w)\,\mathbb {F}_0(dh)$ ; then sample $\eta $ from $\operatorname{SLE}^{\mathrm {loop}}_\kappa $ on $\mathbb {C}$ and restrict to the event that $\eta $ separates $z,w$ .", "(ii) Sample $(h,\eta )$ from $2\mu _h(D_\eta )\mu _h(D^{\prime }_\eta )\,\mathbb {F}_0(dh)\, \operatorname{SLE}^{\mathrm {loop}}_\kappa (d\eta )$ , where $D_\eta , D^{\prime }_\eta $ are the connected components of $\mathbb {C}\setminus \eta $ ; then sample $(z,w)$ from the probability measure proportional to $\mu _h|_{D_\eta }(d^2z) \mu _h|_{D^{\prime }_\eta }(d^2w) + \mu _h|_{D^{\prime }_\eta }(d^2z) \mu _h|_{D_\eta }(d^2w).$", "Recall Theorem REF and the definitions of $\mathrm {QS}_2$ and $\mathrm {QD}_{1,0}$ from Section REF .", "The law of $(h,\eta , z,w)/{\sim _\gamma }$ restricted to $E_{\operatorname{sep}}$ , from the first sampling, equals $\mathrm {QS}_2\otimes \operatorname{SLE}_\kappa ^{\operatorname{sep}}$ .", "From the second sampling, the same law equals $C \int _0^\infty a\cdot \mathrm {Weld}(\mathrm {QD}_{1,0}(a), \mathrm {QD}_{1,0}(a)) \, da$ for some $C\in (0,\infty )$ .", "In the rest of this section we consider the setting where Lemma REF and Proposition REF apply.", "For $\kappa \in (8/3,4)$ , let $(h,\eta )$ be a sample from $\mathbb {F} \times \operatorname{SLE}^{\operatorname{sep}}_\kappa (\mathcal {C})$ .", "Then $\eta $ can be uniquely written as $\eta ^0+{\mathbf {t}}$ where $\eta ^0\in \mathrm {Loop}_0(\mathcal {C})$ and ${\mathbf {t}}\in \mathbb {R}$ .", "By Lemma
REF , the law of $(\eta ^0,{\mathbf {t}})$ is $C\mathcal {L}_\kappa (\mathcal {C})\times dt$ .", "Conditioning on $(h,\eta )$ , sample a $\operatorname{CLE}_\kappa $ $\Gamma ^+$ in $\mathcal {C}^+_\eta $ , where $\mathcal {C}^+_\eta $ is the component of $\mathcal {C}\setminus \eta $ containing $+\infty $ .", "Let $(\eta _i)_{i \ge 1}$ be the set of loops in $\Gamma ^+$ separating $\pm \infty $ ordered from left to right.", "We write $\eta _i=\eta ^i+t_i$ where $\eta ^i\in \mathrm {Loop}_0(\mathcal {C})$ and $t_i\in \mathbb {R}$ .", "See Figure REF for an illustration.", "Figure: Left: Illustration of $\eta _i$ and $t_i$ . Right: Illustration of $\eta ^i$ .", "To prove $\mathcal {L}_\kappa (\mathcal {C})=\widetilde{\mathcal {L}}_\kappa (\mathcal {C})$ , consider the decorated quantum surfaces $S_0=(\mathcal {C},h,\eta , \pm \infty )/{\sim _\gamma }$ and $S_i=(\mathcal {C},h,\eta _i, \pm \infty )/{\sim _\gamma }$ for $i\ge 1$ .", "If $\mathcal {L}_\kappa (\mathcal {C})=\widetilde{\mathcal {L}}_\kappa (\mathcal {C})$ holds, then the stationarity of $\widetilde{\mathcal {L}}_\kappa (\mathcal {C})$ gives the stationarity of $(S_i)_{i\ge 0}$ .", "Although we cannot show this directly before proving $\mathcal {L}_\kappa (\mathcal {C})=\widetilde{\mathcal {L}}_\kappa (\mathcal {C})$ , we will prove using Proposition REF that the law of the subsurface of $S_i$ on the right side of $\eta _i$ indeed does not depend on $i$ .", "This is the content of Section REF .", "In Section REF we use Lemmas REF and REF to show that as $i \rightarrow \infty $ the law of $S_i$ converges to $\mathrm {QS}_2\otimes \widetilde{\operatorname{SLE}}^{\operatorname{sep}}_\kappa $ in a suitable sense, where $\mathrm {QS}_2\otimes \widetilde{\operatorname{SLE}}^{\operatorname{sep}}_\kappa $ is defined as $\mathrm {QS}_2\otimes \operatorname{SLE}^{\operatorname{sep}}_\kappa $ in Proposition REF with $\widetilde{\operatorname{SLE}}^{\mathrm {loop}}_\kappa $ in place of $\operatorname{SLE}^{\mathrm {loop}}_\kappa $ .", "This convergence would be immediate in the total variational sense if the law of $S_i$ were a probability measure instead of being infinite.", "To handle this subtlety we will prove some intermediate results that will also be useful in Section .", "Once the two steps in the previous paragraph are achieved, we will have that the right subsurface of $\mathrm {QS}_2\otimes \widetilde{\operatorname{SLE}}^{\operatorname{sep}}_\kappa $ has the same law as that of $\mathrm {QS}_2\otimes \operatorname{SLE}^{\operatorname{sep}}_\kappa $ .", "Then by left/right symmetry we can conclude that $\mathrm {QS}_2\otimes \widetilde{\operatorname{SLE}}^{\operatorname{sep}}_\kappa =\mathrm {QS}_2\otimes \operatorname{SLE}^{\operatorname{sep}}_\kappa $ , hence $\widetilde{\mathcal {L}}_\kappa (\mathcal {C})=\mathcal {L}_\kappa (\mathcal {C})$ .", "We carry out these final steps and complete the proof of Theorem REF in Section REF .", "In Section REF we supply a technical ingredient in the proof coming from the fact that $\mathrm {QS}_2\otimes \operatorname{SLE}^{\operatorname{sep}}_\kappa $ is an infinite measure.", "Stationarity of the subsurfaces on the right side of the loops", "Consider $(h,\eta )$ and $(\eta ^i)_{i\ge 0}$ as defined in and below Proposition REF .", "Let $S_0=(\mathcal {C},h,\eta , \pm \infty )/{\sim _\gamma }$ and $S_i=(\mathcal {C},h,\eta _i, \pm \infty )/{\sim _\gamma }$
for $i\\ge 1$ as discussed right above Section REF .", "For $i\\ge 0$ , let $\\ell _h(\\eta _i)$ be the quantum length of $\\eta _i$ .", "Lemma 5.7 The law of $\\ell _h(\\eta _i)$ is the same as that of $\\ell _h(\\eta )$ in Theorem REF for all $i\\ge 0$ .", "By definition the law of $\\ell _h(\\eta _0)$ is the same as that of $\\ell _h(\\eta )$ in Theorem REF .", "It remains to show that the law of $\\ell _h(\\eta _i)$ does not depends on $i\\ge 0$ .", "By Theorem REF and Proposition REF , the joint law of $\\ell _h(\\eta _0)$ and $\\ell _h(\\eta _1)$ is given by $C|\\mathrm {QD}_{1,0}(a)| |\\mathrm {QA}(a,b)| |\\mathrm {QD}_{1,0}(b)| \\,da\\,db \\quad \\textrm {for some }C\\in (0,\\infty ).$ Since $|\\mathrm {QA}(a,b)|=|\\mathrm {QA}(b,a)|$ by Proposition REF , the laws of $\\ell _h(\\eta _0)$ and $\\ell _h(\\eta _1)$ are the same.", "By Proposition REF , and the fact that $\\Gamma ^+$ restricted to $\\mathcal {C}^+_{\\eta _i}$ is still a $\\operatorname{CLE}_\\kappa $ , the conditional law of $\\ell _h(\\eta _i)$ given $\\ell _h(\\eta _{i-1})$ does not depend on $i$ .", "Since $\\ell (\\eta _0)$ and $\\ell _h(\\eta _1)$ have the same law, so do $\\ell _h(\\eta _i)$ and $\\ell _h(\\eta _{i+1})$ for all $i\\ge 1$ .", "Lemma 5.8 For $i\\ge 0$ , conditioning on $(h,\\eta _i)$ , let $z_i$ be a point on $\\eta _i$ sampled from the probability measure proportional to the quantum length measure.", "Let $S^+_i= (\\mathcal {C}^+_{\\eta _i}, h,+\\infty , z_i)/{\\sim _\\gamma }$ .", "Let $\\mathcal {C}^-_{\\eta _i}$ be the component of $\\mathcal {C}\\setminus \\eta _i$ containing $-\\infty $ and $S_i^-=(\\mathcal {C}^-_{\\eta _i},h, -\\infty , z_i)/{\\sim _\\gamma }$ .", "Then the conditional law of $S_i^+$ given $S_i^-$ is $\\mathrm {QD}_{1,1}(\\ell _{h}(\\eta _i))^{\\#}$ for all $i\\ge 0$ .", "When $i=0$ , this follows from Theorem REF .", "The $i\\ge 1$ case follows iteratively using Proposition REF .", "From Lemmas REF and REF , we see that the law of $S_i^+$ is a quantum disk whose law does not depend on $i$ .", "Moreover, conditional on $\\ell _h(\\eta _i)$ , $S^+_i$ and $S^-_i$ are conditionally independent and $S$ is obtained from the conformal welding of $S^+_i$ and $S^-_i$ .", "Convergence of $S_i$ and the proof of Theorem  REF We retain the setting in Section REF .", "For $z\\in \\mathcal {C}$ , let $\\phi ^0(z )=h(z + {\\mathbf {t}})$ and $\\phi ^i (z )=h(z + t_i)$ for $i\\ge 1$ .", "By definition, $(\\mathcal {C},\\phi ^i, \\eta ^i)/{\\sim _\\gamma } $ is an embedding of the decorated quantum surface $S_i$ for $i\\ge 0$ .", "By Theorem REF , we immediately have the following description of the law of $(\\phi ^i,\\eta ^i)$ .", "Lemma 5.9 Recall the Markov chain in Lemma REF and the Liouville field $\\mathrm {LF}_\\mathcal {C}^{(\\gamma , \\pm \\infty )}$ from Definition REF .", "Let $P_0=\\mathcal {L}_\\kappa (\\mathcal {C})$ .", "For $i\\ge 1$ , let $P_i$ be the distribution of the $i$ -th step of this Markov chain on $\\mathrm {Loop}_0(\\mathcal {C})$ with the initial distribution $P_0$ .", "Then the law of $(\\phi ^i,\\eta ^i)$ is $C\\mathrm {LF}_\\mathcal {C}^{(\\gamma , \\pm \\infty )}\\times P_i$ for all $i\\ge 0$ , where $C\\in (0,\\infty )$ is a constant not depending on $i$ .", "By definition the law of $(h, \\eta ^0, \\mathbf {t})$ is $\\mathbb {F} \\times P_0 \\times [C\\, dt]$ , where $\\mathbb {F}$ is the law of $h$ in Definition REF .", "By the definition of $P_i$ and the translation invariance of $[C\\, dt]$ , the law of $(h, \\eta ^i, t_i)$ is $\\mathbb {F} \\times P_i 
\\times [C\\, dt]$ .", "Finally, Theorem REF yields the result.", "From now on we write the measure $C\\mathrm {LF}_\\mathcal {C}^{(\\gamma , \\pm \\infty )}$ in Lemma REF as $\\mathrm {LF}_\\mathcal {C}$ for simplicity so that the law of $(\\phi ^i,\\eta ^i)$ is $\\mathrm {LF}_\\mathcal {C}\\times P_i$ .", "Suppose $\\mu $ and $\\nu $ are two measures on a measurable space $(\\Omega ,\\mathcal {F})$ .", "Recall that the total variational distance $d_{\\operatorname{tv}}(\\mu ,\\nu )$ is defined by $d_{\\operatorname{tv}}(\\mu ,\\nu )= \\sup _{A\\in \\mathcal {F}} |\\mu (A)-\\nu (A)|$ .", "From Lemma REF we know that $d_{\\operatorname{tv}}(P_i,\\widetilde{\\mathcal {L}}_\\kappa (\\mathcal {C}))\\rightarrow 0$ .", "We would like to say that $d_{\\operatorname{tv}}(\\mathrm {LF}_\\mathcal {C}\\times P_i,\\mathrm {LF}_\\mathcal {C}\\times \\widetilde{\\mathcal {L}}_\\kappa )\\rightarrow 0$ as well, but since $\\mathrm {LF}_\\mathcal {C}$ is an infinite measure we need a truncation.", "Recall that the quantum length $\\ell _h(\\eta )$ is a measurable function of $(\\phi ^0,\\eta ^0)$  [68].", "We write this as $\\ell _h(\\eta )=\\ell (\\phi ^0,\\eta ^0)$ .", "Then $\\ell _h(\\eta _i) = \\ell (\\phi ^i,\\eta ^i)$ a.s. for each $i\\ge 0$ .", "For $\\varepsilon >0$ , consider the event $E_\\varepsilon = \\lbrace \\ell (\\phi ,\\eta )> \\varepsilon \\rbrace $ .", "Then by Lemma REF we have $\\mathrm {LF}_\\mathcal {C}\\times P_i(E_\\varepsilon )=\\mathrm {LF}_\\mathcal {C}\\times P_0(E_\\varepsilon )\\quad \\textrm { for each } i\\ge 0 \\textrm { and } \\varepsilon >0.$ We claim that the total variational convergence holds after restriction to $E_\\varepsilon $ .", "Lemma 5.10 For each $\\varepsilon >0$ , $\\lim _{i\\rightarrow \\infty } d_{{\\operatorname{tv}}}(\\mathrm {LF}_\\mathcal {C}\\times P_i |_{E_\\varepsilon }, \\mathrm {LF}_\\mathcal {C}\\times \\widetilde{\\mathcal {L}}_\\kappa (\\mathcal {C}) |_{E_\\varepsilon }) =0$ .", "The proof of Lemma REF is not hard but technical so we postpone it to Section REF .", "We proceed to finish the proof of Theorem REF .", "[Proof of Theorem REF given Lemma REF ] Let $(\\phi , \\widetilde{\\eta })$ be a sample from $\\mathrm {LF}_\\mathcal {C}\\times \\widetilde{\\mathcal {L}}_\\kappa (\\mathcal {C})$ .", "Conditioning on $(\\phi ,\\widetilde{\\eta })$ , let $z\\in \\widetilde{\\eta }$ be sampled from the probability measure proportional to the quantum length measure of $\\widetilde{\\eta }$ .", "Recall the decorated quantum surfaces $S_i^+$ and $S_i^-$ from Lemma REF .", "We similarly define $\\widetilde{S}^+=(\\mathcal {C}_{\\widetilde{\\eta }}^+, \\phi ,+\\infty , z)/{\\sim _\\gamma }$ and $\\widetilde{S}^-=(\\mathcal {C}_{\\widetilde{\\eta }}^-, \\phi ,-\\infty , z)/{\\sim _\\gamma }$ , where $\\mathcal {C}_{\\widetilde{\\eta }}^+$ (resp.", "$\\mathcal {C}_{\\widetilde{\\eta }}^-$ ) is the component of $\\mathcal {C}\\setminus \\widetilde{\\eta }$ to the right (resp., left) of $\\widetilde{\\eta }$ .", "By Lemma REF , restricted to the event $E_\\varepsilon $ , the law of $(S_i^+,S_i^-)$ converge in total variational distance to that of $(\\widetilde{S}^+,\\widetilde{S}^-)$ .", "Since $\\varepsilon $ can be arbitrary, by Lemmas REF and REF and Equation (REF ), the joint law of $\\ell (\\phi , \\widetilde{\\eta })$ and $\\widetilde{S}^+$ under $\\mathrm {LF}_\\mathcal {C}\\times \\widetilde{\\mathcal {L}}_\\kappa (\\mathcal {C})$ is the same as that of $\\ell (\\phi ,\\eta )$ and $S^+_0$ in Lemma REF .", "Moreover, conditioning on $\\ell (\\phi 
,\\widetilde{\\eta })$ , the decorated quantum surfaces $\\widetilde{S}^+$ and $\\widetilde{S}^-$ are conditionally independent.", "Now we use the additional observation that both $\\mathrm {LF}_\\mathcal {C}$ and $\\widetilde{\\mathcal {L}}_\\kappa (\\mathcal {C})$ are invariant under the mapping $z\\mapsto -z$ from $\\mathcal {C}$ to $\\mathcal {C}$ (in the case of $\\widetilde{\\mathcal {L}}_\\kappa $ we also translate the reflected loop so it lies in $\\mathrm {Loop}_0(\\mathcal {C})$ ).", "Therefore $(\\widetilde{S}^+,\\widetilde{S}^-)$ must agree in law with $(\\widetilde{S}^-,\\widetilde{S}^+)$ .", "Hence $(\\widetilde{S}^+,\\widetilde{S}^-)$ agrees in law with $(S_0^+,\\widetilde{S}_0^-)$ in Lemma REF .", "Namely, if we uniformly conformal weld $\\widetilde{S}^+$ and $\\widetilde{S}^-$ , the resulting decorated quantum surface is $\\mathrm {QS}_2\\otimes \\operatorname{SLE}_\\kappa ^{\\operatorname{sep}}$ from Proposition REF .", "The conditional law of $(\\phi , \\widetilde{\\eta })$ given $(\\widetilde{S}^+, \\widetilde{S}^-)$ is obtained by conformally welding $\\widetilde{S}^+, \\widetilde{S}^-$ then embedding the decorated quantum surface in $(\\mathcal {C}, -\\infty , +\\infty )$ in a rotationally invariant way around the axis of the cylinder.", "The same holds for $(\\phi ^0, \\eta ^0)$ and $(S^+_0, S^-_0)$ .", "Consequently $(\\phi , \\widetilde{\\eta })$ and $(\\phi ^0, \\eta ^0)$ agree in distribution, i.e.", "$\\mathrm {LF}_\\mathcal {C}\\times \\widetilde{\\mathcal {L}}_\\kappa (\\mathcal {C})=\\mathrm {LF}_\\mathcal {C}\\times \\mathcal {L}_\\kappa (\\mathcal {C})$ where $\\mathrm {LF}_\\mathcal {C}\\times \\mathcal {L}_\\kappa (\\mathcal {C})$ is the law of $(\\phi ^0,\\eta ^0)$ by Lemma REF .", "Therefore $\\widetilde{\\mathcal {L}}_\\kappa =\\mathcal {L}_\\kappa $ as desired, hence $\\widetilde{\\operatorname{SLE}}_\\kappa ^{\\mathrm {loop}}=C \\operatorname{SLE}_\\kappa ^{\\mathrm {loop}}$ for some constant $C$ .", "This proves Theorem REF for $\\kappa \\in (8/3,4)$ .", "For $\\kappa =4$ , we can take the $\\kappa \\uparrow 4$ limit as explained in Lemmas REF and REF to get $\\widetilde{\\mathcal {L}}_4=\\mathcal {L}_4$ and conclude the proof.", "Remark 5.11 (Symmetry of the quantum annulus) As a byproduct of our proof, we can upgrade Proposition REF to $\\mathrm {QA}(a, b)=\\mathrm {QA}(b,a)$ .", "Indeed, let $A_{\\eta _0,\\eta _1}$ be the annulus bounded by $\\eta _0$ and $\\eta _1$ .", "Then the probability measure $\\mathrm {QA}(a,b)^{\\#}$ is the conditional law of the quantum surface $(A_{\\eta _0,\\eta _1}, h)/{\\sim _\\gamma }$ given $\\ell _h(\\eta _0)= a$ and $\\ell _h(\\eta _1)=b$ .", "Given Theorem REF , we see that the law of $(h,\\eta _0,\\eta _1)$ is invariant under the mapping $z\\mapsto -z$ of the cylinder.", "Therefore $\\mathrm {QA}(a, b)^\\#=\\mathrm {QA}(b,a)^\\#$ hence $\\mathrm {QA}(a, b)=\\mathrm {QA}(b,a)$ .", "Convergence in total variation: proof of Lemma  REF We first make a simple observation.", "Lemma 5.12 For $\\varepsilon >0$ , we have $(\\mathrm {LF}_\\mathcal {C}\\times \\widetilde{\\mathcal {L}}_\\kappa (\\mathcal {C}))[E_\\varepsilon ]\\le (\\mathrm {LF}_\\mathcal {C}\\times \\mathcal {L}_\\kappa (\\mathcal {C}))[E_\\varepsilon ]<\\infty $ .", "We consider a coupling $\\lbrace \\eta ^i:i\\ge 0\\rbrace $ of $(P_i)_{i\\ge 0}$ such that for each $i\\ge 1$ , $\\mathbb {P}[\\eta ^i\\ne \\eta ^0]$ achieves the minimum among all couplings of $P_i$ and $P_0$ .", "Then in this coupling we can find a subsequence $i_k$ such that $\\mathbb 
{P}[\\eta ^{i_k}\\ne \\eta ^0]\\le 1/k^2$ hence by Borel-Cantelli lemma $\\eta ^{i_k}= \\eta ^0$ a.s. for large enough $k$ .", "Now we take the product measure of $\\mathrm {LF}_\\mathcal {C}$ and this coupling.", "Then by Fatou's lemma we have $ (\\mathrm {LF}_\\mathcal {C}\\times \\widetilde{\\mathcal {L}}_\\kappa (\\mathcal {C}))[E_\\varepsilon ]\\le \\liminf _{k \\rightarrow \\infty }(\\mathrm {LF}_\\mathcal {C}\\times P_{i_k})[E_\\varepsilon ]$ .", "On the other hand, by Lemma REF , $(\\mathrm {LF}_\\mathcal {C}\\times P_{i})[E_\\varepsilon ] =(\\mathrm {LF}_\\mathcal {C}\\times P_0)[E_\\varepsilon ]=(\\mathrm {LF}_\\mathcal {C}\\times \\mathcal {L}_\\kappa (\\mathcal {C}))[E_\\varepsilon ]$ for all $i\\ge 0$ .", "This concludes the proof.", "Given $\\eta \\in \\mathrm {Loop}_0(\\mathcal {C})$ , let $\\mathcal {C}^+_\\eta $ be the connected component of $\\mathcal {C}\\setminus \\eta $ containing $+\\infty $ .", "Let $f: \\mathcal {C}_+\\rightarrow \\mathcal {C}^+_\\eta $ be a conformal map such that $f(+\\infty )=\\infty $ , where $\\mathcal {C}_+ := \\lbrace z \\in \\mathcal {C}\\: : \\: \\operatorname{Re}z > 0\\rbrace $ is the half cylinder.", "By standard conformal distortion estimates (e.g.", "[52]), there exists a positive constant $C_0$ not depending on $\\eta $ such that $|f(z) - z| < \\frac{C_0}{3}$ and $|f^{\\prime \\prime }(z)| < \\frac{1}{2} < |f^{\\prime }(z)| < 2$ for $\\operatorname{Re}z > \\frac{C_0}{3}$ (these are quantitative versions of the facts that $\\mathrm {CR}(\\exp (-\\eta ),0) \\approx 1$ and $\\lim _{z \\rightarrow +\\infty } f^{\\prime }(z) = 1$ ).", "We need the following estimate for embeddings of $\\mathrm {QD}_{1,0}(\\ell )^\\#$ on $\\mathcal {C}^+_\\eta $ that is uniform in $\\eta \\in \\mathrm {Loop}_0(\\mathcal {C})$ .", "Lemma 5.13 Suppose $\\eta \\in \\mathrm {Loop}_0(\\mathcal {C})$ and $\\phi $ is such that the law of $(\\mathcal {C}^+_\\eta ,\\phi ,+\\infty )/{\\sim _\\gamma }$ is $\\mathrm {QD}_{1,0}(\\ell )^\\#$ for some $\\ell > 0$ .", "Suppose $g$ is a smooth function on $\\mathcal {C}$ supported on $ \\lbrace z\\in \\mathcal {C}: \\operatorname{Re}z \\in [C_0,C_0+1] \\rbrace $ such that $\\int g(z) \\,d^2z = 1$ .", "Then $\\mathbb {P}[(\\phi , g) \\notin (-K + \\frac{2}{\\gamma }\\log \\ell , K + \\frac{2}{\\gamma }\\log \\ell )] \\rightarrow 0 \\textrm { as }K\\rightarrow \\infty $ where the convergence rate only depends on $g$ , but not on $\\eta $ , $\\ell $ or the precise law of $\\phi $ .", "If $( \\mathcal {C}^+_\\eta , \\phi , +\\infty )/{\\sim _\\gamma }$ is a sample from $\\mathrm {QD}_{1,0}(\\ell )^\\#$ then $( \\mathcal {C}^+_\\eta , \\phi -\\frac{2}{\\gamma }\\log \\ell , +\\infty )/{\\sim _\\gamma }$ is a sample from $\\mathrm {QD}_{1,0}(1)^\\#$ .", "Therefore it suffices to prove that if the law of $( \\mathcal {C}^+_\\eta , \\phi , +\\infty )/{\\sim _\\gamma }$ is $\\mathrm {QD}_{1,0}(1)^\\#$ , then $\\mathbb {P}[(\\phi , g) \\notin (-K, K)] \\rightarrow 0 \\textrm { as }K\\rightarrow \\infty $ where the convergence rate only depends on $g$ .", "To prove (REF ), let $f: \\mathcal {C}_+\\rightarrow \\mathcal {C}^+_\\eta $ be a conformal map such that $f(+\\infty )=+\\infty $ and set $\\phi _0= \\phi \\circ f + Q \\log |f^{\\prime }|$ .", "Then $(\\mathcal {C}_+, \\phi _0, +\\infty )/{\\sim _\\gamma }= (\\mathcal {C}^+_\\eta , \\phi _0, +\\infty )/{\\sim _\\gamma }$ , hence its law is $\\mathrm {QD}(1)^{\\#}$ .", "Let $S$ be the collection of smooth functions $\\xi $ that are supported on $\\lbrace \\operatorname{Re}z \\in 
[\frac{2}{3} C_0, \frac{4}{3}C_0 + 1]\rbrace \subset \mathcal {C}_+$ and satisfy $\Vert \xi \Vert _\infty \le 4\Vert g\Vert _\infty $ and $\Vert \nabla \xi \Vert _\infty \le 8(\Vert g\Vert _\infty +\Vert \nabla g\Vert _\infty )$ .", "By the definitions of $C_0$ and $f$ , we see that $|f^{\prime }|^2 g \circ f \in S$ .", "Let $Y = \sup _{\xi \in S} |(\phi _0, \xi )|$ .", "Since $\phi _0$ is almost surely in the local Sobolev space of index $-1$ , we have $Y<\infty $ a.s.", "Note that $\Vert f^{\prime }(z)1_{\operatorname{Re}z \in [C_0, C_0 + 1]}\Vert _\infty \le 2$ so $|(\phi , g)| = |(\phi _0 \circ f^{-1} + Q \log |(f^{-1})^{\prime }|, g)| \le |(\phi _0, |f^{\prime }|^2 g\circ f)| + Q \log 2 \le Y + Q \log 2.$", "Since the law of $\phi _0$ is unique modulo rotations around the axis of $\mathcal {C}$ , the law of $Y$ does not depend on $\eta $ or the precise law of $\phi $ .", "Therefore, $\mathbb {P}[(\phi , g) \notin (-K, K)] \le \mathbb {P}[Y + Q\log 2 > K] \rightarrow 0$ as $K \rightarrow \infty $ , as desired.", "[Proof of Lemma REF ] Let $g$ be a smooth function on $\mathcal {C}$ as in Lemma REF .", "For $K>0$ , define $S_K=E_\varepsilon \cap \lbrace (\phi ,\eta )\in H^{-1}(\mathcal {C})\times \mathrm {Loop}_0(\mathcal {C}): (\phi ,g)\in (-K + \frac{2}{\gamma }\log \ell (\phi ,\eta ), K + \frac{2}{\gamma }\log \ell (\phi ,\eta )) \rbrace .$ Then $ d_{{\operatorname{tv}}}(\mathrm {LF}_\mathcal {C}\times P_i |_{E_\varepsilon }, \mathrm {LF}_\mathcal {C}\times \widetilde{\mathcal {L}}_\kappa |_{E_\varepsilon })$ is bounded from above by $d_{{\operatorname{tv}}}(\mathrm {LF}_\mathcal {C}\times P_i |_{S_K}, \mathrm {LF}_\mathcal {C}\times \widetilde{\mathcal {L}}_\kappa |_{S_K})+(\mathrm {LF}_\mathcal {C}\times P_i) [ E_\varepsilon \setminus S_{K}] +(\mathrm {LF}_\mathcal {C}\times \widetilde{\mathcal {L}}_\kappa ) [ E_\varepsilon \setminus S_{K}].$ Since $(\mathrm {LF}_\mathcal {C}\times \widetilde{\mathcal {L}}_\kappa ) [ E_\varepsilon ]<\infty $ by Lemma REF and $\cap _{K>0} (E_\varepsilon \setminus S_{K})=\emptyset $ , we see that $\lim _{K\rightarrow \infty } (\mathrm {LF}_\mathcal {C}\times \widetilde{\mathcal {L}}_\kappa ) [ E_\varepsilon \setminus S_{K}]=0.$ On the other hand, conditioning on the boundary length being $\ell $ , the conditional law of $(\mathcal {C}_{\eta ^i}^+, \phi ^i, +\infty )/{\sim _\gamma }$ is $\mathrm {QD}_{1,0}(\ell )^\#$ .", "Since $\eta ^i\in \mathrm {Loop}_0(\mathcal {C})$ , by Lemma REF , we have $(\mathrm {LF}_\mathcal {C}\times P_i) [ E_\varepsilon \setminus S_K] \le (\mathrm {LF}_\mathcal {C}\times P_i) [ E_\varepsilon ] \times o_K(1)$ where $o_K(1)$ is a function converging to 0 as $K\rightarrow \infty $ uniformly in $i\ge 0$ .", "Since $ (\mathrm {LF}_\mathcal {C}\times P_i) [ E_\varepsilon ]$ does not depend on $i$ by (REF ), we have $\lim _{K\rightarrow \infty } \max _{i\ge 0}(\mathrm {LF}_\mathcal {C}\times P_i) [ E_\varepsilon \setminus S_K] =0.$ It remains to handle the first term in (REF ).", "Let $F_{\varepsilon , K}=\lbrace \phi \in H^{-1}(\mathcal {C}): (\phi , g) > -K + \frac{2}{\gamma }\log \varepsilon \rbrace $ .", "Then $S_K \subset F_{\varepsilon , K} \times \mathrm {Loop}_0(\mathcal {C})$ .", "We claim that $\mathrm {LF}_\mathcal {C}[F_{\varepsilon , K}] < \infty $ .", "Assuming this, since $\lim _{i \rightarrow \infty }d_\mathrm {tv}(P_i,
\\widetilde{\\mathcal {L}}_\\kappa ) = 0$ , we have $\\lim _{i \\rightarrow \\infty }d_\\mathrm {tv}(\\mathrm {LF}_\\mathcal {C}|_{F_{\\varepsilon , K}} \\times P_i, \\mathrm {LF}_\\mathcal {C}|_{F_{\\varepsilon , K}} \\times \\widetilde{\\mathcal {L}}_\\kappa ) = 0$ hence $\\lim _{i \\rightarrow \\infty } d_\\mathrm {tv}(\\mathrm {LF}_\\mathcal {C}\\times P_i |_{S_K}, \\mathrm {LF}_\\mathcal {C}\\times \\widetilde{\\mathcal {L}}_\\kappa |_{S_K}) = 0$ .", "Thus the quantity (REF ) tends to 0 as $i \\rightarrow \\infty $ then $K \\rightarrow \\infty $ , as desired.", "By the definition of $\\mathrm {LF}_\\mathcal {C}^{(\\gamma , \\pm \\infty )}$ from Definition REF , our claim $\\mathrm {LF}_\\mathcal {C}[F_{\\varepsilon , K}] < \\infty $ follows from the fact that $\\int _{-\\infty }^\\infty \\mathbb {P}[G > -c] e^{(2\\gamma - 2Q)c}\\,dc < \\infty $ if $G$ is a Gaussian random variable.", "Quantum pair of pants In this section we introduce and analyze the quantum pair of pants, similarly to what was done for the quantum annulus in Section .", "We assume $\\kappa \\in (\\frac{8}{3},4)$ and $\\gamma =\\sqrt{\\kappa }$ throughout the section.", "Suppose $(\\mathbb {C}, h, z_1,z_2,z_3)$ is an embedding of a sample from $\\mathrm {QS}_3$ .", "Let $\\Gamma $ be the whole-plane $\\operatorname{CLE}_\\kappa $ on $\\mathbb {C}$ independent of $h$ .", "We write $\\mathrm {QS}_3\\otimes \\operatorname{CLE}_\\kappa $ as the law of the decorated quantum surface $(\\mathbb {C}, h, \\Gamma , z_1,z_2,z_3)/{\\sim _\\gamma }$ .", "For $1\\le i\\le 3$ , let $\\eta _i$ be the outermost loop of $\\Gamma $ separating $z_i$ from the other two points as defined in Theorem~\\ref {thm-main}.", "Let $\\mathfrak {l}_i$ be the quantum boundary length of $\\eta _i$ .", "In light of Lemma~\\ref {lem:disint}, the following lemma ensures the existence of the disintegration over $(\\mathfrak {l}_1,\\mathfrak {l}_2,\\mathfrak {l}_3)$ .\\begin{lemma}For a Borel set $E\\subset \\mathbb {R}^3$ with zero Lebesgue measure, $\\mathrm {QS}_3\\otimes \\operatorname{CLE}_\\kappa [(\\mathfrak {l}_1,\\mathfrak {l}_2,\\mathfrak {l}_3)\\in E]=0$ .\\end{lemma}\\begin{proof}This follows from Theorem~\\ref {thm-loop3} combined with the same argument as in Lemma~\\ref {lem:1-pt-length}.\\end{proof}We now define the quantum pair of pants in the same way we defined the quantum annulus.\\begin{definition}Let $\\lbrace \\mathrm {QS}_3\\otimes \\operatorname{CLE}_\\kappa (\\ell _1,\\ell _2,\\ell _3):(\\ell _1,\\ell _2,\\ell _3)\\in (0,\\infty )^3 \\rbrace $ be the disintegration of $\\mathrm {QS}_3\\otimes \\operatorname{CLE}_\\kappa $ over $(\\mathfrak {l}_1,\\mathfrak {l}_2,\\mathfrak {l}_3)$ .", "Let $P_{\\eta _1,\\eta _2,\\eta _3}$ be the non-simply-connected component of $\\mathbb {C}\\setminus (\\cup _{i=1}^3 \\eta _i)$ .", "For $(\\ell _1,\\ell _2,\\ell _3)\\in \\mathbb {R}_+^3$ where $\\mathbb {R}_+=(0,\\infty )$ , let $\\widetilde{\\mathrm {QP}}(\\ell _1,\\ell _2,\\ell _3)$ be the law of the quantum surface $(P_{\\eta _1,\\eta _2,\\eta _3}, h)/{\\sim _\\gamma }$ under $\\mathrm {QS}_3\\otimes \\operatorname{CLE}_\\kappa (\\ell _1,\\ell _2,\\ell _3)$ .", "Let $\\mathrm {QP}(\\ell _1,\\ell _2,\\ell _3) = \\mathopen {}\\mathclose {\\left(\\prod _{i=1}^3 \\ell _i |\\mathrm {QD}_{1,0}(\\ell _i)| \\right)}^{-1} \\widetilde{\\mathrm {QP}}(\\ell _1,\\ell _2,\\ell _3)$ . We call a sample from $\\mathrm {QP}(\\ell _1,\\ell _2,\\ell _3)$ a \\emph {quantum pair of pants}.\\end{definition} For $\\ell _1,\\ell _2,\\ell _3$ all distinct, given quantum surfaces sampled from $\\mathrm {QD}_{1,0}(\\ell _1) \\times \\mathrm {QD}_{1,0}(\\ell _2)\\times \\mathrm {QD}_{1,0}(\\ell _3) \\times \\mathrm {QP}(\\ell _1,\\ell _2,\\ell _3)$ , we can uniformly conformally weld them, by identifying boundary circles with the same quantum length, to produce a
quantum surface decorated with three loops and three points.", "We write its law as $\\mathrm {Weld}\\Big (\\mathrm {QD}_{1,0}(\\ell _1) , \\mathrm {QD}_{1,0}(\\ell _2), \\mathrm {QD}_{1,0}(\\ell _3), \\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3) \\Big )$ .", "Similarly to the case of $\\mathrm {QA}(a,b)$ , although $\\mathrm {QP}(\\ell _1,\\ell _2,\\ell _3)$ is only well-defined modulo a measure zero set, we will only integrate it over $\\mathbb {R}_+^3$ in applications.", "So we can consider it as canonically defined and omit saying “for almost every $\\ell _1,\\ell _2,\\ell _3$ ” every time.", "For the same reason, although we only specify the meaning of (REF ) for distinct $\\ell _1,\\ell _2,\\ell _3$ , this is sufficient for our applications.", "The main results of this section are the following analogs of Proposition REF and Theorem REF .", "Theorem 6.1 Sample $(\\mathbb {C}, \\phi , \\Gamma , z_1, z_2, z_3)/{\\sim _\\gamma }$ from $\\mathrm {QS}_3 \\otimes \\operatorname{CLE}_\\kappa $ , and let $\\eta _i$ be the outermost loop of $\\Gamma $ around $z_i$ separating it from the other two points for $i=1,2,3$ .", "Then the law of the decorated quantum surface $(\\mathbb {C}, \\phi , \\eta _1, \\eta _2, \\eta _3, z_1,z_2,z_3)/{\\sim _\\gamma }$ is $\\iiint _{\\mathbb {R}^3_+} \\ell _1\\ell _2\\ell _3\\mathrm {Weld}\\Big ( \\mathrm {QD}_{1,0}(\\ell _1), \\mathrm {QD}_{1,0}(\\ell _2),\\mathrm {QD}_{1,0}(\\ell _3), \\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3) \\Big ) \\, d\\ell _1 \\, d\\ell _2 \\, d\\ell _3.$ Theorem 6.2 Let $A$ be the total quantum area of a sample from $\\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3)$ .", "Then there is a constant $C\\in (0,\\infty )$ only depending on $\\gamma $ such that for all $\\ell _1, \\ell _2, \\ell _3,\\mu >0$ , $\\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3) [e^{-\\mu A}] = C\\mu ^{\\frac{1}{4} - \\frac{2}{\\gamma ^2}}\\frac{1}{\\sqrt{\\ell _1\\ell _2\\ell _3}} e^{-(\\ell _1+ \\ell _2+ \\ell _3)\\sqrt{\\mu /\\sin (\\pi \\gamma ^2/4)}} .$", "In contrast to $\\mathrm {QA}(a,b)$ , the measure $\\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3)$ is infinite, as can be seen from sending $\\mu \\rightarrow 0$ in Theorem REF .", "On the other hand, $\\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3)[A<c] < \\infty $ for every $c>0$ , so it is still $\\sigma $ -finite.", "We will prove Theorem REF in Section REF based on Theorem REF .", "Then in Section REF we prove Theorem REF based on Theorem REF .", "Readers who are mainly interested in the proof of Theorem  can safely skip the rest of this section at the first reading.", "Conformal welding of $\\mathrm {QP}$ and $\\mathrm {QD}$ : proof of Theorem  REF Suppose we are in the setting of Definition  and Theorem REF .", "For $i=1,2,3$ , let $D_{\\eta _i}$ and $D^c_{\\eta _i}$ be the two connected components of $\\mathbb {C}\\setminus \\eta _i$ , where $D_{\\eta _i}$ contains $z_i$ .", "Let $\\mathfrak {l}_i$ be the quantum length of $\\eta _i$ .", "Proposition 6.3 Given $(h,\\eta _1,\\eta _2,\\eta _3)$ , let $p_3$ be a point sampled from the probability measure proportional to the quantum length measure of $\\eta _3$ .", "Then conditioned on $(D^c_{\\eta _3}, h,\\eta _{1}, \\eta _{2}, z_1, z_2,p_3)/{\\sim _\\gamma }$ , the conditional law of $(D_{\\eta _3} , h,z_3, p_3)/{\\sim _\\gamma }$ is $\\mathrm {QD}_{1,1}(\\mathfrak {l}_3)^\\#$ .", "Suppose we are in the setting of Theorem REF where $\\mathbb {F}$ is a measure on $H^{-1}(\\mathbb {C})$ such that the law of $(\\mathbb {C}, h)/{\\sim _\\gamma }$ is $\\mathrm {QS}$ if $h$ is sampled from $\\mathbb {F}$ .", "Now sample $(h,\\Gamma ,\\eta ,z_1,z_2,z_3)$ from $1_E\\prod
_{i=1}^{3}\\mu _h(d^2z_i)\\, \\mathbb {F}(dh)\\,\\mathrm {Count}_{\\Gamma }(d \\eta )\\,\\operatorname{CLE}_\\kappa (d\\Gamma )$ , where $E$ is the event that $\\eta $ separates $z_3$ from $\\lbrace z_1,z_2\\rbrace $ .", "Let $\\mathfrak {l}$ be the quantum length of $\\eta $ .", "Let $p$ be sampled on $\\eta $ from the probability measure proportional to the quantum length measure of $\\eta $ .", "Let $D_{\\eta }$ and $D^c_{\\eta }$ be the two connected components of $\\mathbb {C}\\setminus \\eta $ , where $D_{\\eta }$ contains $z_3$ .", "By Theorem REF , conditioned on $(D^c_{\\eta }, h,\\Gamma |_{D^c_{\\eta }}, z_1, z_2,p)/{\\sim _\\gamma }$ , the conditional law of $(D_{\\eta } , h,z_3, p)/{\\sim _\\gamma }$ is $\\mathrm {QD}_{1,1}(\\mathfrak {l})^\\#$ .", "In this sample space we still define $\\eta _1,\\eta _2,\\eta _3$ in terms of $\\Gamma , z_1,z_2,z_3$ as in Proposition REF .", "Let $F$ be the event that $z_1$ and $z_2$ are surrounded by distinct outermost loops of $\\Gamma |_{D^c_{\\eta }}$ .", "Then $E\\cap F$ occurs if and only if $\\eta =\\eta _3$ .", "Therefore under $1_F\\cdot 1_E\\prod _{i=1}^{3}\\mu _h(d^2z_i)\\, \\mathbb {F}(dh)\\,\\mathrm {Count}_{\\Gamma }(d \\eta )\\,\\operatorname{CLE}_\\kappa (d\\Gamma )$ the law of $(\\mathbb {C}, h,\\Gamma ,z_1,z_2,z_3)/{\\sim _\\gamma }$ is simply $\\mathrm {QS}_3\\otimes \\operatorname{CLE}_\\kappa $ as in Proposition REF .", "Since $F$ only depends on $(D^c_{\\eta }, h,\\Gamma |_{D^c_{\\eta }}, z_1, z_2,p)/{\\sim _\\gamma }$ , even under the further conditioning on $F$ , the conditional law of $(D_{\\eta } , h,z_3, p)/{\\sim _\\gamma }$ is $\\mathrm {QD}_{1,1}(\\mathfrak {l})^\\#$ .", "Since under this conditioning $\\eta =\\eta _3$ , we conclude the proof.", "[Proof of Theorem REF ] Note that Proposition REF still holds if $D_{\\eta _3}$ is replaced by $D_{\\eta _1}$ or $D_{\\eta _2}$ .", "Recall $\\widetilde{\\mathrm {QP}}$ from the disintegration in Definition  .", "By repeatedly applying Proposition REF , we see that the law of $(\\mathbb {C}, \\phi , \\eta _1, \\eta _2, \\eta _3, z_1,z_2,z_3)/{\\sim _\\gamma }$ is $ \\iiint _{\\mathbb {R}^3_+} \\mathrm {Weld}\\Big ( \\mathrm {QD}_{1,0}(\\ell _1)^{\\#}, \\mathrm {QD}_{1,0}(\\ell _2)^{\\#},\\mathrm {QD}_{1,0}(\\ell _3)^{\\#},\\widetilde{\\mathrm {QP}}(\\ell _1, \\ell _2, \\ell _3) \\Big ) \\, d\\ell _1 \\, d\\ell _2 \\, d\\ell _3.$ Now Theorem REF follows from $\\mathrm {QD}_{1,0}(\\ell ) =|\\mathrm {QD}_{1,0}(\\ell )| \\mathrm {QD}_{1,0}(\\ell )^{\\#}$ .", "Finally, we give a welding result analogous to Proposition REF that we need in the next section.", "Lemma 6.4 Let $(D,h,\\Gamma ,\\eta , \\hat{\\eta },z)$ be as in Lemma REF .", "Namely, for $a>0$ , sample $(D,h,\\Gamma ,z)/{\\sim _\\gamma }$ from $\\mathrm {QD}_{1,0}(a)\\otimes \\operatorname{CLE}_\\kappa $ and sample $\\hat{\\eta }$ from the counting measure on the outermost loops of $\\Gamma $ except the loop $\\eta $ surrounding $z$ .", "Then for some constant $C = C(\\gamma )$ the law of $(D, h,\\eta , \\hat{\\eta }, z)/{\\sim _\\gamma }$ is $C \\iint _{\\mathbb {R}^2_+} bc \\mathrm {Weld}\\mathopen {}\\mathclose {\\left(\\mathrm {QD}_{1,0}(b), \\mathrm {QD}(c), \\mathrm {QP}(a,b,c)\\right)} \\, db \\, dc.$ We retain the notations in the proof of Proposition REF .", "Sample $(h, \\Gamma , \\eta _1, \\eta _2, \\eta _3, z_1, z_2)$ from $\\mathsf {M} := 1_G \\mu _h(d^2z_1)\\,\\mu _h(d^2z_2)\\, \\mathbb {F}(dh) \\,\\mathrm {Count}_\\Gamma (d\\eta _1)\\,\\mathrm {Count}_\\Gamma (d\\eta _2)\\,\\mathrm {Count}_\\Gamma (d\\eta _3)\\,\\operatorname{CLE}_\\kappa (d\\Gamma )$ , where $G$ is the event that
$\\eta _1, \\eta _2, \\eta _3$ are distinct, $\\eta _1, \\eta _2, z_1, z_2$ all lie on the same side of $\\eta _3$ , and in $\\eta _3$ the outermost loops around $z_1, z_2$ are $\\eta _1, \\eta _2$ respectively.", "We first show that the $\\mathsf {M}$ -law of $( \\eta _1, \\eta _2, \\eta _3, z_1, z_2)/{\\sim _\\gamma }$ is $\\iiint _{\\mathbb {R}_+^3} abc \\mathrm {Weld}(QD_{1,0}(a), \\mathrm {QD}_{1,0}(b), \\mathrm {QD}(c), \\mathrm {QP}(a,b,c)) \\, da\\, db\\, dc.$ Indeed, by Theorem REF , if we sample $(h, \\Gamma , \\eta _1, \\eta _2, \\eta _3, z_1, z_2)$ from $\\mu _h|_{D_3}(d^2z_3) \\mathsf {M}$ where $D_3$ is the component of $\\eta _3$ not containing $z_1, z_2$ , then the law of $( \\eta _1, \\eta _2, \\eta _3, z_1, z_2,z_3)/{\\sim _\\gamma }$ is (REF ).", "Forgetting the point $z_3$ and unweighting by $\\mu _h(D_3)$ , we get (REF ).", "On the other hand, by Proposition REF applied to the loop $\\eta _1$ and the two-pointed quantum sphere $( h, z_1, z_2)/{\\sim _\\gamma }$ , we see that the $\\mathsf {M}$ -law of $( \\eta _1, \\eta _2, \\eta _3, z_1, z_2)/{\\sim _\\gamma }$ is $C\\int _0^\\infty a \\mathrm {Weld}(QD_{1,0}(a), \\widetilde{ \\mathsf {M}}(a)) \\, da, $ where $\\widetilde{ \\mathsf {M}}(a)$ is the measure described in Lemma REF .", "Comparing this to (REF ), we conclude that $C \\widetilde{\\mathsf {M}}(a) = \\iint _{\\mathbb {R}_+^2} bc \\mathrm {Weld}(\\mathrm {QD}_{1,0}(b), \\mathrm {QD}(c), \\mathrm {QP}(a,b,c)\\, db \\, dc$ .", "Area distribution of the quantum pair of pants: proof of Theorem  REF Our proof still crucially relies on the Levy process as in Section REF .", "Lemma 6.5 Let $\\mathbb {P}^{\\beta }$ be the probability measure describing the law of the ${\\beta }$ -stable Lévy process with Lévy measure $1_{x>0} x^{-\\beta -1} \\, dx$ with ${\\beta }=\\frac{4}{\\gamma ^2}+\\frac{1}{2}$ .", "For a sample $(\\zeta _t)_{t\\ge 0}$ from $\\mathbb {P}^{\\beta }$ , let $\\tau _{-\\ell }=\\inf \\lbrace t: \\zeta _t=-\\ell \\rbrace $ for each $\\ell >0$ .", "Let $(x_i)_{i \\ge 1}$ be the jumps of $\\zeta $ in $[0,\\tau _{-a-b-c}]$ .", "Let $(A_i)_{i\\ge 1}$ be independent copies of the quantum area of a sample from $\\mathrm {QD}(1)^\\#$ which are also independent of $\\zeta $ .", "Then there is a constant $C\\in (0,\\infty )$ such that for any non-negative measurable function $f$ and $a,b,c>0$ , we have $\\mathrm {QP}(a,b,c)[f(A)] = \\frac{C}{\\sqrt{abc} (a+b)} \\mathbb {E}[\\tau _{-a-b} f(\\sum _i x_i^2 A_i)],$ where $A$ on the left side means the quantum area of a sample from $\\mathrm {QP}(a,b,c)$ .", "In the setting of Lemma REF , Lemma REF gives the law of the quantum lengths of the outermost loops.", "By Proposition REF , conditioning on the lengths, the quantum surfaces surrounded by the outermost loops are independent quantum disks.", "Therefore combining Lemmas REF and REF we have that for a non-negative measurable function $F$ on $\\mathbb {R}_+^3$ , $&C \\iint _{\\mathbb {R}_+^2} bc |\\mathrm {QD}_{1,0}(b)| |\\mathrm {QD}(c)| \\mathrm {QP}(a,b,c)[F(A,b,c)]\\, db \\, dc \\\\=& \\iint _{\\mathbb {R}_+^2} \\frac{\\mathfrak {C}}{\\sqrt{ab}(a+b)} b|\\mathrm {QD}_{1,0}(b)| c^{-\\beta -1} \\mathbb {E}[\\widetilde{\\tau }_{-a-\\mathbf {b}} F(\\sum _{i} x_i^2 A_i,b,c)]\\, db \\, dc.$ for some $C\\in (0,\\infty )$ not depending on $a,b,c$ .", "Since $|\\mathrm {QD}(c)| \\propto c^{-\\beta -\\frac{3}{2}}$ , by possibly changing $C$ and performing a disintegration, we get (REF ).", "[Proof of Theorem REF ] Let $g(x)= \\mathbb {E}[e^{-\\mu x^2A_1}]$ where $A_1$ is 
the quantum area of a sample from $\\mathrm {QD}(1)^{\\#}$ .", "Then by Lemma REF , $\\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3) [e^{-\\mu A}]= \\frac{C}{\\sqrt{\\ell _1\\ell _2\\ell _3} (\\ell _1+\\ell _2)} \\mathbb {E}[\\tau _{-\\ell _1-\\ell _2} \\prod _{i\\ge 1}g(x_i)],$ where the expectation is with respect to $\\mathbb {P}^\\beta $ and $(x_i)_{i\\ge 1}$ is the set of jumps until $\\tau _{-\\ell _1-\\ell _2-\\ell _3}$ .", "By Lemma REF , $\\mathbb {E}[\\tau _{-\\ell _1-\\ell _2} \\prod _{i\\ge 1}g(x_i)]= C(\\mu )(\\ell _1+\\ell _2) \\mathbb {E}\\big [\\prod _{i\\ge 1} g(x_i)\\big ],$ where $C(\\mu )= \\int T(e) \\prod _{x\\in J_e} g(x) \\underline{N} (d e)$ with $\\underline{N}, T(e),J(e)$ as in Lemma REF .", "By Proposition REF , $\\mathbb {E}\\big [\\prod _{i\\ge 1} g(x_i)\\big ]= e^{-(\\ell _1+\\ell _2+\\ell _3) \\sqrt{\\mu /\\sin (\\pi \\gamma ^2/4)}}.$ We write $X\\propto _\\gamma Y$ if $X=C_\\gamma Y$ for some $\\gamma $ -dependent constant $C_\\gamma \\in (0,\\infty )$ .", "Then we have $\\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3) [e^{-\\mu A}] \\propto _\\gamma \\frac{C(\\mu )}{\\sqrt{\\ell _1\\ell _2\\ell _3}} e^{-(\\ell _1+ \\ell _2+ \\ell _3)\\sqrt{\\mu /\\sin (\\pi \\gamma ^2/4)}} .$", "It remains to show that $C(\\mu ) \\propto _{\\gamma } \\mu ^{\\frac{1}{4} - \\frac{2}{\\gamma ^2}}$ .", "Recall Theorem REF , where $(\\mathbb {C}, \\phi ,z_1,z_2,z_3)/{\\sim _\\gamma }$ has the law of $\\mathrm {QS}_3$ .", "It is easy to check from the definition of $\\mathrm {QS}_3$ that $\\mathrm {QS}_3[e^{-\\mu \\mu _\\phi (\\mathbb {C})}] \\propto _\\gamma \\mu ^{\\frac{2Q - 3\\gamma }{\\gamma }}$ .", "On the other hand, we can also use (REF ) to evaluate $\\mathrm {QS}_3[e^{-\\mu \\mu _\\phi (\\mathbb {C})}]$ .", "Since we know $\\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3) [e^{-\\mu A}]$ from (REF ), together with the FZZ formula in Theorem REF for the area of quantum disks, we have $\\mathrm {QS}_3[e^{-\\mu \\mu _\\phi (\\mathbb {C})}] \\propto _\\gamma C(\\mu ) \\prod _{i=1}^3 \\int _0^\\infty \\mu ^{\\frac{1}{\\gamma }(Q-\\gamma )} K_{\\frac{2}{\\gamma }(Q-\\gamma )}\\mathopen {}\\mathclose {\\left(\\ell _i \\sqrt{\\frac{\\mu }{\\sin (\\pi \\gamma ^2/4)}}\\right)} \\cdot \\frac{e^{-\\ell _i \\sqrt{\\mu /\\sin (\\pi \\gamma ^2/4)}}}{\\sqrt{\\ell _i}} \\, d\\ell _i.$ By Lemma REF we have $\\int _0^\\infty y^{-\\frac{1}{2}}e^{-cy} K_\\nu (cy) \\, dy = \\frac{\\pi ^{3/2}}{\\sqrt{2c} \\cos (\\pi \\nu )}$ for $c > 0$ and $\\nu \\in (-\\frac{1}{2}, \\frac{1}{2})$ .", "Therefore $\\mathrm {QS}_3[e^{-\\mu \\mu _\\phi (\\mathbb {C})}] \\propto _\\gamma C(\\mu ) \\mathopen {}\\mathclose {\\left(\\mu ^{\\frac{1}{\\gamma }(Q-\\gamma )-\\frac{1}{4}}\\right)}^3$ hence $C(\\mu ) \\propto _\\gamma \\mu ^{\\frac{1}{4} - \\frac{2}{\\gamma ^2}}$ , since $\\frac{2Q-3\\gamma }{\\gamma } - 3\\mathopen {}\\mathclose {\\left(\\frac{1}{\\gamma }(Q-\\gamma )-\\frac{1}{4}\\right)} = \\frac{3}{4}-\\frac{Q}{\\gamma } = \\frac{1}{4} - \\frac{2}{\\gamma ^2}$ by $Q=\\frac{\\gamma }{2}+\\frac{2}{\\gamma }$ .", "The three-point correlation function via conformal welding In this section we finish our proof of Theorem  based on the outline from Section REF .", "We recall the setting from Section REF .", "Let $\\Gamma $ be a whole-plane $\\operatorname{CLE}_\\kappa $ with $\\kappa \\in (\\frac{8}{3},4]$ .", "We fix $(u_1, u_2, u_3) = (0,1,e^{i\\pi /3})$ for the convenience of using our formulation of the DOZZ formula (Theorem ).", "Let $\\eta _i$ be the outermost loop of $\\Gamma $ separating $u_i$ from the other two points.", "We write $\\mathsf {m}_3$ for the law of the triple of loops $(\\eta _1, \\eta _2, \\eta _3)$ and let $\\alpha _i\\in \\mathbb {R}$ for $i=1,2,3$ .", "Let $\\lambda _i = -\\frac{\\alpha _i^2}{2} + Q \\alpha _i - 2$ .", "Let $\\mathsf {m}_3^{\\alpha _1, \\alpha _2, \\alpha _3}$ be the following reweighting
of $\\mathsf {m}_3$ : $\\frac{d \\mathsf {m}_3^{\\alpha _1, \\alpha _2, \\alpha _3}}{d \\mathsf {m}_3}(\\eta _1,\\eta _2,\\eta _3) = \\prod _{i=1}^3 \\mathopen {}\\mathclose {\\left(\\frac{1}{2}\\mathrm {CR}(\\eta _i, u_i)\\right)}^{-\\frac{\\alpha _i^2}{2} + Q \\alpha _i - 2}.$ By definition, the total mass of $\\mathsf {m}_3^{\\alpha _1, \\alpha _2, \\alpha _3}$ is $|\\mathsf {m}_3^{\\alpha _1, \\alpha _2, \\alpha _3}| = \\mathbb {E}\\mathopen {}\\mathclose {\\left[\\prod _{i=1}^3 \\mathopen {}\\mathclose {\\left(\\frac{1}{2}\\mathrm {CR}(\\eta _i, u_i)\\right)}^{\\lambda _i}\\right]}=\\frac{{C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}}{2^{\\lambda _1+\\lambda _2+\\lambda _3}}.$ As outlined in Section REF , we set $\\gamma =\\sqrt{\\kappa }$ and consider the Liouville field on the sphere with three insertions $\\mathrm {LF}_{(\\alpha _i, u_i)_i}$ .", "Then by (REF ) we have Lemma 7.1 For $\\alpha _1,\\alpha _2,\\alpha _3$ satisfying the Seiberg bound () in Theorem , we have $\\mathopen {}\\mathclose {\\left(\\mathrm {LF}_{(\\alpha _i, u_i)_i} \\times \\mathsf {m}_3^{\\alpha _1, \\alpha _2, \\alpha _3}\\right)} [e^{-\\mu _\\phi (\\mathbb {C})}] =2^{-\\lambda _1-\\lambda _2-\\lambda _3-1} C^\\mathrm {DOZZ}_\\gamma (\\alpha _1,\\alpha _2,\\alpha _3) {C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}.$ Theorem  yields $\\mathrm {LF}_{(\\alpha _i, u_i)_i} [e^{-\\mu _\\phi (\\mathbb {C})}] =\\frac{1}{2}C^\\mathrm {DOZZ}_\\gamma (\\alpha _1,\\alpha _2,\\alpha _3)$ .", "Now (REF ) gives (REF ).", "In Section REF , we will show that the product measure $\\mathrm {LF}_{(\\alpha _i, u_i)_3} \\times \\mathsf {m}_3^{\\alpha _1, \\alpha _2, \\alpha _3}$ can be obtained from conformally welding the quantum pair of pants $\\mathrm {QP}(\\ell _1,\\ell _2,\\ell _3)$ and $\\mathcal {M}_1^\\mathrm {disk}(\\alpha _i;\\ell _i)$ as in Theorem REF .", "In Section REF , we compute the right side of (REF ) using the area distributions of $\\mathrm {QP}(\\ell _1,\\ell _2,\\ell _3)$ and $\\mathcal {M}_1^\\mathrm {disk}(\\alpha _i;\\ell _i)$ from Theorem REF and the FZZ formula, which gives ${C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}$ .", "Conformal welding with generic insertions In this section we prove the following conformal welding result.", "Recall $(u_1,u_2,u_3)=(0,1,e^{i\\pi /3})$ .", "Theorem 7.2 Let $\\alpha _i \\in (Q - \\frac{\\gamma }{4}, Q)$ for $i=1,2,3$ .", "Let $(\\phi ,\\eta _1,\\eta _2,\\eta _3)$ be sampled from $\\mathrm {LF}_{(\\alpha _i, u_i)_i}\\times \\mathsf {m}_3^{\\alpha _1, \\alpha _2, \\alpha _3}$ .", "Then the law of the decorated quantum surface $(\\mathbb {C}, \\phi , \\eta _1, \\eta _2, \\eta _3, u_1,u_2,u_3)/{\\sim _\\gamma }$ is $\\frac{\\gamma ^2}{4\\pi ^4(Q-\\gamma )^4}\\iiint _0^\\infty \\ell _1 \\ell _2 \\ell _3 \\mathrm {Weld}\\Big ( \\mathcal {M}_1^\\mathrm {disk}(\\alpha _1;\\ell _1), \\mathcal {M}_1^\\mathrm {disk}(\\alpha _2;\\ell _2),\\mathcal {M}_1^\\mathrm {disk}(\\alpha _3; \\ell _3), \\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3) \\Big )\\, d \\ell _1 \\, d\\ell _2\\, d\\ell _3.$ Here $\\mathrm {Weld}$ means uniform conformal welding in the same sense as in Theorem REF .", "We fix the range $(Q - \\frac{\\gamma }{4}, Q)$ in Theorem REF for concreteness since this is the range we will use to prove Theorem .", "Since $\\gamma =\\sqrt{\\kappa }\\in (\\sqrt{8/3},2)$ , we have $\\gamma \\in (Q - \\frac{\\gamma }{4}, Q)$ .", "We first observe that Theorem REF is the special case of Theorem REF where $\\alpha _1=\\alpha _2=\\alpha _3=\\gamma
$ .", "Lemma 7.3 Theorem REF holds for $\\alpha _1=\\alpha _2=\\alpha _3=\\gamma $ .", "By the relation between $\\mathrm {LF}_{(\\gamma , u_i)_3}$ and $\\mathrm {QS}_3$ from Theorem REF and the relation between $\\mathrm {QD}_{1,0}(\\ell ) $ and $\\mathcal {M}_1^\\mathrm {disk}(\\gamma ; \\ell )$ from Theorem REF , we see that Lemma REF follows from Theorem REF .", "We first explain how to go from $\\alpha _1=\\alpha _2=\\alpha _3=\\gamma $ to the following case.", "Proposition 7.4 Theorem REF holds for $\\alpha _1=\\alpha \\in (Q - \\frac{\\gamma }{4}, Q)$ and $\\alpha _2=\\alpha _3=\\gamma $ .", "We will prove Proposition REF from Lemma REF by a reweighting procedure.", "By the definition of $\\mathcal {M}^{\\mathrm {disk}}_1(\\alpha ;\\ell )$ , if we sample a field $X$ from $\\mathrm {LF}_{\\mathbb {H}}^{(\\alpha ,i)}(\\ell )$ , then the law of $(\\mathbb {H}, X,i)/{\\sim _\\gamma }$ is $\\mathcal {M}^{\\mathrm {disk}}_1(\\alpha ;\\ell )$ .", "We now recall a fact from [6] about the reweighting of $\\mathrm {LF}_{\\mathbb {H}}^{(\\alpha ,i)}(\\ell )$ by “$e^{(\\alpha -\\gamma )X}$ ”.", "Lemma 7.5 ([6]) For any $\\ell >0,\\varepsilon \\in (0,1)$ and for any nonnegative measurable function $f$ of $X|_{\\mathbb {H}\\setminus B_\\varepsilon (i)}$ we have $\\int f(X|_{\\mathbb {H}\\setminus B_\\varepsilon (i)}) \\times \\varepsilon ^{\\frac{1}{2}(\\alpha ^2 - \\gamma ^2)} e^{(\\alpha - \\gamma )X_\\varepsilon (i)} \\,d {\\mathrm {LF}_\\mathbb {H}^{(\\gamma , i)}(\\ell )}= \\int f(X|_{\\mathbb {H}\\setminus B_\\varepsilon (i)}) \\, d\\mathrm {LF}_{\\mathbb {H}}^{(\\alpha , i)}(\\ell ),$ where $X$ is a sample in $H^{-1}(\\mathbb {H})$ and $X_\\varepsilon (i)$ means the average of $X$ on the boundary of the ball $B_\\varepsilon (i)=\\lbrace z: |z-i|< \\varepsilon \\rbrace $ .", "The key to the proof of Proposition REF is the following reweighting result on $\\mathrm {LF}_{(\\gamma , u_1), (\\gamma , u_2), (\\gamma , u_3)}$ .", "Lemma 7.6 Let $\\eta $ be a simple loop separating $u_1$ from $u_2$ and $u_3$ .", "Let $D_{\\eta }$ be the connected component of $\\eta $ containing $u_1$ .", "Let $p$ be a point on $\\eta $ and let $\\psi : \\mathbb {H}\\rightarrow D_{\\eta }$ be the conformal map with $\\psi (i) = u_1$ and $\\psi (0) = p$ .", "For $\\varepsilon \\in (0,\\frac{1}{4})$ , let ${\\eta ,p,\\varepsilon } =\\psi (B_\\varepsilon (i))$ .", "For $\\phi $ sampled from $\\mathrm {LF}_{(\\gamma , u_1), (\\gamma , u_2), (\\gamma , u_3)}$ , let $X=\\phi \\circ \\psi +Q \\log |\\psi ^{\\prime }|$ so that $(\\mathbb {H},X,i,0)/{\\sim _\\gamma }=(D_{\\eta _1}, \\phi , u_1, p)/{\\sim _\\gamma }$ .", "Then for a fixed $\\alpha \\in (Q-\\frac{\\gamma }{4},Q)$ and for any $\\varepsilon \\in (0,1)$ and any nonnegative measurable function $f$ of $\\phi $ depending only on $\\phi |_{{\\eta ,p,\\varepsilon }}$ we have $\\int f (\\phi ) \\times \\varepsilon ^{\\frac{1}{2}(\\alpha ^2 - \\gamma ^2)} e^{(\\alpha - \\gamma )X_\\varepsilon (i)} \\, d\\mathrm {LF}_{(\\gamma , u_1), (\\gamma , u_2), (\\gamma , u_3)}= \\int f (\\phi ) \\mathopen {}\\mathclose {\\left(\\frac{\\mathrm {CR}(\\eta , u_1)}{2}\\right)}^{-\\frac{\\alpha ^2}{2} + Q \\alpha - 2} d\\mathrm {LF}_{(\\alpha , u_1), (\\gamma , u_2), (\\gamma , u_3)}.$ Let $\\theta _\\varepsilon $ be the uniform probability measure on $\\partial B_\\varepsilon (i)$ and $\\widehat{\\theta }_\\varepsilon := \\psi _* \\theta _\\varepsilon $ .", "Since $\\psi ^{\\prime }$ is holomorphic, $\\log |\\psi ^{\\prime }|$ is harmonic so $(X, \\theta _\\varepsilon ) = (\\phi 
\\circ \\psi + Q \\log |\\psi ^{\\prime }|, \\theta _\\varepsilon ) = (\\phi , \\widehat{\\theta }_\\varepsilon ) + Q \\log |\\psi ^{\\prime }(i)|$ .", "Therefore, since $|\\psi ^{\\prime }(i)| = \\frac{1}{2} \\mathrm {CR}(\\eta , 0)$ , $e^{(\\alpha - \\gamma )X_\\varepsilon (i)} = \\mathopen {}\\mathclose {\\left(\\frac{\\mathrm {CR}(\\eta , 0)}{2} \\right)}^{(\\alpha - \\gamma )Q} e^{(\\alpha - \\gamma )(\\phi , \\widehat{\\theta }_\\varepsilon )}.$ Since $\\varepsilon < \\frac{1}{4}$ , the measure $\\widehat{\\theta }_\\varepsilon $ is supported in the unit disk so $(\\log |\\cdot |_+, \\widehat{\\theta }_\\varepsilon ) = 0$ .", "Moreover, for any $z \\in \\mathbb {C}_{\\eta , p, \\varepsilon }$ the function $w \\mapsto \\log |z - \\psi (w)|$ is harmonic on $B_\\varepsilon (i)$ so $(\\log |z - \\cdot |, \\widehat{\\theta }_\\varepsilon ) = (\\log |z - \\psi (\\cdot )|, \\theta _\\varepsilon ) = \\log |z|$ .", "Finally, since $\\widehat{\\theta }_\\varepsilon $ is the harmonic measure on $\\eta _\\varepsilon := \\psi (\\partial B_\\varepsilon (i))$ viewed from 0, it is well known that $(\\log |\\cdot |, \\widehat{\\theta }_\\varepsilon ) = \\log \\mathrm {CR}(\\eta _\\varepsilon ,0) = \\log \\frac{\\varepsilon \\mathrm {CR}(\\eta , 0)}{2}$ .", "Therefore, recalling $G_{\\mathbb {C}}(z,w) = -\\log |z-w| + \\log |z|_+ + \\log |w|_+$ from Section REF , we have $(-2Q \\log |\\cdot |_+ + \\sum _j \\gamma G_{\\mathbb {C}}(\\cdot , u_j), \\widehat{\\theta }_\\varepsilon ) = - \\gamma \\log \\frac{\\varepsilon \\mathrm {CR}(\\eta , 0)}{2}$ .", "For a GFF $h$ sampled from $P_{\\mathbb {C}}$ , let $\\widetilde{h} := h - 2Q \\log |\\cdot |_+ + \\sum _{j=1}^3 \\gamma G_{\\mathbb {C}}(\\cdot , u_j)$ .", "By Definition~\\ref {def-RV-sph},\\begin{equation}\\int f (\\phi ) \\times e^{(\\alpha - \\gamma )(\\phi , \\widehat{\\theta }_\\varepsilon )} \\, d\\mathrm {LF}_{(\\gamma , u_1), (\\gamma , u_2), (\\gamma , u_3)} = \\mathopen {}\\mathclose {\\left(\\frac{\\varepsilon \\mathrm {CR}(\\eta , 0)}{2} \\right)}^{-(\\alpha - \\gamma ) \\gamma }\\int \\mathbb {E}[ e^{(\\alpha - \\gamma )(h, \\widehat{\\theta }_\\varepsilon )} f(\\widetilde{h} + c) ] e^{(\\alpha + 2\\gamma - 2Q)c}\\,dc.\\end{equation}Define $G^\\varepsilon _{\\mathbb {C}}(z, 0) := (G_{\\mathbb {C}}(z, \\cdot ) , \\widehat{\\theta }_\\varepsilon )$ ; from the computations above we have $G^\\varepsilon _{\\mathbb {C}}(\\cdot , 0)|_{\\mathbb {C}_{\\eta ,p,\\varepsilon }} = G_{\\mathbb {C}}(\\cdot , 0)|_{\\mathbb {C}_{\\eta ,p,\\varepsilon }}$ .", "By Girsanov's theorem and the fact that $f(\\phi )$ depends only on $\\phi |_{\\mathbb {C}_{\\eta ,p,\\varepsilon }}$ ,\\begin{align*}\\int \\mathbb {E}[ e^{(\\alpha - \\gamma )(h, \\widehat{\\theta }_\\varepsilon )} f(\\widetilde{h} + c) ] e^{(\\alpha + 2\\gamma - 2Q)c}\\,dc &= \\mathbb {E}[e^{(\\alpha - \\gamma )(h, \\widehat{\\theta }_\\varepsilon )}]\\int \\mathbb {E}[ f(\\widetilde{h} + (\\alpha - \\gamma )G^\\varepsilon _{\\mathbb {C}}(\\cdot , 0)+ c) ] e^{(\\alpha + 2\\gamma - 2Q)c}\\,dc \\\\&= \\mathbb {E}[e^{(\\alpha - \\gamma )(h, \\widehat{\\theta }_\\varepsilon )}]\\int \\mathbb {E}[ f(\\widetilde{h} + (\\alpha - \\gamma )G_{\\mathbb {C}}(\\cdot , 0)+ c) ] e^{(\\alpha + 2\\gamma - 2Q)c}\\,dc.\\end{align*}Now $\\operatorname{Var}((h, \\widehat{\\theta }_\\varepsilon )) = (G^\\varepsilon _{\\mathbb {C}}(\\cdot , 0), \\widehat{\\theta }_\\varepsilon ) = -\\log \\frac{\\varepsilon \\mathrm {CR}(\\eta , 0)}{2}$ .", "Hence $\\mathbb {E}[e^{(\\alpha -\\gamma )(h, \\widehat{\\theta }_\\varepsilon )}] = \\mathopen {}\\mathclose {\\left(\\frac{\\varepsilon \\mathrm {CR}(\\eta , 0)}{2} \\right)}^{- \\frac{1}{2} (\\alpha - \\gamma )^2}$ , and using Definition~\\ref {def-RV-sph}, we get\\begin{equation}\\int \\mathbb {E}[ e^{(\\alpha - \\gamma )(h, \\widehat{\\theta }_\\varepsilon )} f(\\widetilde{h} + c) ] e^{(\\alpha + 2\\gamma - 2Q)c}\\,dc = \\mathopen {}\\mathclose {\\left(\\frac{\\varepsilon \\mathrm {CR}(\\eta , 0)}{2} \\right)}^{- \\frac{1}{2} (\\alpha - \\gamma )^2}\\int f(\\phi ) \\, d\\mathrm {LF}_{(\\alpha , u_1), (\\gamma , u_2), (\\gamma , u_3)}.\\end{equation}Combining~(\\ref {eq-gir-sph-1}),~(\\ref {eq-gir-sph-2}) and~(\\ref {eq-gir-sph-3}), and collecting the prefactors via$$ \\mathopen {}\\mathclose
{\\left(\\frac{\\mathrm {CR}(\\eta , 0)}{2} \\right)}^{(\\alpha - \\gamma )Q} \\mathopen {}\\mathclose {\\left(\\frac{\\varepsilon \\mathrm {CR}(\\eta , 0)}{2} \\right)}^{-(\\alpha - \\gamma ) \\gamma } \\mathopen {}\\mathclose {\\left(\\frac{\\varepsilon \\mathrm {CR}(\\eta , 0)}{2} \\right)}^{- \\frac{1}{2} (\\alpha - \\gamma )^2} = \\varepsilon ^{-\\frac{1}{2}(\\alpha ^2-\\gamma ^2)}\\mathopen {}\\mathclose {\\left(\\frac{\\mathrm {CR}(\\eta , 0)}{2} \\right)}^{-\\frac{\\alpha ^2}{2} + Q\\alpha - 2},$$we conclude the proof.", "We also need the following fact when dealing with uniform weldings that involve $\\mathcal {M}^{\\mathrm {disk}}_1(\\alpha ;\\ell )$ .", "Lemma 7.7 For $\\alpha > \\frac{\\gamma }{2}$ and $\\ell > 0$ , let $(D,h,z)$ be an embedding of a sample from $\\mathcal {M}^{\\mathrm {disk}}_1(\\alpha ;\\ell )$ .", "Given $(D,h,z)$ , let $p$ be a point sampled from the harmonic measure on $\\partial D$ viewed from $z$ ; then the law of $(D,h,z,p)/{\\sim _\\gamma }$ equals that of $(\\mathbb {H}, X, i,0)/{\\sim _\\gamma }$ where $X$ is sampled from $\\mathrm {LF}_{\\mathbb {H}}^{(\\alpha ,i)}(\\ell )$ .", "We assume that $(D,h,z)=(\\mathbb {H}, h,i)$ where $h$ is a sample from $\\mathrm {LF}_{\\mathbb {H}}^{(\\alpha ,i)}(\\ell )$ .", "Let $\\psi _p: \\mathbb {H}\\rightarrow \\mathbb {H}$ be the conformal map with $\\psi _p(i) = i$ and $\\psi _p(p) = 0$ and set $X=h \\circ \\psi _p^{-1} + Q \\log |(\\psi _p^{-1})^{\\prime }|$ .", "Then by the coordinate change for Liouville fields on $\\mathbb {H}$ from [6], the law of $X$ is also $\\mathrm {LF}_\\mathbb {H}^{(\\alpha , i)}(\\ell )$ .", "Since $(D,h,z,p)/{\\sim _\\gamma }=(\\mathbb {H}, X, i,0)/{\\sim _\\gamma }$ we are done.", "We are now ready to prove Proposition REF .", "For notational simplicity, for $\\ell >0$ we let $\\mathfrak {M}_\\ell $ be the measure on decorated quantum surfaces corresponding to $\\frac{\\gamma ^2}{4\\pi ^4(Q-\\gamma )^4}\\iint _0^\\infty \\ell _2 \\ell _3\\mathrm {Weld}( \\mathcal {M}_1^\\mathrm {disk}(\\gamma ; \\ell _2), \\mathcal {M}_1^\\mathrm {disk}(\\gamma ; \\ell _3), \\mathrm {QP}(\\ell , \\ell _2, \\ell _3)) \\, d\\ell _2 \\, d\\ell _3,$ so that the relevant integral for Proposition REF is $\\int _0^\\infty \\ell \\mathrm {Weld}( \\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell ),\\mathfrak {M}_\\ell )\\, d\\ell $ .", "We sample a decorated quantum surface from $\\int _0^\\infty \\ell \\mathrm {Weld}( \\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell ),\\mathfrak {M}_\\ell )\\, d\\ell $ and let $(\\mathbb {C}, \\phi , \\eta _1, \\eta _2, \\eta _3, u_1,u_2,u_3)$ be its embedding; since we specify the locations of the three marked points, the tuple $(\\phi ,\\eta _1,\\eta _2,\\eta _3)$ is uniquely specified by the decorated quantum surface.", "Let $D_{\\eta _1}$ and $D^c_{\\eta _1}$ be the connected components of $\\mathbb {C}\\setminus \\eta _1$ such that $D_{\\eta _1}$ contains $u_1$ .", "Let $p\\in \\eta _1$ be a point sampled from the harmonic measure of $\\partial D_{\\eta _1}$ viewed from $u_1$ and set $\\mathcal {D}_1=(D_{\\eta _1}, \\phi ,u_1,p )/{\\sim _\\gamma }$ .", "Let $X\\in H^{-1}(\\mathbb {H})$ be such that $\\mathcal {D}_1=(\\mathbb {H},X,i,0)/{\\sim _\\gamma }$ .", "Let $\\mathcal {D}^c_1$ be the decorated quantum surface $(D^c_{\\eta _1}, \\phi ,\\eta _2, \\eta _3,u_2,u_3,p)/{\\sim _\\gamma }$ .", "Then the law of $(X, \\mathcal {D}^c_1)$ is described by the following lemma.", "Lemma 7.8 Given a decorated quantum surface $\\mathcal {S}$ sampled from $\\mathfrak {M}_\\ell $ , we write $\\mathcal {S}^\\bullet $ as
the decorated quantum surface obtained by further sampling a point on the boundary of $\\mathcal {S}$ according to the probability measure proportional to its quantum boundary length measure.", "Let $\\mathfrak {M}_\\ell ^\\bullet $ be the law of $\\mathcal {S}^\\bullet $ .", "Then the law of $(X, \\mathcal {D}^c_1)$ defined right above is $\\int _0^\\infty \\ell \\mathrm {LF}_\\mathbb {H}^{(\\alpha ,i)} (\\ell ) \\times \\mathfrak {M}_\\ell ^{\\bullet }d\\ell $ .", "By the definition of uniform welding, conditioning on $\\mathcal {D}_1$ , the conditional law of $\\mathcal {D}^c_1$ is $\\mathfrak {M}_\\ell ^\\bullet $ with $\\ell $ being the boundary length of $\\mathcal {D}_1$ .", "Since the marked boundary point of $\\mathcal {D}_1$ is sampled from the harmonic measure, Lemma REF now yields Lemma REF .", "[Proof of Proposition REF ] For $(\\phi , \\eta _1, \\eta _2, \\eta _3,p)$ defined above Lemma REF with $\\alpha =\\gamma $ , by Lemma REF the law of $(\\phi , \\eta _1, \\eta _2, \\eta _3,p)$ is $\\mathrm {Harm_{u_1,\\eta _1}}(dp)\\,\\mathrm {LF}_{(\\gamma , u_1), (\\gamma , u_2), (\\gamma , u_3)} (d\\phi ) \\,\\mathsf {m}_3(d\\eta _1,d\\eta _2,d\\eta _3)$ where $\\mathrm {Harm}_{u_1,\\eta _1}$ means the harmonic measure on $\\eta _1$ viewed from $u_1$ .", "For $(X, \\mathcal {D}^c_1)$ in Lemma REF , since $(\\phi , \\eta _1, \\eta _2, \\eta _3,p)$ and $(X, \\mathcal {D}^c_1)$ determine each other, Lemma REF with $\\alpha =\\gamma $ can be written as $\\mathrm {LF}_{(\\gamma , u_1), (\\gamma , u_2), (\\gamma , u_3)} (d\\phi )\\,\\mathrm {Harm_{u_1,\\eta _1}}(dp)\\, \\mathsf {m}_3(d\\eta _1,d\\eta _2,d\\eta _3)= \\int _0^\\infty \\ell \\mathrm {LF}_\\mathbb {H}^{(\\gamma ,i)} (\\ell ) \\times \\mathfrak {M}_\\ell ^{\\bullet }d\\ell .$ Recall the notations in Lemma REF .", "For $\\varepsilon \\in (0,\\frac{1}{4})$ , for any nonnegative measurable function $f$ of $\\phi |_{\\mathbb {C}_{\\eta _1,p,\\varepsilon }}$ , and any nonnegative measurable function $g$ of $(\\eta _1,\\eta _2,\\eta _3)$ , we get from (REF ) that $&\\int f(\\phi |_{\\mathbb {C}_{\\eta _1,p,\\varepsilon }})g(\\eta _1,\\eta _2,\\eta _3) \\varepsilon ^{\\frac{1}{2}(\\alpha ^2 - \\gamma ^2)} e^{(\\alpha - \\gamma )X_\\varepsilon (i)} \\mathrm {LF}_{(\\gamma , u_1), (\\gamma , u_2), (\\gamma , u_3)}(d\\phi )\\, \\mathrm {Harm_{u_1,\\eta _1}}(dp)\\, \\mathsf {m}_3(d\\eta _1,d\\eta _2,d\\eta _3) \\nonumber \\\\&= \\int _0^\\infty \\mathopen {}\\mathclose {\\left(\\int f(\\phi |_{\\mathbb {C}_{\\eta _1,p,\\varepsilon }})g(\\eta _1,\\eta _2,\\eta _3) \\varepsilon ^{\\frac{1}{2}(\\alpha ^2 - \\gamma ^2)} e^{(\\alpha - \\gamma )X_\\varepsilon (i)} \\ell \\mathrm {LF}_\\mathbb {H}^{(\\gamma ,i)} (\\ell ) \\times \\mathfrak {M}_\\ell ^{\\bullet }\\right)}d\\ell .", "$ By Lemma REF , the left side of (REF ) equals $\\int f g \\mathopen {}\\mathclose {\\left(\\frac{1}{2}\\mathrm {CR}(\\eta _1, u_1)\\right)}^{-\\frac{\\alpha ^2}{2} + Q \\alpha - 2} \\mathrm {LF}_{(\\alpha , u_1), (\\gamma , u_2), (\\gamma , u_3)}(d\\phi )\\, \\mathrm {Harm_{u_1,\\eta _1}}(dp)\\, \\mathsf {m}_3(d\\eta _1,d\\eta _2,d\\eta _3).$ Here we write $f=f(\\phi |_{\\mathbb {C}_{\\eta _1,p,\\varepsilon }})$ and $g=g(\\eta _1,\\eta _2,\\eta _3)$ to ease the notation.", "By Lemma REF , the right side of (REF ) equals $\\int _0^\\infty \\mathopen {}\\mathclose {\\left(\\int fg \\ell \\mathrm {LF}_\\mathbb {H}^{(\\alpha ,i)} (\\ell ) \\times \\mathfrak {M}_\\ell ^{\\bullet }\\right)}d\\ell .$ Recall that $\\mathsf {m}_3^{\\alpha ,\\gamma ,\\gamma } = \\mathopen {}\\mathclose {\\left(\\frac{1}{2}\\mathrm {CR}(\\eta _1,
u_1)\\right)}^{-\\frac{\\alpha ^2}{2} + Q \\alpha - 2} \\mathsf {m}_3$ .", "Comparing the two integrals above, we get in the same sense as in (REF ) that $\\mathrm {LF}_{(\\alpha , u_1), (\\gamma , u_2), (\\gamma , u_3)} (d\\phi )\\,\\mathrm {Harm_{u_1,\\eta _1}}(dp)\\, \\mathsf {m}_3^{\\alpha ,\\gamma ,\\gamma }\\,(d\\eta _1,d\\eta _2,d\\eta _3)= \\int _0^\\infty \\ell \\mathrm {LF}_\\mathbb {H}^{(\\alpha ,i)} (\\ell ) \\times \\mathfrak {M}_\\ell ^{\\bullet }d\\ell .$ Forgetting about the point $p$ , we get Proposition REF .", "[Proof of Theorem REF ] The case $(\\alpha _1, \\alpha _2, \\alpha _3) = (\\alpha , \\gamma , \\gamma )$ was proved in Proposition REF by reweighting from the $(\\gamma , \\gamma , \\gamma )$ case.", "The exact same argument in Lemma REF gives the following extension: $&\\int f (\\phi |_{\\mathbb {C}_{\\eta ,p,\\varepsilon }}) \\times \\varepsilon ^{\\frac{1}{2}(\\alpha ^2 - \\alpha _1^2)} e^{(\\alpha - \\alpha _1)X_\\varepsilon (i)} \\, d\\mathrm {LF}_{(\\alpha _1, u_1), (\\alpha _2, u_2), (\\alpha _3, u_3)}\\\\=& \\int f (\\phi |_{\\mathbb {C}_{\\eta ,p,\\varepsilon }}) \\mathopen {}\\mathclose {\\left(\\frac{1}{2}\\mathrm {CR}(\\eta , u_1)\\right)}^{-\\frac{\\alpha _1^2}{2} + Q \\alpha _1 - 2} \\, d\\mathrm {LF}_{(\\alpha , u_1), (\\alpha _2, u_2), (\\alpha _3, u_3)}.$ Applying this argument again at $u_2$ and at $u_3$ yields the general result.", "Matching the quantum area: proof of Theorem  We start by proving the following counterpart of Lemma REF .", "Proposition 7.9 There is a constant $C = C(\\gamma ) \\in (0, \\infty )$ such that for $\\alpha _1, \\alpha _2, \\alpha _3 \\in (Q - \\frac{\\gamma }{4}, Q)$ , with notation as in Theorem REF and writing $\\phi $ for the random field from $\\mathrm {LF}_{(\\alpha _i, u_i)_i}$ , $\\mathopen {}\\mathclose {\\left(\\mathrm {LF}_{(\\alpha _i, u_i)_3} \\times \\mathsf {m}_3^{\\alpha _1, \\alpha _2, \\alpha _3}\\right)} [e^{- \\mu _\\phi (\\mathbb {C})}] =C \\prod _{i=1}^3 \\frac{2^{\\alpha _i^2/2 - Q\\alpha _i} \\Gamma (\\frac{\\gamma \\alpha _i}{2} -\\frac{\\gamma ^2}{4}) }{\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha _i)) \\cos (\\frac{2\\pi }{\\gamma }(Q-\\alpha _i))} \\mathopen {}\\mathclose {\\left( \\frac{\\pi \\Gamma (\\frac{\\gamma ^2}{4})}{\\Gamma (1 - \\frac{\\gamma ^2}{4})}\\right)}^{-\\frac{\\alpha _i}{\\gamma }} .$ Recall $\\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3)[e^{-A}]$ from Theorem REF .", "Recall $ \\mathcal {M}_1^\\mathrm {disk}(\\alpha _i; \\ell _i)[e^{-A_i}]$ from Theorem REF , where $A_i$ is the quantum area of a sample from $\\mathcal {M}_1^\\mathrm {disk}(\\alpha _i; \\ell _i)$ .", "By Theorem REF , for some $\\gamma $ -dependent constants $C_1,C_2$ we have that $\\mathopen {}\\mathclose {\\left(\\mathrm {LF}_{(\\alpha _i, u_i)_3} \\times \\mathsf {m}_3^{\\alpha _1, \\alpha _2, \\alpha _3}\\right)} [e^{- \\mu _\\phi (\\mathbb {C})}]$ equals $C_1& \\iiint _0^\\infty \\ell _1\\ell _2\\ell _3 \\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3)[e^{-A}] \\prod _{i=1}^3 \\mathcal {M}_1^\\mathrm {disk}(\\alpha _i; \\ell _i)[e^{-A_i}] \\, d\\ell _1\\, d\\ell _2 \\, d\\ell _3 \\\\&= C_2 \\prod _{i=1}^3 \\frac{\\overline{U}(\\alpha _i) (4 \\sin (\\frac{\\pi \\gamma ^2}{4}))^{\\alpha _i/\\gamma }}{2^{\\alpha _i^2/2}\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha _i))} \\int _0^\\infty \\frac{1}{\\sqrt{\\ell _i}}e^{-\\ell _i \\sqrt{\\frac{1}{\\sin (\\pi \\gamma ^2/4)}}} K_{\\frac{2}{\\gamma }(Q-\\alpha _i)} \\mathopen {}\\mathclose {\\left( \\ell _i\\sqrt{\\frac{1}{\\sin (\\pi \\gamma ^2/4)}} \\right)} \\, d\\ell _i, $ where $\\overline{U}$ is as in (REF ).",
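"For the reader's convenience, we record the special case of Lemma REF that is used in the next step: taking $c=\\sqrt{1/\\sin (\\pi \\gamma ^2/4)}$ and $\\nu =\\frac{2}{\\gamma }(Q-\\alpha _i)$ , which lies in $(0,\\frac{1}{2})$ since $\\alpha _i\\in (Q-\\frac{\\gamma }{4},Q)$ , the $\\ell _i$-integral on the second line equals $\\int _0^\\infty \\ell _i^{-\\frac{1}{2}}e^{-c\\ell _i} K_{\\nu }(c\\ell _i) \\, d\\ell _i = \\frac{\\pi ^{3/2}}{\\sqrt{2c}\\cos (\\pi \\nu )} = \\frac{\\pi ^{3/2}}{\\sqrt{2}}\\mathopen {}\\mathclose {\\left(\\sin \\frac{\\pi \\gamma ^2}{4}\\right)}^{\\frac{1}{4}} \\frac{1}{\\cos (\\frac{2\\pi }{\\gamma }(Q-\\alpha _i))}$ .",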
"Expanding $\\overline{U}(\\alpha _i)$ and using Lemma REF to simplify the integral on the second line, this is equal to $C \\prod _{i=1}^3 \\frac{2^{\\alpha _i^2/2 - Q\\alpha _i} \\Gamma (\\frac{\\gamma \\alpha _i}{2} -\\frac{\\gamma ^2}{4}) }{\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha _i)) \\cos (\\frac{2\\pi }{\\gamma }(Q-\\alpha _i))} \\mathopen {}\\mathclose {\\left(\\frac{\\sin (\\frac{\\pi \\gamma ^2}{4}) \\Gamma (1-\\frac{\\gamma ^2}{4})^2}{\\pi ^2} \\right)}^{\\frac{\\alpha _i}{\\gamma }} .", "$ The identity $\\Gamma (z)\\Gamma (1-z) = \\frac{\\pi }{\\sin (\\pi z)}$ then yields the result.", "[Proof of Theorem ] We divide the proof into four cases depending on the parameter range.", "Case I: $\\kappa < 4$ and $\\lambda _i \\in (\\frac{3\\kappa }{32} - 1 + \\frac{2}{\\kappa }, \\frac{\\kappa }{8} -1+\\frac{2}{\\kappa })$ for all $i=1,2,3$ .", "In this case we can find $\\alpha _i \\in (Q - \\frac{\\gamma }{4}, Q)$ satisfying $\\lambda _i = -\\frac{\\alpha _i^2}{2} + Q \\alpha _i - 2$ for each $i$ .", "Comparing Lemmas REF and REF yields for some $\\gamma $ -dependent constant $C$ that ${C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}= \\frac{C}{C_\\gamma ^{\\mathrm {DOZZ}}(\\alpha _1,\\alpha _2,\\alpha _3)} \\prod _{i=1}^3 \\frac{ \\Gamma (\\frac{\\gamma \\alpha _i}{2} -\\frac{\\gamma ^2}{4}) }{\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha _i)) \\cos (\\frac{2\\pi }{\\gamma }(Q-\\alpha _i))} \\mathopen {}\\mathclose {\\left( \\frac{\\pi \\Gamma (\\frac{\\gamma ^2}{4})}{\\Gamma (1 - \\frac{\\gamma ^2}{4})}\\right)}^{-\\frac{\\alpha _i}{\\gamma }} .$ The constant $C$ is recovered by setting $\\alpha _1=\\alpha _2=\\alpha _3=\\gamma $ , for which $\\lambda _i = 0$ and $C_\\kappa ^{\\operatorname{CLE}}(0,0,0) = 1$ .", "This gives () for ${C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}$ and () for the factor $N_\\gamma (\\alpha )$ .", "For the next three cases we need the following monotonicity.", "Since the distance between $u_i$ and $\\eta _i$ is less than 1 by our choice of $ u_1,u_2,u_3$ , by the Koebe 1/4 theorem we have $\\frac{1}{4}\\mathrm {CR}(\\eta _i, u_i) \\le 1$ .", "Therefore $ \\mathbb {E}[\\prod _{i=1}^3 ( \\frac{1}{4}\\mathrm {CR}(\\eta _i, u_i))^{\\lambda _i}] \\ge \\mathbb {E}[\\prod _{i=1}^3 ( \\frac{1}{4}\\mathrm {CR}(\\eta _i,u_i))^{\\widetilde{\\lambda }_i}] $ for any real $\\lambda _i, \\widetilde{\\lambda }_i$ such that $\\lambda _i \\le \\widetilde{\\lambda }_i$ for $i=1,2,3$ , hence $4^{-\\lambda _1 - \\lambda _2 - \\lambda _3}C_\\kappa ^{\\operatorname{CLE}}(\\lambda _1, \\lambda _2, \\lambda _3) \\ge 4^{-\\widetilde{\\lambda }_1 - \\widetilde{\\lambda }_2 - \\widetilde{\\lambda }_3}C_\\kappa ^{\\operatorname{CLE}}(\\widetilde{\\lambda }_1,\\widetilde{\\lambda }_2,\\widetilde{\\lambda }_3).$ Case II: $\\kappa < 4$ and all $\\lambda _i > \\frac{3\\kappa }{32} - 1 + \\frac{2}{\\kappa }$ .", "By (REF ) ${C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}$ is finite, hence analytic, on $\\lbrace (z_1,z_2,z_3)\\in 3: \\operatorname{Re}z_i > \\frac{3\\kappa }{32} -1+\\frac{2}{\\kappa }\\textrm { for }i=1,2,3\\rbrace $ .", "On the other hand, the right hand side of () is a meromorphic function.", "Since Case II includes Case I, we see that () holds in Case II.", "Case III: $\\kappa < 4$ and $\\lambda _i \\le \\frac{3\\kappa }{32} - 1 + \\frac{2}{\\kappa }$ for some $i$ .", "We will show that ${C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}= \\infty $ in this case.", "By the monotonicity 
(REF ) and symmetry, it suffices to prove ${C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}= \\infty $ for $\\lambda _1 \\le \\frac{3\\kappa }{32} - 1 + \\frac{2}{\\kappa }$ and $\\lambda _2, \\lambda _3 > \\frac{3\\kappa }{32} -1+\\frac{2}{\\kappa }$ .", "For $\\widetilde{\\lambda }_1>\\frac{3\\kappa }{32} -1+\\frac{2}{\\kappa }$ we have $4^{-\\lambda _1 - \\lambda _2 - \\lambda _3}C_\\kappa ^{\\operatorname{CLE}}(\\lambda _1, \\lambda _2, \\lambda _3) > 4^{-\\widetilde{\\lambda }_1 - \\lambda _2 - \\lambda _3}C_\\kappa ^{\\operatorname{CLE}}(\\widetilde{\\lambda }_1, \\lambda _2, \\lambda _3), $ By the explicit formula for $C_\\kappa ^{\\operatorname{CLE}}(\\widetilde{\\lambda }_1, \\lambda _2, \\lambda _3)$ in () we get $C_\\kappa ^{\\operatorname{CLE}}(\\widetilde{\\lambda }_1, \\lambda _2, \\lambda _3)\\rightarrow \\infty $ as $\\widetilde{\\lambda }_1\\downarrow \\frac{3\\kappa }{32} -1+\\frac{2}{\\kappa }$ ; indeed, writing $(\\widetilde{\\alpha }_1, \\alpha _2, \\alpha _3)\\in (Q-\\frac{\\gamma }{4}, Q)^3$ for the parameters corresponding to $(\\widetilde{\\lambda }_1, \\lambda _2, \\lambda _3)$ , as $\\widetilde{\\alpha }_1 \\downarrow Q-\\frac{\\gamma }{4}$ we have $\\lim _{\\widetilde{\\alpha }_1 \\downarrow Q - \\frac{\\gamma }{4}}N_\\gamma (\\widetilde{\\alpha }_1) = \\infty $ while $C_\\gamma ^\\mathrm {DOZZ}(\\widetilde{\\alpha }_1, \\alpha _2, \\alpha _3)$ remains positive since $(\\widetilde{\\alpha }_1, \\alpha _2, \\alpha _3)$ satisfies the Seiberg bounds ().", "Therefore ${C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}= \\infty $ as desired.", "Case IV: $\\kappa = 4$ .", "This follows from Lemma REF via the continuity as $\\kappa \\uparrow 4$ .", "Electrical thickness of the SLE loop via conformal welding In this section we prove Theorem REF .", "Recall from Section  the cylinder $\\mathcal {C}= \\mathbb {R}\\times [0,2\\pi ]$ with $x \\in \\mathbb {R}$ identified with $x + 2\\pi i$ , and that $\\mathcal {L}_\\kappa (\\mathcal {C})$ is the pullback of the loop shape measure $\\mathcal {L}_\\kappa $ under the map $z \\mapsto e^{-z}$ .", "Thus $\\mathcal {L}_\\kappa (\\mathcal {C})$ is a probability measure on loops $\\eta $ in $\\mathcal {C}$ separating $\\pm \\infty $ and satisfying $\\max _{z \\in \\eta } \\operatorname{Re}z = 0$ .", "For a loop $\\eta $ sampled from $\\mathcal {L}_\\kappa (\\mathcal {C})$ , we write $\\vartheta (\\eta )$ for the electrical thickness of $\\exp (\\eta )$ .", "Then Theorem REF is equivalent to $\\mathbb {E}[e^{\\lambda \\vartheta (\\eta )}]= \\mathopen {}\\mathclose {\\left\\lbrace \\begin{array}{ll}\\frac{\\sin (\\pi (1-\\kappa /4))}{\\pi (1-\\kappa /4)}\\frac{\\pi \\sqrt{(1-\\kappa /4)^2+\\lambda \\kappa /2}}{\\sin (\\pi \\sqrt{ (1-\\kappa /4)^2+\\lambda \\kappa /2})} & \\mbox{if } \\lambda < 1-\\frac{\\kappa }{8}.", "\\\\\\infty & \\mbox{if } \\lambda \\ge 1-\\frac{\\kappa }{8}\\end{array}\\right.", "}$ Consider $\\alpha < Q$ and set $\\lambda = \\frac{\\alpha ^2}{2} - Q\\alpha +2$ .", "Let $\\mathcal {L}_\\kappa ^\\alpha $ be defined by the following reweighting of $\\mathcal {L}_\\kappa (\\mathcal {C})$ .", "$\\frac{d \\mathcal {L}_\\kappa ^\\alpha }{d\\mathcal {L}_\\kappa (\\mathcal {C})}(\\eta ) = (\\frac{1}{4} \\mathrm {CR}(\\exp (\\eta ),0) \\mathrm {CR}(\\exp (-\\eta ),0))^{-\\frac{\\alpha ^2}{2} + Q\\alpha - 2} = 2^{2\\lambda }e^{\\lambda \\vartheta (\\eta )},$ Thus proving Theorem REF amounts to computing the total mass $|\\mathcal {L}_\\kappa ^\\alpha |$ .", "We will achieve 
this using the same strategy as in the proof of Theorem  in Section : first establish a conformal welding identity and then evaluate an observable in two ways.", "Sample a pair $(\\eta , \\mathbf {t})$ from $\\mathcal {L}_\\kappa ^\\alpha \\times dt$ (where $dt$ is the Lebesgue measure on $\\mathbb {R}$ ), and let $\\operatorname{SLE}_\\kappa ^{\\mathrm {sep},\\alpha }$ be the law of the translated loop $\\eta + \\mathbf {t}$ .", "Then $\\operatorname{SLE}_\\kappa ^{\\mathrm {sep},\\alpha }$ is an infinite measure on loops on $\\mathcal {C}$ separating $\\pm \\infty $ .", "Note that $\\mathcal {L}_\\kappa ^\\gamma =\\mathcal {L}_\\kappa (\\mathcal {C})$ .", "According to Lemma REF , $\\operatorname{SLE}_\\kappa ^{\\mathrm {sep},\\gamma }$ is a constant multiple of the measure $\\operatorname{SLE}^{\\operatorname{sep}}_\\kappa (\\mathcal {C})$ from Section .", "To prove Theorem REF , we consider a generalization of the loop decorated two-pointed quantum sphere $\\mathrm {QS}_2\\otimes \\operatorname{SLE}_\\kappa ^{\\operatorname{sep}}$ from Proposition REF , where marked points have $\\alpha $ -singularities.", "We first recall the $\\alpha $ -generalization of $\\mathrm {QS}_2$ from [19] and its relation to LCFT following [4].", "Definition 8.1 For $\\gamma \\in (0,2)$ , let $(B_s)_{s \\ge 0}$ be a standard Brownian motion conditioned on $B_{s} - (Q-\\alpha )s<0$ for all $s>0$ , and $(\\widetilde{B}_s)_{s \\ge 0}$ an independent copy of $(B_s)_{s \\ge 0}$ .", "Let $Y_t =\\mathopen {}\\mathclose {\\left\\lbrace \\begin{array}{ll}B_{t} - (Q -\\alpha )t & \\mbox{if } t \\ge 0 \\\\\\widetilde{B}_{-t} +(Q-\\alpha ) t & \\mbox{if } t < 0\\end{array}\\right.}", ".$ Let $h^1(z) = Y_{\\operatorname{Re}z}$ for each $z \\in \\mathcal {C}$ .", "Let $h^2_\\mathcal {C}$ be independent of $h^1$ and have the law of the lateral component of the GFF on $\\mathcal {C}$ .", "Let $\\hat{h}=h^1+h^2_\\mathcal {C}$ .", "Let $\\mathbf {c}\\in \\mathbb {R}$ be sampled from $ \\frac{\\gamma }{2} e^{2(\\alpha -Q)c}dc$ independent of $\\hat{h}$ and set $h=\\hat{h}+\\mathbf {c}$ .", "Let $\\mathcal {M}_2^\\mathrm {sph}(\\alpha )$ be the infinite measure describing the law of the decorated quantum surface $(\\mathcal {C}, h , -\\infty , +\\infty )/{\\sim _\\gamma }$ .", "Recall the unit-volume reflection coefficient for LCFT on the sphere [40], [65] $\\overline{R}(\\alpha ) := -\\mathopen {}\\mathclose {\\left(\\frac{\\pi \\Gamma (\\frac{\\gamma ^2}{4})}{\\Gamma (1-\\frac{\\gamma ^2}{4})}\\right)}^{\\frac{2}{\\gamma }(Q-\\alpha )} \\frac{1}{\\frac{2}{\\gamma }(Q-\\alpha )} \\frac{\\Gamma (-\\frac{\\gamma }{2}(Q-\\alpha ))}{\\Gamma (\\frac{\\gamma }{2}(Q-\\alpha ))\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha ))} .$ Lemma 8.2 The law of the quantum area of a sample from $\\mathcal {M}_2^\\mathrm {sph}(\\alpha )$ is $1_{a>0} \\frac{1}{2} \\overline{R}(\\alpha ) a^{\\frac{2}{\\gamma }(\\alpha - Q) - 1} \\, da.", "$ For $0< a < a^{\\prime }$ with $\\widehat{h}$ as in Definition REF , we have $\\mathcal {M}_2^\\mathrm {sph}(\\alpha )[ \\mu _{\\widehat{h} + c} (\\mathcal {C}) \\in (a, a^{\\prime }) ] = \\mathbb {E}\\mathopen {}\\mathclose {\\left[\\int _{-\\infty }^\\infty \\mathbf {1}_{e^{\\gamma c} \\mu _{\\widehat{h}}(\\mathcal {C}) \\in (a, a^{\\prime })} \\frac{\\gamma }{2} e^{2(\\alpha -Q)c} \\, dc \\right]} = \\mathbb {E}\\mathopen {}\\mathclose {\\left[\\int _a^{a^{\\prime }} \\frac{\\gamma }{2} \\mathopen {}\\mathclose {\\left(\\frac{y}{\\mu _{\\widehat{h}}(\\mathcal {C})}\\right)}^{\\frac{2}{\\gamma }(\\alpha - Q)} 
\\frac{1}{\\gamma y} \\, dy \\right]}$ where we have used the change of variables $y = e^{\\gamma c} \\mu _{\\widehat{h}}(\\mathcal {C})$ .", "By [40] and [65], for $\\alpha \\in (\\frac{\\gamma }{2}, Q)$ we have $\\mathbb {E}[\\mu _{\\widehat{h}}(\\mathcal {C})^{\\frac{2}{\\gamma }(Q-\\alpha )}] = \\overline{R}(\\alpha )$ .", "Interchanging the expectation and integral gives the result.", "Fix $\\kappa \\in (\\frac{8}{3},4)$ and $\\gamma =\\sqrt{\\kappa }$ .", "Let $\\mathbb {F}$ be the law of $h$ as in Definition REF , so that the law of $(\\mathcal {C}, h,-\\infty ,+\\infty )/{\\sim _\\gamma }$ is $\\mathcal {M}_2^\\mathrm {sph}(\\alpha )$ .", "Now sample $(h,\\eta )$ from $\\mathbb {F} \\times \\operatorname{SLE}_\\kappa ^{\\mathrm {sep}, \\alpha }$ and write $ \\mathcal {M}_2^\\mathrm {sph}(\\alpha ) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep}}$ as the law of $(\\mathcal {C}, h, \\eta , -\\infty ,+\\infty )/{\\sim _\\gamma }$ .", "Recall the Liouville field with $\\alpha $ -insertions $\\mathrm {LF}_\\mathcal {C}^{(\\alpha , \\pm \\infty )}$ on $\\mathcal {C}$ from Definition REF .", "We have the following description of $ \\mathcal {M}_2^\\mathrm {sph}(\\alpha ) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep}}$ .", "Proposition 8.3 If $(\\phi , \\eta )$ is sampled from $\\mathrm {LF}_\\mathcal {C}^{(\\alpha , \\pm \\infty )}\\times \\mathcal {L}_\\kappa ^\\alpha $ , the law of $(\\mathcal {C}, \\phi , \\eta , -\\infty ,+\\infty )/{\\sim _\\gamma }$ is $C (Q-\\alpha )^2 \\mathcal {M}_2^\\mathrm {sph}(\\alpha ) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep}, \\alpha } \\quad \\text{for some constant }C=C(\\gamma )\\in (0,\\infty ).$ This is an immediate consequence of the definition of $\\operatorname{SLE}_\\kappa ^{\\mathrm {sep},\\alpha }$ and Theorem 2.11 of [4], which says the following: let $h$ be the field in Definition REF , so the law of $(\\mathcal {C}, h, -\\infty ,+\\infty )/{\\sim }_\\gamma $ is $\\mathcal {M}_2^\\mathrm {sph}(\\alpha )$ .", "Let $T \\in \\mathbb {R}$ be sampled from Lebesgue measure independently of $h$ , and set $\\phi := h( \\cdot +T)$ .", "Then $\\phi $ has law $\\frac{\\gamma }{4 (Q-\\alpha )^2} \\mathrm {LF}_\\mathcal {C}^{(\\alpha , \\pm \\infty )}$ .", "We now present the conformal welding result needed for the proof of Theorem REF .", "Proposition 8.4 For $\\alpha \\in (\\frac{\\gamma }{2}, Q)$ and for some constant $C = C(\\gamma )$ we have $C (Q-\\alpha )^2\\mathcal {M}_2^\\mathrm {sph}(\\alpha ) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep}, \\alpha } = \\int _0^\\infty \\ell \\cdot \\mathrm {Weld}(\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell ), \\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell )) \\, d\\ell .", "$ We postpone the proof of Proposition REF to Section REF and proceed to the proof of Theorem REF .", "Similarly as in the proof of Theorem , we would like to compare the area of a sample from $\\mathcal {M}_2^\\mathrm {sph}(\\alpha ) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep}, \\alpha } $ using Propositions REF and REF to obtain $|\\mathcal {L}^\\alpha _\\kappa |$ and hence prove Theorem REF .", "But $\\mathcal {M}_2^\\mathrm {sph}(\\alpha ) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep}, \\alpha }[e^{- A}]=\\infty $ .", "Therefore we need to find a finite observable to compute.", "Note that $\\mathcal {M}_2^\\mathrm {sph}(\\alpha ) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep}, \\alpha } $ is a measure on quantum surfaces decorated by two marked points and a loop separating 
them.", "The loop separates the quantum surface into two connected components.", "For $0<\\varepsilon <\\delta $ , let $E_{\\delta , \\varepsilon }$ be the event that the connected component containing the first marked point has quantum area at least 1 and the loop has quantum length in $(\\varepsilon , \\delta )$ .", "The size of $E_{\\delta , \\varepsilon }$ is easy to compute using Proposition REF .", "Lemma 8.5 Let $\\alpha \\in (\\frac{\\gamma }{2}, Q)$ .", "With $C$ from Proposition REF , $\\mathcal {M}_2^\\mathrm {sph}(\\alpha ) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep}, \\alpha }[E_{\\delta ,\\varepsilon }]$ equals $\\frac{1}{C (Q-\\alpha )^2}\\times \\frac{(1+o_{\\delta ,\\varepsilon }(1))\\log \\varepsilon ^{-1}}{\\frac{2}{\\gamma }(Q-\\alpha )\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha ))}\\mathopen {}\\mathclose {\\left(\\frac{2}{\\gamma }2^{-\\frac{\\alpha ^2}{2}}\\overline{U}(\\alpha ) \\right)}^2\\mathopen {}\\mathclose {\\left(4 \\sin \\frac{\\pi \\gamma ^2}{4}\\right)}^{-\\frac{2}{\\gamma }(Q-\\alpha )},$ where the error term $o_{\\delta ,\\varepsilon }(1)$ satisfies $\\lim _{\\delta \\rightarrow 0} \\lim _{\\varepsilon \\rightarrow 0} o_{\\delta ,\\varepsilon }(1)=0$ .", "Let $A$ be the quantum area of a sample from $\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell )$ .", "By Proposition REF , to prove Lemma REF , it suffices to prove $\\int _\\varepsilon ^\\delta \\ell \\cdot \\mathopen {}\\mathclose {\\left| \\mathcal {M}_{1}^\\mathrm {disk}(\\alpha ; \\ell )\\right|}\\mathcal {M}_{1}^\\mathrm {disk}(\\alpha ; \\ell )[A > 1] \\, d\\ell = \\frac{(1+o_{\\delta ,\\varepsilon }(1))\\log \\varepsilon ^{-1}}{\\frac{2}{\\gamma }(Q-\\alpha )\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha ))}\\mathopen {}\\mathclose {\\left(\\frac{2}{\\gamma }2^{-\\frac{\\alpha ^2}{2}}\\overline{U}(\\alpha ) \\right)}^2\\mathopen {}\\mathclose {\\left(4 \\sin \\frac{\\pi \\gamma ^2}{4}\\right)}^{-\\frac{2}{\\gamma }(Q-\\alpha )}.$ since the left side of (REF ) is the mass of $E_{\\delta , \\varepsilon }$ under $\\int _0^\\infty \\ell \\cdot \\mathrm {Weld}(\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell ), \\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell )) \\, d\\ell $ .", "By the scaling of quantum area and boundary length, we have $\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell )^\\# [A > 1] = \\mathcal {M}_1^\\mathrm {disk}(\\alpha ; 1)^\\# [A > \\ell ^{-2}].$ By Theorem REF the quantum area law of $\\mathcal {M}_1^\\mathrm {disk}(\\alpha ;1)^\\#$ is inverse gamma with shape $a = \\frac{2}{\\gamma }(Q-\\alpha )$ and scale $b = (4 \\sin \\frac{\\pi \\gamma ^2}{4})^{-1}$ .", "Let $\\underline{\\Gamma }$ be the lower incomplete gamma function; this satisfies $\\lim _{y \\rightarrow 0} \\frac{\\underline{\\Gamma }(a; y)}{y^a} = \\frac{1}{a}$ .", "By the tail asymptotic property of the inverse gamma distribution, as $\\ell \\rightarrow 0$ , $\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell )^\\# [A > 1] = \\frac{\\underline{\\Gamma }(\\frac{2}{\\gamma }(Q-\\alpha ); \\ell ^2/4\\sin \\frac{\\pi \\gamma ^2}{4})}{\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha ))} = \\frac{1+o_\\ell (1)}{\\frac{2}{\\gamma }(Q-\\alpha )\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha ))}\\mathopen {}\\mathclose {\\left(\\frac{\\ell ^2}{4 \\sin \\frac{\\pi \\gamma ^2}{4}}\\right)}^{\\frac{2}{\\gamma }(Q-\\alpha )}.$ By Proposition REF we have $|\\mathcal {M}_1^\\mathrm {disk}(\\alpha ;\\ell )| = \\frac{2}{\\gamma }2^{-\\frac{\\alpha ^2}{2}}\\overline{U}(\\alpha ) \\ell ^{\\frac{2}{\\gamma }(\\alpha -Q)-1}$ .", "Therefore $\\int 
_\\varepsilon ^\\delta \\ell \\cdot \\mathopen {}\\mathclose {\\left| \\mathcal {M}_{1}^\\mathrm {disk}(\\alpha ; \\ell )\\right|}\\mathcal {M}_{1}^\\mathrm {disk}(\\alpha ; \\ell )[A > 1] \\, d\\ell = \\int _\\varepsilon ^\\delta \\ell \\mathopen {}\\mathclose {\\left(\\frac{2}{\\gamma }2^{-\\frac{\\alpha ^2}{2}}\\overline{U}(\\alpha ) \\ell ^{\\frac{2}{\\gamma }(\\alpha -Q)-1} \\right)}^2 \\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell )^\\# [A> 1] \\\\= \\frac{1+o_{\\delta ,\\varepsilon }(1)}{\\frac{2}{\\gamma }(Q-\\alpha )\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha ))}\\mathopen {}\\mathclose {\\left(\\frac{2}{\\gamma }2^{-\\frac{\\alpha ^2}{2}}\\overline{U}(\\alpha ) \\right)}^2\\mathopen {}\\mathclose {\\left(4 \\sin \\frac{\\pi \\gamma ^2}{4}\\right)}^{-\\frac{2}{\\gamma }(Q-\\alpha )}\\int _\\varepsilon ^\\delta \\ell ^{-1} \\, d\\ell .", "$ We can also compute the size of $E_{\\delta ,\\varepsilon }$ using Proposition REF in terms of $|\\mathcal {L}_\\kappa ^\\alpha |$ and $\\overline{R}(\\alpha )$ .", "Proposition 8.6 For $\\alpha \\in (\\frac{\\gamma }{2}, Q)$ , with the error term in the same sense as in Lemma REF , $(\\mathcal {M}_2^\\mathrm {sph}(\\alpha ) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep},\\alpha })[E_{\\delta , \\varepsilon }] = (1+o_{\\delta ,\\varepsilon }(1)) \\frac{\\overline{R}(\\alpha )}{2(Q-\\alpha )^2} |\\mathcal {L}_\\kappa ^\\alpha | \\log \\varepsilon ^{-1}.", "$ We postpone the proof of Proposition REF to Section REF and proceed to the proof of Theorem REF using Lemma REF and Proposition REF .", "Proposition 8.7 For some constant $C = C(\\gamma )$ and all $\\alpha \\in (\\frac{\\gamma }{2}, Q)$ we have $2^{-\\alpha ^2 + 2Q\\alpha }|\\mathcal {L}_\\kappa ^\\alpha | = C\\frac{ \\frac{\\gamma }{2} (Q-\\alpha )}{\\sin (\\frac{\\gamma \\pi }{2} (Q-\\alpha ))}.", "$ In this proof, we write $C$ for a $\\gamma $ -dependent constant that may change from line to line.", "By Proposition REF and Lemma REF we get $ (Q-\\alpha )^2 \\frac{\\overline{R}(\\alpha )}{2(Q-\\alpha )^2}|\\mathcal {L}_\\kappa ^\\alpha |=\\frac{C}{\\frac{2}{\\gamma }(Q-\\alpha )\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha ))}\\mathopen {}\\mathclose {\\left(\\frac{2}{\\gamma }2^{-\\frac{\\alpha ^2}{2}}\\overline{U}(\\alpha ) \\right)}^2\\mathopen {}\\mathclose {\\left(4 \\sin \\frac{\\pi \\gamma ^2}{4}\\right)}^{-\\frac{2}{\\gamma }(Q-\\alpha )} .$ Using the definitions of $\\overline{R}(\\alpha )$ and $\\overline{U}(\\alpha )$ in (REF ) and (REF ) and cancelling equal terms gives $ \\mathopen {}\\mathclose {\\left(\\frac{\\pi \\Gamma (\\frac{\\gamma ^2}{4})}{\\Gamma (1-\\frac{\\gamma ^2}{4})}\\right)}^{\\frac{2}{\\gamma }(Q-\\alpha )} \\frac{\\Gamma (\\frac{\\gamma }{2}(\\alpha -Q))}{\\Gamma (\\frac{\\gamma }{2}(Q-\\alpha ))} |\\mathcal {L}_\\kappa ^\\alpha | =\\mathopen {}\\mathclose {\\left(\\frac{\\pi ^2}{\\Gamma (1-\\frac{\\gamma ^2}{4})^2 \\sin (\\frac{\\pi \\gamma ^2}{4})}\\right)}^{\\frac{2}{\\gamma }(Q-\\alpha )} C 2^{\\alpha ^2 - 2Q \\alpha }\\Gamma (1 + \\frac{\\gamma }{2}(\\alpha - Q))^2 $ The identity $\\Gamma (z)\\Gamma (1-z) = \\frac{\\pi }{\\sin (\\pi z)}$ gives equality of the first terms on the left and right hand sides, so rearranging and using $\\Gamma (z+1) = z\\Gamma (z)$ and $\\Gamma (1-z)\\Gamma (z) = \\frac{\\pi }{\\sin (\\pi z)}$ gives the desired identity: $2^{-\\alpha ^2 + 2Q\\alpha } |\\mathcal {L}_\\kappa ^\\alpha | = C \\frac{\\Gamma (1+\\frac{\\gamma }{2}(\\alpha -Q))}{\\Gamma (\\frac{\\gamma }{2}(\\alpha -Q))} \\Gamma (1- \\frac{\\gamma }{2}(Q-\\alpha ))\\Gamma 
(\\frac{\\gamma }{2}(Q-\\alpha )) = C \\frac{\\gamma }{2} (\\alpha - Q) \\cdot \\frac{\\pi }{\\sin ( \\frac{\\gamma \\pi }{2} (Q-\\alpha ))}.$ [Proof of Theorem REF ] We break the proof of (REF ) hence Theorem REF into three cases.", "Case I: $\\kappa <4$ and $\\lambda \\in (1 - \\frac{\\kappa }{8} - \\frac{2}{\\kappa }, 1-\\frac{\\kappa }{8})$ .", "Let $\\alpha = Q - \\sqrt{Q^2 - 4 + 2\\lambda } \\in (\\frac{\\gamma }{2}, Q)$ , so $\\lambda = \\frac{\\alpha ^2}{2}-Q\\alpha + 2$ .", "By (REF ) and Proposition REF we have $\\mathbb {E}[e^{\\lambda \\vartheta (\\eta )}] = 2^{-2\\lambda } |\\mathcal {L}_\\kappa ^\\alpha | = C\\frac{ \\frac{\\gamma }{2} (Q-\\alpha )}{\\sin (\\frac{\\gamma \\pi }{2} (Q-\\alpha ))}= C\\frac{ \\sqrt{(1-\\frac{\\kappa }{4})^2+\\frac{\\lambda \\kappa }{2}}}{\\sin (\\pi \\sqrt{(1-\\frac{\\kappa }{4})^2+\\frac{\\lambda \\kappa }{2}})}$ for some constant $C = C(\\gamma )$ .", "Since $\\kappa \\in (0,4)$ , we have $0 \\in (1-\\frac{\\kappa }{8}-\\frac{2}{\\kappa }, 1-\\frac{\\kappa }{8})$ .", "Thus we can obtain the value of $C$ by considering $1 = \\mathbb {E}[e^0] = \\frac{ C (1-\\kappa /4)}{\\sin (\\pi (1-\\kappa /4))}$ .", "This yields (REF ) in this case.", "Case II: $\\kappa < 4$ and $\\lambda \\in \\mathbb {R}$ .", "Since $\\vartheta (\\eta ) \\ge 0$ a.s. the function $\\lambda \\mapsto \\mathbb {E}[e^{\\lambda \\vartheta (\\eta )}]$ is increasing.", "Thus for $\\lambda < 0$ we have $\\mathbb {E}[e^{\\lambda \\vartheta (\\eta )}] \\le \\mathbb {E}[e^{0\\cdot \\vartheta (\\eta )}] = 1$ , and since $1 - \\frac{\\kappa }{8} -\\frac{2}{\\kappa }< 0$ we can use analytic continuation to extend the result from $(1-\\frac{\\kappa }{8} -\\frac{2}{\\kappa }, 1-\\frac{\\kappa }{8})$ to $(-\\infty , 1-\\frac{\\kappa }{8})$ .", "Finally, taking a limit from below, we have for any $\\lambda \\ge 1-\\frac{\\kappa }{8}$ that $\\mathbb {E}[e^{\\lambda \\vartheta (\\eta )}] \\ge \\lim _{\\lambda ^{\\prime } \\uparrow 1-\\frac{\\kappa }{8}} \\mathbb {E}[e^{\\lambda ^{\\prime } \\vartheta (\\eta )}] = \\infty $ , where the limit follows from the explicit formula obtained in Case I.", "Case III: $\\kappa = 4$ .", "For $\\lambda < \\frac{1}{2}$ , we obtain the result by taking $\\kappa \\uparrow 4$ as follows.", "Let $\\eta _\\kappa $ be sampled from $\\mathcal {L}_\\kappa $ , then $\\vartheta (\\eta _\\kappa ) \\rightarrow \\vartheta (\\eta _4)$ in law as $\\kappa \\uparrow 4$ by Lemma REF .", "Fix $\\lambda < \\lambda ^{\\prime } < 1 - \\frac{4}{8}$ .", "For $\\kappa $ sufficiently close to 4 the family $\\lbrace e^{\\lambda \\vartheta (\\eta _\\kappa )}\\rbrace $ is uniformly integrable, since Theorem REF gives a uniform bound on $\\mathbb {E}[e^{\\lambda ^{\\prime } \\vartheta (\\eta _\\kappa )}]$ for $\\kappa $ close to 4.", "Therefore $\\lim _{\\kappa \\uparrow 4} \\mathbb {E}[e^{\\lambda \\vartheta (\\eta _\\kappa )}] = \\mathbb {E}[e^{\\lambda \\vartheta (\\eta _4)}]$ .", "Now, for $\\lambda \\ge \\frac{1}{2}$ , the monotonicity argument of Case II gives $\\mathbb {E}[e^{\\lambda \\vartheta (\\eta )}] = \\infty $ .", "We now proceed to prove Propositions REF and REF in Sections REF and REF , respectively.", "Conformal welding of two quantum disks with generic insertions The goal of this section is to prove Proposition REF .", "Our argument closely follows that of Theorem REF .", "Suppose $\\eta $ is a simple curve in $\\mathcal {C}$ separating $\\pm \\infty $ with two marked points $p^-,p^+ \\in \\eta $ .", "Let $D^\\pm _\\eta $ be the connected components of $\\mathcal 
{C}\\backslash \\eta $ containing $\\pm \\infty $ , and let $\\psi ^\\pm _\\eta : \\mathbb {H}\\rightarrow D^\\pm _\\eta $ be the conformal maps sending $(i, 0)$ to $(\\pm \\infty , p^\\pm )$ .", "We need the following analogue of Lemma REF .", "Lemma 8.8 Let $\\eta $ be a simple curve in $\\mathcal {C}$ separating $\\pm \\infty $ with two marked points $p^-,p^+ \\in \\eta $ .", "For $\\varepsilon \\in (0,\\frac{1}{4})$ , let $\\mathcal {C}_{\\eta ,p^\\pm ,\\varepsilon } =\\mathcal {C}\\setminus (\\psi ^- _\\eta (B_\\varepsilon (i)) \\cup \\psi ^+_\\eta (B_\\varepsilon (i)))$ .", "For $\\phi $ sampled from $\\mathrm {LF}_\\mathcal {C}^{(\\gamma , \\pm \\infty )}$ , let $X^\\pm =\\phi \\circ \\psi ^\\pm _\\eta +Q \\log |(\\psi ^\\pm _\\eta )^{\\prime }|$ .", "Then for a fixed $\\alpha \\in (\\frac{\\gamma }{2},Q)$ and for any $\\varepsilon \\in (0,\\frac{1}{4})$ and any nonnegative measurable function $f$ of $\\phi |_{\\mathcal {C}_{\\eta ,p^\\pm ,\\varepsilon }}$ we have $&\\int f (\\phi |_{\\mathcal {C}_{\\eta ,p^\\pm ,\\varepsilon }}) \\times \\varepsilon ^{\\alpha ^2 - \\gamma ^2} e^{(\\alpha - \\gamma )(X^-_\\varepsilon (i) + X^+_\\varepsilon (i))}\\, d\\mathrm {LF}_\\mathcal {C}^{(\\gamma ,\\pm \\infty )}\\\\&= \\int f (\\phi |_{\\mathcal {C}_{\\eta ,p^\\pm ,\\varepsilon }}) \\mathopen {}\\mathclose {\\left(\\frac{1}{4}\\mathrm {CR}(\\exp (\\eta ), 0)\\mathrm {CR}(\\exp (-\\eta ), 0)\\right)}^{-\\frac{\\alpha ^2}{2} + Q \\alpha - 2} \\, d\\mathrm {LF}_\\mathcal {C}^{(\\alpha , \\pm \\infty )}.$ We reparametrize in $ and apply the argument of Lemma~\\ref {lem:reweight} twice.", "Let $ g: be given by $g(z) = \\frac{z}{z-1}$ and let $G : \\mathcal {C}\\rightarrow be given by $ G = g $.", "By \\cite [Lemma 2.13]{AHS-SLE-integrability}, if $$ is sampled from $ LFC(, )$ then $ := G-1 + Q |(G-1)'|$ has law $ LF(, 0), (, -1)$, and the same is true when $$ is replaced by $$.", "Let $ (, p-, p+) = (g(), g(p-), g(p+))$.", "Since $ G'(0) = 1$ and $ ddz (G(1z))|z=0 = 1$, we see that $ CR((), 0) = CR( , 0)$ and $ CR( (-), 0) = CR(, -1)$.", "Thus, it is equivalent to show that if $ , p, := (G(-(B(i)))G(+( B(i))) )$ and $ f$ is a function of $ |, p, $, then{\\begin{@align*}{1}{-1}&\\int \\hat{f} (\\hat{\\phi }|_{{\\hat{\\eta }, \\hat{p}^\\pm , \\varepsilon }}) \\times \\varepsilon ^{\\alpha ^2 - \\gamma ^2} e^{(\\alpha - \\gamma )(X^-_\\varepsilon (i) + X^+_\\varepsilon (i))} \\, d\\mathrm {LF}_{(\\gamma ,0), (\\gamma , -1)}\\\\&= \\int \\hat{f} (\\hat{\\phi }|_{{\\hat{\\eta }, \\hat{p}^\\pm , \\varepsilon }}) \\mathopen {}\\mathclose {\\left(\\frac{1}{4}\\mathrm {CR}(\\hat{\\eta }, 0)\\mathrm {CR}(\\hat{\\eta }, -1)\\right)}^{-\\frac{\\alpha ^2}{2} + Q \\alpha - 2} d\\mathrm {LF}_{(\\alpha , 0), (\\alpha , - 1)}.\\end{@align*}}Indeed, Lemma~\\ref {lem:reweight} shows that{\\begin{@align*}{1}{-1}\\int \\hat{f} (\\hat{\\phi }) \\times \\varepsilon ^{\\frac{1}{2}(\\alpha ^2 - \\gamma ^2)} e^{(\\alpha - \\gamma )X^-_\\varepsilon (i)} \\, d\\mathrm {LF}_{(\\gamma ,0), (\\gamma , -1)}= \\int \\hat{f} (\\hat{\\phi }) \\mathopen {}\\mathclose {\\left(\\frac{1}{2}\\mathrm {CR}(\\hat{\\eta }, 0)\\right)}^{-\\frac{\\alpha ^2}{2} + Q \\alpha - 2} d\\mathrm {LF}_{(\\alpha , 0), (\\gamma , - 1)},\\end{@align*}}and applying the argument of Lemma~\\ref {lem:reweight} again to change the insertion at $ -1$ yields the result.$ For a curve $\\eta $ in $\\mathcal {C}$ separating $\\pm \\infty $ , we let $\\mathrm {Harm}_{-\\infty , \\eta }$ (resp.", "$\\mathrm {Harm}_{+\\infty , \\eta }$ ) be the harmonic measure on 
$\\eta $ viewed from $-\\infty $ (resp., $+\\infty $ ).", "Lemma 8.9 There is a constant $C = C(\\gamma )$ such that the following holds.", "Suppose $\\alpha \\in (\\frac{\\gamma }{2}, Q)$ .", "Sample $(\\phi , \\eta , p^-, p^+)$ from the measure $C \\cdot \\mathrm {LF}_\\mathcal {C}^{(\\alpha , \\pm \\infty )}(d\\phi )\\, \\mathcal {L}_\\kappa ^\\alpha (d\\eta )\\, \\mathrm {Harm}_{-\\infty , \\eta }(dp^-)\\, \\mathrm {Harm}_{+\\infty , \\eta }(dp^+).$ Let $X_\\pm = \\phi \\circ \\psi _\\eta ^\\pm + Q \\log |(\\psi _\\eta ^\\pm )^{\\prime }|$ .", "Let $\\tau $ be the quantum length of the clockwise arc from $p^-$ to $p^+$ in $D^+_\\eta $ .", "Then the law of $(X^-, X^+, \\tau )$ is $\\int _0^\\infty \\mathrm {LF}_\\mathbb {H}^{(\\alpha , i)}(\\ell ) \\times \\mathrm {LF}_\\mathbb {H}^{(\\alpha , i)}(\\ell ) \\times [1_{\\tau \\in (0, \\ell )} d\\tau ] \\, d\\ell .$ We first prove the case $\\alpha = \\gamma $ , namely $C \\cdot \\mathrm {LF}_\\mathcal {C}^{(\\gamma , \\pm \\infty )}(d\\phi )\\, \\mathcal {L}_\\kappa (d\\eta )\\, \\mathrm {Harm}_{-\\infty , \\eta }(dp^-)\\, \\mathrm {Harm}_{+\\infty , \\eta }(dp^+) = \\int _0^\\infty \\mathrm {LF}_\\mathbb {H}^{(\\gamma , i)}(\\ell ) \\times \\mathrm {LF}_\\mathbb {H}^{(\\gamma , i)}(\\ell ) \\times [1_{\\tau \\in (0, \\ell )} d\\tau ] \\, d\\ell ,$ where with abuse of notation we view the left hand side as a measure on triples $(X^-, X^+, \\tau ) \\in H^{-1}(\\mathbb {H})\\times H^{-1}(\\mathbb {H})\\times [0,\\infty )$ .", "Indeed, (REF ) is an immediate consequence of Proposition REF and Lemma REF .", "Now, let $\\varepsilon \\in (0, \\frac{1}{4})$ and $\\mathbb {H}_\\varepsilon := \\mathbb {H}\\backslash B_\\varepsilon (i)$ .", "Let $f$ be a nonnegative measurable function of $(X^-|_{\\mathbb {H}_\\varepsilon }, X^+|_{\\mathbb {H}_\\varepsilon }, \\tau )$ , then reweighting (REF ) gives $C\\int f(X^-|_{\\mathbb {H}_\\varepsilon }, X^+|_{\\mathbb {H}_\\varepsilon }, \\tau ) \\varepsilon ^{\\alpha ^2 - \\gamma ^2} e^{(\\alpha - \\gamma ) (X^-_\\varepsilon (i) + X^+_\\varepsilon (i))}\\mathrm {LF}_\\mathcal {C}^{(\\gamma ,\\pm \\infty )}(d\\phi ) \\mathcal {L}_\\kappa (d\\eta ) \\mathrm {Harm}_{-\\infty , \\eta }(dp^+)\\mathrm {Harm}_{-\\infty , \\eta }(dp^+) \\\\= \\int _0^\\infty \\mathopen {}\\mathclose {\\left(\\int f(X^-|_{\\mathbb {H}_\\varepsilon }, X^+|_{\\mathbb {H}_\\varepsilon }, \\tau ) \\varepsilon ^{\\alpha ^2 - \\gamma ^2} e^{(\\alpha - \\gamma ) (X^-_\\varepsilon (i) + X^+_\\varepsilon (i))} \\ell \\mathrm {LF}_\\mathbb {H}^{(\\gamma , i)}(\\ell ) \\times \\mathrm {LF}_\\mathbb {H}^{(\\gamma , i)}(\\ell ) \\times [1_{\\tau \\in (0, \\ell )} d\\tau ] \\right)} \\, d\\ell .$ By Lemma REF , the left hand side equals $ C\\int f(X^-|_{\\mathbb {H}_\\varepsilon }, X^+|_{\\mathbb {H}_\\varepsilon }, \\tau ) \\mathrm {LF}_\\mathcal {C}^{(\\alpha ,\\pm \\infty )}(d\\phi ) \\mathcal {L}_\\kappa ^\\alpha (d\\eta ) \\mathrm {Harm}_{-\\infty , \\eta }(dp^-)\\mathrm {Harm}_{+\\infty , \\eta }(dp^+).$ By Lemma REF , the right hand side equals $ \\int _0^\\infty \\mathopen {}\\mathclose {\\left(\\int f(X^-|_{\\mathbb {H}_\\varepsilon }, X^+|_{\\mathbb {H}_\\varepsilon }, \\tau ) \\mathrm {LF}_\\mathbb {H}^{(\\alpha , i)}(\\ell ) \\times \\mathrm {LF}_\\mathbb {H}^{(\\alpha , i)}(\\ell ) \\times [1_{\\tau \\in (0, \\ell )} d\\tau ] \\right)} \\, d\\ell .", "$ Since the above two expressions agree for every $\\varepsilon $ and $f$ , we obtain the result.", "[Proof of Proposition REF ] In Lemma REF , the law of $(\\mathcal {C}, \\phi , \\eta 
, \\pm \\infty )/{\\sim _\\gamma }$ is $C (Q-\\alpha )^2 \\mathcal {M}_2^\\mathrm {sph}(\\alpha ) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep}, \\alpha }$ by (REF ), and the joint law of $((\\mathbb {H}, X^-, i)/{\\sim _\\gamma }, (\\mathbb {H}, X^+, i)/{\\sim _\\gamma }, \\tau )$ is $\\int _0^\\infty \\ell \\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell ) \\times \\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell ) \\times [1_{\\tau \\in (0,\\ell )} \\ell ^{-1} d\\tau ]\\, d\\ell $ .", "Since $[1_{\\tau \\in (0,\\ell )} \\ell ^{-1} d\\tau ]$ corresponds to the uniform conformal welding, the law of $(\\mathcal {C}, \\phi , \\eta , \\pm \\infty )/{\\sim _\\gamma }$ is $\\int _0^\\infty \\ell \\mathrm {Weld}(\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell ), \\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell ))\\, d\\ell $ .", "Proof of Proposition  REF We first reformulate Proposition REF by embedding in the cylinder.", "Sample the field $h$ as in Definition REF so that the law of $(\\mathcal {C}, h , -\\infty , +\\infty )/{\\sim _\\gamma }$ is $\\mathcal {M}_2^\\mathrm {sph}(\\alpha )$ .", "Now we restrict to the event $\\lbrace \\mu _h(\\mathcal {C}) > 1\\rbrace $ and set $\\phi := h(\\cdot - a)$ , where $a \\in \\mathbb {R}$ is such that $\\mu _h((-\\infty , a) \\times [0,2\\pi ]) = 1$ .", "Let $M$ be the law of $\\phi $ under this restriction.", "Lemma 8.10 Let $\\mathcal {M}_2^\\mathrm {sph}(\\alpha )|_{\\lbrace A > 1\\rbrace }$ be the restriction of $\\mathcal {M}_2^\\mathrm {sph}(\\alpha )$ to quantum surfaces with quantum area greater than 1.", "If we sample $(\\phi , \\mathbf {t}, \\eta ^0)$ from $ M \\times dt\\times \\mathcal {L}_\\kappa ^\\alpha $ where $dt$ is Lebesgue measure on $\\mathbb {R}$ and set $\\eta = \\eta ^0 + \\mathbf {t}$ , then the law of $(\\mathcal {C}, \\phi , \\eta , \\pm \\infty )/{\\sim _\\gamma }$ is $(\\mathcal {M}_2^\\mathrm {sph}(\\alpha )|_{\\lbrace A>1\\rbrace }) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep},\\alpha }$ .", "This is immediate from the definitions of $M$ and $\\operatorname{SLE}_\\kappa ^{\\mathrm {sep},\\alpha }$ .", "For a field and curve $(\\phi , \\eta )$ , writing $D_\\pm $ for the connected components of $\\mathcal {C}\\backslash \\eta $ containing $\\pm \\infty $ , we define the event $E_{\\delta , \\varepsilon } = \\lbrace (\\phi , \\eta ): \\varepsilon < \\nu _\\phi (\\eta )< \\delta \\rbrace \\text{ and } \\mu _\\phi (D_-) >1 \\rbrace $ .", "This is the same event as that of Proposition REF , but phrased in terms of the embedded field and curve.", "By Lemma REF , Proposition REF is equivalent to $(M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[E_{\\delta , \\varepsilon }] = (1+o_{\\delta ,\\varepsilon }(1)) \\frac{\\overline{R}(\\alpha )}{2(Q-\\alpha )^2} |\\mathcal {L}_\\kappa ^\\alpha | \\log \\varepsilon ^{-1},$ where the error term $o_{\\delta ,\\varepsilon }(1)$ satisfies $\\lim _{\\delta \\rightarrow 0} \\lim _{\\varepsilon \\rightarrow 0} o_{\\delta ,\\varepsilon }(1)=0$ .", "To prove (REF ), we use the following description of the law of $\\phi |_{\\mathcal {C}_+}$ when $\\phi $ is sampled from $M$ .", "For $t \\ge 0$ let $X_s$ be the average of $\\phi $ on $[s,s+2\\pi i]/{\\sim }$ .", "Lemma 8.11 Conditioned on $\\phi |_{\\mathcal {C}_-}$ , we have $\\phi |_{\\mathcal {C}_+} \\stackrel{d}{=} \\phi _0 + \\mathfrak {h} - (Q-\\alpha ) \\operatorname{Re}\\cdot $ , where $\\phi _0$ is a zero boundary GFF on $\\mathcal {C}_+$ and $\\mathfrak {h}$ is a harmonic function determined by $\\phi |_{\\mathcal 
{C}_-}$ whose average on $[s, s+2\\pi i]/{\\sim }$ is constant as $s>0$ varies.", "In particular, conditioned on $X_0$ , the conditional law of $(X_s)_{s \\ge 0}$ is Brownian motion with initial value $X_0$ and downward drift of $-(Q-\\alpha )$ .", "This is the sphere analog of [1], and the proof is identical so we omit it.", "Define for $x,y>0$ the random times $\\sigma _x = \\inf \\lbrace s > 0 \\: : \\: X_s < \\frac{2}{\\gamma }\\log x\\rbrace \\vee 0, \\quad \\tau _y = \\sup \\lbrace s > 0 \\: : \\: X_s > \\frac{2}{\\gamma }\\log y\\rbrace \\vee 0.$ Lemma 8.12 $M^\\#$ -a.e.", "the field $\\phi $ satisfies $\\lim _{y \\rightarrow 0} \\frac{\\sigma _y}{\\log y^{-1}} = \\lim _{y \\rightarrow 0} \\frac{\\tau _y}{\\log y^{-1}} = \\frac{2}{\\gamma (Q-\\alpha )}$ .", "Given Lemma REF , this is a straightforward Brownian motion fact.", "For $C>0$ define the event $F_{x,y,C} = \\lbrace (\\phi , \\mathbf {t}, \\eta ^0) \\: : \\: \\mathbf {t} \\in (\\sigma _x - C, \\tau _y) \\rbrace .$ We first bound $(M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[E_{\\delta , \\varepsilon }]$ from above by comparing $E_{\\delta , \\varepsilon }$ to $F_{\\delta ^{1-\\zeta } ,\\varepsilon ^{1+\\zeta }, C}$ .", "Lemma 8.13 For fixed $x>0, C \\ge 0$ we have as $y \\rightarrow 0$ $(M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[F_{x, y, C}] = (1+o_y(1)) |M| |\\mathcal {L}_\\kappa ^\\alpha | \\frac{2 \\log y^{-1}}{\\gamma (Q-\\alpha )},$ Moreover, $(M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[E_{\\delta , \\varepsilon }] \\le (1+o_{\\delta ,\\varepsilon }(1)) |M| |\\mathcal {L}_\\kappa ^\\alpha | \\frac{2\\log \\varepsilon ^{-1}}{\\gamma (Q-\\alpha )}.$ Note that (REF ) is equivalent to $M^\\#[\\tau _y - \\sigma _x + C] = (1+o_y(1)) \\frac{\\frac{2}{\\gamma }\\log y^{-1}}{Q-\\alpha }$ .", "The upper bound follows from standard Brownian motion facts.", "Indeed Lemma REF states that the field average process $X_s$ has random starting value $X_0$ and then evolves as Brownian motion with drift $-(Q-\\alpha )$ , so letting $Y_s$ be Brownian motion with starting value $\\frac{2}{\\gamma }\\log x$ and drift $-(Q-\\alpha )$ , and letting $T>0$ be the time $Y_s$ first hits $\\frac{2}{\\gamma }\\log y$ , we have $M^\\#[\\tau _y - \\sigma _x] \\le \\mathbb {E}[T]$ , and it is well known that $\\mathbb {E}[T] = (1+o_y(1)) \\frac{\\frac{2}{\\gamma }\\log y^{-1}}{Q-\\alpha }$ .", "The lower bound follows from Lemma REF .", "This gives (REF ) For (REF ), it suffices to show that there exists some $C>0$ such that for any fixed $\\zeta > 0$ , as $\\varepsilon \\rightarrow 0$ then $\\delta \\rightarrow 0$ , $(M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[F_{\\delta ^{1-\\zeta }, \\varepsilon ^{1+\\zeta },C} \\mid E_{\\delta , \\varepsilon }] = 1-o_{\\delta ,\\varepsilon }(1),$ since this and (REF ) imply $(M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[E_{\\delta , \\varepsilon }] \\le (1+o_{\\delta ,\\varepsilon }(1)(M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[F_{\\delta ^{1-\\zeta }, \\varepsilon ^{1+\\zeta },C}] = (1+o_{\\delta ,\\varepsilon }(1)) |M| |\\mathcal {L}_\\kappa ^\\alpha | \\frac{(1+\\zeta )\\frac{2}{\\gamma }\\log y^{-1}}{Q-\\alpha },$ and we can send $\\zeta \\rightarrow 0$ to conclude.", "Sample $(\\phi , \\mathbf {t}, \\eta ^0)$ from $M\\times dt \\times \\mathcal {L}_\\kappa ^\\alpha $ and condition on the quantum length of $\\eta ^0$ being $\\ell > 0$ .", "We will show that the conditional probability of $F_{\\ell ^{1-\\zeta }, \\ell ^{1+\\zeta }, C}$ is $1-o_\\ell 
(1)$ where $\\lim _{\\ell \\rightarrow 0} o_\\ell (1) = 0$ ; this clearly implies (REF ).", "Let $\\mathcal {C}_{\\eta ^0}^+$ be the connected component of $\\mathcal {C}\\backslash \\eta ^0$ containing $+\\infty $ .", "By Lemma REF and Proposition REF the conditional law of the quantum surface $(\\mathcal {C}_{\\eta ^0}^+, \\hat{\\phi }, \\eta ^0, +\\infty )$ is $\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell )^\\#$ , where $\\hat{\\phi }:= \\phi (\\cdot - \\mathbf {t})$ .", "Although Lemma REF is stated for embeddings of $\\mathrm {QD}_{1,0}(\\ell )^\\#$ , the same argument holds for embeddings of $\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell )^\\#$ .", "Therefore, there is a constant $C$ not depending on $\\ell $ such that, for a fixed smooth function $g$ with support in $\\lbrace z \\in \\mathcal {C}: \\operatorname{Re}z \\in [C, C+1]\\rbrace $ which is constant on vertical lines and satisfies $\\int g(z) \\,dz = 1$ , we have $\\mathbb {P}[(\\hat{\\phi }, g) \\in ((1+\\zeta ) \\frac{2}{\\gamma }\\log \\ell , (1-\\zeta ) \\frac{2}{\\gamma }\\log \\ell )] = 1 - o_\\ell (1).$ Therefore, with probability $1-o_\\ell (1)$ there is some $s \\in [C,C+1]$ for which the average of $\\hat{\\phi }$ on $[s, s+2\\pi i]$ lies in $((1+\\zeta ) \\frac{2}{\\gamma }\\log \\ell , (1-\\zeta ) \\frac{2}{\\gamma }\\log \\ell )$ , i.e.", "$X_{\\mathbf {t} + s} \\in ((1+\\zeta ) \\frac{2}{\\gamma }\\log \\ell , (1-\\zeta ) \\frac{2}{\\gamma }\\log \\ell )$ .", "This inclusion implies $\\mathbf {t} + s \\in (\\sigma _{\\ell ^{1-\\zeta }}, \\tau _{\\ell ^{1+\\zeta }})$ , so $\\mathbf {t} \\in (\\sigma _{\\ell ^{1-\\zeta }} - C - 1, \\tau _{\\ell ^{1+\\zeta }})$ .", "Thus, $F_{\\ell ^{1-\\zeta }, \\ell ^{1+\\zeta }, C+1}$ occurs with probability $1-o_\\ell (1)$ .", "Now, we obtain the lower bound for $(M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[E_{\\delta , \\varepsilon }]$ by coupling $\\phi $ with a GFF.", "Lemma 8.14 We have $(M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[E_{\\delta , \\varepsilon }] \\ge (1-o_{\\delta ,\\varepsilon }(1)) |M| |\\mathcal {L}_\\kappa ^\\alpha | \\frac{(1-\\zeta ) \\frac{2}{\\gamma }\\log \\varepsilon ^{-1}}{Q-\\alpha }.$ We first construct a coupling $h$ sampled from $P_\\mathcal {C}$ with $\\phi $ sampled from $M^\\#$ such that $\\phi |_{\\mathcal {C}_+} = h_2|_{\\mathcal {C}_+} + X_{\\operatorname{Re}\\cdot } + \\hat{\\mathfrak {h}} $ where $h_2$ is the projection of $h$ to $\\mathcal {H}_2(\\mathcal {C})$ , $X_s$ is the average of $\\phi $ on $[s, s+2\\pi i]/{\\sim }$ , $\\hat{\\mathfrak {h}}$ is a random harmonic function on $\\mathcal {C}_+$ which has average zero on $[s, s+2\\pi i]/{\\sim }$ for all $s > 0$ , and $h_2$ is independent of $(X_s)_{s \\ge 0}$ .", "Indeed, [51] says that $h$ can be decomposed via $h|_{\\mathcal {C}_+} = h_0 + \\widetilde{\\mathfrak {h}}$ where $h_0$ is a zero boundary GFF on $\\mathcal {C}_+$ and $\\widetilde{\\mathfrak {h}}$ is a harmonic function on $\\mathcal {C}_+$ whose average on $[s, s+2\\pi i]/{\\sim }$ is zero for all $s > 0$ .", "Similarly, Lemma REF gives a decomposition $\\phi |_{\\mathcal {C}_+} = \\phi _0 + \\mathfrak {h} - (Q-\\alpha ) \\operatorname{Re}\\cdot $ where $\\phi _0$ is a zero boundary GFF on $\\mathcal {C}_+$ and $\\mathfrak {h}$ is a harmonic function on $\\mathcal {C}_+$ whose average on $[s, s+2\\pi i]/{\\sim }$ is the same for all $s>0$ .", "We couple $h$ with $\\phi $ so that $h_0 = \\phi _0$ , then the above properties are satisfied.", "Independently sample $\\eta ^0$ from $(\\mathcal 
{L}_\\kappa ^\\alpha )^\\#$ .", "We write $\\mathbb {P}$ to denote the probability measure on $(\\phi , h, \\eta ^0)$ and $\\mathbb {E}$ for the expectation over $\\mathbb {P}$ .", "Let $\\zeta > 0$ be a parameter we send to zero at the end.", "The properties of $\\hat{\\mathfrak {h}}$ guarantee that a.s. $\\sup _{\\operatorname{Re}z > 1} |\\hat{\\mathfrak {h}}(z)| < \\infty $ , hence $\\mathbb {P}[\\sup _{\\operatorname{Re}z > 1} |\\hat{\\mathfrak {h}}(z)| < \\zeta \\cdot \\frac{2}{\\gamma }\\log \\delta ^{-1}] = 1 - o_{\\delta ,\\varepsilon }(1)$ .", "Also, setting $I := ((1+3\\zeta )\\frac{2}{\\gamma (Q-\\alpha )} \\log \\delta ^{-1} + \\delta ^{-1}, (1-3\\zeta ) \\frac{2}{\\gamma (Q-\\alpha )} \\log \\varepsilon ^{-1})$ , by Lemma REF we have $I \\subset ( \\tau _{\\delta ^{1+2\\zeta }} + \\delta ^{-1} , \\sigma _{\\varepsilon ^{1-2\\zeta }})$ with probability $1-o_{\\delta ,\\varepsilon }(1)$ .", "Finally, $\\mathbb {P}[ \\inf _{z \\in \\eta ^0} \\operatorname{Re}z > -\\delta ^{-1}] = 1-o_{\\delta ,\\varepsilon }(1)$ .", "Let $G$ be the intersection of these three good events, so $\\mathbb {P}[G] = 1-o_{\\delta ,\\varepsilon }(1)$ .", "On $G$ , if $t \\in I$ , then $X_{\\operatorname{Re}z} + \\widehat{h}(z) \\in ((1-\\zeta )\\frac{2}{\\gamma }\\log \\varepsilon , (1+\\zeta ) \\frac{2}{\\gamma }\\log \\delta )$ for all $z$ in a neighborhood of $\\eta ^0 + t$ .", "Thus $(M^\\# \\times dt \\times (\\mathcal {L}_\\kappa ^\\alpha )^\\#)[E_{\\delta , \\varepsilon }] \\ge \\mathbb {E}[1_G \\int _I 1_{\\nu _\\phi (\\eta ^0+t) \\in (\\varepsilon , \\delta )} \\, dt ] &\\ge \\mathbb {E}[1_G \\int _I 1_{\\nu _{h_2}(\\eta ^0+t) \\in (\\varepsilon ^{\\zeta }, \\delta ^{-\\zeta })} \\, dt ]\\\\&\\ge \\mathbb {E}[\\int _I 1_{\\nu _{h_2}(\\eta ^0+t) \\in (\\varepsilon ^{\\zeta }, \\delta ^{-\\zeta })} \\, dt ] - o_{\\delta ,\\varepsilon }(1)|I|.$ In our coupling, $h_2$ and $\\eta ^0$ are independent, and $h_2 \\stackrel{d}{=} h_2(\\cdot - t)$ for each $t \\in \\mathbb {R}$ , so $\\mathbb {E}[\\int _I 1_{\\nu _{h_2}(\\eta ^0+t) \\in (\\varepsilon ^{\\zeta }, \\delta ^{-\\zeta })} \\, dt ] = |I| \\mathbb {P}[\\nu _{h_2}(\\eta ^0) \\in (\\varepsilon ^{\\zeta }, \\delta ^{-\\zeta })] - o_{\\delta ,\\varepsilon }(1)|I| = (1-o_{\\delta ,\\varepsilon }(1))|I|,$ hence $ (M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[E_{\\delta , \\varepsilon }] \\ge (1-o_{\\delta ,\\varepsilon }(1))|M||\\mathcal {L}_\\kappa ^\\alpha | |I| \\ge (1-o_{\\delta ,\\varepsilon }(1))|M||\\mathcal {L}_\\kappa ^\\alpha | (1-4\\zeta ) \\frac{2\\log \\varepsilon ^{-1}}{\\gamma (Q-\\alpha )}.$ [Proof of Proposition REF ] By Lemmas REF and REF , we have $(M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[E_{\\delta , \\varepsilon }] = (1+o_{\\delta ,\\varepsilon }(1)) |M| |\\mathcal {L}_\\kappa ^\\alpha | \\frac{\\frac{2}{\\gamma }\\log \\varepsilon ^{-1}}{Q-\\alpha }.$ By Lemma REF , we have $|M| = \\int _1^\\infty \\frac{1}{2} \\overline{R}(\\alpha ) a^{\\frac{2}{\\gamma }(\\alpha -Q)-1} \\, da = \\frac{\\gamma \\overline{R}(\\alpha )}{4 (Q-\\alpha )}$ .", "Therefore we have (REF ).", "Background on CLE coupled with LQG In this section we explain how Propositions REF and REF follow from existing literature.", "Proof of Proposition  REF Proposition REF can be easily reduced to the following statement.", "Proposition A.1 In the setting of Proposition REF , we can find a random point $p\\in \\eta $ such that, conditioning on $(A_\\eta , h, \\Gamma |_{A_\\eta }, p)/{\\sim _\\gamma }$ , the conditional law of $(D_\\eta , h, z, 
p)/{\\sim _\\gamma }$ is $\\mathrm {QD}_{1,1}(\\ell _h(\\eta ))$ .", "[Proof of Proposition REF given Proposition REF ] Suppose $(h, \\Gamma , p,z)$ satisfies Proposition REF .", "Conditioning on $(h, \\Gamma , p,z)$ , let $U$ be a uniform random variable on $(0,1)$ .", "Let $w$ be the point of $\\eta $ such that the counterclockwise arc on $\\eta $ from $p$ to $w$ is of $\\ell _h$ -length $U\\ell _h(\\eta )$ .", "By definition, $w$ is sampled from the probability measure proportional to $\\ell _h(\\eta )$ .", "By Proposition REF and the re-rooting invariance of $\\mathrm {QD}_{1,1}(\\ell _h(\\eta ))$ , conditioning on $(A_\\eta , h, \\Gamma |_{A_\\eta }, p)/{\\sim _\\gamma }$ and $U$ , the conditional law of $(D_\\eta , h, z, w)/{\\sim _\\gamma }$ is $\\mathrm {QD}_{1,1}(\\ell _h(\\eta ))$ .", "Since $(A_\\eta , h, \\Gamma |_{A_\\eta }, w)/{\\sim _\\gamma }$ is determined by $(A_\\eta , h, \\Gamma |_{A_\\eta }, p)/{\\sim _\\gamma }$ and $U$ , we are done.", "To find the desired $p$ in Proposition REF , we use the conformal percolation interface (CPI) within a CLE carpet introduced by Miller, Sheffield and Werner [54].", "Suppose $\\Gamma $ is a $\\operatorname{CLE}_\\kappa $ on a Jordan domain $D$ (i.e.", "$\\partial D$ is a simple curve) for some $\\kappa \\in (\\frac{8}{3},4)$ .", "Given two boundary points $x,y$ , a (chordal) CPI for $\\Gamma $ from $x$ to $y$ is a random curve from $x$ to $y$ coupled with $\\Gamma $ that does not enter the interior of any region surrounded by a loop of $\\Gamma $ (but it can touch the loops).", "We also need to specify how a CPI proceeds upon hitting a loop of $\\Gamma $ on its way from $x$ to $y$ .", "We require that it always leaves the loop to its right.", "In the terminology of [54], this corresponding to CPI with $\\beta =1$ .", "The marginal law of a CPI is $\\operatorname{SLE}_{\\kappa ^{\\prime }}(\\kappa ^{\\prime }-6)$ on $D$ from $x$ to $y$ where $\\kappa ^{\\prime }=16/\\kappa $ [54] and the force point is on the right of $x$ .", "In particular, a CPI is a non-simple curve.", "Intuitively, the chordal CPI describes the chordal interface of a percolation on a CLE carpet.", "It is characterized by certain conformal invariance and Markov properties which are consistent with this intuition; see [54].", "We will not review the full details but will rely on an analogous Markov property that CPI satisfies on a quantum disk background.", "This was established in [55], which we review now.", "Fix $\\kappa \\in (8/3,4)$ and $\\gamma =\\sqrt{\\kappa }$ .", "For $L_0,R_0>0$ , suppose $(D,h,\\Gamma , x)$ be an embedding of a sample from $\\mathrm {QD}_{0,1}^{\\#} (L_0+R_0)\\otimes \\operatorname{CLE}_\\kappa $ .", "Let $y$ be the point on $\\partial D$ such that the quantum length of the counterclockwise arc from $x$ to $y$ is $R_0$ .", "Conditioning on $(h,\\Gamma , x,y)$ , sample a CPI $\\eta ^{\\prime }$ within the carpet of $\\Gamma $ from $x$ to $y$ .", "Since the law of $\\eta ^{\\prime }$ is a $\\operatorname{SLE}_{\\kappa ^{\\prime }}(\\kappa ^{\\prime }-6)$ , there is a quantum natural time parametrization for $\\eta ^{\\prime }$ with respect to $h$  [19], which we use throughout.", "Under this parametrization $\\eta ^{\\prime }$ has a finite duration $T$ .", "For a fixed time $t>0$ , on the event that $t\\le T$ , let $\\widetilde{\\eta }^{\\prime }_t$ be the union of $\\eta ^{\\prime }[0,t]$ and all the loops of $\\Gamma $ touched by $\\eta ^{\\prime }[0,t]$ .", "If $t<T$ , let $D_t$ be the simply connected component of $D\\setminus 
\\widetilde{\\eta }^{\\prime }_t$ which contains $y$ on its boundary.", "For a fixed $t$ , both $D_t$ and the interior of $D\\setminus D_t$ are Jordan domains a.s. We write the interior $D\\setminus D_t$ as $U_t$ .", "The interface between $D_t$ and $U_t$ are $\\operatorname{SLE}_\\kappa $ types curves, on which there is a well defined quantum length measure.", "Figure: We color D t D_t blue and U t U_t green.", "(a): At all but countably many times tt, we have D t =D t - D_t = D_{t^-}.", "To simplify (b)–(d) we omit their loops.", "(b): A loop discovery time.", "(c), (d): A splitting time.", "At times when the left boundary length L t L_t has a downward jump, there are two possible topologies; we do not illustrate the similar cases when the split is to the right.The following Markov property of CPI on quantum disks was proved in [55], although it was not explicitly stated.", "Proposition A.2 ([55]) For a fixed $t>0$ , on the even that $t<T$ , let $L_t$ and $R_t$ be the quantum lengths of the clockwise and counterclockwise arcs from $\\eta ^{\\prime } (t)$ to $y$ of $\\partial D_t$ .", "Conditioning on the decorated quantum surface $(U_t, h,\\Gamma |_{U_t}, \\eta ^{\\prime }|_{[0,t]})/{\\sim _\\gamma }$ and $(L_t,R_t)$ , the conditional law of $(D_t, h, \\Gamma |_{D_t},y)/{\\sim _\\gamma }$ is $\\mathrm {QD}_{0,1}(L_t+R_t)^\\# \\otimes \\operatorname{CLE}_\\kappa $ .", "This proposition is essentially [55], except that we condition on more information than they explicitly stated.", "But the argument carries over directly to our setting.", "The point $p$ in Proposition REF that we will find is a point where a CPI hits a loop.", "Therefore we need a stronger variant of the Markov property in Proposition REF at certain random times which we now define.", "For each $t\\in (0,T)$ , let $D_{t^-}$ be the interior of $\\cap _{s<t} D_s$ .", "According to [55], for each fixed time $t$ , on the event that $t<T$ , almost surely $D_{t^-}=D_t$ ; see Figure REF (a).", "But there exist countably many times where $D_{t^-}\\ne D_t$ .", "In this case, there are two scenarios: The point $\\eta ^{\\prime }(t)$ is on a loop of $\\Gamma $ .", "In this case the interior of $D_{t^-}\\setminus D_t$ is the Jordan domain enclosed by the this loop.", "But $D_t$ is not a Jordan domain since $\\eta ^{\\prime }(t)$ corresponds to two points on $\\partial D_t$ .", "See Figure REF (b).", "We call $t$ a loop discovery time.", "The point $\\eta ^{\\prime }(t)$ is not on a loop of $\\Gamma $ .", "In this case, both $D_t$ and $D_{t^-}\\setminus D_t$ are Jordan domains, and their boundaries intersect at the single point $\\eta ^{\\prime }(t)$ .", "See Figure REF (c)–(d).", "We call $t$ a splitting time.", "In both cases, we let $\\mathcal {B}_t$ be the interior of $D_{t^-}\\setminus D_t$ and $U_{t^-}$ be the interior of $D\\setminus D_{t^{-}}$ .", "Then $\\partial \\mathcal {B}_t\\setminus \\partial D$ is an $\\operatorname{SLE}_\\kappa $ type curve.", "By definition, $\\partial \\mathcal {B}_t$ is a loop in $\\Gamma $ if and only if $t$ is a loop discovery time.", "Recall $(L_t,R_t)$ from Proposition REF .", "If $t$ if it is a loop discovery time, then $R_t$ has an upward jump.", "If $t$ is a splitting time, then either $L_t$ or $R_t$ has a downward jump.", "In both cases, the size of the jump equals the quantum length of $\\partial \\mathcal {B}_t$ , which we denote by $ X_t$ .", "We now state the stronger version of Proposition REF .", "Proposition A.3 Fix $\\varepsilon >0$ and a positive integer $n$ .", "Let $\\tau $ be the 
$n$ -th time such that $D_{t^-}\\ne D_t$ and the quantum length $ X_t$ of $\\partial \\mathcal {B}_t$ is larger than $\\varepsilon $ .", "If this time never occurs, set $\\tau =\\infty $ .", "Conditioning on $\\tau <\\infty $ , the decorated quantum surface $(U_{\\tau ^-}, h,\\Gamma |_{U_{\\tau ^-}}, \\eta ^{\\prime }|_{[0,\\tau ]})/{\\sim _\\gamma }$ , the indicator $1_{\\lbrace \\partial \\mathcal {B}_\\tau \\textrm { is a loop} \\rbrace }$ , and the quantum lengths $X_\\tau $ of $\\partial \\mathcal {B}_\\tau $ and $L_\\tau ,R_\\tau $ of the two arcs on $D_\\tau $ , the conditional law of $(D_\\tau , h, \\Gamma |_{D_\\tau }, y)/{\\sim _\\gamma }$ and $(\\mathcal {B}_\\tau , h, \\Gamma |_{\\mathcal {B}_\\tau }, \\eta ^{\\prime }(\\tau ))/{\\sim _\\gamma }$ is given by independent samples from $\\mathrm {QD}_{0,1}(L_\\tau +R_\\tau )^\\# \\otimes \\operatorname{CLE}_\\kappa $ and $\\mathrm {QD}_{0,1}( X_\\tau )^\\# \\otimes \\operatorname{CLE}_\\kappa $ , respectively.", "For a fixed $t>0$ , on the event that $t \\le T$ , consider the ordered collection of decorated quantum surfaces $\\lbrace (\\mathcal {B}_s, h, \\Gamma |_{\\mathcal {B}_s} ,\\eta ^{\\prime }(s))/{\\sim _\\gamma } : s\\le t \\textrm { and } D_{s^{-}}\\ne D_s \\rbrace $ .", "It was proved in [55] that conditioning on $(D_t, h, \\Gamma |_{D_t}, \\eta ^{\\prime }(t), y)/{\\sim _\\gamma }$ , and the ordered information of the quantum lengths of their boundaries and whether their times are loop discovery or splitting, the conditional law of these decorated quantum surfaces are independent $\\operatorname{CLE}_\\kappa $ decorated quantum disks with given boundary lengths.", "To see why this assertion follows from [55], we note that Propositions 3.1 and 3.5 of [55] yield the corresponding assertion for the analogous case of CLE on the quantum half-plane.", "The pinching argument of [55] then gives this assertion.", "We claim that conditioning on $\\tau <\\infty $ , $(U_{\\tau ^-}, h,\\Gamma |_{U_{\\tau ^-}}, \\eta ^{\\prime }|_{[0,\\tau ]})/{\\sim _\\gamma }$ , $1_{\\lbrace \\partial \\mathcal {B}_\\tau \\textrm { is a loop} \\rbrace }$ , and $X_\\tau $ , $L_\\tau ,R_\\tau $ , the conditional law of $(\\mathcal {B}_\\tau , h, \\Gamma |_{\\mathcal {B}_\\tau }, \\eta ^{\\prime }(\\tau ))/{\\sim _\\gamma }$ is $\\mathrm {QD}_{0,1}( X_\\tau )^\\# \\otimes \\operatorname{CLE}_\\kappa $ .", "Fix a large $k>0$ .", "Let $s_k$ be the largest integer multiple of $2^{-k}$ smaller than $\\tau $ .", "Let $\\mathcal {U}_t=(U_t, h,\\Gamma |_{U_t}, \\eta ^{\\prime }|_{[0,t]})/{\\sim _\\gamma }$ and $\\mathcal {D}_t=(D_t, h|_{D_t}, \\Gamma |_{D_t},y)/{\\sim _\\gamma }$ .", "For a fixed $j$ , by Proposition REF , conditioning on $\\mathcal {U}_{j2^{-k}}$ and $(L_{j2^{-k}},R_{j2^{-k}})$ , the conditional law of $\\mathcal {D}_{j2^{-k}}$ is $\\mathrm {QD}_{0,1}(L_{j2^{-k}}+R_{j2^{-k}})^\\# \\otimes \\operatorname{CLE}_\\kappa $ .", "Note that $\\lbrace s_k=j2^{-k}\\rbrace $ is determined by $\\mathcal {U}_{j2^{-k}}$ and the quantum lengths of the boundaries of elements in $\\lbrace (\\mathcal {B}_s, h, \\Gamma |_{\\mathcal {B}_s} ,\\eta ^{\\prime }(s))/{\\sim _\\gamma } : j2^{-k} \\le s\\le (j+1)2^{-k} \\textrm { and } D_{s^{-}}\\ne D_s \\rbrace $ .", "Applying the assertion of the first paragraph to $\\mathcal {D}_{j2^{-k}}$ with $T=2^{-k}$ , we see that conditioning on $\\tau < \\infty $ , $\\mathcal {U}_{s_k}$ , $\\lbrace s_k=j2^{-k}\\rbrace $ , $1_{\\lbrace \\partial \\mathcal {B}_{\\tau } \\textrm { is a loop} \\rbrace }$ , $X_{\\tau }$ , 
$L_{s_k+2^{-k}}$ and $R_{s_k+2^{-k}}$ , the conditional law of $(\\mathcal {B}_{\\tau }, h, \\Gamma |_{\\mathcal {B}_{\\tau }}, \\eta ^{\\prime }(\\tau ))/{\\sim _\\gamma }$ is $\\mathrm {QD}_{0,1}( X_{\\tau })^\\# \\otimes \\operatorname{CLE}_\\kappa $ .", "Varying $j$ , we can remove the condition $\\lbrace s_k=j2^{-k}\\rbrace $ .", "Since almost surely $\\mathcal {U}_{s_k}\\rightarrow (U_{\\tau ^-}, h,\\Gamma |_{U_{\\tau ^-}}, \\eta ^{\\prime }|_{[0,\\tau ]})/{\\sim _\\gamma }$ and $(L_{s_k+2^{-k}},R_{s_k+2^{-k}})\\rightarrow (L_\\tau ,R_\\tau )$ as $k\\rightarrow \\infty $ , we have proved the desired claim.", "It remains to show that conditioning on $\\tau <\\infty $ , $(U_{\\tau ^-}, h,\\Gamma |_{U_{\\tau ^-}}, \\eta ^{\\prime }|_{[0,\\tau ]})/{\\sim _\\gamma }$ , $1_{\\lbrace \\partial \\mathcal {B}_\\tau \\textrm { is a loop} \\rbrace }$ , $X_\\tau $ , $L_\\tau ,R_\\tau $ and $(\\mathcal {B}_\\tau , h, \\Gamma |_{\\mathcal {B}_\\tau }, \\eta ^{\\prime }(\\tau ))/{\\sim _\\gamma }$ , the conditional law of $(D_\\tau , h, \\Gamma |_{D_\\tau }, y)/{\\sim _\\gamma }$ is $\\mathrm {QD}_{0,1}(L_\\tau +R_\\tau )^\\# \\otimes \\operatorname{CLE}_\\kappa $ .", "This follows from a similar but easier argument: we consider the smallest multiple of $2^{-k}$ larger than $\\tau $ and use the Markov property in Proposition REF at this time.", "We omit the details.", "[Proof of Proposition REF ] For $a>0$ , let $(D,h,\\Gamma ,x)$ be an embedding of a sample from $\\mathrm {QD}_{0,1} (a)^{\\#}\\otimes \\operatorname{CLE}_\\kappa $ .", "Now we reweight $\\mathrm {QD}_{0,1} (a)^{\\#}\\otimes \\operatorname{CLE}_\\kappa $ by $\\mu _h(D)$ and sample a point $z$ according to the probability measure proportional to $\\mu _h$ .", "This way, the law of $(D,h,\\Gamma ,z,x)$ is $\\mathrm {QD}_{1,1} (a)^{\\#}\\otimes \\operatorname{CLE}_\\kappa $ as in Propositions REF and REF .", "Let $y$ be the point on $\\partial D$ such that both the two arcs between $x$ to $y$ have quantum length $a/2$ , and sample a CPI $\\eta ^{\\prime }$ from $x$ to $y$ , parametrized by quantum natural time.", "Let $t_0$ be the time such $z\\in \\mathcal {B}_{t_0}$ and set $p_0=\\eta ^{\\prime }(t_0)$ .", "Consider $\\tau $ and $\\mathcal {B}_\\tau $ as defined in Proposition REF with this choice of $(D,h,\\Gamma , x,y)$ .", "Then on the event that $t_0=\\tau $ , namely $z\\in \\mathcal {B}_\\tau $ , conditioning on $(U_{\\tau ^-},h,\\Gamma |_{U_{\\tau ^-}}, \\eta ^{\\prime }|_{[0,\\tau ]})/{\\sim _\\gamma }$ , $1_{\\lbrace \\partial \\mathcal {B}_t \\textrm { is a loop} \\rbrace }$ , the quantum length $ X_\\tau $ of $\\partial \\mathcal {B}_\\tau $ and $(D_\\tau , h, \\Gamma |_{D_\\tau }, y)/{\\sim _\\gamma }$ , the conditional law of $(\\mathcal {B}_\\tau , h, \\Gamma |_{\\mathcal {B}_\\tau }, z, \\eta ^{\\prime }(\\tau ))/{\\sim _\\gamma }$ is $\\mathrm {QD}_{1,1} ( X_\\tau )^{\\#}\\otimes \\operatorname{CLE}_\\kappa $ , where we have $\\mathrm {QD}_{1,1}$ instead of $\\mathrm {QD}_{0,1}$ because of area weighting.", "This means that conditioning on the quantum intrinsic information on $D\\setminus \\mathcal {B}_\\tau $ , the conditional law of $(\\mathcal {B}_\\tau , h, \\Gamma |_{\\mathcal {B}_\\tau }, z, \\eta ^{\\prime }(\\tau ))/{\\sim _\\gamma }$ is a CLE decorated marked quantum disk with given boundary length.", "By varying $\\varepsilon $ and $n$ in Proposition REF , the same holds with $(\\mathcal {B}_\\tau , h|_{\\mathcal {B}_\\tau }, \\Gamma |_{\\mathcal {B}_\\tau }, z, \\eta ^{\\prime }(\\tau ))/{\\sim _\\gamma }$ 
replaced by $(\\mathcal {B}_{t_0}, h, \\Gamma |_{\\mathcal {B}_{t_0}}, z, \\eta ^{\\prime }(t_0))/{\\sim _\\gamma }$ .", "If $t_0$ is a loop discovery time, then $\\partial \\mathcal {B}_{t_0}$ is the loop $\\eta $ surrounding $z$ and $p_0=\\eta ^{\\prime }(t_0)$ is the desired point we need for Proposition REF .", "Otherwise, we set $D_1=\\mathcal {B}_{t_0}$ and construct $t_1$ , $\\mathcal {B}_{t_1}$ and $ p_1$ as above with $(D,h, \\Gamma , x)$ replaced by $(D_1,h, \\Gamma |_{D_1}, p_0)$ .", "If $p_1\\in \\eta $ then we are done by setting $p=p_1$ .", "Otherwise, we iterate this procedure and construct $p_2,p_3,\\cdots $ .", "We claim that there must be a finite $k$ such that $p_k\\in \\eta $ , hence we can set $p=p_k$ and conclude the proof.", "To see the finiteness of the iteration, recall the set $(\\widetilde{\\eta }^{\\prime }_t)_{t\\ge [0,T]}$ defined from a CPI $\\eta ^{\\prime }$ .", "We now require that once $\\eta ^{\\prime }$ hits a loop, it first finishes tracing the loop counterclockwise and then proceeds in its own track.", "This turns $\\widetilde{\\eta }^{\\prime }_T$ into the trace of a non-self-crossing curve sharing the same endpoints as $\\eta ^{\\prime }$ .", "According to [54], viewed as a curve the law of $\\widetilde{\\eta }^{\\prime }$ is a chordal $\\operatorname{SLE}_\\kappa (\\kappa -6)$ as defined in [67].", "The curve $\\eta ^{\\prime }$ is the so-called trunk of $\\widetilde{\\eta }^{\\prime }$ .", "By the target invariance of $\\operatorname{SLE}_\\kappa (\\kappa -6)$ , if we iterate the above chordal CPI exploration towards $z$ and keep track of the chordal $\\operatorname{SLE}_\\kappa (\\kappa -6)$ 's along the way, we get a radial $\\operatorname{SLE}_\\kappa (\\kappa -6)$ on $D$ from $x$ to $z$ .", "From the relation between the $\\operatorname{SLE}_\\kappa (\\kappa -6)$ exploration tree and $\\operatorname{CLE}_\\kappa $ in [67], after finite many iterations we must reach the loop $\\eta $ at a point $p$ , the place where the radial $\\operatorname{SLE}_\\kappa (\\kappa -6)$ starts exploring $\\eta $ .", "We conclude this section by briefly explaining how to define a canonical version of $\\mathrm {QA}(a,b)$ for all $a, b>0$ , as promised in Remark REF .", "Suppose we are in the setting of the proof of Proposition REF , and we assume we are in the setting where $t_0$ is a loop discovery time, so we are in the scenario of Figure REF (b); the general case follows easily by iteration.", "In this case, let $x^-_{t_0}$ and $x^+_{t_0}$ be the left and right endpoints of $(\\partial D) \\cap (\\partial U_{t_0})$ , and let $(a_1, a_2, a_3)$ be the quantum lengths of the three boundary arcs of $(U_{t_0}, h, x_{t_0}^-, x_{t_0}^+, \\eta ^{\\prime }(t_0))$ in clockwise order from $\\eta ^{\\prime }(t_0)$ .", "Then the quantum annulus $(A_\\eta , h)/{\\sim }_\\gamma $ in Proposition REF is obtained by the conformal welding of $(D_{t_0}, h)/{\\sim }_\\gamma $ and $(U_{t_0}, h)/{\\sim }_\\gamma $ , where the welding pattern is specified by $(a_1, a_2, a_3)$ and the quantum length of $\\partial \\mathcal {B}_{t_0}$ .", "On one hand, the conditional law of $(a_1, a_2, a_3)$ given the quantum length of $\\partial \\mathcal {B}_{t_0} = b$ varies continuously in total variational distance as $b$ varies.", "On the other hand, the conditional law of $(D_{t_0}, h)/{\\sim _\\gamma }$ given $(a_1, a_2, a_3, b)$ is $\\mathrm {QD}(a + a_1 - a_2 + a_3 + b)^\\#$ .", "By [6], $\\mathrm {QD}_{0,1}(L)^\\#$ and $\\mathrm {QD}_{0,1}(L+\\varepsilon )^\\#$ can be coupled with 
probability $1-o_\\varepsilon (1)$ outside a neighborhood of the marked point.", "Putting this together, we can couple the conditional laws of $(A_\\eta , h)$ given $\\ell _h(\\eta ) = b$ and $\\ell _h(\\eta ) = b+\\varepsilon $ to agree outside any neighborhood of $y$ with probability $1-o_\\varepsilon (1)$ .", "By the definition of $\\mathrm {QA}(a,b)$ , this gives a canonical definition of $\\mathrm {QA}(a,b)$ and shows that it varies continuously in $b$ in a reasonable topology.", "Scaling limit of $O(n)$ -loop-decorated planar maps: Proof of Proposition  REF The fact that $(b_i)_{i \\ge 1}$ and $(\\ell _i)_{i \\ge 1}$ in Proposition REF agree in law holds because they give two descriptions of the scaling limit of the same discrete model: loop lengths in the $O(n)$ -loop-decorated quadrangulation.", "This was pointed out at the end of Section 1 of [12].", "In this section we give a more detailed justification by assembling results in [12], [7], [55].", "We recall the loop-decorated quadrangulation from [12].", "A quadrangulation with boundary is a planar map where each face has degree four except a distinguished face which we call the external face.", "(Others faces are called internal faces.)", "The degree of the external face is called the perimeter.", "A (rigid) loop configuration on a quadrangulation with a boundary is a set of disjoint undirected simple closed paths in the dual map which do not visit the external face, and with the additional constraint that when a loop visits a face of q it must cross it through opposite edges.", "Let $\\mathcal {O}_p$ be the set of pairs $(\\mathbf {q}, \\Gamma )$ such that $\\mathbf {q}$ is a quadrangulation with boundary whose perimeter is $2p$ , and $\\Gamma $ is a loop configuration on $\\mathbf {q}$ .", "Similar as for CLE, for each $\\Gamma $ on $\\mathbf {q}$ , there is a collection of outermost loops whose are not surrounded by any other loop in $\\Gamma $ .", "We now recall a scaling limit result in [12].", "Recall $\\beta =\\frac{4}{\\kappa }+\\frac{1}{2}$ , let $n\\in (0,2)$ be such that $\\beta = \\frac{3}{2} + \\frac{1}{\\pi }\\arccos (n/2)$ .", "For each $(\\mathbf {q}, \\Gamma )\\in \\mathcal {O}_p$ , we let $|\\mathbf {q}|$ be the number of faces of $\\mathbf {q}$ , let $|\\Gamma |$ be the total length of all loops of $\\Gamma $ , and let $\\#\\Gamma $ be the number of loops in $\\Gamma $ .", "For $h>0,g>0$ , assign weight $w(\\mathbf {q}, \\Gamma ) = g^{|\\mathbf {q}| - |\\Gamma |} h^{|\\Gamma |} n^{\\#\\Gamma }$ to $(\\mathbf {q}, \\Gamma )$ .", "For some choices of $(g,h)$ , this gives a finite measure on $\\mathcal {O}_p$ which can be normalized into a probability measure.", "Let $\\mathfrak {M}_p$ be a sample from this measure.", "Let $L_1^p \\ge L_2^p \\ge \\dots $ be the lengths of the outermost loops of $\\mathfrak {M}_p$ in decreasing order.", "Proposition A.4 ([12]) There exists $(g,h)$ such that as $p \\rightarrow \\infty $ , the sequence $(\\frac{a}{2p} L_i^p)_{i \\ge 1}$ converges in law to $(b_i)_{i \\ge 1}$ , where $(b_i)_{i \\ge 1}$ and $a>0$ are as in Proposition REF .", "We refer to [12] for a description of $(g,h)$ such that Proposition REF holds.", "In the rest of the section we explain why the following proposition follows from [7], [55].", "Proposition A.5 For $(g,h)$ satisfying Proposition REF , $(\\frac{a}{2p} L_i^p)_{i \\ge 1}$ converges in law to $(\\ell _i)_{i \\ge 1}$ .", "We first describe $(\\ell _i)_{i \\ge 1}$ in terms of a growth fragmentation process considered in [55] and [7].", "We will not 
give the full description of the growth fragmentation but only provide enough information to specify the law of $(\\ell _i)_{i \\ge 1}$ .", "Our presentation is based on the treatment in [7].", "The description of growth fragmentation process in [55] is different in appearance.", "But as explained in [55] they correspond to the same process.", "For $\\theta = \\frac{4}{\\kappa }\\in (1, \\frac{3}{2})$ , let $\\nu _\\theta $ be the measure on $(\\frac{1}{2}, \\infty )$ defined by $\\nu _\\theta (dx) = \\frac{\\Gamma (\\theta +1)}{\\pi } \\mathopen {}\\mathclose {\\left( \\frac{1_{1/2 < x < 1}}{(x(1-x))^{\\theta +1}} + \\sin (\\pi (\\theta -\\frac{1}{2})) \\frac{1_{x>1}}{(x(x-1))^{\\theta +1}} \\right)} dx.$ Let $\\Lambda _\\theta $ be the pushforward of $\\nu _\\theta $ under the map $x \\mapsto \\log x$ .", "For $\\lambda >0$ , let $\\Psi _\\theta (\\lambda ) = \\mathopen {}\\mathclose {\\left(\\frac{\\Gamma (2-\\theta )}{2\\Gamma (2-2\\theta )\\sin (\\pi \\theta )} + \\frac{\\Gamma (\\theta +1)B_{\\frac{1}{2}}(-\\theta , 2-\\theta )}{\\pi } \\right)}\\lambda + \\int _\\mathbb {R}(e^{\\lambda y} -1 +\\lambda (1-e^{y})) \\Lambda _\\theta (dy), $ where $B_{\\frac{1}{2}}(a,b) = \\int _0^{\\frac{1}{2}} t^{a-1}(1-t)^{b-1} \\, dt$ is the incomplete Beta function; see [7].", "By [7], there is a real-valued Lévy process $(\\xi (t))_{t \\ge 0}$ whose law is described by $\\mathbb {E}[e^{\\lambda \\xi (t)}] = e^{t \\Psi _\\theta (\\lambda )}$ for all $\\lambda ,t>0$ .", "For $t\\ge 0$ , let $\\tau _t = \\inf \\lbrace r \\ge 0 \\: : \\: \\int _0^r e^{\\theta \\xi (s)} \\, ds \\ge t\\rbrace $ .", "Then $\\tau _t$ a.s. reaches $\\infty $ in finite time.", "For $a>0$ , define $Y_t^a := a \\exp (\\xi (\\tau _{ta^{-\\theta }})) \\quad \\text{for }t \\ge 0, $ with the convention that $\\xi (+\\infty ) = -\\infty $ .", "Then $(Y_t^a)_{t \\ge 0}$ is nonnegative Markov process , with initial value $Y_0^a = a$ and a.s. 
hits 0 in finite time.", "Since $\\nu _\\theta $ is supported on $(\\frac{1}{2}, \\infty )$ , the downward jumps $y \\rightarrow y - \\ell $ of $(Y_t^a)_{t \\ge 0}$ satisfy $\\ell < \\frac{1}{2}y$ .", "We now relate $(Y_t^a)_{t \\ge 0}$ to the CPI on quantum disks reviewed in Section REF .", "Suppose $(D,h,\\Gamma ,x,y)$ are as in Proposition REF such that the law of $(D,h,\\Gamma ,x)/{\\sim _\\gamma }$ is $\\mathrm {QD}_{0,1}(a)^{\\#} \\otimes \\operatorname{CLE}_\\kappa $ .", "The chordal CPI from $x$ to $y$ can be viewed as an exploration of the carpet of $\\Gamma $ such that at any splitting time it goes into the domain with the target $y$ on its boundary.", "Namely, it enters $D_t$ instead of $\\mathcal {B}_t$ to continue.", "We can also alter the exploration rule at these splitting times, each of which defines a variant of CPI and corresponds to a branch in the so-called CPI branching exploration tree rooted at $x$ .", "One particular variant of CPI is such that at any splitting time it enters the domain with the largest quantum boundary length.", "In the terminology of [55], this is called the CPI with ($q=\\infty $ )-exploration mechanism.", "Let $\\widetilde{Y}^a_t$ be the boundary length of the to-be-explored region at time $t$ CPI with ($q=\\infty $ )-exploration mechanism.", "Then by [55], for some constant $c>0$ not depending on $a$ , we have $(\\widetilde{Y}^a_{ct})_{t \\ge 0} \\stackrel{d}{=} (Y^a_t)_{t \\ge 0}$ .", "Moreover, the upward jumps in $\\widetilde{Y}_t$ correspond to times when the CPI discovers a loop, and downward jumps in $\\widetilde{Y}_t$ correspond to times when the CPI splits the to-be-explored region into two.", "Iteratively applying this fact we get the following description the outermost loop lengths is as in Proposition REF .", "Proposition A.6 ([55]) The lengths $(\\ell _i)_{i \\ge 1}$ equals in law with $(L_i)_{i\\ge 1}$ sampled as follows.", "First sample $(Y^a_t)_{t \\ge 0}$ , and let $U_1$ and $D_1$ be the sets of the sizes of upward and downward jumps of $(Y^a_t)_{t \\ge 0}$ .", "Given $D_1$ , sample a collection of independent processes $S_2 = \\lbrace (Y^x_t)_{t \\ge 0} \\: : \\: x \\in D_1 \\rbrace $ , and let $U_2$ and $D_2$ be the sets of the sizes of all upward and downward jumps of processes in $S_2$ .", "Iteratively define $S_i, U_i, D_i$ for all $i$ , and finally set $U = \\bigcup _{i \\ge 1} U_i$ .", "Rank the elements of $U$ as $L_1 \\ge L_2 \\ge \\cdots $ .", "The quantum lengths of loops discovered by CPI with ($q=\\infty $ )-exploration mechanism correspond to the sizes of the upward jumps in $\\widetilde{Y}^a_t$ , which has the same law as the upward jumps in $Y^a_t$ .", "The analogous Markov properties in Propositions REF and REF still hold for this CPI.", "Now we continue this exploration mechanism to explore the rest of CLE carpet.", "Iteratively applying this relation the quantum length of the discovered loops and the upward jumps, we get Proposition REF .", "We now explain how Proposition REF follows from the scaling limit results for $\\mathfrak {M}_p$ proved in [7].", "[Proof of Proposition REF ] Let ${\\mathfrak {M}^{\\prime }_p}$ be the planar map obtained from $\\mathfrak {M}_p$ by removing all the regions surrounded by outermost loops on $\\mathfrak {M}_p$ .", "For $(g,h)$ satisfying Proposition REF , ${\\mathfrak {M}^{\\prime }_p}$ is the so-called critical non-generic Boltzmann planar map considered in [7].", "The map ${\\mathfrak {M}^{\\prime }_p}$ is the discrete analog of the CLE carpet on the LQG background.", "The 
discrete analog of CPI exploration for the CLE carpet is considered in [7], where it is called the branching peeling exploration. The exact analog of the CPI with ($q=\infty$)-exploration mechanism is considered in [7], where the exploration always proceeds towards the component with the largest perimeter when there is a splitting. It was shown in [7] that the rescaled lengths of the loops discovered by this peeling process converge in law to the sizes of the upward jumps in $Y^a_t$. Iterating the exploration in both the discrete and the continuum settings, we get the desired convergence. [Proof of Proposition REF ] Combining Propositions REF and REF , we conclude the proof.

Continuity of $\operatorname{CLE}_\kappa$ as $\kappa \uparrow 4$

In this section we supply the continuity in $\kappa$ needed to extend Theorems  and REF from $\kappa <4$ to $\kappa =4$. We start with a monotonicity statement for $\operatorname{CLE}_\kappa$ proved in [70]. Lemma B.1 ([70]) There exists a coupling of $\operatorname{CLE}_\kappa$ on the unit disk $\mathbb{D}$ for $\kappa \in (\frac{8}{3},4]$ such that a.s.\ each outermost loop of $\operatorname{CLE}_{\kappa_1}$ is surrounded by an outermost loop of $\operatorname{CLE}_{\kappa_2}$ if $\kappa_1 < \kappa_2 \le 4$. By [70], the law of the boundaries of outermost loop clusters in a Brownian loop soup with intensity $c_\kappa = (3\kappa - 8)(6-\kappa )/2\kappa$ is given by the outermost loops of $\operatorname{CLE}_\kappa$. Now the monotonicity of $c_\kappa$ in $\kappa \in (\frac{8}{3},4]$ yields the desired monotonicity in Lemma REF . We recall the formula from [69] for the conformal radii of CLE. Theorem B.2 ([69]) For $\kappa \in (8/3,8)$, let $\eta_\kappa$ be the outermost loop surrounding 0 of a $\operatorname{CLE}_\kappa$ on $\mathbb{D}$. Let $\mathrm{CR}(\eta_\kappa , 0)$ be the conformal radius of the region surrounded by $\eta_\kappa$ viewed from 0. Then $$\mathbb {E}[ \mathrm {CR}(\eta _\kappa , 0)^\lambda ] = \frac{- \cos (\frac{4\pi }{\kappa })}{\cos \mathopen {}\mathclose {\left( \pi \sqrt{(1-\frac{4}{\kappa })^2 -\frac{8\lambda }{\kappa }} \right)}}\quad \textrm {for } \lambda > -1 + \frac{\kappa }{8}.$$ Recall that the Hausdorff distance between two closed sets $E_1,E_2$ on a metric space $(X,d)$ is given by $\max \lbrace \sup _{x\in E_1}\lbrace d(x,E_2)\rbrace , \sup _{x\in E_2}\lbrace d(x,E_1) \rbrace \rbrace$. Lemma REF and Theorem REF yield the following continuity result. Lemma B.3 Suppose we are in the coupling in Lemma REF . For each $z\in \mathbb{D}$, let $\eta_\kappa (z)$ be the outermost loop around $z$ of the $\operatorname{CLE}_\kappa$. For any fixed $z$, viewed as closed sets, $\eta_\kappa (z)$ converges almost surely to $\eta_4(z)$ in the Hausdorff metric as $\kappa \uparrow 4$. By the conformal invariance of $\operatorname{CLE}_\kappa$ we may assume $z=0$, because the same argument works for a general $z$. In this case we simply write $\eta _\kappa (0)$ as $\eta _\kappa$. Since $\eta _{\kappa _1}$ is surrounded by $\eta _{\kappa _2}$ if $\kappa _1<\kappa _2\le 4$, we see that $\mathrm {CR}(\eta _\kappa , 0)$ is increasing in $\kappa$. By the explicit formula in Theorem REF , we have $\lim _{\kappa \uparrow 4} \mathbb {E}[ \mathrm {CR}(\eta _\kappa , 0)] = \mathbb {E}[\mathrm {CR}(\eta _4 , 0)]$. Thus, by monotone convergence, we must have $\lim _{\kappa \uparrow 4} \mathrm {CR}(\eta _\kappa , 0)=\mathrm {CR}(\eta _4, 0)$ a.s.
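As an aside (not part of the argument), the explicit formula of Theorem B.2 is easy to check numerically. The short Python sketch below verifies that the zeroth moment equals 1 and that $\mathbb{E}[\mathrm{CR}(\eta_\kappa, 0)]$ increases on $(8/3,4]$ and reaches the $\kappa=4$ value $1/\cosh(\sqrt{2}\,\pi)$ obtained by plugging $\kappa=4$, $\lambda=1$ into the same formula; this is exactly the monotone-convergence input used above. The function name and the grid of $\kappa$ values are illustrative choices, not from the paper.

```python
import numpy as np

def cr_moment(kappa, lam):
    # E[CR(eta_kappa, 0)^lam] as given by the displayed formula in Theorem B.2,
    # valid for lam > -1 + kappa/8.  Complex arithmetic handles the case
    # (1 - 4/kappa)^2 - 8*lam/kappa < 0, where cos(pi*sqrt(.)) becomes a cosh.
    inner = (1.0 - 4.0 / kappa) ** 2 - 8.0 * lam / kappa
    denom = np.cos(np.pi * np.sqrt(inner + 0j))
    return float((-np.cos(4.0 * np.pi / kappa) / denom).real)

# Zeroth moment is 1 for every kappa, as it must be for a probability measure.
for kappa in (3.0, 3.5, 3.9, 4.0):
    assert abs(cr_moment(kappa, 0.0) - 1.0) < 1e-12

# First moments increase in kappa on (8/3, 4], consistent with the coupling of
# Lemma B.1, and the kappa = 4 value equals 1/cosh(sqrt(2)*pi).
grid = np.linspace(2.7, 4.0, 14)
first = [cr_moment(k, 1.0) for k in grid]
assert all(a < b for a, b in zip(first, first[1:]))
assert abs(first[-1] - 1.0 / np.cosh(np.sqrt(2.0) * np.pi)) < 1e-12
```

The complex square root is only a device for evaluating $\cos(\pi\sqrt{x})$ when $x<0$; nothing beyond the stated formula enters.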
Let $D_\kappa$ be the region surrounded by $\eta _\kappa$ and $D_{4^-}=\cup _{\kappa <4} D_\kappa$. The conformal radius of $D_{4^-}$ must be between $\lim _{\kappa \uparrow 4} \mathrm {CR}(\eta _\kappa , 0)$ and $\mathrm {CR}(\eta _4, 0)$, hence equals $\mathrm {CR}(\eta _4, 0)$ a.s. This means that $D_\kappa \uparrow D_4$ a.s., hence $\eta _\kappa \rightarrow \eta _4$ a.s. in the Hausdorff metric in the coupling in Lemma REF . Recall the loop measures $\operatorname{SLE}_\kappa ^\mathrm {loop}$, $\widetilde{\operatorname{SLE}}_\kappa ^\mathrm {loop}$, $\mathcal {L}_\kappa$ and $\widetilde{\mathcal {L}}_\kappa$ from Section . We first give a quantitative version of Lemma REF and then prove the continuity in $\kappa$ for these measures. Proposition B.4 Lemma REF holds. Moreover, the convergence is exponential with a uniform rate near $\kappa =4$. That is, there exists a constant $a(\kappa )\in (0,1)$ depending on $\kappa$ such that the total variation distance between the law of $\eta ^n$ and $\widetilde{\mathcal {L}}_\kappa (\mathcal {C})$ is at most $a(\kappa )^n$, and in addition, $\sup \lbrace a(\kappa ) \: : \: \kappa \in [\kappa _0, 4]\rbrace < 1$ for $\kappa _0 \in (8/3, 4]$. Fix $\kappa \in (\frac{8}{3}, 4]$. [45] shows that there exists $a(\kappa )\in (0,1)$ such that, regardless of the initial states, the Markov chain in Lemma REF couples in one step with probability $1-a(\kappa )$. Moreover, inspecting that argument, we see that $\sup \lbrace a(\kappa ) \: : \: \kappa \in [\kappa _0, 4]\rbrace < 1$ for $\kappa _0 \in (8/3, 4]$. This gives Proposition REF . Lemma B.5 We have $\lim _{\kappa \uparrow 4} \widetilde{\mathcal {L}}_\kappa = \widetilde{\mathcal {L}}_4$ weakly with respect to the Hausdorff metric. By the domain Markov property of $\operatorname{CLE}_\kappa$ and iteratively applying Lemma REF , we see that for each $n\in \mathbb {N}$, the law of $\eta ^n$ converges weakly as $\kappa \uparrow 4$ to the law of $\eta ^n$ for $\kappa =4$. Now the uniform exponential convergence in Proposition REF gives $\lim _{\kappa \uparrow 4} \widetilde{\mathcal {L}}_\kappa (\mathcal {C}) = \widetilde{\mathcal {L}}_4(\mathcal {C})$. Transferring from the cylinder to the disk gives $\lim _{\kappa \uparrow 4} \widetilde{\mathcal {L}}_\kappa = \widetilde{\mathcal {L}}_4$. Lemma B.6 We have $\lim _{\kappa \uparrow 4} \mathcal {L}_\kappa =\mathcal {L}_4$ weakly with respect to the Hausdorff metric. We only explain that, under the natural parametrization, chordal $\operatorname{SLE}_\kappa$ converges to $\operatorname{SLE}_4$ as $\kappa \uparrow 4$. Once this is done we get that the two-sided whole-plane $\operatorname{SLE}_\kappa$ curve $\operatorname{SLE}_\kappa ^{p\rightleftharpoons q}$ converges to $\operatorname{SLE}_4^{p\rightleftharpoons q}$ under the natural parametrization as well, since the two-sided curve is characterized by its re-sampling property: conditioned on one of the curve segments between $p$ and $q$, the conditional law of the other is $\operatorname{SLE}_\kappa$ in the complementary domain. From this and the definition of $\operatorname{SLE}_\kappa ^{\mathrm {loop}}$ we reviewed in Section REF , we get the convergence of $\mathcal {L}_\kappa$. We now show that the law of chordal $\operatorname{SLE}_\kappa$ on $\mathbb {H}$ from 0 to $\infty$ under
natural parametrization is continuous as $\kappa \uparrow 4$. We first recall that this family of measures is tight in the local uniform topology of parametrized curves thanks to their Hölder regularity established by Zhan [77]. On the other hand, the natural parametrization of $\operatorname{SLE}_\kappa $ is characterized by a conformal invariance and domain Markov property considered by Lawler and Sheffield [48]. Therefore all subsequential limits of the chordal $\operatorname{SLE}_\kappa $ measures agree with $\operatorname{SLE}_4$.

Lemma B.7 Suppose Theorem  holds for $\kappa \in (8/3,4)$. Then it holds for $\kappa =4$ as well.

The right hand side of () is continuous as $\kappa \uparrow 4$. Therefore $\lbrace \mathrm {CR}(z_i, \eta _i)\rbrace _{1\le i\le 3}$ for $\kappa <4$ converges in law as $\kappa \uparrow 4$ and the moment generating function of the limit is given by ${C^{\operatorname{CLE}}_\kappa (\lambda _1,\lambda _2,\lambda _3)}$ with $\kappa =4$. It remains to show that the limit is given by $\lbrace \mathrm {CR}(z_i, \eta _i)\rbrace _{1\le i\le 3}$ with $\kappa =4$. Fix a small $\varepsilon >0$. Let $S$ be the set of simple loops in $\widehat{\mathbb {C}}$ separating $z_1$ from $\lbrace z_2, z_3\rbrace $. Let $S_\varepsilon =\lbrace \eta \in S: \mathrm {CR}(z_1,\eta )>\varepsilon \rbrace $. Let $\Gamma $ be the full-plane $\operatorname{CLE}_\kappa $ which the $\eta _i$ are chosen from. Now, conditioning on the event that $\Gamma \cap S_\varepsilon \ne \emptyset $, we uniformly choose a loop $\eta ^\varepsilon $ from the finite set $\Gamma \cap S_\varepsilon $. Then the conditional law of $\eta ^\varepsilon $ is the probability measure proportional to the restriction of $\widetilde{\operatorname{SLE}}_\kappa ^\mathrm {loop}$ to $S_\varepsilon $. Let $D^\varepsilon $ be the connected component of $\widehat{\mathbb {C}}\setminus \eta ^\varepsilon $ not containing $z_1$. By Proposition~\ref {prop-CLE-markov}, conditioning on $\eta ^\varepsilon $, the law of $\Gamma |_{D^\varepsilon }$ is a $\operatorname{CLE}_\kappa $ in $D^\varepsilon $. Moreover, $\eta _1,\eta _2,\eta _3$ are the outermost loops in $\Gamma |_{D^\varepsilon } \cup \lbrace \eta ^\varepsilon \rbrace $ separating $z_1,z_2,z_3$. By Lemma~\ref {lem-kappa-shape}, the conditional law of $\eta ^\varepsilon $ conditioned on $\Gamma \cap S_\varepsilon \ne \emptyset $ for $\kappa <4$ converges weakly to the same conditional law when $\kappa =4$. By Lemma~\ref {lem-kappa-D}, the same continuity holds for the conditional law of $(\eta _1,\eta _2,\eta _3)$ given $\Gamma \cap S_\varepsilon \ne \emptyset $. Since $\lim _{\varepsilon \rightarrow 0}\mathbb {P}[\Gamma \cap S_\varepsilon \ne \emptyset ]=\lim _{\varepsilon \rightarrow 0}\mathbb {P}[\mathrm {CR}(z_1,\eta _1)>\varepsilon ]=1$ uniformly for $\kappa \in (\kappa _0,4]$ for a fixed $\kappa _0 \in (8/3,4)$, we can remove the conditioning and conclude that as $\kappa \uparrow 4$ the law of $(\eta _1,\eta _2,\eta _3)$ converges weakly in the Hausdorff metric to the same law when $\kappa =4$. This gives the desired continuity in $\kappa $.

Quantum pair of pants

In this section we introduce and analyze the quantum pair of pants, similarly as what was done for the quantum annulus in Section . We assume $\kappa \in (\frac{8}{3},4)$ and $\gamma =\sqrt{\kappa }$ throughout the section. Suppose $(\mathbb {C}, h, z_1,z_2,z_3)$ is an embedding of a sample from $\mathrm {QS}_3$
.", "Let $\\Gamma $ be the whole-plane $\\operatorname{CLE}_\\kappa $ on $ independent of $ h$.", "We write$ QS3CLE$ as the law of the decorated quantum surface$ ( h, , z1,z2,z3)/$.", "For $ 1i3$, let $ i$ be the outermost loop separating $ zi$ from the other two points as defined in Theorem~\\ref {thm-main}.", "Let $ li$ be the quantum boundary length of $ i$.", "In light of Lemma~\\ref {lem:disint}, the following lemmaensures the existence of the disintegration over $ l1,l2,l3$.\\begin{lemma}For a Borel set E\\subset \\mathbb {R}^3 with zero Lebesgue measure, \\mathrm {QS}_3\\otimes \\operatorname{CLE}_\\kappa [(\\mathfrak {l}_1,\\mathfrak {l}_2,\\mathfrak {l}_3)\\in E]=0.\\end{lemma}\\begin{proof}This follows from Theorem~\\ref {thm-loop3} combined with the same argument as in Lemma~\\ref {lem:1-pt-length}.\\end{proof}We now define the quantum pair of pants in the same way we defined the quantum annulus.\\begin{definition}Let \\lbrace \\mathrm {QS}_3\\otimes \\operatorname{CLE}_\\kappa (\\ell _1,\\ell _2,\\ell _3):(\\ell _1,\\ell _2,\\ell _3)\\in (0,\\infty )^3 \\rbrace be the disintegration of \\mathrm {QS}_3\\otimes \\operatorname{CLE}_\\kappa over (\\mathfrak {l}_1,\\mathfrak {l}_2,\\mathfrak {l}_3).Let P_{\\eta _1,\\eta _2,\\eta _3} be the non-simply-connected component of (\\cup _{i=1}^3 \\eta _i).", "For \\ell _1,\\ell _2,\\ell _3\\in \\mathbb {R}_+^3 where \\mathbb {R}_+=(0,\\infty ),let \\widetilde{\\mathrm {QP}}(\\ell _1,\\ell _2,\\ell _3) be the law of the quantum surface (P_{\\eta _1,\\eta _2,\\eta _3}, h)/{\\sim _\\gamma } under \\mathrm {QS}_3\\otimes \\operatorname{CLE}_\\kappa (\\ell _1,\\ell _2,\\ell _3).", "Let\\mathrm {QP}(\\ell _1,\\ell _2,\\ell _3) = \\mathopen {}\\mathclose {\\left(\\prod _{i=1}^3 \\ell _i |\\mathrm {QD}_{1,0}(\\ell _i)| \\right)}^{-1} \\widetilde{\\mathrm {QP}}(\\ell _1,\\ell _2,\\ell _3).We call a sample from \\mathrm {QP}(\\ell _1,\\ell _2,\\ell _3) a \\emph {quantum pair of pants}.\\end{definition}$ For $\\ell _1,\\ell _2,\\ell _3$ all distinct, given quantum surfaces sampled from $\\mathrm {QD}_{1,0}(\\ell _1) \\times \\mathrm {QD}_{1,0}(\\ell _2)\\times \\mathrm {QD}_{1,0}(\\ell _3) \\times \\mathrm {QP}(\\ell _1,\\ell _2,\\ell _3)$ , we can uniformly conformal weld them, by identifying cycles with the same length, to produce a quantum surface decorated with three loops and three points.", "We write its law as $\\mathrm {Weld}\\Big (\\mathrm {QD}_{1,0}(\\ell _1) , \\mathrm {QD}_{1,0}(\\ell _2), \\mathrm {QD}_{1,0}(\\ell _3), \\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3) \\Big )$ Similarly as in the case of $\\mathrm {QA}(a,b)$ , although $\\mathrm {QP}(\\ell _1,\\ell _2,\\ell _3)$ is only well-defined modulo a measure zero set, we will only integrate it over $\\mathbb {R}_+^3$ in applications.", "So we can consider it as canonically defined and omit saying “for almost every $\\ell _1,\\ell _2,\\ell _3$ ” every time.", "For the same reason, although we only specify the meaning of (REF ) for distinct $\\ell _1,\\ell _2,\\ell _3$ , this is sufficient for our applications.", "The main results of this section are the following analogs of Proposition REF and Theorem REF .", "Theorem 6.1 Sample $( \\phi , \\Gamma , z_1, z_2, z_3)/{\\sim _\\gamma }$ from $\\mathrm {QS}_3 \\otimes \\operatorname{CLE}_\\kappa $ , and let $\\eta _i$ be the othermost loop of $\\Gamma $ around $z_i$ separating it from the other two points for $i=1,2,3$ .", "Then the law of the decorated quantum surface $( \\phi , \\eta _1, \\eta _2, \\eta _3, z_1,z_2,z_3)/{\\sim _\\gamma }$ 
is $\\iiint _{\\mathbb {R}^3_+} \\ell _1\\ell _2\\ell _3\\mathrm {Weld}\\Big ( \\mathrm {QD}_{1,0}(\\ell _1), \\mathrm {QD}_{1,0}(\\ell _2),\\mathrm {QD}_{1,0}(\\ell _3), \\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3) \\Big ) \\, d\\ell _1 \\, d\\ell _2 \\, d\\ell _3.$ Theorem 6.2 Let $A$ be the total quantum area of a sample from $\\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3)$ .", "Then there is a constant $C\\in (0,\\infty )$ only depending on $\\gamma $ such that for all $\\ell _1, \\ell _2, \\ell _3,\\mu >0$ $\\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3) [e^{-\\mu A}] = C\\mu ^{\\frac{1}{4} - \\frac{2}{\\gamma ^2}}\\frac{1}{\\sqrt{\\ell _1\\ell _2\\ell _3}} e^{-(\\ell _1+ \\ell _2+ \\ell _3)\\sqrt{\\mu /\\sin (\\pi \\gamma ^2/4)}} .", "$ In contrast to $\\mathrm {QA}(a,b)$ , the measure $\\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3)$ is infinite, as can be seen from sending $\\mu \\rightarrow 0$ in Theorem REF .", "On the other hand, $\\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3)[A<C] < \\infty $ for $C>0$ so it is still $\\sigma $ -finite.", "We will prove Theorem REF in Section REF based on Theorem REF .", "Then in Section REF we prove Theorem REF based on Theorem REF .", "Readers who are mainly interested in the proof of Theorem  can safely skip the rest of this section at the first reading." ], [ "Conformal welding of $\\mathrm {QP}$ and {{formula:22903b7b-896b-45c9-9c80-4fecd9acbd9e}} : proof of Theorem ", "Suppose we are in the setting of Definition  and Theorem REF .", "For $i=1,2,3$ , let $D_{\\eta _i}$ and $D^c_{\\eta _1}$ be the two components of $\\eta _i$ where $D_{\\eta _i}$ contains $z_i$ .", "Let $\\mathfrak {l}_i$ be the quantum length of $\\eta _i$ .", "Proposition 6.3 Given $(h,\\eta _1,\\eta _2,\\eta _3)$ , let $p_3$ be a point sampled from the probability measure proportional to the quantum length measure of $\\eta _3$ .", "Then conditioned on $(D^c_{\\eta _3}, h,\\eta _{1}, \\eta _{2}, z_1, z_2,p_3)/{\\sim _\\gamma }$ , the conditional law of $(D_{\\eta _3} , h,z_3, p_3)/{\\sim _\\gamma }$ is $\\mathrm {QD}_{1,1}(\\mathfrak {l}_3)^\\#$ .", "Suppose we are in the setting of Theorem REF where $\\mathbb {F}$ is a measure on $H^{-1}($ such that the law of $( h)/{\\sim _\\gamma }$ is $\\mathrm {QS}$ if $h$ is sampled from $\\mathbb {F}$ .", "Now sample $(h,\\Gamma ,\\eta ,z_1,z_2,z_3)$ from $1_E\\prod _{i=1}^{3}\\mu _h(d^2z_i)\\, \\mathbb {F}(dh)\\,\\mathrm {Count}_{\\Gamma }(d \\eta )\\operatorname{CLE}_\\kappa ^d\\Gamma )$ , where $E$ is the event that $\\eta $ separates $z_3$ from $\\lbrace z_1,z_2\\rbrace $ .", "Let $\\mathfrak {\\ell }$ be the quantum length of $\\eta $ .", "Let $p$ be sampled on $\\eta $ from the probability measure proportional to the quantum length of $\\eta $ .", "Let $D_{\\eta }$ and $D^c_{\\eta }$ be the two components of $\\eta $ where $D_{\\eta }$ contains $z_3$ .", "By Theorem REF , conditioned on $(D^c_{\\eta }, h,\\Gamma |_{D^c_{\\eta }}, z_1, z_2,p)/{\\sim _\\gamma }$ , the conditional law of $(D_{\\eta } , h,z_3, p)/{\\sim _\\gamma }$ is $\\mathrm {QD}_{1,1}(\\mathfrak {l})^\\#$ .", "In this sample space we still define $\\eta _1,\\eta _2,\\eta _3$ in terms of $\\Gamma , z_1,z_2,z_3$ as in Proposition REF .", "Let $F$ be the event that $z_1$ and $z_2$ are surrounded by distinct outermost loops of $\\Gamma |_{D^c_{\\eta }}$ .", "Then $E\\cap F$ occurs if and only if $\\eta =\\eta _3$ .", "Therefore under $1_F\\cdot 1_E\\prod _{i=1}^{3}\\mu _h(d^2z_i) \\mathbb {F}(dh)\\mathrm {Count}_{\\Gamma }(d \\eta )\\operatorname{CLE}_\\kappa ^d\\Gamma )$ the 
law of $( h,\\Gamma ,z_1,z_2,z_3)/{\\sim _\\gamma }$ is simply $\\mathrm {QS}_3\\otimes \\operatorname{CLE}_\\kappa $ as in Proposition REF .", "Since $F$ only depends on $(D^c_{\\eta }, h,\\Gamma |_{D^c_{\\eta }}, z_1, z_2,p)/{\\sim _\\gamma }$ , even under the further conditioning on $F$ , the conditional law of $(D_{\\eta } , h,z_3, p)/{\\sim _\\gamma }$ is $\\mathrm {QD}_{1,1}(\\mathfrak {l})^\\#$ .", "Since under this conditioning $\\eta =\\eta _3$ , we conclude the proof.", "[Proof of Theorem REF ] Note that Proposition REF still holds if $D_{\\eta _1}$ is replaced by $D_{\\eta _2}$ or $D_{\\eta _3}$ .", "Recall $\\widetilde{\\mathrm {QP}}$ from the disintegration in Definition  .", "By repeatedly applying Proposition REF , we see that the law of $( \\phi , \\eta _1, \\eta _2, \\eta _3, z_1,z_2,z_3)/{\\sim _\\gamma }$ is $ \\iiint _{\\mathbb {R}^3_+} \\mathrm {Weld}\\Big ( \\mathrm {QD}_{1,0}(\\ell _1)^{\\#}, \\mathrm {QD}_{1,0}(\\ell _2)^{\\#},\\mathrm {QD}_{1,0}(\\ell _3)^{\\#},\\widetilde{\\mathrm {QP}}(\\ell _1, \\ell _2, \\ell _3) \\Big ) \\, d\\ell _1 \\, d\\ell _2 \\, d\\ell _3.$ Now Theorem REF follows from $\\mathrm {QD}_{1,0}(\\ell ) =|\\mathrm {QD}_{1,0}(\\ell )| \\mathrm {QD}_{1,0}(\\ell )^{\\#}$ .", "Finally, we give a welding result analogous to Proposition REF we need in the next section.", "Lemma 6.4 Let $(D,h,\\Gamma ,\\eta , \\hat{\\eta },z)$ be as in Lemma REF .", "Namely, for $a>0$ , sample $(D,h,\\Gamma ,z)/{\\sim _\\gamma }$ from $\\mathrm {QD}_{1,0}(a)\\otimes \\operatorname{CLE}_\\kappa $ and sample $\\hat{\\eta }$ from the counting measure on the outermost loops of $\\Gamma $ except the loop $\\eta $ surrounding $z$ .", "Then for some constant $C = C(\\gamma )$ the law of $(D, h,\\eta , \\hat{\\eta }, z)/{\\sim _\\gamma }$ is $C \\iint _{\\mathbb {R}^2_+} bc \\mathrm {Weld}\\mathopen {}\\mathclose {\\left(\\mathrm {QD}_{1,0}(b), \\mathrm {QD}(c), \\mathrm {QP}(a,b,c)\\right)} \\, db \\, dc.$ We retain the notations in the proof of Proposition REF .", "Sample $(h, \\Gamma , \\eta _1, \\eta _2, \\eta _3, z_1, z_2)$ from $\\mathsf {M} := 1_G \\mu _h(d^2z_2)\\,\\mu _h(d^2z_2)\\, \\mathbb {F}(dh) \\,\\mathrm {Count}_\\Gamma (d\\eta _1)\\,\\mathrm {Count}_\\Gamma (d\\eta _2)\\,\\mathrm {Count}_\\Gamma (d\\eta _3)\\operatorname{CLE}_\\kappa ^d\\Gamma )$ , where $G$ is the event that $\\eta _1, \\eta _2, \\eta _3$ are distinct, $\\eta _1, \\eta _2, z_1, z_2$ all lie on the same side of $\\eta _3$ , and in $\\eta _3$ the outermost loops around $z_1, z_2$ are $\\eta _1, \\eta _2$ respectively.", "We first show that the $\\mathsf {M}$ -law of $( \\eta _1, \\eta _2, \\eta _3, z_1, z_2)/{\\sim _\\gamma }$ is $\\iiint _{\\mathbb {R}_+^3} abc \\mathrm {Weld}(QD_{1,0}(a), \\mathrm {QD}_{1,0}(b), \\mathrm {QD}(c), \\mathrm {QP}(a,b,c)) \\, da\\, db\\, dc.$ Indeed, by Theorem REF , if we sample $(h, \\Gamma , \\eta _1, \\eta _2, \\eta _3, z_1, z_2)$ from $\\mu _h|_{D_3}(d^2z_3) \\mathsf {M}$ where $D_3$ is the component of $\\eta _3$ not containing $z_1, z_2$ , then the law of $( \\eta _1, \\eta _2, \\eta _3, z_1, z_2,z_3)/{\\sim _\\gamma }$ is (REF ).", "Forgetting the point $z_3$ and unweighting by $\\mu _h(D_3)$ , we get (REF ).", "On the other hand, by Proposition REF applied to the loop $\\eta _1$ and the two-pointed quantum sphere $( h, z_1, z_2)/{\\sim _\\gamma }$ , we see that the $\\mathsf {M}$ -law of $( \\eta _1, \\eta _2, \\eta _3, z_1, z_2)/{\\sim _\\gamma }$ is $C\\int _0^\\infty a \\mathrm {Weld}(QD_{1,0}(a), \\widetilde{ \\mathsf {M}}(a)) \\, da, $ where 
$\\widetilde{ \\mathsf {M}}(a)$ is the measure described in Lemma REF .", "Comparing this to (REF ), we conclude that $C \\widetilde{\\mathsf {M}}(a) = \\iint _{\\mathbb {R}_+^2} bc \\mathrm {Weld}(\\mathrm {QD}_{1,0}(b), \\mathrm {QD}(c), \\mathrm {QP}(a,b,c)\\, db \\, dc$ ." ], [ "Area distribution of the quantum pair of pants: proof of Theorem ", "Our proof still crucially relies on the Levy process as in Section REF .", "Lemma 6.5 Let $\\mathbb {P}^{\\beta }$ be the probability measure describing the law of the ${\\beta }$ -stable Lévy process with Lévy measure $1_{x>0} x^{-\\beta -1} \\, dx$ with ${\\beta }=\\frac{4}{\\gamma ^2}+\\frac{1}{2}$ .", "For a sample $(\\zeta _t)_{t\\ge 0}$ from $\\mathbb {P}^{\\beta }$ , let $\\tau _{-\\ell }=\\inf \\lbrace t: \\zeta _t=-\\ell \\rbrace $ for each $\\ell >0$ .", "Let $(x_i)_{i \\ge 1}$ be the jumps of $\\zeta $ in $[0,\\tau _{-a-b-c}]$ .", "Let $(A_i)_{i\\ge 1}$ be independent copies of the quantum area of a sample from $\\mathrm {QD}(1)^\\#$ which are also independent of $\\zeta $ .", "Then there is a constant $C\\in (0,\\infty )$ such that for any non-negative measurable function $f$ and $a,b,c>0$ , we have $\\mathrm {QP}(a,b,c)[f(A)] = \\frac{C}{\\sqrt{abc} (a+b)} \\mathbb {E}[\\tau _{-a-b} f(\\sum _i x_i^2 A_i)],$ where $A$ on the left side means the quantum area of a sample from $\\mathrm {QP}(a,b,c)$ .", "In the setting of Lemma REF , Lemma REF gives the law of the quantum lengths of the outermost loops.", "By Proposition REF , conditioning on the lengths, the quantum surfaces surrounded by the outermost loops are independent quantum disks.", "Therefore combining Lemmas REF and REF we have that for a non-negative measurable function $F$ on $\\mathbb {R}_+^3$ , $&C \\iint _{\\mathbb {R}_+^2} bc |\\mathrm {QD}_{1,0}(b)| |\\mathrm {QD}(c)| \\mathrm {QP}(a,b,c)[F(A,b,c)]\\, db \\, dc \\\\=& \\iint _{\\mathbb {R}_+^2} \\frac{\\mathfrak {C}}{\\sqrt{ab}(a+b)} b|\\mathrm {QD}_{1,0}(b)| c^{-\\beta -1} \\mathbb {E}[\\widetilde{\\tau }_{-a-\\mathbf {b}} F(\\sum _{i} x_i^2 A_i,b,c)]\\, db \\, dc.$ for some $C\\in (0,\\infty )$ not depending on $a,b,c$ .", "Since $|\\mathrm {QD}(c)| \\propto c^{-\\beta -\\frac{3}{2}}$ , by possibly changing $C$ and performing a disintegration, we get (REF ).", "[Proof of Theorem REF ] Let $g(x)= \\mathbb {E}[e^{-\\mu x^2A_1}]$ where $A_1$ is the quantum area of a sample from $\\mathrm {QD}(1)^{\\#}$ .", "Then by Lemma REF , $\\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3) [e^{-\\mu A}]= \\frac{C}{\\sqrt{\\ell _1\\ell _2\\ell _3} (\\ell _1+\\ell _2)} \\mathbb {E}[\\tau _{-\\ell _1-\\ell _2} \\prod _{i\\ge 1}g(x_i)],$ where the expectation is with respect to $\\mathbb {P}^\\beta $ and $(x_i)_{i\\ge 1}$ is the set of jumps until $\\tau _{-\\ell _1-\\ell _2-\\ell _3}$ .", "By Lemma REF , $\\mathbb {E}[\\tau _{-\\ell _1-\\ell _2} \\prod _{i\\ge 1}g(x_i)]= C(\\mu )(\\ell _1+\\ell _2) \\mathbb {E}\\big [\\prod _{i\\ge 1} g(x_i)\\big ].$ where $C(\\mu )= \\int T(e) \\prod _{x\\in J_e} g(x) \\underline{N} (d e)$ with $\\underline{N}, T(e),J(e)$ as in Lemma REF .", "By Proposition REF , $\\mathbb {E}\\big [\\prod _{i\\ge 1} g(x_i)\\big ]= e^{-(\\ell _1+\\ell _2+\\ell _3) \\sqrt{\\mu /\\sin (\\pi \\gamma ^2/4)}}.$ We write $X\\propto _\\gamma Y$ if $X=C_\\gamma Y$ for some $\\gamma $ -dependent constant $C_\\gamma \\in (0,\\infty )$ .", "Then we have $\\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3) [e^{-\\mu A}] \\propto _\\gamma \\frac{C(\\mu )}{\\sqrt{\\ell _1\\ell _2\\ell _3}} e^{-(\\ell _1+ \\ell _2+ \\ell _3)\\sqrt{\\mu /\\sin 
(\\pi \\gamma ^2/4)}}$ It remains to show that $C(\\mu ) \\propto _{\\gamma } \\mu ^{\\frac{1}{4} - \\frac{2}{\\gamma ^2}}$ .", "Recall Theorem REF , where $(\\phi ,z_1,z_2,z_3)/{\\sim _\\gamma }$ has the law of $\\mathrm {QS}_3$ .", "It is easy to check from the definition of $\\mathrm {QS}_3$ that $\\mathrm {QS}_3[e^{-\\mu \\mu _\\phi ( }] \\propto _\\gamma \\mu ^{\\frac{2Q - 3\\gamma }{\\gamma }}$ .", "On the other hand, we can also use (REF ) to evaluate $\\mathrm {QS}_3[e^{-\\mu \\mu _\\phi ( }]$ .", "Since we know $\\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3) [e^{-\\mu A}]$ from (REF ), together with the FZZ formula in Theorem REF for the area of quantum disks, we have $\\mathrm {QS}_3[e^{-\\mu \\mu _\\phi ( }] \\propto _\\gamma C(\\mu ) \\prod _{i=1}^3 \\int _0^\\infty \\mu ^{\\frac{1}{\\gamma }(Q-\\gamma )} K_{\\frac{2}{\\gamma }(Q-\\gamma )}\\mathopen {}\\mathclose {\\left(\\ell _i \\sqrt{\\frac{\\mu }{\\sin (\\pi \\gamma ^2/4)}}\\right)} \\cdot \\frac{e^{-\\ell _i \\sqrt{\\mu /\\sin (\\pi \\gamma ^2/4)}}}{\\sqrt{\\ell _i}} \\, d\\ell _i.$ By Lemma REF we have $\\int _0^\\infty y^{-\\frac{1}{2}}e^{-cy} K_\\nu (cy) \\, dy = \\frac{\\pi ^{3/2}}{\\sqrt{2c} \\cos (\\pi \\nu )}$ for $c > 0$ and $\\nu \\in (-\\frac{1}{2}, \\frac{1}{2})$ .", "Therefore $\\mathrm {QS}_3[e^{-\\mu \\mu _\\phi ( }] \\propto _\\gamma C(\\mu ) \\mathopen {}\\mathclose {\\left(\\mu ^{\\frac{1}{\\gamma }(Q-\\gamma )-\\frac{1}{4}}\\right)}^3$ hence $C(\\mu ) \\propto _\\gamma \\mu ^{\\frac{1}{4} - \\frac{2}{\\gamma ^2}}$ ." ], [ "The three-point correlation function via conformal welding", "In this section we finish our proof of Theorem  based on the outline from Section REF .", "We recall the setting from Section REF .", "Let $\\Gamma $ be a whole plane $\\operatorname{CLE}_\\kappa $ with $\\kappa \\in (\\frac{8}{3},4]$ .", "We fix $(u_1, u_2, u_3) = (0,1,e^{i\\pi /3})$ for the convenience of using our formulation of the DOZZ formula (Theorem ).", "Let $\\eta _i$ be the outermost loop of $\\Gamma $ separating $z_i$ from the other two points.", "We write $\\mathsf {m}_3$ for the law of the triples of loops $(\\eta _1, \\eta _2, \\eta _3)$ and let $\\alpha _i\\in \\mathbb {R}$ for $i=1,2,3$ .", "Let $\\lambda _i = -\\frac{\\alpha _i^2}{2} + Q \\alpha _i - 2$ .", "Let $\\mathsf {m}_3^{\\alpha _1, \\alpha _2, \\alpha _3}$ be the following reweighting of $\\mathsf {m}_3$ : $\\frac{d \\mathsf {m}_3^{\\alpha _1, \\alpha _2, \\alpha _3}}{d \\mathsf {m}_3}(\\eta _1,\\eta _2,\\eta _3) = \\prod _{i=1}^3 \\mathopen {}\\mathclose {\\left(\\frac{1}{2}\\mathrm {CR}(\\eta _i, z_i)\\right)}^{-\\frac{\\alpha _i^2}{2} + Q \\alpha _i - 2}.$ By definition, the total mass of $\\mathsf {m}_3^{\\alpha _1, \\alpha _2, \\alpha _3}$ is $|\\mathsf {m}_3^{\\alpha _1, \\alpha _2, \\alpha _3}| = \\mathbb {E}\\mathopen {}\\mathclose {\\left[\\prod _{i=1}^3 \\mathopen {}\\mathclose {\\left(\\frac{1}{2}\\mathrm {CR}(\\eta _i, z_i)\\right)}^{\\lambda _i}\\right]}=\\frac{{C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}}{2^{\\lambda _1+\\lambda _2+\\lambda _3}}.$ As outlined in Section REF , we set $\\gamma =\\sqrt{\\kappa }$ and consider the Liouville field on the sphere with three insertions $\\mathrm {LF}_{(\\alpha _i, u_i)_i}$ .", "Then by (REF ) we have Lemma 7.1 For $\\alpha _1,\\alpha _2,\\alpha _3$ satisfying the Seiberg bound () in Theorem , we have $\\mathopen {}\\mathclose {\\left(\\mathrm {LF}_{(\\alpha _i, u_i)_i} \\times \\mathsf {m}_3^{\\alpha _1, \\alpha _2, \\alpha _3}\\right)} [e^{-\\mu _\\phi (}] 
=2^{-\\lambda _1-\\lambda _2-\\lambda _3-1} C^\\mathrm {DOZZ}_\\gamma (\\alpha _1,\\alpha _2,\\alpha _3) {C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}.$ Theorem  yields $\\mathrm {LF}_{(\\alpha _i, u_i)_i} [e^{-\\mu _\\phi (}] =\\frac{1}{2}C^\\mathrm {DOZZ}_\\gamma (\\alpha _1,\\alpha _2,\\alpha _3)$ .", "Now (REF ) gives (REF ).", "In Section REF , we will show that the product measure $\\mathrm {LF}_{(\\alpha _i, u_i)_3} \\times \\mathsf {m}_3^{\\alpha _1, \\alpha _2, \\alpha _3}$ can be obtained from conformally welding the quantum pair of pants $\\mathrm {QP}(\\ell _1,\\ell _2,\\ell _3)$ and $\\mathcal {M}_1^\\mathrm {disk}(\\alpha _i;\\ell _i)$ as in Theorem REF .", "In Section REF , we compute the right side of (REF ) using the area distributions of $\\mathrm {QP}(\\ell _1,\\ell _2,\\ell _3)$ and $\\mathcal {M}_1^\\mathrm {disk}(\\alpha _i;\\ell _i)$ from Theorem REF and the FZZ formula, which gives ${C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}$ ." ], [ "Conformal welding with generic insertions", "In this section we prove the following conformal welding result.", "Recall $( u_1,u_2,u_3)=(0,1,e^{i\\pi })$ .", "Theorem 7.2 Let $\\alpha _i \\in (Q - \\frac{\\gamma }{4}, Q)$ for $i=1,2,3$ .", "Let $(\\phi ,\\eta _1,\\eta _2,\\eta _3)$ be sampled from $\\mathrm {LF}_{(\\alpha _i, u_i)_i}\\times \\mathsf {m}_3^{\\alpha _1, \\alpha _2, \\alpha _3}$ .", "Then the law of the decorated quantum surface $( \\phi , \\eta _1, \\eta _2, \\eta _3, u_1,u_2,u_3)/{\\sim _\\gamma }$ is $\\frac{\\gamma ^2}{4\\pi ^4(Q-\\gamma )^4}\\iiint _0^\\infty \\ell _1 \\ell _2 \\ell _3 \\mathrm {Weld}\\Big ( \\mathcal {M}_1^\\mathrm {disk}(\\alpha _1;\\ell _1), \\mathcal {M}_1^\\mathrm {disk}(\\alpha _2;\\ell _2),\\mathcal {M}_1^\\mathrm {disk}(\\alpha _3; \\ell _3), \\mathrm {QP}(\\ell _1, \\ell _2, \\ell _3) \\Big )\\, d \\ell _1 \\, d\\ell _2\\, d\\ell _3.$ Here $\\mathrm {Weld}$ means uniform conformal welding in the same sense as in Theorem REF .", "We fix the range $(Q - \\frac{\\gamma }{4}, Q)$ in Theorem REF for concreteness since this is the range we will use to prove Theorem .", "Since $\\gamma =\\sqrt{\\kappa }\\in (\\sqrt{8/3},2)$ , we have $\\gamma \\in (Q - \\frac{\\gamma }{4}, Q)$ .", "We first observe that Theorem REF is the special case of Theorem REF where $\\alpha _1=\\alpha _2=\\alpha _3=\\gamma $ .", "Lemma 7.3 Theorem REF holds for $\\alpha _1=\\alpha _2=\\alpha _3=\\gamma $ .", "By the relation between $\\mathrm {LF}_{(\\gamma , u_i)_3}$ and $\\mathrm {QS}_3$ from Theorem REF and the relation between $\\mathrm {QD}_{1,0}(\\ell ) $ and $\\mathcal {M}_1^\\mathrm {disk}(\\gamma ; \\ell )$ from Theorem REF , we see that Lemma REF follows from Theorem REF .", "We first explain how to go from $\\alpha _1=\\alpha _2=\\alpha _3=\\gamma $ to the following case.", "Proposition 7.4 Theorem REF holds for $\\alpha _1=\\alpha \\in (Q - \\frac{\\gamma }{4}, Q)$ and $\\alpha _2=\\alpha _3=\\gamma $ .", "We will prove Proposition REF from Lemma REF by a reweighting procedure.", "By the definition of $\\mathcal {M}^{\\mathrm {disk}}_1(\\alpha ;\\ell )$ , if we sample a field $X$ from $\\mathrm {LF}_{\\mathbb {H}}^{(\\alpha ,i)}(\\ell )$ , then the law of $(\\mathbb {H}, X,i)/{\\sim _\\gamma }$ is $\\mathcal {M}^{\\mathrm {disk}}_1(\\alpha ;\\ell )$ .", "We now recall a fact from [6] about the reweighting of $\\mathrm {LF}_{\\mathbb {H}}^{(\\alpha ,i)}(\\ell )$ by “$e^{(\\alpha -\\gamma )X}$ ”.", "Lemma 7.5 ([6]) For any $\\ell >0,\\varepsilon \\in (0,1)$ and 
Lemma 7.5 ([6]) For any $\ell >0$, $\varepsilon \in (0,1)$ and any nonnegative measurable function $f$ of $X|_{\mathbb {H}\setminus B_\varepsilon (i)}$ we have $\int f(X|_{\mathbb {H}\setminus B_\varepsilon (i)}) \times \varepsilon ^{\frac{1}{2}(\alpha ^2 - \gamma ^2)} e^{(\alpha - \gamma )X_\varepsilon (i)} \,d {\mathrm {LF}_\mathbb {H}^{(\gamma , i)}(\ell )}= \int f(X|_{\mathbb {H}\setminus B_\varepsilon (i)}) \, d\mathrm {LF}_{\mathbb {H}}^{(\alpha , i)}(\ell ),$ where $X$ is a sample in $H^{-1}(\mathbb {H})$ and $X_\varepsilon (i)$ means the average of $X$ on the boundary of the ball $B_\varepsilon (i)=\lbrace z: |z-i|< \varepsilon \rbrace $.

The key to the proof of Proposition REF  is the following reweighting result on $\mathrm {LF}_{(\gamma , u_1), (\gamma , u_2), (\gamma , u_3)}$.

Lemma 7.6 Let $\eta $ be a simple loop separating $u_1$ from $u_2$ and $u_3$. Let $D_{\eta }$ be the connected component of $\mathbb {C}\setminus \eta $ containing $u_1$. Let $p$ be a point on $\eta $ and let $\psi : \mathbb {H}\rightarrow D_{\eta }$ be the conformal map with $\psi (i) = u_1$ and $\psi (0) = p$. For $\varepsilon \in (0,\frac{1}{4})$, let $\mathbb {C}_{\eta ,p,\varepsilon } =\mathbb {C}\setminus \psi (B_\varepsilon (i))$. For $\phi $ sampled from $\mathrm {LF}_{(\gamma , u_1), (\gamma , u_2), (\gamma , u_3)}$, let $X=\phi \circ \psi +Q \log |\psi ^{\prime }|$ so that $(\mathbb {H},X,i,0)/{\sim _\gamma }=(D_{\eta }, \phi , u_1, p)/{\sim _\gamma }$. Then for a fixed $\alpha \in (Q-\frac{\gamma }{4},Q)$, for any $\varepsilon \in (0,\frac{1}{4})$ and any nonnegative measurable function $f$ of $\phi $ depending only on $\phi |_{\mathbb {C}_{\eta ,p,\varepsilon }}$, we have $\int f (\phi ) \times \varepsilon ^{\frac{1}{2}(\alpha ^2 - \gamma ^2)} e^{(\alpha - \gamma )X_\varepsilon (i)} \, d\mathrm {LF}_{(\gamma , u_1), (\gamma , u_2), (\gamma , u_3)}= \int f (\phi ) \mathopen {}\mathclose {\left(\frac{\mathrm {CR}(\eta , u_1)}{2}\right)}^{-\frac{\alpha ^2}{2} + Q \alpha - 2} d\mathrm {LF}_{(\alpha , u_1), (\gamma , u_2), (\gamma , u_3)}.$

Let $\theta _\varepsilon $ be the uniform probability measure on $\partial B_\varepsilon (i)$ and $\widehat{\theta }_\varepsilon := \psi _* \theta _\varepsilon $. Since $\psi ^{\prime }$ is holomorphic, $\log |\psi ^{\prime }|$ is harmonic, so $(X, \theta _\varepsilon ) = (\phi \circ \psi + Q \log |\psi ^{\prime }|, \theta _\varepsilon ) = (\phi , \widehat{\theta }_\varepsilon ) + Q \log |\psi ^{\prime }(i)|$. Therefore, since $|\psi ^{\prime }(i)| = \frac{1}{2} \mathrm {CR}(\eta , 0)$, $e^{(\alpha - \gamma )X_\varepsilon (i)} = \mathopen {}\mathclose {\left(\frac{\mathrm {CR}(\eta , 0)}{2} \right)}^{(\alpha - \gamma )Q} e^{(\alpha - \gamma )(\phi , \widehat{\theta }_\varepsilon )}.$ Since $\varepsilon < \frac{1}{4}$, the measure $\widehat{\theta }_\varepsilon $ is supported in the unit disk, so $(\log |\cdot |_+, \widehat{\theta }_\varepsilon ) = 0$. Moreover, for any $z \in \mathbb {C}_{\eta , p, \varepsilon }$ the function $w \mapsto \log |z - \psi (w)|$ is harmonic on $B_\varepsilon (i)$, so $(\log |z - \cdot |, \widehat{\theta }_\varepsilon ) = (\log |z - \psi (\cdot )|, \theta _\varepsilon ) = \log |z|$. Finally, since $\widehat{\theta }_\varepsilon $ is the harmonic measure on $\eta _\varepsilon := \psi (\partial B_\varepsilon (i))$ viewed from 0, it is well known that $(\log |\cdot |, \widehat{\theta }_\varepsilon ) = \log \mathrm {CR}(\eta _\varepsilon ,0) = \log \frac{\varepsilon \mathrm {CR}(\eta , 0)}{2}$.
{CR}(\\eta _\\varepsilon ,0) = \\log \\frac{\\varepsilon \\mathrm {CR}(\\eta , 0)}{2}$ .", "Therefore, recalling $G_z,w) = -\\log |z-w| + \\log |z|_+ + \\log |w|_+$ from Section REF , we have $(-2Q \\log |\\cdot |_+ + \\sum _j \\gamma G_\\cdot , u_j), \\widehat{\\theta }_\\varepsilon ) = - \\gamma \\log \\frac{\\varepsilon \\mathrm {CR}(\\eta , 0)}{2}$ .", "For a GFF $h$ sampled from $P_ let $ h := h - 2Q ||+ + j=13 G, uj)$.", "By Definition~\\ref {def-RV-sph},\\begin{equation}\\int f (\\phi ) \\times e^{(\\alpha - \\gamma )(\\phi , \\widehat{\\theta }_\\varepsilon )} \\, d\\mathrm {LF}_{(\\gamma , u_1), (\\gamma , u_2), (\\gamma , u_3)} = \\mathopen {}\\mathclose {\\left(\\frac{\\varepsilon \\mathrm {CR}(\\eta , 0)}{2} \\right)}^{-(\\alpha - \\gamma ) \\gamma }\\int \\mathbb {E}[ e^{(\\alpha - \\gamma )(h, \\widehat{\\theta }_\\varepsilon )} f(\\widetilde{h} + c) ] e^{(\\alpha + 2\\gamma - 2Q)c}\\,dc.\\end{equation}Define $ G(z, 0) := (Gz, ) , )$; from earlier computations we have $ G(, 0)|, p, = G, 0)|, p, $.By Girsanov^{\\prime }s theorem and the fact that $ f()$ depends only on $ |, p, $,{\\begin{@align*}{1}{-1}\\int \\mathbb {E}[ e^{(\\alpha - \\gamma )(h, \\widehat{\\theta }_\\varepsilon )} f(\\widetilde{h} + c) ] e^{(\\alpha + 2\\gamma - 2Q)c}\\,dc &= \\mathbb {E}[e^{(\\alpha - \\gamma )(h, \\widehat{\\theta }_\\varepsilon )}]\\int \\mathbb {E}[ f(\\widetilde{h} + (\\alpha - \\gamma )G^\\varepsilon _\\cdot , 0)+ c) ] e^{(\\alpha + 2\\gamma - 2Q)c}\\,dc \\\\&= \\mathbb {E}[e^{(\\alpha - \\gamma )(h, \\widehat{\\theta }_\\varepsilon )}]\\int \\mathbb {E}[ f(\\widetilde{h} + (\\alpha - \\gamma )G_\\cdot , 0)+ c) ] e^{(\\alpha + 2\\gamma - 2Q)c}\\,dc.\\end{@align*}}Now $ Var((h, )) = (G(, 0), ) = -CR(, 0)2$.", "Hence $ E[e(-)(h, )] = (CR(, 0)2 )- 12 (- )2$, and using Definition~\\ref {def-RV-sph}, we get\\begin{equation}\\int \\mathbb {E}[ e^{(\\alpha - \\gamma )(h, \\widehat{\\theta }_\\varepsilon )} f(\\widetilde{h} + c) ] e^{(\\alpha + 2\\gamma - 2Q)c}\\,dc = \\mathopen {}\\mathclose {\\left(\\frac{\\varepsilon \\mathrm {CR}(\\eta , 0)}{2} \\right)}^{- \\frac{1}{2} (\\alpha - \\gamma )^2}\\int f(\\phi ) \\mathrm {LF}_{(\\alpha , u_1), (\\gamma , u_2), (\\gamma , u_3)}.\\end{equation}Combining~(\\ref {eq-gir-sph-1}),~(\\ref {eq-gir-sph-2}) and~(\\ref {eq-gir-sph-3}), and collecting the prefactors via$$ \\mathopen {}\\mathclose {\\left(\\frac{\\mathrm {CR}(\\eta , 0)}{2} \\right)}^{(\\alpha - \\gamma )Q} \\mathopen {}\\mathclose {\\left(\\frac{\\varepsilon \\mathrm {CR}(\\eta , 0)}{2} \\right)}^{-(\\alpha - \\gamma ) \\gamma } \\mathopen {}\\mathclose {\\left(\\frac{\\varepsilon \\mathrm {CR}(\\eta , 0)}{2} \\right)}^{- \\frac{1}{2} (\\alpha - \\gamma )^2} = \\varepsilon ^{-\\frac{1}{2}(\\alpha ^2-\\gamma ^2)}\\mathopen {}\\mathclose {\\left(\\frac{\\mathrm {CR}(\\eta , 0)}{2} \\right)}^{-\\frac{\\alpha ^2}{2} + Q\\alpha - 2},$$we conclude the proof.$ We also need the following fact when dealing with uniform weldings that involves $\\mathcal {M}^{\\mathrm {disk}}_1(\\alpha ;\\ell )$ .", "Lemma 7.7 For $\\alpha > \\frac{\\gamma }{2}$ and $\\ell > 0$ , let $(D,h,z)$ be an embedding of a sample from $\\mathcal {M}^{\\mathrm {disk}}_1(\\alpha ;\\ell )$ .", "Given $(D,h,z)$ , let $p$ be a point sampled from the harmonic measure on $\\partial D$ view from $z$ , then the law of $(D,h,z,p)/{\\sim _\\gamma }$ equals that of $(\\mathbb {H}, X, i,0)/{\\sim _\\gamma }$ where $X$ is sampled from $\\mathrm {LF}_{\\mathbb {H}}^{(\\alpha ,i)}(\\ell )$ .", "We assume that $(D,h,z)=(\\mathbb {H}, h,i)$ 
We also need the following fact when dealing with uniform weldings that involve $\mathcal {M}^{\mathrm {disk}}_1(\alpha ;\ell )$.

Lemma 7.7 For $\alpha > \frac{\gamma }{2}$ and $\ell > 0$, let $(D,h,z)$ be an embedding of a sample from $\mathcal {M}^{\mathrm {disk}}_1(\alpha ;\ell )$. Given $(D,h,z)$, let $p$ be a point sampled from the harmonic measure on $\partial D$ viewed from $z$. Then the law of $(D,h,z,p)/{\sim _\gamma }$ equals that of $(\mathbb {H}, X, i,0)/{\sim _\gamma }$ where $X$ is sampled from $\mathrm {LF}_{\mathbb {H}}^{(\alpha ,i)}(\ell )$.

We may assume that $(D,h,z)=(\mathbb {H}, h,i)$ where $h$ is a sample from $\mathrm {LF}_{\mathbb {H}}^{(\alpha ,i)}(\ell )$. Let $\psi _p: \mathbb {H}\rightarrow \mathbb {H}$ be the conformal map with $\psi _p(i) = i$ and $\psi _p(p) = 0$ and set $X=h \circ \psi _p^{-1} + Q \log |(\psi _p^{-1})^{\prime }|$. Then by the coordinate change for Liouville fields on $\mathbb {H}$ from [6], the law of $X$ is also $\mathrm {LF}_\mathbb {H}^{(\alpha , i)}(\ell )$. Since $(D,h,z,p)/{\sim _\gamma }=(\mathbb {H}, X, i,0)/{\sim _\gamma }$ we are done.

We are now ready to prove Proposition REF . For notational simplicity, for $\ell >0$ we let $\mathfrak {M}_\ell $ be the measure on decorated quantum surfaces corresponding to $\frac{\gamma ^2}{4\pi ^4(Q-\gamma )^4}\iint _0^\infty \ell _2 \ell _3\mathrm {Weld}( \mathcal {M}_1^\mathrm {disk}(\gamma ; \ell _2), \mathcal {M}_1^\mathrm {disk}(\gamma ; \ell _3), \mathrm {QP}(\ell , \ell _2, \ell _3)) \, d\ell _2 \, d\ell _3,$ so that the relevant integral for Proposition REF  is $\int _0^\infty \ell \,\mathrm {Weld}( \mathcal {M}_1^\mathrm {disk}(\alpha ; \ell ),\mathfrak {M}_\ell )\, d\ell $. We sample a decorated quantum surface from $\int _0^\infty \ell \,\mathrm {Weld}( \mathcal {M}_1^\mathrm {disk}(\alpha ; \ell ),\mathfrak {M}_\ell )\, d\ell $ and let $(\mathbb {C}, \phi , \eta _1, \eta _2, \eta _3, u_1,u_2,u_3)$ be its embedding; since we specify the locations of the three marked points, the tuple $(\phi ,\eta _1,\eta _2,\eta _3)$ is uniquely specified by the decorated quantum surface. Let $D_{\eta _1}$ and $D^c_{\eta _1}$ be the connected components of $\mathbb {C}\setminus \eta _1$ such that $D_{\eta _1}$ contains $u_1$. Let $p\in \eta _1$ be a point sampled from the harmonic measure of $\partial D_{\eta _1}$ viewed from $u_1$ and set $\mathcal {D}_1=(D_{\eta _1}, \phi ,u_1,p )/{\sim _\gamma }$. Let $X\in H^{-1}(\mathbb {H})$ be such that $\mathcal {D}_1=(\mathbb {H},X,i,0)/{\sim _\gamma }$. Let $\mathcal {D}^c_1$ be the decorated quantum surface $(D^c_{\eta _1}, \phi ,\eta _2, \eta _3,u_2,u_3,p)/{\sim _\gamma }$. Then the law of $(X, \mathcal {D}^c_1)$ is described by the following lemma.

Lemma 7.8 Given a decorated quantum surface $\mathcal {S}$ sampled from $\mathfrak {M}_\ell $, we write $\mathcal {S}^\bullet $ for the decorated quantum surface obtained by further sampling a point on the boundary of $\mathcal {S}$ according to the probability measure proportional to its quantum boundary length measure. Let $\mathfrak {M}_\ell ^\bullet $ be the law of $\mathcal {S}^\bullet $. Then the law of $(X, \mathcal {D}^c_1)$ defined right above is $\int _0^\infty \ell \,\mathrm {LF}_\mathbb {H}^{(\alpha ,i)} (\ell ) \times \mathfrak {M}_\ell ^{\bullet }\,d\ell $.

By the definition of uniform welding, conditioning on $\mathcal {D}_1$, the conditional law of $\mathcal {D}^c_1$ is $\mathfrak {M}_\ell ^\bullet $ with $\ell $ being the boundary length of $\mathcal {D}_1$. Since the marked boundary point of $\mathcal {D}_1$ is sampled from the harmonic measure, Lemma REF  now yields Lemma REF .

[Proof of Proposition REF ] For $(\phi , \eta _1, \eta _2, \eta _3,p)$ defined above Lemma REF  with $\alpha =\gamma $, by Lemma REF  the law of $(\phi , \eta _1, \eta _2, \eta _3,p)$ is $\mathrm {Harm_{u_1,\eta _1}}(dp)\,\mathrm {LF}_{(\gamma , u_1), (\gamma , u_2), (\gamma , u_3)} (d\phi ) \,\mathsf {m}_3(d\eta _1,d\eta _2,d\eta _3)$
where $\\mathrm {Harm}_{u_1,\\eta _1}$ means the harmonic measure on $\\eta _1$ viewed from $u_1$ .", "For $(X, \\mathcal {D}^c_1)$ in Lemma REF , since $(\\phi , \\eta _1, \\eta _2, \\eta _3,p)$ and $(X, \\mathcal {D}^c_1)$ determine each other, Lemma REF with $\\alpha =\\gamma $ can be written as $\\mathrm {LF}_{(\\gamma , u_1), (\\gamma , u_2), (\\gamma , u_3)} (d\\phi )\\,\\mathrm {Harm_{u_1,\\eta _1}}(dp)\\, \\mathsf {m}_3(d\\eta _1,\\eta _2,\\eta _3)= \\int _0^\\infty \\ell \\mathrm {LF}_\\mathbb {H}^{(\\gamma ,i)} (\\ell ) \\times \\mathfrak {M}_\\ell ^{\\bullet }d\\ell .$ Recall the notations in Lemma REF .", "For $\\varepsilon \\in (0,\\frac{1}{4})$ , for any nonnegative measurable function $f$ of $\\phi |_{{\\eta _1,p,\\varepsilon }}$ , and any nonnegative measure function $g$ of $(\\eta _1,\\eta _2,\\eta _3)$ , we get from (REF ) that $&\\int f(\\phi |_{{\\eta _1,p,\\varepsilon }})g(\\eta _1,\\eta _2,\\eta _3) \\varepsilon ^{\\frac{1}{2}(\\alpha ^2 - \\gamma ^2)} e^{(\\alpha - \\gamma )X_\\varepsilon (i)} \\mathrm {LF}_{(\\gamma , u_1), (\\gamma , u_2), (\\gamma , u_3)}(d\\phi )\\, \\mathrm {Harm_{u_1,\\eta _1}}(dp)\\, \\mathsf {m}_3(d\\eta _1,d\\eta _2,d\\eta _3) \\nonumber \\\\&= \\int _0^\\infty \\mathopen {}\\mathclose {\\left(\\int f(\\phi |_{{\\eta _1,p,\\varepsilon }})g(\\eta _1,\\eta _2,\\eta _3) \\varepsilon ^{\\frac{1}{2}(\\alpha ^2 - \\gamma ^2)} e^{(\\alpha - \\gamma )X_\\varepsilon (i)} \\ell \\mathrm {LF}_\\mathbb {H}^{(\\gamma ,i)} (\\ell ) \\times \\mathfrak {M}_\\ell ^{\\bullet }\\right)}d\\ell .", "$ By Lemma REF , the left side of (REF ) equals $\\int f g \\mathopen {}\\mathclose {\\left(\\frac{1}{2}\\mathrm {CR}(\\eta , u_1)\\right)}^{-\\frac{\\alpha ^2}{2} + Q \\alpha - 2} \\mathrm {LF}_{(\\alpha , u_1), (\\gamma , u_2), (\\gamma , u_3)}(d\\phi )\\, \\mathrm {Harm_{u_1,\\eta _1}}(dp)\\, \\mathsf {m}_3(d\\eta _1,d\\eta _2,d\\eta _3).$ Here we write $f=f(\\phi |_{{\\eta _1,p,\\varepsilon }})$ and $g=g(\\eta _1,\\eta _2,\\eta _3)$ to ease the notation.", "By Lemma REF , the right side of (REF ) equals $\\int _0^\\infty \\mathopen {}\\mathclose {\\left(\\int fg \\ell \\mathrm {LF}_\\mathbb {H}^{(\\alpha ,i)} (\\ell ) \\times \\mathfrak {M}_\\ell ^{\\bullet }\\right)}d\\ell .$ Recall that $\\mathsf {m}_3^{\\alpha ,\\gamma ,\\gamma } = \\mathopen {}\\mathclose {\\left(\\frac{1}{2}\\mathrm {CR}(\\eta , u_1)\\right)}^{-\\frac{\\alpha ^2}{2} + Q \\alpha - 2} \\mathsf {m}_3$ .", "Comparing the two integrals above, we get in the same sense as in (REF ) that $\\mathrm {LF}_{(\\alpha , u_1), (\\gamma , u_2), (\\gamma , u_3)} (d\\phi )\\mathrm {Harm_{u_1,\\eta _1}}(dp)\\, \\mathsf {m}_3^{\\alpha ,\\gamma ,\\gamma }\\,(d\\eta _1,d\\eta _2,d\\eta _3)= \\int _0^\\infty \\ell \\mathrm {LF}_\\mathbb {H}^{(\\alpha ,i)} (\\ell ) \\times \\mathfrak {M}_\\ell ^{\\bullet }d\\ell .$ Forgetting about the point $p$ we get Proposition REF .", "[Proof of Theorem REF ] The case $(\\alpha _1, \\alpha _2, \\alpha _3) = (\\alpha , \\gamma , \\gamma )$ was proved in Proposition REF by reweighting from the $(\\gamma , \\gamma , \\gamma )$ case.", "The exact same argument in Lemma REF gives the following extension: $&\\int f (\\phi |_{{\\eta ,p,\\varepsilon }}) \\times \\varepsilon ^{\\frac{1}{2}(\\alpha ^2 - \\alpha _1^2)} e^{(\\alpha - \\alpha _1)X_\\varepsilon (i)} \\, d\\mathrm {LF}_{(\\alpha _1, u_1), (\\alpha _2, u_2), (\\alpha _3, u_3)}\\\\=& \\int f (\\phi |_{{\\eta ,p,\\varepsilon }}) \\mathopen {}\\mathclose {\\left(\\frac{1}{2}\\mathrm {CR}(\\eta , u_1)\\right)}^{-\\frac{\\alpha 
_1^2}{2} + Q \\alpha _1 - 2} \\mathrm {LF}_{(\\alpha , u_1), (\\alpha _2, u_2), (\\alpha _3, u_3)}.$ Applying this argument twice at $u_2$ and $u_3$ yields the general result." ], [ "Matching the quantum area: proof of Theorem ", "We start by proving the following counterpart of Lemma REF .", "Proposition 7.9 There is a constant $C = C(\\gamma ) \\in (0, \\infty )$ such that for $\\alpha _1, \\alpha _2, \\alpha _3 \\in (Q - \\frac{\\gamma }{4}, Q)$ and $\\mu >0$ , with notation as in Theorem REF and writing $\\phi $ for the random field from $\\mathrm {LF}_{(\\alpha _i, u_i)_i}$ , $\\mathopen {}\\mathclose {\\left(\\mathrm {LF}_{(\\alpha _i, u_i)_3} \\times \\mathsf {m}_3^{\\alpha _1, \\alpha _2, \\alpha _3}\\right)} [e^{- \\mu _\\phi (}] =C \\prod _{i=1}^3 \\frac{2^{\\alpha _i^2/2 - Q\\alpha _i} \\Gamma (\\frac{\\gamma \\alpha _i}{2} -\\frac{\\gamma ^2}{4}) }{\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha _i)) \\cos (\\frac{2\\pi }{\\gamma }(Q-\\alpha _i))} \\mathopen {}\\mathclose {\\left( \\frac{\\pi \\Gamma (\\frac{\\gamma ^2}{4})}{\\Gamma (1 - \\frac{\\gamma ^2}{4})}\\right)}^{-\\frac{\\alpha _i}{\\gamma }} .$ Recall $\\mathrm {QP}_3(\\ell _1, \\ell _2, \\ell _3)[e^{-A}]$ from Theorem REF .", "Recall $ \\mathcal {M}_1^\\mathrm {disk}(\\alpha _i; \\ell _i)[e^{-A_i}]$ from Theorems REF , where $A_i$ is the quantum area of a sample from $\\mathcal {M}_1^\\mathrm {disk}(\\alpha _i; \\ell _i)$ .", "By Theorem REF , for some $\\gamma $ -dependent constants $C_1,C_2$ we have that $\\mathopen {}\\mathclose {\\left(\\mathrm {LF}_{(\\alpha _i, u_i)_3} \\times \\mathsf {m}_3^{\\alpha _1, \\alpha _2, \\alpha _3}\\right)} [e^{- \\mu _\\phi (}]$ equals $C_1& \\iiint _0^\\infty \\ell _1\\ell _2\\ell _3 \\mathrm {QP}_3(\\ell _1, \\ell _2, \\ell _3)[e^{-A}] \\prod _{i=1}^3 \\mathcal {M}_1^\\mathrm {disk}(\\alpha _i; \\ell _i)[e^{-A_i}] \\, d\\ell _1\\, d\\ell _2 \\, d\\ell _3 \\\\&= C_2 \\prod _{i=1}^3 \\frac{\\overline{U}(\\alpha _i) (4 \\sin (\\frac{\\pi \\gamma ^2}{4}))^{\\alpha _i/\\gamma }}{2^{\\alpha _i^2/2}\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha _i))} \\int _0^\\infty \\frac{1}{\\sqrt{\\ell _i}}e^{-\\ell _i \\sqrt{\\frac{1}{\\sin (\\pi \\gamma ^2/4)}}} K_{\\frac{2}{\\gamma }(Q-\\alpha _i)} \\mathopen {}\\mathclose {\\left( \\ell _i\\sqrt{\\frac{1}{\\sin (\\pi \\gamma ^2/4)}} \\right)} \\, d\\ell _i, $ where $\\overline{U}$ is as in (REF ).", "Expanding $\\overline{U}(\\alpha _i)$ and using Lemma REF to simplify the integral on the second line, this is equal to $C \\prod _{i=1}^3 \\frac{2^{\\alpha _i^2/2 - Q\\alpha _i} \\Gamma (\\frac{\\gamma \\alpha _i}{2} -\\frac{\\gamma ^2}{4}) }{\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha _i)) \\cos (\\frac{2\\pi }{\\gamma }(Q-\\alpha _i))} \\mathopen {}\\mathclose {\\left(\\frac{\\sin (\\frac{\\pi \\gamma ^2}{4}) \\Gamma (1-\\frac{\\gamma ^2}{4})^2}{\\pi ^2} \\right)}^{\\frac{\\alpha _i}{\\gamma }} .", "$ The identity $\\Gamma (z)\\Gamma (1-z) = \\frac{\\pi }{\\sin (\\pi z)}$ then yields the result.", "[Proof of Theorem ] We divide the proof into four cases depending on the parameter range.", "Case I: $\\kappa < 4$ and $\\lambda _i \\in (\\frac{3\\kappa }{32} - 1 + \\frac{2}{\\kappa }, \\frac{\\kappa }{8} -1+\\frac{2}{\\kappa })$ for all $i=1,2,3$ .", "In this case we can find $\\alpha _i \\in (Q - \\frac{\\gamma }{4}, Q)$ satisfying $\\lambda _i = -\\frac{\\alpha _i^2}{2} + Q \\alpha _i - 2$ for each $i$ .", "Comparing Lemmas REF and REF yields for some $\\gamma $ -dependent constant $C$ that ${C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}= 
\\frac{C}{C_\\gamma ^{\\mathrm {DOZZ}}(\\alpha _1,\\alpha _2,\\alpha _3)} \\prod _{i=1}^3 \\frac{ \\Gamma (\\frac{\\gamma \\alpha _i}{2} -\\frac{\\gamma ^2}{4}) }{\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha _i)) \\cos (\\frac{2\\pi }{\\gamma }(Q-\\alpha _i))} \\mathopen {}\\mathclose {\\left( \\frac{\\pi \\Gamma (\\frac{\\gamma ^2}{4})}{\\Gamma (1 - \\frac{\\gamma ^2}{4})}\\right)}^{-\\frac{\\alpha _i}{\\gamma }} .$ The constant $C$ is recovered by setting $\\alpha _1=\\alpha _2=\\alpha _3=\\gamma $ , for which $\\lambda _i = 0$ and $C_\\kappa ^{\\operatorname{CLE}}(0,0,0) = 1$ .", "This gives () for ${C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}$ and () for the factor $N_\\gamma (\\alpha )$ .", "For the next three cases we need the following monotonicity.", "Since the distance between $u_i$ and $\\eta _i$ is less than 1 by our choice of $ u_1,u_2,u_3$ , by the Koebe 1/4 theorem we have $\\frac{1}{4}\\mathrm {CR}(\\eta _i, u_i) \\le 1$ .", "Therefore $ \\mathbb {E}[\\prod _{i=1}^3 ( \\frac{1}{4}\\mathrm {CR}(\\eta _i, u_i))^{\\lambda _i}] \\ge \\mathbb {E}[\\prod _{i=1}^3 ( \\frac{1}{4}\\mathrm {CR}(\\eta _i,u_i))^{\\widetilde{\\lambda }_i}] $ for any real $\\lambda _i, \\widetilde{\\lambda }_i$ such that $\\lambda _i \\le \\widetilde{\\lambda }_i$ for $i=1,2,3$ , hence $4^{-\\lambda _1 - \\lambda _2 - \\lambda _3}C_\\kappa ^{\\operatorname{CLE}}(\\lambda _1, \\lambda _2, \\lambda _3) \\ge 4^{-\\widetilde{\\lambda }_1 - \\widetilde{\\lambda }_2 - \\widetilde{\\lambda }_3}C_\\kappa ^{\\operatorname{CLE}}(\\widetilde{\\lambda }_1,\\widetilde{\\lambda }_2,\\widetilde{\\lambda }_3).$ Case II: $\\kappa < 4$ and all $\\lambda _i > \\frac{3\\kappa }{32} - 1 + \\frac{2}{\\kappa }$ .", "By (REF ) ${C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}$ is finite, hence analytic, on $\\lbrace (z_1,z_2,z_3)\\in 3: \\operatorname{Re}z_i > \\frac{3\\kappa }{32} -1+\\frac{2}{\\kappa }\\textrm { for }i=1,2,3\\rbrace $ .", "On the other hand, the right hand side of () is a meromorphic function.", "Since Case II includes Case I, we see that () holds in Case II.", "Case III: $\\kappa < 4$ and $\\lambda _i \\le \\frac{3\\kappa }{32} - 1 + \\frac{2}{\\kappa }$ for some $i$ .", "We will show that ${C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}= \\infty $ in this case.", "By the monotonicity (REF ) and symmetry, it suffices to prove ${C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}= \\infty $ for $\\lambda _1 \\le \\frac{3\\kappa }{32} - 1 + \\frac{2}{\\kappa }$ and $\\lambda _2, \\lambda _3 > \\frac{3\\kappa }{32} -1+\\frac{2}{\\kappa }$ .", "For $\\widetilde{\\lambda }_1>\\frac{3\\kappa }{32} -1+\\frac{2}{\\kappa }$ we have $4^{-\\lambda _1 - \\lambda _2 - \\lambda _3}C_\\kappa ^{\\operatorname{CLE}}(\\lambda _1, \\lambda _2, \\lambda _3) > 4^{-\\widetilde{\\lambda }_1 - \\lambda _2 - \\lambda _3}C_\\kappa ^{\\operatorname{CLE}}(\\widetilde{\\lambda }_1, \\lambda _2, \\lambda _3), $ By the explicit formula for $C_\\kappa ^{\\operatorname{CLE}}(\\widetilde{\\lambda }_1, \\lambda _2, \\lambda _3)$ in () we get $C_\\kappa ^{\\operatorname{CLE}}(\\widetilde{\\lambda }_1, \\lambda _2, \\lambda _3)\\rightarrow \\infty $ as $\\widetilde{\\lambda }_1\\downarrow \\frac{3\\kappa }{32} -1+\\frac{2}{\\kappa }$ ; indeed, writing $(\\widetilde{\\alpha }_1, \\alpha _2, \\alpha _3)\\in (Q-\\frac{\\gamma }{4}, Q)^3$ for the parameters corresponding to $(\\widetilde{\\lambda }_1, \\lambda _2, \\lambda _3)$ , as $\\widetilde{\\alpha }_1 
\\downarrow Q-\\frac{\\gamma }{4}$ we have $\\lim _{\\widetilde{\\alpha }_1 \\downarrow Q - \\frac{\\gamma }{4}}N_\\gamma (\\widetilde{\\alpha }_1) = \\infty $ while $C_\\gamma ^\\mathrm {DOZZ}(\\widetilde{\\alpha }_1, \\alpha _2, \\alpha _3)$ remains positive since $(\\widetilde{\\alpha }_1, \\alpha _2, \\alpha _3)$ satisfies the Seiberg bounds ().", "Therefore ${C^{\\operatorname{CLE}}_\\kappa (\\lambda _1,\\lambda _2,\\lambda _3)}= \\infty $ as desired.", "Case IV: $\\kappa = 4$ .", "This follows from Lemma REF via the continuity as $\\kappa \\uparrow 4$ ." ], [ "Electrical thickness of the SLE loop via conformal welding", "In this section we prove Theorem REF .", "Recall from Section  the cylinder $\\mathcal {C}= \\mathbb {R}\\times [0,2\\pi ]$ with $x \\in \\mathbb {R}$ identified with $x + 2\\pi i$ , and that $\\mathcal {L}_\\kappa (\\mathcal {C})$ is the pullback of the loop shape measure $\\mathcal {L}_\\kappa $ under the map $z \\mapsto e^{-z}$ .", "Thus $\\mathcal {L}_\\kappa (\\mathcal {C})$ is a probability measure on loops $\\eta $ in $\\mathcal {C}$ separating $\\pm \\infty $ and satisfying $\\max _{z \\in \\eta } \\operatorname{Re}z = 0$ .", "For a loop $\\eta $ sampled from $\\mathcal {L}_\\kappa (\\mathcal {C})$ , we write $\\vartheta (\\eta )$ for the electrical thickness of $\\exp (\\eta )$ .", "Then Theorem REF is equivalent to $\\mathbb {E}[e^{\\lambda \\vartheta (\\eta )}]= \\mathopen {}\\mathclose {\\left\\lbrace \\begin{array}{ll}\\frac{\\sin (\\pi (1-\\kappa /4))}{\\pi (1-\\kappa /4)}\\frac{\\pi \\sqrt{(1-\\kappa /4)^2+\\lambda \\kappa /2}}{\\sin (\\pi \\sqrt{ (1-\\kappa /4)^2+\\lambda \\kappa /2})} & \\mbox{if } \\lambda < 1-\\frac{\\kappa }{8}.", "\\\\\\infty & \\mbox{if } \\lambda \\ge 1-\\frac{\\kappa }{8}\\end{array}\\right.", "}$ Consider $\\alpha < Q$ and set $\\lambda = \\frac{\\alpha ^2}{2} - Q\\alpha +2$ .", "Let $\\mathcal {L}_\\kappa ^\\alpha $ be defined by the following reweighting of $\\mathcal {L}_\\kappa (\\mathcal {C})$ .", "$\\frac{d \\mathcal {L}_\\kappa ^\\alpha }{d\\mathcal {L}_\\kappa (\\mathcal {C})}(\\eta ) = (\\frac{1}{4} \\mathrm {CR}(\\exp (\\eta ),0) \\mathrm {CR}(\\exp (-\\eta ),0))^{-\\frac{\\alpha ^2}{2} + Q\\alpha - 2} = 2^{2\\lambda }e^{\\lambda \\vartheta (\\eta )},$ Thus proving Theorem REF amounts to computing the total mass $|\\mathcal {L}_\\kappa ^\\alpha |$ .", "We will achieve this using the same strategy as in the proof of Theorem  in Section : first establish a conformal welding identity and then evaluate an observable in two ways.", "Sample a pair $(\\eta , \\mathbf {t})$ from $\\mathcal {L}_\\kappa ^\\alpha \\times dt$ (where $dt$ is the Lebesgue measure on $\\mathbb {R}$ ), and let $\\operatorname{SLE}_\\kappa ^{\\mathrm {sep},\\alpha }$ be the law of the translated loop $\\eta + \\mathbf {t}$ .", "Then $\\operatorname{SLE}_\\kappa ^{\\mathrm {sep},\\alpha }$ is an infinite measure on loops on $\\mathcal {C}$ separating $\\pm \\infty $ .", "Note that $\\mathcal {L}_\\kappa ^\\gamma =\\mathcal {L}_\\kappa (\\mathcal {C})$ .", "According to Lemma REF , $\\operatorname{SLE}_\\kappa ^{\\mathrm {sep},\\gamma }$ is a constant multiple of the measure $\\operatorname{SLE}^{\\operatorname{sep}}_\\kappa (\\mathcal {C})$ from Section .", "To prove Theorem REF , we consider a generalization of the loop decorated two-pointed quantum sphere $\\mathrm {QS}_2\\otimes \\operatorname{SLE}_\\kappa ^{\\operatorname{sep}}$ from Proposition REF , where marked points have $\\alpha $ -singularities.", "We first recall the $\\alpha $ 
-generalization of $\\mathrm {QS}_2$ from [19] and its relation to LCFT following [4].", "Definition 8.1 For $\\gamma \\in (0,2)$ , let $(B_s)_{s \\ge 0}$ be a standard Brownian motion conditioned on $B_{s} - (Q-\\alpha )s<0$ for all $s>0$ , and $(\\widetilde{B}_s)_{s \\ge 0}$ an independent copy of $(B_s)_{s \\ge 0}$ .", "Let $Y_t =\\mathopen {}\\mathclose {\\left\\lbrace \\begin{array}{ll}B_{t} - (Q -\\alpha )t & \\mbox{if } t \\ge 0 \\\\\\widetilde{B}_{-t} +(Q-\\alpha ) t & \\mbox{if } t < 0\\end{array}\\right.}", ".$ Let $h^1(z) = Y_{\\operatorname{Re}z}$ for each $z \\in \\mathcal {C}$ .", "Let $h^2_\\mathcal {C}$ be independent of $h^1$ and have the law of the lateral component of the GFF on $\\mathcal {C}$ .", "Let $\\hat{h}=h^1+h^2_\\mathcal {C}$ .", "Let $\\mathbf {c}\\in \\mathbb {R}$ be sampled from $ \\frac{\\gamma }{2} e^{2(\\alpha -Q)c}dc$ independent of $\\hat{h}$ and set $h=\\hat{h}+\\mathbf {c}$ .", "Let $\\mathcal {M}_2^\\mathrm {sph}(\\alpha )$ be the infinite measure describing the law of the decorated quantum surface $(\\mathcal {C}, h , -\\infty , +\\infty )/{\\sim _\\gamma }$ .", "Recall the unit-volume reflection coefficient for LCFT on the sphere [40], [65] $\\overline{R}(\\alpha ) := -\\mathopen {}\\mathclose {\\left(\\frac{\\pi \\Gamma (\\frac{\\gamma ^2}{4})}{\\Gamma (1-\\frac{\\gamma ^2}{4})}\\right)}^{\\frac{2}{\\gamma }(Q-\\alpha )} \\frac{1}{\\frac{2}{\\gamma }(Q-\\alpha )} \\frac{\\Gamma (-\\frac{\\gamma }{2}(Q-\\alpha ))}{\\Gamma (\\frac{\\gamma }{2}(Q-\\alpha ))\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha ))} .$ Lemma 8.2 The law of the quantum area of a sample from $\\mathcal {M}_2^\\mathrm {sph}(\\alpha )$ is $1_{a>0} \\frac{1}{2} \\overline{R}(\\alpha ) a^{\\frac{2}{\\gamma }(\\alpha - Q) - 1} \\, da.", "$ For $0< a < a^{\\prime }$ with $\\widehat{h}$ as in Definition REF , we have $\\mathcal {M}_2^\\mathrm {sph}(\\alpha )[ \\mu _{\\widehat{h} + c} (\\mathcal {C}) \\in (a, a^{\\prime }) ] = \\mathbb {E}\\mathopen {}\\mathclose {\\left[\\int _{-\\infty }^\\infty \\mathbf {1}_{e^{\\gamma c} \\mu _{\\widehat{h}}(\\mathcal {C}) \\in (a, a^{\\prime })} \\frac{\\gamma }{2} e^{2(\\alpha -Q)c} \\, dc \\right]} = \\mathbb {E}\\mathopen {}\\mathclose {\\left[\\int _a^{a^{\\prime }} \\frac{\\gamma }{2} \\mathopen {}\\mathclose {\\left(\\frac{y}{\\mu _{\\widehat{h}}(\\mathcal {C})}\\right)}^{\\frac{2}{\\gamma }(\\alpha - Q)} \\frac{1}{\\gamma y} \\, dy \\right]}$ where we have used the change of variables $y = e^{\\gamma c} \\mu _{\\widehat{h}}(\\mathcal {C})$ .", "By [40] and [65], for $\\alpha \\in (\\frac{\\gamma }{2}, Q)$ we have $\\mathbb {E}[\\mu _{\\widehat{h}}(\\mathcal {C})^{\\frac{2}{\\gamma }(Q-\\alpha )}] = \\overline{R}(\\alpha )$ .", "Interchanging the expectation and integral gives the result.", "Fix $\\kappa \\in (\\frac{8}{3},4)$ and $\\gamma =\\sqrt{\\kappa }$ .", "Let $\\mathbb {F}$ be the law of $h$ as in Definition REF , so that the law of $(\\mathcal {C}, h,-\\infty ,+\\infty )/{\\sim _\\gamma }$ is $\\mathcal {M}_2^\\mathrm {sph}(\\alpha )$ .", "Now sample $(h,\\eta )$ from $\\mathbb {F} \\times \\operatorname{SLE}_\\kappa ^{\\mathrm {sep}, \\alpha }$ and write $ \\mathcal {M}_2^\\mathrm {sph}(\\alpha ) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep}}$ as the law of $(\\mathcal {C}, h, \\eta , -\\infty ,+\\infty )/{\\sim _\\gamma }$ .", "Recall the Liouville field with $\\alpha $ -insertions $\\mathrm {LF}_\\mathcal {C}^{(\\alpha , \\pm \\infty )}$ on $\\mathcal {C}$ from Definition REF .", "We have the following description of $ \\mathcal 
{M}_2^\\mathrm {sph}(\\alpha ) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep}}$ .", "Proposition 8.3 If $(\\phi , \\eta )$ is sampled from $\\mathrm {LF}_\\mathcal {C}^{(\\alpha , \\pm \\infty )}\\times \\mathcal {L}_\\kappa ^\\alpha $ , the law of $(\\mathcal {C}, \\phi , \\eta , -\\infty ,+\\infty )/{\\sim _\\gamma }$ is $C (Q-\\alpha )^2 \\mathcal {M}_2^\\mathrm {sph}(\\alpha ) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep}, \\alpha } \\quad \\text{for some constant }C=C(\\gamma )\\in (0,\\infty ).$ This is an immediate consequence of the definition of $\\operatorname{SLE}_\\kappa ^{\\mathrm {sep},\\alpha }$ and Theorem 2.11 of [4], which says the following: let $h$ be the field in Definition REF , so the law of $(\\mathcal {C}, h, -\\infty ,+\\infty )/{\\sim }_\\gamma $ is $\\mathcal {M}_2^\\mathrm {sph}(\\alpha )$ .", "Let $T \\in \\mathbb {R}$ be sampled from Lebesgue measure independently of $h$ , and set $\\phi := h( \\cdot +T)$ .", "Then $\\phi $ has law $\\frac{\\gamma }{4 (Q-\\alpha )^2} \\mathrm {LF}_\\mathcal {C}^{(\\alpha , \\pm \\infty )}$ .", "We now present the conformal welding result needed for the proof of Theorem REF .", "Proposition 8.4 For $\\alpha \\in (\\frac{\\gamma }{2}, Q)$ and for some constant $C = C(\\gamma )$ we have $C (Q-\\alpha )^2\\mathcal {M}_2^\\mathrm {sph}(\\alpha ) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep}, \\alpha } = \\int _0^\\infty \\ell \\cdot \\mathrm {Weld}(\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell ), \\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell )) \\, d\\ell .", "$ We postpone the proof of Proposition REF to Section REF and proceed to the proof of Theorem REF .", "Similarly as in the proof of Theorem , we would like to compare the area of a sample from $\\mathcal {M}_2^\\mathrm {sph}(\\alpha ) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep}, \\alpha } $ using Propositions REF and REF to obtain $|\\mathcal {L}^\\alpha _\\kappa |$ and hence prove Theorem REF .", "But $\\mathcal {M}_2^\\mathrm {sph}(\\alpha ) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep}, \\alpha }[e^{- A}]=\\infty $ .", "Therefore we need to find a finite observable to compute.", "Note that $\\mathcal {M}_2^\\mathrm {sph}(\\alpha ) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep}, \\alpha } $ is a measure on quantum surfaces decorated by two marked points and a loop separating them.", "The loop separates the quantum surface into two connected components.", "For $0<\\varepsilon <\\delta $ , let $E_{\\delta , \\varepsilon }$ be the event that the connected component containing the first marked point has quantum area at least 1 and the loop has quantum length in $(\\varepsilon , \\delta )$ .", "The size of $E_{\\delta , \\varepsilon }$ is easy to compute using Proposition REF .", "Lemma 8.5 Let $\\alpha \\in (\\frac{\\gamma }{2}, Q)$ .", "With $C$ from Proposition REF , $\\mathcal {M}_2^\\mathrm {sph}(\\alpha ) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep}, \\alpha }[E_{\\delta ,\\varepsilon }]$ equals $\\frac{1}{C (Q-\\alpha )^2}\\times \\frac{(1+o_{\\delta ,\\varepsilon }(1))\\log \\varepsilon ^{-1}}{\\frac{2}{\\gamma }(Q-\\alpha )\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha ))}\\mathopen {}\\mathclose {\\left(\\frac{2}{\\gamma }2^{-\\frac{\\alpha ^2}{2}}\\overline{U}(\\alpha ) \\right)}^2\\mathopen {}\\mathclose {\\left(4 \\sin \\frac{\\pi \\gamma ^2}{4}\\right)}^{-\\frac{2}{\\gamma }(Q-\\alpha )},$ where the error term $o_{\\delta ,\\varepsilon }(1)$ satisfies $\\lim _{\\delta \\rightarrow 0} \\lim _{\\varepsilon 
\\rightarrow 0} o_{\\delta ,\\varepsilon }(1)=0$ .", "Let $A$ be the quantum area of a sample from $\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell )$ .", "By Proposition REF , to prove Lemma REF , it suffices to prove $\\int _\\varepsilon ^\\delta \\ell \\cdot \\mathopen {}\\mathclose {\\left| \\mathcal {M}_{1}^\\mathrm {disk}(\\alpha ; \\ell )\\right|}\\mathcal {M}_{1}^\\mathrm {disk}(\\alpha ; \\ell )[A > 1] \\, d\\ell = \\frac{(1+o_{\\delta ,\\varepsilon }(1))\\log \\varepsilon ^{-1}}{\\frac{2}{\\gamma }(Q-\\alpha )\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha ))}\\mathopen {}\\mathclose {\\left(\\frac{2}{\\gamma }2^{-\\frac{\\alpha ^2}{2}}\\overline{U}(\\alpha ) \\right)}^2\\mathopen {}\\mathclose {\\left(4 \\sin \\frac{\\pi \\gamma ^2}{4}\\right)}^{-\\frac{2}{\\gamma }(Q-\\alpha )}.$ since the left side of (REF ) is the mass of $E_{\\delta , \\varepsilon }$ under $\\int _0^\\infty \\ell \\cdot \\mathrm {Weld}(\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell ), \\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell )) \\, d\\ell $ .", "By the scaling of quantum area and boundary length, we have $\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell )^\\# [A > 1] = \\mathcal {M}_1^\\mathrm {disk}(\\alpha ; 1)^\\# [A > \\ell ^{-2}].$ By Theorem REF the quantum area law of $\\mathcal {M}_1^\\mathrm {disk}(\\alpha ;1)^\\#$ is inverse gamma with shape $a = \\frac{2}{\\gamma }(Q-\\alpha )$ and scale $b = (4 \\sin \\frac{\\pi \\gamma ^2}{4})^{-1}$ .", "Let $\\underline{\\Gamma }$ be the lower incomplete gamma function; this satisfies $\\lim _{y \\rightarrow 0} \\frac{\\underline{\\Gamma }(a; y)}{y^a} = \\frac{1}{a}$ .", "By the tail asymptotic property of the inverse gamma distribution, as $\\ell \\rightarrow 0$ , $\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell )^\\# [A > 1] = \\frac{\\underline{\\Gamma }(\\frac{2}{\\gamma }(Q-\\alpha ); \\ell ^2/4\\sin \\frac{\\pi \\gamma ^2}{4})}{\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha ))} = \\frac{1+o_\\ell (1)}{\\frac{2}{\\gamma }(Q-\\alpha )\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha ))}\\mathopen {}\\mathclose {\\left(\\frac{\\ell ^2}{4 \\sin \\frac{\\pi \\gamma ^2}{4}}\\right)}^{\\frac{2}{\\gamma }(Q-\\alpha )}.$ By Proposition REF we have $|\\mathcal {M}_1^\\mathrm {disk}(\\alpha ;\\ell )| = \\frac{2}{\\gamma }2^{-\\frac{\\alpha ^2}{2}}\\overline{U}(\\alpha ) \\ell ^{\\frac{2}{\\gamma }(\\alpha -Q)-1}$ .", "Therefore $\\int _\\varepsilon ^\\delta \\ell \\cdot \\mathopen {}\\mathclose {\\left| \\mathcal {M}_{1}^\\mathrm {disk}(\\alpha ; \\ell )\\right|}\\mathcal {M}_{1}^\\mathrm {disk}(\\alpha ; \\ell )[A > 1] \\, d\\ell = \\int _\\varepsilon ^\\delta \\ell \\mathopen {}\\mathclose {\\left(\\frac{2}{\\gamma }2^{-\\frac{\\alpha ^2}{2}}\\overline{U}(\\alpha ) \\ell ^{\\frac{2}{\\gamma }(\\alpha -Q)-1} \\right)}^2 \\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell )^\\# [A> 1] \\\\= \\frac{1+o_{\\delta ,\\varepsilon }(1)}{\\frac{2}{\\gamma }(Q-\\alpha )\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha ))}\\mathopen {}\\mathclose {\\left(\\frac{2}{\\gamma }2^{-\\frac{\\alpha ^2}{2}}\\overline{U}(\\alpha ) \\right)}^2\\mathopen {}\\mathclose {\\left(4 \\sin \\frac{\\pi \\gamma ^2}{4}\\right)}^{-\\frac{2}{\\gamma }(Q-\\alpha )}\\int _\\varepsilon ^\\delta \\ell ^{-1} \\, d\\ell .", "$ We can also compute the size of $E_{\\delta ,\\varepsilon }$ using Proposition REF in terms of $|\\mathcal {L}_\\kappa ^\\alpha |$ and $\\overline{R}(\\alpha )$ .", "Proposition 8.6 For $\\alpha \\in (\\frac{\\gamma }{2}, Q)$ , with the error term in the same sense as in Lemma REF , $(\\mathcal {M}_2^\\mathrm 
{sph}(\\alpha ) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep},\\alpha })[E_{\\delta , \\varepsilon }] = (1+o_{\\delta ,\\varepsilon }(1)) \\frac{\\overline{R}(\\alpha )}{2(Q-\\alpha )^2} |\\mathcal {L}_\\kappa ^\\alpha | \\log \\varepsilon ^{-1}.", "$ We postpone the proof of Proposition REF to Section REF and proceed to the proof of Theorem REF using Lemma REF and Proposition REF .", "Proposition 8.7 For some constant $C = C(\\gamma )$ and all $\\alpha \\in (\\frac{\\gamma }{2}, Q)$ we have $2^{-\\alpha ^2 + 2Q\\alpha }|\\mathcal {L}_\\kappa ^\\alpha | = C\\frac{ \\frac{\\gamma }{2} (Q-\\alpha )}{\\sin (\\frac{\\gamma \\pi }{2} (Q-\\alpha ))}.", "$ In this proof, we write $C$ for a $\\gamma $ -dependent constant that may change from line to line.", "By Proposition REF and Lemma REF we get $ (Q-\\alpha )^2 \\frac{\\overline{R}(\\alpha )}{2(Q-\\alpha )^2}|\\mathcal {L}_\\kappa ^\\alpha |=\\frac{C}{\\frac{2}{\\gamma }(Q-\\alpha )\\Gamma (\\frac{2}{\\gamma }(Q-\\alpha ))}\\mathopen {}\\mathclose {\\left(\\frac{2}{\\gamma }2^{-\\frac{\\alpha ^2}{2}}\\overline{U}(\\alpha ) \\right)}^2\\mathopen {}\\mathclose {\\left(4 \\sin \\frac{\\pi \\gamma ^2}{4}\\right)}^{-\\frac{2}{\\gamma }(Q-\\alpha )} .$ Using the definitions of $\\overline{R}(\\alpha )$ and $\\overline{U}(\\alpha )$ in (REF ) and (REF ) and cancelling equal terms gives $ \\mathopen {}\\mathclose {\\left(\\frac{\\pi \\Gamma (\\frac{\\gamma ^2}{4})}{\\Gamma (1-\\frac{\\gamma ^2}{4})}\\right)}^{\\frac{2}{\\gamma }(Q-\\alpha )} \\frac{\\Gamma (\\frac{\\gamma }{2}(\\alpha -Q))}{\\Gamma (\\frac{\\gamma }{2}(Q-\\alpha ))} |\\mathcal {L}_\\kappa ^\\alpha | =\\mathopen {}\\mathclose {\\left(\\frac{\\pi ^2}{\\Gamma (1-\\frac{\\gamma ^2}{4})^2 \\sin (\\frac{\\pi \\gamma ^2}{4})}\\right)}^{\\frac{2}{\\gamma }(Q-\\alpha )} C 2^{\\alpha ^2 - 2Q \\alpha }\\Gamma (1 + \\frac{\\gamma }{2}(\\alpha - Q))^2 $ The identity $\\Gamma (z)\\Gamma (1-z) = \\frac{\\pi }{\\sin (\\pi z)}$ gives equality of the first terms on the left and right hand sides, so rearranging and using $\\Gamma (z+1) = z\\Gamma (z)$ and $\\Gamma (1-z)\\Gamma (z) = \\frac{\\pi }{\\sin (\\pi z)}$ gives the desired identity: $2^{-\\alpha ^2 + 2Q\\alpha } |\\mathcal {L}_\\kappa ^\\alpha | = C \\frac{\\Gamma (1+\\frac{\\gamma }{2}(\\alpha -Q))}{\\Gamma (\\frac{\\gamma }{2}(\\alpha -Q))} \\Gamma (1- \\frac{\\gamma }{2}(Q-\\alpha ))\\Gamma (\\frac{\\gamma }{2}(Q-\\alpha )) = C \\frac{\\gamma }{2} (\\alpha - Q) \\cdot \\frac{\\pi }{\\sin ( \\frac{\\gamma \\pi }{2} (Q-\\alpha ))}.$ [Proof of Theorem REF ] We break the proof of (REF ) hence Theorem REF into three cases.", "Case I: $\\kappa <4$ and $\\lambda \\in (1 - \\frac{\\kappa }{8} - \\frac{2}{\\kappa }, 1-\\frac{\\kappa }{8})$ .", "Let $\\alpha = Q - \\sqrt{Q^2 - 4 + 2\\lambda } \\in (\\frac{\\gamma }{2}, Q)$ , so $\\lambda = \\frac{\\alpha ^2}{2}-Q\\alpha + 2$ .", "By (REF ) and Proposition REF we have $\\mathbb {E}[e^{\\lambda \\vartheta (\\eta )}] = 2^{-2\\lambda } |\\mathcal {L}_\\kappa ^\\alpha | = C\\frac{ \\frac{\\gamma }{2} (Q-\\alpha )}{\\sin (\\frac{\\gamma \\pi }{2} (Q-\\alpha ))}= C\\frac{ \\sqrt{(1-\\frac{\\kappa }{4})^2+\\frac{\\lambda \\kappa }{2}}}{\\sin (\\pi \\sqrt{(1-\\frac{\\kappa }{4})^2+\\frac{\\lambda \\kappa }{2}})}$ for some constant $C = C(\\gamma )$ .", "Since $\\kappa \\in (0,4)$ , we have $0 \\in (1-\\frac{\\kappa }{8}-\\frac{2}{\\kappa }, 1-\\frac{\\kappa }{8})$ .", "Thus we can obtain the value of $C$ by considering $1 = \\mathbb {E}[e^0] = \\frac{ C (1-\\kappa /4)}{\\sin (\\pi (1-\\kappa /4))}$ .", 
"This yields (REF ) in this case.", "Case II: $\\kappa < 4$ and $\\lambda \\in \\mathbb {R}$ .", "Since $\\vartheta (\\eta ) \\ge 0$ a.s. the function $\\lambda \\mapsto \\mathbb {E}[e^{\\lambda \\vartheta (\\eta )}]$ is increasing.", "Thus for $\\lambda < 0$ we have $\\mathbb {E}[e^{\\lambda \\vartheta (\\eta )}] \\le \\mathbb {E}[e^{0\\cdot \\vartheta (\\eta )}] = 1$ , and since $1 - \\frac{\\kappa }{8} -\\frac{2}{\\kappa }< 0$ we can use analytic continuation to extend the result from $(1-\\frac{\\kappa }{8} -\\frac{2}{\\kappa }, 1-\\frac{\\kappa }{8})$ to $(-\\infty , 1-\\frac{\\kappa }{8})$ .", "Finally, taking a limit from below, we have for any $\\lambda \\ge 1-\\frac{\\kappa }{8}$ that $\\mathbb {E}[e^{\\lambda \\vartheta (\\eta )}] \\ge \\lim _{\\lambda ^{\\prime } \\uparrow 1-\\frac{\\kappa }{8}} \\mathbb {E}[e^{\\lambda ^{\\prime } \\vartheta (\\eta )}] = \\infty $ , where the limit follows from the explicit formula obtained in Case I.", "Case III: $\\kappa = 4$ .", "For $\\lambda < \\frac{1}{2}$ , we obtain the result by taking $\\kappa \\uparrow 4$ as follows.", "Let $\\eta _\\kappa $ be sampled from $\\mathcal {L}_\\kappa $ , then $\\vartheta (\\eta _\\kappa ) \\rightarrow \\vartheta (\\eta _4)$ in law as $\\kappa \\uparrow 4$ by Lemma REF .", "Fix $\\lambda < \\lambda ^{\\prime } < 1 - \\frac{4}{8}$ .", "For $\\kappa $ sufficiently close to 4 the family $\\lbrace e^{\\lambda \\vartheta (\\eta _\\kappa )}\\rbrace $ is uniformly integrable, since Theorem REF gives a uniform bound on $\\mathbb {E}[e^{\\lambda ^{\\prime } \\vartheta (\\eta _\\kappa )}]$ for $\\kappa $ close to 4.", "Therefore $\\lim _{\\kappa \\uparrow 4} \\mathbb {E}[e^{\\lambda \\vartheta (\\eta _\\kappa )}] = \\mathbb {E}[e^{\\lambda \\vartheta (\\eta _4)}]$ .", "Now, for $\\lambda \\ge \\frac{1}{2}$ , the monotonicity argument of Case II gives $\\mathbb {E}[e^{\\lambda \\vartheta (\\eta )}] = \\infty $ .", "We now proceed to prove Propositions REF and REF in Sections REF and REF , respectively." 
], [ "Conformal welding of two quantum disks with generic insertions", "The goal of this section is to prove Proposition REF .", "Our argument closely follows that of Theorem REF .", "Suppose $\\eta $ is a simple curve in $\\mathcal {C}$ separating $\\pm \\infty $ with two marked points $p^-,p^+ \\in \\eta $ .", "Let $D^\\pm _\\eta $ be the connected components of $\\mathcal {C}\\backslash \\eta $ containing $\\pm \\infty $ , and let $\\psi ^\\pm _\\eta : \\mathbb {H}\\rightarrow D^\\pm _\\eta $ be the conformal maps sending $(i, 0)$ to $(\\pm \\infty , p^\\pm )$ .", "We need the following analogue of Lemma REF .", "Lemma 8.8 Let $\\eta $ be a simple curve in $\\mathcal {C}$ separating $\\pm \\infty $ with two marked points $p^-,p^+ \\in \\eta $ .", "For $\\varepsilon \\in (0,\\frac{1}{4})$ , let $\\mathcal {C}_{\\eta ,p^\\pm ,\\varepsilon } =\\mathcal {C}\\setminus (\\psi ^- _\\eta (B_\\varepsilon (i)) \\cup \\psi ^+_\\eta (B_\\varepsilon (i)))$ .", "For $\\phi $ sampled from $\\mathrm {LF}_\\mathcal {C}^{(\\gamma , \\pm \\infty )}$ , let $X^\\pm =\\phi \\circ \\psi ^\\pm _\\eta +Q \\log |(\\psi ^\\pm _\\eta )^{\\prime }|$ .", "Then for a fixed $\\alpha \\in (\\frac{\\gamma }{2},Q)$ and for any $\\varepsilon \\in (0,\\frac{1}{4})$ and any nonnegative measurable function $f$ of $\\phi |_{\\mathcal {C}_{\\eta ,p^\\pm ,\\varepsilon }}$ we have $&\\int f (\\phi |_{\\mathcal {C}_{\\eta ,p^\\pm ,\\varepsilon }}) \\times \\varepsilon ^{\\alpha ^2 - \\gamma ^2} e^{(\\alpha - \\gamma )(X^-_\\varepsilon (i) + X^+_\\varepsilon (i))}\\, d\\mathrm {LF}_\\mathcal {C}^{(\\gamma ,\\pm \\infty )}\\\\&= \\int f (\\phi |_{\\mathcal {C}_{\\eta ,p^\\pm ,\\varepsilon }}) \\mathopen {}\\mathclose {\\left(\\frac{1}{4}\\mathrm {CR}(\\exp (\\eta ), 0)\\mathrm {CR}(\\exp (-\\eta ), 0)\\right)}^{-\\frac{\\alpha ^2}{2} + Q \\alpha - 2} \\, d\\mathrm {LF}_\\mathcal {C}^{(\\alpha , \\pm \\infty )}.$ We reparametrize in $ and apply the argument of Lemma~\\ref {lem:reweight} twice.", "Let $ g: be given by $g(z) = \\frac{z}{z-1}$ and let $G : \\mathcal {C}\\rightarrow be given by $ G = g $.", "By \\cite [Lemma 2.13]{AHS-SLE-integrability}, if $$ is sampled from $ LFC(, )$ then $ := G-1 + Q |(G-1)'|$ has law $ LF(, 0), (, -1)$, and the same is true when $$ is replaced by $$.", "Let $ (, p-, p+) = (g(), g(p-), g(p+))$.", "Since $ G'(0) = 1$ and $ ddz (G(1z))|z=0 = 1$, we see that $ CR((), 0) = CR( , 0)$ and $ CR( (-), 0) = CR(, -1)$.", "Thus, it is equivalent to show that if $ , p, := (G(-(B(i)))G(+( B(i))) )$ and $ f$ is a function of $ |, p, $, then{\\begin{@align*}{1}{-1}&\\int \\hat{f} (\\hat{\\phi }|_{{\\hat{\\eta }, \\hat{p}^\\pm , \\varepsilon }}) \\times \\varepsilon ^{\\alpha ^2 - \\gamma ^2} e^{(\\alpha - \\gamma )(X^-_\\varepsilon (i) + X^+_\\varepsilon (i))} \\, d\\mathrm {LF}_{(\\gamma ,0), (\\gamma , -1)}\\\\&= \\int \\hat{f} (\\hat{\\phi }|_{{\\hat{\\eta }, \\hat{p}^\\pm , \\varepsilon }}) \\mathopen {}\\mathclose {\\left(\\frac{1}{4}\\mathrm {CR}(\\hat{\\eta }, 0)\\mathrm {CR}(\\hat{\\eta }, -1)\\right)}^{-\\frac{\\alpha ^2}{2} + Q \\alpha - 2} d\\mathrm {LF}_{(\\alpha , 0), (\\alpha , - 1)}.\\end{@align*}}Indeed, Lemma~\\ref {lem:reweight} shows that{\\begin{@align*}{1}{-1}\\int \\hat{f} (\\hat{\\phi }) \\times \\varepsilon ^{\\frac{1}{2}(\\alpha ^2 - \\gamma ^2)} e^{(\\alpha - \\gamma )X^-_\\varepsilon (i)} \\, d\\mathrm {LF}_{(\\gamma ,0), (\\gamma , -1)}= \\int \\hat{f} (\\hat{\\phi }) \\mathopen {}\\mathclose {\\left(\\frac{1}{2}\\mathrm {CR}(\\hat{\\eta }, 0)\\right)}^{-\\frac{\\alpha ^2}{2} + Q 
\\alpha - 2} d\\mathrm {LF}_{(\\alpha , 0), (\\gamma , - 1)},\\end{@align*}}and applying the argument of Lemma~\\ref {lem:reweight} again to change the insertion at $ -1$ yields the result.$ For a curve $\\eta $ in $\\mathcal {C}$ separating $\\pm \\infty $ , we let $\\mathrm {Harm}_{-\\infty , \\eta }$ (resp.", "$\\mathrm {Harm}_{+\\infty , \\eta }$ ) be the harmonic measure on $\\eta $ viewed from $-\\infty $ (resp., $+\\infty $ ).", "Lemma 8.9 There is a constant $C = C(\\gamma )$ such that the following holds.", "Suppose $\\alpha \\in (\\frac{\\gamma }{2}, Q)$ .", "Sample $(\\phi , \\eta , p^-, p^+)$ from the measure $C \\cdot \\mathrm {LF}_\\mathcal {C}^{(\\alpha , \\pm \\infty )}(d\\phi )\\, \\mathcal {L}_\\kappa ^\\alpha (d\\eta )\\, \\mathrm {Harm}_{-\\infty , \\eta }(dp^-)\\, \\mathrm {Harm}_{+\\infty , \\eta }(dp^+).$ Let $X_\\pm = \\phi \\circ \\psi _\\eta ^\\pm + Q \\log |(\\psi _\\eta ^\\pm )^{\\prime }|$ .", "Let $\\tau $ be the quantum length of the clockwise arc from $p^-$ to $p^+$ in $D^+_\\eta $ .", "Then the law of $(X^-, X^+, \\tau )$ is $\\int _0^\\infty \\mathrm {LF}_\\mathbb {H}^{(\\alpha , i)}(\\ell ) \\times \\mathrm {LF}_\\mathbb {H}^{(\\alpha , i)}(\\ell ) \\times [1_{\\tau \\in (0, \\ell )} d\\tau ] \\, d\\ell .$ We first prove the case $\\alpha = \\gamma $ , namely $C \\cdot \\mathrm {LF}_\\mathcal {C}^{(\\gamma , \\pm \\infty )}(d\\phi )\\, \\mathcal {L}_\\kappa (d\\eta )\\, \\mathrm {Harm}_{-\\infty , \\eta }(dp^-)\\, \\mathrm {Harm}_{+\\infty , \\eta }(dp^+) = \\int _0^\\infty \\mathrm {LF}_\\mathbb {H}^{(\\gamma , i)}(\\ell ) \\times \\mathrm {LF}_\\mathbb {H}^{(\\gamma , i)}(\\ell ) \\times [1_{\\tau \\in (0, \\ell )} d\\tau ] \\, d\\ell ,$ where with abuse of notation we view the left hand side as a measure on triples $(X^-, X^+, \\tau ) \\in H^{-1}(\\mathbb {H})\\times H^{-1}(\\mathbb {H})\\times [0,\\infty )$ .", "Indeed, (REF ) is an immediate consequence of Proposition REF and Lemma REF .", "Now, let $\\varepsilon \\in (0, \\frac{1}{4})$ and $\\mathbb {H}_\\varepsilon := \\mathbb {H}\\backslash B_\\varepsilon (i)$ .", "Let $f$ be a nonnegative measurable function of $(X^-|_{\\mathbb {H}_\\varepsilon }, X^+|_{\\mathbb {H}_\\varepsilon }, \\tau )$ , then reweighting (REF ) gives $C\\int f(X^-|_{\\mathbb {H}_\\varepsilon }, X^+|_{\\mathbb {H}_\\varepsilon }, \\tau ) \\varepsilon ^{\\alpha ^2 - \\gamma ^2} e^{(\\alpha - \\gamma ) (X^-_\\varepsilon (i) + X^+_\\varepsilon (i))}\\mathrm {LF}_\\mathcal {C}^{(\\gamma ,\\pm \\infty )}(d\\phi ) \\mathcal {L}_\\kappa (d\\eta ) \\mathrm {Harm}_{-\\infty , \\eta }(dp^+)\\mathrm {Harm}_{-\\infty , \\eta }(dp^+) \\\\= \\int _0^\\infty \\mathopen {}\\mathclose {\\left(\\int f(X^-|_{\\mathbb {H}_\\varepsilon }, X^+|_{\\mathbb {H}_\\varepsilon }, \\tau ) \\varepsilon ^{\\alpha ^2 - \\gamma ^2} e^{(\\alpha - \\gamma ) (X^-_\\varepsilon (i) + X^+_\\varepsilon (i))} \\ell \\mathrm {LF}_\\mathbb {H}^{(\\gamma , i)}(\\ell ) \\times \\mathrm {LF}_\\mathbb {H}^{(\\gamma , i)}(\\ell ) \\times [1_{\\tau \\in (0, \\ell )} d\\tau ] \\right)} \\, d\\ell .$ By Lemma REF , the left hand side equals $ C\\int f(X^-|_{\\mathbb {H}_\\varepsilon }, X^+|_{\\mathbb {H}_\\varepsilon }, \\tau ) \\mathrm {LF}_\\mathcal {C}^{(\\alpha ,\\pm \\infty )}(d\\phi ) \\mathcal {L}_\\kappa ^\\alpha (d\\eta ) \\mathrm {Harm}_{-\\infty , \\eta }(dp^-)\\mathrm {Harm}_{+\\infty , \\eta }(dp^+).$ By Lemma REF , the right hand side equals $ \\int _0^\\infty \\mathopen {}\\mathclose {\\left(\\int f(X^-|_{\\mathbb {H}_\\varepsilon }, X^+|_{\\mathbb 
{H}_\\varepsilon }, \\tau ) \\mathrm {LF}_\\mathbb {H}^{(\\alpha , i)}(\\ell ) \\times \\mathrm {LF}_\\mathbb {H}^{(\\alpha , i)}(\\ell ) \\times [1_{\\tau \\in (0, \\ell )} d\\tau ] \\right)} \\, d\\ell .", "$ Since the above two expressions agree for every $\\varepsilon $ and $f$ , we obtain the result.", "[Proof of Proposition REF ] In Lemma REF , the law of $(\\mathcal {C}, \\phi , \\eta , \\pm \\infty )/{\\sim _\\gamma }$ is $C (Q-\\alpha )^2 \\mathcal {M}_2^\\mathrm {sph}(\\alpha ) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep}, \\alpha }$ by (REF ), and the joint law of $((\\mathbb {H}, X^-, i)/{\\sim _\\gamma }, (\\mathbb {H}, X^+, i)/{\\sim _\\gamma }, \\tau )$ is $\\int _0^\\infty \\ell \\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell ) \\times \\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell ) \\times [1_{\\tau \\in (0,\\ell )} \\ell ^{-1} d\\tau ]\\, d\\ell $ .", "Since $[1_{\\tau \\in (0,\\ell )} \\ell ^{-1} d\\tau ]$ corresponds to the uniform conformal welding, the law of $(\\mathcal {C}, \\phi , \\eta , \\pm \\infty )/{\\sim _\\gamma }$ is $\\int _0^\\infty \\ell \\mathrm {Weld}(\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell ), \\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell ))\\, d\\ell $ ." ], [ "Proof of Proposition ", "We first reformulate Proposition REF by embedding in the cylinder.", "Sample the field $h$ as in Definition REF so that the law of $(\\mathcal {C}, h , -\\infty , +\\infty )/{\\sim _\\gamma }$ is $\\mathcal {M}_2^\\mathrm {sph}(\\alpha )$ .", "Now we restrict to the event $\\lbrace \\mu _h(\\mathcal {C}) > 1\\rbrace $ and set $\\phi := h(\\cdot - a)$ , where $a \\in \\mathbb {R}$ is such that $\\mu _h((-\\infty , a) \\times [0,2\\pi ]) = 1$ .", "Let $M$ be the law of $\\phi $ under this restriction.", "Lemma 8.10 Let $\\mathcal {M}_2^\\mathrm {sph}(\\alpha )|_{\\lbrace A > 1\\rbrace }$ be the restriction of $\\mathcal {M}_2^\\mathrm {sph}(\\alpha )$ to quantum surfaces with quantum area greater than 1.", "If we sample $(\\phi , \\mathbf {t}, \\eta ^0)$ from $ M \\times dt\\times \\mathcal {L}_\\kappa ^\\alpha $ where $dt$ is Lebesgue measure on $\\mathbb {R}$ and set $\\eta = \\eta ^0 + \\mathbf {t}$ , then the law of $(\\mathcal {C}, \\phi , \\eta , \\pm \\infty )/{\\sim _\\gamma }$ is $(\\mathcal {M}_2^\\mathrm {sph}(\\alpha )|_{\\lbrace A>1\\rbrace }) \\otimes \\operatorname{SLE}_\\kappa ^{\\mathrm {sep},\\alpha }$ .", "This is immediate from the definitions of $M$ and $\\operatorname{SLE}_\\kappa ^{\\mathrm {sep},\\alpha }$ .", "For a field and curve $(\\phi , \\eta )$ , writing $D_\\pm $ for the connected components of $\\mathcal {C}\\backslash \\eta $ containing $\\pm \\infty $ , we define the event $E_{\\delta , \\varepsilon } = \\lbrace (\\phi , \\eta ): \\varepsilon < \\nu _\\phi (\\eta )< \\delta \\rbrace \\text{ and } \\mu _\\phi (D_-) >1 \\rbrace $ .", "This is the same event as that of Proposition REF , but phrased in terms of the embedded field and curve.", "By Lemma REF , Proposition REF is equivalent to $(M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[E_{\\delta , \\varepsilon }] = (1+o_{\\delta ,\\varepsilon }(1)) \\frac{\\overline{R}(\\alpha )}{2(Q-\\alpha )^2} |\\mathcal {L}_\\kappa ^\\alpha | \\log \\varepsilon ^{-1},$ where the error term $o_{\\delta ,\\varepsilon }(1)$ satisfies $\\lim _{\\delta \\rightarrow 0} \\lim _{\\varepsilon \\rightarrow 0} o_{\\delta ,\\varepsilon }(1)=0$ .", "To prove (REF ), we use the following description of the law of $\\phi |_{\\mathcal {C}_+}$ when $\\phi $ is sampled from $M$ .", "For $t 
\\ge 0$ let $X_s$ be the average of $\\phi $ on $[s,s+2\\pi i]/{\\sim }$ .", "Lemma 8.11 Conditioned on $\\phi |_{\\mathcal {C}_-}$ , we have $\\phi |_{\\mathcal {C}_+} \\stackrel{d}{=} \\phi _0 + \\mathfrak {h} - (Q-\\alpha ) \\operatorname{Re}\\cdot $ , where $\\phi _0$ is a zero boundary GFF on $\\mathcal {C}_+$ and $\\mathfrak {h}$ is a harmonic function determined by $\\phi |_{\\mathcal {C}_-}$ whose average on $[s, s+2\\pi i]/{\\sim }$ is constant as $s>0$ varies.", "In particular, conditioned on $X_0$ , the conditional law of $(X_s)_{s \\ge 0}$ is Brownian motion with initial value $X_0$ and downward drift of $-(Q-\\alpha )$ .", "This is the sphere analog of [1], and the proof is identical so we omit it.", "Define for $x,y>0$ the random times $\\sigma _x = \\inf \\lbrace s > 0 \\: : \\: X_s < \\frac{2}{\\gamma }\\log x\\rbrace \\vee 0, \\quad \\tau _y = \\sup \\lbrace s > 0 \\: : \\: X_s > \\frac{2}{\\gamma }\\log y\\rbrace \\vee 0.$ Lemma 8.12 $M^\\#$ -a.e.", "the field $\\phi $ satisfies $\\lim _{y \\rightarrow 0} \\frac{\\sigma _y}{\\log y^{-1}} = \\lim _{y \\rightarrow 0} \\frac{\\tau _y}{\\log y^{-1}} = \\frac{2}{\\gamma (Q-\\alpha )}$ .", "Given Lemma REF , this is a straightforward Brownian motion fact.", "For $C>0$ define the event $F_{x,y,C} = \\lbrace (\\phi , \\mathbf {t}, \\eta ^0) \\: : \\: \\mathbf {t} \\in (\\sigma _x - C, \\tau _y) \\rbrace .$ We first bound $(M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[E_{\\delta , \\varepsilon }]$ from above by comparing $E_{\\delta , \\varepsilon }$ to $F_{\\delta ^{1-\\zeta } ,\\varepsilon ^{1+\\zeta }, C}$ .", "Lemma 8.13 For fixed $x>0, C \\ge 0$ we have as $y \\rightarrow 0$ $(M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[F_{x, y, C}] = (1+o_y(1)) |M| |\\mathcal {L}_\\kappa ^\\alpha | \\frac{2 \\log y^{-1}}{\\gamma (Q-\\alpha )},$ Moreover, $(M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[E_{\\delta , \\varepsilon }] \\le (1+o_{\\delta ,\\varepsilon }(1)) |M| |\\mathcal {L}_\\kappa ^\\alpha | \\frac{2\\log \\varepsilon ^{-1}}{\\gamma (Q-\\alpha )}.$ Note that (REF ) is equivalent to $M^\\#[\\tau _y - \\sigma _x + C] = (1+o_y(1)) \\frac{\\frac{2}{\\gamma }\\log y^{-1}}{Q-\\alpha }$ .", "The upper bound follows from standard Brownian motion facts.", "Indeed Lemma REF states that the field average process $X_s$ has random starting value $X_0$ and then evolves as Brownian motion with drift $-(Q-\\alpha )$ , so letting $Y_s$ be Brownian motion with starting value $\\frac{2}{\\gamma }\\log x$ and drift $-(Q-\\alpha )$ , and letting $T>0$ be the time $Y_s$ first hits $\\frac{2}{\\gamma }\\log y$ , we have $M^\\#[\\tau _y - \\sigma _x] \\le \\mathbb {E}[T]$ , and it is well known that $\\mathbb {E}[T] = (1+o_y(1)) \\frac{\\frac{2}{\\gamma }\\log y^{-1}}{Q-\\alpha }$ .", "The lower bound follows from Lemma REF .", "This gives (REF ) For (REF ), it suffices to show that there exists some $C>0$ such that for any fixed $\\zeta > 0$ , as $\\varepsilon \\rightarrow 0$ then $\\delta \\rightarrow 0$ , $(M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[F_{\\delta ^{1-\\zeta }, \\varepsilon ^{1+\\zeta },C} \\mid E_{\\delta , \\varepsilon }] = 1-o_{\\delta ,\\varepsilon }(1),$ since this and (REF ) imply $(M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[E_{\\delta , \\varepsilon }] \\le (1+o_{\\delta ,\\varepsilon }(1)(M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[F_{\\delta ^{1-\\zeta }, \\varepsilon ^{1+\\zeta },C}] = (1+o_{\\delta ,\\varepsilon }(1)) |M| |\\mathcal {L}_\\kappa ^\\alpha | 
\\frac{(1+\\zeta )\\frac{2}{\\gamma }\\log y^{-1}}{Q-\\alpha },$ and we can send $\\zeta \\rightarrow 0$ to conclude.", "Sample $(\\phi , \\mathbf {t}, \\eta ^0)$ from $M\\times dt \\times \\mathcal {L}_\\kappa ^\\alpha $ and condition on the quantum length of $\\eta ^0$ being $\\ell > 0$ .", "We will show that the conditional probability of $F_{\\ell ^{1-\\zeta }, \\ell ^{1+\\zeta }, C}$ is $1-o_\\ell (1)$ where $\\lim _{\\ell \\rightarrow 0} o_\\ell (1) = 0$ ; this clearly implies (REF ).", "Let $\\mathcal {C}_{\\eta ^0}^+$ be the connected component of $\\mathcal {C}\\backslash \\eta ^0$ containing $+\\infty $ .", "By Lemma REF and Proposition REF the conditional law of the quantum surface $(\\mathcal {C}_{\\eta ^0}^+, \\hat{\\phi }, \\eta ^0, +\\infty )$ is $\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell )^\\#$ , where $\\hat{\\phi }:= \\phi (\\cdot - \\mathbf {t})$ .", "Although Lemma REF is stated for embeddings of $\\mathrm {QD}_{1,0}(\\ell )^\\#$ , the same argument holds for embeddings of $\\mathcal {M}_1^\\mathrm {disk}(\\alpha ; \\ell )^\\#$ .", "Therefore, there is a constant $C$ not depending on $\\ell $ such that, for a fixed smooth function $g$ with support in $\\lbrace z \\in \\mathcal {C}: \\operatorname{Re}z \\in [C, C+1]\\rbrace $ which is constant on vertical lines and satisfies $\\int g(z) \\,dz = 1$ , we have $\\mathbb {P}[(\\hat{\\phi }, g) \\in ((1+\\zeta ) \\frac{2}{\\gamma }\\log \\ell , (1-\\zeta ) \\frac{2}{\\gamma }\\log \\ell )] = 1 - o_\\ell (1).$ Therefore, with probability $1-o_\\ell (1)$ there is some $s \\in [C,C+1]$ for which the average of $\\hat{\\phi }$ on $[s, s+2\\pi i]$ lies in $((1+\\zeta ) \\frac{2}{\\gamma }\\log \\ell , (1-\\zeta ) \\frac{2}{\\gamma }\\log \\ell )$ , i.e.", "$X_{\\mathbf {t} + s} \\in ((1+\\zeta ) \\frac{2}{\\gamma }\\log \\ell , (1-\\zeta ) \\frac{2}{\\gamma }\\log \\ell )$ .", "This inclusion implies $\\mathbf {t} + s \\in (\\sigma _{\\ell ^{1-\\zeta }}, \\tau _{\\ell ^{1+\\zeta }})$ , so $\\mathbf {t} \\in (\\sigma _{\\ell ^{1-\\zeta }} - C - 1, \\tau _{\\ell ^{1+\\zeta }})$ .", "Thus, $F_{\\ell ^{1-\\zeta }, \\ell ^{1+\\zeta }, C+1}$ occurs with probability $1-o_\\ell (1)$ .", "Now, we obtain the lower bound for $(M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[E_{\\delta , \\varepsilon }]$ by coupling $\\phi $ with a GFF.", "Lemma 8.14 We have $(M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[E_{\\delta , \\varepsilon }] \\ge (1-o_{\\delta ,\\varepsilon }(1)) |M| |\\mathcal {L}_\\kappa ^\\alpha | \\frac{(1-\\zeta ) \\frac{2}{\\gamma }\\log \\varepsilon ^{-1}}{Q-\\alpha }.$ We first construct a coupling $h$ sampled from $P_\\mathcal {C}$ with $\\phi $ sampled from $M^\\#$ such that $\\phi |_{\\mathcal {C}_+} = h_2|_{\\mathcal {C}_+} + X_{\\operatorname{Re}\\cdot } + \\hat{\\mathfrak {h}} $ where $h_2$ is the projection of $h$ to $\\mathcal {H}_2(\\mathcal {C})$ , $X_s$ is the average of $\\phi $ on $[s, s+2\\pi i]/{\\sim }$ , $\\hat{\\mathfrak {h}}$ is a random harmonic function on $\\mathcal {C}_+$ which has average zero on $[s, s+2\\pi i]/{\\sim }$ for all $s > 0$ , and $h_2$ is independent of $(X_s)_{s \\ge 0}$ .", "Indeed, [51] says that $h$ can be decomposed via $h|_{\\mathcal {C}_+} = h_0 + \\widetilde{\\mathfrak {h}}$ where $h_0$ is a zero boundary GFF on $\\mathcal {C}_+$ and $\\widetilde{\\mathfrak {h}}$ is a harmonic function on $\\mathcal {C}_+$ whose average on $[s, s+2\\pi i]/{\\sim }$ is zero for all $s > 0$ .", "Similarly, Lemma REF gives a decomposition $\\phi |_{\\mathcal {C}_+} = \\phi _0 + 
\\mathfrak {h} - (Q-\\alpha ) \\operatorname{Re}\\cdot $ where $\\phi _0$ is a zero boundary GFF on $\\mathcal {C}_+$ and $\\mathfrak {h}$ is a harmonic function on $\\mathcal {C}_+$ whose average on $[s, s+2\\pi i]/{\\sim }$ is the same for all $s>0$ .", "We couple $h$ with $\\phi $ so that $h_0 = \\phi _0$ , then the above properties are satisfied.", "Independently sample $\\eta ^0$ from $(\\mathcal {L}_\\kappa ^\\alpha )^\\#$ .", "We write $\\mathbb {P}$ to denote the probability measure on $(\\phi , h, \\eta ^0)$ and $\\mathbb {E}$ for the expectation over $\\mathbb {P}$ .", "Let $\\zeta > 0$ be a parameter we send to zero at the end.", "The properties of $\\hat{\\mathfrak {h}}$ guarantee that a.s. $\\sup _{\\operatorname{Re}z > 1} |\\hat{\\mathfrak {h}}(z)| < \\infty $ , hence $\\mathbb {P}[\\sup _{\\operatorname{Re}z > 1} |\\hat{\\mathfrak {h}}(z)| < \\zeta \\cdot \\frac{2}{\\gamma }\\log \\delta ^{-1}] = 1 - o_{\\delta ,\\varepsilon }(1)$ .", "Also, setting $I := ((1+3\\zeta )\\frac{2}{\\gamma (Q-\\alpha )} \\log \\delta ^{-1} + \\delta ^{-1}, (1-3\\zeta ) \\frac{2}{\\gamma (Q-\\alpha )} \\log \\varepsilon ^{-1})$ , by Lemma REF we have $I \\subset ( \\tau _{\\delta ^{1+2\\zeta }} + \\delta ^{-1} , \\sigma _{\\varepsilon ^{1-2\\zeta }})$ with probability $1-o_{\\delta ,\\varepsilon }(1)$ .", "Finally, $\\mathbb {P}[ \\inf _{z \\in \\eta ^0} \\operatorname{Re}z > -\\delta ^{-1}] = 1-o_{\\delta ,\\varepsilon }(1)$ .", "Let $G$ be the intersection of these three good events, so $\\mathbb {P}[G] = 1-o_{\\delta ,\\varepsilon }(1)$ .", "On $G$ , if $t \\in I$ , then $X_{\\operatorname{Re}z} + \\widehat{h}(z) \\in ((1-\\zeta )\\frac{2}{\\gamma }\\log \\varepsilon , (1+\\zeta ) \\frac{2}{\\gamma }\\log \\delta )$ for all $z$ in a neighborhood of $\\eta ^0 + t$ .", "Thus $(M^\\# \\times dt \\times (\\mathcal {L}_\\kappa ^\\alpha )^\\#)[E_{\\delta , \\varepsilon }] \\ge \\mathbb {E}[1_G \\int _I 1_{\\nu _\\phi (\\eta ^0+t) \\in (\\varepsilon , \\delta )} \\, dt ] &\\ge \\mathbb {E}[1_G \\int _I 1_{\\nu _{h_2}(\\eta ^0+t) \\in (\\varepsilon ^{\\zeta }, \\delta ^{-\\zeta })} \\, dt ]\\\\&\\ge \\mathbb {E}[\\int _I 1_{\\nu _{h_2}(\\eta ^0+t) \\in (\\varepsilon ^{\\zeta }, \\delta ^{-\\zeta })} \\, dt ] - o_{\\delta ,\\varepsilon }(1)|I|.$ In our coupling, $h_2$ and $\\eta ^0$ are independent, and $h_2 \\stackrel{d}{=} h_2(\\cdot - t)$ for each $t \\in \\mathbb {R}$ , so $\\mathbb {E}[\\int _I 1_{\\nu _{h_2}(\\eta ^0+t) \\in (\\varepsilon ^{\\zeta }, \\delta ^{-\\zeta })} \\, dt ] = |I| \\mathbb {P}[\\nu _{h_2}(\\eta ^0) \\in (\\varepsilon ^{\\zeta }, \\delta ^{-\\zeta })] - o_{\\delta ,\\varepsilon }(1)|I| = (1-o_{\\delta ,\\varepsilon }(1))|I|,$ hence $ (M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[E_{\\delta , \\varepsilon }] \\ge (1-o_{\\delta ,\\varepsilon }(1))|M||\\mathcal {L}_\\kappa ^\\alpha | |I| \\ge (1-o_{\\delta ,\\varepsilon }(1))|M||\\mathcal {L}_\\kappa ^\\alpha | (1-4\\zeta ) \\frac{2\\log \\varepsilon ^{-1}}{\\gamma (Q-\\alpha )}.$ [Proof of Proposition REF ] By Lemmas REF and REF , we have $(M \\times dt \\times \\mathcal {L}_\\kappa ^\\alpha )[E_{\\delta , \\varepsilon }] = (1+o_{\\delta ,\\varepsilon }(1)) |M| |\\mathcal {L}_\\kappa ^\\alpha | \\frac{\\frac{2}{\\gamma }\\log \\varepsilon ^{-1}}{Q-\\alpha }.$ By Lemma REF , we have $|M| = \\int _1^\\infty \\frac{1}{2} \\overline{R}(\\alpha ) a^{\\frac{2}{\\gamma }(\\alpha -Q)-1} \\, da = \\frac{\\gamma \\overline{R}(\\alpha )}{4 (Q-\\alpha )}$ .", "Therefore we have (REF )." 
], [ "Background on CLE coupled with LQG", "In this section we explain how Propositions REF and REF follow from existing literature." ], [ "Proof of Proposition ", "Proposition REF can be easily reduced to the following statement.", "Proposition A.1 In the setting of Proposition REF , we can find a random point $p\\in \\eta $ such that, conditioning on $(A_\\eta , h, \\Gamma |_{A_\\eta }, p)/{\\sim _\\gamma }$ , the conditional law of $(D_\\eta , h, z, p)/{\\sim _\\gamma }$ is $\\mathrm {QD}_{1,1}(\\ell _h(\\eta ))$ .", "[Proof of Proposition REF given Proposition REF ] Suppose $(h, \\Gamma , p,z)$ satisfies Proposition REF .", "Conditioning on $(h, \\Gamma , p,z)$ , let $U$ be a uniform random variable on $(0,1)$ .", "Let $w$ be the point of $\\eta $ such that the counterclockwise arc on $\\eta $ from $p$ to $w$ is of $\\ell _h$ -length $U\\ell _h(\\eta )$ .", "By definition, $w$ is sampled from the probability measure proportional to $\\ell _h(\\eta )$ .", "By Proposition REF and the re-rooting invariance of $\\mathrm {QD}_{1,1}(\\ell _h(\\eta ))$ , conditioning on $(A_\\eta , h, \\Gamma |_{A_\\eta }, p)/{\\sim _\\gamma }$ and $U$ , the conditional law of $(D_\\eta , h, z, w)/{\\sim _\\gamma }$ is $\\mathrm {QD}_{1,1}(\\ell _h(\\eta ))$ .", "Since $(A_\\eta , h, \\Gamma |_{A_\\eta }, w)/{\\sim _\\gamma }$ is determined by $(A_\\eta , h, \\Gamma |_{A_\\eta }, p)/{\\sim _\\gamma }$ and $U$ , we are done.", "To find the desired $p$ in Proposition REF , we use the conformal percolation interface (CPI) within a CLE carpet introduced by Miller, Sheffield and Werner [54].", "Suppose $\\Gamma $ is a $\\operatorname{CLE}_\\kappa $ on a Jordan domain $D$ (i.e.", "$\\partial D$ is a simple curve) for some $\\kappa \\in (\\frac{8}{3},4)$ .", "Given two boundary points $x,y$ , a (chordal) CPI for $\\Gamma $ from $x$ to $y$ is a random curve from $x$ to $y$ coupled with $\\Gamma $ that does not enter the interior of any region surrounded by a loop of $\\Gamma $ (but it can touch the loops).", "We also need to specify how a CPI proceeds upon hitting a loop of $\\Gamma $ on its way from $x$ to $y$ .", "We require that it always leaves the loop to its right.", "In the terminology of [54], this corresponding to CPI with $\\beta =1$ .", "The marginal law of a CPI is $\\operatorname{SLE}_{\\kappa ^{\\prime }}(\\kappa ^{\\prime }-6)$ on $D$ from $x$ to $y$ where $\\kappa ^{\\prime }=16/\\kappa $ [54] and the force point is on the right of $x$ .", "In particular, a CPI is a non-simple curve.", "Intuitively, the chordal CPI describes the chordal interface of a percolation on a CLE carpet.", "It is characterized by certain conformal invariance and Markov properties which are consistent with this intuition; see [54].", "We will not review the full details but will rely on an analogous Markov property that CPI satisfies on a quantum disk background.", "This was established in [55], which we review now.", "Fix $\\kappa \\in (8/3,4)$ and $\\gamma =\\sqrt{\\kappa }$ .", "For $L_0,R_0>0$ , suppose $(D,h,\\Gamma , x)$ be an embedding of a sample from $\\mathrm {QD}_{0,1}^{\\#} (L_0+R_0)\\otimes \\operatorname{CLE}_\\kappa $ .", "Let $y$ be the point on $\\partial D$ such that the quantum length of the counterclockwise arc from $x$ to $y$ is $R_0$ .", "Conditioning on $(h,\\Gamma , x,y)$ , sample a CPI $\\eta ^{\\prime }$ within the carpet of $\\Gamma $ from $x$ to $y$ .", "Since the law of $\\eta ^{\\prime }$ is a $\\operatorname{SLE}_{\\kappa ^{\\prime }}(\\kappa ^{\\prime }-6)$ , there is a quantum natural time 
parametrization for $\\eta ^{\\prime }$ with respect to $h$  [19], which we use throughout.", "Under this parametrization $\\eta ^{\\prime }$ has a finite duration $T$ .", "For a fixed time $t>0$ , on the event that $t\\le T$ , let $\\widetilde{\\eta }^{\\prime }_t$ be the union of $\\eta ^{\\prime }[0,t]$ and all the loops of $\\Gamma $ touched by $\\eta ^{\\prime }[0,t]$ .", "If $t<T$ , let $D_t$ be the simply connected component of $D\\setminus \\widetilde{\\eta }^{\\prime }_t$ which contains $y$ on its boundary.", "For a fixed $t$ , both $D_t$ and the interior of $D\\setminus D_t$ are Jordan domains a.s. We write the interior $D\\setminus D_t$ as $U_t$ .", "The interface between $D_t$ and $U_t$ are $\\operatorname{SLE}_\\kappa $ types curves, on which there is a well defined quantum length measure.", "Figure: We color D t D_t blue and U t U_t green.", "(a): At all but countably many times tt, we have D t =D t - D_t = D_{t^-}.", "To simplify (b)–(d) we omit their loops.", "(b): A loop discovery time.", "(c), (d): A splitting time.", "At times when the left boundary length L t L_t has a downward jump, there are two possible topologies; we do not illustrate the similar cases when the split is to the right.The following Markov property of CPI on quantum disks was proved in [55], although it was not explicitly stated.", "Proposition A.2 ([55]) For a fixed $t>0$ , on the even that $t<T$ , let $L_t$ and $R_t$ be the quantum lengths of the clockwise and counterclockwise arcs from $\\eta ^{\\prime } (t)$ to $y$ of $\\partial D_t$ .", "Conditioning on the decorated quantum surface $(U_t, h,\\Gamma |_{U_t}, \\eta ^{\\prime }|_{[0,t]})/{\\sim _\\gamma }$ and $(L_t,R_t)$ , the conditional law of $(D_t, h, \\Gamma |_{D_t},y)/{\\sim _\\gamma }$ is $\\mathrm {QD}_{0,1}(L_t+R_t)^\\# \\otimes \\operatorname{CLE}_\\kappa $ .", "This proposition is essentially [55], except that we condition on more information than they explicitly stated.", "But the argument carries over directly to our setting.", "The point $p$ in Proposition REF that we will find is a point where a CPI hits a loop.", "Therefore we need a stronger variant of the Markov property in Proposition REF at certain random times which we now define.", "For each $t\\in (0,T)$ , let $D_{t^-}$ be the interior of $\\cap _{s<t} D_s$ .", "According to [55], for each fixed time $t$ , on the event that $t<T$ , almost surely $D_{t^-}=D_t$ ; see Figure REF (a).", "But there exist countably many times where $D_{t^-}\\ne D_t$ .", "In this case, there are two scenarios: The point $\\eta ^{\\prime }(t)$ is on a loop of $\\Gamma $ .", "In this case the interior of $D_{t^-}\\setminus D_t$ is the Jordan domain enclosed by the this loop.", "But $D_t$ is not a Jordan domain since $\\eta ^{\\prime }(t)$ corresponds to two points on $\\partial D_t$ .", "See Figure REF (b).", "We call $t$ a loop discovery time.", "The point $\\eta ^{\\prime }(t)$ is not on a loop of $\\Gamma $ .", "In this case, both $D_t$ and $D_{t^-}\\setminus D_t$ are Jordan domains, and their boundaries intersect at the single point $\\eta ^{\\prime }(t)$ .", "See Figure REF (c)–(d).", "We call $t$ a splitting time.", "In both cases, we let $\\mathcal {B}_t$ be the interior of $D_{t^-}\\setminus D_t$ and $U_{t^-}$ be the interior of $D\\setminus D_{t^{-}}$ .", "Then $\\partial \\mathcal {B}_t\\setminus \\partial D$ is an $\\operatorname{SLE}_\\kappa $ type curve.", "By definition, $\\partial \\mathcal {B}_t$ is a loop in $\\Gamma $ if and only if $t$ is a loop discovery time.", "Recall $(L_t,R_t)$ 
from Proposition REF .", "If $t$ if it is a loop discovery time, then $R_t$ has an upward jump.", "If $t$ is a splitting time, then either $L_t$ or $R_t$ has a downward jump.", "In both cases, the size of the jump equals the quantum length of $\\partial \\mathcal {B}_t$ , which we denote by $ X_t$ .", "We now state the stronger version of Proposition REF .", "Proposition A.3 Fix $\\varepsilon >0$ and a positive integer $n$ .", "Let $\\tau $ be the $n$ -th time such that $D_{t^-}\\ne D_t$ and the quantum length $ X_t$ of $\\partial \\mathcal {B}_t$ is larger than $\\varepsilon $ .", "If this time never occurs, set $\\tau =\\infty $ .", "Conditioning on $\\tau <\\infty $ , the decorated quantum surface $(U_{\\tau ^-}, h,\\Gamma |_{U_{\\tau ^-}}, \\eta ^{\\prime }|_{[0,\\tau ]})/{\\sim _\\gamma }$ , the indicator $1_{\\lbrace \\partial \\mathcal {B}_\\tau \\textrm { is a loop} \\rbrace }$ , and the quantum lengths $X_\\tau $ of $\\partial \\mathcal {B}_\\tau $ and $L_\\tau ,R_\\tau $ of the two arcs on $D_\\tau $ , the conditional law of $(D_\\tau , h, \\Gamma |_{D_\\tau }, y)/{\\sim _\\gamma }$ and $(\\mathcal {B}_\\tau , h, \\Gamma |_{\\mathcal {B}_\\tau }, \\eta ^{\\prime }(\\tau ))/{\\sim _\\gamma }$ is given by independent samples from $\\mathrm {QD}_{0,1}(L_\\tau +R_\\tau )^\\# \\otimes \\operatorname{CLE}_\\kappa $ and $\\mathrm {QD}_{0,1}( X_\\tau )^\\# \\otimes \\operatorname{CLE}_\\kappa $ , respectively.", "For a fixed $t>0$ , on the event that $t \\le T$ , consider the ordered collection of decorated quantum surfaces $\\lbrace (\\mathcal {B}_s, h, \\Gamma |_{\\mathcal {B}_s} ,\\eta ^{\\prime }(s))/{\\sim _\\gamma } : s\\le t \\textrm { and } D_{s^{-}}\\ne D_s \\rbrace $ .", "It was proved in [55] that conditioning on $(D_t, h, \\Gamma |_{D_t}, \\eta ^{\\prime }(t), y)/{\\sim _\\gamma }$ , and the ordered information of the quantum lengths of their boundaries and whether their times are loop discovery or splitting, the conditional law of these decorated quantum surfaces are independent $\\operatorname{CLE}_\\kappa $ decorated quantum disks with given boundary lengths.", "To see why this assertion follows from [55], we note that Propositions 3.1 and 3.5 of [55] yield the corresponding assertion for the analogous case of CLE on the quantum half-plane.", "The pinching argument of [55] then gives this assertion.", "We claim that conditioning on $\\tau <\\infty $ , $(U_{\\tau ^-}, h,\\Gamma |_{U_{\\tau ^-}}, \\eta ^{\\prime }|_{[0,\\tau ]})/{\\sim _\\gamma }$ , $1_{\\lbrace \\partial \\mathcal {B}_\\tau \\textrm { is a loop} \\rbrace }$ , and $X_\\tau $ , $L_\\tau ,R_\\tau $ , the conditional law of $(\\mathcal {B}_\\tau , h, \\Gamma |_{\\mathcal {B}_\\tau }, \\eta ^{\\prime }(\\tau ))/{\\sim _\\gamma }$ is $\\mathrm {QD}_{0,1}( X_\\tau )^\\# \\otimes \\operatorname{CLE}_\\kappa $ .", "Fix a large $k>0$ .", "Let $s_k$ be the largest integer multiple of $2^{-k}$ smaller than $\\tau $ .", "Let $\\mathcal {U}_t=(U_t, h,\\Gamma |_{U_t}, \\eta ^{\\prime }|_{[0,t]})/{\\sim _\\gamma }$ and $\\mathcal {D}_t=(D_t, h|_{D_t}, \\Gamma |_{D_t},y)/{\\sim _\\gamma }$ .", "For a fixed $j$ , by Proposition REF , conditioning on $\\mathcal {U}_{j2^{-k}}$ and $(L_{j2^{-k}},R_{j2^{-k}})$ , the conditional law of $\\mathcal {D}_{j2^{-k}}$ is $\\mathrm {QD}_{0,1}(L_{j2^{-k}}+R_{j2^{-k}})^\\# \\otimes \\operatorname{CLE}_\\kappa $ .", "Note that $\\lbrace s_k=j2^{-k}\\rbrace $ is determined by $\\mathcal {U}_{j2^{-k}}$ and the quantum lengths of the boundaries of elements in $\\lbrace (\\mathcal {B}_s, h, 
\\Gamma |_{\\mathcal {B}_s} ,\\eta ^{\\prime }(s))/{\\sim _\\gamma } : j2^{-k} \\le s\\le (j+1)2^{-k} \\textrm { and } D_{s^{-}}\\ne D_s \\rbrace $ .", "Applying the assertion of the first paragraph to $\\mathcal {D}_{j2^{-k}}$ with $T=2^{-k}$ , we see that conditioning on $\\tau < \\infty $ , $\\mathcal {U}_{s_k}$ , $\\lbrace s_k=j2^{-k}\\rbrace $ , $1_{\\lbrace \\partial \\mathcal {B}_{\\tau } \\textrm { is a loop} \\rbrace }$ , $X_{\\tau }$ , $L_{s_k+2^{-k}}$ and $R_{s_k+2^{-k}}$ , the conditional law of $(\\mathcal {B}_{\\tau }, h, \\Gamma |_{\\mathcal {B}_{\\tau }}, \\eta ^{\\prime }(\\tau ))/{\\sim _\\gamma }$ is $\\mathrm {QD}_{0,1}( X_{\\tau })^\\# \\otimes \\operatorname{CLE}_\\kappa $ .", "Varying $j$ , we can remove the condition $\\lbrace s_k=j2^{-k}\\rbrace $ .", "Since almost surely $\\mathcal {U}_{s_k}\\rightarrow (U_{\\tau ^-}, h,\\Gamma |_{U_{\\tau ^-}}, \\eta ^{\\prime }|_{[0,\\tau ]})/{\\sim _\\gamma }$ and $(L_{s_k+2^{-k}},R_{s_k+2^{-k}})\\rightarrow (L_\\tau ,R_\\tau )$ as $k\\rightarrow \\infty $ , we have proved the desired claim.", "It remains to show that conditioning on $\\tau <\\infty $ , $(U_{\\tau ^-}, h,\\Gamma |_{U_{\\tau ^-}}, \\eta ^{\\prime }|_{[0,\\tau ]})/{\\sim _\\gamma }$ , $1_{\\lbrace \\partial \\mathcal {B}_\\tau \\textrm { is a loop} \\rbrace }$ , $X_\\tau $ , $L_\\tau ,R_\\tau $ and $(\\mathcal {B}_\\tau , h, \\Gamma |_{\\mathcal {B}_\\tau }, \\eta ^{\\prime }(\\tau ))/{\\sim _\\gamma }$ , the conditional law of $(D_\\tau , h, \\Gamma |_{D_\\tau }, y)/{\\sim _\\gamma }$ is $\\mathrm {QD}_{0,1}(L_\\tau +R_\\tau )^\\# \\otimes \\operatorname{CLE}_\\kappa $ .", "This follows from a similar but easier argument: we consider the smallest multiple of $2^{-k}$ larger than $\\tau $ and use the Markov property in Proposition REF at this time.", "We omit the details.", "[Proof of Proposition REF ] For $a>0$ , let $(D,h,\\Gamma ,x)$ be an embedding of a sample from $\\mathrm {QD}_{0,1} (a)^{\\#}\\otimes \\operatorname{CLE}_\\kappa $ .", "Now we reweight $\\mathrm {QD}_{0,1} (a)^{\\#}\\otimes \\operatorname{CLE}_\\kappa $ by $\\mu _h(D)$ and sample a point $z$ according to the probability measure proportional to $\\mu _h$ .", "This way, the law of $(D,h,\\Gamma ,z,x)$ is $\\mathrm {QD}_{1,1} (a)^{\\#}\\otimes \\operatorname{CLE}_\\kappa $ as in Propositions REF and REF .", "Let $y$ be the point on $\\partial D$ such that both the two arcs between $x$ to $y$ have quantum length $a/2$ , and sample a CPI $\\eta ^{\\prime }$ from $x$ to $y$ , parametrized by quantum natural time.", "Let $t_0$ be the time such $z\\in \\mathcal {B}_{t_0}$ and set $p_0=\\eta ^{\\prime }(t_0)$ .", "Consider $\\tau $ and $\\mathcal {B}_\\tau $ as defined in Proposition REF with this choice of $(D,h,\\Gamma , x,y)$ .", "Then on the event that $t_0=\\tau $ , namely $z\\in \\mathcal {B}_\\tau $ , conditioning on $(U_{\\tau ^-},h,\\Gamma |_{U_{\\tau ^-}}, \\eta ^{\\prime }|_{[0,\\tau ]})/{\\sim _\\gamma }$ , $1_{\\lbrace \\partial \\mathcal {B}_t \\textrm { is a loop} \\rbrace }$ , the quantum length $ X_\\tau $ of $\\partial \\mathcal {B}_\\tau $ and $(D_\\tau , h, \\Gamma |_{D_\\tau }, y)/{\\sim _\\gamma }$ , the conditional law of $(\\mathcal {B}_\\tau , h, \\Gamma |_{\\mathcal {B}_\\tau }, z, \\eta ^{\\prime }(\\tau ))/{\\sim _\\gamma }$ is $\\mathrm {QD}_{1,1} ( X_\\tau )^{\\#}\\otimes \\operatorname{CLE}_\\kappa $ , where we have $\\mathrm {QD}_{1,1}$ instead of $\\mathrm {QD}_{0,1}$ because of area weighting.", "This means that conditioning on the quantum intrinsic information on 
$D\\setminus \\mathcal {B}_\\tau $ , the conditional law of $(\\mathcal {B}_\\tau , h, \\Gamma |_{\\mathcal {B}_\\tau }, z, \\eta ^{\\prime }(\\tau ))/{\\sim _\\gamma }$ is a CLE decorated marked quantum disk with given boundary length.", "By varying $\\varepsilon $ and $n$ in Proposition REF , the same holds with $(\\mathcal {B}_\\tau , h|_{\\mathcal {B}_\\tau }, \\Gamma |_{\\mathcal {B}_\\tau }, z, \\eta ^{\\prime }(\\tau ))/{\\sim _\\gamma }$ replaced by $(\\mathcal {B}_{t_0}, h, \\Gamma |_{\\mathcal {B}_{t_0}}, z, \\eta ^{\\prime }(t_0))/{\\sim _\\gamma }$ .", "If $t_0$ is a loop discovery time, then $\\partial \\mathcal {B}_{t_0}$ is the loop $\\eta $ surrounding $z$ and $p_0=\\eta ^{\\prime }(t_0)$ is the desired point we need for Proposition REF .", "Otherwise, we set $D_1=\\mathcal {B}_{t_0}$ and construct $t_1$ , $\\mathcal {B}_{t_1}$ and $ p_1$ as above with $(D,h, \\Gamma , x)$ replaced by $(D_1,h, \\Gamma |_{D_1}, p_0)$ .", "If $p_1\\in \\eta $ then we are done by setting $p=p_1$ .", "Otherwise, we iterate this procedure and construct $p_2,p_3,\\cdots $ .", "We claim that there must be a finite $k$ such that $p_k\\in \\eta $ , hence we can set $p=p_k$ and conclude the proof.", "To see the finiteness of the iteration, recall the set $(\\widetilde{\\eta }^{\\prime }_t)_{t\\ge [0,T]}$ defined from a CPI $\\eta ^{\\prime }$ .", "We now require that once $\\eta ^{\\prime }$ hits a loop, it first finishes tracing the loop counterclockwise and then proceeds in its own track.", "This turns $\\widetilde{\\eta }^{\\prime }_T$ into the trace of a non-self-crossing curve sharing the same endpoints as $\\eta ^{\\prime }$ .", "According to [54], viewed as a curve the law of $\\widetilde{\\eta }^{\\prime }$ is a chordal $\\operatorname{SLE}_\\kappa (\\kappa -6)$ as defined in [67].", "The curve $\\eta ^{\\prime }$ is the so-called trunk of $\\widetilde{\\eta }^{\\prime }$ .", "By the target invariance of $\\operatorname{SLE}_\\kappa (\\kappa -6)$ , if we iterate the above chordal CPI exploration towards $z$ and keep track of the chordal $\\operatorname{SLE}_\\kappa (\\kappa -6)$ 's along the way, we get a radial $\\operatorname{SLE}_\\kappa (\\kappa -6)$ on $D$ from $x$ to $z$ .", "From the relation between the $\\operatorname{SLE}_\\kappa (\\kappa -6)$ exploration tree and $\\operatorname{CLE}_\\kappa $ in [67], after finite many iterations we must reach the loop $\\eta $ at a point $p$ , the place where the radial $\\operatorname{SLE}_\\kappa (\\kappa -6)$ starts exploring $\\eta $ .", "We conclude this section by briefly explaining how to define a canonical version of $\\mathrm {QA}(a,b)$ for all $a, b>0$ , as promised in Remark REF .", "Suppose we are in the setting of the proof of Proposition REF , and we assume we are in the setting where $t_0$ is a loop discovery time, so we are in the scenario of Figure REF (b); the general case follows easily by iteration.", "In this case, let $x^-_{t_0}$ and $x^+_{t_0}$ be the left and right endpoints of $(\\partial D) \\cap (\\partial U_{t_0})$ , and let $(a_1, a_2, a_3)$ be the quantum lengths of the three boundary arcs of $(U_{t_0}, h, x_{t_0}^-, x_{t_0}^+, \\eta ^{\\prime }(t_0))$ in clockwise order from $\\eta ^{\\prime }(t_0)$ .", "Then the quantum annulus $(A_\\eta , h)/{\\sim }_\\gamma $ in Proposition REF is obtained by the conformal welding of $(D_{t_0}, h)/{\\sim }_\\gamma $ and $(U_{t_0}, h)/{\\sim }_\\gamma $ , where the welding pattern is specified by $(a_1, a_2, a_3)$ and the quantum length of $\\partial \\mathcal {B}_{t_0}$ .", "On 
one hand, the conditional law of $(a_1, a_2, a_3)$ given the quantum length of $\\partial \\mathcal {B}_{t_0} = b$ varies continuously in total variational distance as $b$ varies.", "On the other hand, the conditional law of $(D_{t_0}, h)/{\\sim _\\gamma }$ given $(a_1, a_2, a_3, b)$ is $\\mathrm {QD}(a + a_1 - a_2 + a_3 + b)^\\#$ .", "By [6], $\\mathrm {QD}_{0,1}(L)^\\#$ and $\\mathrm {QD}_{0,1}(L+\\varepsilon )^\\#$ can be coupled with probability $1-o_\\varepsilon (1)$ outside a neighborhood of the marked point.", "Putting this together, we can couple the conditional laws of $(A_\\eta , h)$ given $\\ell _h(\\eta ) = b$ and $\\ell _h(\\eta ) = b+\\varepsilon $ to agree outside any neighborhood of $y$ with probability $1-o_\\varepsilon (1)$ .", "By the definition of $\\mathrm {QA}(a,b)$ , this gives a canonical definition of $\\mathrm {QA}(a,b)$ and shows that it varies continuously in $b$ in a reasonable topology." ], [ "Scaling limit of $O(n)$ -loop-decorated planar maps: Proof of Proposition ", "The fact that $(b_i)_{i \\ge 1}$ and $(\\ell _i)_{i \\ge 1}$ in Proposition REF agree in law holds because they give two descriptions of the scaling limit of the same discrete model: loop lengths in the $O(n)$ -loop-decorated quadrangulation.", "This was pointed out at the end of Section 1 of [12].", "In this section we give a more detailed justification by assembling results in [12], [7], [55].", "We recall the loop-decorated quadrangulation from [12].", "A quadrangulation with boundary is a planar map where each face has degree four except a distinguished face which we call the external face.", "(Others faces are called internal faces.)", "The degree of the external face is called the perimeter.", "A (rigid) loop configuration on a quadrangulation with a boundary is a set of disjoint undirected simple closed paths in the dual map which do not visit the external face, and with the additional constraint that when a loop visits a face of q it must cross it through opposite edges.", "Let $\\mathcal {O}_p$ be the set of pairs $(\\mathbf {q}, \\Gamma )$ such that $\\mathbf {q}$ is a quadrangulation with boundary whose perimeter is $2p$ , and $\\Gamma $ is a loop configuration on $\\mathbf {q}$ .", "Similar as for CLE, for each $\\Gamma $ on $\\mathbf {q}$ , there is a collection of outermost loops whose are not surrounded by any other loop in $\\Gamma $ .", "We now recall a scaling limit result in [12].", "Recall $\\beta =\\frac{4}{\\kappa }+\\frac{1}{2}$ , let $n\\in (0,2)$ be such that $\\beta = \\frac{3}{2} + \\frac{1}{\\pi }\\arccos (n/2)$ .", "For each $(\\mathbf {q}, \\Gamma )\\in \\mathcal {O}_p$ , we let $|\\mathbf {q}|$ be the number of faces of $\\mathbf {q}$ , let $|\\Gamma |$ be the total length of all loops of $\\Gamma $ , and let $\\#\\Gamma $ be the number of loops in $\\Gamma $ .", "For $h>0,g>0$ , assign weight $w(\\mathbf {q}, \\Gamma ) = g^{|\\mathbf {q}| - |\\Gamma |} h^{|\\Gamma |} n^{\\#\\Gamma }$ to $(\\mathbf {q}, \\Gamma )$ .", "For some choices of $(g,h)$ , this gives a finite measure on $\\mathcal {O}_p$ which can be normalized into a probability measure.", "Let $\\mathfrak {M}_p$ be a sample from this measure.", "Let $L_1^p \\ge L_2^p \\ge \\dots $ be the lengths of the outermost loops of $\\mathfrak {M}_p$ in decreasing order.", "Proposition A.4 ([12]) There exists $(g,h)$ such that as $p \\rightarrow \\infty $ , the sequence $(\\frac{a}{2p} L_i^p)_{i \\ge 1}$ converges in law to $(b_i)_{i \\ge 1}$ , where $(b_i)_{i \\ge 1}$ and $a>0$ are as in Proposition REF .", "We 
refer to [12] for a description of $(g,h)$ such that Proposition REF holds.", "In the rest of the section we explain why the following proposition follows from [7], [55].", "Proposition A.5 For $(g,h)$ satisfying Proposition REF , $(\\frac{a}{2p} L_i^p)_{i \\ge 1}$ converges in law to $(\\ell _i)_{i \\ge 1}$ .", "We first describe $(\\ell _i)_{i \\ge 1}$ in terms of a growth fragmentation process considered in [55] and [7].", "We will not give the full description of the growth fragmentation but only provide enough information to specify the law of $(\\ell _i)_{i \\ge 1}$ .", "Our presentation is based on the treatment in [7].", "The description of growth fragmentation process in [55] is different in appearance.", "But as explained in [55] they correspond to the same process.", "For $\\theta = \\frac{4}{\\kappa }\\in (1, \\frac{3}{2})$ , let $\\nu _\\theta $ be the measure on $(\\frac{1}{2}, \\infty )$ defined by $\\nu _\\theta (dx) = \\frac{\\Gamma (\\theta +1)}{\\pi } \\mathopen {}\\mathclose {\\left( \\frac{1_{1/2 < x < 1}}{(x(1-x))^{\\theta +1}} + \\sin (\\pi (\\theta -\\frac{1}{2})) \\frac{1_{x>1}}{(x(x-1))^{\\theta +1}} \\right)} dx.$ Let $\\Lambda _\\theta $ be the pushforward of $\\nu _\\theta $ under the map $x \\mapsto \\log x$ .", "For $\\lambda >0$ , let $\\Psi _\\theta (\\lambda ) = \\mathopen {}\\mathclose {\\left(\\frac{\\Gamma (2-\\theta )}{2\\Gamma (2-2\\theta )\\sin (\\pi \\theta )} + \\frac{\\Gamma (\\theta +1)B_{\\frac{1}{2}}(-\\theta , 2-\\theta )}{\\pi } \\right)}\\lambda + \\int _\\mathbb {R}(e^{\\lambda y} -1 +\\lambda (1-e^{y})) \\Lambda _\\theta (dy), $ where $B_{\\frac{1}{2}}(a,b) = \\int _0^{\\frac{1}{2}} t^{a-1}(1-t)^{b-1} \\, dt$ is the incomplete Beta function; see [7].", "By [7], there is a real-valued Lévy process $(\\xi (t))_{t \\ge 0}$ whose law is described by $\\mathbb {E}[e^{\\lambda \\xi (t)}] = e^{t \\Psi _\\theta (\\lambda )}$ for all $\\lambda ,t>0$ .", "For $t\\ge 0$ , let $\\tau _t = \\inf \\lbrace r \\ge 0 \\: : \\: \\int _0^r e^{\\theta \\xi (s)} \\, ds \\ge t\\rbrace $ .", "Then $\\tau _t$ a.s. reaches $\\infty $ in finite time.", "For $a>0$ , define $Y_t^a := a \\exp (\\xi (\\tau _{ta^{-\\theta }})) \\quad \\text{for }t \\ge 0, $ with the convention that $\\xi (+\\infty ) = -\\infty $ .", "Then $(Y_t^a)_{t \\ge 0}$ is nonnegative Markov process , with initial value $Y_0^a = a$ and a.s. 
hits 0 in finite time.", "Since $\nu _\theta $ is supported on $(\frac{1}{2}, \infty )$ , the downward jumps $y \rightarrow y - \ell $ of $(Y_t^a)_{t \ge 0}$ satisfy $\ell < \frac{1}{2}y$ .", "We now relate $(Y_t^a)_{t \ge 0}$ to the CPI on quantum disks reviewed in Section REF .", "Suppose $(D,h,\Gamma ,x,y)$ is as in Proposition REF such that the law of $(D,h,\Gamma ,x)/{\sim _\gamma }$ is $\mathrm {QD}_{0,1}(a)^{\#} \otimes \operatorname{CLE}_\kappa $ .", "The chordal CPI from $x$ to $y$ can be viewed as an exploration of the carpet of $\Gamma $ such that at any splitting time it goes into the domain with the target $y$ on its boundary.", "Namely, it enters $D_t$ instead of $\mathcal {B}_t$ to continue.", "We can also alter the exploration rule at these splitting times, each of which defines a variant of CPI and corresponds to a branch in the so-called CPI branching exploration tree rooted at $x$ .", "One particular variant of CPI is such that at any splitting time it enters the domain with the largest quantum boundary length.", "In the terminology of [55], this is called the CPI with ($q=\infty $ )-exploration mechanism.", "Let $\widetilde{Y}^a_t$ be the boundary length of the to-be-explored region at time $t$ for the CPI with ($q=\infty $ )-exploration mechanism.", "Then by [55], for some constant $c>0$ not depending on $a$ , we have $(\widetilde{Y}^a_{ct})_{t \ge 0} \stackrel{d}{=} (Y^a_t)_{t \ge 0}$ .", "Moreover, the upward jumps in $\widetilde{Y}^a_t$ correspond to times when the CPI discovers a loop, and downward jumps in $\widetilde{Y}^a_t$ correspond to times when the CPI splits the to-be-explored region into two.", "Iteratively applying this fact, we get the following description of the outermost loop lengths in Proposition REF .", "Proposition A.6 ([55]) The lengths $(\ell _i)_{i \ge 1}$ agree in law with $(L_i)_{i\ge 1}$ sampled as follows.", "First sample $(Y^a_t)_{t \ge 0}$ , and let $U_1$ and $D_1$ be the sets of the sizes of upward and downward jumps of $(Y^a_t)_{t \ge 0}$ .", "Given $D_1$ , sample a collection of independent processes $S_2 = \lbrace (Y^x_t)_{t \ge 0} \: : \: x \in D_1 \rbrace $ , and let $U_2$ and $D_2$ be the sets of the sizes of all upward and downward jumps of processes in $S_2$ .", "Iteratively define $S_i, U_i, D_i$ for all $i$ , and finally set $U = \bigcup _{i \ge 1} U_i$ .", "Rank the elements of $U$ as $L_1 \ge L_2 \ge \cdots $ .", "The quantum lengths of loops discovered by the CPI with ($q=\infty $ )-exploration mechanism correspond to the sizes of the upward jumps in $\widetilde{Y}^a_t$ , which have the same law as the upward jumps in $Y^a_t$ .", "The analogous Markov properties in Propositions REF and REF still hold for this CPI.", "Now we continue this exploration mechanism to explore the rest of the CLE carpet.", "Iteratively applying this relation between the quantum lengths of the discovered loops and the upward jumps, we get Proposition REF .", "We now explain how Proposition REF follows from the scaling limit results for $\mathfrak {M}_p$ proved in [7].", "[Proof of Proposition REF ] Let ${\mathfrak {M}^{\prime }_p}$ be the planar map obtained from $\mathfrak {M}_p$ by removing all the regions surrounded by outermost loops on $\mathfrak {M}_p$ .", "For $(g,h)$ satisfying Proposition REF , ${\mathfrak {M}^{\prime }_p}$ is the so-called critical non-generic Boltzmann planar map considered in [7].", "The map ${\mathfrak {M}^{\prime }_p}$ is the discrete analog of the CLE carpet on the LQG background.", "The
discrete analog of CPI exploration for the CLE carpet is considered in [7], where it is called the branching peeling exploration.", "The exact analog of the CPI with ($q=\infty $ )-exploration mechanism is considered in [7], where the exploration is always towards the component with the largest perimeter when there is splitting.", "It was shown in [7] that the rescaled lengths of the loops discovered by this peeling process converge in law to the sizes of the upward jumps in $Y^a_t$ .", "Iterating the exploration in both the discrete and the continuum settings, we get the desired convergence.", "[Proof of Proposition REF ] Combining Propositions REF and REF , we conclude the proof." ], [ "Continuity of $\operatorname{CLE}_\kappa $ as $\kappa \uparrow 4$", "In this section we supply the continuity in $\kappa $ needed to extend Theorems REF and REF from $\kappa <4$ to $\kappa =4$ .", "We start with a monotonicity statement for $\operatorname{CLE}_\kappa $ proved in [70].", "Lemma B.1 ([70]) There exists a coupling of $\operatorname{CLE}_\kappa $ on the unit disk $\mathbb {D}$ for $\kappa \in (\frac{8}{3},4]$ such that a.s. each outermost loop of $\operatorname{CLE}_{\kappa _1}$ is surrounded by an outermost loop of $\operatorname{CLE}_{\kappa _2}$ if $\kappa _1<\kappa _2\le 4$ .", "By [70], the law of the boundaries of outermost loop clusters in a Brownian loop soup with intensity $c_\kappa = (3\kappa - 8)(6-\kappa )/2\kappa $ is given by the outermost loops of $\operatorname{CLE}_\kappa $ .", "Now the monotonicity of $c_\kappa $ in $\kappa \in (\frac{8}{3},4]$ yields the desired monotonicity in Lemma REF .", "We recall the formula from [69] for the conformal radii of CLE.", "Theorem B.2 ([69]) For $\kappa \in (8/3,8)$ , let $\eta _\kappa $ be the outermost loop surrounding 0 of a $\operatorname{CLE}_\kappa $ on $\mathbb {D}$ .", "Let $\mathrm {CR}(\eta _\kappa , 0)$ be the conformal radius of the region surrounded by $\eta _\kappa $ viewed from 0.", "Then $\mathbb {E}[ \mathrm {CR}(\eta _\kappa , 0)^\lambda ] = \frac{- \cos (\frac{4\pi }{\kappa })}{\cos \mathopen {}\mathclose {\left( \pi \sqrt{(1-\frac{4}{\kappa })^2 -\frac{8\lambda }{\kappa }} \right)}}\quad \textrm {for } \lambda > -1 + \frac{\kappa }{8}.$ Recall that the Hausdorff distance between two closed sets $E_1,E_2$ on a metric space $(X,d)$ is given by $\max \lbrace \sup _{x\in E_1}\lbrace d(x,E_2)\rbrace , \sup _{x\in E_2}\lbrace d(x,E_1) \rbrace \rbrace $ .", "Lemma REF and Theorem REF yield the following continuity result.", "Lemma B.3 Suppose we are in the coupling in Lemma REF .", "For each $z\in \mathbb {D}$ , let $\eta _\kappa (z)$ be the outermost loop around $z$ of the $\operatorname{CLE}_\kappa $ .", "For any fixed $z$ , viewed as closed sets, $\eta _\kappa (z)$ converges almost surely to $\eta _4(z)$ in the Hausdorff metric as $\kappa \uparrow 4$ .", "By the conformal invariance of $\operatorname{CLE}_\kappa $ we assume $z=0$ because the same argument will work for a general $z$ .", "In this case we simply write $\eta _\kappa (0)$ as $\eta _\kappa $ .", "Since $\eta _{\kappa _1}$ is surrounded by $\eta _{\kappa _2}$ if $\kappa _1<\kappa _2\le 4$ , we see that $\mathrm {CR}(\eta _\kappa , 0)$ is increasing in $\kappa $ .", "By the explicit formula in Theorem REF , we have $\lim _{\kappa \uparrow 4} \mathbb {E}[ \mathrm {CR}(\eta _\kappa , 0)] = \mathbb {E}[\mathrm {CR}(\eta _4, 0)]$ .", "Thus we must have $\lim _{\kappa \uparrow 4} \mathrm {CR}(\eta _\kappa , 0)=\mathrm {CR}(\eta _4, 0)$ a.s.
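The explicit formula of Theorem B.2 invoked in the previous step can also be examined numerically; the sketch below is only an illustrative aside and is not used in the argument (the values of $\kappa $ are arbitrary test points). It evaluates the right-hand side of the formula, which equals 1 at $\lambda =0$ for every $\kappa $ and varies continuously in $\kappa $ at $\lambda =1$ , which is the continuity of $\mathbb {E}[\mathrm {CR}(\eta _\kappa ,0)]$ used above.
\begin{verbatim}
import cmath, math

def cr_moment(kappa, lam):
    # right-hand side of Theorem B.2; cmath handles a negative argument under the square root
    s = cmath.sqrt((1.0 - 4.0 / kappa) ** 2 - 8.0 * lam / kappa)
    return (-math.cos(4.0 * math.pi / kappa) / cmath.cos(math.pi * s)).real

for kappa in [3.7, 3.9, 3.99, 4.0]:
    print(kappa, cr_moment(kappa, 0.0), cr_moment(kappa, 1.0))
# lambda = 0 gives 1 (up to rounding) for every kappa, while the lambda = 1 values increase
# continuously to the kappa = 4 value, matching the limit of E[CR(eta_kappa, 0)] used above.
\end{verbatim}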
Let $D_\\kappa $ be the region surrounded by $\\eta _\\kappa $ and $D_{4^-}=\\cup _{\\kappa <4} D_\\kappa $ .", "The conformal radius of $D_{4^-}$ must be between $\\lim _{\\kappa \\uparrow 4} \\mathrm {CR}(\\eta _\\kappa , 0)$ and $\\mathrm {CR}(\\eta _4, 0)$ , hence equals $\\mathrm {CR}(\\eta _4, 0)$ a.s.", "This means that $D_\\kappa \\uparrow D_4$ a.s. hence $\\eta _\\kappa \\rightarrow \\eta _4$ a.s. in the Hausdorff metric in the coupling in Lemma REF .", "Recall the loop measures $ \\operatorname{SLE}_\\kappa ^\\mathrm {loop}$ , $\\widetilde{\\operatorname{SLE}}_\\kappa ^\\mathrm {loop}$ , $ \\mathcal {L}_\\kappa $ and $\\widetilde{\\mathcal {L}}_\\kappa $ from Section .", "We first give a quantitative version of Lemma REF and then prove the continuity in $\\kappa $ for these measures.", "Proposition B.4 Lemma REF holds.", "Moreover, the convergence is exponential with a uniform rate near $\\kappa =4$ .", "That is, there exists a constant $a(\\kappa )\\in (0,1)$ depending on $\\kappa $ such that the total variation distance between the law of $\\eta ^n$ and $\\widetilde{\\mathcal {L}}_\\kappa (\\mathcal {C})$ is at most $a(\\kappa )^n$ , and in addition, $\\sup \\lbrace a(\\kappa ) \\: : \\: \\kappa \\in [\\kappa _0, 4]\\rbrace < 1$ for $\\kappa _0 \\in (8/3, 4]$ .", "Fix $\\kappa \\in (\\frac{8}{3}, 4]$ .", "[45] shows that there exists $a(\\kappa )\\in (0,1)$ such that regardless of the initial states, the Markov chain in Lemma REF couples in one step with probability $1-a(\\kappa )$ .", "Moreover, inspecting that argument, we see that $\\sup \\lbrace a(\\kappa ) \\: : \\: \\kappa \\in [\\kappa _0, 4]\\rbrace < 1$ for $\\kappa _0 \\in (8/3, 4]$ .", "This gives Proposition REF .", "Lemma B.5 We have $\\lim _{\\kappa \\uparrow 4} \\widetilde{\\mathcal {L}}_\\kappa = \\widetilde{\\mathcal {L}}_4$ weakly with respect to the Hausdorff metric.", "By the domai Markove property of $\\operatorname{CLE}_\\kappa $ and iteratively applying Lemma REF , we see that for each $n\\in \\mathbb {N}$ , the law of $\\eta ^n$ as $\\kappa \\rightarrow 4$ converge weakly to the law of $\\eta ^n$ for $\\kappa =4$ .", "Now the uniformly exponential convergence in Proposition REF gives $\\lim _{\\kappa \\uparrow 4} \\widetilde{\\mathcal {L}}_\\kappa (\\mathcal {C}) = \\widetilde{\\mathcal {L}}_4(\\mathcal {C})$ .", "Transferring from the cylinder to the disk gives $\\lim _{\\kappa \\uparrow 4} \\widetilde{\\mathcal {L}}_\\kappa = \\widetilde{\\mathcal {L}}_4$ .", "Lemma B.6 We have $\\lim _{\\kappa \\uparrow 4} \\mathcal {L}_\\kappa =\\mathcal {L}_4$ weakly with respect to the Hausdorff metric.", "We only explain that under the natural parameterization chordal $\\operatorname{SLE}_\\kappa $ converges to $\\operatorname{SLE}_4$ as $\\kappa \\uparrow 4$ .", "Once this is done we get that the two-sided whole plane $\\operatorname{SLE}_\\kappa $ curve $\\operatorname{SLE}_\\kappa ^{p\\rightleftharpoons q} $ converges to $\\operatorname{SLE}_4^{p\\rightleftharpoons q}$ under natural parametrization as well, since the two-sided curve is characterized by its re-sampling property: conditioned on one of the curve segments between $p$ and $q$ , the conditional law of the other is $\\operatorname{SLE}_\\kappa $ in the complementary domain.", "From this and the definition of $\\operatorname{SLE}_\\kappa ^{\\mathrm {loop}}$ we reviewed in Section REF , we get the convergence of $ \\mathcal {L}_\\kappa $ .", "We now show that the law of chordal $\\operatorname{SLE}_\\kappa $ on $\\mathbb {H}$ from 0 to $\\infty $ under 
natural parametrization is continuous as $\kappa \uparrow 4$ .", "We first recall that this family of measures is tight in the local uniform topology of parametrized curves thanks to their Hölder regularity established by Zhan [77].", "On the other hand, the natural parametrization of $\operatorname{SLE}_\kappa $ is characterized by a conformal invariance and domain Markov property considered by Lawler and Sheffield [48].", "Therefore all subsequential limits of the chordal $\operatorname{SLE}_\kappa $ measures agree with $\operatorname{SLE}_4$ .", "Lemma B.7 Suppose Theorem  holds for $\kappa \in (8/3,4)$ .", "Then it holds for $\kappa =4$ as well.", "The right hand side of () is continuous as $\kappa \uparrow 4$ .", "Therefore $\lbrace \mathrm {CR}(z_i, \eta _i)\rbrace _{1\le i\le 3}$ for $\kappa <4$ converges in law as $\kappa \uparrow 4$ and the moment generating function of the limit is given by ${C^{\operatorname{CLE}}_\kappa (\lambda _1,\lambda _2,\lambda _3)}$ with $\kappa =4$ .", "It remains to show that the limit is given by $\lbrace \mathrm {CR}(z_i, \eta _i)\rbrace _{1\le i\le 3}$ with $\kappa =4$ .", "Fix a small $\varepsilon >0$ .", "Let $S$ be the set of simple loops in $\widehat{\mathbb {C}}$ separating $z_1$ from $\lbrace z_2, z_3\rbrace $ .", "Let $S_\varepsilon =\lbrace \eta \in S: \mathrm {CR}(z_1,\eta )>\varepsilon \rbrace $ .", "Let $\Gamma $ be the full-plane $\operatorname{CLE}_\kappa $ from which the $\eta _i$ are chosen.", "Now, conditioning on the event that $\Gamma \cap S_\varepsilon \ne \emptyset $ , we uniformly choose a loop $\eta ^\varepsilon $ from the finite set $\Gamma \cap S_\varepsilon $ .", "Then the conditional law of $\eta ^\varepsilon $ is the probability measure proportional to the restriction of $\widetilde{\operatorname{SLE}}_\kappa ^\mathrm {loop}$ to $S_\varepsilon $ .", "Let $D^\varepsilon $ be the connected component of $\widehat{\mathbb {C}}\setminus \eta ^\varepsilon $ not containing $z_1$ .", "By Proposition~\ref {prop-CLE-markov}, conditioning on $\eta ^\varepsilon $ , the law of $\Gamma |_{D^\varepsilon }$ is a $\operatorname{CLE}_\kappa $ in $D^\varepsilon $ .", "Moreover, $\eta _1,\eta _2,\eta _3$ are the outermost loops in $\Gamma |_{D^\varepsilon } \cup \lbrace \eta ^\varepsilon \rbrace $ separating $z_1,z_2,z_3$ .", "By Lemma~\ref {lem-kappa-shape}, the conditional law of $\eta ^\varepsilon $ conditioned on $\Gamma \cap S_\varepsilon \ne \emptyset $ for $\kappa <4$ converges weakly to the same conditional law when $\kappa =4$ .", "By Lemma~\ref {lem-kappa-D}, the same continuity holds for the conditional law of $(\eta _1,\eta _2,\eta _3)$ given $\Gamma \cap S_\varepsilon \ne \emptyset $ .", "Since $\lim _{\varepsilon \rightarrow 0}\mathbb {P}[\Gamma \cap S_\varepsilon \ne \emptyset ]=\lim _{\varepsilon \rightarrow 0}\mathbb {P}[\mathrm {CR}(z_1,\eta _1)>\varepsilon ]=1$ uniformly for $\kappa \in (\kappa _0,4]$ for a fixed $\kappa _0 \in (8/3,4)$ , we can remove the conditioning and conclude that as $\kappa \uparrow 4$ the law of $(\eta _1,\eta _2,\eta _3)$ converges weakly in the Hausdorff metric to the same law when $\kappa =4$ .", "This gives the desired continuity in $\kappa $ ." ] ]
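As an illustrative aside (not part of the original argument), the continuity in $\kappa$ used above can be checked numerically from the explicit moment formula quoted in Theorem B.2: evaluating its right-hand side for fixed $\lambda$ as $\kappa \uparrow 4$ shows that $\mathbb{E}[\mathrm{CR}(\eta_\kappa,0)^\lambda]$ varies continuously up to $\kappa = 4$. The helper name and the sampled values below are ours; the square root is taken in the complex plane since its argument becomes negative for the relevant $\lambda$.

```python
import cmath

def cr_moment(kappa, lam):
    """E[CR(eta_kappa, 0)^lam] from the formula quoted in Theorem B.2,
    valid for lam > -1 + kappa/8; the result is real up to rounding."""
    num = -cmath.cos(4 * cmath.pi / kappa)
    den = cmath.cos(cmath.pi * cmath.sqrt((1 - 4 / kappa) ** 2 - 8 * lam / kappa))
    return (num / den).real

# The first moment approaches its kappa = 4 value 1/cosh(pi*sqrt(2)) as kappa -> 4:
for kappa in (3.9, 3.99, 3.999, 4.0):
    print(kappa, cr_moment(kappa, 1.0))
```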
2107.01788
[ [ "Effect of dust rotational disruption by radiative torques on radiation\n pressure feedback from massive protostars" ], [ "Abstract Radiation pressure on dust is thought to play a crucial role in the formation process of massive stars by acting against gravitational collapse onto the central protostar.", "However, dust properties in dense regions irradiated by the intense radiation of massive protostars are poorly constrained.", "Previous studies usually assume the standard interstellar dust model to constrain the maximum mass of massive stars formed by accretion, which appears to contradict with dust evolution theory.", "In this paper, using the fact that stellar radiation exerts on dust simultaneous radiation pressure and radiative torques, we study the effects of grain rotational disruption by radiative torques (RATs) on radiation pressure and explore its implications for massive star formation.", "For this paper, we focus on the protostellar envelope and adopt a spherical geometry.", "We find that original large grains of micron-sizes presumably formed in very dense regions can be rapidly disrupted into small grains by RATs due to infrared radiation from the hot dust shell near the sublimation front induced by direct stellar radiation.", "Owing to the modification in the size distribution by rotational disruption, the radiation pressure opacity can be decreased by a factor of $\\sim 3$ from the value expected from the original dust model.", "However, to form massive stars via spherical accretion, the dust-to-gas mass ratio needs to be reduced by a factor of $\\sim 5$ as previously found." ], [ "Introduction", "Radiation pressure on dust plays a central role in numerous astrophysical processes, including formation and feedback of massive stars ([41]; [50]; [58]) and supermassive black holes and active galactic nuclei (AGN) ([14]), and stellar and galactic winds ([52]).", "The radiation pressure depends on dust properties (composition and size distribution), which are poorly known in intense radiation fields.", "Massive stars are ubiquitous in the universe, yet its formation mechanism is still hotly debated (see reviews by [40]; [50]; [67]; [51]).", "From the early studies on massive star formation, radiation pressure on dust has been suggested to be a major barrier for the formation of massive stars ([41]; [32]).", "Dust in the central hottest region is sublimated due to high temperatures, producing a cocoon or a torus surrounding the central core.", "Most UV photons of massive protostars are absorbed by dust in a thin layer just beyond the sublimation front and are re-emitted in infrared (IR) radiation.", "The latter radiation of long wavelengths can penetrate deeper into the protostellar envelope and induce radiation pressure on dust.", "[41] found radiation pressure force induced by IR dust radiation could exceed gravitational force and halt the accretion when the stellar mass exceeds $M_{\\star }\\sim 20M_{\\odot }$ .", "[62] (hereafter WC87) studied in detail the maximum stellar mass implied by radiation pressure.", "The authors calculated the radiation pressure on dust in the outer envelope induced by photons emitted from a thin shell of hot dust by following dust evolution.", "The authors found that, to produce massive stars of $M>20M_{\\odot }$ , the dust-to-gas mass ratio must be reduced by a factor of 4, and the grain size distribution must be modified such that large grains of $a\\sim 0.05-0.25\\,{\\mu \\rm {m}}$ are depleted in the protostellar envelope.", "[64] 
performed two-dimensional radiation-hydrodynamic simulations of the collapse of massive cores, including radiation pressure and non-spherical collapse due to rotation.", "The authors found that the effects of non-spherical collapse could help to form protostars of 30-40$M_{\\odot }$ assuming the standard interstellar dust opacity.", "However, they were unable to make objects much larger than about 40$M_{\\odot }$ , even when starting with an initial core of 120$M_{\\odot }$ ; eventually the radiative acceleration halted collapse.", "Three-dimensional radiation-hydrodynamic simulations by [37] reveal the inefficiency of radiation pressure in halting accretion because of radiation Rayleigh-Taylor instability that allows radiation to escape through low density region ( see recent reviews by [35] and [58]).", "Hydrodynamic simulations by [34] came to the same conclusion that radiation pressure does not halt the accretion, although the standard grain size distribution of the diffuse ISM is assumed.", "Nevertheless, to understand the exact role of radiation pressure feedback in massive star formation, it is necessary to accurately understand dust properties in the envelope (cocoon) surrounding the massive (proto-)star.", "Previous studies usually assume the standard interstellar dust model from [48] (hereafter MRN model) with the maximum grain size of $a_{\\rm max}=0.25\\,{\\mu \\rm {m}}$ ([61]; [62]).", "The assumption of the MRN size distribution is difficult to reconcile with the existence of large grains of micron sizes in dense cores implied by theoretical calculations (e.g., [19]) and inferred from observations (e.g., [53]; [44]; [55]; [65]).", "Grain growth is also observed toward protostellar disks, a later phase of star formation [39].", "Such large grains are most likely to be porous or have composite structures of ice mantles [15].", "Therefore, dust in the protostellar envelope plausibly has similar properties as in prestellar cores or even grows to larger sizes, instead of having the standard MRN distribution.", "Dust size distribution is especially important for understanding the radiation feedback in massive star clusters (see a review by [38]).", "In addition to radiation pressure, stellar radiation is known to induce radiative torques (RATs) on dust grains of irregular shapes ([6]; [12]; [42]).", "Such RATs act to spin up the grain to suprathermal rotation ([12]; [24]).", "[30] realized that centrifugal stress resulting from such suprathermal rotation can exceed the maximum tensile strength of grain material, resulting in the disruption of the grain into fragments.", "This new physical mechanism was termed Radiative Torque Disruption (RATD).", "[56] first noticed that fluffy grains in the solar system could be disrupted by spin-up due to RATs.", "Since rotational disruption acts to break loose bonds between the grain constituents, unlike breaking strong chemical bonds between atoms in thermal sublimation, RATD can work with the average interstellar radiation field (see [21] for a review).", "The RATD mechanism introduces a new environment parameter for dust evolution, namely local radiation intensity, and is found to be the most efficient mechanism that constrains the upper limit of the size distribution ([20]; [31]).", "We will show in Section that grain size modification by RATD occurs faster than grain acceleration by radiation pressure.", "Thus, the grain size distribution constrained by RATD should be used for calculations of radiation pressure rather than the original dust 
size distribution expected for the dense prestellar core or the MRN size distribution.", "This paper aims to revisit the radiation pressure problem by considering the effect of RATD on the radiation pressure opacity and study its implications for massive star formation.", "To explore the effect of RATD and its resulting radiation pressure opacity, we assume a spherical collapse for the protostellar envelope.", "Indeed, both observations (see [4]) and numerical simulations (e.g., [36]) establish that massive star formation proceeds with the formation of an accretion disk.", "However, the exact radius of accretion disks is uncertain, depending on the various effects of turbulence, gravity, magnetic fields, feedback, and non-ideal Magneto-hydrodynamics (MHD) effects.", "Thus, the detailed modeling is devoted in a followup paper.", "The structure of the present paper is as follows.", "In Section we review radiation pressure and study its dependence on the incident radiation field and grain size distribution.", "In Section , we review radiative torques on dust and the rotational disruption mechanism.", "In Section , we describe the working model of massive protostellar cloud and radiation fields.", "In Section , we present numerical results for the disruption size by RATD and calculate resulting radiation pressure opacity.", "In Sections and , we discuss our main findings and present conclusions." ], [ "Radiation Pressure on Dust", "Photons carry energy, momentum, and angular momentum (spin).", "Dust grains exposed to a radiation field experience radiation pressure due to light absorption and scattering, which is known to be important in astrophysics (e.g., [57]).", "Moreover, interstellar dust grains are likely to have irregular shapes as inferred from interstellar polarization ([16]; [18]).", "When subject to an anisotropic radiation field, such grains experience radiative torques ([6]; [12]; [1]).", "As a result, grains simultaneously experience radiation pressure and radiative torques when irradiated by a radiation beam, which affects grain translational and rotational dynamics.", "In this section, we first describe radiation pressure and its dependence on grain properties.", "Radiative torques will be described in Section ." 
], [ "Radiation pressure cross-section", "Let $u_{\\lambda }$ be the spectral energy density of radiation field at wavelength $\\lambda $ .", "The energy density of the radiation field is then $u_{{\\rm rad}}=\\int _{0}^{\\infty } u_{\\lambda }d\\lambda $ .", "To describe the strength of a radiation field, let define $U=u_{\\rm rad}/u_{\\rm ISRF}$ with $u_{\\rm ISRF}=8.64\\times 10^{-13}\\,{\\rm erg}\\,{\\rm cm}^{-3}$ being the energy density of the average interstellar radiation field (ISRF) in the solar neighborhood as given by [47].", "For a black body of temperature $T$ , the radiation spectrum is $u_{\\lambda }= B_{\\lambda }(T)/c$ .", "The radiation pressure cross-section efficiency for a spherical grain of size $a$ is defined by $Q_{\\rm pr}(a,\\lambda )=Q_{\\rm abs}+Q_{\\rm sca}(1-\\langle \\cos \\theta \\rangle ),$ where $\\langle \\cos \\theta \\rangle $ is the mean cosine of scattering angle $\\theta $ , and $Q_{\\rm abs}=C_{\\rm abs}/\\pi a^{2}$ , $Q_{\\rm sca}=C_{\\rm sca}/(\\pi a^{2})$ are the absorption and scattering efficiency.", "The radiation force on a grain is calculated as $F_{\\rm rad,a}=\\int _{0}^{\\infty } Q_{\\rm pr}(a,\\lambda )\\pi a^{2}=u_{\\rm rad}\\bar{Q}_{\\rm pr}\\pi a^{2}u_{\\lambda } d\\lambda ,$ where the average cross-section efficiency is given by $\\bar{Q}_{\\rm pr}(a)=\\frac{\\int _{0}^{\\infty } Q_{\\rm pr}(a,\\lambda )u_{\\lambda }d\\lambda }{u_{\\rm rad}}.$" ], [ "Radiation pressure opacity", "Dust grains have a grain size distribution, which is usually described by a power law, dnjda = Cj nH a, where $j$ denotes the grain composition (silicate and graphite), $C_{j}$ is the normalization constant, and $\\alpha $ is the power slope.", "The lower cutoff of the size distribution is taken to be $a_{\\rm min}=0.005\\,{\\mu \\rm {m}}$ as in [48].", "The upper cutoff of the size distribution, $a_{\\rm max}$ , is a free parameter in this section.", "Integrating over the grain size distribution for all grains in a unit volume, the radiation force on a mass unit is given by $F_{\\rm rad}=\\int _{a_{\\min }}^{a_{\\max }} u_{\\rm rad}\\bar{Q}_{\\rm pr}\\pi a^{2} \\frac{dn_{gr}}{da}da=u_{\\rm rad}\\langle \\kappa \\rangle _{\\rm pr,d}\\rho _{d},$ where the radiation pressure coefficient per dust mass is given by pr,d = j=sil,gra aminamax a2 Qprj(a)(dnjda)da, and the dust mass density $\\rho _{d}= \\sum _{j=\\rm sil,gra} \\int _{a_{\\rm min}}^{a_{\\rm max}} (4/3)\\rho _{j}\\pi a^{3}\\left(\\frac{dn_{j}}{da}\\right)da,$ where $\\rho _{\\rm sil}\\approx 3.5\\,{\\rm g}\\,{\\rm cm}^{-3}$ and $\\rho _{\\rm gra}\\approx 0.2\\,{\\rm g}\\,{\\rm cm}^{-3}$ (see e.g., [7]).", "Let $f_{d/g}$ be the dust-to-gas mass ratio.", "Then, the dust radiation pressure coefficient per gas mass, ${\\kappa }_{\\rm pr}$ , is given by $\\kappa _{\\rm pr}=\\langle \\kappa \\rangle _{\\rm pr,d}f_{d/g}.$" ], [ "Dependence of Radiation Pressure Opacity on Incident Radiation and Grain Size Distribution", "We first calculate the radiation pressure cross-sections for spherical grains using the Mie theory for astronomical silicate and graphite grains ([11]).", "We then calculate the radiation pressure cross-section averaged over the different black body radiation spectra characterized by a black body temperature $T_{\\rm rad}$ , $\\bar{Q}_{\\rm pr}$ , using Equation (REF ).", "Figure REF shows $\\bar{Q}_{\\rm pr}$ as a function of the grain size for different radiation spectra.", "For low temperatures of $T_{\\rm rad}\\lesssim 2000\\,{\\rm K}$ , $\\bar{Q}_{\\rm pr}$ decreases rapidly when the 
grain size decreases from $a=1\\,{\\mu \\rm {m}}$ because the peak wavelength, $\\lambda _{\\max }= 2898\\,{\\mu \\rm {m}}\\,{\\rm K}/T_{\\rm rad}\\sim 3(10^{3}\\,{\\rm K}/T_{\\rm rad})\\,{\\mu \\rm {m}}$ , is much larger than the grain size.", "For hot stars of $T_{\\rm rad}>2\\times 10^{4}\\,{\\rm K}$ , $\\bar{Q}_{\\rm pr}$ first increases slowly to its maximum and then decreases when the grain size becomes small enough ($\\sim 0.01-0.1\\,{\\mu \\rm {m}}$ ) such that $\\lambda _{\\max }/a>1$ .", "Figure: Average radiation pressure cross-section efficiency for different temperatures of the black body radiation spectrum, T rad T_{\\rm rad}, as a function of the grain size for silicates.", "Rapid decrease of Q ¯ pr \\bar{Q}_{\\rm pr} with the grain size is seen for small grains depending on T rad T_{\\rm rad}.Figure: Effect of the maximum grain size a max a_{\\max } on radiation pressure opacity for different black body radiation spectra from the central star (left panel) and hot dust shell (right panel), assuming silicate material (upper panels) and graphite (lower panels).", "The opacity κ pr \\kappa _{\\rm pr} decreases rapidly when a max a_{\\max } decreases and becomes flat for a≲0.1μma\\lesssim 0.1\\,{\\mu \\rm {m}}.To study the effects of the grain size distribution on radiation pressure, we calculate the radiation pressure opacity for the different values of $a_{\\max }$ and a constant slope $\\alpha =-3.5$ using Equations (REF ) and (REF ).", "Figure REF (upper) shows the effect of the maximum size on the radiation pressure opacity averaged over the stellar radiation and infrared dust emission field.", "For hot stars of high stellar temperatures of $T_{\\rm rad}>20000K$ , the radiation opacity increases when $a_{\\max }$ decreases from $1\\,{\\mu \\rm {m}}$ to $0.01\\,{\\mu \\rm {m}}$ because the radiation spectrum from hot stars is mostly in the UV.", "For low source temperatures (lower panel), the opacity $\\kappa _{\\rm pr}$ decreases rapidly when $a_{\\max }$ decreases to $\\lesssim 0.1\\,{\\mu \\rm {m}}$ because these cool sources emit radiation mostly in optical and near-infrared (NIR).", "For the typical of the hot dust shell of $T_{\\rm rad}\\sim 1000\\,{\\rm K}$ adopted by WC87, the opacity decreases by a factor of 3 when the maximum size is reduced to $a_{\\max }=0.1\\,{\\mu \\rm {m}}$ .", "The results shown in Figure REF reveal that the maximum grain size is an important factor of dust radiation pressure coefficient.", "Therefore, an accurate understanding of how $a_{\\max }$ changes in the protostellar envelope is critically important.", "In the following section, we will show that the maximum size $a_{\\max }$ decreases with increasing radiation intensity, which make the opacity to change with the distance to the radiation source accordingly.", "Dust grains of irregular shape irradiated by an anisotropic radiation experience radiative torques ([6]; [12]; [1]).", "The magnitude of RATs is defined as ${\\Gamma }_{\\lambda }=\\pi a^{2}\\gamma u_{\\lambda } \\left(\\frac{\\lambda }{2\\pi }\\right){Q}_{\\Gamma },$ where $\\gamma $ is the anisotropy degree of the radiation field, ${Q}_{\\Gamma }$ is the RAT efficiency ([12]; [42]).Formally, $a$ here is the effective size of the grain which is defined as the radius of the sphere with the same volume as the irregular grain, but for simplicity, we take $a$ without significant uncertainty (within a order of unity).", "The magnitude of RAT efficiency can be approximated by a power-law $Q_{\\Gamma }\\approx \\alpha 
\\left(\\frac{{\\lambda }}{a}\\right)^{-\\eta }$ for $\\lambda /a\\gtrsim 0.1$ , where $\\alpha $ and $\\eta $ are the constants that depend on the grain size, shape, and optical constants.", "Numerical calculations of RATs for several shapes of different optical constants in [42] find the slight difference in RATs among the realization.", "They adopted the coefficients $\\alpha =0.4,\\eta =0$ for $a_{\\rm trans}<a<\\lambda /0.1$ , and $\\alpha =2.33,\\eta =3$ for $a<a_{\\rm trans}$ where $a_{\\rm trans}=\\lambda /1.8$ denotes the transition size at which the RAT efficiency slope changes.", "Thus, the maximum RAT efficiency is $Q_{\\Gamma ,\\max }=\\alpha $ .", "The radiative torque averaged over the incident radiation spectrum is defined as $\\overline{\\Gamma }_{\\rm RAT}&=&\\int {\\Gamma }_{\\lambda }d\\lambda =\\pi a^{2}\\gamma u_{{\\rm rad}} \\left(\\frac{\\overline{\\lambda }}{2\\pi }\\right)\\overline{Q}_{\\Gamma },$ where the average radiative torque efficiency over the radiation spectrum is defined as $\\overline{Q}_{\\Gamma } = \\frac{\\int _{0}^{\\infty } \\lambda Q_{\\Gamma }u_{\\lambda } d\\lambda }{\\int _{0}^{\\infty } \\lambda u_{\\lambda } d\\lambda }=\\frac{\\int _{0}^{\\infty } \\lambda Q_{\\Gamma }u_{\\lambda } d\\lambda }{\\bar{\\lambda }u_{\\rm rad}},$ where the integrals are taken over the entire radiation spectrum.", "For a radiation spectrum of black body temperature $T_{\\rm rad}$ , the mean wavelength of the stellar radiation field is given by $\\bar{\\lambda }(T_{{\\rm rad}})&&=\\frac{\\int _{0}^{\\infty } \\lambda B_{\\lambda }(T_{\\rm rad})d\\lambda }{\\int _{0}^{\\infty }B_{\\lambda }(T_{\\rm rad})d\\lambda }\\nonumber \\\\&=&\\left(\\frac{2\\pi k^{3}\\Gamma (3)\\zeta (3)}{\\sigma ch^{2}}\\right)\\frac{1}{T_{{\\rm rad}}}\\simeq \\frac{0.53\\,{\\rm cm}\\,{\\rm K}}{T_{\\rm rad}},$ where $\\Gamma $ and $\\zeta $ are the Gamma and Riemann functions, and we have used the integral formula $\\int _{0}^{\\infty } x^{s-1}dx/(e^{x}-1)=\\Gamma (s)\\zeta (s)$ for $s>1$ .", "For small grains of $a<\\bar{\\lambda }/1.8$ , plugging $Q_{\\Gamma }$ from Equation (REF ) and $u_{\\lambda }\\propto B_{\\lambda }(T_{{\\rm rad}})$ into Equation (REF ), one obtains the following after taking the integral, $\\overline{Q}_{\\Gamma }&=&\\frac{2\\pi \\alpha k^{\\eta +3}}{\\sigma h^{\\eta +2}c^{\\eta +1}}\\left(\\frac{\\zeta (3)\\Gamma (3)2\\pi k^{3}}{\\sigma ch^{2}} \\right)^{\\eta -1}\\Gamma (\\eta +3)\\zeta (\\eta +3)\\nonumber \\\\&&\\times \\left(\\frac{\\bar{\\lambda }}{a}\\right)^{-\\eta }.$ Plugging the RAT parameters of $\\alpha =2.33$ and $\\eta =3$ into Equation (REF ), one obtains the average RAT efficiency for a stellar radiation field $\\overline{Q}_{\\Gamma }\\simeq 6\\left(\\frac{\\bar{\\lambda }}{a}\\right)^{-3}.$ Because the average RAT efficiency cannot exceed its maximum RAT efficiency, $Q_{\\Gamma ,\\max }$ , the above equation is only valid for grains of size $a\\lesssim (Q_{\\Gamma ,\\max }/6)^{1/3}\\bar{\\lambda }= \\bar{\\lambda }/2.5$ .", "Large grains of $a>\\bar{\\lambda }/2.5$ then have $\\bar{Q}_{\\Gamma }=Q_{\\Gamma ,\\max }=0.4$ .", "Let $a_{\\rm trans,\\star }\\equiv \\bar{\\lambda }/2.5$ be the transition size of the RAT averaged over the stellar radiation spectrum." 
], [ "Grain Rotation and Rotational Disruption by Radiative Torques", "For radiation sources with stable luminosity considered in this paper, radiative torque, $\\overline{\\Gamma }_{\\rm RAT}$ , is constant, so that the grain angular velocity is steadily increased over time.", "The equilibrium angular velocity can be achieved when the spin-up rate by RATs is equal to the damping rate (see [42]; [24]; [25]): $\\omega _{\\rm RAT}=\\frac{\\overline{\\Gamma }_{\\rm RAT}\\tau _{\\rm damp}}{I},$ where $\\tau _{\\rm damp}$ is the rotational damping time (see Eq.", "REF ), which is induced by gas collisions and IR emission (see Appendix ).", "For the gas with hydrogen density $n_{{\\rm H}}$ and temperature $T_{{\\rm gas}}$ , plugging $\\tau _{\\rm damp}$ and $\\overline{\\Gamma }_{\\rm RAT}$ with $Q_{\\rm RAT}$ from Equation (REF ) into Equation (REF ), one obtains $\\omega _{\\rm RAT}&=& \\frac{3\\gamma u_{\\rm rad}a\\bar{\\lambda }^{-2}}{1.6n_{\\rm H}\\sqrt{2\\pi m_{\\rm H}kT_{\\rm gas}}}\\left(\\frac{1}{1+F_{\\rm IR}}\\right)\\nonumber \\\\&\\simeq &9.4\\times 10^{5} a_{-5}\\left(\\frac{\\bar{\\lambda }}{1.2\\,{\\mu \\rm {m}}}\\right)^{-2}\\left(\\frac{\\gamma _{-1}U}{n_{3}T_{1}^{1/2}}\\right)\\nonumber \\\\&&\\times \\left(\\frac{1}{1+F_{\\rm IR}}\\right){\\rm rad}\\,{\\rm s}^{-1},$ for grains with $a\\lesssim a_{\\rm trans, \\star }$ , and $\\omega _{\\rm RAT}&=&\\frac{1.5\\gamma u_{\\rm rad}\\bar{\\lambda }a^{-2}}{16n_{\\rm H}\\sqrt{2\\pi m_{\\rm H}kT_{\\rm gas}}}\\left(\\frac{1}{1+F_{\\rm IR}}\\right)\\nonumber \\\\&\\simeq & 8.1\\times 10^{7}a_{-5}^{-2}\\left(\\frac{\\bar{\\lambda }}{1.2\\,{\\mu \\rm {m}}}\\right) \\left(\\frac{\\gamma _{-1}U}{n_{3}T_{1}^{1/2}}\\right)\\nonumber \\\\&&\\times \\left(\\frac{1}{1+F_{\\rm IR}}\\right){\\rm rad}\\,{\\rm s}^{-1},$ for grains with $a> a_{\\rm trans, \\star }$ .", "Here $a_{-5}=a/10^{-5}\\,{\\rm cm}$ , $n_{3}=n_{{\\rm H}}/10^{3}\\,{\\rm cm}^{-3}$ , $T_{1}=T_{{\\rm gas}}/10\\,{\\rm K}$ , $F_{\\rm IR}$ is the IR damping coefficient (see Eq.", "REF ), and $\\gamma _{-1}=\\gamma /0.1$ is the anisotropy of radiation field relative to the typical anisotropy of the diffuse interstellar radiation field of $\\gamma =0.1$ (e.g., [12]).", "The stellar radiation field has $\\gamma =1$ .", "A spherical dust grain of radius $a$ rotating at velocity $\\omega $ develops an average tensile stress due to centrifugal force which scales as (see [30]) $S=\\frac{\\rho a^{2} \\omega ^{2}}{4},$ where $\\rho $ is the dust mass density.", "When the rotation rate is sufficiently high such as the tensile stress exceeds the maximum limit (i.e., tensile strength), $S_{\\rm max}$ , the grain is disrupted.", "The critical rotational velocity is given by $S=S_{\\rm max}$ : $\\omega _{\\rm disr}&=&\\frac{2}{a}\\left(\\frac{S_{\\max }}{\\rho } \\right)^{1/2}\\nonumber \\\\&\\simeq & \\frac{3.65\\times 10^{8}}{a_{-5}}S_{\\max ,7}^{1/2}\\hat{\\rho }^{-1/2}~{\\rm rad}\\,{\\rm s}^{-1},$ where $\\hat{\\rho }=\\rho /(3\\,{\\rm g}\\,{\\rm cm}^{-3})$ , and $S_{\\max ,7}=S_{\\max }/10^{7} \\,{\\rm erg}\\,{\\rm cm}^{-3}$ ([30]).", "The tensile strength of interstellar dust depends on grain structure, which is uncertain ([46]).", "Compact grains have large tensile strength of $S_{\\rm max}\\gtrsim 10^{9}\\,{\\rm erg}\\,{\\rm cm}^{-3}$ , whereas composite/fluffy grains have a much lower tensile strength [20].", "Large interstellar grains (radius $a>0.1\\,{\\mu \\rm {m}}$ ) are expected to have a composite structure ([49]; [9]) as a result of coagulation process in molecular clouds or in the 
interstellar medium (ISM).", "Numerical simulations for porous grain aggregates from [59] find that the tensile strength decreases with increasing the monomer radius and can be fitted with an analytical formula (see [33] for more details) $S_{\\max } &\\simeq & 9.51\\times 10^{4} \\left(\\frac{\\gamma _{\\rm sf}}{100 \\,{\\rm erg}\\,{\\rm cm}^{-2}}\\right) \\nonumber \\\\&\\times &\\left(\\frac{r_{0}}{0.1\\,{\\mu \\rm {m}}}\\right)^{-1}\\left(\\frac{\\phi }{0.1}\\right)^{1.8} \\,{\\rm erg}\\,{\\rm cm}^{-3},$ where $\\gamma _{\\rm sf}$ is the surface energy per unit area of the material, $r_{0}$ is the monomer radius, and $\\phi $ is the volume filling factor of monomers.", "For large grains ($a>0.1\\,{\\mu \\rm {m}}$ ) made of ice-mantle monomers of radius $r_{0}=0.1\\,{\\mu \\rm {m}}$ and $\\phi =0.1$ , Equation (REF ) implies $S_{\\max }\\approx 10^{5}\\,{\\rm erg}\\,{\\rm cm}^{-3}$ , assuming the surface energy of $\\gamma _{\\rm sf}=0.1 J m^{-2}$ for ice mantles in contact.", "We note that according to the dust evolution, grains in dense cores are expected to be porous aggregates due to the coagulation of ice mantles grains because ice mantles develop at $A_{V}\\sim 3$ .", "Therefore, the typical value of $S_{\\max }$ for dust in the massive prestellar cores is $S_{\\max }\\sim 10^{5} \\,{\\rm erg}\\,{\\rm cm}^{-3}$ .", "Comparing Equations (REF ) and (REF ), one can obtain the disruption grain size: $a_{\\rm disr}&=&\\left(\\frac{3.2n_{\\rm H}\\sqrt{2\\pi m_{\\rm H}kT_{\\rm gas}}}{3\\gamma u_{\\rm rad}\\bar{\\lambda }^{-2}}\\right)^{1/2}\\left(\\frac{S_{\\rm max}}{\\rho }\\right)^{1/4}(1+F_{\\rm IR})^{1/2}\\nonumber \\\\&\\simeq & 1.96 \\left(\\frac{\\gamma _{-1}U}{n_{3}T_{1}^{1/2}}\\right)^{-1/2}\\left(\\frac{\\bar{\\lambda }}{1.2\\,{\\mu \\rm {m}}}\\right) \\hat{\\rho }^{-1/4}S_{\\max ,7}^{1/4}\\nonumber \\\\&&\\times (1+F_{\\rm IR})^{1/2}\\,{\\mu \\rm {m}},$ which depends on the local gas properties, radiation field, and the grain tensile strength.", "Due to the decrease of the rotation rate for $a>a_{\\rm trans,\\star }$ (see Eq.", "REF ), the rotational disruption occurs only if $a_{\\rm disr}<a_{\\rm trans, \\star }$ .", "In this case, there exists a maximum size of grains that can still be disrupted by centrifugal stress ([29]), $a_{\\rm disr,max}&=&\\frac{\\gamma u_{\\rm rad}\\bar{\\lambda }}{12n_{\\rm H}\\sqrt{2\\pi m_{\\rm H}kT_{\\rm gas}}}\\left(\\frac{S_{\\rm max}}{\\rho }\\right)^{-1/2}(1+F_{\\rm IR})^{-1}\\nonumber \\\\&\\simeq &0.04\\left(\\frac{\\gamma _{-1} U}{n_{3}T_{1}^{1/2}}\\right)\\left(\\frac{\\bar{\\lambda }}{1.2\\,{\\mu \\rm {m}}}\\right)\\hat{\\rho }^{1/2}S_{\\max ,7}^{-1/2}\\nonumber \\\\&\\times &(1+F_{\\rm IR})^{-1}\\,{\\mu \\rm {m}}.$ In general, due to dependence of $F_{\\rm IR}$ on the grain size $a$ , one only obtain analytical results for $a_{\\rm disr}$ when $F_{\\rm IR}\\ll 1$ .", "In general, we first calculate numerically $\\omega _{\\rm RAT}$ using Equation (REF ) and compare it with $\\omega _{\\rm disr}$ to find $a_{\\rm disr}$ numerically, which will be referred to as numerical results." ], [ "Rotational Disruption vs. 
Radiation Pressure Acceleration", "Under the intense radiation field, grains are also known to be accelerated by radiation force.", "To see whether grain acceleration can be more effecient than rotational disruption, we now compare the characteristic timescales of these two processes.", "The characteristic timescale for rotational disruption of a grain of size $a$ can be estimated as ([30]): $t_{\\rm disr}&=&\\frac{I\\omega _{\\rm disr}}{\\Gamma _{\\rm RAT}}=\\frac{I\\omega _{\\rm disr}}{\\pi a^{2}u_{\\rm rad}(\\bar{\\lambda }/2\\pi )\\overline{Q}_{\\Gamma }},\\nonumber \\\\&=&\\frac{32\\pi a^{2}(\\rho S_{\\max })^{1/2}}{15u_{\\rm rad}\\bar{\\lambda }\\overline{Q}_{\\Gamma }}\\nonumber \\\\&&\\simeq 101a_{-5}^{2}(\\hat{\\rho }S_{\\max ,7})^{1/2}\\left(\\frac{r}{100\\,{\\rm au}}\\right)^{2}\\left(\\frac{0.2\\,{\\mu \\rm {m}}}{\\bar{\\lambda }\\bar{Q}_{\\Gamma }}\\right)\\,{\\rm s}.~~~~~$ The characteristic timescale to accelerate the grain at rest to a velocity $v$ by radiation pressure is estimated as, $t_{\\rm acc}&&=\\frac{m_{gr}v}{F_{\\rm rad}(a)}=\\frac{4\\rho a v}{3\\bar{Q}_{\\rm pr}u_{\\rm rad}}\\nonumber \\\\&&\\simeq 8815a_{-5}v_{1}\\hat{\\rho }\\left(\\frac{r}{100\\,{\\rm au}}\\right)^{2}\\left(\\frac{1.0}{\\bar{Q}_{\\rm pr}L_{6}}\\right)\\,{\\rm s},~~~~~$ where $v_{1}=v/(10\\, {\\rm km}\\,{\\rm s}^{-1})$ .", "The ratio of the disruption time to acceleration time is then equal to $\\frac{t_{\\rm disr}}{t_{\\rm acc}}&=&\\frac{8\\pi a Q_{\\rm pr}(S_{\\max }/\\rho )^{1/2}}{5 v\\bar{\\lambda }\\overline{Q}},\\nonumber \\\\&\\simeq & 0.01a_{-5}\\left(\\frac{\\bar{Q}_{\\rm pr}}{1.0}\\right)\\frac{(S_{\\rm max,7}/\\hat{\\rho })^{1/2}}{v_{1}(\\bar{\\lambda }/0.2\\,{\\mu \\rm {m}})(\\overline{Q}_{\\Gamma }/0.4)},$ where the radiation pressure and torque coefficients are normalized over their typical values of $\\bar{Q}_{\\rm pr}= 1$ and $\\overline{Q}_{\\Gamma }= 0.4$ .", "Equation (REF ) implies that the rotational disruption occurs on a much shorter timescale compared to acceleration by radiation pressure for $S_{\\max }<10^{11}\\,{\\rm erg}\\,{\\rm cm}^{-3}$ .", "Therefore, essentially, grains of both porous and compact structures can be disrupted before being accelerated by radiation pressure.", "Therefore, the radiation pressure opacity would be determined by the dust size distribution constrained by RATD instead of the original size distribution of dust in prestellar cores on the standard interstellar dust model.", "We will quantify the effect of RATD on the radiation opacity in the next section.", "The special case is in an extremely high density where grain rotational damping by gas collisions is faster than disruption.", "For this case, grains cannot be disrupted, but dust grains could be slowly accelerated and coupled to the gas such as in stellar winds.", "Let's compare the disruption time the dynamical times of the star formation process.", "The characteristic timescale for a molecular core to collapse to form a zero main-sequence star is described by the Kelvin-Helmholtz timescale, $t_{\\rm KH}&&=\\frac{GM_{\\star }^{2}}{R_{\\star }L}\\nonumber \\\\&&\\simeq 3140 \\left(\\frac{M_{\\star }}{100M_{\\odot }}\\right)^{2}\\left(\\frac{10R_{\\odot }}{R_{\\star }}\\right)\\left(\\frac{10^{6}L_{\\odot }}{L}\\right){\\rm yr},$ which is much longer than the disruption and acceleration time (see Eq.REF ).", "The free-fall time of gravitational collapse is, $t_{\\rm ff}=\\left(\\frac{3\\pi }{32 G\\rho _{{\\rm gas}}}\\right)^{1/2}\\simeq 1.63\\times 10^{4}n_{7}^{-1/2}{\\rm yr},$ where 
$n_{7}=n_{{\\rm H}}/10^{7}\\,{\\rm cm}^{-3}$ is normalized over the density of the central core with $\\rho _{{\\rm gas}}=\\mu n_{{\\rm H}}m_{{\\rm H}}$ with $\\mu $ being the molecular weight of the gas.", "This is also much longer than the acceleration and disruption times." ], [ "Density profile", "Let $\\dot{M}$ be the accretion rate of the matter onto the central massive protostar of mass $M_{\\star }$ .", "The accretion rate is determined by the initial condition.", "For free-fall accretion, the inflow velocity at radius $r$ is $v(r)=(2GM_{\\star }/r)^{1/2}$ .", "The gas density is then given by $n_{{\\rm H}}(r)&=&\\frac{\\dot{M}X({\\rm H})}{4\\pi r^{2}m_{{\\rm H}}v}\\nonumber \\\\&\\simeq & 2.2\\times 10^{8}\\frac{X({\\rm H})}{0.7}\\left(\\frac{M_{\\star }}{100M_{\\odot }}\\right)^{-1/2}\\left(\\frac{\\dot{M}}{10^{-3}M_{\\odot }{\\rm yr}^{-1}}\\right)\\nonumber \\\\&&\\times \\left(\\frac{r}{100\\,{\\rm au}}\\right)^{-3/2}\\,{\\rm cm}^{-3}.$ where $X({\\rm H})$ is the mass fraction of hydrogen mass in the infalling gas.", "The density profile depends on the stellar mass and accretion rate." ], [ "Radiation field of the central massive star", "For massive stars, nuclear fusion already started when the accretion process is still ongoing, which is different from the low-mass protostar.", "The total luminosity from the central protostar is given by $L_{\\rm tot}=\\int _{0}^{\\infty } L_{\\nu }d\\nu = L_{\\star }+L_{\\rm acc},$ where $L_{\\star }$ is the stellar luminosity produced by the nuclear burning core, and $L_{\\rm acc}$ is the luminosity induced by accretion shock at the star surface.", "The latter is equal to the rate of gravitational energy released due to collapse, $L_{\\rm acc}&=&\\frac{GM_{\\star }\\dot{M}}{R_{\\star }}\\simeq 3.1\\times 10^{5}L_{\\odot }\\left(\\frac{M_{\\star }}{100M_{\\odot }}\\right)\\nonumber \\\\&&\\times \\left(\\frac{\\dot{M}}{10^{-3}M_{\\odot }{\\rm yr}^{-1}}\\right)\\left(\\frac{R_{\\star }}{10R_{\\odot }}\\right),$ where $M_{\\star }$ is the core mass and $R_{\\star }$ core radius.", "Following [45], one has $L_{\\star }=10^{6}(M/100M_{\\odot })^{\\alpha }$ with $\\alpha \\sim 1.5$ for $M\\sim 100 M_{\\odot }$ .", "Thus, the stellar luminosity dominates over the accretion luminosity for very massive stars.", "The central star is assumed to radiate as a black body from photosphere of temperature $T_{\\rm ph}$ (e.g., [61]; [62]).", "Thus, the stellar temperature is related to the stellar luminosity as $L_{\\rm tot}=4\\pi R_{\\rm ph}^{2}\\sigma T_{\\rm ph}^{4},$ where $R_{\\rm ph}$ is the stellar photosphere radius.", "Figure: Schematic illustration of a protostellar envelope, including dust-free sublimation zone, first dust absorption zone (also disruption zone), and outer stellar-shielded envelope.", "Disruption of grains in the first absorption zone changes the radiation spectrum irradiating the outer zone as well as radiation acceleration of the first zone." 
], [ "Inner envelope: irradiated by direct stellar radiation", "Dust near the central star is sublimated due to heating by intense stellar radiation.", "The radius of the dust sublimation front is given by setting the grain temperature $T_{d}$ to be equal to the sublimation threshold, $T_{\\rm sub}$ , which yields (see [28]) $r_{\\rm sub}\\simeq 155.3\\left(\\frac{L_{\\rm tot}}{10^{6}L_{\\odot }}\\right)^{1/2}\\left(\\frac{T_{\\rm sub}}{1500\\,{\\rm K}}\\right)^{-5.6/2}\\,{\\rm au}.$ Due to the extinction by intervening dust, the radiation strength of the stellar radiation field at radial distance $r$ from the central star is given by $U_{\\star }(r)=\\frac{\\int _{0}^{\\infty } u_{\\lambda }(T_{\\star })e^{-\\tau (\\lambda )}d\\lambda }{u_{\\rm ISRF}},$ where $u_{\\lambda }(T_{\\star })=L_{\\lambda }/(4\\pi r^{2}c)$ is the spectral energy density in the absence of dust extinction, $\\tau _{\\lambda }$ is the optical depth of intervening dust.", "The mean wavelength of the reddened stellar spectrum is given by Equation (REF ) with $u_{\\lambda }(T_{\\star })\\rightarrow u_{\\lambda }(T_{\\star })e^{-\\tau _{\\lambda }}$ .", "For massive stars, due to dominance of UV photons, the stellar radiation is mostly absorbed by a thin shell of visual extinction of $\\sim 1$ beyond the sublimation front (see Figure REF )." ], [ "Outer envelope: irradiated by thermal emission from hot dust", "Dust grains in the hot dust shell just beyond the sublimation front are irradiated by direct stellar radiation and thermal emission from the hot dust shell (see Figure REF ).", "Assuming that the hot dust shell emits as a black body, the total luminosity emitted by the hot shell is equal to the bolometric luminosity $L_{\\rm tot}$ , which has a specific luminosity of $L_{\\rm shell,\\nu }=4\\pi R_{\\rm shell}^{2}F_{\\nu }=4\\pi R_{\\rm shell}^{2}\\pi B_{\\lambda }(T_{\\rm shell}),$ where $R_{\\rm shell}$ is the radius of the hot dust shell, and $F_{\\nu }=\\pi B_{\\nu }(T_{\\rm shell})$ is the spectral emergent flux from the thin shell.", "The thin hot dust shell is assumed to have temperature of $T=T_{\\rm shell}$ , given by $L=\\int L_{\\rm shell,\\nu }d\\nu =4\\pi R_{\\rm shell}^{2} \\sigma T_{\\rm shell}^{4}.$ The spectral energy density of thermal dust emission at distance $r$ is given by $u_{\\lambda ,\\rm shell}=\\frac{4\\pi R_{\\rm shell}^{2}\\pi B_{\\lambda }(T_{\\rm shell})}{4\\pi r^{2}c}.$ The mean wavelength of the reddened stellar spectrum is given by Equation (REF ) with $u_{\\lambda }(T_{\\star })\\rightarrow u_{\\lambda ,\\rm shell}e^{-\\tau _{\\lambda }}$ .", "The mean wavelength of the hot dust emission is $\\bar{\\lambda }_{\\rm shell}= 0.53\\rm \\,{\\rm K}/T_{\\rm shell}\\sim 5.3\\,{\\mu \\rm {m}}(1000\\,{\\rm K}/T_{\\rm shell})$ .", "Thus, NIR-MIR emission from hot dust is important for disruption of grains in the outer layer, which are important for large grains only of $a>\\bar{\\lambda }_{\\rm shell}/2\\sim 2.7\\,{\\mu \\rm {m}}$ .", "Smaller grains can also be disrupted if the luminosity is sufficiently large.", "Table: Best-fit parameters of the radiation strength and mean wavelength to numerical calculations for different stellar temperatures." 
], [ "Grain Rotational Disruption", "According to the theory described in Section , to calculate the grain disruption size by RATD, we first need to find the strength and mean wavelength of the radiation field in the protostellar envelope.", "The column density of gas obscuring the massive protostar at a distance $r$ from the inner radius of the dust cocoon, $r_{\\rm in}$ , is given by $N_{\\rm H}(r)&=&\\int _{r_{\\rm in}}^{r} n_{\\rm H}(r^{\\prime })dr^{\\prime }\\\\&=&\\frac{n_{\\rm in}r_{\\rm in}}{p-1}\\left[1- \\left(\\frac{r}{r_{\\rm in}}\\right)^{-p+1}\\right],\\nonumber \\\\&\\simeq &1.5\\times 10^{23}n_{\\rm in,8}r_{\\rm in,2}\\nonumber \\\\&&\\times \\left(\\frac{1}{p-1}\\right)\\left[1-\\left(\\frac{r}{r_{\\rm in}}\\right)^{-p+1}\\right]\\,{\\rm cm}^{-2},\\nonumber $ where $p=3/2$ (see Eq.", "REF ), $r_{\\rm in}=r_{\\rm sub}$ (Eq.", "REF ), $n_{\\rm in,8}=n_{\\rm in}/10^{8}\\,{\\rm cm}^{-3}$ and $r_{\\rm in,2}=r_{\\rm in}/100\\,{\\rm au}$ .", "The wavelength-dependent dust extinction is described by $A_{\\lambda ,\\star }=1.086\\tau (\\lambda )$ , and the visual extinction of the central protostar is related to the column density as $A_{V,\\star }/N_{\\rm H}=R_{V}/(5.8\\times 10^{21}\\,{\\rm cm}^{-2})$ (see [8]).", "The visual extinction measured from the protostar to a radial distance $r$ in the envelope is then given by $A_{V,\\star }(r)&=&\\left(\\frac{N_{\\rm H}(r)}{5.8\\times 10^{21}\\,{\\rm cm}^{-2}}\\right)R_{V},\\nonumber \\\\&=&\\frac{A_{V,in}}{p-1}\\left[1-\\left(\\frac{r}{r_{\\rm in}}\\right)^{-p+1}\\right],~~~$ where $A_{V,in}=n_{\\rm in}r_{\\rm in}R_{V}/(5.8\\times 10^{21}\\,{\\rm cm}^{-2})\\approx 129n_{\\rm in,8}r_{\\rm in,2}R_{V}/5$ , and $A_{V,\\star }(r)=0$ for $r<r_{\\rm in}$ .", "Due to dust extinction, the intensity of the stellar radiation field decreases but its mean wavelength increases with visual extinction $A_{V,\\star }$ .", "Following [31], the radiation strength of the reddened radiation field at $A_{V,\\star }$ from the source can be described by $U=\\frac{U_{\\rm source}}{1+c_{1}A_{V,\\star }^{c_{2}}}=\\frac{U_{\\rm in}}{1+c_{1}A_{V,\\star }^{c_{2}}}\\left(\\frac{r}{r_{\\rm in}}\\right)^{-2},$ where $U_{\\rm source}$ is the strength of the radiation field (e.g., direct stellar radiation or dust emission from the hot shell) at radial distance $r$ in the absence of dust extinction, $U_{\\rm in}=L_{\\star }/(4\\pi r_{\\rm in}^{2}cu_{\\rm ISRF})$ is the radiation strength at $r=r_{\\rm in}$ , and $c_{1},c_{2}$ are the fitting parameters.", "The mean wavelength of the attenuated stellar spectrum is $\\bar{\\lambda }=\\bar{\\lambda }_{\\rm source}(1+c_{3}A_{V,\\star }^{c_{4}}),$ where $c_{3}$ and $c_{4}$ are the fitting parameters.", "As in [31], we perform a least chi-square fitting of $U/U_{\\rm source}$ using Equation (REF ) and $\\bar{\\lambda }/\\bar{\\lambda }_{\\rm source}$ using Equation (REF ) to their numerical values to obtain the best-fit parameters.", "The best-fit parameters and their uncertainties are shown in Table REF .", "Figure REF shows the decrease of $U$ with $A_{V,\\star }$ for both direct stellar radiation (upper panel) and IR radiation from the hot dust shell (lower panel).", "As shown, more than $99\\%$ of radiation energy from hot stars are absorbed within a thin layer of $A_{V}<5$ .", "This is understandable because such hot stars emit mostly at UV wavelengths and $A_{\\rm UV}> 2A_{V}$ .", "Thus, the radiation strength of hot protostars decreases as $U(A_{V})/U_{\\rm source}< e^{-2A_{V}}< 10^{-2}$ at $A_{V}>2$ .", "For the 
radiation field of the hot dust shell, the decrease of $U$ is much slower than in the case of hot stars due to the visual extinction at longer wavelengths.", "Assuming the gas-dust thermal equilibrium and using Equation (REF ), one obtains the gas temperature as a power-law $T_{{\\rm gas}}=T_{\\rm in}\\left(\\frac{r}{r_{\\rm in}}\\right)^{-q}(1+c_{1}A_{V,\\star }^{c2})^{-q/2},$ where $T_{\\rm in}= T_{d,0}U_{\\rm in}^{1/(4+\\beta )}\\,{\\rm K}$ is the grain temperature at $r_{\\rm in}$ and $q=2/(4+\\beta )$ where $T_{d,0}$ is the grain temperature at $U=1$ , with $T_{d,0}\\approx 16.4\\,{\\rm K}$ and $\\beta \\approx 2$ for silicates.", "The assumption of gas-dust thermal equilibrium for the protostellar core is invalid in the photodissociation region (PDR) around high-mass protostars where gas heating by photoelectric effect is important.", "Following [31], the disruption size at visual extinction $A_{V,\\star }$ from the source is, $a_{\\rm disr}&=&\\left(\\frac{3.2n_{\\rm H}\\sqrt{2\\pi m_{\\rm H}kT_{\\rm gas}}}{3\\gamma u_{\\rm rad}\\bar{\\lambda }^{-2}}\\right)^{1/2}\\left(\\frac{S_{\\rm max}}{\\rho }\\right)^{1/4}(1+F_{\\rm IR})^{1/2}\\nonumber \\\\&\\simeq & 0.35 \\hat{\\rho }^{-1/4}S_{\\max ,7}^{1/4}\\left(\\frac{U_{\\rm in,6}}{n_{\\rm in,8}T_{1,2}^{1/2}}\\right)^{-1/2}\\left(\\frac{\\bar{\\lambda }_{\\rm source}}{1.2\\,{\\mu \\rm {m}}}\\right)\\nonumber \\\\&&\\times (1+c_{1}A_{V,\\star }^{c_{2}})^{(1-q/4)/2}(1+c_{3}A_{V,\\star }^{c_{4}})\\nonumber \\\\&&\\times \\left(\\frac{r}{r_{\\rm in}}\\right)^{(2-p-q/2)/2}(1+F_{\\rm IR})^{1/2}\\,{\\mu \\rm {m}},$ where $U_{\\rm in,6}=U_{\\rm in}/10^{6}$ .", "The RATD effect occurs only if $a_{\\rm disr}<a_{\\rm trans,\\star }=\\bar{\\lambda }/2.5$ because grains larger than $a_{\\rm trans,\\star }$ have $\\omega _{\\rm RAT}$ decreasing with $a$ (Eq.", "REF ).", "The maximum size for RATD is given by Equation (REF ), $a_{\\rm disr,max}&=&\\frac{3\\gamma u_{\\rm rad}\\bar{\\lambda }}{64n_{\\rm H}\\sqrt{2\\pi m_{\\rm H}kT_{\\rm gas}}}\\left(\\frac{S_{\\rm max}}{\\rho }\\right)^{-1/2}(1+F_{\\rm IR})^{-1}\\nonumber \\\\&\\simeq &0.7\\hat{\\rho }^{1/2}S_{\\max ,7}^{-1/2}\\left(\\frac{U_{\\rm in,6}}{n_{\\rm in,8}T_{1,2}^{1/2}}\\right)\\left(\\frac{\\bar{\\lambda }_{\\rm source}}{1.2\\,{\\mu \\rm {m}}}\\right)\\nonumber \\\\&&\\times (1+c_{1}A_{V,\\star }^{c_{2}})^{-(1-q/4)}(1+c_{3}A_{V,\\star }^{c_{4}})\\nonumber \\\\&&\\times \\left(\\frac{r}{r_{\\rm in}}\\right)^{-(2-p-q/2)}(1+F_{\\rm IR})^{-1}~\\,{\\mu \\rm {m}}.$ Due to a high gas density of the protostellar core, Equation (REF ) implies $F_{\\rm IR}\\ll 1$ .", "Thus, the $1+F_{\\rm IR}$ term in the above equations can be ignored, so that we can obtain analytical results for the disruption size $a_{disr}$ .", "The above equations can be used to obtain the disruption size by the direct stellar radiation and the infrared emission from the hot dust shell (see Eq.", "REF ).", "To check the validity of our analytical results, we will numerically calculate the alignment size (disruption size) using $\\omega _{\\rm RAT}$ from Equation (REF ) where $\\Gamma _{\\rm RAT}$ is numerically computed using Equation (REF ) and apply the criteria for grain alignment (disruption) for the stellar radiation using the reddening law (Eq.", "REF ).", "The obtained results are referred to as numerical results.", "Note that for numerical calculations, the IR damping is considered (cf.", "analytical results).", "We use Equations (REF and REF ) to calculate the disruption by both the stellar radiation and by the IR 
emission from the hot dust shell.", "For the stellar radiation, we take $T_{\\rm source}=T_{\\star }$ and $T_{\\rm source}=T_{\\rm shell}$ for the IR emission from the hot shell.", "Figure: Decrease of the radiation strength due to dust extinction for the stellar radiation and the emission from hot dust.", "Most of radiation from the hot stars (>90%>90\\%) are absorbed within A V ∼1A_{V}\\sim 1, producing a hot dust thin shell.", "Radiation from the hot dust shell can penetrate deeper into the envelope (lower panel)." ], [ "Grain disruption size", "We first calculate $a_{\\rm disr}$ as a function of the radius $r$ from the central massive star for the different model parameters shown in Table REF .", "We fix the protostar mass of $M_{\\star }=60M_{\\odot }$ and $100M_{\\odot }$ and the total stellar luminosity of $L_{\\rm tot}$ .", "The accretion rate is varied between $10^{-4}-5\\times 10^{-3}M_{\\odot }{\\rm yr}^{-1}$ , which results in the different density profiles (Eq.", "REF ).", "We assume that large grains have a composite structure and adopt $r_{0}=0.1\\,{\\mu \\rm {m}}$ , yielding the typical tensile strength of $S_{\\rm max}=10^{5}\\,{\\rm erg}\\,{\\rm cm}^{-3}$ .", "This assumption is consistent with the popular paradigm of grain evolution in which grains grow in dense clouds due to coagulation, resulting in composite/fluffy grain structure ([46]).", "We also explore the possibilities that large grains are made of smaller monomers ($r_{0}<0.1\\,{\\mu \\rm {m}}$ ) that have a larger tensile strength of $S_{\\rm max}=10^{7}\\,{\\rm erg}\\,{\\rm cm}^{-3}$ .", "We consider only silicate grains, but our present theory can be generalized for other dust compositions because RATs are insensitive to dust compositions ([42]; [17]).", "We calculate the disruption size for both the stellar radiation and infrared emission from the hot dust shell.", "We find that disruption by stellar radiation is negligible because the UV radiation is significantly decreased at $A_{V,\\star }\\sim 5$ .", "In the following, we only discuss the results induced by the hot dust shell.", "Table: Model ParametersFigure: Variation of the disruption size obtained from analytical equations and numerical methods, as a function of visual extinction for different accretion rates M ˙=5×10 -3 -10 -4 M ⊙ yr -1 \\dot{M}=5\\times 10^{-3}-10^{-4}M_{\\odot }{\\rm yr}^{-1}, assuming S max =10 5 erg cm -3 S_{\\max }=10^{5}\\,{\\rm erg}\\,{\\rm cm}^{-3} (left panel) and 10 7 erg cm -3 10^{7}\\,{\\rm erg}\\,{\\rm cm}^{-3} (right panel).", "In each panel, the lower and upper lines show a disr a_{\\rm disr} and a disr , max a_{\\rm disr,max}, respectively, and the space constrained by a disr ,a disr , max a_{\\rm disr},a_{\\rm disr,max} determine the disruption zone.", "RATD is considered effective if a disr <a max , orig a_{\\rm disr}<a_{\\rm max, orig}.", "RATD efficiency increases toward the central star due to increase of the radiation field.", "RATD efficiency is stronger for smaller accretion rate.Figure REF shows the variation of $a_{\\rm disr}$ and $a_{\\rm disr,max}$ obtained from our analytical formulae (solid lines) with numerical results (dashed lines) as functions of $A_{V,\\star }$ , assuming the typical tensile strength of $S_{\\max }=10^{5}\\,{\\rm erg}\\,{\\rm cm}^{-3}$ for large composite grains of $a>0.1\\,{\\mu \\rm {m}}$ (see Eq.", "REF ).", "One can see that the grain disruption size is larger for higher accretion rate $\\dot{M}$ due to higher gas density (Eq.", "REF ) which results in stronger rotational 
damping.", "The disruption size decreases rapidly toward the central protostar due to increasing radiation intensity.", "For very high accretion rate of $\\dot{M}>10^{-3}M_{\\odot }{\\rm yr}^{-1}$ , grain disruption is inefficient (i.e., $a_{\\rm disr}>1\\,{\\mu \\rm {m}}$ ) in the outer region due to large gas density.", "For accretion rates of $\\dot{M}\\lesssim 10^{-3}M_{\\odot }{\\rm yr}^{-1}$ , micron-sized grains ($a> 1\\,{\\mu \\rm {m}}$ ) can be disrupted even in the outer envelope, therefore, dust in the entire envelope is completely processed by RATD and their size distribution returns to that of the diffuse ISM with $a_{\\rm max}\\sim 0.1\\,{\\mu \\rm {m}}$ (see Figure REF )." ], [ "Radiation Pressure Opacity in the presence of RATD", "Due to RATD, the grain size distribution is modified from the original size distribution which is plausibly that of the prestellar cores.", "Because grain growth is efficient in dense cores such as in dense protostellar cores and protostellar disks of very high densities of $n_{{\\rm H}}\\sim 10^{7}-10^{8}\\,{\\rm cm}^{-3}$ given by Equation (REF ), we assume the maximum size of the original dust is $a_{\\rm max, orig}=10\\,{\\mu \\rm {m}}$ ([19].", "In the presence of RATD, the maximum size $a_{\\rm max}$ is determined by $\\min (a_{\\rm disr},a_{\\max ,orig}$ because $a_{\\rm disr,max}>0.5\\,{\\mu \\rm {m}}$ .", "Figure: Radiation pressure opacity per dust mass as a function of A V A_{V} from the central star for M ☆ =100M ⊙ M_{\\star }=100M_{\\odot } and different accretion rates, for f d/g =f d/g ( MW )=0.01f_{d/g}=f_{d/g}(\\rm MW)=0.01 (red lines) and f d/g =f d/g ( MW )/5f_{d/g}=f_{d/g}(\\rm MW)/5 (blue lines), assuming S max =10 5 S_{\\rm max}=10^{5} and 10 7 erg cm -3 10^{7}\\,{\\rm erg}\\,{\\rm cm}^{-3}.", "The opacity κ pr \\kappa _{\\rm pr} decreases rapidly from the outer envelope (A V,☆ ∼300-500A_{V,\\star }\\sim 300-500) toward the central star and becomes smaller than its maximum value at the outer envelope by a factor ∼3\\sim 3.Using $a_{\\rm disr}(r)$ from RATD, we compute the radiation pressure opacity using Equation (REF ) with $u_{\\lambda }$ given by the hot dust shell.", "Figure REF shows the decrease of the radiation pressure opacity with the visual extinction for two values of the tensile strength, $S_{\\max }=10^{5}\\,{\\rm erg}\\,{\\rm cm}^{-3}$ (upper panel) and $S_{\\max }=10^{7}\\,{\\rm erg}\\,{\\rm cm}^{-3}$ (right panel) for two cases of the standard (red lines) and reduced (blue lines) dust-to-gas ratio, $f_{d/g}$ .", "For larger accretion rate of $\\dot{M}=5\\times 10^{-3}M_{\\odot }{\\rm yr}^{-1}$ , disruption does not occur at large $A_{V}$ and the maximum opacity of $\\kappa _{\\rm pr}\\sim 13.2 \\,{\\rm cm}^{2}\\,{\\rm g}^{-1}$ at the outer envelope ($A_{V,\\star }\\sim 500$ ).", "It decreases to the minimum value of $\\kappa _{\\rm pr}\\sim 4.7\\,{\\rm cm}^{2}\\,{\\rm g}^{-1}$ at $A_{V}\\sim 50$ .", "For lower accretion rate, $\\dot{M}\\lesssim 10^{-3}M_{\\odot }{\\rm yr}^{-1}$ , the disruption occurs even at larger $A_{V}$ and $\\kappa _{\\rm pr}$ decreases significantly.", "Figure: Same as Figure but for M ☆ =60M ⊙ M_{\\star }=60M_{\\odot } and L tot =5×10 5 L ⊙ L_{\\rm tot}=5\\times 10^{5}L_{\\odot }." 
], [ "Implications for Massive Star Formation", "Infalling gas in the envelope is subject to gravitational force and radiation pressure on dust.", "The ratio of radiation pressure force on dust to gravity is given by $\\Lambda = \\frac{F_{\\rm rad}}{F_{\\rm grav}}&=&\\frac{\\kappa _{\\rm pr}L_{\\rm tot}/(4\\pi r^{2}c)}{GM_{\\star }/r^{2}}=\\frac{L_{\\rm tot}\\kappa _{\\rm pr}}{4\\pi G M_{\\star }c}\\nonumber \\\\&=&3.9\\frac{f_{d/g}}{0.01}\\left(\\frac{\\kappa _{\\rm pr}}{5\\,{\\rm g}\\,{\\rm cm}^{-2}}\\right)\\left(\\frac{L_{\\rm tot}}{10^{6}L_{\\odot }}\\right)\\left(\\frac{M_{\\star }}{100M_{\\odot }}\\right)^{-1},$ where $\\kappa _{\\rm pr}$ is given by Equation (REF ).", "To form a massive star via accretion, the gravity must exceed the radiation pressure, i.e., $\\Lambda <1$ , which corresponds to $\\frac{L_{\\rm tot}}{M_{\\star }}<\\frac{L_{\\rm Edd,d}}{M_{\\star }}=2564.1\\frac{f_{d/g}}{0.01}\\left(\\frac{5\\,{\\rm g}\\,{\\rm cm}^{-2}}{\\kappa _{\\rm pr}}\\right) \\frac{L_{\\odot }}{M_{\\odot }},$ where $L_{\\rm Edd,d}$ is the Eddington luminosity, maximum luminosity that the envelope is not blown away.", "However, massive stars of $M_{\\star }>20M_{\\odot }$ already have $L_{\\rm tot}/M_{\\star }>2500$ .", "Therefore, if the dust opacity is similar to the standard ISM, accretion cannot form stars above $20M_{\\odot }$ .", "Using $\\kappa _{\\rm pr}(r)$ obtained from the previous section, as shown in Figure REF , we now compute $\\Lambda =F_{\\rm rad}/F_{\\rm grav}$ at several distances $r$ due to RATD, assuming the different values of $\\dot{M}$ , $M_{\\star }=100M_{\\odot }$ , same parameters as in WC87 (Figure 3).", "Figure: The ratio of radiation pressure vs. gravity, Λ\\Lambda , for the standard f g/d f_{g/d} (upper panel) and f d/g =f d/g ( MW )/5f_{d/g}=f_{d/g}(\\rm MW)/5 assuming S max =10 5 erg cm -3 S_{\\max }=10^{5}\\,{\\rm erg}\\,{\\rm cm}^{-3}.Figure: Same as Figure but for for M ☆ =60M ⊙ M_{\\star }=60M_{\\odot } and L tot =5×10 5 L ⊙ L_{\\rm tot}=5\\times 10^{5}L_{\\odot }.Figure REF shows $\\Lambda $ as a function of the radial distance for the different tensile strengths , assuming the typical $M_{\\star }=100M_{\\odot }$ for $f_{d/g}=f_{d/g}(\\rm MW)=0.01$ and $f_{d/g}=f_{d/g}(\\rm MW)/5$ .", "One can see that $\\Lambda $ decreases rapidly with $r$ due to the effect of RATD.", "A smaller accretion rate $\\dot{M}$ can reduce $\\Lambda $ at a large distance, and a higher $\\dot{M}$ only reduces $\\Lambda $ in the inner region near the sublimation front.", "For the case of $f_{d/g}=f_{d/g}(\\rm MW)/5$ , one can see that $\\Lambda <1$ for $A_{V,\\star }<100-200$ .", "For a low accretion rate of $\\dot{M}\\lesssim 0.001M_{\\odot }{\\rm yr}^{-1}$ , one has $\\Lambda <1$ for $S_{\\max }=10^{5}\\,{\\rm erg}\\,{\\rm cm}^{-3}$ .", "Therefore, the radiation pressure can be overcome if grains in dense clouds are composite.", "For more compact grains with $S_{\\max }=10^{7}\\,{\\rm erg}\\,{\\rm cm}^{-3}$ , grain disruption is less efficient, and one has $\\Lambda <1$ only for $A_{V,\\star }<100$ .", "Moreover, the decrease of $f_{d/g}$ will enable RATD at large distances from the source, which reduces the dust opacity.", "Figure REF shows the same results but for lower stellar mass and luminosity with $M_{\\star }=60M_{\\odot }$ and $L_{\\rm tot}=5\\times 10^{5}L_{\\odot }$ .", "The main features are similar to those in Figure REF , but $\\Lambda $ is lower due to smaller luminosity $L_{\\rm tot}$ ." 
], [ "Dust properties in massive protostellar envelopes", "Dust properties are crucially important for understanding the role of radiation pressure feedback in massive star formation.", "Unfortunately, the dust properties are not well constrained in protostellar envelopes.", "Large grains are expected to be present in the prestellar cores due to grain growth implied by theory [19]) and observations ([53]; [44]; [55]; [65]).", "However, under intense radiation field from massive protostars, dust properties are expected to change.", "Recent advances in dust physics reveal that dust grains could be disrupted by the RATD mechanism ([30]; [21]) beyond the sublimation front where the temperature is much below $T_{\\rm sub}$ .", "Using the RATD mechanism, we showed that micron-sized grains are rapidly disrupted into smaller ones by radiative torques.", "Therefore, the abundance of large grains is reduced in the inner envelope of protostars, while the abundance of small grains increases toward the central protostar.", "The effect is efficient for realistic accretion rates $\\dot{M}$ , and only inefficient for very high accretion rate of $\\dot{M}>5\\times 10^{-3}M_{\\odot }{\\rm yr}^{-1}$ .", "Modeling of observed spectral energy density from the dense regions around O stars usually infer the grain size distribution similar to the MRN with $a_{\\max }=0.25\\,{\\mu \\rm {m}}$ ([61]; [62]; [5]).", "This is unexpected because grains grow efficiently to micron sizes in dense cores where massive stars form.", "Therefore, RATD is a plausible mechanism to resolve this tension." ], [ "Reduced radiation pressure by RATD and massive star formation", "The radiation pressure on dust is thought to be the major barrier for the accretion by gravity, which prevents the formation of very massive stars ([41]).", "Reduction of radiation pressure is required to form massive stars.", "Previous studies assuming the MRN size distribution ([61]; [62] suggested that if large grains could be removed and the dust mass is reduced by a factor of 4, then the radiation pressure can be circumvented and form stars of $M_{\\star }>20M_{\\odot }$ .", "We first found that with the realistic grain size distribution in the protostellar envelopes, the radiation pressure opacity is several times larger than obtained with the MRN distribution (see Figure REF ), which strengthens the radiation pressure problem.", "Thanks to RATD, large grains are disrupted into small ones, and dust becomes smaller toward the star.", "As a result, the radiation pressure opacity in the envelope irradiated by the hot dust shell's radiation is found to decrease toward the central protostar.", "Accordingly, the ratio of the radiation pressure to the gravitational force decreases toward the central star due to RATD.", "Nevertheless, to form very massive stars, the dust-to-gas ratio still needs to be reduced by a factor of $\\sim 5$ .", "The physical mechanism underlying such dust destruction is unknown." 
], [ "Effect of hot dust emission on grain alignment", "In addition to rotational disruption (RATD effect), RATs are known to induce grain alignment ([13]; [42]; [23]; [26]).", "The relation between RAT alignment and RATD is discussed in detail by [43].", "Observations of RAT alignment are presented in a review by [3].", "Grain alignment by protostellar radiation in dense clouds is studied in detail by [31] for low-mass and high-mass protostars.", "The authors found that stellar radiation can align grains at large $A_{V,\\star }$ from the central source.", "However, the authors only consider the stellar temperature of $T_{\\star }\\lesssim 10^{4}\\,{\\rm K}$ where most stellar radiation energy is concentrated in optical-NIR.", "We now discuss the alignment by stellar radiation from very massive protostars with $T_{\\star }\\gtrsim 2\\times 10^{4}\\,{\\rm K}$ for which stellar radiation is substantially absorbed within a narrow region of $A_{V,\\star }\\lesssim 5$ (see Figure REF ), and the reemission by the hot dust shell plays a more important role in aligning grains in the envelope.", "Following [31], the minimum size of aligned grains (hereafter alignment size) at visual extinction $A_{V,\\star }$ from the protostar, $a_{\\rm align} &=&\\left(\\frac{4n_{\\rm H}T_{\\rm gas}}{3\\gamma u_{\\rm rad}\\bar{\\lambda }^{-2}}\\right)^{2/7}\\left(\\frac{15m_{\\rm H}k^{2}}{4\\rho }\\right)^{1/7}(1+F_{\\rm IR})^{2/7}\\nonumber \\\\&\\simeq &0.031\\hat{\\rho }^{-1/7} \\left(\\frac{U_{\\rm in,6}}{n_{\\rm in,8}T_{\\rm in,2}}\\right)^{-2/7}\\left(\\frac{\\bar{\\lambda }_{\\rm source}}{1.2\\,{\\mu \\rm {m}}}\\right)^{4/7} \\nonumber \\\\&&\\times (1+c_{1}A_{V,\\star }^{c_{2}})^{(2-q)/7} (1+c_{3}A_{V,\\star }^{c_{4}})^{4/7}\\nonumber \\\\&&\\times \\left(\\frac{r}{r_{\\rm in}}\\right)^{2(2-p-q)/7} (1+F_{\\rm IR})^{2/7}\\,{\\mu \\rm {m}},$ where $\\gamma =1$ is adopted for the unidirectional field from the protostar.", "Figure REF shows the alignment size due to infrared emission from the hot dust shell of $T_{d}=1000\\,{\\rm K}$ obtained using Equation (REF ) by setting $F_{\\rm IR}=0$ (i.e., analytical results) and numerical results obtained using numerical method (see [31]).", "Grains can still be aligned at large distances to the reprocessed stellar radiation and become more efficient when moving toward the star.", "We also calculate the alignment by direct stellar radiation and find that it is effective only within $A_{V,\\star }<10$ , i.e., in a much narrow region compared to alignment by hot dust shell.", "Therefore, near- to mid-IR emission from hot dust could be important for alignment of grains at large $A_{V,\\star }$ from hot OB stars with temperatures of $T_{\\star }\\gtrsim 2 \\times 10^{4}\\,{\\rm K}$ .", "Figure: Grain alignment size as a function of the visual extinction to the star by reemission from the hot dust, assuming M ☆ =100M ⊙ M_{\\star }=100M_{\\odot } and L ☆ =10 6 L ⊙ L_{\\star }=10^{6}L_{\\odot }.", "Both analytical results (Eq.", "with F IR =0F_{\\rm IR}=0) and numerical results are shown.", "The alignment size is slightly larger for numerical results due to the more important contribution of IR damping when the accretion rate is lower.It is worthy to note that stellar winds might reduce dust mass due to sputtering in the shocked region.", "However, during the massive star formation, the ram pressure by infalling gas can be much larger than that by the stellar winds because the accretion rate is several orders of magnitude larger than the mass loss rate by stellar winds.", 
"Thus, stellar winds are diminished rapidly by infalling gas." ], [ "Effect of RATD in protostellar disks", "To get insight into the effect of RATD in the protostellar envelope on the radiation pressure feedback, we have assumed a spherical collapse toward the central protostar for our numerical calculations.", "Here we discuss the effect of RATD in the protostellar disks around the massive protostars.", "Both observations (see [4]) and numerical simulations (e.g., [36]) establish that massive star formation proceeds with the formation of an accretion disk.", "The formation of accretion disks is an effective way to overcome radiation pressure barrier because radiation can easily escape along the rotation axis due to its low density.", "However, the exact radius of accretion disks is uncertain.", "While hydrodynamics simulations by [34] reveal the rapid formation of large, rotationally-supported disks due to gravity, simulations including stellar feedback and magnetic fields by [54] show that the large scale disks cannot form as expected from the magnetic braking effect ([2]).", "Misaligned magnetic field and rotation axis as well as non-ideal MHD effects are expected to form a small accretion disk (see [63] and [66] for recent reviews).", "We performed calculations for an extended envelope from $r_{in}=r_{sub}\\sim 100-150$ au for $L\\sim 10^{6}L_{\\odot }$ to $r_{out}\\sim 10^{4}$ au (see Eq.", "REF ).", "For the typical disk radius of $r_{disk}<1000$ au, our results shown in Figure REF perhaps remain valid for $r>10r_{in}$ , but become inapplicable for the inner region of $r<10r_{in}$ because the gas density is much larger.", "For example, for a protostellar disk formed from hydrodynamic simulations in [34], the gas density is $n_{{\\rm H}}\\sim 10^{11} \\,{\\rm cm}^{-3}$ at $\\sim 100$ au and $\\sim 10^{8}\\,{\\rm cm}^{-3}$ at $\\sim 1000$ au from the central star (see their Figure 11).", "For a disk from simulations with the turbulence and stellar feedback in [54], the density is $n_{{\\rm H}}\\sim 10^{8}\\,{\\rm cm}^{-3}$ at 1000 au from the star in the absence of the magnetic field.", "In the presence of magnetic fields, the gas density is much smaller at the same radius (see [54]).", "These gas densities are larger than the density given by our density profile (Eq.", "REF ), which implies $n_{{\\rm H}}\\sim 10^{10}\\,{\\rm cm}^{-3}$ for $\\dot{M}=0.005 M_{\\odot }~yr^{-1}$ at $r\\sim 100 $ au.", "Therefore, our results for the high accretion rate of $\\dot{M}=0.005M_{\\odot }yr^{-1}$ may be not too far from the results obtained for a realistic protostellar disk.", "It is worth to mention that [60] performed a detailed numerical modeling of RATD for the circumstellar disk of radius $\\sim 300$ au around low mass stars of luminosity $L_{\\star }\\sim 35L_{\\odot }$ with the radiative transfer treated by RADMC-3D.", "The authors found that the RATD is efficient in the disk surface and intermediate layer only, but is inefficient in the disk mid-plane.", "Therefore, with the massive protostar luminosity of much higher luminosity of $L_{\\star }\\sim 10^{6}L_{\\odot }$ , the effect of RATD would be more efficient than in the disk around low and intermediate mass stars.", "We will carry out a detailed modeling of RATD for protostellar disks around massive stars in a follow-up paper." 
], [ "Summary", "We study the dynamical effects of intense radiation feedback from massive protostars on the dust in the protostellar envelope and explore its implications for massive star formation.", "Our main findings are summarized as follows: The dust radiation pressure opacity depends crucially on the grain size distribution (described by the maximum size $a_{\\max }$ ) and the radiation field spectrum.", "We study the effect of radiation force and torques on the dust and find that grain rotational disruption by RATs (i.e., RATD) is always faster than acceleration by radiation pressure.", "Thus, dust properties in intense radiation are radically different from the dust in starless cores prior to the onset of star formation.", "We find that large, micron-sized grains of porous structures, which are expected in dense protostellar envelopes due to grain evolution, are rapidly disrupted into smaller ones by IR radiation from the hot dust shell heated by the intense stellar radiation.", "This effect transforms the original dust size distribution into the type of the diffuse ISM.", "The disruption is efficient in the dust cocoon and increases toward the inner region.", "We calculate dust radiation pressure opacity using the size distribution determined by RATD.", "The resulting IR radiation pressure opacity decreases with distance to the central star and can be reduced by a factor of $\\sim 3$ compared to the original opacity without RATD, whereas the UV opacity increases significantly.", "Therefore, MIR photons from the hot dust shell can escape from the envelope more efficiently.", "Radiation pressure feedback is less efficient compared to the realistic model without dust disruption by RATs.", "However, it still requires the reduction of the dust mass by a factor of $sim 5$ to form very massive stars.", "Thus, the radiation pressure barrier is indeed a challenge for massive star formation in the spherical collapse scenario.", "Dust properties in the massive star-forming core are radically different from the standard ISM dust due to rotational disruption by RATs.", "An accurate understanding of radiation pressure on massive star formation requires a detailed study of dust physics accounting for the new effects.", "We are grateful to the anonymous referee for a thorough and useful report.", "T.H.", "acknowledges the support by the National Research Foundation of Korea (NRF) grants funded by the Korea government (MSIT) through the Mid-career Research Program (2019R1A2C1087045)." 
], [ "Grain Rotational Damping", "The well-known damping process for a rotating grain is sticking collision with gas species (atoms and molecules), followed by their thermal evaporation.", "Thus, for a gas with He of $10\\%$ abundance, the characteristic damping time is $\\tau _{{\\rm gas}}&=&\\frac{3}{4\\sqrt{\\pi }}\\frac{I}{1.2n_{\\rm H}m_{\\rm H}v_{\\rm th}a^{4}}=\\frac{2\\sqrt{\\pi }\\rho a}{6n_{{\\rm H}}\\sqrt{2kT_{{\\rm gas}}m_{{\\rm H}}}}\\nonumber \\\\&\\simeq & 2.6a_{-5}\\hat{\\rho }\\left(\\frac{10^{6}\\,{\\rm cm}^{-3}}{n_{{\\rm H}}}\\right)\\left(\\frac{100\\,{\\rm K}}{T_{{\\rm gas}}}\\right)^{1/2}~{\\rm yr},~~$ where $I=8\\pi \\rho a^{5}/15$ is the grain inertia moment of spherical grain of effective radius $a$ , $v_{\\rm th}=\\left(2k_{{\\rm B}}T_{\\rm gas}/m_{\\rm H}\\right)^{1/2}$ is the thermal velocity of a gas atom of mass $m_{\\rm H}$ in a plasma with temperature $T_{{\\rm gas}}$ and density $n_{{\\rm H}}$ ([12]; [24]).", "The gas damping time is estimated for spherical grains, and we disregard the factor of unity due to grain shape.", "Infrared (IR) photons emitted by the grain carry away part of the grain's angular momentum, resulting in the damping of the grain rotation.", "For strong radiation fields or not very small sizes, grains can achieve equilibrium temperature, such that the IR damping coefficient (see [10]) can be calculated as $F_{\\rm IR}\\simeq 0.12\\left(\\frac{U_{6}^{2/3}}{a_{-5}}\\right)\\left(\\frac{10^{6} \\,{\\rm cm}^{-3}}{n_{{\\rm H}}}\\right)\\left(\\frac{100 \\,{\\rm K}}{T_{{\\rm gas}}}\\right)^{1/2}.$ Other rotational damping processes include plasma drag, ion collisions, and electric dipole emission.", "These processes are mostly important for polycyclic aromatic hydrocarbons (PAHs) and very small grains of radius $a<0.01\\,{\\mu \\rm {m}}$ ([10]; [22]; [27]).", "Thus, the total rotational damping rate by gas collisions and IR emission can be written as $\\tau _{\\rm damp}^{-1}=\\tau _{{\\rm gas}}^{-1}(1+ F_{\\rm IR}).$ For strong radiation fields of $U\\gg 1$ and not very dense gas, one has $F_{\\rm IR}\\gg 1$ .", "Therefore, $\\tau _{\\rm damp}\\sim \\tau _{{\\rm gas}}/F_{{\\rm IR}}\\sim a_{-5}^{2}U^{2/3}$ , which does not depend on the gas properties.", "In this case, the only damping process is caused by IR emission." ] ]
2107.01772
[ [ "Rescaled-Expansive Flows: Unstable Sets and Topological Entropy" ], [ "Abstract In this work we introduce and explore a rescaled-theory of local stable and unstable sets for rescaled-expansive flows and its applications to topological entropy.", "We introduce a rescaled version of the local unstable sets and the unstable points.", "We find conditions for points of the phase space to exhibit non-trivial connected pieces of these type of unstable sets.", "We apply these results to the problem of proving positive topological entropy for rescaled-expansive flows with non-singular Lyapunov stable sets." ], [ "introduction", "The property of expansiveness introduced by R. Utz in 1950 is an important feature of dynamical systems.", "Its great success is in part due to its proximity with the hyperbolic theory and its close relation with many important topics of the dynamical systems theory, such as the stability theory and the entropy theory.", "Very soon, the expansiveness was perceived as source complex for behavior.", "Indeed, many expansive systems share chaotic features, see for instance [1].", "A well established way to measure complexity is the topological entropy.", "It is, in some sense, a measure of how grows the amount of distinct possible states for the system through time.", "In many contexts expansiveness is related to positive topological entropy.", "Indeed, in [8] A. Fathi showed that any expansive homeomorphism has positive topological entropy, if the phase space is rich enough.", "Later, versions of this result were obtained by H. Kato in [12] and by A. Arbiteto, W. Cordeiro and M.J. Pacífico in [3] for CW-expansive homeomorphisms and CW-expansive non-singular flows, respectively.", "Even expansiveness is a broadly studied concept on the setting of homeomorphisms and non-singular flows, it presents several challenges when one deals with singular flows.", "Indeed, a first distinction between the singular case and the non-singular case is that it is possible to define expansive singular flows on surfaces (see [6]).", "In [19] it was proved that the topological entropy of every surface flow must vanishes.", "This implies we only can have expansive flows with positive entropy in a higher dimensional setting.", "Another distinction is the existence of several distinct definitions of expansiveness for singular flows, such as $k^*$ -expansiveness, geometric expansiveness, kinematic expansiveness and rescaled-expansiveness (see [4] and [5] for details).", "This forces us to think carefully about what type of expansiveness is more appropriate to each context.", "We point out that the results in [8], [12] and [3] only use the expansiveness property and topological features of the systems.", "We recall that there are results for the topological entropy of expansive flows, but assuming stronger structures such that, dominated decomposition, singular hyperbolicity, non-uniform hyperbolicity, nice ergordic properties and others, see for instance [2].", "We remark that there are not results for positiveness of the topological entropy of higher dimensional expansive singular flows similar to the results in [8], [12] and [3].", "Some of the reasons for this lack of results are: These results are strongly dependent on the uniform expansiveness property, but expansive singular flows may not satisfy this property.", "The non existence of cross-sections for singularities and the loss of control over the size and the time of the cross sections at regular points.", "Actually, the results of 
entropy results for homeomorphisms and non-singular flows rely strongly on the existence of non-trivial connected local stable and unstable sets.", "Unfortunately, the above listed facts may forbid the existence of such local stable sets, as we will see in section 3.", "To overcome these difficulties we choose to deal with the rescaled-expansiveness property (R-expansiveness for short) introduced by L. Wen and X. Wen in [18].", "This type of expansiveness is suitable to work with flow boxes and gives us a nice control over the holonomy maps between cross-sections through regular points.", "In this setting, we introduce a rescaled version of the classical local stable and unstable sets and investigate the existence of such sets for points of non-singular, compact and invariant sets.", "Before stating our main results, let us briefly introduce the ideas behind the definition of rescaled-unstable sets.", "Let $M$ be a closed manifold and $\\phi _t:M\\rightarrow M$ be a $C^1$ -flow.", "Let $X$ denote the velocity vector field of $\\phi _t$ .", "The spirit of the rescaled-properties is to resize distances proportionally to the size of the vector field $X$ .", "For instance, we say that $y$ is $\\varepsilon $ -rescaled-close to $x$ if $d(x,y)\\le \\varepsilon ||X(x)||$ .", "Using this rescaled distance, L. Wen and X. Wen explored in [18] ways to guarantee a good control over the cross-sections and flow-boxes of singular flows.", "To see this, recall that if $x\\in M$ is a regular point for $\\phi $ , then it defines a local cross-section $N_{\\delta }(x)$ of size $\\delta $ which is transversal to the direction of $X(x)$ .", "In addition, if we consider that $\\delta $ and $t_x$ are small enough, then the tubular flow theorem states that the action of $\\phi _s$ on $N_{\\delta }(x)$ for $0\\le s\\le t_x$ generates a flow-box and a holonomy map between the cross-sections $N_{\\delta }(x)$ and $N_{\\delta }(\\phi _{t_x}(x))$ .", "The hard task here is to handle how small $\\delta $ and $t_x$ must be for this construction to hold.", "In [18] it is proved that we can choose $\\delta $ uniformly on the set of regular points for $\\phi $ if we consider rescaled distances instead.", "In other words, for $\\delta $ small enough we guarantee that the sets $N_{\\delta ||X(x)||}(x)$ are cross-sections for any regular point $x$ and we have that the holonomy maps are well defined for these rescaled cross-sections.", "Following these ideas, we define the local R-unstable set of $x$ with size $\\delta $ and time $t$ as the set $W^{r,u}_{\\delta ,t}(x)$ formed by the points in $N_{\\delta ||X(x)||}(x)$ which are always $\\delta $ -rescaled-close to $x$ during the action of the family of holonomy maps in $N_{\\delta ||X(x)||}(x)$ with time $nt$ , for every negative integer $n$ .", "The local R-stable set $W^{r,s}_{\\delta ,t}(x)$ of $x$ with size $\\delta $ and time $t$ is defined in a similar way, but requiring the $\\delta $ -rescaled-proximity for every positive integer $n$ instead.", "We remark that the previous definition is informal; this is because the precise definition of $R$ -unstable sets is quite technical, so we decided to postpone the precise definition until section 3.", "Let us denote by $CW^{r,s}_{\\delta ,t}(x)$ the connected component of $W^{r,s}_{\\delta ,t}(x)$ which contains $x$ .", "Our first result deals with the existence of connected pieces of local R-stable and R-unstable sets.", "Theorem A Let $\\phi $ be an R-expansive flow, $K\\subset M$ be a compact invariant set without
singularities and suppose that $\\dim (M)>1$ .", "Then for every $\\gamma >0$ small enough, there is some $p\\in K$ such that $CW_{\\delta ,t}^{r,s}(p)\\cap S_{\\gamma ||X(p)||}(p)\\ne \\emptyset \\textrm { or } CW_{\\delta ,t}^{r,u}(p)\\cap S_{\\gamma ||X(p)||}(p)\\ne \\emptyset .$ In the previous theorem, we denote by $S_{\\gamma ||X(p)||}(p)$ the sphere of radius $\\gamma ||X(p)||$ centered at $p$ .", "In contrast with the non-singular case, in the previous result we cannot guarantee the existence of both connected local R-stable and R-unstable sets for $p$ at the same time.", "To recover this result, we define R-stable and R-unstable points and study their implications for the existence of such connected local R-stable sets.", "Theorem B Let $\\phi $ be an R-expansive flow with expansiveness constant $\\delta >0$ and $K\\subset M$ be a non-singular compact invariant set.", "If $K$ does not contain R-stable or R-unstable points, then for any $0<\\varepsilon <\\delta $ , $t>0$ and any $x\\in K$ we have: $CW^{r,s}_{\\varepsilon ,t}(x)\\ne \\lbrace x\\rbrace \\textrm { and } CW^{r,u}_{\\varepsilon ,t}(x)\\ne \\lbrace x\\rbrace .$", "Once we have defined the local R-unstable and R-stable sets and studied their existence, we study how these sets influence the topological entropy of R-expansive flows.", "We first show that these sets can imply positive entropy for flows containing Lyapunov stable sets, as the following result illustrates: Theorem C Let $\\phi $ be an R-expansive flow.", "If there exists a non-singular Lyapunov stable set $\\Gamma \\subset M$ containing a point with a non-trivial piece of connected local R-unstable set, then $h(\\phi )>0$ .", "In particular, if $\\Gamma $ has no R-stable or R-unstable points, then $h(\\phi )>0$ .", "We also prove positivity of the topological entropy for R-expansive flows containing attractors; precisely, we obtain the following result: Theorem D Let $\\phi $ be an R-expansive flow and suppose that $dim(M)>1$ .", "If there exists a non-periodic attractor $\\Gamma \\subset M\\setminus Sing(\\phi )$ , then $h(\\phi )>0$ .", "In [5], A. Artigue investigated the relationship between R-expansiveness and other forms of expansiveness for singular flows.", "In particular, some criteria are obtained for $k^*$ -expansive flows to be R-expansive; as a consequence, we obtain the following corollary: Corollary Let $\\phi $ be a $k^*$ -expansive flow such that $Sing(\\phi )$ is a hyperbolic set.", "Suppose that $dim(M)>1$ .", "If there exists a non-periodic attractor $\\Gamma \\subset M\\setminus Sing(\\phi )$ , then $h(\\phi )>0$ .", "This paper is divided as follows: In section 2 we give the main basic definitions and results that will be used throughout this text.", "In section 3 we introduce the concept of local R-stable and R-unstable sets and prove Theorem REF .", "In section 4 we introduce the concept of R-stable and R-unstable points and prove Theorem REF .", "In section 5 we investigate the entropy of R-expansive flows; in particular, we prove Theorems REF , REF and Corollary REF ."
], [ "Preliminaries", "In this section we state the main concepts and results that we will use in this work.", "Through out this paper $M$ denotes a compact and boundary-less smooth manifold.", "Definition 2.1 A $C^r$ -flow $\\phi $ on $M$ is a $C^r$ -map $\\phi :\\mathbb {R}\\times M \\rightarrow M$ satisfying the following conditions: $\\phi (0,x)=x$ , for every $x\\in M$ .", "$\\phi (t+s,x)=\\phi (t,\\phi (s,x)))$ , for every $t,s\\in \\mathbb {R}$ and every $x\\in M$ .", "If $\\phi $ is a smooth flow, then it generates a velocity vector field $X$ .", "We will always assume that both $X$ and $\\phi $ are $C^r$ with $r\\ge 1$ .", "Let us denote the map $\\phi (t,\\cdot ):M\\rightarrow M$ , when $t$ is fixed by $\\phi _t$ .", "We say that $x\\in M$ is a singularity for $\\phi $ if $X(x)=0$ .", "A point $x\\in M$ is a periodic point if $X(x)\\ne 0$ and there exists $t>0$ such that $\\phi _t(x)=x$ .", "The sets of singularities and periodic points are denoted by $Sing(\\phi )$ and $Per(\\phi )$ , respectively.", "A point $x\\in M$ is a critical point if $x\\in Crit(\\phi )=Sing(\\phi )\\cup Per(\\phi )$ Definition 2.2 Let $\\Gamma $ be a compact and invariant set.", "We say that $\\Gamma $ is Lyapunov Stable if for any $\\varepsilon >0$ , there is some $\\delta >0$ such that if $x\\in B_{\\delta }(\\Lambda )$ , then $\\phi _t(x)\\in B_{\\varepsilon }(\\Lambda )$ , for every $t\\ge 0$ .", "We say that $\\Gamma $ is an attractor if: $\\phi |_{\\Gamma }$ is transitive.", "There is a neighborhood $U$ of $\\Gamma $ satisfying $\\overline{\\phi _t(U)}\\subset U$ for any $t>0$ .", "$\\Gamma =\\cap _{t\\ge 0}\\phi _t(U)$ .", "It is a classical fact that any attractor set is a Lyapunov Stable set, but the converse does not hold.", "The neighborhood on above definition is called the isolating neighborhood of $\\Gamma $.", "We say that $\\Gamma $ is a non-periodic attractor if it is not a periodic orbit.", "A widely used tool to study flows are the tubular flow-boxes.", "Let $x\\in M$ be a regular point for $\\phi $ .", "The normal space of $x$ in $T_xM$ is the set $\\mathcal {N}_x=\\lbrace v\\in T_xM; v \\perp X(x)\\rbrace .$ Let us denote $\\mathcal {N}_x(r)=\\mathcal {N}_x\\cap B_{r}(0)$ , where $B_{r}(0)$ is the ball in $T_xM$ of radius $r$ and centered at 0.", "The tubular flow theorem for smooth flows asserts that for any regular point $x$ there are $\\varepsilon _x>0$ and $r_x>0$ such that the set $N_x(r)=\\exp _x(\\mathcal {N}_x(r_x)) $ is a cross section of time $\\varepsilon _x$ through $x$ , i.e., for any $y\\in N_x(r)$ we have that $\\phi _{[-\\varepsilon _x,\\varepsilon _x]}(y)\\cap N_x(r)=\\lbrace y\\rbrace $ .", "Furthermore, any $y\\in N_x(r_x)$ is regular.", "If $-\\varepsilon _x< t<\\varepsilon _x$ , then the continuity of $\\phi $ implies that for some $\\delta >0$ the points in $N_x(\\delta )$ meet the cross section $N_{\\phi _t(x)}(r_{\\phi _t(x)})$ in a time close to $t$ .", "Thus we define the holonomy map between $N_x(r_x)$ and $N_{\\phi _t(x)}(r_{\\phi _t(x)})$ to be the map: $P_{x,t}:\\mathbb {N}_x(\\delta ) \\rightarrow N_{\\phi _t(x)}(r_{\\phi _t(x)})$ defined by $P_{x,t}(y)=\\phi _t(y)$ , where $t$ is the only $-\\varepsilon _x< t <\\varepsilon _x $ such that $\\phi _t(y) \\in N_{\\phi _t(x)}(r_{\\phi _t(x)}).$ One of the main difficulties in the use of cross-sections for singular flows is the fact that the radius $r_x$ can goes to zero when $x$ approaches some singularity.", "We will fix the following notation that will be used through this entire text 
$N^r_{r}(x)=N_x(r||X(x)||).$ The next result allows us to have a better control on these cross-sections.", "Theorem 2.3 ([18]) Suppose that $X$ is a $C^1$ -vector field and let $\\phi $ be the flow induced by $X$ .", "Then there exist $L>0$ and a small $\\beta _0>0$ such that for any $0<\\beta <\\beta _0$ , $t>0$ and $x\\in M \\setminus Sing(\\phi )$ we have: The set $\\phi |_{[-\\beta ||X(x)||,\\beta ||X(x)||]}(N^r_{\\beta }(x))$ is a flow box; in particular, it does not contain singularities.", "The ball $B_{\\beta ||X(x)||}(x)$ is contained in $\\phi |_{[-\\beta ||X(x)||,\\beta ||X(x)||]}(N^r_{\\beta }(x))$ .", "The holonomy map $P_{x,t}:N^r_{\\frac{\\beta }{L^{t}}}(x) \\rightarrow N^r_{\\beta }(\\phi _t(x))$ is well defined and injective.", "Moreover, for any $y\\in N^r_{\\frac{\\beta }{L^{t}}}(x)$ we have $d(\\phi _s(x),\\phi _s(y))\\le \\beta || X(\\phi _s(x))||$ for any $0\\le s\\le t$ .", "The same statement is valid for $t<0$ .", "Let us denote by $C_{\\phi }(M)$ the set of non-negative functions $f:M\\rightarrow [0,\\infty )$ such that $f(x)=0$ if, and only if, $x\\in Sing(\\phi )$ .", "Note that for any $\\delta >0$ , the function $x\\mapsto \\delta ||X(x)||$ belongs to $C_{\\phi }(M)$ .", "The next result will help us to find continuity properties for the R-holonomy maps.", "Theorem 2.4 ([11]) Let $\\phi $ be a continuous flow on $M$ .", "(1) For any $e\\in C_{\\phi }(M)$ and $T>0$ we can find $r\\in C_{\\phi }(M)$ such that: if $d(x,y)\\le r(x)$ , then $d(\\phi _{t}(x),\\phi _{t}(y))\\le e(\\phi _t(x)),$ for every $t\\in [-T,T]$ .", "(2) For any $e\\in C_{\\phi }(M)$ there is some $r\\in C_{\\phi }(M)$ such that $r(x)\\le \\max \\lbrace e(y);y\\in B_{r(x)}(x) \\rbrace .$ Fix $\\varepsilon >0$ and $t>0$ .", "We say that a pair of points $x,y$ is $t$ -$\\varepsilon $ -separated by $\\phi $ if there is some $0\\le s\\le t$ such that $d(\\phi _s(x),\\phi _s(y))>\\varepsilon $ .", "Let $K\\subset M$ .", "We say that $E\\subset K$ is a $t$ -$\\varepsilon $ -separated set if any pair of distinct points of $E$ is $t$ -$\\varepsilon $ -separated.", "Let $s_t(\\varepsilon ,K)$ denote the maximal cardinality of a $t$ -$\\varepsilon $ -separated subset of $K$ .", "This number is finite due to the compactness of $M$ .", "We define the topological entropy of $\\phi $ on $K$ to be the number $h(\\phi ,K)$ defined by $ h(\\phi ,K)=\\lim _{\\varepsilon \\rightarrow 0}\\limsup _{t\\rightarrow \\infty }\\frac{1}{t}\\log s_t(\\varepsilon ,K).$ Definition 2.5 The topological entropy $h(\\phi )$ of $\\phi $ is defined to be $h(\\phi )=h(\\phi ,M)$ .", "Now we make precise some concepts of expansiveness for flows.", "We start by giving the definition of BW-expansiveness, which was introduced by R. Bowen and P.
Walters in [7].", "Definition 2.6 (BW-Expansiveness) We say that a continuous flow $\\phi $ is BW-expansive if for every $\\varepsilon >0$ , there is $\\delta >0$ such that the following holds: If $x,y\\in M$ , $\\rho :\\mathbb {R}\\rightarrow \\mathbb {R}$ is a continuous function with $\\rho (0)=0$ and $d(\\phi _t(x),\\phi _{\\rho (t)}(y))\\le \\delta $ for every $t\\in \\mathbb {R}$ , then $y=\\phi _s(x)$ for some $|s|\\le \\varepsilon $ .", "The BW-expansiveness property was realized to be the model of the type of expansive behavior displayed by Axiom A and Anosov flows.", "Unfortunately, this concept does not capture the expansive behavior of flows with singularities accumulated by regular orbits, such as the Lorenz Attractor.", "To cover these flows, M. Komuro introduced in [13] the notion of $k^*$ -expansiveness.", "Definition 2.7 ($k^*$ -Expansiveness) We say that a continuous flow $\\phi $ is $k^*$ -expansive if for every $\\varepsilon >0$ , there is some $\\delta >0$ such that the following holds: If $x,y\\in M$ , $\\rho \\in Rep(\\mathbb {R})$ and we have $d(\\phi _t(x),\\phi _{\\rho (t)}(y))\\le \\delta $ for every $t\\in \\mathbb {R}$ , then there is some $t_0\\in \\mathbb {R}$ such that $y=\\phi _{t_0+s}(x)$ for some $|s|\\le \\varepsilon $ .", "Here we are denoting $Rep(\\mathbb {R})=\\lbrace \\rho :\\mathbb {R}\\rightarrow \\mathbb {R}; \\rho \\textrm { is an increasing homeomorphism and }\\rho (0)=0 \\rbrace .$ In the absence of singularities $k^*$ -expansiveness and $BW$ -expansiveness are equivalent.", "More recently a new concept of expansiveness was introduced by L. Wen and X. Wen in [18].", "It is very close to BW-expansiveness, but the distance of separation of the orbits is \"resized\" by the size of the vector field.", "Next we make this idea precise.", "Definition 2.8 A $C^r$ -flow $\\phi $ on $M$ is said to be R-expansive if for every $\\varepsilon >0$ , there is some $\\delta >0$ such that the following is satisfied: If $x,y\\in M$ , $\\rho \\in Rep(\\mathbb {R})$ and $d(\\phi _t(x),\\phi _{\\rho (t)}(y))\\le \\delta ||X(\\phi _t(x)) ||$ for every $t\\in \\mathbb {R}$ , then $\\phi _{\\rho (t)}(y)\\in \\phi |_{[-\\varepsilon ,\\varepsilon ]}(\\phi _t(x))$ for any $t\\in \\mathbb {R}$ .", "Although the previous definition has a high level of similarity with the previous versions of expansiveness, there are crucial distinctions here.", "For instance, it is quite surprising that this definition covers highly chaotic flows, such as multi-singular hyperbolic flows, and highly non-expansive flows, such as the identity flow, at the same time (see [18]).", "The problem of finding positive topological entropy for expansive systems was first considered in the 80's and 90's by Fathi, Kato and Lewowicz independently (see [8], [12] and [10]).", "Its version for flows was established for CW-expansive flows by A. Arbieto, W. Cordeiro and M. J. Pacífico in [3] in the non-singular scenario.", "Theorem 2.9 ([3]) Let $\\phi $ be a continuous flow and suppose $\\dim (M)>1$ .", "If $\\phi $ is CW-expansive, then $h(\\phi )>0$ .", "We remark that the previous result also applies to expansive non-singular flows.", "In the next sections we work in order to obtain versions of the previous theorem for singular flows."
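To illustrate the rescaled condition of Definition 2.8 concretely, the following minimal sketch (Python with NumPy; not part of the paper) tests, along finitely many sampled times, whether two orbits are separated at the rescaled scale $\\delta ||X(\\phi _t(x))||$ ; it is only a finite-time, discretized illustration of the definition:

import numpy as np

def rescaled_separated(orbit_x, orbit_y_reparam, X_norm_along_x, delta):
    """orbit_x[k] ~ phi_{t_k}(x), orbit_y_reparam[k] ~ phi_{rho(t_k)}(y),
    X_norm_along_x[k] = ||X(phi_{t_k}(x))||; arrays of equal length.
    Returns True if the rescaled closeness condition fails at some sample."""
    dist = np.linalg.norm(np.asarray(orbit_x) - np.asarray(orbit_y_reparam), axis=1)
    return bool(np.any(dist > delta * np.asarray(X_norm_along_x)))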
], [ "R-Stable and R-Unstable Sets for R-Expansive Flows", "This section is intended to introduce the concept of R-stable and R-unstable sets for singular flows and prove Theorem REF .", "This concept is directly inspired by the R-techniques used in [18].", "Hereafter, $\\phi $ denotes a $C^r$ -flow and $x\\in M$ denotes a regular point.", "We use the estimatives from Theorem REF on the size of the cross-sections to obtain our versions of stable sets for flows.", "Fix some regular point $x\\in M$ .", "As discussed in previous section, the tubular flow theorem gives us some $\\varepsilon (x)>0$ and $\\delta (x)>0$ such that the set $N_x^r(\\delta (x))=\\exp _x(B_{\\delta (x)}(0)\\cap <X(x)>^{\\perp })$ is a cross section of time $\\varepsilon (x)$ through $x$ .", "If $X$ is non-singular we can obtain that $\\varepsilon (x), \\delta (x)>C>0$ for any $x\\in M$ .", "If $X$ has singularities, we may have $\\varepsilon (x),\\delta (x)\\rightarrow 0$ when $x\\rightarrow Sing(\\phi )$ .", "On the other hand, Theorem REF gave us a \"uniform\" control on how these constants collapse.", "Actually, for any $\\beta >0$ sufficiently small the set $N_{\\beta }^r(x)=\\exp _x(B_{\\beta ||X(x)||}(0)\\cap <X(x)>^{\\perp })$ is a cross section of time $\\beta ||X(x)||$ for the flow.", "Moreover, the holonomy map $P_{x,t}$ is well defined on $N^r_{\\beta }(x)$ and if $y\\in N^r_{\\frac{\\beta }{L^t}}(x)$ , the orbit segment between $y$ and $P_{x,t}(y)$ belongs to the $\\beta $ -rescaled tubular neighborhood of $O(x)$ .", "This gives us a way to guarantee that the holonomy maps are well defined.", "Let us fix $x\\in M\\setminus Sing(\\phi )$ , $t>0$ and $\\beta >0$ .", "Definition 3.1 The $\\beta $ -$r$ -stable and $\\beta $ -$r$ -unstable local sets of $x$ are respectivelly $W_{\\beta ,t}^{r,s}(x)=\\left\\lbrace y\\in N^r_{\\frac{\\beta }{L^t}}\\left(x\\right); d(P_{x,nt}(x),P_{x,nt}(y))\\le \\frac{\\beta }{L^t}||X(P_{x,nt}(x)) ||, \\forall n\\in \\mathbb {N}\\right\\rbrace $ $and $ $W_{\\beta ,t}^{r,u}(x)=\\left\\lbrace y\\in N^r_{\\frac{\\beta }{L^t}}\\left(x\\right); d(P_{-nt,x}(x),P_{x,-nt}(y))\\le \\frac{\\beta }{L^t}||X(P_{x,-nt}(x)) ||, \\forall n\\in \\mathbb {N}\\right\\rbrace $ Notice that these sets are well defined if $\\beta $ is small enough.", "An interesting consequence of the definition is that we can use these sets to characterize R-expansiveness.", "Theorem 3.2 The flow $\\phi $ is R-expansive if, and only if, there exists $\\delta >0$ such that for any regular point $x\\in M$ and any $t>0$ , one has $W_{\\delta ,t}^{r,s}(x)\\cap W_{\\delta ,t}^{u,s}(x)=\\lbrace x\\rbrace $ .", "Fix $x$ a regular point, $\\beta >0$ small enough and let $0<\\varepsilon <\\beta $ .", "Let $0<\\delta <\\varepsilon $ be given by the $r$ -expansiveness of $\\phi $ related to $\\varepsilon $ .", "Now suppose that $y\\in W_{\\delta ,t}^{r,s}(x)\\cap W_{\\delta ,t}^{r,u}(x)$ .", "Since $y\\in W_{\\delta }^{r,s}(x)$ we have $ d(P_{x,n}(x),P_{x,n}(y))<\\frac{\\delta }{L^t}||X(P_{x,n}(x))||$ for any non-negative integer $n$ .", "By the Theorem REF we obtain a reparametrization $\\rho $ such that $d(\\phi _s(x),\\phi _{\\rho (s)}(y))\\le \\delta ||(X(\\phi _t(x))||$ for every $s\\ge 0$ .", "On the other hand, since $y\\in W_{\\delta ,t}^{r,u}(x)$ , a similar argument shows that $d(\\phi _s(x),\\phi _{\\rho (s)}(y))\\le \\delta ||(X(\\phi _t(x))||$ for every $s\\le 0$ .", "Now $R$ -expansiveness implies that $y\\in \\phi _{[-\\varepsilon ,\\varepsilon ]}(x)$ , but since $x,y\\in N^{r}_{\\frac{\\delta }{L^t}}(x)$ , we 
have that $y=x$ .", "Conversely, let $P=\\sup _{x\\in M}\\lbrace ||X(x)||\\rbrace $ , fix $0<\\varepsilon <\\beta $ and let $0<\\delta <\\varepsilon $ be such that $W_{\\frac{\\delta }{P},t}^{r,s}(x)\\cap W_{\\frac{\\delta }{P},t}^{r,u}(x)=\\lbrace x\\rbrace $ for any regular point $x$ and any $t>0$ .", "Fix $t>0$ such that $L^t>1$ and suppose there exist a reparametrization $h$ and two points $x,y$ satisfying $ d(\\phi _s(x),\\phi _{h(s)}(y))\\le \\frac{\\delta }{PL^t}||X(\\phi _s(x)) ||$ for $ s\\in \\mathbb {R}$ .", "This implies in particular that $d(x,y)<\\frac{\\delta }{PL^t}||X(x)||.$ Since $\\delta <\\varepsilon <\\beta $ , Theorem REF implies that there exists some $|s_0|<\\frac{\\delta }{PL^t}||X(x)||\\le \\delta \\le \\varepsilon $ such that $y_0=\\phi _{s_0}(y)\\in N^r_{\\delta }(x)$ .", "More generally, since $d(\\phi _{nt}(x),\\phi _{h(nt)}(y))<\\frac{\\delta }{PL^t}||X(\\phi _{nt}(x))||$ , for any $n\\in \\mathbb {Z}$ , there exists $|s_n|<\\varepsilon $ such that $y_n=\\phi _{h(nt)+s_n}(y)\\in N^r_{\\delta }(\\phi _{nt}(x))$ .", "But the last fact implies that the set $\\lbrace y_n\\rbrace $ is the orbit of $y_0$ under the holonomy maps $\\lbrace P_{x,nt}\\rbrace $ .", "In addition, one has that $y_0\\in W_{\\delta ,t}^{r,s}(x)\\cap W_{\\delta ,t}^{r,u}(x)$ and therefore we must have $y_0=x$ .", "Then $\\phi _{s_0}(y)=x$ and the flow $\\phi $ is R-expansive.", "An interesting fact about the above characterization is that we do not need to worry about reparametrizations, since we are only working with the holonomy maps of $\\phi $ .", "For the remainder of this section, we assume that the flows under consideration are R-expansive, and the constant $\\delta $ given by the previous result will be called a constant of R-expansiveness of $\\phi $ .", "Next we work in order to obtain versions of some well-known results about non-singular expansive flows for the R-expansive case.", "We begin with a version of uniform expansiveness.", "Theorem 3.3 Let $K\\subset M$ be a compact and invariant set without singular points and let $\\delta >0$ be a constant of R-expansiveness for $\\phi $ .", "Denote $A=\\inf _{x\\in K }\\lbrace ||X(x)||\\rbrace $ .", "Then for any $0<\\eta \\le \\delta A$ and $t>0$ , there exists $N_{\\eta }>0$ such that if $x\\in K$ and $y\\in N^r_{\\delta }(x)$ with $d(x,y)>\\eta $ , then there is some $-N_{\\eta }\\le i\\le N_{\\eta }$ such that $d(P_{x,it}(x),P_{x,it}(y))\\ge \\delta ||X(P_{x,it}(x))||$ .", "If the result is false, we can find some $\\eta >0$ , sequences $x_n\\in K$ , $y_n\\in N^r_{\\delta }(x_n)$ , $m_n\\rightarrow \\infty $ and $t>0$ , such that $d(P_{x_n,it}(x_n),P_{x_n,it}(y_n))\\le \\delta ||X(P_{x_n,it}(x_n))||$ for $-m_n\\le i\\le m_n$ .", "Now by compactness of $K$ we can suppose that $x_n\\rightarrow x\\in K$ , $y_n\\rightarrow y\\in M$ .", "Furthermore, since $||X(x)||>A>0$ for every $x\\in K$ , we have that $diam(N^r_{\\delta }(x_n))>C>0$ for any $x_n$ .", "Since $X$ is a $C^r$ vector field, the normal direction of $X$ also varies continuously with $x$ , so we have that $y\\in N^r_{\\delta }(x)$ .", "But now, the continuity of the holonomy maps implies that $d(P_{x,it}(x),P_{x,it}(y))\\le \\delta ||X(P_{x,it}(x))||$ for every $i\\in \\mathbb {Z}$ and then $x=y$ , a contradiction, since $d(x,y)>\\eta $ .", "Next we show that R-stable (R-unstable) sets need to contract in the future (in the past).", "Theorem 3.4 Let $K \\subset M$ be a compact and invariant set without singular points.", "Then for any $0<\\eta <\\delta $ , there is $N_{\\eta
}>0$ such that $P_{x,nt}(W_{\\delta ,t}^{r,s}(x))\\subset W_{\\eta ,t}^{r,s}(P_{x,nt}(x)) \\text{ and } P_{x,-nt}(W_{\\delta ,t}^{r,u}(x))\\subset W_{\\eta ,t}^{r,u}(P_{x,-nt}(x))$ for every $n\\ge N_{\\eta }$ , every $x\\in K$ and every $t>0$ .", "Let us fix $0<\\eta <\\delta \\inf _{x\\in K}\\lbrace ||X(x)||\\rbrace $ .", "Let $N$ be given by the previous theorem.", "Now suppose that there exist $x\\in K$ and $t>0$ such that $P_{x,nt}(W_{\\delta ,t}^{r,s}(x))\\lnot \\subset W_{\\eta ,t}^{r,s}(P_{x,nt}(x))$ .", "Then there is some $y\\in W_{\\delta ,t}^{r,s}(x)$ and $n>N$ satisfying $d(P_{x,nt}(x),P_{x,nt}(y))>\\eta $ .", "Now by the choice of $N$ we must have $d(P_{x,(n+i)t}(x),P_{x,(n+i)t}(y))>\\delta ||X(P_{x,(n+i)t}(x))||$ for some $-N\\le i\\le N$ .", "But this is impossible since $n>N$ .", "Corollary 3.5 Let $x$ be a periodic point with period $\\pi (x)=t$ .", "For every $\\gamma >0$ there exists $N$ such that: $P_{x,nt}(W_{\\delta ,t}^{r,s}(x))\\subset W_{\\gamma ,t}^{r,s}(P_{x,nt}(x)) \\text{ and } P_{x,-nt}(W_{\\delta ,t}^{r,u}(x))\\subset W_{\\gamma ,t}^{r,u}(P_{x,-nt}(x))$ for every $n\\ge N$ .", "Let $A\\subset M$ and $x\\in A$ .", "Let us denote by $C(A,x)$ the connected component of $A$ containing $x$ .", "For any $x\\in M$ , $t>0$ and $\\varepsilon >0$ , we denote $CW^{r,s}_{\\varepsilon ,t}(x)=C(W^{r,s}_{\\varepsilon ,t}(x),x).$ We also denote $S^r_{\\delta }(x)=\\exp _x(S_{\\delta || X(x)||}(x)\\cap \\mathcal {N}_x),$ where $S_{\\varepsilon }(x)=\\lbrace v\\in T_xM; ||v||=\\varepsilon \\rbrace .$ With the above notations and results we are able to prove Theorem REF .", "Suppose that $\\dim (M)>1$ and notice that $N_{\\eta }^r(x)$ is connected.", "Fix some $x\\in K$ and suppose that there exists some $\\gamma >0$ such that $W^{r,u}_{\\gamma ,t}(y)\\cap S^r_{\\gamma }(y)=\\emptyset $ for every $y\\in K$ .", "Denote $K_0= \\overline{(N_{\\gamma }^r(x))}$ .", "Since $K_0$ is connected, there exists some $y\\in (K_0\\cap S^r_{\\gamma }(x))$ which is not in $W_{\\gamma ,t}^{r,u}(x)$ .", "This implies that we can take a minimal $m_0>0$ such that $diam(P_{x,-m_0t}(K_0))>\\delta ||X(P_{x,-m_0t}(x))||.$ Since $K_0$ is connected, we have that $P_{x,-m_0t }(K_0)$ is also connected and then $P_{x,-m_0t}(K_0)\\cap S^r_{\\gamma }(P_{x,-m_0t}(x))\\ne \\emptyset .$ Define $K_1$ to be the closure of the connected component of $P_{x,-m_0t}(x)$ in $P_{x,-m_0t}(K_0)\\cap N^r_{\\gamma }(P_{x,-m_0t}(x))$ .", "Thus we can repeat the previous steps to find $m_1$ and a continuum $K_2\\subset P_{x,-m_1t}(K_1)\\cap \\overline{(N^r_{\\gamma }(P_{x,-(m_0+m_1)t}(x)))}.$ Inductively, we can find a sequence of times $\\lbrace m_k\\rbrace $ and a sequence of continuum sets $\\lbrace K_k\\rbrace $ such that the following is valid: $P_{x,-m_kt}(K_k)\\supset K_{k+1}$ $K_k\\cap S^r_{\\gamma }(P_{x,-m_kt}(x))\\ne \\emptyset $ $ P_{x,-nt}(K_k)\\subset N^r_{\\delta }(P_{x,-nt}(x))$ , if $0\\le n\\le m_k$ Let us denote $s_k=\\sum _{i=0}^km_i$ .", "By the compactness of the continuum hyperspace, we can assume that the sequence $K_k$ converges to a continuum $K$ .", "Now we can assume that $P_{x,-s_kt}(x)\\rightarrow p \\in K$ and therefore we have $K\\subset W^{r,s}_{\\delta ,t}(p)$ , by the continuity of the holonomy maps.", "Moreover, since $diam(K_k)\\ge \\gamma $ we have that $K\\cap S^r_{\\gamma }(p)\\ne \\emptyset $ and this concludes the proof.", "An immediate consequence of the previous result is the following corollary: Corollary 3.6 If $x\\in M$ is a periodic point, then for any $t,\\gamma >0$ we have $CW_{\\delta
,t}^{r,s}(p)\\cap S^r_{\\gamma }(p)\\ne \\emptyset \\textrm { or } CW_{\\delta ,t}^{r,u}(p)\\cap S^r_{\\gamma }(p)\\ne \\emptyset $ for every $p\\in O(x)$ .", "The next example illustrates the existence of flows such that none of their regular points has a non-trivial $CW^{r,s}_{\\beta ,t}(x)$ .", "Example 1 Let $D$ be the closed disk of $\\mathbb {R}^2$ centered at 0 and with radius 1 and consider $M=S^1\\times D$ the solid torus.", "We start by defining a periodic flow $\\psi $ on the solid torus $M$ whose induced vector field has constant speed equal to one.", "Here we will see the solid torus as a cylinder $C=[-2,2]\\times D$ , identifying the right and left discs.", "Let us consider on $C$ the vector field $X$ constant and equal to $(1,0,0)$ .", "Thus $X$ generates the desired flow $\\psi $ .", "Now we modify this flow to obtain an R-expansive flow.", "First consider the function $\\rho $ on $C$ satisfying the following conditions: $\\rho $ is constant along the disks $\\lbrace x\\rbrace \\times D$ .", "$\\rho ((x,y,z))=1$ , if $(x,y,z)\\in [-2,-1]\\times D$ or $(x,y,z)\\in [1,2]\\times D$ .", "$\\rho ((x,y,z))=0$ , if $(x,y,z)\\in \\lbrace 0\\rbrace \\times D$ .", "$\\rho ((x,y,z))=-x$ , if $(x,y,z)\\in [-1,0]\\times D$ .", "$\\rho ((x,y,z))=x$ , if $(x,y,z)\\in [0,1]\\times D$ .", "Figure: R-expansive flow on the solid torus without non-trivial connected R-stable and R-unstable sets.", "Let $\\phi $ be the flow generated by the field $\\rho X$ (see the figure).", "Claim: $\\phi $ is R-expansive.", "To prove the claim we use the characterization in Theorem REF .", "Fix some regular point $p=(x,y,z)\\in M$ .", "Notice that $\\lbrace x\\rbrace \\times D$ is a cross-section through $p$ for any time $t>0$ .", "So fix some $t>0$ and $\\delta >0$ .", "The set $W^{r,s}_{\\delta ,t}(p)$ is formed by all the points $q\\in N^r_{\\delta }(p)$ such that $d(P_{p,nt}(p),P_{p,nt}(q))\\le \\delta ||X(P_{p,nt}(p))||$ for any $n>0$ .", "But by the choice of $\\rho $ , we have that there is some $k\\in \\mathbb {Z}$ such that $||X(P_{p,(n+k)t}(p))||\\le e^{-n}$ for every $n>0$ .", "This implies that $||X(P_{p,nt}(p))||\\rightarrow 0$ as $n\\rightarrow \\infty $ .", "On the other hand, $\\phi $ acts isometrically on the disks $\\lbrace x\\rbrace \\times D$ and this implies that $d(P_{p,nt}(p),P_{p,nt}(q))=d(p,q)$ for any $q\\in N^r_{\\delta }(p)$ and every $n\\in \\mathbb {Z}$ ; hence we have that $W^{r,s}_{\\delta ,t}(p)=\\lbrace p\\rbrace $ .", "A similar argument shows that $W^{r,u}_{\\delta ,t}(p)=\\lbrace p\\rbrace $ .", "This proves that $\\phi $ is R-expansive and none of its points can have non-trivial connected local R-stable and R-unstable sets."
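The vector field of Example 1 is simple enough to write down explicitly; the following minimal sketch (Python; not part of the paper) encodes the rescaling factor $\\rho $ and the resulting velocity field $\\rho X$ on the cylinder $C=[-2,2]\\times D$ :

import numpy as np

def rho(x):
    """Rescaling factor of Example 1; constant on each disk {x} x D."""
    if x <= -1.0 or x >= 1.0:
        return 1.0
    return abs(x)            # equals -x on [-1,0] and x on [0,1], vanishing at x = 0

def velocity(p):
    """Velocity field rho * X with X = (1,0,0) at a point p = (x, y, z) of C."""
    x, _, _ = p
    return np.array([rho(x), 0.0, 0.0])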
], [ "R-Stable Points, R-unstable Points", "This section is devoted to prove Theorem REF .", "Recall that in Theorem REF we have obtained the existence of a point for a closed non-singular invariant set with at least one of the connected component of its R-stable or R-unstable sets being non-trivial.", "It is desirable that, as in the non-singular case, all points of this set actually have both non-trivial connected R-stable and R-stable sets.", "Indeed, this is true for any point on the phase space of an expansive homeomorphisms and a non-singular flow and have interesting consequences.", "For instance, this implies dimensional restrictions on the phase space for the existence of such systems.", "Next we proceed in order to obtain this for R-expansive flows, but first we need to introduce the concept of R-stable and R-unstable points.", "We remark that the results here are inspired on the techniques developed in [10].", "Hereafter we will denote $N^r_t(x,n,\\varepsilon )$ for the $n$ -$\\varepsilon $ -R-dynamical ball for $\\lbrace P_{x,nt}\\rbrace $ centered at $x$ , that is, $N^r_t(x,n,\\varepsilon )=\\lbrace y\\in N^r_{\\varepsilon }(x); d(P_{x,it}(x),P_{x,it}(y))\\le \\varepsilon ||X(P_{x,it}(x))||, 0\\le i\\le n\\rbrace .$ Definition 4.1 We say that $x\\in M\\setminus Sing(\\phi )$ is an R-stable (R-unstable ) point of $\\phi $ if for every $t>0$ , the set $\\lbrace W^{r,s}_{\\varepsilon ,t}(x)\\rbrace _{\\varepsilon >0}$ ($\\lbrace W^{r,u}_{\\varepsilon ,t}(x)\\rbrace _{\\varepsilon >0}$ ) is a neighborhood basis for $x$ on $N^r_{\\delta }(x)$ .", "In other words, if for every $\\varepsilon >0$ , there is some $\\eta >0$ such that if $y\\in N^r_{\\eta }(x)$ and $d(x,y)\\le \\frac{\\eta }{L^t}||X(x)||$ , then $d(P_{x,nt}(x),P_{x,nt}(y))\\le \\varepsilon ||X(P_{x,nt}(x))||$ for every $n\\ge 0$ ($n\\le 0$ ).", "Next theorem is a trivial consequence of the definitions and then we shall omit its proof.", "Theorem 4.2 If $\\overline{O(x)}\\cap Sing(\\phi )=\\emptyset $ , then are equivalent: $x$ is an R-stable point.", "$W^{r,s}_{\\delta ,t}(x)$ is a neighborhood of $x$ on $N^r_{\\delta }(x)$ .", "There is some $0<\\varepsilon _0<\\delta $ such that for any $0<\\varepsilon <\\varepsilon _0$ and $t>0$ we have $W^{r,s}_{\\varepsilon ,t}(x)=N^r_{t}(x,N_{\\varepsilon },\\varepsilon ).$ Hereafter we will always suppose $x\\in \\Lambda $ , where $\\Lambda $ is a compact invariant set without singularities.", "An easy corollary of Theorem $\\ref {UnifContr}$ is the following proposition.", "Proposition 4.3 If for some $t>0$ , we have $y\\in W^{r,s}_{\\varepsilon ,t}(x)$ , then $\\omega (x)=\\omega (y)$ .", "Before to prove the proposition, let us make some remarks that will be used on next results.", "By Theorem REF , if $y\\in W^{r,s}_{\\varepsilon ,t}(x)$ , then $d(P_{x,nt}(x),P_{x,nt}(y))\\rightarrow 0$ as $n\\rightarrow \\infty $ .", "In addition, Theorem REF implies that $d(\\phi _t(x),\\phi _t(y))\\rightarrow 0$ as $t\\rightarrow \\infty $ .", "Now we prove the proposition.", "Let $z\\in \\omega (x)$ and suppose that $y\\in W_{\\varepsilon ,t}^{r,s}(x)$ .", "If $t_k\\rightarrow \\infty $ is such that $\\phi _{t_k}(x)\\rightarrow z $ , then previous remarks implies that $\\phi _{t_k}(y)\\rightarrow z$ and therefore $z\\in \\omega (y)$ .", "The contrary inclusion is analogous.", "As a consequence of previous proposition we obtain the following.", "Theorem 4.4 Suppose that $x$ is R-stable point which is recurrent.", "Then $x$ is a periodic point.", "Suppose that $x$ is an recurrent R-stable 
point and fix $\\eta >0$ such that $N^r_{\\eta }(x)\\subset W_{\\varepsilon ,t}^{r,s}(x)$ .", "Since $x$ is a recurrent point, we can find a sequence $t_k\\rightarrow \\infty $ such that $\\phi _{t_k}(x)\\rightarrow x$ .", "In particular, if we choose $k$ big enough, we have that $\\phi _{t_k}(x)\\in B^{r}_{\\delta }(x)$ .", "Since Theorem REF implies that $B^{r}_{\\delta }(x)$ is contained in the R-flow box of $N^{r}_{\\delta }(x)$ , we find a sequence of times $n_kt\\rightarrow \\infty $ such that $P_{x,n_kt}(x)\\rightarrow x$ .", "Let $r\\in C_{\\phi }(M)$ be a function given by item 2 of Theorem REF such that $0<r(x)\\le \\frac{\\eta }{4}||X(x)||$ .", "Let us fix a sequence $n_k$ with $n_k>N_{r(x)}$ such that $P_{x,n_kt}(x)\\in N_{r(x)}(x)$ .", "Then by Theorem REF we have that $(P_{x,n_kt}(N^r_{\\eta }(x))\\cap N_{r(x)}(x))\\subset W_{\\frac{\\eta }{4},t}^{r,s}(P_{x,n_kt}(x))$ .", "Theorem REF implies that $(P_{x,n_kt}(N^r_{\\eta }(x))\\cap N_{r(x)}(x))\\subset N^r_{\\frac{\\eta }{2}}(x).$ Now, if we apply again $P_{x,n_kt}$ to $(P_{x,n_kt}(N^r_{\\eta }(x))\\cap N_{r(x)}(x))$ , we obtain that $P_{x,n_kt}\\left(P_{x,n_kt}(N^r_{\\eta }(x))\\cap N_{r(x)}(x)\\right)\\subset N^r_{\\frac{\\eta }{2}}(x).$ Finally, Theorem REF implies that $\\bigcap _{j=1}^{\\infty } P_{x,jn_kt}(\\overline{N^r_{\\eta }(x)})=\\lbrace z\\rbrace $ and by construction we have that $z$ is periodic for $\\lbrace P_{x,nt}\\rbrace $ .", "This implies that $z$ is periodic for $\\phi $ and, by the previous proposition, we have that $x\\in \\omega (x)=\\omega (z)=O(z)$ .", "This finishes the proof.", "Theorem 4.5 Let $\\phi $ be R-expansive and let $x\\in M$ be such that $\\overline{O(x)}\\cap Sing(\\phi )=\\emptyset $ .", "If $x$ is an R-stable point, then there is a neighborhood of $x$ on $N^r_{\\delta }(x)$ formed by R-stable points.", "To prove this, suppose that $x$ is an R-stable point and fix $0< 4\\varepsilon <\\delta $ such that $\\left(\\bigcup _{t\\ge 0}\\overline{N^r_{\\varepsilon }(\\phi _t(x))}\\right)\\cap Sing(\\phi )=\\emptyset .$ Since $x$ is R-stable, there is some $0<\\eta <\\varepsilon $ such that $N^r_{\\eta }(x)\\subset W_{\\varepsilon ,t}^{r,s}(x)$ .", "This implies that if $y\\in N^r_{\\eta }(x)$ then $\\inf \\limits _{t\\ge 0} ||X(\\phi _t(y))||>A>0$ .", "Now fix some $\\nu >0$ and set $0<\\gamma \\le \\nu A$ .", "Fix some $y\\in N^r_{\\eta }(x)$ .", "Theorem REF combined with Theorem REF implies that we can find some $N_{\\eta }$ such that $d(P_{y,nt}(y),P_{y,nt}(z))\\le \\frac{\\gamma }{L^t}$ for any $z\\in B_{\\eta }(x)$ and any $n\\ge N_{\\eta }$ .", "Finally, the continuity of the holonomy maps allows us to find some $\\mu >0$ (Theorem REF ) such that if $z\\in N^r_{\\eta }(x)$ and $d(z,y)<\\mu $ , then $d(P_{y,nt}(y),P_{y,nt}(z))\\le \\frac{\\gamma }{L^t}$ for $0\\le n\\le N_{\\eta }$ .", "But this says that $N^r_{\\mu }(y)\\subset W_{\\nu ,t}^{r,s}(y)$ and therefore, $y$ is R-stable.", "Theorem 4.6 Let $\\phi $ be an R-expansive flow and $K\\subset M$ be a compact invariant set without singularities.", "If $x\\in K$ is an R-stable or R-unstable point, then $x$ is periodic.", "Suppose that $x\\in K$ is an R-stable point and let $\\delta >0$ be the R-expansiveness constant of $\\phi $ .", "Before continuing, let us set some notation.", "Let $F_x(N^r_{\\gamma }(x))$ be an R-flow box.", "Suppose that $A\\subset F_x(N^r_{\\gamma }(x))$ .", "Define $A(x)$ to be the set $\\lbrace \\phi _{t_y}(y)\\rbrace $ where $y\\in A$ and $t_y$ is the unique $t$ satisfying $|t|\\le \\gamma ||X(x)||$ and $\\phi
_{t_y}(y)\\in N^r_{\\gamma }(x)$ .", "For any $\\gamma $ denote $A^z_{\\gamma }=B_{\\gamma }(z)$ .", "Claim: Suppose that $z\\in \\alpha (x)$ .", "Then there are a sequence $n_k\\rightarrow \\infty $ and $\\gamma >0$ such that $P_{x,-n_kt}(x)\\rightarrow z$ and $A^z_{\\gamma }(P_{x,-n_kt}(x))\\subset W^{r,s}_{\\delta ,t}(P_{x,-n_kt}(x))$ , for every $k>0$ .", "If the claim is false, we can find a subsequence $n_k$ such that $P_{x,-n_kt}(x)\\in A^z_{\\frac{1}{k}}(P_{x,-n_kt}(x)) \\textrm { and } A^z_{\\frac{1}{k}}(P_{x,-n_kt}(x))\\lnot \\subset W^{r,s}_{\\delta ,t}(P_{x,-n_kt}(x))$ for any $k\\ge 1$ .", "Fix some $\\varepsilon >0$ as in item (3) of Theorem REF .", "Since $A^z_{\\frac{1}{k}}(P_{x,-n_kt}(x))\\lnot \\subset W^{r,s}_{\\varepsilon ,t}(P_{x,-n_kt}(x))$ , we can find a point $P_{x,-n_kt}(y_k)\\in A^z_{\\frac{1}{k}}(P_{x,-n_kt}(x))\\cap \\partial W^{r,s}_{\\varepsilon ,t}(P_{x,-n_kt}(x))$ .", "This implies: $\\sup \\limits _{n\\ge -n_k}d(P_{x,nt}(x),P_{x,nt}(y_k))=d(P_{x,(m_k-n_k)t}(x),P_{x,(m_k-n_k)t}(y_k))=\\varepsilon $ for some $m_k>0$ .", "The continuity of $\\lbrace P_{x,nt}\\rbrace $ implies that $m_k\\rightarrow \\infty $ .", "Suppose that $P_{x,(m_k-n_k)t}(x)\\rightarrow x*$ and $P_{x,(m_k-n_k)t}(y_k)\\rightarrow y*$ .", "It holds that $d(x*,y*)=\\varepsilon .$ But now, we have that for any $i\\in \\mathbb {Z}$ $d(P_{x*,it}(x*),P_{x*,it}(y*))=\\lim _{k\\rightarrow \\infty }d(P_{x,(i+m_k-n_k)t}(x),P_{x,(i+m_k-n_k)t}(y_k)),$ but $\\lim _{k\\rightarrow \\infty }d(P_{x,(i+m_k-n_k)t}(x),P_{x,(i+m_k-n_k)t}(y_k))\\le \\sup _{n>-n_k}d(P_{x,nt}(x),P_{x,nt}(y_k))=\\varepsilon ,$ since $i+m_k$ is positive if we suppose $k$ big enough.", "This contradicts R-expansiveness by Theorem REF and then the claim is valid.", "Now fix $\\varepsilon >0$ , $z\\in \\alpha (x)$ , $n_k$ and let $\\gamma $ be as in the claim.", "Since $A^z_{\\gamma }(P_{x,-n_kt}(x))\\subset W^{r,s}_{\\varepsilon ,t}(P_{x,-n_kt}(x)),$ then $P_{x,n_1t}(y)\\in B^r_{\\varepsilon }(x)$ , for every $y\\in A^z_{\\gamma }(P_{x,-n_kt}(x))$ .", "In particular, any $y\\in A^z_{\\gamma }(P_{x,-n_kt}(x))$ satisfies $d(P_{x,(n_1-n_k)t}(y),x)\\le 2\\varepsilon .$ Since $\\phi _{-n_kt}(x)\\rightarrow z$ , this implies that $\\phi _{(-n_k+n_1)t}(x)\\rightarrow x$ and then $x\\in \\alpha (x)$ .", "Finally, $x$ is periodic due to Theorem REF .", "Now we are able to prove Theorem REF .", "The proof is based on the following claim: Claim: For every $0<\\varepsilon <\\delta $ and $\\eta >0$ , there is some $K_{\\varepsilon ,\\eta }>0$ such that $N^r_{\\eta }(x)\\lnot \\subset N^r_t(x,K_{\\varepsilon ,\\eta },\\varepsilon )\\textrm { and } N^r_{\\eta }(x)\\lnot \\subset N^r_{-t}(x,K_{\\varepsilon ,\\eta },\\varepsilon )$ for every $x\\in K$ .", "If the claim is false, we can find $\\varepsilon >0$ and $\\eta >0$ and a sequence of points $x_k\\in K$ such that $N^r_{\\eta }(x_k)\\subset N^r_t(x_k,k,\\varepsilon )$ for any $k>0$ .", "Now, if we suppose that $x_k\\rightarrow x$ , then $x$ must be an R-stable point of $K$ and this is a contradiction.", "The case of R-unstable points is analogous and the claim is proved.", "Now fix $x\\in K$ , $0<\\varepsilon <\\delta $ and let $N_{\\varepsilon }$ be given by Theorem REF .", "Let $\\eta >0$ be such that if $d(x,y)\\le \\eta $ , then $d(P_{x,nt}(x),P_{x,nt}(y))\\le \\varepsilon $ if $|n|\\le N_{\\varepsilon }$ .", "Fix some $n\\ge \\max \\lbrace N_{\\varepsilon },K_{\\varepsilon ,\\eta }\\rbrace $ .", "By the claim, we have that $P_{x,-nt}(N^r_{\\eta }(P_{x,nt}(x)))\\lnot \\subset C(N^r_t(x,n,\\varepsilon ),x).$ Thus there is some
$y_0\\in P_{x,-nt}(N^r_{\\eta }(P_{x,nt}(x)))\\cap \\partial C(N^r_t(x,n,\\varepsilon ),x).$ In particular, this implies that for some $0\\le k\\le n$ , we have that $d(P_{x,kt}(x),P_{x,kt}(y_0))=\\varepsilon .$ But now, $k\\notin [n-N_{\\varepsilon },n-1]$ , by the choice of $\\eta .$ Also $k\\notin [N_{\\varepsilon },n-N_{\\varepsilon }]$ , otherwise there would exist some $0\\le j\\le n$ such that $d(P_{x,jt}(x),P_{x,jt}(y_0))> \\delta $ , contradicting $y_0\\in N^{r}_{t}(x,n,\\varepsilon )$ .", "Thus $k\\in [0, N_{\\varepsilon }]$ and therefore $d(x,y_0)>\\eta $ , by the choice of $\\eta $ .", "Finally, we have that for any $n\\ge \\max \\lbrace N_{\\varepsilon },K_{\\varepsilon ,\\eta }\\rbrace $ the set $C(N^{r}_t(x,n,\\varepsilon ),x)$ is a connected set with diameter greater than $\\eta $ .", "Thus, by the compactness of the continuum hyperspace of $M$ , we have that the set $\\bigcap _{n>0}\\overline{C(N^r_t(x,n,\\varepsilon ),x)}$ is a connected set contained in $W^{r,s}_{\\varepsilon ,t}(x)$ with diameter greater than $\\eta $ .", "Since the case for the R-unstable sets is analogous, the theorem is proved." ], [ "The Topological Entropy of R-expansive flows", "In this section we prove Theorems REF , REF and Corollary REF .", "Let us start with the former.", "Suppose that $\\phi $ is R-expansive and let $\\Gamma $ be a non-singular Lyapunov stable set for $\\phi $ .", "We start the proof by fixing some terms.", "Fix $\\gamma >0$ , the constant of R-expansiveness of $\\phi $ .", "Fix some point $x\\in \\Gamma $ that has a non-trivial connected component on $W^{r,u}_{\\gamma ,t}(x)$ (such a point is guaranteed by hypothesis).", "Fix $0<\\varepsilon \\le \\gamma $ such that $\\overline{B_{\\varepsilon }(\\Gamma )}\\cap Sing(\\phi )=\\emptyset $ .", "Let $\\delta >0$ be given by the Lyapunov stability of $\\Gamma $ with respect to $\\varepsilon $ .", "Fix $x\\in \\Gamma $ such that for some $0<\\eta <\\delta $ and $t>0$ , the set $CW_{\\eta ,t}^{r,u}(x)$ is a non-trivial connected set.", "Now let $y\\in CW_{\\eta ,t}^{r,u}(x)$ be such that $y\\ne x$ .", "Then Theorem REF implies that $d(P_{x,-nt}(y),P_{x,-nt}(x))\\rightarrow 0$ .", "But Theorem REF will imply that in fact $d(O(x),\\phi _{-t}(y))\\rightarrow 0$ .", "On the other hand, the Lyapunov stability of $\\Gamma $ guarantees that $\\phi _t(y)\\in B_{\\varepsilon }(\\Gamma )$ for any $t\\ge 0$ .", "These facts imply that $\\Lambda =\\overline{\\bigcup _{t\\in \\mathbb {R}}\\phi _t({\\overline{CW_{\\eta ,t}^{r,u}(x)}})}\\subset \\overline{B_{\\varepsilon }(\\Gamma )}$ .", "But since $\\overline{B_{\\varepsilon }(\\Gamma )}$ does not contain singularities, $\\phi |_{\\Lambda }$ is an R-expansive non-singular flow.", "In particular, it is BW-expansive and has dimension greater than one, since it contains $O(x)$ and $W^{r,u}_{\\eta ,t}(x)$ .", "So, we conclude by Theorem REF that $h(\\phi )>0$ .", "Figure: The idea behind the proof of Theorem A.", "Combining Theorems REF and REF we have the following immediate consequence: Corollary 5.1 Let $\\phi $ be an R-expansive flow.", "If there exists a non-singular Lyapunov stable set $\\Gamma \\subset M$ without R-stable and R-unstable points, then $h(\\phi )>0$ .", "We now concentrate on the proof of Theorem REF .", "Let $\\phi $ be an R-expansive flow and suppose that $\\Gamma $ is a non-periodic attractor for $\\phi $ .", "The proof is based on the following claim: Claim: If $\\Gamma $ is a non-periodic attractor, then it has no R-stable or R-unstable points.", "Indeed, since $\\Gamma $ is an
attractor, it has some point with a dense orbit.", "In particular, denoting this point by $x$ , we have that $x\\in \\omega (x)$ .", "Now, suppose that $\\Gamma $ contains an R-stable point $p$ .", "Therefore, there is some sequence $t_k\\rightarrow \\infty $ such that $\\phi _{t_k}(x)\\rightarrow p$ .", "But now Theorem REF implies that $x$ is also an R-stable point and therefore Theorem REF implies that $x$ is a periodic orbit.", "But this is a contradiction, since $\\overline{O(x)}=\\Gamma $ , and the claim is proved.", "Once the claim is obtained, the result is a direct consequence of Theorems REF and REF .", "Finally, we prove Corollary REF .", "Let $\\phi $ be a $k^*$ -expansive flow such that $Sing(\\phi )$ is a hyperbolic set.", "In [5] it is proved that $\\phi $ is $R$ -expansive.", "Now if $\\Gamma $ is a non-periodic attractor without singularities, then Theorem REF directly implies the result." ] ]
2107.01708
[ [ "2D hybrid CrCl2(N2C4H4)2 with tunable ferromagnetic half-metallicity" ], [ "Abstract Two-dimensional ferromagnetic (2D FM) half-metal holds great potential for quantum magnetoelectronics and spintronic devices.", "Here, using density functional calculations and magnetic pictures, we study the electronic structure and magnetic properties of the novel van der Waals (vdW) metal-organic framework (MOF), CrCl2(N2C4H4)2, i.e.", "CrCl2(pyrazine)2.", "Our results show that CrCl2(pyrazine)2 is a 2D FM half-metal, having a strong intralayer FM coupling but a much weak interlayer one due to the vdW spacing.", "Its spin-polarized conduction bands are formed by the pyrazine molecular orbitals and are polarized by the robust Cr3+ local spin = 3/2.", "These results agree with the recent experiments [Pedersen et al., Nature Chemistry, 2018, 10, 1056].", "More interestingly, CrCl2(pyrazine)2 monolayer has a strong doping tunability of the FM half-metallicity, and the FM coupling would be significantly enhanced by electron doping.", "Our work highlights a vital role of the organic ligand and suggests that vdW MOF is also worth exploration for new 2D magnetic materials." ], [ "Introduction", "The possibility to achieve manipulation of magnetic properties through changes of the structure of materials has always been an attractive topic for basic and applied research in material science.", "In particular, transition-metal based inorganic compounds offer a wide playground where electronic and magnetic properties could be tuned to achieve novel phenomena such as superconductivity[1], quantum Hall effect[2], topological insulators[3] and multiferroicity[4].", "The multifunctional properties are often linked to the interplay of charge, orbital and spin degrees of freedom.", "[5] Therefore, transition-metal atoms represent essential ingredients of several technologically interesting materials[6], [7], [8].", "Hybrid compounds, i.e., compounds showing coexistence of organic and inorganic components, further increase the possibility to tune physical properties of materials, thus enlarging the horizons for possible device applications.", "For example, yet another interesting approach to tune structural, electronic and magnetic properties is to explore the effects of the ligands.", "Recently, a promising class of materials has emerged such as metal-organic frameworks (MOFs)[9], [10], [11], [12].", "They are made up of a network of metal ions bridged by organic ligands, forming a porous framework.", "In these materials, different ligands can lead to totally different conducting and/or magnetic properties.", "Eventually, the organic ligands retains a free-radical character, thus making the hybrid compound conductive[13], [14].", "After the successful exfoliation of graphene, two-dimensional materials have become one of the hottest research field in the last decades, due to their novel and diverse physical properties[15], [16], [17], [18].", "In particular, ferromagnetism has been recently observed in several new layered inorganic materials, such as monolayer CrI$_3$[16] and few layer Cr$_2$ Ge$_2$ Te$_6$[17], both of them showing wide application potential.", "Efforts on exfoliating similar materials were reported recently[19], [20], [21], [22], [23], [24].", "Also, theoretical studies try to understand, predict and utilize the 2D magnetic properties[25], [26], [27], [28], [29], [30], [31], [32].", "Therefore, 2D magnetic materials are still a rapid growing and developing field[33], [34].", "Figure: (a) Side view of 
the bulk CrCl$_2$(pyrazine)$_2$ .", "(b) Top view into the $ab$ plane of the top layer.", "Hydrogen atoms in (a) and chlorine atoms in (b) are hidden for simplicity.", "Very recently, hybrid materials have joined the 2D materials landscape[35], [36], [37], [38].", "This certainly defines new directions to explore, since the dual organic-inorganic nature of the materials, together with the dimensionality decrease from 3D to 2D, adds new functional and structural flexibility as well as increases the tunability of relevant physical properties.", "Therefore, it is of great interest to search for new 2D materials starting from bulk layered materials which could be easily exfoliated into monolayers.", "Recently, a new bulk but layered material, CrCl$_2$(pyrazine)$_2$ , has been synthesized[37].", "This compound is very promising because it not only shows magnetic properties related to both the transition metal and the organic ligands, but also could be exfoliated into a new 2D hybrid material.", "According to experimental measurements[37], CrCl$_2$(pyrazine)$_2$ is a ferromagnetic (FM) metal with Curie temperature $T_{\\rm C}$ $\\simeq $ 55 K. All these considerations suggest that this new material is suitable for nanotechnology applications and therefore calls for a deeper theoretical study.", "This represents the main motivation of our work.", "In this article, we provide new insights into the structural, electronic and magnetic properties of bulk and monolayer CrCl$_2$(pyrazine)$_2$ using density functional theory (DFT) calculations.", "Our results show that the bulk is a robust half-metal with strong intralayer FM and weak interlayer coupling, which originate from molecular orbitals induced by a hybridization between Cr $3d$ and pyrazine molecular orbitals.", "More interestingly, the FM coupling of the CrCl$_2$(pyrazine)$_2$ monolayer can be significantly enhanced by electron doping, but can also be changed into an antiferromagnetic (AF) state by hole doping.", "Therefore, we predict that CrCl$_2$(pyrazine)$_2$ would be an appealing 2D spintronic material." ], [ "Computational details", "Density functional theory (DFT) calculations were carried out using the Vienna Ab-initio Simulation Package (VASP)[39].", "The wave functions were expanded in a plane-wave basis set with a cut-off energy of 450 eV.", "The exchange and correlation energy was described by the generalized gradient approximation (GGA) with the Perdew, Burke, and Ernzerhof functional[40].", "To better describe the on-site Coulomb interactions of the Cr 3$d$ electrons, typical values of the Hubbard $U = 4.0$ eV and Hund exchange $J = 0.9$ eV were used in the GGA+U calculations[41].", "A $\\sqrt{2} \\times \\sqrt{2} \\times 1$ supercell was chosen for the bulk in order to study different magnetic structures.", "For the monolayer, a lateral $\\sqrt{2}\\times \\sqrt{2}$ supercell was chosen with a vacuum of 7 Å.", "A Monkhorst-Pack k-mesh of $5\\times 5\\times 4$ ($5\\times 5\\times 1$ ) was used for the bulk (monolayer) calculations.", "The total energy was converged to 10$^{-5}$ eV and all the atoms were fully relaxed until the forces converged to 0.01 eV/Å.", "The PBE-D2 correction within Grimme's approach was used for the cleavage energy calculation[42]."
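The computational settings above can be summarized compactly. The snippet below is only an illustrative sketch of how these parameters map onto VASP-style input tags; it is not the authors' actual input, the species ordering and the LDAUTYPE choice are our own assumptions, and the PBE-D2 dispersion switch is deliberately left out.

```python
# Sketch of the GGA+U parameters quoted in this section, written as VASP-style INCAR tags.
# Assumptions (not stated in the paper): species order Cr, Cl, N, C, H and LDAUTYPE = 1
# (separate U and J). The PBE-D2 dispersion tag is omitted here; consult the VASP manual.
incar_tags = {
    "ENCUT": 450,                     # plane-wave cutoff (eV)
    "GGA": "PE",                      # Perdew-Burke-Ernzerhof functional
    "ISPIN": 2,                       # spin-polarized calculation
    "LDAU": True,                     # on-site Coulomb correction for Cr 3d
    "LDAUTYPE": 1,
    "LDAUL": [2, -1, -1, -1, -1],     # apply the correction to Cr d states only
    "LDAUU": [4.0, 0, 0, 0, 0],       # Hubbard U (eV)
    "LDAUJ": [0.9, 0, 0, 0, 0],       # Hund exchange J (eV)
    "EDIFF": 1e-5,                    # total-energy convergence (eV)
    "EDIFFG": -0.01,                  # force convergence criterion (eV/Angstrom)
}
kmesh_bulk, kmesh_monolayer = (5, 5, 4), (5, 5, 1)  # Monkhorst-Pack grids
```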
], [ "CrCl$_2$ (pyrazine){{formula:203902bc-e950-44d6-98e5-b5b4fd2d083c}} bulk", "We start with the bulk CrCl$_2$ (pyrazine)$_2$ for which the experimental results are available for comparison[37].", "The bulk is a van de Waals (vdW) material with AB stacking, see Fig.", "REF .", "The Cr ion is surrounded by four pyrazines in a,b plane and two Cl ions along c axis.", "It has a local distorted octahedra, which splits the Cr-3$d$ orbitals into $t_{2g}$ triplet and $e_g$ doublet.", "We first perform GGA calculations with spin-polarization.", "We carry out a full structural optimization for bulk CrCl$_2$ (pyrazine)$_2$ in four different structures, see Table S1 in Supporting Information (SI).", "Our results show that $\\alpha $ structure is most stable, which is the case in the monolayer as detailed in section 3.2.", "The optimized lattice constants agree well with the experimental ones.", "The Cr local spin moment is 2.35 ${\\rm \\mu _B}$ which is reduced from Cr$^{3+}$ $S = 3/2$ state by a covalence.", "N (C) local spin moment is –0.08 ${\\rm \\mu _B}$ (–0.01 ${\\rm \\mu _B}$ ) which is polarized by the Cr spin.", "It is important to note that the total spin moment is 2.00 ${\\rm \\mu _B}$ per formula unit (f.u.)", "which is indicative of an antiparallel ${S}$ = –1/2 contribution from the organic ligands.", "The total magnetic moment agrees with the experimental one of 1.8 ${\\rm \\mu _B}$[37].", "Note that the magnetic ground state of CrCl$_2$ (pyrazine)$_2$ is ferrimagnetic[37], with opposite spins of Cr and pyrazines.", "However, to better describe the effective Cr-Cr FM coupling and compare it with a possible Cr-Cr AF state, we refer to the ferrimagnetic ground state as the FM state throughout the main text.", "To better describe the correlated Cr 3$d$ electrons, we perform GGA+U calculations.", "The local Cr$^{3+}$ spin moment is now increased up to 2.69 ${\\rm \\mu _B}$ , see Table REF .", "Again, the total spin moment is 2.00 ${\\rm \\mu _B}$ /f.u., which well corresponds to the Cr$^{3+}$ $S = 3/2$ and the induced opposite ${S}$ = –1/2 on the organic ligands.", "In order to estimate the magnetic coupling in the bulk CrCl$_2$ (pyrazine)$_2$ , we calculated FM, interlayer-AF (with intralayer-FM) and intralayer-AF state by GGA+U.", "We find that FM is the ground state which accords with the experiment[37].", "The intralayer-AF state turns out to be much less stable than the FM state by 163 meV/f.u., demonstrating a strong intralayer FM coupling.", "In contrast, the interlayer coupling is much weak due to the vdW spacing, with the interlayer-AF being 6 meV/f.u.", "higher than the FM ground state.", "We also check the spin-orbit coupling (SOC) effect.", "The GGA+U and GGA+U+SOC results are practically the same, see the band structures in Fig.", "S1 in SI.", "In addition, the interlayer-AF state is less stable than the FM ground state by 163 (137) meV/f.u.", "for bulk (monolayer) by GGA+U+SOC, which is (almost) the same as the GGA+U results of 163 (136) meV/f.u.", "This is due to the negligible SOC effects of the closed Cr$^{3+}$ $t_{2g}^3$ shell and the pyrazine molecule with the light C/N/H atoms.", "Figure: Band structures of (a) the FM state and (b) the intralayer-AF state in A-type bulk CrCl 2 _2(pyrazine) 2 _2, calculated by GGA+U.", "The blue (red) lines stand for the up (down) spin channel.", "The Fermi level is set at zero.Next, to study the electronic properties of the material, we plot the band structure for the FM and intralayer-AF state, see Fig.", "REF .", "A clear half metal 
is demonstrated in the FM state, since eight down-spin bands cross the Fermi level and a large up-spin band gap of more than 3 eV can be observed, see also Fig. REF .", "This is consistent with the high electronic conductivity reported in experiment[37].", "Due to this electronic itinerancy, the down-spin bands show more dispersion compared with the up-spin ones.", "Note that, owing to the similarity between layers, all these bands appear as similar pairs of curves, which are slightly split by the weak interlayer interaction.", "In contrast, the intralayer-AF state has less band dispersion and becomes an insulator with a small energy gap.", "Note that we also perform hybrid-functional HSE06 calculations for comparison with the GGA+U results, see Fig. S2 in the SI.", "The major FM half-metallicity remains unchanged in both functionals, and the shape of the band structure crossing the Fermi level is quite similar.", "The calculated spin moments are close, 2.73 ${\\rm \\mu _B}$ vs 2.69 ${\\rm \\mu _B}$ for the Cr$^{3+}$ (–0.12 ${\\rm \\mu _B}$ vs –0.10 ${\\rm \\mu _B}$ for the N atom).", "Figure: (a) Cr$^{3+}$ 3$d$ and (b) N and C 2$p$ density of states (DOS) calculated by GGA+U.", "The blue (red) lines stand for the up (down) spin.", "The Fermi level is set at zero.", "In order to further analyze the band composition near the Fermi level, we plot in Fig. REF the orbitally resolved density of states (DOS) of the FM ground state.", "For the Cr 3$d$ states there is a clear splitting between the $t_{2g}$ and the empty $e_g$ levels, and the half-occupied $t_{2g}$ shell confirms the high-spin configuration.", "DOS intensity across the Fermi level can be found in the N 2$p$ and C 2$p$ down-spin channels, which corresponds to the down-spin bands in Fig. REF (a).", "Also, there is a small DOS intensity from the Cr $t_{2g}$ down-spin states, which suggests a hybridization between Cr and the pyrazines.", "Note that hybridization states can also be found in the up-spin channel, but they lie about 0.5 eV higher than the down-spin ones due to an exchange splitting induced by the FM Cr sublattice.", "Hence, those bands across the Fermi level, being split by the Cr polarization, are dominated by N $2p$ and C $2p$ states hybridized with Cr $t_{2g}$ .", "These results highlight the vital role of the organic ligands in the FM half-metallicity."
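As a rough, model-dependent consistency check on these numbers, the FM/AF energy differences can be mapped onto a nearest-neighbour exchange constant. The sketch below is only a back-of-the-envelope estimate under assumptions that are ours and not the paper's (a classical Heisenberg form $E=-J\\sum _{\\langle ij\\rangle }\\mathbf {S}_i\\cdot \\mathbf {S}_j$ with $S=3/2$ , four in-plane Cr neighbours, and a Néel-type intralayer-AF reference state); the paper itself does not quote exchange constants.

```python
# Back-of-envelope intralayer exchange from the FM/AF energy differences quoted above.
# Assumptions (ours, not the paper's): classical Heisenberg model E = -J * sum_<ij> S_i.S_j,
# S = 3/2 on Cr, z = 4 in-plane neighbours, Neel-type intralayer-AF reference state.
S, z = 1.5, 4

def intralayer_J(delta_E_meV_per_fu):
    """E(intralayer-AF) - E(FM) = z * J * S**2 per Cr on a square lattice."""
    return delta_E_meV_per_fu / (z * S**2)

for label, dE in [("bulk", 163.0), ("monolayer (alpha)", 136.0)]:
    print(f"{label}: J ~ {intralayer_J(dE):.1f} meV per Cr-Cr bond")
```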
], [ "CrCl$_2$ (pyrazine){{formula:9c170656-6b69-4190-a78a-6ddc6aa830f6}} monolayer ", "Motivated by the above finding of the FM half-metallicity in the vdW MOF and the strong intralayer (weak interlayer) FM coupling , we now study the CrCl$_2$ (pyrazine)$_2$ monolayer which could well be an interesting 2D magnetic material.", "Here we calculate the cleavage energy using the PBE-D2 correction, see Fig.", "REF .", "The total energy results as a function of the increasing interlayer distance allow us to estimate the cleavage energy, and it is 0.22 J/$m^2$ and is even lower than 0.3 J/$m^2$ for CrI$_3$[43] which has been successfully exfoliated from the bulk.", "Therefore, an exfoliation of the CrCl$_2$ (pyrazine)$_2$ is likely, and we now explore the electronic and magnetic properties of the monolayer by GGA+U calculations and then establish a physical picture.", "We first investigate how the pyrazines affect the structural energy and magnetic order, and at the same time search the stable structure of the monolayer.", "In CrCl$_2$ (pyrazine)$_2$ each pyrazine has two possible orientations, and after taking symmetry into account there exist four different possible monolayer structures $\\alpha , \\beta , \\gamma $ and $\\delta $ , see Fig.", "REF .", "$\\alpha $ is the layer component of our bulk structure, containing two pairs of pyrazines in different orientations.", "$\\gamma $ also contains two pairs of pyrazine in different orientations, but has exchanged one pair of pyrazines from $\\alpha $ .", "$\\beta $ has two pairs of pyrazines in the same orientations, and $\\delta $ has one different pair of pyrazines and one same pair.", "After atomic relaxations, the results of total energy calculation (see Table REF ) show that $\\alpha $ is the energetically most favorable structure.", "For comparison, energy of $\\beta $ and $\\gamma $ is respectively 182 and 302 meV against $\\alpha $ , while the initialized $\\delta $ structure is unstable and converges to $\\alpha $ .", "The different structural energies arise from different ligand repulsion, which is related to the H ion distance of adjacent pyrazines.", "All the adjacent pyrazine pairs in $\\alpha $ locally avoid each other and thus effectively lower the repulsion energy.", "In contrast, instability is induced by stronger repulsion in $\\beta $ and $\\gamma $ since respectively two and four pairs of H ions have a much closer distance compared with $\\alpha $ .", "FM ground states can be found for all monolayer structures, which is consistent with the bulk case.", "Moreover, the most favored $\\alpha $ structure also has the largest relative magnetic energy (136 meV/f.u.", "), indicating that a stable distribution of pyrazine orientations could benefit the FM coupling.", "Notice, the FM coupling of $\\alpha $ is comparable with bulk (163 meV/f.u.", "), since the former is the layer component of the later one.", "Hereafter, we focus on the most stable $\\alpha $ structure, exploring the electronic and magnetic properties of monolayer CrCl$_2$ (pyrazine)$_2$ .", "In Fig.", "REF (a) and (b) we present the band structure for the two magnetic configurations.", "The FM ground state, with four down-spin bands crossing the Fermi level (and an up-spin gap of 2.5 eV, not shown), is predicted to be a robust half-metal.", "On the other hand, the AF state is insulating with a much reduced bandwidth.", "To clearly show the magnetic alignment, we plot in Fig.", "REF (c) and (d) the spin density for the FM and AF state.", "As expected for a high-spin $S 
= 3/2$ configuration, Cr$^{3+}$ has about a 2.7 ${\\rm \\mu _B}$ local spin moment in both the FM and AF states.", "In the FM state, down-spin density is found at the N sites, corresponding to the N –0.1 ${\\rm \\mu _B}$ local spin moment, and apparently each pyrazine carries a considerable negative spin moment.", "As in the bulk, the ligand contribution reduces the total magnetization to 2.00 ${\\rm \\mu _B}$ /f.u., corresponding to the total $S = 1$ state.", "In contrast, the spin density almost vanishes at the N sites in the AF state, due to the mutually cancelling spin polarizations induced by the AF Cr sublattice.", "Table: Relative total energies $\\Delta E$ (meV/f.u.) for the monolayer structures $\\alpha $ , $\\beta $ and $\\gamma $ in the FM and AF states.", "The $\\delta $ -type structure converges to the $\\alpha $ -type after relaxation.", "Figure: Band structures of (a) the FM state and (b) the AF state in the $\\alpha $ -type monolayer CrCl$_2$(pyrazine)$_2$ , calculated by GGA+U.", "The blue (red) lines stand for the up (down) spin.", "The Fermi level is set at zero.", "The insets in (a) and (b) are charge density plots of the pyrazine part for one of the bands near the Fermi level.", "Spin density plots of (c) the FM state and (d) the AF state.", "The purple (green) lines are contours for up (down) spin density.", "The black arrows represent the spin moment directions of Cr.", "(e) Charge density plots and separated energy levels of the four molecular orbitals nearest to the Fermi level in an isolated pyrazine molecule, from a single k-point calculation.", "Figure: Schematic plot of the effective Cr-Cr FM interactions via the molecular orbitals of the pyrazine ions in monolayer CrCl$_2$(pyrazine)$_2$ .", "In order to study which orbitals the pyrazine bands near the Fermi level originate from, we carry out a calculation for an isolated pyrazine molecule.", "The four energy levels near the Fermi level and the corresponding partial charge densities are listed in Fig. REF (e).", "The energy levels fully occupied below the Fermi level should become deep valence bands in CrCl$_2$(pyrazine)$_2$ due to the higher chemical potential.", "Among the empty energy levels above the Fermi level, the lowest one, at 3.1 eV, is of concern here.", "This energy level, consisting of N 2$p_z$ and C 2$p_z$ , is a molecular orbital which exists in organic cyclic compounds such as benzene or pyrazine.", "[44] As discussed above, in CrCl$_2$(pyrazine)$_2$ one electron per Cr is transferred to the pyrazines, and, considering the resulting shift of the Fermi level, this electron should occupy the lowest unoccupied orbital.", "Due to the two Cr and four pyrazines in a cell, there exist four such orbitals, which indicates the same origin for all the bands of concern.", "To verify this, we plot the partial charge density for the bands near the Fermi level in monolayer CrCl$_2$(pyrazine)$_2$ (see the insets of Fig. REF (a) and (b)), and find that all the charge density is similar to the isolated pyrazine molecular orbital except for a small contribution from Cr.", "A $p$ -$d$ hybridization between the pyrazine molecular orbitals and Cr $t_{2g}$ opens small energy splittings, as shown in Fig. REF .", "Then the large Cr polarization in the FM state gives rise to an exchange splitting, which results in a half-metallic character.", "According to the Goodenough-Kanamori-Anderson (GKA) rules, a collinear Cr$^{3+}$ -ligand-Cr$^{3+}$ superexchange should be AF.", "Interestingly, what we find in CrCl$_2$(pyrazine)$_2$ is FM instead.", "This is because, different from common cases in
which no magnetization exists on the ligands, e.g., O$^{2-}$ , here the pyrazine ligands have magnetic moments, which give rise to the Cr-pyrazine coupling.", "Owing to the half occupation of the Cr $t_{2g}$ shell, the Cr-pyrazine direct exchange must be AF.", "Therefore, the Cr-Cr FM coupling is stabilized, since FM allows electron itinerancy and gains much kinetic energy[5], as shown in Fig. REF ." ], [ "Carrier Doping", "The magnetism of 2D materials may be tuned by carrier or electrostatic doping[45], [46].", "Here, as the CrCl$_2$(pyrazine)$_2$ monolayer is a potential spintronic material, the possibility of enhancing its FM coupling is of particular interest.", "For the two Cr and four pyrazines in a cell, there exist four molecular orbitals near the Fermi level which are occupied by two electrons, and the polarization of the pyrazines by the Cr$^{3+}$ spin=3/2 gives rise to an exchange splitting with the down-spin levels being lower in energy, see Fig. 7.", "Thus, doping of $\\pm $ 1 electron/f.u. ($\\pm $ 2 electrons/cell) would completely occupy or deplete these four down-spin bands.", "Therefore, electron doping from 0 to 1e/f.u. will increase the number of electrons hopping between pyrazine and Cr and, accordingly, gain more kinetic energy to further stabilize the FM state.", "But this would decrease the total magnetic moment, as Cr and pyrazines have opposite spins.", "An even higher electron doping would occupy the four up-spin bands.", "Then the magnetic coupling between Cr and pyrazines will decrease, and this would reduce the Cr-Cr FM coupling.", "In contrast, hole doping from 0 to –1e/f.u. will reduce the magnetic coupling between Cr and pyrazines, and in particular, the hole doping of –1e/f.u. will make the pyrazines formally nonmagnetic; then the tiny superexchange will give a weak Cr-Cr AF coupling.", "An even higher hole doping might deplete the deep valence bands, which seems unrealistic and therefore is not discussed here.", "Table: Relative total energies $\\Delta E$ (meV/f.u.), total and local spin moments (${\\rm \\mu _B}$ ) for the carrier-doped FM and AF states.", "We now perform calculations for the carrier-doped CrCl$_2$(pyrazine)$_2$ monolayer, with electron doping from 0 (pure) to 1.5 e/f.u., or hole doping from 0 to –1 e/f.u., both in steps of 0.25 e/f.u.", "The doping effect is simulated by adding or removing electrons in the unit cell, which is then neutralized by a background charge.", "As seen in Table 3, our calculations indeed confirm that the FM stability first increases but then decreases with increasing electron doping from 0 to +1.5 e/f.u., and the FM ground state is most stable against the AF state, by 319 meV/f.u., at the +1 e/f.u. doping.", "The local spin moment of Cr basically stays constant during the doping from 0 to +1.5 e/f.u., and the doped electrons fill up the ligands.", "In the FM ground state, the N atom (and the pyrazine) carries an increasing spin moment upon electron doping from 0 to 1 e/f.u., and the increasing down-spin density at the N atoms is clearly observed in Figs. 8(a) and 8(c).", "A reverse process occurs for hole doping, and the Cr-Cr FM coupling decreases for hole doping from 0 to –1 e/f.u., as seen in Table 3.", "The holes doped into the ligands reduce the spin moment of the N atom (and the pyrazine), giving a lower down-spin density at the N atoms, see, e.g., Fig. 8(e).", "Note that for the hole doping of –0.75 e/f.u., there is 0.25 e/f.u. remaining in the pyrazine
bands, and this still gives a strong Cr-Cr FM coupling (FM stability against AF by 56 meV/f.u.)", "as the magnetic coupling between Cr and pyrazines is quite effective.", "However, for the –1 e/f.u.", "doping, the pyrazine bands are completely depleted and become formally nonmagnetic, and then the above FM coupling is no longer effective, but there is now a weak superexchange Cr-Cr AF coupling (7 meV/f.u.", "AF stability against FM).", "Therefore, a FM-AF transition point could be very close to the –1 e/f.u.", "doping.", "Figure: Spin density plots of (a) 1 electron/f.u.", "FM, (b) 1 electron/f.u.", "AF, (c) 0.5 electron/f.u.", "FM, (d) 0.5 electron/f.u.", "AF, (e) 0.5 hole/f.u.", "FM, (f) 0.5 hole/f.u.", "AF, (h) 1 hole/f.u.", "FM and (h) 1 hole/f.u.", "AF states of monolayer CrCl 2 _2(pyrazine) 2 _2.", "The purple (green) lines are contours for up (down) spin density.", "The black arrows represent the spin moment directions of Cr.As seen above, we predict a significantly enhanced FM coupling in the electron-doped monolayer CrCl$_2$ (pyrazine)$_2$ with the optimal doping of 1 e/f.u.", "Moreover, a magnetic transition from FM to AF is predicted for hole doping very close to –1 e/f.u.", "Then we establish the picture of the FM interaction via the spin-polarized molecular orbitals of the pyrazine ligands.", "Thus, the CrCl$_2$ (pyrazine)$_2$ monolayer, having a tunable FM half-metallicity, could be an appealing 2D spintronic material." ], [ "Conclusions", "In summary, using density functional calculations and a magnetic picture, we confirm the half-metallicity in bulk CrCl$_2$ (pyrazine)$_2$ , and its strong intralayer FM and weak interlayer FM.", "These results agree well with the very recent experiments.", "Our calculations show that the monolayer CrCl$_2$ (pyrazine)$_2$ could be exfoliated from the bulk, and that its 2D FM half-metallicity remains robust.", "Moreover, we provide a picture about the molecular orbitals of the pyrazine ligands and the magnetic couplings.", "Based on this, we find that the electron doping can significantly enhance the FM coupling, but that the hole doping may even drive a FM-AF transition.", "Therefore, the monolayer CrCl$_2$ (pyrazine)$_2$ seems to be an appealing 2D spintronic material.", "This work highlights the vital role of the organic ligands, and it suggests that 2D hybrid materials represent an interesting new platform with tunable electronic and magnetic properties which still need to be fully explored.", "This work was supported by the NSF of China (Grant No.11674064) and by the National Key Research and Development Program of China (Grant No.", "2016YFA0300700)." ] ]
2107.01862
[ [ "A solvable class of non-Markovian quantum multipartite dynamics" ], [ "Abstract We study a class of multipartite open quantum dynamics for systems of arbitrary number of qubits.", "The non-Markovian quantum master equation can involve arbitrary single or multipartite and time-dependent dissipative coupling mechanisms, expressed in terms of strings of Pauli operators.", "We formulate the general constraints that guarantee the complete positivity of this dynamics.", "We characterize in detail underlying mechanisms that lead to memory effects, together with properties of the dynamics encoded in the associated system rates.", "We specifically derive multipartite \"eternal\" non-Markovian master equations that we term hyperbolic and trigonometric due to the time dependence of their rates.", "For these models we identify a transition between positive and periodically divergent rates.", "We also study non-Markovian effects through an operational (measurement-based) memory witness approach." ], [ "Introduction", "In the theory of open quantum systems, the formulation of quantum Markovian master equations is completely determined by the theory of quantum semigroups [1].", "In contrast, the study of non-Markovian memory effects presents two problems.", "The first one is that the most general structure of a quantum master equation that captures memory effects, and at the same time is consistent with the completely positive (CP) condition of the solution map [2], [3], [4], is not known.", "The second one is that different inequivalent memory witnesses can be used to define and measure non-Markovian effects [5], [6].", "The first problem has been known for many years.", "In fact, arbitrary non-Markovian quantum master equations may lead to unphysical solutions [7], [8], [9], [10] where the average state (the density matrix) being not positive definite.", "For tackling this issue a broad class of phenomenological and theoretical approaches has been formulated [3], dealing with both time-convoluted and convolutionless master equations [11].", "Examples include the dynamics induced by stochastic Hamiltonians defined by non-white noises [12], phenomenological single memory kernels [13], [14], [15], [16], interaction with incoherent degrees of freedom [17], [18], [19], [22], [20], [21] and arbitrary ancilla systems [23], [24], related quantum collisional models [25], [26], [29], [27], [30], [31], [32], [28], quantum generalizations of semi-Markov processes [33], [34], and random unitary dynamics [35], [36], together with some exact derivations from underlying (microscopic or effective) unitary dynamics [37], [38], [39], [45], [41], [42], [43], [40], [44].", "Despite these advances [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [22], [20], [21], [25], [26], [29], [27], [30], [31], [32], [28], [33], [34], [35], [36], [37], [38], [39], [45], [41], [42], [43], [40], [44], [23], [24] most studies of non-Markovian evolutions are restricted in general to single or bipartite systems.", "In fact, in general checking the CP condition of the dynamics is a non trivial task, whose difficulty in turn increases with the system's Hilbert space dimension.", "However, quantum information intrinsically requires multipartite processing, and as a consequence the formulation of multipartite non-Markovian dynamics is of interest from both theoretical and practical points of view.", "Our main goal in this paper is to formulate and study a class of solvable multipartite non-Markovian master 
equations.", "The class of systems we consider are defined in terms an arbitrary number $N$ of qubits, whose interaction with the environment can be taken into account through arbitrary Pauli channels.", "The evolution of the system's density matrix $\\rho _{t}$ is given by the time-local master equation $(d/dt)\\rho _{t}=\\mathcal {L}[\\rho _{t}],$ where the generator of the evolution has the general structure $\\ \\mathcal {L}[\\bullet ]\\!", "&=&\\!\\!\\!\\sum _{\\begin{array}{c} i=1,\\cdots N \\\\ \\alpha =x,y,z\\end{array}}\\!\\!\\Gamma _{i}^{\\alpha }(t)(\\sigma _{i}^{\\alpha }\\bullet \\sigma _{i}^{\\alpha }-\\bullet )\\!", "\\\\&&\\!\\!\\!\\!+\\!\\!\\!\\!\\sum _{\\begin{array}{c} i=1,\\cdots N \\\\ \\alpha ,\\beta =x,y,z\\end{array}}\\!\\!\\Gamma _{i}^{\\alpha \\beta }(t)(\\sigma _{i}^{\\alpha }\\sigma _{i+1}^{\\beta }\\bullet \\sigma _{i+1}^{\\beta }\\sigma _{i}^{\\alpha }-\\bullet ) \\\\&&\\!\\!\\!\\!+\\!\\!\\!\\!\\sum _{\\begin{array}{c} i=1,\\cdots N \\\\ \\alpha ,\\beta ,\\gamma =x,y,z\\end{array}}\\!\\!\\!\\!\\Gamma _{i}^{\\alpha \\beta \\gamma }(t)(\\sigma _{i}^{\\alpha }\\sigma _{i+1}^{\\beta }\\sigma _{i+2}^{\\gamma }\\bullet \\sigma _{i+2}^{\\gamma }\\sigma _{i+1}^{\\beta }\\sigma _{i}^{\\alpha }-\\bullet ) \\\\&&\\!\\!\\!\\!+\\cdots .", "$ Here, $\\sigma _{i}^{\\alpha }$ is the $\\alpha $ -th Pauli operator $(\\alpha =x,y,z)$ acting on qubit $i$ , while $\\Gamma _{i}^{\\alpha \\cdots \\beta }(t)$ define local and bipartite time-dependent (coupling) rates.", "In general, these rate functions may take both positive and negative values.", "The problem is to characterize which constraints must be fulfilled by them in order to obtain physically valid solutions.", "Interestingly, the resolution of this issue leads us to consider all possible multipartite interaction terms, that is, decoherence channels that involve coupling between an arbitrary number of qubits.", "We also explore which rates emerge when the memory effects arise from different underlying mechanisms based on coupling with incoherent degrees of freedom [20], [21].", "The explicit formulation of an operational (measurement based) memory witness [46], [47], [48] further provides an alternative characterization of non-Markovian effects.", "As a specific example we study a family of “hyperbolic” and “trigonometric” eternal multipartite non-Markovian master equations where some rates are negative or develop divergences at all times, respectively.", "These cases provide a non-trivial extension and generalization of previous results valid for single systems [49].", "The paper is structured as follows.", "In Sec.", "II we present the general class of multipartite dynamics we consider, characterizing solution of the master equation, resolving in consequence the constraints that guarantee the CP condition of the map.", "General properties are derived for this class of models.", "In Sec.", "III the eternal multipartite dynamics are characterized.", "In Sec.", "IV we study memory effects through an operational memory witness.", "In Sec.", "V we provide our Conclusions.", "The Appendixes give details of derivations and also obtain the rates associated to different underlying memory mechanisms." 
], [ "Multipartite dynamics", "The system of interest consists of an arbitrary number $N$ of qubits.", "For notational convenience we define a set of Pauli strings $S_{\\mathbf {a}}\\equiv \\sigma _{a_{1}}\\otimes \\sigma _{a_{2}}\\otimes \\sigma _{a_{N}},$ each one associated to the vector $\\mathbf {a}=(a_{1},a_{2},\\cdots ,a_{N}).$ Each component $a_{k}$ $(k=1,2,\\cdots N)$ assumes the values $a_{k}=(0,1,2,3)\\leftrightarrow (\\mathrm {I},\\sigma _{x},\\sigma _{y},\\sigma _{z}),$ each one being associated to the (two-dimensional) identity matrix and the standard three Pauli matrices.", "The evolution of the system's density matrix $\\rho _{t}$ is written in a local-in-time way.", "Arbitrary multipartite decoherence channels are considered, $\\frac{d}{dt}\\rho _{t}=\\mathcal {L}[\\rho _{t}]=\\sum _{ \\mathbf {a\\ne 0}}\\gamma _{t}^{\\mathbf {a}}(S_{\\mathbf {a}}\\rho _{t}S_{\\mathbf {a}}-\\rho _{t}).$ The set of functions $\\lbrace \\gamma _{t}^{\\mathbf {a}}\\rbrace $ define the rates associated to the multipartite Pauli channel.", "In general, there are $4^{N}-1$ different rate functions, as the vector $\\mathbf {0}=(0,0,\\cdots ,0)$ is associated to the identity operator in the full Hilbert space.", "Our goal is to characterize the different aspects of this general evolution.", "A time-convoluted formulation of the above dynamics is provided in Appendix A." ], [ "Subsystem dynamics", "Given the evolution above, we ask about the dynamics of any particular subsystem.", "Introducing the splitting $\\mathbf {a}=(\\mathbf {a}_{\\mathbf {s}},\\mathbf {a}_{\\mathbf {e}}),$ where $\\mathbf {a}_{\\mathbf {s}}$ corresponds to the set of local operators that define the marginal Pauli string of the subsystem of interest, and $\\mathbf {a}_{\\mathbf {e}}$ that of the rest of qubits (now considered as part of the environment), from Eq.", "(REF ) the subsystem density matrix $\\rho _{t}^{\\mathbf {s}}=\\mathrm {Tr}_{\\mathbf {e}}[\\rho _{t}]$ (where $\\mathrm {Tr}[\\bullet ]$ is the trace operation) reads $\\frac{d}{dt}\\rho _{t}^{\\mathbf {s}}=\\sum _{\\mathbf {a}_{\\mathbf {s}}}\\gamma _{t}^{\\mathbf {a}_{\\mathbf {s}}}(S_{\\mathbf {a}_{\\mathbf {s}}}\\rho _{t}^{\\mathbf {s}}S_{\\mathbf {a}_{\\mathbf {s}}}-\\rho _{t}^{\\mathbf {s}}),\\ \\ \\ \\ \\ \\ \\gamma _{t}^{\\mathbf {a}_{\\mathbf {s}}}\\equiv \\sum _{\\mathbf {a}_{\\mathbf {e}}}\\gamma _{t}^{\\mathbf {a}_{\\mathbf {s}},\\mathbf {a}_{\\mathbf {e}}}.$ From this equation we conclude that any subsystem, even when in general is correlated with the complementary part, has an independent self-evolution.", "In addition, the structure of this evolution belongs to the same class as that of the full system [Eq.", "(REF )].", "Consequently, the following results can be particularized for any subsystem of arbitrary size." 
], [ "Solution map and completely positive condition", "We now show that by using the method of damping bases or spectral decomposition [50], the solution map $\\rho _{0}\\rightarrow \\rho _{t}$ corresponding to Eq.", "(REF ) can be obtained in an exact way.", "In order the apply this technique, first we establish a set of relations fulfilled by the (two dimensional) Pauli operators.", "Maintaining the notation $(\\sigma _{0},\\sigma _{1},\\sigma _{2},\\sigma _{3})\\leftrightarrow (\\mathrm {I},\\sigma _{x},\\sigma _{y},\\sigma _{z}),$ it is easy to check that $\\sigma _{a}\\mathrm {Tr}[\\sigma _{a}\\bullet ]=\\frac{1}{2}\\sum _{b}H_{ab}(\\sigma _{b}\\bullet \\sigma _{b}), $ where the input $[\\bullet ]$ is an arbitrary two dimensional operator and $b=0,1,2,3.$ The inverse relation reads $\\sigma _{a}\\bullet \\sigma _{a}=\\frac{1}{2}\\sum _{b}H_{ab}\\ \\sigma _{b}\\mathrm {Tr}[\\sigma _{b}\\bullet ].", "$ In these expressions, the coefficients $\\lbrace H_{ab}\\rbrace $ define a four dimensional Hadamard matrix $H,$ which reads $H\\equiv \\left(\\begin{array}{cccc}1 & 1 & 1 & 1 \\\\1 & 1 & -1 & -1 \\\\1 & -1 & 1 & -1 \\\\1 & -1 & -1 & 1\\end{array}\\right) .", "$ In deriving Eq.", "(REF ), we used that its inverse reads $H^{-1}=H/4.$ Also notice that $H=H^T.$ Now, we introduce an extra rate $\\gamma _{t}^{\\mathbf {0}},$ which is associated to the identity string in the full Hilbert space $\\gamma _{t}^{\\mathbf {0}}\\equiv -\\sum _{\\mathbf {a\\ne 0}}\\gamma _{t}^{\\mathbf {a}}.", "$ With this definition, the Lindbladian-like structure of Eq.", "(REF )] can straightforwardly be written as $\\mathcal {L}[\\bullet ]=\\sum _{\\mathbf {a}}\\gamma _{t}^{\\mathbf {a}}(S_{\\mathbf {a}}\\bullet S_{\\mathbf {a}}),$ where the sum now includes the (identity) string $\\mathbf {a=0}$ .", "Written in this way, applying the “vectorial extension” of Eq.", "(REF ) to the Hilbert space of $N$ qubits, it follows that $\\mathcal {L}[\\bullet ]=\\frac{1}{2^{N}}\\sum _{\\mathbf {a}} S_{\\mathbf {a}}\\mathrm {Tr}[ S_{\\mathbf {a}}\\bullet ]\\sum _{\\mathbf {b}}H_{\\mathbf {ab}}\\gamma _{t}^{\\mathbf {b}},$ where $H_{\\mathbf {ab}}\\equiv H_{a_{1}b_{1}}H_{a_{2}b_{2}}\\cdots H_{a_{N}b_{N}}$ can be read as the matrix elements of the external product of $N$ single Hadamard matrices, cf.", "Eq.", "(REF ).", "From this last expression, by using that $\\mathrm {Tr}[S_{\\mathbf {a}}S_{\\mathbf {b}}]=2^{N}\\delta _{\\mathbf {a},\\mathbf {b}},$ it is straightforward to determine the eigenvalues and eigenoperators of $\\mathcal {L}[\\bullet ].$ They read $\\mathcal {L}[S_{\\mathbf {a}}]=\\mu _{t}^{\\mathbf {a}} S_{\\mathbf {a}},\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\mu _{t}^{\\mathbf {a}}=\\sum _{\\mathbf {b}}H_{\\mathbf {ab}}\\gamma _{t}^{\\mathbf {b}}.", "$ Consequently, any Pauli string $S_{\\mathbf {a}}$ is a right eigenoperator with eigenvalue $\\mu _{t}^{\\mathbf {a}}.$ Given that $\\mathcal {L}[\\bullet ]$ also defines the adjoint evolution (as the “jump operators” are Hermitian) [50], $S_{\\mathbf {a}}$ is also a left eigenoperator.", "Notice also that by using the inverse of the Hadamard matrix, the inverse relation $\\gamma _{t}^{\\mathbf {a}}=\\sum _{\\mathbf {b}}H_{\\mathbf {ab}}\\mu _{t}^{\\mathbf {b}}/4^{N}$ follows.", "From the method of damping bases [50], Eq.", "(REF ) allows us to write the solution of Eq.", "(REF ) as $\\rho _{t}=\\frac{1}{2^{N}}\\sum _{\\mathbf {a}}\\exp \\left[ \\int _{0}^{t}dt^{\\prime }\\mu _{t^{\\prime }}^{\\mathbf {a}}\\right] S_{\\mathbf {a}}\\mathrm {Tr}[S_{\\mathbf {a}}\\rho _{0}].", 
"$ One can see that the conditions $\\mathrm {Tr}[\\rho _{t}]=\\mathrm {Tr}[\\rho _{0}]=1$ are satisfied after noting that $\\mathrm {Tr}[S_{\\mathbf {a}}]=2^{N}\\delta _{\\mathbf {a},\\mathbf {0}}$ and $\\mu _{t^{\\prime }}^{\\mathbf {0}}=0.$ This last equality follows from Eqs.", "(REF ) and (REF ) jointly with the property $H_{\\mathbf {0b}}=1$ $\\forall \\mathbf {b.", "}$ By using the vectorial extension of Eq.", "(REF ), we get the density matrix written in a Kraus representation [1], [2], $\\rho _{t}=\\sum _{\\mathbf {a}}p_{t}^{\\mathbf {a}}(S_{\\mathbf {a}}\\rho _{0}S_{\\mathbf {a}}).", "$ The weights are $p_{t}^{\\mathbf {a}}=4^{-N}\\sum _{\\mathbf {b}}H_{\\mathbf {ab}}\\exp [\\int _{0}^{t}dt^{\\prime }\\mu _{t^{\\prime }}^{\\mathbf {b}}],$ which from Eq.", "(REF ) can explicitly be written in terms of the time-dependent rates as $p_{t}^{\\mathbf {a}}=\\frac{1}{4^{N}}\\sum _{\\mathbf {b}}H_{\\mathbf {ab}}\\exp \\left[\\sum _{\\mathbf {c}}H_{\\mathbf {bc}}\\int _{0}^{t}dt^{\\prime }\\gamma _{t^{\\prime }}^{\\mathbf {c}}\\right] .", "$ The final expressions (REF ) and (REF ) are the main results of this section.", "They completely characterize the solution map in terms of the set of rates $\\lbrace \\gamma _{t}^{\\mathbf {a}}\\rbrace $ and the initial condition $\\rho _{0}.$ In addition, they naturally provide a constraint that the rates must to fulfill in order to obtain a CP map, that is, one that gives physical solution.", "In fact, the Kraus representation theorem [1], [2] implies the conditions $0\\le p_{t}^{\\mathbf {a}}\\le 1,$ which means that $\\lbrace p_{t}^{\\mathbf {a}}\\rbrace $ are a set of normalized probabilities.", "In the single qubit case $(N=1),$ previously obtained constraints are recovered [35].", "In the general case, $4^{N}$ inequalities must be fulfilled.", "We notice that a sufficient, but not necessary, condition is $\\int _{0}^{t}dt^{\\prime }\\gamma _{t^{\\prime }}^{\\mathbf {a}}\\ge 0$ $\\forall \\mathbf {a\\ne 0.", "}$ In fact, this constraint implies that all eigenvalues, cf.", "Eq.", "(REF ), satisfy $\\mu _{t}^{\\mathbf {a}}\\le 0$ $(\\mathbf {a}\\ne \\mathbf {0}).$ Consequently, taking an arbitrary but fixed time $t,$ the solution (REF ) of the non-Markovian dynamics, via the association $\\int _{0}^{t}dt^{\\prime }\\mu _{t^{\\prime }}^{\\mathbf {a}}=t\\mu _{M}^{\\mathbf {a}},$ is equivalent to the solution of a (well behaved) Markovian dynamics generated by a Lindbladian with eigenvalues $\\lbrace \\mu _{M}^{\\mathbf {a}}\\rbrace .$" ], [ "Non-Markovianity and time-dependent rates", "Different (inequivalent) memory witnesses based only on the system propagator can be used to define non-Markovianity [5], [6] such as for example the trace distance between two different initial conditions [51] or those based on the $k$ -positivity of the solution map [52].", "Here, as the dynamics is written naturally in a canonical form [49], memory effects can also be defined by the negativity of the time-dependent rates $\\lbrace \\gamma _{t}^{\\mathbf {a}}\\rbrace .$ In this way, it is of interest to determine these elements for any well behaved solution defined by the probabilities $\\lbrace p_{t}^{\\mathbf {a}}\\rbrace $ in Eq.", "(REF ).", "We can invert Eq.", "(REF ), $\\mu _{t}^{\\mathbf {a}}=\\frac{d}{dt}\\ln \\left[ \\sum _{\\mathbf {b}}H_{\\mathbf {ab}}p_{t}^{\\mathbf {b}}\\right] ,$ and using Eq.", "(REF ) we get explicit expressions for the set of rates $\\lbrace \\gamma _{t}^{\\mathbf {a}}\\rbrace $ in terms of the normalized time-dependent weights 
$0\\le p_{t}^{\\mathbf {c}}\\le 1$ , $\\gamma _{t}^{\\mathbf {a}}=\\frac{1}{4^{N}}\\sum _{\\mathbf {b}}H_{\\mathbf {ab}}\\frac{d}{dt}\\ln \\left[ \\sum _{\\mathbf {c}}H_{\\mathbf {bc}}p_{t}^{\\mathbf {c}}\\right] .", "$ The signs of $\\lbrace \\gamma _{t}^{\\mathbf {a}}\\rbrace $ can be taken as a signature of departure from a Markovian regime [49].", "Alternatively, in Sec.", "V we study operational measures for non-Markovianity.", "We notice that Eqs.", "(REF ) and (REF ) provide a multipartite generalization of the case $N=1$ studied in Ref.", "[35]." ], [ "Additivity of non-Markovian master equations", "Given two sets of (arbitrary) normalized probabilities $\\lbrace p_{t}^{\\mathbf {a}}\\rbrace $ and $\\lbrace \\tilde{p}_{t}^{\\mathbf {a}}\\rbrace ,$ the relation (REF ) allows us to obtain the corresponding sets of rates $\\lbrace \\gamma _{t}^{\\mathbf {a}}\\rbrace $ and $\\lbrace \\tilde{\\gamma }_{t}^{\\mathbf {a}}\\rbrace .$ From these we can obtain a new master equation defined by Eq.", "(REF ) with rates $\\lbrace \\gamma _{t}^{\\mathbf {a}}+\\tilde{\\gamma }_{t}^{\\mathbf {a}}\\rbrace $ .", "In fact, it is always possible to associate a set of probabilities $\\lbrace q_{t}^{\\mathbf {a}}\\rbrace $ to these added rates, that is, $\\lbrace p_{t}^{\\mathbf {a}}\\rbrace \\leftrightarrow \\lbrace \\gamma _{t}^{\\mathbf {a}}\\rbrace ,\\ \\ \\ \\lbrace \\tilde{p}_{t}^{\\mathbf {a}}\\rbrace \\leftrightarrow \\lbrace \\tilde{\\gamma }_{t}^{\\mathbf {a}}\\rbrace ,\\ \\ \\Rightarrow \\ \\ \\exists \\lbrace q_{t}^{\\mathbf {a}}\\rbrace \\leftrightarrow \\lbrace \\gamma _{t}^{\\mathbf {a}}+\\tilde{\\gamma }_{t}^{\\mathbf {a}}\\rbrace .$ Consequently, as occurs to Markovian Lindblad equations [2], for our class of models arbitrary well behaved evolutions (defined by a given set of rates) can be added in an arbitrary way.", "The validity of this result follows from the commutation of two arbitrary propagators, Eq.", "(REF ), a property supported by the relation $S_{\\mathbf {a}}S_{\\mathbf {b}}\\bullet S_{\\mathbf {b}}S_{\\mathbf {a}}=S_{\\mathbf {b}}S_{\\mathbf {a}}\\bullet S_{\\mathbf {a}}S_{\\mathbf {b}}=S_{\\mathbf {c}}\\bullet S_{\\mathbf {c}}^{\\dag }, $ which is valid for arbitrary Pauli strings $S_\\mathbf {a}$ and $S_\\mathbf {b},$ and where $S_{\\mathbf {c}}=S_{\\mathbf {a}}S_{\\mathbf {b}}$ or equivalently $S_{\\mathbf {c}}=S_{\\mathbf {b}}S_{\\mathbf {a}}.$ Eq.", "(REF ) can be straightforwardly demonstrated from Eq.", "(REF )." 
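To make these expressions concrete, the following sketch (our own illustration, not code from the paper) evaluates the weights $p_{t}^{\\mathbf {a}}$ from a given set of integrated rates using the $N$ -fold tensor product of the Hadamard matrix, and checks that the weights are non-negative and normalized (the complete positivity condition); since the arrays have $4^{N}$ entries, it is only practical for small $N$ .

```python
from functools import reduce
import numpy as np

H1 = np.array([[1, 1, 1, 1],
               [1, 1, -1, -1],
               [1, -1, 1, -1],
               [1, -1, -1, 1]], dtype=float)

def hadamard(N):
    """N-fold tensor product H_{ab} = H_{a_1 b_1} ... H_{a_N b_N}."""
    return reduce(np.kron, [H1] * N)

def weights_from_integrated_rates(gamma_int, N):
    """p_t^a = 4^{-N} sum_b H_{ab} exp( sum_c H_{bc} int_0^t gamma^c dt' ).

    gamma_int: length 4**N array of integrated rates; the a = 0 entry is fixed
    internally through gamma^0 = -sum_{a != 0} gamma^a.
    """
    H = hadamard(N)
    g = np.array(gamma_int, dtype=float)
    g[0] = -g[1:].sum()
    mu_int = H @ g                      # integrated eigenvalues of the generator
    return (H @ np.exp(mu_int)) / 4**N

# Example: N = 2 qubits, a single constant ZZ channel with rate 0.3 integrated up to t = 2.
N, t = 2, 2.0
gamma_int = np.zeros(4**N)
gamma_int[3 * 4 + 3] = 0.3 * t          # multi-index a = (3, 3) in base-4 ordering
p = weights_from_integrated_rates(gamma_int, N)
assert np.all(p >= -1e-12) and abs(p.sum() - 1.0) < 1e-12   # CP condition and normalization
```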
], [ "Coupling with incoherent degrees of freedom", "Memory effects are induced whenever extra degrees of freedom are traced out.", "Here, we consider a general coupling with incoherent degrees of freedom.", "Based on Ref.", "[17], the more general case can always be described by writing the system density matrix $\\rho _{t}$ and the probabilities of the incoherent system $\\lbrace q_{t}^{\\mathbf {h}}\\rbrace $ as $\\rho _{t}=\\sum _{\\mathbf {h}}\\rho _{t}^{\\mathbf {h}},\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ q_{t}^{\\mathbf {h}}=\\mathrm {Tr}[\\rho _{t}^{\\mathbf {h}}],$ where the auxiliary states $\\lbrace \\rho _{t}^{\\mathbf {h}}\\rbrace $ correspond to the system state given that the extra (hidden) incoherent degrees of freedom are in the particular state $\\mathbf {h.}$ The evolution of the states $\\lbrace \\rho _{t}^{\\mathbf {h}}\\rbrace $ may involve coupling between all of them [17].", "Given the structure Eq.", "(REF ), each auxiliary state $\\rho _{t}^{\\mathbf {h}}$ must to assume the form $\\rho _{t}^{\\mathbf {h}}=\\sum _{\\mathbf {\\alpha }}g_{\\mathbf {\\alpha }}^{\\mathbf {h}}(t)(S_{\\mathbf {\\alpha }}\\rho _{0}S_{\\mathbf {\\alpha }}), $ where the parameter $\\mathbf {\\alpha }$ runs over a set of Pauli strings that depends on each specific problem.", "The functions $g_{\\mathbf {\\alpha }}^{\\mathbf {h}}(t)$ in turn obey a classical master equation whose structure also depends on each specific model.", "The initial conditions read $\\rho _{0}^{\\mathbf {h}}=\\rho _{0}q_{0}^{\\mathbf {h}},$ where $\\rho _{0}$ is the initial system state and $q_{0}^{\\mathbf {h}}$ is the initial probability of the incoherent degrees of freedom.", "In fact, at time $t,$ $q_{t}^{\\mathbf {h}}=\\sum _{\\mathbf {\\alpha }}g_{\\mathbf {\\alpha }}^{\\mathbf {h}}(t).$ On the other hand, the system density matrix evolution [Eq.", "(REF )] is defined by the probabilities $p_{t}^{\\mathbf {\\alpha }}=\\sum _{\\mathbf {h}}g_{\\mathbf {\\alpha }}^{\\mathbf {h}}(t).$ A general treatment is not possible.", "Relevant examples are worked out in Appendix B such as a mapping with a classical Markovian master equation, stochastic Hamiltonians, and statistical mixtures of Markovian evolutions.", "In all cases, explicit expressions for the rates [Eq.", "(REF )] can be obtained.", "A representative class of dynamics is studied in the next section." 
], [ "Multipartite eternal non-Markovianity", "For a single qubit, $N=1$ , the system density matrix evolution, Eq.", "(REF ), may involve rates that are negative at all times.", "This property was called “eternal non-Markovianity” [49], [20].", "The results of Appendix B [see Eqs.", "(REF ), (REF ), and (REF )] and Appendix C [see Eqs.", "(REF ) and ()] guarantee that this property also emerges in multipartite dynamics, $N>1$ , which have $4^{N}-1$ rates.", "In order to provide simple (multipartite) examples, here we restrict to the case where the evolution is $\\mathcal {L}[\\bullet ] &=&\\Big {\\lbrace }\\gamma _{t}^{\\underline{\\mathbf {a}}}(S_{\\underline{\\mathbf {a}}}\\bullet S_{\\underline{\\mathbf {a}}}-\\bullet )+\\gamma _{t}^{\\underline{\\mathbf {b}}}(S_{\\underline{\\mathbf {b}}}\\bullet S_{\\underline{\\mathbf {b}}}-\\bullet ) \\\\&&+\\gamma _{t}^{\\underline{\\mathbf {c}}}(S_{\\underline{\\mathbf {c}}}\\bullet S_{\\underline{\\mathbf {c}}}^{\\dagger }-\\bullet )\\Big {\\rbrace },$ where $S_{\\underline{\\mathbf {a}}}\\mathbf {\\ }$ and $S_{\\underline{\\mathbf {b}}}$ are two arbitrary multipartite Pauli strings, while $S_{\\underline{\\mathbf {c}}}=S_{\\underline{\\mathbf {a}}}S_{\\underline{\\mathbf {b}}}.", "$ Depending on the time-dependence of the rates we define what we term “hyperbolic” and “trigonometric” cases of eternal non-Markovianity." ], [ "Hyperbolic eternal non-Markovianity", "The system density matrix is written as the addition of two auxiliary states $\\rho _{t}=\\rho _{t}^{(1)}+\\rho _{t}^{(2)}$ [Eq.", "(REF )], whose evolution reads $\\frac{d\\rho _{t}^{(1)}}{dt} &=&-\\gamma \\rho _{t}^{(1)}+\\gamma S_{\\underline{\\mathbf {a}}}\\rho _{t}^{(1)}S_{\\underline{\\mathbf {a}}}, \\\\\\frac{d\\rho _{t}^{(2)}}{dt} &=&-\\varphi \\rho _{t}^{(2)}+\\varphi S_{\\underline{\\mathbf {b}}}\\rho _{t}^{(2)}S_{\\underline{\\mathbf {b}}}.$ The initial conditions for the auxiliary states are taken to be $\\rho _{0}^{(1)}=\\rho _{0}^{(2)}=\\rho _{0}/2.$ Given that the auxiliary states do not couples, the rates of the non-Markovian evolution follow Eq.", "(REF ) with probabilities $p_{t}^{\\mathbf {a}}=[p_{1}^{\\mathbf {a}}(t)+p_{2}^{\\mathbf {a}}(t)]/2$ , with the sets $\\lbrace p_{1}^{\\mathbf {a}}(t)\\rbrace $ and $\\lbrace p_{2}^{\\mathbf {a}}(t)\\rbrace ,$ via Eq.", "(REF ), associated to $\\rho _{t}^{(1)}$ and $\\rho _{t}^{(2)}$ , respectively.", "Taking $\\varphi =\\gamma ,$ we get [see also derivation from Eq.", "(REF ) in Appendix C] $\\gamma _{t}^{\\underline{\\mathbf {a}}}=\\gamma _{t}^{\\underline{\\mathbf {b}}}=\\frac{1}{2}\\gamma ,\\ \\ \\ \\ \\ \\ \\ \\gamma _{t}^{\\underline{\\mathbf {c}}}=-\\frac{1}{2}\\gamma \\tanh (\\gamma t).", "$ This result provides a multipartite generalization, $(N>1)$ , of the single qubit case $(N=1)$ studied in Ref. [49].", "Similarly to the results of Ref.", "[20] we notice that in this particular case alternative dynamics such as the mapping to a classical master equation [see Eq.", "(REF )] and stochastic Hamiltonians [see Eq.", "(REF )] also lead to the same rates." 
], [ "Trigonometric eternal non-Markovianity", "Based on Eq.", "(REF ), instead of the evolution (REF ), here we consider $\\frac{d\\rho _{t}^{(1)}}{dt} &=&-\\gamma \\rho _{t}^{(1)}+\\varphi S_{\\underline{\\mathbf {b}}}\\rho _{t}^{(2)}S_{\\underline{\\mathbf {b}}}, \\\\\\frac{d\\rho _{t}^{(2)}}{dt} &=&-\\varphi \\rho _{t}^{(2)}+\\gamma S_{\\underline{\\mathbf {a}}}\\rho _{t}^{(1)}S_{\\underline{\\mathbf {a}}}.$ The initial conditions are taken as $\\rho _{0}^{(1)}=[\\varphi /(\\varphi +\\gamma )]\\rho _{0}$ and $\\rho _{0}^{(2)}=[\\gamma /(\\varphi +\\gamma )]\\rho _{0},$ where $\\rho _{0}$ is the system initial state.", "Notice that the incoherent transitions $(1)\\leftrightarrow (2)$ imply the system transformations $\\rho \\rightarrow S_{\\underline{\\mathbf {a}}/\\underline{\\mathbf {b}}} \\rho S_{\\underline{\\mathbf {a}}/\\underline{\\mathbf {b}}}$ .", "Taking into account Eq.", "(REF ), in order to solve Eq.", "(REF ) each auxiliary state is written as $(h=1,2)$ $\\rho _{t}^{(h)}=g_{\\mathbf {0}}^{(h)}\\rho _{0}+g_{\\underline{\\mathbf {a}}}^{(h)}S_{\\underline{\\mathbf {a}}}\\rho _{0}S_{\\underline{\\mathbf {a}}}+g_{\\underline{\\mathbf {b}}}^{(h)}S_{\\underline{\\mathbf {b}}}\\rho _{0}S_{\\underline{\\mathbf {b}}}+g_{\\underline{\\mathbf {c}}}^{(h)} S_{\\underline{\\mathbf {c}}}\\rho _{0}S_{\\underline{\\mathbf {c}}}^{\\dagger },$ where as before $S_{\\underline{\\mathbf {c}}}=S_{\\underline{\\mathbf {a}}}S_{\\underline{\\mathbf {b}}}$ , and $g_{\\mathbf {\\alpha }}^{(h)}$ are time-dependent functions.", "Using Eq.", "(REF ), it is possible to derive a classical master equation for the (eight) $g$ -functions which involves coupling between pairs of them.", "The corresponding solutions allow to obtain the probabilities $p_{t}^{\\mathbf {a}}=\\sum _{h}g_{\\mathbf {a}}^{(h)}(t).$ Finally, the rates associated to the non-Markovian evolution follow from Eq.", "(REF ) $\\gamma _{t}^{\\underline{\\mathbf {a}}}=\\gamma _{t}^{\\underline{\\mathbf {b}}}=\\frac{\\varphi \\gamma (\\varphi +\\gamma )}{e^{t(\\varphi +\\gamma )}(\\varphi -\\gamma )^{2}+4\\varphi \\gamma }.", "$ Furthermore, $\\gamma _{t}^{\\underline{\\mathbf {c}}} &=&\\varphi \\gamma \\lbrace \\delta _{+}\\Upsilon ^{2}(1-e^{t\\Upsilon })-\\delta _{-}^{2}[\\Upsilon (1+e^{t\\Upsilon }) \\\\&&+e^{t\\delta _{+}}[(\\delta _{+}-\\Upsilon )-e^{t\\Upsilon }(\\delta _{+}+\\Upsilon )]]\\rbrace \\\\&&\\times \\lbrace (e^{t\\delta _{+}}\\delta _{-}^{2}+4\\varphi \\gamma )[(1+e^{t\\Upsilon })\\Upsilon \\delta _{+} \\\\&&-(1-e^{t\\Upsilon })\\delta _{-}^{2}]\\rbrace ^{-1}, $ where the coefficients are $\\Upsilon \\equiv (\\varphi ^{2}-6\\varphi \\gamma +\\gamma ^{2})^{1/2},\\ \\ \\ \\ \\ \\ \\ \\delta _{\\pm }\\equiv \\varphi \\pm \\gamma .", "$ Depending on the ratio $\\varphi /\\gamma ,$ different characteristic behaviors are obtained.", "In Fig.", "1 we plot both rates.", "Consistent with Eq.", "(REF ), $\\gamma _{t}^{\\underline{\\mathbf {a}}}$ and $\\gamma _{t}^{\\underline{\\mathbf {b}}}$ are always positive functions.", "However, this is not the case for $\\gamma _{t}^{\\underline{\\mathbf {c}}},$ Eq.", "(REF ), which depending on $\\varphi /\\gamma $ develops a transition between positivity [Figs.", "5(a) and 5(d)] and a periodic divergent behavior [Figs.", "5(b) and 5(c)].", "From Eq.", "(REF ) we deduce that this change occurs in the boundaries of the interval $3-\\sqrt{8}<(\\varphi /\\gamma )<3+\\sqrt{8}$ , with $\\gamma _{t}^{\\underline{\\mathbf {c}}}$ developing divergences in this interval, while being positive outside it.", 
"From the plots it is also evident that $\\gamma _{t}^{\\underline{\\mathbf {a}}}$ and $\\gamma _{t}^{\\underline{\\mathbf {b}}}$ approach a constant when $\\varphi \\approx \\gamma .$ In fact, when $\\varphi =\\gamma ,$ the previous expressions reduce to $\\gamma _{t}^{\\underline{\\mathbf {a}}}=\\gamma _{t}^{\\underline{\\mathbf {b}}}=\\frac{1}{2}\\gamma ,\\ \\ \\ \\ \\ \\ \\ \\gamma _{t}^{\\underline{\\mathbf {c}}}=\\frac{1}{2}\\gamma \\tan (\\gamma t).", "$ Based on Eq.", "(REF ), we name this case as a trigonometric eternal non-Markovian.", "The probabilities $\\lbrace p_{t}^{\\mathbf {a}}\\rbrace $ [Eq.", "(REF )] also assume a simple form, $p_{t}^{\\mathbf {0}} &=&\\frac{1}{2}e^{-\\gamma t}[\\cosh (\\gamma t)+\\cos (\\gamma t)], \\\\p_{t}^{\\underline{\\mathbf {a}}} &=&p_{t}^{\\underline{\\mathbf {b}}}=\\frac{1}{4}[1-e^{-2\\gamma t}], \\\\p_{t}^{\\underline{\\mathbf {c}}} &=&\\frac{1}{2}e^{-\\gamma t}[\\cosh (\\gamma t)-\\cos (\\gamma t)].$ These solutions apply to arbitrary multipartite Pauli strings $\\underline{\\mathbf {a}}$ and $\\underline{\\mathbf {b}}.$ Figure: Time dependent rates [Eqs.", "() and ()] corresponding to the multipartite trigonometric eternalnon-Markovian evolution [Eq.", "()] for differentvalues of the rate ratio ϕ/γ.\\protect \\varphi /\\protect \\gamma ." ], [ "Adding non-Markovian evolutions", "Added to the previous examples (see also Appendix C), the possibility of adding arbitrary (well defined) rates [Eq.", "(REF )] gives us a procedure for constructing a large family of well behaved dynamics.", "For example, we write $\\mathcal {L}[\\bullet ] &=&\\sum _{i=1}^{N}\\frac{\\gamma _{i}}{2}\\Big {\\lbrace }(\\sigma _{i}^{x}\\sigma _{i+1}^{x}\\bullet \\sigma _{i+1}^{x}\\sigma _{i}^{x}-\\bullet )\\\\&&\\ \\ \\ \\ \\ \\ \\ +(\\sigma _{i}^{y}\\sigma _{i+1}^{y}\\bullet \\sigma _{i+1}^{y}\\sigma _{i}^{y}-\\bullet ) \\\\&&\\ \\ \\ \\ \\ \\ \\ +f_{i}(t)(\\sigma _{i}^{z}\\sigma _{i+1}^{z}\\bullet \\sigma _{i+1}^{z}\\sigma _{i}^{z}-\\bullet )\\Big {\\rbrace }.", "$ In this traslational invariant generator (say with periodic boundaries in one dimension), we may chose $f_{i}(t)=-\\tanh (\\gamma _{i}t)$ or alternatively $f_{i}(t)=\\tan (\\gamma _{i}t)$ [see Eqs.", "(REF ) and (REF ) respectively].", "One interesting aspect of using additivity for constructing multipartite evolutions is that, even when the underlying evolutions have a clear memory mechanism (see also Appendix B), the resulting dynamics does not necessarily.", "For example, while our approach guarantees that Eq.", "(REF ) leads to a completely positive dynamics [with solution defined by Eqs.", "(REF ) and (REF )] it is not evident which underlying processes may lead to this master equation.", "In addition, in general there may be subsystems that are coupled between then, one part being Markovian and the other non-Markovian.", "For example, take $f_{i}(t)=\\gamma _{i}/2$ for $i\\le N_{0},$ and $f_{i}(t)=\\tan (\\gamma _{i}t)$ for $i>N_{0}.$" ], [ "Operational memory witness", "An alternative and deeper characterization of quantum non-Markovianity can be obtained by defining memory effects via measurement based approaches [46], [47], [48].", "Here, we study a conditional past-future (CPF) correlation [47].", "This object relies on performing three successive measurement of arbitrary system observables and calculating the correlation between the last (future) and first (past) outcomes conditioned to a given intermediate (present) outcome.", "For Markovian dynamics it vanishes identically, while memory effects 
leads to a non null CPF correlation.", "The measurements, denoted in successive order by $\\underline{\\mathbf {x}},$ $\\underline{\\mathbf {y}},$ and $\\underline{\\mathbf {z}},$ correspond to observations of three Hermitian operators $S_{\\underline{\\mathbf {m}}}$ with eigenvectors $\\lbrace |m\\rangle \\rbrace $ and eigenvalues $\\lbrace m\\rbrace ,$ $S_{\\underline{\\mathbf {m}}}|m\\rangle =m|m\\rangle ,\\ \\ \\ \\ \\ \\underline{\\mathbf {m}}=\\underline{\\mathbf {x}},\\underline{\\mathbf {y}},\\underline{\\mathbf {z}}.", "$ The CPF correlation then reads [47] $C_{pf}(t,\\tau )|_{y}=\\sum _{z,x}zx[P(z,x|y)-P(z|y)P(x|y)],$ where $\\lbrace x\\rbrace ,$ $\\lbrace y\\rbrace ,$ and $\\lbrace z\\rbrace $ denotes the three sets of successive outcomes (operators eigenvalues), while $t$ and $\\tau $ are the (first and second) time intervals between the successive measurements.", "With $P(u|v)$ we denote the conditional probability of $u$ given $v.$ All probabilities appearing in Eq.", "(REF ) can be determine from the (outcomes) joint probability $P(z,y,x)\\leftrightarrow P(z,t+\\tau ,y,t;x,0),$ which in turn can be calculated after knowing the underlying system-environment dynamics.", "In Appendix D we show that $P(z,y,x)$ and $C_{pf}(t,\\tau )|_{y}$ can be calculated exactly assuming that memory effects emerge due to the coupling with incoherent degrees of freedom [Eqs.", "(REF ) and (REF )].", "Each specific model [see examples (REF ) and (REF )] is completely defined by the set of functions $\\lbrace g_{\\mathbf {\\alpha }}^{\\mathbf {h}}(t)\\rbrace $ [Eq.", "(REF )].", "Given that they obey a (linear) classical master equation, they can be written as $g_{\\mathbf {\\alpha }}^{\\mathbf {h}}(t)=\\sum _{\\mathbf {h}^{\\prime }}f_{\\mathbf {\\alpha }}^{\\mathbf {hh}^{\\prime }}(t)q_{0}^{\\mathbf {h}^{\\prime }}\\equiv (\\mathbf {h}|\\mathbb {F}_{\\mathbf {\\alpha }}(t)|q_{0}), $ where the set of functions $\\lbrace f_{\\mathbf {\\alpha }}^{\\mathbf {hh}^{\\prime }}(t)\\rbrace $ are independent of the initial conditions $\\lbrace q_{0}^{\\mathbf {h}}\\rbrace .$ Furthermore, for notational simplicity, we introduced a vectorial orthogonal base $\\lbrace |\\mathbf {h})\\rbrace $ for the incoherent degrees of freedom, such that $f_{\\mathbf {\\alpha }}^{\\mathbf {hh}^{\\prime }}(t)\\leftrightarrow (\\mathbf {h}|\\mathbb {F}_{\\mathbf {\\alpha }}(t)|\\mathbf {h}^{\\prime })$ and $q_{0}^{\\mathbf {h}}\\leftrightarrow (\\mathbf {h}|q_{0}).$ The observables $S_{\\underline{\\mathbf {m}}}$ [Eq.", "(REF )] may in principle be defined by arbitrary linear combinations of Pauli strings $\\lbrace S_{\\mathbf {a}}\\rbrace .$ Here, for simplicity they are defined by a unique Pauli string.", "In this case, the general expression for the CPF correlation [Eq.", "(REF )] reduces to (see Appendix D) $C_{pf}(t,\\tau )|_{y}=\\delta _{\\underline{\\mathbf {z}},\\mathbf {y}}\\delta _{\\underline{\\mathbf {y}},\\underline{\\mathbf {x}}}\\frac{(1-\\langle x\\rangle ^{2})}{[2^{N}P(y)]^{2}}\\sum _{\\mathbf {\\alpha ,\\beta }}H_{\\underline{\\mathbf {y}}\\mathbf {\\alpha }}H_{\\underline{\\mathbf {y}}\\mathbf {\\beta }}[(1|\\mathbb {F}_{\\mathbf {\\alpha }}(\\tau )\\mathbb {F}_{\\mathbf {\\beta }}(t)|q_{0})-(1|\\mathbb {F}_{\\mathbf {\\alpha }}(\\tau )|q_{t})(1|\\mathbb {F}_{\\mathbf {\\beta }}(t)|q_{0})].$ In here, $|q_{t})=\\sum _{\\mathbf {\\alpha }}\\mathbb {F}_{\\mathbf {\\alpha }}(t)|q_{0})$ define the probabilities of the incoherent degrees of freedom at time $t,$ while $(1|\\equiv \\sum _{\\mathbf {h}}(\\mathbf 
{h}|.$ Furthermore, $\\langle x\\rangle \\equiv \\sum _{x}xP(x)$ where $P(x)=\\langle x|\\rho _{0}|x\\rangle .$ Finally, $P(y)$ is the probability for the outcomes of the second measurement.", "It is $P(y)=\\frac{1}{2^{N}}\\Big {[}1+y\\langle x\\rangle \\delta _{\\underline{\\mathbf {y}},\\underline{\\mathbf {x}}}\\sum _{\\mathbf {\\alpha }}H_{\\underline{\\mathbf {y}}\\mathbf {\\alpha }}(1|\\mathbb {F}_{\\mathbf {\\alpha }}(t)|q_{0})\\Big {]}.$ The term $\\delta _{\\underline{\\mathbf {z}},\\mathbf {y}}\\delta _{\\underline{\\mathbf {y}},\\underline{\\mathbf {x}}}$ in Eq.", "(REF ) implies that, for observables defined by unique Pauli strings, memory effects are detected only when the three observables are the same $S_{\\underline{\\mathbf {x}}}=S_{\\underline{\\mathbf {y}}}=S_{\\underline{\\mathbf {z}}}.$ This constraint does not emerge when the observables correspond to other basis of operators (see for example Ref. [48]).", "The general solution Eq.", "(REF ) can be specified for the trigonometric eternal model [Eq.", "(REF )].", "Stationary initial conditions are assumed, $|q_{t})=|q_{0}),$ with $q_{0}^{(1)}=\\varphi /(\\varphi +\\gamma )$ and $q_{0}^{(2)}=\\gamma /(\\varphi +\\gamma ).$ For simplicity, first we consider the case $N=1.$ When the three measurements are performed in direction $\\mathbf {\\underline{a}}$ or $\\mathbf {\\underline{b}}$ we get $C_{pf}(t,\\tau )|_{y} &=&-\\frac{(1-\\langle x\\rangle ^{2})}{[2^{N}P(y)]^{2}}\\exp \\left[ -(t+\\tau )(\\gamma +\\varphi )/2\\right] \\\\&&\\!\\!\\!\\!\\!\\!\\!\\times \\frac{4^{2}\\gamma ^{2}\\varphi ^{2}}{(\\gamma +\\varphi )^{2}\\Upsilon ^{2}}\\sinh \\left( \\frac{\\Upsilon t}{2}\\right) \\sinh \\left(\\frac{\\Upsilon \\tau }{2}\\right) .\\ \\ \\ \\ \\ $ When the three measurements are performed in direction $\\mathbf {\\underline{c},}$ we get $C_{pf}(t,\\tau )|_{y} &=&\\frac{(1-\\langle x\\rangle ^{2})}{[2^{N}P(y)]^{2}}\\ \\frac{4\\gamma \\varphi (\\gamma -\\varphi )^{2}}{(\\gamma +\\varphi )^{4}} \\\\&&\\times [1-e^{-\\tau (\\gamma +\\varphi )}][1-e^{-t(\\gamma +\\varphi )}].$ These results allow us to analyze the transition to divergent rates [Eq.", "(REF )] in a complementary way.", "In Fig.", "2 we plot the CPF correlation Eq.", "(REF ).", "We observe that when the rate $\\gamma _{t}^{\\underline{\\mathbf {c}}}$ does not develop divergences [Fig.", "2(a)], the CPF correlation is negative for any value of the time intervals $t$ and $\\tau .$ On the other hand, in the interval $3-\\sqrt{8}<(\\varphi /\\gamma )<3+\\sqrt{8}$ where the rate $\\gamma _{t}^{\\underline{\\mathbf {c}}}$ develops divergences [Fig.", "2(b)], the CPF correlation presents oscillations between positive an negative values.", "For the model (REF ), the generalization to $N>1,$ independently of the chosen observables, always lead to Eq.", "(REF ) or Eq.", "(REF ).", "This results follows by noting that in Eq.", "(REF ) the coefficients $\\mathbf {\\alpha }$ and $\\mathbf {\\beta }$ only assume the four values $\\mathbf {\\alpha ,\\beta }=(\\mathbf {0,}\\underline{\\mathbf {a}},\\underline{\\mathbf {b}},\\underline{\\mathbf {c}})$ [see Eq.", "(REF )].", "Furthermore, using that $H_{\\underline{\\mathbf {y}}\\mathbf {\\alpha }}H_{\\underline{\\mathbf {y}}\\mathbf {\\beta }}=H_{\\underline{\\mathbf {y}}\\mathbf {\\gamma }},$ where $\\mathbf {\\gamma }$ corresponds to the string $S_{\\mathbf {\\gamma }}=S_{\\mathbf {\\alpha }}S_{\\mathbf {\\beta }},$ for a fixed $\\underline{\\mathbf {y}}$ $(\\underline{\\mathbf {y}}=\\underline{\\mathbf {a}},$ or 
$\\underline{\\mathbf {y}}=\\underline{\\mathbf {b}},$ or $\\underline{\\mathbf {y}}=\\underline{\\mathbf {c}})$ the four matrix elements $H_{\\underline{\\mathbf {y}}\\mathbf {\\gamma }},$ similarly to the case $N=1,$ can only assume the values $(\\pm 1),$ which always lead to Eq.", "(REF ) or Eq.", "(REF ).", "On the other hand, for $N>1$ accidentally it may also happen that the CPF correlation vanishes.", "This occur because we assumed that the incoherent degrees of freedom are stationary, which implies $\\sum _{\\mathbf {\\alpha ,\\beta }}[(1|\\mathbb {F}_{\\mathbf {\\alpha }}(\\tau )\\mathbb {F}_{\\mathbf {\\beta }}(t)|q_{0})-(1|\\mathbb {F}_{\\mathbf {\\alpha }}(\\tau )|q_{t})(1|\\mathbb {F}_{\\mathbf {\\beta }}(t)|q_{0})]=0.$ Thus, when $H_{\\underline{\\mathbf {y}}\\mathbf {\\gamma }}=1,$ it follows that $C_{pf}(t,\\tau )|_{y}=0$ [see Eq.", "(REF )].", "These accidental cases can always be surpassed by considering arbitrary measurement operators written as linear combinations of the Pauli strings.", "Figure: CPF correlation [Eq.", "()] corresponding tothe eternal non-Markovian trigonometric model [Eq.", "()], for different values of ϕ/γ\\protect \\varphi /\\protect \\gamma and measurement time-interval relations τ/t.\\protect \\tau /t.", "In allcases, the system initial condition is such that 〈x〉=0.\\langle x\\rangle =0.As an example, we consider a bipartite case where $S_{\\underline{\\mathbf {a}}}=\\sigma _{1}^{x}\\sigma _{2}^{x},$ $S_{\\underline{\\mathbf {b}}}=\\sigma _{1}^{y}\\sigma _{2}^{y},$ and $S_{\\underline{\\mathbf {c}}}=\\sigma _{1}^{z}\\sigma _{2}^{z}.$ The CPF correlation Eq.", "(REF ) is obtained when the three measurement are defined by any of the bipartite operators $S_{\\underline{\\mathbf {m}}}=(\\sigma _{1}^{x}\\sigma _{2}^{z}), $ $(\\sigma _{1}^{y}\\sigma _{2}^{z}),$ $(\\sigma _{1}^{z}\\sigma _{2}^{x}),$ $(\\sigma _{1}^{z}\\sigma _{2}^{y}),$ Eq.", "(REF ) is obtained when $S_{\\underline{\\mathbf {m}}}=(\\sigma _{1}^{x}\\sigma _{2}^{y}),(\\sigma _{1}^{y}\\sigma _{2}^{x}),$ while $C_{pf}(t,\\tau )|_{y}=0$ when $S_{\\underline{\\mathbf {m}}}=(\\sigma _{1}^{x}\\sigma _{2}^{x}),$ $(\\sigma _{1}^{y}\\sigma _{2}^{y}),$ $(\\sigma _{1}^{z}\\sigma _{2}^{z}).$" ], [ "Summary and conclusions", "We studied a class of solvable multipartite non-Markovian master equations where the system consists of an arbitrary number of qubits and whose structure is written in terms of arbitrary multipartite Pauli coupling terms.", "Starting from a local-in-time representation of the evolution, we found the explicit solution for the system density matrix, which in turn allowed us to formulate the constraints that time-dependent rates must obey in order to guarantee the completely positive condition of the solution map.", "We also found explicit analytical expressions for the time-dependent rates associated to a given evolution.", "Their sign (positive or negative) can be used as an indicator of non-Markovianity.", "Memory effects were also characterized by operational methods, where a CPF correlation defined by a set of three consecutive system measurements becomes a memory witness.", "We showed that this quantity can be obtained in an exact way for arbitrary measurement processes and arbitrary interaction with incoherent degrees of freedom.", "As application of the previous results, we presented simple underlying dynamics that lead to the phenomenon of eternal non-Markovianity, that is, multipartite dynamics where some rates depart at all times from that of a Markovian regime.", "Both 
hyperbolic and trigonometric cases were established, characterized by a rate that is negative at all times or that develops periodical divergences.", "Even when these features develop, the CPF correlation is always a smooth function.", "In the Appendices we found the rates associated to different underlying memory mechanisms such as a mapping with a classical master equation, stochastic Hamiltonians and statistical superpositions of Markovian dynamics.", "We showed that under particular conditions different mechanisms may lead to the same time-dependent rates.", "Nevertheless, these accidental degeneracies do not occur in general.", "We also found that the phenomenon of eternal non-Markovianity becomes quite common in multipartite dynamics.", "The class of models we studied here provides a useful solvable framework for studying quantum non-Markovianity in multipartite settings.", "This allows to formulate a wide range of well-behaved multipartite non-Markovian master equations.", "The study of diverse memory witness can be tackled starting from here.", "Our results also lead to interesting questions such as determining which kind of underlying dynamics can be associated to an arbitrary non-Markovian multipartite Pauli evolution." ], [ "Acknowledgments", "AAB acknowledges support from CONICET, Argentina.", "JPG acknowledges financial support from EPSRC Grant no.", "EP/R04421X/1 and the Leverhulme Trust Grant No.", "RPG-2018-181." ], [ "Time-convoluted approach", "Instead of the local-in-time formulation defined by Eq.", "(REF ), alternatively one may start with a time convoluted evolution $\\frac{d}{dt}\\rho _{t}=\\mathcal {L}[\\rho _{t}]=\\sum _{\\begin{array}{c} \\mathbf {a} \\\\\\mathbf {a\\ne 0}\\end{array}}\\int _{0}^{t}dt^{\\prime }k_{\\mathbf {a}}(t-t^{\\prime })(S_{\\mathbf {a}}\\rho _{t^{\\prime }}S_{\\mathbf {a}}-\\rho _{t^{\\prime }}),$ where the set of time-dependent kernels $\\lbrace k_{\\mathbf {a}}(t)\\rbrace $ must to be constrained such that the solution map is CP.", "Similarly to Sec.", "II, by defining the kernel $k_{\\mathbf {0}}(t)\\equiv -\\sum _{\\mathbf {a\\ (a\\ne 0)}}k_{\\mathbf {a}}(t),$ here the weights of the solution (REF ) can be written as $p_{t}^{\\mathbf {a}}=\\frac{1}{4^{N}}\\sum _{\\mathbf {b}}H_{\\mathbf {ab}}\\lambda _{\\mathbf {b}}(t),$ where the coefficients $\\lambda _{\\mathbf {b}}(t)$ obey the evolution $\\frac{d}{dt}\\lambda _{\\mathbf {b}}(t)=\\int _{0}^{t}dt^{\\prime }k_{\\mathbf {b}}(t-t^{\\prime })\\lambda _{\\mathbf {b}}(t^{\\prime }).$ The inverse relations for determining the kernels $\\lbrace k_{\\mathbf {a}}(t)\\rbrace $ a function of probabilities $\\lbrace p_{\\mathbf {a}}(t)\\rbrace $ can be written in a Laplace domain $[f(z)=\\int _{0}^{\\infty }dte^{-zt}f(t)]$ as $k_{\\mathbf {a}}(z)=\\frac{z\\lambda _{\\mathbf {a}}(z)-1}{\\lambda _{\\mathbf {a}}(z)},\\ \\ \\ \\ \\ \\ \\ \\lambda _{\\mathbf {a}}(z)=\\sum _{\\mathbf {b}}H_{\\mathbf {ab}}p_{\\mathbf {b}}(z).$" ], [ "Non-Markovian underlying mechanisms", "Here, we consider different mechanisms that lead to memory effects.", "The present analysis provides nontrivial multipartite extensions of some results developed in Ref.", "[20] for the case $N=1.$" ], [ "Mapping with a classical Markovian master equation", "The solution map [Eq.", "(REF )] is defined by a set of normalized probabilities $\\lbrace p_{t}^{\\mathbf {a}}\\rbrace .$ It is possible to formulate an underlying mechanism such that $\\lbrace p_{t}^{\\mathbf {a}}\\rbrace $ correspond to the solution of an arbitrary Markovian 
classical master equation with $4^{N}$ different states.", "We assume that the system density matrix interacts with an incoherent system whose states, in contrast to Eq.", "(REF ), can be put in one-to-one correspondence with the Pauli string vectors $\\lbrace \\mathbf {a}\\rbrace .$ Therefore, the system density matrix $\\rho _{t}$ can be written in terms of a set of auxiliary states $\\lbrace \\rho _{t}^{\\mathbf {a}}\\rbrace $ [17] such that $\\rho _{t}=\\sum _{\\mathbf {a}}\\rho _{t}^{\\mathbf {a}}.", "$ The evolution of the auxiliary states is Markovian and involves coupling between all of them.", "We write $\\frac{d}{dt}\\rho _{t}^{\\mathbf {a}}=-\\sum _{\\begin{array}{c} \\mathbf {b} \\\\ \\mathbf {b}\\ne \\mathbf {a}\\end{array}}\\phi _{\\mathbf {ba}}\\rho _{t}^{\\mathbf {a}}+\\sum _{\\begin{array}{c}\\mathbf {b} \\\\ \\mathbf {b}\\ne \\mathbf {a}\\end{array}}\\phi _{\\mathbf {ab}}S_{\\mathbf {a}}S_{\\mathbf {b}}\\rho _{t}^{\\mathbf {b}}S_{\\mathbf {b}}S_{\\mathbf {a}}.$ Here, $\\lbrace \\phi _{\\mathbf {ba}}\\rbrace $ are arbitrary rates.", "The stochastic interpretation of this equation is quite simple.", "Whenever the incoherent system undergoes the transition $\\mathbf {b}\\rightarrow \\mathbf {a},$ the quantum system undergoes the transformation $\\rho \\rightarrow S_{\\mathbf {a}}S_{\\mathbf {b}}\\rho S_{\\mathbf {b}}S_{\\mathbf {a}}.$ Between transition the system is frozen.", "The average system dynamics is given by Eq.", "(REF ), where $\\rho _{t}^{\\mathbf {a}}$ corresponds to the conditional system state given that the incoherent one is in the state associated to $\\mathbf {a}.$ It is simple to check that the solutions $\\lbrace \\rho _{t}^{\\mathbf {a}}\\rbrace $ of Eq.", "(REF ) can be written as $\\rho _{t}^{\\mathbf {a}}=p_{t}^{\\mathbf {a}}(S_{\\mathbf {a}}\\rho _{0}S_{\\mathbf {a}}), $ where the weights $p_{t}^{\\mathbf {a}}$ must to fulfill the classical master equation $\\frac{d}{dt}p_{t}^{\\mathbf {a}}=-\\sum _{\\begin{array}{c} \\mathbf {b} \\\\ \\mathbf {b}\\ne \\mathbf {a}\\end{array}}\\phi _{\\mathbf {ba}}p_{t}^{\\mathbf {a}}+\\sum _{\\begin{array}{c}\\mathbf {b} \\\\ \\mathbf {b}\\ne \\mathbf {a}\\end{array}}\\phi _{\\mathbf {ab}}p_{t}^{\\mathbf {b}}.", "$ Consequently, from Eqs.", "(REF ) and (REF ) we recover the solution Eq.", "(REF ) $[\\rho _{t}=\\sum _{\\mathbf {a}}p_{t}^{\\mathbf {a}}(S_{\\mathbf {a}}\\rho _{0}S_{\\mathbf {a}})],$ where the probabilities $\\lbrace p_{t}^{\\mathbf {a}}\\rbrace $ fulfill a the classical master equation (REF ).", "For consistence, its initial condition must be $p_{0}^{\\mathbf {a}}=\\delta _{\\mathbf {a,0}}.$ Particular case: Given that Eq.", "(REF ) is arbitrary, it is not possible to find a general expression for the rates $\\lbrace \\gamma _{t}^{\\mathbf {a}}\\rbrace $ [Eq.", "(REF )] in terms of the underlying ones $\\lbrace \\phi _{\\mathbf {ba}}\\rbrace .$ Nevertheless, this mapping can be performed, for example, when Eq.", "(REF ) assumes the form $\\frac{d}{dt}p_{t}^{\\mathbf {0}}=-\\phi p_{t}^{\\mathbf {0}}+\\varphi \\sum _{\\begin{array}{c} \\mathbf {a} \\\\ \\mathbf {a\\ne 0}\\end{array}}p_{t}^{\\mathbf {a}},\\ \\ \\ \\ \\ \\frac{d}{dt}p_{t}^{\\mathbf {a}}=-\\varphi p_{t}^{\\mathbf {a}}+x_{\\mathbf {a}}\\phi p_{t}^{\\mathbf {0}},$ where $\\phi $ and $\\varphi $ are arbitrary rates and the weights $\\lbrace x_{\\mathbf {a}}\\rbrace $ satisfies $\\sum _{\\mathbf {a(a}\\ne \\mathbf {0)}}x_{\\mathbf {a}}=1.$ The probabilities, with initial condition $p_{0}^{\\mathbf {a}}=\\delta _{\\mathbf {a,0}},$ can be written 
as $p_{t}^{\\mathbf {a}}=p_{\\infty }^{\\mathbf {a}}[1-\\exp (-\\Phi t)]+\\delta _{\\mathbf {a,0}}\\exp (-\\Phi t), $ where $\\Phi \\equiv (\\phi +\\varphi ),$ and the stationary values are $p_{\\infty }^{\\mathbf {0}}=\\frac{\\varphi }{\\phi +\\varphi },\\ \\ \\ \\ \\ \\ p_{\\infty }^{\\mathbf {a}}=\\frac{x_{\\mathbf {a}}\\phi }{\\phi +\\varphi }.$ From the solutions (REF ), the general expression (REF ), after some calculations steps [53], lead to $\\gamma _{t}^{\\mathbf {a}}=\\frac{1}{4^{N}}\\sum _{\\mathbf {b}}\\frac{\\Phi }{2}H_{\\mathbf {ab}}\\left[ \\tanh \\left( \\frac{t\\Phi }{2}+\\zeta _{\\mathbf {b}}\\right)-1\\right] , $ where the parameters are $\\zeta _{\\mathbf {b}}\\equiv \\frac{1}{2}\\ln \\left( \\frac{h_{\\infty }^{\\mathbf {b}}}{1-h_{\\infty }^{\\mathbf {b}}}\\right) ,\\ \\ \\ \\ \\ \\ \\ \\ \\ h_{\\infty }^{\\mathbf {b}}\\equiv \\sum _{\\mathbf {c}}H_{\\mathbf {bc}}p_{\\infty }^{\\mathbf {c}}.$ It is simple to check that, due to probability normalization, $h_{\\infty }^{\\mathbf {0}}=1.$ Hence, in Eq.", "(REF ) the term with $\\mathbf {b=0}$ cancels out.", "Furthermore if $h_{\\infty }^{\\mathbf {b}}=0,$ it follows $\\tanh (t\\Phi /2+\\zeta _{\\mathbf {b}})\\rightarrow -1.$ In general, the time dependence of the rate $\\gamma _{t}^{\\mathbf {a}}$ arise from a linear combination of hyperbolic tangent functions with coefficient that are $\\pm 1.$ Thus, in general some rates can be negative at any time." ], [ "Stochastic Hamiltonians", "We consider a stochastic evolution, where the system wave vector $|\\psi _{t}\\rangle $ is driven by a stochastic Hamiltonian, $\\frac{d|\\psi _{t}\\rangle }{dt}=-iH_{st}|\\psi _{t}\\rangle =-i\\frac{1}{2}\\xi _{t}^{\\mathbf {\\alpha }}S_{\\mathbf {\\alpha }}|\\psi _{t}\\rangle .$ The Hamiltonian $H_{st}$ is characterized by a noise with an arbitrary statistics but null average $\\langle \\langle \\xi _{t}^{\\mathbf {\\alpha }}\\rangle \\rangle =0.$ The index $\\mathbf {\\alpha \\leftrightarrow \\alpha }_{t}$ run overs all possible Pauli strings.", "Its time variation is very slow such that over a single realization it can be considered as a frozen parameter.", "Thus, the average state $\\rho _{t}^{\\mathbf {\\alpha }}=\\langle \\langle |\\psi _{t}\\rangle \\langle \\psi _{t}|\\rangle \\rangle $ for a given $\\mathbf {\\alpha }$ reads $\\rho _{t}^{\\mathbf {\\alpha }}=(1/2)[1+G_{t}^{\\mathbf {\\alpha }}]\\rho _{0}+(1/2)[1-G_{t}^{\\mathbf {\\alpha }}](S_{\\mathbf {\\alpha }}\\rho _{0}S_{\\mathbf {\\alpha }}),$ where $G_{t}^{\\mathbf {\\alpha }}\\equiv \\Big {\\langle }\\Big {\\langle }\\exp \\Big {(}i\\int _{0}^{t}dt^{\\prime }\\xi _{t^{\\prime }}^{\\mathbf {\\alpha }}\\Big {)}\\Big {\\rangle }\\Big {\\rangle }, $ is the characteristic noise function for a given $\\mathbf {\\alpha .", "}$ After averaging this parameter, the system state can be written as $\\rho _{t}=\\sum _{\\mathbf {\\alpha },(\\mathbf {\\alpha }\\ne \\mathbf {0})}x_{\\mathbf {\\alpha }}\\rho _{t}^{\\mathbf {\\alpha }},$ where $\\sum _{\\mathbf {\\alpha },(\\mathbf {\\alpha }\\ne \\mathbf {0})}x_{\\mathbf {\\alpha }}=1.$ The parameters $\\lbrace x_{\\mathbf {\\alpha }}\\rbrace $ correspond to the statistical weight of each Pauli string during the variation of the coefficient $\\mathbf {\\alpha .", "}$ It is straightforward to check that $\\rho _{t}=\\sum _{\\mathbf {a}}p_{t}^{\\mathbf {a}}(S_{\\mathbf {a}}\\rho _{0}S_{\\mathbf {a}}),$ which recovers Eq.", "(REF ) with $p_{t}^{\\mathbf {0}}=\\frac{1}{2}(1+\\sum _{\\begin{array}{c} \\mathbf {a} \\\\ \\mathbf {a}\\ne \\mathbf 
{0}\\end{array}}x_{\\mathbf {a}}G_{t}^{\\mathbf {a}}),\\ \\ \\ \\ \\ \\ p_{t}^{\\mathbf {a}}=\\frac{x_{\\mathbf {a}}}{2}(1-G_{t}^{\\mathbf {a}}).$ Similarly to the previous model, it is not possible to find a general simple expression for the rates $\\gamma _{t}^{\\mathbf {a}}$ in terms of these probabilities.", "Manageable expressions arise in the following situations.", "Particular cases: If the noise is the same for all “directions” $G_{t}^{\\mathbf {a}}=G_{t},$ from Eqs.", "(REF ) and (REF ), after some algebra [53], we get the rates $\\gamma _{t}^{\\mathbf {a}}=\\frac{1}{4^{N}}\\sum _{\\mathbf {b}}\\frac{\\dot{g}_{t}}{2}H_{\\mathbf {ab}}\\left[ \\tanh \\left( \\frac{g_{t}}{2}+\\zeta _{\\mathbf {b}}\\right) -1\\right] , $ where the scalar functions read $g_{t}=\\ln (1/G_{t}),\\ \\ \\ \\ \\ \\ \\ \\dot{g}_{t}=-\\frac{1}{G_{t}}\\frac{dG_{t}}{dt}, $ and where $\\zeta _{\\mathbf {b}}$ is defined by Eq.", "(REF ) with, instead of Eq.", "(REF ), with $p_{\\infty }^{\\mathbf {0}}=1/2,$ and $p_{\\infty }^{\\mathbf {a}}=x_{\\mathbf {a}}/2.$ For a stationary Gaussian white noise, where $\\langle \\langle \\xi _{t}\\xi _{t^{\\prime }}\\rangle \\rangle =\\Phi \\delta (t-t^{\\prime }),$ Eq.", "(REF ) becomes $G(t)=\\exp (-\\Phi t).$ It is simple to check that in this situation Eq.", "(REF ) recovers the solution (REF ) of the previous model with $\\varphi =\\phi .$ This results show that there are different underlying models that may lead to the same system density matrix evolution.", "This degeneracy is not universal and clearly depends on the underlying parameters.", "For a stationary symmetric dichotomic noise with amplitude $A$ and switching rate $\\eta ,$ the characteristic noise function [Eq.", "(REF )] is $G_{t}=e^{-\\eta t}[\\cosh (\\chi t)+\\frac{\\eta }{\\chi }\\sinh (\\chi t)],\\ \\ \\ \\ \\ \\chi \\equiv \\sqrt{\\eta ^{2}-A^{2}}.$ In contrast to the previous cases, here the rates defined by Eq.", "(REF ) may develop divergences.", "In fact, the functions (REF ) become $g_{t}=\\ln (1/G_{t}),\\ \\ \\ \\ \\ \\ \\ \\dot{g}_{t}=\\frac{A^{2}}{\\eta +\\chi [1/\\tanh (\\chi t)]}.$ Hence, divergent rates are found whenever $\\eta <A.$" ], [ "Statistical mixtures of Markovian evolutions", "Departures with respect to a Markovian regime emerge whenever the system evolution is written as the statistical superposition of different Markovian propagators.", "Hence, we write $p_{t}^{\\mathbf {a}}=\\sum _{k=1}^{n}q_{k}p_{k}^{\\mathbf {a}}(t),$ where $\\lbrace q_{k}\\rbrace $ are normalized positive weights $(\\sum _{k=1}^{n}q_{k}=1),$ and each set of probabilities $\\lbrace p_{k}^{\\mathbf {a}}(t)\\rbrace $ is associated to a Markovian solution of Eq.", "(REF ) with time-independent positive rates $\\lbrace \\gamma _{k}^{\\mathbf {a}}\\rbrace .$ From Eq.", "(REF ), the non-Markovian evolution is characterized by the rates $\\gamma _{t}^{\\mathbf {a}}=\\frac{1}{4^{N}}\\sum _{\\mathbf {b}}H_{\\mathbf {ab}}\\frac{\\sum _{k=1}^{n}q_{k}\\mu _{k}^{\\mathbf {b}}\\exp (t\\mu _{k}^{\\mathbf {b}})}{\\sum _{k^{\\prime }=1}^{n}q_{k^{\\prime }}\\exp (t\\mu _{k^{\\prime }}^{\\mathbf {b}})}.", "$ where $\\mu _{k}^{\\mathbf {b}}$ are eigenvalues of the $k$ -Markovian dynamics, $\\mu _{k}^{\\mathbf {b}}=\\sum _{\\mathbf {c}}H_{\\mathbf {bc}}\\gamma _{k}^{\\mathbf {c}}.$ The specific properties of these rates strongly depend on the considered Markovian evolutions and statistic weights.", "Particular cases: In the two-state case, $n=2,$ the probabilities are $p_{t}^{\\mathbf {a}}=q_{1}p_{1}^{\\mathbf 
{a}}(t)+q_{2}p_{2}^{\\mathbf {a}}(t),$ where each solution is associated to the rates $\\gamma _{1}^{\\mathbf {a}}$ and $\\gamma _{2}^{\\mathbf {a}},$ and $q_{1}+q_{2}=1.$ From Eq.", "(REF ), after some algebra [53], we get $\\gamma _{t}^{\\mathbf {a}}=\\frac{1}{2}(\\gamma _{1}^{\\mathbf {a}}+\\gamma _{2}^{\\mathbf {a}})+\\frac{1}{4^{N}}\\sum _{\\mathbf {b}}H_{\\mathbf {ab}}\\Delta _{\\mathbf {b}}\\tanh (t\\Delta _{\\mathbf {b}}+\\zeta ), $ where the parameters are $\\Delta _{\\mathbf {b}}\\equiv \\frac{1}{2}\\sum _{\\mathbf {c}}H_{\\mathbf {bc}}(\\gamma _{1}^{\\mathbf {c}}-\\gamma _{2}^{\\mathbf {c}}),\\ \\ \\ \\ \\ \\ \\ \\zeta \\equiv \\frac{1}{2}\\ln (\\frac{q_{1}}{q_{2}}).$ In this case, many rates may also be negative at all times (see next section).", "In the other extreme, a continuos-state case can be considered.", "Thus, Eq.", "(REF ) is rewritten as $p_{t}^{\\mathbf {a}}=\\frac{1}{4^{N}}\\sum _{\\mathbf {b}}H_{\\mathbf {ab}}\\Big {\\langle }\\prod _{\\mathbf {c}}\\exp (tH_{\\mathbf {bc}}\\gamma ^{\\mathbf {c}})\\Big {\\rangle },$ where we used the explicit expression (REF ) and the replacement $\\sum _{k=1}^{n}q_{k}\\rightarrow \\left\\langle \\cdots \\right\\rangle .$ The symbol $\\left\\langle \\cdots \\right\\rangle $ denotes an average over the set of random rates $\\lbrace \\gamma ^{\\mathbf {c}}\\rbrace ,$ each “realization” defining a Markov solution.", "Assuming that all rates are independent random variables it follows that $\\left\\langle \\cdots \\right\\rangle \\rightarrow \\int _{0}^{\\infty }d\\gamma ^{\\mathbf {c}}\\cdots P(\\gamma ^{\\mathbf {c}}),$ where $P(\\gamma ^{\\mathbf {c}})$ is the corresponding probability density.", "By assuming an exponential probability density $P(\\gamma ^{\\mathbf {c}})=\\tau _{\\mathbf {c}}\\exp (-\\gamma ^{\\mathbf {c}}\\tau _{\\mathbf {c}}),$ by using that $\\gamma ^{\\mathbf {0}}=-\\sum _{\\mathbf {c}(\\mathbf {c}\\ne \\mathbf {0})}\\gamma ^{\\mathbf {c}},$ [see Eq.", "(REF )] we get $p_{t}^{\\mathbf {a}}=\\frac{1}{4^{N}}\\sum _{\\mathbf {b}}H_{\\mathbf {ab}}\\prod _{\\begin{array}{c} \\mathbf {c} \\\\ \\mathbf {c}\\ne \\mathbf {0}\\end{array}}\\frac{\\tau _{\\mathbf {c}}}{\\tau _{\\mathbf {c}}+(1-H_{\\mathbf {bc}})t},$ where we have used that $H_{\\mathbf {b0}}=1.$ From Eq.", "(REF ), the corresponding rates associated to the non-Markovian evolution are $\\gamma _{t}^{\\mathbf {a}}=-\\frac{1}{4^{N}}\\sum _{\\mathbf {b}}H_{\\mathbf {ab}}\\sum _{\\begin{array}{c} \\mathbf {c} \\\\ \\mathbf {c}\\ne \\mathbf {0}\\end{array}}\\frac{(1-H_{\\mathbf {bc}})}{\\tau _{\\mathbf {c}}+(1-H_{\\mathbf {bc}})t}.$ We notice that both $\\lbrace p_{t}^{\\mathbf {a}}\\rbrace $ and $\\lbrace \\gamma _{t}^{\\mathbf {a}}\\rbrace $ develop a power-law behavior.", "In spite of this feature the rates are positive at all times, $\\gamma _{t}^{\\mathbf {a}}>0$ $(\\mathbf {a}\\ne \\mathbf {0}).$ While most of the memory witnesses [5], [6] associate this property to a Markovian regime, from operational approaches it is possible to detect and infer the presence of memory effects [47], [48]." 
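As a concrete check of the mixture formula above, the following sketch (our own, not code from the paper) evaluates $\\gamma _{t}^{\\mathbf {a}}$ for $N=1$ and an equal-weight mixture of two Markovian dephasing dynamics. We assume the string ordering $(\\mathbf {0},x,y,z)$ and take $H$ to be the corresponding $\\pm 1$ commutation-sign matrix, consistent with the relation between the probabilities and the map eigenvalues used in the main text.

```python
# Our own sketch: gamma_t^a = 4^{-N} sum_b H_ab [sum_k q_k mu_k^b e^{t mu_k^b}]
#                                             / [sum_k q_k e^{t mu_k^b}],
# with mu_k^b = sum_c H_bc gamma_k^c, evaluated for N = 1.
import numpy as np

H = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1,  1, -1],
              [1, -1, -1,  1]], dtype=float)   # assumed ordering (0, x, y, z)

def mixture_rates(t, q, gammas):
    """q: weights (n,); gammas: Markovian rate vectors (n, 4), gamma^0 = -sum of the rest."""
    mu = gammas @ H.T                       # mu[k, b] = sum_c H[b, c] gammas[k, c]
    w = q[:, None] * np.exp(t * mu)         # q_k exp(t mu_k^b)
    ratio = (mu * w).sum(0) / w.sum(0)      # one value per string b
    return H @ ratio / 4.0                  # gamma_t^a for a = 0, x, y, z

gam = 1.0
gammas = np.array([[-gam, gam, 0.0, 0.0],   # dephasing along x
                   [-gam, 0.0, gam, 0.0]])  # dephasing along y
q = np.array([0.5, 0.5])
for t in (0.1, 1.0, 3.0):
    gx, gy, gz = mixture_rates(t, q, gammas)[1:]
    print(f"t={t:3.1f}  gamma_x={gx:.3f}  gamma_y={gy:.3f}  "
          f"gamma_z={gz:+.3f}  -(gam/2)tanh(gam t)={-0.5*gam*np.tanh(gam*t):+.3f}")
# gamma_x = gamma_y = gam/2 while gamma_z = -(gam/2) tanh(gam t) < 0 for all t > 0:
# the familiar eternal negativity emerges from an equal-weight two-state mixture.
```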
], [ "Bipartite and tripartite eternal non-Markovian evolutions", "Besides the previous examples, the developed approach allow us to show that master equations characterized by eternal non-Markovian effects are quite common for multipartite systems.", "As an example, we consider the statistical superposition of two different Markovian dynamics characterized by the rates $\\gamma _{1}^{\\mathbf {a}}$ and $\\gamma _{2}^{\\mathbf {a}}$ and equal weights $[q_{1}=q_{2}$ in Eq.", "(REF )].", "Taking $\\gamma _{1}^{\\mathbf {a}}=\\gamma (\\delta _{\\mathbf {a},\\underline{\\mathbf {a}}}-\\delta _{\\mathbf {a},\\mathbf {0}}),$ and $\\gamma _{2}^{\\mathbf {a}}=\\gamma (\\delta _{\\mathbf {a},\\underline{\\mathbf {b}}}-\\delta _{\\mathbf {a},\\mathbf {0}}),$ and using that $(H_{\\mathbf {\\alpha }\\underline{\\mathbf {a}}}-H_{\\mathbf {\\alpha }\\underline{\\mathbf {b}}})/2=(\\pm 1,0),$ and $H_{\\mathbf {\\alpha }\\underline{\\mathbf {a}}}H_{\\mathbf {\\alpha }\\underline{\\mathbf {b}}}=H_{\\mathbf {\\alpha }\\underline{\\mathbf {c}}},$ from Eq.", "(REF ) we recover the rates defined in Eq.", "(REF ).", "When each (vectorial) rate involves different Pauli channels more complex expressions are obtained.", "As a first example, take a bipartite system $(N=2)$ with $\\gamma _{1}^{\\mathbf {a}} &=&\\gamma (\\delta _{\\mathbf {a},10}+\\delta _{\\mathbf {a},01}-2\\delta _{\\mathbf {a},00}), \\\\\\gamma _{2}^{\\mathbf {a}} &=&\\gamma (\\delta _{\\mathbf {a},20}+\\delta _{\\mathbf {a},02}-2\\delta _{\\mathbf {a},00}).$ Thus, each dynamics is defined by a local (single) dephasing local mechanism acting alternatively in $x$ - and $y$ -directions.", "From Eq.", "(REF ) we obtain $\\gamma _{t}^{\\mathbf {a}_{0}}=\\frac{1}{2}\\gamma ,\\ \\ \\ \\ \\ \\ \\ \\gamma _{t}^{\\mathbf {a}_{\\pm }}=\\pm \\frac{1}{4}\\gamma \\tanh (2\\gamma t), $ where $\\mathbf {a}_{0}$ and $\\mathbf {a}_{\\pm }$ correspond to the following Pauli strings, $\\mathbf {a}_{0}=(10),\\ (01),$ $(20),$ $(02),$ and $\\mathbf {a}_{+}=(11),\\ (22),$ while $\\mathbf {a}_{-}=(30),$ $(03),$ $(12),$ $(21).$ Furthermore, $\\gamma _{t}^{33}=-\\frac{\\gamma }{4}[2\\tanh (\\gamma t)-\\tanh (2\\gamma t)]=-2\\gamma \\frac{\\sinh ^{4}(\\gamma t)}{\\sinh (4\\gamma t)},$ while $\\gamma _{t}^{\\mathbf {a}}=0$ if $\\mathbf {a}\\ne (\\mathbf {a}_{0},\\mathbf {a}_{+},\\mathbf {a}_{-}).$ There are eleven non-null rates out of the fifteen possible ones, five of them being negative at all times.", "As a second example we consider a tripartite system $(N=3),$ where $\\gamma _{1}^{\\mathbf {a}} &=&\\gamma (\\delta _{\\mathbf {a},110}+\\delta _{\\mathbf {a},101}+\\delta _{\\mathbf {a},011}-3\\delta _{\\mathbf {a},000}), \\\\\\gamma _{2}^{\\mathbf {a}} &=&\\gamma (\\delta _{\\mathbf {a},220}+\\delta _{\\mathbf {a},202}+\\delta _{\\mathbf {a},022}-3\\delta _{\\mathbf {a},000}).$ Hence, each Markovian evolution correspond to dephasing in $x$ - and $y$ -directions but now considering all pairs of bipartite dephasing operators.", "From Eq.", "(REF ) we get $\\gamma _{t}^{\\mathbf {a}_{+}} &=&\\frac{1}{4}\\gamma [2+\\tanh (2\\gamma t)], \\\\\\gamma _{t}^{\\mathbf {a}_{-}} &=&-\\frac{1}{4}\\gamma \\tanh (2\\gamma t),$ where $\\mathbf {a}_{\\pm }$ correspond to the following Pauli strings, $\\mathbf {a}_{+}=(110),\\ (101),$ $(011),\\ (220),\\ (202),\\ (022),$ while $\\mathbf {a}_{-}=(330),$ $(303),$ $(033),$ $(123),$ $(132),$ $(213),$ $(231),$ $(312),$ $(321),$ and $\\gamma _{t}^{\\mathbf {a}}=0$ if $\\mathbf {a}\\ne \\mathbf {a}_{+},\\mathbf {a}_{-}.$ In this case, out of 
sixty-three possible rates, fifteen are non-null, nine of them being negative at all times." ], [ "CPF correlation calculus", "For a system coupled to incoherent degrees of freedom [Eq.", "(REF )], the (bipartite) system-environment state $\\rho _{t}^{se}=\\sum _{\\mathbf {h}}\\rho _{t}^{\\mathbf {h}}|\\mathbf {h}),$ from Eqs.", "(REF ) and (REF ), reads $\\rho _{t}^{se}=\\sum _{\\mathbf {\\alpha }}(S_{\\mathbf {\\alpha }}\\rho _{0}S_{\\mathbf {\\alpha }})\\ \\mathbb {F}_{\\mathbf {\\alpha }}(t)|q_{0}).$ This evolution defines the system-environment dynamics between measurements.", "The measurement of operator $S_{\\underline{\\mathbf {m}}}$ [Eq.", "(REF )] leads to the transformation $\\rho ^{se}=\\sum _{\\mathbf {h}}\\rho ^{\\mathbf {h}}|\\mathbf {h})\\rightarrow |m\\rangle \\langle m||q_{m}),$ where $|q_{m})=\\sum _{\\mathbf {h}}\\ \\langle m|\\rho _{t}^{\\mathbf {h}}|m\\rangle /\\mathrm {Tr}[\\langle m|\\rho _{t}^{\\mathbf {h}}|m\\rangle ]|\\mathbf {h}).$ With these ingredients, the calculation of the joint probability can be performed in a standard way.", "We get, $\\frac{P(z,y,x)}{P(x)}=\\sum _{\\mathbf {\\alpha ,\\beta }}|\\langle z|\\sigma _{\\mathbf {\\alpha }}|y\\rangle |^{2}|\\langle y|S_{\\mathbf {\\beta }}|x\\rangle |^{2}(1|\\mathbb {F}_{\\mathbf {\\alpha }}(\\tau )\\mathbb {F}_{\\mathbf {\\beta }}(t)|q_{0}), $ where $P(x)=\\langle x|\\rho _{0}|x\\rangle $ and $(1|\\equiv \\sum _{\\mathbf {h}}(\\mathbf {h}|.$ This result is valid for arbitrary Hermitian system observables.", "Using Bayes rule, the conditional probabilities that define the CPF correlation [Eq.", "(REF )] can be written as $P(z,x|y)=P(z,y,x)/P(y),$ where $P(y)=\\sum _{z,x}P(z,y,x).$ Furthermore, $P(z|y)=\\sum _{x}P(z,x|y),$ and $P(x|y)=\\sum _{z}P(z,x|y).$ From Eq.", "(REF ), and using $\\sum _{z}z|\\langle z|S_{\\mathbf {\\alpha }}|y\\rangle |^{2} &=&\\langle y|S_{\\mathbf {\\alpha }}S_{\\underline{\\mathbf {z}}}S_{\\mathbf {\\alpha }}|y\\rangle , \\\\\\sum _{x}x|\\langle y|S_{\\mathbf {\\beta }}|x\\rangle |^{2}P(x) &=&\\langle y|S_{\\mathbf {\\beta }}S_{\\underline{\\mathbf {x}}}\\rho _{\\underline{\\mathbf {x}}}S_{\\mathbf {\\beta }}|y\\rangle , \\\\\\sum _{x}|\\langle y|S_{\\mathbf {\\beta }}|x\\rangle |^{2}P(x) &=&\\langle y|S_{\\mathbf {\\beta }}\\rho _{\\underline{\\mathbf {x}}}S_{\\mathbf {\\beta }}|y\\rangle ,$ where the system state $\\rho _{\\underline{\\mathbf {x}}}$ is $\\rho _{\\underline{\\mathbf {x}}}\\equiv \\sum _{x}P(x)\\ |x\\rangle \\langle x|=\\sum _{x}\\langle x|\\rho _{0}|x\\rangle \\ |x\\rangle \\langle x|,$ the CPF correlation can be written as $C_{pf}(t,\\tau )|_{y}=\\frac{1}{P(y)^{2}}\\sum _{\\mathbf {\\alpha ,\\beta ,\\gamma }}\\Theta ^{\\mathbf {\\alpha \\beta \\gamma }}|_{y}\\Lambda _{\\mathbf {\\alpha \\beta \\gamma }}(t,\\tau ).", "$ The coefficients $\\Theta ^{\\mathbf {\\alpha \\beta \\gamma }}|_{y}$ are $\\Theta ^{\\mathbf {\\alpha \\beta \\gamma }}|_{y}=\\langle y|S_{\\mathbf {\\alpha }}S_{\\underline{\\mathbf {z}}}S_{\\mathbf {\\alpha }}|y\\rangle \\langle y|S_{\\mathbf {\\beta }}S_{\\underline{\\mathbf {x}}}\\rho _{\\underline{\\mathbf {x}}}S_{\\mathbf {\\beta }}|y\\rangle \\langle y|\\sigma _{\\mathbf {\\gamma }}\\rho _{\\underline{\\mathbf {x}}}S_{\\mathbf {\\gamma }}|y\\rangle ,$ while the time-dependence follows from $\\Lambda _{\\mathbf {\\alpha \\beta \\gamma }}(t,\\tau ) &=&+(1|\\mathbb {F}_{\\mathbf {\\alpha }}(\\tau )\\mathbb {F}_{\\mathbf {\\beta }}(t)|q_{0})(1|\\mathbb {F}_{\\mathbf {\\gamma }}(t)|q_{0}) \\\\&&-(1|\\mathbb {F}_{\\mathbf {\\alpha }}(\\tau )\\mathbb 
{F}_{\\mathbf {\\gamma }}(t)|q_{0})(1|\\mathbb {F}_{\\mathbf {\\beta }}(t)|q_{0}),$ where $|q_{t})=\\sum _{\\mathbf {\\alpha }}\\mathbb {F}_{\\mathbf {\\alpha }}(t)|q_{0}),$ and the probability $P(y)$ is $P(y)=\\sum _{\\mathbf {\\alpha }}(1|\\mathbb {F}_{\\mathbf {\\alpha }}(t)|q_{0})\\langle y|S_{\\mathbf {\\alpha }}\\rho _{\\underline{\\mathbf {x}}}S_{\\mathbf {\\alpha }}|y\\rangle .$ The expression (REF ) is valid for arbitrary observables $\\sigma _{\\underline{\\mathbf {m}}}$ [Eq.", "(REF )].", "In general, they can be written as linear combinations of Pauli strings $S_{\\mathbf {a}}.$ Assuming, for simplicity, that each $S_{\\underline{\\mathbf {m}}}$ correspond to a unique Pauli string operator, from Eq.", "(REF ) it follows the relations $\\langle y|S_{\\mathbf {\\alpha }}S_{\\underline{\\mathbf {z}}}\\sigma _{\\mathbf {\\alpha }}|y\\rangle &=&H_{\\mathbf {\\alpha }\\underline{\\mathbf {y}}}\\delta _{\\underline{\\mathbf {z}},\\mathbf {y}}a_{y}, \\\\\\langle y|S_{\\mathbf {\\beta }}S_{\\underline{\\mathbf {x}}}\\rho _{\\underline{\\mathbf {x}}}S_{\\mathbf {\\beta }}|y\\rangle &=&\\frac{1}{2^{N}}(H_{\\mathbf {\\beta }\\underline{\\mathbf {y}}}\\delta _{\\underline{\\mathbf {y}},\\underline{\\mathbf {x}}}a_{y}+\\langle x\\rangle ), \\\\\\langle y|S_{\\mathbf {\\gamma }}\\rho _{\\underline{\\mathbf {x}}}\\sigma _{\\mathbf {\\gamma }}|y\\rangle &=&\\frac{1}{2^{N}}(1+H_{\\mathbf {\\gamma }\\underline{\\mathbf {y}}}\\delta _{\\underline{\\mathbf {y}},\\underline{\\mathbf {x}}}a_{y}\\langle x\\rangle ),\\ \\ \\ $ where $\\langle x\\rangle \\equiv \\mathrm {Tr}[S_{\\underline{\\mathbf {x}}}\\rho _{\\underline{\\mathbf {x}}}].$ By introducing these equalities in Eq.", "(REF ), after some algebra we get Eq.", "(REF ).", "Generalization to arbitrary observables can be worked out in a similar way from Eq.", "(REF )." ] ]
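For completeness, the closed-form CPF correlations quoted in the main text for the trigonometric eternal model can be evaluated directly. The short sketch below is our own numerical illustration, with $\\langle x\\rangle =0$ so that the prefactor $(1-\\langle x\\rangle ^{2})/[2^{N}P(y)]^{2}$ reduces to one; it reproduces the sign behaviour discussed around Fig. 2.

```python
# Our own numerical sketch of the closed-form CPF correlations of the
# trigonometric eternal model, assuming <x> = 0 (prefactor equal to one).
import numpy as np

def cpf_ab(t, tau, phi, gam):
    """Three measurements along the a- or b-string."""
    ups = np.sqrt(complex(phi**2 - 6*phi*gam + gam**2))
    pref = np.exp(-(t + tau) * (phi + gam) / 2.0)
    amp = 16.0 * phi**2 * gam**2 / ((phi + gam)**2 * ups**2)
    return (-pref * amp * np.sinh(ups * t / 2) * np.sinh(ups * tau / 2)).real

def cpf_c(t, tau, phi, gam):
    """Three measurements along the c-string; non-negative by construction."""
    amp = 4.0 * phi * gam * (gam - phi)**2 / (phi + gam)**4
    return amp * (1 - np.exp(-tau * (phi + gam))) * (1 - np.exp(-t * (phi + gam)))

ts = np.linspace(0.05, 8.0, 300)
for ratio in (0.05, 1.0):               # outside / inside the divergence interval
    vals = np.array([cpf_ab(t, t, ratio, 1.0) for t in ts])   # take tau = t
    print(f"phi/gam={ratio}: range of C_pf^(a/b) = [{vals.min():+.3f}, {vals.max():+.3f}]")
print("C_pf^(c) at t=tau=2, phi/gam=0.05:", round(cpf_c(2.0, 2.0, 0.05, 1.0), 4))
# Outside 3 +/- sqrt(8) the a/b-correlation is negative for all (t, tau);
# inside, it oscillates between negative and positive values, as in Fig. 2.
```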
2107.01692
[ [ "Control of rough terrain vehicles using deep reinforcement learning" ], [ "Abstract We explore the potential to control terrain vehicles using deep reinforcement in scenarios where human operators and traditional control methods are inadequate.", "This letter presents a controller that perceives, plans, and successfully controls a 16-tonne forestry vehicle with two frame articulation joints, six wheels, and their actively articulated suspensions to traverse rough terrain.", "The carefully shaped reward signal promotes safe, environmental, and efficient driving, which leads to the emergence of unprecedented driving skills.", "We test learned skills in a virtual environment, including terrains reconstructed from high-density laser scans of forest sites.", "The controller displays the ability to handle obstructing obstacles, slopes up to 27$^\\circ$, and a variety of natural terrains, all with limited wheel slip, smooth, and upright traversal with intelligent use of the active suspensions.", "The results confirm that deep reinforcement learning has the potential to enhance control of vehicles with complex dynamics and high-dimensional observation data compared to human operators or traditional control methods, especially in rough terrain." ], [ "INTRODUCTION", "Deep reinforcement learning has recently shown promise for locomotion tasks, but its usefulness to learn control of heavy vehicles in rough terrain is widely unknown.", "Conventionally, the design of rough terrain vehicles strives to promote high traversability and be easily operated by humans.", "The drivelines involve differentials and bogie suspension that provide ground compliance and reduces the many degrees of freedom, leaving only speed and heading for the operator to control.", "An attractive alternative is to use actively articulated suspensions and individual wheel control.", "These have the potential to reduce the energy consumption and ground damage, yet increase traversability and tip over stability [11], [6], [21], [10], [9].", "The concepts have been a reappearing topic in planetary exploration, military, construction, agriculture, and forestry applications, but not yet reached the maturity of practical use [7].", "However, there is reason to believe that the full potential of the vehicles is not being utilized.", "The benefits of active suspension and individual wheel control can only be unlocked if the many degrees of freedom are controlled at sufficient speed, precision, and robustness.", "Traditional control methods are not well suited to account for the vehicle dynamics and the surrounding environment observed through high-dimensional sensor data, which raises a need for alternatives.", "Only in recent years has reinforcement learning (RL) emerged as a candidate approach for smart control in locomotion applications.", "Deep learning based control of legged locomotion demonstrate robustness over a variety of environments and learnt behaviour not seen before [13].", "The success in legged locomotion indicates the capability of deep RL to learn control of wheeled ground vehicles.", "However, only a handful of papers deal with RL applied to wheeled ground vehicles [12], [23].", "Local navigation using RL in rough terrain is addressed in [12] with improved performance over traditional planning methods.", "Their application to search and rescue robots considers safe traversal but discards energy consumption, explicit wheel slip, and ground damage; important aspects in agriculture and forestry.", "In addition, they 
only use a 3-dimensional, binary control signal.", "To the best of our knowledge, RL has not yet been applied to wheeled ground vehicles in rough terrain with high dimensional, continuous, control signals.", "To test the usefulness of deep RL on vehicles in rough terrain, we use physics-based simulation to develop a controller for a novel concept forwarder, with actively articulated suspensions and individual control of its six wheels.", "Based on a 634-dimensional observation attainable from onboard sensors, we demonstrate learned skills on challenging terrains with steep slopes and obstacles, where performance is measured in terms of our reward signal.", "A reward carefully designed to encapsulate safety, energy consumption, environmental impact, and success of the overall goal; to reach a specified vehicle pose.", "A forestry use case is studied using a forest terrain reconstructed from high-density laser scans, where the controller is assigned a sequence of waypoints along a transport route.", "We assess model robustness and domain transferability by varying friction and vehicle load." ], [ "BACKGROUND", "Wheeled locomotion in rough terrain involves perceiving the terrain features to make up time and energy-efficient motion plans.", "Preferably, the motion plans are without risk of getting stuck on obstacles or damaging sensitive parts of the vehicle.", "Traversing the terrain involves controlling the actuators and making use of sensor data for estimating the current state.", "Some wheel slip is inevitable, but excessive slip is associated with ground damage and unnecessary fuel consumption.", "Tipping over is a rare but disastrous event, but with higher risk when the vehicle carries a load.", "With active suspensions, a vehicle can distribute its load on the wheels to maximise traction or minimise ground pressure, cross otherwise impassable obstacles, and shift its centre of mass to handle inclined terrain.", "Individual wheel control can reduce wheel slip and shearing soil surface compared to wheeled and tracked bogies.", "We address smart control applied to forestry and the Xt28 forwarder (eXtractor AB).", "The Xt28 has individual wheel control and actively articulated suspensions, designed for slopy, rough terrain, and the aim to reduce soil compaction and shearing.", "A typical forestry scenario involves an approximate route, where we assume that a global path planner provides target locations, see Fig.", "REF .", "In cut-to-length logging, the dominating method in Europe [14], targets can be manually extracted from the harvester route.", "Alternatively, a more general and sophisticated way is to use a trafficability map [8].", "To take into consideration all the aforementioned rough terrain objectives, coupled with the many control degrees of freedom of the Xt28 is a challenging task.", "In this paper, we explore learning a control policy using reinforcement learning." 
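For concreteness, a target sequence of the kind assumed above could be obtained by resampling the approximate route at regular intervals. The sketch below is purely illustrative: the function name, the polyline representation, and the 20 m spacing are our own choices for the example and not part of the proposed method.

```python
# Hypothetical illustration (not from the paper): turning an approximate route,
# e.g. a recorded harvester trail, into a sequence of target poses (x, y, heading).
import numpy as np

def route_to_targets(route_xy, spacing=20.0):
    """route_xy: (M, 2) polyline in world coordinates -> list of (x, y, heading)."""
    seg = np.diff(route_xy, axis=0)
    seg_len = np.linalg.norm(seg, axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg_len)])   # arc length at each vertex
    targets = []
    for d in np.arange(spacing, s[-1], spacing):
        i = np.searchsorted(s, d) - 1                 # segment containing arc length d
        frac = (d - s[i]) / seg_len[i]
        x, y = route_xy[i] + frac * seg[i]
        heading = np.arctan2(seg[i, 1], seg[i, 0])    # align target heading with the route
        targets.append((x, y, heading))
    return targets

# Example: a gently curving ~80 m route yields a handful of intermediate waypoints.
route = np.array([[0, 0], [25, 5], [50, 15], [80, 20]], dtype=float)
for x, y, psi in route_to_targets(route):
    print(f"target: x={x:5.1f} m  y={y:5.1f} m  heading={np.degrees(psi):5.1f} deg")
```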
], [ "Reinforcement learning", "Reinforcement learning is a process of interaction between an agent (controller) and its environment.", "An environment consists of a state space $\\mathcal {S}$ , action space $\\mathcal {A}$ , transition probabilities $p(s^{\\prime }| s, a)$ , and a reward function $r: \\mathcal {S} \\times \\mathcal {A} \\rightarrow \\mathbb {R}$ .", "At each step, the agent selects an action following its policy $a \\sim \\pi ( \\cdot , s)$ and current state $s$ , and the environment responds with a new state $s^{\\prime }$ and reward $r = r(s, a)$ .", "The goal of the agent is to maximize the expected future sum of discounted rewards $\\mathbb {E}_\\pi \\left[ R_t | s_t \\right]$ , where $R_t= \\sum _{k=0}^\\infty \\gamma ^k r_{t + k + 1}$ is called the return and the discount factor, $\\gamma \\in [0, 1]$ , values the importance of short-term, compared to long-term rewards.", "In the actor-critic framework, the actor contains the policy, which in deep RL is modelled as a neural network with parameters $\\theta $ .", "The role of the actor is to sample actions from its policy, $\\pi _\\theta $ , and adjust its parameters as suggested by the critic.", "The critic, or state-value function $V^\\pi (s) = \\mathbb {E}_\\pi [R_t | s_t]$ , evaluates the actor by giving critique to its actions.", "Most often the purpose of the state-value function is to compute the advantage $A^\\pi (s_t, a_t) = Q^\\pi (s_t, a_t) - V^\\pi (s_t)$ , where the action-value function is given by $Q^\\pi (s_t, a_t) = \\mathbb {E}_\\pi [R_t | s_t, a_t]$ .", "The advantage measures the benefit of taking a specific action $a_t$ when in $s_t$ compared to being in that state in general and following policy $\\pi _\\theta $ .", "It yields almost the smallest possible variance in policy gradient estimates, but must be approximated in practice, e.g.", "using generalize advantage estimate GAE($\\lambda $ ) [18]." ], [ "Proximal policy optimization", "Proximal policy optimization (PPO) is an on-policy method which attempts to keep policy updates close enough to the current policy to improve performance without the risk of collapse [19].", "After collecting a batch of samples under the current policy $\\pi _{\\theta _k}$ , PPO performs minibatch stochastic gradient decent to find $\\theta $ which maximizes the objective [2] $& \\mathcal {L}(s, a, \\theta _k, \\theta ) \\nonumber \\\\& \\qquad = \\mathrm {min} \\left( \\frac{\\pi _\\theta (a| s)}{\\pi _{\\theta _k} (a|s)}A^{\\pi _{\\theta _k}} (s, a),~g(\\epsilon , A^{\\pi _{\\theta _k}}(s, a))\\right), \\\\& g(\\epsilon , A) ={\\left\\lbrace \\begin{array}{ll}(1 + \\epsilon )A \\qquad A \\ge 0 \\\\(1 - \\epsilon )A \\qquad A < 0.\\end{array}\\right.", "}$ The loss motivates policy updates to encourage actions which lead to a positive advantage and discourage the opposite.", "To avoid moving too far from the old policy the objective sets a limit on the policy probability ratio by clipping it in relation to the advantage, where the clipping range is controlled by the hyperparameter $\\epsilon $ .", "It is common to also include two additional terms in the loss function.", "One is an error term on value estimates which is only necessary if using a network architecture which shares parameters between policy and value function.", "The other is an entropy bonus with purpose to boost exploration." 
], [ "SIMULATION ENVIRONMENT AND CONTROL", "We model the environment in terms of rigid multibodies, frictional contacts, joints, and motors using the physics engine AGX Dynamics [3].", "For actuation, we use hinge and linear joints with 1D motors.", "A 1D motor is a speed constraint that operates along its remaining degree of freedom by applying a force/torque to meet a specified target speed." ], [ "Xt28 forwarder model", "The Xt28 vehicle, a six-wheeled articulated forwarder, is modelled from a CAD drawing of the actual vehicle as a rigid multibody system with 37 bodies and 14 actuated joints, see Fig.", "REF .", "Hinge motors act at the frame articulation and wheel joints, and linear motors control the suspension arms that are hinged to the chassis.", "The wheels are treated as rigid and modelled using spheres due to the computational benefit in contact detection.", "Figure: The Xt28 model with passive (black arrows) and actuated joints (greenarrows).", "The frame of reference is located 30 cm behind the cabin.", "The local height map is represented as a15×10m 2 15\\times 10~\\mathrm {m}^2 grid with 30×2030\\times 20 resolution.Figure: Snapshot of the vehicle and target pose given by a position and heading.The local height map follows the translation and heading ofthe vehicle and all heights are taken relative to the height under the referenceframe.To have the model state-space agree with the real one, we use realistic masses and limits on torque and force.", "The linear motors have a force limit of 270 kN and the torque limit at articulation joints and wheel motors are set to 50 kNm and 20 kNm, respectively [5].", "The Xt28 model has a mass of 16 800 kg [6], where the centre of mass position of each body was estimated to match that of the physical vehicle." 
], [ "Controller", "The main goal of the controller is to drive the vehicle to a target pose, given by a position and a heading.", "It receives directions to the target $(x, y, \\Psi )$ , relative to its reference frame, as well as proprioceptive and additional exteroceptive information.", "The proprioceptive information consists of velocities in the vehicle frame, roll and pitch angles in world coordinates, articulation frame joint angles, and the piston displacement related to each suspension.", "It also receives the longitudinal wheel slip and slip angle.", "Longitudinal wheel slip is measured as the difference in forward and surface speed of the wheel normalized by its forward speed.", "The slip angle is the angle between the wheel direction and the direction it is actually travelling.", "Additionally, we observe the load on each wheel, normalized by vehicle weight.", "The exteroceptive information consists of a local height map of $15\\times 10~\\mathrm {m}^2$ with $30\\times 20$ grid resolution, which follows the vehicle translation and heading, see Fig.", "REF .", "In a real world scenario, SLAM, or maps from airborne laser scans of the terrain together with a GNSS provide similar height maps.", "The heights are expressed relative to the reference frame and scaled to be in $[0,1]$ .", "Together these form a 634-dimensional state representation used by the controller to select a 14-dimensional action.", "For the frame articulation joints and the suspensions, the controller action specifies target angles and piston positions which are passed to P controllers.", "The P controllers compute the appropriate target speed for each joint motor, operating within their force and torque limits.", "The wheel motors are controlled by setting each motor torque individually.", "If the angular speed exceeds $1.5$  rad/s the torque is clamped to not accelerate it further.", "Each action is in $[-1, 1]$ and mapped to available joint and torque ranges." ], [ "Terrains", "Terrains are constructed from height maps of size $70 \\times 70$  m$^2$ and $700\\times 700$ resolution, see Fig.", "REF .", "To form a continuous surface, the heights are interpolated using triangular, piecewise planar, elements into a geometric mesh.", "The geometry is assigned to the ground which is represented as a static rigid body.", "There are two different types of terrains.", "One is procedurally generated from Perlin noise [16] and semi-ellipsoids to represent discrete features such as boulders.", "The semi-ellipsoids are $[0.5, 3.5]$  m large, $[0.25, 1.75]$  m tall and the terrain height difference is limited to 5 m. The procedural terrains are useful for designing training and testing scenarios on e.g.", "slopes or terrains with impassable objects at certain locations.", "The other type is reconstructions from high-density laser scans, referred to as scanned terrains.", "Recently 600 Ha of forestry sites were scanned using 600 points/m$^2$ around Sundsvall, Sweden [1].", "The dataset is filtered to contain only ground points ($\\sim 100$  points/m$^2$ ) and converted to a digital elevation model, from which we extract regular height maps." 
], [ "LEARNING CONTROL", "To learn a control policy we use PPO because it has proven successful in other locomotion tasks, e.g.", "[22], [15], is easy to parallelize, and insensitive to hyperparameter settings.", "The adopted implementation uses PyTorch [17] and is based on the original paper [19].", "We let the simulation run at 60 Hz and query the controller at $f_\\text{control}=12$  Hz." ], [ "Network", "As the action space is continuous, a natural choice is to use a diagonal Gaussian policy, which maps state $s$ to mean actions $\\mu _\\theta (s)$ represented by a neural network with parameters $\\theta $ .", "The variance vector, $\\sigma ^2$ , is treated as a stand-alone parameter, independent of state.", "Thus, the probability of action $a_t$ in state $s_t$ is given by $\\pi _\\theta (a_t, s_t)= \\mathcal {N}(\\mu _\\theta , \\sigma ^2I)$ , where $I$ is the identity matrix.", "Because part of our inputs are from 2D height maps, we process them separately with a convolutional neural network.", "To extract height features, we pass height maps through two layers with 16 and 32 filter of $3 \\times 3$ kernel size, followed by a fully connected layer with 64 units.", "We argue that height based features of importance are similar for the actor and critic and let them share this part of the network.", "In the non-shared part, the height map features are concatenated with the rest of the observations and passed through two fully connected layers with 128 units each.", "For the actor, the action is produced by a linear output layer of 16 units.", "For the critic, the value function is produced by a linear output layer of 1 unit." ], [ "Reward", "We formulate a reward that encourages steady progress towards the target in an upright position, without wheel slip, and with limited ground forces, energy consumption, and damaging tyre sidewall contacts.", "The net reward takes the form $r = r_\\mathrm {tar} +r_\\mathrm {prog} r_\\mathrm {roll} r_\\mathrm {speed} r_\\mathrm {forces}\\times \\frac{r_\\mathrm {head} + r_{\\mathrm {slip}\\parallel } + r_{\\mathrm {slip}\\perp }}{3}+ r_\\mathrm {energy} + r_\\mathrm {side},$ where the terms are explained below.", "The main goal of the controller is met when the vehicle is closer than 0.3 m and $9^{\\circ }$ relative to the target position and heading.", "We define the target bonus as $r_\\mathrm {tar} = k_\\mathrm {tar} {1}(\\Psi , d_t),$ where $k_\\mathrm {tar}$ is a constant set to 5 per cent of the maximum, undiscounted return and 1 is the indicator function which evaluates to 1 at the target and 0 otherwise.", "The target reward yields a sparse signal unlikely to be discovered in early stages of training.", "As guidance we provide a dense reward which reflects the progress toward the target as $r_\\mathrm {prog} = (d_{t - 1} - d_t)f_\\mathrm {control},$ where $d_t$ , $d_{t-1}$ is the current and previous distance from the vehicle to the target projected to the horizontal plane.", "We reason that heading alignment is increasingly important as the vehicle approaches the target and introduce it as a reward multiplier $r_\\mathrm {head} =\\exp \\left[-\\frac{1}{2}\\left( \\frac{\\Psi }{d_t / k_\\mathrm {d}} \\right)^2\\right],$ where the constant $k_\\mathrm {d}=5$  m is tuned with the turning radius of the vehicle.", "In the reward shaping process we observed that a reward $r = r_\\mathrm {tar} +r_\\mathrm {prog} r_\\mathrm {head}$ is essential for learning to reach the target quickly, but does not promote efficient, safe and environmental friendly 
driving.", "Therefore, we introduce a set of additional reward multipliers with range $[0, 1]$ .", "To avoid risk of overturn we define the roll reward as $r_\\mathrm {roll} =\\exp \\left[-\\frac{1}{2}\\left( \\frac{\\phi }{k_\\mathrm {\\phi }} \\right)^2\\right],$ for roll angle $|\\phi | > 5^{\\circ }$ and 1 else, where we use $k_\\mathrm {\\phi } =\\pi / 16$ .", "To encourage limited vehicle speeds, we use $r_\\mathrm {speed} = \\min (1, \\exp [k_\\mathrm {speed} (v_\\mathrm {lim} - |v|)]),$ where $v_\\mathrm {lim} = 0.8$  m/s, and $k_\\mathrm {speed}=2$ is a constant manually tuned to control the rate of reward decay for speeds above $v_\\mathrm {lim}$ .", "To limit ground pressure we consider the standard deviation of normalized ground forces, $\\sigma _\\mathrm {forces}$ .", "Ground pressure is at its lowest in case of an even distribution over the 6 wheels.", "Each wheel then carries 1/6 of the vehicle weight, and $\\sigma _\\mathrm {forces} = 0$ .", "We promote even weight distribution through $r_\\mathrm {forces} =\\exp \\left[-\\frac{1}{2}\\left( \\frac{\\sigma _\\mathrm {forces}}{k_\\mathrm {forces}} \\right)^2\\right],$ where $k_\\mathrm {forces} = 0.1~\\mathrm {N}$ .", "Reaching the target is not considered a success with excessive slip during the episode.", "Therefore we include two terms related to longitudinal slip $\\lambda $ and slip angle $\\alpha $ as $r_{\\mathrm {slip}\\parallel } &= \\prod _i^{n_{wheels}}\\exp \\left[-\\frac{1}{2}\\left( \\frac{\\lambda _i}{k_\\mathrm {\\lambda }} \\right)^2\\right],& k_\\mathrm {\\lambda } & = 0.3\\\\r_{\\mathrm {slip}\\perp } &=\\prod _i^{n_\\mathrm {wheels}} 0.5 \\cos (k_\\mathrm {\\alpha } \\alpha _i) + 0.5,& k_\\mathrm {\\alpha } & = 6,$ where $\\alpha _i$ is clipped at $\\pm ~ \\pi / k_\\mathrm {\\alpha }$ such that any slip angle outside that range yields zero reward.", "The slip rewards are constructed as products to induce well behaved wheel motions for all wheels simultaneously.", "The slip and heading terms are mutually conflicting objectives.", "Therefore we sum them to a single multiplier, as seen in REF .", "To promote smooth, efficient motions, energy consumption is included as $r_\\mathrm {energy} =k_\\mathrm {energy} \\frac{W_\\mathrm {joints}}{W_\\mathrm {max}},$ where $W_\\mathrm {joints}$ is the total work carried out by all actuated joints over the previous action step, ${W_\\mathrm {max}}$ is its upper bound, and $k_\\mathrm {energy} = -1$ .", "Damage to tyre sidewalls are penalized through the number of sidewall contacts $n_\\mathrm {contacts}$ as $r_\\mathrm {side} = k_\\mathrm {sw} n_\\mathrm {contacts},$ where $k_\\mathrm {sw} = -0.2$ .", "We found this reward term necessary to avoid use of the sides of the tyres for traction.", "A contact is classified as being on the sidewall if the angle between the contact point in the wheel frame and the rotational axis is less than 60$^\\circ $ .", "A nice feature of the reward in (REF ) is that the maximum undiscounted return is easily calculated as the initial distance to the target, times the control frequency, plus the target reward.", "Although, the maximum is not attainable in practice, is serves as good reference for designing a curriculum and evaluating policy performance." 
], [ "Training and evaluation", "During training, an episode starts with the vehicle being deployed on the terrain at random position, $x_0, y_0~[\\mathrm {m}] \\in [-1, 1]$ , and heading $\\psi _0 \\in [0,2\\pi ]$ .", "We let the vehicle drop to the ground and settle.", "To get natural variations of initial vehicle configurations we apply a simple controller to the suspensions during a simulated time period of 1 s. To enable curriculum with altered target difficulty, a target heading parameter $\\phi _{\\text{max}}$ is defined, affecting both target placement and heading.", "The target is placed a distance $r_0 = 20$ m away, within an angle $\\phi \\in [-\\phi _\\text{max}, \\phi _\\text{max}]$ relative to the vehicle heading, see Fig.", "REF .", "To put emphasis on learning stearing, the target position along this arc is sampled from a quadratic distribution, increasing the probability toward the edges.", "The target heading is then sampled from a uniform distribution, $\\psi _1 \\in [-\\phi _{\\text{max}}/2, \\phi _{\\text{max}}/2]$ relative to $\\phi $ , i.e.", "the angle to the target.", "Figure: Target generation.", "The vehicle is initialised in the green square withrandom heading.", "The target is then placed a distance 20 m away along a limitedarc with heading ψ 1 \\psi _1.A training episode runs until the target is reached, or terminated after 400 or 500 steps, depending on the curriculum.", "Early termination occurs if the vehicle moves beyond the target, if it reaches a roll beyond $25^\\circ $ , or if a terminal contact is detected.", "A terminal contact is when any part of the chassis comes in contact with the terrain.", "Training is done on 10 parallel environments on a cluster with 28 cores, where each environment uses a different terrain.", "After every 25k steps the controller is evaluated in a separate environment on a terrain not used in training, with deterministic initial vehicle positions, target placements, and action selection based on the latest policy." 
], [ "Curriculum", "In our experience, a curriculum is essential for the controller to reach its full potential.", "Our goal is to form a curriculum such that there is a solid foundation in basic driving skills after the first lesson, e.g.", "acceleration, turning, and speed control.", "The purpose of the following lessons is to specialize driving skills towards preference.", "To emulate natural forest environments, we focus on boulder-like obstacle avoidance, unevenness, and slopes.", "Our approach is to use a fixed order boundary curriculum [22] for the terrain and target placements, where the learning process is divided into four lessons with increasing difficulty according to our intuition.", "In the simplest, initial lesson, the terrain is level with Perlin noise to mimic features of natural terrain.", "To put emphasise on sharp turns we set the target heading parameter to $\\phi _\\mathrm {max} = \\pi / 3$ already in the first lesson.", "The second lesson focuses on learning height map features to avoid impassable objects.", "We use the same terrain base with Perlin noise but add 8 semi-ellipsoids placed randomly between the initial vehicle position and the target.", "To both avoid obstacles and reach the target is challenging, so we simplified the task by setting $\\phi _\\mathrm {max} = \\pi / 9$ .", "The third level uses a similar setting with tougher Perlin noise to form a hilly/slopy terrain, but with only 6 impassable semi-ellipsoids and 6 smaller ones.", "In the final level, the controller practices driving on scanned terrains with $\\phi _\\mathrm {max} = \\pi / 3$ and 500 max steps.", "We chose terrain patches that appeared trafficable, yet challenging with steep slopes, boulders, and ditches, see Fig.", "REF .", "Figure: Patches from scanned terrains.", "a) and b) are two examples used for training, c) is used in domain sensitivity experiments, and d) in obstacle perception.", "The images are rendered with terrain colour according to height." ], [ "Hyperparameters", "For the PPO related hyperparameters we use a horizon of 1280, minibatch size of 800, and 10 epochs.", "We use the Adam optimizer with a gradually decreased step size between lessons.", "A step size of $25\\times 10^{-5}$ is used in the first, $10\\times 10^{-5}$ in the second and third, and $1\\times 10^{-5}$ in the forth lesson respectively.", "The discount is $\\gamma =0.99$ and the GAE parameter $\\lambda = 0.95$ .", "The value function and policy both have clipping range $0.2$ .", "The value function coefficient for the loss calculation is $0.5$ and the entropy coefficient $0.01$ ." 
], [ "RESULTS AND DISCUSSION", "We present a controller that shows smooth progression towards the target while adapting to terrain irregularities.", "When turning, torques are adjusted so that the outer wheels rotate faster than the inner, thereby moving with limited slip and effort.", "The suspensions are used conservatively and kept in fixed position unless the vehicle is challenged by slopes or unevenness in the terrain.", "When faced with a Gaussian bump of 1 m height, the controller makes intelligent use of the suspensions for levelling and ground compliance, as shown in Fig.", "REF .", "The maximum slip is 1.5% and the average slip per wheel is only 0.15%.", "To see highlights of the learnt driving skills on a number of different terrains we refer to the supplementary video.", "Figure: Sequential snapshots of the vehicle traversing a 1 m tall gaussian bump,avoiding chassis roll and wheel slip.Training is done according to the curriculum in Section REF , where the best policy in the preceding lesson is used as starting point for the next, see Fig.", "REF .", "In total, the controller is trained for 19.22 M steps and 108 h CPU hours.", "Learning is rapid during the first lesson except during a plateau.", "We found that penalizing energy consumption was key to develop strategies to limit speed and keep progressing, but it also eliminated jerky and unnecessary movements.", "Figure: Learning curves over four consecutive curriculum lessons with increasingdifficulty.", "The controller was evaluated every 25 k steps over 20 episodes withdeterministic action selection." ], [ "Sloped terrains", "The controller shows the ability to traverse steep slopes and uses different strategies depending on the slope direction.", "We use two perfectly even terrains with 18$^\\circ $ and 27$^\\circ $ incline, and place the vehicle around the centre, with equally spaced heading in 40 directions following a full rotation, see Fig.", "REF .", "The success rates are 92.5% and 65% with undiscounted mean normalized return $0.64 \\pm 0.09$ and $0.40 \\pm 0.13$ for the 18$^\\circ $ and 27$^\\circ $ terrains, respectively.", "As reference, the terrains are rated as 4/5 and 5/5 in difficulty according to the terrain classification system for forestry work in nordic countries [4].", "On side slope, the controller utilizes one of the claimed benefits of the Xt28 and adjusts the suspensions to shift the centre of mass and maintain an upright position in an attempt to minimize ground forces, wheel slip, and roll.", "We note that the maximum side slope which allows for complete levelling is 27.5$^\\circ $ due to the range limits of the suspensions.", "Even so, the mean rolls are $1.93 \\pm 0.94^\\circ $ and $3.83 \\pm 2.59^\\circ $ , respectively, including the unfavourable initial configurations.", "Although the success rate is not as high for the steeper terrain, there are no complete failures, and the missed targets are typically due to side slip.", "Curiously, it is more demanding to drive downhill than uphill.", "The loss in reward is mainly due to the inability to maintain speed below the upper limit.", "Figure: Comparison of controller performance on two terrains with 18 and27 ∘ ^\\circ incline.", "The arrows show target placements.", "A target is reached when the vehicle is closer than 0.3 m and 9 ∘ ^\\circ relative to the target position and heading.", "The vehicle is true to scale." 
], [ "Obstacle perception", "If faced with objects of different sizes, the controller shows an ability to distinguish between passable and impassable ones and places the wheels to avoid sidewall contacts.", "To see the strategies we test the controller on a terrain similar to those with semi ellipsoids used in training, see Fig.", "REF .", "Targets and initial vehicle positions are the same as for the sloped terrains, resulting in a 90% success rate and undiscounted mean normalized return of $0.62 \\pm 0.15$ over 40 episodes.", "Impassable objects that appear within the range of the local height map are well reflected in the value function estimates, far before reaching the problematic location.", "States with impassable objects straight ahead are expected to result in poor performance unless easy to circumvent, at which the trajectory is planned by taking out turns enough to avoid contact and reach the target placement.", "Smaller objects are easily overcome without significant loss in reward due to efficient use of the suspensions.", "Because some episodes are practically impossible and require going in reverse, a driving skill not practised during training, we cannot expect full success.", "The four episodes with terminal chassis-ground contacts occur when the vehicle is directly facing large objects and is unable to choose which way to turn.", "Figure: Motion trajectories on procedurally generated terrain withsemi-ellipsoids representing boulders.", "The trajectories are coloured bynormalized cumulative reward in [0,1][0,1] (left) and the learnt value functionestimates (right).", "The controller displays the ability to perceive by drivingaround impassable objects and over smaller.", "The vehicle and local height map istrue to scale.To test if the learnt skills generalize to natural environments we repeat our previous experiment on a terrain patch extracted from the real data set.", "The selected area (Fig.", "REF d) contains the highest density of large boulders ($>1$  m tall) from the 600 Ha test site and poses a severe challenge.", "The target is reached 70% of episodes with a mean normalized return of $0.48 \\pm 0.14$ , see Fig.", "REF .", "The results are similar to the artificial terrains, where the controller surpasses smaller boulders, circumvents others, and the majority of unsuccessful episodes is due to chassis-ground collisions.", "We note that most terminal contacts occur when the target is in the vicinity of a large boulder or when several boulders obstruct the passage, e.g.", "east in Fig.", "REF .", "Without a clear passage, the expected return is immediately small, indicating that the controller recognizes when put to a task it cannot successfully complete.", "To further study the value function is valuable if we want to enhance obstacle perception.", "However, when it comes to obstacle avoidance, it is not clear if the responsibility should lie completely in a low level controller or one at higher level doing path planning.", "Figure: Obstacle perception on a scanned rough terrain.", "The trajectories arecoloured by normalized cumulative reward (left) and value function estimates(right)." 
], [ "Smart control on real forest terrain", "To simulate the use of the controller in a purposeful forestry application we test its driving skills on scanned terrains.", "We emulate a higher level planner and manually place a sequence of targets, or waypoints, starting and ending at a primary road to complete a full cycle, see Figs.", "REF and REF .", "The terrain has a mean slope of $12^\\circ $ , a deep ditch alongside the road, and enough roughness to serve as a challenging test.", "Despite being a difficult route on demanding terrain 6 out of 9 waypoints are reached, where the misses are small and do not affect the higher level goal of completing the route.", "The controller displays an ability to cross ditches, a challenging real world scenario, and handles target placements not seen in training with ease.", "The mean normalized return is $0.60 \\pm 0.12$ where, as discussed with sloped terrains, the vast majority of lost reward comes from driving too fast downhill.", "Still, there is no tendency towards unsafe traversal and we note that the top speed was no more than 0.37 m/s above limit.", "Figure: Top view of vehicle trajectories followinga sequence of waypoints placed on a reconstruction of real terrain from high-densitylaser scans.", "The vehicle starts and ends at a primary road along a route similarto a real world forestry scenario.Figure: 3D rendering of the vehicle and waypoints." ], [ "Domain sensitivity", "The controller is insensitive to variations in ground-terrain friction coefficient $\\mu $ , and able to adapt to load cases not seen during training.", "In natural environments, surface friction varies over space and time, while variable load is relevant in any transport application, e.g.", "forestry, agriculture.", "We chose a typical forestry site from the real dataset (Fig.", "REF c) and let $\\mu \\in \\lbrace 0.2, 0.3, \\ldots , 1.1 \\rbrace $ for two vehicle load cases: one with nominal weight and another where a static 10000 kg load (60% weight increase) is placed on the load bunk.", "The targets are placed 20 m away with random heading $[-\\pi /3, \\pi /3]$ , relative to the vehicle starting position.", "For each of the 20 cases we simulate 40 episodes and compute the undiscounted mean normalized return and standard deviation, see Fig.", "REF .", "Figure: a) Undiscounted mean normalized return over 40 episodes as a functionof tyre-ground friction coefficient, μ\\mu , where the error bars show onestandard deviation.", "The vehicle is either unloaded or carries 10000 kg.", "b)Episode termination.", "The left bar in each pair corresponds to an unloadedvehicle and the right, slightly brighter, to one with 10000 kg load.As expected, the controller performs at its best around the settings used for training, i.e., unloaded with $\\mu = 0.7$ , and equally well for higher friction.", "Performance is not significantly affected until $\\mu $ drops below 0.4, which roughly corresponds to the average sliding friction between tyres and wet earth roads [20].", "From Fig.", "REF , it is clear that the target is frequently reached at $\\mu = 0.3$ , but more seldom for $\\mu = 0.2$ .", "The loaded case shows similar behaviour but with  10% lower episodic return.", "To some degree this is due to the higher energy consumption with the increase in weight, but Fig.", "REF shows that in 10-20% of the cases, the heavier vehicle fails to reach the target.", "Notably, performance drops for friction above $0.8$ , where a fair portion of episodes terminate due to maximum roll being 
exceeded.", "The high friction and load resists turning at moderate speed and the controller compensates by tilting to increase traction on the outer wheels.", "With no experience in similar states, it proceeds until failure occurs.", "To further understand the effect of different vehicle load and ground-tyre friction on performance we look at individual reward contributions.", "Fig.", "REF shows $r_\\mathrm {energy},r_{\\mathrm {slip}\\parallel }$ , and $r_{\\mathrm {slip}\\perp }$ for the two cases with lowest mean return, and training settings.", "Not surprisingly, low friction and added load leads to an increase in energy consumptions and slip.", "We observe that a loaded vehicle in high friction setting drives with significantly less slip compared to low friction, but similar side slip except in the first quarter of episodes.", "This again is due to the resistance in turning, and also the difficulties to control the frame articulation.", "Figure: Mean reward contributions and standard deviation over 40 episodes for different friction and vehicle load.", "Number of steps was truncated at the shortest episode." ], [ "CONCLUSIONS", "We conclude that deep RL is more than capable of learning control for rough terrain vehicles with continuous, high dimensional, observation, and action space.", "We have presented a controller that perceives, plans, and individually controls six suspensions, six wheels, and two frame articulation joints, without the use of frame stacking or recurrent networks as memory support.", "The controller relies on a local height map to perceive which obstacles to circumvent, how to handle steep slopes, etc., and then couples its perception with proprioceptive features to efficiently traverse rough terrain.", "The traversal is done with minimal slip, roll, and energy consumption, to reach a target placement.", "The controller is robust to friction between tyre and ground, as long as it does not fall below a critical value.", "It is more sensitive to changes in the vehicle weight, which poses a problem when collecting and transporting heavy objects.", "We suggest that deep RL will be a future cornerstone for control of vehicles with high dimensional state space, especially in environments where it is easier to react to the dynamics than predict them with sufficient accuracy." ], [ "ACKNOWLEDGEMENTS", "This work has in part been supported by Mistra Digital Forest (Grant DIA 2017/14 6) and Algoryx Simulation AB.", "The simulations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC 410 dnr 2021/5-234) at High Performance Computing Center North (HPC2N)." ] ]
2107.01867
[ [ "Bag of Instances Aggregation Boosts Self-supervised Distillation" ], [ "Abstract Recent advances in self-supervised learning have experienced remarkable progress, especially for contrastive learning based methods, which regard each image as well as its augmentations as an individual class and try to distinguish them from all other images.", "However, due to the large quantity of exemplars, this kind of pretext task intrinsically suffers from slow convergence and is hard for optimization.", "This is especially true for small-scale models, in which we find the performance drops dramatically comparing with its supervised counterpart.", "In this paper, we propose a simple but effective distillation strategy for unsupervised learning.", "The highlight is that the relationship among similar samples counts and can be seamlessly transferred to the student to boost the performance.", "Our method, termed as BINGO, which is short for Bag of InstaNces aGgregatiOn, targets at transferring the relationship learned by the teacher to the student.", "Here bag of instances indicates a set of similar samples constructed by the teacher and are grouped within a bag, and the goal of distillation is to aggregate compact representations over the student with respect to instances in a bag.", "Notably, BINGO achieves new state-of-the-art performance on small-scale models, i.e., 65.5% and 68.9% top-1 accuracies with linear evaluation on ImageNet, using ResNet-18 and ResNet-34 as the backbones respectively, surpassing baselines (52.5% and 57.4% top-1 accuracies) by a significant margin.", "The code is available at https://github.com/haohang96/bingo." ], [ "Introduction", "Convolutional Neural Networks (CNNs) have achieved great success in the field of computer vision, including image classification [13], object detection [20] and semantic segmentation [2].", "However, most of the time, CNNs cannot succeed without enormous human-annotated data.", "Recently, self-supervised learning, typified by contrastive learning [11], [3], has been fighting with the annotation-eager challenge and achieves great success.", "Most current self-supervised methods yet focus on networks with large size, e.g., ResNet-50 [13] with more than 20M parameters, but real-life implementation usually involves computation-limited scenarios, e.g., mobile/edge devices.", "Due to annotation lacking in unsupervised tasks, learning from unlabeled data becomes challenging.", "Recent contrastive learning methods [11], [3] tackle this problem by narrowing gaps between embeddings of different augmentations from the same image.", "Techniques like momentum encoder for stable updating, memory bank for storing negative pairs, complicated data augmentation strategies etc., are proposed to avoid collapse and promote the performance.", "With the above techniques, contrastive learning methods show promising performance.", "However, contrastive learning requires discriminating all instances, due to the large quantity of exemplars, this kind of pretext task intrinsically suffers from slow convergence and is hard for optimization.", "This issue becomes severe for small scale models, which carry too few parameters to fit the enormous data.", "Inspired by supervised learning that knowledge from large models can effectively promote the learning ability of small models with distillation, exploring knowledge distillation on unsupervised small models becomes an important topic.", "Figure: Overall performance comparisons between BINGO and other unsupervised distillation 
methods.", "Compress [7] and SEED [7] are two typical methods for unsupervised distillation, which propose to transfer knowledge from the teacher in terms of similarity distributions among different instances.", "However, as the similarity distribution is computed by randomly sampling instances from a dynamically maintained queue, this kind of knowledge is mostly constructed based on instances with low relation, which fails to effectively model the similarity of those highly related samples.", "To solve this issue, we propose a new self-supervised distillation method, which transfers knowledge by aggregating bags of related instances, named BINGO.", "In our empirical studies, transferring knowledge based on highly related samples helps boost performance more effectively compared with previous relation-agnostic methods.", "Specifically, we select an unsupervised pretrained large model as the teacher.", "First, we map the conventional instance-wise dataset into a bag-wise one.", "Each original instance is set as an anchor instance of the bag.", "By matching similarities of all the other instances' embeddings produced by the teacher model, we feed instances which show high similarity with the anchor instance into the bag.", "Then we apply the bagged dataset to the small model distillation process.", "To this end, we propose a bag-aggregation distillation loss, which consists of two components: intra-sample distillation and inter-sample distillation.", "For intra-sample distillation, embeddings of the student and teacher from two augmentations of the same instance are pushed together; for inter-sample distillation, embeddings of all instances in one bag are pushed to be more similar to the anchor one.", "Equipped with the two proposed distillation losses, the bag-based knowledge from the teacher can be well transferred to the student, which shows significant advantages over previous relation-agnostic ones [7], [15].", "Our contributions can be summarized as follows.", "We propose a new self-supervised distillation method, which bags related instances by matching similarities of instance embeddings produced by the teacher.", "The bagged dataset can effectively boost small model distillation by aggregating instance embeddings in bags.", "The proposed relation-guided method shows stronger performance than previous relation-agnostic ones.", "BINGO promotes the performance of both ResNet-18 and -34 to new state-of-the-art (SOTA) levels in unsupervised scenarios.", "It is worth noting that the distilled models also present far better performance compared with previous SOTA methods on other tasks, i.e., KNN classification and semi-supervised learning.", "BINGO provides new inspiration for unsupervised distillation in that knowledge between instances with high relation could be more effective than relation-agnostic knowledge.", "This may be heuristic for further explorations on knowledge transfer in unsupervised scenarios."
], [ "Self-supervised Learning", "As a generic framework to learn representations with unlabeled data, self-supervised learning has experienced remarkable progress over the past few years.", "By constructing a series of pretext tasks, self-supervised learning aims at extracting discriminative representations from input data.", "Previous methods obtain self-supervised representations mainly via a corrupting and recovering manner, from perspectives of spatial ordering [17], rotation changes [9], in-painting [19], or colorization [25], et al.", "Recently, contrastive learning based methods [11], [3] emerge and significantly promote the performance of self-supervised learning, which aim at maximizing the mutual information between two augmented views of a image.", "A series of subsequent works [10], [23] further improve the performance to a very high level.", "However, rare of them pays attention to self-supervised learning on small-scale models, which are of critical importance to implement self-supervised models on lightweight devices.", "We propose an effective method to boost the self-supervised learning of small models, which takes advantage of relation-based knowledge between data and shows superior performance than previous ones." ], [ "Knowledge Distillation", "Knowledge distillation aims to transfer knowledge from a model (teacher) to another one (student), usually from a large to small one, which is commonly used for improving the performance of the lightweight model.", "[14] first proposes knowledge distillation via minimizing the KL-divergence between the student and teacher's logits, which uses the predicted class probabilities from the teacher as soft labels to guide the student model.", "Instead of mimicking teacher's logits, [21] transfers the knowledge by minimizing the $\\ell _2$ distance between intermediate outputs of the teacher and student model.", "To solve the dimension mismatch, [21] uses a randomly initialized projection layer to enlarge the dimension of a narrower student model.", "Based on [21], [24] utilizes knowledge stored in the attention map generated by the teacher model, and pushes the student model to pay attention to the area where the teacher focuses on.", "[26] improves weighted soft labels to adaptively improve the bias-variance tradeoff of each sample.", "Besides perspectives of soft labels and intermediate features, relation between samples is also an important knowledge.", "[18] and [16] train student model by aligning the pair-wise similarity graph with the teacher.", "Recently, some works extend the above distillation method into self-supervised learning scenarios.", "[22] uses the contrastive loss to learn cross-modality consistency.", "[7] and [15] compute the pair-wise similarities between student's outputs and features stored in memory bank.", "However, the above relation-based self-supervised distillation methods only compute the similarity between anchor sample and randomly sampled instances from a maintained queue, which ignores the relation between sampled and anchor instances.", "[8] strengthens the student model by adding a regularization loss on the original contrastive loss, which aims at minimizing the $\\ell _2$ distance between the student's and teacher's embedding.", "We propose to transfer the relation knowledge between models via a new type of dataset, which bags related instances.", "By aggregating the bagged instances, the relation knowledge can be effectively transferred.", "Figure: An overview of the proposed method.", "The 
samples are first bagged via feature similarity.", "Then the related instances in a bag are aggregated via the intra-sample and inter-sample distillation losses.", "The figure on the top-right is an intuitive explanation of how bag aggregation works." ], [ "Approach", "In this section, we introduce the proposed BINGO in detail.", "First, we discuss how to bag samples in the instance-wise dataset.", "After the samples are bagged, the bag-aggregation based knowledge distillation is introduced.", "We also discuss how to compute the bag-aggregation loss and how it improves the performance of the lightweight model.", "The overall framework is illustrated in Fig.", "REF ." ], [ "Bagging Instances with Similarity Matching", "Given the unlabeled training set $\\mathbf {X}=\\left\\lbrace \\mathbf {x_{1}},\\mathbf {x_{2}},...,\\mathbf {x_{N}}\\right\\rbrace $ , we define the corresponding bag-wise training set as $\\mathbf {\\Omega } = \\left\\lbrace \\mathbf {\\Omega _1}, \\mathbf {\\Omega _2}, ..., \\mathbf {\\Omega _N} \\right\\rbrace $ , where each bag $\\mathbf {\\Omega _i}$ consists of a set of instances.", "To transfer the instance-wise dataset to a bag-wise one, we first feed $\\mathbf {X}$ into a pretrained teacher model $f_\\mathbf {T}$ and get the corresponding features $\\mathbf {V}=\\left\\lbrace \\mathbf {v_{1}},\\mathbf {v_{2}},...,\\mathbf {v_{N}} \\right\\rbrace $ where $\\mathbf {v_i} = f_\\mathbf {T}(\\mathbf {x_i})$ .", "For each anchor sample $\\mathbf {x}_a$ in the dataset, we find positive samples which share high similarity with the anchor sample.", "Then the anchor sample as well as the similar samples are combined to form a bag.", "The samples in one bag have a compact representation in the embedding space.", "Several mapping functions can be used to find similar samples:" ], [ "K-nearest Neighbors", "For each anchor sample $\\mathbf {x}_a$ in the instance-wise dataset, we first compute the pairwise similarity with all samples in the dataset, $\\mathbf {S}_a = \\lbrace \\mathbf {v}_a \\cdot \\mathbf {v_i} \\ |\\ \\mathbf {i}=1,2,...,N\\rbrace $ .", "The bag $\\mathbf {\\Omega }_a$ corresponding to $\\mathbf {x}_a$ is defined as: $\\mathbf {\\Omega }_a = \\mathbf {top}\\mathbf {-}\\mathbf {rank}(\\mathbf {S}_a, \\mathbf {K}),$ where $\\mathbf {top}\\mathbf {-}\\mathbf {rank}(\\cdot , \\mathbf {K})$ returns the indices of the top $\\mathbf {K}$ items in a set."
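A small NumPy sketch of this bagging step is shown below. The function name and the choice to exclude the anchor itself from its neighbour list (the anchor is handled separately as the bag's reference) are implementation assumptions; for ImageNet-scale N the N x N similarity matrix would in practice be computed in chunks.

```python
import numpy as np

def bag_with_knn(teacher_feats, k=5):
    """Build the bag Omega_a of the k most similar instances for every anchor (Eq. 1).

    teacher_feats: (N, D) array of teacher embeddings v_i = f_T(x_i).
    Returns an (N, k) integer array; row a holds the indices of the bag for anchor a.
    """
    v = teacher_feats / np.linalg.norm(teacher_feats, axis=1, keepdims=True)
    sim = v @ v.T                        # pairwise cosine similarities S
    np.fill_diagonal(sim, -np.inf)       # exclude the anchor from its own neighbour list (assumed)
    # top-rank(S_a, k): indices of the k largest similarities for each anchor
    return np.argsort(-sim, axis=1)[:, :k]
```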
], [ "K-means Clustering", "Given the training feature set $\\mathbf {V} = \\lbrace \\mathbf {v_1}, \\mathbf {v_2}, ..., \\mathbf {v_N}\\rbrace $ , we first assign a pseudo-label $\\mathbf {q_i}$ to each sample $\\mathbf {i}$ , where $\\mathbf {q_i} \\in \\lbrace \\mathbf {q_1}, ..., \\mathbf {q_K}\\rbrace $ .", "The clustering process is performed by minimizing the following term, $\\frac{1}{N} \\sum _{i=1}^N -\\mathbf {v_i^T} \\mathbf {c}_\\mathbf {q_i},$ where $\\mathbf {c}_\\mathbf {q_i}$ denotes the centering feature of all features belonging to the label $\\mathbf {q_i}$ , i.e., $\\mathbf {c}_\\mathbf {q_i} = \\sum _{\\mathbf {q_j}=\\mathbf {q_i}} \\mathbf {v_j}, \\forall j = 1, ..., N$ .", "The bag $\\mathbf {\\Omega }_a$ of anchor sample $\\mathbf {x}_a$ is defined as: $\\mathbf {\\Omega }_a = \\lbrace \\mathbf {i} \\ |\\ \\mathbf {q_i} = \\mathbf {q}_a, \\forall \\mathbf {i}=1, 2, ..., N\\rbrace .$" ], [ "Ground Truth Label", "If the ground truth label is available, we can also bag samples with the human-annotated semantic labels.", "Given the label set $\\mathbf {Y} = \\lbrace \\mathbf {y_1}, \\mathbf {y_2}, ..., \\mathbf {y_N}\\rbrace $ , we can bag related instances of the anchor sample $\\mathbf {x}_a$ via: $\\mathbf {\\Omega }_a = \\lbrace \\mathbf {i} \\ |\\ \\mathbf {y_i} = \\mathbf {y}_a, \\forall \\mathbf {i}=1, 2, ..., N\\rbrace .$ In this paper, we use K-nearest neighbors as the bagging strategy.", "More details about performance of using the K-means clustering based bagging strategy can be found in Appendix.", "Note that bagging instances via the ground truth label is just used to measure the upper bound of the proposed method." ], [ "Knowledge Distillation via Bag Aggregation", "Once we get the bag-wise dataset $\\mathbf {\\Omega }$ utilizing a pretrained teacher model, it can be used for distillation process.", "In each feed-forward process, the anchor sample $\\mathbf {x}_a$ and the positive sample $\\mathbf {x}_p$ which belong to the same bag $\\mathbf {\\Omega }_a$ are sampled together in one batch.", "We propose the bag-aggregation distillation loss including the intra-sample distillation loss $\\mathcal {L}_{intra}$ and inter-sample distillation loss $\\mathcal {L}_{inter}$ .", "To aggregate the representations within a bag into more compact embeddings, we minimize the following target function: $\\min _{\\theta _S} \\mathcal {L} = \\mathop \\mathbb {E} \\limits _{\\mathbf {x_i} \\sim \\mathbf {\\Omega }_a } (L(f_\\mathbf {S}(\\mathbf {x_i}), f_\\mathbf {T}(\\mathbf {x}_a))),$ where $L$ is a metric function to measure the distance between two embeddings – there are many metrics can be selected, such as cosine similarity, euclidean distance, etc.", "Here we use the normalized cosine similarity, i.e., the contrastive loss commonly used in self-supervised learning to measure the distance between $\\mathbf {x_i}$ and the anchor sample $\\mathbf {x}_a$ .", "The target function in Eq.", "REF can be divided into two components: $\\mathcal {L}&= L(f_\\mathbf {S}(\\mathbf {x}_a), f_\\mathbf {T}(\\mathbf {x}_a)) + \\mathop \\mathbb {E} \\limits _{\\mathbf {x}_i \\sim \\mathbf {\\Omega }_a \\setminus \\mathbf {x}_a} (L(f_\\mathbf {S}(\\mathbf {x_i}), f_\\mathbf {T}(\\mathbf {x}_a))),$ where the first item focuses on pulling different views (augmentations) of the same sample together, and the second item targets at pulling different samples that are within a same bag into more related ones.", "We term the first item as $\\mathcal {L}_{intra}$ and the second item as 
$\\mathcal {L}_{inter}$ ." ], [ "Intra-Sample Distillation", "The intra-sample distillation loss is a variant of the conventional contrastive loss.", "Contrastive learning aims to learn representations by discriminating the positive key among negative samples.", "Given two augmented views $\\mathbf {x}$ and $\\mathbf {x^{\\prime }}$ of one input image, MoCo [5] uses an online encoder $f_{\\rm q}$ and a momentum encoder $f_{\\rm k}$ to generate embeddings of the positive pairs: $q = f_{\\rm q}(\\mathbf {x})$ , $k = f_{\\rm k}(\\mathbf {x^{\\prime }})$ .", "The contrastive loss is defined as ${\\mathcal {L}_{contrast}} \\!=\\!", "- \\log \\frac{{\\exp (\\mathbf {q} \\cdot {\\mathbf {k^+} }/\\tau )}}{{\\sum _{i=0}^{N} \\exp (\\mathbf {q} \\cdot {\\mathbf {k_i} }/\\tau )}}.$ During distillation, we simply replace $f_\\mathbf {q}$ and $f_\\mathbf {k}$ by the student model $f_\\mathbf {S}$ and the teacher model $f_\\mathbf {T}$ , while the weights of the teacher model $f_\\mathbf {T}$ are pretrained and are not updated during distillation.", "The intra-sample distillation loss can be formulated as ${\\mathcal {L}_{intra} \\!=\\!", "- \\log \\frac{ \\exp (f_\\mathbf {S}(\\mathbf {x}_a) \\cdot f_\\mathbf {T}(\\mathbf {x^{\\prime }}_a) /\\tau ) }{\\sum _{i=0}^{N} \\exp (f_\\mathbf {S}(\\mathbf {x}_a) \\cdot {\\mathbf {k_i^-} }/\\tau )}},$ where $\\tau $ is the temperature parameter and $\\mathbf {k^-}$ denotes the negative samples generated by the teacher model $f_\\mathbf {T}$ ." ], [ "Inter-Sample Distillation", "Given the anchor sample $\\mathbf {x}_a$ and a positive sample $\\mathbf {x}_p$ in the bag $\\mathbf {\\Omega }_a$ , it is natural to map highly related samples to more similar representations.", "In other words, we want the bag filled with related samples to be more compact.", "Inspired by Eq.", "REF , we define the inter-sample distillation loss as ${\\mathcal {L}_{inter} \\!=\\!", "- \\log \\frac{ \\exp (f_\\mathbf {S}(\\mathbf {x}_p) \\cdot f_\\mathbf {T}(\\mathbf {x}_a) /\\tau ) }{\\sum _{i=0}^{N} \\exp (f_\\mathbf {S}(\\mathbf {x}_p) \\cdot {\\mathbf {k_i^-} }/\\tau )}}.$ The intra- and inter-sample distillation losses serve different roles.", "The intra-sample distillation works like conventional distillation [14], [21], which aims at minimizing the distance between the outputs of the teacher and student models given the same input.", "In contrast, the inter-sample distillation mainly focuses on transferring the data-relation knowledge obtained from the pretrained teacher model, with the bag-wise dataset as the carrier." ], [ "Experiments", "In this section, we evaluate the feature representations of the distilled student networks on several widely used benchmarks.", "We first report the performance on ImageNet under the linear evaluation and semi-supervised protocols.", "Then we conduct evaluations on several downstream tasks including object detection and instance segmentation, as well as some ablation studies to diagnose how each component and parameter affect the performance." ], [ "Pre-training of Teacher Model", "Two models are used as teachers: ResNet-50 trained with MoCo-v2 [5] for 800 epochs and ResNet-50$\\times $ 2 trained with SwAV for 400 epochs.", "The officially released weights (the checkpoints of the teacher models can be downloaded from https://github.com/facebookresearch/moco and https://github.com/facebookresearch/swav) are used to initialize the teacher models during distillation for fair comparisons with other methods."
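A minimal PyTorch sketch of the two losses is given below. The function and argument names are ours, the embeddings are assumed to be L2-normalised, and, as in common MoCo-style implementations, the positive logit is included in the softmax denominator; the equations above write the denominator over the negatives only, so this is one possible reading.

```python
import torch
import torch.nn.functional as F

def bag_distill_losses(s_anchor, s_pos, t_anchor_aug, t_anchor, queue, tau=0.2):
    """Intra- and inter-sample distillation losses (cf. L_intra and L_inter).

    s_anchor     : student embedding of the anchor view x_a          (B, D)
    s_pos        : student embedding of a positive bag member x_p    (B, D)
    t_anchor_aug : teacher embedding of another view x'_a            (B, D)
    t_anchor     : teacher embedding of the anchor x_a               (B, D)
    queue        : teacher embeddings used as negatives k^-          (K, D)
    """
    def info_nce(q, k_pos, negatives):
        l_pos = (q * k_pos).sum(dim=1, keepdim=True) / tau           # (B, 1)
        l_neg = q @ negatives.t() / tau                              # (B, K)
        logits = torch.cat([l_pos, l_neg], dim=1)
        labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
        return F.cross_entropy(logits, labels)

    loss_intra = info_nce(s_anchor, t_anchor_aug, queue)   # same sample, different views
    loss_inter = info_nce(s_pos, t_anchor, queue)          # bag member pulled toward the anchor
    return loss_intra + loss_inter
```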
], [ "Self-supervised Distillation of Student Model", "Two models are used as students: ResNet-18 and ResNet-34.", "Following the settings of MoCo in [5], we add a 2-layer MLP on top of the last averaged pooling layer to form a 128-d embedding vector.", "During distillation, the model is trained with the SGD optimizer with momentum 0.9 and weight decay 0.0001 for 200 epochs on ImageNet [6].", "The batch size and learning rate are set as 256 and 0.03 for 8 GPUs, which simply follow the hyper-parameter settings as in [5].", "The learning rate is decayed to 0 by a cosine scheduler during training process.", "The temperature $\\tau $ and the size of memory bank are set as 0.2 and 65,536 respectively.", "For the bagging strategy, we use K-nearest neighbors strategy unless specified." ], [ "Linear Evaluation", "In order to evaluate the performance of BINGO, we train a linear classifier upon the frozen representation, following the common evaluation protocol in [5].", "For fair comparisons, we use the same hyper-parameters as [7], [8] during linear evaluation stage.", "The classifier is trained for 100 epochs, using the SGD optimizer with 30 as initial learning rate.", "As shown in Table REF , using ResNet-50$\\times $ 2 as teacher, BINGO achieve $65.5\\%$ and $68.9\\%$ accuracies on ResNet-18/34, respectively, which consistently surpass previous state-of-the-art DisCo (65.2%/67.6%) and SEED (63.0%/65.7%) using the same teacher.", "Note that SEED distills for 800 epochs while DINGO is running for 200 epochs, which demonstrates the effectiveness of the proposed method." ], [ "KNN Classification", "We also evaluate representation of student model using nearest neighbor classifier with cosine similarity.", "KNN classifier can evaluate the learned feature more directly without any parameter tuning.", "Following [1], [15], [7], we extract features from center-cropped images after the last averaged pooling layers.", "For convenient comparisons with other methods, we report the validation accuracy with 10 NN(we use the student model distilled from ResNet-50$\\times $ 2).", "As shown in Table REF , BINGO achieves $61.0\\%$ and $64.9\\%$ accuracies on ResNet-18/34 models, respectively, which outperforms previous methods by a large margin." ], [ "Semi-supervised Classification", "Following previous works [3], [4], we also evaluate the proposed method by fine-tuning the student model ResNet-18 with $1\\%$ and $10\\%$ labeled data.", "We follow the training split settings as in [3] for fair comparisons.", "The network is fine-tuned for 60 epochs with SGD optimizer.", "The learning rate of the last randomly initialized fc layer is set as 10.", "As shown in Table REF , BINGO obtains accuracies of $48.2\\%$ and $60.2\\%$ when using 1% and 10% labels, respectively, both results beats the previous best performance.", "Table: Transfer learning accuracy (%) on COCO detection.Table: Transfer learning accuracy (%) on COCO instance segmentation.Table: Semi-supervised learning by fine-tuning 1%1\\% and 10%10\\% labeled images on ImageNet using ResNet-18 as backbone." 
], [ "Transfer to Detection and Instance Segmentation", "We also evaluate the generalization ability of the student model on detection and instance segmentation tasks.", "The COCO dataset is used for evaluation.", "Following [11], we use Mask R-CNN [12] for object detection and instance segmentation and fine-tune all the parameters of student model ResNet-18 end-to-end.", "As shown in Table REF and Table REF , BINGO consistently outperforms models pretrained without distillation." ], [ "Ablation Study", "In this section, we conduct detailed ablation studies to diagnose how each component affect the performance of the distilled model.", "Unless specified, all results in this section are based on ResNet-18, and distilled for 200 epochs." ], [ "Impact of $k$ in K-nearest Neighbors", "We inspect the influence of $k$ in K-nearest neighbors bagging strategy.", "As shown in Fig.", "REF , the results are relatively robust for a range of $k$ ($k$ =1,5,10,20).", "In addition, we find that the classification accuracy decrease with $k=10,20$ compared with $k=5$ , because the noise is introduced when $k$ becomes large.", "However, the performance with a relative small $k=1$ is no better than $k=5$ , we think the diversity is sacrificed when we only select the top-1 nearest neighborhood all the time.", "Table: Lower and Upper bound performance exploration via the bagging criterion.Figure: Top-1 accuracy with different kk in K-nearest neighbors." ], [ "Lower and Upper Bound of The Proposed Method", "As shown in Table REF , using data relation extracted from a random initialized model gives a poor performance of $46.6\\%$ , which can be a lower bound of our method.", "Then we try to explore the upper bound performance by bagging instances via a supervised-pretrained model, the performance gets an improvement of $0.8\\%$ over using data relation extracted from the unsupervised pretrained teacher model.", "When we directly use the ground truth labels to bag instances, we get a highest upper bound performance, i.e., $65.8\\%$ Top-1 accuracy." ], [ "Impacts of Data-Relation and Teacher Parameters. 
", "In our experiments, both the data relation and model parameters of teacher model are used to distill student model.", "We diagnose how each component affects the distillation performance.", "As shown in Table REF , no matter the teacher's parameters are loaded or not, using the the data relation from pretrained teacher model always gets better results than using data relation from online student model, which verifies the efficiency of transferring teacher's data relation to student model.", "Interestingly, we find that BINGO even gets good result only utilizing teacher's data relation (Row 3 of Table REF ), which is about $10\\%$ higher than model training without distillation.", "Table: Effects of utilizing teacher's data-relation and teacher's pretrained weights.", "The column of Student Relation means that we bag data with features extracted from student model online and the column of Teacher Relation means that we bag data with features extracted from a pretrained teacher model.", "When teacher parameters are not used, we replace the pretrained teacher model as a momentum update of student model like .Figure: t-sne visualization of student's representations pretrained with the MoCo-v2 baseline (a), and distilled with (b) and without bag aggregation (c).Table: Top-1 accuracy of linear classification results on ImageNet using different distillation methods on ResNet-18 student model (ResNet-50 is used as teacher model)" ], [ "Compare with Other Distillation Methods.", "We now compare with several other distillation strategies to verify the effectiveness of our method.", "We compare with two distillation schemes: feature-based distillation method and relation-based distillation, which is termed as KD and RKD, respectively.", "Feature-based distillation method aims at minimizing $l2$ -distance of teacher & student's embeddings.", "Relation-based distillation method aims at minimizing the difference between inter-sample-similarity graph obtained from teacher and student model.", "As shown in Table REF , BINGO outperforms all theses alternative methods." 
], [ "Analysis and Discussions", "We now inspect what the student learns during the distillation.", "Firstly we compute the average distance between anchor sample $\\mathbf {x}_a$ and its positive samples $\\mathbf {x_p}$ in a bag $\\mathbf {\\Omega }_a$ over the whole dataset: $\\mathbf {BagDis} = \\mathop \\mathbb {E} \\limits _{\\mathbf {x}_a \\sim \\mathbf {X}} \\mathop \\mathbb {E} \\limits _{\\mathbf {x}_p \\sim \\mathbf {\\Omega }_a} || (f_\\mathbf {S}(\\mathbf {x}_a) - f_\\mathbf {S}(\\mathbf {x}_p)) ||_2^2$ According to Eq.", "REF , we compute the averaged distance in the bag using distilled student model.", "As shown in Table REF , the averaged distance in a bag is smallest when the student model is distilled with bag-aggregation loss.", "We also compute the intra-class distance among all intra-class pairwise samples.", "As shown in Table REF , the proposed method also aggregate the bag of labels with the same ground truth labels on the unseen validation set.", "Table: Averaged intra-class distance on ImageNet validation setFinally, we visualize the last embedding feature to understanding the aggregating properties of the proposed method.", "10 classes are randomly selected from validation set.", "We provide the t-sne visualization of the student features.", "As shown in Fig.", "REF , the same color denotes features with the same label.", "It can be seen that BINGO gets more compact representations compared with models without distillation or distilling without pulling related samples in a bag." ], [ "Conclusions", "This paper proposes a new self-supervised distillation method, named BINGO, which bags related instances by matching embeddings of the teacher.", "With the instance-wise dataset mapped into a bag-wise one, the new dataset can be applied to the distillation process for small models.", "The knowledge which represents the relation of bagged instances can be transferred by aggregating the bag, including inter-sample and intra-sample aggregation.", "Our BINGO follows a relation-guided principle, which shows stronger effectiveness than previous relation-agnostic methods.", "The proposed relation-based distillation is a general strategy for improving unsupervised representation, and we hope it would shed light on new directions for unsupervised learning." ], [ "Results of K-means bagging strategy", "We also evaluate the performance of using K-means clustering as the bagging strategy.", "According to Eq.", "2 in the main text, given the pseudo-label $\\mathbf {q} = \\lbrace \\mathbf {q_1}, \\mathbf {q_2}, ..., \\mathbf {q_N}\\rbrace $ and the anchor instance $\\mathbf {x}_a$ , the bag associated with $\\mathbf {x}_a$ as : $\\mathbf {\\Omega }_a = \\lbrace \\mathbf {i} \\ |\\ \\mathbf {q_i} = \\mathbf {q}_a, \\mathbf {i}=1, 2,..., N \\rbrace $ For implementation, ResNet-18 and ResNet-50 are used as the student and teacher model respectively.", "We evaluate the linear classification accuracy of the student model on ImageNet-1K.", "We study various cluster numbers $C$ as shown in Tab.", "REF .", "We find that a bigger cluster number can bring better results than a smaller one.", "Noting that the linear classification accuracy of bagging with K-nearest neighbors (where $k$ = 5) is slightly better than bagging via K-means clustering (with $C$ = 20000), i.e.", "$64.0\\%$ vs. 
$63.8\\%$ .", "Moreover, bagging with KNN is more convenient to implement, so we choose the KNN-based bagging strategy in implementation.", "Table: Top-1 accuracy with different cluster numbers CC in K-means clustering." ] ]
2107.01691
[ [ "Cost-Oriented Load Forecasting" ], [ "Abstract Accurate load prediction is an effective way to reduce power system operation costs.", "Traditionally, the mean square error (MSE) is a common-used loss function to guide the training of an accurate load forecasting model.", "However, the MSE loss function is unable to precisely reflect the real costs associated with forecasting errors because the cost caused by forecasting errors in the real power system is probably neither symmetric nor quadratic.", "To tackle this issue, this paper proposes a generalized cost-oriented load forecasting framework.", "Specifically, how to obtain a differentiable loss function that reflects real cost and how to integrate the loss function with regression models are studied.", "The economy and effectiveness of the proposed load forecasting method are verified by the case studies of an optimal dispatch problem that is built on the IEEE 30-bus system and the open load dataset from the Global Energy Forecasting Competition 2012 (GEFCom2012)." ], [ "Introduction", "Accurate load prediction is an effective way to reduce the power system operation costs.", "Both over- and under-forecasts may result in extra operational costs.", "For the over-forecasting case, the extra costs can be attributed to the start-up of unnecessary units, the purchase of surplus power, and selling surplus power at an unfavorable balance price [1].", "For the under-forecasting case, the additional costs may be due to the sub-optimal dispatch, purchase of expensive balancing power, and load shedding [2].", "The economic benefit of reducing the load forecasting errors in a generic model was investigated in [3].", "It concludes that a $5\\%$ forecasting error can be set as an economically acceptable allowance for the forecasting model because a further reduction of forecasting errors does not lead to a noticeable additional economic improvement.", "The economic impact of forecasting errors was quantified using three sources in [4], namely start-up costs, dispatch costs, and outage costs.", "In [5], the economic value associated with limited and inaccurate load forecasts in a specific time frame was determined in a unit commitment problem.", "A significant amount of work has been done in the area of load forecasting, which includes both deterministic forecasting [6], [7], [8] and probabilistic forecasting [9], [10], [11].", "The forecasting objectives range from predicting the consumption of an individual consumer [12] to the total aggregated load of the whole system [13].", "Traditionally, the regression models for deterministic load forecasting are trained using the metric of mean square error (MSE) loss function, with the implication that forecasting errors, identical in magnitude, cause the same and quadratic costs.", "Apparently, this assumption is not accurate, especially for cases where asymmetric costs are observed for forecasting errors in the same magnitude but with opposite sign [14], [15], [16], [17], [18].", "Thus, the forecasting errors are not able to precisely quantify the economic value among different forecasting models.", "With the discrepancy observed between the economic value and the forecasting accuracy metrics, several works studied the possibility of incorporating cost-oriented loss functions into the regression models.", "An asymmetric monetary loss function was proposed in [15] to guide the day-ahead load forecasting.", "On this basis, a genetic algorithm (GA) was applied to update the parameters of the radial basis 
function (RBF) network with a discontinuous and non-symmetric loss function.", "Similarly, an asymmetric error penalty function was designed in [16] based on the simulation results from the day-ahead unit commitment (DAUC) problem.", "The derived non-differentiable penalty function was optimized through GA for the combined backpropagation and RBF neural networks.", "Nevertheless, the error measurement between the forecasted loads and the actual loads over the 24 hours was averaged, resulting in an inexact measurement of the loss function.", "In addition to load forecasting, a general cost-oriented wind power forecasting model was formulated in [19] by integrating a predefined loss function with a boosted regression tree model.", "However, the loss function was designed based on a simple retail market which likely does not scale to complex scenarios.", "Another asymmetric loss function for solar power forecasting was studied in [20], where the loss function was formulated by adding a linear term and a cubic term as perturbations to the MSE loss function.", "The coefficients of the two additional terms control the degree of asymmetry of the loss function.", "Nonetheless, the asymmetric loss function was still defined from a statistical perspective and may not fully reflect the real cost of the system.", "The current research on cost-oriented load forecasting is still limited, and the two following main issues are not properly addressed.", "First, the asymmetric loss function is generally defined for only one specific application, and there is no general framework or approach to obtain the cost-oriented loss function.", "Second, differentiability is not incorporated as an important characteristic for the cost-oriented loss function, which limits the usage of the respective loss functions with traditional regression models.", "To address these two issues, this paper presents a generalized cost-oriented load forecasting framework that can be divided into the three stages of loss function generation, approximation, and integration into regression models.", "The losses caused by forecasting errors are first generated by a large number of simulations.", "On this basis, a differentiable piecewise loss function is mathematically approximated and derived.", "Finally, the differentiable piecewise loss function can be integrated with all regression models that can be trained by gradient-based optimization methods.", "Figure: Solution framework for cost-oriented load forecasting.Hence, this paper makes the following main contributions: A generalized framework to quantify losses with respect to different forecasting errors is proposed.", "Specifically, an example model is formulated to generate the loss data associated with forecasting errors by comparing the day-ahead economic dispatch (DAED) problem and the intraday power balance (IPB) process.", "A fully differentiable cost-oriented loss function is derived by applying an optimal piecewise linear approximation method and the Huber norm embedding technique on the generated loss data.", "The cost-oriented loss function is integrated with multiple linear regression (MLR) and artificial neural network (ANN) models that are trained under gradient-based optimization methods.", "The rest of the paper is organized as follows.", "Section  identifies the critical issues in cost-oriented load forecasting and provides a general solution framework.", "In Section , the process to generate losses with respect to different forecasting errors and to derive the 
differentiable cost-oriented loss function from the discrete loss data is illustrated.", "Section  shows how to integrate the derived loss function with MLR and ANN models to formulate forecasting models.", "Section  presents case studies to verify the effectiveness of the proposed method based on a DAED-IPB problem.", "Section  draws conclusions." ], [ "Problem Statement and Framework", "Traditional load forecasting models try to minimize the MSE or other statistical indices.", "In contrast, the cost-oriented load forecasting model discussed in this paper tries to minimize the real economic costs caused by forecasting errors in the decision-making process.", "In general, the loss function and regression model are two parts that form the basis for a load forecasting model.", "Thus, to develop a cost-oriented load forecasting model, we need to address two main challenges.", "The first is to define a loss function that can reflect the real cost of power system operation.", "This is necessary since the economic costs caused by forecasting errors may vary in different situations and time periods.", "For example, a 3% load forecasting error at a certain time on different days may yield different costs due to the varying operating conditions of the power system.", "The second challenge is to integrate the cost-oriented loss function with a regression model.", "The mature and off-the-shelf regression models use quadratic loss functions (e.g., MSE).", "Hence, it is necessary to study how to replace the quadratic loss function with the cost-oriented loss function and meanwhile guarantee that the model can be easily trained.", "Thus, the proposed framework for the cost-oriented load forecasting model is shown in Fig.", "REF .", "The framework consists of the following three stages: Stage 1: Loss function generation.", "This stage quantifies the economic costs in the energy system associated with forecasting errors using a large number of scenarios.", "Stage 2: Loss function approximation.", "In this stage, an analytical and differentiable cost-oriented loss function based on the discrete cost and error data obtained in Stage 1 is formulated.", "Stage 3: Loss function integration.", "This stage integrates the differentiable cost-oriented loss function obtained in Stage 2 with various regression models that are trained by gradient-based optimization methods.", "In the following Section , we demonstrate how to generate and approximate the loss function before then integrating it into regression models in Section ." 
], [ "Loss Function Generation", "To quantify the costs associated with forecasting errors, two costs, namely the ideal cost with accurate forecasts and the actual cost with forecasting errors, need to be computed.", "For the ideal cost, the system operator gathers all the precise information of the system ex-ante such that the information $I$ available at current time $i$ is the same as at the instance of time $i+k$ for which the prediction has been made, namely $I_i^*=I_{i+k}$ .", "Given the exact information, namely using the load forecasts at future time $y_{i+k}^*$ , the system operator can obtain the minimum system operation cost $C(y_{i+k}^*|I_i^*)$ .", "However, in reality the system operator can only make a decision based on the load forecasts simulated with the current existing information $I_i$ and then adjust the system accordingly when the future time arrives.", "In this case, the actual cost associated with the forecasts is denoted as $C(\\hat{y}_{i+k}|I_i)$ .", "By comparing against the ideal cost, the economic loss emanated from the forecasting errors can be derived as: $C(\\epsilon _{i+k})=C(\\hat{y}_{i+k}|I_i)-C(y_{i+k}^*|I_i^*)$ where the forecasting error $\\epsilon _{i+k}=\\hat{y}_{i+k}|I_i-y_{i+k}^*|I_i^*$ .", "Thus, by generating various scenarios of imperfect load forecasts, we can quantify the discrete economic costs associated with the simulated forecasting errors ex-ante.", "For example, in the case studies a massive number of load forecasts were simulated and fed into the DAED-IPB problem to minimize the total generation costs over 24 hours.", "On this basis, the cost $C(\\epsilon _{i+k})$ for each hour is calculated.", "The green dots in Fig.", "REF show the loss data simulated for hour 19, where the evaluation metrics for both costs (i.e., FEPC) and forecasting errors (i.e., FEP) are measured as normalized values from $C(\\epsilon _{i+k})$ and $\\epsilon _{i+k}$ , respectively.", "Figure: Smoothing spline for discrete loss data in hour 19.", "FEPC: Forecasting error percentage cost.", "FEP: Forecasting error percentage." 
], [ "Loss Function Approximation", "The generated loss data correspond to discrete samples.", "However, the loss function has to be a unique mapping between errors to losses as well as analytical over the entire spectrum such that it can be integrated into an optimization problem and solved by a gradient descent algorithm.", "To generate a unique loss function, the smoothing spline is first applied to the original discrete loss data.", "However, as illustrated in [21], the smoothing spline consists of $N$ natural cubic splines where $N$ denotes the number of unique discrete data points.", "In other words, when the number of unique discrete loss data samples (i.e., the green dots shown in Fig.", "REF ) is large, the smoothing spline creates a myriad of breakpoints, making it difficult to formulate an analytical form of the loss function that then can be integrated with the regression models.", "A piecewise linear approximation, on the other hand, is simple, and the number of breakpoints depends on the precision requirement of the approximation.", "Hence, after the smoothing spline is applied on the discrete loss data, a piecewise linear approximation of the smoothing spline is introduced to form the analytical form of the loss function.", "In order to carry out piecewise linear approximation, in the following subsections, a partition scheme is first explained, and the fully differentiable cost-oriented loss function is developed on this basis." ], [ "Piecewise Linear Approximation", "The piecewise linear approximation function that is applied on the smoothing spline can be defined as: $L(\\epsilon ) = {\\left\\lbrace \\begin{array}{ll}a_1\\epsilon +b_1, & \\epsilon _{\\min } < \\epsilon < \\epsilon _{1} \\\\... \\\\a_k\\epsilon +b_k, & \\epsilon _{k-1} < \\epsilon < \\epsilon _{k} \\\\... 
\\\\a_K\\epsilon +b_K, & \\epsilon _{K-1} < \\epsilon < \\epsilon _{\\max }\\end{array}\\right.", "}$ with a limited set of breakpoints denoted as $\\Phi = \\left[(\\epsilon _{1},L_{1}),...,(\\epsilon _{k},L_{k}),...,(\\epsilon _{K-1},L_{K-1})\\right]$ .", "Optimally determining the set of the breakpoints is the key for piecewise linear approximation.", "To fix the set of breakpoints, here we concisely recapitulate the process of how to determine the number of breakpoints $K-1$ and the position of breakpoints $\\epsilon _k$ according to [22].", "The $L_2$ norm error between the derived piecewise linear function $L(\\epsilon )$ and the smoothing spline function $s$ is a metric used to select the optimal set of breakpoints.", "To minimize the $L_2$ norm error, a partition scheme that ensures convexity of the approximated function is implemented on the smoothing spline $s$ .", "Specifically, the $L_2$ norm approximation error based on the partition scheme is bounded by: $\\Vert s-L_{\\Phi ^{\\star }}(\\epsilon )\\Vert _2 & \\le \\frac{(\\int _{\\epsilon _{\\min }}^{\\epsilon _{\\max }}{s^{\\prime \\prime }(\\epsilon )^{\\frac{2}{5}}d\\epsilon )^{\\frac{5}{2}}}}{\\sqrt{120}K^2}$ where the upper bound of the $L_2$ approximation error is associated with both the second order derivative of the smoothing spline $s^{\\prime \\prime }(\\epsilon )$ and the number of partition segments $K$ .", "With this relationship, the minimal number of breakpoints $K-1$ required to approximate the smoothing spline within the specified tolerance of the approximation error can be determined.", "Once the number of breakpoints is fixed, the next step is to determine the position of the breakpoints.", "Reference [23] proves that more breakpoints shall be placed in the subinterval where the convexity of the smoothing spline is evident.", "Thus, the position of the breakpoints is measured by a density metric called the cumulative breakpoint distribution function $F(\\epsilon _{k})$ which is defined as: $F(\\epsilon _{k}) & = \\frac{\\int _{\\epsilon _{\\min }}^{\\epsilon _{k}}{|s^{\\prime \\prime }(x)|^{2/5}dx}}{\\int _{\\epsilon _{min}}^{\\epsilon _{\\max }}{|s^{\\prime \\prime }(x)|^{2/5}dx}} $ where $F(\\epsilon _{k})$ is bounded by the range of $\\left[0,1\\right]$ .", "Then $F(\\epsilon )$ is divided evenly according to the number of the breakpoints.", "As Fig.", "REF shows, the breakpoints $\\lbrace \\epsilon _{k}\\rbrace _{k=1}^{K-1}$ are placed in accordance with the cumulative distribution function such that each subinterval can contribute equally to the total approximation error, by which the position of the corresponding breakpoints are determined.", "Figure: Partition scheme for piecewise linear approximation." 
], [ "Differentiable Piecewise Loss Function", "After the breakpoints are determined, we can form the piecewise linear approximation of the loss data.", "However, since the breakpoints on both sides of each piecewise linear function create differential discontinuity, it is difficult for the loss function to be integrated with traditional regression models such as neural networks that require a continuous gradient of loss function to optimize regression coefficients.", "Inspired by the Huber function (a.k.a Huber norm) [24], the integration of transitional curves at all breakpoints are introduced to address this issue.", "The Huber function is defined as [24]: $h_{\\delta }(\\epsilon ) = {\\left\\lbrace \\begin{array}{ll}\\frac{\\epsilon ^2}{2\\delta } & |\\epsilon | \\le \\delta \\\\|\\epsilon | - \\frac{\\delta }{2} & |\\epsilon | > \\delta \\end{array}\\right.", "}$ where $\\epsilon $ stands for the forecasting error and $\\delta \\in \\mathbb {R}^{+}$ is a real value that represents the transition range from the quadratic component to the linear components.", "Figure: Continuous and differentiable piecewise linear approximation of the loss function.As illustrated in Fig.", "REF , a modified Huber norm is introduced in the neighborhood of each breakpoint $\\epsilon _{k}$ (i.e., $[\\epsilon _{k}-\\delta , \\epsilon _{k}+\\delta ]$ ).", "Now, assume that the quadratic component and two linear functions are given by: $\\begin{aligned}L_{H}(\\epsilon ) &= A\\epsilon ^2 + B\\epsilon + C \\\\L_{p}(\\epsilon ) = a_{k+1}\\epsilon +&b_{k+1},~L_{n}(\\epsilon ) = a_{k}\\epsilon +b_k\\end{aligned}$ The following four conditions have to be satisfied for the function to be differentiable: both values and the first order derivative at the transition points have to be identical, i.e., $\\begin{aligned}&L_{H}(\\epsilon _{k}+\\delta ) = L_{p}(\\epsilon _{k}+\\delta ),~L_{H}^{\\prime }(\\epsilon _{k}+\\delta ) = L_{p}^{\\prime }(\\epsilon _{k}+\\delta )\\\\&L_{H}(\\epsilon _{k}-\\delta ) = L_{n}(\\epsilon _{k}-\\delta ),~L_{H}^{\\prime }(\\epsilon _{k}-\\delta ) = L_{n}^{\\prime }(\\epsilon _{k}-\\delta )\\end{aligned}$ Consequently, we can derive the quadratic component $L_{H}$ as follows: $\\begin{aligned}L_{H} & = \\frac{a_{k+1}-a_{k}}{4\\delta }(\\epsilon -\\epsilon _{k})^2 + \\frac{a_{k+1}+a_k}{2}(\\epsilon -\\epsilon _{k}) \\\\&+\\frac{\\delta (a_{k+1}-a_k)}{4} + L_k\\end{aligned}$ Thus, the final approximated and differentiable loss function that will be integrated into regression models is formulated as $L_{\\delta }(\\epsilon ) = {\\left\\lbrace \\begin{array}{ll}a_{k}(\\epsilon -\\epsilon _{k})+L_{k} & \\epsilon _{k} - \\epsilon \\ge \\delta \\\\\\begin{split}&\\frac{a_{k+1}-a_{k}}{4\\delta }(\\epsilon -\\epsilon _{k})^2+ \\\\ &\\frac{a_{k+1}+a_{k}}{2}(\\epsilon -\\epsilon _{k})+\\\\&\\frac{a_{k+1}-a_{k}}{4}\\delta + L_{k} \\end{split}& |\\epsilon -\\epsilon _{k}| < \\delta \\\\a_{k+1}(\\epsilon -\\epsilon _{k})+L_{k} & \\epsilon - \\epsilon _{k} \\ge \\delta \\end{array}\\right.", "}$" ], [ "Loss Function Integration", "A load forecasting model $f(X)$ tries to establish the mapping between the relevant features $X_{n\\times d}$ and the predicted loads $Y_{n\\times 1}$ , where $d$ and $n$ denote the number of features and training samples, respectively.", "For typical deterministic forecasting models, the inherent loss function for each sample $\\lbrace x_i,y_i\\rbrace $ is the quadratic loss function $(f(x_i)-y_i)^2$ .", "Considering the incompatibility of cost-oriented loss functions to most 
off-the-shelf training packages, the quadratic loss function cannot be directly replaced by the cost-oriented loss function $L(\epsilon _i)$ to train the forecasting models.", "Since MLR and ANN are two commonly used regression models for load forecasting [25], this section studies how to train these two models with the cost-oriented loss function.", "In this paper, the forecasting error percentage (FEP) $\epsilon _i=\frac{f(x_i)-y_i}{y_i}$ is used as the input to the loss function $L(\epsilon _i)$ instead of the absolute error." ], [ "MLR with Cost-oriented Loss Function", "For the MLR model $f(X)=Xw$ , given the defined piecewise loss function $L$ , the optimization problem for determining the parameters $w$ can be formulated as: $w^{*} = \operatornamewithlimits{arg\,min}_{w}\sum _{i=1}^n{L\left(\frac{x_iw-y_i}{y_i}\right)}$ It can be solved by the gradient descent method, which iteratively updates the regression parameters $w$ by following the negative gradient of the loss function until the gradient finally converges to zero.", "If the loss function is convex, the point where the gradient is equal to zero is equivalent to the global minimum solution.", "The update rule is: $w^{t+1} = w^t-\eta ^t\nabla _{w}\sum _{i=1}^n{L\left(\frac{x_iw-y_i}{y_i}\right)}$ where $\eta ^t$ is the learning rate at the $t$ -th iteration.", "Initially, the learning rate $\eta ^t$ is set to its default value (e.g., 0.5).", "To facilitate the convergence of the algorithm, the learning rate is time-variant during the training process, i.e., $\eta ^{t+1}={\eta ^{t}}/{(1+\gamma {t})}$ , where $\gamma $ is the decay rate [26].", "Using the chain rule, (REF ) is further derived as: $w^{t+1}= w^t-\eta ^t\sum _{i=1}^n{\frac{\partial L}{\partial \epsilon _i}\frac{\partial \epsilon _i}{\partial w}}=w^t-\eta ^t\sum _{i=1}^n{\frac{\partial L}{\partial \epsilon _i}\frac{x_i}{y_iI^{1\times (d+1)}}}$ Since $x_i$ is of size $1\times (d+1)$ , to compute the element-wise division, the scalar $y_i$ has to be broadcast to the row vector $y_iI^{1\times (d+1)}$ of the same size as $x_i$ .", "Both the initial learning rate $\eta ^0$ and the decay rate $\gamma $ are hyperparameters tuned by cross-validation.", "Algorithm 1 summarizes the MLR model with a cost-oriented loss function.", "Algorithm 1 (MLR with cost-oriented loss function).", "Input: loss function $L$ , training data $D=(X, Y)$ where $X\in \mathbb {R}^{n\times {m}}$ , initial weights $w_0 \in \mathbb {R}^{d}$ , default initial learning rate $\eta ^{0}=0.5$ , decay rate $\gamma =0.1$ , maximum iterations $t^{\max }$ .", "While $t\le t^{\max }$ : compute the gradient of the cost-oriented loss function, $g_t = \nabla _{w}{\sum _{i=1}^n{L_i(\epsilon (x_iw,y_i))}}=\sum _{i=1}^n{\frac{\partial L_i}{\partial {\epsilon _i}}\frac{x_i}{y_iI^{1\times (d+1)}}}$ ; weight update, $w_{t+1} = w_{t}-\eta ^t{g_t}$ ; learning rate update, $\eta ^{t+1}=\frac{\eta ^{t}}{1+\gamma {t}}$ ; update the iteration tag, $t = t+1$ .", "Output: the optimized MLR model ${Y^{*}}=Xw^*$ ." ], [ "ANN with Cost-oriented Loss Function", "For a feed-forward ANN with $Q$ hidden layers, the input data for every layer are aggregated in the vector $v$ .", "Thus, the initial input data $x$ from the input layer is $v^{(0)} = x$ .", "The output of the hidden layer $l=1:Q$ is: $v^{(l)} = \phi \left(W^{(l)}v^{(l-1)}\right)$ where $W^{(l)}$ is the weight matrix applied to the output of layer $l-1$ and $\phi (\cdot )$ is the activation function.", "Through propagated 
computation of each layer, the output of the last hidden layer is passed to the output layer.", "Hence, the prediction $\hat{y}$ is obtained from the output layer: $\hat{y} = f(x,W) = W^{(Q+1)}v^{(Q)}$ The cost-oriented loss function $L(\epsilon (\hat{y},y))$ is used to determine the forecasting loss.", "Consequently, the objective function of the cost-oriented loss resembles the MLR problem (REF ) and can be defined as: $\begin{aligned}\hat{W}=\operatornamewithlimits{arg\,min}_{W}{\sum _{i=1}^n{L\left(\frac{f(x_i,W)-y_i}{y_i}\right)}}\end{aligned}$ The learning problem now is to find the weight matrices such that the cost-oriented loss obtained at the output layer is minimized.", "Since the minimization problem is highly non-convex, the weight matrices $W=(W^{(1)},...,W^{(Q+1)})$ are generally optimized by gradient descent methods.", "However, unlike the MLR technique, whose prediction expression is explicit, the output of the ANN is a nonlinear function of the parameters in all hidden layers.", "Thus, the backpropagation algorithm is applied to compute the gradient with respect to the weight matrices.", "For the weights of the output layer, the chain rule can be applied.", "The partial derivative of the loss function with respect to the prediction $\hat{y}$ is computed, such that at layer $Q$ we have $\nabla _{W^{(Q+1)}}L = \frac{\partial {L}}{\partial {\epsilon }}\frac{\partial {\epsilon }}{\partial {\hat{y}}}\frac{\partial {\hat{y}}}{\partial {W^{(Q+1)}}}=\delta ^{(Q+1)}{(v^{(Q)})^{T}}$ where $\epsilon =\frac{\hat{y}-y}{y}$ and $\delta ^{(Q+1)}$ is short for $\frac{\partial {L}}{\partial {\epsilon }}\frac{\partial {\epsilon }}{\partial {\hat{y}}}$ .", "If we define $z^{(l)} = W^{(l)}v^{(l-1)}$ , the gradient of the internal weights for each hidden layer $l=Q:-1:1$ can be computed as: $\delta ^{(l)} = \frac{\partial {\phi }}{\partial {z^{(l)}}}\odot ((W^{(l+1)})^{T}\delta ^{(l+1)}),\quad \nabla _{W^{(l)}}L = \delta ^{(l)}(v^{(l-1)})^{T}$ where $\odot $ denotes the pointwise multiplication operator.", "Using both the feed-forward and backpropagation algorithms, the gradients for all weights in the neural network can be updated until convergence.", "However, backpropagation over all input observations leads to a high computational effort and reduces efficiency.", "Therefore, a stochastic gradient descent method using mini-batches [27], i.e., subsets sampled from the whole dataset, is implemented in the proposed ANN algorithm to alleviate the parameter estimation variance while having a low per-iteration computation cost over the whole dataset.", "To escape local minima of the loss function, the adaptive moment estimation (Adam) algorithm, which has an adaptive learning rate, is implemented in the regression model [28].", "Adam carefully chooses the step sizes when it updates the weights by introducing first and second moment estimates.", "Details of the weight update using Adam can be found in Algorithm 2, which summarizes the ANN with a cost-oriented loss function.", "Algorithm 2 (ANN with cost-oriented loss function).", "Input: loss function $L$ , training data $D=(X, Y)$ where $X\in \mathbb {R}^{n\times {m}}$ , number of layers $Q$ , number of neuron units in layer $l$ : $M_l$ , maximum number of epochs $ep^{\max }$ , randomly initialized weights $W_0^l \in \mathbb {R}^{M_l}$ , batch size $b$ (e.g. $b = 64$ ), parameter settings for Adam: $\alpha =0.001,\beta _1=0.9,\beta _2=0.999,\epsilon =10^{-8}$ .", "Initialize $m_0 = 0, v_0 = 0, W_0 = 0$ .", "While $ep \le 
ep^{\max }$ : randomly partition $X$ into $T=n/b$ subsets $\lbrace X^1,...,X^t,...,X^{T}\rbrace $ ; for $t=1$ to $T$ : feed-forward computation, $f(X^t,W_t) = W_t^{(Q+1)}v^{(Q)}$ ; compute the gradient of the ANN using backpropagation, $g_t = \nabla _{W}{\sum _{j=1}^b{L(\epsilon (f(x_j^t,W),y_j))}}$ ; biased first moment, $m_t = \beta _1{m_{t-1}}+(1-\beta _1)g_t$ ; biased second moment, $v_t = \beta _2{v_{t-1}}+(1-\beta _2)g_t^2$ ; unbiased first moment, $\hat{m}_t = \frac{m_t}{1-\beta _1^t}$ ; unbiased second moment, $\hat{v}_t = \frac{v_t}{1-\beta _2^t}$ ; weight update, $W_t = W_{t-1}-\frac{\alpha \hat{m}_t}{\sqrt{\hat{v}_t+\epsilon }}$ ; update the subset index, $t = t+1$ ; update the epoch indicator, $ep = ep +1$ .", "Output: the optimized ANN model $f(x,W^*)$ ." ], [ "Evaluation Metrics", "Two evaluation metrics are used to quantify the performance of the forecasting algorithm.", "The first is the mean absolute percentage error (MAPE): $\text{MAPE}= \frac{1}{N}\sum _{i=1}^N{\left|\frac{\hat{y}_{i}-y_{i}}{y_{i}}\right|}\times 100\% = \frac{1}{N}\sum _{i=1}^N{|\epsilon _i|}\times 100\%$ where $N$ is the total number of test data samples.", "MAPE is a commonly used metric to evaluate the average magnitude of forecasting errors from the statistical perspective.", "The other metric evaluates the operational cost caused by forecasting errors.", "The forecasting error percentage cost (FEPC) for each time period is defined as the scale-independent loss given the forecasting error $\epsilon _i$ , namely $\text{FEPC}(\hat{y}_{i},y_{i})={\frac{C(\hat{y}_{i})-C(y_{i})}{C(y_{i})}}\times 100\%$ On this basis, the mean forecasting error percentage cost (MFEPC) is defined to measure the average of the FEPC on the test dataset with $N$ samples: $\text{MFEPC} = \frac{1}{N}\sum _{i=1}^N{\text{FEPC}(\hat{y}_{i},y_{i})} $ By computing the MFEPC at the same hour of each day over a long period (e.g., a year), we can statistically assess the economic value of the forecasting model for that hour.", "Additionally, two other metrics, namely the over-forecasting percentage (OFP) and the under-forecasting percentage (UFP), are used to examine the asymmetry of the forecasting error distribution: $\text{OFP} = \frac{\sum _{i=1}^N{\max ((\hat{y}_i-y_i)/|\hat{y}_i-y_i|,0)}}{N}\times 100\%,\quad \text{UFP} = -\frac{\sum _{i=1}^N{\min ((\hat{y}_i-y_i)/|\hat{y}_i-y_i|,0)}}{N}\times 100\%$" ], [ "Case Studies", "The day-ahead economic dispatch scenario in China is used in this section to verify the effectiveness and economic advantage of the proposed framework.", "The MLR and ANN models with different loss functions are tested.", "Note that the proposed cost-oriented load forecasting framework has various potential applications and is not limited to the dispatch problem." 
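Before turning to the experiments, the following sketch ties the pieces together: it evaluates the Huber-smoothed piecewise loss $L_\delta$ and its derivative, and uses it in one plain gradient step for the MLR model. It assumes the breakpoints $\epsilon_k$, segment slopes $a_1,\dots,a_K$ and breakpoint loss values $L_k$ have already been extracted from a consistent piecewise linear approximation; the helper names and the plain (non-decayed) learning rate are illustrative only.

```python
import numpy as np

def smoothed_piecewise_loss(eps, bk, slopes, values, delta=0.01):
    """Huber-smoothed piecewise linear loss L_delta(eps) and its derivative.
    bk: breakpoints eps_1..eps_{K-1}; slopes: segment slopes a_1..a_K;
    values: loss values L_k at the breakpoints (assumed consistent with a
    continuous piecewise linear function)."""
    eps = np.atleast_1d(eps).astype(float)
    L, dL = np.empty_like(eps), np.empty_like(eps)
    for n, e in enumerate(eps):
        j = int(np.argmin(np.abs(bk - e)))          # nearest breakpoint eps_j
        d = e - bk[j]
        if abs(d) < delta:                          # quadratic transition zone around eps_j
            a_lo, a_hi = slopes[j], slopes[j + 1]
            L[n] = ((a_hi - a_lo) / (4 * delta) * d ** 2
                    + (a_hi + a_lo) / 2.0 * d
                    + (a_hi - a_lo) / 4.0 * delta + values[j])
            dL[n] = (a_hi - a_lo) / (2 * delta) * d + (a_hi + a_lo) / 2.0
        else:                                       # plain linear segment, anchored at eps_j
            a = slopes[int(np.searchsorted(bk, e))]
            L[n] = a * d + values[j]
            dL[n] = a
    return L, dL

def mlr_step(w, X, y, bk, slopes, values, lr=0.5, delta=0.01):
    """One gradient-descent step for the MLR model f(x) = x @ w."""
    eps = (X @ w - y) / y                           # forecasting error percentage per sample
    _, dL = smoothed_piecewise_loss(eps, bk, slopes, values, delta)
    grad = (dL / y) @ X                             # chain rule: dL/deps * x_i / y_i, summed
    return w - lr * grad
```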
], [ "Experimental Setups", "Many provincial and municipal system operators still centrally implement the administrative generation scheduling process based on daily point forecasts [29].", "In this case, a modified IEEE 30-bus test system given in [30] is used to simulate an economic dispatch optimization problem in a regulated municipal-level grid system.", "Fig.", "REF shows the structure of the 30-bus test system, where a total of 6 generators and 3 BESSs are connected to the network to supply the 21 loads, whose voltage level and types are assumed to be uniform in the system.", "Two dispatch optimization problems, namely the DAED problem and the IPB problem, are formulated to compute the loss data.", "The mathematical formulations of the two problems can be found in the Appendix.", "Alike the actual operation process, the DAED problem schedules the power output of all online generators based on the forecasted daily load profile $\\hat{y}_{i}$ in the controlled region.", "On this basis, the IPB problem with the actual load ${y}_{i}$ is implemented to adjust the outputs of generators, charging/discharging behavior of BESS, and load shedding to balance the intra-day load deviation.", "With established system parameters, the cost obtained by the IPB problem is viewed as the actual total $C(\\hat{y}_{i})$ .", "The actual total costs are then compared against the ideal total costs $C({y}_{i})$ (i.e., the DAED problem is solved with actual load ${y}_{i}$ ) to calculate the FEPC.", "Figure: Topology of the 30-bus test network.The actual hourly load data originates from the Global Energy Forecasting Competition 2012 (GEFCom2012) dataset [31] and is assumed to be distributed proportionally to each node based on the magnitude of the load provided in the modified IEEE 30-bus test case.", "The typical daily load data ${y}_{i}$ are derived from the original dataset to demonstrate the effectiveness of the proposed forecasting models.", "To simulate the forecasted loads in the day-ahead process, as shown in Fig.", "REF , a total of 10,000 hourly load forecasting scenarios are generated uniformly between $\\left[0.9{y}_{i},1.1{y}_{i}\\right]$ in each hour $i$ using the Monte-Carlo process.", "Note that the goal of the simulation of forecasted loads is to generate scenarios that are widespread across the error spectrum of loss functions, which is not related to the actual distribution of the forecasting error.", "Since in reality only total aggregated load deviation is considered by the municipal system operators in the IPB process, the variation of uncertainty in this model is generated for the total load and simulated using the Monte Carlo process.", "With uniform nominal coordinate for the hourly loss functions, the same hourly loss functions can be applied to each load of the grid, thereby reducing the overall computational complexity for loss function formulation.", "Figure: 10,000 simulated day-ahead load forecasts.Since feature selection is not the main focus of this paper, a commonly used and effective feature set that takes into account the recency effect [25] is applied for both MLR and ANN.", "For instance, the MLR model can be expressed as: $\\begin{split}\\hat{y}_i =& q_0 + q_{1}Trend + q_{2}M_i+q_{3}W_i+q_{4}H_i+q_{5}W_iH_i\\\\&+f\\left({TP}_i\\right)+\\sum _{ds}f\\left(\\tilde{TP}_{i,ds}\\right)+\\sum _{h}f\\left({TP}_{i-h}\\right)\\end{split}$ where $H_i$ , $W_i$ , $M_i$ and $TP_i$ denote calendar hour, week, month and temperature corresponding to the hour $i$ , respectively.", "The $f$ 
function comprises the coupling features between calendar variables and the temperature in hour $i$ ($TP_i$ ), the average temperature over the past $ds$ days ($\tilde{TP}_{i,ds}$ ) and the $h$ lagged hourly temperatures (${TP}_{i-h}$ ).", "In our case, $ds$ is 3 and $h$ is 4.", "Overall, the input dataset with 1019 features is created from the data provided in GEFCom2012.", "The configurations for the ANN models integrated with different loss functions are provided in Table REF .", "Four years of load data from GEFCom2012 are split into two parts for model training and testing; the first three years of data, namely from the year 2004 to 2006, are used as the training set, and the data from 2007 are used as the out-of-sample set to fairly evaluate the model performance.", "Table: Hyperparameters for ANN forecasting model." ], [ "Loss Function Generation", "Fig. REF shows the generated loss data for hour 8 and hour 19 using the DAED and IPB problems with the simulated load scenarios, where each green dot corresponds to a load scenario.", "The derived hourly loss data are approximated with three cost-oriented loss functions, namely the hourly loss function, the daily loss function, and the linear loss function.", "The hourly loss function is derived directly from the loss data of the corresponding hour using the approximation approach proposed in Section .", "The daily loss function aggregates all the data from 24 hours and approximates them with a static loss function.", "The linear loss function is formulated by a linear regression model with one linear function each for negative and positive error values.", "Figure: Cost-oriented loss functions at (a) hour 8, (b) hour 19.", "The hourly loss function is the closest approximation to the original loss data at each hour.", "In this case, 24 forecasting models are trained individually for the different hours of the day.", "If we compare the daily loss function with the hourly loss function, we see a larger approximation error.", "For instance, the daily loss function is higher than the loss data when the FEP is negative at hour 8, while lower than the loss data in the same range at hour 19.", "Nevertheless, the model integrated with the daily loss function is uniform over the 24 hours, which results in a lower computational burden.", "The linear loss function, despite being time-dependent, cannot approximate the real costs at each hour as precisely as the hourly loss function." 
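For illustration, the sketch below shows one plausible way to construct the three variants from the same simulated loss data; `fit_hourly` is a placeholder for the spline-plus-piecewise-linear approximation of the previous sections, and the per-hour, sign-split least-squares fit is our reading of how the linear variant is built, not a specification from the paper.

```python
import numpy as np

def build_loss_variants(errors_by_hour, losses_by_hour):
    """errors_by_hour[h] / losses_by_hour[h]: simulated FEP / FEPC samples for hour h."""
    # hourly variant: one approximation per hour (24 loss functions)
    hourly = {h: fit_hourly(errors_by_hour[h], losses_by_hour[h]) for h in errors_by_hour}

    # daily variant: pool all 24 hours and fit a single static loss function
    all_e = np.concatenate([errors_by_hour[h] for h in errors_by_hour])
    all_l = np.concatenate([losses_by_hour[h] for h in errors_by_hour])
    daily = fit_hourly(all_e, all_l)

    # linear variant: per hour, one straight line for negative and one for positive errors
    linear = {}
    for h in errors_by_hour:
        e, l = errors_by_hour[h], losses_by_hour[h]
        neg, pos = e < 0, e >= 0
        linear[h] = (np.polyfit(e[neg], l[neg], 1), np.polyfit(e[pos], l[pos], 1))
    return hourly, daily, linear
```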
], [ "Forecasting Results from MLR", "Fig.", "REF shows the forecasting performance of MLR with the four different loss functions in terms of MFEPC, MAPE, OFP, and UFP.", "The forecast with the MSE loss function is used as the benchmark.", "Figure: Forecasting performance of four loss functions using the MLR model evaluated by metrics: (a) MFEPC, (b) MAPE, (c) OFP, (d) UFP.Fig.", "REF presents that the economic costs due to forecasting errors are low between the hours of 7 and 12 and high between the hours of 15 and 19 for all four loss models.", "A similar pattern can also be observed from the MAPE results shown in Fig.", "REF .", "In fact, a monotonic relationship between the forecasting errors and the economic costs can be observed for all forecasting models integrated with various loss functions.", "As can be expected, the economic cost at a particular hour is highly influenced by the forecasting accuracy at the same hour, which explains the correlated pattern between the MFEPC and MAPE results.", "In addition, it can be seen from the load profiles in Fig.", "REF that starting from hour 7, the total load of the system is relatively constant.", "During this period, online generators produce a steady amount of power and are not scheduled to ramp up or down rapidly.", "Thus, they are relatively flexible for real-time output adjustment.", "In addition, the remaining energy stored in the BESSs is still abundant at the beginning of the day for balancing the load deviation.", "Thus, these available system resources at this period contribute to the low economic costs associated with the forecasting errors.", "When the total load of the system increases starting in hour 15, the system constraints are tightened, and balancing resources become limited.", "Thus, more costs are expected as the system is moving further away from the optimal operation point.", "In this case, the MFEPC associated with forecasting errors increases.", "Table: Daily averaged performances of four MLR models.For all loss models, the lowest MFEPC is obtained by the hourly loss model for all 24 hours.", "Furthermore, both daily loss and linear loss models achieve economically more favorable results over 24 hours compared to the benchmark model.", "Thus, the economic value of using cost-oriented forecasting models is demonstrated.", "On the other hand, the improvement of economic value from the daily and linear models is minor with respect to the benchmark model.", "This can be attributed to the simplified approximation approaches for both loss functions from the loss data, i.e., part of the information present in the hourly loss function is lost.", "Therefore, a trade-off between the approximation complexity of the loss function and the improvement in economic terms is observed.", "Overall, the MFEPC of hourly, daily, and linear models improves by 10.19%, 3.04%, and 1.96% with respect to the benchmark model, respectively.", "Figs.", "REF and REF show that the distribution of forecasting errors for the benchmark model is almost symmetric, indicated by its metrics OFP and UFP that stay around 50%.", "Nevertheless, forecasting results from the three cost-oriented loss models are biased compared to the benchmark model.", "This can be attributed to the asymmetric cost-oriented loss functions.", "In other words, the cost-oriented forecasting models impose an unbalanced economic penalty for forecasting errors with the same magnitude but with opposite signs.", "Specifically, the same magnitude of FEP in the negative direction causes 
higher FEPC than if it is a positive deviation.", "In this case, cost-oriented forecasting models are trained to be over-forecasting as opposed to under-forecasting.", "Thus, the forecasting errors from the three cost-oriented models generally have higher OFP and lower UFP than the benchmark model, which can be further verified in Table REF ." ], [ "Forecasting Results from ANN", "Fig. REF shows the forecasting performance of ANN with the four different loss functions in terms of MFEPC, MAPE, OFP, and UFP.", "Similarly, the hourly model using ANN obtains the lowest economic cost at all hours among the four models.", "Compared to MLR, the MFEPC margin between the hourly model and the other loss models further increases.", "Table REF shows that the MFEPC from the hourly, daily, and linear models improves by 13.74%, 2.83%, and 1.71% with respect to the benchmark model, respectively.", "The additional economic advantage can be largely attributed to the lower MAPE obtained by the ANN models, as shown in Fig. REF .", "In other words, by improving the forecasting accuracy, the ANN models obtain additional economic benefits.", "Figure: Forecasting performance of four loss functions using the ANN model evaluated by metrics: (a) MFEPC, (b) MAPE, (c) OFP, (d) UFP.", "Table: Daily averaged performances of four ANN models.", "For both the MLR and ANN cases, despite having higher forecasting errors, all three cost-oriented models outperform the benchmark models economically.", "Among the cost-oriented loss models, the hourly models achieve the highest economic gains.", "Due to the unbalanced loss functions, the forecasting errors generated from the cost-oriented loss models are more biased towards positive deviations compared to the benchmark models." ], [ "Conclusions", "This paper proposes a generalized cost-oriented load forecasting framework that is applicable to various load forecasting applications.", "The proposed framework enables the forecasting model to minimize the actual economic cost caused by forecasting errors.", "In the test cases, a heuristic model that combines the DAED and IPB problems is used to generate loss data.", "Three differentiable cost-oriented loss functions, namely the hourly loss function, the daily loss function, and the linear loss function, are produced from the loss data and are then integrated with MLR and ANN to form the cost-oriented forecasting models.", "The forecasting results on the test dataset show promising performance for the cost-oriented models.", "Particularly, the ANN model integrated with the hourly loss function outperforms all other models in terms of the economic benefits, with up to 13.74% improvement compared with the benchmark model trained with the traditional MSE loss function.", "Meanwhile, our test case demonstrates the feasibility of integrating customized cost-oriented loss functions with various types of load forecasting models." 
], [ "Appendix", "Here, we present the DAED and IPB combined model that is used to quantify the economic costs associated with load forecasting errors.", "By comparing the total cost difference between both problems, the economic value (a.k.a.", "loss) of forecasting errors is determined.", "The underlying mathematical optimization problems are formulated respectively as follows:" ], [ "DAED model", " where $a_j$ , $b_j$ and $c_j$ denote the generation cost coefficients and $p_{ji}$ represents the power output of the generator $j$ in hour $i$ .", "$M_{k}^{I}$ represents the set of generators located at node $k$ , $\\hat{y}_{ki}$ indicates the forecasted load attached to node $k$ in hour $i$ and $f_{\\ell {i}}$ corresponds to the DC power flow in the defined mapping set $A(\\ell ,k)$ with respect to the line $l$ and the node $k$ .", "$\\bar{p_{j}}$ is the maximum generation capacity for generator $j$ .", "$R_{j}^{U}$ and $R_{j}^{D}$ denote the ramp-up and ramp-down limits of unit $j$ , respectively, and $p_{j}^{ini}$ is the initial output of the generator $j$ at the beginning of the optimization problem.", "$B_{\\ell }$ denotes the absolute value of susceptance of line $\\ell $ and $\\delta _{ki}$ stands for the voltage angle of node $k$ in hour $i$ .", "Generally, the DAED model utilizes the hourly forecasted load $\\hat{y}_{i}$ to generate the economic dispatch schedule for system operations for the next day.", "Here we use point forecasted load, a commonly applied approach by system operators in the DAED problem.", "If a stochastic approach is used, the model can be easily transformed to a stochastic model when probabilistic load is considered.", "Eq.", "(REF ) indicates that the objective function for the DAED model is to minimize the total generation costs of all units.", "Using DC power flow, the power flow equations are incorporated via (), () and ().", "Eqs.", "() to () enforce the generation output and ramping limits.", "Lastly, () limits the range of the voltage angles, and () defines the reference point for the power flow computation." 
], [ "IPB model", " where $C_{\\xi }^{+}$ and $C_{\\xi }^{-}$ , $U_{\\xi {i}}^{+}$ and $U_{\\xi {i}}^{-}$ denote the prices for positive and negative balancing services of online units and the charge and discharge power for the BESS unit $\\xi $ in hour $i$ , respectively.", "$M_{k}^{\\Xi }$ indicates the set of BESS located at node $k$ and $y_{ki}$ is the actual load at node $k$ in hour $i$ .", "$RD_{ji}$ , $RU_{ji}$ denote the down/up reserve capacity for unit $j$ in hour $i$ .", "$\\tilde{p}_{ji}$ is the adjusted generation output in the IPB process for unit $j$ in hour $i$ .", "The BESS discharge and charge processes are bounded by $\\bar{U}_{\\xi }^{+}$ and $\\bar{U}_{\\xi }^{-}$ .", "The binary variables $v_{\\xi {i}}^{+}$ and $v_{\\xi {i}}^{-}$ in constraint () ensure that the BESS $\\xi $ cannot charge and discharge simultaneously.", "The variable $e_{\\xi {i}}$ denotes the energy stored in BESS $\\xi $ and $E_{\\xi }^{ini}$ is the initial energy stored in the unit.", "When it comes to the intra-day process, the real-time load is likely to deviate from the day-ahead forecasts.", "Thus, the IPB process is necessary to maintain the system balance.", "The maximum balancing capacity, i.e., the sum of up/down reserve capacity and BESS discharge/charge capacity, is set to be larger than the largest hourly load deviation.", "As can be seen in the objective function (REF ), apart from the real-time generation cost, the costs of using the BESS services have to be included in the objective function.", "Meanwhile, the optimal power output of online units shall be adjusted according to the actual load, as shown in (), while satisfying the power flow equations (REF ).", "Additionally, the energy storage constraints with charge/discharge constraints for each BESS from () to () shall be included in the optimization problem.", "It should be noted that the constraints from () to () are still required to be included in the IPB optimization problem.", "With the formulated IPB problem, the real-time operation cost in hour $i$ can be determined as $C(y_i)$ .", "Note that if the forecasting load is exactly the actual load in real-time, there is no need to adjust the power output nor utilizing BESS in the IPB process.", "Thus, using the DAED model alone, we can achieve the ideal cost $C(y_i^*)$ for a particular system.", "By comparing the costs between the ideal cost and the costs associated with multiple scenarios gained from the IPB model, the economic costs related to forecasting errors can thus be quantified." ] ]
2107.01861
[ [ "A contextual analysis of multi-layer perceptron models in classifying\n hand-written digits and letters: limited resources" ], [ "Abstract Classifying hand-written digits and letters has taken a big leap with the introduction of ConvNets.", "However, on very constrained hardware the time necessary to train such models would be high.", "Our main contribution is twofold.", "First, we extensively test an end-to-end vanilla neural network (MLP) approach in pure numpy without any pre-processing or feature extraction done beforehand.", "Second, we show that basic data mining operations can significantly improve the performance of the models in terms of computational time, without sacrificing much accuracy.", "We illustrate our claims on a simpler variant of the Extended MNIST dataset, called Balanced EMNIST dataset.", "Our experiments show that, without any data mining, we get increased generalization performance when using more hidden layers and regularization techniques, the best model achieving 84.83% accuracy on a test dataset.", "Using dimensionality reduction done by PCA we were able to increase that figure to 85.08% with only 10% of the original feature space, reducing the memory size needed by 64%.", "Finally, adding methods to remove possibly harmful training samples like deviation from the mean helped us to still achieve over 84% test accuracy but with only 32.8% of the original memory size for the training set.", "This compares favorably to the majority of literature results obtained through similar architectures.", "Although this approach gets outshined by state-of-the-art models, it does scale to some (AlexNet, VGGNet) trained on 50% of the same dataset." ], [ "Introduction", "The different possible architectures of a neural network play a key role in achieving success with them.", "There is a plethora of different neural network methods and architectures that are used in the literature, Convolution and Capsule based Neural Networks being on the basis of the current state of the art in computer vision [1].", "Just over a span of a few years, the top-5 image classification accuracy over the ImageNet dataset has increased from 84% [2] to 95% [3], [4], using deeper networks with rather small receptive fields [5].", "However, training or using such complex models on less capable machines like a Raspberry Pi with limited memory and computational power is not trivial because of the extreme number of floating-point operations necessary.", "This paper investigates the impact of vanilla shallow and deeper neural networks models with basic data mining and varying hyper-parameters on building a strong digit and letter classifier.", "We aim to find the best trade-off between the memory, computational time needed and accuracy for a multi-layer perceptron model.", "We show that deeper multi-layer perceptron models with regularization mechanisms are much better than the other tested in this paper and performance can be even increased with data mining techniques like PCA (Principal Component Analysis) [6].", "This is to be expected as there seems to be a general rule that deeper is better and other results in this area have also underscored the superiority of deeper networks [7] in terms of accuracy and/or performance.", "In addition, we are also going to test and compare performances of the MLP models with other popular non-neural-network classifiers from the literature to justify the use of neural networks for this task.", "We also briefly compare these results with state-of-the-art 
classifiers on the same dataset.", "The MNIST dataset has become a standard benchmark for learning, classification and computer vision systems.", "Contributing to its widespread adoption are the understandable and intuitive nature of the task, the relatively small size and storage requirements, and the accessibility and ease-of-use of the database itself [8].", "However, in 2017, EMNIST [8] was introduced as an extended MNIST dataset containing not only hand-written digits, but also all the letters in the English alphabet.", "We are going to encapsulate our experiments using a known variant of the EMNIST dataset - Balanced EMNIST: out of 62 possible classes in EMNIST, we choose only 47.", "The motivation behind this is the fact that there are 15 letters for which it is confusing to discriminate between upper-case and lower-case.", "For the following letters, the labels were merged: C, I, J, K, L, M, O, P, S, U, V, W, X, Y, Z.", "This makes it more challenging to implement a well-rounded solution due to its higher complexity than the classic MNIST dataset, but also allows for richer comparisons among the methods implemented.", "We explore this dataset by using multiple techniques: PCA for dimensionality reduction and reconstruction of the images; deviation from the mean vectors for each class; deviation from the original when doing reconstruction for sample reduction.", "Other recent studies like [9] prove that sophisticated data mining methods (line-segment feature extraction) do work in reducing the dimensionality of this type of dataset.", "Nevertheless, we show that even basic methods can work well if they are used properly.", "We provide visualizations of our motives and claims through the data mining process for a better understanding of the outcome of this paper.", "There are 100k samples in the training set, 15.8k in the validation set and 15.8k in the test set.", "These share the same image structure and parameters as the original MNIST task, allowing for direct compatibility with all existing classifiers and systems.", "Furthermore, the data is decently balanced; in the training set, each label has between 2076 and 2175 samples, and in the validation set, each label has between 297 and 380 samples.", "The input representation consists of a vectorized version of a 28x28 image; each pixel value represents an entry in the input layer, so we will have 784 input units as the first layer.", "On disk, the training set amounts to approximately 86 MB; however, we have reduced it to 76 MB for this task by saving the images in numpy array format.", "The same transformation was done for the validation and test data beforehand.", "For all our experiments, we chose stochastic training over 100 epochs with the Adam optimizer and mini-batch training with 100 samples per batch.", "Furthermore, for the weight initialization, we always choose from the same distribution.", "The implementation of the neural networks is done in pure python with numpy for maximum computational efficiency; no frameworks were used here, everything was implemented from scratch.", "For the other models used for comparison reasons (Random Forest, Logistic Regression), we have used the sklearn framework.", "The motivation for this work is the abundance of research regarding convolution-based approaches in this field [10], [11], while the classic, standard MLP models have been rather forgotten.", "We want to show that these old approaches can still be a powerful tool in basic image classification at an 
exponential decrease in computation cost, aiming to combat this shortage of MLP research in computer vision.", "To sum it all up, there are three research questions that this paper is trying to answer through empirical work.", "First, how well can a pure fine-tuned MLP model classify hand-written digits and letters?", "Second, how much can we shrink the memory needed to store the training dataset using basic data mining techniques such that the accuracy performance doesn't drop significantly compared to training on the full unchanged dataset?", "Third, are other non-neural-network adaptive models capable of achieving similar or better performance than a fine-tuned MLP approach?", "We answer the first question by employing a standard grid search over the hyper-parameter search space and analyzing the impact of two regularization techniques to boost the generalization performance of the model.", "For the second question, we define a non-significant drop in test accuracy as having a $<1\%$ difference.", "We respond to this by employing classic feature selection, dimensionality reduction and sample reduction approaches, arriving at a definite answer (a 67.2% cutback).", "For the last question, we prove empirically that other models do not come close to the performance of a neural network in this computer vision task, which might be the reason why neural networks, in general, have taken over the field of image classification [11]." ], [ "Problem identification in baseline models", "We first tried a shallow architecture with a vanilla gradient descent optimizer with a learning rate (lr) of 0.1 for mini-batch (100 samples) stochastic training, 100 neurons on the hidden layer and the cross entropy Softmax error.", "We trained it over exactly 100 epochs; the results can be seen in figure 1.", "As we can observe in figure 1, the validation error is higher than the training error; moreover, the validation error keeps increasing over time while the training error keeps decreasing.", "We can also see that the accuracy keeps going up for the training dataset, but stagnates in the case of the validation set.", "The ideal situation that we want to achieve is for the validation error to be low, only slightly higher than the training error.", "This is a well-known problem in training Neural Networks, caused by $\textbf {overtraining}$ .", "In our view, to further elaborate on the matter, this is because the Neural Network at this stage has the sole goal of minimizing the training loss; it does not consider the validation set at all, which is just a tool for us to assess the performance of the model during training and make sure it is not overfitting.", "However, this won't be an issue if the network finds the relevant information and patterns in the data: if it does, the generalization will be good, hence the validation loss will continue to decrease.", "That would be ideal, but the network can also keep trying to decrease the training loss without any relevant plan, overfitting on the data and failing to generalize well; therefore, the validation error curve becomes a parabola-shaped convex figure.", "Figure: Accuracy and error curves on a baseline model on the balanced EMNIST dataset.", "In general, though, there are multiple issues that can cause this: The distribution of the data is not the same for training and validation - the validation set might not be representative of the one that you train on.", "This usually happens when the data is not shuffled or is unbalanced; as a solution, we can try shuffling the train and 
validation sets before training and also checking the distribution of the classes.", "This is not the case for our model, as the data is shuffled and balanced.", "The model fails to decipher $\textbf {general patterns}$ in the data, maybe due to the fact that it's too simple.", "If the task is difficult, we can't expect it to be solved just by a model with very few parameters.", "The training error would keep decreasing until premature convergence and the validation error might be very far away.", "The accuracy on the training set will keep increasing until stagnation, but the accuracy on the validation set will always be much lower and would possibly remain steady in an interval after some point.", "$\textbf {This looks to be the case for our algorithm}$ : it is pretty clear that it does not generalize well enough and it overfits on the training data, failing to find higher level patterns to succeed when queried with unseen data units.", "A solution would be trying a model with a higher number of parameters or regularization.", "The model is too complex and it overfits on the training data, causing the validation error to lag far behind or to stagnate while the training error keeps decreasing.", "After a certain point, it is possible for a model with a very high number of parameters to learn everything by heart, hence not really showing signs of improvement when queried with unseen data units.", "A solution would be applying overfitting-prevention methods like dropout or L1/L2 regularization.", "There are too many inputs for a simple model to handle and make sense of.", "This is pretty similar to the second bullet point, but it is obvious that our model, having only 100 neurons on the hidden layer, might not be powerful enough to assess 784 input units (as the images are 28x28).", "Solutions for this include partial feature selection done beforehand or trying a more complex model.", "For medium-sized datasets, we can get unlucky even after checking the distribution and balancing the data, and end up training our model on samples that are easy to learn while validating mostly on hard cases.", "This has happened to us in the past but it is generally very rare.", "The optimizer might not be suited for the problem or the final layer's activation is not suited for the problem.", "An optimizer based on GD and momentum might not be that great without a good learning rate: if it's too small, the model might fall into a local minimum and not generalize well enough for the validation set accuracy to get higher; if it's too high, the algorithm might diverge with a continuously increasing loss.", "We can clearly see a point of $\textbf {minimum}$ for the validation loss in figure 1; that's where the $\textbf {generalization}$ is usually at its $\textbf {max}$ , and we could basically stop at around epoch $\textbf {10}$ , because there is no real $\textbf {improvement}$ afterwards.", "In figure 1, we can also see that the error on the validation set increases at a very $\textbf {fast pace}$ but the accuracy on the validation set remains steady around 82%.", "Why isn't it also decreasing at a fast pace?", "Basically, the model still succeeds in classifying the labels correctly in most of the cases, but it is $\textbf {less and less sure}$ about it.", "If before the model classified a real \"one\" as a one with a confidence of 0.8, now it might classify it with a confidence of 0.55; that is still very high for a 47-label classification task, but it is a noticeable difference in loss.", "It is pretty clear that the model does not 
generalize well and it may overfit on the training data as well.", "The most probable cause of the poor performance is the simple architecture of the current model.", "That's why we are going to try varying the number of hidden neurons and the number of hidden layers and observe what changes.", "We expect a higher number of model parameters (more hidden neurons) with regularization techniques to be more suited for our problem.", "As a more solid baseline, we tried multiple shallow neural networks with 32, 64 and 128 neurons on the hidden layer.", "The optimizer used is Adam with a learning rate of 0.1, and as a loss function we use CrossEntropySoftmaxError.", "This set-up is a classic one for a normal shallow neural network used for multi-class classification.", "The motivation behind choosing a learning rate of 0.1 has multiple points: a higher learning rate with decay proved to be suited for the MNIST-style datasets [12], [13]; standard learning rate choices are 0.1, 0.01 and 0.001 - however, for only 100 epochs, a learning rate smaller than 0.01 would make the model too slow; the algorithm from figure 1 was trained using a learning rate of 0.1, so it is good to keep it for comparison reasons (if the algorithm does not diverge).", "Figure: Shallow design, 32 hidden neurons.", "Figure: Shallow design, 64 hidden units.", "Figure: Shallow design, 128 hidden units.", "Figure: 2 hidden layers, 128 neurons each, ReLU activations.", "Figure: 3 hidden layers, 128 neurons each, ReLU activations.", "The results (fig 2, 3, 4) show that the generalization gap gets worse and worse as we keep increasing the number of neurons.", "For $\textbf {32}$ neurons, the validation error stagnates but does not increase; this might be a consequence of the model having higher $\textbf {bias}$ and lower $\textbf {variance}$ when we have a lower number of hidden units.", "One other important thing to mention: although the $\textbf {32}$ -unit variant of the model does not suffer from overtraining as much as the other ones, the final and best validation accuracy stays at around $\textbf {0.8}$ .", "Putting that into perspective, the best validation accuracy is around $\textbf {0.825}$ for the $\textbf {64}$ -unit version and $\textbf {0.85}$ for the $\textbf {128}$ -unit version.", "So if we were to choose the best model among these, the $\textbf {128}$ -unit version, at the $\textbf {minimum point/inflexion point}$ where the validation error begins to rise, has the most potential.", "That's why the following two experiments were conducted, starting from the 128-hidden-unit shallow network and increasing the number of layers to 2 and 3.", "Again (fig 5, 6), increasing the complexity of the model does nothing but make the generalization gap higher and higher; the accuracy does not break the previous best of $\textbf {0.85}$ either.", "We also trained the models using a learning rate of $lr=\textbf {0.001}$ and the results were consistent; the graphs look almost identical.", "The neural network with 3 hidden layers has a lot of potential, but we need to fix the current issues by considering regularization techniques like $\textbf {dropout}$ - averaging multiple networks - or $\textbf {L1, L2 regularization}$ to $\textbf {lower the variance}$ of the model by penalizing the weights."
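For reference, a minimal pure-numpy sketch of the kind of MLP used in these baselines is given below. It trains a configurable ReLU network with plain SGD on the softmax cross-entropy loss; the actual experiments use the Adam optimizer, and the layer sizes, initialization and learning rate shown here are illustrative rather than the paper's exact configuration.

```python
import numpy as np

class MLP:
    """Minimal pure-numpy MLP with ReLU hidden layers and a softmax output,
    e.g. layer_sizes=[784, 128, 47] for the Balanced EMNIST baselines."""

    def __init__(self, layer_sizes, seed=0):
        rng = np.random.default_rng(seed)
        self.W = [rng.normal(0, np.sqrt(2.0 / m), size=(m, n))
                  for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
        self.b = [np.zeros(n) for n in layer_sizes[1:]]

    def forward(self, X):
        acts = [X]
        for i, (W, b) in enumerate(zip(self.W, self.b)):
            z = acts[-1] @ W + b
            acts.append(np.maximum(z, 0) if i < len(self.W) - 1 else z)   # ReLU on hidden layers
        logits = acts[-1] - acts[-1].max(axis=1, keepdims=True)           # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)                                 # softmax probabilities
        return acts, p

    def train_step(self, X, y_onehot, lr=0.1):
        acts, p = self.forward(X)
        n = X.shape[0]
        delta = (p - y_onehot) / n                                        # grad of mean cross-entropy wrt logits
        for i in reversed(range(len(self.W))):
            gW, gb = acts[i].T @ delta, delta.sum(axis=0)
            if i > 0:
                delta = (delta @ self.W[i].T) * (acts[i] > 0)             # backprop through ReLU
            self.W[i] -= lr * gW
            self.b[i] -= lr * gb
        return -np.mean(np.log(p[np.arange(n), y_onehot.argmax(axis=1)] + 1e-12))
```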
], [ "Dropout and Weight Penalty", "Dropout [14] is a way of preventing overfitting when training neural networks with a high number of model parameters.", "The main idea behind it is to randomly deactivate (drop) units along with their connections from a neural network during training.", "This way, the training stage is like using multiple neural networks that we average in the end by considering smaller weights for testing.", "Intuitively, this is like forcing the flow of the neural network to find other paths, considering subsets of features both in the feedforward stage - to reach the output - and in the backpropagation stage - for learning.", "The forward pass of a dropout layer takes the incoming activations $inputs$ , a keep probability $p$ and a boolean flag $evaluation$ : when $evaluation$ is true it returns $p \cdot inputs$ ; otherwise it initializes a random matrix $\textit {mask}$ with the same dimensions as the inputs and returns $(mask \le p) \cdot inputs$ (Dropout Layer Forward Propagation).", "The backward pass takes the layer $outputs$ and the incoming gradients $grads$ and returns $(outputs \ne 0) \cdot grads$ , so gradients only flow through the units that were kept (Dropout Layer Backpropagation).", "To further elaborate on dropout, when a new mini-batch comes in, we disable a portion of the neurons on each hidden layer (each unit being kept with probability $\textbf {p}$ ) and use this set-up for forward and back propagation.", "There are multiple ways to balance out the fact that we are deactivating units in our network: we can train it as described before and, only at the testing stage, scale the hidden units' activations by $\textbf {p}$ .", "Analogously, we could instead scale the activations during training by one over the probability that the neuron is included and leave the testing phase untouched.", "Another technique worth mentioning is decaying the dropout rate to 0, in the same way we do for the learning rate - starting with a large value that exponentially decays over time.", "This is very effective as a regularization technique because, as pointed out in the official paper [14], it is like training an exponential number of networks, all with different model parameters, which means each one overfits in different ways, so overfitting is much less of a problem for their averaged predictions.", "Moreover, this technique increases the average usefulness of each neuron: if the units that are important for the problem happen to be disabled, the network is forced to tune the other ones instead to decrease the loss function.", "One can find similarities between this regularization method and ensemble learning [15]: with dropout, we combine models with high variance into one with slightly higher bias but lower overall variance, much like bagging.", "Limitations of dropout consist in poor performance on convolutional layers, which are the building blocks of CNNs [16], [17], [18].", "Dropout shouldn't actually be used between convolution layers, but it is still very useful between dense layers, so this does not make it completely irrelevant in complex CNNs.", "The reason it does not work there is that the weight gradients of the convolution layers are averaged across the spatial dimensions, which tend to contain many highly correlated activations, and this interacts poorly with a different mask at every spatial location.",
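To make the forward and backward rules above concrete, here is a minimal NumPy sketch of such a dropout layer. It is an illustration only, not the implementation used for the experiments; in particular, the backward pass reuses the sampled mask rather than re-deriving it from the zeroed outputs, which is a small, assumed deviation from the pseudocode.

```python
# A minimal NumPy dropout layer following the rules described in the text
# (activations are scaled by the keep rate p at evaluation time; no
# "inverted" scaling during training).
import numpy as np

class DropoutLayer:
    def __init__(self, keep_prob=0.5, rng=None):
        self.p = keep_prob
        self.rng = rng or np.random.default_rng(0)
        self.mask = None

    def forward(self, inputs, evaluation=False):
        if evaluation:
            return self.p * inputs                      # scale activations at test time
        self.mask = self.rng.random(inputs.shape) <= self.p
        return self.mask * inputs                       # drop roughly a (1-p) fraction of units

    def backward(self, outputs, grads):
        # Gradients only flow through units kept in the forward pass; storing
        # the mask is slightly more robust than the "(outputs != 0)" test from
        # the pseudocode, since a kept unit can legitimately output zero.
        return self.mask * grads

# Usage on a dummy batch of 4 samples with 10 features each.
layer = DropoutLayer(keep_prob=0.75)
h = np.ones((4, 10))
out = layer.forward(h)                  # about 25% of the entries are zeroed
grad_in = layer.backward(out, np.ones_like(out))
print(out.mean(), grad_in.mean())       # both close to 0.75 on average
```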
"Weight penalty is another form of regularization in which we try to discourage the complexity of the model.", "It is important to note that a model overfits on the training data when it actually also $\textbf {learns the noise}$ present in that dataset.", "That's why the training error gets better and better while the validation error remains constant or gets worse.", "The goal with L1, L2 regularization, as with other regularization techniques, is to learn the general patterns in the data while ignoring the noise.", "The naming L1 and L2 comes from the L1 and L2 norms in mathematics.", "Given a vector $x \in R^n$ in the Euclidean space, the $\textit {p-norm}$ $||x||_p$ is defined as: $||x||_p = (\sum_i |x_i|^p)^{\frac{1}{p}}$ .", "It follows that the L1 norm incorporates a sum of absolute values and the L2 norm incorporates a sum of squares.", "Let's consider a general regression problem in which we use MSE as the loss function (where $y_i$ is the target value and $\hat{y_i}$ is the predicted output): $MSE = \dfrac{1}{N} \cdot \sum (y_i - \hat{y_i})^2$ .", "The model learnt by the network can have any form, but to illustrate the formulas in a simpler manner, let's consider that it is represented by a third-degree polynomial: $\hat{y_i} = f(x_i) = \theta _0 + \theta _1 x_i + \theta _2 x_i^2 + \theta _3 x_i^3$ .", "A problem: we have a classification dataset that can be discriminated well by a linear model, but the third-degree polynomial model might be too complex, fitting the points in the Euclidean space with curved decision boundaries that follow each exact class distribution.", "We do not want to overfit like that; instead, what L1 and L2 regularization do is penalize some weights, making them 0 or really close to 0, hence negligible.", "A successful L1 or L2 regularization would penalize the weights $\theta _2, \theta _3$ , transforming the classifier (before applying a squashing function) into approximately $\hat{y_i} = f(x_i) \approx \theta _0 + \theta _1 x_i$ (with $\theta _2, \theta _3 \approx 0$ ), which captures the main essence of the data without driving the validation loss higher over time.", "But how do we actually penalize the weights that way?", "And how do we know which weights to penalize?", "Well, that is the learning algorithm's job to discover; we just tell it that some weights need to be low by forcing the sum of absolute values or the sum of squares of the model parameters to be low.", "In the loss function definition, we add those sums as L1 or L2 penalties - corresponding to the L1, L2 norms in mathematics: $LossL1 = \dfrac{1}{N} \cdot \sum (y_i - \hat{y_i})^2 + \lambda \sum |\theta _i| $ $LossL2 = \dfrac{1}{N} \cdot \sum (y_i - \hat{y_i})^2 + \lambda \sum (\theta _i)^2 $ The $\lambda $ parameter is the regularization coefficient, which dictates how severe the penalization of the weights will be [14].", "If it is 0 then we recover the normal loss function, but if it is very large, then the majority of the weights will be pushed close to 0.",
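As a toy illustration of how these penalty terms enter the loss and its gradient, the NumPy sketch below fits the cubic regressor from the example above to data generated from a linear function; the data, the learning rate and the λ value are assumptions made only for this illustration.

```python
# L1/L2-penalized MSE for the toy cubic regressor f(x) = theta0 + theta1*x + theta2*x^2 + theta3*x^3.
import numpy as np

def predict(theta, x):
    powers = np.stack([np.ones_like(x), x, x**2, x**3], axis=1)   # design matrix [1, x, x^2, x^3]
    return powers @ theta, powers

def loss_and_grad(theta, x, y, lam=1e-2, penalty="l2"):
    y_hat, powers = predict(theta, x)
    residual = y_hat - y
    loss = np.mean(residual ** 2)
    grad = 2.0 / len(x) * powers.T @ residual
    if penalty == "l2":
        loss += lam * np.sum(theta ** 2)
        grad += 2.0 * lam * theta
    elif penalty == "l1":
        loss += lam * np.sum(np.abs(theta))
        grad += lam * np.sign(theta)          # subgradient at 0
    return loss, grad

# Toy data generated from a *linear* function: the penalty should push
# theta2 and theta3 towards zero during plain gradient descent.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 0.5 + 2.0 * x + 0.05 * rng.normal(size=200)

theta = rng.normal(size=4)
for _ in range(2000):
    _, g = loss_and_grad(theta, x, y, lam=1e-2, penalty="l1")
    theta -= 0.05 * g
print(np.round(theta, 3))   # theta2 and theta3 end up driven towards 0
```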
"The L1 and L2 regularizations serve the same purpose, but there are significant differences between them.", "The first thing that comes to mind is outliers: squaring something big makes it even bigger, which is why L2 regularization is not that robust to outliers; L1 is much preferred in this case.", "Second of all, L2 usually does not make the weights exactly 0 - it can get really close to that - but if we want to completely ignore some model parameters, L1 is much preferred; in this sense, L1 regularization also has built-in feature selection, as it will choose by itself which input features are relevant for solving the task (by assigning insignificant input features a zero weight).", "This is a double-edged sword though, because this way L1 may not learn complex patterns; L2 has the advantage from this point of view.", "Other notable differences include the fact that L2 regularization has a non-sparse solution while L1 regularization can have multiple sparse solutions.", "Comparing dropout and L1, L2 regularization, it is hard to say for sure which one would be better in general; it depends on the problem.", "But it is no secret that dropout does much more than the L1, L2 penalties: the fact that we are effectively training an exponential number of neural networks to solve the problem adds much more robustness to the model, although both approaches can make some model parameters small or completely negligible.", "Dropout and L1, L2 regularization can also be used in combination; there are papers that show success using that [14]." ], [ "Balanced EMNIST Experiments", "We want to study the implications of using Dropout and/or L1, L2 regularization over the baseline models presented in section 2, in order to obtain a new baseline model that we will use for our data mining experiments.", "For this, we first tested the dropout mechanic over a hyper-parameter search space to see the overall tendency of the results." ], [ "Dropout Experiments", "We used $\textbf {grid search}$ to identify the optimal values for the dropout keep rate, the learning rate and the type of activations.", "The base architecture has 128 neurons in each of the 3 hidden layers, and we have chosen to apply dropout to the hidden layers with two dropout layers: one that drops outputs from the first hidden layer and a second one that drops outputs from the second hidden layer.", "The motivation for this comes from our understanding that we don't need to apply dropout on every layer to get the best performance; a certain amount of regularization should suffice (in our view).", "We also decided to keep all the information from the input layer, as we do not want to lose important features from the images (a high keep rate of 0.8 can still work).", "Dropout keep probabilities of 0.25, 0.5 and 0.75 were tested.", "It is important to mention that we used dropout before the affine layers and ReLU, as recommended in the official paper [14].", "We have also tried a learning rate of 0.1 first, then 0.01 and then 0.5.", "Figure 7 clearly shows an improvement when it comes to generalization; the error curves look exactly like the ideal case: the validation curve sits directly above the training curve and follows it throughout, without fluctuations or unexpected increases.", "Figure: 3 hidden layers, 128 neurons each, ReLU activations, Dropout p=0.5, lr=0.1", "Figure: 3 hidden layers, 128 neurons each, ReLU activations, Dropout p=0.25, lr=0.1 (Divergence)", "However, we did record a $\textbf {divergence}$ of the algorithm for the dropout keep rate of 0.25 (fig. 8); it is important to show failed cases as well.", "We thought this was because of the learning rate being too high, but after changing it to 0.01 and 0.001 the model still diverged, therefore it seems we are losing $\textbf {too much information}$ by only keeping a quarter of the units.", "This is also illustrated in [14], where a similar scenario happens and a value between 0.4 and 0.8 is recommended for the dropout probability.", "Therefore, we can conclude that our architecture works well for relatively high dropout keep rates ($\ge 0.5$ ).", "This is expected - the authors of dropout also used a value of 0.5 in their experiments [14], though experimenting with this hyper-parameter is necessary.", "The results were very similar when using dropout keep rates of 0.5 and 0.75 with learning rates below 0.1, so it seems a higher learning rate is ideal for 
faster training.", "In fact, a larger learning rate is also recommended in [14], appendix A, section 2, suggesting a value $\\textbf {10}$ or $\\textbf {100}$ times bigger than the one used on a standard neural network without dropout.", "This is $\\textbf {intuitive}$ and it makes sense, as we do not have the same time to train every smaller neural network that results after dropout, we want to make the learning individually faster.", "A learning rate of 0.5 proved to be too large though as the validation error stagnates far above the train error, like in some of our baseline models.", "Differences between 0.5 and 0.75 as the dropout keep rate were seen in the final accuracies on the validation set, we got 85.2% using the higher probability and 82% with the probability of 0.5, so from this point of view, the higher keep rate would be ideal for this task." ], [ "Weight Penalty Experiments", "When it comes to L1 regularization, this one also really shows that it improves the generalization strength of the model, however, we need to be very careful with picking the right regularization parameters.", "Our experiments show that the model diverges for high regularization parameters like 0.01, but does relatively well with regularization parameters in the range (1e-3, 1e-2).", "Going below 1e-3 with an order of magnitude will not improve the baseline models suffering from the same problem.", "It's worth to note that a regularization parameter of 1e-3 worked best for us (fig 9), the model achieving almost 86% validation accuracy which is a threshold that no baseline model achieved, but, compared to dropout, the curves look more $\\textbf {noisy}$ .", "Moreover, the validation error here dropped below 0.5, which no other model that we tested managed to achieve until now.", "For L2 regularization, we decided to keep a learning rate of $\\textbf {0.1}$ for all experiments and focus on the regularization parameter change as this learning rate proved to be good for standard dropout and L1 regularization.", "The results are very similar to the L1 norm, however, the regularization parameter needed changes as we found that an optimal value would be 1e-2 to really improve the generalization gap present in the baseline models.", "Training with this regularization parameter for L2 norm yielded the same exact best validation performance (fig 7).", "We also tried 1e-3, previous best value for L1, but the curves looked the same as the baseline models (with a slightly lower generalization gap).", "The same happened when we set the regularization parameter to 1e-4.", "However, with 0.01 as regularization parameter, we got an interesting result where the validation error and accuracy follow the train loss and accuracy really closely (fig 10), the difference being less than 0.25 for error and 0.1 for accuracy.", "All in all, it seems that L1 and L2 regularization parameters differ from one order of magnitude.", "Figure: 3 hidden layers, 128 neurons each, ReLu activations, L1 Regularization λ=1e-4\\lambda =1e-4Figure: 3 hidden layers, 128 neurons each, ReLu activations, L2 Regularization λ=1e-2\\lambda =1e-2" ], [ "Hybrid Regularization", "We conducted extra experiments combining Dropout and Weight Penalty to see if we can get better results.", "$\\textbf {In theory}$ , we should select softer parameters for both these regularization techniques because, if applied simultaneously, there is always the threat of adding too much bias to the model - making too many assumptions about the data, killing its predictions.", 
"That's why, we expect out of the grid search hyper-parameter space, a high value of 0.75 for dropout keep probability and a small L1 regularization parameter of 1e-4 to work best.", "Figure: 3 hidden layers, 128 neurons each, ReLu activations, Dropout p=0.75, L1 Regularization, coef= 1e-5We tested only with L1 penalty as it will be equivalent in the performance with L2 with careful parameter setting.", "Again, the 0.1 learning rate was preserved.", "Of course, first, we tried with the previous best parameter settings, plus other combinations with 1e-2 and 1e-5 for L1 parameter, but in the end, as expected, the model with 0.75 keep probability for dropout and 1e-5 works best, achieving 85.7% accuracy on the high end and 85.2% on average after 50 epochs on the validation set.", "This is the best algorithm that we made with this kind of architecture (figure 11).", "The best one using only L1 regularization is really close, however, this has less noisy-smoother performance as seen in the diagrams.", "The results of the majority of the experiments conducted ar present in table 1.", "Table: Results of the experiments so far." ], [ "Summary", "All these experiments prove that regularization and dropout help to establish a better generalization performance over time, even though it seems that the model falls into $\\textbf {local minima}$ - especially around the point where the validation loss hits 0.5.", "Current state-of-the-art models for this dataset hit over 99% accuracy [13], but they do use special architectures with deeper learning models and good feature selection.", "They can take dozens of hours to days to train on the same dataset making them awkward to use on less capable hardware.", "It is interesting to point out that some of these methods, like AlexNet or VGGNet, do get around 81-83% accuracy when trained on half the dataset (around 50k samples) and GoogleNet gets around 88% [13].", "Therefore, for comparable performance, we can say that more sophisticated methods need only half the dataset to achieve similar performance to vanilla end to end models, due to so much better feature selection.", "On test dataset, our best model scored really close to the validation one, $\\textbf {84.83\\%}$ accuracy, which is stronger than the baseline results on this dataset present in [8] but meets the overall expectation from the literature - topping at around 85% validation accuracy and $\\textbf {91.5\\%}$ training accuracy.", "Moreover, the model took $\\textbf {4.8s}$ per epoch which is really fast on just an i7 CPU.", "It is important to keep in mind that similar performance on validation set can be obtained with precise $\\textbf {early stopping}$ without regularization, as our initial 3 layered neural net also scored high validation accuracy before stagnating, but in the long run (for higher epoch training), our tested methods of regularization will achieve higher accuracy on the test dataset." 
], [ "Applying Data Mining", "For such a non-convolutional architecture, the results are good enough and are in the expected range [19], [8].", "However, we do use a lot of memory: just for training we have a 2D array with 100k rows and 784 columns.", "On disk, with our actual numpy array representation, this amounts to approximately 76 MB of data just for training.", "With validation and test data, we reach roughly 100 MB.", "The goal is to make this size smaller, so that it fits on less capable hardware, and to make training faster without losing too much test accuracy.", "There are two intuitive ways this can be done: reducing the feature space or reducing the number of samples we feed into the algorithm.", "Of course, convolutions do in a way realize feature extraction, but our main target is not to generate new features but to explore whether all of our current ones are really necessary to obtain top accuracy or not." ], [ "Reducing the feature space", "That's why we have decided to run a PCA [6] on the training set and see how many principal components are needed to capture most of the variation in the data.", "The cumulative explained variance ratio [6] used to interpret that is shown in figure 12.", "Figure: PCA cumulative explained variance ratio", "This brings good news: the area below the graph is large, meaning we need far fewer dimensions to capture the variance of the whole training dataset.", "With only 10% of the total number of components, we are able to capture 91.81% of the variance.", "It is good practice to aim for 90% to 95% when doing dimensionality reduction with PCA, so if we project the dataset on the first 78 components we get a 90% reduction in the feature space.", "Plugging this into the best model we developed so far (the configuration from the last row of table 1), we actually get a slight increase in the final test accuracy to 85.08% (which is equivalent to 40 more samples out of 15.8k classified correctly).", "Moreover, we get a 46.8% decrease in the time needed to train the same neural network.", "The validation accuracy gets slightly higher in the end as well, with only 10% of the original features, as we can see in table 2.", "This reduction in dimensionality also decreases the memory necessary on disk by 64%, which is quite a step down in terms of size if we choose to only work with this feature space.", "We need to keep in mind, though, that the PCA transformation matrix also needs to be saved in memory - it is used every time we project a new data unit onto the new feature space.", "Table: Comparing multiple classifiers.", "In table 2 we can also observe the performance of other classifiers on this task.", "Of course, we can't really justify that a neural network is the right choice for this task if we don't try other types of classifiers.", "We mainly tried Random Forest, as it is usually a very powerful classifier that can rival neural networks on medium-sized tasks.", "Support vector machines and KNN were also tried, but the computational time necessary for them was so high that it is not really feasible to use them here.", "The accuracy obtained by the neural network is significantly higher than that of the other classifiers, which justifies its use.", "In fact, the performance of the Random Forest classifier with a variable number of estimators is more comparable to what a shallow vanilla neural network with 32 hidden neurons would achieve.", "Granted, it is important to mention that the time spent on training is significantly lower for the non-neural-network models on our machine, but this does not really justify the big loss in accuracy."
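A minimal scikit-learn sketch of the reduction just described is shown below; the random matrix stands in for the real 100k x 784 training array, and the component counts mirror the ones discussed above.

```python
# PCA dimensionality reduction: keep 78 components (~10% of the 784 pixels).
import numpy as np
from sklearn.decomposition import PCA

X_train = np.random.rand(1000, 784)          # stand-in for the 100k x 784 training matrix

pca = PCA(n_components=78).fit(X_train)
print("captured variance:", pca.explained_variance_ratio_.sum())

# The same fitted transformation must be reused for validation/test data,
# and the PCA object itself has to be stored alongside the reduced arrays.
X_train_reduced = pca.transform(X_train)
print(X_train_reduced.shape)                 # (1000, 78)

# Alternatively, let scikit-learn pick the smallest number of components
# that explains at least 90% of the variance:
pca90 = PCA(n_components=0.90, svd_solver="full").fit(X_train)
print("components needed for 90% variance:", pca90.n_components_)
```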
], [ "Reducing the sample size", "This part is trickier than the feature selection part, as large amounts of data are the most important ingredient of a neural network's success, so we should definitely expect a loss in test accuracy if we blindly get rid of training samples.", "However, for this specific dataset, as it comprises handwritten digits and letters, we can expect weird shapes for some specific classes, due to the fact that people write in different ways.", "These can be detrimental to a neural network because outliers can affect the gradient flow significantly.", "To observe this phenomenon, we plotted the mean vector for the first 10 classes and the 2 closest and furthest samples from this mean.", "We used the Euclidean distance to quantify the distance between two 2D arrays.", "The mean vectors (figure 13) look blurry, which is to be expected as we average the pixel values over multiple images.", "There is a clear difference, though, between the furthest samples of each class and the class mean vectors.", "In some cases (digit 3 or 7), the shapes look awkward even for a human to classify, so it is reasonable to question the relevance of these samples for training the models.", "Figure: Distance from the mean", "An idea of removing training samples according to their deviation from the mean comes naturally; however, we need to establish a certain threshold for each class, and anything above it gets excluded.", "To make a decision on that, we plotted the sorted average distance from the mean over all the classes (fig 14).", "We can see that the distance grows exponentially after roughly the 2000th sorted sample, so the idea of keeping only the closest 2000 samples from each class arises.", "This would mean a reduction of 6% in the training size, as we would now have only $47 \cdot 2000 = 94k$ samples, instead of 100k.", "Moreover, in this way, we can also perfectly balance the number of samples for each class during training.", "Doing that for our experiment (and retraining a PCA model on the new training dataset), we get 84.29% test set accuracy, which is of course slightly lower than our previous best result, but not by much.", "The validation accuracy we got also stayed in the same range: 85.2%.", "All in all, getting rid of 6 thousand images in this way did not hurt the performance of the model much, and the training time is comparable to the previous best models.", "We have also tested the best configuration on different sizes of the training set, reduced using the distance-from-the-mean method.", "The results are presented in table 3.", "Table: Results for the best model varying number of training samples.", "As we can see, if we use half the data through the minimum-deviation-from-the-mean method, we do get a 10% decrease in accuracy, which is a lot.", "However, at 94k and 82k samples the test accuracy is still over 82%, which is impressive.", "We can say that for a 6% decrease in the number of samples, we get comparable performance with the case in which we use all the training samples.", "This is not the only way to reduce the number of samples successfully.",
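The distance-from-the-mean pruning described above can be sketched in NumPy as follows; the random arrays and the small keep_per_class value are stand-ins so that the snippet runs on its own.

```python
# Keep only the `keep_per_class` samples closest to each class mean
# (Euclidean distance on the flattened images).
import numpy as np

def prune_by_mean_distance(X, y, keep_per_class=2000):
    keep_idx = []
    for c in np.unique(y):
        cls_idx = np.where(y == c)[0]
        mean_img = X[cls_idx].mean(axis=0)
        dists = np.linalg.norm(X[cls_idx] - mean_img, axis=1)   # distance to the class mean
        order = np.argsort(dists)[:keep_per_class]              # closest samples first
        keep_idx.append(cls_idx[order])
    keep_idx = np.concatenate(keep_idx)
    return X[keep_idx], y[keep_idx]

rng = np.random.default_rng(0)
X = rng.random((470, 784))              # stand-in for the 100k x 784 training matrix
y = rng.integers(0, 47, size=470)       # 47 balanced EMNIST classes
X_small, y_small = prune_by_mean_distance(X, y, keep_per_class=5)
print(X_small.shape)                    # at most 47 * 5 samples remain
```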
"Therefore, we have decided to compute the root mean square error, for each class, between the reconstructed samples and the original samples.", "This way, we can intelligently remove samples that are not a good reflection of the original data, because it can hurt the learning algorithm.", "We have plotted the sorted average RMSE over all classes (fig 15) to get an idea of the deviation from the original dataset.", "Figure: Deviation from the original - after removing 6k samples with the first methodWe can see that after around 1750 sorted samples, the deviation becomes more than 100 units and it grows exponentially after around 1900 samples.", "We tried multiple cut-off points for this as well and the results are present in table 4.", "Table: Results for the best model varying number of training samples.The results seem similar to when we were using only de deviation from the mean, they are a little bit better apart from the case when we reduced the training size to 50%.", "But with 10k samples less than the original size, we can get still over 84% test accuracy, which is impressive." ], [ "Floating-point operations short analysis", "In this subsection, we want to examine the number of floating-point multiplication operations that are necessary for a forward method in our best multi-layer perceptron models.", "The main architecture that we use has 3 hidden layers with 128 neurons each; an input layer of 78 nodes and a Softmax Crossentropy output layer of 47 nodes.", "We can obtain the number of floating-point multiplication operations by calculating the following expression: $78 \\cdot 128 + 128 \\cdot 128 + 128 \\cdot 128 + 128 \\cdot 128 + 128 \\cdot 47 =65,152$ .", "This number is very low compared to what a CNN would have as number of operations (because it has to keep the 2D image - we can't do PCA in this case and because of the convolution computational cost).", "Nowadays, CNNs number of floating-point operations easily reach into the millions, but it strongly depends on the kernel on each layer and the number of kernels in each layer.", "To just give a quick intuition to the reader, let's image that googlenet [3] had a sample from the Balanced EMNIST dataset as input: $28 \\cdot 28$ and just one channel, everything else remaining the same (size of kernel $7 \\cdot 7$ stride 2 padding 3, number of kernels for layer 1 is 64), then the number of multiplication operations needed for a forward pass through the first layer would be: $(\\dfrac{28-7+3+3}{2}+1) \\cdot (\\dfrac{28-7+3+3}{2}+1) \\cdot 7 \\cdot 7 \\cdot 64 = 614,656$ .", "Note that here we used the well known formula $\\dfrac{I-F+2P}{S}+1$ that defines the output shape after a convolution with kernel size $(F, F)$ , stride $S$ and padding $P$ .", "This is already one order of magnitude higher than in our case, for just one layer, which goes to show how expensive ConvNets really are compared to multi-layer perceptron models - vanilla neural networks." 
], [ "Related work", "Experiments with different variants of MLP in [19], [8] set a literature expectation for this task and dataset asserting the performance on different variations of MLP.", "In [8], they used a 3-layered neural network as well, however, they make use of the Online Pseudo-Inverse Update Method (OPIUM) [20] for the weights update.", "No hyper-parameter-tuning was conducted.", "The performance on the Balanced EMNIST dataset (testing set) obtained is 75.68% with statistical error 0.05%.", "This is clearly lower than our model but their main target was to provide an initial benchmark for further research.", "In [19], they apply Hardware-aware training algorithms for this task.", "However, the architecture of the multi-layer perceptron model is a basic one - 3-layered neural network which seems similar to ours (before any processing step was applied).", "Not many details about the hyper-parameters are provided in the paper.", "Their test accuracy peaks below 85%, which is what we can expect usually for this task, as we saw in our experiments as well.", "Other studies have applied MLP models to EMNIST variations: Contrasting Convolutional Neural Network (CNN) with Multi-Layer Perceptron (MLP) for Big Data Analysis [21].", "This study also compares the performance between a convolutional architecture and MLP.", "The dataset used is the balanced EMNIST for characters only (26 classes instead of 47).", "We tried reconstructing the experiments from this particular paper to test how our own models would fair in a potential comparison.", "However, we haven't managed to provide a meaningful analysis because of the poor documentation of the cited study.", "On the EMNIST original paper, it is defined that the dataset for this 26-class balanced EMNIST for characters only has 145.6k samples, in total.", "This is contradictory to the cited paper's experiments where they use only about 29k samples, in total.", "It seems the authors have sampled a portion of the whole dataset without clear instructions for replication.", "Their best MLP model got 89.47% after 202 epochs during training on all the dataset.", "Judging by the fact that the model has only a 5% increase in performance with almost half the number of classes to be predicted will lean in our favour, however, we should keep in mind that a much smaller amount of samples have been used for training in their case, so we can't really pronounce a definite statement on this matter.", "We should expect, though, that our algorithm will provide better accuracy performance as it has performed hyper-parameter tuning and data selection, which this study did not (for the MLP part).", "Interesting comparisons between the CNN and MLP can be observed in this study.", "The most impressive results here concern an analysis of the test accuracy after 15 epochs: while a CNN achieves over 90%, the MLP model gets only 31.43%.", "Out of curiosity, we have also interrogated the accuracy after 15 epochs of our best model (that obtained over 85% accuracy on test set) and we get 83.9%, which is decently close to the final one after 100 epochs.", "From a comparison point of view, this is not really that relevant because, again, we have more output classes, but we also have more samples to train on in contrast to the cited study.", "Line-segment Feature Analysis Algorithm [9] is a recent study that also focuses on hand-written digit and letter recognition and that also discusses PCA for the dimensionality reduction.", "However, its main contribution is a 
convolution-like feature extraction method that outperforms PCA in terms of final accuracy performance of a two machine learning models: KNN and SVM.", "The new algorithm extracts contours, lines, faces from the data and then identifies the types of line segments and sums them up [9].", "To extract features from line-segment information, LFA uses $3 x 3$ and $5 x 5$ filters.", "This is indeed an interesting study - we strongly consider implementing LFA to substitute the PCA part for a future work version of this paper.", "When dropout was introduced in 2014 as a way of tackling overfitting in deep neural networks [14], they also tested the effect on dropout on MNIST dataset.", "This paper also evaluates dropout on a plethora of applications, ranging from vision and speech recognition to computational biology, as a method of improving model generalization alongside drastically reducing overfitting.", "It provides great practical recommendations for conducting our own experiments that we used in this paper.", "Although not that relevant to directly compare results, it shows strong empirical results in favor of dropout, especially for MNIST dataset.", "However it does lack a theoretical underpinning that tackles the convergence of such a technique where its hyper-parameter vary.", "Dropout is still, to this day, relevant in fitting complex neural networks, but unfortunately, state-of-the-art models in vision recently adopted a residual convolutional architecture with other forms of regularization like BatchNorm [22] with rescaling and shifting that are simply better than dropout; although, granted, these appeared after the introduction of this technique." ], [ "Discussion and Conclusions", "Vanilla dense neural networks architectures suffer from overfitting on the balanced EMNIST dataset.", "To mitigate the problem, we tested a lot of regularized models using dropout and L1, L2 norms that clearly show a massive improvement in generalization strength of the models over time, if fine hyper-parameter tuning is done.", "However, the algorithm still falls in local minima as our experiments' figures showed.", "The literature's solution for this is to use CNN models that use convolutions to extract spatial features from the data merged with dense regularized layers to add non-linearity to the tensors floating within in order to boost the accuracy on this 47-labeled classification task.", "That is because the model sometimes might struggle to extract relevant information from an image having only a flattened vector of 784 units (or 78) as input; better feature extraction can be done this way and surely will improve the results.", "This has been done already on MNIST dataset ([23], [12]) and on EMNIST, with capsules, ([13]) achieve more than 99% accuracy.", "Our fine-tuned baseline model that uses no data mining beforehand gets a test accuracy of $\\textbf {84.83\\%}$ , which is solid for this dataset considering the given architecture.", "It was obtained using dropout of 0.75 keep probability on the first 2 hidden layers, plus L1 penalty with a coefficient of 1e-5 on each layer, in a 3 hidden layered, 128 units each neural network design.", "This model takes up 470s to train and the data size of disk necessary for training takes up 76 MB.", "With a simple PCA, we deducted that we do not need all 784 dimensions for a good classification, changing the number of features from 784 to 78 actually boosted the test accuracy of the same model to $\\textbf {85.08\\%}$ .", "This newer model needs 250s to train 
(46.8% decrease) and only 28 MB (63% decrease) on disk for the training dataset.", "Finally, using the deviation from the mean and the reconstruction error obtained through the PCA inverse transformation to intelligently remove samples from the training set, we get comparable performance ($\textbf {84.02\%}$ ) with a further 10.7% decrease in terms of training data.", "This last model needs a little less than 250s to train, but only 25 MB of memory for the training dataset, which represents 32.8% of the total original training data size, roughly a third.", "This validates the importance of data mining in machine learning tasks.", "The following abbreviations are used in this manuscript: Table: NO_CAPTION" ], [ "Funding", "This research project did not receive any external funding." ], [ "Availability of data and material", "The dataset is publicly available here: link (EMNIST Balanced dataset)." ], [ "Code availability", "The computer code is available at: link.", "Step-by-step instructions to replicate the environment and run it properly are also present there.", "The experiments can already be visualized in the main Jupyter notebook present in the mlp package." ] ]
2107.01782
[ [ "Provable Convergence of Nesterov's Accelerated Gradient Method for\n Over-Parameterized Neural Networks" ], [ "Abstract Momentum methods, such as heavy ball method~(HB) and Nesterov's accelerated gradient method~(NAG), have been widely used in training neural networks by incorporating the history of gradients into the current updating process.", "In practice, they often provide improved performance over (stochastic) gradient descent~(GD) with faster convergence.", "Despite these empirical successes, theoretical understandings of their accelerated convergence rates are still lacking.", "Recently, some attempts have been made by analyzing the trajectories of gradient-based methods in an over-parameterized regime, where the number of the parameters is significantly larger than the number of the training instances.", "However, the majority of existing theoretical work is mainly concerned with GD and the established convergence result of NAG is inferior to HB and GD, which fails to explain the practical success of NAG.", "In this paper, we take a step towards closing this gap by analyzing NAG in training a randomly initialized over-parameterized two-layer fully connected neural network with ReLU activation.", "Despite the fact that the objective function is non-convex and non-smooth, we show that NAG converges to a global minimum at a non-asymptotic linear rate $(1-\\Theta(1/\\sqrt{\\kappa}))^t$, where $\\kappa > 1$ is the condition number of a gram matrix and $t$ is the number of the iterations.", "Compared to the convergence rate $(1-\\Theta(1/{\\kappa}))^t$ of GD, our result provides theoretical guarantees for the acceleration of NAG in neural network training.", "Furthermore, our findings suggest that NAG and HB have similar convergence rate.", "Finally, we conduct extensive experiments on six benchmark datasets to validate the correctness of our theoretical results." 
], [ "Introduction", "Momentum optimization methods are a central tool for solving optimization problems in numerous areas such as machine learning, signal processing[1], and control.", "Introduced by Polyak in 1964[2], the momentum idea has been adopted by many practical optimization methods to speed up the training process, such as Adam[3], AMSGrad[4], and AdaBound[5].", "The two most popular momentum methods are Heavy-ball (HB)[2] and Nesterov's accelerated gradient (NAG)[6].", "These two methods have achieved great success in convex task.", "For minimizing quadratic strongly convex objective function, HB method is able to achieve linear convergence rate globally, which attains an acceleration compared to gradient descent method.", "However, Lessard et al.", "[7] showed that HB method may fail to converge for some strongly convex objective functions.", "In 1983, Nesterov [6] proposed the NAG method and proved its convergence with a complicated algebra trick.", "Compared with HB method, NAG provably attains a global optimal convergence rate for convex tasks in terms of oracle complexity[8].", "On the other hand, momentum methods are widely used in optimizing neural network[9][10][11].", "Modern architectures of neural network involve tremendous parameters [12][13].", "In general, neural networks act in an over-parameterized regime, where the number of parameters is much higher than the number of training data.", "With the non-smooth activation functions, the landscape of the objective of the neural networks is highly non-convex and non-smooth.", "It is NP-hard to obtain the global minimum for non-convex problems.", "However, empirical results showed that first-order optimization methods are capable to achieve near zero training loss for neural networks with random initialized weights[14].", "Over-parameterization is widely regarded as the explanation for this mysterious phenomenon, which allows the neural network to fit all the training data.", "Recently, some researches found that gradient descent can find the global minima with a linear convergence rate for a two layer neural network with ReLU activations [15].", "They found the convergence is closely related to a Gram matrix that only depends on the architecture and the initialization of the neural networks.", "Besides, they proved that the training iterates is close to the initialization.", "This phenomenon is also called \"lazy training\"[16], where the model can be approximated with its linearization around the initialization.", "Based on the over-parameterized assumption, there are some theoretical results on the momentum optimization methods.", "From discrete perspective on the optimization of a two layer fully-connecte neural network, Wang et al.", "[17] proved that HB method can achieve a linear convergence to a small controllable error and attain a acceleration comparing to the GD method.", "From continuous view, Bu et al.", "[18] found a similar result for HB method on the same setting.", "Besides, they proved that NAG with a time-varying momentum coefficient converges at a sublinear rate, which converges slower than HB method.", "However, Shi et al.", "[19] found that NAG with constant momentum coefficient convergences faster than HB method on convex objective.", "This raises questions for NAG with constant momentum coefficient for training non-smooth and non-convex neural network.", "Does it converge to the global minima?", "Does it attain a faster convergence rate comparing to GD and HB method?", "In this paper, we try to 
answer these questions based on the framework that optimizing a two-layer fully connected ReLU neural network with a fixed output layer, which is widely studied in previous works [15][20][21][17].", "Beside, we only focus on analyzing NAG with constant momentum coefficient, where we refer it as NAG for simplicity.", "Our contributions can be summarized as: We theoretically proved that NAG converges linearly to a small controllable error, which can be decreased to zero with infinitely wide hidden layer.", "We proved that NAG with constant momentum coefficient converges faster than GD [15][22].", "Comparing with HB [17], NAG achieves a similar convergence rate." ], [ "Related Work", "Nesterov's accelerated method: For convex optimization, Nemirovski and Yudin [23] proved the lower bound of the convergence rate for the deterministic first-order methods.", "In 1983, Nesterov proposed an first-order accelerated method, and proved it could achieve this lower bound performance for smooth convex problems with an estimate sequences technique [6].", "From a continuous time viewpoint, Su et al.", "[24] considered the limit of Nesterov's accelerated method and derived its corresponding ordinary differential equations (ODEs).", "Shi et al.", "[19] further derived a more accurate high-resolution ODE of Nesterov's accelerated method that help to explain the difference between Nesterov's accelerated method and Heavy-ball method.", "Lessard et al.", "[7] developed an framework for analyzing the convergence and stability of Nesterov's accelerated method from control theory.", "For non-convex Convergence of Over-parameterized Neural Networks: As observed in [14], over-parameterized neural networks are capable to attain low training error with first-order methods, even though its landscape is highly non-convex and non-smooth.", "Many efforts have been devoted to bring deep insights to this phenomenon.", "Du et al.", "[15] studied the dynamics of the outputs of a randomly initialized two-layer fully connected neural network.", "They proved that the gradient descent can linearly converge to the global optimum by exploiting the eigenvalues of a Gram matrix, which is determined by the architecture of the neural network.", "Li and Liang [20] proved that the Gram matrix stays stable when the two-layer neural network is over-parameterized.", "Wu et al.", "[22] improved the upper bound of the learning rate compared to Du et al.", "[15], which leads to a faster convergence rate.", "Besides, they extended to adaptive gradient method and proved its convergence.", "Wang et al.", "[17] generalized Du et al.", "[15] to heavy-ball method and proved its convergence rate beyond gradient descent method.", "Jacot et al.", "[25] further analysed deep fully neural network with infinite width, and found that the convergence of gradient descent is determined by a fixed positive definite kernel called Neural Tangent Kernel (NTK).", "This result was further extended to convolutional[26], residual[27] and graph neural networks[28] with analogous kernel.", "Another important line of theoretical results applied optimal transport theory to analyse the convergence for over-parameterized neural networks trained with (stochastic) gradient descent[29][30], which approximate the evolution of the parameters by an distributional dynamics." 
], [ "Notation", "We use lowercase, lowercase boldface and uppercase boldface letters to denote scalars, vectors and matrices, respectively.", "We use $[n]$ to denote the set $\\lbrace 1, 2, \\cdots , n\\rbrace $ .", "For any set $S$ , we use $|S|$ to denote its cardinality.", "For any vector $\\mathbf {x}$ , we denote $\\Vert \\mathbf {x}\\Vert $ as its $\\ell _2$ norm.", "We denote $\\langle \\cdot , \\cdot \\rangle $ as the Euclidean inner product.", "For any matrix $\\textbf {X}$ , we denote $\\Vert \\textbf {X}\\Vert $ and $\\Vert \\textbf {X}\\Vert _F$ as its spectral norm and Frobenius norm.", "We use $\\lambda _{max}(\\textbf {X})$ and $\\lambda _{min}(\\textbf {X})$ as the largest eigenvalue and smallest eigenvalue of $\\textbf {X}$ , respectively.", "We denote the condition number of matrix $\\textbf {X}$ as $\\kappa (\\textbf {X})$ .", "We denote $N(0, \\textit {I})$ and $unif\\lbrace -1, 1\\rbrace $ as the standard Gaussian distribution and Rademacher distribution, respectively.", "We denote $\\mathbb {I}\\lbrace \\omega \\rbrace $ as the indicator function, where it outputs 1 as the event $\\omega $ is true and 0 otherwise.", "We use $\\mathbf {D}= \\lbrace \\mathbf {x}_i, y_i\\rbrace _{i=1}^n$ as the training dataset, where $\\mathbf {x}_i \\in \\mathbb {R}^d$ and $y_i \\in \\mathbb {R}$ denote the features and label of the $i$ -th sample, respectively.", "We use $\\mathcal {X}=\\lbrace \\mathbf {x}: (\\mathbf {x}, y) \\in \\mathbf {D}\\rbrace $ and $\\mathcal {Y}= \\lbrace y : (\\mathbf {x}, y)\\in \\mathbf {D}\\rbrace $ as the feature space and the target space, respectively." ], [ "Problem Setting", "In this subsection, we first introduce the architecture of the neural network, which follows the previous works[15],[21],[17].", "Then we introduce the update procedures of three optimization method: GD, HB and NAG.", "To start with, we consider a two-layer fully connected neural network $f: \\mathbb {R}^d \\rightarrow \\mathbb {R}$ as follows: $f(\\mathbf {W}, \\textbf {a};\\mathbf {x})= \\frac{1}{\\sqrt{m}} \\sum _{r=1}^m a^r \\sigma (\\langle \\mathbf {w}^r, \\mathbf {x}\\rangle ),$ where $\\mathbf {x}\\in \\mathbb {R}^d$ is the input feature, $\\mathbf {W}=\\lbrace \\mathbf {w}^1, \\mathbf {w}^2, \\cdots ,\\mathbf {w}^m\\rbrace \\in \\mathbb {R}^{m \\times d}$ is the weight matrix of the hidden layer, $\\textbf {a} \\in \\lbrace a^1,a^2,\\cdots , a^m\\rbrace \\in \\mathbb {R}^m $ is the output weight and $\\sigma (z)=z \\cdot \\mathbb {I}\\lbrace z \\ge 0\\rbrace $ denotes the ReLU activation function.", "The parameters follow the random initialization scheme as $\\mathbf {w}^r \\sim N(0, \\textit {I})$ and $a^r \\sim unif\\lbrace -1,1\\rbrace $ for any $r \\in [m]$ .", "We consider the empirical risk minimization problem with the squared loss $\\mathcal {L}$ on the training set $\\mathbf {D}$ as: $\\mathcal {L}(\\mathbf {W},\\mathbf {a}) = \\frac{1}{2}\\sum _{i=1}^n (y_i - f(\\mathbf {W}, \\textbf {a};\\mathbf {x}_i))^2.$ Following the settings in [15],[21],[17], we optimize the parameters $\\mathbf {W}$ with fixed output layer $\\mathbf {a}$ .", "This problem is a non-convex and non-smooth problem.", "Note that when the hidden layer is fixed and optimize the output layer, the problem turns to a convex task.", "Next we introduce the details of three commonly used optimization methods: GD, HB, and NAG.", "In this work, we mainly focus on the deterministic optimization algorithms.", "Gradient descent is the most well-known method for its simplicity and efficiency.", "When 
apply gradient descent on the objective $\\mathcal {L}$ , the parameters of the first layer take the following updating rule: $\\mathbf {W}_{t+1} = \\mathbf {W}_t - \\eta \\frac{\\partial \\mathcal {L}(\\mathbf {W}_t, \\mathbf {a})}{\\partial \\mathbf {W}_t},$ where $\\eta > 0$ is the step size and $\\frac{\\partial \\mathcal {L}(\\mathbf {W}_t, \\mathbf {a})}{\\partial \\mathbf {W}_t}$ is the gradient with respect to the parameters for the t-th iteration.", "Combined with the specific objective function (REF ), the gradient for the r-th weight vector can be calculated as: $\\frac{\\partial \\mathcal {L}(\\mathbf {W},\\mathbf {a})}{\\partial \\mathbf {w}^r} \\!=\\!", "\\frac{1}{\\sqrt{m}}\\sum _{i=1}^n (f(\\mathbf {W}, \\textbf {a};\\mathbf {x}_i) \\!-\\!", "y_i) a^r \\mathbf {x}_i \\mathbb {I}\\lbrace {\\langle \\mathbf {w}^r, \\mathbf {x}_i \\rangle \\!\\ge \\!", "0}\\rbrace .$ Then we introduce the momentum optimization methods.", "Compared with gradient descent, momentum methods apply the history of the past gradients to achieve acceleration.", "The most common used momentum methods are Heavy-ball and Nesterov accelerated gradient.", "Heavy-ball starts from the initial parameters $\\mathbf {W}_{-1} = \\mathbf {W}_0$ , it updates the parameter as: $\\mathbf {W}_{t+1} = \\mathbf {W}_t + \\beta (\\mathbf {W}_t - \\mathbf {W}_{t-1}) - \\eta \\frac{\\partial \\mathcal {L}(\\mathbf {W}_t, \\mathbf {a})}{\\partial \\mathbf {W}_t}, $ where $\\beta \\in [0, 1)$ is the momentum parameter.", "Nesterov accelerated gradient is an important development of momentum optimization methods, which achieves the optimal convergence rate in convex setting.", "In this paper, we focus on NAG with constant step size and momentum parameter.", "Given the initial parameters $\\mathbf {W}_0$ and $\\mathbf {V}_0 = \\mathbf {W}_0$ , NAG involves the following update procedures: ${\\mathbf {V}}_{t+1} &=& \\mathbf {W}_t - \\eta \\frac{\\partial \\mathcal {L}(\\mathbf {W}_t, \\mathbf {a})}{\\partial \\mathbf {W}_t} \\\\\\mathbf {W}_{t+1} &=& {\\mathbf {V}}_{t+1} + \\beta ({\\mathbf {V}}_{t+1}-{\\mathbf {V}}_{t}).$ Note that the above two steps can be reformulated in a single-variable form as the Heavy-ball method: $\\mathbf {W}_{t+1} &=& \\mathbf {W}_t - \\eta \\frac{\\partial \\mathcal {L}(\\mathbf {W}_t, \\mathbf {a})}{\\partial \\mathbf {W}_t} \\!+\\!", "\\beta (\\mathbf {W}_t - \\mathbf {W}_{t-1}) \\nonumber \\\\&-&\\beta \\eta (\\frac{\\partial \\mathcal {L}(\\mathbf {W}_t, \\mathbf {a})}{\\partial \\mathbf {W}_t} - \\frac{\\partial \\mathcal {L}(\\mathbf {W}_{t-1}, \\mathbf {a})}{\\partial \\mathbf {W}_{t-1}}).$ Compared with (REF ), NAG has an additional term $\\beta \\eta (\\frac{\\partial \\mathcal {L}(\\mathbf {W}_t, \\mathbf {a})}{\\partial \\mathbf {W}_t} - \\frac{\\partial \\mathcal {L}(\\mathbf {W}_t, \\mathbf {a})}{\\partial \\mathbf {W}_t})$ , which is called gradient correction [19].", "Next, we introduce an important Gram matrix that determines the convergence for gradient descent.", "From a functional perspective, the output of the neural network evolves via a Gram matrix: $\\mathbf {H}_t(\\mathbf {x}_i, \\mathbf {x}_j)\\!=\\!\\langle \\nabla _{\\mathbf {\\theta }} f(\\mathbf {\\theta }_t;\\mathbf {x}_i) ,\\!", "\\nabla _{\\mathbf {\\theta }} f(\\mathbf {\\theta }_t;\\mathbf {x}_j) \\rangle , \\forall \\; (i,j) \\in [n]\\times [n],$ where $\\mathbf {\\theta }$ denotes the parameters of the neural network.", "When the neural network is sufficiently over-parameterized, the matrix $\\mathbf {H}_t$ stays 
close to $\\mathbf {H}_0$  [25],[15],[21].", "For infinitely wide neural network, the matrix $\\mathbf {H}_0$ converges to a limiting matrix $\\bar{\\mathbf {H}}$ , which is also called Neural Tangent Kernel (NTK) [25].", "Therefore the parameters of the over-parameterized neural network trained by gradient descent remain close to the initialization.", "This phenomenon is also known as \"lazy training\" regime[16].", "As a consequence, the output of the neural network can be approximated by the first-order Taylor expansion on the initialization as: $f(\\mathbf {\\theta };\\mathbf {x}) \\approx f(\\mathbf {\\theta }_0;\\mathbf {x}) + \\langle \\nabla f(\\mathbf {\\theta }_0;\\mathbf {x}), \\mathbf {\\theta }- \\mathbf {\\theta }_0\\rangle , \\ $ where $\\mathbf {H}$ denotes the parameters of the neural network $f$ .", "For the specific neural network (REF ) and objective function (REF ), we have the corresponding Gram matrix $\\mathbf {H}$ as: $\\mathbf {H}_t(\\mathbf {x}_i, \\mathbf {x}_j)\\!=\\!\\frac{1}{m}\\!\\!\\sum _{r=1}^m \\langle \\mathbf {x}_i, \\mathbf {x}_j \\rangle \\mathbb {I}\\lbrace \\langle \\mathbf {w}_t^r, \\mathbf {x}_i \\rangle \\!\\ge \\!", "0 \\& \\langle \\mathbf {w}_t^r, \\mathbf {x}_j \\rangle \\!\\ge \\!", "0\\rbrace ,$ and its limiting can be calculated with the expected value: ${\\mathbf {H}}_{\\infty }(\\mathbf {x}_i, \\mathbf {x}_j) \\!\\!\\!\\!&=&\\!\\!\\!\\!", "\\mathbb {E}_{\\mathbf {w}\\sim N(0, \\textit {I})}[\\langle \\mathbf {x}_i, \\mathbf {x}_j \\rangle \\mathbb {I}\\lbrace \\langle \\mathbf {w}, \\mathbf {x}_i \\rangle \\ge 0 \\& \\langle \\mathbf {w}, \\mathbf {x}_j \\rangle \\ge 0\\rbrace ] \\nonumber \\\\\\!\\!\\!\\!&=&\\!\\!\\!\\!", "\\langle \\mathbf {x}_i, \\mathbf {x}_j \\rangle \\frac{\\pi - arccos(\\langle \\mathbf {x}_i, \\mathbf {x}_j \\rangle )}{2\\pi }.$ When the training dataset satisfies $\\mathbf {x}_i \\ne \\mathbf {x}_j$ for all $i \\ne j$ , the matrix ${\\mathbf {H}}_{\\infty }$ is positive definite[15].", "Next we introduce our main results." 
], [ "Main Theory", "In this section, we provide our main idea of proving the convergence of Nesterov accelerated method for two-layer full connected neural network.", "We analyze the dynamics of the residual error between predictions and labels, which is inspired by [15] and [17].", "Firstly, we give an intuitive view to explain why NAG works.", "Recently, many theoretical and empirical works have shown that the outputs of the over-parameterized neural network can be approximated by its first-order Taylor expansion around its initial parameters in the infinite width limit [16],[31] as (REF ).", "As a result, we can approximate $\\nabla _{\\mathbf {\\theta }} f(\\mathbf {H};\\mathbf {x})$ by taking the derivative on both sides of  (REF ) with $\\nabla _{\\mathbf {\\theta }} f(\\mathbf {\\theta };\\mathbf {x}) \\approx \\nabla _{\\mathbf {\\theta }} f(\\mathbf {\\theta }_0;\\mathbf {x}) \\nonumber .$ Applying Nesterov accelerated gradient, the predictions of the (t+1)-th iteration can be derived as: $&&f(\\mathbf {\\theta }_{t+1};\\mathcal {X})\\nonumber \\\\&\\approx &\\!\\!\\!\\!", "f(\\mathbf {\\theta }_0;\\mathcal {X}) + \\langle \\nabla _{\\mathbf {\\theta }} f(\\mathbf {\\theta }_0;\\mathcal {X}),\\mathbf {\\theta }_{t+1} - \\mathbf {\\theta }_0\\rangle \\nonumber \\\\\\!\\!\\!\\!\\!\\!&\\approx &\\!\\!\\!\\!\\!\\!", "f(\\mathbf {\\theta }_0;\\mathcal {X}) \\!+\\!", "\\langle \\nabla _{\\mathbf {\\theta }} f(\\mathbf {\\theta }_0;\\mathcal {X}), \\!\\mathbf {\\theta }_t \\!-\\!", "\\eta \\nabla _{\\mathbf {\\theta }} \\mathcal {L}(\\mathbf {\\theta }_t;\\mathcal {X}) \\!+\\!", "\\beta (\\!\\mathbf {\\theta }_t \\!-\\!", "\\mathbf {\\theta }_{t-1}\\!)", "\\nonumber \\\\&-&\\!\\!\\!\\!\\!", "\\eta \\beta (\\nabla _{\\mathbf {\\theta }} \\mathcal {L}(\\mathbf {\\theta }_t;\\mathcal {X}) -\\nabla _{\\mathbf {\\theta }} \\mathcal {L}(\\mathbf {\\theta }_{t-1};\\mathcal {X})) \\!-\\!", "\\mathbf {\\theta }_0 \\rangle \\nonumber \\\\\\!\\!\\!\\!\\!\\!&\\approx &\\!\\!\\!\\!\\!\\!", "f(\\mathbf {\\theta }_t;\\mathcal {X}) \\!\\!-\\!\\!", "\\eta \\mathbf {\\theta }_0\\!\\nabla _f\\mathcal {L}(\\!\\mathbf {\\theta }_t;\\mathcal {X})\\!\\!+\\!\\!", "\\beta \\big (\\!", "f(\\mathbf {\\theta }_t;\\mathcal {X}) \\!\\!-\\!\\!", "f(\\mathbf {\\theta }_{t-1};\\mathcal {X}) \\!\\big ) \\nonumber \\\\&-&\\!\\!\\!\\!\\!", "\\eta \\beta \\nabla _{\\mathbf {\\theta }} f(\\mathbf {\\theta }_0;\\mathcal {X})(\\nabla _{\\mathbf {\\theta }} \\mathcal {L}(\\mathbf {\\theta }_t;\\mathcal {X}) -\\nabla _{\\mathbf {\\theta }} \\mathcal {L}(\\mathbf {\\theta }_{t-1};\\mathcal {X}), \\nonumber $ where $f(\\mathbf {\\theta };\\mathcal {X}) = vec([f(\\mathbf {\\theta };\\mathbf {x})]_{\\mathbf {x}\\in \\mathcal {X}})$ .", "We denote the residual error vector for the whole dataset as $\\mathbf {\\xi }$ .", "As we use the square loss as the objective function $\\mathcal {L}$ , we have the residual error as: $\\mathbf {\\xi }_{t+1} \\!\\!\\!\\!\\!\\!&=&\\!\\!\\!\\!\\!\\!", "f(\\mathbf {\\theta }_{t+1};\\mathcal {X}) - \\mathcal {Y}\\nonumber \\\\\\!\\!\\!\\!\\!\\!&\\approx &\\!\\!\\!\\!\\!\\!", "\\mathbf {\\xi }_t \\!\\!-\\!\\!", "\\eta \\mathbf {H}_0\\mathbf {\\xi }_t + \\beta (\\mathbf {\\xi }_t \\!\\!-\\!\\!", "\\mathbf {\\xi }_{t-1}) \\!\\!-\\!\\!\\eta \\beta \\mathbf {H}_0(\\mathbf {\\xi }_t\\!\\!-\\!\\!\\mathbf {\\xi }_{t-1}).", "\\nonumber $ Reformulating the above equation, we have a linear dynamics with respect to the residual error as: $\\begin{bmatrix} \\mathbf {\\xi }_{t+1} \\\\ \\mathbf {\\xi }_t 
\\end{bmatrix} \\approx \\begin{bmatrix}(1\\!+\\!\\beta )(\\mathbf {I}_n\\!-\\!\\eta \\mathbf {H}_0) \\!&\\!\\beta (\\!-\\mathbf {I}_n\\!+\\!\\eta \\mathbf {H}_{0}) \\\\\\mathbf {I}_n \\!&\\!", "0\\end{bmatrix} \\begin{bmatrix} \\mathbf {\\xi }_{t} \\\\ \\mathbf {\\xi }_{t-1} \\end{bmatrix},$ where $\\mathbf {I}_n \\in \\mathbb {R}^{n\\times n}$ denotes the identity matrix.", "The above linear system is similar to the convex quadratic optimization case[7],[32].", "When the spectral norm of the coefficient matrix in (REF ) is less than one, the residual error can achieve linear convergence to zero with respect to the iterations.", "However, this approximation is based on the infinite width assumption.", "In this paper, we turn to a slightly mild assumption about the width m. Let $A_{ir} = \\lbrace \\exists \\mathbf {w}: \\Vert \\mathbf {w}- \\mathbf {w}_0^r\\Vert \\le R, \\mathbb {I}\\lbrace \\langle \\mathbf {w}, \\mathbf {x}_i \\rangle \\rbrace \\ne \\mathbb {I}\\lbrace \\langle \\mathbf {w}_0^r, \\mathbf {x}_i \\rangle \\rbrace \\rbrace $ for a constant $R > 0$ .", "Besides, we denote the set $S_i = \\lbrace r\\in [m]: \\mathbb {I}\\lbrace A_{ir}\\rbrace = 0\\rbrace $ and its complement $S_i^{\\perp } = [m] \\setminus S_i $ .", "By utilizing the intuition that mentioned in the infinite width assumption, we first derive the recursion formulation of the residual error.", "Lemma 1 For all $t \\in [T]$ , the following equation describes the dynamics of the residual error with NAG for objective  (REF ): $\\mathbf {\\xi }_{t+1} \\!\\!=\\!\\!", "\\mathbf {\\xi }_t \\!+\\!", "\\beta (\\mathbf {\\xi }_t \\!-\\!", "\\mathbf {\\xi }_{t-1}) \\!-\\!", "(1\\!+\\!\\beta )\\eta \\mathbf {H}_0\\mathbf {\\xi }_t \\!+\\!", "\\beta \\eta \\mathbf {H}_{0}\\mathbf {\\xi }_{t-1} \\!+\\!", "\\mathbf {\\psi }_t \\!+\\!", "\\mathbf {\\phi }_t$ where $\\mathbf {\\psi }_t= \\beta \\eta (\\mathbf {H}_{t-1} - \\mathbf {H}_0)\\mathbf {\\xi }_{t-1} -(1+\\beta )\\eta (\\mathbf {H}_t - \\mathbf {H}_0)\\mathbf {\\xi }_t$ and the i-th element of $\\mathbf {\\phi }_t$ is bounded by $|\\mathbf {\\phi }_t[i]| \\le \\frac{ sup_{j\\in [n]}\\!|S_j^{\\perp }|\\!\\sqrt{n}\\eta }{m}[(\\!2+4\\beta \\!", ")\\Vert \\mathbf {\\xi }_t\\Vert \\!+\\!", "3\\beta \\Vert \\!\\mathbf {\\xi }_{t-1} \\!\\Vert \\!+\\!2\\!\\!\\sum _{i=0}^{t-1}\\!\\beta ^{t+1-i}\\Vert \\mathbf {\\xi }_i\\Vert ]$ .", "The proof of Lemma 1 can be found in Appendix .", "Let $\\mathbf {z}_t = [\\mathbf {\\xi }_t; \\mathbf {\\xi }_{t-1}]$ denotes the augmented residual error vector at iteration t, the dynamics of $\\mathbf {z}_t$ can be reformulated as: $\\mathbf {z}_{t+1} = \\mathbf {M}\\mathbf {z}_t + \\mathbf {\\mu }_t,$ where $\\mathbf {\\mu }_t = [\\mathbf {\\psi }_t+ \\mathbf {\\phi }_t;0]$ and the coefficient matrix $\\mathbf {M}={\\small {\\begin{bmatrix}(1\\!+\\!\\beta )(\\mathbf {I}_n\\!-\\!\\eta \\mathbf {H}_0) \\!&\\!\\beta (\\!-\\mathbf {I}_n\\!+\\!\\eta \\mathbf {H}_0) \\\\\\mathbf {I}_n \\!&\\!", "0\\end{bmatrix}}}$ .", "Compared to (REF ), (REF ) replaces the fixed coefficient matrix with a time-varying matrix $M$ and has an extra term $\\mathbf {\\mu }_t$ .", "By using Cauchy-Schwarz inequality, we have: $\\Vert \\mathbf {z}_{t+1}\\Vert \\le \\Vert \\mathbf {M}\\Vert \\Vert \\mathbf {z}_t\\Vert + \\Vert \\mathbf {\\mu }_t\\Vert .$ It is easy to derive that $\\Vert \\mathbf {\\mu }_t\\Vert \\le \\Vert \\mathbf {\\psi }_t\\Vert +\\Vert \\mathbf {\\phi }_t\\Vert $ .", "Besides, we note that the bound of $\\mathbf {\\phi }_t[i]$ largely depends on the term 
$|S_i^{\\perp }|$ , which describes the number of neurons change their activation patterns on the i-th training instance.", "Recently, some researchers found that $|S_i^{\\perp }|$ has an upper bound [15],[33].", "Lemma 2 ([33]) Suppose that for all $t \\in [T]$ and $r \\in [m]$ , $\\Vert \\mathbf {w}_t^{r} - \\mathbf {w}_0^{r}\\Vert \\le R$ , where $R \\in (0, 1)$ .", "With probability at least $1-n \\cdot exp(-mR)$ , we have the following bound for all $i \\in [n]$ as: $|S_i^{\\perp }| \\le 4mR.$ When $R$ is small enough, we have $|S_i^{\\perp }| \\ll m$ .", "Then $\\Vert \\mathbf {\\phi }_t\\Vert $ can be reduced to a small number with appropriate large width m. Then we turn to analyze the matrix $\\mathbf {M}$ , which is composed of $\\eta $ , $\\beta $ and $\\mathbf {H}_0$ .", "Lemma 3 Let $\\mathbf {M}:= \\begin{bmatrix}(1\\!+\\!\\beta )(\\mathbf {I}_n\\!-\\!\\eta \\mathbf {H}) &\\beta (-\\mathbf {I}_n\\!+\\!\\eta \\mathbf {H}) \\\\\\mathbf {I}_n & 0\\end{bmatrix} \\in \\mathbb {R}^{2n \\times 2n}$ .", "Assume $\\mathbf {H}\\in \\mathbb {R}^{n \\times n}$ is a symmetry positive definite matrix.", "Suppose the iterates $\\lbrace \\mathbf {v}_i\\rbrace $ satisfies $\\mathbf {v}_t = \\mathbf {M}\\mathbf {v}_{t-1}$ .", "If $\\beta $ and $\\eta $ are chosen to satisfy $1 \\ge \\beta \\ge \\frac{1-\\sqrt{\\eta \\lambda _{min}(\\mathbf {H})}}{1+\\sqrt{\\eta \\lambda _{min}(\\mathbf {H})}}$ and $0 < \\eta \\le 1/\\lambda _{max}(\\mathbf {H})$ , then $\\Vert \\mathbf {v}_k\\Vert \\le (\\sqrt{\\beta (1-\\eta \\lambda _{min}(\\mathbf {H}))})^k C_0 \\Vert \\mathbf {v}_0\\Vert \\nonumber $ The proof is given in the Appendix.", "Lemma 4 ([17]) Denote $\\lambda = \\lambda _{min}(\\mathbf {H}_{\\infty })$ .", "Set $m=\\Omega (\\lambda ^{-2}n^2\\log (n/\\delta ))$ .", "Assume $\\mathbf {w}_0^r \\sim N(0, I_d)$ for all $r \\in [n]$ .", "With probability at least $1-\\delta $ , it holds that $\\Vert \\mathbf {H}_0 -\\mathbf {H}_{\\infty }\\Vert _F \\le \\frac{\\lambda }{4}&,&\\lambda _{min}(\\mathbf {H}_0) \\ge \\frac{3}{4}\\lambda \\;\\; \\nonumber \\\\\\;\\;\\lambda _{max}(\\mathbf {H}_0) \\!\\!&\\le &\\!\\!", "\\lambda _{max}(\\mathbf {H}_{\\infty }) + \\frac{\\lambda }{4}.", "\\nonumber $ Then the condition number of $\\mathbf {H}_0$ is bounded by $\\kappa (\\mathbf {H}_0) \\le \\frac{4}{3}\\kappa (\\mathbf {H}_{\\infty }) + \\frac{1}{3}.$ Theorem 1 Define $\\lambda = \\frac{3\\lambda _{min}(\\mathbf {H}_{\\infty })}{4}$ , $\\lambda _{max} = \\lambda _{max}(\\mathbf {H}_{\\infty }) + \\frac{\\lambda }{4}$ and $\\kappa = \\frac{4}{3}\\kappa (\\mathbf {H}_{\\infty }) + \\frac{1}{3}$ .", "Assume $\\mathbf {w}_0^{r} \\sim N(0, I_d)$ and $a^r$ is sampled from Rademacher distribution for all $r \\in [m]$ , and set the step size $\\eta = 1/2\\lambda _{max}$ , the momentum parameter $\\beta = \\frac{3\\sqrt{\\kappa } - 2}{3\\sqrt{\\kappa } + 2}$ and the number of the nodes in the first layer $m=\\Omega (\\lambda ^{-2}n^{4} \\kappa ^4 log^3(n/\\delta ))$ .", "With probability at least $1-\\delta $ over the random initialization, the residual error with NAG for objective (REF ) at any iteration $t\\le T$ satisfied $\\textstyle \\left\\Vert \\begin{bmatrix} \\mathbf {\\xi }_t \\\\ \\mathbf {\\xi }_{t-1} \\end{bmatrix}\\right\\Vert \\le {(1 - \\frac{1}{2\\sqrt{\\kappa }})}^t2\\gamma \\textstyle \\left\\Vert \\begin{bmatrix} \\mathbf {\\xi }_0 \\\\ \\mathbf {\\xi }_{-1} \\end{bmatrix}\\right\\Vert ,$ where $\\gamma = \\frac{(12\\kappa -1)^2}{2\\kappa -1}$ .", "With the initialization $\\mathbf {W}_{-1} = 
\mathbf {W}_0$ , we have $\mathbf {\xi }_{-1}=\mathbf {\xi }_0$ .", "Theorem 1 shows that the training error of NAG converges linearly to zero at a $1-\frac{1}{2\sqrt{\kappa }}$ rate.", "In [15], Du et al. proved that gradient descent converges linearly to a globally optimal solution with the rate $1-{\frac{\eta \lambda }{2}}$ , but with a small step size $\eta = O(\frac{\lambda }{n^2})$ .", "[22] further improved the maximum step size to $\frac{1}{\Vert {\mathbf {H}_{\infty }}\Vert }$ .", "It is noted that $\Vert {\mathbf {H}_{\infty }}\Vert \ge \lambda _{max}(\mathbf {H}_{\infty })$ .", "This leads to an improved linear convergence rate of $1-\frac{1}{2\kappa }$ .", "Compared with the gradient descent method, the Nesterov accelerated method obtains a smaller convergence rate (i.e. a smaller contraction factor per iteration), which validates its accelerated performance.", "Recently, Wang et al. [17] proved that the Heavy-ball method achieves a linear convergence rate in the over-parameterized setting, where the rate is $1-\frac{1}{4\sqrt{\kappa }}$ .", "This shows that the Nesterov accelerated method and the Heavy-ball method are both able to converge linearly at a $1-\Theta (1/\sqrt{\kappa })$ rate to a global minimum.", "Table: Summary of convergence results. Let $m$ denote the width of the neural network, $n$ the number of input data points, and $\delta $ the failure probability. Let $\lambda _{max} = 3\lambda _{max}(\mathbf {H}_{\infty })/4$ and $\lambda = \lambda _{min}(\mathbf {H}_{\infty })$ . Let $\kappa = 4\kappa (\mathbf {H}_{\infty })/3 + 1/3$ ." ], [ "Conclusion and future work", "A recent line of work has shown that over-parameterized neural networks trained with gradient descent are closely connected with a Gram matrix.", "In this work, we study the convergence of the Nesterov accelerated method on a two-layer fully connected neural network with ReLU activation.", "By exploiting the \"lazy training\" regime of over-parameterized neural networks, we prove that the Nesterov accelerated method achieves a linear convergence rate, which is faster than that of gradient descent.", "Compared with the Heavy-ball method, the Nesterov accelerated method obtains a comparable convergence rate."
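As a quick numerical illustration of the comparison above (this sketch is not part of the original text; the data, network width, number of iterations and the simplified momentum initialization are arbitrary choices), one can train the two-layer ReLU network with plain gradient descent and with NAG and monitor the residual norm $\Vert \mathbf {\xi }_t\Vert $:

import numpy as np

rng = np.random.default_rng(1)
n, d, m = 20, 5, 4096                        # samples, input dim, hidden width (arbitrary)
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = rng.normal(size=n)

a = rng.choice([-1.0, 1.0], size=m)          # fixed output weights (Rademacher)
W0 = rng.normal(size=(m, d))                 # w_r ~ N(0, I_d)

def f(W):                                    # f(x_j) = (1/sqrt(m)) sum_r a_r relu(<w_r, x_j>)
    return (a @ np.maximum(W @ X.T, 0.0)) / np.sqrt(m)

def grad(W):                                 # gradient of L = 0.5*||f - y||^2 w.r.t. W
    xi = f(W) - y
    act = (W @ X.T > 0.0).astype(float)
    return ((a[:, None] * act) * xi[None, :]) @ X / np.sqrt(m)

# Step size and momentum motivated by Theorem 1, using the analytic NTK as a proxy for H_0.
G = np.clip(X @ X.T, -1.0, 1.0)
Hinf = G * (np.pi - np.arccos(G)) / (2.0 * np.pi)
lam = np.linalg.eigvalsh(Hinf)
eta = 1.0 / (2.0 * lam[-1])
kappa = lam[-1] / lam[0]
beta = (3.0 * np.sqrt(kappa) - 2.0) / (3.0 * np.sqrt(kappa) + 2.0)

W = W0.copy()                                # plain gradient descent
for t in range(300):
    W -= eta * grad(W)
print("GD :  ||xi_300|| =", np.linalg.norm(f(W) - y))

W, v_prev = W0.copy(), W0.copy()             # NAG: v_{t+1} = w_t - eta*grad(w_t); w_{t+1} = v_{t+1} + beta*(v_{t+1} - v_t)
for t in range(300):
    v_new = W - eta * grad(W)
    W = v_new + beta * (v_new - v_prev)
    v_prev = v_new
print("NAG:  ||xi_300|| =", np.linalg.norm(f(W) - y))

With the step size and momentum chosen as in Theorem 1, the NAG residual typically decays markedly faster than the gradient-descent one, consistent with the $1-\Theta (1/\sqrt{\kappa })$ versus $1-\Theta (1/\kappa )$ rates discussed above.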
], [ "Proof of Lemma 1", "Proof of Lemma 1 At iteration $t+1$ , the output of the two-layer fully connected neural network for arbitrary feature $\\mathbf {x}_i$ can be divided into two parts: $f(\\mathbf {W}_{t+1},\\mathbf {a};\\mathbf {x}_i)&=&\\frac{1}{\\sqrt{m}}\\sum _{r=1}^m a^r \\sigma (\\langle \\mathbf {w}_{t+1}^{r}, \\mathbf {x}_i \\rangle ) \\nonumber \\\\&=& \\frac{1}{\\sqrt{m}}\\sum _{r \\in S_i} a^r \\sigma (\\langle \\mathbf {w}_{t+1}^{r}, \\mathbf {x}_i \\rangle ) \\nonumber \\\\ &+& \\frac{1}{\\sqrt{m}}\\sum _{r \\in S_i^{\\perp }} a^r \\sigma (\\langle \\mathbf {w}_{t+1}^{r}, \\mathbf {x}_i \\rangle ).$ The subgradient of the square loss $L$ w.r.t the network parameters $\\mathbf {w}^r$ is $\\frac{\\partial \\mathcal {L}(\\mathbf {W},\\mathbf {a})}{\\partial \\mathbf {w}^r} \\!=\\!", "\\frac{1}{\\sqrt{m}}\\!\\sum _{i=1}^n (f(\\mathbf {W};\\mathbf {x}_i) \\!-\\!", "y_i) a^r \\mathbf {x}_i \\mathbb {I}\\lbrace {\\langle \\mathbf {w}^r, \\mathbf {x}_i \\rangle \\!\\ge \\!", "0}\\rbrace .$ Besides, the element of the Gram matrix $\\mathbf {H}_t$ at iteration t is: $\\mathbf {H}_t[i,j]=\\frac{1}{m}\\mathbf {x}_i^{\\top }\\mathbf {x}_j \\sum _{r=1}^m \\mathbb {I}\\lbrace \\langle \\mathbf {w}_t^{r}, \\mathbf {x}_i \\rangle \\ge 0 \\& \\langle \\mathbf {w}_t^{r}, \\mathbf {x}_j \\rangle \\ge 0\\rbrace .", "\\nonumber $ For brevity, we define $\\mathbb {I}_{r,i}(t):=\\mathbb {I}\\lbrace {\\langle \\mathbf {w}_t^r, \\mathbf {x}_i \\rangle \\!\\ge \\!", "0}\\rbrace $ .", "Based on the Nesterov accelerated gradient, the first part of  (REF ) can be decomposed as: $&&\\frac{1}{\\sqrt{m}}\\sum _{r \\in S_i} a^r \\sigma (\\langle \\mathbf {w}_{t+1}^{r}, \\mathbf {x}_i \\rangle ) \\nonumber \\\\&\\overset{\\text{a}}{=}&\\!\\!\\!\\!\\!\\!", "\\frac{1}{\\sqrt{m}}\\!\\sum _{r \\in S_i}\\!", "a^r\\sigma (\\langle \\mathbf {w}_t^{r} \\!-\\!", "\\eta \\frac{\\partial \\mathcal {L}(\\mathbf {W}_t, \\mathbf {a})}{\\partial \\mathbf {w}_t^{r}} \\!+\\!", "\\beta (\\mathbf {w}_t^{r} \\!-\\!", "\\eta \\frac{\\partial \\mathcal {L}(\\mathbf {W}_t,\\mathbf {a})}{\\partial \\mathbf {w}_t^{r}}\\nonumber \\\\&-&\\!\\!\\!\\!\\!\\!", "(\\mathbf {w}_{t-1}^{r} - \\eta \\frac{\\partial \\mathcal {L}(\\mathbf {W}_{t-1},\\mathbf {a})}{\\partial \\mathbf {w}_{t-1}^{r}})), \\mathbf {x}_i \\rangle ) \\nonumber \\\\&\\overset{\\text{b}}{=}&\\!\\!\\!\\!\\!\\!", "\\frac{1}{\\sqrt{m}}\\!\\sum _{r \\in S_i}\\!", "a^r\\langle \\mathbf {w}_t^{r} \\!-\\!", "\\eta \\frac{\\partial \\mathcal {L}(\\mathbf {W}_t,\\mathbf {a})}{\\partial \\mathbf {w}_t^{r}} \\!+\\!", "\\beta (\\mathbf {w}_t^{r} \\!-\\!", "\\eta \\frac{\\partial \\mathcal {L}(\\mathbf {W}_t,\\mathbf {a})}{\\partial \\mathbf {w}_t^{r}} \\nonumber \\\\&-&\\!\\!\\!\\!\\!\\!", "(\\mathbf {w}_{t-1}^{r} - \\eta \\frac{\\partial \\mathcal {L}(\\mathbf {W}_{t-1},\\mathbf {a})}{\\partial \\mathbf {w}_{t-1}^{r}})), \\mathbf {x}_i \\rangle \\mathbb {I}_{r,i}(t+1)\\nonumber \\\\&\\overset{\\text{c}}{=}&\\!\\!\\!\\!\\!\\!\\frac{1+\\beta }{\\sqrt{m}}\\sum _{r \\in S_i}a^r\\langle \\mathbf {w}_t^{r} , \\mathbf {x}_i\\rangle \\mathbb {I}_{r,i}(t) \\nonumber \\\\&-&\\!\\!\\!\\!\\!\\!", "\\frac{\\beta }{\\sqrt{m}}\\sum _{r \\in S_i}a^r\\langle \\mathbf {w}_{t-1}^{r} , \\mathbf {x}_i\\rangle \\mathbb {I}_{r,i}(t-1) \\nonumber \\\\&-&\\!\\!\\!\\!\\!\\!", "\\frac{(1+\\beta )\\eta }{\\sqrt{m}}\\sum _{r \\in S_i}a^r\\langle \\frac{\\partial \\mathcal {L}(\\mathbf {W}_t,\\mathbf {a})}{\\partial \\mathbf {w}_t^{r}} , \\mathbf {x}_i\\rangle \\mathbb {I}_{r,i}(t) \\nonumber 
\\\\&+&\\!\\!\\!\\!\\!\\!", "\\frac{\\eta \\beta }{\\sqrt{m}}\\sum _{r \\in S_i}a^r\\langle \\frac{\\partial \\mathcal {L}(\\mathbf {W}_{t-1},\\mathbf {a})}{\\partial \\mathbf {w}_{t-1}^{r}} , \\mathbf {x}_i\\rangle \\mathbb {I}_{r,i}(t-1) \\nonumber \\\\&{=}&\\!\\!\\!\\!\\!\\!", "(1+\\beta )f(\\mathbf {W}_t,\\mathbf {a};\\mathbf {x}_i) - \\frac{1\\!+\\!\\beta }{\\sqrt{m}}\\!\\!\\!\\!\\sum _{r \\in S_i^{\\perp }}\\!\\!a^r\\langle \\mathbf {w}_t^{r} , \\mathbf {x}_i\\rangle \\mathbb {I}_{r,i}(t)\\nonumber \\\\&-&\\!\\!\\!\\!\\!\\!", "\\!\\beta f(\\mathbf {W}_{t-1},\\mathbf {a};\\mathbf {x}_i) \\!\\!+\\!\\!", "\\frac{\\beta }{\\sqrt{m}}\\!\\!\\sum _{r \\in S_i^{\\perp }}\\!\\!a^r\\langle \\mathbf {w}_{t-1}^{r} , \\mathbf {x}_i\\rangle \\mathbb {I}_{r,i}(t-1) \\nonumber \\\\&-&\\!\\!\\!\\!\\!\\!", "(1+\\beta )\\eta \\sum _{j=1}^n\\mathbf {\\xi }_t[j]\\mathbf {H}_t[i,j]+\\beta \\eta \\sum _{j=1}^n\\mathbf {\\xi }_{t-1}[j]\\mathbf {H}_{t-1}[i,j] \\nonumber \\\\&+& \\!\\!\\!\\!\\!\\!\\frac{(1\\!\\!+\\!\\!\\beta )\\eta }{m}\\!\\!\\sum _{j=1}^n \\mathbf {x}_i^{\\top }\\mathbf {x}_j\\mathbf {\\xi }_t[j]\\!\\!\\sum _{r \\in S_i^{\\perp }}\\mathbb {I}_{r,i}(t)\\mathbb {I}_{r,j}(t) \\nonumber \\\\&-&\\!\\!\\!\\!\\!\\!\\frac{\\beta \\eta }{m}\\!\\!\\sum _{j=1}^n\\!\\!", "\\mathbf {x}_i^{\\top }\\mathbf {x}_j\\mathbf {\\xi }_{t-1}[j]\\!\\!\\sum _{r \\in S_i^{\\perp }}\\!\\!\\mathbb {I}_{r,i}(t-1)\\mathbb {I}_{r,i}(t-1)\\nonumber $ where (a) applies the property of ReLU activation $\\sigma (x) = a\\mathbb {I}\\lbrace a\\ge 0\\rbrace $ , (b) exploits the neurons of the set $S_i$ always keep the sign of their activation output, (c) uses the expansion of the subgradient (REF ) and the Gram matrix $\\mathbf {H}$ .", "Therefore, the i-th element of the residual error between the output and the label at iteration t+1 can be derived as: $&&\\mathbf {\\xi }_{t+1}[i] \\nonumber \\\\&=& f(\\mathbf {W}_{t+1},\\mathbf {a};\\mathbf {x}_i) - y_i \\nonumber \\\\&=&\\!\\!", "\\frac{1}{\\sqrt{m}}\\!\\!\\sum _{r \\in S_i}\\!\\!", "a^r \\sigma (\\langle \\mathbf {w}_{t+1}^{r}, \\mathbf {x}_i \\rangle )\\!\\!", "+\\!\\!", "\\frac{1}{\\sqrt{m}}\\!\\!\\sum _{r \\in S_i^{\\perp }}\\!\\!", "a^r \\sigma (\\langle \\mathbf {w}_{t+1}^{r}, \\mathbf {x}_i \\rangle ) \\!\\!-\\!\\!y_i \\nonumber \\\\&=& \\mathbf {\\xi }_{t}[i] + \\beta (\\mathbf {\\xi }_{t}[i] - \\mathbf {\\xi }_{t-1}[i])-(1+\\beta )\\eta \\sum _{j=1}^n \\mathbf {H}_0[i,j]\\mathbf {\\xi }_t[j] \\nonumber \\\\&+&\\!\\!\\!\\!", "\\beta \\eta \\!\\!\\sum _{j=1}^n \\!\\!\\mathbf {H}_0[i,j]\\mathbf {\\xi }_{t-1}[j]\\!\\!+\\!\\!\\beta \\eta \\!\\!\\sum _{j=1}^n (\\mathbf {H}_{t-1}[i,j]\\!\\!-\\!\\!\\mathbf {H}_0[i,j])\\mathbf {\\xi }_{t-1}[j] \\nonumber \\\\&-&\\!\\!", "(1+\\beta )\\eta \\!\\!\\sum _{j=1}^n (\\mathbf {H}_{t}[i,j]\\!\\!-\\!\\!\\mathbf {H}_0[i,j])\\mathbf {\\xi }_{t}[j] \\nonumber \\\\&+&\\!\\!\\frac{(1\\!\\!+\\!\\!\\beta )\\eta }{m}\\!\\!\\sum _{j=1}^n\\!\\!", "\\mathbf {x}_i^{\\top }\\mathbf {x}_j\\mathbf {\\xi }_t[j]\\!\\!\\sum _{r \\in S_i^{\\perp }}\\!\\!\\mathbb {I}_{r,i}(t)\\mathbb {I}_{r,j}(t) \\nonumber \\\\&-&\\!\\!\\!\\!", "\\frac{\\beta \\eta }{m}\\!\\!\\sum _{j=1}^n \\!\\!\\mathbf {x}_i^{\\top }\\mathbf {x}_j\\mathbf {\\xi }_{t-1}[j]\\!\\!\\sum _{r \\in S_i^{\\perp }}\\!\\!\\mathbb {I}_{r,i}(t-1)\\mathbb {I}_{r,j}(t-1) \\nonumber \\\\&+& \\frac{1}{\\sqrt{m}}\\sum _{r \\in S_i^{\\perp }} a^r \\sigma (\\langle \\mathbf {w}_{t+1}^{r}, \\mathbf {x}_i \\rangle )- a^r\\sigma (\\langle \\mathbf {w}_t^{r} , \\mathbf {x}_i\\rangle )\\nonumber \\\\&-& 
\\frac{\\beta }{\\sqrt{m}}\\sum _{r \\in S_i^{\\perp }}a^r\\sigma (\\langle \\mathbf {w}_t^{r} , \\mathbf {x}_i\\rangle ) - a^r\\sigma (\\langle \\mathbf {w}_{t-1}^{r} , \\mathbf {x}_i\\rangle )\\nonumber $ In words, the residual error on the whole training dataset can be written in a recursive form as: $\\mathbf {\\xi }_{t+1} \\!\\!\\!\\!&=&\\!\\!\\!\\!", "\\mathbf {\\xi }_t \\!+\\!", "\\beta (\\mathbf {\\xi }_t \\!-\\!", "\\mathbf {\\xi }_{t-1}) \\!-\\!", "(1\\!+\\!\\beta )\\eta \\mathbf {H}_0\\mathbf {\\xi }_t \\!+\\!", "\\beta \\eta \\mathbf {H}_{0}\\mathbf {\\xi }_{t-1} \\!+\\!", "\\mathbf {\\psi }_t \\!+\\!", "\\mathbf {\\phi }_t\\nonumber $ where $\\mathbf {\\psi }_t \\!=\\!", "\\beta \\eta (\\mathbf {H}_{t-1} \\!-\\!", "\\mathbf {H}_0)\\mathbf {\\xi }_{t-1} \\!-\\!", "(1+\\beta )\\eta (\\mathbf {H}_t \\!-\\!", "\\mathbf {H}_0)\\mathbf {\\xi }_t$ and the i-th element of $\\mathbf {\\phi }_t$ have the following form: $\\mathbf {\\phi }_t[i] \\!\\!\\!\\!&=& \\!\\!\\!\\!", "\\frac{(1\\!\\!+\\!\\!\\beta )\\eta }{m}\\!\\!\\sum _{j=1}^n\\!\\!", "\\mathbf {x}_i^{\\top }\\mathbf {x}_j\\mathbf {\\xi }_t[j]\\!\\!\\sum _{r \\in S_i^{\\perp }}\\!\\!\\mathbb {I}_{r,i}(t)\\mathbb {I}_{r,j}(t) \\nonumber \\\\\\!\\!\\!\\!&-&\\!\\!\\!\\!\\!", "\\frac{\\beta \\eta }{m}\\!\\!\\sum _{j=1}^n\\!\\!", "\\mathbf {x}_i^{\\top }\\mathbf {x}_j\\mathbf {\\xi }_{t-1}[j]\\!\\!\\!\\sum _{r \\in S_i^{\\perp }}\\!\\!\\mathbb {I}_{r,i}(t-1)\\mathbb {I}_{r,j}(t-1) \\nonumber \\\\&+& \\frac{1}{\\sqrt{m}}\\sum _{r \\in S_i^{\\perp }} a^r \\sigma (\\langle \\mathbf {w}_{t+1}^{r}, \\mathbf {x}_i \\rangle )- a^r\\sigma (\\langle \\mathbf {w}_t^{r} , \\mathbf {x}_i\\rangle ) \\nonumber \\\\&-& \\frac{\\beta }{\\sqrt{m}}\\sum _{r \\in S_i^{\\perp }}a^r\\sigma (\\langle \\mathbf {w}_t^{r} , \\mathbf {x}_i\\rangle ) - a^r\\sigma (\\langle \\mathbf {w}_{t-1}^{r} , \\mathbf {x}_i\\rangle ).\\nonumber $ Besides, it is easy to derive $\\sum _{r \\in S_i^{\\perp }}\\mathbb {I}_{r,i}(t)\\mathbb {I}_{r,j}(t) \\le |S_i^{\\perp }|$ , $\\Vert \\mathbf {x}_i\\Vert \\le 1$ and ReLU activation $\\sigma (\\cdot )$ is 1-Lipschitz, then $a^r\\!\\big [\\!", "\\sigma (\\!\\langle \\mathbf {w}_{t+1}^{r}, \\mathbf {x}_i \\rangle \\!", ")\\!\\!- \\!\\!\\sigma (\\!\\langle \\mathbf {w}_t^{r} , \\mathbf {x}_i\\rangle \\!", ")\\!\\big ] \\!\\!\\!\\!\\!\\!", "&\\le & \\!\\!\\!\\!\\!\\!|a^r \\big [\\!\\sigma (\\!\\langle \\mathbf {w}_{t+1}^{r}, \\mathbf {x}_i \\rangle \\!", ")\\!\\!-\\!\\!\\sigma (\\!\\langle \\mathbf {w}_t^{r} , \\mathbf {x}_i\\rangle \\!", ")\\!\\big ]\\!| \\nonumber \\\\&\\le &\\!\\!\\!\\!\\!|\\langle \\mathbf {w}_{t+1}^r - \\mathbf {w}_t^r, \\mathbf {x}_i \\rangle | \\nonumber \\\\&\\le &\\!\\!\\!\\!\\!\\Vert \\mathbf {w}_{t+1}^r - \\mathbf {w}_t^r\\Vert \\Vert \\mathbf {x}_i\\Vert \\nonumber \\\\&\\le &\\!\\!\\!\\!\\!\\Vert \\mathbf {w}_{t+1}^r - \\mathbf {w}_t^r\\Vert \\nonumber .$ Therefore, we have $&&| \\mathbf {\\phi }_t[i] | \\nonumber \\\\&\\overset{\\text{a}}{\\le }&\\!\\!\\!\\!", "\\frac{(1+\\beta )\\eta |S_i^{\\perp }|}{m} \\sum _{j=1}^n |\\mathbf {\\xi }_t[j]| + \\frac{\\beta \\eta |S_i^{\\perp }|}{m} \\sum _{j=1}^n |\\mathbf {\\xi }_{t-1}[j]| \\nonumber \\\\\\!\\!&+&\\!\\!\\!\\!\\!\\!\\frac{1}{\\sqrt{m}}\\!\\!\\!\\!", "\\sum _{r \\in S_i^{\\perp }}\\!\\!(\\!", "\\Vert \\mathbf {w}_{t+1}^{r} \\!\\!-\\!\\!", "\\mathbf {w}_t^r \\Vert \\!\\!+\\!\\!", "\\beta \\Vert \\mathbf {w}_t^{r} \\!\\!-\\!\\!", "\\mathbf {w}_{t-1}^r\\Vert \\!", ")\\nonumber \\\\&\\overset{\\text{b}}{\\le }&\\!\\!\\!\\!", "\\frac{(1+\\beta 
)\\sqrt{n}\\eta |S_i^{\\perp }|}{m}\\Vert \\mathbf {\\xi }_t\\Vert \\!\\!+\\!\\!\\frac{\\sqrt{n}\\beta \\eta |S_i^{\\perp }|}{m}\\Vert \\mathbf {\\xi }_{t-1}\\Vert \\nonumber \\\\&+&\\frac{1}{\\sqrt{m}} \\sum _{r \\in S_i^{\\perp }}( \\Vert \\mathbf {w}_{t+1}^{r} - \\mathbf {w}_t^{r} \\Vert + \\beta \\Vert \\mathbf {w}_t^{r} - \\mathbf {w}_{t-1}^{r}\\Vert ) \\nonumber \\\\&{\\le }&\\!\\!\\!\\!\\!\\!", "\\frac{ sup_{j\\in [n]}\\!|S_j^{\\perp }|\\!\\sqrt{n}\\eta }{m}(\\!", "(\\!2\\!+\\!4\\beta \\!", ")\\Vert \\mathbf {\\xi }_t\\Vert \\!\\!+\\!\\!", "2\\beta \\Vert \\!\\mathbf {\\xi }_{t-1} \\!\\Vert \\!\\!+\\!\\!2\\!\\!\\sum _{i=0}^{t-1}\\!\\beta ^{t+1-i}\\Vert \\mathbf {\\xi }_i\\Vert \\!", ")\\nonumber \\\\$ where (a) uses $\\sum _{j=1}^n |z_j| \\le \\sqrt{n\\sum _{j=1}^n z_j^2}$ , (b) uses the NAG update rule to derive the distance between two consecutive iterations: $\\mathbf {w}_{t}^{r} - \\mathbf {w}_{t-1}^{r}\\!\\!\\!\\!\\!\\!&=&\\!\\!\\!\\!\\!\\!", "-\\eta \\frac{\\partial \\mathcal {L}(\\mathbf {W}_{t-1},\\mathbf {a})}{\\partial \\mathbf {w}_{t-1}^{r}} + \\beta (\\mathbf {v}_{t}^{r} - \\mathbf {v}_{t-1}^{r}) \\nonumber \\\\\\!\\!\\!\\!&=&\\!\\!\\!\\!\\!\\!", "-\\eta \\frac{\\partial \\mathcal {L}(\\mathbf {W}_{t-1},\\mathbf {a})}{\\partial \\mathbf {w}_{t-1}^{r}} + \\beta \\big (-\\eta \\frac{\\partial \\mathcal {L}(\\mathbf {W}_{t-1},\\mathbf {a})}{\\partial \\mathbf {w}_{t-1}^{r}} \\nonumber \\\\&+&\\!\\!\\!\\!\\!\\!", "\\beta (\\mathbf {v}_{t-1}^{r} - \\mathbf {v}_{t-2}^{r})\\big ) \\nonumber \\\\&=&\\!\\!\\!\\!\\!\\!", "-\\eta (1\\!+\\!\\beta )\\frac{\\partial \\mathcal {L}(\\mathbf {W}_{t-1},\\mathbf {a}))}{\\partial \\mathbf {w}_{t-1}^{r}} \\!+\\!", "\\beta ^2(\\mathbf {v}_{t-1}^{r} \\!-\\!", "\\mathbf {v}_{t-2}^{r})\\nonumber \\\\&=&\\!\\!\\!\\!\\!\\!", "-\\eta (1\\!+\\!\\beta )\\frac{\\partial \\mathcal {L}(\\mathbf {W}_{t-1},\\mathbf {a})}{\\partial \\mathbf {w}_{t-1}^{r}}\\nonumber \\\\&+&\\!\\!\\!\\!\\!\\!\\beta ^2\\big (\\!\\!-\\!\\eta \\frac{\\partial \\mathcal {L}(\\mathbf {W}_{t-2},\\mathbf {a})}{\\partial \\mathbf {w}_{t-2}^{r}} + \\beta (\\mathbf {v}_{t-2}^{r} - \\mathbf {v}_{t-3}^{r})\\big ) \\nonumber \\\\&=&\\!\\!\\!\\!\\!\\!", "\\!-\\!\\eta \\frac{\\partial \\mathcal {L}(\\mathbf {W}_{t-1},\\mathbf {a})}{\\partial \\mathbf {w}_{t-1}^{r}} \\!-\\!\\eta \\sum _{i=0}^{t-1}\\beta ^{t-i}\\frac{\\partial \\mathcal {L}(\\mathbf {W}_{i},\\mathbf {a})}{\\partial \\mathbf {w}_{i}^{r}}$ Therefore, $\\Vert \\mathbf {w}_{t}^{r} - \\mathbf {w}_{t-1}^{r}\\Vert \\!\\!\\!\\!&\\le &\\!\\!\\!\\!", "\\eta \\Vert \\frac{\\partial \\mathcal {L}(\\mathbf {W}_{t-1},\\mathbf {a})}{\\partial \\mathbf {w}_{t-1}^{r}}\\Vert \\!+\\!", "\\eta \\sum _{i=0}^{t-1}\\beta ^{t-i}\\Vert \\frac{\\partial \\mathcal {L}(\\mathbf {W}_{i},\\mathbf {a})}{\\partial \\mathbf {w}_{i}^{r}}\\Vert \\nonumber \\\\&\\le & \\!\\!\\!\\!\\eta \\frac{\\sqrt{n}}{\\sqrt{m}}(\\Vert \\mathbf {\\xi }_t\\Vert \\!+\\!", "\\sum _{i=0}^{t-1}\\beta ^{t-i}\\Vert \\mathbf {\\xi }_i\\Vert )$ where $|\\frac{\\partial \\mathcal {L}(\\mathbf {W}_{i},\\mathbf {a})}{\\partial \\mathbf {w}_{i}^{r}}| \\le \\frac{\\sqrt{n}}{\\sqrt{m}}\\Vert \\mathbf {\\xi }_i\\Vert $ and $\\mathbf {v}_1^{r} - \\mathbf {v}_0^{r}=-\\eta \\frac{\\partial \\mathcal {L}(\\mathbf {W}_{0},\\mathbf {a})}{\\partial \\mathbf {w}_{0}^{r}}$ .", "Note that: $\\mathbf {w}_{t+1}^{r} \\!\\!-\\!\\!", "\\mathbf {w}_t^{r} \\!\\!&=&\\!\\!", "\\beta (\\mathbf {w}_{t}^{r} \\!\\!-\\!\\!", "\\mathbf {w}_{t-1}^{r}) \\!-\\!", "\\eta (1+\\beta )\\frac{\\partial \\mathcal 
{L}(\\mathbf {W}_{t},\\mathbf {a})}{\\partial \\mathbf {w}_{t}^{r}} \\nonumber \\\\\\!\\!&+&\\!\\!", "\\eta \\beta \\frac{\\partial \\mathcal {L}(\\mathbf {W}_{t-1},\\mathbf {a})}{\\partial \\mathbf {w}_{t-1}^{r}},\\nonumber $ then we have: $&& \\Vert \\mathbf {w}_{t+1}^{r} - \\mathbf {w}_t^{r} \\Vert + \\beta \\Vert \\mathbf {w}_t^{r} - \\mathbf {w}_{t-1}^{r}\\Vert \\nonumber \\\\&\\le &\\!\\!\\!\\!", "\\eta (1\\!\\!+\\!\\!\\beta )\\Vert \\frac{\\partial \\mathcal {L}(\\mathbf {W}_{t},\\mathbf {a})}{\\partial \\mathbf {w}_{t}^{r}}\\Vert + \\eta \\beta \\Vert \\frac{\\partial \\mathcal {L}(\\mathbf {W}_{t-1},\\mathbf {a})}{\\partial \\mathbf {w}_{t-1}^{r}}\\Vert \\nonumber \\\\\\!&+&\\!", "2\\beta \\Vert \\mathbf {w}_t^{r} \\!\\!-\\!\\!", "\\mathbf {w}_{t-1}^{r}\\Vert \\nonumber \\\\&\\le &\\!\\!\\!\\!", "\\frac{\\eta \\sqrt{n}}{\\sqrt{m}}((1+3\\beta )\\Vert \\mathbf {\\xi }_t\\Vert + \\beta \\Vert \\mathbf {\\xi }_{t-1} \\Vert + \\!2\\sum _{i=0}^{t-1}\\beta ^{t+1-i}\\Vert \\mathbf {\\xi }_i\\Vert ).\\nonumber $" ], [ "Proof of Lemma 3", "Proof of Lemma 3 First, we denote we decompose $\\mathbf {H}= \\mathbf {U}\\Lambda \\mathbf {U}^*$ with SVD method, where $\\mathbf {U}$ is an unitary matrix and $\\Lambda = diag(\\lambda _1,\\cdots , \\lambda _n)$ is a diagonal matrix, $\\lambda _i$ is the i-th eigenvalues of $\\mathbf {H}$ in a decreasing order.", "Then we have $\\mathbf {M}=\\begin{bmatrix}\\mathbf {U}& 0 \\\\0 & \\mathbf {U}\\end{bmatrix}\\!\\begin{bmatrix}(1\\!+\\!\\beta )(\\mathbf {I}_n\\!-\\!\\eta \\Lambda ) &\\beta (-\\mathbf {I}_n\\!+\\!\\eta \\Lambda ) \\\\\\mathbf {I}_n & 0\\end{bmatrix}\\!\\begin{bmatrix}\\mathbf {U}^* & 0 \\\\0 & \\mathbf {U}^* \\end{bmatrix}.\\nonumber $ We denote $\\tilde{\\mathbf {U}} =$ $\\begin{bmatrix}\\mathbf {U}& 0 \\\\0 & \\mathbf {U}\\end{bmatrix}$.", "By applying some permutation matrix $\\tilde{\\mathbf {P}}$ , M can be further decomposed to $\\mathbf {M}= \\tilde{\\mathbf {U}}\\tilde{\\mathbf {P}}\\Sigma \\tilde{\\mathbf {P}}^{\\top }\\tilde{\\mathbf {U}}^*,$ where $\\mathbf {\\Sigma }$ is a block diagonal matrix with ${\\mathbf {\\Sigma }}_i =$ $\\begin{bmatrix}(1+\\beta )(1-\\eta \\lambda _i) & \\beta (-1+\\eta \\lambda _i) \\\\1 & 0\\end{bmatrix}$.", "Applying eigendecomposition method, $\\mathbf {\\Sigma }_i$ can be factorized as: ${\\mathbf {\\Sigma }}_i = \\mathbf {Q}_i \\mathbf {D}_i \\mathbf {Q}_i^{-1},$ where the columns of $\\mathbf {Q}_i$ are the eigenvectors of ${\\mathbf {\\Sigma }}_i$ and $\\mathbf {D}_i$ is a diagonal matrix whose diagonal elements are the corresponding eigenvalues.", "Then, $\\mathbf {\\Sigma }$ can be further decomposed as: $\\mathbf {\\Sigma } &=& diag({\\mathbf {\\Sigma }}_1, \\cdots , {\\mathbf {\\Sigma }}_n)=\\mathbf {Q}\\mathbf {D}\\mathbf {Q}^{-1}$ where $\\mathbf {Q}=diag(\\mathbf {Q}_1, \\cdots , \\mathbf {Q}_n)$ is a block diagonal matrix and $\\mathbf {D}=diag(\\mathbf {D}_1, \\cdots , \\mathbf {D}_n)$ .", "Therefore, we have $\\mathbf {M}= \\mathbf {P}\\mathbf {D}\\mathbf {P}^{-1}, \\nonumber $ where $\\mathbf {P}=\\tilde{\\mathbf {U}}\\tilde{\\mathbf {P}}\\mathbf {Q}$ .", "Now, we derive the bound of the norm of $\\mathbf {v}_k$ .", "We define $\\mathbf {u}_k := \\mathbf {P}^{-1}\\mathbf {v}_k$ , then we have $\\mathbf {u}_k = \\mathbf {P}^{-1}\\mathbf {M}\\mathbf {v}_{k-1}=\\mathbf {D}\\mathbf {u}_{k-1}=\\mathbf {D}^k \\mathbf {u}_{0}.", "\\nonumber $ Therefore, we have $\\mathbf {P}^{-1}\\mathbf {v}_k &=& \\mathbf {D}^k \\mathbf {P}^{-1}\\mathbf {v}_{0} \\nonumber \\\\\\mathbf {v}_k &=& \\mathbf 
{P}\\mathbf {D}^k \\mathbf {P}^{-1} \\mathbf {v}_{0} \\nonumber \\\\\\Vert \\mathbf {v}_k\\Vert &\\le & (\\max _{i \\in [n]} |\\mathbf {D}_{ii}|^k)\\sqrt{\\frac{\\lambda _{max}(\\mathbf {P}\\mathbf {P}^*)}{\\lambda _{min}(\\mathbf {P}\\mathbf {P}^*)}}\\Vert \\mathbf {v}_0\\Vert .$ Note that the right-hand side of (REF ) is determined by the $\\max _{i \\in [n]}|D_{ii}|$ , the condition number of $\\mathbf {P}\\mathbf {P}^*$ and $\\Vert \\mathbf {v}_0\\Vert $ .", "We firstly analyse the choice of $\\eta $ and $\\beta $ that ensuring $\\max _{i \\in [n]}|D_{ii}| < 1$ .", "Note that the characteristic polynomial of $\\mathbf {\\Sigma }_i$ is $z^2 -(1+\\beta )(1-\\eta \\lambda _i)z + \\beta (1-\\eta \\lambda _i)$ .", "If $\\Delta _i:=((1+\\beta )(1-\\eta \\lambda _i))^2 - 4\\beta (1-\\eta \\lambda _i) \\le 0$ , the two roots $z_{i,1}$ and $z_{i,2}$ are conjugate with magnitude $\\sqrt{\\beta (1-\\eta \\lambda _i)}$ .", "Besides, we have $\\Delta _i \\!\\le \\!", "0\\!\\!\\!\\!", "&\\Leftrightarrow &\\!\\!\\!\\!\\!", "(1\\!\\!+\\!\\!\\beta )^2(1\\!-\\!\\eta \\lambda _i\\!-\\!\\frac{2\\beta }{(1\\!\\!+\\!\\!\\beta )^2})^2 \\!-\\!", "(1\\!\\!+\\!\\!\\beta )^2(\\frac{2\\beta }{(1\\!\\!+\\!\\!\\beta )^2})^2 \\le 0 \\nonumber \\\\&\\Leftrightarrow &\\!\\!\\!\\!", "0 \\le 1-\\eta \\lambda _i \\le \\frac{4\\beta }{(1+\\beta )^2}.$ For all $i \\in [n]$ , we have the constraints on $\\eta $ and $\\beta $ as: $0<\\eta \\le 1/\\lambda _{max}(\\mathbf {H}),\\;\\; 1 \\ge \\beta \\ge \\frac{1-\\sqrt{\\eta \\lambda _{min}(\\mathbf {H})}}{1+\\sqrt{\\eta \\lambda _{min}(\\mathbf {H})}}.$ Then $\\max _{i \\in [n]}|\\mathbf {D}_{ii}| = \\sqrt{\\beta (1-\\eta \\lambda _{min}(\\mathbf {H}))}< 1$ .", "Next, we bound the eigenvalues of $\\mathbf {P}\\mathbf {P}^*$ .", "Note that the spectrum of Q does not change by multiplying the unitary matrix $\\tilde{\\mathbf {U}}\\tilde{\\mathbf {P}}$ .", "Therefore, we analyse the eigenvalues of $\\mathbf {Q}\\mathbf {Q}^*$ instead of $\\mathbf {P}\\mathbf {P}^*$ .", "Then we have $\\lambda _{max}(\\mathbf {Q}\\mathbf {Q}^*) = \\max _{i \\in [n]} \\lambda _{max}(\\mathbf {Q}_i\\mathbf {Q}_i^*)$ and $\\lambda _{min}(\\mathbf {Q}\\mathbf {Q}^*) = \\min _{i \\in [n]} \\lambda _{min}(\\mathbf {Q}_i\\mathbf {Q}_i^*)$ .", "The eigenvalues $z_{i,1}, z_{i,2}$ of $\\Sigma _i$ satisfy $z_{i,1} + z_{i,2} = (1+\\beta )(1-\\eta \\lambda _i),\\;\\;z_{i,1} z_{i,2} = \\beta (1-\\eta \\lambda _i).", "\\nonumber $ For eigenvalue $z_{i,j}$ , the corresponding eigenvector is $q_{i,j} = [z_{i,j}, 1]^{\\top }$ .", "Then, we have $Q_iQ_i^* \\!=\\!", "q_{i,1}q_{i,1}^{*} + q_{i,2}q_{i,2}^{*} \\!=\\!", "{\\small {\\begin{bmatrix}z_{i,1}\\bar{z}_{i,1}+z_{i,2}\\bar{z}_{i,2} & z_{i,1} + z_{i,2}\\\\\\bar{z}_{i,1} + \\bar{z}_{i,2} & 2\\end{bmatrix}}} \\nonumber $ We denote $\\theta _{i,1}$ and $\\theta _{i,2}$ as the two eigenvalues of $\\mathbf {Q}_i\\mathbf {Q}_i^*$ , we have $\\theta _{i,1} + \\theta _{i,2} &=& z_{i,1}\\bar{z}_{i,1}+z_{i,2}\\bar{z}_{i,2}+2 \\nonumber \\\\\\theta _{i,1} \\theta _{i,2} &=& 2( z_{i,1}\\bar{z}_{i,1}+z_{i,2}\\bar{z}_{i,2}) - (z_{i,1} + z_{i,2})( \\bar{z}_{i,1} + \\bar{z}_{i,2}) \\nonumber $ The matrix $\\mathbf {Q}_i\\mathbf {Q}_i^*$ is positive semi-definite, we have $\\theta _{i,1} + \\theta _{i,2} \\ge \\max \\lbrace \\theta _{i,1}, \\theta _{i,2}\\rbrace \\ge \\frac{\\theta _{i,1} + \\theta _{i,2}}{2}$ Therefore, we have $\\min \\lbrace \\theta _{i,1}, \\theta _{i,2}\\rbrace = \\theta _{i,1}\\theta _{i,2}/\\max \\lbrace \\theta _{i,1}, \\theta _{i,2}\\rbrace $ For bounding 
the singular values of $QQ^*$ , we have $\lambda _{max}(QQ^*) \le \max _{i \in [n]}\lbrace \max \lbrace \theta _{i,1}, \theta _{i,2}\rbrace \rbrace \le \max _{i \in [n]}\lbrace \theta _{i,1} + \theta _{i,2}\rbrace \le 2\beta (1-\eta \lambda _{min}(\mathbf {H})) + 2$ and $\lambda _{min}({QQ^*}) \ge \min _{i \in [n]}\lbrace \min \lbrace \theta _{i,1}, \theta _{i,2}\rbrace \rbrace \ge \min _{i \in [n]}\lbrace \theta _{i,1} \theta _{i,2}/\max \lbrace \theta _{i,1}, \theta _{i,2}\rbrace \rbrace \ge \frac{\min _{i \in [n]}\lbrace \theta _{i,1} \theta _{i,2}\rbrace }{\max _{i \in [n]}\lbrace \theta _{i,1},\theta _{i,2}\rbrace } \ge \frac{\min _{i \in [n]}\lbrace 4\beta (1-\eta \lambda _i) - [(1+\beta )(1-\eta \lambda _i)]^2\rbrace }{2\beta (1-\eta \lambda _{min}(\mathbf {H})) + 2} .$ Therefore, $\frac{\lambda _{max}(QQ^*)}{\lambda _{min}({QQ^*})} \le \frac{(2\beta (1-\eta \lambda _{min}(\mathbf {H})) + 2)^2}{\min _{i \in [n]}\lbrace 4\beta (1-\eta \lambda _i) - [(1+\beta )(1-\eta \lambda _i)]^2\rbrace }.$ Finally, we need to bound the norm of $\mathbf {v}_0$ , which depends on the value of $\Vert \mathbf {\xi }_0\Vert $ .", "The upper bound on $\Vert \mathbf {\xi }_0\Vert $ has been proved in [33]." ], [ "Supporting Lemmas", "Lemma 5 ([33]) Assume that $\mathbf {w}_0^{r} \sim N(0, \mathbf {I}_d)$ and $\mathbf {a}^r$ is uniformly sampled from $\lbrace -1,1\rbrace $ for all $r \in [m]$ .", "For $0 < \delta < 1$ , we have $\Vert \mathbf {\xi }_0\Vert ^2 = O(n\log (m/\delta )\log ^2(n/\delta ))$ with probability at least $1-\delta $ .", "Lemma 6 Assume that $\lambda _{min}(\mathbf {H}_0) > 0$ .", "Denote $\kappa $ as the condition number of $\mathbf {H}_0$ .", "With $\eta = 1/2\lambda _{max}(\mathbf {H}_0)$ and $\beta = \frac{3\sqrt{\kappa } - 2}{3\sqrt{\kappa } + 2}$ , we have $\max _{i \in [n]}|\mathbf {D}_{ii}| = 1 - \frac{2}{3\sqrt{\kappa }}$ and $C_0 \le \frac{(12\kappa -1)^2}{2\kappa -1}$ .", "Lemma 7 ([33]) Assume $\mathbf {w}_0^{r} \sim N(0, \mathbf {I}_d)$ for all $r \in [m]$ .", "Suppose that a set $\mathbf {W}= \lbrace \mathbf {w}^1, \cdots , \mathbf {w}^m\rbrace $ satisfies $\Vert \mathbf {w}^r - \mathbf {w}_0^r\Vert \le R$ for all $r \in [m]$ ; then $\Vert \mathbf {H}- \mathbf {H}_0\Vert _F \le 2nR$ holds with probability at least $1-n^2 \exp (-mR/10)$ ."
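A quick numerical sanity check of Lemma 3 and Lemma 6 can be done in Python/NumPy (illustrative only; $\mathbf {H}$ below is an arbitrary random symmetric positive definite matrix, not the Gram matrix of an actual network): build $\mathbf {M}$ , compute its spectral radius, and compare it with $\sqrt{\beta (1-\eta \lambda _{min}(\mathbf {H}))}$ and with the $1-\frac{2}{3\sqrt{\kappa }}$ value of Lemma 6.

import numpy as np

rng = np.random.default_rng(2)
n = 30
A = rng.normal(size=(n, n))
H = A @ A.T / n + 0.1 * np.eye(n)            # random symmetric positive definite matrix

lam = np.linalg.eigvalsh(H)
lam_min, lam_max = lam[0], lam[-1]
kappa = lam_max / lam_min

eta = 1.0 / (2.0 * lam_max)                  # step size as in Lemma 6
beta = (3.0 * np.sqrt(kappa) - 2.0) / (3.0 * np.sqrt(kappa) + 2.0)   # momentum as in Lemma 6

I = np.eye(n)
M = np.block([[(1.0 + beta) * (I - eta * H), beta * (-I + eta * H)],
              [I,                            np.zeros((n, n))]])

rho = np.abs(np.linalg.eigvals(M)).max()     # spectral radius of M
print("spectral radius of M        :", rho)
print("sqrt(beta*(1-eta*lam_min))  :", np.sqrt(beta * (1.0 - eta * lam_min)))
print("1 - 2/(3*sqrt(kappa))       :", 1.0 - 2.0 / (3.0 * np.sqrt(kappa)))

The printed spectral radius agrees with $\sqrt{\beta (1-\eta \lambda _{min}(\mathbf {H}))}$ and is close to (though not exactly equal to) $1-2/(3\sqrt{\kappa })$ .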
], [ "Proof of Theorem 1", "Proof of Theorem 1 We define $ a := 1 - \\frac{2}{3\\sqrt{\\kappa }}$ and $b := a + \\frac{1}{6\\sqrt{\\kappa }} = 1 - \\frac{1}{2\\sqrt{\\kappa }}$ .", "Then we have $\\beta < b^2$ .", "From Lemma 1, we have $\\mathbf {z}_{t} &=& \\mathbf {M}\\mathbf {z}_{t-1} + \\mathbf {\\mu }_{t-1} \\nonumber $ where $\\mathbf {z}_{t} = {\\small {\\begin{bmatrix}\\mathbf {\\xi }_{s} \\\\\\mathbf {\\xi }_{s-1}\\end{bmatrix}}}$ , $\\mathbf {\\mu }_{t} = {\\small {\\begin{bmatrix}\\mathbf {\\psi }_{t} \\!+\\!", "\\mathbf {\\phi }_{t}\\\\0\\end{bmatrix}}}$ and $\\mathbf {M}= {\\small {\\begin{bmatrix}(1\\!+\\!\\beta )(\\mathbf {I}_n\\!-\\!\\eta \\mathbf {H}_0) \\!&\\!\\beta (\\!-\\mathbf {I}_n\\!+\\!\\eta \\mathbf {H}_0) \\\\\\mathbf {I}_n \\!&\\!", "0\\end{bmatrix}}}$ .", "From Lemma 3 and Lemma 6, we have $\\Vert \\mathbf {M}^s \\mathbf {z}_0\\Vert \\le a^s \\gamma \\Vert \\mathbf {z}_0\\Vert .$ Now, we need to prove the linear convergence as: $\\Vert \\mathbf {z}_{t}\\Vert \\le b^t2\\gamma \\Vert \\mathbf {z}_0\\Vert .$ We use induction to prove this result.", "At $t=0$ , it holds.", "For the induction step, assume $\\Vert z_s\\Vert \\le b^s 2\\gamma \\Vert z_0\\Vert $ for any time $ s \\le t-1$ .", "At iteration t, we have: $\\mathbf {z}_{t} &=& \\mathbf {M}\\mathbf {z}_{t-1} + \\mathbf {\\mu }_{t-1} \\nonumber \\\\\\mathbf {z}_{t} &=& \\mathbf {M}^t\\mathbf {z}_{0} + \\sum _{s=0}^{t-1}\\mathbf {M}^{t-s-1}\\mathbf {\\mu }_{s} \\nonumber \\\\\\Vert \\mathbf {z}_{t}\\Vert &\\le & \\Vert \\mathbf {M}^t\\mathbf {z}_{0}\\Vert + \\Vert \\sum _{s=0}^{t-1}\\mathbf {M}^{t-s-1}\\mathbf {\\mu }_{s}\\Vert $ Now we need to analyze the two parts of the right-hand side of (REF ).", "The first term has bound as (REF ).", "Then we need to bound the term $\\Vert \\sum _{s=0}^{t-1}\\mathbf {M}^{t-s-1}\\mathbf {\\mu }_{s}\\Vert $ .", "We first bound the distance between $\\mathbf {w}_s^r$ and the initial $\\mathbf {w}_0^r$ for all $s \\le t-1$ .", "Based on the update rule (REF ), we have $\\mathbf {w}_{s}^{r} - \\mathbf {w}_0^{r} \\!\\!\\!\\!&=&\\!\\!\\!\\!", "\\sum _{i=1}^s (\\mathbf {w}_{i}^{r} - \\mathbf {w}_{i-1}^{r}) \\nonumber \\\\\\!\\!\\!\\!&=&\\!\\!\\!\\!", "-\\eta \\!\\!\\sum _{i=0}^{s-1}\\!\\frac{\\partial \\mathcal {L}(\\mathbf {W}_{i},\\mathbf {a})}{\\partial \\mathbf {w}_{i}^{r}} \\!-\\!\\eta \\!\\!\\sum _{g=0}^{s-1} \\!\\sum _{i=0}^g\\!", "\\beta ^{g+1-i}\\frac{\\partial \\mathcal {L}(\\mathbf {W}_{i},\\mathbf {a})}{\\partial \\mathbf {w}_{i}^{r}}.", "\\nonumber $ Applying Cauchy-Schwarz inequality and $|\\frac{\\partial \\mathcal {L}(\\mathbf {W}_{i},\\mathbf {a})}{\\partial \\mathbf {w}_{i}^{r}}| \\le \\frac{\\sqrt{n}}{\\sqrt{m}}\\Vert \\mathbf {\\xi }_i\\Vert $ , we have the bound of the distance for all $s \\le t$ $&&\\Vert \\mathbf {w}_{s}^{r} - \\mathbf {w}_0^{r}\\Vert \\nonumber \\\\&\\le &\\!\\!\\!\\!", "\\frac{\\eta \\sqrt{n}}{\\sqrt{m}}\\sum _{i=0}^{s-1} \\Vert \\mathbf {\\xi }_i\\Vert + \\frac{\\eta \\sqrt{n}}{\\sqrt{m}}\\sum _{g=0}^{s-1}\\sum _{i=0}^g \\beta ^{g+1-i}\\Vert \\mathbf {\\xi }_i\\Vert \\nonumber \\\\&\\overset{\\text{a}}{\\le }&\\frac{2\\gamma \\eta \\sqrt{2n}}{\\sqrt{m}}\\Vert \\mathbf {\\xi }_0\\Vert \\big ( \\sum _{i=0}^{s-1}b^i +\\sum _{g=0}^{s-1}\\sum _{i=0}^g \\beta ^{g+1-i}b^i \\big ) \\nonumber \\\\&\\overset{\\text{b}}{\\le }& \\frac{2\\gamma \\eta \\sqrt{2n}}{\\sqrt{m}}\\Vert \\mathbf {\\xi }_0\\Vert \\big ( \\frac{1}{1-b} + \\frac{b^2}{(1-b)^2} \\big ) \\nonumber \\\\&\\overset{\\text{c}}{\\le }& \\frac{8\\kappa \\gamma \\eta 
\\sqrt{2n}}{\\sqrt{m}}\\Vert \\mathbf {\\xi }_0\\Vert \\nonumber \\\\ &\\overset{\\text{d}}{\\le }& \\frac{8\\kappa \\gamma \\eta \\sqrt{2n}}{\\sqrt{m}}O(\\sqrt{nlog(m/\\delta )log^2(n/\\delta )}) \\nonumber \\\\&\\overset{\\text{e}}{\\le }&\\frac{\\lambda }{360n\\gamma },$ where (a) uses the induction step, (b) uses $\\beta \\le b^2$ , (c) applies $\\frac{1}{1-b} + \\frac{b^2}{(1-b)^2} \\le 4\\kappa $ , (d) uses Lemma 5, (e) choose the number of neurons $m =\\Omega (\\lambda ^{-4}n^{4}\\gamma ^4 \\log ^3(n/\\delta )) = \\Omega (\\lambda ^{-4}n^{4}\\kappa ^4log^3(n/\\delta ))$ as $\\gamma = \\Theta (\\kappa )$ and $\\eta \\lambda \\le 1/\\kappa $ .", "Then we turn to analyze the value of $\\Vert \\mathbf {\\mu }\\Vert $ , which has bound as $\\Vert \\mathbf {\\mu }\\Vert \\le \\Vert \\mathbf {\\phi }\\Vert +\\Vert \\mathbf {\\psi }\\Vert $ .", "Firstly, we derive the bound of $\\Vert \\mathbf {\\phi }_{s}\\Vert $ as $&&\\Vert \\mathbf {\\phi }_{s} \\Vert \\nonumber \\\\&{=}&\\!\\!\\!\\!", "\\sqrt{\\sum _{i=1}^n \\mathbf {\\phi }_s[i]^2} \\nonumber \\\\& \\overset{\\text{a}}{\\le }&\\!\\!\\!\\!", "\\big [\\!\\sum _{i=1}^n\\!", "\\big (\\!\\!", "\\frac{ \\underset{j\\in [n]}{sup}\\!|S_j^{\\perp }|\\!\\sqrt{n}\\eta }{m}(\\!", "(\\!2\\!+\\!4\\beta \\!", ")\\Vert \\mathbf {\\xi }_s\\Vert \\!\\!+\\!\\!", "2\\beta \\Vert \\!\\mathbf {\\xi }_{s-1} \\!\\Vert \\!\\!+\\!\\!2\\!\\!\\sum _{k=0}^{s-1}\\!\\beta ^{s+1-k}\\Vert \\mathbf {\\xi }_k\\Vert \\!", ")\\big )^2\\big ]^{\\frac{1}{2}} \\nonumber \\\\&\\overset{\\text{b}}{\\le }&\\!\\!\\!\\!", "4\\eta n R\\big ((2+4\\beta )\\Vert \\mathbf {\\xi }_s\\Vert + 2\\beta \\Vert \\mathbf {\\xi }_{s-1} \\Vert \\!\\!+ \\!\\!", "2\\sum _{k=0}^{s-1}\\beta ^{s+1-k}\\Vert \\mathbf {\\xi }_k\\Vert \\big ) \\nonumber \\\\&\\overset{\\text{c}}{\\le }&\\!\\!\\!\\!", "8\\eta nR\\gamma \\Vert \\mathbf {z}_0\\Vert \\big ((2+4\\beta )b^s+ 2\\beta b^{s-1} + \\frac{2b^{s+3}}{1-b})\\nonumber \\\\&\\overset{\\text{d}}{\\le }& 48\\sqrt{\\kappa }\\eta n R \\gamma b^s \\Vert \\mathbf {z}_0\\Vert \\nonumber \\\\&\\overset{\\text{e}}{\\le }& \\frac{2b^s}{ 15 \\sqrt{\\kappa }} \\Vert \\mathbf {z}_0\\Vert $ where (a) uses Lemma 1.", "(b) uses Lemma 2.", "(c) uses the induction step.", "(d) applies $\\beta < b^2$ and $1+2b^2 + b + \\frac{b^3}{1-b} \\le 3\\sqrt{\\kappa }$ .", "(e) uses $R\\le \\lambda /(360n\\gamma )$ .", "For $\\mathbf {\\psi }_s$ , we have $\\mathbf {\\psi }_s &=& \\beta \\eta (\\mathbf {H}_{s-1} - \\mathbf {H}_0)\\mathbf {\\xi }_{s-1} -(1+\\beta )\\eta (\\mathbf {H}_s - \\mathbf {H}_0)\\mathbf {\\xi }_s \\nonumber \\\\\\Vert \\mathbf {\\psi }_s\\Vert &\\le & \\beta \\eta \\Vert \\mathbf {H}_{s-1} - \\mathbf {H}_0\\Vert \\Vert \\mathbf {\\xi }_{s-1}\\Vert + (1+\\beta )\\eta \\Vert \\mathbf {H}_s - \\mathbf {H}_0\\Vert \\Vert \\mathbf {\\xi }_s\\Vert \\nonumber \\\\\\Vert \\mathbf {\\psi }_s\\Vert &\\overset{\\text{a}}{\\le }& \\frac{1+\\beta + b}{90\\kappa }b^s \\Vert \\mathbf {z}_0\\Vert \\nonumber \\\\\\Vert \\mathbf {\\psi }_s\\Vert &\\overset{\\text{b}}{\\le }& \\frac{b^s}{30 \\kappa } \\Vert \\mathbf {z}_0\\Vert $ where (a) applies $\\beta < b^2$ , the induction step, Lemma 7 and (REF ), (b) uses $\\beta < 1$ and $b < 1$ .", "Therefore, it has $\\Vert \\mathbf {\\mu }_s\\Vert &\\le & \\Vert \\mathbf {\\phi }_s\\Vert +\\Vert \\mathbf {\\psi }_s\\Vert \\nonumber \\\\&\\le & (\\frac{2}{15\\sqrt{\\kappa }} + \\frac{1}{30\\kappa })b^s \\Vert \\mathbf {\\xi }_0\\Vert $ Then, we have $&&\\Vert \\sum _{s=0}^{t-1}M^{t-s-1}\\mathbf {\\mu }_s\\Vert \\nonumber \\\\&\\le & 
\\sum _{s=0}^{t-1}a^{t-s-1}\\gamma \\Vert \\mathbf {\\mu }_s\\Vert \\nonumber \\\\&\\le & (\\frac{2}{15\\sqrt{\\kappa }} + \\frac{1}{30\\kappa })\\gamma \\Vert \\mathbf {z}_0\\Vert \\sum _{s=0}^{t-1}a^{t-s-1}b^s \\nonumber \\\\&\\overset{\\text{a}}{\\le }&\\frac{1}{6\\sqrt{\\kappa }}\\gamma \\Vert \\mathbf {z}_0\\Vert 6\\sqrt{\\kappa }b^t \\nonumber \\\\&\\overset{\\text{b}}{\\le }& b^t\\gamma \\Vert \\mathbf {z}_0\\Vert $ (a) uses $\\sum _{s=0}^{t-1}a^{t-s-1}b^s \\le b^{t-1}\\sum _{s=0}^{t-1}{(\\frac{a}{b}})^{t-s-1}\\le 6\\sqrt{\\kappa }b^t$ .", "Combining (REF ) and (REF ), we have the bound on $\\Vert \\mathbf {z}_t\\Vert $ as: $\\Vert \\mathbf {z}_{t}\\Vert &\\le & \\Vert \\mathbf {M}^t\\mathbf {z}_{0}\\Vert + \\Vert \\sum _{s=0}^{t-1}\\mathbf {M}^{t-s-1}\\mathbf {\\mu }_{s}\\Vert \\nonumber \\\\&\\le & a^t\\gamma \\Vert \\mathbf {z}_0\\Vert + b^t\\gamma \\Vert \\mathbf {z}_0\\Vert \\nonumber \\\\&\\le & b^t2\\gamma \\Vert \\mathbf {z}_0\\Vert ,$ which completes the proof." ] ]
2107.01832
[ [ "Dissecting the Gaia HR diagram within 200 pc" ], [ "Abstract We analyse the high-quality Hertzsprung-Russell diagram (HRD) derived from Gaia data release 2 for the Solar Neighbourhood.", "We start building an almost-complete sample within 200 pc and for |b|>25 deg, so as to limit the impact of known errors and artefacts in the Gaia catalog.", "Particular effort is then put into improving the modelling of population of binaries, which produce two marked features in the HRD: the sequence of near-equal mass binaries along the lower main sequence, and the isolated group of hot subdwarfs.", "We describe a new tool, BinaPSE, to follow the evolution of interacting binaries in a way that improves the consistency with PARSEC evolutionary tracks for single stars.", "BinaPSE is implemented into the TRILEGAL code for the generation of \"partial models\" for both single and binary stellar populations, taking into account the presence of resolved and unresolved binaries.", "We then fit the Gaia HRD via MCMC methods that search for the star formation history (SFH) and initial binary fraction (by mass) that maximise the likelihood.", "The main results are (i) the binary fraction derived from the lower main sequence is close to 0.4, while twice larger values are favoured when the upper part of the HRD is fitted; (ii) present models predict the observed numbers of hot subdwarfs to within a factor of 2; (iii) irrespective of the prescription for the binaries, the star formation rate peaks at values 1.5e-4 Msun/yr at ages slightly above 2 Gyr, and then decreases to 0.8e-4 Msun/yr at very old ages." ], [ "Introduction", "Fitting of color-magnitude diagrams (CMD) is nowadays the gold standard tool for deriving the star formation histories (SFH) of nearby galaxies.", "The basic idea in CMD-fitting is that the sub-pieces of galaxies are made by the addition of “single-burst” stellar populations (or partial models, PMs) of different masses, ages, and metallicities (and sometimes different extinctions), which in turn can be simply modelled from the basic theory of stellar structure and evolution, with the addition of simulated observational errors.", "Several methods in the literature share these same principles, although largely differing on the way the PMs are modelled and combined to identify a best-fitting model.", "Hundreds of galaxy regions within 1 Mpc have had their CMDs studied in this way , , , , , , allowing to identify main events in their history, and opening the way for the so-called “near-field cosmology”.", "Once reliable distances and extinctions allow us to convert apparent magnitudes into absolute ones – hence allowing us to build the HR diagram (HRD) – CMD-fitting methods can also be applied to stars in the Solar Neighbourhood.", "This has been done since the first release of Hipparcos catalogue , , , , although with several limitations: First, the photometric completeness of the Hipparcos input catalog was ensured only for very bright stars (roughly for $V7.3$ ).", "Second, samples built for the HRD analysis contained very few stars at the magnitude level of the oldest main sequence turn-offs, and they completely ignored the lowest main sequence made by unevolved stars, owing to the limited accuracy of Hipparcos parallaxes.", "An emblematic case is presented by , who limited their analysis to a volume-limited sample and hence could not derive the SFH for ages older than 3 Gyr.", "Although alternative methods exist to constrain the SFH in the Solar Neighbourhood , , they are deemed to be 
more uncertain than the CMD-fitting involving stars in the main nuclear burning phases of stellar evolution.", "The situation has dramatically changed with the release of Gaia data release 2 which provided parallaxes more accurate than Hipparcos by a factor of $\\sim \\!20$ .", "In addition, Gaia DR2 includes accurate and homogeneous photometry, with uncertainties of the order of millimags down to apparent magnitudes of $G\\!\\sim \\!18$  mag.", "These improvements are evident in the beautiful HRDs illustrated in .", "In this paper, we aim at the interpretation of the Gaia DR2 HRD using the CMD-fitting method.", "Although some shortcomings in DR2 may still hamper a definitive analysis of the stellar content in the Solar Neighbourhood, its data clearly overcomes many limitations of the previous Hipparcos data and is of sufficient quality to allow us to verify the assumptions commonly used in the population synthesis models and CMD-fitting methods applied to external galaxies.", "A good overview of the possibilities opened by Gaia DR2 can be found in the independent CMD-fitting works by , , , and .", "Also worth of mention are the attempts to improve the determinations of the stellar initial mass function (IMF) from and .", "Specially important, in this regard, is the possibility of checking the prescriptions used to simulate unresolved binaries in CMD-fitting studies.", "Indeed, the Gaia DR2 HRD presents both a rich population of nearly equal-mass binaries distributed in a sequence parallel to the lower main sequence (which is commonly seen in HST data of Local Group dwarf galaxies and in star clusters, see for instance), and a sizeable population of hot subdwarfs (see ) originated from mass-transfer in close binaries , .", "Therefore, models of binary populations might aim at reproducing these HRD features too.", "Since the production rate of hot subdwarfs is expected to vary with the population age, reproducing their numbers cannot be separated from the problem of determining the best-fit SFH of a given volume-limited stellar sample.", "Conversely, the numbers of observed binaries might bring implications for the determination of the SFH, which are still to be fully explored in the literature.", "A preliminary investigation of the effect of unresolved binaries was recently reported by , who find that “ignoring the presence of unresolved binaries biases the inferred age-metallicity relation towards older ages and higher metallicities than the true values”.", "In this paper, we aim to do an additional step in this direction, presenting a new formalism for the analysis of the Gaia HRD that is suited to calibrate parameters in binary population models, and at the same time allows us to estimate their impact on the SFH determinations.", "Subsequent papers will develop these methods further, then taking full advantage of the expected improvements in the data from EDR3 and DR3.", "This paper is structured as follows.", "In Sect.", "we present our selection of Gaia DR2 data to build a – as far as possible – clean, volume-limited HRD for the Solar Neighbourhood.", "In Sect.", "we present the TRILEGAL population synthesis code used to model the Gaia HRD, concentrating on the new BinaPSE module to describe binary evolution and their products.", "More details are given in the Appendix , with examples of binary evolution and a few simulations of simple stellar populations focusing on the binaries.", "Sect.", "presents the modelling of the Gaia data in terms of a linear combination of simple stellar 
populations, and the method adopted to identify the best-fitting parameters.", "Sect.", "presents a few of the best-fitting models and the conclusions we can draw in terms of the recovery of SFH, and the presence of binaries.", "Our initial goal is to create a clean sample representative of the Solar Neighbourhood in the ${M}_{G}$ vs. ${G}_{\\rm BP}-{G}_{\\rm RP}$ HRD from Gaia DR2.", "Most importantly, we aim at having something close to a “complete volume-limited sample”, because it can be easily compared to the output of a population synthesis code – which by definition generates all stars in a given volume.", "We list below the considerations that lead us to this sample.", "Under the assumption of small parallax errors, we can approximate the distance of a star given its parallax as $d=1/\\pi $ .", "We then discard stars farther than a maximum distance $\\mbox{${d}_{\\rm max}$}$ , which corresponds to a minimum parallax $\\mbox{${\\pi }_{\\rm min}$}= 1/\\mbox{${d}_{\\rm max}$}$ .", "The absolute magnitude ${M}_{G}$ and colour of any star in the sample are given by $\\mbox{${M}_{G}$}&=& G + 5\\log \\pi + 5 - A_G \\\\ \\nonumber (\\mbox{$G_{\\rm BP}$}-\\mbox{$G_{\\rm RP}$})_0 &=& \\mbox{$G_{\\rm BP}$}-\\mbox{$G_{\\rm RP}$}- E(\\mbox{$G_{\\rm BP}$}-\\mbox{$G_{\\rm RP}$})$ where $G$ is the apparent magnitude and $A_G$ is the interstellar extinction.", "We recall that, in the limit of small extinction and for “median-temperature stars” such as the Sun, this extinction is related to the one in the $V$ band by $A_G=0.861\\,A_V$ .", "The colour excess is related to $A_V$ by $E(\\mbox{$G_{\\rm BP}$}-\\mbox{$G_{\\rm RP}$}) = 0.421\\,A_V$ (see , assuming 's extinction curve with $R_V=3.1$ ).", "We then just need to define limits that ensure these relations are accurate and provide HRDs well populated of stars.", "Trial and error experiments have led us to the following choices: ${d}_{\\rm max}$ is set to 200 pc, or equivalently to a parallax threshold of $\\mbox{${\\pi }_{\\rm min}$}=5$  mas.", "With typical errors in DR2 parallaxes being 0.04 mas , this ensures distance errors typically smaller than $\\sim \\!1$  %, and hence ${M}_{G}$ errors smaller than 0.02 mag.", "We note that the small offsets present in DR2 parallaxes, of the order of 0.03 mas , , , become insignificant compared to this 5 mas threshold.", "Inspection of the 3D extinction maps from reveals that for $d<200$  pc we have high extinction regions close to the Galactic Plane.", "We therefore limit the catalog to high galactic latitudes, with $|b|>25^\\circ $ .", "The remaining sample has $A_{G,\\mathrm {median}}\\simeq 0.03$  mag, with 16% and 84% percentiles at $0.009$  mag and $0.06$  mag, respectively, and an absolute maximum value of $0.75$  mag.", "Importantly, less than 4 per cent of the stars in this “nearby and out-of-plane” sample have extinctions larger than 0.16 mag.", "Such low extinction values ensure that the corrections in eq.", "REF are accurate even considering the possible errors present in the extinction maps from .", "A Gaia DR2 catalog with these $\\pi $ and $b$ limits contains 1 361 767 stars.", "Fig.", "REF presents its HRD between limits $-2<\\mbox{${M}_{G}$}<10.5$ and $-1.0<(\\mbox{$G_{\\rm BP}$}-\\mbox{$G_{\\rm RP}$})_0<3.0$ .", "As can be appreciated in the figure, it contains a significant number of stars in the main post-main sequence evolutionary phases.", "Just 3 star clusters from the catalogue are contained within our limits, the most notable being Praesepe." 
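A minimal sketch (not from the original paper) of how this initial selection and the conversion of eq. (REF ) could be implemented is given below; it assumes the astroquery package with access to the Gaia archive, uses the gaiadr2.gaia_source column names, and adopts a placeholder $A_V=0$ where in practice the extinction would be taken from the 3D maps mentioned above.

import numpy as np
from astroquery.gaia import Gaia      # assumption: astroquery is installed and the archive is reachable

# "Nearby, out-of-plane" selection: parallax > 5 mas and |b| > 25 deg,
# keeping the columns needed for the HRD and for the later quality cuts.
query = """
SELECT source_id, parallax, b,
       phot_g_mean_mag, bp_rp,
       astrometric_excess_noise, phot_bp_rp_excess_factor,
       phot_bp_mean_flux_over_error, phot_rp_mean_flux_over_error
FROM gaiadr2.gaia_source
WHERE parallax > 5 AND ABS(b) > 25
"""
tab = Gaia.launch_job_async(query).get_results()

plx  = np.asarray(tab['parallax'], dtype=float)        # mas
gmag = np.asarray(tab['phot_g_mean_mag'], dtype=float)
bprp = np.asarray(tab['bp_rp'], dtype=float)

# Extinction: the text takes A_V from a 3D extinction map; a zero placeholder is used here.
A_V = np.zeros(len(tab))
A_G = 0.861 * A_V
E_bprp = 0.421 * A_V

# Absolute magnitude and de-reddened colour of eq. (REF ), with the parallax in mas.
M_G = gmag + 5.0 * np.log10(plx) - 10.0 - A_G
bprp_0 = bprp - E_bprp

The quality cuts listed in the next subsection (astrometric_excess_noise, flux-over-error and the BP/RP excess factor) can then be applied to the same table before building the Hess diagram.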
], [ "Culling the initial catalog", "The HRDs of Fig.", "REF show all the beautiful features described by , but also some artefacts caused by known problems in the DR2 astrometry and photometry.", "It happens that some stars falling in our sample have significant errors in their parallaxes and/or apparent magnitudes, so much that they appear in the wrong place of the HRD.", "They might even be spurious objects coming from outside our maximum distance.", "To deal with these objects, we define the following additional cuts: \\item \\begin{verbatim} astrometric_excess_noise < 1 \\end{verbatim} \\label{cut_astrometry} \\item \\begin{verbatim} phot_bp_mean_flux_over_error > 10     AND phot_rp_mean_flux_over_error > 10\\end{verbatim}  \\label{cut_flux} \\item \\begin{verbatim} phot_bp_rp_excess_factor < polyLine(bp_rp,     -0.56,1.307, 0.03,1.192, 1.51,1.295, 4.31,1.808) \\end{verbatim} \\label{cut_excess} \\end{enumerate} Cuts \\ref{cut_parallax} and \\ref{cut_astrometry} aim to eliminate stars with unreliable parallaxes that passed the initial $\\pi>5$~mas cut, while \\ref{cut_flux} and \\ref{cut_excess} eliminate stars whose photometry is either uncertain or suspiciously inconsistent between the $G$, \\gbp, and \\grp\\ passbands. The last criterion is inspired by an example provided by \\cite{taylor18}.", "Figure~\\ref{fig:stars_lost} shows the fraction of stars retained after each one of these cuts, and after all the cuts, as a function of $G$ and $M_G$. As can be noticed, the most severe cuts are \\ref{cut_astrometry} and \\ref{cut_excess}, which cause the removal of a few percent of the stars over the interval $5\\gtrsim G\\gtrsim17$. The mean fraction of stars being retained after all cuts amounts to about 92\\%, but approaches 94\\% for the brighter stars, i.e. those with $G\\la11$ and $M_G\\la6$.", "Stars being eliminated by any of these cuts are plotted as red dots in the HRD of Fig.~\\ref{fig:bighrd}. It can be seen that without these dots, we have a significantly cleaner HRD. In particular, we eliminate a large number of stars between the main sequence (MS) and white dwarf (WD) sequence corresponding to stars with bad astrometric solutions. The primary parameter describing these stars is a high  \\verb$astrometric_excess_noise$, or $\\astnoise$. While many of the stars eliminated may be real binaries for which the orbital motion explains the bad astrometric solution, stars with $\\astnoise>1$ are concentrated along the Galactic Plane and clumped around sky locations for which the Gaia scanning is very poor so far, as discussed by \\cite{lindegren}.", "The drawback of applying these cuts is that we might be removing from the sample stars which in reality fulfil the $d<200$~pc and $|b|>25^\\circ$ conditions. Conversely, there might be stars fulfilling these conditions that appear at very different apparent distances and hence did not even enter in our initial catalog. This situation might improve in the next Gaia data releases.", "Finally, let us consider the ranges of absolute magnitudes being comprised in the catalog.", "\\begin{enumerate}     \\item The faintest limit is set by the maximum distance and extinction in the sample. With $\\dmax=200$~pc, the maximum true distance modulus is $\\mu_0=6.505$~mag, and $>96$~\\% of the stars have a maximum distance modulus in the $G$ band of $\\mu<(6.505+0.16)=6.665$~mag. 
Therefore it is fair to say that a cut at $G<17$~mag represents a sample which is essentially complete for all absolute magnitudes $\\gmag<(17-6.665)=10.335$~mag.", "\\item At the brightest limit of $G=5$~mag, we either are dealing with the closest faint stars for which the extinction is null, or with the farthest bright stars, a small fraction of which might have some appreciable extinction. Anyway, if we consider the extinction correction as accurate, we have stars coming from absolute magnitudes equal to $\\gmag=5-5\\log d+5=10-5\\log d$, that is, there is a one-to-one relation between the absolute magnitude of the observed stars, and the minimum distances being sampled.", "\\end{enumerate} Summarising, the distances being sampled for the $5<G<17$ interval are: \\begin{equation}     10^{-0.2 (M_G-10)}<(d/\\mathrm{pc}) <200\\,\\,\\,\\, \\mathrm{for~}\\gmag<10.335 \\mathrm{~mag} \\, \\end{equation} and the fraction of the total ``target volume'' is \\begin{equation}     F = 1 - 10^{-0.6 (M_G-10)} / 200^3 \\,.", "\\end{equation} The latter is actually close to 1 for an ample interval of the $\\gmag$ of interest. For instance, stars with $\\gmag=3$~mag (close to the oldest main sequence turn-off) will be sampled at the entire $25<(d/\\mathrm{pc})<200$ interval, which represents 99.8~\\% of the target volume, while stars with $\\gmag=0$ (in the red clump) will be at $100<(d/\\mathrm{pc})<200$, which samples 87.5~\\% of the target volume. These numbers suggest that we can even use $F$ as a completeness factor as a function of $\\gmag$, as we do in the following. We note that $F$ falls to zero for stars brighter than $\\gmag=-1.5$~mag. Moreover, we note that the accuracy of the extinction correction is relevant only for the samples observed prevalently at large distances, i.e., indicatively, for the stars with $-1.5<\\gmag<0$.", "Therefore, we can make use of the entire range of absolute magnitudes accessible with this catalog, from $-1.5<\\gmag<10.365$. As can be seen in the HRD of Fig.~\\ref{fig:bighrd}, this catalog provides a good sampling of the upper main sequence and of the entire RGB, RC included. At the faint end, it contains the brightest part of the beautiful WD cooling sequence delineated in \\cite{babu}. It also samples very well the lower main sequence, down to initial masses of $\\mini\\lesssim0.45$~\\Msun.", "In order to ensure a completeness larger than 90\\% and limit the impact of possible variations in the IMF of low-mass stars, we initially limit the sample to $M_G<7.5$~mag, as detailed below.", "\\section{Models} \\label{sec:models}   Now that our data has been set, our goal is to reproduce the Gaia DR2 HRD as a sum of single-burst stellar populations incorporating all the known errors and biases. To do that, we first need to build a set of single-burst stellar populations, hereafter called ``partial models'', or PMs. The subsequent step is to combine the PMs to provide a good fitting of the observed HRD. PMs can be produced in several forms, such as catalogues, luminosity functions, and Hess diagrams. In the following, we will refer to PMs mainly as Hess diagrams covering the limits defined from the Gaia DR2 HRD. However, the same PMs can be re-generated later in other forms, for the subsequent analyses.", "\\subsection{Single stars with TRILEGAL}   The primary tool we use to create PMs is TRILEGAL \\cite{girardi05,girardi12}, a generic code to simulate stellar populations that has been widely used in the literature. 
\section{Models}
\label{sec:models}

Now that our data set has been defined, our goal is to reproduce the Gaia DR2 HRD as a sum of single-burst stellar populations, incorporating all the known errors and biases. To do that, we first need to build a set of single-burst stellar populations, hereafter called ``partial models'', or PMs. The subsequent step is to combine the PMs to provide a good fit to the observed HRD. PMs can be produced in several forms, such as catalogues, luminosity functions, and Hess diagrams. In the following, we will refer to PMs mainly as Hess diagrams covering the limits defined by the Gaia DR2 HRD. However, the same PMs can be re-generated later in other forms, for the subsequent analyses.

\subsection{Single stars with TRILEGAL}

The primary tool we use to create PMs is TRILEGAL \cite{girardi05,girardi12}, a generic code to simulate stellar populations that has been widely used in the literature. It makes use of extensive libraries of stellar evolutionary tracks from the PARSEC-COLIBRI teams (see Sect.~\ref{sec:binapse_code} below). Stars are sampled according to the initial mass function (IMF) and user-specified distributions of ages, metallicities, and distances. The simplest way of specifying these distributions is to directly provide the SFH, intended as the star formation rate as a function of population age, SFR$(t)$, plus the age--metallicity relation, $\feh(t)$. Then, the simulated stars are converted into magnitudes via bolometric correction tables and extinction coefficients from the YBC code \cite{chen19}.

Until recently, TRILEGAL was able to simulate only single stars, and non-interacting binaries via the simple addition of the light output from the two binary components. In the following, we describe an important new addition to TRILEGAL, which allows us to introduce interacting binaries in the simulations.

\subsection{BinaPSE: the new TRILEGAL module}
\label{sec:binapse_code}

The increasing relevance of interacting binary stars in modern astronomy motivated us to expand the TRILEGAL capabilities by linking it with the BSE code \cite{hurley02}, a popular binary evolution code for population synthesis. However, it is not possible to use TRILEGAL and BSE by simply running them in sequence, because they do not share the same evolutionary tracks, i.e. the predicted stellar loci on the HRD and the star counts of single and binary stars would not be consistent with one another. More precisely, TRILEGAL interpolates among pre-computed stellar evolutionary grids (i.e. it implements a grid-based method) to match the mass, metallicity, and age of the generated stars. On the other hand, BSE evolves the binary components by following analytic formulae which approximate a different set of evolutionary grids \cite{hurley2000}. Although the use of analytic formulae guarantees a very fast computation, we decided to revise BSE and transform it into a grid-based code, in order to satisfy our accuracy requirements and to make future changes of evolutionary grids much easier. The BSE revision led to the creation of a new TRILEGAL module that we named BinaPSE. BinaPSE shares with TRILEGAL the evolutionary grids and interpolation routines, but preserves the binary evolution methodology described in \cite{hurley02}. Therefore, when the two components interact via mass transfer or a common envelope (CE), at each time step the remnants are determined as in the old BSE code, but they are located in the evolutionary grids as in the original TRILEGAL code.

The evolutionary grids used in BinaPSE are:
\begin{itemize}
    \item the PARSEC v1.2S evolutionary tracks provided by \cite{bressan12}, revised as in \cite{bressan15} and extended as in \cite{chen15};
    \item new evolutionary tracks of naked helium stars, computed with the latest version of PARSEC \cite{costa19a,costa19b};
    \item the COLIBRI TP-AGB tracks \cite{Marigo_etal_13}, described by \cite{rosenfield16}, which determine the initial-final mass relation (IFMR);
    \item up-to-date grids for post-asymptotic giant branch stars and carbon-oxygen white dwarfs (CO-WD) from the models described in \cite{Bertolami_2016} and \cite{Renedo_2010}, respectively.
\end{itemize}
No evolutionary grids have been included in TRILEGAL for helium white dwarfs (He-WD), oxygen-neon white dwarfs (ONe-WD), and neutron stars (NS).
We plan to do this improvement in the next future, but in the meanwhile we use the same analytic formulae of \\cite{hurley2000} to manage the evolution of these stars.", "In Appendix~\\ref{sec:examples} we provide examples of the evolution obtained with BinaPSE, compared with the one obtained with BSE. They show that the use of PARSEC tracks in the binary evolution changes not only the position of the main evolutionary features in the HR diagrams, compared to BSE, but also changes the final fate for a fraction of the binaries.", "With TRILEGAL and BinaPSE we can perform many kinds of simulations, such as: \\begin{enumerate}     \\item Any Galaxy field, limited in apparent magnitude and/or in maximum distance from the Sun, with a given initial binary fraction.", "\\item An object (galaxy or star cluster) at fixed distance with a given SFH and initial binary fraction. This is the option used to create the grid of PMs presented in Section~\\ref{sec:PMgrid}.", "\\item A set of binary stars with a given distribution of initial parameters. This option is used in Appendix~\\ref{sec:examples}.", "\\end{enumerate}   \\subsection{Probability distributions of initial parameters for binary systems} \\label{sec:probdist}   The mass of the single stars in TRILEGAL is simply derived from the IMF, which is assumed to be independent of all other parameters. In this work we adopt the IMF from \\cite{kroupa02}.", "To simulate the binaries, we adopt three different prescriptions from the literature. The simplest one is also the most frequently used in the analyses of CMDs of nearby galaxies: It assumes that the binary components do not interact during their lifetimes, and that they present a flat distribution of mass ratios, $q=m_{\\mathrm{i},2}/m_{\\mathrm{i},1}$. In practice, in this case the binaries are made of two single stars selected from the same isochrone, and no orbital parameter is specified. Since the large majority of binaries with small mass ratios have secondaries too faint to compete with the light of the primary\\footnote{The primary is defined as the initially more massive component.}, only binaries with a mass ratio above a given threshold, which we set at 0.7, are simulated.", "More realistic distributions include the possibility of interacting binaries and hence prescriptions for other initial parameters, such as the period $P$ and the eccentricity $e$, across the entire range of possible mass ratios. As our reference distribution of this kind, we adopt the Monte Carlo model proposed by \\cite{Eggleton2006}, which summarises decades of work on the statistics and evolution of binary systems: Let $m_{\\mathrm{i},1}$ and $m_{\\mathrm{i},2}\\leq m_{\\mathrm{i},1}$ be the initial masses of the two components of a binary system.", "The distributions of $P$, $q$, and $e$ are given by \\begin{equation}     P=\\frac{5\\cdot 10^4}{m_{\\mathrm{i},1}^2}\\left(\\frac{X_1}{1-X_1}\\right)^\\alpha,\\qquad\\alpha=\\frac{3.5+0.13\\,m_{\\mathrm{i},1}^{1.5}}{1+0.1\\,m_{\\mathrm{i},1}^{1.5}} , \\label{eq:period} \\end{equation} \\begin{equation}     q=1-X^\\beta_2,\\qquad \\beta=\\frac{2.5+0.7\\beta'}{1+\\beta'}, \\qquad\\beta'=\\frac{\\sqrt{P}(m_{\\mathrm{i},1}+0.5)}{10} , \\label{eq:massratio} \\end{equation} \\begin{equation}     e=X_3 ,  \\label{eq:eccentricity} \\end{equation} where $X_1$, $X_2$ and $X_3$ are independent random variables uniformly distributed in the $[0,\\,1]$ interval.", "In addition, we implement the more recent distribution of binary masses and orbital parameters from \\cite{moe17}. 
It combines empirical evidence from many different kinds of binaries and has a functional form far more complicated than the \\cite{Eggleton2006} one -- for instance it includes a dependence of the binary fraction and mass ratio on the mass of the primary. We use the \\cite{moe17} formulation to produce the binary populations associated with a \\cite{kroupa02} IMF, in a way similar those produced in the \\cite{Eggleton2006} case. We verify that this prescription produces a fraction of nearly-equal-mass binaries, or $F_\\mathrm{twin}$ (defined as the initial fraction of binaries with $q>0.95$ among all binaries with $q>0.3$), of 0.08. This fraction is comparable with the empirical values between 0.03 and 0.1 derived by \\cite{elbadry19} from main sequence wide binaries in Gaia DR2.", "\\subsection{Bolometric corrections} \\label{sec:bc}   Initially, TRILEGAL produces synthetic stars using only their main intrinsic properties, like the luminosity $L$, effective temperature \\Teff, current mass $m$, surface gravity $g$, surface chemical composition, etc. Then, these parameters are used to compute the photometry in the sets of filters of interest, by applying the bolometric corrections and extinction coefficients extracted from the YBC database \\cite{chen19}. The latter are interpolated inside a huge grid of pre-computed tables, derived from libraries of model atmospheres and their spectral energy distributions, which is largely based on model atmospheres from the ATLAS9 \\cite{castelli03} and PHOENIX \\cite{allard12} codes. Main parameters in the interpolation are \\Teff, \\logg, and other parameters related to the surface chemical composition, including the initial metallicity \\feh, and the surface abundance of CNO elements for AGB stars.", "To deal with the products of close binary evolution, such a procedure has to be complemented with bolometric correction tables for He-rich stars. This is especially important for stars that lose their hydrogen-rich envelope through a CE phase, as illustrated in Section~\\ref{sec:hestars} below.", "To model the bolometric correction for the helium rich stars, we compute a grid of spectral energy distributions for pure hydrogen+helium atmospheres by using the Tlusty code \\cite{Hubeny1988,Hubeny1995}. This new grid covers the ranges of $30\\,000<\\Teff/\\mathrm{K}<100\\,000$, $5<=\\logg [\\mathrm{cm\\,s^{-2}}]\\leq9$, and $X/Y=[0.,0.1,...,1.0]$. Bolometric corrections and extinction coefficients for all filter sets of interest are then produced with the YBC code \\cite{chen19}. Whenever BinaPSE produces a He-rich hot star, these tables are interpolated using $(\\log\\Teff,\\logg, X/Y)$ as the interpolation parameters.", "\\section{Methods} \\label{sec:method}   \\begin{table}     \\caption{Some properties of the partial models. The first 3 columns refer to all PMs used in this work, while the other columns refer to the PMs along the reference AMR defined in Sect.~\\ref{sec:ramr}. $N_\\mathrm{HSds}$ refers to the number of hot subdwarfs defined as in Fig.~\\ref{fig:bighrd}, derived with a constant SFR$(t)=1\\Msun\\mathrm{yr}^{-1}$, from the BinaPSE code and for the \\cite{Eggleton2006} distribution of initial binary parameters. For comparison, $N_\\mathrm{HSds}^\\mathrm{BSE}$ presents results from the BSE code. 
}", "\\centering     \\makebox[\\columnwidth][c]{\\begin{tabular}{l|lllllll}         \\hline         $i$ & \\logtyr\\ & $\\Delta t$ & $Z_0$ & $N_\\mathrm{HSds}$ & $N_\\mathrm{HSds}/\\Delta t$ & $N_\\mathrm{HSds}^\\mathrm{BSE}$ \\\\             & interval &   (yr)     &   (initial)    &                   \\\\         \\hline          1 & 6.6--7.1  & 8.61e+06 & 0.02146 &     0.0 & 0.0      &     0.0 \\\\          2 & 7.1--7.3  & 7.36e+06 & 0.02145 &    27.0 & 3.66e-06 &     0.0 \\\\          3 & 7.3--7.5  & 1.17e+07 & 0.02144 &   170.9 & 1.46e-05 &     0.0 \\\\          4 & 7.5--7.7  & 1.85e+07 & 0.02142 &   180.6 & 9.76e-06 &    45.2 \\\\          5 & 7.7--7.9  & 2.93e+07 & 0.02139 &   823.0 & 2.81e-05 &     0.0 \\\\          6 & 7.9--8.1  & 4.65e+07 & 0.02134 &  1814.3 & 3.91e-05 &    56.6 \\\\          7 & 8.1--8.3  & 7.36e+07 & 0.02126 &  9161.2 & 1.24e-04 &  1167.8 \\\\          8 & 8.3--8.5  & 1.17e+08 & 0.02114 & 20369.2 & 1.75e-04 &  4130.7 \\\\          9 & 8.5--8.7  & 1.85e+08 & 0.02096 & 42909.9 & 2.32e-04 &  8356.4 \\\\         10 & 8.7--8.9  & 2.93e+08 & 0.02066 & 81260.1 & 2.77e-04 & 21120.5 \\\\         11 & 8.9--9.1  & 4.65e+08 & 0.02020 & 39147.3 & 8.73e-05 & 27233.0 \\\\         12 & 9.1--9.3  & 7.36e+08 & 0.01949 & 13487.1 & 1.83e-05 & 17983.6 \\\\         13 & 9.3--9.5  & 1.17e+09 & 0.01842 & 17098.8 & 1.47e-05 &  4274.6 \\\\         14 & 9.5--9.7  & 1.85e+09 & 0.01684 &  2259.1 & 1.22e-06 &  6776.3 \\\\         15 & 9.7--9.9  & 2.93e+09 & 0.01461 &  3578.9 & 1.22e-06 &     0.0 \\\\         16 & 9.9--10.1 & 4.65e+09 & 0.01167 &  5674.0 & 1.22e-06 &  5673.9 \\\\         \\hline         $\\sum_i$ & & 1.26e+10 & & 237961.4 & & 96818.4 \\\\         \\hline     \\end{tabular}}     \\label{tab:PMs} \\end{table}   \\subsection{The grid of partial models} \\label{sec:PMgrid}   Although any age and metallicity values can be simulated with our codes, it is convenient to define PMs that cover a limited interval of such ages and metallicities. For the present work, we define PMs in 16 age intervals comprising all $\\logtyr$ values between 6.9 and 10.1. Their properties are listed in Table~\\ref{tab:PMs}. Each PM covers age intervals of $\\Delta\\log t=0.2$~dex, with the exception of the first one which covers 0.5~dex. PMs are distributed along a given reference age-metallicity relation (RAMR). Our initial guess, to be refined in Sect.~\\ref{sec:ramr} below, is to assume an almost-flat RAMR, where models have a metallicity close to solar and a constant Gaussian metallicity dispersion of $\\sigma=0.1$~dex, at all ages. This guess is motivated by the comparison with stellar models in Fig.~\\ref{fig:rc}, and also by the correlation between ages and mean metallicities derived from nearby samples of red giants observed spectroscopically \\cite{haywood13,feuillet16,lin20}. Only for very old stars there are hints of the mean metallicity shifting to values a few tenths of dex smaller than the solar value.", "PMs are built separately for single and binary populations, for the same age-metallicity bins, and for the same total initial mass of stars. This approach allows us to define the fraction of stellar mass that is used to form binaries, \\fbin. 
This quantity is independent of age. A complex population containing binaries can then be derived by simply adding single and binary PMs with
\begin{equation}
    \mathrm{PM}_i = (1-\fbin)\,\mathrm{PM}_{\mathrm{sin},i} +
    \fbin\, \mathrm{PM}_{\mathrm{bin},i}
    \label{eq:PMsinbin}
\end{equation}
In this work, binaries are initially treated as unresolved binaries, and their magnitudes represent either the light of both components added together, or the light from already-merged stars. This assumption will be improved starting from Sect.~\ref{sec:unresolved} below.

In addition, we compute a PM corresponding to the halo stars in the Solar Neighbourhood. This component is simply derived by using the standard calibration of the halo in TRILEGAL \cite{girardi12}, inside the 200-pc distance limit and for $|b|>25^\circ$. This is a component that can be kept fixed in our method, since it is essentially an isotropic component whose mean density has been well calibrated using faint star counts in deep surveys \cite{groenewegen02}. Its age and metallicity distribution is also sufficiently well known and does not need a recalibration. Moreover, its contribution to the local star counts is so small (about 4000 stars inside our magnitude limits) that possible errors in its description are not critical.

\subsection{Including photometric and astrometric errors}

We do our best to incorporate the known errors in Gaia DR2 into our models. We start by deriving the median values of the errors in all observables involved in our work (namely $G$, \gbp, \grp, and $\pi$), as a function of the apparent magnitude $G$. They are computed using the quantities \verb$phot_[F]_mean_flux_error$ and \verb$parallax_over_error$ in the Gaia DR2 catalogue, where \verb$[F]$ stands for the three different filters.

Then, we generate synthetic samples of stars uniformly distributed within a sphere of radius 200~pc, and uniformly distributed across the Gaia HRD (within the limits $-2<\gmag<12$ and $-1<\gcolor<4$). For each fake star, 1$\sigma$ errors are selected from the observed relations involving the apparent magnitude, and then used to generate errors from Gaussian distributions. Errors in apparent magnitudes and parallaxes are then converted into $M_G$ and \gcolor\ errors. This process is repeated millions of times to give us a distribution of errors to be applied to the original PMs, in every small cell of the Hess diagram. This process is akin to the ``artificial star tests'' usually performed in the CMD-fitting of external galaxies and star clusters -- but with the significant difference that we use the likely errors as derived from the Gaia DR2 catalog, instead of inserting the fake stars in the original Gaia images.

The comparison with the original error-free PMs reveals that the impact of the simulated errors is quite modest. In the entire $M_G$ range, the only stars to be significantly spread in the Hess diagram are the brightest RGB and TP-AGB stars, which are nearly absent in the $d<200$~pc sample, and therefore have negligible weight in the HRD-fitting process to be discussed below.

We then repeat the generation of synthetic samples, now extending the distances out to 250~pc. This simulation allows us to estimate the number of stars that, while actually outside the 200~pc distance limit, turn out to be ``scattered'' inside this limit owing to the errors in parallax -- and vice-versa.
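The essence of this test can be reproduced with a short Monte Carlo sketch like the one below. We stress that it is purely illustrative: it assumes a uniform spatial distribution and a constant parallax error of 0.1~mas, whereas in our actual procedure the errors are drawn from the magnitude-dependent relations measured in the Gaia DR2 catalogue.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# true distances, uniform in volume out to 250 pc
d_true = 250.0 * rng.random(n)**(1.0 / 3.0)
plx_true = 1000.0 / d_true              # parallax in mas

# illustrative, constant parallax error (the real test uses the
# magnitude-dependent errors measured in the DR2 catalogue)
plx_obs = plx_true + rng.normal(0.0, 0.1, n)
d_obs = 1000.0 / plx_obs

inward  = np.sum((d_true > 200.0) & (d_obs <= 200.0))
outward = np.sum((d_true <= 200.0) & (d_obs > 200.0))
print(f"scattered inwards: {inward}, outwards: {outward}")
\end{verbatim}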
Since the scattering from outside-in is more frequent than the scattering from inside-out, this effect could lead to an overestimation of the stellar density in the 200~pc volume, especially at fainter magnitudes, for which the parallax errors are large. However, we verified that this effect is not a serious problem for our sample: in the entire magnitude interval $-1.5<\gmag<10$, just 0.6 per cent of the stars are likely to be misplaced in this way. Moreover, the effect is almost symmetrical: the number of stars moving inwards is just 2.5 per cent larger than the number of stars moving outwards. Only at very faint magnitudes (say $\gmag>15$, far fainter than the limits adopted in our analyses) does the effect become relevant, owing to the increased parallax errors.

\subsection{Defining a model and its likelihood}

Summarising, a total of $i=1,\ldots,16$ age bins are considered. Every one of these PMs is rescaled so as to correspond to the initial mass produced in that age interval by a constant star formation rate of 1~$\Msun\, \mathrm{yr}^{-1}$. Moreover, single and binary PMs are combined with Eq.~\ref{eq:PMsinbin}.

Under these conditions, a model can be defined as
\begin{equation}
    \mathrm{M} = \mathrm{PM}_0 + \sum_i a_i \,\mathrm{PM}_i
    \label{eq:PMstot}
\end{equation}
where the coefficients $a_i$ give the star formation rate as a function of age, directly in units of $\Msun\, \mathrm{yr}^{-1}$. $\mathrm{PM}_0$ is the halo PM, which is kept fixed. The fact that this is a simple linear combination makes the computation extremely fast, even for long Markov chains (see below).

As a variation to this scheme, we can use sets of PMs computed, at all ages, with small shifts in metallicity, $\Delta\feh$, with respect to the RAMR. Usually, we adopt $\Delta\feh=0.12$~dex, and compute PMs at three multiples of $\Delta\feh$ above the RAMR and three below it -- hence spanning a total range of 0.72~dex in metallicity at every age. These sets of PMs are tagged as $\mathrm{PM}_i^{+1}$, $\mathrm{PM}_i^{+2}$, $\mathrm{PM}_i^{+3}$, $\mathrm{PM}_i^{-1}$, $\mathrm{PM}_i^{-2}$ and $\mathrm{PM}_i^{-3}$.

To use them, Eq.~\ref{eq:PMstot} is modified to
\begin{equation}
    \mathrm{M} = \mathrm{PM}_0 + \sum_i a_i
        \left[ (1-f_i)\, \mathrm{PM}_i^- + f_i\, \mathrm{PM}_i^+ \right]
    \label{eq:PMstotmet}
\end{equation}
where $\mathrm{PM}_i^-$ and $\mathrm{PM}_i^+$ are the two PMs whose metallicities, $\feh^+$ and $\feh^{-}$, bracket the desired one at that age, and $f_i=(\feh-\feh^{-})/(\feh^+-\feh^{-})$. In this way, we simulate small changes in metallicity by means of linear combinations of PMs, while keeping the computational speed in the calculation of $\mathrm{M}$. We define 16 coefficients $z_i$, aimed at describing the change in metallicity at every age bin.

In all cases, our codes produce Hess diagrams with the same limits as the observed one. For the data-model comparison, we adopt the following definition of the likelihood ratio derived from a Poisson distribution:
\begin{equation}
    \ln \mathcal{L} = \sum_k \left( O_k-M_k -
        O_k \ln\frac{O_k}{M_k}
    \right)
    \label{eq:likelihood}
\end{equation}
where $O_k$ and $M_k$ are the observed and model star counts, respectively, in the HRD bins of index $k$ \cite{vanhollebeke09, dolphin02}.
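A minimal sketch of how this likelihood can be evaluated for a linear combination of partial models (cf. Eq.~\ref{eq:PMstot}) is the following; the arrays \verb$pm$, \verb$pm0$ and \verb$obs$ are random placeholders standing in for the actual Hess diagrams:
\begin{verbatim}
import numpy as np

def ln_like(obs, model, tiny=1e-30):
    # Poisson likelihood ratio defined above; bins with O_k = 0
    # contribute -M_k, since x ln(x) -> 0 for x -> 0
    model = np.clip(model, tiny, None)
    term = obs - model
    nz = obs > 0
    term[nz] -= obs[nz] * np.log(obs[nz] / model[nz])
    return term.sum()

rng = np.random.default_rng(0)
nage, nbins = 16, 1000
pm  = rng.random((nage, nbins))   # placeholder partial-model Hess diagrams
pm0 = rng.random(nbins)           # fixed halo component
a   = np.full(nage, 1.0)          # SFR(t) coefficients
model = pm0 + a @ pm              # M = PM_0 + sum_i a_i PM_i
obs = rng.poisson(model)          # fake "observed" star counts
print(ln_like(obs, model))
\end{verbatim}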
For HRD bins containing significant numbers of both observed and model stars, $-\ln\mathcal{L}$ approaches half of the classical $\chi^2$ (or Gaussian likelihood ratio) in which the standard deviation is given by the square root of the observed star counts \cite{dolphin02}.

\subsection{Finding the best-fit model}

Equation~\ref{eq:PMstotmet} comprises a maximum of 33 coefficients to be determined (16 $a_i$, 16 $z_i$, and \fbin). Actually, this number can be reduced by imposing that the same $z_i$ is valid for many age bins -- since, for instance, young populations are expected to be chemically homogeneous. We initially simplify the problem by imposing that \textit{all} age bins have the same $z_i$, hence reducing the number of coefficients to 18.

The problem is then easily solvable by means of a Nelder-Mead minimization of $-\ln \mathcal{L}$. We use the routine taken from \cite{nr}, which typically converges to the likelihood maximum in a matter of seconds.

There is no guarantee that the solution found by the Nelder-Mead step is not trapped in a local minimum. To find more likely solutions and estimate the errors, we proceed with a Markov Chain Monte Carlo (MCMC) method. Two new codes are used in this case: either the quick \verb$trifit$ code in C, which implements a Metropolis-Hastings \cite{metropolis53} algorithm following the guidelines by \cite{hogg18}, or the \verb$trimcmc$ Python code built around the \verb$emcee$ package by \cite{emcee}, which explores the parameter space more efficiently in the case of long Markov chains. In the first case we start with $\sim\!500$ walkers from the solution indicated by the Nelder-Mead step, while in the second case we pick up from the solution determined by \verb$trifit$ using $100$ walkers.
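To give a concrete flavour of this step, the sketch below shows how such an exploration could be set up with \verb$emcee$, sampling only the 16 $a_i$ coefficients with flat, non-negative priors and with the metallicity offsets and \fbin\ kept fixed. It is a toy version running on random placeholder data, not our actual \verb$trimcmc$ implementation:
\begin{verbatim}
import numpy as np
import emcee

rng = np.random.default_rng(0)
nage, nbins = 16, 1000
pm  = rng.random((nage, nbins))          # placeholder partial models
pm0 = rng.random(nbins)                  # fixed halo component
obs = rng.poisson(pm0 + pm.sum(axis=0))  # fake observed Hess diagram

def ln_prob(a):
    if np.any(a < 0.0):                  # flat prior: a_i >= 0
        return -np.inf
    model = np.clip(pm0 + a @ pm, 1e-30, None)
    nz = obs > 0
    return np.sum(obs - model) - np.sum(obs[nz] * np.log(obs[nz] / model[nz]))

nwalkers, nsteps = 100, 2000
p0 = np.abs(rng.normal(1.0, 0.1, size=(nwalkers, nage)))  # e.g. around a Nelder-Mead guess
sampler = emcee.EnsembleSampler(nwalkers, nage, ln_prob)
sampler.run_mcmc(p0, nsteps)
a_median = np.median(sampler.get_chain(discard=nsteps // 2, flat=True), axis=0)
\end{verbatim}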
Given the succession of different filter transmission curves and corrections provided for Gaia DR2 \\cite{evans18, weiler18, maiz18}\\footnote{\\url{https://www.cosmos.esa.int/web/gaia/dr2-known-issues}}, and now for Gaia EDR3\\footnote{\\url{https://www.cosmos.esa.int/web/gaia/edr3-passbands}}, we cannot exclude that such offsets are present in our analysis. Fortunately, we find that the derived $\\Delta\\gmag$ and $\\Delta(\\gbp-\\grp)_0$ never exceed 1 bin size of our Hess diagrams, which means they are always less than a few hundredths of magnitude.", "In the following, we will refer to the final distribution of $a_i$ and $z_i$ coefficients as our SFH solution. This solution can be easily decomposed into the star formation as a function of age, SFR$(t)$, and in the age-metallicity relation (AMR), $\\feh(t)$, and their confidence intervals.", "\\subsection{Metallicity range and definition of the RAMR} \\label{sec:ramr}   \\begin{figure} \t\\includegraphics[width=\\columnwidth]{figs_rc_zoom_extended.png}     \\caption{Zoom in the RC region of the HRD. The extinction-corrected data is marked with black dots, the original data with gray dots.  For comparison, the arrow illustrates the reddening vector corresponding to a range in extinction of $\\Delta A_V=0.2$~mag, which exceeds the correction applied to the bulk of the data. The coloured lines present the mean location of the RC for several values of age and metallicity, as derived from PARSEC isochrones: Each line is for a different metallicity (as in the legend), and comprises 5 values of $\\log\\mathrm{(age/yr)}$ going from 9.3 to 10.1 at steps of 0.2 dex (the faintest point in a sequence is the oldest one). It can be seen that the RC stars follow a slope roughly consistent with a range of metallicities, with the bulk of RC stars having colours compatible with the $-0.4<\\feh<+0.5$ interval.}", "\\label{fig:rc} \\end{figure}   Figure~\\ref{fig:bighrd} presents the Gaia data after the correction by extinction using the \\cite{lallement18} 3D extinction map. The RC appears about 0.3 mag wide in color, and with a slope very similar to the reddening vector. This is illustrated with more detail in the zoomed HRD of Fig.~\\ref{fig:rc}.", "As can be appreciated, the reddening correction has a marginal effect in this HRD. In the latter figure, we overplot stellar models giving the mean position of the RC for several metallicities and in sequences covering very wide age intervals. This comparison clarifies that the RC slope closely follows the mean slope expected for stellar populations spanning a range of metallicities, which then appears as the main factor driving the RC spread. Models in the range $-0.4<\\feh<0.5$ suffice to explain the bulk of RC stars. We take this a first indication of the metallicity range needed to model these data.", "\\begin{figure}     \\includegraphics[width=\\columnwidth]{figs_RAMR.png}     \\caption{The ages and metallicities adopted in this work, compared to empirical data.", "The central red line shows the mean RAMR from eq.~\\ref{eq:ramr}, together with the $\\pm1\\sigma$ interval of its Gaussian width (light red-shaded area). Similar sequences are then drawn for the PM sequences labelled as PM$_i^{\\pm1}$, PM$_i^{\\pm2}$, and PM$_i^{\\pm3}$ (in orange, green and blue, respectively). We note that our seven PM sequences completely fill the area between the PM$_i^{\\pm3}$ limits, with adjacent PMs largely overlapping in metallicity. Also, all sequences extend to younger ages, not shown in this figure. 
The dots with error bars represent the 598 giants and subgiants with ages and metallicities measured by \cite{feuillet16} that are inside our 200~pc distance limit (according to Gaia DR2 parallaxes). The blue dots identify the 384 stars with $0.8<\gcolor<1.6$ and $-0.4<\gmag<1.2$, that is, around the red clump region of the HRD (see Fig.~\ref{fig:rc}). 97 per cent of these stars are covered by the $\pm1\sigma$ limits of our PMs.}
\label{fig:ramr}
\end{figure}

We then explore models with a large range of different RAMRs, centred on this metallicity range, verifying whether they converge to acceptable solutions. This is performed using the entire $\gmag<7.5$~mag interval, and using the code \verb$trifit$ limited to 2000 steps. Among the many different possibilities tested, we soon opted for a family of models and solutions in which the reference AMR varies linearly as a function of age, with a slope $\alpha$ and the metallicity being solar at the age of the Sun's birth\footnote{We note that this is an approximation, because the Sun was not born with its present surface composition. According to the PARSEC tracks, the present Sun with $(Z_\odot,Y_\odot)=(0.0152,0.2485)$ derives from a star with initial $(Z,Y)=(0.01774,0.28)$ \cite{bressan12}, and hence with $\feh\simeq0.02$~dex. This small offset can be taken into account in our method, by means of the metallicity shifts that are derived at all ages.}:
\begin{equation}
    \feh = \alpha(t-4.5\mathrm{Gyr})
    \label{eq:ramr}
\end{equation}
Not surprisingly, we find that neither models without any metallicity change, nor models in which the total RAMR metallicity change between young and old ages exceeds 0.6~dex, provide good fits to the data. In contrast, models with $\alpha$ values comprised between $-0.2$ and $-0.6$ dex/12~Gyr present a marked decrease in their $-\ln\mathcal{L}$, achieving a final AMR in which the old disk is $\sim\!0.3$~dex more metal poor than the present young disk.

We therefore define our RAMR to be the one with $\alpha=-0.4$~dex/12~Gyr. We emphasize that the adopted PMs cover a $\pm0.36$~dex wide interval around this RAMR; therefore this choice by no means represents a strong limitation to the fitting solutions to be discussed below.

The RAMR, and the total range of metallicities explored around it, are plotted in Fig.~\ref{fig:ramr}. For comparison, we overplot the stellar sample with APOGEE spectroscopic metallicities\footnote{We use the [M/H] from APOGEE, which closely corresponds to [Fe/H].} and their Bayesian-derived ages from \cite{feuillet16}, limited to stars within 200~pc. We verify that these data uniformly sample the $\gcolor>1$ interval of RC stars in the Solar Neighbourhood, which was illustrated in Fig.~\ref{fig:rc}. One can appreciate that our PMs almost entirely cover the range of ages and metallicities indicated by APOGEE. These metallicities are concentrated at values around $+0.2$~dex, with just 7 out of 384 RC stars falling in the $-1<[\mathrm{M/H}]<-0.7$ metal-poor tail of the distribution.
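For reference, the adopted RAMR and the ladder of PM$_i^{\pm k}$ metallicity sequences shown in Fig.~\ref{fig:ramr} correspond to the simple prescription sketched below, using the $\alpha=-0.4$~dex/12~Gyr slope and the 0.12-dex steps adopted above:
\begin{verbatim}
import numpy as np

alpha = -0.4 / 12.0                        # dex per Gyr

def ramr(t_gyr):
    # reference age-metallicity relation: [Fe/H] = alpha * (t - 4.5 Gyr)
    return alpha * (t_gyr - 4.5)

for t in (0.1, 1.0, 4.5, 10.0, 12.6):      # ages in Gyr
    feh0 = ramr(t)
    ladder = feh0 + 0.12 * np.arange(-3, 4)   # PM_i^{-3} ... PM_i^{+3}
    print(f"t = {t:4.1f} Gyr: [Fe/H]_RAMR = {feh0:+.2f}, "
          f"PM sequences from {ladder.min():+.2f} to {ladder.max():+.2f}")
\end{verbatim}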
We also recall that the metallicities from \cite{feuillet16} are confirmed by the most recent APOGEE data releases \cite{feuillet19}, and that the scarcity of giants in the Solar Neighbourhood with metallicities smaller than $-0.7$~dex is confirmed by recent GALAH data \cite{nandakumar20}.

An independent confirmation of the limited range of metallicities that needs to be explored is given by the metallicity distribution of long-lived G and K dwarfs in the immediate Solar Neighbourhood. Different versions of these distributions \cite{rochapinto96, haywood01, haywood13, kotoneva02, casagrande11} indicate that the bulk of \feh\ values is comprised between $-0.5$ and $+0.5$~dex, with just a tiny fraction of dwarfs extending down to $\feh=-1$~dex.

To better quantify the fraction of metal-poor stars that might be present in our Gaia sample, we can reason as follows: There are at most 48 stars observed in the box with $1>\gmag>-0.7$, $0.6<\gcolor<1$, which is the HRD region corresponding to RC stars with metallicities between $-0.7$ and $-1.6$~dex. This number is to be compared to the 2009 redder RC stars, i.e.\ those with $1<\gcolor<1.4$, which have higher metallicities. The RC corresponds to a major evolutionary stage (the core helium burning) appearing in all stellar populations older than $\sim1.5$~Gyr at a rate of {\em at least} 0.5 RC stars per $10^3$~\Msun\ of star formation \cite{girardi16}. This implies that the above-mentioned 48 stars -- if they are really old RC stars, and not younger RC stars, or even younger stars crossing the Hertzsprung gap -- set an upper limit of $3\times10^4$~\Msun\ to the mass of stellar populations formed with $\feh<-0.7$~dex and presently in the Solar Neighbourhood. As we will see later in this section, this is a small fraction of the $\sim\!10^6$~\Msun\ total mass formed that can be estimated from the HRD fitting of our Gaia sample.
Moreover, a $3500$~\\Msun\\ of metal poor populations are anyway included in our PM$_0$ model for the halo stars (Sect.~\\ref{sec:PMgrid}).", "Therefore, we can assume that our set of PMs cover the properties of the bulk of Solar Neighbourhood stars, just missing the trace populations with metallicities smaller than $-0.7$~dex -- which are, at least partially, included in the PM$_0$ model.", "\\begin{table*} \t\\caption{Results of HRD fitting using different input models, parameters, and HRD limits} \t\\label{tab:chi2} \\begin{tabular}{llcccccccc} \t\t\\hline \t\t model & \t\t HRD             & binary           & binary           & $-\\ln\\mathcal{L}$ & \\fbin & $\\Delta(\\gbp\\!-\\!\\grp)_0$ & $\\Delta\\gmag$ & N of hot & comments\\\\ \t\t family & region$^1$      & evolution        & parameters$^2$        &              &      &  (mag) & (mag)  &  subdwarfs & \\\\ \t\t\\hline \t\t\\multirow{4}*{\\textbf{1}} \t\t& D      & BinaPSE & E06, unresolved  &  876 & 0.411 [0.392,0.426] & -0.0005 & -0.049 &  8.6 & only upper MS \\\\ & C      & BinaPSE & E06, unresolved  & 1897 & 0.635 [0.620,0.654] & -0.0003 & -0.047 & 13.5 & upper MS+giants \\\\ & A      & BinaPSE & E06, unresolved  & 5339 & 0.423 [0.420,0.428] & -0.009  & -0.024 &  9.1 & all HRD \\\\ & B      & BinaPSE & E06, unresolved  & 1934 & 0.228 [0.219,0.235] & -0.032  & -0.080 &  1.4 & only lower MS \\\\ \\hline \t\t\\multirow{4}*{\\textbf{2}} \t\t& D      & BinaPSE & E06, resolved  & 889 & 0.898 [0.846,0.952]] & 0.009 & -0.086 & 19 & \\\\ & C      & BinaPSE & E06, resolved & 2246 & 0.993 [0.981,0.998] & -0.003 & -0.071 & 22 & \\\\ & A      & BinaPSE & E06, resolved  & 6816 & 0.9989 [0.9981,0.9994] & 0.021 & 0.023 & 26 & \\\\ & B      & BinaPSE & E06, resolved  & 2143 & 0.408 [0.385,0.605] & -0.035 & -0.076 & 1.9 & \\\\ \\hline \t\t\\multirow{3}*{\\textbf{2a}} \t\t& D      & BinaPSE & E06, resolved  & 926 & 0.408 & 0.009 & -0.083 & 8.3 & fixed $\\fbin$ \\\\ & C      & BinaPSE & E06, resolved & 2368 & 0.408 & -0.002 & -0.068 & 8.3 & fixed $\\fbin$ \\\\ & A      & BinaPSE & E06, resolved  & 9296 & 0.408 & 0.014 & -0.050 & 7.9 & fixed $\\fbin$ \\\\ \\hline \t\t\\multirow{4}*{\\textbf{3}} \t\t& D      & BinaPSE & MDS17, resolved  &  889 & 0.789 [0.733,0.858]] & 0.009 & -0.087 & 20 & \\\\ & C      & BinaPSE & MDS17, resolved  & 2334 & 0.860 [0.834,0.881] & -0.002 & -0.062 & 22 & \\\\ & A      & BinaPSE & MDS17, resolved  & 11587 & 0.973 [0.969,0.976] & 0.006 & 0.032 & 24 & \\\\ & B      & BinaPSE & MDS17, resolved  & 2651 & 0.370 [0.354,0.389] & -0.007 & -0.065 & 5.3 & \\\\ \\hline \t\t\\multirow{4}*{\\textbf{4}} \t\t& D     & none  & unresolved &  828 & 0.409 [0.382,0.434] & -0.001 & -0.038 & 0 & traditional prescription\\\\ & C     & none  & unresolved & 2012 & 0.644 [0.628,0.661] & -0.001 & -0.046 & 0 & traditional prescription\\\\ & A     & none  & unresolved & 5784 & 0.490 [0.487,0.494] & -0.008 &  0.009 & 0 & traditional prescription\\\\ & B     & none  & unresolved & 1729 & 0.403 [0.346,0.409] & -0.037 & -0.063 & 0 & traditional prescription\\\\ \\hline \\end{tabular} \\\\ \t$^1$ Region of the HRD being analysed: D = upper main sequence, $-1.5\\leq\\gmag\\leq4.0$, $-1.0\\leq(\\gbp\\!-\\!\\grp)_0\\leq1.0$; C = upper main sequence plus evolved stars, $-1.5\\leq\\gmag\\leq4.0$, $-1.0\\leq(\\gbp\\!-\\!\\grp)_0\\leq3.5$; B is the lower main sequence $4.0\\leq\\gmag\\leq7.5$, $-1.0\\leq(\\gbp\\!-\\!\\grp)_0\\leq3.5$; A is the entire HRD from there above, that is $-1.5\\leq\\gmag\\leq7.5$, $-1.0\\leq(\\gbp\\!-\\!\\grp)_0\\leq3.5$.", "\\\\ 
\t$^2$ References for binary parameters: E06 = \\cite{Eggleton2006}; MDS17 = \\cite{moe17}. ``Resolved'' means that binary components with angular separation larger than the separation line in  Fig.~\\ref{fig:resolved_bin} are counted as two distinct stars.", "\\end{table*}   \\begin{figure*}     \\includegraphics[width=\\textwidth]{figs_summary2_binapse_highMS.png}     \\caption{The fitting of the upper MS (region D), with limits $-1.5<\\gmag<4$ and $-1.0<\\gcolor<1$. The left panels show the Hess diagrams of the observations and the mean best-fitting model, in the same color scale, plus the map of residuals in approximate units of $\\sigma$ -- that is, the result of $(M_k-O_k)/\\sqrt{O_k}$. This latter is shown only where the comparison can be meaningful at all, i.e. for bins with more than 10 observed stars. The right panels shown the SFH, separated into two panels: the upper for the SFR$(t)$, the lower for the $\\feh(t)$. For the SFR$(t)$, the central heavy green line marks the median value at each age, with two shaded areas marking the 68 and 95 per cent confidence intervals. For the $\\feh(t)$, the central heavy green line mark the median value, while the two shaded areas mark the intervals that comprise 68 and 95 per cent of the metallicities effectively used in the mean best-fitting model, respectively.     }", "\\label{fig:fit_D} \\end{figure*}   \\section{Discussion} \\label{sec:discuss}   Let us now discuss a few of the many different fittings made possible by our codes. The discussion is limited to the $\\gmag\\!<\\!7.5$~mag interval, for which we can be sure the incompleteness is smaller than a few percent (cf. Section~\\ref{sec:data}), and which includes the entire region of red giants and subgiants in the HRD. It is also large enough to contain a significant piece of the lower main sequence, which is very sensitive to the presence of unresolved binary systems.", "On the other hand, by limiting the HRD to this magnitude limit, we avoid having to discuss possible changes in the low-mass IMF \\cite{sollima19, hallakoun20}, which we leave for subsequent papers.", "In the following, we comment the fittings in groups, or ``model families'' in Table~\\ref{tab:chi2}, that represent common physical assumptions for the binaries.", "\\subsection{Analyses of different HRD sections with unresolved binaries}   Different regions of the HRD provide different constraints on the SFH, AMR, and binary fraction of the Solar Neighbourhood. In the following, we present the family of models labelled as \\textbf{1} in Table~\\ref{tab:chi2}. They make use of unresolved binaries computed with BinaPSE and with the \\cite{Eggleton2006} distribution of initial masses and orbital parameters. All binaries are assumed to be unresolved. This family of models is \\textit{not} our favoured one (as it will become clear in the next subsections), but it represents a well-accepted approach to model binaries in CMD-fitting works, and it illustrates some problems that are common to all models to be presented afterwards.", "Figure~\\ref{fig:fit_D} shows a fitting in which only the upper part of the MS (region D), with $\\gmag<4$~mag and $(\\gbp-\\grp)_0<1$~mag, is included. This exercise essentially uses the simplest age indicator -- i.e. the number of stars still on the MS at every magnitude interval -- to derive the SFH. As can be appreciated, in this simple case we obtain an excellent fitting of the entire HRD region being considered.  
The pattern of residuals is quite uniform, resembling the results obtained under ideal conditions (i.e. using mock catalogues) in the Appendix~\\ref{sec:testing}. The SFH indicates a maximum SFR$(t)$ of $1.4\\times10^{-4}$~\\Msun$\\mathrm{yr}^{-1}$ at ages of about 2 Gyr, with SFR$(t)$ being reduced both at older and younger ages. At ages younger than $10^8$~yr, we find  an upper limit of $\\mathrm{SFR}(t)\\lesssim4\\times10^{-5}$~\\Msun$\\mathrm{yr}^{-1}$, which is not surprising for a sample that avoids the Galactic Plane. The binary fraction turns out to be well constrained, at $\\fbin\\simeq0.41$.", "\\begin{figure*}     \\includegraphics[width=\\textwidth]{figs_summary2_binapse_upperhess.png}     \\caption{The same as Fig.~\\ref{fig:fit_D} but now for case C, fitting the entire upper HRD with $-1.5<\\gmag<4.0$ and $-1.0<\\gcolor<3.5$.}", "\\label{fig:fit_C} \\end{figure*}   Figure~\\ref{fig:fit_C} then includes all the red giants and subgiants with $\\gmag<4$~mag (region C) in the analysis. In comparison with the previous case, it can be immediately noticed that the fitting residuals increase in a few regions of the HRD, for instance close to the oldest MS turn-off ($\\gmag\\sim4$), and around the RC region. But over most of the HRD, the quality of this fit still looks acceptable. It is also evident that the general shape of the SFR$(t)$ is very much consistent with the one derived from the MS alone in Fig.~\\ref{fig:fit_D}. The most evident differences compared to the previous case are in the AMR, which appears ``flattened'' for ages older than 2 Gyr, and in the binary fraction which increases to 0.63.", "Despite these apparently-modest changes in the fittings, in passing from model D to C the changes in SFR$(t)$ and \\fbin\\ largely exceed those expected from the error bars of both best-fitting models. This result by itself indicates that the error bars derived from the MCMC -- despite being formally correct (see Appendix~\\ref{sec:testing}) -- largely underestimate the true errors. At the very least, they do not include some source of systematic error, that forces the MCMC to pursue different solutions when selecting different areas of the HRD.", "\\begin{figure*}     \\includegraphics[width=\\textwidth]{figs_summary2_binapse.png}     \\caption{The same as Fig.~\\ref{fig:fit_D} but now for case A, i.e. fitting the entire HRD above $\\gmag=7.5$~mag.}", "\\label{fig:fit_A} \\end{figure*}   Figure~\\ref{fig:fit_A} shows the fitting of the entire HRD above $\\gmag=7.5$~mag (region A in Table~\\ref{tab:chi2}), i.e. now including a significant part of the lower MS. The changes in the results are dramatic. First, the residuals increase, especially close to the oldest MS turn-offs at $\\gmag\\simeq4$, and also along stripes on the lower MS. Regarding the SFR$(t)$, the most evident difference in case A is the appearance of a strong peak of young SFH at ages smaller that $\\sim2\\times10^7$~yr. Apart from this peak, the derived SFH is similar to those derived from cases C and D. Regarding the binary fraction, it turns out to be 0.42, i.e. intermediate between cases B and C. Another marked difference is that the presence of the low-MS also drives the solution to higher values of metallicity at young ages.", "The strong peak in the very young SFH is probably an artefact, caused by the few observed stars being scattered at colours that are redder than the lower main sequence, and even redder than the sequence of equal-mass MS+MS binaries. 
In the partial models, the only stars that reach colours redder than the equal-mass MS+MS binaries are in short-lived pre-main sequence phases, present in the very young partial models. Therefore, the only way for our MCMC code to fit these few scattered red stars is to increase the SFR at very young ages -- within the limits allowed by the young massive stars present at the top of the observed MS. We regard this feature as indicative of the errors that can appear when a code is allowed to \textit{blindly} fit all stars present on a HRD. In our specific case, the ``too-red'' stars amount to about 500, and might be either artefacts, or unresolved triple systems (not included in our PMs).

\begin{figure}
    \includegraphics[width=\columnwidth]{figs_lf_allstars_cropped.png}
    \caption{Luminosity functions for stars in model A compared to the observations, separately for subgiants+giants (top panel) and dwarfs (bottom panel).}
\label{fig:lfs}
\end{figure}

Another interesting fact is that the fitting of the entire HRD turns out to produce significant residuals close to the oldest MS turn-offs, at $\gmag\sim4$~mag. Such higher residuals were only hinted at in the previous case C. It is worth noticing that these high bin-to-bin residuals are nearly cancelled if one looks at the residuals over larger fractions (or larger bins) of the HRD. This point is illustrated by the luminosity functions (LF) shown in Fig.~\ref{fig:lfs}, comparing data and best-fitting model for case A, separately for subgiants+giants and dwarfs. It can be appreciated that there is overall a good agreement between data and model LFs over the entire $-1.5<\gmag<7.5$ interval, with the notable exception of a small bump at $\gmag=2$~mag, which is present in the data but not in the model. This small discrepancy is probably related to the RGB bump, which is slightly misplaced in the PARSEC v1.2S tracks \cite{fu18}.

From the models presented so far, it is evident that case A provides the worst result, and the least reliable determination of the SFH in the Solar Neighbourhood. The reasons for this failure should probably be looked for in the details of the stellar models, which might not simultaneously fit the field stars observed across the very wide colour and magnitude intervals of Fig.~\ref{fig:fit_A} -- which also involve a wide range of masses, effective temperatures, and surface gravities. A careful testing of these models is required. It is worth remarking that the present isochrones were already used to fit the Gaia photometry of hundreds of open clusters \cite{babu, bossini19, monteiro20, dias21, medina21} with apparently good results, even when the photometry spans several magnitudes along the main sequence. But, taken individually, the open clusters contain too few evolved stars to provide a good quantitative test of our partial models.

\subsection{The fraction of unresolved binaries from the lower MS}
\label{sec:bin_lowMS}

\begin{figure*}
    \includegraphics[width=\textwidth]{figs_summary2_binapse_lowMS.png}
    \caption{The same as Fig.~\ref{fig:fit_D} but now for case B, i.e. fitting only the lower MS. The results for the SFH have little relevance in this case, but are shown for the sake of completeness.}
\label{fig:fit_B}
\end{figure*}

The fittings presented above give contrasting results for the initial binary fraction, \fbin.
Independent information about this fraction is provided by the lower main sequence (with $4<\\gmag<7.5$, region B in Table~\\ref{tab:chi2}). This HRD region presents the clearest manifestation of the presence of \\textbf{unresolved} binaries, which are evident as the ``second MS'' observed to the right of the MS. This feature is known since \\cite{haffner37} and evinces the formation of a relatively high fraction of near-mass binaries, with mass ratios in excess of $\\sim\\!0.7$. On the other hand, the bulk of stars in this HRD region are unevolved, and therefore it should provide little information about the SFH. Therefore, this exercise is mainly intended to constrain the binary fraction.", "The results are illustrated in Fig.~\\ref{fig:fit_B}, which shows both the best-fit models and the derived SFH. As can be noticed, this kind of fitting clearly points to a binary fraction close to 0.23.", "Also in this case, the fitted solution favours a weird SFH in which there is a strong burst of very young star formation (with ages $\\logtyr<8$) with a very high metallicity ($\\feh\\simeq0.5$~dex). This is likely the same problem that happened for the model A in Fig.~\\ref{fig:fit_A}, which we regard as an artefact caused by the few scattered red stars. Moreover, the map of residuals indicates that this is far from being a perfect fit of the data, with some red/blue strips in the HRD indicating sub/overproduction of stars at the level of $\\sim3\\sigma$. This is in stark contrast with the level of agreement we are used to, while fitting the CMDs of external galaxies. One main culprit, in this case, is probably the extreme (and unusual) accuracy of the Gaia data we feed to the HRD-fitting algorithm. In external galaxies, accuracies better than a few hundredths of magnitude cannot be reached in the placement of stars at the bottom of the main sequence, even in the most favourable cases (namely for HST observations of the Magellanic Clouds, see \\cite{holtzman06} and \\cite{merica20} for examples). With Gaia DR2, this is possible, and the adoption of a similar resolution in the HRD provides very demanding constraints to the model fitting. Being our model prescriptions not perfect, the improved constraints result in higher-than-usual residuals in some places of the HRD. Irrespective of these problems, the main indication we get from this fitting is that about 1/4 of all low-MS stars are unresolved binaries.", "\\subsection{Rough corrections from the recoverability of binaries in Gaia DR2} \\label{sec:unresolved}   The family of models \\textbf{1}, just discussed, includes a common assumption in CMD-fitting codes: that all binaries are unresolved and hence always contribute to the HRD as single sources. While this assumption is valid for external galaxies analysed with similar methods, it is not for our nearby sample. Wide binaries are common in the Solar Neighbourhood and their catalogues were greatly expanded as a result of Gaia proper motions \\cite{jimenez19, zavada20, hartman20}. For instance, we verify that the \\cite{hartman20} catalogue contains 8775 wide binaries in our sampled volume, which means that twice as many ``apparently-single stars'' are included in our analysis. 
While this represents just a small fraction of our total sample, this might be just the tip of an iceberg that might be more clearly revealed after Gaia DR3.", "On the other hand, the \\cite{Eggleton2006} and \\cite{moe17} prescriptions provide very broad distributions of initial semi-major axis $a$, with median values of the order of 1000~AU. This implies a large fraction of predicted binaries with separation in excess of a few arcsec, and hence well resolvable by Gaia.", "\\begin{figure} \t\\includegraphics[width=\\columnwidth]{figs_recoverability_Eggleton.png} \t\\includegraphics[width=\\columnwidth]{figs_recoverability_MDS.png}     \\caption{\\textbf{Top panel}: A sample of simulated binaries within 200 pc (dots), plotting their angular separation versus magnitude contrast. Initial parameters are taken from the \\cite{Eggleton2006} distribution. Black dots are binaries in which both components are bright enough to be observable as single stars; red dots are those in which only the brightest component is observable. The continuous blue line shows the 50\\%-recoverabilty line defined by \\cite{ziegler18}, while the green line is the same limit measured by \\cite{brandeker19}. Binaries to their right have a high probability of being resolved in Gaia DR2. \\textbf{Bottom panel}: The same for a simulation derived from \\cite{moe17} initial distribution.     }", "\\label{fig:resolved_bin} \\end{figure}   To quantify the fraction of binaries that would be resolved in Gaia DR2, we use BinaPSE together with the \\cite{Eggleton2006} distribution to simulate a uniform sample of binaries within 200~pc, made of stars of near-solar metallicity and with a constant SFR$(t)$, and subjected to the broad magnitude cuts described in Sect.~\\ref{sec:data}. The present-day binary orbits are then attributed a random orientation and are ``observed'' at a random epoch. The top panel of  Fig.~\\ref{fig:resolved_bin} shows their final distribution of the angular separation $\\theta$, and the magnitude difference between the primary and secondary in the $G$ band, $|\\Delta G|$. As can be appreciated, very broad distributions of these parameters are derived.", "The ``recoverability'' of binaries in Gaia DR2 has been measured by \\cite{ziegler18} by means of a large imaging survey using adaptive optics, up to separations of 3.5$\\arcsec$, and then by \\cite{brandeker19} using Gaia DR2 data itself, for separations up to 12$\\arcsec$. Their 50\\%-recoverability limits are displayed in Fig.~\\ref{fig:resolved_bin}. We adopt as a reference the curve drawn by using the point at $|\\Delta G|=0$ from \\cite{ziegler18} and then the \\cite{brandeker19} curve for all other values. All binaries with a separation larger than given by this curve have a high probability of being included as two stars in Gaia DR2. In our simulations, 69 per cent of all binaries satisfy this condition, but for the large majority of them, only one of the components is bright enough to be included in our sample, implying that these binaries would anyway be counted as single stars. The fraction of our binary sample that satisfies the 50\\%-recoverability limit and ensures both primary and secondary are counted, $\\fresolved$, is of 0.18 for the total $\\gmag<7.5$~mag sample (case A). For stars at the lower main sequence, with $\\gmag>4$~mag (case B), $\\fresolved$ increases to 0.73 -- mainly because for binaries in the lower MS the magnitude contrast $|\\Delta G|$ is much more frequently limited to just a few magnitudes. 
For the upper part of the HRD ($\\gmag<4$~mag, case C), instead, $|\\Delta G|$ values become larger, and $\\fresolved$ falls to 0.09.", "Before proceeding, we recall that similar numbers are derived when we use the \\cite{moe17} distribution of binary parameters, as illustrated in the bottom panel of Fig.~\\ref{fig:resolved_bin}: the values of $\\fresolved$ become 0.20, 0.80, and 0.11, for the A, B and C samples, respectively.", "The distinction between resolved and unresolved binaries would imply significant corrections to the binary fraction. For every binary fitted in the HRD, there are $2\\fresolved$ apparently-single stars that are, in reality, $\\fresolved$ binaries.", "The approximate relation between the true binary fraction, $\\fbin^\\mathrm{true}$, and the value derived from unresolved binaries, $\\fbin^\\mathrm{app}$, is \\begin{equation} \\fbin^\\mathrm{true} \\simeq \\frac{\\fbin^\\mathrm{app} + \\fbin^\\mathrm{true} \\fresolved}{1-\\fbin^\\mathrm{true}\\fresolved}  , \\label{eq:correct0} \\end{equation} which implies \\begin{equation} \\fbin^\\mathrm{true} \\simeq \\frac{1-\\fresolved - \\sqrt{(\\fresolved-1)^2-4\\fresolved\\fbin^\\mathrm{app}}}{2\\fresolved}  .", "\\label{eq:correct} \\end{equation}   Applying this correction, the binary fractions measured from cases A, B and C, are increased from 0.423 to 0.59, from 0.228 to 0.30, and from 0.635 to 0.99, respectively. It is evident that the consideration of the recoverability of the binaries can alter, significantly, the \\fbin\\ values found for the several sections of the HRD.", "\\subsection{Fully implementing unresolved and resolved binaries in the HRD fitting}   Therefore, we implement a more realistic simulation of the binaries, in the family of models referred to as \\textbf{2} in Table~\\ref{tab:chi2}. To prepare these models, the first step is to modify the preparation of the binary PMs: For every binary produced by BinaPSE, a random location, orientation and observation epoch is assigned so as to place it in Fig.~\\ref{fig:resolved_bin}, hence defining whether it is counted as a single star, or as two stars in different HRD locations. All binary PMs are affected by this procedure.", "The results of these HRD fittings are presented in the block \\textbf{2} of Table~\\ref{tab:chi2}. Compared to the previous family of models \\textbf{1}, the fitting of the data is slightly worse (i.e.\\ with larger values of $-\\ln\\mathcal{L}$), although the features that appear in the solutions are quite similar to the previous case. But the one aspect that strikes in all these solutions is that the \\fbin\\ values derived from the upper part of the HRD are quite large, i.e. between 0.8 and 1.0, whereas the lower MS indicates a value close to 0.4.", "For completeness, we perform the same exercise using the \\cite{moe17} distribution of initial binary parameters, producing the family of models in the block \\textbf{3} of Table~\\ref{tab:chi2}. Overall, the results are similar to those obtained with the \\cite{Eggleton2006} distribution. In both cases, the \\fbin\\ values derived from the lower MS are nearly half the values derived from the analyses of upper parts of the HRD.", "This particular result could be indicating two different things: either the binary fraction decreases with stellar mass much more rapidly than assumed in the \\cite{Eggleton2006} and \\cite{moe17} distributions, or the upper parts of the HRD, for some unknown reason, tend to be better fit with larger fractions of binaries than present in stellar populations. 
In order to explore this second possibility, we perform the family of models 2a, which simply fits the upper part of the HRD keeping the value of \\fbin\\ fixed at the 0.408 value inferred from the lower MS. For the cases D and C, the quality of the fitting results does not change much. For the case A, instead, the results are much worse, again reflecting the intrinsic difficulty of the stellar models to fit the entire HRD simultaneously. The impact of the choice of \\fbin\\ on the SFH will be discussed in Sect.~\\ref{sec:SFH} below.", "\\subsection{On the usual assumption of unresolved binaries} \\label{sec:binfrac}   The model family \\textbf{4} in Table~\\ref{tab:chi2} presents the results of the HRD fitting using the classical prescription used in CMD fitting -- i.e.\\ that all binaries are unresolved and have a flat distribution of mass ratios.  This prescription is adopted, for instance, by \\cite{dolphin02}, \\cite{rubele18}, \\cite{ruiz20}, with only minor changes -- especially regarding the minimum value of mass ratio, which varies from author to author.", "In our case, we have limited the mass ratios in the interval $0.7<q<1$, but extending this range would not change the situation much, since non-interacting unresolved binaries with smaller $q$ values are nearly identical to single stars. As we can observe in the table, this kind of model provides (a) nearly the same values of $\\fbin$ (going from 0.40 to 0.64) for different parts of the HRD, and (b) fits nearly as good as the other cases already examined. In summary, nothing in our solutions indicates that these models are any worse than those obtained with more detailed prescriptions for the binaries.", "Indeed, the problem with these models is not in the quality of the HRD fitting they provide. It is simply that they do not offer the possibility of properly accounting for (and quantifying) the resolved and unresolved binaries in different samples. For instance, in our $d<200$~pc sample just a fraction of the binaries are in the form of unresolved systems (see Sect.~\\ref{sec:unresolved}); therefore, it is not appropriate to assume that it has the same fraction of unresolved binaries that were used to fit, successfully, the CMDs of nearby galaxies. And considering the very wide distributions of orbital separations provided by \\cite{Eggleton2006} and \\cite{moe17} prescriptions (Fig.~\\ref{fig:resolved_bin}), this problem should affect, in a significant way, even samples built for distances ten times larger than our $200$-pc limit. The problem only disappears at distances of \\textit{tens of kiloparsecs}, as for instance in the Magellanic Clouds, where the fraction of resolved binaries becomes negligible.", "\\subsection{Indications about the fraction of close binaries}   Hot subdwarfs derive only from binaries which have the chance of exchanging mass before the onset of helium core-burning, as illustrated in the example of Sect.~\\ref{sec:hestars}. Therefore they specifically sample the fraction of binaries formed in close orbits. Moreover, as indicated in Table~\\ref{tab:PMs}, the formation of hot subdwarfs peaks at ages close to 1~Gyr, i.e. just before the onset of extended RGBs in stellar populations. These ages correspond to turn-off masses of about 2~\\Msun, and therefore to stars observed at $\\gmag\\lesssim2$~mag while on their MS. 
These magnitude intervals are well-sampled and well-fit in our HRDs. Therefore, it was to be expected that our best-fit models would reproduce the observed numbers of hot subdwarfs, but this is not exactly the case. There are 17 hot subdwarfs in the observations (see Fig.~\ref{fig:bighrd}). Our best models for cases D and C predict between 19 and 22 of them (see Table~\ref{tab:chi2}, for model families 2 and 3), but these cases are likely overestimating \fbin\ by a factor of about 2 (see Sect.~\ref{sec:bin_lowMS}). The alternative set of models 2a, built for the $\fbin=0.408$ value inferred from the lower MS, indicates an expected number of 8.3 hot subdwarfs, i.e.\ a factor of 2 below the observed number. Considering the low-number statistics, the discrepancy is not dramatic, but it is slightly worrying -- indeed, the probability of observing $\geq17$ stars when 8.3 are expected is just 0.5 per cent. In case this deficit is confirmed by future studies including larger subsamples of Gaia data, it is worth recalling that:
\begin{itemize}
    \item The original BSE code generates about 2.4 times \textit{fewer} hot subdwarfs than BinaPSE, for the same SFH (last row of Table~\ref{tab:PMs}).
    \item Only stars with initial separations smaller than $\sim\!1$~AU produce hot subdwarfs; therefore this is the range for the possible revisions in the distribution of initial binary parameters in the \cite{Eggleton2006} and \cite{moe17} prescriptions.
    \item The number of predicted hot subdwarfs depends on a series of other parameters in the binary evolution code, in addition to the distribution of initial separations. In particular, the production of hot subdwarfs is expected to increase with the common-envelope ejection efficiency and the critical mass ratio for dynamically unstable mass transfer \cite{heber09, han02, han03}.
\end{itemize}
This probably means that these parameters have to be recalibrated in BinaPSE, in order to produce about twice as many hot subdwarfs. The Gaia data represent the ideal sample to face this problem in the near future. As the production of hot subdwarfs changes with age (Table~\ref{tab:PMs}), this recalibration goes hand in hand with the determination of the SFH in the Solar Neighbourhood.

\begin{figure*}
	\includegraphics[width=\textwidth]{figs_sfh_comp.png}
    \caption{Several of the SFHs derived in this work, for models introduced in Table~\ref{tab:chi2}. Left panels correspond to the results obtained by fitting the upper MS, and the right ones to the fitting of the entire upper HRD. In both cases the upper panel shows the SFR$(t)$, with solid lines corresponding to the median solutions, and light-shaded areas illustrating the 16\% to 84\% confidence interval. The bottom panel shows the median AMR, \feh$(t)$, for the same models, together with the metallicity interval that spans 16\% to 84\% of the values adopted at every age.}
\label{fig:sfh_comp}
\end{figure*}

\subsection{The SFH in the Solar Neighbourhood}
\label{sec:SFH}

In this work, we concentrate on the analysis of the basic building blocks and assumptions of CMD (or HRD) fitting.
But assuming that our best models (cases C and D) are good enough representations of reality, we also derive a potentially important constraint for Galaxy evolution models: the SFH of the Solar Neighbourhood.

The left panel of Fig.~\ref{fig:sfh_comp} shows several of the solutions we find in this work, limited to the upper MS (case D in Table~\ref{tab:chi2}), which provide the smallest residuals. As can be appreciated, all these models produce qualitatively similar SFR$(t)$, with the maximum in the 2--3.2 Gyr age bin and with values between $1.25\times10^{-4}$ and $1.5\times10^{-4}\Msun\mathrm{yr}^{-1}$. Other maxima are found in the age bins 0.8--1.2 Gyr and 0.32--0.5 Gyr. Despite the limited age resolution we have adopted for older ages, the SFR$(t)$ clearly decreases up to the 12.6~Gyr limit of our determination, reaching values of $\sim\!0.85\times10^{-4}\Msun\mathrm{yr}^{-1}$.

The mean $\feh(t)$, instead, appears to decrease by about 0.3~dex from young to old ages in all cases, although following slightly different median curves. If we consider the Gaussian dispersion of $\sigma=0.1$~dex adopted to build the PMs at every age, there is a large degree of overlap between the metallicity distributions represented by these different cases.

The right panel of Fig.~\ref{fig:sfh_comp} presents the results from the analyses of the entire upper HRD (case C in Table~\ref{tab:chi2}). The quantitative differences between the different SFR$(t)$ become slightly larger in this case, but essentially the same kind of solutions are found as in the previous case D. The most discrepant results, in this case, come from models in family 4, in which all binaries are unresolved and have a flat distribution of mass ratios: these particular solutions present a SFR$(t)$ peaking at slightly older ages than in the other cases, that is, in the 3.2--5 Gyr age bin.

Overall, these plots reveal that the prescription adopted for the binaries has a modest impact on the final derivation of the SFH. Based on Fig.~\ref{fig:sfh_comp}, we estimate that the binary prescription alone adds an uncertainty of about 20 per cent to the values of SFR$(t)$ at any age. The impact on the AMR also seems quite modest.

How do our SFHs compare to others recently derived from Gaia? Any direct comparison with other authors is complicated by the very different samples they analysed -- in addition to the different methods and stellar models they adopted. Therefore we refrain from doing a detailed comparison. But it is worth mentioning that:
\begin{itemize}
    \item We do not find the marked increase in the SFR$(t)$ for ages older than about 10~Gyr that can be appreciated in the solution by \cite{ruiz20}. Their sample refers to a wider volume, extending up to 2~kpc from the Sun; therefore, higher fractions of old stars (and lower metallicities) are expected from their sampling of larger sections of the thick disk and halo. What comes as a surprise is that a similar increase in the star formation rate of old stars (with ages $\ga8$~Gyr in this case) can also be inferred from the \cite{alzate20} ``age-metallicity distributions'', which were derived from a sample extending to just 100~pc.
    \item Our solutions do not provide hints of the SFR$(t)$ peaks detected by \cite{ruiz20} at ages 1.9 and 5.7 Gyr, while we might agree that there is a SFR$(t)$ peak at 1~Gyr.
The absence of the older peaks in our case might simply be caused by the smaller sample and the coarse age resolution we adopted in our method. It is notable that our solution for case 2a produces a SFR$(t)$ peak in the 3.2--5 Gyr age bin, which could be related to the 5.7~Gyr peak found by \cite{ruiz20} using similar assumptions for the binaries. However, in the absence of a detailed reanalysis of both samples using the same methods, this could be regarded as a coincidence.
\end{itemize}

\section{Summary and conclusions}

In this work, we develop methods to simulate stellar populations including detailed prescriptions for binaries, and apply them to the fitting of the high-quality HRD from Gaia DR2. We take special care to limit the analyses to an almost-complete sample little affected by the known errors and artefacts in the Gaia catalog; these considerations lead us to limit the sample to a maximum distance of 200~pc and to exclude the $|b|<25^\circ$ region. We put particular effort into verifying whether the populations of binaries can be well fitted with present prescriptions. Our analysis rests on our refined binary evolution code BinaPSE, which is able to produce the main features caused by binaries in the Gaia HRD -- namely the sequence of near-equal mass binaries along the lower main sequence, and the isolated group of hot subdwarfs.

Some indications from this work are:
\begin{itemize}
    \item Attempts to fit the entire HRD from Gaia DR2 within 200 pc, using its full colour-magnitude resolution from the lower MS to the upper MS, may fail to provide good solutions. Significant residuals appear, probably as a result of discrepancies in the modelling of stars across a wide range of masses, effective temperatures, surface gravities, and metallicities. Zero-point errors in the Gaia photometry may also be playing a role. Until these problems are completely characterised and their related parameters recalibrated (hopefully with the help of star clusters in the Gaia database), we recommend the fitting of separate, smaller sections of the Gaia HRD.
    \item Attempts to fit the initial fraction of mass formed in binaries, $\fbin$, will in general provide different results depending on the section of the HRD being analysed. This is illustrated by the case in which all binaries are assumed unresolved, which leads to \fbin\ values increasing from $\sim0.2$ to 0.6 as we go from the lower to the upper MS (assuming the \cite{Eggleton2006} prescription for the binary parameters together with the \cite{kroupa02} IMF).
    \item The problem remains when we use a more realistic prescription in which binaries are considered either as resolved or unresolved depending on their instantaneous angular separation compared to the 50\%-recoverability line measured for Gaia DR2. We have verified this for models adopting both the \cite{Eggleton2006} and the \cite{moe17} distributions of initial binary parameters, finding that the lower MS favours \fbin\ values close to 0.4, while the best-fit solutions of the upper HRD favour values closer to 0.8 or 0.9. Solving this problem probably requires a more detailed analysis of the distributions of the adopted binary parameters, as compared to observed samples.
    \item Anyway, when we adopt the \fbin\ derived from the lower MS -- which is certainly very sensitive to the fraction of unresolved binaries -- the fitting of the upper HRD is not dramatically worse than in the case where \fbin\ is also fitted.
Overall, the star counts across the upper parts of the HRD are less affected by the presence of binaries, while being extremely sensitive to the SFH.
    \item Our models predict the numbers of hot subdwarfs, produced from the evolution of close binaries especially at ages close to 1~Gyr. There are 17 such stars in our Gaia sample, while the predicted numbers are 20--22 when both the SFH and $\fbin$ are fitted using the upper part of the HRD. The predicted numbers of hot subdwarfs decrease to 8.3 when we fix the \fbin\ value at $\simeq0.4$, as suggested by the lower MS. In any case, we regard reproducing this number within a factor of 2, without any additional tuning of model parameters, as a very promising start.
    \item In addition to the binary fraction, the method returns the SFH that best fits the HRD, in units of $\Msun\mathrm{yr}^{-1}$. The results depend moderately on the assumptions adopted for the binaries, and on other model aspects such as the sections of the HRD and of the age-metallicity space being explored by the fitting algorithms. Anyway, the results clearly indicate a SFR$(t)$ peaking at ages close to 2~Gyr. Representative values for the SFR$(t)$ vary between $\sim\!1.5\times10^{-4}\Msun\mathrm{yr}^{-1}$ at the peak and $\sim\!0.8\times10^{-4}\Msun\mathrm{yr}^{-1}$ at old ages.
\end{itemize}
In summary, this work presents an independent analysis of the Gaia DR2 HRD and its SFH, producing generally good solutions in the upper part of the HRD, but also introducing a series of problems and questions that might be addressed as future data releases from Gaia provide better data and larger samples. Overall, we are very pleased to find that \textit{the general goal of accurately fitting the star counts across the Gaia HRD, including their binary sequences, does not seem beyond reach}.

Progress in improving the fitting of these HRDs will certainly require a more detailed check of the binary models produced by our codes, and of their recoverability in Gaia. It is worth remarking that BinaPSE produces not only detailed information about the fraction of binaries across the HRD, but also the distributions of other observables like the radial velocities, angular separations and magnitude contrasts (as in Fig.~\ref{fig:resolved_bin}), proper motions, and light curves for eclipsing binaries.

In forthcoming works, we will add comparisons of the predictions of our TRILEGAL+BinaPSE codes with the wealth of data regarding binaries (including their white dwarf remnants) provided by other wide-field photometric and spectroscopic surveys, in the pursuit of more satisfactory prescriptions for modelling single and binary populations in the fields of nearby resolved galaxies. This work is a necessary step to prepare for the revolution in binary data expected from Gaia DR3, and from ambitious imminent surveys like the Vera Rubin Observatory Legacy Survey of Space and Time \cite{lsst} and the Sloan Digital Sky Survey V \cite{sdssv}.

\section*{Data Availability}
The data underlying this article were accessed from the ESA Gaia archive (\url{https://gea.esac.esa.int/archive/}).
The best-fitting models generated in this research will be shared on reasonable request to the corresponding author.", "\\section*{Acknowledgements} We thank Onno Pols, Omar Benvenuto, and the anonymous referee for the useful comments, Eduardo Balbinot, Meredith Durbin and Giada Pastorelli for the programming tips, and the organizers of the COST MW-Gaia WG2/WG3 for an inspiring workshop in Zagreb.", "L.G.\\ acknowledges partial funding from ``INAF mainstreams 1.05.01.86.20\". P.M., P.D.T, L.G.,G.C. acknowledge the financial support from the European Union (ERC Consolidator Grant project STARKEY, no.\\ 615604).", "A.B.\\ acknowledges PRIN MIUR 2017 prot.\\ 20173ML3WW. Y.C.\\ acknowledges NSFC funding No.\\ 12003001. G.C.\\ acknowledges financial support from the European Research Council for the ERC Consolidator grant DEMOBLACK, under contract no.\\ 770017.", "\\bibliographystyle{mnras} \\begin{thebibliography}{} \\makeatletter \\relax \\def\\mn@urlcharsother{\\let\\do\\@makeother \\do\\$\\do\\&\\do\\#\\do\\^\\do\\_\\do\\%\\do\\~} \\def\\mn@doi{\\begingroup\\mn@urlcharsother \\@ifnextchar [ {\\mn@doi@}   {\\mn@doi@[]}} \\def\\mn@doi@[#1]#2{\\def\\@tempa{#1}\\ifx\\@tempa\\@empty \\href   {http://dx.doi.org/#2} {doi:#2}\\else \\href {http://dx.doi.org/#2} {#1}\\fi   \\endgroup} \\def\\mn@eprint#1#2{\\mn@eprint@#1:#2::\\@nil} \\def\\mn@eprint@arXiv#1{\\href {http://arxiv.org/abs/#1} {{\\tt arXiv:#1}}} \\def\\mn@eprint@dblp#1{\\href {http://dblp.uni-trier.de/rec/bibtex/#1.xml}   {dblp:#1}} \\def\\mn@eprint@#1:#2:#3:#4\\@nil{\\def\\@tempa {#1}\\def\\@tempb {#2}\\def\\@tempc   {#3}\\ifx \\@tempc \\@empty \\let \\@tempc \\@tempb \\let \\@tempb \\@tempa \\fi \\ifx   \\@tempb \\@empty \\def\\@tempb {arXiv}\\fi \\@ifundefined   {mn@eprint@\\@tempb}{\\@tempb:\\@tempc}{\\expandafter \\expandafter \\csname   mn@eprint@\\@tempb\\endcsname \\expandafter{\\@tempc}}}   \\bibitem{allard12} {Allard} F.,  {Homeier} D.,  {Freytag} B.,   {Sharp} C.~M.,  2012, in   {Reyl{\\'e}} C.,  {Charbonnel} C.,   {Schultheis} M.,  eds,  EAS Publications   Series Vol. 
57, EAS Publications Series.", "pp 3--43 (\\mn@eprint {arXiv}   {1206.1021}), \\mn@doi{10.1051/eas/1257001}   \\bibitem{alzate20} {Alzate} J.~A.,  {Bruzual} G.,   {D{\\'\\i}az-Gonz{\\'a}lez} D.~J.,  2021, \\mn@doi   [\\mnras] {10.1093/mnras/staa3576}, \\href   {https://ui.adsabs.harvard.edu/abs/2021MNRAS.501..302A} {501, 302}   \\bibitem{bertelli01} {Bertelli} G.,  {Nasi} E.,  2001, \\mn@doi [\\aj] {10.1086/318781}, \\href   {https://ui.adsabs.harvard.edu/abs/2001AJ....121.1013B} {121, 1013}   \\bibitem{bossini19} {Bossini} D.,  et~al., 2019, \\mn@doi [\\aap] {10.1051/0004-6361/201834693},   \\href {https://ui.adsabs.harvard.edu/abs/2019A&A...623A.108B} {623, A108}   \\bibitem{brandeker19} {Brandeker} A.,  {Cataldi} G.,  2019, \\mn@doi [\\aap]   {10.1051/0004-6361/201834321}, \\href   {https://ui.adsabs.harvard.edu/abs/2019A&A...621A..86B} {621, A86}   \\bibitem{bressan12} {Bressan} A.,  {Marigo} P.,  {Girardi} L.,  {Salasnich} B.,  {Dal Cero} C.,   {Rubele} S.,   {Nanni} A.,  2012, \\mn@doi [\\mnras]   {10.1111/j.1365-2966.2012.21948.x}, \\href   {https://ui.adsabs.harvard.edu/abs/2012MNRAS.427..127B} {427, 127}   \\bibitem{bressan15} {Bressan} A.,  {Girardi} L.,  {Marigo} P.,  {Rosenfield} P.,   {Tang} J.,   2015, in Asteroseismology of Stellar Populations in the Milky Way.", "p.~25   (\\mn@eprint {arXiv} {1409.2268}), \\mn@doi{10.1007/978-3-319-10993-0_3}   \\bibitem{cardelli89} {Cardelli} J.~A.,  {Clayton} G.~C.,   {Mathis} J.~S.,  1989, \\mn@doi [\\apj]   {10.1086/167900}, \\href   {https://ui.adsabs.harvard.edu/abs/1989ApJ...345..245C} {345, 245}   \\bibitem{casagrande11} {Casagrande} L.,  {Sch{\\\"o}nrich} R.,  {Asplund} M.,  {Cassisi} S.,   {Ram{\\'\\i}rez} I.,  {Mel{\\'e}ndez} J.,  {Bensby} T.,   {Feltzing} S.,  2011,   \\mn@doi [\\aap] {10.1051/0004-6361/201016276}, \\href   {https://ui.adsabs.harvard.edu/abs/2011A&A...530A.138C} {530, A138}   \\bibitem{castelli03} {Castelli} F.,  {Kurucz} R.~L.,  2003, in {Piskunov} N.,  {Weiss} W.~W.,   {Gray} D.~F.,  eds,  IAU Symposium Vol.", "210, Modelling of Stellar   Atmospheres.", "p.~A20 (\\mn@eprint {arXiv} {astro-ph/0405087})   \\bibitem{chan20} {Chan} V.~C.,  {Bovy} J.,  2020, \\mn@doi [\\mnras] {10.1093/mnras/staa571},   \\href {https://ui.adsabs.harvard.edu/abs/2020MNRAS.493.4367C} {493, 4367}   \\bibitem{chen15} {Chen} Y.,  {Bressan} A.,  {Girardi} L.,  {Marigo} P.,  {Kong} X.,   {Lanza}   A.,  2015, \\mn@doi [\\mnras] {10.1093/mnras/stv1281}, \\href   {https://ui.adsabs.harvard.edu/abs/2015MNRAS.452.1068C} {452, 1068}   \\bibitem{chen19} {Chen} Y.,  et~al., 2019, \\mn@doi [\\aap] {10.1051/0004-6361/201936612}, \\href   {https://ui.adsabs.harvard.edu/abs/2019A&A...632A.105C} {632, A105}   \\bibitem{cignoni06} {Cignoni} M.,  {Degl'Innocenti} S.,  {Prada Moroni} P.~G.,   {Shore} S.~N.,   2006, \\mn@doi [\\aap] {10.1051/0004-6361:20065645}, \\href   {https://ui.adsabs.harvard.edu/abs/2006A&A...459..783C} {459, 783}   \\bibitem{costa19a} {Costa} G.,  {Girardi} L.,  {Bressan} A.,  {Marigo} P.,  {Rodrigues} T.~S.,   {Chen} Y.,  {Lanza} A.,   {Goudfrooij} P.,  2019a, \\mn@doi [\\mnras]   {10.1093/mnras/stz728}, \\href   {https://ui.adsabs.harvard.edu/abs/2019MNRAS.485.4641C} {485, 4641}   \\bibitem{costa19b} {Costa} G.,  {Girardi} L.,  {Bressan} A.,  {Chen} Y.,  {Goudfrooij} P.,   {Marigo} P.,  {Rodrigues} T.~S.,   {Lanza} A.,  2019b, \\mn@doi [\\aap]   {10.1051/0004-6361/201936409}, \\href   {https://ui.adsabs.harvard.edu/abs/2019A&A...631A.128C} {631, A128}   \\bibitem{Dessart2020} {Dessart} L.,  {Yoon} S.-C.,  {Aguilera-Dena} D.~R.,   
{Langer} N.,  2020,   \\mn@doi [\\aap] {10.1051/0004-6361/202038763}, \\href   {https://ui.adsabs.harvard.edu/abs/2020A&A...642A.106D} {642, A106}   \\bibitem{dias21} {Dias} W.~S.,  {Monteiro} H.,  {Moitinho} A.,  {L{\\'e}pine} J.~R.~D.,   {Carraro} G.,  {Paunzen} E.,  {Alessi} B.,   {Villela} L.,  2021, \\mn@doi   [\\mnras] {10.1093/mnras/stab770}, \\href   {https://ui.adsabs.harvard.edu/abs/2021MNRAS.504..356D} {504, 356}   \\bibitem{dolphin02} {Dolphin} A.~E.,  2002, \\mn@doi [\\mnras] {10.1046/j.1365-8711.2002.05271.x},   \\href {https://ui.adsabs.harvard.edu/abs/2002MNRAS.332...91D} {332, 91}   \\bibitem{Eggleton2006} {Eggleton} P.,  2006, {Evolutionary Processes in Binary and Multiple Stars}   \\bibitem{elbadry19} {El-Badry} K.,  {Rix} H.-W.,  {Tian} H.,  {Duch{\\^e}ne} G.,   {Moe} M.,  2019,   \\mn@doi [\\mnras] {10.1093/mnras/stz2480}, \\href   {https://ui.adsabs.harvard.edu/abs/2019MNRAS.489.5822E} {489, 5822}   \\bibitem{evans18} {Evans} D.~W.,  et~al., 2018, \\mn@doi [\\aap] {10.1051/0004-6361/201832756},   \\href {https://ui.adsabs.harvard.edu/abs/2018A&A...616A...4E} {616, A4}   \\bibitem{feuillet16} {Feuillet} D.~K.,  {Bovy} J.,  {Holtzman} J.,  {Girardi} L.,  {MacDonald} N.,   {Majewski} S.~R.,   {Nidever} D.~L.,  2016, \\mn@doi [\\apj]   {10.3847/0004-637X/817/1/40}, \\href   {https://ui.adsabs.harvard.edu/abs/2016ApJ...817...40F} {817, 40}   \\bibitem{feuillet19} {Feuillet} D.~K.,  {Frankel} N.,  {Lind} K.,  {Frinchaboy} P.~M.,   {Garc{\\'\\i}a-Hern{\\'a}ndez} D.~A.,  {Lane} R.~R.,  {Nitschelm} C.,   {Roman-Lopes} A.,  2019, \\mn@doi [\\mnras] {10.1093/mnras/stz2221}, \\href   {https://ui.adsabs.harvard.edu/abs/2019MNRAS.489.1742F} {489, 1742}   \\bibitem{emcee} {Foreman-Mackey} D.,  {Hogg} D.~W.,  {Lang} D.,   {Goodman} J.,  2013, \\mn@doi   [\\pasp] {10.1086/670067}, \\href   {https://ui.adsabs.harvard.edu/abs/2013PASP..125..306F} {125, 306}   \\bibitem{fu18} {Fu} X.,  {Bressan} A.,  {Marigo} P.,  {Girardi} L.,  {Montalb{\\'a}n} J.,   {Chen} Y.,   {Nanni} A.,  2018, \\mn@doi [\\mnras] {10.1093/mnras/sty235},   \\href {https://ui.adsabs.harvard.edu/abs/2018MNRAS.476..496F} {476, 496}   \\bibitem{gaiaDR2} {Gaia Collaboration} et~al., 2018a, \\mn@doi [\\aap]   {10.1051/0004-6361/201833051}, \\href   {https://ui.adsabs.harvard.edu/abs/2018A&A...616A...1G} {616, A1}   \\bibitem{babu} {Gaia Collaboration} et~al., 2018b, \\mn@doi [\\aap]   {10.1051/0004-6361/201832843}, \\href   {https://ui.adsabs.harvard.edu/abs/2018A&A...616A..10G} {616, A10}   \\bibitem{gallart15} {Gallart} C.,  et~al., 2015, \\mn@doi [\\apjl] {10.1088/2041-8205/811/2/L18},   \\href {https://ui.adsabs.harvard.edu/abs/2015ApJ...811L..18G} {811, L18}   \\bibitem{gallart19a} {Gallart} C.,  {Ruiz-Lara} T.,  {Bernard} E.~J.,  {Brook} C.~B.,  {Cassisi} S.,    {Hill} V.,   {Monelli} M.,  2019a, in The Gaia Universe.", "p.~40,   \\mn@doi{10.5281/zenodo.3089011}   \\bibitem{gallart19b} {Gallart} C.,  {Bernard} E.~J.,  {Brook} C.~B.,  {Ruiz-Lara} T.,  {Cassisi} S.,    {Hill} V.,   {Monelli} M.,  2019b, \\mn@doi [Nature Astronomy]   {10.1038/s41550-019-0829-5}, \\href   {https://ui.adsabs.harvard.edu/abs/2019NatAs...3..932G} {3, 932}   \\bibitem{geier19} {Geier} S.,  {Raddi} R.,  {Gentile Fusillo} N.~P.,   {Marsh} T.~R.,  2019,   \\mn@doi [\\aap] {10.1051/0004-6361/201834236}, \\href   {https://ui.adsabs.harvard.edu/abs/2019A&A...621A..38G} {621, A38}   \\bibitem{girardi16} {Girardi} L.,  2016, \\mn@doi [\\araa] {10.1146/annurev-astro-081915-023354},   \\href {https://ui.adsabs.harvard.edu/abs/2016ARA&A..54...95G} {54, 
95}   \\bibitem{girardi05} {Girardi} L.,  {Groenewegen} M.~A.~T.,  {Hatziminaoglou} E.,   {da Costa} L.,   2005, \\mn@doi [\\aap] {10.1051/0004-6361:20042352}, \\href   {https://ui.adsabs.harvard.edu/abs/2005A&A...436..895G} {436, 895}   \\bibitem{girardi12} {Girardi} L.,  et~al., 2012, \\mn@doi [Astrophysics and Space Science   Proceedings] {10.1007/978-3-642-18418-5_17}, \\href   {https://ui.adsabs.harvard.edu/abs/2012ASSP...26..165G} {26, 165}   \\bibitem{groenewegen02} {Groenewegen} M.~A.~T.,  et~al., 2002, \\mn@doi [\\aap]   {10.1051/0004-6361:20020766}, \\href   {https://ui.adsabs.harvard.edu/abs/2002A&A...392..741G} {392, 741}   \\bibitem{haffner37} {Haffner} H.,  {Heckmann} O.,  1937, Veroeffentlichungen der   Universitaets-Sternwarte zu Goettingen, \\href   {https://ui.adsabs.harvard.edu/abs/1937VeGoe...4...77H} {0004, 77}   \\bibitem{hallakoun20} {Hallakoun} N.,  {Maoz} D.,  2020, arXiv e-prints, \\href   {https://ui.adsabs.harvard.edu/abs/2020arXiv200905047H} {p. arXiv:2009.05047}   \\bibitem{han02} {Han} Z.,  {Podsiadlowski} P.,  {Maxted} P.~F.~L.,  {Marsh} T.~R.,   {Ivanova}   N.,  2002, \\mn@doi [\\mnras] {10.1046/j.1365-8711.2002.05752.x}, \\href   {https://ui.adsabs.harvard.edu/abs/2002MNRAS.336..449H} {336, 449}   \\bibitem{han03} {Han} Z.,  {Podsiadlowski} P.,  {Maxted} P.~F.~L.,   {Marsh} T.~R.,  2003,   \\mn@doi [\\mnras] {10.1046/j.1365-8711.2003.06451.x}, \\href   {https://ui.adsabs.harvard.edu/abs/2003MNRAS.341..669H} {341, 669}   \\bibitem{hartman20} {Hartman} Z.~D.,  {L{\\'e}pine} S.,  2020, \\mn@doi [\\apjs]   {10.3847/1538-4365/ab79a6}, \\href   {https://ui.adsabs.harvard.edu/abs/2020ApJS..247...66H} {247, 66}   \\bibitem{haywood01} {Haywood} M.,  2001, \\mn@doi [\\mnras] {10.1046/j.1365-8711.2001.04510.x}, \\href   {https://ui.adsabs.harvard.edu/abs/2001MNRAS.325.1365H} {325, 1365}   \\bibitem{haywood13} {Haywood} M.,  {Di Matteo} P.,  {Lehnert} M.~D.,  {Katz} D.,   {G{\\'o}mez} A.,   2013, \\mn@doi [\\aap] {10.1051/0004-6361/201321397}, \\href   {https://ui.adsabs.harvard.edu/abs/2013A&A...560A.109H} {560, A109}   \\bibitem{heber09} {Heber} U.,  2009, \\mn@doi [\\araa] {10.1146/annurev-astro-082708-101836}, \\href   {https://ui.adsabs.harvard.edu/abs/2009ARA&A..47..211H} {47, 211}   \\bibitem{hernandez00} {Hernandez} X.,  {Valls-Gabaud} D.,   {Gilmore} G.,  2000, \\mn@doi [\\mnras]   {10.1046/j.1365-8711.2000.03537.x}, \\href   {https://ui.adsabs.harvard.edu/abs/2000MNRAS.316..605H} {316, 605}   \\bibitem{hogg18} {Hogg} D.~W.,  {Foreman-Mackey} D.,  2018, \\mn@doi [\\apjs]   {10.3847/1538-4365/aab76e}, \\href   {https://ui.adsabs.harvard.edu/abs/2018ApJS..236...11H} {236, 11}   \\bibitem{holtzman06} {Holtzman} J.~A.,  {Afonso} C.,   {Dolphin} A.,  2006, \\mn@doi [\\apjs]   {10.1086/507074}, \\href   {https://ui.adsabs.harvard.edu/abs/2006ApJS..166..534H} {166, 534}   \\bibitem{Hubeny1988} {Hubeny} I.,  1988, \\mn@doi [Computer Physics Communications]   {10.1016/0010-4655(88)90177-4}, \\href   {https://ui.adsabs.harvard.edu/abs/1988CoPhC..52..103H} {52, 103}   \\bibitem{Hubeny1995} {Hubeny} I.,  {Lanz} T.,  1995, \\mn@doi [\\apj] {10.1086/175226}, \\href   {https://ui.adsabs.harvard.edu/abs/1995ApJ...439..875H} {439, 875}   \\bibitem{hurley2000} {Hurley} J.~R.,  {Pols} O.~R.,   {Tout} C.~A.,  2000, \\mn@doi [\\mnras]   {10.1046/j.1365-8711.2000.03426.x}, \\href   {https://ui.adsabs.harvard.edu/abs/2000MNRAS.315..543H} {315, 543}   \\bibitem{hurley02} {Hurley} J.~R.,  {Tout} C.~A.,   {Pols} O.~R.,  2002, \\mn@doi [\\mnras]   {10.1046/j.1365-8711.2002.05038.x}, 
\\href   {https://ui.adsabs.harvard.edu/abs/2002MNRAS.329..897H} {329, 897}   \\bibitem{lsst} {Ivezi{\\'c}} {\\v{Z}}.,  et~al., 2019, \\mn@doi [\\apj]   {10.3847/1538-4357/ab042c}, \\href   {https://ui.adsabs.harvard.edu/abs/2019ApJ...873..111I} {873, 111}   \\bibitem{jimenez19} {Jim{\\'e}nez-Esteban} F.~M.,  {Solano} E.,   {Rodrigo} C.,  2019, \\mn@doi [\\aj]   {10.3847/1538-3881/aafacc}, \\href   {https://ui.adsabs.harvard.edu/abs/2019AJ....157...78J} {157, 78}   \\bibitem{khan19} {Khan} S.,  et~al., 2019, \\mn@doi [\\aap] {10.1051/0004-6361/201935304}, \\href   {https://ui.adsabs.harvard.edu/abs/2019A&A...628A..35K} {628, A35}   \\bibitem{kharchenko05} {Kharchenko} N.~V.,  {Piskunov} A.~E.,  {R{\\\"o}ser} S.,  {Schilbach} E.,   {Scholz} R.~D.,  2005, \\mn@doi [\\aap] {10.1051/0004-6361:20042523}, \\href   {https://ui.adsabs.harvard.edu/abs/2005A&A...438.1163K} {438, 1163}   \\bibitem{sdssv} {Kollmeier} J.~A.,  et~al., 2017, arXiv e-prints, \\href   {https://ui.adsabs.harvard.edu/abs/2017arXiv171103234K} {p. arXiv:1711.03234}   \\bibitem{kotoneva02} {Kotoneva} E.,  {Flynn} C.,  {Chiappini} C.,   {Matteucci} F.,  2002, \\mn@doi   [\\mnras] {10.1046/j.1365-8711.2002.05825.x}, \\href   {https://ui.adsabs.harvard.edu/abs/2002MNRAS.336..879K} {336, 879}   \\bibitem{kroupa02} {Kroupa} P.,  2002, \\mn@doi [Science] {10.1126/science.1067524}, \\href   {https://ui.adsabs.harvard.edu/abs/2002Sci...295...82K} {295, 82}   \\bibitem{lallement18} {Lallement} R.,  et~al., 2018, \\mn@doi [\\aap] {10.1051/0004-6361/201832832},   \\href {https://ui.adsabs.harvard.edu/abs/2018A&A...616A.132L} {616, A132}   \\bibitem{lewis15} {Lewis} A.~R.,  et~al., 2015, \\mn@doi [\\apj] {10.1088/0004-637X/805/2/183},   \\href {https://ui.adsabs.harvard.edu/abs/2015ApJ...805..183L} {805, 183}   \\bibitem{lin20} {Lin} J.,  et~al., 2020, \\mn@doi [\\mnras] {10.1093/mnras/stz3048}, \\href   {https://ui.adsabs.harvard.edu/abs/2020MNRAS.491.2043L} {491, 2043}   \\bibitem{lindegren} {Lindegren} L.,  et~al., 2018, \\mn@doi [\\aap] {10.1051/0004-6361/201832727},   \\href {https://ui.adsabs.harvard.edu/abs/2018A&A...616A...2L} {616, A2}   \\bibitem{luri18} {Luri} X.,  et~al., 2018, \\mn@doi [\\aap] {10.1051/0004-6361/201832964}, \\href   {https://ui.adsabs.harvard.edu/abs/2018A&A...616A...9L} {616, A9}   \\bibitem{maiz18} {Ma{\\'\\i}z Apell{\\'a}niz} J.,  {Weiler} M.,  2018, \\mn@doi [\\aap]   {10.1051/0004-6361/201834051}, \\href   {https://ui.adsabs.harvard.edu/abs/2018A&A...619A.180M} {619, A180}   \\bibitem{Marigo_etal_13} {Marigo} P.,  {Bressan} A.,  {Nanni} A.,  {Girardi} L.,   {Pumo} M.~L.,  2013,   \\mn@doi [\\mnras] {10.1093/mnras/stt1034}, \\href   {https://ui.adsabs.harvard.edu/abs/2013MNRAS.434..488M} {434, 488}   \\bibitem{marigo17} {Marigo} P.,  et~al., 2017, \\mn@doi [\\apj] {10.3847/1538-4357/835/1/77}, \\href   {https://ui.adsabs.harvard.edu/abs/2017ApJ...835...77M} {835, 77}   \\bibitem{medina21} {Medina} G.~E.,  {Lemasle} B.,   {Grebel} E.~K.,  2021, \\mn@doi [\\mnras]   {10.1093/mnras/stab1267}, \\href   {https://ui.adsabs.harvard.edu/abs/2021MNRAS.tmp.1230M} {}   \\bibitem{metropolis53} {Metropolis} N.,  {Rosenbluth} A.~W.,  {Rosenbluth} M.~N.,  {Teller} A.~H.,   {Teller} E.,  1953, \\mn@doi [\\jcp] {10.1063/1.1699114}, \\href   {https://ui.adsabs.harvard.edu/abs/1953JChPh..21.1087M} {21, 1087}   \\bibitem{Bertolami_2016} {Miller Bertolami} M.~M.,  2016, \\mn@doi [\\aap] {10.1051/0004-6361/201526577},   \\href {https://ui.adsabs.harvard.edu/abs/2016A&A...588A..25M} {588, A25}   \\bibitem{moe17} {Moe} M.,  {Di 
Stefano} R.,  2017, \\mn@doi [\\apjs] {10.3847/1538-4365/aa6fb6},   \\href {https://ui.adsabs.harvard.edu/abs/2017ApJS..230...15M} {230, 15}   \\bibitem{monteiro20} {Monteiro} H.,  {Dias} W.~S.,  {Moitinho} A.,  {Cantat-Gaudin} T.,   {L{\\'e}pine} J.~R.~D.,  {Carraro} G.,   {Paunzen} E.,  2020, \\mn@doi [\\mnras]   {10.1093/mnras/staa2983}, \\href   {https://ui.adsabs.harvard.edu/abs/2020MNRAS.499.1874M} {499, 1874}   \\bibitem{mor19} {Mor} R.,  {Robin} A.~C.,  {Figueras} F.,  {Roca-F{\\`a}brega} S.,   {Luri} X.,   2019, \\mn@doi [\\aap] {10.1051/0004-6361/201935105}, \\href   {https://ui.adsabs.harvard.edu/abs/2019A&A...624L...1M} {624, L1}   \\bibitem{nandakumar20} {Nandakumar} G.,  et~al., 2020, arXiv e-prints, \\href   {https://ui.adsabs.harvard.edu/abs/2020arXiv201102783N} {p. arXiv:2011.02783}   \\bibitem{noh90} {Noh} H.-R.,  {Scalo} J.,  1990, \\mn@doi [\\apj] {10.1086/168562}, \\href   {https://ui.adsabs.harvard.edu/abs/1990ApJ...352..605N} {352, 605}   \\bibitem{pols97} {Pols} O.,  1997, in {Leung} K.-C.,  ed.,  Astronomical Society of the Pacific   Conference Series Vol.", "130, The Third Pacific Rim Conference on Recent   Development on Binary Star Research.", "p.~153   \\bibitem{nr} {Press} W.~H.,  {Teukolsky} S.~A.,  {Vetterling} W.~T.,   {Flannery} B.~P.,   1992, {Numerical recipes in C. The art of scientific computing}   \\bibitem{Renedo_2010} {Renedo} I.,  {Althaus} L.~G.,  {Miller Bertolami} M.~M.,  {Romero} A.~D.,   {C{\\'o}rsico} A.~H.,  {Rohrmann} R.~D.,   {Garc{\\'\\i}a-Berro} E.,  2010,   \\mn@doi [\\apj] {10.1088/0004-637X/717/1/183}, \\href   {https://ui.adsabs.harvard.edu/abs/2010ApJ...717..183R} {717, 183}   \\bibitem{rochapinto96} {Rocha-Pinto} H.~J.,  {Maciel} W.~J.,  1996, \\mn@doi [\\mnras]   {10.1093/mnras/279.2.447}, \\href   {https://ui.adsabs.harvard.edu/abs/1996MNRAS.279..447R} {279, 447}   \\bibitem{rosenfield16} {Rosenfield} P.,  {Marigo} P.,  {Girardi} L.,  {Dalcanton} J.~J.,  {Bressan}   A.,  {Williams} B.~F.,   {Dolphin} A.,  2016, \\mn@doi [\\apj]   {10.3847/0004-637X/822/2/73}, \\href   {https://ui.adsabs.harvard.edu/abs/2016ApJ...822...73R} {822, 73}   \\bibitem{rowell13} {Rowell} N.,  2013, \\mn@doi [\\mnras] {10.1093/mnras/stt1110}, \\href   {https://ui.adsabs.harvard.edu/abs/2013MNRAS.434.1549R} {434, 1549}   \\bibitem{rubele18} {Rubele} S.,  et~al., 2018, \\mn@doi [\\mnras] {10.1093/mnras/sty1279}, \\href   {https://ui.adsabs.harvard.edu/abs/2018MNRAS.478.5017R} {478, 5017}   \\bibitem{ruiz20} {Ruiz-Lara} T.,  {Gallart} C.,  {Bernard} E.~J.,   {Cassisi} S.,  2020, \\mn@doi   [Nature Astronomy] {10.1038/s41550-020-1097-0}, \\href   {https://ui.adsabs.harvard.edu/abs/2020NatAs...4..965R} {4, 965}   \\bibitem{sollima19} {Sollima} A.,  2019, \\mn@doi [\\mnras] {10.1093/mnras/stz2093}, \\href   {https://ui.adsabs.harvard.edu/abs/2019MNRAS.489.2377S} {489, 2377}   \\bibitem{sollima07} {Sollima} A.,  {Beccari} G.,  {Ferraro} F.~R.,  {Fusi Pecci} F.,   {Sarajedini}   A.,  2007, \\mn@doi [\\mnras] {10.1111/j.1365-2966.2007.12116.x}, \\href   {https://ui.adsabs.harvard.edu/abs/2007MNRAS.380..781S} {380, 781}   \\bibitem{taylor18} {Taylor} M.,  2018, {Tutorial: exploring Gaia data with TOPCAT and STILTS}   \\bibitem{tolstoy09} {Tolstoy} E.,  {Hill} V.,   {Tosi} M.,  2009, \\mn@doi [\\araa]   {10.1146/annurev-astro-082708-101650}, \\href   {https://ui.adsabs.harvard.edu/abs/2009ARA&A..47..371T} {47, 371}   \\bibitem{vanhollebeke09} {Vanhollebeke} E.,  {Groenewegen} M.~A.~T.,   {Girardi} L.,  2009, \\mn@doi   [\\aap] {10.1051/0004-6361/20078472}, \\href   
{https://ui.adsabs.harvard.edu/abs/2009A&A...498...95V} {498, 95}   \\bibitem{vergely02} {Vergely} J.~L.,  {K{\\\"o}ppen} J.,  {Egret} D.,   {Bienaym{\\'e}} O.,  2002,   \\mn@doi [\\aap] {10.1051/0004-6361:20020334}, \\href   {https://ui.adsabs.harvard.edu/abs/2002A&A...390..917V} {390, 917}   \\bibitem{weiler18} {Weiler} M.,  2018, \\mn@doi [\\aap] {10.1051/0004-6361/201833462}, \\href   {https://ui.adsabs.harvard.edu/abs/2018A&A...617A.138W} {617, A138}   \\bibitem{weisz11} {Weisz} D.~R.,  et~al., 2011, \\mn@doi [\\apj] {10.1088/0004-637X/739/1/5}, \\href   {https://ui.adsabs.harvard.edu/abs/2011ApJ...739....5W} {739, 5}   \\bibitem{Woosley2019} {Woosley} S.~E.,  2019, \\mn@doi [\\apj] {10.3847/1538-4357/ab1b41}, \\href   {https://ui.adsabs.harvard.edu/abs/2019ApJ...878...49W} {878, 49}   \\bibitem{merica20} {Yanchulova Merica-Jones} P.,  et~al., 2021, \\mn@doi [\\apj]   {10.3847/1538-4357/abc48b}, \\href   {https://ui.adsabs.harvard.edu/abs/2021ApJ...907...50Y} {907, 50}   \\bibitem{zavada20} {Zavada} P.,  {P{\\'\\i}{\\v{s}}ka} K.,  2020, \\mn@doi [\\aj]   {10.3847/1538-3881/ab5865}, \\href   {https://ui.adsabs.harvard.edu/abs/2020AJ....159...33Z} {159, 33}   \\bibitem{ziegler18} {Ziegler} C.,  et~al., 2018, \\mn@doi [\\aj] {10.3847/1538-3881/aad80a}, \\href   {https://ui.adsabs.harvard.edu/abs/2018AJ....156..259Z} {156, 259}   \\makeatother \\end{thebibliography}   \\appendix   \\section{Examples of binary evolution with BINAPSE} \\label{sec:examples}     In this Section we describe in detail some examples of binary evolution that involve the main evolutionary channels and interaction process in binary systems, namely mass transfer, CE and mergers. Then, we present simulations of simple stellar populations (SSPs) with initial binary fraction equal to 1.0, metallicity $Z=0.007$, distance 1 kpc, initial mass $10^5\\,M_\\odot$ and ages 0.5, 1.0, 3.0, and 12.0 Gyr. All these SSPs have been simulated with TRILEGAL according to the distributions described in Section \\ref{sec:probdist} and by evolving binaries in three different ways: as if they were non-interacting, with the BinaPSE code and with the original BSE code.", "\\subsection{The evolution of an Algol system} \\label{sec:algol} Consider a binary system with the following initial parameters: $m_{\\mathrm{i},1}=1.4\\,M_\\odot$, $m_{\\mathrm{i},2}=1.1\\,M_\\odot$, $P=2.2$ days, $e=0.5$ and metallicity $Z=0.007$. The evolution of this system is illustrated in the three of panels of Fig. \\ref{fig:ex1binapse}. We call primary the initially more massive star, i.e. star 1. The evolutionary path of star 1 is color coded according to the evolutionary phase (see upper legend for details). The black solid lines, instead, represent the evolutionary path of star 2, the secondary. The two stars start their evolution on the zero-age main sequence (ZAMS). We marked the initial positions with open circles. As we can see in the HRD of the upper panel, star 1 evolves faster and it is able to complete the MS phase before filling its Roche lobe during the sub-giant phase. At this moment, marked with a cross in Fig. \\ref{fig:ex1binapse}, orbit circularization and corotation of the stars with the orbit have already been achieved because of tidal interactions. The latter are also responsible for the orbital shrinkage before the RLOF which corresponds to the initial decrease of $P$ visible in the middle panel. Then, star 1 loses mass via RLOF for all the remaining part of the sub-giant phase and also in the red-giant phase. 
In the middle panel we clearly see how the hydrogen-rich material lost by the primary is accreted by the MS secondary during the mass transfer process, while the binary period increases from 1.3 to over 20 days. The separation, $a$, analogously increases from 6.0 to over 42 $R_\odot$. In the lower panel we can easily identify the instants of RLOF onset and end as the intersections between the primary radius, $R_1$, and the radius of its Roche lobe, $R_{\mathrm{lob},1}$. Moreover, in the same panel, we can note that $R_1$ is slightly above $R_{\mathrm{lob},1}$ during the mass transfer phase. This is typical of the so-called thermal mass transfer: it is unstable on a thermal timescale, and the donor does not remain in thermal equilibrium, so it contracts and just fills its Roche lobe. The accretion of the material lost from the primary envelope allows the secondary to move upward along the ZAMS until the mass transfer terminates (open diamond points in Fig. \ref{fig:ex1binapse}). At that point the primary has lost about 1.15 $M_\odot$ and has not yet completed the red-giant phase. The remaining envelope is not massive enough for the star to reach the critical mass for helium ignition, so it leaves the Hayashi line and becomes a 0.25 $M_\odot$ He-WD. On the other hand, the secondary now evolves as a 2.25 $M_\odot$ ZAMS star.

At this point the binary can be defined as an Algol system, i.e.\ a binary where the secondary has become more massive than the primary. The secondary overfills its Roche lobe during the red-giant phase, but this time the mass transfer is of a different nature. The donor is a red giant as in the first case, but the ratio between its mass and that of the companion exceeds a critical value, $q_{\mathrm{crit}}$, defined as in \cite{hurley02}. Under these conditions the radius of the donor increases faster than the Roche lobe radius, and this makes the mass transfer unstable on a dynamical timescale. For this reason we refer to this process as dynamical mass transfer. The consequence of dynamical mass transfer is a CE phase that, in this case, leads to the expulsion of the giant envelope from the system. The result is a double He-WD binary system because, again, the 0.32 $M_\odot$ core of the giant star was not massive enough to ignite helium. After the CE phase the separation is about 0.6 $R_\odot$, but the orbit shrinks because of gravitational radiation and finally the two He-WDs merge. The temperature reached after the merging is assumed to be high enough to ignite the triple-$\alpha$ reaction. The energy released by the nuclear runaway is greater than the binding energy and the binary is completely destroyed. The total lifetime of the binary is about 3.5 Gyr.

\begin{figure}
    \centering
    \includegraphics[width=\columnwidth]{figs_Ex1paper.png}
    \caption{The evolution of the Algol binary system described in Section \ref{sec:algol}. The evolutionary path of star 1, the initially more massive one, is color coded as indicated in the upper legend. Black solid lines represent the evolutionary path of star 2, the secondary. Initial positions are marked with open circles, and the RLOF onset and end with crosses and open diamonds, respectively. The upper panel shows the HRD, cut at $\log(L/L_\odot)=0$. The middle panel shows the dependence between the period and the stellar masses until the end of the mass transfer phase.
The lower panel shows the evolution of the radius of star 1 and how it is affected by the Roche lobe radius.}
\label{fig:ex1binapse}
\end{figure}

\subsection{The formation of naked helium stars}
\label{sec:hestars}
Naked helium stars form when, in a binary system, the hydrogen envelope of one of the two components is removed by a mass transfer event or even expelled from the system after a CE phase. In this example we see two possible evolutionary channels leading to the formation of a naked helium star. We consider a binary system with $m_{\mathrm{i},1}=4.0\,M_\odot$, $m_{\mathrm{i},2}=3.0\,M_\odot$, $P=0.5$ yr, $e=0.5$ and metallicity $Z=0.007$. The evolution of the two stars in the HRD is shown in Fig.~\ref{fig:ex2binapse}: the primary path is in red, the secondary path in blue. Open circles mark the initial positions on the ZAMS, and matching letters link the positions of the stars before and after a CE phase or any sudden evolutionary transformation. The alphabetical order of the letters reflects the chronological order of such events. The primary is able to reach the early asymptotic giant branch phase (E-AGB) before filling its Roche lobe (position `a'). At that moment the secondary is still on the MS and a CE phase begins. The hydrogen envelope of the primary is lost by the system, leaving a so-called helium Hertzsprung-gap star (HeHG star) of $1.06$ $M_\odot$ located at $(\log(T_{\mathrm{eff}}/\mathrm{K}),\,\log(L/L_\odot)) \simeq (4.8,\,2.8)$. This is a naked helium star with a CO core as massive as the progenitor's core, burning He in a shell. It evolves very quickly towards lower temperatures and after a few Myr it has lost the helium envelope through winds (position `b'), leaving a $0.66$ $M_\odot$ CO-WD. HeHG stars more massive than this experience mass-loss rates so high that they can be classified as Wolf-Rayet stars \cite{Woosley2019}, and their peculiar composition makes them suitable candidates as progenitors of Supernovae Ib and Ic \cite{pols97,Dessart2020}. In the subsequent evolution, when the secondary reaches position `c' during the RG phase, a second CE phase begins. Again the hydrogen envelope of the giant is removed and lost by the system. Therefore, the secondary becomes a $0.47$ $M_\odot$ helium main sequence star (HeMS star), i.e.\ a naked helium star that is burning helium in the core. The HeMS phase generally lasts longer than the HeHG phase, but even so it is quite fast (about $110$ Myr in this case).

HeMS stars form the hot-subdwarf group that is very well populated in Gaia HRDs. The mass of the HeMS star is around $0.47\,M_\odot$, so it is not able to produce a significant carbon core; it avoids the HeHG phase and becomes a $0.33$ $M_\odot$ He-WD. The final system is a double-degenerate system, with a CO-WD and a He-WD, that survives at least until $12$~Gyr, when the simulation is stopped.

\begin{figure}
    \centering
    \includegraphics[width=\columnwidth]{figs_Ex2paper.png}
    \caption{The evolution in the HRD of the binary system described in Section \ref{sec:hestars}. The evolutionary paths of star 1, the initially more massive one, and of star 2 are shown as the red and blue solid lines, respectively. Open circles mark the initial positions on the ZAMS, and matching letters link the positions of the stars before and after a CE phase or any sudden evolutionary transformation.
The alphabetical order of the letters reflects the chronological order of such events.}
\label{fig:ex2binapse}
\end{figure}

\subsection{The birth of a new E-AGB star}
\label{sec:eagbstar}
We now consider a binary system with $m_{\mathrm{i},1}=3.93\,M_\odot$, $m_{\mathrm{i},2}=1.71\,M_\odot$, $P=0.35$ yr, $e=0.66$ and metallicity $Z=0.007$. The peculiarity of this system with respect to the previous examples is the merger of a CO-WD with a HG star, whose outcome is a new E-AGB star. The evolution of the two stars in the HRD is shown in Fig. \ref{fig:ex3binapse}: the primary path is in red, the secondary path in blue, and the new E-AGB star path is in magenta. Open circles mark the initial positions on the ZAMS. The primary evolves until it fills its Roche lobe during the red-giant phase, with the formation of a CE. The hydrogen envelope of the primary is lost by the system, leaving a 0.68 $M_\odot$ HeMS star which evolves into a 0.60 $M_\odot$ CO-WD. The secondary, still close to the initial state, completes the MS phase and fills its Roche lobe during the HG phase. Thermal mass transfer allows the formation of an accretion disk around the CO-WD. The CO-WD accretes part of the hydrogen, which is burnt into helium on the WD surface. Nova explosions occur and are responsible for the ejection of a fraction of the accreted material. At this point the binary system can be classified as a cataclysmic variable (CV). In our case the CO-WD mass does not grow enough and the supernova explosion does not take place. On the other hand, the accretion rate, $\dot{m}_1$, constantly increases. At first it reaches the value of $1.03\cdot 10^{-7}\,M_\odot\,\mathrm{yr}^{-1}$, at which the nova sequence stops and a steady accretion phase begins. During this phase the binary is a supersoft X-ray source. Then $\dot{m}_1$ increases above $2.71\cdot 10^{-7}\,M_\odot\,\mathrm{yr}^{-1}$ and the CO-WD should become a TP-AGB star. However, immediately after the transformation of the WD into a TP-AGB star, the system passes through another CE phase and the final coalescence of the two components. The outcome of the merging is a new 1.86 $M_\odot$ E-AGB star. In Fig. \ref{fig:ex3binapse}, the progenitors are marked with crosses and the position of the new E-AGB star is marked with an open diamond. The E-AGB star evolves as a single star: it passes through the TP-AGB phase and finally becomes a 0.69 $M_\odot$ CO-WD. Details about the formation of merger products can be found in \cite{hurley02}. The simulation is stopped at 6 Gyr.

\begin{figure}
    \centering
    \includegraphics[width=\columnwidth]{figs_Ex3paper.png}
    \caption{The evolution in the HRD of the binary system described in Section \ref{sec:eagbstar}. The evolutionary paths of star 1, the initially more massive one, and of star 2 are shown as the red and blue solid lines, respectively. Open circles mark the initial positions on the ZAMS. The crosses and the open diamond mark the positions of the binary components just before the coalescence and the product of the merging, respectively.}
\label{fig:ex3binapse}
\end{figure}

\subsection{Binary evolution: comparisons with BSE}
\label{sec:BSEcomp}
The original BSE code is publicly available, and anyone can compare the evolution of the binary systems described above with the BSE output. The BinaPSE and BSE results are qualitatively similar, i.e.\ they predict the same sequence of binary interactions and evolutionary phases, and the same final products.
However, quantitative differences are especially visible in the HRD and in the evolutionary timescales. As an example, we show in Fig. \ref{fig:ex1bse} the evolution of the binary system of Sec. \ref{sec:algol} simulated with BSE. We superimpose on the HRD (upper panel) the evolutionary tracks predicted with BinaPSE (dashed lines). In the lower panel we compare the radius of the primary star in the two simulations, and we note that mass transfer begins about 120 Myr later in the BSE simulation and lasts about 150 Myr longer. As a consequence of this and of the different evolutionary timescales of the secondary, the final disruption of the system is postponed by 350 Myr. Finally, we recall that the BSE analytic formulae do not take into account the first dredge-up. Therefore, BSE is not able to reproduce the RGB bump of stellar populations.

\begin{figure}
    \centering
    \includegraphics[width=\columnwidth]{figs_Ex1bse.png}
    \caption{The evolution in the HRD of the binary system described in Section \ref{sec:algol}, as modelled by BSE in comparison with predictions by BinaPSE. The BSE evolutionary tracks follow the same notation as in Fig. \ref{fig:ex1binapse}. Coloured dashed lines indicate the evolution of star 1 with BinaPSE. The black dashed line in the upper panel is the evolution of star 2 with BinaPSE.}
\label{fig:ex1bse}
\end{figure}

It is worth recalling that the PARSEC tracks result from a decades-long work of fine-tuning of stellar parameters such as the mixing-length parameter, the helium-to-metal enrichment ratio, and the efficiency of envelope and core overshooting. The adopted parameters allow these tracks to reproduce, in an acceptable way, a wide variety of observations of stellar aggregates dominated by single stars. Moreover, the algorithms we use to interpolate the PARSEC tracks as a function of mass and metallicity are extensively tested, and are the same as those used to produce the widely used PARSEC isochrones. Therefore, it is reasonable to expect that these tracks represent an improvement over the fitting relations originally adopted in BSE.

\subsection{Simulation of SSPs with BinaPSE and BSE}
\label{sec:binapseXbse}

\begin{figure*}
    \centering
    \includegraphics[width=0.88\textwidth]{figs_Clusterspaper.png}
    \caption{Gaia CMDs of SSP simulations with initial binary fraction equal to 1.0, metallicity $Z=0.007$, distance 1 kpc, initial mass $10^5\,M_\odot$, and ages 0.5, 1.0, 3.0, and 12.0 Gyr. Left column: simulations with non-interacting binaries. Central column: simulations with BinaPSE. Right column: simulations with BSE. PARSEC isochrones are superimposed on the BSE simulations as red solid lines. The isochrones show a discontinuity in the TP-AGB part that is due to the increase of the surface abundance ratio C/O above unity: when C/O becomes greater than 1, the stellar spectra of TP-AGB stars change dramatically \cite{marigo17}.}
\label{fig:clusters}
\end{figure*}

Simulations of simple stellar populations (SSPs) with initial binary fraction equal to 1.0, metallicity $Z=0.007$, distance 1 kpc, initial mass $10^5\,M_\odot$ and ages 0.5, 1.0, 3.0, and 12.0 Gyr are shown in the CMDs of Fig.~\ref{fig:clusters}. They are built in the Gaia DR2 photometric system. Binary systems are generated by TRILEGAL according to the distributions described in Section \ref{sec:probdist}.
These SSPs have been simulated in three different ways: as if all binary systems were non-interacting, with the BinaPSE code, and with the BSE code (left, central and right columns of Fig. \ref{fig:clusters}, respectively). For these plots, all binaries are assumed not to be resolved, i.e.\ the fluxes of the two components in the Gaia filters are summed together, and each point represents either a binary system or a single star resulting from binary interactions.

We first analyse the simulations with non-interacting binaries and compare them with the BinaPSE simulations. In the 0.5 Gyr old SSP we can easily notice the absence of hot subdwarfs at $G\sim 15$ and of MS stars above the turn-off, when binary interactions are neglected. Moreover, binary interactions lead to an under-populated AGB, especially at younger ages, and to a greater complexity in the WD region. The latter is due to the presence of He-WDs, which are another peculiarity of binary evolution, and of ONe-WDs, which are not included in the simulations without binary interactions. Common features between the BinaPSE and non-interacting-binaries simulations are: the binary MS, which leads to a double turn-off; some CHeB stars spread towards bluer colors because of a MS companion; and, at younger ages, CHeB-CHeB binary systems found at higher luminosity with respect to the other stars or binary systems in the CHeB region of the CMD. Finally, notice the binary CO-WD cooling sequence and the stream of MS-COWD binaries that connects the WD region with the MS.

On the other hand, Fig.~\ref{fig:clusters} allows us to emphasise the main differences between BinaPSE and BSE synthetic stellar populations. BinaPSE evolution is based on the evolutionary tracks described in Section \ref{sec:binapse_code}, so binary systems lie on or around the corresponding isochrone in a SSP simulation. BSE simulations are based on the evolutionary tracks of \cite{hurley2000}, which lead to different isochrones. In order to facilitate comparisons, we superimpose PARSEC isochrones on the BSE simulations as red solid lines. The most remarkable differences can be observed in the post-MS phases: the predicted Hayashi line is systematically steeper in BSE than for the PARSEC isochrones. As a consequence, the color distribution in a BSE simulation is shifted towards the blue with respect to BinaPSE simulations. The shape of the CHeB part of the isochrones presents important differences: starting from the 3-Gyr old SSPs, PARSEC isochrones lead to the formation of a compact red clump, while BSE simulations always show a blue-loop-like shape.

The lower main sequences also show different slopes. At very low masses the MS of binaries is interrupted before the MS of single stars, because of the lower mass limit of 0.1 $M_\odot$ in both BinaPSE and BSE. As for hot subdwarfs, we notice that in both kinds of simulations they disappear at older ages, but stellar counts are significantly lower in the BinaPSE case. This fact can be explained by the adoption of different HeMS timescales. The binary systems composed of a HeMS star and a MS star, which are located in the HRD between the HeMS group and the MS, follow the same rule, so in BinaPSE they are present only at very young ages. Finally, the two codes also differ in the predicted stellar counts of ONe-WDs. These WDs lie on a sequence, well populated in BSE simulations with ages up to 3 Gyr, beginning from the tail of the CO-WD cooling sequence and extending towards the lower left corner of the CMD.
ONe-WDs are very few in BinaPSE simulations, highlighting the different IFMR with respect to BSE.", "Finally, we find that BinaPSE simulations require at most 3 times the computational time with respect to BSE simulations. However, we consider the accuracy reached by BinaPSE as a good reward for this additional time. Codes have been executed by using a single Intel Core i7-8750H CPU with a clock rate of 2.20 GHz on a personal laptop. We have registered a maximum CPU time of around 10 min for the 12 Gyr old SSP simulated with BinaPSE.", "\\section{Testing the HRD-fitting method} \\label{sec:testing}   \\begin{table*}     \\caption{The results for 3 exercises of mock CMD fitting (set1 to set3). Mocks are created with the SFR, metallicities, and binary fraction listed in the columns with ``(in)''. The next column then lists the results for the median and 68\\% confidence interval, either divided by the input value (in the cases of the SFR and binary fraction), or with the difference between output and input values in the case of metallicities. The second row indicates the total numbers of simulated stars, which is, by construction, comparable with the observed value of 485\\,833 stars.}", "\\label{tab:mock_solutions_all}     \\centering     \\begin{tabular}{ccccccc}     \\toprule     & \\multicolumn{2}{c}{\\textbf{set1}} & \\multicolumn{2}{c}{\\textbf{set2}} & \\multicolumn{2}{c}{\\textbf{set3}} \\\\     & \\multicolumn{2}{c}{\\textbf{(535\\,995 stars)}} & \\multicolumn{2}{c}{\\textbf{(461\\,418 stars)}} & \\multicolumn{2}{c}{\\textbf{(422\\,307 stars)}} \\\\       \\cmidrule(lr){1-1} \\cmidrule(lr){2-3}  \\cmidrule(lr){4-5}  \\cmidrule(lr){6-7}     age bin & SFR(in) & SFR(out)/SFR(in) & SFR(in) & SFR(out)/SFR(in) & SFR(in) & SFR(out)/SFR(in) \\\\     $\\logtyr$ & [$10^{-3}\\Msun\\,\\mathrm{yr}^{-1}$] &  & [$10^{-3}\\Msun\\,\\mathrm{yr}^{-1}$] &  & [$10^{-3}\\Msun\\,\\mathrm{yr}^{-1}$] &  \\\\       \\cmidrule(lr){1-1} \\cmidrule(lr){2-3}  \\cmidrule(lr){4-5}  \\cmidrule(lr){6-7}       6.6-7.1 & 0.015 &  1.188 (+0.332/-0.297) & 0.060 &  1.225 (+0.364/-0.323) & 0.135 &  0.801 (+0.148/-0.119) \\\\       7.1-7.3 & 0.060 &  1.871 (+0.436/-0.400) & 0.060 &  0.322 (+0.531/-0.241) & 0.120 &  0.494 (+0.523/-0.357) \\\\       7.3-7.5 & 0.120 &  1.183 (+0.536/-0.565) & 0.015 &  3.947 (+3.651/-2.863) & 0.075 &  0.516 (+0.744/-0.360) \\\\       7.5-7.7 & 0.075 &  0.837 (+0.724/-0.610) & 0.075 &  0.474 (+0.532/-0.359) & 0.045 &  0.614 (+0.816/-0.464) \\\\       7.7-7.9 & 0.090 &  0.432 (+0.499/-0.319) & 0.060 &  1.303 (+0.662/-0.744) & 0.015 &  4.223 (+2.886/-2.607) \\\\       7.9-8.1 & 0.060 &  1.242 (+0.629/-0.550) & 0.045 &  0.693 (+0.562/-0.409) & 0.060 &  1.599 (+0.682/-0.611) \\\\       8.1-8.3 & 0.090 &  0.843 (+0.265/-0.249) & 0.030 &  1.178 (+0.524/-0.593) & 0.120 &  0.764 (+0.185/-0.217) \\\\       8.3-8.5 & 0.045 &  1.059 (+0.262/-0.248) & 0.105 &  1.069 (+0.102/-0.104) & 0.030 &  1.431 (+0.367/-0.344) \\\\       8.5-8.7 & 0.135 &  1.007 (+0.053/-0.067) & 0.105 &  1.014 (+0.070/-0.079) & 0.090 &  0.878 (+0.076/-0.070) \\\\       8.7-8.9 & 0.075 &  1.023 (+0.061/-0.066) & 0.090 &  0.973 (+0.056/-0.050) & 0.030 &  0.954 (+0.114/-0.115) \\\\       8.9-9.1 & 0.075 &  1.010 (+0.050/-0.052) & 0.135 &  1.044 (+0.039/-0.037) & 0.120 &  1.067 (+0.028/-0.024) \\\\       9.1-9.3 & 0.105 &  0.960 (+0.035/-0.029) & 0.135 &  0.970 (+0.029/-0.033) & 0.075 &  0.992 (+0.030/-0.032) \\\\       9.3-9.5 & 0.105 &  0.982 (+0.025/-0.023) & 0.090 &  1.018 (+0.034/-0.035) & 0.135 &  0.983 (+0.019/-0.022) \\\\       
9.5-9.7 & 0.030 &  0.992 (+0.059/-0.057) & 0.015 &  1.001 (+0.180/-0.170) & 0.135 &  1.012 (+0.027/-0.030) \\\\       9.7-9.9 & 0.105 &  1.018 (+0.016/-0.019) & 0.120 &  0.996 (+0.018/-0.019) & 0.120 &  0.992 (+0.023/-0.022) \\\\      9.9-10.1 & 0.105 &  1.007 (+0.010/-0.011) & 0.090 &  1.005 (+0.014/-0.012) & 0.030 &  1.000 (+0.030/-0.025) \\\\       \\cmidrule(lr){1-1} \\cmidrule(lr){2-3}  \\cmidrule(lr){4-5}  \\cmidrule(lr){6-7}     & {[Fe/H]}(in) & {[Fe/H](out)-[Fe/H](in)} & {[Fe/H]}(in) & {[Fe/H](out)-[Fe/H](in)} & {[Fe/H]}(in) & {[Fe/H](out)-[Fe/H](in)} \\\\     & [dex] & [dex] & [dex] & [dex] & [dex] & [dex] \\\\       \\cmidrule(lr){1-1} \\cmidrule(lr){2-3}  \\cmidrule(lr){4-5}  \\cmidrule(lr){6-7} 6.6-7.1  &  0.2153 &   0.0007 (+0.0035/-0.0036) &  0.4313 & 0.0095 (+0.0037/-0.0037) & -0.0007 &   0.0047 (+0.0035/-0.0038) \\\\       9.1-9.3  & -0.0621 &  -0.0016 (+0.0025/-0.0025) &  0.1539 & 0.0024 (+0.0026/-0.0026) &  0.1595 &   0.0018 (+0.0024/-0.0025) \\\\       9.9-10.1 & -0.2306 &   0.0018 (+0.0013/-0.0012) & -0.2066 & 0.0002 (+0.0012/-0.0007) & -0.1586 &  -0.0008 (+0.0027/-0.0026) \\\\       \\cmidrule(lr){1-1} \\cmidrule(lr){2-3}  \\cmidrule(lr){4-5}  \\cmidrule(lr){6-7}            & f$_{\\text{bin}}$(in) & f$_{\\text{bin}}$(out)/f$_{\\text{bin}}$(in) & f$_{\\text{bin}}$(in) & f$_{\\text{bin}}$(out)/f$_{\\text{bin}}$(in) & f$_{\\text{bin}}$(in) & f$_{\\text{bin}}$(out)/f$_{\\text{bin}}$(in) \\\\       \\cmidrule(lr){1-1} \\cmidrule(lr){2-3}  \\cmidrule(lr){4-5}  \\cmidrule(lr){6-7} all ages           & 0.5 & 1.002 (+0.007/-0.007) & 0.9 & 0.998 (+0.004/-0.005) & 0.8 & 1.001 (+0.004/-0.005) \\\\     \\cmidrule(lr){1-1} \\cmidrule(lr){2-3}  \\cmidrule(lr){4-5}  \\cmidrule(lr){6-7}      & & $-\\ln\\mathcal{L}$ & & $-\\ln\\mathcal{L}$ & & $-\\ln\\mathcal{L}$ \\\\     \\cmidrule(lr){1-1} \\cmidrule(lr){2-3}  \\cmidrule(lr){4-5}  \\cmidrule(lr){6-7}     all ages & & 964.5 & & 935.4 & & 986.0 \\\\     \\bottomrule     \\end{tabular} \\end{table*}       \\begin{figure*}      \\centering      \\includegraphics[width=\\textwidth]{figs_summary_mock_set1_noisy_filled_solid.png} \\caption{Results for the test ``set1''. The first two panels show the observed and simulated Hess diagrams, while the third one shows the residuals. The rightmost panels show the SFR$(t)$ (upper panel) and AMR (lower panel) for each age bin with their respective 68\\% and 95\\% confidence limits, shown as shaded lines. The black line shows the ``true'' solution used to create the observed Hess diagram.}", "\\label{fig:summary_set1_noisy}  \\end{figure*}   We describe here a series of experiments aimed at testing the capacity of our method in finding the best-fit solution, and the reliability of the error bars.", "The first point to mention is that we have initially tested the code by fitting mock HRDs containing only white noise, at several levels. For instance, we produce a noisy Hess diagram with 100 bins randomly populated from a Poisson distribution with an average rate of 20 stars per bin. This diagram is then fitted with our MCMC code, producing a result of $19.365(-0.427,+0.438)$ for the median and the 68\\% confidence interval. The error derived agrees with the $\\sigma=0.447$ value expected from a simple analytical formula. For a similar experiment with an average rate of 2 stars per bin, the result is $1.948(-0.134,+0.144)$, whereas the expected error is of $\\sigma=0.141$. 
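As an illustration of the white-noise exercise just described, the short sketch below fits a single Poisson rate to 100 randomly populated bins using the Poisson likelihood and a simple Metropolis sampler; the random seed, proposal step and chain length are arbitrary choices, and this is only a minimal stand-in for the fitting code actually used.

```python
import numpy as np

rng = np.random.default_rng(42)

# Mock Hess diagram: 100 bins randomly populated from a Poisson distribution
# with an average rate of 20 stars per bin.
n_bins, true_rate = 100, 20.0
counts = rng.poisson(true_rate, size=n_bins)

def log_likelihood(rate):
    # Poisson log-likelihood up to a constant (the ln n! term does not depend on the rate).
    if rate <= 0.0:
        return -np.inf
    return np.sum(counts * np.log(rate) - rate)

# Simple Metropolis sampler for the single rate parameter.
n_steps, step = 20000, 0.5
chain = np.empty(n_steps)
rate, logl = true_rate, log_likelihood(true_rate)
for i in range(n_steps):
    proposal = rate + step * rng.normal()
    logl_prop = log_likelihood(proposal)
    if np.log(rng.uniform()) < logl_prop - logl:   # Metropolis acceptance rule
        rate, logl = proposal, logl_prop
    chain[i] = rate

burn_in = chain[n_steps // 4:]
med, lo, hi = np.percentile(burn_in, [50, 16, 84])
print(f"median = {med:.3f}  (+{hi - med:.3f} / -{med - lo:.3f})")
print(f"analytic expectation: sigma = {np.sqrt(true_rate / n_bins):.3f}")
```

For an average rate of 20 stars per bin the recovered 68% interval should come out close to plus or minus 0.45, in line with the sigma = 0.447 value quoted above.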
This kind of test ensures the correctness of the likelihood formula in eq.~\\ref{eq:likelihood} -- and hence the correct size of our derived error bars -- even for very low levels of star counts.", "Then, we tested the fitting with mock catalogues of the Gaia 200 ~pc sample. These were produced from the same PMs we employed to determine the SFH of the Solar Neighbourhood using known input SFR$(t)$, AMRs and \\fbin. At first we tested the code using a flat SFR$(t)$ and AMR, and the code was able to recover the original parameters with very high accuracy. However, this case is not very realistic. To simulate as much as possible a real fitting, we switched to randomly-chosen SFR$(t)$, AMRs, and \\fbin\\ values. In short, the input models are built with SFR$(t)$ that randomly assume, at any age bin, one of 10 equally-spaced values of SFR between $1.5 \\times 10^{-5}\\Msun/yr$ and $13.5 \\times 10^{-5}\\Msun/yr$. These SFR$(t)$ values ensure that the numbers of simulated stars is always similar to the one observed in our Gaia data (485\\,833 stars, Sect.~\\ref{sec:culling}).", "Similarly, the metallicity shifts assume values between $-0.36$~dex and $0.36$~dex with steps of $0.1$~dex, and the \\fbin\\ assumes any value between $0$ and $1$ with steps of $0.1$.", "One of these mock tests, namely ``set1'', is illustrated in Fig.~\\ref{fig:summary_set1_noisy}. The first panel to the left shows the ``observed'' Hess diagram, while the second panel shows the model recovered by the code. It is evident that our code determined a very good solution in this case, as highlighted by a very low value of $-\\ln \\mathcal{L}$ and the very low level of residuals, shown in the third panel. The two rightmost panels illustrate the recovered SFH and their errors. As shown by the top panel, the uncertainties on the SFR$(t)$ from $\\logtyr=7.1$ to $\\logtyr=8.1$ are significantly larger than for other age intervals. The errors in the output metallicities, however, are always very small, and the same happens for the binary fraction: while the input \\fbin\\ is 0.5, the recovered value is $0.501\\pm0.004$.", "Table~\\ref{tab:mock_solutions_all} presents all the input values for this model, plus two other similar models, compared to the output values and their confidence intervals. As can be seen, all the output values turn out to agree with input values within their 68\\% confidence intervals, with the exception of a couple of SFR$(t)$ values at young ages (for instance, for the age intervals 7.1-7.3 and 7.7-7.9 in ``set1''), which would nevertheless still agree with the input values within their 95\\% confidence intervals.  Similar levels of ``mild disagreement'' are also found for the metallicity values at young ages, but in this case the confidence intervals are extremely narrow (of the order of 0.004~dex) -- so narrow that this discrepancy is likely irrelevant compared to those caused by systematic errors in the stellar models. As for the binary fraction, it is always well recovered within the 68\\% confidence interval.", "Another point which is apparent in Fig.~\\ref{fig:summary_set1_noisy}, is the low degree of correlation in the SFR$(t)$ found for adjacent age bins, especially at old ages. Let us look for instance at the pronounced SFR$(t)$ peak present in the mock data at the age interval $8.5<\\logtyr<8.7$, or the drop in SFR$(t)$ at $9.5<\\logtyr<9.7$. These marked features did not ``spread'' into neighbouring age bins, in the recovered solution. 
This is a consequence not only of the ideal conditions at which these mock simulations are performed, but also of the small size of the HRD bins we adopted for the Gaia data: indeed, our 0.2-dex wide age bins produce PMs with HRD sequences typically much wider than the $0.04\\,\\mathrm{mag}\\times0.04\\,\\mathrm{mag}$ bins in which the HRDs were split. Therefore, there are large numbers of stars in HRD locations (as the MS turn-off regions, or the subgiant branch) that can be reproduced by models in just one age bin. In this case, the correlation in the SFR between adjacent age bins is much weaker than in the case of more distant galaxies, or more distant Gaia samples -- where photometric errors or extinction can create wide PM sequences in the HRD, creating ample superposition between adjacent age bins.", "Overall, these tests show that our code can recover, without significant issues, the parameter set used to build the mock Hess diagrams. It also supports the very small relative error bars found at intermediate and old ages: they are a consequence of the very large star counts present in these mock catalogues at these age intervals, and of the low degree of correlation between adjacent age bins.", "On the other hand, these extremely good results are made possible by the perfect consistency between the stellar models used for build the mock data, and those used in the model fitting. In the real world, this consistency does not exist -- although it is assumed in every single work of CMD fitting in the literature. The larger residuals we find when fitting the real data in Sect.~\\ref{sec:method}, are very likely a consequence of systematic errors in the models, that cannot be properly tested with mock data.", "\\bsp\t\\label{lastpage} \\end{document}" ] ]
2107.01844
[ [ "Beyond the spontaneous scalarization: New fully nonlinear dynamical\n mechanism for formation of scalarized black holes" ], [ "Abstract In the present letter we show the existence of a fully nonlinear dynamical mechanism for the formation of scalarized black holes which is different from the spontaneous scalarization.", "We consider a class of scalar-Gauss-Bonnet gravity theories within which no tachyonic instability can occur.", "Although the Schwarzschild black holes are linearly stable against scalar perturbations, we show dynamically that for certain choices of the coupling function they are unstable against nonlinear scalar perturbations.", "This nonlinear instability leads to the formation of new black holes with scalar hair.", "The fully nonlinear and self-consistent study of the equilibrium black holes reveals that the spectrum of solutions is more complicated and more than one scalarized branch can exist.", "We have also considered classes of scalar-Gauss-Bonnet theories where both the standard and the nonlinear scalarization can be present, and they are smoothly connected that completes in an interesting way the picture of black hole scalarization.", "The fully nonlinear (de)scalarization of a Schwarzschild black hole will always happen with a jump because the stable \"scalarized branch\" is not continuously connected to the Schwarzschild one that can leave clear observational signatures." ], [ "Introduction", "Spontaneous scalarization is a dynamical mechanism for endowing black holes (and other compact objects) with scalar hair without altering the predictions in the weak field limit [1], [2], [3], [4], [5].", "It is a strong gravity phase transition triggered by tachyonic instability due to the non-minimal coupling between the scalar field(s) and the spacetime curvature (or matter).", "The realistic black hole spontaneous scalarization was mainly studied within the scalar-Gauss-Bonnet (sGB) gravity defined by the action $S=&&\\frac{1}{16\\pi }\\int d^4x \\sqrt{-g}\\Big [R - 2\\nabla _\\mu \\varphi \\nabla ^\\mu \\varphi + \\lambda ^2 f(\\varphi ){\\cal R}^2_{GB} \\Big ] .$ where $R$ is the Ricci scalar with respect to the spacetime metric $g_{\\mu \\nu }$ , $\\varphi $ denotes the scalar field with a coupling function $f(\\varphi )$ , $\\lambda $ is the so-called Gauss-Bonnet coupling constant having dimension of $length$ and ${\\cal R}^2_{GB}=R^2 - 4 R_{\\mu \\nu } R^{\\mu \\nu } + R_{\\mu \\nu \\alpha \\beta }R^{\\mu \\nu \\alpha \\beta }$ is the Gauss-Bonnet invariant with $R_{\\mu \\nu \\alpha \\beta }$ being the curvature tensor.", "Black hole spontaneous scalarization was extensively studied (see e.g.", "[3], [4], [5], [6], [7], [8], [9], [10], [11]), including the rotating case [12], [13] and the spin-induced scalarization [14], [15], [16], [17], [18].", "It turn out that these black holes are energetically favorable over the GR solutions and they are stable [19], [20], [21].", "The nonlinear dynamics of scalarized black holes in sGB gravity, including mergers and stellar core-collapse, was examined in [22], [23], [24], [25], [26].", "Spontaneous scalarization was also considered in other alternative theories of gravity [27], [28], [29], [30], [31], [32].", "In the present work we would like to go beyond the spontaneous scalarization.", "We pose the following problem.", "Let us consider sGB theories with coupling functions that do not allow for a tachyonic instability to occurs, i.e.", "sGB theories which do not exhibit spontaneous scalarization.", "Then we ask: Is 
there another dynamical mechanism for producing scalar hair for black holes?", "The purpose of this paper is to answer this question.", "We identify a class of sGB theories with coupling between the scalar field and the Gauss-Bonnet invariant that cannot exhibit spontaneous scalarization for black holes but admits all of the stationary solutions of general relativity, including the Schwarzschild black holes.", "Within the mentioned class of sGB theories the Schwarzschild black holes are linearly stable.", "However, we show dynamically that they are unstable against nonlinear scalar perturbations within the sGB gravity and that the nonlinear instability leads to the formation of new black holes with scalar hair.", "In Sec.", "II we discuss the nonlinear instability of the Schwarzschild black hole in the mentioned class of sGB gravity and the dynamical formation of scalarized black holes for a certain Gauss-Bonnet coupling function.", "The results for the equilibrium scalarized black hole branches are presented in Sec.", "III.", "The paper ends with conclusions.", "In the appendixes we present the necessary mathematical background as well as our results for a second Gauss-Bonnet coupling function." ], [ "Nonlinear instability of the Schwarzschild black hole and dynamical formation of scalarized black holes", "As discussed above we consider sGB theories which cannot exhibit tachyonic instability.", "Therefore we impose the following conditions on the Gauss-Bonnet coupling function $f(0)=0, \\; \\; \\frac{df}{d\\varphi }(0)=0, \\;\\; \\frac{d^2f}{d\\varphi ^2}(0)=0.", "$ The first condition can always be imposed since the field equations include only the first derivative of the coupling function but not the coupling function itself.", "The second condition guarantees that the Schwarzschild solution is also a black hole solution to the equations of the sGB gravity with $\\varphi =0$ .", "It is not difficult to see that the third condition $\\frac{d^2f}{d\\varphi ^2}(0)=0$ imposed on the Gauss-Bonnet coupling function leads to the fact that no tachyonic instability is possible – the Schwarzschild solution is stable against linear scalar perturbations.", "Indeed, in the case of $ \\frac{d^2f}{d\\varphi ^2}(0)=0$ the equation governing the scalar perturbations is just the scalar wave equation, and it is a well-known fact that the Schwarzschild black hole is stable against linear perturbations of a massless scalar field.", "Therefore, if black holes with scalar hair exist, they should form a new black hole phase coexisting with the usual Schwarzschild black hole phase.", "It will be called a scalarized black hole phase.", "We will investigate how the scalarized phases of the Schwarzschild black hole can be dynamically formed and whether they are stable.", "Since the study of the full nonlinear dynamics of black holes in sGB gravity is a complicated task, we shall base our study on an approximate model which is free from heavy technical complications but preserves the leading role of the nonlinearity associated with the coupling function.", "More precisely, we shall consider a model where the spacetime geometry is kept fixed and the whole dynamics is governed by the nonlinear equation for the scalar field.", "This dynamical model is a very good approximation for example when the energy of the scalar hair is less than the full mass of the black hole [24].", "Technically speaking, we will consider the nonlinear wave equation () on the Schwarzschild background geometry with the requirements that the scalar
field perturbation has the form of an outgoing wave at infinity and an ingoing wave at the black hole horizon.", "We will follow closely the methodology described in [15], [24].", "The conditions for the existence of scalarized phases (REF ) are easily satisfied by a coupling function containing powers of $\\varphi $ higher than two.", "In the present paper we are focusing on $Z_2$ symmetric theories and the first two possible choices are $\\varphi ^4$ and $\\varphi ^6$ .", "In the case when the coupling functions are directly proportional to $\\varphi ^4$ and $\\varphi ^6$ , though, we could not find any stable solutions.", "This observation is similar to the case of standard scalarization in sGB theories where different ways to “stabilize” the scalarized black hole solutions were proposed [6], [7], [8], [9].", "Perhaps the most straightforward approach, and the easiest to handle from a numerical point of view, is to consider an exponential coupling function.", "In our numerical simulations we consider two forms of the coupling function: $f_1(\\varphi )= \\frac{1}{4\\beta }\\left(1- e^{-\\beta \\varphi ^4}\\right), \\;\\;\\;f_2(\\varphi )= \\frac{1}{6\\beta }\\left(1- e^{-\\beta \\varphi ^6}\\right)$ where $\\beta $ is a parameter.", "Both of them fulfill the conditions (REF ).", "In the main text we present our results for the coupling function $f_1(\\varphi )$ while the results for the coupling function $f_2(\\varphi )$ are given in Appendix .", "For these choices of the coupling function, the Schwarzschild black hole is stable only with respect to linear perturbations and it might be unstable with respect to larger nonlinear perturbations.", "By evolving in time the nonlinear scalar field equation (REF ) on the Schwarzschild background (i.e.", "considering the time evolution of the system in the decoupling limit approximation), it turns out that there is a threshold amplitude of the perturbation above which the Schwarzschild black hole loses stability and a scalarized phase forms.", "This is demonstrated in Fig.", "REF where the time evolution of the scalar field on the Schwarzschild background is shown for several amplitudes of the initial perturbation.", "As one can see, for small amplitudes the standard quasinormal modes and exponential decay are observed.", "Once the amplitude exceeds a certain threshold, the scalar field stabilizes to a constant at late times, which is exactly the formation of a scalarized phase.", "It is important to note that through this approach only the formation of a stable scalarized phase can be observed.", "As we will see later, more than one scalarized phase can exist, which can be constructed by solving the static field equations.", "Independently of the initial data, though, we have always seen the formation of solutions belonging to only one of the branches, which is a strong indication that it is the only stable one.", "In order to prove rigorously the stability of a scalarized phase one should consider the coupled system of equations of the metric and the scalar field, i.e.", "to drop the decoupling limit approximation.", "Our experience shows, though, that considering only the scalar field evolution often gives correct results for the stability/instability of scalarized black holes [24].", "Figure: The time evolution of the scalar field on the background of a Schwarzschild black hole with mass $M/\\lambda =0.1$ and $\\beta =50$.", "The initial data is a Gaussian pulse with amplitudes 0.001, 0.002 and 0.005, dispersion $\\sigma /\\lambda =1$, located at coordinate distance
$x/\\lambda =12$.", "The coupling function is $f_1(\\varphi )$.", "The method described above can give us very important intuition about where stable scalarized phases can exist and how they form dynamically.", "In order to study the full spectrum of solutions, including the unstable ones, and their domain of existence, taking into account the regularity condition (REF ), one has to solve the fully nonlinear and self-consistent system of reduced field equations in the case of static and spherically symmetric spacetime [3].", "The results are presented in the next section." ], [ "Scalarized phases", "We study in detail the spectrum of solutions describing scalarized phases of the Schwarzschild black hole by solving the fully nonlinear and self-consistent system of reduced field equations (REF )–() with the appropriate boundary conditions.", "The solutions will be calculated numerically using a methodology similar to [3].", "All the results in this section are for the Gauss-Bonnet coupling function $f_1(\\varphi )$ .", "The scalarized phases of the Schwarzschild black hole are displayed in Figs.", "REF and REF where the scalar field on the horizon, the mass and the scalar charge are given for several values of $\\beta $ .", "Here the scalar charge $D$ is defined as the coefficient in the leading-order asymptotics of the scalar field at infinity $\\varphi |_{r \\rightarrow \\infty } \\sim \\frac{D}{r}.$ All dimensional quantities are normalized with respect to $\\lambda $ .", "Only the solutions with $\\varphi >0$ are displayed but one should keep in mind that the field equations, with this choice of the coupling function (REF ), are symmetric with respect to a change of the sign of $\\varphi $ and thus the solutions with the same metric functions but opposite sign of $\\varphi $ also exist.", "Several branches of hairy black holes can exist and the qualitative picture is different for different $\\beta $ .", "The three possible cases with proper choices of $\\beta $ are shown separately in Fig.", "REF : For large enough $\\beta $ (bottom panel) two branches of solutions exist and both of them start from the $M=0$ limit.", "One of the branches has vanishing scalar field at the horizon for $M \\rightarrow 0$ , while the scalar field for the other seems to increase as $M$ approaches zero.", "According to the results from the nonlinear evolution of the scalar field, the latter branch (depicted with solid line) is the only stable one.", "For intermediate $\\beta $ (middle panel) up to three branches can exist.", "The lower one (dotted line) having $\\varphi _{H} \\rightarrow 0$ when $M\\rightarrow 0$ is terminated at some finite $M/\\lambda $ where the solutions disappear due to violation of the regularity condition (REF ).", "The upper branch (solid line) starts from zero mass and with the increase of $M/\\lambda $ the scalar field on the horizon decreases until a minimum of $M/\\lambda $ is reached.", "At that point the branch merges with a third middle branch of solutions (dashed line).", "This middle branch is also terminated at a finite $M/\\lambda $ .", "According to the results from the nonlinear evolution of the scalar field, the uppermost branch (depicted with solid line) is the only stable one.", "For small $\\beta $ (upper panel) there are also three branches of solutions but the upper two merge at two points – at their beginning and at the end of the branch.", "Again, the stable branch is the one depicted with solid line.", "This picture is clearly different from the
Einstein-Maxwell-scalar gravity [33] where up to two scalarized branches exist and one of them originates from the extremal Reissner-Nordström solution.", "Here clearly no extremal solution exists.", "Instead, most probably the solutions are connected with solitonic-like solutions in sGB gravity of the type considered in [34], [35].", "The change of the branches as $\\beta $ is varied can be better observed in the top panel of Fig.", "REF .", "In this figure there is a clustering of the lower mass black hole branches for different $\\beta $ depicted with dotted line (the area shaded in gray in the figure), but the rest of the branches can be well resolved.", "As one can see, with the decrease of $\\beta $ the larger mass branch depicted with solid line, which is actually the stable one, gets shorter until it completely disappears, which happens for roughly $\\beta \\approx 20$ .", "On the contrary, we could not find an upper limit on $\\beta $ for the existence of scalarized phases.", "The branches, though, span a smaller range of masses with the increase of $\\beta $ .", "The radius of the horizon can differ significantly from the Schwarzschild one especially for small $\\beta $ and gets closer to GR with the increase of $\\beta $ .", "This behavior is similar to the standard scalarization [36] where also the differences from the Schwarzschild black hole are reduced for larger $\\beta $ .", "As one can see in the top and middle panels of Fig.", "REF the differences especially between the two upper branches (for a fixed $\\beta $ ) are very small and seem negligible.", "We have performed, though, extensive numerical experiments to confirm that indeed these solutions exist and they are not a numerical artifact.", "If one looks at the scalar charge, which is plotted in the lower panel, then a clear difference between the branches is observed.", "Thus, even though at the black hole horizon their properties are quite similar, the overall scalar field profile and asymptotics are quite distinct.", "Figure: The scalar field at the horizon as a function of mass for three values of $\\beta $ demonstrating the three possible configurations of scalarized phase branches discussed in the text.", "Figure: The scalar field at the horizon (top panel), the mass (middle panel) and the scalar charge (bottom panel) as functions of the black hole horizon mass for the $f_1(\\varphi )$ coupling.", "Having discussed the properties of the scalarized phases, the next step is to determine which are the stable branches (apart from the Schwarzschild solution, which is always linearly stable for the coupling (REF )).", "We already had an indication from the evolution of the scalar field that only the branch depicted with solid line in Figs.", "REF and REF is stable.", "Here we will take a different approach by looking at the thermodynamical properties of these black holes.", "We can define the entropy at the horizon in the following way [3] $S_H = \\frac{1}{4} A_H + 4\\pi \\lambda ^2 f(\\varphi _{H}).$ The black hole entropy as a function of mass is plotted in Fig.", "REF for all of the branches and values of $\\beta $ .", "Something important that might be a bit hard to notice on this scale is that among the scalarized phases (for fixed $\\beta $ ), the one depicted with solid line has the largest entropy and hence it is probably the only stable one.", "This potentially stable scalarized phase, though, does not always have larger entropy compared to the Schwarzschild one.", "For larger $\\beta $ the following behavior is
typical.", "For larger black hole masses the entropy is smaller making the Schwarzschild black hole the thermodynamically preferred one while there is a transition point at smaller masses, below which the scalarized phase acquires the larger entropy.", "Interestingly, the behavior changes in the small $\\beta $ regime where the whole scalarized phase branch has larger entropy compared to the Schwarzschild one and is thus thermodynamically preferred.", "Figure: The entropy at the black hole horizon as a function of the black hole mass for the f 1 (ϕ)f_1(\\varphi ) coupling.the" ], [ "Conclusion", "In the present paper we have considered the possibilities to go beyond the standard scalarization of black holes in scalar-Gauss-Bonnet gravity.", "We showed the existence of a fully nonlinear dynamical mechanism for the formation of scalarized black holes which is different from the spontaneous scalarization.", "We considered types of coupling functions for which the Schwarzschild black hole is still a linearly stable solution of the field equations but for certain ranges of the parameters scalarized phases of the Schwarzschild black hole can exist.", "The reason for the appearance of such phases is that even though the Schwarzschild phase is stable against small (linear) perturbations, this stability can be lost for larger amplitudes of the perturbations that will bring us in the nonlinear regime.", "We demonstrate this explicitly by evolving the nonlinear scalar field equation in sGB gravity on the Schwarzschild background and observe that indeed a nontrivial scalar field can develop for certain ranges of the parameters if the amplitude of the perturbations is large enough.", "The obtained in this way scalarized phases are not continuously connected to the Schwarzschild black hole, i.e.", "they do not bifurcate from it unlike the case of standard scalarization, and they are most probably stable.", "Even though the time evolution method can demonstrate the formation of scalarized phases, it can not be used to obtain the full spectrum of hairy solutions, including the unstable ones.", "For this purpose we solved the reduced system of field equations in the case of static and spherically symmetric spacetime and scalar field configurations.", "We have found that up to three branches of scalarized phases can exist that have a complicated structure depending on the parameter $\\beta $ in the coupling functions (REF ).", "A common feature is that a minimum $\\beta $ exists below which no scalarized phases are present, while such hairy solutions can be found for arbitrary larger $\\beta $ but the range of black hole masses where they exist is decreased.", "As expected, the stable scalarized phase has the largest entropy among all the branches of hairy black holes.", "For larger $\\beta $ and close to the maximum mass of the corresponding branch, the stable scalarized phase can have smaller entropy than the Schwarzschild black hole making it thermodynamically less favorable.", "This quickly changes with the decrease of the mass and for smaller masses the scalarized phase has larger entropy.", "For smaller $\\beta $ the scalarized phase is always thermodynamically favorable over the Schwarzschild phase.", "As we commented, the stable scalarized phase is not continuously connected to the Schwarzschild one but instead there is a (sometimes large) “gap” between the two.", "This can have serious astrophysical implications because the transition from one phase to the other will happen with a jump contrary to 
the standard scalarization where the transition from non-scalarized to scalarized state can be continuous.", "In the present paper we have examined how large scalar field perturbations excite the scalar field but it will be extremely interesting to understand whether and how perturbations of the metric can excite the scalarized phase.", "This will have direct application to astrophysical processes such as black hole mergers.", "In order to do this one has to drop the decoupling limit approximation and consider the full dynamics of the system that is a study underway.", "In a forthcoming paper we will show that the discussed nonlinear mechanism for formation of black holes with scalar hair works also for rotating black holes.", "Within the consider class of sGB theories (with coupling functions of the form $f_1(\\varphi )$ and $f_2(\\varphi )$ ) the Kerr solution is linearly stable but suffers from nonlinear instability within sGB gravity and this nonlinear instability gives rise to new rotating scalarized black holes." ], [ "Acknowledgements", "DD acknowledges financial support via an Emmy Noether Research Group funded by the German Research Foundation (DFG) under grant no.", "DO 1771/1-1.", "SY would like to thank the University of Tuebingen for the financial support.", "The partial support by the Bulgarian NSF Grant KP-06-H28/7 and the Networking support by the COST Actions CA16104 and CA16214 are also gratefully acknowledged." ], [ "Basic mathematical equations", "The variation of action (REF ) with respect to the metric $g_{\\mu \\nu }$ and scalar field $\\varphi $ gives the following field equations $&&R_{\\mu \\nu }- \\frac{1}{2}R g_{\\mu \\nu } + \\Gamma _{\\mu \\nu }= 2\\nabla _\\mu \\varphi \\nabla _\\nu \\varphi - g_{\\mu \\nu } \\nabla _\\alpha \\varphi \\nabla ^\\alpha \\varphi ,\\\\&&\\nabla _\\alpha \\nabla ^\\alpha \\varphi = - \\frac{\\lambda ^2}{4} \\frac{df(\\varphi )}{d\\varphi } {\\cal R}^2_{GB},$ where $\\nabla _{\\mu }$ is the covariant derivative with respect to $g_{\\mu \\nu }$ and $\\Gamma _{\\mu \\nu }$ is defined by $\\Gamma _{\\mu \\nu } &=& - R(\\nabla _\\mu \\Psi _{\\nu } + \\nabla _\\nu \\Psi _{\\mu } ) - 4\\nabla ^\\alpha \\Psi _{\\alpha }\\left(R_{\\mu \\nu } - \\frac{1}{2}R g_{\\mu \\nu }\\right) \\nonumber \\\\& +& 4R_{\\mu \\alpha }\\nabla ^\\alpha \\Psi _{\\nu } + 4R_{\\nu \\alpha }\\nabla ^\\alpha \\Psi _{\\mu } \\nonumber \\\\& - & 4 g_{\\mu \\nu } R^{\\alpha \\beta }\\nabla _\\alpha \\Psi _{\\beta }+ \\, 4 R^{\\beta }_{\\;\\mu \\alpha \\nu }\\nabla ^\\alpha \\Psi _{\\beta }$ with $\\Psi _{\\mu }= \\lambda ^2 \\frac{df(\\varphi )}{d\\varphi }\\nabla _\\mu \\varphi .$" ], [ "Dynamics", "In the decoupling limit we solve eq.", "() on the Schwarzschild background which in the standard coordinate reads $ds^2= - \\frac{\\Delta }{r^2} dt^2 + \\frac{r^2}{\\Delta } dr^2 + r^2 (d\\theta ^2 + \\sin ^2\\theta d\\phi ^2),$ where $\\Delta =r^2-2Mr$ .", "It is convenient to introduce the coordinate $x$ defined by $dx=\\frac{r^2}{\\Delta } dr$ .", "In the coordinates $(t, x, \\theta ,\\phi )$ equation eq.", "() takes the following explicit form $&&- \\partial ^2_t \\varphi + \\partial ^2_x \\varphi + \\frac{2 \\Delta }{r^3} \\partial _x\\delta \\varphi \\\\&&+ \\frac{\\Delta }{r^4}\\left[\\frac{1}{\\sin \\theta } \\partial _\\theta (\\sin \\theta \\partial _\\theta \\varphi ) + \\frac{1}{\\sin ^2\\theta }\\partial ^2_{\\phi }\\varphi \\right]= -\\lambda ^2 \\frac{12 M^2\\Delta }{r^8}\\frac{df(\\varphi )}{d\\varphi }.", "\\nonumber $ The boundary conditions we have to 
impose when evolving in time eq.", "(REF ) is that the scalar field has the form of an outgoing wave at infinity and an ingoing wave at the black hole horizon" ], [ "Static and spherically symmetric equations", "We consider further static and spherically symmetric spacetimes as well as static and spherically symmetric scalar field configurations.", "The spacetime metric can be written then as $ds^2= - e^{2\\Phi (r)}dt^2 + e^{2\\Lambda (r)} dr^2 + r^2 (d\\theta ^2 + \\sin ^2\\theta d\\phi ^2 ).$ After using this form of the metric a system of reduced field equations can be derived that describes the static black hole solutions in sGB gravity $&&\\frac{2}{r}\\left[1 + \\frac{2}{r} (1-3e^{-2\\Lambda }) \\Psi _{r} \\right] \\frac{d\\Lambda }{dr} + \\frac{(e^{2\\Lambda }-1)}{r^2} \\nonumber \\\\&& \\hspace{14.22636pt} - \\frac{4}{r^2}(1-e^{-2\\Lambda }) \\frac{d\\Psi _{r}}{dr} - \\left( \\frac{d\\varphi }{dr}\\right)^2=0, \\\\ && \\nonumber \\\\&&\\frac{2}{r}\\left[1 + \\frac{2}{r} (1-3e^{-2\\Lambda }) \\Psi _{r} \\right] \\frac{d\\Phi }{dr} - \\frac{(e^{2\\Lambda }-1)}{r^2} - \\left( \\frac{d\\varphi }{dr}\\right)^2=0,\\\\ && \\nonumber \\\\&& \\frac{d^2\\Phi }{dr^2} + \\left(\\frac{d\\Phi }{dr} + \\frac{1}{r}\\right)\\left(\\frac{d\\Phi }{dr} - \\frac{d\\Lambda }{dr}\\right) \\nonumber \\\\&& \\hspace{14.22636pt} + \\frac{4e^{-2\\Lambda }}{r}\\left[3\\frac{d\\Phi }{dr}\\frac{d\\Lambda }{dr} - \\frac{d^2\\Phi }{dr^2} - \\left(\\frac{d\\Phi }{dr}\\right)^2 \\right]\\Psi _{r}\\nonumber \\\\&& \\hspace{14.22636pt} - \\frac{4e^{-2\\Lambda }}{r}\\frac{d\\Phi }{dr} \\frac{d\\Psi _r}{dr} + \\left(\\frac{d\\varphi }{dr}\\right)^2=0, \\\\ && \\nonumber \\\\&& \\frac{d^2\\varphi }{dr^2} + \\left(\\frac{d\\Phi }{dr} \\nonumber - \\frac{d\\Lambda }{dr} + \\frac{2}{r}\\right)\\frac{d\\varphi }{dr} \\nonumber \\\\&& \\hspace{14.22636pt} - \\frac{2\\lambda ^2}{r^2} \\frac{df(\\varphi )}{d\\phi }\\Big \\lbrace (1-e^{-2\\Lambda })\\left[\\frac{d^2\\Phi }{dr^2} + \\frac{d\\Phi }{dr} \\left(\\frac{d\\Phi }{dr} - \\frac{d\\Lambda }{dr}\\right)\\right] \\nonumber \\\\&& \\hspace{14.22636pt} + 2e^{-2\\Lambda }\\frac{d\\Phi }{dr} \\frac{d\\Lambda }{dr}\\Big \\rbrace =0, $ with $\\Psi _{r}=\\lambda ^2 \\frac{df(\\varphi )}{d\\varphi } \\frac{d\\varphi }{dr}.$ In order to obtain black hole solutions the following conditions should be imposed coming from the requirements for asymptotic flatness at infinity and the regularity at the black hole horizon $r=r_H$ : $&&\\Phi |_{r\\rightarrow \\infty } \\rightarrow 0, \\;\\; \\Lambda |_{r\\rightarrow \\infty } \\rightarrow 0,\\;\\; \\varphi |_{r\\rightarrow \\infty } \\rightarrow 0\\;\\;, \\\\&&e^{2\\Phi }|_{r\\rightarrow r_H} \\rightarrow 0, \\;\\; e^{-2\\Lambda }|_{r\\rightarrow r_H} \\rightarrow 0.", "$ The regularity of the scalar field and its first and second derivatives on the black hole horizon leads to an additional condition for the existence of black hole solutions $\\left(\\frac{d\\varphi }{dr}\\right)_{H}&=& \\frac{r_{H}}{4 \\lambda ^2 \\frac{df}{d\\varphi }(\\varphi _{H})} \\times \\nonumber \\\\&&\\times \\left[-1 + \\sqrt{1 - \\frac{24\\lambda ^4}{r^4_{H}} \\left(\\frac{df}{d\\varphi }(\\varphi _{H})\\right)^2}\\right].", "$ Clearly, solutions can exist only in case the expression inside the square root is positive.", "For black holes with nontrivial scalar field this condition can be easily violated, though, and it limits (sometimes severely) the domain of existence of scalarized solutions [3]." 
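To make the regularity condition above concrete, the short sketch below evaluates the horizon derivative of the scalar field for the coupling f_1 and checks when the expression under the square root stays positive; the numerical values of r_H, lambda, beta and phi_H are arbitrary illustrative choices rather than values taken from the paper.

```python
import numpy as np

def df1_dphi(phi, beta):
    # Derivative of the coupling f1(phi) = (1 - exp(-beta*phi^4)) / (4*beta).
    return phi**3 * np.exp(-beta * phi**4)

def dphi_dr_horizon(phi_H, r_H, lam, beta):
    """Horizon regularity condition; returns None when the expression under
    the square root becomes negative and no black-hole solution exists."""
    fp = df1_dphi(phi_H, beta)
    disc = 1.0 - 24.0 * lam**4 * fp**2 / r_H**4
    if disc < 0.0:
        return None
    if fp == 0.0:               # trivial scalar field: the limit is simply zero
        return 0.0
    return r_H / (4.0 * lam**2 * fp) * (-1.0 + np.sqrt(disc))

# Illustrative numbers only (lambda normalised to one, beta = 50 as in the main text).
for phi_H in (0.05, 0.2, 0.5):
    print(phi_H, dphi_dr_horizon(phi_H, r_H=0.3, lam=1.0, beta=50.0))
```

Scanning phi_H and r_H in this way reproduces the qualitative statement above: for sufficiently small horizon radii (in units of lambda) and a nontrivial scalar field the square root turns imaginary and the domain of existence of scalarized solutions is cut off.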
], [ "Dynamics with $f_2(\\varphi )$", "It is clear that the condition for the existence of scalarized phases (REF ) can we satisfied also for higher powers of $\\varphi $ .", "In the present paper we are focusing on $Z_2$ symmetric theories and that is why the next possible choice is to consider coupling function of the type $f_2(\\phi )$ defined in eq.", "(REF ).", "The time evolution of scalar field on the Schwarzschild background is shown in Fig.", "REF for two different amplitudes of the initial perturbation close to the threshold for the development of a nonlinear instability.", "As one can see, for smaller amplitude the scalar field perturbation decays exponentially and the quasinormal modes of the Schwarzschild black hole within sGB gravity are observed.", "A slight increase of the amplitude, though, leads to the formation of a stable equilibrium scalar field configuration at later times." ], [ "Scalarized phases with $f_2(\\varphi )$", "Having demonstrated that scalarized phases can be indeed dynamically formed for large enough amplitude of the initial perturbation, here we will discuss the complete spectrum of scalarized phase solutions for the coupling $f_2(\\phi )$ obtained after solving the reduced field equations (REF )–().", "The scalar field on the horizon, the horizon radius and the scalar charge as a function of mass are plotted in Fig.", "REF for several values of $\\beta $ .", "The minimum $\\beta =460$ plotted in the figure is very close to the threshold where the scalarized phases disappear.", "With the increase of $\\beta $ we have three distinct cases similar to the $f_1(\\varphi )$ case: For smaller $\\beta $ three branches of solutions exist – two with higher mass spanning a limited range of masses (and not reaching the $M=0$ limit), and one branch appearing at relatively low masses.", "Only the high-mass branch depicted with solid line is potentially stable.", "For intermediate $\\beta $ a similar picture is observed with the difference that the potentially stable branch (depicted with solid line) actually reaches the $M=0$ limit.", "For large enough $\\beta $ only two branches of solutions exist starting from the zero mass limit and merging again at some finite $M/\\lambda $ .", "Figure: The scalar field at the horizon (top panel), the mass (middle panel) and the scalar charge (bottom panel) as functions of the black hole mass for the f 2 (ϕ)f_2(\\varphi ) coupling.The main difference with the results for $f_1(\\varphi )$ is on quantitative level.", "Scalarized phases appear for much smaller masses (normalized with respect to $\\lambda $ ) and larger values of $\\beta $ .", "The maximum differences with Schwarzschild is smaller compared to the $f_1(\\varphi )$ coupling judging from the middle panels in Figs.", "REF and REF .", "Again, even though the differences between some of the branches (for a fixed $\\beta $ ) look very small if one focuses for example on the behavior of the scalar field at the horizon or the black hole horizon radius, a clear distinction between the scalarized phases is observed in terms of the scalar charge.", "The nonlinear simulations show that the only stable branch is the one depicted with a solid line in Fig.", "REF .", "In addition, this is the scalarized phase with the highest entropy that is another argument in favor of its stability.", "Similar to the previous section, for large values of $\\beta $ a small part of the stable hairy black hole branch (close to the maximum mass) can have entropy smaller than the Schwarzschild one 
but this is quickly reversed with the decrease of the black hole mass.", "For smaller $\\beta $ the scalarized phase always has larger entropy than the Schwarzschild solution." ] ]
2107.01738
[ [ "Model Predictive Control for Electron Beam Stabilization in a\n Synchrotron" ], [ "Abstract Electron beam stabilization in a synchrotron is a disturbance rejection problem, with hundreds of inputs and outputs, that is sampled at frequencies higher than $10$ kHz.", "In this feasibility study, we focus on the practical issues of an efficient implementation of model predictive control (MPC) for the heavily ill-conditioned plant of the electron beam stabilization problem.", "To obtain a tractable control problem that can be solved using only a few iterations of the fast gradient method, we investigate different methods for preconditioning the resulting optimization problem and relate our findings to standard regularization techniques from cross-directional control.", "We summarize the single- and multi-core implementations of our control algorithm on a digital signal processor (DSP), and show that MPC can be executed at the rate required for synchrotron control.", "MPC overcomes various problems of standard electron beam stabilization techniques, and the successful implementation can increase the stability of photon beams in synchrotron light sources." ], [ "Introduction", "A synchrotron light source is a special type of particle accelerator in which charged particles, typically electrons, travel around a circular path called the storage ring.", "When the electrons' paths are bent around the storage ring at relativistic speeds, they lose kinetic energy and emit it in the form of exceptionally bright light, which is used for microscopic experiments.", "An assembly of magnets produces a magnetic field that confines the electrons in the storage ring.", "Large magnets steer and focus the electron beam whilst smaller corrector magnets attenuate vibrations induced by disturbances and reduce the trajectory error of the electrons down to a few $\\mu $ m. 
These disturbances are caused by internal devices, such as the beam light extraction devices, or transmitted through the girders on which the magnet arrays are attached.", "The position of the electron beam is measured using beam position monitors (BPMs) and the corrector magnets are controlled in a feedback loop that is sampled within a frequency range of $10-100$  kHz.", "The beam trajectory error must be minimized in order to produce high brilliance synchrotron light.", "This control system is referred to as fast orbit feedback and typically has a few hundred BPMs (outputs) and few hundred corrector magnets (inputs).", "Diamond Light Source (DLS) is the UK's national synchrotron facility, and its 560 m circumference storage ring accommodates over 20 experimental stations.", "DLS has completed the conceptual design phase of a significant upgrade (DLS-II), which will increase the brightness of the synchrotron light by raising the electron beam energy from 3 GeV to $3.5$  GeV [1] and the number of sensors and actuators from 172 to 252 and 173 to 396, respectively.", "In the current facility only one type of corrector magnet is used, but DLS-II will instead use separate types for high and low bandwidth correction.", "In addition, the sampling frequency will be increased from 10 kHz to 100 kHz.", "The consequences of introducing two types of corrector magnets are twofold.", "First, the widely used modal decomposition [2] that diagonalizes the input-output transfer function matrix using a singular value decomposition can no longer be applied.", "Second, amplitude and slew-rate actuator constraints must be considered.", "Model predictive control (MPC) allows for an arbitrary number of actuator arrays and provides a systematic way to handle actuator constraints while achieving the same or better disturbance attenuation [3] as linear control methods.", "However, the MPC algorithm uses real-time optimization and considerably increases the computational complexity of the fast orbit feedback system.", "Formulating MPC for the electron beam stabilization problem results in a constrained quadratic program with hundreds of decision variables, and the highly ill-conditioned plant negatively affects the convergence properties of the solver.", "A tailored MPC implementation is therefore required to obtain an MPC scheme that operates at frequencies higher than 10 kHz.", "In anticipation of the upcoming DLS-II upgrade, it was decided to assess the feasibility and performance of installing MPC on the existing DLS-I storage ring.", "This paper describes an assessment of the design and conception of the future DLS-II fast orbit feedback architecture and allows for an optimal dimensioning of the required controller hardware.", "The paper is organized as follows.", "The process model is introduced in section Section  and a state-space model and observer introduced in Section .", "We use standard modelling techniques for setpoint tracking and observer design, but include these details for the benefit of practitioners in the synchrotron community who may be unfamiliar with these methods.", "We formulate our MPC problem in Section , which we solve using the fast gradient method, and analyze the solver convergence with respect to preconditioning.", "Finally, Section  details the parallel implementation of MPC on a multicore digital signal processor (DSP).", "The developments presented in this paper apply to DLS-I and II, but the implementation has been tailored to DLS-I.", "For DLS-II, the relationship between the 
$n_y\\!=\\!252$ beam displacements ${\\mathbf {y}_k}{n_y}$ measured around the ring, the $n_s\\!=\\!252$ slow corrector magnets inputs ${\\mathbf {u}_{s,k}}{n_s}$ and the $n_f\\!=\\!144$ fast corrector magnets inputs ${\\mathbf {u}_{f,k}}{n_f}$ at time $t=k \\Delta t$ is given by $\\mathbf {y}_k = \\mathbf {R}_s g_s({z})\\mathbf {u}_{s,k} + \\mathbf {R}_f g_f({z})\\mathbf {u}_{f,k} + \\mathbf {d}_{k},$ where $\\Delta t=10$  $\\mu $ s is the sampling time, ${z}$ represents the backward shift operator and $\\mathbf {d}_k$ the disturbances.", "The matrix $\\mathbf {R}\\left[\\mathbf {R}_s\\,\\mathbf {R}_f\\right]\\in ^{n_y\\times n_u}$ with $n_u=n_s+n_f$ is called the orbit response matrix and typically has a condition number on the order of $10^4$ .", "The scalar transfer functions $g_{(\\cdot )}$ model the corrector magnet dynamics plus a transport delay that accounts for unmodeled elements between the central computing node and the power supply of the magnets, and take the form $g_{(\\cdot )}({z})= z^{(\\mu +1)} \\frac{1-e^{a_{(\\cdot )}{\\Delta t}}}{1-{z}e^{a_{(\\cdot )}{\\Delta t}}},$ where $\\mu =10$ is the delay in terms of time steps.", "The slow magnets have a small bandwidth $a_s = 2\\pi \\times 100$  Hz but strong a magnetic field, while the fast magnets have a high bandwidth $a_f = 2\\pi \\times 10$  kHz but a weak magnetic field.", "In contrast, the DLS-I storage ring has $n_y=172$ position measurements and $n_u=173$ corrector magnets.", "Most of the corrector magnets have a medium bandwidth $a_m=2\\pi \\times 700$  Hz, but $n_s=3$ slow and $n_f=2$ fast magnets have been installed for testing purposes in anticipation of the DLS-II upgrade.", "The DLS-I feedback is sampled at $\\Delta t=100$  $\\mu $ s with a delay of $\\mu =7$ time steps.", "Note that the vector $\\mathbf {y}_k$ describes the displacement in either horizontal or vertical direction perpendicular to the motion of the electron beam.", "These directions are independent and the electron beam stabilization problem includes two different systems of the form of (REF ).", "In the following, we will focus on the vertical direction, which is more difficult to control." 
], [ "Cross-Directional Control", "The plant model (REF ) is usually referred to as a cross-directional system, and similar models are obtained for web forming processes [2] such as those encountered in paper manufacturing or plastic film extrusion.", "For cross-directional systems, the response can be split into a spatial component ($\\mathbf {R}_{(\\cdot )}$ ) and a temporal component ($g_{(\\cdot )}({z})$ ).", "The design of feedback systems for electron beam stabilization has many parallels to cross-directional control [4].", "In the case of only one type of actuator, a standard approach is to decompose the orbit response matrix using a singular value decomposition (SVD) as $\\mathbf {R}=\\mathbf {U}\\mathbf {\\Sigma }{\\mathbf {V}}$ , where $\\mathbf {\\Sigma }$ may contain blocks of zeros depending on the shape of $\\mathbf {R}$ ,.", "By defining the modal outputs and inputs as $\\mathbf {\\hat{y}}_k={\\mathbf {U}}\\mathbf {y}_k$ and $\\mathbf {\\hat{u}}_k={\\mathbf {V}}\\mathbf {u}_k$ , the multi-input multi-output system is decoupled into a set of single-input single-output (SISO) systems.", "The control input can then be calculated as $\\mathbf {u}_k=-\\mathbf {V}\\mathbf {\\hat{K}}c({z})\\mathbf {U}\\mathbf {y}_k$ , where the gain matrix $\\mathbf {\\hat{K}}{{\\mathbf {\\Sigma }}\\mathbf {\\Sigma }+\\lambda I}{\\mathbf {\\Sigma }}$ with $\\lambda >0$ is diagonal and $c({z})$ is often chosen to be identical for each mode and given by a Dahlin [5] or PID [6] controller.", "Regularizing the inverse of $\\mathbf {\\Sigma }$ is essential to prevent large control gains in the direction of small singular values.", "When the system has more than one actuator array, such as in (REF ), the modal decomposition can no longer be applied because the SVDs $\\mathbf {R}_{(\\cdot )}=\\mathbf {U}_{(\\cdot )}\\mathbf {\\Sigma }_{(\\cdot )} {\\mathbf {V}}_{(\\cdot )}$ with ${(\\cdot )}=\\lbrace \\text{s,f}\\rbrace $ do not share the same matrix of left singular vectors $\\mathbf {U}_{(\\cdot )}$ .", "In this case, the orbit response matrices can be simultaneously decomposed using alternative methods [4], [7].", "Other approaches introduce a frequency deadband between slow and fast actuators and setup two independent control loops [6], which is a suboptimal approach because it prevents control action in the frequency deadband.", "To handle actuator constraints, the standard controllers must be extended with an anti-windup scheme." ], [ "Symmetries", "In most synchrotrons the monitors and magnets are placed in repeated patterns around the storage ring.", "These patterns produce a circulant and centrosymmetric structure in $\\mathbf {R}$  [8].", "In contrast to the modal decomposition, which requires the outputs and inputs to be multiplied by dense matrices, the symmetric transformations can be carried out using the computationally efficient Fast Fourier Transformation (FFT).", "In our previous work, we have shown how these symmetries can be exploited for cross-directional [8] and MPC [3] to increase the computational speed of the controller and reduce the memory requirements.", "These symmetries stand out in the DLS-II orbit response matrix, but have been corrupted in the current orbit response matrix after adding modifications in anticipation DLS-II.", "Because our MPC algorithm will be tested on the DLS-I storage ring, the structural symmetries are not considered further.", "However, our implementation could be extended to consider symmetries and would produce significant performance improvements." 
], [ "State-Space System", "The standard linear MPC formulation requires a state-space model, and we choose to define the states $\\mathbf {x}_{(\\cdot ),k}\\in ^{n_{(\\cdot )}}$ as $\\mathbf {x}_{(\\cdot ),k} = z^{-1}\\frac{1-e^{a_{(\\cdot )}{\\Delta t}}}{1-{z}e^{a_{(\\cdot )}{\\Delta t}}} \\mathbf {u}_{(\\cdot ),k},$ where ${(\\cdot )}=\\lbrace \\text{s,f}\\rbrace $ .", "Applying the backward shift operator to $\\mathbf {x}_{(\\cdot ),k}$ and $\\mathbf {u}_{(\\cdot ),k}$ yields a state-space representation of (REF ) as $\\begin{aligned}\\begin{pmatrix}\\mathbf {x}_{s,k+1}\\\\\\mathbf {x}_{f,k+1}\\end{pmatrix}&\\!=\\!\\begin{bmatrix}\\mathbf {A}_s & 0\\\\ 0 & \\mathbf {A}_f\\end{bmatrix}\\!\\begin{pmatrix}\\mathbf {x}_{s,k}\\\\\\mathbf {x}_{f,k}\\end{pmatrix}\\!+\\!\\begin{bmatrix}\\mathbf {B}_s & 0\\\\ 0 & \\mathbf {B}_f\\end{bmatrix}\\!\\begin{pmatrix}\\mathbf {u}_{s,k}\\\\\\mathbf {u}_{f,k}\\end{pmatrix},\\\\\\mathbf {y}_k &\\!=\\!\\begin{bmatrix}\\mathbf {R}_s & \\mathbf {R}_f\\end{bmatrix}\\!\\begin{pmatrix}\\mathbf {x}_{s,k\\mu }\\\\\\mathbf {x}_{f,k\\mu }\\end{pmatrix}+\\mathbf {d}_k,\\end{aligned}$ where $\\mathbf {A}_{(\\cdot )} = I e^{a_{(\\cdot )}{\\Delta t}}$ and $\\mathbf {B}_{(\\cdot )} = I -\\mathbf {A}_{(\\cdot )}$ .", "In the form (REF ), the states $\\mathbf {x}_{s,k}$ and $\\mathbf {x}_{f,k}$ are proportional to the magnetic fields of the slow and fast correctors acting on the electron beam.", "We will use the more compact notation $\\begin{aligned}\\mathbf {x}_{k+1}=\\mathbf {A}\\mathbf {x}_{k}+\\mathbf {B}\\mathbf {u}_{k},\\qquad \\mathbf {y}_{k}=\\mathbf {C}\\mathbf {x}_{k-\\mu }+\\mathbf {d}_{k},\\end{aligned}$ where $\\mathbf {x}_{k}\\!\\!\\!\\!", "(\\mathbf {x}_{s,k}^,\\mathbf {x}_{f,k}^)^$ and $\\mathbf {u}_{k}\\!\\!\\!\\!", "(\\mathbf {u}_{s,k}^,\\mathbf {u}_{f,k}^)^$ .", "A widely used control approach for (REF ) is the linear quadratic regulator (LQR) that computes a control law as $\\mathbf {u}_k=-\\mathbf {K}\\mathbf {x}_k$ and can be interpreted as an unconstrained version of MPC.", "In practice, the actuator inputs (currents) $\\mathbf {u}_{s,k}$ and $\\mathbf {u}_{f,k}$ are subjected to slew-rate constraints and amplitude constraints, respectively.", "The constraints can be modeled as $\\mathcal {U}_\\text{a} &= {\\mathbf {u}_{f,k}\\in ^{n_f}}{\\alpha \\le \\mathbf {u}_{f,k} \\le \\alpha },\\\\\\mathcal {U}_\\text{r} &= {\\mathbf {u}_{s,k},\\mathbf {u}_{s,k1}\\in ^{n_s}\\!", "}{\\!\\rho \\le \\mathbf {u}_{s,k}\\mathbf {u}_{s,k1} \\le \\rho },$ where the inequalities are to be read component-wise.", "The magnitude of the amplitude limit $\\alpha $ depends on the normalization of the inputs and the slew-rate constant $\\rho $ is chosen as $\\rho =\\alpha /10$ , which reflects results obtained from preliminary simulations of the fast corrector magnets.", "We consider symmetric limits on both slew-rate and amplitude, but the algorithm is easily modified to allow asymmetric limits.", "Analogous to the shorthand notation (REF ), we will abbreviate (REF ) as $\\mathbf {u}_k\\in \\mathcal {U}$ .", "Note that in our implementation we will assume that slow and fast actuators are constrained by both slew-rate and amplitude constraints, but the limits for each actuator type are adjusted accordingly." 
], [ "Setpoint Calculation", "The aim of the control system is to reject the disturbances $\\mathbf {d}_k$ in (REF ).", "In response to a constant disturbance, a zero steady-state output $\\mathbf {y}_k$ requires the open-loop transfer function of (REF ) to have integrating behavior [9].", "Because the plant transfer functions $g_{(\\cdot )}({z})$ lack integrating behavior, the controller must implement the integrator.", "For an LQR approach, there exist several methods to add integrating behavior.", "One way is to augment the system with a set of output integrators.", "However, this method would slow down the subsequent MPC algorithm by increasing the number of optimization variables.", "Alternatively, one can compute the setpoints $\\mathbf {\\bar{u}}$ and $\\mathbf {\\bar{x}}$ and use the feedback law $\\mathbf {u}_k=\\mathbf {\\bar{u}}+\\mathbf {u}_k^\\star $  [10], where $\\mathbf {u}_k^\\star $ is obtained from $\\mathbf {u}_k^\\star =\\mathbf {K}\\mathbf {x}_k$ in the case of LQR or as the solution to an optimization problem in the case of MPC.", "The setpoints should be calculated such that $\\lim _{k\\rightarrow \\infty }\\mathbf {y}_k=0$ , which using (REF ) yields $\\begin{pmatrix}0\\\\ \\mathbf {\\bar{d}}_k\\end{pmatrix}=\\begin{bmatrix}I-\\mathbf {A} & -\\mathbf {B}\\\\-\\mathbf {C} & 0\\end{bmatrix}\\begin{pmatrix}\\mathbf {\\bar{x}}_k\\\\\\mathbf {\\bar{u}}_k\\end{pmatrix}\\mathbf {S}\\begin{pmatrix}\\mathbf {\\bar{x}}_k\\\\\\mathbf {\\bar{u}}_k\\end{pmatrix},$ where ${\\mathbf {S}}{n_u+n_y}{2n_u}$ and $\\mathbf {\\bar{d}}_k\\in ^{n_y}$ is a disturbance estimate that is obtained from the observer.", "The coefficient matrix ${\\mathbf {S}}{n_u+n_y}{2n_u}$ has more columns than rows and the Moore-Penrose pseudoinverse ${\\mathbf {S}}={{\\mathbf {S}}\\mathbf {S}}{\\mathbf {S}}$ can be used to solve for $\\mathbf {\\bar{x}}_k$ and $\\mathbf {\\bar{u}}_k$ .", "Note the zeros in the left-hand side vector of (REF ), so that in practice, only the last $n_y$ columns of ${\\mathbf {S}}$ need to be considered." 
], [ "State and Disturbance Observer", "Standard methods from cross-directional control use output feedback to control (REF ), whereas LQR and MPC use state feedback to control the equivalent state-space system (REF ).", "The states $\\mathbf {x}_k$ and disturbances $\\mathbf {d}_k$ are not measurable and these values must be inferred from the measured outputs using an observer.", "The observer continuously computes the state-transition equation in (REF ) and adds the term $\\mathbf {L}(\\mathbf {y}_k-\\mathbf {C}\\mathbf {x}_k)$ , where we chose the observer gain $\\mathbf {L}$ as the steady-state Kalman filter gain [11].", "For modelling the disturbance, a first-order model that is driven by zero-mean independent and identically distributed white noise [10] is used, i.e.", "$\\mathbf {d}_{k+1} = \\mathbf {A}_d \\mathbf {d}_k +\\mathbf {v}_k,$ where $\\mathbf {v}_k\\sim \\mathcal {N}(0,\\sigma _\\mathbf {v}^2)$ and we choose $\\mathbf {A}_d=I$ .", "Alternatively, the matrix $\\mathbf {A}_d$ can be obtained from a first-order autoregressive fit from the measurement data.", "All measurements of system (REF ) are delayed by $\\mu $ time steps and the incoming measurement $\\mathbf {y}_k$ at time $t=k\\Delta t$ contains information about the state $\\mathbf {x}_{k\\mu }$ at time $t=(k-\\mu )\\Delta t$ .", "One possibility to integrate the delayed measurements is to formulate a delay-free system by augmenting (REF ) with $\\mu \\times (n_s+n_f)$ states, i.e.", "defining $\\mathbf {z}_k^i\\mathbf {x}_{k-i}$ , $i=1,\\dots ,7$ and adding $\\mathbf {z}_{k+1}^{1}= \\mathbf {x}_{k}$ and $\\mathbf {z}_{k+1}^{i+1}= \\mathbf {z}_{k}^i$ and rewriting the state transition equations as $\\begin{aligned}\\begin{pmatrix}\\mathbf {\\hat{x}}_{k+1}\\\\\\mathbf {\\hat{z}}_{k+1}^1\\\\\\vdots \\\\\\mathbf {\\hat{z}}_{k+1}^\\mu \\\\\\mathbf {\\hat{d}}_{k+1}\\end{pmatrix}=&\\begin{bmatrix}\\mathbf {A} & & & \\\\I & 0 & & \\\\[-4pt]0 & \\ddots & \\ddots & \\\\[-4pt]& \\ddots & I & 0\\\\& & 0& \\mathbf {A}_d\\end{bmatrix}\\begin{pmatrix}\\mathbf {\\hat{x}}_{k}\\\\\\mathbf {\\hat{z}}_{k}^1\\\\\\vdots \\\\\\mathbf {\\hat{z}}_{k}^\\mu \\\\\\mathbf {\\hat{d}}_{k}\\end{pmatrix}+\\begin{bmatrix}\\mathbf {B}\\\\0\\\\\\vdots \\\\0\\end{bmatrix}\\mathbf {u}_k\\\\&+\\mathbf {L}\\left(\\mathbf {y}_k-\\mathbf {C}\\mathbf {\\hat{z}}_{k}^\\mu -\\mathbf {\\hat{d}}_{k}\\right),\\end{aligned}$ where variables with a hat denote estimated quantities and the state-space system (REF ) has been combined with the disturbance model (REF ).", "The observer (REF ) requires a matrix-vector multiplication with a dense ${\\mathbf {L}}{((\\mu +1)n_u+n_y)}{n_y}$ , which is a computationally expensive operation that can be avoided as follows.", "First, partition the observer gain as $\\mathbf {L} = [\\mathbf {L}_\\mathbf {x}^,\\,\\mathbf {L}_{\\mathbf {z}^1}^,\\dots \\,\\mathbf {L}_{\\mathbf {z}^\\mu }^,\\,\\mathbf {L}_{\\mathbf {d}}^]^$ , where the partitioning of $\\mathbf {L}$ matches the partitioning of the vector on the left-hand side of (REF ).", "Then, update the most delayed state $\\mathbf {\\hat{z}}^\\mu $ and the disturbance estimate $\\mathbf {\\hat{d}}$ using $\\mathbf {L}_{\\mathbf {z}^\\mu }$ and $\\mathbf {L}_{\\mathbf {d}}$ , respectively, and reserve $\\Delta \\mathbf {\\hat{y}}_k\\mathbf {L}_{\\mathbf {z}^\\mu }(\\mathbf {y}_k-\\mathbf {C}\\mathbf {\\hat{z}}_{k}^\\mu -\\mathbf {\\hat{d}}_{k})$ .", "Finally, update the states $\\mathbf {\\hat{z}}^i$ , $i=1,\\dots ,\\mu 1$ by adding $\\mathbf {A}^{\\mu i}\\Delta \\mathbf 
{\\hat{y}}_k$ and in particular $\\mathbf {\\hat{x}}$ using $\\mathbf {A}^{\\mu }\\Delta \\mathbf {\\hat{y}}_k$ .", "Note that the matrices $\\mathbf {A}^i$ are diagonal and can be pre-computed offline." ], [ "Problem Formulation", "At time $t=k\\Delta t$ , the MPC scheme computes a control input by predicting the future evolution of the system and minimizing a quadratic objective function over the planning horizon $N$ , while considering inputs that lie in the constraint set (REF ) only.", "This can be achieved via repeated solution of the following constrained quadratic program (CQP): $\\begin{aligned}\\min &\\sum _{i=0}^{N1} {\\mathbf {x}_i-\\mathbf {\\bar{x}}}{\\mathbf {Q}}^2+{\\mathbf {u}_i-\\mathbf {\\bar{u}}}{\\mathbf {R}}^2 + {\\mathbf {x}_N-\\mathbf {\\bar{x}}}{\\mathbf {P}}^2\\\\\\text{s.t.", "}\\,&\\qquad \\mathbf {x}_{i+1} = \\mathbf {A}\\mathbf {x}_i+\\mathbf {B}\\mathbf {u}_i,\\,\\,\\mathbf {x}_0=\\mathbf {\\hat{x}}_k,\\,\\,\\mathbf {u}_i\\in \\mathcal {U},\\\\&\\qquad \\mathbf {y}_i = \\mathbf {C}\\mathbf {x}_i,\\\\\\end{aligned}$ for $i=0,\\dots ,N1$ , where the optimization variables are $\\mathbf {x}_{(\\cdot )}$ and $\\mathbf {u}_{(\\cdot )}$ .", "Even though the solution of (REF ) is a sequence of inputs $\\mathbf {u}_0^\\star ,\\dots ,\\mathbf {u}_{N1}^\\star $ , only the first input $\\mathbf {u}_0^\\star $ is applied to the plant and the optimization repeated at the next time step $t+\\Delta t$ .", "The matrices $\\mathbf {Q}\\mathbf {C}^\\mathbf {C}$ and $\\mathbf {R}$ are the state and output weighting matrices, respectively, while $\\mathbf {P}={\\mathbf {P}}\\succ 0$ is the terminal cost matrix.", "The optimization problem (REF ) has a unique solution if $\\mathbf {R}\\succ 0$ , $\\mathbf {Q}\\succeq 0$ and if the pairs $(\\mathbf {A},\\mathbf {B})$ and $(\\mathbf {A},\\mathbf {Q}^{\\frac{1}{2}})$ are controllable and observable, respectively [12].", "Because the system (REF ) is stable and there are no state constraints, the MPC scheme is guaranteed to be feedback stable if the terminal cost matrix $\\mathbf {P}$ is obtained from the discrete-time Riccati equation (DARE) associated with the unconstrained LQR, $\\mathbf {A}^\\mathbf {P} \\mathbf {A} - \\mathbf {A}^\\mathbf {P} \\mathbf {B} \\left(\\mathbf {B}^\\mathbf {P} \\mathbf {B} +\\mathbf {R}\\right)^{-1}\\mathbf {B}^\\mathbf {P} \\mathbf {A} + \\mathbf {Q} =\\mathbf {P},$ where we choose the matrices $\\mathbf {Q}$ and $\\mathbf {R}$ to be the same as in (REF ).", "By defining $\\mathbf {x}=(\\mathbf {x}_0^,\\dots ,\\mathbf {u}_{N}^)^$ and $\\mathbf {u}(\\mathbf {u}_0^,\\dots ,\\mathbf {u}_{N1}^)^$ , the state-transition equations $\\mathbf {x}_{i+1} = \\mathbf {A}\\mathbf {x}_i+\\mathbf {B}\\mathbf {u}_i$ can be rewritten as $\\mathbf {x}=\\mathbf {G}\\mathbf {u}+\\mathbf {H}\\mathbf {x}_0$ , where $\\mathbf {G}=\\begin{bmatrix}0 & \\dots \\\\\\mathbf {B} & & \\\\\\mathbf {A}\\mathbf {B} & \\mathbf {B} & & \\\\[-0.5em]\\vdots & & \\ddots \\\\\\mathbf {A}^{N1}\\mathbf {B} & \\mathbf {A}^{N2}\\mathbf {B} & \\dots & \\mathbf {B}\\end{bmatrix},\\qquad \\mathbf {H}=\\begin{bmatrix}I \\\\ \\mathbf {A} \\\\ \\mathbf {A}^2\\\\[-0.5em]\\vdots \\\\ \\mathbf {A}^N\\end{bmatrix}.$ By substituting $\\mathbf {x}=\\mathbf {G}\\mathbf {u}+\\mathbf {H}\\mathbf {x}_0$ , the states $\\mathbf {x}$ can be eliminated from (REF ), producing the equivalent condensed problem $\\min _{{\\mathbf {u}}{Nn_u}} \\frac{1}{2}{\\mathbf {u}}\\mathbf {J}\\mathbf {u}+{\\mathbf {q}}\\mathbf {u}\\quad \\text{s.t.", "}\\quad \\mathbf {u}\\in 
\\mathcal {U}_N,$ where $\\mathcal {U}_N=\\mathcal {U}\\times \\dots \\times \\mathcal {U}$ and $\\mathbf {J}$ and $\\mathbf {q}$ are obtained as $\\begin{aligned}\\mathbf {J} &:= \\mathbf {G}^{\\top }\\left( (I_N\\otimes \\mathbf {Q})\\oplus \\mathbf {P} \\right)\\mathbf {G}+(I_N\\otimes \\mathbf {R}),\\\\\\mathbf {q} &:= \\mathbf {G}^{\\top }\\left( (I_N\\otimes \\mathbf {Q})\\oplus \\mathbf {P} \\right)\\mathbf {H}\\mathbf {x}_0-\\mathbf {G}^{\\top }\\begin{bmatrix}\\mathbf {1}_N\\otimes \\mathbf {Q}\\\\\\mathbf {P}\\end{bmatrix}\\mathbf {\\bar{x}}-(\\mathbf {1}_N\\otimes \\mathbf {R})\\mathbf {\\bar{u}},\\end{aligned}$ with $\\otimes $ and $\\oplus $ denoting the Kronecker product and block-diagonal concatenation, respectively, $I_N$ the identity matrix of size $N\\times N$ and $\\mathbf {1}_N$ a vector of ones of length $N$ .", "Note that the slew-rate constraints couple the inputs across horizon stages and the set $\\mathcal {U}_N$ depends on the previously calculated input $\\mathbf {u}_{k-1}^\\star $ .", "On the arrival of a new measurement, the vector $\\mathbf {q}$ and the set $\\mathcal {U}_N$ must therefore be updated before (REF ) is solved again.", "In practice, we substitute (REF ) in () to avoid computing the setpoints $\\mathbf {\\bar{x}}$ and $\\mathbf {\\bar{u}}$ .", "Figure: Integrated beam motion (IBM) for the uncontrolled beam, IMC without and with applied constraints, and MPC ($N$) with $N$ denoting the horizon." ], [ "Synchrotron Performance Metric", "The performance of the control algorithm can be evaluated using the integrated beam motion (IBM), which is defined as the square root of $\\sum _{f=0}^F\\frac{2}{F^2}|y_i(f)|^2$ , where $y_i(f)$ is the discrete Fourier transform (DFT) of monitor output $i$ and $F$ the upper frequency limit in Hz.", "The IBM is the discrete integral of the DFT of $\\mathbf {S}_i({z})\\mathbf {d}_k$ , where $\\mathbf {S}_i({z})$ is the sensitivity transfer function matrix of output $i$ .", "Fig. REF shows the IBM averaged over all monitors for different horizons $N$ and for a particular choice of $\\mathbf {Q}$ and $\\mathbf {R}$ that will be discussed in Section REF .", "The figure also shows the uncontrolled beam, the simulated IMC for the unconstrained system and IMC for the case that the computed inputs are clipped using (REF ).", "It can be seen that there is little performance improvement for $N$ larger than 1 or 2.", "Compared to the unconstrained IMC, MPC performs slightly better at lower frequencies but slightly worse at higher frequencies.", "This “waterbed” effect can be controlled by tuning the weighting matrices.", "Because the computation time is limited to 100 $\\mu $ s and we see little improvement for larger horizons, we consider only $N\\le 2$ in the following."
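Returning to the condensation step introduced in the problem formulation above, the sketch below builds G, H, the condensed Hessian J and the gradient vector q with off-the-shelf NumPy/SciPy routines; the terminal cost P is taken from the DARE as in the text. It is only meant to make the Kronecker-product expressions explicit under placeholder data, and it ignores the structure exploitation discussed later.

```python
import numpy as np
from scipy.linalg import solve_discrete_are, block_diag

def condense(A, B, Q, R, N):
    """Condensed Hessian J plus the pieces needed for the gradient vector q."""
    n_x, n_u = B.shape
    P = solve_discrete_are(A, B, Q, R)            # terminal cost from the DARE
    # Prediction matrices: (x_0^T, ..., x_N^T)^T = G u + H x_0.
    G = np.zeros(((N + 1) * n_x, N * n_u))
    H = np.zeros(((N + 1) * n_x, n_x))
    H[:n_x] = np.eye(n_x)
    for i in range(1, N + 1):
        H[i * n_x:(i + 1) * n_x] = np.linalg.matrix_power(A, i)
        for j in range(i):
            G[i * n_x:(i + 1) * n_x, j * n_u:(j + 1) * n_u] = (
                np.linalg.matrix_power(A, i - 1 - j) @ B)
    W = block_diag(*([Q] * N + [P]))              # (I_N (x) Q) (+) P
    J = G.T @ W @ G + np.kron(np.eye(N), R)
    return J, G, H, W, P

def gradient(G, H, W, Q, R, P, N, x0, x_bar, u_bar):
    """q as in the text: it depends on the current state estimate x0 and the setpoints."""
    stacked = np.vstack([np.tile(Q, (N, 1)), P])  # [1_N (x) Q; P]
    return (G.T @ W @ H @ x0
            - G.T @ stacked @ x_bar
            - np.tile(R, (N, 1)) @ u_bar)

# Example with the placeholder matrices from the first sketch:
# J, G, H, W, P = condense(A, B, C.T @ C, np.eye(n_s + n_f), N=2)
```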
], [ "Fast Gradient Method", "Suitable algorithms for solving the CQP (REF ) can be split into first-order methods, such as the fast gradient method (FGM) and the alternating direction method of multipliers (ADMM), and second-order methods, such as the interior-point method.", "First-order methods use only the first derivative of the objective function, while second-order methods also use the second derivative.", "First-order methods typically converge quickly to a low-accuracy solution with a low per-iteration computational cost, but need far more iterations to achieve a high-accuracy solution.", "By contrast, second-order methods need fewer iterations to achieve a high-accuracy solution, but also have a higher per-iteration computational cost.", "A low-accuracy solution produced by a first-order algorithm is sufficient for an MPC problem [13].", "In [14], we showed how FGM outperforms ADMM in terms of computational speed for our particular constraint set.", "The convergence of ADMM is less affected by ill-conditioned problem data than the FGM, but the algorithm augments the vector $\\mathbf {u}$ in (REF ) to accommodate the constraints, which slows down the implementation on the DSP.", "The FGM is summarized in Alg.", "REF (lines 5-10) with a constant step size $\\beta = ({\\lambda _{max}}-{\\lambda _{min}})/({\\lambda _{max}}+{\\lambda _{min}})$ , where $\\lambda _\\text{min}$ and $\\lambda _\\text{max}$ are the minimum and maximum eigenvalues of the Hessian $\\mathbf {J}$  [15].", "In contrast to ADMM, the FGM does not require augmentation of the decision variables but applies the projection operator $\\mathcal {P}_{\\mathcal {U}_N}$ onto $\\mathcal {U}_N$ instead.", "For $N\\!=\\!1$ , the projection simply limits each component of $\\mathbf {u}$ to a minimum and maximum value given by (REF ) or ().", "For $N\\!=\\!2$ , the projection onto () is more complicated and consists of projecting the pairs $(\\mathbf {u}_{0}^i, \\mathbf {u}_{1}^i)$ , where $i$ denotes the $i$ th actuator, onto a hexagon with corner points that depend on the input $\\mathbf {u}_{1}^i$ calculated at time step $k-1$  [14].", "Note that on line 5, we warm-start by initializing the FGM using the input calculated at time $(k-1)\\Delta t$ , which considerably improves the convergence properties of the algorithm [16].", "Lines marked with the circled arrow pic[black,thick]carc=45:190:0.4em; pic[black,thick]carc=225:370:0.4em; draw,fill,single arrow, single arrow tip angle=45, single arrow head extend=0.75pt, single arrow head indent=0pt, inner sep=0pt, shape border rotate=90] at (0.38em,0.06em) ; draw,fill,single arrow, single arrow tip angle=45, single arrow head extend=0.75pt, single arrow head indent=0pt, inner sep=0pt, shape border rotate=270] at (-0.38em,-0.06em) ; denote synchronization steps of the parallel implementation (Section REF ) and we will consider the fixed iteration number $I_\\text{max}$ in Section REF .", "[H] MPC for electron beam stabilization [1] $\\mathbf {y}_k$ $\\mathbf {u}_k$ Transfer $\\mathbf {y}_k$ pic[black,thick]carc=45:190:0.4em; pic[black,thick]carc=225:370:0.4em; draw,fill,single arrow, single arrow tip angle=45, single arrow head extend=0.75pt, single arrow head indent=0pt, inner sep=0pt, shape border rotate=90] at (0.38em,0.06em) ; draw,fill,single arrow, single arrow tip angle=45, single arrow head extend=0.75pt, single arrow head indent=0pt, inner sep=0pt, shape border rotate=270] at (-0.38em,-0.06em) ; Update observer $\\Rightarrow $ $\\mathbf {\\hat{x}}_k, \\mathbf 
{\\hat{d}}_k$ pic[black,thick]carc=45:190:0.4em; pic[black,thick]carc=225:370:0.4em; draw,fill,single arrow, single arrow tip angle=45, single arrow head extend=0.75pt, single arrow head indent=0pt, inner sep=0pt, shape border rotate=90] at (0.38em,0.06em) ; draw,fill,single arrow, single arrow tip angle=45, single arrow head extend=0.75pt, single arrow head indent=0pt, inner sep=0pt, shape border rotate=270] at (-0.38em,-0.06em) ;    pic[black,thick]carc=45:190:0.4em; pic[black,thick]carc=225:370:0.4em; draw,fill,single arrow, single arrow tip angle=45, single arrow head extend=0.75pt, single arrow head indent=0pt, inner sep=0pt, shape border rotate=90] at (0.38em,0.06em) ; draw,fill,single arrow, single arrow tip angle=45, single arrow head extend=0.75pt, single arrow head indent=0pt, inner sep=0pt, shape border rotate=270] at (-0.38em,-0.06em) ; Update $\\mathbf {q}=\\mathbf {q}(\\mathbf {\\hat{x}}_k, \\mathbf {\\hat{d}}_k)$ pic[black,thick]carc=45:190:0.4em; pic[black,thick]carc=225:370:0.4em; draw,fill,single arrow, single arrow tip angle=45, single arrow head extend=0.75pt, single arrow head indent=0pt, inner sep=0pt, shape border rotate=90] at (0.38em,0.06em) ; draw,fill,single arrow, single arrow tip angle=45, single arrow head extend=0.75pt, single arrow head indent=0pt, inner sep=0pt, shape border rotate=270] at (-0.38em,-0.06em) ; Update $\\mathcal {U}_N=\\mathcal {U}_N(\\mathbf {u}_{k1})$ Set $\\mathbf {v}_i=\\mathbf {u}_{k1}$ and $\\mathbf {p}_i=0$ $i = 0$ to $I_{max}$ $\\mathbf {t}_i = (I - \\mathbf {J}{\\lambda }_{max})\\mathbf {v}_i - \\mathbf {q} {\\lambda }_{max}$ pic[black,thick]carc=45:190:0.4em; pic[black,thick]carc=225:370:0.4em; draw,fill,single arrow, single arrow tip angle=45, single arrow head extend=0.75pt, single arrow head indent=0pt, inner sep=0pt, shape border rotate=90] at (0.38em,0.06em) ; draw,fill,single arrow, single arrow tip angle=45, single arrow head extend=0.75pt, single arrow head indent=0pt, inner sep=0pt, shape border rotate=270] at (-0.38em,-0.06em) ; $\\mathbf {p}_{i+1} = \\mathcal {P}_{\\mathcal {U}_N}(\\mathbf {t}_i)$ $\\mathbf {v}_{i+1} = (1+\\beta )\\mathbf {p}_{i+1} - \\beta \\mathbf {p}_i$ pic[black,thick]carc=45:190:0.4em; pic[black,thick]carc=225:370:0.4em; draw,fill,single arrow, single arrow tip angle=45, single arrow head extend=0.75pt, single arrow head indent=0pt, inner sep=0pt, shape border rotate=90] at (0.38em,0.06em) ; draw,fill,single arrow, single arrow tip angle=45, single arrow head extend=0.75pt, single arrow head indent=0pt, inner sep=0pt, shape border rotate=270] at (-0.38em,-0.06em) ; Transfer $\\mathbf {u}_k=\\mathbf {p}_{I_{max}+1}$ Figure: (a) State and (b) input weights and (c) corresponding average FGM iteration number." 
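For reference, a plain NumPy version of the FGM iterations (lines 5-10 of Alg. REF) for the N = 1 box projection is sketched below; the warm start, the constant step size beta and the fixed iteration budget follow the text, while the hexagonal N = 2 projection and all DSP-specific details are omitted. In a real-time setting the extreme eigenvalues of J would be precomputed offline rather than at every call.

```python
import numpy as np

def project_box(u, lower, upper):
    """Component-wise projection onto the amplitude limits (N = 1 case)."""
    return np.clip(u, lower, upper)

def fgm(J, q, u_prev, lower, upper, i_max=20):
    """Fast gradient method with warm start and a fixed iteration budget."""
    eigs = np.linalg.eigvalsh(J)                 # in practice: precomputed offline
    lam_min, lam_max = eigs[0], eigs[-1]
    beta = (np.sqrt(lam_max) - np.sqrt(lam_min)) / (np.sqrt(lam_max) + np.sqrt(lam_min))
    v = u_prev.copy()                            # warm start with the previous input
    p = np.zeros_like(u_prev)
    for _ in range(i_max + 1):                   # i = 0, ..., I_max
        t = v - (J @ v + q) / lam_max            # gradient step with step size 1/lam_max
        p_next = project_box(t, lower, upper)
        v = (1 + beta) * p_next - beta * p
        p = p_next
    return p                                     # corresponds to p_{I_max + 1}
```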
], [ "Preconditioning of the Hessian", "In Alg.", "REF , we have chosen a fixed number of iterations $I_\\text{max}$ rather than using a stopping criterion, which would increase the computational complexity.", "This is common in embedded systems applications, and an upper iteration bound can be obtained  [16] from $I_\\text{max}\\!=\\!\\max \\left\\lbrace \\!0,\\,\\min \\left\\lbrace \\!", "*{\\!\\frac{\\ln \\epsilon \\!\\!\\ln \\Delta }{\\ln (1\\!\\!\\sqrt{\\frac{1}{\\kappa }})}\\!", "},*{\\!2\\sqrt{\\frac{\\Delta }{\\epsilon }}\\!\\!2}\\!\\right\\rbrace \\!\\!\\right\\rbrace ,$ where $\\epsilon =10^{3}$ is the desired solution accuracy, $\\kappa \\kappa (\\mathbf {J})$ is the condition number of the Hessian and $\\Delta $ is a constant that depends on the constraint set $\\mathcal {U}_N$ .", "From (REF ), it can be seen that if $\\kappa $ is large, then $I_\\text{max}$ tends to be large.", "For $N\\!=\\!1$ , $\\mathbf {Q}={\\mathbf {C}}\\mathbf {C}$ and $\\mathbf {R}=I$ , $\\kappa (\\mathbf {J})\\approx 6000$ , which is far too large to solve Alg.", "REF at 10 kHz.", "The condition number of the Hessian can be reduced by setting $\\mathbf {R}=rI$ with $r\\gg 1$ , but the performance of the controller then rapidly degrades.", "Alternatively, the Hessian can be preconditioned using an invertible transformation matrix ${\\mathbf {E}}{n_u}{n_u}$ such that the condition number of the Hessian ${I_N\\otimes \\mathbf {E}}\\mathbf {J}{I_N\\otimes \\mathbf {E}}$ is minimized.", "The matrix $\\mathbf {E}$ can be found using semidefinite programming methods [17].", "However, choosing a dense $\\mathbf {E}$ significantly increases the computational complexity of the FGM as the projection is complicated.", "If $\\mathbf {E}$ is instead restricted to be a diagonal matrix, then $\\kappa (\\mathbf {J})$ is not improved substantially.", "Figure: Computation times for (a) single-core and (b) parallel implementations.", "Unnumbered slices contribute with 1%.A well-conditioned Hessian can also be obtained from choosing appropriate $\\mathbf {Q}$ and $\\mathbf {R}$ .", "For $N\\!=\\!1$ , the Hessian is $\\mathbf {J}=\\mathbf {B}^\\mathbf {P} \\mathbf {B}+\\mathbf {R}$ and the analysis can be greatly simplified by transforming system (REF ) into modal space, i.e.", "by approximating $\\mathbf {A}\\approx aI$ , $\\mathbf {B}\\approx bI$ and using $\\mathbf {C}=\\mathbf {U}\\mathbf {\\Sigma }{\\mathbf {V}}$ to diagonalize (REF ).", "In modal space, the matrix $\\mathbf {\\hat{P}}=(\\hat{p}_1,\\dots ,\\hat{p}_{n_u}){\\mathbf {V}}\\mathbf {P}\\mathbf {V}$ is diagonal and the DARE (REF ) is solved by $\\hat{p}_i = \\frac{1}{2b^2}\\left(-\\xi _i+\\sqrt{\\xi _i^2+4b^2 \\hat{q}_i\\hat{r}_i}\\right),$ where $\\xi _i =\\hat{r}_i-a^2\\hat{r}_i-b^2\\hat{q}_i$ and $\\hat{q}_i$ and $\\hat{r}_i$ are the diagonal elements of $\\mathbf {\\hat{Q}}{\\mathbf {\\Sigma }}\\mathbf {\\Sigma }$ and $\\mathbf {\\hat{R}}(\\hat{r}_1,\\dots ,\\hat{r}_{n_u})$ .", "For fixed $\\hat{r}_i$ , the low-order modes (large $\\hat{q}_i$ ) have a large cost (large $\\hat{p}_i$ ), which in turn yields a large LQR control gain $\\hat{k}_i=ab\\hat{p}_i/(\\hat{r}_i+b^2\\hat{p}_i)$  [9].", "The DLS-I internal model controller (IMC) is closely related to LQR [18] and computes the open-loop gains as ${{\\mathbf {\\Sigma }}\\mathbf {\\Sigma }+\\lambda I}{\\mathbf {\\Sigma }}$ , where $\\lambda >0$ is a regularization parameter.", "The modal input weights $\\hat{r}_i$ can be chosen such that the LQR controller gain matches the IMC open-loop gain for each 
mode $i$ , which is depicted in Fig.", "REF (a) and (b) (IMC, red).", "For IMC with $\\lambda = 0$ , the open-loop gain is proportional to ${\\mathbf {\\Sigma }}$ , so the higher order modes must be detuned, whereas for LQR, the open-loop gain is “proportional” to $\\mathbf {\\Sigma }$ , so the low order modes must be detuned.", "However, this choice of input weights does not decrease the condition number of the Hessian.", "For $N\\!=\\!1$ , the condition number of the resulting Hessian is $7,485$ and $11,616$ for $N\\!=\\!2$ .", "A simple way to significantly decrease the condition number of the Hessian is to choose $\\mathbf {\\hat{R}}=I$ and to limit the diagonal elements of $\\mathbf {\\hat{Q}}$ to a minimum and maximum value, which is depicted in Fig.", "REF .a (saturated, blue).", "This approach has also been chosen in Fig.", "REF , where it can be seen that there is no decrease in controller performance compared to IMC.", "For $N\\!=\\!1$ , the condition number of the resulting Hessian is 21, while it is 31 for $N\\!=\\!2$ .", "Note that the state weighting matrix in the original space can be recovered by setting $\\mathbf {Q}=\\mathbf {V}\\mathbf {\\hat{Q}}{\\mathbf {V}}$ .", "The required number of iterations for Alg.", "REF is illustrated in Fig.", "REF .b, which shows the number of iterations averaged over 10,000 MPC problem instances.", "For each instance we count the number of iterations required for the algorithm's iterates to satisfy ${\\mathbf {p}_{i+1}-\\mathbf {p}_{i}}<\\epsilon $ and ${\\mathbf {p}_{i+1}-\\mathbf {p}_{i}}<\\epsilon {\\mathbf {p}_{i}}$ with $\\epsilon =10^{3}$ .", "As expected from the upper bound (REF ), significantly more iterations are required when the condition number is large." ], [ "Implementation", "DLS-I has implemented the network topology shown in Fig.", "REF for transmitting the BPM measurements across the storage ring.", "At each time instant, the BPMs (gray dots) inject new measurements into the network, which are synchronized and forwarded to each of the 24 nodes that compute the control inputs for the neighboring corrector magnets.", "DLS-II will considerably simplify the topology of Fig.", "REF and implement a centralized network, where the BPM signals from each cell will be sent to one central computing node.", "For testing our algorithm on DLS-I, we connect the new hardware to the communication network as illustrated in Fig.", "REF .", "The computed control signals will then be “disguised” as BPM signals again and each of the 24 distributed nodes will select the corresponding signal to pass to the neighboring magnets.", "Figure: Diamond-I communication network topology.The new central computing node is a VadaTech AMC540 board [19] that embeds a Xilinx Virtex-7 FPGA and two Texas Instruments (TI) C6678 digital signal processors (DSPs) [20] with 8 cores each.", "For our tests, the control algorithm will be implemented on the DSPs, which are more flexible to program, while the FPGA will be responsible for signal routing.", "A PCIe link is used to transfer BPM and control input data between the FPGA and the DSPs, which takes roughly $5~\\mu $ s ($6.6$  Gbps) when executed by the direct memory access (DMA) engine of the DSP.", "The DSPs are clocked at 1.4 GHz and the sampling frequency of 10 kHz allows for 140,000 processor cycles (100 $\\mu $ s).", "One core of each DSP is used to communicate with the control room through a gigabit ethernet link.", "The control problems for the vertical and horizontal beam directions are independent and one 
DSP is used for each direction." ], [ "Single-Core Implementation", "The TI C6678 is a floating point processor with single-instructions multiple-data (SIMD) capabilities that can be programmed in C. It has two levels of core-local memory (L1, 32 kB and L2, 512 kB) and a third level of shared memory (L3, 4 MB) with the L1 memory being configured as cache.", "Accessing the L2 memory is twice as fast as accessing the L3 memory [20].", "For the gradient step of Alg.", "REF , we have implemented a highly optimized routine that exploits the core architecture and uses SIMDs.", "Analogous to standard row-major matrix-vector multiplication, the routine implements two nested for-loops, where the first loop iterates over rows and the second over columns.", "To minimize memory transactions and maximize the use of SIMD, the inner loop computes 8 rows and 4 columns at once.", "To maximize the efficiency of the cache, the arrays are aligned to cache line boundaries, zero-padded to multiples of 4 or 8 floats and rearranged such that the unrolled rows are contiguous in memory.", "For $N\\!=\\!1$ , the algorithm can be implemented as shown in Alg.", "REF and all the problem data, such as the Hessian $\\mathbf {J}$ , can be saved in L2 memory.", "For $N\\!=\\!2$ , the Hessian uses almost the whole L2 memory, so some data must be moved to the slower L3 memory.", "The cache efficiency for the projection can be increased by permuting the data using a perfect shuffle, so that the inputs for magnet $i$ and horizon stages 0 and 1 are contiguous in memory.", "The computational complexity of the gradient step could be reduced by considering the sparsity patterns in the definition of the Hessian (REF ), i.e.", "by separating the multiplications by $\\mathbf {G}$ and $(I_N\\otimes \\mathbf {Q})\\oplus \\mathbf {P}$ .", "The single-core performance with $I_\\text{max}=20$ and horizon $N\\!=\\!\\lbrace 1,2\\rbrace $ is shown in Fig.", "REF .a.", "It requires 543 $\\mu $ s for $N\\!=\\!1$ and 3550 $\\mu $ s for $N\\!=\\!2$ to compute the control inputs, which is more than the desired 100 $\\mu $ s. The most expensive operation is the gradient step, which takes 357 $\\mu $ s for $N\\!=\\!1$ and 3000 $\\mu $ s for $N\\!=\\!2$ .", "As the algorithm is dominated by the gradient step, one would expect the computation time to quadruple when doubling the problem size.", "However, transferring problem data that lies in the L3 memory and additional cache inefficiencies incur substantial overheads.", "The single core performance could certainly be increased by configuring the L2 memory as cache, but our parallel implementation uses the L2 memory for saving core-local data and this approach was not pursued further." 
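The remark on separating the multiplications by G and (I_N ⊗ Q) ⊕ P can be made explicit as follows: the same gradient J u + q is assembled either from the pre-formed dense Hessian or from its structured factors. The snippet only illustrates the algebra; on the DSP the preferable variant additionally depends on memory placement, alignment and SIMD-friendly layouts.

```python
import numpy as np

def grad_dense(J, q, u):
    """Gradient via the pre-formed (dense) Hessian."""
    return J @ u + q

def grad_factored(G, W, R_blk, q, u):
    """Same gradient assembled from the factors J = G^T W G + I_N (x) R.

    G and W are the prediction and weight matrices from the condensation step,
    and R_blk is the block-diagonal input weight I_N (x) R.
    """
    return G.T @ (W @ (G @ u)) + R_blk @ u + q

# Sanity check against the dense form (placeholder data):
# J = G.T @ W @ G + R_blk
# assert np.allclose(grad_dense(J, q, u), grad_factored(G, W, R_blk, q, u))
```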
], [ "Parallelization", "All steps of Alg.", "REF can be parallelized using a standard manager-worker framework, but variable dependencies require core communication and cache operations that are denoted by circled arrows.", "The same executable is used for all cores and the code is branched off based on the core ID.", "Note that the observer operations $\\mathbf {\\hat{y}}_k\\mathbf {C}\\mathbf {\\hat{z}}_k^\\mu $ and $\\mathbf {L}(\\mathbf {y}_k-\\mathbf {\\hat{y}}_k-\\mathbf {\\hat{d}}_k)$ are computed separately and require two synchronization steps.", "For the problem size of the MPC problem (REF ), the cost of parallelization is not negligible.", "Fig.", "REF shows the overhead introduced by interprocessor communication measured by the elapsed time between a manager request and the acknowledgement of $n_w$ worker cores without worker payload.", "Three different implementations are compared: The TI Notify scheme, which is a library provided by TI and used by the TI open multi-processing (openMP) toolbox, the TI multicore navigator (NAV), which is implemented through a separate on-chip processor, and our custom interrupt-free implementation.", "The TI notification schemes are flexible, but introduce a considerable delay.", "Note that with 20 synchronization points, the TI Notify scheme alone would introduce 200 $\\mu $ s of overhead.", "For our custom approach, we chose to implement a simpler scheme using integer flags that are saved in the L3 memory.", "For further speed-up, the L1 cache is by-passed by creating a non-cacheable virtual memory section.", "In practice, at each communication step it is also required to invalidate or write-back the cache, which can be manually triggered using TI's chip support library.", "Alg.", "REF is sliced into $6\\times 32$ row-blocks with 192 columns each and deployed on 6 worker cores and 1 manager core.", "The length of the slices must be a multiple of the cache line size (64 B) and using 7 worker cores would not yield any speed up.", "The master core coordinates the various steps of Alg.", "REF , communicates with the adjacent FPGA and triggers the DMA.", "A breakdown of the computation time of Alg.", "REF with $I_\\text{max}=20$ is shown in Fig.", "REF .b.", "For $N\\!=\\!1$ , the algorithm uses 69 $\\mu $ s, which is well below the allowed 100 $\\mu $ s, but for $N\\!=\\!2$ , the computation time of 272 $\\mu $ s is far above the time limit.", "Comparing Fig.", "REF .a and b, the parallelization reduces the computation time by a factor between about 8 and 13.", "In theory, one would expect the computation time to be reduced by a factor smaller than $n_w$ when deployed onto $n_w$ worker cores.", "We suspect that this discrepancy is due to memory and cache bandwidth limitations on the single core implementation.", "Figure: Interprocessor communication overhead." 
], [ "Conclusion", "In this feasibility study, we have focused on the practical issues of implementing MPC for the DLS-I electron beam stabilization problem.", "To obtain an implementation that runs at the desired speed, we tailored the MPC algorithm to the application.", "Firstly, we avoided removing the time delay by augmenting the system with additional states and designed an observer for the delayed states instead.", "The delayed measurement updates were then projected into the future, which exploited the diagonal structure of the state-space system.", "Secondly, because standard preconditioning techniques with diagonal preconditioning matrices were unable to reduce the condition number of the Hessian, we used the modal decomposition to choose appropriate state- and input-weighting matrices that led to a Hessian with a small condition number.", "Finally, we showed that standard parallelization toolboxes, such as openMP, introduce overheads that would prohibit the algorithm from running at the desired speed, and we therefore implemented a customized core-synchronization framework.", "Our investigation showed that MPC is applicable to the electron beam stabilization problem, but requires investment of significant effort into the theoretical and practical implementation as well as paying particular attention to details, such as overheads introduced by the CQP initialization or parallelization, which are often neglected in theoretical investigations.", "Our practical tests also showed that assumptions on computational complexities can be inaccurate, e.g.", "doubling the CQP problem size does not necessarily result in quadruple computation time nor does parallelizing the algorithm on $n_w$ cores increase the computation speed by a factor of $n_w$ .", "In anticipation of our tests, we demonstrated the feasibility for the DLS-I storage ring, but we have not considered a number of additional changes that DLS-II will introduce.", "The number of actuators will be increased from 173 to 396 for Diamond-II, which will significantly increase the computational complexity of the algorithm and further slow down the controller.", "However, in contrast to the current system the DLS-II system will have a block-circulant and centrosymmetric symmetry, which can be exploited to increase the computational speed of the controller by a factor of 10 [8].", "For DLS-I, all corrector magnets are actuated at 10 kHz, whereas at Diamond-II, the 144 fast actuators will be actuated at 100 kHz and the slow actuators at 1 kHz.", "This would give rise to another MPC scheme in which the control inputs for the slow actuators are computed every 100 time step and the control inputs for the fast actuators are computed every other time step.", "For such an MPC scheme, the closed-loop stability would need to be assessed separately.", "A communication controller in the DLS-I storage ring manages the communication between computing nodes and BPMs.", "At DLS-II, the BPM measurements will be sent to one central node and not all measurements will be synchronized.", "In this paper, we designed an observer that receives measurements that have the same time delay and projects the measurement update to the current state.", "If the measurement have different delays, this could be considered in the observer.", "All our simulations used measurement data from DLS-I and it is expected that the power spectrum of the DLS-II disturbances will change.", "A disturbance model was used to compute the feedforward setpoint, and it was assumed that the 
disturbances are independent and identically distributed.", "For DLS-II, the disturbances might be correlated, in which case a different disturbance model could be used.", "Considering correlated disturbances could increase the performance of the controller in terms of disturbance attenuation." ] ]
2107.01694
[ [ "Lower and upper order of harmonic mappings" ], [ "Abstract In this paper, we define both the upper and lower order of a sense-preserving harmonic mapping in $\\mathbb{D}$.", "We generalize to the harmonic case some known results about holomorphic functions with positive lower order and we show some consequences of a function having finite upper order.", "In addition, we improve a related result in the case when there is equality in a known distortion theorem for harmonic mappings with finite upper order.", "Some examples are provided to illustrate the developed theory." ], [ "Introduction", "In 1964 Pommerenke [19] introduces the linear invariance order for a locally univalent holomorphic function defined in the unit disk $\\mathbb {D}:=\\left\\lbrace z: |z|<1 \\right\\rbrace $ by $\\alpha (f):=\\sup _{z\\in \\mathbb {D}}|A_f(z)|,$ where, as in [7], [20], $A_f(z)=\\frac{1-|z|^2}{2}\\frac{f^{\\prime \\prime }(z)}{f^{\\prime }(z)}-\\overline{z}.$ This operator has played an important role in the study of both analytic and geometric properties of locally univalent holomorphic functions; it is closely related to (Euclidean) convexity and concavity properties of the function under study, as well as to growth and distortion results, some of which can be obtained by using the connection between $A_f,$ the second coefficient $a_2=f^{\\prime \\prime }(0)/2$ and the notion of linear invariant family of locally univalent analytic functions.", "An important result in this direction was proved by Pommerenke [19], who showed that $\\alpha (f)\\ge 1$ for all locally univalent holomorphic function $f$ and $\\alpha (f)=1$ exactly if $f$ is a convex univalent function.", "Likewise, a straightforward calculation shows that (see for example [20]) $\\frac{f\\left(\\frac{z+z_0}{1+\\overline{z_0}z} \\right)-f(z_0) }{(1-|z_0|^2)f^{\\prime }(z_0)}=z+A_f(z_0)z^2+\\cdots ,\\quad z\\in \\mathbb {D},$ for all $z_0\\in \\mathbb {D}$ given.", "Also, $\\frac{\\partial }{\\partial z}\\log |(1-|z|^2)f^{\\prime }(z)|=\\frac{1}{1-|z|^2}A_f(z),\\quad z\\in \\mathbb {D}$ and, if $\\varphi :\\mathbb {D}\\rightarrow \\mathbb {D}$ is analytic and locally univalent, then $A_{f\\circ \\varphi }(z)=\\frac{(1-|z|^2)\\varphi ^{\\prime }(z)}{1-|\\varphi (z)|^2}\\left( A_f(\\varphi (z))+\\overline{\\varphi (z)} \\right)+A_{\\varphi }(z),$ for all $z\\in \\mathbb {D}.$ In particular, if $\\varphi \\in \\text{Aut}(\\mathbb {D}),$ where $\\text{Aut}(\\mathbb {D})$ denotes the family of automorphisms of $\\mathbb {D},$ then $A_{f\\circ \\varphi }(z)=\\frac{\\varphi ^{\\prime }(z)}{|\\varphi ^{\\prime }(z)|}A_f(\\varphi (z)),$ for all $z\\in \\mathbb {D}.$ Due to the previously described, and based on the work developed by Pommerenke, some authors have defined and studied in other contexts operators with similar properties to $A_f$ .", "For example, Ma and Minda [15], [16] investigated topics related to linear invariance but in two non Euclidean geometries: they proposed both a definition of spherical linear invariance for locally univalent meromorphic functions defined on the unit disk and a definition of hyperbolic linear invariance for locally univalent functions that map the unit disk into itself.", "More precisely, they defined the order of a locally univalent meromorphic function $f$ defined in $\\mathbb {D}$ by $\\alpha _s(f):=\\sup _{z\\in \\mathbb {D}}|A^{\\#}_f(z)|,$ where $A^{\\#}_f(z)=\\frac{1-|z|^2}{2}\\frac{f^{\\prime \\prime }(z)}{f^{\\prime }(z)}-\\overline{z}-\\frac{(1-|z|^2)f^{\\prime 
}(z)\\overline{f(z)}}{1+|f(z)|^2},$ and the order of a locally univalent holomorphic function $f:\\mathbb {D}\\rightarrow \\mathbb {D}$ by $\\alpha _h(f):=\\sup _{z\\in \\mathbb {D}}|A^{h}_f(z)|,$ where $A^{h}_f(z)=\\frac{1-|z|^2}{2}\\frac{f^{\\prime \\prime }(z)}{f^{\\prime }(z)}-\\overline{z}+\\frac{(1-|z|^2)f^{\\prime }(z)\\overline{f(z)}}{1-|f(z)|^2}.$ Just like in the Euclidean setting, in both cases the authors proved that the order always is greater or equal than 1, and the equality occurs precisely when $f$ is spherically or hyperbolically convex, respectively.", "We remark that other authors, in particular J.", "A. Pfaltzgraff and T. J. Suffridge [17], [18], have investigated the $n$ -dimensional version of Pommerenke's theory of (Euclidean) linear invariance.", "It has also been extended to the setting of planar harmonic mappings, scenario in which the present paper is framed.", "In this context the pioneering work was that of T. Sheil-Small [24], in which introduced the notion of an affine and linear invariant family of univalent harmonic functions.", "On the other hand, Cruz and Pommerenke [7] defined the concept of lower linear-invariance order for a locally univalent holomorphic function defined in $\\mathbb {D}$ by $\\mu (f):=\\inf _{z\\in \\mathbb {D}}|A_f(z)|.$ It is known that $0\\le \\mu (f)\\le 1$ [7].", "Moreover, the lower order is closely related with properties of concavity; for example, in [7] is proved that $0\\le \\mu (f)\\le 1$ and if $f$ is univalent, $\\mu (f)=1$ holds exactly if $f$ is concave.", "Subsequently in [20] the author carries out a deeper study of the lower order and proved interesting results under the hypothesis $\\mu (f)>0$ .", "More recently and following the ideas of Pommerenke, J. Arango et al.", "in [1] and [2], defined the lower linear invariant spherical order by $\\mu _s(f)=\\inf _{z\\in \\mathbb {D}}|A^{\\#}_f(z)|$ and the lower linear invariant hyperbolic order by $\\mu _h(f)=\\inf _{z\\in \\mathbb {D}}|A^{h}_f(z)|$ .", "In [1] and [2], the authors noted that, as in the Euclidean case, $0\\le \\mu _s(f)\\le 1$ and there are functions with lower order positive, however, in the hyperbolic case, every function $f$ , satisfies that $\\mu _h(f)=0$ .", "In this work, in a similar way to the ideas discussed above, we introduce the concepts of lower and upper order for a sense-preserving harmonic mapping $f:\\mathbb {D}\\rightarrow \\mathbb {C}$ , which match respectively with $\\mu (f)$ and $\\alpha (f)$ in the case when $f$ is analytic.", "Although these notions appear implicitly in previous researches about affine and linear invariant families of harmonic functions, we focus our attention on properties of the operator $A_f$ in the setting of planar harmonic mappings; in Section  we show that it has invariance properties similar to those of the operator defined by (REF ).", "In Section , we define the lower order for a sense-preserving harmonic mapping, we show that there are functions with positive lower order, and we generalize, to the harmonic case, some of the results obtained by Pommerenke in [20].", "In Section , we establish a relation between the upper order and linearly connected domains.", "Finally, we study a characterization of functions with finite order in terms of bounds for the Jacobian.", "In addition, we exhibit some interesting examples, which illustrate the theory." 
], [ "Preliminaries on harmonic mappings", "In this section we introduce briefly some basic results about harmonic functions in the plane, which are used throughout this paper.", "Let $f$ be a planar harmonic mapping defined on a domain $\\Omega \\subseteq \\mathbb {C}.$ It is well known that if $\\Omega $ is simply connected, $f$ has the canonical representation $f=h+\\overline{g}$ , where $h$ and $g$ are analytic functions in $\\Omega $ ; this representation is unique up to an additive constant, which is usually determined by imposing the condition $g(z_0)=0$ for some $z_0$ fixed in $\\Omega $ .", "Lewy [13] proved that $f$ is locally univalent in $\\Omega $ if and only if its Jacobian $J_f=|h^{\\prime }|^2-|g^{\\prime }|^2$ does not vanish.", "Thus, if $f$ is locally univalent in $\\Omega ,$ it is either sense-preserving or sense-reversing depending on the conditions $J_f>0$ or $J_f<0$ throughout the domain $\\Omega ,$ respectively.", "Along this paper we will consider sense-preserving harmonic mappings on the unit disk $\\mathbb {D},$ in this case the analytic part $h$ is locally univalent in $\\mathbb {D}$ and the second complex dilatation of $f,$ $\\omega = g^{\\prime }/h^{\\prime },$ is an analytic function in $\\mathbb {D}$ satisfying $|\\omega |<1.$ The family of all sense-preserving univalent harmonic mappings $f=h+\\overline{g}$ defined in $\\mathbb {D}$ , normalized by $h(0) = 0,$ $g(0) = 0,$ and $h^{\\prime }(0) = 1$ , will be denoted by $S_H$ .", "Also, $S_H^0$ will denote the subclass of functions in $S_H$ that satisfy the further normalization $g^{\\prime }(0) = 0,$ $K_H$ will denote the subclass of functions in $S_H$ that are convex, and $K^0_H=K_H\\cap S_H^0.$" ], [ "The operator $A_f.$", "In a similar form to the analytic case, given a sense-preserving harmonic mapping $f=h+\\overline{g}$ in $\\mathbb {D},$ we define the operator $A_{f}$ by $A_{f}(z) =\\frac{\\left(1-|z|^{2}\\right)}{2}P_{f}(z)-\\overline{z},\\qquad z\\in \\mathbb {D},$ where $P_{f}=\\partial _z\\log J_{f} =\\frac{h^{\\prime \\prime }}{h^{\\prime }}-\\frac{\\overline{\\omega }\\omega ^{\\prime }}{1-\\left|\\omega \\right|^{2}}$ is the pre-schwarzian derivative of$f$ introduced in [12].", "Hence, we can express $A_f$ in the form $A_f(z)=A_h(z)-\\frac{(1-|z|^2)}{2}\\frac{\\overline{\\omega (z)}\\omega ^{\\prime }(z)}{1-\\left|\\omega (z)\\right|^{2}},\\qquad z\\in \\mathbb {D},$ where $A_h$ is given by (REF ).", "Observe that if $f$ is analytic, then $\\omega =0$ and therefore $A_f=A_h.$ Thus, in the analytic case $A_f$ coincides with the operator introduced in [19] for analytic functions defined in $\\mathbb {D}.$ We remark that $A_f$ arises in a natural way when studying problems related to the second coefficient of affine and linear invariant families of harmonic mappings in the unit disk.", "Indeed, as in the analytic case, it appears in an expression of the type (REF ), which can be seen by the following standard argument: given $a\\in \\mathbb {D},$ we consider the function $F(z)=\\frac{f\\left(\\frac{a+z}{1+\\overline{a}z} \\right) -f(a)}{(1-|a|^2)h^{\\prime }(a)}:=H(z)+\\overline{G(z)},$ which satisfies $A_f(a)=A_F(0).$ So, if $B_1:=G^{\\prime }(0),$ $F_0(z)=\\frac{F(z)-\\overline{B_1F(z)}}{1-|B_1|^2}=H_0(z)+\\overline{G_0(z)}$ satisfies $H_0^{\\prime }(0)=1,$ $G_0^{\\prime }(0)=0,$ and $A_f(a)=A_F(0)=A_{F_0}(0)=A_{H_0}(0)=\\frac{H_0^{\\prime \\prime }(0)}{2},$ being the penultimate equality a consequence of (REF ).", "Further properties of $A_f,$ including among them equalities of the type (REF ) 
and (REF ), are provided in the following proposition.", "The proof is a straightforward calculation, which we include here in order to make our exposition self contained.", "Proposition 1 Let $f=h+\\overline{g}$ be a sense-preserving harmonic mapping in $\\mathbb {D}$ .", "Then (i) $A_f$ does not vanish identically; (ii) For all $\\sigma \\in \\text{Aut}\\left( \\mathbb {D}\\right),$ we have $A_{f\\circ \\sigma }=\\dfrac{\\sigma ^{\\prime }}{\\left|\\sigma ^{\\prime }\\right|} \\left( A_{f}\\circ \\sigma \\right);$ (iii) For all affine mapping $L\\left( z\\right)=az+b\\overline{z}+c,$ with $a,b,c\\in \\mathbb {C}$ and $\\left|b/a\\right|<1,$ we have $A_{L\\circ f}=A_{f}.$ To prove (i), we first note that $J_{f}=\\left|h^{\\prime }\\right|^{2}-\\left|g^{\\prime }\\right|^{2}\\le \\left|h^{\\prime }\\right|^{2}$ and moreover $A_{f}$ can be expressed in the form $A_{f}(z)=(1-|z|^{2})\\frac{\\partial }{\\partial z}\\log \\left\\lbrace (1-|z|^{2})J_{f}^{1/2}(z)\\right\\rbrace .$ Thus, if $A_{f}$ is identically zero, then there is $k\\in \\mathbb {R}$ such that $(1-|z|^{2})J_{f}^{1/2}(z)=k$ for all $z\\in \\mathbb {D}.$ It follows from here and from (REF ) that $\\left|h^{\\prime }(z)\\right|\\ge \\frac{k}{1-|z|^{2}},\\qquad \\text{ for all }z\\in \\mathbb {D},$ from where $\\left|h^{\\prime }\\right|\\rightarrow \\infty $ as $|z|\\rightarrow 1,$ which is a contradiction with the maximum principle for the analytic function $1/h^{\\prime }.$ Now we prove (ii).", "A straightforward calculation shows that $P_{f\\circ \\sigma }=\\left( \\frac{h^{\\prime \\prime }}{h^{\\prime }}\\circ \\sigma \\right) \\sigma ^{\\prime }+\\frac{\\sigma ^{\\prime \\prime }}{\\sigma ^{\\prime }}-\\frac{\\overline{\\omega \\circ \\sigma }\\left( \\omega \\circ \\sigma \\right)^{\\prime }}{1-\\left|\\omega \\circ \\sigma \\right|^{2}}\\sigma ^{\\prime }=\\left( P_{f}\\circ \\sigma \\right) \\sigma ^{\\prime }+\\frac{\\sigma ^{\\prime \\prime }}{\\sigma ^{\\prime }}.$ Thus, by Schwarz-Pick's lemma, $A_{f\\circ \\sigma }(z) & =\\frac{\\left( 1-\\left|z\\right|^{2}\\right) \\sigma ^{\\prime }(z) }{2}P_{f}\\left( \\sigma (z) \\right) +\\frac{\\left( 1-\\left|z\\right|^{2}\\right) }{2}\\frac{\\sigma ^{\\prime \\prime }}{\\sigma ^{\\prime }}(z)-\\overline{z}\\\\& =\\frac{\\left( 1-\\left|\\sigma (z)\\right|^{2}\\right)}{2}\\frac{\\sigma ^{\\prime }(z)}{\\left|\\sigma ^{\\prime }(z)\\right|}P_{f}\\left( \\sigma (z)\\right) -\\frac{\\sigma ^{\\prime }(z)}{\\left|\\sigma ^{\\prime }(z)\\right|}\\overline{\\sigma (z)}\\\\&=\\frac{\\sigma ^{\\prime }(z)}{\\left|\\sigma ^{\\prime }(z)\\right|}A_{f}\\left(\\sigma (z)\\right),$ for $z\\in \\mathbb {D}$ arbitrary.", "Finally, by Proposition 1 in [12] we have $P_{L\\circ f}=P_{f}.$ So, $A_{L\\circ f}\\left( z\\right) =\\frac{\\left( 1-\\left|z\\right|^{2}\\right) }{2}P_{L\\circ f}\\left( z\\right) -\\overline{z}=\\frac{\\left(1-\\left|z\\right|^{2}\\right) }{2}P_{f}\\left( z\\right)-\\overline{z}=A_{f}\\left( z\\right),$ for $z\\in \\mathbb {D}$ arbitrary, which proves (iii)." 
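Proposition 1(ii) can also be checked numerically: the sketch below evaluates both sides of the invariance relation for a sample harmonic map with analytic part z/(1-z) and dilatation omega(z) = z/2 (chosen only for the test) and a disk automorphism sigma; the two values should agree to machine precision.

```python
import numpy as np

def A_harmonic(z, hp, hpp, w, wp):
    """A_f for a sense-preserving harmonic map f = h + conj(g) with dilatation w = g'/h'."""
    t = (1 - abs(z) ** 2) / 2
    return t * hpp(z) / hp(z) - np.conj(z) - t * np.conj(w(z)) * wp(z) / (1 - abs(w(z)) ** 2)

# Sample harmonic map: analytic part h(z) = z/(1-z), dilatation w(z) = z/2 (our choice).
hp  = lambda z: 1 / (1 - z) ** 2
hpp = lambda z: 2 / (1 - z) ** 3
w   = lambda z: z / 2
wp  = lambda z: 0.5 + 0j

# Disk automorphism sigma(z) = (z + a)/(1 + conj(a) z).
a = 0.4 - 0.1j
sigma   = lambda z: (z + a) / (1 + np.conj(a) * z)
sigmap  = lambda z: (1 - abs(a) ** 2) / (1 + np.conj(a) * z) ** 2
sigmapp = lambda z: -2 * np.conj(a) * (1 - abs(a) ** 2) / (1 + np.conj(a) * z) ** 3

# Data of the composed map F = f o sigma: analytic part h o sigma, dilatation w o sigma.
Hp  = lambda z: hp(sigma(z)) * sigmap(z)
Hpp = lambda z: hpp(sigma(z)) * sigmap(z) ** 2 + hp(sigma(z)) * sigmapp(z)
W   = lambda z: w(sigma(z))
Wp  = lambda z: wp(sigma(z)) * sigmap(z)

z = 0.2 + 0.3j
lhs = A_harmonic(z, Hp, Hpp, W, Wp)                                  # A_{f o sigma}(z)
rhs = sigmap(z) / abs(sigmap(z)) * A_harmonic(sigma(z), hp, hpp, w, wp)
print(abs(lhs - rhs))                                                # should be ~ 0
```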
], [ "Lower order", "Following the ideas presented in [7] and [20], we introduce the notion of lower order in the setting of harmonic mappings and we will prove similar results to Proposition 4.1 and Proposition 5.1 in [20].", "Let $f=h+\\bar{g}$ be a sense-preserving harmonic mapping in $\\mathbb {D}$ .", "We define the lower order of $f$ by $\\mu (f)=\\inf _{z\\in \\mathbb {D}}|A_f(z)|,$ where $A_f(z)=A_h(z)-\\frac{(1-|z|^2)}{2}\\frac{\\overline{\\omega (z)}\\omega ^{\\prime }(z)}{1-|\\omega (z)|^2},\\qquad z\\in \\mathbb {D}.$ From the properties of $A_f$ it follows that $\\mu (L\\circ f\\circ \\sigma )=\\mu (f)$ , for $L$ an affine mapping and $\\sigma \\in \\text{Aut}(\\mathbb {D})$ .", "Since $0\\le \\mu (h)\\le 1,$ because $h$ is a locally univalent analytic function, we obtain from the Schwarz-Pick classical inequality that $0\\le \\mu (f)\\le 3/2.$ To illustrate we present two examples, which show that the bound 3/2 is sharp, and that there are harmonic mappings with positive lower order.", "Example 1 We consider the harmonic mapping $L(z) =\\frac{1}{2}[l(z)+k(z)]+\\overline{\\frac{1}{2}[l(z)-k(z)]},$ where $l(z)=\\frac{z}{1-z}\\qquad \\text{and}\\qquad k(z)=\\frac{z}{(1-z)^{2}}.$ We know that $L$ belongs to $K_H^0$ and it maps $\\mathbb {D}$ onto the full half plane $\\text{Re}\\left\\lbrace w\\right\\rbrace >-1/2,$ see for example [8].", "Now, a direct calculation gives $A_L(z)=\\frac{3}{2}\\left( \\frac{1-\\bar{z}}{1-z}\\right),\\qquad z\\in \\mathbb {D},$ and consequently $\\mu (L)=3/2,$ which shows that the bound referred above is sharp.", "Example 2 We now consider the Koebe harmonic function, which is defined by $K(z)=h(z)+\\overline{g(z)}=k(z)+2\\text{Re}\\left\\lbrace g(z) \\right\\rbrace ,$ where $h(z)=\\frac{z-z^2/2+z^3/6}{(1-z)^3}\\qquad \\text{ and }\\qquad g(z)=\\frac{z^2/2+z^3/6}{(1-z)^3}.$ The Koebe harmonic function was constructed by Clunie and Sheil-Small [5] as a candidate to play the role of extremal function for some problems in the class $S_H^0,$ see also [8].", "It maps $\\mathbb {D}$ harmonically onto the full plane slit along the negative real axis from $-1/6$ to infinity.", "After an algebraic calculation, we obtain $A_K(z)=\\frac{3}{2}\\left[ \\frac{1-|z|^2}{1-z^2}\\left(\\frac{5}{3}+z \\right)-\\overline{z} \\right]=\\frac{3}{2}\\left[\\frac{\\frac{5}{3}(1-|z|^2)+2i\\rm {Im}\\lbrace z\\rbrace }{1-z^2}\\right] .$ A straightforward calculation gives that $\\left|\\frac{5}{3}(1-|z|^2)+2i\\rm {Im}\\lbrace z\\rbrace \\right|^2-\\left|1-z^2\\right|^2=\\frac{16}{9}\\left(1-|z|^2\\right)^2.$ Then, we have that $|A_K(z)|=\\frac{3}{2}+2\\cdot \\frac{1-|z|^2}{|1-z^2|}.$ Therefore, $\\mu (K)=3/2.$ We recall that a continuous, injective, and unbounded function $f:\\mathbb {D}\\rightarrow \\mathbb {C}$ is said to be concave, if $\\mathbb {C}\\setminus f(\\mathbb {D})$ is convex.", "The functions $L$ and $K$ of Examples REF and REF are concave harmonic functions.", "As was mentioned in the introduction, if $f$ is analytic and locally univalent in $\\mathbb {D}$ , then $0\\le \\mu (f)\\le 1.$ When $f$ is univalent, the upper bound is attained if and only if $f$ is concave, which is equivalent (if we assume $f(\\mathbb {D})$ with angle $\\pi \\alpha $ at $\\infty ,$ $1\\le \\alpha \\le 2$ ) to that $f$ satisfies $\\text{Re}\\left\\lbrace \\frac{\\alpha +1}{2}\\frac{1+z}{1-z}-1-z\\frac{f^{\\prime \\prime }(z)}{f^{\\prime }(z)} \\right\\rbrace >0,\\qquad z\\in \\mathbb {D},$ see [7].", "In this direction, in the setting of harmonic functions, we provide a lower bound for 
the lower order of a sense-preserving harmonic mapping $f=h+\\overline{g},$ satisfying the property $\\varphi _{\\lambda }=h+\\lambda g$ concave for all $|\\lambda |=1$ .", "The existence of such functions can be proved as follows.", "For any $\\alpha >1$ there existe an unbounded univalent analytic function $h$ satisfying $\\text{Re}\\left\\lbrace \\frac{\\alpha +1}{2}\\frac{1+z}{1-z}-1-z\\frac{h^{\\prime \\prime }(z)}{h^{\\prime }(z)} \\right\\rbrace \\ge \\beta >0,\\qquad z\\in \\mathbb {D}, $ for some $1<\\alpha \\le 2$ (for instance $h(z)=(1-z)^{-(1+2\\beta )}$ ).", "Therefore, $\\text{Re}\\left\\lbrace \\frac{\\alpha +1}{2}\\frac{1+z}{1-z}-1-z\\frac{\\varphi _{\\lambda }^{\\prime \\prime }(z)}{\\varphi _{\\lambda }^{\\prime }(z)} \\right\\rbrace &= \\text{Re}\\left\\lbrace \\frac{\\alpha +1}{2}\\frac{1+z}{1-z}-1-z\\left(\\frac{h^{\\prime \\prime }(z)}{h^{\\prime }(z)}+\\frac{\\lambda \\omega ^{\\prime }(z)}{1+\\lambda \\omega (z)} \\right) \\right\\rbrace \\\\& = \\text{Re}\\left\\lbrace \\frac{\\alpha +1}{2}\\frac{1+z}{1-z}-1-z\\frac{h^{\\prime \\prime }(z)}{h^{\\prime }(z)} \\right\\rbrace -\\text{Re}\\left\\lbrace \\frac{\\lambda z \\omega ^{\\prime }(z)}{1+\\lambda \\omega (z)} \\right\\rbrace \\\\&\\ge \\beta - \\text{Re}\\left\\lbrace \\frac{\\lambda z \\omega ^{\\prime }(z)}{1+\\lambda \\omega (z)} \\right\\rbrace .$ In consequence, if we put the condition that $\\omega (z)=\\rho z,$ with $|\\rho |<1,$ we obtain that $\\text{Re}\\left\\lbrace \\frac{\\alpha +1}{2}\\frac{1+z}{1-z}-1-z\\frac{\\varphi _{\\lambda }^{\\prime \\prime }(z)}{\\varphi _{\\lambda }^{\\prime }(z)} \\right\\rbrace \\ge \\beta - \\text{Re}\\left\\lbrace \\frac{\\rho \\lambda z}{1+\\rho \\lambda z} \\right\\rbrace \\ge \\beta -\\frac{|\\rho |}{1-|\\rho |}.$ It follows that $\\varphi _{\\lambda }$ is concave for all $|\\lambda |=1,$ if $|\\rho |<\\beta /(1+\\beta ),$ [7].", "Proposition 2 Let $f=h+\\overline{g}:\\mathbb {D}\\rightarrow \\mathbb {C}$ be a sense-preserving harmonic mapping such that $\\varphi _\\lambda =h+\\lambda g$ is concave for all $|\\lambda |=1$ .", "Then $1\\le \\mu (f)\\le 3/2.$ It only remains to prove that $1\\le \\mu (f).$ A tedious but straightforward calculation yields $A_f(z)=A_{\\varphi _\\lambda }(z)-\\frac{\\lambda +\\overline{\\omega (z)}}{1+\\lambda \\omega (z)}\\frac{(1-|z|^2)\\omega ^{\\prime }(z)}{2(1-|\\omega (z)|^2)},$ for all $z\\in \\mathbb {D}$ and $|\\lambda |=1.$ From the concavity of $\\varphi _{\\lambda }$ we know that $\\mu (\\varphi _{\\lambda })=1,$ so $\\left|A_f(z) \\right|\\ge \\left| A_{\\varphi _\\lambda }(z)\\right| -\\left| \\frac{\\lambda +\\overline{\\omega (z)}}{1+\\lambda \\omega (z)}\\frac{(1-|z|^2)\\omega ^{\\prime }(z)}{2(1-|\\omega (z)|^2)}\\right| \\ge 1- \\frac{1}{2}\\left| \\frac{(1-|z|^2)\\omega ^{\\prime }(z)}{1-|\\omega (z)|^2} \\right|\\ge \\frac{1}{2}.$ On the other hand, (REF ) can be expressed in the form $A_f(z)=A_{\\varphi _\\lambda }(z)-\\sigma (\\lambda )\\frac{(1-|z|^2)\\omega ^{\\prime }(z)}{2(1-|\\omega (z)|^2)},$ for all $z\\in \\mathbb {D}$ and $|\\lambda |=1,$ where $\\sigma $ is the automorphism of $\\mathbb {D}$ defined by $\\sigma (\\zeta )=\\frac{\\zeta +\\overline{\\omega (z)}}{1+\\zeta \\omega (z)}.$ Thus, for each $z\\in \\mathbb {D}$ we can choose $\\lambda ,$ with $|\\lambda |=1,$ such that $ 1\\le \\left| A_{\\varphi _\\lambda }(z) \\right|=\\left| A_f(z) \\right|- \\frac{1}{2}\\left| \\frac{(1-|z|^2)\\omega ^{\\prime }(z)}{(1-|\\omega (z)|^2)} \\right|\\le \\left| A_f(z) \\right|, $ which completes the proof.", 
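As a numerical companion to Example 2 above, the following sketch samples |A_K| for the harmonic Koebe function, using h'(z) = (1+z)/(1-z)^4 and dilatation omega(z) = z, and compares it with the closed form derived there; the random sampling only illustrates that the values stay above 3/2 and approach it near the boundary, in line with mu(K) = 3/2.

```python
import numpy as np

def A_harmonic(z, hp, hpp, w, wp):
    """A_f(z) for f = h + conj(g) with dilatation w = g'/h' (as in the earlier sketch)."""
    t = (1 - abs(z) ** 2) / 2
    return t * hpp(z) / hp(z) - np.conj(z) - t * np.conj(w(z)) * wp(z) / (1 - abs(w(z)) ** 2)

# Harmonic Koebe function: h'(z) = (1+z)/(1-z)^4, dilatation w(z) = z.
hp  = lambda z: (1 + z) / (1 - z) ** 4
hpp = lambda z: (5 + 3 * z) / (1 - z) ** 5
w   = lambda z: z
wp  = lambda z: 1.0 + 0j

closed_form = lambda z: 1.5 + 2 * (1 - abs(z) ** 2) / abs(1 - z ** 2)

rng = np.random.default_rng(1)
pts = (0.999 * np.sqrt(rng.uniform(0, 1, 2000))
       * np.exp(2j * np.pi * rng.uniform(0, 1, 2000)))
vals = np.array([abs(A_harmonic(z, hp, hpp, w, wp)) for z in pts])
print(vals.min())                                                       # stays above 3/2
print(max(abs(vals[i] - closed_form(pts[i])) for i in range(len(pts)))) # matches Example 2
```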
"Next, we present a criterion to establish a positive lower bound of $\\mu (f),$ which is obtained as follows (see Proposition 3.1 in [20] for the analytic case): we suppose that $f=h+\\bar{g}$ is a sense-preserving harmonic mapping in $\\mathbb {D},$ $0\\le \\lambda \\le 1,$ and $\\left|P_f(z)-\\frac{2}{1-z}\\right| \\le \\frac{2\\lambda }{1-|z|^2},\\qquad z\\in \\mathbb {D}.$ Then, from the equality $A_f(z)-\\frac{1-\\overline{z}}{1-z}=\\frac{1-|z|^2}{2}P_f(z)-\\frac{1-|z|^2}{1-z}=\\frac{1-|z|^2}{2}\\left[P_f(z)-\\frac{2}{1-z} \\right],$ it follows that $\\left| A_f(z)\\right| \\ge 1-\\lambda ,$ for all $z\\in \\mathbb {D},$ from which we deduce that $\\mu (f)\\ge 1-\\lambda .$ As a consequence, we have the following example.", "Example 3 Let $f$ be the harmonic mapping defined by $f(z)=\\dfrac{z}{1-z}-\\overline{\\left(\\dfrac{z}{1-z}+\\log (1-z) \\right) }.$ By direct calculation one sees that $P_f(z)=\\frac{2}{1-z}-\\frac{\\overline{z}}{1-|z|^2},$ whence $\\left|P_f(z)-\\frac{2}{1-z} \\right|=\\frac{|z|}{1-|z|^2} \\le \\frac{1}{1-|z|^2}.$ By the discussed above we get that $\\mu (f) \\ge 1/2$ .", "On the other hand, $A_f(z)=\\frac{1-|z|^2}{1-z}-\\frac{\\overline{z}}{2},$ which implies $A_f(x)=1+\\frac{x}{2},$ $-1<x<1,$ and in consequence $\\mu (f)\\le 1/2.$ So, $\\mu (f)=1/2.$ For the following result we need to recall the definition of Schwarzian derivative of a sense-preserving harmonic mapping $f$ in $\\mathbb {D},$ which was introduced in [12] through the expression $S_f=\\partial _zP_f-\\frac{1}{2}(P_f)^2,$ where $P_f$ is given by (REF ).", "In [3] the authors define the family $NH_\\lambda (K),$ $0<\\lambda \\le 1$ and $K>0,$ as the set of all $K$ -quasiconformal sense-preserving harmonic mappings $f$ in $\\mathbb {D}$ such that $|S_f(z)|+\\frac{|\\omega ^{\\prime }(z)|^2}{\\left(1-|\\omega (z)|^2\\right)^2} \\le \\frac{2\\lambda }{\\left(1-|z|^2 \\right)^2 }, \\qquad z\\in \\mathbb {D}.$ In that same paper, the authors proved the following theorem.", "Theorem A.", "Let $\\lambda \\in (0,1)$ and $f\\in NH_\\lambda (K)$ such that $|P_f(0)|<2\\sqrt{1-\\lambda }$ , then $f$ is bounded.", "The condition $|P_f(0)|<2\\sqrt{1-\\lambda }$ is sharp.", "Using this, we easily get the following generalization of Proposition 4.1 in [20].", "Theorem 1 Let $\\lambda \\in (0,1)$ and $f\\in NH_\\lambda (K)$ , $f$ unbounded.", "Then $\\mu (f)\\ge \\sqrt{1-\\lambda }$ .", "Let $z_0\\in \\mathbb {D}$ and $\\sigma (z)=\\dfrac{z+z_0}{1+\\overline{z_0}z}$ an automorphism of $\\mathbb {D}.$ Since $f\\circ \\sigma \\in NH_\\lambda (K)$ and also is unbounded, it follows from Theorem A that $|P_{f\\circ \\sigma }(0)|\\ge 2\\sqrt{1-\\lambda }.$ On the other hand, by direct calculation $|A_f(z_0)|=|P_{f\\circ \\sigma }(0)|/2.$ Thus $|A_f(z_0)|\\ge \\sqrt{1-\\lambda }$ , for $z_0\\in \\mathbb {D}$ arbitrary.", "In order to formulate our next result, we need to introduce, in the context of sense-preserving harmonic mappings, the concept of level sets of the density of the Poincaré metric.", "This concept was introduced by Pommerenke in [20] in the Euclidean case and recently studied, in the hyperbolic and spherical cases, by J. 
Arango et al.", "in [2] and [1], respectively.", "Let $f:\\mathbb {D}\\rightarrow \\mathbb {C}$ be a sense-preserving harmonic mapping.", "We consider the level sets $C_f(t)=\\left\\lbrace z\\in \\mathbb {D}\\mid (1-|z|^2)J_f^{1/2}(z)=t\\right\\rbrace , \\qquad \\text{for}\\quad 0<t<\\infty .$ Following the ideas developed in [2], we assume that $A_f(z)\\ne 0$ in $\\mathbb {D}$ and observe that $\\nabla \\left[(1-|z|^2)J_f^{1/2}(z) \\right]=2\\overline{\\frac{\\partial }{\\partial z}\\left[(1-|z|^2)J_f^{1/2}(z) \\right]}=\\frac{2J_f^{1/2}(z)|A_f(z)|^2}{A_f(z)}.$ From differential equation theory (see [25]), we know that if $\\delta (t),$ $0<t<\\infty $ , is a smooth and positive function, then the initial value problem ${\\left\\lbrace \\begin{array}{ll}z^{\\prime }(t)=\\dfrac{\\delta (t)}{A_f(z(t))} \\\\z(t_0)=z_0\\end{array}\\right.", "}$ has a unique maximal solution near of $z_0$ , which forms an open Jordan arc orthogonal to $C_f(t)$ .", "If we choose $\\delta (t)$ such that $t=(1-|z(t)|^2)J_f^{1/2}(z(t))$ , namely, the parameter of the solution $z(t)$ matches the level of the point, by (REF ) $\\frac{1}{t}=\\frac{d}{dt}\\log \\left[ (1-|z(t)|^2)J_f^{1/2}(z(t))\\right]=2\\operatorname{Re} \\left[\\frac{A_f(z(t))z^{\\prime }(t)}{1-|z|^2} \\right]=\\frac{2\\delta (t)}{1-|z|^2},$ and so $\\delta (t)=\\dfrac{1-|z(t)|^2}{2t}$ .", "From this we can consider instead of (REF ) the next initial value problem ${\\left\\lbrace \\begin{array}{ll}z^{\\prime }(t)=\\dfrac{1-|z(t)|^2}{2tA_f(z(t))} \\\\z(t_0)=z_0.\\end{array}\\right.", "}$ The solutions $z(t)$ , $a<t<b$ , of (REF ) are called the trajectories of $f$ through $z_0$ .", "It is known that for all $z_0\\in \\mathbb {D},$ there is a unique trajectory through $z_0$ that goes from $\\mathbb {T}=\\partial \\mathbb {D}$ to $\\mathbb {T}.$ Using the previous development we will prove the following theorem, whose proof is similar to that of Proposition 5.1 in [20].", "Theorem 2 Let $\\mu (f)>0$ and $a<t_1<t_2<b$ .", "If $z_j=z(t_j),$ $j=1,2,$ are on the trajectory $\\Gamma $ , then $\\frac{(1-|z_2|^2)J_f^{1/2}(z_2)}{(1-|z_1|^2)J_f^{1/2}(z_1)}\\ge \\operatorname{exp} \\left[2\\mu (f) \\rho (z_1,z_2) \\right].$ Here and subsequently, $\\rho (z,w)$ stands for the hyperbolic distance between $z,w\\in \\mathbb {D}.$ By (REF ) and (REF ), $\\log \\frac{t_2}{t_1}=\\int _{t_1}^{t_2}\\frac{1}{t}dt=\\int _{t_1}^{t_2}\\frac{2|A_f(z(t))||z^{\\prime }(t)|}{1-|z(t)|^2}dt,$ from where $\\begin{split}\\log \\frac{t_2}{t_1}&\\ge 2\\mu (f)\\int _{t_1}^{t_2}\\frac{|z^{\\prime }(t)|}{1-|z(t)|^2}dt\\\\&=2\\mu (f)l_h(\\Gamma (z_1,z_2))\\\\&\\ge 2\\mu (f)\\rho (z_1,z_2),\\end{split}$ where $\\Gamma (z_1,z_2)$ is the segment of $\\Gamma $ between $z_1$ and $z_2$ .", "For the following corollary, we recall that a harmonic mapping $f:\\mathbb {D}\\rightarrow \\mathbb {C}$ is said to be a Bloch-type function if $\\sup _{z\\in \\mathbb {D}}(1-|z|^2)\\sqrt{|J_f(z)|\\,}<\\infty .$ For some results on Bloch-type functions, we refer to the reader to [9].", "Corollary 1 Let $f$ be a sense-preserving harmonic mapping in $\\mathbb {D}.$ If $\\mu (f)>0,$ then $f$ is not a Bloch-type function.", "We assume the notation of Theorem REF .", "Since $\\Gamma $ goes from $\\mathbb {T}$ to $\\mathbb {T},$ we can fix $z_1$ and let $|z_2|\\rightarrow 1.$ Then from (REF ) we obtain that $t_2\\rightarrow \\infty $ and consequently $b=\\infty .$ Similarly we can conclude that $a=0,$ whence the trajectory $z(t)$ satisfies $(1-|z(t)|^2)\\sqrt{J_f(z(t))\\,}=t,\\qquad \\qquad 0<t<\\infty .$ Hence, 
$\\sup _{z\\in \\mathbb {D}}(1-|z|^2)\\sqrt{J_f(z)\\,}=\\infty ,$ which ends the proof." ], [ "Linear invariance order for harmonic mappings", "Given a sense-preserving harmonic mapping $f=h+\\bar{g}$ in $\\mathbb {D},$ the upper order (or simply the order) of $f$ is defined by $\\left\\Vert A_f \\right\\Vert :=\\sup _{z\\in \\mathbb {D}}|A_f(z)|.$ Note that by Proposition REF it follows easily that for all affine mapping $L$ and $\\sigma \\in \\text{Aut}\\left(\\mathbb {D}\\right),$ $A_{L\\circ f\\circ \\sigma }(z)= \\frac{\\sigma ^{\\prime }(z)}{|\\sigma ^{\\prime }(z)|}A_f(\\sigma (z)),\\qquad \\text{for all}\\quad z\\in \\mathbb {D},$ which implies $\\left\\Vert A_{L\\circ f\\circ \\sigma }\\right\\Vert =\\left\\Vert A_{f}\\right\\Vert .$ It is remarkable that the order of $f$ coincides with the specified order of the affine and linear invariant family generated by $f$ (the linear-affine hull of $\\left\\lbrace f \\right\\rbrace $ ), which was defined in .", "It also coincides with the strong order studied in [23], see also [14].", "Remark 1 As we have seen, in the setting of harmonic mappings $A_f$ has similar properties to the operator defined by Pommerenke in (REF ), but there are some striking differences as well.", "For example, a classical result in geometric function theory states that for an analytic function $h,$ $\\left\\Vert A_{h}\\right\\Vert =1$ exactly if $h$ is a convex univalent function.", "In the context of harmonic mappings, it is known (see [5]) that if $f$ is a convex sense-preserving harmonic mapping in $\\mathbb {D}$ , then $\\left\\Vert A_f \\right\\Vert \\le 3/2,$ the function $L$ of Example REF shows that the bound is sharp.", "However, the following example shows that in the setting of harmonic mappings, the condition $\\left\\Vert A_{f}\\right\\Vert =3/2$ does not imply that $f$ is convex.", "Example 4 Given $n\\ge 2,$ we consider the harmonic mapping $f(z)=z+\\dfrac{1}{n}\\overline{z}^{n},$ which is univalent in $\\mathbb {D}$ and $f\\left( \\mathbb {D}\\right)$ is not convex [8].", "Moreover $\\left\\Vert A_{f}\\right\\Vert =3/2.$ Indeed, by direct calculation $A_{f}(z)=-\\overline{z}\\left[ 1+\\frac{n-1}{2}\\frac{\\left|z\\right|^{2\\left( n-2\\right) }}{1+\\left|z\\right|^{2}+\\left|z\\right|^{4}+\\cdots +\\left|z\\right|^{2\\left(n-2\\right) }}\\right],$ and, in consequence $\\left|A_{f}(z) \\right|=\\left|z\\right|+\\frac{n-1}{2}\\frac{\\left|z\\right|^{2n-3}}{1+\\left|z\\right|^{2}+\\left|z\\right|^{4}+\\cdots +\\left|z\\right|^{2\\left(n-2\\right) }}.$ Since the function $g(x)=x+\\dfrac{n-1}{2}\\dfrac{x^{2n-3}}{1+x^{2}+\\left(x^{2}\\right) ^{2}+\\cdots +\\left( x^{2}\\right) ^{\\left( n-2\\right) }}$ is increasing on $[0,1]$ , it follows that $\\left\\Vert A_{f}\\right\\Vert =g(1)=3/2.$ At present the order $\\left\\Vert A_f \\right\\Vert $ is only known for very few types of harmonic mappings.", "In the following proposition, the order of a stable harmonic convex mapping is established.", "Let $f=h+\\overline{g}$ be a sense-preserving harmonic mapping in $\\mathbb {D}.$ In [11] the authors define that $f$ is stable harmonic convex (SHC) if all the functions $f_{\\lambda } = h + \\lambda g$ with $|\\lambda | = 1$ are convex in $\\mathbb {D}$ .", "Proposition 3 If $f=h+\\overline{g}$ is a sense-preserving harmonic mapping in $\\mathbb {D}$ , then $\\left\\Vert A_{f}\\right\\Vert \\ge 1.$ Also, if $f$ is SHC in $\\mathbb {D}$ , then $\\left\\Vert A_{f}\\right\\Vert =1.$ For the proof of $\\left\\Vert A_{f}\\right\\Vert \\ge 1,$ see [23].", "The other 
At present the order $\left\Vert A_f \right\Vert $ is known only for a few classes of harmonic mappings. In the following proposition, the order of a stable harmonic convex mapping is established. Let $f=h+\overline{g}$ be a sense-preserving harmonic mapping in $\mathbb{D}.$ In [11] the authors define $f$ to be stable harmonic convex (SHC) if all the functions $f_{\lambda } = h + \lambda g$ with $|\lambda | = 1$ are convex in $\mathbb{D}$.

Proposition 3 If $f=h+\overline{g}$ is a sense-preserving harmonic mapping in $\mathbb{D}$, then $\left\Vert A_{f}\right\Vert \ge 1.$ Also, if $f$ is SHC in $\mathbb{D}$, then $\left\Vert A_{f}\right\Vert =1.$

For the proof of $\left\Vert A_{f}\right\Vert \ge 1,$ see [23]. The other part of the proof can be done by following an argument similar to that used to obtain (REF); however, we provide an alternative proof, which uses the same ideas as the proof of Proposition REF and will be used later in the proof of the next corollary. As in the proof of Proposition REF, we have $A_f(z)+\sigma (\lambda )\frac{(1-|z|^2)\omega ^{\prime }(z)}{2(1-|\omega (z)|^2)}=A_{f_\lambda }(z),$ for all $z\in \mathbb{D}$ and $|\lambda |=1,$ where $\sigma $ is defined as in (REF). Thus, given $z_0\in \mathbb{D},$ we conclude from the condition $\omega (z_0)\in \mathbb{D}$ and the fact that $\sigma $ maps $\partial \mathbb{D}$ onto $\partial \mathbb{D}$ that there is $\lambda :=\lambda (z_0)$ such that $\left|A_f(z_0)+\sigma (\lambda )\frac{(1-|z_0|^2)\omega ^{\prime }(z_0)}{2(1-|\omega (z_0)|^2)}\right|=\left| A_f(z_0) \right|+\left| \frac{(1-|z_0|^2)\omega ^{\prime }(z_0)}{2(1-|\omega (z_0)|^2)} \right|.$ It follows from (REF) and $\left\Vert A_{f_\lambda } \right\Vert \le 1$ that $\left| A_f(z_0) \right|+\left| \frac{(1-|z_0|^2)\omega ^{\prime }(z_0)}{2(1-|\omega (z_0)|^2)} \right|\le 1,$ for all $z_0\in \mathbb{D}.$ Therefore $\left\Vert A_f \right\Vert \le 1,$ which implies $\left\Vert A_f \right\Vert = 1.$

Corollary 2 There is no SHC mapping with dilatation satisfying $\inf _{z\in \mathbb{D}}\left\lbrace \frac{(1-|z|^2)|\omega ^{\prime }(z)|}{1-|\omega (z)|^2} \right\rbrace >0.$ In particular, there is no SHC mapping with dilatation $\omega (z)=z,$ $z\in \mathbb{D}.$

The proof follows immediately from (REF) and the fact that $\left\Vert A_f\right\Vert \ge 1$ for every sense-preserving harmonic mapping $f$ defined in $\mathbb{D}.$

Next, we prove a result which establishes a relation between the order of a sense-preserving harmonic mapping and linearly connected domains. We recall that a domain $\Omega \subset \mathbb{C}$ is linearly connected if there exists a constant $1\le M<\infty $ such that any two points $w_1,w_2\in \Omega $ can be joined by a path $\gamma \subset \Omega $ of length $\ell (\gamma )\le M|w_1-w_2|$. In that case, we say that $\Omega $ is $M$-linearly connected. For piecewise smoothly bounded domains, linear connectivity is equivalent to the boundary having no inward-pointing cusps. It is clear that if $\Omega $ is $M$-linearly connected, then so is $c\Omega =\left\lbrace cz\mid z\in \Omega \right\rbrace $ for all $c\in \mathbb{C}\setminus \lbrace 0\rbrace .$ Moreover, for all $\alpha \in \mathbb{D},$ if $L(z)=z+\alpha \overline{z}$ then $L(\Omega )$ is $b$-linearly connected, where $b=M\frac{1+|\alpha |}{1-|\alpha |}.$
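A standard non-example (added here for illustration, not taken from the references) is the slit disk $\Omega =\mathbb{D}\setminus [0,1)$: for small $\varepsilon >0$ the points $w_{1,2}=\tfrac{1}{2}\pm i\varepsilon $ satisfy $|w_1-w_2|=2\varepsilon ,$ while every path in $\Omega $ joining them must travel around the tip of the slit at the origin and therefore has length bounded below by a positive constant independent of $\varepsilon $; hence no constant $M$ can work. The slit plays the role of an inward-pointing cusp in the characterization mentioned above.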
Pommerenke [21] proved that if $f$ maps $\mathbb{D}$ conformally onto a linearly connected domain, then $\left\Vert A_{f}\right\Vert <2.$ The hyperbolic version of this result was later presented in [4]. In the setting of harmonic mappings we have the following result.

Theorem 3 Let $\Omega \subset \mathbb{C}$ be an $M$-linearly connected domain. There is $0<c<1$ such that if a univalent harmonic mapping $f$ satisfies the conditions $f(\mathbb{D})=\Omega $ and $|\omega (z)|\le c$ for all $z\in \mathbb{D},$ then $\left\Vert A_{f}\right\Vert <2.$

We will prove that any $0<c\le 1/(7+2M)$ satisfies the requirement of the theorem. Under this condition on $c,$ which guarantees that $0<c<1/(1+M),$ it is shown in [6] that $|\omega (z)|\le c$ implies that $h$ is univalent. So, proceeding as in the proof of (REF), we get $|A_f(z)|\le 2$ for all $z\in \mathbb{D}.$ Thus, if we suppose that $\left\Vert A_{f}\right\Vert =2,$ we can choose a sequence $(z_n)\subset \mathbb{D}$ such that $|A_f(z_n)|$ tends to 2 as $n$ tends to infinity. Next, we consider the sequences of functions in $S_H$ and $S_H^0,$ given respectively by $f_n(z)=\frac{f\left(\frac{z+z_n}{1+\overline{z_n}z}\right)-f(z_n) }{(1-|z_n|^2)h^{\prime }(z_n)}=:h_n(z)+\overline{g_n(z)}$ and $F_n(z)=\frac{f_n(z)-\overline{b_1(n)f_n(z)}}{1-|b_1(n)|^2}=:H_n(z)+\overline{G_n(z)},$ where $b_1(n)=g_n^{\prime }(0)=\omega _{f_n}(0)=\omega (z_n).$ From what was discussed above, it can be concluded that $F_n(\mathbb{D})$ is a $b$-linearly connected domain, with $b=2+M.$ Moreover, if $W_n$ and $\omega _n$ denote the dilatations of $F_n$ and $f_n$ respectively, we obtain $W_n(z)=\frac{G_n^{\prime }(z)}{H_n^{\prime }(z)}=\frac{g_n^{\prime }(z)-b_1(n)h_n^{\prime }(z)}{h_n^{\prime }(z)-\overline{b_1(n)}g_n^{\prime }(z)}=\frac{\omega _n(z)-b_1(n)}{1-\overline{b_1(n)}\omega _n(z)}.$ It follows from the condition on $c$ and $\left\Vert \omega \right\Vert \le c$ that $\left\Vert W_n \right\Vert <1/(1+b)$ for all $n.$ We conclude from the remark after Theorem 2 in [6] that $H_n(\mathbb{D})$ is $\rho $-linearly connected, for some constant $\rho >1$ independent of $n.$ On the other hand, since $(H_n)$ is a sequence of conformal mappings in the unit disk with $H_n(0)=0$ and $H_n^{\prime }(0)=1$ for all $n,$ we can suppose, without loss of generality, that $(H_n)$ converges locally uniformly in $\mathbb{D}$ to a univalent function $H:\mathbb{D}\rightarrow \mathbb{C}$ satisfying $H(0)=0$ and $H^{\prime }(0)=1.$ It can be shown that $H(\mathbb{D})$ is also linearly connected. Indeed, given $w,\tilde{w}\in H(\mathbb{D}),$ there are $z,\tilde{z}\in \mathbb{D}$ such that $H(z)=w$ and $H(\tilde{z})=\tilde{w}.$ Thus, for all $n$ there is a curve $\gamma _n\subset \mathbb{D},$ with endpoints $z,\tilde{z},$ satisfying $\ell (H_n(\gamma _n))\le \rho |H_n(z)-H_n(\tilde{z})|.$ It follows from a result of Gehring and Hayman [10] (see also [21]) that there exists a constant $C$, independent of $n$, such that $\ell (H_n(S))\le C |H_n(z)-H_n(\tilde{z})|\quad \text{ for all }\quad n, $ where $S$ is the hyperbolic segment with endpoints $z,\tilde{z}.$ By letting $n\rightarrow \infty ,$ we conclude that $\ell (H(S))\le C |H(z)-H(\tilde{z})|=C |w-\tilde{w}|,$ whence $H(\mathbb{D})$ is a $C$-linearly connected domain. This contradicts the fact that $H$ is a rotation of the Koebe function $k(z)=z/(1-z)^2$, whose image, the plane minus a ray, is not linearly connected; that $H$ is such a rotation is a consequence of $\left| \frac{H_n^{\prime \prime }(0)}{2}\right|=|A_f(z_n)|\rightarrow 2\quad \text{ as }\quad n\rightarrow \infty .$ Thus, we conclude that $\left\Vert A_f\right\Vert <2.$

We finish with some remarks (see Proposition REF and the subsequent remark) about a distortion theorem for harmonic mappings with finite order, which can be found, with some changes in its presentation, in [23]. See also [14], [22], [24] for related results. We present the proof since its argument will be used to analyze the case of equality in (REF) and since the proof of the “only if” part is slightly different from that of [23].

Theorem 4 Let $f=h+\overline{g}$ be a sense-preserving harmonic mapping in $\mathbb{D}$ and $\alpha \ge 0.$ Then $\left|A_{f}\left(z\right) \right|\le \alpha $ for all
$z\\in \\mathbb {D}$ if and only if $\\exp \\left( -2\\alpha \\rho \\left( z_{0},z_{1}\\right) \\right) \\le \\frac{\\left( 1-\\left|z_{1}\\right|^{2}\\right) J_f^{1/2}\\left( z_{1}\\right) }{\\left( 1-\\left|z_{0}\\right|^{2}\\right) J_f^{1/2}\\left( z_{0}\\right) }\\le \\exp \\left( 2\\alpha \\rho \\left( z_{0},z_{1}\\right) \\right),$ for all $z_{0},z_{1}\\in \\mathbb {D}.$ If equality holds in any of these inequalities for $z_0,z_1\\in \\mathbb {D}$ , $z_0 \\ne z_1$ , then $\\left|A_{f}\\left(z\\right) \\right|=\\alpha ,$ for all $z$ in the hyperbolic segment $S:=S(z_0,z_1)$ with extremes $z_0$ and $z_1$ and we get equality in the corresponding side of (REF ) for all $\\widehat{z_0}, \\widehat{z_1}$ in $S$ .", "Let $\\gamma $ be the hyperbolic segment joining $z_{0}$ and $z_{1}$ in $\\mathbb {D}$ and let $z=z\\left( s\\right) ,$ $0\\le s\\le L:=l_{h}\\left(\\gamma \\right),$ be a parametrization of $\\gamma $ by hyperbolic arc length.", "Then $z^{\\prime }=\\left( 1-\\left|z\\right|^{2}\\right) e^{i\\theta \\left( s\\right) },$ from which we get $\\frac{d}{ds}\\log \\left( 1-\\left|z\\right|^{2}\\right) J_f^{1/2} & = -\\frac{2\\mathrm {Re} \\left\\lbrace z^{\\prime }\\overline{z}\\right\\rbrace }{1-\\left|z\\right|^{2}} + \\frac{1}{2}\\frac{2\\mathrm {Re} \\left\\lbrace \\left( h^{\\prime \\prime }\\overline{h^{\\prime }}+g^{\\prime \\prime }\\overline{g^{\\prime }}\\right)z^{\\prime } \\right\\rbrace }{J_f}\\\\& = 2\\mathrm {Re} \\left\\lbrace -\\overline{z}\\frac{ z^{\\prime }}{1-\\left|z\\right|^{2}} + \\frac{1}{2}\\frac{ \\left( h^{\\prime \\prime }\\overline{h^{\\prime }}+g^{\\prime \\prime }\\overline{g^{\\prime }}\\right)z^{\\prime } }{J_f}\\right\\rbrace \\\\& =2\\mathrm {Re} \\left\\lbrace \\left(-\\overline{z}+\\frac{1-|z|^2}{2}P_f \\right)e^{i\\theta (s)} \\right\\rbrace $ and, in consequence, $\\frac{d}{ds}\\log \\left( 1-\\left|z\\right|^{2}\\right) J_f^{1/2}(z) = 2\\mathrm {Re} \\left\\lbrace A_{f}\\left( z\\right) e^{i\\theta (s)}\\right\\rbrace .$ It follows by hypothesis that $-2\\alpha \\le \\frac{d}{ds}\\log \\left( 1-\\left|z\\right|^{2}\\right)J_f^{1/2}\\left( z\\right) \\le 2\\alpha ,$ whence we get, after integration with respect to $s$ in the interval $\\left[ 0,L\\right],$ $-2\\alpha L\\le \\log \\frac{\\left( 1-\\left|z_{1}\\right|^{2}\\right)J_f^{1/2}\\left( z_{1}\\right) }{\\left( 1-\\left|z_{0}\\right|^{2}\\right)J_f^{1/2}\\left( z_{0}\\right) }\\le 2\\alpha L,$ which is equivalent to (REF ).", "For the converse, we set $u(z)=\\log \\left( \\left(1-|z|^2\\right) J_f^{1/2}(z) \\right) $ for all $z\\in \\mathbb {D}.$ For $z_{0}\\in \\mathbb {D}$ arbitrary, we obtain from (REF ) $\\left|u(z) -u( z_{0}) \\right|=\\left|\\log \\frac{\\left( 1-|z|^{2}\\right)J_f^{1/2}(z) }{\\left( 1-|z_{0}|^{2}\\right)J_f^{1/2}(z_{0}) }\\right|\\le 2\\alpha \\rho \\left( z_{0},z\\right)$ and therefore, $\\dfrac{\\left|u\\left( z\\right) -u\\left( z_{0}\\right) \\right|}{\\left|z-z_{0}\\right|}\\le 2\\alpha \\dfrac{\\rho \\left(z_{0},z\\right) }{\\left|z-z_{0}\\right|}.$ Now let $z$ approach $z_{0}$ in the direction of maximum growth of $u$ at $z_{0}$ , to get $2\\left|\\frac{\\partial u}{\\partial z}(z_0) \\right|=\\left|\\nabla u\\left( z_{0}\\right) \\right|=\\lim _{z\\rightarrow z_{0}}\\dfrac{\\left|u\\left( z\\right) -u\\left(z_{0}\\right) \\right|}{\\left|z-z_{0}\\right|}\\le 2\\alpha \\lambda _{\\mathbb {D}}\\left(z_{0}\\right),$ where $\\lambda _{\\mathbb {D}}$ denotes the Poincaré density of the unit disk.", "It follows from here and from (REF ) that $|A_{f}(z_0)|\\le 
Next we consider the case of equality. Without loss of generality, we assume that equality holds on the right-hand side of (REF) at $z_0,z_1$, $z_0 \ne z_1$. That is, $\log \frac{\left( 1-\left|z_{1}\right|^{2}\right) J_f^{1/2}\left( z_{1}\right) }{\left( 1-\left|z_{0}\right|^{2}\right) J_f^{1/2}\left( z_{0}\right) }= 2\alpha L,$ which implies $ \int _0^L \frac{d}{ds}\log \left[\left( 1-\left|z(s)\right|^{2}\right) J_f^{1/2}(z(s))\right] ds= 2\alpha L. $ From here and $\frac{d}{ds}\log \left[\left( 1-\left|z(s)\right|^{2}\right) J_f^{1/2}(z(s))\right]\le 2\alpha , \;\;\; \text{for all} \;z\in S, $ we obtain from (REF) that $2\mathrm{Re} \left\lbrace A_{f}\left( z\right) e^{i\theta (s)}\right\rbrace =\frac{d}{ds}\log \left[\left( 1-\left|z(s)\right|^{2}\right) J_f^{1/2}(z(s))\right] = 2\alpha , \;\;\; \text{for all} \;z\in S.$ It follows that $|A_f(z)|=\alpha $ for all $z\in S.$ The same integration argument allows us to conclude from (REF) that $\log \frac{\left( 1-\left|\widehat{z_1}\right|^{2}\right) J_f^{1/2}\left(\widehat{z_1}\right) }{\left( 1-\left|\widehat{z_0}\right|^{2}\right) J_f^{1/2}\left( \widehat{z_0}\right) }= 2\alpha \rho (\widehat{z_0}, \widehat{z_1}),$ for all $\widehat{z_0}, \widehat{z_1}$ in $S$.

Remark 2 If we take $z_0=0$, then the inequality (REF) becomes $\frac{\left(1-|z|\right)^{2\alpha -2}}{\left(1+|z|\right)^{2\alpha +2}}\le \frac{J_{f}(z)}{J_{f}(0)} \le \frac{\left(1+|z|\right)^{2\alpha -2}}{\left(1-|z|\right)^{2\alpha +2}},$ for all $z\in \mathbb{D},$ which coincides with (2.1) in [23]. In that same paper, the authors mention that equality is achieved by a certain affine mapping $f_{\alpha }$ of the function $k_{\alpha }(z)=\frac{1}{2\alpha }\left[ \left( \frac{1+z}{1-z}\right)^{\alpha }-1 \right],\qquad z\in \mathbb{D},$ which follows by direct calculations.
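For the reader's convenience we spell out how the inequality in Theorem 4 specializes when $z_0=0$ (a short derivation added here; it uses the normalization $\rho (0,z)=\tfrac{1}{2}\log \tfrac{1+|z|}{1-|z|}$ of the hyperbolic distance, the one compatible with the arc length parametrization $z^{\prime }=(1-|z|^2)e^{i\theta (s)}$ employed in the proof). The right-hand inequality with $z_1=z$ gives $(1-|z|^2)J_f^{1/2}(z)\le J_f^{1/2}(0)\left( \frac{1+|z|}{1-|z|}\right)^{\alpha },$ so that $\frac{J_f(z)}{J_f(0)}\le \frac{1}{(1-|z|^2)^2}\left( \frac{1+|z|}{1-|z|}\right)^{2\alpha }=\frac{(1+|z|)^{2\alpha -2}}{(1-|z|)^{2\alpha +2}},$ and the lower bound is obtained in the same way from the left-hand inequality.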
}\\left(z\\right) \\right|\\le \\alpha ,$ $\\omega _{\\varphi }(z)=\\omega _f(e^{i\\theta }z),$ and $J_{\\varphi }(z)=J_{f}(e^{i\\theta }z)$ for all $z\\in \\mathbb {D}$ .", "Then, without loss of generality, we can assume that $J_{f}(r)=\\frac{\\left(1+r\\right)^{2\\alpha -2}}{\\left(1-r\\right)^{2\\alpha +2}}$ and either $\\omega (t)\\in \\mathbb {R},$ for $0\\le t<r,$ or $\\omega $ is constant.", "So, proceeding as in the proof of (REF ), we find that $\\text{Re}\\left\\lbrace A_f(t) \\right\\rbrace =\\alpha ,$ for all $0\\le t\\le r.$ Consequently, the hypothesis $\\left\\Vert A_f \\right\\Vert \\le \\alpha $ implies $A_f(t)=\\alpha ,$ for all $0\\le t\\le r,$ or equivalently, $ A_h(t)-\\frac{1-t^2}{2}\\frac{\\overline{\\omega (t)}\\omega ^{\\prime }(t)}{1-|\\omega (t)|^2}=\\alpha ,\\qquad 0\\le t\\le r. $ Then it follows from the conditions on $\\omega $ and (REF ) that $ \\frac{1-t^2}{2}\\frac{h^{\\prime \\prime }(t)}{h^{\\prime }(t)}-t-\\frac{1-t^2}{2}\\frac{\\omega (t)\\omega ^{\\prime }(t)}{1-\\omega ^2(t)}=\\alpha ,\\qquad 0\\le t\\le r.$ Hence, by analytic continuation we obtain that $ \\frac{1-z^2}{2}\\frac{h^{\\prime \\prime }(z)}{h^{\\prime }(z)}-z-\\frac{1-z^2}{2}\\frac{\\omega (z)\\omega ^{\\prime }(z)}{1-\\omega ^2(z)}=\\alpha ,\\qquad z\\in \\mathbb {D},$ which proves $(i).$ Remark 3 The above reasoning leads us to the following conclusions: (a) If $\\omega $ is constant and $f=h+\\overline{g}$ satisfies equality on the right hand side of (REF ) at some $z_0\\in (0,1),$ then by (REF ), $\\frac{h^{\\prime \\prime }(z)}{h^{\\prime }(z)}=2\\frac{\\alpha +z}{1-z^2}=2\\left[ \\frac{\\alpha +1}{2}\\frac{1}{1-z}+\\frac{\\alpha -1}{2}\\frac{1}{1+z}\\right], $ from where, assuming that $h^{\\prime }(0)=1$ , $\\log h^{\\prime }(z)=(\\alpha -1)\\log (1+z)-(\\alpha +1)\\log (1-z).$ Therefore, $h^{\\prime }(z)=\\frac{(1+z)^{\\alpha -1}}{(1-z)^{\\alpha +1}}.$ Now, if we suppose that $h(0)=0,$ we easily see that $h(z)=\\frac{1}{2\\alpha }\\left[\\left( \\frac{1+z}{1-z}\\right)^\\alpha -1 \\right]=:k_\\alpha (z).$ Thus, the harmonic mapping $f=h+\\overline{g},$ with constant dilatation and normalized under the conditions $h(0)=g(0)=0$ and $h^{\\prime }(0)=1,$ satisfying the equality on the right hand side of (REF ) at some $z_0\\in (0,1),$ is given by $f(z)=k_\\alpha (z)+\\overline{\\omega k_\\alpha (z)}.$ Moreover, we can see in general that if $f$ satisfies equality on the right hand side of (REF ) at some $z_0\\in \\mathbb {D}\\setminus \\left\\lbrace 0\\right\\rbrace $ , then $f(z)=k_{\\alpha ,\\theta }(z)+\\overline{\\omega k_{\\alpha ,\\theta }(z)},$ where $k_{\\alpha ,\\theta }$ is some rotation of $k_\\alpha $ , with $\\theta $ depending on $z_0.$ (b) The same conclusion as in (a) is obtained if we assume that $f,$ with the above properties, satisfies equality on the left hand side of (REF ) at some $z_0\\in \\mathbb {D}.$ (c) Let $L$ be as in Example REF .", "Then, a straightforward calculation gives $\\omega _L(z):=\\omega (z)=-z\\qquad \\text{and}\\qquad J_L(z)=\\frac{1-|z|^2}{|1-z|^6},\\quad z\\in \\mathbb {D}.$ Therefore, with $\\alpha =3/2,$ $L$ satisfies equality on the left hand side of (REF ), for all $z\\in (-1,0).$ We prove that if a sense-preserving harmonic mapping $f=h+\\overline{g},$ with $h(0)=g(0)=0,$ $h^{\\prime }(0)=1,$ and $\\omega (z)=-z,$ satisfies equality on the left hand side of (REF ) at some $z_0\\in (-1,0),$ then $f=L.$ Indeed, if such $f$ exists, we must have by (REF ) that $\\frac{h^{\\prime \\prime }(z)}{h^{\\prime 
}(z)}=\\frac{z}{1-z^2}+2\\frac{\\alpha +z}{1-z^2},$ where we have used the condition $\\theta =\\pi .$ From here, if $\\alpha =3/2,$ we obtain by integration and $g^{\\prime }=\\omega h^{\\prime }$ that $h^{\\prime }(z)=\\frac{1}{(1-z)^3}\\qquad \\text{ and }\\qquad g^{\\prime }(z)=-\\frac{z}{(1-z)^3},$ whence $f(z)=L(z)$ for all $z\\in \\mathbb {D}.$" ] ]